
Every time a new framework or tool comes out, everybody jumps on it, and I always wonder if anybody realizes that they are trading one set of problems and work for another.

As engineers we really need to stop supporting these sorts of efforts and take the time to help each other become better engineers who write and maintain our own code. We need to promote learning and mastering the underlying concepts that things like Kubernetes try to hide and shield engineers from.

In most cases, tools like Kubernetes are so vast and sprawling that they become a solution looking for many problems.

It is also curious how, once Kubernetes became big, so many small shops suddenly needed "Google"-level orchestration to manage a handful of systems. And how eagerly people ripped their software stacks apart into many, many microservices just to increase the container count.

I think if most engineers took a step back, said "I don't know", and took some time to truly understand the requirements of the project they are working on, they would find a truly elegant and maintainable solution that did not require bending the problem to fit a given solution.

Every tool and library / dependency you add to your solution is only adding more code and complications that you will not be an expert in, and one day you will find yourself at the whim of the provider.

Far too often we include tens of thousands of lines of somebody else's code, all in place of a handful of lines that somebody could have implemented and owned themselves, had they had the confidence, and the support of other engineers around them, to truly understand the problem domain.

The general trend I see as I get older is that we value being first to a solution rather than finding a more correct one, only to be stuck with a solution that requires constant care and workarounds.

So I plead to all engineers, developers, programmers, or whatever you call yourself: please stop and take a moment to think hard about how you would solve any given problem without the use of external code first. Then compare your solution to the off-the-shelf "solution looking for a problem". You might surprise yourself.

I would also like to point out that if, when solving a problem, your solution looks like a shopping list of third-party tools, libraries, and services, you might not fully understand the problem domain.

-- sorry for the rant --




I don't think your comment applies at all to Kubernetes.

K8s truly simplifies DevOps, and even the smallest team and website can greatly benefit from it.

I speak from 7 years of experience managing my company's website infrastructure:

Before Kubernetes, I ran my company's stack on very cheap bare metal from OVH. It was great while it lasted, but as the company and my team grew, it became harder and harder to maintain this infrastructure. And I'm not only talking about our production servers. In reality you have to maintain your prod, your staging, and, worst of all, your local development environment.

Over time, your production, staging, and dev environments all become entirely heterogeneous. Each environment ends up running totally different/incompatible versions of your stack's software, and no amount of Ansible/SaltStack/Puppet scripting will save you from this. All those scripts become a nightmare to maintain, and your infrastructure as a whole becomes brittle, with, for example, bugs happening only in production but never on your local dev environment.

K8s came as a savior to all my issues: I burned all my old Ansible scripts and rewrote all my infrastructure in k8s. Now my prod, staging, and dev environments are 99% the same. It saves me a tremendous amount of time and headaches. I taught my team how to use minikube and create a replica of our production with one command line on their local computers.
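To make the parity claim concrete, here is a minimal sketch of the kind of manifest that can be applied unchanged to prod, staging, or a local minikube cluster (the names and image reference are hypothetical, not this commenter's actual setup):

```yaml
# Hypothetical Deployment; `kubectl apply -f web.yaml` works the same
# against a cloud cluster or a local minikube VM.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.4.2  # same pinned image in every env
```

The environment parity comes from the fact that every environment runs the same pinned image, not a host converged by scripts.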

K8s is far from being just a trendy, buzzwordy, shiny new toy to play with. It solves real-world problems that DevOps teams have. I am so glad this technology exists, and I hope to never have to go back to writing Ansible/Puppet/whatever scripts.


> Over time, your production staging and dev all become entirely heterogeneous.

I've found that this is simply the reality, excepting test-only environments. Developers need to be able to run the applications out of an IDE, compiled with special instrumentation, or whatever else they need to do. It's not possible to support every combination of what they need.

Also, there's some significant developer overhead sometimes. I had another team give me a set of VMs for their product, so I could test a new config. It was great in that it was just like production, but not so great in that it was just like production. How do I log into these VMs? How does authentication work? Where are things located on the VMs? How do I generate an updated VM? You end up having to train everyone in DevOps, because you've handed them a copy of production.


I hadn't considered the benefits for dev/local instances...

I've known about Kubernetes for some time, but my current job never deploys anything that Kubernetes could improve, so I put it aside and hoped to someday get a chance to toy with it.

I've set up Vagrant images pre-loaded with our app for several non-developers to use locally, but it sounds like Kubernetes would be a far better way to manage those, as well as staging servers.

Unfortunately, my current company is 100% against third-party hosting/involvement, so I couldn't even use it for staging. Our staging servers are Windows-based, ancient, and internal...


For me (small startup) this is the killer feature of k8s. I'm not operating at a scale where "cluster scheduling" is a thing I need to care about, though self-healing and load-balanced services are nice.

To be able to stand up an exact copy of my application on a dev machine, or even better in a review app per-branch (complete with DNS entry and TLS cert) is incredibly valuable. You can run through essentially all of the deploy pipeline before even merging, including SSL config tests etc.


In addition to the dev benefits, there is also a built in cloud scaling story and strategy.

It's not like every app needs that kind of robustness, but there is a certain calming security in knowing that if any part of your Kubernetes deployed app actually needs to go "web scale", or someone asks about 100 times the users you have ever considered, that the answer is straightforward and reasonably pre-configured.


Kubernetes can't run the local images for your devs (Kubernetes must operate over a cluster; minikube spins up a virtual one). You're just thinking of containers.

I saw another comment that did this just a minute ago. Is k8s-hysteria getting so out of control that it's consuming Docker whole? Seems like it may be.

----- EDITING IN RESPONSE TO GOUGGOUG -----

HN's limitation on my post rate has kicked in. Response to gouggoug follows this note, since I'm not allowed to post it.

This rate limit was originally installed to make me stop speaking against the kool-aid Kubernetes hivemind, and now it's fulfilling its purpose quite well. See this thread for the original infraction: https://news.ycombinator.com/item?id=14453705 . After the fact, dang has justified the rate limit by saying I was engaging in a flame war. Read the offending thread and judge for yourselves.

Remember, YC doesn't want you to ask if you need Kubernetes or not. They just want you to use it. If someone on HN says otherwise too frequently, they'll rate limit that person's account, as they've done to mine.

Doesn't matter if you have 10 years of history on the site. Doesn't matter if you have 10k+ karma. Only matters that you're counteracting the narrative that Google is paying a lot of money to push.

No matter how frilly and new-age someone makes themselves out to be, people only have so much tolerance for argument when there's money, power, and prestige on the line. HN is no exception. There's an inverse correlation between the credibility of counter-arguments and the urgency of the situation; crazy stuff won't get much retaliatory fire because most people can tell it's crazy, but non-crazy stuff that counteracts their goals will be pushed down, because most people can tell it's not crazy.

----- BEGIN RESPONSE TO GOUGGOUG -----

He says that he wants Kubernetes to replace a local Vagrant image. Kubernetes doesn't replace Vagrant. To replace Vagrant, he would want Docker, rkt, etc., not Kubernetes. Kubernetes solves a different problem. Yet he says that he wants to try Kubernetes to fix the problem that Kubernetes doesn't fix.

k8s and Vagrant address wholly separate concerns (where to run things versus how to run things). The poster I replied to is conflating Kubernetes with Docker, the underlying container runtime that does the actual execution.

> Maybe some less advanced users using k8s don't realize that it heavily uses docker (or rkt, or whatever container runtime you could think of), but how is that an issue?

How is it not an issue? Is it OK for developers to not know the difference between a compiler and an IDE now? A web server and a browser? A computer case and a CPU? A network card and a modem? These things are not mere details, even if they are often used together. Technical professionals who can't differentiate between these aren't just "less advanced users", they're posers.

k8s is a huge chunk of crap to throw in between you and your applications. One should, at the very least, have an accurate high-level idea of what it does before they go around telling everyone that they need it.


I'm not sure what your comment is about. I'll skip over the first 3 sentences, since I'm not sure at all what you are trying to say, and jump directly to the fourth one:

> Is k8s-hysteria getting so out of control that it's consuming Docker whole? Seems like it may be.

This confuses me the most. K8s and Docker are complementary technologies, not in opposition to each other. Maybe some less advanced users using k8s don't realize that it heavily uses docker (or rkt, or whatever container runtime you could think of), but how is that an issue? That doesn't mean there's a k8s-hysteria going on.


> even the smallest team and website can greatly benefit from it

I don't think that's true. You have to be at least large enough to justify reasonable investment in microservices.


True, but the investment pays off at about the 4th executable running.


> I don't think your comment applies at all to Kubernetes.

It does. I've deployed and maintained our company's infrastructure on Kubernetes for over a year now.

> K8s came as a savior to all my issues: I burned all my old ansible scripts and rewrote all my infrastructure in k8s.

This doesn't make any sense.

Ansible scripts out changes to make to a system. Kubernetes deals only with opaque images and does not change them at all, it simply runs them.

>K8s is far from being just a trendy buzzwordy shiny new cool toy to play with. It solves real world problems that dev ops have. I am so glad this technology exists and I hope to never have to go back to writing ansible/puppet/whatever scripts.

I'm not really sure how to reply to this without being accused of bad faith, but I'll just reiterate again, your post does not make sense because it is talking about tools that do different things. It's like saying you love hammers and hope to never use wrenches again.

That said, Kubernetes is a shiny buzzword that is much overvalued.


> Ansible scripts out changes to make to a system. Kubernetes deals only with opaque images and does not change them at all, it simply runs them.

You are exactly right.

Ansible makes the best effort possible to bring your system to a given state (the state you coded in Ansible). All those tools (puppet/salt/ansible) do this exact same thing, and they all manage to do it more or less well.

However, the keyword here is "best effort". That is, it is in practice really hard to consistently bring a system from a random state to a given state X, because of the randomness of your starting state.

Kubernetes doesn't do that, it just manages the lifecycle of your state and makes sure that things run. As an added bonus, it allows you to inject some external configuration to your system, and some other "cool stuff".

You build your state in terms of container images, which, once they've been built, are as a matter of fact set in stone. You then instruct k8s to run all those images.

That, to me, is much more powerful than "scripting out changes to a system".

My scripts are always unreliable and run inconsistently because I make mistakes. In contrast, my container images always run the same way, whether they're scheduled on my dev machine running macOS, on my GCE cluster, or on my Microsoft Azure nodes.
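The contrast can be sketched with a hypothetical Dockerfile: instead of a playbook repeatedly converging a live host toward a desired state, the state is baked once at build time, and every environment then runs that exact artifact (the package choice here is purely illustrative):

```dockerfile
# Built once; every environment runs this exact filesystem afterwards.
FROM debian:9
RUN apt-get update \
    && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]
```

A convergence script run against a drifted host can fail in ways this build cannot, because the build always starts from the same pinned base image.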

> your post does not make sense because it is talking about tools that do different things. It's like saying you love hammers and hope to never use wrenches again.

Those tools, k8s and Ansible/Puppet/etc., set out to do the _same thing_. That's the nuance here.

They all set out to bring your infrastructure to the state you programmed.

It so happens that k8s (and more precisely container images) is much better at keeping a consistent state, for obvious reasons (container images are state).

K8s is just the cherry on the "container technology cake". It schedules things and makes sure they run, for you.

You weren't too far off with your hammer analogy, but it's more like: Ansible is a manual saw that gives me many blisters; k8s is a chainsaw.


> However, the keyword here is "best effort". That is, it is in practice really hard to consistently bring a system from a random state to a given state X, because of the randomness of your starting state.

This is the logic that BOSH follows. Why try to converge to a desired state if you can just recreate it from a known-good state?

Why do you care about grooming individual servers when your goal is a distributed system?

Configuration management tools make the classical sysadmin's life much easier. "I have a small group of giant expensive machines to run". The constant struggle of impinging chaos on the systems that Must Not Stop Ever.


You still have to define the state to solidify, whether that state is represented by a Docker image or not. That means you still have to script the system before you can crystallize it.

Any mistakes you make are still there, so the fact that your scripts are mistake-prone doesn't change anything either way. k8s doesn't help you there, it just adds a nice thick new layer of stuff to break.

Snapshotting state in opaque binary "images" via Docker layers/images is a different thing than constructing that state. You can't take a picture of something that doesn't exist. You can, and possibly should, use Docker and appropriate configuration management mechanisms together, but you definitely shouldn't pretend that they address the same issues.

Your systems scripted with configuration management will deploy the same way on any VM, local or cloud, and in Docker. They all start from a base image, whether it's a VDI, an AMI, a Docker image, or whatever. It is true that Docker makes it faster to load different states than the other systems, but this has nothing to do with configuration management's role, and it also doesn't come for free; there are tradeoffs to consider here, as in anything else.

There are several on this forum who feel that their paychecks depend on the widespread adoption of k8s and/or Docker, and at least some have the ears of the moderators. I'd say at this point, you've revealed enough of your mindset, experience, and intention to make it clear that there is no point going further into this, so let's leave it here and move on. Especially since I'm not going to be allowed to reply, because YC specifically and intentionally doesn't allow k8s skeptics to post very much. Why is that an issue that HN has to create an artificial consensus around? Hmm...


I agreed with you until the conspiracy theory. Sort of a self-fulfilling thing, that.


There was no theory until my account was rate-limited for posts suggesting that people not use databases on Kubernetes. These were ruled too "tedious" by dang; they rate-limited my account and marked my post at [0] as off-topic. Maybe I just missed the part of the guidelines that said not to be too dry when discussing container orchestration.

It's not really self-fulfilling if it's already happened; I think that's just called "information about an event".

I think my view that humans are imperfect (yes, even the super-fancy humans at YC, who do have a horse in this race as investors in Docker Inc.) and will censure things they dislike is plenty justifiable given the events. That's especially the case if these people are being pressured to take that action by other high-status individuals.

For the record, dang has explicitly disclaimed my theory, rather suggesting that asking people not to run their database in Kubernetes constitutes a flame war. I don't have the link to that handy but I'm sure it wouldn't be hard to find for an interested party.

The reader is free to ascribe motives as they see fit.

[0] https://news.ycombinator.com/item?id=14453705


The thing about k8s is that it sort of forces you into best practices: immutable images are built and run, instead of Ansible or config management duct-taping them together. So he replaced the Ansible playbooks that build his apps with Dockerfiles.

How are you deploying and maintaining your k8s clusters? There is some bootstrapping for nodes, but k8s is distro- and cloud-agnostic... stateless Golang binaries plus etcd. It seems like you should be singing its praises on the provisioning front?

Funnily enough, there has been some work on "self-hosted Kubernetes", where even the components themselves run inside k8s. Pretty cool; it will probably be the future of cluster bootstrapping: https://github.com/kubernetes-incubator/bootkube

I'm curious about your reasons for saying k8s is overvalued. You are the first person I have seen who has worked with it and come away with this opinion!


> So he replaced his ansible playbooks that build his apps with Dockerfiles

Docker is not part of Kubernetes. This is what I was talking about. The benefits of Docker/containers are not inherent benefits of Kubernetes. It is important that we attribute things to the correct platform.

For scripting the system, Dockerfiles are obviously inadequate, that shouldn't take much explanation. I discussed this more at [0]. Ansible (and I assume other config management tools) can be used to invoke image building. [1]
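For instance, the docker_image module linked at [1] lets a playbook drive the image build, roughly like this (the names and path are hypothetical, and the module's options vary across Ansible versions):

```yaml
# Hypothetical task: configuration management constructs the state,
# then Docker snapshots it into an immutable image.
- name: Build the application image from its Dockerfile
  docker_image:
    name: myapp
    tag: "1.0"
    path: /srv/myapp   # directory containing the Dockerfile
    state: present
```

This is the "use them together" point: config management orchestrates the build, Docker captures the result.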

You can use and benefit from containers without Kubernetes. Kubernetes is about running software over a cluster of anonymous hardware resources. Some applications are well-suited to that, and some aren't.

> I'm curious for your reasons for saying k8s is overvalued. You are the first person I have seen that has worked with it and came away with this opinion!

First, being overvalued doesn't mean it holds no value. There are use cases for which Kubernetes is well-suited, but it is just not a good solution for many types of applications, and people are (rather dangerously) diving in to this system head-first without taking a step back to appreciate its implications.

It's very similar to the way in which everyone pounced on Mongo and ended up regretting it when they had cause to ask, "Wait, what's a transaction?" [2]

In short, efficient use of Kubernetes requires software without masters, controllers, or other stateful components. It needs software that can be vaporized and rematerialized on command and continue humming along happy and safe. While it's true that, in theory, many services have had this as a requirement given the elastic behaviors of web applications, there are still frequently manual steps and/or performance problems associated with bringing nodes up or down.

Very little software has been written in a truly stateless manner, and some things will never really fully assimilate that model, because it doesn't make any sense for them to do so (databases).

New things that are greenfield and written specifically for deployment on Kubernetes shouldn't have this problem (though I would expect many of them do), but that means that Kubernetes is not going to be much of a benefit for the vast majority of existing applications, which is what people are expecting to get from it. k8s would be much less popular if "you need to redevelop a lot of your code to really take advantage of most of these features". Thus, it's overvalued.

Again, it looks like people are starting to conflate the underlying container runtimes with Kubernetes. Note that the orchestration and scheduling benefits of Kubernetes are separate from potential benefits gained from containerization.

This once again shows that the group that controls the user interface controls the platform, and ironically, it's why k8s won't be what Google hoped; now that Azure and Amazon are offering k8s-as-a-service, people will remain glued to the Azure and Amazon interface, feel pride in their acquisition of a new buzzword, and Google will have gained no significant market share.

> You are the first person I have seen that has worked with it and came away with this opinion!

Yeah, I wonder if this is truly the case, or if others are just more prudent than me and don't want to say things that will make others dismiss them as philistines. :P

[0] https://news.ycombinator.com/item?id=16240500

[1] http://docs.ansible.com/ansible/latest/docker_image_module.h...

[2] http://hackingdistributed.com/2014/04/06/another-one-bites-t...


Thanks for your feedback, you do have some good points.

> Docker is not part of Kubernetes. This is what I was talking about.

Yes, perhaps I should have said Dockerfiles + k8s Manifests

> In short, efficient use of Kubernetes requires software without masters, controllers, or other stateful components.

StatefulSets + Persistent volumes solve this quite well: https://kubernetes.io/docs/concepts/workloads/controllers/st...
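For reference, a StatefulSet gives each replica a stable identity and its own persistent volume. A minimal sketch (the image, names, and sizes are illustrative) looks something like:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:10
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:   # each pod gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Whether this fully answers the parent's objection about stateful software is debatable, but it is the mechanism k8s offers for it.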

K8s is becoming (if not already) the default cloud native "OS". Write a manifest for your application and people can easily run it anywhere. The big clouds offering K8S as a service only improve this!


> and no amount of Ansible/SaltStack/Puppet script will save you from this.

And I am afraid you missed my point entirely if you are bringing up yet more tools and frameworks.

Owning your stack from top to bottom, with very few exceptions, is what I am suggesting.

I too speak from experience, and have a few years on you -- not that any of that matters.

If your environments diverged, then you did not own your stack properly. It is within every engineer's ability to build out a stack that runs the same in each environment. It just means understanding that problem domain and taking the time to write the code that fits.

Far too many times have I smacked the hand of a coworker suggesting any of these baling-wire-and-duct-tape solutions you just mentioned. The end result is that we have a wonderful system that we own and can ensure meets our needs. The same binaries, configurations, and images are pushed from alpha to staging and finally to production, with full control of every artifact from the start of the pipeline.


I'm curious. Could you describe the kind of infrastructure you manage and the tools you use? Are all your tools written in-house? If so, what will happen when the author leaves the company?


> -- sorry for the rant --

It's not unreasonable, but there is a gradient. Linux was a toy; now it's probably involved in handling more real commerce than any other software ever written. JavaScript-land was a riot of colour and decay, but basically React and Webpack have won the dominant position.

Evolutionary explosions don't last forever. Deciding to not pick winners is smart early on. But once the ecosystem settles down and there are clear winners, it's time to accept the new normal.

If I sit down to write a web app, I do not first create my own programming language. I do not write an OS. I do not develop a database, or a web server, or a transport-layer protocol. I take the ones that exist off the shelf and use those.

This is only possible because some options are so dominant that they have driven nearly everything out. By doing so they attract to themselves the overwhelming share of effort and attention. They grow a rich cast of supporting systems, they get bug fixes sooner, they serve as the jumping-off points for the most exciting new possibilities.

We are past the stage where "I will write my own cloud platform" is a defensible position. I believe we passed it about 2 years ago, actually.


>We need to promote learning and mastering the underlying concepts

Should we do the same thing for programming languages? Make sure people are learning and mastering assembly?

>Only to be stuck with a solution that requires constant care and work around.

I've seen this attitude from people who didn't want to use Ansible/Chef/Puppet because their shell scripts were "good enough". As if the shell scripts didn't require constant care...


You should master any subject you were hired to do work in.

I find it hard to equate deploying and maintaining your production environments to assembly.


> I've seen this attitude from people who didn't want to use ansible/chef/puppet because their shell scripts were "good enough".

That attitude is also quite myopic.

Not to condescend, but if one isn't working on cloud-native apps in federated environments, then maybe, maybe, maybe one should be intellectually humble enough to allow that some people face challenges that make "good enough" a non-starter or prohibitively expensive, or that require competing directly with AWS/MS/Google in their core competencies.

Everything is a spectrum. Serious work can be done on a single server with no backups... shell scripts can carry serious work far... Ansible or Docker Swarm rock for what they are... but the multi-cloud orchestration platform we're discussing rocks at oodles of things those solutions aren't.


Well, it is a good thing I did not suggest shell scripts, Ansible, or Puppet. I suggested owning your stack, which means writing code and tools to suit your needs.

Deployment and upgrades should be built in, as in part of your product, not some afterthought.

It's odd how the two replies I have read have jumped to conclusions about what one might use if one were not using Kubernetes.


Perhaps if you shared some more concrete detail about what it means to "own the stack" people would be less inclined to fill in the blank in a way you hadn't intended to communicate?


> Deployment and upgrades should be built in - as in it is part of your product. Not some afterthought.

Kubernetes is one of several products that offer environment agnostic mechanisms to handle deployment and upgrades.

If you're dealing with hundreds of 'products', or a mountain of microservices, or large batch job scheduling, putting the lot of it on Kubernetes is addressing deploying, upgrading, monitoring, securing, discovering, and many other things up front in a shared, consistent, and portable manner.

A forethought, not an afterthought...

I agree that owning your core systems is valuable. Focusing on core IT competencies and business value creation are more valuable, though, and if you're not dealing with multi-cloud-platform resource management I can't imagine how to justify writing tools that compete with established, mature, industry standard solutions. You're always better to focus on your unique advantages and competencies... If you're ok using an RDBMS system in your stack, then using a scheduling/upgrading framework should be pretty ok, too.


> I think if most engineers took a step back and said "I don't know" and took some time to truly understand the requirements of the project they are working on they would find a truly elegant and maintainable solution that did not require tweaking your problem to fit a given solution.

> The general trend I see as I get older is that we are valuing the first to a solution rather than a more correct solution. Only to be stuck with a solution that requires constant care and work around.

This, 1000 times over. Right now I'm trying to figure out how to turn a project around that started without me and has gone down the path of using an inadequate (but mildly popular) open source tool instead of building something that's actually designed to solve our problem, not someone else's.


K8s is different though.

It's an incarnation of a highly effective and efficient infrastructure paradigm, verified by more than a decade of serving all of Google's compute tasks.

It's not really new.

Disclaimer: I work in Google's compute infrastructure team.


There is clearly a lot of unfinished tooling and features, as evidenced by the extremely high commit rate.

Compare that to an old piece of infrastructure like "mount" or "chroot".

Is it too much to ask for base infrastructure to be boring?

Granted, I have nothing against the Kubernetes project, and it might be fantastic. I just don't think it's wise to encourage people to use/learn it at this point, at least for production projects.


Everything is anything to a certain degree.

As a core compute system, k8s is a better candidate for everyone to learn than anything else in its category on earth.



