
having repeatable infrastructure from day 1 is great. kubernetes is the simplest way to get that. it's not the only way, but it's provider agnostic, has a lot of well maintained and well understood tooling around it, and cleanly separates artifacts from deployment (i.e. no scripts configuring stuff after startup, everything is packaged and frozen when you build and upload the container image)
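to make that concrete, a minimal sketch of the "frozen artifact" idea (image name, registry, and port are all made up for illustration): the image is built once, tagged, and the deployment just references it, no post-startup configuration:

```yaml
# hypothetical Deployment manifest; the image was built and pushed beforehand,
# so everything inside it is immutable from this point on
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.3  # pinned tag, frozen at build time
          ports:
            - containerPort: 8080
```

swap the image tag to roll forward or back; the manifest itself is the whole deployment story, which is what makes it repeatable across providers.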

> Solve problems as they arise, not in advance.

while this does make sense for supporting varying requirements the lean way, it fails to address the increased costs that rearchitecting a solution mid-lifecycle incurs.

> Do more with less.

goddamn slogan driven blogging. what is the proposed solution? doesn't say. are we supposed to log in to every prod/test/dev machine and maintain the deps by hand with yum/apt? write our own chef/puppet scripts? how is that better than docker images running on kubernetes? the comparison between solutions is the interesting part.

op never says. guess "works on my pc" is enough for him. we can only assume he envisions a battery of mac laptops serving pages from a basement, with devs cautiously tiptoeing around network cables to deliver patches via external usb drives


