Hacker News

If you are using Go (which solves most of your dependency problems) and SQLite (which means you don't need to integrate with an external database via service discovery) why do you need Docker at all?


Perhaps because Docker has great stories for deployment. Many of the complexities of deployment are handled for you (writing a systemd unit, managing rollback, etc).


Yeah, but for Docker to handle the complexities of deployment, you first need to handle the complexities of Docker. So OP's question is valid: for most Go apps, all you have to do is compile a binary and copy it to the server - no Docker or other paraphernalia required. Of course that may not be so simple for various reasons, but it helps to keep that possibility in mind...
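For completeness, the main piece of "paraphernalia" the copy-a-binary approach usually needs is a small systemd unit for restarts and boot startup. A minimal sketch (the service name, user, and paths are hypothetical):

```ini
# /etc/systemd/system/myapp.service (hypothetical name and paths)
[Unit]
Description=myapp Go service
After=network.target

[Service]
ExecStart=/opt/myapp/myapp
User=myapp
Restart=on-failure
RestartSec=2

[Install]
WantedBy=multi-user.target
```

After copying the file, `systemctl daemon-reload && systemctl enable --now myapp` should start it and keep it restarting on failure.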


> you first need to handle the complexities of Docker

The complexity of Docker is not that big for a Go deployment though, especially if you have all the other bits for orchestrating your Docker containers (for the rest of your stack) already in place. You mostly just copy the binary into a slim image and you are done.
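As a sketch of that "copy the binary into a slim image" flow, a two-stage Dockerfile is usually enough (the Go version, module layout, and distroless base image here are illustrative assumptions, not requirements):

```dockerfile
# Build stage: compile a static binary (CGO off so it runs on a bare base image)
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: nothing but the binary
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```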


> You mostly just copy the binary into a slim image and you are done.

You don't need docker for that, just `tar czf my-layer.tar.gz my-dir`. If you want a manifest file, you can get the digest using `sha256sum my-layer.tar.gz`.
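That is essentially true: an OCI layer is just a tarball plus its digest. A minimal sketch (file names are made up; a real image additionally needs a config and a manifest JSON that reference this digest):

```shell
# An OCI layer boils down to a tarball and its sha256 digest
mkdir -p my-dir/bin
printf 'hello' > my-dir/bin/app          # stand-in for the real binary
tar czf my-layer.tar.gz my-dir
digest="sha256:$(sha256sum my-layer.tar.gz | cut -d' ' -f1)"
echo "$digest"
```

Note that gzip output embeds a timestamp, so the digest is not reproducible across runs unless you pin it.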


Agree, and most complexities will occur in enterprise environments where the OS/hardware is locked down, which can make something like SQLite "hard", as it would any CPU- or disk-bound container. However, that should be a platform team's job to resolve, not a backend dev's.


> all you have to do is compile a binary and copy it to the server

Docker does this quite well, and solves a bunch of other problems you're likely to have regardless. One really simple example is "how do you copy it to the server?" Do you have ssh keys for your server on development machines? How do you handle the "Oh I'll just remote in and fix this one X"?

It's also _crazily_ easy to get started with, and it handles middling amounts of scale. If you're on AWS, Elastic Beanstalk is plug and play. If you're not, DigitalOcean App Platform will host it for you, with automated deployments from git for $5/month with basically no configuration needed from you.


I don't get this answer.

Publishing your docker image to some public or private registry still needs to happen. Then the host needs to be updated. This is not really much simpler than "scp your binaries to the server", and it has many more moving parts that can fail.

Now, there are better ways to distribute Go projects than scp, for example Heroku-style deploys, or by just abusing Go's built-in git support.


Yea, Docker is crazy to me. I can never understand people who think it's awesome or something. It's awful.

Of course, if you're doing something in Python/Ruby/Node.js then it's useful, because these languages do not provide a reliable way to compile source code into an independent program/application.

But that does not make it "awesome" or anything. It's just a band-aid. It's an extra layer to hide fundamental design problems.

With Go, the root problem is solved: the language has a compiler that reliably produces statically linked binary executable programs.
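To make that concrete, a fully static Go build is usually just a matter of disabling cgo. A command sketch (assumes the Go toolchain is installed and the project has no cgo dependencies):

```shell
# Cross-compile a statically linked Linux binary from any host.
# CGO_ENABLED=0 avoids linking against libc; -s -w strips debug info.
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -trimpath -ldflags='-s -w' -o app .
```

Running `file app` on the result should report a statically linked executable.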


Docker isn't a catch-all for containers. Docker has a pretty terrible deployment story at this point, and I foresee that in a couple of years it will probably be considered legacy compared to Podman/K3d/local K8s/etc.


I think people use the word Docker to mean "produce an OCI-compliant image that a CRI runtime can run".


Podman does the same thing, as well as other tools. Using Docker to refer to something that doesn't use Docker is like saying that your Android Phone is an iPhone.


> Docker has a pretty terrible deployment story (...)

This is the very first time I've ever heard such nonsense. In all the companies I've been at, Docker is a renowned problem solver, not only for production deployments but also for local testing environments and deployments.

It even shines as a stand-alone barebones clustering solution with Docker swarm mode.


I’ve honestly not touched docker in years, but all of my prod apps run in docker. So it’d be paradoxical for me to say I don’t need docker, but I really want to.


Docker can be considered a deployment tool. You package your application in an image and run said image. Development and test of that application does not have to be in a docker image.


Nor does deployment. Podman, Rancher, etc. all solve those challenges without relying on Docker, and with Docker changing their licensing, I don't think it will be the de facto tool in a couple of years. There will be others that will replace it.


Docker has become synonymous with OCI images, and my comment was exactly in that context (or at least state of mind) - I should have stated that as well.


> So it’d be paradoxical for me to say I don’t need docker, but I really want to.

I find it quite amusing that projects that try to position themselves as Docker alternatives end up basing their presentation on how their project can be used just like Docker, down to Docker's choice of command line interface.


I deploy my app (C# with SQLite, some native dependencies) using custom-made scripts to Ubuntu, CentOS, and Red Hat dedicated servers, and it's a major pain to have a separate script for each version of each OS. I'll switch to Docker soon so that I have a single target.


> If you are using Go (which solves most of your dependency problems) and SQLite (which means you don't need to integrate with an external database via service discovery) why do you need Docker at all?

Dependencies are not really a problem with Docker, nor the thing it is designed to solve. If dependencies were the problem people cared about, everyone would just go with a single statically linked executable/fat JAR and no one would ever bother with Docker.

Docker is primarily about containerization, but it's also about ease of packaging and deployment. It's also a deployment format that provides horizontal scaling for free.

Also, the one-database-per-service architecture pattern is quite common, as are ephemeral databases and local caching, and keep in mind that SQLite also supports in-memory databases.


Okay, I'll bite.

> It's also a deployment format that provides horizontal scaling for free.

Um. Docker does not provide horizontal scaling at all, for that you need orchestration. And those tools are anything but free if your time has any value.


I think what the OP means is that using Docker you can use tools that give horizontal scaling for free (see: Cloud Run etc.)


The claim that horizontal scaling is free with Docker is simply not accurate.

Maybe it will be some day, but these orchestration and deployment tools built on Docker have enormous hidden costs.

IME, Docker takes an enormous amount of focus away from the customer problem and moves it to the "how to get this mess working" problem.

Now may be the right time in a given business to make that shift in focus, but to claim that it will be "free" is just misleading (IME).


> IME, Docker takes an enormous amount of focus away from the customer problem and moves it to the "how to get this mess working" problem.

This is the opposite in my experience: Docker lets me focus on the business problem by making deployment easier.


Docker lets me focus on cool problems like "which commands can I use to free up space on this cloud-based container-running VM, given that there are 0 bytes free, and many tools will crash if they can't make a tempfile/dir?".

At least, that's been my experience when maintaining a mess of other people's Docker crap.


> Um. Docker does not provide horizontal scaling at all, for that you need orchestration.

It does. Please take the time to learn about Docker and its Docker Swarm feature.
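For reference, scaling out with swarm mode is a few commands. A sketch (requires a running Docker engine; the service name and image tag are hypothetical):

```shell
# Turn the node into a single-node swarm, then run and scale a service
docker swarm init
docker service create --name web --replicas 1 -p 8080:8080 myimage:latest
docker service scale web=5   # five replicas, load-balanced by the routing mesh
```

That said, this is scheduling and replication on nodes you already have; adding machines to the swarm is still up to you.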


They probably have 100 other apps and deploying this one without docker would make it a snowflake.


Because the deployment environment is or may be a Kubernetes cluster or some other kind of containerized environment. Wrapping up your application in a neat package makes other people's jobs easier - to them, it's a black box container, not a binary they need to install and manage on a server.


It looks great on CV.


And now he can add front page of HN


I think the only reason I still use Docker for everything is for automatic restarts, without worrying about another tool for whatever language/framework I'm working with.


Uniformity, perhaps? I for one help manage several workloads that have become quite a lot easier to manage through containers, and I'd rather deal with that than have a few workloads deployed differently from everything else. Maybe containerizing a Go project like this is more work than strictly necessary, but it would be a quick 10-line Dockerfile and pretty much done.


In a production environment one would probably deploy using something like Fargate, Kubernetes or Fly.io.


> In a production environment one would probably deploy using something like Fargate, Kubernetes or Fly.io.

Docker swarm mode is pretty good and terribly easy to get up and running in no time.

I have a few small personal projects hosted on Hetzner on a couple of Docker swarm mode deployments with 100% uptime in the past two years, and all it took to get that infra up and running was installing Docker on a bare Linux node.

The only downside I'm aware of is that inter-node traffic speeds can be relatively low.


I've worked with docker swarm extensively. I've managed it but also automated the deployment and implemented several features to ease deployment using the swarm API.

Swarm pros:

- Easy to setup

- Easy to run

- Relatively easy to debug

Swarm cons:

- Many problems persist for years. Some because of lack of resources, others because the problem is simply too hard.

- The community and automation around swarm is small

- Problems solved by third party tools, apps, etc. in kubernetes require in-house workarounds or solutions (e.g. there was an API to perform autoscaling, but we had to write the python app that will read data from prometheus and scale-in/out the deployment)

What I found with Swarm is that it was extremely resilient. At some point the swarm cluster had been running on AWS for ~2 years with minimal maintenance. That would have been impossible even for a managed EKS cluster, for example. There are simply too many things that can go wrong.


Oh. TIL. Sounds very useful!



