Awesome Docker Compose Examples (github.com/haxxnet)
375 points by thunderbong on Feb 25, 2023 | hide | past | favorite | 82 comments


There are a lot of "tool" selections in that repo.

If anyone is looking for ready-to-go web app examples aimed at both development and production with Docker Compose, I maintain:

    - https://github.com/nickjj/docker-flask-example
    - https://github.com/nickjj/docker-rails-example
    - https://github.com/nickjj/docker-django-example
    - https://github.com/nickjj/docker-node-example
    - https://github.com/nickjj/docker-phoenix-example
About once a week I update everything in them to the latest versions.

The examples use a combination of services for each tech stack, such as web + worker + postgres + redis + esbuild + tailwind. The Rails example is set up for Hotwire and runs Action Cable as a dedicated service along with Sidekiq, whereas the Flask and Django examples use Celery as a worker. You can easily swap things out since the examples are starter projects that you can clone + rename (they all come with a rename script); you're meant to customize them and build your app on top.
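A compose file for a stack like that is shaped roughly like this (a hypothetical sketch; service names and images are illustrative, not copied from the repos):

```yaml
# Sketch of the web + worker + postgres + redis layout described above.
services:
  web:
    build: .
    depends_on: [postgres, redis]
  worker:
    build: .
    command: celery -A app worker   # Sidekiq in the Rails variant
    depends_on: [redis]
  postgres:
    image: postgres:15-alpine
  redis:
    image: redis:7-alpine
```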


This looks great. Definitely a few idioms I will have to explore further.

I can use Docker in a basic sense, but it is amazing to me how much black arts still exists for what has become a cornerstone of modern deployment. Lots of conflicting/dated advice about best practices. Unsure which advice is still required/applies to podman, etc.


> but it is amazing to me how much black arts still exists for what has become a cornerstone of modern deployment.

What?! Unless you are using complicated networking, docker is simple. Dockerfiles are essentially just annoying-syntax shell scripts.

We have plenty of black arts in computing. Docker isn't one of them.


So, am I still supposed to specify UID and GID? Should I be using Alpine or Debian? How do I handle loading certificates? Should I use an override config for development? In this thread, someone was indicating that the naive volume option does not work over SSH. Also in this thread was a request to pin by tag and not hash. Do I still need to worry about Docker blowing a hole in iptables?

Maybe not black magic, but there are a lot of subtle optimizations for which there is much conflicting guidance.


> So, am I still supposed to specify UID and GID?

If you run your container as a non-root user and create a user without setting a UID / GID, it'll default to 1000:1000, so as long as your Docker host's user is 1000:1000, things work out of the box. A lot of this is general Linux knowledge around user / file permissions, not so much Docker.
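As a sketch (not taken from the example repos), a Dockerfile that pins the UID/GID explicitly and lets you override them at build time could look like:

```dockerfile
# Hypothetical fragment: create an unprivileged user with an explicit,
# overridable UID/GID (defaults match the typical first Linux user).
FROM debian:bullseye-slim
ARG UID=1000
ARG GID=1000
RUN groupadd --gid "${GID}" app \
 && useradd --uid "${UID}" --gid "${GID}" --create-home app
USER app
WORKDIR /home/app
```

Building with `docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) .` then matches your host user.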

> Should I be using Alpine or Debian?

Debian, no contest in my opinion.

> How do I handle loading certificates?

You can volume mount them, or deal with SSL certificates in a way where Docker isn't involved, such as running nginx on your Docker host directly or putting a load balancer in front of your app and terminating SSL there.
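If you do want the container to see the certificates, a read-only bind mount is enough; a sketch (paths here are just an example):

```yaml
services:
  web:
    image: example/web
    volumes:
      # read-only mount of host-managed certs into the container
      - /etc/letsencrypt/live/example.com:/certs:ro
```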

> Should I use an override config for development?

You can use the same docker-compose.yml file in all environments and tweak things with environment variables; the compose file supports variable interpolation. Docker Compose profiles also let you control which services run in each environment, and it's even configurable with a single env variable.
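For instance (a hypothetical fragment, not from the repos), interpolation plus a profile can gate a dev-only service:

```yaml
# WEB_PORT and COMPOSE_PROFILES come from the environment or an .env file.
services:
  web:
    image: example/web
    ports:
      - "${WEB_PORT:-8000}:8000"
  assets:
    image: example/assets
    profiles: ["dev"]   # only started when the "dev" profile is active
```

Running `COMPOSE_PROFILES=dev docker compose up` starts both services; without it, only `web`.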

> Do I still need to worry about Docker blowing a hole in iptables?

If you use -p 8000:8000, yes, this publishes the port so the outside world can access it, just as editing iptables to allow that port would without Docker. I wouldn't classify this as blowing a hole in iptables. This is "user configured application to make a port open to the world".

> Maybe not black magic, but there are a lot of subtle optimizations for which there is much conflicting guidance.

In the end my example apps address most of these issues for you. You're on your own with certificates since that varies by deployment, but everything else is fully set up and ready to go, and it protects you from blowing a hole in iptables since it only publishes the port to localhost by default, not 0.0.0.0. That lets nginx or another web server on your Docker host access it directly, but no one else.
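Publishing to localhost only is a one-line change, either on the CLI (`-p 127.0.0.1:8000:8000`) or in the compose file (generic sketch):

```yaml
services:
  web:
    image: example/web
    ports:
      - "127.0.0.1:8000:8000"   # reachable from the Docker host only, not 0.0.0.0
```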


This is how people end up using Kubernetes, despite its warts.

[edit: added second clause]


Kubernetes doesn't answer any of the parent's questions though, does it? You still need a Dockerfile for k8s and all those questions still apply.


'Black art', maybe not; overcomplicated mess? Yes!

I've tried to use the docker compose command to limit the maximum number of CPUs used, and I failed, even though it's possible: the support team managed to do it. Why did I fail? Because the docs and the design aren't great.
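For reference, the Compose Spec does support CPU limits; a minimal sketch (service name and image are placeholders):

```yaml
services:
  app:
    image: example/app
    cpus: "1.5"            # cap the service at 1.5 CPUs
    # equivalent form under the deploy key:
    # deploy:
    #   resources:
    #     limits:
    #       cpus: "1.5"
```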


These are excellent, thank you.

I maintain similar Django and Flask + compose stacks on behalf of the startup studio I work for so it’s fun to compare notes.

For our Django stack, for instance, we have also settled on Postgres, on celery+redis, and on whitenoise. black/flake8/isort also seem universally agreeable. We also throw in pyright and generally make extensive use of type hints.

For the front-end, we’re currently Create React App with TypeScript and Tailwind. I’d love for us to move away from CRA, so your use of esbuild is helpful to see. (I’d personally be happy using HTMX or Turbo/Stimulus but for the moment a JSON API backend with a React SPA front-end seems more comfortable for more of the CTOs who hop on board.)

We also supply some minimally opinionated glue at the API layer. On the back end we have a base View that provides a few helper methods for transmuting invalid Form instances to nice JSON replies that the TypeScript API invocation code works with gracefully. (We used to push DRF but have lots of feedback from older startups that ran with it and had regrets down the road.)


Offtopic, but I've googled Tailwind because of this comment. This seems absolutely crazy to me, in the worst sense imaginable. So, there was (and still is) a "style" attribute in HTML. It leads to lots of repetition and yadda yadda, so people started using classes instead to write their CSS. There are dozens of schools of "best practices", or meta-frameworks, or actual frameworks to make it all more manageable, but now typically each element still has like 5 different classes specifically to define styles. So, finally someone got tired of it and came up with an ingenious solution: a framework that dynamically defines CSS classes with CSS-property-like names so you can write styles right in the HTML again, but using the class attribute instead of the style attribute?! What's even the point?!


Hah! This was pretty much my response when I encountered Tailwind for the first time.

This post by Tailwind’s author gives some more perspective: https://adamwathan.me/css-utility-classes-and-separation-of-...


I had the same concern, this article perfectly captures the problem and gives perspective to the solution Tailwind provides - Thanks for sharing!


Classes are not perfect; they introduce relative meaning (or meaninglessness), and in prototyping it's often a huge waste of time.

Other than that I agree.


No offense, but man, the amount of tooling you guys are using sounds insane to me. How is a person able to oversee and understand everything? The older I get the more I feel distanced and disconnected from these modern practices. I am afraid that if I ever have to find a new workplace I won't be able to succeed because of this.


No offense taken. As a fellow old person (whose first computer as a young kid was a TI-99/4A) I definitely feel this pain.

But because I am also an old engineer who works with a lot of other old engineers we are lucky to have a shocking number of human-years’ accumulated experience not only selecting a pile of tools and frameworks but also sticking with them from zero to $BIGCO. Put another way, I’ve shot myself in the foot more times than I can count with these tools and I do it less often and in more esoteric ways these days.

I can’t claim the same degree of experience with all these tools, of course. Pyright is bleeding edge. Tailwind is still the new kid. Relative to (say) Python and Django or Flask, React is new too. Most of the pain and learnings come from these newer moving parts.

Is it “better”? That depends on the axes of evaluation. As a startup studio where the 70% case might be “SaaS that takes a back-office process held together today by Excel & email and makes it way better” the answer is: sometimes, unequivocally yes. Not just because of the tools and the potential velocity they can confer, but because of the kinds of teams we can build around them. Sometimes, plain old Django with nothing added is a clearly better choice.

In the end, every startup is its own snowflake. We try to select "starter stacks" that balance industry familiarity with our ability to offer meaningful operational perspective. We definitely don't think of them as the final word.


No offense, but that list of tools is minuscule, and you should pause if you think it's anywhere near "insane". For comparison, a regular handyman with a basic set of tools easily has 50+ just starting out. It's definitely possible to understand all of that and way more. Unless you're fresh out of uni and have never worked at a professional company with more than a couple of people, you'd see many more.


I'm not GP, but I used to write firmware. I had a C compiler, a debugger, and emacs.

IIUC, if you write a web application in the MS world, you have Visual Studio, ASP.Net, and IIS.

There doesn't have to be piles of libraries.


I definitely relate to this but frequently challenge myself and then overcome it. You just have to bite the bullet and carve out some time to try them out.

The key thing to understand is that all these things are shrouded in incomprehensible jargon and alien sounding names that make it incredibly intimidating to get started. But the fact is, it is a wide but shallow pool of jargon sitting on top of the same old computing fundamentals that have been around since the 1970s.

You will find if you know your fundamentals, then the jargon is far less difficult to overcome than it seems. You just need a bit of exposure and if you spend any amount of time playing with the tech it just starts to happen by osmosis. (If there are fundamentals you aren't solid on, treat it as an opportunity to bed that in - even these things are not generally super complex in the end).


I'm a so-called full-stack developer, so I have to be on top of it, and it's hard. I'm constantly behind, so I spend quite a lot of time catching up whenever I get the chance. It's possible (e.g. I think I could single-handedly build an application with a React frontend and a Java/Go/Node backend, build a k8s cluster and deploy everything, following best practices), but it's a lot of things that you'll forget eventually.

I think that one sane way is specialization. Become an expert in writing nginx-ingress yamls in some big corporation. Another sane way is to throw away modern tech and stay with old tech. You can deploy perl cgi to OpenBSD just like you could 20 years ago. Another way is to carefully select technologies that have been proven for 7+ years. I'm trying to follow this way.


I think it was a subtle trend: making everything out of tiny blocks while JavaScript / HTML5 evolved a lot created a jungle. But deno/bun/esbuild-minded tools seem more integrated, faster and leaner (hot reload, type checking, high perf all in one).


Taking a quick look, nothing I saw manages SSL - do you package that, or is that an entirely different step?


It's not included with these projects; as someone else mentioned in a reply to you, it can be handled in a number of different spots.

For example, a load balancer running 1 level above your server or perhaps nginx running directly on your Docker host without Docker.

A while back I wrote up why I prefer running nginx outside of Docker at: https://nickjanetakis.com/blog/why-i-prefer-running-nginx-on...


Your paragraph on SSL is basically what I have come to understand so thank you, I just thought I must be stupid or something.


Not the same guy, but many cloud providers take care of ssl in their load balancers.


Awesome repos, starred. Do you have any preferences for next steps, such as deploy/management? And if you’ve seen it, what do you you think of https://github.com/mrsked/mrsk?


Thanks.

Yep I've seen mrsk and even briefly chatted with DHH about it a day or 2 after he open sourced it.

I have mixed feelings:

On one hand I think it's fantastic Rails is starting to take on deployment officially. This is going to lead things to a better place in the long run. Generally speaking DHH has a really good track record for making things that feel good to use.

On the other hand, I think the project is trying to reinvent too many things that already exist and doesn't account for Docker Compose. For example, for literally the last 7 years I've been deploying any Dockerized web application with Docker Compose to 1 or more servers with about 15-20 lines of YAML using Ansible to set up the server and git to deploy the code. It doesn't matter if it's Rails, Flask, Django, Phoenix, Node, Go or whatever. It's all the same. Those 20 lines also include everything from taking a blank slate Debian / Ubuntu box to production ready, complete with self managed system updates, locking down SSH, iptables, various server configurations / optimizations, nginx, HTTPS, database backups, sane logging and everything else you'd expect.
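The shape of such a play might look something like this (a sketch with invented role names, repo URL and paths; not the author's actual playbook):

```yaml
# Hypothetical Ansible play: harden the box, ship the code, start the stack.
- hosts: web
  become: true
  roles:
    - hardening            # SSH lockdown, iptables, unattended upgrades
    - nginx                # reverse proxy + HTTPS on the host
  tasks:
    - name: Deploy the code
      ansible.builtin.git:
        repo: "git@example.com:me/app.git"
        dest: /srv/app
    - name: Start the compose stack
      ansible.builtin.command: docker compose up -d
      args:
        chdir: /srv/app
```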

I don't have anything ready yet but I've been slowly working on https://nickjanetakis.com/courses/deploy-to-production to assemble all of this into a course.


Thanks! I will use this.


Does this start the Rails app and Postgres DB on the same server?


By default yes but you can change a single environment variable to not run Postgres or Redis through Docker Compose. You can also very easily choose to run Puma and Sidekiq on different servers. It leverages Docker Compose v2 profiles.

I'd suggest watching and reading: https://nickjanetakis.com/blog/a-guide-for-running-rails-in-...

It's a full end-to-end walkthrough of the example Rails app and how it all works with Docker and Docker Compose. I recorded it just a month ago, so it covers everything with up-to-date info matching the code in the repo.


Any golang?


Nope, I haven't run any Go code in production that was running in Docker.

Go's ecosystem is also pretty fragmented, so it would be difficult to create a solution that lots of folks would be happy with. I get the impression most folks who use Go want to pick everything themselves (not a bad thing, just something I noticed). Plus I'm only 1 person doing this in my free time with no income sources attached to these projects; I don't have the capacity.

With that said, most of the apps are very similar when it comes to the Docker bits. You could take any of the examples and replace the stack with Go. I'd suggest basing it off the Phoenix example because that one covers using a 2nd Docker build stage to create a release. With Go being able to compile a self-contained binary, the pattern of using a build stage and copying the binary over would be similar.
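A Go version of that two-stage pattern could look something like this (a sketch; the package path and binary name are placeholders):

```dockerfile
# Build stage: compile a static, self-contained binary.
FROM golang:1.20-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Final stage: copy only the binary, run unprivileged.
FROM alpine:3.17
COPY --from=build /bin/app /usr/local/bin/app
USER nobody
CMD ["app"]
```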


I think I’ve asked this question before but I really don’t understand why there is no managed docker compose hosting solution. I think Swarm did that, but I believe they stopped? Every project I worked on used compose locally, but had to transpose to some k8s/nomad setup in production. I’d love to just run ‘docker compose deploy’ or publish or whatever and it just gets deployed to the cloud + load balancing and autoscaling. What am I missing?


It's not exactly what you're asking for, but <https://fly.io> is pretty neat. It's easy to get going and will deploy any docker images you want with a bit of their config added.

As for AWS ECS mentioned in the other reply -- it's great if you ever get it working, but it is a nightmare to learn and use.


Docker Compose cannot currently be deployed on fly.io: https://community.fly.io/t/deploy-with-docker-compose-yaml/4...

Unfortunately, there's a huge gap between "single container" and "multiple, co-dependent containers". It's a far cry between services hosting a single container and ones offering to host entire stacks.


Right, I wasn't suggesting you could deploy with Docker Compose on fly.io. Hence why I prefaced my suggestion with "It's not exactly what you're asking for..."

And I think you're exaggerating the "huge gap" there. Fly.io can run multiple docker containers for you. Their docs are great so it wouldn't take much effort to learn how to create an equivalent or better setup that covers everything Docker Compose does (and more).


I agree fly.io comes close, but you’d still need to transition to ‘something else’ for deploying your compose project.


Coolify (https://coolify.io/) has support for deploying docker-compose based apps. It works quite well and is easy to use. You still need to manage the server on which you deploy the tool, though.


Swarm is still a thing. The recent Docker release even has new features in it.

Also see https://github.com/BretFisher/awesome-swarm


You can do something like this with Bunnyshell (bunnyshell.com). It imports your docker-compose.yml and deploys to k8s. It has a CLI and a web UI.


You can use ECS for that.


Security note: specifying no version, or a version tag (and not an @-hash), in the Docker image name allows Docker Hub or the image publisher to replace the code underneath you on container restarts (i.e., RCE), as tags are not cryptographically assured.


Using @-hashes doesn't assure you of not getting pwned the same way here. How are you getting the @-hash, if not just looking at what the tag points to?

Sure, the image is changing now only at intentional times (as opposed to just any restart), but you're still not getting an assurance that a "trusted" upstream isn't going to RCE you. Security updates & patches come out with such frequency that the number of windows of opportunity are still plentiful.

Better advice would be to use tags, but to vendor the images somewhere and point at the vendored copies. You still get the "doesn't change on random restart", you have an idea of what version you're on that the hash doesn't tell you, you don't have to deal with Docker hashes being the weird AF things they are (e.g., you side-step the confusion that library/foo@sha256:XXXXX == vendored/foo@sha256:YYYYY), and you're more respectful of upstream's bandwidth and you're not dependent on their reliability concerns.

(And frankly, given how poorly most of the industry handles security updates, I'd argue that sitting on a very loose tag might be more secure overall, because then security patches might be happening at random times, as opposed to not at all. The trade-off of the RCE is worth it, and there are ways for upstreams to mitigate the risk there, centrally. Most companies struggle with just vendoring, and few get to actually keeping that up-to-date, IME due to a lack of willingness to apply the necessary resources to the problem.)


> Using @-hashes doesn't assure you of not getting pwned the same way here

It does. Changes to the image can be pushed to "latest" or a specific tag, but the hash cannot change; once you've established that a specific hash is secure, it'll remain secure (or rather, as secure as you first established it to be).
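In a compose file you can keep both a human-readable tag and the digest; when both are present the digest is what's enforced, so the image can't silently change under the tag (generic sketch, digest elided):

```yaml
services:
  app:
    # tag for humans, digest for the engine; the digest is authoritative.
    # (digest elided here; `docker images --digests` shows the real one)
    image: nginx:1.23.3@sha256:...
```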


I think the parent comment's point was that it's difficult, if not impossible, to verify the security of even a particular hash. It's still vulnerable to the same dependency-chain vulnerabilities as pinning to latest; you're just locking in a particular version and _hoping_ it wasn't pwned. Additionally, you then aren't getting any exploit fixes that may be included in newer versions, so even if there was a vulnerability you're now stuck with it until you decide to manually update.

To be honest if you’re that concerned with dependency attacks like that then you should just be hosting your own image registry and building your images yourself, and then only being vulnerable to dependency attacks within the OS distributions and such.


This is one task that Dependabot excels at.

You can use an image like golang:1.20.1-alpine3.17@sha256:48f336ef8366b9d6246293e3047259d0f614ee167db1869bdbc343d6e09aed8a and be able to see both the version (human-parseable) and the hash (machine-parseable).

Dependabot will update both the version and hash parts of the tag in a pull request. Pretty magical, if you ask me. I haven't found a way for it to auto-apply yet, but Renovate can do it if you want automatic updates.


A future compromise of a publisher key by a malware injecting party cannot compromise you if you don't update. Pulling from a tag leaves you open to this at any time without warning if the image publisher is compromised.


You would have to take down (remove) the container to change the image; if the image with that tag is already present locally, it won't get force-pulled (single-node scenario). This is not that straightforward. I like the idea of using digests, though. Using both makes your head explode, explicitly, but another head explosion may prevent some headaches.


Silently replacing version tags in the container registry is currently one of my biggest pet peeves.


...and you'd have to do that for every single security update for every single service that you run. If you need that level of security that might be appropriate, but most users need security patches more than they need to be concerned with a novel attack that requires DockerHub to intend to RCE them.


While the track record of security in the industry is pretty laughable, I do like to delude myself that things are improving.

How many RCEs are discovered per year in baseline Debian/Ubuntu? Seems far more likely that security holes are in the library/application code layered on-top of an image.


> novel attack

Harmful code pushed into large software repositories masquerading as something else is not novel and is starting to happen more and more.


FWIW the standard now says the file should be called "compose.yaml"; "docker-compose.yaml" is supported but deprecated


Ambitious decision. As if docker-compose were the only tool that uses the word "compose", and there were no other extremely common tool that uses, for instance, files like composer.json and composer.lock for configuration.


I’m on my phone and haven’t double checked this, but from my understanding, I believe it’s part of an effort to encourage compatibility with other tools like podman. The ambiguity is somewhat intentional, as an active decision to separate the implementation name from the configuration standard.


composer.json is from PHP. Config file for the package manager (almost universal).

So it will be an even bigger pain in the keyboard to write them :(


Equally confusing and often misunderstood is that the latest version of a compose file should not have a version field at all!

It is a blend of v2, v3 and any newly added attributes (v3 sometimes supported less than v2) that is now called the Compose Spec.

Implementations are supposed to do feature detection based on which attributes are used in the file. Make sure your Docker Compose is new enough; otherwise it thinks the absence of a version field means v1!
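So a current compose file under the spec is simply version-less; a minimal example:

```yaml
# Compose Specification style: no top-level "version:" key at all.
services:
  web:
    image: nginx:1.23-alpine
    ports:
      - "127.0.0.1:8080:80"
```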


Yeah I was gonna complain about this but honestly I don't even know what the hell they're doing with the version field anymore. I suspect it's because they want to force you to upgrade because otherwise they may have to support legacy longer.


That is a big change.

After June 2023 Docker Compose v1 will officially no longer be supported. I probably won't switch right then and there to the new filename but that is at least one ball moving in the right direction to switch over to the new file.

The other problem is more of a human one. There's 7+ years worth of blog posts, videos and documentation referencing docker-compose.yml. Having all of those become invalid is a big price to pay when onboarding new folks to Docker. When learning something new, it stinks when you find conflicting or different information all over the place.

Long story short, I will switch to using it but it's going to take time.


Nice collection, although I don't really like binding volumes to host directories, because then you can't really use docker over SSH. I'm working on my own similar project here that exclusively uses docker named volumes: https://github.com/enigmaCurry/d.rymcg.tech


What do you mean by "can't really use docker over SSH"? I'm not sure what that means, or how it's related to bind mounts.


I mean using a remote Docker context over SSH, where I run `docker` commands on my laptop, but it runs the containers on a remote Docker server. My laptop does not run the docker daemon, so it's just a client.

If you tell me to run

    docker run -v ${HOME}/stuff:/stuff alpine
It will mount /home/stuff on the server, not my own home directory on my laptop. I would have to run another process that rsyncs my local /home/stuff to the server.


Thanks for the reply. I didn't know that was a thing.

Though, I'm still confused by your example of having to rsync /home/stuff to the server. If you use a named Docker volume, is the remote Docker container somehow using a volume you have located on your laptop? Wouldn't you still have to transfer the volume from laptop to server?


In my README I explain how to set up the Docker context over SSH.

In my system all of the files get written to the volume from only three places:

    * From the docker image through VOLUME (fresh volumes copy the data from the image on start)

    * From a template container that writes config files.

    * From the container itself, writing files as it runs.
What I don't do is create a directory someplace and manually edit files and mount them.

When I run `docker build` on my laptop, this does copy files to the server (Docker designed build this way, and you have to set a .dockerignore file to ignore files you don't want copied).


What if the docker daemon is on a storage server and the host volume of /stuff contains, say, 10 terabytes of photo album content?

    > If you tell me to run
    >  
    >     docker run -v ${HOME}/stuff:/stuff alpine
    > 
    > It will mount /home/stuff on the server, not my own  
    > home directory on my laptop.
That seems about right?

And then you note:

> What I don't do is create a directory someplace and manually edit files and mount them.

But if /stuff is photos, and another container, say, runs ingestion tools, or some other photo collection processing, you don't let it touch the same data volume?

Looking at your repo, I see your docker-compose volumes map e.g. data to data …

    volumes:
      - data:/data
… which is what I do, so I guess I'm not following what you're saying to do differently.

For instance, mounting a volume that can be edited by other containers lets me insta-move large files or sets of files between steps of containers, by having container A do a move (not a copy) from its work path to a destination path that container B watches as an incoming path.


> Looking at your repo, I see your docker-compose volumes map e.g. data to data … volumes: - data:/data

There are two ways to mount a volume: one where the source contains a /, and one where it doesn't.

/some/directory:/data

some-volume:/data

The first mounts a host directory (a bind mount).

The second mounts a named volume.

I suggest the second so that it can be managed directly by Docker through `docker volume create|rm` or `docker compose up|down [-v]`.
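In compose syntax the same distinction looks like this (a generic sketch):

```yaml
services:
  app:
    image: example/app
    volumes:
      - ./config:/config   # bind mount: source has a / (or ./) prefix
      - data:/data         # named volume: plain name, managed by Docker

volumes:
  data:   # top-level declaration so Compose creates/removes it
```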


> What if the docker daemon is on a storage server and the host volume of /stuff contains, say, 10 terabytes of photo album content?

In this extreme example I think a bind mount probably makes sense, especially if the files are already there. But a named volume would just be stored in /var/lib/docker/volumes/some-volume-name, so as long as /var/lib has 10TB free I don't see the problem.

I can use my sftp container [1] to be able to sftp directly into a volume, but I've not yet transferred 10TB with it :)

[1] https://github.com/EnigmaCurry/d.rymcg.tech/tree/master/sftp


docker etc. is on a flash partition; data is on a few 16-drive arrays…

… but this made me discover this:

    docker volume create \
      --driver local \
      --opt type=cifs \
      --opt device=//uxxxxx.your-server.de/backup \
      --opt o=addr=uxxxxx.your-server.de,username=uxxxxxxx,password=*****,file_mode=0777,dir_mode=0777 \
      --name cif-volume
So that simplifies a lot for my use cases.

Thank you for all the replies!


That's really cool. I've only ever used the local driver with default settings, which stores to /var/lib/docker, but I guess there are different storage drivers that do different sorts of storage. Nice find!


Replying in series as I see more of your edits:

> For instance, mounting a volume that can be edited by other containers lets me insta-move large files or sets of files between steps of containers,

As long as the containers are on the same docker host, many containers can mount the same volume.


Just run docker compose on the remote machine (and copy the compose file with scp). I don’t see a lot of benefits if you run docker compose on another machine than the docker daemon. You can even automate it with ansible, terraform, or similar, if you like it neat.

Or just go with k8s if you need a more complex setup.


That is an entirely valid approach, but that way it feels like the server requires more maintenance. I can destroy a Docker named volume through the lifecycle of the container, but I can't delete a system directory. (edit: I mean `docker compose down -v` deletes a named volume, but not a system directory.)

Also, it's really nice to be able to use `docker context use SERVER_NAME` so I can switch contexts (servers) very easily.


> because then you can't really use docker over SSH

Most of our app deployments are done with GHCR + SSH + Docker Compose via GitHub Actions on every commit [1]

[1] https://docs.servicestack.net/ssh-github-action-deployment


But named volumes kind of make backups harder, plus you'll need a bind mount for external config files anyway.


> kind of make backups harder

How so? For me it makes it easier; I just have to back up one directory: /var/lib/docker/volumes

> you will need bind mount for external config files anyway

I create all my config files from templates, generated by a container, with the config entirely driven by environment variables. This runs before my main container (via `depends_on`) and writes the config to a named volume. So there are no "external" config files, only "internal" ones.


Repos sitting in my bookmarks:

* https://github.com/docker/awesome-compose

* https://github.com/PostHog/posthog

* https://github.com/growthbook/growthbook

* https://github.com/fleetdm/fleet

Supposedly, one can search GitHub using `language:typescript filename:docker-compose.yml stars:>1000`, but it's not working for me somehow.


Nice examples. I personally really love compose. For all the complexity in the container space it really nails down a fanatic experience.


> I personally really love compose. For all the complexity in the container space it really nails down a fanatic experience.

Fanatics love docker, for sure. It's… fantastic!


Lol, I didn't catch that. You know what I mean.


Might not totally be a coincidence that it started life as an independent thing to give Docker a better experience, that Docker then acquired/acqui-hired.


I've stopped using Docker Compose and only use Kubernetes. Even when working locally.

I don't want the impedance mismatch of working locally with Docker and remotely with K8s.

That being said, K8s IS harder to set up initially.

Having to set up an NFS share locally to be able to properly mount data in a PV is suboptimal.


Given that a lot of these are homelab examples, this is great. I use a bunch of them and plan to use more.

But part of me still thinks about what might be the gold standard of self-hosted, cloud-like things, and that's Syncthing. As in, all of these things should try to figure out how to be like Syncthing.


> - TZ=Europe/Berlin

OMG, why?! It even has daylight saving time!



