Hacker News | kaylynb's comments

I've run my homelab with podman-systemd (quadlet) for a while, and every time I investigate a new k8s variant it just isn't worth the extra hassle. As part of my ancient Ansible playbook I just pre-pull images and drop unit files in the right place.
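For anyone who hasn't looked at quadlet: the unit file is just an INI file that podman's systemd generator turns into a regular service. A minimal sketch (image, ports, and paths here are placeholders, not my actual setup):

    # /etc/containers/systemd/myapp.container
    [Unit]
    Description=My app

    [Container]
    Image=docker.io/library/nginx:1.27
    PublishPort=8080:80
    Volume=/srv/myapp:/usr/share/nginx/html:Z
    AutoUpdate=registry

    [Service]
    Restart=always

    [Install]
    WantedBy=multi-user.target

    # then: systemctl daemon-reload && systemctl start myapp.service

Ansible just has to template that file and pull the image ahead of time.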

I even run my entire Voron 3D printer stack with podman-systemd so I can update and roll back all the components at once, although I'm looking at switching to mkosi and systemd-sysupdate to just update/roll back the entire disk image instead.

The main issues are: 1. A lot of people just distribute docker-compose files, so you have to convert them to systemd units. 2. A lot of docker images have a variety of complexities around user/privilege setup that you don't need with podman. Sometimes you need to do annoying userns idmapping, especially if a container refuses to run as root and/or switches to another user.
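The userns part is usually only a line or two in the quadlet file, something like this (image and ids are made up, and which option applies depends on rootless vs rootful):

    [Container]
    Image=ghcr.io/example/app:latest
    # rootless: map the image's hardcoded uid 1000 back to your own uid on the host
    UserNS=keep-id:uid=1000,gid=1000
    # rootful: or let podman pick an unused subuid/subgid range automatically
    # UserNS=auto

It's not hard, just annoying to figure out per image.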

Overall, though, it's way less complicated than any k8s (or k8s variant) setup. It's also nice to have everything integrated into systemd and journald instead of being split in two places.


Nice! I’ve been using a similar approach for years with my own setup: https://github.com/Mati365/hetzner-podman-bunjs-deploy. It’s built around Podman and systemd, and honestly, nothing has broken in all that time. Super stable, super simple. Just drop your units and go. Rock solid.


Neat. I like to see other takes on this. Any reason to use rootless vs `userns=auto`? I haven't really seen any discussion of it other than this issue: https://github.com/containers/podman/discussions/13728


You can use podlet to convert compose files to quadlet files. https://github.com/containers/podlet
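Roughly like this; podlet prints the generated unit(s) to stdout, so redirect them wherever your quadlet files live:

    # from a compose file
    podlet compose docker-compose.yml
    # or from a plain run command
    podlet podman run -p 8080:80 docker.io/library/nginx > myapp.container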


It works pretty well. I've found that some AI models are pretty decent at it too. Obviously you need to fix up some of the output, but the tooling for conversion is much better than when I started.


Just a single 'node' (or a bunch of independent ones) though, right?

To me podman/systemd/quadlet could just as well be an implementation detail of how a k8s node runs a container (the CRI, I suppose, in the lingo?) - it's not replacing the orchestration/scheduling abstraction over nodes that k8s provides, the 'here are my machines capable of running podman-systemd files, here is the spec I want to run, go'.


My servers are pets not cattle. They are heterogeneous and collected over the years. If I used k8s I'd end up having to mostly pin services to a specific machine anyway. I don't even have a rack: it's just a variety of box shapes stacked on a wire shelf.

At some point I do want to create a purpose-built rack for my network equipment and maybe set up some homogeneous servers for running k8s or whatever, but it's not a high priority.

I like the idea of podman-systemd being an impl detail of some higher level orchestration. Recent versions of podman support template units now, so in theory you wouldn't even need to create duplicate units to run more than one service.
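Untested sketch of what I mean (needs a fairly recent podman; %i should expand to the instance name):

    # ~/.config/containers/systemd/[email protected]
    [Container]
    Image=docker.io/library/nginx:1.27
    ContainerName=web-%i
    Volume=%h/sites/%i:/usr/share/nginx/html:ro,Z

    [Install]
    WantedBy=default.target

    # two instances from the one template:
    systemctl --user daemon-reload
    systemctl --user start web@blog.service web@docs.service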


Same experience. My workflow is to run the container from a podman run command, check it runs correctly, use podlet to create a base .container file, edit that file (notably moving volumes and networks into other quadlet files), and done (theoretically).
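So roughly (name, port, and image are just an example):

    # 1. check it runs the way you want
    podman run --name myapp -p 8080:8080 ghcr.io/example/myapp:latest
    # 2. feed the same args to podlet to get a starting-point unit
    podlet podman run --name myapp -p 8080:8080 ghcr.io/example/myapp:latest \
        > ~/.config/containers/systemd/myapp.container
    # 3. edit the .container file, then
    systemctl --user daemon-reload && systemctl --user start myapp.service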

I believe the podman-compose project is still actively maintained and could be a nice alternative to docker-compose. But podman's interface with systemd is so enjoyable.


I don't know if podman-compose is actively developed, but it is unfortunately not a good alternative to docker-compose. It doesn't handle the full feature set of the compose spec and it tends to catch you by surprise sometimes. But the good news is that the new docker-compose (v2) can talk to podman just fine.


Chunk gen makes sense to implement last, or never. If you want a performant Minecraft server you need to pregen all the chunks anyway. You can still regen never-visited chunks later to pick up new chunk gen after updates, since chunks store their inhabited time.
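(For pregen most people use something like the Chunky plugin, which IIRC is just:

    chunky radius 10000
    chunky start

and then you wait a long while.)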

I think Minecraft server re-implementations are pretty neat and I like to see when a new one comes out. There are also special-purpose server impls like MCHPRS for doing fast redstone compilation for technical Minecraft.


I think a high performance block-for-block compatible chunk generation program would be great for anarchy Minecraft servers or generally servers with an "infinite" minecraft world where pre-generating all chunks is not possible.


Oh yeah I agree. There's a lot of fun problems to solve with Minecraft servers. I didn't mean to imply that there are no reasons for good chunk gen. I'm primarily into technical survival so my personal priorities wouldn't be chunk gen.


Another advantage of printed parts is that it's easy to use alternative part designs. There are lots of variants of the standard parts, and you'll most likely start replacing stock parts with them as soon as you get addicted.


I really like Silverblue and run it on a couple of secondary machines (like in my workshop), but it’s still rough for anything off the beaten path.

The largest pain points for me:

- Any kernel modules. I know Ublue has images but I wish Red Hat would just have an official solution that doesn’t require hacky RPMs and such.

- Kernel cmdline args or any initramfs changes: they can't be packaged in the image and need to be applied manually (rough commands after this list). Maybe it's possible to build a custom initramfs to distribute?

- Secure Boot and enrolling MOKs is very annoying. My current workstation just uses sbctl to sign a UKI against custom keys and everything "just works". This is part of why kernel modules are a pain in Silverblue too.
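For reference, the manual dance I mean looks roughly like this (device paths, kargs, and key handling are just examples):

    # kernel args on ostree systems: applied per machine, can't ship in the image
    rpm-ostree kargs --append=amd_iommu=on

    # what my non-Silverblue workstation does instead: custom keys + signed UKI
    sbctl create-keys
    sbctl enroll-keys --microsoft   # keep Microsoft keys so option ROMs still load
    sbctl sign /boot/EFI/Linux/linux.efi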

If you don’t care about kernel modules with secure boot it’s quite nice though. Practically zero maintenance.


After working with these types of systems, I'm convinced we need a new type of package manager that works with overlays and merges package databases somehow. That way you can update the underlying image (at your own peril, maybe) and have the overlay package manager see the new versions. Constantly rebuilding everything when the underlying image changes is a waste.


Nix?


AFAIK Nix wouldn't solve this as it has the same issue (/nix/var/nix/db). Here's a scenario to better illustrate:

I'm using systemd-nspawn with my host root as a lowerdir overlay. In this container I install some packages not present on the host. The overlay upperdir now includes the new packages and the new package database. I upgrade my host, and now the nspawn package database is wildly out of date, because overlayfs copies up whole files rather than merging changes within them.
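The general shape is something like this (not my exact invocation, paths are placeholders):

    # host root as the read-only lower layer, persistent upper for the container's changes
    mount -t overlay overlay \
        -o lowerdir=/,upperdir=/srv/dev/upper,workdir=/srv/dev/work \
        /srv/dev/merged
    systemd-nspawn -D /srv/dev/merged --boot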

OverlayFS is really handy but it causes a ton of churn from rebuilding everything.


The kernel modules problem really highlights why the push to do more in userspace in recent years increasingly makes sense. Hope to see more kernel changes to support this push.


1 Gbps is nice when downloading games and updates. Since everything is digital it can be the difference between waiting 30 minutes and 3 hours, i.e. you play a game the night it's released/updated or wait until after work the next day.

Upload speed probably makes more sense for more use cases though. I used to have symmetric 1Gbps fiber and never bothered to set up QoS as my upload was never saturated.

I moved and am stuck with "1Gbps" Comcast, which really means 25Mbps upload. I had to set up qdiscs on my gateway and split my network into tiers to get acceptable upload speeds and latency for the workstations in my home. I maybe have more uploads than 'normal' people, as I have automated backups that store data off-site, but normal people have "backups" in the form of cloud storage I think.
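The shaping itself is just a qdisc on the WAN interface, roughly like this (interface and rate are placeholders; the rate goes a bit below what the modem can actually do so its buffer never fills):

    # egress shaping with CAKE; needs sch_cake available in the kernel
    tc qdisc replace dev eth0 root cake bandwidth 20Mbit

The "tiers" are just deciding which hosts get prioritized ahead of the bulk backup traffic.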

Uploading videos (to YouTube, for example) is painfully slow. I'm simulating living in Australia when I upload a video.


Linux is about as consistent as Windows these days. /etc isn’t really a “dumping ground” for everything and is quite static now. Just diffed a snapshot of /etc on my workstation from a month ago and there aren’t really any changes I didn’t put there myself. It’s reasonable to make /etc immutable on a lot of systems; something impossible with the registry.

Home is a bit more chaotic but most applications follow the XDG specs. Mostly. Less so with cache and state files (vscode, for example, dumps tons of cache/state files in .config instead of .cache and .local/state). And weird things like Flatpak shoving everything in .var.

I’d say things generally behave about as well as windows apps which often treat documents as a dumping ground for all kinds of files. I always have to go to pcgamingwiki to find game data locations without having to check half a dozen places.

For administration I really like the /usr and /etc divide. Vendor files go in /usr, and overrides for the running system in /etc. It's useful to be able to peek in /usr to see defaults. You don't really get that ability with the registry. With some more modern setups /etc is bootstrapped from /usr (for example, with systemd-tmpfiles) and you can "factory reset" a system by clearing out /etc (with some asterisks around restoring state for a few of the legacy state files still kept in /etc if the system has manually created users/groups).
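The tmpfiles.d bootstrapping looks roughly like this (file name is just an example):

    # e.g. /usr/lib/tmpfiles.d/example-etc.conf
    # copy the vendor default from /usr/share/factory/etc/myapp.conf into /etc
    # at boot, but only if it doesn't exist yet
    C /etc/myapp.conf - - - -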


Sometimes breaking changes for ZFS are backported to LTS anyway.


Oh :-\ Thanks for the warning, I guess I'll have to remain vigilant. Switching to LTS certainly significantly reduces the frequency of incompatibilities, so I'm definitely going to remain on it, but I guess it's not the perfect fix I thought it was.


The article title is very misleading. This isn't bypassing FDE in any way. It's just getting a root shell on a machine you have physical access to with a particular boot configuration.

Clever? Yes. But no encryption is bypassed.

Most systems will only be listening to PCR 7 anyway, so a similar attack could be done by loading your own custom bootloader, or possibly reading messages on the SPI bus when booting. This is just a nice trick that's easier/faster.

There is a balance of convenience versus security, and this could be prevented easily by disabling the recovery shell or registering more PCRs (with a correct boot setup), but it would be much more annoying to remotely administer since you could get failure states where the TPM won't release the keys in a variety of situations.
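For example, binding the LUKS slot to more of the boot chain than just PCR 7 is a one-liner with systemd-cryptenroll, at the cost of the TPM refusing to release the key whenever firmware or the bootloader changes (device and PCR list are placeholders):

    systemd-cryptenroll --wipe-slot=tpm2 --tpm2-device=auto \
        --tpm2-pcrs=0+4+7 /dev/nvme0n1p2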

Ultimately TPM-only unlock is a significant increase in security vs unsophisticated attackers and probably fine for 99% of people, but isn't something to rely on if you are concerned about sophisticated attackers.

Even with perfect PCR setup and enrolling only custom keys in UEFI, a running machine is still vulnerable. Cold boot or DMA attacks (Thunderbolt or PCI) are just a few that come to mind. These sound extremely sophisticated but are easily done even with hobbyist equipment. Any running machine with currently unlocked disks should be assumed to be possible to compromise with physical access.

If you're interested in the Linux boot chain, Poettering has a good read: https://0pointer.net/blog/brave-new-trusted-boot-world.html

There are a lot of interesting talks around Linux boot security in the upcoming All Systems Go! conference: https://all-systems-go.io/

Microsoft has info regarding boot security in BitLocker Countermeasures: https://learn.microsoft.com/en-us/windows/security/operating...


Most people don't realize how dangerous automobiles really are. Accidents are one of the leading causes of death in the US until people get to be 45+ and cancer starts overtaking it. Even in 2020-2021 CDC data that probably has fewer people driving than usual: https://wisqars.cdc.gov/data/lcd/home. One thing I liked about living in a city was being able to walk to work. I'm reluctant to work anywhere I have to commute because the last few years have likely significantly reduced my chance of accidental death purely due to remote work.

In some age ranges it's the leading cause, although overdose deaths sometimes win out. It's kind of absurd that this isn't brought up in articles about remote work. Newspapers run articles about gun violence every day despite it being wwaaaayyyy less likely than dying in an automobile accident, but the far greater danger of commuting is almost never discussed.


"one"of. "Accidental" death is there leading cause of death. But that doesn't mean "car accidents" it just means accidents. For the 35-44 rage 22k died from poisoning and 6k died from car accidents. It is still a lot, but not the #1 cause of death. About 42k out of near 3 million deaths are due to car accidents (each year)


This has been my experience in the US. C++ is my favorite language. I learned programming with it in the mid-nineties. I still keep up with developments, although I wouldn't consider myself the most skilled with it anymore.

The only job I've ever had that used it was a civil engineering company in a small city in the deep south, and it was mostly just C. The pay was good for the area, but nothing spectacular.

I moved to Seattle and the only C++ jobs were at FAANGs, and only a small portion of jobs at those companies. I worked at two FAANGs and only used Java and C#. I learned frontend web stacks largely for job flexibility, and they almost always pay the best relative to the amount of stress/work you need to put in.

Yeah I could probably write C++ at <insert FAANG> for 2x the salary but I'd also have to work 80h work weeks and deal with FAANG internal politics and sabotage from coworkers, depending on FAANG and which variation of "don't call it stack ranking" they use this year.

On the other hand I can use TypeScript and work from my home office. I've been considering moving back to the south just because of the much lower living expenses, family, and availability of remote work for web stacks. I can't get that with C++.

