
It would be so nice to get a brief description of what this is, and why it is useful, without a bunch of project names and buzzwords. Can it be boiled down to a few simple concepts that are delivered by this project?


It is a Linux distribution based on Fedora Silverblue (another Linux distribution). What makes these distributions special is that they have an immutable/read-only root file system, which means that only /var (which contains /var/home) and /etc can be manipulated directly by root.

With fewer moving parts, it is much easier to provide a Linux operating system that works (more) reliably on many different machines.


Thank you. Why isn't Fedora Silverblue sufficient by itself? What does this project add?

And if you don't mind me also asking, what features are described by the term "cloud-native"?


> What does this project add?

This project uses this upcoming feature in Fedora: https://fedoraproject.org/wiki/Changes/OstreeNativeContainer... and switches to its consumption model entirely.

On stock Fedora you're pulling from a distribution-hosted ostree endpoint to do updates. With Bluefin (and other universal-blue.org images) you're pulling from a container registry (in this case ghcr.io, but you can push your builds to any registry or host your own).
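In practice that switch is a single rebase. A sketch, with an illustrative image tag (check the ublue docs for the exact ref and signing variant):

```shell
# See where updates currently come from (ostree remote vs. registry)
rpm-ostree status

# Point the system at a registry-hosted image instead of the
# Fedora ostree remote, then boot into the new deployment
sudo rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/bluefin:latest
sudo systemctl reboot
```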

We ingest Fedora daily, then add codecs, a bunch of hardware-enablement support via udev rules, and a few pain-in-the-ass-otherwise things like the OBS virtual cam, Xbox controller support, etc. That image is then pushed to ghcr.io, and the local package manager consumes it.

We also enable Flathub out of the box and Distrobox: https://github.com/89luca89/distrobox - then ship a few preconfigured boxes for you to play with: Ubuntu, Fedora, and Wolfi.
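Those preconfigured boxes are ordinary Distrobox containers; creating one of your own works roughly like this (the image tag is illustrative):

```shell
# Create a mutable Ubuntu container that shares your home directory
distrobox create --name ubuntu-box --image ubuntu:22.04

# Get a shell inside it; apt works as usual, the immutable host stays untouched
distrobox enter ubuntu-box
```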

Then we have another image that you can rebase to by typing `just devmode` in a terminal which adds vscode with devcontainers, devpod, devbox, kvm/qemu, incus/lxc/lxd, some nice monospace fonts, cockpit, kind, docker, and a bunch of associated cluster tools.

And since we build everything in CI, there are no local package conflicts like in upstream Fedora when the main repo and RPM Fusion repos clash; your PC only ever gets successfully built images.


> This project uses this upcoming feature in Fedora: https://fedoraproject.org/wiki/Changes/OstreeNativeContainer... and switches to its consumption model entirely.

Does that mean every system update will download a complete image file or is there some mechanism to only download the diffs?


Hey Jorge! Just chiming in to say that I really appreciate the work you've done on ublue-os. I've been using your nvidia image on my Linux workstation, and it's been really great. Thanks!


> Why isn't Fedora Silverblue sufficient by itself? What does this project add?

I'm not affiliated with either distribution. Both provide different out-of-the-box experiences: Fedora Silverblue tries to be a general-purpose desktop OS, while Bluefin focuses on software development. You can still adapt both of them for the same purposes, but Bluefin might be a nicer starting point if you actually are a developer.

The biggest difference to classic read-write Linux distributions is that you have an immutable base system which includes software that you might find useful or useless. In addition, you can install software as Flatpaks into your home directory. It is also possible to change the software that is part of the immutable base system, but it is typically something you want to avoid.

> what features are described by the term "cloud-native"?

I suppose it refers to the container management software that is included in the immutable base system.


Bluefin tries to sort of mimic Ubuntu, so Ubuntu people may feel more at home on a Fedora distro.


That seems extremely reductive...


How about the "cloud native" part?


The cloud-native part is that it's built with tools and patterns you'd see in the cloud-native world, like OCI containers, GitOps, etc. Here's the Containerfile as an example of how it's put together: https://github.com/ublue-os/bluefin/blob/main/Containerfile

And then all the developer patterns are cloud native: devcontainers instead of host packages, included k8s, Podman, and Docker tooling, etc.


Seeing systemd invoked in a Dockerfile looks so wrong.


So "Cloud Native" speaks to multiple aspects of how universal-blue is built and distributed, and to some of the guiding principles behind the project.

I'll start at the very basics, where we define "Cloud Native": Cloud native is the software approach of building, deploying, and managing modern applications in cloud computing environments.

I'll get a little hand-wavy here, as our desktops/laptops aren't typically defined as a "cloud" (read: a grouping of machines, typically behind an API, that someone else manages but makes available to users or customers). However, we can look at the desktop as a target platform for deployment, and how universal-blue gets there is the really interesting part. That "made by cloud native nerds" line is a very compact way of describing how the project is built, tested, and rolled out.

Universal-blue images are all built in CI. In this case it's a combination of base-layer components - some projects included in the product operating system are built in COPR first, and those COPR-built artifacts are then pulled in by a Containerfile, along with all the goodness contained in Silverblue (Fedora RPM artifacts).

That Containerfile is built, tested, and signed in GitHub Actions, and a manifest is then updated (somewhere - I don't actually know where ublue looks for these manifests to identify that it has an update; it might just be the GHCR registry, but don't hold me to that).

Now this probably all sounds like something you see in your day-to-day if you work in infrastructure, or in a company producing software/services for the web. But what's really unique from a consumed-operating-system perspective is that those builds and tests effectively gatekeep the "blessed" configuration for universal-blue. Classically you have kernel modules that get built on your machine using a technique known as DKMS (Dynamic Kernel Module Support). With every kernel update you have to rebuild those modules as part of the update process, and if your particular version of a module hasn't been vetted against the kernel you just pulled, you can be left in a rather bad state - I've had this happen to me with the proprietary nvidia drivers, as an example.
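For the curious, the classic local-build workflow described above looks something like this (a sketch; it assumes DKMS is installed, and the tracked modules vary by machine):

```shell
# List the out-of-tree modules DKMS is tracking and their build status
dkms status

# After a kernel update, rebuild and install all registered modules
# for the running kernel; if a module fails to compile against the
# new kernel headers, this is exactly where things break locally
sudo dkms autoinstall -k "$(uname -r)"
```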

How ublue delivers these modules is part of the not-so-secret sauce that makes it wonderful. These modules are built in the cloud in that same release pipeline, and if they fail - they don't get released! You simply don't see an update for that day, and things hum along just fine. This breakage happening somewhere other than your computer is part of that reliability promise: you won't be dealing with kernel module breakage; the release maintainers will have to resolve the break (or work with upstream to find a solution) so your incoming stream of updates can be unblocked.

Finally - there are a lot of "patterns" - processes to achieve a desired outcome - that have been piloted in the Cloud Native world. Someone mentioned Cloud Native makes them think of CoreOS. I'm glad you brought this up: if you keep your older versions (by pinning - ublue keeps the last known-booted image by default, and you can change this to keep, say, 10 if you wanted), you can always roll back to the version that worked before you encountered a fault. This same pattern exists in the baseline Silverblue distribution.
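Concretely, pinning and rolling back on an ostree-based system are one-liners (a sketch; index 0 refers to the current deployment as shown by `rpm-ostree status`):

```shell
# Inspect the current and previous deployments
rpm-ostree status

# Pin deployment 0 so cleanup never garbage-collects it
sudo ostree admin pin 0

# Make the previous deployment the default for the next boot
sudo rpm-ostree rollback
```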

This is not an exhaustive analysis but I've already penned a good novel here. I hope this helps outline how universal-blue brings Cloud Native to the desktop. I encourage you to kick the tires and try it out, even if only in a virtual machine.


Having personally also fucked up a silverblue install (featuring a DKMS kernel module I built to support my DSLR camera hdmi capture card) with proprietary nvidia drivers - and then let it sit on that partition long enough that my fedora version was too out of date to pull updates for; and as someone who builds CI pipelines in $DAYJOB: thank you so very very much.


It’s better to just write something yourself than use a GPT, even if you’re not confident in your ability to write.


Oh, I have a rather hard time noticing AI comments when the language they're written in isn't my native one.

Could you tell me what's most suspicious about the text? Imo the structure is a bit too well rounded, and it kind of reads like a transcript of something someone said, not like a comment.

Doesn't look like GPT-4 to me; someone should make a "guess the LLM" game.


It's a sad world when a thoughtful, well-structured, obviously experience-based and informative comment is immediately assumed to be word guessing machine generated garbage.


This doesn't look like GPT. I have never seen GPT say "probably ghcr, but don't hold me to that".


So similar to NixOS with Impermanence? Does it have home read-only too?


home is located under /var/home and symlinked as "/home" in the root filesystem. So no, home is not read-only.
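You can verify this on any Silverblue-style install (a quick check, not specific to Bluefin):

```shell
# /home is a symlink into the writable /var area
ls -ld /home          # expect something like: /home -> var/home

# on ostree systems /usr is typically a read-only bind mount
findmnt -no OPTIONS /usr
```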


It's cloud native! What more do you need to know?! :)



