Does anyone know what the state of FreeBSD is these days regarding the possible support of CUDA devices? It always pained me that installing it as an OS on my old hardware meant losing access to that, and a large part of the usefulness of the GPU therein.
I have always used FreeBSD to build petabyte scale ZFS-NFS servers for HPC.
This latest round involved NFSv4.2, and the FreeBSD server had to be enrolled into Red Hat IdM for auth. It was dog slow, like I could not believe, and it kept spewing a slurry of errors. Without the luxury of time to solve all the issues, we ended up going with Red Hat EL 9 purely to avoid the slowness from the IdM integration.
I hope someone else has the time to figure out the issues with IPA integration in FreeBSD.
I know I sound entitled, expecting a pkg install ipa-client without doing any of the work. But my point in writing here is merely to highlight a problem, so that others who are in a much better position than I am to fix it can take a look.
I'm confused. Getting IPA integration looks relatively straightforward. You need to make sure that the FreeBSD packages have certain integrations enabled. If you aren't happy with the default package settings, you can raise that with the maintainer.
It's easy to change the package settings anyway. You can do this by building them yourself with something like portmaster for a one-off server, or by running your own package build server with poudriere if it's for a cluster of servers.
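A minimal sketch of that poudriere workflow for the cluster case. The jail name, ports tree name, and port origin below are illustrative, and these commands only run on a FreeBSD box:

```sh
# One-time setup: a build jail and a ports tree (names are examples)
poudriere jail -c -j builder14 -v 14.0-RELEASE
poudriere ports -c -p default

# Interactively pick the port options you need (e.g. for an IPA/sssd stack)
poudriere options -p default security/sssd

# Build packages with those options; pkg(8) on every cluster member can
# then be pointed at the resulting repository
poudriere bulk -j builder14 -p default security/sssd
```

The point is that the option choices live in the build server's config, so every server in the cluster installs identical, custom-built packages.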
Sorry for the not-so-useful rant. Yes, we are integrating with IPA just for uid/gid mappings.
From the little time we had to debug before giving up: FreeBSD was fast and working as usual before the IPA client enrolment was done.
We tried both NFSv3 (with a separate lockd) and NFSv4.2, and both performed at only about 20% of the throughput we currently get out of ZFS on Linux, with the same ZFS version (ZFS 2.1) on both sides.
It felt like we were in uncharted waters with the IPA client and gave up instead of fixing the problems.
Unfortunately, it is going to stay mystically weird; we moved to Linux in the interest of time. Yes, directory integration killed performance to unbearable levels, but the fact that it felt put together through uncharted channels in the first place made us decide not to waste any more time on it.
It's basically still the only game in town for a high performance shared posix filesystem with multiple writers and builtin support in basically all operating systems.
As an application developer, I probably wouldn't choose to design a system that needed it.
The NFS 4.2 spec was released in late 2016. What's your newfangled alternative? Some cloud service (someone else's computer) that runs a bunch of actual tech (like NFS) behind the scenes?
Beneath the layers of containers, REST APIs, JS frameworks and web interfaces, there's someone making use of the serious stuff.
NFS scales and operates incredibly well; however, it has constraints.
Like any tool, it has its appropriate uses.
Why would you exclude a useful tool from your arsenal because of perceived downsides that are easy to hide from the layer above with appropriate system design?
In fairness, I'm not aware of any preceding container/jail/whatever system that handled the packaging and distribution of software like docker did, and I suspect to most users that's more important than the containment features. Or am I forgetting something else that did that, too?
You actually don't need all that packaging stuff for ease of use. Compile your application with the required libraries into a subfolder, define the jail's root as that subfolder, and fire up the app.
It's similar with Linux's cgroups and namespaces. You fire up the binary inside its own namespaces, and that application thinks it's the only thing on the machine (and may feel a little lonely), with its own IP and isolated network stack and such.
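A jail along those lines can be described in a few lines of jail.conf. Everything here (names, paths, the address) is made up for illustration:

```
# /etc/jail.conf -- illustrative fragment, not a drop-in config
myapp {
    path = "/jails/myapp";           # the subfolder with the app and its libraries
    host.hostname = "myapp.example";
    ip4.addr = "192.0.2.10";         # the jail's own IP
    exec.start = "/bin/myapp";       # fire the app
    mount.devfs;
}
```

Start it with `jail -c myapp` (or `service jail start myapp` once enabled in rc.conf).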
> You actually don't need all that packaging stuff for ease of use. Compile your application with the required libraries into a subfolder, define the jail's root as that subfolder, and fire up the app.
I can download and run anything in any accessible OCI registry with a single `docker run` command. The only thing I've seen that makes jails even remotely close to being as easy and quick to use... is using runj to recreate the same experience.
> It's similar with Linux's cgroups and namespaces. You fire up the binary inside its own namespaces, and that application thinks it's the only thing on the machine (and may feel a little lonely), with its own IP and isolated network stack and such.
Yeah, I know how the isolation features work; my point is that they're not nearly as important. For the vast majority of Docker use, you could replace the actual container with chroot and not really notice any difference, and the chroot is only there to save the trouble of modifying the application to handle host paths, not as a security barrier. Docker's killer feature isn't containers, it's images that are easy to move around and run, and to a slightly lesser degree Dockerfiles, which provided a standardized format to describe how to build those images.
I love FreeBSD, but I think this kind of illustrates how the community has really dropped the ball on containers: nobody cares that jails are technically superior and far more elegant under the hood, or that they've been around longer than on Linux, or whether their security posture is better. Docker was and is overwhelmingly better UX; all the under-the-hood stuff is effectively irrelevant to users.
Well, not all shops just blindly install Docker, pull a docker compose stack and call it a day.
Many of us out there compile our apps, stash them into a cgroup or a jail, and fire them up. Just because mainstream developers use "that one thing" doesn't mean the whole world has abandoned other ways of doing things.
I personally don't pull any big software package from an OCI registry, because I don't trust the installation and don't like the reduced configurability of the packages involved. If I'm lazy that day, I pull Debian minimal, build the thing I need, push it to our local registry, and call it a day.
Otherwise I install the thing properly, run it under cgroups, or under its own VM if the workload warrants it, and get a mug of coffee.
> Nobody cares that jails are technically superior and far more elegant under the hood...
> Many of us out there compile their apps, stash into a cgroup or a jail and fire it.
> If I'm lazy for that day, I pull Debian minimal, build the thing I need, push to our local registry and call it a day.
I'm 100% sympathetic to not using public images, but in the first case it really sounds like you've re-invented Docker with extra steps. And in the second case, like you are using Docker, just without the public images?
The first case is not re-inventing Docker; it's just not changing our ways. We continue doing what we've always done, but use modern facilities to isolate or resource-control the workload we're running.
In the second case, the only thing I pull is Debian slim from the public repo. Since I've been using Debian for a very long time, I can actually verify it down to the last bit (package/binary checksums, etc.). Then I build the rest from source plus the Debian repositories, inside a Dockerfile.
I push the container image to our local registry, and the Dockerfile to our local Git server.
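That workflow is easy to sketch as a Dockerfile; the package names, registry host, and tag below are placeholders:

```
# Dockerfile -- sketch of the "start from Debian slim, build from source" flow
FROM debian:bookworm-slim

RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential ca-certificates \
 && rm -rf /var/lib/apt/lists/*

COPY . /src
RUN make -C /src && make -C /src install

# On the host (registry.internal is a placeholder for the local registry):
#   docker build -t registry.internal:5000/myapp:1.0 .
#   docker push registry.internal:5000/myapp:1.0
```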
Because the previous systems were about application security, not about shipping the developer's machine into production, which is what most Docker containers are about.
Personally I find the FreeBSD package approach more elegant. We have the pkg package manager as standard on the system; if you want to install into a jail, you simply add the -j flag.
There has not been a lot of documentation on how to do service jails, maybe because a full jail (with a full, reasonably sized OS) is so trivial to set up.
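For reference, the -j workflow is a one-liner per operation; "web1" here is an example jail name, and these commands are FreeBSD-only:

```sh
# Install, upgrade, and inspect packages inside the running jail "web1"
pkg -j web1 install -y nginx
pkg -j web1 upgrade -y
pkg -j web1 info
```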
But there is no denying that Docker has had great marketing and a good use case.
I do however think the real reason is this: no matter how much of a FreeBSD jails or Solaris Zones fanboy you are, you know that Linux is the elephant in the room. The brand name alone, without even arguing distributions.
Linux already had huge market share and mindshare compared to FreeBSD or Solaris, and when Linux got cgroups you had (within reason) feature parity. For anyone who is not a daily distro switcher, changing your base OS is no minor feat. So even if cgroups were not perfect, the incentive to switch was not huge.
But it does make me sad that Docker has become so prevalent that a Docker image is the only way some projects make releases, all too commonly without documenting the build steps. Don't be so hard on your POSIX friends, please!
You seem to be in the "read the code" camp. I am not.
Rather than using plain language to reason about dependencies, limitations, workarounds, and more or less informed choices, you need to infer all of this from the Dockerfile. Pray that they have left even a single little comment about non-obvious issues.
It's akin to saying everything is documented in the Makefile. Why not just take a quick glance at main() to see if we parse any args?
You can give people step-by-step instructions on where to go, but you empower them with a map.
Documentation is hard. Quality documentation is even harder. A Dockerfile is a very poor substitute for me.
I do not mind projects that prefer Docker, and they get all my love and appreciation if they document the manual build steps. I am totally fine with them telling me I probably should not do that. But I am seeing more projects skip this and instead spend effort and time on using and debugging with Docker.
But I am old and my beard is getting grey. I have learned to fear the programmer who tells me to read the code to understand the system. I have been told that it is concise and good for velocity. Not everything old is good, and we should move on, but I for one miss the days when documentation was considered a priority.
From my understanding, Linux's cgroups and namespaces proved themselves a viable alternative to jails because, in practice, they provided "Linux inside Linux". Also, if my memory serves right, jails had some performance impact on the workload running inside the jail.
Docker, as you know better than me, is basically just a wrapper over those kernel facilities, plus a container image format.
In the end, cgroups and namespaces provided what jails provided, with the same or better performance, and someone leveraged those subsystems with a tool before everyone else did.
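The namespace half of that is easy to poke at directly on a Linux box (assuming unprivileged user namespaces are enabled; many hardened or containerized environments disable them):

```sh
# Run a shell in fresh user + network namespaces: inside, the process
# believes it is root on a machine whose only interface is a (down) loopback
unshare --user --map-root-user --net sh -c 'id -u; ip link show'
```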
> No performance impact running inside the jail as it is basically just a jail id managed by the kernel.
Thanks for letting me know.
> No reason to spread FUD.
I was just relaying what our system and BSD guy said back in the day. I have nothing against BSD. That quote was from before the hardware virtualization days, BTW.
Virtual machines (VMs) are 1960s technology; they started out on mainframes.
x86 virtualization came along in the 1990s. VMware Workstation, which everybody knows today, was released around 1999, and other players have been around too.
That is the same year jails were introduced, with FreeBSD 4.
The funny thing is that jails are often referenced as "virtualization without the overhead".
Jails are really fast since everything shares the same kernel. Hardware VMs have an additional security advantage, as a guest kernel vulnerability will (normally) not affect other guests, but that comes at a cost.
Thanks for the trip down memory lane. While I never used any mainframes, I know the partitioning on those beasts existed back when I was running VMware on my desktop around 2000.
However, I noted that the quote predates hardware virtualization, namely VT-x/AMD-V, VT-d, etc., so hardware partitioning of processors and I/O devices was not possible on that era's x86 hardware.
I have no qualms against jails and/or BSD. I'm not a flamewar person. As I said, this was a quote from our System/BSD guy, nothing else.
Maybe the hardware was saturated, maybe that kernel had a regression, maybe without hardware virtualization/offloading there was some kind of overhead in those days; I don't know.
All in all, they are nice technologies, and I like/support them all.
As I see it, there are roughly four main related features that Docker brings, though it is far from the only solution to bring these features:
1. The ability to create services in a clean-slate environment. Unix processes tend to inherit by default most environment attributes--working directory, open file descriptors, etc.--and having a way to create a process (or group of processes) without inheriting all of that stuff is valuable.
2. Virtualization/isolation in a lighter weight solution than a full virtual machine.
3. Checkpointing that allows you to consistently restart an image with a given configuration, even if you go on to horribly destroy it in tests or whatnot.
4. A language for designing how to construct these images (the Dockerfile).
I suspect the popularity of Docker mostly comes from the last of these, and I'd definitely like to see something like it built on jails and ZFS features instead.
My primary use of Docker is as an isolated-installation package manager that’s portable cross-distro and cross-distro-version with exceptionally well-documented and easy-to-test config locations and data storage paths (so much easier to be sure you’ve backed up everything you need to).
Docker minus the giant well-populated well-maintained image library would be almost useless to me.
[edit] one daemon improved immeasurably by Docker is Samba, of all things. The invocations are a little arcane, but once you’ve got them figured out it’s one extra option line to add a user, one line to add a share, repeat as needed. So very much better than relying on distro-magic to make it work, or, god forbid, trying to configure it manually with some config file that ends up inexplicably having no portability, or is silently ignored despite the claims of the docs, or doesn’t work at all because there’s some option commented out by default that definitely shouldn’t be. Docker forced them to finally make the “I just want to share a damn directory, to these users, either read only or read-write” use case, which is probably the vast majority of use of Samba, straightforward, concise, and reliable to configure.
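For the curious, the invocation style being described looks roughly like this. The image name and flag syntax follow the dperson/samba image's conventions as I remember them; verify against whichever containerized Samba you actually use:

```sh
# One -u per user, one -s per share ("name;path;browse;read-only;guest;users")
docker run -d --name samba \
  -p 139:139 -p 445:445 \
  -v /srv/media:/mnt/media \
  dperson/samba \
  -u "alice;secret" \
  -s "media;/mnt/media;yes;no;no;alice"
```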
They have. It's called LXC. There are some differences compared to "modern" jails (which, with virtual network stacks and CPU sets, are closer to Solaris Zones in terms of functionality), but it's basically the same core functionality.
I needed to look at your comment for 10 minutes before I got what you meant. So you want containers that run on any POSIX operating system?
You can do this with ptrace(), which is available on most Unix-likes (though not actually standardized by POSIX), but it's going to be slow for I/O-heavy applications. Unfortunately, portable interfaces don't provide anything better (seccomp would already be a step up, but it's Linux-specific). See also proot[1], gVisor[2] and User Mode Linux[3], which all make use of ptrace().
Sorry, here's a slightly more serious attempt at a decent comment: my hunch is that most of what people look for in Docker could be accomplished with simple filesystem isolation and a few environment variables. Something that could be hacked together with chroot and shell scripts.
This much simpler solution could be replicated on basically any decent OS, and wouldn't require shipping an entire Linux kernel or dealing with slow I/O and the networking hoops you have to jump through on anything that isn't Linux.
The same directory of ready-made solutions that Docker enjoys today could be rebuilt upon this simpler stack.
When you realize that linking against the right libraries gets you most of the way there already, you begin to think there's a gem of a solution hidden in plain sight among all this mess.
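As a concrete toy version of the chroot-and-shell-scripts idea, this script stages one binary plus its shared libraries into a directory that could serve as a chroot root. Paths are illustrative, and the final chroot step (which needs root) is left commented out:

```shell
#!/bin/sh
# Stage /bin/ls and its shared libraries into ./minijail-root
set -eu

ROOT=./minijail-root
BIN=/bin/ls

mkdir -p "$ROOT/bin"
cp "$BIN" "$ROOT/bin/"

# ldd prints the libraries the binary links against; keep every field
# that looks like an absolute path and copy it into the new root
for lib in $(ldd "$BIN" | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i }'); do
    mkdir -p "$ROOT$(dirname "$lib")"
    cp "$lib" "$ROOT$lib"
done

# The actual isolation step requires root:
# chroot "$ROOT" /bin/ls /
```

That's the whole trick: everything the layer above sees is a filesystem tree, and you can build that tree with cp.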
If you mean just Linux and FreeBSD: even between those two, the API is the same in name only. It's wildly different and really shouldn't share the same name.
Can I just install it on my laptop, and expect that I'll have graphics, hardware acceleration, wifi, suspend/hibernate out of the box? (There was the live hurd distribution ArchHURD that offered exactly this, I could just put the CD in, and then I had a familiar system with gnome, firefox and such.)
I know there were GhostBSD and DesktopBSD for this in the past, but I am not sure what is recommended today.
> NomadBSD is a persistent live system for USB flash drives, based on FreeBSD®. Together with automatic hardware detection and setup, it is configured to be used as a desktop system that works out of the box, but can also be used for data recovery, for educational purposes, or to test FreeBSD®'s hardware compatibility.
Though I am not sure when/if it will be rebased to this new release of FreeBSD.
If your laptop is 8-10+ years old, surprisingly yes (not sure about the suspend/hibernate part; I doubt it). I tried to recycle two of a friend's older laptops and almost everything worked out of the box. I even installed the "Chicago" theme (looks like Win95) [1] for ease of use.
Impossible to tell. I've tried FreeBSD, OpenBSD and NetBSD on my older ThinkPad, and surprisingly OpenBSD turned out to have the best out-of-the-box compatibility. It seemed, however, to be the worst performance-wise, with the fans going crazy whenever a browser was opened. I ended up with Linux.
> I've been "very lucky" with five computers in a row now, for almost a decade. Linux is actually a good, reliable desktop OS these days.
This. I've been using Linux as a primary OS across all my devices for about twenty-five years. This comprises half a dozen Raspberry Pi 4s, two laptops and a monster workstation (AMD|Intel x86_64; ASUS, Dell, and ASUS, respectively.)
I used to have enormous Linux compatibility problems with laptops as a matter of course, and I earned my stripes doing custom kernel compiles with patches, etc. The last time I had to do that was, I think, around 2016.
In particular, the past half-decade's worth of installs have all Just Worked -- at least, after I turn off Secure Boot.
My feeling is that after you’ve compiled your kernel a few times, everything just works because you learned to debug it the hard way and it seems easy now.
What I understand by "just works" is having the webcam, bluetooth, audio, graphics acceleration, hibernation, wifi, media keys, etc., all working when you first boot after the install.
By your definition, Linux just worked literally the first time I tried it (Linux Mint in 2015) and each subsequent install as well. The distros I have the most experience using are Linux Mint, LMDE, Qubes, and NixOS. They've all cooperated flawlessly with my hardware. I'd recommend Mint for literally anybody, especially since you can try it on a live USB and verify your hardware before installing.
(There is one exception: my client wanted me to use this out-of-date screen casting device which only had support for Windows and Mac. I tried to get it to work for all of ten minutes, and did not succeed.)
To be clear, not every question requires a question mark :) Punctuation is not a DTD.
And to be clear about the earlier thing: even if I didn't know how to do anything with Linux, it is now possible to install and have everything just work. Including and especially all the parts listed.
When I say "Just Work", I'm speaking as a model user, not a model hacker (with apologies to Umberto Eco's "model reader/model writer" distinction)
You should try wifibox! It works surprisingly well and is very fast.
Basically, it's a tiny Linux running inside bhyve, doing all the wifi stuff.
But 14.0 has many updates to the WiFi stack, so I'm not sure it's even needed anymore.
I installed 14 on an HP ProLiant MicroServer: 4 GB RAM, 4x SATA, and a dual-core 1.5 GHz AMD CPU. It works very well for the backup and light file-serving duties I have planned for it.
I struggle to find a use for BSD outside of my opnsense VM. *BSD makes for a great gateway device (OpenBSD w/ HAProxy & CARP) and I've used it for that, but even for a webserver, I can't justify it over Linux if I want to run .NET-anything (only unofficial ports exist).
What are the most popular markets for the BSDs today? Anything outside of research/edu?
> What are the most popular markets for the BSDs today? Anything outside of research/edu?
There recently was a vendor summit (which happens regularly: every year (?) in November):
> Join us for the November 2023 FreeBSD Vendor Summit. The event will take place November 2-3, 2023. The Summit provides commercial FreeBSD users with the unique opportunity to meet face-to-face with developers and contributors to get features requested, problems solved, and needs met. It also opens up discussion on improving and enhancing the operating system. Registration is now open. The program includes talks from NetApp, Netflix, ARM and more! Register today!
> It’s a solid starting point if you don’t want to have to worry about GPL in a product you don’t intend to open source.
Specifically if you don't want to worry about 'contamination' between your secret sauce and the open source bits.
It's generally a good idea to upstream as much as you can so you're not carrying custom patches internally when you don't have to. See "The Value of Upstream First":
The Nintendo Switch and the PS4 both use modified versions of FreeBSD, though "modified" could mean "changed so much as to be nearly indistinguishable".
WhatsApp is no longer running FreeBSD. Prior to the acquisition, everything was bare-metal managed hosting at SoftLayer, and we ran all FreeBSD except one Linux host for reasons I can't remember (maybe an experiment for calling?). After the acquisition, there was a migration to Facebook hosting that included moving to Facebook's flavor of containerized Linux.
Not because Linux is better, but to fit better within Facebook's operations[1], and Erlang runs on many platforms, so it was a much smaller effort to get our code running on Linux than to get FB's server management software to work for FreeBSD. Server hardware was quite a bit different, so we had no apples to apples comparisons of which OS was more efficient or whatever else. During initial migration, BEAM support of kqueue was much better than epoll, but that got worked out, and I feel like Linux's memory usage reporting is worse than FreeBSD's, but it's a weakness of both. I was never comfortable in the FB server environment, so I left in late 2019, when the FreeBSD server count was reduced to a small enough number that I ran out of things to do.
[1] Much of the server team had experience with acquisitions at Yahoo! and the difficulties of making an operations team focused on one OS support acquired teams on another OS. With the many other technical and policy differences between WA and FB, eliminating the OS difference was an easy choice to reduce friction. Our host count, which was large at SoftLayer, was small at Facebook, even after factoring in increased numbers because the servers were smaller and the operations less stable.