Back when Apple acquired NeXT, Linux was still undergoing heavy development and wasn't well established. Being a monolithic kernel, Linux didn't offer the levels of compartmentalization that Mach did.
As things now stand, FreeBSD represents many of the benefits of Darwin and the open source nature of Linux. If you seek a more secure environment without Apple's increasing levels of lock-in, then FreeBSD (and the other BSDs) merit consideration for deployment.
Isn’t FreeBSD a monolithic kernel? I don’t believe it provides the compartmentalisation that you talk about.
As I understand it, Mach was based on BSD and was effectively a hybrid, with much of the existing BSD kernel running as a single big task under the microkernel. Darwin has since updated the BSD kernel under the microkernel with current developments from FreeBSD.
> Throughout this time the promise of a "true" microkernel had not yet been delivered. These early Mach versions included the majority of 4.3BSD in the kernel, a system known as a POE Server, resulting in a kernel that was actually larger than the UNIX it was based on.
> XNU was originally developed by NeXT for the NeXTSTEP operating system. It was a hybrid kernel derived from version 2.5 of the Mach kernel developed at Carnegie Mellon University, which incorporated the bulk of the 4.3BSD kernel modified to run atop Mach primitives,
Is the driver support fit for using FreeBSD as a desktop OS these days?
Last I tried (~10 years ago) I gave up and assumed FreeBSD was a server OS, because I couldn't for the life of me get Nvidia drivers working at native resolution. I don't recall specifics, but Bluetooth was problematic too.
> As things now stand, FreeBSD represents many of the benefits of Darwin and the open source nature of Linux.
No. FreeBSD committed the original sin of UNIX by deliberately dropping support for all non-Intel architectures, intending to focus on optimising FreeBSD for the Intel ISA and platforms. UNIX portability and support for a diverse range of CPUs and hardware platforms are ingrained in the DNA of UNIX, however.
I would argue that FreeBSD has paid the price for this decision – FreeBSD has faded into irrelevance today (despite having introduced some of the most outstanding and brilliant innovations in UNIX kernel design) – because the FreeBSD core team bet heavily on Intel remaining the only hardware platform in existence, and they missed the turn (ARM, RISC-V, and marginally MIPS in embedded). Linux stepped in and filled the niche very quickly, and it now runs everywhere. FreeBSD is faster, but Linux is better.
And it does not matter that Netflix still runs FreeBSD on servers serving up content at the theoretical speed of light – it is sad living proof that FreeBSD has become a niche within a niche.
P.S. I would also argue that the BSD core teams (Free/Net/Open) were a major factor in the downfall of all BSDs, due to their insular nature and, especially in the early days, a near-hostile attitude towards outsiders. «Customers» voted with their feet – and chose Linux.
Having used continuously both FreeBSD and Linux, wherever they are best suited, since around 1995 until today, I disagree.
In my opinion, the single factor that contributed most to Linux's greater success over FreeBSD was the transition to multithreaded and multicore CPUs even in the cheapest computers, which started in 2003 with the SMT Intel Pentium 4, followed in 2005 by dual-core AMD CPUs.
Around 2003, FreeBSD 4.x was the most performant and the most reliable operating system for single-core single-thread CPUs, for networking or storage applications, well above Linux or Microsoft Windows (source: at that time I was designing networking equipment and we had big server farms on which the equipment was tested, under all operating systems).
However, it could not use CPUs with multiple cores or threads, so on such CPUs it fell behind Linux and Windows. The support introduced in FreeBSD 5.x was only partial, and many years passed before FreeBSD was again competitive in performance on up-to-date CPUs. Other BSD variants were even slower in their conversion to multithreaded support. During those years the fraction of *BSD users diminished a lot.
The second most important factor was the much smaller set of device drivers for various add-on interface cards compared to Linux. Only a few hardware vendors provided FreeBSD device drivers for their products, mostly just Intel and NVIDIA, and for the products of other vendors there were few FreeBSD users able to reverse engineer them and write device drivers, in comparison with Linux.
The support for non-x86 ISAs was also worse than in Linux, but this was just one detail within the generally narrower hardware support compared to Linux.
All this was driven by positive feedback: FreeBSD started with fewer users, because by the time the lawsuits were settled favorably for FreeBSD, most potential users had already started to use Linux. The smaller number of users was then less capable of porting the system to new hardware devices and newer architectures, which led to even lower adoption.
Nevertheless, there have always been various details in the *BSD systems that were better than in Linux. A few of them have been adopted in Linux, like the software package systems that are now ubiquitous in Linux distributions. In many cases, though, Linux users invented alternative solutions, which were often inferior, instead of studying the *BSD systems to see whether an already existing solution could be adopted rather than inventing yet another alternative.
Whilst I do agree with most of your insights and the narrative of historic events, I also believe that the BSD core teams were a major contributing factor to the demise of the BSDs (however unpopular such an opinion might be).
The first mistake was that all BSD core teams flatly refused to provide native support for the JVM back in its heyday. They eventually partially conceded and made it work via Linux emulation; however, it was riddled with bugs, crashes and other issues for years before it could reliably run Java server apps. Yet users clamoured, vociferously, to run Java applications right now.
The second grave mistake was flatly refusing to support containerisation (Docker) due to it not being kosher. Linux-based containerisation is what underpins all cloud computing today. Again, FreeBSD arrived too late, and with too little.
P.S. I still hold the view that FreeBSD made matters even worse by dropping support for non-Intel platforms early on – at a stage when its bleak future was already all but certain. New CPU architectures are enjoying a renaissance, whilst FreeBSD nervously sucks its thumb by the roadside of history.
Docker was created in 2013, long after BSDs had lost all their popularity. And, fwiw, FreeBSD pioneered containers long before Linux: https://en.m.wikipedia.org/wiki/FreeBSD_jail
FreeBSD jails are an advanced chroot++. Although they set a precedent as a predecessor of true containers, they have:
1. Minimal kernel isolation.
2. Optional network stack isolation via VNET (but not used by default).
3. Rudimentary resource controls with no default enforcement (important!).
4. Simple capability security model.
Most importantly, since FreeBSD was a very popular choice for hosting providers at the time, jails were originally invented to fully support partitioned-off web hosting, rather than to run self-sufficient, fully contained (containerised) applications as first-class citizens.
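For reference, the classic partitioned-off-hosting use of a jail looks roughly like this. This is a hedged sketch of an /etc/jail.conf entry; the jail name, path, and address are made up, and a real setup needs a populated jail root:

```
# Illustrative /etc/jail.conf entry for a classic shared-stack jail
www {
    path = "/jails/www";               # the jail's chroot-style root
    host.hostname = "www.example.org";
    ip4.addr = "192.0.2.10";           # an alias on a host interface; no own stack
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

Such a jail is then started with `jail -c www` and shares the host's network stack, which fits the web-hosting use case described above.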
The claim to have invented true containers belongs to Solaris 10 (not Linux) and its zones. Solaris 10 was released in January 2005.
I believe you have a wrong view of how secure FreeBSD Jails are - they are definitely a lot more secure than rootless Podman, for a start.
Isolation: with rootless Podman it seems to be on the same level as Jails - but only if you run Podman with SELinux or AppArmor enabled. Without SELinux/AppArmor, Jails offer better isolation. When you run Podman with SELinux/AppArmor and then add the MAC Framework on the FreeBSD side (like mac_sebsd/mac_jail/mac_bsdextended/mac_portacl), the Jails are more isolated again.
Kernel syscall surface: even rootless Podman has 'full' syscall access unless blocked by seccomp or SELinux. Jails have a restricted set of syscalls without any additional tools - and that can be narrowed further with the MAC Framework on FreeBSD.
Firewall: you cannot run a firewall inside a rootless Podman container. You can run an entire network stack and any firewall, like PF or IPFW, independently from the host inside a VNET Jail - which means more security.
TL;DR: FreeBSD Jails are generally more secure out-of-the-box compared to Podman containers and even more secure if you take the time to add additional layers of security.
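The VNET point can be illustrated with a sketch of a jail.conf entry. This is an assumed example, not a complete recipe: the jail name, interface name, and devfs ruleset number are illustrative, and the epair(4) interface has to be created and bridged on the host separately:

```
# Illustrative VNET jail: the jail gets its own virtualised network stack,
# so PF or IPFW can run inside it with rules independent of the host's.
db {
    path = "/jails/db";
    host.hostname = "db.example.org";
    vnet;                              # give the jail its own network stack
    vnet.interface = "epair0b";        # jail's end of an epair(4) pair
    exec.start = "/bin/sh /etc/rc";    # rc inside the jail can start pf
    devfs_ruleset = 5;                 # expose /dev/pf etc. inside the jail
    mount.devfs;
}
```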
> How battle-tested are FreeBSD Jails?
Jails have been in production since 1999/2000, when they were introduced - so 25 years strong - very well battle-tested.
Docker has been with us since 2013, so about 10 years less - but we should really compare against Podman ...
Rootless support in Podman first appeared in late 2019 (1.6), so it has had less than six years of testing.
That makes Jails the most battle-tested of them all.
Not quite an accurate history of SMP. FreeBSD had SMP well before 5.0, but not "fine-grained" SMP, which is what the 5.0 release was all about. But the conversion led to many regressions.
I don't know if this had much effect on anything, but another thing that hindered FreeBSD adoption for some users was that Linux worked better as a dual-boot system with DOS/Windows on a typical home PC.
There were two problems.
The first was that FreeBSD really wanted to own the whole disk. If you wanted to dual boot with DOS/Windows you were supposed to put FreeBSD on a separate disk. Linux was OK with just having a partition on the same disk you had DOS/Windows on. For those of us whose PCs only had one hard disk, buying a copy of Partition Magic was cheaper than buying a second hard disk.
The reason for this was that the FreeBSD developers felt that multiple operating systems on the same disk were not safe, due to the lack of standards for how to emulate a cylinder/head/sector (CHS) addressing scheme on disks that used logical block addressing (LBA). They were technically correct, but greatly overestimated the practical risks.
In the early days PC hard disks used CHS addressing, and the system software such as the PC BIOS worked in those terms. Software using the BIOS such as DOS applications and DOS itself worked with CHS addresses and the number of cylinders, heads, and sectors per track (called the "drive geometry") they saw matched the actual physical geometry of the drive.
The INT 13h BIOS interface for low level disk access allowed for a maximum of 1024 cylinders, 256 heads, and 63 sectors per track (giving a maximum possible drive size of 8 GB if the sectors were 512 bytes).
At some point as disks got bigger drives with more than 63 sectors per track became available. If you had a drive with for example 400 cylinders, 16 heads, and 256 sectors per track you would only be able to access about 1/4 of the drive using CHS addressing that uses the actual drive geometry.
It wasn't really practical to change the INT 13h interface to give the sectors per track more bits, and so we entered the era of made up drive geometries. The BIOS would see that the disk geometry is 400/16/256 and make up a geometry with the same capacity that fit within the limits, such as 400/256/16.
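The arithmetic above is easy to check. A small Python sketch, using the illustrative geometry values from the text:

```python
# Classic CHS -> LBA translation:
#   lba = (c * heads + h) * sectors_per_track + (s - 1)
# (sector numbers start at 1; cylinders and heads start at 0).

def chs_to_lba(c, h, s, heads, spt):
    return (c * heads + h) * spt + (s - 1)

# The INT 13h ceiling: 1024 cylinders x 256 heads x 63 sectors x 512 bytes.
int13h_max_bytes = 1024 * 256 * 63 * 512
print(int13h_max_bytes)  # 8455716864 bytes, i.e. the ~8 GB limit

# A made-up geometry preserves capacity: 400/16/256 remapped to 400/256/16.
assert 400 * 16 * 256 == 400 * 256 * 16

# The first sector of a disk is CHS (0, 0, 1), which maps to LBA 0:
print(chs_to_lba(0, 0, 1, heads=16, spt=63))  # 0
```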
Another place with made up geometry was SCSI disks. SCSI used LBA addressing. If you had a SCSI disk on your PC whatever implemented INT 13h handling for that (typically the BIOS ROM on your SCSI host adaptor) would make up a geometry. Different host adaptor makers might use different algorithms for making up that geometry. Non-SCSI disk interfaces for PCs also moved to LBA addressing, and so the need to make up a geometry for INT 13h arose with those too, and different disk controller vendors might use a different made up geometry.
So suppose you had a DOS/Windows PC, you repartitioned your one disk to make room for FreeBSD, and went to install FreeBSD. FreeBSD does not use the INT 13h BIOS interface. It uses its own drivers to talk to the low level disk controller hardware and those drivers use LBA addressing.
It can read the partition map and find the entry for the partition you want to install on. But the entries in the partition map use CHS addressing. FreeBSD would need to translate the CHS addresses from the partition map into LBA addresses, and to do that it would need to know the disk geometry that whatever created the partition map was using. If it didn't get that right and assumed a made up geometry that didn't match the partitioner's made up geometry the actual space for DOS/Windows and the actual space for FreeBSD could end up overlapping.
In practice you can almost always figure out from looking at the partition map what geometry the partitioner used with enough accuracy to avoid stomping on someone else's partition. Partitions started at track boundaries, and typically the next partition started as close as possible to the end of the previous partition and that sufficiently narrows down where the partition is supposed to be in LBA address space.
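That inference can be sketched in Python. This is a toy solver under the assumptions described above (partitions start on track boundaries); the partition-table values are illustrative, and real-world code would need to handle many more layouts:

```python
# Infer the partitioner's fictional geometry from partition-table entries.
# Each entry carries both a CHS start and an LBA start; on a track boundary
# (s == 1): (c * heads + h) * spt == lba.
# A first partition at CHS (0, 1, 1) with LBA 63 yields spt directly;
# a later partition on a cylinder boundary (h == 0) then pins down 'heads'.

def infer_geometry(entries):
    """entries: list of (c, h, s, lba) partition start addresses."""
    spt = heads = None
    for c, h, s, lba in entries:
        if c == 0 and h > 0:           # track boundary within cylinder 0
            spt = lba // h
    for c, h, s, lba in entries:
        if c > 0 and spt:              # cylinder-boundary start further out
            heads = (lba // spt - h) // c
    return heads, spt

# Classic DOS-era layout: first partition at CHS (0, 1, 1) = LBA 63,
# second partition starting at cylinder 500 = LBA 500 * 255 * 63.
entries = [(0, 1, 1, 63), (500, 0, 1, 500 * 255 * 63)]
print(infer_geometry(entries))  # (255, 63)
```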
That was the approach taken by most SCSI vendors and it worked fine. I think eventually FreeBSD did start doing this too but by then Linux had become dominant in the "Dual boot DOS/Windows and a Unix-like OS on my one disk PC" market.
The other problem was CD-ROM support. FreeBSD was slow to support IDE CD-ROM drives. Even people who had SCSI on their home PC and used SCSI hard disks were much more likely to have an IDE CD-ROM than a SCSI CD-ROM. SCSI CD-ROM drives were several times more expensive and it wasn't the interface that was the bottleneck so SCSI CD-ROM just didn't make much sense on a home PC.
For many, then, it came down to this: with Linux they didn't need a two-disk system and they could install from a convenient CD-ROM, but for FreeBSD they would need a dedicated disk and would have to deal with a stack of floppies.
Related fun fact from up to maybe a decade ago: if you had a disk labeled/partitioned in FreeBSD's 'dangerously dedicated' style and tried to image it, or to read the image with a forensic tool called EnCase (running under Windows of course, how else could it be?), the tool would crash that Windows with an irrecoverable blue screen :)
I am very skeptical that it's primarily caused by the focus on Intel CPUs. FreeBSD had already fallen into obscurity well before RISC-V. And even though they missed the ARM router/appliance boat, Linux had already overtaken FreeBSD when people were primarily using Linux for x86 servers and (hobbyist) desktops. The "Netcraft has confirmed: BSD is dying" Slashdot meme dates from the late '90s or early 2000s. Also, if this were the main reason, we would all be using OpenBSD or NetBSD.
IMO it's really a mixture of factors, some I can think of:
- The BSD projects were slowed down by the AT&T lawsuit in the early '90s.
- FreeBSD focused more on expert users, whereas Linux distributions focused on graphical installers and configuration tools early on. Some distributions had graphical installers by the end of the '90s. So Linux distributions could onboard people looking for a Windows alternative much more quickly.
- BSD forked very early on (FreeBSD, NetBSD, OpenBSD, BSDi). The cost of these forks is much higher than that of multiple Linux distributions, since each BSD maintains its own kernel and userland.
- The BSDs (except BSDi) were non-profits, whereas many early Linux distributions were by for-profit companies (Red Hat, SUSE, Caldera, TurboLinux). This gave Linux a larger development and marketing budget and it made it easier to start partnerships with IBM, SAP, etc.
- The BSD projects were organized as cathedrals and were more hierarchical, which made it harder for new contributors to step in.
- The BSD projects provided full systems, whereas Linux distributions would piece systems together. This made Linux development messier, but allowed quicker evolution and made it easier to adapt Linux for different applications.
- The GPL put a lot more pressure on hardware companies to contribute back to the Linux kernel.
Besides that there is probably also a fair amount of randomness involved.
The AT&T lawsuits are a moot point, as they were all settled in the early 1990s. They are the sole reason why FreeBSD and NetBSD even came into existence – by forking the 4.4BSD-Lite codebase after the disputed code had been eliminated or replaced with non-encumbered reimplementations. Otherwise, we would all be running on descendants of 4.4BSD-Lite today.
Linux has been running uninterruptedly on s/390 since October 1999 (31-bit support, Linux v2.2.13) and since January 2001 for 64-bit (Linux v2.4.0). Linux mainlined PPC64 support in August 2002 (Linux v2.4.19), and it has been running on ppc64 happily ever since, whereas FreeBSD dropped ppc64 support around 2008–2010. Both s/390 and ppc64 (as well as many others) are hardly hobbyist platforms, and both remain in active use today. Yes, IBM was behind each port, although the Linux community has been a net $0 beneficiary of the porting efforts.
I am also of the opinion that licensing is a red herring, as BSD/MIT licences are best suited for proprietary, closed-source development. However, the real issue with proprietary development is its siloed nature, and the fact that closed-source design and development very quickly start diverging from the mainline and become prohibitively expensive to maintain in-house long-term. So the big wigs quickly figured out that they could make a sacrifice and embrace the GPL to reduce ongoing costs. Now, with the *BSD core team-led development, new contributors (including commercial entities) would be promptly shown the door, whereas the Linux community would give them the warmest welcome. That was the second major reason for the downfall of all things BSD.
> The AT&T lawsuits are a moot point, as they were all settled in the early 1990s. They are the sole reason why FreeBSD and NetBSD even came into existence – by forking the 4.4BSD-Lite codebase after the disputed code had been eliminated or replaced with non-encumbered reimplementations. Otherwise, we would all be running on descendants of 4.4BSD-Lite today.
The lawsuit was settled in Feb 1994; FreeBSD was started in 1993. FreeBSD was started because development on 386BSD was too slow. It took FreeBSD until Nov 1994 to rebase on 4.4BSD-Lite (in FreeBSD 2.0.0).
At the time 386BSD and then FreeBSD were much more mature than Linux, but it took from 1992 until the end of 1994 for the legal clarity around 386BSD/FreeBSD to clear up. So Linux had about three years to try to catch up.
> FreeBSD has committed the original sin of UNIX by deliberately dropping support for all non-Intel architectures, intending to focus on optimising FreeBSD for the Intel ISA and platforms.
FreeBSD supports amd64 and aarch64 as Tier 1 platforms, and a number of others (RISC-V, PowerPC, ARMv7) as Tier 2.
FreeBSD started demoting non-Intel platforms around 2008-2010, with FreeBSD 11, released in 2016, supporting only x86 as Tier 1. The first non-Intel architecture was reinstated in April 2021, with the official release of FreeBSD 13 - over a decade of time irrevocably lost.
Plainly, FreeBSD has missed the boat – the first AWS Graviton CPU was released in 2018, and it ran Linux. Everything now runs Linux, but it could have been FreeBSD.
Not really everywhere, exactly because of the GPL: most embedded FOSS OSes are either Apache- or BSD-licensed.
It is not only Netflix; Sony is also quite fond of cherry-picking stuff from the BSDs for their Orbis OS.
Finally, I would assert that the Linux kernel as we know it today is only relevant because the people responsible for its creation still walk this planet; like every project, when the creators are no longer around it will be taken in directions that no longer match the original goals.