
Maybe I'm not understanding the insight here, but it sort of seems like having confluence defeats the purpose of logical semantics.

My specific concern is that by having logical semantics in a language you can represent non-deterministic ambiguous computations, but for this you need divergent paths which, if I understand correctly, the authors have removed from their language. So what's the point of doing this?


> My specific concern is that by having logical semantics in a language you can represent non-deterministic ambiguous computations

As I understand it the verse calculus can only represent non-deterministic unambiguous computations, and that follows from confluence. The point is that it's the non-determinism that's useful, not ambiguity. Am I understanding correctly?


The paper has this to say:

> Choice is a fundamental feature of all functional logic languages. In VC, choice is expressed in the syntax of the term (“laid out in space”) rather than, as is more typical, handled by non-deterministic rewrites and backtracking (“laid out in time”). This makes VC completely deterministic, unlike most functional logic languages which are non-deterministic by design (Section 6.1).

So the language is deterministic, which is a result of being confluent. And going to section 6.1 as suggested says this:

> In contrast, our rules never pick one side or the other of a choice. And yet, (3 + (20 | 30)) can still make progress by floating out the choice (rule choose in Fig. 3), thus (3 + 20) | (3 + 30). In effect, choices are laid out in space (in the syntax of the term), rather than being explored by non-deterministic selection. Rule choose is not a new idea.

So the syntax is "ambiguous" and is given context with "choose" to make it unambiguous.

To answer your question more plainly: it's the ambiguity that's important. Non-determinism usually follows as a natural consequence.

However, it may be a hindrance or it may be desired. Usually we're only interested in one single useful result. But if I do a search in a data structure for all occurrences of X, and there are 5 of them, then I may want a result of all 5 occurrences of X.
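
To make "laid out in space" concrete, here's a toy sketch (my own, in Python, not from the paper) that models a choice as the list of its alternatives, so the choose rule becomes a simple comprehension:

    # Toy model of the "choose" rule: a choice is laid out in space as a
    # list of alternatives, and applying an operation floats it outward.
    def choice(*alternatives):
        return list(alternatives)

    def add(x, y):
        # Treat a plain value as a one-alternative choice.
        xs = x if isinstance(x, list) else [x]
        ys = y if isinstance(y, list) else [y]
        # (3 + (20 | 30)) becomes (3 + 20) | (3 + 30).
        return [a + b for a in xs for b in ys]

    print(add(3, choice(20, 30)))  # [23, 33] -- all results, deterministically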


> So the language is deterministic, which is a result of being confluent. And going to section 6.1 as suggested says this:

Non-determinism in programming language theory does not mean _physical_ non-determinism.

> To answer your question more plainly: it's the ambiguity that's important. Non-determinism usually follows as a natural consequence.

Non-determinism in programming language theory does not require ambiguity. Non-determinism here means something more like this: the program's execution searches for matching solutions as if it could guess them non-deterministically, but the search process itself is deterministic (and it's almost invariably a depth-first search).
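
For instance, here's a minimal sketch (mine, not from any particular implementation) of "non-deterministic" choice realized as a perfectly deterministic depth-first search in Python:

    # Nested generators traverse the choice tree depth-first; the program
    # reads as if x and y were guessed non-deterministically.
    def amb(*alternatives):
        yield from alternatives  # explored left-to-right, in a fixed order

    def solutions():
        for x in amb(1, 2, 3):
            for y in amb(10, 20):
                if x + y > 21:   # the constraint to satisfy
                    yield (x, y)

    print(list(solutions()))  # [(2, 20), (3, 20)] -- same order every run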


I can't speak to the point of doing this, but (IIRC/IIUC) you're talking about paths and they're talking about the entire computation tree, i.e., a term in their calculus represents all solutions, and computing normal forms makes them easy to read off (?). Perhaps there's some meat in how they handle the equivalent of `bagof/3`/`setof/3`.


To put it in a single statement: Pascal is only suitable for studying computer science. This is why he doesn't like the language.


Which is just not true. Pascal is going strong within the Lazarus / RAD community.


Which wasn't really true even when he wrote it.


Whenever I see the Darwin kernel brought into the discussion I can't help but wonder how different things could have been if Apple had just forked Linux and run their OS services on top of that.

Especially when I think about how committed they are to Darwin, it really paints a poor image in my mind: the loss that open source suffers from that, and the time and money Apple has to dedicate to this for a disproportionate return.


There was never a right time for Apple to make such a switch. NeXTSTEP predates Linux, and when it was adapted into Mac OS X, Apple couldn't afford a wholesale kernel replacement project on top of everything else, and Linux in the late 1990s was far from being an obviously superior choice. Once they were a few versions into OS X and solidly established as the most successful UNIX-like OS for consumer PCs, switching to a Linux base would have been an expensive risk with very little short-term upside.

Maybe if Apple had been able to keep classic MacOS going five years longer, or Linux had matured five years earlier, the OS X transition could have been very different. But throwing out XNU in favor of a pre-2.6 Linux kernel wouldn't have made much sense.


I agree with all of this. Moreover, depending on what Torvalds chose to do, Apple may have ended up with a more expensive XNU in the end, which would have been a disaster. Although I think Apple could deal with Torvalds just fine, who really knows how that would have played out.


It would not be fine. It would have never been fine. It would have been a titanic clash of egos and culture, producing endless bickering and finger-pointing, with little meeting of minds. Apple runs the most vertically integrated general systems model outside of mainframes. Linux and its ecosystem represent the least.

In any case, as others have noted, the timeline here w.r.t. NeXTSTEP is backwards.


Making a switch is one thing, but using Linux from the start for OS X would have made more sense. The only reason that didn't happen is because of Jobs' attachment to his other baby. It wasn't a bad choice, but it was a choice made from vanity and ego over technical merit.


You haven’t really expanded on why basing off the Linux kernel would have made more sense, especially at the time.

People have responded to you with timelines explaining why it couldn't have happened, but you seem to keep restating this claim without more substance or context for the time.

Imho Linux would have been the wrong choice and perhaps even the incorrect assumption. Mac is not really BSD based outside of the userland. The kernel was and is significantly different and would’ve hard forked from Linux if they did use it at the time.

Often when people say Linux they mean (the oft-memed) GNU/Linux, except GNU diverged significantly from the POSIX command line tools (in that sense macOS is truer) and the GPLv3 license is anathema to Apple.

I don’t see any area where basing off Linux would have resulted in materially better results today.


Well for starters, it would have better memory management. The XNU kernel's memory manager has poor time complexity. If I create a bunch of sparse memory maps using mmap() then XNU starts to croak once I have 10,000+ of them.
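
A rough sketch of the kind of stress test I mean (Python for brevity; the threshold and slowdown are from my experience, so treat the numbers as illustrative):

    # Create many small anonymous mappings and watch the per-batch cost;
    # on XNU the later batches slow down disproportionately for me.
    import mmap
    import time

    maps = []
    t0 = time.monotonic()
    for i in range(1, 20_001):
        maps.append(mmap.mmap(-1, 4096))  # one-page anonymous mapping
        if i % 5000 == 0:
            print(f"{i} mappings after {time.monotonic() - t0:.2f}s")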


Please re-read the comment you’re responding to about how the kernel would have diverged significantly even if they did use the Linux kernel. Unless you think a three-decade-old kernel would have the same characteristics as today.

What benefit would it have had at the time? What guarantees would it have given at the time that would have persisted three decades later?


This presumes that Apple brought in Jobs as a decision maker, and NeXTSTEP was attached baggage. At the time, the reverse was true - Apple purchased NeXTSTEP as their future OS, and Jobs came along for the ride. Given the disaster that was Apple's OS initiatives in the 90s, I doubt the Apple board would have bought into a Linux adventure.


Why wouldn't Apple have been interested in a Linux option? They bought NeXTSTEP because of Jobs. Linux was already usable as a desktop OS in 2000, and they could have added the UX stuff and drivers for their particular Macs on top of it. There wouldn't have been any downsides for them, and it would have strengthened something that was hurting their biggest rival.


> Linux was already usable as a desktop OS in 2000

Apple made its decision in 1996.


Not only was the acquisition during the 1990s; as someone who happened to be a Linux zealot up to around 2004, I can say "usable" was quite relative in 2000, even if one had the right desktop parts.

And it only became usable as a Solaris/AIX/HP-UX replacement thanks to the money IBM, Oracle and Compaq pumped into Linux's development around 2000; it's even on the official timeline history.


In the early 2000's, Linux was practically unusable as a desktop OS because the only "fully functional" web browser was Internet Explorer. Netscape 4.x "worked" but was incredibly unstable and crashed roughly every half hour. Mozilla / Phoenix / Firefox wasn't done yet. Chrome didn't exist.

It was a very different world. We won't even talk about audio and video playback. I was an early Linux user, having done my first install in 1993, and sadly ran Windows on my desktop then because the Linux desktop experience was awful.


Safari came out in 2003.


Yeah, but I didn't use a Mac back then. And early 2000's web development was heavily biased towards IE.


Jobs initially did not want to come back to Apple. Apple bought NeXT because, between it and BeOS, Jean-Louis Gassee overplayed his hand and was asking way too much money for the acquisition, so Apple defaulted to NeXT. Jobs thought Apple was hopeless just like everyone else did at the time and didn't want to take over a doomed company to steer it into the abyss, and it's not like NeXT was doing great at the time.

>There wouldn't have been any downsides for them

Really? NO downsides???

- throwing away a decade and a half of work and engineering experience (Avie Tevanian helped write Mach; this is like having Linus as your chief of software development and saying "just switch to Hurd!")

- uncertain licensing (Apple still ships ancient bash 3.2 because of GPL)

- increased development time to a shipping, modern OS (it already took them 5 years to ship 10.0, and it was rough)

That's just off the top of my head. I believe you think there wouldn't have been any downsides because you didn't stop to think of any, or are ideologically disposed to present the Linux kernel in 1996 as being better or safer than XNU.


> Jean-Louis Gassee overplayed his hand

Well, there’s a parallel universe! Beige boxes running BeOS would have been late-90s cool, maybe, but would we still have had the same upending results for mobile phones, industrial design, world integration, streaming media services…


>it would have strengthened something that was hurting their biggest rival.

If by biggest rival you mean Microsoft, it was Microsoft who saved Apple from bankruptcy in 1997.


Microsoft did that not out of charity to Apple but as an attempt to fend off the DOJ trial accusing it of being a monopoly.


The investment Microsoft famously made in Apple in 1997 is not what saved Apple from bankruptcy. By the time the money was in Apple's accounts, its fortunes were already reversed.

The fact Microsoft announced they were investing, and that they were committed to continue shipping Office to Mac, definitely helped.


In 1996, Apple evaluated the options and decided (quite reasonably) that NeXTSTEP - the whole OS including kernel, userland, and application toolkit – was a better starting point than various other contenders (BeOS, Solaris, ...) to replace the failed Copland. Moreover, by acquiring NeXT, Apple got NeXTSTEP, NeXT's technical staff (including people like Bud Tribble and Avie Tevanian), and (ultimately very importantly) Steve Jobs.


AFAICT Linux wasn't even ported to PowerPC at the time of NeXTSTEP being acquired by Apple.


Apple was itself involved in porting Linux to PPC, albeit running on top of Mach 3 in MkLinux, from early 1996:

https://en.m.wikipedia.org/wiki/MkLinux


Back in the days when Apple acquired NeXT, Linux was undergoing lots of development and wasn't well established. Linux being a monolithic kernel didn't offer the levels of compartmentalization that Mach did.

As things now stand, FreeBSD represents many of the benefits of Darwin and the open source nature of Linux. If you seek a more secure environment without Apple's increasing levels of lock-in, then FreeBSD (and the other BSDs) merit consideration for deployment.


Isn’t FreeBSD a monolithic kernel? I don’t believe it provides the compartmentalisation that you talk about.

As I understand it Mach was based on BSD and was effectively a hybrid, with much of the existing BSD kernel running as a single big task under the microkernel. Darwin has since updated the BSD kernel under the microkernel with the current developments from FreeBSD.


Mach was never based on BSD; it replaced it. Mach is the descendant of the Accent and Aleph kernels. BSD came into the frame for the userland tools.

"Mach was developed as a replacement for the kernel in the BSD version of Unix," (https://en.wikipedia.org/wiki/Mach_(kernel))

Interestingly, MkLinux was the same type of project but for Linux instead of BSD (i.e. Linux userland with Mach kernel).


It's not just the userland but much of the BSD kernel too. From https://en.wikipedia.org/wiki/Mach_(kernel)

> Throughout this time the promise of a "true" microkernel had not yet been delivered. These early Mach versions included the majority of 4.3BSD in the kernel, a system known as a POE Server, resulting in a kernel that was actually larger than the UNIX it was based on.

And https://en.wikipedia.org/wiki/XNU

> XNU was originally developed by NeXT for the NeXTSTEP operating system. It was a hybrid kernel derived from version 2.5 of the Mach kernel developed at Carnegie Mellon University, which incorporated the bulk of the 4.3BSD kernel modified to run atop Mach primitives,

MkLinux is similar. https://en.wikipedia.org/wiki/MkLinux

> The name refers to the Linux kernel being adapted to run as a server hosted on the Mach microkernel, version 3.0.


Is the driver support fit for using FreeBSD as a desktop OS these days?

Last I tried (~10 years ago) I gave up and I assumed FreeBSD was a Server OS, because I couldn't for the life of me get Nvidia drivers working in native resolution. I don't recall specifics but Bluetooth was problematic also.


I don't think so. Here's a report from this month: https://freebsdfoundation.org/blog/february-2025-laptop-supp...

Looks like (some) laptops might sleep and wifi is on the way! (with help from Linux drivers)


> As things now stand, FreeBSD represents many of the benefits of Darwin and the open source nature of Linux.

No. FreeBSD has committed the original sin of UNIX by deliberately dropping support for all non-Intel architectures, intending to focus on optimising FreeBSD for the Intel ISA and platforms. UNIX portability and support for a diverse range of CPUs and hardware platforms are ingrained in the DNA of UNIX, however.

I would argue that FreeBSD has paid the price for this decision – FreeBSD has faded into irrelevance today (despite having introduced some of the most outstanding and brilliant innovations in UNIX kernel design) – because the FreeBSD core team bet heavily on Intel remaining the only hardware platform in existence, and they missed the turn (ARM, RISC-V, and marginally MIPS in embedded). Linux stepped in and filled the niche very quickly, and it now runs everywhere. FreeBSD is faster but Linux is better.

And it does not matter that Netflix still runs FreeBSD on its servers serving up the content at the theoretical speed of light – it is a sad living proof of FreeBSD having become a niche within a niche.

P.S. I would also argue that the BSD core teams (Free/Net/Open) were a major factor in the downfall of all BSD's, due to their insular nature and, especially in the early days, a near-hostile attitude towards outsiders. «Customers» voted with their feet – and chose Linux.


Having used both FreeBSD and Linux continuously, wherever each is best suited, from around 1995 until today, I disagree.

In my opinion the single factor that has contributed the most to a greater success for Linux than for FreeBSD has been the transition to multithreaded and multicore CPUs even in the cheapest computers, which started in 2003 with the SMT Intel Pentium 4, followed in 2005 by the dual-core AMD CPUs.

Around 2003, FreeBSD 4.x was the most performant and the most reliable operating system for single-core single-thread CPUs, for networking or storage applications, well above Linux or Microsoft Windows (source: at that time I was designing networking equipment and we had big server farms on which the equipment was tested, under all operating systems).

However it could not use CPUs with multiple cores or threads, so on such CPUs it fell behind Linux and Windows. The support introduced in FreeBSD 5.x was only partial, and many years passed until FreeBSD again had competitive performance on up-to-date CPUs. Other BSD variants were even slower in their conversion to multithreaded support. During those years the fraction of *BSD users diminished a lot.

The second most important factor has been the much smaller set of device drivers for various add-on interface cards. Only a few hardware vendors provided FreeBSD device drivers for their products, mostly just Intel and NVIDIA, and for the products of other vendors there have been few FreeBSD users able to reverse engineer them and write device drivers, in comparison with Linux.

The support for non-x86 ISAs has also been worse than in Linux, but this was just a detail within the general support for fewer kinds of hardware than Linux.

All this was caused by positive feedback: FreeBSD started with fewer users, because by the time the lawsuits had been settled favorably for FreeBSD, most potential users had already started to use Linux. The smaller number of users was then less capable of porting the system to new hardware devices and newer architectures, which led to even lower adoption.

Nevertheless, there have always been various details in the *BSD systems that have been better than in Linux. A few of them have been adopted in Linux, like the software package systems that are now ubiquitous in Linux distributions, but in many cases Linux users have invented alternative solutions, in enough cases inferior, instead of studying the *BSD systems to see whether an already existing solution could be adopted instead of inventing yet another alternative.


Whilst I do agree with most of your insights and the narrative of historic events, I also believe that the BSD core teams were a major contributing factor to the demise of the BSDs (however unpopular such an opinion might be).

The first mistake was that all BSD core teams flatly refused to provide native support for the JVM back in its heyday. They eventually partially conceded and made it work using Linux emulation; however, it was riddled with bugs, crashes and other issues for years before it could run Java server apps. Yet, users clamoured to run Java applications, like, now and vociferously.

The second grave mistake was to flatly refuse to support containerisation (Docker) due to it not being kosher. Linux-based containerisation is what underpins all cloud computing today. Again, FreeBSD arrived too late, and with too little.

P.S. I still hold the view that FreeBSD made matters even worse by dropping support for non-Intel platforms early on – at a stage when its bleak future was already all but certain. New CPU architectures are enjoying a renaissance, whilst FreeBSD nervously sucks its thumb by the roadside of history.


Docker was created in 2013, long after BSDs had lost all their popularity. And, fwiw, FreeBSD pioneered containers long before Linux: https://en.m.wikipedia.org/wiki/FreeBSD_jail


FreeBSD jails are an advanced chroot++. Although they did set a precedent as a predecessor of true containers, they have:

1. Minimal kernel isolation.

2. Optional network stack isolation via VNET (but not used by default).

3. Rudimentary resource controls with no default enforcement (important!).

4. Simple capability security model.

Most importantly, since FreeBSD was a very popular choice for hosting providers at the time, jails were originally invented to fully support partitioned-off web hosting, rather than to run self-sufficient, fully contained (containerised) applications as first-class citizens.

The claim to have invented true containers belongs to Solaris 10 (not Linux) and its zones. Solaris 10 was released in January 2005.


> 3. Rudimentary resource controls with no default enforcement (important!).

Seems pretty extensive to me, including R/W bytes/s and R/W ops/s:

* https://docs.freebsd.org/en/books/handbook/jails/#jail-resou...

* https://klarasystems.com/articles/controlling-resource-limit...

* https://man.freebsd.org/cgi/man.cgi?query=rctl


I believe you have the wrong view of how secure FreeBSD Jails are - they're definitely a lot more secure than rootless Podman, for a start.

Isolation: With rootless Podman it seems to be on the same level as Jails - but only if you run Podman with SELinux or AppArmor enabled. Without SELinux/AppArmor the Jails offer better isolation. When you run Podman with SELinux/AppArmor and then add the MAC Framework (like mac_sebsd/mac_jail/mac_bsdextended/mac_portacl), the Jails are more isolated again.

Kernel Syscalls Surface: Even rootless Podman has 'full' syscall access unless blocked by seccomp (SELinux). Jails have restricted use of syscalls without any additional tools - and that can also be narrowed with the MAC Framework on FreeBSD.

Firewall: You cannot run a firewall inside a rootless Podman container. You can run an entire network stack and any firewall, like PF or IPFW, independently from the host inside a VNET Jail - which means more security.

TL;DR: FreeBSD Jails are generally more secure out-of-the-box compared to Podman containers and even more secure if you take the time to add additional layers of security.

> How battle-tested are FreeBSD Jails?

Jails have been in production since 1999/2000, when they were introduced - so 25 years strong - very well battle tested.

Docker has been with us since 2014, so that means about 10 years less - but we must compare to Podman ...

Rootless support for Podman first appeared in late 2019 (1.6), so only less than 6 years to test.

That means Jails are the most battle tested of all of them.

Hope that helps.

Regards,

vermaden


And HP-UX before them, with HP-UX Vaults already in 1999.


Not quite an accurate history of SMP. FreeBSD had SMP well before 5.0, but not "fine-grained" SMP, which is what the 5.0 release was all about. But the conversion led to many regressions.


I don't know if this had much effect on anything, but another thing that hindered using FreeBSD for some users was that Linux worked better as a dual boot system with DOS/Windows on a typical home PC.

There were two problems.

The first was that FreeBSD really wanted to own the whole disk. If you wanted to dual boot with DOS/Windows you were supposed to put FreeBSD on a separate disk. Linux was OK with just having a partition on the same disk you had DOS/Windows on. For those of us whose PCs only had one hard disk, buying a copy of Partition Magic was cheaper than buying a second hard disk.

The reason for this was that the FreeBSD developers felt that multiple operating system on the same disk was not safe due to the lack of standards for how to emulate a cylinder/head/sector (CHS) addressing scheme on disks that used logical block addressing (LBA). They were technically correct, but greatly overestimated the practical risks.

In the early days PC hard disks used CHS addressing, and the system software such as the PC BIOS worked in those terms. Software using the BIOS such as DOS applications and DOS itself worked with CHS addresses and the number of cylinders, heads, and sectors per track (called the "drive geometry") they saw matched the actual physical geometry of the drive.

The INT 13h BIOS interface for low level disk access allowed for a maximum of 1024 cylinders, 256 heads, and 63 sectors per track (giving a maximum possible drive size of 8 GB if the sectors were 512 bytes).

At some point as disks got bigger drives with more than 63 sectors per track became available. If you had a drive with for example 400 cylinders, 16 heads, and 256 sectors per track you would only be able to access about 1/4 of the drive using CHS addressing that uses the actual drive geometry.

It wasn't really practical to change the INT 13h interface to give the sectors per track more bits, and so we entered the era of made up drive geometries. The BIOS would see that the disk geometry is 400/16/256 and make up a geometry with the same capacity that fit within the limits, such as 400/256/16.

Another place with made up geometry was SCSI disks. SCSI used LBA addressing. If you had a SCSI disk on your PC whatever implemented INT 13h handling for that (typically the BIOS ROM on your SCSI host adaptor) would make up a geometry. Different host adaptor makers might use different algorithms for making up that geometry. Non-SCSI disk interfaces for PCs also moved to LBA addressing, and so the need to make up a geometry for INT 13h arose with those too, and different disk controller vendors might use a different made up geometry.

So suppose you had a DOS/Windows PC, you repartitioned your one disk to make room for FreeBSD, and went to install FreeBSD. FreeBSD does not use the INT 13h BIOS interface. It uses its own drivers to talk to the low level disk controller hardware and those drivers use LBA addressing.

It can read the partition map and find the entry for the partition you want to install on. But the entries in the partition map use CHS addressing. FreeBSD would need to translate the CHS addresses from the partition map into LBA addresses, and to do that it would need to know the disk geometry that whatever created the partition map was using. If it didn't get that right and assumed a made up geometry that didn't match the partitioner's made up geometry the actual space for DOS/Windows and the actual space for FreeBSD could end up overlapping.
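
To make the failure mode concrete, here's the basic CHS-to-LBA translation in Python (a sketch; real installers had more cases). Which heads/sectors-per-track pair you plug in is exactly the made-up-geometry question:

    def chs_to_lba(c, h, s, heads, sectors_per_track):
        # CHS sector numbers are 1-based, hence the s - 1.
        return (c * heads + h) * sectors_per_track + (s - 1)

    # The same partition-map entry lands on different LBAs under two
    # plausible made-up geometries -- which is how partitions could overlap:
    print(chs_to_lba(100, 0, 1, heads=16, sectors_per_track=63))   # 100800
    print(chs_to_lba(100, 0, 1, heads=255, sectors_per_track=63))  # 1606500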

In practice you can almost always figure out from looking at the partition map what geometry the partitioner used with enough accuracy to avoid stomping on someone else's partition. Partitions started at track boundaries, and typically the next partition started as close as possible to the end of the previous partition and that sufficiently narrows down where the partition is supposed to be in LBA address space.

That was the approach taken by most SCSI vendors and it worked fine. I think eventually FreeBSD did start doing this too but by then Linux had become dominant in the "Dual boot DOS/Windows and a Unix-like OS on my one disk PC" market.

The other problem was CD-ROM support. FreeBSD was slow to support IDE CD-ROM drives. Even people who had SCSI on their home PC and used SCSI hard disks were much more likely to have an IDE CD-ROM than a SCSI CD-ROM. SCSI CD-ROM drives were several times more expensive and it wasn't the interface that was the bottleneck so SCSI CD-ROM just didn't make much sense on a home PC.

For many, then, it came down to this: with Linux they didn't need a two-disk system and they could install from a convenient CD-ROM, but for FreeBSD they would need a dedicated disk and would have to deal with a stack of floppies.


Related fun fact from up to maybe a decade ago: if you had a disk labeled/partitioned in FreeBSD's 'dangerously dedicated' style, and tried to image it, or read the image with a forensic tool called EnCase (running under Windows of course, how else could it be?), the tool would crash that Windows with an irrecoverable blue screen :)

I loved that!


I am very skeptical that it's primarily caused by the focus on Intel CPUs. FreeBSD already fell into obscurity way before RISC-V. And even though they missed the ARM router/appliance boat, Linux had already overtaken FreeBSD when people were primarily using Linux for x86 servers and (hobbyist) desktops. The "Netcraft confirms it: BSD is dying" Slashdot meme was from the late '90s or early 2000s. Also, if this were the main reason, we would all be using OpenBSD or NetBSD.

IMO it's really a mixture of factors, some I can think of:

- BSD projects were slowed down by the AT&T lawsuit in the early '90s.

- FreeBSD focused more on expert users, whereas Linux distributions focused on graphical installers and configuration tools early on. Some distributions had graphical installers at the end of the '90s. So, Linux distributions could onboard people who were looking for a Windows alternative much more quickly.

- BSD had forks very early on (FreeBSD, NetBSD, OpenBSD, BSDi). The cost is much higher than multiple Linux distributions, since all BSDs maintain their own kernel and userland.

- The BSDs (except BSDi) were non-profits, whereas many early Linux distributions were by for-profit companies (Red Hat, SUSE, Caldera, TurboLinux). This gave Linux a larger development and marketing budget and it made it easier to start partnerships with IBM, SAP, etc.

- The BSD projects were organized as cathedrals and more hierarchical, which made it harder for new contributors to step in.

- The BSD projects provided full systems, whereas Linux distributions would piece together systems. This made Linux development messier, but allowed quicker evolution and made it easier to adapt Linux for different applications.

- The GPL put a lot more pressure on hardware companies to contribute back to the Linux kernel.

Besides that there is probably also a fair amount of randomness involved.


The AT&T lawsuits are a moot point, as they were all settled in the early 1990s. They are the sole reason why FreeBSD and NetBSD even came into existence – by forking the 4.4BSD-Lite codebase after the disputed code had been eliminated or replaced with non-encumbered reimplementations. Otherwise, we would all be running on descendants of 4.4BSD-Lite today.

Linux has been running uninterruptedly on s/390 since October 1999 (31-bit support, Linux v2.2.13) and since January 2001 for 64-bit (Linux v2.4.0). Linux mainlined PPC64 support in August 2002 (Linux v2.4.19), and it has been running on ppc64 happily ever since, whereas FreeBSD dropped ppc64 support around 2008–2010. Both s/390 and ppc64 (as well as many others) are hardly hobbyist platforms, and both remain in active use today. Yes, IBM was behind each port, although the Linux community has been a net $0 beneficiary of the porting efforts.

I am also of the opinion that licensing is a red herring, as BSD/MIT licences are best suited for proprietary, closed-source development. However, the real issue with proprietary development is its siloed nature, and the fact that closed-source design and development very quickly start diverging from the mainline and become prohibitively expensive to maintain in-house long-term. So the big wigs quickly figured out that they could make a sacrifice and embrace the GPL to reduce ongoing costs. Now, with the *BSD core team-led development, new contributors (including commercial entities) would be promptly shown the door, whereas the Linux community would give them the warmest welcome. That was the second major reason for the downfall of all things BSD.


> The AT&T lawsuits are a moot point, as they were all settled in the early 1990s. They are the sole reason why FreeBSD and NetBSD even came into existence – by forking the 4.4BSD-Lite codebase after the disputed code had been eliminated or replaced with non-encumbered reimplementations. Otherwise, we would all be running on descendants of 4.4BSD-Lite today.

The lawsuit was settled in Feb 1994; FreeBSD was started in 1993. FreeBSD was started because development on 386BSD was too slow. It took until Nov 1994 for FreeBSD to rebase on 4.4BSD-Lite (in FreeBSD 2.0.0).

At the time 386BSD and then FreeBSD were much more mature than Linux, but it took from 1992 until the end of 1994 for the legal situation around 386BSD/FreeBSD to clear up. So Linux had about three years to try to catch up.


> FreeBSD has committed the original sin of UNIX by deliberately dropping support for all non-Intel architectures, intending to focus on optimising FreeBSD for the Intel ISA and platforms.

FreeBSD supports amd64 and aarch64 as Tier 1 platforms and a number of others (RISC-V, PowerPC, ARMv7) as Tier 2

https://www.freebsd.org/platforms/


It is irrelevant what FreeBSD supports today.

FreeBSD started demoting non-Intel platforms around 2008-2010, with FreeBSD 11, released in 2016, only supporting x86. The first non-Intel architecture support was reinstated in April 2021, with the official release of FreeBSD 13, which is over a decade irrevocably lost.

Plainly, FreeBSD has missed the boat – the first AWS Graviton CPU was released in 2018, and it ran Linux. Everything now runs Linux, but it could have been FreeBSD.


Not really everywhere, exactly because of the GPL: most embedded FOSS OSes are either Apache or BSD licensed.

It is not only Netflix; Sony is also quite fond of cherry-picking stuff from the BSDs for their Orbis OS.

Finally, I would assert the Linux kernel as we know it today is only relevant because the ones responsible for its creation still walk this planet; like every project, when the creators are no longer around it will be taken in directions that no longer match the original goals.


Interestingly enough, Apple did contribute to porting Linux to PowerPC Macs in the mid-1990s under the MkLinux project, which started in 1996 before Apple’s purchase of NeXT later that year:

https://en.m.wikipedia.org/wiki/MkLinux

I don’t think there was any work done on bringing the Macintosh GUI and application ecosystem to Linux. However, until the purchase of NeXT, Apple already had the Macintosh environment running on top of Unix via A/UX (for 68k Macs) and later the Macintosh Application Environment for Solaris and HP-UX; the latter ran Mac OS as a Unix process. If I remember correctly, the work Apple did for creating the Macintosh Application Environment laid the groundwork for Rhapsody’s Blue Box, which later became Mac OS X’s Classic environment. It is definitely possible to imagine the Macintosh Application Environment being ported to MkLinux. The modern FOSS BSDs were also available in 1996, since this was after the settlement of the lawsuit affecting the BSDs.

Of course, running the classic Mac OS as a process on top of Linux, FreeBSD, BeOS, Windows NT, or some other contemporary OS was not a viable consumer desktop OS strategy in the mid 1990s, since this required workstation-level resources at a time when Apple was still supporting 68k Macs (Mac OS 8 ran on some 68030 and 68040 machines). This idea would’ve been more viable in the G3/G4 era, and by the 2000s it would have been feasible to give each classic Macintosh program its own Mac OS process running on top of a modern OS, but I don’t think Apple would have made it past 1998 without Jobs’ return, not to mention that the NeXT purchase brought other important components to the Mac such as Cocoa, IOKit, Quartz (the successor to Display PostScript) and other now-fundamental technologies.


> I don’t think there was any work done on bringing the Macintosh GUI and application ecosystem to Linux.

QTML (which became the foundation of the Carbon API) was OS agnostic. The Windows versions of QuickTime and iTunes used QTML, and in an alternate universe Apple could've empowered developers to bring Mac OS apps to Windows and Linux with a more mature version of that technology.


Completely forgot about MkLinux. The timing is fascinating.

MkLinux was released in February 1996 whilst Copland got officially cancelled in August 1996.

So it's definitely conceivable that internally they were considering just giving up on the Copland microkernel and running it all on Linux. And maybe this was a legitimate third option to BeOS and NeXT that was never made public.


What's crazy is that MkLinux was actually Linux-on-Mach, not just a bare-metal PowerPC Linux. The work they did to port Mach to PowerPC for MkLinux was then reused in the port of NeXTSTEP Mach to PowerPC. Everything was very intertwined.


Also, MkLinux wasn't that stable. I experimented a bit with it at the time and it wasn't really ripe for production. It kind of worked, but there would have been lots of work to be invested (probably more than Apple could afford) to turn this into a mainstream OS.


Why would we want more of a monoculture? We've put so many eggs in one basket already. I hope we see more diversity in kernels, not further consolidation.

Taken a different way, it feels similar to suggesting Apple should rebase Safari on Chromium.


> Whenever I see the Darwin kernel brought into the discussion I can't help but wonder how different things could have been if Apple had just forked Linux

XNU is only partially open sourced – the core is open sourced, but significant chunks are missing, e.g. the APFS filesystem.

Forking Linux might have legally compelled them to make all kernel modules open source, which, while likely a positive for humanity, isn't what Apple wants to do.


At one point NeXT considered distributing GCC under the GPL with some proprietary parts linked at first boot into the binary.

Stallman, after speaking with lawyers, rejected this.

https://sourceforge.net/p/clisp/clisp/ci/default/tree/doc/Wh...

Look for "NeXT" on this page.


Stallman’s insistence that a judge would side with him is pretty arrogant in my opinion; e.g. looking at Oracle v. Google decades later, the folks deciding the case seemed to be confused about technical matters.


I don't think it was "arrogant" – if you read the link, he explains that he originally thought differently, but he changed his mind based on what his lawyer told him. I don't think you can label a non-lawyer "arrogant" for accepting the legal advice of their own attorney – whether that advice is correct or not can be debated, but it isn't arrogant for someone to trust the correctness of their own lawyer's advice.


1) We are talking about the late 90s, well before Ubuntu, when Desktop Linux was pretty poor in terms of features and polish.

2) Apple had no money or time to invest in rewriting NeXTStep for a completely new kernel they had no experience in. Especially when so many of the dev team were involved in sorting out Apple's engineering and tech strategy as well as all the features needed to make it more Mac-like.

3) Apple was still using PowerPC at the time which NeXTStep supported but Linux did not. It took IBM a couple of years to get Linux running.


> Apple had no money or time to invest in rewriting NeXTStep for a completely new kernel they had no experience in.

And even if they had had the money and time, Avie Tevanian¹ was a principal designer and engineer of Mach². There was no NeXTSTEP-based path where the Mach-derived XNU would not be at the core of Apple's new OS family.

¹ https://en.wikipedia.org/wiki/Avie_Tevanian

² https://en.wikipedia.org/wiki/Mach_(kernel)


>1) We are talking about the late 90s, well before Ubuntu, when Desktop Linux was pretty poor in terms of features and polish.

I think it's hard to overstate how much traction Linux had in the late '90s/early 2000s. It felt like groundbreaking stuff was happening pretty much all the time; major things were changing rapidly every release, and it felt exciting and genuinely revolutionary to download updates and try out all the new things. It really felt like you were on the bleeding edge: your system would break all the time, but it was fun and exciting.

I remember reading Slashdot daily being excited to try out every new distribution I'd see on distrowatch, I'd download and build kernels fairly regularly etc.

Things I can remember from back in those days:

- LILO to GRUB boot loader changes

- Going from EXT2 to EXT3 and all the other experimental filesystems that kept coming out.

- Sound system changing from OSS to ALSA

- Introduction of /sys

- Gentoo and all the memes (funroll-loops website)

- Udev and being able to hotplug usb devices

- Signalfd

- Splice/VMsplice

- Early wireless support and the cursed "ndiswrapper"

Nowadays Linux is pretty stable and dare I say it "boring" (in a good way). It's probably mostly because I've gotten older and have way less free time to spend living on the bleeding edge. It feels like Linux has gone from something you had to wrestle with constantly to have a working system to a spot where nowadays everything "mostly works" out of the box. I can't remember the last time I've had to Ctrl+Alt+Backspace my desktop, for example.

Last major thing I can remember hearing about and being excited for was io_uring.


Yes, and all of that was completely uninteresting for Apple's customer base.


> Apple had no money or time to invest in rewriting NeXTStep for a completely new kernel they had no experience in.

I broadly agree, but it is more nuanced than that. They actually had experience with Linux. Shortly before acquiring NeXT, they did the opposite of what you mentioned and ported Linux to the Mach microkernel for their MkLinux OS. It was cancelled at some point, but had things turned out a bit differently, it could have ended up more important than it actually did.


Diverse systems are more resilient. It's probably a good thing for IT in a general sense, even if it's not the most efficient


Keep in mind they were also looking at BeOS, which is more real-time and notably not Unix/Linux. I wish I lived in the timeline where they went with it, as I'm a huge Be fan.


Control is important. Apple has never had to fight with Torvalds or IBM or Microsoft over getting something added to the kernel. Just look at the fiasco when Microsoft wanted to add a driver for their virtualization system to the kernel.

Also, one thing you'll notice about big companies - they know that not only is time valuable, worst-case time is important too. If someone in an open-source ecosystem CAN delay your project, that's almost as bad as if they regularly DO delay your project. This is why big companies like Google tend to invent everything themselves, i.e. Google may have "invented Kubernetes" (really, an engineer at Google uninvolved with the progenitor of K8s - Borg - invented it based on Borg), but they still use Borg, which every Xoogler here likes to say is "not as good as k8s". Yet they still use it. Because it gives them full control, and no possibility of outsiders slowing them down.


>Whenever I see the Darwin kernel brought into the discussion I can't help but wonder how different things could have been if Apple had just forked Linux and ran their OS services on top of that.

They have a long history with XNU and BSD. And Linux has a GPL license, which might not suit Apple.

>Especially when I think about how committed they are to Darwin it really paints a poor image in my mind. The loss that open source suffers from that, and the time and money Apple has to dedicate to this with a disproportionate return.

They share a lot of code with FreeBSD, NetBSD and OpenBSD, which are open source. And Darwin is open source, too. So there's no loss that open source suffers.


The world is better with multiple flavors instead of one bloated one.


One must consider the loss of control moving to Linux would bring. Even Google is reconsidering, with Fuchsia in line to replace Linux on Android.


It would never have worked. So many of the things that owning xnu has made possible would never have happened on top of Linux. The things you can do when you know each and every customer of the stack, and you all belong to the same business with common objectives and leadership direction just can’t be done in the open-source context.


Based on how often they pull in updated bits from FreeBSD (pretty much never), an Apple fork of Linux would be more or less Linux 2.4 today.

I don't know what the loss that open source suffers is in this context?

I don't think Apple would need to spend less time or money on their kernel grafted on top of Linux 2.4 vs their kernel grafted on top of FreeBSD 4.4.


Because presumably the GPL would force them to release their modifications. Apple gets/got away with leeching off the BSDs because of the permissive license.


They release their kernel source more or less promptly without the GPL.


A bit off topic, but is there any data or estimates of how often big companies use modified versions of GPL software/libraries for their web services without releasing their modifications?


Seen differently, I think it's great that there is yet another kernel being maintained out there.

Imagine if Apple decided to open source Darwin: wouldn't that be a big win for open source?


The core of it always has been open source: https://github.com/apple-oss-distributions/xnu


Definitely worth studying SML imo. Pattern matching is a cool feature, although it's not as comprehensive as most of the pattern matchers in Lisp: you can't match on bitfields, comparisons other than equality by value, etc.

Datatypes are just ok. Classes would be better. It's sort of strange to represent lists (and everything else) as enumerations. It's not really essential or fundamental but I guess that's what Lisp is for.


I suppose by enumerations you mean sum types. I would argue that these are pretty fundamental? You have product types (structs/records/tuples) - a value is made up of X and Y - and sum types - a value can be either X or Y. I think the combination of these is what you need to precisely express any concrete data type.
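
As a sketch in Python terms (SML expresses this directly with datatype declarations; this is just an analogy, not SML itself):

    # A product type bundles several fields (an x AND a y); a sum type is
    # exactly one of several variants (a Circle OR a Rect).
    from dataclasses import dataclass

    @dataclass
    class Circle:
        radius: float

    @dataclass
    class Rect:
        w: float
        h: float

    Shape = Circle | Rect  # the sum type over the two product types

    def area(s: Shape) -> float:
        match s:  # pattern matching over the variants, as in SML
            case Circle(r):
                return 3.14159 * r * r
            case Rect(w, h):
                return w * h

    print(area(Circle(1.0)), area(Rect(2.0, 3.0)))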


I did mean sum types, variants, etc. I wasn't clear about what I meant by representing the data, but I'm referring to type inference. SML can't solve the problem, and Lisp doesn't have it.


Somewhat droll, isn't it? Everyone else is moving on to topology, and you're specializing in Linear Algebra (done right).


With all these desirable cookie-cutter Taiwan-derivative computers out on the market, it really makes you wonder when we're going to start getting BestBuy/Insignia-branded laptops, sold like RadioShack used to do.


I think Radio Shack had Tandy, and Tandy made computers and leatherworking products. Does anyone know more about this?


Apparently it was the other way around: Tandy started as a hobby store and bought the tiny RadioShack brand in 1963, and spun off Tandy Leather at some point. I never knew they were founded by the same dude. My mom bought beads and supplies from Tandy Leather when I was a kid and I never made the connection to Tandy computers and Radio Shack!


Not too interested in the course, but I found the instructor's essay interesting.

https://www.scottaaronson.com/papers/philos.pdf


It's not mentioned in the article for obvious reasons, but HTML + CSS + JavaScript = modern web browsers.


I'm very curious what sort of games were made in elisp. It's not really the first thing that comes to mind when I think about games programming.


Every copy of GNU Emacs comes bundled with the text adventure Dunnet [0].

Dunnet was originally written by Ron Schnell in 1982, as a Maclisp program running under TOPS-20. [1] In 1992, he ported it to Emacs Lisp; however, the Emacs Lisp version is more than just a simple port of the original: it extends the game with new rooms/items/puzzles, but also removes MIT-centric content. E.g. the "endgame" computer at the end of the game was originally named MIT-SALLY, was located at MIT, and was accessed via Chaosnet; the GNU Emacs version removes all those (dated) MIT references, although it contains (obviously intentionally) equally dated (albeit more widely recognisable) content such as a VAX 11/780.

[0] https://en.wikipedia.org/wiki/Dunnet_(video_game)

[1] Original is here: https://github.com/Quogic/DunnetPredecessor/blob/master/foo....


I've been an Emacs user for 20+ years and had never stumbled across that game, thanks for highlighting it!

I found the CPU and the key (guarded by the bear) and made it into the house. It was a fun diversion; I guess I need to play it more seriously sometime.



It has snake and tetris in the default installation.


I'm surprised no one has joined the two and invented Snake Tetris.


They have; it's just usually called SnakeTris.


Malyon is a Z-machine interpreter which works mostly fine.


Playing "This thing all things devours" is one of the most profound gaming experiences I have had, and I happened to use Malyon. Why wouldn't I use the best text editor to play an inform game?


Ditto, I loved devours too, and it's a libre game:

https://jxself.org/git/devours.git

SpiritWrak is Zorkian but libre-licensed, unlike Zork [1-3] / Dungeon.

And, OFC, Emacs has Inform-mode, a helper for Inform6 and Inform6-lib.


R6RS has a book that describes everything simply except for setting up the interpreter. [1]

Racket has all of that and everything else you need to get started like tooling and modules. [2]

Both Lisps are no-nonsense Scheme variants.

[1](https://www.scheme.com/tspl4/) [2](https://racket-lang.org)


I was going to suggest Racket, too. Its documentation is some of the best I've ever come across. It doesn't hurt that it's beautiful, too. I smile every time I use Racket or its documentation.

