Fedora stupidly uses beta compiler in new release, Torvalds blindly upgrades, makes breaking, unreviewed changes in kernel, then flames the maintainer who was working on cleanly updating the kernel for the not-yet-released compiler?
> you didn't coordinate with anyone. You didn't search lore for the warning strings, you didn't even check -next where you've now created merge conflicts. You put insufficiently tested patches into the tree at the last minute and cut an rc release that broke for everyone using GCC <15. You mercilessly flame maintainers for much much less.
Hypocrisy is an even worse trait than flaming people.
> Hypocrisy is an even worse trait than flaming people.
Eh I mean everyone's a hypocrite if you dig deep enough—we're all a big nest of contradictions internally. Recognition of this and accountability is paramount though. He could have simply owned his mistake and swallowed his pride and this wouldn't have been such an issue.
On the one hand, sure, fine. He has raked people over the coals for less. However, this is just an RC. Further, how long has Linus been doing this?
I remember Maddox on xmission having a page explaining that while he may make a grammatical error from time to time, he has published literally hundreds of thousands of words, and the average email he receives contains 10% errors.
However, Linus is well-known for being abrasive, abusive, call it what you want. If you can't take it, don't foist it, Linus. Even if you've earned the right, IMO.
I'd say if you're doing truly-heroic solo efforts, then you can earn that. (But I can only think of fictional examples.) For team efforts like the Linux kernel, sure, no amount of individual contribution to that project grants you the right to belittle the other contributors.
This idea that if you've done great things, then you've earned the right to treat people poorly, needs to go away. It's toxic and gross, and we should expect and demand better of our heroes (and ourselves).
Fabrice Bellard's work is impressive, but I wouldn't call it heroic. I was thinking more like, the grumpy-guts who ensures the local homeless shelter is adequately stocked with food, clean bedding, and toiletries, day-in and day-out, even in the depths of winter. You're allowed to be vaguely misanthropic in your interpersonal relationships if you're doing something like that, at least in my book.
Again, the only non-fictional people I know who qualify, are actually really nice to people.
IMHO Cook is following good development practices.
You need to know what you support.
If you are going to change it, the change must be planned somehow.
I find Torvalds reckless for changing his development environment right before a release.
If he really needs that computer to release the kernel, it must be a stable one.
Even better: it should be a VM (hosted somewhere) or part of a CI-CD pipeline.
The real problem here was "-Werror", dogmatically fixing warnings, and using the position of privilege to push in last-minute commits without review.
Compilers will be updated, they will have new warnings, this has happened numerous times and will happen in the future. The linux kernel has always supported a wide range of compiler versions, from the very latest to 5+ years old.
I've ranted about "-Werror" in the past, but to try to keep it concise: it breaks builds that would and should otherwise work. It breaks older code with newer compilers, and with compilers for other platforms. That's bad because then you can't, say, use the exact code specified/intended without modifications, or test and compare different versions or different toolchains, etc. A good developer will absolutely not tolerate a deluge of warnings all the time; they will decide to fix the warnings to get a clean build, over a reasonable time and with well-considered changes, rather than be forced to fix them immediately with brash, disruptive code changes. And this is a perfect example of why. New compiler, fine; new warnings, fine. Warnings are a useful feature, distinct from errors. "-Werror" is the real error.
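To make the warning-versus-error distinction concrete, here is a minimal sketch in plain userspace C (nothing to do with the actual kernel patches; the unused-variable warning just stands in for whatever diagnostic a newer compiler introduces):

    /* werror_demo.c -- same source, same diagnostic, but -Werror decides
     * whether a compiler upgrade merely warns or breaks the build.
     *
     *   gcc -Wall -c werror_demo.c          # warning only, object file produced
     *   gcc -Wall -Werror -c werror_demo.c  # identical diagnostic, build fails
     *   gcc -Wall -Werror -Wno-error=unused-variable -c werror_demo.c
     *                                       # keep -Werror, demote one warning
     */
    int answer(void)
    {
        int scratch = 42;   /* never read: triggers -Wunused-variable (in -Wall) */
        return 0;
    }

The diagnostic itself is identical in all three invocations; only the build outcome changes.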
With or without -Werror, you need your builds to be clean with the project's chosen compilers.
Linus decided, on a whim, that a pre-release of GCC 15 ought to suddenly be a compiler that the Linux project officially uses, and threw in some last-minute commits straight to main, which is insane. But even without -Werror, when the project decides to upgrade compiler versions, warnings must be silenced, either through disabling new warnings or through changing the source code. Warnings have value, and they only have value if they're not routinely ignored.
For the record, I agree that -Werror sucks. It's nice in CI, but it's terrible to have it enabled by default, as it means that your contributors will have their build broken just because they used a different compiler version than the ones which the project has decided to officially adopt. But I don't think it's the problem here. The problem here is Linus's sudden decision to upgrade to a pre-release version of GCC which has new warnings and commit "fixes" straight to main.
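On the "silence the warnings" point above: with GCC (and Clang) you don't have to choose between rewriting the offending code and turning the warning off project-wide; the suppression can be scoped to the one site that trips it. A small sketch, using -Wdeprecated-declarations purely as a stand-in for whatever new warning a compiler upgrade introduces:

    #include <string.h>

    /* The deprecation attribute here just manufactures a diagnostic to
     * suppress; it stands in for a warning a newer compiler might add. */
    __attribute__((deprecated("use copy_name_v2")))
    static void copy_name_v1(char *dst, const char *src) { strcpy(dst, src); }

    void legacy_caller(char *dst, const char *src)
    {
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
        copy_name_v1(dst, src);   /* known, reviewed exception */
    #pragma GCC diagnostic pop
    }

That keeps the warning meaningful everywhere else, which is the whole point of not letting warnings rot.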
This is my take-away as well. Many projects let warnings fester until they hit a volume where critical warnings are missed amidst all the noise. That isn't ideal, but seems to be the norm in many spaces (for instance the nodejs world where it's just pages and pages of warnings and deprecations and critical vulnerabilities and...).
But pushing breaking changes just to suppress some new warning should not be the alternative. Working to minimize warnings in a pragmatic way seems more tenable.
Ironically, as a NodeJS dev, I was going to say the opposite: I'm very used to the idea that you have a strict set of warnings that block the build completely if they fail, and I find it very strange in the C world that this isn't the norm. But I think that's more to do with being able to pin dependencies more easily: by default, everyone on projects I work with uses the same set of dependencies, always, including build dependencies and NodeJS versions. And any changes to that set of dependencies will be recorded as part of the repository history, so if new warnings/failures show up, it's very easy to see what caused it.
Whereas in a lot of the C (and C++, and even older Python) codebases I've seen, these sorts of dependencies aren't locked to the same extent, so it's harder to track upgrades, and therefore warnings are more likely to appear, well, without warning.
But I think it's also probably the case that a C expert will produce codebases that have no warnings, and a C novice will produce codebases filled with warnings, and the same for JS. So I can imagine if you're just "visiting" the other language's ecosystem, you'll see worse projects and results than if you've spent a while there.
I find it surprising that Linus bases his development and release tooling on whatever's in the repositories at the time. Surely it is best practice to pin to a specified, fixed version and upgrade as necessary, so everyone is working with the same tools?
This is common best practice in many environments...
Linus surely knows this, but here he's just being hard headed.
That can work, but it can also bring quite a few issues. Mozilla effectively does this; their build process downloads the build toolchain, including a specific clang version, during bootstrap, i.e., setting up the build environment.
This is super nice in theory, but it gets murky if you veer off the "I'm building current mainline Firefox" path.
For example, I'm a maintainer of a Firefox fork that often lags a few versions behind. It has substantial changes, and we are only two guys doing the major work, so keeping up with current changes is not feasible. However, this is a research/security testing-focused project, so this is generally okay.
However, coming back to the build issue: apparently it's costly to host all those toolchain archives, so they frequently get deleted from the remote repository, which means the build only works on machines that downloaded the toolchain earlier (so not on a GitHub Actions runner, for example).
Given that there are many more downstream users of effectively a ton of kernel versions, this quickly gets fairly expensive and takes up a ton of effort unless you pin it to some old version and rarely change it.
So, as someone wanting to mess around with open source projects, their supporting more than 1 specific compiler version is actually quite nice.
Conceptually it's no different than any other build dependency. It is not expensive to host many versions. $1 is enough to store over 1000 compiler versions which would be overkill for the needs of the kernel.
Because then, if something that is expected to compile doesn't compile correctly, you know that you should check your compiler version. It is the exact same reason why you don't just specify which libraries your project depends on, but also their versions.
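One cheap way to make the "compiler is a dependency" idea explicit is to assert the expected version range in the source itself, so a mismatch fails loudly at build time instead of surfacing as mystery warnings. A minimal sketch; the GCC 12 through 14 range is made up purely for illustration:

    /* toolchain_check.h -- sketch of pinning the compiler like any other
     * dependency. The accepted range below is made up for this example. */
    #ifndef TOOLCHAIN_CHECK_H
    #define TOOLCHAIN_CHECK_H

    #if defined(__GNUC__) && !defined(__clang__)
    #  if (__GNUC__ < 12) || (__GNUC__ > 14)
    #    error "This tree is built and tested with GCC 12 through 14"
    #  endif
    #endif

    #endif /* TOOLCHAIN_CHECK_H */

(Or do the equivalent check in the build system; the point is just that the expected toolchain version is recorded and enforced somewhere.)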
People are usually going to go through `make`; I don't see a reason that couldn't be instrumented to (by default) acquire an upstream GCC instead of whatever forked garbage ends up in $PATH.
Why would it go wrong, the ABI is stable and independent of compiler? You would hit issues with C++ but not C. I have certainly built kernels using different versions of GCC than what /lib stuff is compiled with, without issue.
You'd think that, but in effect kconfig/kbuild has many cases where they say "if the compiler supports flag X, use it", where X implies an ABI break. Per-task stack protectors come to mind.
I'm completely unsure whether to respond "it was stable, he was running a release version of Fedora" or "there's no such thing as stable under Linux".
The insanity is that the Kernel, Fedora and GCC are so badly coordinated that the beta of the compiler breaks the Kernel build (this is not a beta, this is a pre-alpha in a reasonable universe...is the Kernel a critical user of GCC? Apparently not), and a major distro packages that beta version of the compiler.
To borrow a phrase from Reddit: "everybody sucks here" (even Cook, who looks the best of everyone here, seems either oblivious or defeated about how clownshoes it is that released versions of major linux distros can't build the Kernel. The solution of "don't update to release versions" is crap).
(Writing this from a Linux machine, which I will continue using, but also sort of despise).
The GCC 15 transition has been very disruptive, but Fedora is known for being on the bleeding edge ("first" is in the "four foundations" [1]). Be glad because eventually everyone will get GCC 15, and we've worked out most of the problems for you already.
GCC 15.1 was released today. Your Fedora release was two weeks earlier, using a nonexistent 15.0.1 version, ironically still containing bugs you reported that were fixed for 15.1. That just seems like poor decision making.
You're belittling the large amount of work done across thousands of packages to get them ready for GCC 15, which did involve backporting fixes to GCC 15 itself. All those fixes went into GCC upstream. GCC 15.1 was released two hours ago as of writing this message, even before the US wakes up, yet I'm sure there will be a build of it in Fedora later today.
Gentoo also has a tracker [1] for GCC 15 issues that they've been working on as well. (Note: GCC 15 is masked in Gentoo so you have to go out of your way to install it)
GCC 2.96 lasted a year or more and even after GCC 3.0 was released it wasn't able to compile a working kernel. This lasted two weeks and the issue is just a new warning; it's just bad timing across the release cycles of two projects.
And reverted them as soon as the issue became apparent.
> then flames the maintainer who was working on cleanly updating the kernel for the not-yet-released compiler?
Talking about changes that he had not pushed by the time Linus published the release candidate.
Also the "not yet released" seems to be a red herring, as the article notes having beta versions of compilers in new releases is a tradition for some distros, so that should not be unexpected. It makes some sense since distros tend to stick to a compiler for each elease, so shipping a soon to be out of maintenance compiler from day one will only cause other issues down the road.
> They could live with an older version of GCC for a year.
That's just not what Fedora is, though. Being on the bleeding edge is foundational to Fedora, even if it's sometimes inconvenient. If you want battle-tested and stable, don't run Fedora, but use Debian or something.
> I dont love the idea of Apple eco system for server side development.
100%! Everyone repeat after me: "macOS Is Not A Server OS"
macOS is approximately the worst OS you can run a server on:
1. It is buggy. We had a bug in Sonoma where our CI machines would freeze on some filesystem access. The bug was fixed during Sonoma's lifetime... but only released in Sequoia. Before that we had a rare bug that plagued us for years (once every few months across several CI machines) where a process would fail to execute with "/bin/sh: cannot execute binary file", indicating an erroneous ENOEXEC from the kernel (that bug quietly disappeared)
2. It has no LTS version. See above filesystem hang. Want a fix? Cool, it comes only in the next major OS version, along with a host of other changes you didn't want, and new bugs! (see point 4 below)
3. It is just poorly documented. Apple's docs are awful and poorly searchable; they will change things and not document it, and you're left with the community trying to reverse-engineer everything (I cannot recommend eclecticlight.co enough!)
4. It's just bloated with cruft for consumers. Upgrade to Sequoia 15.3? You got a free download of "Apple Intelligence" models, there go a few GBs! Again rely on the community to come up with the magic settings to disable stuff.
Ask me how I feel about macOS as a server.
(I lament the death of XServe, which could've driven more server-focused software quality)
> A critical piece of history here is to understand the really stupid way in which GCC does cross compiling. Traditionally, each GCC binary would be built for one target triple. [...] Nobody with a brain does this ^2
You're doing GCC a great disservice by ignoring its storied and essential history. It's over 40 years old, and was created at a time when there were no free/libre compilers. Computers were small and slow. Of course you wouldn't bundle multiple targets in one distribution.
LLVM benefitted from a completely different architecture and from starting with a blank slate when computers were already faster and much larger, and it was heavily sponsored by a vendor that was innately interested in cross-compiling: Apple. (Guess where LLVM's creator worked for years and led the development tools team.)
The older I get the more this kind of commentary (the OP, not you!) is a total turn off. Systems evolve and there's usually, not always, a reason for why "things are the way they are". It's typically arrogance to have this kind of tone. That said I was a bit like that when I was younger, and it took a few knockings down to realise the world is complex.
"This was the right way to do it forty years ago, so that's why the experience is worse" isn't a compelling reason for a user to suffer today.
Also, in this specific case, this ignores the history around LLVM offering itself up to the FSF. gcc could have benefitted from this fresh start too. But purely by accident, it did not.
> "This was the right way to do it forty years ago, so that's why the experience is worse" isn't a compelling reason for a user to suffer today.
On my system, "dnf repoquery --whatrequires cross-gcc-common" lists 26 gcc-*-linux-gnu packages (that is, kernel / firmware cross compilers for 26 architectures). The command "dnf repoquery --whatrequires cross-binutils-common" lists 31 binutils-*-linux-gnu packages.
The author writes, "LLVM and all cross compilers that follow it instead put all of the backends in one binary". Do those compilers support 25+ back-ends? And if they do, is it good design to install back-ends for (say) 23 such target architectures that you're never going to cross-compile for, in practice? Does that benefit the user?
My impression is that the author does not understand the modularity of gcc cross compilers / packages because he's unaware of (or doesn't care for) the scale that gcc aims at.
> And if they do, is it good design to install back-ends for (say) 23 such target architectures that you're never going to cross-compile for, in practice? Does that benefit the user?
rustc --print target-list | wc -l
287
I'm kinda surprised at how large that is, actually. But yeah, I don't mind if I have the capability to cross-compile to x86_64-wrs-vxworks that I'm never going to use.
I am not an expert on all of these details in clang specifically, but with rustc, we take advantage of LLVM's target specifications, so that you can even configure a backend that the compiler doesn't yet know about by simply giving it a JSON file with a description. https://doc.rust-lang.org/nightly/nightly-rustc/rustc_target...
While these built-in ones aren't defined as JSON, you can ask the compiler to print the JSON spec for any of them (on nightly, `rustc -Z unstable-options --print target-spec-json --target <triple>`).
I'd love to learn what accident you're referring to, Steve!
I vaguely recall the FSF (or maybe only Stallman) arguing against the modular nature of LLVM because a monolithic structure (like GCC's) makes it harder for anti-GPL actors (Apple!) to undermine it. Was this related?
There is nothing "unmodular" about GCC -- considering that it supports plenty of architectures, operating systems, and languages.
The big difference, which people seem to miss in the context of the GNU project and GNU system, is that the modularity is for free software projects. GCC is plenty modular, and very easy to extend in any way, shape, or form... if you abide by the license!
If you want to be a parasite on a project licensed under the GNU GPL, you will have a rough ride... that is, after all, the whole idea of copyleft.
iPhones have terrible heat dispersion compared to even a fanless computer like a macbook air. You get a few minutes at full load before thermal throttling kicks in, so you could do the occasional build of your iPhone app on an iPhone but it'd be pretty terrible as a development platform.
At work we had some benchmarking suites that ran on physical devices and even with significant effort put into cooling them they spent more time sleeping waiting to cool off than actually running the benchmarks.
Anecdotally, some years ago the Zebra Puzzle [1] made the rounds in my team. Two people solved it: myself, a young intern, who mapped out the constraints as a physical puzzle that I was able to solve visually, and a more seasoned colleague who used Prolog.
This is exciting, convergence is always good, but I'm confused about the value of putting the tracking information in a git commit header as opposed to a git trailer [1] where it currently lives.
In both cases, it's just metadata that tooling can extract.
Edit: then again, I've dealt with user error with the fragile semantics of trailers, so perhaps a header is just more robust?
Mostly because it is jarring for users who want to interact with tools that require these footers, and the setups to apply them, like Gerrit's change-id script, are often annoying (for example, supporting Windows users without relying on stuff like bash). Now, I wrote the prototype integration between Gerrit and Jujutsu (which is not mainline, but people use it), and it applies Change-Id trailers automatically to your commit messages, for any commits you send out. It's not the worst thing in the world, but it is a little fiddly bit of code.
But ignore all that: the actual _outcome_ we want is that it is just really nice to run 'jj gerrit send' and not think about anything else, and that you can pull changes back in (TBD) just as easily. I was not ever going to be happy with some solution that was like, "Do some weird git push to a special remote after you fix up all your commits or add some script to do it." That's what people do now, and it's not good enough. People hate that shit and rail at you about it. They will make a million reasons up why they hate it; it doesn't matter though. It should work out of the box and do what you expect. The current design does that now, and moving to use change-id headers will make that functionality more seamless for our users, easier to implement for us, and hopefully it will be useful to others, as well.
In the grand scheme it's a small detail, I guess. But small details matter to us.
I don't know if it's the only or original reason, but one nice consequence of the reverse hex choice is that it means change IDs and commit IDs have completely different alphabets ('0-9a-f' versus 'z-k'), so you can never have an ambiguous overlap between the two.
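If it helps to see why the two can never collide: "reverse hex" just counts the sixteen digit values backwards from 'z', so change IDs render in z..k while commit hashes stay in 0-9/a-f. A rough sketch of that digit mapping (not Jujutsu's actual code):

    #include <stdio.h>

    /* Rough sketch, not Jujutsu's code: render bytes in "reverse hex",
     * where digit value 0x0 -> 'z', 0x1 -> 'y', ..., 0xf -> 'k'. */
    static char reverse_hex_digit(unsigned v)
    {
        return (char)('z' - (v & 0xf));
    }

    int main(void)
    {
        const unsigned char change_id[] = { 0x5a, 0x1f };  /* arbitrary bytes */
        for (size_t i = 0; i < sizeof change_id; i++)
            printf("%c%c", reverse_hex_digit(change_id[i] >> 4),
                           reverse_hex_digit(change_id[i] & 0xf));
        printf("\n");   /* prints "upyk" for the bytes above */
        return 0;
    }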
Jujutsu mostly doesn't care about the real "format" of a ChangeId, though. It's "really" just an arbitrary Vec<u8> that the backend itself has to define and describe a little bit; the example backend has a 64-byte change ID, for instance.[1] To the extent the reverse hex format matters, it's mostly used in the template language for rendering things to the user. But you could also extend that with other render methods too.
That's a downside of using headers, not a reason for using them. If upstream git changed to help with this, it would involve having those operations preserve the headers. (Though for cherry-pick there are good arguments both for preserving the ID and for generating a new one.)
what does "Free" mean? What do you think the author meant in this context?
The problem is that "Free" means two things in English, which is why some like to use the French/Spanish "Libre" instead, to separate the "free-as-in-speech" from "free-as-in-beer".
I haven't seen a person in this thread use free in the sense of payment yet; it seems pretty unambiguous that everyone is referring to free in the sense of liberty.
This includes the point the person you are responding to is trying to make: they are noting that open source and source-available are not the same thing. Their point seems to be that open source, by the very definition used by most who care about the concept, implies the "free/libre/disentangled" part, whether that means derivative continuation (GPL and its kin) or not (MIT, BSD, etc.).
> This is kinda like Docker/Podman thing on Linux – but secure instead.
How true is this? I know jails have been around longer than Linux containers, which are explicitly not designed as "secure" isolation (which is why people like fly.io use VMs instead).
How battle-tested are FreeBSD jails?
In particular, I note we're talking FreeBSD, not OpenBSD, which is the one that's all about security.
Sure - let's have a discussion about the differences between the security of FreeBSD Jails and Linux Podman containers.
Isolation: With rootless Podman it seems to be on the same level as Jails - but only if you run Podman with SELinux or AppArmor enabled. Without SELinux/AppArmor the Jails offer better isolation. When you run Podman with SELinux/AppArmor and then add the MAC Framework on the FreeBSD side (like mac_sebsd/mac_jail/mac_bsdextended/mac_portacl), the Jails are more isolated again.
Kernel Syscalls Surface: Even rootless Podman has 'full' syscall access unless blocked by seccomp (or SELinux); Jails have a restricted set of syscalls without any additional tools, and that can be narrowed further with the MAC Framework on FreeBSD. (A minimal seccomp sketch follows after this list.)
Firewall: You cannot run a firewall inside a rootless Podman container. You can run an entire network stack and any firewall like PF or IPFW, independently of the host, inside a VNET Jail - which means more security.
TL;DR: FreeBSD Jails are generally more secure out-of-the-box compared to Podman containers and even more secure if you take the time to add additional layers of security.
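To make the "blocked by seccomp" part of the syscall comparison concrete: on the Linux side the reachable syscall surface is whatever filter the container runtime assembles and loads for each process. A minimal libseccomp sketch of that kind of filter (this is not Podman's real default profile, just two illustrative denials):

    /* seccomp_sketch.c -- sketch of the kind of per-process syscall filter a
     * container runtime loads before running the workload. Not Podman's real
     * profile; it only denies two example syscalls.
     *
     * Build on Linux with libseccomp:  cc seccomp_sketch.c -lseccomp
     */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <sched.h>
    #include <seccomp.h>
    #include <stdio.h>

    int main(void)
    {
        /* Default-allow, then carve out denials; real profiles are usually
         * the inverse (default-deny plus an allowlist). */
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
        if (!ctx)
            return 1;

        /* Refuse new namespaces and kernel module loading with EPERM. */
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(unshare), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(init_module), 0);

        if (seccomp_load(ctx) != 0) {        /* applies to us and our children */
            seccomp_release(ctx);
            return 1;
        }
        seccomp_release(ctx);

        printf("unshare() now fails with errno=%d\n",
               unshare(0) == -1 ? errno : 0);   /* expect EPERM */
        return 0;
    }

The contrast with a jail is that its restrictions (and, with VNET, its separate network stack) come from the kernel's jail machinery itself rather than from a filter the runtime has to assemble correctly.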
> How battle-tested are FreeBSD Jails?
Jails have been in production since 1999/2000, when they were introduced - so 25 years strong - very well battle tested.
Docker has been with us since 2014, so about 10 years less - but we should compare to Podman ...
Rootless support for Podman first appeared in late 2019 (1.6), so it has had less than 6 years of testing.
That means Jails are the most battle tested of all of them.
Running containers inside VMs in multi-tenant scenarios is so common that Google invented gVisor, which you can think of as a highly paravirtualized guest OS that is lighter than a full VM but still based on similar virtualization principles for isolation.
I admire Kees Cook's patience.