> What does it take to rewrite the foundational components of one of the world’s most popular Linux distributions? Ubuntu serves over 12 million daily desktop users alone, and the systems that power it, from sudo to core utilities, have been running for decades with what Jon Seager, VP of Engineering for Ubuntu at Canonical, calls “shaky underpinnings.”
> In this episode, we talk to Jon about the bold decision to “oxidize” Ubuntu’s foundation. We explore why they’re rewriting critical components like sudo in Rust, how they’re managing the immense risk of changing software that millions depend on daily, and what it means to modernize a 20-year-old operating system without breaking the internet.
> Debian's shipped rustc won't be able to compile Apt since nearly all rust devs are bleeding edge targeters.
This is nonsense. Apt devs can target a rustc release, and that release can be the same one that ships with Debian. Moreover, since those apt devs may have some say in the matter, they can choose to update the compiler in Debian!
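For what it's worth, Cargo already has a mechanism for exactly this. A minimal sketch of pinning an MSRV in Cargo.toml (the crate name is made up; 1.63 is the rustc version mentioned elsewhere in this thread):

```toml
[package]
name = "apt-rust-helper"   # hypothetical crate name
version = "0.1.0"
edition = "2021"
# Minimum supported Rust version. Cargo refuses to build this
# crate with an older toolchain, so "bleeding edge" features
# can't sneak in past what Debian's rustc supports.
rust-version = "1.63"
```

Build in CI against exactly that toolchain and any accidental use of newer features fails fast.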
> The entire language culture is built around this rapid improvement.
... Because this is a cultural argument about how some people really enjoy having their codebase be 6 years behind the latest language standard, not about any actual practical problem.
And I can understand how someone may not be eager to learn C++20's concepts or to add them immediately to a code base, but upgrades to your minimum Rust version don't really feel like that. It's much more like: "Wow, that's a nifty feature that I immediately understand and would like to use, right in the std lib. It's a great alternative to [much more complex thing...]" See, for example, OnceLock, added in 1.70.0: https://doc.rust-lang.org/std/sync/struct.OnceLock.html
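To make that concrete, a minimal sketch of the kind of thing I mean (the function and string are made up): before OnceLock you'd typically pull in lazy_static or once_cell for a lazily initialized static; now it's just std:

```rust
use std::sync::OnceLock;

// A lazily initialized global without any external crate.
fn greeting() -> &'static String {
    static GREETING: OnceLock<String> = OnceLock::new();
    // The closure runs once, on first access; later calls
    // return the already-initialized value.
    GREETING.get_or_init(|| format!("hello from pid {}", std::process::id()))
}

fn main() {
    println!("{}", greeting());
    println!("{}", greeting()); // no re-initialization
}
```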
> but rustc-1.63.0 still compiles this program that panics at runtime.
An index OOB error? It's important to remember that a Rust panic is still memory safe. Perhaps you should read the article, or read up on what undefined behavior is.[0] The Rust behavior here is very well defined: it will either abort or unwind.[1]
If you prefer different behavior, there is the get method on slices.[2]
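For example (a minimal sketch; the vector and index are made up):

```rust
fn main() {
    let v = vec![1, 2, 3];
    let i = 10;

    // v[i] would panic here: a defined unwind or abort,
    // never an out-of-bounds read.

    // `get` returns an Option instead, so the caller decides
    // what a missing index means:
    match v.get(i) {
        Some(x) => println!("v[{i}] = {x}"),
        None => println!("index {i} is out of bounds, handled without panicking"),
    }
}
```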
Statically asserting at compile time that all memory accesses are either in bounds, or will result in a controlled unwind or exit of the process, guarantees there are no memory safety violations.
We understand you're saying it's not possible in the general case to assert that all memory accesses are in bounds. Instead, if you ensure all memory accesses are either in bounds or at least do not violate memory safety, you've achieved the requirement of "memory safety" regardless of runtime inputs.
> This subthread is about claims of STATIC checking of memory safety. A panic is not static checking. Perhaps you should read what you respond to.
Oh, I read it.
Rust, and for that matter the person to whom you are replying above, never claimed that Rust could statically check array bounds. You created that straw man. Yes, Rust does use static analysis as one important method of achieving memory safety, but Rust doesn't require static analysis in every instance for it to deliver its big step-function improvement, or to achieve memory safety.
Yes, certain elements of memory safety can only be achieved at runtime. Fine. But using static analysis to achieve certain elements of memory safety at compile time, where possible, is obviously better than enforcing everything only at runtime, as in, say, Java or Fil-C.
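A minimal sketch of that split, using only std's Vec (the values are made up): the aliasing bug is rejected statically by the borrow checker, while an out-of-range index is only caught at runtime:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Static: the borrow checker rejects this at compile time.
    let first = &v[0];
    // v.push(4); // error[E0502]: cannot borrow `v` as mutable
    //            // while `first` also borrows it; the potential
    //            // dangling reference never compiles.
    println!("first = {first}");

    // Runtime: this index can't be ruled out statically, so the
    // bounds check fires when it runs. v[i] would panic; `get`
    // makes the miss explicit instead.
    let i = v.len(); // one past the end
    assert!(v.get(i).is_none());
}
```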
If the only way of triggering spatial memory unsafety in C++ was vector[i] and that operation was defined to always interrupt execution, then yes, C++ would be considered memory safe. But that is not the case.
The equivalent of vector[i] in Rust is Vec::get_unchecked, which is marked as unsafe, not the default that people normally reach for.
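To make the contrast concrete (a minimal sketch with made-up values):

```rust
fn main() {
    let v = vec![10, 20, 30];

    // The safe default: bounds-checked indexing, panics on a bad index.
    let a = v[1];

    // The actual analogue of C++'s unchecked operator[]: skipping
    // the bounds check requires an explicit `unsafe` block, and an
    // out-of-bounds index here WOULD be undefined behavior.
    let b = unsafe { *v.get_unchecked(2) };

    println!("{a} {b}");
}
```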
We are, however, talking in this subthread about the compiler inserting bounds checks and (incorrectly) calling the process "static checking".
I refuted that point by pointing out that the same process, if done manually in C++, would not be considered "static analysis that provides memory safety for array access".
While I don't like the tone of the grandparent, comparing to Go is kinda irrelevant when it used structured concurrency as the example of how to solve it. It is, of course, also not a panacea.
> Llvm is a different licence (sic) not illegal to examine.
The FSF considers the Apache License 2.0 incompatible with GPLv2 because of its "additional conditions".
I happen to agree with you that, at the very least, we haven't fully grappled with the fact that FOSS, like Linux, is published to the Internet and freely available for anyone to read. Obviously, there should be a distinction between reading and copying, just like there is a distinction between reading and copying a literary work.
The issue -- as I see it -- is that many GPL fanatics just don't see it the same way. I believe Linus has even opined that any filesystem which was developed after Linux, whose developers are aware of Linux, could be considered a "derived work". This is of course ridiculous, but the GPL, if read without due care and funneled through social media slop, can be ridiculous.
It's of course not ridiculous. It's why black box reverse engineering exists and is generally legal, while white box reverse engineering is generally illegal. It doesn't matter whether it applies to proprietary or free software, copyright applies equally to both.
Copyright applies to literal reproduction of documents; not to ideas. It is straightforwardly allowed to read some implementation of an idea, think about it, and write your own implementation of the same idea.
IBM published its initial BIOS code in a manual bundled with the PC. Having any knowledge of it, even if you don't implement it verbatim, makes you "tainted", and makes you liable in any subsequent cases.
This is why black-box reverse engineering exists.
The same is true for console reverse engineering. No self-respecting reverse engineer reads code leaks from official console development; otherwise they'd be in legal hot water.
This is serious stuff, and there's no blurry line here.
There's no legal concept called "tainting". Black boxing is just a means by which you try to make yourself completely irreproachable (for if you haven't seen something, then it's outright impossible that you copied it). It obviously doesn't follow that the converse is true. If it were, musicians would not listen to others' music, nor would painters look at others' paintings!
(Access and substantial similarity are the usual factual elements in determining the possibility of a copyright infringement in U.S. law.)
I agree with you that it's possible in principle that copyright infringement would not be found even when there was evidence of access. But I think the courts would usually give the defendant a higher burden in that case. You can see in the Wikipedia article that there has been debate about whether access becomes more relevant when the similarity is greater and less relevant when the similarity is less (apparently the current Ninth Circuit standard on that is "no"?).
I agree with all that (on the actual legal basis, that is, not on personal preferences about societal structure), and it's all the more frustrating to think about the power asymmetry in the era of LLMs trained on basically every piece of written material that can be found out there. In that case, oh yes, you absolutely can go ahead with it, regardless of how verbatim some of the code they output can be.
The more disturbing thing about LLMs is that it's "fair use" to scrape everything to train them, but it's a liability for the user to use the code, text, whatever.
If one emits a large block of copyrighted material, you'll again be in legal hot water.
Considering even fair use can be abused at will (see what GamersNexus is going through), it looks even bleaker than at first glance.
That seems totally reasonable to me. A large part of fair use is about the purpose of the use. It seems like a reasonable compromise that what is fair use in one context might not be in another. I can't think of any alternative to fair use that would make more sense.
I think the only unreasonable part is that LLM companies are implicitly, or sometimes explicitly, advertising their products' output as being fit for use in other projects. I think that is a false-advertising problem.
You're correct that copyright does not apply to ideas, only implementations. However, if you take an existing implementation and base yours on it, generally the original work's copyright applies (there are a whole lot of details this is skimming over).
As an example, if you take a painting someone else made and try to make your own version using the original as a reference, that is probably subject to the original author's copyright. On the other hand, if you both happen to paint the same sunset, it's all OK.
I think you're more stating how you would like copyright to work, not how it actually does.
So clean-room the LLVM analysis and work from a spec carried out of the room.
It's a really low bar, tbh. The point is that people have hobbies, and aspects of this work can look like a hobbyist's "but I don't want to do it that way" view.
As a consumer and user of compilers, it doesn't have to matter to me. It's only observations from reading the comments and the article that brought this to mind: LLVM is proof by example of a different kind of open source. It's not a barrier I would struggle to pierce, given my own personal view of code licences.
(I'm old enough to have read the GNU Manifesto when it was first published, btw.)
I have the feeling you're arguing against and about something I never said.
To clarify, I'll restate: "I believe Linus has even opined that any filesystem which was developed after Linux, whose developers are aware of Linux, could be considered a 'derived work'. [The view that any new filesystem, simply aware of, but created independent of, and after Linux, is a derived work of Linux] is of course ridiculous,..."
> It's why black box reverse engineering exists and is generally legal, while white box reverse engineering is generally illegal.
Oh, I agree a clean room implementation is generally the best legal practice. I am just not sure there are cases on point that always require a clean room implementation, because I am aware of cases which expressly don't require clean room implementations (see Sony v. Connectix and Sega Enterprises Ltd. v. Accolade, Inc.). And, given the factual situation has also likely changed due to FOSS and the Internet, I am saying some of these questions are likely still open, even if you regard them as closed.
I agree, some of the Linux people have a very broad notion of what counts as a derived work, but I haven't seen much in the way of actual case law to support the conclusion that "white box reverse engineering is generally illegal".
Software generally receives wide protection for its 'non-literal elements', but it's not the case that every possible iota of its function falls under its protectable expression. Indeed, plenty of U.S. courts have subscribed to the "abstraction-filtration-comparison test" which explicitly describes how some characteristics of a program can be outside the scope of its protection.
But in practice, many of the big software copyright cases have been about literal byte-for-byte copying of some part of a program or data structure, so that the issue comes down to either contract law or fair-use law (which Sega v. Accolade and Google v. Oracle both fall under). The scope of fine-grained protection for 'non-literal elements' seems relatively unexplored, and the maximalist interpretation has seemingly proliferated from people wanting to avoid any risk of legal trouble.
> redox is a pretty cool experimental piece of software that might be the os of the future, why not do it there?
Because people want to use Rust where they use C, right now? While yours is a perfectly fine criticism, it ignores that people want the good stuff, everywhere, in the things they actually use every day. And since this is something the project lead wants to do, this doesn't seem to be a problem or a live issue.
> 2025-08-11 NVIDIA reiterated the request to postpone disclosure until mid-January 2026.
> 2025-08-12 Quarkslab replied that the bugs were first reported in June 18th and mid-January was well past the standard 90 day normally agreed for coordinated disclosure and that we did not see a rationale for postponing publication by, at a minimum, 3 months. Therefore Quarkslab continued with the publication deadline set to September 23rd 2025 and offered to extend the deadline an additional 30 days provided NVIDIA gave us some insights about the full scope of affected products and if the fixes are to be released as a stand alone security fix, as opposed to rolled into a version bump that includes other code changes.
The richest corporation in the world needs 7 months to remedy this? Why not 4 years?