
Marcan links to an email by Ted Ts'o (https://lore.kernel.org/lkml/20250208204416.GL1130956@mit.ed...) that is interesting to read. Although it starts on a polarising note ("thin blue line"), it does a good job of explaining the difficulties that Linux maintainers face and why they make the choices they do.

It makes sense to be extremely adversarial about accepting code because they're on the hook for maintaining it after that. They have maximum leverage at review time, and 0 leverage after. It also makes sense to relax that attitude for someone in the old boys' network because you know they'll help maintain it in the future. So far so good. A really good look into his perspective.

And then he can't help himself. After being so reasonable, he throws shade on Rust. Shade that is, unfortunately, just false.

- "an upstream language community which refuses to make any kind of backwards compatibility guarantees" -> Rust has had a stability guarantee since 1.0 in 2015. Any backwards incompatibilities are explicitly opt-in through the edition system, or are fixes for compiler bugs.

- "which is actively hostile to a second Rust compiler implementation" - except that isn't true? Here's a maintainer of the gccrs project (a second Rust compiler implementation), posting on the official Rust blog -> "The amount of help we have received from Rust folks is great, and we think gccrs can be an interesting project for a wide range of users." (https://blog.rust-lang.org/2024/11/07/gccrs-an-alternative-c...)

This is par for the course, I guess, and it's what exhausts folks like marcan. I wouldn't want to work with someone like Ted Ts'o, who clearly has a penchant for flame wars and isn't interested in being truthful.




> And then he can't help himself. After being so reasonable, he throws shade on Rust. Shade that is, unfortunately, just false.

Many discussions online (and offline) suffer from a huge group of people who just can't stop themselves from making their knee-jerk reactions public, and then never think about it again.

I remember the "Filesystem in Rust" video (https://www.youtube.com/watch?v=WiPp9YEBV0Q&t=1529s) where there are people who misunderstand what the "plan" is, and argue against being forced to use Rust in the Kernel, while the speaker is literally standing in front of them and saying "no one will be forced to use Rust in the Kernel".

You can literally shove facts in someone's face, and they won't admit to being wrong or to misunderstanding; instead they continue to argue against points whose premise isn't even true.

I personally don't know how to deal with this either, and tend to just leave/stop responding when it becomes clear people aren't looking to collaborate/learn together, but instead just wanna prove their point somehow and that's the most important part for them.


If you watch that YouTube link, you'll see the same guy, Ted Ts'o, accusing the speaker of wanting to convert people to the "religion promulgated by Rust". I think he apologised for this flagrant comment, but this email shows he hasn't changed his behaviour in the slightest.


His email seems very reasonable to me (the thin-blue-line comment is a bit weird though). To me the problem is that some Rust people seem to expect that the Linux maintainers (who put in a tremendous amount of work) just have to go out of their way to help them achieve their goals, even if the maintainers are not themselves convinced about it and later have to carry the burden.


How many times will this need to be said: the Rust maintainers have committed to handling all maintenance of Rust code, and handling all breakage of their code by changes on the C side. The only "burden" the C maintainers have to carry is to CC a couple of extra people on commits when APIs change.


As of today, the burden is uncertain, and the Rust crowd has not been fixing things quickly enough, since the fixes are manual:

https://lore.kernel.org/rust-for-linux/20250131135421.GO5556...

> Then I think we need a clear statement from Linus how he will be working. If he is build testing rust or not.

> Without that I don't think the Rust team should be saying "any changes on the C side rests entirely on the Rust side's shoulders".

> It is clearly not the process if Linus is build testing rust and rejecting PRs that fail to build.

For clarity, tree-wide fixes for C in the kernel are automated via Coccinelle. Coccinelle for Rust is constantly unstable and broken which is why manual fixes are required. Does this help to explain the burden that C developers are facing because of Rust and how it is in addition to their existing workloads?
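For readers unfamiliar with Coccinelle: a semantic patch describes a tree-wide transformation declaratively, and the `spatch` tool applies it everywhere at once. A classic illustrative example (a sketch adapted from the well-known kmalloc/kzalloc case in the Coccinelle tutorials, not a patch from this thread) collapses an allocate-then-zero pair into a single call:

```
@@
expression x, size, flags;
@@
- x = kmalloc(size, flags);
- memset(x, 0, size);
+ x = kzalloc(size, flags);
```

With a rule like this, an API change can be propagated across thousands of C files mechanically; without an equivalently stable tool on the Rust side, the same class of change becomes a manual, file-by-file job.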


> For clarity, tree-wide fixes for C in the kernel are automated via Coccinelle. Coccinelle for Rust is constantly unstable and broken which is why manual fixes are required. Does this help to explain the burden that C developers are facing because of Rust and how it is in addition to their existing workloads?

Yes actually, I really wish someone would bring that sort of thing to the forefront, because that's a great spot to welcome new contributors.


They've said that, but nobody believes them, and can you blame them given we JUST saw another big rust maintainer resign?

I'd be suspicious that these guys aren't in it for the long haul and will leave the Rust they shoved into the kernel to bit rot if adoption doesn't happen fast enough. "If you don't let us add even more Rust, we will resign from the project and leave you to maintain the Rust that's already there, that we added, and that you said you didn't want to add because you didn't trust us not to resign."

The Rust for Linux people are just proving the point of the maintainers who are scared of abandonment.

The Rust for Linux people unfortunately give the impression of caring more about Rust than about the kernel, and many have made that perfectly clear by abandoning the larger project.

The whole thing needs to be scrapped and rethought with better and more committed leadership. The past six months to a year have been embarrassing and have done nothing but confirm the fears of the anti-Rust people.


These people were quitting after being harassed. So I do not think you are arguing in good faith.


This is not what the "Rust kernel policy" [1] of Rust for Linux says.

[1]: https://rust-for-linux.com/rust-kernel-policy

> Who is responsible if a C change breaks a build with Rust enabled?

> The usual kernel policy applies. So, by default, changes should not be introduced if they are known to break the build, including Rust.

> Didn't you promise Rust wouldn't be extra work for maintainers?

> No, we did not. Since the very beginning, we acknowledged the costs and risks a second language introduces.


You conveniently left out

> However, exceptionally, for Rust, a subsystem may allow to temporarily break Rust code. The intention is to facilitate friendly adoption of Rust in a subsystem without introducing a burden to existing maintainers who may be working on urgent fixes for the C side. The breakage should nevertheless be fixed as soon as possible, ideally before the breakage reaches Linus.


Yes, "breakage should be fixed as soon as possible". Not "Rust for Linux team will fix the breakage as soon as possible".

The exception is allowing the subsystem to break Rust code temporarily. If you accept a patch in C that breaks Rust code, and the Rust for Linux team doesn't fix it quickly enough, you either need to fix the Rust code yourself, remove it, or re-write it in C. All of this would take time and energy from all the non-R4L kernel devs.

This is why people are reluctant to accept too much mixing of the C and Rust codebases, because even the Rust for Linux team isn't promising to fix breakages in Rust for Linux code.


Just to be clear, this is the situation. A rando submits a C patch that breaks Rust code, a maintainer accepts this patch and then demands that the R4L devs fix the breakage introduced by someone else and reviewed by themselves. The rando who broke the thing isn't around, and the person who reviewed the change takes no responsibility.

Have I gotten that right?

And then you're presenting this situation as "the Rust for Linux team isn't promising to fix breakages in Rust for Linux code". Somewhat disingenuous.


That Rust won't cause extra work for C developers is exactly what people are claiming. This is from the comment I originally replied to.

> The Rust maintainers have committed to handling all maintenance of Rust code, and handling all breakage of their code by changes on the C side. The only "burden" the C maintainers have to carry is to CC a couple of extra people on commits when APIs change.

But this is not actually true, it seems. Even the Rust for Linux policy doesn't say this. But because of the incorrect statement that keeps getting repeated, people are calling kernel devs unreasonable for being reluctant to accept Rust patches.


> A rando submits a C patch that breaks Rust code, a maintainer accepts this patch and then demands that the R4L devs fix the breakage introduced by someone else and reviewed by themselves. The rando who broke the thing isn't around, and the person who reviewed the change takes no responsibility.

Well, firstly, "randos" aren't getting their patches easily accepted anyway.

And secondly, what's the problem with this? You want one of the following options:

1. Everyone who wants to submit a patch also be proficient in Rust,

Or

2. You want the reviewer to also be proficient in Rust

You don't think that's an unnecessary burden for the existing maintainers?

The burden should be on the people who want to introduce the second language.


Why? You seem to be describing one potential scenario where Rust creates additional work.


You just proved the C devs' point.


This is not how the kernel works. You cannot rely on someone's "commitment" or "promise". Kernel maintainers want to have very good control over the kernel, and they want strong separation of concerns. As long as this is not delivered, it will be very hard to accept the Rust changes.


At some level this is just concern trolling. There is nothing the Rust developers could possibly do or say that would alleviate the concern you've just expressed. You are asking for something that is impossible.

What could they possibly "deliver" beyond a strong commitment to fix the code in a timely manner themselves?


It is not concern trolling. It is a harsh disagreement.

Some kernel developers really do feel that any Rust in the kernel will eventually mean that Rust gets accepted as a kernel language, and that they will eventually have to support it, and that the only way to prevent this is to stop any Rust development right now.

And yes, there's nothing that the R4L group can offer to get around that belief. There isn't any compromise on this. Either Rust is tried, then spreads, then is accepted, or it's snuffed out right now.

A big mistake by R4L people is seeing anti-Rust arguments as "unfair" and "nontechnical." But these are highly technical arguments about the health of the project (though sometimes wrapped in abusive language). Rust is very scary, and calling out scared people as being unfair is not effective.


OP said "As long as this is not delivered"

There is nothing to deliver that would satisfy this argument. Pretending like the disagreement is about a failure of the R4L folks to do "enough" when in fact there is nothing they could do is toxic behavior.

If you go back digging in the LKML archives, Christoph's initial response to Rust was more of a "let's prove it can be useful first with some drivers"

https://lore.kernel.org/lkml/YOVNJuA0ojmeLvKa@infradead.org/

https://lore.kernel.org/lkml/YOW2auE24e888TBE@infradead.org/

That has now been done. People (particularly Marcan) spent thousands of hours writing complex and highly functional drivers in Rust and proved out the viability, and now the goalposts are being moved.

R4L people are allowed to get upset about people playing Lucy-with-the-football like this and wasting their f***ing time.


Yeah, that's an interesting point that even Christoph sounded open to it in the past. Thank you!


> There is nothing the Rust developers could possibly do or say that would alleviate the concern you've just expressed.

They could do exactly what Ted Ts'o suggested in his email [1] that Marcan cited: They could integrate more into the existing kernel-development community, contribute to Linux in general, not just in relation to their pet projects, and over time earn trust that, when they make promises with long time horizons, they can actually keep them. Because, if they can't keep those promises, whoever lets their code into the kernel ends up having to keep their promises for them.

[1] https://lore.kernel.org/lkml/20250208204416.GL1130956@mit.ed...


Many of them have, in fact, done all of those things, and have done them over a time horizon measured in years. Many of the R4L developers are paid by their employers specifically to work on R4L and can therefore be considered reasonably reliable and not drive-by contributors.

Many existing maintainers are not "general contributors"

It is unreasonable (and a recipe for long-term project failure) to expect every new contributor to spend years doing work they don't want to do (and are not paid to do) before trusting them to work on the things they do want (and are paid) to do.

Christoph refused to take onboard a new maintainer. The fight from last August was about subsystem devs refusing to document the precise semantics of their C APIs. These are signs of fief-building that would be equally dangerous to the long-term health of the project if Rust was not involved whatsoever.


I disagree. If you want to provide technical leadership by massively changing the organization and tooling of a huge project that has been around a long time, it should be absolutely mandatory to spend years building trust and doing work that you don't want to do.

That's just how programming on teams and trust and teamwork actually works in the real world. Especially on a deadly serious not-hobby project like the kernel.

Sometimes you are gonna have to do work that doesn't excite you. That's life doing professional programming.

Everything Ted Ts'o recommended is just common-sense teamwork-101 stuff, and it's generally good advice for programmers in their careers. The inability of Rust people to follow it will only hurt them and doom their desire to be accepted by larger, more important projects in the long run. Programming on a team is a social affair, and pretending you don't have to play by the rules because you have such great technical leadership is arrogant.


Wedson had been working in the kernel for 4.5 years!


> It is unreasonable (and a recipe for long-term project failure) to expect every new contributor to spend years doing work they don't want to do (and are not paid to do) before trusting them to work on the things they do want (and are paid) to do.

It is absolutely reasonable if the work they want to do is to refactor the entire project.


It's like saying people cannot add, for example, an NPU subsystem to the kernel because they should first work for 10 years in other subsystems, like filesystems, about which they know little.

Sounds absurd? Just replace the subsystems above with C/Rust and the rest is the same.

The folks that maintain Rust are responsible for the Rust code; if they don't deliver what is needed, their Rust subsystem will fail, not the C codebase, so it's in their own interest to keep things smooth.

My feeling is that some people think C is the elite language and Rust is just something kids like to play with nowadays; they do not want to learn why some folks like the language or what it is even about.

I think it's the same discussion as when Linux people hate systemd: they usually have a single argument, that it's against the Unix spirit, and no other arguments, without understanding why others may like that init system.


> It's like saying people cannot add, for example, an NPU subsystem to the kernel because they should first work for 10 years in other subsystems, like filesystems, about which they know little. Sounds absurd? Just replace the subsystems above with C/Rust and the rest is the same.

No it's not. What you're missing is that if the Rust folks are unable, for whatever reasons, to keep their promises, it falls on the up-tree maintainers to maintain their code. Which, being Rust code, implies that the existing maintainers will have to know Rust. Which they don't. Which makes it very expensive for them to keep those broken promises.

To look at it another way, the existing maintainers probably have a little formula like this in their heads:

Expected(up-tree burden for accepting subsystem X) = Probability(X's advocates can't keep their long-term promises) * Expected(cost of maintaining X for existing up-tree maintainers).

For any subsystem X that's based on Rust, the second term on the right hand side of that equation will be unusually large because the existing up-tree maintainers aren't Rust programmers. Therefore, for any fixed level of burden that up-tree maintainers are willing to accept to take on a new subsystem, they must keep the first term correspondingly small and therefore will require stronger evidence that the subsystem's advocates can keep their promises if that subsystem is based on Rust.

In short, if you're advocating for a Rust subsystem to be included in Linux, you should expect a higher than usual evidence bar to be applied to your promises to soak up any toil generated by the inclusion of your subsystem. It’s completely sensible.
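The expected-burden formula above can be made concrete with a toy calculation (all numbers are invented purely to illustrate the structure of the argument, not taken from any real estimate):

```python
# Toy illustration of the expected-burden argument above.
# The numbers are made up; only the shape of the formula matters.

def expected_burden(p_promises_broken: float, cost_if_broken: float) -> float:
    """E[up-tree burden] = P(advocates can't keep promises) * E[cost to maintainers]."""
    return p_promises_broken * cost_if_broken

# Same hypothetical 20% abandonment risk, but the fallback cost is much
# higher when the up-tree maintainers don't know the subsystem's language.
c_subsystem = expected_burden(0.20, 10.0)     # maintainers already know C
rust_subsystem = expected_burden(0.20, 50.0)  # maintainers don't know Rust

# To accept the same expected burden as the C case, the Rust subsystem's
# advocates must drive the perceived abandonment risk down much further.
equivalent_risk = c_subsystem / 50.0
```

On these made-up numbers, the Rust subsystem carries five times the expected burden at the same abandonment risk, so its advocates would need to be trusted five times as much to break even. That's the "higher evidence bar" in arithmetic form.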


> What you're missing is that if the Rust folks are unable, for whatever reasons, to keep their promises, it falls on the up-tree maintainers to maintain their code.

But that's the thing, the deal was that existing maintainers do not need to maintain that code.

Their role is just to forward issues/breaking changes to the Rust maintainers in case those were omitted in CC.

You are using the same argument that was explained multiple times already in this thread: no one is forcing anybody to learn rust.


The point is that “the deal” assumes that the Rust folks will keep their promises for the long haul. Which kernel maintainers, who have witnessed similar promises fall flat, are not willing to trust at face value.

What if, in years to come, the R4L effort peters out? Who will keep their promises then? And what will it cost those people to keep those broken promises?

The existing kernel maintainers mostly believe that the answers to the questions are “we will get stuck with the burden” and “it will be very expensive since we are not Rust programmers.”


Isn't it the same as with support for old hardware? The Alpha arch, Intel Itanium, floppy drives?

Those are all in a similar situation, where there is no one to maintain them, as none of the maintainers have access to such hardware to even test whether it is working correctly.

From time to time we discover that such a thing hasn't been working at all for a long time, no one noticed, and it gets dropped from the kernel.

The same would happen to Rust if no one wanted to maintain it.

Rust for Linux is provided as an experimental thing, and if it doesn't gain traction it will be dropped, the same way curl dropped it.


The reason the maintainers can drop support for hardware nobody uses is that dropping support won't harm end users. The same cannot be expected of Rust in the kernel. The Rust for Linux folks, like most sensible programmers, intend to have impact. They are aiming to create abstractions and drivers that will deliver the benefits of Rust to users widely, eliminating classes of memory errors, data races, and logic bugs. Rust will not be limited to largely disposable parts of Linux. Once it reaches even a small degree of inclusion, it will be hard to remove without affecting end users substantially.


> You are using the same argument that was explained multiple times already in this thread: no one is forcing anybody to learn rust.

I think this sort of statement is what is setting the maintainers against the R4L campaigners.

In casual conversation, campaigners say "No one is being forced to learn Rust". In the official statements (see upthread where I made my previous reply) it's made very clear that the maintainers will be forced to learn Rust.

The official policy trumps any casual statement made while proselytising.

Repeating the casual statement while having a different policy comes across as very dishonest on the part of the campaigners when delivered to the maintainers.


The issue with systemd was that many people felt it was pushed onto them, while previously such things would just exist and get adopted gradually if people liked them. This model worked fine; e.g. there were many different window managers, editors, etc., and people just used what they liked. For init systems, distributions suddenly decided that only systemd was supported and left people who did not want it out in the cold. It is similar with Rust: it is not an offer, but something imposed onto people who have no interest in it (here: kernel maintainers).


If users of other init systems don't want to make the substantial investment in maintaining support for those other init systems, then their complaints aren't worth much.


To start, not resigning when things don't go their way. That tendency does a lot to make the Rust people's claim that they will handle the burden of Rust code unbelievable.


The standard procedure is to maintain a fork/patchset that does what you want and you maintain it for years proving that you will do the work you committed to.

Once it’s been around long enough, it has a much better chance of being merged to main.


That has already been the case with Asahi Linux - for years. It exists as a series of forked packages.

The thing is, you do still have to present a light at the end of the tunnel. If, after years of time investment and proven commitment, you're still being fed a bunch of non-technical BS excuses and roadblocks, people are going to start getting real upset.


However, it may only get merged in by being conceptually re-thought and reimplemented, like the Linux USB or KGI projects back in the day.

The general pushback for changes in Linux is against large, impactful changes. They want your code to be small fixes they can fully understand, or drivers that can be excluded from the build system if they start to crash or aren't updated after an API change.

You can't take a years-maintained external codebase and necessarily convert it to an incremental stream of small patches and optional features for upstream maintainers, unless you knew to impose that sort of restriction on yourself as a downstream maintainer.


As a non-interested observer; I think it will need to be said until the commitment becomes credible. I don't know how it would become credible, but it's clearly not considered credible by at least those of the kernel maintainers who are worried about the maintenance burden of accepting rust patches.


That doesn't line up with https://rust-for-linux.com/rust-kernel-policy#who-is-respons... (which may be a recent change?)


I don't think it is this simple.


> the Rust maintainers have committed to handling all maintenance of Rust code, and handling all breakage of their code by changes on the C side.

How have they "committed"? By saying they commit[1], I presume -- but what more? Anyone can say anything. I think what makes the "old guard" kernel maintainers nervous is the lack of a track record.

And yes, I know that's a kind of a lifting-yourself-by-your-bootstraps problem. And no, I don't know of any solution to that. But I do know that, like baron Münchhausen, you can't ride your high horse around the swamp before you've pulled yourself out of it.

___

[1]: And, as others in this thread have shown, that's apparently just out of one side of their collective mouth: The official "Rust kernel policy" says otherwise.


> "the thin-blue-line comment is a bit weird though"

In the US, "thin blue line" is a colloquialism for police officers, who typically wear blue and "hold the line." You should not be downvoted/shadowbanned/abused for your post, IMHO.


> Ted Ts'o accusing the speaker of wanting to convert people to the "religion promulgated by Rust"

Given the online temper tantrum thrown by marcan, Ted Ts'o's comment seems totally reasonable, regardless of one's opinion of Rust in the Linux kernel.


Ted gave that rant 6 months ago, and it was in fact unreasonable if you look at the details of what they were discussing.

You're trying to use Marcan's ragequit to ex post facto justify Ted Ts'o when it's literally the other way around.


> tharne | on: Resigning as Asahi Linux project lead
>
> > Ted Ts'o accusing the speaker of wanting to convert people to the "religion promulgated by Rust"

> That seems totally reasonable. Putting aside the technical merits of the Rust language for the moment, the Rust community suffers from many of the same issues currently hobbling the Democratic Party in the United States. Namely, it often acts like a fundamentalist religion where anyone who dares dissent or question something is immediately accused of one or another moral failings. People are sick of this nonsense and are willing to say something about it.

It's really interesting that every time I open a thread like this, countless people come out swinging with this claim that Rust is totally this religion and cult, while the rest of the thread will be full of C evangelism and vague rhetoric about how nothing like this ever works, while actively contributing to making sure it won't work this time either.

In 99% of the insufferable Rust vs. C interactions I've come across, it was the C fella being the asshole. So sorry, but no, not very convincing or "totally reasonable" at all.


> In 99% of the insufferable Rust vs. C interactions I've come across, it was the C fella being the asshole. So sorry, but no, not very convincing or "totally reasonable" at all.

This has also been my observation as a C++ developer who finds themselves in a fair few C/C++-aligned spaces. There are exceptions, but in most of those spaces the amount of Rust Derangement Syndrome I've witnessed is honestly kind of tiresome at this point.


> Rust Derangement Syndrome

Thank you for proving me so right so readily.


...they were agreeing with you, not being insufferable.


> ...they were agreeing with you, not being insufferable.

Those two things are not mutually exclusive :-)


Quite frankly, if I had the realization that, despite assurances to the contrary, my contributions to a project had been sabotaged for months or even years up to that point, I would also have had a hard time keeping a smile on my face.

This is ultimately what this drama comes down to. Not if Rust should or shouldn't be in the kernel, but with kernel maintainers' broken promises and being coy with intentions until there is no other option than to be honest, with the reveal that whatever time and effort a contributor had put in was a waste from the start.

It seems like the folks who didn't want Rust in the kernel will be getting their way in the end, but I had better never hear another complaint about the kernel not being able to attract new talent.


I can't believe you're the first person I find in this conversation who raises this issue. This is the exact reason why Marcan flipped his lid. Linus publicly championed a very technically complex initiative and then left all those contributors to the wolves when things didn't progress without a hiccup. Especially damning when you consider that at every step, the fief lords in Linux have seemingly done everything in their power to set up the r4l people for failure and Linus hasn't so much as squeaked at them. He personally cut the knot and asserted that Rust is Linux's future, but he constantly allows those below him to relitigate the issue with new contributors (who can't fight back because even though they're contributing by the supposed rules, they don't have enough social buy-in).


> He personally cut the knot and asserted that Rust is Linux's future,

When did he say that?

In any event, that could be true (Rust is Linux's future) while the statement "R4L is not in Linux's future" is also true.

IOW, in principle, I may agree with you on something. That doesn't mean I agree with your specific implementation.


> my contributions to a project had been sabotaged

I really wish people would stop throwing around the word "sabotaged". No one "sabotaged" anything. The opposition has been public from the beginning.

If I'm opposed to something, and someone asks my opinion about it in a private conversation, it is not "sabotage" to express my opinion. So far I haven't seen any evidence that those opposed to a mixed-language code base organized behind the scenes to hamper progress in any way. Instead, their opposition has been public and in most cases instant.

Are people not allowed to be opposed to things anymore?


[flagged]


> The same people community clutching pearls

This is an unfortunate typo because your meaning is completely lost.

If your claim is that individuals are being hypocritical then you may have a point. Especially if you can produce examples.

But if you mean community vs community then you have simply bought in to the religious debate which isn’t interesting.


[flagged]


[flagged]


I don't care about Rust but calling a whole community with tens or hundreds of thousands of people toxic will generally get a downvote from me.


[flagged]


The distinction in this particular case is that Rust tried to "downvote" an LKML discussion to get code merged. No one much cares about rendering color in HN comments, except to the extent that we're having this discussion (which you found valuable enough to contribute to!) in an invisible flagged subthread because Rust people don't want to have it, apparently.

It's just tiresome. And it boiled over here, because no matter how enthusiastic the Rust people can be, their youthful exuberance pales in influence in comparison with the talent and impact of the Linux kernel maintainers. And the resulting tantrum shows it.


> The distinction in this particular case is that Rust tried to "downvote" an LKML discussion to get code merged.

You're attributing to "the Rust community" an imaginary offense that did not actually happen that way and couldn't be attributed that way even if it did. And then you make claims about how "the Rust community" is toxic. Right.


We have used that video as an exercise in how not to achieve change. Assuming everyone was acting in good faith: the presenter missed the opportunity to build consensus before the talk, Ts'o was unwilling to budge a bit, but most of all the moderator was unable to prevent the situation from exploding. This could have been handled much better by each of them.

In contrast to the parent: yes, the presenter says "you don't have to use Rust, we are not forcing you", but he fails to address the concern that a change they introduce could cause errors downstream that someone else would have to clean up afterwards.


> In contrast to the parent: yes, the presenter says "you don't have to use Rust, we are not forcing you", but he fails to address the concern that a change they introduce could cause errors downstream that someone else would have to clean up afterwards.

He did not fail to address that concern. And then Ted shouted him down for 2 minutes such that he couldn't get 2 syllables in to respond.


> We have used that video as an exercise in how not to achieve change

I'm not disagreeing with anything you said, just curious who the "we" you're referring to is. Are you a kernel developer or something similar?


> Assuming everyone is acting in good faith

Why would we consider Ted repeatedly using strawman fallacies, bleating appeals to emotion, and acting like a victim, all the while shouting people down, to be evidence of "acting in good faith"?

When you shout over someone like that you're nothing but a bully.

> he fails to address the concern that a change they introduce would error downstream and someone else had to clean up afterwards.

Because that "concern" was a strawman. It demonstrated that Ted either did not understand what the presenters were asking for, or simply didn't like others asking him to do something, because he's very important and nobody tells him what to do.

As has been exhaustively explained by others in previous HN threads and elsewhere: the Rust developers were asking to be informed of changes so that Rust developers could update their code to accommodate the change.

Ted loses his shit and starts shouting nonsense about others forcing people to learn Rust, and so on.

> but most of all the moderator unable to prevent the situation from exploding

When someone is being abusive to others, the issue is never "the people on the receiving end are not handling it as best they can."

Further: did it occur to you that Ted's infamous short temper, and his "status" as a senior kernel developer, might be why the moderator was hesitating to respond?

Imagine how Ted would have reacted if he had been told to speak respectfully, lower his voice, and stop talking over others. Imagine how the army of nerds who think Ted's behavior was acceptable or understandable would have reacted.


I don't understand how abusive bullies like Ted are allowed the privilege of being a senior kernel developer. This feels, in the end, like the fault of Linus, for allowing abusive maintainers to maintain their grip.


Linus was the original abusive bully maintainer, that's how. He's improved his personal use of language, but the culture that he initiated continues unabated. Linux's existing success as a project is used as evidence that it doesn't need any changes to the kernel maintainers' culture.


Up until recently (assuming you believe he's genuinely reformed), Torvalds was also one of those abusive bullies, remember.


I'm not a Rust or C developer.

> As has been exhaustively explained by others in previous HN threads and elsewhere: the Rust developers were asking to be informed of changes so that Rust developers could update their code to accommodate the change.

I don't understand why you don't see this as "a really big deal". The C developers make a breaking change. They fix all the C code, then they write an email to the Rust devs explaining the changes.

Then the process of making the change stops, and the C devs have to wait for a Rust dev to read the email, review the C changes, fix and test the resulting rust, and check in the update. (including any review process there is on the rust side.)

Is it hours, days, or weeks? Are there two people who know and can fix the code, or are there hundreds? Do the C devs have visibility into the Rust org to know it's being well run and risks are mitigated?

This is adding a hard dependency on a third party organization.

I would never dream of introducing this kind of dependency in my company or code.


This is kernel development we're talking about. It progresses carefully, not at the breakneck pace of a continuous-integration SaaS platform that is single-minded about pushing features out as quickly as possible.

A better analogy would be like an API inside of a monolithic app that has multiple consumers on different teams. One team consumes the API and wants to be notified of breaking changes. The other team says "Nah, too much work" and wants to be able to break the API without worrying about consequences.

If having multiple consumers of an API or interface is a goal, you make communication a priority.


> Why would we assume that Ted […] acting in good faith?

Because he has done more for Linux than you ever will. Therefore, he gets all the benefit of the doubt, and you are assumed wrong.


> "no one will be forced to use Rust in the Kernel"

Is this true, though? One reason for this altercation seems to be the basic circumstance that in Linux kernel development, if there is a dependency between two pieces of code A and B, the responsibility to keep B consistent with changes to A lies, in order, with anyone proposing patches to A, the subsystem maintainer for A, and finally the subsystem maintainer for B. If B is Rust code, such as a binding, then that's potentially up to 3 people who don't want to use Rust being forced to use Rust.


They're not "forced to use Rust". They are maybe forced to work with Rust developers of whichever subsystem needs to be updated, but that would always have been the case with the C developers of whichever subsystem needs to be updated too.


I don't think that is a correct interpretation. As I understand it, Linux does not have a notion of someone being obliged to help facilitate a patch, especially if it's not the narrow case of a maintainer and a patch to the subsystem they are in charge of. What do you do if you are a C developer modifying system A, your change has implications for system B which includes Rust code, and none of the Rust developers involved with B care to volunteer time to draft the necessary changes to Rust code for you?

The same situation of course also arises between C-only subsystems, but then the natural solution is that you have to go and understand system B well enough yourself that you can make the necessary changes to it and submit them as part of your patch. In that situation you are "forced to use C", but that's a free square because you are always forced to use C to contribute to Linux code.


>They're not "forced to use Rust". They are maybe forced to work with Rust developers of whichever subsystem needs to be updated

So if the maintainer of subsystem X can be forced to work with the rust developers of their own subsystem, then that rust developer just got promoted to co-maintainer with veto power. Effectively that's what they'd be, right? I can see why maintainers might not like that. Especially if they don't think the rust dev is enough of a subject matter expert on the subsystem.


If a subsystem C developer makes a change and introduces a bug in another driver or subsystem (also written in C) as a result, then you would expect them to be able to help at least insofar as explaining what they changed.

That isn't "effective co-maintainership".


I've been in a spot kinda like this. I've maintained C++ with Python interfaces. In my case I wrote both, so I know how interlocked the changes were. If I touched code that was exposed to the Python side, I updated the Python interface and the consumers of that Python interface.

It was nothing like making changes that cut across into another developer's C++ code (hell, I would even update their Python interfaces/consumers too). That was temporary coordination. The Python part was much more frequent and required a much more detailed understanding of the internal APIs, not just the surface.

Having someone else responsible for the Python part would have come at a huge cost to velocity, as the vast majority of my changes would be blocked on their portion. It's ridiculous to imply it's equivalent to coordinating changes with another subsystem.


It's absolutely not true, it's one of the lies being told by Rust 4 Linux people. The end goal is absolutely to replace every last line of C code with Rust, and that's what they will openly tell you if you speak to them behind closed doors. That's why there is always an implicit threat directed at the C maintainers about job loss or "being on the right side of history". The Rust 4 Linux people are absolutely attempting a hostile takeover and nobody should believe a word that comes out of their mouths in public mailing lists when they are contradicting it so consistently behind closed doors.


> You can literally shove facts in someone's face, and they won't admit to being wrong or misunderstand, and instead continue to argue against some points whose premise isn't even true.

This is like probably 80% of people and fundamentally why the world is a hellscape instead of a utopia.


Maybe try not shoving things into people's faces and you'll find them to be much friendlier.


Nah, that's not the case. Not for that 80%. And even if they are friendlier, being willfully ignorant with a smile and a nod doesn't make it better.


The speaker doesn't understand the audience question and doesn't respond to it.

The audience member points out that they shouldn't encode the semantics into the Rust type system because that would mean that refactoring the C code breaks Rust, which is not an acceptable situation. The speaker responds to this by saying essentially "tell me what the semantics are and I'll encode them in the Rust type system." That's maximally missing the point.

The proposal would cause large classes of changes to C to break the build, which would dramatically slow down kernel development, even if a small handful of Rust volunteers agree to eventually come in and fix the build.

> You can literally shove facts in someone's face, and they won't admit to being wrong or misunderstand, and instead continue to argue against some points whose premise isn't even true.

I have to say that I used to be excited about Rust, but the Rust community seems very toxic to me. I see a lot of anger, aggression, vindictiveness, public drama, etc. On HN you not infrequently see downvoting used to indicate disagreement. These clashes with the Linux maintainers look really bad for Rust to me. So bad that I'm pretty convinced Rust as a language is over if they're no longer arguing the technical merits and are instead banging on the table.

I'm sure there are great things about the community. But I would encourage the community to have higher standards of behavior if they want to be taken seriously. The Linux team seem like they're trying to look beyond the childishness because they are optimistic about the technical merits, but they must be so tired of the drama.


> I have to say that I used to be excited about Rust, but the Rust community seems very toxic to me. I see a lot of anger, aggression, vindictiveness, public drama, etc.

I had the same impression.

Why is all this drama, 90% of the time, around Rust people?


Because many of them are still relatively young. There is nothing wrong with youth, but it can contribute to over-zealousness.

Also, for many of them, Rust is the first systems language they've ever touched. And that fact alone excites them. Because now they can "dream big" too.

But they have bought into the idea that C/C++ are insecure by default and therefore garbage. In their mind, no mortal could ever write so much as a single safe function in those languages. So their points of view are always going to be based on that premise.

What they fail to recognize is that an operating system kernel, by virtue of the tasks it has to perform (things like mapping and unmapping memory, reading/writing hardware registers, interacting with peripherals, initiating DMA transfers, context switching, etc.), has effects on the underlying hardware and the runtime environment; effects that neither the type system nor the temporal memory safety of Rust can model, because they happen at a level lower than the language itself. Rust's safety guarantees are helpful, but they are not infallible at that level. The kernel literally changes the machine out from under you.

They further fail to appreciate the significant impedance mismatch between C and Rust. When one language has concepts that are in fact constraints that another language simply does not have, there is going to be friction around the edges. Friction means more work. For everyone. From planning to coding to testing to rollout.

So you have well-intentioned, excited, but still self-righteous developers operating from what they perceive to be a position of superiority, who silently look down upon the C developers, and behave in a manner that (to outsiders at least) demonstrates that they really do believe they're better, even if they don't come right out and say it.

Just read the comments in any thread involving Rust. It is inconceivable to them that anybody would be so stupid or naive as to question the utility of the Rust language. To them, the language is unassailable.

Add the petty drama and social-media brigading on top of it, along with the propensity to quit when the going gets tough, and it's pretty easy to see why some people feel the way they do about the whole thing.

A programming language is not a religion. It is not a way of life. It is a tool. It's not like it's a text editor or something.


> A programming language is not a religion. It is not a way of life. It is a tool. It's not like it's a text editor or something.

I really hope that last sentence was a joke.


It was :)


> But they have bought into the whole C/C++ are by default insecure and therefore garbage. In their mind, no mortal could ever write so much as a single safe function in those languages.

No one thinks this except some strawman that you've devised. No point in reading anything else in this comment when this is so blatantly absurd and detached from reality.


I'm sorry sir/ma'am, but that is simply not true.

All you have to do is read comments from members of the Rust community online, in every public forum where Rust is discussed in any way.

Understand, I am not trying to villainize an entire community of software developers; but for you to say something that's blatantly false is to just stick your head in the sand.

You should try and read the words people write. Opinions are not formed in a vacuum.

Edit: to be clear- I have no problems with Rust the language beyond some ergonomic concerns. I am not a Rust hater, nor am I a zealot. I do advocate for C# a lot for application code though. But I do not deride others' language preferences. You should not dismiss my observations because I used hyperbole. Obviously not every Rust dev thinks you can't write a secure C/C++ function; don't pick out the one hyperbolic statement to discredit my entire post. Bad form.


The kernel is not exactly known for being drama-free, and this drama didn't start with "Rust people", it started with Christoph.


Was Hellwig first, or Andre Hedrick, or maybe Hans Reiser?


Because those are all your brain is capable of recalling: the ones tagged with Rust.

A Google search away, you could dismiss your own claim of 90%, but you don't want to do it because you only believe what you want to believe.


> The audience member points out that they shouldn't encode the semantics into the Rust type system because that would mean that refactoring the C code breaks Rust, which is not an acceptable situation. The speaker responds to this by saying essentially "tell me what the semantics are and I'll encode them in the Rust type system." That's maximally missing the point.

You have to encode your API semantics somewhere.

Either you encode them at the type system and find out when it compiles, or you encode it at runtime, and find out when it crashes (or worse, fails silently).
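A minimal sketch of "encode it in the type system": the typestate pattern makes "read before init" a compile error instead of a runtime crash. All names here are invented for illustration, not taken from any kernel code.

```rust
use std::marker::PhantomData;

struct Uninit;
struct Ready;

// The state lives only in the type parameter; it costs nothing at runtime.
struct Device<State> {
    _state: PhantomData<State>,
}

impl Device<Uninit> {
    fn new() -> Self {
        Device { _state: PhantomData }
    }
    // Consumes the uninitialized device and returns a ready one.
    fn init(self) -> Device<Ready> {
        Device { _state: PhantomData }
    }
}

impl Device<Ready> {
    // `read` only exists on Device<Ready>, so misuse cannot compile.
    fn read(&self) -> u32 {
        42
    }
}

fn main() {
    let d = Device::new().init();
    println!("{}", d.read());
    // Device::new().read(); // compile error: no method `read` on Device<Uninit>
}
```

The runtime-check alternative would be an `is_initialized` flag plus a panic or error return, which is exactly the "find out when it crashes" path.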


I disagree; that isn't straightforwardly what they pointed out, because it is nonsense. Semantic changes can break anything, even if it's some intermediary API.

There is more breakage in Rust due to the type-system-related semantics, but ideally a C dev would also want their system to break if the semantics aren't right. So is this a criticism of C..?

So following this argument, they don't want Rust because C falls short? Nonsense.

edit: The speaker did mention that they didn't want to force limited use on the base APIs, but that for a great deal of their usage they could have determined fixed semantics and made intermediary APIs for them. So this was not about limiting the basic APIs.


Here are the software requirements (inferred from the commenter):

- (1) the C code will be refactored periodically

- (2) when refactored internally it can break C code, but the change author should fix any breakage in C

- (3) Rust must not break when (1) happens

It's the Rust devs' job to meet those requirements if they want to contribute. It looks in the video like they don't understand this, which is pretty basic.


> You can literally shove facts in someone's face, and they won't admit to being wrong or misunderstand, and instead continue to argue against some points whose premise isn't even true.

I think that's part of the gag.

"These people are members of a community who care about where they live... So what I hear is people caring very loudly at me." -- Leslie Knope

https://www.youtube.com/watch?v=areUGfOHkMA


>"These people are members of a community who care about where they live... So what I hear is people caring very loudly at me." -- Leslie Knope

that's a very healthy and - I feel - correct attitude towards this kind of criticism. I love when wisdom comes from stupid places.


It's quite a well-known piece of wisdom. I think someone at one of Nintendo's or Sony's studios has said it too, in the form of: a complaint is worth twice a compliment.

Satisfied customers will tell you they think your stuff is great, but dissatisfied customers will be able to hone in on exactly where the problem is.

You can even extend this to personal life: if someone tells you your shabby car doesn't fit with the nice suits you wear, you can either take it as a personal attack and get irritated, or take it as feedback and wash your car, spruce up the upholstery and replace the missing wheel cap. In effect they helped you take note of something.


One does not "hone in" on anything. To hone a thing is to make it sharper or more acute by removing parts of it with an abrasive. The word you are looking for is "home", as in a homing missile, etc.

Yes, this is a criticism. Hopefully it's twice as effective as being nice. 8)


Multiple dictionaries recognize the usage of "hone in" to mean "sharpening" your focus on something rather than "home in" which is to move towards something.


Dictionaries also (incorrectly) recognize "literally" to mean "figuratively". They aren't exactly a compelling source these days.


Show me one.

When "literally" is used in a figurative way, it's an intensifier. It means "very much". It never means "figuratively".


I went down a slight rabbit hole for this: apparently both are correct, although "hone in" doesn't seem to have a ground source and has gotten institutionalized in our lexicon over time.

By the way, I don't mind the nit at all! English is not my first language and I slip up occasionally, so refreshers are welcome :-)


You knew what they meant, which is clear if you’re able to correct the use of language accurately. This isn’t a criticism per se, but an acknowledgment that language evolves and part of the way it does that is acceptance that “incorrect” usage, once common enough, is seldom reversed.


You may not hone in on anything, but people who are better at English do.

This would be doubly ironic if you're a native English speaker. Are you?


"All happy customers are alike; each unhappy customer is unhappy in its own way" - Tolstoy.


this isn't true, of course, but it sounds good


It's almost true, though, in a way: The only difference is, Tolstoy wrote "families", not "customers".


> dissatisfied customers will be able to hone in on exactly where the problem is

This sounds like a truism, when it isn't. The client may know something is wrong, but good luck getting them to identify it. Sometimes the client will convince themselves that something is wrong when it isn't. There were people complaining about lag in WoW; the developers responded by cutting the latency number in half... except it wasn't cut in half, it was just measured as time to server rather than round trip. The complaints died out immediately, and they were hailed as "very savvy developers that listen to their customers".


Who asked him to fix Rust bindings or even look at the Rust code? Seems like an emotional nut.


> You can literally shove facts in someone's face, and they won't admit to being wrong or misunderstand, and instead continue to argue against some points whose premise isn't even true.

It's called a strawman fallacy, and like all fallacies, it's used because the user is either intellectually lazy and can't be bothered to come up with a proper argument, or there isn't a proper argument and the person they're using it against is right.


If an honest alien says "we don't want to convert humans to our religion" that means you can have whatever religion you want. If a dishonest alien says it, it might mean "we don't want to convert humans because we are going to kill all humans", it's selectively true - they aren't going to convert us - and leaves us to imagine that we can have our own religion. But it's not the whole truth and we actually won't be able to[1].

An honest "no one will be forced to use Rust in the Kernel" would be exactly what it says. A paltering reading could be "we want to make Rust the only language used in the Kernel but you won't be forced to use it because you can quit". i.e. if you are "literally shoving facts in someone's face" and they don't change then they might think you are not telling the whole truth, or are simply lying about your goals.

[1] https://en.wikipedia.org/wiki/Paltering


And yet, you are bringing a made-up analogy that suits you into the discussion.


> Rust has a stability guarantee since 1.0 in 2015. Any backwards incompatibilities are explicitly opt-in through the edition system, or fixing a compiler bug.

Unfortunately OP has a valid point regarding Rust's lack of commitment to backwards compatibility. Rust has a number of things that can break you that are not considered breaking changes. For example, implementing a trait (like Drop) on a type is a breaking change[1] that Rust does not consider to be breaking.

[1]: https://users.rust-lang.org/t/til-removing-an-explicit-drop-...
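A small sketch of that footgun, with invented names: moving a field out of a value compiles only while its type does not implement Drop, so adding the impl later breaks existing callers.

```rust
struct Token {
    value: String,
}

// Uncommenting this impl turns `take_value` below into a compile error
// (E0509: cannot move out of a type which implements the `Drop` trait):
//
// impl Drop for Token {
//     fn drop(&mut self) {}
// }

fn take_value(t: Token) -> String {
    t.value // a partial move: legal only because `Token` has no Drop impl
}

fn main() {
    let t = Token { value: String::from("abc") };
    println!("{}", take_value(t));
}
```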


I think we're mixing 2 things here: language backward-compatibility, vs. standard practices about what semver means for Rust libraries. The former is way stronger than the latter.


> language backward-compatibility, vs. standard practices about what semver means

I've read and re-read this several times now and for the life of me I can't understand the hair you're trying to split here. The only reason to do semantic versioning is compatibility...


I assume that they mean that you can use Rust as a language without its standard library. This matters here since the kernel does not use Rust's standard library as far as I know (only the core crate).

I'm not aware of semver breakage in the language.

Another important aspect is that Semver is a social contract, not a mechanical guarantee. The Semver spec dedicates a lot of place to clarify that it's about documented APIs and behaviors, not all visible behavior. Rust has a page where it documents its guarantees for libraries [0].

[0] https://doc.rust-lang.org/cargo/reference/semver.html


> Another important aspect is that Semver is a social contract, not a mechanical guarantee.

Although there are mechanical aids for it: https://crates.io/crates/cargo-semver-checks


The failure mentioned above wasn't a case of the language changing behaviour, but rather the addition of a trait impl in the standard library conflicting with a trait impl in a third-party crate, causing the build breakage.


The Rust compiler/language has no notion of semver. Saying "Rust is unstable b/c semver blah blah" is a tad imprecise. Semver only matters in the context of judging API changes of a certain library (crate).

> The only reason to do semantic versioning is compatibility

Sure. But "compatibility" needs to be defined precisely. The definition used by the Rust crate ecosystem might be slightly looser than others, but I think it's disingenuous to pretend that other ecosystems don't have footnotes on what "breaking change" means.


> But "compatibility" needs to be defined precisely.

Compatibility is defined precisely! Your definition requires scare quotes. You want to define it "precisely" so that you can permit incompatible behavior. No one who cares about compatibility does that; it's just an excuse.

Look, other languages do this differently. Those of us using C99 booleans know we need to include a separate header to avoid colliding with the use of "bool" in pre-existing code, etc... And it sort of sucks, but it's a solved problem. I can build K&R code from 1979 on clang. Rust ignored the issue, steamrollered legacy code, and tried to sweep it under the rug with nonsense like this.


I think you are trying very hard to disagree on basic stuff that works very similarly across different language ecosystems, and (looking at other responses) that you're very angry. Disengaging.


I'll point out again that C, the poster child for ancient language technology, has been able to evolve its syntax and feature set with attention to not breaking legacy code. Excusing the lack of such attention via linguistic trickery about "defining compatibility precisely" does indeed kinda irk me. And disengaging doesn't win the argument.


The fundamental issue here is that any kind of inference can have issues on the edges. If you write code using fully qualified paths all the time, then this semver footgun can never occur.
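A sketch of both sides of that point. A `.collect()` whose target type must be inferred is where a new upstream `FromIterator` impl (reportedly the new `Box<str>` impls in Rust 1.80, in the `time` incident) can make inference ambiguous; a fully qualified path pins the impl and cannot break that way.

```rust
use std::iter::FromIterator;

fn main() {
    // Fully qualified: resolution is fixed regardless of new upstream impls.
    let s = <String as FromIterator<char>>::from_iter("abc".chars());

    // Annotated target: also unambiguous. The historical breakage hit code
    // where the target type had to be inferred from later use, and a new
    // std impl made a second target type suddenly valid.
    let t: String = "abc".chars().collect();

    assert_eq!(s, t);
    println!("{s}");
}
```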


I was hit by a similar thing. Rust once caused regression failures in 5000+ packages due to incompatibility with older "time" packages [1]. It was considered okay. At that point, I don't care what they say about semver.

[1]: https://github.com/rust-lang/rust/issues/127343#issuecomment...


The comment you linked to explicitly shows that a maintainer does not consider this "okay" at all. T-libs-api made a mistake, the community got enraged, T-libs-api hasn't made such a mistake since. The fact that it happened sucks, but you can't argue that they didn't admit the failure.


"a maintainer"

The way you word that makes it sound like "the maintainers" and "T-libs-api" do not consider this "okay". Reading just above the linked comment, however, puts a very different impression of the situation:

> We discussed this regression in today's @rust-lang/libs-api team meeting, and agree there's nothing to change on Rust's end. Those repos that have an old version of time in a lockfile will need to update that.


You're reading an artifact of a point in time, before the change hit stable and the rest of the project found out about it. t-libs-api misunderstood the impact because in the past there had been situations that looked similar and were unproblematic to go ahead with, but weren't actually similar. There were follow-up conversations, both in public and private, where the consensus arrived at was that this was not ok.


What I'm hearing is that the nature of the issue was recognized - that this was a breaking change; but that the magnitude of the change and the scale of the impact of that break was underestimated.

TBH that does not inspire confidence. I would expect that something claiming or aspiring to exhibit good engineering design would, as a matter of principle, avoid any breaking change of any magnitude in updates that are not intended to include breaking changes.


Thanks for clarifying. I took a look as well, and the very first reply confirms your opinion and that of the GP's parent. Plenty of downvotes and comments that come after criticizing the maintainers, "I am not sure how @rust-lang/libs-api can look at 5400 regressions and say "eh, that's fine"."

Not sure why people are trying to cover this up.


It's not covering it up. The people that commented, including the one you quote, are part of the project.


You are sincere. I believe this is not a cover-up but more of a misunderstanding. Think of it this way: many people coming to that GitHub thread don't know who the core Rust devs are, but they can clearly see the second commenter is involved. That comment denied this being a major issue and concluded the decision was made as a team. To the public, and perhaps some kernel devs, this may be interpreted as the official attitude.


The change itself was very reasonable. They only missed the mark on how that change was introduced. They should have waited with it until the next Rust edition, or at least held back a few releases to give users of the one affected package time to update.

The change was useful, fixing an inconsistency in a commonly used type. The downside was that it broke code in 1 package out of 100,000, and only broke a bit of useless code that was accidentally left in and didn't do anything. One package just needed to delete 6 characters.

Once the new version of Rust was released, they couldn't revert it without risk of breaking new code that may have started relying on the new behavior, so it was reasonable to stick with the one known problem than potentially introduce a bunch of new ones.


But that is not how backwards compatibility works. You do not break user space. And user space is pretty much out of your control! As a provider of a dependency you do not get to play such games with your users. At least not, when those users care about reliability.


That was a mistake and a breakdown in processes that wasn't identified early enough to mitigate the problem. That situation does not represent the self-imposed expectations on acceptable breakage, just that we failed to live up to them, and by the time it became clearer that the change was problematic it was too late to revert course, because then that would have been a breaking change.

Yes: adding a trait to an existing type can cause inference failures. The Into trait fallback, where calling a.into() gives you back a, is particularly prone to it, and I've been working on a lint for it.
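A minimal illustration of that fallback (names invented): because of the blanket `impl<T> From<T> for T`, `a.into()` can resolve to `a`'s own type, which is why a new `From` impl added upstream can later change or break the same call.

```rust
fn greet(name: impl Into<String>) -> String {
    name.into()
}

fn main() {
    // Identity conversion: `Into<String>` on a String just gives the value back.
    let s: String = String::from("hi").into();
    println!("{}", greet(s));
}
```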


TBH that's a level of quality control that probably informs the Linux kernel dev's view of Rust reliability - it's a consideration when evaluating the risk of including that language.


Are you sure you want to start comparing the quality control of C and Rust packaging or reliability?


Your comment misunderstands the entire point and risk assessment of what's being talked about.

It's about the overall stability and "contract" of the tooling/platform, not what the tooling can control under it. A great example was already given: It took clang 10 years to be "accepted."

It has nothing to do with the language or its overall characteristics, it's about stability.


I trust the quality control of the Linux kernel devs a lot more than the semantics of a language.


Kernel devs more than almost everyone else are well aware that even the existing C toolchains are imperfect.


Maintaining backward compatibility is hard. I am sympathetic. Nonetheless, if the Rust dev team think this is a big deal, then clarify it in the release notes, write a blog post, and make a commitment that regressions at this level won't happen again. So far, there has been little official response to this event. The top comment in the thread I point to basically thinks this is nothing. It is probably too late to do anything for this specific issue, but in future it would be good to explain and highlight even minor compatibility issues through the official channel. This will give people more confidence.


> Nonetheless, if the rust dev team think this is a big deal, then clarify in release notes, write a blog post and make a commitment that regression at this level won't happen again. So far, there is little official response to this event.

There was an effort to write such a blog post. I pushed for it. Due to personal reasons (between being offline for a month and then quitting my job) I didn't have the bandwidth to follow up on it. It's on my plate.

> The top comment in the thread I point to basically thinks this is nothing.

I'm in that thread. There are tons of comments by members of the project in that thread making your case.

> It is probably too late to do anything for this specific issue but it would be good to explain and highlight even minor compatibility issues through the official channel.

I've been working on a lint to preclude this specific kind of issue from ever happening again (by removing .into() calls that resolve to its receiver's type). I customized the diagnostic to tell people exactly what the solution is. Both of these things should have been in place before stabilization at the very least. That was a fuck up.

> This will give people more confidence.

Agreed.


Thanks for the clarification. This has given me more confidence in rust's future.


It’s hard for me to tell if you’re describing a breakdown in the process for evolving the language or the process for evolving the primary implementation.

Bugs happen, CI/CD pipelines are imperfect, we could always use more lint rules …

But there’s value in keeping the abstract language definition independent of any particular implementation.


> At that point, I don't care what they say about semver.

Semver, or any compatibility scheme, really, is going to have to obey this:

> it is important that this API be clear and precise

—SemVer

Any detectable change being considered breaking is just Hyrum's Law.

(I don't want to speak to this particular instance. It may well be that "I don't feel that this is adequately documented or well-known that Drop isn't considered part of the API" is valid, or arguments that it should be, etc.)


Implementing (or removing) Drop on a type is a breaking change for that type's users, not the language as a whole. And only if you actually write a trait that depends on types directly implementing Drop[0].

Linux breaks internal compatibility far more often than people add or remove Drop implementations from types. There is no stability guarantee for anything other than user-mode ABI.

[0] AFAIK there is code that actually does this, but it's stuff like gc_arena using this in its derive macro to forbid you from putting Drop directly on garbage-collectable types.


> is a breaking change _for that type's users_, not the language as a whole.

And yet the operating mantra...the single policy that trumps all others in Linux kernel development...

is don't break user space.


Rust compiler upgrades being backwards incompatible has nothing to do whatsoever with keeping userspace ABI compatible.

GCC also occasionally breaks compatibility with the kernel, btw.


> Linux breaks internal compatibility far more often than people add or remove Drop implementations from types. There is no stability guarantee for anything other than user-mode ABI.

I think that's missing the point of the context though. When Linux breaks internal compatibility, that is something the maintainers have control over and can choose not to do. When it happens to the underlying infrastructure the kernel depends on, they don't have a choice in the matter.


Removing impl Drop is like removing a function from your C API (or removing any other trait impl): something library authors have to worry about to maintain backwards compatibility with older versions of their library. A surprising amount of Rust's complexity exists specifically because the language developers take this concern seriously, and try to make things easier for library devs. For example, people complain a lot about the orphan rules but they ensure adding a trait impl is never a breaking change.


The meaning of this code has not changed since Rust 1.0. It wasn't a language change, nor even anything in the standard library. It's just a hack that the poster wanted to work, and realized it wouldn't work (it never worked).

This is the equivalent of a C user saying "I'm disappointed that replacing a function with a macro is a breaking change".

Rust has had actual changes that broke people's code. For example, any ambiguity in type inference is deliberately an error, because Rust doesn't want to silently change the meaning of users' code. At the same time, Rust doesn't promise it will never create a type inference ambiguity, because that would make almost any change to traits in the standard library impossible. It's a problem that happens rarely in practice, can be reliably detected, and is easy to fix when it happens, so Rust chose to exclude it from the stability promise. They've usually handled it well, except recently, when they miscalculated ("only one package needed to change code, and it has already released a fix") but forgot to give users enough time to update that package first.


I'm curious now. What are the backwards compatibility guarantees for C?


As long as you compile with the version specified (e.g., `-std=c11`) I think backwards compatibility should be 100%. I've been able to compile codebases that are decades old with modern compilers with this.


In practice, C has a couple of significant pitfalls that I've read about.

First is if you compile with `-Werror -Wall` or similar; new compiler diagnostics can result in a build failing. That's easy enough to work around.

Second, nearly any decent-sized C program has undefined behavior, and new compilers may change their handling of undefined behavior. (E.g., they may add new optimizations that detect and exploit undefined behavior that was previously benign.) See, e.g., this post by cryptologist Daniel J. Bernstein: https://groups.google.com/g/boring-crypto/c/48qa1kWignU/m/o8...


While not entirely wrong, the UB issues are a bit exaggerated in my opinion. My C code from 20 years ago still works fine even when using modern compilers. In any case, our plan is to remove most UB, and there is quite good progress. Complaining that your build fails with -Werror seems a bit weird. If you do not want it, why explicitly request this with -Werror?


Just curious, when you’re done removing all the UB, how will you know you’ve been successful? UB is hard to find isn’t it?


To be clear: I mean UB in the C standard. The cases where there is UB are mostly spelled out explicitly, so we can go through all the cases and define behavior. There might be cases of implicit UB, i.e. something accidentally left unspecified, but this has always been fixed when noticed. It is not possible to remove all cases of UB, but the plan is to add a memory-safe mode with no UB, similar to Rust.


The warning argument is silly. It just means that your code is not up to par with modern standards. -Wall is a moving goalpost; it gets new warnings added with every toolchain release because toolchain developers are trying to make your code more secure.


I mean, yeah, I said it was easy enough to work around. But it's an issue I've seen raised in discussions of C code maintenance. (The typical conclusion is that using `-Wall -Werror` is a mistake for long-lived, not-actively-developed code.) Apologies if I overstated the case.


Not even close to 100%. The reason it feels like every major C codebase in industry is pinned to some ancient compiler version is that upgrading to a new toolchain is fraught. The fact that most Rust users successfully track relatively recent versions of the toolchain is a testament to how stable Rust actually is in practice (an upgrade might take you a few minutes per million lines of code).


IDK about "industry" but I can't think of any prominent C or C++ open-source codebase that requires a specific version of gcc or clang to compile.


Try following your favourite distro's bug tracker during a GCC upgrade. Practically every update breaks some packages, sometimes less, sometimes more (esp. when GCC changes its default flags).

Here's one example of workarounds in ~100 packages that broke when upgrading to GCC 10: https://github.com/search?q=repo%3ANixOS%2Fnixpkgs%20fcommon...


The Linux kernel was that way for a while, years ago.


gets() was straight-up removed in C11.

Every language has breaking changes. The question is the frequency, not if it happens at all.

The C and C++ folks try very hard to minimize breakage, and so do the Rust folks. Rust is far closer to those two than other languages. I'm not willing to say that it's the same, because I do not know how to quantify it.


But you can still use gets() if you're using C89 or C99[1], so backwards compatibility is maintained.

Rust 2015 can still evolve (either by language changes or by std/core changes) and packages can be broken by simply upgrading the compiler version even if they're still targeting Rust 2015. There's a whole RFC[2] on what is and isn't considered a breaking change.

[1]: https://gcc.godbolt.org/z/5jb1hMbrx

[2]: https://rust-lang.github.io/rfcs/1105-api-evolution.html


> so backwards compatibility is maintained.

That's not what backwards compatibility means in this context. You're talking about how a compiler is backwards compatible. We're talking about the language itself, and upgrading from one versions of the language to the next.

Rust 2015 is not the same thing as C89, that is true.

> packages can be broken by simply upgrading the compiler version

This is theoretically true, but in practice, this rarely happens. Take the certainly-a-huge-mistake time issue discussed above. I actually got hit by that one, and it took me like ten minutes to even realize that it was the compiler's fault, because upgrading is generally so hassle free. The fix was also about five minutes worth of work. Yes, they should do better, but I find Rust upgrades to be the smoothest of any ecosystem I've ever dealt with, including C and C++ compilers.


(side note: I don't think I've ever thanked you for your many contributions to the Rust ecosystem, so let me do that now: thank you!)

> You're talking about how a compiler is backwards compatible. We're talking about the language itself, and upgrading from one versions of the language to the next.

That's part of the problem. Rust doesn't have a spec. The compiler is the spec. So I don't think we can separate the two in a meaningful way.


You're welcome!

> So I don't think we can separate the two in a meaningful way.

I think that in that case, you'd compare like with like, upgrading both.

I do agree that gcc and clang supporting older specs with a flag is a great feature, and is something that Rust cannot do right now.

But the results of the annual survey have come out: https://blog.rust-lang.org/2025/02/13/2024-State-Of-Rust-Sur...

And 90% of users use the current stable version for development. 7.8% use a specific stable released within the past year.

These numbers are only so high because it is such a small hassle to update even large Rust codebases between releases.

So yes, in theory, breakage can happen. But that's in theory. In practice, this isn't a thing that happens very much.


C (and C++) code breaks all the time between toolchain versions (I say "toolchain" to include compiler, assembler, linker, libc, etc.). Some common concerns are: headers that include other headers, internal-but-public-looking names, macros that don't work the way people think they do, unusual function-argument combinations, ...

Decades-old codebases tend to work because the toolchain explicitly hard-codes support for the ways they make assumptions not provided by any standard.


For the purposes of linux kernel, there's essentially a custom superset of C that is defined as "right" for linux kernel, and there are maintainers responsible for maintaining it.

While GCC with a few basic flags will, in general, produce a binary that cooperates with the kernel, kbuild loads all those flags for a reason.


> For the purposes of linux kernel, there's essentially a custom superset of C that is defined as "right" for linux kernel

Superset? Or subset? I'd have guessed the latter.


Superset. ANSI/ISO C is not a good language to write a kernel in, because the standards are way more limiting than some people would think - and leave a lot to the implementation.

So it's a superset in terms of what's defined


The backwards compatibility guarantee for C is "C99 compilers can compile C99 code". If they can't, that's a compiler bug. Same for other C standards.

Since Rust doesn't have a standard, the guarantee is "whatever the current version of the compiler can compile". To check if they broke anything they compile everything on crates.io (called a crater run).

But if you check the results of crater runs, almost every release some crates that compiled in the previous version stop compiling in the new version. But as long as the number of such breakages is not too large, they say "nothing is broken" and push the release.


Can you provide an example for the broken-crater claim? As far as I'm aware, Rust folks don't break compatibility that easily, and the one time that happened recently (an old version of the `time` crate getting broken by a compiler update), there were a lot of foul words thrown around and the maintainers learned their lesson. Are you sure you aren't talking about crates triggering UB or crates with unreliable tests that were broken anyway?


I am not following this too closely (the time issue seemed pretty severe though), but there are compatibility changes listed in the release notes very frequently: https://github.com/rust-lang/rust/blob/master/RELEASES.md


What do you mean? Rust 1.0 can compile Rust 1.0. Rust 1.1 can compile Rust 1.1.


C makes a distinction between the language version and the compiler version. Rust does not. That's the problem people are discussing here.


C99 isn't a compiler version. It's a standard. Many versions of GCC, Clang and other compilers can compile C99 code. If you update your compiler from gcc 14.1 to gcc 14.2, both versions can still compile standard code.


There is also a very high level of backwards compatibility between versions of ISO C, because there is a gigantic amount of code that would need to be updated if there were a change. So such changes are made only for important reasons or after a very long deprecation period.


But Rust 1.0 can't compile Rust 1.1.

And as others have noted, C99 is a standard and Rust lacks one.


But Rust 1.0 can't compile Rust 1.1

That's an impossible standard to hold Rust to, did you mean it the other way around? A C89 compiler can't compile all of C99 either.


But v1 of a C99 compiler can compile all of C99, and v2 of a C99 compiler can still compile all of C99.


Right, which is basically the opposite of what backwards incompatibility means. Imagine if GCC 14.2.0 was only guaranteed to be able to compile "C 14.2.0".


> "which is actively hostile to a second Rust compiler implementation" - except that isn't true?

Historically the Rust community has been extremely hostile towards gccrs. Many have claimed that the work would be detrimental to Rust as a language since it would split the language in two (despite gccrs constantly claiming they're not trying to do that). I'm not sure if it was an opinion shared by the core team, but if you just browse Reddit and Twitter you would immediately see a bunch of people being outright hostile towards gccrs. I was very happy to see that blog post where the Rust leadership stepped up to endorse it properly.

Just one reference: In one of the monthly updates that got posted on Reddit (https://old.reddit.com/r/rust/comments/1g1343h/an_update_on_...) a moderator had to write this:

> Hi folks, because threads on gccrs have gotten detailed in the past, a reminder to please adhere to the subreddit rules by keeping criticism constructive and keeping things in perspective.


The LKML quote is alleging that the upstream language developers (as opposed to random users on Reddit) are opposed to the idea of multiple implementations, which is plainly false, as evidenced by the link to the official blog post celebrating gccrs. Ted Ts'o is speaking from ignorance here.


I think it’s more pointed towards people like me who do think that gccrs is harmful (I’m not a Rust compiler/language dev - just a random user of the language). I think multiple compiler backends are fine (eg huge fan of rustc_codegen_gcc) but having multiple frontends I think can only hurt the ecosystem looking at how C/C++ have played out vs other languages like Swift, Typescript etc that have retained a single frontend. In the face of rustc_codegen_gcc, I simply see no substantial value add of gccrs to the Rust ecosystem but I see a huge amount of risk in the long term.


(emphasis mine)

> opposed to the idea of multiple implementations, which is plainly false, as evidenced by the link to the official blog post celebrating gccrs. Ted Ts'o is speaking from ignorance here.

Why use so strong words? Yes, there's clearly a misunderstanding here, but why do we need to use equally negative words towards them? Isn't it more interesting to discuss why they have this impression? Maybe there's something with the communication from the upstream language developers which hasn't been clear enough? It's a blog post which is a few months old so if that's the only signal it's maybe not so strange that they've missed it?

Or maybe they are just actively lying because they have their own agenda. But I don't see how this kind of communication, assuming the worst of the other party, brings us any closer.


> Why use so strong words?

I'm not going to mince words here. Ted Ts'o should know better than to make these sorts of claims, and regardless of where he got the impression from, his confident assertion is trivially refutable, and it's not the job of the Rust project to police whatever incorrect source he's been reading, and they have demonstrably been supportive of the idea of multiple implementations. This wouldn't even be the first alternative compiler! Several Rust compiler contributors have their own compilers that they work on.

The kernel community should demand better from someone in such a position of utmost prominence.


For whatever it's worth, I did believe that some of the Rust team was very hostile towards gccrs, but that behavior has completely changed, and it seems like they're receiving a lot of support these days.

Reddit... is reddit.


> > > Hi folks, because threads on gccrs have gotten detailed in the past

Here's guessing they meant "derailed".


These are often the same kind of individuals, who tend to shift their viewpoint toward "change is progressing rapidly in an area that I don't understand, and this scares me." Anytime an expert in a particular area has their expertise challenged or even threatened by a new technology, it is perfectly human to react defensively toward the perceived threat. Part of growth as a human is recognizing our biases and attempting to mitigate them, hopefully extending this into other areas of our lives as well. After all, NIMBYs probably started out with reasonable justifications for why they want to keep their communities the way they currently are - it's comfortable and it works, and they're significant contributors to the community. Any external "threat" to this becomes elevated to a moral crusade against the invaders encroaching upon their land, when really they're tilting at windmills.


Or "change is progressing rapidly in an area I have been working in for 20 years, and I have seen this kind of thing fail before"


I think confronting the volunteers that maintain open-source software with arguments such as "you just do not want to learn new things", "you are scared of change", etc. is very unfair. IMHO new ideas should prove themselves and not be pushed through because Google wants it or certain enthusiastic groups believe this is the future. If Rust is so much better, people should just build cool stuff and then it will be successful anyway.


Overall, this whole situation seems entirely weird to me. All this stuff such as Unix, Linux, and the C ecosystem was built by C programmers and maintained for decades mostly voluntarily, while most of the industry pushed in other directions (with a gigantic influx of money). It is completely amazing that Linux became so successful against all the odds. Certainly it also then had a lot of industry support, but I used it before most of this and witnessed all the development. But somehow, C programmers are now suddenly portrayed as the evil gatekeepers, not stepping aside fast enough, because some want to see change. In the past, people wanting to see something new in the open-source community would need to convince the community by building better things, not by pushing aggressively into existing projects.


I believe the Rust for Linux project was started by a Linux guy, rather than a Rust guy, and many of the Rust for Linux maintainers have come at this from a perspective of "we are Linux maintainers who want to use Rust" rather than "we are Rust users who want our code to be in Linux".

I think it's important to be wary of simplistic narratives (such as "C vs Rust"). Maintaining a complex piece of software comes with tradeoffs and compromises, and the fewer languages you have to worry about the better. On the other hand, the Asahi Linux team have been quite explicit that without Rust, they wouldn't have achieved a fraction of what they have. So clearly there is a lot of value in RfL for Linux as a whole, if implemented well. And that value is reflected in the decision from Linus that RfL should be supported, at least for now.


> many of the Rust for Linux maintainers have come at this from a perspective of "we are Linux maintainers who want to use Rust" rather than "we are Rust users who want our code to be in Linux".

This might be true, but do you have any actual quantifiable evidence for it? Because FWIW, from what I as an outsider see (mainly in threads like this), all the drama looks very much like "we are Rust users who want our code to be in Linux".


It is entirely unclear to me where the value actually is. It seems Google is funding it for some reason. And some people clearly have a lot of opinions that this is "the future". People had similarly strong opinions about various other things in the past.


It's worth reading Asahi Lina's posts about writing a driver in Linux - she is very explicit that what they've achieved would not have been possible without Rust.

See, for example:

https://xcancel.com/linaasahi/status/1577667445719912450?s=4... https://vt.social/@lina/113056457969145576 https://asahilinux.org/2022/11/tales-of-the-m1-gpu/

In fairness, this is one team working on one project, but if they're attributing much of their success to Rust, it's probably worth listening to and understanding why, particularly as I don't believe they were particularly evangelistic about Rust before this project.

I have no idea about the Google funding, but Marcan's blog post is very explicit that they do not have any corporate sponsorship. If you believe that to be untrue, please explain your reasoning rather than spreading unsubstantiated rumours.


People have written drivers before. I do not care what this guy thinks, he is just some very opinionated person.


This isn't the first time a new language is proposed for the kernel though.

At some point there was some brief discussion for C++ in the kernel and that was essentially immediately killed by Linus. And he was essentially right.


Yeah I certainly don’t want to mischaracterize anyone here and I attempted to communicate how this is really a knee-jerk, human reaction to something new making inroads into a space people have extensive expertise in. New ideas additionally shouldn’t be derided based upon the poor behavior of some in the community.


Fair, I acknowledge I may have misrepresented this group who are against the Rust community as not being experts in this space; they certainly are. Rust doesn't have to be the answer, but if we treat others (namely Rust supporters) and their solutions as dead-on-arrival because they're implemented in a technology we're not entirely familiar with, how can we get to a point where we're solving difficult problems? Especially if we create an unwelcoming space for contribution?


> After all, NIMBYS probably started out with reasonable justifications for why they want to keep their communities the way they currently are

Bad example IMO. What is reasonable about this? http://radicalcartography.net/bayarea.html


May be a poor example, it’s what came to mind initially. I don’t think the end results are at all the same but I think the initial emotions around why you may balk at something new entering your community have parallels to the topic at hand.


[flagged]


Want to have a conversation on what you agree or disagree with? I may have a newer account but definitely not a kid, in fact I have kids of my own


Yeah ok lol


Glad you took the time to read my thoughts and respond :) have a good one friend


> Any backwards incompatibilities are explicitly opt-in through the edition system, or fixing a compiler bug.

This is a very persistent myth, but it’s wrong. Adding any public method to any impl can break BC (because its name might conflict with a user-defined method in a trait), and the Rust project adds methods to standard library impls all the time.


This is true, strictly speaking, but rarely causes problems. Inherent methods are prioritized over trait methods, so this only causes problems if two traits suddenly define a single method, and the method is invoked in an ambiguous context.

This is a rare situation, and std strives to prevent it. For example, in [1], a certain trait method was called extend_one instead of push for this reason. Crater runs are also used to make sure the breakage is as rare as T-libs-api expected. The Linux kernel in particular only uses core and not std, which makes this even more unlikely.

[1]: https://github.com/rust-lang/rust/issues/72631


Okay, but “they try to avoid issues” is not the same as “they guarantee never to intentionally break BC except to fix compiler bugs”.


That's just not true. If the user has defined a method on the struct and a trait also has a method with that name, the struct's impl is used. Multiple traits can have methods with the same name, too.

https://play.rust-lang.org/?version=stable&mode=debug&editio...


My comment might have been technically wrong as originally stated; I’ve since edited to try to correct/clarify.

What I really meant is the case where a method is added to a standard struct impl that conflicts with a user-defined trait.

For example, you might have implemented some trait OptionExt on Option with a method called foo. If now a method called foo is added to the standard option struct, it will conflict.


Look at the linked code - it literally shows what happens in that case. What happens is not what you're saying.


This code compiles on 1.81 and fails to compile on 1.82: https://godbolt.org/z/9GbbMKjcf


You are always free to use fully-qualified paths to protect yourself from any change in your dependencies (including std) that would break inference (by making more than one method resolve).


That's true for literally every non static function in C, given the lack of namespaces. So it can't be a blocker.


New versions of gcc don't cause new C standard library functions to exist.


> which is actively hostile to a second Rust compiler implementation

Which is hilarious since Linux itself was actively hostile to the idea of a second C compiler supporting it. Just getting Linux to support Clang instead of only GCC was a monumental task that almost certainly only happened because Android forced it to happen.


It happened because the Android people put in the work to make it happen both in Linux and in Clang/LLVM.


Putting in the work is one thing, which is what the Rust-in-Linux people are also doing, but there's also the political requirement to force maintainers to accept it. Android was big enough, and happy enough to fork seeing as it had already done that before, that it forced a lot of hands with top-down mandates.

Rust, despite having Linus' blessing to be in the kernel, is still just getting rejected just because it's Rust, completely unrelated to any technical merits of the code itself.




Thanks for sharing the Ted Ts'o LKML post. Can you explain the cultural reference "thin blue line"? I've never heard it before.


The "thin blue line" is a term that typically refers to the concept of the police as the line between law-and-order and chaos in society.[1] The "blue" in "thin blue line" refers to the blue color of the uniforms of many police departments.

[1] https://en.wikipedia.org/wiki/Thin_blue_line


It's a motto used by American law enforcement to justify extrajudicial punishment. Since they are the "thin blue line" that separates the public from anarchy, they are justified in acting independently to "protect" us when judges and juries do not "cooperate".


Not just extrajudicial punishment, but overlooking corrupt acts and crimes from fellow officers. That it's more important to maintain the 'brotherhood' than to arrest an officer caught driving home drunk.


No, that's not really true.

Directly, "the thin blue line" expresses the idea that the police are what separates society from chaos.

It doesn't inherently suggest police are justified in acting outside the law themselves, though, of course, various people have suggested this (interestingly, from both a pro-police and anti-police perspective).

It seems obvious to me that the post was using this phrase in the sense of being a thin shield from chaos.


That is a very strange take. The phrase isn't American and has no negative connotation. It has nothing to do with "extrajudicial punishment". It simply refers to the (obvious) fact that what separates societies from anarchy is the "thin blue line" of law enforcement.

Rowan Atkinson had a sitcom set in a London police station in the 90s called "The Thin Blue Line". Are you under the impression he was dogwhistling about extrajudicial violence?


This is what really confused me about the article. I read the mailing list post and had no idea what was controversial about thin blue line. In fact, I thought most of that post was fairly reasonable.

I'd never heard of the extrajudicial punishment aspect of the phrase (though I had heard the phrase itself) and it didn't show up when I googled, but I'm not American, so maybe there's some cultural differences.


"Thin blue line" is a popular phrase of the so-called "American culture war". During the heyday of the Black Lives Matter movement, it was used as a self-identification by those who did not agree with criticisms of the nation's policing and justice systems. A closely related symbol is the Punisher[0] skull from Marvel comics.

[0]: https://knowyourmeme.com/memes/punisher-skull

All in all, this could just be another instance of the "culture war" inflaming every other minor disagreement with Ted playfully using the phrase and Marcan misinterpreting it. Or it could be Ted slipping up with their politics. From what I know about Marcan and what can be inferred from his post, they do seem like someone the alt-right would persecute.


Wow, hadn't heard of the punisher skull association either! It seems that it hasn't really traveled that much outside of America.

I had a look, and it seems that Ted Ts'o is American, so I guess we should assume he understands the cultural significance of the phrase (even though I didn't).


All the extrajudicial stuff is pure political and ideological wank by a subset of ideological extremists. Pay no attention to any of it. It's an attempt to redefine the term for narrative creation purposes.


In the US, this is a reference to the belief that members of law enforcement should be loyal first to other members of law enforcement and only secondarily to the law. Or at least that is how I have always understood it.


It seems obvious that that’s not what Ted intended it to mean, since it wouldn’t even make sense in this context (the debate doesn’t really seem to be about whether maintainers should be loyal to other maintainers).

A more charitable interpretation would be “we’re the only line of defense protecting something good and valuable from the outside world, so people should give significant weight to our opinions and decisions”. Which, to be clear, I would still mostly disagree with WRT the police, but it at least doesn’t explicitly endorse corruption.


The thin blue line comes from the thin red line, where a line of British redcoats held back a heavy cavalry charge in the Crimean War. I've always taken it to mean that police officers consider themselves soldiers holding the last line of defence against wild enemies. Which is itself a controversial and probably unhelpful way to think about your job as a police officer.


There are many ways to state that without invoking corruption. I think Ted is telling the truth of who he is by choosing that phrase intentionally - we aren't talking about an idiot who just says stuff, he's a smart guy.


Given that "invoking corruption" is neither the plain meaning of those words, nor does it even make sense in this context, I don't think it's reasonable to claim Ted did so.


Ted Ts'o is an American: he was born in California, did his schooling in the US, and has worked here most (all?) of his career. As such he can be expected to know that "the thin blue line" is an idiom that carries with it a lot of connotation.

It's perfectly reasonable to assume he was aware of the implications of his words and chose to use them anyway.


I'm American, I was born in Arizona, I did my schooling in the US, and I have worked in the US for all of my career. I disagree with your assertion that "thin blue line" necessarily implies support for corruption.

And by the way, so does Wikipedia: https://en.wikipedia.org/wiki/Thin_blue_line doesn't mention this interpretation at all. The closest thing is this sentence, which is really not saying the same thing at all, and at any rate only presenting it as something "critics argue", rather than the settled meaning of the phrase.

> Critics argue that the "thin blue line" represents an "us versus them" mindset that heightens tensions between officers and citizens and negatively influences police-community interactions by setting police apart from society at large.


And yet I've never seen that phrase used other than when cops are defending their colleague who is on video murdering/raping/beating someone innocent, or by those calling for reform who are criticizing the cops covering for each other's crimes.


Even I have seen it used in other senses by Americans, and I've never been to America. AFAICT it has only acquired that sense, at least to the extent it currently has, after #BLM. Might be an age thing, that most of your cultural impressions are of a more recent date than the majority of mine? (And, say, Ted Ts'o's.)


> yet I've never seen that phrase used other than when cops are defending their colleague

Well, now you have!


And it's in a context where some group of people with special power is acting in bad faith to avoid having to follow the rules, and setting up "us vs them" arguments to do so!


As important context, it gained popularity in response to the Black Lives Matter movement.


I think you might be mistaking the "thin blue line" concept for "blue / all lives matter" in this case; the thin blue line is neither new nor newly popular with BLM.


Certainly more popular since then; probably swept along by "blue lives matter". Have you seen that black-and-blue version of the American flag, with, what is it, six or seven blue stripes (or lines)? How old is that?


https://en.wikipedia.org/wiki/Thin_blue_line TLDR is it's the idea that the police are the one thing stopping society from instantly dissolving into chaos so they shouldn't be questioned (even when they kneel on someone's neck until they die)


> - "an upstream language community which refuses to make any kind of backwards compatibility guarantees" -> Rust has a stability guarantee since 1.0 in 2015. Any backwards incompatibilities are explicitly opt-in through the edition system, or fixing a compiler bug.

The most charitable interpretation I can imagine is that the Rust-in-Linux project needs specific nightly features, and those don't get stability guarantees. But I think this is still pretty unfair to complain about; my impression is there's a lot of appetite on the Rust side to get those stabilized.

I also think...

> we know, through very bitter experience, that 95+% of the time, once the code is accepted, the engineers which contribute the code will disappear, never to be seen again.

...that while there's truth in this, there's also a large extent to which it's a self-fulfilling prophecy. Someone might want to stick it out to get their work into mainstream once, but then look back at the process in the rearview mirror and say never again.

...and:

> Instead of complaining about maintainers for who are unreasonably caring about these things, when they are desparately under-resourced to do as good of a job as they industry demands, how about meeting us half-way and helping us with these sort of long-term code health issues?

It's really hard for me to not see "let's actually write down the contract for these functions, ideally via the type system" as doing exactly that. Which seems to me to be the central idea Ted Ts'o was ranting about in that infamous video.


If comments as benign as "thin blue line" causes fragile entryist/activists to flee, I say Ted and the kernel team are doing the right thing. Projects as critical as the Linux kernel shouldn't be battlegrounds for the grievance of the week, nor should they be platforms for proselytizing. Marcan and others like him leave long paths of destruction in their wake. Lots of projects have been turned upsidedown by the drama they seem to bring with them everywhere. The salient point is contributors need to be more than "drive by" submitters for their pet projects. This isn't specific to Rust in the kernel, look at how much of an uphill battle bcachefs was/is.


I didn't even know what the whole issue with the "thin blue line" comment was until I read this thread. I was never under the impression "thin blue line" was about corruption or brutality, I think people are conflating "thin blue line" with "blue lives matter", which is an entirely different subject.


Quite wild to see this being downvoted, because by downvoting, surely one implies the inverse of your post to be the truth, such that projects such as the Linux kernel should be battlegrounds for the grievance of the week, should be platforms for proselytizing, and so forth.

Very strange to see little to no empathy for kernel maintainers in this situation.


Most people would not interpret downvoting as "I believe that the exact opposite of every single sentence in your post is true".


Most people don't compulsively downvote every post that they only mildly disagree with.


Look, I don't know what to say without just assuming you're approaching this discussion in bad faith.

Saying people "compulsively downvote" the stuff above is already a strong claim that you have no way to substantiate. I think more broadly what you're claiming is that the people downvoting you and anonfordays are emotional and doing so out of political zealotry, and... again, that a pretty strong claim.

People can downvote a post not because they strongly disagree with its claims, but because they strongly dislike its inflammatory tone ("fragile entryist", "Marcan and others like him leave long paths of destruction in their wake", etc).

People who strongly disagree with a post don't necessarily believe the exact opposite of its claims. They can disagree with some of the claims and agree with others, or disagree with the very framing of the post.

If I say "we should outlaw all guns because gun crimes are awful" and you disagree, that doesn't mean you think gun crimes are great.


> Rust has a stability guarantee since 1.0 in 2015. Any backwards incompatibilities are explicitly opt-in through the edition system, or fixing a compiler bug.

The community is more than just the language and compiler vendor(s). It's everyone using the language, with particular emphasis on the developers of essential libraries and tools that those users use and on which they're reliant.

In this sense, based on every time I've attempted to use Rust (even after 1.0), Ts'o's remark ain't inaccurate from what I can tell. If I had a nickel for every Rust library I've seen that claims to only support Rust Nightly, I'd have... well, a lot of nickels. Same with Rust libraries not caring much about backward-compatibility; like yeah, I get it during pre-1.0, or while hardly anyone's using it, but at some point people are using it and you are signaling that your library's "released", and compatibility-breaking changes after that point make things painful for downstream users.

> Here's the maintainer on the gccrs project (a second Rust compiler implementation), posting on the official Rust Blog

Same deal here. The Rust developers might be welcoming of additional implementations, but the broader community might not be. I don't have enough information to assess whether the Rust community is "actively hostile" to a GCC-based Rust implementation, but from what I can tell there's little enthusiasm about it; the mainstream assumption seems to be that "Rust" and its LLVM-based reference compiler are one and the same. Maybe (hopefully) that'll change.

----

The bigger irony here, in any case, is that the Linux community has both of these very same problems:

- While the kernel itself has strict backwards-compatibility guarantees for applications, the libraries those applications use (including absolutely critical ones like glibc) very much do not. The ha-ha-only-serious observation in the Linux gaming community is that - thanks to Wine/Proton - the Windows API is the most stable ABI for Linux applications. Yeah, a lot of these issues are addressable with containerization, or by static compilation, but it's annoying that either is necessary for Linux-native applications to work on old and new distros alike.

- As marcan alludes to in the article, the Linux community is at least antipathetic (if not "actively hostile") to Linux-compatible kernels that are not Linux, be they forks of Linux (like Android) or independent projects that support running Linux applications (WSL 1/2, FreeBSD, some illumos distros, etc.). The expectation is that things be upstreamed into "the" Linux, and the norms around Linux development make out-of-tree modules less-than-practical. This is of course for good reason (namely: to encourage developers to contribute back to upstream Linux instead of working in silos), but it has its downsides - as marcan experienced firsthand.


It's also not truthful because many of the Rust maintainers are long time C contributors.

Marcan also linked to this resignation of a Rust Maintainer:

https://lore.kernel.org/lkml/20240828211117.9422-1-wedsonaf@...

which references this fantastic exchange:

https://www.youtube.com/watch?v=WiPp9YEBV0Q&t=1529s

I am not a C person, or a kernel level person, I just watch this from the sideline to learn something every now and then (and for the drama). But this exchange is really stunning to me. It seems so blatantly obvious to me that systematically documenting (in code!) and automatically checking semantic information that is required to correctly use an API is a massive win. But I have encountered this type of resistance (by very smart developers building large systems) in my own much smaller and more trivial context. To some degree, the approach seems to be: "If I never write down what I mean precisely, I won't have to explain why I changed things." A more charitable reading of the resistance is: Adding a new place where the semantics are written down (code, documentation and now type system) gives one more way in which they can be out of sync or subtly inconsistent or overly restrictive.

But yeah, my intuitive reaction to the snippet above is just incredulity at the extreme resistance to precisely encoding your assumptions.


Your charitable reading is too charitable. One of the benefits of using types to help guarantee properties of programs (e.g. invariants) is that types do not get out of sync with the code, because they are part of the code, unlike documentation. The language implementation (e.g. the compiler) automatically checks that the types continue to match the rest of the code, in order to catch problems as early as possible.
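As an illustration (a hypothetical sketch, not actual kernel code, with all names invented), here is the kind of typestate pattern Rust programmers use to make an invariant like "must be initialized before use" part of the code itself, so the compiler enforces it instead of a comment:

```rust
// Hypothetical sketch: encode "must be initialized before use"
// in the type system, so violations are compile errors.

struct UninitDevice {
    id: u32,
}

struct Device {
    id: u32,
}

impl UninitDevice {
    fn new(id: u32) -> Self {
        UninitDevice { id }
    }

    // `init` consumes `self`, so the uninitialized handle
    // cannot be used again after initialization.
    fn init(self) -> Device {
        Device { id: self.id }
    }
}

impl Device {
    // Only an initialized Device exposes I/O. Calling `read` on an
    // UninitDevice is a compile error, not a runtime check.
    fn read(&self) -> u32 {
        self.id
    }
}

fn main() {
    let dev = UninitDevice::new(7).init();
    assert_eq!(dev.read(), 7);
    // UninitDevice::new(7).read(); // does not compile
}
```

If someone later changes the initialization contract, every call site that no longer satisfies it stops compiling, which is exactly the "types are part of the code" point above.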


I'm not a kernel developer, and have never done anything of the sort either. But I think the argument is that if they have two versions of something (the C version + the Rust bindings), the logic/behavior/"semantics" of the C version would need to be encoded into the Rust types, and if a C-only developer changes the C version only, how are they supposed to proceed with updating the Rust bindings if they don't want to write Rust?

At least that's my understanding from the outside, someone please do correct me if wrong.


That was a large part of the disagreement.

Rust developers were saying it would be their job to do this. But then someone said Linus rejected something because it broke Rust. GKH backed the Rust developers and said that was an exception not a rule, but didn't know Linus' stance for sure.

Then Linus chimes in because of one of Hector's replies, but at the time of my reading did not clarify what his actual stance is here.


> but at the time of my reading did not clarify what his actual stance is here.

Whatever he says is guaranteed to piss off at least one side of the argument.


You still have to make the stance clear. Avoiding conflicts and dealing with them are two different things.


Yeah it's not an easy discussion for sure, but he has to say something.

At the rate we're going here the existing kernel devs will alienate any capable new blood, and Linux will eventually become Google Linux(TM) as the old guard goes into retirement and the only possible way forward is through money.


Doesn't that assume all "capable new blood" is enthusiastic about rust in the kernel? It seems like a pretty big assumption


You’re not wrong (in that I was insinuating something like that), but I’ll point out it’s an almost equally big assumption that we’re somehow going to find a trove of capable developers interested in devoting their careers to coding in ancient versions of C.


Do you really think there are no young people wanting to work on an operating system written in C? I'm very skeptical that all young people interested in operating systems see Rust as the future. I personally feel it's the other way around, it's Google and companies like that who really want Rust in Linux, the young kernel devs are a minority.


It's not that people think there are no young people wanting to work in C; it's that the numbers of competent programmers who want to use C, or who do use C, are both decreasing every year. That has been the trend for quite a while now.

So there will presumably be fewer and fewer programmers, young or old, that want to work in C.

C is one of the most entrenched and still-important languages in the world, so it probably has more staying power than Fortran, COBOL, etc. So the timeline is anybody's guess, but the trajectory is pretty clear.

There are a lot of languages that people prefer to C which aren't well-suited to OS programming (golang, Java) but Rust is one that can do the same job as C, and is increasingly popular, and famously well-loved by its users.

There's no guarantee that Rust will work out for Linux. Looks unlikely, to me, actually. But I think it's pretty clear that Linux will face a dwindling talent pool if the nebulous powers that actually control it collectively reject everything that is not C.


Let's add Swift support to Linux :)


> how are they supposed to proceed with updating the Rust bindings if they don't want to write Rust?

If I've interpreted it correctly (and probably not, given the arguments), Linus won't accept merge requests if they break the Rust code, so the maintainer would need to reach out to the Rust for Linux (or someone else) to fix it if they didn't want to themselves.

And some lead maintainers don't want to have to do that, so said no Rust in their subsystem.


Which is a moot point because the agreement right now is that Rust code is allowed to break, so the C developer in question can just ignore Rust, and a Rust person will take care of it for them.


As of today, the burden is uncertain and the Rust crowd has not been fixing things quickly enough since they are manual fixes:

https://lore.kernel.org/rust-for-linux/20250131135421.GO5556...

> Then I think we need a clear statement from Linus how he will be working. If he is build testing rust or not.

> Without that I don't think the Rust team should be saying "any changes on the C side rests entirely on the Rust side's shoulders".

> It is clearly not the process if Linus is build testing rust and rejecting PRs that fail to build.

For clarity, tree-wide fixes for C in the kernel are automated via Coccinelle. Coccinelle for Rust is constantly unstable and broken which is why manual fixes are required. Does this help to explain the burden that C developers are facing because of Rust and how it is in addition to their existing workloads?


> Does this help to explain the burden that C developers are facing because of Rust and how it is in addition to their existing workloads?

Yep, thanks!


> Which is a moot point because the agreement right now is that Rust code is allowed to break, so the C developer in question can just ignore Rust

So then the argument that even semantics encoded in the Rust types can be out of date compared to the actual code is a real thing? I read that somewhere else here in the comments, but didn't understand how the types could ever be out of date, but this would explain that argument.


That's exactly what "types get out of date" would mean. I'm not sure what you are familiar with, but imagine that in Python a new version of a library is released that now has an extra required argument on a function.
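In Rust terms, that failure mode is caught at compile time. A minimal sketch (all names hypothetical), where an upstream signature grows a required argument:

```rust
// Hypothetical sketch of "types get out of date".
// v1 of a function, as callers originally knew it:
fn read_block(dev: u32) -> u32 {
    dev * 2
}

// If "v2" adds a required `flags` argument, every caller written
// against v1 stops compiling until it is updated; the types cannot
// silently drift out of date the way prose documentation can.
fn read_block_v2(dev: u32, flags: u32) -> u32 {
    dev * 2 + flags
}

fn main() {
    assert_eq!(read_block(5), 10);
    // read_block_v2(5);  // compile error: missing `flags`
    assert_eq!(read_block_v2(5, 1), 11);
}
```

The out-of-date binding manifests as a build break rather than a runtime surprise, which is what the CONFIG_RUST=y build catches.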


As I understand it everything Rust is seen as "optional", so a CONFIG_RUST=n build that succeeds means a-OK, then some Rust person will do a CONFIG_RUST=y build, see it's broken, fix it, and submit a patch.

I may be wrong, but that's how I understood it, but who knows how Linus will handle any given situation. ¯\_(ツ)_/¯


Yes, but generic code complicates the picture. The things I saw were like: The documentation says you need a number but actually all you need is for the + operator to be defined. So if your interface only accepts numbers it is unnecessarily restrictive.

Conversely some codepath might use * but that is not in the interface, so your generic code works for numbers but fails for other types that should work.


> Yes, but generic code complicates the picture. The things I saw were like: The documentation says you need a number but actually all you need is for the + operator to be defined. So if your interface only accepts numbers it is unnecessarily restrictive.

if you really need a number, why not use a type specifically aligned to that (something like f32|f64|i32|i64 etc...) instead of relying on + operator definition?

> Conversely some codepath might use * but that is not in the interface, so your generic code works for numbers but fails for other types that should work.

do we agree that if it's not in the interface you are not supposed to use it? conversely if you want to use it, the interface has to be extended?


Yes, the second case is a bug in the interface.

For the first case you have it the wrong way around. My generic code would work on things that are not numbers but I prevent you from calling it because I didn't anticipate that there would be things you can add that are not numbers. (Better example: require an array when you really only need an iterable).


It's practically impossible to document all your assumptions in the type system. Attempting to do so results in code that is harder to read and write.

You have a choice between code that statically asserts all assumptions in the type system but doesn't exist, is slow, or a pain to work with, and code that is beautiful, obvious, performant, but does contain the occasional bug.

I am not against static safety, but there are trade offs. And types are often not the best way to achieve static safety.


> And types are often not the best way to achieve static safety.

That’s a sort of weird statement to make without reference to any particular programming language. Types are an amazing way to achieve static safety.

The question of how much safety you can reasonably achieve using types varies wildly between languages. C’s types are pretty useless for lots of reasons - like the fact that all C pointers are nullable. But moving from C to C++ to Rust to Haskell to Ada gives you ever more compile time expressivity. That type expressivity directly translates into reduced bug density. I’ve been writing rust for years, and I’m still blown away by how often my code works correctly the first time I run it. Yesterday the typescript compiler (technically esbuild) caught an infinite loop in my code at compile time. Wow!

I’d agree that every language has a sweet spot. Most languages let you do backflips in your code to get a little more control at compile time at the expense of readability. For example, C has an endless list of obscure __compiler_directives that do all sorts of things. Rust has types like NonZeroUsize - which seem like a good idea until you try it out. It’s a good idea, but the ergonomics are horrible.

But types can - and will - take you incredibly far. Structs are a large part of what separates C from assembler. And types are what separates rust from C. Like sum types. Just amazing.
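For what it's worth, here is roughly what the `NonZeroUsize` trade-off mentioned above looks like in practice: construction is clunky because you must handle the zero case up front, but the invariant buys a documented layout guarantee.

```rust
use std::mem::size_of;
use std::num::NonZeroUsize;

fn main() {
    // The ergonomic cost: constructing one returns an Option,
    // forcing the zero case to be handled at the boundary.
    let n = NonZeroUsize::new(5).expect("value must be non-zero");
    assert_eq!(n.get(), 5);
    assert!(NonZeroUsize::new(0).is_none());

    // The payoff: the compiler uses the forbidden zero bit pattern
    // as the `None` niche, so Option<NonZeroUsize> is the same size
    // as a plain usize. The invariant is free at runtime.
    assert_eq!(size_of::<Option<NonZeroUsize>>(), size_of::<usize>());
}
```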


Encoding assumptions and invariants in the type system is a spectrum. Rust, by its very nature, places you quite far along that spectrum immediately. One should consider if the correctness achieved by this is worth the extra work. However, if there is one place where correctness is paramount, surely it's the Linux kernel.

> [..]Attempting to do so results in code that is harder to read and write.

> You have a choice between code that statically asserts all assumptions in the type system but doesn't exist, is slow, or a pain to work with, and code that is beautiful, obvious, performant, but does contain the occasional bug.

I don't think you are expressing objective truth, this is all rather subjective. I find code that encodes many assumptions in the type system beautiful and obvious. In part this is due to familiarity, of course something like this will seem inscrutable to someone who doesn't know Rust, in the same way that C looks inscrutable to someone who doesn't know any programming.


> Encoding assumptions and invariants in the type system is a spectrum. Rust, by its very nature, places you quite far along that spectrum immediately.

Compared to, say, dependent type systems, Rust really isn't that far along. The Linux kernel has lots of static analyzers, and then auxiliary typedefs, Sparse, and sanitizers cover a significant area of checks in an ad-hoc way. All Rust does is formalize them and bring them together.

And getting Rust into the kernel slowly, subsystem by subsystem, means that the formalization process doesn't have to be disruptive and all-or-nothing.


But if the info is information the user of your code needs in order to interface correctly, the point that you can't document everything is moot. You already have to document this in the documentation anyway.


The particular C maintainers in discussion refused to provide even textual documentation.


> It makes sense to be extremely adversarial about accepting code because they're on the hook for maintaining it after that. They have maximum leverage at review time, and 0 leverage after.

I don't follow. The one with zero leverage is the contributor, no? They have to beg and plead with the maintainers to get anything done. Whereas the maintainers can yank code out at any time, at least before when the code makes it into an official stable release. (Which they can control - if they're not sure, they can disable the code to delay the release as long as they want.)


Maintainers can't yank out code if that leads to feature, performance or user space regressions.


Entire filesystems and classes of drivers have been purged from the kernel over time. Removing stuff is not impossible as some here suggest.


Yes, but removing stuff takes a lot of time and effort that most maintainers want to spend doing something more productive or fun.


That's useful context because as a complete laymen I thought his message was largely reasonable (albeit I am not unsympathetic to the frustration of being on the other side)!


Here is what I learned the hard way: the request sounds reasonable. And that doesn’t matter (sucks, I know.)

Here is the only thing that matters in the end (I learned this an even harder way; I really tried to work the way the R4L people approach this, and was bitten by counter-examples left, right, and center): the Linux kernel has to work. This is even more important than knowing why it works. There is gray area, and you only move forward by rejecting anything that doesn't have at least ten years of this kind of backwards-compatible commitment. All of it. Wholesale. (And yes, this blatantly and callously disregards many good efforts, sounding like the tenuous and entitled claim "not good enough".)

But it’s the only thing that has a good chance of working.

Saying that gravity is a thing is not the same attitude as liking that everyone is subject to gravity. But hoping that gravity just goes away this once is wishful thinking of the least productive kind.

Rust is not "sufficiently committed" to backwards compatibility. Firstly, too young to know for sure and the burden is solely on "the rust community" here. (Yes, that sucks. Been there.)

Secondly, there were changes (other posters mentioned "Drop"), and the way cargo is treated, that counter-indicate this.

Rust can prove all the haters wrong. They will then be even more revered than Linux and Debian. But they have to prove this. That is a time-consuming slog. With destructive friction all the way.

This is the way.


> Marcan links to an email by Ted Tso'o (https://lore.kernel.org/lkml/20250208204416.GL1130956@mit.ed...) that is interesting to read. Although it starts on a polarising note ("thin blue line")

Can I say that I was immediately put off by the author conflating the "thin blue line" quote with a political orientation?

The full quote (from the article) being: "Later in that thread, another major maintainer unironically stated “We are the ‘thin blue line’”, and nobody cared, which just further confirmed to me that I don’t want to have anything to do with them."

The way I read it, "thin blue line" is being used as a figure of speech. I get what they are referring to and I don't see an endorsement. It doesn't necessarily mean a right-wing affiliation or sympathy.

To me it seems like the author is projecting a right-wing affiliation and a political connotation where there is none (at least not officially, as far as I can see on https://thunk.org/tytso/) in order to discredit Theodore Ts'o. Which is a low point, because attacking Ts'o on a personal level means Martin is out of ammunition to back their arguments.

But then again, Hector Martin is the same person that thought that brigading and shaming on social media is an acceptable approach to collaboration in the open source space:

    "If shaming on social media does not work, then tell me what does, because I'm out of ideas."
from https://lkml.org/lkml/2025/2/6/404

To me, from the outside, Hector Martin looks like a technically talented but otherwise toxic person who is trying to use public shaming on social media and ranting on his blog as tactics to impose their will on the otherwise democratic process of developing the Linux kernel. And then, on top of everything, they're behaving like a victim.

It's a good thing they are resigning, in my opinion.


Thank you for pointing this out—willfully, uncharitably misinterpreting “thin blue line” as used by Ts'o demonstrates a severe lack of empathy for people in his position.

Jumping to conclusions about police brutality and so forth (as many here in the comments are doing) is very frustrating to see, because, in context, the intent of his phrasing is very clear to anyone who doesn't needlessly infer Contemporary Political Nonsense in literally everything they read.


Perhaps contributors should have to go through a process of learning the codebase first, submitting increasingly complex fixes before jumping to really complex merge requests.

It can be hard when solving your own acute issue - doing so doesn't mean it is the only fix or the one the project should accept.

Even if it's beneath someone's talent to have to do it, it is an exercise of community building.


> This is par for the course I guess, and what exhausts folks like marcan. I wouldn't want to work with someone like Ted Tso'o, who clearly has a penchant for flame wars and isn't interested in being truthful.

I am acquainted with Ted via the open source community, we have each other on multiple social media networks, and I think he's a really great person. That said, I also recognize when he gets into flame wars with other people in the open source social circles, and sometimes those other people are also friends or acquaintances.

I can think of many times Ted was overly hyperbolic, but he was ultimately correct. Here is the part of the Linux project I don't like sometimes, which was recently described well in this recent thread. Being correct, or at least being subjectively correct by having extremely persuasive arguments, yet being toxic... is still toxic and unacceptable. There are a bazillion geniuses out there, and being smart is not good enough anymore in the open source world; one has to overcome those toxic "on the spectrum" tendencies or whatever, and be polite while making reasonable points. This policy extends to conduct as well as words written in email/chat threads. Ted is one of those, alongside Linus himself, who has in the past indulged in a bit of shady conduct or remarks, but their arguments are usually compelling.

I personally think of these threads in a way related to calculus of infinitesimals, using the "Standard Parts" function to zero away hyperbolic remarks the same way the math function zeros away infinitesimals from real numbers, sorta leaving the real remarks. This is a problem, because it's people like me, arguably the reasonable people, who through our silence enable these kind of behaviours.

I personally think Ted is more right than wrong, most of the time. We do disagree sometimes though. For example, Ted hates the new MiB/KiB system of base-2 units, and for whatever reason likes the previous, more ambiguous system of confusingly mixed base-10/base-2 units of MB/Mb/mb/KB/Kb/kb... and I totally got his arguments that a new standard makes something confusing already even more confusing, or something like that. Meh...


> Ted hates the new MiB/KiB system of base-2 units, and for whatever reason likes the previous, more ambiguous system of confusingly mixed base-10/base-2 units of MB/Mb/mb/KB/Kb/kb

Here's my best argument for the binary prefixes: Say you have a cryptographic cipher algorithm that processes 1 byte per clock cycle. Your CPU is 4 GHz. At what rate can your algorithm process data? It's 4 GB/s, not 4 GiB/s.

This stuff happens in telecom all the time. You have DSL and coaxial network connections quantified in bits per second per hertz. If you have megahertz of bandwidth at your disposal, then you have megabits per second of data transfer - not mebibits per second.

Another one: You buy a 16 GB (real GB) flash drive. You have 16 GiB of RAM. Oops, you can't dump your RAM to flash to hibernate, because 16 GiB > 16 GB so it won't fit.
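The hibernation mismatch is easy to verify (again, a small Python sketch of the scenario, not from the comment):

```python
GB = 10**9   # decimal gigabyte (SI)
GiB = 2**30  # binary gibibyte (IEC)

ram = 16 * GiB    # 17_179_869_184 bytes of RAM
flash = 16 * GB   # 16_000_000_000 bytes of flash

print(ram > flash)   # True: the RAM image won't fit
print(ram - flash)   # 1_179_869_184 bytes short (about 1.1 GiB)
```

The two "16s" differ by over a gigabyte, which is exactly the ambiguity the distinct prefixes are meant to surface.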

Clarity is important. The lack of clarity is how hundreds of years ago, every town had their own definition of a pound and a yard, and trade was filled with deception. Or even look at today with the multiple definitions of a ton, and also a US gallon versus a UK gallon. I stand by the fact that overloading kilo- to mean 1024 is the original sin.


> Another one: You buy a 16 GB (real GB) flash drive. You have 16 GiB of RAM. Oops, you can't dump your RAM to flash to hibernate, because 16 GiB > 16 GB so it won't fit.

Right, but the problem here is that RAM is marketed in different units than storage. It seems strictly worse if your 16 GB of RAM doesn't fit in your 16 GB of storage because you didn't study the historical marketing practices of those two industries, than if your 16 GiB of RAM doesn't fit in your 16 GB of storage, because at least in the second case you have something to tip you off that they're not using the same units.


    > I can think of many times Ted was overly hyperbolic, but ultimately correct. That's the part of the Linux project I sometimes dislike, and it was described well in this thread. Being correct, or at least subjectively correct by way of extremely persuasive arguments, while being toxic... is still toxic, and unacceptable.
I want to say that I am thankful, in this world, to be a truly anonymous nobody who writes code for closed-source megacorp CRUD apps. Being a tech "public figure" (Bryan Cantrill calls it "nerd famous") sounds absolutely awful. Every little thing you wrote on the Internet in the last 30 years is permanently recorded (!!!), then picked apart by every Tom, Dick, Harry, and Internet rando. My ego could never survive such a beating. And yet, here we are in 2025, where Ted Ts'o continues to maintain a small mountain of file system code that makes the Linux world go "brrr".

Hot take: Do you really think you could have done better over a 30 year period? I can only answer for myself: Absolutely fucking not.

I, for one, am deeply thankful for all of Ted's hard work on Linux file systems.


There are plenty of "nerd famous" people who manage it by just not being an asshole. If you're already an asshole, being "nerd famous" is going to be rough, yes, but maybe just don't be one?


Only charlatans completely avoid saying things that could get them into trouble.



