
> Rust has a stability guarantee since 1.0 in 2015. Any backwards incompatibilities are explicitly opt-in through the edition system, or fixing a compiler bug.

Unfortunately OP has a valid point regarding Rust's lack of commitment to backwards compatibility. Rust has a number of things that can break you that are not considered breaking changes. For example, implementing a trait (like Drop) on a type is a breaking change[1] that Rust does not consider to be breaking.

[1]: https://users.rust-lang.org/t/til-removing-an-explicit-drop-...
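
To make the failure mode concrete, here's a minimal sketch (a hypothetical type, not the code from the linked thread). Moving a field out of a struct is only legal while the struct has no destructor, so a library adding Drop in a patch release can break its downstream users:

    // v1 of a hypothetical library: no Drop impl.
    pub struct Conn {
        pub token: String,
    }

    // Downstream code, fine against v1: moves the field out of `c`.
    pub fn take_token(c: Conn) -> String {
        c.token
    }

    // If v2 adds `impl Drop for Conn { ... }`, the move of `c.token`
    // above stops compiling with E0509 ("cannot move out of a type that
    // implements the `Drop` trait"), even though the library's
    // documented API is unchanged.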



I think we're mixing 2 things here: language backward-compatibility, vs. standard practices about what semver means for Rust libraries. The former is way stronger than the latter.


> language backward-compatibility, vs. standard practices about what semver means

I've read and re-read this several times now and for the life of me I can't understand the hair you're trying to split here. The only reason to do semantic versioning is compatibility...


I assume that they mean that you can use Rust as a language without its standard library. This matters here since the kernel does not use Rust's standard library as far as I know (only the core crate).

I'm not aware of semver breakage in the language.

Another important aspect is that Semver is a social contract, not a mechanical guarantee. The Semver spec dedicates a lot of space to clarifying that it's about documented APIs and behaviors, not all visible behavior. Rust has a page where it documents its guarantees for libraries [0].

[0] https://doc.rust-lang.org/cargo/reference/semver.html


> Another important aspect is that Semver is a social contract, not a mechanical guarantee.

Although there are mechanical aids for it: https://crates.io/crates/cargo-semver-checks


The failure mentioned above wasn't a case of the language changing behaviour, but rather the addition of a trait impl in the standard library conflicting with a trait impl in a third-party crate, causing the build breakage.


The Rust compiler/language has no notion of semver. Saying "Rust is unstable b/c semver blah blah" is a tad imprecise. Semver only matters in the context of judging API changes of a certain library (crate).

> The only reason to do semantic versioning is compatibility

Sure. But "compatibility" needs to be defined precisely. The definition used by the Rust crate ecosystem might be slightly looser than others, but I think it's disingenuous to pretend that other ecosystems don't have footnotes on what "breaking change" means.


> But "compatibility" needs to be defined precisely.

Compatibility is defined precisely! Your definition requires scare quotes. You want to define it "precisely" so that you can permit incompatible behavior. No one who cares about compatibility does that; it's just an excuse.

Look, other languages do this differently. Those of us using C99 booleans know we need to include a separate header to avoid colliding with the use of "bool" in pre-existing code, etc... And it sort of sucks, but it's a solved problem. I can build K&R code from 1979 on clang. Rust ignored the issue, steamrollered legacy code, and tried to sweep it under the rug with nonsense like this.


I think you are trying very hard to disagree on basic stuff that works very similarly across different language ecosystems, and (looking at other responses) that you're very angry. Disengaging.


I'll point out again that C, the poster child for ancient language technology, has been able to evolve its syntax and feature set with attention to not breaking legacy code. Excusing the lack of such attention via linguistic trickery about "defining compatibility precisely" does indeed kinda irk me. And disengaging doesn't win the argument.


The fundamental issue here is that any kind of inference can have issues on the edges. If you write code using fully qualified paths all the time, then this semver footgun can never occur.
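
As a sketch of what that looks like (a hypothetical type, simplified from the incident discussed elsewhere in this thread): inference is allowed to commit to the single applicable impl, so code relying on that can be broken by a new impl appearing later, while fully spelled-out types cannot:

    struct Token(u32);

    fn count_tokens(tokens: Vec<Token>) -> usize {
        // `Box<_>` is only partially annotated. Today exactly one std
        // impl applies (`impl<I> FromIterator<I> for Box<[I]>`), so
        // inference resolves `_` to `[Token]`.
        let boxed = tokens.into_iter().collect::<Box<_>>();
        boxed.len()
    }

    // If a later release added another `FromIterator` impl for a
    // `Box<...>` type that also matched here, the elided `_` would
    // become ambiguous (E0282, "type annotations needed"). Spelling out
    // `collect::<Box<[Token]>>()` can never be broken this way.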


I was hit by a similar thing. Rust once caused regression failures in 5000+ packages due to incompatibility with older versions of the "time" crate [1]. It was considered okay. At that point, I don't care what they say about semver.

[1]: https://github.com/rust-lang/rust/issues/127343#issuecomment...


The comment you linked to explicitly shows that a maintainer does not consider this "okay" at all. T-libs-api made a mistake, the community got enraged, T-libs-api hasn't made such a mistake since. The fact that it happened sucks, but you can't argue that they didn't admit the failure.


"a maintainer"

The way you word that makes it sound like "the maintainers" and "T-libs-api" do not consider this "okay". Reading just above the linked comment, however, gives a very different impression of the situation:

> We discussed this regression in today's @rust-lang/libs-api team meeting, and agree there's nothing to change on Rust's end. Those repos that have an old version of time in a lockfile will need to update that.


You're reading an artifact of a point in time, before it hit stable and the rest of the project found out about this. t-libs-api misunderstood the impact because in the past there had been situations that looked similar and were unproblematic to go ahead with, but weren't actually similar. There were follow-up conversations, both in public and private, where the consensus arrived at was that this was not ok.


What I'm hearing is that the nature of the issue was recognized - that this was a breaking change; but that the magnitude of the change and the scale of the impact of that break was underestimated.

TBH that does not inspire confidence. I would expect that something claiming or aspiring to exhibit good engineering design would, as a matter of principle, avoid any breaking change of any magnitude in updates that are not intended to include breaking changes.


Thanks for clarifying. I took a look as well, and the very first reply confirms your opinion and that of the GP's parent. Plenty of downvotes, and the comments that come after criticize the maintainers: "I am not sure how @rust-lang/libs-api can look at 5400 regressions and say "eh, that's fine"."

Not sure why people are trying to cover this up.


It's not covering it up. The people that commented, including the one you quote, are part of the project.


You are sincere. I believe this is not a cover-up but more of a misunderstanding. Think of it this way: many people coming to that GitHub thread don't know who the core Rust devs are, but they can clearly see the second commenter is involved. That comment denied this being a major issue and concluded the decision was made as a team. To the public, and perhaps some kernel devs, this may be interpreted as the official attitude.


The change itself was very reasonable. They only missed the mark on how that change was introduced. They should have held it back until the next Rust edition, or at least delayed it a few releases to give users of the one affected package time to update.

The change was useful, fixing an inconsistency in a commonly used type. The downside was that it broke code in 1 package out of 100,000, and only broke a bit of useless code that was accidentally left in and didn't do anything. One package just needed to delete 6 characters.

Once the new version of Rust was released, they couldn't revert it without risk of breaking new code that may have started relying on the new behavior, so it was reasonable to stick with the one known problem than potentially introduce a bunch of new ones.


But that is not how backwards compatibility works. You do not break user space. And user space is pretty much out of your control! As a provider of a dependency you do not get to play such games with your users. At least not when those users care about reliability.


That was a mistake and a breakdown in processes that wasn't identified early enough to mitigate the problem. That situation does not represent the self-imposed expectations on acceptable breakage, just that we failed to live up to them; by the time it became clear that the change was problematic, it was too late to revert course, because the revert itself would have been a breaking change.

Yes: adding a trait impl to an existing type can cause inference failures. The Into fallback, where calling a.into() gives you back a itself, is particularly prone to this, and I've been working on a lint for it.
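
As a minimal illustration of the pattern such a lint would flag (my own example, not the code from the actual incident), std's reflexive impl<T> From<T> for T means an .into() call can resolve to the receiver's own type and do nothing at all:

    // Compiles because of std's reflexive `impl<T> From<T> for T`: the
    // call resolves to `From<String> for String` and returns `name`
    // unchanged. A lint can suggest deleting the `.into()` here.
    fn label(name: String) -> String {
        name.into()
    }

Since the call does nothing, removing it costs nothing, and it's one fewer place where a newly added From impl can change what inference picks.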


TBH that's a level of quality control that probably informs the Linux kernel devs' view of Rust reliability - it's a consideration when evaluating the risk of including that language.


Are you sure you want to start comparing the quality control of C and Rust packaging or reliability?


Your comment misunderstands the entire point and risk assessment of what's being talked about.

It's about the overall stability and "contract" of the tooling/platform, not what the tooling can control under it. A great example was already given: It took clang 10 years to be "accepted."

It has nothing to do with the language or its overall characteristics, it's about stability.


I trust the quality control of the Linux kernel devs a lot more than the semantics of a language.


Kernel devs more than almost everyone else are well aware that even the existing C toolchains are imperfect.


Maintaining backward compatibility is hard. I am sympathetic. Nonetheless, if the Rust dev team thinks this is a big deal, then clarify in release notes, write a blog post, and make a commitment that regressions at this level won't happen again. So far, there is little official response to this event. The top comment in the thread I point to basically thinks this is nothing. It is probably too late to do anything for this specific issue, but in the future it would be good to explain and highlight even minor compatibility issues through official channels. This will give people more confidence.


> Nonetheless, if the Rust dev team thinks this is a big deal, then clarify in release notes, write a blog post, and make a commitment that regressions at this level won't happen again. So far, there is little official response to this event.

There was an effort to write such a blog post. I pushed for it. Due to personal reasons (between being offline for a month and then quitting my job) I didn't have the bandwidth to follow up on it. It's on my plate.

> The top comment in the thread I point to basically thinks this is nothing.

I'm in that thread. There are tons of comments by members of the project in that thread making your case.

> It is probably too late to do anything for this specific issue, but it would be good to explain and highlight even minor compatibility issues through official channels.

I've been working on a lint to preclude this specific kind of issue from ever happening again (by removing .into() calls that resolve to their receiver's type). I customized the diagnostic to tell people exactly what the solution is. Both of these things should have been in place before stabilization at the very least. That was a fuck up.

> This will give people more confidence.

Agreed.


Thanks for the clarification. This has given me more confidence in rust's future.


It’s hard for me to tell if you’re describing a breakdown in the process for evolving the language or the process for evolving the primary implementation.

Bugs happen, CI/CD pipelines are imperfect, we could always use more lint rules …

But there’s value in keeping the abstract language definition independent of any particular implementation.


> At that point, I don't care what they say about semver.

Semver, or any compatibility scheme, really, is going to have to obey this:

> it is important that this API be clear and precise

—SemVer

Any detectable change being considered breaking is just Hyrum's Law.

(I don't want to speak to this particular instance. It may well be that "I don't feel that this is adequately documented or well-known that Drop isn't considered part of the API" is valid, or arguments that it should be, etc.)


Implementing (or removing) Drop on a type is a breaking change for that type's users, not the language as a whole. And only if you actually write a trait that depends on types directly implementing Drop[0].

Linux breaks internal compatibility far more often than people add or remove Drop implementations from types. There is no stability guarantee for anything other than user-mode ABI.

[0] AFAIK there is code that actually does this, but it's stuff like gc_arena using this in its derive macro to forbid you from putting Drop directly on garbage-collectable types.


> is a breaking change _for that type's users_, not the language as a whole.

And yet the operating mantra...the single policy that trumps all others in Linux kernel development...

is don't break user space.


Rust compiler upgrades being backwards incompatible has nothing to do whatsoever with keeping userspace ABI compatible.

GCC also occasionally breaks compatibility with the kernel, btw.


> Linux breaks internal compatibility far more often than people add or remove Drop implementations from types. There is no stability guarantee for anything other than user-mode ABI.

I think that's missing the point of the context though. When Linux breaks internal compatibility, that is something the maintainers have control over and can choose not to do. When it happens to the underlying infrastructure the kernel depends on, they don't have a choice in the matter.


Removing impl Drop is like removing a function from your C API (or removing any other trait impl): something library authors have to worry about to maintain backwards compatibility with older versions of their library. A surprising amount of Rust's complexity exists specifically because the language developers take this concern seriously, and try to make things easier for library devs. For example, people complain a lot about the orphan rules but they ensure adding a trait impl is never a breaking change.
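
To sketch what the orphan rules buy you (a standard newtype example, nothing project-specific): you can't implement a foreign trait for a foreign type, precisely so that an impl added upstream later can't conflict with yours:

    use std::fmt;

    // `impl fmt::Display for Vec<u8>` would be rejected here (E0117):
    // both the trait and the type live in other crates, so a future
    // upstream impl could otherwise conflict with ours. Wrapping the
    // type in a local newtype makes the impl legal:
    struct Bytes(Vec<u8>);

    impl fmt::Display for Bytes {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            write!(f, "{} bytes", self.0.len())
        }
    }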


The meaning of this code has not changed since Rust 1.0. It wasn't a language change, nor even anything in the standard library. It's just a hack that the poster wanted to work, and realized it won't work (it never worked).

This is the equivalent of a C user saying "I'm disappointed that replacing a function with a macro is a breaking change".

Rust has had actual changes that broke people's code. For example, any ambiguity in type inference is deliberately an error, because Rust doesn't want to silently change the meaning of users' code. At the same time, Rust doesn't promise it won't ever create a type inference ambiguity, because that would make any changes to traits in the standard library almost impossible. It's a problem that happens rarely in practice, can be reliably detected, and is easy to fix when it happens, so Rust chose to exclude it from the stability promise. They've usually handled it well, except that recently they miscalculated: "only one package needed to change code, and they've already released a fix", but they forgot to give users enough time to update the package first.


I'm curious now. What are the backwards compatibility guarantees for C?


As long as you compile with the version specified (e.g., `-std=c11`), I think backwards compatibility should be 100%. I've been able to compile decades-old codebases with modern compilers this way.


In practice, C has a couple of significant pitfalls that I've read about.

First is if you compile with `-Werror -Wall` or similar; new compiler diagnostics can result in a build failing. That's easy enough to work around.

Second, nearly any decent-sized C program has undefined behavior, and new compilers may change their handling of undefined behavior. (E.g., they may add new optimizations that detect and exploit undefined behavior that was previously benign.) See, e.g., this post by cryptologist Daniel J. Bernstein: https://groups.google.com/g/boring-crypto/c/48qa1kWignU/m/o8...


While not entirely wrong, the UB issues are a bit exaggerated in my opinion. My C code from 20 years ago still works fine even when using modern compilers. In any case, our plan is to remove most of the UB, and there is quite good progress. Complaining that your build fails with -Werror seems a bit weird. If you do not want it, why explicitly request it with -Werror?


Just curious, when you're done removing all the UB, how will you know you've been successful? UB is hard to find, isn't it?


To be clear: I mean UB in the C standard. The cases where there is UB are mostly spelled out explicitly, so we can go through all the cases and define behavior. There might be cases where there is implicit UB, i.e. something is accidentally not specified, but this has always been fixed when noticed. It is not possible to remove all cases of UB, but the plan is to add a memory-safe mode where there is no UB, similar to Rust.


The warning argument is silly. It just means that your code is not up to par with modern standards. -Wall is a moving goalpost, and it gets new warnings added with every release of a toolchain because toolchain developers are trying to make your code more secure.


I mean, yeah, I said it was easy enough to work around. But it's an issue I've seen raised in discussions of C code maintenance. (The typical conclusion is that using `-Wall -Werror` is a mistake for long-lived, not-actively-developed code.) Apologies if I overstated the case.


Not even close to 100%. The reason it feels like every major C codebase in industry is pinned to some ancient compiler version is that upgrading to a new toolchain is fraught. The fact that most Rust users successfully track relatively recent versions of the toolchain is a testament to how stable Rust actually is in practice (an upgrade might take you a few minutes per million lines of code).


IDK about "industry" but I can't think of any prominent C or C++ open-source codebase that requires a specific version of gcc or clang to compile.


Try following your favourite distro's bug tracker during a GCC upgrade. Practically every update breaks some packages, sometimes fewer, sometimes more (esp. when GCC changes its default flags).

Here's one example of workarounds in ~100 packages that broke when upgrading to GCC 10: https://github.com/search?q=repo%3ANixOS%2Fnixpkgs%20fcommon...


The Linux kernel was that way for a while, years ago.


gets() was straight-up removed in C11.

Every language has breaking changes. The question is the frequency, not if it happens at all.

The C and C++ folks try very hard to minimize breakage, and so do the Rust folks. Rust is far closer to those two than other languages. I'm not willing to say that it's the same, because I do not know how to quantify it.


But you can still use gets() if you're using C89 or C99[1], so backwards compatibility is maintained.

Rust 2015 can still evolve (either by language changes or by std/core changes) and packages can be broken by simply upgrading the compiler version even if they're still targeting Rust 2015. There's a whole RFC[2] on what is and isn't considered a breaking change.

[1]: https://gcc.godbolt.org/z/5jb1hMbrx

[2]: https://rust-lang.github.io/rfcs/1105-api-evolution.html


> so backwards compatibility is maintained.

That's not what backwards compatibility means in this context. You're talking about how a compiler is backwards compatible. We're talking about the language itself, and upgrading from one version of the language to the next.

Rust 2015 is not the same thing as C89, that is true.

> packages can be broken by simply upgrading the compiler version

This is theoretically true, but in practice, this rarely happens. Take the certainly-a-huge-mistake time issue discussed above. I actually got hit by that one, and it took me like ten minutes to even realize that it was the compiler's fault, because upgrading is generally so hassle free. The fix was also about five minutes worth of work. Yes, they should do better, but I find Rust upgrades to be the smoothest of any ecosystem I've ever dealt with, including C and C++ compilers.


(side note: I don't think I've ever thanked you for your many contributions to the Rust ecosystem, so let me do that now: thank you!)

> You're talking about how a compiler is backwards compatible. We're talking about the language itself, and upgrading from one versions of the language to the next.

That's part of the problem. Rust doesn't have a spec. The compiler is the spec. So I don't think we can separate the two in a meaningful way.


You're welcome!

> So I don't think we can separate the two in a meaningful way.

I think that in that case, you'd compare like with like, upgrading both.

I do agree that gcc and clang supporting older specs with a flag is a great feature, and is something that Rust cannot do right now.

But the results of the annual survey have come out: https://blog.rust-lang.org/2025/02/13/2024-State-Of-Rust-Sur...

And 90% of users use the current stable version for development, while 7.8% use a specific stable version released within the past year.

These numbers are only so high because it is such a small hassle to update even large Rust codebases between releases.

So yes, in theory, breakage can happen. But that's in theory. In practice, this isn't a thing that happens very much.


C (and C++) code breaks all the time between toolchain versions (I say "toolchain" to include compiler, assembler, linker, libc, etc.). Some common concerns are: headers that include other headers, internal-but-public-looking names, macros that don't work the way people think they do, unusual function-argument combinations, ...

Decades-old codebases tend to work because the toolchain explicitly hard-codes support for the ways they make assumptions not provided by any standard.


For the purposes of the Linux kernel, there's essentially a custom superset of C that is defined as "right" for the kernel, and there are maintainers responsible for maintaining it.

While GCC with a few basic flags will, in general, produce a binary that cooperates with the kernel, kbuild loads all those flags for a reason.


> For the purposes of linux kernel, there's essentially a custom superset of C that is defined as "right" for linux kernel

Superset? Or subset? I'd have guessed the latter.


Superset. ANSI/ISO C is not a good language to write a kernel in, because the standards are way more limiting than some people would think - and leave a lot to the implementation.

So it's a superset in terms of what's defined.


The backwards compatibility guarantee for C is "C99 compilers can compile C99 code". If they can't, that's a compiler bug. Same for other C standards.

Since Rust doesn't have a standard, the guarantee is "whatever the current version of the compiler can compile". To check if they broke anything they compile everything on crates.io (called a crater run).

But if you check the results of crater runs, almost every release some crates that compiled in the previous version stop compiling in the new version. But as long as the number of such breakages is not too large, they say "nothing is broken" and push the release.


Can you provide an example for the broken-crater claim? As far as I'm aware, Rust folks don't break compatibility that easily, and the one time that happened recently (an old version of the `time` crate getting broken by a compiler update), there were a lot of foul words thrown around and the maintainers learned their lesson. Are you sure you aren't talking about crates triggering UB or crates with unreliable tests that were broken anyway?


I am not following this too closely (the time issue seemed pretty severe though), but there are compatibility changes listed in the release notes very frequently: https://github.com/rust-lang/rust/blob/master/RELEASES.md


What do you mean? Rust 1.0 can compile Rust 1.0. Rust 1.1 can compile Rust 1.1.


C makes a distinction between the language version and the compiler version. Rust does not. That's the problem people are discussing here.


C99 isn't a compiler version. It's a standard. Many versions of GCC, Clang and other compilers can compile C99 code. If you update your compiler from gcc 14.1 to gcc 14.2, both versions can still compile standard code.


There is also a very high level of backwards compatibility between versions of ISO C, because a gigantic amount of code would have to be updated if there were a change. So such changes are made only for important reasons, or after a very long deprecation period.


But Rust 1.0 can't compile Rust 1.1.

And as others have noted, C99 is a standard and Rust lacks one.


> But Rust 1.0 can't compile Rust 1.1

That's an impossible standard to hold Rust to; did you mean it the other way around? A C89 compiler can't compile all of C99 either.


But v1 of a C99 compiler can compile all of C99, and v2 of a C99 compiler can still compile all of C99.


Right, which is exactly what backwards compatibility means. Imagine if GCC 14.2.0 was only guaranteed to be able to compile "C 14.2.0".




