Rust's borrow checker is unique in the sense that it is production-ready. Cyclone is indeed prior art, but it's not as if it ever got beyond the research project stage.
> Rust actually causes far more code churn - there's virtually a monthly cycle of some Rust compiler update requiring changes to existing code.
This is really surprising to me. Which changes have you had to deal with here? The only one I can think of is the major Edition 2024 upgrade, which changed the `#[no_mangle]` attribute to `#[unsafe(no_mangle)]`. But the old syntax still compiles just fine with the newest compiler using Edition 2021.
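For reference, the change looks like this (a minimal sketch; the exported function is made up):

```rust
// Edition 2021: still accepted by the newest rustc.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

// Edition 2024: the attribute must be marked unsafe, since an unmangled
// symbol can collide with another definition at link time:
// #[unsafe(no_mangle)]
// pub extern "C" fn add(a: i32, b: i32) -> i32 { a + b }
```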
I guess it's because we use clippy to lint all the Rust code, combined with always pulling the latest Rust compiler (from Fedora Rawhide). But it's a real thing, there's a lot of churn.
The good thing is clippy / rustc has very good diagnostics and usually tells you "do X to fix this".
I think what clippy needs the most is a way to say, "treat all clippy warnings as errors, but only warnings that existed as of Rust 1.80". And then devs can bump that number whenever they want, as opposed to whenever new updates come in.
Or tie it to the current edition, or the current MSRV if defined.
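Part of this exists already, for what it's worth: clippy's MSRV-gated lints respect a pinned version, though that doesn't freeze lints which aren't MSRV-aware. A minimal sketch:

```toml
# clippy.toml -- MSRV-gated lints won't suggest constructs
# newer than this version.
msrv = "1.80"
```

Clippy also reads the `rust-version` field from Cargo.toml, which covers the tie-it-to-the-MSRV case.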
A recent one I had was needing to add lifetimes, or at least specify the anonymous lifetime with the underscore. I forget what the actual circumstance was.
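That sounds like it could be the `elided_lifetimes_in_paths` lint (just a guess; the types here are made up):

```rust
struct Parser<'a> {
    input: &'a str,
}

// The lint flags this: the return type silently borrows from `input`.
// fn parser_for(input: &str) -> Parser { Parser { input } }

// The fix: spell out the anonymous lifetime with the underscore.
fn parser_for(input: &str) -> Parser<'_> {
    Parser { input }
}

fn main() {
    let p = parser_for("hello");
    println!("{}", p.input);
}
```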
Unsafe Rust actually has a great runtime analyzer: Miri. It's very easy to just run `cargo +nightly miri test` in your project to get some confidence in the more questionable choices along the way.
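For example, a test like this (a made-up sketch) will typically pass under plain `cargo test`, but Miri flags the out-of-bounds read:

```rust
#[cfg(test)]
mod tests {
    #[test]
    fn questionable_pointer_math() {
        let v = vec![1u8, 2, 3];
        let p = v.as_ptr();
        // UB: reads one element past the end of the allocation.
        // `cargo +nightly miri test` reports this reliably.
        let x = unsafe { *p.add(3) };
        assert!(x < 255);
    }
}
```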
If that's all you need, the state of the art is already readily available through the JVM and the .NET CLR, as well as a handful of others depending on your use case. Most of those also come with decent languages, and great facilities to leverage the GC to its maximum.
But GCs aren't magic and you will never get rid of all the overhead. Even if the CPU time is not noticeable in your use case, the memory usage fundamentally needs to be at least 2-4x the actual working set of your program for GCs to be efficient. That's fine for a lot of use cases, especially when RAM isn't scarce.
Most people who use C or C++ or Rust have already made this calculation and deemed the cost to be something they don't want to take on.
That's not to say Fil-C isn't impressive, but it fills a very particular niche. In short, if you're bothering with a GC anyway, why wouldn't you also choose a better language than C or C++?
I don't understand the need to hammer in the point that Fil-C is only valuable for this tiny, teeny, irrelevant microscopic niche, while not even talking about what the niche is? To be clear, the niche is rebuilding your entire GNU/Linux userland with full memory safety and completely acceptable performance, tomorrow, without rewriting anything, right? Is this such a silly little idiosyncratic hobby?
So I don’t want to come off as dismissive of the effort - it’s certainly impressive!
The reason I’m not super excited is based on the widely publicized findings from Google and Microsoft (IIRC) about memory safety issues in their code: The vast majority is in new code.
As such, the returns on running the entire userspace with Fil-C may be quite diminished from the get-go. Those who need to guard against UB bugs in seriously battle-hardened C software in production are definitely a small niche.
But that doesn’t mean it isn’t also very useful as a tool during development.
Hmm, so if they're writing new memory unsafe code in C/C++, presumably to remain within their already established and entrenched C/C++ ecosystems, why isn't Fil-C interesting as a way to thwart memory safety issues in that new code?
Because every problem detected by Fil-C is already a serious problem in the existing code.
As a mitigation strategy, that becomes less interesting as the quality of that code increases, but you still pay the full cost regardless of whether there are actually any bugs.
That can certainly be valuable to you, but as a developer, the more interesting proposition is about how not to ship bugs in the first place.
As others have said, programs that have already been written are plainly not in the business of "not...[shipping] bugs in the first place". New code is new code; old code is old code.
It seems like there are constant updates for 20-year-old packages on my Ubuntu systems. Ubuntu 20.04 Focal Fossa (first released April 2020) glibc had an update on 2025-05-28. Current stable updated glibc on 2025-09-22. To say nothing of the rest of the packages in that operating system.
Oh, look at the time, a few more CVEs in C code, posted 3 hours ago to Hacker News: "X.Org Security Advisory: multiple security issues X.Org X server and Xwayland"
> The reason I’m not super excited is based on the widely publicized findings from Google and Microsoft (IIRC) about memory safety issues in their code: The vast majority is in new code
This makes perfect sense to me.
Which is why I don't at all understand the current fetish with rewriting things that have been working well for decades in Rust. Such as coreutils. Or apt.
It feels like an almost deliberate crippling of progress by diverting top talent into useless avenues, much like string theory in physics, or SLS/Artemis.
> It feels like an almost deliberate crippling of progress by diverting top talent into useless avenues, much like string theory in physics, or SLS/Artemis.
You don't have to be a "top talent" to rewrite old unix utilities. The hard part is writing it safely, which in Rust can be done without "top talent."
There's a contingent of rust fans that show up on every story about C – their premise is that C code is unsafe and most safety-critical C code should be rewritten in rust.
Fil-C is new and is a viable competitor to rust, that's why you're hearing all asides about tiny niches, unacceptable performance degradation, etc.
Hacker News is not a place where any one group brigades a thread. There are people who prefer C who don't want a GC, people who prefer Rust who don't want C, people who prefer Rust who agree with Fil-C for legacy C, people who don't prefer C or Rust and may use languages with GC.... We all have interests and face people who denigrate them in bad faith. If you have specific objections to inaccurate statements in this thread, then state them. I'll do the same for any technology if I'm qualified to make statements on it.
I'm not saying that people don't comment in bad faith here. But if someone's first thought upon seeing significant negativity is that a coordinated, massive campaign is occurring, that someone is probably wrong. Commenting in that vein also harms discussion. If Hacker News is to foster good discussion, then simply not being a troll or ideologue is not sufficient.
There really are many people here, with largely diverse opinions. Don't lump people together unless they lump themselves together.
If you missed it, djb himself posted this cute graph of "nearly 9000 microbenchmarks of Fil-C vs. clang on cryptographic software (each run pinned to 1 core on the same Zen 4)":
I've heard Filip has some ideas about optimizing array performance to avoid capability checks on every access... doing that in a thread-safe way seems like an interesting challenge, but I guess there are ways!
Sure of course I followed that link. I've really got no idea what the horizontal axis is! But there is a huge cluster of results between 1x and 1.5x execution time.
And, the kind of code he is interested in is not necessarily the same as the kind of code I'm interested in. In fact I know it's not!
As one more data point, compiling my little benchmark with gcc, without any optimisation flag.
There are no Rust fans here, only GC skeptics. GC skeptics existed long before anyone dreamed of Rust and will survive Rust as well.
It’s a pretty reasonable objection too (though I personally don’t agree). C has always been chosen when performance is paramount. For people who prioritise performance it must feel a bit weird to leave performance on the table in this way.
And Jesus Christ, give it a rest with this “Rust fans must be thinking” stuff. It sounds deranged.
No, back in the day C was used for everything. Vim was not written in C because it needed to wring every last bit of performance out of text editing.
Rewriting everything in rust "for memory-safety" is a false tradeoff given the millions of lines of C code out there and the fact that rewrites always introduce new bugs.
Please, I’m begging you, stop talking about Rust. You’re shoehorning Rust into a discussion where it hasn’t been mentioned, just to hate on some imaginary people you think are pushing Rust here. No one is talking about that. You sound deranged and obsessed.
The vast majority of the conversation here is about GC and the performance implications of that. Please stick to the rest of the thread.
I almost always find that building Boehm GC as a malloc replacement (malloc() -> GC_malloc(), free() -> NOP), and then using LD_PRELOAD to get it used makes any random C/C++ program not only still work but also run faster.
Not only that, but you can then use GC_FREE_SPACE_DIVISOR to tune RAM usage vs speed to your liking on a program by program (or even instance by instance) basis, something completely impossible with malloc().
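A minimal sketch of such a shim, assuming bdwgc is installed (bdwgc can also be built to redirect malloc itself, via `--enable-redirect-malloc`):

```c
/* gc_shim.c -- build: cc -shared -fPIC gc_shim.c -lgc -o gc_shim.so
   run:   GC_FREE_SPACE_DIVISOR=5 LD_PRELOAD=./gc_shim.so ./some_program */
#include <gc/gc.h>   /* plain <gc.h> on some systems */
#include <stddef.h>

void *malloc(size_t n)           { return GC_malloc(n); }
void *calloc(size_t n, size_t m) { return GC_malloc(n * m); } /* GC_malloc zeroes */
void *realloc(void *p, size_t n) { return GC_realloc(p, n); }
void free(void *p)               { (void)p; /* NOP: the collector reclaims it */ }
```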
I think Fil-C is for people who are using software that has already been written, not for people who are trying to pick what language to write new software in. A substantial amount of software has, after all, already been written.
It's super fun to write C and C++ code in Fil-C because it's like this otherworldly crossover between Java and C/C++:
- Unlike Java, you get fantastic startup times.
- Unlike Java, you get access to actual syscall APIs.
- Unlike Java, you can leverage the ecosystem of C/C++ libraries without having to write JNI wrappers (though you do have to be able to compile those libraries with Fil-C).
- Like Java, you can just `new` or `malloc` without `delete`ing or `free`ing.
I like C, have a probably unhealthy relationship with C++ where I am amazed by what it can do and then get unrealistic expectations it keeps failing to fulfill, and don't really like Java.
You know Julia Ecklar's song where she says that programming in assembler is like construction work with a toothpick for a tool? I feel like C, C++, or Java are like having a teaspoon instead. Maybe Java is a tablespoon. I'd rather use something like OCaml or a sane version of Python without the Mean Girls community infighting. I just haven't found it.
On the other hand, the supposedly more powerful languages don't have a great record of shipping highly usable production software. There's no Lisp or Ruby or Lua alternative to Firefox, Linux, or LLVM.
> Like Java, you can just `new` or `malloc` without `delete`ing or `free`ing.
Is your intention that people use the Fil-C garbage collector instead of free()? Or is it just a backstop in case of memory leak bugs?
Can the GC be configured to warn or panic if something is GCed without free()? Then you could detect memory leak bugs by recompiling with Fil-C - with less overhead than valgrind, although I’m guessing still more than ASan - but more choices is always a good thing.
But I'm not sure it's worth porting your code to Fil-C just to get that property. Because Fil-C still needs to track the memory allocation with its garbage collector, there isn't much advantage to even calling free. If you don't have a use-after-free bug, then it's just adding the overhead of marking the allocation as freed.
And if you do have a use-after-free bug, you might be better off just letting it silently succeed, as it would in any other garbage collected language. (Though, probably still wise to free when the data in the allocation is now invalid).
IMO, if you plan to use Fil-C in production, then might as well lean on the garbage collector. If you just want memory safety checking during QA, I suspect you are better off sticking with ASan. Though, I will note that Fil-C will do a better job at detecting certain types of memory issues (but not use-after-free)
I wasn’t talking about use-after-free, I was talking about memory leaks - when you get a pointer from malloc(), and then you destroy your last copy of the pointer without ever having called free() on it.
Can the GC be configured to warn/panic if it deallocates a memory block which the program failed to explicitly deallocate?
Question: would this be a desirable outcome for drivers? As far as I can tell, most kernel driver crashes are the kind that would benefit from such protection. Plus it would obviate the need for full rewrites, if such a GC can protect against the faults and help with the recovery. Assuming the post-recovery process is similar to Erlang's BEAM, where a reload can bring back a healthy state.
> Is your intention that people use the Fil-C garbage collector instead of free()? Or is it just a backstop in case of memory leak bugs?
Wow great question!
My intention is to give folks powerful options. You can choose:
- Compile your code with Fil-C while still maintaining it for Yolo-C. In that case, you'll be calling free(). Fil-C's free() behavior ensures no GC-induced leaks (more on that below) so code that does this will not have leaks in Fil-C.
- Fully adopt Fil-C and don't look back. In that case, you'll probably just lean on the GC. You can still fight GC-induced leaks by selectively free()ing stuff.
- Both of the above, with `#ifdef __FILC__` guards to select what you do. I think you will want to do that if your C program has a custom GC (this is exactly what I did with emacs - I replaced its super awesome GC with calls to my GC) or if you're doing custom arena allocations (arenas work fine in Fil-C, but you get more security benefit, and better memory usage, if you just replace the arena with relying on GC).
The reason why the GC is there is not as a backstop against memory leaks, but because it lets me support free() in a totally sound way with deterministic panic on any use-after-free. Additionally, the way that the GC works means that a program that free()s memory is immune to GC-induced memory leaks.
What is a GC-induced leak? For decades now, GC implementers like me have noticed the following phenomena:
- Someone takes a program that uses manual memory management and has no known leaks or crashes in some set of tests, and converts it to use GC. The result is a program that leaks on that set of tests! I think Boehm noticed this when evangelizing his GC. I've noticed it in manual conversions of C++ code to Java. I've heard others mention it in GC circles.
- Someone writes a program in a GC'd language. Their top perf bug is memory leaks, and they're bad. You scratch your head and wonder: wasn't the whole point of GC to avoid this?
Here's why both phenomena happen: folks have a tendency to keep dangling pointers to objects that they are no longer using. Here's an evil example I once found: there's a Window god-object that gets created for every window that gets opened. And for reasons, the Window has a previousWindow pointer to the Window from which the user initiated opening the window. The previousWindow pointer is used in initialization of the Window, but never again. Nobody nulled previousWindow.
The result? A GC-induced leak!
In a malloc/free program, the call to previousWindow.destroy() (or whatever) would also delete (free()) the object, and you'd have a dangling pointer. But it's fine because nobody dereferences it. It's a correct case of dangling pointers! But in the GC'd program, the dangling pointer keeps previousWindow around, and then there's previousWindow.previousWindow, and previousWindow.previousWindow.previousWindow, and... you get the idea.
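In code, the shape of that bug is roughly this (names made up, following the description above):

```c
#include <stdlib.h>

struct Window {
    struct Window *previousWindow; /* used during init, never again */
    /* ... lots of other window state ... */
};

struct Window *open_window(struct Window *opener) {
    struct Window *w = malloc(sizeof *w);
    w->previousWindow = opener; /* read a few times during setup... */
    /* ... initialize from opener ... */
    return w;                   /* ...and never nulled afterwards */
}
```

Under malloc/free, destroying `opener` leaves a harmless dangling pointer; under a GC, that same pointer pins the whole chain of previous windows in memory.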
This is why Fil-C's answer to free() isn't to just ignore it. Fil-C strongly supports free():
- Freeing an object immediately flags the capability as being empty and free. No memory accesses will succeed on the object anymore.
- The GC does not scan any outgoing references from freed objects (and it doesn't have to because the program can't access those references). Note that there's almost a race here, except https://fil-c.org/safepoints saves us. This prevents previousWindow.previousWindow from leaking.
- For those pointers in the heap that the GC can mutate, the GC repoints the capability to the free'd singleton instead of marking the freed object. If all outstanding pointers to a freed object are repointable, then the object won't get marked, and will die. This prevents previousWindow from leaking.
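Concretely, that means something like this (a sketch; under Yolo-C this is garden-variety UB, while Fil-C panics deterministically at the marked line):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *p = malloc(sizeof *p);
    *p = 42;
    free(p);            /* capability is now flagged empty and free */
    printf("%d\n", *p); /* Fil-C: deterministic panic, not a lucky read */
    return 0;
}
```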
> Can the GC be configured to warn or panic if something is GCed without free()?
Nope. Reason: the Fil-C runtime itself now relies on GC, and there's some functionality that only a GC can provide that has proven indispensable for porting some complex stuff (like CPython and Perl5).
It would take a lot of work to convert the Fil-C runtime to not rely on GC. It's just too darn convenient to do nasty runtime stuff (like thread management and signal handling) by leaning on the fact that the GC prevents stuff like ABA problems. And if you did make the runtime not rely on GC, then your leak detector would go haywire in a lot of interesting ports (like CPython).
But, I think someone might end up doing this exercise eventually, because if you did it, then you could just as well build a version of Fil-C that has no GC at all but relies on the memory safety of sufficiently-segregated heaps.
Okay, that's brilliant. I didn't imagine that the GC-induced leak problem was even solvable. I guess the freed-but-not-GCed object could be arbitrarily large, but that's almost never going to be a gradual leak.
Even if you have a large freed object, it’ll really get GC’d unless it’s referenced from a root that the GC can’t edit. I allow for such things to simplify the runtime, but they’re rare.
As a GC dev I just found the emacs GC to be so nicely engineered:
- The code is a pleasure to read. I understood it very quickly.
- Lots of features! Very sophisticated weak maps, weak references, and finalizers. Not to mention support for heap images (the portable dumper).
- The right amount of tuning, but nothing egregious.
It’s super fun to read high quality engineering in an area that I am passionate about!
Is it possible to use Fil-C as a replacement for valgrind/address sanitizer/leak sanitizer? I.e. say I have a C program that does manual memory management already. Can I then compile it with Fil-C and have it panic/assert on heap use after free, uninitialized memory read (including stack), array out of bounds read, etc?
> Nope. Reason: the Fil-C runtime itself now relies on GC, and there's some functionality that only a GC can provide that has proven indispensable for porting some complex stuff (like CPython and Perl5).
What if there was a flag you could set on an allocation, “must be freed”. An app can set the “must be freed” flag on its allocations, meaning when the GC collects the allocation, it checks if free() has been called on it, and if it hasn’t, it logs a warning (or even panics), depending on process configuration flags. Meanwhile, internal allocations by the runtime won’t set that flag, so the GC will never panic/warn on collecting them.
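Roughly this shape, I imagine (entirely hypothetical; no such API exists in Fil-C today):

```c
/* Hypothetical sketch of the proposal -- not a real Fil-C API. */
void *p = malloc(64);
filc_set_must_free(p); /* imaginary flag: "app-owned, must be freed" */

/* If the program then drops its last pointer to p without ever calling
   free(p), the GC would log a warning or panic when it collects it,
   depending on process configuration. Runtime-internal allocations
   never set the flag, so they collect silently. */
```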
It seems kind of analogous to disabling hyperthreading. Sure there's an immediate performance hit, but in exchange you're now protected from entire classes of vulnerabilities. A few years later, no one remembers or cares about that old setback that has been long since eclipsed by subsequent hardware advancements.
Modern hardware is stupidly fast compared to what existed at the time that a lot of C/C++ projects first started. My M2 MacBook Air has 5x higher multi-core performance than my previous daily driver (a 2015 MacBook Pro, a highly capable machine in its own right), and the new iPhone is now even faster than that. I'd happily accept a worst-case 4x slowdown of all user space C/C++ code in the interest of security, especially when considering how much of that code is going to be written by AI going forward.
I do not think this is niche in the slightest. I would very happily take a 2-4x slowdown for almost all of the web facing C software I run if I get guaranteed memory safety. I will be using at the very least fil-c openssh (and likely much more) on every machine I run.
Sure, that makes sense. The point I’m making is just that from an engineering perspective, that also implies that there is no longer any reason for that software you’re running to be written in C at all.
Sure there is. Making tough choices between alternatives based on where to allocate a limited amount of manpower is an engineering choice. Choosing to use Fil-C to recompile existing (established, stabilized, functional...) software rather than rewrite it is an engineering choice.
Apologies ahead of time, as this is pure FUD. That is, I don't actually know what I am talking about, but I had an interesting thought.
Remember the Debian weak keys kerfuffle? It was caused when a Debian package maintainer saw a warning about using uninitialized memory and fixed it, and then it turned out that uninitialized memory was a critical seed for the OpenSSL random number generator.
Anyhow, my stupid FUD thought: is there a weak-keys-equivalent bug that shows up now that your C compiler is memory safe?
Even if you can't use something like Fil-C in your release/production builds, being able to e.g. compile unit tests with it to catch memory safety bugs is a huge win. My team uses gcc for its MIPS codegen, but I'm working on adopting the clang bounds-safety annotations for test builds for exactly this reason.
Yeah, I haven't taken a serious look at it from that perspective yet, but something similar came to mind: outside of bootstrapping the JDK from GCJ, Boehm GC hasn't been super relevant to me for "release" builds of anything, but it has been useful in leak-detection mode on occasion.
I figure even if you cannot use, or do not want to use, something like Fil-C in production, there's solid potential for it to augment whatever existing suite of sanitizers and other tools that one may already build against.
If you write your software in a language that needs GC, everybody using your software needs GC, but they're guaranteed to get memory safety.
If you write your software in an unsafe, non-GC language, nobody needs GC, but nobody gets memory safety either.
This is why many software developers chose the latter option: if there were even some use cases in which GC wasn't acceptable for their software, then nobody would get GC, not even the people who could afford it and would have preferred the increased memory safety.
Fil-C lets the user make this tradeoff. If you can accept the GC, you compile with Fil-C, otherwise you use a traditional C compiler.
I would say that Rust would be a better choice rather than patching memory safety on top of C. But I think the reason for this is that most, if not all, cryptographic reference implementations are in C. So they want to use existing reference implementations without having to port them to Rust.
IMO cryptographers should start using Rust for their reference implementations, but I also get that they'd rather spend their time working on their next paper rather than learning a new language.
I'm not a practitioner of cryptography, but I would be wary about timing attacks that might become possible if such a dynamic runtime is introduced. At least the relevant pieces of code would need to be re-evaluated in the Fil-C environment.
But maybe you could use C as the "glue language" and then build better-performing libraries in Rust for C to use. Like in Python!
Good call! Fil-C does in fact have a way to let you build and run OpenSSL with its constant time crypto. I don't know how this works exactly but I guess it's relatively easy to guarantee it's safe.
The original poster got pretty much all of Debian running in Fil-C, in a fairly brief amount of time.
Re-writing even a single significant library or package in Rust would take exponentially longer, so in this case Rust would not be "a better choice", but rather a non-starter.
Memory safety is a very small concern for most cryptographic implementations, compared to e.g. side-channel attacks.
Rust solves essentially none of the other concerns.
IIRC SHA3's reference implementation had an integer overflow in a counter that made finding collisions trivial, as it meant that some blocks of the input weren't considered.
> IMO cryptographers should start using Rust for their reference implementations
IMO they should not, because if I look at a typical Rust code, I have no clue what is going on even with a basic understanding of Rust. C, however, is as simple as it gets and much closer to pseudocode.
Good cryptographic code should match its algorithmic description. Rust enables abstractions that allow this. C does not. That you have some familiarity with C and not Rust should not be a contributing factor.
I say this as someone who has written cryptographic code that’s been downloaded millions of times.
The problem in terms of reference implementations is exactly the abstractions. Reference implementations should be free of abstractions and should be understandable. Abstractions make code much less understandable.
I say this as someone who has been involved in cryptography and has read through dozens of reference implementations. Stick to C, not Python or Rust; it is much easier to understand, because the abstractions are just there to hide code. Fewer abstractions in reference implementations = better. If you do not think so, I will provide a code snippet in a language of my own choosing that is full of abstractions, and let us see whether you understand exactly what it does. You will not. That is the point.
Abstractions are the only way to make sense of cryptography. Pretending otherwise leads to cross layer bugs and vulnerabilities.
Of course bad abstractions are bad, but that doesn’t mean no abstraction is good.
Reference implementations must NOT have abstractions like this. Rust encourages it. Lots of Rust codebases are filled with them. Your feelings for Rust are irrelevant. C is simple and easy to understand, therefore reference implementations must be in C. Period.
Hiding things is totally great in crypto code! When I'm implementing signature verification, I shouldn't have to worry about how the underlying field or elliptic curve algebra, for example, is implemented!
In fact there have been many crypto bugs which insufficiently abstract this kind of stuff away.
> And Linux won't arbitrarily irrevocably brick your computer because of an automatic update.
To the average user, it absolutely will. Unless they happen to run on particularly well-supported hardware, the days of console tinkering aren't gone, even on major distros.
What's fixable to the average Linux user and what's fixable to the average person (whose job is not to run Linux) are two very, very different things.
If you run a modern distro with a modern filesystem, you can at the very least have automatic snapshots that actually work, and you can restore to a previous state if an update breaks things. The same cannot be said for Windows.
I have booted from snapshots on Ubuntu with ZFS plenty of times and it has worked fine. I've also used Snapper with btrfs and restored from backup and it's worked fine. I've also booted from snapshots in NixOS and it has worked fine. I actually cannot think of a time where any of those examples didn't work fine.
The borrow checker tracks whether there are one, more than one, or no references to a value at any particular time, and rust automatically drops it when the last reference (the owning one) goes away. Sounds like compile-time reference counting to me :P
I didn't invent this way of referring to it, though I don't recall who I stole it from. It's not entirely accurate, but it's a close enough description to capture how rust's mostly automatic memory management works from a distance.
So the problem here is that it is almost entirely wrong. There is no reference count anywhere in the borrow checker’s algorithm, and you can’t do the things with borrows that you can do with reference counting.
It’s just not a good mental model.
For example, with reference counting you can convert a shared reference to a unique reference when you can verify that the count is exactly 1. But converting a `&T` to a `&mut T` is always instantaneous UB, no exceptions. It doesn’t matter if it’s actually the only reference.
Borrows are also orthogonal to dropping/destructors. Borrows can extend the lifetime of a value for convenience reasons, but it is not a general rule that values are dropped when the last reference is gone.
There is a reference count in the algorithm in the sense that the algorithm must keep track of the number of live shared borrows derived from a unique borrow or owned value so that it knows when it becomes legal to mutate it again (i.e. to know when that number goes to zero) or if there are still outstanding ones.
Borrow checking is necessary for dropping and destructors in the sense that without borrows we could drop an owned value while we still have references to it and get a use after free. RAII in rust only works safely because we have the borrow checker reference counting for us to tell us when it's again safe to mutate (including drop) owned values.
Yes, rust doesn't support going from an &T to an &mut T, but it does support going from an <currently immutable reference to T> to a <mutable reference to T> in the shape of going from an &mut T which is currently immutably borrowed to an &mut T which is not borrowed. It can do this because it keeps track of how many shared references there are derived from the mutable reference.
You're right that it's possible to leak the owning reference so that the object isn't freed when the last reference is gone - but it's possible to leak a reference in a runtime reference counted language too.
But yes, it's not a perfect analogy, merely a good one. The implementation most likely doesn't just keep a count of references, for instance, but a set of them, to enable better diagnostics and more efficient computation.
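To make that concrete, here's what the "count" looks like in practice (a small sketch):

```rust
fn main() {
    let mut s = String::from("hi");

    let a = &s;          // shared borrows derived from the owner: 2
    let b = &s;
    println!("{a} {b}"); // last use of a and b: the "count" drops to 0

    s.push('!');         // legal again: no outstanding shared borrows

    // Unlike real reference counting, though, you can never launder a
    // &String into a &mut String, even if it's provably the only one:
    // let m: &mut String = unsafe { &mut *(a as *const _ as *mut _) }; // UB
    let m = &mut s;      // the sanctioned way: take a fresh unique borrow
    m.push('?');
    println!("{s}");
}
```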
> More than anything, the Rust community is hyper-fixated on stability and correctness. It is very much the antithesis to “move fast and break things”.
Cargo always picks the newest version of a dependency, even if that version is incompatible with the version of Rust you have installed.
You're like "build this please", and it's like "hey I helpfully upgraded this module! oh and you can't build this at all, your compiler is too old granddad"
They finally addressed this bug -- optionally (the default is still to break the build at the slightest provocation) -- in January this year (which of course, requires you upgrade your compiler again to at least that)
What a bunch of shiny-shiny chasing idiots with a brittle build system. It's designed to ratchet forward your dependencies and throw new bugs and less-well-tested code at you. That's absolutely exhausting. I'm not your guinea pig, I want to build reliable, working systems.
There are additionally versioning standards for shared objects, so you can have two incompatible versions of a library live side-by-side on a system, and binaries can link to the one they're compatible with.
> Cargo always picks the newest version of a dependency, even if that version is incompatible with the version of Rust you have installed.
> PKG_CHECK_MODULES([libfoo], [libfoo >= 1.2.3])
This also picks the newest version that might be incompatible with your compiler, if the newer version uses a newer language standard.
> You're like "build this please", and it's like "hey I helpfully upgraded this module! oh and you can't build this at all, your compiler is too old granddad"
Also possible in the case of your example.
> What a bunch of shiny-shiny chasing idiots with a brittle build system.
Autoconf as an example of a non-brittle build system? Laughable at best.
This is whataboutism to deflect from Rust's basic ethos being to pull in the latest shiny-shiny.
> This also picks the newest version that might be incompatible with your compiler, if the newer version uses a newer language standard.
It doesn't, it just verifies what the user has already installed (with apt/yum/dnf) is suitable. It certainly doesn't connect to the network and go looking for trouble.
The onus is on library authors to write standard-agnostic, compiler-agnostic headers, and that's what they do:
> This is whataboutism to deflect from Rust's basic ethos being to pull in the latest shiny-shiny.
No. You set a bar for Cargo that the solution you picked does not reach either.
> It doesn't, it just verifies what the user has already installed (with apt/yum/dnf) is suitable.
There's no guarantee that that is compatible with your project though. You might be extra unlucky and have to bring in your own copy of an older version. Plus their dependencies.
Perfect example of the pile of flaming garbage that is C dependency "management". We haven't even mentioned cross-compiling! It multiplies all this C pain a hundredfold.
> The onus is on library authors to write standard-agnostic, compiler-agnostic headers, and that's what they do:
You're assuming that the used feature can be represented in older language standards. If it can't, you're forced to at least have that newer compiler on your system.
> [...] standard-agnostic, compiler-agnostic headers [...]
> For linking, shared objects have their [...]
Compiler-agnostic headers that get compiled to compiler-specific calling conventions. If I recall correctly, GCC basically dictates the ABI on Linux. Anyways, I digress.
> shared objects have their own versioning to allow backwards-incompatible versions to exist simultaneously (libfoo.so.1, libfoo.so.2).
Oooh, that one is fun. Now you have to hope that nothing was altered when that old version got built for that new distro. No feature flag changed, no glibc-introduced functional change.
> hey I helpfully upgraded this module! oh and you can't build this at all, your compiler is too old granddad
If we look at your initial example again, Cargo followed your project's build instructions exactly and unfortunately pulled in a package that is for some reason incompatible with your current compiler version. To fix this you have the ability to just specify an older version of the crate and carry on.
Looking at your C example, well, I described what you might have to do and how much manual effort that can be. Being forced to use a newer compiler can be very tedious. Be it due to bugs, stricter standards adherence or just the fact that you have to do it.
In the end, it's not a fair fight comparing dependency management between Rust and C. C loses by all reasonable metrics.
I listed a specific thing -- that Rust's ecosystem grinds people towards newness, even if it goes so far as to actually break things. It's baked into the design.
I don't care that it's hypothetically possible for that to happen with C, I care that practically, I've never seen it happen.
Whereas, the single piece of software I build that uses Rust, _without changing anything_ (already built before, no source changes, no compiler changes, no system changes) -- cargo install goes off to the fucking internet, finds newer packages, downloads them, and tells me the software it could build last week can't be built any more. What. The. Fuck. Cargo, I didn't ask you to fuck up my shit - but you did it anyway. Make has never done that to me, nor has autoconf.
Show me a C environment that does that, and I'll advise you to throw it out the window and get something better.
There have been about 100 language versions of Rust in the past 10 years. There have been 7 language versions of C in the past 40. They are a world apart, and I far prefer the C world. C programmers see very little reason to adopt "newer" C language editions.
It's like a Python programmer, on a permanent rewrite treadmill because the Python team regularly abandon Python 3.<early version> and introduce Python 3.<new version> with new features that you can't use on earlier Python versions, asking how a Perl programmer copes. The Perl programmer reminds them that the one Perl binary supports and runs every version of Perl from 5.8 onwards, simultaneously, and the idea of making all the developers churn their code over and over again to keep up with latest versions is madness, the most important thing is to make sure old code keeps running without a single change, forever. The two people are simply on different planets.
> I don't care that it's hypothetically possible for that to happen with C, I care that practically, I've never seen it happen.
I don't think your anecdotal experience is enough to redeem the disarray that is C dependency management. It's nice to pretend though.
> and tells me the software it could build last week can't be built any more. What. The. Fuck. Cargo, I didn't ask you to fuck up my shit - but you did it anyway. Make has never done that to me, nor has autoconf.
If you didn't get my point in my previous comment, let me put it more frankly - it is your skill issue if you aren't pinning your crates to a specific version but depend on them remaining constant. This is not Cargo's fault.
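For what it's worth, pinning is one line in Cargo.toml (crate name made up):

```toml
[dependencies]
# "=" pins the exact version; Cargo will never float past it on its own.
some-crate = "=1.2.3"
```

And for the `cargo install` case specifically, `cargo install --locked` makes it respect the Cargo.lock shipped with the package instead of re-resolving dependencies.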
> Make has never done that to me, nor has autoconf.
Yeah, because they basically guarantee nothing, nor do they allow working around any of the potential issues I've already described.
But you do get to wait for the thousandth time for it to check the size of some types. All those checks are literal proof of how horrible the ecosystem is.
> There have been about 100 language versions of Rust in the past 10 years
There are actually four editions, and they're all backwards-compatible.
> C programmers see very little reason to adopt "newer" C language editions.
This doesn't "pick" anything; it only locates and validates the version specified and installed by the user. It does not go off and fetch newer versions.
I would use a newer version of C, and consider picking C++, if the choice was between C, C++, Ada, and Rust. (If pattern matching would be a large help, I might consider Rust).
For C++, there is vcpkg and Conan. While they are overall significantly or much worse options than what Rust offers according to many, in large part due to C++'s cruft and backwards compatibility, they do exist.
The way you've described both of those solutions demonstrates perfectly how C package management is an utter mess. You claim to be very familiar with the C ecosystem, yet you describe them based on their own descriptions. Not once have you seen them in use? Both of those are also (only) slightly younger than Rust, by the way.
So after all these decades there's maybe something vaguely tolerable that's also certainly less mature than what even Rust has. Congrats.
You might be mixing up who you are replying to. I never claimed that C and C++ package management are better than what Rust offers overall. In some regards, Cargo is much better than what is offered in C and C++. I wouldn't ascribe that to a mess, more to the difficulty of maintaining backwards compatibility and handling cruft. However, I know of people that have had significant trouble handling Rust. For instance, the inclusion of Rust in bcachefs, and the whole debacle of handling it in Debian.
They did not have an easy time including Rust software, as I read it. Maybe just initial woes, but I have also read other descriptions of distribution maintainers having trouble with integrating Rust software. Dynamic linking complaints? I have not looked into it.
It received a lot of attention and "visibility" because it caused a lot of pain to some people. I am befuddled why you would wrongly attempt to dismiss this undeniable counter-example.
Somebody is attempting to characterize the Rust community in general as being similar to other programming communities that value velocity over stability, such as the JS ecosystem and others.
I’m pointing out that incidents such as this are incredibly rare, and extremely controversial within the community, precisely because people care much more about stability than velocity.
Indeed, the design of the Rust language itself is in so many ways hyper-fixated on correctness and stability - it’s the entire raison d’etre of the language - and this is reflected in the culture.
Comparing with the JS ecosystem is very telling. Some early Rust developers come from the JS ecosystem (especially at Firefox), and Cargo takes inspiration from the JS ecosystem, like with lock files. But the JS ecosystem is a terrible baseline to compare with regarding stability. Comparing a language's stability with the JS ecosystem says very little. You should have picked a systems language to compare with.
And your post is itself a part of the Rust community, and it is itself an argument against what you claim in it. If you cannot or will not own up to the 1.80 time-crate debacle, or proactively mention it as a black mark that weighs on Rust's conscience, acknowledging that it will take time to rebuild trust and confidence in Rust's stability because of it, well, then your priorities, understood as the Rust community's priorities, are clear: they do not, in practice, lie with stability, safety, and security, nor with being forthcoming.
Yes, people are allowed to do stupid things, but other people are also allowed to call them out on those stupid things. Especially if they make other people's lives harder, like in this case.
Whether it’s “stupid” remains to be seen. I personally would not have made this choice at this point in time, but the way some people seem to consider “Rust program has a bug” to be newsworthy is… odd.
> but the way some people seem to consider “Rust program has a bug” to be newsworthy is… odd.
But the fact that program X was written in Rust is, on the other hand, newsworthy? And there is nothing odd about the first advertised property of the software being the fact that it was made in Rust?
Rewriting already perfectly working and feature-complete software and advertising your version as a superior replacement while neglecting to implement existing features that users relied on is pretty stupid, yes.
Replacing a program which implements these features and is a core foundation of the OS with one that doesn't, to mock people not using the latest language, is.
However, once software that has been rewritten only for the sake of being written in Rust starts affecting large distributions like Ubuntu, that's a different issue...
However, one could argue that Ubuntu picking up the brand-new Rust-based coreutils instead of the old one is a 2nd-order effect of "let's rewrite everything in Rust, whether it makes sense or not"
Nobody - least of all the authors of uutils - is forcing Ubuntu to adopt this change. I personally feel like it's a pretty radical step on Ubuntu's part, but they are free to make such choices, and you are free to not use Ubuntu if you believe it is detrimental.
There's no "however" here. Rewriting anything in Rust has no effect on anybody by itself.
With current GPU architectures, this seems unlikely. Like, you would need a ton of cells with almost perfectly aligned inputs before even the DMA bus roundtrip is worth it.
We’re talking at least hundreds of thousands of cells, depending on the calculation, or at least a number that will make the UI very sad long before you’ll see a slowdown from calculation.
There isn't always a DMA roundtrip; unified memory is a thing. But programming for the GPU is very awkward at a systems level. Even with unified memory, there is generally no real equivalent to virtual memory or mmap() so you have to shuffle your working set in and out of VRAM by hand anyway (i.e. backing and residency is managed explicitly, even with "sparse" allocation api's that might otherwise be expected to ease some of the work). Better GPU drivers may be enough to mitigate this, along with broad-based standardization of some current vendor-specific extensions (it's not clear that real HW changes are needed) but this creates a very real limitation in the scale of software (including the AI kind) you can realistically run on any given device.