Why can’t something be faster than C? If a language is able to convey more information to a backend like LLVM, the backend could use that to produce more optimised code than what it could do for C.
For example, if the language can guarantee that, for any two pointers, the two pointers will never overlap, that enables the backend to optimise further. In C this requires an explicit restrict keyword. In Rust, it's the default.
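To make the aliasing point concrete, here is a minimal C sketch (add_scaled is a made-up name for illustration) of the kind of loop where the guarantee matters:

```c
#include <stddef.h>

/* With restrict, the compiler may assume dst and src never overlap,
 * so it can keep loaded values in registers and autovectorize the
 * loop. Without restrict, it must assume a store through dst could
 * change *src and reload on every iteration. In Rust, the exclusivity
 * of &mut gives the backend this guarantee by default. */
void add_scaled(float *restrict dst, const float *restrict src,
                float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] += k * src[i];
}
```

In idiomatic C, most functions taking two pointers omit restrict, so the backend has to assume the worst.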
grep (C) is about 5-10x slower than ripgrep (Rust). That’s why ripgrep is used to execute all searches in VS Code and not grep.
Or a different tack. If you wrote a program that needed to sort data, the Rust version would probably be faster thanks to the standard library sort being the fastest, across languages (https://github.com/rust-lang/rust/pull/124032). Again, faster than C.
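One structural handicap on the C side of the sorting comparison: qsort takes its comparator as an opaque function pointer, called through indirection on every comparison, whereas a generic sort in Rust or C++ is monomorphized and the comparator inlines away. A minimal sketch:

```c
#include <stdlib.h>

/* qsort can't see through this function pointer at compile time,
 * so every comparison pays a call through indirection. A templated
 * or monomorphized sort specializes the whole sort routine for the
 * comparator and inlines it. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y); /* avoids overflow of x - y */
}
```

You can hand-write a specialized sort in C to get the inlining back, but then you're writing a sort per element type.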
Happy to give more examples if you’re interested.
There’s nothing special about C that entitles it to the crown of “nothing faster”. This would have made sense in 2005, not 2025.
First, I would say that "ripgrep is generally faster than GNU grep" is a true statement. But sometimes GNU grep is faster than ripgrep and in many cases, performance is comparable or only a "little" slower than ripgrep.
Secondly, VS Code using ripgrep because of its speed is only one piece of the picture. Licensing was also a major consideration. There is an issue about this where they originally considered ripgrep (and ag if I recall correctly), but I'm on mobile so I don't have the link handy.
The kind of code you can write in rust can indeed be faster than C, but someone will wax poetic about how anything is possible in C and they would be valid.
The major reason that Rust can be faster than C, though, is that due to the way the compiler is constructed, you can lean on threading idiomatically. The same can be true for Go: coroutines vs no coroutines is, in some cases, going to be faster for the use case.
You can write these things to be the same speed or even faster in C, but you won’t, because it’s hard and you will introduce more bugs per KLOC in C with concurrency vs Go or Rust.
> but someone will wax poetic about how anything is possible in C and they would be valid.
Not at all would that be valid.
C has a semantic model which was close to how early CPUs worked, but a lot has changed since. It's more like CPUs deliberately expose an API so that C programmers could feel at home, but stuff like SIMD and the like is non-existent in C besides as compiler extensions. But even just calling conventions, the stack, etc are all stuff you have no real control over in the C language, and a more optimal version of your code might want to do so. Sure, the compiler might be sufficiently smart, but then it might as well convert my Python script to that ultra-efficient machine code, right?
So no, you simply can't write everything in C, something like simd-json is just not possible. Can you put inline assembly into C? Yeah, but I can also call inline assembly from Scratch and JS, that's not C at all.
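A minimal sketch of what "SIMD as a compiler extension" looks like in practice, using the GCC/Clang vector_size attribute (the type name v4f is made up for this example; none of this is ISO C):

```c
/* GCC/Clang extension, not standard C: a 16-byte vector of 4 floats.
 * Portable ISO C has no way to express this type at all. */
typedef float v4f __attribute__((vector_size(16)));

/* One vector add; on targets with SIMD units this compiles to a
 * single packed instruction (e.g. addps on x86-64). */
v4f add4(v4f a, v4f b) {
    return a + b;
}
```

This is exactly the point: the moment you reach for SIMD, you've left the C language and entered compiler-specific territory.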
Also, Go is not even playing in the same ballpark as C/C++/Rust.
If you don't count manual SIMD intrinsics or inline assembly as C, then Rust and FORTRAN can be faster than C.
This is mainly thanks to having pointer aliasing guarantees that C doesn't have. They can get autovectorization optimizations where C's semantics get in the way.
Of course many things can be faster than C, because C is very far from modern hardware. If you compile with optimisation flags, the generated machine code looks nothing like what you programmed in C.
It is quite easy for C++ and Rust to both be faster than C in things larger than toy projects. C is hardly a paragon of efficiency, and the language makes useful things very hard to do efficiently.
You can contort C to trick it into being fast[1], but it quickly becomes an unmaintainable nightmare so almost nobody does.
1: e.g., correct use of restrict, manually creating move semantics, manually implementing small string optimizations, etc.
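For illustration, a minimal sketch of a hand-rolled small string optimization in C (the sstr type and function names are made up for this example; a real implementation would also handle copying and growth):

```c
#include <stdlib.h>
#include <string.h>

#define SSTR_INLINE_CAP 15

/* Strings up to 15 bytes live directly in the struct (no malloc);
 * longer strings spill to the heap. C++ std::string and Rust crates
 * like smallstr do this for you; in C you maintain it by hand. */
typedef struct {
    size_t len;
    union {
        char buf[SSTR_INLINE_CAP + 1]; /* inline storage + NUL */
        char *heap;                    /* heap storage for long strings */
    } s;
} sstr;

sstr sstr_from(const char *src) {
    sstr r;
    r.len = strlen(src);
    if (r.len <= SSTR_INLINE_CAP) {
        memcpy(r.s.buf, src, r.len + 1);
    } else {
        r.s.heap = malloc(r.len + 1);
        memcpy(r.s.heap, src, r.len + 1);
    }
    return r;
}

const char *sstr_cstr(const sstr *s) {
    return s->len <= SSTR_INLINE_CAP ? s->s.buf : s->s.heap;
}

void sstr_free(sstr *s) {
    if (s->len > SSTR_INLINE_CAP)
        free(s->s.heap);
}
```

Every call site now has to remember which representation is live and to call sstr_free correctly, which is precisely the maintenance burden the parent comment is pointing at.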
Fortran has been faster than C, because C allows pointer aliasing, which prevents optimizations. For decades this was why, for some applications, Fortran was simply faster.
And it's not a "sufficiently smart compiler" problem either: recovering those guarantees automatically would, in the general case, require "halting problem" levels of unrealistic smartness.
So no, C is inherently slower than some other languages.
We were presumably talking about an ideal massless space [Minkowski] in which the speed of light in a vacuum is considered -- that is what c is defined as.
Besides the famous "C is not a low-level language" blog post... I don't even get what you are thinking. C is not even the performance queen for large programs (the de facto standard today is C++ for good reasons), let alone for tiny ultra-hot loops like codecs and stuff, which are all hand-written assembly.
It's not even hard to beat C with something like Rust or C++, because you can properly do high level optimizations as the language is expressive enough for that.
I’m fine with it if they decrease the subscription fee to a quarter. Nobody I know has 4 TVs in their house. Charging for “4 screens” while enforcing this new policy is a simple cue to unsub Netflix for good.
As someone who likes to keep in check what's going on back home while being permanently away from it, I HATE how Twitter trends are hijacked by political parties to shit on the other parties. Their trends are pure dogshit and nothing more than memes. There is no value I get from the trending section. Anything important is lost in the noise. I tried to switch to the Worldwide trends, which I remember Twitter used to have, but they got rid of it for some reason.
Originally, the term was really just opposed to batch processing, in connection with systems like Whirlwind, SAGE, and the DEC PDP-1 (the first commercial real-time system), and is tightly connected to the idea of interactive computing. (Another early real-time system outside of the MIT tech path was MIDSAC, 1953.)
Compare Digital founder Ken Olsen's use of the term in his oral history, "The original computing was based on the way people had done computations before. You'd collect all the data, bring it together, process it and send the answers back. The idea of processing it, real time, took a long time to develop. In the world of commercial processing, it's just in the last few years that batch processing has started to disappear. The replacement for it is now called transactional processing, where if you make a transaction with a bank, it is instantly taken care of." (https://www.computerhistory.org/collections/catalog/10263035...)
There are many applications of this, each coming with its own set of implications. E.g., if you want a smoothly running interactive program on an early PDP computer, you want program paths of roughly equal runtime. Which may mean, in turn, that you would want to opt out of no-operation paths as late as possible rather than as early as possible, thereby stabilizing run-time. (E.g., we perform the calculations anyway, but apply the result only under a certain condition.) Or it may be about complex processes, where it means that we will fulfil a contract within a guaranteed span of time to facilitate cooperation of any kind (e.g., what Olsen calls transactional processing, or matching sampling rates with digital computers as a replacement for analog lab computers). Or it may be bound to a particular domain where stale data isn't of any use (e.g., compare Whirlwind's origin in a digital flight simulator). Or it may be just about a system being able to respond to input at all.
These C-hating articles are getting out of hand on HN. I see at least one every week on the front page. If you don't like C, STFU, pick up whatever FOTM BS you like, GTFO of here, and leave us "normies" alone who have to get actual stuff done that absolutely requires C.