It's surprisingly slow. Switching files in the tab list has a noticeable delay. Typing latency is higher than in both Emacs (with lsp-mode active) and my web browser. It also uses approximately 60 MiB more memory than my Emacs. It starts fast, though!
I wouldn't complain about this stuff if it weren't for their tagline being 'it's fast', and yet they're losing to Emacs Lisp (not a language amenable to being very fast) running on a highly optimized C core.
I looked at their plugins; they're compiled to WASM and run in some VM. Maybe that's part of it?
How did you manage to make Zed slower than Emacs? In my experience, latency in Zed sometimes even feels negative: everything is instant, whether it's editing, LSP commands, or file switching.
In contrast, all my attempts at Emacs ended with me dropping it due to latency issues (mostly because I work on remote machines).
> I looked at their plugins; they're compiled to WASM and run in some VM. Maybe that's part of it?
No. The ones I've looked at just set up stuff, like launching a language server. They shouldn't be involved in typing.
I think it's related to the GPU usage. It's easy to introduce delays when you do your own GPU compositing, and the OS will already be doing its own on top.
As for Emacs, IIRC they did some ugly things to update the UI directly instead of going through the normal event loop, which caused compatibility issues later on.
>How long are you realistically "waiting for the compiler and linker"? 3 seconds? You're not recompiling the whole project after all, just one source file typically
10 minutes.
>If I wanna use a debugger though, now that means a full recompile to build the project without optimizations, which probably takes many minutes.
I typically always compile with debug support; you can debug an optimized build as well. A full recompile takes up to 45 minutes.
Actually, the recompile time is the largest reason to use a debugger: with print debugging, every new print statement means another long build. Kinda; I like rr a lot anyway and would prefer it to print debugging.
10 minutes for linking? The only projects I've touched which have had those kinds of link times have been behemoths like Chromium. That must absolutely suck to work with.
Have you tried out the Mold linker? It might speed it up significantly.
> You can debug an optimized build as well.
Eh, not really. Working with a binary where all the variables are optimized out and all the operations are inlined is hell.
>10 minutes for linking? The only projects I've touched which have had those kinds of link times have been behemoths like Chromium. That must absolutely suck to work with.
I don't know the exact time per phase, but if you change a header file, that will of course hurt a lot more than recompiling a single translation unit.
> Eh, not really. Working with a binary where all the variables are optimized out and all the operations are inlined is hell.
Yeah, but sometimes that's life: reading the assembly and whatnot to figure things out.
>That must absolutely suck to work with.
Well, you know, I also get to work on something approximately as exciting as Chromium.
> I don't know the exact time per phase, but if you change a header file, that will of course hurt a lot more than recompiling a single translation unit.
Yeah, which is why I was talking about source files. I was surprised that changing 1 source file (meaning re-compiling that one source file and then re-linking the project) takes 10 minutes. If you're changing header files then yeah it's gonna take longer.
FWIW, I've heard from people who know this stuff that linking is actually super slow for us :). I also wanted to try out mold, but I couldn't manage to get it to work.
Ownership and lifetimes are arguably not about Rust, but about how we construct programs today. Big, interdependent object graphs are not a good way of constructing programs in Rust or C++.
You don't need to learn how git works internally to be able to use it. You do need to know a lot about filesystems to use them, though: folders, files, symbolic links, copy, cut, paste, how folders can exist on different devices, etc. There's just a tonne of assumed knowledge there, and it's very obvious when you meet someone who doesn't have it (regular people often don't have all of it).
Subversion also isn't something humming along invisibly in the background; it has its own quirks that you need to learn, or you'll get stung.
Yes, Rust is better. Implicit numeric conversion is terrible. However, don't use atoi if you're writing C++ :-). The STL has conversion functions that will throw, so that's a separate problem.
The numeric conversion functions in the STL are terrible. They will happily accept strings with non-numeric characters in them: they will convert "123abc" to 123 without giving an error. The std::sto* functions will also ignore leading whitespace.
Yes, you can ask the std::sto* functions for the position where they stopped because of invalid characters and see if that position is the end of the string, but that is much more complex than should be needed for something like that.
These functions don't convert a string to a number, they try to extract a number from a string. I would argue that most of the time, that's not what you want. Or at least, most of the time it's not what I need.
atoi has the same problem of course, but even worse.
std::from_chars will still accept "123abc". You have to manually check if all parts of the string have been consumed. On the other hand, " 123" is not accepted, because it starts with an invalid character, so the behaviour isn't "take the first acceptable number and parse that" either.
To get the equivalent of Rust's
    if let Ok(x) = input.parse::<i32>() {
        println!("You entered {x}");
    } else {
        eprintln!("You did not enter a number");
    }
you need something like:
    // std::from_chars lives in <charconv>
    int x{};
    auto [ptr, ec] = std::from_chars(input.data(), input.data() + input.size(), x);
    if (ec == std::errc() && ptr == input.data() + input.size()) {
        std::cout << "You entered " << x << std::endl;
    } else {
        std::cerr << "You did not enter a valid number" << std::endl;
    }
I find the choice to always require a start and an end position, and not to provide a variant that simply passes or fails, quite baffling. In C++26 they also added an automatic boolean conversion for from_chars' return type to indicate success, which considers "only consumed half the input from the start" to be a success.
Maybe I'm weird for mostly writing code that does straightforward input-to-number conversions rather than partial string parsers, but I have yet to see a good C++ alternative to Rust's parse().
I guess there's a place for functions that extract or parse partially, but IMO there is a real need for an actual conversion function like Rust's parse() or Python's int() or float(). I think it's a real shame C++ (and C as well) only offers the first and not the second.
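To illustrate, Rust's parse() rejects anything that isn't exactly a number; this little sketch runs as-is:

    fn main() {
        assert!("123".parse::<i32>().is_ok());     // plain number: Ok(123)
        assert!("123abc".parse::<i32>().is_err()); // trailing garbage is rejected
        assert!(" 123".parse::<i32>().is_err());   // leading whitespace is rejected too
    }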
Implicit conversion is bad if it alters values (e.g. by rounding). Promotion from one number representation to another isn't bad as long as it preserves values. This is trickier than it might seem, but Virgil has a good take on it (https://github.com/titzer/virgil/blob/master/doc/tutorial/Nu...). Essentially, it only implicitly promotes values in ways that don't lose numeric information and thus are always reversible.
In the example, Virgil won't let you pass "1000.00" to an integer argument, but will let you pass "100" to the double argument.
Aside from the obvious bit size changes (e.g. i8 -> i16 -> i32 -> i64, or f32 -> f64), there is no "hierarchy" of types. Not all ints are representable as floats: u64 can represent up to 2^64 - 1, but f64 can only represent integers exactly up to 2^53. This issue may be subtle, but Rust is all about preventing subtle footguns, so it does not let you automatically "promote" integers to floats; you must be explicit (though usually all you need is an `as f64`).
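Here's a quick runnable sketch of that 2^53 cliff:

    fn main() {
        let n: u64 = (1 << 53) + 1; // 9007199254740993 has no exact f64 representation
        let f = n as f64;           // rounds to 9007199254740992.0
        assert_ne!(f as u64, n);    // the round trip does not recover n
    }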
> Aside from the obvious bit size changes (e.g. i8 -> i16 -> i32 -> i64, or f32 -> f64), there is no "hierarchy" of types.
Depends on what you want from such a hierarchy, of course, but there is for example an injection i32 -> f64 (and if you consider the i32 operations to be undefined on overflow, then it’s also a homomorphism wrt addition and multiplication). For a more general view, various Schemes’ takes on the “numeric tower” are informative.
Virgil allows the maximum amount of implicit int->float injections that don't change values and allows casts (in both directions) that check if rounding altered a value. It thus guarantees that promotions and (successful) casts can't alter program behavior. Given any number in representation R, promotion or casting to type N and then casting back to R will return the same value. Even for NaNs with payloads (which can happen with float <-> double).
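Rust's `as` casts saturate rather than check, but Virgil's round-trip guarantee is easy to sketch by hand; checked_f64_to_i32 here is a hypothetical helper, not a standard API:

    // Cast f64 -> i32 only if the value survives the round trip unchanged.
    fn checked_f64_to_i32(x: f64) -> Option<i32> {
        let y = x as i32; // saturating cast
        if y as f64 == x { Some(y) } else { None }
    }

    fn main() {
        assert_eq!(checked_f64_to_i32(100.0), Some(100));
        assert_eq!(checked_f64_to_i32(1000.5), None); // rounding would alter the value
    }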
I disagree: when you use floats, you implicitly accept the precision loss and rounding that come with floats.
IMHO, int-to-float implicit conversion is fine as long as float-to-int conversion is explicit.
If the conversion will always succeed (for example an 8-bit unsigned integer to a 32-bit unsigned integer), the From trait would be used to allow the conversion to feel implicit.
If the conversion could fail (for example a 32-bit unsigned integer to an 8-bit unsigned integer), the TryFrom trait would be used so that an appropriate error could be returned in the Result.
These traits prevent errors when converting between types and clearly mark the conversions that might fail, since those return a Result instead of the output type.
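A minimal runnable sketch of both traits in action, using the u8/u32 pairs from the examples above:

    use std::convert::TryFrom; // already in the prelude on edition 2021+

    fn main() {
        // Infallible: every u8 fits in a u32, so From is implemented.
        let widened = u32::from(200u8);
        assert_eq!(widened, 200);

        // Fallible: 300 does not fit in a u8, so TryFrom returns a Result.
        let narrowed = u8::try_from(300u32);
        assert!(narrowed.is_err());
    }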
It's a bit annoying that there is a standard desugaring of all of this, but there's no actual way of implementing this sugaring yourself. I guess you could have a token macro (or whatever they're called) at the top scope and do it yourself?
Anyway, my conclusion is that in order to learn Rust you can't use any of the syntactic sugar the language provides; those are 'expert features'. There's no way I'd be able to figure out why this happens on my own. Does rust-analyzer provide a way of desugaring these constructs?
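For what it's worth, if the sugar in question is the ? operator, the standard desugaring is roughly the following (a sketch; the real expansion goes through the Try trait, which is still unstable):

    // Inside a function returning Result<T, E>, `expr?` expands to roughly:
    match expr {
        Ok(val) => val,
        Err(err) => return Err(From::from(err)),
    }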