
Just speculating: Rust can hand more hints to the code generator. E.g., the compiler doesn't have to worry about aliasing as much as it does with C pointers. See https://en.wikipedia.org/wiki/Aliasing_(computing)#Conflicts...
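
To make that concrete, here's a minimal sketch of my own (not from the article): because `dst` is `&mut`, the compiler knows it cannot overlap `src` and can vectorize freely; the C equivalent needs `restrict` to promise the same thing.

    // `&mut` guarantees exclusive access, so the optimizer knows
    // `dst` and `src` never overlap -- no `restrict` needed.
    fn add_into(dst: &mut [f32], src: &[f32]) {
        for (d, s) in dst.iter_mut().zip(src) {
            *d += *s;
        }
    }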


This makes a lot of sense to me, though I don't know the official answer, so I'm just guessing along too.

Linked from the article is another post on how they used c2rust to do the initial translation.

https://trifectatech.org/blog/translating-bzip2-with-c2rust/

Relevant here, it points out places where the code isn't optimal because the C version offers no guarantees about the ranges of variables, etc.

It also points out that a lot of C code just uses 'int' even when the value will never be very big.

But with a more precise type, the Rust compiler can decide to do something else if it will perform better.
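
As a hedged illustration of that point (my example, not one from the article): indexing a 256-entry table with a `u8` lets the compiler prove the access is always in bounds, where an `int` index carried over from C would need a check.

    // A 256-entry lookup table indexed by a byte. Because the index
    // is a u8, `byte as usize` is provably < 256, so the compiler can
    // elide the bounds check a C-style `int` index would force.
    fn lookup(table: &[u32; 256], byte: u8) -> u32 {
        table[byte as usize]
    }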

So I suspect your idea that it unlocks better optimizations through more knowledge is probably the right answer.


Ergonomics of using the right data structures and algorithms can also play a big role. In C, everything beyond a basic array is too much hassle.


Yeah, that was Bryan Cantrill's realization when, for the sake of learning, he rewrote a part of DTrace in Rust and was shocked to see his naive reimplementation run significantly faster than his original code. The answer boiled down to "I used a BTreeMap in Rust because it's in std".
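
For anyone who hasn't used it, this is roughly all it takes (a toy sketch, not Cantrill's actual code): an ordered map is one `use` away, where C would push you toward a hand-rolled tree or a linear scan.

    use std::collections::BTreeMap;

    fn main() {
        let mut counts: BTreeMap<&str, u32> = BTreeMap::new();
        for word in ["probe", "trace", "probe", "fire"] {
            // `entry` inserts a zero on first sight, then increments.
            *counts.entry(word).or_insert(0) += 1;
        }
        // Iteration comes out in sorted key order for free.
        for (word, n) in &counts {
            println!("{word}: {n}");
        }
    }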


Hmm, I wonder how it would compare then with clang+Linux, clang+STL, or HotSpot+J2EE.

It reminds me a bit of the days when Perl programs would often outrun native C/C++ for common tasks, because ultimately Perl had the most efficient string-processing libraries baked into the language built-ins.

How is space efficiency? Last I checked, because of big libraries and the ease of adding them to projects, a lot of Rust binaries tend to be much larger than their traditional counterparts. How might this impact overall system performance if the trade-off is made en masse? (Even if more RAM is added to counteract the loss of VM page cache, does it also start to hurt locality and cache utilization?)

I'd be curious how something like Redox benchmarks against traditional Linux for real-world workloads and interactivity measures.


For whatever it's worth, details of my findings are in [0].

[0] https://bcantrill.dtrace.org/2018/09/28/the-relative-perform...


Pretty cool! In isolation it looks awesome! I'm still a little curious about the impact of increased executable image size, especially in a complete system.

If all the binaries are big, does it start to crowd out cache space? Does static linking still make sense for full systems?


The kernel only loads the parts of the binary you actually run, and it can drop from the disk cache the parts that haven't been run in a while.

So rather than the absolute size of the binary, you should worry about how much of it is actually in the 'active set'.
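
If you want to see this for yourself, here's a rough Linux-only sketch (my assumptions: the `libc` crate, a 4 KiB page size, and `/bin/ls` as a handy binary): map a file, touch one byte, and ask the kernel which pages are resident.

    use std::fs::File;
    use std::os::unix::io::AsRawFd;

    fn main() -> std::io::Result<()> {
        let file = File::open("/bin/ls")?;
        let len = file.metadata()?.len() as usize;
        unsafe {
            let map = libc::mmap(
                std::ptr::null_mut(),
                len,
                libc::PROT_READ,
                libc::MAP_PRIVATE,
                file.as_raw_fd(),
                0,
            );
            assert_ne!(map, libc::MAP_FAILED);

            // Touch a single byte: the kernel faults in that page
            // (plus whatever readahead it chooses), not the whole file.
            std::ptr::read_volatile(map as *const u8);

            let page = 4096; // assumption; query sysconf in real code
            let npages = (len + page - 1) / page;
            let mut resident = vec![0u8; npages];
            libc::mincore(map, len, resident.as_mut_ptr());
            let in_core = resident.iter().filter(|&&b| b & 1 == 1).count();
            println!("{in_core} of {npages} pages resident");

            libc::munmap(map, len);
        }
        Ok(())
    }

Right after the touch, only a small fraction of the pages should report resident, which is the 'active set' idea above.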


And even crates outside the standard library are really easy to use in Rust.



