Yeah, that was Bryan Cantrill's realization when, for the sake of learning, he rewrote a part of DTrace in Rust and was shocked to see his naive reimplementation run significantly faster than his original code. The answer boiled down to "I used a BTreeMap, because it's in Rust's std".
hmm... i wonder how it would then compare with clang+linux, clang+stl, or hotspot+j2ee.
reminds me a bit of the days when perl programs would often outrun native c/c++ for common tasks, because perl had highly efficient string-processing routines baked into its language built-ins.
how is space efficiency? last i checked, because of big libraries and the ease of adding them to projects, a lot of rust binaries tend to be much larger than their traditional counterparts. how might this impact overall system performance if that trade-off is made en masse? even if more ram is added to counteract the loss of vm page cache, does it also start to hurt locality and cache utilization?
i'd be curious how something like redox benchmarks against traditional linux for real world workloads and interactivity measures.
pretty cool! in isolation it looks awesome! i'm still a little curious about the impact of increased executable image size, especially in a complete system.
if all the binaries are big, does it start to crowd out cache space? does static linking make sense for full systems?