> A programmer being careless will be careless with Rust "unsafe" too.
Programmers will be careless, sure, but you can't really use unsafe without going out of your way to. Like, no-one is going to write "unsafe { *arr.get_unchecked(index) }" instead of "arr[index]" when they're not thinking about it.
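To make that concrete, here's a minimal sketch of the difference (`arr` and `index` are just placeholders): the indexed form is bounds-checked and panics on an out-of-range index, while the `get_unchecked` form skips the check, is undefined behaviour if the index is out of bounds, and forces you to type `unsafe` to get at it:

```rust
fn main() {
    let arr = [10, 20, 30];
    let index = 2;

    // The default: bounds-checked. An out-of-range index panics
    // deterministically instead of reading arbitrary memory.
    let safe = arr[index];

    // The opt-in: no bounds check. Going out of range here is undefined
    // behaviour, and the `unsafe` block is a visible, greppable marker
    // that you deliberately took on that obligation.
    let unchecked = unsafe { *arr.get_unchecked(index) };

    assert_eq!(safe, unchecked);
    println!("arr[{index}] = {safe}");
}
```

The point is that the safe form is also the shortest and most obvious one, so carelessness defaults you into the checked path rather than out of it.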
> So you look at all the 300 unmaintained dependencies a typical Rust project pulls in via cargo and look at all the "unsafe" blocks to screen it?
No, of course not; I run "cargo geiger" and let the computer do it.
I think unmaintained dependencies are less likely, and easier to check, in the Rust world. Ultimately what defines the attack surface is the number of lines of code, not how they're packaged, and C's approach tends to push projects toward two bad options, or both at once. One is linking in giant do-everything frameworks: people will link to GLib or APR when they just wanted some string manipulation functions or a hash table, which means you then have to audit the whole framework to audit that program's dependencies, and while the framework might look well-maintained, that doesn't mean the part your program is using is. The other is reimplementing or copy-pasting common functions because they're not worth adding a dependency for, which is higher risk, and means that well-known bugs can keep reappearing because there's no central place to fix them once and for all. And C's limited dependency management means that people often resort to vendoring, so even if your dependency is being maintained, those bugfixes may not be making their way into your program.
> And this idea existed before with managed languages... safe Java in the browser and so on. It also sounded plausible, but was exaggerated in much the same way as the Rust story.
Java has quietly worked. It didn't succeed in the browser or on the open-source or consumer-facing desktop, for reasons that had nothing to do with safety (in some cases they had to do with the perception of safety), but backend processing and corporate internal apps are a lot safer than they used to be, without really having to change much.