
Because, by definition, it’s not a memory-safe TLS stack at that point. Security is only as strong as its weakest link. If critical components aren’t memory safe, we don’t usually call the whole thing memory safe, or claim it’s in a memory-safe language, without clear qualifiers.

The detractors are talking about how they’re marketing or describing it. They want the memory safe and Rust labels to only be used for memory safe and purely-Rust programs. That’s fair.

Outside the marketing, the stack is good work. I’m grateful for all the TLS teams do to keep us safer.




I am switching to Zig after writing Rust professionally for 5+ years, but this take doesn’t make any sense: having a small number of unsafe primitives is not the same as having all of your code unsafe. Higher-level logic code in particular can have a lot of mistakes, and the low-level primitives will very likely be written by more experienced and careful people. This is the whole point of Rust, even if it’s questionable whether it reaches it. The title only says rustls beats the other libraries, which is objectively true, so I don’t see what is misleading here.


>this take doesn’t make any sense: having a small number of unsafe primitives is not the same as having all of your code unsafe

I've been arguing this for years. It makes the area you need to review more tightly much smaller, which in turn makes it way easier to find bugs in the first place. I sometimes wonder if unsafe was the right choice of keyword, because to people who don't know the language it conveys the sense that Rust doesn't help with memory safety at all.

I've written a bunch of Rust, and rarely needed to use unsafe. I'd say less than 0.1% of the lines written.

Aside from that, unsafe Rust still has a lot more safety precautions than standard C++. It doesn't even deactivate the borrow checker. [1]

[1] https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html
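
To make the "small area to review" point concrete, the typical shape looks something like this (an illustrative sketch, not code from rustls):

    // Sketch only: the unsafe part is one line; the rest is still fully
    // type- and borrow-checked by the compiler.
    fn first_byte(buf: &[u8]) -> Option<u8> {
        if buf.is_empty() {
            return None;
        }
        // SAFETY: we just checked buf is non-empty, so index 0 is in bounds.
        // Even inside the unsafe block the borrow checker still runs;
        // `unsafe` only unlocks a short list of extra operations.
        Some(unsafe { *buf.get_unchecked(0) })
    }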


In the past, safe vs. unsafe meant whether invariants were preserved in all executions of your code. Was your code type- and memory-safe by default in all situations? Was there a guarantee? If so, it was safe. If it could break that guarantee, or stepped outside the type system, it was “unsafe.”

Note: I don’t know enough about Rust to tell you how they should label it.

Another thing you might find interesting is external verification of unsafe modules. What you do is build static analyzers, verifiers, etc. that can prove the absence of entire categories of bugs, especially memory-safety bugs. It’s usually done for small amounts of code. You run those tools on the code that isn’t memory safe.
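
In Rust-land, that kind of external verification might look like a model-checking harness (e.g. Kani) run over the unsafe routine. A hypothetical sketch, not something any particular project is claimed to ship:

    // Hypothetical harness for a model checker such as Kani: it explores
    // every input allowed by the assumption and proves the unsafe indexing
    // below can never go out of bounds.
    fn sum_first_two(buf: &[u8]) -> u16 {
        assert!(buf.len() >= 2);
        // SAFETY: the assert guarantees indices 0 and 1 are in bounds.
        unsafe { *buf.get_unchecked(0) as u16 + *buf.get_unchecked(1) as u16 }
    }

    #[cfg(kani)]
    #[kani::proof]
    fn sum_first_two_never_faults() {
        let data: [u8; 8] = kani::any();
        let len: usize = kani::any();
        kani::assume(len >= 2 && len <= data.len());
        let _ = sum_first_two(&data[..len]);
    }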

Another technique is making a verified reference implementation that’s used to confirm the high-performance implementation. Their interfaces and structure are designed to match. Then, automated methods for equivalence checking verify the unsafe code matches the safe code in all observed cases. The equivalence checking might be formal, based on test generators, or both.
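
A lightweight version of this is differential testing: keep the slow, obviously-correct reference around and check the fast path against it on generated inputs. A sketch with made-up function names:

    // The reference is written for clarity; the "optimized" one is where
    // assembly/SIMD/unsafe would live. A test generator checks they agree.
    fn xor_bytes_reference(a: &[u8], b: &[u8]) -> Vec<u8> {
        a.iter().zip(b).map(|(x, y)| x ^ y).collect()
    }

    fn xor_bytes_optimized(a: &[u8], b: &[u8]) -> Vec<u8> {
        // Imagine a hand-vectorized or assembly-backed implementation here.
        a.iter().zip(b).map(|(x, y)| x ^ y).collect()
    }

    #[test]
    fn optimized_matches_reference() {
        // A real project would use a fuzzer or property-testing crate; a
        // deterministic generator keeps the sketch self-contained.
        for len in 0..256usize {
            let a: Vec<u8> = (0..len).map(|i| (i * 31 % 251) as u8).collect();
            let b: Vec<u8> = (0..len).map(|i| (i * 17 % 241) as u8).collect();
            assert_eq!(xor_bytes_reference(&a, &b), xor_bytes_optimized(&a, &b));
        }
    }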

You can also wrap the unsafe code in safe interfaces that force it to be used correctly. I imagine the Rust TLS stack does this to some degree. Projects like miTLS go further to enforce specific security properties during interactions between verified and unsafe code.
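
In Rust, that wrapping is usually a type whose constructor establishes the invariant the unsafe code relies on. Roughly (illustrative only, not from rustls):

    // The only way to get a NonEmpty is through `new`, which checks the
    // invariant, so the unsafe access in `last` can rely on it.
    pub struct NonEmpty<'a>(&'a [u8]);

    impl<'a> NonEmpty<'a> {
        pub fn new(buf: &'a [u8]) -> Option<Self> {
            if buf.is_empty() { None } else { Some(NonEmpty(buf)) }
        }

        pub fn last(&self) -> u8 {
            // SAFETY: `new` rejects empty slices, so len() - 1 is in bounds.
            unsafe { *self.0.get_unchecked(self.0.len() - 1) }
        }
    }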

Another risk to consider is abstraction gap attacks. If you mix languages or models, the behavior of one can make the other unsafe just because they work differently, especially in how the compiler structures or links them. This led to real vulnerabilities in Ada code that used C, caused by the interactions rather than the C code itself. Such integrations were previously checked by eye, but there’s now a field called secure compilation (or fully abstract compilation) trying to eliminate these integration vulnerabilities.
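
For a Rust/C mix, the gap typically sits at the FFI boundary: the types look fine on the Rust side, but the guarantee now rests on the foreign code honoring a contract the compiler can’t see. A hypothetical sketch (the C routine is made up):

    use std::os::raw::c_int;

    extern "C" {
        // Hypothetical C routine whose contract is to write at most `len`
        // bytes into `out`. Rust has no way to check that.
        fn fill_random(out: *mut u8, len: c_int) -> c_int;
    }

    fn random_bytes(len: usize) -> Vec<u8> {
        let mut buf = vec![0u8; len];
        // SAFETY: correctness rests entirely on the C side honoring its
        // contract; one off-by-one over there breaks memory safety here,
        // even though this Rust code never changes.
        let rc = unsafe { fill_random(buf.as_mut_ptr(), len as c_int) };
        assert_eq!(rc, 0, "fill_random failed");
        buf
    }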

Lastly, if the performance cost isn’t too bad, some sandboxed the unsafe code, with the interfaces checking communication in both directions. Techniques used include processes (seL4), segments (GEMSOS), capabilities (CHERI), and physical separation (FPGA coprocessors). It’s usually performance-prohibitive to separate crypto primitives like this, although coprocessors can have verified crypto and still be faster. (See Cryptol-generated VHDL.)
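
At the process-isolation end of that spectrum, the everyday userland shape looks roughly like this (the helper binary is made up; systems like seL4 enforce far stronger boundaries):

    use std::io::Write;
    use std::process::{Command, Stdio};

    // The risky component runs in its own process and only talks to us over
    // pipes, so a memory error there cannot corrupt our address space.
    fn run_sandboxed(input: &[u8]) -> std::io::Result<Vec<u8>> {
        let mut child = Command::new("./unsafe-helper") // hypothetical helper
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .spawn()?;
        child.stdin.as_mut().unwrap().write_all(input)?;
        let out = child.wait_with_output()?;
        // Validate what comes back in both directions before trusting it.
        if !out.status.success() || out.stdout.len() > 64 * 1024 {
            return Err(std::io::Error::new(
                std::io::ErrorKind::Other,
                "helper misbehaved",
            ));
        }
        Ok(out.stdout)
    }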


There’s no disagreement between us on the value of using mostly memory safe code. I’ve advocated it here for years.

I also promoted techniques to verify the “unsafe” portions by using different verification methods with some kind of secure linking to avoid abstraction gap attacks.

The detractors were complaining about changing the definition of memory-safe code. It was code in a language that was immune to classes of memory safety errors. If the code compiles, the errors probably can’t occur. A guarantee.

The new definition they’re using for this project includes core blocks written in a memory unsafe language that might also nullify the safety guarantees in the other code. When compiled, you don’t know if it will have memory errors or not. That contradicts what’s expected out of memory-safe code.

So, people were objecting to it being described as memory-safe Rust if it included blocks of memory-unsafe, non-Rust code. There are projects that write the core, performance-critical blocks in safe languages. There are also those working on making crypto safer, like Galois’ Cryptol or SPARK Skein. So, using the right terminology helps users know what they’re getting and lets reviewers do apples-to-apples comparisons.

For this one, they might say it’s “mostly safe Rust with performance blocks written in unsafe assembler for speed.” (Or whatever else is in there.) The high-security community has often talked like that. Rather than hurting perception, it makes suppliers more trustworthy and leaves users better educated about well-balanced security.


> Title only says rustls beats the other libraries which is objectively true so don’t see what is misleading here.

You are correct.

That said, communication has two parts: sending and receiving.

An application named “rustFoo” is automatically advertising for Rust, and a title like “RustFoo is faster than Foo” implies, for many readers, “rust is faster than <probablyC>”.





