Haven't read the book. I feel like this is effectively "Rust: The Hard Parts" as a topic. Mostly in that, from my admittedly limited understanding of Rust, its lifetimes, and shared memory, Redis's semantics are almost an anti-pattern in Rust.
I intend to look into this book as I find the topic interesting, but I do feel that it's tackling some of the more cumbersome areas of Rust. I'm also curious whether there will be an effort to be binary file compatible.
With the Redis licensing kerfuffle and the subsequent Valkey fork, I wonder about the other Redis-alike databases there are and will be. I think Rust is just a hard fit for this use case. But who knows.
I think Redis's design is probably easy because it's single-threaded. So probably not.
But wasn't the use you described the whole point of Rust getting a lot of investment from Mozilla in the first place? The browser had all those problems you describe, and they thought the language would make them easier.
"Fearless concurrency" yes. In Rust's model we can't make a lot of the easy mistakes. Safe Rust won't let you, for example, try to share an Rc<T> between two threads, because while Rc<T> is perfectly sound and has good performance locally, if it's possible for two concurrent operations to touch the object bad things may happen, so we can't ever (in Safe Rust) see such an object from more than one thread - in this case you should use the (more expensive on some hardware) Arc<T> if there might be more threads.
In safe Rust, understanding why all this works isn't your problem; it just does. Hardcore implementation work (say, you're making a new kind of mutual exclusion primitive) will need unsafe, and will also rely on a proper understanding of how (and whether) to enable use of your type this way.
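As a rough sketch of what that kind of implementation work involves (a toy example, not a production primitive): the unsafe impl Sync below is the author's promise that sharing the type across threads is sound, and every safe user of the type relies on that promise being correct.

    use std::cell::UnsafeCell;
    use std::sync::atomic::{AtomicBool, Ordering};

    // A toy spinlock. Safe code can use it freely; the soundness argument
    // lives entirely in this module.
    pub struct SpinLock<T> {
        locked: AtomicBool,
        value: UnsafeCell<T>,
    }

    // SAFETY (the promise this sketch makes): `value` is only ever touched
    // while `locked` is held, so shared access across threads is race-free.
    unsafe impl<T: Send> Sync for SpinLock<T> {}

    impl<T> SpinLock<T> {
        pub fn new(value: T) -> Self {
            Self { locked: AtomicBool::new(false), value: UnsafeCell::new(value) }
        }

        pub fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
            // Spin until we win the flag.
            while self.locked.swap(true, Ordering::Acquire) {
                std::hint::spin_loop();
            }
            // SAFETY: we hold the lock, so no other thread can reach `value`.
            let result = f(unsafe { &mut *self.value.get() });
            self.locked.store(false, Ordering::Release);
            result
        }
    }

Note the `T: Send` bound on the Sync impl; it mirrors the bound std's own Mutex uses, since the lock may hand the value to whichever thread acquires it.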
No, it's not a problem at all. There are already several high-performance databases written in Rust.
Lifetimes are mainly a learning barrier for new users, and affect which internal API designs are more convenient, but they're not a constraint on the types of applications you can write.
Rust strongly guides users towards immutability and guards against uncontrolled shared mutability, but you can use shared mutable memory if you want. In single-threaded programs that's trivial. In multi-threaded programs shared mutability is inherently difficult for reasons beyond Rust, and Rust's safeguards actually make the problem much more tractable.
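A small sketch of both cases (illustrative only): Rc<RefCell<_>> for shared mutation within one thread, Arc<Mutex<_>> once threads are involved.

    use std::cell::RefCell;
    use std::rc::Rc;
    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Single-threaded shared mutability: Rc + RefCell, no locking at all.
        let local = Rc::new(RefCell::new(Vec::new()));
        let alias = Rc::clone(&local);
        alias.borrow_mut().push(1);
        local.borrow_mut().push(2);
        assert_eq!(*local.borrow(), vec![1, 2]);

        // Multi-threaded shared mutability: Arc + Mutex. Every access goes
        // through the lock, so data races are ruled out at compile time.
        let shared = Arc::new(Mutex::new(0u64));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let shared = Arc::clone(&shared);
                thread::spawn(move || *shared.lock().unwrap() += 1)
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(*shared.lock().unwrap(), 4);
    }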
Writing a Redis clone was one of the first projects I did when trying to learn Rust ~4-5 years ago. On top of that I opted for an async version with a database-per-core design.
Throughout the whole project I never once ran into lifetime issues, and I only gave up once I had to think about cross-thread transactions, which had nothing to do with Rust the language.
Well, I don't exactly know your scope, but I was doing stuff the other day with sqlite, using it on a single thread while having a multi-threaded webserver. My solution was to have a wrapper that's "multithreaded": it just accepts a `FnMut(Transaction) -> Result<T, DbError> + Send + Sync + 'static` and queues them internally so they run one at a time. Since all the lambdas are "hardcoded" (like the rest of my source code), the `'static` requirement isn't a big deal.
My main problem with it is that I ended up serializing/deserializing T across the FnMut boundary. But given that the FnMut knows what T is, it seems like there should be some way to use unsafe and pointers to remove the serialization if it ever becomes a performance problem.
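A rough sketch of that wrapper pattern (all names here, like DbHandle, Transaction, and DbError, are hypothetical placeholders, and FnOnce is used instead of FnMut for simplicity). One way to get a typed result back without serialization, and without unsafe, is to have each closure capture its own typed channel sender, so T never crosses the type-erased queue:

    use std::sync::mpsc::{channel, Sender};
    use std::thread;

    #[derive(Debug)]
    struct DbError;
    struct Transaction; // stand-in for the real sqlite transaction type

    // The type-erased job that goes through the queue.
    type Job = Box<dyn FnOnce(&mut Transaction) + Send + 'static>;

    // The "multithreaded" wrapper: a queue feeding a single database thread.
    #[derive(Clone)]
    struct DbHandle {
        tx: Sender<Job>,
    }

    impl DbHandle {
        fn spawn() -> Self {
            let (tx, rx) = channel::<Job>();
            thread::spawn(move || {
                let mut txn = Transaction; // in reality: open the connection here
                for job in rx {
                    job(&mut txn); // jobs run one at a time on this thread
                }
            });
            DbHandle { tx }
        }

        // The closure captures its own typed result sender, so the queue only
        // ever sees a Box<dyn FnOnce>; T itself is never serialized.
        fn run<T, F>(&self, f: F) -> Result<T, DbError>
        where
            T: Send + 'static,
            F: FnOnce(&mut Transaction) -> Result<T, DbError> + Send + 'static,
        {
            let (result_tx, result_rx) = channel();
            self.tx
                .send(Box::new(move |txn: &mut Transaction| {
                    let _ = result_tx.send(f(txn));
                }))
                .map_err(|_| DbError)?;
            result_rx.recv().map_err(|_| DbError)?
        }
    }

    fn main() {
        let db = DbHandle::spawn();
        let n = db.run(|_txn: &mut Transaction| Ok(42i64)).unwrap();
        println!("{n}");
    }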