I think this is an interesting question and I'm not sure why it was downvoted.
One argument here is that the whole mental model of Rust is that yes, most things can be solved with off-the-shelf synchronization methods. What the Rust type system gives you is a way for library authors to write those methods and say "Yes, this hashmap is actually a concurrent hashmap, safely usable from multiple threads" - and in turn for these types to compose, and to see if the data stored in the hashmap is itself thread-safe. Then the application author doesn't need to worry about any of this. As long as the code compiles (and the library authors weren't going out of their way to lie about types and make the Rust compiler ignore it), you know there are no thread-safety issues and you can just treat synchronization as a problem for someone else to solve. That's the ideal - just like the average application shouldn't have its own crypto implementation, it shouldn't have its own concurrency implementation.
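To make that concrete, here's a minimal sketch (using only std types; the map contents are illustrative) of what "the types compose" means: `Arc<Mutex<HashMap<..>>>` is `Send + Sync` because each of its parts is, the compiler checks this transitively at the `thread::spawn` boundary, and the application author never has to reason about it:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Arc<Mutex<HashMap<..>>> is Send + Sync because its components are;
// the compiler verifies this transitively. If the map stored a
// non-Send type (e.g. Rc<T>), thread::spawn below would not compile.
fn shared_counter_map() -> Arc<Mutex<HashMap<String, u64>>> {
    Arc::new(Mutex::new(HashMap::new()))
}

fn main() {
    let map = shared_counter_map();
    let mut handles = Vec::new();
    for _ in 0..4 {
        let map = Arc::clone(&map);
        handles.push(thread::spawn(move || {
            // The lock is the library author's problem; we just call it.
            *map.lock().unwrap().entry("hits".to_string()).or_insert(0) += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    // All four increments are visible: the type system ruled out data races.
    assert_eq!(map.lock().unwrap()["hits"], 4);
}
```

The point isn't the mutex itself but that the thread-safety claim lives in the types, so misuse is a compile error rather than a runtime race.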
A simple case where you need a concurrent hashmap (at least conceptually) is writing a multi-threaded web server or similar. When a thread wakes up, it gets told that there's data on some particular connection. It then needs to look up that connection in a table and manipulate the request object associated with that connection. Any thread could be woken up for any incoming traffic, so this table needs to be shared across threads. You could do one thread per connection, and arguably more people should, but people very often find that they hit a scaling limit with that approach.
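A hypothetical sketch of that connection-table pattern, with invented `Request`/`on_readable` names and a single mutex standing in for whatever concurrent map a real server would use (e.g. a sharded one):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical per-connection request state.
#[derive(Default)]
struct Request {
    bytes_received: usize,
}

type ConnTable = Arc<Mutex<HashMap<u64, Request>>>;

// Any worker may be woken for any connection, so the table must be
// shared; a coarse Mutex stands in for a real concurrent map here.
fn on_readable(table: &ConnTable, conn_id: u64, n_bytes: usize) {
    let mut map = table.lock().unwrap();
    map.entry(conn_id).or_default().bytes_received += n_bytes;
}

fn main() {
    let table: ConnTable = Arc::new(Mutex::new(HashMap::new()));
    let mut workers = Vec::new();
    for worker in 0..4u64 {
        let table = Arc::clone(&table);
        workers.push(thread::spawn(move || {
            // Each worker handles an event for a connection it didn't open.
            on_readable(&table, worker % 2, 10);
        }));
    }
    for w in workers {
        w.join().unwrap();
    }
    let map = table.lock().unwrap();
    assert_eq!(map[&0].bytes_received + map[&1].bytes_received, 40);
}
```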
But yes, most of the time you should avoid shared mutable state. The Rust type system is designed around making it hard to accidentally have shared mutable state. Anecdotally, that feels like the most common cause of thread-safety issues - not people thinking up front about how they need to design their program around a concurrent hashmap.
Thanks for commenting :-). I probably would have classified what you describe as a coarse-grained synchronization problem, though. There needs to be a little runtime-y support for the connection and request object handling, but that can live in a central place and be exposed as a simple (blocking?) function call. It's almost an "off-the-shelf" situation. Individual request handlers can be programmed without any concern for synchronization (at least with regard to the request object, and apart from that most request handlers are rather isolated from each other). Any particular request is never handled by more than one thread at once. It's really only one logical thread, as is evidenced by the existence of thread-per-connection implementations.
I am starting to think that this is a good example of what I wanted to express in my parent post - maybe synchronization issues can more often than not be constrained to a few central places, maybe we don't have to litter code with locks and unlocks so much? Maybe web request processing architecture is not that primitive after all, and other domains can learn from it? Maybe a lot of nitty-gritty synchronization could be centralized if the "it needs to happen right here, right now" constraint is lifted?
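One way to sketch that centralization idea (names and message shapes are mine, not from the thread): a single owner thread holds the state, and everyone else talks to it over a channel. The synchronization lives entirely in the owner's loop; handlers never touch a lock:

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Illustrative message type: mutations plus a query with a reply channel.
enum Msg {
    Add(String, u64),
    Sum(mpsc::Sender<u64>),
}

// The owner thread is the one central place where state is touched;
// callers only send messages, so they need no locks at all.
fn spawn_owner() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel::<Msg>();
    thread::spawn(move || {
        let mut state: HashMap<String, u64> = HashMap::new();
        for msg in rx {
            match msg {
                Msg::Add(k, v) => *state.entry(k).or_insert(0) += v,
                Msg::Sum(reply) => {
                    let _ = reply.send(state.values().sum());
                }
            }
        }
    });
    tx
}

fn main() {
    let tx = spawn_owner();
    for i in 1..=3 {
        tx.send(Msg::Add("n".to_string(), i)).unwrap();
    }
    // Messages from one sender arrive in order, so Sum sees all the Adds.
    let (rtx, rrx) = mpsc::channel();
    tx.send(Msg::Sum(rtx)).unwrap();
    assert_eq!(rrx.recv().unwrap(), 6);
}
```

This lifts the "right here, right now" constraint: handlers enqueue work and (optionally) block on a reply, which is roughly the simple blocking function call described above.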