
> gcc seems to use a fixed size "pool" of mutexes attributed to a shared_ptr according to a hash of its pointee address, when dealing with atomic operations. In other words, no rocket-science for atomic shared pointers until now.

A side effect of the shared pool is that you might have two totally unrelated pointers share the same mutex, leading to surprising locking behaviors. Something to keep in mind.



"Surprising locking behaviors" – there is a phrase to strike terror into the heart of any sufficiently experienced programmer.


As I explained in another comment, as far as I understand gcc's implementation, in pseudo-code it would be:

Mutex mutexes[SIZE_OF_POOL];

int hash(const void* p) { ... }

template <typename T>
std::shared_ptr<T> atomic_load(const std::shared_ptr<T>* p)
{
    int r = hash(p);
    lock(mutexes[r % SIZE_OF_POOL]);
    std::shared_ptr<T> result = *p; // Safe copy of *p.
    unlock(mutexes[r % SIZE_OF_POOL]);
    return result;
}

All the other atomic operations (store, exchange, etc.) use the same lock pool, so the copy is safe. If you want to take a look at gcc's implementation:

https://github.com/gcc-mirror/gcc/blob/bd3f0a53c07086e978ea4...

https://github.com/gcc-mirror/gcc/blob/bd3f0a53c07086e978ea4...


Yes, that's my understanding of the implementation as well. I was commenting on the fact that you might have two totally unrelated p1 and p2, where (hash(p1) % SIZE_OF_POOL) == (hash(p2) % SIZE_OF_POOL), that will end up sharing the same mutex.


Yes, definitely tricky. It might give some high-frequency system designers a heart attack! I will investigate this topic in the future: maybe someone can come up with a spin-lock or lock-free solution.



