When I read the documentation of Mutex and RwLock, the difference I see is the following:
Mutex can have only one reader or writer at a time,
RwLock can have one writer or multiple readers at a time.
When you put it that way, RwLock always seems better (less limited) than Mutex, so why would I ever use Mutex?
Sometimes it is better to use a Mutex over an RwLock in Rust:
RwLock<T> needs more bounds for T to be thread-safe:
Mutex requires T: Send to be Sync,
RwLock requires T to be Send and Sync to be itself Sync.
In other words, Mutex is the only wrapper that can make a T Sync. I found a good and intuitive explanation on Reddit:
Because of those bounds, RwLock requires its contents to be Sync, i.e. it's safe for two threads to have a &ptr to that type at the same time. Mutex only requires the data to be Send, because conceptually you can think of it like when you lock the Mutex it sends the data to your thread, and when you unlock it the data gets sent to another thread.
Use Mutex when your T is only Send and not Sync.
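A minimal sketch of that difference, using only the standard library: Cell<i32> is Send but not Sync, so a Mutex<Cell<i32>> can be shared between threads while the equivalent RwLock cannot.

use std::cell::Cell;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Cell<i32> is Send but not Sync, yet Mutex<Cell<i32>> is Sync,
    // so an Arc<Mutex<Cell<i32>>> can be shared across threads.
    let shared = Arc::new(Mutex::new(Cell::new(0)));
    let worker = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || shared.lock().unwrap().set(1))
    };
    worker.join().unwrap();
    assert_eq!(shared.lock().unwrap().get(), 1);

    // The RwLock version does not compile: RwLock<T> is Sync only if
    // T: Send + Sync, and Cell<i32> is not Sync.
    // let shared = Arc::new(std::sync::RwLock::new(Cell::new(0)));
    // thread::spawn(move || shared.write().unwrap().set(1));
}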
Preventing writer starvation
RwLock does not have a specified locking policy, because it relies on the operating system's implementation. Some read-write lock implementations are subject to writer starvation, while a Mutex cannot have this kind of issue.
Use a Mutex when there may be so many readers that writers would never get the lock.
A mutex is a simple locking mechanism for controlling access to a shared resource.
Only one thread can hold a mutex at a time, and only the thread that holds the lock can access the shared resource.
If another thread wants to lock a resource that is already locked, it blocks until the owning thread releases the mutex.
Read-write locks are more complex than mutexes.
Threads that rely only on a mutex have no read concurrency.
When there are many read operations and few write operations, a read-write lock can improve read concurrency.
Let me summarize for myself:
The implementation of a read-write lock is more complex than that of a mutex, and it has more overhead.
A read-write lock allows multiple threads to read at the same time, while a mutex does not, so a read-write lock provides higher read concurrency.
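For illustration, a small sketch of that read concurrency with std::sync::RwLock: several threads hold read guards at the same time, while a write requires exclusive access.

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let data = Arc::new(RwLock::new(vec![1, 2, 3]));

    // Many readers may hold the read lock concurrently.
    let readers: Vec<_> = (0..4)
        .map(|_| {
            let data = Arc::clone(&data);
            thread::spawn(move || data.read().unwrap().iter().sum::<i32>())
        })
        .collect();
    for reader in readers {
        assert_eq!(reader.join().unwrap(), 6);
    }

    // A writer needs exclusive access; no readers can run while this guard is held.
    data.write().unwrap().push(4);
    assert_eq!(data.read().unwrap().len(), 4);
}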
Related
What is an "async" mutex as opposed to a "normal" mutex? I believe this is the difference between tokio's Mutex and the normal std lib Mutex. But I don't get, conceptually, how a mutex can be "async". Isn't the whole point that only one thing can use it at a time?
Here's a simple comparison of their usage:
// Synchronous mutex from the standard library: lock() blocks the current thread.
let mtx = std::sync::Mutex::new(0);
let _guard = mtx.lock().unwrap();

// Asynchronous mutex from Tokio: lock() returns a Future, so it must be awaited inside async code.
let mtx = tokio::sync::Mutex::new(0);
let _guard = mtx.lock().await;
Both ensure mutual exclusivity. The only difference between an asynchronous mutex and a synchronous mutex is dictated by their behavior when trying to acquire a lock. If a synchronous mutex tries to acquire the lock while it is already locked, it will block execution on the thread. If an asynchronous mutex tries to acquire the lock while it is already locked, it will yield execution to the executor.
If your code is synchronous, there's no reason to use an asynchronous mutex. As shown above, locking an asynchronous mutex is Future-based and is designed to be used in async/await contexts.
If your code is asynchronous, you may still want to use a synchronous mutex since there is less overhead. However, you should be mindful that blocking in an async/await context is to be avoided at all costs. Therefore, you should only use a synchronous mutex if acquiring the lock is not expected to block. Some cases to keep in mind:
If you need to hold the lock over an .await call, use an asynchronous mutex. The compiler will usually reject this anyway when using thread-safe futures since most synchronous mutex locks can't be sent to another thread.
If your lock is contentious (i.e. if you expect the mutex to already be locked when you want it), you should use an asynchronous mutex. This can happen when synchronizing multiple tasks into a pool or bounded queue.
If you have complicated and/or computationally-heavy updates, those should probably be moved to a blocking pool anyway where you'd use a synchronous mutex.
The above cases are all three sides of the same coin: if you expect to block, use an asynchronous mutex. If you don't know whether your mutex usage will block or not, err on the side of caution and use an asynchronous mutex. Using an asynchronous mutex where a synchronous one would suffice only leaves a small amount of performance on the table, but using a synchronous mutex where you should've used an asynchronous one could be catastrophic.
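A minimal sketch of the first case, assuming the tokio crate with its full feature set: an asynchronous mutex guard can be held across an .await point, which the compiler would reject for a std::sync::MutexGuard in a spawned task (the guard is not Send).

use std::sync::Arc;
use tokio::sync::Mutex;
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let shared = Arc::new(Mutex::new(0));

    let task = {
        let shared = Arc::clone(&shared);
        tokio::spawn(async move {
            // The async guard may be held across an .await point;
            // with std::sync::Mutex this would not compile in a spawned task.
            let mut guard = shared.lock().await;
            sleep(Duration::from_millis(10)).await;
            *guard += 1;
        })
    };

    task.await.unwrap();
    assert_eq!(*shared.lock().await, 1);
}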
Most situations I run into with mutexes are when synchronizing simple data structures, where the update methods are well-encapsulated: acquire the lock, update the data, and release the lock. Did you know a simple println! requires locking a mutex? Those uses of mutexes can be synchronous and can be used even in an asynchronous context. Even if the lock does block, it is often no more impactful than a context switch, which happens all the time anyway.
Note: Tokio's Mutex does have a .blocking_lock() method which is helpful if both locking behaviors are needed. So the mutex can be both synchronous and asynchronous!
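A small sketch of mixing the two behaviors, again assuming tokio's full feature set: an async task locks with .await while a blocking thread uses .blocking_lock() on the same mutex.

use std::sync::Arc;
use tokio::sync::Mutex;

#[tokio::main]
async fn main() {
    let counter = Arc::new(Mutex::new(0));

    // Async side: Future-based locking.
    let async_side = {
        let counter = Arc::clone(&counter);
        tokio::spawn(async move { *counter.lock().await += 1 })
    };

    // Blocking side: blocking_lock() must not be called from async code,
    // so run it on the blocking thread pool.
    let blocking_side = {
        let counter = Arc::clone(&counter);
        tokio::task::spawn_blocking(move || *counter.blocking_lock() += 1)
    };

    async_side.await.unwrap();
    blocking_side.await.unwrap();
    assert_eq!(*counter.lock().await, 2);
}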
See also:
Why do I get a deadlock when using Tokio with a std::sync::Mutex?
std::sync::Mutex vs futures::lock::Mutex vs futures_lock::Mutex for async on the Rust forum
Which kind of mutex should you use? in the Tokio documentation
On using std::sync::Mutex in the Tokio tutorial on shared state
When a lock (or try_lock) function of a mutex discovers that the mutex is already locked (by another thread (presumably)), could it try to determine whether the owning thread is (or was extremely recently) running on another core?
Knowing whether the owner is running gives an indication of a possible reason the thread still holds the lock (why it couldn't unlock it): if the thread owning the mutex is either "runnable" (waiting for an available core and time slice) or "sleeping" or waiting for I/O... then clearly spinning on the mutex is not useful.
I understand that non-recursive, non-"safe", non-priority-inheritance mutexes (what I call "anonymous" mutexes: mutexes that in practice can be unlocked by another thread, as they are just common semaphores) carry as little information as possible, but surely that could be determined for those "owned" mutexes that know the identity of the locking thread.
Another, possibly more useful, piece of information would be how long the mutex has been locked, but that would potentially require adding a field to the mutex object.
Is that ever done?
The docs say that locking an rwlock on one thread and unlocking it on another results in undefined behavior. I have an array and two threads: one allocates it and one deallocates it, and this happens in a cycle. There are also some threads reading from and writing to the array, but they never overlap, so no synchronization is needed there. The problem is that the read/write threads may still try to use the array in the timeframe between dealloc and alloc. I was thinking of taking a read lock in the read/write threads, locking the array for writing in the dealloc thread, and unlocking it in the alloc thread. But this results in undefined behavior since the lock and unlock happen on different threads. What would be the right approach in this case?
You need some variable that stores the state (allocated or not), and you can protect that variable with a lock. When a thread needs to check or change the state, it acquires the lock, checks or changes the state, and then releases the lock.
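A minimal sketch of that idea (the Option<Vec<u8>> state and the thread structure here are made up for illustration): the lock protects the state itself, so the allocating thread stores the buffer, readers check whether it is present before touching it, and the deallocating thread takes it out.

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The "allocated or not" state lives inside the lock as an Option.
    let state: Arc<Mutex<Option<Vec<u8>>>> = Arc::new(Mutex::new(None));

    // Allocating thread: store the buffer while holding the lock.
    let alloc = {
        let state = Arc::clone(&state);
        thread::spawn(move || *state.lock().unwrap() = Some(vec![0u8; 1024]))
    };
    alloc.join().unwrap();

    // Reader/writer threads: check the state under the same lock before using the buffer.
    if let Some(buf) = state.lock().unwrap().as_mut() {
        buf[0] = 42;
    }

    // Deallocating thread: take the buffer out; dropping it frees the memory.
    let dealloc = {
        let state = Arc::clone(&state);
        thread::spawn(move || drop(state.lock().unwrap().take()))
    };
    dealloc.join().unwrap();
}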
Suppose I use a lock hierarchy to avoid deadlock. If I use reader-writer mutexes, how should I think about and use them? Do there exist (can I think of) distinct reading and writing locks in the hierarchy for each reader-writer mutex? (If yes, this would imply that these two locks could be assigned different levels in the hierarchy.) Does using reader-writer mutexes introduce possibilities of deadlock into the hierarchy? (If yes, how (if at all) can that be avoided?) What about "upgradeable" locks (reader locks which can be turned into writer locks without unlocking the mutex first)?
Yes, I've seen advice to avoid (specifically reader-writer) mutexes if possible. This is not about whether to use them generally; just assume that there existed a problem best solved by reader-writer mutexes. Similarly, do not suggest alternatives to the lock hierarchy just because you'd generally favour them. (If however combining reader-writer mutexes with lock hierarchies does introduce a possibility of deadlocks, feel free to suggest alterations to the used concepts.)
It might be helpful to know that I'm thinking about multi-threaded programs using the Boost Thread library. The reader-writer mutex class is called shared_mutex there; unique_lock is an exclusive (writer) lock; shared_lock is a shared (reader) lock; upgrade_lock is a reader lock that can temporarily be upgraded to a writer lock.
You have to treat it as a single lock for the purposes of lock ordering. If you have lock A and a R-W lock B, and have two threads that do this:
lock A, wait for read(B)
lock write(B), wait for A
they will still deadlock.
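A sketch of that ordering rule, written in Rust since the rule is language-independent (Boost's shared_mutex corresponds roughly to std::sync::RwLock here): as long as every thread acquires A before B, it does not matter whether B is taken for reading or for writing; reversing the order in one thread would recreate the deadlock above.

use std::sync::{Arc, Mutex, RwLock};
use std::thread;

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(RwLock::new(0));

    // Thread 1: lock A, then read B (consistent order: A before B).
    let t1 = {
        let (a, b) = (Arc::clone(&a), Arc::clone(&b));
        thread::spawn(move || {
            let _a = a.lock().unwrap();
            let _b = b.read().unwrap();
        })
    };

    // Thread 2: also lock A first, then write B. Taking write(B) before A here
    // would recreate the deadlock above, even though thread 1 only reads B.
    let t2 = {
        let (a, b) = (Arc::clone(&a), Arc::clone(&b));
        thread::spawn(move || {
            let _a = a.lock().unwrap();
            let _b = b.write().unwrap();
        })
    };

    t1.join().unwrap();
    t2.join().unwrap();
}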
I read that a mutex and a binary semaphore differ in only one aspect: with a mutex, the thread that locked it has to unlock it, whereas with a semaphore the locking and unlocking threads can be different. Is that right?
Which one is more efficient?
Assuming you know the basic differences between a semaphore and a mutex:
For fast, simple synchronization, use a critical section.
To synchronize threads across process boundaries, use mutexes.
To synchronize access to limited resources, use a semaphore.
Apart from the fact that mutexes have an owner, the two objects may be optimized for different usage. Mutexes are designed to be held only for a short time; violating this can cause poor performance and unfair scheduling. For example, a running thread may be permitted to acquire a mutex even though another thread is already blocked on it, starving the waiting thread. Semaphores may provide more fairness, or fairness can be forced using several condition variables.
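As an illustration of the "limited resources" point, a sketch assuming the tokio crate (the standard library has no semaphore type): a semaphore with N permits admits up to N tasks at once, whereas a mutex admits only one.

use std::sync::Arc;
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() {
    // Up to 3 tasks may use the "limited resource" at the same time.
    let permits = Arc::new(Semaphore::new(3));
    let mut handles = Vec::new();

    for i in 0..10 {
        let permits = Arc::clone(&permits);
        handles.push(tokio::spawn(async move {
            let _permit = permits.acquire_owned().await.unwrap();
            // ... use one of the 3 resource slots while holding the permit ...
            println!("task {i} holds a permit");
        }));
    }

    for handle in handles {
        handle.await.unwrap();
    }
}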