I am taking a course on concurrency. The text says that multi-threading allows high throughput because it takes advantage of the multiple cores of the CPU.
I have a question about locking in the context of multiple cores. If we have multiple threads running on different CPU cores, why can't two threads acquire the same lock? How does the OS protect against such scenarios?
Locks exist for synchronization: they prevent data corruption when multiple threads want to write to the same memory.
Generally you run multiple threads and use locking only around the critical sections.
If two or more threads want to write to the same place at the same time, the benefit of multiple cores is limited. You could skip the locking in that situation, but the results would be unpredictable.
For example, in a multi-threaded matrix multiplication you can create a thread for every row of the resulting matrix. No locking is needed because every thread writes to a different place, so this scenario fully benefits from multiple processors.
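A minimal sketch of that scheme in C with POSIX threads (the matrix size and contents are placeholders):

#include <pthread.h>

#define N 4                      /* matrix size; placeholder value */

double a[N][N], b[N][N], c[N][N];

/* Each thread computes one row of c = a * b. Rows never overlap,
   so no locking is needed. */
static void *row_worker(void *arg) {
    int row = (int)(long)arg;
    for (int j = 0; j < N; j++) {
        double sum = 0.0;
        for (int k = 0; k < N; k++)
            sum += a[row][k] * b[k][j];
        c[row][j] = sum;
    }
    return NULL;
}

int main(void) {
    pthread_t tid[N];
    /* ... fill a and b here ... */
    for (long i = 0; i < N; i++)
        pthread_create(&tid[i], NULL, row_worker, (void *)i);
    for (int i = 0; i < N; i++)
        pthread_join(tid[i], NULL);
    return 0;
}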
If you want to permit more than one thread to share access to a resource, you can use a counting semaphore (e.g. Semaphore in Java).
If we have multiple threads running on different CPU cores, why can't two threads acquire the same lock?
The purpose of a mutex/lock is to implement mutual exclusion: only one thread can lock a mutex at a time. Or, in other words, many threads cannot hold the same mutex at the same time, by definition. This mechanism is needed to allow multiple threads to store into or read from a shared non-atomic resource without data races.
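As an illustration, a minimal pthreads sketch where a mutex guards a shared, non-atomic counter:

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
long shared_counter = 0;            /* shared, non-atomic resource */

void *worker(void *unused) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);     /* only one thread at a time gets past this */
        shared_counter++;           /* critical section: no data race */
        pthread_mutex_unlock(&m);
    }
    return NULL;
}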
How does the OS protect against such scenarios?
OS support is needed to prevent the threads from busy-waiting when locking a mutex that is already locked by another thread. Linux implementations of mutexes (and semaphores) use a futex to put the waiting threads to sleep and wake them up when the mutex is released.
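To give a flavor of how this works, here is a heavily simplified futex-based lock in C. This is a Linux-specific sketch; a production mutex also tracks whether waiters exist, so the uncontended path can skip the wake syscall entirely (Ulrich Drepper's paper "Futexes Are Tricky" covers the details):

#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

static atomic_int lock_word;        /* 0 = unlocked, 1 = locked */

void futex_lock(void) {
    int expected = 0;
    while (!atomic_compare_exchange_strong(&lock_word, &expected, 1)) {
        /* Already locked: sleep in the kernel until the word changes.
           The kernel re-checks that lock_word is still 1 before putting
           us to sleep, so no wake-up can be missed. */
        syscall(SYS_futex, &lock_word, FUTEX_WAIT, 1, NULL, NULL, 0);
        expected = 0;
    }
}

void futex_unlock(void) {
    atomic_store(&lock_word, 0);
    /* Wake one sleeping waiter, if any. */
    syscall(SYS_futex, &lock_word, FUTEX_WAKE, 1, NULL, NULL, 0);
}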
Here is a longer explanation from Linus Torvalds of how a mutex is implemented.
I get the idea that if locking and unlocking a mutex is an atomic operation, it can protect the critical section of code on a single-processor architecture.
Whichever thread is scheduled first would be able to "lock" the mutex in a single machine-code operation.
But how are mutexes any good when the threads are running on multiple cores, where different threads could be running simultaneously on different cores?
I can't seem to grasp how a multithreaded program can work without any deadlock or race condition on multiple cores.
The general answer:
Mutexes are an operating system concept. An operating system offering mutexes has to ensure that these mutexes work correctly on all hardware that the operating system wants to support. If implementing a mutex is not possible for a specific piece of hardware, the operating system cannot offer mutexes on that hardware; and if the operating system requires the existence of mutexes to work correctly, it cannot support that hardware at all. How the operating system implements mutexes for specific hardware is, unsurprisingly, very hardware dependent and varies a lot between operating systems and their supported hardware.
The detailed answer:
Most general purpose CPUs offer atomic operations. These operations are designed to be atomic across all CPU cores within a system, whether these cores are part of a single CPU or spread across multiple individual CPUs.
With as few as two atomic operations, atomic_or and atomic_and, it is possible to implement a lock. E.g. think of
int atomic_or(int *addr, int val);
It atomically calculates *addr = *addr | val and returns the old value of *addr prior to performing the calculation. If *lock == 0 and multiple threads call atomic_or(lock, 1), then only one of them gets 0 as the result: the first thread to perform the operation. All other threads get 1. The thread that got 0 is the winner; it holds the lock, and all other threads register for an event and go to sleep.
The winner thread now has exclusive access to the section following the atomic_or. It can perform the desired work, and once it is done, it just clears the lock again (atomic_and(lock, 0)) and generates a system event announcing that the lock is available again.
The system will then wake up one, some, or all of the threads that registered for this event before going to sleep, and the race for the lock starts all over. Either one of the woken-up threads wins the race, or possibly none of them does, because another thread was even faster and grabbed the lock between the atomic_and and the wake-up. That is okay and still correct, as it is still only one thread that has access. All threads that failed to obtain the lock go back to sleep.
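A spinning sketch of that scheme, using the GCC/Clang __atomic builtins in place of the hypothetical atomic_or/atomic_and above; the event registration and sleep/wake machinery is left out:

void lock_acquire(int *lock) {
    /* The returned old value tells us who won: 0 means the lock was
       free and the bit we set made it ours. */
    while (__atomic_fetch_or(lock, 1, __ATOMIC_ACQUIRE) != 0) {
        /* Lost the race. A real implementation would register for the
           wake-up event and go to sleep here instead of spinning. */
    }
}

void lock_release(int *lock) {
    /* Clear the lock; a real implementation would then signal the
       system event so waiting threads are woken up. */
    __atomic_fetch_and(lock, 0, __ATOMIC_RELEASE);
}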
Of course, the actual implementations of modern systems are often much more complicated than that. They may take things like thread priorities into account (high-priority threads may be preferred in the lock race) or ensure that every thread waiting for a mutex will eventually get it (precautions exist that prevent a thread from always losing the lock race). Mutexes can also be recursive, in which case the system ensures that the same thread can obtain the same mutex multiple times without deadlocking, which requires some additional bookkeeping.
Probably needless to say, but atomic operations are expensive, as they require the cores within a system to synchronize their work, and this slows their processing throughput. They may be somewhat expensive if all cores run on a single CPU, but they can be very expensive if there are multiple CPUs, as the synchronization must take place over the bus system that connects the CPUs with each other, and that bus system usually does not operate at CPU speed.
On the other hand, using mutexes will always slow down processing to begin with, as providing exclusive access to resources has to slow down processing if multiple threads ever require access at the same time to continue their work. So for implementing mutexes this cost is irrelevant. Actually, if you can implement a function in a thread-safe way using just atomic operations instead of full-featured mutexes, you will quite often see a noticeable speed benefit, despite these operations being more expensive than normal operations.
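For example, a shared event counter can be made thread-safe with a single C11 atomic and no mutex at all:

#include <stdatomic.h>

atomic_long counter = 0;

void count_event(void) {
    /* A single indivisible read-modify-write: no lock, no data race. */
    atomic_fetch_add(&counter, 1);
}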
Threads are managed by the operating system, which, among other things, is responsible for scheduling threads to cores; it can therefore also avoid scheduling a specific thread onto a core.
A mutex is an operating-system concept. You're basically asking the OS to block a thread until some other thread tells the OS it's OK to continue.
On modern operating systems, threads are an abstraction over the physical hardware. A programmer targets the thread as the abstraction for code execution; there is no separate abstraction available for working directly on a hardware core. The operating system is responsible for mapping threads to physical cores.
A mutex is a data structure that lives in system memory. Any thread that has access can read that memory position, regardless of which thread or core it is running on. It doesn't matter whether your code is executing on core 1 or core 20; it still has the ability to read the current state of the lock.
In other words, regardless of the number of threads or cores, there is only one shared system memory for them to act on.
I have multiple threads that access the same data, and it is too painful to make them thread-safe. Therefore, they are now forced to run on only one CPU core, using CPU affinity, so that only one thread can be running at any time.
I was wondering if it is possible to group these threads and let them float to other CPU cores together as a group. That way, I wouldn't have to dedicate one CPU core to these threads.
This is on a Unix/BSD platform.
There is no way to do this on Windows. I don't know about Unix/Linux but I doubt it's possible.
Note that this does not make your system thread-safe. Even on uniprocessor machines, thread-safety is a concern:
i++
is not atomic. Two threads can both read i, then compute i+1, then write i. That results in a lost update.
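One possible interleaving (assuming i++ compiles to a separate load, add, and store):

Thread A                    Thread B
load i   (reads 5)
                            load i   (reads 5)
add 1    (computes 6)
                            add 1    (computes 6)
store i  (writes 6)
                            store i  (writes 6)   <- one increment lost

Both threads incremented, but i only went from 5 to 6.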
You need to throw this approach away. Probably you should use a global lock that you hold around all accesses to shared mutable state. That makes these concerns go away and is reasonably simple to implement.
Either make the code thread-safe or use just one thread.
The simplest solution is probably the "one big lock" model. With this model, one lock protects all the data that is shared among the threads and not handled in a thread-safe way. The threads hold the lock at all times while running. Then you identify all the points where a thread can block, and release the lock around those blocking calls.
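A pthreads sketch of that model; do_work_on_shared_state and wait_for_input are hypothetical placeholders for your own code:

#include <pthread.h>

void do_work_on_shared_state(void);   /* hypothetical: touches shared data */
void wait_for_input(void);            /* hypothetical blocking call */

pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *unused) {
    pthread_mutex_lock(&big_lock);        /* threads run holding the lock */
    for (;;) {
        do_work_on_shared_state();
        pthread_mutex_unlock(&big_lock);  /* release only around blocking points */
        wait_for_input();
        pthread_mutex_lock(&big_lock);
    }
}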
Suppose we have a dual-core machine with a mainstream, modern OS capable of utilizing both cores.
If I have two threads, P1 and Q1, within the same process, and they happen to start creating child threads, say P2 and Q2, at approximately the same machine cycle, will the OS perform the two thread creations concurrently?
I heard thread creation is expensive, hence the question...
Thanks in advance.
Any reasonably well-designed OS can have multiple processors executing kernel code at the same time, so some of the tasks involved in thread creation can happen concurrently. But there will be some necessary serialization to manipulate shared data structures (e.g. allocating memory, inserting a newly created thread structure into a global list). The processors could contend for the same lock, thereby reducing concurrency.
Systems/applications which make new threads so often that the overhead of thread creation actually matters are probably designed wrong (doing too little useful work in a thread relative to the startup time, and not taking advantage of the obvious optimization of reusing short-lived threads from a pool).
It will be sort-of-concurrent. There are aspects of thread creation that cannot proceed in parallel; it would be unfortunate if the kernel memory manager allocated both threads the same stack!
Thread creation is sufficiently expensive that it's worthwhile to avoid doing it at all during an app run, hence the popularity of thread pools. Long-running tasks that block can be threaded off and left alive for the life of the app; often this means that explicit thread termination (awkward at best, almost impossible at worst, from user code) is not necessary.
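A bare-bones pthreads pool illustrating the idea (fixed-size circular queue; full-queue handling and shutdown are omitted for brevity):

#include <pthread.h>

#define POOL_SIZE 4
#define QUEUE_CAP 64

typedef struct { void (*fn)(void *); void *arg; } task_t;

static task_t queue[QUEUE_CAP];
static int head, tail, count;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* Workers are created once and reused for the life of the app. */
static void *worker(void *unused) {
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (count == 0)
            pthread_cond_wait(&nonempty, &qlock);  /* sleep until work arrives */
        task_t t = queue[head];
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_mutex_unlock(&qlock);
        t.fn(t.arg);                               /* run the task unlocked */
    }
    return NULL;
}

void pool_start(void) {
    pthread_t tid;
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&tid, NULL, worker, NULL);
}

void pool_submit(void (*fn)(void *), void *arg) {
    pthread_mutex_lock(&qlock);
    queue[tail] = (task_t){ fn, arg };             /* note: no full-queue check */
    tail = (tail + 1) % QUEUE_CAP;
    count++;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&qlock);
}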
I think developers continually start and stop threads because they like to think of them as 'functions', where you 'pass parameters' in at the start and 'return' results when the thread ends. This is not the best way of conceptualizing threads.
From what I have learned about mutexes, they generally provide a locking capability for a shared resource. So if a new thread wants to access this locked shared resource, it either quits or has to continually poll the lock (wasting processor cycles while waiting for it).
However, a monitor has condition variables, which provide a more asynchronous way for threads to wait: they are put on a wait queue and therefore do not consume processor cycles.
Would this be the only advantage of monitors over mutexes (or any general locking mechanism without condition variables) ?
Mutexes are a low-level construct. They just provide mutual exclusion and memory visibility/ordering. Monitors, on the other hand, are higher level: they allow threads to wait for an application-specific condition to hold.
So, in some cases monitors are just overkill compared to a simple lock/unlock, but in most cases mutexes alone are not nearly enough, so you see them used with one or more condition variables, which is conceptually equivalent to using monitors.
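In pthreads terms, a monitor is roughly a mutex plus a condition variable plus the application-specific condition; waiting threads sleep on a queue instead of polling:

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
int ready = 0;                      /* the application-specific condition */

void wait_until_ready(void) {
    pthread_mutex_lock(&m);
    while (!ready)                  /* loop: wake-ups can be spurious */
        pthread_cond_wait(&cv, &m); /* sleeps on a queue; burns no CPU cycles */
    pthread_mutex_unlock(&m);
}

void announce_ready(void) {
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&cv);       /* wake one thread from the wait queue */
    pthread_mutex_unlock(&m);
}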
I think a monitor locks an object (multiple threads cannot access the object at the same time), while a mutex locks a section of code (of the threads that reach it, only one at a time can go through).
I read that a mutex and a binary semaphore differ in only one aspect: in the case of a mutex, the locking thread has to unlock it, but with a semaphore the locking and unlocking threads can be different.
Which one is more efficient?
Assuming you know the basic differences between a semaphore and a mutex:
For fast, simple synchronization, use a critical section.
To synchronize threads across process boundaries, use mutexes.
To synchronize access to limited resources, use a semaphore.
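For example, with POSIX semaphores in C (Java's Semaphore, mentioned earlier, works the same way), allowing at most three threads to use a resource at once:

#include <semaphore.h>

sem_t slots;

void init(void) {
    sem_init(&slots, 0, 3);   /* at most 3 threads may hold the resource */
}

void use_resource(void) {
    sem_wait(&slots);         /* blocks while all 3 slots are taken */
    /* ... use one of the limited resources ... */
    sem_post(&slots);         /* release the slot */
}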
Apart from the fact that mutexes have an owner, the two objects may be optimized for different usage. Mutexes are designed to be held only for a short time; violating this can cause poor performance and unfair scheduling. For example, a running thread may be permitted to acquire a mutex even though another thread is already blocked on it, which can starve the waiting thread. Semaphores may provide more fairness, or fairness can be forced using several condition variables.