Semaphores and Mutex for Thread and Process Synchronization - multithreading

I am confused about the usage of semaphores and mutexes at the thread and process level. Can we use semaphores and mutexes for both thread and process synchronization, or do we have different semaphores and mutexes at the thread and process level? My question is with reference to the POSIX APIs.

The answer to both questions is yes. You can create both mutexes and semaphores as either process-shared or not. So you can use them as interprocess or interthread synchronization objects, but you have to specify which when you create them.
Of course, you must create the synchronization object in memory that is shared by all contexts that wish to access it. With threads, that's trivial since they share a view of memory. With processes, you have to create the synchronization object in shared memory specifically.
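A minimal sketch of that (my own illustration, not part of the original answer): a pthread mutex is initialized with the PTHREAD_PROCESS_SHARED attribute inside an anonymous shared mapping, so a parent and its forked child can both lock it. Error handling is omitted for brevity.

    /* Sketch: process-shared pthread mutex placed in shared memory. */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Map a region that both processes will share after fork(). */
        pthread_mutex_t *m = mmap(NULL, sizeof(*m), PROT_READ | PROT_WRITE,
                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);

        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* This attribute is what makes the mutex usable across processes. */
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);

        if (fork() == 0) {              /* child */
            pthread_mutex_lock(m);
            printf("child holds the mutex\n");
            pthread_mutex_unlock(m);
            _exit(0);
        }
        pthread_mutex_lock(m);          /* parent */
        printf("parent holds the mutex\n");
        pthread_mutex_unlock(m);
        wait(NULL);
        pthread_mutex_destroy(m);
        munmap(m, sizeof(*m));
        return 0;
    }

For thread-only use you would simply skip the pshared attribute and keep the mutex in ordinary process memory.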

Synchronization protects shared data and enforces ordering between tasks that must run in a particular sequence.
Processes and threads are basically the same (with differences): both are pieces of computation that do some work. The only thing you have to pay attention to is whether you are working with processes or with threads, but the synchronization method used is the same.

Related

Can multiple threads acquire lock on the same object?

I am taking a course on concurrency. The text says that multi-threading allows high throughput as it takes advantage of the multiple cores of the CPU.
I have a question about locking in the context of multiple cores. If we have multiple threads and they are running on different CPU cores, why can't two threads acquire the same lock? How does the OS protect against such scenarios?
Locks are used for synchronization: they prevent data corruption when multiple threads want to write to the same memory.
Generally you run multiple threads and use locking only in critical situations.
If two or more threads want to write to the same place at the same time, then the benefit of multi-core computation is limited. Of course you can skip locking in this situation, but the results can then be unpredictable.
For example, to write a multi-threaded matrix multiplication you can make a thread for every row of the resulting matrix, as sketched below. No locking is needed because every thread writes to a different place, so this scenario can fully benefit from multiple processors.
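Here is a rough sketch of that per-row scheme (the matrix size and variable names are invented for the example); each thread owns exactly one row of the result, so no lock is ever taken.

    /* Sketch: one thread per result row, no locking needed. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 4

    static double A[N][N], B[N][N], C[N][N];

    static void *row_worker(void *arg)
    {
        int i = (int)(long)arg;             /* row this thread owns */
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;                  /* only this thread touches row i */
        }
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                A[i][j] = i + j;
                B[i][j] = (i == j);         /* identity, so C should equal A */
            }

        pthread_t t[N];
        for (long i = 0; i < N; i++)
            pthread_create(&t[i], NULL, row_worker, (void *)i);
        for (int i = 0; i < N; i++)
            pthread_join(t[i], NULL);

        printf("C[2][3] = %g (expected %g)\n", C[2][3], A[2][3]);
        return 0;
    }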
If you want to permit more than one concurrent access to a shared resource, you can use a Semaphore (in Java).
If we have multiple threads and they are running in different cpu cores, why can't two threads acquire the same lock?
The purpose of mutex/lock is to implement mutual exclusion - only one thread can lock a mutex at a time. Or, in other words, many threads cannot lock the same mutex at the same time, by definition. This mechanism is needed to allow multiple threads to store into or read from a shared non-atomic resource without data race conditions.
How does os protect against such scenarios?
OS support is needed to prevent threads from busy-waiting when locking a mutex that is already locked by another thread. Linux implementations of mutexes (and semaphores) use futex to put waiting threads to sleep and wake them up when the mutex is released.
Here is a longer explanation from Linus Torvalds of how mutex is implemented.
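For illustration only (my own sketch, not from the answers above), a small pthread example of the mutual exclusion just described: two threads increment a shared counter, and because only one of them can hold the mutex at a time, no increments are lost.

    /* Sketch: a mutex serializes access to a shared counter. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* blocks if the other thread holds it */
            counter++;                    /* the protected critical section */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 2000000 with the mutex */
        return 0;
    }

Without the lock/unlock pair the final count would be unpredictable, which is exactly the data race the mutex exists to prevent.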

Dynamic variable declaration inside a thread

I got to know that, apart from the data segment and code segment, threads also share the heap segment:
What resources are shared between threads?
So if I create a variable dynamically using malloc() or calloc() inside a thread, would that variable be accessible to all the other threads of the same process?
Theoretically, yes, if you know the memory address: heap-allocated variables should be accessible from any thread within the same process.
{malloc, calloc, realloc, free, posix_memalign} of glibc-2.2+ are thread safe
http://linux.derkeiler.com/Newsgroups/comp.os.linux.development.apps/2005-07/0323.html
Original post
Generally, malloc/new/free/delete on multi-threaded systems are thread safe, so this should be no problem, and allocating in one thread and deallocating in another is quite a common thing to do.
As threads are an implementation feature, it certainly is implementation dependent though; e.g. some systems require you to link with a multi-threaded runtime library.
And this:
Besides, it is also answered in the link you posted:
Threads differ from traditional multitasking operating system processes in that:
- processes are typically independent, while threads exist as subsets of a process
- processes carry considerable state information, whereas multiple threads within a process share state as well as memory and other resources
- processes have separate address spaces, whereas threads share their address space
- processes interact only through system-provided inter-process communication mechanisms
- context switching between threads in the same process is typically faster than context switching between processes.
So, yes it is.
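As a small illustration of the point (my own sketch, not from the linked posts): a block malloc()'d in one thread can be read and freed in another thread of the same process, as long as the accesses are ordered; here the ordering is provided simply by joining the allocating thread first.

    /* Sketch: heap memory allocated in one thread, used and freed in another. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *shared_buf;          /* pointer visible to all threads */

    static void *producer(void *arg)
    {
        (void)arg;
        shared_buf = malloc(64);      /* allocated on the process-wide heap */
        strcpy(shared_buf, "hello from the producer thread");
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        printf("consumer read: %s\n", shared_buf);
        free(shared_buf);             /* freeing in a different thread is fine */
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);        /* join orders the accesses; no lock needed */
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(c, NULL);
        return 0;
    }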

Difference between Mutex, Semaphore & Spin Locks

I am doing experiments with IPC, especially with Mutex, Semaphore and Spin Lock.
What I learnt is that a mutex is an asynchronous locking mechanism (with sleeping, as per the theory I read on the net), a semaphore is a synchronous locking mechanism (with signaling and sleeping), and spin locks are a synchronous but non-sleeping mechanism.
Can anyone help me clarify this in depth?
Another doubt is about mutexes: when I wrote a program with threads and a mutex, while one thread was running the other thread was not in a sleep state but continuously tried to acquire the lock. So is a mutex sleeping or non-sleeping???
First, remember the goal of these 'synchronizing objects':
These objects were designed to provide an efficient and coherent use of 'shared data' between more than one thread within one process or between different processes.
These objects can be 'acquired' or 'released'.
That is it!!! End of story!!!
Now, if it helps, let me add my grain of sand:
1) Critical Section= User object used for allowing the execution of just one active thread from many others within one process. The other, non-selected threads (those attempting to acquire this object) are put to sleep.
[No interprocess capability, very primitive object].
2) Mutex Semaphore (aka Mutex)= Kernel object used for allowing the execution of just one active thread from many others, within one process or among different processes. The other, non-selected threads (those attempting to acquire this object) are put to sleep. This object supports thread ownership, thread termination notification, recursion (multiple 'acquire' calls from the same thread) and 'priority inversion avoidance'.
[Interprocess capability, very safe to use, a kind of 'high level' synchronization object].
3) Counting Semaphore (aka Semaphore)= Kernel object used for allowing the execution of a group of active threads from many others, within one process or among different processes. The other, non-selected threads (those attempting to acquire this object) are put to sleep.
[Interprocess capability, however not very safe to use because it lacks the following 'mutex' attributes: thread termination notification, recursion?, 'priority inversion avoidance'?, etc.].
4) And now, talking about 'spinlocks', first some definitions:
Critical Region= A region of memory shared by 2 or more processes.
Lock= A variable whose value allows or denies the entrance to a 'critical region'. (It could be implemented as a simple 'boolean flag').
Busy waiting= Continuously testing a variable until some value appears.
Finally:
Spin-lock (aka Spinlock)= A lock which uses busy waiting. (The lock is acquired with xchg or similar atomic operations.)
[No thread sleeping, mostly used at kernel level only. Inefficient for user-level code; see the sketch just below.]
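A minimal user-space sketch of such a spinlock, built on C11 atomics rather than a raw xchg instruction (an assumption of this example, not part of the answer above); it busy-waits exactly as described in the definitions, which is why a mutex is usually the better choice in user code.

    /* Sketch: a spinlock built on busy waiting and an atomic test-and-set. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_flag spin = ATOMIC_FLAG_INIT;
    static long counter = 0;

    static void spin_lock(void)
    {
        /* test_and_set atomically sets the flag and returns its old value;
         * keep spinning while someone else already holds it */
        while (atomic_flag_test_and_set_explicit(&spin, memory_order_acquire))
            ;                                   /* busy wait, burns CPU */
    }

    static void spin_unlock(void)
    {
        atomic_flag_clear_explicit(&spin, memory_order_release);
    }

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            spin_lock();
            counter++;
            spin_unlock();
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);     /* 200000 */
        return 0;
    }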
As a last comment, I am not sure but I can bet you some big bucks that the above first 3 synchronizing objects (#1, #2 and #3) make use of this simple beast (#4) as part of their implementation.
Have a good day!
References:
-Real-Time Concepts for Embedded Systems by Qing Li with Caroline Yao (CMP Books).
-Modern Operating Systems (3rd) by Andrew Tanenbaum (Pearson Education International).
-Programming Applications for Microsoft Windows (4th) by Jeffrey Richter (Microsoft Programming Series).
Here is a great explanation of the difference between semaphores and mutexes:
http://blog.feabhas.com/2009/09/mutex-vs-semaphores-–-part-1-semaphores/
The short answer has to do with ownership, at least with binary semaphores, but I suggest you read the entire article.
A mutex is a locking mechanism, while a semaphore is a wait-and-signal mechanism.
The two have different applications.
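As an illustration of the wait-and-signal usage (a sketch under POSIX, not taken from the linked article): the worker thread posts a semaphore that the main thread is waiting on. Note that the post and the wait come from different threads, something a mutex's ownership rule would not allow.

    /* Sketch: a binary semaphore used as a signal between two threads. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t ready;               /* binary semaphore used as an event */
    static int result;

    static void *worker(void *arg)
    {
        (void)arg;
        result = 42;                  /* produce the data */
        sem_post(&ready);             /* signal: data is available */
        return NULL;
    }

    int main(void)
    {
        sem_init(&ready, 0, 0);       /* not shared between processes, count 0 */
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);

        sem_wait(&ready);             /* sleep until the worker signals */
        printf("result = %d\n", result);

        pthread_join(t, NULL);
        sem_destroy(&ready);
        return 0;
    }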
There is a very good explanation given by the IISC professor.
Link for video

What is the difference between semaphore and mutex in implementation?

I read that a mutex and a binary semaphore differ in only one aspect: in the case of a mutex the locking thread has to unlock it, but with a semaphore the locking and unlocking threads can be different. Is that right?
Which one is more efficient?
Assuming you know the basic differences between a semaphore and a mutex:
For fast, simple synchronization, use a critical section.
To synchronize threads across process boundaries, use mutexes.
To synchronize access to limited resources, use a semaphore.
Apart from the fact that mutexes have an owner, the two objects may be optimized for different usage. Mutexes are designed to be held only for a short time; violating this can cause poor performance and unfair scheduling. For example, a running thread may be permitted to acquire a mutex even though another thread is already blocked on it, starving the waiting thread. Semaphores may provide more fairness, or fairness can be forced using several condition variables.
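A short sketch of the "limited resources" case with a POSIX counting semaphore (the counts and the sleep are made up for this example): the semaphore starts at 2, so at most two of the four threads are inside the guarded section at once.

    /* Sketch: a counting semaphore limiting concurrent access to 2 slots. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t slots;                       /* counts free resource slots */

    static void *worker(void *arg)
    {
        long id = (long)arg;
        sem_wait(&slots);                     /* take a slot, or sleep if none */
        printf("thread %ld is using a resource\n", id);
        sleep(1);                             /* pretend to use it for a while */
        sem_post(&slots);                     /* give the slot back */
        return NULL;
    }

    int main(void)
    {
        sem_init(&slots, 0, 2);               /* 2 interchangeable resources */
        pthread_t t[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&slots);
        return 0;
    }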

thread synchronization vs process synchronization

Can we use the same synchronization mechanisms for both thread synchronization and process synchronization?
What are the synchronization mechanisms that are available only within a process?
Semaphores are generally what are used for multi-process synchronization in terms of shared memory access, etc.
Critical sections, mutexes and condition variables are the more common tools for thread synchronization within a process.
Generally speaking, the methods used to synchronize threads are not used to synchronize processes, but the reverse is usually not true. In fact it is fairly common to use semaphores for thread synchronization.
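For the inter-process side, a hedged sketch using a named POSIX semaphore; the name "/demo_sem" is invented for this example. Running the same program in two terminals shows that only one process at a time enters the guarded section.

    /* Sketch: a named semaphore synchronizing two independent processes. */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Both processes open (or create) the same kernel-level semaphore. */
        sem_t *sem = sem_open("/demo_sem", O_CREAT, 0600, 1);
        if (sem == SEM_FAILED) {
            perror("sem_open");
            return 1;
        }

        sem_wait(sem);                        /* enter the cross-process section */
        printf("pid %d has the semaphore\n", (int)getpid());
        sleep(2);                             /* simulate some exclusive work */
        sem_post(sem);                        /* leave it */

        sem_close(sem);
        return 0;
    }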
There are several synchronization entities. They have different purposes and scopes. Different languages and operating systems implement them differently. On Windows, for one, you can use monitors for synching threads within a process, or a mutex for synching processes. There are semaphores, events, barriers... It all depends on the case. .NET provides so-called slim versions that have improved performance but target only in-process synching.
One thing to remember, though: synchronizing processes requires system resources, whose allocation and manipulation (locking and releasing) take quite a while.
An application consists of one or more processes. A process, in the simplest terms, is an executing program. One or more threads run in the context of the process. A thread is the basic unit to which the operating system allocates processor time. A thread can execute any part of the process code, including parts currently being executed by another thread.
Ref.
As to specific synchronisation constructs, that will depend on the OS/environment/language.
One difference: Threads within a process have equal access to the memory of the process. Memory is typically private to a process, but can be explicitly shared.
