Mutex lock: what does "blocking" mean? - multithreading

I've been reading up on multithreading and shared-resource access, and one of the many (for me) new concepts is the mutex lock. What I can't seem to find out is what actually happens to the thread that finds a "critical section" locked. It says in many places that the thread gets "blocked", but what does that mean? Is it suspended, and will it resume when the lock is lifted? Or will it try again in the next iteration of the "run loop"?
The reason I ask is that I want system-supplied events (mouse, keyboard, etc.), which (apparently) are delivered on the main thread, to be handled in a very specific part of the run loop of my secondary thread. So whatever event is delivered, I queue in my own data structure. Obviously, the data structure needs a mutex lock because it's being modified by both threads. The missing puzzle piece is: what happens when an event gets delivered in a function on the main thread, I want to queue it, but the queue is locked? Will the main thread be suspended, or will it just jump over the locked section and go out of scope (losing the event)?

Blocked means execution gets stuck there; generally, the thread is put to sleep by the system and yields the processor to another thread. When a thread is blocked trying to acquire a mutex, execution resumes when the mutex is released, though the thread might block again if another thread grabs the mutex before it can.
There is generally a try-lock operation that grabs the mutex if possible and, if not, returns an error. But you are eventually going to have to move the current event into that queue. Also, if you delay moving the events to the thread where they are handled, the application will become unresponsive regardless.
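To make the two behaviours concrete, here is a minimal sketch in C++ (the question doesn't name a language, so std::mutex and the names queueMutex/eventQueue are assumptions for illustration):

#include <mutex>
#include <queue>

std::mutex queueMutex;        // protects eventQueue
std::queue<int> eventQueue;   // stand-in for your own event data structure

// Blocking version: if the secondary thread holds the mutex, the main
// thread is suspended here and resumes once the mutex is released.
// The event is never lost; the caller just waits.
void postEventBlocking(int event) {
    std::lock_guard<std::mutex> lock(queueMutex);  // may block
    eventQueue.push(event);
}

// Try-lock version: if the mutex is busy, try_lock() returns false
// immediately and the caller must decide what to do with the event.
bool postEventTry(int event) {
    if (queueMutex.try_lock()) {
        eventQueue.push(event);
        queueMutex.unlock();
        return true;
    }
    return false;  // retry later, or the event is dropped
}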
A queue is actually one case where you can get away with not using a mutex. For example, Mac OS X (and possibly also iOS) provides the OSAtomicEnqueue() and OSAtomicDequeue() functions (see man atomic or <libkern/OSAtomic.h>) that exploit processor-specific atomic operations to avoid using a lock.
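A hedged sketch of those calls (macOS-specific; the Event struct and its field names are made up for illustration, the "queue" actually behaves as a LIFO, and the functions are marked deprecated on recent SDKs):

#include <libkern/OSAtomic.h>   // OSAtomicEnqueue / OSAtomicDequeue
#include <cstddef>              // offsetof

struct Event {
    Event* link;    // link field the queue uses to chain entries
    int    payload; // whatever data the event carries
};

static OSQueueHead pendingEvents = OS_ATOMIC_QUEUE_INIT;

// Main thread: enqueue an event without taking any lock.
void postEvent(int payload) {
    Event* e = new Event{nullptr, payload};
    OSAtomicEnqueue(&pendingEvents, e, offsetof(Event, link));
}

// Secondary thread: drain whatever has been queued so far.
void drainEvents() {
    while (auto* e = static_cast<Event*>(
               OSAtomicDequeue(&pendingEvents, offsetof(Event, link)))) {
        // ... handle e->payload ...
        delete e;
    }
}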
But, why not just process the events on the main thread as part of the main run loop?

The simplest way to think of it is that the blocked thread is put in a wait ("sleeping") state until the mutex is released by the thread holding it. At that point the operating system will "wake up" one of the threads waiting on the mutex and let it acquire it and continue. It's as if the OS simply puts the blocked thread on a shelf until it has the thing it needs to continue. Until the OS takes the thread off the shelf, it's not doing anything. The exact implementation -- which thread gets to go next, whether they all get woken up or they're queued -- will depend on your OS and what language/framework you are using.

Too late to answer, but perhaps I can help the understanding. I am speaking more from an implementation perspective than from theoretical texts.
The word "blocking" is a kind of technical homonym. People may use it for sleeping or for mere waiting. The term has to be understood in the context of usage.
Blocking means Waiting - Assume on an SMP system that thread B wants to acquire a spinlock held by some other thread A. One mechanism is to disable preemption and keep spinning on the processor until B gets the lock. Another, more efficient, mechanism is to let other threads use the processor if B does not get the lock after a few attempts. In that case thread B is scheduled out (preemption being enabled) and the processor is given to some other thread C. Thread B just waits in the scheduler's queue and comes back when it gets its turn. Understand that B is not sleeping; it is waiting passively instead of busy-waiting and burning processor cycles. On BSD and Solaris systems there are data structures like turnstiles to implement this situation.
Blocking means Sleeping - If thread B had instead made a system call like read(), waiting for data from a network socket, it cannot proceed until the data arrives. Therefore, some texts casually use the term blocking as in "... blocked for I/O" or "... in a blocking system call". Actually, thread B is rather sleeping. There are specific data structures known as sleep queues - much like luxury waiting rooms at airports :-). The thread will be woken up when the OS detects that the data is available, much like an attendant of the waiting room.

Blocking means just that. It is blocked. It will not proceed until able. You don't say which language you're using, but most languages/libraries have lock objects where you can "attempt" to take the lock and then carry on and do something different depending on whether you succeeded or not.
But in, for example, Java synchronized blocks, your thread will stall until it is able to acquire the monitor (mutex, lock). The java.util.concurrent.locks.Lock interface describes lock objects which have more flexibility in terms of lock acquisition.

Related

Understanding Condition Variables - Isn't there always spin-waiting "somewhere"?

I am going to explain my understanding of this OS construct and appreciate some polite correction.
I understand thread-safety clearly and simply as follows.
If there is some setup where
X: some condition
Y: do something
and the pair
if X
    do Y
must be atomic, meaning that if at some exact moment in time we are
doing Y
while
not X
then there is some problem.
By my understanding, the lowest-level solution to this is to use shared objects (mutexes). As an example, in the solution to the "Too Much Milk" problem
Thread A          | Thread B
-------------------------------------
leave Note A      | leave Note B
while Note B      | if no Note A
    do nothing    |     if no milk
if no milk        |         buy milk
buy milk          | remove Note B
remove Note A     |
Note A and Note B would be the shared objects, i.e. some piece of memory accessible by both threads A and B.
This can be generalized (beyond milk) for the 2-thread case like
Thread A          | Thread B
-------------------------------------
leave Note A      | leave Note B
while Note B      | if no Note A
    do nothing    |     if X
if X              |         do Y
do Y              | remove Note B
remove Note A     |
and there is some way to generalize it for the N-thread case (so I'll continue referring to the 2-thread case for simplicity).
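To make the 2-thread case concrete, here is a rough C++ sketch of the note protocol above; the std::atomic<bool> flags play the roles of Note A and Note B (this is just an illustration of the scheme, not a recommendation):

#include <atomic>
#include <thread>

std::atomic<bool> noteA{false}, noteB{false};  // the shared "notes"
bool haveMilk = false;                         // the resource guarded by the protocol

void threadA() {
    noteA = true;                        // leave Note A
    while (noteB) { /* do nothing */ }   // busy-wait while Note B is up
    if (!haveMilk) haveMilk = true;      // if no milk, buy milk (do Y)
    noteA = false;                       // remove Note A
}

void threadB() {
    noteB = true;                        // leave Note B
    if (!noteA) {                        // if no Note A
        if (!haveMilk) haveMilk = true;  //     if no milk, buy milk (do Y)
    }
    noteB = false;                       // remove Note B
}

int main() {
    std::thread a(threadA), b(threadB);
    a.join(); b.join();
}

The while loop in threadA is exactly the busy-wait discussed next.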
Possibly incorrect assumption #1: This is the lowest-level solution known (possible?).
Now one of the deficiencies of this solution is the spinning or busy-wait
while Note B
do nothing
because if do Y is an expensive task then the thread scheduler will keep switching to Thread A to perform this check, i.e. the thread is still "awake" and using processing power even when we "know" that all it will do is perform a check that will fail for some time.
The question then becomes: Is there some way we could make Thread A "sleep", so that it isn't scheduled to run until Note B is gone, and then "wake up"???
The Condition Variable design pattern provides a solution, and it is built on top of mutexes.
Possibly incorrect assumption #2: Then, isn't there still some spinning under the hood? Is the average amount of spinning somehow reduced?
I could use a logical explanation like only S.O. can provide ;)
Isn't there still some spinning under the hood?
No. That's the whole point of condition variables: to avoid the need for spinning.
An operating system scheduler creates a private object to represent each thread, and it keeps these objects in containers which, for purpose of this discussion, we will call queues.
Simplistic explanation:
When a thread calls condition.await(), that invokes a system call. The scheduler handles it by removing the calling thread from whatever CPU it was running on, and by putting its proxy object into a queue. Specifically, it puts it into the queue of threads that are waiting to be notified about that particular condition.
There usually is a separate queue for every different thing that a thread could wait for. If you create a mutex, the OS creates a queue of threads that are waiting to acquire the mutex. If you create a condition variable, the OS creates a queue of threads that are waiting to be notified.
Once the thread's proxy object is in that queue, nothing will wake it up until some other thread notifies the condition variable. That notification also is a system call. The OS handles it (simplest case) by moving all of the threads that were in the condition variable's queue into the global run queue. The run queue holds all of the threads that are waiting for a CPU to run on.
On some future timer tick, the OS will pick the formerly waiting thread from the run queue and set it up on a CPU.
Extra credit:
Bad news! The first thing the thread does after being awakened, while it's still inside the condition.await() call, is to try to re-lock the mutex. But there's a chance that the thread that signalled the condition still has the mutex locked. Our victim is going to go right back to sleep again, this time waiting in the queue for the mutex.
A more sophisticated system might be able to optimize the situation by moving the thread directly from the condition variable's queue to the mutex queue without ever needing to wake it up and then put it back to sleep.
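As a rough sketch of that sequence using C++'s std::condition_variable (rather than the condition.await() notation above; the names here are made up):

#include <condition_variable>
#include <mutex>
#include <thread>
#include <iostream>

std::mutex m;
std::condition_variable cond;
bool ready = false;            // the condition we are waiting for

void waiter() {
    std::unique_lock<std::mutex> lock(m);
    // wait() atomically releases the mutex and parks this thread on the
    // condition variable's queue - no spinning while it sleeps.
    cond.wait(lock, [] { return ready; });
    // When wait() returns, the thread has been moved back to the run queue
    // and has re-acquired the mutex (the "extra credit" step above).
    std::cout << "woken up, holding the mutex again\n";
}

void notifier() {
    {
        std::lock_guard<std::mutex> lock(m);
        ready = true;          // make the condition true
    }
    cond.notify_one();         // the notify call: move one waiter to the run queue
}

int main() {
    std::thread a(waiter), b(notifier);
    a.join(); b.join();
}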
Yes, at the lowest, hardware level, instructions like compare-and-set and compare-and-swap are used; a loop around them spins until the operation succeeds, and only then performs the assignment. A short spin of this kind is needed each time we put a thread into a queue, be it the queue for a mutex, for a condition variable, or for a processor.
Then, isn't there still some spinning under the hood? Is the average amount of spinning somehow reduced?
That's a decision for the implementation to make. If spinning works best on the platform, then spinning can be used. But almost no spinning is required.
Typically, there's a lock somewhere at the lowest level of the implementation that protects system state. That lock is only held by any thread for a tiny split second as it manipulates that system state. Typically, you do need to spin while waiting for that inner lock.
A block on a mutex might look like this:
1. Atomically try to acquire the mutex.
2. If that succeeds, stop, you are done. (This is the "fast path".)
3. Acquire the inner lock that no thread holds for more than a few instructions.
4. Mark yourself as waiting for that mutex to be acquired.
5. Atomically release the inner lock and set your thread as not ready-to-run.
Notice that the only place where there is any spinning here is in step 3. That's not on the fast path. No spinning is needed after that: the call in step 5 does not return to this thread until the lock is conveyed to it by the thread that held it.
When a thread releases the lock, it checks the count of threads waiting for the lock. If that's greater than zero, then instead of releasing the lock it acquires the inner lock protecting system state, picks one of the threads recorded as waiting for the lock, conveys the lock to that thread, and tells the scheduler to run that thread. That thread then sees its step-5 call return, now holding the lock.
Again, the only waiting is on that inner lock that is used just to track what thread is waiting for what.
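A toy version of that fast-path/slow-path split can be sketched with C++20's std::atomic wait/notify, which maps onto the OS parking facility (e.g. a futex on Linux). Note this sketch simply re-releases the lock and wakes one waiter rather than conveying the lock directly, so it is only an approximation of the scheme described above:

#include <atomic>

class ToyMutex {
    std::atomic<bool> locked{false};
public:
    void lock() {
        // Steps 1-2: one atomic exchange; if it was unlocked, we now own it
        // (the fast path, no spinning).
        while (locked.exchange(true, std::memory_order_acquire)) {
            // Steps 3-5, handled by the runtime/OS: park this thread until
            // the value changes; it uses no CPU while it sleeps.
            locked.wait(true, std::memory_order_relaxed);
        }
    }
    void unlock() {
        locked.store(false, std::memory_order_release);
        locked.notify_one();   // wake one parked waiter, if any
    }
};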

Who blocks who in blocking synchronization?

From Programming Language Pragmatics, by Scott
synchronization can be implemented either by spinning (also called busy-waiting) or by blocking.
In busy-wait synchronization, a thread runs a loop in which it keeps reevaluating some condition until that condition becomes true (e.g., until a message queue becomes nonempty or a shared variable attains a particular value)—presumably as a result of action in some other thread, running on some other core.
In blocking synchronization (also called scheduler-based synchronization), the waiting thread voluntarily relinquishes its core to some other thread. Before doing so, it leaves a note in some data structure associated with the synchronization condition. A thread that makes the condition true at some point in the future will find the note and take action to make the blocked thread run again.
Why is this synchronization mechanism called "blocking"?
Who blocks who?
Thanks.
Why is [it] called "blocking"?
Think of yourself driving to Grandma's house for Thanksgiving dinner, when you come upon an accident scene: tow truck operators are hooking up a big truck that's lying on its side across the entire road. The traffic reporter on the radio says, "it's blocking both lanes." You might say that your way is blocked by the accident.
Who blocks who?
Like the text that you quoted says, it's voluntary. When you come upon the accident scene, you could just turn around and find a different way, but depending on how long the detour is, and on how soon it looks like they'll have the truck off to the side of the road, you might volunteer to sit and wait.
Software usually sits and waits. It's easier to write software that just waits, and it's easier to read and understand it. The kind of software that does not sit and wait is called a wait-free algorithm, and those can be very tricky to write.
Busy-waiting has a process/thread repeatedly attempt to enter the critical section in a loop, which causes unnecessary context switches and costs a lot of CPU time.
Blocking synchronization avoids this problem by having the process/thread block. It is put into a waiting queue and, instead of competing for CPU time, it simply waits idly. No CPU cycles are wasted by a blocked process/thread, so other processes/threads can continue without unnecessarily sharing cycles. Once the critical section has been released by some other process/thread, something will wake up the blocked process/thread.
So this is why the synchronization mechanism is called blocking: the thread is blocked and cannot get the CPU again until the lock is released by others.
Who blocks it? I would say the mechanism does. It puts the thread/process that did not get the lock into a queue to wait, and something like a monitor watches the lock; once the lock is released, it retrieves one thread from the blocked queue.

How are epoll(), mutex and semaphore-like system calls implemented behind the scenes?

This is a question that has confused me for a long time. I tried googling a lot but still don't quite understand. My question is this:
For system calls such as epoll(), and for mutexes and semaphores, they have one thing in common: as soon as something happens (taking a mutex as an example, a thread releases the lock), a thread gets woken up (the thread waiting for the lock can be woken up).
I'm wondering how this mechanism (an event happens in one thread, then another thread is notified about it) is actually implemented behind the scenes. I can only come up with 2 ways:
Hardware-level interrupt: for example, as soon as another thread releases the lock, an edge trigger will happen.
Busy waiting: busy waiting at a very low level. For example, as soon as another thread releases the lock, it changes a bit from 0 to 1 so that threads waiting for the lock can check this bit.
I'm not sure which of my guesses, if either, is correct. I guess reading the Linux source code could help here, but it's sort of hard for a noob like me. It would be great to get a general idea here plus some pseudo code.
The Linux kernel has a built-in object class called the "wait queue" (other OSes have similar mechanisms). Wait queues are created for all types of "waitable" resources, so there are quite a few of them around the kernel. When a thread detects that it must wait for a resource, it joins the relevant wait queue. The process goes roughly as follows:
The thread adds its control structure to the linked list associated with the desired wait queue.
The thread calls the scheduler, which marks the calling thread as sleeping, removes it from the "ready to run" list and stashes its context away from the CPU. The scheduler is then free to select any other thread context to load onto the CPU instead.
When the resource becomes available, another thread (be it a user/kernel thread or a task scheduled by an interrupt handler - those usually piggyback on special "work queue" threads) invokes a "wake up" call on the relevant wait queue. "Wake up" means that the scheduler removes one or more thread control structures from the wait queue's linked list and adds those threads to the "ready to run" list, which enables them to be scheduled in due course.
A bit more technical overview is here:
http://www.makelinux.net/ldd3/chp-6-sect-2
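A small, Linux-specific illustration of the same idea from user space: one thread blocks in epoll_wait() on an eventfd (joining that object's wait queue inside the kernel), and another thread makes the resource available by writing to the eventfd, which triggers the wake-up:

#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <iostream>
#include <thread>

int main() {
    int efd  = eventfd(0, 0);         // a kernel object with its own wait queue
    int epfd = epoll_create1(0);

    epoll_event ev{};
    ev.events  = EPOLLIN;
    ev.data.fd = efd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev);

    std::thread waiter([&] {
        epoll_event out{};
        // The thread sleeps on the kernel wait queue; it uses no CPU here.
        epoll_wait(epfd, &out, 1, -1);
        std::cout << "woken up: the eventfd became readable\n";
    });

    std::thread waker([&] {
        uint64_t one = 1;
        write(efd, &one, sizeof one); // the "wake up" call on that wait queue
    });

    waiter.join();
    waker.join();
    close(epfd);
    close(efd);
}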

If I "get back to the main thread" then what exactly happens, and how do interrupts work with threads?

Background: I was using Beej's guide and he mentioned forking and ensuring you "get the zombies". An Operating Systems book I grabbed explained how the OS creates "threads" (I always thought threads were a more fundamental piece), and by quoting it I mean the OS decides nearly everything. Basically, threads share all external resources, but they split the register and stack spaces (and I think a 3rd thing).
So I get to the waitpid function, which http://www.qnx.com's developer docs explain very well. In fact, I read the entire section on threads, minus all the types of conditions, after googling Processes and Threads.
The fact that I can split code up and put it back together doesn't confuse me. HOW I can do this is confusing.
In C and C++, your program is a Main() function, which goes forward, calls other functions, maybe loops forever (waiting for input or rendering), and then eventually quits or returns. In this model I see NO reason for it to stop beyond an "I'm waiting for something", in which case it just loops.
Well, it seems it can loop by setting certain things, like "I'm waiting for a semaphore" or "a response" or "an interrupt". Or maybe it gets interrupted without waiting for one. This is what confuses me.
The processor time-slices processes and threads. That's all fine and dandy, but how does it decide when to stop one? I understand that you get to the polling function and say "Hey, I'm waiting for input, a clock tick, or the user to do something". Somehow it tells this to the OS? I'm not sure. But more so:
It seems to be able to completely randomly interrupt or interject, even on a single-threaded application. So you're running one thread and suddenly waitpid() says "Hey, I finished a process, let me interrupt this, we both hate zombies, I gotta do this." and you're still looping on some calculation. So, what just happens??? I have no idea, somehow they both run and your computation isn't messed with, 'cause it's single threaded, but that somehow doesn't mean that it won't stop what it's doing to run waitpid() inside the same thread WHILE you're still doing your other app things.
Also confusing is how you can be notified, like iOS's notifications, and say "Hey, I got some UI changes, get me off of 16 and put me back on 1 so I can change this thing". But same question as the last paragraph: how does it interrupt a thread that's running?
I think I understand the splitting, but this joining is utterly confusing. It's like the textbooks have this "rabbit from hat" step I'm supposed to accept. Other SO posts told me they don't share the same stack, but that didn't help, now I'm imagining a slinky (stack) leaning over to another slinky, but unsure how it recombines to change the data.
Thanks for any help, I apologize that this is long, but I know someone's going to misinterpret this and give me the "they are different stacks" answer if I'm too concise here.
Thanks,
OK, I'll have a go, though it's gonna be 'economical with the truth':)
It's sorta like this:
The OS kernel scheduler/dispatcher is a state-machine for managing threads. A thread comprises a stack, (allocated at the time of thread creation), and a Thread Control Block, (TCB), struct in the kernel that holds thread state and can store thread context, (including user registers, especially the stack-pointer). A thread must have code to run, but the code is not dedicated to the thread - many threads can run the same code. Threads have states, e.g. blocked on I/O, blocked on an inter-thread signal, sleeping for a timer period, ready, running on a core.
Threads belong to processes - a process must have at least one thread to run its code and has one created for it by the OS loader when the process starts up. The 'main thread' may then create others that will also belong to that process.
The state-machine inputs are software interrupts - system calls from those threads that are already running on cores, and hardware interrupts from peripheral devices/controllers, (disk, network, mouse, KB etc), that use processor hardware features to stop the processor/s running instructions from the threads and 'immediately' run driver code instead.
The output of the state-machine is a set of threads running on cores. If there are fewer ready threads than cores, the OS will halt the unused cores. If there are more ready threads than cores, (ie. the machine is overloaded), the 'scheduling algorithm' that decides which threads to run takes into account several factors - thread and process priority, priority boosts for threads that have just become ready on I/O completion or an inter-thread signal, foreground-process boosts and others.
The OS has the ability to stop any running thread on any core. It has an interprocessor hardware-interrupt channel and drivers that can force any thread to enter the OS and be blocked/stopped, (maybe because another thread has just become ready and the OS scheduling algorithm has decided that a running thread must be immediately preempted).
The software interrupts from running threads can change the set of running threads by requesting I/O, or by signaling other threads, (the events, mutexes, condition-variables and semaphores). The hardware interrupts from peripheral devices can change the set of running threads by signaling I/O completion.
When the OS gets these inputs, it uses that input, and internal state in containers of Thread Control Block and Process Control Block structs, to decide which set of ready threads to run next. It can block a thread from running by saving its context, (including registers, especially stack pointer), in its TCB and not returning from the interrupt. It can run a thread that was blocked by restoring its context from its TCB to a core and performing an interrupt-return, so allowing the thread to resume from where it left off.
The gain is that no thread that is waiting for I/O gets to run at all and so does not use any CPU and, when I/O becomes available, a waiting thread is made ready 'immediately' and, if there is a core available, running.
This combination of OS state data, and hardware/software interrupts, efficiently matches up threads that can make forward progress with cores available to run them, and no CPU is wasted on polling I/O or inter-thread comms flags.
All this complexity, both in the OS and for the developer who has to design multithreaded apps and so put up with locks, synchronization, mutexes etc, has just one vital goal - high performance I/O. Without it, you can forget video streaming, BitTorrent and browsers - they would all be too piss-slow to be useable.
Statements and phrases like 'CPU quantum', 'give up the remainder of their time-slice' and 'round-robin' make me want to throw up.
It's a state-machine. Hardware and software interrupts go in, a set of running threads comes out. The hardware timer interrupt, (the one that can time-out system calls, allow threads to sleep and share out CPU on a box that is overloaded), though valuable, is just one of many.
So I'm on thread 16, and I need to get to thread 1 to modify UI. I randomly stop it anywhere, "move the stack over to thread 1" then "take its context and modify it"?
No, time for 'economical with truth' #2...
Thread 1 is running the GUI. To do this, it needs input from the mouse and keyboard. The classic way for this to happen is that thread 1 waits, blocked, on a GUI input queue - a thread-safe producer-consumer queue - for KB/mouse messages. It's using no CPU - the cores are off running services and BitTorrent downloads. You hit a key on the keyboard, and the keyboard-controller hardware raises an interrupt line on the interrupt controller, causing a core to jump to the keyboard driver code as soon as it has finished its current instruction. The driver reads the KB controller, assembles a KeyPressed message and pushes it onto the input queue of the GUI thread with focus - your thread 1. The driver exits by calling the scheduler interrupt entry point so that a scheduling run can be performed and your GUI thread is assigned a core and run on it. To thread 1, all it has done is make a blocking 'pop' call on a queue and, eventually, it returns with a message to process.
So, thread 1 is performing:
void* HandleGui(void*){
    while(true){
        GUImessage message = thread1InputQueue.pop();  // blocks until a message arrives
        switch(message.type){
            // .. lots of case statements to handle all the possible GUI messages
            // ..
        }
    }
}
If thread 16 wants to interact with the GUI, it cannot do it directly. All it can do is to queue a message to thread 1, in a similar way to the KB/mouse drivers, to instruct it to do stuff.
This may seem a bit restrictive, but the message from thread 16 can contain more than POD. It could have a 'RunMyCode' message type and contain a function pointer to code that thread 16 wants to be run in the context of thread 1. When thread 1 gets around to handling the message, its 'RunMyCode' case statement calls the function pointer in the message. Note that this 'simple' mechanism is asynchronous - thread 16 has issued the message and runs on - it has no idea when thread 1 will get around to running the function it passed. This can be a problem if the function accesses any data in thread 16 - thread 16 may also be accessing it. If this is an issue (and it may not be - all the data required by the function may be in the message, which can be passed into the function as a parameter when thread 1 calls it), it is possible to make the function call synchronous by making thread 16 wait until thread 1 has run the function. One way would be for the function to signal an OS synchronization object as its last line - an object upon which thread 16 will wait immediately after queueing its 'RunMyCode' message:
void* runOnGUI(GUImessage message){
    // do stuff with GUI controls
    message.notifyCompletion->signal();   // tell thread 16 to run again
}

void* thread16run(){
    // ..
    GUImessage message;
    OSkernelWaitObject* waitEvent;        // some OS synchronization object (creation omitted)
    message.type = RunMyCode;
    message.function = runOnGUI;
    message.notifyCompletion = waitEvent;
    thread1InputQueue.push(message);      // ask thread 1 to run my function
    waitEvent->wait();                    // wait, blocked, until the function is done
    // ..
}
So, getting a function to run in the context of another thread requires cooperation. Threads cannot call other threads - only signal them, usually via the OS. Any thread that is expected to run such 'externally signaled' code must have an accessible entry point where the function can be placed and must execute code to retrieve the function address and call it.

Semaphore vs. Monitors - what's the difference?

What are the major differences between a Monitor and a Semaphore?
A Monitor is an object designed to be accessed from multiple threads. The member functions or methods of a monitor object will enforce mutual exclusion, so only one thread may be performing any action on the object at a given time. If one thread is currently executing a member function of the object then any other thread that tries to call a member function of that object will have to wait until the first has finished.
A Semaphore is a lower-level object. You might well use a semaphore to implement a monitor. A semaphore essentially is just a counter. When the counter is positive, if a thread tries to acquire the semaphore then it is allowed, and the counter is decremented. When a thread is done then it releases the semaphore, and increments the counter.
If the counter is already zero when a thread tries to acquire the semaphore then it has to wait until another thread releases the semaphore. If multiple threads are waiting when a thread releases a semaphore then one of them gets it. The thread that releases a semaphore need not be the same thread that acquired it.
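To make "a semaphore is essentially just a counter" concrete, here is a small counting semaphore built from a mutex and a condition variable (a sketch, not any particular library's implementation):

#include <condition_variable>
#include <mutex>

class CountingSemaphore {
    std::mutex m;
    std::condition_variable cv;
    int count;
public:
    explicit CountingSemaphore(int initial) : count(initial) {}

    void acquire() {                    // P / wait
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return count > 0; });  // block while the counter is zero
        --count;                        // counter was positive: take one and continue
    }

    void release() {                    // V / signal - any thread may call this
        {
            std::lock_guard<std::mutex> lock(m);
            ++count;
        }
        cv.notify_one();                // wake one waiting thread, if any
    }
};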
A monitor is like a public toilet. Only one person can enter at a time. They lock the door to prevent anyone else coming in, do their stuff, and then unlock it when they leave.
A semaphore is like a bike hire place. They have a certain number of bikes. If you try and hire a bike and they have one free then you can take it, otherwise you must wait. When someone returns their bike then someone else can take it. If you have a bike then you can give it to someone else to return --- the bike hire place doesn't care who returns it, as long as they get their bike back.
The following explanation describes how the wait() and signal() operations of a monitor differ from the P and V operations of a semaphore.
The wait() and signal() operations on condition variables in a monitor are similar to P and V operations on counting semaphores.
A wait statement can block a process's execution, while a signal statement can cause another process to be unblocked. However, there are some differences between them. When a process executes a P operation, it does not necessarily block, because the counting semaphore may be greater than zero. In contrast, when a wait statement is executed, it always blocks the process. When a task executes a V operation on a semaphore, it either unblocks a task waiting on that semaphore or, if there is no task to unblock, increments the semaphore counter. On the other hand, if a process executes a signal statement when there is no other process to unblock, there is no effect on the condition variable. Another difference is that tasks awakened by a V operation can resume execution without delay, whereas tasks awakened by a signal operation are restarted only when the monitor is unlocked. In addition, a monitor solution is more structured than one with semaphores, because the data and procedures are encapsulated in a single module and the mutual exclusion is provided automatically by the implementation.
Link: here for further reading. Hope it helps.
Semaphore allows multiple threads (up to a set number) to access a shared object. Monitors allow mutually exclusive access to a shared object.
One-line answer:
Monitor: ensures that only ONE thread at a time can execute inside the monitor (a thread needs to acquire the lock to execute).
Semaphore: a lock that protects a shared resource (a thread needs to acquire the lock to access the resource).
A semaphore is a signaling mechanism used to coordinate between threads. Example: One thread is downloading files from the internet and another thread is analyzing the files. This is a classic producer/consumer scenario. The producer calls signal() on the semaphore when a file is downloaded. The consumer calls wait() on the same semaphore in order to be blocked until the signal indicates a file is ready. If the semaphore is already signaled when the consumer calls wait, the call does not block. Multiple threads can wait on a semaphore, but each signal will only unblock a single thread.
A counting semaphore keeps track of the number of signals. E.g. if the producer signals three times in a row, wait() can be called three times without blocking. A binary semaphore does not count but just has the "waiting" and "signalled" states.
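The downloader/analyzer example might look roughly like this with C++20's std::counting_semaphore (the download/analyze steps are placeholders):

#include <semaphore>
#include <thread>

std::counting_semaphore<> filesReady{0};  // counts downloaded-but-unprocessed files

void producer() {                 // downloads files from the internet
    for (int i = 0; i < 3; ++i) {
        // ... download file i ...
        filesReady.release();     // signal(): one more file is ready
    }
}

void consumer() {                 // analyzes the files
    for (int i = 0; i < 3; ++i) {
        filesReady.acquire();     // wait(): blocks only if no file is ready yet
        // ... analyze file i ...
    }
}

int main() {
    std::thread p(producer), c(consumer);
    p.join(); c.join();
}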
A mutex (mutual exclusion lock) is a lock which is owned by a single thread. Only the thread which has acquired the lock can release it again. Other threads which try to acquire the lock will be blocked until the current owner thread releases it. A mutex lock does not in itself lock anything - it is really just a flag. But code can check for ownership of a mutex lock to ensure that only one thread at a time can access some object or resource.
A monitor is a higher-level construct which uses an underlying mutex lock to ensure thread-safe access to some object. Unfortunately the word "monitor" has a few different meanings depending on context and platform, but in Java, for example, a monitor is a mutex lock which is implicitly associated with an object and which can be invoked with the synchronized keyword. The synchronized keyword can be applied to a class, method or block and ensures that only one thread can execute the code at a time.
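C++ has no synchronized keyword, but the same monitor idea is often written by hand: a class that owns its data together with a mutex, and whose public member functions always take that lock. A minimal sketch (the class and member names are made up):

#include <cstddef>
#include <mutex>
#include <vector>

// Monitor-style wrapper: every public operation locks the internal mutex,
// so only one thread at a time can act on the encapsulated data.
class MonitoredLog {
    std::mutex m;
    std::vector<int> entries;
public:
    void add(int value) {
        std::lock_guard<std::mutex> lock(m);  // "enter the monitor"
        entries.push_back(value);
    }                                         // lock released on return ("exit")

    std::size_t size() {
        std::lock_guard<std::mutex> lock(m);
        return entries.size();
    }
};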
Semaphore:
Using a counter or flag to control access to some shared resource in a concurrent system implies use of a semaphore.
Example:
A counter to allow only 50 Passengers to acquire the 50 seats (Shared resource) of any Theatre/Bus/Train/Fun ride/Classroom. And to allow a new Passenger only if someone vacates a seat.
A binary flag indicating the free/occupied status of any Bathroom.
Traffic lights are good example of flags. They control flow by regulating passage of vehicles on Roads (Shared resource)
Flags only reveal the current state of the resource; they carry no count or other information about the objects waiting for or running on the resource.
Monitor:
A Monitor synchronizes access to an Object by communicating with threads interested in the object, asking them to acquire access or wait for some condition to become true.
Example:
A father may act as a monitor for his daughter, allowing her to date only one guy at a time.
A school teacher uses a baton to allow only one child to speak in the class.
Lastly, a technical one: transactions (via threads) on an Account object are synchronized to maintain integrity.
When a semaphore is used to guard a critical region, there is no direct relationship between the semaphore and the data being protected. This is part of the reason why semaphores may be dispersed around the code, and why it is easy to forget to call wait or notify, in which case the result will be, respectively, to violate mutual exclusion or to lock the resource permanently.
In contrast, neither of these bad things can happen with a monitor. A monitor is tied directly to the data (it encapsulates the data) and, because the monitor operations are atomic actions, it is impossible to write code that can access the data without calling the entry protocol. The exit protocol is called automatically when the monitor operation is completed.
A monitor has a built-in mechanism for condition synchronisation, in the form of a condition variable, for the case where a process needs some condition to hold before proceeding. If the condition is not satisfied, the process has to wait until it is notified of a change in the condition. While a process is waiting for condition synchronisation, the monitor implementation takes care of the mutual exclusion issue and allows another process to gain access to the monitor.
Taken from The Open University M362 Unit 3 "Interacting process" course material.
