Reader-Writer with Semaphores and weighted priority

I'm wondering whether there is a simple solution to the reader-writer problem where we can give the readers priority, but not absolute priority, using semaphores. Basically, I want to achieve a ratio of 5 readers to 1 writer in the queue for the critical section.
In other words: if a reader and a writer both wanted to enter the critical section, the reader would have priority. If another reader then wanted to enter, it would also have priority. This would continue for five readers, after which the writer would have priority.
For instance, imagine 10 readers and 2 writers want to enter the critical section. The order of entry would be:
5 readers
1 writer
5 readers
1 writer
I'd appreciate any help on this matter.

I was just involved in a design session two days ago where we intended to solve the very same problem. Here is what we came up with.
class WeightedSemaphore {
    Map<String, Semaphore> layer1;
    Semaphore layer2;

    // Ratios passed in; for your example: read -> 5, write -> 1
    public void configure(int realLeases, Map<String, Integer> ratios) {
        for (String key : ratios.keySet()) {
            layer1.put(key, new Semaphore(realLeases * ratios.get(key)));
        }
        layer2 = new Semaphore(realLeases, true); // fair
    }

    public void acquire(String key) {
        layer1.get(key).acquire();
        layer2.acquire();
    }
}
This is minimal pseudocode, but it gets the main point across. The main idea was to design a two-layer semaphore, where layer2 is a single semaphore with the real number of leases we want to support, and layer1 serves to shape the mix of work that is ever waiting on the layer2 semaphore.
This approach has two properties that I think are pretty cool:
Though it favors keys with a higher ratio, it doesn't exhibit the starvation behavior you see in many priority-queue-type systems.
It doesn't waste capacity when there is no high-priority work to do. Any key can use the full capacity of the semaphore, but will quickly step aside when new work arrives.
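To make the intended use concrete, here is a hedged usage sketch for the class above. The release(key) call is an assumption on my part (the pseudocode doesn't show one), on the basis that every acquired lease would eventually have to be returned to both layers:

// Hypothetical usage of the sketch above; release(key) is assumed to hand the
// lease back to layer2 and then to the per-key layer1 semaphore.
WeightedSemaphore ws = new WeightedSemaphore();
ws.configure(1, Map.of("read", 5, "write", 1)); // one real lease; waiting mix shaped 5:1

// In each reader thread:
ws.acquire("read");
try {
    // read the shared state
} finally {
    ws.release("read"); // assumed helper, not part of the original sketch
}

// In each writer thread:
ws.acquire("write");
try {
    // write the shared state
} finally {
    ws.release("write"); // assumed helper, not part of the original sketch
}

With a single real lease and a 5:1 split in layer1, at most five readers and one writer can be queued on the fair layer2 semaphore at any moment, which is what shapes the waiting mix toward the ratio from the question.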
Hope this helps.

Related

How to control simultaneous access to multiple shared queues by multiple producers?

One way would be to lock, check the status of the first shared queue, push data if space is available (or skip it if not), and then unlock.
Then check the status of the second shared queue, push data if space is available (or skip it if not), and then unlock.
And so on and so forth.
Here we'll be constantly locking and unlocking just to see the status of a shared queue and then act accordingly.
Questions:
What are the drawbacks of this method? Of course, time will be spent locking and unlocking. Is that it?
What are the other ways to achieve the same effect without the current method's drawbacks?
Lock contention is very expensive because it requires a context switch - see the LMAX Disruptor for a more in-depth explanation, in particular the performance results page; the Disruptor is a lock-free bounded queue that exhibits less latency than a bounded queue that uses locks.
One way to reduce lock contention is to have your producers check the queues in a different order from each other. For example, instead of each producer checking Queue1, then Queue2, ..., and finally QueueN, each producer would repeatedly generate a random number in [1, N] and check Queue[Rand(N)]. A more complex solution is to maintain a set of queues sorted by their available space (e.g. in Java this would be a ConcurrentSkipListSet), then have each producer remove the queue from the head of the set (i.e. the queue with the most available space that is not being simultaneously accessed by another producer), add an element, and insert the queue back into the set. A simpler solution in the same vein is to maintain an unbounded, unsorted queue of queues: a producer removes the queue at the head of the queue of queues, checks it, and then inserts it back at the tail, which ensures that only one producer is able to check a given queue at any point in time.
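A hedged Java sketch of the simpler queue-of-queues variant described above, assuming bounded ArrayBlockingQueue instances; the class and method names are illustrative, not from any library:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;

// "Queue of queues": a producer takes temporary, exclusive ownership of one
// shared queue at a time, so two producers never contend for the same queue.
class QueueOfQueues<T> {
    private final ConcurrentLinkedQueue<BlockingQueue<T>> queues = new ConcurrentLinkedQueue<>();

    QueueOfQueues(int queueCount, int capacity) {
        for (int i = 0; i < queueCount; i++) {
            queues.add(new ArrayBlockingQueue<>(capacity));
        }
    }

    // Claim a queue from the head, try to push, then hand the queue back to the tail.
    boolean offer(T element) {
        BlockingQueue<T> q = queues.poll(); // null only if every queue is currently claimed
        if (q == null) {
            return false;
        }
        try {
            return q.offer(element); // non-blocking: skip the element if this queue is full
        } finally {
            queues.add(q); // return the queue so other producers can claim it
        }
    }
}

Because a producer removes a queue before touching it and re-inserts it afterwards, only one producer ever operates on a given underlying queue at a time.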
Another solution is to reduce and ideally eliminate the number of locks - it's difficult to write lock-free algorithms but it's also potentially very rewarding as demonstrated by the performance of LMAX's lock-free queue. In lieu of replacing your locked bounded queues with LMAX's lock-free bounded queues, another solution is to replace your locked bounded queues with lock-free unbounded queues (e.g. Java's ConcurrentLinkedQueue; lock-free unbounded queues are much more likely to be in your language's standard library than lock-free bounded queues) and to place conservative lock-free guards on these queues. For example, using Java's AtomicInteger for the guards:
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedQueue<T> {
    private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<>();
    private final AtomicInteger bound = new AtomicInteger(0); // approximate element count
    private final int maxSize;

    public BoundedQueue(int maxSize) {
        this.maxSize = maxSize;
    }

    public T poll() {
        T retVal = queue.poll();
        if (retVal != null) {
            bound.decrementAndGet(); // one fewer element in the queue
        }
        return retVal;
    }

    public boolean offer(T t) {
        if (t == null) throw new NullPointerException();
        int boundSize = bound.get();
        // Try to reserve a slot with CAS; give up after three failed attempts
        // or when the queue appears to be full.
        for (int retryCount = 0; retryCount < 3 && boundSize < maxSize; retryCount++) {
            if (bound.compareAndSet(boundSize, boundSize + 1)) {
                return queue.offer(t);
            }
            boundSize = bound.get();
        }
        return false;
    }
}
poll() returns the element from the head of the queue, decrementing bound if the head element isn't null, i.e. if the queue isn't empty. offer(T t) attempts to increment bound without exceeding maxSize; if this succeeds, it puts the element at the tail of the queue, and if it fails three times the method returns false. This is a conservative guard because it is possible for offer to fail even if the queue isn't full, e.g. if an element is removed after boundSize = bound.get() sets boundSize to maxSize, or if bound.compareAndSet(expected, newVal) happens to fail three times due to multiple consumers calling poll().
Really, you are doing too much locking and unlocking here. The solution is to make the same check twice:
check if space is available, if not, continue
lock
check if space is available AGAIN
... go on as you did before.
This way you will only take the lock unnecessarily in very rare cases.
I first saw this solution in the book "Professional Java EE Design Patterns" (Yener, Theedom).
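A rough Java sketch of that double-check pattern, assuming a producer that skips a full queue rather than blocking; the class and field names are illustrative:

import java.util.ArrayDeque;
import java.util.Deque;

// Double-checked capacity test: the cheap, unlocked check filters out queues
// that are probably full; the second check under the lock makes the decision safely.
class DoubleCheckedBoundedQueue {
    private final Deque<Integer> queue = new ArrayDeque<>();
    private final int capacity;
    private volatile int approximateSize = 0; // readable without taking the lock

    DoubleCheckedBoundedQueue(int capacity) {
        this.capacity = capacity;
    }

    boolean tryPush(int item) {
        if (approximateSize >= capacity) {   // unlocked, possibly stale check
            return false;                    // probably full: skip without locking
        }
        synchronized (this) {
            if (queue.size() >= capacity) {  // authoritative re-check under the lock
                return false;
            }
            queue.addLast(item);
            approximateSize = queue.size();
            return true;
        }
    }

    Integer tryPop() {
        synchronized (this) {
            Integer head = queue.pollFirst();
            approximateSize = queue.size();
            return head;
        }
    }
}

The unlocked check can be stale, which is fine: the lock is only taken pointlessly in the rare case where the queue fills up between the two checks.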
Edit.
About spreading the start queue numbers among threads.
Notice that without any special organization, the threads wait for queues only the first time around. The next time, the necessary time shift has already been created simply by waiting. Of course, we can also create the time shift ourselves by spreading the starting queue numbers among the threads, and a simple spread by equal shifts will be more effective than a random one.

Effects of swapping buffers on concurrent access

Consider an application with two threads, Producer and Consumer.
Both threads are running approximately equally frequent, multiple times in a second.
Both threads access the same memory region, where Producer writes to the memory, and Consumer reads the current chunk of data and does something with it, without invalidating the data.
A classical approach is this one:
int[] sharedData;
//Called frequently by thread Producer
void WriteValues(int[] data)
{
lock(sharedData)
{
Array.Copy(data, sharedData, LENGTH);
}
}
//Called frequently by thread Consumer
void WriteValues()
{
int[] data = new int[LENGTH];
lock(sharedData)
{
Array.Copy(sharedData, data, LENGTH);
}
DoSomething(data);
}
If we assume that the Array.Copy takes time, this code will run slowly, since the Producer always has to wait for the Consumer during copying, and vice versa.
An approach to this problem would be to create two buffers, one which is accessed by the Consumer and one which is written to by the Producer, and to swap the buffers as soon as writing has finished.
int[] frontBuffer;
int[] backBuffer;
//Called frequently by thread Producer
void WriteValues(int[] data)
{
lock(backBuffer)
{
Array.Copy(data, backBuffer, LENGTH);
int[] temp = frontBuffer;
frontBuffer = backBuffer;
backBuffer = temp;
}
}
//Called frequently by thread Consumer
void WriteValues()
{
int[] data = new int[LENGTH];
int[] currentFrontBuffer = frontBuffer;
lock(currentFrontBuffer)
{
Array.Copy(currentFrontBuffer, data, LENGTH);
}
DoSomething(currentFrontBuffer);
}
Now, my questions:
Is locking, as shown in the 2nd example, safe? Or does the change of references introduce problems?
Will the code in the 2nd example execute faster than the code in the 1st example?
Are there any better methods to efficiently solve the problem described above?
Could there be a way to solve this problem without locks? (Even if I think it is impossible)
Note: this is no classical producer/consumer problem: It is possible for Consumer to read the values multiple times before Producer writes it again - the old data stays valid until Producer writes new data.
Is locking, as shown in the 2nd example, safe? Or does the change of references introduce problems?
As far as I can tell, because reference assignment is atomic, this may be safe, but it is not ideal. Because the consumer's WriteValues() method reads from frontBuffer without a lock or memory barrier forcing a cache refresh, there is no guarantee that the variable will ever be updated with new values from main memory. There is then a potential to continuously read the stale, cached values of that instance from the local register or CPU cache. I'm unsure whether the compiler/JIT might infer a cache refresh anyway based on the local variable; maybe somebody with more specific knowledge can speak to this area.
Even if the values aren't stale, you may also run into more contention than you would like. For example...
Thread A calls WriteValues()
Thread A takes a lock on the instance in frontBuffer and starts copying.
Thread B calls WriteValues(int[])
Thread B writes its data, moves the currently locked frontBuffer instance into backBuffer.
Thread B calls WriteValues(int[])
Thread B waits on the lock for backBuffer because Thread A still has it.
Will the code in the 2nd example execute faster than the code in the 1st example?
I suggest that you profile it and find out. X being faster than Y only matters if Y is too slow for your particular needs, and you are the only one who knows what those are.
Are there any better methods to efficiently solve the problem described above?
Yes. If you are using .Net 4 and above, there is a BlockingCollection type in System.Collections.Concurrent that models the Producer/Consumer pattern well. If you consistently read more than you write, or have multiple readers to very few writers, you may also want to consider the ReaderWriterLockSlim class. As a general rule of thumb, you should do as little within a lock as you can, which will also help to alleviate your time issue.
Could there be a way to solve this problem without locks? (Even if I think it is impossible)
You might be able to, but I wouldn't suggest trying that unless you are extremely familiar with multi-threading, cache coherency, and potential compiler/JIT optimizations. Locking will most likely be fine for your situation and it will be much easier for you (and others reading your code) to reason about and maintain.
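Purely as an illustration of that last point (and not a recommendation against the advice above), one common lock-free shape for this problem is to publish an immutable snapshot through an atomic reference, so the consumer always sees a complete buffer and neither side blocks. Here is a hedged sketch in Java; the same idea can be expressed in .NET with a volatile field or Interlocked.Exchange:

import java.util.concurrent.atomic.AtomicReference;

// Illustration only: the producer publishes a defensive copy of the data, and
// the consumer reads whatever snapshot is currently published.
class SnapshotExchange {
    private final AtomicReference<int[]> current = new AtomicReference<>(new int[0]);

    // Producer: copy the new values into a private array, then publish it atomically.
    void write(int[] data) {
        current.set(data.clone());
    }

    // Consumer: grab the latest snapshot; it is never mutated after publication.
    int[] read() {
        return current.get();
    }
}

Note that this trades the buffer reuse of the two-buffer design for a fresh allocation on every write, which may or may not be acceptable.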

How does a read-write mutex/lock work?

Let's say I'm programming in a threading framework that does not have multiple-reader/single-writer mutexes. Can I implement their functionality with the following:
Create two mutexes: a recursive (lock counting) one for readers and a binary one for the writer.
Write:
acquire lock on binary mutex
wait until recursive mutex has lock count zero
actual write
release lock on binary mutex
Read:
acquire lock on binary mutex (so I know the writer is not active)
increment count of recursive mutex
release lock on binary mutex
actual read
decrement count of recursive mutex
This is not homework. I have no formal training in concurrent programming, and am trying to grasp the issues. If someone can point out a flaw, spell out the invariants or provide a better algorithm, I'd be very pleased. A good reference, either online or on dead trees, would also be appreciated.
The following is taken directly from The Art of Multiprocessor Programming, which is a good book for learning about this stuff. There are actually two implementations presented: a simple version and a fair version. I'll go ahead and reproduce the fair version.
One of the requirements for this implementation is that you have a condition variable primitive. I'll try to figure out a way to remove it but that might take me a little while. Until then, this should still be better than nothing. Note that it's also possible to implement this primitive using only locks.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class FifoReadWriteLock {
    int readAcquires = 0, readReleases = 0;
    boolean writer = false;
    ReentrantLock lock = new ReentrantLock();
    Condition condition = lock.newCondition(); // This is the condition variable.

    void readLock() throws InterruptedException {
        lock.lock();
        try {
            while (writer)
                condition.await();
            readAcquires++;
        } finally {
            lock.unlock();
        }
    }

    void readUnlock() {
        lock.lock();
        try {
            readReleases++;
            if (readAcquires == readReleases)
                condition.signalAll();
        } finally {
            lock.unlock();
        }
    }

    void writeLock() throws InterruptedException {
        lock.lock();
        try {
            while (writer)
                condition.await();
            writer = true;
            while (readAcquires != readReleases)
                condition.await();
        } finally {
            lock.unlock();
        }
    }

    void writeUnlock() {
        lock.lock(); // the lock must be held to signal the condition
        try {
            writer = false;
            condition.signalAll();
        } finally {
            lock.unlock();
        }
    }
}
First off, I simplified the code a little but the algorithm remains the same. There also happens to be an error in the book for this algorithm which is corrected in the errata. If you plan on reading the book, keep the errata close by or you'll end up being very confused (like me a few minutes ago when I was trying to re-understand the algorithm). Note that on the bright side, this is a good thing since it keeps you on your toes and that's a requirement when you're dealing with concurrency.
Next, while this may be a Java implementation, only use it as pseudo code. When doing the actual implementation you'll have to be careful about the memory model of the language or you'll definitely end up with a headache. As an example, I think the readAcquires, readReleases, and writer variables all have to be declared volatile in Java or the compiler is free to optimize them out of the loops. This is because in a strictly sequential program there's no point in continuously looping on a variable that is never changed inside the loop. Note that my Java is a little rusty, so I might be wrong. There's also another issue with integer overflow of the readReleases and readAcquires variables, which is ignored in the algorithm.
One last note before I explain the algorithm. The condition variable is initialized using the lock. That means that when a thread calls condition.await(), it gives up its ownership of the lock. Once it's woken up by a call to condition.signalAll() the thread will resume once it has reacquired the lock.
Finally, here's how and why it works. The readReleases and readAcquires variables keep track of the number of threads that have acquired and released the read lock. When these are equal, no thread has the read lock. The writer variable indicates that a thread is trying to acquire the write lock or already has it.
The read lock part of the algorithm is fairly simple. When trying to lock, it first checks to see if a writer is holding the lock or is trying to acquire it. If so, it waits until the writer is done and then claims the lock for the readers by incrementing the readAcquires variable. When unlocking, a thread increases the readReleases variable and if there's no more readers, it notifies any writers that may be waiting.
The write lock part of the algorithm isn't much more complicated. To lock, a thread must first check whether any other writer is active. If they are, it has to wait until the other writer is done. It then indicates that it wants the lock by setting writer to true (note that it doesn't hold it yet). It then waits until there's no more readers before continuing. To unlock, it simply sets the variable writer to false and notifies any other threads that might be waiting.
This algorithm is fair because the readers can't block a writer indefinitely. Once a writer indicates that it wants to acquire the lock, no more readers can acquire the lock. After that the writer simply waits for the last remaining readers to finish up before continuing. Note that there's still the possibility of a writer indefinitely blocking another writer. That's a fairly rare case but the algorithm could be improved to take that into account.
So I re-read your question and realised that I partly (badly) answered it with the algorithm presented above. So here's my second attempt.
The algorithm you described is fairly similar to the simple version presented in the book I mentioned. The only problems are that (A) it's not fair and (B) I'm not sure how you would implement "wait until recursive mutex has lock count zero". For (A), see above; for (B), the book uses a single int to keep track of the readers and a condition variable to do the signalling.
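For reference, a hedged Java sketch along the lines of that simple version (a single reader count and one condition variable). It is not fair: a steady stream of readers can keep a writer waiting indefinitely.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Simple (writer-unfair) reader-writer lock in the spirit of the book's simple version.
class SimpleReadWriteLock {
    private int readers = 0;
    private boolean writer = false;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition condition = lock.newCondition();

    void readLock() throws InterruptedException {
        lock.lock();
        try {
            while (writer)            // wait while a writer holds the lock
                condition.await();
            readers++;
        } finally {
            lock.unlock();
        }
    }

    void readUnlock() {
        lock.lock();
        try {
            readers--;
            if (readers == 0)         // last reader out wakes any waiting writer
                condition.signalAll();
        } finally {
            lock.unlock();
        }
    }

    void writeLock() throws InterruptedException {
        lock.lock();
        try {
            while (readers > 0 || writer)
                condition.await();
            writer = true;
        } finally {
            lock.unlock();
        }
    }

    void writeUnlock() {
        lock.lock();
        try {
            writer = false;
            condition.signalAll();
        } finally {
            lock.unlock();
        }
    }
}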
You may want to prevent writer starvation; to accomplish this you can either give preference to writers or make the mutex fair.
Java's ReadWriteLock interface documentation says writer preference is common, while the ReentrantReadWriteLock class documentation says:
This class does not impose a reader or writer preference ordering for lock access. However, it does support an optional fairness policy.
Note R..'s comment:
Rather than locking and unlocking the binary mutex for reading, you can just check the binary mutex state after incrementing the count on the recursive mutex, and wait (spin/yield/futex_wait/whatever) if it's locked until it becomes unlocked.
Recommended reading:
Programming with POSIX Threads
Perl's RWLock
Java's ReadWriteLock documentation.

Interview Question on .NET Threading

Could you describe two methods of synchronizing multi-threaded write access performed
on a class member?
Could anyone please help me understand what this question is meant to ask and what the right answer would be?
When you change data in C#, something that looks like a single operation may be compiled into several instructions. Take the following class:
public class Number {
private int a = 0;
public void Add(int b) {
a += b;
}
}
When you build it, you get the following IL code:
IL_0000: nop
IL_0001: ldarg.0
IL_0002: dup
// Pushes the value of the private variable 'a' onto the stack
IL_0003: ldfld int32 Simple.Number::a
// Pushes the value of the argument 'b' onto the stack
IL_0008: ldarg.1
// Adds the top two values of the stack together
IL_0009: add
// Sets 'a' to the value on top of the stack
IL_000a: stfld int32 Simple.Number::a
IL_000f: ret
Now, say you have a Number object and two threads call its Add method like this:
number.Add(2); // Thread 1
number.Add(3); // Thread 2
If you want the result to be 5 (0 + 2 + 3), there's a problem. You don't know when these threads will execute their instructions. Both threads could execute IL_0003 (pushing zero onto the stack) before either executes IL_000a (actually changing the member variable) and you get this:
a = 0 + 2; // Thread 1
a = 0 + 3; // Thread 2
The last thread to finish 'wins' and at the end of the process, a is 2 or 3 instead of 5.
So you have to make sure that one complete set of instructions finishes before the other set. To do that, you can:
1) Lock access to the class member while it's being written, using one of the many .NET synchronization primitives (like lock, Mutex, ReaderWriterLockSlim, etc.) so that only one thread can work on it at a time.
2) Push write operations into a queue and process that queue with a single thread. As Thorarin points out, you still have to synchronize access to the queue if it isn't thread-safe, but it's worth it for complex write operations.
There are other techniques. Some (like Interlocked) are limited to particular data types, and there are even more (like the ones discussed in Non-blocking synchronization and Part 4 of Joseph Albahari's Threading in C#), though they are more complex: approach them with caution.
In multithreaded applications, there are many situations where simultaneous access to the same data can cause problems. In such cases synchronization is required to guarantee that only one thread has access at any one time.
I imagine they mean using the lock-statement (or SyncLock in VB.NET) vs. using a Monitor.
You might want to read this page for examples and an understanding of the concept. However, if you have no experience with multithreaded application design, it will likely become quickly apparent, should your new employer put you to the test. It's a fairly complicated subject, with many possible pitfalls such as deadlock.
There is a decent MSDN page on the subject as well.
There may be other options, depending on the type of member variable and how it is to be changed. Incrementing an integer for example can be done with the Interlocked.Increment method.
As an exercise and demonstration of the problem, try writing an application that starts 5 simultaneous threads, each incrementing a shared counter a million times. The intended end result of the counter would be 5 million, but that is (probably) not what you will end up with :)
Edit: made a quick implementation myself (download). Sample output:
Unsynchronized counter demo:
expected counter = 5000000
actual counter = 4901600
Time taken (ms) = 67
Synchronized counter demo:
expected counter = 5000000
actual counter = 5000000
Time taken (ms) = 287
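The exercise itself is language-agnostic; here is a comparable hedged sketch in Java (class and field names are illustrative), contrasting an unsynchronized counter with an atomic one:

import java.util.concurrent.atomic.AtomicInteger;

// Five threads each increment both counters a million times. The plain int
// loses updates to the read-modify-write race; the AtomicInteger does not.
public class CounterDemo {
    static int unsafeCounter = 0;
    static final AtomicInteger safeCounter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[5];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1_000_000; j++) {
                    unsafeCounter++;               // non-atomic read-modify-write
                    safeCounter.incrementAndGet(); // atomic read-modify-write
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }

        System.out.println("expected counter = 5000000");
        System.out.println("unsafe counter   = " + unsafeCounter);
        System.out.println("atomic counter   = " + safeCounter.get());
    }
}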
There are a couple of ways, several of which are mentioned previously.
ReaderWriterLockSlim is my preferred method. This gives you a database style of locking, and allows for upgrading (although the syntax for that was incorrect in the MSDN documentation the last time I looked, and is very non-obvious).
lock statements. You treat a read like a write and just prevent access to the variable.
Interlocked operations. These perform an operation on a value type in an atomic step. They can be used for lock-free threading (I really wouldn't recommend this).
Mutexes and Semaphores (haven't used these)
Monitor statements (this is essentially how the lock keyword works)
While I don't mean to denigrate other answers, I would not trust anything that does not use one of these techniques. My apologies if I have forgotten any.

What is a semaphore?

A semaphore is a programming concept that is frequently used to solve multi-threading problems. My question to the community:
What is a semaphore and how do you use it?
Think of semaphores as bouncers at a nightclub. There are a dedicated number of people that are allowed in the club at once. If the club is full no one is allowed to enter, but as soon as one person leaves another person might enter.
It's simply a way to limit the number of consumers for a specific resource. For example, to limit the number of simultaneous calls to a database in an application.
Here is a very pedagogic example in C# :-)
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
namespace TheNightclub
{
public class Program
{
public static Semaphore Bouncer { get; set; }
public static void Main(string[] args)
{
// Create the semaphore with 3 slots, where 3 are available.
Bouncer = new Semaphore(3, 3);
// Open the nightclub.
OpenNightclub();
}
public static void OpenNightclub()
{
for (int i = 1; i <= 50; i++)
{
// Let each guest enter on an own thread.
Thread thread = new Thread(new ParameterizedThreadStart(Guest));
thread.Start(i);
}
}
public static void Guest(object args)
{
// Wait to enter the nightclub (a semaphore to be released).
Console.WriteLine("Guest {0} is waiting to enter the nightclub.", args);
Bouncer.WaitOne();
// Do some dancing.
Console.WriteLine("Guest {0} is doing some dancing.", args);
Thread.Sleep(500);
// Let one guest out (release one semaphore).
Console.WriteLine("Guest {0} is leaving the nightclub.", args);
Bouncer.Release(1);
}
}
}
The article Mutexes and Semaphores Demystified by Michael Barr is a great short introduction into what makes mutexes and semaphores different, and when they should and should not be used. I've excerpted several key paragraphs here.
The key point is that mutexes should be used to protect shared resources, while semaphores should be used for signaling. You should generally not use semaphores to protect shared resources, nor mutexes for signaling. There are issues, for instance, with the bouncer analogy in terms of using semaphores to protect shared resources - you can use them that way, but it may cause hard to diagnose bugs.
While mutexes and semaphores have some similarities in their implementation, they should always be used differently.
The most common (but nonetheless incorrect) answer to the question posed at the top is that mutexes and semaphores are very similar, with the only significant difference being that semaphores can count higher than one. Nearly all engineers seem to properly understand that a mutex is a binary flag used to protect a shared resource by ensuring mutual exclusion inside critical sections of code. But when asked to expand on how to use a "counting semaphore," most engineers—varying only in their degree of confidence—express some flavor of the textbook opinion that these are used to protect several equivalent resources.
...
At this point an interesting analogy is made using the idea of bathroom keys as protecting shared resources - the bathroom. If a shop has a single bathroom, then a single key will be sufficient to protect that resource and prevent multiple people from using it simultaneously.
If there are multiple bathrooms, one might be tempted to key them alike and make multiple keys - this is similar to a semaphore being mis-used. Once you have a key you don't actually know which bathroom is available, and if you go down this path you're probably going to end up using mutexes to provide that information and make sure you don't take a bathroom that's already occupied.
A semaphore is the wrong tool to protect several essentially identical resources, but this is how many people think of it and use it. The bouncer analogy is distinctly different: there aren't several of the same type of resource; instead there is one resource which can accept multiple simultaneous users. I suppose a semaphore can be used in such situations, but rarely are there real-world situations where the analogy actually holds; it's more often that there are several of the same type, but still individual, resources, like the bathrooms, which cannot be used this way.
...
The correct use of a semaphore is for signaling from one task to another. A mutex is meant to be taken and released, always in that order, by each task that uses the shared resource it protects. By contrast, tasks that use semaphores either signal or wait—not both. For example, Task 1 may contain code to post (i.e., signal or increment) a particular semaphore when the "power" button is pressed and Task 2, which wakes the display, pends on that same semaphore. In this scenario, one task is the producer of the event signal; the other the consumer.
...
Here an important point is made that mutexes interfere with real time operating systems in a bad way, causing priority inversion where a less important task may be executed before a more important task because of resource sharing. In short, this happens when a lower priority task uses a mutex to grab a resource, A, then tries to grab B, but is paused because B is unavailable. While it's waiting, a higher priority task comes along and needs A, but it's already tied up, and by a process that isn't even running because it's waiting for B. There are many ways to resolve this, but it most often is fixed by altering the mutex and task manager. The mutex is much more complex in these cases than a binary semaphore, and using a semaphore in such an instance will cause priority inversions because the task manager is unaware of the priority inversion and cannot act to correct it.
...
The cause of the widespread modern confusion between mutexes and semaphores is historical, as it dates all the way back to the 1974 invention of the Semaphore (capital "S", in this article) by Dijkstra. Prior to that date, none of the interrupt-safe task synchronization and signaling mechanisms known to computer scientists was efficiently scalable for use by more than two tasks. Dijkstra's revolutionary, safe-and-scalable Semaphore was applied in both critical section protection and signaling. And thus the confusion began.
However, it later became obvious to operating system developers, after the appearance of the priority-based preemptive RTOS (e.g., VRTX, ca. 1980), the publication of academic papers establishing RMA and the problems caused by priority inversion, and a paper on priority inheritance protocols in 1990, that mutexes must be more than just semaphores with a binary counter.
Mutex: resource sharing
Semaphore: signaling
Don't use one for the other without careful consideration of the side effects.
Mutex: exclusive-member access to a resource
Semaphore: n-member access to a resource
That is, a mutex can be used to synchronize access to a counter, file, database, etc.
A semaphore can do the same thing but supports a fixed number of simultaneous callers. For example, I can wrap my database calls in a semaphore(3) so that my multithreaded app will hit the database with at most 3 simultaneous connections. All attempts will block until one of the three slots opens up. They make things like naive throttling really, really easy.
Consider a taxi that can accommodate a total of 3 (rear) + 2 (front) persons, including the driver. So a semaphore allows only 5 persons inside the car at a time,
and a mutex allows only 1 person on a single seat of the car.
Therefore, a mutex allows exclusive access to a resource (like an OS thread), while a semaphore allows access to n instances of a resource at a time.
@Craig:
A semaphore is a way to lock a resource so that it is guaranteed that while a piece of code is executed, only this piece of code has access to that resource. This keeps two threads from concurrently accessing a resource, which can cause problems.
This is not restricted to only one thread. A semaphore can be configured to allow a fixed number of threads to access a resource.
Semaphore can also be used as a ... semaphore.
For example, if you have multiple processes enqueuing data to a queue and only one task consuming data from the queue, and you don't want the consuming task to constantly poll the queue for available data, you can use a semaphore.
Here the semaphore is not used as an exclusion mechanism, but as a signaling mechanism.
The consuming task waits on the semaphore.
The producing tasks post on the semaphore.
This way the consuming task runs when, and only when, there is data to be dequeued.
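A small hedged Java sketch of that signaling pattern, assuming a single consuming task as described above; the class and queue names are illustrative:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

// Semaphore used for signaling, not exclusion: producers post one permit per
// item, and the consumer sleeps on the semaphore instead of polling the queue.
class SignalingExample {
    private final Queue<String> queue = new ConcurrentLinkedQueue<>();
    private final Semaphore itemsAvailable = new Semaphore(0); // starts "closed"

    // Called by any producing task.
    void produce(String item) {
        queue.add(item);
        itemsAvailable.release(); // signal: one more item is ready
    }

    // Called by the single consuming task.
    String consume() throws InterruptedException {
        itemsAvailable.acquire(); // blocks until some producer has signaled
        return queue.poll();      // non-null, since there is one permit per enqueued item
    }
}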
There are two essential concepts in building concurrent programs: synchronization and mutual exclusion. We will see how these two types of locks (semaphores are, more generally, a kind of locking mechanism) help us achieve synchronization and mutual exclusion.
A semaphore is a programming construct that helps us achieve concurrency, by implementing both synchronization and mutual exclusion. Semaphores are of two types, Binary and Counting.
A semaphore has two parts : a counter, and a list of tasks waiting to access a particular resource. A semaphore performs two operations : wait (P) [this is like acquiring a lock], and release (V)[ similar to releasing a lock] - these are the only two operations that one can perform on a semaphore. In a binary semaphore, the counter logically goes between 0 and 1. You can think of it as being similar to a lock with two values : open/closed. A counting semaphore has multiple values for count.
What is important to understand is that the semaphore counter keeps track of the number of tasks that do not have to block, i.e., they can make progress. Tasks block, and add themselves to the semaphore's list only when the counter is zero. Therefore, a task gets added to the list in the P() routine if it cannot progress, and "freed" using the V() routine.
Now, it is fairly obvious to see how binary semaphores can be used to solve synchronization and mutual exclusion - they are essentially locks.
ex. Synchronization:
thread A {
    semaphore &s; // locks/semaphores are passed by reference! think about why this is so.
    A(semaphore &s): s(s) {} // constructor
    foo() {
        ...
        s.P();
        // some block of code B2
        ...
    }
}

thread B {
    semaphore &s;
    B(semaphore &s): s(s) {} // constructor
    foo() {
        ...
        // some block of code B1
        s.V();
        ...
    }
}

main() {
    semaphore s(0); // we start the semaphore at 0 (closed)
    A a(s);
    B b(s);
}
In the above example, B2 can only execute after B1 has finished execution. Let's say thread A executes first: it gets to s.P() and waits, since the counter is 0 (closed). Thread B comes along, finishes B1, and then frees thread A, which then completes B2. So we achieve synchronization.
Now let's look at mutual exclusion with a binary semaphore:
thread mutual_ex {
    semaphore &s;
    mutual_ex(semaphore &s): s(s) {} // constructor
    foo() {
        ...
        s.P();
        // critical section
        s.V();
        ...
        s.P();
        // critical section
        s.V();
        ...
    }
}

main() {
    semaphore s(1);
    mutual_ex m1(s);
    mutual_ex m2(s);
}
The mutual exclusion is quite simple as well - m1 and m2 cannot enter the critical section at the same time. So each thread is using the same semaphore to provide mutual exclusion for its two critical sections. Now, is it possible to have greater concurrency? Depends on the critical sections. (Think about how else one could use semaphores to achieve mutual exclusion.. hint hint : do i necessarily only need to use one semaphore?)
Counting semaphore: A semaphore with more than one value. Let's look at what this is implying - a lock with more than one value?? So open, closed, and ...hmm. Of what use is a multi-stage-lock in mutual exclusion or synchronization?
Let's take the easier of the two:
Synchronization using a counting semaphore: Let's say you have 3 tasks, and you want #1 and #2 to execute only after #3 has finished. How would you design your synchronization?
thread t1 {
    ...
    s.P();
    // block of code B1
}

thread t2 {
    ...
    s.P();
    // block of code B2
}

thread t3 {
    ...
    // block of code B3
    s.V();
    s.V();
}
So if your semaphore starts off closed, you ensure that t1 and t2 block and get added to the semaphore's list. Then along comes all-important t3, finishes its business and frees t1 and t2. What order are they freed in? That depends on the implementation of the semaphore's list: it could be FIFO, it could be based on some particular priority, etc. (Note: think about how you would arrange your P's and V's if you wanted t1 and t2 to be executed in some particular order, and if you weren't aware of the implementation of the semaphore's list.)
(Find out: What happens if the number of V's is greater than the number of P's?)
Mutual exclusion using counting semaphores: I'd like you to construct your own pseudocode for this (it makes you understand things better!), but the fundamental concept is this: a counting semaphore with counter = N allows N tasks to enter the critical section freely. What this means is you can have N tasks (or threads, if you like) enter the critical section, but the (N+1)th task gets blocked (goes on our favorite blocked-task list) and is only let through when somebody V's the semaphore at least once. So the semaphore counter, instead of swinging between 0 and 1, now goes between 0 and N, allowing N tasks to freely enter and exit, blocking nobody!
Now gosh, why would you need such a stupid thing? Isn't the whole point of mutual exclusion to not let more than one guy access a resource?? (Hint Hint...You don't always only have one drive in your computer, do you...?)
To think about : Is mutual exclusion achieved by having a counting semaphore alone? What if you have 10 instances of a resource, and 10 threads come in (through the counting semaphore) and try to use the first instance?
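One common way to answer that last question, sketched here as a hedged Java example (names are illustrative): pair the counting semaphore with a structure that tracks which specific instances are free, so the semaphore bounds how many threads get in while the pool hands each of them a distinct instance:

import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

// A counting semaphore alone only limits HOW MANY threads get in; the queue of
// free instances also decides WHICH instance each admitted thread receives.
class ResourcePool<T> {
    private final ConcurrentLinkedQueue<T> free = new ConcurrentLinkedQueue<>();
    private final Semaphore available;

    ResourcePool(List<T> instances) {
        free.addAll(instances);
        available = new Semaphore(instances.size());
    }

    T acquire() throws InterruptedException {
        available.acquire(); // at most N threads get past this point
        return free.poll();  // non-null: a permit guarantees a free instance exists
    }

    void release(T instance) {
        free.add(instance);
        available.release();
    }
}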
I've created a visualization which should help in understanding the idea. A semaphore controls access to a common resource in a multithreaded environment.
ExecutorService executor = Executors.newFixedThreadPool(7);
Semaphore semaphore = new Semaphore(4);

Runnable longRunningTask = () -> {
    boolean permit = false;
    try {
        permit = semaphore.tryAcquire(1, TimeUnit.SECONDS);
        if (permit) {
            System.out.println("Semaphore acquired");
            Thread.sleep(5000); // hold the permit long enough that some waiting tasks time out
        } else {
            System.out.println("Could not acquire semaphore");
        }
    } catch (InterruptedException e) {
        throw new IllegalStateException(e);
    } finally {
        if (permit) {
            semaphore.release();
        }
    }
};

// execute tasks
for (int j = 0; j < 10; j++) {
    executor.submit(longRunningTask);
}
executor.shutdown();
Output
Semaphore acquired
Semaphore acquired
Semaphore acquired
Semaphore acquired
Could not acquire semaphore
Could not acquire semaphore
Could not acquire semaphore
Sample code from the article
A semaphore is an object containing a natural number (i.e. an integer greater than or equal to zero) on which two modifying operations are defined. One operation, V, adds 1 to the number. The other operation, P, decreases the number by 1. Both operations are atomic (i.e. no other operation can be executed at the same time as a V or a P).
Because the natural number 0 cannot be decreased, calling P on a semaphore containing a 0 will block the execution of the calling process(/thread) until some moment at which the number is no longer 0 and P can be successfully (and atomically) executed.
As mentioned in other answers, semaphores can be used to restrict access to a certain resource to a maximum (but variable) number of processes.
A hardware or software flag. In multitasking systems, a semaphore is a variable with a value that indicates the status of a common resource. A process needing the resource checks the semaphore to determine the resource's status and then decides how to proceed.
Semaphores act like thread limiters.
Example: if you have a pool of 100 threads and you want to perform some DB operation, and all 100 threads access the DB at a given time, there may be locking issues in the DB, so we can use a semaphore which allows only a limited number of threads at a time. The example below allows only one thread at a time. When a thread calls the acquire() method it gets access, and after calling the release() method it releases the access so that the next thread gets access.
package practice;

import java.util.concurrent.Semaphore;

public class SemaphoreExample {
    public static void main(String[] args) {
        Semaphore s = new Semaphore(1);
        semaphoreTask s1 = new semaphoreTask(s);
        semaphoreTask s2 = new semaphoreTask(s);
        semaphoreTask s3 = new semaphoreTask(s);
        semaphoreTask s4 = new semaphoreTask(s);
        semaphoreTask s5 = new semaphoreTask(s);
        s1.start();
        s2.start();
        s3.start();
        s4.start();
        s5.start();
    }
}

class semaphoreTask extends Thread {
    Semaphore s;

    public semaphoreTask(Semaphore s) {
        this.s = s;
    }

    @Override
    public void run() {
        try {
            s.acquire();
            Thread.sleep(1000);
            System.out.println(Thread.currentThread().getName() + " Going to perform some operation");
            s.release();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
So imagine everyone is trying to go to the bathroom and there's only a certain number of keys to the bathroom. Now if there's not enough keys left, that person needs to wait. So think of semaphore as representing those set of keys available for bathrooms (the system resources) that different processes (bathroom goers) can request access to.
Now imagine two processes trying to go to the bathroom at the same time. That's not a good situation and semaphores are used to prevent this. Unfortunately, the semaphore is a voluntary mechanism and processes (our bathroom goers) can ignore it (i.e. even if there are keys, someone can still just kick the door open).
There are also differences between binary/mutex & counting semaphores.
Check out the lecture notes at http://www.cs.columbia.edu/~jae/4118/lect/L05-ipc.html.
This is an old question, but one of the most interesting uses of a semaphore is a read/write lock, and it has not been explicitly mentioned.
The r/w lock works in a simple fashion: consume one permit for a reader, and all permits for a writer.
Indeed, it is a trivial implementation of an r/w lock, but it requires metadata modification on read (actually twice), which can become a bottleneck; still, it is significantly better than a mutex or lock.
Another downside is that writers can be starved rather easily as well, unless the semaphore is a fair one or the writers acquire the permits in multiple requests, in which case they need an explicit mutex between themselves.
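A minimal Java sketch of that scheme, under the assumption of a fair semaphore and a fixed cap on concurrent readers (the names and the constant are illustrative):

import java.util.concurrent.Semaphore;

// Reader-writer lock built from one counting semaphore: a reader consumes a
// single permit, a writer consumes every permit and so excludes both readers
// and other writers.
class SemaphoreReadWriteLock {
    private static final int MAX_READERS = 100;
    private final Semaphore permits = new Semaphore(MAX_READERS, true); // fair, to limit writer starvation

    void readLock() throws InterruptedException  { permits.acquire(); }
    void readUnlock()                            { permits.release(); }

    void writeLock() throws InterruptedException { permits.acquire(MAX_READERS); }
    void writeUnlock()                           { permits.release(MAX_READERS); }
}

Because the semaphore is fair, a writer waiting for all MAX_READERS permits also blocks readers that arrive after it, so it eventually gets through.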
Mutex is just a boolean while semaphore is a counter.
Both are used to lock part of code so it's not accessed by too many threads.
Example
lock.set()
a += 1
lock.unset()
Now if lock was a mutex, it would always be either locked or unlocked (a boolean under the surface) regardless of how many threads try to access the protected snippet of code. While locked, any other thread would just wait until it's unlocked/unset by the previous thread.
Now imagine if instead lock was, under the hood, a counter with a predefined MAX value (say 2 for our example). Then if 2 threads try to access the resource, lock would get its value increased to 2. If a 3rd thread then tried to access it, it would simply wait for the counter to go below 2, and so on.
If lock as a semaphore had a max of 1, then it would act exactly like a mutex.
A semaphore is a way to lock a resource so that it is guaranteed that while a piece of code is executed, only this piece of code has access to that resource. This keeps two threads from concurrently accessing a resource, which can cause problems.

Resources