How do semaphores allowing negatives work? - multithreading

I've read very conflicting sources online about whether a semaphore can have a negative value. It seems like in some implementations a negative value signifies the number of waiting threads, but I can't quite get this. If I'm understanding correctly, the count says how many open "slots" there are. So first of all, I can't see how you can have negative slots if the semaphore blocks a wait call until the count is positive. I also don't see what the number of waiting threads would have to do with the number of open slots. How do negative values of the counter variable work in these situations?
Please refer to this video to see an example of semaphores being explained with negative values.

Please quote the source or paste some code.
Semaphores are generally binary, meaning their value is either 0 or 1.
What count are you referring to?
In a general sense, the count can be the number of waiting threads. I don't understand what slots you're referring to.
Waiting threads generally enter a queue and go to sleep.
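For what it's worth, the negative-value behaviour comes from the classic textbook formulation often shown in OS lectures: wait() decrements the counter first and blocks if the result is negative, so a value of -3 literally means "three threads are queued". Below is a rough sketch of that idea using POSIX threads; the type and function names are invented for illustration, and real sem_t implementations don't have to work this way.

    #include <pthread.h>

    /* Textbook-style semaphore whose value may go negative.
     * When the value is negative, its absolute value is the number
     * of threads currently blocked in nsem_wait(). */
    typedef struct {
        int value;                    /* may be negative */
        int wakeups;                  /* wakeups handed out by nsem_post() */
        pthread_mutex_t lock;
        pthread_cond_t cond;
    } nsem_t;

    void nsem_init(nsem_t *s, int value) {
        s->value = value;
        s->wakeups = 0;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->cond, NULL);
    }

    void nsem_wait(nsem_t *s) {
        pthread_mutex_lock(&s->lock);
        s->value--;                   /* decrement first ... */
        if (s->value < 0) {           /* ... and block if it went negative */
            do {
                pthread_cond_wait(&s->cond, &s->lock);
            } while (s->wakeups < 1);
            s->wakeups--;
        }
        pthread_mutex_unlock(&s->lock);
    }

    void nsem_post(nsem_t *s) {
        pthread_mutex_lock(&s->lock);
        s->value++;
        if (s->value <= 0) {          /* someone is (still) waiting */
            s->wakeups++;
            pthread_cond_signal(&s->cond);
        }
        pthread_mutex_unlock(&s->lock);
    }

With this formulation the "open slots" intuition still works while the value is positive; once it goes negative, the value simply keeps counting how far into debt the semaphore is, i.e. how many waiters still have to be paid back by future post() calls.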

Related

Can a semaphore cause a race condition?

I am a student, currently studying concurrency in operating systems, specifically semaphores.
I have read books and articles about semaphores, and about mutexes vs. semaphores, but I can't seem to answer the question in the title.
A semaphore can be used as a "binary semaphore" or as a "counting semaphore", classified by its initial value.
I understand that a binary semaphore can prevent race conditions by acting similarly to a mutex (although the two are not the same, for various reasons).
What I am curious about is this: when we initialize the semaphore's value to 2 or more, let's say n, then n threads can enter the critical section. Does this use of a semaphore cause a race condition?
I have read articles saying that counting semaphores are used to keep track of access to resources, and I'm confused:
do we not use counting semaphores like this, and is a counting semaphore not used to solve concurrency problems?
Added below because my question wasn't detailed enough:
For example, say there are 100 threads, I set X=10, and I initialize the semaphore with sem_init(&s, 0, X). If there is a critical section in the threads' code flow, doesn't that induce a race condition, because 10 threads are allowed to use the resource and run through the threads' flow at once?
Semaphores prevent race conditions. Counting semaphores are used where more than one instance of the resource they control is available.
If they control access to a single resource, then a mutex semaphore will be used. If there are two resources that can be used, then a counting semaphore of two will be used; if there are three, then a semaphore of three will be used, and so on.
What I am curious about is this: when we initialize the semaphore's value to 2 or more, let's say n, then n threads can enter the critical section. Does this use of a semaphore cause a race condition?
You are talking about a counting semaphore, which generally gets initialized to 0. I actually can't think of a use case where you'd want to initialize it to a value > 0, because each waiting thread/task will cause the counting semaphore to increment for as long as it's waiting. Also, the increments are atomic instructions and will not cause concurrency problems.
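To make the sem_init(&s, 0, X) case from the question concrete: a counting semaphore initialized to 10 only guarantees that at most 10 threads are past sem_wait() at the same time. Whether that causes a race depends on what those 10 threads touch: 10 independent resources is fine, but one shared variable still needs its own mutex. A rough POSIX sketch, with thread counts and names chosen purely for illustration:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define NUM_THREADS    100
    #define MAX_IN_SECTION 10

    sem_t slots;                 /* counting semaphore, initialized to 10 */
    pthread_mutex_t sum_lock = PTHREAD_MUTEX_INITIALIZER;
    long shared_sum = 0;         /* a single shared resource */

    void *worker(void *arg) {
        sem_wait(&slots);        /* at most 10 threads proceed past this point */

        /* Up to 10 threads run here concurrently, so touching shared_sum
         * without the mutex WOULD be a race; the semaphore alone does not help. */
        pthread_mutex_lock(&sum_lock);
        shared_sum++;
        pthread_mutex_unlock(&sum_lock);

        sem_post(&slots);        /* free a slot for another thread */
        return NULL;
    }

    int main(void) {
        pthread_t t[NUM_THREADS];
        sem_init(&slots, 0, MAX_IN_SECTION);

        for (int i = 0; i < NUM_THREADS; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < NUM_THREADS; i++)
            pthread_join(t[i], NULL);

        printf("%ld\n", shared_sum);   /* 100 with the mutex; unpredictable without it */
        sem_destroy(&slots);
        return 0;
    }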

Many threads and critical region

If I have many threads running at the same time, how can I allow only one thread to enter the critical region? Also, what will happen if I have more than one thread in the critical region?
There are several kinds of bugs that can happen as a result of improperly-protected critical sections, but the most common is called a race condition. This occurs when the correctness of your program's behavior or output is dependent on events occurring in a particular sequence, but it's possible that the events will occur in a different sequence. This will tend to cause the program to behave in an unexpected or unpredictable manner. (Sorry to be a little vague on that last point but by its very nature it's often difficult to predict in advance what the exact result will be other than to say "it probably won't be what you wanted").
Generally, to fix this, you'd use some kind of lock to ensure that only one thread can access the critical section at once. The most common mechanism for this is a mutex lock, which is used for the "simple" case - you have some kind of a shared resource and only one thread can access it at one time.
There are some other mechanisms available too for more complicated cases, such as:
Reader-Writer Locks - either one writer can write to the resource or an unlimited number of readers can read from it (see the sketch after this list).
Counting semaphore - some specified number of threads can access a particular resource at one time. As an analogy, think of a parking lot that only has, for example, 100 spaces - once 100 cars are parked there, it can't accept any more (or, at least, not until one of them leaves).
The .NET Framework provides a ManualResetEvent - basically, the threads in question have to wait until an event happens.
This isn't a lock per se, but it's becoming increasingly common to use immutable data structures to obviate the need for locking in the first place. The idea here is that no thread can modify another thread's data, they're always working against either a local version or the unmodified "central" version.
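As a sketch of the reader-writer case mentioned in the list above, POSIX provides pthread_rwlock_t. This is only a minimal illustration of the mechanism, not tied to any particular framework mentioned in the answer:

    #include <pthread.h>
    #include <stdio.h>

    /* Reader-writer lock: many readers may hold the lock at once,
     * but a writer gets exclusive access. */
    pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    int shared_value = 0;

    void *reader(void *arg) {
        pthread_rwlock_rdlock(&rw);           /* shared (read) lock */
        printf("read %d\n", shared_value);
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    void *writer(void *arg) {
        pthread_rwlock_wrlock(&rw);           /* exclusive (write) lock */
        shared_value++;
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    int main(void) {
        pthread_t r, w;
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
    }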

Advantage of counting-semaphore

Can anyone tell me what a counting semaphore is?
What is the advantage of a counting semaphore?
Can you write a code snippet for a counting semaphore in C?
In cases where you have N available resources, a counting semaphore can keep track of the number of remaining resources. When a thread acquires one of the resources, the semaphore's counter is decreased by one, and when a thread releases the semaphore, the counter is increased by one. If the counter has reached zero and a thread asks for a resource, that thread is blocked until another thread releases the semaphore.
A well-known application of semaphores is the producer-consumer problem.
You can find a good description of the producer-consumer problem here: https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem
It also includes the simple code you were looking for.
Also, a semaphore can be initialized to limit the maximum number of resources it controls.
If we limit that to 1, it is called a binary semaphore, which has just two states: sema = 1 or sema = 0.
Binary and counting semaphores are compared here:
Difference between Counting and Binary Semaphores
Counting semaphores are about as powerful as condition variables (used in conjunction with mutexes). In many cases, the code might be simpler when it is implemented with counting semaphores rather than with condition variables (as shown in the next few examples).
Conceptually, a semaphore is a nonnegative integer count. Semaphores are typically used to coordinate access to resources, with the semaphore count initialized to the number of free resources. Threads then atomically increment the count when resources are added and atomically decrement the count when resources are removed.
When the semaphore count becomes zero, indicating that no more resources are present, threads trying to decrement the semaphore block until the count becomes greater than zero.
Refer to this link for example.
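Since the question explicitly asked for a C snippet, here is a rough sketch of the producer-consumer pattern from the linked Wikipedia article, using POSIX counting semaphores; the buffer size, item count, and names are arbitrary choices for illustration:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define BUF_SIZE 4
    #define ITEMS    16

    int buffer[BUF_SIZE];
    int in = 0, out = 0;

    sem_t empty_slots;                 /* counts free buffer slots, starts at BUF_SIZE */
    sem_t full_slots;                  /* counts filled slots, starts at 0 */
    pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

    void *producer(void *arg) {
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&empty_slots);            /* block if the buffer is full */
            pthread_mutex_lock(&buf_lock);
            buffer[in] = i;
            in = (in + 1) % BUF_SIZE;
            pthread_mutex_unlock(&buf_lock);
            sem_post(&full_slots);             /* announce one more item */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&full_slots);             /* block if the buffer is empty */
            pthread_mutex_lock(&buf_lock);
            int item = buffer[out];
            out = (out + 1) % BUF_SIZE;
            pthread_mutex_unlock(&buf_lock);
            sem_post(&empty_slots);            /* free one slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty_slots, 0, BUF_SIZE);
        sem_init(&full_slots, 0, 0);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        sem_destroy(&empty_slots);
        sem_destroy(&full_slots);
        return 0;
    }

Here empty_slots starts at the buffer size and counts free slots, full_slots starts at 0 and counts filled slots, and the mutex protects the buffer indices themselves.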

How do multiple threads avoid deadlocking in a critical section when using semaphores?

I've recently read up on semaphores and get most of the logic.
Except for one thing:
Let's say the value of the semaphore is 5; that means 5 threads can't enter the critical section. But how do we make sure these 5 threads don't try to access the same resource again, causing a concurrency problem?
Is it something we are supposed to manage manually?
I think you got it backwards :)
You create the semaphore with the count for how many concurrent threads can enter it.
Say you have five resources for doing some work, you then create the semaphore with a count of five. What this means is that the first five threads that try to enter the semaphore with WaitOne get let in, decrementing the counter in the process.
When a thread exits the protected area with Release, it increments the counter again.
If a thread attempts to enter when the count is zero or below, that thread blocks until one of the threads already "in" the semaphore exits.
This way only five threads can be "in" the protected area at any one time.
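The answer above is phrased in .NET terms (WaitOne/Release). To address the "manage manually" part of the question concretely: yes, the semaphore only limits how many threads are inside at once; making sure each of them uses a different resource is up to you. One common pattern is a small free-list guarded by a mutex, sketched here in POSIX C with illustrative names:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <stdbool.h>

    #define NUM_RESOURCES 5
    #define NUM_THREADS   20

    sem_t available;                              /* counts free resources */
    pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
    bool in_use[NUM_RESOURCES];                   /* which resource slots are taken */

    /* Pick a specific free resource; the semaphore guarantees one exists. */
    static int acquire_resource(void) {
        int idx = -1;
        sem_wait(&available);                     /* at most 5 threads get past here */
        pthread_mutex_lock(&pool_lock);
        for (int i = 0; i < NUM_RESOURCES; i++) {
            if (!in_use[i]) { in_use[i] = true; idx = i; break; }
        }
        pthread_mutex_unlock(&pool_lock);
        return idx;
    }

    static void release_resource(int idx) {
        pthread_mutex_lock(&pool_lock);
        in_use[idx] = false;
        pthread_mutex_unlock(&pool_lock);
        sem_post(&available);                     /* let another thread in */
    }

    void *worker(void *arg) {
        int idx = acquire_resource();
        printf("working with resource %d\n", idx); /* each thread holds a distinct one */
        release_resource(idx);
        return NULL;
    }

    int main(void) {
        pthread_t t[NUM_THREADS];
        sem_init(&available, 0, NUM_RESOURCES);
        for (int i = 0; i < NUM_THREADS; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < NUM_THREADS; i++) pthread_join(t[i], NULL);
        sem_destroy(&available);
        return 0;
    }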

If I have 1 thread writing and 1 thread reading an int32, is it threadsafe?

I am using C#, and I need to know if having one thread reading and one thread writing is safe without using any volatiles or locks. I would only be reading/writing 32-bit ints and booleans.
This is pretty much the definition of thread-unsafe code. If you analyze the need for a lock in threaded code then you always look for exactly this pattern. If you don't lock then the thread that reads will observe random changes to the variable, completely out of sync with its execution and at completely random times.
The only guarantee you get from the C# memory model is that an int or bool update is atomic. An expensive word that means you will not accidentally read a value where some of the bytes have the new value and some have the old value, which would produce a completely random number. Not every value update is atomic: long, double, and structure types (including Nullable<> and decimal) are not.
Not locking where necessary produces extremely hard-to-debug problems. They depend on timing, and programs tend to settle into an execution pattern where timing doesn't vary much. But that can suddenly change, when the machine is occupied by another task, for example. Your program could run fine for a week, then fail once when you get an email message :) That is undebuggable.
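The question is about C#, but the same pattern is easy to show in C, where an unsynchronized one-writer/one-reader pair on a plain int is a data race (undefined behavior in C11). Below is a minimal C11 sketch of the fix using _Atomic, purely as an illustration of the idea rather than of the C# memory model:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* One writer, one reader. With a plain int this would be a data race
     * (undefined behavior in C11); _Atomic makes each read/write atomic and
     * visible to the other thread. */
    atomic_int counter = 0;
    atomic_bool done = false;

    void *writer(void *arg) {
        for (int i = 1; i <= 1000000; i++)
            atomic_store(&counter, i);
        atomic_store(&done, true);
        return NULL;
    }

    void *reader(void *arg) {
        int last = 0;
        while (!atomic_load(&done))
            last = atomic_load(&counter);     /* always a whole, untorn value */
        printf("last value seen: %d\n", last);
        return NULL;
    }

    int main(void) {
        pthread_t w, r;
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
    }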
