Thread execution with no synchronization (multithreading)

Threads 1 and 2 execute concurrently with shared integer variables A, B, and C.
Thread 1 executes: A=4, B=5, C=B-A. Thread 2 executes: A=3, B=6, C=A+B.
Suppose there is no synchronization implemented. What are all possible values of C after the execution of this fragment?
I know that without synchronization, reads can see writes that occur later in the execution order, which is counterintuitive (the execution is only happens-before consistent).
I am confused about what the possible values of C can be.
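Assuming each statement executes atomically and the run is sequentially consistent, you can find every possible final value of C by brute-forcing all interleavings of the two threads' statements. A small Python sketch (the representation of statements as (variable, update-function) pairs is my own):

```python
from itertools import combinations

# Each thread's statements, in program order: (variable written,
# function computing the new value from the current shared state).
t1 = [("A", lambda s: 4), ("B", lambda s: 5), ("C", lambda s: s["B"] - s["A"])]
t2 = [("A", lambda s: 3), ("B", lambda s: 6), ("C", lambda s: s["A"] + s["B"])]

results = set()
n = len(t1) + len(t2)
for pos in combinations(range(n), len(t1)):  # slots taken by Thread 1
    it1, it2 = iter(t1), iter(t2)
    state = {"A": 0, "B": 0, "C": 0}
    for i in range(n):
        var, f = next(it1) if i in pos else next(it2)
        state[var] = f(state)
    results.add(state["C"])

print(sorted(results))  # [1, 2, 3, 8, 9, 10]
```

Note that this enumerates only sequentially consistent executions; on a relaxed memory model, an unsynchronized (racy) program may in principle exhibit outcomes outside this set.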

Starvation Free Mutex In Little Book of Semaphores

Background:
The Little Book of Semaphores by Allen B. Downey talks about assumptions needed to prevent thread starvation.
He states that the scheduler needs to guarantee the following:
Property 2: if a thread is ready to run, then the time it waits until it runs is bounded.
And a weak semaphore guarantees:
Property 3: if there are threads waiting on a semaphore when a thread executes signal, then one of the waiting threads has to be woken.
However, he states that even with these properties, the following code, when run with 3 or more threads (Threads A, B, C), can cause starvation:
while True:
    mutex.wait()
    # critical section
    mutex.signal()
The argument is that if A executes first and then wakes up B, A could wait on the mutex again before B releases it. At this point, A could be woken up again, reacquire the mutex, and repeat this cycle with B, so C would be starved.
Question:
Wouldn't Property 2 guarantee that C would have to be woken up by the scheduler in some finite amount of time? If so, then Thread C couldn't be starved. Even if weak semaphore does not guarantee that Thread C will be woken up, shouldn't the scheduler run it?
I thought about it a little more and realized that Property 2 guarantees that threads in a RUNNABLE state will be scheduled in a finite amount of time.
The argument in the book is that Thread C would never get to a RUNNABLE state (it stays blocked on the semaphore), so Properties 2 and 3 together do not guarantee freedom from starvation.
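The book's adversarial schedule can be simulated deterministically. In the sketch below, signal() always wakes a non-C waiter when one exists, which Property 3 permits; C never becomes RUNNABLE, so Property 2 (a bounded wait for *ready* threads) never applies to it. The simulation structure is my own illustration, not code from the book:

```python
from collections import deque

waiting = deque(["B", "C"])          # threads blocked on the mutex
entered = {"A": 0, "B": 0, "C": 0}   # critical-section entry counts
holder = "A"                          # A acquired the mutex first

def signal():
    # Wake a non-C waiter whenever one exists -- allowed by Property 3,
    # which only requires that *some* waiter be woken.
    for t in list(waiting):
        if t != "C":
            waiting.remove(t)
            return t
    return waiting.popleft()

for _ in range(1000):
    entered[holder] += 1              # holder runs its critical section,
    waiting.append(holder)            # then loops around and waits again
    holder = signal()                 # before the new holder proceeds

print(entered)  # {'A': 500, 'B': 500, 'C': 0} -- C is starved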

Big-O of a multi-threading project

Let's suppose that I create a project with 2 threads.
Their running times are O(n!) and O(n) respectively, and they run at the same time.
When one of them returns what I want, both of them stop.
With that said, it would make sense that the complexity of the algorithm is O(n), even though one of the threads has a running time of O(n!), am I right?
P.S. I did my research but none of the answers serve my need, since all of them talk about a problem that is cut in half (O(n/2) per thread instead of O(n) with one thread), while I want to start solving 2 problems at once but both stop when the first one is done.
The analysis of this needs to be more careful.
The thread scheduler may not guarantee that all threads will get a "fair" amount of execution time. Imagine two threads that are both counting up from 1, but the thread scheduler wakes thread A up for 1 step, then B for 1 step, then A for 2 steps, then B for 1 step, then A for 4 steps, and so on.
Thread A will do exponentially more work than thread B in this case, because it is given exponentially more time by the scheduler to do its work. So if thread B signals thread A to stop after B counts up to n, then thread A would stop after counting up to 2^n - 1. The scheduler could be even more unfair, so A's running time cannot be bounded by any function of n.
Given that, if thread A chooses to terminate itself after n! operations, then its running time can only be bounded by O(n!), because we can't guarantee that thread B will have completed its n operations and sent the termination signal within that time.
Now suppose the thread scheduler does guarantee that one thread is never favoured over another by more than some constant factor. In this case, the algorithm in thread B will send a signal to thread A after thread B completes O(n) steps. Since thread A can only complete O(n) steps in the same time (otherwise it would be favoured over thread B by more than a constant factor), then thread A will terminate in O(n) time.
That said, the fact that thread A checks for a signal and terminates when it receives one means that O(n!) cannot be derived as a tight upper bound just by looking at what thread A does in isolation, since it has instructions to terminate on a signal from outside. So at least there is no contradiction.
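Under the fair-scheduler assumption, the argument can be illustrated with a round-robin toy model (one step per thread per turn; this is my own simulation, not a real scheduler):

```python
import math

def simulate(n):
    """Perfectly fair round-robin: A and B each do one step per turn.
    B signals A to stop after n steps, so A executes only O(n) of its
    n! planned operations."""
    stop = False
    steps_a = steps_b = 0
    while not stop:
        if steps_a < math.factorial(n):  # A does one unit of its work
            steps_a += 1
        steps_b += 1                     # B does one unit of its work
        if steps_b == n:                 # B finishes and signals A
            stop = True
    return steps_a

print(simulate(5))  # 5 -- far below 5! = 120
```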

Semaphore minimization

I stumbled upon a problem in a multi-threading book in the library and I was wondering what I would need to do in order to minimize the number of semaphores.
What would you do in this situation?
Semaphores
Assume a process P0's execution depends on other k processes: P1,...,Pk.
You need one semaphore to synchronize the processes and satisfy this single constraint.
The semaphore S0 is initialized to 0, and P0 waits k times on S0 (in other words, it tries to acquire k resources).
Each of the k processes P1, ..., Pk releases S0 at the end of its execution.
This guarantees that P0 starts execution only after all the other k processes complete (in any order, asynchronously).
In the link you provided, you need 4 semaphores; T1 does not need any semaphore because its execution depends on no other process.
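A sketch of this scheme using Python's threading.Semaphore (K = 4 and all names are arbitrary choices for the demo):

```python
import threading

K = 4
s0 = threading.Semaphore(0)        # S0 initialized to 0
events = []                        # CPython list.append is thread-safe

def p_i(i):
    events.append(f"P{i} done")    # Pi's work
    s0.release()                   # Pi signals S0 at the end of its execution

def p0():
    for _ in range(K):             # P0 waits k times: once per dependency
        s0.acquire()
    events.append("P0 starts")     # reached only after all Pi have signaled

workers = [threading.Thread(target=p_i, args=(i,)) for i in range(1, K + 1)]
main = threading.Thread(target=p0)
main.start()                       # start P0 first: it simply blocks
for w in workers:
    w.start()
for t in workers + [main]:
    t.join()

print(events[-1])  # P0 starts
```

Whatever order the Pi finish in, "P0 starts" is always the last event, because each Pi's append happens before its release, and P0's append happens after all k acquires.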

What's the point of invoking sequential (synchronous) threads? Performance?

So with synchronous threads, you have some threads waiting for other threads to finish.
As we can see in the observer pattern, etc.
An Example:
Thread 3 Waiting for Thread 2 to execute Method C
Thread 2 waiting for Thread 1 to execute Method B
Thread 1 execute Method A
Does this scenario have any benefit in performance in contrast with:
Thread 1
Executing Method A
Executing Method B
Executing Method C
Maybe they just do it for the sake of splitting tasks according to behavior?
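For concreteness, the chained-wait scenario above can be sketched with threading.Event (the names are mine). It shows why there is no performance win: the methods still run strictly in order A, B, C, with synchronization overhead on top; the usual motivation is separating responsibilities, not speed.

```python
import threading

log = []
a_done, b_done = threading.Event(), threading.Event()

def thread1():
    log.append("A")    # execute Method A
    a_done.set()

def thread2():
    a_done.wait()      # Thread 2 waits for Thread 1
    log.append("B")    # execute Method B
    b_done.set()

def thread3():
    b_done.wait()      # Thread 3 waits for Thread 2
    log.append("C")    # execute Method C

# Start in reverse order to show the waiting really enforces the order.
threads = [threading.Thread(target=f) for f in (thread3, thread2, thread1)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(log)  # ['A', 'B', 'C']
```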

yield between different processes

I have two C++ programs, one called a and one called b. I am running on 64-bit Linux, using the Boost threading library.
The a program creates 5 threads which sit in an endless loop doing some operation.
The b program creates 5 threads which sit in an endless loop invoking yield().
I am on a quad-core machine... When I run the a program alone, it gets almost 400% of the CPU usage. When I run the b program alone, it also gets almost 400% of the CPU usage. I expected that.
But when running both together, I expected the b program to use almost no CPU and a to use the full 400%. Actually, both use an equal slice of the CPU, almost 200% each.
My question is: doesn't yield() work across different processes? Is there a way to make it work the way I expected?
So you have 4 cores running 4 threads that belong to A. There are 6 threads in the queue - 1 from A and 5 from B. One of the running A threads exhausts its timeslice and returns to the queue. The scheduler chooses the next runnable thread from the queue. What is the probability that this thread belongs to B? 5/6. OK, this thread is started, it calls sched_yield() and returns to the queue. What is the probability that the next thread will be a B thread again? 5/6 again!
Process B gets CPU time again and again, and also forces the kernel to do expensive context switches.
sched_yield is intended for one particular case - when one thread makes another thread runnable (for example, by unlocking a mutex). If you want to make B wait while A is working on something important, use a synchronization mechanism that can put B to sleep until A wakes it up.
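A sketch of that advice in Python, using a Condition so the waiting threads sleep instead of spinning (the setup is a hypothetical stand-in for the asker's C++/Boost code; the same pattern exists there with boost::condition_variable):

```python
import threading

cond = threading.Condition()
work_ready = False
woken = []

def b_thread(i):
    with cond:
        while not work_ready:   # sleep on the condition, don't busy-wait
            cond.wait()         # consumes no CPU while blocked
        woken.append(i)

bs = [threading.Thread(target=b_thread, args=(i,)) for i in range(5)]
for t in bs:
    t.start()

# ... A does its important work here, with the cores to itself ...

with cond:
    work_ready = True
    cond.notify_all()           # A wakes the B threads when it is done
for t in bs:
    t.join()

print(sorted(woken))  # [0, 1, 2, 3, 4]
```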
Linux uses dynamic thread priority. The static priority you set with nice only bounds the dynamic priority.
When a thread uses its whole timeslice, the kernel lowers its priority; when a thread does not use its whole timeslice (by doing I/O, calling wait/yield, etc.), the kernel raises its priority.
So my guess is that the b process's threads have higher priority, so they execute more often.
