Atomic instructions and variable update visibility - multithreading

On most common platforms (the most important being x86; I understand that some platforms have extremely weak memory models that provide almost no guarantees useful for multithreading, but I don't care about rare counter-examples), is the following code safe?
Thread 1:
someVariable = doStuff();
atomicSet(stuffDoneFlag, 1);
Thread 2:
while(!atomicRead(stuffDoneFlag)) {} // Wait for stuffDoneFlag to be set.
doMoreStuff(someVariable);
Assuming standard, reasonable implementations of atomic ops:
Is Thread 1's assignment to someVariable guaranteed to complete before atomicSet() is called?
Is Thread 2 guaranteed to see the assignment to someVariable before calling doMoreStuff() provided it reads stuffDoneFlag atomically?
Edits:
The implementation of atomic ops I'm using applies the x86 LOCK prefix in each
operation, if that helps.
Assume stuffDoneFlag is properly cleared somehow. How isn't important.
This is a very simplified example. I created it this way so that you wouldn't have to understand the whole context of the problem to answer it. I know it's not efficient.

If your actual x86 code has the store to someVariable before the store in atomicSet in Thread 1, and the load of someVariable after the load in atomicRead in Thread 2, then you should be fine. Intel's Software Developer's Manual Volume 3A specifies the memory model for x86 in Section 8.2, and the intra-thread store-store and load-load ordering constraints should be enough here.
However, there may not be anything preventing your compiler from reordering the instructions generated from whatever higher-level language you are using across the atomic operations.
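If C++11 is available, a minimal sketch of the same pattern with std::atomic and release/acquire ordering, which constrains both the compiler and the CPU (doStuff and doMoreStuff are the question's placeholders):

#include <atomic>

int doStuff();                     // placeholders from the question
void doMoreStuff(int);

std::atomic<int> stuffDoneFlag(0);
int someVariable;                  // plain data, published via the flag

void thread1() {
    someVariable = doStuff();                           // plain store
    stuffDoneFlag.store(1, std::memory_order_release);  // publish: nothing above may sink below this
}

void thread2() {
    while (!stuffDoneFlag.load(std::memory_order_acquire)) {
        // spin; the acquire load pairs with the release store above
    }
    doMoreStuff(someVariable);     // guaranteed to see thread1's assignment
}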

1) Yes.
2) Yes.
Both work.

This code looks thread safe, but I question the efficiency of your spinlock (the while loop) unless you are only spinning for a very short amount of time. There is no guarantee on any given system that Thread 2 won't completely hog all processing time.
I would recommend using some actual synchronization primitives (looks like boost::condition_variable is what you want here) instead of relying on the spin lock.
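For reference, a minimal sketch of that approach using the standard-library equivalent, std::condition_variable (doStuff and doMoreStuff are the question's placeholders):

#include <condition_variable>
#include <mutex>

int doStuff();                 // placeholders from the question
void doMoreStuff(int);

std::mutex m;
std::condition_variable cv;
bool stuffDone = false;        // protected by m
int someVariable;

void thread1() {
    someVariable = doStuff();
    {
        std::lock_guard<std::mutex> lock(m);
        stuffDone = true;
    }
    cv.notify_one();           // wake the waiter without spinning
}

void thread2() {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, []{ return stuffDone; });  // sleeps until notified
    doMoreStuff(someVariable);
}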

The atomic instructions ensure that thread 2 waits for thread 1 to finish setting the variable before thread 2 proceeds. There are, however, two key issues:
1) someVariable must be declared volatile to ensure that the compiler does not optimise its accesses, e.g. by storing it in a register or deferring a write.
2) the second thread busy-waits for the signal (known as spinning). Your platform probably provides much better locking and signalling primitives and mechanisms, but a relatively straightforward improvement would be to simply sleep() in thread 2's while() body, as in the sketch below.
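A minimal sketch of that improvement, assuming C++11's <thread> is available (atomicRead, stuffDoneFlag, someVariable, and doMoreStuff are the question's placeholders, declared elsewhere):

#include <chrono>
#include <thread>

extern int stuffDoneFlag;      // the question's flag and helpers
int atomicRead(int &flag);
void doMoreStuff(int);
extern int someVariable;

void thread2() {
    while (!atomicRead(stuffDoneFlag)) {
        // yield the CPU instead of burning a whole core while waiting
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    doMoreStuff(someVariable);
}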

dsimcha wrote: "Assume stuffDoneFlag is properly cleared somehow. How isn't important."
This is not true!
Consider this scenario:
Thread 2 checks stuffDoneFlag; it is 1, so it starts reading someVariable.
Before Thread 2 finishes reading, the task scheduler interrupts it and suspends the task for some time.
Thread 1 accesses someVariable again and changes the memory contents.
The task scheduler switches Thread 2 back on, and it continues the job, but the memory contents of someVariable have changed!

Related

How to explain thread synchronization mechanism at OS level conceptually?

There is a lot of discussion on thread synchronization on SO as well as on many forums all over the Internet. However, I could not find precise information as to how exactly it happens at the OS level, conceptually.
As we all know there are these types of thread synchronization objects:
Mutex
Semaphore
Critical section
And as I understand it, allowing multiple threads to modify a resource at the same time (for example, two threads simultaneously changing bits of a variable in memory) is not a good idea, and so we use these objects. But then exactly the same problem should arise when multiple threads try to access these objects as well.
What really happens at the core? How exactly does the OS achieve this?
How can we explain this to someone at a conceptual level (rather than going into hardware or assembly level details)?
First let's sum up what the fundamental problem of threading really is: two threads try to access the same piece of memory at the same time. You can imagine that when this happens we can't guarantee that the piece of memory is in a valid state, and our program might be incorrect.
Keeping this very high level: part of the way processors work is by raising interrupts, which basically tell a thread to stop what it is doing and do something else. This is where much of the problem of threading lies. A thread can be interrupted in the middle of a task. Imagine one thread is interrupted in the middle of an operation, and some intermediate garbage value exists because the thread hasn't finished its task. Another thread could come along, read this value, and destroy the correctness of your program.
The OS achieves this with atomic instructions. Without getting into the details, imagine that there are some instructions that are guaranteed to either complete entirely or not happen at all. This means that if a thread checks the result of an instruction, it won't see an intermediate result. So an atomic add would show either the value before the add or after the add, but never a state during the add when there might be some intermediate value.
Now, if you have a few atomic instructions, you can imagine building higher-level abstractions that deal with threads and thread safety on top of them. Maybe the most basic example is a lock created with the test-and-set primitive. Take a look at this Wikipedia article: https://en.wikipedia.org/wiki/Test-and-set. That was probably a lot, because these things get pretty complex, but I will attempt to give an example that clarifies. If you have two processes that are trying to access some section of code, a very naive solution would be to create a lock variable:
boolean isLocked = false;
Any time a process tried to acquire this lock, it could merely check isLocked and wait until it is false before setting it to true and executing some code. For example...
while (isLocked) {
    // wait for isLocked == false
}
isLocked = true;
// execute the code you want to be locked
isLocked = false;
Of course, we know that something as simple as setting or reading a boolean can be interrupted and cause threading mayhem. So, the good folks that developed kernels and processors and hardware created an atomic test and set operation which returns the old value of a boolean and sets the new value to true. So of course you can implement our lock above by doing something like.
while (testAndSet(isLocked)) {
    // wait until the old value returned is false, i.e. the lock was free
}
// do some critical stuff
// unlock after you did the critical stuff
isLocked = false;
I only show the implementation of a basic lock above to prove the point that it is possible to build higher-level abstractions on atomic instructions. Atomic instructions are about as low level as you can get conceptually, in my opinion, without delving into hardware specifics. You can imagine, though, that within hardware, the hardware must somehow set a flag of some sort when memory is being read that precludes another thread from accessing the same memory.
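As a concrete sketch of such a lock, here is a test-and-set spinlock built on C++11's std::atomic_flag, assuming C++11 is available:

#include <atomic>

std::atomic_flag lockFlag = ATOMIC_FLAG_INIT;

void acquire() {
    // test_and_set atomically sets the flag and returns its previous value;
    // keep spinning while the previous value was true (someone holds the lock)
    while (lockFlag.test_and_set(std::memory_order_acquire)) { }
}

void release() {
    lockFlag.clear(std::memory_order_release);   // unlock
}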
Hope that helps!

Will mutex protection fail due to register promotion?

In an article about C++11 memory ordering, the author shows an example arguing that "a threads library will not work in C++03":
for (...) {
    ...
    if (mt) pthread_mutex_lock(...);
    x = ... x ...;
    if (mt) pthread_mutex_unlock(...);
}

// This should not have a data race, but if a "clever" compiler
// uses a technique called "register promotion", the code becomes:

r = x;
for (...) {
    ...
    if (mt) {
        x = r; pthread_mutex_lock(...); r = x;
    }
    r = ... r ...;
    if (mt) {
        x = r; pthread_mutex_unlock(...); r = x;
    }
}
x = r;
There are three questions:
1. Does this promotion break mutex protection only in C++03? What about the C language?
2. Do C++03 thread libraries stop working?
3. Can any other promotion cause the same problem?
If the example is wrong and thread libraries do work, then what about "Threads Cannot Be Implemented as a Library" by Hans Boehm?
POSIX functions pthread_mutex_lock and pthread_mutex_unlock are memory barriers, the compiler and/or CPU cannot reorder loads and stores around them. Otherwise the mutexes would be useless. That article is probably inaccurate.
See POSIX 4.12 Memory Synchronization:
Applications shall ensure that access to any memory location by more than one thread of control (threads or processes) is restricted such that no thread of control can read or modify a memory location while another thread of control may be modifying it. Such access is restricted using functions that synchronize thread execution and also synchronize memory with respect to other threads. The following functions synchronize memory with respect to other threads: [see the list on the website]
For single-threaded code, the state of the abstract machine is not directly observable: objects that aren't volatile are not guaranteed to have any particular state when you pause the only thread with a signal and observe it via ptrace or the equivalent. The only requirement is that the program execution has the same observable behavior as one possible execution of the abstract machine.
The observables are the interactions with the external world; basically, input/output on streams and actions on volatile objects.
A compiler for single-threaded code can generate code that performs operations on global variables or other objects that happen to be shared between threads, as long as the single-threaded semantics are respected. This is obviously the case if a global variable is changed in such a way that it gets back its original value.
For example, a compiler might emit code that increments then decrements a variable, at least in some rare cases; the goal would be to emit simpler code, at the cost of the occasional few unneeded operations.
Such changes to shared variables, which don't exist in the abstract machine, would obviously break multithreaded code that concurrently performs a real operation: such code has no race condition on the accesses to the shared variable, which are properly serialized, but the generated code introduces a race that breaks the program.
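As a hypothetical illustration of such an invented write (the names here are made up, and whether a given compiler actually does this is compiler-specific):

int sharedCounter;             // hypothetical global shared with other threads

// What the source says: the write happens only in the rare case.
void f(bool rare) {
    if (rare)
        ++sharedCounter;
}

// What a compiler serving only single-threaded semantics may emit:
// a speculative write that is undone, restoring the original value.
// Single-threaded behavior is identical, but a concurrent increment
// by another thread can now be lost between the two statements.
void f_as_if_transformed(bool rare) {
    ++sharedCounter;
    if (!rare)
        --sharedCounter;
}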

Do shared variables between threads always require protection?

Let's say I have two threads reading and modifying a bool / int "state". The reads and writes are guaranteed to be atomic by the processor.
Thread 1:
if (state == ENABLED)
{
Process_Data()
}
Thread 2:
state = DISABLED
In this case, yes, Thread 1 can read the state, enter its "if", and start Process_Data, and then Thread 2 can change state. But it isn't incorrect at that point to still go on with Process_Data. Yes, if we peek under the hood we have an inconsistency: state is DISABLED and we have entered the Process_Data function. But after it executes, the next time Thread 1 runs it will see state == DISABLED and not call Process_Data.
My question is: do I still need a lock in both these threads to make Thread 1's check-state-and-process atomic and Thread 2's write atomic (with respect to Thread 1)?
You've addressed the atomicity concerns. However, in modern processors, you have to worry not just about atomicity, but also memory visibility.
For example, thread 1 is executing on one processor, and reads ENABLED from state - from its processor's cache.
Meanwhile, thread 2 is executing on a different processor, and writes DISABLED to state on its processor's cache.
Without further measures (in some languages, for example, declaring state volatile), the DISABLED value may not get flushed to main memory for a long time. It may never get flushed at all if thread 2 eventually changes the value back to ENABLED.
Meanwhile, even if the DISABLED value is flushed to main memory, thread 1 may never pick it up, instead continuing to use its cached value of ENABLED indefinitely.
Generally if you want to share values between threads, it's better to do so explicitly using the appropriate mechanisms for the programming language and environment that you're using.
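For instance, a minimal sketch in C++11 using std::atomic for the state flag (Process_Data and the state values are the question's placeholders):

#include <atomic>

enum State { DISABLED, ENABLED };
std::atomic<int> state(ENABLED);   // hypothetical shared state from the question

void Process_Data();               // the question's placeholder

void thread1() {
    if (state.load(std::memory_order_acquire) == ENABLED) {
        Process_Data();            // may still run once right after a disable, as discussed
    }
}

void thread2() {
    state.store(DISABLED, std::memory_order_release);  // visible to subsequent loads
}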
There's no way to answer your question generically. If the specification for the language, compiler, threading library and/or platform you are using says you need protection, then you do. If it says you don't, then you don't. I believe every threading library or multi-threading implementation specifies rules for sane use and sharing of data. If yours doesn't, it's a piece of junk that is impossible to use reliably and you should get a better one.
Do not make the mistake of thinking, "This is safe because I can't think of any way it can go wrong." Or "I tested this, and I couldn't get it to fail, so it's safe." That kind of thinking produces fragile code that tends to fail when you change compiler options, upgrade your CPU, or run the program on a different platform. Follow the specifications for the tools you are using.

Thread visibility among one process

I've been reading the book Cracking the Coding Interview recently, but there's one paragraph on page 257 that confuses me a lot:
A thread is a particular execution path of a process; when one thread modifies a process resource, the change is immediately visible to sibling threads.
IIRC, if one thread makes a change to a variable, the change is first saved in the CPU cache (say, the L1 cache), and is not guaranteed to be synchronized to other threads unless the variable is declared volatile.
Am I right?
Nope, you're wrong. But this is a very common misunderstanding.
Every modern multi-core CPU has hardware cache coherence. The L1, and similar caches, are invisible. CPU caches like the L1 cache have nothing to do with memory visibility.
Changes are visible immediately when a thread modifies a process resource. The issue is optimizations that cause process resources not to be modified in precisely the order the code specifies.
If your code has k = j; i = 4; if (j == 2) foo(); an optimizer might see that your first assignment reads the value of j. So it might not bother reading it again when you compare it to 2 since it "knows" that it can't have changed. However, another thread might have changed it. So optimizations of some kinds need to be disabled when synchronization between threads is required. That's what things like volatile do.
If compilers and CPUs made no optimizations and executed a program precisely as it was written, volatile would never be needed. Memory visibility is about optimizations in code (some done by the compiler, some by the CPU), not caches.
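A small C++ sketch of that kind of cached read and one way to rule it out; all names here are invented for illustration:

#include <atomic>

int plainFlag;                     // invented names for illustration
std::atomic<int> atomicFlag;
void foo();

void mayCacheTheRead(int &k) {
    k = plainFlag;                 // the compiler has read plainFlag once...
    if (plainFlag == 2)            // ...and may reuse that value here,
        foo();                     // missing a change made by another thread
}

void alwaysReloads(int &k) {
    k = atomicFlag.load();         // with std::atomic (volatile is similar in
    if (atomicFlag.load() == 2)    // this narrow respect), each load really
        foo();                     // touches memory again
}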
I think the text you are quoting is incorrect. The whole idea of the Java Memory Model is to deal with the complex optimizations by modern software & hardware, so that programmers can determine what writes are visible by the respective reads in other threads.
Unless a program in Java is properly synchronized, you can't guarantee that changes by one thread are immediately visible to other threads. Maybe the text refers to a very specific (and weak) memory model.
Usage of volatile variables is just one way to synchronize threads, and it's not suitable for all scenarios.
--Edit--
I think I understand the confusion now... I agree with David Schwartz, assuming that:
1) "modifies a process resource" means the actual change of the resource, not just the execution of a write instruction written in some high level computer language.
2) "is immediately visible to sibling threads" means that other threads are able to see it; it doesn't mean that a thread in your program will necessarily see it. You may still need to use synchronization tools in order to disable optimizations that bypass the actual access to the resource.

linux thread synchronization

I am new to linux and linux threads. I have spent some time googling to try to understand the differences between all the functions available for thread synchronization. I still have some questions.
I have found all of these different types of synchronizations, each with a number of functions for locking, unlocking, testing the lock, etc.
gcc atomic operations
futexes
mutexes
spinlocks
seqlocks
rculocks
conditions
semaphores
My current (but probably flawed) understanding is this:
Semaphores are process-wide, involve the filesystem (virtually, I assume), and are probably the slowest.
Futexes might be the base locking mechanism used by mutexes, spinlocks, seqlocks, and rculocks. Futexes might be faster than the locking mechanisms that are based on them.
Spinlocks don't block and thus avoid context switches. However, they avoid the context switch at the expense of consuming all the cycles on a CPU until the lock is released (spinning). They should only be used on multi-processor systems, for obvious reasons. Never sleep while holding a spinlock.
A seqlock just tells you, when you have finished your work, whether a writer changed the data the work was based on. You have to go back and repeat the work in that case.
Atomic operations are the fastest sync primitives, and are probably used in all the above locking mechanisms. You do not want to use atomic operations on all the fields of your shared data. You want to use a lock (mutex, futex, spin, seq, rcu) or a single atomic operation on a lock flag when you are accessing multiple data fields.
My questions go like this:
Am I right so far with my assumptions?
Does anyone know the CPU cycle cost of the various options? I am adding parallelism to the app so we can get better wall-time response at the expense of running fewer app instances per box. Performance is the utmost consideration. I don't want to consume CPU with context switching, spinning, or lots of extra CPU cycles to read and write shared memory. I am absolutely concerned with the number of CPU cycles consumed.
Which (if any) of the locks prevent interruption of a thread by the scheduler or interrupts... or am I just an idiot and all synchronization mechanisms do this? What kinds of interruption are prevented? Can I block all threads, or just the threads on the locking thread's CPU? This question stems from my fear of interrupting a thread holding a lock for a very commonly used function. I expect that the scheduler might schedule any number of other workers who will likely run into this function and then block because it is locked. A lot of context switching would be wasted until the thread with the lock gets rescheduled and finishes. I can rewrite this function to minimize lock time, but still, it is so commonly called that I would like to use a lock that prevents interruption... across all processors.
I am writing user code...so I get software interrupts, not hardware ones...right? I should stay away from any functions (spin/seq locks) that have the word "irq" in them.
Which locks are for writing kernel or driver code and which are meant for user mode?
Does anyone think using an atomic operation to have multiple threads move through a linked list is nuts? I am thinking of atomically changing the current-item pointer to the next item in the list. If the attempt works, then the thread can safely use the data the current item pointed to before it was moved. Other threads would then move further along the list.
Futexes? Any reason to use them instead of mutexes?
Is there a better way than using a condition to sleep a thread when there is no work?
When using gcc atomic ops, specifically test_and_set, can I get a performance increase by doing a non-atomic test first and then using test_and_set to confirm? I know this will be case specific, so here is the case. There is a large collection of work items, say thousands. Each work item has a flag that is initialized to 0. When a thread has exclusive access to the work item, the flag will be one. There will be lots of worker threads. Any time a thread is looking for work, it can non-atomically test for 1. If it reads a 1, we know for certain that the work is unavailable. If it reads a zero, it needs to perform the atomic test_and_set to confirm. So if the atomic test_and_set costs 500 CPU cycles because it disables pipelining, makes CPUs communicate, and flushes/fills L2 caches, and a simple test is 1 cycle, then as long as I had a better ratio than 500 to 1 when stumbling upon already-completed work items, this would be a win.
I hope to use mutexes or spinlocks sparingly to protect sections of code that I want only one thread on the SYSTEM (not just one per CPU) to access at a time. I hope to use gcc atomic ops sparingly to select work and minimize use of mutexes and spinlocks. For instance: a flag in a work item can be checked to see if a thread has worked it (0 = no, 1 = yes or in progress). A simple test_and_set tells the thread whether it has the work or needs to move on (see the sketch below). I hope to use conditions to wake up threads when there is work.
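A minimal sketch of that test-then-test_and_set idea, assuming gcc's __sync_lock_test_and_set builtin and a made-up WorkItem type:

// Hypothetical work item; claimed is 0 when free, 1 when taken.
struct WorkItem {
    volatile int claimed;
    // ... payload ...
};

// Returns true if this thread won the work item.
bool tryClaim(WorkItem *w) {
    if (w->claimed)                              // cheap non-atomic pre-check
        return false;                            // already taken, skip the expensive op
    // gcc builtin: atomically writes 1 and returns the previous value
    return __sync_lock_test_and_set(&w->claimed, 1) == 0;
}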
Thanks!
Application code should probably use posix thread functions. I assume you have man pages so type
man pthread_mutex_init
man pthread_rwlock_init
man pthread_spin_init
Read up on them and the functions that operate on them to figure out what you need.
If you're doing kernel mode programming then it's a different story. You'll need to have a feel for what you are doing, how long it takes, and what context it gets called in to have any idea what you need to use.
Thanks to all who answered. We resorted to using gcc atomic operations to synchronize all of our threads. The atomic ops were about 2x slower than setting a value without synchronization, but orders of magnitude faster than locking a mutex, changing the value, and then unlocking the mutex (this becomes super slow when you start having threads bang into the locks...). We only use pthread_create, attr, cancel, and kill. We use pthread_kill to signal threads that we put to sleep to wake up. This method is 40x faster than cond_wait. So basically... use pthread mutexes if you have time to waste.
In addition, you should check out these books:
Pthreads Programming: A POSIX
Standard for Better Multiprocessing
and
Programming with POSIX(R) Threads
Regarding question #8:
Is there a better way than using a condition to sleep a thread when there is no work?
Yes, I think the best approach, instead of using sleep, is to use functions like sem_post() and sem_wait() from semaphore.h.
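A minimal sketch of that approach using the POSIX semaphore API (the queue handling is elided and the names are invented):

#include <semaphore.h>

sem_t workAvailable;               // counts queued work items

void setup() {
    sem_init(&workAvailable, 0 /* not shared across processes */, 0);
}

void producer() {
    // ... enqueue a work item ...
    sem_post(&workAvailable);      // wakes exactly one sleeping worker
}

void worker() {
    for (;;) {
        sem_wait(&workAvailable);  // sleeps in the kernel until work arrives
        // ... dequeue and process one work item ...
    }
}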
Regards
A note on futexes: they are more descriptively called fast userspace mutexes. With a futex, the kernel is involved only when arbitration is required, which is what provides the speedup and savings.
Implementing a futex can be extremely tricky, and debugging one can lead to madness. Unless you really, really, really need the speed, it's usually best to use the pthread mutex implementation.
Synchronization is never exactly easy, but trying to implement your own in userspace makes it inordinately difficult.
