Why do LR/SC atomic instructions work? - multithreading

I'm a beginner at multithreading, and I'm finding it very different from single-threaded programming. Here is my question:
Here are two threads: T1 and T2
If T1 wants to write something to memory, it first reads the location and sets a flag for it (this is the LR, which puts the address in the 'reservation set'); then, before writing, T1 needs to check the reservation set. If the check succeeds, it writes.
The problem is: say T1 has set the flag (LR) and finished the check, and the next step is the write. Then T2 starts working: T2 sets its flag, passes the check, and writes something to the memory. Then T1 runs again; T1 has already checked the flag, so it just writes and quits, which means T1's write overwrites T2's result. Is this possible? If so, how can LR/SC make sure the result is stored correctly?
LR:
lr.{w/d}.{aq/rl} rd, (rs1)
Load the value at address rs1 into rd and add that address to the hart's 'reservation set'.
SC pseudocode:
if (is_reserved(rs1)) {    // does this hart still hold a reservation on rs1?
    *rs1 = rs2;
    rd = 0;                // success
} else {
    rd = 1;                // failure: the reservation was lost
}
clean_reservation_set(cur_hart);
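For what it's worth, the scenario above assumes the reservation check and the write are separate steps that another hart can slip between. In the real SC instruction they are one indivisible hardware operation, and any intervening store to the reserved address (such as T2's write) clears T1's reservation first, so T1's SC simply fails and the code retries. A sketch of the usual retry loop, in the same pseudocode style as above (lr()/sc() are hypothetical helpers modeling the hardware primitives, not a real API):

// lr(addr)    - load *addr and set this hart's reservation on addr
// sc(addr, v) - store v to *addr only if the reservation still holds;
//               returns 0 on success, nonzero on failure
int atomic_add(int *addr, int delta)
{
    int old;
    do {
        old = lr(addr);               // load-reserved
    } while (sc(addr, old + delta));  // any intervening write cleared the
                                      // reservation, so the sc fails: retry
    return old;
}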
I tried reading the RISC-V manual and the assembly code, but it still doesn't resolve the problem I described above. I wonder whether my understanding of LR/SC or of multithreading is wrong. If so, what material should I consult to understand them more deeply?

Related

Can a simple write to one variable modify the state of another?

Taken from Operating Systems: Principles and Practice, Vol. 3, Chapter 5, Question 4. This isn't homework; I'm just curious.
Suppose that you mistakenly create an automatic (local) variable v in one
thread t1 and pass a pointer to v to another thread t2. Is it possible that a
write by t1 to some variable other than v will change the state of v as
observed by t2? If so, explain how this can happen and give an example. If
not, explain why not.
I don't think this is possible unless T1 declared v within a lower-level scope that was popped off the stack via modification of the aforementioned unrelated variable, followed by a swift reallocation of that memory to a new variable. But what I've described isn't a simple write, and that new variable isn't v anymore; at least not to T1.
Were this possible, I think it would imply that a simple write to any unrelated variable could somehow change another memory location even in single threaded programs.
So is this possible?
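For what it's worth, one concrete way a "simple" write can do this is an out-of-bounds write through a neighboring local variable in the same stack frame. A sketch (deliberately undefined behavior, and entirely layout-dependent, so only illustrative):

#include <stdio.h>

void t1_body(void)
{
    int v = 42;             // imagine &v has been handed to thread t2
    int u[4];
    int i = 4;              // one past the end of u
    u[i] = 7;               // undefined behavior: on some compilers and
                            // stack layouts this write lands on v
    printf("v = %d\n", v);  // may print 7
}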

Love2D, thread execution and input

Please, I've got a problem with Love2D thread functions, and I didn't find any examples or explanations I could understand anywhere.
First, in my main file I have:
thread1 = love.thread.newThread("ambient.lua")
thread2 = love.thread.newThread("ambient.lua")
thread1:start()
thread2:start()
ambient.lua contains:
random1 = love.math.random(1, 10)
gen1 = love.audio.newSource("audio/ambient/gen/a/gen_a_" .. random1 .. ".mp3",
    "static")
gen1:setVolume(1.0)
gen1:setLooping(false)
gen1:play()
This works fine. The problem is that when I ask var = thread1:isRunning() in the same step or with a delay, while the audio is playing, and try to print it, it throws an error (it is supposedly nil). When the audio finishes, I see that the memory is cleared. Also, if I link thread1:start() to a mouse click and trigger it several times in a short period, memory usage goes up like crazy, then after a time similar to the sample length it starts to decrease. My questions: am I creating multiple threads? In that case, do they even terminate properly after the samples end? Or is the thread's lifetime just one step long, and am I only creating multiple audio sources playing on the same thread? Or is the problem in the check itself?
The next issue is that I need to call thread1:start() with values:
thread1:start(volume, sampleID)
and I have no clue how to address them in the thread itself. Guides and examples say "vararg" reference, but I didn't see any decent explanation or any example of using "..." to pass values into threads. I need an example of how to write it. Even if this audio fiddle is not a great example, I will surely need this for AI. No need for complicated input, just simple x, y, size, target_x, target_y values.
and I have no clue how to address them in the thread itself. Guides and
examples say "vararg" reference, but I didn't see any decent explanation
or any example of using "..." to pass values into threads
You didn't read the manuals closely enough. Every loaded Lua chunk (section 2.4.1 of the Lua 5.1 manual) is an anonymous function with a variable number of arguments. When you call love.thread.newThread("ambient.lua"), Love2D creates a new chunk, so the usual Lua rules apply here.
In your example, the volume and sampleID parameters would be retrieved within the thread like this:
local volume, sampleID = ...   -- the arguments passed to thread1:start()
gen1 = love.audio.newSource(get_stream_by_id(sampleID), "static")
gen1:setVolume(volume)
gen1:setLooping(false)
gen1:play()

Reasoning about IORef operation reordering in concurrent programs

The docs say:
In a concurrent program, IORef operations may appear out-of-order to
another thread, depending on the memory model of the underlying
processor architecture...The implementation is required to ensure that
reordering of memory operations cannot cause type-correct code to go
wrong. In particular, when inspecting the value read from an IORef,
the memory writes that created that value must have occurred from the
point of view of the current thread.
Which I'm not even entirely sure how to parse. Edward Yang says
In other words, “We give no guarantees about reordering, except that
you will not have any type-safety violations.” ...
the last sentence remarks that an IORef is not allowed to point to
uninitialized memory
So... it won't break Haskell as a whole; not very helpful. The discussion from which the memory model example arose also left me with questions (even Simon Marlow seemed a bit surprised).
Things that seem clear to me from the documentation
within a thread an atomicModifyIORef "is never observed to take place ahead of any earlier IORef operations, or after any later IORef operations" i.e. we get a partial ordering of: stuff above the atomic mod -> atomic mod -> stuff after. Although, the wording "is never observed" here is suggestive of spooky behavior that I haven't anticipated.
A readIORef x might be moved before writeIORef y, at least when there are no data dependencies
Logically I don't see how something like readIORef x >>= writeIORef y could be reordered
What isn't clear to me
Will newIORef False >>= \v-> writeIORef v True >> readIORef v always return True?
In the maybePrint case (from the IORef docs) would a readIORef myRef (along with maybe a seq or something) before readIORef yourRef have forced a barrier to reordering?
What's the straightforward mental model I should have? Is it something like:
within and from the point of view of an individual thread, the
ordering of IORef operations will appear sane and sequential; but the
compiler may actually reorder operations in such a way that breaks
certain assumptions in a concurrent system; however when a thread does
atomicModifyIORef, no threads will observe operations on that
IORef that appeared above the atomicModifyIORef to happen after,
and vice versa.
...? If not, what's the corrected version of the above?
If your response is "don't use IORef in concurrent code; use TVar" please convince me with specific facts and concrete examples of the kind of things you can't reason about with IORef.
I don't know Haskell concurrency, but I know something about memory models.
Processors can reorder instructions however they like: loads may go ahead of loads, loads may go ahead of stores, loads of dependent values may go ahead of loads of the values they depend on (a[i] may load the value from the array first, then the reference to the array a!), and stores may be reordered with each other. You simply cannot point at two things and say "these definitely appear in a particular order, because there is no way they can be reordered". But for concurrent algorithms to operate, they need to observe the state of other threads, and for that it is important for thread state to change in a particular order. This is achieved by placing barriers between instructions, which guarantee that the order of instructions appears the same to all processors.
Typically (in one of the simplest models), you want two types of ordered instructions: an ordered load that does not go ahead of any other ordered loads or stores, and an ordered store that does not go ahead of any instruction at all, plus a guarantee that all ordered instructions appear in the same order to all processors. This way you can reason about the IRIW kind of problem:
Thread 1: x=1
Thread 2: y=1
Thread 3: r1=x; r2=y;
Thread 4: r4=y; r3=x;
If all of these operations are ordered loads and ordered stores, then you can conclude that the outcome (r1,r2,r3,r4) = (1,0,0,1) is not possible. Indeed, the ordered stores in Threads 1 and 2 must appear in some single order to all threads, and r1==1, r2==0 is a witness that y=1 executed after x=1. In turn, this means Thread 4 can never observe r4==1 without also observing r3==1: the load of x into r3 executes after the load of y into r4, and if the stores executed in that order, observing y==1 implies x==1.
Also, if the loads and stores were not ordered, the processors would generally be allowed to observe the assignments in different orders: one might see x=1 appear before y=1, another might see y=1 appear before x=1, so any combination of the values r1,r2,r3,r4 would be permitted.
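For concreteness, here is the IRIW test sketched in C11 atomics (the use of C and all names are mine, not anything from the Haskell libraries). With the default memory_order_seq_cst, every thread agrees on a single order of the two stores, so (r1,r2,r3,r4) = (1,0,0,1) can never be observed; with relaxed atomics it can.

#include <pthread.h>
#include <stdatomic.h>

atomic_int x, y;
int r1, r2, r3, r4;

// atomic_store/atomic_load default to memory_order_seq_cst - the
// "ordered store" and "ordered load" described above.
void *t1(void *arg) { atomic_store(&x, 1); return NULL; }
void *t2(void *arg) { atomic_store(&y, 1); return NULL; }
void *t3(void *arg) { r1 = atomic_load(&x); r2 = atomic_load(&y); return NULL; }
void *t4(void *arg) { r4 = atomic_load(&y); r3 = atomic_load(&x); return NULL; }

int main(void)
{
    pthread_t a, b, c, d;
    pthread_create(&a, NULL, t1, NULL); pthread_create(&b, NULL, t2, NULL);
    pthread_create(&c, NULL, t3, NULL); pthread_create(&d, NULL, t4, NULL);
    pthread_join(a, NULL); pthread_join(b, NULL);
    pthread_join(c, NULL); pthread_join(d, NULL);
    return 0;  // one run yields one outcome; under seq_cst (1,0,0,1) never appears
}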
This can be implemented like so:
ordered load:
    load x
    load-load   -- barrier: stops later loads from going ahead of preceding loads
    load-store  -- barrier: nothing may go ahead of an ordered load
ordered store:
    load-store
    store-store -- an ordered store must appear after all stores preceding it
                -- in program order - serialize all stores (flush write buffers)
    store x,v
    store-load  -- later ordered loads must not go ahead of an ordered store
                -- preceding them in program order
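In C11 terms (my analogy; the Haskell docs promise nothing of the sort), these two "ordered" operations correspond to sequentially consistent atomic accesses:

#include <stdatomic.h>

// Sketch: seq_cst loads and stores behave like the "ordered load" and
// "ordered store" above - all of them appear in one total order that
// every thread agrees on.
int  ordered_load(atomic_int *p)         { return atomic_load_explicit(p, memory_order_seq_cst); }
void ordered_store(atomic_int *p, int v) { atomic_store_explicit(p, v, memory_order_seq_cst); }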
Of these two, I can see that IORef implements an ordered store (atomicWriteIORef), but I don't see an ordered load (atomicReadIORef), without which you cannot reason about the IRIW problem above. This is not a problem if your target platform is x86, because on that platform all loads execute in program order and stores never go ahead of loads (in effect, all loads are ordered loads).
An atomic update (atomicModifyIORef) looks to me like an implementation of a so-called CAS loop (a compare-and-set loop, which retries until it atomically replaces the value a with b). You can see an atomic modify operation as a fusion of an ordered load and an ordered store, with all of those barriers, executed atomically - no processor is allowed to insert a modifying instruction between the load and the store of a CAS.
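A sketch of such a CAS loop in C11 (my illustration, not GHC's actual implementation):

#include <stdatomic.h>

// Atomically replace the current value with f(current), retrying until no
// other thread slips a modification in between the load and the store.
int atomic_modify(atomic_int *p, int (*f)(int))
{
    int old = atomic_load(p);
    while (!atomic_compare_exchange_weak(p, &old, f(old)))
        ;  // on failure, old is refreshed with the current value; retry
    return old;
}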
Furthermore, writeIORef is cheaper than atomicWriteIORef, so you want to use writeIORef as much as your inter-thread communication protocol permits. While writeIORef x vx >> writeIORef y vy >> atomicWriteIORef z vz >> readIORef t does not guarantee the order in which the writeIORefs appear to other threads with respect to each other, there is a guarantee that both will appear before the atomicWriteIORef - so, seeing z==vz, you can conclude that at this moment x==vx and y==vy, and that IORef t was loaded after the stores to x, y, z became observable by other threads. This latter point requires readIORef to be an ordered load, which as far as I can tell is not provided, but it will work like an ordered load on x86.
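The reasoning in that paragraph is the classic release/acquire pattern; a C11 sketch of it (again my analogy, not the IORef implementation):

#include <stdatomic.h>

int x, y;      // plain, unordered data (the two writeIORefs)
atomic_int z;  // the atomicWriteIORef

void writer(void)
{
    x = 1;
    y = 2;
    // release store: the plain writes above must be visible before z==1 is
    atomic_store_explicit(&z, 1, memory_order_release);
}

void reader(void)
{
    // acquire load: seeing z==1 guarantees the writes to x and y are visible
    if (atomic_load_explicit(&z, memory_order_acquire) == 1) {
        // here x == 1 and y == 2 are guaranteed
    }
}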
Typically you don't use concrete values of x, y, z, when reasoning about the algorithm. Instead, some algorithm-dependent invariants about the assigned values must hold, and can be proven - for example, like in IRIW case you can guarantee that Thread 4 will never see (0,1)=(r3,r4), if Thread 3 sees (1,0)=(r1,r2), and Thread 3 can take advantage of this: this means something is mutually excluded without acquiring any mutex or lock.
Here is an example (not in Haskell) that will not work if loads are not ordered, or if ordered stores do not flush write buffers (the requirement that written values become visible before the ordered load executes).
Suppose z will show either x, until y is computed, or y, once x has been computed too. Don't ask why; it is not easy to see outside the wider context - it is a kind of queue - just enjoy the sort of reasoning that becomes possible.
Thread 1: x=1;
          if (z==0) compareAndSet(z, 0, y == 0 ? x : y);
Thread 2: y=2;
          if (x != 0) while ((tmp=z) != y && !compareAndSet(z, tmp, y));
So, two threads set x and y, then set z to x or y, depending on whether y or x has been computed too. Assume everything is initially 0. Translating into loads and stores:
Thread 1: store x,1
          load z
          if ==0 then
              load y
              if ==0 then load x    -- if the loaded y is still 0, load x into tmp
              else load y           -- otherwise, load y into tmp
              CAS z, 0, tmp         -- CAS whatever was loaded by the if above
                                    -- (the CAS may fail, but see the explanation)

Thread 2: store y,2
          load x
          if !=0 then
              loop: load z          -- into tmp
                    load y
                    if !=tmp then   -- compare the loaded y to tmp
                        CAS z, tmp, y        -- if z is still tmp, set it to y
                        if ! then goto loop  -- if the CAS did not succeed, retry
If Thread 1's load z is not an ordered load, it will be allowed to go ahead of the ordered store (store x). That means that wherever z is loaded to (a register, a cache line, the stack, ...), the value is one that existed before the value of x could be visible. Looking at that value is useless - you cannot judge from it how far Thread 2 has gotten. For the same reason you must have a guarantee that the write buffers were flushed before load z executed - otherwise it will still appear as a load of a value that existed before Thread 2 could see the value of x. This is important, as will become clear below.
If Thread 2's load x or load z is not an ordered load, it may go ahead of store y and will observe values that were written before y became visible to other threads.
However, note that if the loads and stores are ordered, the threads can negotiate who is to set the value of z without contending for z. For example, if Thread 2 observes x==0, there is a guarantee that Thread 1 will execute x=1 later and will see z==0 after that - so Thread 2 can leave without attempting to set z.
If Thread 1 observes z==0, then it should try to set z to x or y. So it first checks whether y has been set already. If it hasn't, it will be set in the future, so Thread 1 tries to set z to x - the CAS may fail, but only if Thread 2 concurrently set z to y, so there is no need to retry. Similarly, there is no need to retry if Thread 1 observed that y has been set: if the CAS fails, Thread 2 has set z to y. Thus Thread 1 sets z to x or y as required, and does not contend for z too much.
On the other hand, Thread 2 can check whether x has been computed already. If not, it will be Thread 1's job to set z. If Thread 1 has computed x, then Thread 2 needs to set z to y. Here we do need a CAS loop, because a single CAS may fail if Thread 1 is concurrently attempting to set z to x or y.
The important takeaway is that if "unrelated" loads and stores are not serialized (including flushing write buffers), no such reasoning is possible. However, once loads and stores are ordered, both threads can figure out the path each of them will take in the future, and that way eliminate contention in half the cases. Most of the time x and y will be computed at significantly different times, so if y is computed before x, it is likely that Thread 2 will not touch z at all. (Typically, "touching z" may also mean "wake up a thread waiting on a condition variable z", so it is not only a matter of loading something from memory.)
within a thread an atomicModifyIORef "is never observed to take place
ahead of any earlier IORef operations, or after any later IORef
operations" i.e. we get a partial ordering of: stuff above the atomic
mod -> atomic mod -> stuff after. Although, the wording "is never
observed" here is suggestive of spooky behavior that I haven't
anticipated.
"is never observed" is standard language when discussing memory reordering issues. For example, a CPU may issue a speculative read of a memory location earlier than necessary, so long as the value doesn't change between when the read is executed (early) and when the read should have been executed (in program order). That's entirely up to the CPU and cache though, it's never exposed to the programmer (hence language like "is never observed").
A readIORef x might be moved before writeIORef y, at least when there
are no data dependencies
True
Logically I don't see how something like readIORef x >>= writeIORef y
could be reordered
Correct, as that sequence has a data dependency. The value to be written depends upon the value returned from the first read.
For the other questions: newIORef False >>= \v-> writeIORef v True >> readIORef v will always return True (there's no opportunity for other threads to access the ref here).
In the maybePrint example, there's very little you can do to ensure this works reliably in the face of new optimizations added to future GHCs and across various CPU architectures. If you write:
writeIORef myRef True
x <- readIORef myRef
yourVal <- x `seq` readIORef yourRef
Even though GHC 7.6.3 produces correct cmm (and presumably asm, although I didn't check), there's nothing to stop a CPU with a relaxed memory model from moving the readIORef yourRef to before all of the myRef/seq stuff. The only 100% reliable way to prevent this is with a memory fence, which GHC doesn't provide. (Edward's blog post does go through some of the other things you can do now, as well as why you may not want to rely on them.)
I think your mental model is correct; however, it's important to know that the possible apparent reorderings introduced by concurrent ops can be really unintuitive.
Edit: at the cmm level, the code snippet above looks like this (simplified, pseudocode):
[StackPtr+offset] := True           -- writeIORef myRef True
x := [StackPtr+offset]              -- readIORef myRef
if (notEvaluated x) (evaluate x)    -- seq
yourVal := [StackPtr+offset2]       -- readIORef yourRef
So there are a couple things that can happen. GHC as it currently stands is unlikely to move the last line any earlier, but I think it could if doing so seemed more optimal. I'm more concerned that, if you compile via LLVM, the LLVM optimizer might replace the second line with the value that was just written, and then the third line might be constant-folded out of existence, which would make it more likely that the read could be moved earlier. And regardless of what GHC does, most CPU memory models allow the CPU itself to move the read earlier absent a memory barrier.
See http://en.wikipedia.org/wiki/Memory_ordering for non-atomic concurrent reads and writes. (Basically, when you don't use atomics, just look at the memory ordering model for your target CPU.)
Currently GHC can be regarded as not reordering your non-atomic (and imperative) loads and stores. However, GHC Haskell currently doesn't specify any sort of concurrent memory model, so those non-atomic operations will have the ordering semantics of the underlying CPU model, as linked above.
In other words, GHC currently has no formal concurrency memory model, and because optimization algorithms tend to be defined with respect to some model of equivalence, there's no reordering in play there.
That is: the only semantic model you can have right now is "the way it's implemented".
Shoot me an email! I'm working on patching up atomics for 7.10; let's try to cook up some semantics!
Edit: some folks who understand this problem better than me chimed in on the ghc-users thread here: http://www.haskell.org/pipermail/glasgow-haskell-users/2013-December/024473.html
Assume that I'm wrong both in this comment and in anything I said in the ghc-users thread :)

Lock code sections with languages that don't support multi-threading

In some cases we need to be sure that a section of code won't be used by one thread while another thread is in it. In languages that support multi-threading this is easy to achieve by "locking" those parts. But when we have to "simulate" threads, there's no built-in facility like the lock keyword in C# or the Lock interface in Java. The best way I've found to lock sections in these cases looks like this:
if (!locked) {
    locked = true;
    // do some stuff
    locked = false;
} else {
    // add to queue
}
What are the drawbacks of this solution? Is it worth actively using?
Two threads may both enter the if statement.
Explanation
Suppose T1 enters the if statement but has not yet changed the locked value. If the CPU now switches from T1 to T2, locked is still false, so T2 will enter the if statement too.
T1 -> !locked -> stopped -> waiting -> ...  // locked is still false, so T2
T2 -> waiting -> woken  -> !locked -> ...   // is also able to enter the if
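The root problem is that the test (!locked) and the set (locked = true) are two separate steps. Where an atomic primitive is available, the usual fix is to perform both as one indivisible operation; a sketch in C11 (assuming real threads and stdatomic.h, which the question's "simulated" threads may not have):

#include <stdatomic.h>

atomic_flag locked = ATOMIC_FLAG_INIT;

void enter_section(void)
{
    // atomically: was_set = locked; locked = true
    if (!atomic_flag_test_and_set(&locked)) {
        // do some stuff
        atomic_flag_clear(&locked);  // locked = false
    } else {
        // add to queue
    }
}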
You should take a look at semaphores and mutexes. If you only have two processes you can use Peterson's algorithm, but it still has a busy-wait to deal with!
Take a look at double-checked locking and the synchronization primitives provided by the underlying system or OS: mutex, semaphore, monitor, etc. You'll be surprised that even C++ does not have a lock keyword, yet everything is fine. C#'s lock is implemented using a monitor, and you can find more details here.
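For reference, here is a sketch of the double-checked locking pattern mentioned above, using C11 atomics plus a pthread mutex (my illustration; the acquire/release ordering is what makes the lock-free fast path safe):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool initialized;
static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;

void ensure_initialized(void (*init)(void))
{
    // fast path: skip the mutex once initialization is visible
    if (!atomic_load_explicit(&initialized, memory_order_acquire)) {
        pthread_mutex_lock(&init_lock);
        // second check: another thread may have initialized meanwhile
        if (!atomic_load_explicit(&initialized, memory_order_relaxed)) {
            init();  // runs exactly once
            atomic_store_explicit(&initialized, true, memory_order_release);
        }
        pthread_mutex_unlock(&init_lock);
    }
}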

Simple POSIX threads question

I have this POSIX thread:
void *subthread(void *arg)
{
    while (!quit_thread) {
        // do something
        ...
        // don't waste cpu cycles
        if (!quit_thread) usleep(500);
    }
    // free resources
    ...
    // tell main thread we're done
    quit_thread = FALSE;
    return NULL;
}
Now I want to terminate subthread() from my main thread. I've tried the following:
quit_thread = TRUE;
// wait until subthread() has cleaned its resources
while(quit_thread);
But it does not work! The while() loop never exits, although my subthread clearly sets quit_thread to FALSE after having freed its resources!
If I modify my shutdown code like this:
quit_thread = TRUE;
// wait until subthread() has cleaned its resources
while(quit_thread) usleep(10);
Then everything works fine! Could someone explain to me why the first solution does not work, and why the version with usleep(10) suddenly does? I know that this is not a pretty solution - I could use semaphores/signals for this - but I'd like to learn something about multithreading, so I'd like to know why my first solution doesn't work.
Thanks!
Without a memory fence, there is no guarantee that values written in one thread will be seen in another. Most of the pthread primitives introduce a barrier, as do several system calls such as usleep. Using a mutex around both the read and the write introduces a barrier, and more generally prevents a multi-byte value from being visible in a partially written state.
You also need to separate the idea of asking a thread to stop executing, and reporting that it has stopped, and appear to be using the same variable for both.
What's most likely happening is that your compiler is not aware that quit_thread can be changed by another thread (C didn't know about threads, at least at the time this question was asked). Because of that, it is optimising the while loop into an infinite loop.
In other words, it looks at this code:
quit_thread = TRUE;
while(quit_thread);
and thinks to itself, "Hah, nothing in that loop can ever change quit_thread to FALSE, so the coder obviously just meant to write while (TRUE);".
When you add the call to usleep, the compiler has another think about it and assumes that the function call may change the global, so it plays it safe and doesn't optimise it.
Normally you would mark the variable as volatile to stop the compiler from optimising it away but, in this case, you should use the facilities provided by pthreads: join the thread after setting the flag to true (and don't have the sub-thread reset it; do that in the main thread after the join if it's necessary). The reason is that a join is likely to be more efficient than a continuous loop waiting for a variable to change, since the thread doing the join will most likely not be scheduled until the join can complete.
In your spinning solution, the joining thread will most likely continue to run and suck up CPU grunt.
In other words, do something like:
Main thread              Child thread
-------------------      -------------------
fStop = false
start Child              Initialise
Do some other stuff      while not fStop:
fStop = true                 Do what you have to do
                         Finish up and exit
join to Child
Do yet more stuff
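A minimal runnable sketch of that pattern (the atomic flag is my addition to make the cross-thread visibility explicit; all names are illustrative):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <unistd.h>

atomic_bool f_stop;  // atomic, so the main thread's write is visible to the child

void *child(void *arg)
{
    // Initialise
    while (!atomic_load(&f_stop)) {
        // do what you have to do
        usleep(500);
    }
    // finish up and exit
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, child, NULL);
    // do some other stuff
    atomic_store(&f_stop, true);
    pthread_join(t, NULL);  // blocks until the child has cleaned up and returned
    // do yet more stuff
    return 0;
}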
And, as an aside, you should technically protect shared variables with mutexes, but this is one of the few cases where it's okay: one-way communication where half-changed values of a variable don't matter (false/not-false).
The reason you normally mutex-protect a variable is to stop one thread seeing it in a half-changed state. Let's say you have a two-byte integer for a count of some objects, and it's set to 0x00ff (255).
Let's further say that thread A tries to increment that count but it's not an atomic operation. It changes the top byte to 0x01 but, before it gets a chance to change the bottom byte to 0x00, thread B swoops in and reads it as 0x01ff.
Now that's not going to be very good if thread B wants to do something with the last element counted by that value. It should be looking at 0x0100 but will instead try to look at 0x01ff, the effect of which will be wrong, if not catastrophic.
If the count variable were protected by a mutex, thread B wouldn't be looking at it until thread A had finished updating it, hence no problem would occur.
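A sketch of that mutex protection (illustrative names):

#include <pthread.h>

pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
unsigned count;  // think of the two-byte counter from the example

void increment(void)
{
    pthread_mutex_lock(&count_lock);
    count++;  // no other thread can observe the half-changed value
    pthread_mutex_unlock(&count_lock);
}

unsigned read_count(void)
{
    pthread_mutex_lock(&count_lock);
    unsigned c = count;
    pthread_mutex_unlock(&count_lock);
    return c;
}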
The reason that doesn't matter with one-way booleans is because any half state will also be considered as true or false so, if thread A was halfway between turning 0x0000 into 0x0001 (just the top byte), thread B would still see that as 0x0000 (false) and keep going (until thread A finishes its update next time around).
And if thread A was turning the boolean into 0xffff, the half state of 0xff00 would still be considered true by thread B so it would do its thing before thread A had finished updating the boolean.
Neither of those two possibilities is bad simply because, in both, thread A is in the process of changing the boolean and it will finish eventually. Whether thread B detects it a tiny bit earlier or a tiny bit later doesn't really matter.
The while(quit_thread); is using the value quit_thread was set to on the line before it. Calling a function (usleep) induces the compiler to reload the value on each test.
In any case, this is the wrong way to wait for a thread to complete. Use pthread_join instead.
You're "learning" multhithreading the wrong way. The right way is to learn to use mutexes and condition variables; any other solution will fail under some circumstances.
