What happens when a declarator (my/state) is in a for block? - scope

The following blocks each run a loop assigning the topic to a variable $var:
In the first, the my $var; is outside the loop.
In the second, the my $var; is inside the loop.
In the third, a state $var; is inside the loop.
my $limit = 10_000_000;
{
    my $var;
    for ^$limit { $var = $_; }
    say now - ENTER now;
}
{
    for ^$limit { my $var; $var = $_; }
    say now - ENTER now;
}
{
    for ^$limit { state $var; $var = $_; }
    say now - ENTER now;
}
Sample output durations (in seconds) for each block are as follows:
0.5938845
1.8251226
2.60700803
The docs at https://docs.perl6.org/syntax/state mention that state variables have the same lexical scoping as my. Functionally, block 1 and block 3 would achieve the same persistent storage across multiple calls to the respective loop block.
Why does the state (and the inner my) version take so much more time? What else is it doing?
Edit:
Similar to @HåkonHægland's comment, if I cut and paste the above code so as to run each block three times in total, the timing changes significantly for the my $var outside the loop (the first case):
0.600303
1.7917011
2.6640811
1.67793597
1.79197091
2.6816156
1.795679
1.81233942
2.77486777

Short version: in a world without any runtime optimization (type specialization, JIT, and so forth), the timings would match your expectations. The timings here are influenced by how well the optimizer deals with each example.
First of all, it's interesting to run the code without any kind of runtime optimization. In my (rather slow) VM on the box I'm currently on, sticking MVM_SPESH_DISABLE=1 into the environment results in these timings:
13.92366942
16.235372
14.4329288
These make some kind of intuitive sense:
In the first case, we have a simple lexical variable declared in the outer scope of the block
In the second case, we have to allocate, and then garbage collect, an extra Scalar every time around the loop, which accounts for the extra time
In the third case, we're using the state variable. A state variable is stored in the code object of the closure, and then copied into the call frame at entry time. That's cheaper than allocating a new Scalar every time, but still a little more work than not having to do that operation at all. (A rough analogy is sketched below.)
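As a rough cross-language analogy (an illustration only, not how MoarVM implements it), a Raku state variable behaves much like a function-local static in C++: initialized once, persisting across calls, and not re-allocated on each entry:

#include <cstdio>

int next_id() {
    static int counter = 0;  // like `state $counter`: initialized once, persists across calls
    return ++counter;
}

int main() {
    for (int i = 0; i < 3; ++i)
        std::printf("%d\n", next_id());  // prints 1, 2, 3
}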
Next, let's run 3 programs with the optimizer enabled, each example in its own isolated program.
The first comes out at 0.86298831, a factor of 16 faster. Go optimizer! It has inlined the loop body.
The second comes out at 1.2288566, a factor of 13 faster. Not too shabby either. It has again inlined the loop body. (This case will also become rather cheaper in the future, once the escape analyzer is smart enough to eliminate the Scalar allocation.)
The third comes out at 2.0695035, a factor of 7 faster. That's comparatively unimpressive (even if still quite an improvement), and the major reason is that it has not inlined the loop body. Why? Because it doesn't know how to inline code that uses state variables yet. (How to see this: run with MVM_SPESH_INLINE_LOG=1 in the environment, and among the output is: Can NOT inline (1) with bytecode size 78 into (3): cannot inline code that declares a state variable.)
In short, the dominating factor here is the inlining of the loop body, and with state variables that is presently not possible.
It's not immediately clear why the optimizer does worse at the case with the outer declaration of $var when that isn't the first loop in the program; that feels more like a bug than a reasonable case of "this feature isn't optimized well yet". In its slight defense, it still consistently manages to deliver a big improvement, even when not so big as might be desired!

Related

Multithreading and recursion

So I've got this multithreaded, recursive application. It's coded in Pharo Smalltalk, but the logical solution to the issue is likely to be the same across most languages.
I have 4 of the same process running relatively simultaneously. It's the last iteration of a recursive call. I'd like to print the result calculated by my recursive function (it's a dictionary being modified in the argument of the recursive function/message). The issue I'm facing right now is that the print is called in the base case terminator of the recursion, so the result is printed 4 times.
I tried setting a global variable which allows for me to print the result of the process which finishes first, but of course that means that the result is wrong. It needs to print the result of the last process to execute of all the processes in that last iteration of the recursion.
How could I go about this without going too deep into the Process class? Thanks for any help.
Do you know the number of threads? (Presumably, 4.)
Then you can use an atomic long (in Java, for example):
import java.util.concurrent.atomic.AtomicLong;

AtomicLong myAtomicLong = new AtomicLong(0);
...
...
// do my work
if (totalThreadCount == myAtomicLong.incrementAndGet())
{
    // my print
}
The increment-and-get is atomic, so the condition is true only for the last thread to reach it, after all the other threads have finished their jobs. Note that it is important to place the increment and check after the job is done.
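For comparison, a minimal runnable sketch of the same pattern in C++ (the thread count and per-thread work are placeholder assumptions). fetch_add returns the value before the increment, so the last thread to finish sees totalThreadCount - 1:

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int totalThreadCount = 4;
    std::atomic<int> finished{0};  // shared counter, starts at 0

    std::vector<std::thread> threads;
    for (int i = 0; i < totalThreadCount; ++i) {
        threads.emplace_back([&] {
            // ... do the per-thread work first ...
            // Only the last thread to increment sees totalThreadCount - 1.
            if (finished.fetch_add(1) == totalThreadCount - 1)
                std::puts("last thread done: print the result here");
        });
    }
    for (auto& t : threads) t.join();
}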

How does thread context-switching work with global variable?

I have been confused at this question:
I have C++ function:
void withdraw(int x) {
balance = balance - x;
}
balance is a global integer variable, which equals to 100 at the start.
We run the above function with two different threads, thread A and thread B. Thread A runs withdraw(50) and thread B runs withdraw(30).
Assuming we don't protect balance, what is the final result of balance after running those threads in following sequences?
A1->A2->A3->B1->B2->B3
B1->B2->B3->A1->A2->A3
A1->A2->B1->B2->B3->A3
B1->B2->A1->A2->A3->B3
Explanation:
A1 means the OS executes the first line of function withdraw in thread A, A2 means it executes the second line of withdraw in thread A, B3 means it executes the third line of withdraw in thread B, and so on.
The sequence is presumably how the OS schedules threads A and B.
My answer is
20
20
50 (Before the context switch, the OS saves balance. After the context switch, the OS restores balance to 50)
70 (Similar to above)
But my friend disagrees: he said that balance is a global variable, so it is not saved on the stack and is not affected by context switching. He claimed that all 4 sequences result in 20.
So who is right? I can't find fault in his logic.
(We assume we have one processor that can only execute one thread at a time)
Unless the threading standard you are using specifies the behavior, there's no way to know. Most typical threading standards don't, so typically there's no way to know.
Your answer sounds like nonsense though. The OS has no idea what balance is nor any way to do anything to it around a context switch. Also, threads can run at the same time without context switches.
Your friend's answer also sounds like nonsense. How does he know that it won't be cached in a register by the compiler and thus some of the modifications will stomp on previous ones?
But the point is, both of you are just guessing about what might happen to happen. If you want to answer this usefully, you have to talk about what is guaranteed to happen.
Clearly homework, but saved by doing actual work before asking.
First, forget about context switching. Context switching is totally irrelevant to the problem. Assume that you have multiple processors, each executing one thread, and each progressing at an unknown speed, stopping and starting at unpredictable times. Even better, assume that this stopping and starting is controlled by an enemy, who will try to break your program.
And because context switching is irrelevant, the OS will not save or restore anything. It won't touch the variable balance. Only your two threads will.
Your friend is absolutely, totally wrong. It's quite the opposite. Since balance is a global variable, both threads can read and write it. But you don't only have the problem that they might read and write it in unknown order, as you examined; it is worse. They could access it at the same time, and if one thread modifies data while another reads it, you have a race condition and anything at all could happen. Not only could you get any result, your program could also crash.
If balance were a local variable saved on the stack, then each thread would have its own variable, and nothing bad would happen.
Consider this line:
balance = balance - x;
Thread A reads balance. It is 100. Now, thread A subtracts 50 and ... oops
Thread B reads balance. It is 100. Now, thread B subtracts 30 and updates the variable, which is now 70.
...thread A continues and updates the variable, which is now 50. You've just completely lost the work of Thread B.
Threads don't execute "lines of code" -- they execute machine instructions. It does not matter if a global variable is affected by context switching. What matters is when the variable is read, and when it is written, by each thread, because the value is "taken off the shelf" and modified, then "put back". Once the first thread has read the global variable and is working with the value "somewhere in space", the second thread cannot read the global variable until the first thread has written the updated value.
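For reference, a minimal sketch of one way to make withdraw well-defined in C++ (a mutex is used here; std::atomic<int> with fetch_sub would work too):

#include <mutex>

int balance = 100;
std::mutex balanceMutex;  // guards every access to balance

void withdraw(int x) {
    std::lock_guard<std::mutex> guard(balanceMutex);
    balance = balance - x;  // the whole read-modify-write is now one critical section
}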
Simple and short answer for C++: unsynchronized access to a shared variable is undefined behavior, so anything can happen. The value can be, e.g., 100, 70, 50, 20, 42, or -458995. The program could crash or not. In theory it's even allowed to order pizza.
The actual machine code that is executed is usually far away from what your program looks like, and in the case of undefined behavior you are no longer guaranteed that the actual behavior has anything to do with the C++ code you have written.

Are Data Races bad?

I'd like to settle a theoretical computing argument.
Assume everything is initially 0:
Thread0 Thread1
x=1 | y=x
Here we have a data race. As far as I understand (assuming that x fits in the architecture's word-size and is aligned on the word boundary, which it normally would be), the result is either x=1 ^ y=0 or x=1 ^ y=1.
Now my second example uses explicit locking (assume that lock() gets some global lock), and as far as I understand this is not a data race condition anymore.
Thread0 Thread1
lock() | lock()
x=1 | y=x
unlock() | unlock()
However, I would argue that both programs are identical: they produce identical output and have identical race issues. Somehow, however, people are trying to convince me that the data race condition is bad, and I don't see why my first program would be worse than my second.
Edit. The full quote from Wikipedia is:
C++11 introduced formal support for multithreading, and defined a data race strictly as a race condition between non-atomic variables. While race conditions in general will continue to exist, a "data race" must be avoided by the programmer, who must assure that only one thread at a time may access any variable if the access is for writing.
Now, assuming this is correct (it's wikipedia, which tends to be reasonably good on programming but can often be very wrong indeed), it's defining "data race" in this context purely as one of the clearly bad cases; those which can cause shearing of values. Such cases obviously must be avoided, so clearly data-races—defined as they are here—must be avoided.
And by this definition, neither program in your question has a data race.
I leave my original answer on race conditions generally:
The second example has a data-race too. Indeed, it has the exact same data-race as the first one.
Is this bad? That depends. Note before any of the rest: not only are many cases bad, as I'll describe more below, but those cases that are bad tend to be particularly hard to find and fix, which in itself should lean one towards assuming the worst.
An obvious case where a data race is bad is where it corrupts data. Let's say we change your example so that x and y are larger than the architecture's word size and we're setting x = -1. We'll also assume two's complement. Now the possible values for y are not just -1 and 0, but also -4294967296 and 4294967295.
In this case, the locking you suggest wouldn't remove the data-race completely, but would remove the part of it that could cause shearing: the only possible values of y would again be -1 and 0.
Another question is serialisation. It's often necessary to be able to consider a sequence of concurrent events as having been one of a limited set of sequential events.
For example, consider we start with x = 0 and then have:
Thread 0 Thread 1
++x | x = -50
Now, there's still the risk of shearing here that could result in a possible bogus value.
Even assuming that x is word-size or smaller, we still might have an issue. There are two possible values if the operations were not concurrent: either x could be equal to -50 (increment, then assign -50) or x could be equal to -49 (assign -50, then increment). However, concurrently it's possible for us to end up with x having a value of 1, because thread 0 reads 0, thread 1 assigns -50, and then thread 0 increments its stale 0 and assigns 1.
Now, it's quite possible that this is perfectly okay. It's very likely though that it isn't.
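As an aside, in C++ the bogus x == 1 outcome disappears if the increment is a single atomic read-modify-write; a minimal sketch:

#include <atomic>
#include <cstdio>
#include <thread>

int main() {
    std::atomic<int> x{0};
    // fetch_add is one indivisible read-modify-write, so thread 0
    // can no longer increment a stale copy of x.
    std::thread t0([&] { x.fetch_add(1); });
    std::thread t1([&] { x.store(-50); });
    t0.join();
    t1.join();
    // Only two results remain: -50 (store ran last) or -49 (increment ran last).
    std::printf("x = %d\n", x.load());
}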
As programmers we've got four possibilities:
1. Identify the data-race. Determine that it is harmless (or relatively harmless*), and let it be.
2. Identify the data-race. Determine that it can cause problems, and fix it.
3. Identify the data-race. Just fix it, because that way we can't make a mistake in determining it is harmless when it actually isn't.
4. Identify the data-race. Determine that it can cause problems. Change the code so the race doesn't cause problems.
The importance of case number 2 is obvious - we turn code that has a bug into code that doesn't.
The importance of case number 3 comes down to time and provability. We might well be making code less efficient (many methods for stopping data-races have at least some overhead), but it often takes less developer time to remove a race than to prove it harmless, and the cost of being wrong in that direction is marginally slower code, whereas the cost of being wrong in the other direction is a hard-to-fix bug.
The importance of number 1 is more complicated: it can be important in some very low-level concurrent code to avoid locking, so there are cases where we want to tolerate races. Number 4 is a way to turn something from number 2 into number 1, and comes up when either the data-race is inherent to the problem (we can't remove it) or we're doing the sort of low-level concurrency that number 1 involves.
Here's an interesting example in C#:
private static SomeResource _theResource;

public static SomeResource GetTheResource()
{
    if (_theResource == null)
        _theResource = CreateTheResource();
    return _theResource;
}
The data-race should be obvious: until _theResource is set and all CPUs' caches see the update, we might assign to it several times from different threads. Is this a bug? Many people would say it is, but actually it depends. It's possible that it's safe to have a brief period where different versions of _theResource are used, and all we really lose is some efficiency at the beginning from the multiple calls to CreateTheResource(). In code with a high requirement for performance we might decide to tolerate this initial lower efficiency for the long-term efficiency gain of no locking. Or it might be vital that we lock. Or we might just lock because we don't have that pressing a need to avoid it, and it's simpler just to assume that there might be a problem.
Important Point 1: If you do decide to tolerate a race like this, you should add a comment to that effect and why. Otherwise every time someone comes across this code they'll have to check again that it's safe, rather than at most check your stated reasoning.
Important Point 2: While the principle here is language-agnostic, the details in each case often are not. In this case tolerating the race depends not just on the temporary multiple copies being safe, but also on garbage collection cleaning those excess copies up. If we were instead assigning a pointer to the heap in C++ the above would at the very best be leaky, even if otherwise safe.
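To make Important Point 2 concrete, here is a minimal C++ sketch of the same tolerate-the-race idea (names are illustrative). Because there is no garbage collector, the losing thread must delete its extra copy; and because std::atomic is used, this is a race over who initializes first rather than a formal C++11 data race:

#include <atomic>

struct SomeResource { /* ... */ };

SomeResource* CreateTheResource() { return new SomeResource(); }

std::atomic<SomeResource*> theResource{nullptr};

SomeResource* GetTheResource() {
    SomeResource* r = theResource.load(std::memory_order_acquire);
    if (r != nullptr)
        return r;
    // Several threads may get here and each build a candidate.
    SomeResource* candidate = CreateTheResource();
    SomeResource* expected = nullptr;
    if (theResource.compare_exchange_strong(expected, candidate,
                                            std::memory_order_acq_rel))
        return candidate;  // we won: our copy became the shared one
    delete candidate;      // we lost: discard our copy...
    return expected;       // ...and use the winner's, left here by the CAS
}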
A more complicated case is something like this (again a C# example, but applicable to other languages):
using System.Threading;

internal sealed class LockFreeQueue<T>
{
    private sealed class Node
    {
        public readonly T Item;
        public Node Next;
        public Node(T item)
        {
            Item = item;
        }
    }

    private volatile Node _head;
    private volatile Node _tail;

    public LockFreeQueue()
    {
        _head = _tail = new Node(default(T));
    }

#pragma warning disable 420 // volatile semantics not lost as only by-ref calls are interlocked
    public void Enqueue(T item)
    {
        Node newNode = new Node(item);
        for (;;)
        {
            Node curTail = _tail;
            if (Interlocked.CompareExchange(ref curTail.Next, newNode, null) == null) // append to the tail if it is indeed the tail.
            {
                Interlocked.CompareExchange(ref _tail, newNode, curTail); // CAS in case we were assisted by an obstructed thread.
                return;
            }
            else
            {
                Interlocked.CompareExchange(ref _tail, curTail.Next, curTail); // assist obstructing thread.
            }
        }
    }

    public bool TryDequeue(out T item)
    {
        for (;;)
        {
            Node curHead = _head;
            Node curTail = _tail;
            Node curHeadNext = curHead.Next;
            if (curHead == curTail)
            {
                if (curHeadNext == null)
                {
                    item = default(T);
                    return false;
                }
                else
                    Interlocked.CompareExchange(ref _tail, curHeadNext, curTail); // assist obstructing thread
            }
            else
            {
                item = curHeadNext.Item;
                if (Interlocked.CompareExchange(ref _head, curHeadNext, curHead) == curHead)
                {
                    return true;
                }
            }
        }
    }
#pragma warning restore 420
}
This code doesn't prevent data-races, but rather it reacts to them. If an operation is affected by another thread, then rather than error or return an incorrect result, the thread deals with the race and returns something else (and indeed even helps the other thread in some cases).
So in summary, data-races are not in and of themselves bad things. They are, though, complicating things, and those complications can cause problems. When you have a data-race you have a choice between proving it's not a problem, changing your code to tolerate the race so that it's no longer a problem, or changing your code to remove the race. Of these, just removing the race is often the easiest choice.
*I don't mean "relatively harmless" in a vague way here, but relative to the alternative. E.g. if we decide to leave the race in the C# example given, it's because we've decided that the cost of redundant object creation is less harmful than the relative cost of preventing it.
I thank everybody for their answers; although valuable, they did not actually answer the question I was hoping I had asked. The answers did allow me to reason better about what I was actually asking, and in the end to find something of an answer online:
http://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong
So I guess my question should have been:
The C(++)11 standard defines my first example as a data race (if I don't use the "atomic" keyword), and the second one not. The first one therefore has undefined behaviour (even though there don't seem to be compiler implementations that would result in anything but x==1 && y==0|1; according to the standard, any resulting value for x and y would be correct compiler behaviour). I was wondering why this is. I think the Intel document answers that question pretty elaborately.
If x and y fit into a machine register then assignment is atomic by default, so locks won't change the outcome: it's equally possible to get y = 0 or y = 1 in the second case as well.

Is the empty statement in programming languages as C, Java, ... no-op in Assembler?

A few days ago I saw that for ( ; ; ) results in an infinite loop. That made me wonder about two things.
Is the empty statement (;) a no-op in assembler?
Why is it evaluated as "true" in the for example given above?
Answering from a C perspective here:
No, ; does not translate into a no-op instruction. No-op instructions (such as nop) are explicit assembly level instructions which tend to actually do something (in that they consume time, though not necessarily affect any stored state within the CPU).
The for(;;) snippet is a for loop with defaults for each of the three sections. You can think of the ; in this case as not being an empty statement but a separator for the sections (a).
The first section (initialisation) has a default of "do nothing".
The second section is a condition under which the loop will continue. Its default is to continue forever.
The third section, the steps to take before beginning a subsequent iteration, is also "do nothing"; the sketch below puts the three defaults together.
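With all three sections defaulted, the two loops below are equivalent, and a compiler will typically emit identical code for both:

for (;;) {
    /* loop body: no initialisation, a condition that never fails, no step */
}

while (1) {
    /* the same loop, spelled differently */
}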
I have, in the past, been guilty of the heinous crime of using things like:
#define ever ;;
#define forever for (;;)
so that I could write my infinite loops as:
for(ever) { ... }
forever { ... }
I wouldn't do that nowadays of course.
(a) A "true" empty statement along the lines of:
if (condition) {
    a = b;
    ;
}
will also probably not translate to a no-op. More than likely it will not result in any code at all.
Keep in mind this is based on fairly common behaviour. In terms of C, ; can generate any lower level code it wants as long as it doesn't affect the "virtual machine" that is the C environment. It may, for example, increase a hidden line number variable and update coverage statistics if you have profiling enabled.

Is it ok to have multiple threads writing the same values to the same variables?

I understand about race conditions and how, with multiple threads accessing the same variable, updates made by one can be ignored and overwritten by others. But what if each thread is writing the same value (not different values) to the same variable; can even this cause problems? Could this code:
GlobalVar.property = 11;
(assuming that property will never be assigned anything other than 11), cause problems if multiple threads execute it at the same time?
The problem comes when you read that state back, and do something about it. Writing is a red herring - it is true that as long as this is a single word most environments guarantee the write will be atomic, but that doesn't mean that a larger piece of code that includes this fragment is thread-safe. First, presumably your global variable contained a different value to begin with - otherwise, if you know it's always the same, why is it a variable? Second, presumably you eventually read this value back again?
The issue is that presumably, you are writing to this bit of shared state for a reason - to signal that something has occurred? This is where it falls down: when you have no locking constructs, there is no implied order of memory accesses at all. It's hard to point to what's wrong here because your example doesn't actually contain the use of the variable, so here's a trivialish example in neutral C-like syntax:
int x = 0, y = 0;

// thread A does:
x = 1;
y = 2;
if (y == 2)
    print(x);

// thread B does, at the same time:
if (y == 2)
    print(x);
Thread A will always print 1, but it's completely valid for thread B to print 0. The order of operations in thread A is only required to be observable from code executing in thread A - thread B is allowed to see any combination of the state. The writes to x and y may not actually happen in order.
This can happen even on single-processor systems, where most people do not expect this kind of reordering - your compiler may reorder it for you. On SMP even if the compiler doesn't reorder things, the memory writes may be reordered between the caches of the separate processors.
If that doesn't seem to answer it for you, include more detail of your example in the question. Without the use of the variable it's impossible to definitively say whether such a usage is safe or not.
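If we do want thread B's observation of y == 2 to imply that it sees x == 1, the usual C++11 fix is to make y an atomic and publish with release/acquire ordering; a minimal sketch of the example above:

#include <atomic>
#include <cstdio>
#include <thread>

int x = 0;
std::atomic<int> y{0};

int main() {
    std::thread a([] {
        x = 1;
        y.store(2, std::memory_order_release);  // publishes the write to x
    });
    std::thread b([] {
        if (y.load(std::memory_order_acquire) == 2)
            std::printf("%d\n", x);  // if this prints at all, it prints 1
    });
    a.join();
    b.join();
}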
It depends on the work actually done by that statement. There can still be some cases where Something Bad happens - for example, if a C++ class has overloaded the = operator, and does anything nontrivial within that statement.
I have accidentally written code that did something like this with POD types (builtin primitive types), and it worked fine -- however, it's definitely not good practice, and I'm not confident that it's dependable.
Why not just lock the memory around this variable when you use it? In fact, if you somehow "know" this is the only write statement that can occur at some point in your code, why not just use the value 11 directly, instead of writing it to a shared variable?
(edit: I guess it's better to use a constant name instead of the magic number 11 directly in the code, btw.)
If you're using this to figure out when at least one thread has reached this statement, you could use a semaphore that starts at 1, and is decremented by the first thread that hits it.
I would expect the result to be undetermined. As in, it would vary from compiler to compiler, language to language, OS to OS, etc. So no, it is not safe.
Why would you want to do this though? Adding a line to obtain a mutex lock is only one or two lines of code (in most languages), and would remove any possibility of a problem. If this is going to be too expensive then you need to find an alternate way of solving the problem.
In general, this is not considered a safe thing to do unless your system provides for atomic operations (operations that are guaranteed to be executed in a single cycle).
The reason is that while the "C" statement looks simple, often there are a number of underlying assembly operations taking place.
Depending on your OS, there are a few things you could do:
Take a mutual-exclusion semaphore (mutex) to protect access.
In some OSes, you can temporarily disable preemption, which guarantees your thread will not be swapped out.
Some OSes provide reader/writer semaphores, which can be more performant than a plain old mutex.
Here's my take on the question.
You have two or more threads running that write to a variable... like a status flag or something, where you only want to know if one or more of them set it to true. Then in another part of the code (after the threads complete) you want to check and see if at least one thread set that status... for example:
bool flag = false
threadContainer tc
threadInputs inputs

check(input)
{
    ...do stuff to input
    if(success)
        flag = true
}

start multiple threads
foreach(i in inputs)
    t = startthread(check, i)
    tc.add(t) // Keep track of all the threads started

foreach(t in tc)
    t.join( ) // Wait until each thread is done

if(flag)
    print "At least one of the threads was successful"
else
    print "None of the threads were successful"
I believe the above code would be OK, assuming you're fine with not knowing which thread set the status to true, and you can wait for all the multi-threaded stuff to finish before reading that flag. I could be wrong though.
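For what it's worth, in C++ specifically the plain bool flag above would still be a formal data race (undefined behavior), even though every writer stores the same value. A minimal sketch of the safe version uses std::atomic<bool> (the per-thread "work" here is a stand-in):

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::atomic<bool> flag{false};  // safe to set from many threads at once

    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) {
        threads.emplace_back([&, i] {
            bool success = (i % 2 == 0);  // stand-in for real per-thread work
            if (success)
                flag.store(true, std::memory_order_relaxed);
        });
    }
    for (auto& t : threads) t.join();

    std::puts(flag.load() ? "At least one thread was successful"
                          : "No thread was successful");
}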
If the operation is atomic, you should be able to get by just fine. But I wouldn't do that in practice. It is better just to acquire a lock on the object and write the value.
Assuming that property will never be assigned anything other than 11, I don't see a reason for the assignment in the first place. Just make it a constant.
Assignment only makes sense when you intend to change the value, unless the act of assignment itself has other side effects - like volatile writes have memory-visibility side effects in Java. And if you change state shared between multiple threads, then you need to synchronize or otherwise "handle" the problem of concurrency.
When you assign a value, without proper synchronization, to some state shared between multiple threads, there are no guarantees for when the other threads will see that change. And no visibility guarantees means it is possible that the other threads will never see the assignment.
Compilers, JITs, CPU caches: they're all trying to make your code run as fast as possible, and if you don't make any explicit requirements for memory visibility, then they will take advantage of that. If not on your machine, then on somebody else's.
