Delphi threading - which parts of code need to be protected/synchronized?

So far I thought that any operation done on a "shared" object (one used by multiple threads) must be protected with "synchronize", no matter what. Apparently I was wrong: in the code I've been studying recently there are plenty of classes (thread-safe ones, as the author claims), and only one of them uses a critical section for almost every method.
How do I determine which parts/methods of my code need to be protected with a critical section (or any other mechanism) and which do not?
So far I haven't stumbled upon any helpful explanation/article/blog post; all Google results are:
a) examples of synchronization between a thread and the GUI, from a simple progress bar to the most complex cases, but the lesson is always the same: each time you access or modify a property of a GUI component, do it in "Synchronize". Nothing more.
b) articles explaining critical sections, mutexes, etc., which are just different protection/synchronization mechanisms.
c) examples of very simple thread-safe classes (a thread-safe stack or list). They all do the same thing: implement Lock/Unlock methods that enter/leave a critical section and return the underlying stack/list pointer on locking.
Now I'm looking for an explanation of which parts of code should be protected.
It could be in the form of code ;) but please don't give me one more "use Synchronize to update a progress bar" example... ;)
Thank you!

You are asking for specific answers to a very general question.
Basically, apart from UI operations, you should protect every access to shared memory/resources, to prevent two potentially competing threads from:
reading inconsistent memory
writing memory at the same time
trying to use the same resource from more than one thread at the same time... unless the resource is itself thread-safe.
Generally, I consider any other operation thread-safe, including operations that access non-shared memory or non-shared objects.
For example, consider this object:
type
  TThrdExample = class
  private
    FValue: Integer;
  public
    procedure Inc;
    procedure Dec;
    function Value: Integer;
    procedure ThreadInc;
    procedure ThreadDec;
    function ThreadValue: Integer;
  end;

threadvar
  ThreadValue: Integer;
Inc, Dec and Value are methods that operate on the FValue field. These methods are not thread-safe until you protect them with some synchronization mechanism: it could be a TMultiReadExclusiveWriteSynchronizer for the Value function and a critical section for the Inc and Dec methods.
The ThreadInc and ThreadDec methods operate on the ThreadValue variable, which is declared as threadvar, so I consider them thread-safe, because the memory they access is not shared between threads: each call from a different thread accesses a different memory address.
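To make this concrete, here is a minimal sketch of what the protected implementations might look like. This is an illustration, not the original author's code: FLock is an assumed TMultiReadExclusiveWriteSynchronizer field (declared in SysUtils) guarding every access to FValue, and the threadvar is renamed TLSValue here to avoid the name clash with the ThreadValue method:

threadvar
  TLSValue: Integer;

procedure TThrdExample.Inc;
begin
  FLock.BeginWrite;   // exclusive: no readers or writers while we update
  try
    System.Inc(FValue);
  finally
    FLock.EndWrite;
  end;
end;

function TThrdExample.Value: Integer;
begin
  FLock.BeginRead;    // shared: many readers may hold this at once
  try
    Result := FValue;
  finally
    FLock.EndRead;
  end;
end;

procedure TThrdExample.ThreadInc;
begin
  System.Inc(TLSValue);  // threadvar: each thread has its own copy, no lock needed
end;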
If you know that, by design, a class is to be used only from one thread, or only inside other synchronization mechanisms, you're free to consider it thread-safe by design.
If you want more specific answers, I suggest you try with a more specific question.
Best regards.
EDIT: Some might say the integer field is a bad example, because aligned integer reads and writes can be considered atomic on Intel/Windows and thus don't need protection... but I hope you get the idea.

You have misunderstood the TThread.Synchronize method.
The TThread.Synchronize and TThread.Queue methods execute the passed code in the context of the main (GUI) thread. That is why you should use Synchronize or Queue to update GUI controls (like a progress bar): normally only the main thread should access GUI controls.
Critical sections are different: the protected code is executed in the context of the thread that acquired the critical section, and no other thread is permitted to acquire that critical section until the owning thread releases it.
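A sketch of the difference (hypothetical names throughout; TThread.Queue with an anonymous method requires Delphi 2009 or later):

procedure TWorker.Execute;  // TWorker = class(TThread)
begin
  // This code runs in the worker thread.
  ComputeSomething;

  // The anonymous method below is executed later by the MAIN thread.
  TThread.Queue(nil,
    procedure
    begin
      UpdateSomeControl;  // safe: runs in the context of the GUI thread
    end);

  // A critical section is different: the protected code still runs in
  // THIS thread; the lock merely keeps competing threads out meanwhile.
  FLock.Enter;  // FLock: TCriticalSection shared by all competing threads
  try
    SharedList.Add(42);
  finally
    FLock.Leave;
  end;
end;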

You use a critical section when a certain set of objects needs to be updated atomically. This means they must at all times be either already updated completely or not yet updated at all; they must never be observable in a transitional state.
For example, with simple integer reading/writing this is not the case: both reading an integer and writing it are already atomic. You cannot read an integer in the middle of the processor writing it, half-updated; you always get either the old value or the new value.
But if you want to increment the integer atomically, you have not one but three operations to do at once: read the old value into a register, increment it, and write it back to memory. Each operation is atomic, but the three of them together are not.
One thread might read the old value (say, 200) and increment it by 5 in its register, while at the same time another thread reads the value too (still 200). Then the first thread writes back 205, while the second thread increments its copy of 200 by 3 and writes back 203, overwriting the 205. The result of the two increments (+5 and +3) should be 208, but it is 203, due to the non-atomicity of the combined operation.
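In Delphi you can make that whole read-increment-write step indivisible either with a lock or with an interlocked operation; a minimal sketch (AtomicIncrement is available in recent Delphi versions, and InterlockedIncrement from the Windows API is the classic equivalent):

var
  Counter: Integer = 0;

procedure UnsafeIncrement;
begin
  Counter := Counter + 1;   // read, add, write: three steps that can interleave
end;

procedure SafeIncrement;
begin
  AtomicIncrement(Counter); // hardware performs read+add+write as one operation
end;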
So, you use critical sections when:
A variable, set of variables, or any resource is used from several threads and needs to be updated atomically.
It's not atomic by itself. (For example, a function that enters and leaves a critical section inside its own body is already an atomic operation from the caller's point of view.)

Have a read of this documentation:
http://www.eonclash.com/Tutorials/Multithreading/MartinHarvey1.1/ToC.html

If you use messaging to communicate between threads, then you can largely ignore synchronisation primitives, because each thread accesses only its own internal structures and the messages themselves. In essence this is a far easier and more scalable architecture than using synchronisation primitives.
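As a sketch of that style (illustrative only: TJob, TWorker and ProcessJob are made-up names, while TThreadedQueue<T> is the real class from System.Generics.Collections):

uses
  System.Classes, System.SyncObjs, System.Generics.Collections;

type
  TJob = record
    Id: Integer;
    Payload: string;
  end;

  TWorker = class(TThread)
  private
    FInbox: TThreadedQueue<TJob>;  // the only shared structure; it locks itself
  protected
    procedure Execute; override;
  public
    constructor Create;
    procedure Post(const AJob: TJob);  // called from producer threads
  end;

const
  NoTimeout = Cardinal($FFFFFFFF);  // i.e. INFINITE

constructor TWorker.Create;
begin
  inherited Create(False);
  // Depth 1024; pushes never time out; pops wake every 100 ms so that
  // Execute can notice Terminated.
  FInbox := TThreadedQueue<TJob>.Create(1024, NoTimeout, 100);
end;

procedure TWorker.Post(const AJob: TJob);
begin
  FInbox.PushItem(AJob);
end;

procedure TWorker.Execute;
var
  Job: TJob;
begin
  while not Terminated do
    if FInbox.PopItem(Job) = wrSignaled then
      ProcessJob(Job);  // touches only this thread's private state
end;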


Is it necessary to do Multi-thread protection for a Boolean property in Delphi?

I found a Delphi library named EventBus, and I think it will be very useful, since Observer is my favorite design pattern.
While studying its source code, I found a piece of code that seems to exist for thread-safety reasons; it is shown below (the getter and setter of the Active property).
TSubscription = class(TObject)
private
  FActive: Boolean;
  procedure SetActive(const Value: Boolean);
  function GetActive: Boolean;
  // ... other members
public
  constructor Create(ASubscriber: TObject;
    ASubscriberMethod: TSubscriberMethod);
  destructor Destroy; override;
  property Active: Boolean read GetActive write SetActive;
  // ... other methods
end;

function TSubscription.GetActive: Boolean;
begin
  TMonitor.Enter(Self);
  try
    Result := FActive;
  finally
    TMonitor.Exit(Self);
  end;
end;

procedure TSubscription.SetActive(const Value: Boolean);
begin
  TMonitor.Enter(Self);
  try
    FActive := Value;
  finally
    TMonitor.Exit(Self);
  end;
end;
Could you please tell me whether the lock protection for FActive is necessary, and why?
Summary
Let me start by making this point as clear as possible: Do not attempt to distill multi-threaded development into a set of "simple" rules. It is essential to understand how the data is shared in order to evaluate which of the available concurrency protection techniques would be correct for a particular situation.
The code you have presented suggests the original authors had only a superficial understanding of multi-threaded development. So it serves as a lesson in what not to do.
First, locking the Boolean for read/write access in that way serves no purpose at all; each read or write of it is already atomic.
Furthermore, in cases where the property does need protection for concurrent access, this code fails abysmally to provide any protection at all.
The net effect is redundant, ineffective code that can trigger pointless wait states.
Thread-safety
In order to evaluate 'thread-safety', the following concepts should be understood:
If 2 threads 'race' for the opportunity to access a shared memory location, one will be first, and the other second. In the absence of other factors, you have no control over which thread would 'start' its access first.
Your only control is to block the 'second' thread from concurrent access if the 'first' thread hasn't finished its critical work.
The word "critical" has loaded meaning and may take some effort to fully understand. Take note of the explanation later about why a Boolean variable might need protection.
Critical work refers to all the processing required for the operation on the shared data to be deemed complete.
It's related to concepts of atomic operations or transactional integrity.
The 'second' thread could either be made to wait for the 'first' thread to finish or to skip its operation altogether.
Note that if the shared memory is accessed concurrently by both threads, then there's the possibility of inconsistent behaviour based on the exact ordering of the internal sub-steps of each thread's processing.
This is the fundamental risk and area of concern when thinking about thread-safety. It is the base principle from which other principles are derived.
'Simple' reads and writes are (usually) atomic
No concurrent operations can interfere with the reading/writing of a single byte of data. You will always either get the value in its entirety or replace the value in its entirety.
This concept extends to multiple bytes, up to the machine architecture's bit size, but with a caveat known as tearing.
When a memory address is not aligned on the bit size, then there's the possibility of the bytes spanning the end of one aligned location into the beginning of the next aligned location.
This means that reading/writing the bytes may take 2 operations at the machine level.
As a result 2 concurrent threads could interleave their sub-steps resulting in invalid/incorrect values being read. E.g.
Suppose one thread writes $ffff over an existing value of $0000 while another reads.
"Valid" reads would return either $0000 or $ffff depending on which thread is 'first'.
If the sub-steps run concurrently, then the reading thread could return invalid values of $ff00 or $00ff.
(Note that some platforms might still guarantee atomicity in this situation, but I don't have the knowledge to comment in detail on this.)
To reiterate: single byte values (including Boolean) cannot span aligned memory locations. So they're not subject to the tearing issue above. And this is why the code in the question that attempts to protect the Boolean is completely pointless.
When protection is needed
Although reads and writes in isolation are atomic, it's important to note that when a value is read and then influences what is written, the combination cannot be assumed to be thread-safe. This is best explained by way of a simple example.
Suppose 2 threads invert a shared boolean value: FBool := not FBool;
2 threads means this happens twice, and once both threads have finished, the boolean should end up with its starting value. However, each inversion is a multi-step operation:
Read FBool into a location local to the thread (either stack or register).
Invert the value.
Write the inverted value back to the shared location.
If there's no thread-safety mechanism employed then the sub-steps can run concurrently. And it's possible that both threads:
Read FBool; both getting the starting value.
Both threads invert their local copies.
Both threads write the same inverted value to the shared location.
And the end result is that the value is inverted when it should have been reverted to its starting value.
Basically the critical work is clearly more than simply reading or writing the value. To properly protect the boolean value in this situation, the protection must start before the read, and end after the write.
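For completeness, a sketch of that protection done correctly, in the same TMonitor style as the EventBus code above (Invert is a made-up method name):

procedure TSubscription.Invert;
begin
  TMonitor.Enter(Self);
  try
    // The whole read-invert-write sequence is now one critical operation:
    // no other thread can touch FActive between our read and our write,
    // provided every other access also locks on Self.
    FActive := not FActive;
  finally
    TMonitor.Exit(Self);
  end;
end;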
The important lesson to take away from this is that thread-safety requires understanding how the data is shared. It's not feasible to produce an arbitrary generic safety mechanism without this understanding.
And this is why any such attempt as in the EventBus code in the question is almost certainly doomed to be deficient (or even an outright failure).

Confusion about C++11 lock free stack push() function

I'm reading C++ Concurrency in Action by Anthony Williams, and I don't understand the push implementation of its lock_free_stack class (Listing 7.12, to be precise):
void push(T const& data)
{
    counted_node_ptr new_node;
    new_node.ptr = new node(data);
    new_node.external_count = 1;
    new_node.ptr->next = head.load(std::memory_order_relaxed);
    while (!head.compare_exchange_weak(new_node.ptr->next, new_node,
                                       std::memory_order_release,
                                       std::memory_order_relaxed));
}
So imagine 2 threads (A and B) calling the push function. Both of them reach the while loop but have not yet started it, so they both read the same value from head.load(std::memory_order_relaxed).
Then we have the following things going on:
Thread B gets swapped out for some reason.
Thread A starts the loop and successfully adds a new node to the stack.
Thread B gets back on track and also starts the loop.
And this is where it gets interesting, as it seems to me.
Because the load was performed with std::memory_order_relaxed, and compare_exchange_weak(..., std::memory_order_release, ...) uses release only on success, it looks like there is no synchronization between the threads whatsoever.
I mean, it's a std::memory_order_relaxed / std::memory_order_release pairing, not the std::memory_order_acquire / std::memory_order_release pairing needed for synchronization.
So it seems thread B would simply add its new node on top of the stack's stale initial state (when the stack had no nodes) and reset head to this new node.
I did my research all around this subject, and the best I could find was this post: Does exchange or compare_and_exchange reads last value in modification order?
So the question is: is it true that all RMW operations see the last value in the modification order? And no matter what std::memory_order we use, will an RMW operation always read the most recent value written to the atomic variable it operates on?
After some research and asking a bunch of people, I believe I've found the proper answer to this question; I hope it'll be of help to someone.
So the question is: is it true that all RMW operations see the last value in the modification order?
Yes, it is true.
And no matter what std::memory_order we use, will an RMW operation always read the most recent value written to the atomic variable it operates on?
Yes, that is also true; however, there is something that needs to be highlighted.
An RMW operation guarantees this only for the atomic variable it works with; in our case, that is head.
Perhaps you want to ask why we need release-acquire semantics at all, if an RMW operation sees the latest value even with relaxed memory order.
The answer is that the RMW guarantee covers only the atomic variable itself; other operations that happened before the RMW might not yet be visible to the other thread.
Let's look at the push function again:
void push(T const& data)
{
    counted_node_ptr new_node;
    new_node.ptr = new node(data);
    new_node.external_count = 1;
    new_node.ptr->next = head.load(std::memory_order_relaxed);
    while (!head.compare_exchange_weak(new_node.ptr->next, new_node,
                                       std::memory_order_release,
                                       std::memory_order_relaxed));
}
In this example, with two pushing threads, the threads are not fully synchronized with each other, but that is acceptable here.
Both threads will always see the newest head, because compare_exchange_weak guarantees this, and a new node will always be added to the top of the stack.
However, if we tried to read through the pointer, like *(new_node.ptr->next), right after the line new_node.ptr->next = head.load(std::memory_order_relaxed), things could easily turn ugly: we might dereference a node that has not been fully initialized yet.
This can happen because the processor may reorder instructions, and since there is no acquire/release synchronization between the threads, the second thread could see the pointer to the top node before that node's contents were initialized!
And this is exactly where release-acquire semantics come to help: they ensure that all operations which happened before the release operation become visible after the matching acquire!
Check out and compare listings 5.5 and 5.8 in the book.
I also recommend reading this article about how processors work; it provides some essential background for better understanding:
memory barriers

Can I mix and match InterlockedIncrement and CriticalSection?

Previously, I had code like this:
EnterCriticalSection(q^);
Inc(global_stats.currentid);
LeaveCriticalSection(q^);
and I changed it to:
InterlockedIncrement(global_stats.currentid);
and I found out there is some code like this:
EnterCriticalSection(q^);
if (global_stats.currentid >= n) then
begin
  LeaveCriticalSection(q^);
  Exit;
end;
LeaveCriticalSection(q^);
So, the question is: can I mix and match InterlockedIncrement and EnterCriticalSection/LeaveCriticalSection?
And which performs better: a critical section or an atomic operation?
Can I mix and match InterlockedIncrement and Enter/Leave CriticalSection?
In general, no you cannot. Critical sections and atomic operations do not interact.
Atomic functions, like your call to InterlockedIncrement, operate completely independently of critical sections and other locks. That is, one thread can hold the lock while another thread modifies the protected variable at the same time. Critical sections, like any other form of mutual exclusion, only work if all parties that operate on the shared data do so while holding the lock.
However, from what we can see of your code, the critical section is needless in this case. You can write the code like this:
// thread A
InterlockedIncrement(global_stats.currentid);
....
// thread B
InterlockedIncrement(global_stats.currentid);
....
// thread C
if global_stats.currentid >= n then
  Exit;
That code is semantically equivalent to your previous code with a critical section.
As for which performs better, the original code with the lock or the code above without it: the latter would be expected to perform better. Broadly speaking, lock-free code can be expected to outperform code that uses locks, but that's not a rule that can be relied upon; some algorithms are faster implemented with locks than in equivalent lock-free form.
No, in general you cannot.
A critical section is used to ensure that, of all the protected blocks of code, at most one is executing at any given moment. If such a protected block accesses currentid and that variable is modified elsewhere, the code may work incorrectly.
In a specific case it may be OK to mix and match, but then you would have to check all the affected code and rethink the processing to be sure nothing can go wrong.
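For illustration, a sketch of the all-atomic alternative (not the original code; calling InterlockedCompareExchange with equal exchange and comparand values is a common idiom for an explicit atomic read with a full memory barrier):

// Writers: every modification of currentid goes through the interlocked API.
InterlockedIncrement(global_stats.currentid);

// Readers: an aligned 32-bit read is already atomic, but routing it through
// the same API makes the intent explicit and adds a memory barrier.
if InterlockedCompareExchange(global_stats.currentid, 0, 0) >= n then
  Exit;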

Interview Question on .NET Threading

Could you describe two methods of synchronizing multi-threaded write access performed on a class member?
Please could anyone help me understand what this question is getting at, and what the right answer is?
When you change data in C#, something that looks like a single operation may be compiled into several instructions. Take the following class:
public class Number {
    private int a = 0;
    public void Add(int b) {
        a += b;
    }
}
When you build it, you get the following IL code:
IL_0000: nop
IL_0001: ldarg.0
IL_0002: dup
// Pushes the value of the private variable 'a' onto the stack
IL_0003: ldfld int32 Simple.Number::a
// Pushes the value of the argument 'b' onto the stack
IL_0008: ldarg.1
// Adds the top two values of the stack together
IL_0009: add
// Sets 'a' to the value on top of the stack
IL_000a: stfld int32 Simple.Number::a
IL_000f: ret
Now, say you have a Number object and two threads call its Add method like this:
number.Add(2); // Thread 1
number.Add(3); // Thread 2
If you want the result to be 5 (0 + 2 + 3), there's a problem. You don't know when these threads will execute their instructions. Both threads could execute IL_0003 (pushing zero onto the stack) before either executes IL_000a (actually changing the member variable) and you get this:
a = 0 + 2; // Thread 1
a = 0 + 3; // Thread 2
The last thread to finish 'wins' and at the end of the process, a is 2 or 3 instead of 5.
So you have to make sure that one complete set of instructions finishes before the other set. To do that, you can:
1) Lock access to the class member while it's being written, using one of the many .NET synchronization primitives (like lock, Mutex, ReaderWriterLockSlim, etc.) so that only one thread can work on it at a time.
2) Push write operations into a queue and process that queue with a single thread. As Thorarin points out, you still have to synchronize access to the queue if it isn't thread-safe, but it's worth it for complex write operations.
There are other techniques. Some (like Interlocked) are limited to particular data types, and there are even more (like the ones discussed in Non-blocking synchronization and Part 4 of Joseph Albahari's Threading in C#), though they are more complex: approach them with caution.
In multithreaded applications, there are many situations where simultaneous access to the same data can cause problems. In such cases synchronization is required to guarantee that only one thread has access at any one time.
I imagine they mean using the lock-statement (or SyncLock in VB.NET) vs. using a Monitor.
You might want to read this page for examples and an understanding of the concept. However, if you have no experience with multithreaded application design, it will likely become apparent quickly, should your new employer put you to the test. It's a fairly complicated subject, with many possible pitfalls such as deadlock.
There is a decent MSDN page on the subject as well.
There may be other options, depending on the type of the member variable and how it is to be changed. Incrementing an integer, for example, can be done with the Interlocked.Increment method.
As an exercise and demonstration of the problem, try writing an application that starts 5 simultaneous threads, each incrementing a shared counter a million times. The intended end result of the counter would be 5 million, but that is (probably) not what you will end up with :)
Edit: made a quick implementation myself (download). Sample output:
Unsynchronized counter demo:
expected counter = 5000000
actual counter = 4901600
Time taken (ms) = 67
Synchronized counter demo:
expected counter = 5000000
actual counter = 5000000
Time taken (ms) = 287
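Since this compilation is Delphi-centric, here is a minimal Delphi rendition of the same exercise (a sketch; AtomicIncrement requires a recent Delphi version):

uses
  System.Classes, System.SysUtils;

const
  ThreadCount = 5;
  IncsPerThread = 1000000;

var
  UnsafeCounter: Integer = 0;
  SafeCounter: Integer = 0;

procedure RunCounterDemo;
var
  Threads: array[0..ThreadCount - 1] of TThread;
  I: Integer;
begin
  for I := 0 to ThreadCount - 1 do
  begin
    Threads[I] := TThread.CreateAnonymousThread(
      procedure
      var
        J: Integer;
      begin
        for J := 1 to IncsPerThread do
        begin
          Inc(UnsafeCounter);           // plain read-modify-write: updates get lost
          AtomicIncrement(SafeCounter); // atomic: no updates lost
        end;
      end);
    Threads[I].FreeOnTerminate := False;  // so we can WaitFor below
    Threads[I].Start;
  end;
  for I := 0 to ThreadCount - 1 do
  begin
    Threads[I].WaitFor;
    Threads[I].Free;
  end;
  Writeln('expected = ', ThreadCount * IncsPerThread);
  Writeln('unsafe   = ', UnsafeCounter);  // almost certainly less than 5000000
  Writeln('safe     = ', SafeCounter);    // always 5000000
end;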
There are a couple of ways, several of which are mentioned previously.
ReaderWriterLockSlim is my preferred method. This gives you a database type of locking, and allows for upgrading (although the syntax for that is incorrect in the MSDN last time I looked and is very non-obvious)
lock statements. You treat a read like a write and simply prevent concurrent access to the variable.
Interlocked operations. These perform an operation on a value type in a single atomic step. They can be used for lock-free threading (I really wouldn't recommend this).
Mutexes and Semaphores (haven't used these)
Monitor statements (this is essentially how the lock keyword works)
While I don't mean to denigrate other answers, I would not trust anything that does not use one of these techniques. My apologies if I have forgotten any.

Is this a safe version of double-checked locking?

A slightly modified version of the canonical broken double-checked locking example from Wikipedia:
class Foo {
    private Helper helper = null;
    public Helper getHelper() {
        if (helper == null) {
            synchronized(this) {
                if (helper == null) {
                    // Create new Helper instance and store reference on
                    // stack so other threads can't see it.
                    Helper myHelper = new Helper();
                    // Atomically publish this instance.
                    atomicSet(helper, myHelper);
                }
            }
        }
        return helper;
    }
}
Does simply making the publication of the newly created Helper instance atomic make this double-checked locking idiom safe, assuming that the underlying atomic-ops library works properly? I realize that in Java one could just use volatile, but even though the example is in pseudo-Java, this is supposed to be a language-agnostic question.
See also:
Double checked locking Article
It entirely depends on the exact memory model of your platform/language.
My rule of thumb: just don't do it. Lock-free (or reduced lock, in this case) programming is hard and shouldn't be attempted unless you're a threading ninja. You should only even contemplate it when you've got profiling proof that you really need it, and in that case you get the absolute best and most recent book on threading for that particular platform and see if it can help you.
I don't think you can answer the question in a language-agnostic fashion without getting away from code completely. It all depends on how synchronized and atomicSet work in your pseudocode.
The answer is language dependent - it comes down to the guarantees provided by atomicSet().
If the construction of myHelper can be reordered to after the atomicSet(), then it doesn't matter how atomically the variable is assigned to the shared state.
i.e.
// Create new Helper instance and store reference on
// stack so other threads can't see it.
Helper myHelper = new Helper(); // ALLOCATE MEMORY HERE BUT DON'T INITIALISE

// Atomically publish this instance.
atomicSet(helper, myHelper); // ATOMICALLY POINT helper AT UNINITIALISED MEMORY

// Another thread runs at this point and tries to use the helper object.

// AT THE PROGRAM'S LEISURE, INITIALISE THE Helper OBJECT.
If the language allows this reordering, then the double-checking will not work.
Using volatile alone would not prevent multiple instantiations; the synchronized block does prevent multiple instances from being created. However, with your code it is possible that helper is returned before it has been fully set up (thread 'A' instantiates it, but before it is set up, thread 'B' comes along, sees that helper is non-null, and returns it straight away). To fix that problem, remove the first if (helper == null).
Most likely it is broken, because the problem of a partially constructed object is not addressed.
To all the people worried about a partially constructed object:
As far as I understand, the problem of partially constructed objects is only a problem within constructors. In other words, within a constructor, if an object references itself (including its subclass) or its members, then there are possible issues with partial construction. Otherwise, when a constructor returns, the class is fully constructed.
I think you are confusing partial construction with the different problem of how the compiler optimizes the writes. The compiler can choose to A) allocate the memory for the new Helper object, B) write the address to myHelper (the local stack variable), and then C) invoke any constructor initialization. Anytime after point B and before point C, accessing myHelper would be a problem.
It is this compiler optimization of the writes, not partial construction that the cited papers are concerned with. In the original single-check lock solution, optimized writes can allow multiple threads to see the member variable between points B and C. This implementation avoids the write optimization issue by using a local stack variable.
The main scope of the cited papers is to describe the various problems with the double-check lock solution. However, unless the atomicSet method is also synchronizing against the Foo class, this solution is not a double-check lock solution. It is using multiple locks.
I would say this all comes down to the implementation of the atomic assignment function. The function needs to be truly atomic, it needs to guarantee that processor local memory caches are synchronized, and it needs to do all this at a lower cost than simply always synchronizing the getHelper method.
Based on the cited paper, in Java, it is unlikely to meet all these requirements. Also, something that should be very clear from the paper is that Java's memory model changes frequently. It adapts as better understanding of caching, garbage collection, etc. evolve, as well as adapting to changes in the underlying real processor architecture that the VM runs on.
As a rule of thumb, if you optimize your Java code in a way that depends on the underlying implementation, as opposed to the API, you run the risk of having broken code in the next release of the JVM. (Although, sometimes you will have no choice.)
dsimcha:
If your atomicSet method is real, then I would try sending your question to Doug Lea (along with your atomicSet implementation). I have a feeling he's the kind of guy that would answer. I'm guessing that for Java he will tell you that it's cheaper to always synchronize and to look to optimize somewhere else.
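For comparison with the Delphi material earlier on this page, here is a sketch of atomic publication done with a compare-and-swap instead of double-checked locking (an assumed pattern, using TInterlocked.CompareExchange from System.SyncObjs; note that it tolerates, rather than prevents, multiple instantiations):

uses
  System.SyncObjs;

var
  Helper: TObject = nil;  // shared, lazily created singleton slot

function GetHelper: TObject;
var
  Mine: TObject;
begin
  if Helper = nil then
  begin
    Mine := TObject.Create;  // construct fully BEFORE publishing
    // Publish atomically: only the first thread's instance is installed.
    if TInterlocked.CompareExchange(Helper, Mine, nil) <> nil then
      Mine.Free;  // another thread won the race; discard ours
  end;
  Result := Helper;  // either our instance or the race winner's
end;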
