I am trying to implement the Reusable Barrier algorithm from "The Little Book of Semaphores", but without using a counter. I thought about an implementation using 4 barriers, but I cannot find a way of knowing when all threads have passed the first two barriers, for example.
Another implementation I thought about uses some boolean variables:
def threadA():
    isOpenA = True
    barrierA.wait()
    isOpenA = False
    barrierA.signal()
    isOpenA = True
    while isOpenA == True:
        # hold threads here until each one arrives
        # barrierB wait
        # signal barrierB
Is there a possible way to implement the Reusable Barrier without using a counter?
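Not the counter-free construction the question asks for, but for reference, Python's built-in threading.Barrier already has the target behavior: it resets itself each time all parties arrive, so the same object can gate successive phases. A minimal sketch (all names are my own):

```python
import threading

N = 4
barrier = threading.Barrier(N)   # reusable: resets after all N threads arrive
results = []

def worker(tid):
    for phase in range(3):       # the SAME barrier gates every phase
        barrier.wait()           # blocks until all N threads reach this point
        results.append((phase, tid))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# every thread finishes phase p before any thread starts phase p+1
```

Because a thread can only reach the barrier for phase p+1 after recording phase p, all phase-p entries in results precede all phase-(p+1) entries.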
I have several questions about atomic operations and multithreading.
There is a function for which a race condition occurs (Julia):
function counter(n)
    counter = 0
    for i in 1:n
        counter += i
    end
    return counter
end
If atomic operations are used to change the global variable "counter", would that help get rid of the race condition?
Does the cache-coherence protocol have any real effect on performance? Virtual machines like the JVM can use their own architectures to support parallel computing.
Do atomic arithmetic and similar operations require more or less resources than ordinary arithmetic?
This is difficult for me at the moment; I hope you can help.
I don't quite understand your example: the variable counter appears to be local, so there is no race condition in your example.
Anyway, yes, atomic operations will ensure that race conditions do not occur. There are a few ways to do that.
1. Your counter can be an Atomic{Int}:
using .Threads
const counter = Atomic{Int}(0)
...
function updatecounter(i)
    atomic_add!(counter, i)
end
This is described in the manual: https://docs.julialang.org/en/v1/manual/multi-threading/#Atomic-Operations
2. You can use a field in a struct declared as @atomic:
mutable struct Counter
    @atomic c::Int
end
const counter = Counter(0)
...
function updatecounter(i)
    @atomic counter.c += i
end
This is described here: https://docs.julialang.org/en/v1/base/multi-threading/#Atomic-operations
It seems the details of the semantics haven't been written yet, but it's the same as in C++.
3. You can use a lock:
counter = 0
countlock = ReentrantLock()
...
function updatecounter(i)
    @lock countlock global counter += i
end
1. and 2. are more or less the same. The lock approach is slower, but can be used if several operations must be done serially. No matter how you do it, there will be some performance degradation relative to non-atomic arithmetic. The atomic primitives in 1. and 2. must issue a memory fence to ensure correct ordering, so cache coherence will matter, depending on the hardware.
When using monitors for most concurrency problems, you can just put the critical section inside a monitor method and then invoke the method. However, there are some multiplexing problems wherein up to n threads can run their critical sections simultaneously. So we can say that it's useful to know how to use a monitor like the following:
monitor.enter();
runCriticalSection();
monitor.exit();
What can we use inside the monitors so we can go about doing this?
Side question: Are there standard resources tackling this? Most of what I read involve only putting the critical section inside the monitor. For semaphores there is "The Little Book of Semaphores".
As far as I understand your question, any solution must satisfy this:
When fewer than n threads are in the critical section, a thread calling monitor.enter() should not block, i.e. the only thing preventing it from progressing should be the whims of the scheduler.
At most n threads are in the critical section at any point in time; implying that
When thread n+1 calls monitor.enter(), it must block until a thread calls monitor.exit().
As far as I can tell, your requirements are equivalent to this:
The "monitor" is a semaphore with an initial value of n.
monitor.enter() is semaphore.prolaag() (aka P, decrement or wait)
monitor.exit() is semaphore.verhoog() (aka V, increment or signal)
So here it is, a semaphore implemented from a monitor:
monitor Semaphore(n):
    int capacity = n

    method enter:
        while capacity == 0: wait()
        capacity -= 1

    method exit:
        capacity += 1
        signal()
Use it like this:
shared state:
    monitor = Semaphore(n)

each thread:
    monitor.enter()
    runCriticalSection()
    monitor.exit()
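The monitor pseudocode above translates almost line for line into Python, using a Condition object to play the role of the monitor's implicit lock plus wait()/signal() (class and method names are my own):

```python
import threading

class Semaphore:
    """Monitor-style semaphore: at most n threads between enter() and exit()."""

    def __init__(self, n):
        self._capacity = n
        self._cond = threading.Condition()  # monitor lock + wait/signal

    def enter(self):
        with self._cond:                    # take the monitor lock
            while self._capacity == 0:      # loop guards against spurious wakeups
                self._cond.wait()
            self._capacity -= 1

    def exit(self):
        with self._cond:
            self._capacity += 1
            self._cond.notify()             # wake one thread blocked in enter()
```

Each thread then brackets its critical section with monitor.enter() and monitor.exit(), exactly as in the pseudocode usage block.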
The other path
I guess that you might want some kind of syntactic wrapper, let's call it Multimonitor, so you can write something like this:
Multimonitor(n):
    method critical_section_a:
        <statements>
    method critical_section_b:
        <statements>
And your run-time environment would ensure that at most n threads are active inside any of the monitor methods (in your case you just wanted one method). I know of no such feature in any programming language or runtime environment.
Perhaps in Python you can create a Multimonitor class containing all the book-keeping variables, then subclass from it and put decorators on all the methods; a metaclass-based solution might be able to do the decorating for the user.
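One way that Python idea might look, sketched with a semaphore doing the book-keeping (Multimonitor, guarded, and Logger are names I made up for illustration):

```python
import threading
from functools import wraps

class Multimonitor:
    """At most n threads may be inside any 'guarded' method at once."""

    def __init__(self, n):
        self._slots = threading.Semaphore(n)

    @staticmethod
    def guarded(method):
        @wraps(method)
        def wrapper(self, *args, **kwargs):
            with self._slots:        # take a slot; released when the method returns
                return method(self, *args, **kwargs)
        return wrapper

class Logger(Multimonitor):
    def __init__(self):
        super().__init__(2)          # at most 2 threads inside guarded methods
        self.lines = []
        self._lk = threading.Lock()

    @Multimonitor.guarded
    def critical_section_a(self, msg):
        with self._lk:               # the list itself still needs its own lock
            self.lines.append(msg)
```

A metaclass could apply guarded to every public method automatically, sparing the subclass author the decorators.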
The third option
If you implement monitors using semaphores, you're often using a semaphore as a mutex around monitor entry and resume points. I think you could initialize such a semaphore with a value larger than one and thereby produce such a Multimonitor, complete with wait() and signal() on condition variables. But: it would do more than what you need in your stated question, and if you use semaphores, why not just use them in the basic and straightforward way?
This is a continuation from here: Golang: Shared communication in async http server
Assuming I have a hashmap w/ locking:
// create an async hashmap for inter-request communication
type state struct {
    *sync.Mutex                      // inherits locking methods
    AsyncResponses map[string]string // map ids to values
}

var State = &state{&sync.Mutex{}, map[string]string{}}
Functions that write to this will place a lock. My question is, what is the best / fastest way to have another function check for a value without blocking writes to the hashmap? I'd like to know the instant a value is present on it.
MyVal = State.AsyncResponses[MyId]
Reading a shared map without blocking writers is the very definition of a data race. Actually, semantically it is a race even when the writers are blocked during the read: as soon as you finish reading the value and unblock the writers, the value may no longer exist in the map.
Anyway, it's not very likely that proper syncing would be the bottleneck in many programs. An uncontended lock of a {RW,}Mutex is probably on the order of < 20 ns even on mid-range CPUs. I suggest postponing optimization until after you have made the program correct, and after you have measured where the major part of the time is being spent.
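For the "know the instant a value is present" part, the usual pattern is to block on a condition variable instead of polling the map. The original is Go, but the pattern is easiest to show compactly in Python; State, set, and wait_for are names of my own:

```python
import threading

class State:
    """Map from ids to values; readers can block until a key appears."""

    def __init__(self):
        self._cond = threading.Cond = threading.Condition()
        self._responses = {}             # ids -> values

    def set(self, key, value):
        with self._cond:                 # writers hold the lock only briefly
            self._responses[key] = value
            self._cond.notify_all()      # wake any readers waiting on a key

    def wait_for(self, key, timeout=None):
        with self._cond:
            # releases the lock while waiting, so writers are not starved
            if self._cond.wait_for(lambda: key in self._responses, timeout):
                return self._responses[key]
            raise TimeoutError(key)
```

In Go the same shape is available via sync.Cond, or more idiomatically by handing each waiter a channel that the writer closes or sends on when the value arrives.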
Please bear with me on this as I'm new to this.
I have an array and two threads.
First thread appends new elements to the array when required
myArray ~= newArray;
Second thread removes elements from the array when required:
extractedArray = myArray[0..10];
myArray = myArray[10..myArray.length()];
Is this thread safe?
What happens when the two threads interact on the array at the exact same time?
No, it is not thread-safe. If you share data across threads, then you need to deal with making it thread-safe yourself via facilities such as synchronized statements, synchronized functions, core.atomic, and mutexes.
However, the other major thing that needs to be pointed out is that all data in D is thread-local by default. So, you can't access data across threads unless it's explicitly shared. So, you don't normally have to worry about thread safety at all. It's only when you explicitly share data that it's an issue.
No, this is not thread safe: it has the classic lost-update race.
Appending means examining the array to see if it can expand in place; if not, it has to make a copy (O(n) time). While the copy is in progress, the other thread can slice off a piece, and when the copy is done that piece will reappear.
You should look into using a linked-list implementation, which is easier to make thread safe.
Java's ConcurrentLinkedQueue uses the list described here for its implementation, and you can implement it with core.atomic.cas() from the standard library.
It is not thread-safe. The simplest way to fix this is to surround array operations with the synchronized block. More about it here: http://dlang.org/statement.html#SynchronizedStatement
Could you describe two methods of synchronizing multi-threaded write access performed on a class member?
Please, could anyone help me understand what this question is asking and what the right answer is?
When you change data in C#, something that looks like a single operation may be compiled into several instructions. Take the following class:
public class Number {
    private int a = 0;

    public void Add(int b) {
        a += b;
    }
}
When you build it, you get the following IL code:
IL_0000: nop
IL_0001: ldarg.0
IL_0002: dup
// Pushes the value of the private variable 'a' onto the stack
IL_0003: ldfld int32 Simple.Number::a
// Pushes the value of the argument 'b' onto the stack
IL_0008: ldarg.1
// Adds the top two values of the stack together
IL_0009: add
// Sets 'a' to the value on top of the stack
IL_000a: stfld int32 Simple.Number::a
IL_000f: ret
Now, say you have a Number object and two threads call its Add method like this:
number.Add(2); // Thread 1
number.Add(3); // Thread 2
If you want the result to be 5 (0 + 2 + 3), there's a problem. You don't know when these threads will execute their instructions. Both threads could execute IL_0003 (pushing zero onto the stack) before either executes IL_000a (actually changing the member variable) and you get this:
a = 0 + 2; // Thread 1
a = 0 + 3; // Thread 2
The last thread to finish 'wins' and at the end of the process, a is 2 or 3 instead of 5.
So you have to make sure that one complete set of instructions finishes before the other set. To do that, you can:
1) Lock access to the class member while it's being written, using one of the many .NET synchronization primitives (like lock, Mutex, ReaderWriterLockSlim, etc.) so that only one thread can work on it at a time.
2) Push write operations into a queue and process that queue with a single thread. As Thorarin points out, you still have to synchronize access to the queue if it isn't thread-safe, but it's worth it for complex write operations.
There are other techniques. Some (like Interlocked) are limited to particular data types, and there are even more (like the ones discussed in Non-blocking synchronization and Part 4 of Joseph Albahari's Threading in C#), though they are more complex: approach them with caution.
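Technique 2 above (serializing writes through a queue) can be sketched compactly in Python rather than C#, since queue.Queue is already thread-safe and needs no extra locking; the names writes and writer are my own:

```python
import queue
import threading

writes = queue.Queue()   # thread-safe: any thread may enqueue write requests
total = 0

def writer():
    """Single consumer thread: applies all writes, in order, one at a time."""
    global total
    while True:
        b = writes.get()
        if b is None:    # sentinel value: stop the worker
            break
        total += b       # only this thread ever touches 'total'

t = threading.Thread(target=writer)
t.start()
for b in (2, 3):         # stand-ins for number.Add(2) and number.Add(3)
    writes.put(b)
writes.put(None)
t.join()
# total is now 5, no matter which thread enqueued first
```

Because a single thread performs every read-modify-write, the IL-level interleaving described above simply cannot occur.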
In multithreaded applications, there are many situations where simultaneous access to the same data can cause problems. In such cases synchronization is required to guarantee that only one thread has access at any one time.
I imagine they mean using the lock-statement (or SyncLock in VB.NET) vs. using a Monitor.
You might want to read this page for examples and an understanding of the concept. However, if you have no experience with multithreaded application design, it will likely become quickly apparent, should your new employer put you to the test. It's a fairly complicated subject, with many possible pitfalls such as deadlock.
There is a decent MSDN page on the subject as well.
There may be other options, depending on the type of member variable and how it is to be changed. Incrementing an integer for example can be done with the Interlocked.Increment method.
As an exercise and demonstration of the problem, try writing an application that starts 5 simultaneous threads, each incrementing a shared counter a million times. The intended end result of the counter would be 5 million, but that is (probably) not what you will end up with :)
Edit: made a quick implementation myself (download). Sample output:
Unsynchronized counter demo:
expected counter = 5000000
actual counter = 4901600
Time taken (ms) = 67
Synchronized counter demo:
expected counter = 5000000
actual counter = 5000000
Time taken (ms) = 287
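The same demo translates to Python as the sketch below (iteration count reduced to keep it quick). Note that in CPython the unsynchronized variant may or may not lose updates depending on the interpreter version, so this sketch shows only the locked version, whose result is guaranteed:

```python
import threading

THREADS, PER_THREAD = 5, 100_000
counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(PER_THREAD):
        with lock:           # remove the lock to (possibly) see lost updates
            counter += 1     # read-modify-write: not atomic on its own

threads = [threading.Thread(target=work) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)               # 500000: the lock makes each increment atomic
```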
There are a couple of ways, several of which are mentioned previously.
ReaderWriterLockSlim is my preferred method. This gives you a database type of locking, and allows for upgrading (although the syntax for that was incorrect in the MSDN last time I looked, and is very non-obvious).
lock statements. You treat a read like a write and just prevent access to the variable
Interlocked operations. These perform an operation on a value type in an atomic step. They can be used for lock-free threading (I really wouldn't recommend this).
Mutexes and Semaphores (haven't used these)
Monitor statements (this is essentially how the lock keyword works)
While I don't mean to denigrate other answers, I would not trust anything that does not use one of these techniques. My apologies if I have forgotten any.