"threadgroup_barrier" makes no difference - multithreading

Currently I'm working with Metal compute shaders and trying to understand how GPU thread synchronization works there.
I wrote a simple piece of code, but it doesn't work the way I expect it to:
Suppose I have a threadgroup variable, an array into which all threads can write their output simultaneously.
#include <metal_stdlib>
using namespace metal;

kernel void compute_features(device float &output [[ buffer(0) ]],
                             ushort2 group_pos [[ threadgroup_position_in_grid ]],
                             ushort2 thread_pos [[ thread_position_in_threadgroup ]],
                             ushort tid [[ thread_index_in_threadgroup ]])
{
    threadgroup short blockIndices[288];
    float someValue = 0.0;
    // doing some work here which fills someValue...
    blockIndices[thread_pos.y * THREAD_COUNT_X + thread_pos.x] = someValue;
    // wait until all threads are done with calculations
    threadgroup_barrier(mem_flags::mem_none);
    output += blockIndices[thread_pos.y * THREAD_COUNT_X + thread_pos.x]; // filling out output variable with the threads' calculations
}
The code above doesn't work. The output variable doesn't contain the calculations of all threads; it only contains the value from the thread that was presumably the last to add its value to output. To me it seems like threadgroup_barrier does absolutely nothing.
Now, the interesting part. The code below works:
blockIndices[thread_pos.y * THREAD_COUNT_X + thread_pos.x] = someValue;
threadgroup_barrier(mem_flags::mem_none); // wait until all threads are done with calculations
if (tid == 0) {
    for (int i = 0; i < 288; i++) {
        output += blockIndices[i]; // filling out output variable with the threads' calculations
    }
}
And this code also works as well as the previous one:
blockIndices[thread_pos.y * THREAD_COUNT_X + thread_pos.x] = someValue;
if (tid == 0) {
    for (int i = 0; i < 288; i++) {
        output += blockIndices[i]; // filling out output variable with the threads' calculations
    }
}
To summarize: my code works as expected only when I handle the threadgroup memory from one GPU thread, and it doesn't matter which one; it can be the last thread in the threadgroup as well as the first. And the presence of threadgroup_barrier makes absolutely no difference. I also tried threadgroup_barrier with the mem_threadgroup flag; the code still doesn't work.
I understand that I might be missing some very important detail, and I would be happy if someone could point out my error. Thanks in advance!

When you write output += blockIndices[...], all threads try to perform this operation at the same time. But since output is not an atomic variable, this results in a race condition; it is not a thread-safe operation.
Your second solution is the correct one. You need a single thread to collect the results (although you could also split that work across multiple threads). That it still works OK when you remove the barrier may just be luck.
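For illustration, here is a minimal sketch (not your exact kernel) of splitting that final sum across threads with a tree reduction in threadgroup memory. It assumes a power-of-two threadgroup size THREADGROUP_SIZE, a 1D grid of threadgroups, and an output buffer with one slot per threadgroup; those names are illustrative, not from the question:
#include <metal_stdlib>
using namespace metal;

#define THREADGROUP_SIZE 256 // assumed power of two; the 288 above would need padding

kernel void reduce_features(device float *output [[ buffer(0) ]],
                            ushort tid [[ thread_index_in_threadgroup ]],
                            ushort group_idx [[ threadgroup_position_in_grid ]])
{
    threadgroup float partial[THREADGROUP_SIZE];
    partial[tid] = 0.0f; // someValue computed by this thread would go here

    // Each round halves the number of active threads; the barrier makes the
    // previous round's threadgroup-memory writes visible before they are read.
    for (ushort stride = THREADGROUP_SIZE / 2; stride > 0; stride /= 2) {
        threadgroup_barrier(mem_flags::mem_threadgroup);
        if (tid < stride) {
            partial[tid] += partial[tid + stride];
        }
    }

    // Exactly one thread writes to device memory, so there is no race on output.
    if (tid == 0) {
        output[group_idx] = partial[0];
    }
}
Note that mem_flags::mem_none is only an execution barrier; when you hand data between threads through threadgroup memory you generally want mem_flags::mem_threadgroup so the stores are actually made visible across the threadgroup.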

Related

How to prevent switching threads in a FOR loop while using an omp directive

Assume a simple FOR loop like this:
#pragma omp parallel for
for (int i = 0; i < 10; i++)
{
    //do_something_1
    //do_something_2
    //do_something_3
}
As I understand it, loop iterations can run in any order. That is fine with me.
The problem is that threads can be switched out in the middle of an iteration.
Assume 10 threads were created (one per iteration).
Let's say thread_1 is currently running, and after completing the do_something_1 and do_something_2 lines, execution switched to thread_4 (for example).
Is there a way to force a thread to complete a whole iteration without being switched out, i.e. thread_1 completing the lines do_something_1, do_something_2 and do_something_3 without being interrupted?
I understand that this is part of the OS's scheduling in a multithreaded environment, but I hope there is a way to bypass it.
Edit:
Using the ORDERED clause in the pragma is needed only in 2 cases:
1) You need the result of the previous iteration in the current iteration. Then it effectively becomes a single-threaded program.
2) You need the index to be correct in each iteration (though you can still run all iterations in parallel).
Let's look at an example of my problem:
int new_index = 0;
#pragma omp parallel for
for (int i = 0; i < 10; i++)
{
    <mutex lock>
    new_index++;
    <mutex unlock>
    //do_something_1
    //do_something_2
    //do_something_3
    my_array[new_index] = 5; // correct
    my_array[i] = 5;         // not correct
}
So there will still be 10 iterations, but now my_array should get a correct index each time.
The problem is: thread_1 increments new_index (new_index = 1), completes do_something_1, and then execution switches to thread_2.
Thread_2 completes its iteration entirely (new_index = 2), but now, when the OS switches back to thread_1, it no longer has a correct new_index (it still uses new_index = 1) and my_array ends up wrong.
So I wondered whether it is possible to tell the OS not to switch threads in the middle of an iteration.
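For reference, a sketch of one way to get a per-iteration index that a thread switch cannot invalidate, using OpenMP's atomic capture (the do_something_* placeholders stand in for the real work, and my_array is assumed to be large enough):
int new_index = 0;
#pragma omp parallel for
for (int i = 0; i < 10; i++)
{
    int my_index;
    // Atomically take the current value and bump the shared counter in one
    // step; my_index is private to this iteration, so a later thread switch
    // cannot change it.
    #pragma omp atomic capture
    my_index = new_index++;
    //do_something_1
    //do_something_2
    //do_something_3
    my_array[my_index] = 5;
}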

How to join all threads before deleting the ThreadPool

I am using a MultiThreading class which creates the required number of threads in its own threadpool and deletes itself after use.
#include <thread>
#include <mutex>
#include <condition_variable>
#include <atomic>

std::thread *m_pool; // number of threads according to available cores
std::mutex m_locker;
std::condition_variable m_condition;
std::atomic<bool> m_exit;
int m_processors;
m_pool = new std::thread[m_processors + 1];
void func()
{
    //code
}
for (int i = 0; i < m_processors; i++)
{
    m_pool[i] = std::thread(func);
}
void reset(void)
{
    {
        std::lock_guard<std::mutex> lock(m_locker);
        m_exit = true;
    }
    m_condition.notify_all();
    for (int i = 0; i <= m_processors; i++)
        m_pool[i].join();
    delete[] m_pool;
}
After running through all the tasks, the for loop is supposed to join all running threads before delete[] is executed.
But there seems to be one last thread still running while m_pool no longer exists.
This leads to the problem that I can't close my program anymore.
Is there any way to check if all threads are joined or wait for all threads to be joined before deleting the threadpool?
Simple typo bug I think.
Your loop with the condition i <= m_processors is an off-by-one bug: only m_processors threads were ever started (indices 0 to m_processors - 1), yet the loop also tries to join m_pool[m_processors]. Suppose m_processors is 2: threads were started at indices [0] and [1], but the loop also touches [2]. That last element is a default-constructed std::thread that never ran, so it is not joinable and calling join() on it throws std::system_error; and if the array held only m_processors elements, you would also be reading past its end.
You likely intended i < m_processors.
The real source of the problem is addressed by Wick's answer. I will extend it with some tips that also solve your problem while improving other aspects of your code.
If you use C++11 for std::thread, then you shouldn't create your thread handles using operator new[]. There are better ways of doing that with other C++ constructs, which will make everything simpler and exception safe (you don't leak memory if an unexpected exception is thrown).
Store your thread objects in a std::vector. It will manage the memory allocation and deallocation for you (no more new and delete). You can use other more flexible containers such as std::list if you insert/delete threads dynamically.
Fill the vector in place with std::generate_n or similar:
#include <vector>
#include <thread>
#include <algorithm>  // std::generate_n
#include <iterator>   // std::back_inserter

std::vector<std::thread> m_pool;
m_pool.reserve(m_processors);
// Fill the vector
std::generate_n(std::back_inserter(m_pool), m_processors,
                [](){ return std::thread(func); });
Join all the elements using a range-based for loop and release the handles with the container's own functions:
for (std::thread& t : m_pool) {
    t.join();
}
m_pool.clear();

lock-free bounded MPMC ringbuffer failure

I've been banging my head against my attempt at a lock-free multiple-producer multiple-consumer ring buffer. The basic idea is to exploit the natural overflow of the unsigned char and unsigned short types, fix the element buffer's size to the range of one of those types, and then you get wrap-around to the beginning of the ring buffer for free.
The problem is: my solution doesn't work for multiple producers (it does, however, work for N consumers, and also for single producer / single consumer).
#include <atomic>

template<typename Element, typename Index = unsigned char> struct RingBuffer
{
    std::atomic<Index> readIndex;
    std::atomic<Index> writeIndex;
    std::atomic<Index> scratchIndex;
    Element elements[1 << (sizeof(Index) * 8)];

    RingBuffer() :
        readIndex(0),
        writeIndex(0),
        scratchIndex(0)
    {
        ;
    }

    bool push(const Element & element)
    {
        while(true)
        {
            const Index currentReadIndex = readIndex.load();
            Index currentWriteIndex = writeIndex.load();
            const Index nextWriteIndex = currentWriteIndex + 1;
            if(nextWriteIndex == currentReadIndex)
            {
                return false;
            }
            if(scratchIndex.compare_exchange_strong(
                currentWriteIndex, nextWriteIndex))
            {
                elements[currentWriteIndex] = element;
                writeIndex = nextWriteIndex;
                return true;
            }
        }
    }

    bool pop(Element & element)
    {
        Index currentReadIndex = readIndex.load();
        while(true)
        {
            const Index currentWriteIndex = writeIndex.load();
            const Index nextReadIndex = currentReadIndex + 1;
            if(currentReadIndex == currentWriteIndex)
            {
                return false;
            }
            element = elements[currentReadIndex];
            if(readIndex.compare_exchange_strong(
                currentReadIndex, nextReadIndex))
            {
                return true;
            }
        }
    }
};
The main idea for the write side was to use a temporary index, scratchIndex, that acts as a pseudo-lock allowing only one producer at a time to copy-construct into the elements buffer before updating writeIndex and letting any other producer make progress. Before I am called a heathen for calling my approach 'lock-free': I realise it isn't exactly lock-free, but in practice (if it worked!) it would be significantly faster than a normal mutex!
I am aware of a (more complex) MPMC ring buffer solution here http://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue, but I am really experimenting with my own idea so I can then compare it against that approach and find out where each excels (or indeed whether my approach just flat out fails!).
Things I have tried:
Using compare_exchange_weak
Using more precise std::memory_order values that match the behaviour I want
Adding cache-line padding between the various indices I have
Making elements an array of std::atomic<Element> instead of a plain Element array
I am sure this boils down to a fundamental segfault in my head as to how to use atomic accesses to avoid mutexes, and I would be entirely grateful to whoever can point out which neurons are drastically misfiring in my head! :)
This is a form of the A-B-A problem. A successful producer looks something like this:
load currentReadIndex
load currentWriteIndex
cmpxchg store scratchIndex = nextWriteIndex
store element
store writeIndex = nextWriteIndex
If a producer stalls for some reason between steps 2 and 3 for long enough, it is possible for the other producers to produce an entire queue's worth of data and wrap back around to the exact same index so that the compare-exchange in step 3 succeeds (because scratchIndex happens to be equal to currentWriteIndex again).
By itself, that isn't a problem. The stalled producer is perfectly within its rights to increment scratchIndex to lock the queue—even if a magical ABA-detecting cmpxchg rejected the store, the producer would simply try again, reload exactly the same currentWriteIndex, and proceed normally.
The actual problem is the nextWriteIndex == currentReadIndex check between steps 2 and 3. The queue is logically empty if currentReadIndex == currentWriteIndex, so this check exists to make sure that no producer gets so far ahead that it overwrites elements that no consumer has popped yet. It appears to be safe to do this check once at the top, because all the consumers should be "trapped" between the observed currentReadIndex and the observed currentWriteIndex.
Except that another producer can come along and bump up the writeIndex, which frees the consumer from its trap. If a producer stalls between steps 2 and 3, when it wakes up the stored value of readIndex could be absolutely anything.
Here's an example, starting with an empty queue, that shows the problem happening:
Producer A runs steps 1 and 2. Both loaded indices are 0. The queue is empty.
Producer B interrupts and produces an element.
Consumer pops an element. Both indices are 1.
Producer B produces 255 more elements. The write index wraps around to 0, the read index is still 1.
Producer A awakens from its slumber. It had previously loaded both read and write indices as 0 (empty queue!), so it attempts step 3. Because the other producer coincidentally stopped with the index back at 0, the compare-exchange succeeds and the store proceeds. At completion the producer sets writeIndex = 1, so now both stored indices are 1 and the queue is logically empty. A full queue's worth of elements will now be completely ignored.
(I should mention that the only reason I can get away with talking about "stalling" and "waking up" is that all the atomics used are sequentially consistent, so I can pretend that we're in a single-threaded environment.)
Note that the way you are using scratchIndex to guard concurrent writes is essentially a lock; whoever successfully completes the cmpxchg gets total write access to the queue until it releases the lock. The simplest way to fix this failure is to just replace scratchIndex with a spinlock: it won't suffer from A-B-A, and it's what is effectively happening already anyway.
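A minimal sketch of that replacement is below; it assumes the same members as your RingBuffer and swaps scratchIndex for a std::atomic_flag. Because the lock holder is the only thread that can advance writeIndex, and readIndex can only move forward, the full-queue check done while holding the lock cannot be invalidated the way the stale load above was:
std::atomic_flag writeLock = ATOMIC_FLAG_INIT; // replaces scratchIndex

bool push(const Element & element)
{
    // Spin until we own the write side.
    while (writeLock.test_and_set(std::memory_order_acquire))
    {
    }
    const Index currentWriteIndex = writeIndex.load();
    const Index nextWriteIndex = currentWriteIndex + 1;
    if (nextWriteIndex == readIndex.load()) // full; checked while holding the lock
    {
        writeLock.clear(std::memory_order_release);
        return false;
    }
    elements[currentWriteIndex] = element;
    writeIndex.store(nextWriteIndex); // publish the element to consumers
    writeLock.clear(std::memory_order_release);
    return true;
}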
bool push(const Element & element)
{
    while(true)
    {
        const Index currentReadIndex = readIndex.load();
        Index currentWriteIndex = writeIndex.load();
        const Index nextWriteIndex = currentWriteIndex + 1;
        if(nextWriteIndex == currentReadIndex)
        {
            return false;
        }
        if(scratchIndex.compare_exchange_strong(
            currentWriteIndex, nextWriteIndex))
        {
            elements[currentWriteIndex] = element;
            // Problem here!
            writeIndex = nextWriteIndex;
            return true;
        }
    }
}
I've marked the problematic spot. Multiple threads can reach the writeIndex = nextWriteIndex assignment at the same time. The data will be written in any order, although each individual write is atomic.
This is a problem because you're trying to update two values under the same atomic condition, which is generally not possible. Assuming the rest of your method is fine, one way around this would be to combine scratchIndex and writeIndex into a single value of double the size. For example, treat two uint32_t values as a single uint64_t value and operate atomically on that.
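The mechanics of that double-width packing could look something like the sketch below (illustrative only: it shows how two 32-bit indices can be read and compare-exchanged as one 64-bit word, not how to rework the whole queue around it):
#include <atomic>
#include <cstdint>

struct PackedIndices
{
    std::atomic<uint64_t> word{0}; // high 32 bits: scratchIndex, low 32 bits: writeIndex

    static uint64_t pack(uint32_t scratch, uint32_t write)
    {
        return (uint64_t(scratch) << 32) | write;
    }

    // Move from (expectedScratch, expectedWrite) to (newScratch, newWrite)
    // in a single atomic step, or fail if either index changed underneath us.
    bool transition(uint32_t expectedScratch, uint32_t expectedWrite,
                    uint32_t newScratch, uint32_t newWrite)
    {
        uint64_t expected = pack(expectedScratch, expectedWrite);
        return word.compare_exchange_strong(expected, pack(newScratch, newWrite));
    }
};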

Design pattern for asynchronous while loop

I have a function that boils down to:
while(doWork)
{
    config = generateConfigurationForTesting();
    result = executeWork(config);
    doWork = isDone(result);
}
How can I rewrite this for efficient asynchronous execution, assuming all functions are thread safe, independent of previous iterations, and that it will probably take more iterations than the maximum number of allowable threads?
The problem here is that we don't know how many iterations are required in advance, so we can't make a dispatch_group or use dispatch_apply.
This is my first attempt, but it looks a bit ugly to me because of the arbitrarily chosen values and the sleeping:
int thread_count = 0;
bool doWork = true;
int max_threads = 20; // arbitrarily chosen number
dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
while(doWork)
{
    if(thread_count < max_threads)
    {
        dispatch_async(queue, ^{
            Config myconfig = generateConfigurationForTesting();
            Result myresult = executeWork(myconfig);
            dispatch_async(queue, ^{ checkResult(myresult); });
        });
        thread_count++;
    }
    else
        usleep(100); // don't consume too much CPU
}

void checkResult(Result value)
{
    if(value == good) doWork = false;
    thread_count--;
}
Based on your description, it looks like generateConfigurationForTesting is some kind of randomization technique or otherwise a generator which can produce a near-infinite number of configurations (hence your comment that you don't know ahead of time how many iterations you will need). With that as an assumption, you are basically stuck with the model that you've created, since your executor needs to be limited by some reasonable assumptions about the queue and you don't want to over-generate, as that would just extend the length of the run after you have succeeded in finding value == good measurements.
I would suggest you consider using a queue (or OSAtomicIncrement* and OSAtomicDecrement*) to protect access to thread_count and doWork. As it stands, the thread_count increment and decrement happen on two different queues (the main queue for the main thread and the default queue for the background task) and thus could simultaneously increment and decrement the thread count. This could lead to an undercount (which would cause more threads to be created than you expect) or an overcount (which would cause you to never complete your task).
Another option for making this look a little nicer would be to have checkResult add new elements to the queue if value != good. This way, you load up the initial elements of the queue using dispatch_apply(20, queue, ^{ ... }) and you don't need the thread_count at all. The first 20 will be added using dispatch_apply (or whatever amount dispatch_apply feels is appropriate for your configuration) and then each time checkResult is called you can either set doWork = false or add another operation to the queue.
dispatch_apply() works for this, just pass ncpu as the number of iterations (apply never uses more than ncpu worker threads) and keep each instance of your worker block running for as long as there is more work to do (i.e. loop back to generateConfigurationForTesting() unless !doWork).
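A rough sketch of that shape (it reuses the question's generateConfigurationForTesting, executeWork and isDone helpers, obtains ncpu via sysconf, and replaces the shared doWork bool with an atomic flag; compile as Objective-C++/C++ with blocks):
#include <dispatch/dispatch.h>
#include <atomic>
#include <unistd.h>

void runUntilDone()
{
    std::atomic<bool> done(false);
    std::atomic<bool> *doneFlag = &done; // the block copies this pointer, not the atomic itself

    size_t ncpu = (size_t)sysconf(_SC_NPROCESSORS_ONLN);
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    // dispatch_apply is synchronous: it returns only after every worker block
    // has finished, so there is nothing to join or poll afterwards.
    dispatch_apply(ncpu, queue, ^(size_t worker) {
        (void)worker; // the worker index is not needed here
        while (!doneFlag->load()) {
            Config config = generateConfigurationForTesting();
            Result result = executeWork(config);
            if (isDone(result)) {
                doneFlag->store(true); // all workers stop after their current iteration
            }
        }
    });
}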

Why do I get a different result each time I use Parallel.For?

using System.Threading.Tasks;
const int _Total = 1000000;
[ThreadStatic]
static long count = 0;
static void Main(string[] args)
{
    Parallel.For(0, _Total, (i) =>
    {
        count++;
    });
    Console.WriteLine(count);
}
I get a different result every time; can anybody help me and tell me why?
Most likely your "count" variable isn't atomic in any form, so you are getting concurrent modifications that aren't synchronized. Thus, the following sequence of events is possible:
Thread 1 reads "count"
Thread 2 reads "count"
Thread 1 stores value+1
Thread 2 stores value+1
Thus, the "for" loop has done 2 iterations, but the value has only increased by 1. As thread ordering is "random", so will be the result.
Things can get a lot worse, of course:
Thread 1 reads count
Thread 2 increases count 100 times
Thread 1 stores value+1
In that case, all those 100 increases done by thread 2 are undone. Although that can really only happen if the "++" is actually split into at least 2 machine instructions, so it can be interrupted in the middle of the operation. In the one-instruction case, you're only dealing with interleaved hardware threads.
It's a typical race condition scenario.
So, most likely, ThreadStatic is not working here. For this concrete sample, use System.Threading.Interlocked:
static void Main()
{
    int total = 1000000;
    int count = 0;
    System.Threading.Tasks.Parallel.For(0, total, (i) =>
    {
        System.Threading.Interlocked.Increment(ref count);
    });
    Console.WriteLine(count);
}
Similar question
C# ThreadStatic + volatile members not working as expected
