Dividing loop iterations among threads - multithreading

I recently wrote a small number-crunching program that basically loops over an N-dimensional grid and performs some calculation at each point.
for (int i1 = 0; i1 < N; i1++)
    for (int i2 = 0; i2 < N; i2++)
        for (int i3 = 0; i3 < N; i3++)
            for (int i4 = 0; i4 < N; i4++)
                histogram[bin_index(i1, i2, i3, i4)] += 1; // see bottom of question
It worked fine, yadda yadda yadda, lovely graphs resulted ;-) But then I thought, I have 2 cores on my computer, why not make this program multithreaded so I could run it twice as fast?
Now, my loops run a total of, let's say, around a billion calculations, and I need some way to split them up among threads. I figure I should group the calculations into "tasks" - say each iteration of the outermost loop is a task - and hand out the tasks to threads. I've considered
just giving thread #n all iterations of the outermost loop where i1 % nthreads == n - essentially predetermining which tasks go to which threads
trying to set up some mutex-protected variable which holds the parameter(s) (i1 in this case) of the next task that needs executing - assigning tasks to threads dynamically
What reasons are there to choose one approach over the other? Or another approach I haven't thought about? Does it even matter?
By the way, I wrote this particular program in C, but I imagine I'll be doing the same kind of thing again in other languages as well so answers need not be C-specific. (If anyone knows a C library for Linux that does this sort of thing, though, I'd love to know about it)
EDIT: in this case bin_index is a deterministic function which doesn't change anything except its own local variables. Something like this:
int bin_index(int i1, int i2, int i3, int i4) {
    // w, d, h are constant floats
    float x1 = i1 * w / N, x2 = i2 * w / N, y1 = i3 * d / N, y2 = i4 * d / N;
    float l = sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2) + h * h);
    float th = acos(h / l);
    // th_max is a constant float (previously computed as a function of w, d, h)
    return (int)(th / th_max);
}
(although I appreciate all the comments, even those which don't apply to a deterministic bin_index)

The first approach is simple. It is also sufficient if you expect the load to be balanced evenly across the threads. In some cases, especially if the cost of bin_index depends strongly on the parameter values, one thread could end up with a much heavier share of the work than the rest. Remember: the job is finished only when the last thread finishes.
The second approach is a bit more complicated, but balances the load more evenly as long as the tasks are fine-grained enough (the number of tasks is much larger than the number of threads).
Note that you may have issues putting the calculations in separate threads. Make sure that bin_index works correctly when multiple threads execute it simultaneously, and beware of using global or static variables for intermediate results.
Also, "histogram[bin_index(i1, i2, i3, i4)] += 1" can be interrupted by another thread, producing an incorrect result (the statement fetches the value, increments it, and stores it back into the array). You could give each thread a local histogram and combine them into a single histogram when all threads have finished. Alternatively, you could make sure that only one thread modifies the shared histogram at a time, but then the threads may block each other most of the time.

The first approach is enough. There is no need for extra complication here; once you start playing with mutexes you risk introducing hard-to-detect errors.
Don't add complexity unless you really see that you need it. Synchronization issues (especially with many threads rather than many processes) can be really painful.

As I understand it, OpenMP was made for exactly what you are trying to do, although I have to admit I have not used it myself yet. Basically it boils down to including a header and adding a few pragmas.
You could probably also use Intel's Threading Building Blocks (TBB) library.

If you have never written a multithreaded application, I would urge you to begin with OpenMP:
the library is now included in gcc by default
it is very easy to use
In your example, you should only need to add a pragma to the outermost loop and protect the shared histogram update (here with an atomic increment):
#pragma omp parallel for shared(histogram)
for (int i1 = 0; i1 < N; i1++)
    for (int i2 = 0; i2 < N; i2++)
        for (int i3 = 0; i3 < N; i3++)
            for (int i4 = 0; i4 < N; i4++)
            {
                #pragma omp atomic
                histogram[bin_index(i1, i2, i3, i4)] += 1;
            }
With this pragma the compiler generates the code to create the threads, launch them, and divide the iterations of the outer loop among them; the atomic pragma protects the concurrent updates to the histogram array. There are many options, but a well-chosen pragma does most of the work for you. How simple it stays basically depends on the data dependencies.
Of course, the result will not be as good as carefully hand-written threading code. But if you don't have a load-balancing problem you may well approach a 2x speedup; in your case the loop only writes into an array, with no spatial dependency between iterations.
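If the atomic update turns out to be a bottleneck, OpenMP 4.5 and later can also give each thread its own private copy of the whole array and merge the copies for you through an array-section reduction. A minimal sketch, assuming the histogram is a C array with NBINS entries (NBINS is a placeholder name):
/* Assumes OpenMP 4.5+ (array-section reductions) and that histogram is an array of NBINS elements. */
#pragma omp parallel for reduction(+ : histogram[0:NBINS])
for (int i1 = 0; i1 < N; i1++)
    for (int i2 = 0; i2 < N; i2++)
        for (int i3 = 0; i3 < N; i3++)
            for (int i4 = 0; i4 < N; i4++)
                histogram[bin_index(i1, i2, i3, i4)] += 1;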

I would do something like this:
void HistogramThread(int i1, Action<int[]> HandleResults)
{
    int[] histogram = new int[HistogramSize];
    for (int i2 = 0; i2 < N; i2++)
        for (int i3 = 0; i3 < N; i3++)
            for (int i4 = 0; i4 < N; i4++)
                histogram[bin_index(i1, i2, i3, i4)] += 1;
    HandleResults(histogram);
}

int[] CalculateHistogram()
{
    int[] histogram = new int[HistogramSize];
    ThreadPool pool; // I don't know the syntax off the top of my head
    for (int i1 = 0; i1 < N; i1++)
    {
        pool.AddNewThread(HistogramThread, i1, delegate(int[] h)
        {
            lock (histogram)
            {
                for (int i = 0; i < HistogramSize; i++)
                    histogram[i] += h[i];
            }
        });
    }
    pool.WaitForAllThreadsToFinish();
    return histogram;
}
This way you don't need to share any memory, until the end.

If you ever do it in .NET, use the Parallel Extensions.

If you want to write multithreaded number crunching code (and you are going to be doing a lot of it in the future) I would suggest you take a look at using a functional language like OCaml or Haskell.
Due to the lack of side effects and lack of shared state in functional languages (well, mostly) making your code run across multiple threads is a LOT easier. Plus, you'll probably find that you end up with a lot less code.

I agree with Sharptooth that your first approach seems like the only plausible one.
Your single-threaded app is continuously writing to memory. To get any speedup, your threads would also need to be writing continuously. If only one thread can write at a time, you get no speedup at all, so if every assignment is guarded the whole exercise fails.
This would be a dangerous approach, since you would be writing to shared memory without a guard, but it may be worth the danger (if a 2x speedup matters). If you can be sure that all the values of bin_index(i1, i2, i3, i4) are different across your division of the loop, then it should work, since each array assignment goes to a different location in shared memory. Still, one should always look long and hard at approaches like this.
I assume you would also produce a test routine to compare the results of the two versions.
Edit:
Looking at your bin_index(i1, i2, i3, i4), I suspect your process could not be parallelized without considerable effort.
The only way to divide up the work of the loop is, again, to be sure that your threads never write to the same locations in memory. However, it looks like bin_index(i1, i2, i3, i4) will repeat values quite often. You might divide the iteration space into the region where bin_index is above a cutoff and the region where it is below. Or you could divide it arbitrarily and make sure the increment is implemented atomically. But any complex threading approach looks unlikely to provide much improvement when you only have two cores to work with in the first place.
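If you do go the "divide arbitrarily and make the increment atomic" route, note that on GCC and Clang you do not need a mutex for it; a minimal sketch, assuming the same shared histogram array as in the question:
/* Sketch only: atomic increment of a shared bin using the GCC/Clang __atomic built-ins.
   This is much cheaper than a mutex, but the threads still contend for the same cache lines,
   so per-thread histograms merged at the end usually scale better. */
int idx = bin_index(i1, i2, i3, i4);
__atomic_fetch_add(&histogram[idx], 1, __ATOMIC_RELAXED);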

Related

How to count branch mispredictions?

I've got a task to measure the branch misprediction penalty (in ticks), so I wrote this code:
#include <stdio.h>
#include <stdlib.h>

/* rdtsc() reads the time-stamp counter; defined elsewhere */

int main (int argc, char ** argv) {
    unsigned long long start, end;
    FILE *f = fopen("output", "w");
    long long int k = 0;
    unsigned long long min;
    int n = atoi(argv[1]); // n1 = atoi(argv[2]);
    for (int i = 1; i <= n + 40; i++) {
        min = 9999999999999ULL;
        for (int r = 0; r < 1000; r++) {
            start = rdtsc();
            for (long long int j = 0; j < 100000; j++) {
                if (j % i == 0) {
                    k++;
                }
            }
            end = rdtsc();
            if (min > end - start) min = end - start;
        }
        fprintf(f, "%d %llu \n", i, min);
    }
    fclose(f);
    return 0;
}
(rdtsc is a function that measures time in ticks)
The idea of this code is that the branch (if (j % i == 0)) is taken periodically, with period equal to i, so at some point the predictor starts mispredicting it. The rest of the code mostly repeats the measurement many times, which I need for more precise results.
Tests show that branch mispredictions start to happen around i = 47, but I do not know how to count the exact number of mispredictions, so that I can work out the exact number of ticks per misprediction. Can anyone explain how to do this without using any external tools like VTune?
It depends on the processor you're using. In general, cpuid can be used to obtain a lot of information about the processor, and what cpuid does not provide is typically accessible via SMBIOS or other regions of memory.
Doing this purely in code, without the processor's support functions and manual, will not tell you as much as you want with a great degree of certainty, but it may be useful as an estimate, depending on what you're looking for and how your code is compiled (for example, which compiler flags you use).
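For what it's worth, with GCC or Clang on x86 you can query cpuid from C without inline assembly via the <cpuid.h> helper; a minimal sketch that only reads basic leaf 0 (the vendor string and the highest supported leaf):
#include <stdio.h>
#include <string.h>
#include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    if (__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
        /* the 12-byte vendor string is returned in EBX, EDX, ECX (in that order) */
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        printf("vendor: %s, highest basic leaf: %u\n", vendor, eax);
    }
    return 0;
}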
In general, speculative execution is not observable by programs: work that goes through the pipeline and turns out not to be needed is simply discarded.
Depending on how you use specific instructions, you may be able to exploit such stale cache state for better or worse, but the details vary greatly depending on the CPU in use.
See Spectre and RowHammer for interesting examples of such techniques being used to leak or corrupt data across privilege boundaries.
See the comments below for links to code related to the use of cpuid, as well as rdrand, rdseed, rdtsc and a few others.
It's not completely clear exactly what you're looking for, but these should get you started and provide some useful examples.
See also Branch mispredictions

ways to express concurrency without thread

I am wondering how concurrency can be expressed without an explicit thread object. I am not asking about the implementation (which would probably use threads or thread pools), but about the language-design issues.
Q1: I wonder what would be lost if there was no thread object, what couldn't be done in such a language?
Q2: I also wonder about how this would be expressed, what ways were proposed or implemented as alternatives or complements to threads?
One possibility is the MPI programming model (the GPU model is similar).
Let's say you have the following code:
for (int i = 0; i < 100; i++) {
    work(i);
}
The "normal" thread-based approach would be to split the iteration range into multiple subsets, something like this:
Thread 1:
for (int i = 0; i < 50; i++) {
    work(i);
}
Thread 2:
for (int i = 50; i < 100; i++) {
    work(i);
}
In MPI/GPU programming, however, you do something different: every core executes the same program (GPU) or at least a similar one (MPI). The difference is that each core uses a different ID, which changes the behavior of the code.
MPI style (not exactly the MPI syntax):
int rank = get_core_id();
int size = get_num_core();
int subset = 100 / size;
for (int i = rank * subset; i < (rank + 1) * subset; i++) {
    // each core will use a different range for i
    work(i);
}
The next big thing is communication. Normally you have to do all of the synchronization manually. MPI is message-based, so it is not a perfect fit for the classical shared-memory model (where every core has access to the same memory), but in a cluster system (many cores connected by a network) it works excellently. This is not limited to supercomputers (which basically run only MPI-style code): in recent years a new type of architecture, many-core processors, has appeared. These have a local network-on-chip, so each core can send and receive messages without the usual shared-memory synchronization problems.
MPI provides not only simple messages but also higher-level constructs that automatically scatter and gather data to and from every core.
Example (again, not real MPI syntax):
int rank = get_core_id();
int size = get_num_core();
int data[100];
int result;
int results[size];
if (rank == 0) { // master core only
    fill_with_stuff(data);
}
scatter(0, data);           // core 0 sends the contents of data to all other cores
result = work(rank, data);  // every core works on the same data
gather(0, result, results); // collect all local results into the results array of core 0
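For comparison, here is a hedged sketch of the same scatter/work/gather pattern in real MPI syntax. It assumes the 100 items divide evenly among the ranks and that work() is defined elsewhere; both are simplifications:
#include <mpi.h>

/* assumed to exist elsewhere: int work(int rank, const int *chunk, int count); */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int data[100], chunk[100], results[100];
    int chunk_len = 100 / size;            /* assumes 100 % size == 0 */
    int result;

    if (rank == 0) {                       /* master rank fills the data */
        for (int i = 0; i < 100; i++)
            data[i] = i;
    }

    /* rank 0 sends an equal slice of data to every rank (including itself) */
    MPI_Scatter(data, chunk_len, MPI_INT, chunk, chunk_len, MPI_INT, 0, MPI_COMM_WORLD);

    result = work(rank, chunk, chunk_len);

    /* each rank's single int result is collected into results[] on rank 0 */
    MPI_Gather(&result, 1, MPI_INT, results, 1, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}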
Another solution is the OpenMP library. There you declare parallel blocks and the whole threading part is handled by the library itself.
Example:
// this will automatically split the for loop across 4 threads
#pragma omp parallel for num_threads(4)
for (int i = 0; i < 100; i++) {
    work(i);
}
The big advantage is that it is fast to write; that's it. You might get better performance by writing the threads yourself, but that takes a lot more time and knowledge about synchronization.

OpenCL float sum reduction

I would like to apply a reduction to this piece of my kernel code (1-dimensional data):
__local float sum = 0;
int i;
for (i = 0; i < length; i++)
    sum += // some operation depending on i here;
Instead of having just one thread perform this operation, I would like to have n threads (with n = length) and at the end have one thread compute the total sum.
In pseudocode, I would like to be able to write something like this:
int i = get_global_id(0);
__local float sum = 0;
sum += // some operation depending on i here;
barrier(CLK_LOCAL_MEM_FENCE);
if (i == 0)
    res = sum;
Is there a way?
I have a race condition on sum.
To get you started you could do something like the example below (see Scarpino). Here we also take advantage of vector processing by using the OpenCL float4 data type.
Keep in mind that the kernel below returns a number of partial sums: one for each local work group, back to the host. This means that you will have to carry out the final sum by adding up all the partial sums, back on the host. This is because (at least with OpenCL 1.2) there is no barrier function that synchronizes work-items in different work-groups.
If summing the partial sums on the host is undesirable, you can get around this by launching multiple kernels. This introduces some kernel-call overhead, but in some applications the extra penalty is acceptable or insignificant. To do this with the example below you will need to modify your host code to call the kernel repeatedly and then include logic to stop executing the kernel after the number of output vectors falls below the local size (details left to you or check the Scarpino reference).
EDIT: Added an extra kernel argument for the output. Added a dot product to sum over the float4 vectors.
__kernel void reduction_vector(__global float4* data, __local float4* partial_sums, __global float* output)
{
    int lid = get_local_id(0);
    int group_size = get_local_size(0);

    partial_sums[lid] = data[get_global_id(0)];
    barrier(CLK_LOCAL_MEM_FENCE);

    for (int i = group_size / 2; i > 0; i >>= 1) {
        if (lid < i) {
            partial_sums[lid] += partial_sums[lid + i];
        }
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    if (lid == 0) {
        output[get_group_id(0)] = dot(partial_sums[0], (float4)(1.0f));
    }
}
I know this is a very old post, but from everything I've tried, the answer from Bruce doesn't work, and the one from Adam is inefficient due to both global memory use and kernel execution overhead.
The comment by Jordan on the answer from Bruce is correct that this algorithm breaks down in each iteration where the number of elements is not even. Yet it is essentially the same code as can be found in several search results.
I scratched my head on this for several days, partially hindered by the fact that my language of choice is not C/C++ based, and also it's tricky if not impossible to debug on the GPU. Eventually though, I found an answer which worked.
This is a combination of the answer by Bruce, and that from Adam. It copies the source from global memory into local, but then reduces by folding the top half onto the bottom repeatedly, until there is no data left.
The result is a buffer containing the same number of items as there are work-groups used (so that very large reductions can be broken down), which must be summed by the CPU, or else call from another kernel and do this last step on the GPU.
This part is a little over my head, but I believe this code also avoids bank-conflict issues by reading from local memory essentially sequentially. ** Would love confirmation on that from anyone who knows.
Note: The global 'AOffset' parameter can be omitted from the source if your data begins at offset zero. Simply remove it from the kernel prototype and the fourth line of code where it's used as part of an array index...
__kernel void Sum(__global float * A, __global float * output, ulong AOffset, __local float * target) {
    const size_t globalId = get_global_id(0);
    const size_t localId = get_local_id(0);
    target[localId] = A[globalId + AOffset];

    barrier(CLK_LOCAL_MEM_FENCE);

    size_t blockSize = get_local_size(0);
    size_t halfBlockSize = blockSize / 2;
    while (halfBlockSize > 0) {
        if (localId < halfBlockSize) {
            target[localId] += target[localId + halfBlockSize];
            if ((halfBlockSize * 2) < blockSize) { // uneven block division
                if (localId == 0) { // when localId == 0
                    target[localId] += target[localId + (blockSize - 1)];
                }
            }
        }
        barrier(CLK_LOCAL_MEM_FENCE);
        blockSize = halfBlockSize;
        halfBlockSize = blockSize / 2;
    }
    if (localId == 0) {
        output[get_group_id(0)] = target[0];
    }
}
https://pastebin.com/xN4yQ28N
You can use the new work_group_reduce_add() function for a sum reduction inside a single work-group if you have support for OpenCL C 2.0 features.
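If that function is available, a minimal kernel sketch looks like this (OpenCL C 2.0, so the kernel must be built with something like -cl-std=CL2.0; one partial sum per work-group is still written out, to be added up by the host or a follow-up kernel):
__kernel void reduce_wg(__global const float *data, __global float *partial)
{
    float v = data[get_global_id(0)];
    float s = work_group_reduce_add(v);   // every work-item receives the work-group's sum
    if (get_local_id(0) == 0)
        partial[get_group_id(0)] = s;
}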
A simple and fast way to reduce data is by repeatedly folding the top half of the data into the bottom half.
For example, please use the following ridiculously simple CL code:
__kernel void foldKernel(__global float *arVal, int offset) {
    int gid = get_global_id(0);
    arVal[gid] = arVal[gid] + arVal[gid + offset];
}
With the following Java/JOCL host code (or port it to C++ etc):
int t = totalDataSize;
while (t > 1) {
    int m = t / 2;
    int n = (t + 1) / 2;
    clSetKernelArg(kernelFold, 0, Sizeof.cl_mem, Pointer.to(arVal));
    clSetKernelArg(kernelFold, 1, Sizeof.cl_int, Pointer.to(new int[]{n}));
    cl_event evFold = new cl_event();
    clEnqueueNDRangeKernel(commandQueue, kernelFold, 1, null, new long[]{m}, null, 0, null, evFold);
    clWaitForEvents(1, new cl_event[]{evFold});
    t = n;
}
The host code loops log2(n) times, so it finishes quickly even with huge arrays. The fiddle with "m" and "n" is to handle non-power-of-two arrays.
Easy for OpenCL to parallelize well for any GPU platform (i.e. fast).
Low memory, because it works in place
Works efficiently with non-power-of-two data sizes
Flexible, e.g. you can change kernel to do "min" instead of "+"

can we inspect value in the cache after a read instruction has executed?

Recently I wrote a program to transpose a matrix:
for (int i = 0; i < 1000; i++)
{
    for (int j = 0; j < 1000; j++)
    {
        ndata[j][i] = odata[i][j];
    }
}
From the above code we know that the inner loop is cache-friendly for odata but not for ndata, which will cause a lot of cache misses. I want to inspect the contents of the L1 and L2 caches after a read instruction has executed. How can I do this?
First of all, 1000*1000*2 elements (of what, int?) won't fit in any L1 cache I know of; maybe an L3.
As for your question: there is no simple way to inspect the contents of a cache (except by running the code on a CPU or cache simulator that produces that information). You could measure the access time of a given line, but by doing so you would (a) affect the contents of the cache and their LRU weights, and (b) probably get meaningless results unless you measure accesses to multiple such lines in a single measurement and amortize.
By the way, if you're interested in improving this code, just add SW prefetches for ndata[j+1][i] on each iteration.
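A hedged sketch of that suggestion using the GCC/Clang prefetch built-in (the second argument, 1, marks it as a prefetch for writing; prefetching exactly one row ahead is just a starting point to tune):
for (int i = 0; i < 1000; i++)
{
    for (int j = 0; j < 1000; j++)
    {
        if (j + 1 < 1000)
            __builtin_prefetch(&ndata[j + 1][i], 1); /* hint: next strided write target */
        ndata[j][i] = odata[i][j];
    }
}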

How should i protect the access to a shared variable?

I already posted a question about the same issue, but I think the answers started to go in another direction, so I will try to focus my questions :P
1) I need to fill a huge vector with some data. To improve the speed I want to use threads, so that one thread writes the first half of the vector and the other thread writes the second half.
Since each thread is accessing different positions of the vector, do I need to protect that access?
In other words, can I write at the same time to two different positions of this structure without protecting it?
#include <vector>
#include <thread>
using namespace std;
...
int main() {
    int n = 256 * 1024 * 1024;
    vector<int> vec(n);
    thread t1(fillFunction, std::ref(vec), 0, n / 2);
    thread t2(fillFunction, std::ref(vec), n / 2, n);
    t1.join();
    t2.join();
}

void fillFunction(vector<int> &vec, int first, int final) {
    int i;
    for (i = first; i < final; i++) {
        vec[i] = some_data;
    }
}
If I do have to protect the access, should I use lock_guard or unique_lock?
2) Is this threaded solution really going to improve the speed?
I mean, even if I protect the writes, the vector is large enough not to fit in cache. The threads are writing to very distant positions, so the for loops will generate many cache misses.
Can these cache misses result in slower execution than the single-threaded version?
Would having one thread fill the even indices and the other thread the odd indices reduce the cache misses?
thread t1(fillFunction, std::ref(vec), 0, n);
thread t2(fillFunction, std::ref(vec), 1, n);
[...]
for (i = first; i < final; i = i + 2) {
    vec[i] = some_data;
}
Thank you all :)
1) No, you do not need to protect the vector if you are guaranteed to write to different addresses.
2) You will really just have to test these things yourself on your exact machine. Try single thread vs. interleaved access vs. split access and just time the results.
