This question is not about thread safety.
The objects I am talking about are const vectors (of point clouds) used as input to a multithreaded routine. As they are const, I can share them without worrying about thread safety.
It is about getting better runtime performance. My multithreaded routine is nowhere near as fast as I expect it to be: I run 11 parallel threads (the maximum is 12 on my six-core CPU), yet the runtime only drops to about half that of the non-multithreaded approach.
The usual counterargument against making copies (wasteful memory usage) can also be ignored; memory is not an issue in my case.
My routine performs a lot of spatial queries on the input vector. Since the vector is shared, I suspect this is where the efficiency is lost.
My question, before I modify my complex code, is: if I generate 11 copies of the vector, one per parallel thread, will this bring only a minor improvement because all 11 objects still reside in the same physical memory? My knowledge of computer hardware is too limited to answer this myself.
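For concreteness, here is a minimal C++ sketch of the two variants being compared: every thread reading one shared const vector versus every thread owning a private copy. The Point type and the spatialQuery function are placeholders for the real point cloud and queries.

```cpp
#include <thread>
#include <vector>

struct Point { float x, y, z; };

// Stand-in for the real spatial queries on a point cloud.
double spatialQuery(const std::vector<Point>& cloud) {
    return static_cast<double>(cloud.size());
}

// Variant 1: all threads read the same const object.
void runShared(const std::vector<Point>& cloud, int nThreads) {
    std::vector<std::thread> pool;
    for (int t = 0; t < nThreads; ++t)
        pool.emplace_back([&cloud] { spatialQuery(cloud); });
    for (auto& th : pool) th.join();
}

// Variant 2: each thread gets its own copy of the vector.
void runWithCopies(const std::vector<Point>& cloud, int nThreads) {
    std::vector<std::thread> pool;
    for (int t = 0; t < nThreads; ++t)
        pool.emplace_back([copy = cloud] { spatialQuery(copy); });
    for (auto& th : pool) th.join();
}
```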
Related
CPUs such as ARM have a weak memory model. Assume we have two threads, T1 and T2:
| T1 | T2 |
|---------|---------|
| Instr A | Instr C |
| Instr B | Instr D |
Under weak ordering, these instructions can become visible in almost any order, which means "D -> A -> B -> C" is possible.
My first question is: why is this beneficial? My second question is: how is the selection (optimization) done? Does the CPU pick instructions at random, or are there algorithms behind it? Is the CPU itself doing the picking, or is there another chip that does the work (a memory chip or something similar)?
There is no global arbiter that would do any such thing. If there were, it would be just as efficient to always do things in order.
The only data that is immediately available is local. Each execution unit makes its decisions based on the information it can get quickly.
There is no pressure to execute anything in reverse order rather than in program order; reverse order is not a priori better. But the data for B might be available before the data for A, and then B might be executed first, since waiting for A to complete would leave computing resources unused.
So it is all a matter of having the data available when it is needed, and of the delays of communication between processors. You could view it as a team effort by people who can only exchange information by very slow means of communication: they get as much work done as possible based on their locally available information, and no central authority ever has an accurate picture of the state of the latest work done.
Why do weak memory models exist?
For performance reasons. Weak memory models allow compiler and hardware optimizations that improve system performance. The cost of enforcing a strong memory model (the sequential-consistency model) in compilation and hardware implementation is severe performance degradation.
What are the allowed instruction reorderings (how is the selection done)?
It is specific to each memory model. There are several weak memory models, and the instruction reordering rules are part of their specifications.
Instruction reordering is ubiquitously used in compiler and hardware optimizations to achieve higher performance. The basic premise for these optimizations is that the instructions can be reordered as long as the functional correctness of the program is preserved.
In a sequential (single-threaded) program, functional correctness can be guaranteed by simply ensuring that "two operations are executed in program order if they are accessing the same memory location and one of them is a write or if there is a data or control dependence between them."
For multithreaded programs, functional correctness also depends on the relative order of loads and stores to different memory locations in the same thread. It is the memory model that specifies the conditions under which two memory instructions can be reordered without affecting functional correctness.
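As an illustration (not part of the quoted answer), here is the classic store-buffering litmus test written in C++ with relaxed atomics; it mirrors the table from the question. Under a sequentially consistent model at least one thread must observe the other's store, but with weak ordering both threads may read 0.

```cpp
#include <atomic>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1 = -1, r2 = -1;

void t1() {
    x.store(1, std::memory_order_relaxed);   // Instr A
    r1 = y.load(std::memory_order_relaxed);  // Instr B
}

void t2() {
    y.store(1, std::memory_order_relaxed);   // Instr C
    r2 = x.load(std::memory_order_relaxed);  // Instr D
}

int main() {
    std::thread a(t1), b(t2);
    a.join(); b.join();
    // Possible results: (r1,r2) = (1,0), (0,1), (1,1) and, because stores may
    // be buffered past later loads or reordered by the compiler, also (0,0).
    return r1 + r2;
}
```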
In addition to the above answers:
If there are no fences, the only ordering that needs to be preserved is the data dependency order. So on a single CPU a load of X should see the most recent store to X before it. But if instructions do not have any data dependency, they can be executed in any order.
Modern CPUs use out-of-order execution to maximize the amount of parallelism in the instruction stream. This way independent instructions can run in parallel, and it prevents the CPU from stalling on memory accesses.
CPUs make use of other techniques like store buffers, load buffers, write coalescing, etc., all of which can lead to loads and stores being executed out of order. This is fine as long as it isn't visible to the core that executes these loads and stores. The problem is when the core shares memory with other cores; then these reorderings can become visible.
For Sequential Consistency (SC) no reordering is allowed; so all 4 fences need to be preserved -> [LoadLoad][LoadStore][StoreLoad][StoreStore].
On x86, the store buffers can cause older stores to be reordered with newer loads to a different address; so the [StoreLoad] ordering is dropped and only [LoadLoad][LoadStore][StoreStore] are preserved. This memory model is called TSO (Total Store Order).
TSO can be relaxed by allowing writes from the same core to be reordered (e.g. write coalescing or store buffers that don't retire in order). This results in PSO (Partial Store Order).
The problem with SC/TSO/PSO is that certain reorderings aren't allowed, and this can lead to reduced performance; imagine there are two independent loads on the same CPU: they can't be reordered because of the [LoadLoad] constraint. In practice this can be resolved by executing instructions speculatively and, if an ordering violation is detected, flushing the pipeline and starting again. This makes CPUs more complex and less performant.
Models like SC, TSO and PSO are strong consistency models because every load and every store has certain ordering semantics. In a weakly ordered consistency model, by contrast, there is a separation between plain loads/stores (no ordering semantics) and synchronization actions, e.g. an acquire load and a release store, which do provide ordering semantics. The weak memory model with acquire loads and release stores is called release consistency.
The big advantage of these weak models is that they allow a much higher degree of parallelism and a simpler CPU design; they shift the burden to the software.
In practice you normally program against a language or API that provides a certain memory model, and it is the implementation's job to make sure the compiler doesn't violate that model and that sufficient ordering is enforced on the hardware, e.g. in the form of fences. If you look at Java or C11 and use their memory models correctly, the same code will run fine both on a CPU with a strong memory model like x86 and on a CPU with a weak memory model like ARM.
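As a small illustration of that last point (my example, using C++11 atomics rather than Java or C11), a release store paired with an acquire load is enough to publish data safely. Roughly speaking, the same source compiles to plain loads and stores on x86, where TSO already provides the ordering, and to instructions with acquire/release semantics or explicit barriers on ARM.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                 // plain data, no ordering semantics of its own
std::atomic<bool> ready{false};  // synchronization variable

void producer() {
    payload = 42;                                  // plain store
    ready.store(true, std::memory_order_release);  // release: makes earlier writes visible
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // acquire: pairs with the release store
    assert(payload == 42);  // guaranteed once the acquire load sees 'true'
}

int main() {
    std::thread p(producer), c(consumer);
    p.join(); c.join();
}
```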
I am trying to create my first multithreaded application, one that is scalable to multi-core technology. Its inspiration comes from the concept of an event-driven spiking neural network.
The design is a little like this: the data structure of the algorithm is stored in one location in memory, in the form of instances of classes. An example of a task that can be performed on this structure is a neuron spiking: it will modify several values in the neuron and in connected neurons, and identify any future tasks that may need to be performed. The tasks to be performed are added to a queue. There are several threads whose only function is to pull a task from the queue, perform the task, and lather, rinse, repeat. Any updates to values can be performed in any order, as long as they are performed. Small but rare errors that result from this parallelism would have a statistically insignificant effect on the performance of the system.
This design does not use any memory other than shared memory (except for possibly a small amount of dedicated memory used for calculations). I've recently watched a few lectures where the speaker implied that the use of shared memory in multi-core and GPU applications was very slow. Even though I have a few ideas as to why that might be the case, I'd like to find out from people who have experience with this sort of thing, and maybe be directed to a useful resource to help me out.
Accessing shared state from multiple threads in a multicore system can be slow because of the CPU cache coherency protocol: every change to the shared state must be reflected in the cache lines of all the cores that hold it.
http://msdn.microsoft.com/en-us/magazine/cc163715.aspx#S2 provides a good explanation of why accessing shared data from multiple threads can be slow and what can be done about it.
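A common way to see this effect in practice is false sharing: logically independent counters that happen to sit on the same cache line get bounced between cores on every write. The sketch below (mine, not from the linked article) pads each counter onto its own 64-byte line; removing the alignas(64) typically makes the loop noticeably slower on common hardware.

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

// Each counter gets its own cache line, so the four writers never
// invalidate each other's lines.
struct Padded { alignas(64) std::atomic<long> value{0}; };

int main() {
    const int threads = 4;
    const long iterations = 10000000;
    std::vector<Padded> counters(threads);

    auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t)
        workers.emplace_back([&counters, iterations, t] {
            for (long i = 0; i < iterations; ++i)
                counters[t].value.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& w : workers) w.join();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - start).count();
    std::cout << ms << " ms with padded counters\n";
}
```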
My Previous Question
From the answer above, it seems that if my threads create objects, I will face a memory allocation/deallocation bottleneck, so the threaded version may run slower, or show no obvious difference in runtime, compared to the unthreaded one. What is the advantage of running multiple threads in an application if I cannot allocate memory to create objects for the calculations in my threads?
"What is the advantage of running multiple threads in an application if I cannot allocate memory to create objects for the calculations in my threads?"
It depends on where your bottlenecks are. If your bottleneck is the amount of memory available, then creating more threads won't help. Or, if I/O is a bottleneck, trying to parallelize will just slow everything down slightly because of context switching. It's like trying to make an underpowered car faster by putting wider tyres on it: fixing the wrong thing doesn't help.
Threads are useful when the bottleneck is the processor and there are several processors available.
Well, if you allocate chunks of memory in a loop, things will slow down.
If you can create your objects once at the beginning of TThread.execute, the overhead will be smaller.
Threads can also be beneficial if you have to wait for I/O operations, or if you have expensive calculations to do on a machine with more than one physical core.
If you have memory-intensive threads (many memory allocations/deallocations) you are better off using TopMM instead of FastMM:
http://www.topsoftwaresite.nl/
FastMM uses a lock that blocks all other threads; TopMM does not, so it scales much better on multiple cores/CPUs!
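The "allocate once, reuse many times" advice from above carries over to other languages too; here is a hedged C++ sketch of the same idea (the buffer size and the work inside the loop are placeholders):

```cpp
#include <thread>
#include <vector>

// Each worker allocates its scratch buffer once and then reuses it,
// instead of allocating inside the inner loop and hammering the
// (possibly lock-protected) shared allocator.
void worker(int items) {
    std::vector<double> scratch;
    scratch.reserve(4096);            // one allocation up front
    for (int i = 0; i < items; ++i) {
        scratch.clear();              // reuses capacity, no new allocation
        for (int j = 0; j < 1000; ++j)
            scratch.push_back(j * 0.5);
        // ... do the actual calculation on scratch ...
    }
}

int main() {
    std::vector<std::thread> pool;
    for (int t = 0; t < 4; ++t) pool.emplace_back(worker, 1000);
    for (auto& th : pool) th.join();
}
```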
When it comes to multithreading, shared-resource issues will always arise (with current technology). All resources that may need serialization (RAM, disk, etc.) are possible bottlenecks. Multithreading is not a magic solution that turns a slow app into a fast one, and it does not always result in better speed; done the wrong way, it can actually make things slower. The application should be analyzed to find the possible bottlenecks, and some parts may need to be rewritten to minimize them using different techniques (e.g. preallocating memory, using async I/O, etc.). Anyway, performance is only one of the reasons to use more than one thread. Another is letting the user interact with the application while background threads perform operations (e.g. printing, checking data, etc.) without "locking up" the user. The application can then seem faster (the user can keep on using it without waiting) even if it is actually slower (it takes more time to finish the operations than if they were done serially).
As far as I know, the multi-core architecture of a processor does not affect the program. The actual instruction execution is handled in a lower layer.
My question is: given a multicore environment, can I use any programming practices to utilize the available resources more effectively? How should I change my code to gain more performance in multicore environments?
That is correct. Your program will not run any faster (except for the fact that the core is handling fewer other processes, because some of the processes are being run on the other core) unless you employ concurrency. If you do use concurrency, though, more cores improves the actual parallelism (with fewer cores, the concurrency is interleaved, whereas with more cores, you can get true parallelism between threads).
Making programs efficiently concurrent is no simple task. If done poorly, making your program concurrent can actually make it slower! For example, if you spend lots of time spawning threads (thread construction is really slow), and do work on a very small chunk size (so that the overhead of thread construction dominates the actual work), or if you frequently synchronize your data (which not only forces operations to run serially, but also has a very high overhead on top of it), or if you frequently write to data in the same cache line between multiple threads (which can lead to the entire cache line being invalidated on one of the cores), then you can seriously harm the performance with concurrent programming.
It is also important to note that if you have N cores, that DOES NOT mean that you will get a speedup of N; that is only the theoretical limit. In fact, maybe with two cores it is twice as fast, but with four cores it might be about three times as fast, and then with eight cores it is about three and a half times as fast, etc. How well your program is actually able to take advantage of these cores is called its parallel scalability. Communication and synchronization overhead often prevent a linear speedup, although, ideally, if you can avoid communication and synchronization as much as possible, you can get close to linear.
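For reference, this diminishing return is captured by Amdahl's law: if a fraction s of the work is inherently serial, the best possible speedup on N cores is 1 / (s + (1 - s)/N). With s = 0.1, for example, the limit is about 3.1x on four cores and only 10x even with unlimited cores.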
It would not be possible to give a complete answer on how to write efficient parallel programs on StackOverflow. This is really the subject of at least one (probably several) computer science courses. I suggest that you sign up for such a course or buy a book. I'd recommend a book to you if I knew of a good one, but the parallel algorithms course I took did not have a textbook. You might also be interested in writing a handful of programs using a serial implementation, a parallel implementation with multithreading (regular threads, thread pools, etc.), and a parallel implementation with message passing (such as with Hadoop, Apache Spark, Cloud Dataflow, asynchronous RPCs, etc.), and then measuring their performance, varying the number of cores in the case of the parallel implementations. This was the bulk of the course work for my parallel algorithms course and can be quite insightful. Some computations you might try parallelizing include computing Pi using the Monte Carlo method (this is trivially parallelizable, assuming you can create a random number generator whose streams are independent across threads), performing matrix multiplication, computing the row echelon form of a matrix, summing the squares of the numbers 1...N for some very large N, and I'm sure you can think of others.
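As an illustration of the first of those exercises, here is a hedged C++ sketch of the Monte Carlo Pi estimate with one independently seeded generator per thread; the seeds, thread count and sample counts are arbitrary choices.

```cpp
#include <iostream>
#include <random>
#include <thread>
#include <vector>

int main() {
    const int threads = 4;
    const long samplesPerThread = 5000000;
    std::vector<long> hits(threads, 0);      // one result slot per thread
    std::vector<std::thread> pool;

    for (int t = 0; t < threads; ++t)
        pool.emplace_back([t, samplesPerThread, &hits] {
            std::mt19937_64 rng(12345 + t);  // per-thread generator, distinct seed
            std::uniform_real_distribution<double> dist(0.0, 1.0);
            long inside = 0;
            for (long i = 0; i < samplesPerThread; ++i) {
                double x = dist(rng), y = dist(rng);
                if (x * x + y * y <= 1.0) ++inside;
            }
            hits[t] = inside;                // no shared mutable state until here
        });
    for (auto& th : pool) th.join();

    long total = 0;
    for (long h : hits) total += h;
    std::cout << "pi ~= " << 4.0 * total / (threads * samplesPerThread) << "\n";
}
```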
I don't know if it's the best possible place to start, but I subscribed to the article feed from the Intel Software Network some time ago and have found a lot of interesting things there, presented in a pretty simple way. You can find some very basic articles on fundamental concepts of parallel computing, like this. Here you have a quick dive into OpenMP, which is one possible approach to start parallelizing the slowest parts of your application without changing the rest (if those parts exhibit parallelism, of course). Also check the Intel Guide for Developing Multithreaded Applications. Or just go and browse the article section; the articles are not too many, so you can quickly figure out what suits you best. They also have a forum and a weekly webcast called Parallel Programming Talk.
Yes, simply adding more cores to a system without altering the software would yield you no results (with the exception that the operating system would be able to schedule multiple concurrent processes on separate cores).
To have your operating system utilise your multiple cores, you need to do one of two things: increase the thread count per process, or increase the number of processes running at the same time (or both!).
Utilising the cores effectively, however, is a beast of a different colour. If you spend too much time synchronising shared data access between threads/processes, your level of concurrency will take a hit as threads wait on each other. This also assumes that you have a problem/computation that can relatively easily be parallelised, since the parallel version of an algorithm is often much more complex than the sequential version thereof.
That said, especially for CPU-bound computations with work units that are independent of each other, you'll most likely see a linear speed-up as you throw more threads at the problem. As you add serial segments and synchronisation blocks, this speed-up will tend to decrease.
I/O heavy computations would typically fare the worst in a multi-threaded environment, since access to the physical storage (especially if it's on the same controller, or the same media) is also serial, in which case threading becomes more useful in the sense that it frees up your other threads to continue with user interaction or CPU-based operations.
You might consider using programming languages designed for concurrent programming. Erlang and Go come to mind.
I'm working on a parallelization library for the D programming language. Now that I'm pretty happy with the basic primitives (parallel foreach, map, reduce and tasks/futures), I'm starting to think about some higher level parallel algorithms. Among the more obvious candidates for parallelization is sorting.
My first question is, are parallelized versions of sorting algorithms useful in the real world, or are they mostly academic? If they are useful, where are they useful? I personally would seldom use them in my work, simply because I usually peg all of my cores at 100% using a much coarser grained level of parallelism than a single sort() call.
Secondly, it seems like quick sort is almost embarrassingly parallel for large arrays, yet I can't get the near-linear speedups I believe I should be getting. For a quick sort, the only inherently serial part is the first partition. I tried parallelizing a quick sort by, after each partition, sorting the two subarrays in parallel. In simplified pseudocode:
// Tuned by experiment: below this size the overhead of creating a task
// outweighs the gains from parallelization.
const smallestToParallelize = 500;

// Cutoff below which plain insertion sort is used (illustrative value).
const insertionSortThreshold = 32;

void quickSort(T)(T[] array) {
    if (array.length < insertionSortThreshold) {
        insertionSort(array);
        return;
    }
    size_t pivotPosition = partition(array);
    if (array.length >= smallestToParallelize) {
        // Sort the left subarray in a task pool thread while this
        // thread sorts the right subarray.
        auto myTask = taskPool.execute(quickSort(array[0..pivotPosition]));
        quickSort(array[pivotPosition + 1..$]);
        myTask.workWait();
    } else {
        // Regular serial quick sort.
        quickSort(array[0..pivotPosition]);
        quickSort(array[pivotPosition + 1..$]);
    }
}
Even for very large arrays, where the time the first partition takes is negligible, I can only get about a 30% speedup on a dual core, compared to a purely serial version of the algorithm. I'm guessing the bottleneck is shared memory access. Any insight on how to eliminate this bottleneck or what else the bottleneck might be?
Edit: My task pool has a fixed number of threads, equal to the number of cores in the system minus 1 (since the main thread also does work). Also, the type of wait I'm using is a work wait, i.e. if the task is started but not finished, the thread calling workWait() steals other jobs off the pool and does them until the one it's waiting on is done. If the task isn't started, it is completed in the current thread. This means that the waiting isn't inefficient. As long as there is work to be done, all threads will be kept busy.
Keep in mind I'm not an expert on parallel sort, and folks make research careers out of parallel sort but...
1) Are they useful in the real world?
Of course they are, if you need to sort something expensive (like strings or worse) and you aren't pegging all the cores.
Think UI code where you need to sort a large dynamic list of strings based on context.
Think something like a Barnes-Hut n-body sim where you need to sort the particles.
2) Quicksort seems like it would give a linear speedup, but it doesn't. The partition step is a sequential bottleneck; you will see this if you profile, and it will tend to cap out at 2-3x on a quad core.
If you want to get good speedups on a smaller system you need to ensure that your per task overheads are really small and ideally you will want to ensure that you don't have too many threads running, i.e. not much more than 2 on a dual core. A thread pool probably isn't the right abstraction.
If you want to get good speedups on a larger system you'll need to look at the scan-based parallel sorts; there are papers on this. Bitonic sort is also quite easy to parallelize, as is merge sort. A parallel radix sort can also be useful; there is one in the PPL (if you aren't averse to Visual Studio 11).
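One rough way to quantify the partition bottleneck (my addition, assuming reasonably balanced partitions): with a serial partition and parallel recursion, the critical path is about n + n/2 + n/4 + ..., which is roughly 2n element visits, while the total work is about n * log2(n). So even with unlimited cores the speedup of this formulation is bounded by roughly log2(n)/2, around 10 for a million elements, and contention for memory bandwidth usually keeps the real number well below that.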
I'm no expert but... here is what I'd look at:
First of all, I've heard that, as a rule of thumb, algorithms that look at small bits of a problem from the start tend to work better as parallel algorithms.
Looking at your implementation, try making the parallel/serial switch go the other way: partition the array and sort in parallel until you have N segments, then go serial. If you are more or less grabbing a new thread for each parallel case, then N should be ~ your core count. OTOH if your thread pool is of fixed size and acts as a queue of short lived delegates, then I'd use N ~ 2+ times your core count (so that cores don't sit idle because one partition finished faster).
Other tweaks:
- Skip the myTask.wait() at the local level and rather have a wrapper function that waits on all the tasks.
- Make a separate serial implementation of the function that avoids the depth check.
"My first question is, are parallelized versions of sorting algorithms useful in the real world" - depends on the size of the data set that you are working on in the real work. For small sets of data the answer is no. For larger data sets it depends not only on the size of the data set but also the specific architecture of the system.
One of the limiting factors that will prevent the expected increase in performance is the cache layout of the system. If the data can fit in the L1 cache of a core, then there is little to gain by sorting across multiple cores as you incur the penalty of the L1 cache miss between each iteration of the sorting algorithm.
The same reasoning applies to chips that have multiple L2 caches and NUMA (non-uniform memory access) architectures. So the more cores you want to distribute the sorting across, the more the smallestToParallelize constant will need to be increased.
Another limiting factor which you identified is shared memory access, or contention over the memory bus. Since the memory bus can only satisfy a certain number of memory accesses per second, having additional cores that do essentially nothing but read from and write to main memory will put a lot of stress on the memory system.
The last factor that I should point out is the thread pool itself as it may not be as efficient as you think. Because you have threads that steal and generate work from a shared queue, that queue requires synchronization methods; and depending on how those are implemented, they can cause very long serial sections in your code.
I don't know if the answers here are still applicable, or if my suggestions are applicable to D.
Anyway ...
Assuming that D allows it, there is always the possibility of providing prefetch hints to the caches. The core in question requests that data it will soon (but not immediately) need be loaded into a certain cache level. In the ideal case the data will have been fetched by the time the core starts working on it. More likely the prefetch will still be under way, which at least results in fewer wait states than if the data were fetched "cold."
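For what it's worth, in C or C++ a prefetch hint looks like the sketch below. __builtin_prefetch is a GCC/Clang builtin, the 16-element lookahead distance is just an illustrative guess, and a simple linear scan like this one is usually handled well by the hardware prefetcher anyway, so the technique pays off mainly for irregular access patterns.

```cpp
#include <cstddef>
#include <vector>

double sumWithPrefetch(const std::vector<double>& data) {
    double sum = 0.0;
    const std::size_t ahead = 16;  // prefetch distance: a tuning parameter
    for (std::size_t i = 0; i < data.size(); ++i) {
        if (i + ahead < data.size())
            __builtin_prefetch(&data[i + ahead], /*rw=*/0, /*locality=*/1);
        sum += data[i];
    }
    return sum;
}
```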
You'll still be constrained by the overall cache-to-RAM throughput, so you'll need to organize the data such that enough of it sits in the core's exclusive caches for the core to spend a fair amount of time there before having to write back updated data.
The code and data need to be organized around the concept of cache lines (fetch units of 64 bytes each), which are the smallest-sized units in a cache. For two cores, the work needs to be organized such that the memory system does half as much work per core (assuming 100% scalability) as when only one core was working on unorganized data; for four cores a quarter as much, and so on. It's quite a challenge but by no means impossible; it just depends on how imaginative you are in restructuring the work. As always, there are solutions that cannot be conceived ... until someone does just that!
I don't know how WYSIWYG D is compared to C - which I use - but in general I think the process of developing scalable applications is helped by how much the developer can influence the compiler in its actual machine code generation. For interpreted languages there will be so much memory work going on in the interpreter that you risk not being able to discern improvements from the general "background noise."
I once wrote a multi-threaded shellsort which ran 70% faster on two cores compared to one and 100% on three cores compared to one. Four cores ran slower than three. So I know the dilemmas you face.
I would like to point you to External Sorting[1], which faces similar problems. Usually this class of algorithms is used to cope with large volumes of data, but its main point is that large chunks are split into smaller, unrelated problems, which are therefore really great to run in parallel. You "only" need to stitch together the partial results afterwards, which is not quite as parallel (but relatively cheap compared to the actual sorting).
An External Merge Sort would also work really well with an unknown number of threads. You just split the workload arbitrarily and give each chunk of n elements to a thread whenever there is one idle, until all your work units are done, at which point you can start joining them up.
[1] http://en.wikipedia.org/wiki/External_sorting
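To make that split-then-stitch structure concrete, here is a minimal C++ sketch (my own, not from the article): the chunk count and the serial pairwise merge are simplifications; a production version would also merge in parallel and would stream chunks from disk for truly external data.

```cpp
#include <algorithm>
#include <cstddef>
#include <future>
#include <vector>

// Sort independent chunks in parallel, then stitch the sorted runs together.
void parallelChunkSort(std::vector<int>& data, std::size_t chunks) {
    const std::size_t n = data.size();
    std::vector<std::size_t> bounds;
    for (std::size_t c = 0; c <= chunks; ++c)
        bounds.push_back(c * n / chunks);

    // Each task sorts a disjoint slice, so the workers never touch each other's data.
    std::vector<std::future<void>> jobs;
    for (std::size_t c = 0; c < chunks; ++c)
        jobs.push_back(std::async(std::launch::async, [&data, &bounds, c] {
            std::sort(data.begin() + bounds[c], data.begin() + bounds[c + 1]);
        }));
    for (auto& j : jobs) j.get();

    // Serial pairwise merge of the sorted runs (the "stitching" step).
    for (std::size_t c = 1; c < chunks; ++c)
        std::inplace_merge(data.begin(), data.begin() + bounds[c],
                           data.begin() + bounds[c + 1]);
}
```

Only the final inplace_merge passes are serial, which mirrors the point above: the expensive sorting is fully parallel and the stitching is comparatively cheap.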