I would like to do a performance test using the LMAX Disruptor:

1. Configure the disruptor
2. Somehow "pause" the processing
3. Add many messages to the RingBuffer
4. "Unpause" the processing

This way I could cleanly measure how fast the buffer empties. If I mix adding many messages (which introduces some delays) with the processing itself, the results on processing speed are less certain.
However, I cannot find any "pausing" facility in the LMAX Disruptor API (https://lmax-exchange.github.io/disruptor/docs/com/lmax/disruptor/dsl/Disruptor.html). Is there a way to make one?
The answer largely depends on what you mean by pause the disruptor.
You can easily have a consumer that does not free slots in the ring buffer until it reaches the nth message. However, the time it takes to drain the ring buffer is entirely dependent on the speed of the consumer.
If, on the other hand, you are looking to benchmark the cost of the disruptor itself, that is a much more complicated proposition. I'd start by investigating the disruptor project's own performance tests.
Also note that in a bursty workload, the disruptor's batching effects may actually improve throughput by amortizing the cost of expensive operations, such as network or disk IO.
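One way to get the "pause" effect without support from the Disruptor API is to gate the consumer behind a latch: pre-fill the buffer while the consumer is blocked, then release the latch and time only the drain. The sketch below uses JDK primitives only, with an ArrayBlockingQueue standing in for the ring buffer; the class and method names are my own, not Disruptor API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;

public class DrainBenchmark {
    // Measures how long a single consumer takes to drain a pre-filled
    // buffer of `count` messages. The consumer is "paused" behind a
    // latch until the producer has finished filling the buffer.
    public static long timeDrain(int count) throws InterruptedException {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(count);
        CountDownLatch gate = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(1);

        Thread consumer = new Thread(() -> {
            try {
                gate.await();                    // "paused" until released
                for (int i = 0; i < count; i++) {
                    buffer.take();               // drain one message
                }
                done.countDown();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        for (int i = 0; i < count; i++) {
            buffer.put(i);                       // pre-fill while the consumer waits
        }

        long start = System.nanoTime();
        gate.countDown();                        // "unpause"
        done.await();                            // wait until the buffer is empty
        long elapsed = System.nanoTime() - start;
        consumer.join();
        return elapsed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("drain took " + timeDrain(100_000) + " ns");
    }
}
```

With a real Disruptor the same gating trick should work by having the EventHandler block on a latch in its first onEvent call, though I have not verified that against the library.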
Using the LMAX Disruptor, we have observed that if we use 5-10 disruptors together in an application (a chain of disruptors, each with one consumer that performs a specific task and then hands the message to the next disruptor/ring buffer), CPU utilization reaches 90% and above and the system becomes unresponsive until we bring the application down. We suspect it is because of so many active disruptor threads; this happens even when the disruptors are not really processing anything. Can anyone comment on what the optimum number of disruptors in an application should be?
It could be that you need to change the wait strategies you are using on the consumers. If you are using the busy-wait strategy on all of them, even if no inputs have been provided to the ring buffers, the polling threads could still tie up CPU resources because they'll be in tight loops where they're constantly checking the buffer for new values to read.
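To illustrate the difference, here is a minimal sketch (class and method names are mine, not Disruptor API) contrasting a busy-spin wait, which keeps a core hot even with nothing to consume, with a parked wait, which uses essentially no CPU until it is signalled:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class WaitStyles {
    static final AtomicBoolean ready = new AtomicBoolean(false);

    // Busy-wait: the thread never sleeps, so it pins a core at ~100%
    // even when there is nothing to consume.
    static void busySpinUntilReady() {
        while (!ready.get()) {
            Thread.onSpinWait();   // CPU hint, but still a hot loop
        }
    }

    // Parked wait: the thread is descheduled and consumes essentially
    // no CPU until a producer unparks it.
    static void parkUntilReady() {
        while (!ready.get()) {     // loop guards against spurious wakeups
            LockSupport.park();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(WaitStyles::parkUntilReady);
        consumer.start();
        Thread.sleep(50);          // consumer is parked, not spinning
        ready.set(true);
        LockSupport.unpark(consumer);
        consumer.join();
        System.out.println("consumer finished");
    }
}
```

With ten chained consumers all in the busy-spin style, ten cores' worth of CPU is burned before any message arrives, which matches the symptoms described in the question.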
The following is an excerpt from the book Java Concurrency in Practice, section 12.2 ("Testing for Performance"), where the author discusses the throughput of a bounded buffer implementation.
Figure 12.1 shows some sample results on a 4-way machine, using buffer
capacities of 1, 10, 100, and 1000. We see immediately that a buffer
size of one causes very poor throughput; this is because each thread
can make only a tiny bit of progress before blocking and waiting for
another thread. Increasing buffer size to ten helps dramatically, but
increases past ten offer diminishing returns.
It may be somewhat puzzling at first that adding a lot more threads
degrades performance only slightly. The reason is hard to see from the
data, but easy to see on a CPU performance meter such as perfbar while
the test is running: even with many threads, not much computation is
going on, and most of it is spent blocking and unblocking threads. So
there is plenty of CPU slack for more threads to do the same thing
without hurting performance very much.
However, be careful about concluding from this data that you can
always add more threads to a producer-consumer program that uses a
bounded buffer. This test is fairly artificial in how it simulates the
application; the producers do almost no work to generate the item
placed on the queue, and the consumers do almost no work with the item
retrieved. If the worker threads in a real producer-consumer
application do some nontrivial work to produce and consume items (as
is generally the case), then this slack would disappear and the
effects of having too many threads could be very noticeable. The
primary purpose of this test is to measure what constraints the
producer-consumer handoff via the bounded buffer imposes on overall
throughput.
What does the author mean by CPU slack here? Why does throughput not degrade more and more as threads are added? I am not following the author's reasoning for the only slight degradation of performance when adding more and more threads, assuming the bound on the buffer size is kept constant.
Edit: I can think of one reason: since no real work is being done by the threads in this case, the classic problems of increased traffic on the shared memory bus and cache misses due to context switching do not play a major role as more threads are added. The situation changes once the threads start doing more work. Is that what the author is trying to convey in the third paragraph?
There is no formal term "CPU slack". The author simply means that the CPU is not fully utilised doing meaningful work, because most of the time is spent waiting to acquire a mutually exclusive lock. The author calls this unused capacity of the CPU the CPU slack.
NOTE: The associated code tests a multiple producer / multiple consumer scenario, with an equal number of producers and consumers.
EDIT: In the later discussion they talk about the effect of adding more threads if a) the threads do almost no work, and b) the threads occupy the CPU substantially for every produced or consumed item. I will try to explain the difference with some slightly artificial scenarios.
Suppose that locking takes 1 time unit actively, and 8 time units passively by waiting. Passive waiting does not occupy the CPU.
Case 1: Producer-Consumer cost is 1 time unit.
So we currently account for 2 time units of CPU time, with an
additional 8 time units of passive waiting time. So we have 8/10
available CPU time units.
If we now want to double the number of threads, we need to accommodate
an additional 2 time units (1 for producer-consumer stuff, and 1 for
active locking time). That would eat into our supply of available CPU
time -- but we have enough.
Case 2: Producer-Consumer cost is 11 time units.
So we currently account for 11+1=12 time units of CPU time, with an additional 8 time units of passive waiting time. So we have 8/20 available CPU time units.
If we now want to double the number of threads, we need to accommodate an additional 12 time units (11 for producer-consumer stuff, and 1 for active locking time). That goes beyond the available CPU time units. Something has to give -- so waiting time will increase, and throughput will suffer.
So in case 2, the amount of real work reduces the time available for new threads, thereby increasing the observed effect of lock contention on throughput. It would have been nice if the book had also included figures for these imagined scenarios; that would have made the somewhat hand-wavy argument easier to follow.
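The two cases above can be written down as a tiny accounting model. The helper below (names are mine) just computes the idle fraction of the time window, i.e. the "slack", from the numbers used in the two cases:

```java
public class CpuSlack {
    // Models the accounting above: each producer-consumer round costs
    // `work` units of CPU plus `activeLock` units of active locking,
    // while `passiveWait` units are spent waiting without using the CPU.
    // Returns the idle fraction of the window: the "CPU slack".
    static double slack(int work, int activeLock, int passiveWait) {
        int busy = work + activeLock;
        int window = busy + passiveWait;
        return (double) passiveWait / window;
    }

    public static void main(String[] args) {
        // Case 1: trivial work -> 8 of 10 units are slack.
        System.out.println(slack(1, 1, 8));   // 0.8
        // Case 2: heavy work -> only 8 of 20 units are slack.
        System.out.println(slack(11, 1, 8));  // 0.4
    }
}
```

Doubling the thread count needs `work + activeLock` extra busy units; in case 1 that fits comfortably inside the 8 slack units, in case 2 it does not, so waiting time must grow.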
I think CPU slack is simply spare CPU capacity. According to Wikipedia, slack time is the amount of time left over after a job if the job were started now.
Plenty of CPU slack means plenty of spare computational resources. When the consumer/producer does something nontrivial, the CPU slack decreases and throughput is impacted.
I have the following query which I need some help with. I'm new to message queues and have recently started looking at the Kestrel message queue.
As I understand it, both threads and message queues are used for concurrency in applications, so what is the advantage of using message queues over multithreading?
Message queues allow you to communicate outside your program.
This allows you to decouple your producer from your consumer. You can spread the work to be done over several processes and machines, and you can manage/upgrade/move around those programs independently of each other.
A message queue also typically consists of one or more brokers that takes care of distributing your messages and making sure the messages are not lost in case something bad happens (e.g. your program crashes, you upgrade one of your programs etc.)
Message queues might also be used internally in a program, in which case it's often just a facility to exchange/queue data from a producer thread to a consumer thread to do async processing.
Actually, one facilitates the other. A message queue is a nice and simple multithreading pattern: when you have a control thread (usually, but not necessarily, an application's main thread) and a pool of (usually looping) worker threads, a message queue is the easiest way to exercise control over the thread pool.
For example, to start processing a relatively heavy task, you submit a corresponding message to the queue. If you have more messages than you can currently process, the queue grows; if fewer, it shrinks. When the message queue is empty, your threads sleep (usually by blocking on a mutex or condition variable).
So there is nothing to compare: message queues are part of multithreading, used in some of the more complicated multithreading cases.
Creating threads is expensive, and every thread that is simultaneously "live" adds a certain amount of overhead, even if the thread is blocked waiting for something to happen. If program Foo has 1,000 tasks to be performed and doesn't really care in what order they get done, it might be possible to create 1,000 threads and have each thread perform one task, but such an approach would not be terribly efficient. A second alternative would be to have one thread perform all 1,000 tasks in sequence. If there were other processes in the system that could employ any CPU time Foo didn't use, this latter approach would be efficient (and quite possibly optimal); but if there isn't enough work to keep all CPUs busy, CPUs would waste some time sitting idle. In most cases, leaving a CPU idle for a second is just as expensive as spending a second of CPU time (the main exception is when one is trying to minimize electrical energy consumption, since an idling CPU may consume far less power than a busy one).
In most cases, the best strategy is a compromise between those two approaches: have some number of threads (say 10) that start performing the first ten tasks. Each time a thread finishes a task, have it start work on another until all tasks have been completed. Using this approach, the overhead related to threading will be cut by 99%, and the only extra cost will be the queue of tasks that haven't yet been started. Since a queue entry is apt to be much cheaper than a thread (likely less than 1% of the cost, and perhaps less than 0.01%), this can represent a really huge savings.
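This compromise is exactly what a fixed-size thread pool with a work queue gives you. A minimal sketch using the JDK's ExecutorService (the class name and counts are mine, matching the numbers in the text):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class TaskQueueDemo {
    // Runs `tasks` cheap jobs on a fixed pool of `threads` workers.
    // Tasks that are not yet running wait in the executor's internal
    // queue, which is far cheaper than giving each task its own thread.
    static int runAll(int threads, int tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.execute(completed::incrementAndGet); // the "task" is trivial here
        }
        pool.shutdown();                              // finish queued work, then stop
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAll(10, 1000) + " tasks completed");
    }
}
```

Ten threads service all 1,000 tasks; each idle queue entry is just a small object in the executor's queue rather than a full thread with its own stack.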
The one major problem with using a job queue rather than threading is that if some jobs cannot complete until jobs later in the list have run, it's possible for the system to become deadlocked since the later tasks won't run until the earlier tasks have completed. If each task had been given a separate thread, that problem would not occur since the threads associated with the later tasks would eventually manage to complete and thus let the earlier ones proceed. Indeed, the more earlier tasks were blocked, the more CPU time would be available to run the later ones.
It makes more sense to contrast message queues and other concurrency primitives, such as semaphores, mutex, condition variables, etc. They can all be used in the presence of threads, though message-passing is also commonly used in non-threaded contexts, such as inter-process communication, whereas the others tend to be confined to inter-thread communication and synchronisation.
The short answer is that message-passing is easier on the brain. In detail...
Message-passing works by sending stuff from one agent to another. There is generally no need to coordinate access to the data. Once an agent receives a message it can usually assume that it has unqualified access to that data.
The "threading" style works by giving all agents unrestricted ("open slather") access to shared data, but requiring them to carefully coordinate their access via primitives. If one agent misbehaves, the process becomes corrupted and all hell breaks loose. Message passing tends to confine problems to the misbehaving agent and its cohort, and since agents are generally self-contained and often programmed in a sequential or state-machine style, they tend not to misbehave as often, or as mysteriously, as conventional threaded code.
The JDK's concurrency package, Boost's thread library, and Perl's Thread library (though not Python's) all implement a barrier. I haven't come across a need for using a barrier, so I am wondering what a typical use case would be in multi-threaded applications.
Barriers can be used all over the place in contrived examples, but you'll often see them in a scatter/reduce pattern where the results of the different threads are all required before proceeding.
If you wanted to parallelize a sort, for instance, you could split the list n ways and start n threads to sort their sections, then pause; when they're all finished, they die, letting the parent know it's finally okay to combine the sorted chunks. (I know there are better ways; it's just one implementation.)
The other place I've seen it is in parallelized networking, where you have to send a certain amount of data per payload. The interface starts up n buckets and waits for them all to fill before sending the transmission. When you think about a partitioned T1 line it sorta makes sense: sending one burst of data over the 64 multiplexed partitions is better than sending one partition's worth of data (which essentially costs the same, since the packet has to be padded with 0's anyway).
Hope those are some things to get you thinking about the problem!
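A minimal sketch of the scatter/reduce sort idea using the JDK's CyclicBarrier (the class name is mine; for simplicity the barrier action re-sorts the whole array, as a stand-in for a proper k-way merge of the sorted chunks):

```java
import java.util.Arrays;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierSort {
    // Each worker sorts one chunk; the barrier action runs exactly once,
    // after every worker has arrived, and combines the results.
    static int[] sortInChunks(int[] data, int chunks) throws Exception {
        int[] work = data.clone();
        CyclicBarrier barrier = new CyclicBarrier(chunks, () -> Arrays.sort(work));
        Thread[] workers = new Thread[chunks];
        int chunkSize = (work.length + chunks - 1) / chunks;
        for (int i = 0; i < chunks; i++) {
            int from = i * chunkSize;
            int to = Math.min(work.length, from + chunkSize);
            workers[i] = new Thread(() -> {
                Arrays.sort(work, from, to);       // sort my chunk
                try {
                    barrier.await();               // wait for the other chunks
                } catch (InterruptedException | BrokenBarrierException e) {
                    throw new RuntimeException(e);
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return work;
    }

    public static void main(String[] args) throws Exception {
        int[] out = sortInChunks(new int[]{9, 4, 7, 1, 8, 3, 2, 6, 5, 0}, 3);
        System.out.println(Arrays.toString(out));
    }
}
```

The barrier replaces the hand-rolled "count how many workers have finished under a mutex and condition variable" pattern the next answer alludes to.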
Example: a set of threads works concurrently to compute a result set, and that result set (in part or in total) is required as input to the next stage of processing by some or all of the threads at the "barrier".
A barrier makes it easier to synchronize multiple threads without having to craft a solution around multiple conditions & mutexes.
I can't say I have seen barriers often, though. At some point, as the number of threads grows, it might be worthwhile to consider a more "decoupled" design in order to manage possible deadlocks.
MSDN: A Barrier is an object that prevents individual tasks in a parallel operation from continuing until all tasks reach the barrier. It is useful when a parallel operation occurs in phases, and each phase requires synchronization between tasks.
I'm working on a parallelization library for the D programming language. Now that I'm pretty happy with the basic primitives (parallel foreach, map, reduce and tasks/futures), I'm starting to think about some higher level parallel algorithms. Among the more obvious candidates for parallelization is sorting.
My first question is, are parallelized versions of sorting algorithms useful in the real world, or are they mostly academic? If they are useful, where are they useful? I personally would seldom use them in my work, simply because I usually peg all of my cores at 100% using a much coarser grained level of parallelism than a single sort() call.
Secondly, it seems like quicksort is almost embarrassingly parallel for large arrays, yet I can't get the near-linear speedups I believe I should be getting. For quicksort, the only inherently serial part is the first partition. I tried parallelizing quicksort by sorting the two subarrays in parallel after each partition. In simplified pseudocode:
// I tweaked this number a bunch. Anything smaller than this and the
// overhead outweighs the parallelization gains.
const smallestToParallelize = 500;

// Below this size, insertion sort beats quick sort.
const insertionSortThreshold = 32;

void quickSort(T)(T[] array) {
    if (array.length < insertionSortThreshold) {
        insertionSort(array);
        return;
    }
    size_t pivotPosition = partition(array);
    if (array.length >= smallestToParallelize) {
        // Sort the left subarray in a task pool thread while this
        // thread sorts the right subarray.
        auto myTask = taskPool.execute(quickSort(array[0..pivotPosition]));
        quickSort(array[pivotPosition + 1..$]);
        myTask.workWait();
    } else {
        // Regular serial quick sort.
        quickSort(array[0..pivotPosition]);
        quickSort(array[pivotPosition + 1..$]);
    }
}
Even for very large arrays, where the time the first partition takes is negligible, I can only get about a 30% speedup on a dual core, compared to a purely serial version of the algorithm. I'm guessing the bottleneck is shared memory access. Any insight on how to eliminate this bottleneck or what else the bottleneck might be?
Edit: My task pool has a fixed number of threads, equal to the number of cores in the system minus 1 (since the main thread also does work). Also, the type of wait I'm using is a work wait, i.e. if the task is started but not finished, the thread calling workWait() steals other jobs off the pool and does them until the one it's waiting on is done. If the task isn't started, it is completed in the current thread. This means that the waiting isn't inefficient. As long as there is work to be done, all threads will be kept busy.
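For reference, the D pseudocode above maps fairly directly onto Java's fork/join framework, where join() on a forked subtask gives work-stealing semantics similar to the workWait() described in the edit. A hedged sketch (class names, the partition scheme, and the threshold value are mine):

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class ParallelQuickSort {
    static final int SMALLEST_TO_PARALLELIZE = 500;

    static class SortTask extends RecursiveAction {
        final int[] a; final int lo; final int hi;   // sorts a[lo..hi] inclusive

        SortTask(int[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

        @Override protected void compute() {
            if (hi - lo < SMALLEST_TO_PARALLELIZE) {
                Arrays.sort(a, lo, hi + 1);          // small range: sort serially
                return;
            }
            int p = partition(a, lo, hi);
            SortTask left = new SortTask(a, lo, p - 1);
            left.fork();                             // left half in another worker
            new SortTask(a, p + 1, hi).compute();    // right half in this thread
            left.join();                             // work-stealing wait
        }
    }

    // Lomuto partition around the last element.
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }

    public static void sort(int[] a) {
        if (a.length > 0) {
            ForkJoinPool.commonPool().invoke(new SortTask(a, 0, a.length - 1));
        }
    }
}
```

The same caveat applies here as in the question: each partition is serial before its two halves can run in parallel, so the speedup is far from linear.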
Keep in mind I'm not an expert on parallel sort, and folks make research careers out of parallel sort but...
1) Are they useful in the real world?
Of course they are, if you need to sort something expensive (like strings, or worse) and you aren't already pegging all the cores.
Think of UI code where you need to sort a large dynamic list of strings based on context, or something like a Barnes-Hut n-body sim where you need to sort the particles.
2) Quicksort seems like it would give a linear speedup, but it doesn't. The partition step is a sequential bottleneck; you will see this if you profile, and it will tend to cap out at 2-3x on a quad core.
If you want to get good speedups on a smaller system you need to ensure that your per task overheads are really small and ideally you will want to ensure that you don't have too many threads running, i.e. not much more than 2 on a dual core. A thread pool probably isn't the right abstraction.
If you want to get good speedups on a larger system you'll need to look at scan-based parallel sorts; there are papers on this. Bitonic sort is also quite easy to parallelize, as is merge sort. A parallel radix sort can also be useful; there is one in the PPL (if you aren't averse to Visual Studio 11).
I'm no expert but... here is what I'd look at:
First of all, I've heard that as a rule of thumb, algorithms that work on small bits of a problem from the start tend to work better as parallel algorithms.
Looking at your implementation, try making the parallel/serial switch go the other way: partition the array and sort in parallel until you have N segments, then go serial. If you are more or less grabbing a new thread for each parallel case, then N should be ~ your core count. OTOH if your thread pool is of fixed size and acts as a queue of short lived delegates, then I'd use N ~ 2+ times your core count (so that cores don't sit idle because one partition finished faster).
Other tweaks:
Skip the myTask.workWait() at the local level and instead have a wrapper function that waits on all the tasks.
Make a separate serial implementation of the function that avoids the depth check.
"My first question is, are parallelized versions of sorting algorithms useful in the real world?" - It depends on the size of the data set that you are working on in the real world. For small data sets the answer is no. For larger data sets it depends not only on the size of the data set but also on the specific architecture of the system.
One of the limiting factors that will prevent the expected increase in performance is the cache layout of the system. If the data can fit in the L1 cache of a core, then there is little to gain by sorting across multiple cores as you incur the penalty of the L1 cache miss between each iteration of the sorting algorithm.
The same reasoning applies to chips that have multiple L2 caches and to NUMA (non-uniform memory access) architectures. So the more cores you want to distribute the sorting across, the more the smallestToParallelize constant will need to be increased.
Another limiting factor, which you identified, is shared memory access, or contention over the memory bus. Since the memory bus can only satisfy a certain number of memory accesses per second, having additional cores that do essentially nothing but read and write to main memory will put a lot of stress on the memory system.
The last factor that I should point out is the thread pool itself as it may not be as efficient as you think. Because you have threads that steal and generate work from a shared queue, that queue requires synchronization methods; and depending on how those are implemented, they can cause very long serial sections in your code.
I don't know if the answers here are still applicable, or if my suggestions apply to D.
Anyway ...
Assuming that D allows it, there is always the possibility of providing prefetch hints to the caches. The core in question requests that data it will soon (not immediately) need be loaded into a certain cache level. In the ideal case the data will have been fetched by the time the core starts working on it. More likely the prefetch process will be more or less on the way which at least will result in less wait states than if the data were fetched "cold."
You'll still be constrained by the overall cache-to-RAM throughput capacity so you'll need to have organized the data such that so much data is in the core's exclusive caches that it can spend a fair amount of time there before having to write updated data.
The code and data need to be organized according to the concept of cache lines (fetch units of 64 bytes each), which are the smallest-sized units in a cache. The result should be that, for two cores, the work is organized such that the memory system works half as much per core (assuming 100% scalability) as when only one core was working and the work hadn't been organized; for four cores, a quarter as much, and so on. It's quite a challenge, but by no means impossible; it just depends on how imaginative you are in restructuring the work. As always, there are solutions that cannot be conceived ... until someone does just that!
I don't know how WYSIWYG D is compared to C - which I use - but in general I think the process of developing scaleable applications is ameliorated by how much the developer can influence the compiler in its actual machine code generation. For interpreted languages there will be so much memory work going on by the interpreter that you risk not being able to discern improvements from the general "background noise."
I once wrote a multi-threaded shellsort which ran 70% faster on two cores compared to one and 100% on three cores compared to one. Four cores ran slower than three. So I know the dilemmas you face.
I would like to point you to External Sorting[1] which faces similar problems. Usually, this class of algorithms is used mostly to cope with large volumes of data, but their main point is that they split up large chunks into smaller and unrelated problems, which are therefore really great to run in parallel. You "only" need to stitch together the partial results afterwards, which is not quite as parallel (but relatively cheap compared to the actual sorting).
An External Merge Sort would also work really well with an unknown amount of threads. You just split the work-load arbitrarily, and give each chunk of n elements to a thread whenever there is one idle, until all your work units are done, at which point you can start joining them up.
[1] http://en.wikipedia.org/wiki/External_sorting
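A minimal in-memory sketch of this approach: sort independent runs in parallel, then do a serial k-way merge of the sorted runs using a heap. All names are mine, and a true external sort would of course stream the runs from disk rather than hold them in lists.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class ChunkedMergeSort {
    // Sorts `data` by splitting it into runs of `chunkSize` elements,
    // sorting each run on its own thread, then k-way merging the runs.
    public static int[] sort(int[] data, int chunkSize) throws InterruptedException {
        List<int[]> runs = new ArrayList<>();
        List<Thread> workers = new ArrayList<>();
        for (int from = 0; from < data.length; from += chunkSize) {
            int[] run = Arrays.copyOfRange(data, from, Math.min(data.length, from + chunkSize));
            runs.add(run);
            Thread t = new Thread(() -> Arrays.sort(run)); // runs are independent
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) t.join();

        // k-way merge: the heap holds {runIndex, offset} pairs, ordered by
        // the smallest unconsumed element of each run.
        PriorityQueue<int[]> heap = new PriorityQueue<>(
            (x, y) -> Integer.compare(runs.get(x[0])[x[1]], runs.get(y[0])[y[1]]));
        for (int r = 0; r < runs.size(); r++) {
            if (runs.get(r).length > 0) heap.add(new int[]{r, 0});
        }
        int[] out = new int[data.length];
        int i = 0;
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            out[i++] = runs.get(top[0])[top[1]];
            if (top[1] + 1 < runs.get(top[0]).length) {
                heap.add(new int[]{top[0], top[1] + 1});
            }
        }
        return out;
    }
}
```

The run-sorting phase parallelizes over any number of idle threads, exactly as the answer describes, while the merge is the comparatively cheap serial "stitching" step.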