Does a large Max Degree Of Parallelism cause queuing? - multithreading

I would like to know whether my understanding is correct: does setting a Max Degree Of Parallelism (MDOP) value larger than a machine's available processor count cause the queuing effect I have described below?
Please treat this as a purely asynchronous I/O operation:
A computer has (for example) 16 processors. This means a max of 16 tasks can be worked on at any one time.
If 100 HTTP endpoints need to be called and the MDOP is set to 100, this creates 100 HTTP request tasks at the same time, all running in parallel. The problem is that only 16 will ever be handled at once, meaning the rest are effectively queued and will be handled once a processor frees up, resulting in increased response times. On top of that, the process will be slowed down further because other parts of the system also demand use of the 16 available processors.
Setting the MDOP to half the available processor count (for example, 8 on a 16-processor machine) means that 8 HTTP request tasks will be in flight at any one time. The response times of those 8 requests will be minimal because there is no queuing of tasks: the MDOP is well under the machine's available processor resources. Furthermore, another 8 processors remain available to handle any other tasks required by the machine.
The main difference is that the overall response time for 100 calls will be faster with an MDOP of 100, since all 100 tasks were started at the same time, whereas with 8 there are only ever 8 requests in flight at once.
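For concreteness, here is a minimal Java sketch of the scenario being asked about (the question doesn't name a platform, so the HttpClient, the example.com URLs and the limit of 8 are all illustrative): a Semaphore caps the number of in-flight asynchronous requests, which is essentially what an MDOP setting does.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.stream.IntStream;

public class BoundedRequests {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical endpoints; in the question there are 100 of them.
        List<URI> endpoints = IntStream.range(0, 100)
                .mapToObj(i -> URI.create("https://example.com/api/" + i))
                .toList();

        int maxInFlight = 8;                      // the "MDOP" being discussed
        Semaphore permits = new Semaphore(maxInFlight);

        List<CompletableFuture<Integer>> futures = endpoints.stream().map(uri -> {
            permits.acquireUninterruptibly();     // blocks issuing, not a CPU core
            HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
            return client.sendAsync(request, HttpResponse.BodyHandlers.discarding())
                         .thenApply(HttpResponse::statusCode)
                         .whenComplete((status, err) -> permits.release());
        }).toList();

        // Wait for all 100 calls; at most 8 are ever in flight at once.
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
    }
}
```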

The implicit assumptions made in the question are not correct.
IO operations are generally far from saturating a core. Synchronous and asynchronous requests result in different behaviours. The former is not efficient and must not be used. Neither should be limited to the number of available cores, but rather to the maximum concurrency of the target device completing the IO operations, assuming the software stack is doing its job correctly.
For synchronous requests, most of the time is spent waiting for the operation to complete. For example, for a network operation, the OS sends the request buffer to the NIC, which sends it asynchronously over the network link. It takes some time to be sure the data has been sent, so the NIC needs to wait a bit before it can mark the send request as completed. It also sometimes needs to wait for the link to be ready. During this time the processor can be free, and it can actually queue new requests to the NIC. Not to mention that the response to the request will take a significant time to arrive (during which neither the processor nor the link is working on this specific request).
When a synchronous operation needs to wait for the target device, the IO scheduler of the OS performs a context switch (assuming the user code does a proper passive wait). This enables the processor to start new IO requests from other threads, or to overlap IO requests with computation when the load is high. If there are not enough threads doing IO operations, then that is the main issue, not the number of cores itself. Increasing the number of threads is not efficient, though: it just increases the number of context switches and thread migrations, resulting in significant overheads. Asynchronous operations should be used instead. The OS stack may also cause many context switches, but they are generally scheduled more efficiently by the OS. Moreover, using asynchronous IO operations removes the artificial limitation on the number of threads (i.e. the maximum degree of parallelism).
For asynchronous operations, one thread can start a lot of IO requests before they actually complete. Having more cores does not directly mean more requests can be completed in a given fixed time. That is only true if the OS IO stack is truly parallel and the operations are limited by the OS stack rather than by the concurrency of the target device (this tends to be true nowadays, for example on SSDs, which are massively parallel). The thing is that modern processors are very fast, so a few threads should theoretically be enough to saturate the queue of most target devices, although in practice not all OS stacks are efficiently designed for modern IO devices.
Every software and hardware stack has a maximum degree of parallelism meant to saturate the device and so to mitigate the latency of IO requests. Because IO latency is generally high, IO request queues are large. "Queuing" does not mean much here, since requests are eventually queued anyway. The question is whether they are queued in the OS stack rather than in the device's queue, that is, whether the degree of parallelism of the software stack (including the OS) is bigger than that of the target device (which may or may not truly process the incoming requests of its request queue in parallel). The answer is generally yes if the target application sends a lot of requests and the OS stack does not provide any mechanism to regulate the amount of incoming requests. That being said, some APIs provide such a mechanism or even guarantee it (asynchronous IO ring buffers are a good example).
Put shortly, it depends on the exact target device, the target operating system, and the OS API/stack used, as well as the application itself. The system can be seen as a big platform-dependent dataflow where queues are everywhere, so one needs to carefully specify what "MDOP" and "queuing" mean in this context.

You cannot expect anyone to know what you mean by MDOP unless you mention the precise technology in the context of which you are using this term. Microsoft SQL Server has a concept of MDOP, but you are talking about HTTP requests, so you are probably not talking about MSSQL. So, what are you talking about? Anyway, on with the question.
A computer has (for example) 16 processors. This means a max of 16 tasks can be worked on at any one time.
No, it doesn't mean that. It means that the computer can execute 16 CPU instructions simultaneously. (If we disregard pipelines, superscalar pipelines, memory contention, etc.) A "Task" is a very high-level concept which involves all sorts of things besides just executing CPU instructions. For example, it involves waiting for I/O to complete, or even waiting for events to occur, events which might be raised by other tasks.
When a system allows you to set the value of some concept such as a "degree of parallelism", this means that there is no silver bullet for that value, so depending on the application at hand, different values will yield different performance benefits. Knowing your specific usage scenario you can begin with an educated guess for a good value, but the only way to know the optimal value is to try and see how your actual system performs.
Specifically with degree of parallelism, it depends on what your threads are doing. If your threads are mostly computation-bound, then a degree of parallelism close to the number of physical cores will yield best results. If your threads are mostly I/O bound, then your degree of parallelism should be orders of magnitude larger than the number of physical cores, and the ideal value depends on how much memory each thread is consuming, because with too many threads you might start hitting memory bottlenecks.
Proof: check how many threads are currently alive on your computer. (Use some built-in system monitor utility, or download one if need be.) Notice that you have thousands of threads running. And yet look at your CPU utilization: it is probably close to zero. That's because virtually all of those thousands of threads are doing nothing most of the time but waiting for stuff to happen, like for you to press a key or for a network packet to arrive.
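If you want to see that "proof" in code rather than in a system monitor, here is a small Java sketch (the thread count and sleep duration are arbitrary) that parks a thousand threads and then reports how little CPU time all the process's threads have actually consumed:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class IdleThreads {
    public static void main(String[] args) throws InterruptedException {
        // Start 1000 threads that do nothing but wait, like most threads on a desktop system.
        for (int i = 0; i < 1000; i++) {
            Thread t = new Thread(() -> {
                try {
                    Thread.sleep(60_000);   // stand-in for "waiting for a key press or a packet"
                } catch (InterruptedException ignored) {
                }
            });
            t.setDaemon(true);
            t.start();
        }

        Thread.sleep(2_000);                 // let them all park

        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long totalCpuNanos = 0;
        for (long id : threads.getAllThreadIds()) {
            long cpu = threads.getThreadCpuTime(id);   // -1 if measurement is unsupported/disabled
            if (cpu > 0) totalCpuNanos += cpu;
        }
        System.out.printf("Live threads: %d, total CPU time used: %.1f ms%n",
                threads.getThreadCount(), totalCpuNanos / 1e6);
    }
}
```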

Related

Is there ever a reason to use thread affinity when there are more threads being used than cores specified/reserved?

I am working with Rust but this question would also apply to many other situations.
Suppose you have M available vCPUs and N threads (including the main thread) to schedule, and that N > M. Each thread does approximately equal amounts of work.
Is there any good reason then to pin threads to specific cores? I've written a number of toy benchmarks and it seems like the answer is no, as I cannot make a program under these assumptions that performs better with thread affinity; in fact, it always does much worse.
If your application runs on a system with a lot of cores and relies heavily on the core caches, a context switch will be too expensive, so pinning tasks to cores reduces context switches and improves throughput.
But on an "average PC" running plain RAM-bound tasks, your OS scheduler will be much better at load-balancing the cores than you ever will.
Pinning threads to cores is also useful if you care about latency instead of throughput. On a heavily loaded system, if you have a time-critical task, you want it to have its own core that won't be contended by other tasks on the system, so it makes sense to pin it to a certain core. An example would be an in-memory database that needs to respond to requests with under a millisecond of latency.
So the answer is: it's only useful for certain apps.

Java Threads and number of Cores

Is it recommended that the number of threads in a Java application should be less than the number of CPU cores?
If so, why is this the case, and what are the implications of using more threads than the number of CPU cores?
You will probably not get any definitive answer on the question of knowing, generally speaking, how many threads an app should have, in relation to the number of core(s) the underlying computer has.
One may also argue that, at the time of PaaS software design and/or elastic clusters, the notion of a fixed number of cores for any given process might be overrated.
Still, the first part of your question:
Is it recommended that the number of threads in a Java application should be less than the number of CPU cores?
This has a definitive answer, which is "no" (once more: as a general rule). The reason why, shortly, is that not all created threads are typically running (and, maybe more importantly, runnable) at once, meaning there is an opportunity to optimize here.
As a support to this discussion, I'll oppose two ways of creating apps; you could call it "classical" versus "reactive", although this is not a universally accepted division. Yet, let's use this as a support.
Classical application design
I label as "classical" applications that rely mostly on "blocking" calls and/or the "thread per request" pattern. Consider the traditional way I/O is done (socket communication like HTTP or database connections, hard-drive-based file reading, ...): your app thread calls some kind of read or write method, which usually triggers an OS-level call that blocks your app thread and fills some device buffer at the OS level (say, a read from a disk). Once the buffer has received enough data, the OS signals your Java app and thread to resume activity, and the read method returns with the data from the buffer.
The whole time the OS is working (usually just a tiny fraction of a second, but still a large amount of time compared to your typical GHz CPU speed), your Java thread is in a BLOCKED or WAITING state, waiting for the OS to signal that it can resume. This happens all the time. A code profiler tool, like JProfiler or YourKit, can help you measure this time. If you do so, you'll notice that in many apps doing I/O, this is a significant part of the so-called "wall time" or "clock time" that is spent... waiting.
So we have one thread waiting, meaning it is not using any CPU time. It can be scheduled out, and the OS is free to give CPU time to anybody else.
Suppose this is a one core CPU, then NOW would be a good time to have another thread to feed the CPU. Meaning having two or more threads could be a good design to maximize CPU usage even on a single core CPU, and get the most out of your hardware.
Most "classical" web applications are typically subject to this type of CPU underuse if you follow the rule of "one thread per CPU core", because Socket communications (or more typically : the time spent waiting for a response to your SQL queries) will incur so much blocking.
If you raise the number of threads your app has, then even if one or two long-running requests remain waiting, other, faster requests will have runnable threads to run them, and you'll get better CPU usage and better performance (more concurrent requests). That is... until something else reaches saturation (too many heavy requests on your DB, too many simultaneous hard drive reads/writes...).
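A minimal sketch of this "classical" thread-per-request shape, assuming a plain Java ServerSocket and a fixed pool (the pool size, port and fake response are arbitrary); most of those 200 threads will spend their lives blocked in read() or in the database call that would sit where the comment is:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ClassicalServer {
    public static void main(String[] args) throws Exception {
        // Far more threads than cores: most of them will sit blocked inside read()
        // or the (hypothetical) database call, not using any CPU.
        ExecutorService pool = Executors.newFixedThreadPool(200);

        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();          // one thread per request
                pool.submit(() -> handle(client));
            }
        }
    }

    private static void handle(Socket client) {
        try (client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream(), StandardCharsets.UTF_8));
             OutputStream out = client.getOutputStream()) {
            String requestLine = in.readLine();           // blocking I/O: the thread waits here
            // ... a blocking SQL query or file read would go here ...
            out.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok".getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            // ignored for the sketch
        }
    }
}
```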
Reactive app design
Recognizing this typical behavior of apps, and using different sets of OS features, some application frameworks now use non blocking patterns (even for I/O) to mitigate the above issues. Examples in the Java ecosystem are NIO based networking stacks like Netty, or actor pattern implementations like Akka.
In a typical "reactive" app, one usually abandons the "thread per request" pattern that we have in classical apps (meaning one thread is responsible for handling everything from start to finish of a given user request, and waiting when need be for external resources to become available), in favor of a vastly more modular, and non-blocking approach.
Threads are given more fine-grained bits of work to do, and each thread hands work off to another, with callbacks to hear back when the work it depends upon is done. This "handing off" of units of work means each thread can quickly grab a new unit of work it is able to handle. That means one of two things: you achieve higher CPU usage with far fewer threads in your app (because each can grab work more efficiently, instead of just sitting "waiting"); or you can instantiate many, many more threads because they'll mostly be waiting (not saturating the CPUs), and the dynamic hand-off will still allow for good CPU usage.
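A tiny Java sketch of that hand-off style, using nothing but CompletableFuture on a small pool (the stage names and pool size are made up); each stage is a small unit of work picked up by whichever thread is free, and nothing blocks except the final join() of the demo:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReactiveHandoff {
    public static void main(String[] args) {
        // A small pool: threads never sit blocked waiting for each other's results,
        // they just pick up the next ready unit of work.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        CompletableFuture<String> response =
                CompletableFuture.supplyAsync(() -> "user-42", pool)          // parse the request
                        .thenApplyAsync(ReactiveHandoff::loadProfile, pool)   // "I/O" step
                        .thenApplyAsync(p -> "<html>" + p + "</html>", pool); // render the response

        // Only this demo blocks at the very end; in a real reactive stack the
        // result would be written back via another callback instead of join().
        System.out.println(response.join());
        pool.shutdown();
    }

    private static String loadProfile(String userId) {
        // Placeholder for a non-blocking lookup; returns immediately here.
        return "profile of " + userId;
    }
}
```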
Conclusion
Anyway, you don't design the number of threads solely based on the number of available cores. The nature of your implementation and work dictates the number of optimal threads to create.
On a classical app-design philosophy, the two numbers are more closely related than on a reactive one, but still, we have different profiles :
a very simple server app can accommodate many more threads than CPU cores, because it will allow for better throughput (the limit being, say, the outgoing network bandwidth).
a SQL-heavy app should be scaled to the point where your app server saturates the SQL backend. As your app server will mostly be waiting for your SQL server, that is the limit.
a mixed application consisting of some SQL heavy work, and some lightweight work, will need precision tuning, because you don't want the stuck threads (those blocked waiting for the DB) starving the light requests that would be served more rapidly
a compute-intensive program (say, a cryptography service) will probably benefit from a number of threads close to the number of CPU cores (if your algorithm is implemented in a classical way), because creating more threads than you are able to run is pointless. In an actor-based implementation, creating more threads could actually be a win.
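As one concrete illustration of "the nature of your work dictates the number of threads", here is a common back-of-the-envelope sizing heuristic, sketched in Java; it is not part of the answer above, and the per-request wait/compute figures are invented for the example:

```java
public class PoolSizeEstimate {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // Illustrative per-request profile: 5 ms of CPU work, 45 ms spent
        // blocked on the database (numbers are made up for the example).
        double computeMs = 5.0;
        double waitMs = 45.0;

        // Common sizing heuristic for blocking workloads:
        // threads ~= cores * (1 + wait/compute)
        int threads = (int) Math.ceil(cores * (1 + waitMs / computeMs));

        System.out.printf("%d cores, wait/compute ratio %.0f -> about %d threads%n",
                cores, waitMs / computeMs, threads);
    }
}
```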

libevent / epoll number of worker threads?

I am following this example. Line #37 says that the number of worker threads should be equal to the number of CPU cores. Why is that so?
If there are 10k connections and my system has 8 cores, does that mean 8 worker threads will be processing 10k connections? Why shouldn't I increase this number?
Context Switching
For an OS to context switch between threads takes a little bit of time. Having a lot of threads, each one doing comparatively little work, means that the context switch time starts becoming a significant portion of the overall runtime of the application.
For example, it could take an OS about 10 microseconds to do a context switch; if the thread does only 15 microseconds worth of work before going back to sleep then 40% of the runtime is just context switching!
This is inefficient, and that sort of inefficiency really starts to show up when you're up-scaling as your hardware, power and cooling costs go through the roof. Having few threads means that the OS doesn't have to switch contexts anything like as much.
So in your case if your requirement is for the computer to handle 10,000 connections and you have 8 cores then the efficiency sweet spot will be 1250 connections per core.
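The arithmetic behind those two figures, spelled out as a tiny Java snippet (the 10 µs and 15 µs values are the illustrative ones used above):

```java
public class SwitchOverhead {
    public static void main(String[] args) {
        // Figures from the discussion above: ~10 us per context switch,
        // ~15 us of useful work per wake-up.
        double switchUs = 10.0;
        double workUs = 15.0;
        double overhead = switchUs / (switchUs + workUs);
        System.out.printf("Context-switch overhead: %.0f%%%n", overhead * 100); // ~40%

        int connections = 10_000;
        int cores = 8;
        System.out.println("Connections per core: " + connections / cores);     // 1250
    }
}
```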
More Clients Per Thread
In the case of a server handling client requests it comes down to how much work is involved in processing each client. If that is a small amount of work, then each thread needs to handle requests from a number of clients so that the application can handle a lot of clients without having a lot of threads.
In a network server this means getting familiar with the select() or epoll() system call. When called, these will both put the thread to sleep until one of the mentioned file descriptors becomes ready in some way. However, if there are no other threads pestering the OS for runtime, the OS won't necessarily need to perform a context switch; the thread can just sit there dozing until there's something to do (at least that's my understanding of what OSes do. Everyone, correct me if I'm wrong!). When some data turns up from a client it can resume a lot faster.
And this of course makes the thread's source code a lot more complicated. You can't do a blocking read of data from the clients, for instance; being told by epoll() that a file descriptor has become ready for reading does not mean that all the data you're expecting to receive from the client can be read immediately. And if the thread stalls due to a bug, that affects more than one client. But that's the price paid for attaining the highest possible efficiency.
And it's not necessarily the case that you would want just 8 threads to go with your 8 cores and 10,000 connections. If there's something that your thread has to do for each connection every time it handles a single connection then that's an overhead that would need to be minimised (by having more threads and fewer connections per thread). [The select() system call is like that, which is why epoll() got invented.] You have to balance that overhead up against the overhead of context switching.
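To make the "many clients per thread" idea concrete, here is a minimal sketch of an event loop using Java's NIO Selector, which sits on top of epoll/kqueue/select on the respective platforms (the port and buffer size are arbitrary, and real code would keep per-client state for partial reads):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class OneThreadManyClients {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();                     // epoll/kqueue/select under the hood
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                                   // sleeps until some descriptor is ready
            var keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = client.read(buffer);                 // readiness does not mean "all data arrived"
                    if (n == -1) {                               // client closed the connection
                        key.cancel();
                        client.close();
                    }
                    // ...parse whatever partial data arrived, keeping per-client state...
                }
            }
        }
    }
}
```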
10,000 file descriptors is a lot (too many?) for a single process in Linux, so you might have to have several processes instead of several threads. And then there's the small matter of whether the hardware is fundamentally able to support 10,000 within whatever response time / connection requirements your system has. If it doesn't then you're forced to distribute your application across two or more servers, and that can start getting really complicated!
Understanding exactly how many clients to handle per thread depends on what the processing is doing, whether there's hard disk activity involved, etc. So there's no one single answer; it's different for different applications, and also for the same application on different machines. Tuning the clients/thread ratio to achieve peak efficiency is a really hard job. This is where profiling tools like dtrace on Solaris, ftrace on Linux, (especially when used like this, which I've used a lot on Linux on x86 hardware) etc. can help, because they allow you to understand at a very fine scale precisely what runtime is involved in your thread handling a request from a client.
Outfits like Google are of course very keen on efficiency; they get through a lot of electricity. I gather that when Google choose a CPU, hard drive, memory, etc. to put into their famously home grown servers they measure performance in terms of "Searches per Watt". Obviously you have to be a pretty big outfit before you get that fastidious about things, but that's the way things go ultimately.
Other Efficiencies
Handling things like TCP network connections can take up a lot of CPU time in its own right, and it can be difficult to understand whereabouts in a system all your CPU runtime has gone. For network connections, things like TCP offload in smarter NICs can have a real benefit, because that frees the CPU from the burden of doing things like error-correction calculations.
TCP offload mirrors what we do in the high speed large scale real time embedded signal processing world. The (weird) interconnects that we use require zero CPU time to run them. So all of the CPU time is dedicated to processing data, and specialised hardware looks after moving data around. That brings about some quite astonishing efficiencies, so one can build a system with more modest, lower cost, less power hungry CPUs.
Language can have a radical effect on efficiency too; things like Ruby, PHP, and Perl are all very well and good, but everyone who started out with them and then grew rapidly has ended up moving to something more efficient like Java/Scala, C++, etc.
Your question is even better than you think! :-P
If you do networking with libevent, it can do non-blocking I/O on sockets. One thread could do this (using one core), but that would under-utilize the CPU.
But if you do “heavy” file I/O, then there is no non-blocking interface to the kernel. (Many systems have nothing for that at all, others have some half-baked stuff going on in that field, but it's non-portable, and libevent doesn't use it.) If file I/O is bottlenecking your program/test, then a higher number of threads will make sense! If a hard disk is used, and the I/O scheduler is reordering requests to avoid disk-head moves, etc., it will depend on how many requests the scheduler takes into account to do its job best. 100 pending requests might work better than 8.
Why shouldn't you increase the thread number?
If non-blocking I/O is done: all cores are working with thread-count = core-count. More threads only means more thread-switching with no gain.
For blocking I/O: you should increase it!
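One way to apply that last point, sketched in Java rather than C/libevent: keep the network side non-blocking, and hand the blocking file reads to a pool that is deliberately larger than the core count (the pool size of 32 and the file path are illustrative):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FileIoOffload {
    // More threads than cores is fine here: they mostly sit blocked in the
    // kernel waiting for the disk, so the extra ones cost very little CPU.
    private static final ExecutorService DISK_POOL = Executors.newFixedThreadPool(32);

    // Called from the event-loop thread; returns immediately.
    static CompletableFuture<byte[]> readAsync(Path file) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                return Files.readAllBytes(file);   // the blocking read happens off the event loop
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }, DISK_POOL);
    }

    public static void main(String[] args) {
        readAsync(Path.of("/etc/hostname"))        // illustrative path
                .thenAccept(bytes -> System.out.println("read " + bytes.length + " bytes"))
                .join();
        DISK_POOL.shutdown();
    }
}
```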

How is processor speed distributed across threads?

Objective:
I am trying to estimate how fast my code will execute when run concurrently in multiple threads.
Question 1)
If I know exactly how fast my code runs for a single request in one thread, is there any way of estimating how fast it will run amongst multiple threads?
Question 2)
What impact, if any, does the presence of other threads have on the execution speed of each other thread?
My Situation:
I traverse a graph in memory with a worst-case size of 1 million nodes. It's simply accessing 1 million memory addresses, one at a time. It takes half a second on 1 thread, and I was worried how this will scale with multiple users performing the same query. Every user request is handled by a separate thread, so 100 simultaneous users will require 100 simultaneous threads. Each thread shares the same resource, but read-only. No writing. Is there any chance I could get each user to see roughly the same execution time?
Note: I know it will depend upon a number of factors but surely there must be some way of identifying whether or not your code will scale if you find it takes x amount of time for a single thread given x hardware. As final note I'd like to add I have limited experience with computer hardware architecture and how multi-threading works under the hood.
These are all interesting questions, but there is, unfortunately, no straightforward answer, because the answer will depend on a lot of different factors.
Most modern machines are multi-core: in an ideal situation, a four-thread process has the ability to scale up almost linearly in a four-core machine (i.e. run four times as fast).
Most programs, though, spend most of their time waiting for things: disk or database access, the memory bus, network I/O, user input, and other resources. Faster machines don't generally make these things appreciably faster.
The way that most modern operating systems, including Windows, Unix/Linux, and MacOS, use the processor is by scheduling processor time to processes and threads in a more-or-less round-robin manner: at any given time there may be threads that are waiting for processor time (this is a bit simplistic, as they all have some notions of process prioritization, so that high-criticality processes get pushed up the queue earlier than less important ones).
When a thread is using a processor core, it gets it all for as long as its time slice lasts: indeed, only one thing at a time is actually running on a single core. When the process uses up its time slice, or requests some resource that isn't immediately available, its turn at the processor core ends, and the next scheduled task begins. This tends to make pretty optimal use of the processor resources.
So what are the factors that determine how well a process will scale up?
What portion of its run time does a single process spend waiting for I/O and user input?
Do multiple threads hit the same resources, or different ones?
How much communication has to happen between threads? Between individual threads and your processes main thread? This takes synchronization, and introduces waiting.
How "tight" are the hotspots of the active thread? Can they fit into the processor's cache, or does the (much slower) main memory have to be accessed?
As a general rule, the more independent individual threads are of one another, the more linearly your application will scale. In real-world business applications, though, that is far from the case. The best way to increase the scaling ability of your process is to understand it--and its dependencies--well, and then use a profiler to find out where the most waiting occurs, and see if you can devise technical strategies to obviate them.
If I know exactly how fast my code runs for a single request in one thread, is there any way of estimating how fast it will run amongst multiple threads?
No, you should determine it empirically.
What impact, if any, does the presence of other threads have on the execution speed of each other thread?
Computation-bound tasks will likely scale very well and be mostly independent of other threads. Interestingly enough, some CPU manufacturers implement features which can increase the clock of a lone busy CPU core to compensate for all the idle cores. This sort of feature might confound your measurements and expectations about scaling.
Cache/Memory/disk-bound tasks will start to contend with each other except for where resource partitions exist.
I know it will depend upon a number of factors
Absolutely! So I recommend that you prototype it and measure it. And then find out why it didn't scale as well as you'd hoped and try a different algorithm. Iterate.
but surely there must be some way of identifying whether or not your code will scale
Yes, but unfortunately it requires a detailed description of the algorithm implemented by the code. Your results will be heavily dependent on the ratio of your code's activity among these general regions, and your target's capability for these:
disk I/O
network I/O
memory I/O
computation
My Situation: My application runs in an app server that assigns one thread for every user request. If my application executes in 2 seconds for 1 user, I can't assume it will always take 2 seconds if, say, 100 users are simultaneously running the same operation, correct?
If your app server computes pi to 100 digits for each user request, it will likely scale reasonably well until you encounter the core limit of your target.
If your app server does database queries for each user request, it will likely scale only as well as the target hardware can sustain the necessary load.
EDIT given specifics:
I traverse a graph in memory of worst case size 1 million nodes. It's simply accessing 1 million memory addresses 1 at a time.
Your problem sounds memory+cache-bound. You should study the details of your target CPU/mem deployment or if you are designing it, opt for high memory throughput.
A NUMA system ("resource partitioning" for memory) can likely maximize your overall concurrent memory throughput. Note that since your problem seems to dictate concurrent access to the same memory pages, a NUMA system would penalize the process doing remote memory accesses. In this case, consider creating multiple copies of the data at initialization time.
Depending on the pattern of traversal, TLB pressure might be a factor. Consider experimenting with huge (aka "large") pages.
Cache contention may be a factor in scaling as well.
Your specific algorithm could easily end up dominating over any of the specific system effects, depending on how far apart the best and worst cases are.
limited experience with computer hardware architecture and how multi-threading works under the hood.
Profile the query using CPU performance counters with a tool like Intel's VTune, perf, or oprofile. It can tell you where expensive operations are executing in your code. With this information you can optimize your query to perform well (individually and in aggregate).
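If you want a quick empirical feel before profiling, a rough Java sketch along these lines measures how the slowest traversal time grows as the number of concurrent readers grows (a flat int-array scan is an optimistic stand-in for real pointer-chasing graph traversal, and all the numbers are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class TraversalScaling {
    public static void main(String[] args) throws InterruptedException {
        // Shared read-only data standing in for the 1M-node graph.
        int[] nodes = new int[1_000_000];
        for (int i = 0; i < nodes.length; i++) nodes[i] = i;

        for (int threads : new int[]{1, 2, 4, 8, 16, 32, 100}) {
            CountDownLatch start = new CountDownLatch(1);
            CountDownLatch done = new CountDownLatch(threads);
            long[] elapsedNanos = new long[threads];

            for (int t = 0; t < threads; t++) {
                final int id = t;
                new Thread(() -> {
                    try {
                        start.await();
                        long begin = System.nanoTime();
                        long sum = 0;
                        for (int v : nodes) sum += v;          // read-only "traversal"
                        elapsedNanos[id] = System.nanoTime() - begin;
                        if (sum == 42) System.out.println();   // keep the loop from being optimized away
                    } catch (InterruptedException ignored) {
                    } finally {
                        done.countDown();
                    }
                }).start();
            }

            start.countDown();    // release all readers at once
            done.await();

            long worst = 0;
            for (long e : elapsedNanos) worst = Math.max(worst, e);
            System.out.printf("%3d threads -> slowest traversal: %.1f ms%n", threads, worst / 1e6);
        }
    }
}
```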

Is the timeslice given to a thread that is waiting on I/O "wasted"?

I'm currently analyzing the pros and cons of writing a server using a threaded model or event driven model. I already know the many cons of the threaded model (does not scale well due to context switching overhead, virtual memory limitations, etc.) but I came upon another one in my analysis and would like to verify that my understanding of threads is correct.
If I have 5 threads, 1 which is doing work (not being blocked), 4 which are being blocked waiting for I/O (for example waiting on data from a socket), isn't the CPU time given to those 4 threads essentially wasted since no work is actually being done (assuming no data arrives)? The timeslice given to those 4 blocked threads is taking away time from the 1 thread actually doing work, correct?
In this case I'm explicitly saying that the socket is a blocking one.
No. Although it actually depends on the type of OS, the type of I/O (polled/DMA), and the device driver architecture, most device I/O is performed using DMA + interrupts. In such cases a thread is put into a sleep state until an interrupt is triggered for its I/O operation, and the scheduler does not visit those threads until their pending I/O is complete. Only polling I/O can cause CPU consumption, such as PIO mode for hard disks.
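A quick way to check this empirically, sketched in Java: park a thread in a blocking socket read and ask the JVM how much CPU time it has consumed (thread CPU-time measurement is available on most JVMs; the 5-second wait and the loopback connection are just for the demo):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockedThreadCpu {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {        // nobody will ever send data
            Thread blocked = new Thread(() -> {
                try (Socket s = new Socket("localhost", server.getLocalPort())) {
                    s.getInputStream().read();                   // blocks "forever" on the socket read
                } catch (Exception ignored) {
                }
            });
            blocked.setDaemon(true);
            blocked.start();

            Thread.sleep(5_000);                                 // let it sit blocked for 5 seconds

            ThreadMXBean bean = ManagementFactory.getThreadMXBean();
            long cpuNanos = bean.getThreadCpuTime(blocked.getId());  // -1 if unsupported
            System.out.printf("CPU time used by the blocked thread after 5 s: %.3f ms%n", cpuNanos / 1e6);
        }
    }
}
```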
Threads don't need to use their entire timeslice. I don't know the specifics, but if blocked threads even get time, they certainly don't use it all.
Obviously, these details vary platform-to-platform-to-environment-to-etc.
