These days, what are good reasons for setting thread affinity rather than leaving it to the OS?

Searching answers here for "thread affinity", I see a lot of interest in doing it but little justification for it, save possibly getting stable QueryPerformanceCounter results.
Assuming a modern OS and a modern 2-4 socket workstation/server class machine with modern 4-6 core CPUs, what good reasons would anyone have for thinking they know better than their OS's scheduler? Are there any real-world situations where taking more control of thread affinity is the right thing to do? What sort of performance benefits can be demonstrated?
The last time I saw a really good case for setting thread affinity somewhere (as in, it was backed up by concrete results showing genuine and significant improvements in system performance), it was some obscure thing to do with Win2K device drivers. But I haven't seen anything like that in years so when someone tells me they need to control thread affinity (but not why) these days I am deeply sceptical... but curious to be shown otherwise.

The primary reason is if you have something that depends heavily upon caching. The OS scheduler doesn't necessarily take that into account to the degree you might like.
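For illustration, a minimal sketch of what that looks like on Linux/glibc; the helper name pin_current_thread_to_cpu is mine, and it assumes CPU 0 exists and is allowed by your cpuset:

    /* Pin the calling thread to CPU 0 so its working set stays warm in that
     * core's caches instead of being dragged around the machine. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static int pin_current_thread_to_cpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        /* returns 0 on success, an errno-style value on failure */
        return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    int main(void)
    {
        int rc = pin_current_thread_to_cpu(0);
        if (rc != 0)
            fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);
        printf("running on CPU %d\n", sched_getcpu());
        /* ... cache-sensitive work here ... */
        return 0;
    }

The same effect can be had at process level with sched_setaffinity or the taskset utility.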

I use it to assign threads to cores; for example, in a simulation you do the physics entirely on one core and allow the rest of the computation to be executed on another one. It makes sense to be able to control this if you're in a tight environment where you know the hardware.
Of course, configuring this needs to be done per system, so by default I let the OS decide which cores to run on, but I keep the option of restricting core usage.
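Roughly what I mean, as a sketch on Linux/glibc; the physics_loop and render_loop bodies and the use_affinity switch are placeholders for your own code and configuration:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    /* Placeholders for the real work loops. */
    static void *physics_loop(void *arg) { return NULL; }
    static void *render_loop(void *arg)  { return NULL; }

    static void pin(pthread_t t, int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(t, sizeof(set), &set);
    }

    /* With use_affinity == 0 the OS decides, which is the default. */
    static void start_simulation(int use_affinity)
    {
        pthread_t physics, render;
        pthread_create(&physics, NULL, physics_loop, NULL);
        pthread_create(&render,  NULL, render_loop,  NULL);
        if (use_affinity) {           /* only when we know the target hardware */
            pin(physics, 0);          /* physics gets core 0 */
            pin(render,  1);          /* everything else on core 1 */
        }
        pthread_join(physics, NULL);
        pthread_join(render,  NULL);
    }

    int main(void)
    {
        start_simulation(1);
        return 0;
    }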

In the OS kernel and sometimes in kernel mode drivers you need to perform the same action on every CPU (e.g. update a system register). You can do that in a loop in a single thread, changing the affinity on each iteration.
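In user mode the same pattern looks roughly like this on Linux/glibc (a real driver would use the kernel's own affinity facilities; per_cpu_action and say_hello here are just placeholders for the per-CPU work):

    /* Hop the current thread across every online CPU and run the action there. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <unistd.h>
    #include <stdio.h>

    static void run_on_every_cpu(void (*per_cpu_action)(int cpu))
    {
        long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
        for (int cpu = 0; cpu < ncpu; cpu++) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0)
                per_cpu_action(cpu);   /* affinity is now restricted to this CPU */
        }
    }

    static void say_hello(int cpu) { printf("hello from CPU %d\n", cpu); }

    int main(void)
    {
        run_on_every_cpu(say_hello);
        return 0;
    }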

For desktops it's quite unnecessary.
But I can see some applications where it would help. For example, the CPU cache likes it if the thread running on a given core doesn't change.
Another possibility is that you have a critical task - you give it an entire CPU, and the other tasks use the rest of the CPUs.
Or the opposite: you have some low-priority tasks, you put them all on one CPU, and then leave the others free for more important tasks (using process priority will give you most of this benefit without affinity, but I can imagine some memory-heavy cases where it wouldn't).
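For example, the "corral the low-priority work" case can be as small as this sketch (Linux/glibc, assuming CPU 0 is the one you want to sacrifice):

    /* Confine the current (low-priority) process to CPU 0 so the remaining
     * cores stay free for the critical task. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                       /* CPU 0 only */
        if (sched_setaffinity(0, sizeof(set), &set) == -1) {
            perror("sched_setaffinity");
            return 1;
        }
        /* ... low-priority background work here ... */
        return 0;
    }

The same thing can be done without touching the code at all, e.g. taskset -c 0 ./background_job on Linux.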

I would agree it's best to leave this to the OS to figure out in most situations. However, the most common reason I have seen for using thread affinity is when you need good cache locality. In multi-CPU systems, when a particular CPU caches something for itself and the same thing has been cached by some other CPU, then I believe it can automatically get invalidated on the other CPU. So if a particular thread keeps changing the CPU on which it executes, the cache hit rate will be very low. In that case I guess it makes sense for the programmer to be the better judge of the CPU affinities.
The point above by Ariel about making sure a critical task constantly gets a CPU without throttling other low-priority processes also makes sense.
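If you want to see whether migrations actually happen to you, a quick sketch like this (Linux/glibc, with arbitrary loop counts) counts how often the scheduler moves a busy thread between CPUs:

    /* Count scheduler migrations of the current thread during a busy loop. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        int last = sched_getcpu(), migrations = 0;
        volatile long sink = 0;
        for (long i = 0; i < 200000000L; i++) {
            sink += i;                        /* keep the core busy */
            if ((i & 0xFFFFFL) == 0) {        /* sample occasionally */
                int now = sched_getcpu();
                if (now != last) { migrations++; last = now; }
            }
        }
        printf("observed %d migrations\n", migrations);
        return 0;
    }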

Related

Consistent use of CPU by Java Process

I am running a Java program which does heavy work and needs lots of memory and CPU attention.
I took a snapshot of Task Manager while the program was running, and this is what it looked like.
Clearly this program is making use of all 8 cores available on my machine, but if you look at the CPU usage graph, you can see dips in the CPU usage, and these dips are consistent across all cores.
My question is: is there some way of avoiding these dips? Can I make sure that all my cores are being used consistently, without any dip, and come to rest only after my program has finished?
This looks so familiar. Obviously, your threads are blocking for some reason. Here are my suggestions:
Check to see if you have any thread blocking (synchronization). Thread synchronization is easy to do wrong and can stop computation for extended periods of time.
Make sure you aren't waiting on I/O (file, network, devices, etc). Often the default for network or other I/O is to block.
Don't block on message passing or remote procedure calls.
Use a more sophisticated profiler to get a better look. I use Intel VTune, but then I have access to it. There are other low-level profiling tools that are just as capable but more difficult to use.
Check for other processes that might be using the system. I've had situations where that other process doesn't use the processor (blocks) but doesn't give the context up (doesn't swap out and allow another process to run).
When I say "don't block", I don't mean that you should poll. That's even worse as it consumes processing without doing anything useful. Restructure your algorithm to hide latency. Use a new algorithm that permits more latency hiding. Find alternate ways of thread synchronization that minimizes or eliminates blocking.
My two cents.

What makes a kernel/OS real-time?

I was reading this article, but my question is on a generic level, I was thinking along the following lines:
Can a kernel be called real-time just because it has a real-time scheduler? Or, in other words, say I have a Linux kernel, and if I change the default scheduler from O(1) or CFS to a real-time scheduler, will it become an RTOS?
Does it require any support from the hardware? Generally I have seen embedded devices running an RTOS (e.g. VxWorks, QNX); do these have any special provisions/hardware to support them? I know an RTOS process's running time is deterministic, but then one can use setjmp/longjmp to get the output in a determined time.
I'd really appreciate some input/insight on it, if I am wrong about something, please correct me.
After doing some research, talking to people (Jamie Hanrahan, Juha Aaltonen #linkedIn Group - Device Driver Experts) and of course the input from #Jim Garrison, this is what I can conclude:
In Jamie Hanrahan's words:
What makes a kernel real time?
The sine qua non of a real time OS -
The ability to guarantee a maximum latency between an external interrupt and the start of the interrupt handler.
Note that the maximum latency need not be particularly short (e.g. microseconds), you could have a real time OS that guaranteed an absolute maximum latency of 137 milliseconds.
A real time scheduler is one that offers completely predictable (to the developer) behavior of thread scheduling - "which thread runs next".
This is generally separate from the issue of a guaranteed maximum latency to responding to an interrupt (since interrupt handlers are not necessarily scheduled like ordinary threads) but it is often necessary to implement a real-time application. Schedulers in real-time OSs generally implement a large number of priority levels. And they almost always implement priority inheritance, to avoid priority inversion situations.
So, if it is good to have a guaranteed latency for an interrupt and predictability of thread scheduling, then why not make every OS real-time?
Because an OS suited for general purpose use (servers and/or desktops) needs to have characteristics that are generally at odds with real-time latency guarantees.
For example, a real-time scheduler should have completely predictable behavior. That means, among other things, that whatever priorities have been assigned to the various tasks by the developer should be left alone by the OS. This might mean that some low-priority tasks end up being starved for long periods of time. But the RT OS has to shrug and say "that's what the dev wanted." Note that to get the correct behavior, the RT system developer has to worry a lot about things like task priorities and CPU affinities.
A general-purpose OS is just the opposite. You want to be able to just throw apps and services on it, almost always things written by many different vendors (instead of being one tightly integrated system as in most R-T systems), and get good performance. Perhaps not the absolute best possible performance, but good.
Note that "good performance" is not just measured in interrupt latency. In particular, you want CPU and other resource allocations that are often described as "fair", without the user or admin or even the app developers having to worry much if at all about things like thread priorities and CPU affinities and NUMA nodes. One job might be more important than another, but in a general-purpose OS, that doesn't mean that the second job should get no resources at all.
So the general purpose OS will usually implement time-slicing among threads of equal priority, and it may adjust the priorities of threads according to their past behavior (e.g. a CPU hog might have its priority reduced; an I/O bound thread might have its priority increased, so it can keep the I/O devices working; a CPU-starved thread might have its priority boosted so it can get a little bit of CPU time now and then).
Can a kernel be called real time just because it has a real time scheduler?
No, an RT scheduler is a necessary component of an RT OS, but you also need predictable behavior in other parts of the OS.
Does it require any support from the hardware?
In general, the simpler the hardware the more predictable its behavior is. So PCI-E is less predictable than PCI, and PCI is less predictable than ISA, etc. There are specific I/O buses that were designed for (among other things) easy predictability of e.g. interrupt latency, but a lot of R-T requirements can be met these days with commodity hardware.
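To make the scheduling side concrete, this is roughly what opting into a fixed-priority real-time class looks like on Linux; it typically needs root or CAP_SYS_NICE, and the priority value 50 is an arbitrary choice:

    /* Put the calling thread into the SCHED_FIFO real-time class. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct sched_param sp;
        memset(&sp, 0, sizeof(sp));
        sp.sched_priority = 50;     /* 1..99 for SCHED_FIFO on Linux */
        int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
        if (rc != 0)
            fprintf(stderr, "pthread_setschedparam failed: %s\n", strerror(rc));
        /* ... time-critical work: this thread now preempts all normal
         * (SCHED_OTHER) threads until it blocks or yields ... */
        return 0;
    }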
The defining property of real-time is that processes have guaranteed maximum response times. This alone is often not sufficient for the application, and it is even less important than determinism. It is especially hard to achieve with modern feature-rich OSes. Consider:
If I want to command some hardware or a machine at precise points in time, I need to be able to generate command signals at those specific moments, often with far sub-millisecond accuracy. Generally, if you compile, let's say, C code that runs a loop that waits for "half a millisecond" and does something, the wait time is not exactly half a millisecond; it is a little bit more, since the way common OSes handle this is to put the process aside at least until the requested time has passed, after which the scheduler might (at some point) pick it up again.
What is seriously problematic is not that the wait is not exactly half a millisecond but that it cannot be known in advance how much longer it is. This inaccuracy is neither constant nor deterministic.
This has surprising consequences when doing physical automation. For example, it is impossible to command a stepper motor accurately with any typical OS without using dedicated hardware through kernel interfaces and telling it what step timing you really want. Because of this, a single AVR module can command several motors accurately, but a Raspberry Pi (which absolutely stomps the AVR in terms of clock speed) cannot manage more than 2 with any typical OS.
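You can see this effect with a few lines of C on any POSIX system; the sleep duration and iteration count below are arbitrary:

    /* Ask for a 0.5 ms sleep and measure how much longer the OS actually
     * takes to wake us up. On a non-RT desktop kernel the overshoot varies
     * from run to run, which is exactly the non-determinism described above. */
    #include <stdio.h>
    #include <time.h>

    static long long ns_now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    int main(void)
    {
        struct timespec half_ms = { 0, 500000 };   /* 0.5 ms */
        for (int i = 0; i < 5; i++) {
            long long t0 = ns_now();
            nanosleep(&half_ms, NULL);
            long long elapsed = ns_now() - t0;
            printf("asked for 500000 ns, got %lld ns (overshoot %lld ns)\n",
                   elapsed, elapsed - 500000);
        }
        return 0;
    }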

Is it possible to emulate processor cores or threads to the operating system?

First, let me say that:
I know it won't get me more performance
In fact, I know it'll get me less performance, if it is possible at all!
Basically, I want as many threads as possible on a single machine!
I want the operating system to recognize them all, and I want a specific application to run scripts on the individual threads generated... (the application is not mine, so I can't edit it directly)
1st - Is it possible?
2nd - How?
You can't change other programs unless you have their source code or are willing to go to great lengths disassembling them and then monkey-patching together the things you need, at which point it might just be better to write it from scratch.
Also, keep in mind that applications not specifically designed to deal with multithreading are not only unlikely to gain much (or any) performance from it; forcing it will also lead to a lot of bugs and problems due to timing and atomicity issues.
In theory you can spin up as many threads as the OS lets you, and the OS will allow them all to run on the CPU. That's one of the fundamental aspects of the underlying kernel of any modern OS, after all. But you can't tell the OS to just spin up more threads for a specific program; what you can do is give it a higher priority so that the existing threads the program spins up get more time on the CPU. But that's no longer a programming issue; it's a question of how to use the OS you are working with.

With modern OS schedulers, does it still make sense to manually lock processes to specific CPUs/cores?

I recently learned that sometimes people will lock specific processes or threads to specific processors or cores, and it's thought that this manual tuning will best distribute the load. This is a bit counter-intuitive to me -- I would think the OS scheduler would be able to make a better decision than a human about how to spread the load. I could see it being true for older operating systems that perhaps weren't aware of issues like there being more latency between specific pairs of cores, or a cache being shared between one pair of cores but not another pair. But I assume 'modern' OSes like Linux, Solaris 10, OS X, and Vista should have schedulers that know this information. Am I mistaken about their capabilities? Am I mistaken that it's a problem the OS can actually solve? I'm particularly interested in the answer for Solaris and Linux.
The consequence is whether or not I need to inform users of my (multithreaded) software of how they might consider balancing on their box.
First of all, 'lock' is not the correct term to describe it; 'affinity' is the more suitable term.
In most cases you don't need to care about it. However, in some cases, manually setting CPU/process/thread affinity can be beneficial.
Operating systems are usually oblivious to the details of modern multicore architecture. For example, say we have a 2-socket system with quad-core processors that support SMT (= HyperThreading). In this case, we have 2 processors, 8 cores, and 16 hardware threads, so the OS will see 16 logical processors. If an OS does not recognize such a hierarchy, it is highly likely to lose some performance gains. The reasons are:
Caches: in our example, two different processors (installed in two different sockets) do not share any on-chip caches. Say an application has 4 busy-running threads and a lot of data is shared by the threads. If the OS schedules the threads across the processors, then we may lose some cache locality, resulting in a performance loss. However, if the threads are not sharing much data (each having a distinct working set), then separating them onto different physical processors would be better, because it increases the effective cache capacity. Even trickier scenarios can occur, which are very hard for the OS to be aware of.
Resource conflict: let's consider the SMT (= HyperThreading) case. SMT shares a lot of important CPU resources such as caches, the TLB, and execution units. Say there are only two busy threads. However, an OS may stupidly schedule these two threads on two logical processors from the same physical core. In that case, significant resources are contended by the two logical threads (see the sketch just below this list).
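As a sketch of working around the resource-conflict case by hand on Linux/glibc: pin two busy threads to logical CPUs that you have verified (for example with lscpu -e) sit on different physical cores. The CPU numbers and the worker body are placeholders for your own machine and code.

    /* Start two compute threads pinned to different physical cores so they do
     * not fight over one core's execution units and L1/L2 caches. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    static void *worker(void *arg)
    {
        volatile double x = 0;                   /* placeholder busy loop */
        for (long i = 0; i < 100000000L; i++) x += i;
        return NULL;
    }

    static void spawn_pinned(pthread_t *t, int cpu)
    {
        cpu_set_t set;
        pthread_attr_t attr;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set);  /* start pinned */
        pthread_create(t, &attr, worker, NULL);
        pthread_attr_destroy(&attr);
    }

    int main(void)
    {
        pthread_t a, b;
        spawn_pinned(&a, 0);   /* first physical core  (placeholder number) */
        spawn_pinned(&b, 2);   /* second physical core (placeholder number) */
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }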
One good example is Windows 7. Windows 7 now supports a smart scheduling policy that considers SMT (related article). Windows 7 actually prevents the second case above. Here is a snapshot of Task Manager in Windows 7 with 20% load on a Core i7 (quad-core with HyperThreading = 8 logical processors):
(source: egloos.com)
The CPU usage history is very interesting, isn't it? :) You can see that only a single CPU in each pair is utilized, meaning Windows 7 avoids scheduling two threads on the same core simultaneously as far as possible. This policy will definitely decrease the negative effects of SMT such as resource conflicts.
I'd say OSes are not yet smart enough to fully understand modern multicore architecture, with its many caches, shared last-level caches, SMT, and even NUMA. So, there can be good reasons why you may need to manually set CPU/process/thread affinity.
However, I won't say this is really needed. Only try it when you fully understand your workload patterns and your system architecture, and then check the results to see whether your attempt is effective.
For general-purpose applications, there is no reason to set the CPU affinity; you should just allow the OS scheduler to choose which CPU should run the process or thread. However, there are instances where it is necessary to set the CPU affinity. For example, in real-time systems, the cost of migrating a thread from one core to another (which can happen at any time if the CPU affinity has not been set) can introduce unpredictable delays that cause tasks to miss their deadlines and preclude real-time guarantees.
You can take a look at this article about a multi-core aware implementation of real-time CORBA that, among other things, had to set the CPU affinity so that CPU migration could not result in missed deadlines.
The paper is: Real-Time Performance and Middleware for Multiprocessor and Multicore Linux Platforms
For applications designed with parallelism and multiple cores in mind, OS-default thread affinity is sometimes not enough. There are many approaches to parallelism, but so far all require involvement of the programmer and knowledge - at some level at least - of the architecture onto which the solution will be mapped. This includes the machines, CPUs and threads that are involved.
This is an actively researched subject, and there is an excellent course on MIT's OpenCourseWare that delves into these issues: http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-189January--IAP--2007/CourseHome/
Well, something many people haven't mentioned here is the idea of forbidding two processes from running on the same processor (socket). It might be worth helping the system bind different heavily used processes to different processors. This can avoid contention if the scheduler is not clever enough to figure it out itself.
But this is more a system admin task than one for the programmers. I have seen optimizations like this for a few high-performance database servers.
Most modern operating systems will do an effective job of allocating work between cores. They also attempt to keep threads running on the same core, to get the cache benefits you mentioned.
In general, you should never be setting your thread affinity unless you have a very good reason to. You don't have as good an insight as the OS into the other work that threads on the system are doing. Kernels are constantly being updated based on new processor technology (single CPU per socket, to hyper-threading, to multiple cores per socket). Any attempt by you to set hard affinity may backfire on future platforms.
This article from MSDN Magazine, Using concurrency for scalability, gives a good overview of multithreading on Win32. Regarding CPU affinity,
"Windows automatically employs so-called ideal processor affinity in an attempt to maximize cache efficiency. For example, a thread running on CPU 1 that gets context switched out will prefer to run again on CPU 1 in the hope that some of its data will still reside in cache. But if CPU 1 is busy and CPU 2 is not, the thread could be scheduled on CPU 2 instead, with all the negative cache effects that implies."
The article also warns that CPU affinity shouldn't be manipulated without a deep understanding of the problem. Based on this information, my answer to your question would be No, except for very specific, well-understood scenarios.
I am not even sure you can pin processes to a specific CPU on Linux. So, my answer is "NO" - let the OS handle it, it's smarter than you most of the time.
Edit:
It seems that on Win32 you have some control over which CPUs a process is going to run on. Now I'm just waiting for someone to prove me wrong on Linux/POSIX as well ...

When should I consider changing thread priority

I once was asked to increase thread priority to fix a problem. I refused, saying that changing it was dangerous and was not the root cause of the problem.
My question is, under what circumstances should I consider changing the priority of threads?
When you've made a list of the threads you're using and defined a priority order for them which makes sense in terms of the work they do.
If you nudge threads up here and there in order to bodge your way out of a problem, eventually they'll all be high priority and you're back where you started. Don't assume you can fix a race condition with prioritisation when really it needs locking, because chances are you've only fixed it in friendly conditions. There may still be cases where it can fail, such as when the lower-priority thread has undergone priority inheritance because another high-priority thread is waiting on another lock it's holding.
If you classify threads along the lines of "these threads fill the audio buffer", "these threads make my app responsive to system events", "these threads make my app responsive to the user", "these threads are getting on with some business and will report when they're good and ready", then the threads ought to be prioritised accordingly.
Finally, it depends on the OS. If thread priority is completely secondary to process priority, then it shouldn't be "dangerous" to prioritise threads: the only thing you can starve of CPU is yourself. But if your high-priority threads run in preference to the normal-priority threads of other, unrelated applications, then you have a broader responsibility. You should only be raising priorities of threads which do small amounts of urgent work. The definition of "small" depends what kind of device you're on - with a 3GHz multi-core processor you get away with a lot, but a mobile device might have pseudo real-time expectations that user-level apps can break.
Keeping the audio buffer serviced is the canonical example of when to be high priority, though, since small under-runs usually cause nasty crackling. Long downloads (or other slow I/O) are the canonical example of when to be low priority, since there's no urgency processing this chunk of data if the next one won't be along for ages anyway. If you're ever writing a device driver you'll need to make more complex decisions how to play nicely with others.
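To put that classification into code, a rough Win32 sketch (the thread bodies are placeholders; the point is just which priority each class of work gets):

    /* Nudge the audio thread up and the downloader down, leave the rest alone. */
    #include <windows.h>

    DWORD WINAPI audio_thread(LPVOID arg)
    {
        /* small amounts of urgent work: keep the audio buffer serviced */
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL);
        /* ... fill buffers ... */
        return 0;
    }

    DWORD WINAPI download_thread(LPVOID arg)
    {
        /* long-running, not urgent: stay out of the way of the UI */
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);
        /* ... slow I/O ... */
        return 0;
    }

    int main(void)
    {
        HANDLE h[2];
        h[0] = CreateThread(NULL, 0, audio_thread,    NULL, 0, NULL);
        h[1] = CreateThread(NULL, 0, download_thread, NULL, 0, NULL);
        WaitForMultipleObjects(2, h, TRUE, INFINITE);
        return 0;
    }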
Not many. The only time I've ever had to change thread priorities in a positive direction was with a user interface thread. UIs must be extremely snappy in order for the app to feel right, so a lot of times it is best to prioritize painting threads higher than others. For example, the Swing Event Dispatch Thread runs at priority 6 by default (1 higher than the default).
I do push threads down in priority quite a bit. Again, this is usually to keep the UI responsive while some long-running background process does its thing. However, this also will sometimes apply to polling daemons and the like which I know that I don't want to be interfering with anything, regardless of how minimal the interference.
Our app uses a background thread to download data and we didn't want that interfering with the UI thread on single-core machines, so we deliberately prioritized that lower.
I think it depends on the direction you're looking at changing the priority.
Normally you shouldn't ever increase thread priority unless you have a very good reason. Increasing thread priority can cause your app's thread to start taking away time from other applications, which probably isn't what the user wants. If your thread is using up a significant amount of CPU it can make the machine hard to use, as some standard UI threads may start to starve.
I'd say the only times you should increase priority above normal is if the user explicitly told your app to do so, but even then you want to prevent "clueless" users from doing so. Maybe if your app doesn't use much CPU normally, but might have brief bursts of really really important activity then it could be OK to have an increased priority, as it wouldn't normally detract from the user's general experience.
Decreasing priority is another matter. If your app is doing something that takes a LOT of CPU and runs for a long time, yet isn't critical, then lowering the priority can be good. By lowering the priority you allow the CPU to be used for other things when it's needed, which helps keep the system responding quickly. As long as the system is mostly idling other than your app you'll still get most of the CPU time, but won't take away from tasks that need it more than you. An example of this would be a thread that indexes the hard drive (think google desktop).
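On POSIX systems the indexer case is typically just a nice level, something like this sketch (the value 10 is an arbitrary choice):

    /* Drop the whole process to a friendlier nice level before starting the
     * long, non-critical indexing work. */
    #include <sys/resource.h>
    #include <stdio.h>

    int main(void)
    {
        /* nice 10: still runs when the machine is idle, but yields readily */
        if (setpriority(PRIO_PROCESS, 0, 10) == -1)
            perror("setpriority");
        /* ... index the disk ... */
        return 0;
    }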
I would say when your original design assumptions about the threads are no longer valid.
Thread priority is mostly a design decision about what work is most important. So for some examples of when to reconsider: If you add a new feature that might require its own thread that becomes more important, then reconsider thread priorities. If some requirements change that force you to reconsider the priorities of the work you are doing, then reconsider. Or, if you do performance testing and realize that your "high priority work" as specified in your design do not get the required performance, then tweak priorities.
Otherwise, it's often a hack.
