AFAIK, a seqlock writer may be scheduled away while it has made the sequence counter odd, which would starve readers until the writer runs again. So do seqlocks imply disabling preemption on a core for a while?
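For concreteness, here is the pattern I mean, sketched with Java's StampedLock, whose optimistic-read mode behaves like a seqlock (a sketch only, not the kernel primitive itself): if the writer is descheduled between writeLock() and unlockWrite(), readers keep failing validation until it runs again.

```java
import java.util.concurrent.locks.StampedLock;

// Seqlock-style pattern via StampedLock. A writer preempted between
// writeLock() and unlockWrite() leaves the stamp "dirty", so
// optimistic readers keep failing validate() until it resumes.
class Point {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {
        long stamp = lock.writeLock();  // readers now fail validate()
        try {
            x += dx;                    // if the writer is descheduled
            y += dy;                    // here, readers are held up
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead();  // "read the counter"
        double cx = x, cy = y;
        if (!lock.validate(stamp)) {            // counter moved: retry pessimistically
            stamp = lock.readLock();
            try {
                cx = x;
                cy = y;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }
}
```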
AFAIK, as opposed to time-slice-based scheduler preemption, pure user-level threads (ULTs) cooperatively yield the processor to other threads. However, from what I have seen online, several preemptive user-thread mechanisms now exist.
Keeping this in mind, I wanted to start a discussion on the benefits of lock-free programming on user-level threads. My understanding is that, regardless of whether a preemptive scheduler is present, the performance of lock-free programs should surpass that of mutex/semaphore-based programs.
However, I am still unsure: since acquiring a mutex also takes a fast path in the absence of contention, the performance gain may not be attractive enough to justify migrating to a lock-free approach.
In the case of semaphores, acquisition involves a system call and hence a context switch, so lock-free approaches look like a much better option there.
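To make concrete what I mean by "lock-free", here is a minimal sketch (in Java; the LockFreeCounter class is just an illustration): a CAS retry loop in which no thread ever blocks, so a preempted thread merely retries when it runs again.

```java
import java.util.concurrent.atomic.AtomicLong;

// The kind of lock-free update I mean: a CAS retry loop. No thread
// ever blocks; a thread preempted mid-update simply retries when it
// runs again, so other threads are never held up by it.
class LockFreeCounter {
    private final AtomicLong value = new AtomicLong();

    long increment() {
        long old;
        do {
            old = value.get();
        } while (!value.compareAndSet(old, old + 1)); // lost the race: retry
        return old + 1;
    }
}
```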
Please advise for both situations: ULTs equipped with a preemptive mechanism and ULTs without one.
This is not an easy question to answer because it is very general; it boils down to what your requirements are.
I have recently been working with systems where the use of lock-free structures was considered, but when we sat down and wrote out our requirements, we realized that they were in fact not what we wanted. Our system didn't really require them, and in fact locking helps us, because we typically have a producer/consumer architecture where, if nothing is being produced (i.e. nothing is being added to a queue), the consumer should be idle (i.e. blocked).
I recently wrote about this in more detail:
http://blog.chrisd.info/a-simple-thread-safe-queue-for-use-in-multi-threaded-c-applications/
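The core of it is just a condition-wait loop; here is a minimal sketch of the same idea in Java (the post itself presents a C++ version):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal blocking producer/consumer queue: the consumer sleeps while
// the queue is empty instead of burning CPU, which is exactly the
// idle-when-nothing-to-do behaviour we wanted.
class BlockingQueueSketch<T> {
    private final Queue<T> items = new ArrayDeque<>();

    public synchronized void put(T item) {
        items.add(item);
        notify();                 // wake one waiting consumer
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) { // loop guards against spurious wakeups
            wait();               // consumer is idle here
        }
        return items.remove();
    }
}
```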
I have a multithreaded application written in C#. I noticed that implementing thread synchronization with lock(this) slows down the application by 20%. Is that expected behavior, or should I look into the implementation more closely?
Locking does add some overhead; that can't be avoided. It is also very likely that some of your threads will now be waiting for resources to be released, rather than just grabbing them whenever they like. If you implemented thread synchronization correctly, that is a good thing.
But in general, your question can't be answered without intimate knowledge of the application. A 20% slowdown might be OK, but you might be locking too broadly, and then the program would (in general) be slower.
Also, please don't use lock(this). If your instance is passed around and someone else locks on the reference, you can end up with a deadlock. Best practice is to lock on a private object that no one else can access.
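For example, sketched in Java syntax since the pattern is identical there (synchronized (obj) plays the role of C#'s lock(obj); the Account class is made up for illustration):

```java
// The private-lock pattern: callers outside the class cannot lock on
// the same object your methods lock on, so they cannot deadlock you.
class Account {
    private final Object lock = new Object(); // nobody outside can lock on this

    private long balance;

    public void deposit(long amount) {
        synchronized (lock) { // instead of synchronized (this) / lock(this)
            balance += amount;
        }
    }
}
```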
Depending on how coarse- or fine-grained your lock() statements are, you can indeed impact the performance of your MT app. Only lock things you really know need to be locked.
Any synchronization will slow down multithreading.
That being said, lock(this) is really never a good idea. Whenever possible, you should lock on a private object used for nothing but synchronization.
Make sure to keep your locking to a minimum, and only hold the lock for as short a time as possible. This will help keep the "slowdown" to a minimum.
There are performance counters you can monitor in Windows to see how much time your application spends contending for locks.
What is the best algorithm to use for scheduling an application that will support 10K concurrent threads with heavy I/O but low CPU usage? Links to papers are appreciated.
Why wouldn't you use SCHED_RR? You said it yourself: low CPU usage. You could even nice the process when you expect to do some heavy I/O so you're scheduled less often than other processes.
In general, though, why not let the OS do what it's best at, and just worry about writing efficient code? The OS will know you're doing a blocking I/O call and will put your thread/task in a waitqueue and select another task to run. You don't need to worry about those details.
Actually, I believe no scheduling mechanism will handle this number of threads flawlessly, as the management tables in the kernel will become quite large.
If possible, I'd suggest rewriting the app to use asynchronous I/O, select() or something similar on the OS of your choice.
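In Java, for example, that would mean something like NIO's Selector; here is a minimal sketch, with the port number and buffer handling made up for illustration:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Sketch: one thread multiplexing many connections via readiness
// events instead of parking 10K blocked threads.
public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000)); // port is arbitrary
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove(); // the selected-key set is not cleared for you
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    if (client.read(buf) < 0) {
                        client.close(); // peer hung up
                    }
                    // ...otherwise hand the buffer off for processing
                }
            }
        }
    }
}
```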
You will likely want SCHED_RR for this. You might be interested in reading this question regarding the difference between SCHED_FIFO and SCHED_RR.
Your problem is more related to I/O scheduling than thread scheduling. The Linux kernel offers various I/O scheduler implementations. You can find a good article on this subject in this edition of LWN.
As grover suggested, you could also use thread-pooling mechanisms, which are less resource-intensive and will serve your purpose at least to some reasonable extent, if not fully.
Is there an advantage to the operating system understanding the characteristics of how a thread will be used? For example, what if there were a way in Java, when creating a new thread, to indicate that it would be used for intensive CPU calculations versus blocking on I/O? Wouldn't thread scheduling improve if this were a capability?
I'm not sure what you're actually expecting the OS to do with the information that a thread is I/O-bound or compute-bound. The things which actually make the most difference to how threads get scheduled (i.e. thread priority and thread CPU affinity) are already exposed by APIs (and support for NUMA aspects is starting to appear in mainstream OS APIs too).
If by a "compute thread" you mean something doing background processing that is less important than a GUI thread (from the point of view of maintaining app responsiveness), probably the most useful thing you can do is lower the priority of the compute threads a little.
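In Java that is a one-liner; a sketch, with the thread name and workload made up:

```java
// Nudging a background compute thread below normal priority so the
// UI/event threads win ties on a busy machine. How much effect this
// has depends entirely on the OS scheduler's mapping of priorities.
Runnable work = () -> { /* long-running computation */ };
Thread compute = new Thread(work, "compute-worker");
compute.setPriority(Thread.NORM_PRIORITY - 1); // 4 on Java's 1..10 scale
compute.setDaemon(true);
compute.start();
```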
That's what OS processes do. The OS has sophisticated scheduling for the processes. The OS tracks I/O use and CPU use and dynamically adjusts priorities so that CPU-intensive processing doesn't interfere with I/O.
If you want those features, use a proper OS process.
Is that even necessary? Threads blocking on I/O will cause CPU-intensive threads to run. The operating system decides how to schedule threads. AFAIK there's no way to give any hints with Java.
Yes, it is very important to understand them, especially if you are one of those architects who like opening lots of threads, especially on Windows.
Jeff Richter over at Wintellect has a library called PowerThreading. It is very useful if you are developing applications on .NET, but since you are talking about Java, it is still better to understand OS threads, kernel models, and how interrupts work.
I once was asked to increase thread priority to fix a problem. I refused, saying that changing it was dangerous and was not the root cause of the problem.
My question is, under what circumstances should I consider changing the priority of threads?
When you've made a list of the threads you're using and defined a priority order for them which makes sense in terms of the work they do.
If you nudge threads up here and there in order to bodge your way out of a problem, eventually they'll all be high priority and you're back where you started. Don't assume you can fix a race condition with prioritisation when really it needs locking, because chances are you've only fixed it in friendly conditions. There may still be cases where it can fail, such as when the lower-priority thread has undergone priority inheritance because another high-priority thread is waiting on another lock it's holding.
If you classify threads along the lines of "these threads fill the audio buffer", "these threads make my app responsive to system events", "these threads make my app responsive to the user", "these threads are getting on with some business and will report when they're good and ready", then the threads ought to be prioritised accordingly.
Finally, it depends on the OS. If thread priority is completely secondary to process priority, then it shouldn't be "dangerous" to prioritise threads: the only thing you can starve of CPU is yourself. But if your high-priority threads run in preference to the normal-priority threads of other, unrelated applications, then you have a broader responsibility. You should only be raising priorities of threads which do small amounts of urgent work. The definition of "small" depends what kind of device you're on - with a 3GHz multi-core processor you get away with a lot, but a mobile device might have pseudo real-time expectations that user-level apps can break.
Keeping the audio buffer serviced is the canonical example of when to be high priority, though, since small under-runs usually cause nasty crackling. Long downloads (or other slow I/O) are the canonical example of when to be low priority, since there's no urgency in processing this chunk of data if the next one won't be along for ages anyway. If you're ever writing a device driver you'll need to make more complex decisions about how to play nicely with others.
Not many. The only time I've ever had to change thread priorities in a positive direction was with a user interface thread. UIs must be extremely snappy for the app to feel right, so it is often best to prioritize painting threads higher than others. For example, the Swing Event Dispatch Thread runs at priority 6 by default (one higher than Thread.NORM_PRIORITY, which is 5).
I do push threads down in priority quite a bit. Again, this is usually to keep the UI responsive while some long-running background process does its thing. However, this also sometimes applies to polling daemons and the like, which I know I don't want interfering with anything, no matter how minimal the interference.
Our app uses a background thread to download data and we didn't want that interfering with the UI thread on single-core machines, so we deliberately prioritized that lower.
I think it depends on the direction you're looking at changing the priority.
Normally you shouldn't ever increase thread priority unless you have a very good reason. Increasing thread priority can cause your app's thread to start taking away time from other applications, which probably isn't what the user wants. If your thread is using up a significant amount of CPU it can make the machine hard to use, as some standard UI threads may start to starve.
I'd say the only time you should increase priority above normal is when the user explicitly tells your app to do so, and even then you want to prevent "clueless" users from doing it. Maybe if your app doesn't use much CPU normally but has brief bursts of really important activity, an increased priority could be OK, as it wouldn't normally detract from the user's general experience.
Decreasing priority is another matter. If your app is doing something that takes a LOT of CPU and runs for a long time, yet isn't critical, then lowering the priority can be good. By lowering the priority you allow the CPU to be used for other things when it's needed, which helps keep the system responding quickly. As long as the system is mostly idling other than your app, you'll still get most of the CPU time, but you won't take away from tasks that need it more than you. An example of this would be a thread that indexes the hard drive (think Google Desktop).
I would say when your original design assumptions about the threads are no longer valid.
Thread priority is mostly a design decision about what work is most important. So, for some examples of when to reconsider: if you add a new feature that requires its own thread and that thread is more important, reconsider thread priorities. If requirements change in a way that alters the relative priority of the work you are doing, reconsider. Or, if performance testing shows that the "high-priority work" specified in your design does not get the required performance, tweak priorities.
Otherwise, it's often a hack.