Theoretical questions about Linux nanosleep and processes - linux

I have two theoretical questions related to Linux system programming in C, about nanosleep and process destruction.
So, the first one:
Is it possible to produce 90-97% CPU load just by using nanosleep? For example, consider a for loop that iterates 50 times in a child process, with a 1-second delay per iteration obtained with nanosleep. What I observed on a devboard running Debian Linux is that after somewhere between 15 and 20 iterations, nanosleep blocks and the CPU load is 90% (I used top to read the value).
The second question is related to the first one. With the same code,
a for loop running 50 times in a child process, I observed that when nanosleep blocks (freezes) at 90% CPU load, the child process becomes a zombie.
Is there a kernel mechanism that tries to kill a process that is using too much CPU?
Again, sorry that I can't post the code, it's not mine. But I found these two cases curious and couldn't find anything about them on the Internet, or maybe I didn't know how to search. I just want to know, theoretically, whether it's possible to have 90% CPU load just from using nanosleep, and whether the kernel has a safety mechanism that kills processes that use too much CPU.
I'm interested in opinions about these cases, and maybe in recommended alternative functions.
PS: Please don't ask for the source code in comments; this question is purely theoretical.

I'm not an expert, but I assume that whether your kernel kills a process for consuming too many resources depends solely on the distro you're using.
About the CPU use: theoretically you could continuously put processes onto the CPU and immediately sleep them. In that case the OS keeps dispatching processes between the CPU and the wait queue (overhead), and depending on the scheduling policy your distro uses (round robin, queues with aging, etc.; I can't remember right now where you set this parameter) it could eventually starve other processes.
Anyway, this is not really a C question, just an OS one.

Your question makes no sense (sorry).
If nanosleep blocks, it won't use any CPU, because it will be blocking, i.e. waiting in the kernel for something else to happen. That's what blocking means.
For a process to be using 100% CPU, it must be busy waiting.
If it's busy-waiting and calling nanosleep, we can conclude that each call to nanosleep is of very short duration.
An alternative explanation is it's using a large amount of CPU doing something else, and only very occasionally calling nanosleep, or there is more than one thread running, and a thread other than the one calling nanosleep is using lots of CPU.
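For reference, here is a minimal sketch (not the asker's code) of the loop described in the question: 50 iterations, one second of sleep each. Written this way it should sit at roughly 0% CPU; if nanosleep instead fails immediately (for example EINVAL from an out-of-range tv_nsec), the very same loop degenerates into a busy loop, which would match the symptoms described.

    /* Minimal sketch: 50 iterations, 1 second of sleep each.
     * Run it in a child after fork() to mirror the setup in the question. */
    #include <errno.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        for (int i = 0; i < 50; i++) {
            struct timespec req = { .tv_sec = 1, .tv_nsec = 0 };
            struct timespec rem;

            /* Retry if a signal interrupts the sleep before the full second. */
            while (nanosleep(&req, &rem) == -1) {
                if (errno != EINTR) {
                    perror("nanosleep");  /* EINVAL here would explain a busy loop */
                    return 1;
                }
                req = rem;
            }
            printf("iteration %d done\n", i);
        }
        return 0;
    }

Run under top, a correct loop like this shows up near 0% CPU, which is the point above: a high load means either the sleeps are extremely short, they fail immediately, or something else in the process is spinning.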

Related

Why processes are deprived of CPU for TOO long while busy looping in Linux kernel?

At first glance, my question might look bit trivial. Please bear with me and read completely.
I have identified a busy loop in my Linux kernel module. Because of it, other processes (e.g. sshd) do not get CPU time for long spans of time (around 20 seconds). This is understandable, since my machine has only a single CPU and the busy loop gives other processes no chance to be scheduled.
Just to experiment, I added schedule() after each iteration of the busy loop. Even though this keeps the CPU busy, it should still let other processes run since I am calling schedule(). But that doesn't seem to be happening. My user-level processes still hang for long spans of time (20 seconds).
In this case, the kernel thread has a nice value of -5 and the user-level threads have a nice value of 0. Even with the lower priority of the user-level threads, I think 20 seconds is too long to go without CPU.
Can someone please explain why this could be happening?
Note: I know how to remove busy loop completely. But, I want to understand the behaviour of kernel here. Kernel version is 2.6.18 and kernel pre-emption is disabled.
The schedule() function simply invokes the scheduler - it doesn't take any special measures to arrange that the calling thread will be replaced by a different one. If the current thread is still the highest priority one on the run queue then it will be selected by the scheduler once again.
It sounds as if your kernel thread is doing very little work in its busy loop and it's calling schedule() every time round. Therefore, it's probably not using much CPU time itself and hence doesn't have its priority reduced much. Negative nice values carry heavier weight than positives, so the difference between a -5 and a 0 is quite pronounced. The combination of these two effects means I'm not too surprised that user space processes miss out.
As an experiment you could try calling the scheduler every Nth iteration of the loop (you'll have to experiment to find a good value of N for your platform) and see if the situation improves; calling schedule() too often just wastes lots of CPU time in the scheduler. Of course, this is just an experiment: as you have already pointed out, avoiding busy loops is the correct option in production code, and if you want to be sure your thread is replaced by another, set it to TASK_INTERRUPTIBLE before calling schedule() so it removes itself from the run queue (as has already been mentioned in comments).
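As a purely illustrative sketch (2.6.x-era kernel style, not the asker's module; the counter increment stands in for whatever the real loop does), the experiment might look like this:

    #include <linux/kthread.h>
    #include <linux/sched.h>

    #define YIELD_EVERY 1000   /* "N": tune experimentally for your platform */

    static int busy_thread(void *unused)
    {
        volatile unsigned long work = 0;
        unsigned long i = 0;

        while (!kthread_should_stop()) {
            work++;   /* stand-in for the module's real busy-loop work */

            /* Invoke the scheduler only every Nth iteration; calling it on
             * every pass just burns CPU inside the scheduler itself. */
            if (++i % YIELD_EVERY == 0)
                schedule();

            /* To guarantee another task gets to run, sleep instead of merely
             * yielding:
             *     set_current_state(TASK_INTERRUPTIBLE);
             *     schedule_timeout(1);
             */
        }
        return 0;
    }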
Note that your kernel (2.6.18) is using the O(1) scheduler, which existed until the Completely Fair Scheduler was added in 2.6.23 (the O(1) scheduler itself having been added in 2.6 to replace the even older O(n) scheduler). CFS doesn't use the same run-queue structure and works in a different way, so you might well see different behaviour. I'm less familiar with it, however, so I wouldn't like to predict exactly what differences you'd see. I've seen enough of it to know that "completely fair" isn't the term I'd use on heavily loaded SMP systems with a large number of both cores and processes, but I also accept that writing a scheduler is a very tricky task, it's far from the worst I've seen, and I've never had a significant problem with it on a 4-8 core desktop machine.

How reliable is pstack as a profiling tool?

I've been using pstack (called periodically in a loop) as a substitute for a real profiling tool. I've noticed that even though top shows more than 85% CPU usage for that pid, pstack shows the pid blocked on I/O more often than CPU bound.
How's pstack implemented? Is there any reason why pstack would be more susceptible to attaching to the pid when it's actually blocked on I/O?
You say you're calling pstack periodically in a loop, i.e. in a separate process (B) from the one you are profiling (A). If they are running on a single core, then B is more likely to "wake up" when A is blocked.
Regardless, I would trigger pstack manually, on the theory that not many samples are needed. Rather, the samples I do get need to be scrutinized, not just lumped together.
In general, it's good to take samples during I/O time as well as CPU time, because both I/O and CPU wastage can make your program slow.
If it somewhat inflates one or the other, that's fairly harmless, assuming your real goal is to precisely identify things to optimize, rather than just get precise measurements of fuzzy things like functions.

multi-threading in fedora

I've written a multi-threaded program with pthreads. My CPU is dual core, but the program does not run in parallel. I've attached the system monitoring output below.
My question is: does Fedora 13 support multi-threading?
Your question is incomplete, so this answer may not be effective. I'll revise it with more information.
However, here are a few things you should be able to work out:
Are any threads waiting on each other?
Is there a deadlock between the threads, so that both are effectively sleeping?
Is there too much I/O involved? (Waiting on sockets, reads and writes to disk, and even heavy printf use all count.)
Do any of the threads have long sleeps (usleep, nanosleep, anyone...)?
If any of the above is true, the threads won't run in parallel even when a CPU is available, because the active thread has to wait until that backlog is cleared.
The second limitation of your question is the measurement. Your chart shows system-wide throughput. Even with one CPU, thread switching can be effectively invisible because threads switch within a few tens or hundreds of milliseconds, and if both of your threads run on the same CPU you can never see when they switched. In fact, the graph you are looking at is shared not only by your two threads but by all the processes running on the system.
But as I said, I can only be more helpful if you provide complete details.
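If it helps, here is a small self-contained sketch (not the asker's program) for checking that two CPU-bound pthreads really run in parallel on a dual-core machine: with working SMP scheduling, the wall-clock time should be roughly half of what a single thread takes.

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    static void *spin(void *arg)
    {
        volatile unsigned long sink = 0;
        unsigned long i;
        for (i = 0; i < 500000000UL; i++)
            sink += i;            /* pure CPU work: no I/O, no sleeping */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        pthread_create(&t1, NULL, spin, NULL);
        pthread_create(&t2, NULL, spin, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("elapsed: %.2f s\n",
               (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9);
        return 0;
    }

Build with gcc -pthread (older glibc may also need -lrt for clock_gettime). If the elapsed time is about the same as running a single spin() call, the threads are not actually running in parallel and one of the points above likely applies.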

How many threads can I spawn before efficiency drops?

Is there any formula, maybe involving RAM & number of CPUs, which can give me a rough idea of how many threads I can spawn before it starts to be inefficient and slows the PC?
I want to load test another machine, so I want to send requests as quickly as possible. But there's no point in spawning a million threads if they will just get in each other's way.
Edit: The threads are making Remote Procedure Calls (SOAP), so will be blocking waiting for the call to return.
It depends on what the threads are doing. If they're doing calculations, then the number will be lower. If they're waiting on I/O, then you can have more.
However, if they're waiting on I/O then you may be able to achieve the same result better with async I/O APIs than with multiple threads.
If all threads are active and not blocking waiting for something then basically one thread per CPU (core really). Any more than that and you're relying on the operating system to context switch between the threads on a given CPU.
But it all depends on what the threads are doing. If they're sleeping most of the time or waiting on asynchronous IO operations, then you mostly just need to worry about the memory used for the stack which defaults to about 1MB per thread I believe.
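As a rough illustration of the "one thread per core" rule of thumb for CPU-bound work (the worker body and its queue are hypothetical placeholders, not from the question), you could size the pool from the number of online cores:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MAX_WORKERS 64

    static void *worker(void *arg)
    {
        /* ... pull requests from a shared queue and process them ... */
        return NULL;
    }

    int main(void)
    {
        long ncores = sysconf(_SC_NPROCESSORS_ONLN);
        long nthreads = (ncores < 1) ? 1 : (ncores > MAX_WORKERS ? MAX_WORKERS : ncores);
        pthread_t threads[MAX_WORKERS];
        long i;

        for (i = 0; i < nthreads; i++)
            pthread_create(&threads[i], NULL, worker, NULL);
        for (i = 0; i < nthreads; i++)
            pthread_join(threads[i], NULL);

        printf("ran %ld worker threads on %ld online cores\n", nthreads, ncores);
        return 0;
    }

For threads that mostly block on remote calls, as in the question, you would scale this figure up by some factor rather than use it directly.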
The other answers are of course correct; "it depends". If the threads are busy doing CPU-intensive work, there's no point having more than the number of cores available. But assuming they are waiting on external results, it can vary widely.
I often find that this question is answered by the architecture and requirements of an application; you need as many threads as you need.
But if you potentially have an unlimited number of threads you might end up spawning, I think that probably sounds like a task for the ThreadPool myself; let it decide how many threads to actually have running.
First of all, starting a thread can be quite a slow operation in itself. When you start a thread, stack space must be allocated, entry points in DLLs may be called, and so on. If you have far more threads than available cores, the majority of your threads will not be running at any given moment, i.e. they use resources and contribute nothing.
It is hard to give an exact number of threads for optimal performance, because it depends on what the threads are doing, but generally you shouldn't go far above the number of available cores. Keep in mind that you cannot have more threads running simultaneously than the number of available cores.

If 256 threads give better performance than 8 have I likely got the wrong approach?

I've just started programming with POSIX threads on a dual-core x86_64 Linux system. It seems that 256 threads is about the optimum for performance the way I've done it. I'm wondering how this could be, and whether it means that my approach is wrong and a better approach would require far fewer threads and be just as fast or faster.
For further background (the program in question is a skeleton for a multi-threaded M-set image generator) see the following questions I've asked already:
Using threads, how should I deal with something which ideally should happen in sequential order?
How can my threaded image generating app get it’s data to the gui?
Perhaps I should mention that the skeleton (in which I've reproduced minimal functionality for testing and comparison) is now displaying the image, and the actual calculations are done almost twice as fast as the non-threaded program.
So if 256 threads running faster than 8 threads is not indicative of a poor approach to threading, why do 256 threads outperform 8 threads?
The speed test case is a portion of the Mandelbrot Set located at:
xmin -0.76243636067708333333333328
xmax -0.7624335575810185185185186
ymax 0.077996663411458333333333929
calculated to a maximum of 30000 iterations.
With the non-threaded version, rendering time on my system is around 15 seconds. With the threaded version, the average time for 8 threads is 7.8 seconds, while for 256 threads it is 7.6 seconds.
Well, probably yes, you're doing something wrong.
However, there are circumstances where 256 threads would run better than 8 without you necessarily having a bad threading model. One must remember that having 8 threads does not mean all 8 threads are actually running all the time. Anytime one thread makes a blocking syscall to the operating system, the thread will stop running and wait for the result. In the meantime, another thread can often do work.
There's this myth that one can't usefully use more threads than contexts on the CPU, but that's just not true. If your threads block on a syscall, it can be critical to have another thread available to do more work. (In practice when threads block there tends to be less work to do, but this is not always the case.)
It's all very dependent on workload and there's no one right number of threads for any particular application. Generally you never want fewer threads available than the OS will run, and that's the only true rule. (Unfortunately this can be very hard to find out, so people tend to just fire up as many threads as there are contexts and then use non-blocking syscalls where possible.)
Could it be that your app is I/O bound? How is the image data generated?
A performance improvement gained by allocating more threads than cores suggests that the CPU is not the bottleneck. If I/O access such as disk, memory or even network access is involved, your results make perfect sense.
You are probably benefitting from Simultaneous Multithreading (SMT). Your operating system schedules more threads than cores available, and will swap in and out the threads that are not stalled waiting for resources (such as a memory load). This can very effectively hide the latencies of your memory system from your program and is the technique used to great effect for massive parallelization in CUDA for general purpose GPU programming.
If you are seeing a performance increase with the jump to 256 threads, then what you are probably dealing with is a resource bottleneck. At some point, your code is waiting for some slow device (a hard disk or a network connection, for example) in order to continue. With multiple threads, waiting on this slow device isn't a problem because instead of sitting idle and twiddling its electronic thumbs, the CPU can process another thread while the first thread is waiting on the slow device. The more parallel threads that are running, the more work the CPU can do while it is waiting on something else.
If you are seeing performance improve all the way up to 256 threads, I am tempted to say that you have a major performance bottleneck somewhere and it's not the CPU. To test this, try to see if you can measure the idle time of individual threads. I suspect that you will see your threads are stuck in a "blocked" or "waiting" state for a longer portion of their lifetime than they spend in the "running" or "active" state. Some debuggers or function profiling tools will let you do this, and I think there are also Linux tools to do this on the command line.
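One portable way to do that measurement from inside the program (a sketch, assuming you can add instrumentation to each worker; the placeholder workload is hypothetical) is to compare a thread's CPU-time clock against the wall clock:

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    static double now(clockid_t clk)
    {
        struct timespec ts;
        clock_gettime(clk, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static void *worker(void *arg)
    {
        clockid_t cpu_clock;
        pthread_getcpuclockid(pthread_self(), &cpu_clock);

        double wall_start = now(CLOCK_MONOTONIC);
        double cpu_start  = now(cpu_clock);

        /* Placeholder workload: some busy work followed by a blocking sleep. */
        volatile unsigned long sink = 0;
        unsigned long i;
        for (i = 0; i < 100000000UL; i++)
            sink += i;
        struct timespec nap = { 0, 200 * 1000 * 1000 };  /* 200 ms blocked */
        nanosleep(&nap, NULL);

        double wall = now(CLOCK_MONOTONIC) - wall_start;
        double cpu  = now(cpu_clock) - cpu_start;
        printf("thread was on-CPU for %.1f%% of its wall time\n", 100.0 * cpu / wall);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        return 0;
    }

If the on-CPU percentage is low across your 8 threads, the bottleneck is the blocking resource rather than the CPU, which is exactly the situation where 256 threads can beat 8.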
