Thread pool: Simple example of the wait time and execution time used to determine the size of the pool - multithreading

I am trying to find a simple example of what exactly the wait time and execution time are when determining the size of a thread pool. According to Brian Goetz:
For tasks that may wait for I/O to complete -- for example, a task
that reads an HTTP request from a socket -- you will want to increase
the pool size beyond the number of available processors, because not
all threads will be working at all times. Using profiling, you can
estimate the ratio of waiting time (WT) to service time (ST) for a
typical request. If we call this ratio WT/ST, for an N-processor
system, you'll want to have approximately N*(1+WT/ST) threads to keep
the processors fully utilized.
I really didn't understand what he meant by input/output. Who is doing the I/O tasks?

Imagine a task that reads some data from disk. What actually happens:
1. Open the file.
2. Wait for the (spinning) disk to wake from sleep, position the head at the right spot, and for the desired blocks to pass underneath the head until all bytes arrive in a buffer.
3. Read from the buffer.
The whole task takes 0.1 s to complete. Of this 0.1 s, 10 percent is spent on steps 1 and 3 and the remaining 90 percent on step 2. So 0.01 s is "working time" and 0.09 s is "wait time" spent waiting for the disk.
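Plugging those numbers into Goetz's formula: WT/ST = 0.09/0.01 = 9, so on, say, a 4-processor machine you would want roughly N*(1+WT/ST) = 4*(1+9) = 40 threads to keep the processors busy while most tasks sit waiting for the disk.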

Related

How to create a CPU spike of less than 1 CPU with a bash command

At present I am creating a 1-CPU spike using the stress tool:
# stress --cpu 1 --timeout 5
stress: info: [1830] dispatching hogs: 1 cpu, 0 io, 0 vm, 0 hdd
stress: info: [1830] successful run completed in 5s
Now it's utilising 1 CPU; is there a way to utilise 100 millicpu, or 10% of a CPU?
A process is either running (100% of a core) or it's not (0%) at any given moment.
To get fractional CPU usage, you'd have to average over some time, with a process that spends some of its time in a sleep system call (or something else that blocks).
With a 10% duty cycle, like 1 millisecond running a loop and then calling nanosleep to sleep for 9 milliseconds, you can achieve 10% CPU usage if you look at the average load over a long enough interval. But at any given time, your task is either sleeping or running (or waiting to get scheduled onto a core if they're all busy when its sleep ends).
If you're writing a load test, you might want to have it use x86 _mm_pause() in a loop (or portably, Rust's std::hint::spin_loop) to save power. Otherwise just use an empty loop body.
Have the loop condition be while(now < end_time) { _mm_pause(); }, where you check the current time against an end timestamp you calculated earlier. You can either check the current time after waking from sleep, or increment a counter without checking whether your sleep went slightly longer than it should have, or use the clock_gettime() values you see along the way to try to maintain the right duty cycle if one sleep went longer than you wanted. (It generally won't go shorter unless you get woken by a signal instead.)
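For example, a minimal sketch of that duty-cycle approach in C (assuming Linux with clock_gettime/nanosleep and an x86 target for _mm_pause; the 1 ms spin / 9 ms sleep split is just one way to get 10%):

#include <immintrin.h>   /* _mm_pause */
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

int main(void) {
    const struct timespec sleep_9ms = { 0, 9 * 1000000 };  /* 9 ms */
    for (;;) {
        uint64_t end = now_ns() + 1000000;   /* spin for ~1 ms */
        while (now_ns() < end)
            _mm_pause();                     /* burn CPU, politely */
        nanosleep(&sleep_9ms, NULL);         /* sleep ~9 ms -> ~10% average load */
    }
}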
Related: How to calculate time for an asm delay loop on x86 linux? for hand-written asm to busy-wait until a deadline using rdtsc.
https://github.com/GaetanoCarlucci/CPULoadGenerator is a Python program that can generate fractional loads, presumably using sleeps and delay loops.

What could delay pthread_join() after threads have exited successfully?

My main thread creates 8 worker threads (on a machine with a 4 core, 8 thread CPU), and then waits for them to complete with pthread_join(). The threads all exit successfully, and the pthread_join() successfully completes. However, I log the times that the threads exit and the time that pthread_join() completes for the last thread; the threads all exit essentially simultaneously (not surprising -- they are servicing a queue of work to be done), and the pthread_join() sometimes takes quite a long time to complete -- I have seen times in excess of 15 minutes after the last worker thread has exited!
More information: The worker threads are all set at the highest allowable round-robin scheduling priority (SCHED_RR); I have tried setting the main thread (waiting on the pthread_join()s) to the same thing and have also tried setting it to the highest SCHED_FIFO priority (where so far I have only seen it take as long as 27 seconds to complete; more testing is needed). My test is very CPU and memory intensive and takes about 90 -- 100 minutes to complete; during that time it is generally using all 8 threads at close to 100% capacity, and fairly quickly gets to where it is using about 90% of the 256 GB of RAM. This is running on a Linux (Fedora) OS at run level 3 (so no graphics or Window Manager -- essentially just a terminal -- because at the usual run level 5, a process using that much memory gets killed by the system).
An earlier version that took closer to 4 hours to complete (I have since made some performance improvements...) and in which I did not bother explicitly setting the priority of the main thread once took over an hour and 20 minutes for the pthread_join() to complete. I mention it because I don't really think that the main thread priority should be much of an issue -- there is essentially nothing else happening on the machine, it is not even on the network.
As I mentioned, all the threads complete with EXIT_SUCCESS. And in lighter weight tests, where the processing is over in seconds, I see no such delay. And so I am left suspecting that this is a scheduler issue. I know very little about the scheduler, but informally the impression I have is that here is this thread that has been waiting on a pthread_join() for well over an hour; perhaps the scheduler eventually shuffles it off to a queue of "very unlikely to require any processing time" tasks, and only checks it rarely.
Okay, eventually it completes. But ultimately, to get my work done, I have to run about 1000 of these, and some are likely to take a great deal longer than the 90 minutes or so that the case I have been testing takes. So I have to worry that the pthread_join() in those cases might delay even longer, and with 1000 iterations, those delays are going to add up to real time...
Thanks in advance for any suggestions.
In response to Nate's excellent questions and suggestions:
I have used top to spy on the process when it is in this state; all I can report is that it is using minimal CPU (maybe an occasional 2%, compared to the usual 700 - 800% that top reports for 8 threads running flat out, modulo some contention for locked resources). I am aware that top has all kinds of options I haven't investigated, and will look into how to run it to display information about the state of the main thread. (I see: I can use the -H option, and look in the S column... will do.) It is definitely not a matter of all the memory being swapped out -- my code is very careful to stay below the limit of physical memory, and does some disk I/O of its own to save and restore information that can't fit in memory. As a result little to no virtual memory is in use at any time.
I don't like my theory about the scheduler either... It's just the best I have been able to come up with so far...
As far as how I am determining when things happen: The exiting code does:
time_t now;
time(&now);
printf("Thread exiting, %s", ctime(&now));
pthread_exit(EXIT_SUCCESS);
and then the main thread does:
for (int i = 0; i < WORKER_THREADS; i++)
{
pthread_join(threads[i], NULL);
}
time(&now);
printf("Last worker thread has exited, %s", ctime(&now));
I like the idea of printing something each time pthread_join() returns, to see if we're waiting for the first thread to complete, the last thread to complete, or one in the middle, and will make that change.
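A minimal sketch of that change, reusing the names from the snippet above (the message format is just illustrative):
for (int i = 0; i < WORKER_THREADS; i++)
{
    pthread_join(threads[i], NULL);
    time(&now);
    printf("Joined worker thread %d, %s", i, ctime(&now));
}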
A couple of other potentially relevant facts that have occurred to me since my original posting: I am using the GMP (GNU Multiprecision Arithmetic) library, which I can't really imagine matters; and I am also using a 3rd party (open source) library to create "canonical graphs," and that library, in order to be used in a multithreaded environment, does use some thread_local storage. I will have to dig into the particulars; still, it doesn't seem like cleaning that up should take any appreciable amount of time, especially without also using an appreciable amount of CPU.

Why is multi threading not faster on single core?

This question is not a duplicate of any question about why multithreading is not faster on a single core; read the rest to figure out what I actually want to know.
As far as I know, multithreading is only faster on a CPU with multiple cores, since each thread can run in parallel. However, based on my understanding of how preemption and multithreading work on a single core, it should also be faster there. The image below describes what I mean better. Consider that our app is a simple loop that takes exactly 4 seconds to execute. In this example the time slice is constant, but I don't think that makes any difference because, in the end, all threads with the same priority will get equal time from the scheduler. The first timeline is single-threaded, but the second one has 4 threads. The "cycle" marks the point where preemption ends and the scheduler goes back to the start of the queue of threads. I/O has also been removed since it just adds complexity, and even if it changes the results, let's assume I'm talking about code that does not require any sort of I/O.
The red threads are the ones belonging to my process, and the others (black) belong to other processes and apps.
There are a couple of questions here:
Why isn't it faster? What's wrong with my timeline?
What's that cycle point called?
Since the time slice is not fixed, does that mean the cycle time is fixed, or is the time slice calculated so that the cycle takes however long is needed to spend the calculated time slice on each thread?
Is the time slice based on time or on instructions? I mean, is it something like 0.1 s for each thread, or like 10 instructions for each thread?
CPU utilization is based on CPU time, so why isn't it always at 100%? When a thread's time slice expires, the scheduler moves on to the next thread, and if a thread blocks on I/O it does not wait but runs the next one, so the CPU always tries to find a thread to execute and minimize the time spent idle. Is the time spent on I/O so significant that more than 50% of CPU time is spent doing nothing, because all threads are waiting for something (mostly I/O) and the CPU time elapses waiting for a thread to become ready?
Note: This timeline is simplified; the time spent on I/O, thread creation, etc. is not counted, and it's assumed that the other threads do not finish before the end of the timeline and have the same priority/nice value as our process.

Creating a friendly timed busy loop for a hyperthread

Imagine I want to have one main thread and a helper thread run as the two hyperthreads on the same physical core (probably by forcing their affinity to approximately ensure this).
The main thread will be doing important high-IPC, CPU-bound work. The helper thread should do nothing other than periodically updating a shared timestamp value that the main thread will periodically read. The update frequency is configurable, but could be as fast as 100 MHz or more. Such fast updates more or less rule out a sleep-based approach, since blocking sleeps are too slow to sleep/wake on a 10 nanosecond (100 MHz) period.
So I want a busy wait. However, the busy wait should be as friendly as possible to the main thread: use as few execution resources as possible, and so add as little overhead as possible to the main thread.
I guess the idea would be a long-latency instruction that doesn't use many resources, like pause, and that also has a fixed and known latency. That would let us calibrate the "sleep" period so no clock read is even needed (if we want to update with period P and the instruction's latency is L, we just issue P/L of these instructions for a calibrated busy-sleep). Well, pause doesn't meet that latter criterion, as its latency varies a lot.[1]
A second option would be to use a long-latency instruction even if the latency is unknown, and after every instruction do an rdtsc or use some other clock-reading method (clock_gettime, etc.) to see how long we actually slept. It seems like this might slow down the main thread a lot, though.
Any better options?
[1] Also, pause has some specific semantics around preventing speculative memory accesses, which may or may not be beneficial to this sibling-thread scenario, since I'm not really in a spin-wait loop.
Some random musing on the subject.
So you want a timestamp at a 100 MHz sample rate; that means that on a 4 GHz CPU you have 40 cycles between each update.
The timer thread busily reads the time stamp counter (RDTSC?), but can't use the safe, serializing method with cpuid, as that takes ~100 cycles. The plain read has a latency of around 25 cycles (and a throughput of 1/25); there might be a slightly newer, slightly more accurate timer with slightly more latency (~32 cycles).
In C-ish terms (using __rdtsc() from x86intrin.h for the clock read; sample is the shared, 64-byte-aligned timestamp and cycles_between_samples the desired period):
uint64_t last = __rdtsc();
for (;;) {
    uint64_t now = __rdtsc();                  /* read time (~25 cycles)            */
    if (now - last < cycles_between_samples)   /* time - last (1 cycle)             */
        continue;                              /* not yet time for the next sample  */
    last += cycles_between_samples;            /* advance by the sample period      */
    sample = now;                              /* publish the time stamp            */
}
In a perfect world the branch predictor will guess right every time; in reality it will mispredict randomly, adding 5-14 cycles to the loop's ~26 cycles due to variance in the time-read cycles.
When the sample is written, the other thread will have its in-flight instructions cancelled starting from the first speculative load from this cache line (remember to align the sample position to 64 bytes so no other data is affected), and the load of the sample timestamp starts over after a delay of ~5-14 cycles depending on where the instructions come from: the loop buffer, the micro-op cache or the I-cache.
So a minimum of 5-14 out of every 40 cycles of performance will be lost, in addition to half the core being used by the other hyperthread.
On the other hand, reading the time stamp counter in the main thread would cost ~1/4 cycle of throughput, and the latency would most likely be hidden by other instructions. But then you can't vary the frequency. The long latency of ~25 cycles could be a problem unless some other long-latency instructions precede it.
Using a CAS instruction (lock cmpxchg?) might partly solve the problem, as the loads then shouldn't cause a reissue of the instructions, but it instead results in a delay on all following reads and writes.

How can I measure the queuing time of a process (CPU intensive) before it gets executed?

Actually I am trying to run some experiments where I need to run benchmarks under heavy load. Starting with CPU load, I schedule a sysbench daemon that generates 1000 primes. I set its priority to low so that it only runs once the CPU is not busy with other tasks, so as to reduce its impact on the regular workload. Since the priority of the process is set to low, the process keeps waiting in the queue until it finds a free CPU core to run on. The problem is that its result shows the execution time including the wait period (in the queue), which renders the result invalid.
Is there some way that I could actually calculate the wait period and subtract it from the result to get a valid result?
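One approach that may fit (an assumption about your setup, not something sysbench reports itself): on Linux, if the kernel exposes /proc/<pid>/schedstat (scheduler statistics enabled), its second field is the cumulative time the task has spent waiting on a runqueue, in nanoseconds. A minimal sketch that reads it for a given PID, so you can subtract the wait from the reported elapsed time:

#include <stdio.h>

int main(int argc, char **argv) {
    char path[64];
    unsigned long long on_cpu_ns, wait_ns, timeslices;

    /* Default to our own stats; pass the benchmark's PID as argv[1] otherwise. */
    snprintf(path, sizeof path, "/proc/%s/schedstat", argc > 1 ? argv[1] : "self");
    FILE *f = fopen(path, "r");
    if (!f || fscanf(f, "%llu %llu %llu", &on_cpu_ns, &wait_ns, &timeslices) != 3) {
        perror(path);
        return 1;
    }
    fclose(f);
    printf("on CPU: %.3f s, waiting on a runqueue: %.3f s\n",
           on_cpu_ns / 1e9, wait_ns / 1e9);
    return 0;
}

Note that /proc/<pid> disappears when the process exits, so you would sample it periodically while the benchmark runs, or just before it finishes.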
