I want to see the real change in process launch and execution after changing the nice value. When I give a process a lower nice value, what changes should I see?
$ ps -l | head -2
UID PID PPID F CPU PRI NI SZ RSS WCHAN S
501 25164 25144 4006 0 31 10 4280144 1584 - SN+
I executed
$ renice -6 25164
and got a new niceness (NI) of -6; it was 10 before:
$ ps -l | head -2
UID PID PPID F CPU PRI NI SZ RSS WCHAN S
501 25164 25144 4006 0 31 -6 4280144 1584 - S<+
So what changes should I see now? Should it increase processing speed, or will the launch time be shorter?
$ renice -6 pid
I want to see the change in process execution time now that it has a higher priority. What benefit will the user get?
You will only see a difference in execution time if the CPU is fully utilized, since niceness affects the priority of a process. So to benchmark a difference, you will need to run some other program that fully utilizes the CPU and then run the program you are benchmarking. Then change the niceness so that its priority is higher or lower than the other program's, and you will see a difference in execution time.
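A minimal sketch of such a benchmark, assuming a CPU hog like yes and pinning everything to one core with taskset so the two programs actually compete (the loop is just stand-in work; negative nice values need root):
$ taskset -c 0 nice -n 19 yes > /dev/null &   # background CPU hog on core 0, lowest priority
$ time taskset -c 0 nice -n -6 sh -c 'i=0; while [ $i -lt 5000000 ]; do i=$((i+1)); done'
$ time taskset -c 0 nice -n 19 sh -c 'i=0; while [ $i -lt 5000000 ]; do i=$((i+1)); done'
$ kill %1
With the core saturated, the -6 run should finish noticeably faster than the +19 run; on an otherwise idle core the two timings will be nearly identical.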
I have a simple mono-threaded application that does almost pure processing
It uses two int buffers of the same size
It reads one-by-one all the values of the first buffer
each value is a random index in the second buffer
It reads the value at the index in the second buffer
It sums all the values taken from the second buffer
It repeats all the previous steps for bigger and bigger buffer sizes
At the end, I print the number of voluntary and involuntary CPU context switches
If the buffers become quite big, my PC starts to slow down: why? I have 4 cores with hyper-threading, so 3 cores remain free; only one is 100% busy. Is it because my process uses almost 100% of the RAM bus bandwidth?
Then, I created a CPU-set that I want to dedicate to my process (my CPU-set contains both CPU-threads of the same core)
$ cat /sys/devices/system/cpu/cpu3/topology/core_id
3
$ cat /sys/devices/system/cpu/cpu7/topology/core_id
3
$ cset set -c 3,7 -s my_cpuset
$ cset set -l
cset:
Name CPUs-X MEMs-X Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
root 0-7 y 0 y 934 1 /
my_cpuset 3,7 n 0 n 0 0 /my_cpuset
It seems that absolutely no task at all is running on my CPU-set. I relaunch my process and, while it is running, I run:
$ taskset -c 7 ./TestCpuset # Here, I launch my process
...
$ ps -mo pid,tid,fname,user,psr -p 25244 # 25244 being the PID of my process
PID TID COMMAND USER PSR
25244 - TestCpus phil -
- 25244 - phil 7
PSR = 7: my process is indeed running on the expected CPU thread. I hope it is the only one running on it, but at the end my process displays:
Number of voluntary context switch: 2
Number of involuntary context switch: 1231
Since I had involuntary context switches, other processes must be running on my core: how is that possible? What must I do to get the number of involuntary context switches down to 0?
Last question: When my process is running, if I launch
$ cset set -l
cset:
Name CPUs-X MEMs-X Tasks Subs Path
------------ ---------- - ------- - ----- ---- ----------
root 0-7 y 0 y 1031 1 /
my_cpuset 3,7 n 0 n 0 0 /my_cpuset
Once again I get 0 tasks on my CPU-set. But I know that a process is running on it: does that mean a task is not the same thing as a process?
If the buffers become quite big, my PC starts to slow down: why? I have 4 cores with hyper-threading, so 3 cores remain free; only one is 100% busy. Is it because my process uses almost 100% of the RAM bus bandwidth?
You have reached the hardware performance limit of a single-threaded application, that is, 100% CPU time on the single CPU your program is allocated to. Your application thread will not run on more than one CPU at a time (reference).
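If you suspect memory bandwidth rather than raw CPU speed, one way to check (a sketch, assuming perf is installed, using the binary name from the question) is to look at cache behaviour and instructions per cycle:
$ perf stat -e cycles,instructions,cache-references,cache-misses ./TestCpuset
A low instructions-per-cycle ratio combined with a high cache-miss rate suggests the core is mostly waiting on memory, which would explain the slowdown as the buffers outgrow the CPU caches.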
What must I do to get the number of involuntary context switches down to 0?
Aren't you missing the --cpu_exclusive option in the cset set command?
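A sketch of how that might look with the commands from the question (option spellings may vary between cpuset versions; note also that taskset only pins the process to a CPU, it does not move it into the cpuset, which is why cset reports 0 tasks):
$ sudo cset set -c 3,7 -s my_cpuset --cpu_exclusive   # recreate the set with exclusive ownership of CPUs 3,7
$ sudo cset proc -m -p 25244 -t my_cpuset             # move the already-running PID into the set
$ cset set -l                                         # the Tasks column for my_cpuset should now be non-zero
Even then, per-CPU kernel threads (ksoftirqd, kworker, ...) stay bound to their CPU, so a few involuntary context switches will usually remain.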
By the way, if you want to achieve a lower execution time, I suggest you make the application multithreaded and let the operating system and the hardware beneath it parallelize execution instead. Locking a process to a CPU set and preventing it from context-switching might degrade overall operating system performance and is not a portable solution.
I want to gather how many threads each process uses (from its PID/status, I guess), then compare them and output the biggest number. For example, I want to gather the thread count of every chromium process, compare the numbers, and output the max. Any ideas?
E.g.
2131 Threads : 20 , 2341 Threads : 10 , 2200 Threads : 5
Max Threads = 20
In Ubuntu you can get the number of threads for a process using this command:
ps -o nlwp `pgrep process_name`
nlwp stands for "number of light-weight processes".
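To get the per-process counts and the maximum across all chromium processes, as asked above, one possible sketch (the process name is an assumption; adjust it to chromium-browser or chrome if needed):
$ for pid in $(pgrep chromium); do echo "$pid Threads : $(ps -o nlwp= -p $pid)"; done
$ ps -o nlwp= -p "$(pgrep -d, chromium)" | sort -n | tail -1   # the maximum thread count
The "=" after nlwp suppresses the column header so the numbers can be sorted directly.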
I have a Dell PD2950 (2x4 cores) server running Ubuntu Server 12.04 LTS, with a VLC encoder instance running on it. Recently I updated the VLC script (VLM) to increase quality, which also increases CPU utilization, so I started tuning the script to avoid exceeding the maximum. I use top to monitor CPU utilization. I found that the load average is higher than 100% (I have 8 cores in total, so 8.00 is 100%), yet 20-35% of the CPU is still idle, like:
top - 21:41:19 up 2 days, 17:15, 1 user, load average: 9.20, 9.65, 8.80
Tasks: 148 total, 1 running, 147 sleeping, 0 stopped, 0 zombie
Cpu(s): 32.8%us, 0.7%sy, 29.7%ni, 36.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1982680k total, 1735672k used, 247008k free, 126284k buffers
Swap: 0k total, 0k used, 0k free, 774228k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
9715 wilson RT 0 2572m 649m 13m S 499 33.5 13914:44 vlc
11663 wilson 20 0 17344 1328 964 R 2 0.1 0:02.00 top
1 root 20 0 24332 2264 1332 S 0 0.1 0:01.06 init
2 root 20 0 0 0 0 S 0 0.0 0:00.09 kthreadd
3 root 20 0 0 0 0 S 0 0.0 0:27.05 ksoftirqd/0
4 root 20 0 0 0 0 S 0 0.0 0:00.00 kworker/0:0
5 root 0 -20 0 0 0 S 0 0.0 0:00.00 kworker/0:0H
To confirm my CPUs don't have Hyper-Threading, I tried:
wilson@server:/$ nproc
8
And to reduce the sampling deviation caused by the refresh interval, I also tried:
wilson@server:/$ top -d 0.1
I watched the %id number for a long time; it never went below 14.
I also tried:
wilson@server:/$ uptime
21:57:20 up 2 days, 17:31, 1 user, load average: 9.03, 9.12, 9.35
The 1-minute load average often reaches 14-15. So I'm wondering: what's wrong with my system? Has anyone ever had this problem?
More information:
I'm using VLC with the x264 codec to encode a live HTTP stream (application/octet-stream). It uses ffmpeg (libavc) to decode, and outputs Apple HLS (.ts segments). I found this problem after I added arguments for x264:
level=41,ref=5,b-adapt=2,direct=auto,me=umh,subq=8,rc-lookahead=60,analyse=all
This is almost equivalent to preset=slower. And as you can see, my VLC is running at real-time priority; it was set with:
wilson@server:/$ chrt -p -f 99 vlc-wrapper
There does not appear to be anything wrong with your system. What is wrong seems to be your understanding of CPU accounting. In particular, load average has nearly nothing to do with CPU usage. Load average is based on the number of processes that are ready to run, i.e. that would run if a CPU were available for them, as opposed to processes waiting on network traffic, keyboard input, and so on (on Linux, tasks blocked in uninterruptible sleep, typically disk I/O, also count). While it's true that on an 8-core system, if all 8 cores are 100% busy with a single CPU-bound thread each, your load average should be around 8.00, it is entirely possible to have a load average of 200.0 with near-0% CPU utilization. All that would indicate is that you have 200 processes ready to run, but as soon as they get scheduled, they do almost nothing before going back to waiting for input of some sort.
Your top output shows that vlc seems to be using roughly the equivalent of 5 of your cores, but it doesn't indicate whether you have 5 cores at 100% each or all 8 cores at 62.5% each. All of the other processes listed by top also contribute to your load average as well as to CPU usage. In particular, top running with a short delay like your 0.1 seconds will probably increase your load average by almost 1 all by itself, even though it isn't using much CPU time overall.
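To see which of those it is, a quick check with top itself (procps top; the PID is the vlc process from the output above) is:
$ top              # press '1' to show one summary line per CPU, 'H' to list threads instead of processes
$ top -H -p 9715   # thread view of vlc only: shows how its ~499% is spread over individual threads
If some CPU lines sit at 100% while others are largely idle, vlc's threads are not spreading evenly across the cores.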
Read this:
Understanding load average vs. cpu usage
If the load average is at 7, with 4 hyper-threaded processors, shouldn't that mean that the CPU is working at about 7/8 capacity?
No, it just means that you have 7 runnable processes in the job queue on average.
But I think we can't use the load average as a reference number to determine whether the system is overloaded or not. So I wonder whether there is a kernel-level CPU utilization statistics tool (kernel-level so as to reduce the performance overhead of measuring)?
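The kernel already exposes low-overhead per-CPU counters in /proc/stat, which is exactly what top and similar tools parse; a sketch of using them directly (mpstat is from the sysstat package):
$ grep '^cpu' /proc/stat   # one aggregate line plus one cpuN line per core, in jiffies
$ mpstat -P ALL 1          # per-core %usr/%sys/%idle once per second
Comparing two snapshots of /proc/stat gives utilization without any per-process accounting, and mpstat shows directly whether some cores are saturated while others are idle, independently of the load average.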
When I read a process's /proc/[pid]/stat file, I get the following output:
15465 (out1) S 15290 15465 15290 34817 15465 4202496 185 0 0 0 0 0 0 0 20 0 1 0 1505506 4263936 89 18446744073709551615 4194304 4196524 140733951429456 140733951428040 139957189597360 0 0 0 0 18446744071582981369 0 0 17 1 0 0 0 0 0 6295080 6295608 23592960 140733951431498 140733951431506 140733951431506 140733951434736 0
i.e. there are 52 fields, whereas man proc documents only around 44.
Why is this extra information there?
Can anyone please elaborate? I am working on Ubuntu 12.04; the kernel is 3.5.0-40-generic.
Very good documentation of the stat file content is available in the Linux /proc filesystem documentation. Newer kernels append extra fields at the end of the line, so a 3.5 kernel exposes fields (such as arg_start through exit_code) that the man page shipped with Ubuntu 12.04 does not yet document:
Table 1-4: Contents of the stat files (as of 2.6.30-rc7)
..............................................................................
Field Content
pid process id
tcomm filename of the executable
state state (R is running, S is sleeping, D is sleeping in an
uninterruptible wait, Z is zombie, T is traced or stopped)
ppid process id of the parent process
pgrp pgrp of the process
sid session id
tty_nr tty the process uses
tty_pgrp pgrp of the tty
flags task flags
min_flt number of minor faults
cmin_flt number of minor faults with child's
maj_flt number of major faults
cmaj_flt number of major faults with child's
utime user mode jiffies
stime kernel mode jiffies
cutime user mode jiffies with child's
cstime kernel mode jiffies with child's
priority priority level
nice nice level
num_threads number of threads
it_real_value (obsolete, always 0)
start_time time the process started after system boot
vsize virtual memory size
rss resident set memory size
rsslim current limit in bytes on the rss
start_code address above which program text can run
end_code address below which program text can run
start_stack address of the start of the main process stack
esp current value of ESP
eip current value of EIP
pending bitmap of pending signals
blocked bitmap of blocked signals
sigign bitmap of ignored signals
sigcatch bitmap of caught signals
0 (place holder, used to be the wchan address, use /proc/PID/wchan instead)
0 (place holder)
0 (place holder)
exit_signal signal to send to parent thread on exit
task_cpu which CPU the task is scheduled on
rt_priority realtime priority
policy scheduling policy (man sched_setscheduler)
blkio_ticks time spent waiting for block IO
gtime guest time of the task in jiffies
cgtime guest time of the task children in jiffies
start_data address above which program data+bss is placed
end_data address below which program data+bss is placed
start_brk address above which program heap can be expanded with brk()
arg_start address above which program command line is placed
arg_end address below which program command line is placed
env_start address above which program environment is placed
env_end address below which program environment is placed
exit_code the thread's exit_code in the form reported by the waitpid system call
..............................................................................
Source: Documentation/filesystems/proc.txt in the kernel source tree.
Here is the code that forms the system-wide /proc/stat contents (the per-process /proc/[pid]/stat line shown above is produced by do_task_stat() in fs/proc/array.c):
http://lxr.free-electrons.com/source/fs/proc/stat.c?v=3.5#L74
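Since the fields are space-separated and new ones are only appended at the end, individual values can be pulled out by position; a small sketch using the field numbers from the table above (field 19 is nice, field 20 is num_threads, with the PID from the question):
$ awk '{print "nice:", $19, "threads:", $20}' /proc/15465/stat
One caveat: the second field (tcomm) is printed in parentheses and may itself contain spaces, so robust parsers usually strip everything up to the closing ')' before splitting the remainder.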
Note on pri from ps man page:
"pri PRI priority of the process. Higher number means lower priority"
Consider PID 26073 here
$ renice +15 26073
26073: old priority 5, new priority 15 # I am making this process more nice
$ ps -t 1 -o pid,ppid,%cpu,stat,cmd,bsdstart,time,pri
PID PPID %CPU STAT CMD START TIME PRI
9115 18136 0.0 Ss bash 17:10 00:00:01 19
26073 9115 12.0 RN+ p4 sync 19:06 00:02:56 4
STAT = RN+, which means: running, low priority (nice to others), foreground. PRI = 4. (1)
$ sudo renice -10 26073
26073: old priority 15, new priority -10 # I am making this process less nice
$ ps -t 1 -o pid,ppid,%cpu,stat,cmd,bsdstart,time,pri
PID PPID %CPU STAT CMD START TIME PRI
9115 18136 0.0 Ss bash 17:10 00:00:01 19
26073 9115 12.0 S<+ p4 sync 19:06 00:03:15 29
STAT = S<+, which means: interruptible sleep, high priority (not nice to others), foreground. PRI = 29. (2)
In case 2 the process priority increased, or to say it another way, the process became higher priority.
But this contradicts the definition of pri from the man page (that a higher number means lower priority).
You are being confused by PRI (immediate priority) vs. NICE (the assigned priority). PRI often gets a boost (i.e. lower value) when a process is being restarted after blocking on I/O, and conversely is lowered (higher value) if it uses up its scheduler-assigned time slot without blocking, at least with the standard scheduler. Many systems have alternative schedulers with different behaviors, but in all cases PRI is the actual current priority that the scheduler has assigned; this value is influenced by, but not defined by, the assigned "niceness".
Reference on Linux's priority management here: http://oreilly.com/catalog/linuxkernel/chapter/ch10.html
Although I'm not an expert on the Linux scheduler, I do know that it 'punishes' CPU-bound processes and rewards I/O-bound processes (something most schedulers do to a greater or lesser extent). As explained above, this and other adjustments, along with the nice value, result in an internal priority setting within the scheduler. The fact that ps uses an inverse convention for the nice value and a non-inverse convention for the internal PRI value is somewhat confusing, but it makes sense.
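One way to watch the assigned niceness and the scheduler's current view side by side (a sketch; run the renice from a second terminal, and keep in mind that how much PRI moves on its own depends on the scheduler in use):
$ yes > /dev/null &                              # a CPU-bound background task
$ watch -n 1 "ps -o pid,ni,pri,stat,comm -p $!"  # NI is the assigned niceness, PRI the current priority
$ renice +10 -p <pid of yes>                     # from another terminal: NI changes at once, PRI follows
A mostly sleeping, I/O-bound process (an idle shell, for example) will generally be treated more favourably by the scheduler than a CPU hog with the same niceness.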