I ran top -H -p for a process, which gave me a few threads with their LWPs.
But when I sort the results with the smallest PID first, I notice that the time of the first thread stays constant while the times of the other threads keep changing. Why is TIME+ different?
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16989 root 20 0 106m 28m 2448 S 0.0 0.2 0:22.31 glusterfs
16990 root 20 0 106m 28m 2448 S 0.0 0.2 0:00.00 glusterfs
16992 root 20 0 106m 28m 2448 S 0.0 0.2 0:00.00 glusterfs
16993 root 20 0 106m 28m 2448 S 0.0 0.2 0:00.00 glusterfs
16997 root 20 0 106m 28m 2448 S 0.0 0.2 0:11.71 glusterfs
17010 root 20 0 106m 28m 2448 S 0.0 0.2 0:21.07 glusterfs
17061 root 20 0 106m 28m 2448 S 0.0 0.2 0:00.00 glusterfs
Why is TIME+ different?
Because different threads are doing different percentages of the work. There could be a number of reasons for this [1], but the most likely is that the application (glusterfs) is not attempting to distribute work evenly across the worker threads.
It is not something to worry about. It doesn't matter which thread does the work if the work level (see the %CPU) is negligible.
[1] If someone had the time and inclination, they could look at the source code of glusterfs to try to understand its behavior. However, I don't think the effort is warranted.
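If you want to see where the accumulated time comes from without top, you can read the per-thread CPU counters straight from /proc. A minimal sketch, assuming the PID from the question and thread names without embedded spaces:
# Print accumulated CPU time (user + system, in clock ticks) for each thread of PID 16989
for t in /proc/16989/task/*; do
    # fields 14 and 15 of .../stat are utime and stime; this breaks if the comm field contains spaces
    awk -v tid="${t##*/}" '{print tid, $14 + $15}' "$t/stat"
done
Threads whose counter never moves between two runs are the ones whose TIME+ stays constant in top.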
Because the TIME column refers to the CPU time consumed by a process; when a process's time does not change, it probably means the process is "sleeping" or simply waiting for another process to finish, but there could be many more reasons.
http://linux.about.com/od/commands/l/blcmdl1_top.htm
TIME:
Total CPU time the task has used since it started. If cumulative mode
is on, this also includes the CPU time used by the process's children
which have died. You can set cumulative mode with the S command line
option or toggle it with the interactive command S. The header line
will then be changed to CTIME.
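For example, to start a threads view with cumulative time enabled from the command line (using the -S option described above; exact behaviour may vary between procps versions):
top -H -S -p 16989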
Related
I notice some processes always have a VSZ of 0
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 19356 1400 ? Ss Jun13 0:00 /sbin/init
root 2 0.0 0.0 0 0 ? S Jun13 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S Jun13 0:00 [migration/0]
root 4 0.0 0.0 0 0 ? S Jun13 0:01 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S Jun13 0:00 [stopper/0]
root 6 0.0 0.0 0 0 ? S Jun13 0:03 [watchdog/0]
root 7 0.0 0.0 0 0 ? S Jun13 0:00 [migration/1]
How can I understand why they have 0 VSZ?
VSZ is the Virtual Memory Size. It includes all memory that the process can access, including memory that is swapped out, memory that is allocated, but not used, and memory that is from shared libraries.
So the ps output you shared, showing VSZ values of 0, means that those processes have no user-space virtual memory at all.
NOTE: They are kernel threads, and the usual memory statistics are irrelevant for them because they run entirely in kernel memory. To visualize kernel threads while top is running, press c and they will show up as the [bracketed] entries in the last column, COMMAND.
You can get more details on VSZ and learn about its counterpart RSS (Resident Set Size) from here.
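If you want to double-check that the zero-VSZ entries really are kernel threads, a rough sketch (assuming a standard /proc layout, run as root) is to list them with ps and then inspect /proc for one of them; a kernel thread has an empty cmdline and no user-space memory maps:
# List every task whose VSZ is 0
ps -eo pid,vsz,comm --sort vsz | awk '$2 == 0'
# For a kernel thread such as PID 2 (kthreadd), both of these come back empty:
cat /proc/2/cmdline | wc -c
wc -l /proc/2/maps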
Memory occupied by unknown (VMware/CentOS)
Hello.
We have a server whose memory is almost completely used, but we cannot find what is eating it.
Memory usage jumped a few days ago from about 40% to nearly 100% and has stayed there since.
We'd like to kill whatever is eating the memory.
[Env]
cat /etc/redhat-release
CentOS release 6.5 (Final)
# arch
x86_64
[status]
#free
total used free shared buffers cached
Mem: 16334148 15682368 651780 0 10168 398956
-/+ buffers/cache: 15273244 1060904
Swap: 8388600 129948 8258652
Result of top (some info is masked with ???)
#top -a
top - 10:19:14 up 49 days, 11:13, 1 user, load average: 1.05, 1.05, 1.10
Tasks: 145 total, 1 running, 143 sleeping, 0 stopped, 1 zombie
Cpu(s): 11.1%us, 18.4%sy, 0.0%ni, 69.5%id, 0.8%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 16334148k total, 15684824k used, 649324k free, 9988k buffers
Swap: 8388600k total, 129948k used, 8258652k free, 387824k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
17940 ??? 20 0 7461m 6.5g 6364 S 16.6 41.5 1113:27 java
4982 ??? 20 0 941m 531m 5756 S 2.7 3.3 611:22.48 java
3213 root 20 0 2057m 354m 2084 S 99.8 2.2 988:43.79 python
28270 ??? 20 0 835m 157m 5464 S 0.0 1.0 106:48.55 java
1648 root 20 0 197m 10m 1452 S 0.0 0.1 42:35.95 python
1200 root 20 0 246m 7452 808 S 0.0 0.0 2:37.42 rsyslogd
Processes that are using memory (some info is masked with ???)
# ps aux --sort rss
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1200 0.0 0.0 251968 7452 ? Sl Sep12 2:37 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
root 1648 0.0 0.0 202268 10604 ? Ss Sep12 42:36 /usr/lib64/???
??? 28270 0.1 0.9 855932 161092 ? Sl Sep14 106:49 /usr/java/???
root 3213 96.1 2.0 2107704 332932 ? Ssl Oct31 992:25 /usr/lib64/???
??? 4982 0.8 3.3 964096 544328 ? Sl Sep12 611:25 /usr/java/???
??? 17940 6.6 41.5 7649356 6781076 ? Sl Oct20 1113:49 /usr/java/???
Memory is almost 100% used, but with ps and top we can only find processes that account for about half of it.
We have checked the slab cache, but it was not the cause.
Slab is only 90444 kB.
Nothing was found in syslog either.
Does anyone have an idea how to detect what is eating the memory?
Thank you in advance.
Run free -m and look at the difference. The available column shows the real free memory.
And take a look at the https://www.linuxatemyram.com/
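If the used figure stays high even after subtracting buffers/cache, one way to narrow it down (a generic sketch, not specific to this server) is to compare the total RSS of all processes with the kernel-side consumers reported in /proc/meminfo; a large gap often points at shared memory, tmpfs, or kernel allocations:
# Sum resident memory of every process, in kB
ps -eo rss= | awk '{ sum += $1 } END { print sum " kB total RSS" }'
# Kernel-side and shared consumers that ps will never show
grep -E 'Shmem|Slab|SUnreclaim|PageTables|VmallocUsed' /proc/meminfo
# tmpfs mounts count as cache but cannot be reclaimed while files live there
df -h -t tmpfs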
We restarted the server and that resolved the case.
An important topic in software development / programming is assessing the size of the product and matching the application footprint to the system where it runs. One may need to optimize the product, and/or add more memory, use a faster processor, etc. In the case of virtual machines, it is important to make sure the application will work effectively, perhaps by making the VM memory size larger, or by allowing a product to obtain more resources from the hypervisor when needed and available.
The Linux top(1) command is great, with its ability to sort by different fields, add optional fields, highlight the sort criterion on-screen, and switch the sort field with < and >. On most systems, though, there are very many processes running, which makes "at-a-glance" examination a little difficult. Consider:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ PPID SWAP nFLT COMMAND
2181 root 20 0 7565m 3.2g 7028 S 2.7 58.3 86:41.17 1 317m 10k java
1751 root 20 0 137m 2492 1056 S 0.0 0.0 0:02.57 1 5104 76 munin-node
11598 postgres 20 0 146m 23m 11m S 0.0 0.4 7:51.63 2143 3600 28 postmaster
1470 root 20 0 243m 1792 820 S 0.0 0.0 0:01.89 1 2396 23 rsyslogd
3107 postgres 20 0 146m 26m 11m S 0.0 0.5 7:40.61 2143 936 58 postmaster
3168 postgres 20 0 132m 14m 11m S 0.0 0.2 8:27.27 2143 904 53 postmaster
3057 postgres 20 0 138m 19m 11m S 0.0 0.3 6:55.63 2143 856 36 postmaster
3128 root 20 0 85376 900 896 S 0.0 0.0 0:00.11 1636 852 2 sshd
1728 root 20 0 80860 1080 952 S 0.0 0.0 0:00.61 1 776 0 master
3130 manager 20 0 85532 844 672 S 0.0 0.0 0:01.03 3128 712 36 sshd
436 root 16 -4 11052 264 260 S 0.0 0.0 0:00.01 1 688 0 udevd
2211 root 18 -2 11048 220 216 S 0.0 0.0 0:00.00 436 684 0 udevd
2212 root 18 -2 11048 220 216 S 0.0 0.0 0:00.00 436 684 0 udevd
1636 root 20 0 66176 524 436 S 0.0 0.0 0:00.12 1 620 25 sshd
1486 root 20 0 229m 2000 1648 S 0.0 0.0 0:00.79 1485 596 116 sssd_be
2306 postgres 20 0 131m 11m 9m S 0.0 0.2 0:01.21 2143 572 64 postmaster
3055 postgres 20 0 135m 16m 11m S 0.0 0.3 10:18.88 2143 560 36 postmaster
...etc... This is for about 20 processes, but there are well over 100 processes.
In this example I was sorting by the SWAP field.
I would like to be able to aggregate related processes based on the "process group" of which they are a part, or based on the USER running the process, or based on the COMMAND being run. Essentially I want to:
Aggregate by PPID, or
Aggregate by USER, or
Aggregate by COMMAND, or
Turn off aggregation
This would allow me to see more quickly what is going on. The expectation is that all the postgres processes would show up together, as a single line, with the process group leader (2143, not captured in the snippet) displaying aggregated metrics. Generally the aggregation would be a sum (VIRT, RES, SHR, %CPU, %MEM, TIME+, SWAP, nFLT), but sometimes not (as for PR and NI, which might be shown as just --).
For processes whose PPID is 1, it would be nice to have an option of toggling between aggregating them all together, or of leaving them listed individually.
Aggregation by the name of the process (java vs. munin-node vs. postmaster vs. chrome) would also be a nice option. The COMMAND arguments would not be used when aggregating by command name.
This would be very valuable when tuning an application. How can I do this, aggregating top data for at-a-glance viewing on larger-scale systems? Has anyone written an app, perhaps one that uses top in batch mode, to create a summary view like the one I'm describing?
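To illustrate the kind of summary I mean, something roughly like this ps/awk one-liner that groups by command name and sums RSS and %CPU (just a sketch of the idea, assuming command names without spaces, not the tool I'm asking for):
ps -eo comm,rss,pcpu --no-headers |
  awk '{ rss[$1] += $2; cpu[$1] += $3; n[$1]++ }
       END { for (c in rss) printf "%-20s %4d procs %10d kB RSS %6.1f %%CPU\n", c, n[c], rss[c], cpu[c] }' |
  sort -k4 -rn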
FYI, I'm specifically interested in something for CentOS, but this would be helpful on any OS variant.
Thanks!
...Alan
I have an embedded system; when I perform user I/O operations, the system just stalls and only carries out the action after a long time. The system is quite complex and has many processes running. My question is: how can I identify what is making the system stall? It does literally nothing for 5 minutes, and only after that do I see the outcome. I really don't know what is stalling the system. Any input on how to debug this issue? I have run top on the system, but it doesn't point to any culprit. As you can see below, jup_render is only taking 30% of the CPU, which is not enough to stall the system, so I am not sure whether top is even useful here.
~ # top
top - 12:01:05 up 21 min, 1 user, load average: 1.49, 1.26, 0.87
Tasks: 116 total, 2 running, 114 sleeping, 0 stopped, 0 zombie
Cpu(s): 44.4%us, 13.9%sy, 0.0%ni, 40.3%id, 0.0%wa, 0.0%hi, 1.4%si, 0.0%st
Mem: 822572k total, 389640k used, 432932k free, 1980k buffers
Swap: 0k total, 0k used, 0k free, 227324k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
850 root 20 0 309m 32m 16m S 30 4.0 3:10.88 jup_render
870 root 20 0 221m 13m 10m S 27 1.7 2:28.78 jup_render
688 root 20 0 1156m 4092 3688 S 11 0.5 1:25.49 rxserver
9 root 20 0 0 0 0 S 2 0.0 0:06.81 ksoftirqd/1
16 root 20 0 0 0 0 S 1 0.0 0:06.87 ksoftirqd/3
9294 root 20 0 1904 616 508 R 1 0.1 0:00.10 top
812 root 20 0 865m 85m 46m S 1 10.7 1:21.17 lippo_main
13 root 20 0 0 0 0 S 1 0.0 0:06.59 ksoftirqd/2
800 root 20 0 223m 8316 6268 S 1 1.0 0:08.30 rat-cadaemon
3 root 20 0 0 0 0 S 1 0.0 0:05.94 ksoftirqd/0
1456 root 20 0 80060 10m 8208 S 1 1.2 0:04.82 jup_render
1330 root 20 0 202m 10m 8456 S 0 1.3 0:06.08 jup_render
8905 root 20 0 1868 556 424 S 0 0.1 0:02.91 dropbear
1561 root 20 0 80084 10m 8204 S 0 1.2 0:04.92 jup_render
753 root 20 0 61500 7376 6184 S 0 0.9 0:04.06 ale_app
1329 root 20 0 79908 9m 8208 S 0 1.2 0:04.77 jup_render
631 dbus 20 0 3248 1636 676 S 0 0.2 0:13.10 dbus-daemon
1654 root 20 0 80068 10m 8204 S 0 1.2 0:04.82 jup_render
760 root 20 0 116m 15m 12m S 0 1.9 0:10.19 jup_server
8 root 20 0 0 0 0 S 0 0.0 0:00.00 kworker/1:0
2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd
7 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1
170 root 0 -20 0 0 0 S 0 0.0 0:00.00 kblockd
6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/0
167 root 20 0 0 0 0 S 0 0.0 0:00.00 sync_supers
281 root 0 -20 0 0 0 S 0 0.0 0:00.00 nfsiod
For an embedded system that has many processes running, there can be a multitude of reasons. You may need to investigate from every angle.
Check the code for race conditions and deadlocks. The kernel might be busy-looping in a certain condition. There can be a scenario where your application is waiting on a select call, is blocked on a read, or has used up the CPU (this last possibility is ruled out by the top output you shared).
If you perform a blocking I/O operation, the process is put on a wait queue and only moves back to the execution path (the ready queue) once the request completes. That is, it is taken off the scheduler's run queue and marked with a special state; it is put back on the run queue only when it wakes from its sleep or the resource it is waiting for becomes available.
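A quick way to spot tasks stuck in such a wait is to look for processes in uninterruptible sleep (state D) and the kernel function they are sleeping in, for example:
# STAT containing D means uninterruptible sleep, usually waiting on I/O;
# WCHAN shows the kernel function the task is sleeping in
ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /D/'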
The immediate step is to try strace. It intercepts and records the system calls made by a process, along with the signals the process receives, and shows the order of events and the return/resumption path of each call. This can take you very close to the problem area.
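For example, to attach to one of the already running processes (the jup_render PID from the top output above) and record every system call with timestamps and per-call durations:
# -f follows threads/children, -tt adds microsecond timestamps, -T appends the time spent in each call
strace -f -tt -T -p 850 -o /tmp/stall.trace
Calls that take seconds rather than microseconds in the -T column are the first places to look.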
There are many other handy tools that can be tried depending on your development environment/setup. The key tools are:
'iotop' - provides a table of current I/O usage by processes or threads on the system, based on the I/O usage information reported by the kernel.
'LTTng' - makes tracing of race conditions and interrupt cascades possible. It is the successor to LTT and combines kprobes, tracepoint and perf functionality.
'Ftrace' - the Linux kernel's internal tracer, which lets you analyze/debug latency and performance issues.
If your system is based on a TI processor, CCS (the Trace Analyzer) provides the ability to perform non-intrusive debugging and analysis of system activity. So, depending on your setup, you may also need the relevant vendor tool.
I came across a few more ideas:
The magic SysRq key is another option on Linux. If a driver is stuck, the SysRq p command can point you at the exact routine that is causing the problem.
Profiling can tell you exactly where the kernel is spending its time. There are a couple of tools for this, readprofile and OProfile. OProfile can be enabled by building the kernel with CONFIG_PROFILING and CONFIG_OPROFILE. Another option is to rebuild the kernel with profiling enabled and read the profile counters with the readprofile utility after booting with profile=2 on the kernel command line.
mpstat can give 'the percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request' in its %iowait column.
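For example (mpstat is part of the sysstat package; %iowait is reported per CPU):
# One sample per second for 10 seconds, each CPU listed separately
mpstat -P ALL 1 10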
You said you ran top. Did you find out which programme gets the most CPU time, and what its percentage is?
When you run top there is also a summary area on the screen, which you have neither provided nor mentioned: the CPU load percentages (and other relevant info).
I advise you to include whatever you find interesting, relevant or suspicious in the top output. If you have already done that, make it stand out more clearly in your question, because right now it is not obvious what the maximum CPU load is.
My app is running slow after a few days. Using the Unix "top" command, it seems there is not a lot of free memory; see below. Even if I stop the application, about the same amount of memory shows as used. Any ideas why? Does this amount of memory look normal with no app running on a small gear? How can I reboot the virtual machine?
Below is the output of the "top" command with no app running. It shows:
7513700k total, 7327484k used, 186216k free
top - 22:06:26 up 14 days, 5:42, 0 users, load average: 1.83, 2.82, 3.21
Tasks: 3 total, 1 running, 2 sleeping, 0 stopped, 0 zombie
Cpu(s): 10.2%us, 26.6%sy, 1.6%ni, 57.4%id, 4.0%wa, 0.0%hi, 0.0%si, 0.2%st
Mem: 7513700k total, 7327484k used, 186216k free, 170244k buffers
Swap: 6249464k total, 4210036k used, 2039428k free, 925320k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
48736 3558 20 0 14908 1176 944 R 0.7 0.0 0:00.04 top
48374 3558 20 0 102m 2684 848 S 0.0 0.0 0:00.00 sshd
48383 3558 20 0 106m 2072 1436 S 0.0 0.0 0:00.19 bash
What type of app are you running? Also, since OpenShift uses cgroups, you'll want to see what your usage is within your cgroup (the top output shows the whole system). Try including the output from for i in $(oo-cgroup-read all); do echo "oo-cgroup-read $i" && oo-cgroup-read $i; done and pay close attention to your memory limits.
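If oo-cgroup-read is not available in your gear, the same numbers can usually be read directly from the cgroup v1 memory controller. A sketch, assuming the controller is mounted under /sys/fs/cgroup/memory (the mount point may differ, e.g. /cgroup/memory on older RHEL releases):
# Find the memory cgroup the current shell belongs to, then read its usage and limit
CG=$(awk -F: '$2 ~ /memory/ { print $3 }' /proc/self/cgroup)
cat /sys/fs/cgroup/memory${CG}/memory.usage_in_bytes
cat /sys/fs/cgroup/memory${CG}/memory.limit_in_bytes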