High IO load during the daily RAID check, which lasts for hours, on Debian Jessie?

I'm experiencing a load of about 6 during the daily RAID check:
# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[0] sdb3[1]
2111700992 blocks super 1.2 [2/2] [UU]
[=================>...] check = 87.1% (1840754048/2111700992) finish=43.6min speed=103504K/sec
bitmap: 2/16 pages [8KB], 65536KB chunk
md1 : active raid1 sda2[0] sdb2[1]
523712 blocks super 1.2 [2/2] [UU]
resync=DELAYED
The suspect seems to be jbd2:
Total DISK READ : 0.00 B/s | Total DISK WRITE : 433.45 K/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 902.05 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
19794 be/3 root 0.00 B 616.00 K 0.00 % 99.46 % [jbd2/loop0-8]
259 be/3 root 0.00 B 96.00 K 0.00 % 87.46 % [jbd2/md2-8]
19790 be/0 root 0.00 B 18.93 M 0.00 % 10.13 % [loop0]
The Linux box is Debian GNU/Linux 8.7 (jessie) with a 4.4.44-1-pve kernel.
Almost as soon as the RAID check finishes, the load drops back below one. How can I figure out what's causing this?
I'm not sure how long the daily RAID check should take, but it now runs for several hours, which seems excessive.
The IO levels drop significantly once the RAID check has finished:
Total DISK READ : 0.00 B/s | Total DISK WRITE : 8.29 M/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 8.63 M/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
259 be/3 root 0.00 B 188.00 K 0.00 % 28.80 % [jbd2/md2-8]
19794 be/3 root 0.00 B 720.00 K 0.00 % 28.65 % [jbd2/loop0-8]
This problem doesn't make any sense to me. Any help with debugging this further would be very useful.

The md RAID check has to iterate over every RAID stripe on disk and verify its integrity. This is both an I/O and a CPU operation, so the system load will increase significantly while it runs.
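If the goal is to keep the box responsive rather than to finish the check sooner, the md sync speed limits can be lowered while the check runs. A minimal sketch using the standard kernel sysctls (the 10000 KB/s value is only an illustration, not a recommendation; tune it for your disks):
# Current kernel-wide limits for RAID resync/check speed, in KB/s per device:
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# Lower the ceiling so normal I/O gets more bandwidth (the check will take longer):
echo 10000 > /proc/sys/dev/raid/speed_limit_max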

Related

Why is cpu-cycles much less than the current CPU frequency?

My CPU's max frequency is 2.8 GHz and the frequency governor is set to performance, but perf reports cpu-cycles at only 0.105 GHz. Why?
The cpu-cycles event is 0x3c; is that CPU_CLK_UNHALTED.THREAD_P or CPU_CLK_THREAD_UNHALTED.REF_XCLK?
Can I read the PMC registers directly with perf?
Right now the usage of cpu-8 reaches 90%, according to mpstat:
CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
8 0.00 0.00 0.98 0.00 0.00 0.00 0.00 89.22 0.00 9.80
8 0.00 0.00 0.99 0.00 0.00 0.00 0.00 88.12 0.00 10.89
The CPU is an Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz.
processor : 8
vendor_id : GenuineIntel
cpu family : 6
model : 62
model name : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
stepping : 4
microcode : 0x428
cpu MHz : 2800.000
cache size : 25600 KB
I want to get some idea about cpu-8 using perf:
perf stat -C 8
Performance counter stats for 'CPU(s) 8':
8828.237941 task-clock (msec) # 1.000 CPUs utilized
11,550 context-switches # 0.001 M/sec
0 cpu-migrations # 0.000 K/sec
0 page-faults # 0.000 K/sec
926,167,840 cycles # 0.105 GHz
4,012,135,689 stalled-cycles-frontend # 433.20% frontend cycles idle
473,099,833 instructions # 0.51 insn per cycle
# 8.48 stalled cycles per insn
98,346,040 branches # 11.140 M/sec
1,254,592 branch-misses # 1.28% of all branches
8.828177754 seconds time elapsed
The cpu-cycles rate is only 0.105 GHz, which is really strange.
I'm trying to understand what cpu-cycles actually means:
cat /sys/bus/event_source/devices/cpu/events/cpu-cycles
event=0x3c
I looked it up in the "Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 3", section 19.6, page 40.
I also checked the CPU frequency setting; the CPU should be running at its maximum frequency:
cat scaling_governor
performance
cat scaling_governor
performance
==============================================
Then I tried generating load on the core and measuring again:
taskset -c 8 stress --cpu 1
perf stat -C 8 sleep 10
Performance counter stats for 'CPU(s) 8':
10000.633899 task-clock (msec) # 1.000 CPUs utilized
1,823 context-switches # 0.182 K/sec
0 cpu-migrations # 0.000 K/sec
8 page-faults # 0.001 K/sec
29,792,267,638 cycles # 2.979 GHz
5,866,181,553 stalled-cycles-frontend # 19.69% frontend cycles idle
54,171,961,339 instructions # 1.82 insn per cycle
# 0.11 stalled cycles per insn
16,356,002,578 branches # 1635.497 M/sec
33,041,249 branch-misses # 0.20% of all branches
10.000592203 seconds time elapsed
Some details about my environment:
I run an application, call it 'A', in a virtual machine 'V' on a host 'H'.
The virtual machine was created with qemu-kvm.
The application receives packets from the network and processes them.
cpu-cycles can appear frozen because the counter stops counting while the CPU is in the C1 or C2 idle state.
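One way to test that hypothesis is to look at C-state residency directly. A sketch, assuming the cpupower utility (from the kernel tools package) is available on the host:
# Show C-state residency for CPU 8, sampled over 5 seconds; high C1/C2
# residency would explain why the cycle counter advances so slowly.
cpupower -c 8 monitor -i 5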

How to get second-level output from sar when used with -f option?

sar man page says that one can specify the resolution in seconds for its output.
However, I am not able to get second-level resolution with the following command:
sar -i 1 -f /var/log/sa/sa18
11:00:01 AM CPU %user %nice %system %iowait %steal %idle
11:10:01 AM all 0.04 0.00 0.04 0.00 0.01 99.91
11:20:01 AM all 0.04 0.00 0.04 0.00 0.00 99.92
11:30:01 AM all 0.04 0.00 0.04 0.00 0.00 99.92
The following command does not give second-level resolution either:
sar -f /var/log/sa/sa18 1
I am able to get second-level result only if I do not specify the -f option:
sar 1 10
08:34:31 PM CPU %user %nice %system %iowait %steal %idle
08:34:32 PM all 0.12 0.00 0.00 0.00 0.00 99.88
08:34:33 PM all 0.00 0.00 0.12 0.00 0.00 99.88
08:34:34 PM all 0.00 0.00 0.12 0.00 0.00 99.88
But I want to see system performance varying by second for some past day.
How do I get sar to print second-level output with the -f option?
Linux version: Linux 2.6.32-642.el6.x86_64
sar version : sysstat version 9.0.4
I think the existing sar data file 'sa18' was collected at a 10-minute interval, so we don't get output in seconds.
Check the /etc/cron.d/sysstat file:
[root@testserver ~]# cat /etc/cron.d/sysstat
#run system activity accounting tool every 10 minutes
*/10 * * * * root /usr/lib64/sa/sa1 1 1
#generate a daily summary of process accounting at 23:53
53 23 * * * root /usr/lib64/sa/sa2 -A
If you want a finer sar interval, you can modify the sysstat cron file.
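For example, a sketch of a modified entry (sa1 takes an interval and a count, so this records one sample per second, sixty per cron run, at a noticeable cost in disk space under /var/log/sa):
* * * * * root /usr/lib64/sa/sa1 1 60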
The /var/log/sa directory has all of the information already.
The sar command serves here as a parser and reads all the data in the sa file.
So you can use sar -f /var/log/sa/<sa file> to see the default CPU results, and use other flags, like '-r', for other reports.
# sar -f /var/log/sa/sa02
12:00:01 CPU %user %nice %system %iowait %steal %idle
12:10:01 all 14.70 0.00 5.57 0.69 0.01 79.03
12:20:01 all 23.53 0.00 6.08 0.55 0.01 69.83
# sar -r -f /var/log/sa/sa02
12:00:01 kbmemfree kbavail kbmemused kbactive kbinact kbdirty
12:10:01 2109732 5113616 30142444 25408240 2600
12:20:01 1950480 5008332 30301696 25580696 2260
12:30:01 2278632 5324260 29973544 25214788 4112
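You can also narrow a report to a time window with the -s and -e flags; the resolution is still limited by the interval the file was collected at:
# Report only the 12:00-12:30 window from the existing file:
sar -f /var/log/sa/sa02 -s 12:00:00 -e 12:30:00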

Total CPU usage - multicore system

I am using Xen, and with xentop I get the total CPU usage as a percentage:
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
VM1 -----r 25724 299.4 3025244 12.0 20975616 83.4 12 1 14970253 27308358 1 3 146585 92257 10835706 9976308 0
As you can see above, the CPU usage is 299%, but how can I get the total CPU usage of a VM?
top doesn't show me the total usage.
We usually see up to 100% CPU per core, so I'd guess there are at least 3 cores/CPUs.
Try this to count cores:
grep processor /proc/cpuinfo | wc -l
299% is the total CPU usage across all cores.
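Putting the two together, a rough sketch for a single overall percentage (299.4 is the xentop figure from above):
cores=$(grep -c ^processor /proc/cpuinfo)
echo "scale=1; 299.4 / $cores" | bc   # average utilisation per core, in %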
sar and mpstat are often used to display the CPU usage of a system. Check that the sysstat package is installed and display total CPU usage with:
$ mpstat 1 1
Linux 2.6.32-5-amd64 (debian) 05/01/2016 _x86_64_ (8 CPU)
07:48:51 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle
07:48:52 PM all 0.12 0.00 0.50 0.00 0.00 0.00 0.00 0.00 99.38
Average: all 0.12 0.00 0.50 0.00 0.00 0.00 0.00 0.00 99.38
If you agree that CPU utilisation is (100 - %IDLE):
$ mpstat 1 1 | awk '/^Average/ {print 100-$NF,"%"}'
0.52 %

GNU parallel load balancing

I am trying to find a way to execute CPU-intensive parallel jobs over a cluster. My objective is to schedule one job per core, so that every job hopefully gets 100% CPU utilization once scheduled. This is what I have come up with so far:
FILE build_sshlogin.sh
#!/bin/bash
serverprefix="compute-0-"
lastserver=15

# Estimate the number of free cores on a server: sample /proc/stat twice,
# two seconds apart, compute the utilisation, and scale by the core count.
function worker {
    server="$serverprefix$1"
    free=$(ssh $server /bin/bash << 'EOF'
cores=$(grep "cpu MHz" /proc/cpuinfo | wc -l)
stat=$(head -n 1 /proc/stat)
work1=$(echo $stat | awk '{print $2+$3+$4;}')
total1=$(echo $stat | awk '{print $2+$3+$4+$5+$6+$7+$8;}')
sleep 2
stat=$(head -n 1 /proc/stat)
work2=$(echo $stat | awk '{print $2+$3+$4;}')
total2=$(echo $stat | awk '{print $2+$3+$4+$5+$6+$7+$8;}')
util=$(echo " ( $work2 - $work1 ) / ($total2 - $total1) " | bc -l)
echo " $cores * (1 - $util) " | bc -l | xargs printf "%1.0f"
EOF
)
    # Emit an sshlogin line ("cores/server") only when there is spare capacity.
    if [ "$free" -gt 0 ]
    then
        echo $free/$server
    fi
}
export serverprefix
export -f worker
seq 0 $lastserver | parallel -k worker {}
This script is used by GNU parallel as follows:
parallel --sshloginfile <(./build_sshlogin.sh) --workdir $PWD command args {1} ::: $(seq $runs)
The problem with this technique is that if someone starts another CPU-intensive job on a server in the cluster without checking the CPU usage, the script will end up scheduling jobs onto cores that are already in use. In addition, if the CPU usage has changed by the time the first jobs finish, the newly freed cores will not be included for scheduling by GNU parallel for the remaining jobs.
So my question is the following: Is there a way to make GNU parallel re-calculate the free cores/server before it schedules each job? Any other suggestions for solving the problem are welcome.
NOTE: In my cluster all cores have the same frequency. If someone can generalize to account for different frequencies, that's also welcome.
Look at --load, which is meant for exactly this situation.
Unfortunately it does not look at CPU utilization but at load average; if your cluster nodes do not have heavy disk I/O, though, CPU utilization will be very close to load average.
Since load average changes slowly, you probably also need to use the --delay option to give the load average time to rise.
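A sketch of how that might look (nodes.txt is a hypothetical static sshlogin file; --load 100% holds a node back while its load average is at or above its core count, and --delay spaces out job starts so the load average has time to react):
parallel --sshloginfile nodes.txt --load 100% --delay 2 \
    --workdir $PWD command args {1} ::: $(seq $runs)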
Try mpstat
mpstat
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db) 07/09/2011
10:25:32 PM CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
10:25:32 PM all 5.68 0.00 0.49 2.03 0.01 0.02 0.00 91.77 146.55
That was an overall snapshot; this one is on a per-core basis:
$ mpstat -P ALL
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db) 07/09/2011 _x86_64_ (4 CPU)
10:28:04 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle
10:28:04 PM all 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 99.99
10:28:04 PM 0 0.01 0.00 0.01 0.01 0.00 0.00 0.00 0.00 99.98
10:28:04 PM 1 0.00 0.00 0.01 0.00 0.00 0.00 0.00 0.00 99.98
10:28:04 PM 2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
10:28:04 PM 3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
There are lots of options; these two give a simple actual %idle per CPU. Check the man page.

How can I measure the actual memory usage of an application or process?

How do you measure the memory usage of an application or process in Linux?
According to the blog article Understanding memory usage on Linux, ps is not an accurate tool for this purpose.
Why ps is "wrong"
Depending on how you look at it, ps is not reporting the real memory usage of processes. What it is really doing is showing how much real memory each process would take up if it were the only process running. Of course, a typical Linux machine has several dozen processes running at any given time, which means that the VSZ and RSS numbers reported by ps are almost definitely wrong.
(Note: This question is covered here in great detail.)
With ps or similar tools you will only get the amount of memory pages allocated by that process. This number is correct, but:
does not reflect the actual amount of memory used by the application, only the amount of memory reserved for it
can be misleading if pages are shared, for example by several threads or by using dynamically linked libraries
If you really want to know what amount of memory your application actually uses, you need to run it within a profiler. For example, Valgrind can give you insights about the amount of memory used, and, more importantly, about possible memory leaks in your program. The heap profiler tool of Valgrind is called 'massif':
Massif is a heap profiler. It performs detailed heap profiling by taking regular snapshots of a program's heap. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations. The graph is supplemented by a text or HTML file that includes more information for determining where the most memory is being allocated. Massif runs programs about 20x slower than normal.
As explained in the Valgrind documentation, you need to run the program through Valgrind:
valgrind --tool=massif <executable> <arguments>
Massif writes a series of memory-usage snapshots to a dump file (e.g. massif.out.12345). These provide (1) a timeline of memory usage and (2), for each snapshot, a record of where in your program the memory was allocated. A great graphical tool for analyzing these files is massif-visualizer, but I found ms_print, a simple text-based tool shipped with Valgrind, to be of great help already.
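For example, to page through the text report for the dump produced above:
ms_print massif.out.12345 | less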
To find memory leaks, use the (default) memcheck tool of valgrind.
Try the pmap command:
sudo pmap -x <process pid>
It is hard to tell for sure, but here are two "close" things that can help.
$ ps aux
will give you Virtual Size (VSZ)
You can also get detailed statistics from the /proc filesystem by looking at /proc/$pid/status.
The most important field is VmSize, which should be close to what ps aux gives.
/proc/19420$ cat status
Name: firefox
State: S (sleeping)
Tgid: 19420
Pid: 19420
PPid: 1
TracerPid: 0
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 256
Groups: 4 6 20 24 25 29 30 44 46 107 109 115 124 1000
VmPeak: 222956 kB
VmSize: 212520 kB
VmLck: 0 kB
VmHWM: 127912 kB
VmRSS: 118768 kB
VmData: 170180 kB
VmStk: 228 kB
VmExe: 28 kB
VmLib: 35424 kB
VmPTE: 184 kB
Threads: 8
SigQ: 0/16382
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000020001000
SigCgt: 000000018000442f
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
Cpus_allowed: 03
Mems_allowed: 1
voluntary_ctxt_switches: 63422
nonvoluntary_ctxt_switches: 7171
In recent versions of Linux, use the smaps subsystem. For example, for a process with a PID of 1234:
cat /proc/1234/smaps
It will tell you exactly how much memory it is using at that time. More importantly, it will divide the memory into private and shared, so you can tell how much memory your instance of the program is using, without including memory shared between multiple instances of the program.
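If you just want the headline numbers, the Pss fields of smaps can be summed directly. A small sketch, using PID 1234 as in the example above (smaps_rollup requires kernel 4.14 or newer):
# Sum the proportional set size over all mappings, in kB:
awk '/^Pss:/ {sum += $2} END {print sum " kB"}' /proc/1234/smaps
# Newer kernels pre-aggregate the same totals:
grep '^Pss:' /proc/1234/smaps_rollup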
ps -eo size,pid,user,command --sort -size | \
awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' |\
cut -d "-" -f1
Run this as root and you get a clear view of the memory usage of each process.
Output example:
0.00 Mb COMMAND
1288.57 Mb /usr/lib/firefox
821.68 Mb /usr/lib/chromium/chromium
762.82 Mb /usr/lib/chromium/chromium
588.36 Mb /usr/sbin/mysqld
547.55 Mb /usr/lib/chromium/chromium
523.92 Mb /usr/lib/tracker/tracker
476.59 Mb /usr/lib/chromium/chromium
446.41 Mb /usr/bin/gnome
421.62 Mb /usr/sbin/libvirtd
405.11 Mb /usr/lib/chromium/chromium
302.60 Mb /usr/lib/chromium/chromium
291.46 Mb /usr/lib/chromium/chromium
284.56 Mb /usr/lib/chromium/chromium
238.93 Mb /usr/lib/tracker/tracker
223.21 Mb /usr/lib/chromium/chromium
197.99 Mb /usr/lib/chromium/chromium
194.07 Mb conky
191.92 Mb /usr/lib/chromium/chromium
190.72 Mb /usr/bin/mongod
169.06 Mb /usr/lib/chromium/chromium
155.11 Mb /usr/bin/gnome
136.02 Mb /usr/lib/chromium/chromium
125.98 Mb /usr/lib/chromium/chromium
103.98 Mb /usr/lib/chromium/chromium
93.22 Mb /usr/lib/tracker/tracker
89.21 Mb /usr/lib/gnome
80.61 Mb /usr/bin/gnome
77.73 Mb /usr/lib/evolution/evolution
76.09 Mb /usr/lib/evolution/evolution
72.21 Mb /usr/lib/gnome
69.40 Mb /usr/lib/evolution/evolution
68.84 Mb nautilus
68.08 Mb zeitgeist
60.97 Mb /usr/lib/tracker/tracker
59.65 Mb /usr/lib/evolution/evolution
57.68 Mb apt
55.23 Mb /usr/lib/gnome
53.61 Mb /usr/lib/evolution/evolution
53.07 Mb /usr/lib/gnome
52.83 Mb /usr/lib/gnome
51.02 Mb /usr/lib/udisks2/udisksd
50.77 Mb /usr/lib/evolution/evolution
50.53 Mb /usr/lib/gnome
50.45 Mb /usr/lib/gvfs/gvfs
50.36 Mb /usr/lib/packagekit/packagekitd
50.14 Mb /usr/lib/gvfs/gvfs
48.95 Mb /usr/bin/Xwayland :1024
46.21 Mb /usr/bin/gnome
42.43 Mb /usr/bin/zeitgeist
42.29 Mb /usr/lib/gnome
41.97 Mb /usr/lib/gnome
41.64 Mb /usr/lib/gvfs/gvfsd
41.63 Mb /usr/lib/gvfs/gvfsd
41.55 Mb /usr/lib/gvfs/gvfsd
41.48 Mb /usr/lib/gvfs/gvfsd
39.87 Mb /usr/bin/python /usr/bin/chrome
37.45 Mb /usr/lib/xorg/Xorg vt2
36.62 Mb /usr/sbin/NetworkManager
35.63 Mb /usr/lib/caribou/caribou
34.79 Mb /usr/lib/tracker/tracker
33.88 Mb /usr/sbin/ModemManager
33.77 Mb /usr/lib/gnome
33.61 Mb /usr/lib/upower/upowerd
33.53 Mb /usr/sbin/gdm3
33.37 Mb /usr/lib/gvfs/gvfsd
33.36 Mb /usr/lib/gvfs/gvfs
33.23 Mb /usr/lib/gvfs/gvfs
33.15 Mb /usr/lib/at
33.15 Mb /usr/lib/at
30.03 Mb /usr/lib/colord/colord
29.62 Mb /usr/lib/apt/methods/https
28.06 Mb /usr/lib/zeitgeist/zeitgeist
27.29 Mb /usr/lib/policykit
25.55 Mb /usr/lib/gvfs/gvfs
25.55 Mb /usr/lib/gvfs/gvfs
25.23 Mb /usr/lib/accountsservice/accounts
25.18 Mb /usr/lib/gvfs/gvfsd
25.15 Mb /usr/lib/gvfs/gvfs
25.15 Mb /usr/lib/gvfs/gvfs
25.12 Mb /usr/lib/gvfs/gvfs
25.10 Mb /usr/lib/gnome
25.10 Mb /usr/lib/gnome
25.07 Mb /usr/lib/gvfs/gvfsd
24.99 Mb /usr/lib/gvfs/gvfs
23.26 Mb /usr/lib/chromium/chromium
22.09 Mb /usr/bin/pulseaudio
19.01 Mb /usr/bin/pulseaudio
18.62 Mb (sd
18.46 Mb (sd
18.30 Mb /sbin/init
18.17 Mb /usr/sbin/rsyslogd
17.50 Mb gdm
17.42 Mb gdm
17.09 Mb /usr/lib/dconf/dconf
17.09 Mb /usr/lib/at
17.06 Mb /usr/lib/gvfs/gvfsd
16.98 Mb /usr/lib/at
16.91 Mb /usr/lib/gdm3/gdm
16.86 Mb /usr/lib/gvfs/gvfsd
16.86 Mb /usr/lib/gdm3/gdm
16.85 Mb /usr/lib/dconf/dconf
16.85 Mb /usr/lib/dconf/dconf
16.73 Mb /usr/lib/rtkit/rtkit
16.69 Mb /lib/systemd/systemd
13.13 Mb /usr/lib/chromium/chromium
13.13 Mb /usr/lib/chromium/chromium
10.92 Mb anydesk
8.54 Mb /sbin/lvmetad
7.43 Mb /usr/sbin/apache2
6.82 Mb /usr/sbin/apache2
6.77 Mb /usr/sbin/apache2
6.73 Mb /usr/sbin/apache2
6.66 Mb /usr/sbin/apache2
6.64 Mb /usr/sbin/apache2
6.63 Mb /usr/sbin/apache2
6.62 Mb /usr/sbin/apache2
6.51 Mb /usr/sbin/apache2
6.25 Mb /usr/sbin/apache2
6.22 Mb /usr/sbin/apache2
3.92 Mb bash
3.14 Mb bash
2.97 Mb bash
2.95 Mb bash
2.93 Mb bash
2.91 Mb bash
2.86 Mb bash
2.86 Mb bash
2.86 Mb bash
2.84 Mb bash
2.84 Mb bash
2.45 Mb /lib/systemd/systemd
2.30 Mb (sd
2.28 Mb /usr/bin/dbus
1.84 Mb /usr/bin/dbus
1.46 Mb ps
1.21 Mb openvpn hackthebox.ovpn
1.16 Mb /sbin/dhclient
1.16 Mb /sbin/dhclient
1.09 Mb /lib/systemd/systemd
0.98 Mb /sbin/mount.ntfs /dev/sda3 /media/n0bit4/Data
0.97 Mb /lib/systemd/systemd
0.96 Mb /lib/systemd/systemd
0.89 Mb /usr/sbin/smartd
0.77 Mb /usr/bin/dbus
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.74 Mb /usr/bin/dbus
0.71 Mb /usr/lib/apt/methods/http
0.68 Mb /bin/bash /usr/bin/mysqld_safe
0.68 Mb /sbin/wpa_supplicant
0.66 Mb /usr/bin/dbus
0.61 Mb /lib/systemd/systemd
0.54 Mb /usr/bin/dbus
0.46 Mb /usr/sbin/cron
0.45 Mb /usr/sbin/irqbalance
0.43 Mb logger
0.41 Mb awk { hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }
0.40 Mb /usr/bin/ssh
0.34 Mb /usr/lib/chromium/chrome
0.32 Mb cut
0.32 Mb cut
0.00 Mb [kthreadd]
0.00 Mb [ksoftirqd/0]
0.00 Mb [kworker/0:0H]
0.00 Mb [rcu_sched]
0.00 Mb [rcu_bh]
0.00 Mb [migration/0]
0.00 Mb [lru
0.00 Mb [watchdog/0]
0.00 Mb [cpuhp/0]
0.00 Mb [cpuhp/1]
0.00 Mb [watchdog/1]
0.00 Mb [migration/1]
0.00 Mb [ksoftirqd/1]
0.00 Mb [kworker/1:0H]
0.00 Mb [cpuhp/2]
0.00 Mb [watchdog/2]
0.00 Mb [migration/2]
0.00 Mb [ksoftirqd/2]
0.00 Mb [kworker/2:0H]
0.00 Mb [cpuhp/3]
0.00 Mb [watchdog/3]
0.00 Mb [migration/3]
0.00 Mb [ksoftirqd/3]
0.00 Mb [kworker/3:0H]
0.00 Mb [kdevtmpfs]
0.00 Mb [netns]
0.00 Mb [khungtaskd]
0.00 Mb [oom_reaper]
0.00 Mb [writeback]
0.00 Mb [kcompactd0]
0.00 Mb [ksmd]
0.00 Mb [khugepaged]
0.00 Mb [crypto]
0.00 Mb [kintegrityd]
0.00 Mb [bioset]
0.00 Mb [kblockd]
0.00 Mb [devfreq_wq]
0.00 Mb [watchdogd]
0.00 Mb [kswapd0]
0.00 Mb [vmstat]
0.00 Mb [kthrotld]
0.00 Mb [ipv6_addrconf]
0.00 Mb [acpi_thermal_pm]
0.00 Mb [ata_sff]
0.00 Mb [scsi_eh_0]
0.00 Mb [scsi_tmf_0]
0.00 Mb [scsi_eh_1]
0.00 Mb [scsi_tmf_1]
0.00 Mb [scsi_eh_2]
0.00 Mb [scsi_tmf_2]
0.00 Mb [scsi_eh_3]
0.00 Mb [scsi_tmf_3]
0.00 Mb [scsi_eh_4]
0.00 Mb [scsi_tmf_4]
0.00 Mb [scsi_eh_5]
0.00 Mb [scsi_tmf_5]
0.00 Mb [bioset]
0.00 Mb [kworker/1:1H]
0.00 Mb [kworker/3:1H]
0.00 Mb [kworker/0:1H]
0.00 Mb [kdmflush]
0.00 Mb [bioset]
0.00 Mb [kdmflush]
0.00 Mb [bioset]
0.00 Mb [jbd2/sda5
0.00 Mb [ext4
0.00 Mb [kworker/2:1H]
0.00 Mb [kauditd]
0.00 Mb [bioset]
0.00 Mb [drbd
0.00 Mb [irq/27
0.00 Mb [i915/signal:0]
0.00 Mb [i915/signal:1]
0.00 Mb [i915/signal:2]
0.00 Mb [ttm_swap]
0.00 Mb [cfg80211]
0.00 Mb [kworker/u17:0]
0.00 Mb [hci0]
0.00 Mb [hci0]
0.00 Mb [kworker/u17:1]
0.00 Mb [iprt
0.00 Mb [iprt
0.00 Mb [kworker/1:0]
0.00 Mb [kworker/3:0]
0.00 Mb [kworker/0:0]
0.00 Mb [kworker/2:0]
0.00 Mb [kworker/u16:0]
0.00 Mb [kworker/u16:2]
0.00 Mb [kworker/3:2]
0.00 Mb [kworker/2:1]
0.00 Mb [kworker/1:2]
0.00 Mb [kworker/0:2]
0.00 Mb [kworker/2:2]
0.00 Mb [kworker/0:1]
0.00 Mb [scsi_eh_6]
0.00 Mb [scsi_tmf_6]
0.00 Mb [usb
0.00 Mb [bioset]
0.00 Mb [kworker/3:1]
0.00 Mb [kworker/u16:1]
There isn't any easy way to calculate this. But some people have tried to get some good answers:
ps_mem.py
ps_mem.py at GitHub
Use smem, which is an alternative to ps which calculates the USS and PSS per process. You probably want the PSS.
USS - Unique Set Size. This is the amount of unshared memory unique to that process (think of it as U for unique memory). It does not include shared memory. Thus this will under-report the amount of memory a process uses, but it is helpful when you want to ignore shared memory.
PSS - Proportional Set Size. This is what you want. It adds together the unique memory (USS), along with a proportion of its shared memory divided by the number of processes sharing that memory. Thus it will give you an accurate representation of how much actual physical memory is being used per process - with shared memory truly represented as shared. Think of the P being for physical memory.
How this compares to RSS as reported by ps and other utilities:
RSS - Resident Set Size. This is the amount of shared memory plus unshared memory used by each process. If any processes share memory, this will over-report the amount of memory actually used, because the same shared memory will be counted more than once - appearing again in each other process that shares the same memory. Thus it is fairly unreliable, especially when high-memory processes have a lot of forks - which is common in a server, with things like Apache or PHP (FastCGI/FPM) processes.
Note: smem can also (optionally) output graphs such as pie charts and the like. IMO you don't need any of that. If you just want to use it from the command line the way you might use ps -A v, then you don't need to install the recommended python-matplotlib dependency.
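A minimal invocation sketch (flags as documented in the smem man page: -t appends a totals row, -k prints human-readable units, -P filters by a process-name regex):
smem -t -k -P firefox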
Use time.
Not the Bash builtin time, but the one you can find with which time, for example /usr/bin/time.
Here's what it covers, on a simple ls:
$ /usr/bin/time --verbose ls
(...)
Command being timed: "ls"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 0%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2372
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 121
Voluntary context switches: 2
Involuntary context switches: 9
Swaps: 0
File system inputs: 256
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
Besides the solutions listed in the other answers, you can use the Linux command top. It provides a dynamic real-time view of the running system, showing CPU and memory usage for the whole system as well as for every program, in percent:
top
To filter by a program PID:
top -p <PID>
To filter by a program name:
top | grep <PROCESS NAME>
top also provides fields such as:
VIRT -- Virtual Image (kb): The total amount of virtual memory used by the task
RES -- Resident size (kb): The non-swapped physical memory a task has used ; RES = CODE + DATA.
DATA -- Data+Stack size (kb): The amount of physical memory devoted to other than executable code, also known as the 'data resident set' size or DRS.
SHR -- Shared Mem size (kb): The amount of shared memory used by a task. It simply reflects memory that could be potentially shared with other processes.
Reference here.
This is an excellent summary of the tools and problems: archive.org link
I'll quote it, so that more devs will actually read it.
If you want to analyse memory usage of the whole system or to thoroughly analyse memory usage of one application (not just its heap usage), use exmap. For whole system analysis, find processes with the highest effective usage, they take the most memory in practice, find processes with the highest writable usage, they create the most data (and therefore possibly leak or are very ineffective in their data usage). Select such application and analyse its mappings in the second listview. See exmap section for more details. Also use xrestop to check high usage of X resources, especially if the process of the X server takes a lot of memory. See xrestop section for details.
If you want to detect leaks, use valgrind or possibly kmtrace.
If you want to analyse heap (malloc etc.) usage of an application, either run it in memprof or with kmtrace, profile the application and search the function call tree for biggest allocations. See their sections for more details.
I am using Arch Linux and there's this wonderful package called ps_mem:
ps_mem -p <pid>
Example Output
$ ps_mem -S -p $(pgrep firefox)
Private + Shared = RAM used Swap used Program
355.0 MiB + 38.7 MiB = 393.7 MiB 35.9 MiB firefox
---------------------------------------------
393.7 MiB 35.9 MiB
=============================================
There isn't a single answer for this because you can't pinpoint precisely the amount of memory a process uses. Most processes under Linux use shared libraries.
For instance, let's say you want to calculate memory usage for the 'ls' process. Do you count only the memory used by the executable 'ls' (if you could isolate it)? How about libc? Or all these other libraries that are required to run 'ls'?
linux-gate.so.1 => (0x00ccb000)
librt.so.1 => /lib/librt.so.1 (0x06bc7000)
libacl.so.1 => /lib/libacl.so.1 (0x00230000)
libselinux.so.1 => /lib/libselinux.so.1 (0x00162000)
libc.so.6 => /lib/libc.so.6 (0x00b40000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00cb4000)
/lib/ld-linux.so.2 (0x00b1d000)
libattr.so.1 => /lib/libattr.so.1 (0x00229000)
libdl.so.2 => /lib/libdl.so.2 (0x00cae000)
libsepol.so.1 => /lib/libsepol.so.1 (0x0011a000)
You could argue that they are shared by other processes, but 'ls' can't be run on the system without them being loaded.
Also, if you need to know how much memory a process needs in order to do capacity planning, you have to calculate how much each additional copy of the process uses. I think /proc/PID/status might give you enough information about memory usage at a single point in time. On the other hand, Valgrind will give you a better profile of memory usage throughout the lifetime of the program.
If your code is in C or C++, you might be able to use getrusage(), which returns various statistics about the memory and time usage of your process.
Not all platforms support this, though, and some will return 0 values for the memory-use fields.
Instead, you can look at the virtual file in /proc/[pid]/statm (where [pid] is your process ID; you can obtain it from getpid()).
This file looks like a line of 7 integers. You are probably most interested in the first (total memory use) and sixth (data memory use) numbers.
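A quick shell sketch of reading those two fields (statm counts in pages, so convert using the system page size; the current shell's own PID is used here purely as an example):
pagesize=$(getconf PAGESIZE)
read -r size resident shared text lib data dt < /proc/$$/statm
echo "total: $((size * pagesize / 1024)) kB, data: $((data * pagesize / 1024)) kB"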
Three more methods to try:
ps aux --sort pmem
It sorts the output by %MEM.
ps aux | awk '{print $2, $4, $11}' | sort -k2 -nr | head -n 15
It sorts numerically by the %MEM field using pipes.
top -a
It starts top sorting by %MEM
(Extracted from here)
Valgrind can show detailed information, but it slows down the target application significantly, and most of the time it changes the behavior of the application.
Exmap was something I didn't know yet, but it seems that you need a kernel module to get the information, which can be an obstacle.
I assume what everyone wants to know with respect to "memory usage" is the following...
In Linux, the amount of physical memory a single process might use can be roughly divided into the following categories.
M.a anonymous mapped memory
  .p private
    .d dirty == malloc/mmapped heap and stack allocated and written memory
    .c clean == malloc/mmapped heap and stack memory once allocated, written, then freed, but not reclaimed yet
  .s shared
    .d dirty == malloc/mmapped heap that became copy-on-write and is shared among processes
    .c clean == malloc/mmapped heap that became copy-on-write and is shared among processes
M.n named mapped memory
  .p private
    .d dirty == file-mmapped written memory, private
    .c clean == mapped program/library text, privately mapped
  .s shared
    .d dirty == file-mmapped written memory, shared
    .c clean == mapped library text, shared
The showmap utility included in Android is quite useful:
virtual shared shared private private
size RSS PSS clean dirty clean dirty object
-------- -------- -------- -------- -------- -------- -------- ------------------------------
4 0 0 0 0 0 0 0:00 0 [vsyscall]
4 4 0 4 0 0 0 [vdso]
88 28 28 0 0 4 24 [stack]
12 12 12 0 0 0 12 7909 /lib/ld-2.11.1.so
12 4 4 0 0 0 4 89529 /usr/lib/locale/en_US.utf8/LC_IDENTIFICATION
28 0 0 0 0 0 0 86661 /usr/lib/gconv/gconv-modules.cache
4 0 0 0 0 0 0 87660 /usr/lib/locale/en_US.utf8/LC_MEASUREMENT
4 0 0 0 0 0 0 89528 /usr/lib/locale/en_US.utf8/LC_TELEPHONE
4 0 0 0 0 0 0 89527 /usr/lib/locale/en_US.utf8/LC_ADDRESS
4 0 0 0 0 0 0 87717 /usr/lib/locale/en_US.utf8/LC_NAME
4 0 0 0 0 0 0 87873 /usr/lib/locale/en_US.utf8/LC_PAPER
4 0 0 0 0 0 0 13879 /usr/lib/locale/en_US.utf8/LC_MESSAGES/SYS_LC_MESSAGES
4 0 0 0 0 0 0 89526 /usr/lib/locale/en_US.utf8/LC_MONETARY
4 0 0 0 0 0 0 89525 /usr/lib/locale/en_US.utf8/LC_TIME
4 0 0 0 0 0 0 11378 /usr/lib/locale/en_US.utf8/LC_NUMERIC
1156 8 8 0 0 4 4 11372 /usr/lib/locale/en_US.utf8/LC_COLLATE
252 0 0 0 0 0 0 11321 /usr/lib/locale/en_US.utf8/LC_CTYPE
128 52 1 52 0 0 0 7909 /lib/ld-2.11.1.so
2316 32 11 24 0 0 8 7986 /lib/libncurses.so.5.7
2064 8 4 4 0 0 4 7947 /lib/libdl-2.11.1.so
3596 472 46 440 0 4 28 7933 /lib/libc-2.11.1.so
2084 4 0 4 0 0 0 7995 /lib/libnss_compat-2.11.1.so
2152 4 0 4 0 0 0 7993 /lib/libnsl-2.11.1.so
2092 0 0 0 0 0 0 8009 /lib/libnss_nis-2.11.1.so
2100 0 0 0 0 0 0 7999 /lib/libnss_files-2.11.1.so
3752 2736 2736 0 0 864 1872 [heap]
24 24 24 0 0 0 24 [anon]
916 616 131 584 0 0 32 /bin/bash
-------- -------- -------- -------- -------- -------- -------- ------------------------------
22816 4004 3005 1116 0 876 2012 TOTAL
#!/bin/ksh
#
# Returns total memory used by process $1 in kb.
#
# See /proc/NNNN/smaps if you want to do something
# more interesting.
#
IFS=$'\n'
for line in $(</proc/$1/smaps)
do
[[ $line =~ ^Size:\s+(\S+) ]] && ((kb += ${.sh.match[1]}))
done
print $kb
I'm using htop; it's a very good console program similar to Windows Task Manager.
Get Valgrind. Give it your program to run, and it'll tell you plenty about its memory usage.
This applies only to a program that runs for some time and then stops. I don't know if Valgrind can attach to an already-running process, or to processes that shouldn't be stopped, such as daemons.
A good test of the more "real world" usage is to open the application, run vmstat -s, and check the "active memory" statistic. Close the application, wait a few seconds, and run vmstat -s again.
However much active memory was freed was evidently in use by the application.
The command line below will give you the total memory used by the various processes running on the Linux machine, in MB:
ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | awk '{total=total + $1} END {print total}'
If the process is not using up too much memory (either because you expect this to be the case, or some other command has given this initial indication), and the process can withstand being stopped for a short period of time, you can try to use the gcore command.
gcore <pid>
Check the size of the generated core file to get a good idea how much memory a particular process is using.
This won't work too well if the process is using hundreds of megabytes or gigabytes, as core generation could take several seconds or minutes depending on I/O performance. While the core is being created, the process is stopped (or "frozen") to prevent memory changes. So be careful.
Also make sure the mount point where the core is generated has plenty of disk space and that the system will not react negatively to the core file being created in that particular directory.
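For example (a sketch; gcore ships with gdb, -o sets the output file prefix, and the PID is appended to the file name):
gcore -o /tmp/core <pid>
ls -lh /tmp/core.<pid>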
Note: this works well only when memory consumption is increasing.
If you want to monitor memory usage of a given process (or a group of processes sharing a common name, e.g. google-chrome), you can use my bash one-liner:
while true; do ps aux | awk '{print $5, $11}' | grep chrome | sort -n > /tmp/a.txt; sleep 1; diff /tmp/{b,a}.txt; mv /tmp/{a,b}.txt; done
It will continuously look for changes and print them.
If you want something quicker than profiling with Valgrind, and your kernel is older and you can't use smaps, a ps with the options to show the resident set of the process (with ps -o rss,command) can give you a quick and reasonable approximation of the real amount of non-swapped memory being used.
I would suggest that you use atop. You can find everything about it on this page. It provides all the necessary KPIs for your processes, and it can also capture them to a file.
Another vote for Valgrind here, but I would like to add that you can use a tool like Alleyoop to help you interpret the results generated by Valgrind.
I use the two tools all the time and always have lean, non-leaky code to proudly show for it ;)
Check out this shell script to check memory usage by application in Linux.
It is also available on GitHub and in a version without paste and bc.
Building on some of the answers (thanks thomasrutter), to get the actual swap and RAM for a single application I came up with the following. Say we want to know what 'firefox' is using:
sudo smem | awk '/firefox/{swap += $5; pss += $7;} END {print "swap = "swap/1024" PSS = "pss/1024}'
Or for libvirt;
sudo smem | awk '/libvirt/{swap += $5; pss += $7;} END {print "swap = "swap/1024" PSS = "pss/1024}'
This will give you the totals in MB, like so:
swap = 0 PSS = 2096.92
swap = 224.75 PSS = 421.455
Tested on Ubuntu 16.04 through 20.04.
While this question seems to be about examining currently running processes, I wanted to see the peak memory used by an application from start to finish. Besides Valgrind, you can use tstime, which is much simpler. It measures the "highwater" memory usage (RSS and virtual). From this answer.
Based on an answer to a related question.
You may use SNMP to get the memory and CPU usage of a process in a particular device on the network :)
Requirements:
The device running the process should have snmp installed and running
snmp should be configured to accept requests from where you will run the script below (it may be configured in file snmpd.conf)
You should know the process ID (PID) of the process you want to monitor
Notes:
HOST-RESOURCES-MIB::hrSWRunPerfCPU is the number of centi-seconds of the total system's CPU resources consumed by this process. Note that on a multi-processor system, this value may increment by more than one centi-second in one centi-second of real (wall clock) time.
HOST-RESOURCES-MIB::hrSWRunPerfMem is the total amount of real system memory allocated to this process.
Process monitoring script
echo "IP address: "
read ip
echo "Specify PID: "
read pid
echo "Interval in seconds: "
read interval
while true
do
date
snmpget -v2c -c public $ip HOST-RESOURCES-MIB::hrSWRunPerfCPU.$pid
snmpget -v2c -c public $ip HOST-RESOURCES-MIB::hrSWRunPerfMem.$pid
sleep $interval;
done
/proc/xxx/numa_maps gives some info there: N0=??? N1=???. But this result might be lower than the actual figure, as it only counts pages that have been touched.
