Siege to capture only final statistics - linux

I have a requirement to capture final statistics of siege benchmark tool.
what I have tried is
siege -c2 -t10s http://127.0.0.1:3000/ > siege.log 2>&1
but siege.log also contains many per-request lines like HTTP/1.1 200 0.00 secs: 16 bytes ==> GET / as well.
All I want is the final statistics, like below:
Lifting the server siege...
Transactions: 496 hits
Availability: 100.00 %
Elapsed time: 0.97 secs
Data transferred: 0.01 MB
Response time: 0.00 secs
Transaction rate: 511.34 trans/sec
Throughput: 0.01 MB/sec
Concurrency: 1.82
Successful transactions: 496
Failed transactions: 0
Longest transaction: 0.04
Shortest transaction: 0.00
Please suggest. Thanks in advance

I found the solution the hard way.
By default, $HOME/.siege/siege.conf has verbose set to true.
Option 1:
Override it to false in $HOME/.siege/siege.conf.
Option 2:
If you prefer not to change the global configuration, you can create a custom siege configuration file and run it as below:
siege -c2 -t10s http://127.0.0.1:3000/ --rc=siege.conf
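For illustration, a minimal custom configuration could contain just the override (this assumes options omitted from the file you pass with --rc fall back to siege's built-in defaults):
# siege.conf (custom) -- suppress the per-request lines, keep only the summary
verbose = false
With that file in the current directory, the run above writes only the final statistics block to siege.log.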

Related

High IO load during the daily raid check which lasts for hours on Debian Jessie?

I'm experiencing a load of about 6 during the daily raid check:
# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[0] sdb3[1]
2111700992 blocks super 1.2 [2/2] [UU]
[=================>...] check = 87.1% (1840754048/2111700992) finish=43.6min speed=103504K/sec
bitmap: 2/16 pages [8KB], 65536KB chunk
md1 : active raid1 sda2[0] sdb2[1]
523712 blocks super 1.2 [2/2] [UU]
resync=DELAYED
The suspect seems to be jbd2:
Total DISK READ : 0.00 B/s | Total DISK WRITE : 433.45 K/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 902.05 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
19794 be/3 root 0.00 B 616.00 K 0.00 % 99.46 % [jbd2/loop0-8]
259 be/3 root 0.00 B 96.00 K 0.00 % 87.46 % [jbd2/md2-8]
19790 be/0 root 0.00 B 18.93 M 0.00 % 10.13 % [loop0]
The Linux box is Debian GNU/Linux 8.7 (jessie) with a 4.4.44-1-pve kernel.
Almost instantly when the raid check finishes, the load returns to less than one. How can I figure out what's causing this problem?
I'm not sure how long the daily RAID check should run, but now it takes several hours, which seems excessive.
The IO levels drop significantly once the raid check has finished:
Total DISK READ : 0.00 B/s | Total DISK WRITE : 8.29 M/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 8.63 M/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
259 be/3 root 0.00 B 188.00 K 0.00 % 28.80 % [jbd2/md2-8]
19794 be/3 root 0.00 B 720.00 K 0.00 % 28.65 % [jbd2/loop0-8]
This problem doesn't seem to make any sense to me. Any help further debugging this would be very useful.
The md RAID check needs to iterate through the RAID stripes on disk and perform the integrity check. This is both an I/O and a CPU operation, so the system load will increase significantly for the duration of the check.
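If the multi-hour check is too disruptive, one hedged mitigation is to lower the md resync/check speed ceiling so it competes less with normal I/O (the value below is only an example):
# current throttle values, in KB/s
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# lower the ceiling for the duration of the check, e.g. to ~50 MB/s
sudo sysctl -w dev.raid.speed_limit_max=50000
The trade-off is that the check itself will then take even longer.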

Total CPU usage - multicore system

I am using xen and with xen top I get the total CPU usage in percentage:
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
VM1 -----r 25724 299.4 3025244 12.0 20975616 83.4 12 1 14970253 27308358 1 3 146585 92257 10835706 9976308 0
As you can see above, the CPU usage is 299 %, but how can I get the total CPU usage of a VM?
Top doesn't show me the total usage.
We usually see 100% CPU per core, so I guess there are at least 3 cores/CPUs.
Try this to count cores:
grep processor /proc/cpuinfo | wc -l
299% is the total CPU usage.
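If you want that figure as a share of total capacity, a hedged sketch is to divide by the number of cores (here taken from /proc/cpuinfo on the box where you run it; for the xentop output above you could divide by the VM's 12 VCPUs instead):
cores=$(grep -c ^processor /proc/cpuinfo)
echo "299.4 $cores" | awk '{ printf "%.1f%% of total CPU capacity\n", $1 / $2 }'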
sar and mpstat are often used to display the CPU usage of a system. Check that the sysstat package is installed and display total CPU usage with:
$ mpstat 1 1
Linux 2.6.32-5-amd64 (debian) 05/01/2016 _x86_64_ (8 CPU)
07:48:51 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle
07:48:52 PM all 0.12 0.00 0.50 0.00 0.00 0.00 0.00 0.00 99.38
Average: all 0.12 0.00 0.50 0.00 0.00 0.00 0.00 0.00 99.38
If you agree that CPU utilisation is (100 - %IDLE):
$ mpstat 1 1 | awk '/^Average/ {print 100-$NF,"%"}'
0.52 %

calculating correct memory utilization from invalid output of pidstat

I used the command pidstat -r -p <pid> <interval> >> log_Path/MemStat.CSV & to collect memory statistics.
After running this command I found that the RSS, VSZ and %MEM values increased continuously, which was not expected, since pidstat reports the values for each interval.
After searching the net I found that there is a bug in pidstat and that I need to update the sysstat package.
(please refer to the last few statements by the pidstat author at this link: http://sebastien.godard.pagesperso-orange.fr/tutorial.html)
Now, my question is: how do I calculate the correct %MEM utilization from the current output, as we cannot run the test again?
sample output :
Time PID minflt/s majflt/s VSZ RSS %MEM
9:55:22 AM 18236 1280 0 26071488 119136 0.36
9:55:23 AM 18236 4273 0 27402768 126276 0.38
9:55:24 AM 18236 9831 0 27402800 162468 0.49
9:55:25 AM 18236 161 0 27402800 169092 0.51
9:55:26 AM 18236 51 0 27402800 175416 0.53
9:55:27 AM 18236 6859 0 27402800 198340 0.6
9:55:28 AM 18236 1440 0 27402800 203608 0.62
On the sysstat tutorial page you refer to, it says:
I noticed that pidstat had a memory footprint (VSZ and RSS fields)
that was constantly increasing as the time went by. I quickly found
that I had forgotten to close a file descriptor in a function of my
code and that was responsible for the memory leak...!
So, the output of pidstat has never been invalid; on the contrary, the author wrote
pidstat helped me to detect a memory leak in the pidstat command
itself.
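If you still want to cross-check the recorded %MEM column, it can be recomputed from the RSS column and the total memory of the machine the test ran on (a hedged sketch; the MemTotal value below is hypothetical and must be replaced with the real figure from /proc/meminfo on that box, and RSS is taken as the second-to-last field of each data line):
# pidstat reports RSS in kB; %MEM = RSS / MemTotal * 100
memtotal_kb=32951412   # hypothetical; use the real MemTotal from /proc/meminfo
awk -v total="$memtotal_kb" 'NF > 4 && $(NF-1)+0 > 0 { printf "%s  recomputed %%MEM = %.2f\n", $0, $(NF-1) * 100 / total }' MemStat.CSV
This only re-derives the last column; the RSS and VSZ samples themselves are the values pidstat observed at each interval.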

Siege aborted due to excessive socket failure

I have encountered this problem while trying to run the following siege command on Mac OS X 10.8.3.
siege -d1 -c 20 -t2m -i -f -r10 urls.txt
The output from Siege is the following:
** SIEGE 2.74
** Preparing 20 concurrent users for battle.
The server is now under siege...
done.
siege aborted due to excessive socket failure; you
can change the failure threshold in $HOME/.siegerc
Transactions: 0 hits
Availability: 0.00 %
Elapsed time: 27.04 secs
Data transferred: 0.00 MB
Response time: 0.00 secs
Transaction rate: 0.00 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 0.00
Successful transactions: 0
Failed transactions: 1043
Longest transaction: 0.00
Shortest transaction: 0.00
FILE: /usr/local/var/siege.log
You can disable this annoying message by editing
the .siegerc file in your home directory; change
the directive 'show-logfile' to false.
The problem may be that you are running out of ephemeral ports. To remedy that, either expand the number of ports to use, or reduce the duration that ports stay in TIME_WAIT, or both.
Expand the usable ports:
Check your current setting:
$ sudo sysctl net.inet.ip.portrange.hifirst
net.inet.ip.portrange.hifirst: 49152
Set it lower to expand your window:
$ sudo sysctl -w net.inet.ip.portrange.hifirst=32768
net.inet.ip.portrange.hifirst: 49152 -> 32768
(hilast should already be at the max, 65535)
Reduce the maximum segment lifetime
$ sudo sysctl -w net.inet.tcp.msl=1000
net.inet.tcp.msl: 15000 -> 1000
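If you only want siege to tolerate more failures before aborting, the error message itself points at the knob: the threshold lives in $HOME/.siegerc. A hedged sketch (the directive is commonly named failures, with a default around 1024, which is consistent with the 1043 failed transactions above):
# in $HOME/.siegerc
failures = 10240
Raising the threshold only hides the symptom, though; the port/TIME_WAIT tuning above, or fixing the URLs, addresses the cause.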
I had this error too. It turned out my URIs were faulty; most of them returned a 404 or 500 status. When I fixed the URIs, all went well.

How can I measure the actual memory usage of an application or process?

How do you measure the memory usage of an application or process in Linux?
According to the blog article Understanding memory usage on Linux, ps is not an accurate tool to use for this purpose.
Why ps is "wrong"
Depending on how you look at it, ps is not reporting the real memory usage of processes. What it is really doing is showing how much real memory each process would take up if it were the only process running. Of course, a typical Linux machine has several dozen processes running at any given time, which means that the VSZ and RSS numbers reported by ps are almost definitely wrong.
(Note: This question is covered here in great detail.)
With ps or similar tools you will only get the amount of memory pages allocated by that process. This number is correct, but:
does not reflect the actual amount of memory used by the application, only the amount of memory reserved for it
can be misleading if pages are shared, for example by several threads or by using dynamically linked libraries
If you really want to know what amount of memory your application actually uses, you need to run it within a profiler. For example, Valgrind can give you insights about the amount of memory used, and, more importantly, about possible memory leaks in your program. The heap profiler tool of Valgrind is called 'massif':
Massif is a heap profiler. It performs detailed heap profiling by taking regular snapshots of a program's heap. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations. The graph is supplemented by a text or HTML file that includes more information for determining where the most memory is being allocated. Massif runs programs about 20x slower than normal.
As explained in the Valgrind documentation, you need to run the program through Valgrind:
valgrind --tool=massif <executable> <arguments>
Massif writes a dump of memory usage snapshots (e.g. massif.out.12345). These provide (1) a timeline of memory usage and (2), for each snapshot, a record of where in your program memory was allocated. A great graphical tool for analyzing these files is massif-visualizer. But I found ms_print, a simple text-based tool shipped with Valgrind, to be of great help already.
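For example, assuming the run above produced massif.out.12345, the text summary is just:
ms_print massif.out.12345 | less
which prints the heap-usage graph over time and, for the detailed snapshots, the allocation tree.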
To find memory leaks, use the (default) memcheck tool of valgrind.
Try the pmap command:
sudo pmap -x <process pid>
It is hard to tell for sure, but here are two "close" things that can help.
$ ps aux
will give you Virtual Size (VSZ)
You can also get detailed statistics from the /proc file-system by going to /proc/$pid/status.
The most important entry is VmSize, which should be close to what ps aux gives.
/proc/19420$ cat status
Name: firefox
State: S (sleeping)
Tgid: 19420
Pid: 19420
PPid: 1
TracerPid: 0
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 256
Groups: 4 6 20 24 25 29 30 44 46 107 109 115 124 1000
VmPeak: 222956 kB
VmSize: 212520 kB
VmLck: 0 kB
VmHWM: 127912 kB
VmRSS: 118768 kB
VmData: 170180 kB
VmStk: 228 kB
VmExe: 28 kB
VmLib: 35424 kB
VmPTE: 184 kB
Threads: 8
SigQ: 0/16382
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000020001000
SigCgt: 000000018000442f
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
Cpus_allowed: 03
Mems_allowed: 1
voluntary_ctxt_switches: 63422
nonvoluntary_ctxt_switches: 7171
In recent versions of Linux, use the smaps subsystem. For example, for a process with a PID of 1234:
cat /proc/1234/smaps
It will tell you exactly how much memory it is using at that time. More importantly, it will divide the memory into private and shared, so you can tell how much memory your instance of the program is using, without including memory shared between multiple instances of the program.
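For a single number, a hedged convenience is to sum the proportional set size (Pss) lines that smaps reports for each mapping:
# total Pss in kB for PID 1234 (shared pages counted proportionally)
awk '/^Pss:/ { total += $2 } END { print total " kB" }' /proc/1234/smaps
On newer kernels, /proc/1234/smaps_rollup provides the same totals without the per-mapping detail.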
ps -eo size,pid,user,command --sort -size | \
awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | \
cut -d "-" -f1
Use this as root and you can get a clear output of memory usage by each process. (The trailing cut truncates each command at its first "-", which is why the example output below shows shortened names such as /usr/bin/gnome.)
Output example:
0.00 Mb COMMAND
1288.57 Mb /usr/lib/firefox
821.68 Mb /usr/lib/chromium/chromium
762.82 Mb /usr/lib/chromium/chromium
588.36 Mb /usr/sbin/mysqld
547.55 Mb /usr/lib/chromium/chromium
523.92 Mb /usr/lib/tracker/tracker
476.59 Mb /usr/lib/chromium/chromium
446.41 Mb /usr/bin/gnome
421.62 Mb /usr/sbin/libvirtd
405.11 Mb /usr/lib/chromium/chromium
302.60 Mb /usr/lib/chromium/chromium
291.46 Mb /usr/lib/chromium/chromium
284.56 Mb /usr/lib/chromium/chromium
238.93 Mb /usr/lib/tracker/tracker
223.21 Mb /usr/lib/chromium/chromium
197.99 Mb /usr/lib/chromium/chromium
194.07 Mb conky
191.92 Mb /usr/lib/chromium/chromium
190.72 Mb /usr/bin/mongod
169.06 Mb /usr/lib/chromium/chromium
155.11 Mb /usr/bin/gnome
136.02 Mb /usr/lib/chromium/chromium
125.98 Mb /usr/lib/chromium/chromium
103.98 Mb /usr/lib/chromium/chromium
93.22 Mb /usr/lib/tracker/tracker
89.21 Mb /usr/lib/gnome
80.61 Mb /usr/bin/gnome
77.73 Mb /usr/lib/evolution/evolution
76.09 Mb /usr/lib/evolution/evolution
72.21 Mb /usr/lib/gnome
69.40 Mb /usr/lib/evolution/evolution
68.84 Mb nautilus
68.08 Mb zeitgeist
60.97 Mb /usr/lib/tracker/tracker
59.65 Mb /usr/lib/evolution/evolution
57.68 Mb apt
55.23 Mb /usr/lib/gnome
53.61 Mb /usr/lib/evolution/evolution
53.07 Mb /usr/lib/gnome
52.83 Mb /usr/lib/gnome
51.02 Mb /usr/lib/udisks2/udisksd
50.77 Mb /usr/lib/evolution/evolution
50.53 Mb /usr/lib/gnome
50.45 Mb /usr/lib/gvfs/gvfs
50.36 Mb /usr/lib/packagekit/packagekitd
50.14 Mb /usr/lib/gvfs/gvfs
48.95 Mb /usr/bin/Xwayland :1024
46.21 Mb /usr/bin/gnome
42.43 Mb /usr/bin/zeitgeist
42.29 Mb /usr/lib/gnome
41.97 Mb /usr/lib/gnome
41.64 Mb /usr/lib/gvfs/gvfsd
41.63 Mb /usr/lib/gvfs/gvfsd
41.55 Mb /usr/lib/gvfs/gvfsd
41.48 Mb /usr/lib/gvfs/gvfsd
39.87 Mb /usr/bin/python /usr/bin/chrome
37.45 Mb /usr/lib/xorg/Xorg vt2
36.62 Mb /usr/sbin/NetworkManager
35.63 Mb /usr/lib/caribou/caribou
34.79 Mb /usr/lib/tracker/tracker
33.88 Mb /usr/sbin/ModemManager
33.77 Mb /usr/lib/gnome
33.61 Mb /usr/lib/upower/upowerd
33.53 Mb /usr/sbin/gdm3
33.37 Mb /usr/lib/gvfs/gvfsd
33.36 Mb /usr/lib/gvfs/gvfs
33.23 Mb /usr/lib/gvfs/gvfs
33.15 Mb /usr/lib/at
33.15 Mb /usr/lib/at
30.03 Mb /usr/lib/colord/colord
29.62 Mb /usr/lib/apt/methods/https
28.06 Mb /usr/lib/zeitgeist/zeitgeist
27.29 Mb /usr/lib/policykit
25.55 Mb /usr/lib/gvfs/gvfs
25.55 Mb /usr/lib/gvfs/gvfs
25.23 Mb /usr/lib/accountsservice/accounts
25.18 Mb /usr/lib/gvfs/gvfsd
25.15 Mb /usr/lib/gvfs/gvfs
25.15 Mb /usr/lib/gvfs/gvfs
25.12 Mb /usr/lib/gvfs/gvfs
25.10 Mb /usr/lib/gnome
25.10 Mb /usr/lib/gnome
25.07 Mb /usr/lib/gvfs/gvfsd
24.99 Mb /usr/lib/gvfs/gvfs
23.26 Mb /usr/lib/chromium/chromium
22.09 Mb /usr/bin/pulseaudio
19.01 Mb /usr/bin/pulseaudio
18.62 Mb (sd
18.46 Mb (sd
18.30 Mb /sbin/init
18.17 Mb /usr/sbin/rsyslogd
17.50 Mb gdm
17.42 Mb gdm
17.09 Mb /usr/lib/dconf/dconf
17.09 Mb /usr/lib/at
17.06 Mb /usr/lib/gvfs/gvfsd
16.98 Mb /usr/lib/at
16.91 Mb /usr/lib/gdm3/gdm
16.86 Mb /usr/lib/gvfs/gvfsd
16.86 Mb /usr/lib/gdm3/gdm
16.85 Mb /usr/lib/dconf/dconf
16.85 Mb /usr/lib/dconf/dconf
16.73 Mb /usr/lib/rtkit/rtkit
16.69 Mb /lib/systemd/systemd
13.13 Mb /usr/lib/chromium/chromium
13.13 Mb /usr/lib/chromium/chromium
10.92 Mb anydesk
8.54 Mb /sbin/lvmetad
7.43 Mb /usr/sbin/apache2
6.82 Mb /usr/sbin/apache2
6.77 Mb /usr/sbin/apache2
6.73 Mb /usr/sbin/apache2
6.66 Mb /usr/sbin/apache2
6.64 Mb /usr/sbin/apache2
6.63 Mb /usr/sbin/apache2
6.62 Mb /usr/sbin/apache2
6.51 Mb /usr/sbin/apache2
6.25 Mb /usr/sbin/apache2
6.22 Mb /usr/sbin/apache2
3.92 Mb bash
3.14 Mb bash
2.97 Mb bash
2.95 Mb bash
2.93 Mb bash
2.91 Mb bash
2.86 Mb bash
2.86 Mb bash
2.86 Mb bash
2.84 Mb bash
2.84 Mb bash
2.45 Mb /lib/systemd/systemd
2.30 Mb (sd
2.28 Mb /usr/bin/dbus
1.84 Mb /usr/bin/dbus
1.46 Mb ps
1.21 Mb openvpn hackthebox.ovpn
1.16 Mb /sbin/dhclient
1.16 Mb /sbin/dhclient
1.09 Mb /lib/systemd/systemd
0.98 Mb /sbin/mount.ntfs /dev/sda3 /media/n0bit4/Data
0.97 Mb /lib/systemd/systemd
0.96 Mb /lib/systemd/systemd
0.89 Mb /usr/sbin/smartd
0.77 Mb /usr/bin/dbus
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.74 Mb /usr/bin/dbus
0.71 Mb /usr/lib/apt/methods/http
0.68 Mb /bin/bash /usr/bin/mysqld_safe
0.68 Mb /sbin/wpa_supplicant
0.66 Mb /usr/bin/dbus
0.61 Mb /lib/systemd/systemd
0.54 Mb /usr/bin/dbus
0.46 Mb /usr/sbin/cron
0.45 Mb /usr/sbin/irqbalance
0.43 Mb logger
0.41 Mb awk { hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }
0.40 Mb /usr/bin/ssh
0.34 Mb /usr/lib/chromium/chrome
0.32 Mb cut
0.32 Mb cut
0.00 Mb [kthreadd]
0.00 Mb [ksoftirqd/0]
0.00 Mb [kworker/0:0H]
0.00 Mb [rcu_sched]
0.00 Mb [rcu_bh]
0.00 Mb [migration/0]
0.00 Mb [lru
0.00 Mb [watchdog/0]
0.00 Mb [cpuhp/0]
0.00 Mb [cpuhp/1]
0.00 Mb [watchdog/1]
0.00 Mb [migration/1]
0.00 Mb [ksoftirqd/1]
0.00 Mb [kworker/1:0H]
0.00 Mb [cpuhp/2]
0.00 Mb [watchdog/2]
0.00 Mb [migration/2]
0.00 Mb [ksoftirqd/2]
0.00 Mb [kworker/2:0H]
0.00 Mb [cpuhp/3]
0.00 Mb [watchdog/3]
0.00 Mb [migration/3]
0.00 Mb [ksoftirqd/3]
0.00 Mb [kworker/3:0H]
0.00 Mb [kdevtmpfs]
0.00 Mb [netns]
0.00 Mb [khungtaskd]
0.00 Mb [oom_reaper]
0.00 Mb [writeback]
0.00 Mb [kcompactd0]
0.00 Mb [ksmd]
0.00 Mb [khugepaged]
0.00 Mb [crypto]
0.00 Mb [kintegrityd]
0.00 Mb [bioset]
0.00 Mb [kblockd]
0.00 Mb [devfreq_wq]
0.00 Mb [watchdogd]
0.00 Mb [kswapd0]
0.00 Mb [vmstat]
0.00 Mb [kthrotld]
0.00 Mb [ipv6_addrconf]
0.00 Mb [acpi_thermal_pm]
0.00 Mb [ata_sff]
0.00 Mb [scsi_eh_0]
0.00 Mb [scsi_tmf_0]
0.00 Mb [scsi_eh_1]
0.00 Mb [scsi_tmf_1]
0.00 Mb [scsi_eh_2]
0.00 Mb [scsi_tmf_2]
0.00 Mb [scsi_eh_3]
0.00 Mb [scsi_tmf_3]
0.00 Mb [scsi_eh_4]
0.00 Mb [scsi_tmf_4]
0.00 Mb [scsi_eh_5]
0.00 Mb [scsi_tmf_5]
0.00 Mb [bioset]
0.00 Mb [kworker/1:1H]
0.00 Mb [kworker/3:1H]
0.00 Mb [kworker/0:1H]
0.00 Mb [kdmflush]
0.00 Mb [bioset]
0.00 Mb [kdmflush]
0.00 Mb [bioset]
0.00 Mb [jbd2/sda5
0.00 Mb [ext4
0.00 Mb [kworker/2:1H]
0.00 Mb [kauditd]
0.00 Mb [bioset]
0.00 Mb [drbd
0.00 Mb [irq/27
0.00 Mb [i915/signal:0]
0.00 Mb [i915/signal:1]
0.00 Mb [i915/signal:2]
0.00 Mb [ttm_swap]
0.00 Mb [cfg80211]
0.00 Mb [kworker/u17:0]
0.00 Mb [hci0]
0.00 Mb [hci0]
0.00 Mb [kworker/u17:1]
0.00 Mb [iprt
0.00 Mb [iprt
0.00 Mb [kworker/1:0]
0.00 Mb [kworker/3:0]
0.00 Mb [kworker/0:0]
0.00 Mb [kworker/2:0]
0.00 Mb [kworker/u16:0]
0.00 Mb [kworker/u16:2]
0.00 Mb [kworker/3:2]
0.00 Mb [kworker/2:1]
0.00 Mb [kworker/1:2]
0.00 Mb [kworker/0:2]
0.00 Mb [kworker/2:2]
0.00 Mb [kworker/0:1]
0.00 Mb [scsi_eh_6]
0.00 Mb [scsi_tmf_6]
0.00 Mb [usb
0.00 Mb [bioset]
0.00 Mb [kworker/3:1]
0.00 Mb [kworker/u16:1]
There isn't any easy way to calculate this. But some people have tried to get some good answers:
ps_mem.py
ps_mem.py at GitHub
Use smem, which is an alternative to ps which calculates the USS and PSS per process. You probably want the PSS.
USS - Unique Set Size. This is the amount of unshared memory unique to that process (think of it as U for unique memory). It does not include shared memory. Thus this will under-report the amount of memory a process uses, but it is helpful when you want to ignore shared memory.
PSS - Proportional Set Size. This is what you want. It adds together the unique memory (USS), along with a proportion of its shared memory divided by the number of processes sharing that memory. Thus it will give you an accurate representation of how much actual physical memory is being used per process - with shared memory truly represented as shared. Think of the P being for physical memory.
How this compares to RSS as reported by ps and other utilities:
RSS - Resident Set Size. This is the amount of shared memory plus unshared memory used by each process. If any processes share memory, this will over-report the amount of memory actually used, because the same shared memory will be counted more than once - appearing again in each other process that shares the same memory. Thus it is fairly unreliable, especially when high-memory processes have a lot of forks - which is common in a server, with things like Apache or PHP (FastCGI/FPM) processes.
Notice: smem can also (optionally) output graphs such as pie charts and the like. IMO you don't need any of that. If you just want to use it from the command line like you might use ps -A v, then you don't need to install the Python and Matplotlib recommended dependency.
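A quick, hedged example of how that looks from the command line (-t adds a totals row, -k prints human-readable units, -P filters by process name):
sudo smem -t -k -P firefox
The PSS column of that output is the per-process figure discussed above.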
Use time.
Not the Bash builtin time, but the one you can find with which time, for example /usr/bin/time.
Here's what it covers, on a simple ls:
$ /usr/bin/time --verbose ls
(...)
Command being timed: "ls"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 0%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2372
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 121
Voluntary context switches: 2
Involuntary context switches: 9
Swaps: 0
File system inputs: 256
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
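If all you want from that report is the peak memory, one hedged shortcut is to grep the verbose output for the resident-set line (GNU time writes its report to stderr, hence the 2>&1; ./your_program is a placeholder):
/usr/bin/time --verbose ./your_program 2>&1 | grep 'Maximum resident set size'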
Besides the solutions listed in the other answers, you can use the Linux command top. It provides a dynamic real-time view of the running system, and it gives the CPU and memory usage for the whole system, along with every program, in percentages:
top
to filter by a program PID:
top -p <PID>
To filter by a program name:
top | grep <PROCESS NAME>
"top" provides also some fields such as:
VIRT -- Virtual Image (kb): The total amount of virtual memory used by the task
RES -- Resident size (kb): The non-swapped physical memory a task has used ; RES = CODE + DATA.
DATA -- Data+Stack size (kb): The amount of physical memory devoted to other than executable code, also known as the 'data resident set' size or DRS.
SHR -- Shared Mem size (kb): The amount of shared memory used by a task. It simply reflects memory that could be potentially shared with other processes.
Reference here.
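As a hedged extra, procps-ng versions of top also let you pick the sort field from the command line, which is handy for memory:
top -o %MEM
(the -o option selects the sort field; older tops may not support it).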
This is an excellent summary of the tools and problems: archive.org link
I'll quote it, so that more devs will actually read it.
If you want to analyse memory usage of the whole system or to thoroughly analyse memory usage of one application (not just its heap usage), use exmap. For whole system analysis, find processes with the highest effective usage, they take the most memory in practice, find processes with the highest writable usage, they create the most data (and therefore possibly leak or are very ineffective in their data usage). Select such application and analyse its mappings in the second listview. See exmap section for more details. Also use xrestop to check high usage of X resources, especially if the process of the X server takes a lot of memory. See xrestop section for details.
If you want to detect leaks, use valgrind or possibly kmtrace.
If you want to analyse heap (malloc etc.) usage of an application, either run it in memprof or with kmtrace, profile the application and search the function call tree for biggest allocations. See their sections for more details.
I am using Arch Linux and there's this wonderful package called ps_mem:
ps_mem -p <pid>
Example Output
$ ps_mem -S -p $(pgrep firefox)
Private + Shared = RAM used Swap used Program
355.0 MiB + 38.7 MiB = 393.7 MiB 35.9 MiB firefox
---------------------------------------------
393.7 MiB 35.9 MiB
=============================================
There isn't a single answer for this because you can't pin point precisely the amount of memory a process uses. Most processes under Linux use shared libraries.
For instance, let's say you want to calculate memory usage for the 'ls' process. Do you count only the memory used by the executable 'ls' (if you could isolate it)? How about libc? Or all these other libraries that are required to run 'ls' (the list below is ldd output for ls)?
linux-gate.so.1 => (0x00ccb000)
librt.so.1 => /lib/librt.so.1 (0x06bc7000)
libacl.so.1 => /lib/libacl.so.1 (0x00230000)
libselinux.so.1 => /lib/libselinux.so.1 (0x00162000)
libc.so.6 => /lib/libc.so.6 (0x00b40000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00cb4000)
/lib/ld-linux.so.2 (0x00b1d000)
libattr.so.1 => /lib/libattr.so.1 (0x00229000)
libdl.so.2 => /lib/libdl.so.2 (0x00cae000)
libsepol.so.1 => /lib/libsepol.so.1 (0x0011a000)
You could argue that they are shared by other processes, but 'ls' can't be run on the system without them being loaded.
Also, if you need to know how much memory a process needs in order to do capacity planning, you have to calculate how much each additional copy of the process uses. I think /proc/PID/status might give you enough information of the memory usage at a single time. On the other hand, Valgrind will give you a better profile of the memory usage throughout the lifetime of the program.
If your code is in C or C++ you might be able to use getrusage() which returns you various statistics about memory and time usage of your process.
Not all platforms support this, though, and some will return 0 values for the memory-use fields.
Instead you can look at the virtual file created in /proc/[pid]/statm (where [pid] is replaced by your process id. You can obtain this from getpid()).
This file will look like a text file with 7 integers. You are probably most interested in the first (all memory use) and sixth (data memory use) numbers in this file.
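From a shell, the same file can be read directly; a hedged sketch (the PID is hypothetical, and statm reports page counts, so they are converted to kB here):
pid=1234                                   # hypothetical PID
page_kb=$(( $(getconf PAGESIZE) / 1024 ))  # page size in kB, usually 4
read size resident shared text lib data dt < /proc/$pid/statm
echo "total: $(( size * page_kb )) kB   data+stack: $(( data * page_kb )) kB"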
Three more methods to try:
ps aux --sort pmem
It sorts the output by %MEM.
ps aux | awk '{print $2, $4, $11}' | sort -k2 -rn | head -n 15
It prints PID, %MEM and the command, sorted numerically by %MEM, showing the top 15.
top -a
It starts top sorting by %MEM
(Extracted from here)
Valgrind can show detailed information, but it slows down the target application significantly, and most of the time it changes the behavior of the application.
Exmap was something I didn't know yet, but it seems that you need a kernel module to get the information, which can be an obstacle.
I assume what everyone wants to know with respect to "memory usage" is the following...
In Linux, the amount of physical memory a single process might use can be roughly divided into the following categories.
M.a  anonymous mapped memory
     .p  private
         .d  dirty == malloc/mmapped heap and stack allocated and written memory
         .c  clean == malloc/mmapped heap and stack memory once allocated, written, then freed, but not reclaimed yet
     .s  shared
         .d  dirty == malloc/mmapped heap could get copy-on-write and shared among processes (edited)
         .c  clean == malloc/mmapped heap could get copy-on-write and shared among processes (edited)
M.n  named mapped memory
     .p  private
         .d  dirty == file mmapped written memory, private
         .c  clean == mapped program/library text, privately mapped
     .s  shared
         .d  dirty == file mmapped written memory, shared
         .c  clean == mapped library text, shared mapped
The showmap utility included in Android is quite useful:
virtual shared shared private private
size RSS PSS clean dirty clean dirty object
-------- -------- -------- -------- -------- -------- -------- ------------------------------
4 0 0 0 0 0 0 0:00 0 [vsyscall]
4 4 0 4 0 0 0 [vdso]
88 28 28 0 0 4 24 [stack]
12 12 12 0 0 0 12 7909 /lib/ld-2.11.1.so
12 4 4 0 0 0 4 89529 /usr/lib/locale/en_US.utf8/LC_IDENTIFICATION
28 0 0 0 0 0 0 86661 /usr/lib/gconv/gconv-modules.cache
4 0 0 0 0 0 0 87660 /usr/lib/locale/en_US.utf8/LC_MEASUREMENT
4 0 0 0 0 0 0 89528 /usr/lib/locale/en_US.utf8/LC_TELEPHONE
4 0 0 0 0 0 0 89527 /usr/lib/locale/en_US.utf8/LC_ADDRESS
4 0 0 0 0 0 0 87717 /usr/lib/locale/en_US.utf8/LC_NAME
4 0 0 0 0 0 0 87873 /usr/lib/locale/en_US.utf8/LC_PAPER
4 0 0 0 0 0 0 13879 /usr/lib/locale/en_US.utf8/LC_MESSAGES/SYS_LC_MESSAGES
4 0 0 0 0 0 0 89526 /usr/lib/locale/en_US.utf8/LC_MONETARY
4 0 0 0 0 0 0 89525 /usr/lib/locale/en_US.utf8/LC_TIME
4 0 0 0 0 0 0 11378 /usr/lib/locale/en_US.utf8/LC_NUMERIC
1156 8 8 0 0 4 4 11372 /usr/lib/locale/en_US.utf8/LC_COLLATE
252 0 0 0 0 0 0 11321 /usr/lib/locale/en_US.utf8/LC_CTYPE
128 52 1 52 0 0 0 7909 /lib/ld-2.11.1.so
2316 32 11 24 0 0 8 7986 /lib/libncurses.so.5.7
2064 8 4 4 0 0 4 7947 /lib/libdl-2.11.1.so
3596 472 46 440 0 4 28 7933 /lib/libc-2.11.1.so
2084 4 0 4 0 0 0 7995 /lib/libnss_compat-2.11.1.so
2152 4 0 4 0 0 0 7993 /lib/libnsl-2.11.1.so
2092 0 0 0 0 0 0 8009 /lib/libnss_nis-2.11.1.so
2100 0 0 0 0 0 0 7999 /lib/libnss_files-2.11.1.so
3752 2736 2736 0 0 864 1872 [heap]
24 24 24 0 0 0 24 [anon]
916 616 131 584 0 0 32 /bin/bash
-------- -------- -------- -------- -------- -------- -------- ------------------------------
22816 4004 3005 1116 0 876 2012 TOTAL
#!/bin/ksh
#
# Returns total memory used by process $1 in kb.
#
# See /proc/NNNN/smaps if you want to do something
# more interesting.
#
IFS=$'\n'
for line in $(</proc/$1/smaps)
do
[[ $line =~ ^Size:\s+(\S+) ]] && ((kb += ${.sh.match[1]}))
done
print $kb
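Hypothetical usage, assuming the script above is saved as smaps_kb.ksh and made executable:
./smaps_kb.ksh 1234    # prints the summed Size: of all mappings of PID 1234, in kB
Note that summing Size: gives the mapped (virtual) size; summing Rss: or Pss: lines instead would be closer to physical use.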
I'm using htop; it's a very good console program similar to Windows Task Manager.
Get Valgrind. Give it your program to run, and it'll tell you plenty about its memory usage.
This would apply only for the case of a program that runs for some time and stops. I don't know if Valgrind can get its hands on an already-running process or shouldn't-stop processes such as daemons.
A good test of the more "real world" usage is to open the application, run vmstat -s, and check the "active memory" statistic. Close the application, wait a few seconds, and run vmstat -s again.
However much active memory was freed was evidently in use by the application.
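A hedged sketch of that before/after measurement (vmstat -s prints a "K active memory" line whose first field is the value in kB):
before=$(vmstat -s | awk '/active memory/ {print $1}')
# ... close the application here, wait a few seconds ...
after=$(vmstat -s | awk '/active memory/ {print $1}')
echo "active memory freed: $(( before - after )) kB"
Treat the result as a rough indication only, since other activity on the system also moves this counter.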
The below command line will give you the total memory used by the various process running on the Linux machine in MB:
ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | awk '{total=total + $1} END {print total}'
If the process is not using up too much memory (either because you expect this to be the case, or some other command has given this initial indication), and the process can withstand being stopped for a short period of time, you can try to use the gcore command.
gcore <pid>
Check the size of the generated core file to get a good idea how much memory a particular process is using.
This won't work too well if the process is using hundreds of megabytes or gigabytes, as core generation could take several seconds or minutes depending on I/O performance. While the core is being created the process is stopped (or "frozen") to prevent memory changes, so be careful.
Also make sure the mount point where the core is generated has plenty of disk space and that the system will not react negatively to the core file being created in that particular directory.
Note: this works well only when memory consumption is increasing.
If you want to monitor memory usage by a given process (or a group of processes sharing a common name, e.g. google-chrome), you can use my bash script:
while true; do ps aux | awk '{print $5, $11}' | grep chrome | sort -n > /tmp/a.txt; sleep 1; diff /tmp/{b,a}.txt; mv /tmp/{a,b}.txt; done;
This will continuously look for changes and print them.
If you want something quicker than profiling with Valgrind, and your kernel is older so you can't use smaps, a ps with the options to show the resident set of the process (ps -o rss,command) can give you a quick and reasonable approximation of the real amount of non-swapped memory being used.
I would suggest that you use atop. You can find everything about it on this page. It is capable of providing all the necessary KPIs for your processes, and it can also capture to a file.
Another vote for Valgrind here, but I would like to add that you can use a tool like Alleyoop to help you interpret the results generated by Valgrind.
I use the two tools all the time and always have lean, non-leaky code to proudly show for it ;)
Check out this shell script to check memory usage by application in Linux.
It is also available on GitHub and in a version without paste and bc.
Given some of the answers (thanks thomasrutter), to get the actual swap and RAM for a single application I came up with the following. Say we want to know what 'firefox' is using:
sudo smem | awk '/firefox/{swap += $5; pss += $7;} END {print "swap = "swap/1024" PSS = "pss/1024}'
Or for libvirt;
sudo smem | awk '/libvirt/{swap += $5; pss += $7;} END {print "swap = "swap/1024" PSS = "pss/1024}'
This will give you the totals in MB, like so:
swap = 0 PSS = 2096.92
swap = 224.75 PSS = 421.455
Tested on Ubuntu 16.04 through 20.04.
While this question seems to be about examining currently running processes, I wanted to see the peak memory used by an application from start to finish. Besides Valgrind, you can use tstime, which is much simpler. It measures the "highwater" memory usage (RSS and virtual). From this answer.
Based on an answer to a related question.
You may use SNMP to get the memory and CPU usage of a process in a particular device on the network :)
Requirements:
The device running the process should have snmp installed and running
snmp should be configured to accept requests from the host where you will run the script below (this can be configured in snmpd.conf)
You should know the process ID (PID) of the process you want to monitor
Notes:
HOST-RESOURCES-MIB::hrSWRunPerfCPU is the number of centi-seconds of the total system's CPU resources consumed by this process. Note that on a multi-processor system, this value may increment by more than one centi-second in one centi-second of real (wall clock) time.
HOST-RESOURCES-MIB::hrSWRunPerfMem is the total amount of real system memory allocated to this process.
Process monitoring script
echo "IP address: "
read ip
echo "Specfiy PID: "
read pid
echo "Interval in seconds: "
read interval
while [ 1 ]
do
    date
    snmpget -v2c -c public $ip HOST-RESOURCES-MIB::hrSWRunPerfCPU.$pid
    snmpget -v2c -c public $ip HOST-RESOURCES-MIB::hrSWRunPerfMem.$pid
    sleep $interval
done
/proc/xxx/numa_maps gives some info there: N0=??? N1=???. But this result might be lower than the actual usage, as it only counts pages that have been touched.
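To turn those per-node page counts into an approximate size, a hedged sketch (the Nx= fields in numa_maps are page counts; 4 kB pages are assumed, which undercounts mappings backed by huge pages, and 1234 is a placeholder PID):
awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^N[0-9]+=/) { split($i, a, "="); pages += a[2] } } END { print pages * 4, "kB (assuming 4 kB pages)" }' /proc/1234/numa_maps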
