Haskell WinGHCi: Running a Program with Performance Statistics

Using WinGHCi, how can I run my program with the +RTS -sstderr option to get statistics like compile time, memory usage, or anything else of interest?
Currently I am using the command line: ghc -rtsopts -O3 -prof -auto-all Main.hs

Using WinGHCi, first set your active directory to where your program is, e.g.
Prelude> :cd C:\Haskell
Then, when you are in the right directory (as per the previous step), enter:
Prelude> :! ghc +RTS -s -RTS -O2 -prof -fprof-auto Main.hs
Replace 'Main.hs' with your own program's file name.
This will then give statistics as follows:
152,495,848 bytes allocated in the heap
36,973,728 bytes copied during GC
10,458,664 bytes maximum residency (5 sample(s))
1,213,516 bytes maximum slop
21 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 241 colls, 0 par 0.14s 0.20s 0.0008s 0.0202s
Gen 1 5 colls, 0 par 0.08s 0.12s 0.0233s 0.0435s
etc..........
etc.............
Other RTS flags can be set to report other statistics.
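Note that in the command above the +RTS -s -RTS pair is passed to ghc itself, so the numbers report the compilation run. To get the same statistics for a run of your compiled program (a minimal sketch, assuming GHC produced Main.exe on Windows), compile with -rtsopts and then run the executable with the RTS flags from within WinGHCi:
Prelude> :! ghc -O2 -rtsopts Main.hs
Prelude> :! Main.exe +RTS -s
The -rtsopts flag is what allows the finished program to accept +RTS options on its own command line.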

Related

Is there an equivalent for time([some command]) for checking peak memory usage of a bash command?

I want to figure out how much memory a specific command uses but I'm not sure how to check for the peak memory of the command. Is there anything like the time([command]) usage but for memory?
Basically, I'm going to have to run an interactive queue using SLURM, then test a command for a program I need to use for a single sample, see how much memory was used, then submit a bunch of jobs using that info.
Yes, time is the program that monitors other programs and shows the Maximum resident set size. It is not to be confused with the time Bash builtin, which only shows real/user/sys times. On my Arch Linux system you have to install time with pacman -S time; it's a separate package.
$ command time -v echo 1
1
Command being timed: "echo 1"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 0%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 1968
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 90
Voluntary context switches: 1
Involuntary context switches: 1
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
Note:
$ type time
time is a shell keyword
$ time -V
bash: -V: command not found
real 0m0.002s
user 0m0.000s
sys 0m0.002s
$ command time -V
time (GNU Time) 1.9
$ /bin/time -V
time (GNU Time) 1.9
$ /usr/bin/time -V
time (GNU Time) 1.9

What is GHC doing when run with -N (parallel) flag?

I've written the following test application:
main = print $ sum $ map (read . show) [1 .. 10^7]
When I run it with and without -N flag, I get the following results:
$ ghc -O2 -threaded -rtsopts -o test test.hs
...
$ time ./test +RTS -s
50000005000000
real 0m12.411s
user 0m12.367s
sys 0m0.040s
$ time ./test +RTS -s -N12
50000005000000
real 0m22.702s
user 1m14.904s
sys 0m12.608s
It seems like GHC decides to honour the -N12 flag by distributing the calculation over different cores (with very bad results), but I can't find any documentation about how exactly it decides to do so when the code doesn't contain explicit instructions. Is there some documentation that I'm missing?
I have GHC version 8.6.5.
Garbage collection statistics:
$ ghc -O2 -threaded -rtsopts -o test test.hs
...
$ time ./test +RTS -s
50000005000000
54,332,520,712 bytes allocated in the heap
53,571,832 bytes copied during GC
56,824 bytes maximum residency (2 sample(s))
29,192 bytes maximum slop
0 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 52088 colls, 0 par 0.154s 0.150s 0.0000s 0.0001s
Gen 1 2 colls, 0 par 0.000s 0.000s 0.0001s 0.0001s
TASKS: 4 (1 bound, 3 peak workers (3 total), using -N1)
SPARKS: 0(0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
INIT time 0.000s ( 0.000s elapsed)
MUT time 12.250s ( 12.249s elapsed)
GC time 0.155s ( 0.151s elapsed)
EXIT time 0.001s ( 0.010s elapsed)
Total time 12.406s ( 12.410s elapsed)
Alloc rate 4,435,169,879 bytes per MUT second
Productivity 98.7% of total user, 98.7% of total elapsed
real 0m12.411s
user 0m12.367s
sys 0m0.040s
$ time ./test +RTS -s -N12
50000005000000
54,332,687,840 bytes allocated in the heap
214,001,248 bytes copied during GC
183,360 bytes maximum residency (2 sample(s))
146,696 bytes maximum slop
0 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 52088 colls, 52088 par 20.219s 0.975s 0.0000s 0.0001s
Gen 1 2 colls, 1 par 0.001s 0.000s 0.0001s 0.0002s
Parallel GC work balance: 0.15% (serial 0%, perfect 100%)
TASKS: 26 (1 bound, 25 peak workers (25 total), using -N12)
SPARKS: 0(0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
INIT time 0.007s ( 0.003s elapsed)
MUT time 67.281s ( 21.720s elapsed)
GC time 20.221s ( 0.975s elapsed)
EXIT time 0.002s ( 0.003s elapsed)
Total time 87.511s ( 22.701s elapsed)
Alloc rate 807,549,654 bytes per MUT second
Productivity 76.9% of total user, 95.7% of total elapsed
real 0m22.702s
user 1m14.904s
sys 0m12.608s
GHC doesn't automatically parallelize code. (The runtime system itself may take advantage of multiple threads for initialization, giving a small, fixed performance improvement at startup, but that's the only thing that happens "automatically".)
So, your code is running sequentially. As noted in some of the comments, the bizarre performance problem is probably parallel garbage collection.
Parallel GC has been observed to perform very poorly on certain workloads when running on large numbers of capabilities. See issue #14981, for example. Of course, that issue talks about 32- or 64-core machines.
However, I have observed very poor performance, especially with the default runtime GC settings, even on relatively small numbers of cores. For example, using your test case and GHC version, I get similarly poor performance on my 8-core, 16-thread Intel i9-9980HK laptop with -N12 or more. Here is a comparison of a 1-capability run and a 12-capability run. First, compile it:
$ cat test.hs
main = print $ sum $ map (read . show) [1 .. 10^7]
$ stack ghc --resolver=lts-14.27 -- -fforce-recomp -O2 -threaded -rtsopts -o test test.hs
[1 of 1] Compiling Main ( test.hs, test.o )
Linking test ...
Run it on one capability:
$ time ./test +RTS -N1
50000005000000
real 0m10.803s
user 0m10.770s
sys 0m0.037s
Run it on twelve:
$ time ./test +RTS -N12
50000005000000
real 0m15.655s
user 0m52.103s
sys 0m7.019s
To see that parallel GC is at fault, we can switch to sequential GC:
$ time ./test +RTS -N12 -qg
50000005000000
real 0m11.175s
user 0m11.066s
sys 0m0.120s
I had assumed that this poor parallel GC performance was related to exceeding the number of physical cores, but your experience suggests this can happen with around 12 capabilities even if it doesn't exceed the physical core count.
Instead of disabling parallel GC entirely, it is usually better to experiment with the runtime garbage collector controls. The effects can be startling. For example, increasing the generation-0 allocation area from its default of 1m to 4m results in a big improvement:
$ time ./test +RTS -N12 -A4m
50000005000000
real 0m12.485s
user 0m25.219s
sys 0m2.053s
and going even higher to 16m eliminates the performance problem entirely, at least for this simple test case.
$ time ./test +RTS -N12 -A16m
50000005000000
real 0m11.481s
user 0m11.775s
sys 0m0.126s
I get similar improvements switching to compaction for the second generation:
$ time ./test +RTS -N12 -c
50000005000000
real 0m11.125s
user 0m11.043s
sys 0m0.089s
Of course, running the parallel GC on a reduced number of cores may also help:
$ time ./test +RTS -N12 -qn4
50000005000000
real 0m14.092s
user 0m18.961s
sys 0m3.031s
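Once a combination works for your program, it can be baked into the binary as the default with GHC's -with-rtsopts flag, so the options no longer have to be passed by hand; a sketch using the same test program:
$ ghc -O2 -threaded -rtsopts "-with-rtsopts=-N12 -A16m" -o test test.hs
$ time ./test
RTS options given explicitly on the command line still override these baked-in defaults.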

Why using pipe for sort (linux command) is slow?

I have a large text file of ~8GB which I need to do some simple filtering and then sort all the rows. I am on a 28-core machine with SSD and 128GB RAM. I have tried
Method 1
awk '...' myBigFile | sort --parallel=56 > myBigFile.sorted
Method 2
awk '...' myBigFile > myBigFile.tmp
sort --parallel 56 myBigFile.tmp > myBigFile.sorted
Surprisingly, method 1 takes 11.5 min while method 2 only takes (0.75 + 1 < 2) min. Why is sorting so slow when piped? Is it not parallelized?
EDIT
awk and myBigFile are not important; this experiment is repeatable simply by using seq 1 10000000 | sort --parallel 56 (thanks to @Sergei Kurenkov), and I also observed a six-fold speed improvement using the un-piped version on my machine.
When reading from a pipe, sort assumes that the file is small, and for small files parallelism isn't helpful. To get sort to utilize parallelism you need to tell it to allocate a large main memory buffer using -S. In this case the data file is about 8GB, so you can use -S8G. However, at least on your system with 128GB of main memory, method 2 may still be faster.
This is because sort in method 2 can tell from the size of the file that it is huge, and it can seek in the file (neither of which is possible for a pipe). Further, since you have so much memory compared to these file sizes, the data for myBigFile.tmp need not be written to disc before awk exits, and sort will be able to read the file from cache rather than disc. So the principal difference between method 1 and method 2 (on a machine like yours with lots of memory) is that sort in method 2 knows the file is huge and can easily divide up the work (possibly using seek, but I haven't looked at the implementation), whereas in method 1 sort has to discover the data is huge, and it cannot use any parallelism in reading the input since it can't seek the pipe.
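Concretely, method 1 with an explicit buffer size would look like this (the awk program is elided, as in the question):
awk '...' myBigFile | sort --parallel=56 -S8G > myBigFile.sorted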
I think sort does not use threads when reading from a pipe.
I have used this command for your first case. And it shows that sort uses only 1 CPU even though it is told to use 4. atop actually also shows that there is only one thread in sort:
/usr/bin/time -v bash -c "seq 1 1000000 | sort --parallel 4 > bf.txt"
I have used this command for your second case. And it shows that sort uses 2 CPUs. atop actually also shows that there are four threads in sort:
/usr/bin/time -v bash -c "seq 1 1000000 > tmp.bf.txt && sort --parallel 4 tmp.bf.txt > bf.txt"
In your first scenario, sort is an I/O-bound task: it does lots of read syscalls from stdin. In your second scenario, sort uses mmap syscalls to read the file, and it avoids being an I/O-bound task.
Below are results for the first and second scenarios:
$ /usr/bin/time -v bash -c "seq 1 10000000 | sort --parallel 4 > bf.txt"
Command being timed: "bash -c seq 1 10000000 | sort --parallel 4 > bf.txt"
User time (seconds): 35.85
System time (seconds): 0.84
Percent of CPU this job got: 98%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:37.43
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 9320
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 2899
Voluntary context switches: 1920
Involuntary context switches: 1323
Swaps: 0
File system inputs: 0
File system outputs: 459136
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
$ /usr/bin/time -v bash -c "seq 1 10000000 > tmp.bf.txt && sort --parallel 4 tmp.bf.txt > bf.txt"
Command being timed: "bash -c seq 1 10000000 > tmp.bf.txt && sort --parallel 4 tmp.bf.txt > bf.txt"
User time (seconds): 43.03
System time (seconds): 0.85
Percent of CPU this job got: 175%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:24.97
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 1018004
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 2445
Voluntary context switches: 299
Involuntary context switches: 4387
Swaps: 0
File system inputs: 0
File system outputs: 308160
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
You have more system calls if you use the pipe.
seq 1000000 | strace sort --parallel=56 2>&1 >/dev/null | grep read | wc -l
2059
Without the pipe the file is mapped into memory.
seq 1000000 > input
strace sort --parallel=56 input 2>&1 >/dev/null | grep read | wc -l
33
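To see the full syscall mix rather than just the read count, strace's -c option prints a summary table (same input file as above):
strace -c sort --parallel=56 input > /dev/null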
Kernel calls are in most cases the bottleneck. That is the reason why sendfile was invented.

Valgrind memory leak report differs for executable vs. '.so' library

Currently I manage a module of my lab.
I checked for memory leaks using Valgrind. I found several memory leaks and fixed them. But when I made a '.so' file to provide the module as a library, along with test code for running the '.so' library, Valgrind reports a memory leak that did not exist in the original program.
There is no missing function in the test code. I think the difference in options when compiling with g++ causes this situation, but I couldn't verify this.
I use the options below to compile the original module
-ggdb -I. -D_DEBUG -D_CICODE -D_EDITMODE -DPOINTER -fPIC
and
-shared -Wl,-soname,libmysutff.so.1
to make the .so file.
Working environment: Ubuntu 10.10, g++
This is the output of valgrind:
==23426==
==23426== HEAP SUMMARY:
==23426== in use at exit: 1,148 bytes in 8 blocks
==23426== total heap usage: 4,294 allocs, 4,286 frees, 43,176,780 bytes allocated
==23426==
==23426== 300 bytes in 1 blocks are definitely lost in loss record 5 of 6
==23426== at 0x4C2815C: malloc (vg_replace_malloc.c:236)
==23426== by 0x400A27: main
==23426==
==23426== 848 (120 direct, 728 indirect) bytes in 1 blocks are definitely lost in loss record 6 of 6
==23426== at 0x4C27480: calloc (vg_replace_malloc.c:467)
==23426== by 0x4E6E6B6: NewPathNode() (mem_manager.cpp:156)
==23426== by 0x4E61F21: Morph_No_Space(unsigned char*, _path_node**, int, unsigned char) (API.cpp:198)
==23426== by 0x4E625F6: API_Morph (API.cpp:350)
==23426== by 0x4E626A0: API_Tagger (API.cpp:379)
==23426== by 0x400ADC: main
==23426==
==23426== LEAK SUMMARY:
==23426== definitely lost: 420 bytes in 2 blocks
==23426== indirectly lost: 728 bytes in 6 blocks
==23426== possibly lost: 0 bytes in 0 blocks
==23426== still reachable: 0 bytes in 0 blocks
==23426== suppressed: 0 bytes in 0 blocks
==23426==
==23426== For counts of detected and suppressed errors, rerun with: -v
==23426== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 4 from 4)

How can I measure the actual memory usage of an application or process?

How do you measure the memory usage of an application or process in Linux?
From the blog article Understanding memory usage on Linux, ps is not an accurate tool to use for this purpose.
Why ps is "wrong"
Depending on how you look at it, ps is not reporting the real memory usage of processes. What it is really doing is showing how much real memory each process would take up if it were the only process running. Of course, a typical Linux machine has several dozen processes running at any given time, which means that the VSZ and RSS numbers reported by ps are almost definitely wrong.
(Note: This question is covered here in great detail.)
With ps or similar tools you will only get the amount of memory pages allocated by that process. This number is correct, but:
does not reflect the actual amount of memory used by the application, only the amount of memory reserved for it
can be misleading if pages are shared, for example by several threads or by using dynamically linked libraries
If you really want to know what amount of memory your application actually uses, you need to run it within a profiler. For example, Valgrind can give you insights about the amount of memory used, and, more importantly, about possible memory leaks in your program. The heap profiler tool of Valgrind is called 'massif':
Massif is a heap profiler. It performs detailed heap profiling by taking regular snapshots of a program's heap. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations. The graph is supplemented by a text or HTML file that includes more information for determining where the most memory is being allocated. Massif runs programs about 20x slower than normal.
As explained in the Valgrind documentation, you need to run the program through Valgrind:
valgrind --tool=massif <executable> <arguments>
Massif writes a dump of memory usage snapshots (e.g. massif.out.12345). These provide (1) a timeline of memory usage and (2), for each snapshot, a record of where in your program the memory was allocated. A great graphical tool for analyzing these files is massif-visualizer. But I found ms_print, a simple text-based tool shipped with Valgrind, to be of great help already.
To find memory leaks, use the (default) memcheck tool of valgrind.
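For concreteness, the basic invocations look like this (myprog and its arguments are placeholders):
valgrind --tool=massif ./myprog arg1 arg2
ms_print massif.out.12345
valgrind --leak-check=full ./myprog arg1 arg2
The first command writes the massif.out.<pid> file, the second prints a text report of its snapshots, and the third runs memcheck with a detailed leak report.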
Try the pmap command:
sudo pmap -x <process pid>
It is hard to tell for sure, but here are two "close" things that can help.
$ ps aux
will give you Virtual Size (VSZ)
You can also get detailed statistics from the /proc file-system by going to /proc/$pid/status.
The most important is the VmSize, which should be close to what ps aux gives.
/proc/19420$ cat status
Name: firefox
State: S (sleeping)
Tgid: 19420
Pid: 19420
PPid: 1
TracerPid: 0
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 256
Groups: 4 6 20 24 25 29 30 44 46 107 109 115 124 1000
VmPeak: 222956 kB
VmSize: 212520 kB
VmLck: 0 kB
VmHWM: 127912 kB
VmRSS: 118768 kB
VmData: 170180 kB
VmStk: 228 kB
VmExe: 28 kB
VmLib: 35424 kB
VmPTE: 184 kB
Threads: 8
SigQ: 0/16382
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000020001000
SigCgt: 000000018000442f
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
Cpus_allowed: 03
Mems_allowed: 1
voluntary_ctxt_switches: 63422
nonvoluntary_ctxt_switches: 7171
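To pull out just the memory-related lines from that file, a simple filter works (same example PID):
/proc/19420$ grep Vm status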
In recent versions of Linux, use the smaps subsystem. For example, for a process with a PID of 1234:
cat /proc/1234/smaps
It will tell you exactly how much memory it is using at that time. More importantly, it will divide the memory into private and shared, so you can tell how much memory your instance of the program is using, without including memory shared between multiple instances of the program.
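For a single headline figure you can sum the proportional set size (Pss) over all mappings; a sketch, assuming a kernel recent enough to report Pss in smaps:
awk '/^Pss:/ {total += $2} END {print total " kB"}' /proc/1234/smaps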
ps -eo size,pid,user,command --sort -size | \
awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' |\
cut -d "" -f2 | cut -d "-" -f1
Use this as root and you can get a clear output for memory usage by each process.
Output example:
0.00 Mb COMMAND
1288.57 Mb /usr/lib/firefox
821.68 Mb /usr/lib/chromium/chromium
762.82 Mb /usr/lib/chromium/chromium
588.36 Mb /usr/sbin/mysqld
547.55 Mb /usr/lib/chromium/chromium
523.92 Mb /usr/lib/tracker/tracker
476.59 Mb /usr/lib/chromium/chromium
446.41 Mb /usr/bin/gnome
421.62 Mb /usr/sbin/libvirtd
405.11 Mb /usr/lib/chromium/chromium
302.60 Mb /usr/lib/chromium/chromium
291.46 Mb /usr/lib/chromium/chromium
284.56 Mb /usr/lib/chromium/chromium
238.93 Mb /usr/lib/tracker/tracker
223.21 Mb /usr/lib/chromium/chromium
197.99 Mb /usr/lib/chromium/chromium
194.07 Mb conky
191.92 Mb /usr/lib/chromium/chromium
190.72 Mb /usr/bin/mongod
169.06 Mb /usr/lib/chromium/chromium
155.11 Mb /usr/bin/gnome
136.02 Mb /usr/lib/chromium/chromium
125.98 Mb /usr/lib/chromium/chromium
103.98 Mb /usr/lib/chromium/chromium
93.22 Mb /usr/lib/tracker/tracker
89.21 Mb /usr/lib/gnome
80.61 Mb /usr/bin/gnome
77.73 Mb /usr/lib/evolution/evolution
76.09 Mb /usr/lib/evolution/evolution
72.21 Mb /usr/lib/gnome
69.40 Mb /usr/lib/evolution/evolution
68.84 Mb nautilus
68.08 Mb zeitgeist
60.97 Mb /usr/lib/tracker/tracker
59.65 Mb /usr/lib/evolution/evolution
57.68 Mb apt
55.23 Mb /usr/lib/gnome
53.61 Mb /usr/lib/evolution/evolution
53.07 Mb /usr/lib/gnome
52.83 Mb /usr/lib/gnome
51.02 Mb /usr/lib/udisks2/udisksd
50.77 Mb /usr/lib/evolution/evolution
50.53 Mb /usr/lib/gnome
50.45 Mb /usr/lib/gvfs/gvfs
50.36 Mb /usr/lib/packagekit/packagekitd
50.14 Mb /usr/lib/gvfs/gvfs
48.95 Mb /usr/bin/Xwayland :1024
46.21 Mb /usr/bin/gnome
42.43 Mb /usr/bin/zeitgeist
42.29 Mb /usr/lib/gnome
41.97 Mb /usr/lib/gnome
41.64 Mb /usr/lib/gvfs/gvfsd
41.63 Mb /usr/lib/gvfs/gvfsd
41.55 Mb /usr/lib/gvfs/gvfsd
41.48 Mb /usr/lib/gvfs/gvfsd
39.87 Mb /usr/bin/python /usr/bin/chrome
37.45 Mb /usr/lib/xorg/Xorg vt2
36.62 Mb /usr/sbin/NetworkManager
35.63 Mb /usr/lib/caribou/caribou
34.79 Mb /usr/lib/tracker/tracker
33.88 Mb /usr/sbin/ModemManager
33.77 Mb /usr/lib/gnome
33.61 Mb /usr/lib/upower/upowerd
33.53 Mb /usr/sbin/gdm3
33.37 Mb /usr/lib/gvfs/gvfsd
33.36 Mb /usr/lib/gvfs/gvfs
33.23 Mb /usr/lib/gvfs/gvfs
33.15 Mb /usr/lib/at
33.15 Mb /usr/lib/at
30.03 Mb /usr/lib/colord/colord
29.62 Mb /usr/lib/apt/methods/https
28.06 Mb /usr/lib/zeitgeist/zeitgeist
27.29 Mb /usr/lib/policykit
25.55 Mb /usr/lib/gvfs/gvfs
25.55 Mb /usr/lib/gvfs/gvfs
25.23 Mb /usr/lib/accountsservice/accounts
25.18 Mb /usr/lib/gvfs/gvfsd
25.15 Mb /usr/lib/gvfs/gvfs
25.15 Mb /usr/lib/gvfs/gvfs
25.12 Mb /usr/lib/gvfs/gvfs
25.10 Mb /usr/lib/gnome
25.10 Mb /usr/lib/gnome
25.07 Mb /usr/lib/gvfs/gvfsd
24.99 Mb /usr/lib/gvfs/gvfs
23.26 Mb /usr/lib/chromium/chromium
22.09 Mb /usr/bin/pulseaudio
19.01 Mb /usr/bin/pulseaudio
18.62 Mb (sd
18.46 Mb (sd
18.30 Mb /sbin/init
18.17 Mb /usr/sbin/rsyslogd
17.50 Mb gdm
17.42 Mb gdm
17.09 Mb /usr/lib/dconf/dconf
17.09 Mb /usr/lib/at
17.06 Mb /usr/lib/gvfs/gvfsd
16.98 Mb /usr/lib/at
16.91 Mb /usr/lib/gdm3/gdm
16.86 Mb /usr/lib/gvfs/gvfsd
16.86 Mb /usr/lib/gdm3/gdm
16.85 Mb /usr/lib/dconf/dconf
16.85 Mb /usr/lib/dconf/dconf
16.73 Mb /usr/lib/rtkit/rtkit
16.69 Mb /lib/systemd/systemd
13.13 Mb /usr/lib/chromium/chromium
13.13 Mb /usr/lib/chromium/chromium
10.92 Mb anydesk
8.54 Mb /sbin/lvmetad
7.43 Mb /usr/sbin/apache2
6.82 Mb /usr/sbin/apache2
6.77 Mb /usr/sbin/apache2
6.73 Mb /usr/sbin/apache2
6.66 Mb /usr/sbin/apache2
6.64 Mb /usr/sbin/apache2
6.63 Mb /usr/sbin/apache2
6.62 Mb /usr/sbin/apache2
6.51 Mb /usr/sbin/apache2
6.25 Mb /usr/sbin/apache2
6.22 Mb /usr/sbin/apache2
3.92 Mb bash
3.14 Mb bash
2.97 Mb bash
2.95 Mb bash
2.93 Mb bash
2.91 Mb bash
2.86 Mb bash
2.86 Mb bash
2.86 Mb bash
2.84 Mb bash
2.84 Mb bash
2.45 Mb /lib/systemd/systemd
2.30 Mb (sd
2.28 Mb /usr/bin/dbus
1.84 Mb /usr/bin/dbus
1.46 Mb ps
1.21 Mb openvpn hackthebox.ovpn
1.16 Mb /sbin/dhclient
1.16 Mb /sbin/dhclient
1.09 Mb /lib/systemd/systemd
0.98 Mb /sbin/mount.ntfs /dev/sda3 /media/n0bit4/Data
0.97 Mb /lib/systemd/systemd
0.96 Mb /lib/systemd/systemd
0.89 Mb /usr/sbin/smartd
0.77 Mb /usr/bin/dbus
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.74 Mb /usr/bin/dbus
0.71 Mb /usr/lib/apt/methods/http
0.68 Mb /bin/bash /usr/bin/mysqld_safe
0.68 Mb /sbin/wpa_supplicant
0.66 Mb /usr/bin/dbus
0.61 Mb /lib/systemd/systemd
0.54 Mb /usr/bin/dbus
0.46 Mb /usr/sbin/cron
0.45 Mb /usr/sbin/irqbalance
0.43 Mb logger
0.41 Mb awk { hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }
0.40 Mb /usr/bin/ssh
0.34 Mb /usr/lib/chromium/chrome
0.32 Mb cut
0.32 Mb cut
0.00 Mb [kthreadd]
0.00 Mb [ksoftirqd/0]
0.00 Mb [kworker/0:0H]
0.00 Mb [rcu_sched]
0.00 Mb [rcu_bh]
0.00 Mb [migration/0]
0.00 Mb [lru
0.00 Mb [watchdog/0]
0.00 Mb [cpuhp/0]
0.00 Mb [cpuhp/1]
0.00 Mb [watchdog/1]
0.00 Mb [migration/1]
0.00 Mb [ksoftirqd/1]
0.00 Mb [kworker/1:0H]
0.00 Mb [cpuhp/2]
0.00 Mb [watchdog/2]
0.00 Mb [migration/2]
0.00 Mb [ksoftirqd/2]
0.00 Mb [kworker/2:0H]
0.00 Mb [cpuhp/3]
0.00 Mb [watchdog/3]
0.00 Mb [migration/3]
0.00 Mb [ksoftirqd/3]
0.00 Mb [kworker/3:0H]
0.00 Mb [kdevtmpfs]
0.00 Mb [netns]
0.00 Mb [khungtaskd]
0.00 Mb [oom_reaper]
0.00 Mb [writeback]
0.00 Mb [kcompactd0]
0.00 Mb [ksmd]
0.00 Mb [khugepaged]
0.00 Mb [crypto]
0.00 Mb [kintegrityd]
0.00 Mb [bioset]
0.00 Mb [kblockd]
0.00 Mb [devfreq_wq]
0.00 Mb [watchdogd]
0.00 Mb [kswapd0]
0.00 Mb [vmstat]
0.00 Mb [kthrotld]
0.00 Mb [ipv6_addrconf]
0.00 Mb [acpi_thermal_pm]
0.00 Mb [ata_sff]
0.00 Mb [scsi_eh_0]
0.00 Mb [scsi_tmf_0]
0.00 Mb [scsi_eh_1]
0.00 Mb [scsi_tmf_1]
0.00 Mb [scsi_eh_2]
0.00 Mb [scsi_tmf_2]
0.00 Mb [scsi_eh_3]
0.00 Mb [scsi_tmf_3]
0.00 Mb [scsi_eh_4]
0.00 Mb [scsi_tmf_4]
0.00 Mb [scsi_eh_5]
0.00 Mb [scsi_tmf_5]
0.00 Mb [bioset]
0.00 Mb [kworker/1:1H]
0.00 Mb [kworker/3:1H]
0.00 Mb [kworker/0:1H]
0.00 Mb [kdmflush]
0.00 Mb [bioset]
0.00 Mb [kdmflush]
0.00 Mb [bioset]
0.00 Mb [jbd2/sda5
0.00 Mb [ext4
0.00 Mb [kworker/2:1H]
0.00 Mb [kauditd]
0.00 Mb [bioset]
0.00 Mb [drbd
0.00 Mb [irq/27
0.00 Mb [i915/signal:0]
0.00 Mb [i915/signal:1]
0.00 Mb [i915/signal:2]
0.00 Mb [ttm_swap]
0.00 Mb [cfg80211]
0.00 Mb [kworker/u17:0]
0.00 Mb [hci0]
0.00 Mb [hci0]
0.00 Mb [kworker/u17:1]
0.00 Mb [iprt
0.00 Mb [iprt
0.00 Mb [kworker/1:0]
0.00 Mb [kworker/3:0]
0.00 Mb [kworker/0:0]
0.00 Mb [kworker/2:0]
0.00 Mb [kworker/u16:0]
0.00 Mb [kworker/u16:2]
0.00 Mb [kworker/3:2]
0.00 Mb [kworker/2:1]
0.00 Mb [kworker/1:2]
0.00 Mb [kworker/0:2]
0.00 Mb [kworker/2:2]
0.00 Mb [kworker/0:1]
0.00 Mb [scsi_eh_6]
0.00 Mb [scsi_tmf_6]
0.00 Mb [usb
0.00 Mb [bioset]
0.00 Mb [kworker/3:1]
0.00 Mb [kworker/u16:1]
There isn't any easy way to calculate this. But some people have tried to get some good answers:
ps_mem.py
ps_mem.py at GitHub
Use smem, an alternative to ps that calculates the USS and PSS per process. You probably want the PSS.
USS - Unique Set Size. This is the amount of unshared memory unique to that process (think of it as U for unique memory). It does not include shared memory. Thus this will under-report the amount of memory a process uses, but it is helpful when you want to ignore shared memory.
PSS - Proportional Set Size. This is what you want. It adds together the unique memory (USS), along with a proportion of its shared memory divided by the number of processes sharing that memory. Thus it will give you an accurate representation of how much actual physical memory is being used per process - with shared memory truly represented as shared. Think of the P being for physical memory.
How this compares to RSS as reported by ps and other utilities:
RSS - Resident Set Size. This is the amount of shared memory plus unshared memory used by each process. If any processes share memory, this will over-report the amount of memory actually used, because the same shared memory will be counted more than once - appearing again in each other process that shares the same memory. Thus it is fairly unreliable, especially when high-memory processes have a lot of forks - which is common in a server, with things like Apache or PHP (FastCGI/FPM) processes.
Notice: smem can also (optionally) output graphs such as pie charts and the like. IMO you don't need any of that. If you just want to use it from the command line like you might use ps -A v, then you don't need to install the Python and Matplotlib recommended dependency.
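As a concrete starting point (assuming the usual smem options: -P filters by process name, -t prints a total line, -k abbreviates the units):
smem -t -k -P firefox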
Use time.
Not the Bash builtin time, but the one you can find with which time, for example /usr/bin/time.
Here's what it covers, on a simple ls:
$ /usr/bin/time --verbose ls
(...)
Command being timed: "ls"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 0%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2372
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 121
Voluntary context switches: 2
Involuntary context switches: 9
Swaps: 0
File system inputs: 256
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
Besides the solutions listed in the answers, you can use the Linux command top. It provides a dynamic real-time view of the running system, and it gives the CPU and memory usage for the whole system as well as for every program, as percentages:
top
To filter by a program PID:
top -p <PID>
To filter by a program name:
top | grep <PROCESS NAME>
"top" provides also some fields such as:
VIRT -- Virtual Image (kb): The total amount of virtual memory used by the task
RES -- Resident size (kb): The non-swapped physical memory a task has used; RES = CODE + DATA.
DATA -- Data+Stack size (kb): The amount of physical memory devoted to other than executable code, also known as the 'data resident set' size or DRS.
SHR -- Shared Mem size (kb): The amount of shared memory used by a task. It simply reflects memory that could be potentially shared with other processes.
Reference here.
This is an excellent summary of the tools and problems: archive.org link
I'll quote it, so that more devs will actually read it.
If you want to analyse memory usage of the whole system or to thoroughly analyse memory usage of one application (not just its heap usage), use exmap. For whole system analysis, find processes with the highest effective usage, they take the most memory in practice, find processes with the highest writable usage, they create the most data (and therefore possibly leak or are very ineffective in their data usage). Select such application and analyse its mappings in the second listview. See exmap section for more details. Also use xrestop to check high usage of X resources, especially if the process of the X server takes a lot of memory. See xrestop section for details.
If you want to detect leaks, use valgrind or possibly kmtrace.
If you want to analyse heap (malloc etc.) usage of an application, either run it in memprof or with kmtrace, profile the application and search the function call tree for biggest allocations. See their sections for more details.
I am using Arch Linux and there's this wonderful package called ps_mem:
ps_mem -p <pid>
Example Output
$ ps_mem -S -p $(pgrep firefox)
Private + Shared = RAM used Swap used Program
355.0 MiB + 38.7 MiB = 393.7 MiB 35.9 MiB firefox
---------------------------------------------
393.7 MiB 35.9 MiB
=============================================
There isn't a single answer for this because you can't pinpoint precisely the amount of memory a process uses. Most processes under Linux use shared libraries.
For instance, let's say you want to calculate memory usage for the 'ls' process. Do you count only the memory used by the executable 'ls' (if you could isolate it)? How about libc? Or all these other libraries that are required to run 'ls'?
linux-gate.so.1 => (0x00ccb000)
librt.so.1 => /lib/librt.so.1 (0x06bc7000)
libacl.so.1 => /lib/libacl.so.1 (0x00230000)
libselinux.so.1 => /lib/libselinux.so.1 (0x00162000)
libc.so.6 => /lib/libc.so.6 (0x00b40000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00cb4000)
/lib/ld-linux.so.2 (0x00b1d000)
libattr.so.1 => /lib/libattr.so.1 (0x00229000)
libdl.so.2 => /lib/libdl.so.2 (0x00cae000)
libsepol.so.1 => /lib/libsepol.so.1 (0x0011a000)
You could argue that they are shared by other processes, but 'ls' can't be run on the system without them being loaded.
Also, if you need to know how much memory a process needs in order to do capacity planning, you have to calculate how much each additional copy of the process uses. I think /proc/PID/status might give you enough information of the memory usage at a single time. On the other hand, Valgrind will give you a better profile of the memory usage throughout the lifetime of the program.
If your code is in C or C++ you might be able to use getrusage() which returns you various statistics about memory and time usage of your process.
Not all platforms support this, though, and some will return 0 values for the memory-use fields.
Instead you can look at the virtual file created in /proc/[pid]/statm (where [pid] is replaced by your process id. You can obtain this from getpid()).
This file will look like a text file with 7 integers. You are probably most interested in the first (all memory use) and sixth (data memory use) numbers in this file.
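To turn those page counts into kilobytes from the shell, something along these lines works (1234 is a placeholder PID; statm values are in pages):
awk -v page=$(getconf PAGESIZE) '{printf "total: %d kB, data: %d kB\n", $1 * page / 1024, $6 * page / 1024}' /proc/1234/statm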
Three more methods to try:
ps aux --sort pmem
It sorts the output by %MEM.
ps aux | awk '{print $2, $4, $11}' | sort -k2r | head -n 15
It sorts using pipes.
top -a
It starts top sorting by %MEM
(Extracted from here)
Valgrind can show detailed information, but it slows down the target application significantly, and most of the time it changes the behavior of the application.
Exmap was something I didn't know yet, but it seems that you need a kernel module to get the information, which can be an obstacle.
I assume what everyone wants to know with respect to "memory usage" is the following...
In Linux, the amount of physical memory a single process might use can be roughly divided into the following categories.
M.a  anonymous mapped memory
     .p  private
         .d  dirty == malloc/mmapped heap and stack allocated and written memory
         .c  clean == malloc/mmapped heap and stack memory once allocated, written, then freed, but not reclaimed yet
     .s  shared
         .d  dirty == malloc/mmapped heap could get copy-on-write and shared among processes (edited)
         .c  clean == malloc/mmapped heap could get copy-on-write and shared among processes (edited)
M.n  named mapped memory
     .p  private
         .d  dirty == file mmapped written memory private
         .c  clean == mapped program/library text private mapped
     .s  shared
         .d  dirty == file mmapped written memory shared
         .c  clean == mapped library text shared mapped
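On Linux, these categories map roughly onto the per-mapping Shared_Clean / Shared_Dirty / Private_Clean / Private_Dirty fields in /proc/<pid>/smaps, so they can be totalled directly; a sketch with a placeholder PID:
grep -E '^(Shared|Private)_(Clean|Dirty)' /proc/1234/smaps | awk '{sum[$1] += $2} END {for (k in sum) print k, sum[k], "kB"}'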
The showmap utility included in Android is quite useful:
virtual shared shared private private
size RSS PSS clean dirty clean dirty object
-------- -------- -------- -------- -------- -------- -------- ------------------------------
4 0 0 0 0 0 0 0:00 0 [vsyscall]
4 4 0 4 0 0 0 [vdso]
88 28 28 0 0 4 24 [stack]
12 12 12 0 0 0 12 7909 /lib/ld-2.11.1.so
12 4 4 0 0 0 4 89529 /usr/lib/locale/en_US.utf8/LC_IDENTIFICATION
28 0 0 0 0 0 0 86661 /usr/lib/gconv/gconv-modules.cache
4 0 0 0 0 0 0 87660 /usr/lib/locale/en_US.utf8/LC_MEASUREMENT
4 0 0 0 0 0 0 89528 /usr/lib/locale/en_US.utf8/LC_TELEPHONE
4 0 0 0 0 0 0 89527 /usr/lib/locale/en_US.utf8/LC_ADDRESS
4 0 0 0 0 0 0 87717 /usr/lib/locale/en_US.utf8/LC_NAME
4 0 0 0 0 0 0 87873 /usr/lib/locale/en_US.utf8/LC_PAPER
4 0 0 0 0 0 0 13879 /usr/lib/locale/en_US.utf8/LC_MESSAGES/SYS_LC_MESSAGES
4 0 0 0 0 0 0 89526 /usr/lib/locale/en_US.utf8/LC_MONETARY
4 0 0 0 0 0 0 89525 /usr/lib/locale/en_US.utf8/LC_TIME
4 0 0 0 0 0 0 11378 /usr/lib/locale/en_US.utf8/LC_NUMERIC
1156 8 8 0 0 4 4 11372 /usr/lib/locale/en_US.utf8/LC_COLLATE
252 0 0 0 0 0 0 11321 /usr/lib/locale/en_US.utf8/LC_CTYPE
128 52 1 52 0 0 0 7909 /lib/ld-2.11.1.so
2316 32 11 24 0 0 8 7986 /lib/libncurses.so.5.7
2064 8 4 4 0 0 4 7947 /lib/libdl-2.11.1.so
3596 472 46 440 0 4 28 7933 /lib/libc-2.11.1.so
2084 4 0 4 0 0 0 7995 /lib/libnss_compat-2.11.1.so
2152 4 0 4 0 0 0 7993 /lib/libnsl-2.11.1.so
2092 0 0 0 0 0 0 8009 /lib/libnss_nis-2.11.1.so
2100 0 0 0 0 0 0 7999 /lib/libnss_files-2.11.1.so
3752 2736 2736 0 0 864 1872 [heap]
24 24 24 0 0 0 24 [anon]
916 616 131 584 0 0 32 /bin/bash
-------- -------- -------- -------- -------- -------- -------- ------------------------------
22816 4004 3005 1116 0 876 2012 TOTAL
#!/bin/ksh
#
# Returns total memory used by process $1 in kb.
#
# See /proc/NNNN/smaps if you want to do something
# more interesting.
#
IFS=$'\n'
for line in $(</proc/$1/smaps)
do
[[ $line =~ ^Size:\s+(\S+) ]] && ((kb += ${.sh.match[1]}))
done
print $kb
I'm using htop; it's a very good console program similar to Windows Task Manager.
Get Valgrind. Give it your program to run, and it'll tell you plenty about its memory usage.
This only applies to a program that runs for some time and then stops. I don't know if Valgrind can get its hands on an already-running process or on shouldn't-stop processes such as daemons.
A good test of the more "real world" usage is to open the application, run vmstat -s, and check the "active memory" statistic. Close the application, wait a few seconds, and run vmstat -s again.
However much active memory was freed was evidently in use by the application.
The below command line will give you the total memory used by the various process running on the Linux machine in MB:
ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | awk '{total=total + $1} END {print total}'
If the process is not using up too much memory (either because you expect this to be the case, or some other command has given this initial indication), and the process can withstand being stopped for a short period of time, you can try to use the gcore command.
gcore <pid>
Check the size of the generated core file to get a good idea how much memory a particular process is using.
This won't work too well if the process is using hundreds of megabytes or gigabytes, as the core generation could take several seconds or minutes depending on I/O performance. During the core creation the process is stopped (or "frozen") to prevent memory changes. So be careful.
Also make sure the mount point where the core is generated has plenty of disk space and that the system will not react negatively to the core file being created in that particular directory.
Note: this works 100% well only when memory consumption increases
If you want to monitor memory usage by a given process (or a group of processes sharing a common name, e.g. google-chrome), you can use my bash script:
while true; do ps aux | awk '{print $5, $11}' | grep chrome | sort -n > /tmp/a.txt; sleep 1; diff /tmp/{b,a}.txt; mv /tmp/{a,b}.txt; done;
This will continuously look for changes and print them.
If you want something quicker than profiling with Valgrind, and your kernel is older and you can't use smaps, a ps with the options to show the resident set of the process (with ps -o rss,command) can give you a quick and reasonable approximation of the real amount of non-swapped memory being used.
I would suggest that you use atop. You can find everything about it on this page. It is capable of providing all the necessary KPIs for your processes, and it can also capture to a file.
Another vote for Valgrind here, but I would like to add that you can use a tool like Alleyoop to help you interpret the results generated by Valgrind.
I use the two tools all the time and always have lean, non-leaky code to proudly show for it ;)
Check out this shell script to check memory usage by application in Linux.
It is also available on GitHub and in a version without paste and bc.
Given some of the answers (thanks thomasrutter), to get the actual swap and RAM for a single application I came up with the following. Say we want to know what 'firefox' is using:
sudo smem | awk '/firefox/{swap += $5; pss += $7;} END {print "swap = "swap/1024" PSS = "pss/1024}'
Or for libvirt;
sudo smem | awk '/libvirt/{swap += $5; pss += $7;} END {print "swap = "swap/1024" PSS = "pss/1024}'
This will give you the total in MB like so;
swap = 0 PSS = 2096.92
swap = 224.75 PSS = 421.455
Tested on ubuntu 16.04 through 20.04.
While this question seems to be about examining currently running processes, I wanted to see the peak memory used by an application from start to finish. Besides Valgrind, you can use tstime, which is much simpler. It measures the "highwater" memory usage (RSS and virtual). From this answer.
Based on an answer to a related question.
You may use SNMP to get the memory and CPU usage of a process in a particular device on the network :)
Requirements:
The device running the process should have snmp installed and running
snmp should be configured to accept requests from where you will run the script below (it may be configured in file snmpd.conf)
You should know the process ID (PID) of the process you want to monitor
Notes:
HOST-RESOURCES-MIB::hrSWRunPerfCPU is the number of centi-seconds of the total system's CPU resources consumed by this process. Note that on a multi-processor system, this value may increment by more than one centi-second in one centi-second of real (wall clock) time.
HOST-RESOURCES-MIB::hrSWRunPerfMem is the total amount of real system memory allocated to this process.
Process monitoring script
echo "IP address: "
read ip
echo "Specfiy PID: "
read pid
echo "Interval in seconds: "
read interval
while [ 1 ]
do
date
snmpget -v2c -c public $ip HOST-RESOURCES-MIB::hrSWRunPerfCPU.$pid
snmpget -v2c -c public $ip HOST-RESOURCES-MIB::hrSWRunPerfMem.$pid
sleep $interval;
done
/proc/<pid>/numa_maps gives some info there: N0=??? N1=???. But this result might be lower than the actual result, as it only counts pages which have been touched.
