How can I send a signal to my process which runs inside valgrind to check its memory usage status?
Thanks!
To send a signal to a process running under valgrind: pkill -USR1 valgrind doesn't work for me, but
pkill -USR1 memcheck
does the trick.
There is no signal that tells valgrind to report its memory usage status. If you are interested in the amount of memory used by your program over time and where that memory is allocated, valgrind's massif tool can record that information, which can then be displayed using its ms_print utility. Massif automatically records snapshots of the program's memory usage throughout the execution of the program, including a peak snapshot representing the point at which memory usage was at its peak (within 1% using the default options).
To run your program under valgrind's massif tool:
valgrind --tool=massif yourprogram
A binary file named massif.out.&lt;pid&gt; will be created, where &lt;pid&gt; is the process ID. Use ms_print to format the information as text:
ms_print massif.out.12345
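If you are also interested in stack memory or in page-level allocations, massif accepts extra options; for example (a sketch, exact behaviour depends on your valgrind version):
valgrind --tool=massif --stacks=yes --time-unit=B yourprogram
valgrind --tool=massif --pages-as-heap=yes yourprogram
The first form also measures stack usage and reports time as bytes allocated; the second profiles at the page level (mmap/brk) rather than just the heap allocator.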
Related
I want to generate a graph of the allocated memory for a particular PID over time, for which I am currently using a custom script that works from an strace log. From the strace log, I aggregate the memory allocation changes from the mmap, munmap, and brk system calls.
I was wondering, however, if there is a better, more mature solution for this (measuring/graphing the memory allocations of a process over its lifetime).
I believe what you are looking for is a tool called Massif Visualizer, which graphs the output of Valgrind's massif tool. It allows you to view memory allocation for a specific process over time and is still actively maintained.
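As a sketch of the workflow (the exact binary name may vary by distribution; it is often massif-visualizer):
valgrind --tool=massif ./yourprogram
massif-visualizer massif.out.12345
The first command records the allocation snapshots into a massif.out.&lt;pid&gt; file; the second opens that file in the graphical viewer.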
How would I use either vmstat, free, or meminfo to see how much memory was allocated to a process? I also need to do the same for a process running in the background.
vmstat, free, and meminfo display the total memory usage of the system, not the usage of a single process, so I am afraid you can't use any of them. I would recommend using pmap:
pmap <PID>
The last line displays the total memory usage of the process. Background processes have PIDs too, so you can check them the same way.
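If you only want the total, something like this works (a sketch):
pmap -x &lt;PID&gt; | tail -n 1
The -x option adds RSS and dirty-page columns to the per-mapping output, and the last line is the total.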
Since I'm fairly new to Linux and core dumps, I'm not sure what kind of information is stored in them. It makes me wonder if there is a GDB command to retrieve the CPU % usage of threads from a core dump file, like the CPU % usage you get from the 'top' command. It would also be nice to get memory usage.
I'm rephrasing the question from my previous posting to stay more focused on the answer I'm looking for.
Reference : How to diagnose a python process chewing CPU in linux
Thanks.
No, it's not possible to obtain info about CPU usage from a core dump.
The core dump is just a snapshot of the memory of the process at death-time. No dynamic history is available: CPU make/model/frequency, system load, number of other processes, kernel scheduling info, etc.
As a side effect, you DO get the memory usage information, as long as you know how much memory was available on the system that generated the core dump: since the core dump is the memory of the process, the more memory the process used, the bigger the core dump (generally speaking; there are exceptions, such as regions of memory not included in the core dump).
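If you have the core file at hand, you can also list the memory regions it actually contains by reading its program headers, for example (a sketch, assuming the file is called core.12345):
readelf --segments core.12345
Each LOAD entry corresponds to a region of the process's address space written into the dump, with its size in the MemSiz column.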
A core dump is a copy of the crashed process's address space (memory). You can use it to see how much memory the process was using (and you can examine all the data in its memory at the time it crashed), but it doesn't contain any information about CPU usage.
For the future, you can collect this easily enough -- have your process periodically collect memory usage for each thread, and when debugging, hunt for that variable in the core.
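A minimal sketch of that last step (mem_usage_log is a hypothetical global variable that your process would maintain):
gdb /path/to/myapp core
(gdb) print mem_usage_log
Since the core is a snapshot of the process's memory, any statistics your program recorded in its own variables are preserved and can be printed like this.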
I have quite a complex system, with 30 applications running. One quite complex C++ application was leaking memory, and I think I fixed it.
What I've done so far is:
I executed the application using valgrind's memcheck, and it detected no problems.
I monitored the application using htop, and I noticed that virtual and resident memory are not increasing
I am planning to run valgrind's massif and see if it uses new memory
The question is, how can I make sure there are no leaks? I thought if virtual memory stopped increasing, then I could be sure there are no leaks. When I test my application, I trigger the loop where the memory is allocated and deallocated several times just to make sure.
You can't be sure unless you know exactly all the conditions under which the application will allocate new memory. If you can't induce all of these conditions, neither valgrind nor htop will guarantee that your application doesn't leak memory under all circumstances.
Still, you should at least make sure that the application doesn't leak memory under normal conditions.
If valgrind doesn't report leaks, there are no leaks in the sense of memory areas that are no longer accessible (during the runs you checked). That doesn't mean the program can't allocate memory, use it, and then never free it even though it won't use it again (while keeping it reachable). Think, for example, of a typical to-do stack: you place new items on top, work on the item on top, and then push another one. You never go back to the old ones, so the memory used for them is wasted, but technically it isn't a leak.
What you can do is monitor the memory usage of the process. If it steadily increases, you might have a problem there (either a bona fide leak, or some data structure that grows without need).
If this isn't really pressing, it might be cheaper in the long run just to let it be...
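If you also want valgrind to report the "still reachable" memory described above, memcheck can be asked to show all leak kinds; for example (a sketch, the option exists in reasonably recent valgrind versions):
valgrind --leak-check=full --show-leak-kinds=all ./yourapplication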
You need to use a tool called Valgrind. It is a memory debugging, memory leak detection, and profiling tool for Linux and Mac OS X. Valgrind is a flexible program for debugging and profiling Linux executables.
Follow these steps:
Install valgrind.
Run your program as you normally would:
./a.out arg1 arg2
Then use this command line to turn on the detailed memory leak detector:
valgrind --leak-check=yes ./a.out arg1 arg2
valgrind --leak-check=yes /path/to/myapp arg1 arg2
Or
You can also write the output to a log file:
valgrind --log-file=output.file --leak-check=yes --tool=memcheck ./a.out arg1 arg2
You can then check the log file for memory leak errors:
cat output.file
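To pull out just the leak summary from such a log, something like this works (a sketch):
grep -E 'definitely lost|indirectly lost|possibly lost|still reachable' output.file
These phrases are the categories memcheck uses in its LEAK SUMMARY section.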
Is there a way we can record the memory footprint of a process, in such a way that we can still access it after the process has finished?
The typical way I check memory footprint is this:
$ cat /proc/PID/status
But it no longer exists after the process has finished.
You can do something like:
watch 'grep VmSize /proc/PID/status >> log'
When the program ends, you'll have a list of memory footprints over time in the file log.
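An alternative sketch that timestamps each sample and stops on its own when the process exits (with PID set to the process ID in question; VmRSS is the resident set size, use VmSize if you want virtual memory instead):
while kill -0 "$PID" 2>/dev/null; do
    echo "$(date +%s) $(grep VmRSS /proc/$PID/status)" >> log
    sleep 1
done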
Valgrind has a memory profiler called Massif that provides detailed information about the memory usage of your program:
Massif is a heap profiler. It performs detailed heap profiling by taking regular snapshots of a program's heap. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations. The graph is supplemented by a text or HTML file that includes more information for determining where the most memory is being allocated. Massif runs programs about 20x slower than normal.
You can record it using munin + a custom plugin.
This will allow you to monitor, save, and graph the needed process information easily.
Here's a related answer I gave at serverfault.com
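As a rough sketch of what such a custom munin plugin can look like (myapp is a placeholder for your process name; the field name rss and the labels are just examples):
#!/bin/sh
case "$1" in
config)
    echo 'graph_title myapp resident memory'
    echo 'graph_vlabel kB'
    echo 'rss.label VmRSS'
    exit 0 ;;
esac
echo "rss.value $(awk '/VmRSS/ {print $2}' /proc/$(pidof myapp)/status)"
When munin calls the plugin with the config argument it prints the graph description; otherwise it prints the current value, which munin records and graphs over time.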