valgrind massif ms_print prints 0x0: ??? instead of stack - memory-leaks

I use valgrind --tool=massif to get a memory profile, and then print it with ms_print. But 74% of the memory shows up as 0x0: ???. Does that mean 74% of the memory is leaked?
->74.11% (503,526,238B) 0x0: ???
|
->24.66% (167,561,216B) 0x5051C66: ceph::buffer::raw_posix_aligned::raw_posix_aligned(unsigned int, unsigned int) (buffer.cc:393)

No, it doesn't mean there is a leak. It means that massif isn't able to get source information for that point in the call stack.
If you want to detect leaks you should be using memcheck (as a rule).
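For example, a leak-detection run could look like this (./yourprogram is just a placeholder for your own executable):
valgrind --tool=memcheck --leak-check=full ./yourprogram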

Related

is this code secure against memory leaks

I've run a scan with Fortify and it says that this line possibly contains a memory leak:
LPTSTR args = _tcsdup(commandArgs.c_str());
I don't see how someone could exploit that, even if commandArgs is user-manipulated.
Thank you
The memory leak is going to occur regardless of commandArgs. _tcsdup() allocates new memory; you need to later free() the pointer it returns to avoid a memory leak.
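A minimal sketch of the fix, assuming commandArgs is a std::basic_string<TCHAR> (the surrounding code is hypothetical):
LPTSTR args = _tcsdup(commandArgs.c_str());  // allocates a heap copy of the string
if (args != NULL)
{
    // ... use args ...
    free(args);  // release the duplicate to avoid the leak
}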

massif reported heap usage much less than VmRss, what could be wrong?

massif output:
time=3220706
mem_heap_B=393242041
mem_heap_extra_B=73912175
mem_stacks_B=93616
heap_tree=peak
The process shows 1.2 GB in VmRSS, so where does the huge difference come from? (I see RSS growing continuously.)
Per http://cs.swan.ac.uk/~csoliver/ok-sat-library/internet_html/doc/doc/Valgrind/3.8.1/html/ms-manual.html
Heap allocation functions such as malloc are built on top of these system calls. For example, when needed, an allocator will typically call mmap to allocate a large chunk of memory, and then hand over pieces of that memory chunk to the client program in response to calls to malloc et al. Massif directly measures only these higher-level malloc et al calls, not the lower-level system calls.
There is no way to guarantee RSS size based on massif output. With the --pages-as-heap=yes option you may be able to estimate VIRT size, but that includes everything that was mapped into memory, not necessarily residing in RAM.
You may want to play with the --alloc-fn option, which may bring you closer to estimating real memory usage by manually specifying all "custom" memory allocation functions.
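A sketch of both approaches (./yourprogram and my_custom_alloc are placeholders for your own executable and allocation wrapper):
valgrind --tool=massif --pages-as-heap=yes ./yourprogram
valgrind --tool=massif --alloc-fn=my_custom_alloc ./yourprogram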
Valgrind can use significant memory for its own internal housekeeping. So it is normal to have massif report significantly less memory than the process size, as the process size includes the 'client/guest' memory plus valgrind's own memory.
You can use the valgrind option --stats=yes to have more information about the memory used by the client versus the memory used by valgrind.
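For example (assuming the same placeholder program as above):
valgrind --tool=massif --stats=yes ./yourprogram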

MPI memory leak

I am writing some code that uses MPI and I kept noticing memory leaks when running it under valgrind. While trying to identify where the problem was, I ended up with this simple (and totally useless) main:
#include "/usr/include/mpi/mpi.h"

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    MPI_Finalize();
    return 0;
}
As you can see, this code doesn't do anything and shouldn't create any problem. However, when I run the code with valgrind (both in the serial and parallel case), I get the following summary:
==28271== HEAP SUMMARY:
==28271== in use at exit: 190,826 bytes in 2,745 blocks
==28271== total heap usage: 11,214 allocs, 8,469 frees, 16,487,977 bytes allocated
==28271==
==28271== LEAK SUMMARY:
==28271== definitely lost: 5,950 bytes in 55 blocks
==28271== indirectly lost: 3,562 bytes in 32 blocks
==28271== possibly lost: 0 bytes in 0 blocks
==28271== still reachable: 181,314 bytes in 2,658 blocks
==28271== suppressed: 0 bytes in 0 blocks
I don't understand why there are these leaks. Maybe it's just me not being able to read the valgrind output, or not using MPI initialization/finalization correctly...
I am using Open MPI 1.4.1-3 under Ubuntu on a 64-bit architecture, if that helps.
Thanks a lot for your time!
The OpenMPI FAQ addresses issues with valgrind. This refers to initialization issues and memory leaks during finalization, which should have no practical negative impact.
There are many situations, where Open MPI purposefully does not
initialize and subsequently communicates memory, e.g., by calling
writev. Furthermore, several cases are known, where memory is not
properly freed upon MPI_Finalize.
This certainly does not help distinguishing real errors from false
positives. Valgrind provides functionality to suppress errors and
warnings from certain function contexts.
In an attempt to ease debugging using Valgrind, starting with v1.5,
Open MPI provides a so-called Valgrind-suppression file, that can be
passed on the command line:
mpirun -np 2 valgrind --suppressions=$PREFIX/share/openmpi/openmpi-valgrind.supp
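For instance, a complete invocation might look like this (./my_mpi_app is a placeholder for your own executable):
mpirun -np 2 valgrind --suppressions=$PREFIX/share/openmpi/openmpi-valgrind.supp ./my_mpi_app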
You're not doing anything wrong. Memcheck false positives with valgrind are common; the best you can do is suppress them.
This page of the manual speaks more about these false positives. A quote near the end:
The wrappers should reduce Memcheck's false-error rate on MPI
applications. Because the wrapping is done at the MPI interface, there
will still potentially be a large number of errors reported in the MPI
implementation below the interface. The best you can do is try to
suppress them.
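If the shipped suppression file doesn't cover every report, valgrind can also print ready-made suppression entries for the remaining ones, which you can paste into your own suppression file (a sketch; ./my_mpi_app is a placeholder):
mpirun -np 2 valgrind --gen-suppressions=all --suppressions=$PREFIX/share/openmpi/openmpi-valgrind.supp ./my_mpi_app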

Finding allocation site for double-free errors (with valgrind)

Given a double-free error (reported by valgrind), is there a way to find out where the memory was allocated? Valgrind only tells me the location of the deallocation site (i.e. the call to free()), but I would like to know where the memory was allocated.
To have Valgrind keep track of allocation stack traces, you have to use the options:
--track-origins=yes --keep-stacktraces=alloc-and-free
Valgrind will then report the allocation stack in a "Block was alloc'd at" section, just after the "Address ... inside a block of size x free'd" alert.
If your application is large, the --error-limit=no and --num-callers=40 options may be useful too.
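Putting those together, an invocation could look like this (./yourprogram is a placeholder):
valgrind --track-origins=yes --keep-stacktraces=alloc-and-free --error-limit=no --num-callers=40 ./yourprogram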
The first check I would do is to verify that the error is indeed a double free. Sometimes running a program (including under valgrind) can show a double-free error when in reality it is a memory corruption problem (for example a buffer overflow).
The best way to check is to apply the advice detailed in the answers to: How to track down a double free or corruption error in C++ with gdb.
First of all, you can try to compile your program with the flags -fsanitize=address -g. This will instrument the program's memory at runtime to keep track of all allocations, detect overflows, etc.
In any case, if the problem is indeed a double-free, the error message should contain all the necessary information for you to debug the problem.
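For example, a build might look like this (the compiler choice and file names are placeholders; both gcc and clang accept these flags):
g++ -fsanitize=address -g -o myprogram main.cpp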

Send signal to a process inside valgrind?

How can I send a signal to my process which runs inside valgrind to check its memory usage status?
Thanks!
To send a signal to a process running under valgrind, pkill -USR1 valgrind doesn't work for me, but
pkill -USR1 memcheck
does the trick (valgrind runs your program through the selected tool's binary, so the process name matches memcheck rather than valgrind).
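Alternatively, you can signal the exact PID reported by ps (the PID below is a placeholder):
kill -USR1 12345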
There is no signal that tells valgrind to report its memory usage status. If you are interested in the amount of memory used by your program over time, and where that memory is allocated, valgrind's massif tool can record that information, which can then be displayed using its ms_print utility. Massif takes snapshots of the program's memory usage automatically throughout the execution of the program, including a peak snapshot representing the point at which memory usage was at its peak (within 1% using the default options).
To run your program under valgrind's massif tool:
valgrind --tool=massif yourprogram
An output file named massif.out.<pid> will be created (where <pid> is the process ID). Use ms_print to format the information in text form:
ms_print massif.out.12345
