Can you please tell me how I can use valgrind for memory profiling?
The article I found on Google talks about how to use valgrind to find memory leaks. I am interested in how to use it for memory profiling (i.e. how much memory is used by which classes).
Thank you.
You can use valgrind's Massif tool to get a heap profile. This code is still labelled "experimental", and it does not ship with all versions of valgrind. You may have to download and build from source.
Also note that the heap profile is organized by allocation site, which is a finer granularity than classes. If you need information organized by class, you will have to read the developer documentation and get the machine-readable format, then figure out which allocation sites go with which classes - perhaps with support from your compiler.
Even without support for classes, however, the Massif profile may be useful.
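For what it's worth, here is a tiny C++ example (the program and its function names are mine, purely for illustration) with two distinct allocation sites that Massif will attribute separately. Run it under valgrind --tool=massif ./prog and inspect the result with ms_print massif.out.<pid>.

    // Two distinct allocation sites: Massif attributes heap bytes to the call
    // stack that allocated them, i.e. the "allocation site" granularity above.
    #include <string>
    #include <vector>

    std::vector<int> make_numbers() {
        return std::vector<int>(1 << 20);                 // ~4 MB allocated here
    }

    std::vector<std::string> make_strings() {
        return std::vector<std::string>(100000, std::string(64, 'x'));  // and here
    }

    int main() {
        auto numbers = make_numbers();
        auto strings = make_strings();
        (void)numbers; (void)strings;   // keep both alive so a snapshot sees them
        return 0;
    }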
I am adding the JVM arguments -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio to see how the GC behavior changes. I got some explanation here (what is the purpose of -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio).
The question is how I could get the memory usage picture shown in that answer. I want to make sure the modifications I made really change the GC behavior.
It is a tool called VisualVM.
There are some other alternatives, such as JConsole or Java Mission Control.
I'm a compiler researcher and I'm currently researching the impact of different compilation strategies on the final size of an executable. For instance, if a strategy is to generate multiple versions of the same function that bake in some of the parameters (constant propagation), it's to be expected that the amount of code will increase; dead code elimination, on the other hand, decreases it. Similarly, if some optimization relies on a static memory pool whose size is determined at compile time, I'd like to know how much space it is taking.
Most tools I've looked into (e.g. valgrind) seem to focus on stack and heap usage, but don't say much about static memory. Is there some other tool I'm missing?
Try using the New Relic monitoring tool; it will help you collect garbage collection data and see which API is using memory. On Linux there is only physical memory and swap memory; at the application level, programs use heap memory, and New Relic is helpful for observing that.
New Relic link to get stack details:
https://newrelic.com/ --> a 100 GB free trial is available so you can evaluate it against your requirements.
I'm trying to build a toy language compiler (that generates assembly for NASM) and so far so good, but I got really stuck on the topic of dynamic memory allocation. It's the only part of the assembly side that's stopping me from starting my implementation. The goal is just to learn how things work at the low level.
Is there a good and comprehensive guide/tutorial/book about how to dynamically allocate, use and free memory using Assembly (preferably x64/Linux)? I have found some tips here and there mentioning brk, sbrk and mmap, but I don't know how to use them and I feel that there is more to it than just checking the arguments and the return value of these syscalls. How do they work exactly?
For example, in this post, it is mentioned that sbrk moves the border of the data segment. Can I know where the border is initially/after calling sbrk? Can I just use my initial data segment for the first dynamic allocations (and how)?
This other post explains how free works in C, but it does not explain how C actually gives the memory back to the OS. I have also started to read some books on assembly, but somehow they seem to ignore this topic (perhaps because it's OS specific).
Are there some working assembly code examples? I really couldn't find enough information.
I know one way is to use glibc's malloc, but I wanted to know how it is done from assembly. How do compiled languages or even LLVM do it? Do they just use C's malloc?
malloc is an interface provided to userspace programs. It has different implementations, such as ptmalloc, tcmalloc and jemalloc. Depending on your environment, you can choose among different allocators and even implement your own. As far as I know, jemalloc manages memory for userspace programs by mmap-ing a block of the demanded size, and jemalloc controls when that block is released back to the kernel/system. (I know jemalloc is used in Android.) jemalloc also uses sbrk, depending on the state of system memory. For more detailed information, I think you have to read the code of the allocators you want to learn about.
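To make the mmap route concrete, here is a minimal Linux-only sketch (my own toy example, not any particular allocator) of what an allocator ultimately does underneath: request anonymous pages from the kernel with mmap, hand out pieces of them, and return the whole block with munmap. Real allocators such as jemalloc add size classes, free lists and thread caches on top of this; sbrk(0) would similarly tell you the current program break if you preferred the brk route.

    // Minimal bump allocator over one anonymous mapping (Linux). Not thread-safe,
    // and individual frees are not supported: the whole pool is released at once.
    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdio>

    static char*  g_pool = nullptr;
    static size_t g_used = 0;
    static const size_t kPoolSize = 1 << 20;     // 1 MB pool, arbitrary for the sketch

    void* pool_alloc(size_t n) {
        if (!g_pool) {
            // Ask the kernel for anonymous, private, read/write pages.
            void* p = mmap(nullptr, kPoolSize, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) return nullptr;
            g_pool = static_cast<char*>(p);
        }
        n = (n + 15) & ~size_t(15);              // keep 16-byte alignment
        if (g_used + n > kPoolSize) return nullptr;
        void* out = g_pool + g_used;
        g_used += n;
        return out;
    }

    void pool_release() {
        if (g_pool) munmap(g_pool, kPoolSize);   // return the pages to the kernel
        g_pool = nullptr;
        g_used = 0;
    }

    int main() {
        int* xs = static_cast<int*>(pool_alloc(100 * sizeof(int)));
        xs[0] = 42;
        std::printf("first int lives at %p, value %d\n", static_cast<void*>(xs), xs[0]);
        pool_release();
        return 0;
    }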
We have a very large project, basically an application that uses Linux application programming and runs on a PowerPC processor. This project was initially developed by another company. We acquired the project from that company and now we are maintaining it.
The application is reported to have a lot of memory leak issues. Since this is a large project, it is not possible to go through each source file and find the memory leaks. We have used Valgrind, mpatrol and other memory leak detection tools. These tools did not help much, and the memory leaks have not decreased by a significant percentage.
In this situation, how do we go about reducing the memory leaks by a significant amount? Is there a general method people use in these cases to reduce memory leaks, other than the memory leak detection tools mentioned above?
Valgrind is usually among the best tools for this task. If it does not work well, there are only a couple of things you can still do.
First question: what language is the application written in? Valgrind is very good for C and C++, but will not help you with garbage-collected or scripting languages. So check the language first. There might be something similar for Java, but I have not used Java much, so you would have to ask someone else.
Play around with Valgrind's settings. One example is using --leak-check=full or similar options. There are also plugins for Valgrind that can enhance its detection capabilities.
You say that the application was reported to have a memory leak. How was this detected? Did the application detect it by itself? If it was detected by the application on its own, without any external tools, this probably means someone added their own memory tracker inside the application. Custom memory trackers, memory pools, etc. confuse Valgrind and any other leak detection system very badly. So if any custom memory handling is present in the application, your only choice is either to deactivate it (if possible) or to hook into that custom mechanism. How to do that depends entirely on your application.
Add your own memory tracker. For example, in C++ it is possible to hook into new/delete calls and have them track the memory; a minimal sketch follows below. There are a couple of libraries you can use for this, and you can also write your own new/delete replacement in about 500 LOC. If you decide to use this method, be sure to read a lot of tutorials on replacing new/delete, since there are several unusual pitfalls in the C++ world when attempting this task.
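Here is a minimal, hedged sketch of that hook (the counter names and the header trick are my own choices; a real replacement also needs operator new[]/delete[], the nothrow variants and the aligned overloads):

    // Global replacement of operator new/delete that keeps running totals.
    #include <atomic>
    #include <cstddef>
    #include <cstdlib>
    #include <new>

    static std::atomic<std::size_t> g_live_bytes{0};   // bytes currently allocated
    static std::atomic<std::size_t> g_alloc_calls{0};  // number of new calls

    namespace {
    struct Header { std::size_t size; };               // stored in front of each block
    constexpr std::size_t kHeader = alignof(std::max_align_t);  // keeps user data aligned
    }

    void* operator new(std::size_t size) {
        void* raw = std::malloc(size + kHeader);
        if (!raw) throw std::bad_alloc{};
        static_cast<Header*>(raw)->size = size;
        g_live_bytes += size;
        ++g_alloc_calls;
        return static_cast<char*>(raw) + kHeader;
    }

    void operator delete(void* ptr) noexcept {
        if (!ptr) return;
        void* raw = static_cast<char*>(ptr) - kHeader;
        g_live_bytes -= static_cast<Header*>(raw)->size;
        std::free(raw);
    }

    // The sized form must stay consistent with the unsized one.
    void operator delete(void* ptr, std::size_t) noexcept { operator delete(ptr); }

Everything allocated with new now shows up in g_live_bytes; a leak is simply a total that never comes back down, and you can print the counters from an atexit handler or a debugger.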
What makes you so sure there is a memory leak in the application (i.e. how was this detected)? If a tool just reported huge amounts of allocated memory, that does not necessarily mean there is an actual memory leak. A memory leak means that the handles to the memory are lost and hence it becomes impossible to ever reach and free that memory again. If your application just acquires a lot of memory and keeps it reachable, you probably have a completely different problem. For example, you might simply be using an algorithm with bad space complexity at one point or another, leading to many allocations. In that case you will not need a leak detector, but rather a memory profiler, which gives you a more detailed overview of the memory footprint of the different parts of the code. However, I have never used a profiler for this kind of task before, so I cannot give you any more hints on this.
You could replace all memory allocation calls with calls to your own allocation methods, which call the original methods and at the same time record how much memory was allocated and where. This will allow you to find the leaks and eliminate them by hand; see the sketch below.
There might also be automated tools that let you do this - I'm not sure, I haven't used any. But this method works.
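A hedged sketch of that wrapping idea for C-style call sites follows; the dbg_* names, the fixed-size table and the macro trick are my own choices, not any specific tool's API. In practice the #define lines go into a header included by the translation units you want to instrument, after declaring the dbg_* functions.

    // Record every allocation with its source location; whatever is never
    // freed is reported at the end.
    #include <cstdio>
    #include <cstdlib>

    struct AllocRecord {
        void*       ptr;
        std::size_t size;
        const char* file;
        int         line;
    };

    static AllocRecord g_records[100000];   // fixed-size table keeps the sketch simple
    static std::size_t g_count = 0;

    void* dbg_malloc(std::size_t size, const char* file, int line) {
        void* p = std::malloc(size);                 // still call the original allocator
        if (p && g_count < sizeof(g_records) / sizeof(g_records[0]))
            g_records[g_count++] = {p, size, file, line};
        return p;
    }

    void dbg_free(void* p) {
        // Linear search is fine for a sketch; a real tracker would use a hash map.
        for (std::size_t i = 0; i < g_count; ++i)
            if (g_records[i].ptr == p) { g_records[i] = g_records[--g_count]; break; }
        std::free(p);
    }

    void dbg_report() {                              // whatever is left was never freed
        for (std::size_t i = 0; i < g_count; ++i)
            std::fprintf(stderr, "leak: %zu bytes from %s:%d\n",
                         g_records[i].size, g_records[i].file, g_records[i].line);
    }

    // Redirect existing call sites without editing them.
    #define malloc(sz) dbg_malloc((sz), __FILE__, __LINE__)
    #define free(p)    dbg_free(p)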
Perhaps you might also consider using Boehm's garbage collector (that is, using GC_malloc instead of malloc etc. and not bothering to free data).
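If you want to try that route, a minimal sketch (assuming libgc is installed and you link with -lgc) looks like this:

    // Boehm collector: allocate with GC_MALLOC and never call free.
    #include <gc.h>
    #include <cstdio>

    int main() {
        GC_INIT();                                   // initialize the collector once
        for (int i = 0; i < 1000000; ++i) {
            // Blocks become garbage as soon as p goes out of scope and are
            // reclaimed automatically by the collector.
            int* p = static_cast<int*>(GC_MALLOC(100 * sizeof(int)));
            p[0] = i;
        }
        std::printf("collector heap size: %lu bytes\n",
                    static_cast<unsigned long>(GC_get_heap_size()));
        return 0;
    }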
I currently have some trouble with the heap in a program of mine. While I was googling my way through the internet to find solutions, I came across a page on MSDN that describes some linker options for heap allocation that I don't understand.
The documentation says that you can set the heap size with /HEAP.
I always knew that the stack size was fixed, and that makes sense to me. But I always thought that the heap was variable in size. To add to the confusion, I found that the default value is 1 MB, and I have written lots of programs that use more than 1 MB of memory.
What exactly does the /HEAP option do, then?
Thanks
Windows gives .exes (processes) memory by giving them read/write access to pages of memory. To the C++ programmer, this should be left to the operating system and never be messed with.
/HEAP 1,000,000 means that an .exe starts up with 1,000,000 bytes' worth of pages - to start with. Changing this value shouldn't affect anything, since Windows automatically pages in memory as needed. It's just a hint for Windows to give this process the memory it needs up front, for performance.
I think you are confusing the OS heap (the HeapAlloc function), whose size is controlled by the PE header and in turn set by this linker option, with your C++ runtime library's dynamic allocation (malloc, new), which probably grabs memory directly from the OS using VirtualAlloc and doesn't use the OS heap.
For more information on the OS heap parameters, read the MSDN documentation for HeapCreate.
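To see the distinction in code, here is a small Win32-only sketch (my own example, not taken from the documentation): HeapAlloc on the default process heap goes through the OS heap manager whose initial size the /HEAP option sets, while VirtualAlloc asks the OS for raw pages directly, which is roughly how a runtime allocator grows itself.

    // Default process heap (configured by /HEAP) vs. raw pages from the OS.
    #include <windows.h>
    #include <cstdio>

    int main() {
        // Allocation through the OS heap manager (default process heap).
        HANDLE heap = GetProcessHeap();
        void* a = HeapAlloc(heap, HEAP_ZERO_MEMORY, 4096);

        // Raw pages straight from the OS, bypassing any heap manager.
        void* b = VirtualAlloc(nullptr, 1 << 20,
                               MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

        std::printf("HeapAlloc -> %p, VirtualAlloc -> %p\n", a, b);

        VirtualFree(b, 0, MEM_RELEASE);
        HeapFree(heap, 0, a);
        return 0;
    }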