I took a memory dump and analysed it with Memory Analyzer. It shows 73% of the memory being held by java.lang.ref.Finalizer objects. I went to see what is inside this very large object and found what looks like a recursive trail of objects, like below:
Finalizer
|__ Finalizer (recursive)
|__ java.io.FileInputStream or org.eclipse.jetty.util.resource.FileResource
Inside the FileResource I found the path to the extracted WAR file, but I could not find what is inside the FileInputStream object.
Screenshots can be found here:
https://lh4.googleusercontent.com/-uZTZ031DlqI/UD33kMskuZI/AAAAAAAABYo/eOrqw65k_Mw/s1179/summary.png
https://lh6.googleusercontent.com/-yWBPUV_71js/UD33kAYYDEI/AAAAAAAABYk/J9fF_WwOeO4/s1074/details.png
Please let me know what might be causing this.
That is not a leak per se. Please read this article about the finalization mechanism in the JVM: http://www.oracle.com/technetwork/articles/javase/finalization-137655.html
Finalizers can become a problem if too many finalizable objects are created; in your case, FileInputStream. You can try to decrease your heap size somewhat so that the Garbage Collector runs more frequently and disposes of them faster.
Or better, if possible, reduce your usage of FileInputStream.
I have a lot of "not very well written" JavaScript scripts that I run in a Node.js environment. They contain memory leaks, infinite loops, and whatever code a "regular" (non-programmer) user can produce.
What I found when randomly analyzing the execution of those scripts was that some of them have a huge rss memory area, say around 1.0 GB, while heapTotal might be "just" around 450 MB.
Despite reading blog posts about the memory layout in Node.js, I am not able to explain or simulate such a "leak". I tried to create a heap dump, but obviously I won't find what is stored in the "stack area" because I did not dump that zone.
Does anyone know what has to happen in the source code to leak all that memory while the heap size stays much smaller, i.e. what would "evil source code" that eats space outside of the heap look like?
EDIT:
I found out that it's pretty simple: const c = Buffer.alloc(1024*1024*1024, 1) consumes 1 GB outside of the heap. New questions arise: how can one "clean out" this space and free the memory up? How can I detect leaky buffers? Is a restart the only way?
I have run into a little trouble lately: the memory used by GenServer processes is very high, probably because of large binary leaks.
The problem comes from here: we receive large binaries through the GenServer and pass them to the consumer, which then works with that data. Now, these large binaries are never assigned to a variable, and the GC doesn't collect them.
I have tried hibernating the processes after handling the data, which partially worked: the memory used by processes dropped a lot. But since the binaries were not getting GC'd, the memory used by them increased slowly but steadily, from 30 MB without hibernation to 200 MB with process hibernation in about 25 minutes.
I have also tried setting :erlang.system_flag(:fullsweep_after, 0), which also worked and lowered the memory used by processes by around 20%.
Before and after.
I must say it goes down to 60-70 MB used by processes from time to time.
Edit: using :recon.bin_leak(15) frees a lot of memory; see the result of :recon.bin_leak(15).
Anyhow, the memory used is still high, and I'm sure it can be fixed.
Here is a screenshot taken from the Observer's Processes tab. As you can see, the GenServer is the one eating memory like the Cookie Monster.
I have researched this topic a lot, tried all the suggestions and possible solutions given out there, and nevertheless I am still in this position.
Any help is welcome.
The code is in this GitHub repository.
Here is the code of interest that is probably causing this, plus the applications tree. Three of the four processes there (<0.294.0>, <0.295.0>, <0.297.0>) are using 27 MB of memory.
Thank you in advance for reading.
You can try adding the :hibernate atom to your handle_events return values in your GenStage-related modules. For example:
def handle_events(events, _from, %{handler: handler, public: public} = state) do
  public = handle(handler, events, public)
  # the :hibernate hint makes the process garbage-collect and compact
  # its heap once the events have been handled
  {:noreply, [], %{state | public: public}, :hibernate}
end
Another option is to record the PIDs after :recon.bin_leak() and then pass them to Process.info(PID) to get some more information about the offending GenServers.
Some additional resources:
https://elixirforum.com/t/extremely-high-memory-usage-in-genservers/4035/23
https://www.erlang-in-anger.com/ (Specifically Chapter 7 on Memory Leaks)
My project has moved from Java 7 to Java 8.
After switching to Java 8, we are seeing the memory consumed grow over time.
Here are the investigations we have done:
The issue appears only after migrating from Java 7 to Java 8.
As Metaspace is the only memory-related change from Java 7 to Java 8, we monitored Metaspace; it does not grow by more than 20 MB.
The heap also remains consistent.
Now the only path left is to analyze how memory gets distributed to the process in Java 7 versus Java 8, specifically private bytes. Any thoughts or links here would be appreciated.
NOTE: this javaw application is a Swing-based application.
UPDATE 1: After analyzing the native memory with the NMT tool, we generated a diff of occupied memory compared to a baseline. We found that the heap remained the same but threads are leaking all this memory. As there is no change in the heap, I assume this leak comes from native code.
So the challenge remains open. Any thoughts on how to analyze the memory occupied by all the threads would be helpful here.
Below are the snapshots taken from Native Memory Tracking.
In this picture, you can see that thread memory increased by 88 MB, and that the arena and resource handle counts increased a lot.
In this picture, you can see that 73 MB of the increase is in malloc, but no method name is shown here.
So please share some insight into understanding these two screenshots.
You may try another GC implementation, like G1, which was introduced in Java 7 and is the default GC in Java 9. To do so, just launch your Java apps with:
-XX:+UseG1GC
There is also an interesting feature of the G1 GC since Java 8u20 that can look for duplicated Strings in the heap and "deduplicate" them (this only works if you activate G1, not with Java 8's default GC):
-XX:+UseStringDeduplication
Be sure to test your system thoroughly before going to production with such a change!
Here you can find a nice description of the different GCs you can use.
I encountered the exact same issue.
Heap usage was constant and only Metaspace increased; NMT diffs showed a slow but steady leak in the memory used by threads, specifically in the arena allocations. I tried to fix it by setting the MALLOC_ARENA_MAX=1 env var, but that was not fruitful. Profiling native memory allocation with jemalloc/jeprof showed no leakage that could be attributed to client code, pointing instead to a JDK issue; the only smoking gun there was the memory leak due to malloc calls which, in theory, should come from JVM code.
Like you, I found that upgrading the JDK fixed the problem. The reason I am posting an answer here is that I know why it fixes the issue: it is a JDK bug that was fixed in JDK 8u152: https://bugs.openjdk.java.net/browse/JDK-8164293
The bug report mentions a Class/malloc increase, not Thread/arena, but a bit further down one of the comments clarifies that the bug reproduction clearly shows an increase in Thread/arena.
Consider optimising the JVM options:
Parallel collector (throughput collector): -XX:+UseParallelGC
Concurrent collector (low-latency collector): -XX:+UseConcMarkSweepGC
String duplicates remover: -XX:+UseStringDeduplication
Compact ratio optimisation: -XXcompactRatio:
And refer to link1 and link2.
In this answer of mine you can find information and references on how to profile the native memory of the JVM to find memory leaks. In short, see this.
UPDATE
Did you use the -XX:NativeMemoryTracking=detail option? The results are straightforward: they show that most of the memory is allocated by malloc. :) That much is fairly obvious. Your next step is to profile your application. To analyze native methods and Java, I use (and we use in production) flame graphs with perf_events. Look at this blog post for a good start.
Note that your memory increased for the threads; likely the number of threads in your application grows. Before using perf, I recommend analyzing thread dumps before/after to check whether the number of Java threads grows and why. You can get thread dumps with jstack/jvisualvm/jmc, etc.
This issue does not occur with Java 8 update 152. The exact root cause of why it occurred with earlier versions has still not been clearly identified.
I need to check for a memory leak in an embedded system.
The IDE is HEW and we are using the uC/OS-III RTOS.
Valgrind does not support this configuration. Can you please suggest a tool or a method to check for memory leaks?
The first rule of dynamic memory allocation in embedded systems is "don't". Allocate it all once at the start of execution and then leave well alone. Otherwise you have to assess and decide what to do when a malloc (or similar operation) fails.
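For instance, a fixed-block scheme along these lines reserves everything up front (the sizes and names below are arbitrary placeholders, and an RTOS build would need to guard buffer_get/buffer_put against concurrent access):

#include <stddef.h>

#define N_BUFFERS   16
#define BUFFER_SIZE 512

/* the whole pool is reserved statically; no malloc at runtime */
static unsigned char pool[N_BUFFERS][BUFFER_SIZE];
static unsigned char in_use[N_BUFFERS];

void *buffer_get(void)
{
    for (int i = 0; i < N_BUFFERS; i++) {
        if (!in_use[i]) {
            in_use[i] = 1;
            return pool[i];
        }
    }
    return NULL;  /* pool exhausted: a sizing problem you find in testing */
}

void buffer_put(void *p)
{
    for (int i = 0; i < N_BUFFERS; i++) {
        if (p == pool[i]) {
            in_use[i] = 0;
            return;
        }
    }
}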
If you must dynamically allocate memory at runtime, then at its simplest you may be able to use a logging infrastructure to track calls to malloc/free by writing wrappers around them. Then you can track where and when the allocations and deallocations are happening and hopefully see what is missing.
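As a rough sketch of that idea (the traced_* names and log format are invented for illustration; route the output to whatever logging facility your system already has):

#include <stdio.h>
#include <stdlib.h>

/* wrappers that log every allocation and deallocation with its call site */
void *traced_malloc(size_t size, const char *file, int line)
{
    void *p = malloc(size);
    fprintf(stderr, "alloc %p (%zu bytes) at %s:%d\n", p, size, file, line);
    return p;
}

void traced_free(void *p, const char *file, int line)
{
    fprintf(stderr, "free  %p at %s:%d\n", p, file, line);
    free(p);
}

/* route existing call sites through the wrappers */
#define malloc(sz) traced_malloc((sz), __FILE__, __LINE__)
#define free(p)    traced_free((p), __FILE__, __LINE__)

Post-processing the log to pair each alloc with its free leaves you with the unmatched allocations, which are your leak candidates.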
Take a look at libtalloc, the core memory allocator used in Samba. It may not work out-of-the-box for you if you don't have atexit() or stdio.h, but it shouldn't take too much work to port it to your environment.
Have a look at talloc_enable_leak_report_full() and talloc_report_full() (among others) to get you started.
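A minimal sketch of how those calls fit together, assuming the stock talloc API is available on your target (talloc_new, talloc_strdup and talloc_free are its standard entry points; the string here is just a stand-in):

#include <stdio.h>
#include <talloc.h>

int main(void)
{
    talloc_enable_leak_report_full();   /* report anything still live at exit */

    void *ctx = talloc_new(NULL);              /* a top-level context */
    char *name = talloc_strdup(ctx, "probe");  /* allocated as a child of ctx */
    (void)name;

    talloc_report_full(ctx, stderr);  /* dump the allocation tree right now */
    talloc_free(ctx);                 /* frees ctx and every child in one go */
    return 0;
}

Because every allocation hangs off a context, anything you forgot to free shows up in the report attached to its parent, which makes the leak's origin easier to spot.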
I have given this some thought, and here is an attempt at how to do this on embedded systems:
First, you need to check in which thread the leakage occurs. When allocating, also count the number of active allocations per thread. A thread whose allocation count keeps growing without deallocations is suspicious.
Secondly, you need to count the allocations coming from that thread. To do this, replace alloc with a macro. Using a macro, you can record the file name and line number where the call originated.
For example:
#define alloc(x) my_alloc((x), __LINE__, __FILE__)

void *my_alloc(size_t size, int line, const char *file)
{
    /* increase the allocation count for this line/file combination here */
    return malloc(size);  /* then forward to the real allocator */
}
Similarly, you need to define my_free; a fuller sketch follows below.
After this, run the program and periodically printf the allocation counts that keep growing. This should help you find the memory leaks.
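Here is a self-contained sketch of that bookkeeping (every name is made up for illustration, standard malloc/free stand in for whatever allocator your RTOS provides, and a real uC/OS-III port would need a critical section around the site table):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define alloc(x)   my_alloc((x), __LINE__, __FILE__)
#define dealloc(p) my_free(p)

#define MAX_SITES 64

struct alloc_site {
    const char *file;
    int line;
    long live;  /* allocations minus frees attributed to this site */
};

static struct alloc_site sites[MAX_SITES];
static int site_count;

/* header stored in front of each block so my_free knows its origin */
struct alloc_header {
    struct alloc_site *site;
};

static struct alloc_site *find_site(const char *file, int line)
{
    for (int i = 0; i < site_count; i++)
        if (sites[i].line == line && strcmp(sites[i].file, file) == 0)
            return &sites[i];
    if (site_count == MAX_SITES)
        return NULL;  /* table full; counts for new sites are dropped */
    sites[site_count].file = file;
    sites[site_count].line = line;
    sites[site_count].live = 0;
    return &sites[site_count++];
}

void *my_alloc(size_t size, int line, const char *file)
{
    struct alloc_header *h = malloc(sizeof *h + size);
    if (h == NULL)
        return NULL;
    h->site = find_site(file, line);
    if (h->site != NULL)
        h->site->live++;
    return h + 1;  /* hand the caller the memory just past the header */
}

void my_free(void *p)
{
    if (p == NULL)
        return;
    struct alloc_header *h = (struct alloc_header *)p - 1;
    if (h->site != NULL)
        h->site->live--;
    free(h);
}

/* call periodically: sites whose count only ever grows are leak suspects */
void report_allocs(void)
{
    for (int i = 0; i < site_count; i++)
        printf("%s:%d live allocations: %ld\n",
               sites[i].file, sites[i].line, sites[i].live);
}

Calling report_allocs() from a low-priority task every few seconds shows which file/line pairs only ever grow; extending the site table with a task ID gives you the per-thread counts from the first step.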
P.S. I didn't test this, but I saw somebody do something similar in our code :)
Your requirement is not completely clear. If you are looking for a Valgrind-like tool that can find the memory leaks in your environment, that will be difficult to find.
If you have the code, you can check all the memory allocations and frees in the particular application, as in link1 and link2.
There are also some files available; by building them into your program you can find the memory leaks:
http://code.axter.com/debugalloc.cpp
http://code.axter.com/debugalloc.h
http://code.axter.com/debuglogger.cpp
http://code.axter.com/debuglogger.h
http://code.axter.com/debuglog.c
http://code.axter.com/debuglog.h
The debugalloc.* code has the ability to track memory leaks, and it has description and usage information in its comments.
The debuglogger.* code has some code for profiling your code.
debuglog.* is a limited C version of the code.
We have a very large project, basically an application that uses Linux application programming and runs on a PowerPC processor. This project was initially developed by another company. We acquired the project from that company and now we are maintaining it.
The application is reported to have a lot of memory leak issues. Since this is a large project, it is not possible to go into each source file and find the memory leaks. We have used Valgrind, mpatrol and other memory leak detection tools. These tools did not help much, and the memory leaks have not decreased by a significant percentage.
In this situation, how does one go about reducing the memory leaks by a significant amount? Is there a general method people use in these cases, other than memory leak detection tools like those mentioned above?
Valgrind is usually among the best tools for these tasks. If it does not work correctly, there might only be a couple of things you can still do.
First question: what language is the application in? Valgrind is very good for C and C++, but will not help you with garbage-collected or scripting languages. So check the language first. There might be something similar for Java, but I have not used Java much, so you would have to ask someone else.
Play around a lot with Valgrind's settings; one example could be using --leak-check=full or similar options. There are also plugins for Valgrind that can enhance its detection capabilities.
You say the application was reported to have a memory leak. How was this detected? Did the application detect it by itself? If it was detected by the application on its own, without any external tools, it probably means someone has added their own memory tracker inside the application. Custom memory trackers, memory pools, etc. mess up Valgrind and any other leak detection system very badly. So if any custom memory handling is present in the application, your only choice is to either deactivate it (if possible) or to hook into this custom mechanism. How this could be done depends on your application.
Add your own memory tracker. For example, in C++ it is possible to hook into new/delete calls and have them track the memory. There are a couple of libraries you can use for this. You can also write your own new/delete replacement in about 500 LOC. If you decide to use this method, be sure to read a lot of tutorials on replacing new/delete, since there are several things that are unusual in the C++ world when attempting this task.
What makes you so sure there is a memory leak in the application (i.e. how was this detected)? If a tool just reported huge amounts of allocated memory, this might not even mean there is an actual memory leak. A memory leak means that the handles to the memory are lost, so it becomes impossible to ever reach and free that memory again. If your application just acquires a lot of memory and keeps it reachable, you probably have a completely different problem. For example, you might simply be using an algorithm with bad space complexity at one point or another, leading to many allocations. In that case you do not need a leak detector, but rather a memory profiler, which gives you a more detailed overview of the memory footprint of the code. However, I have never used a profiler for this kind of task before, so I cannot give you any more hints on this.
You could replace all memory allocation calls with calls to your own allocation methods, which call the original methods and at the same time count memory usage and record where it was allocated. This will allow you to find the leaks and eliminate them by hand.
There might also be automated tools that allow you to do this; I'm not sure, I haven't used any. But this method works.
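On a GNU toolchain (an assumption; check what your PowerPC build actually uses) you can do this interposition at link time with ld's --wrap option, so no call sites need editing. A minimal sketch:

/* build with: gcc -Wl,--wrap=malloc -Wl,--wrap=free app.c wrap.c */
#include <stdio.h>
#include <stdlib.h>

void *__real_malloc(size_t size);  /* the real allocator, bound by the linker */
void  __real_free(void *p);

static long outstanding;  /* allocations that have not been freed yet */

void *__wrap_malloc(size_t size)
{
    void *p = __real_malloc(size);
    if (p != NULL) {
        outstanding++;
        fprintf(stderr, "malloc(%zu) = %p from %p (outstanding: %ld)\n",
                size, p, __builtin_return_address(0), outstanding);
    }
    return p;
}

void __wrap_free(void *p)
{
    if (p != NULL) {
        outstanding--;
        fprintf(stderr, "free(%p) from %p (outstanding: %ld)\n",
                p, __builtin_return_address(0), outstanding);
    }
    __real_free(p);
}

The logged return addresses can be resolved back to functions and source lines afterwards with addr2line.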
Perhaps you might also consider using Boehm's garbage collector (that is, using GC_malloc instead of malloc etc., and not bothering to free data).