Could frozen objects in JRuby be the reason for a memory leak? Or does the garbage collector destroy frozen objects?
My problem is that I have an app with some frozen hashes floating around, and I haven't yet found out where they come from. I would really like to know whether frozen objects could throw ActionView::Template::Error (GC overhead limit exceeded) or a Java OutOfMemoryError, or at least contribute to such an error.
Thank you.
No. All Object#freeze does is call org::jruby::RubyObject::freeze, which in turn calls org::jruby::RubyObject::setFrozen, which sets a flag on the IRubyObject.
Nothing here would have any effect on GC.
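A minimal sketch of what this amounts to (hypothetical class names, not JRuby's actual source): freezing only flips a boolean on the object, and that flag has no bearing on reachability, so a frozen object is collected like any other once nothing references it.

```java
// Hypothetical sketch of JRuby's freeze behavior: setFrozen only sets a flag.
class RubyObjectSketch {
    private boolean frozen = false;

    void setFrozen(boolean frozen) { this.frozen = frozen; }
    boolean isFrozen() { return frozen; }
}

public class FreezeDemo {
    public static void main(String[] args) {
        RubyObjectSketch obj = new RubyObjectSketch();
        obj.setFrozen(true);                // Object#freeze boils down to this
        System.out.println(obj.isFrozen());
        obj = null;                         // frozen or not, the object is now
                                            // unreachable and collectible
    }
}
```

So if frozen hashes are accumulating, look for whatever still holds references to them (a cache, a constant, a long-lived array), not at the frozen flag itself.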
I just went through some articles on memory leaks, but I still don't understand why open connections or unclosed resources cannot be garbage collected in the JVM. Can someone please help with an example? Thanks in advance.
The Java garbage collector isn't required to ever reclaim an object, even when it's unreachable. In practice, it's usually reclaimed eventually, but this can take hours (or days, or ∞), depending on the application and garbage collection activity. Read up on generational garbage collection to learn more about this.
The first JVM only supported a conservative garbage collector. There's a bunch of stuff online regarding this design as well, but the main caveat when using one of these types of collectors is that it might never identify an object as being unreachable even when it truly is.
Finally, there's the "epsilon" collector option, which never runs garbage collection at all. This is just used for performance testing and for short-lived Java processes. Once a process exits, the open resources get closed by the operating system anyhow.
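A small illustration of the "not required to reclaim" point (a sketch; exact behavior depends on the JVM and the collector it is running):

```java
import java.lang.ref.WeakReference;

// Sketch: making an object unreachable does not mean it has been reclaimed.
// Even System.gc() is only a hint that the JVM is free to ignore.
public class GcIsOnlyAHint {
    public static void main(String[] args) {
        Object resource = new Object();
        WeakReference<Object> ref = new WeakReference<>(resource);
        resource = null;   // now unreachable, but no collection has been forced
        // Almost certainly still present: nothing has triggered a collection yet.
        System.out.println(ref.get() != null);
        System.gc();       // a hint only; the spec does not require a collection
    }
}
```

This is also why resources like connections must be closed explicitly: the object wrapping the resource may linger unreclaimed indefinitely, and the underlying socket or file handle with it.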
I am trying to analyze Google Cloud Stackdriver's profiling. Can anyone please tell me how I can optimize my code using it?
Also, I cannot see any of my function names. I don't know what this _tickCallback is or which part of the code it is executing.
Please help me, anyone.
When looking at Node.js heap profiles, I find that it's really helpful to know whether the code is busy and allocating memory (i.e., is a web server under load?).
This is because the heap profiler takes a snapshot of everything in the heap at the time of profile collection, including allocations that are no longer in use but have not yet been garbage collected.
Garbage collection doesn't happen very often if the code isn't allocating memory. So, when the code isn't very busy, the heap profile will show a lot of memory allocations which are in the heap but no longer really in use.
Looking at your flame graph, I would guess that your code isn't very busy (or isn't allocating very much memory), so memory allocations from the profiler itself dominate the heap profiles. If your code is a web server and you profiled it while it had no load, it may help to generate a load for it while profiling.
To answer the secondary question: _tickCallback is an internal Node.js function used for running things on timers, for example anything using setTimeout. Anything scheduled on a timer will have _tickCallback at the bottom of its stack.
Elsewhere on the picture, you can see some green and cyan 'end' functions on the stack. I suspect those are the places where you are calling response.end in your express request handlers.
I would like to understand the GC process in Node.js/V8 a little bit better.
Could you provide some information for the following questions:
When GC is triggered, does it block the Node.js event loop?
Is GC running in its own process, or is it just a part of the event loop?
When spawning Node.js processes via PM2 (cluster mode), does each instance really have its own GC, or is the GC shared between the instances?
For logging purposes I am using Grafana (https://github.com/RuntimeTools/appmetrics-statsd); can someone explain the differences / more details about these gauges:
gc.size — the size of the JavaScript heap in bytes.
gc.used — the amount of memory used on the JavaScript heap in bytes.
Are there any scenarios where GC is not freeing memory (gc.used) in relation with stress tests?
The questions are related to an issue I am currently facing. The used memory (gc.used) keeps rising and is never released (a classic memory leak). The problem is that it only appears when we have a lot of requests.
I played around with max-old-space-size to avoid PM2 restarts, but it looks like the GC stops freeing memory and the whole application becomes really slow...
Any ideas ?
OK, some questions I already figured out:
gc.size = total_heap_size (https://nodejs.org/api/v8.html -> getHeapStatistics),
gc.used = used_heap_size
It seems normal that once gc.size hits a plateau it never goes down again =>
Memory usage doesn't decrease in node.js? What's going on?
Why is garbage collection expensive? The V8 JavaScript engine employs a stop-the-world garbage collector mechanism. In practice, it means that the program stops execution while garbage collection is in progress.
https://blog.risingstack.com/finding-a-memory-leak-in-node-js/
I am trying to detect a memory leak in a webapp.
I have taken a heap dump of the app at the time of the crash and am using Eclipse MAT to parse the dump.
The collated info from the parsing leads to these two conclusions:
Objects occupying more memory don't have GC roots. Essentially, whenever GC happens, they get cleaned up.
Objects under GC roots occupy significantly less memory. So these may not be the root cause of the memory leak(?).
So does this mean there is no leak, and the crash happens because of an out-of-memory error?
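For reference, "under GC roots" means reachable from something like a live thread or a static field. A sketch (with a made-up class name) of the pattern MAT typically flags as a leak candidate:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a GC-root leak: a static collection is reachable from its class,
// so everything it holds stays reachable and can never be collected.
public class StaticRootLeak {
    static final List<byte[]> CACHE = new ArrayList<>(); // GC root via the class

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            CACHE.add(new byte[1024]); // rooted: survives every collection
        }
        System.out.println(CACHE.size());
    }
}
```

Objects without such a root, by contrast, are exactly the ones that get cleaned up on the next collection, as in conclusion 1.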
EDIT : ADDING ENV INFO
I am running a Java webapp on Tomcat 6.
The webapp is based on OpenReports (a reporting tool).
Adding incoming reference list of the biggest object --
http://imgur.com/lYrju
Here each HashMap instance has a reference from com.opensymphony.xwork2 that is not garbage collected. Could this be the source of the problem? The Tomcat logs say:
SEVERE: The web application [/openreports] created a ThreadLocal with key of type [com.opensymphony.xwork2.ActionContext.ActionContextThreadLocal] (value [com.opensymphony.xwork2.ActionContext$ActionContextThreadLocal@7c45901a]) and a value of type [com.opensymphony.xwork2.ActionContext] (value [com.opensymphony.xwork2.ActionContext@3af7dab3]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.
SEVERE: The web application [/openreports] created a ThreadLocal with key of type [com.opensymphony.xwork2.inject.ContainerImpl$10] (value [com.opensymphony.xwork2.inject.ContainerImpl$10@258c27bd]) and a value of type [com.opensymphony.xwork2.inject.InternalContext[]] (value [[Lcom.opensymphony.xwork2.inject.InternalContext;@1484fc8d]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.
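The pattern Tomcat is warning about can be sketched as follows (a simplified illustration, not xwork2's actual code): a ThreadLocal set on a pooled worker thread keeps its value strongly reachable through that live thread until remove() is called.

```java
// Sketch of the ThreadLocal leak pattern: on a pooled thread, a value set in
// a ThreadLocal stays reachable across requests unless remove() is called.
public class ThreadLocalLeakDemo {
    private static final ThreadLocal<byte[]> CONTEXT = new ThreadLocal<>();

    static void handleRequest() {
        CONTEXT.set(new byte[1024]); // rooted via the live (pooled) thread
        try {
            // ... request processing would happen here ...
        } finally {
            CONTEXT.remove();        // without this, the value outlives the
                                     // request and, in Tomcat, the webapp
        }
    }

    public static void main(String[] args) {
        handleRequest();
        System.out.println(CONTEXT.get() == null); // cleared after remove()
    }
}
```

In a container like Tomcat, worker threads outlive the webapp, so un-removed ThreadLocal values also pin the webapp's classloader, which is exactly the leak the SEVERE messages describe.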
EDIT : Adding stack trace of OOM error
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Arrays.java:3209)
at java.lang.String.<init>(String.java:215)
at java.lang.StringBuffer.toString(StringBuffer.java:585)
at java.io.StringWriter.toString(StringWriter.java:193)
at org.displaytag.tags.TableTag.writeExport(TableTag.java:1503)
at org.displaytag.tags.TableTag.doExport(TableTag.java:1454)
at org.displaytag.tags.TableTag.doEndTag(TableTag.java:1309)
at org.efs.openreports.engine.QueryReportEngine.generateReport(QueryReportEngine.java:198)
at org.efs.openreports.util.ScheduledReportJob.execute(ScheduledReportJob.java:173)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:529)
10:01:04,193 ERROR ErrorLogger - Job (90.70|1338960412084 threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception. [See nested exception: java.lang.OutOfMemoryError: Java heap space]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:529)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Arrays.java:3209)
at java.lang.String.<init>(String.java:215)
at java.lang.StringBuffer.toString(StringBuffer.java:585)
at java.io.StringWriter.toString(StringWriter.java:193)
at org.displaytag.tags.TableTag.writeExport(TableTag.java:1503)
at org.displaytag.tags.TableTag.doExport(TableTag.java:1454)
at org.displaytag.tags.TableTag.doEndTag(TableTag.java:1309)
at org.efs.openreports.engine.QueryReportEngine.generateReport(QueryReportEngine.java:198)
at org.efs.openreports.util.ScheduledReportJob.execute(ScheduledReportJob.java:173)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
Are you running an ASP.NET WebForms (not MVC) application? We had a similar issue in our application, and these are the points we ran across. First, I should state that our web farm is composed of a lot of application pools, one for each client of course. Each app pool was given a specific memory limit in order to ensure that clients did not spiral memory out of control and affect other worker processes.
1) Even when objects are not rooted, if they are over 85K they will be placed on the Large Object Heap, which .NET's GC does not compact. That means that if you have an object that is 100K, it will sit on the LOH. When the object is cleaned up, you will get your 100K back, and the GC might decide to put something else in that hole. Since the LOH is not compacted, you can never be guaranteed that the space is completely filled; this leaves holes and leads to memory fragmentation. The only way to resolve this is to examine your dump file, see which objects are taking up the most space, and identify classes/collections that can be pared down. You will likely see a lot of object arrays and strings, as those are typically rather large in cache managers and the like.
2) Are you disposing of instances? Waiting until the finalizer thread comes along to cleanup instances should not be the default case, because that can lead to terrible GC performance. Make sure you are disposing all classes that implement the IDisposable interface somewhere in their inheritance. Calling Dispose early means that the resources are released in a deterministic fashion, whereas depending on the finalizer means that cleanup can happen much, much later (if at all).
3) If you are getting OutOfMemoryExceptions, it could be for one of two reasons: you ran out of managed memory (rare in the case of a worker process with a limit set, as it will be recycled), OR you ran out of virtual memory, which is even worse since you are limited to 2GB of virtual memory if you are running in a 32-bit IIS application. We also saw this problem in production, where the GC was allocating 64MB chunks of heap (32MB in Workstation Mode) and, once it was close to the 2GB limit and unable to allocate more space, threw the exception (see: Should we use "workstation" garbage collection or "server" garbage collection?). If this is the case, you are probably leaking a managed resource that is rooted to something static, or you did not clean up event handlers by removing them with the -= operator.
If you could post your dump file or at least the relevant parts of it, it would be easier to see where the problem lies. But from your short description, these are the items I would look at.
As far as I know, GC runs only when the JVM needs more memory, but I'm not sure about that. So could someone please suggest an answer to this question?
The garbage collection algorithm of Java is, as I understand it, pretty complex and not at all straightforward. Also, there is more than one algorithm available for GC, and the one to use can be chosen at VM launch time with an argument passed to the JVM.
There's a FAQ about garbage collection here: http://www.oracle.com/technetwork/java/faq-140837.html
Oracle has also published an article "Tuning Garbage Collection with the 5.0 Java[tm] Virtual Machine" which contains deep insights into garbage collection, and will probably help you understand the matter better: http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html
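For example, on HotSpot-based JVMs the collector can be selected with a launch flag (a sketch; which flags exist and which collector is the default varies by JDK version and vendor):

```shell
# Selecting a collector at JVM launch time (HotSpot)
java -XX:+UseSerialGC   -jar app.jar   # single-threaded, stop-the-world
java -XX:+UseParallelGC -jar app.jar   # multi-threaded throughput collector
java -XX:+UseG1GC       -jar app.jar   # region-based, low-pause (default in modern JDKs)
java -Xlog:gc           -jar app.jar   # log each collection as it happens (JDK 9+)
```

The GC log output is the easiest way to see, for a given JVM, when collections actually occur and what triggers them.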
The JVM and Java specs don't say anything about when garbage collection occurs, so it's entirely up to the JVM implementors what policies they wish to use. Each JVM should have some documentation about how it handles GC.
In general, though, most JVMs will trigger GC when a memory allocation pushes the total amount of allocated memory above some threshold. There may be multiple levels of GC (full vs. partial/generational) that occur at different thresholds.
Garbage Collection is deliberately vaguely described in the Java Language Specification, to give the JVM implementors best working conditions to provide good garbage collectors.
Hence garbage collectors and their behaviour are extremely vendor dependent.
The simplest, but least fun, is the one that stops the whole program when it needs to clean up. Others are more sophisticated and run quietly alongside your program, cleaning up as you go, more or less aggressively.
The most fun way to investigate garbage collection is to run jvisualvm from the Sun JDK 6. It allows you to see many, many internal things, many of them relevant to garbage collection.
https://visualvm.dev.java.net/monitor_tab.html (but the newest version has plenty more)
In the older days, garbage collectors were empirical in nature: at some set interval, or when certain conditions were met, they would kick in and examine every object.
Modern collectors are smarter in the sense that they take advantage of the fact that objects have different lifespans: objects are divided into young-generation objects and tenured-generation objects.
Memory is managed in pools according to generation. Whenever the young-generation memory pool fills up, a minor collection happens, and the surviving objects are moved to the tenured-generation memory pool. When the tenured-generation memory pool fills up, a major collection happens. A third generation, known as the permanent generation, may contain objects defining classes and methods.
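The allocation pattern that generational collection exploits can be sketched like this (an illustration of the "most objects die young" hypothesis, not of any particular collector's internals):

```java
import java.util.ArrayList;
import java.util.List;

// Most of the arrays allocated below die immediately and would be reclaimed
// by cheap minor collections of the young generation; the few that are kept
// in the long-lived list would survive and eventually be tenured.
public class GenerationsDemo {
    public static void main(String[] args) {
        List<int[]> longLived = new ArrayList<>(); // survivors get promoted
        for (int i = 0; i < 100_000; i++) {
            int[] temp = new int[64];   // dies young: nursery garbage
            temp[0] = i;
            if (i % 10_000 == 0) {
                longLived.add(temp);    // a small fraction is kept alive
            }
        }
        System.out.println(longLived.size());
    }
}
```

Because minor collections only scan the small young generation, a program like this spends far less time in GC than it would under a collector that examined every object on each pass.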