I am trying to use the Observatory tab in the Dartium dev tools to find a memory leak in my framework. I have made a test program here, which should be viewable in JS or Dart. My goal is to find out where references are holding on to Massive objects, which are just wrappers around a List<double> containing a million doubles.

If I click New Client, I get a new client view on the right. If I generate a bunch of Massive objects and refresh the Observatory tools, I see that double now takes up most of the app's memory usage. If I then delete the Massive objects, wait 5 seconds for the framework's remote garbage collection to run, and then refresh the Observatory tab, the doubles still occupy the same amount of memory even though they should have been GC'd (I also click the GC button on the Observatory tab to, I assume, force the GC to run). If I keep creating and deleting Massive objects in the app, the page eventually crashes, typically after around 28 Massive objects have been created.

My problem is figuring out how to use the tools to discover which references are still holding on to the Massive objects. Is it possible to find references to objects in the dev tools?
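For reference, a Massive object is essentially just this kind of wrapper (a hedged Dart sketch; the actual class in the test app may differ):

class Massive {
  // Roughly a million doubles, so each instance costs several megabytes.
  final List<double> data = new List<double>.filled(1000000, 0.0);
}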
UPDATE:
I have fixed the memory leak in the test app I link to and describe above, so following the instructions above will no longer reproduce the memory leak.
I'm currently investigating a memory leak myself. What's missing in the Observatory is a way to see the chain of references from the root to the leaking object. I'm not sure if there is already an issue open for it, though. Feel free to open a new one.
I have a use case where I repeatedly fetch data from the server and display it using Cytoscape. For this, I have a single cy object and I repeatedly remove and add the elements. This happens once every second or two. I notice the browser memory growing over time. The documentation says: "Though the elements specified to this function are removed from the graph, they may still exist in memory."
So, do I need to do anything with the collection returned by calling remove()? How do I ensure the memory is reclaimed?
Well, JavaScript is already a garbage-collected language, so it will reclaim your nodes eventually. If you remove nodes from the graph and you don't keep any references to them, then the garbage collector will clean them up ... eventually :)
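As a minimal sketch of that advice (assuming the standard cytoscape.js cy.remove(), cy.add() and cy.elements() calls; the 2-second interval and fetchElements are placeholders), the key point is not to keep the collection returned by remove() in anything long-lived:

setInterval(function () {
  fetchElements(function (newElements) { // new elements from the server
    // remove() returns the removed collection; it is discarded immediately
    // here rather than stored on window or a module-level variable, so the
    // garbage collector can reclaim the old elements.
    cy.remove(cy.elements());
    cy.add(newElements);
  });
}, 2000);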
Given that these memory leaks exist, my educated guess is that there may be some internal entanglement with the global scope or something like that, which prevents elements from being discarded before the whole graph is reinitialized (maybe try that?).
I'm having a bit of trouble with a Node.js/Express/React application that is in production as we speak.
The problem is that it keeps climbing in memory usage and it just doesn't stop. It is slow and steady, and eventually Node crashes. I have several heap dumps that I have been creating with the help of node-heapdump, however, I don't know how to properly identify the leak.
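For context, a minimal sketch of how such dumps can be captured with node-heapdump (writeSnapshot() is the module's documented call; the 30-minute interval and filename scheme are just an illustration), so that two snapshots taken some time apart can be compared in Chrome DevTools:

var heapdump = require('heapdump');

setInterval(function () {
  // Writes a .heapsnapshot file that Chrome DevTools can load.
  heapdump.writeSnapshot(Date.now() + '.heapsnapshot', function (err, filename) {
    if (err) console.error(err);
    else console.log('Heap snapshot written to', filename);
  });
}, 30 * 60 * 1000);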
I will share an image of my snapshot. Please note that I sorted by shallow size, so supposedly one of the objects/types that appear at the top must be the problem:
As you can see below, there is this "Promise in #585" that I see in many places and that could be the one, but I'm unable to identify that line, function or component.
Can anybody help? I can share more screenshots if you want.
Thanks.
I found the problem.
I'm using React Body Classname in my app so that when we load different routes we can change the body class from the client side. This npm module needs to have its rewind() function called when you do server-side rendering in order to avoid memory leaks.
This is the module I'm talking about:
https://github.com/iest/react-body-classname
And, in order to avoid the memory leak, we are calling:
BodyClassName.rewind()
in the render function of our main App.js container component. This way, no matter what URL a user lands on, rewind() will always be called, and the data that can be garbage collected will be properly freed later on.
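For reference, a rough sketch of the more common server-side pattern for this module: calling rewind() right after each renderToString() call (the Express route, app object and App component below are placeholders, not our actual code):

var React = require('react');
var ReactDOMServer = require('react-dom/server');
var BodyClassName = require('react-body-classname');
var App = require('./App'); // placeholder root component

app.get('*', function (req, res) {
  var html = ReactDOMServer.renderToString(React.createElement(App));

  // rewind() returns the accumulated body class and resets the module's
  // internal state, so nothing accumulates across requests.
  var bodyClass = BodyClassName.rewind();

  res.send('<!doctype html><html><body class="' + bodyClass + '">' +
           '<div id="root">' + html + '</div></body></html>');
});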
Now our app stays at a nice and steady 120 MB of memory usage.
Thanks anyway :D
For over a month I have been struggling with a very annoying memory leak issue and I have no clue how to solve it.
I'm writing a general-purpose web crawler based on http, async, cheerio and nano. From the very beginning I've been struggling with a memory leak which was very difficult to isolate.
I know it's possible to do a heapdump and analyse it with Google Chrome, but I can't understand the output. It's usually a bunch of meaningless strings and objects leading to some anonymous functions, telling me exactly nothing (it might be a lack of experience on my side).
Eventually I came to the conclusion that the library I had been using at the time (jQuery) had issues, and I replaced it with Cheerio. I had the impression that Cheerio solved the problem, but now I'm sure it only made it less dramatic.
You can find my code at: https://github.com/lukaszkujawa/node-web-crawler. I understand it might be a lot of code to analyse, but perhaps I'm doing something stupid which is obvious straight away. I suspect the main agent class, which makes HTTP requests from multiple "threads" (with async.queue): https://github.com/lukaszkujawa/node-web-crawler/blob/master/webcrawler/agent.js
If you would like to run the code, it requires CouchDB, and after npm install do:
$ node crawler.js -c conf.example.json
I know that Node doesn't go crazy with garbage collection, but after 10 minutes of heavy crawling the used memory can easily go over 1GB.
(tested with v0.10.21 and v0.10.22)
For what it's worth, Node's memory usage will grow and grow even if your actual used memory isn't very large. This is due to optimizations in the V8 engine. To see your real memory usage (and to determine whether there is actually a memory leak), consider dropping this code (or something like it) into your application:
setInterval(function () {
    // gc() is only available when Node is started with --expose-gc
    if (typeof gc === 'function') {
        gc();
    }
    // applog is the application's logger; console.log works just as well
    applog.debug('Memory Usage', process.memoryUsage());
}, 60000);
Run node --expose-gc yourApp.js. Every minute there will be a log line indicating real memory usage immediately after a forced garbage collection. I've found that watching the output of this over time is a good way to determine if there is a leak.
If you do find a leak, the best way I've found to debug it is to eliminate large sections of your code at a time. If the leak goes away, put it back and eliminate a smaller section of it. Use this method to narrow it down to where the problem is occurring. Closures are a common source, but also check for anywhere else references may not be cleaned up. Many network applications will attach handlers for sockets that aren't immediately destroyed.
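To illustrate that last point, a hedged sketch of one common shape of this leak in Node: a handler attached to a long-lived emitter captures request-scoped data and is never detached (the hub, req and event names here are made up for the example):

var EventEmitter = require('events').EventEmitter;
var hub = new EventEmitter(); // long-lived, shared across requests

function handleRequest(req) {
  var bigBuffer = new Buffer(1024 * 1024); // request-scoped data

  function onUpdate() {
    // this closure captures bigBuffer
  }

  // Leak: the listener (and therefore bigBuffer) lives as long as hub does.
  hub.on('update', onUpdate);

  // Fix: detach the listener when the request is done, or use
  // hub.once('update', onUpdate) if a single notification is enough.
  req.on('end', function () {
    hub.removeListener('update', onUpdate);
  });
}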
I am almost finished with my first MonoTouch app; almost, that is, except for some big problems with memory leaks. Even though I override ViewDidUnload on every view controller so that for every UI element I create I first remove it from its superview, then call Dispose and then set it to null, the problem persists. Using Instruments is no help: it doesn't detect the memory leaks, and the memory allocations don't point me to anything that I can track.
My app mainly uses the MPMoviePlayer to play video streams and also displays an image gallery with images loaded over HTTP. Both operations are causing problems.
Any ideas would be very much appreciated, thanks.
The Master is right, of course. Gracias, Miguel. I wanted, though, to give a more thorough explanation of what I did, in the hope that it may help future Xamarin colleagues.
1) The MPMoviePlayer stores a video buffer which is kind of sticky. What I have done is keep a single instance on the AppDelegate and reuse it across the views that show a video. So instead of initializing the MPMoviePlayerController with a URL, you use the constructor with no arguments, and then set the ContentUrl property and call Play().
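Roughly like this (a sketch only; the stream URL and the setup elided from FinishedLaunching are placeholders):

using MonoTouch.Foundation;
using MonoTouch.UIKit;
using MonoTouch.MediaPlayer;

[Register ("AppDelegate")]
public class AppDelegate : UIApplicationDelegate
{
    // One player for the whole app; each view sets ContentUrl and calls Play().
    public MPMoviePlayerController SharedPlayer { get; private set; }

    public override bool FinishedLaunching (UIApplication app, NSDictionary options)
    {
        SharedPlayer = new MPMoviePlayerController ();
        // ... window and root view controller setup elided ...
        return true;
    }
}

// In a view controller that shows a stream:
// var appDelegate = (AppDelegate) UIApplication.SharedApplication.Delegate;
// var player = appDelegate.SharedPlayer;
// player.ContentUrl = new NSUrl ("http://example.com/stream.m3u8");
// View.AddSubview (player.View);
// player.Play ();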
2) Do not rely on ViewDidUnload being called to clean up your objects, because it is not called consistently. For example, I use a lot of modal view controllers and this method never got called; memory just kept accumulating until the app crashed. Better to call your cleanup code directly.
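Something along these lines (a sketch; the controller and CleanUp method are placeholders, using the same disposal pattern as point 3 below):

using MonoTouch.UIKit;

public class GalleryViewController : UIViewController
{
    UIImageView imageView;

    public override void ViewDidDisappear (bool animated)
    {
        base.ViewDidDisappear (animated);
        CleanUp (); // called explicitly instead of waiting for ViewDidUnload
    }

    void CleanUp ()
    {
        if (imageView == null)
            return;

        imageView.RemoveFromSuperview ();
        if (imageView.Image != null) {
            imageView.Image.Dispose ();
            imageView.Image = null;
        }
        imageView.Dispose ();
        imageView = null;
    }
}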
3) Images were the biggest memory hog I had. Even images inside UIImageViews that were just used as backgrounds would never get disposed. I had to specifically run the following code on each image before I could reclaim the memory:
myImageView.RemoveFromSuperview();
myImageView.Image.Dispose();
myImageView.Image = null;
myImageView.Dispose();
myImageView = null;
4) Beware of the UIWebView; it can hog a lot of memory, especially if there is some kind of AJAX interaction running on the page you are loading. Calling the following code helped a little but didn't solve all the problems. Some memory leaks remain that I wasn't able to get rid of:
int val = 0; // WebKit cache model: 0 is the document-viewer model, the smallest cache
NSString keyStr = new NSString("WebKitCacheModelPreferenceKey");
NSUserDefaults.StandardUserDefaults.SetValueForKey(NSObject.FromObject(val), keyStr);
5) If you can, avoid using Interface Builder. On several occasions I was never able to release UI elements created in the xib files. Positioning everything by hand can be a pain, but if you are writing a memory-intensive application, creating all your view controllers in code may be the way to go.
6) Using anonymous methods as event handlers is nice and may help code readability, but on some occasions not being able to manually detach an event handler became a problem; references to unwanted objects remain in memory.
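For example, something like this (a sketch with a plain UIButton; the controller and handler names are placeholders):

using System;
using MonoTouch.UIKit;

public class VideoListController : UIViewController
{
    UIButton playButton;

    public override void ViewDidLoad ()
    {
        base.ViewDidLoad ();
        playButton = UIButton.FromType (UIButtonType.RoundedRect);
        // A named handler can be detached later; an inline lambda cannot,
        // and whatever it captures stays reachable through the subscription.
        playButton.TouchUpInside += HandlePlayTapped;
        View.AddSubview (playButton);
    }

    public override void ViewDidDisappear (bool animated)
    {
        base.ViewDidDisappear (animated);
        playButton.TouchUpInside -= HandlePlayTapped;
    }

    void HandlePlayTapped (object sender, EventArgs e)
    {
        // start playback
    }
}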
7) In my UITableViewSource I achieved more efficient memory handling by creating the UIImages I use to style the cells up front and then just reusing them in my GetViewForHeader and GetCell methods.
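Roughly like this (a sketch; the file names, row count and cell layout are placeholders):

using MonoTouch.Foundation;
using MonoTouch.UIKit;

public class StyledTableSource : UITableViewSource
{
    // Loaded once and shared, instead of creating a new UIImage per call.
    static readonly UIImage CellBackground = UIImage.FromFile ("Images/cell-bg.png");
    static readonly UIImage HeaderBackground = UIImage.FromFile ("Images/header-bg.png");

    public override int RowsInSection (UITableView tableview, int section)
    {
        return 20;
    }

    public override UITableViewCell GetCell (UITableView tableView, NSIndexPath indexPath)
    {
        var cell = tableView.DequeueReusableCell ("cell")
            ?? new UITableViewCell (UITableViewCellStyle.Default, "cell");
        cell.BackgroundView = new UIImageView (CellBackground);
        cell.TextLabel.Text = "Row " + indexPath.Row;
        return cell;
    }

    public override UIView GetViewForHeader (UITableView tableView, int section)
    {
        return new UIImageView (HeaderBackground);
    }
}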
One last thing: even though Instruments is not really compatible with the way MonoTouch does things, the "Memory Monitor" was a valuable tool; if you are having problems it can indeed help.
These strategies solved most of my problems. Some may be pretty obvious in hindsight, but I thought it wouldn't hurt to pass them along.
Happy coding, and long live C# on the iPhone!
The UI elements are not likely the responsible ones; those barely use any memory. And calling Dispose()/nulling helps, but only a tiny bit.
The more likely sources of the problem are objects like the MPMoviePlayer, the image downloading and the image rendering. You might be keeping references to those objects, which prevents the GC from getting rid of them.
When you create an instance of the CacheFactory and then don't use it anymore, the memory that was used during the creation of the object is not released. This will have a substantial effect on all web apps, or on any scenario where a CacheFactory might be created multiple times. The symptom is unusually high memory use in the process; in IIS this will most likely result in your app having to recycle more often, since it will overrun its allocated memory more quickly.
The following code will show an increase of about 500 MB (yes, I mean megabytes) of memory usage!
To duplicate it, put the following code into your app:
Dim CacheFactory1 As CacheFactory = New CacheFactory()

' Each iteration allocates a new factory whose construction memory is never released
For i As Int32 = 1 To 1 * (10 ^ 4)
    CacheFactory1 = New CacheFactory()
    CacheFactory1 = Nothing
Next
There are only two workarounds for this:
1. The Velocity team fixes the bug (and I'm sure they will).
2. You use the same CacheFactory object, exposed through a static member in your app, and reference it every time you want to use the cache (this works but isn't optimal in my opinion); see the sketch below.
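A rough sketch of that second workaround (the class and member names are placeholders; GetCache is the factory method for obtaining a named cache in the Velocity API):

' Assumes the Velocity CTP caching namespace
Imports System.Data.Caching

Public NotInheritable Class CacheClient
    Private Sub New()
    End Sub

    ' Created once and reused for the lifetime of the process, instead of
    ' newing up a CacheFactory per request.
    Private Shared ReadOnly Factory As New CacheFactory()

    Public Shared Function GetCache(ByVal cacheName As String) As Cache
        Return Factory.GetCache(cacheName)
    End Function
End Class

' Usage:
' Dim cache As Cache = CacheClient.GetCache("default")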
I also have a caching scope that can be used to wrap your caching methods, and I will post it on CodePlex soon. You can wrap it around your caching methods just like a TransactionScope, and it will manage the locking and the connection for you.
So where is the question? You should file the bug, and not post this here, as the Velocity team is more than likely monitoring Microsoft Connect for bugs.
I've built a scope provider for resolving this issue. You can get the code here:
http://www.codeplex.com/CacheScope