Memory leaks in Microsoft.FSharp.Control.Mailbox?

I'm hunting for memory leaks in a long-running service (written in F#) right now.
The only "strange" thing I've seen so far is the following:
I use a MailboxProcessor in a subsystem with an algebraic data type named QueueChannelCommands (more or less a bunch of Add/Get commands, some with AsyncReplyChannels attached).
When I profile the service (using ANTS Memory Profiler) I see instances of arrays of the mentioned type (most having length 4, but growing), all empty (null), whose references seem to be held by Control.Mailbox.
I cannot see any reason in my code for this behaviour; it's the standard code you can find in every Mailbox example out there: just a loop with a let! on Receive, a match on the message, and a return! loop() at the end.
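For reference, the pattern in question looks roughly like this (a minimal sketch; the int payload and the queue-as-list representation are placeholders, not the actual type from the service):

type QueueChannelCommands =
    | Add of int                                   // hypothetical payload
    | Get of AsyncReplyChannel<int option>

let agent =
    MailboxProcessor<QueueChannelCommands>.Start(fun inbox ->
        let rec loop (queue : int list) = async {
            let! msg = inbox.Receive()
            match msg with
            | Add item ->
                return! loop (queue @ [item])
            | Get reply ->
                match queue with
                | item :: rest -> reply.Reply(Some item); return! loop rest
                | [] -> reply.Reply None; return! loop queue }
        loop [])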
Has anyone seen this kind of behaviour before, or does anyone know how to handle it?
Or is this even a (known) bug?
Update: the growth of the arrays is really strange; it looks like additional space is appended without being used properly.

I am not an F# expert by any means, but maybe you can look at the first answer in this thread:
Does Async.StartChild have a memory leak?
The first reply mentions a tutorial for memory profiling on the following page:
http://moiraesoftware.com/blog/2011/12/11/fixing-a-hole/
It also mentions this open-source version of F#:
https://github.com/fsharp/fsharp/blob/master/src/fsharp/FSharp.Core/control.fs
I am not sure the open-source version is what you are looking for, but maybe it can help you find the source of the leak, or prove that it is actually leaking memory.
Hope that helps.
Tony

.NET has its own garbage collector, which works quite nicely.
The most common way to cause memory leaks in .NET is to subscribe delegates to events and never unsubscribe them when the subscriber is torn down: as long as the publisher is alive, the event's invocation list keeps every subscriber reachable.
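A minimal F# sketch of that failure mode (Publisher and its Tick event are made up for illustration):

open System

type Publisher() =
    let tick = Event<EventArgs>()
    member this.Tick = tick.Publish                  // IEvent exposed to subscribers
    member this.Fire() = tick.Trigger(EventArgs.Empty)

let publisher = Publisher()                          // lives for the whole process
let handler = Handler<EventArgs>(fun _ _ -> printfn "tick")

publisher.Tick.AddHandler handler                    // subscriber is now rooted by publisher
publisher.Fire()
publisher.Tick.RemoveHandler handler                 // omit this and the handler (and anything
                                                     // it captures) stays reachable forever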

Related

Netlogo 5.1 (and 5.05) Behavior Space Memory Leak

I have posted on this before, but thought I had tracked it down to the NW extension; however, memory leakage still occurs in the latest version. I found this thread, which discusses a similar issue, but attributes it to BehaviorSpace:
http://netlogo-users.18673.x6.nabble.com/Behaviorspace-Memory-Leak-td5003468.html
I have found the same symptoms. My model starts out at around 650 MB, but over each run the private working set memory rises, to the point where it hits the 1024 MB limit. I have sufficient memory to raise this, but in reality that will only delay the onset. I am using the table output, as based on previous discussions this helps, and it does, but it only slows the rate of increase. Eventually the memory usage rises to a point where the PC starts to struggle. I am clearing all data between runs, so there should be no carry-over. I noticed in the highlighted thread that they were going to run headless. I will try this, but I wondered if anyone else had noticed the issue? My other option is to break the BehaviorSpace simulation into a few batches so the issue never arises, but it would be nice to let the model run and walk away, as it takes around 2 hours to go through.
Some possible next steps:
1) Isolate the exact conditions under which the problem does or does not occur. Can you make it happen without involving the nw extension, or not? Does it still happen if you remove some of the code from your model? What if you keep removing code: when does the problem go away? What is the smallest code that still causes the problem? Almost any bug can be demonstrated with only a small amount of code, and finding that smallest demonstration is exactly what is needed in order to track down the cause and fix it.
2) Use standard memory profiling tools for the JVM to see what kinds of objects are using the memory; see the example right after this list. This might provide some clues to possible causes.
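For instance, the stock JDK tools can give a first histogram of the heap (a sketch; <pid> stands for whatever process id jps reports for the NetLogo JVM):

jps -l                           # list running JVMs and find NetLogo's <pid>
jmap -histo <pid> | head -20     # top object types by instance count and bytes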
In general, we are not receiving other bug reports from users along these lines. It's routine, and has been for many years now, for people to use BehaviorSpace (both headless and not) and do experiments that last for hours or even days. So whatever you're experiencing almost certainly has a more specific cause, most likely in the nw extension, that could be isolated.

Further locating memory leak with memwatch

Recently I started my first project with node.js, and I can definitely say I'm loving it. It's very powerful with all the modules; however, it seems I'm having a "slight" memory leak that causes my server to crash after about an hour (it hits 99-100% CPU). I've been trying to fix this problem for a while now.
Luckily, after a bit of searching, I found a popular tool called memwatch. I of course installed the module and started logging the memory usage/storage of my server's process.
Eventually, after looking through the logs, I have found the likely cause.
{
"what": "String",
"size_bytes": 9421368,
"size": "8.98 mb",
"+": 16635,
"-": 533
}
Of course, within thirty seconds this little bugger managed an increase of 9 MB (very unusual). It's nice and dandy to know that my memory leak seems to be of type String, but where exactly do I go from here? Is there any way I can get more accurate results?
I looked through my code, but there really isn't a string in my code that could possibly grow like this. Is there a possibility this string isn't actually part of my code, but rather part of Node or the Socket.IO module?
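For reference, a per-type diff like the one above typically comes from memwatch's HeapDiff API; a minimal sketch (the 30-second window is arbitrary):

var memwatch = require('memwatch');

// Take a baseline, let the server run, then diff the heap against it.
var hd = new memwatch.HeapDiff();

setTimeout(function () {
  var diff = hd.end();                   // diff.change.details lists growth per type
  console.log(JSON.stringify(diff.change.details, null, 2));
}, 30000);

// memwatch can also flag a suspected leak after several consecutive
// GC cycles in which the heap only grows.
memwatch.on('leak', function (info) {
  console.log('possible leak:', info);
});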
Right approach. Use StrongOps (previously Nodefly) to profile memory. Isolate the type of the leaking object. Look at heap retained sizes as well as instance counts: growing instance counts under a steady workload will point at a few smoking guns.
I believe StrongOps uses memwatch plus some V8 profiler/GC code under the hood, with better automation. See this link: http://strongloop.com/node-js-performance/strongops/
Then use the node-heapdump module, which their co-founder (core contributor Ben Noordhuis) wrote, to isolate the leak down to the collection object, GC roots, and line of code.
See this blog post from Ben: http://strongloop.com/strongblog/how-to-heap-snapshots/
You can use the node-heapdump module to make a dump of the V8 heap for later inspection, so you will be able to see more accurate results.
After you make a heap dump, analyze it with Chrome DevTools:
https://developers.google.com/chrome-developer-tools/docs/javascript-memory-profiling
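A minimal sketch of taking a snapshot in-process (node-heapdump can also be triggered from outside the process with kill -USR2 <pid> on Unix):

var heapdump = require('heapdump');

// Writes a V8 heap snapshot into the working directory; load the resulting
// .heapsnapshot file in Chrome DevTools (Profiles tab) to inspect which
// strings are retained and by what.
heapdump.writeSnapshot(Date.now() + '.heapsnapshot', function (err, filename) {
  if (err) console.error(err);
  else console.log('heap snapshot written to', filename);
});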
As Shubhra suggested, another tool to consider in helping you diagnose your memory leak is the heap profiler of StrongOps monitoring. You can easily get started in a few steps here: http://docs.strongloop.com/display/DOC/Setting+up+StrongOps+monitoring
This will save you time digging through logs and gives you a visual of what's going on in your application's heap over time, as well as a comparison of the String type against other likely culprits causing your memory leak.
You can find more information here: http://docs.strongloop.com/display/DOC/Profiling#Profiling-Memoryprofiler

How does one go about fixing a memory leak?

I've got this game server that, whether I download the pre-compiled binary or compile the source code myself, just leaks until I have to reboot or I get a BSOD. I'm not super keen on C++ (currently just taking classes for my degree), but I can look at the code and understand what's going on. I'm just not 'fluent'.
Specifically, looking at the Resource Monitor, the modified memory category just fills constantly, by about 3-5 MB per 5 seconds.
Is there anything I can do about this?
There is a tool which is helpful for finding memory leaks: http://valgrind.org/
If you have ever heard of a tool called Valgrind, you can run your C++ code under Valgrind to see exactly where the leaks are.
http://valgrind.org/
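A typical invocation might look like this (a sketch; ./gameserver is a stand-in for the actual server binary and its arguments):

valgrind --leak-check=full ./gameserver

With --leak-check=full, Valgrind prints a stack trace for each block still allocated at exit, which usually points straight at the allocation site of the leak.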

Reducing memory usage in an extended Mathematica session

I'm doing some rather long computations, which can easily span a few days. In the course of these computations, Mathematica will sometimes run out of memory. To work around this, I've ended up resorting to something along the lines of:
ParallelEvaluate[$KernelID]; (* Force the kernels to launch *)
kernels = Kernels[];
Do[
 If[Mod[iteration, n] == 0,
  CloseKernels[kernels];
  kernels = LaunchKernels[Length[kernels]];
  ClearSystemCache[]];
 (* Complicated stuff here *)
 Export[...], (* If a computation ends early I don't want to lose past results *)
 {iteration, min, max}]
This is great and all, but over time the main kernel accumulates memory. Currently, my main kernel is eating up roughly 1.4 GB of RAM. Is there any way I can force Mathematica to clear out the memory it's using? I've tried littering Share[] and Clear[] throughout the many Modules I'm using in my code, but the memory still seems to build up over time.
I've also tried to make sure I have nothing big and complicated running outside of a Module, so that nothing stays in scope too long. But even with this I still have my memory issues.
Is there anything I can do about this? I'm always going to have a large amount of memory in use, since most of my calculations involve several large and dense matrices (usually 1200 x 1200, but they can be larger), so I'm wary of using MemoryConstrained.
Update:
The problem was exactly what Alexey Popkov stated in his answer. If you use Module, memory will leak slowly over time. It happened to be exacerbated in this case because I had multiple Module[..] statements. The "main" Module was within a ParallelTable where 8 kernels were running at once. Tack on the (relatively) large number of iterations, and this was a breeding ground for memory leaks due to the bug with Module.
Since you are using Module extensively, I think you may be interested in knowing about this bug, in which temporary Module variables are not deleted.
Example (unlinked temporary variables are left behind together with their definitions):
In[1]:= $HistoryLength = 0;
a[b_] := Module[{c, d}, d := 9; d /; b === 1];
Length@Names[$Context <> "*"]
Out[3]= 6
In[4]:= lst = Table[a[1], {1000}];
Length@Names[$Context <> "*"]
Out[5]= 1007
In[6]:= lst =.
Length@Names[$Context <> "*"]
Out[7]= 1007
In[8]:= Definition@d$999
Out[8]= Attributes[d$999] = {Temporary}
d$999 := 9
Note that in the above code I set $HistoryLength = 0; to stress this buggy behavior of Module. If you do not do this, temporary variables can still be linked from the history variables (In and Out) and for that reason will not be removed, together with their definitions, in a broader set of cases (that part is not a bug but a feature, as Leonid mentioned).
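A crude cleanup that follows from the above (a sketch; it assumes the leaked temporaries live in the default Global` context, and it blindly removes every symbol whose name contains $, which is what Module's temporaries look like):

Quiet[Remove["Global`*$*"]]       (* strip leftover c$nnn / d$nnn temporaries *)
Length@Names[$Context <> "*"]     (* should drop back toward its initial value *)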
UPDATE: Just for the record, there is another old bug, in which unreferenced Module variables are not deleted after Part assignments to them. It dates from v5.2 and is not completely fixed even in version 7.0.1:
In[1]:= $HistoryLength = 0; $Version
Module[{L = Array[0 &, 10^7]}, L[[#]]++ & /@ Range[100];];
Names["L$*"]
ByteCount@Symbol@# & /@ Names["L$*"]
Out[1]= 7.0 for Microsoft Windows (32-bit) (February 18, 2009)
Out[3]= {L$111}
Out[4]= {40000084}
Have you tried evaluating $HistoryLength = 0; in all subkernels as well as in the master kernel? History tracking is the most common reason for running out of memory.
Have you tried not using the slow and memory-consuming Export, and using the fast and efficient Put instead?
It is not clear from your post where you evaluate ClearSystemCache[]: in the master kernel or in the subkernels? It looks like you evaluate it in the master kernel only. Try evaluating it in all subkernels too, before each iteration.
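One way to apply the first and last suggestions together (a sketch; run it once at the start of the session, and repeat the ClearSystemCache[] lines before each iteration):

$HistoryLength = 0;                      (* master kernel *)
ParallelEvaluate[$HistoryLength = 0];    (* every subkernel *)
ClearSystemCache[];                      (* master kernel *)
ParallelEvaluate[ClearSystemCache[]];    (* every subkernel *)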

JavaME - LWUIT images eat up all the memory

I'm writing a MIDlet using LWUIT, and images seem to eat up incredible amounts of memory. All the images I use are PNGs and are packed inside the JAR file. I load them using the standard Image.createImage(URL) method. The application has a number of forms, and each has a couple of labels and buttons; however, I am fairly certain that only the active form is kept in memory (I know it isn't very trustworthy, but Runtime.freeMemory() seems to confirm this).
The application has worked well at 240x320 resolution, but moving it to 480x640 and using appropriately larger images for the UI started causing out-of-memory errors to show up. What the application does, among other things, is download remote images. The application seems to work fine until it gets to this point. After downloading a couple of PNGs and returning to the main menu, the out-of-memory error is encountered. Naturally, I looked into the amount of memory the main menu uses, and it was pretty shocking. It's just two labels with images and four buttons. Each button has three images, used for style.setIcon, setPressedIcon and setRolloverIcon. The images range in size from 15 to 25 KB, but after removing two of the three images used for every button (so 8 images in total), Runtime.freeMemory() showed a stunning 1 MB decrease in memory usage.
The way I see it, either I have a whole lot of memory leaks (which I don't think I do, but memory leaks aren't exactly known for being easy to track down), or I am doing something terribly wrong with image handling, or there's really no problem involved and I just need to scale down.
If anyone has any insight to offer, I would greatly appreciate it.
Mobile devices are usually very low on memory, so you have to use some tricks to conserve and reuse it.
We had the same problem in a project of ours, and we solved it like this.
for downloaded images:
Make a cache where you put your images. If you need an image, check whether it is in the cache map: if it isn't, download it and put it there; if it is, use it. If memory is full, remove the oldest image in the cache map and try again (see the sketch after this answer).
for other resource images:
keep them in memory only for as long as they are visible; once they are off screen, break the reference and the GC will do the cleanup for you.
Hope this helps.
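A minimal sketch of such a cache for CLDC/MIDP (which has no java.util.LinkedHashMap); the download(url) helper is a hypothetical stand-in for the actual HTTP fetch:

import java.util.Hashtable;
import java.util.Vector;

public class ImageCache {
    private final Hashtable images = new Hashtable(); // url -> com.sun.lwuit.Image
    private final Vector order = new Vector();        // urls, oldest first
    private final int capacity;

    public ImageCache(int capacity) { this.capacity = capacity; }

    public synchronized com.sun.lwuit.Image get(String url) throws java.io.IOException {
        com.sun.lwuit.Image img = (com.sun.lwuit.Image) images.get(url);
        if (img != null) return img;                   // cache hit: reuse one instance
        while (images.size() >= capacity && !order.isEmpty()) {
            String oldest = (String) order.elementAt(0);
            order.removeElementAt(0);                  // evict the oldest entry first
            images.remove(oldest);
        }
        img = download(url);                           // hypothetical helper
        images.put(url, img);
        order.addElement(url);
        return img;
    }

    // Placeholder: fetch the bytes over HTTP and build the image, e.g. with
    // com.sun.lwuit.Image.createImage(bytes, 0, bytes.length).
    private com.sun.lwuit.Image download(String url) throws java.io.IOException {
        throw new java.io.IOException("not implemented in this sketch: " + url);
    }
}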
There are a few things that might be happening here:
You might have seen the memory used before garbage collection, which doesn't correspond to the actual memory used by your app.
Some third-party code you are running might be pooling some internal data structures to minimize allocation. While pooling is a viable strategy, sometimes it does look like a leak. In that case, look for an API to 'close' or 'dispose' of the objects you don't need.
Finally, you might really have a leak. In this case you need to get more details on what's going on in the emulator VM (though keep in mind that it is not necessarily the same as the phone VM).
Make sure that your emulator uses JRE 1.6 as the backing JVM. If you need it to use the runtime libraries from an earlier JDK, use -Xbootclasspath:<path-to-rt.jar>.
Then, after your application gets into the state you want to examine, run %JAVA_HOME%\bin\jmap -dump:format=b,file=heap.bin <pid> (if you don't know the id of your process, use jps).
Now you've got a dump of the JVM heap. You can analyze it with jhat (comes with the JDK, a bit difficult to use) or with some third-party profilers (my preference is YourKit; it's commercial, but they have time-limited eval licenses).
I had a similar problem with LWUIT on Java DTV. Did you try flushing the images when you don't need them anymore (getAWTImage().flush())?
Use EncodedImage and resource files when possible (resource files use EncodedImage by default; read the javadoc for it). The other comments are also correct that you need to actually observe the amount of memory; even high-RAM Android/iOS devices run out of memory pretty fast with multiple images.
Avoid scaling, which effectively eliminates the EncodedImage.
Have you considered the possibility that loading the same image from the JAR many times creates many separate image objects (with identical contents) instead of reusing one instance per individual image? This is my first guess.
