JavaME - LWUIT images eat up all the memory

I'm writing a MIDlet using LWUIT, and images seem to eat up incredible amounts of memory. All the images I use are PNGs and are packed inside the JAR file. I load them using the standard Image.createImage(URL) method. The application has a number of forms, each with a couple of labels and buttons, however I am fairly certain that only the active form is kept in memory (I know it isn't very trustworthy, but Runtime.freeMemory() seems to confirm this).
The application worked well at 240x320 resolution, but moving it to 480x640 and using appropriately larger images for the UI started causing out of memory errors. What the application does, among other things, is download remote images. It seems to work fine until it gets to this point: after downloading a couple of PNGs and returning to the main menu, the out of memory error is encountered. Naturally, I looked into the amount of memory the main menu uses, and it was pretty shocking. It's just two labels with images and four buttons. Each button has three images used for style.setIcon, setPressedIcon and setRolloverIcon. The images range in size from 15 to 25KB, but after removing two of the three images used for every button (8 images in total), Runtime.freeMemory() showed a stunning 1MB decrease in memory usage.
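If my math is right, the PNG size on disk says little about the decoded size in memory: assuming the icons are around 200x160 pixels (my guess) and decode to 4 bytes per pixel (ARGB), each one takes 200 * 160 * 4 = 128,000 bytes, so 8 of them come to roughly 1MB, which would match the freeMemory() difference I saw.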
The way I see it, either I have a whole lot of memory leaks (which I don't think I do, but memory leaks aren't exactly known for being easy to track down), or I am doing something terribly wrong with image handling, or there's really no problem involved and I just need to scale down.
If anyone has any insight to offer, I would greatly appreciate it.

Mobile devices are usually very low on memory. So you have to use some tricks to conserve and use memory.
We had the same problem in one of our projects, and we solved it like this.
For downloaded images:
Make a cache where you put your images. If you need an image, check whether it is in the cache map; if it isn't, download it and put it there; if it is, use it. If memory is full, remove the oldest image from the cache map and try again (see the sketch after this list).
For other resource images:
Keep them in memory only for as long as you can see them. If you can't see them, break the reference and the GC will do the cleanup for you.
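Here is a minimal sketch of the downloaded-image cache idea, assuming LWUIT on CLDC (so only Hashtable and Vector are available); the class and method names are just for illustration, and eviction is plain FIFO:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Hashtable;
    import java.util.Vector;
    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;
    import com.sun.lwuit.Image;

    // Sketch of a FIFO image cache: evicts the oldest entry when memory runs out.
    public class ImageCache {
        private final Hashtable cache = new Hashtable(); // url -> Image
        private final Vector order = new Vector();       // insertion order

        public synchronized Image get(String url) throws IOException {
            Image img = (Image) cache.get(url);
            if (img != null) {
                return img; // cache hit, reuse the existing instance
            }
            while (true) {
                try {
                    img = download(url);
                    break;
                } catch (OutOfMemoryError e) {
                    if (order.isEmpty()) {
                        throw e; // nothing left to evict
                    }
                    // drop the oldest image and retry the download
                    String oldest = (String) order.elementAt(0);
                    order.removeElementAt(0);
                    cache.remove(oldest);
                    System.gc();
                }
            }
            cache.put(url, img);
            order.addElement(url);
            return img;
        }

        private Image download(String url) throws IOException {
            HttpConnection conn = (HttpConnection) Connector.open(url);
            try {
                InputStream in = conn.openInputStream();
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
                in.close();
                byte[] data = out.toByteArray();
                return Image.createImage(data, 0, data.length);
            } finally {
                conn.close();
            }
        }
    }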
Hope this helps.

There are a few things that might be happening here:
You might have seen the memory used before garbage collection, which doesn't correspond to the actual memory used by your app.
Some third party code you are running might be pooling some internal data structures to minimize allocation. While pooling is a viable strategy, sometimes it does look like a leak. In that case, check whether there is an API to 'close' or 'dispose' of the objects you don't need.
Finally, you might really have a leak. In this case you need to get more details on what's going on in the emulator VM (though keep in mind that it is not necessarily the same as the phone VM).
Make sure that your emulator uses JRE 1.6 as the backing JVM. If you need it to use the runtime libraries from an earlier JDK, use -Xbootclasspath:<path-to-rt.jar>.
Then, after your application gets into the state you want to examine, run %JAVA_HOME%\bin\jmap -dump:format=b,file=heap.bin <pid> (if you don't know the id of your process, use jps).
Now you've got a dump of the JVM heap. You can analyze it with jhat (comes with the JDK, a bit difficult to use) or some third-party profilers (my preference is YourKit; it's commercial, but they have time-limited eval licenses).

I had a similar problem with LWUIT at Java DTV. Did you try flushing the images when you don't need them anymore (getAWTImage().flush())?

Use EncodedImage and resource files when possible (resource files use EncodedImage by default; read the javadoc for the details). The other comments are also correct that you need to actually observe the amount of memory; even high-RAM Android/iOS devices run out of memory pretty fast with multiple images.
Avoid scaling, which effectively eliminates the EncodedImage benefit.
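A sketch of what that looks like when loading straight from the JAR rather than a resource file; EncodedImage keeps only the compressed PNG bytes and decodes on demand (the resource path and class name here are made up):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import com.sun.lwuit.EncodedImage;

    public class IconLoader {
        // Reads a PNG out of the JAR and wraps the raw bytes in an EncodedImage,
        // so memory holds the ~20KB compressed file, not the decoded pixels.
        public static EncodedImage loadEncoded(String resource) throws IOException {
            InputStream in = IconLoader.class.getResourceAsStream(resource);
            if (in == null) {
                throw new IOException("Missing resource: " + resource);
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
            in.close();
            return EncodedImage.create(out.toByteArray());
        }
    }

Since EncodedImage extends Image, the result can be passed to setIcon and friends directly; just keep in mind that any scaling forces a full decode.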

Did you consider the fact that loading the same image from the JAR many times may be creating many separate image objects (with identical contents) instead of reusing one instance per individual image? This is my first guess.
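If that is what's happening, a memoizing loader rules it out cheaply; a sketch (the class name is hypothetical):

    import java.io.IOException;
    import java.util.Hashtable;
    import com.sun.lwuit.Image;

    // Sketch: hand out one shared Image instance per resource path, so repeated
    // loads of the same PNG don't allocate duplicate pixel data.
    public class SharedImages {
        private static final Hashtable instances = new Hashtable();

        public static synchronized Image get(String path) throws IOException {
            Image img = (Image) instances.get(path);
            if (img == null) {
                img = Image.createImage(path); // decoded once, reused afterwards
                instances.put(path, img);
            }
            return img;
        }
    }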

Related

Node.js | Chrome Memory Debugging

Context:
I have a Node.js application whose memory usage seems very high. I don't know whether it is a memory leak or not, because the usage is reduced after a certain period of time, but sometimes under heavy load it keeps on increasing and takes much longer to come back down.
So, going through articles and a couple of videos, I figured that I have to take a heap snapshot and analyse what is causing the memory leak.
Steps:
I have taken 4 snapshots so far on my local machine to reproduce the memory leak.
Snapshot 1: 800MB
Snapshot 2: 1400MB
Snapshot 3: 1600MB
Snapshot 4: 2000+MB
When I uploaded the heap dump files to Chrome DevTools, I saw a lot of information, but I don't know how to proceed from there.
Please check the screenshot below: it says there is a constructor [array] which has 687206 as Shallow Size and 721414 as Retained Size in the columns, and when I expanded that constructor I could see there are 4097716 constructors created (refer to the second screenshot attached below).
Question
What does the internal [array] constructor mean? Why are there 4097716 of them?
How can I filter out the constructors created by my app, and show those instead of the system/V8 engine constructors?
In the same screenshot, one of the constructors uses a global variable called tenantRequire; this is a custom global function which is used internally in some places instead of the normal Node.js require. I see the variable across all the constructors, like "Array" and "Object". This is the global tenantRequire code, for reference; it is just a patched require function with a try/catch. Is this causing the memory leak somehow?
Refer to screenshot 3: the [string] constructor has 270303848 as shallow size. When I expanded it, it showed the modules loaded by Node.js. Why is this taking that much space? And why are my lodash modules repeated in that string constructor?
Without knowing much about your app and the actions that cause the high memory usage, it's hard to tell what could be the issue. Which tool did you use to record the heap snapshot? What is the sequence of operations you did when you recorded the snapshot? Could you add this information to your question?
A couple of remarks
You tagged the question with node.js and showed Chrome DevTools. That's ok. You can totally take a heap snapshot of a Node.js application and analyze it in Chrome DevTools. But since both Node.js and Chrome use the same JS engine (V8) and the same garbage collector (Orinoco), it might be a bit confusing for someone who reads the question. Just to make sure I understand it correctly: the issue is in a Node.js app, not in a browser app. And you are using Chrome just to analyze the heap snapshot. Right?
Also, you wrote that you took the snapshots to reproduce the memory leak. That's not correct. You performed some action which you thought would cause a high memory usage, recorded a heap snapshot, and later loaded the snapshot in Chrome DevTools to observe the supposed memory leak.
Trace first, profile second
Every time you suspect a performance issue, you should first use tracing to understand which functions in your application are problematic (e.g. slow, creating a lot of objects that have to be garbage-collected, etc.).
Then, when you know which functions to focus on, you can profile them.
Try these visual tools
There are a few tools that can help you with tracing/profiling your app. Have a look at FlameScope (a web app) and node-clinic (a suite of tools). There is also Perfetto, but I think it's for Chrome apps, not Node.js apps.
I also highly recommend the V8 blog.

Heap Generation 2 and Large Object Heap climbs

I am not sure if I am posting to the right Stack Overflow forum, but here goes.
I have a C# desktop app. It receives images from 4 analogue cameras and it tries to detect motion and if so it saves it.
When I leave the app running, say over a 24-hour cycle, I notice the Private Working Set has climbed by almost 500% in Task Manager.
Now, I know using Task Manager is not a good idea but it does give me an indication if something is wrong.
To that end I purchased the dotMemory profiler from JetBrains.
I have used its tools to determine that the Heap Generation 2 increases a lot in size. Then to a lesser degree the Large Object Heap as well.
The latter is a surprise as the image size is 360x238 and the byte array size is always less than 20K.
So, my issues are:
Should I explicitly call GC.Collect(2) for instance?
Should I be worried that my app is somehow responsible for this?
Andrew, my recommendation is to take a memory snapshot in dotMemory, then explore it to find what retains most of the memory. This video will help you. If you are not sure about GC.Collect, you can just press the "Force GC" button; it will collect all available garbage in your app.

Check memory usage in haskell

I'm creating a program which implements some kind of cache. I need to use as much memory as possible and to do that I need to do two things:
Check how much memory is still available in system (RAM only, not SWAP)
Check how much memory my app is already using.
I need a platform independent solution (Linux, Windows, etc.).
Using these two pieces of information I will reduce the size of cache or enlarge it.
How can I get this information in Haskell? Are there any packages that can provide that information?
I can't immediately see how to do this portably.
However, GHC does have "weak pointers". (See System.Mem.Weak.) If you create items and hang on to them via weak pointers (only), then the garbage collector will automatically start deleting items if you run low on physical memory.
(Unfortunately, this doesn't give you the ability to decide which items to delete first — e.g., the ones that are cheapest to recreate or the ones that have been least-used or something.)

j2me out of memory exception

I'm making a game using J2ME, and sometimes while testing it I get an out of memory exception. I know what it means, but how can I solve it? If I call System.gc() in my game loop every time, will it help somehow? Any tips on how to prevent this would be appreciated!
I see you've also asked j2me wtk find memory leak
In my experience, memory leaks don't cause OutOfMemoryExceptions. All they do is slowly use up all the memory in the device. And when it's all nearly used up, the device is then forced to call System.gc() itself.
System.gc() is a blocking call, meaning it'll make your whole game stall for some microseconds, which of course is annoying. And this is why people go hunting for memory leaks - to prevent the automatic call to System.gc().
An OutOfMemoryException may occur if, for example, you have 1MB of memory left while trying to load a 2MB resource. And while a memory leak may dramatically increase the chances of running into a situation like that, your problem is not the memory leak itself; more likely, you're simply using resources that are too big.
Are you using mp3 files for music? Or big images for backgrounds or maps?
You could try calling System.gc() just before loading big resources, and it might reduce the problem. But the problem doesn't have to be related to your game alone. It could also matter what other apps are running on the device at the same time, and how much memory they use.
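For example, something along these lines around your biggest allocations (the class name and fallback asset are invented):

    import java.io.IOException;
    import javax.microedition.lcdui.Image;

    public class SafeLoader {
        // Sketch: nudge the collector before a big allocation and fail gracefully.
        public static Image loadBig(String path) throws IOException {
            System.gc(); // only a hint, but may free memory before a big decode
            try {
                return Image.createImage(path);
            } catch (OutOfMemoryError e) {
                // hypothetical smaller fallback asset
                return Image.createImage("/fallback-small.png");
            }
        }
    }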
You could also try replacing mp3 music with MIDI music, if only just to test if it makes a difference. (Find JavaME optimized MIDI music at IndieGameMusic.com).
And if you do use big images, make sure you optimize them with tools like PNGout or Optipng.
If the original file is not too big, then there is no need to decrease its size; you can use JPEG instead. You can also put a specific limit on your local buffer size. And before System.gc(), you can use Thread.sleep() for testing purposes, to check the effect and to give the GC time to run. Also check with the WTK's performance monitor to find the actual peak location.

What's the max. memory available for applications? (getting no more handles error)

In an Eclipse RCP application I am trying to open many editors. It is basically a tree with lots of nodes, each of which opens an editor. When I open in excess of 150 to 200 editors and try to open an editor for the next tree node, it doesn't open. The Eclipse console shows "org.eclipse.swt.SWTError: No more handles". However, if I close a few of the already opened editors, I am able to open as many new tree node editors.
I monitored the memory usage for javaw.exe; memory grows on opening each editor, but the number of handles remains constant after a certain maximum. javaw.exe consumes around 120,000K when the error happens. The total memory consumed by all applications during the error is 700,000K. And if I try to open a few more applications like IE, they either don't open or open with fewer UI features due to the shortage of system memory. And all this in spite of having 2GB RAM!
I also tried increasing the vmargs in the Eclipse memory settings, but that wasn't of much help either.
a) Is there a memory leak in my code? I don't see it, as the handles remain constant after a certain maximum. As I understand it, while editors are open, the SWT controls on them are not disposed until the editors are closed.
b) What's the max. memory that can be used up by applications? My RAM is 2GB, so the overall memory available to all processes should be well above 700,000K, which I think is around 680MB.
a) Try Sleak. It can find GDI leaks in your SWT application.
b) You can try to change the maximum number of GDI handles or User objects in the registry. See here and here for more information.
You also might want to try creating a virtual tree, so that only the tree nodes that are shown are created.
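Here's a minimal sketch of a virtual tree in SWT; with SWT.VIRTUAL, TreeItems (and their handles) are only created when a node scrolls into view. The node count and labels are made up:

    import org.eclipse.swt.SWT;
    import org.eclipse.swt.layout.FillLayout;
    import org.eclipse.swt.widgets.Display;
    import org.eclipse.swt.widgets.Event;
    import org.eclipse.swt.widgets.Listener;
    import org.eclipse.swt.widgets.Shell;
    import org.eclipse.swt.widgets.Tree;
    import org.eclipse.swt.widgets.TreeItem;

    public class VirtualTreeDemo {
        public static void main(String[] args) {
            Display display = new Display();
            Shell shell = new Shell(display);
            shell.setLayout(new FillLayout());

            // SWT.VIRTUAL defers item creation until an item becomes visible.
            final Tree tree = new Tree(shell, SWT.VIRTUAL | SWT.BORDER);
            tree.setItemCount(10000); // logical count; nothing created up front
            tree.addListener(SWT.SetData, new Listener() {
                public void handleEvent(Event event) {
                    TreeItem item = (TreeItem) event.item;
                    int index = tree.indexOf(item); // root level in this sketch
                    item.setText("Node " + index);  // made-up label source
                }
            });

            shell.setSize(300, 400);
            shell.open();
            while (!shell.isDisposed()) {
                if (!display.readAndDispatch()) {
                    display.sleep();
                }
            }
            display.dispose();
        }
    }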
I cannot answer specifically your question, but it seems to me you are running into the maximum limit of open files a process can have at any time (judging from the "handles" term, which often refers to open files, much like file descriptors in Unix). It would be a matter of operating-system-level user permissions/capabilities then. The allowed number of open files has nothing to do with memory size.
Handles refers to file handles (open file references) and is controlled by the operating system. It's not typically a user changed setting because keeping lots of file handles open indefinitely hogs OS resources.
This question falls into the If you have to ask, you're probably doing something wrong category. ;-)
Your best bet here is to open the file, read it and then close it. Then reopen the file when you need to write out the changes. You may need to use a locking mechanism as well if you are worried about concurrent edits.
If you don't want to change too much logic, it may help to open+read+close each file as you put it in the tree, and then reopen (for write) the one(s) that are currently in the user's active view, closing them as the user navigates away.
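A sketch of that open+read+close pattern in plain java.io (the class name and string-based content are placeholders; a real editor would keep a richer model):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.Writer;

    public class FileSnapshot {
        // Read the whole file, then close it at once so no handle stays open.
        public static String read(String path) throws IOException {
            BufferedReader in = new BufferedReader(new FileReader(path));
            try {
                StringBuffer sb = new StringBuffer();
                String line;
                while ((line = in.readLine()) != null) {
                    sb.append(line).append('\n');
                }
                return sb.toString();
            } finally {
                in.close(); // handle released once the content is in memory
            }
        }

        // Reopen only when the user's changes need to be written back.
        public static void write(String path, String content) throws IOException {
            Writer out = new FileWriter(path);
            try {
                out.write(content);
            } finally {
                out.close();
            }
        }
    }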
Windows deals with several kinds of handles, e.g. executive handles (files, threads), GDI handles (fonts, brushes, device contexts), user handles (windows, menus, most native controls, images). For most of these handles, there's a per-process and an overall system limit which has nothing to do with the amount of RAM available in your system. E.g. due to historical reasons, there's a limit of 10000 user handles per process and a limit of 32000 user handles per desktop session. See http://msdn.microsoft.com/en-us/library/ms810501.aspx for an in-depth explanation of handles.
So first you need to make sure that you are not leaking any handles, e.g. using Sleak. Then you need to be aware that SWT uses at least one user handle for each widget (yes even for plain Composite objects). If you have a big application with lots of widgets, you'll hit the limit of 10000 user objects. I wrote a short blog entry about what we did to work around this limit in our product: http://www.subshell.com/en/subshell/blog/investigating-user-handles-with-the-swt-detective100.html. I also hacked the SWT Spy into a tool which allows me to investigate the widget tree of our application, to find places to reduce the widget count. The download link for this tool is in the blog entry.
