j2me out of memory exception

I'm making a game using J2ME, and sometimes while testing it I get an out of memory exception. I know what it means, but how can I solve it? If I call System.gc() in my game loop every time, will it help somehow? Any tips on how to prevent this would be appreciated!

I see you've also asked "j2me wtk find memory leak".
In my experience, memory leaks don't cause OutOfMemoryExceptions. All they do is slowly use up the memory in the device. And when it's nearly all used up, the device is forced to call System.gc() itself.
System.gc() is a blocking call, meaning it will make your whole game stall for some milliseconds, which of course is annoying. And this is why people go hunting for memory leaks - to prevent the automatic call to System.gc().
An OutOfMemoryException may occur if, for example, you have 1 MB of memory left while trying to load a 2 MB resource. And while a memory leak may dramatically increase the chances of running into a situation like that, your problem is not the memory leak itself, but more likely that you're using resources that are too big.
Are you using mp3 files for music? Or big images for backgrounds or maps?
You could try calling System.gc() just before loading big resources, and it might reduce the problem. But the problem doesn't have to be related to your game alone. It could also matter what other apps are running on the device at the same time, and how much memory they use.
You could also try replacing mp3 music with MIDI music, if only just to test if it makes a difference. (Find JavaME optimized MIDI music at IndieGameMusic.com).
And if you do use big images, make sure you optimize them with tools like PNGout or OptiPNG.
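For illustration, here is a minimal sketch of the "collect before loading a big resource" idea in plain MIDP code. The resource name is made up, and whether the explicit System.gc() actually helps depends entirely on the device's VM:
import java.io.IOException;
import javax.microedition.lcdui.Image;

// Minimal sketch: nudge the VM to collect before a big allocation.
// "/background_480x640.png" is a made-up resource name.
public final class ResourceLoader {
    public static Image loadBigImage() throws IOException {
        System.gc(); // only a hint - the VM may or may not run a full collection
        return Image.createImage("/background_480x640.png");
    }
}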

If the original file is not too big, there is no need to decrease its size; you can use JPEG instead. You can also set a specific limit on your local buffer size. For testing purposes, you can call Thread.sleep() after System.gc() to give the garbage collector time to run and to observe its effect. Also check WTK's memory monitor to find where the actual peak occurs.

Related

Is it practical to use the "rude big hammer" approach to parallelize a MacOS/CoreAudio real-time audio callback?

First, some relevant background info: I've got a CoreAudio-based low-latency audio processing application that does various mixing and special effects on audio that is coming from an input device on a purpose-dedicated Mac (running the latest version of MacOS) and delivers the results back to one of the Mac's local audio devices.
In order to obtain the best/most reliable low-latency performance, this app is designed to hook in to CoreAudio's low-level audio-rendering callback (via AudioDeviceCreateIOProcID(), AudioDeviceStart(), etc) and every time the callback-function is called (from the CoreAudio's realtime context), it reads the incoming audio frames (e.g. 128 frames, 64 samples per frame), does the necessary math, and writes out the outgoing samples.
This all works quite well, but from everything I've read, Apple's CoreAudio implementation has an unwritten de-facto requirement that all real-time audio operations happen in a single thread. There are good reasons for this which I acknowledge (mainly that outside of SIMD/SSE/AVX instructions, which I already use, almost all of the mechanisms you might employ to co-ordinate parallelized behavior are not real-time-safe and therefore trying to use them would result in intermittently glitchy audio).
However, my co-workers and I are greedy, and nevertheless we'd like to do many more math-operations per sample-buffer than even the fastest single core could reliably execute in the brief time-window that is necessary to avoid audio-underruns and glitching.
My co-worker (who is fairly experienced at real-time audio processing on embedded/purpose-built Linux hardware) tells me that under Linux it is possible for a program to requisition exclusive access for one or more CPU cores, such that the OS will never try to use them for anything else. Once he has done this, he can run "bare metal" style code on that CPU that simply busy-waits/polls on an atomic variable until the "real" audio thread updates it to let the dedicated core know it's time to do its thing; at that point the dedicated core will run its math routines on the input samples and generate its output in a (hopefully) finite amount of time, at which point the "real" audio thread can gather the results (more busy-waiting/polling here) and incorporate them back into the outgoing audio buffer.
My question is, is this approach worth attempting under MacOS/X? (i.e. can a MacOS/X program, even one with root access, convince MacOS to give it exclusive access to some cores, and if so, will big ugly busy-waiting/polling loops on those cores (including the polling-loops necessary to synchronize the CoreAudio callback-thread relative to their input/output requirements) yield results that are reliably real-time enough that you might someday want to use them in front of a paying audience?)
It seems like something that might be possible in principle, but before I spend too much time banging my head against whatever walls might exist there, I'd like some input about whether this is an avenue worth pursuing on this platform.
"can a MacOS/X program, even one with root access, convince MacOS to give it exclusive access to some cores"
I don't know about that, but you can use as many cores / real-time threads as you want for your calculations, using whatever synchronisation methods you need to make it work, then pass the audio to your IOProc using a lock free ring buffer, like TPCircularBuffer.
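TPCircularBuffer itself is a C library, so purely to illustrate the single-producer/single-consumer ring buffer idea (not as something you would run on the JVM in a real-time path), here is a rough sketch of the pattern; all names and sizes are made up:
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a single-producer/single-consumer lock-free ring buffer.
// The producer is your worker thread, the consumer is the audio callback.
// In a real CoreAudio app this would be C/C++ (e.g. TPCircularBuffer).
public final class SpscFloatRing {
    private final float[] buf;
    private final int mask;                       // capacity must be a power of two
    private final AtomicLong writePos = new AtomicLong(0);
    private final AtomicLong readPos  = new AtomicLong(0);

    public SpscFloatRing(int capacityPowerOfTwo) {
        buf = new float[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    // Producer side: copy len samples in, then publish the new write position.
    public boolean write(float[] src, int len) {
        long w = writePos.get(), r = readPos.get();
        if (buf.length - (w - r) < len) return false;           // not enough free space
        for (int i = 0; i < len; i++) buf[(int) ((w + i) & mask)] = src[i];
        writePos.lazySet(w + len);                               // publish after the data is written
        return true;
    }

    // Consumer side: copy len samples out, then publish the new read position.
    public boolean read(float[] dst, int len) {
        long w = writePos.get(), r = readPos.get();
        if (w - r < len) return false;                           // not enough data yet
        for (int i = 0; i < len; i++) dst[i] = buf[(int) ((r + i) & mask)];
        readPos.lazySet(r + len);
        return true;
    }
}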
But your question reminded me of a new macOS 11/iOS 14 API I've been meaning to try, the Audio Workgroups API (2020 WWDC Video).
My understanding is that this API lets you "bless" your non-IOProc real-time threads with audio real-time thread properties, or at least lets them cooperate better with the audio thread.
The documents distinguish between the threads working in parallel (this sounds like your case) and working asynchronously (this sounds like my proposal), I don't know which case is better for you.
I still don't know what happens in practice when you use Audio Workgroups, whether they opt you in to good stuff or opt you out of bad stuff, but if they're not the hammer you're seeking, they may have some useful hammer-like properties.

Heap Generation 2 and Large Object Heap climbs

I am not sure if I am posting to the right Stack Overflow forum, but here goes.
I have a C# desktop app. It receives images from 4 analogue cameras and it tries to detect motion and if so it saves it.
When I leave the app running say over a 24hr cycle I notice the Private Working Set has climbed by almost 500% in Task manager.
Now, I know using Task Manager is not a good idea but it does give me an indication if something is wrong.
To that end I purchased the dotMemory profiler from JetBrains.
I have used its tools to determine that the Generation 2 heap grows a lot in size. Then, to a lesser degree, so does the Large Object Heap.
The latter is a surprise as the image size is 360x238 and the byte array size is always less than 20K.
So, my issues are:
Should I explicitly call GC.Collect(2) for instance?
Should I be worried that my app is somehow responsible for this?
Andrew, my recommendation is to take a memory snapshot in dotMemory, then explore it to find out what retains most of the memory. This video will help you. If you are not sure about GC.Collect, you can just press the "Force GC" button; it will collect all available garbage in your app.

ImageMagick's display GPU "memory leak"?

I'm testing CUDA app and I have run into strange memory issue:
My program performs some image operations and displays it using ImageMagick's display program.
The problem is that every time I run IM's display I get more GPU memory usage, so there is less memory left for GPU computation.
I'm using IM's display because I couldn't find anything else that displays an image from pipe input. Any suggestions?
Anyway, why does IM's display take so much GPU memory, and why is it not freed?
Based on your question, you're attempting to display a series of files in sequence using a shell not unlike Bash after performing a set of GPU-intensive operations. You're curious why more GPU memory is being consumed with every subsequent invocation of ImageMagick display, which appears to be closing out successfully after the conclusion of each operation.
We may further theorize that you're using ImageMagick's OpenCL support for at least some of your processing. While we don't have enough information to determine what your GPU's texture buffers look like at the completion of each rendering via display, I speculate your GPU isn't freeing textures expediently, causing memory to slowly creep up.
Instead of continuing to build conjecture around this hypothesis, I will instead recommend a tool to debug your issue: gDEBugger. This should allow you to interrogate your video card to determine exactly why things are slowing down.
Best of luck with your application.
I know it's old, but we have figured out that using pipes (popen()) makes a copy of the program in memory, which also copies the end-of-program cleanup handlers, or whatever they are called... So when I close a program opened with popen(), I also finish all the CUDA-related contexts that are usually freed in the "background" when a program ends. So cleaning up CUDA memory after I close the popen'ed application won't work, and I think this was my memory leak and the major error in my program.
I hope someone will find it useful.

How to fix GC error in Mac Common Lisp 5.0?

I'm fairly new to Lisp, and I'm trying to run an algorithmic music application on the original MCL 5.0 (not the RMCL version). The program works by incrementally inputting textual representations of music, and learning from the user via an association net. Unfortunately, shortly after I begin inputting the text, I begin to see the GC icon flash. The more text I input, the longer the GC will appear, until finally it will last so long that the application will crash. I've been talking with the creator of this application, and he's never had this problem. Any ideas as to how I might fix this? Perhaps somehow altering my MCL's GC preferences?
On a side note, when I input the text and the GC icon is flashing, in Activity Monitor it shows MCL using around 90% of my CPU's processing power, but not much RAM.
MCL on what OS and Mac?
It could be that MCL starts up with too little memory. Possible reasons: it is configured for too little memory, the Mac has too little free memory for some reason.
(room t)
shows details about the available memory.
It can also be that the program takes up too much memory when running. Reasons for that: it is not compiled or the available memory is too small.
Generally I would suggest using the MCL user mailing list for these questions.
Send a message with the text 'help' in the body to info-mcl-request # digitool.com (remove the spaces). You will get a message explaining how to subscribe. The actual mailing list is info-mcl # digitool.com (again, without the spaces).

JavaME - LWUIT images eat up all the memory

I'm writing a MIDlet using LWUIT and images seem to eat up incredible amounts of memory. All the images I use are PNGs and are packed inside the JAR file. I load them using the standard Image.createImage(URL) method. The application has a number of forms and each has a couple of labels and buttons; however, I am fairly certain that only the active form is kept in memory (I know it isn't very trustworthy, but Runtime.freeMemory() seems to confirm this).
The application worked well at 240x320 resolution, but moving it to 480x640 and using appropriately larger images for the UI started causing out of memory errors. What the application does, among other things, is download remote images, and it seems to work fine until it gets to this point. After downloading a couple of PNGs and returning to the main menu, the out of memory error is encountered. Naturally, I looked into the amount of memory the main menu uses, and it was pretty shocking. It's just two labels with images and four buttons. Each button has three images used for style.setIcon, setPressedIcon and setRolloverIcon. Images range in size from 15 to 25 KB, but after removing two of the three images used for every button (so 8 images in total), Runtime.freeMemory() showed a stunning 1 MB decrease in memory usage.
The way I see it, I either have a whole lot of memory leaks (which I don't think I do, but memory leaks aren't exactly known to be easily tracked down), I am doing something terribly wrong with image handling or there's really no problem involved and I just need to scale down.
If anyone has any insight to offer, I would greatly appreciate it.
Mobile devices are usually very low on memory. So you have to use some tricks to conserve and use memory.
We had the same problem in a project of ours, and we solved it like this.
For downloaded images:
Make a cache where you put your images. If you need an image, check whether it is in the cache map: if it isn't, download it and put it there; if it is, use it. If memory is full, remove the oldest image in the cache map and try again (a sketch follows below).
For other resource images:
Keep them in memory only for as long as they are visible; when you can't see them, break the reference and the GC will do the cleanup for you.
Hope this helps.
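For illustration, here is a minimal sketch of the download cache described above. The Loader interface stands in for whatever download code you already have, and the eviction policy is simply "drop the oldest entry":
import java.io.IOException;
import java.util.Hashtable;
import java.util.Vector;
import com.sun.lwuit.Image;

// Sketch of an image cache that evicts the oldest entry on OutOfMemoryError.
public final class ImageCache {
    public interface Loader { Image load(String url) throws IOException; }

    private final Loader loader;
    private final Hashtable images = new Hashtable();   // url -> Image
    private final Vector order = new Vector();          // insertion order, oldest first

    public ImageCache(Loader loader) { this.loader = loader; }

    public Image get(String url) throws IOException {
        Image img = (Image) images.get(url);
        if (img != null) return img;                     // cache hit
        while (true) {
            try {
                img = loader.load(url);
                images.put(url, img);
                order.addElement(url);
                return img;
            } catch (OutOfMemoryError oom) {
                if (order.isEmpty()) throw oom;          // nothing left to evict
                String oldest = (String) order.elementAt(0);
                order.removeElementAt(0);                // drop the oldest image...
                images.remove(oldest);
                System.gc();                             // ...and try again
            }
        }
    }
}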
There are a few things that might be happening here:
You might have seen the memory used before garbage collection, which doesn't correspond to the actual memory used by your app.
Some third party code you are running might be pooling some internal datastructures to minimize allocation. While pooling is a viable strategy, sometimes it does look like a leak. In that case, look if there is API to 'close' or 'dispose' the objects you don't need.
Finally, you might really have a leak. In this case you need to get more details on what's going on in the emulator VM (though keep in mind that it is not necessarily the same as the phone VM).
Make sure that your emulator uses JRE 1.6 as the backing JVM. If you need it to use the runtime libraries from an earlier JDK, use -Xbootclasspath:<path-to-rt.jar>.
Then, after your application gets in the state you want to see, do %JAVA_HOME%\bin\jmap -dump:format=b,file=heap.bin <pid> (if you don't know the id of your process, use jps)
Now you've got a dump of the JVM heap. You can analyze it with jhat (comes with the JDK, a bit difficult to use) or some third party profilers (my preference is YourKit - it's commercial, but they have time-limited eval licenses)
I had a similar problem with LWUIT at Java DTV. Did you try flushing the images when you don't need them anymore (getAWTImage().flush())?
Use EncodedImage and resource files when possible (resource files use EncodedImage by default; read the javadoc for it). The other comments are also correct that you need to actually observe the amount of memory; even high-RAM Android/iOS devices run out of memory pretty fast with multiple images.
Avoid scaling, which effectively eliminates the EncodedImage.
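For illustration, a rough sketch of loading an icon through a resource file; the file and image names are made up, so check the LWUIT javadoc for the exact Resources signatures:
import java.io.IOException;
import com.sun.lwuit.Image;
import com.sun.lwuit.util.Resources;

// Sketch only: images loaded from a .res file stay as EncodedImage
// (compressed in memory) until they are actually drawn. Names are made up.
public final class Icons {
    public static Image mainMenuIcon() throws IOException {
        Resources res = Resources.open("/icons.res");   // bundled LWUIT resource file
        return res.getImage("menu_icon");               // backed by EncodedImage by default
    }
}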
Did you consider that loading the same image from the JAR many times may create many separate image objects (with identical contents) instead of reusing one instance per individual image? This is my first guess (see the sketch below).
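As an illustration of that guess, here is a minimal sketch that reuses one instance per resource path (a plain Hashtable keyed by path; the class name is made up):
import java.io.IOException;
import java.util.Hashtable;
import com.sun.lwuit.Image;

// Sketch: return the same Image instance for repeated loads of the same JAR resource,
// instead of creating a new object with identical contents every time.
public final class ImagePool {
    private static final Hashtable POOL = new Hashtable();   // path -> Image

    public static Image get(String path) throws IOException {
        Image img = (Image) POOL.get(path);
        if (img == null) {
            img = Image.createImage(path);   // loaded once, then shared
            POOL.put(path, img);
        }
        return img;
    }
}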
