j2me - Out of memory exception, does it have anything to do with the maximum heap or jar size? - java-me

I'm currently developing an app for taking orders. Before I ask my question, let me give you a few details of the basic functionality of my app:
The first thing the app does once the user is logged in is read data from a web service (products, prices and customers) so that the user can work offline.
Once the user has all the necessary data, they can start taking orders.
Finally, at the end of the day, the user sends all the orders to a server for processing.
Now that you know how my app works, here's the problem:
The app works in the emulator, but now that I'm running tests on a physical device, the following error appears on the screen when I read data from the web service:
Out of memory error java/lang/OutOfMemoryError
At first I thought that the data read from the WS (in JSON format) was too much for the StringBuffer:
hc = (HttpConnection) Connector.open(mUrl);
if (hc.getResponseCode() == HttpConnection.HTTP_OK) {
    is = hc.openInputStream();
    int ch;
    while ((ch = is.read()) != -1) {
        stringBuffer.append((char) ch);
    }
}
But it turned out that the error occurs when I try to convert the result from the WS (a string in JSON format) into a JSONArray.
I do this because I need to loop through all the objects (for example the Products) and then save them using RMS. Here's part of the code I use:
rs = RecordStore.openRecordStore(mRecordStoreName, true);
try {
    // This is the line that throws the exception
    JSONArray js = new JSONArray(data);
    for (int i = 0; i < js.length(); i++) {
        JSONObject jsObj = js.getJSONObject(i);
        stringJSON = jsObj.toString();
        id = saveRecord(stringJSON, rs);
    }
The saveRecord method:
public int saveRecord(String stringJSON, RecordStore rs) throws JSONException {
    int id = -1;
    try {
        byte[] raw = stringJSON.getBytes();
        id = rs.addRecord(raw, 0, raw.length);
    } catch (RecordStoreException ex) {
        System.out.println(ex);
    }
    return id;
}
Searching a little, I found these methods: Runtime.getRuntime().freeMemory() and Runtime.getRuntime().totalMemory().
With those I found out that the total memory is 2,097,152 bytes (2 MB) and the free memory just before the error occurs is 69,584 bytes.
Now the big question (or rather, questions) is:
Where is this small amount of memory taken from? The heap size? The device's specifications say that it has 4 MB.
Another thing that worries me is whether RMS increases the JAR size, because the specifications also say that the maximum JAR size is 2 MB.
As always, I really appreciate all your comments and suggestions.
Thanks in advance.

I don't know the exact reason, but these J2ME devices do have very limited memory. My app works with contacts, and when I tried to receive the JSON of contacts from the server, the conversion indeed caused an out-of-memory error when the response was too long.
The solution I found is paging: divide the data that you receive from the server into parts and read it part by part.

Related

Memory leak for dotnet core (3.1) application in Linux (Ubuntu 20.04), but not Windows (2012 R2)

I'm seeing an apparent memory leak with a dotnet core 3.1 application in our Linux (Ubuntu 20.04) environment, but not our Windows (2012 R2) environment. I have run the same operations between the two, and it's clear that Linux is hanging onto memory (gigs worth) and Windows is properly disposing of it.
I've tried looking at dotnet core settings (workstation vs. server garbage collection, etc.) and the defaults seem appropriate. I have tried researching the issue and have found a number of seemingly related posts here and elsewhere, but nothing conclusive or helpful. Here are the most relevant posts I found:
Dotnet Core Docker Container Leaks RAM on Linux and causes OOM
How do I use server garbage collection with dotnet fsi?
https://github.com/dotnet/diagnostics/issues/1402
I have also tried more general memory troubleshooting using MS's official documentation and the dotnet-counters tool. "Working set" memory appeared to be what was getting bloated; the GC heap didn't expand much and relinquished what it did take.
In code, what's happening is that I am creating images using System.Drawing; the images are wrapped in "using" blocks, and I am calling Dispose on those images when done.
Code example - in summary, this function takes a user's imported images and scales them down to a more optimized size. The helper functions scaleImage and imageToByteArray are the items in question:
UserImage[] userImages = this._repository.getImagesForUsersByIds(userIds.ToArray(), idList.ToArray());
List<UserImage> returnImages = new List<UserImage>();
foreach (UserImage img in userImages) {
    try {
        string filePath = this._fileManager.getFilePathOfEditorImage(img.OwnerId, img.Id, "." + img.ImageType);
        if (!string.IsNullOrEmpty(filePath) && this._fileInfoHelper.doesFileExist(filePath)) {
            System.Drawing.Image scaledImage;
            if (size > 0) {
                scaledImage = this._fileInfoHelper.scaleImage(filePath, size * size);
            } else {
                scaledImage = System.Drawing.Image.FromFile(filePath);
            }
            byte[] bytes = this._fileInfoHelper.imageToByteArray(scaledImage, this._fileInfoHelper.getFormatFromExtension(img.ImageType));
            string type = this._fileInfoHelper.getContentType(filePath);
            img.Base64 = "data:" + type.ToLower() + ";base64," + Convert.ToBase64String(bytes);
            returnImages.Add(img);
            scaledImage.Dispose();
        }
    }
    catch (Exception ex) {
        //could not convert image....
    }
}
return Ok(returnImages.ToArray());
Code example for an image scaling function I'm using:
public Image scaleImage(string fileName, int scaleImageArea) {
    Image returnVal = null;
    using (var bmpTemp = new Bitmap(fileName)) {
        using (Image sourceImage = new Bitmap(bmpTemp)) {
            if (sourceImage.Width * sourceImage.Height > scaleImageArea) {
                //We need to scale the image down.
                float width, height;
                //scaleImageArea is the target area that we are shooting for.
                float ratio = (float)sourceImage.Width / (float)sourceImage.Height;
                //Since width = height * ratio, the area is height^2 * ratio = target.
                height = (float)Math.Sqrt(scaleImageArea / ratio);
                width = height * ratio;
                returnVal = new Bitmap(sourceImage, new Size((int)width, (int)height));
            } else {
                returnVal = new Bitmap(sourceImage);
            }
            bmpTemp.Dispose();
            sourceImage.Dispose();
        }
    }
    return returnVal;
}
Code example of image to byte array function:
public byte[] imageToByteArray(System.Drawing.Image imageIn, System.Drawing.Imaging.ImageFormat format) {
    byte[] returnValue;
    using (var ms = new MemoryStream()) {
        imageIn.Save(ms, format);
        imageIn.Dispose();
        returnValue = ms.ToArray();
    }
    return returnValue;
}
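As an aside on the first snippet above: scaledImage is only disposed on the success path, so an exception thrown between creating it and the Dispose() call would leave the native bitmap alive. A minimal sketch of the same loop body with a using block (helper names taken from the snippets above; this is an illustration, not the author's actual code):
foreach (UserImage img in userImages) {
    try {
        string filePath = this._fileManager.getFilePathOfEditorImage(img.OwnerId, img.Id, "." + img.ImageType);
        if (string.IsNullOrEmpty(filePath) || !this._fileInfoHelper.doesFileExist(filePath)) continue;
        // 'using' guarantees Dispose() even if imageToByteArray or getContentType throws.
        using (System.Drawing.Image scaledImage = size > 0
            ? this._fileInfoHelper.scaleImage(filePath, size * size)
            : System.Drawing.Image.FromFile(filePath)) {
            byte[] bytes = this._fileInfoHelper.imageToByteArray(scaledImage, this._fileInfoHelper.getFormatFromExtension(img.ImageType));
            string type = this._fileInfoHelper.getContentType(filePath);
            img.Base64 = "data:" + type.ToLower() + ";base64," + Convert.ToBase64String(bytes);
            returnImages.Add(img);
        }
    }
    catch (Exception) {
        // could not convert image....
    }
}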
Specific problem:
I have a memory leak in a Linux environment but not in windows with the same code. I suspect a dotnet core or other Linux-specific configuration/environment issue, but troubleshooting with official dotnet core memory leak tools has been a dead end.
Desired behavior:
I don't want the memory leak and would expect to see behavior in Linux similar to that of Windows.

StackExchange.Redis on Azure is throwing timeout performing get and no connection available exceptions

I recently switched an MVC application that serves data feeds and dynamically generated images (6k rpm throughput) from the v3.9.67 ServiceStack.Redis client to the latest StackExchange.Redis client (v1.0.450) and I'm seeing some slower performance and some new exceptions.
Our Redis instance is S4 level (13GB), CPU shows a fairly constant 45% or so and network bandwidth appears fairly low. I'm not entirely sure how to interpret the gets/sets graph in our Azure portal, but it shows us around 1M gets and 100k sets (appears that this may be in 5 minute increments).
The client library switch was straightforward and we are still using the v3.9 ServiceStack JSON serializer so that the client lib was the only piece changing.
Our external monitoring with New Relic shows clearly that our average response time increases from about 200ms to about 280ms between ServiceStack and StackExchange libraries (StackExchange being slower) with no other change.
We recorded a number of exceptions with messages along the lines of:
Timeout performing GET feed-channels:ag177kxj_egeo-_nek0cew, inst: 12, mgr: Inactive, queue: 30, qu=0, qs=30, qc=0, wr=0/0, in=0/0
I understand this to mean that there are a number of commands in the queue that have been sent but for which no response is yet available from Redis, and that this can be caused by long-running commands that exceed the timeout. These errors appeared for a period when the SQL database behind one of our data services was getting backed up, so perhaps that was the cause? After scaling out that database to reduce load we haven't seen many more of this error, but the DB query should be happening in .NET and I don't see how that would hold up a Redis command or connection.
We also recorded a large batch of errors this morning over a short period (couple of minutes) with messages like:
No connection is available to service this operation: SETEX feed-channels:vleggqikrugmxeprwhwc2a:last-retry
We were used to transient connection errors with the ServiceStack library, and those exception messages were usually like this:
Unable to Connect: sPort: 63980
I'm under the impression that SE.Redis should be retrying connections and commands in the background for me. Do I still need to be wrapping our calls through SE.Redis in a retry policy of my own? Perhaps different timeout values would be more appropriate (though I'm not sure what values to use)?
Our redis connection string sets these parameters: abortConnect=false,syncTimeout=2000,ssl=true. We use a singleton instance of ConnectionMultiplexer and transient instances of IDatabase.
The vast majority of our Redis use goes through a Cache class, and the important bits of the implementation are below, in case we're doing something silly that's causing us problems.
Our keys are generally 10-30 or so character strings. Values are largely scalar or reasonably small serialized object sets (hundred bytes to a few kB generally), though we do also store jpg images in the cache so a large chunk of the data is from a couple hundred kB to a couple MB.
Perhaps I should be using different multiplexers for small and large values, probably with longer timeouts for larger values? Or a couple of multiplexers in case one is stalled?
public class Cache : ICache
{
    private readonly IDatabase _redis;

    public Cache(IDatabase redis)
    {
        _redis = redis;
    }

    // storing this placeholder value allows us to distinguish between a stored null and a non-existent key
    // while only making a single call to redis. see Exists method.
    static readonly string NULL_PLACEHOLDER = "$NULL_VALUE$";

    // this is a dictionary of https://github.com/StephenCleary/AsyncEx/wiki/AsyncLock
    private static readonly ILockCache _locks = new LockCache();

    public T GetOrSet<T>(string key, TimeSpan cacheDuration, Func<T> refresh) {
        T val;
        if (!Exists(key, out val)) {
            using (_locks[key].Lock()) {
                if (!Exists(key, out val)) {
                    val = refresh();
                    Set(key, val, cacheDuration);
                }
            }
        }
        return val;
    }

    private bool Exists<T>(string key, out T value) {
        value = default(T);
        var redisValue = _redis.StringGet(key);
        if (redisValue.IsNull)
            return false;
        if (redisValue == NULL_PLACEHOLDER)
            return true;
        value = typeof(T) == typeof(byte[])
            ? (T)(object)(byte[])redisValue
            : JsonSerializer.DeserializeFromString<T>(redisValue);
        return true;
    }

    public void Set<T>(string key, T value, TimeSpan cacheDuration)
    {
        if (value.IsDefaultForType())
            _redis.StringSet(key, NULL_PLACEHOLDER, cacheDuration);
        else if (typeof (T) == typeof (byte[]))
            _redis.StringSet(key, (byte[])(object)value, cacheDuration);
        else
            _redis.StringSet(key, JsonSerializer.SerializeToString(value), cacheDuration);
    }

    public async Task<T> GetOrSetAsync<T>(string key, Func<T, TimeSpan> getSoftExpire, TimeSpan additionalHardExpire, TimeSpan retryInterval, Func<Task<T>> refreshAsync) {
        var softExpireKey = key + ":soft-expire";
        var lastRetryKey = key + ":last-retry";
        T val;
        if (ShouldReturnNow(key, softExpireKey, lastRetryKey, retryInterval, out val))
            return val;
        using (await _locks[key].LockAsync()) {
            if (ShouldReturnNow(key, softExpireKey, lastRetryKey, retryInterval, out val))
                return val;
            Set(lastRetryKey, DateTime.UtcNow, additionalHardExpire);
            try {
                var newVal = await refreshAsync();
                var softExpire = getSoftExpire(newVal);
                var hardExpire = softExpire + additionalHardExpire;
                if (softExpire > TimeSpan.Zero) {
                    Set(key, newVal, hardExpire);
                    Set(softExpireKey, DateTime.UtcNow + softExpire, hardExpire);
                }
                val = newVal;
            }
            catch (Exception ex) {
                if (val == null)
                    throw;
            }
        }
        return val;
    }

    private bool ShouldReturnNow<T>(string valKey, string softExpireKey, string lastRetryKey, TimeSpan retryInterval, out T val) {
        if (!Exists(valKey, out val))
            return false;
        var softExpireDate = Get<DateTime?>(softExpireKey);
        if (softExpireDate == null)
            return true;
        // value is in the cache and not yet soft-expired
        if (softExpireDate.Value >= DateTime.UtcNow)
            return true;
        var lastRetryDate = Get<DateTime?>(lastRetryKey);
        // value is in the cache, it has soft-expired, but it's too soon to try again
        if (lastRetryDate != null && DateTime.UtcNow - lastRetryDate.Value < retryInterval) {
            return true;
        }
        return false;
    }
}
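For context, a hypothetical call site for the cache above (the key name, duration, and LoadFeedFromDataService are invented for illustration):
var cache = new Cache(connectionMultiplexer.GetDatabase());

// Returns the cached value if present; otherwise takes the per-key lock,
// calls the refresh delegate once, stores the result for 10 minutes, and returns it.
var feed = cache.GetOrSet(
    "feed-channels:example",
    TimeSpan.FromMinutes(10),
    () => LoadFeedFromDataService());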
A few recommendations:
- You can use different multiplexers with different timeout values for different types of keys/values (see the sketch after these links): http://azure.microsoft.com/en-us/documentation/articles/cache-faq/
- Make sure you are not network bound on the client or the server. If you are network bound on the server, move to a higher SKU, which has more bandwidth.
Please read this post for more details: http://azure.microsoft.com/blog/2015/02/10/investigating-timeout-exceptions-in-stackexchange-redis-for-azure-redis-cache/
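A minimal sketch of the "separate multiplexers" idea, assuming the same connection options as in the question; the host name, placeholder password, and timeout values are illustrative, and routing by value size is left to the caller:
using System;
using StackExchange.Redis;

public static class RedisConnections
{
    // Multiplexer tuned for small, latency-sensitive values.
    private static readonly Lazy<ConnectionMultiplexer> SmallValueMux =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net,abortConnect=false,ssl=true,password=...,syncTimeout=2000"));

    // Multiplexer with a longer syncTimeout for the multi-hundred-kB image payloads.
    private static readonly Lazy<ConnectionMultiplexer> LargeValueMux =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net,abortConnect=false,ssl=true,password=...,syncTimeout=10000"));

    public static IDatabase GetDatabase(bool largeValues = false)
    {
        return (largeValues ? LargeValueMux : SmallValueMux).Value.GetDatabase();
    }
}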

ReceiveFromAsync leaking SocketAsyncEventArgs?

I have a client application that receives video stream from a server via UDP or TCP socket.
Originally, when it was written using .NET 2.0 the code was using BeginReceive/EndReceive and IAsyncResult.
The client displays each video in its own window and also uses its own thread for communicating with the server.
However, since the client is supposed to be up for a long period of time, and there might be 64 video streams simultaneously, there is a "memory leak" of IAsyncResult objects that are allocated each time the data receive callback is called.
This causes the application eventually to run out of memory, because the GC can't handle releasing of the blocks in time. I verified this using VS 2010 Performance Analyzer.
So I modified the code to use SocketAsyncEventArgs and ReceiveFromAsync (UDP case).
However, I still see a growth in memory blocks at:
System.Net.Sockets.Socket.ReceiveFromAsync(class System.Net.Sockets.SocketAsyncEventArgs)
I've read all the samples and posts about implementing the code, and still no solution.
Here's what my code looks like:
// class data members
private byte[] m_Buffer = new byte[UInt16.MaxValue];
private SocketAsyncEventArgs m_ReadEventArgs = null;
private IPEndPoint m_EndPoint; // local endpoint from the caller
Initializing:
m_Socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
m_Socket.Bind(m_EndPoint);
m_Socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveBuffer, MAX_SOCKET_RECV_BUFFER);
//
// initialize the socket event args structure.
//
m_ReadEventArgs = new SocketAsyncEventArgs();
m_ReadEventArgs.Completed += new EventHandler<SocketAsyncEventArgs>(readEventArgs_Completed);
m_ReadEventArgs.SetBuffer(m_Buffer, 0, m_Buffer.Length);
m_ReadEventArgs.RemoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
m_ReadEventArgs.AcceptSocket = m_Socket;
Starting the read process:
bool waitForEvent = m_Socket.ReceiveFromAsync(m_ReadEventArgs);
if (!waitForEvent)
{
    readEventArgs_Completed(this, m_ReadEventArgs);
}
Read completion handler:
private void readEventArgs_Completed(object sender, SocketAsyncEventArgs e)
{
    if (e.BytesTransferred == 0 || e.SocketError != SocketError.Success)
    {
        //
        // we got an error on the socket or the connection was closed
        //
        Close();
        return;
    }
    try
    {
        // try to process a new video frame if enough data was read
        base.ProcessPacket(m_Buffer, e.Offset, e.BytesTransferred);
    }
    catch (Exception ex)
    {
        // log the error
    }
    bool willRaiseEvent = m_Socket.ReceiveFromAsync(e);
    if (!willRaiseEvent)
    {
        readEventArgs_Completed(this, e);
    }
}
Basically the code works fine and I see the video streams perfectly, but this leak is a real pain.
Did I miss anything???
Many thanks!!!
Instead of recursively calling readEventArgs_Completed when !willRaiseEvent, use a goto to return to the top of the method. I noticed I was slowly chewing up stack space when I had a pattern similar to yours.
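For illustration, here is one way the completion handler from the question could be restructured so that synchronous completions are handled by a loop rather than recursion (a do/while equivalent of the goto suggestion, using the fields shown above):
private void readEventArgs_Completed(object sender, SocketAsyncEventArgs e)
{
    bool willRaiseEvent;
    do
    {
        if (e.BytesTransferred == 0 || e.SocketError != SocketError.Success)
        {
            // error on the socket or connection closed
            Close();
            return;
        }
        try
        {
            // try to process a new video frame if enough data was read
            base.ProcessPacket(m_Buffer, e.Offset, e.BytesTransferred);
        }
        catch (Exception)
        {
            // log the error
        }
        // Post the next receive; if it completed synchronously, loop instead of recursing.
        willRaiseEvent = m_Socket.ReceiveFromAsync(e);
    } while (!willRaiseEvent);
}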

"The specified block list is invalid" while uploading blobs in parallel

I've a (fairly large) Azure application that uploads (fairly large) files in parallel to Azure blob storage.
In a few percent of uploads I get an exception:
The specified block list is invalid.
System.Net.WebException: The remote server returned an error: (400) Bad Request.
This happens when we run a fairly innocuous-looking bit of code to upload a blob in parallel to Azure storage:
public static void UploadBlobBlocksInParallel(this CloudBlockBlob blob, FileInfo file)
{
    blob.DeleteIfExists();
    blob.Properties.ContentType = file.GetContentType();
    blob.Metadata["Extension"] = file.Extension;
    byte[] data = File.ReadAllBytes(file.FullName);
    int numberOfBlocks = (data.Length / BlockLength) + 1;
    string[] blockIds = new string[numberOfBlocks];
    Parallel.For(
        0,
        numberOfBlocks,
        x =>
        {
            string blockId = Convert.ToBase64String(Guid.NewGuid().ToByteArray());
            int currentLength = Math.Min(BlockLength, data.Length - (x * BlockLength));
            using (var memStream = new MemoryStream(data, x * BlockLength, currentLength))
            {
                var blockData = memStream.ToArray();
                var md5Check = System.Security.Cryptography.MD5.Create();
                var md5Hash = md5Check.ComputeHash(blockData, 0, blockData.Length);
                blob.PutBlock(blockId, memStream, Convert.ToBase64String(md5Hash));
            }
            blockIds[x] = blockId;
        });
    byte[] fileHash = _md5Check.ComputeHash(data, 0, data.Length);
    blob.Metadata["Checksum"] = BitConverter.ToString(fileHash).Replace("-", string.Empty);
    blob.Properties.ContentMD5 = Convert.ToBase64String(fileHash);
    data = null;
    blob.PutBlockList(blockIds);
    blob.SetMetadata();
    blob.SetProperties();
}
All very mysterious; I'd think the algorithm we're using to calculate the block list should produce strings that are all the same length...
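(For reference: within a single blob, all block IDs must be Base64 strings of the same encoded length. The GUID-based IDs above already satisfy that, since every 16-byte GUID encodes to 24 Base64 characters; a common alternative, shown here only as a hypothetical sketch, is a zero-padded block index.)
// Hypothetical helper: fixed-width, Base64-encoded block IDs derived from the block index.
static string GetBlockId(int blockIndex)
{
    byte[] idBytes = System.Text.Encoding.UTF8.GetBytes(blockIndex.ToString("d6"));
    return Convert.ToBase64String(idBytes);
}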
We ran into a similar issue, however we were not specifying any block ID or even using the block ID anywhere. In our case, we were using:
using (CloudBlobStream stream = blob.OpenWrite(condition))
{
    //// [write data to stream]
    stream.Flush();
    stream.Commit();
}
This would cause The specified block list is invalid. errors under parallelized load. Switching this code to use the UploadFromStream(…) method while buffering the data into memory fixed the issue:
using (MemoryStream stream = new MemoryStream())
{
    //// [write data to stream]
    stream.Seek(0, SeekOrigin.Begin);
    blob.UploadFromStream(stream, condition);
}
Obviously this could have negative memory ramifications if too much data is buffered into memory, but this is a simplification. One thing to note is that UploadFromStream(...) uses Commit() in some cases, but checks additional conditions to determine the best method to use.
This exception can also happen when multiple threads open a stream to a blob with the same file name and try to write to it simultaneously.
NOTE: this solution is based on the Azure SDK for Java, but I think we can safely assume that the pure REST version will behave the same (as will any other language, actually).
Since I spent an entire work day fighting this issue, even though it is actually a corner case, I'll leave a note here; maybe it will be of help to someone.
I did everything right. I had block IDs in the right order, I had block IDs of the same length, I had a clean container with no leftovers of some previous blocks (these three reasons are the only ones I was able to find via Google).
There was one catch: I've been building my block list for commit via
CloudBlockBlob.commitBlockList(Iterable<BlockEntry> blockList)
with use of this constructor:
BlockEntry(String id, BlockSearchMode searchMode)
passing
BlockSearchMode.COMMITTED
in the second argument. And THAT proved to be the root cause. Once I changed it to
BlockSearchMode.UNCOMMITTED
and eventually landed on the one-parameter constructor
BlockEntry(String id)
which uses UNCOMMITTED by default, committing the block list worked and the blob was successfully persisted.

How do I tell my C# application to close a file it has open in a FileInfo object or possibly Bitmap object?

So I was writing a quick application to sort my wallpapers neatly into folders according to aspect ratio. Everything is going smoothly until I try to actually move the files (using FileInfo.MoveTo()). The application throws an exception:
System.IO.IOException
The process cannot access the file because it is being used by another process.
The only problem is, there is no other process running on my computer that has that particular file open. I thought that, because of the way I was using the file, some internal system subroutine on a different thread or something might have the file open when I try to move it. Sure enough, a few lines above that, I set a property that calls an event that opens the file for reading. I'm assuming at least some of that happens asynchronously. Is there any way to make it run synchronously? Otherwise I'll have to change that property or rewrite much of the code.
Here are some relevant bits of code; please forgive the crappy Visual C# default names for things, this isn't really a release-quality piece of software yet:
private void button1_Click(object sender, EventArgs e)
{
    for (uint i = 0; i < filebox.Items.Count; i++)
    {
        if (!filebox.GetItemChecked((int)i)) continue;
        //This calls the selectedIndexChanged event to change the 'selectedImg' variable
        filebox.SelectedIndex = (int)i;
        if (selectedImg == null) continue;
        Size imgAspect = getImgAspect(selectedImg);
        //This is gonna be hella hardcoded for now
        //In the future this should be changed to be generic
        //and use some kind of setting schema to determine
        //the sort/filter results
        FileInfo file = ((FileInfo)filebox.SelectedItem);
        if (imgAspect.Width == 8 && imgAspect.Height == 5)
        {
            finalOut = outPath + "\\8x5\\" + file.Name;
        }
        else if (imgAspect.Width == 5 && imgAspect.Height == 4)
        {
            finalOut = outPath + "\\5x4\\" + file.Name;
        }
        else
        {
            finalOut = outPath + "\\Other\\" + file.Name;
        }
        //Me trying to tell C# to close the file
        selectedImg.Dispose();
        previewer.Image = null;
        //This is where the exception is thrown
        file.MoveTo(finalOut);
    }
}
//The suspected event handler
private void filebox_SelectedIndexChanged(object sender, EventArgs e)
{
    FileInfo selected;
    if (filebox.SelectedIndex >= filebox.Items.Count || filebox.SelectedIndex < 0) return;
    selected = (FileInfo)filebox.Items[filebox.SelectedIndex];
    try
    {
        //The suspected line of code
        selectedImg = new Bitmap((Stream)selected.OpenRead());
    }
    catch (Exception) { selectedImg = null; }
    if (selectedImg != null)
        previewer.Image = ResizeImage(selectedImg, previewer.Size);
    else
        previewer.Image = null;
}
I have a longer fix in mind (that's probably more efficient anyway) but it presents still more problems :/
Any help would be greatly appreciated.
Since you are using selectedImg as a class-scoped variable, it keeps a lock on the file while the Bitmap is open. I would use a using statement and then Clone the Bitmap into the variable you are using; this will release the lock that the Bitmap is keeping on the file.
Something like this:
using (Bitmap img = new Bitmap((Stream)selected.OpenRead()))
{
    selectedImg = (Bitmap)img.Clone();
}
New answer:
I looked at the line where you do an OpenRead(). Clearly, this locks your file. It would be better to provide the file path instead of a stream, because you can't dispose the stream while the Bitmap is still in use (the Bitmap would become erroneous).
Another thing I see in your code that could be a bad practice is binding to FileInfo. It would be better to create a data-transfer object/value object - some object that has only the properties you need to show in your control - and bind to a collection of that type. That would help avoid file locks.
On the other hand, you could try a trick: why not show images stretched and compressed to screen resolution, so their in-memory size is far smaller than the originals, and provide a button called "Show in HQ"? That should solve the problem of preloading HD images. When the user clicks the "Show in HQ" button, load that image into memory, and when it is closed, dispose of it.
Is that OK for you?
If I'm not wrong, FileInfo doesn't lock any file; you're not opening it, just reading its metadata.
On the other hand, if your application shows images, you should load the visible ones into memory and serve them to your form from a memory stream.
That's reasonable because you can open a file stream, read its bytes into a memory stream, and then close the file, so no lock is left against that file.
NOTE: This solution is fine for not so large images... Let me know if you're working with HD images.
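A minimal sketch of that approach, using the names from the question: read the file's bytes up front so the file handle is released immediately, then build the Bitmap from a MemoryStream (which must stay alive for the Bitmap's lifetime but holds no lock on the file on disk):
// Read the file fully and release the file handle right away.
byte[] bytes = File.ReadAllBytes(selected.FullName);

// Keep the MemoryStream referenced as long as selectedImg is in use.
var ms = new MemoryStream(bytes);
selectedImg = new Bitmap(ms);
previewer.Image = ResizeImage(selectedImg, previewer.Size);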
using (selectedImg = new Bitmap((Stream)selected.OpenRead()))
Will that do it?
