Scrolling UIImageViews getting "Low Memory Warning" - memory-leaks

My application is relatively simple: basically a UIScrollView used to look at a couple of hundred (large) JPEGs. Yet it crashes consistently with a "Low Memory Warning".
The scroll view consists of three UIImageViews (previousPage, currentPage, and nextPage). Upon starting, and every time the current page is scrolled, I "reset" the three UIImageViews with new UIImages.
NSString *previousPath = [[NSBundle mainBundle] pathForResource:previousName ofType:@"jpg"];
// currentPath and nextPath are built the same way
previousPage.image = [UIImage imageWithContentsOfFile:previousPath];
currentPage.image = [UIImage imageWithContentsOfFile:currentPath];
nextPage.image = [UIImage imageWithContentsOfFile:nextPath];
Running under the Allocations instrument, the number of living UIImage objects stays constant throughout the run of the app, but the number of transitory UIImage objects can grow quite high.
Is this a problem? Is there some way that I can be 'releasing' the UIImage objects? Am I right in thinking this is the source of what must be a memory leak?


AVAssetWriter getting raw bytes makes corrupt videos on device (works on sim)

So my goal is to add CVPixelBuffers to my AVAssetWriter / AVAssetWriterInputPixelBufferAdaptor at very high speed. My previous solution used CGContextDrawImage, but it is very slow (0.1 s) to draw. The reason seems to be color matching and conversion, but that's another question, I think.
My current solution is trying to read the bytes of the image directly to skip the draw call. I do this:
CGImageRef cgImageRef = [image CGImage];
CGImageRetain(cgImageRef);
CVPixelBufferRef pixelBuffer = NULL;
CGDataProviderRef dataProvider = CGImageGetDataProvider(cgImageRef);
CGDataProviderRetain(dataProvider);
CFDataRef da = CGDataProviderCopyData(dataProvider);
CVPixelBufferCreateWithBytes(NULL,
                             CGImageGetWidth(cgImageRef),
                             CGImageGetHeight(cgImageRef),
                             kCVPixelFormatType_32BGRA,
                             (void *)CFDataGetBytePtr(da),
                             CGImageGetBytesPerRow(cgImageRef),
                             NULL,
                             0,
                             NULL,
                             &pixelBuffer);
[writerAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentTime];
// -- releases of the retained objects (cgImageRef, dataProvider, da, pixelBuffer) here --
This works fine in the simulator and inside an app. But when I run the code inside the SpringBoard process, it comes out as the images below. Running it outside the sandbox is a requirement; this is meant for jailbroken devices.
I have tried playing around with, e.g., pixel formats, but it mostly just comes out with differently corrupted images.
The proper image/video file looks fine:
But this is what I get in the broken state:
Answering my own question, as I think I found the answer(s). The resolution difference was a simple code error: I wasn't using the device bounds in the latter case.
As for the color issues: in short, the CGImages I got when running outside the sandbox used more bytes per pixel, 8 bytes, while the images I got when running inside the sandbox used 4 bytes. So I was simply writing the wrong data into the buffer.
So instead of simply copying all of the bytes from the larger image into the smaller buffer, I loop through the pixel buffer row by row and byte by byte, picking out the RGBA values for each pixel. I essentially had to skip every other byte of the source image to get the right data into the right place in the buffer.
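A minimal C sketch of that inner copy, assuming a 4-channel image with 16 bits (2 bytes) per channel being packed down to 8 bits per channel. The function name is mine, and whether the kept byte is the first or second of each 2-byte pair depends on the image's byte order:

```c
#include <stdint.h>
#include <stddef.h>

/* Copy one row of 16-bit-per-channel pixels (8 bytes per pixel) into an
 * 8-bit-per-channel destination row (4 bytes per pixel), keeping only one
 * byte of each channel -- i.e. skipping every other source byte. */
static void copy_row_skip_bytes(const uint8_t *src, uint8_t *dst, size_t pixels)
{
    for (size_t p = 0; p < pixels; p++) {
        for (size_t c = 0; c < 4; c++) {          /* 4 channels per pixel */
            dst[p * 4 + c] = src[p * 8 + c * 2];  /* take one byte, skip one */
        }
    }
}
```

Done per row (rather than over the whole buffer at once), this also copes with the source and destination having different bytes-per-row values.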

Memory leak updating geometry - ArcGis Runtime .Net

I'm working with the ArcGIS Runtime SDK for .NET v10.2.5.
I have a UDP socket listening for image data; when data arrives it fires a function executed on a background thread.
I want to draw the image over an ellipse of arbitrary radius, so I use:
var filestream = System.IO.File.Open(imagepath, FileMode.Open, FileAccess.Read);
MapPoint point = new MapPoint(center.longitude, center.latitude, SpatialReferences.Wgs84);
var polySymbol = new Esri.ArcGISRuntime.Symbology.PictureFillSymbol();
await polySymbol.SetSourceAsync(filestream);
var param = new GeodesicEllipseParameters(point, 25, LinearUnits.Meters);
var ellipse = GeometryEngine.GeodesicEllipse(param);
// HERE IS THE PROBLEM
_graphicsLayer.Graphics.Clear();
_graphicsLayer.Graphics.Add(new Graphic { Geometry = ellipse, Symbol = polySymbol });
This is done about 5 times per second. Even though I clear the layer on each iteration, there is a memory leak that keeps increasing memory use until the app crashes.
I've read about memory problems with ArcGIS and geometry processing, so I'm not sure whether I'm hitting a known limitation or just doing things badly.
I also tried overwriting the geometry without clearing:
// These are the problematic lines; if I comment them out, memory doesn't increase.
_graphicsLayer.Graphics[0].Symbol = polySymbol;
_graphicsLayer.Graphics[0].Geometry = ellipse;
And with a using statement, filestream is properly closed at the end, but RAM usage keeps increasing until the app crashes.
I would store the PictureFillSymbol in a Dictionary keyed by file name and reuse the symbol rather than creating a new one on every update. Changing the Symbol and Geometry is likely better than creating a new Graphic every time.
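A sketch of that reuse pattern, in C with hypothetical names (the real code is .NET; the point here is the cache-by-key lookup, not the SDK): look the symbol up by file name and create it only on a miss.

```c
#include <string.h>
#include <stdlib.h>

#define CACHE_MAX 16

/* Hypothetical stand-in for an expensive-to-create symbol object. */
typedef struct { char path[256]; } Symbol;

typedef struct {
    char    keys[CACHE_MAX][256];
    Symbol *values[CACHE_MAX];
    int     count;
    int     creations;  /* for illustration: how many symbols were built */
} SymbolCache;

/* Return the cached symbol for this file name, creating it only on a miss.
 * No eviction; assumes fewer than CACHE_MAX distinct file names. */
static Symbol *cache_get(SymbolCache *c, const char *filename)
{
    for (int i = 0; i < c->count; i++)
        if (strcmp(c->keys[i], filename) == 0)
            return c->values[i];           /* hit: reuse, nothing allocated */
    Symbol *s = malloc(sizeof *s);         /* miss: create once */
    strncpy(s->path, filename, sizeof s->path - 1);
    s->path[sizeof s->path - 1] = '\0';
    strncpy(c->keys[c->count], filename, sizeof c->keys[0] - 1);
    c->keys[c->count][sizeof c->keys[0] - 1] = '\0';
    c->values[c->count] = s;
    c->count++;
    c->creations++;
    return s;
}
```

At ~5 updates per second, the cache turns per-update symbol creation into a one-time cost per distinct image file.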

Retrieving the values of only one property of an entity in Core Data on iOS

I have an entity called "Records", and it has an attribute named "amount" of class NSDecimalNumber.
And of course "Record" has other attributes such as "name", "date", and so on.
Now I need to fetch only the amount attribute of all "Records" so I can sum them up;
for better performance I just need the value of "amount", and I don't care about the name or the date.
So what should I do?
Here's my code, but I guess it's not professional enough.
NSFetchRequest *request = [[NSFetchRequest alloc] initWithEntityName:@"TransferRecord"];
request.includesSubentities = NO;
[request setPropertiesToFetch:[NSArray arrayWithObject:@"amount"]];
// [request setReturnsObjectsAsFaults:NO]; // I don't know whether I should use this
[request setResultType:NSDictionaryResultType];
NSError *error = nil;
NSArray *temp = [self.fetchedResultsController.managedObjectContext executeFetchRequest:request error:&error];
if (temp) {
    // NSDecimalNumber *allTrans = [NSDecimalNumber zero];
    // for (NSDecimalNumber *one in [temp valueForKey:<#(NSString *)#>])
    NSLog(@"%@", [temp description]);
}
And I'm not clear about what "faults" means.
Eno,
How do you know that the standard fetch is not fast enough for your needs? Have you identified the fetch, using Instruments, as the problem? (I ask because the standard way to make CD apps run faster, particularly on iOS, is to fetch more data by using simpler predicates. You then refine the query using set operations on the items in RAM.)
If you think about it, you'll see why. Rows are stored in contiguous bytes on disk, and several rows can frequently be found in the same disk block. Hence, a single fetch can potentially get many rows. The disk fetch is the slow part of any database query. On iOS, the flash is quite slow; it isn't designed for high-performance scatter-gather database operations. IOW, an iOS device is not an SSD for its own OS. BTW, a fetch on flash will bring upwards of 128k-256k bytes into the buffers. Hence, getting more rows is easy and comparatively fast.
Your code above is basically correct.
You need to go read the documentation about Core Data faults. It is a fundamental concept in the system. Apple's documentation is quite clear on the nature of a fault. Stack Overflow is the wrong place to ask for basic information covered well in any system's documentation.
Andrew

XNA Xbox frame drops when GC kicks in

I'm developing an app (an XNA game) for the Xbox, which is a pretty simple app. The start page contains tiles with moving GIF images. Those GIF images are actually all PNG images, which get loaded once by every tile and put in an array. Then, using a defined delay, these images are played back (using a counter which increases every time the delay passes).
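The delay-counter playback described above can be sketched roughly like this (plain C with hypothetical names; the actual code is XNA/C#):

```c
#include <stddef.h>

/* One tile's animation state: an accumulator adds elapsed time each Update,
 * and the frame index advances whenever a full delay has passed. */
typedef struct {
    size_t frame_count;   /* number of PNG frames in the tile's array */
    size_t current;       /* index of the frame currently shown */
    double delay;         /* seconds between frames */
    double accumulated;   /* time since the last frame advance */
} TileAnimation;

static void tile_update(TileAnimation *t, double elapsed_seconds)
{
    t->accumulated += elapsed_seconds;
    while (t->accumulated >= t->delay) {   /* catch up if several delays passed */
        t->accumulated -= t->delay;
        t->current = (t->current + 1) % t->frame_count;
    }
}
```

Note that this scheme advances by wall-clock time, so any hitch in Update (a GC pause, say) shows up directly as a visible jump in the animation.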
This all works well, however, I noticed some small lag every x seconds in the movement of the GIF images. I then started to add some benchmarking stuff:
http://gyazo.com/f5fe0da3ff81bd45c0c52d963feb91d8
As you can see, the FPS is pretty low for such a simple program (this is in debug; when running the app on the Xbox itself, I get an average of 62 fps).
2 important settings:
Graphics.SynchronizeWithVerticalRetrace = false;
IsFixedTimeStep = false;
Changing IsFixedTimeStep to true increases the lag: the settings tile has wheels that rotate, and you can see the wheels jump back a little every x seconds. The same goes for SynchronizeWithVerticalRetrace; enabling it also increases the lag.
I noticed a connection between the lag and the moments the garbage collector kicks in: every time it runs, there is a lag...
Don't mind the MAX HMU (heap memory usage), as that reflects the value at startup; the average is more realistic.
Here is another screen from the performance monitor, though I don't understand much of this tool (it's my first time using it). Hope it helps:
http://gyazo.com/f70a3d400657ac61e6e9f2caaaf17587
After a little research I found the culprit.
I have custom components that all derive from GameComponent and get added to the Components list of the main Game class.
This was one of two major problems, causing everything to update even when it didn't need updating. (The Draw method was the only one that kept the page state in mind and only drew when needed.)
I fixed this by using different "screens" (or pages, as I called them), which are the only components that derive from GameComponent.
Then I only update the page which is active, and the custom components on that page also get updated. Problem fixed.
The second big problem is the following:
I made a class which helps me position things on the screen relatively, with percentages and the like: parent containers, aligns & v-aligns, etc.
That class had properties for sizes and vectors, but instead of saving the calculated value in a backing field, I recalculated it every time a property was accessed. Calculating complex things like that involves chasing references (to parent and child containers, for example), which made it very hard on the CLR because it had a lot of work to do.
I have now rebuilt the whole positioning class into a fully optimized class, with flags for recalculating only when necessary, and instead of drops to 20 fps, I now get an average of 170+ fps!
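The backing-field-plus-dirty-flag idea can be sketched like this (plain C with made-up names; the real class is C# and far more involved):

```c
#include <stdbool.h>

/* Cache the computed value in a backing field and recompute only when a
 * dirty flag says the inputs changed, instead of redoing the expensive,
 * reference-chasing calculation on every property access. */
typedef struct {
    double parent_width;   /* input the computed size depends on */
    double percent;        /* e.g. 50 means "50% of the parent" */
    double cached_width;   /* backing field */
    bool   dirty;          /* set when an input changes */
    int    recalc_count;   /* for illustration: how often we really compute */
} Layout;

static void layout_set_percent(Layout *l, double percent)
{
    l->percent = percent;
    l->dirty = true;       /* invalidate the cache; don't recompute yet */
}

static double layout_width(Layout *l)
{
    if (l->dirty) {
        l->cached_width = l->parent_width * l->percent / 100.0;
        l->recalc_count++;
        l->dirty = false;
    }
    return l->cached_width;  /* cheap path: no recalculation, no garbage */
}
```

Besides saving CPU, avoiding per-access recomputation also avoids per-access temporary allocations, which is what was feeding the garbage collector here.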

glDrawElements flicker and crash

I'm getting a fishy error when using glDrawElements(). I'm trying to render simple primitives (mainly rectangles) to speed up the drawing of text and so forth, but when I call glDrawElements() the WHOLE screen blinks black (not just my window area) for a frame or so. The next frame it turns back to the same "Windows colors" as before. It flickers like this for a couple of seconds, ending in a message box saying:
The NVIDIA OpenGL Driver encountered an unrecoverable error
and must close this application.
Error 12
Is there any GL setting which I need to reset before calling glDrawElements()? I know it's not some dangling glEnableClientState(); I checked (I used to have one of those, but then glDrawElements() crashed instead).
Come to think of it, it almost looks like some back buffer error... Any ideas on what to try?
Obviously you are mixing VBO mode and VA (vertex array) mode. This is perfectly possible, but it must be used with care.
When you call:
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
This means that the next time you render something with glDrawElements(..., ..., ..., x), it will use x as a pointer to the index data, and the last call to glVertexPointer as a pointer to the vertex data.
If you don't unbind the current VBO and IBO (with the above two glBindBuffer calls), then when rendering with the same glDrawElements call, x will be used as an offset into the index data in the IBO, and the pointer passed to glVertexPointer as an offset into the vertex data in the VBO.
Depending on the values of x and the glVertexPointer argument, you can make the driver crash because the offsets go out of bounds and/or the underlying data is of the wrong type (e.g. NaN).
So, to answer your question: after drawing in VBO mode and before drawing in VA mode,
unbind the current VBO
unbind the current IBO
specify the right vertices address with glVertexPointer
specify the right indices address with glDrawElements
and then it will be fine.
Bah! Found it. When I did
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
before rendering, the flickering and crashing stopped. Is this the expected behavior? Sorry for wasting time and space.
