I'm working with the ArcGIS Runtime SDK for .NET v10.2.5.
I have a UDP socket listening for image data; when data arrives, it fires a function that runs on a background thread.
I want to draw the image over an ellipse of arbitrary radius, so I use:
var filestream = System.IO.File.Open(imagepath, FileMode.Open, FileAccess.Read);
MapPoint point = new MapPoint(center.longitude, center.latitude, SpatialReferences.Wgs84);
var polySymbol = new Esri.ArcGISRuntime.Symbology.PictureFillSymbol();
await polySymbol.SetSourceAsync(filestream);
var param = new GeodesicEllipseParameters(point, 25, LinearUnits.Meters);
var ellipse = GeometryEngine.GeodesicEllipse(param);
// HERE IS THE PROBLEM
_graphicsLayer.Graphics.Clear();
_graphicsLayer.Graphics.Add(new Graphic { Geometry = ellipse, Symbol = polySymbol });
This is done ~5 times per second. Even though I'm clearing the layer on each iteration, there is a memory leak that keeps increasing memory use until the app crashes.
I have read about memory problems with ArcGIS and geometry processing, so I'm not sure if I'm hitting a wall or just doing things badly.
I also tried overwriting the geometry without clearing:
// This is the problematic line; if I comment it out, memory doesn't increase.
_graphicsLayer.Graphics[0].Symbol = polySymbol;
_graphicsLayer.Graphics[0].Geometry = ellipse;
And with a using statement around the stream, the file stream is properly closed at the end, but the RAM usage keeps increasing until the app crashes.
I would store the PictureFillSymbol in a Dictionary by file name and reuse the symbol rather than creating a new one on every update. Changing the Symbol and Geometry on the existing Graphic is likely the better approach, rather than creating a new Graphic every time.
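A rough sketch of that idea, reusing the calls from your question; the _symbolCache dictionary, the UpdateEllipseAsync name, and the count check are illustrative, not a verified ArcGIS recipe:
// Cache symbols by file name so SetSourceAsync only runs once per image.
private readonly Dictionary<string, PictureFillSymbol> _symbolCache =
    new Dictionary<string, PictureFillSymbol>();

private async Task UpdateEllipseAsync(string imagePath, MapPoint point)
{
    PictureFillSymbol symbol;
    if (!_symbolCache.TryGetValue(imagePath, out symbol))
    {
        symbol = new PictureFillSymbol();
        using (var stream = System.IO.File.OpenRead(imagePath))
        {
            await symbol.SetSourceAsync(stream);
        }
        _symbolCache[imagePath] = symbol;
    }

    var param = new GeodesicEllipseParameters(point, 25, LinearUnits.Meters);
    var ellipse = GeometryEngine.GeodesicEllipse(param);

    // Reuse a single Graphic instead of clearing and re-adding every update.
    if (_graphicsLayer.Graphics.Count == 0)
    {
        _graphicsLayer.Graphics.Add(new Graphic { Geometry = ellipse, Symbol = symbol });
    }
    else
    {
        _graphicsLayer.Graphics[0].Geometry = ellipse;
        _graphicsLayer.Graphics[0].Symbol = symbol;
    }
}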
So my goal is to append CVPixelBuffers to my AVAssetWriter / AVAssetWriterInputPixelBufferAdaptor at very high speed. My previous solution used CGContextDrawImage, but it is very slow (0.1 s) to draw. The reason seems to be color matching and conversion, but that's another question, I think.
My current solution is trying to read the bytes of the image directly to skip the draw call. I do this:
CGImageRef cgImageRef = [image CGImage];
CGImageRetain(cgImageRef);
CVPixelBufferRef pixelBuffer = NULL;
CGDataProviderRef dataProvider = CGImageGetDataProvider(cgImageRef);
CGDataProviderRetain(dataProvider);
CFDataRef da = CGDataProviderCopyData(dataProvider);
CVPixelBufferCreateWithBytes(NULL,
                             CGImageGetWidth(cgImageRef),
                             CGImageGetHeight(cgImageRef),
                             kCVPixelFormatType_32BGRA,
                             (void*)CFDataGetBytePtr(da),
                             CGImageGetBytesPerRow(cgImageRef),
                             NULL,
                             0,
                             NULL,
                             &pixelBuffer);
[writerAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentTime];
-- releases here --
This works fine in the simulator and inside an app. But when I run the code inside the SpringBoard process, it comes out as the images below. Running it outside the sandbox is a requirement; this is meant for jailbroken devices.
I have tried playing around with, for example, the pixel format types, but it mostly comes out as differently corrupted images.
The proper image/video file looks fine:
But this is what I get in the broken state:
Answering my own question, as I think I found the answer(s). The resolution difference was a simple code error: I wasn't using the device bounds in the latter cases.
As for the color issues: in short, the CGImages I got when running outside the sandbox used more bytes per pixel, 8 bytes, while the images I got when running inside the sandbox used 4 bytes. So I was simply writing the wrong data into the buffer.
So, instead of simply slapping all of the bytes from the larger image into the smaller buffer, I loop through the pixel buffer row by row, byte by byte, and pick the RGBA values for each pixel. I essentially had to skip every other byte from the source image to get the right data into the right place within the buffer.
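A hedged sketch of that per-pixel copy, assuming the source CGImage really is 16 bits per channel (8 bytes per pixel) and the destination buffer is 32BGRA; it allocates its own buffer with CVPixelBufferCreate instead of CVPixelBufferCreateWithBytes, and which byte of each 16-bit channel to keep (and the channel order) may need adjusting:
const uint8_t *src = CFDataGetBytePtr(da); // da from CGDataProviderCopyData above
size_t width          = CGImageGetWidth(cgImageRef);
size_t height         = CGImageGetHeight(cgImageRef);
size_t srcBytesPerRow = CGImageGetBytesPerRow(cgImageRef);

CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

uint8_t *dst          = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

for (size_t y = 0; y < height; y++) {
    const uint8_t *srcRow = src + y * srcBytesPerRow;
    uint8_t *dstRow       = dst + y * dstBytesPerRow;
    for (size_t x = 0; x < width; x++) {
        // 8 source bytes per pixel -> 4 destination bytes:
        // keep one byte of each 16-bit channel, i.e. skip every other byte.
        for (size_t c = 0; c < 4; c++) {
            dstRow[x * 4 + c] = srcRow[x * 8 + c * 2];
        }
    }
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);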
I am using SkiaSharp to load an SVG drawing. It's a site plan, so is reasonably complex, and it takes a long time to load. On my Samsung Galaxy phone, it's about 3 seconds, and during that time the phone completely locks up, which is quite unacceptable.
I am using SkiaSharp.Extended.Svg.SKSvg, but cannot find an asynchronous version of the Load() method. Is there one? Or maybe a better way of doing this?
I am overlaying objects on top of the site plan, and it has taken me some considerable time to get all the scaling and alignment sorted, so if at all possible I'd like to stick with SkiaSharp rather than start with something completely different from scratch.
Thanks for your help!
3 seconds does sound a bit long...
It may not be the SVG part, but rather the loading of the file off the file system.
Either way, you might be able to just load the whole thing in a background task, something like this:
var picture = Task.Run(() =>
{
    using (var stream = File.OpenRead(svgPath)) // svgPath: wherever your SVG lives
    {
        var svg = new SkiaSharp.Extended.Svg.SKSvg();
        svg.Load(stream);
        return svg.Picture;
    }
});
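Once that task completes you can cache the SKPicture and just redraw it. A rough usage sketch, assuming a SkiaSharp.Views.Forms SKCanvasView named canvasView and a _plan field (both are my names, not yours):
SKPicture _plan;

async Task LoadPlanAsync()
{
    _plan = await picture;          // the Task.Run(...) from above
    canvasView.InvalidateSurface(); // repaint now that the plan is ready
}

void OnPaintSurface(object sender, SKPaintSurfaceEventArgs e)
{
    var canvas = e.Surface.Canvas;
    canvas.Clear(SKColors.White);
    if (_plan != null)
        canvas.DrawPicture(_plan); // apply your existing scale/offset here
}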
I think I am running into a memory leak with an Express app when connecting x number of EventSource clients to it. After connecting the clients and sending them x messages and disconnecting them, my Express app only releases a small amount of the allocated Heap/RSS.
To confirm this, I saved a heap dump when starting the server and another after connecting 7,000 clients to it and sending x messages to each client. I waited for a while to give the GC a chance to clean up before taking the second snapshot.
To compare these heap snapshots I loaded them in the Chrome Developer Tools Profile view and chose the "Comparison" mode.
My questions are:
1) How to interpret these numbers?
(For reference see the attached heap snapshot screenshot.)
2) For instance, it looks like the Socket objects are hardly freed at all; is that correct?
3) Can you give me more tips to investigate the problem?
You could be free of the memory leak and, as a bonus, avoid the garbage collector.
All you have to do is object pooling.
You could do something like:
// Pre-fill the pool so pop() returns a reusable object instead of undefined.
var clientsPool = [];
for (var i = 0; i < 1000; i++) clientsPool.push({});
var clientsConnected = [];
When a new client connects, you do:
var newClient = clientsPool.pop() || {}; // fall back if the pool runs dry
// set your props here
clientsConnected.push(newClient);
That's an awesome way to avoid the Garbage Collector and prevent the memory leak. Sure, there's a little more work to it and you will have to manage the pool carefully, but it's totally worth it for performance.
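For completeness, a hedged sketch of returning objects to the pool when a client disconnects; the onDisconnect hook and the socket property are assumptions, not part of your code:
function onDisconnect(client) {
    var idx = clientsConnected.indexOf(client);
    if (idx !== -1) clientsConnected.splice(idx, 1);
    client.socket = null;     // drop large references so nothing stays retained
    clientsPool.push(client); // hand the object back for reuse
}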
There's an awesome talk about it, here you go
https://www.youtube.com/watch?v=RWmzxyMf2cE
As to my Comment...
JavaScript can't clear a section of memory while anything is still pointing at it. About two years ago someone found an exploit based on this, and it was quickly closed. It works like this:
var someData = ["THIS IS SOME DATA SAY IT WAS THE SIZE OF A SMALL APPLICATION"];
var somePointer = someData[0];
delete someData;
They then injected an application into somePointer, since it was a reference to a memory location that supposedly held no data any more. Hey presto, you've injected memory.
So if there is a reference like somePointer = someData[0] above, you can't free the memory until you delete someData; you have to remove all references to anything you want cleaned up. In your case, ALL_CLIENTS.push(this); on line 64 keeps your memory reachable through ALL_CLIENTS, so what you can do is:
Line 157
_.each(ALL_CLIENTS, function(client, i) {
    var u; // holds an undefined value (null, empty, nothing)
    client.close();
    // delete ALL_CLIENTS[i];
    ALL_CLIENTS[i] = u;
    ALL_CLIENTS.unused++;
});
On another note, this is not really a memory leak. A memory leak would be if you had this server, closed it, and the memory did not free up after you exited; if the memory is cleaned up behind itself, it's not a leak, just poor memory management.
Thanks to #Magus for pointing out that delete is not the best thing to use. I would never recommend that you implement a limiting structure, but you could try:
Line 27:
ALL_CLIENTS.unused = 0;
Line 64:
var u;
if (ALL_CLIENTS.unused > 0) {
    // Reuse an emptied slot instead of growing the array.
    for (var i = 0; i < ALL_CLIENTS.length; i++) {
        if (ALL_CLIENTS[i] == u) {
            ALL_CLIENTS[i] = this;
            ALL_CLIENTS.unused--;
            break;
        }
    }
} else {
    ALL_CLIENTS.push(this);
}
I'm downloading some small images (about 40 KB each) in my MonoTouch app. The images are downloaded in parallel - the main thread creates the requests and executes them asynchronously.
I use WebRequest.Create to create the HTTP request, and in the completion handler, I retrieve the response stream using response.GetResponseStream(). Then, the following code reads the data from the response stream:
var ms = new MemoryStream();
byte[] buffer = new byte[4096];
int read;
while ((read = responseStream.Read(buffer, 0, buffer.Length)) > 0)
{
    ms.Write(buffer, 0, read);
}
When downloading only one image, this runs very fast (50-100 milliseconds, including the wait for the web request). However, as soon as there are several images, say 5-10, these lines need more than 2 seconds to complete. Sometimes the thread spends more than 4 seconds. Note that I'm not talking about the time needed for response.BeginGetResponse or the time spent waiting for the callback to run.
Then I tested the following code; it needs less than 100 milliseconds in the same scenario:
var ms = new MemoryStream();
responseStream.CopyTo(ms);
What's the reason for this huge lag in the first version?
The reason I need the first version of the code is that I need the partially downloaded data of the image (especially when the image is bigger). To isolate the performance problem, I removed the code that deals with the partially downloaded image.
I ran the code in Debug and Release mode in the simulator as well as on my iPad 3, and I tried both compiler modes, LLVM and non-LLVM. The lag was there in all configurations, Debug/Release, Device/Simulator, LLVM/non-LLVM.
Most likely it is a limitation in the number of concurrent HTTP network connections:
http://msdn.microsoft.com/en-us/library/system.net.servicepoint.connectionlimit.aspx
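If that limit is the cause, raising it before starting the parallel downloads is a quick way to check; the value 20 and the imageUrl variable below are just placeholders:
// The default is typically 2 concurrent connections per host; raise it for parallel downloads.
System.Net.ServicePointManager.DefaultConnectionLimit = 20;

// Or per endpoint, on a request created with WebRequest.Create:
var request = (HttpWebRequest)WebRequest.Create(imageUrl);
request.ServicePoint.ConnectionLimit = 20;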
Hello, maybe not a direct answer to your question, but have you tried SDWebImage from the Xamarin Component Store? It is an amazing library developed by Olivier Poitrey that provides, out of the box, an asynchronous image downloader, memory + disk image caching, and some other benefits. It is really nice.
Hope this helps
Alex
I have an image coming into my Node.js application via email (through cloud service provider Mandrill). The image comes in as a base64 encoded string, email.content in the example below. I'm currently writing the image to a buffer, and then a file like this:
//create buffer and write to file
var dataBuffer = new Buffer(email.content, 'base64');
var writeStream = fs.createWriteStream(tmpFileName);
writeStream.once('open', function(fd) {
    console.log('Our stream is open, lets write to it');
    writeStream.write(dataBuffer);
    writeStream.end();
}); // writeStream.once('open')
writeStream.on('close', function() {
    fileStats = fs.statSync(tmpFileName);
    // ...
});
This works fine and is all well and good, but am I essentially doubling the memory requirements for this section of code, since I have my image in memory (as the original string), and then create a buffer of that same string before writing the file? I'm going to be dealing with a lot of inbound images so doubling my memory requirements is a concern.
I tried several ways to write email.content directly to the stream, but it always produced an invalid file. I'm a rank amateur with modern coding, so you're welcome to tell me this concern is completely unfounded, as long as you tell me why, so some light will dawn on Marblehead.
Thanks!
Since you already have the entire file in memory, there's no point in creating a write stream. Just use fs.writeFile
fs.writeFile(tmpFileName, email.content, 'base64', callback)
#Jonathan's answer is a better way to shorten the code you already have, so definitely do that.
I will expand on your question about memory, though. The fact is that Node will not write anything to a file without converting it to a Buffer first, so given what you have told us about email.content, there is nothing more you can do.
If you are really worried about this, though, then you would need some way to process the value of email.content as a stream, as it comes in from wherever you are getting it. Then, as the data is streamed into the server, you immediately write it to a file, thus not taking up any more RAM than needed.
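For illustration only, a sketch of that streaming idea on a newer Node version; the incomingBase64Stream source and the 4-character alignment handling are assumptions, not something your current Mandrill webhook necessarily gives you:
var fs = require('fs');
var stream = require('stream');

var remainder = '';
var decoder = new stream.Transform({
    transform: function(chunk, encoding, done) {
        var text = remainder + chunk.toString('ascii');
        var usable = text.length - (text.length % 4); // base64 decodes in 4-char groups
        remainder = text.slice(usable);
        this.push(Buffer.from(text.slice(0, usable), 'base64'));
        done();
    },
    flush: function(done) {
        if (remainder) this.push(Buffer.from(remainder, 'base64'));
        done();
    }
});

// incomingBase64Stream: wherever the base64 text arrives from as a stream
incomingBase64Stream.pipe(decoder).pipe(fs.createWriteStream(tmpFileName));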
If you elaborate more, I can try to fill in more info.