I use VideoToolbox to decode H.264 data on iOS 8.x. When the same code runs on iOS 9.x, I get a memory leak every time I call VTDecompressionSessionDecodeFrame, and yet I can't see any leak in Instruments!
Here is the code:
https://github.com/stevenyao/iOSHardwareDecoder
I was seeing a memory leak on device when processing the hardware-decoded frame on a background thread. I then copied the YUV frame to a BGRA frame via Core Image and passed the BGRA frame back to the main thread so that it could be turned into a UIImage. At that point, the leak went away.
Related
I have some memory problems with my electron app.
On startup the memory usage is about 120 MB, and the JS heap stays constant at 32 MB. Without performing any actions in the browser window, the memory usage of the renderer in the task manager goes up by about 1 MB every second. After increasing by 20 MB it seems to drop by around 16 MB again (probably GC), but leaving the window open for several minutes still results in 300 MB of memory usage. So there is a memory leak somewhere.
Since the JS heap size never changes, I assume the leak is inside the Node process; am I correct about that?
How can I analyze the memory usage of the Electron/Node process? (The Chrome profiler does not seem to help in this case.)
related to https://spectrum.chat/electron/general/debugging-high-memory-usage-in-electron~80057ff2-a51c-427f-b6e1-c297d47baf5b and https://www.electronjs.org/docs/tutorial/performance
I have the same problem: my app starts with 200 MB of used memory, and in less than 20 minutes it uses more than 450 MB while doing nothing but showing some images. The same thing happened on a Raspberry Pi 3B+: memory usage grows until the Pi dies.
What I found is that if you have the DevTools window open (which I assume you do, for debugging purposes), it just eats all the memory.
Once I closed the DevTools window, my app uses 100 MB (stable) on a Windows system and 300 MB (stable) on my Raspberry Pi.
I've also read somewhere that rendering on the GPU uses a lot of memory, so I used
app.disableHardwareAcceleration();
Running Linux kernel 3.14.43, I am reading from a CDC ACM device (implemented on an LPC4330). The device appears at /dev/ttyACM0, and I can open it and read from it in the normal way.
The device appears to be buffering about 4096 bytes (I typically get reads of 4095 bytes, after a period of mostly 12-byte reads during the early phase of operation). When running on a fast workstation, all goes smoothly. However, when running on an embedded device (AM3352), data occasionally goes missing at high data rates. I suspect the read buffer is being over-filled: at 3.8 MB/s the data fills a 4 KB buffer in little more than 1 ms, and it seems very possible that I'm not getting back to the read() call quickly enough, given that the process is doing a bunch of other work in other threads and I'm using SCHED_OTHER across the board.
So, first: is there a way to increase the buffer size for the device? And as a supplementary question, is there any way to detect that the buffers have overflowed? I suppose I also have the option of using real-time scheduling for the read() thread, but I'd rather not get into that if possible.
Thanks.
Hi, I am developing an application with 20 UIViewControllers. The application works fine, but when I track memory allocations in Instruments, All Allocations (Live Bytes) stays below 10 MB, yet it still produces a low-memory warning like the one below.
I can't understand what the problem is.
Can anyone help me get out of this issue?
Please check whether the view controller is set as the delegate of anything, such as a text field or other control, via a strong reference. If so, the resulting retain cycle will prevent the view controller from being released from memory.
Using DirectX 9 I want to capture what is on the screen and display a smaller version of it in my program.
To capture it I found and am using GetFrontBufferData. However the way it works is by writing to a surface defined in system memory (D3DPOOL_SYSTEMMEM). This results in my having to then transfer the screen shot back into video memory so I can then draw it.
As you can imagine, this needless round trip (video memory -> system memory -> video memory) causes quite a stutter in my program.
Is there a way I can get the image stored in the front buffer and put it onto a surface in video memory?
This question is a spin off of my recent question : Capture and Draw a ScreenShot using DirectX
I am developing a tool that draws primitives with DX9 on 32-bit Windows XP.
When creating vertex buffers and index buffers, creation can fail with an error.
The return code can be D3DERR_OUTOFVIDEOMEMORY or E_OUTOFMEMORY.
I am not sure what the difference between them is.
I use the VideoMemory tool from the DX samples to check the memory, and it reports 1024 MB.
Does that mean that if I create more than 1024 MB of managed resources, it will report D3DERR_OUTOFVIDEOMEMORY?
And if there is no free virtual address space left in the process and malloc fails, will DX9 report E_OUTOFMEMORY?
E_OUTOFMEMORY means that DirectX was unable to allocate the block of memory you requested from system memory (i.e. through malloc or new).
D3DERR_OUTOFVIDEOMEMORY means that DirectX was unable to allocate the block of memory you requested from the pool of video memory (either on the graphics card or reserved in main memory).
Caveat: Drivers might lie.
D3DERR_OUTOFVIDEOMEMORY is a DirectX memory error. It is not necessarily tied to dedicated memory on the video card; it can also concern memory set aside for holding a scene or drawing an image. As you have found, if there is not enough memory for your process you will get E_OUTOFMEMORY, which refers to the memory assigned to your process being exhausted. You did not say what operating system/hardware spec you have; your best bet would be to look into a system memory upgrade if you're running low on resources.
Edit: Some laptops/netbooks have a graphics adapter fed from system memory. These integrated controllers are not serious contenders for the likes of Call of Duty and other top-end games; the graphics controller actually steals some memory from the main board, which inflates the amount of RAM available to the graphics controller at the cost of the system RAM it gobbles up. They are fine if you are doing word processing, emailing, and so on.
Hope this helps,
Best regards,
Tom.