Is there a way to fail an automated test upon Netty leak detection?

I'm using Netty 4.0.x on a project where a separate core project creates ByteBuf buffers and passes them to the client code layer, which is responsible for releasing them.
I've found leaks in some cases, and I'd like to cover the code paths leading to those leaks with an automated test, but Netty's ResourceLeakDetector seems to report leaks only in the logs.
Is there a way to fail an automated JUnit test in the event of such a leak (e.g. by plugging some behavior into the ResourceLeakDetector)?
Thanks!
PS: Keep in mind that my test wouldn't create the buffers itself; the core code (which is a dependency) does.

Netty's own CI server has a leak build that marks the build as unstable when leaks are detected. I'm not sure of the exact mechanism, but automated detection is possible (most likely by scanning the build logs for the leak messages).
Keep in mind that leaks are detected when the ByteBuf objects are garbage collected, so the point where a leak is reported may be far removed from where the leak actually occurred. Netty does include a trail of the places where the ByteBuf was accessed in meaningful ways, to help you trace back to where the buffer was originally allocated, and potentially a list of objects that may be responsible for releasing it.
If you are OK with the above limitations, you could modify ResourceLeakDetector to throw an exception, for private use.
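
For example, here is a minimal sketch of that idea as a JUnit 4 test. It assumes a Netty release that ships ResourceLeakDetectorFactory and exposes the protected reportTracedLeak/reportUntracedLeak hooks (later 4.0.x/4.1 releases); on older versions you may have to fork ResourceLeakDetector itself, and the exact constructor/factory signatures vary between releases:

    import static org.junit.Assert.assertTrue;

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.util.ResourceLeakDetector;
    import io.netty.util.ResourceLeakDetectorFactory;

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    import org.junit.After;
    import org.junit.BeforeClass;
    import org.junit.Test;

    public class LeakDetectingTest {

        /** Records leak reports so the test can assert on them, in addition to logging. */
        static class RecordingLeakDetector<T> extends ResourceLeakDetector<T> {
            static final Queue<String> LEAKS = new ConcurrentLinkedQueue<String>();

            RecordingLeakDetector(Class<?> resourceType, int samplingInterval, long maxActive) {
                super(resourceType, samplingInterval, maxActive);
            }

            @Override
            protected void reportTracedLeak(String resourceType, String records) {
                super.reportTracedLeak(resourceType, records); // keep the usual log output
                LEAKS.add(resourceType + records);
            }

            @Override
            protected void reportUntracedLeak(String resourceType) {
                super.reportUntracedLeak(resourceType);
                LEAKS.add(resourceType);
            }
        }

        @BeforeClass
        public static void installLeakDetector() {
            // PARANOID tracks every buffer; the default level only samples a fraction.
            ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);
            ResourceLeakDetectorFactory.setResourceLeakDetectorFactory(
                    new ResourceLeakDetectorFactory() {
                        @Override
                        public <T> ResourceLeakDetector<T> newResourceLeakDetector(
                                Class<T> resource, int samplingInterval, long maxActive) {
                            return new RecordingLeakDetector<T>(resource, samplingInterval, maxActive);
                        }
                    });
        }

        @Test
        public void exerciseCodePathThatUsedToLeak() {
            // Call into your core dependency here instead; this stand-in
            // allocation just demonstrates a correctly released buffer.
            ByteBuf buf = Unpooled.buffer(16);
            buf.release();
        }

        @After
        public void failOnLeaks() {
            // Leaks are only noticed once the leaked ByteBuf has been GC'ed *and*
            // Netty polls its reference queue, which happens while tracking new buffers.
            System.gc();
            Unpooled.buffer(1).release(); // nudge the detector into polling its queue
            assertTrue("Netty reported buffer leaks: " + RecordingLeakDetector.LEAKS,
                    RecordingLeakDetector.LEAKS.isEmpty());
        }
    }

Note that the @After check is best-effort: System.gc() does not guarantee collection, so a leak can occasionally surface in a later test (or not at all in a single short run) rather than in the test that caused it.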

Related

node.js memory leak with cluster and express

I am using Node.js 6.11.3, the cluster module, and Express 4.14.
I'm seeing memory leak slowly over a period of about one week.
Attached is a screenshot of the heap dumps in Chrome DevTools. I can't tell the reason for the leak. (click for heap dump)
Unfortunately, nobody can answer where your leak comes from without access to the entire application and environment. The real question is how you debug a Node memory leak.
First, it's important to understand how memory leaks occur in Node at all. How is it possible when Node has built-in garbage collection? Variables are marked as garbage once they are no longer referenced; if you have code (closures, etc.) that still holds references to them, they are never collected - that is just one example, sketched below. There are also dependencies that can cause memory leaks, which can mislead you into thinking the issue is in your own code. And maybe it is, in the way you use the dependency.
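
As a minimal illustration of that retained-reference mechanism (written in Java here for concreteness; the Node equivalent is a module-level array, or a closure over one, that keeps growing and is never emptied):

    import java.util.ArrayList;
    import java.util.List;

    public class RetainedReferenceLeak {
        // A long-lived collection: everything added here stays reachable forever,
        // so the garbage collector can never reclaim it.
        private static final List<byte[]> CACHE = new ArrayList<byte[]>();

        static void handleRequest() {
            byte[] perRequestBuffer = new byte[1024 * 1024];
            CACHE.add(perRequestBuffer); // remembered, but never removed
        }

        public static void main(String[] args) {
            // Each "request" pins another ~1 MB; the heap climbs slowly and
            // steadily, exactly the pattern described in the question.
            for (int i = 0; i < 100; i++) {
                handleRequest();
            }
        }
    }

In a heap dump this shows up as one ever-growing retainer (here CACHE) dominating the retained size - that is what to look for when comparing snapshots in Chrome DevTools.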
Bottom line, it's best to get familiar with this debugging process yourself, so that you can understand the issues leading to the leak. Best of luck. Here is one article that is helpful:
https://www.alexkras.com/simple-guide-to-finding-a-javascript-memory-leak-in-node-js/

How to use IntelliTrace Standalone Collector to detect memory leaks in production .Net applications?

Visual Studio 2012 RC has the ability to use externally collected trace files of IIS app pool data gathered by the IntelliTrace Standalone Collector. I know that my production app has some kind of memory leak that becomes apparent after a few hours of monitoring.
I now have my large iTrace file ready to plug into VS2012, but I would like to know how to find the questionable object.
I am also in the process of using the Debugger tools and following these instructions. However, I run into an error indicating that the appropriate CLR files (or something like that) are not loaded when I try to run .load SOS or any other command.
I was hoping to see a similar address list and memory consumption in the IntelliTrace analyzer - is this possible?
Some assistance would be appreciated.
IntelliTrace only profiles events and method calls. You won't get information on individual objects or memory leaks, because it is not tracking memory. There is also no event provided for object creation/destruction, so you can't infer that either.
To track memory you will have to use profiling tools on your app - though don't attach them to your production server! Use a test environment and see if you can replicate the problem.

Should CUDA events and streams always be destroyed?

I am reading CUDA By Example and I found that when they introduced events, they called cudaEventDestroy for each event they created.
However, I noticed that some later examples neglected this cleanup. Are there any undesirable side effects of forgetting to destroy created events and streams (e.g. a memory leak, as when you forget to free allocated memory)?
Any resources the app is still holding when it exits will be automatically freed by the OS/drivers. So, if the app creates only a limited number of events, it is not strictly necessary to free them manually. Still, deliberately letting the app exit without freeing all resources is bad practice, because it becomes hard to distinguish genuine leaks from "on purpose" leaks.
You have identified bugs in the book's sample code.
CUDA events are lightweight, but a resource leak is a resource leak: if you leak enough of them over time, you won't be able to create any more.

Is it possible to force termination of backgrounding apps on iOS?

I've written an app that handles videos. As we know, video processing takes a huge amount of memory when dealing with HD resolution. My app always seemed to crash, but I am 100% sure there is no memory leak in my code; Instruments shows no leaks.
At startup I bring up one OpenGL ES view and the video engine. For a very short time the memory consumption is high, but it falls back to a normal level once initialization is done. I always get memory warnings during this period. Normally this is no problem, but if I have a lot of apps in suspended mode, my app seems to crash. The crash log and the debugger show that I am simply running out of memory.
My customers are flooding my support mail with "app is crashing" messages. I know they have too many apps running in the background, so there is no memory left, but I think it's bad style to tell the customer to close background apps before running mine.
According to this post, this is a common problem.
My question is: is it possible to tell the OS that the app needs a lot of memory, so that it should terminate some suspended apps? This memory issue drives me crazy, because it's not a bug I can fix.
No. It is not possible to affect anything outside your sandbox without API calls, and the public API has none for affecting other processes.
Have you tried to minimize your memory usage? In my experience, once a memory warning is thrown, apps are more likely to run into problems after going to the background, even when memory usage drops.
If you are using OpenGL ES and textures, compress your textures if you haven't already. What is the specific cause of your memory allocation spike?

Memory Leaks and Apache

My VPS account has occasionally been running out of memory. It's running Apache on Linux. Support says it's a slow memory leak and has enabled MaxRequestsPerChild to deal with it.
I have a few questions about this. When a child process dies, will it cause my scripts to lose session data? Does anyone have advice on how I can track down this memory leak?
Thanks
No, when a child process dies you will not lose any data unless it was in the middle of a request at the time (which should not happen if it exits due to MaxRequestsPerChild).
You should try to reproduce the memory leak using an identical software stack on your test system. You can use tools such as Valgrind to try to detect it.
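An illustrative invocation (the paths here are assumptions and will vary with your distribution; httpd's -X flag runs a single worker in the foreground, so Valgrind can report leaks when the process exits):

    valgrind --leak-check=full --log-file=httpd-valgrind.log \
        /usr/sbin/httpd -X -f /etc/httpd/conf/httpd.conf

Drive some traffic through the suspect code path, stop the process, and look for "definitely lost" blocks in the log.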
You can also try a debug build of your web server and its modules, which will make it easier to see what's going on.
It's difficult to reproduce the behaviour of production systems on non-production ones. If you have automated test coverage of your web application, you could try running your full test suite against such a setup, but in practice it is unlikely to cover every code path and so may miss the leaky one.
When a child process dies, will it cause my scripts to lose session data?
Without knowing which scripting language and session handler you are using (and the actual code), it's rather hard to say.
In most cases - scripting languages running in modules or via [Fast]CGI - it's very unlikely that the session data would actually be lost, although if the process dies in the middle of handling a request it may not get the chance to write the updated session back to whatever is storing it. And in the very unlikely event that it dies during the write-back, it may corrupt the session data. These are quite exceptional circumstances.
OTOH, if your application logic is implemented via a daemon (e.g. a Java container), then it's quite probable that memory leaks could accumulate (although these would be reported against a different process).
Note that if the problem is alleviated by setting MaxRequestsPerChild, it implies that the leak is occurring within an Apache module.
The production releases of Apache itself are, in my experience, very stable and free of memory leaks. However, I've not used all the modules. I'm not sure whether ExtendedStatus gives a breakdown of memory usage by module - it might be worth checking.
I've previously seen problems where modules loaded by the PHP module did their own memory management and didn't respect PHP's memory limits - these did clear down at the end of the request, though.
C.
