Should CUDA events and streams always be destroyed?

I am reading CUDA By Example and I found that when they introduced events, they called cudaEventDestroy for each event they created.
However, I noticed that some later examples omit this cleanup call. Are there any undesirable side effects of forgetting to destroy created events and streams (e.g. something like the memory leak you get when you forget to free allocated memory)?

Any resources the app is still holding at the time it exits will be automatically freed by the OS / drivers. So, if the app creates only a limited number of events, it is not strictly necessary to free them manually. Still, deliberately letting the app exit without freeing all resources is bad practice, because it becomes hard to distinguish genuine leaks from "on purpose" leaks.

You have identified bugs in the book's sample code.
CUDA events are lightweight, but a resource leak is a resource leak. Over time, if you leak enough of them, you won't be able to create them anymore.
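
To make the pattern concrete, here is a minimal C++ sketch of the create/use/destroy lifecycle, assuming the CUDA runtime API (cuda_runtime.h) and a CUDA-capable device; error checking is omitted for brevity:

    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);   // record on the default stream
        // ... kernel launches or async memcpys would go here ...
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);  // block until 'stop' has actually occurred

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("elapsed: %.3f ms\n", ms);

        // Pair every cudaEventCreate with a cudaEventDestroy, just as every
        // cudaMalloc should be paired with a cudaFree.
        cudaEventDestroy(stop);
        cudaEventDestroy(start);
        return 0;
    }

The same pairing applies to streams: each cudaStreamCreate should be matched by a cudaStreamDestroy before the program exits.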

Related

Is there a way to fail an automated test upon Netty leak detection?

I'm using Netty 4.0.x on a project where a separate core project creates ByteBuf buffers and passes them to the client code layer, which should be responsible for releasing the buffer.
I've found leaks in some cases and I'd like to cover the codepath leading to those leaks with an automated test, but Netty's ResourceLeakDetector seems to only report leaks inside logs.
Is there a way to fail an automated JUnit test in the event of such a leak (e.g. by plugging some behavior into the ResourceLeakDetector)?
Thanks!
PS: Keep in mind that my test wouldn't really create the buffers; the core code (which is a dependency) does.
Netty's own CI server has a leak build that marks the build as unstable when leaks are detected. I'm not sure of the exact mechanism, but automatic detection is possible (probably by scanning the build logs for leak messages).
Keep in mind that leaks are detected when the ByteBuf objects are garbage collected, so when/where a leak is detected may be far removed from when/where the leak actually occurred. Netty does include a trail of the places where the ByteBuf has been accessed in meaningful ways, to help you trace back to where the buffer was originally allocated, and potentially a list of objects that may be responsible for releasing that buffer.
If you are OK with the above limitations, you could modify ResourceLeakDetector for a private build so that it throws an exception instead of only logging the leak.

Is it possible to force termination of backgrounding apps on iOS?

I've written an app which handles videos. As we know, video processing takes a huge amount of memory when dealing with HD resolution. My app always seemed to crash, but I am 100% sure that there is no memory leak in my code; Instruments shows no leaks.
At the beginning I start up one OpenGL ES view and the video engine. For a very short time the memory consumption is high, but it falls back to a normal level after the initializations are done. I always get memory warnings during this period. Normally this is no problem, but if I have a lot of apps running in suspended mode, the app seems to crash. Looking at the crash log and using the debugger shows that I am simply running out of memory.
My customers are flooding my support mail with "app is crashing" messages. But I know that they simply have too many apps running in the background, so there is no memory left. I think it's bad style to tell the customer that he has to close background tasks before running the app.
According to this post, this is a common problem.
My question is: Is it possible to tell the OS that my app needs a lot of memory, so that it should terminate some suspended apps? This memory situation drives me crazy, because it's not a bug I can fix.
No. It is not possible to affect anything outside of your sandbox without API calls, and no public API exists for affecting other processes.
Have you tried to minimize your memory usage? In my experience, once a memory warning is thrown, apps are more likely to have problems after they move to the background, even when memory usage drops.
If you are using OpenGL ES and textures, compress your textures if you haven't already. What is the specific cause of your memory allocation spike?

Memory Leaks and Apache

My VPS account has been occasionally running out of memory. It's using Apache on Linux. Support says it's a slow memory leak and has enabled MaxRequestsPerChild to deal with it.
I have a few questions about this. When a child process dies, will it cause my scripts to lose session data? Does anyone have advice on how I can track down this memory leak?
Thanks
No, when a child process dies you will not lose any data unless it was in the middle of a request at the time (which should not happen if it exits due to MaxRequestsPerChild).
You should try to reproduce the memory leak using an identical software stack on your test system. You can use tools such as Valgrind to try to detect it.
You can also try a debug build of your web server and its modules, which will enable you to detect what's going on.
It's difficult to reproduce the behaviour of production systems on non-production ones. If you have auto-test coverage of your web application, you could run your full auto-test suite, but in practice the suite is unlikely to cover every code path and may therefore miss the leaky one.
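
As a toy illustration of what Valgrind reports, the deliberately leaky C++ snippet below is flagged as "definitely lost" when run under valgrind --leak-check=full (the snippet is illustrative only and has nothing to do with Apache's own code; the g++/valgrind invocation assumes a Linux box with both installed):

    #include <cstring>

    // Deliberate leak: a buffer is allocated on every call and never freed,
    // so memory usage grows with each "request" this function handles.
    void handle_request(const char* payload) {
        char* copy = new char[std::strlen(payload) + 1];
        std::strcpy(copy, payload);
        // ... use 'copy' ...
        // Missing: delete[] copy;
    }

    int main() {
        for (int i = 0; i < 1000; ++i)
            handle_request("GET / HTTP/1.1");
        return 0;
    }

    // Compile and run:
    //   g++ -g leak.cpp -o leak
    //   valgrind --leak-check=full ./leak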
When a child process dies, will it cause my scripts to lose session data?
Without knowing what scripting language and session handler you are using (and the actual code), it is rather hard to say.
In most cases, when using scripting languages in modules or via [fast] CGI, it is very unlikely that the session data would actually be lost - although if the process dies in the middle of processing a request, it may not get the chance to write the updated session back to whatever is storing it. And in the very unlikely event that it dies during the write-back, it may corrupt the session data. These are quite exceptional circumstances.
OTOH, if your application logic is implemented via a daemon (e.g. a Java container), then it is quite probable that memory leaks could accumulate (although these would be reported against a different process).
Note that if the problem is alleviated by setting MaxRequestsPerChild then it implies that the problem is occurring in an Apache module.
The production releases of Apache itself are, in my experience, very stable and free of memory leaks. However, I've not used all the modules. I'm not sure whether ExtendedStatus gives a breakdown of memory usage by module - it might be worth checking.
I've previously seen problems where libraries loaded by the PHP module did their own memory management without respecting PHP's memory limits - these did clear down at the end of the request, though.

Resource management by Linux

When a program with several threads, mutexes, shared data, and file handles crashes because of excessive memory allocation, which of its resources are freed? How do you recover?
If you mean, how do you go back and free up the resources that were allocated by the now-crashed process, well, you don't have to.
When the process calls exit(2) or dies from a signal, all of the OS-allocated resources are reclaimed. This is the kernel's job.
You recover by checking the results of resource acquisition functions and not allowing unchecked errors to occur in the first place.
All resources that belong to the process are cleaned up.
The only exceptions are the SysV shared memory segments, message queues, and semaphores - which, although they may have been created by the process, are not owned by it.
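
To illustrate that exception, the hedged C++ sketch below creates a SysV shared memory segment that survives the process's exit - ipcs -m will still list it afterwards - unless the process explicitly marks it for removal. The key 0x1234 is an arbitrary example value:

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <cstdio>

    int main() {
        // Create (or attach to) a 4 KiB segment keyed by an example value.
        int id = shmget(0x1234, 4096, IPC_CREAT | 0600);
        if (id == -1) { perror("shmget"); return 1; }

        // If the process exits or crashes here, the segment remains:
        // it belongs to the system, not to this process.

        // Explicit cleanup is required to actually remove it.
        if (shmctl(id, IPC_RMID, nullptr) == -1) { perror("shmctl"); return 1; }
        return 0;
    }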

OS: resources automatically cleaned up

From this answer: When is a C++ terminate handler the Right Thing(TM)?
It would be nice to have a list of resources that are and are not automatically cleaned up by the OS when an application quits. In your answer, please specify the OS and resource, and preferably add a link to some documentation (if appropriate).
The obvious one:
Memory: Yes automatically cleaned up.
Question: are there any exceptions?
There are some obscure resources that Windows does not clean up when an app crashes or exits without explicitly releasing them, mostly because the OS doesn't know if they're important to leave around or not.
Temporary files -- as others have mentioned.
Globally registered WNDCLASSes ("No window classes registered by a DLL are unregistered when the DLL is unloaded. A DLL must explicitly unregister its classes when it is unloaded." MSDN) If your global window class also has a class DC, then that DC will leak as well.
Global ATOMs (a relatively limited resource).
Window message IDs created with RegisterWindowMessage. These are designed to leak, since there's no UnregisterWindowMessage.
Semaphores and Events aren't technically leaked, but when the owning application goes away without signalling them, other processes can hang. This is not true for a Mutex: if the owning application goes away, other processes waiting on that Mutex are released (see the sketch after this list).
There may be some residual weirdness on Windows XP and earlier if you don't unregister a hot key before exiting. Other applications may be unable to register the same hot key.
On Windows XP and earlier, it's not uncommon to have a zombie console window live on after a process crashes. (Specifically, a GUI application that also creates a console window.) It shows up on the task bar. All you can do is minimize, restore, or move the window.
Buggy drivers can be aggravated by apps that don't explicitly release resources when they exit. Non-paged pool leaks are fairly common.
Data copied to the clipboard. I guess that doesn't really count because it's owned by the OS at that point, not the application that put it there.
Globally installed hooks aren't unloaded when the installing process crashes before removing the hook.
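
To illustrate the Mutex point from the list above, here is a small Win32 C++ sketch; the name "Global\ExampleMutex" is a hypothetical example. A waiter sees WAIT_ABANDONED when the previous owner terminated without releasing the mutex, instead of hanging the way it would on an unsignalled Event:

    #include <windows.h>
    #include <cstdio>

    int main() {
        // Open or create a named mutex shared across processes.
        HANDLE m = CreateMutexW(nullptr, FALSE, L"Global\\ExampleMutex");
        if (m == nullptr) return 1;

        DWORD r = WaitForSingleObject(m, INFINITE);
        if (r == WAIT_ABANDONED) {
            // The previous owner exited or crashed while holding the mutex.
            // We now own it, but any state it guarded may be inconsistent.
            printf("mutex was abandoned by its last owner\n");
        }

        ReleaseMutex(m);
        CloseHandle(m);
        return 0;
    }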
Temporary files are a good example of something that will not be cleaned up - the handle is released, but the file isn't deleted.
In Windows, just about anything you can get a handle to should in fact be managed by the OS - that's why you only get a handle. This includes, but is not limited to, the following (list copied from the MSDN docs for the CloseHandle() API):
Communications device
Console input
Console screen buffer
Event
File
File mapping
Job
Mailslot
Mutex
Named pipe
Process
Semaphore
Socket
Thread
Token
All of these should be recovered by the OS when an application closes, though possibly not immediately, depending on their use by other processes.
Other operating systems work in the same way. It's hard to imagine an OS worth its name (I exclude embedded systems etc.) where this is not the case - resource management is the #1 raison d'être of an operating system.
Any exception is a bug - applications can and do crash and do contain leaks. An OS needs to be reliable and not exhaust resources even in the face of poorly written applications. This also applies to non-OS resources: services that hand out resources to processes need to free those resources when the process exits. If they don't, it is a bug that needs to be fixed.
If you're looking for program artifacts which can persist beyond process exit, on Windows you have at least:
Registry keys created without REG_OPTION_VOLATILE
Files created without FILE_FLAG_DELETE_ON_CLOSE (see the sketch after this list)
Event log entries
Paper that was used for print jobs
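
For the file case, a sketch of the opt-in cleanup mentioned in the list: opening a scratch file with FILE_FLAG_DELETE_ON_CLOSE asks the OS to delete it when the last handle is closed, including when the process exits or crashes. The path below is a hypothetical example:

    #include <windows.h>

    int main() {
        // The file disappears when the last handle to it is closed, even if
        // the process crashes before reaching CloseHandle.
        HANDLE h = CreateFileW(L"C:\\Temp\\scratch.tmp",
                               GENERIC_READ | GENERIC_WRITE,
                               0,              // no sharing
                               nullptr,
                               CREATE_ALWAYS,
                               FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                               nullptr);
        if (h == INVALID_HANDLE_VALUE) return 1;

        // ... write scratch data here ...

        CloseHandle(h);  // the OS deletes the file at this point
        return 0;
    }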
