How can I avoid a resource leak when using a semaphore? - linux

Linux sem_destroy() documentation says:
An unnamed semaphore should be destroyed with sem_destroy() before the memory in which it is
located is deallocated. Failure to do this can result in resource leaks on some implementations.
But the best I can do is register sem_destroy() with atexit(), which won't be called on abort() or SIGKILL. I have a process responsible for creating and destroying a semaphore in shared memory (an mmaped file); how can I avoid a resource leak under abnormal termination conditions?
On Linux, if the mmaped file is deleted before sem_destroy() is called, is any kind of resource leaked? What resource?

The glibc implementation of sem_destroy does nothing, and this will not change. If you use glibc, you do not have to do anything to free up resources. Furthermore, the kernel would free such resources on process termination anyway.
The glibc implementation of semaphores is based on futexes, which is why it does not need any additional resources besides the memory used to store the semaphore.
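For reference, here is a minimal sketch of the setup the question describes: a process-shared sem_t placed in an mmaped file, initialized with sem_init(pshared = 1) and torn down with sem_destroy(). The path /tmp/sem_demo and the single-process layout are illustrative only, and error handling is abbreviated; link with -pthread on older glibc.

```c
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/sem_demo", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("open"); return 1; }
    if (ftruncate(fd, sizeof(sem_t)) == -1) { perror("ftruncate"); return 1; }

    sem_t *sem = mmap(NULL, sizeof(sem_t), PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (sem == MAP_FAILED) { perror("mmap"); return 1; }

    /* pshared = 1: the semaphore is shared between processes. */
    if (sem_init(sem, 1, 1) == -1) { perror("sem_init"); return 1; }

    sem_wait(sem);
    /* ... critical section guarded across processes ... */
    sem_post(sem);

    /* On glibc this is a no-op, but it keeps the code portable. */
    sem_destroy(sem);
    munmap(sem, sizeof(sem_t));
    close(fd);
    return 0;
}
```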

Related

Restricting memory regions to threads

Is there an operating-system-specific way in Linux/Darwin/Windows to restrict access to certain virtual memory pages to only one thread, so that when another thread tries to access them, the OS would intercept and report an error?
I'm trying to emulate the behavior of fork with multiple processes, where each process has its own memory except for some shared memory, mainly to avoid all programming errors where one worker would access memory belonging to another worker.
As a general proposition, this is not possible. The whole idea of threads is to have multiple streams of execution that share the same address space. If you're a kernel-mode commando, you might be able to come up with some modification of the page tables a thread uses to make certain pages inaccessible from user mode and unlock them only for the owning thread.
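If the goal is really the fork-style isolation described in the question, here is a minimal sketch of the process-based alternative: each worker gets its own private memory after fork(), while a MAP_SHARED | MAP_ANONYMOUS mapping created before the fork stays visible to both sides. All names here are illustrative.

```c
#include <stdio.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Shared between parent and child after fork(). */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    int private_counter = 0;      /* copied, not shared, at fork() */
    *shared = 0;

    pid_t pid = fork();
    if (pid == 0) {               /* child (worker) */
        *shared = 42;             /* visible to the parent */
        private_counter = 99;     /* invisible to the parent */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("shared=%d private=%d\n", *shared, private_counter); /* 42 0 */
    munmap(shared, sizeof(int));
    return 0;
}
```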

shm_open - how to know if I have opened an existing shared memory object

I have two questions:
While using shm_open, how do I know if I have opened an already existing shared memory object? I am using O_CREAT | O_RDWR.
I am using shm_open to create/open a shared memory object with some name and mmap to map it into the process's virtual address space. If the process crashes and fails to clean up the shared memory, it stays until system shutdown. This seems contradictory with what is mentioned on the wiki, which says: "The shared memory created by shm_open is persistent. It stays in the system until explicitly removed by a process. This has a drawback that if the process crashes and fails to clean up shared memory it will stay until system shutdown. To avoid this issue mmap can be used to create a shared memory". I am talking about the file with the name passed to shm_open, which gets created in /dev/shm; it remains if the process crashes without cleaning up the shared memory (munmap and shm_unlink). I was expecting that if no other process holds a reference to the shared memory, and the crashed process was the only one referring to it, the shared memory object and the file would get cleaned up.
I know this answer is late, but I was busy with the same subject.
According to the shm_open manual, use the O_EXCL oflag (together with O_CREAT) to detect whether the shared memory object already exists: if it does, the call fails with EEXIST.
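A minimal sketch of that approach, assuming a hypothetical object name /demo_shm: try to create the object exclusively first, and fall back to a plain open if the call fails with EEXIST. Link with -lrt on older glibc.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int created = 1;
    int fd = shm_open("/demo_shm", O_CREAT | O_EXCL | O_RDWR, 0600);
    if (fd == -1 && errno == EEXIST) {
        created = 0;                       /* someone else created it first */
        fd = shm_open("/demo_shm", O_RDWR, 0600);
    }
    if (fd == -1) { perror("shm_open"); return 1; }

    printf("%s the shared memory object\n",
           created ? "created" : "attached to existing");

    /* Only the creator should size it and, eventually, shm_unlink() it. */
    if (created && ftruncate(fd, 4096) == -1) { perror("ftruncate"); return 1; }

    close(fd);
    return 0;
}
```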

Are the services offered by the linux kernel implemented as kernel threads?

Like in process management and memory management.
Are the scheduler and memory manager implemented as kernel threads that are run on the CPU the moment they are needed? If not, how does the kernel treat them?
Are they like processes, tasks, or some line of code that gets executed when needed?
Some are, some aren't. The terms "process management" and "memory management" are kind of broad and cover a fair bit of kernel code.
For memory management, a call to mmap() will just require changing some data structures and can be done by the current thread, but if pages are swapped out it will be done by kswapd, which is a kernel thread.
You might consider the scheduler a special case: since the scheduler is responsible for scheduling all threads, it itself is not a thread and does not execute on any thread (otherwise it would need to schedule itself... but how would it schedule itself, if it had to schedule itself first in order to do that?). You might think of the scheduler as running directly on each processor core when necessary.

Should CUDA events and streams always be destroyed?

I am reading CUDA By Example and I found that when they introduced events, they called cudaEventDestroy for each event they created.
However, I noticed that some later examples neglected this cleanup function. Are there any undesirable side effects of forgetting to destroy created events and streams (e.g., something like a memory leak when you forget to free allocated memory)?
Any resources the app is still holding at the time it exits will be automatically freed by the OS/drivers. So, if the app creates only a limited number of events, it is not strictly necessary to free them manually. Still, deliberately letting the app exit without freeing all resources is bad practice, because it becomes hard to distinguish genuine leaks from "on purpose" leaks.
You have identified bugs in the book's sample code.
CUDA events are lightweight, but a resource leak is a resource leak. Over time, if you leak enough of them, you won't be able to create them anymore.
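As an illustration of the pairing discipline these answers recommend, here is a minimal host-side sketch using the CUDA runtime C API, timing an (omitted) stretch of work on a stream with two events and destroying everything afterwards; compile with nvcc. Error checking is omitted for brevity.

```c
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    cudaStream_t stream;
    cudaEvent_t start, stop;

    cudaStreamCreate(&stream);
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, stream);
    /* ... enqueue kernels or async copies on `stream` here ... */
    cudaEventRecord(stop, stream);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("elapsed: %f ms\n", ms);

    /* Pair every create with a destroy so leaks cannot accumulate. */
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaStreamDestroy(stream);
    return 0;
}
```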

Resource management by Linux

When a program with several threads, mutexes, shared data, and file handles crashes because of too much memory allocation, which resources are freed? How do you recover?
If you mean, how do you go back and free up the resources that were allocated by the now-crashed process, well, you don't have to.
When the process exit(2)s or dies from a signal, all of the OS-allocated resources will be reclaimed. This is the kernel's job.
You recover by checking the results of resource acquisition functions and not allowing unchecked errors to occur in the first place.
All resources that belong to the process are cleaned up.
The only exceptions would be System V shared memory segments, message queues, and semaphores, which, although they might have been created by the process, are not owned by it.
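A minimal sketch of that System V exception, using an arbitrary key purely for illustration: a semaphore set created with semget() outlives the creating process unless it is removed explicitly with semctl(IPC_RMID).

```c
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int main(void)
{
    /* Created now; would still exist after this process exits... */
    int semid = semget((key_t)0x1234, 1, IPC_CREAT | 0600);
    if (semid == -1) { perror("semget"); return 1; }

    /* ...unless it is removed explicitly. */
    if (semctl(semid, 0, IPC_RMID) == -1) { perror("semctl"); return 1; }
    return 0;
}
```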
