Is memory allocated by kmalloc() ever automatically freed? (Linux)

I'm writing a device driver that, among other things, allocates a block of memory with kmalloc. This memory is freed when the user program closes the file. In one of my experiments, the user program crashed without closing the file.
Would anything have freed this memory?
In another experiment, I moved the kfree() from the close() function to the module_exit() function. When I ran the user program twice in a row, the driver called kmalloc() again on the second run and stored the result in the same pointer variable, overwriting the old value without freeing the first block. Thus, I lost the pointer to that memory and cannot free it.
Is this memory lost to the system until I reboot, or will it be freed when I unload the driver?

Kernel memory is never freed automatically; that includes memory obtained with kmalloc().
All memory related to an open file descriptor should be released when the file is closed.
When a process exits, for any reason whatsoever (including kill -9), all of its open file descriptors are closed and the driver's release (close) function is called. So if you free the memory there, nothing the process can do will make it stay around after the process dies.
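To make this concrete, here is a minimal sketch of that pattern (the names my_open/my_release and the 4 KB size are hypothetical, not taken from the asker's driver): the buffer is tied to the open file through filp->private_data and freed in the release handler, which the kernel calls when the last reference to the file goes away, whether the process closed it cleanly or was killed.

```c
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/slab.h>

#define MY_BUF_SIZE 4096   /* assumed size, for illustration only */

static int my_open(struct inode *inode, struct file *filp)
{
    filp->private_data = kmalloc(MY_BUF_SIZE, GFP_KERNEL);
    if (!filp->private_data)
        return -ENOMEM;
    return 0;
}

static int my_release(struct inode *inode, struct file *filp)
{
    kfree(filp->private_data);   /* runs on close() and on process death */
    filp->private_data = NULL;
    return 0;
}

static const struct file_operations my_fops = {
    .owner   = THIS_MODULE,
    .open    = my_open,
    .release = my_release,
};
```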

Please don't carry your user-space experience over to kernel programming.
What do I mean by this?
Normal processes get cleaned up after them when they exit; that is not the case with kernel modules, because modules are not really processes.
Technically, when you load a module and then call kmalloc(), you are asking the kernel to allocate memory for you in kernel space. That memory belongs to the kernel as a whole, so even if you unload your module, the allocation stays there unless it is explicitly freed.
In simple terms, answering your question:
Every kmalloc() needs a matching kfree(), otherwise the memory will remain allocated for as long as the system is up.
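For the module_exit() variant the asker experimented with, here is a hedged sketch of that rule (names are illustrative): a module-lifetime allocation is only returned to the kernel when the exit function explicitly frees it, never automatically.

```c
#include <linux/module.h>
#include <linux/slab.h>

static void *my_buffer;   /* a single global pointer like this is overwritten,
                             and the old block leaked, if kmalloc() is called
                             again before kfree() */

static int __init my_init(void)
{
    my_buffer = kmalloc(4096, GFP_KERNEL);
    return my_buffer ? 0 : -ENOMEM;
}

static void __exit my_exit(void)
{
    kfree(my_buffer);   /* without this, the memory stays allocated until reboot */
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");
```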

Related

Release cl::Buffer and memory leakage on device

We know that in OpenCL we can create a buffer on the device with cl::CreateBuffer(), which allocates memory there. But my question is whether that buffer will be freed after the program terminates, or whether there is a function we should call to free the memory and prevent a memory leak on the device.
The destructor for the cl::Buffer object returned by cl::CreateBuffer() will release the buffer, which will also free any memory allocated on-device. This is the main mechanism you should be relying upon.
Process death for any reason (crash or clean exit), even with resources still allocated, will also destroy the process's context handle in the device driver, which will cause the driver to perform the cleanup.
Of course, bugs at any level of the stack could prevent this from happening correctly in all cases, but in general, once your process dies, everything should be reset.
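If it helps to see what the wrapper's destructor does under the hood, here is a rough sketch using the plain C API (error handling trimmed, buffer size arbitrary): the explicit clReleaseMemObject() call is what cl::Buffer's destructor performs for you.

```c
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, 1 << 20, NULL, &err);

    /* ... enqueue kernels that use buf ... */

    clReleaseMemObject(buf);   /* frees the device allocation; the C++ wrapper's
                                  destructor makes this call for you */
    clReleaseContext(ctx);
    return 0;
}
```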

How do I verify that all memory allocations have been freed between two checkpoints?

I have a process that seems to be leaking memory. The longer the process runs, the more memory it uses. That is in spite of the fact that the process consists primarily of a loop that iteratively calls a function which should not preserve any data between calls. When I use valgrind to check for leaks, everything comes back a-ok. When the process eventually exits after running for a few hours, there is a substantial delay at exit, which all leads me to believe that memory is being allocated in that function and not freed immediately because it is still referenced. The memory is then subsequently freed on exit because that reference is eventually freed.
I'm wondering if there is a way with valgrind (or some other linux-compatible tool) to do a leak check between two code checkpoints. I'd like to get a leak report of all memory that was allocated but not freed between two code checkpoints.
I wrote an article on this a few years back.
In short, you include Valgrind's client-request header (the leak-check macros live in valgrind/memcheck.h) and then you can use macros like
VALGRIND_DO_LEAK_CHECK
Alternatively, you can attach gdb to the program running under valgrind and issue the 'monitor leak_check' command; the check can be incremental.
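A hedged sketch of bracketing the suspect code with those client requests (worker() is a stand-in for the real loop body, and the program must be run under valgrind, e.g. valgrind --leak-check=full ./prog, for the macros to do anything):

```c
#include <stdlib.h>
#include <valgrind/memcheck.h>

static void worker(void)
{
    void *p = malloc(128);   /* deliberately leaked, purely for illustration */
    (void)p;
}

int main(void)
{
    VALGRIND_DO_LEAK_CHECK;       /* checkpoint 1: baseline leak report */

    for (int i = 0; i < 1000; i++)
        worker();

    VALGRIND_DO_LEAK_CHECK;       /* checkpoint 2: compare against the baseline;
                                     recent Valgrind versions also provide
                                     VALGRIND_DO_ADDED_LEAK_CHECK to report only
                                     what appeared since the previous check */
    return 0;
}
```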

Linux: munmap shared memory in a single call

If a process calls mmap(..., MAP_ANONYMOUS | MAP_SHARED, ...) and forks N children, is it possible for any one of these processes (parent or descendant) to munmap() the memory for all processes in one go, thus releasing the physical memory, or does each of these processes have to munmap() it individually?
(I know the memory will be unmapped on process exit, but the children won't exit yet).
Alternatively, is there a way to munmap memory from another process? I'm thinking of a call something like munmap(pid,...).
Or is there a way to achieve what I am looking for using non-anonymous mappings and performing an operation on the related file descriptor (e.g. closing the file)?
My processes are performance sensitive, and I would like to avoid performing lots of IPC when it becomes known that the shared memory will no longer be used by anyone.
No, there is no way to unmap memory in one go.
If you don't need mapped memory in child processes at all, you may mark mappings with madvise(MADV_DONTFORK) before forking.
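A small sketch of that approach, assuming only the parent needs the region (size and error handling kept minimal):

```c
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 1 << 20;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_ANONYMOUS | MAP_SHARED, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    /* Linux-specific: children created after this call will not inherit the
       mapping at all, so only the parent ever has to munmap() it. */
    madvise(p, len, MADV_DONTFORK);

    if (fork() == 0)
        _exit(0);        /* child: the region is simply not mapped here */

    munmap(p, len);      /* parent unmaps; no other process holds a reference */
    return 0;
}
```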
In emergency situations, you can invoke syscalls in another process by using gdb:
1. Figure out the PID of the target process.
2. List its mapped memory with cat /proc/<PID>/maps.
3. Attach to the process using gdb: gdb -p <PID> (this suspends execution of the target process).
4. From gdb, run call munmap(0x<address>, 0x<size>) for each region you need to unmap.
5. Exit gdb (execution of the process resumes).
It should be obvious that if your process later tries to access the unmapped memory, it will receive SIGSEGV, so you must be 100% sure of what you are doing.

What happens to allocated memory of other threads when forking

I have a huge application that needs to fork itself at some point. The application is multithreaded and has about 200MB of allocated memory. To ensure that the data allocated by the process won't get duplicated, what I want to do is start a new thread and fork inside that thread. From what I have read, only the thread that calls fork will be duplicated, but what will happen to the allocated memory? Will it still be there? The purpose of this is to restart the application with different startup parameters: when it's forked, it will call main with my new parameters, thus hopefully giving me a new process of the same program. Now before you ask: I cannot guarantee that the binary of the process will still be in the same place as when I started the process, otherwise I could just fork and exec what's in /proc/self/exe.
Threads are execution units inside the big bag of resources that a process is. A process is the whole thing that you can access from any thread in the process: all the threads, all the file descriptors, all the other resources. So memory is absolutely not tied to a thread, and forking from a thread has no useful effect. Everything still needs to be copied over since the point of forking is creating a new process.
That said, Linux has some tricks to make it faster. Copying 2 gigabytes' worth of RAM is neither fast nor efficient. So when you fork, Linux actually gives the new process the same memory (at first), but it uses the virtual memory system to mark it as copy-on-write: as soon as one process needs to write to that memory, the kernel intercepts the write and allocates distinct memory so that the other process isn't affected.

How to avoid shared memory leaks

I'm using shared memory between two processes on SUSE Linux, and I'm wondering how I can avoid shared memory leaks in case one or both processes crash. Does a leak occur in this case? If so, how can I avoid it?
You could allocate space for two counters in the shared memory region: one for each process. Every few seconds, each process increments its counter, and checks that the other counter has been incremented as well. That makes it easy for these two processes, or an external watchdog, to tear down the shared memory if somebody crashes or exits.
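A rough sketch of that heartbeat idea (the struct layout, the names, and the one-slot-per-process convention are all assumptions; production code would want atomic accesses and a real timer): each process bumps its own counter and checks whether the peer's counter has moved since the previous check.

```c
#include <stdint.h>

/* Lives at the start of the shared memory region. */
struct heartbeat {
    volatile uint64_t counter[2];   /* one slot per process */
};

/* Called every few seconds by process `self` (0 or 1); `last_seen` holds the
 * peer's counter value remembered from the previous call. Returns 1 while the
 * peer looks alive, 0 once it appears to have stopped -- at which point the
 * survivor (or an external watchdog) can tear the shared segment down. */
static int heartbeat_tick(struct heartbeat *hb, int self, uint64_t *last_seen)
{
    int peer = 1 - self;

    hb->counter[self]++;                 /* announce "I'm still here" */

    if (hb->counter[peer] == *last_seen)
        return 0;                        /* peer made no progress: assume it died */

    *last_seen = hb->counter[peer];
    return 1;
}
```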
If the subprocess is a simple fork() from the parent process, then mmap() with MAP_SHARED should work.
If the subprocess does an exec() to start a different executable, you can often pass file descriptors obtained from shm_open() or a similar non-portable system call (see "Is there anything like shm_open() without filename?"). On many operating systems, including Linux, you can shm_unlink() the name right after shm_open(), so the object doesn't leak memory when your processes die, and use fcntl() to clear the close-on-exec flag on the shm file descriptor so that your child process can inherit it across exec. This is not well defined in the POSIX standard, but it appears to be very portable in practice.
If you need to use a filename instead of just a file descriptor number to pass the shared memory object to an unrelated process, then you have to figure out some way to shm_unlink() the file yourself when it's no longer needed; see John Zwinck's answer for one method.
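A hedged sketch of that pattern on Linux (the name "/myshm", the 1 MiB size, and the missing error handling are purely illustrative; link with -lrt on older glibc): the name is unlinked immediately, so the kernel reclaims the object as soon as the last descriptor and mapping are gone, even if every process crashes, and FD_CLOEXEC is cleared so a child can keep the descriptor across exec().

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/myshm", O_CREAT | O_EXCL | O_RDWR, 0600);
    if (fd < 0)
        return 1;

    shm_unlink("/myshm");            /* the name is gone; the object lives on only
                                        as long as something still references it */
    ftruncate(fd, 1 << 20);          /* set the segment's size */

    int flags = fcntl(fd, F_GETFD);
    fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC);   /* let a child keep fd across exec */

    void *p = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    /* fork() and exec() the child here, passing the descriptor number (e.g. on
       its command line); once every process has exited or crashed, the kernel
       reclaims the memory automatically. */
    return 0;
}
```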
