I detected a memory leak in my service process on a Linux server: it occupies 1.2 GB of physical memory and keeps consuming more.
While I was looking through the code for the leak, I noticed the process had been restarted (it is managed by supervisord, so it is restarted if killed). There is no error or panic in the process's log, so my guess is that it was killed by the kernel.
When does the kernel kill a process that is leaking memory? When it consumes too much memory, or when it allocates memory too fast?
Memory leaks can cause your system memory to run low. If memory gets very low, the OOM (Out Of Memory) killer is invoked to try to recover from the low-memory state. The OOM killer terminates one or more processes that consume a lot of memory and are of least importance (low priority). Normally, the OOM killer is invoked when there is no user address space available or when there are no free pages available.
The OOM killer uses select_bad_process() and badness() to choose which process to kill. These functions assign a score to every process based on various factors, such as the VM size of the process, the VM size of its children, uptime, priority, whether it does any raw hardware access, and whether it is the swapper, init, or a kernel thread. The process with the highest score (badness) gets killed.
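On reasonably recent kernels you can see the score the kernel currently assigns to a process by reading /proc/<pid>/oom_score, and bias it through /proc/<pid>/oom_score_adj. A minimal sketch (my own illustration, not code from the question):

#include <stdio.h>

int main(void)
{
    char buf[64];
    FILE *f = fopen("/proc/self/oom_score", "r");

    /* The current badness score the OOM killer would compare. */
    if (f) {
        if (fgets(buf, sizeof buf, f))
            printf("oom_score:     %s", buf);
        fclose(f);
    }
    /* The adjustment knob, in the range -1000 (never kill) to 1000. */
    f = fopen("/proc/self/oom_score_adj", "r");
    if (f) {
        if (fgets(buf, sizeof buf, f))
            printf("oom_score_adj: %s", buf);
        fclose(f);
    }
    return 0;
}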
Also, check whether the kernel's overcommit behaviour (/proc/sys/vm/overcommit_memory, /proc/sys/vm/overcommit_ratio) and the limit on the address space for your processes are appropriate.
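On the address-space limit: here is a hedged sketch (the 512 MiB cap is an arbitrary illustrative value) showing how capping RLIMIT_AS makes a leaking process see malloc() failures instead of growing until the OOM killer steps in:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl = { 512UL << 20, 512UL << 20 };  /* 512 MiB cap */
    size_t mib = 0;

    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    while (malloc(1 << 20) != NULL)   /* deliberately leak 1 MiB chunks */
        mib++;
    printf("malloc() started failing after about %zu MiB\n", mib);
    return 0;
}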
Valgrind is a very handy tool for identifying memory leaks in such scenarios, e.g. valgrind --leak-check=full ./your-program.
Related
I just learned about mlock() functions. I know that it allows you to lock program memory into RAM (allowing the physical address to change but not allowing the memory to be evicted). I've read that newer Linux kernel versions have a mlock limit (ulimit -l), but that this is only applicable to unprivileged processes. If this is a per-process limit, could an unprivileged process spawn a ton of processes by fork()-ing and have each call mlock(), until all memory is locked up and the OS slows to a crawl because of tons of swapping or OOM killer calls?
It is possible that an attacker could cause problems with this, but not materially more problems than they could cause otherwise.
The default limit for this on my system is about 2 MB. That means a typical process won't be able to lock more than 2 MB of data into memory. Note that this is just normal memory that won't be swapped out; it's not an independent, special resource.
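You can check and exercise that limit yourself; a minimal sketch (nothing here is from the question, and locking only half the soft limit keeps clear of page-size rounding):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    size_t len;
    void *buf;

    if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("RLIMIT_MEMLOCK soft limit: %llu bytes\n",
           (unsigned long long)rl.rlim_cur);

    len = rl.rlim_cur / 2;            /* stay under the soft limit */
    buf = malloc(len);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    memset(buf, 0, len);              /* fault the pages in */
    if (mlock(buf, len) != 0)         /* past the limit this fails for
                                         an unprivileged process */
        perror("mlock");
    else
        printf("locked %zu bytes\n", len);
    munlock(buf, len);
    free(buf);
    return 0;
}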
It is possible that a malicious process could spawn many other processes to use more locked memory, but because a process usually requires more than 2 MB of memory to run anyway, locking isn't an especially efficient way to exhaust memory; in fact, starting a new process is itself going to be more effective at using memory than locking it. It is true that a process could simply fork, lock memory, and sleep, in which case its other pages would likely be shared because of copy-on-write, but it could also just allocate a decent chunk of memory and cause many more problems, and in fact it will generally have permission to do so, since many processes require non-trivial amounts of memory.
So, yes, it's possible that an attacker could use this technique to cause problems, but because there are many easier and more effective ways to exhaust memory or cause other problems, this seems like a silly way to go about doing it. I, for one, am not worried about this as a practical security problem.
We know that in OpenCL, using cl::CreateBuffer() we can create a buffer on the device, which allocates memory there. My question is whether the buffer is freed automatically when the program terminates, or whether there is a function we should call to free the memory and prevent a memory leak on the device.
The destructor for the cl::Buffer object returned by cl::CreateBuffer() will release the buffer, which will also free any memory allocated on-device. This is the main mechanism you should be relying upon.
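For reference, the explicit lifetime management in the OpenCL C API is what the C++ wrapper's destructor ends up invoking; a minimal sketch with error handling omitted:

#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_context ctx;
    cl_mem buf;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);
    ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);

    buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, 1 << 20, NULL, &err);
    /* ... use the buffer with kernels / command queues ... */
    clReleaseMemObject(buf);   /* frees the device-side allocation;
                                  this is what ~cl::Buffer() does */
    clReleaseContext(ctx);     /* driver tears down anything left */
    return 0;
}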
Process death for any reason (crash or clean exit), even with resources still allocated, will also destroy the process's context handle in the device driver, which causes the driver to perform the cleanup.
Of course, bugs at any level of the stack could prevent this from happening correctly in all cases, but in general, once your process dies, everything should be reset.
I always read that a processor can only run one process at a time, so one and only one process is in the running state.
However, we can have a number of runnable processes. These are all the processes that are waiting for the scheduler to schedule their execution.
At any given time, do all these runnable processes exist in user address space? Or is only the currently running process in user address space, with the others brought back into RAM from disk only once they are scheduled? In that case, does the kernel keep a process's task descriptor in its list of runnable processes even while the process is on disk? I guess you can tell I am confused.
If the CPU supports virtual memory addressing, each process has its own unique view of memory. Two different processes that read from the same virtual address will map to different locations in physical memory, unless the memory maps say otherwise (shared memory; DLL files, for instance, are mapped read-only like this).
If the CPU does not support virtual memory but only memory protection, the memory of the other processes is protected away, so the running process can only access its own memory.
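To illustrate the first case, here is a small standalone sketch (my own example, not from the question): after fork(), parent and child see the same virtual address for a variable, yet their writes land in different physical pages via copy-on-write:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 42;   /* same virtual address in both processes */

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {                 /* child */
        value = 100;                /* triggers copy-on-write */
        printf("child:  &value=%p value=%d\n", (void *)&value, value);
        return 0;
    }
    wait(NULL);                     /* let the child print first */
    /* Same address is printed, but the parent still sees 42. */
    printf("parent: &value=%p value=%d\n", (void *)&value, value);
    return 0;
}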
Let's suppose that process A allocates a lot of pages with code like the snippet below, and executes it periodically without ever freeing the pages, so it leaks memory.
/* Each call allocates 2^8 = 256 pages, i.e. 1 MiB with 4 KiB pages,
 * so the loop allocates 10 MiB and keeps only the last pointer. */
struct page *page_p;
int i;

for (i = 0; i < 10; i++)
    page_p = alloc_pages(gfp_mask, 8);
By the way, what becomes of the allocated pages if the process is killed without freeing them? Are the allocated pages permanently leaked?
In Linux you have virtual memory, which is a per-process memory map. A process's memory is allocated from this map, and the OS maps it onto physical memory, either RAM or swap.
When a process exits, the OS tears down the process's memory map, and the underlying physical memory can be reused by other processes. So leaked memory is only leaked while the process is running.
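One caveat: that applies to user-space allocations. alloc_pages() is a kernel-space API, and pages allocated by kernel code are not returned automatically when a process dies; kernel code has to pair each allocation with __free_pages(). A sketch of the snippet above with explicit cleanup (assuming gfp_mask is defined elsewhere, as in the question):

/* Keep every pointer so the pages can be returned later. */
struct page *pages[10];
int i;

for (i = 0; i < 10; i++)
    pages[i] = alloc_pages(gfp_mask, 8);   /* order 8 = 1 MiB each */

/* ... use the pages ... */

for (i = 0; i < 10; i++)
    if (pages[i])
        __free_pages(pages[i], 8);         /* same order as allocated */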
By 'swapped' and 'terminated' I mean the process being swapped out to swap space, or terminated (by the OOM killer) to free up memory.
What algorithm does the Linux kernel follow?
For instance, process A needs extra memory and process B has been chosen to be swapped out or killed (if swap space is already full), but process B still has a blocked thread.
a.) Does process B get swapped out or killed regardless of the blocked thread?
b.) If not, how is this kind of case handled?
If my example is an unlikely case, any insights would be appreciated.
Yeah - you need to read up on paged virtual memory, as suggested by @CL. Processes are not swapped out in their entirety, and swapping != termination.
If the OS needs to terminate a process, either because of a specific API request or because of its OOM algorithm, it stops all of the process's threads first. Blocked threads are easy to 'stop' because they are not running anyway - it is only necessary to change their state to ensure they are never run again. Threads that are actually running on cores have to be stopped by an inter-core communication mechanism that can hardware-interrupt the cores running them. Once no thread is running, the resources allocated to the process, including all of its user-space memory, can be freed and the OS thread/process management structures released. The process then no longer exists.