Will killing a process recover leaked memory? - memory-leaks

For example, a particular application leaks 10 MB of memory while it runs. If I kill the process, will that 10 MB be recovered by the system?
I tried to check this myself:
I created an application that leaks 10 MB of memory.
Before running it, I used "Memory Doctor" to check my free memory (250.4 MB).
After running and then killing it, "Memory Doctor" showed my free memory as 240 MB.
I want to confirm: when the application is killed, will the memory consumed by the process be reclaimed by the operating system or not?

Yes, when a process is killed, any memory it was using is returned to the operating system.
Keep in mind that any filesystem cache populated by the application may not be freed immediately.
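You can verify this yourself by watching MemAvailable in /proc/meminfo, which accounts for reclaimable cache and is therefore less misleading than the raw "free" figure. A rough sketch, where ./leaky-app stands in for whatever application is leaking:

# Snapshot of available memory before the run (includes reclaimable cache)
grep MemAvailable /proc/meminfo

# Start the leaking application in the background (./leaky-app is a placeholder)
./leaky-app &
LEAK_PID=$!

# Watch its resident set grow while it leaks
grep VmRSS /proc/$LEAK_PID/status

# Kill it; the anonymous memory it leaked goes straight back to the kernel
kill $LEAK_PID
sleep 1
grep MemAvailable /proc/meminfo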

Related

Memory leak or consumption issues of a process in an embedded system

If we want to debug memory-related issues in a process, we have to start the process under Valgrind. Are there any other tools we can use to analyze a process that is already running on the embedded system?
For example, a process is started by the embedded system at bootup, and its memory consumption increases gradually. I don't want to kill the process and restart it under Valgrind; I want to inspect the existing process. Are there any tools that can help here?
I think we can try /proc/pid/maps, but I'm not sure how to interpret the anonymous allocations in the /proc/pid/maps file.
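One step beyond /proc/pid/maps is /proc/pid/smaps, which breaks each mapping down by type, so the anonymous portion can at least be totalled and watched over time. A rough sketch (1234 stands in for the real PID):

# Sum the anonymous memory across all mappings, in kB
awk '/^Anonymous:/ {sum += $2} END {print sum " kB anonymous"}' /proc/1234/smaps

# On newer kernels there is also a pre-summed view
grep -E '^(Rss|Anonymous):' /proc/1234/smaps_rollup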

What are the reasons for process abort, other than memory leak?

I have a Node.js application that is crashing due to a process abort; I can see it in /var/log/messages. But when I monitor the memory usage, it is stable. Are there any reasons other than a memory leak for which a process can get aborted?
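One thing worth ruling out, assuming the kernel itself might be terminating the process, is OOM-killer activity around the crash time; it is logged by the kernel rather than by the application:

# Look for OOM-killer traces in the kernel log
dmesg | grep -iE 'out of memory|oom'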

Can the cache in Linux cause a heap memory out-of-space exception?

I am facing an issue while deploying a particular application onto a Linux server running Ubuntu 16.04.
The application is written in Java and performs a lot of I/O operations. Over time, while the application runs, cache consumption increases. Although the output of free -h shows a sufficient amount of available memory, the application will crash with a "Java Heap Memory Out of Space" exception.
To work around the problem, I run the clear-cache command to free up the cache.
I need some guidance on whether the issue is caused by the cache, or whether something is wrong in the application, since clearing the cache stops the exception from happening. Does the cache take away JVM memory?
Linux will always free the cache as needed; you should never have to do this explicitly.
the application will crash with a "Java Heap Memory Out of Space" exception
This means there isn't enough swap space to allocate memory to the JVM's heap.
I would do one of the following (sketched below):
increase the swap space,
decrease the heap size, or
pre-touch all the heap pages so they are allocated eagerly.
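A rough sketch of what those options look like in practice (the sizes and myapp.jar are placeholders, and the flags assume a HotSpot JVM):

# Option 1: add swap space, here a 2 GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Options 2 and 3: cap the heap below what RAM can actually back, and pre-touch
# it at startup so heap pages are allocated eagerly instead of faulting in later
java -Xms2g -Xmx2g -XX:+AlwaysPreTouch -jar myapp.jar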

Unable to locate the memory hog on an OpenVZ container

I have a very odd issue on one of my OpenVZ containers. The memory usage reported by top, htop, free, and the OpenVZ tools is ~4 GB out of an allocated 10 GB.
When I list the processes by memory usage or use the ps_mem.py script, I only get ~800 MB of memory usage. Similarly, when I browse the process list in htop, I cannot pinpoint the memory-hogging offender.
There is definitely a process leaking RAM in my container, but even when usage hits critical levels and I stop everything in the container (except ssh, init, and shells), I cannot reclaim the RAM. Only restarting the container helps; otherwise the OOM killer eventually starts kicking in inside the container.
I was under the assumption that a leaky process releases all its RAM when killed, and that you can observe its misbehavior via top or similar tools.
If anyone has experienced behavior like this, I would be grateful for any hints. The container is running icinga2 (which I suspect of leaking RAM), although most of the time the monitoring process sits idle, since it executes all its scheduled checks in a more than timely manner, so I'd expect the RAM usage to drop at those times. It doesn't, though.
I had a similar issue in the past, and in the end it was solved by the hosting company where I had my OpenVZ container. I think the best approach would be to open a support ticket with your hoster, explain the problem, and ask them to investigate. Maybe they use an outdated kernel version, or they made changes on the server that affect your OpenVZ container.
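Before (or while) involving the hoster, a rough way to check whether the "missing" memory is kernel-side rather than owned by any process (the commands assume a standard /proc layout inside the container):

# Sum the resident set sizes of all processes, in MB
ps -eo rss= | awk '{sum += $1} END {print sum/1024 " MB of process RSS"}'

# Compare with the container-wide view; Slab and Shmem belong to the kernel
# and to tmpfs, not to any process you can kill
grep -E 'MemTotal|MemAvailable|Slab|Shmem' /proc/meminfo

# On OpenVZ, per-container accounting (including kernel memory) is exposed here
cat /proc/user_beancounters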

Do we need to disable swap for Riak?

I just found in the Riak documentation that swap makes the server unresponsive, so it has to be disabled. It also says that the Riak node should be allowed to be killed by the kernel if it uses too much RAM, and that if swap is completely disabled, Riak will simply exit. I am confused: do we have to disable swap or not?
http://docs.basho.com/riak/latest/cookbooks/Linux-Performance-Tuning/
Swap Space
Due to the heavily I/O-focused profile of Riak, swap usage
can result in the entire server becoming unresponsive. Disable swap or
otherwise implement a solution for ensuring Riak's process pages are
not swapped.
Basho recommends that the Riak node be allowed to be killed by the
kernel if it uses too much RAM. If swap is completely disabled, Riak
will simply exit when it is unable to allocate more RAM and leave a
crash dump (named erl_crash.dump) in the /var/log/riak directory which
can be used for forensics (by Basho Client Services Engineers if you
are a customer).
So no, you don't have to ... but if you don't, and you use all your available RAM, the machine is likely to become unresponsive.
That's going to be the case with any (unbounded) application that performs heavy I/O and can exhaust your system's memory. Typically you would have monitoring on the machine that warns you when memory usage goes past a threshold.
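If you do follow the documentation's advice, a minimal sketch of the two usual approaches (disable swap entirely, or keep it but make the kernel very reluctant to use it):

# Disable swap immediately for the running system
sudo swapoff -a
# ...and comment out the swap line(s) in /etc/fstab so it stays off after a reboot
grep swap /etc/fstab

# Alternatively, keep swap as a safety net but discourage its use
sudo sysctl vm.swappiness=1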
