Memory leak or consumption issues of a process in an embedded system

If we want to debug memory-related issues of a process with Valgrind, we have to start the process under Valgrind. Are there any other tools we can use to analyze a process that is already running on the embedded system?
For example, a process is started by the embedded system at bootup, and its memory consumption grows gradually. I don't want to kill the process and restart it under Valgrind; I want to inspect the existing process. Are there any tools that can help here?
I think we can try /proc/<pid>/maps, but I'm not sure how to make sense of the anonymous allocations in the /proc/<pid>/maps file.
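
For a process that is already running, one low-overhead approach is to sample /proc/<pid>/smaps periodically and watch which mappings grow. Below is a minimal sketch of that idea, assuming a Linux /proc filesystem and a Python interpreter reachable on the target; the PID argument and the 10-second interval are placeholders:

    import re
    import sys
    import time

    def anon_rss_kb(pid):
        """Sum the Rss of anonymous mappings (no backing file, or [heap]) in /proc/<pid>/smaps."""
        total_kb = 0
        in_anon_mapping = False
        with open(f"/proc/{pid}/smaps") as f:
            for line in f:
                if re.match(r"^[0-9a-f]+-[0-9a-f]+ ", line):
                    # Mapping header: "addr-addr perms offset dev inode [pathname]"
                    fields = line.split()
                    in_anon_mapping = len(fields) < 6 or fields[-1] == "[heap]"
                elif line.startswith("Rss:") and in_anon_mapping:
                    total_kb += int(line.split()[1])  # value is reported in kB
        return total_kb

    if __name__ == "__main__":
        pid = sys.argv[1]
        while True:
            print(time.strftime("%H:%M:%S"), anon_rss_kb(pid), "kB of anonymous RSS")
            time.sleep(10)

If the growth is concentrated in [heap] or in large anonymous rw-p mappings, the allocations are coming through the normal allocator, and attaching gdb to the live process (or using glibc's malloc_info) can narrow things down further without restarting it.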

Related

What are the proper tools and techniques to analyze a core dump file in Linux

I'm not asking how to find the cause of a crash; actually, there is no crash at all. I can't rule out a memory leak, but the executable passed Valgrind analysis during stress testing. However, when running in the cloud under heavy load, it gradually consumed a lot of memory. A DevOps engineer had to use kill -6 pid to kill the process and generate a core dump file, then restart it. With that core dump, what are good tools and techniques for locating which part of the code contributed to the very high memory consumption? Thanks!
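
Without allocation tracking baked into the dump, a useful first step is simply to measure which memory regions in the core are the largest and map them back to the process's address space. The following is a rough sketch, not a definitive workflow; it assumes the third-party pyelftools package and a core file named core, both of which are my assumptions rather than anything from the question:

    from elftools.elf.elffile import ELFFile

    # Each PT_LOAD segment in an ELF core dump corresponds to one memory
    # mapping of the process at the moment the dump was taken.
    with open("core", "rb") as f:
        elf = ELFFile(f)
        segments = [
            (seg["p_vaddr"], seg["p_memsz"])
            for seg in elf.iter_segments()
            if seg["p_type"] == "PT_LOAD"
        ]

    # Print the ten largest regions with their start addresses.
    for vaddr, memsz in sorted(segments, key=lambda s: s[1], reverse=True)[:10]:
        print(f"0x{vaddr:016x}  {memsz / (1024 * 1024):8.1f} MiB")

Cross-referencing the biggest regions with /proc/<pid>/maps (saved before the kill, or taken from a later run) tells you whether the growth sat in the heap, a thread arena, or an mmap'd area, which usually decides where to add instrumentation for the next run; gdb can also open the core directly to inspect the data living in those regions.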

Limit a process's CPU and memory usage, with Docker perhaps?

Is there any way to run a process inside a Docker container without building the container and taking on all the other isolation (I/O, etc.)?
My end goal is not to build an isolated environment, but rather to limit CPU and memory usage (the ability to malloc). Using VM instances is just too much overhead. ulimit, systemd, cpulimit, and other Linux tools don't seem to provide a good solution here; for example, systemd appears to only kill the process if RES/VIRT exceeds a threshold.
Docker seems to do the trick without performance degradation, but is there a simpler way to run e.g. a Python script without all the extra hassle and configuration?
Or are there any other ways to limit CPU and memory usage?
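
If the goal is only to cap memory (the ability to malloc) and CPU for a single script, rather than full isolation, the limits can be applied directly when launching the child. Here is a minimal sketch using Python's standard resource module on Linux; the limit values and the child command are placeholders:

    import resource
    import subprocess

    MEM_BYTES = 512 * 1024 * 1024  # address-space cap: malloc fails beyond this
    CPU_SECONDS = 60               # CPU-time cap: the kernel signals the process beyond this

    def apply_limits():
        # Runs in the child between fork() and exec(), so only the child is limited.
        resource.setrlimit(resource.RLIMIT_AS, (MEM_BYTES, MEM_BYTES))
        resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECONDS, CPU_SECONDS))

    child = subprocess.Popen(
        ["python3", "my_script.py"],  # placeholder command
        preexec_fn=apply_limits,
    )
    print("child exited with", child.wait())

Note that the semantics differ from Docker's: RLIMIT_AS makes allocations fail outright rather than reclaiming memory, and RLIMIT_CPU is a total time budget rather than a percentage cap. For throttling-style limits, the cgroup controllers Docker uses underneath (also reachable via systemd-run with CPUQuota/MemoryMax properties) are the closer fit.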

How to analyze memory leaks for "Azure Web Apps" (PaaS)

I am looking to analyze memory leaks for a web app deployed in Azure.
Referring to the following URL:
https://blogs.msdn.microsoft.com/kaushal/2017/05/04/azure-app-service-manually-collect-memory-dumps/
we were able to extract a memory dump and analyze it. But since we were not able to inject the LeakTrack DLL / enable memory leak tracking when collecting the dump, the memory analysis reports that leak analysis was not performed because the DLL was not injected.
Please suggest how to find memory leaks by analyzing the dump in this scenario.
As you said, DebugDiag currently can't create reflected process dumps, and ProcDump doesn't have a way to inject the LeakTrack DLL to track allocations, so we can get around this by making the two tools work together.
In the DebugDiag UI, we would simply go to the Processes tab, right-click the process, and choose "Start Monitoring for Leaks."
We can do the same thing by scripting DebugDiag and ProcDump to carry out the individual tasks we've set out for them.
Once we have the PID of the troubled process, we can use a script to inject the LeakTrack DLL into it. With the PID known and the script created, we can launch DebugDiag from a command line, such as:
C:\PROGRA~1\DEBUGD~1\DbgHost.exe -script "your script path" -attach your PID
For more detail, you could refer to this article.
Here is also the reference case.
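
As an illustration of the "find the PID, then launch DebugDiag from a command line" step, here is a rough wrapper sketch. The DbgHost.exe invocation with -script and -attach comes from the command shown above; the process name, DebugDiag install path, and injection-script path are hypothetical placeholders:

    import csv
    import io
    import subprocess

    PROCESS_NAME = "w3wp.exe"                            # hypothetical target process
    DBGHOST = r"C:\Program Files\DebugDiag\DbgHost.exe"  # hypothetical install path
    INJECT_SCRIPT = r"C:\scripts\InjectLeakTrack.vbs"    # hypothetical injection script

    # "tasklist /FO CSV" prints a CSV header row followed by one row per process;
    # column 0 is the image name and column 1 is the PID.
    out = subprocess.check_output(
        ["tasklist", "/FI", f"IMAGENAME eq {PROCESS_NAME}", "/FO", "CSV"],
        text=True,
    )
    pid = list(csv.reader(io.StringIO(out)))[1][1]

    # Launch DebugDiag's script host to attach to the PID and run the injection script.
    subprocess.run([DBGHOST, "-script", INJECT_SCRIPT, "-attach", pid], check=True)

The idea, per the answer above, is that DebugDiag handles the LeakTrack injection while ProcDump takes the actual dump once the process has run under load, so the later leak analysis has allocation data to work with.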

Why does Linux keep killing Apache?

I have a long-running Apache web server handling lots of requests. After some time I find that the Apache server has stopped, with a
"Killed" line at the end of its output.
What can I do to solve this problem or prevent the system from killing the Apache instance?
Linux usually kills processes when resources such as memory are running low; this is the kernel's OOM (out-of-memory) killer. You might want to have a look at the memory consumption of your Apache process over time.
You might find some more details here:
https://unix.stackexchange.com/questions/136291/will-linux-start-killing-my-processes-without-asking-me-if-memory-gets-short
You can also monitor your processes using the M/Monit software; have a look here: https://serverfault.com/questions/402834/kill-processes-if-high-load-average
There is also the top utility, which shows per-process resource consumption (e.g. memory, CPU, user), and you can use it to keep an eye on the Apache process.
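
To keep an eye on Apache's memory over time without sitting in top, even a tiny poller over /proc works. A minimal sketch; the PID is passed in as a placeholder, and VmRSS is read from /proc/<pid>/status:

    import sys
    import time

    def rss_kb(pid):
        # VmRSS in /proc/<pid>/status is the process's resident set size in kB.
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
        return 0

    if __name__ == "__main__":
        pid = sys.argv[1]  # e.g. the PID of the main apache2/httpd process
        while True:
            print(time.strftime("%Y-%m-%d %H:%M:%S"), rss_kb(pid), "kB resident")
            time.sleep(60)

When the process does get killed, the kernel log (dmesg or journalctl -k) contains the OOM killer's entry, including the memory it recorded for the victim, which confirms whether memory pressure really was the trigger.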

Node.JS V8 heap growing quickly even though usage remains the same

I'm running a Node.js web application that works fine for a few hours, and then at some random point in time the V8 heap suddenly starts growing very quickly for no apparent reason. About 40 minutes later the growth usually stops and the process continues running normally.
I'm monitoring this with nodetime.
What could be the cause of this? Is it a memory leak in my program or perhaps a bug in V8?
There is no way of knowing what the issue is from what you've provided, but there's a 99.99% chance the problem is inside, and fixable in, your code.
The best tool I've found for debugging memory issues with Node.js is https://github.com/bnoordhuis/node-heapdump. You can set it up to dump at certain intervals, or by default it listens for the USR2 signal, so you can send kill -s USR2 to the PID of your process and get a snapshot.
Then you can use the Chrome inspector to load the heap snapshot into its profiling tool and start inspecting.
I've generally found the issues to be around holding on to external requests for too long.
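
To catch the growth episode described above, it helps to have a snapshot taken before the spike and another taken during it, and then diff them. Here is a small sketch that triggers node-heapdump's default USR2 handler from the outside at a fixed interval; the PID and interval are placeholders, and it assumes the heapdump module is already loaded in the Node process:

    import os
    import signal
    import sys
    import time

    # Periodically send SIGUSR2 to a Node.js process that has node-heapdump loaded;
    # each signal makes heapdump write a .heapsnapshot file in the process's cwd.
    pid = int(sys.argv[1])
    interval_s = 10 * 60  # every 10 minutes (placeholder)

    while True:
        os.kill(pid, signal.SIGUSR2)
        print("requested heap snapshot from pid", pid)
        time.sleep(interval_s)

In the Chrome inspector's Memory tab, loading two snapshots and switching to the comparison view shows which object types grew between them, which is usually enough to spot requests or closures being retained for too long.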
