Docker memory leak on AWS EC2 container service - python-3.x

Whenever we make an API call to our script, the call completes successfully, but after the script ends the memory is not released. Let's say 10 MB of memory was used during execution; after execution the memory usage should have come down by at least 5 MB, but that is not happening.
So after a certain amount of time the memory usage goes beyond 75% and we start getting alerts.
Docker version 1.11.2, build b9f10c9/1.11.2
Python 3.4.2
Flask
We monitor usage with the docker stats command.

Found this solution and it really helped.
The issue was due to Linux and Python: Python was releasing the memory, but Linux thought Flask (the caller of the process) was still running, so it did not release that memory, and because of this the memory was not getting freed.
http://www.paulsprogrammingnotes.com/2014/10/large-dictionaries-not-released-from.html?showComment=1483516233443#c3352147816385844344
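A common workaround for this kind of behaviour (a minimal sketch, not necessarily the exact fix from the linked post) is to run the memory-heavy part of the request in a short-lived child process: when the child exits, the operating system reclaims all of its memory regardless of how the allocator behaved. Here heavy_work is a hypothetical stand-in for the real script:

```python
from multiprocessing import Process, Queue

from flask import Flask, jsonify

app = Flask(__name__)


def heavy_work(result_queue):
    # Hypothetical stand-in for the real script: build a large structure,
    # then put only a small summary on the queue.
    data = {i: str(i) * 100 for i in range(100_000)}
    result_queue.put(len(data))


@app.route("/run")
def run():
    queue = Queue()
    worker = Process(target=heavy_work, args=(queue,))
    worker.start()
    result = queue.get()  # only the small result crosses the process boundary
    worker.join()         # child exits here; the OS reclaims all of its memory
    return jsonify(count=result)


if __name__ == "__main__":
    app.run()
```

Because the large data only ever lives in the child, the long-running Flask worker's footprint stays flat between calls.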

Related

Is it normal that the memory consumed by Node.js keeps increasing as the project size increases?

I have a Node.js project that was running fine on my local machine using the pm2 process manager. I migrated the code to a micro EC2 instance, and when I started my server using pm2 it crashed. I checked memory consumption using free -m and found that node consumes almost 855 MB of memory out of 1 GB of RAM. I refactored my code and started from scratch, and I found that as I add more files and modules, the memory used by Node.js increases. I changed the swappiness limit to 15 instead of 50. I also tried using my hard disk as swap; although that works, Node.js still consumes more memory as I add more files and functionality to my project. Is this behaviour normal? If not, how do I debug it? Finally, are there any tweaks to Linux or the Node.js setup to minimize the memory footprint?
My server is not running any databases or anything else, only the Node.js app.
Here are the code metrics:
And here is the free -m output:

Memory leak in Express.js API application

I am running an Express.js application which is used as a REST API. One endpoint starts puppeteer and tests my website with several procedures.
After starting the application and continuously hitting that endpoint, my docker container runs out of memory every hour, as you can see below.
At first I thought I had a memory leak in puppeteer / headless Chrome, but when I monitored the memory usage of the individual processes, no memory leak was visible, as you can see here:
0.00 Mb COMMAND
384.67 Mb /var/express/node_modules/puppeteer/.local
157.41 Mb node /var/express/bin/www
101.76 Mb node /usr/local/bin/pm2
4.34 Mb /var/express/node_modules/puppeteer/.local
1.06 Mb ps
0.65 Mb bash
0.65 Mb bash
0.31 Mb cut
0.31 Mb cut
0.13 Mb dumb
Now I have run out of ideas as to what the problem could be. Does anyone have an idea where the RAM consumption could be coming from?
Analyse the problem more
You need to monitor the activity in real time.
We do not have the code, so we cannot know exactly what is going on. However, you can use more advanced tools like htop, gtop, or netdata rather than top or ps.
The pm2 logs might also tell you more. In such a situation, the logs will have more data than the process manager. Make sure to thoroughly investigate the logs to see whether your scripts are responsible and whether they are throwing errors (see also the monitoring sketch after the command below):
pm2 logs
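As a supplement to those tools (psutil is an assumption on my part, not something the answer mentions), here is a small sketch of how one could poll per-process resident memory from Python to see which process actually grows over time:

```python
import time

import psutil  # third-party dependency: pip install psutil


def top_memory_processes(count=5):
    """Return (rss_bytes, pid, name) for the processes using the most resident memory."""
    procs = []
    for proc in psutil.process_iter(["pid", "name", "memory_info"]):
        mem = proc.info["memory_info"]
        if mem is None:  # access denied or zombie process
            continue
        procs.append((mem.rss, proc.info["pid"], proc.info["name"]))
    return sorted(procs, reverse=True)[:count]


if __name__ == "__main__":
    while True:
        for rss, pid, name in top_memory_processes():
            print(f"{rss / 1024 / 1024:8.1f} MB  pid={pid}  {name}")
        print("-" * 40)
        time.sleep(5)
```

Watching the top entries over an hour makes a slow leak stand out long before the container hits its limit.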
Each API call will cost you
Calculate the cost early and prepare accordingly.
If you make one call, be prepared for it to use 100 MB-1 GB or more each time. It will cost you just like a browser tab, and the cost is there as long as the tab is open.
If the target website is heavy, it will cost more. Some websites, like YouTube, will obviously cost you more.
Any script running inside the browser tab will also consume CPU and memory.
Say each process takes about 300 MB of RAM. If you don't close the processes properly and keep making API calls, then only 10 API calls can easily use 3 GB of RAM. It adds up pretty quickly, as the sketch below shows.
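A back-of-the-envelope sketch of that arithmetic (the 300 MB per tab is the answer's own rough figure, not a measured value):

```python
def estimated_ram_mb(unclosed_calls, mb_per_tab=300):
    """Each call that never closes its tab keeps its memory, so the cost accumulates."""
    return unclosed_calls * mb_per_tab


# 10 unclosed calls at ~300 MB each -> roughly 3000 MB (3 GB)
print(estimated_ram_mb(10))
```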
Make sure to close the tabs
Whether the automation task is successful or not, make sure to properly call browser.close() so that the resources it is using are freed. Most of the time we forget about such small things and it costs us.
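The answer targets Node's puppeteer; purely for illustration, and to keep the examples on this page in one language, here is the same close-it-no-matter-what pattern sketched with the pyppeteer port of the same API (the URL is a placeholder):

```python
import asyncio

from pyppeteer import launch  # Python port of puppeteer, used here only for illustration


async def run_check(url):
    browser = await launch(headless=True)
    try:
        page = await browser.newPage()
        await page.goto(url)
        return await page.title()
    finally:
        # Close the browser whether the check succeeded or failed,
        # so Chromium's memory is released back to the OS.
        await browser.close()


asyncio.run(run_check("https://example.com"))
```

In the Node version the important part is identical: put browser.close() in a finally block around the whole test procedure.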
Apply dumb-init in Docker to avoid ghost processes
Something like dumb-init or tini can be used if you have a process that spawns new processes and you don't have good signal handlers implemented to catch child signals and stop your children when your process should be stopped.
Read more in this SO answer.
I've got the problem solved. It was caused by the underlying Kubernetes setup, which wasn't configured with a resource limit on that specific container; therefore, the container could consume as much memory as it wanted.
Now I've limited it to 2 GB and it looks like this:

How does Linux manage VM allocation per process? OOM crash

I'm currently prototyping a very lightweight TCP server based on a custom protocol. It's written in C++ and uses Boost Asio for cross-platform sockets. When I monitor the process on Windows it only uses under 3 MB of memory and barely grows with many concurrent connections (I tested up to 8).
I built the same server for Linux and put it on a VPS with 128 MB of RAM plus 64 MB of swap for testing. It runs fine and my tests are successful, but the process gets killed in the middle of the night by the kernel. I checked the logs and it was out of memory (the OOM score was 0).
I highly doubt my process has memory leaks. I checked my server logs and only one person had connected to it the previous night, which should not result in OOM. The process sleeps for the majority of the time, as it only does processing when Boost's async handler wakes up the main thread to process a packet.
What I did notice is that the default VM allocation for the process is a whopping 89 MB (using the top command), and as soon as I make a connection it roughly doubles to about 151 MB. My VPS has about 100 MB of free RAM and all 64 MB of swap while running the server, so the only thing I can think of is that the process tried to allocate more virtual memory, went beyond the ~164 MB remaining, exceeded the physical limit, and triggered the OOM killer.
I've since used the ulimit command to limit the VM allocation to 30 MB and it seems to be working fine, but I'll have to wait a while to see whether it actually helps.
My question is: how does Linux determine how much VM to allocate for a process? Is there a compiler/linker setting I can use to reduce the default VM reservation? Is my reasoning correct, or are there other reasons for the OOM?

Unable to locate the memory hog on openvz container

i have a very odd issue on one of my openvz containers. The memory usage reported by top,htop,free and openvz tools seems to be ~4GB out of allocated 10GB.
when i list the processes by memory usage or use ps_mem.py script, i only get ~800MB of memory usage. Similarily, when i browse the process list in htop, i find myself unable to pinpoint the memory hogging offender.
There is definitely a process leaking ram in my container, but even when it hits critical levels and i stop everything in that container (except for ssh, init and shells) i cannot reclaim the ram back. Only restarting the container helps, otherwise the OOM starts kicking in in the container eventually.
I was under the assumption that leaky process releases all its ram when killed, and you can observe its misbehavior via top or similar tools.
If anyone has ever experienced behavior like this, i would be grateful for any hints. The container is running icinga2 (which i suspect for leaking ram) , although at most times the monitoring process sits idle, as it manages to execute all its scheduled checks in more than timely manner - so i'd expect the ram usage to drop at those times. It doesn't though.
I had a similar issue in the past, and in the end it was solved by the hosting company where I had my OpenVZ container. I think the best approach would be to open a support ticket with your hoster, explain the problem, and ask them to investigate. Maybe they use an outdated kernel version, or they made changes on the server that affect your OVZ container.

Node.js killed due to out of memory

I'm running Node.js on a server with only 512 MB of RAM. The problem is that when I run a script, it gets killed due to being out of memory.
By default the Node.js memory limit is 512 MB, so I think using --max-old-space-size is useless.
Here is the content of /var/log/syslog:
Oct 7 09:24:42 ubuntu-user kernel: [72604.230204] Out of memory: Kill process 6422 (node) score 774 or sacrifice child
Oct 7 09:24:42 ubuntu-user kernel: [72604.230351] Killed process 6422 (node) total-vm:1575132kB, anon-rss:396268kB, file-rss:0kB
Is there a way to get rid of the out-of-memory kills without upgrading the memory (for example, by using persistent storage as additional RAM)?
Update:
It's a scraper which uses the Node modules request and cheerio. When it runs, it opens hundreds or thousands of web pages (but not in parallel).
If you're giving Node access to every last megabyte of the available 512 and it's still not enough, then there are two ways forward:
Reduce the memory requirements of your program. This may or may not be possible. If you want help with this, you should post another question detailing your functionality and memory usage.
Get more memory for your server. 512 MB is not much, especially if you're running other services (such as databases or message queues) which require in-memory storage.
There is a third possibility of using swap space (disk storage that acts as a memory backup), but this will have a strong impact on performance. If you still want it, Google how to set this up for your operating system; there are a lot of articles on the topic. This is OS configuration, not Node's.
Old question, but maybe this answer will help people. Using --max-old-space-size is not useless.
Before Node.js 12, the default heap size depended on whether the OS was 32- or 64-bit. Following the documentation, on 64-bit machines the old generation alone would be 1400 MB, far beyond your 512 MB.
From Node.js 12 onward, the heap size is set based on the available system RAM; however, Node.js's heap isn't the only thing in memory, especially if your server isn't dedicated to it. Setting --max-old-space-size lets you put a limit on the old-generation heap, and if your application gets close to it, the garbage collector is triggered and tries to free memory.
I've written a post about how I observed this: https://loadteststories.com/nodejs-kubernetes-an-oom-serial-killer-story/
