Is there a way to see what's "stuck" on Gearman-Job-Server? - gearman

I have an Ubuntu/LAMP install of gearman-job-server.
Four instances have been running on servers of various sizes, but one is at 6% CPU and 33% memory (and slowly growing).
I would like to inspect that job server's processes. Is there a way to do so? It was installed as a package via apt.
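For what it's worth, the gearman-job-server daemon can be inspected with gearadmin (shipped in the gearman-tools package on Ubuntu), and it also speaks a plain-text admin protocol on port 4730, so something along these lines should show what is queued and which workers are connected (assuming the default host and port):

# per-function summary: name, queued jobs, currently running, available workers
gearadmin --status

# connected workers and the functions they have registered
gearadmin --workers

# the same information over the text protocol on the default port 4730
(echo status; echo workers; sleep 1) | nc 127.0.0.1 4730

A function with a large and growing queued count but no available workers is usually the one that is stuck.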

Related

Npm install using excess CPU on docker container running with limited cores

Problem encountered
When running npm install on a remote Docker container on a machine with 36 CPUs, but with the container limited to 4 (specifically CircleCI), a post-install process was killed with exit code 137. Googling suggests this is an out-of-memory problem, but monitoring the process shows it is nowhere near the memory limit. However, the npm install process peaks at 940% CPU utilisation (across many spawned processes) before being killed, while the Docker container (according to the CircleCI docs) is limited to 4 CPUs. I believe this is because the process responsible for spawning these workers is working under the assumption that 36 CPUs are available.
Current Fix
I have a fix using cpulimit to limit the number of CPU cores available to npm install and its spawned processes to 4, as sketched below. I'd like to know if this is the best solution, though, because it seems like a bit of a hack.
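(For context, the cpulimit workaround looks roughly like the following; -l is a percentage where 100 corresponds to one core, and -i, where the installed cpulimit version supports it, applies the limit to spawned child processes as well.)

# cap npm install and, with -i, its children at the equivalent of 4 cores
cpulimit -l 400 -i npm install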
Other Info
I've discovered that npm spawns install and postinstall processes via Node's child_process.spawn method. My guess is that this is where the problem comes from, but this is where I lose the trail because I'm not great with C++. I'd be interested to know how Node.js spawns new processes and whether it is possible to limit them based on the number of cores actually available. If it's a kernel thing, is there a way to control that in Docker? Alternatively, if I'm on completely the wrong track, could someone point me in the right direction?
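On the question of detecting the real limit: inside the container, os.cpus() and nproc report the host's 36 CPUs, but the CPU quota that Docker applies is readable from the cgroup filesystem. A sketch along these lines (assuming cgroup v1 paths, which is what these executors have typically exposed) derives the effective core count and feeds it to the cpulimit workaround above:

# effective cores = cfs quota / cfs period (cgroup v1); a quota of -1 means "no limit"
quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)
period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)
if [ "$quota" -gt 0 ]; then
  cores=$(( quota / period ))
else
  cores=$(nproc)   # no quota set: fall back to the visible CPU count
fi
echo "effective cores: $cores"
cpulimit -l $(( cores * 100 )) -i npm install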

DigitalOcean Server CPU 100% without app running

The htop command shows the CPU at 100% even though I do not have the app running or anything else. The DigitalOcean dashboard metric shows the same thing (100% usage).
The top tasks in the htop list each take less than 10% CPU. The biggest is pm2, at roughly 5.2%.
Is it possible that there are hidden tasks that are not displayed in the list, and, in general, how can I start investigating what's going on?
My droplet used this one-click installation:
https://marketplace.digitalocean.com/apps/nodejs
Thanks in advance!
Update 1)
The droplet has plenty of free disk space.
I ran pm2 save --force to sync running processes and the CPU went back to normal.
I guess an app was stuck or something and was eating all the CPU.
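For the general question of how to start investigating when the per-process numbers don't add up to what the dashboard shows, a few generic starting points (nothing droplet-specific assumed here):

# sort by CPU and compare against the summary line; high %st (steal) or %sy (kernel)
# in the header points away from your own apps
top -o %CPU
ps -eo pcpu,pid,user,args --sort=-pcpu | head -15

# what pm2 itself thinks is running, plus its live per-process view
pm2 list
pm2 monit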

Increasing kernel CPU usage on web server running node on Ubuntu 20.04

I have a problem with increasing kernel CPU usage on a web server I am running. On a 6-core CPU, the kernel usage climbs from 5% to about 50% over roughly 8 hours.
I have noticed it takes less time when there are more active users on the site, and I don't see the problem in dev, so I don't have any code that reproduces it. I am hoping for some advice on how to troubleshoot this: what should I investigate to figure out what the problem is?
"pm2 restart" brings the CPU usage back down, so this is what I need to do every 8 hours or so. I have also noticed the CPU usage of systemd-resolved climbing to around 50% over 8 hours, but restarting it with "systemctl restart systemd-resolved" does not help.
I am running Ubuntu 20.04, Node v12.19.0, Next 9.5.3, Express, express-session, express-socket.io.session, MongoDB, etc. I had this problem on older versions of all of these as well.
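Since the growth is in kernel (system) time, a reasonable first step that doesn't require reproducing the problem in dev is to see which process is accumulating %system and what the kernel is busy with; pidstat comes from the sysstat package and perf from linux-tools:

# per-process user vs kernel CPU split, sampled every 5 seconds
pidstat -u 5

# live view of kernel-side hot spots (run while the usage is high)
sudo perf top

# rising socket or file-descriptor counts often accompany rising %sys,
# and the systemd-resolved growth suggests looking at DNS traffic too
ss -s
resolvectl statistics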

Memory leak in express.js api application

I am running an express.js application that serves as a REST API. One endpoint starts Puppeteer and tests my website with several procedures.
After starting the application and hitting the endpoint continuously, my Docker container runs out of memory roughly every hour.
At first I thought I had a memory leak in Puppeteer / headless Chrome, but when I monitored the memory usage of the individual processes, no leak was visible:
0.00 Mb COMMAND
384.67 Mb /var/express/node_modules/puppeteer/.local
157.41 Mb node /var/express/bin/www
101.76 Mb node /usr/local/bin/pm2
4.34 Mb /var/express/node_modules/puppeteer/.local
1.06 Mb ps
0.65 Mb bash
0.65 Mb bash
0.31 Mb cut
0.31 Mb cut
0.13 Mb dumb
Now I have run out of ideas about what the problem could be. Does anyone have an idea where the RAM consumption could be coming from?
Analyse the problem more
You need to monitor the activity in real time.
We do not have the code, so we cannot know exactly what is going on. However, you can use more advanced tools like htop, gtop or netdata rather than top or ps.
The pm2 logs may also tell you more. In a situation like this, the logs will have more data than the process manager view. Make sure to thoroughly investigate the logs to see whether your scripts are responsible and whether they are throwing errors:
pm2 logs
Each API call will cost you
Calculate the cost early and prepare accordingly:
If you make one call, be prepared for it to use anywhere from 100 MB to 1 GB of memory or more each time. It costs you just like a browser tab, and the cost remains as long as the tab is open.
If the target website is heavy, it will cost more. Some websites, like YouTube, will obviously cost you more.
Any script running inside the browser tab also costs CPU and memory.
Say each process uses 300 MB of RAM. If you don't close the processes properly and keep making API calls, just 10 calls can easily consume 3 GB of RAM. It adds up quickly.
Make sure to close the tabs
Whether the automation task succeeds or not, make sure you call browser.close() so that the resources it uses are freed. Most of the time we forget such small things and it costs us.
Apply dumb-init on Docker to avoid ghost processes
Something like dumb-init or tini can be used if you have a process that spawns child processes and you don't have good signal handlers implemented to catch child signals and stop those children when your process is stopped.
Read more on this SO answer.
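If changing the image is inconvenient, Docker can also inject a small init (tini) as PID 1 for you, which reaps orphaned Chrome processes left behind by Puppeteer; a minimal example, with the image name as a placeholder:

# --init runs tini as PID 1 so zombie child processes get reaped
docker run --init --rm -p 3000:3000 my-express-api:latest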
I've got the problem solved. It was caused by the underlying Kubernetes setup, which had no resource limit configured on that specific container, so the container could consume as much memory as it wanted.
Now I've limited it to 2 GB and the memory consumption stays within that bound.
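For anyone hitting the same thing, the fix boils down to putting a memory limit on the container; a sketch using kubectl, with the deployment and container names as placeholders:

# cap the container at 2Gi so the kubelet restarts it instead of starving the node
kubectl set resources deployment express-api \
  --containers=express-api \
  --limits=memory=2Gi --requests=memory=512Mi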

Memory Issues with Gitlab 10 CE on Ubuntu 16.04

Tried posting on GitLab's forum and had no luck, so I thought I'd try here.
We've been using GitLab 10 CE for a few months now. We are a pretty small shop with only 5 developers, so our instance of GitLab is busy but not crazy by any stretch of the imagination, yet we are constantly running into memory problems. It is a virtual machine running on Ubuntu 16.04. I initially began with the recommended 1 core and 4GB of memory, and we were constantly being alerted about memory and CPU issues. I upped the specs to 2 cores and 8GB of memory. Same issue. I've now pushed the box to 8 cores and 32GB of memory and I am still constantly being alerted about memory issues (although the CPU has died down quite a bit). As of the time of this message, we've received 20 memory alerts in the last 5 hours. These alerts even come in through the night hours when no one is touching the system.
When I run htop, there are 28 processes called sidekiq 5.0.4 gitlab-rails [0 of 25 busy] that each claim to be using 2% of our overall memory. That is over 16GB worth! Under that there's a whole host of unicorn workers using 1.8% of our overall memory each.
We're pretty new to GitLab, so there could easily be something I'm just missing. Any advice on how to throttle the number of processes for each of these, or to throttle GitLab's overall memory consumption, would be awesome. Thanks!
I'd bet you are seeing threads, not processes, in htop. Press Shift-H to view processes; those threads are all sharing that same 2% of memory.
Make sure you are keeping up to date with GitLab versions; they fix bugs and optimize their code all the time.
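If memory is still tight after that, the Omnibus package lets you scale the worker counts down in /etc/gitlab/gitlab.rb, roughly along these lines (key names as used by Omnibus GitLab of that era, so double-check against your version's template):

# edit the Omnibus config, then apply it
sudo editor /etc/gitlab/gitlab.rb
#   unicorn['worker_processes'] = 3    # fewer Rails workers
#   sidekiq['concurrency'] = 10        # fewer background-job threads
sudo gitlab-ctl reconfigure

# sanity check afterwards: per-process resident memory, without double-counting threads
ps -eo rss,args --sort=-rss | grep -E 'sidekiq|unicorn' | head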
