Problem:
I am running 3 Java processes on a server with 32 GB of RAM. My SSH sessions frequently get closed because of network issues, so I started the script with
nohup bash script.sh >log-file 2>&1 &
Now the processes run under nohup and in the background. Still, after 2-3 hours of processing, my Java process stops writing to the log file. I checked /proc/<pid>/status; it shows the process is sleeping, which should not be happening in my case. And when I use top, my process does not show up in the list at all.
My question is: how can I find out what the process is waiting on?
When I check free memory with top, it shows that 30 GB of the 32 GB is in use and only 2 GB is free, so the process seems to be alive and holding memory, just not running.
By the way, the server mounts my home and data directories from an NFS server and we use Kerberos for authentication; could that be the problem? I am using krenew to keep renewing the Kerberos ticket before it expires.
Perhaps you should set up the 3 Java processes to run as daemons instead of using nohup.
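On a systemd-based distro, a unit file is the usual way to do that; here is a minimal sketch (the unit name and script path are illustrative, not taken from your setup):

[Unit]
Description=Long-running Java job
After=network.target

[Service]
ExecStart=/bin/bash /path/to/script.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

Save it as, say, /etc/systemd/system/myjob.service, then run systemctl daemon-reload and systemctl enable --now myjob.service. The process then survives SSH disconnects without nohup, and its stdout/stderr land in the journal (journalctl -u myjob).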
Related
I have deployed CentOS 7 with GNOME for a use case that requires multiple users to log into the desktop environment.
I have XRDP set up to time out after 3 days of inactivity and end the session. This mostly works, but I often see lingering sessions when I run loginctl. A quick ps shows that the only leftover processes I can tie back to those sessions (based on dates) are ibus-daemon processes. This makes it hard to track the number of active sessions or enforce limits with these ghost processes hanging around.
Nov03 ? 00:00:00 ibus-daemon --xim --panel disable
I have read there is a way to pass the daemon an argument to enable a timeout, but I have been unable to figure out what is spinning up ibus-daemon (its parent process is 1) or where the startup string lives. There is no systemd or init file for it, so some other process must be calling it.
Any suggestions on where to track down the startup config, or how to mitigate this, would be appreciated!
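Not a definitive answer, but one way to start hunting: grep the usual X session startup locations for the invocation, and ask systemd which session/cgroup owns a running instance. A sketch, assuming a stock CentOS 7 GNOME layout (paths may differ on your machine):

# look for whatever references ibus-daemon in the common startup locations
grep -r "ibus-daemon" /etc/xdg/autostart /etc/X11/xinit /usr/share/gnome-session 2>/dev/null
# for a running instance, show which unit/session systemd attributes it to
systemctl status $(pgrep -o ibus-daemon)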
I've been struggling to run multiple instances of Puppeteer on DigitalOcean for quite some time with little luck. I'm able to run ~5 instances concurrently using tools like puppeteer-cluster, but for some reason the whole thing just chokes with little helpful messaging. So I switched to spawning ~5 child processes without any additional library, just Puppeteer itself. Same issue: it chokes with no helpful errors.
I'm able to run all of these jobs just fine locally, but after I deploy, I hit these walls. So my hunch is that it's a resource/performance issue, but I can't say for sure.
I'm running a droplet with 1 GB of RAM and 3 vCPUs on DigitalOcean.
Basically, I'm just looking for ways to start troubleshooting something like this. Is there a way I can know for sure that I'm hitting resource walls? I've tried pm2 and the DigitalOcean dashboard graphs, but I feel like those leave a lot of information out, or else I'm missing something else altogether.
Author of puppeteer-cluster here. You are right: 1 GB of memory is likely not enough for running 5 browser windows (or tabs) in addition to your operating system and maybe even other background tasks.
Here is a list of resources you should check:
Memory: Use a tool like htop to check your memory usage while your application is running.
CPU: Again, you can use htop for that; 3 vCPUs should be more than enough for 5 windows.
Disk space: Use a tool like df to check if there is enough space on the disk. I know of multiple cases in which there was not enough space (for example, old kernels filling the disk), and Chrome needs at least some space to run.
Network throughput: Rarely the problem, but sometimes the network just does not have the bandwidth to support many open browsers. Use a tool like nload to check the network throughput.
To use htop or nload, start your script in the background (node script.js &) or use a terminal multiplexer (like tmux). Resource problems should then be easy to spot.
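A minimal sketch of that workflow (the script name is illustrative):

# start the app in the background, logging to a file
node script.js > app.log 2>&1 &
# then inspect resources while it runs
htop     # memory and CPU per process
df -h    # free disk space
nload    # network throughput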
Most probably you're running out of memory; 5 Puppeteer processes are a lot for a 1 GB VM.
You can run
grep -i 'killed process' /var/log/messages
to confirm that the OOM killer terminated your processes.
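On distributions that do not have /var/log/messages (Ubuntu logs to /var/log/syslog, for example), the kernel log gives the same information:

dmesg -T | grep -i 'killed process'
journalctl -k | grep -i 'killed process'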
I have a long-running Apache web server handling lots of requests. After some time I find that the Apache server has stopped, with a
Killed
line at the end of its output.
What can I do to solve this problem or prevent the system from killing the Apache instance?
Linux usually kills processes when resources like memory are running low. You might want to have a look at the memory consumption of your Apache process over time.
You might find some more details here:
https://unix.stackexchange.com/questions/136291/will-linux-start-killing-my-processes-without-asking-me-if-memory-gets-short
You can also monitor your processes using the M/Monit software; have a look here: https://serverfault.com/questions/402834/kill-processes-if-high-load-average
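If the OOM killer does turn out to be the culprit, you can also tell the kernel to prefer other victims; a sketch, run as root (the process name may be httpd or apache2 depending on the distro, and -1000 exempts the process entirely):

echo -1000 > /proc/$(pgrep -o apache2)/oom_score_adj

Note that this is per-PID, so it resets whenever Apache restarts, and exempting Apache just means something else gets killed instead.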
There is also a utility, top, which shows per-process resource consumption (e.g., memory, CPU, user); you can use it to keep an eye on the Apache process.
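A crude but effective way to watch that consumption over time is a sampling loop; a sketch (again, the process name may be httpd or apache2):

# append the RSS (memory, in KB) of every apache child to a log each minute
while true; do
  date >> apache-mem.log
  ps -o pid,rss,cmd -C apache2 >> apache-mem.log
  sleep 60
done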
We have an Ubuntu server running Apache, PHP, and MySQL. Slowly over time, the number of Apache processes creeps up (seen with the ps -aef and top commands). Last night it got so bad that the server was unusably slow. I have no idea how all the processes were started, since we don't get that much traffic. There are many cron jobs running, but never more than 5-10 at a time. When I first start Apache, I have the usual 10 processes; over a few hours it doubles, and this morning when I got in there were 100. I didn't run top while the 100 processes were up, but currently each one uses about 10M-40M.
I have read about the prefork and worker MPMs, and I am wondering whether changing some settings may help. We are currently using prefork with default values. Do I decrease MaxClients to kill the extra processes? Do I set a number for MaxRequestsPerChild so they are recycled sooner?
Or something completely different?
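For reference, the prefork directives in question live in the MPM config and look roughly like this; the values below are purely illustrative (Apache 2.2 directive names, as used above), and the right MaxClients depends on per-child memory versus available RAM:

<IfModule mpm_prefork_module>
    StartServers         5
    MinSpareServers      5
    MaxSpareServers     10
    # hard cap on concurrent children
    MaxClients          60
    # recycle each child after this many requests to limit memory growth
    MaxRequestsPerChild 500
</IfModule>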
We have a test server with 3 different Node.js apps running on it. Each application uses the same MongoDB database, a test instance of which also runs on the same server. So at any given moment we have at most 3 open connections to the MongoDB server.
The issue is that after each code deployment (which is basically: kill the currently running process, update the code, start a new process) I see a new process (actually a thread of a single process) on the server, shown in htop as /usr/bin/mongod --config /etc/mongodb.conf. Once in a while we have to restart the test server because too many of these unused threads accumulate and the mongod process takes all the RAM.
I am not sure why this is happening and I am looking for a way to fix it.
My assumption is that if we simply kill the Node.js process, the connection (and therefore the thread tied to that connection) somehow stays alive, so instead of killing the Node.js process we should shut it down gracefully, closing the DB connection first.
htop is simply showing individual threads; your mongod is not started multiple times, which would not even be possible with the same config because the port would already be in use.
Use top or ps aux | grep mongod and you should see just one process.
You can also configure htop not to show those: press F2 > Display options > Hide userland threads.
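You can see the same thing from the command line by comparing the process view with the thread view:

# one line: the single mongod process
ps aux | grep '[m]ongod'
# many lines: its threads, which is what htop shows by default
ps -eLf | grep '[m]ongod'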