We have an Ubuntu server running Apache, PHP, and MySQL. Over time, the number of Apache processes slowly increases (seen with the ps -aef and top commands). Last night it got so bad that the server was unusably slow. I have no idea how all the processes were started, since we don't get that much traffic. There are many cron jobs, but never more than 5-10 running at a time. When I first start Apache I have the usual 10 processes; over a few hours that doubles, but this morning when I got in there were 100. I didn't run top while the 100 processes were there, but currently each one uses about 10M-40M of memory.
I read about the prefork and worker MPMs, and I'm wondering whether changing some settings might help. We're currently using prefork with the default values. Do I decrease MaxClients to kill the extra processes? Do I set a number for MaxRequestsPerChild so they're killed sooner?
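For reference, this is the sort of prefork tuning I'm considering; the values below are guesses based on what I've read, not tested recommendations:

<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    # cap the number of children so they can't exhaust RAM
    MaxClients           50
    # recycle children periodically to contain slow memory growth
    MaxRequestsPerChild 500
</IfModule>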
Or something completely different?
I have deployed CentOS 7 with GNOME for a use case that requires multiple users to log into the desktop environment.
I have XRDP set up to time out after 3 days of inactivity and end the session. This mostly works, but I often see lingering sessions when I run loginctl. A quick ps shows that the only processes left behind that I can tie back to those sessions (based on dates) are ibus-daemon processes. This makes it hard to track the number of active sessions or enforce limits, with these ghost processes hanging around.
Nov03 ? 00:00:00 ibus-daemon --xim --panel disable
I have read there is a way to add an argument to the daemon to enable a timeout, but I have been unable to figure out what is spinning up the ibus-daemon (its parent process is PID 1) or where the startup string is. There is no systemd unit or init file, so some other process must be calling these.
Any suggestions on where I can track down the startup config, or how to mitigate this, would be appreciated!
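Is something like this the right way to trace where they come from? (<pid> is a placeholder, and the autostart paths are only a guess on my part.)

# confirm the parent and the cgroup/session the daemon belongs to
ps -o pid,ppid,lstart,cmd -p <pid>
cat /proc/<pid>/cgroup
# see which sessions loginctl still thinks are active
loginctl list-sessions
loginctl session-status <session-id>
# look for a desktop autostart entry that launches it
grep -ril ibus /etc/xdg/autostart/ ~/.config/autostart/ 2>/dev/null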
I have a long-running Apache web server handling lots of requests. After some time I find that Apache has stopped, with a "Killed" line at the end of its output.
What can I do to solve this problem, or prevent the system from killing the Apache instance?
Linux kills processes when resources, usually memory, run low; that "Killed" line is typically the kernel's OOM (out-of-memory) killer. You might want to look at the memory consumption of your Apache processes over time.
You might find some more details here:
https://unix.stackexchange.com/questions/136291/will-linux-start-killing-my-processes-without-asking-me-if-memory-gets-short
You can also monitor your processes with M/Monit; have a look here: https://serverfault.com/questions/402834/kill-processes-if-high-load-average
There is also the top utility, which shows per-process resource consumption (e.g. memory, CPU, user); you can use it to keep an eye on the Apache processes.
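If you want to confirm that it was the OOM killer that took Apache down, the kernel log usually records it. Something along these lines should show it (exact wording varies by distribution and kernel version):

# look for OOM killer activity in the kernel log
dmesg -T | grep -iE 'killed process|out of memory'
# or, on systemd-based systems
journalctl -k | grep -iE 'killed process|out of memory'
# a quick view of the biggest memory consumers right now
ps aux --sort=-%mem | head -n 15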
We use clustering with our Express apps on multi-CPU boxes. It works well; we get maximum use out of our AWS Linux servers.
We inherited an app that we are fixing up. It's unusual in that it has two processes. It has an Express API portion to take incoming requests, but the work that acts on those requests can run for several minutes, so it was built as a separate background process: Node calling Python and Maya.
Originally the two were tightly coupled, with the Python script called directly by the request that uploads the data. That was of course suboptimal, as it left the client waiting for a response for however long the job took to run, so it was rewritten as a background process that runs in a loop, checking for new uploads and processing them sequentially.
So my question is this: if we have this separate Node process running in the background, and we use cluster, which starts up a process for each CPU, how is that going to work? Aren't we going to get two Node processes competing for the same CPU? We were getting a bit of weird behaviour and crashing yesterday, without a lot of error messages (god I love Node), so it's a bit concerning. I'm assuming Linux will just swap the processes in and out as they are needed, but I wonder if that will be problematic, and I also wonder about someone's web request getting starved for several minutes while the long-running process runs.
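One mitigation I've been considering is lowering the background worker's priority, or pinning it to a single core, so it can't starve the web-facing cluster; roughly this (the script name is just a placeholder):

# run the worker at a lower CPU priority than the Express cluster
nice -n 10 node background-worker.js &
# or pin it to one core and leave the rest to the cluster
taskset -c 3 node background-worker.js &

I don't know yet whether either is actually necessary, which is partly what I'm asking.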
The smart thing to do would be to rewrite this to run on two different servers, but the files that Maya uses and creates are on this server's file system, and we weren't given the budget to rebuild it the way we should. So we're stuck with this architecture for now.
Any thoughts on possible problems and how to avoid them would be appreciated.
From an overall architecture perspective, spawning one Node.js process per core is a great way to go. You have a lot of interdependencies, though: the Node.js processes are calling Maya, which may use multiple threads (keep that in mind).
The part that concerns me is your random crashes and your "process that runs in a loop". If that process is just polling the file system, you probably have a race condition where the Node.js processes are competing to work on the same input/output files.
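If the worker loop is just scanning a directory for new files, a simple lock can at least stop two instances from grabbing the same upload. A minimal sketch, assuming the loop is launched from a shell wrapper (names and paths are placeholders):

# -n makes a second instance exit immediately instead of queueing,
# so only one copy of the worker can ever run at a time
flock -n /var/lock/upload-worker.lock node worker.js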
In theory, one Node.js process per core will work great and should help you use all of your CPU capacity. Linux swaps processes in and out all the time, so that is not an issue; you could even start multiple Node.js processes per core and still not have a problem.
One last note: be sure to keep an eye on your memory usage. Several Linux distributions on EC2 do not have a swap file enabled by default, and running out of memory can be another silent app killer, so it's best to add a swap file in case you run into memory pressure.
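Adding a swap file only takes a few commands; a sketch, assuming a 2 GB file at /swapfile (size and path are up to you):

# create and enable a 2 GB swap file (use dd if your filesystem
# does not support fallocate-backed swap)
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make it persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab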
Problem:
I am running 3 Java processes on a server with 32 GB of RAM. I frequently have my SSH sessions closed owing to network issues, so I launch each process with
nohup bash script.sh >log-file 2>&1 &
So the processes are running under nohup and are also put in the background. Still, after 2-3 hours of processing, my Java process stops writing to its log file. I checked /proc/<pid>/status; it shows that the process is sleeping, but that should not be happening in my case. When I use top, the process does not show up in the list of top processes.
My question is: how can I find out why the process is waiting?
When I check free memory with top, it shows that out of 32 GB, 30 GB is in use and only 2 GB is free. This suggests my processes are alive and occupying memory, but not running.
By the way, the server mounts my home and data directories from an NFS server, and we use Kerberos for authentication. Could this be the problem? I am running krenew to renew the expiring Kerberos ticket.
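Is something like this the right way to see what the process is actually blocked on? (<pid> is a placeholder; jstack ships with the JDK.)

# which kernel function is the process sleeping in?
ps -o pid,stat,wchan:32,cmd -p <pid>
# kernel stack of the blocked task (needs root)
sudo cat /proc/<pid>/stack
# Java-level thread dump, to see if threads are stuck on I/O
jstack <pid>
# is the NFS client itself healthy?
nfsstat -c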
Perhaps you should set up the 3 Java processes to run as daemons rather than relying on nohup.
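If the server runs systemd, a minimal service unit is one way to do that; a sketch, with the user, paths and names as placeholders:

[Unit]
Description=Long-running Java job
After=network.target remote-fs.target

[Service]
User=youruser
ExecStart=/bin/bash /path/to/script.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

Drop that into /etc/systemd/system/, then systemctl enable and systemctl start it; the process no longer depends on your SSH session at all, and systemd can restart it if it dies. Whether the Kerberos/NFS credentials survive that way is something you would have to test.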
I have a server running virtual hosts that get changed quite often. Rather than someone actually going to the server and typing in the Apache restart command, I was thinking of making a cron job (every 1, 5 or 10 minutes, maybe only during working hours, when changes to the virtual hosts are actually made) to restart Apache gracefully.
sudo apachectl graceful
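The crontab entry I have in mind (in root's crontab) would be something like this, every 10 minutes on weekdays during working hours; the apachectl path may differ on your distribution:

*/10 9-18 * * 1-5 /usr/sbin/apachectl graceful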
I found an explanation here on Stack Overflow that goes like this:
Graceful does not wait for active connections to die before doing the restart. It is the same as sending a USR1 (graceful restart) signal to the master process. Apache keeps children (processes) with active connections alive, whilst bringing up new children with the new configuration (or nicely cleared caches) for each new connection. As the old connections die off, those child processes are killed as well to make way for the new ones.
Would this mean there would be little to no impact on visitors' experience (e.g. long wait times), or should I just stick with manually restarting Apache?
Thanks!
Sorry, but I don't consider that a good idea.
If you're planning on restarting Apache every X minutes, even though it may not need it, I see plenty of downside there but no upside.
If you're just checking and restarting when needed, for example with a process that can detect when a change has been made, that might be okay.
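If you do go the check-and-restart route, the check can be as simple as hashing the vhost configs from cron and only reloading when the hash changes. A rough sketch, with the config path assumed (adjust for your layout):

#!/bin/sh
# reload Apache only when the vhost configs have actually changed
STATE=/var/tmp/vhost-configs.md5
NEW=$(find /etc/apache2/sites-enabled -type f -exec md5sum {} + | sort | md5sum)
OLD=$(cat "$STATE" 2>/dev/null)
if [ "$NEW" != "$OLD" ] && apachectl configtest; then
    apachectl graceful
    echo "$NEW" > "$STATE"
fi

The configtest guard means a broken vhost file never gets loaded; the script just keeps retrying (and complaining via cron mail) until someone fixes it.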
Personally, I wouldn't even do that, since I'd rather keep control over deployment changes. For example, you might want to get a whole lot of stuff installed during the working day, ready for a restart, but not actually activate it until a quiet time.
Of course, in a robust environment you'd be running multiple servers, so you could take them offline one at a time for changes without affecting anyone.