Script to kill too-long-running processes - Linux

I run a web hosting server. I don't know why, but some PHP scripts (from customers I know personally) currently keep running for many hours, so I think there is a bug somewhere.
These scripts are eating the RAM and the swap, so I'm looking for a way to list the processes with their execution times and kill them one by one if the execution exceeds 10 or 20 minutes.
I'm not a bash master, but I know bash and pipes. The only thing I don't know is how to list the processes with both the execution time and the complete command line with arguments. Even in top (after pressing c) the php processes show no arguments :/
Thanks for your help.

If you are running Apache with mod_php, you will not see a separate PHP process since the script is actually running inside an Apache process. If you are running as FastCGI, you also might not see a distinguishable PHP process for the actual script execution, though I have no experience with PHP/FastCGI and might be wrong on this.
You can set the max_execution_time option, but it is overridable at run time by calling set_time_limit() unless you run in Safe Mode. Safe mode, however, has been deprecated in PHP 5.3 and removed in 5.4, so you cannot rely on it if you are on 5.4 or plan to upgrade.
If you can manage it with your existing customers (in some cases it requires non-trivial changes to PHP code), running PHP as CGI should allow you to monitor the actual script execution: each CGI request spawns a separate PHP interpreter process, so you should be able to distinguish between the scripts they are executing. Note, however, that CGI is the least efficient of the three setups (the others being mod_php and FastCGI).
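In CGI mode you could then inspect the PHP processes directly; a minimal sketch (assuming the interpreter binary is called php-cgi, which varies by system):

```shell
# If PHP runs as CGI, each request shows up as its own process.
# "php-cgi" is the typical binary name; it may differ on your system.
ps -C php-cgi -o pid,user,etime,args 2>/dev/null ||
    echo "no php-cgi processes found"
```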

You can use the ps aux command (BSD-style syntax, without the dash) to list the processes with some detailed information.
You can also check out the ps man page.
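To tie this back to the original question, here is a sketch of a function that lists every process with its elapsed time in seconds and full command line, and kills those matching a pattern that have run longer than a limit. kill_older_than is a hypothetical helper name, and the etimes output specifier assumes a reasonably recent procps ps; try it with echo in place of kill first.

```shell
kill_older_than() {
    pattern=$1   # substring to look for in the full command line
    limit=$2     # maximum allowed elapsed time, in seconds
    for pid in $(pgrep -f "$pattern"); do
        # elapsed time in seconds for this PID (skip if it already exited)
        etimes=$(ps -o etimes= -p "$pid") || continue
        if [ "$etimes" -gt "$limit" ]; then
            echo "killing $pid after ${etimes}s: $(ps -o args= -p "$pid")"
            kill "$pid"
        fi
    done
}
```

For example, to kill PHP processes that have been running for more than 20 minutes: `kill_older_than php 1200`.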

Related

I observed a Java process running at root level through the top command on my application server. Will it lead to performance problems?

We were running a load test and simultaneously executed the top command, and observed that a Java process (running at root level) was consuming 204% CPU, even though we ran just 10% of the expected load on the server.
Also, one of my colleagues said that a Java process should not be run at root level, as this leads to performance issues.
I tried searching the internet but could not find anything which says that Java process should not run at root level.
Note for experts: please excuse my lack of knowledge, and please do not downvote or block the question.
Screen shot of top command:
That's incorrect -- running a process as root will not affect performance, but will likely affect security.
The reason why everyone says not to run your processes as root unless ABSOLUTELY NECESSARY is because the root user has privileges over the entire disk, and many other things: external devices, hardware, processes, etc.
Running code that interacts with the world as root means that if anyone can find a vulnerability in your code / project / process / whatever, the amount of damage / harm that can be done is likely WAY MORE than what could be possible by a non-root user.
Try running the below command to find all the processes in Tree Structure.
ps -e -o pid,args --forest
From the output, you will be able to figure out whose children those Java (or other) processes running at root level are. For example, sometimes while testing scripts we trigger them ourselves with sudo, which in turn starts the Java instance as root.
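If the tree is too large to read, you can also walk up from a single PID to its parent; a small sketch (parent_of is a hypothetical helper name):

```shell
# Given a PID from top, show what launched it.
parent_of() {
    # look up the parent PID, stripping the padding ps adds
    ppid=$(ps -o ppid= -p "$1" | tr -d ' ') || return 1
    ps -o pid,user,args -p "$ppid"
}
```

For example, `parent_of 1234` prints the owner and command line of the process that started PID 1234.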

Kill a certain httpd job

We have a CentOS server that runs our PHP scripts.
Sometimes when we start a script from a browser and the browser is then closed, the job keeps running on the server.
Is there a way to kill that particular job?
On the server I can see a bunch of /usr/sbin/httpd jobs running, but how do I know which one was started from the browser, so I can make sure I'm not killing some other job?
It would be useful if you provided details of the particular jobs that are being started by the users.
It's difficult to know which process the script is running in; it would probably be more effective to set max_execution_time in your php.ini file to something suitable.
If you are getting zombie processes, you could try something like the solution to this other question on SO:
bash script to kill php processes older than an hour
There are other options available depending on what the scripts are doing, but it's difficult to say more without knowing that.
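As a minimal illustration of the max_execution_time approach (the value is an example; note that a script can still raise its own limit by calling set_time_limit() unless that is restricted):

```ini
; php.ini -- abort any script that runs longer than 10 minutes
max_execution_time = 600
```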

Automating services with Linux OS starting up and shutting down

I have a script to start and stop my services. My server is based on Linux. How do I automate the process such that when OS is shutdown the stop script runs and when it is starting up, the start script runs?
You should install an init script for your program. The standard way is to follow the Linux Standard Base, section 20, subsections 2-8.
The idea is to create a script that will start your application when called with the argument start, stop it when called with stop, restart it when called with restart, and make it reload its configuration when called with reload. This script should be installed in /etc/init.d and linked into the various /etc/rc*.d directories. The standard describes a comment block to put at the beginning of the script and a utility to handle the installation.
Please refer to the documentation; it is too complicated to explain everything in sufficient detail here.
That approach should be supported by all Linux distributions. However, the Linux community has been searching for a better init system, and there are two newer, improved systems in use:
systemd is what most of the world seems to be moving to
upstart is a solution Ubuntu created and has stuck with so far
They provide some better options, like the ability to restart your application when it fails, but your script will then be specific to the chosen system.
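A minimal sketch of the dispatch logic such an /etc/init.d script contains (the service name and commands are placeholders; a real script would also carry the LSB "### BEGIN INIT INFO" comment header and invoke your actual daemon):

```shell
# myservice_ctl: placeholder start/stop dispatch for an init script.
myservice_ctl() {
    case "$1" in
        start)
            echo "starting myservice"
            # e.g. start-stop-daemon --start --exec /usr/local/bin/myservice
            ;;
        stop)
            echo "stopping myservice"
            # e.g. start-stop-daemon --stop --exec /usr/local/bin/myservice
            ;;
        restart)
            myservice_ctl stop
            myservice_ctl start
            ;;
        *)
            echo "Usage: myservice {start|stop|restart}" >&2
            return 1
            ;;
    esac
}
```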

How to execute a script at the start and end of every application in Linux?

I am trying to log the applications that a user opens/closes in Linux (any distro). Is there a way to execute a script (Java, Python, etc.) every time an application (like firefox) is opened and closed?
As a general feature -- no.
Executing a script to log an execution would itself be a program execution that would require logging, so you would have a recursion problem.
However, if you want to log specific programs, you can implement a shell script that replaces the executable for each of those programs (firefox, python, etc.), and within that shell script log the execution before calling the actual program.
However
The user would still be able to call the original program without the logging if they know the path.
The new scripts would be a security issue (making the system less secure) and hence would not be recommended.
So in short, a bad idea.
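For what it's worth, a sketch of that wrapper idea, assuming the real binary has been moved aside first (the names, log path, and the ".real" convention are all illustrative):

```shell
# log_and_run: append a timestamped record, then hand over to the real
# program. A wrapper installed as e.g. /usr/bin/firefox (after moving the
# original to /usr/bin/firefox.real) would just call this function.
log_and_run() {
    prog=$1; shift
    printf '%s %s launched %s\n' "$(date '+%F %T')" "${USER:-unknown}" "$prog" \
        >> "${LOGFILE:-/tmp/app-launch.log}"
    exec "$prog" "$@"   # replace this shell with the real program
}

# the wrapper script itself would then be:
#   #!/bin/sh
#   log_and_run /usr/bin/firefox.real "$@"
```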
This is not a programming question, but still.
You can actually do this. See https://superuser.com/questions/222912/how-can-i-log-all-process-launches-in-linux
There are many answers. For example, you could use auditd.
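For instance, with auditd a rule like the following (the filename and key name are illustrative) records every execve() system call; load it with auditctl -R or augenrules:

```
# /etc/audit/rules.d/exec-log.rules -- record every program execution
-a always,exit -F arch=b64 -S execve -k exec-log
```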

How do I create an application that runs in the background and is interactive in Linux?

I want to create an application that runs in the background in Linux (a daemon) that will, at set times (5 times a day), play a music file or any given sound. I want this daemon to start when the computer is started in terminal mode (non-GUI). I want to know if this is possible and, if so, what considerations, tools, and programming languages would be the most efficient for doing it. This will be a dedicated computer that will only be executing this task, so any recommendations on how I can maximize efficiency while disabling features that are not required for this task will be appreciated.
Also, could you please explain how processes and tasks work in the terminal (non-GUI)? I always thought the terminal was something like CMD in Windows and could only run tasks one at a time.
EDIT: I need the sound to run at variable times, I'll be fetching these times from a website. Any suggestions regarding how to achieve this?
Thanks for the help and sorry for any shortcoming in the questions or my research.
Look at using cron to run your tasks. cron is a very flexible scheduling utility built in to most Linux distributions.
Basically, with cron you specify a task to run (your main program, or maybe just a sound-playing program), all of its arguments, and when it runs. cron takes care of running it, and will even send you "mail" if the job produces any output (such as errors).
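For instance, a crontab like the following (edit with crontab -e; aplay and the file path are assumptions, any command-line player works) plays a sound at five fixed times a day:

```
# m  h              dom mon dow  command
0    8,12,15,18,21  *   *   *    aplay /home/user/chime.wav
```

Since you need variable times fetched from a website, you could instead have cron run a small script every few minutes that fetches the schedule and plays the sound only when the current time matches.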
You can run more than one task at a time by putting an & after a command, which runs it in the background as a child process of your shell:
> cmd &
> [you can type other commands here while "cmd" is still running]
However, for services you generally don't have to worry about starting them as background processes, because the system already knows to do this. Here's a good question from Super User that has an example of a working service. Simply place your service as a shell script in /etc/init.d and it will be started as a service automatically.
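If you just want a command to survive closing the terminal (rather than installing a full service), nohup is the usual tool; a sketch, where 'sleep 60' stands in for your real program:

```shell
# Start a long-running command in the background, immune to hangups;
# its output goes to the given log file instead of the terminal.
nohup sh -c 'sleep 60' >/tmp/play.log 2>&1 &
BGPID=$!
echo "started as PID $BGPID"
```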
