Cron killing spawned processes

I have a cron job set up that will start my script.
The intent of this script is to kill a process that is currently running and start up a new version of that process (CHECKDB). CHECKDB needs to be running all the time, so we have a start_checkdb script that is basically an infinite loop that runs CHECKDB; if CHECKDB crashes, the loop simply starts it again. [Yes, I realize that isn't the best practice, but that's not what this is about.]
My script is called by cron without issue, and it kills CHECKDB without issue. As far as I can tell, the child script that starts CHECKDB back up does get called, but every time I check ps after the cron job runs, the process is not running. If I run the script by hand on the command line, under any shell, it works with no problem: it kills CHECKDB and start_checkdb, then starts up start_checkdb, which starts up CHECKDB.
Yet for some reason, when cron does it, the process is never running afterwards. It kills the live one, and either doesn't start it, or it starts it and kills it.
Is it possible that when cron reaches the end of the parent process, it kills the child processes that were spawned?
I don't know if it makes a difference, but this is on Solaris 8.

You might look at using nohup inside your cron script when launching CHECKDB. Something like 'nohup command &' would be the usual way to launch something you want to live beyond the launching process.
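For example, a minimal sketch of what the restart portion of the cron script could look like (the paths and log file are placeholders; pkill is available on Solaris):
#!/bin/sh
# Kill the old copies first (names taken from the question)
pkill -x start_checkdb
pkill -x CHECKDB
# Relaunch the wrapper so it ignores SIGHUP and is not tied to cron's
# short-lived session; redirect output so nothing blocks on a closed pipe
nohup /path/to/start_checkdb > /var/tmp/start_checkdb.log 2>&1 &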

Could you clarify your description of the arrangement? It sounds like, under normal circumstances both start_checkdb and CHECKDB are running. The cron job is supposed to kill CHECKDB, and the already-running copy of start_checkdb is supposed to restart it? Or does the cron job kill both processes and then restart start_checkdb? After the cron job runs, which process is missing--CHECKDB, start_checkdb, or both?
Having said that, the most common reasons for a process to work from the command line but fail from cron are the following (a sketch addressing the first two appears after the list):
Dependency on the correct command PATH (or some other environment variable)
Dependency on being run from the correct directory
Dependency on being run from a tty.
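As a hedged sketch, the first two are often handled by setting the environment explicitly at the top of the script (the values below are only placeholders):
#!/bin/sh
# Give the script the same PATH it sees from your interactive shell,
# instead of cron's minimal default
PATH=/usr/bin:/usr/sbin:/usr/local/bin
export PATH
# Run from a known working directory rather than whatever cron uses
cd /path/to/checkdb || exit 1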

Related

When PHP exec function creates a process, where is it queued so that it can be removed programmatically or from command line?

I have a PHP script that runs the following code:
exec("ls $image_subdir | parallel -j8 tesseract $image_subdir/{} /Processed/OCR/{.} -l eng pdf",$output, $result_code);
The code runs, however, even after I terminate the PHP script and close the browser, it continues to create the pdf files (thousands). It has been 24 hrs and it is still running. When I run a ps command, it only shows the 8 current processes that were created.
How can I find where all the pending ones are queued and kill them? I believe I could simply restart Apache/PHP, but I would like to know where these pending processes are and how they can be shut down or controlled. Originally, the script seemed to wait a minute while executing the above line, then proceeded to the next line of code in the PHP script. So it appears that it queued the jobs somewhere and then moved on to the next line of code.
Is it perhaps something peculiar to the parallel command? Any information is very much appreciated. Thank you.
The jobs appear to have been produced by a perl process:
perl /usr/bin/parallel -j8 tesseract {...basically the code from the exec() function call in the php script}
perl was invoked either by the GNU parallel command or by PHP's exec function. In any event, htop would not allow the process to be killed and did not produce any error or status, so a permissions problem may have been preventing htop from killing it. Killing it was done with sudo on the command line instead, which ultimately killed the process and stopped any further process creation from the original PHP exec() call.
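For anyone hitting the same situation, something along these lines is one way to find and stop the queueing process; the patterns below are assumptions based on the command shown above:
# List the perl process that is feeding jobs to tesseract
pgrep -fl parallel
# Kill it, plus any tesseract workers it already started; sudo may be
# needed if the processes belong to the web-server user
sudo pkill -f 'parallel -j8 tesseract'
sudo pkill -f tesseract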

How do I end a looping script in crontab?

I'm working on a script that runs in a never-ending loop. Using cron, I start the script on reboot. However, I need to update this script from GitHub every 24 hours. To do that I run a shell script that basically does the following:
Backup cron to .txt file
Empty cron with crontab -r
Pull updates from GitHub
Load cron backup and start cron again.
The shell script empties cron, updates the code, then starts cron again from the same file, and cron runs the program again. I'm testing this by outputting a message to a text file every time the script completes one loop. When I change the message output in GitHub, cron pulls the update and I can see the updated message. The problem is, it continues to show the old message as well. For example:
Original Message "Test": Test Test Test Test Test Test
Updated Message "Update": Test Update Test Update Test Update
It continues to output old messages even though I cleared cron, updated the code, then started it again. It appears to me that simply emptying cron does not stop the previous loop from continuing to run.
I looked into using "killall" to stop all sh scripts from running, but in an attempt to clear out the many looping scripts I had created, I killed every running process with killall5 -9. Now when I enter ps to view running processes, none are listed.
I'm very stuck. Any and all help would be appreciated!
I used sudo pkill python to end all running Python scripts.
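A minimal sketch of that update cycle, assuming the looping script is a Python script called my_script.py (the name and paths are placeholders):
#!/bin/sh
crontab -l > /tmp/cron_backup.txt   # back up the current crontab
crontab -r                          # empty cron
sudo pkill -f my_script.py          # stop the copy that is already looping
cd /path/to/repo && git pull        # pull the updated code
crontab /tmp/cron_backup.txt        # reload the backup so cron runs the new code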

How do I terminate a command that runs infinitely in shell script?

I have a command in my shell script that runs forever; it won't finish unless I press Ctrl-C. I have been trying to look up how to send a Ctrl-C signal from a script, and all the answers involve some form of kill $! or kill $$ or similar. My problem is that the command never finishes, so the script never moves on to the next command, such as my kill commands or anything else. I have to manually hit Ctrl-C in my terminal for it to even execute kill $!. I'm sure there is a way to work around this, but I am not sure what it is. Thanks in advance!
There are several approaches to this problem. The simplest (though perhaps not the most robust) is to run your long-running command in the background:
#!/bin/sh
long-running-command & # run in the background
sleep 5 # sleep for a bit
kill %1 # send SIGTERM to the command if it's still running
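If the script runs under a shell where the %1 job spec is not usable outside interactive job control, the same idea works with the PID of the backgrounded command instead:
#!/bin/sh
long-running-command &    # run in the background
pid=$!                    # remember its PID
sleep 5                   # give it some time to do its work
kill "$pid" 2>/dev/null   # send SIGTERM if it is still running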

bash subshell vs vanilla command execution

As far as I know, when you run a command, like
> sleep 3
The shell process will fork another process and run the command in the child process.
However when you do
> (sleep 3)
you launch a subshell and execute the command there. Essentially, this also forks another process to execute the command and waits for the command to complete.
In this case the behavior of the two commands looks the same: the parent shell waits for the sleep command to complete.
However, I have sometimes noticed that things are different with a subshell:
For example, if I run some command like:
> virtualbox &
If I accidentally close the terminal, virtualbox will close at the same time. I have already lost ongoing work several times this way.
However, if I do it this way, the program won't be killed even if I exit the terminal:
> (virtualbox &)
So I am not sure what's going on under the hood. How are the tasks started and managed by the shell with these two different approaches?
As others write, using nohup will allow you to run the process without it being terminated when your shell is terminated. What happens in the two cases you describe is the following.
In the virtualbox & case virtualbox becomes a child of your shell. When your controlling terminal is closed all processes associated with it receive a SIGHUP signal, and are terminated.
In the (virtualbox &) case the command is executed within a subshell. When the subshell terminates, the command is disassociated from the shell and the terminal. (You can see this by running ps.) In this case the SIGHUP will not be sent to virtualbox, and therefore your command will not be terminated when the controlling terminal is closed.
The nohup command achieves the same result by specifying that the SIGHUP signal must be ignored.
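As a rough illustration of the options discussed here (bash syntax; disown is a bash built-in):
virtualbox &          # child of the shell: the shell forwards SIGHUP when the terminal closes
(virtualbox &)        # launched from a subshell that exits immediately: no SIGHUP is forwarded
nohup virtualbox &    # child of the shell, but started with SIGHUP ignored
virtualbox & disown   # removed from the shell's job list, so no SIGHUP is forwarded to it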

Is there a way to run a shell script as one whole task(with single PID)?

I have a shell script called run.sh. In it, I may call other shell scripts like:
./run_1.sh
./run_2.sh
.........
I have found that when I call the script with ./run.sh, it actually invokes the different tasks inside the script sequentially, each with its own PID (i.e., run_1.sh will be one task and run_2.sh will be another task). This prevents me from killing the whole group of tasks with a single "kill" command, or from running the whole group of tasks in the background by running "./run.sh &".
So is there a way to run the script just as one whole task?
pkill can be used for killing the children of a process, using the -P option.
pkill -P $PID
where $PID is the PID of the parent process.
You can source the run_1.sh command so that it is executed in the same shell (this could cause side effects, since all the scripts will now share the same scope).
source run_1.sh
source run_2.sh
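Alternatively, a small sketch that relies on job control: in an interactive shell each background job gets its own process group, and run_1.sh, run_2.sh, etc. inherit it, so the whole tree can be signalled at once with a negative PID:
./run.sh &        # the job and every script it launches share one process group
kill -- -$!       # a negative PID sends the signal to that entire process group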
