I'm working on a script that runs in a never-ending loop. I start the script on reboot using cron. However, I need to update this script from GitHub every 24 hours, so I run a shell script that basically does the following:
Back up the crontab to a .txt file
Empty cron with crontab -r
Pull updates from GitHub
Load the crontab backup and start cron again.
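In sketch form (the paths and repo location are placeholders), the update script looks something like:
#!/bin/bash
crontab -l > /home/user/cron_backup.txt   # back up the current crontab to a .txt file
crontab -r                                # empty cron
git -C /home/user/repo pull               # pull updates from GitHub
crontab /home/user/cron_backup.txt        # reload the backup so cron starts again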
So the shell script empties cron, updates the code, then starts cron again with the same file name, and cron runs the program again. I'm testing this by appending a message to a text file every time the script completes one loop. When I change the message in GitHub, cron pulls the update and I can see the updated message. The problem is that it continues to show the old message as well. For example:
Original Message "Test": Test Test Test Test Test Test
Updated Message "Update": Test Update Test Update Test Update
It continues to output old messages even though I cleared cron, updated the code, then started it again. It appears to me that simply emptying cron does not stop the previous loop from continuing to run.
I looked into using killall to stop all sh scripts from running, but in an attempt to clear out the many looping scripts I had created, I killed every running process with killall5 -9. Now when I enter ps to view running processes, none are listed.
I'm very stuck. Any and all help would be appreciated!
I used sudo pkill python to end all running Python scripts.
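In the sketch above, that means adding a kill step before the pull so the already-running copy dies before the updated one starts (myscript.py is a placeholder name):
sudo pkill python            # ends every running python process
sudo pkill -f myscript.py    # narrower: only processes whose command line matches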
Related
I have modified my rc.local script to run some python script at startup.
This python script seems to be started successfully.
The script runs forever (as intended), and I want to see what it does, so my question is:
Is there a way to access the shell that runs this script?
Yes, to see what is going on, I could log to some file, but what if that script needs to get input from the user via console?
Thanks for your help!
You will not be able to interact with the script run by rc.local. But you can see what it does by logging its output into dedicated files:
python myscript.py > /home/myhome/log/myscript.log 2> /home/myhome/log/myscript.err
where error messages go into a separate log file.
Note that your script will be executed by root, so it runs with root's permissions and the files it creates are owned by root.
Here's a link to an earlier answer about this with a method to log all outputs of rc.local.
Now you can see in your log file whether execution stops because the script demands input or indeed crashes, and then you can fix the script accordingly.
If you don't want to mess with rc.local for testing, you could also first run it through crontab on your or root's account (scheduled execution by user, see man crontab). This might be easier for debugging, and you can start it through rc.local once it works as you want.
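Most cron implementations (Vixie cron and derivatives, for instance) also support an @reboot schedule, so a user crontab entry mirroring the rc.local line above could look like this (paths as in the earlier example):
@reboot python /home/myhome/myscript.py > /home/myhome/log/myscript.log 2> /home/myhome/log/myscript.err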
I'm having a peculiar issue with a shell script that I have set to run every minute via crontab.
I use Pelican as a blog platform and wanted to semi-automate the way the site updates whenever there's a new post. To do this, I've created a script that looks for a file called respawn in the same directory as the content (the directory syncs via Dropbox, so I simply create the file there and it syncs to the server).
The script is written so that if the file respawn exists, it rebuilds the blog and deletes the file. If it doesn't exist, the script just exits.
Here's the script, called publish.sh:
#!/bin/bash
Respawn="/home/user/content/respawn"
if [ -f "$Respawn" ]
then
    sudo /home/user/sb.sh; rm "$Respawn"
else
    exit 0
fi
exit 0
Here's the crontab entry for the shell script:
* * * * * /home/user/publish.sh
And finally, here's the contents of sb.sh
make html -C /var/www/site/
Now, if I run the script via SSH and respawn exists, it works perfectly. However, if I let cron do it, it doesn't run sb.sh, but it still deletes the respawn file.
I have one other cron job that runs every 4 hours that simply runs sb.sh which works perfectly (in case I forget to publish something).
I've tried using the user's crontab as well as adding it to root instead, and I've also added the user to the sudoers file so the script can run without password intervention. Neither seems to work. Am I missing something?
It must be sudo: cron can't input the password.
Check the mail for the user running the cron job to confirm; you'll see something like sudo: no tty present.
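If the script really must run via sudo from cron, one option is a NOPASSWD rule in sudoers (edit it with visudo; the username and path here are placeholders):
user ALL=(root) NOPASSWD: /home/user/sb.sh
On systems whose sudoers sets requiretty, that also needs relaxing, though as noted below, dropping sudo entirely is simpler here.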
Try changing sudo /home/user/sb.sh; rm "$Respawn" to
/home/user/sb.sh; rm "$Respawn"
sudo is not necessary to run your command in this context, since it'll be invoked as root anyway.
I have a php script that deletes a file from a specific folder on my server:
if (file_exists($_SERVER['DOCUMENT_ROOT']."/folder/file1"))
{
    unlink($_SERVER['DOCUMENT_ROOT']."/folder/file1");
}
When I go to this script address with my browser it works fine.
I created a cron job to run this script every hour, but when the script runs from the cron job, the file is not deleted.
I also added a flag that sends me an email, and I suspect that the cron job gets a false response from the file_exists test and never continues to the unlink action.
Any idea why the cron job won't delete the file?
Thanks
Solved it:
Instead of $_SERVER['DOCUMENT_ROOT']."/folder/file1" I had to put the absolute path: /home/public_html/folder/file1. When the script runs from cron it runs under the PHP CLI rather than the web server, so $_SERVER['DOCUMENT_ROOT'] is empty and the path never matches.
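For completeness, the cron entry that runs the script hourly through the PHP CLI might look like this (the binary path and script name are placeholders):
0 * * * * /usr/bin/php /home/public_html/folder/delete_file.php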
First, the background to this intriguing challenge: during development and testing, the continuous integration build can often fail with deadlocks, loops, or other issues that result in a never-ending test. When that happens, all the mechanisms for notifying that a build has failed become useless.
The solution will be to have the build script time out if there's zero output to the build log file for more than 5 minutes. Since the build routinely writes out the names of unit tests as it proceeds, a silent log file is the best way to identify that it's "frozen".
Okay. Now the nitty gritty...
The build server uses Hudson to run a simple bash script that invokes the more complex build script based on Nant and MSBuild (all on Windows).
So far, all the solutions around the net involve a timeout on the total run time of the command. But that approach fails in this case: a total-run-time limit has to be long enough for a full build, so a hang or freeze in the first 5 minutes would go undetected for far too long.
What we've thought of so far:
First, here's the high-level bash command that runs the full test suite in Hudson.
build.sh clean free test
That command simply sends all the Nant and MSBuild build logging to stdout.
It's obvious that we need to tee that output to a file:
build.sh clean free test 2>&1 | tee build.out
Then, in parallel, a command needs to sleep, check the modification time of the file, and kill the main process if the file is more than 5 minutes old. A kill -9 will be fine at that point; nothing graceful is needed once it has frozen.
That's the part you can help with.
In fact, I made a script like this over 15 years ago to kill the connection on a data phone line to Japan after periods of inactivity, but I can't remember how I did it.
Sincerely,
Wayne
build.sh clean free test 2>&1 | tee build.out &
sleep 300
kill -KILL %1
You may be able to use timeout:
timeout 300 command
Solved this myself by writing a bash script.
It's called iotimeout and takes one parameter, the number of seconds.
You use it like this:
build.sh clean dev test | iotimeout 120
iotimeout has two loops. One is a simple while read line loop that echoes each line, but it also uses the touch command to update the modification time of a temp file every time it writes a line. Unfortunately, it wasn't possible to monitor the build.out file directly, because Windoze doesn't update a file's modification time until the file is closed. Oh well.
The other loop runs in the background. It's a forever loop that sleeps 10 seconds and then checks the modification time of the temp file. If that ever exceeds 120 seconds, the loop forces the entire process group to exit.
The only tricky part was returning the exit code of the original program; bash gives you a PIPESTATUS array to solve that. Also, figuring out how to kill the entire process group took some research, but it turns out to be easy: just kill 0.
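Since the original script isn't shown, here's a minimal sketch of iotimeout following the description above; the stamp-file handling, the GNU stat -c %Y call, and the 10-second poll interval are assumptions, and the real script may differ:
#!/bin/bash
# iotimeout: abort the whole pipeline if stdin goes silent for too long.
# Usage: some_command 2>&1 | iotimeout SECONDS
limit=${1:?usage: iotimeout seconds}
stamp=$(mktemp)                      # temp file whose mtime marks the last line seen
trap 'rm -f "$stamp"' EXIT

# Background watchdog: a forever loop that sleeps 10 seconds, then checks
# the age of the stamp file and kills the whole process group if it is
# older than the limit.
(
  while true; do
    sleep 10
    age=$(( $(date +%s) - $(stat -c %Y "$stamp") ))
    if [ "$age" -gt "$limit" ]; then
      echo "iotimeout: no output for ${limit}s, killing process group" >&2
      kill 0                         # signals every process in the current group
    fi
  done
) &
watchdog=$!

# Foreground loop: echo each incoming line and refresh the stamp file.
while IFS= read -r line; do
  printf '%s\n' "$line"
  touch "$stamp"
done

kill "$watchdog" 2>/dev/null         # input ended normally; stop the watchdog
The caller then recovers the build's real exit code via PIPESTATUS, since the pipeline's own status would be iotimeout's:
build.sh clean dev test 2>&1 | iotimeout 120
exit "${PIPESTATUS[0]}"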
I have a cron job set up that will start my script.
The intent of this script is to kill a process that is currently running, and start up a new version of this process (CHECKDB). CHECKDB needs to be running all the time, so we have a start_checkdb script that is basically an infinite loop that runs CHECKDB; if CHECKDB crashes, the loop starts it again. [Yes, I realize that isn't the best practice, but that's not what this is about.]
My script is called by cron without issue, and it kills CHECKDB without issue. As far as I can tell, the child script that starts CHECKDB back up does get called, but every time I check ps after the cron job runs, the process is not running. If I run the script by hand on the command line, under any shell, it works with no problem: it kills CHECKDB and start_checkdb, then starts up start_checkdb, which starts up CHECKDB.
Yet for some reason, when cron does it, the process is never running afterwards. It kills the live one, and either doesn't start it, or it starts it and kills it.
Is it possible that when cron reaches the end of the parent process, it kills the child processes that were spawned?
I don't know if it makes a difference, but this is on Solaris 8.
You might look at using nohup inside your cron script when launching checkdb. Something like nohup command & would be the normal way to launch something you want to live beyond the launching process.
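For example (the script path and log location are placeholders):
nohup /opt/app/start_checkdb > /var/tmp/checkdb.log 2>&1 &
nohup makes the command immune to the hangup signal sent when the launching shell exits, and the trailing & puts it in the background so the cron script can finish.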
Could you clarify your description of the arrangement? It sounds like, under normal circumstances, both start_checkdb and CHECKDB are running. Is the cron job supposed to kill CHECKDB, with the already-running copy of start_checkdb restarting it? Or does the cron job kill both processes and then restart start_checkdb? After the cron job runs, which process is missing: CHECKDB, start_checkdb, or both?
Having said that, the most common reasons for a process to work from the command line but fail from cron are as follows (a sketch addressing the first two follows the list):
Dependency on the correct command PATH (or some other environment variable)
Dependency on being run from the correct directory
Dependency on being run from a tty
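For the first two, the environment can be pinned right in the crontab command. Note that older crons, such as the one on Solaris 8, don't allow VAR=value lines in the crontab itself, so set variables in the command or inside the script (paths here are placeholders):
0 * * * * cd /home/user && PATH=/usr/bin:/usr/local/bin /home/user/restart_checkdb.sh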