Cronjob stuck without exiting - linux

I have 50+ cron jobs like the one below running on my CentOS 7 server.
curl -s https://url.com/file.php
Each runs every 10 minutes. When run manually from the shell it takes only 1-2 minutes, and it also works fine under cron. The problem is that it does not exit after execution. When I check my processes with ps, I see many cron jobs from previous dates (even 10 days old), which inflates the total process count on my server.
Line in crontab:
*/10 * * * * user curl -s https://url.com/file.php > /dev/null 2>&1
Is there any reason for this? If I remember correctly, this started happening after the latest patch update.
Please help.

Modify your command to write its output to a log file instead of dumping it to /dev/null, so you can see what curl is doing when it hangs.
The options
--max-time
--connect-timeout
--retry
--retry-max-time
can be used to control the curl command's behaviour.
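For instance, a hedged rewrite of the crontab line (the 30-second connect cap, 300-second total cap, and log path are illustrative values, not from the original post):
*/10 * * * * user curl -s --connect-timeout 30 --max-time 300 https://url.com/file.php >> /var/log/file_php_cron.log 2>&1
With --max-time set, curl gives up after the stated number of seconds, so a stalled transfer can no longer leave processes piling up between runs, and the log file shows what happened.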

Related

Active cron job

I am trying to make a cron job for the first time, but I am having problems making it work.
Here is what i have done so far:
Linux commands:
crontab -e
My cronjob looks like this:
1 * * * * wget -qO /dev/null http://mySite/myController/myView
Now when I look in:
/var/spool/cron/crontabs/
I get the following output:
marc root
If I open the file root,
I see my cron job (the one above).
However, it doesn't seem to be running.
Is there a way I can check whether it's running, or make sure that it is?
By default, cron job activity is logged; on most systems it ends up in /var/log/syslog (the exact location depends on your distribution). Check there and you're done. Otherwise, you can append the job's output to a log file yourself:
1 * * * * wget http://mySite/myController/myView >> ~/my_log_file.txt
and see what your output is. Notice I've removed the quiet parameter from the wget command so that there is some output.
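To confirm whether cron fired the job at all, grepping the system log for CRON entries usually suffices (the path below is the common Debian/Ubuntu location; adjust for your distribution):
grep CRON /var/log/syslog
Each run should appear as a line containing CMD followed by your wget command.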

Why won't cron execute this mysql routine properly?

I have a cron job set up
0 0 * * * /usr/bin/mysqldump -u omeka_admin -h localhost omeka > /home/groups/omeka/database/omeka.sql > /dev/null
The password is stored in .my.cnf. It works great from the command line, but every time cron executes it, the resulting file is zero bytes.
I tried putting the password directly in the command, but got the same result. Again, it works great from the command line. I also checked for some weird process interfering with the output, but it's still not working.
Any idea of what's going on?
Remove the last redirection. With two stdout redirections on one command, the shell sends the dump to the last target (/dev/null) and merely creates the empty .sql file, which is why it comes out zero bytes:
... omeka > /home/groups/omeka/database/omeka.sql
or redirect stderr instead:
... omeka > /home/groups/omeka/database/omeka.sql 2> /dev/null
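Putting it together, the corrected crontab entry would look like this (appending stderr to a log file instead of discarding it is an optional extra, and the log path here is just an illustration):
0 0 * * * /usr/bin/mysqldump -u omeka_admin -h localhost omeka > /home/groups/omeka/database/omeka.sql 2>> /home/groups/omeka/database/dump.log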

How can I trigger a delayed system shutdown from in a shell script?

On my private network I have a backup server, which runs a bacula backup every night. To save energy I use a cron job to wake the server, but I haven't found out how to properly shut it down after the backup is done.
By the means of the bacula-director configuration I can call a script during the processing of the last backup job (i.e. the backup of the file catalog). I tried to use this script:
#!/bin/bash
# shutdown server in 10 minutes
#
# ps, 17.11.2013
bash -c "nohup /sbin/shutdown -h 10" &
exit 0
The script shuts down the server, but apparently it only returns once the shutdown is already in progress, and as a consequence the last backup job hangs until the shutdown. How can I make the script schedule the shutdown and return immediately?
Update: after an extensive search I came up with an (albeit pretty ugly) solution:
The script run by bacula looks like this:
#!/bin/bash
at -f /root/scripts/shutdown_now.sh now + 10 minutes
And the second script (shutdown_now.sh) looks like this:
#!/bin/bash
shutdown -h now
I found no obvious way to pass the required parameters to shutdown within the syntax of the 'at' command. Maybe someone can give me some advice here.
Depending on your backup server’s OS, the implementation of shutdown might behave differently. I have tested the following two solutions on Ubuntu 12.04 and they both worked for me:
As the root user I have created a shell script with the following content and called it in a bash shell:
shutdown -h 10 &
exit 0
The exit code of the script in the shell was correct (tested with echo $?). The shutdown was still in progress (tested with shutdown -c).
This bash function call in a second shell script worked equally well:
my_shutdown() {
shutdown -h 10
}
my_shutdown &
exit 0
No need to create a second BASH script to run the shutdown command. Just replace the following line in your backup script:
bash -c "nohup /sbin/shutdown -h 10" &
with this:
echo "/sbin/poweroff" | /usr/bin/at now + 10 min >/dev/null 2>&1
Feel free to adjust the time interval to suit your preference.
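If you want to verify or cancel the scheduled shutdown, the standard at tools cover it (a minimal sketch; job number 3 is just an example):
atq
atrm 3
atq lists pending at jobs with their numbers, and atrm removes the one you name.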
If you can become root (either log in as root, or sudo -i), this works (tested on Ubuntu 14.04):
# shutdown -h 20:00 &    # halts the machine at 8pm
No shell script needed. I can then log out and log back in, and the process is still there. Interestingly, if I try this with sudo on the command line, the process goes away when I log out!
BTW, I also use this command to do occasional reboots after everyone has gone home:
# shutdown -r 20:00 &    # reboots the machine at 8pm
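If you can only get root via sudo on the command line, one possible workaround (an untested sketch, not from the answers above) is to shield the scheduled shutdown from the hangup signal that appears to kill it at logout:
sudo nohup shutdown -r 20:00 &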

FeedWordPress cron job code

The FeedWordPress (RSS fetcher) plugin for WordPress is working well, but it doesn't have an option to update RSS every 5 minutes (the default is 60 minutes), so the only way is to click the UPDATE NOW button manually.
I am new to this, and some people told me to trigger it every 5 minutes using a cron job, so I tried that in cPanel.
First I tried this
curl http://domain.com/?update_feedwordpress=true > dev/null
but was getting this error
/bin/sh: dev/null: No such file or directory
Second I tried this
wget http://domain.com/?update_feedwordpress=1
but now I'm getting this error
/bin/sh: /usr/bin/wget: Permission denied
(I used my own domain in place of domain.com)
Any correct/exact working code?
You can use this code:
/usr/local/bin/curl --silent -L "http://example.com/?update_feedwordpress=1" >/dev/null 2>&1
where you should replace example.com with your domain name.
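As a complete cron entry on a 5-minute schedule, that becomes (a sketch; the curl path matches the answer above, but verify it on your host with which curl):
*/5 * * * * /usr/local/bin/curl --silent -L "http://example.com/?update_feedwordpress=1" >/dev/null 2>&1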
The null device is /dev/null, not dev/null; without the leading slash, the shell looks for a directory named dev relative to the current working directory, hence the "No such file or directory" error.
/usr/bin/wget does not seem to be executable by the current user.
It may be easier to write a short script (say mydebug.sh) and run that script with cron:
#!/bin/sh
id                     # shows which user the cron job runs as
ls -l /usr/bin/wget    # shows the permissions on wget
curl ...
Then check the output for any obvious permission errors.
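A matching cron entry for the debug script might look like this (the script path, schedule, and log file are hypothetical; the point is to capture output rather than discard it while debugging):
*/5 * * * * /bin/sh /home/youruser/mydebug.sh >> /home/youruser/mydebug.log 2>&1
Running the script through /bin/sh also sidesteps the case where the script file itself lacks execute permission.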

Shell script to log server checks runs manually, but not from cron

I'm using a basic shell script to log the results of top, netstat, ps and free every minute.
This is the script:
/scripts/logtop:
TERM=vt100
export TERM
time=$(date)
min=${time:14:2}
top -b -n 1 > /var/log/systemCheckLogs/$min
netstat -an >> /var/log/systemCheckLogs/$min
ps aux >> /var/log/systemCheckLogs/$min
free >> /var/log/systemCheckLogs/$min
echo "Message Content: $min" | mail -s "Ran System Check script" email#domain.com
exit 0
When I run this script directly it works fine. It creates the files and puts them in /var/log/systemCheckLogs/ and then sends me an email.
I can't, however, get it to work when trying to get cron to do it every minute.
I tried putting it in /var/spool/cron/root like so:
* * * * * /scripts/logtop > /dev/null 2>&1
and it never executes
I also tried putting it in /var/spool/cron/myservername and also like so:
* * * * * /scripts/logtop > /dev/null 2>&1
it'll run every minute, but nothing gets created in systemCheckLogs.
Is there a reason it works when I run it but not when cron runs it?
Also, here's what the permissions look like:
-rwxrwxrwx 1 root root 326 Jul 21 01:53 logtop
drwxr-xr-x 2 root root 4096 Jul 21 01:51 systemCheckLogs
Normally crontabs are kept in /var/spool/cron/crontabs/. Also, you normally update them with the crontab command, as this HUPs crond when you're done and makes sure the file ends up in the correct place.
Are you using the crontab command to create the cron entry? Use crontab <file> to import a file directly, or crontab -e to edit the current crontab with $EDITOR.
All jobs run by cron need the interpreter listed at the top, so cron knows how to run them.
I can't tell if you just omitted that line or if it is not in your script.
For example,
#!/bin/bash
echo "Test cron jon"
When running from /var/spool/cron/root, it may be failing because cron is not configured to run jobs for root from there. On Linux, root cron jobs are typically run from /etc/crontab rather than from /var/spool/cron.
When running from /var/spool/cron/myservername, you probably have a permissions problem. Don't redirect the errors to /dev/null; capture and examine them.
Something else to be aware of: cron doesn't initialize the full run environment, which can mean a script runs fine from a fully logged-in shell but behaves differently under cron.
In the case above, you don't have an interpreter line ("#!/bin/bash" or similar) at the top of your script. If root's default shell is a plain Bourne shell or csh, the syntax you use to populate your variables will not work, which would explain why the script runs but doesn't populate your files. So if you need ksh, put "#!/bin/ksh" up top. It's generally best not to trust the environment to keep these things sane. If you need your profile, source it up front with ". ~/.profile". A quick and dirty way to get a relatively full environment is to run the job through su:
* * * * * su - root -c "/path/to/script" > /dev/null 2>&1
Just some things I've picked up over the years. Your ${var:offset:length} syntax definitely expects ksh or bash, so make sure the script is run by a shell that supports it.
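To see just how sparse cron's environment is, one quick experiment (the output path is arbitrary) is to log the environment from a one-off cron entry:
* * * * * env > /tmp/cron_env.txt 2>&1
Comparing /tmp/cron_env.txt with the output of env in your login shell typically shows a much shorter PATH and none of your profile's variables, which is why setting PATH explicitly (or sourcing a profile) at the top of cron scripts is a common defensive habit.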
Thanks for the tips... used a little bit of each answer to get to the bottom of this.
I did have the interpreter at the top (it wasn't shown here), but it may have been wrong.
I'm using #!/bin/bash now and that works.
I also had to tinker with the permissions of the directory the log files are dumped into to get things working.
