I'm trying to have root run a script sitting inside another user's bin folder (say 'john'). The script itself is owned by root and chmoded 774.
I can launch the script without any issue from the command line.
Via cron, however, it never gets started, and I can't understand why.
Here is the content of crontab -e while logged in as root:
# daily backup
15 2 * * * php -q /home/john/bin/backup
My server runs Red Hat Enterprise Linux Server release 6.5 (Santiago).
Perhaps php is not in cron's default PATH; it may live in /usr/local/bin rather than /usr/bin or /bin.
Always redirect both stdout and stderr; otherwise cron's traditional default is to send the output as a message via /usr/bin/mail, and you probably don't want to get an email each time the job runs.
When you revise your crontab this way (e.g. /usr/local/bin/php -q /home/john/bin/backup > /tmp/crontest.txt 2>&1), you can then see whether any error messages are being logged.
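For example, if which php reports /usr/local/bin/php (that exact path is only an assumption; check your own system), the full entry could become:
# daily backup, with output captured for debugging
15 2 * * * /usr/local/bin/php -q /home/john/bin/backup > /tmp/crontest.txt 2>&1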
I am trying to run a root cron job to execute a script.
Here's the cron job I put into sudo crontab -e:
*/1 * * * * ~/temperature_log/logtemp.sh >> ~/temperature_log/templog.log>&1
The script requires root permission for hddtemp.
Unfortunately, the templog.log file never appears. The syslog says:
Jun 6 13:09:01 user CRON[32433]: (root) CMD (~/temperature_log/logtemp.sh >> ~/temperature_log/templog.log>&1)
Jun 6 13:09:01 user CRON[32426]: (CRON) info (No MTA installed, discarding output)
So apparently, the script IS run, but something goes wrong from there.
Even stranger: If I run a user cron via just crontab -e, the script executes (without root permissions, though, so it is of no use for me) and does write the log file.
How can I make sure that my root crontab works correctly?
I am connecting to this computer via ssh as a user without root permissions, but I do have the root password.
EDIT
I have changed the program; I want it to log to syslog via logger now. Again, running the script manually works and it logs correctly, but running it from crontab just shows this:
Jun 6 15:06:01 insystems CRON[25328]: (root) CMD (/bin/sh ~/temperature_log/logtemp.sh)
No information is logged. I added the /dev/null part to get rid of the email warning. I am not planning on installing an email service.
Have you written the script to send email alerts? The warning, "(No MTA installed, discarding output)", happens when a mail service is not installed.
Most Linux distributions have a mail service (including an MTA) installed. Ubuntu doesn't though.
You can install a mail service, postfix for example, to solve this problem.
sudo apt-get install postfix
Also, try providing the full (absolute) path for the files instead of
~/temperature_log/logtemp.sh and ~/temperature_log/templog.log
because in root's crontab, ~ expands to /root rather than to your own user's home directory.
Make sure logtemp.sh has execute permission. If not, run
chmod +x logtemp.sh
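Putting that together, the root crontab entry might look like this (a sketch; /home/youruser is a placeholder for wherever temperature_log actually lives):
*/1 * * * * /home/youruser/temperature_log/logtemp.sh >> /home/youruser/temperature_log/templog.log 2>&1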
My solution was to add the cron job not to crontab -e but to /etc/crontab. From there, it worked without issues.
I probably made a mistake in the other crontab file, but this solution is okay for me.
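For reference, /etc/crontab has an extra user field between the schedule and the command, so the equivalent entry there looks something like this (same placeholder path as above):
*/1 * * * * root /home/youruser/temperature_log/logtemp.sh >> /home/youruser/temperature_log/templog.log 2>&1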
I have a backup script that I can successfully run manually as root:
backup perform --trigger confluence --config-file /etc/backup/config.rb --log-path=/var/log > /dev/null
In the /etc/cron.d folder I have a file confluence_backup with the following contents:
# Crontab for confluence_backup managed by Chef. Changes will be overwritten.
0 0 * * * root backup perform --trigger confluence --config-file /etc/backup/config.rb --log-path=/var/log > /dev/null
For some reason this backup script will not run. I don't see messages in /var/log/backup, or in syslog for that matter. I tried restarting the cron service, but the job still does not run.
Why is the script not running?
First, have you confirmed that the backup executable is in the PATH that cron uses? Cron's default PATH (usually just /usr/bin:/bin) is much shorter than root's interactive PATH.
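As a quick check, run which backup as root to see where the executable lives, then spell out that full path in the cron.d entry and capture the output somewhere other than /dev/null. A sketch, assuming which reports /usr/local/bin/backup (your path and log location may differ):
0 0 * * * root /usr/local/bin/backup perform --trigger confluence --config-file /etc/backup/config.rb --log-path=/var/log >> /tmp/confluence_backup_cron.log 2>&1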
I'm trying to get cron jobs or the Task Scheduler working, but I cannot figure out why my script is not being picked up.
I'm trying to simply archive a folder with:
tar -cvf /volume1/NetBackup/Backups/Monday.tgz /volume1/NetBackup/Backups/ns3268116.ovh.net/
Each time, the script starts but never finishes its work. Whether launched from the Task Scheduler or from crontab, a file Monday.tgz is created in /volume1/NetBackup/Backups/, but it is only 1024 bytes.
Synology Cron is really fussy.
Here are my own personal notes for Synology DS413j, DSM 5.2:
Hand-edit /etc/crontab as root; crontab -e isn't available
Ensure you use tabs not spaces to separate the columns
Your crontab changes may not survive a reboot if there are syntax problems
The who column in crontab may not be reliable. Use root in the who column and /bin/su -c '<command>' <username> to run as another user
Remember that it uses ash, not bash, so check for bashisms; e.g. use >> /path/to/logfile 2>&1, not &>> /path/to/logfile
It doesn't support 'MAILTO='
You need to restart crond (synoservicectl --reload crond) for the new crontab to take effect
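Putting those notes together, a working /etc/crontab line for a weekly backup might look like this (a sketch: the schedule, script name and log path are placeholders, and the fields must be separated by real tabs even if spaces are shown here):
0 3 * * 1 root /bin/sh /volume1/NetBackup/backup_monday.sh >> /volume1/NetBackup/backup.log 2>&1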
You may try adding some diagnostics to it. For instance:
Add MAILTO at the top of the crontab file (edited via crontab -e) to receive cron errors by email:
MAILTO=username@domain.com
Redirect the output of your tar command to a file:
your command > ~/log.txt 2>&1
Check the cron log and look for anomalies; its location may depend on your configuration, for instance:
/var/log/cron.log
You may also try searching through /var/log/messages at the time of your cron job.
Is /volume1 a resource on a remote host? If so, that part of the system is worth checking as well.
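For example, a crontab entry with the first two diagnostics applied might look like this (the MAILTO address and log path are placeholders; adjust the schedule to whatever you actually use):
MAILTO=you@example.com
0 3 * * 1 tar -cvf /volume1/NetBackup/Backups/Monday.tgz /volume1/NetBackup/Backups/ns3268116.ovh.net/ > /tmp/tar-cron.log 2>&1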
I agree about the really nagging nature of crontab on Synology's Linux OS.
I would certainly suggest creating the desired job as a .sh shell script and calling it via a CRON task inserted through the GUI, as suggested here.
As of today (March 2017) this is the best method I have found, since working with crontab via the CLI is nearly a pain.
I'm using @reboot ~/www/example.com/bin/server in my user's crontab... but when I reboot the server, the web server (this script) does not come up. (The script works fine from the command line.)
My guess is that the /home/user directory has not been mounted yet... does anyone know if it's possible to get a script to run out of a home directory using this crontab @reboot method?
If you think /home/user hasn't been mounted yet (or some required services aren't running), you can always wait in your crontab line before executing the command, like:
@reboot sleep 60; /home/user/www/example.com/bin/server
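If the specific worry is that /home isn't mounted yet, you can also poll for the mount instead of sleeping a fixed time; a sketch, assuming the mountpoint utility from util-linux is available:
@reboot until mountpoint -q /home; do sleep 5; done; /home/user/www/example.com/bin/server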
It should definitely be due to the environment, as covered in the comments. To be sure cron is able to run @reboot jobs at all, try the following and check after a reboot:
@reboot (date > /tmp/date-check.txt)
My problem was that the crontab did not have a full environment. I made the script it points to source my .bashrc:
@reboot /home/user/www/example.com/bin/server
Inside, ./server does . /home/user/.bashrc to get a working environment.
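A minimal sketch of that arrangement, assuming the server script is written for bash:
#!/bin/bash
# cron starts with almost no environment, so load the user's normal one first
. /home/user/.bashrc
# ...then start the server as it normally would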
In Ubuntu, if you have the Home Directory Encryption feature turned on, then @reboot in your crontab file won't work, because the file system is still encrypted when the system is starting up and cron runs its @reboot jobs.
Your options are to place your files in an unencrypted location (/usr/local/bin or something?) or disable Home Directory encryption on your home directory.
I'm using a basic shell script to log the results of top, netstat, ps and free every minute.
This is the script:
/scripts/logtop:
TERM=vt100
export TERM
time=$(date)
min=${time:14:2}
top -b -n 1 > /var/log/systemCheckLogs/$min
netstat -an >> /var/log/systemCheckLogs/$min
ps aux >> /var/log/systemCheckLogs/$min
free >> /var/log/systemCheckLogs/$min
echo "Message Content: $min" | mail -s "Ran System Check script" email#domain.com
exit 0
When I run this script directly it works fine. It creates the files and puts them in /var/log/systemCheckLogs/ and then sends me an email.
I can't, however, get it to work when trying to get cron to do it every minute.
I tried putting it in /var/spool/cron/root like so:
* * * * * /scripts/logtop > /dev/null 2>&1
and it never executes
I also tried putting it in /var/spool/cron/myservername, like so:
* * * * * /scripts/logtop > /dev/null 2>&1
it'll run every minute, but nothing gets created in systemCheckLogs.
Is there a reason it works when I run it but not when cron runs it?
Also, here's what the permissions look like:
-rwxrwxrwx 1 root root 326 Jul 21 01:53 logtop
drwxr-xr-x 2 root root 4096 Jul 21 01:51 systemCheckLogs
Normally crontabs are kept in /var/spool/cron/crontabs/. Also, you normally update them with the crontab command, as this HUPs crond after you're done and makes sure the file ends up in the correct place.
Are you using the crontab command to create the cron entry? Use crontab <file> to import a file directly, or crontab -e to edit the current crontab with $EDITOR.
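For example (logtop.cron is just a hypothetical file holding the single cron line above):
crontab -u root /root/logtop.cron   # replace root's crontab with the file's contents
crontab -e                          # or edit the current crontab interactively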
All jobs run by cron need the interpreter listed at the top, so cron knows how to run them.
I can't tell if you just omitted that line or if it is not in your script.
For example,
#!/bin/bash
echo "Test cron jon"
When running from /var/spool/cron/root, it may be failing because cron is not configured to run for root. On Linux, root cron jobs are typically run from /etc/crontab rather than from /var/spool/cron.
When running from /var/spool/cron/myservername, you probably have a permissions problem. Don't redirect the errors to /dev/null; capture and examine them.
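For example (the log path is arbitrary), something like this will show what actually goes wrong:
* * * * * /scripts/logtop >> /tmp/logtop.cron.log 2>&1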
Something else to be aware of: cron doesn't initialize the full run environment, which can mean a script runs just fine from a fully logged-in shell but doesn't behave the same way from cron.
In the case above, you don't have a "#!/bin/<shell>" line at the top of your script. If root is configured to use something like a plain Bourne shell or csh, the syntax you use to populate your variables will not work, which would explain why the script runs but doesn't populate your files. So if you need ksh, use "#!/bin/ksh". It's generally best not to trust the environment to keep these things sane. If you need your profile loaded, do a ". ~/.profile" up front as well. A quick and dirty way to get a relatively full environment is to run the job through su, as in:
* * * * * su - root -c "/path/to/script" > /dev/null 2>&1
Just some things I've picked up over the years. You're definitely expecting a ksh based on your syntax, so you might want to be sure it's using it.
Thanks for the tips... used a little bit of each answer to get to the bottom of this.
I did have the interpreter at the top (it wasn't shown here), but it may have been wrong.
I'm using #!/bin/bash now and that works.
Also had to tinker with the permissions of the directory the log files are being dumped in to get things working.