I have a bash script which I want to run as a cron job.
It works fine except for one command. I redirected that command's stderr to capture the error, and it turned out to be "command not found".
It is a root crontab.
Both the current user and root execute the command successfully when I type it in the terminal.
Even the script executes the command successfully when I run it from the terminal.
Startup script:
#!/bin/bash
sudo macchanger -r enp2s0 > /dev/null
sudo /home/deadpool/.logkeys/logger.sh > /dev/null
logger.sh:
#!/bin/bash
dat="$(date)"
echo " " >> /home/deadpool/.logkeys/globallog.log
echo "$dat" >> /home/deadpool/.logkeys/globallog.log
echo " " >> /home/deadpool/.logkeys/globallog.log
cat /home/deadpool/.logkeys/logfile.log >> /home/deadpool/.logkeys/globallog.log
cat /dev/null > /home/deadpool/.logkeys/logfile.log
cat /dev/null > /home/deadpool/.logkeys/error.log
logkeys --start --output /home/deadpool/.logkeys/logfile.log 2> /home/deadpool/.logkeys/error.log
error.log:
/home/deadpool/.logkeys/logger.sh: line 10: logkeys: command not found
Remember that cron runs with a different environment than your user account or root does, and its PATH might not include the directory where logkeys lives. Try the absolute path to logkeys in your script (find it with which logkeys as your user). Additionally, I recommend looking at this answer on Server Fault about running scripts as if they were started by cron, for when you need to find out why something works in your shell but not in a job.
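For example, assuming which logkeys prints /usr/local/bin/logkeys (the real path may differ on your machine), the last line of logger.sh becomes:
/usr/local/bin/logkeys --start --output /home/deadpool/.logkeys/logfile.log 2> /home/deadpool/.logkeys/error.log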
Related
When I run the bash script from the command line, it executes fine.
When I try to run it as a cron task, it fails. By a process of elimination I found the problem: the "which iptables" command returns an empty string. The same happens with every program I try to find in the "/sbin" directory.
Example:
# crontab -e
* * * * * /root/test.sh >> /root/test.log 2>&1
test.sh:
#!/bin/bash
IPT=$(which iptables);
echo ${IPT} >> /root/test.log
But an empty string is written to test.log.
Tested on Ubuntu 16.04 and Debian 8.
It's not related to privileges.
which looks for the command in $PATH. Cron scripts run with a limited PATH that doesn't include /sbin, so iptables is not found.
$ /usr/bin/which iptables
/sbin/iptables
$ PATH=/bin:/usr/bin /usr/bin/which iptables
$ echo $?
1
When you have a limited PATH, which returns an empty string (on my other machine it reports "no iptables in (/usr/bin:/bin)", so YMMV) and exits with a non-zero code.
If you do something like echo $PATH >> /root/test.log, you will see that cron's PATH contains just /usr/bin and /bin.
You have to either set $PATH to contain the iptables location, or use the full path when calling iptables.
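A minimal sketch of both fixes (the PATH value is an assumption; adjust it to your distribution):
# Option 1: set PATH in the crontab, above the job lines
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
* * * * * /root/test.sh >> /root/test.log 2>&1

# Option 2: hard-code the full path in test.sh and skip which entirely
IPT=/sbin/iptables
echo ${IPT} >> /root/test.log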
Currently I am writing a little script which should add a cron job to the root crontab, but it seems that my root crontab stopped working. When I try to run the crontab commands in my bash script, I get "command not found". It worked for some time and stopped working yesterday. Now when I enter "sudo crontab -l" I no longer get "no crontab for root". I am not sure what I did wrong. Here is my code:
#!/bin/bash
sudo crontab -l > rootcron 2> /dev/null
sudo echo "test" >> rootcron
sudo crontab rootcron
sudo rm rootcron
You didn't specify when the command is to be run. Typically you would see something like:
*/5 * * * * touch /tmp/test-cron
So basically you probably have an invalid cron file. What are the contents of the file now?
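For instance, a sketch of the script corrected to append a complete, scheduled entry (the schedule and the touch command are placeholders):
#!/bin/bash
sudo crontab -l > rootcron 2> /dev/null
# sudo on echo is pointless: the >> redirection is performed by your shell, not by sudo
echo "*/5 * * * * touch /tmp/test-cron" >> rootcron
sudo crontab rootcron
rm rootcron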
I have an init script (/etc/init.d) that should run my executable jar file as a service at boot. I need this script to be run by a specific user.
It is possible with su and sudo, but that splits the process, and I don't like that.
Is there another way to run this script as a limited user?
This is the relevant part of my init script:
#!/bin/bash
APP_NAME="myapp"
APP_HOME=/home/user1/jetty
JAVA_HOME=/opt/local/java/latest
echo "Service $APP_NAME - [$1]"
echo "JAVA_HOME -> $JAVA_HOME"
echo "APP_HOME -> $APP_HOME"
echo "APP_NAME -> $APP_NAME"
function start {
    if pkill -0 -f $APP_NAME.jar > /dev/null 2>&1
    then
        echo "Service [$APP_NAME] is already running. Ignoring startup request."
        exit 1
    fi
    echo "Starting application..."
    cd $APP_HOME
    nohup $JAVA_HOME/bin/java -jar $APP_HOME/$APP_NAME.jar \
        < /dev/null > $APP_HOME/logs/app.log 2>&1 &
}
On Ubuntu you should use the program start-stop-daemon for this. It has options for launching daemons as different users, managing pid files, changing the working directory, and pretty much anything else that is usually needed by init scripts.
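A minimal sketch of a start block built on start-stop-daemon (the pidfile path is an assumption; adjust it to your layout):
function start {
    echo "Starting application..."
    # --chuid switches to the target user before exec'ing java, so there is
    # no su/sudo wrapper process; --make-pidfile records the PID for stop.
    start-stop-daemon --start \
        --chuid user1 \
        --chdir "$APP_HOME" \
        --background \
        --make-pidfile --pidfile /var/run/$APP_NAME.pid \
        --exec "$JAVA_HOME/bin/java" -- -jar "$APP_HOME/$APP_NAME.jar"
}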
Please consider following crontab (root):
SHELL=/bin/bash
...
...
0 */3 * * * /var/maintenance/raid.sh
And the bash script /var/maintenance/raid.sh:
#!/bin/bash
echo -n "Checking /dev/md0... "
if ! [ $(mdadm --detail /dev/md0 | grep -c "active sync") -eq 2 ]; then
mdadm --detail /dev/md0 | mail -s "Raid problem /dev/md0" "my@email.com";
echo "ERROR"
else
echo "ALL OK"
fi;
#-------------------------------------------------------
echo -n "Checking /dev/md1... "
...
And this is what happens when...
...executed from shell prompt (bash):
Mail with mdadm --detail /dev/md0 output is sent to my email (proper behaviour)
...executed by cron:
Blank mail is sent to my email (subject is there, but there is no message)
Why such difference and how to fix it?
As indicated in the comments, do use full paths in crontab scripts, because cron runs with different environment variables than the normal user (root in this case).
In your case, /sbin/mdadm works where plain mdadm does not.
How do you get the full path of a command? Use command -v:
$ command -v rm
/bin/rm
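Applied to the script above, a sketch with the binaries resolved up front (both paths are assumptions; verify them with command -v on your system):
#!/bin/bash
MDADM=/sbin/mdadm   # from: command -v mdadm
MAIL=/usr/bin/mail  # from: command -v mail

echo -n "Checking /dev/md0... "
if ! [ "$($MDADM --detail /dev/md0 | grep -c 'active sync')" -eq 2 ]; then
    $MDADM --detail /dev/md0 | $MAIL -s "Raid problem /dev/md0" "my@email.com"
    echo "ERROR"
else
    echo "ALL OK"
fi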
cron tasks run in a shell that is started without running your login scripts, which are what set up paths, environment variables, etc.
When building cron tasks, prefer absolute paths and explicit options.
Before running your script as a cron job, you can test it with an empty environment using env -i:
env -i /var/maintenance/raid.sh
I have this very simple bash script:
#!/opt/bin/bash
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/bkp /volume1/backups/sql 2>&1 > /var/log/rdiff-backup.log;
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/www/vhosts /volume1/backups/vhosts 2>&1 >> /var/log/rdiff-backup.log;
/opt/bin/rdiff-backup --force --print-statistics myhost::/etc /volume1/backups/etc 2>&1 >> /var/log/rdiff-backup.log;
/opt/bin/rdiff-backup --force --print-statistics /volume1/homes /volume1/backups/homes 2>&1 >> /var/log/rdiff-backup.log;
cat /var/log/rdiff-backup.log | /opt/bin/nail -s "rdiff-backup log" me@email.com;
If I run the script from the command line, like this:
nohup /path/to/my/scipt.sh &
it works fine, appending each rdiff-backup statistics report to rdiff-backup.log and sending that file to my email address, as expected. But if I put the script in the crontab, the script performs only one rdiff-backup job and sends its statistics via email. I cannot understand why the script doesn't behave the same way...
Any idea?
This is my cron job entry:
30 19 * * * /opt/bin/bash /volume1/backups/backup.sh
Via crontab, only the last job is executed correctly; I think because it is the only local backup. When I execute the script from the command line I use the root user, and the public key of the root user is in /root/.ssh/authorized_keys on the remote machine. The owner of the crontab file is the root user too; I created it through "crontab -e" using the root account.
First of all you need to make sure the script used in cron doesn't output anything, otherwise:
cron will assume there is an error
you will not see the error if any
A solution for this is to use:
30 19 * * * /opt/bin/bash /volume1/backups/backup.sh >> /var/log/rdiff-backup-cron.log 2>&1
(Note the redirection order: 2>&1 must come after the file redirection, otherwise stderr still escapes to cron's mail.)
Second of all, it appears you are losing environment variables when executing via cron; try adding the environment settings to your script:
#!/opt/bin/bash
. /root/.profile
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/bkp /volume1/backups/sql > /var/log/rdiff-backup.log 2>&1
If /root/.profile doesn't exist, try sourcing /root/.bashrc or /etc/profile instead.
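Alternatively, a minimal sketch that sets the environment explicitly instead of sourcing a profile (the PATH entries are assumptions for this device; list whichever directories hold rdiff-backup, ssh and nail):
#!/opt/bin/bash
# Give cron the same PATH your login shell has; adjust to your system.
export PATH=/opt/bin:/opt/sbin:/usr/bin:/usr/sbin:/bin:/sbin

/opt/bin/rdiff-backup --force --print-statistics myhost::/var/bkp /volume1/backups/sql > /var/log/rdiff-backup.log 2>&1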
I hope this helps.