I have a bash script which picks up files from /tmp and emails them to me. I run this script as root and it works perfectly, but I am trying to automate it with crontab.
I added the job to crontab, again running as root, and now I get 'Couldn't lock /sent'.
I confirmed it's using the file in /root by changing its name in Muttrc, and I tried permissions of 600 and 777.
(I'm also getting a Segmentation fault error; I'm hoping that will go away if I fix the above.)
Does anyone have any ideas why Mutt behaves differently as a cron job with the same user and the same file?
I simplified the script as follows and it does exactly the same thing: it works from a root shell, but not from crontab.
Error:
Couldn't lock /sent
/data/mediators/email_file: line 5: 1666 Segmentation fault mutt $email -s "test" -i /tmp/test.txt < /dev/null
email_file script:
#!/bin/bash
email=——@——.com
mutt $email -s "test" -i /tmp/test.txt < /dev/null
crontab:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=——@—-.com
HOME=/
54 02 * * * root /data/mediators/email_file
I also added printenv to the job and compared to a server where this runs OK. The difference is that the working system has USER=root, whereas the non-working one does not show this variable as being set.
The issue is a combination of the HOME=/ environment variable in your crontab and mutt's default record setting, which is ~/sent.
mutt appends sent emails to the record file, so decide whether you want to keep them: either fix the HOME variable in your crontab, or set mutt's record to a meaningful value.
If you want to set it, add this option to the mutt command in email_file:
-e 'set record=/root/sent'
or unset it with:
-e 'unset record'
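For example, the simplified email_file could become something like this (a minimal sketch; the address is a placeholder for the redacted one):
#!/bin/bash
# Give mutt an absolute record path so it no longer depends on
# cron's HOME=/ setting.
email=someone@example.com   # placeholder address
mutt -e 'set record=/root/sent' -s "test" -i /tmp/test.txt "$email" < /dev/null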
You can find more in the muttrc(5) man page:
record
Type: path
Default: “~/sent”
This specifies the file into which your outgoing messages should be appended. (This is meant as the primary method for saving a copy of your messages, but another way to do this is using the “my_hdr” command to create a “Bcc:” field with your email address in it.)
The value of $record is overridden by the $force_name and $save_name variables, and the "fcc-hook" command. Also see $copy and $…
Related
I have a script that checks if the PPTP VPN is running, and if not, reconnects it. When I run the script manually it executes fine, but when I run it as a cron job it doesn't work.
* * * * * /bin/bash /var/scripts/vpn-check.sh
Here is the script:
#!/bin/sh
/bin/ping -c3 192.168.17.27 > /tmp/pingreport
result=`grep "0 received" /tmp/pingreport`
truncresult="`echo "$result" | sed 's/^\(.................................\).*$$'`"
if [[ $truncresult == "3 packets transmitted, 0 received" ]]; then
/usr/sbin/pppd call home
fi
Finally I found a solution... instead of entering the cron job with
crontab -e
I needed to edit the crontab file directly:
nano /etc/crontab
adding, for example, something like
*/5 * * * * root /bin/bash /var/scripts/vpn-check.sh
and it's fine now!
Thank you all for your help ... hope my solution will help other people as well.
After a long time getting errors, I just did this:
SHELL=/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
* * * * * /bin/bash /home/joaovitordeon/Documentos/test.sh
Where test.sh contains:
#!/bin/bash
/usr/bin/python3 /home/joaovitordeon/Documentos/test.py;
In my case, the issue was that the script wasn't marked as executable. To make sure it is, run the following command:
chmod +x your_script.sh
If you're positive the script runs outside of cron, execute
printf "SHELL=$SHELL\nPATH=$PATH\n* * * * * /bin/bash /var/scripts/vpn-check.sh\n"
Do crontab -e for whichever crontab you're using and replace it with the output of the above command. This should mirror most of your environment, in case there is some missing path issue or something else. Also check the logs for any errors it's getting.
Though it definitely looks like the script has an error, or you messed something up when copying it here:
sed: -e expression #1, char 44: unterminated `s' command
./bad.sh: 5: ./bad.sh: [[: not found
A simple alternative script:
#!/bin/bash
if [[ $(ping -c3 192.168.17.27) == *"0 received"* ]]; then
/usr/sbin/pppd call home
fi
Your script can be corrected and simplified like this:
#!/bin/sh
log=/tmp/vpn-check.log
{ date; ping -c3 192.168.17.27; } > $log
if grep -q '0 received' $log; then
/usr/sbin/pppd call home >>$log 2>&1
fi
Through our discussion in the comments we confirmed that the script itself works, but pppd doesn't when run from cron. This is because something must be different between an interactive shell, like your terminal window, and cron. This kind of problem is very common, by the way.
The first thing to do is to try to remember what configuration is necessary for pppd. I don't use it, so I don't know. Maybe you need to set some environment variables? In that case, most probably you set something in a startup file like .bashrc, which is usually not read by a non-interactive shell, and that would explain why pppd doesn't work.
The second thing is to check the logs of pppd. If you cannot find the logs easily, look into its man page and its configuration files, and try to find the logs, or how to make it log. Based on the logs, you should be able to find what is missing when running in cron and resolve your problem.
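For instance, pppd normally logs via syslog, so on many systems a check like this will show its messages (the log path is an assumption; it varies by distro):
grep pppd /var/log/syslog    # Debian/Ubuntu; RHEL-like systems use /var/log/messages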
I was having a similar problem that was resolved when sh was put before the command in the crontab.
This did not work:
@reboot ~myhome/mycommand >/tmp/logfile 2>&1
This did:
@reboot sh ~myhome/mycommand >/tmp/logfile 2>&1
In my case:
crontab -e
then adding the line:
* * * * * ( cd /directory/of/script/ && /bin/sh /directory/of/script/scriptItself.sh )
In fact, if I added "root" as the user, cron thought "root" was a command, and it didn't work; a user crontab edited with crontab -e has no user field.
As a complement to the other answers: didn't you forget the username in your crontab entry?
Try this:
* * * * * root /bin/bash /var/scripts/vpn-check.sh
EDIT
Here is a patched version of your code:
#!/bin/bash
# bash, not plain sh: the [[ ... ]] test below is a bashism
/bin/ping -c3 192.168.17.27 > /tmp/pingreport
result=`grep "0 received" /tmp/pingreport`
truncresult=`echo "$result" | /bin/sed 's/^\(.................................\).*$/\1/'`
if [[ $truncresult == "3 packets transmitted, 0 received" ]]; then
/usr/sbin/pppd call home
fi
In my case, it could be solved by using this:
* * * * * root ( cd /directory/of/script/ && /directory/of/script/scriptItself.sh )
I had used some relative ./folder/ references in the script, which didn't work.
The problem statement: the script executes when run manually in the shell, but when run through cron it gives a "java: command not found" error.
Please try the two options below; they should fix the issue.
Ensure the script is executable. If it's not, execute:
chmod a+x your_script_name.sh
The cron job doesn't run with the same environment as the user executing the script manually, so it doesn't have access to the same $PATH variable, which means it can't locate the Java executable. We should first fetch the value of the PATH variable and then set (export) it in the script.
echo $PATH can be used to fetch the value of the PATH variable,
and your script can be modified as below. Please see the second line, starting with export:
#!/bin/bash
# bash, not plain sh, because of the [[ ... ]] test below
export PATH=<provide the value of echo $PATH>
/bin/ping -c3 192.168.17.27 > /tmp/pingreport
result=`grep "0 received" /tmp/pingreport`
truncresult=`echo "$result" | sed 's/^\(.................................\).*$/\1/'`
if [[ $truncresult == "3 packets transmitted, 0 received" ]]; then
/usr/sbin/pppd call home
fi
First of all, check whether the cron service is running. You know the first question from the IT helpdesk: "Is the PC plugged in?"
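A quick way to check, depending on the distro (the service is named cron on Debian/Ubuntu and crond on RHEL-like systems; adjust as needed):
systemctl status cron    # systemd distros; use crond on RHEL/CentOS
service cron status      # older SysV-init systems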
For me, this was happening because the cron job was executing from the /root directory but my shell script (a script to pull the latest code from GitHub and run the tests) was in a different directory. So I had to edit my script to cd into my scripts folder. My debug steps were:
1. Verified that my script runs independently of the cron job.
2. Checked /var/log/cron to see if the cron jobs were running. Verified that the job was running at the intended time.
3. Added an echo command to the script to log the start and end times to a file. Verified that those were getting logged, but not the actual commands.
4. Logged the result of pwd to the same file and saw that the commands were being executed from /root.
5. Added a cd to my script directory as the first line in the script. When the cron job kicked off this time, the script executed just like in step 1 (see the sketch below).
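A minimal sketch of that fix (the paths are illustrative, not the actual ones from my setup):
#!/bin/bash
# cron starts jobs in $HOME (here /root), so move to the script's
# directory first; otherwise relative paths resolve against /root.
cd /home/me/scripts || exit 1
./pull_and_run_tests.sh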
It was the timezone in my case. I scheduled cron with my local time, but the server had a different timezone and the job did not run at all. So make sure your server has the right time; check it with the date command.
First run the command env > env.tmp,
then run cat env.tmp.
Copy the full PATH=.................. line and paste it into crontab -e, on a line before your cron jobs.
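The crontab would then look something like this (the PATH value shown is only an example; use the one from your own env.tmp):
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
* * * * * /bin/bash /var/scripts/vpn-check.sh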
Try this:
/home/<your site folder name>/public_html/gistfile1.sh
Set the full path in cron, like the above.
I am trying to run a shell script from crontab on Ubuntu. I have tried Googling and other links, but nothing has helped so far.
This is my crontab:
*/2 * * * * sudo bash /data/html/mysite/site_cleanup.sh
This is the content of my sh file:
#!/bin/sh
# How many days retention do we want ?
DAYS=0
# getting the present day
now=$(date +"%m_%d_%Y")
# Where is the base directory
BASEDIR=/data/html/mysite
# Where is the backup directory
BKPDIR=/data/html/backup
# Where is the log file
LOGFILE=$BKPDIR/log/mysite.log
# add to tar
tar -cvzf $now.tar.gz $BASEDIR
mv $now.tar.gz $BKPDIR
# REMOVE OLD FILES
echo `date` Purge Started >> $LOGFILE
find $BASEDIR -mtime +$DAYS | xargs rm
echo `date` Purge Completed >> $LOGFILE
The same script runs from a terminal and gives the desired result.
Generic troubleshooting for noninteractive shell scripts
Put set -x; exec 2>/path/to/logfile at the top of your script to log all subsequent commands to a file as they're run. If this doesn't work, you'll know that your script isn't being run at all; if it does, you'll know where it fails and how.
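For the script in this question, the top would look something like this (the trace-file path is just an example):
#!/bin/sh
set -x                           # print each command as it is run
exec 2>/tmp/site_cleanup.trace   # write the trace (stderr) to a log file
# ... rest of site_cleanup.sh unchanged ...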
If this is a personal crontab
If you're running crontab -e as a user (without sudo), then the crontab being modified is one for commands run with that user's permissions. Check that file permissions allow that user to modify the content in question (which, if these files are in a cgi-bin directory, may require being run by the same user as the web server).
If your intent is to have commands run as root, rather than as your own user, be sure you use sudo when editing the crontab to edit the system crontab instead (but please take care as to your script's correctness in this case -- carelessness such as missing quotes or lack of appropriate precautions in xargs usage can cause a script to delete the wrong files if malicious filenames are created):
sudo crontab -e ## to edit the system (root) crontab
...or, if you're cleaning up files owned by the apache user (for example; check which account is correct for your own operating system and web server):
sudo -u apache crontab -e ## to edit the apache user's crontab
Troubleshooting for a system crontab
Do not attempt to put a sudo command within the commands run by cron; with sudo's default configuration, it requires a TTY (a keyboard and screen) to be attached to a session in order to run. Thus, your crontab line should not contain sudo, but instead should look like the following:
*/2 * * * * bash /data/html/mysite/site_cleanup.sh
Your issue is likely coming from the sudo call in your user-level cron. Unless you've configured sudoers to allow that script to run without a password, it'll hang up every time.
So you can look up how to run a script with no password by editing the sudoers file, remove the sudo call if you aren't doing something in your script that actually needs superuser permissions, or, as a last-ditch and genuinely bad idea, call your script from root's cron by doing sudo crontab -e (or sudo env EDITOR=nano crontab -e if you prefer nano as your editor).
Try adding this line to the crontab of user root, without the sudo, like this:
*/2 * * * * bash /data/html/mysite/site_cleanup.sh
I'm having a problem with a script that writes to a log file during a backup procedure. It tested perfectly when called from the root shell, but fails when run from the cron daemon.
Backup is done over a series of partitions, and the on-site admin will rotate the drives in the top dock weekly. In order to know where the most recent backup is located, I've included the following lines:
sudo hdparm -I /dev/sdb | grep 'Model Number'
sudo hdparm -I /dev/sdb | grep 'Serial Number'
I've tried this with a >> /batch/backup.log and without.
When the bash script is run from the command line, it works beautifully. But when the crontab calls the script the output from these lines is blank.
crontab entry: 00 00 * * * /batch/backup.bat >> /batch/backup.log
I have no idea why other than the possibility that cron can't handle the pipe or the grep or something.
I have isolated the lines in a test.bat but they remain blank.
The backup script uses hdparm to spin down the drive at the end, but now I wonder if that's not working properly either, if cron can't handle hdparm.
That is probably because hdparm is not in the PATH when the script is executed through cron. Although less likely, the same might apply to grep as well.
Try replacing hdparm with /full/path/to/hdparm in your script.
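For the lines in the question, that would look something like this (verify the location on your own system with which hdparm; /sbin/hdparm is an assumption, and sudo is dropped on the assumption the job runs from root's crontab):
/sbin/hdparm -I /dev/sdb | grep 'Model Number'
/sbin/hdparm -I /dev/sdb | grep 'Serial Number'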
You need to either put this in the root crontab, or you need to store your password in plain text and pipe it into the sudo command. That second option is obviously NOT RECOMMENDED. See https://askubuntu.com/questions/173924/how-to-run-a-cron-job-using-the-sudo-command
As @Paul hinted, it is also possible to create a directive in /etc/sudoers to override the need for a password for a specific user/host/command combination. See https://askubuntu.com/a/159009
Copying just a little bit from that answer:
If your user is called user and your host is called host you could add these lines to /etc/sudoers:
user host = (root) NOPASSWD: /sbin/shutdown
user host = (root) NOPASSWD: /sbin/reboot
This will allow the user user to run the desired commands on host without entering a password. All other sudoed commands will still require a password.
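Applied to this question, a sketch of such a sudoers line (assuming the job runs as user backup on host backuphost and that hdparm lives at /sbin/hdparm) might be:
backup backuphost = (root) NOPASSWD: /sbin/hdparm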
Edit the crontab entry as below
00 00 * * * /batch/backup.bat 1> /batch/backup.op 2> /batch/backup.err
Standard output will be redirected to /batch/backup.op
Standard error will be redirected to /batch/backup.err
Check the errors in /batch/backup.err and fix them.
I have a web server (odin) and a backup server (jofur). On jofur, I can run the following code to rsync my web directories (via key authentication) from odin to jofur:
rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
If I enter this into the command line, everything rsyncs perfectly:
myuser@jofur:~$ rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
receiving incremental file list
sent 23 bytes received 1921 bytes 1296.00 bytes/sec
total size is 349557271 speedup is 179813.41
I want this to run every morning, so I edited my crontab to read this:
0 4 * * * rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
This doesn't work. The following message is deposited in /var/mail/myuser:
Could not create directory '/home/myuser/.ssh'.
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(605) [Receiver=3.0.9]
I'm not sure what this error means. I'm wary of futzing blindly with permissions because I don't want to leave any backdoors open. Any suggestions?
It's hard to tell whether cron is using the wrong rsync binary or whether rsync requires some variable which is not being set in cron. Please set up the stdout/stderr redirection as shown below and pass on the output of the log file.
Also, try doing a which rsync from the command line; this will tell you which rsync you are using from the command line.
0 4 * * * rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin > /tmp/cron_output.log 2>&1
EDIT:
Can you create a shell script called SOME_DIR/cron_job_rsync.sh which contains the following? Make sure you set the execute bit.
#!/bin/sh
/usr/sbin/rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
And modify the cron job as shown below:
0 4 * * * SOME_DIR/cron_job_rsync.sh >/tmp/cron_output.log 2>&1
I had a similar issue. Mine was that the HOME directory was encrypted.
While your user is logged in, known_hosts works fine.
But when it's a cron job, cron uses the right user BUT it does not have access to your $HOME/.ssh directory, because the home is still encrypted :-(
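One possible workaround (my suggestion, not part of the original answer) is to keep the key and known_hosts outside the encrypted home and point ssh at them explicitly; the /etc/backup paths here are illustrative:
rsync -avz -e "ssh -i /etc/backup/id_backup -o UserKnownHostsFile=/etc/backup/known_hosts" backups@odin.mydomain.net:/home/backups /home/myuser/odin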
I got the same error as you.
I finally found that the user's home directory is a mount point that only gets mounted when the user logs in.
You can use the mount command to check whether your home directory is set up the same way.
So I logged in, did cd /, then ran:
```
cp -ar ${HOME}/.ssh /tmp/   # save .ssh while the home is still mounted
sudo umount ${HOME}         # unmount the user's encrypted home
mv /tmp/.ssh ${HOME}        # put .ssh on the underlying directory, visible to cron
```
This may fail; check that you have the right to write to ${HOME}, and if not, try sudo or make ${HOME} writable.
After that, everything was fine.
Please follow the steps below to avoid the error:
http://umasarath52.blogspot.in/2013/09/solved-rsync-not-executing-via-cron.html
I resolved this issue by communicating with the administrators for my server. Here is what they told me:
For advanced security and performance, we use 1H (Hive) which
utilizes a chrooted environment for users. Libraries and binaries
should be copied to the chrooted environment to make them accessible.
They sent me a follow-up email telling me that the "Relevent" packages had been installed. At that point, the problem was resolved. Unfortunately, I didn't get any additional information from them. The host was Arvixe, but I'm guessing that anyone using 1H (Hive) will encounter a similar problem. Hopefully this answer will be helpful.
Use the rrsync script together with a dedicated ssh key as follows:
REMOTE server
mkdir ~/bin
gunzip /usr/share/doc/rsync/scripts/rrsync.gz -c > ~/bin/rrsync
chmod +x ~/bin/rrsync
LOCAL computer
ssh-keygen -f ~/.ssh/id_remote_backup -C "Automated remote backup" #NO passphrase
scp ~/.ssh/id_remote_backup.pub devel@10.10.10.83:/home/devel/.ssh
REMOTE computer
cat id_remote_backup.pub >> authorized_keys
Prepend the following to the newly added line:
command="$HOME/bin/rrsync -ro ~/backups/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding
So that the result looks like:
command="$HOME/bin/rrsync -ro ~/backups/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding ssh-rsa AAA...vp Automated remote backup
LOCAL
Put the following script, with execute permission, in your crontab:
#!/bin/sh
echo ""
echo ""
echo "CRON:" `date`
set -xv
rsync -e "ssh -i $HOME/.ssh/id_remote_backup" -avzP devel#10.10.10.83:/ /home/user/servidor
Source: http://www.guyrutenberg.com/2014/01/14/restricting-ssh-access-to-rsync/
I did several steps to make it work:
Check your paths. For every command you'll use, check which <command> and use that full path in the crontab.
Open the crontab as the user you want to run it as, so it has access to that user's ssh key.
Add (remember to use which) ssh-agent && [your ssh command], so it can connect over ssh.
If authentication still fails at this point, try generating a passwordless ssh key. This way you can skip the password prompt.
For debugging it is useful to add -vvv to the ssh command in rsync; it makes clear what goes wrong.
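For example, combined with the key from the setup above:
rsync -avzP -e "ssh -vvv -i $HOME/.ssh/id_remote_backup" devel@10.10.10.83:/ /home/user/servidor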
Using the correct keyring solved the issue for me. Add the following line to your crontab:
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
In total, your crontab (edited by calling crontab -e from your terminal) should now look as follows:
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
0 4 * * * rsync -avz backups@odin.mydomain.net:/home/backups /home/myuser/odin
Background: It turns out that some Linux distributions use a keyring to protect your public-private key pair, so the key pair is password-protected without you ever noticing. Consequently, rsync is not able to open your ssh key for authentication.
Note that I also omitted the -e ssh; I think it is not necessary here.
Further Troubleshooting: rsync does not provide a lot of debugging output. What helped me identify the problem was to put a dummy scp command, which is much more verbose, in my crontab. A crontab entry for troubleshooting may look something like:
* * * * * scp -v backups@odin.mydomain.net:/home/backups/dummy.txt /home/myuser/odin/dummy.txt >> /home/myuser/odin/dummy.txt.log 2>&1
The above command will run every minute (great for developing) and copy the file /home/backups/dummy.txt to your local machine. All logs (stdout and stderr) are written to /home/myuser/odin/dummy.txt.log. Inspect these logs to see precisely where the error comes from.
Reference: The troubleshooting explained above led me to the solution: https://unix.stackexchange.com/a/332353/395749
I'm using a basic shell script to log the results of top, netstat, ps and free every minute.
This is the script:
/scripts/logtop:
TERM=vt100
export TERM
time=$(date)
min=${time:14:2}
top -b -n 1 > /var/log/systemCheckLogs/$min
netstat -an >> /var/log/systemCheckLogs/$min
ps aux >> /var/log/systemCheckLogs/$min
free >> /var/log/systemCheckLogs/$min
echo "Message Content: $min" | mail -s "Ran System Check script" email#domain.com
exit 0
When I run this script directly it works fine. It creates the files and puts them in /var/log/systemCheckLogs/ and then sends me an email.
I can't, however, get it to work when trying to get cron to do it every minute.
I tried putting it in /var/spool/cron/root like so:
* * * * * /scripts/logtop > /dev/null 2>&1
and it never executes
I also tried putting it in /var/spool/cron/myservername and also like so:
* * * * * /scripts/logtop > /dev/null 2>&1
it'll run every minute, but nothing gets created in systemCheckLogs.
Is there a reason it works when I run it but not when cron runs it?
Also, here's what the permissions look like:
-rwxrwxrwx 1 root root 326 Jul 21 01:53 logtop
drwxr-xr-x 2 root root 4096 Jul 21 01:51 systemCheckLogs
Normally crontabs are kept in /var/spool/cron/crontabs/. Also, normally, you update them with the crontab command, as this HUPs crond after you're done and makes sure the file ends up in the correct place.
Are you using the crontab command to create the cron entry? crontab <file> imports a file directly; crontab -e edits the current crontab with $EDITOR.
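For example:
crontab /path/to/mycron.txt   # install a crontab from a file
crontab -l                    # list the installed crontab
crontab -e                    # edit it with $EDITOR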
All jobs run by cron need the interpreter listed at the top, so cron knows how to run them.
I can't tell if you just omitted that line or if it is not in your script.
For example,
#!/bin/bash
echo "Test cron jon"
When running from /var/spool/cron/root, it may be failing because cron is not configured to run jobs for root there. On Linux, root cron jobs are typically run from /etc/crontab rather than from /var/spool/cron.
When running from /var/spool/cron/myservername, you probably have a permissions problem. Don't redirect the errors to /dev/null; capture and examine them.
Something else to be aware of: cron doesn't initialize the full run environment, which can sometimes mean you can run a script just fine from a fully logged-in shell, yet it doesn't behave the same from cron.
In the case above, you don't have a "#!/bin/sh"-style interpreter line at the top of your script. If root is configured to use something like a regular Bourne shell or csh, the syntax you use to populate your variables will not work. This would explain why the script would run but not populate your files. So if you need ksh, use "#!/bin/ksh". It's generally best not to trust the environment to keep these things sane. If you need your profile, run a ". ~/.profile" up front as well. Or, as a quick and dirty way to get a relatively full environment, run the job through su:
* * * * * su - root -c "/path/to/script" > /dev/null 2>&1
Just some things I've picked up over the years. You're definitely expecting ksh (or bash) based on your ${time:14:2} syntax, so you might want to be sure the script is using it.
Thanks for the tips... used a little bit of each answer to get to the bottom of this.
I did have the interpreter at the top (it wasn't shown here), but it may have been wrong.
I'm using #!/bin/bash now, and that works.
I also had to tinker with the permissions of the directory the log files are dumped into to get things working.