I am using a crontab to run my scripts periodically. Here is the line I have added via crontab -e:
* * * * * /usr/bin/python3 /home/mark/WORKSPACE/ep_prac/scripts/main.py > $HOME/test-cron.log 2>&1; mail -s "CronJob is run successfully" abc@gmail.com, xyz@gmail.com < /home/mark/test-cron.log
I want to mail the path of the "test-cron.log" file. I am running short of time; I have searched a lot but couldn't find a relevant solution.
Instead of sending the path, I found a way to attach the log file itself and mail it to the user.
mail -A "$HOME/test-cron.log" -s "Cron Job run success" xyz@gmail.com
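Putting the two pieces together, the whole crontab entry could then look like the line below -- this is only a sketch that keeps the paths and address from above, and it assumes your mail command supports -A for attachments (the < /dev/null just gives mail an empty body):
* * * * * /usr/bin/python3 /home/mark/WORKSPACE/ep_prac/scripts/main.py > $HOME/test-cron.log 2>&1; mail -A "$HOME/test-cron.log" -s "Cron Job run success" xyz@gmail.com < /dev/null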
I followed this question, Using CRON jobs to visit url?, to set up the following cron task:
*/30 * * * * wget -O - https://example.com/operation/lazy-actions?lt=SOME_ACCESS_TOKEN_HERE >/dev/null 2>&1
The above cron task works fine; it visits the URL every 30 minutes.
However, the access token is recorded in a text file at /home/myaccount/www/site/aToken.txt. The aToken file is a very simple one-line text file that just contains the token string.
I have tried to read its contents with cat and pass them to the crontab command, like the following:
*/30 * * * * wget -O - https://example.com/operation/lazy-actions?lt=|cat /home/myaccount/www/site/aToken.txt| >/dev/null 2>&1
However, the above attempt fails to run the cron job.
I edit cron jobs with crontab -e (using nano) on Ubuntu 16.04.
This is a quick solution that will do exactly what you want without the complicated one-liner:
Create this file in your myaccount home directory -- you may also put it into your bin directory if you prefer; just remember where you put it so you can call it from your cron entry. Also make sure the user has permission to read/write the directory the .sh file is in.
wget.sh
#!/bin/bash
#simple cd -- change directory
cd /home/myaccount/www/site/
#grab token into variable aToken
aToken=`cat aToken.txt`
#simple cd -- move to wget directory
cd /wherever/you/want/the/wget/results/saved
#Notice the $ -- This is how we let the shell know that aToken is a variable = $aToken
#wget -O - https://example.com/operation/lazy-actions?lt=$aToken
wget -q -nv -O /tmp/wget.txt "https://example.com/operation/lazy-actions?lt=$aToken" >/dev/null 2>/dev/null
# You can write logs etc. afterward here, e.g.
echo "Job was successful" >> /dir/to/logs/success.log
Then simply call this file from your cron entry, as you are doing already.
*/30 * * * * sh /home/myaccount/www/site/wget.sh >/dev/null 2>&1
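Before relying on cron, it may help to make the script executable and run it once by hand -- a small sketch, assuming the path used above:
chmod +x /home/myaccount/www/site/wget.sh
sh /home/myaccount/www/site/wget.sh
Then check /tmp/wget.txt and your success.log to confirm it did what you expect.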
Building on this question, Concatenate in bash the output of two commands without newline character, I got the following simple solution:
wget -O - https://example.com/operation/lazy-actions?lt="$(cat /home/myaccount/www/site/aToken.txt)" >/dev/null 2>&1
Here the command substitution reads the contents of the text file and inserts them into the command line.
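In the crontab itself, the full entry would then look something like the line below -- a sketch only, keeping the original schedule, URL, and token path (there is no % character in it, so nothing needs crontab's \% escaping):
*/30 * * * * wget -O - https://example.com/operation/lazy-actions?lt="$(cat /home/myaccount/www/site/aToken.txt)" >/dev/null 2>&1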
I am trying crontab, but sending email from crontab is not working. I want crontab to run a shell script, and that shell script to run a Python script.
One crontab entry I tried is:
* * * * * echo " This is current date and time $(date)"
but this is not getting printed on the screen. I don't understand what I am doing wrong.
For sending the email, use this command:
echo " This is a message " | mailx -s "Subject" mymail#email.com
You need to make sure port 25 is open on your system.
You can use the mailq command to check for email in the queue on your system.
You can check whether the local port is open with:
telnet localhost 25
nc localhost 25
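Putting it together, a crontab entry that mails the date test from above could look like the line below -- a sketch only, assuming a working local MTA and with the address kept as a placeholder:
* * * * * echo "This is current date and time $(date)" | mailx -s "cron test" mymail@email.com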
I have this line in crontab:
* * * * * /var/www/dir/sh/mysql_dumb.sh | mail -s "mysql_dump" example@mail.com
(every minute, only as a sample)
So, all works fine, but the email is empty.
UPDATE:
The output from mysql_dumb.sh is a *.sql file, and the script saves the file in a directory.
How can I send a copy of this output (the *.sql file) from mysql_dumb.sh to my email?
mysql_dumb.sh:
#!/bin/bash
PATH=/usr/bin:/bin
SHELL=/bin/bash
/usr/bin/mysqldump -u USER -pPASS DATABASE > /var/www/dir/backup/backup_DB_`date +%d_%m_%Y`.sql
If the script is reporting errors, they may be going to stderr, but the pipe only passes stdout to mail. You can include stderr by adding 2>&1 to the command:
* * * * * /var/www/dir/sh/mysql_dumb.sh 2>&1 | mail -s "mysql_dump" example@mail.example
From a crond perspective, a more accurate approach is to set MAILTO in your crontab; cron then mails anything the jobs print to stdout/stderr to that address:
MAILTO=example#mail.com
* * * * * /var/www/dir/sh/mysql_dumb.sh
* * * * * /var/www/dir/sh/other.sh
* * * * * /var/www/dir/sh/other2.sh
Look at the last line of mysql_dumb.sh:
/usr/bin/mysqldump -u USER -pPASS DATABASE > /var/www/dir/backup/backup_DB_`date +%d_%m_%Y`.sql
The > is redirecting the output of mysqldump to the file /var/www/dir/backup/backup_DB_`date +%d_%m_%Y`.sql
Do you want to store a backup of the database locally?
If not, take out the > /var/www/dir/backup/backup_DB_`date +%d_%m_%Y`.sql and put the crontab entry back to:
* * * * * /var/www/dir/sh/mysql_dumb.sh 2>&1 | mail -s "mysql_dump" example@mail.example
If you do want a copy of the file locally, I would suggest using tee, which writes the output to the file and also passes it back out on stdout, where the pipe to mail will pick it up.
I would change the last line of mysql_dumb.sh to be:
/usr/bin/mysqldump -u USER -pPASS DATABASE | tee /var/www/dir/backup/backup_DB_`date +%d_%m_%Y`.sql
Again, I would change the crontab entry back to:
* * * * * /var/www/dir/sh/mysql_dumb.sh 2>&1 | mail -s "mysql_dump" example@mail.example
The advantage here is that mail can read the information from stdout and isn't dependent on the file being written and then read correctly. While that may be a small difference, in my experience using tee is more reliable.
If you want to store your cron job's output in a file on the server and also send that output file to your mail address, you can use the command below.
You can put it in a cron job to run automatically at a specific interval; here I am dumping a MySQL database and sending it to my mail with the dumped SQL file attached.
* * * * * mysqldump --user username --password='p#ssword' DBNAME | gzip > /home/user/db_backup/db_backup_`date +\%d-\%m-\%Y-\%H-\%M`.sql.gz && mailx -a /home/user/db_backup/db_backup_`date +\%d-\%m-\%Y-\%H-\%M`.sql.gz -s 'database backup date:'`date +\%d-\%m-\%Y-\%H:\%M` example@gmail.com
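The same steps can also be written as a small script, which keeps the crontab line short and avoids escaping the % characters in date (inside a crontab, % is special and has to be written as \% as above). This is only a sketch -- the script location, credentials, paths, and address are placeholders taken from the one-liner, and it assumes the same mailx where -a attaches a file:
#!/bin/bash
# Hypothetical location: /home/user/db_backup.sh -- adjust paths, credentials, and address.
BACKUP="/home/user/db_backup/db_backup_$(date +%d-%m-%Y-%H-%M).sql.gz"
# Dump and compress the database, then mail the archive as an attachment.
mysqldump --user username --password='PASSWORD' DBNAME | gzip > "$BACKUP" \
  && mailx -a "$BACKUP" -s "database backup date: $(date +%d-%m-%Y-%H:%M)" example@gmail.com
The crontab entry then only needs to call the script on whatever schedule you want, e.g. 0 2 * * * /home/user/db_backup.sh.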
I have one cron job like this:
* * * * * urlwatch | mail -s "job changes" pc_xxx@msn.com
It mails every minute as expected. However, when I alter the test HTML page on my local server it doesn't email the differences; it just continues to send a blank mail with the title 'job changes'.
When I paste the job at a prompt:
pc@dellbox:/$ urlwatch | mail -s "job changes" pc_xxx@msn.com
and run it before/after changing the HTML, it emails the differences in the second email as expected.
(pc is the owner of urls.txt and the cronjob was created by pc via crontab -e)
Why does the cron version not email the urlwatch output?
This is driving me batty...
Any/all help gratefully received.
PS: couldn't create urlwatch as a new tag - need 1500 rep :(
Update:
If I split the command into two bits like this:
urlwatch > ~/.urlwatch/output.txt
mail -s "output" pc_xxx@msn.com < ~/.urlwatch/output.txt
This works.
If I join the two statements with a pipe like this:
urlwatch > ~/.urlwatch/output.txt | mail -s "output" pc_xxx@msn.com < ~/.urlwatch/output.txt
I get a prompt IMMEDIATELY that says
Null message body; hope that's ok
I notice urlwatch takes 2 - 3 seconds to complete, and I understand shell commands wait for preceding commands to finish (unless you're using &?). Dunno if this is significant.
Also, I'm using sSMTP...
Use the full path when calling urlwatch:
$ which urlwatch
/usr/bin/urlwatch
Then your cron entry has to be:
* * * * * /usr/bin/urlwatch | mail -s "job changes" pc_xxx@msn.com
Put the .urlwatch directory under /root/, as the program will look there to store its data.
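On the update in the question: in a pipeline both commands start at the same time, so mail reads output.txt before urlwatch has finished writing it, which is why the "Null message body" prompt appears immediately. Chaining the two steps with && instead of | makes mail wait for urlwatch to finish -- a sketch only, using the full path suggested above and the placeholder address from the question:
* * * * * /usr/bin/urlwatch > ~/.urlwatch/output.txt && mail -s "output" pc_xxx@msn.com < ~/.urlwatch/output.txt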
@askchipbug: your case looks very specific, but here are some things to try that may help:
-check /var/log/mail.err or /var/log/mail.log to find hints
-check the crontab user/permissions
-some servers strip content for no good reason
-I use mailx; it works fine for me, and here is my cron job in crontab -e:
* * * * * urlwatch --urls=/var/www/html/urls.txt --hooks=/home/foo/hooks.py | mailx -v -r "bar@gmail.com" -s "This is the subject line" -S smtp="smtp.gmail.com:587" -S smtp-use-starttls -S smtp-auth=login -S smtp-auth-user="foo2@gmail.com" -S smtp-auth-password="bardaf16charsfoo" -S ssl-verify=ignore recipient@hotmail.com
I found the info on heirloom-mailx client installation here: http://www.binarytides.com/linux-mail-with-smtp/
Hope it helps.
I have this very simple bash script:
#!/opt/bin/bash
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/bkp /volume1/backups/sql 2>&1 > /var/log/rdiff-backup.log;
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/www/vhosts /volume1/backups/vhosts 2>&1 >> /var/log/rdiff-backup.log;
/opt/bin/rdiff-backup --force --print-statistics myhost::/etc /volume1/backups/etc 2>&1 >> /var/log/rdiff-backup.log;
/opt/bin/rdiff-backup --force --print-statistics /volume1/homes /volume1/backups/homes 2>&1 >> /var/log/rdiff-backup.log;
cat /var/log/rdiff-backup.log | /opt/bin/nail -s "rdiff-backup log" me@email.com;
if I run the script from the command line, in this way:
nohup /path/to/my/scipt.sh &
it works fine, appending each rdiff-backup statistics report to rdiff-backup.log and sending this file to my email address, as expected. But if I put the script in the crontab, the script performs only one rdiff-backup job and sends the statistics via email. I cannot understand why the script doesn't work in the same way...
Any idea?
This is my cronjob entry:
30 19 * * * /opt/bin/bash /volume1/backups/backup.sh
Via crontab only the last job is executed correctly, I think because it is the only local backup. When I execute the script from the command line I use the root user, and the public key of the root user is in /root/.ssh/authorized_keys on the remote machine. The owner of the crontab file is the root user too; I created the entries through "crontab -e" using the root account.
First of all, you need to make sure the script used in cron doesn't output anything; otherwise:
cron will assume there is an error
you will not see the error, if any
A solution for this is to use:
30 19 * * * /opt/bin/bash /volume1/backups/backup.sh >> /var/log/rdiff-backup-cron.log 2>&1
Second of all, it appears you are losing environment variables when executing via cron; try adding the environment settings to your script:
#!/opt/bin/bash
. /root/.profile
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/bkp /volume1/backups/sql 2>&1 > /var/log/rdiff-backup.log
If /root/.profile doesn't exist, try adding . /root/.bashrc or . /etc/profile instead.
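Alternatively, instead of sourcing a profile, you can set the few variables the script actually needs at the top of the script itself -- a minimal sketch, where the PATH entries are assumptions based on the /opt/bin paths already used in the script:
#!/opt/bin/bash
# Set the environment explicitly rather than sourcing a profile.
# These directories are assumptions; adjust to wherever rdiff-backup, nail, and ssh live.
export PATH=/opt/bin:/usr/bin:/bin
export HOME=/root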
I hope this helps.