cron job to find a file and list its output if found - linux

I am trying to set up a cron job to run every 10 minutes, find two particular files (let's say a and b), and, if they are found, cat their contents along with the timestamp when each file was created, and send that as an email, on SUSE Linux.
Could anyone please advise?
Thank you
Jonu Joy

Assuming that mail-delivery as such works, and that you know how to edit crontabs ...
Put the following into a script (modify the paths to match your system; I don't have SUSE here to play with), make it executable, and run it from cron every ten minutes.
#!/bin/bash
find . -name a -o -name b | while read -r file; do ls -l "$file"; cat "$file"; echo ""; done | mail user@domain
And then:
chmod +x /path/to/script/above
Run from cron like so:
*/10 * * * * /path/to/script/above
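
If you want the timestamp printed more explicitly than ls -l shows it, here is a minimal sketch using GNU stat (the search path and recipient address are placeholders; note that Linux generally records modification time, not a true creation time):

#!/bin/bash
# Search a directory for files named a or b; adjust the path for your system.
find /some/dir -name a -o -name b | while read -r file; do
    # %y is the last-modification timestamp; most Linux filesystems
    # do not expose a creation time.
    stat -c 'Modified: %y' "$file"
    cat "$file"
    echo ""
done | mail -s "found files" user@domain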

Related

Remote to Local rolling backup script

I'm trying to create a bash script, run from crontab, that executes a remote-to-local backup. Everything works except the rolling-backup part, which is supposed to keep only the 4 most recent backups.
#!/bin/bash
dateForm=`date +%m-%d-%Y`
fileName=[redacted]-"$dateForm"
echo backup started for [redacted] on: $dateForm >> /home/backups/backLog.log
ls -tQ /home/backups/[redacted] | tail -n+5 | xargs -r rm
ssh root@[redacted] "tar jcf - -C /home/[redacted]/[redacted] ." > "/home/backups/[redacted]/$fileName".tar.bz2
if [ ! -f "/home/backups/[redacted]/$fileName.tar.bz2" ]
then
echo "something went wrong with the backup for $fileName!" >> /home/backups/backLog.log
else
echo "Backup completed for $fileName" >> /home/backups/backLog.log
fi
The ls line works just fine if executed in the directory, but crontab runs the script from elsewhere, and I need the script to live outside the folder it targets, so I can't get the piped ls to point rm at the correct directory.
I was able to come up with an interesting solution after studying the ls man page a little more, using find to grab the full paths:
ls -tQ $(find /home/backups/[redacted] -type f -name "*") | tail -n+5 | xargs -r rm
Just posting an answer for anyone who doesn't want a rolling-backup script that depends entirely on date formatting; with this approach there will ALWAYS be at least 4 backups in the targeted folder.
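
For reference, here is a variant that avoids parsing ls output altogether, sorting by modification time with GNU find's -printf (a sketch, assuming reasonably recent GNU findutils/coreutils for the -z flags; /home/backups/target stands in for the redacted directory):

# Emit "mtime path" pairs NUL-terminated, newest first;
# skip the 4 newest, delete the rest.
find /home/backups/target -maxdepth 1 -type f -printf '%T@ %p\0' \
    | sort -z -rn \
    | tail -z -n +5 \
    | cut -z -d' ' -f2- \
    | xargs -0 -r rm --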

Script to check the change of crontab using diff

I need a script that takes a copy of the current crontab into a file, then takes a fresh copy every day and compares the two using the "diff" command; if they do not match it needs to send an alert mail. Can anyone please help me with this?
Currently I'm using the script below, but the issue is that it sends alerts even when the changes made to the crontab are legitimate. I want to compare the actual contents using the diff command, so this script does not suit my requirement.
#!/bin/sh
export smtp=smtprelay.intra.coriant.com:25
CROND=/home/ssx00001
ALERT=redmine@coriant.com
checkf=last.crontab.check
if [ -f $checkf ]
then
find $CROND -type f -newer $checkf | while read tabfile
do
echo "Crontab file for Redmine has changed" | mail -s "Crontab changed" $ALERT
done
fi
touch $checkf
#!/bin/sh
export smtp=smtprelay.intra.coriant.com:25
ALERT=redmine@coriant.com
crontab -l > /home/ssx00001/y.txt
cat /home/ssx00001/y.txt
diff /home/ssx00001/x.txt /home/ssx00001/y.txt > /home/ssx00001/z.txt
ab=`cat /home/ssx00001/z.txt | wc -l`
echo $ab
if [ "$ab" -ne 0 ]; then
echo "Crontab for Redmine has changed" | mail -s "Crontab modified" $ALERT
fi
(/home/ssx00001 is the path where the files are stored.)
Also, create a file x.txt in /home/ssx00001 containing the current crontab entries.
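For instance, the baseline can be seeded once with:

crontab -l > /home/ssx00001/x.txt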
The problem you have is that the diff command requires two files to compare. You cannot check for changes in a file without saving an old version of the file to check against. The crontab command does not do this.
Your best bet is to write a wrapper around the crontab command which saves a copy of the original crontab file, runs crontab to edit and install the new file, and then runs diff with the file you saved.
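A minimal sketch of such a wrapper, to be run instead of calling crontab -e directly; it assumes a mailx-style mail(1) on the PATH and uses /var/tmp for the saved copy (both assumptions; the alert address is taken from the scripts above):

#!/bin/sh
# Hypothetical wrapper around crontab: snapshot the current crontab,
# let the user edit/install a new one, then mail the diff if it changed.
SAVED=/var/tmp/crontab.saved
DIFF=/var/tmp/crontab.diff
ALERT=redmine@coriant.com

crontab -l > "$SAVED" 2>/dev/null   # copy of the original crontab
crontab -e                          # edit and install the new crontab
crontab -l | diff "$SAVED" - > "$DIFF"
if [ -s "$DIFF" ]; then             # a non-empty diff means it changed
    mail -s "Crontab modified" "$ALERT" < "$DIFF"
fi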

syslog shows cronjob running

I have created a script to delete old files and put it in crontab to run every 2 minutes. I can see in syslog that the cron job runs, but the files are not deleted. I can run the script manually and it completes without any errors. I also used "sudo crontab -e" so the cron job runs with root permissions. Any ideas why the files are not deleted?
Crontab is as follows:
*/2 * * * * /bin/bash /mnt/md0/capture/delete_old_pcap.sh
02 00,12 * * * sh /usr/bin/nfexpire.sh
The script is as follows:
#!/bin/bash
ulimit -S -s 50000
LIMIT=10
NO=0
#Get the number of files that have `*.pcap` in their name
NUMBER=$(find /mnt/md0/capture/DCN/ -maxdepth 1 -name "*.pcap" |wc -l)
if [[ $NUMBER -gt $LIMIT ]] #if number greater than limit
then
del=$(($NUMBER-$LIMIT))
if [ "$del" -lt "$NO" ]
then
del=$(($del*-1))
fi
FILES=$(find /mnt/md0/capture/DCN/ -maxdepth 1 -type f -name "*.pcap" -print0 |$
rm -f ${FILES[@]}
#delete the originals
fi
Not sure it will solve your problem, but try:
*/2 * * * * /bin/sh /mnt/md0/capture/delete*.sh
02 00,12 * * * /bin/sh /usr/bin/nfexpire.sh
i.e. give the full path to the shell when executing the commands.
Wildcards won't work, as the other scripts will be taken as arguments to the first script (good point @broslow). Instead, make a script that calls all the other scripts.
Something like the following:
script /mnt/md0/capture/delete.sh:
for f in /mnt/md0/capture/delete.d/*.sh; do
/bin/sh "$f"
done
with all scripts in /mnt/md0/capture/delete.d/
and then in your crontab:
*/2 * * * * /bin/sh /mnt/md0/capture/delete.sh
Finally, check your mail on the local machine: cron mails each job's output and errors to the owning user (i.e. type mail on the command line as the user running the crontab, root in your case).

Script with lsof works well on shell not on cron

I have a small script to count open files on Linux and save the results into a flat file. I intend to run it from cron every minute and gather the results later. The script follows:
/bin/echo "Timestamp: ` date +"%m-%d-%y %T"` Files: `lsof | grep app | wc -l`"
And the crontab is this:
*/1 * * * * /usr/local/monitor/appmon.sh >> /usr/local/monitor/app_stat.txt
If I run it from the shell as ./script.sh, it works well and outputs:
Timestamp: 01-31-13 09:33:59 Files: 57
But from cron the output is:
Timestamp: 01-31-13 09:33:59 Files: 0
I'm not sure whether any special permissions are needed. I have also tried sudo on lsof, without luck.
Any hints?
From your working command line, run:
which lsof
which grep
which wc
which date
Take the full paths for each of these commands and put them into your shell script, producing something like:
/bin/echo "Timestamp: `/bin/date +"%m-%d-%y %T"` Files: `/usr/sbin/lsof | /bin/grep app | /bin/wc -l`"
OR you can set a PATH variable in your script to include the missing directories, i.e.
PATH=/usr/sbin:${PATH}
Also, unless you expect your script to run in a true Bourne shell environment, join the early '90s and use the $( cmd ... ) form for command substitution rather than backticks. The ksh93 book, published in 1995, already remarked that backticks for command substitution were deprecated ;-)
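For example, the one-liner above rewritten with $( ... ) substitution (same illustrative paths as before):

/bin/echo "Timestamp: $(/bin/date +"%m-%d-%y %T") Files: $(/usr/sbin/lsof | /bin/grep app | /bin/wc -l)"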
IHTH

bash script runs from shell but not from cron job

Cron installation is vixie-cron
/etc/cron.daily/rmspam.cron
#!/bin/bash
/usr/bin/rm /home/user/Maildir/.SPAM/cur/*;
I have this simple bash script that I want to run as a cron job (the full script also includes spam-learning commands before this part), but this part always fails with "File or directory not found". As far as I can figure, the metacharacter isn't being interpreted correctly when run as a cron job. If I execute the script from the command line, it works fine.
I'd like to know why this doesn't work, and of course a working solution :)
Thanks
edit #1
Came back to this question when I got the popular question badge for it. I first did this:
#!/bin/bash
find /home/user/Maildir/.SPAM/cur/ -type f | xargs rm
and just recently, reading through the xargs man page, changed it to this:
#!/bin/bash
find /home/user/Maildir/.SPAM/cur/ -type f | xargs --no-run-if-empty rm
The short xargs option is -r.
If there are no files in the directory, the wildcard will not be expanded and will be passed to the command literally. Since there is no file called "*", the command fails with "File or directory not found". Try this instead:
if [ -f /home/user/Maildir/.SPAM/cur/* ]; then
rm /home/user/Maildir/.SPAM/cur/*
fi
Or just use the "-f" flag to rm. The other problem with this command is what happens when there is too much spam for the maximum command-line length. Something like this is probably better overall:
find /home/user/Maildir/.SPAM/cur -type f -exec rm '{}' +
If you have an old find whose -exec can only run rm on one file at a time:
find /home/user/Maildir/.SPAM/cur -type f | xargs rm
That handles too many files as well as no files. Thanks to Charles Duffy for pointing out the + option to -exec in find.
Are you specifying the full path to the script in the cronjob?
00 3 * * * /home/me/myscript.sh
rather than
00 3 * * * myscript.sh
On another note, it's /bin/rm on all of the linux boxes I have access to. Have you double-checked that it really is /usr/bin/rm on your machine?
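You can check quickly with:

which rm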
Try adding
MAILTO=your@email.address
to the top of your cron file, and you should get any output/errors mailed to you.
Also consider adding the command directly as a cron job:
30 0 * * * /usr/bin/rm /home/user/Maildir/.SPAM/cur/*
Try using the force option, and forget about adding a path to the rm command; it should not be needed:
rm -f
This ensures that even if there are no files in the directory, the rm command will not fail. If this is part of a shell script, the * should work. It looks to me as if you might just have an empty directory.
I take it the rest of the script is being executed, right?
Is rm really located in /usr/bin/ on your system? I have always thought that rm should reside in /bin/.
