Could anybody please let me know why my code is not executing via a cron job, but executes perfectly when I run it manually?
cronjob
30 19 * * * /backup1/RMAN/dumpremoval.sh > /backup1/RMAN/dumpremoval.log2 2>&1
CODE
cat /backup1/RMAN/dumpremoval.sh
yest=$(date --date='1 day ago' +"%y%y%m%d")
yestdump=SMARTDB_D_$yest
ls -lrt $yestdump* > /backup1/RMAN/dumpexists 2> /dev/null
if [ $? -eq 0 ]
then
rm -rf $yestdump*
fi
My intention here is to delete yesterday's backup files daily; their names look like SMARTDB_D_20200409*.
Thank you..
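A likely culprit is the working directory: cron starts scripts in the invoking user's home directory, so the relative ls/rm above look in the wrong place. A minimal sketch with the paths anchored (assuming the dumps live in /backup1/RMAN; %Y%m%d is used here since it matches names like SMARTDB_D_20200409):

```shell
#!/bin/bash
# Anchor every path: cron runs with a minimal environment and starts
# in $HOME, not in the script's directory.
DIR=/backup1/RMAN
yest=$(date --date='1 day ago' +%Y%m%d)   # e.g. 20200409
# -f: exit 0 and stay quiet when no file matches
rm -f -- "$DIR/SMARTDB_D_$yest"*
```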
Related
I've made a little bash script to back up my Nextcloud files, including my database, from my Ubuntu 18.04 server. I want the backup to be executed every day. When the job is done I want to receive one mail saying it ran (and whether it was successful or not). With the current script I receive almost 20 mails and I can't figure out why. Any ideas?
My cronjob looks like this:
* 17 * * * "/root/backup/"backup.sh >/dev/null 2>&1
My bash script
#!/usr/bin/env bash
LOG="/user/backup/backup.log"
exec > >(tee -i ${LOG})
exec 2>&1
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:mode --on
mysqldump --single-transaction -h localhost -u db_user --password='PASSWORD' nextcloud_db > /BACKUP/DB/NextcloudDB_`date +"%Y%m%d"`.sql
cd /BACKUP/DB
ls -t | tail -n +4 | xargs -r rm --
rsync -avx --delete /var/www/nextcloud/ /BACKUP/nextcloud_install/
rsync -avx --delete --exclude 'backup' /var/nextcloud_data/ /BACKUP/nextcloud_data/
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:mode --off
echo "###### Finished backup on $(date) ######"
mail -s "BACKUP" name@domain.com < ${LOG}
Are you sure about the cron string? To me this means "at every minute past hour 17".
It should be more like 0 17 * * *, right?
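For reference, the five crontab fields are minute, hour, day of month, month, and day of week, so a corrected entry would be:

```
# min  hour  day-of-month  month  day-of-week
0 17 * * * /root/backup/backup.sh >/dev/null 2>&1
```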
I am trying a cron job for the first time. I am trying to generate a file containing the user-installed applications on Ubuntu; the same file then needs to be uploaded to a server.
I am unable to generate the text file with that information. Below is the command I am trying.
Script file which needs to be run by the cron job: /tmp/aptlist.sh
#!/bin/bash
comm -23 <(apt-mark showmanual | sort -u) <(gzip -dc /var/log/installer/initial-status.gz | sed -n 's/^Package: //p' | sort -u) &> /tmp/$(hostname)-$(date +%d-%m-%Y-%S)
cron has the following entry, added using crontab -e:
:~$ crontab -l
0 0 1 * * /tmp/aptlist.sh > /dev/null 2>&1
syslog has the following entries; however, no file is generated:
Oct 21 14:09:01 Astrome46 CRON[14592]: (user) CMD (/tmp/aptlist.sh > /dev/null 2>&1)
Oct 21 14:10:01 Astrome46 CRON[14600]: (user) CMD (/tmp/aptlist.sh > /dev/null 2>&1)
Kindly let me know how to fix the issue.
Thank You
Try this:
0 0 1 * * bash /tmp/aptlist.sh > /dev/null 2>&1
If this works, then I suspect it is because the file doesn't have execute permission.
You can find that out by typing in the terminal:
ls -l /tmp/aptlist.sh
If that is indeed the case, you can edit the file permissions to allow it to run by typing:
chmod u+x /tmp/aptlist.sh
This will allow the file's owner to run it, but not other users. If you need it to run for other users, do:
chmod a+x /tmp/aptlist.sh
Good luck!
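One more debugging step that may help (my suggestion, not from the answer above): the CMD lines in syslog only show that cron fired the entry, not that the script succeeded. While debugging, keep the script's output instead of discarding it:

```
0 0 1 * * /bin/bash /tmp/aptlist.sh >> /tmp/aptlist.cron.log 2>&1
```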
I have created a script to delete old files and put it in crontab to run every 2 minutes. I can see in syslog that the cron job runs, but the files are not deleted. I can run the script manually and it runs without any errors. I also used "sudo crontab -e" so the cron job runs with root permissions. Any ideas why the files are not deleted?
Crontab is as follows:
*/2 * * * * /bin/bash /mnt/md0/capture/delete_old_pcap.sh
02 00,12 * * * sh /usr/bin/nfexpire.sh
The script is as follows:
#!/bin/bash
ulimit -S -s 50000
LIMIT=10
NO=0
# Get the number of files with *.pcap in the name
NUMBER=$(find /mnt/md0/capture/DCN/ -maxdepth 1 -name "*.pcap" | wc -l)
if [[ $NUMBER -gt $LIMIT ]] #if number greater than limit
then
del=$(($NUMBER-$LIMIT))
if [ "$del" -lt "$NO" ]
then
del=$(($del*-1))
fi
FILES=$(find /mnt/md0/capture/DCN/ -maxdepth 1 -type f -name "*.pcap" -print0 |$
rm -f ${FILES[@]}
#delete the originals
fi
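The FILES=... line above is cut off in the post, so here is one way to express what the script seems to intend: keep the $LIMIT newest *.pcap files and delete the rest. The directory and limit come from the question; the selection method is an assumption, and it assumes file names without newlines:

```shell
#!/bin/bash
LIMIT=10
DIR=/mnt/md0/capture/DCN
[ -d "$DIR" ] || exit 0   # nothing to clean on hosts without the capture dir
cd "$DIR" || exit 1
# ls -t sorts newest first; tail skips the first $LIMIT entries,
# so everything older than the newest $LIMIT files is removed.
ls -t -- *.pcap 2>/dev/null | tail -n +"$((LIMIT + 1))" | xargs -r rm -f --
```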
Not sure it will solve your problem, but try:
*/2 * * * * /bin/sh /mnt/md0/capture/delete*.sh
02 00,12 * * * /bin/sh /usr/bin/nfexpire.sh
i.e. give the full path to the shell when executing the commands.
Wildcards won't work, as the other scripts would be taken as arguments to the first script (good point @broslow). Instead, make a script that calls all the other scripts.
Something like the following:
script /mnt/md0/capture/delete.sh:
for f in /mnt/md0/capture/delete.d/*.sh; do
    /bin/sh "$f"
done
with all scripts in /mnt/md0/capture/delete.d/
and then in your crontab:
*/2 * * * * /bin/sh /mnt/md0/capture/delete.sh
Finally, check your mail on the local machine: crontab sends output and error reports by mail (i.e. type "mail" on the command line as the user running the crontab, in your case root).
Hello, I have a script like this one:
#!/usr/bin/bash
ARSIP=/apps/bea/scripts/arsip
CURDIR=/apps/bea/scripts
OUTDIR=/apps/bea/scripts/out
DIRLOG=/apps/bea/jboss-6.0.0/server/default/log
LISTFILE=$CURDIR/tmp/file.$$
DATE=`perl -e 'use POSIX; print strftime "%Y-%m-%d", localtime time-86400;'`
JAVACMD=/apps/bea/jdk1.6.0_26/bin/sparcv9/java
HR=00
for (( c=0; c<24; c++ ))
do
echo $DATE $HR
$JAVACMD -jar LatencyCounter.jar LatencyCounter.xml $DATE $HR
sleep 1
cd $OUTDIR
mv btw_120-180.txt btw_120-180-$DATE-$HR.txt
mv btw_180-360.txt btw_180-360-$DATE-$HR.txt
mv btw_60-120.txt btw_60-120-$DATE-$HR.txt
mv failed_to_deliver.txt failed_to_deliver-$DATE-$HR.txt
mv gt_360.txt gt_360-$DATE-$HR.txt
mv out.log out-$DATE-$HR.log
cd -
let HR=10#$HR+1
HR=$(printf %02d $HR);
done
cd $OUTDIR
tar -cf latency-$DATE.tar btw*-$DATE-*.txt gt*$DATE*.txt out-$DATE-*.log
sleep 300
gzip latency-$DATE.tar
sleep 300
/apps/bea/scripts/summaryLatency.sh
sleep 300
rm -f btw* failed* gt* out*
#mv latency-$DATE.tar.gz ../$ARSIP
cd -
It basically executes jar files in the same directory as this script, then tars the result, gzips it, executes another bash file, and finally deletes all of the previously collected files. The problem is that I need this script to run daily, and I use crontab to do that. It keeps returning an empty tar file, but if I execute it manually it works well. I also have 4 other scripts running in crontab and they work fine. I still can't figure out the main reason for this phenomenon.
Thank you.
I'll take a stab: your script is run by /bin/sh instead of /bin/bash.
Try explicitly running it with bash at the cron entry, like this:
* * * * * /bin/bash /your/script
I'm guessing that when you execute $JAVACMD -jar LatencyCounter.jar LatencyCounter.xml $DATE $HR, you're not in the directory containing LatencyCounter.jar. You might want to cd $CURDIR before you enter the for loop.
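A sketch of that suggestion; the readlink -f way of locating the script is an assumption on my part, not from the original script:

```shell
#!/bin/bash
# cd to this script's own directory so relative references like
# LatencyCounter.jar resolve under cron too (cron starts jobs in $HOME).
cd "$(dirname "$(readlink -f "$0")")" || exit 1
# ... rest of the original script (the for loop, tar, gzip, etc.) ...
```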
This
#!/bin/bash
if [ `ps -ef | grep "91.34.124.35" | grep -v grep | wc -l` -eq 0 ]; then sh home/asfd.sh; fi
or this?
ps -ef | grep "91\.34\.124\.35" | grep -v grep > /dev/null
if [ "$?" -ne "0" ]
then
sh home/asfd.sh
else
echo "Process is running fine"
fi
Hello, how can I write a shell script that looks at the running processes and, if there isn't a process name containing 91.34.124.35, executes a file in a certain place? I want this to run every 30 seconds in a continuous loop; I think there was a sleep command.
You can't use cron, since on the implementations I know the smallest unit is one minute. You can use sleep, but then your process will always be running (with cron it gets started each time).
To use sleep, just:
while true ; do
if ! pgrep -f '91\.34\.124\.35' > /dev/null ; then
sh /home/asfd.sh
fi
sleep 30
done
If your pgrep has the option -q to suppress output (as on BSD) you can also use pgrep -q without redirecting the output to /dev/null
First of all, you should be able to reduce your script to simply
if ! pgrep "91\.34\.124\.35" > /dev/null; then ./your_script.sh; fi
To run this every 30 seconds via cron (because cron only runs every minute) you need 2 entries - one to run the command, another to delay for 30 seconds before running the same command again. For example:
* * * * * root if ! pgrep "91\.34\.124\.35" > /dev/null; then ./your_script.sh; fi
* * * * * root sleep 30; if ! pgrep "91\.34\.124\.35" > /dev/null; then ./your_script.sh; fi
To make this cleaner, you might be able to first store the command in a variable and use it for both entries (I haven't tested this):
CHECK_COMMAND="if ! pgrep '91\.34\.124\.35' > /dev/null; then ./your_script.sh; fi"
* * * * * root eval "$CHECK_COMMAND"
* * * * * root sleep 30; eval "$CHECK_COMMAND"
p.s. The above assumes you're adding that to /etc/crontab. To use it in a user's crontab (crontab -e) simply leave out the username (root) before the command.
I would suggest using watch:
watch -n 30 launch_my_script_if_process_is_dead.sh
Either way is fine, you can save it in a .sh file and add it to the crontab to run every 30 seconds. Let me know if you want to know how to use crontab.
Try this:
if ps -ef | grep "91\.34\.124\.35" | grep -v grep > /dev/null
then
sh home/asfd.sh
else
echo "Process is running fine"
fi
No need to use test. if itself will examine the exit code.
You can save your script in a file named myscript.sh, then run it through cron:
*/30 * * * * /full/path/for/myscript.sh
or you can use a while loop:
# cat script1.sh
#!/bin/bash
while true; do /bin/sh /full/path/for/myscript.sh ; sleep 30; done &
# ./script1.sh
Thanks.
I have found daemonizing critical scripts very effective.
http://cr.yp.to/daemontools.html
You can use monit for this task; see the documentation. It is available on most Linux distributions and has a straightforward config. You can find some examples in this post.
For your app it will look something like
check process myprocessname
matching "91\.34\.124\.35"
start program = "/home/asfd.sh"
stop program = "/home/dfsa.sh"
If monit is not available on your platform you can use supervisord.
I also found this very similar question: Repeat command automatically in Linux. It suggests using watch.
Use cron for the "loop every 30 seconds" part.