Ubuntu Cronjob with rsync - linux

Currently, on our workplace's server, we run a daily backup; due to size limits we need it to run only every third day (or something like that). We use rsync to do the backup. What I'm thinking of doing is just changing the run time of the script, so instead of daily it will run every third day.
So I want to know whether this is possible. My concern is that the size won't shrink, because the backup will still be a "3-day backup" instead of just one day's. It's hard to explain, so I'll show it by example.
What I want:
Day 1 - Run Backup
Day 2
Day 3
Day 4 - Run Backup
Day 5
What I fear will happen:
Day 1 - Run Backup
Day 2 - (changes from this day still captured by the Day 4 run)
Day 3 - (changes from this day still captured by the Day 4 run)
Day 4 - Run Backup
Day 5
The crontab job looks like this:
5 7 * * * ../rsyncsnapshot.sh daily 30
The script looks like this:
#!/bin/sh
# DEST_HOST, DEST_DIR, SRC and RSYNC_ARGS are presumably defined earlier
# in the real script (or in a sourced config file); they are not shown here.
if [ $# != 2 ]; then
    echo "Usage: backup.sh interval_name count"
    exit 1
fi
NAME=$1
COUNT=$2
TIMESTAMP=`date -u "+%Y-%m-%d %H:%M:%S%z"`
echo "*** Backup started $TIMESTAMP (interval $NAME, count $COUNT) ***"
# Drop the oldest snapshot, then shift the remaining ones up by one.
echo "Deleting $DEST_DIR/$NAME.$((COUNT-1))"
ssh $DEST_HOST rm -rf $DEST_DIR/$NAME.$((COUNT-1))
for i in `seq $((COUNT-1)) -1 2`
do
    j=$((i-1))
    echo "Moving $DEST_DIR/$NAME.$j to $DEST_DIR/$NAME.$i"
    ssh $DEST_HOST mv $DEST_DIR/$NAME.$j $DEST_DIR/$NAME.$i
done
# Hard-link the newest snapshot, so unchanged files cost no extra space.
echo "Copying $DEST_DIR/$NAME.0 to $DEST_DIR/$NAME.1"
ssh $DEST_HOST cp -al $DEST_DIR/$NAME.0 $DEST_DIR/$NAME.1
echo "Copying source ($SRC) to $DEST_HOST:$DEST_DIR/$NAME.0/"
rsync $RSYNC_ARGS $SRC $DEST_HOST:$DEST_DIR/${NAME}.0/
ssh $DEST_HOST touch $DEST_DIR/$NAME.0
TIMESTAMP=`date -u "+%Y-%m-%d %H:%M:%S%z"`
echo "*** Backup ended $TIMESTAMP ***"
echo "Quota as follows:"
ssh $DEST_HOST quota

To significantly reduce the amount of space you use, you'll need to reduce the number of copies you keep; that is the second argument to the script. Your fear is only partly justified: the Day 4 run will indeed pick up everything that changed since Day 1, so each snapshot may be somewhat larger, but you store a third as many snapshots, so the total shrinks. If you run every 3 days and want to keep a month of backups, change it to:
../rsyncsnapshot.sh daily 10
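To actually switch from daily to every third day, you would also change the schedule field of the crontab entry. A minimal sketch using a step value in the day-of-month field (note that */3 restarts at day 1 of each month, so the gap across a month boundary can be shorter than three days):
5 7 */3 * * ../rsyncsnapshot.sh daily 10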

Related

How to prevent orphaned ssh-agent processes from cronjob script?

I've set up a cron job on my RHEL server to securely transfer some backups to another environment every hour. It seems like the first few lines create an ssh-agent that sticks around indefinitely (I've found hundreds of orphaned ssh-agent processes at times). I've read a bit about different approaches to managing this sort of thing, but I don't really understand how I should change this to prevent orphaned agents. Any help is appreciated (I'm a front-end/JavaScript person, so this is new to me).
Cron job:
@hourly /var/www/html/scripts/backups_transfer_sftp/run.sh >> /var/log/backup-transfers.log 2>&1
/var/www/html/scripts/backups_transfer_sftp/run.sh
#!/bin/sh
if [ -z "$SSH_AUTH_SOCK" ] ; then
eval `ssh-agent -s`
ssh-add ~/.ssh/remote_backups
fi
printf "\n======\n"
date
printf "|| Beginning backup replication...\n"
cd /var/www/html/wp-content/uploads/backups
echo "|| Transferring daily backups..."
lftp sftp://remote_backups <<EOF
cd /home/web/
mirror --exclude=.* --include=./*.zip --include=./*dat ./ backups --delete --reverse --only-newer -v
bye
EOF
cd ../backups-weekly
echo "|| Transferring weekly backups..."
lftp sftp://remote_backups <<EOF
cd /home/web/
mirror --exclude=.* --include=./*.zip ./ backups-weekly --delete --reverse --only-newer -v
bye
EOF
now=$(date)
echo "|| Backup replication complete!"
echo "|| Completed at: ${now}"

Comparison operations in cron

I'd like to configure cron to run a script on the first day of every month if it is a working day. If the month starts on a weekend, I want the script to run on the first following weekday.
I tried to do it this way:
0 9 1-3 * * dom=$(date +\%d); dow=$(date +\%u); [[ $dow -lt 6 && (( $dom -eq 1 || $dow -eq 1 )) ]] && script.sh
This works in bash, but it seems cron can't execute comparisons.
On the advice of Roadowl, adding
SHELL=/bin/bash
at the beginning of cron file made things work.
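For reference, the working crontab then looks like this (the job fires at 09:00 on days 1-3, and the test passes only on a weekday that is either the 1st or a Monday):
SHELL=/bin/bash
0 9 1-3 * * dom=$(date +\%d); dow=$(date +\%u); [[ $dow -lt 6 && (( $dom -eq 1 || $dow -eq 1 )) ]] && script.sh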

How to set a timer for each script that runs

Dear friends and colleagues,
it's lovely to be here on Stack Overflow, the best cool site.
Under /tmp/scripts we have around 128 scripts that perform many tests, such as:
verify_dns.sh
verify_ip.sh
verify_HW.sh
and so on.
We decided to run all the scripts under the current folder - /tmp/scripts -
with the following code:
script_name=$(find /tmp/scripts -maxdepth 1 -type f -name "verify_*" -exec basename {} \;)
for i in $script_name
do
    echo "running the script - $i"
    /tmp/scripts/$i
done
So the output looks like this:
running the script - verify_dns.sh
running the script - verify_ip.sh
.
.
What we want to add is the ability to also print how long each script ran, as in the following example:
running the script - verify_dns.sh - 16.3 Sec
running the script - verify_ip.sh - 2.5 Sec
.
.
My question: how can we add this ability to my code?
Note: the OS version is Red Hat 7.2.
For counting whole seconds you can use the bash built-in SECONDS:
SECONDS=0
your_bash_script
echo $SECONDS
For a finer-grained measurement:
start=$(date +'%s%N')
your_shell_script.sh
echo "It took $((($(date +'%s%N') - $start)/1000000)) milliseconds"
Or use the shell's built-in time function:
time your_shell_script.sh
Edit: an example for the OP:
for i in $script_name
do
    echo "running the script - $i"
    start=$(date +'%s%N')
    /tmp/scripts/$i
    echo "It took $((($(date +'%s%N') - $start)/1000000)) milliseconds"
done
for i in $script_name
do
    echo "running the script - $i"
    time /tmp/scripts/$i
done
You can use the time command to tell you how long each one took:
TIMEFORMAT="%E" # bash-only: make the time keyword print elapsed seconds alone
for i in $script_name
do
    echo -en "running the script - $i\t - "
    exec 3>&1 4>&2
    var=$( { time /tmp/scripts/$i 1>&3 2>&4; } 2>&1) # captures the output of time only
    exec 3>&- 4>&-
    echo "$var Sec"
done
This works regardless of whether your scripts produce any output/stderr. See this link for capturing only the output of time: get values from 'time' command via bash script
While it doesn't put the output on the same line, this might suit your needs:
for i in $script_name
do
    { set -x
      time "/tmp/scripts/$i"
    } 2>&1 | grep -Ev '^(user|sys|$)'
done

Script in crontab to be executed only if a value is equal to or exceeds a threshold

I currently have a script in crontab for rsync (and some other small stuff). Right now the script is executed every 5 minutes. I modified the script to look for a specific line in the rsync output (the example below is from my machine, not the actual code):
#!/bin/bash
Number=$(/usr/bin/rsync -n --stats -avz -e ssh 1/ root@127.0.0.1 | grep "Number of regular files transferred" | cut -d':' -f 2 | tr -d '\040\054\012')
echo $Number
Let's say the number is 10. If the number is 10 or below, I want the script to be executed through crontab; but if the number is bigger, I want it to be executed ONLY manually.
Any ideas?
Maybe you can use an argument to execute it manually, for example:
if [[ $Number -le 10 || $1 == true ]]; then
    echo "executing script..."
fi
This will execute if $Number is less than or equal to 10, or if you run the script with true as the first positional argument. So if $Number is greater than 10 it won't execute from your crontab, and you can still run it manually with ./your_script true.
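Putting it together, a minimal sketch of the whole script (the rsync command is the illustrative one from the question, with placeholder host and path, not real values):
#!/bin/bash
# Dry-run rsync and extract how many regular files would be transferred.
Number=$(/usr/bin/rsync -n --stats -avz -e ssh 1/ root@127.0.0.1 | grep "Number of regular files transferred" | cut -d':' -f 2 | tr -d '\040\054\012')
# Proceed when at most 10 files changed, or when invoked manually as: ./your_script true
if [[ $Number -le 10 || $1 == true ]]; then
    echo "executing script..."
    # the real rsync and the other small stuff go here
fi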

Shell script: Count files, delete 'X' oldest file

I am new to scripting. Currently I have a script that backs up a directory every day to a file server and deletes any backup older than 14 days. My issue is that I need it to count the actual files and delete the oldest ones beyond a fixed number, not go by dates. When going by days, if the file server or the host is down for a few days or longer, the script will delete several days' worth of backups once it is back up, or even all of them, depending on the downtime. I want it to always keep 14 days' worth of backups.
I tried searching around and could only find solutions related to deleting by dates, like what I have now.
Thank you for the help/advice!
Here is my code; sorry, it's my first attempt at scripting:
#! /bin/sh
# Check for file. If not found, the connection to the file server is down!
if [ -f /backup/connection ]; then
    echo "File Server is connected!"
    # Directory to be backed up.
    backup_source="/var/www/html/moin-1.9.7"
    # Backup directory.
    backup_destination="/backup"
    # Current date to name files.
    date=`date '+%m%d%y'`
    # Naming the file.
    filename="$date.tgz"
    echo "Backing up directory"
    # Create the backup of the backup_source directory and place it in backup_destination.
    tar -cvpzf $backup_destination/$filename $backup_source
    echo "Backup Finished!"
    # Search for files older than '+X' days and delete them.
    find /backup -type f -ctime +13 -exec rm -f {} \;
else
    echo "File Server is NOT connected! Date:`date '+%m-%d-%y'` Time:`date '+%H:%M:%S'`" > /user/Desktop/error/`date '+%m-%d-%y'`
fi
Something along these lines might work:
ls -1t /path/to/directory/ | head -n 14 | tail -n 1
In the ls command, -1 lists just the filenames (nothing else) and -t sorts them chronologically (newest first). Piping through head -n 14 takes just the first 14 names from the ls output, then tail -n 1 takes the last one of those. This should give you the 14th-newest file.
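To actually delete everything beyond the newest backups, a minimal sketch (assuming the simple MMDDYY.tgz names from the question, with no spaces or newlines in filenames, and GNU xargs for the -r flag), run after the new backup has been written:
cd /backup && ls -1t *.tgz | tail -n +15 | xargs -r rm -f
tail -n +15 prints from the 15th line onward, i.e. everything except the 14 newest files, and xargs -r skips the rm entirely when there is nothing to delete, so exactly 14 backups remain.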
Here is another suggestion. The following script simply numbers the backups, which makes it easy to keep track of the last n of them. If you need to know the actual creation date, you can check the file metadata, e.g. using stat.
#!/bin/sh
set -e

backup_source='somedir'
backup_destination='backup'
retain=14
filename="backup-$retain.tgz"

check_fileserver() {
    nc -z -w 5 file.server.net 80 2>/dev/null || exit 1
}

backup_advance() {
    # Drop the oldest backup, then shift the remaining ones up by one.
    if [ -f "$backup_destination/$filename" ]; then
        echo "removing $filename"
        rm "$backup_destination/$filename"
    fi
    for i in $(seq $retain -1 2); do
        file_to="backup-$i.tgz"
        file_from="backup-$(($i - 1)).tgz"
        if [ -f "$backup_destination/$file_from" ]; then
            echo "moving $backup_destination/$file_from to $backup_destination/$file_to"
            mv "$backup_destination/$file_from" "$backup_destination/$file_to"
        fi
    done
}

do_backup() {
    tar czf "$backup_destination/backup-1.tgz" "$backup_source"
}

check_fileserver
backup_advance
do_backup

exit 0
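You could then run it nightly from cron, e.g. (hypothetical path and time):
0 3 * * * /path/to/backup.sh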
