split scp of backup files to different smb shares based on date - linux

I back up files into tar archives once a day, grab them from our Ubuntu servers with a backup shell script, and put them in a share. Our shares are only 5 TB each, but we can have several.
At the moment we need more space, as we keep 30 days' worth of tar files.
I need a method where the first ten days go to share one, the next ten to share two, and the next ten to share three.
Currently each server VM runs the following script to back up and tar folders and place them in another folder, ready to be grabbed by the backup server:
#!/bin/bash
appname=myapp.com
dbname=mydb
dbuser=myDBuser
dbpass=MyDBpass
datestamp=`date +%d%m%y`
rm -f /var/mybackupTars/* > /dev/null 2>&1
mysqldump -u$dbuser -p$dbpass $dbname > /var/mybackups/$dbname-$datestamp.sql && gzip /var/mybackups/$dbname-$datestamp.sql
tar -zcf /var/mybackups/myapp-$datestamp.tar.gz /var/www/myapp > /dev/null 2>&1
tar -zcf /var/mydirectory/myapp-$datestamp.tar.gz /var/www/html/myapp > /dev/null 2>&1
I then grab the backups using a script on the backup server and put them in a share:
#!/bin/bash
#
# Generate a list of myapps to grab
df | grep myappbackups | awk -F/ '{ print $NF }' > /tmp/myapplistsmb
# Get each app in turn
for APPNAME in `cat /tmp/myapplistsmb`
do
cd /srv/myappbackups/$APPNAME
scp $APPNAME:* .
done
I know this is a tough one, but I really need three shares with ten days' worth in each.
I do not anticipate changing the backup script on each server VM that backs up to itself,
only perhaps the grabber script that puts the dated backups in the share on the backup server.
Or am I wrong?
Any help would be great
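One possible approach: leave the per-VM scripts alone and make the grabber sort each archive into a share by the age of the ddmmyy stamp in its filename. A minimal sketch, assuming the three shares are mounted at /mnt/share1 through /mnt/share3 (hypothetical paths) and GNU date is available:
#!/bin/bash
# Route grabbed backups to one of three shares by backup age
shares=(/mnt/share1 /mnt/share2 /mnt/share3)   # hypothetical mount points
today=$(date +%s)
for f in /srv/myappbackups/*/*.tar.gz /srv/myappbackups/*/*.sql.gz; do
    [ -e "$f" ] || continue
    # Pull the ddmmyy stamp out of names like myapp-240114.tar.gz
    stamp=$(basename "$f" | grep -oE '[0-9]{6}' | head -1)
    [ -z "$stamp" ] && continue
    filedate=$(date -d "20${stamp:4:2}-${stamp:2:2}-${stamp:0:2}" +%s) || continue
    age=$(( (today - filedate) / 86400 ))
    # Days 0-9 -> share one, 10-19 -> share two, 20+ -> share three
    idx=$(( age / 10 )); [ $idx -gt 2 ] && idx=2
    mv "$f" "${shares[$idx]}/"
done
Run on the backup server after the grab loop, this matches the constraint that only the grabber script changes.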

Related

bash script: ssh create file; sleep 3m; rm file

Trying to create a script that will ssh into a server, back up some files, sleep for 3 minutes, then remove the files.
While it sleeps, the same script comes back to the local machine and rsyncs the file. Then, when the 3 minutes are up, the file is removed.
I'm trying this so as not to connect twice with ssh.
ssh $site "
tar -zcf $domain-$date.tar.gz $path;
{ sleep 3m && rm -f $domain-$date.tar.gz };
"
rsync -az $site:$domain-$date.tar.gz ~/WebSites/$domain/BackUp/$date;
I tried command grouping with () to create a subshell, but I think the variables would not be read. Not sure.
Your ssh command will sleep for 3 minutes and remove the files, then your script proceeds to try to rsync the files that got removed. There is no easy workaround for having your first ssh command sleep while your own script proceeds to run rsync.
Do either of the following:
ssh into the server twice: after rsync completes, ssh into the server again and remove the files.
Tell rsync to remove the files after it has synced them: add the --remove-source-files option to rsync.
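A minimal sketch of the second option, reusing the question's variables:
# Create the archive in one ssh session...
ssh $site "tar -zcf $domain-$date.tar.gz $path"
# ...then let rsync delete the remote archive once it has been copied
rsync -az --remove-source-files $site:$domain-$date.tar.gz ~/WebSites/$domain/BackUp/$date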

Half of Crontab script works, the other half doesn't

I'm having some difficulty with my crontab. I've run through this list: https://askubuntu.com/questions/23009/reasons-why-crontab-does-not-work which was helpful. I have the following crontab entry:
35 02 * * * /root/scripts/backup
Which runs:
#!/bin/bash
PATH=/home:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
echo "Time: $(date)" >> /home/me/SystemSync.log
rsync -ar -e ssh /home/ root@IPADDRESSOFNAS:/volume1/backup/server/SystemSSD/
echo "Time: $(date)" >> /SecondHD/RsyncDebug.log
sudo rsync -arv -e ssh /SecondHD/ root@IPADDRESSOFNAS:/volume1/backup/server/SecondHD/
The .log files are written to the server, with the intention that they will be synced with the NAS if it is successfully synced.
I know the following:
The SystemSSD portion of the script works, as do both of the log portions. On the NAS I see the SystemSync.log file with the newest entry. I find the RsyncDebug.log on the server with the updated timestamp, but not on the NAS.
The script runs in its entirety when I run it from the command line, just not in crontab.
Potentially pertinent information:
I'm running CentOS 6 on the server.
The system drive is a 1TB SSD and the Second drive is a 4 TB Raid1 hard drive with ca. 1 TB of space remaining.
The NAS volume has 5TB drives, with about 1 TB of space remaining.
Thanks in advance. Someday I hope to teach as much as I learn from this community.
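One hedged way to narrow this down, assuming the entry lives in root's crontab: capture the second rsync's output, since cron's environment (PATH, no tty for sudo, different ssh keys) is the usual culprit, and sudo is redundant when root's crontab already runs the script as root:
# Redirect the failing rsync's output so the error shows up in the log;
# sudo dropped, since it can fail under cron when no tty is available
rsync -arv -e ssh /SecondHD/ root@IPADDRESSOFNAS:/volume1/backup/server/SecondHD/ >> /SecondHD/RsyncDebug.log 2>&1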

Linux - copying only new files from one server to another

I have a server where files are transferred via FTP to a location. All files since transfers began (January 2015) are still there.
I want to set up a new server and transfer the files from the first server's location.
Basically, I need a cron job that runs scp and transfers only the files that are new since the last run.
The ssh connection between the servers works and I can transfer files without restriction between them.
How can I achieve this in Ubuntu?
The suggested duplicate doesn't apply because, on my destination server, I will keep just one file holding the date of the last cron run, and the files copied from the first server will be parsed and deleted afterwards.
rsync will simply make sure that all files exist on both servers, correct?
I managed to set up the cron job on the remote computer as follows.
First, I created a timestamp file, which keeps the time the cron job last ran:
touch timestamp
Then I copy all new files using ssh and scp:
ssh username@remote find <files_path> -type f -newer timestamp | xargs -i scp username@remote:'{}' <local_path>
Then I touch the timestamp file to update its modification time:
touch -m timestamp
The only problem with this script: if a file arrives on the remote host while the ssh command is running, before the timestamp is touched the second time, that file is ignored on later runs.
Later edit:
To make sure there is no gap between the timestamp file and the actual run caused by the duration of the ssh command, the script was changed to:
touch timestamp_new
ssh username@remote find <files_path> -type f -newer timestamp | xargs -i scp username@remote:'{}' <local_path>
rm -f timestamp
mv timestamp_new timestamp
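A hedged consolidation of those steps into one script for the new server's cron; username@remote and the bracketed paths are the question's placeholders, and the timestamp files are assumed to live on the old server, next to where find evaluates them:
#!/bin/bash
REMOTE=username@remote
ssh $REMOTE "touch timestamp_new"                        # stamp before the transfer starts
ssh $REMOTE "find <files_path> -type f -newer timestamp" | xargs -i scp $REMOTE:'{}' <local_path>
ssh $REMOTE "mv -f timestamp_new timestamp"              # close the race window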

Storing application logs for a day

I have an application running in production where we create log files. The maximum count of log files is set to 10, and the rollover size is set such that when a log file reaches 6 MB a new log file is created.
So we have logs rolling over with file names like:
<file_name>.log
<file_name>.log.1
<file_name>.log.2
...
<file_name>.log.10
My problem is that these 10 log files only ever hold about 15 minutes of logging.
I know I can update my code base to use DailyRollingFileAppender, but what I'm looking for is a short-term solution to store logs for a day without any code changes, or with minimal code/configuration changes. For example, maybe I can achieve this via some cron job or Linux command, etc.
Note: I'm running this application on Linux in production.
Any quick help is highly appreciated.
Thanks.
You may do this by creating a shell script and adding it to your cron jobs:
#!/bin/bash
NOW_DATE=$(date +"%m-%d-%Y-%H-%M")
mkdir -p /var/log/OLD_LOGS/$NOW_DATE
cd /var/log/
mv server.log.* /var/log/OLD_LOGS/$NOW_DATE/
mv *.zip /var/log/OLD_LOGS/$NOW_DATE/
cp server.log /var/log/OLD_LOGS/$NOW_DATE/
cd /var/log/OLD_LOGS/$NOW_DATE
x=$(ls -l | wc -l)   # line count is 1 ("total") when the directory is empty
if [ $x -le 1 ]; then
SUBJECT="There is an issue with generating server log - less number of files"
EMAIL="support@abc.com"
EMAILMESSAGE="/tmp/errormsg.txt"   # assumed to already contain the message body
/bin/mail -s "$SUBJECT" "$EMAIL" < $EMAILMESSAGE
fi
cd /var/log/OLD_LOGS/
zip -r $NOW_DATE.zip $NOW_DATE
rm -r -f $NOW_DATE
find /var/log/ -type f -mtime +180 -exec rm {} \;
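To register it with cron, an entry along these lines would run the archiving script every hour; the script path is hypothetical:
0 * * * * /usr/local/bin/archive-logs.sh >> /var/log/archive-logs.cron.log 2>&1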
If the application tries to create .log.11 and overwrites the old files as its rollover settings dictate, there is no way those 10 files alone can hold a full day of logs.
My understanding is that the application logs so heavily that all 10 files cover only the last 15 minutes before being overwritten again.
The application logic should be modified to make sure it captures a full day of logs. Also, make sure to zip the archived files at regular intervals so you can save some disk space.

How to write a script for backup using bacula?

I am very new to shell scripting and Bacula. I want to create a script that schedules backups using Bacula.
How do I do that?
Any lead is appreciated.
Thanks.
If you are going to administer your own Linux system, learn bash. The man page is really quite detailed and useful. Do man bash.
If you are really new to Linux and command lines, administering Bacula is not for newbies. It is a fairly comprehensive backup system for multiple systems, with a central database, which means that it is also complex.
There are much simpler tools available on Linux to perform simple system backups, which are just as reliable. If you just want to back up your home directory, tar or zip are excellent tools. In particular, tar can do both full backups and incremental backups, as illustrated below.
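For instance, GNU tar can track state in a snapshot file via --listed-incremental; a minimal illustration with hypothetical paths:
# First run writes a full archive and records state in the snapshot file
tar --create --gzip --listed-incremental=/mnt/backups/home.snar --file=/mnt/backups/home-full.tgz /home/me
# Later runs with the same snapshot file archive only what changed since
tar --create --gzip --listed-incremental=/mnt/backups/home.snar --file=/mnt/backups/home-incr-1.tgz /home/me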
Even assuming that you really want to use Bacula and have enough information to write a couple of simple scripts, the original request is ambiguous.
Do you mean schedule a periodic cron job to accomplish backups unattended? Or do you mean to schedule a single invocation of Bacula at a determined time and date?
In either case, it's a good idea to create two simple scripts: one to perform a full backup, and one to perform an incremental backup. The full backup should be run, say, once a week or once a month, and the incremental backup should be run every day, or once a week -- depending on how often your system data changes.
Most modest sites undergoing daily usage would have a daily incremental backup with a full backup on the weekends (say, Sunday). This way, if the system crashed on, say, Friday, you would need to recover with the most recent full backup (on the previous Sunday), and then recover with each day's incremental backup (Mon, Tue, Wed, Thu). You would probably lose data changes that had occurred on the day of the crash.
If the rate of data change was hourly, and recovery at an hourly rate was important, then the incrementals should be scheduled for each hour, with full backups each night.
An important consideration is knowing what, exactly, is to be backed up. Most home users want their home directory to be recoverable. The OS root and application partitions are often easily recoverable without backups; alternatively, these are backed up on a very infrequent schedule (say once a month or so), since they change much less frequently than the user's home directory.
Another important consideration is where to put the backups. Bacula supports external storage devices, such as tapes, which are not mounted filesystems. tar also supports tape archives. Most home users have some kind of USB or network-attached storage that is used to store backups.
Let's assume that the backups are to be stored on /mnt/backups/, and let's assume that the user's home directory (and subdirectories) are all to be backed up and made recoverable.
% cat <<'EOF' >/usr/local/bin/full-backup
#!/bin/bash
# full-backup SRCDIRS [--options]
# incr-backup SRCDIRS [--options]
#
# set destdir to the path at which the backups will be stored
# each backup will be stored in a directory of the date of the
# archive, grouped by month. The directories will be:
#
# /mnt/backups/2014/01
# /mnt/backups/2014/02
# ...
# the full and incremental files will be named this way:
#
# /mnt/backups/2014/01/DIR-full-2014-01-24.192832.tgz
# /mnt/backups/2014/01/DIR-incr-2014-01-25.192531.tgz
# ...
# where DIR is the name of the source directory.
#
# There is also a file named ``lastrun`` which is used for
# its last mod-time which is used to select files changed
# since the last backup.
PROG=${0##*/} # prog name: full-backup or incr-backup
destdir=/mnt/backups
now=`date +"%F-%H%M%S"`
monthdir=`date +%Y/%m`
dest=$destdir/$monthdir/
mkdir -p $dest
set -- "$@"
while (( $# > 0 )) ; do
dir="$1" ; shift ;
options='' # collect options
while [[ $# -gt 0 && "x$1" =~ x--* ]]; do # any options?
options="$options $1"
shift
done
basedir=`basename $dir`
fullfile=$dest/$basedir-full-$now.tgz
incrfile=$dest/$basedir-incr-$now.tgz
lastrun=$destdir/lastrun
case "$PROG" in
full*) archive="$fullfile" newer= kind=Full ;;
incr*) archive="$incrfile" newer="--newer $lastrun" kind=Incremental ;;
esac
cmd="tar cfz $archive $newer $options $dir"
echo "$kind backup starting at `date`"
echo ">> $cmd"
eval "$cmd"
echo "$kind backup done at `date`"
done
touch $lastrun # mark the end of the backup date/time
exit
EOF
(cd /usr/local/bin ; ln -s full-backup incr-backup )
chmod +x /usr/local/bin/full-backup
Once this script is configured and available, it can be scheduled with cron. See man cron. Use crontab -e to create and edit a crontab entry to invoke full-backup once a week (say), and another crontab entry to invoke incr-backup once a day. The following are four sample crontab entries (see man 5 crontab for details on syntax) for performing incremental and full backups, as well as removing old archives.
# run incremental backups on all user home dirs at 3:15 every day
15 3 * * * /usr/local/bin/incr-backup /Users
# run full backups every sunday, at 3:15
15 3 * * 7 /usr/local/bin/full-backup /Users
# run full backups on the entire system (but not the home dirs) on the 1st of every month
30 4 1 * * /usr/local/bin/full-backup / --exclude=/Users --exclude=/tmp --exclude=/var
# delete old backup files (more than 60 days old) once a month
15 3 1 * * find /mnt/backups -type f -mtime +60 -delete
Recovering from these backups is an exercise left for later.
Good luck.
I don't think it makes sense to have a cron-scheduled script activate Bacula.
The standard way to schedule backups using Bacula is:
1) Install the Bacula file daemon on the machine you want to back up, and then
2) Configure your Bacula Director to schedule the backup
ad 1)
If the machine to back up is Debian or Ubuntu, you can install the Bacula file daemon from the shell like this:
shell> apt-get install bacula-fd (bacula-fd stands for Bacula File Daemon)
If the machine to back up is Windows, then you need to download the Bacula file daemon and install it. You can download it here: http://sourceforge.net/projects/bacula/files/Win32_64/ (select the version that matches your Bacula server version).
ad 2)
You need to find the bacula-dir.conf file on your Bacula server (if you installed the Bacula Director on an Ubuntu machine, the path is /etc/bacula/bacula-dir.conf).
The bacula-dir.conf schedule section is very flexible and therefore also somewhat complicated; here is an example:
Schedule {
Name = "MonthlyCycle"
Run = Level=Full on 1 at 2:05 # full backup on the 1st of every month at 2:05
Run = Level=Incremental on 2-31 at 2:05 # incremental backup on all other days
}
Note that a lot more configuration is necessary to run Bacula. Here is a full tutorial on how to install, configure, back up, and restore with Bacula: http://webmodelling.com/webbits/miscellaneous/bacula.aspx (disclaimer: I wrote the Bacula tutorial myself).
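If what you actually want is a one-off run triggered from a script rather than a schedule, the Bacula console can be driven from the shell; a hedged sketch, where the job name is a placeholder for one defined in bacula-dir.conf:
# Kick off a single full backup job non-interactively via bconsole
echo "run job=BackupClient1 level=Full yes" | bconsole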
