My script is designed to gather files and back them up across the network using ssh (because it is the only thing unblocked by the firewall) and then delete any backups that are over 30 days old. However, when the code is run I receive this error message:
receiving incremental file list
rsync: mkdir "/var/home/username_local/BackupTo/username/2015-08-10" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(576) [receiver=3.0.6]
The code I am using is below:
#!/bin/bash
#User whose files are being backed up
BNAME=username
#directory to back up
BDIR=/home/username/BackThisUp
#directory to backup to
BackupDir=/var/home/username_local/BackupTo
#user
RUSER=$USER
#SSH Key
KEY=/var/home/username_local/.ssh
#Backupname
RBackup=`date +%F`
#Backup Server
BServ=backup.server
#Path
LPATH='Data for backup'
#date
DATE=`date +%F`
#Transfer new backups
rsync -avpHrz -e "ssh -i $KEY" $BNAME@$BServ:$BDIR $BackupDir/$BNAME/$DATE
find $BackupDir/$BNAME -type d -ctime +30 -exec rm -rf {} \;
EDIT: The problem was solved when I added:
mkdir $BackupDir/$BNAME > /dev/null 2>&1
before the rsync
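As a side note, mkdir -p would have been a slightly cleaner fix here, since it also creates missing parent directories and does not complain when the directory already exists, so the error redirection becomes unnecessary:
mkdir -p "$BackupDir/$BNAME"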
I think rsync will not write to non-existent destination directories.
You could prepend your rsync with
ssh $BNAME@$BServ "mkdir $BackupDir/$BNAME/$DATE"
but only if the variables passed to mkdir are known to be safe. This can lead to code injection if, for example, $BackupDir is provided to the script from outside.
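For example, one way to make that safer (a minimal sketch, assuming bash on both ends and that $KEY points at the actual private key file) is to escape the path locally with printf %q before handing it to the remote shell, so that spaces or metacharacters in the variables cannot be interpreted as commands:
remote_dir=$(printf '%q' "$BackupDir/$BNAME/$DATE")
ssh -i "$KEY" "$BNAME@$BServ" "mkdir -p -- $remote_dir"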
Another trick would be to prepend the actual rsync (started on the remote server) with... mkdir:
rsync -avpHrz --rsync-path "mkdir -m 700 $BackupDir/$BNAME/$DATE && rsync" -e "ssh -i $KEY" $BNAME@$BServ:$BDIR $BackupDir/$BNAME/$DATE
Related
I have the following bash script. In the script I use rsync to copy files from a source to a destination. The first call of rsync copies all the files; the second call double-checks them and, if the checksums match, deletes the copied files from the source.
#!/bin/bash
set -e
rsync --info=progress2 -r --include='database/session_*.db' --exclude 'database/session*' /local/data/ /import/myNas/data
rsync --info=progress2 -r --include='database/session_*.db' --exclude 'database/session*' --checksum --remove-source-files /local/data/ /import/myNas/data
The problem now is that while rsync is running, new files are written to /local/data. I would like rsync to take a snapshot of the list of files in the source (/local/data) when it runs the first time and then only copy those files. The second run should then also operate only on the files from that snapshot (i.e. verify the checksums and delete the files). In other words, newly added files should not be touched.
Is this possible?
Populate a null-delimited list of files to synchronize, then run rsync with this list:
#!/usr/bin/env bash
##### Settings #####
# Location of the source data files
declare -r SRC='/local/data/'
# Destination of the data files
declare -r DEST='/import/myNas/data/'
##### End of Settings #####
set -o errexit # same as set -e, exit if a command fails
declare -- _temp_fileslist
trap 'rm -f "$_temp_fileslist"' EXIT
_temp_fileslist=$(mktemp) && typeset -r _temp_fileslist
# Populate files list as null delimited entries
find "$SRC" \
-path '*/database/session_*.db' \
-and -not -path '*/database/session*' \
-fprintf "$_temp_fileslist" '%P\0'
# --from0 tells rsync to read a null delimited list
# --files-from= tells to read the include list from this file
if rsync --info=progress2 --recursive \
--from0 "--files-from=$_temp_fileslist" -- "$SRC" "$DEST";
then rsync --info=progress2 --recursive \
--from0 "--files-from=$_temp_fileslist" \
--checksum --remove-source-files -- "$SRC" "$DEST"
fi
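If you want to sanity-check the generated list before the transfer, the null delimiters can be turned into newlines for inspection (purely a convenience, not part of the backup itself):
tr '\0' '\n' < "$_temp_fileslist" | head
The %P format prints each path relative to "$SRC", which is exactly the form --files-from expects.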
How can I compress files in this scenario?
I have a folder structure like this:
User1:
/home/user1/websites/website1
/home/user1/websites/website2
User2:
/home/user2/websites/website1
/home/user2/websites/website2
/home/user2/websites/website3
And now I need to do backups like this:
Folders for backups per user:
/backup/date/websites/user1/
/backup/date/websites/user2/
And I need a tar backup in each user's directory, separately per website, like this:
/backup/date/websites/user1/website1.tar.gz
/backup/date/websites/user1/website2.tar.gz
/backup/date/websites/user2/website1.tar.gz
/backup/date/websites/user2/website2.tar.gz
/backup/date/websites/user2/website3.tar.gz
I have a script like this one that does half of the work:
#VARIABLES
BKP_DATE=`date +"%F"`
BKP_DATETIME=`date +"%H-%M"`
#BACKUPS FOLDERS
BKP_DEST=/backup/users
BKP_DEST_DATE=$BKP_DEST/$BKP_DATE
BKP_DEST_TIME=$BKP_DEST_DATE/$BKP_DATETIME
BACKUP_DIR=$BKP_DEST_TIME
#NUMBER OF DAYS TO KEEP ARCHIVES IN BACKUP DIRECTORY
KEEP_DAYS=7
#Create folders
mkdir -p $BKP_DEST_DATE
mkdir -p $BKP_DEST_TIME
mkdir -p $BACKUP_DIR
#DELETE FILES OLDER THAN {*} DAYS IN BACKUP SERVER DIRECTORY
#echo 'Deleting backup folder older than '${KEEP_DAYS}' days'
find $BKP_DEST/* -type d -ctime +${KEEP_DAYS} -exec rm -rf {} \;
#Do backups
#List only names available users data directories
usersdirectories=`cd /home && find * -maxdepth 0 -type d | grep -Ev "(tmp|root)"`
#Creating directories per user name
for i in $usersdirectories; do
mkdir -p $BACKUP_DIR/$i/websites
done
But as you can see, I don't know how to create the separate tar archives. In my half-finished script I have done the following:
Create folder structure for backup by datetime (/backup/users/day/hour-minutes)
Create folder structure for backup by users names (/backup/users/day/hour-minutes/user1)
Thanks to all users who try to help me!
I will try to complete your script, but I can't debug it because your environment is hard to reproduce. In the future, it would be better to provide a minimal reproducible example.
#VARIABLES
BKP_DATE=$(date +"%F")
BKP_DATETIME=$(date +"%H-%M")
#BACKUPS FOLDERS
BKP_DEST=/backup/users
BKP_DEST_DATE=$BKP_DEST/$BKP_DATE
BKP_DEST_TIME=$BKP_DEST_DATE/$BKP_DATETIME
BACKUP_DIR=$BKP_DEST_TIME
#NUMBER OF DAYS TO KEEP ARCHIVES IN BACKUP DIRECTORY
KEEP_DAYS=7
#Create folders
mkdir -p $BKP_DEST_DATE
mkdir -p $BKP_DEST_TIME
mkdir -p $BACKUP_DIR
#DELETE FILES OLDER THAN {*} DAYS IN BACKUP SERVER DIRECTORY
#echo 'Deleting backup folder older than '${KEEP_DAYS}' days'
find $BKP_DEST/* -type d -ctime +${KEEP_DAYS} -exec rm -rf {} \;
#Do backups
#List only names available users data directories
usersdirectories=$(cd /home && find * -maxdepth 0 -type d | grep -Ev "(tmp|root)")
#Creating directories per user name
for i in $usersdirectories; do
for w in /home/$i/websites/*; do
ws=$(basename $w)
mkdir -p $BACKUP_DIR/$i/websites/$ws
tar -czvf $BACKUP_DIR/$i/websites/$ws.tar.gz /home/$i/websites/$ws
done
done
I assume there are no blanks inside the directory names (website1, etc.).
I also replaced the deprecated backtick operators in your code with $(...).
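If blanks or other unusual characters in the directory names cannot be ruled out after all, a variant that quotes every expansion might look like this (a sketch under that assumption, reusing $BACKUP_DIR from above):
for u in /home/*/; do
  u=$(basename "$u")
  case "$u" in tmp|root) continue ;; esac
  for w in "/home/$u/websites/"*/; do
    [ -d "$w" ] || continue
    ws=$(basename "$w")
    mkdir -p "$BACKUP_DIR/$u/websites"
    tar -czf "$BACKUP_DIR/$u/websites/$ws.tar.gz" -C "/home/$u/websites" "$ws"
  done
done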
I wish I could comment to ask for more clarification, but I will attempt to help you.
#!/bin/bash
# example for user1
archive_path=/backup/data/websites/user1
file_path=/home/user1/websites/*
for i in $file_path
do
tar czf $archive_path/$(basename $i).tar.gz $i
done
Okay, after a small fix, the version from Pierre is working now. If you want, you can adjust it to your needs.
#VARIABLES
BKP_DATE=$(date +"%F")
BKP_DATETIME=$(date +"%H-%M")
#BACKUPS FOLDERS
BKP_DEST=/backup
BKP_DEST_DATE=$BKP_DEST/$BKP_DATE
BKP_DEST_TIME=$BKP_DEST_DATE/$BKP_DATETIME
BACKUP_DIR=$BKP_DEST_TIME
#NUMBER OF DAYS TO KEEP ARCHIVES IN BACKUP DIRECTORY
KEEP_DAYS=7
#Create folders
mkdir -p $BKP_DEST_DATE
mkdir -p $BKP_DEST_TIME
mkdir -p $BACKUP_DIR
#DELETE FILES OLDER THAN {*} DAYS IN BACKUP SERVER DIRECTORY
#echo 'Deleting backup folder older than '${KEEP_DAYS}' days'
find $BKP_DEST/* -type d -ctime +${KEEP_DAYS} -exec rm -rf {} \;
#Do backups
#List only names available users data directories
usersdirectories=$(cd /home/user && find * -maxdepth 0 -type d | grep -Ev "(tmp|root)")
#Creating directories per user name
for i in $usersdirectories; do
for w in /home/user/$i/*; do
#echo ok
ws=$(basename $w)
echo mkdir -p $BACKUP_DIR/$i
echo tar -czvf $BACKUP_DIR/$i/$ws.tar.gz /home/user/$i/$ws
done
done
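Note that the echo in front of mkdir and tar turns the inner loop into a dry run that only prints the commands. Once the output looks right, the loop body would be run without the echo, i.e.:
mkdir -p $BACKUP_DIR/$i
tar -czvf $BACKUP_DIR/$i/$ws.tar.gz /home/user/$i/$ws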
I need to back up a folder containing more than 2,500 subfolders, around 3 TB in size, and then FTP it to a Windows-based FTP server.
As far as I know, the tar command is not able to do this in one go.
Is it therefore possible to create a script that tars each batch of 500 subfolders?
If yes, please share your command.
BTW: one way I would do it is just using mput in ftp, without compression:
touch ftp_temp.sh
chmod +x ftp_temp.sh
vim ftp_temp.sh :
#!/bin/bash
HOST='10.20.30.40'
USER='ftpuser'
PASSWD='ftpuserpasswd'
ftp -n -v $HOST << EOT
ascii
user $USER $PASSWD
prompt
cd upload
mput
bye
EOT
The following pattern may give you an idea:
find . -maxdepth 1 -type d -print0 | xargs -0 -n 500 echo
find will locate the names of all directories in the current directory, while xargs will pass up to 500 of them at a time to another command (in the above example, the echo command).
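Building on that pattern, a sketch that actually creates the archives (assuming GNU tar, bash, and that numbered names such as batch_1.tar.gz are acceptable) could look like this:
#!/bin/bash
# Archive first-level subdirectories in batches of 500.
# -mindepth 1 keeps the current directory "." itself out of the list.
find . -mindepth 1 -maxdepth 1 -type d -print0 | {
  batch=0
  dirs=()
  while IFS= read -r -d '' dir; do
    dirs+=("$dir")
    if ((${#dirs[@]} == 500)); then
      batch=$((batch + 1))
      tar -czf "batch_${batch}.tar.gz" "${dirs[@]}"
      dirs=()
    fi
  done
  # archive whatever is left over (fewer than 500 directories)
  if ((${#dirs[@]} > 0)); then
    batch=$((batch + 1))
    tar -czf "batch_${batch}.tar.gz" "${dirs[@]}"
  fi
}
The resulting batch_*.tar.gz files can then be uploaded with the mput approach shown above.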
I am attempting to write a script around rsync that saves daily backups in new directories named after the date they were created and deletes them 30 days after creation. The code below works, but it will quickly fill up my disk, because the -u option will not see that several files in the directory structure already exist in a previous backup. Is there a better way to do this that saves disk space/bandwidth? The --delete and --backup-dir options have been mentioned to me, but I have no idea how they would apply to this specific scenario.
#!/bin/bash
#User whose files are being backed up
BNAME=username
#directory to back up
BDIR=/home/username/BackThisUp
#directory to backup to
BackupDir=/var/home/username_local/BackupTo
#user
RUSER=$USER
#SSH Key
KEY=/var/home/username_local/.ssh
#Backupname
RBackup=`date +%F`
#Backup Server
BServ=backup.server
#Path
LPATH='Data for backup'
#date
DATE=`date +%F`
#make parent directory for backup
mkdir $BackupDir/$BNAME > /dev/null 2>&1
#Transfer new backups
rsync -avpHrz -e "ssh -i $KEY" $BNAME@$BServ:$BDIR $BackupDir/$BNAME/$DATE
find $BackupDir/$BNAME -type d -ctime +30 -exec rm -rf {} \;
I might do something simpler. Create a hash that only has the date's day in it. For example, 8/11/2015 would hash to 11.
Then do something like
# this number changes based on date.
hash=`date +%d`
rm -rf backup_folder/$hash
# then recreate backup_folder/$hash
You'll have around 30 days of backups. You may want to zip/compress these folders, assuming you have 30 times the size of the folder available on the disk.
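Putting that together with the rsync call from your script, a minimal sketch (reusing your variables and keeping your original flags) might be:
#!/bin/bash
# Rotate on the day of the month: roughly 30 daily backups are kept,
# and each slot is wiped and rewritten when its day comes around again.
hash=$(date +%d)
rm -rf "$BackupDir/$BNAME/$hash"
mkdir -p "$BackupDir/$BNAME/$hash"
rsync -avpHrz -e "ssh -i $KEY" "$BNAME@$BServ:$BDIR" "$BackupDir/$BNAME/$hash"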
When copying a file using cp to a folder that may or may not exist, how do I get cp to create the folder if necessary? Here is what I have tried:
[root@file nutch-0.9]# cp -f urls-resume /nosuchdirectory/hi.txt
cp: cannot create regular file `/nosuchdirectory/hi.txt': No such file or directory
To expand upon Christian's answer, the only reliable way to do this would be to combine mkdir and cp:
mkdir -p /foo/bar && cp myfile "$_"
As an aside, when you only need to create a single directory in an existing hierarchy, rsync can do it in one operation. I'm quite a fan of rsync as a much more versatile cp replacement, in fact:
rsync -a myfile /foo/bar/ # works if /foo exists but /foo/bar doesn't. bar is created.
I didn't know you could do that with cp.
You can do it with mkdir ..
mkdir -p /var/path/to/your/dir
EDIT
See lhunath's answer for incorporating cp.
One can also use the find command together with cpio:
find ./ -depth -print | cpio -pvd newdirpathname
Here -p is cpio's pass-through (copy) mode, -d creates leading directories as needed, and -v lists the files as they are copied.
mkdir -p `dirname /nosuchdirectory/hi.txt` && cp -r urls-resume /nosuchdirectory/hi.txt
There is no such option. What you can do is run mkdir -p before copying the file.
I made a very cool script you can use to copy files to locations that don't exist:
#!/bin/bash
if [ ! -d "$2" ]; then
mkdir -p "$2"
fi
cp -R "$1" "$2"
Now just save it, make it executable, and run it using
./cp-improved SOURCE DEST
I included the -R option, but it's just a draft; I know it can be improved in many ways. Hope it helps you.
rsync works!
#file:
rsync -aqz _vimrc ~/.vimrc
#directory:
rsync -aqz _vim/ ~/.vim
cp -Rvn /source/path/* /destination/path/
cp: /destination/path/any.zip: No such file or directory
cp -Rvn will create non-existing paths in the destination if the path has a source file inside it, but it does not create empty directories.
A moment ago I saw the xxxxxxxx: No such file or directory message above because I had run out of free space; cp gave no clearer error message. With ditto:
ditto -V /source/path/* /destination/path
ditto: /destination/path/any.zip: No space left on device
Once space was freed, cp -Rvn /source/path/* /destination/path/ worked as expected.