I have a scenario where I have to take an incremental backup, every 5 minutes, of a large file (around 100 GB) on the local machine, but only if the content of the file has changed.
Filename: example.txt
Backups: example.txt.00:05, example.txt.00:10, example.txt.00:15 and
so on.
What would be the most efficient way to do this?
If I opt for diff, it will take a lot of time to check the content of the file.
I would prefer doing it with rsync, but I am unsure how it will manage the multiple backup files.
I figured this out from the rsync man page.
-b, --backup make backups (see --suffix & --backup-dir)
--backup-dir=DIR make backups into hierarchy based in DIR
--suffix=SUFFIX backup suffix (default ~ w/o --backup-dir)
Script:
#!/bin/bash
# Every 5 minutes, back up example.txt into backup/; previous versions of a
# changed file are kept in backup/archive/ with the timestamp as a suffix.
while true
do
    timestamp=$(date +"%H:%M:%S")
    echo "$timestamp"
    rsync -avschz --backup --backup-dir=archive --suffix="-$timestamp" example.txt backup
    sleep 300
done
The above script will create an archive directory inside the backup directory and rename the files accordingly.
Output:
imohit:rsync-script ethicalmohit$ ls -l backup/archive/
total 88064
-rw-r--r-- 1 ethicalmohit staff 18874368 Mar 25 03:06 example.txt-03:15:41
-rw-r--r-- 1 ethicalmohit staff 12582912 Mar 25 03:17 example.txt-03:25:42
-rw-r--r-- 1 ethicalmohit staff 13631488 Mar 25 03:25 example.txt-03:30:42
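If you prefer not to keep a shell loop running in the foreground, the same command can be scheduled from cron instead of using while/sleep. A minimal sketch, assuming the timestamp and rsync lines above are saved on their own (without the loop) in /path/to/backup-example.sh, which is a hypothetical path:
*/5 * * * * /path/to/backup-example.sh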
I am trying to use rsync to back up some data from one computer (PopOS! 21.04) to another (Rocky 8.4). But no matter which flags I use with rsync, file permissions and ownership never seem to be preserved.
What I do, is run this command locally on PopOS:
sudo rsync -avz /home/user1/test/ root@192.168.10.11:/root/ttt/
And the result I get is something like this:
[root@rocky_clone0 ~]# ls -ld ttt/
drwxrwxr-x. 2 user23 user23 32 Dec 17 2021 ttt/
[root@rocky_clone0 ~]# ls -l ttt/
total 8
-rw-rw-r--. 1 user23 user23 57 Dec 17 2021 test1
-rw-rw-r--. 1 user23 user23 29 Dec 17 2021 test2
So all the file ownership changes to user23, which is the only regular user on Rocky. I don't understand how this happens; with rsync I am connecting as root on the remote host, yet the files end up owned by user23. Why doesn't the -a flag work properly in this case?
I have also tried these flags:
sudo rsync -avz --rsync-path="sudo -u user23 rsync -a" /home/user1/test root@192.168.10.11:/home/user23/rrr
This command couldn't copy to the root directory, so I had to change the remote destination to user23's home folder, but the result is the same.
If someone could explain what I am doing wrong, and how to back up files with rsync so that permissions and ownership stay the same as on the local computer, I would very much appreciate it.
Have a look at how the target filesystem is mounted on the Rocky (target) system.
Some mounted filesystems (such as many FUSE mounts) do not support the classical unix permissions, and simply use the name of the user who mounted the filesystem as owner/group.
Any attempt to chown/chmod/etc (either by you or by rsync) will just silently be ignored, but appear to "succeed" (no errors reported).
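A quick way to check is to ask which filesystem actually backs the destination path. A minimal sketch using the path from the question (findmnt ships with util-linux on most modern Linux systems; df -T works as a fallback):
findmnt -T /root/ttt    # shows the device, filesystem type and mount options backing that path
df -T /root/ttt         # fallback: prints the filesystem type for the same path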
There are two directories that contain these files:
The first one is /usr/local/nagios/etc/hosts:
[root@localhost hosts]$ ll
total 12
-rw-rw-r-- 1 apache nagios 1236 Feb 7 10:10 10.80.12.53.cfg
-rw-rw-r-- 1 apache nagios 1064 Feb 27 22:47 10.80.12.62.cfg
-rw-rw-r-- 1 apache nagios 1063 Feb 22 12:02 localhost.cfg
And the second one is /usr/local/nagios/etc/services:
[root@localhost services]$ ll
total 20
-rw-rw-r-- 1 apache nagios 2183 Feb 27 22:48 10.80.12.62.cfg
-rw-rw-r-- 1 apache nagios 1339 Feb 13 10:47 Check usage _etc.cfg
-rw-rw-r-- 1 apache nagios 7874 Feb 22 11:59 localhost.cfg
And I have a script that goes through a file in the hosts directory and pastes some lines from that file into the file with the same name in the services directory.
The script is run like this:
./nagios-contacts.sh /usr/local/nagios/etc/hosts/10.80.12.62.cfg /usr/local/nagios/etc/services/10.80.12.62.cfg
How can I write another script that calls my script for every file in the hosts directory, pairing it with the file of the same name in the services directory?
In my script I'm pulling the contacts out of 10.80.12.62.cfg in the hosts directory and appending them to the file with the same name in the services directory.
Don't use ls output as input to a for loop; use the shell's built-in wildcards instead. Parsing ls output is a well-known source of bugs.
# For every host .cfg, call the script with the matching services .cfg
for f in /usr/local/nagios/etc/hosts/*.cfg
do
    basef=$(basename "$f")
    ./nagios-contacts.sh "$f" "/usr/local/nagios/etc/services/${basef}"
done
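This maps each hosts/<name>.cfg to services/<name>.cfg via basename, and because it relies on the shell glob rather than ls output it also copes with names containing spaces, such as Check usage _etc.cfg in the listing above.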
It sounds like you just need to do some iteration.
echo $(pwd)
for file in $(ls); do ./nagios-contacts.sh $file; done;
So it will loop over all files in the current directory.
You can also modify it to work from an absolute path:
abspath=$1
for file in $(ls $abspath); do ./nagios-contacts.sh $abspath/$file; done
which would loop over all files in a set directory, and then pass the abspath/filename into your script.
I need some help understanding how cron relates to the timestamps on my backup files.
I have a shell script that backs up logfiles, running on RHEL 6.7 and Solaris 10. It moves the logfiles to a backup directory and gzips each one.
Here is the script.
#!/bin/bash
# Defined variables
dirLog=/app/rbt3/prod/cda/logs
dirBackup=/app/rbt3/prod/cda/logs/backup
# Change directory to the CDA logfile directory
cd "$dirLog" || exit 1
# Backup mechanism
for file in *.log.* ; do
    #echo "All files -> $file"
    echo " Moving file $file to directory $dirBackup "
    /bin/mv "$dirLog/$file" "$dirBackup"
    echo " start Gzip file [$file]..... "
    /bin/gzip "$dirBackup/$file"
    echo " done Gzip file [$file]..... "
done
The script is registered in crontab to run every day at 1:20 AM.
20 1 * * * /app/prod/logs/backupLog.sh
Here are the backup files that cron created.
-rw-r--r-- 1 user3 user 36344 Nov 18 11:59 alarm.log.20161117.gz
-rw-r--r-- 1 user3 user 35085 Nov 19 11:59 rsync.log.20161117.gz
-rw-r--r-- 1 user3 user 35018 Nov 20 11:59 trace.log.20161117.gz
As far as I know, when we register a script in cron for a specific time, it runs and creates the files at exactly the time cron specifies (please correct me if I'm wrong). But in my case, the timestamps on the backup files differ from the cron schedule. Did I miss something?
Thanks.
When you move a file, its timestamps do not change, and gzip, which you use for compressing, carries the original file's timestamp over to the compressed output. That's why you see different timestamps.
$ ls -l
-rw-r--r-- 1 user3 user 36344 Nov 18 11:59 alarm.log.20161117.gz
-rw-r--r-- 1 user3 user 35085 Nov 19 11:59 rsync.log.20161117.gz
-rw-r--r-- 1 user3 user 35018 Nov 20 11:59 trace.log.20161117.gz
ls -l shows the last modification time of your log file.
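You can reproduce this behaviour locally. A minimal sketch with made-up file names (touch -d is GNU syntax, so this applies to the RHEL box rather than Solaris):
mkdir -p /tmp/backup
touch -d '2016-11-17 11:59' trace.log.20161117    # create a file with an old modification time
mv trace.log.20161117 /tmp/backup/                # mv keeps the original mtime
gzip /tmp/backup/trace.log.20161117               # gzip carries that mtime onto the .gz
ls -l /tmp/backup/trace.log.20161117.gz           # still shows Nov 17 11:59, not the time the commands ran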
I am trying to copy directories (& files) recursively from one directory to another.
I tried the following -
rsync -avz <source> <target>
cp -ruT <source> <target>
Both were successful, but when I try to compare the sizes using du -c, the empty directories seem to have a mismatch in size.
In target directory
drwxrwxr-x 2 abc devl 4096 Jun 9 01:25 .
drwxrwxr-x 4 abc devl 4096 Jul 20 07:46 ..
In source directory
drwxrwxr-x 2 prod ops 2 Jun 9 01:25 .
drwxrwxr-x 4 prod ops 36 Jul 20 07:46 ..
Is there a special way to handle this? diff -qr doesn't show any differences though.
Thanks for your help.
Are both folders on the same volume? If not, chances are that the sector size for those volumes differs, and in turn the sizes reported for the directory inodes differ. diff just looks at whether or not the directory exists and whether it contains the corresponding files. It's similar to how diff doesn't report permission differences, because those can be pretty system-specific.
A pretty comprehensive answer can be found here: Why size reporting for directories is different than other files?
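If the goal is simply to confirm that the copy is complete, a content-level comparison sidesteps the directory-size question entirely. A sketch, with source/ and target/ standing in for the real paths:
rsync -avcn --delete source/ target/    # dry run: lists anything whose content differs or is missing
diff -qr source/ target/                # or compare file contents directly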
I created a backup folder on an FTP server and sent all my .tar.gz files into the /backup folder
using (put file.tar.gz backup).
When I retrieve the backup, I get the backup folder back as plain backup files. How do I convert those files back into the folder?
ftp server
ls
227 Entering Passive Mode (10,21,131,105,76,56)
150 Accepted data connection
drwxr-xr-x 6 100 ftpgroup 7 Oct 20 19:57 .
drwxr-xr-x 6 100 ftpgroup 7 Oct 20 19:57 ..
-r-------- 1 100 ftpgroup 84 Oct 21 11:15 .banner
drwxrwxrwx 3 100 ftpgroup 4 Oct 20 18:28 backup
drwxrwxrwx 2 100 ftpgroup 3 Oct 20 19:45 dailybackup
drwxrwxr-x 2 100 ftpgroup 3 Oct 20 19:57 hi5songs
drwxrwxr-x 2 100 ftpgroup 3 Oct 20 19:49 whole
226-Options: -a -l
226 7 matches total
I tried:
ftp> mget backup
mget .? y
227 Entering Passive Mode (10,21,131,105,62,8)
550 I can only retrieve regular files
mget ..? y
Warning: embedded .. in .. (changing to !!)
227 Entering Passive Mode (10,21,131,105,46,39)
550 Can't open !!: No such file or directory
mget backup? y
227 Entering Passive Mode (10,21,131,105,72,24)
550 I can only retrieve regular files
mget cpanelbackup? y
227 Entering Passive Mode (10,21,131,105,73,69)
550 Can't open cpanelbackup: No such file or directory
When I use (get backup home), it retrieves successfully, but only as the files shown below.
server:
root@azar [/home]# ls
./ backup.2* .cpan/ dailybackup hi5songs.4 oldeserver
../ backup.3* cPanelInstall/ hi5songs/ hi5songs.5 oldserver/
0_README_BEFORE_DELETING_VIRTFS backup.4* .cpanm/ hi5songs.1 home quota.user
backup/ backup.5* .cpcpan/ hi5songs.2 latest virtfs/
backup.1* .banner cpeasyapache/ hi5songs.3 lost+found/ whole
I got the backup as green, executable-looking files like backup.1* (note: I can't open or extract those files). What should I do?
How do I get my .tar.gz files back?
Please guide me.
Thanks in advance.
Updated Answer
If you want to get all files from /some/place on your server, to /home/here on your local machine, you would either do this:
cd /home/here # change directory before starting FTP
ftp server ... # connect
cd /some/place # go to desired folder on server
bi # ensure no funny business with line-endings
mget * # get all files
or you can change directory locally, within FTP like this:
ftp server ... # connect
cd /some/place # go to desired folder on server
lcd /home/here # LOCALLY change directory to where you want the files to 'land'
bi # ensure no funny business
mget * # get all files
Original Answer
I cannot understand your question at all, but you are doing some things wrong.
You cannot use GET or MGET to get a folder (directory) like you are trying to do with mget backup. You can only GET a file. Now your file may be a tar-file with more than one file in it, but it is still a file.
If you are getting tar-files and binary files, you should use BINARY mode to ensure line-end characters that may occur in binary files are not translated between Windows and Unix line-endings. So, as a matter of course, you should issue BI command before you get files.
If you have several files in your backup directory, you should probably do cd backup, then bi, then mget *.
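Putting that together for the layout shown earlier, assuming the tarballs really are inside /backup on the server and /home/restore is just a hypothetical local destination:
ftp server
cd backup          # directory on the server holding the .tar.gz files
lcd /home/restore  # local directory where the files should land
bi                 # binary mode so the archives are not corrupted
prompt             # optional: turn off the per-file y/n question for mget
mget *.tar.gz      # fetch every tarball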