Linux - copying only new files from one server to another

I have a server where files are transferred via FTP to a location. All files since transfers began (January 2015) are still there.
I want to set up a new server and transfer the files from the first server's location.
Basically, I need a cron job that runs scp and transfers only the files that are new since the last run.
The SSH connection between the servers works and I can transfer files between them without restriction.
How can I achieve this in Ubuntu?
The suggested duplicate doesn't apply here: on my destination server I will keep just one file with the date of the last cron run, and the files copied from the first server will be parsed and deleted afterwards.
rsync would simply make sure that all files exist on both servers, correct?

I managed to set up the cron job on the remote computer using the following:
First I created a timestamp file that keeps the time of the last cron run:
touch timestamp
Then I copy all newer files using ssh and scp:
ssh username@remote find <files_path> -type f -newer timestamp | xargs -i scp username@remote:'{}' <local_path>
Then I touch the timestamp file with a new modification time:
touch -m timestamp
The only problem with this script is that if a file arrives on the remote host while the ssh command is running, before the timestamp is touched the second time, that file is ignored on later runs.
Later edit:
To make sure there is no gap between the timestamp file and the actual run caused by the duration of the ssh command, the script was changed to:
touch timestamp_new
ssh username@remote find <files_path> -type f -newer timestamp | xargs -i scp username@remote:'{}' <local_path>
rm -rf timestamp
mv timestamp_new timestamp
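A variation on the same idea, sketched below, keeps the last run time as text on the local side and passes it to GNU find's -newermt, so no timestamp file has to exist on the source server. This is only a sketch: <files_path>, <local_path>, username@remote and the working directory are placeholders.
#!/bin/bash
# Sketch only: record the start time of this run before transferring, so files
# arriving during the transfer are picked up next time, then fetch everything
# modified since the previous run.
cd /path/to/workdir || exit 1
last_run=$(cat last_run 2>/dev/null || echo "2015-01-01 00:00:00")
date '+%Y-%m-%d %H:%M:%S' > last_run.new
ssh username@remote "find <files_path> -type f -newermt '$last_run'" \
    | xargs -i scp username@remote:'{}' <local_path>
mv last_run.new last_run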

Related

Linux shell script to download file(s) from server to PC while connected with putty

I am connected to a server through PuTTY, and I want to download (to my PC) certain files on a regular basis using a shell script. Specifically, these are the files...
ls -t ~/backup | head -n2
What is the best strategy for this? I tried command-line FTP, but I am prompted to log in to something. I'm already logged in to the server that has the files I need to download, so I must be missing something.
The SSH protocol, with the scp command, is a good way to do this. You can take a look at this thread.
To automate the process and script a solution, you will need to use password-less ssh with ssh keys.
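If keys are not set up yet, a minimal setup (assuming OpenSSH, with placeholder user and host names) looks roughly like this:
ssh-keygen -t ed25519              # generate a key pair locally, accepting the defaults
ssh-copy-id username@host          # install the public key on the server
ssh username@host true             # should now connect without a password prompt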
The first step is to get the list of files to copy:
fils=$(ssh username@host ls -t ~/backup | head -n2)
Then, once we have the file names in the variable fils, we can loop over the entries and run a secure copy for each:
while read fyle
do
    scp username@host:~/backup/"$fyle" "$fyle"
done <<< "$fils"

How can I create a system to process files directly after they are transferred to a different server?

Currently, I have a server that needs to batch process a bunch of files. All of the files are on Server A, which is running Ubuntu, but I need them to be processed on a macOS server. Right now, I have a script that transfers all the files from Server A to Server B, processes them all, and then transfers them all back to Server A.
The bash script looks like this (simplified):
script -q -c "scp -r files_to_process b:process_these 2>&1"
ssh b "process_all.sh"
script -q -c "scp -r processed_files a:final_dir 2>&1"
My question is this: is there an easy way to implement a simple queue between these servers?
Once a file has been transferred to b, I am wasting time by not processing it immediately.
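One low-tech option is to treat the file list itself as the queue: push one file at a time and process it immediately instead of waiting for the whole batch. The sketch below assumes the driver runs on Server A and that a hypothetical per-file script, process_one.sh (not part of the original setup), replaces process_all.sh:
#!/bin/bash
# Sketch only: copy, process and fetch back each file individually, so Server B
# starts working as soon as the first file arrives.
for f in files_to_process/*; do
    name=$(basename "$f")
    scp "$f" "b:process_these/$name"
    ssh b "process_one.sh process_these/$name"
    scp "b:processed_files/$name" final_dir/
done
Splitting the work per file also makes it straightforward to overlap transfer and processing later, for example by running several of these per-file pipelines in parallel with xargs -P.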

SCP to transfer files from remote server which are modified after specific time

On the remote server, log files are rotated as shown below when the active log file (file.log) reaches 100 MB:
delete file.log.4
file.log.3 -> file.log.4
file.log.2 -> file.log.3
file.log.1 -> file.log.2
file.log -> file.log.1
Initially, all the files will be moved to the local server and renamed as below:
file.log_timestamp_of_log4
file.log_timestamp_of_log3
file.log_timestamp_of_log2
file.log_timestamp_of_log1
Then, afterwards, only the files modified after the last script run time should be moved to the local server.
For example, the next time the script runs, if file.log.1 and file.log.2 have a modification time greater than the previous script run time, then only these should be moved to the local server.
Can this be done using scp?
scp is a command to copy files from one server to another, so if you are copying from remote to local, yes, you can use scp. To fetch the last modification date you can use date -r, and you can save the last script run time to compare against. You need to use scp -p to preserve the modification date. To calculate the size you can use du.
So do something like the following (sketched as a shell script):
scp -p remotepath:/filename localpath
last_mod=$(date -r filename +%s)        # modification time as seconds since the epoch
size=$(du -m filename | cut -f1)        # size in whole MB (a plain number is easier to compare than du -h)
if [ "$last_mod" -gt "$script_runtime" ]; then
    if [ "$size" -gt 100 ]; then
        mv filename filename1
    fi
fi
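Borrowing the timestamp idea from the first answer above, a hedged sketch that pulls only the rotated logs modified since the last run and renames each copy with its own modification time could look like this. user@remotehost, the log directory and the local working directory are placeholders, and the naming format is an assumption:
#!/bin/bash
# Sketch only, not a drop-in script: fetch rotated logs newer than the last run,
# preserve their timestamps with scp -p, and rename the local copies to
# file.log_<modification time>.
cd /path/to/workdir || exit 1
last_run=$(cat last_run 2>/dev/null || echo "1970-01-01 00:00:00")
date '+%Y-%m-%d %H:%M:%S' > last_run.new
ssh user@remotehost "find /path/to/logs -name 'file.log.*' -type f -newermt '$last_run'" |
while read -r f; do
    scp -p "user@remotehost:$f" .                      # -p keeps the modification time
    name=$(basename "$f")
    mv "$name" "file.log_$(date -r "$name" '+%Y%m%d%H%M%S')"
done
mv last_run.new last_run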

split scp of backup files to different smb shares based on date

I back up files to tar files once a day, grab them from our Ubuntu servers with a backup shell script, and put them in a share. We only have 5 TB shares, but we can have several.
At the moment we need more, as we keep 30 days' worth of tar files.
I need a method where the first 10 days go to share one, the next ten to share two, and the next 11 to share three.
Currently, each server VM runs the following script to back up and tar folders and place them in another folder, ready to be grabbed by the backup server:
#!/bin/bash
appname=myapp.com
dbname=mydb
dbuser=myDBuser
dbpass=MyDBpass
datestamp=`date +%d%m%y`
rm -f /var/mybackupTars/* > /dev/null 2>&1
mysqldump -u$dbuser -p$dbpass $dbname > /var/mybackups/$dbname-$datestamp.sql && gzip /var/mybackups/$dbname-$datestamp.sql
tar -zcf /var/mybackups/myapp-$datestamp.tar.gz /var/www/myapp > /dev/null 2>&1
tar -zcf /var/mydirectory/myapp-$datestamp.tar.gz /var/www/html/myapp > /dev/null 2>&1
I then grab the backups using a script on the backup server and put them in a share
#!/bin/bash
#
# Generate a list of myapps to grab
df|grep myappbackups|awk -F/ '{ print $NF }' > /tmp/myapplistsmb
# Get each app in turn
for APPNAME in `cat /tmp/myapplistsmb`
do
cd /srv/myappbackups/$APPNAME
scp $APPNAME:* .
done
I know this is a tough one, but I really need three shares with roughly ten days' worth in each.
I do not anticipate changing the backup script on each server VM that backs up to itself,
only perhaps the grabber script that puts the dated backups in the share on the backup server.
Or am I wrong?
Any help would be great.
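One way to get the 10/10/11-day split without touching the per-VM backup script is to have the grabber (or a follow-up step on the backup server) route each tar file by the DDMMYY datestamp in its name. A rough sketch only, with the three share mount points as placeholder paths:
#!/bin/bash
# Sketch: sort grabbed tar files into three shares by age, derived from the
# DDMMYY datestamp embedded in the filename (myapp-DDMMYY.tar.gz).
SHARE1=/mnt/share1   # days 0-9
SHARE2=/mnt/share2   # days 10-19
SHARE3=/mnt/share3   # days 20-30
now=$(date +%s)
for f in /srv/myappbackups/*/*-??????.tar.gz; do
    stamp=${f##*-}; stamp=${stamp%.tar.gz}            # DDMMYY
    dd=${stamp:0:2}; mm=${stamp:2:2}; yy=${stamp:4:2}
    file_epoch=$(date -d "20${yy}-${mm}-${dd}" +%s) || continue
    age=$(( (now - file_epoch) / 86400 ))
    if   [ "$age" -lt 10 ]; then dest=$SHARE1
    elif [ "$age" -lt 20 ]; then dest=$SHARE2
    else                         dest=$SHARE3
    fi
    mv "$f" "$dest"/
done
Note that this only sorts the files sitting in the grab location; moving older archives between shares as they age would need a similar pass over the shares themselves.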

Copying updated files from one server to another after 15 min

I want to copy updated files from one server to another every 15 minutes, as new files get generated. I have written code using an expect script. It works, but every 15 minutes it copies all the files in the directory, i.e. it replaces the existing ones and copies the latest one as well. I want only the files updated in the last 15 minutes to be copied, not all the files.
Here is my script:
while :
do
expect -c "spawn scp -P $Port sftpuser@$IP_APP:/mnt/oam/PmCounters/LBO* Test/;expect \"password\";send \"password\r\";expect eof"
sleep 900
done
Can I use rsync or some other approach, and how?
rsync only copies changed or new files by default. Use, for example, this syntax:
rsync -avz -e ssh remoteuser#remotehost:/remote/dir /local/dir/
This specifies ssh as the remote shell to use (-e ssh), -a activates archive mode, -v enables verbose output and -z compresses the transfer.
You could run that every 15 minutes from a cron job.
For the password you can use the $RSYNC_PASSWORD environment variable or the --password-file option, but note that those only apply when connecting to an rsync daemon; with ssh as the transport (as above), key-based authentication is the usual way to avoid a password prompt.
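For example, a crontab entry (added with crontab -e on the local server) along these lines would run the same command every 15 minutes; the paths simply mirror the example above and assume key-based ssh so no password prompt is needed:
*/15 * * * * rsync -avz -e ssh remoteuser@remotehost:/remote/dir /local/dir/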
