Copying updated files from one server to another every 15 min - Linux

I want to copy an updated file from one server to another every 15 min, whenever the new file gets generated. I have written an expect script for this. It works, but every 15 min it copies all the files in the directory, i.e. it re-copies the existing files as well as the latest one. I want only the updated file (the one generated in the last 15 min) to be copied, not all the files.
Here is my script:
while :
do
    expect -c "spawn scp -P $Port sftpuser@$IP_APP:/mnt/oam/PmCounters/LBO* Test/; expect \"password\"; send \"password\r\"; expect eof"
    sleep 900
done
Can I use rsync or some other approach instead, and how?

rsync only copies changed or new files by default. Use, for example, this syntax:
rsync -avz -e ssh remoteuser@remotehost:/remote/dir /local/dir/
That specifies ssh as the remote shell to use (-e ssh), activates archive mode (-a), enables verbose output (-v), and compresses the transfer (-z).
You could run it every 15 minutes as a cron job.
For the password you can use the RSYNC_PASSWORD environment variable or the --password-file option; note that both only apply when connecting to an rsync daemon. Over plain ssh, as in the example above, you would normally set up key-based authentication instead.
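A minimal sketch of the cron entry, assuming key-based ssh authentication is already set up and using placeholder values for the host, port, local directory and log file:
*/15 * * * * rsync -avz -e "ssh -p 2222" "sftpuser@remotehost:/mnt/oam/PmCounters/LBO*" /home/user/Test/ >> /var/log/pm-sync.log 2>&1
The */15 field makes cron start the job every 15 minutes, and only files that are new or have changed since the last run are transferred.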

Related

bash script: ssh create file; sleep 3m; rm file;

Trying to create a script that will ssh into a server, back up some files, sleep for 3 minutes, then remove the files.
While the remote command sleeps, the same script returns to the local machine and rsyncs the file; when the 3 minutes are up, the remote file is removed.
I'm trying this so as not to connect twice with ssh.
ssh $site "
tar -zcf $domain-$date.tar.gz $path;
{ sleep 3m && rm -f $domain-$date.tar.gz };
"
rsync -az $site:$domain-$date.tar.gz ~/WebSites/$domain/BackUp/$date;
I tried command grouping with ( ) to create a subshell, but I think the variables would not be read then. Not sure.
Your ssh command will sleep for 3 minutes and remove the files, then your script proceeds to try to rsync the files that got removed. There is no easy workaround for having your first ssh command sleep while your own script proceeds to run rsync.
Do either of the following:
ssh into the server twice: after rsync completes, ssh into the server again and remove the files.
Tell rsync to remove the files after it has synced them, by adding the --remove-source-files option (see the sketch below).
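A minimal sketch of the second option, reusing the variables from the question ($site, $domain, $date and $path are assumed to be set elsewhere):
# create the archive on the remote host (no sleep needed)
ssh "$site" "tar -zcf $domain-$date.tar.gz $path"
# pull it down and let rsync delete the remote copy once it has been transferred
rsync -az --remove-source-files "$site:$domain-$date.tar.gz" ~/WebSites/$domain/BackUp/$date
This avoids the race entirely: the archive is only removed after rsync has finished copying it.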

wget -o command output generates more than one file, is it possible to get only one?

I am executing "wget -o", and because the output is bigger than expected, it is split into more than one file. Is there a way to get only one file? If possible I would prefer to use only the wget command.
The wget command that I am executing is:
$ wget -o neighborhoods.json https://raw.githubusercontent.com/mongodb/docs-assets/geospatial/neighborhoods.json
And the multiple output is:
-rw-rw-r-- 1 ubuntu ubuntu 6652 Mar 4 01:15 neighborhoods.json
-rw-rw-r-- 1 ubuntu ubuntu 4137081 Mar 4 01:15 neighborhoods.json.1
Look closely at the wget output and you will see what it is doing. wget does not split long files; instead, it avoids overwriting existing files (creating a new file with a .1 suffix rather than touching the one that already exists).
In your case the small neighborhoods.json is wget's log: lowercase -o means "write the log to this file", so the downloaded document then collides with that name and is saved as neighborhoods.json.1. To name the downloaded file itself, use capital -O instead (see the example below).
Delete the two neighborXXX files and start wget again with -O; be sure it finishes without problems and it will create the single file you asked for. If it is interrupted and you restart it, it will again create a new file (appending .1 and so on).
You can pass the -c option to tell it to continue a broken download if it was interrupted; most of the time that works well (though not always).
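For example, with capital -O the same download produces a single file:
wget -O neighborhoods.json https://raw.githubusercontent.com/mongodb/docs-assets/geospatial/neighborhoods.json
Here -O writes the downloaded document to neighborhoods.json, and the progress messages go to the terminal as usual.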

SCP to transfer files from remote server which are modified after specific time

On the remote server, log files are rotated as shown below when the active log file (file.log) reaches 100 MB:
delete file.log.4
file.log.3 -> file.log.4
file.log.2 -> file.log.3
file.log.1 -> file.log.2
file.log -> file.log.1
Initially all the files will be moved to the local server and renamed as below:
file.log_timestamp_of_log4
file.log_timestamp_of_log3
file.log_timestamp_of_log2
file.log_timestamp_of_log1
After that, only the files modified since the last script run should be moved to the local server.
For example, on the next run, if only file.log.1 and file.log.2 have a modification time later than the previous script run time, then only these two should be moved to the local server.
Can this be done using scp?
scp is a command to copy files from one server to another, so if you are copying from remote to local, yes, you can use scp. To fetch a file's modification date you can use date -r, and you can save the last script run time to compare against. You need scp -p to preserve the modification date, and you can use du to get the file size.
So do something like the following algorithm (a rough bash sketch):
# copy the file, preserving its modification time (scp -p)
scp -p remotehost:/path/to/filename /local/path/
# modification time (epoch seconds) and size (MB) of the local copy
last_mod=$(date -r /local/path/filename +%s)
size=$(du -m /local/path/filename | cut -f1)
# $script_runtime holds the epoch time of the previous run, saved earlier by the script
if [ "$last_mod" -gt "$script_runtime" ]; then
    if [ "$size" -ge 100 ]; then
        mv /local/path/filename /local/path/filename1
    fi
fi

Linux - copying only new files from one server to another

I have a server where files are transferred through FTP to a location. All files since the beginning of the transfers (January 2015) are there.
I want to set up a new server and transfer the files from the first server's location.
Basically, I need a cron job to run scp and transfer only new files since the last run.
The connection between the servers over ssh is working and I can transfer files without restriction between them.
How can I achieve this in Ubuntu?
The suggested duplicate doesn't apply because, on my destination server, I will have just one file in which I keep the date of the last cron run, and the files copied from the first server will be parsed and deleted afterwards.
rsync will simply make sure that all files exist on both servers, correct?
I managed to set up the cron job using the following.
First I created a timestamp file which keeps the timestamp of the last cron job run:
touch timestamp
Then I copy the newer files with ssh and scp:
ssh username@remote find <files_path> -type f -newer timestamp | xargs -i scp username@remote:'{}' <local_path>
Then I touch the timestamp file to update its modification time:
touch -m timestamp
The only problem with this script is that, if a file arrives on the remote host while the ssh command is running, before the timestamp is touched the second time, that file is ignored on later runs.
Later edit:
To be sure that there is no gap between the timestamp and the actual run caused by the duration of the ssh command, the script was changed to:
touch timestamp_new
ssh username@remote find <files_path> -type f -newer timestamp | xargs -i scp username@remote:'{}' <local_path>
rm -f timestamp
mv timestamp_new timestamp
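If those four lines are saved as a script (say /usr/local/bin/sync_new.sh, a placeholder name and location), it can be scheduled with cron, for example every 15 minutes:
*/15 * * * * /usr/local/bin/sync_new.sh >> /var/log/sync_new.log 2>&1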

Simple bash script runs asynchronously when run as a cron job

I have a backup script written that will do the following in this order:
Zip up files via SSH on a remote backup server
Dump my local database
Transfer my local database via SSH rsync to the backup server
When I run this script from the command line on RHEL, it works perfectly fine.
BUT when I set it to run via a cron job, the script does run, but from what I can tell it somehow runs those three commands simultaneously. Because of that, things get done out of order (my local database finishes dumping and is transferred before the zip job in step 1 is actually complete).
Has anyone run across such a strange scenario? As the simplest fix, is there a way to force a script to run synchronously? Maybe some kind of command to wait for the prior line to complete before moving on?
EDIT: I added an example version of my backup script. It seems that the second line of my script runs at the same time as the first, so while the ssh command has been issued, it has not completed before my second line triggers and the SQL dump begins.
#!/bin/bash
THEDIR="sample"
THEDBNAME="mydatabase"
ssh -i /rsync/mirror-rsync-key sample@sample.com "tar zcvpf /$THEDIR/old-1.tar /$THEDIR/public_html/*"
mysqldump --opt -Q $THEDBNAME > mySampleDb
/usr/bin/rsync -avz --delete --exclude='**/stats' --exclude='**/error' -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/public_html/ sample@sample.com:/$THEDIR/public_html/
/usr/bin/rsync -avz --delete --exclude='**/stats' --exclude='**/error' -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/ sample@sample.com:/$THEDIR/
Unless you're explicitly backgrounding commands with &, everything should run one by one, each line waiting until the previous one finishes.
Perhaps you are actually seeing overlapping executions started by cron? If so, you can prevent multiple simultaneous runs by calling your script with flock; with -n, flock gives up immediately instead of waiting if a previous run still holds the lock.
e.g. change the midnight cron entry from
0 0 * * * backup.sh
to
0 0 * * * flock -n /tmp/backup.lock -c backup.sh
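Since cron runs with a minimal environment (notably a short PATH), absolute paths are safer in the crontab line. A variation of the entry above, assuming backup.sh lives in /usr/local/bin (a placeholder location):
0 0 * * * /usr/bin/flock -n /tmp/backup.lock -c /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
Redirecting the output to a log also makes it easier to see in what order the steps actually ran.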
If you want to run commands in sequential order you can use the ; operator.
; – semicolon operator
This operator runs multiple commands in one go, but in sequential order: with three commands separated by semicolons, the second runs after the first completes, and the third runs only after the second completes. Note that the second command runs regardless of the first command's exit status.
For example, to execute ls, pwd and whoami on one line, sequentially one after the other:
ls;pwd;whoami
Please correct me if I am not understanding your question correctly.
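For comparison, a short sketch of the difference between ; and && (the directory name is just an illustration):
cd /nonexistent; echo "this still runs"        # ; ignores the failed cd
cd /nonexistent && echo "this is skipped"      # && stops because cd failed
If later steps depend on earlier ones succeeding, && (or set -e at the top of the script) is usually the safer choice.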
