I have a set of folders on my local *nix box and 3 NFS mounts from my NAS. All my permissions seem to be set so that I can write to the NAS, and I also have a mirror of the folder structure on the NAS.
I want to use rsync as a cron job so that anything new in a folder or subfolder is synced to my NAS with the --ignore-existing flag. After the cron job has synced, I want to clear out the folders on my local machine so I'm left with just an empty folder structure. Is there anything more to it than this?
rsync -a -v --ignore-existing --remove-source-files src dst
No, there isn't.
I do pretty much the same from cron, via a script used on a few machines, which has:
POSTCMD="cd /var/local/backup/${HOSTNAME} && \
rsync --password-file /root/.rsyncd.passwd \
--archive \
--delete-after . ${NASSERVER}::backup/${HOSTNAME}"
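For the setup in the question, a minimal crontab entry could look something like this (the schedule and paths are placeholders, not tested on your layout; -v is dropped so cron mail stays quiet):
# Nightly at 02:00: copy anything new to the NAS, skip files the NAS already has,
# and delete the local copies that transferred successfully. Directories are left
# behind, so the empty folder structure remains.
0 2 * * * rsync -a --ignore-existing --remove-source-files /local/folder/ /mnt/nas/folder/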
I have a ThinkPad running Linux (Ubuntu 14.04) on a wired network and a Mac running Yosemite on wireless, in a different subnet. They're both work machines. I also have a 1 TB encrypted USB external Lenovo disk. I have created the following script, run from cron on the ThinkPad, to sync the hidden folders in /home/greg with the external drive (connected to the ThinkPad), provided it's mounted on the right directory. Then it should sync the remaining (non-hidden) content of /home/greg and perhaps selected, customised parts of /etc. Once that's done, it should do something similar for the Mac, keeping the hidden files separate but doing a union of the content. My first rsync is meant to include only the hidden files (.*/) in /home/greg, and the second rsync is meant to grab everything that's not hidden in that directory. The following is a work in progress.
#!/bin/bash

# Source
LOCALHOME="/home/greg/"

# Target disk
DRIVEBYIDPATH="/dev/disk/by-id"
DRIVEID="disk ID here"
DRIVE=$DRIVEBYIDPATH/$DRIVEID

# Mounted target directories
DRIVEMOUNTPOINT="/media/Secure-BU-drive"
THINKPADHIDDENFILES="/TPdot"
MACHIDDENFILES="/MACdot"
BACKUPDIR="/homeBU"

#if test -a $DRIVE ;then echo "-a $?";fi

# Check to see if the drive is showing up in /dev/disk/by-id
function drivePresent {
    if test -e $DRIVE
    then
        echo "$DRIVE IS PRESENT!"
        driveMounted
    else
        echo "$DRIVE is NOT PRESENT!"
    fi
}

# Check to see if drive is mounted where expected by rsync and if not mount it
function driveMounted {
    mountpoint -q $DRIVEMOUNTPOINT
    if [[ $? == 0 ]]
    then
        syncLocal # make sure local has completed before remote starts
        #temp disabled syncRemote
    else
        echo "drive $DRIVEID is PRESENT but NOT MOUNTED. Mounting $DRIVE on $DRIVEMOUNTPOINT"
        mount $DRIVE $DRIVEMOUNTPOINT
        if [ $? == 0 ]
        then
            driveMounted
            # could add a counter + while/if to limit the retries to say 5?
        fi # check mount worked, then re-test until true, at which point the test will follow the other path
    fi
}

# Archive THINKPAD to USB encrypted drive
function syncLocal {
    echo "drive $DRIVEID is PRESENT and MOUNTED on $DRIVEMOUNTPOINT - now do rsync"
    # rsync for all the Linux application specific files (hidden directories)
    rsync -ar --delete --update $LOCALHOME/.* $DRIVEMOUNTPOINT/$BACKUPDIR/$THINKPADHIDDENFILES
    # rsync for all the content (non-hidden directories)
    rsync -ar --delete --exclude-from ./excludeFromRsync.txt $LOCALHOME $DRIVEMOUNTPOINT/$BACKUPDIR
    # rsync for Linux /etc dir (includes some custom scripts and cron jobs)
    #rsync
}

# Sync MAC to USB encrypted drive
function syncRemote {
    echo "drive $DRIVEID is PRESENT and MOUNTED on $DRIVEMOUNTPOINT - now do rsync"
    # rsync for all the Mac application specific files (hidden directories)
    rsync -h --progress --stats -r -tgo -p -l -D --update /home/greg/ /media/Secure-BU-drive/
    # rsync for all the content (non-hidden directories)
    rsync -av --delete --exclude-from ./excludeFromRsync.txt $LOCALHOME $DRIVEMOUNTPOINT/$BACKUPDIR
    # rsync for Mac /etc dir (includes some custom scripts and cron jobs)
    #rsync
}

# This is the program starting
drivePresent
The content of the exclude file used by the second rsync in syncLocal is (NB: syncRemote is disabled for the moment):
.cache/
/Downloads/
.macromedia/
.kde/cache-North/
.kde/socket-North/
.kde/tmp-North/
.recently-used
.local/share/trash
**/*tmp*/
**/*cache*/
**/*Cache*/
**~
/mnt/*/**
/media/*/**
**/lost+found*/
/var/**
/proc/**
/dev/**
/sys/**
**/*Trash*/
**/*trash*/
**/.gvfs/
/Documents/RTC*
.*
My problem is that the first local rsync, which is meant to capture ONLY the /home/greg/.* files, seems to have captured everything, or possibly has failed silently and allowed the second local rsync to run without excluding the /home/greg/.* files.
I know I've added a load of possibly irrelevant context but I thought it might help set my expectations for the related rsyncs. Sorry if I've gone overboard.
Thanks in advance
Greg
You have to be very careful with .*, as it will pull in . and .. (the current and parent directories). So that's the problem with your first rsync line:
rsync -ar --delete --update $LOCALHOME/.* $DRIVEMOUNTPOINT/$BACKUPDIR/$THINKPADHIDDENFILES
The shell expands .*, so rsync sees . and .., and it will go fully recursive on those two!
I wonder if this might help: --exclude . --exclude .. Well, I'm sure you know about using -vn to help you debug rsync issues.
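Another angle that might be worth trying (a sketch I haven't tested against your layout): skip the shell glob entirely and let rsync's own filter rules pick out the top-level dot entries, for example:
rsync -ar --delete --update --include='/.*' --exclude='/*' "$LOCALHOME" "$DRIVEMOUNTPOINT/$BACKUPDIR/$THINKPADHIDDENFILES"
The anchored --include='/.*' matches only the hidden entries at the top of the transfer, --exclude='/*' drops everything else at that level, and anything below an included dot directory is copied as usual, so . and .. never enter the picture.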
I am using wget to download files over FTP.
The FTP folder is named /var/www/html/.
Inside this folder is a tree of folders and files, about 20 levels deep.
I am trying to download all of this over FTP (I have no SSH access) with wget.
wget --recursive -nv --user user --password pass ftp://site.tld/var/www/folder/
This command runs OK, but it creates the whole folder structure:
~/back/site.tld/var/www/html/my-files-and-folders-here
Question:
Is there any way to tell wget not to create ~/back/site.tld/var/www/html/ but to put the whole tree directly in the current folder, i.e. ~/back/my-files-want-here/? In other words, to trim/cut part of the remote path?
Thanks
Look for --no-host-directories and --cut-dirs in the manpage.
This should work as expected (maybe you have to increase/decrease --cut-dirs):
wget --recursive --no-verbose --no-host-directories --cut-dirs=3 --user user --password password ftp://site.tld/var/folder
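To make the counting concrete for the layout in the question (assuming the remote path really is /var/www/html/), the call might be:
# --no-host-directories drops the site.tld/ part and --cut-dirs=3 drops var/www/html/,
# so the files land directly in the current directory.
wget --recursive --no-verbose --no-host-directories --cut-dirs=3 --user user --password pass ftp://site.tld/var/www/html/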
I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are ubuntu linux. I use the following command, and it works well:
rsync -az /var/www/ user@10.1.1.1:/var/www/
The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.
Is this possible? If so, how would I go about doing that?
** EDIT **
There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html
** EDIT 2 **
I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:
sudo rsync -az user@10.1.1.2:/var/www/ /var/www/
This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.
You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify your sudoers file on the target host to avoid sudo's request for your password. man sudoers or run sudo visudo for instructions and samples.
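As a rough illustration (an assumption about your setup, not a drop-in config), the sudoers entry can be as narrow as letting the connecting user run only rsync without a password:
# Hypothetical /etc/sudoers.d/rsync entry on targethost (edit with visudo):
user ALL=(root) NOPASSWD: /usr/bin/rsync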
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option.
rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path
I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work. You'll undoubtedly need to experiment.
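One wrinkle to watch for (my untested assumption): --files-from treats each line as relative to the source directory given on the command line, so you may need find to emit relative paths, along these lines:
cd /path/to/files && find . -user www-data -print | \
    rsync -av --files-from=- --rsync-path="sudo rsync" . user@targethost:/path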
As far as I know, you cannot chown files to somebody other than yourself if you are not root. So you would have to rsync using the www-data account, as all files will be created with the specified user as owner, and then chown the files afterwards.
The root users for the local system and the remote system are different.
What does this mean? The root user is uid 0. How are they different?
Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
# rsync -az user@10.1.1.2:/var/www/ /var/www/
Make sure your groups match on both machines.
Also, set up access to user@10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
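If ssh-copy-id is available, it can do the appending for you (this assumes password authentication is still enabled on the source machine for this one-time step):
# Run as root on the target machine (10.1.1.1):
ssh-copy-id -i /root/.ssh/id_dsa.pub user@10.1.1.2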
I had a similar problem and cheated with the rsync command:
rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/
The && runs the chown against the folder only when the rsync completes successfully (a single '&' would background the rsync and run the chown regardless of the rsync's completion status).
Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
ssh user@remotehost "cd /some/path; sudo tar zxf -"
You'll need to set up your SSH keys as Graham described.
Note that this handles full directory copies, not incremental updates like rsync.
The idea here is that:
you tar up your directory,
instead of creating a tar file, you send the tar output to stdout,
that stdout is piped through an SSH command to a receiving tar on the other host,
but that receiving tar is run by sudo, so it has privileged write access to set file ownership.
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use (from Cygwin) to sync files with the server (Debian):
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
I connected to an Amazon Linux instance over SSH using a private key. I am trying to copy an entire folder from that instance to my local Linux machine.
Can anyone tell me the correct scp command to do this?
Or do I need something more than scp?
Both machines are Ubuntu 10.04 LTS
Another way to do it is:
scp -i "insert key file here" -r "insert ec2 instance here" "your local directory"
One mistake I made was scp -ir. The key has to be after the -i, and the -r after that.
so
scp -i amazon.pem -r ec2-user@ec2-##-##-##:/source/dir /destination/dir
Call scp from the client machine with the recursive option:
scp -r user@remote:src_directory dst_directory
scp -i {key path} -r ec2-user@54.159.147.19:{remote path} {local path}
For EC2 Ubuntu:
Go to the directory containing your .pem file, then:
scp -i "yourkey.pem" -r ec2user@DNS_name:/home/ubuntu/foldername ~/Desktop/localfolder
You could even use rsync.
rsync -aPSHiv remote:directory .
This is how I copied a file from the Amazon EC2 service to a local Windows PC:
pscp -i "your-key-pair.pem" username@ec2-ip-compute.amazonaws.com:/home/username/file.txt C:\Documents\
For Linux to copy a directory:
scp -i "your-key-pair.pem" -r username#ec2-ip-compute.amazonaws.com:/home/username/dirtocopy /var/www/
To connect to Amazon, key pair authentication is required.
Note: the username is most probably ubuntu.
I use sshfs to mount the remote directory on the local machine, and then you can do whatever you want with it. Here is a small guide; the commands may change on your system.
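For reference, a minimal sshfs sketch (the key path, hostname and mount point below are placeholders for your own values):
mkdir -p ~/ec2-mount
sshfs -o IdentityFile=~/keys/your-key-pair.pem ubuntu@ec2-ip-compute.amazonaws.com:/home/ubuntu ~/ec2-mount
# ...copy whatever you need with cp or rsync, then unmount:
fusermount -u ~/ec2-mount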
This is also important and related to the above answers: copying all files from a local directory to EC2 (a Unix answer).
Copy the entire local folder to a folder in EC2:
scp -i "key-pair.pem" -r /home/Projects/myfiles ubuntu#ec2.amazonaws.com:/home/dir
Copy only the contents of the local folder to a folder in EC2:
scp -i "key-pair.pem" -r /home/Projects/myfiles/* ubuntu@ec2.amazonaws.com:/home/dir
I do not like to use scp for a large number of files, as it does a 'transaction' for each file. The following is much better:
cd local_dir; ssh user@server 'cd remote_dir_parent; tar -c remote_dir' | tar -x
You can add a z flag to tar to compress on server and uncompress on client.
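For example (a sketch of the same pipeline with compression; -f - is spelled out explicitly for portability):
cd local_dir; ssh user@server 'cd remote_dir_parent; tar -czf - remote_dir' | tar -xzf -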
One way I found on YouTube is to connect a local folder to a shared folder on the EC2 instance. Please view this video for the full instructions. The sharing is instantaneous.
I need to copy the whole contents of a linux server, but I'm not sure how to do it recursively.
I have a migration script to run on the server itself, but it won't run because the disc is full, so I need something I can run remotely which just gets everything.
How about
scp -r root@remotebox:/ your_local_copy
sudo rsync -hxDPavil -H --stats --delete / remote:/backup/
This will copy everything (permissions, owners, timestamps, devices, sockets, hard links, etc.). It will also delete stuff that no longer exists in the source. (Note that -x tells rsync to only copy files within the same mount point.)
If you want to preserve owners but the receiving end is not on the same domain, use --numeric-ids
To automate incremental backup w/snapshots, look at rdiff-backup or rsnapshot.
Also, GNU tar is highly underrated:
sudo tar cpf - / | ssh remote 'cd /backup && tar xvf -'
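If you back up / this way, you will probably also want to keep the pseudo-filesystems out of the archive; a rough sketch (the exclude list is only an example, adjust to taste):
sudo tar cpf - --exclude=/proc --exclude=/sys --exclude=/dev / | ssh remote 'cd /backup && tar xvf -'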