Rsync cropping symlink in transit - linux

I have two directories whose files are linked together with symlinks,
i.e. /directory1/files/file_a.txt to /directory1/directory2/files/file_a.txt.
To symlink them to each other I build the links as relative paths of the form ../../files/file_a.txt.
The symlink is fine on my host server, however it is not fine on my client server: the link target is being cropped to ../files/file_a.txt,
meaning the full relative path isn't present and it is erroring.
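To make that concrete, the links are built roughly like this (illustrative paths and commands):
mkdir -p /directory1/files /directory1/directory2/files
ln -s ../../files/file_a.txt /directory1/directory2/files/file_a.txt
readlink /directory1/directory2/files/file_a.txt   # prints ../../files/file_a.txt on the host
# on the client the same link arrives pointing at ../files/file_a.txt, which no longer resolves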
My rsync command is
/bin/nice -n 15 /usr/bin/rsync -a -v -r --partial -ogp -l -H --delete --delay-updates --exclude=activated --exclude current/ -e "ssh -F **path to ssh config file**" $package->getDir().'/files '.$package->getDir().'/'.$package->getDate().' root@'. $client->getAddress().'::triplecast/'.$package->getId().'_'.preg_replace("/[^0-9a-zA-Z\-\'_]/", "", $package->getName()). "
Any ideas on what might be happening?

It turns out that it was missing the forward slash from root@$client->getAddress().'::triplecast/'.
I have now changed it to
root@'. $client->getAddress().':/triplecast/'
I have no idea why that made a difference; if you know why, please comment below.
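A likely explanation (not stated in the original post, so treat it as an assumption): the double colon selects a different transport. host::module/path talks to an rsync daemon, which depending on its configuration sanitizes symlink targets so they cannot escape the module, stripping leading ../ components, whereas host:/path runs rsync over the remote shell (ssh) and copies link targets verbatim with -l. Roughly:
rsync -al files/ root@client::triplecast/dest/   # daemon ("::"): leading ../ in symlink targets may be stripped
rsync -al files/ root@client:/triplecast/dest/   # over ssh (":/"): relative symlink targets are preserved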

Related

Rsync to Amazon Linux EC2 instance - failed: No such file or directory

I want to upload the content of one directory to my Amazon EC2 with rsync:
rsync -r -t -v --progress -z -s -e "ssh -i /home/mostafa/keyamazon.pem" /home/mostafa/splitfiles ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:~/splitfiles
but I receive the following error message:
sending incremental file list
rsync: link_stat "/home/mostafa/splitfiles" failed: No such file or directory (2)
rsync: change_dir#3 "/home/ubuntu//~" failed: No such file or directory (2)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(712) [Receiver=3.1.0]
and if I do a dry run with grsync, it works correctly
In rsync the trailing / is very important. Also, rsync usually defaults to ssh when one of the destinations contains a host, so you can get rid of the -e and -s options and still preserve modification times with -t.
Your command could then be written as:
rsync -rtvz --progress /home/mostafa/splitfiles/ ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:splitfiles/
Notice the trailing /'s. This works provided that you have ssh configured to read the private key from your home directory.
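To illustrate why the trailing slash matters (directory names are placeholders):
rsync -rt splitfiles  host:dest/   # creates dest/splitfiles/... on the remote side
rsync -rt splitfiles/ host:dest/   # copies the contents of splitfiles straight into dest/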
On Ubuntu you can add the key to the SSH agent's key chain by running:
ssh-add [key-file]
This will save you having to specify the key file every time you ssh into the AWS machine.
The errors seem to say that on the local machine you don't have a source directory and the destination doesn't exist.
I completed this task with FileZilla instead; it was easier to use.
You are at home (~); if you cd ../ towards the root you will be able to run the command.

Rsync to Amazon Ec2 Instance

I have an EC2 instance running and I am able to SSH into it.
However, when I try to rsync, it gives me the error Permission denied (publickey).
The command I'm using is:
rsync -avL --progress -e ssh -i ~/mykeypair.pem ~/Sites/my_site/* root@ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com:/var/www/html/
I also tried
rsync -avz ~/Sites/mysite/* -e "ssh -i ~/.ssh/id_rsa.pub" root@ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com:/var/www/html/
Thanks,
I just received that same error. I had been consistently able to ssh with:
ssh -i ~/path/mykeypair.pem \
ubuntu@ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com
But when using the longer rsync construction, it seemed to cause errors. I ended up encasing the ssh statement in quotations and using the full path to the key. In your example:
rsync -avL --progress -e "ssh -i /path/to/mykeypair.pem" \
~/Sites/my_site/* \
root@ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com:/var/www/html/
That seemed to do the trick.
Below is what I used and it worked. Source was ec2 and target was home machine.
sudo rsync -azvv -e "ssh -i /home/ubuntu/key-to-ec2.pem" ec2-user@xx.xxx.xxx.xx:/home/ec2-user/source/ /home/ubuntu/target/
use rsync to copy files between servers
copy file from local machine to server
rsync -avz -e "ssh -i /path/to/key.pem" /path/to/file.txt <username>@<ip/domain>:/path/to/directory/
copy file from server to local machine
rsync -avz -e "ssh -i /path/to/key.pem" <username>@<ip/domain>:/path/to/directory/file.txt /path/to/directory/
note: use command with sudo if you are not a root user
After suffering a little bit, I believe this will help:
I am using the below command and it has worked without problems:
rsync -av --progress -e ssh /folder1/folder2/* root@xxx.xxx.xxx.xxx:/folder1/folder2
First consideration:
Use the --rsync-path
I prefer to put it in a shell script:
#!/bin/bash
RSYNC=/usr/bin/rsync
$RSYNC [options] [source] [destination]
Second consideration:
Create a public key with the command below for communication between the servers in question. It will not be the same as the one provided by Amazon.
ssh-keygen -t rsa
Do not forget to enable the corresponding permission on the target server in /etc/ssh/sshd_config (Ubuntu and CentOS).
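A sketch of what that usually means in /etc/ssh/sshd_config (the exact directive is an assumption, since the answer does not name it):
PubkeyAuthentication yes
# then reload sshd, e.g. on Ubuntu: sudo service ssh reload   (on CentOS the service is called sshd)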
Sync files from one EC2 instance to another
http://ask-leo.com/how_can_i_automate_an_sftp_transfer_between_two_servers.html
Use the -v option for verbose output to better identify errors.
Third consideration:
If both servers are on EC2, restrict access with a security group.
In the security group of the destination server, add an inbound rule:
TCP port / Source
22 / the IP (or security group name) of the source server
This worked for me:
nohup rsync -zravu --partial --progress -e "ssh -i xxxx.pem" ubuntu@xx.xx.xx.xx:/mnt/data /mnt2/ &

rsync over SSH preserve ownership only for www-data owned files

I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are ubuntu linux. I use the following command, and it works well:
rsync -az /var/www/ user@10.1.1.1:/var/www/
The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.
Is this possible? If so, how would I go about doing that?
** EDIT **
There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html
** EDIT 2 **
I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:
sudo rsync -az user@10.1.1.2:/var/www/ /var/www/
This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.
You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify your sudoers file on the target host to avoid sudo's request for your password. See man sudoers or run sudo visudo for instructions and samples.
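A minimal sketch of the kind of sudoers entry this refers to (the username and rsync path are assumptions; edit the file with sudo visudo):
# allow "user" to run rsync via sudo without a password prompt
user ALL=(root) NOPASSWD: /usr/bin/rsync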
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option: do a normal copy first, then a second privileged pass restricted to the files owned by www-data.
rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path
I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work (the listed paths may need to be made relative to the source directory). You'll undoubtedly need to experiment.
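A variant of the same two-pass idea that keeps the --files-from entries relative to the source directory (also untested here; the paths are the same placeholders):
cd /path/to/files && find . -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" . user@targethost:/path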
As far as I know, you cannot chown files to somebody other than yourself if you are not root. So you would have to rsync using the www-data account, as all files will be created with the connecting user as owner, or else chown the files afterwards.
The root users for the local system and the remote system are different.
What does this mean? The root user is uid 0. How are they different?
Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
# rsync -az user@10.1.1.2:/var/www/ /var/www/
Make sure your groups match on both machines.
Also, set up access to user@10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
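On most modern systems the same key installation can be done in one step (a convenience, not part of the original answer):
ssh-copy-id -i /root/.ssh/id_dsa.pub user@10.1.1.2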
I had a similar problem and cheated the rsync command,
rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/
The && runs the chown only when the rsync completes successfully (a single & would instead background the rsync and run the chown immediately, regardless of the rsync's completion status).
Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
ssh user@remotehost "cd /some/path; sudo tar zxf -"
You'll need to set up your SSH keys as Graham described.
Note that this handles full directory copies, not incremental updates like rsync.
The idea here is that:
you tar up your directory,
instead of creating a tar file, you send the tar output to stdout,
that stdout is piped through an SSH command to a receiving tar on the other host,
but that receiving tar is run by sudo, so it has privileged write access and can set the file owners.
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use to sync files with the server (Debian):
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website

Copying files and dirs on remote server while excluding some of them

Server 1 is connected to Server 2 via SSH.
We know this:
I can execute a command such as
" ssh server2 "cp -rv /var/www /tmp" "
which will copy the entire /var/www dir to /tmp. However inside of /var/www we have the following structure(sample LS output below)
$ ls
/web1
/web2
/web3
file1.php
file2.php
file3.php
How can I execute a cp command that will exclude /web1, /web3, file1.php and file3.php? (Obviously just copying web2 and file2 is not an option, since there are significantly more files than just these 6.)
Note: I am using this to backup Server2 prior to RSYNCing from Server1.
The first two posters both have good suggestions about rsync. Here's a more complete outline of the process.
(1) You want to backup server 2 before you sync from server 1, so let's do that with rsync. Here's the command as seen from server 1 (assuming it has access to server 2):
ssh user@server2 "rsync $RSYNC_OPTS /var/www/ /path/to/backup"
(2) With server 2 backed up, let's now sync from server 1 (again, as seen from server 1)
rsync $RSYNC_OPTS /path/to/www/ user@server2:/var/www/
As long as you use sane RSYNC_OPTS, the backup and sync should both be reasonable. Richard had a reasonable suggestion for the options:
RSYNC_OPTS="--exclude-from rsync-exclude.txt --stats -avz --numeric-ids -e ssh"
If you want an accurate reproduction, I'd recommend --delete or --delete-after as well. Be sure to lookup details on any options you're unfamiliar with.
For this you should really be using rsync.
I tend to use an rsync-exclude.txt file to specify what I don't want, as it's more future-proof.
/public_ftp/.ftpquota
/tmp
/var/local/backups/rsyncs
/backup/rsync
/proc
/dev
so a command could be
rsync --exclude-from rsync-exclude.txt --stats -avz -e ssh \
--numeric-ids /syncfrom/dir user@example.com:/backup/sync-to-dir
edit:
In the case of a local server you can still use rsync, however you could also use tar and exclude what you don't want.
(cd dir1;tar --exclude 'web2/*' -cf -) | (cd dir2; tar -xvf -)
or
find dir1 dir2 >exclude-files
(cd dir1;tar --exclude-from exclude-files -cf -) | (cd dir2; tar -xvf -)
I do the same thing when deploying new releases to my webserver. Is it possible for you to use rsync over ssh? rsync allows you to specify an --exclude option and specify either the dirs/files on the command line or via a file. Here's a pretty good writeup that I've used in the past:
http://articles.slicehost.com/2007/10/10/rsync-exclude-files-and-folders
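For instance, matching the layout in the question, the excludes can be passed directly on the command line (a sketch; paths follow the question's example):
ssh server2 "rsync -av --exclude='web1' --exclude='web3' --exclude='file1.php' --exclude='file3.php' /var/www/ /tmp/www/"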
Since what you want to do is "copy all files except those", following your example you could do this:
ssh server2 "cp -rv /var/www/!(web1|web3|file1.php|file3.php) /tmp"
But remember that this is a very ugly way to back up your server :p
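One caveat worth noting (not in the original answer): !(...) is a bash extended glob, which is normally disabled in the non-interactive shell that ssh starts, so it may need to be enabled explicitly, for example:
ssh server2 'bash -O extglob -c "cp -rv /var/www/!(web1|web3|file1.php|file3.php) /tmp"'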

Remote Linux server to remote linux server dir copy. How? [closed]

What is the best way to copy a directory (with sub-dirs and files) from one remote Linux server to another remote Linux server? I have connected to both using an SSH client (like PuTTY). I have root access to both.
There are two ways I usually do this, both use ssh:
scp -r sourcedir/ user@dest.com:/dest/dir/
or, the more robust and faster (in terms of transfer speed) method:
rsync -auv -e ssh --progress sourcedir/ user@dest.com:/dest/dir/
Read the man pages for each command if you want more details about how they work.
I would modify a previously suggested reply:
rsync -avlzp /path/to/sfolder name@remote.server:/path/to/remote/dfolder
as follows:
-a (for archive) implies -rlptgoD so the l and p above are superfluous. I also like to include -H, which copies hard links. It is not part of -a by default because it's expensive. So now we have this:
rsync -aHvz /path/to/sfolder name@remote.server:/path/to/remote/dfolder
You also have to be careful about trailing slashes. You probably want
rsync -aHvz /path/to/sfolder/ name@remote.server:/path/to/remote/dfolder
if the desire is for the contents of the source "sfolder" to appear in the destination "dfolder". Without the trailing slash, an "sfolder" subdirectory would be created in the destination "dfolder".
rsync -avlzp /path/to/folder name@remote.server:/path/to/remote/folder
scp -r <directory> <username>@<targethost>:<targetdir>
Log in to one machine
$ scp -r /path/to/top/directory user@server:/path/to/copy
Use rsync so that you can resume if the connection gets broken, and if something changes you can copy the changes much faster too!
Rsync works with SSH so your copy operation is secure.
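A sketch of a resumable invocation in the spirit of this suggestion (host and paths are placeholders):
rsync -av --partial --progress -e ssh /path/to/dir/ user@dest.com:/dest/dir/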
Try unison if the task is recurring.
http://www.cis.upenn.edu/~bcpierce/unison/
I used rdiff-backup http://www.nongnu.org/rdiff-backup/index.html because it does all you need without any fancy options. It's based on the rsync algorithm.
If you only need to copy one time, you can later remove the rdiff-backup-data directory on the destination host.
rdiff-backup user1@host1::/source-dir user2@host2::/dest-dir
from the doc:
rdiff-backup also preserves subdirectories, hard links, dev files, permissions, uid/gid ownership, modification times, extended attributes, acls, and resource forks.
which is a bonus over the scp -p proposals, as the -p option does not preserve everything (e.g. permissions on directories are set badly).
install on ubuntu:
sudo apt-get install rdiff-backup
Check out scp or rsync,
man scp
man rsync
scp file1 file2 dir3 user@remotehost:path
Well, the quick answer would be to take a look at the 'scp' manpage, or perhaps rsync, depending on exactly what you need to copy. If you had to, you could even do tar-over-ssh:
tar cvf - [directory] | ssh server tar xf -
I think you can try with:
rsync -azvu -e ssh user@host1:/directory/ user@host2:/directory2/
(and I assume you are on host0 and you want to copy from host1 to host2 directly)
If the above does not work, you could try:
ssh user@host1 "/usr/bin/rsync -azvu -e ssh /directory/ user@host2:/directory2/"
In this case it will work, provided you have already set up passwordless SSH login from host1 to host2.
scp will do the job, but there is one wrinkle: the connection to the second remote destination will use the configuration on the first remote destination, so if you use .ssh/config on the local environment, and you expect rsa and dsa keys to work, you have to forward your agent to the first remote host.
As non-root user ideally:
scp -r src $host:$path
If you already have some of the content on $host, consider using rsync with ssh as a tunnel.
/Allan
If you are serious about wanting an exact copy, you probably also want to use the -p switch to scp, if you're using that. I've found that scp reads from devices, and I've had problems with cpio, so I personally always use tar, like this:
cd /origin; find . -xdev -depth -not -path ./lost+found -print0 \
| tar --create --atime-preserve=system --null --files-from=- --format=posix \
--no-recursion --sparse | ssh targethost 'cd /target; tar --extract \
--overwrite --preserve-permissions --sparse'
I keep this incantation around in a file with various other means of copying files around. This one is for copying over SSH; the other ones are for copying to a compressed archive, for copying within the same computer, and for copying over an unencrypted TCP socket when SSH is too slow.
scp, as mentioned above, is usually the best way, but don't forget the colon in the remote directory spec, otherwise you'll get a copy of the source directory on the local machine.
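A small illustration of that pitfall (host and directory names are placeholders):
scp -r sourcedir user@remotehost:/dest/dir   # copies to the remote host
scp -r sourcedir user@remotehost             # no colon: creates a local directory literally named "user@remotehost"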
I like to pipe tar through ssh.
tar cf - [directory] | ssh [username]@[hostname] tar xf - -C [destination on remote box]
This method gives you lots of options. Since you should have root ssh disabled, copying files for multiple user accounts is hard, because you are logging into the remote server as a normal user. To get around this you can create a tar file on the remote box that preserves ownership.
tar cf - [directory] | ssh [username]@[hostname] "cat > output.tar"
For slow connections you can add compression, z for gzip or j for bzip2.
tar cjf - [directory] | ssh [username]@[hostname] "cat > output.tar.bz2"
tar czf - [directory] | ssh [username]@[hostname] "cat > output.tar.gz"
tar czf - [directory] | ssh [username]@[hostname] tar xzf - -C [destination on remote box]
