I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are Ubuntu Linux. I use the following command, and it works well:
rsync -az /var/www/ user@10.1.1.1:/var/www/
The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.
Is this possible? If so, how would I go about doing that?
** EDIT **
There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html
** EDIT 2 **
I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:
sudo rsync -az user@10.1.1.2:/var/www/ /var/www/
This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.
You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify your sudoers file on the target host to avoid sudo's request for your password. man sudoers or run sudo visudo for instructions and samples.
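For example, a sudoers entry along these lines (added with sudo visudo; the username and rsync path here are assumptions you should adjust for your systems) lets the user run only rsync as root without a password:
# hypothetical /etc/sudoers entry on the target host
user ALL=(root) NOPASSWD: /usr/bin/rsync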
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option.
rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path
I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work. You'll undoubtedly need to experiment.
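As a rough, untested sketch of the chown-based alternative mentioned above: sync everything first, record which files www-data owns on the source, and then fix up only those files on the target (all paths and names here are assumptions):
# on the source host: list the files owned by www-data
find /var/www -user www-data -print0 > /tmp/www-data-files
# copy the list to the target host, then run there:
xargs -0 -a /tmp/www-data-files sudo chown www-data:www-data --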
As far as I know, you cannot chown files to another user unless you are root. So you would have to rsync using the www-data account, since all files will be created with the connecting user as owner; otherwise you need to chown the files afterwards.
"The root users for the local system and the remote system are different."
What does this mean? The root user is uid 0. How are they different?
Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
# rsync -az user@10.1.1.2:/var/www/ /var/www/
Make sure your groups match on both machines.
Also, set up access to user#10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
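In practice, adding the key boils down to something like this, run as root on the target machine (the key path matches the ssh-keygen -d example above; adjust it if you generate an RSA key instead):
cat /root/.ssh/id_dsa.pub | ssh user@10.1.1.2 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
# then verify that no password is requested
ssh user@10.1.1.2 true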
I had a similar problem and cheated the rsync command,
rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/
The && runs the chown against the folder only when the rsync completes successfully (a single '&' would background the rsync and run the chown immediately, regardless of the rsync's completion status).
Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
ssh user@remotehost "cd /some/path; sudo tar zxf -"
You'll need to set up your SSH keys as Graham described.
Note that this handles full directory copies, not incremental updates like rsync.
The idea here is that:
you tar up your directory,
instead of creating a tar file, you send the tar output to stdout,
that stdout is piped through an SSH command to a receiving tar on the other host,
but that receiving tar is run by sudo, so it has privileged write access to set usernames.
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use to sync files with the server (Debian):
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
I'm trying to upload a file from my local desktop to a server and I'm using this command:
scp myFile.txt cooluser@192.168.10.102:/opt/nicescada/web
following the structure: scp filename user@ip:/remotePath.
But I get "Permission denied". I tried using sudo, but I get the same message. I am able to download from the server to my local machine, so I assume I have all the permissions needed.
What can be wrong in that line of code?
In case your /desired/path on your destination machine has write access only for root, and if you have an account on your destination machine with sudo privileges (super user privileges by prefixing a sudo to your command), you could also do it the following way:
Option 1 based on scp:
copy the file to a location on your destination machine where you have write access like /tmp:
scp file user@destinationMachine:/tmp
Login to your destination machine with:
ssh user@destinationMachine
Move the file to your /desired/path with:
sudo mv /tmp/file /desired/path
In case you have a passwordless sudo setup you could also combine step 2. and 3. to
ssh user@destination sudo mv /tmp/file /desired/path
Option 2 based on rsync
Another maybe even simpler option would be to use rsync:
rsync -e "ssh -tt" --rsync-path="sudo rsync" file user#destinationMachine:/desired/path
with -e "ssh -tt" added to run sudo without having a tty.
Try and specify the full destination path:
scp myFile.txt cooluser@192.168.10.102:/opt/nicescada/web/myFile.txt
Of course, double-check cooluser has the right to write (not just read) in that folder: 755, not 644 for the web parent folder.
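A quick way to check that from your desktop (the path is taken from the question; the chmod is only the right fix if cooluser actually owns the directory, otherwise the server admin needs to adjust ownership):
ssh cooluser@192.168.10.102 'ls -ld /opt/nicescada/web'
# if cooluser owns it but the mode is too tight, this (run on the server) opens it up
sudo chmod 755 /opt/nicescada/web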
I have the logins and passwords for two linux users (not root), for example user1 and user2.
How to copy files
from /home/user1/folder1 to /home/user2/folder2, using a single shell script (one script invocation, without manually switching users).
I think I must use a sudo command but couldn't find out exactly how.
Just this:
cp -r /home/user1/folder1/ /home/user2/folder2
If you add -p (so cp -pr) it will preserve the attributes of the files (mode, ownership, timestamps).
-r is required to copy directories recursively. See How to copy with cp to include hidden files and hidden directories and their contents? for how to include hidden files as well.
sudo cp -a /home/user1/folder1 /home/user2/folder2
sudo chown -R user2:user2 /home/user2/folder2
cp -a — archive mode
chown -R — act recursively
Copies the files and then gives permissions to user2 to be able to access them.
Copies all files including dot files, all sub-directories and does not require directory /home/user2/folder2 to exist prior to the command.
(shopt -s dotglob; cp -a /home/user1/folder1/* /home/user2/folder2/)
Will copy all files (including those starting with a dot) using the standard cp. The /folder2/ should exist, otherwise the results can be nasty.
Often using a packing tool like tar can be of help as well:
cd /home/user1/folder1
tar cf - . | (cd /home/user2/folder2; tar xf -)
I think you need to use this command
sudo -u username cp /path1/file1 /path2/file2
This command allows you to copy the contents as a particular user from any file path.
PS: The parent directory should be list-able at least in order to copy files from it.
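For the paths in the question, that would look roughly like this (it assumes user2 can read /home/user1/folder1, which is the case with the default drwxr-xr-x home permissions):
sudo -u user2 cp -r /home/user1/folder1/. /home/user2/folder2/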
Just to add to the answer from fedorqui 'SO stop harming':
I had this same challenge when I tried to change the default admin user for a server from stage_user to prod_user on an Ubuntu 20.04 machine:
First, I created a prod_user using the command below:
sudo adduser prod_user
And then I added the newly created prod_user to the sudo group:
sudo adduser prod_user sudo
Next, I copied all the directories that I needed from the home directory of the stage_user to the prod_user:
sudo cp -r /home/stage_user/folder1/ /home/prod_user/
Next, I changed the ownership of the copied folders from stage_user to prod_user to avoid permission issues:
sudo chown prod_user:prod_user /home/prod_user/folder1
That's all.
I hope this helps
The question has to do with permissions across users.
I believe that by default, home directory permissions allow everyone to list and change into another user's home:
eg. drwxr-xr-x
Hence, in the previous answers, people may not have realised what you actually encountered.
With more restricted settings, like what I had on my web host, non-owner users cannot do anything:
eg. drwx------
Even if you use su/sudo and become the other user, you can still only be ONE USER at a time, so when you copy the file back, the same problem of insufficient permissions still applies.
So... use scp instead; treat the whole thing like a network environment, and that's it. By the way, this question has already been answered over here (https://superuser.com/questions/353565/how-do-i-copy-a-file-folder-from-another-users-home-directory-in-linux); I only replied because this ranked first in my search results.
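A minimal sketch of that scp approach on a single machine (it assumes sshd is running locally and that you know user2's password or have a key set up for them):
scp -r /home/user1/folder1 user2@localhost:/home/user2/folder2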
I have the following setup to periodically rsync files from server A to server B. Server B has the rsync daemon running with the following configuration:
read only = false
use chroot = false
max connections = 4
syslog facility = local5
log file = /var/adm/rsyncd.log
munge symlinks = false
secrets file = /etc/rsyncd.secrets
numeric ids = false
transfer logging = true
log format = %h %o %f %l %b
[BACKUP]
path = /path/to/archive
auth users = someuser
From server A I am issuing the following command:
rsync -adzPvO --delete --password-file=/path/to/pwd/file/pwd.dat /dir/to/be/backedup/ someuser@192.168.100.100::BACKUP
BACKUP directory is fully read/write/execute to everyone. When I run the rsync command from server A, I see:
afile.txt
989 100% 2.60kB/s 0:00:00 (xfer#78, to-check=0/79)
for each and every file in the directory I wish to back up. It fails when it gets to writing tmp files:
rsync: mkstemp "/.afile.txt.PZQvTe" (in BACKUP) failed: Permission denied (13)
Hours of googling later and I still can't resolve what seems to be a very simple permission issue. Advice? Thanks in advance.
Additional Information
I just noticed the following occurs at the beginning of the process:
rsync: failed to set permissions on "/." (in BACKUP): Permission denied (13)
Is it trying to set permission on "/"?
Edit
I am logged in as the user - someuser. My destination directory has full read/write/execute permission for everyone, including its contents. In addition, the destination directory is owned by someuser and in someuser's group.
Follow up
I've found using SSH solves this
Make sure the user you're rsync'ing as on the remote machine has write access to the contents of the folder AND the folder itself, as rsync tries to update the modification time on the folder itself.
Even though you got this working, I recently had a similar encounter and no SO or Google searching was of any help, as they all dealt with basic permission issues, whereas the solution below is a somewhat obscure setting that you wouldn't even think to check in most situations.
One thing to check for with "permission denied": I recently had rsync issues myself where permissions were exactly the same on both servers, including the owner and group, but transfers worked one way on one server and not the other way.
It turned out the server I was getting "permission denied" from had SELinux enabled, which overrides POSIX permissions on files/folders. So even though the folder in question could have been 777 with root running the command, SELinux would still override those permissions, which produced a "permission denied" error from rsync.
You can run the command getenforce to see if SELinux is enabled on the machine.
In my situation I ended up just disabling SELinux completely because it wasn't needed; it was already disabled on the server that was working fine and it just caused problems being enabled. To disable, open /etc/selinux/config and set SELINUX=disabled. To temporarily disable you can run the command setenforce 0, which puts SELinux into a permissive state rather than an enforcing state, causing it to print warnings instead of enforcing.
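For reference, that is roughly the following (the config path is the usual one on RHEL/CentOS; check your file before editing, since your line may read SELINUX=permissive instead of enforcing):
# temporary, until the next reboot
sudo setenforce 0
# persistent: switch enforcing to disabled in the config file
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config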
The rsync daemon by default uses nobody/nogroup for all modules if it is running under the root user. So you either need to set the uid and gid parameters to the user you want, or set them to root/root.
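In rsyncd.conf terms that would be something like the following inside the module definition (the account name is an assumption; use whichever user should own the received files):
[BACKUP]
    path = /path/to/archive
    auth users = someuser
    uid = someuser
    gid = someuser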
I encountered the same problem and solved it by chowning the destination folder to the right user. The current user did not have permission to read, write and execute the destination folder's files. Try adding the permissions with chmod a+rwx <folder/file name>.
This might not suit everyone since it does not preserve the original file permissions but in my case it was not important and it solved the problem for me. rsync has an option --chmod:
--chmod — This option tells rsync to apply one or more comma-separated "chmod" strings to the permissions of the files in the transfer. The resulting value is treated as though it was the permissions that the sending side supplied for the file, which means that this option can seem to have no effect on existing files if --perms is not enabled.
This forces the permissions to be what you want on all files/directories. For example:
rsync -av --chmod=Du+rwx SRC DST
would add Read, Write and Execute for the user to all transferred directories.
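If you also want to control file permissions, not just directories, an F prefix works the same way as the D prefix; for example (SRC and DST are placeholders as above):
rsync -av --chmod=Du=rwx,Dgo=rx,Fu=rw,Fgo=r SRC DST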
I had a similar issue, but in my case it was because the storage offered only SFTP, with no ssh or rsync daemons on it. I could not change anything, because this server was provided by my customer.
rsync could not change the date and time of the files, and some other utilities (like csync) showed me other errors: "Unable to create temporary file" and "Clock skew detected".
If you have access to the storage-server - just install openssh-server or launch rsync as a daemon here.
In my case - I could not do this and solution was: lftp.
lftp's usage for synchronization is below:
lftp -c "open -u login,password sftp://sft.domain.tld/; mirror -c --verbose=9 -e -R -L /src/folder /rem/folder"
/src/folder - is the folder on my PC, /rem/folder - is sftp://sft.domain.tld/rem/folder.
You can find the man pages at lftp.yar.ru/lftp-man.html
Windows: Check permissions of destination folders. Take ownership if you must to give rights to the account running the rsync service.
I had the same issue in the case of CentOS 7. I went through a lot of articles and forums but couldn't find the solution.
The problem was with SELinux. Disabling SELinux at the server end worked.
Check the SELinux status at the server end (the machine you are pulling data from using rsync).
Commands to check SELinux status and disable it
$getenforce
Enforcing ## this means SElinux is enabled
$setenforce 0
$getenforce
Permissive
Now try running the rsync command at the client end; it worked for me.
All the best!
I have Centos 7 server with rsyncd on board:
/etc/rsyncd.conf
[files]
path = /files
By default SELinux blocks rsyncd's access to the /files folder
# this sets needed context to my /files folder
sudo semanage fcontext -a -t rsync_data_t '/files(/.*)?'
sudo restorecon -Rv '/files'
# sets needed booleans
sudo setsebool -P rsync_client 1
Disabling SELinux is an easy but not a good solution
I had the same issue, so I first SSH'd into the server to confirm that I was able to log in, using the command:
ssh -i /Users/Desktop/mypemfile.pem user@ec2.compute-1.amazonaws.com
Then in New Terminal
I copied a small file to the server by using SCP, to make sure I am able to make a connection:
scp -i /Users/Desktop/mypemfile.pem /Users/Desktop/test.file user@ec2.compute-1.amazonaws.com:/home/user/test/
Then, in the same new terminal, I tried running rsync:
rsync -avz -e "ssh -i /Users/Desktop/mypemfile.pem" /Users/Desktop/backup/image.img.gz user@ec2.compute-1.amazonaws.com:
If you're on a Raspberry Pi or another Unix system with sudo, you need to tell the remote machine where the rsync and sudo programs are located.
I put in the full path to be safe.
Here's my example:
rsync --stats -paogtrh --progress --omit-dir-times --delete --rsync-path='/usr/bin/sudo /usr/bin/rsync' /mnt/drive0/ pi@192.168.10.238:/mnt/drive0/
I imagine a common error not mentioned above is trying to write to a mount point (e.g., /media/drivename) when the partition isn't mounted. That will produce this error as well.
If it's an encrypted drive set to auto-mount but it doesn't, the issue might be auto-unlocking the encrypted partition before attempting to write to the space where it is supposed to be mounted.
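A quick guard for that case could look like this (the mount point is a placeholder):
mountpoint -q /media/drivename || { echo "not mounted, aborting"; exit 1; }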
I had the same error while syncing files inside of a Docker container where the destination was a mounted volume (Docker for Mac), and I ran rsync via su-exec <user>. I was able to resolve it by running rsync as root with the -og flags (keep owner and group for destination files).
I'm still not sure what caused that issue; the destination permissions were OK (I ran chown -R <user> on the destination dir before rsync). Perhaps it is somehow related to Docker for Mac's slow filesystem.
Pay attention to -e ssh and jenkins@localhost: in the next example:
rsync -r -e ssh --chown=jenkins:admin --exclude .git --exclude Jenkinsfile --delete ./ jenkins@localhost:/home/admin/web/xxx/public
That helped me.
P.S. Today I realized that when you add the jenkins user to some group, the permissions only apply after the slave (agent) is restarted. My solution (-e ssh and jenkins@localhost:) is only needed when you can't restart the agent/server.
Yet still another way to get this symptom: I was rsync'ing from a remote machine over ssh to a Linux box with an NTFS-3G (FUSE) filesystem. Originally the filesystem was mounted at boot time and thus owned by root, and I was getting this error message when I did an rsync push from the remote machine. Then, as the user to which the rsync is pushed, I did:
$ sudo umount /shared
$ mount /shared
and the error messages went away.
The owner and group of the destination directory and its subdirectories should match the user.
If the user is 'abc' then the destination directory should be
lrwxrwxrwx 1 abc abc 34 Jul 18 14:05 Destination_directory
Command: chown abc:abc Destination_directory
Surprisingly nobody has mentioned the all-powerful SUDO.
Had the same problem and sudo fixed it.
Running ssh with root access should solve this problem,
or chmod 0777 /dir/to/be/backedup/
or chown username:user /dir/to/be/backedup/
I need to copy the whole contents of a linux server, but I'm not sure how to do it recursively.
I have a migration script to run on the server itself, but it won't run because the disc is full, so I need something I can run remotely which just gets everything.
I need to copy the whole contents of a linux server, but I'm not sure how to do it recursively.
How about
scp -r root@remotebox:/ your_local_copy
sudo rsync -hxDPavil -H --stats --delete / remote:/backup/
this will copy everything (permissions, owners, timestamps, devices, sockets, hardlinks etc). It will also delete stuff that no longer exists in source. (note that -x indicates to only copy files within the same mountpoint)
If you want to preserve owners but the receiving end is not on the same domain, use --numeric-ids
To automate incremental backup w/snapshots, look at rdiff-backup or rsnapshot.
Also, gnu tar is highly underrated
sudo tar cpf - / | ssh remote 'cd /backup && tar xpvf -'
What is the best way to copy a directory (with sub-dirs and files) from one remote Linux server to another remote Linux server? I have connected to both using SSH client (like Putty). I have root access to both.
There are two ways I usually do this, both use ssh:
scp -r sourcedir/ user@dest.com:/dest/dir/
or, the more robust and faster (in terms of transfer speed) method:
rsync -auv -e ssh --progress sourcedir/ user@dest.com:/dest/dir/
Read the man pages for each command if you want more details about how they work.
I would modify a previously suggested reply:
rsync -avlzp /path/to/sfolder name@remote.server:/path/to/remote/dfolder
as follows:
-a (for archive) implies -rlptgoD so the l and p above are superfluous. I also like to include -H, which copies hard links. It is not part of -a by default because it's expensive. So now we have this:
rsync -aHvz /path/to/sfolder name@remote.server:/path/to/remote/dfolder
You also have to be careful about trailing slashes. You probably want
rsync -aHvz /path/to/sfolder/ name@remote.server:/path/to/remote/dfolder
if the desire is for the contents of the source "sfolder" to appear in the destination "dfolder". Without the trailing slash, an "sfolder" subdirectory would be created in the destination "dfolder".
rsync -avlzp /path/to/folder name@remote.server:/path/to/remote/folder
scp -r <directory> <username>@<targethost>:<targetdir>
Log in to one machine
$ scp -r /path/to/top/directory user@server:/path/to/copy
Use rsync so that you can continue if the connection gets broken. And if something changes you can copy them much faster too!
Rsync works with SSH so your copy operation is secure.
Try unison if the task is recurring.
http://www.cis.upenn.edu/~bcpierce/unison/
I used rdiff-backup http://www.nongnu.org/rdiff-backup/index.html because it does all you need without any fancy options. It's based on the rsync algorithm.
If you only need to copy one time, you can later remove the rdiff-backup-data directory on the destination host.
rdiff-backup user1@host1::/source-dir user2@host2::/dest-dir
from the doc:
rdiff-backup also preserves subdirectories, hard links, dev files, permissions, uid/gid ownership, modification times, extended attributes, acls, and resource forks.
which is a bonus compared to the scp -p proposals, as the -p option does not preserve everything (e.g. permissions on directories are set badly)
install on ubuntu:
sudo apt-get install rdiff-backup
Check out scp or rsync,
man scp
man rsync
scp file1 file2 dir3 user@remotehost:path
Well, quick answer would to take a look at the 'scp' manpage, or perhaps rsync - depending exactly on what you need to copy. If you had to, you could even do tar-over-ssh:
tar cvf - sourcedir | ssh server tar xf -
I think you can try with:
rsync -azvu -e ssh user@host1:/directory/ user@host2:/directory2/
(and I assume you are on host0 and you want to copy from host1 to host2 directly)
If the above does not work, you could try:
ssh user@host1 "/usr/bin/rsync -azvu -e ssh /directory/ user@host2:/directory2/"
The latter works if you have already set up passwordless SSH login from host1 to host2.
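Setting that up is usually just a matter of copying host1's public key to host2, for example (this assumes ssh-copy-id is installed on host1 and a key pair already exists there; the -t allocates a terminal so the password prompt for host2 works):
ssh -t user@host1 ssh-copy-id user@host2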
scp will do the job, but there is one wrinkle: the connection to the second remote destination will use the configuration on the first remote destination, so if you use .ssh/config on the local environment, and you expect rsa and dsa keys to work, you have to forward your agent to the first remote host.
As non-root user ideally:
scp -r src $host:$path
If you already have some of the content on $host, consider using rsync with ssh as a tunnel.
/Allan
If you are serious about wanting an exact copy, you probably also want to use the -p switch to scp, if you're using that. I've found that scp reads from devices, and I've had problems with cpio, so I personally always use tar, like this:
cd /origin; find . -xdev -depth -not -path ./lost+found -print0 \
| tar --create --atime-preserve=system --null --files-from=- --format=posix \
--no-recursion --sparse | ssh targethost 'cd /target; tar --extract \
--overwrite --preserve-permissions --sparse'
I keep this incantation around in a file with various other means of copying files around. This one is for copying over SSH; the other ones are for copying to a compressed archive, for copying within the same computer, and for copying over an unencrypted TCP socket when SSH is too slow.
scp, as mentioned above, is usually the best way, but don't forget the colon in the remote directory spec, otherwise you'll get a copy of the source directory on the local machine.
I like to pipe tar through ssh.
tar cf - [directory] | ssh [username]@[hostname] tar xf - -C [destination on remote box]
This method gives you lots of options. Since you should have root ssh login disabled, copying files for multiple user accounts is hard, because you are logging into the remote server as a normal user. To get around this, you can create a tar file on the remote box that still preserves ownership.
tar cf - [directory] | ssh [username]#[hostname] "cat > output.tar"
For slow connections you can add compression, z for gzip or j for bzip2.
tar cjf - [directory] | ssh [username]@[hostname] "cat > output.tar.bz2"
tar czf - [directory] | ssh [username]@[hostname] "cat > output.tar.gz"
tar czf - [directory] | ssh [username]@[hostname] tar xzf - -C [destination on remote box]