rsync is not working on startup? - linux

Hi, I'm new to Linux and I have been trying to synchronize two folders with the rsync command. I'm using CentOS, and when I execute the command (# rsync -zvr /tmp/f1/ /tmp/f2/) from the command line it works fine, but when run from rc.local on reboot it does not work. The following message is shown:
sending incremental file list
rsync: change_dir "/tmp/f1" failed: Permission denied (13)
rsync: ERROR: cannot stat destination "/tmp/f2/": Permission denied (13)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(554) [receiver=3.0.6]
rsync: connection unexpectedly closed (9 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]
Can someone please help?

You are having trouble with SELinux. SELinux is a module which allows for much more fine-grained access control than file system permissions and ACLs do. Among other things, it will deny rsync access to files by default if it is not run by a user from a terminal. Now, how can you let it access the files you want?
There are two options. If you are only dealing with directories no other service (including httpd or such) needs access to, you can do the following:
semanage fcontext -a -t public_content_t "/tmp/f1(/.*)?"
semanage fcontext -a -t public_content_t "/tmp/f2(/.*)?"
This should persistently change the SELinux rules so that the directories /tmp/f1 and /tmp/f2 are accessible by rsync. Specifically, it sets the public_content_t type on the directories and the files; nodes with that type are accessible by rsync. However, there is a catch: a node (directory or file) can only have one type. Many services have other requirements for the files they access (e.g. sshd requires ssh_t), so you cannot do this in /etc, for example.
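As a side note, semanage fcontext only records the rule in the policy; to apply the new label to directories and files that already exist, a restorecon run is usually needed as well, for example:
restorecon -Rv /tmp/f1 /tmp/f2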
Another solution is to persistently allow rsync access to all files. This is fine if you do not run the rsync daemon:
setsebool -P rsync_full_access 1
Afterwards, rsync will be able to access all files, even if it is run from init rather than from a user's terminal.
Why does it make a difference if rsync is started by a daemon or by a user?
(this is only true for the most common, targeted policy)
SELinux knows about users, and normal users run as the SELinux user unconfined_u. unconfined_u is allowed to do pretty much everything the file system ACLs allow it to do. However, init and the like run as system_u, and system_u is far more constrained. This helps to prevent attacks on httpd and other exposed daemons.
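If you want to see this yourself, a quick check of the contexts involved (a sketch, assuming the targeted policy) is:
id -Z          # your shell's context, typically unconfined_u:...
ps -eZ | head  # process contexts; init and system daemons run as system_u:...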

If you have just rebooted, /tmp will have been cleared, so /tmp/f1 and /tmp/f2 will not exist.
rc.local usually runs quite late in the boot sequence, so I'd guess that /tmp is mounted read-write by then, but it's possible that it is still only mounted read-only.
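If that is the cause, a minimal guard in rc.local (just a sketch, reusing the paths from the question) is to recreate the directories before syncing, keeping in mind that any data kept under /tmp will be gone after a reboot anyway:
mkdir -p /tmp/f1 /tmp/f2
rsync -zvr /tmp/f1/ /tmp/f2/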

Related

What options to use with rsync to sync only new files to a remote NTFS drive?

I run a mixed Windows and Linux network with various desktops, notebooks and Raspberry Pis. I am trying to establish an off-site backup between a local Raspberry Pi and a remote Raspberry Pi. Both run DietPi/Raspbian and have an external HDD with NTFS to store the backup data. As the data to be backed up is around 800 GB, I already mirrored the data onto the external HDD initially, so that only the new files have to be sent to the remote drive via rsync.
I have now tried various combinations of options, including --ignore-existing, --size-only, -u, -c, and of course combinations of other options like -avz etc.
The problem is: none of the above really changes anything; the system tries to upload all the files (although they already exist remotely), or at least a good number of them.
Could you give me a hint on how to solve this?
I do this exact thing. Here is my solution to this task.
rsync -re "ssh -p 1234” -K -L --copy-links --append --size-only --delete pi#remote.server.ip:/home/pi/source-directory/* /home/pi/target-directory/
The options I am using are:
-r - recursive
-e - specifies using ssh/scp as the method to transmit data; note that my ssh command uses a non-standard port, 1234, as specified by -p inside the -e flag
-K - keep directory links
-L - copy links
--copy-links - a duplicate flag it would seem...
--append - this will append data onto smaller files in case of a partial copy
--size-only - this skips files that match in size
--delete - CAREFUL - this will delete local files that are not present on the remote device..
This solution will run on a schedule and will "sync" the files in the target directory with the files from the source directory. To test it out, you can always run the command with --dry-run, which will not make any changes at all and only show you what would be transferred and/or deleted.
All of this info and additional details can be found in the rsync man page (man rsync).
Note: I use SSH keys to allow connection/transfer without having to respond to a password prompt between these devices.
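For completeness, a minimal sketch of that key setup (assuming the pi account and the non-standard port 1234 from the command above):
ssh-keygen -t ed25519                    # generate a key pair, accepting the defaults
ssh-copy-id -p 1234 pi@remote.server.ip  # install the public key on the remote machine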

Rsync to Amazon Linux EC2 instance - failed: No such file or directory

I want to upload the content of one directory to my Amazon EC2 with rsync:
rsync -r -t -v --progress -z -s -e "ssh -i /home/mostafa/keyamazon.pem" /home/mostafa/splitfiles ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:~/splitfiles
but I receive the following error message:
sending incremental file list
rsync: link_stat "/home/mostafa/splitfiles" failed: No such file or directory (2)
rsync: change_dir#3 "/home/ubuntu//~" failed: No such file or directory (2)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(712) [Receiver=3.1.0]
and if I do a dry run with grsync, it works correctly
In rsync the trailing / is very important. Also, rsync usually defaults to ssh when one of the destinations contains a host.
So if you want to preserve modification times, you can get rid of the -e and -s options.
Your command could be written as /home/mostafa/splitfiles/ ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:splitfiles/ - notice the trailing /'s - provided that you have ssh configured to read the private key from your home directory.
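One way to have ssh pick up the key automatically (a sketch, assuming the key file from the question) is an entry in ~/.ssh/config:
Host ec2-64-274-161-87.compute-1.amazonaws.com
    User ubuntu
    IdentityFile /home/mostafa/keyamazon.pem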
On Ubuntu you can also add the key to the agent by running:
ssh-add [key-file]
This will save you having to specify the key file every time you ssh into the AWS machine.
The errors seem to say that on the local machine you don't have a source directory and the destination doesn't exist.
I completed this task with FileZilla instead; it is easier to use.
You are in your home directory (~); if you cd ../ to the root directory, you will be able to run the command.

On Linux, how can I share scripts across an SSH connection for the session only?

For work, I have to connect to dozens of Linux machines via SSH (to perform maintenance, monitor the system, install software, etc).
I have a small arsenal of scripts that help me do some of these tasks, and these are located in a folder on my Mac in /Users/me/bin. I want to be able to run these scripts on the remote Linux machine, but for several reasons I do not want these scripts permanently located on these machines (e.g., other people also connect to these remote machines, and it would be unwise to let them execute these files).
So, is it possible to share scripts across an SSH connection for the lifetime of the session only?
I have a couple of ideas on how to do this, but I don't know if any of them will work. Firstly, if SSH allows file mounting, I could automatically mount me@mymac:/Users/me/bin to me@linux:/remote_bin when I connect to the remote Linux box, and set my PATH variable to "$PATH:/remote_bin". Secondly, I could set up port forwarding in the connection string (e.g., ssh me@linux -R 9999:127.0.0.1:<SMBPORT|ETC>) and, every time I connect, mount the share and set the $PATH variable.
EDIT: I've come up with a semi-solution. On the Linux machine, edit /etc/ssh/sshd_config to add the following subsystem: Subsystem shareduserbinary sudo su -l -c "/bin/mount -t cifs -o port=9999,username=me,nounix,sec=ntlmssp //127.0.0.1/exported_bin /mnt/remote_bin" && bash -l -i -s. When connecting to the remote machine, set up a reverse port forward and invoke the subsystem. E.g.: ssh -R 9999:127.0.0.1:445 -s shareduserbinary me@linux.
EDIT 2: You can make the solution above cleaner, by removing the -l from the sudo command and changing the path from /mnt/remote_bin to $HOME/rbin.
Interesting question. Perhaps you can add a command to ~/.bash_login (assuming you are using bash) to copy the scripts from a remote host (such as your mac) when you login, then add a command to ~/.bash_logout to delete the scripts when you logout. But, as bmargulies points out, it would be a good idea to go a step further and make sure that nobody else has permissions to read or execute the scripts.
You can use OpenSSH's LocalCommand to upload the files (using e.g. scp or rsync) when initiating an SSH session (see man ssh_config and this):
Host server1 server2 [...]
PermitLocalCommand yes
LocalCommand scp -q /Users/me/bin/* %h:temp_bin/
and use .bash_logout or an EXIT-trap that you specify in your .bashrc to delete the contents of the directory on logout.
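A minimal sketch of that cleanup, assuming the scripts were copied into ~/temp_bin as in the LocalCommand above:
# in the remote ~/.bash_logout
rm -rf "$HOME/temp_bin"
# or, equivalently, as an EXIT trap in the remote ~/.bashrc
trap 'rm -rf "$HOME/temp_bin"' EXIT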

rsync - mkstemp failed: Permission denied (13) [closed]

I have the following setup to periodically rsync files from server A to server B. Server B has the rsync daemon running with the following configuration:
read only = false
use chroot = false
max connections = 4
syslog facility = local5
log file = /var/adm/rsyncd.log
munge symlinks = false
secrets file = /etc/rsyncd.secrets
numeric ids = false
transfer logging = true
log format = %h %o %f %l %b
[BACKUP]
path = /path/to/archive
auth users = someuser
From server A I am issuing the following command:
rsync -adzPvO --delete --password-file=/path/to/pwd/file/pwd.dat /dir/to/be/backedup/ someuser@192.168.100.100::BACKUP
The BACKUP directory has full read/write/execute permissions for everyone. When I run the rsync command from server A, I see:
afile.txt
989 100% 2.60kB/s 0:00:00 (xfer#78, to-check=0/79)
for each and every file in the directory I wish to back up. It fails when it gets to writing temporary files:
rsync: mkstemp "/.afile.txt.PZQvTe" (in BACKUP) failed: Permission denied (13)
Hours of googling later and I still can't resolve what seems to be a very simple permission issue. Advice? Thanks in advance.
Additional Information
I just noticed the following occurs at the beginning of the process:
rsync: failed to set permissions on "/." (in BACKUP): Permission denied (13)
Is it trying to set permission on "/"?
Edit
I am logged in as the user someuser. My destination directory has full read/write/execute permissions for everyone, including its contents. In addition, the destination directory is owned by someuser and is in someuser's group.
Follow up
I've found that using SSH solves this.
Make sure the user you rsync as on the remote machine has write access to the contents of the folder AND the folder itself, as rsync tries to update the modification time on the folder itself.
Even though you got this working, I recently had a similar encounter, and no SO or Google searching was of any help, as they all dealt with basic permission issues, whereas the solution below is a somewhat obscure setting that you wouldn't even think to check in most situations.
One thing to check for with "permission denied": I recently had issues with rsync myself where permissions were exactly the same on both servers, including the owner and group, but rsync transfers worked one way on one server and not the other way.
It turned out the server with problems, the one I was getting "permission denied" from, had SELinux enabled, which overrides POSIX permissions on files/folders. So even though the folder in question could have been 777 with root running the command, SELinux was enabled and would override those permissions, which produced the "permission denied" error from rsync.
You can run the command getenforce to see if SELinux is enabled on the machine.
In my situation I ended up just disabling SELinux completely, because it wasn't needed, was already disabled on the server that was working fine, and just caused problems when enabled. To disable it, open /etc/selinux/config and set SELINUX=disabled. To disable it temporarily, you can run the command setenforce 0, which puts SELinux into permissive state rather than enforcing state, so it prints warnings instead of enforcing.
The rsync daemon by default uses nobody/nogroup for all modules if it is running as the root user. So you either need to set the uid and gid parameters to the user you want, or set them to root/root.
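A sketch of what that looks like in the module from the question, assuming someuser also exists as a system account on server B:
[BACKUP]
path = /path/to/archive
auth users = someuser
uid = someuser
gid = someuser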
I encountered the same problem and solved it by chowning the destination folder to the user. The current user did not have permission to read, write and execute the destination folder's files. Try adding the permissions with chmod a+rwx <folder/file name>.
This might not suit everyone since it does not preserve the original file permissions but in my case it was not important and it solved the problem for me. rsync has an option --chmod:
--chmod  This option tells rsync to apply one or more comma-separated "chmod" strings to the permission of the files in the transfer. The resulting value is treated as though it was the permissions that the sending side supplied for the file, which means that this option can seem to have no effect on existing files if --perms is not enabled.
This forces the permissions to be what you want on all files/directories. For example:
rsync -av --chmod=Du+rwx SRC DST
would add Read, Write and Execute for the user to all transferred directories.
I had a similar issue, but in my case it was because the storage only offered SFTP, with no ssh shell or rsync daemon on it. I could not change anything, because this server was provided by my customer.
rsync could not change the date and time of the files, and some other utilities (like csync) showed me other errors: "Unable to create temporary file", "Clock skew detected".
If you have access to the storage server, just install openssh-server or launch rsync as a daemon there.
In my case - I could not do this and solution was: lftp.
lftp's usage for synchronization is below:
lftp -c "open -u login,password sftp://sft.domain.tld/; mirror -c --verbose=9 -e -R -L /srs/folder /rem/folder"
/src/folder is the folder on my PC; /rem/folder is sftp://sft.domain.tld/rem/folder.
You can find the man pages at lftp.yar.ru/lftp-man.html
Windows: Check permissions of destination folders. Take ownership if you must to give rights to the account running the rsync service.
I had the same issue in the case of CentOS 7. I went through a lot of articles and forums but couldn't find a solution.
The problem was with SELinux. Disabling SELinux at the server end worked.
Check the SELinux status at the server end (the end from which you are pulling data using rsync).
Commands to check the SELinux status and disable it:
$ getenforce
Enforcing    ## this means SELinux is enabled
$ setenforce 0
$ getenforce
Permissive
Now try running the rsync command at the client end; it worked for me.
All the best!
I have a CentOS 7 server with rsyncd on board:
/etc/rsyncd.conf
[files]
path = /files
By default, SELinux blocks rsyncd's access to the /files folder:
# this sets needed context to my /files folder
sudo semanage fcontext -a -t rsync_data_t '/files(/.*)?'
sudo restorecon -Rv '/files'
# sets needed booleans
sudo setsebool -P rsync_client 1
Disabling SELinux is an easy but not a good solution.
I had the same issue, so I first SSHed into the server to confirm that I was able to log in, using the command:
ssh -i /Users/Desktop/mypemfile.pem user@ec2.compute-1.amazonaws.com
Then, in a new terminal, I copied a small file to the server using SCP, to make sure I was able to make a connection:
scp -i /Users/Desktop/mypemfile.pem /Users/Desktop/test.file user@ec2.compute-1.amazonaws.com:/home/user/test/
Then, in the same new terminal, I tried running rsync:
rsync -avz -e "ssh -i /Users/Desktop/mypemfile.pem" /Users/Desktop/backup/image.img.gz user@ec2.compute-1.amazonaws.com:
If you're on a Raspberry Pi or another Unix system with sudo, you need to tell the remote machine where the rsync and sudo programs are located.
I put in the full path to be safe.
Here's my example:
rsync --stats -paogtrh --progress --omit-dir-times --delete --rsync-path='/usr/bin/sudo /usr/bin/rsync' /mnt/drive0/ pi@192.168.10.238:/mnt/drive0/
I imagine a common error not currently mentioned above is trying to write to a mount point (e.g., /media/drivename) when the partition isn't mounted. That will produce this error as well.
If it's an encrypted drive set to auto-mount but it doesn't, the issue might be auto-unlocking the encrypted partition before attempting to write to the space where it is supposed to be mounted.
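A simple way to guard against writing into an unmounted mount point (a sketch; /source/ and /media/drivename are placeholder paths):
mountpoint -q /media/drivename && rsync -av /source/ /media/drivename/ || echo "destination not mounted, skipping"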
I had the same error while syncing files inside a Docker container where the destination was a mounted volume (Docker for Mac); I ran rsync via su-exec <user>. I was able to resolve it by running rsync as root with the -og flags (keep owner and group for destination files).
I'm still not sure what caused the issue; the destination permissions were OK (I ran chown -R <user> on the destination dir before rsync). Perhaps it is somehow related to Docker for Mac's slow filesystem.
Pay attention to -e ssh and jenkins@localhost: in the next example:
rsync -r -e ssh --chown=jenkins:admin --exclude .git --exclude Jenkinsfile --delete ./ jenkins@localhost:/home/admin/web/xxx/public
That helped me.
P.S. Today I realized that when you add the jenkins user to some group, the permission only takes effect after a slave (agent) restart. My solution (-e ssh and jenkins@localhost:) is only needed when you can't restart the agent/server.
Yet another way to get this symptom: I was rsyncing from a remote machine over ssh to a Linux box with an NTFS-3G (FUSE) filesystem. Originally the filesystem was mounted at boot time and thus owned by root, and I was getting this error message when I did an rsync push from the remote machine. Then, as the user to which the rsync is pushed, I did:
$ sudo umount /shared
$ mount /shared
and the error messages went away.
The group and user name of the destination directory and subdirectories should match the user.
If the user is 'abc', then the destination directory should look like:
lrwxrwxrwx 1 abc abc 34 Jul 18 14:05 Destination_directory
Command: chown abc:abc Destination_directory
Surprisingly, nobody has mentioned the all-powerful sudo.
I had the same problem and sudo fixed it.
Running the ssh session with root access could solve this problem,
or chmod 0777 /dir/to/be/backedup/,
or chown username:user /dir/to/be/backedup/.

rsync over SSH preserve ownership only for www-data owned files

I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are ubuntu linux. I use the following command, and it works well:
rsync -az /var/www/ user@10.1.1.1:/var/www/
The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.
Is this possible? If so, how would I go about doing that?
** EDIT **
There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html
** EDIT 2 **
I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:
sudo rsync -az user@10.1.1.2:/var/www/ /var/www/
This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.
You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify your sudoers file on the target host to avoid sudo asking for your password; see man sudoers or run sudo visudo for instructions and samples.
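For example, a sketch of the kind of sudoers entry that allows this, assuming the remote login is called user and rsync is installed at /usr/bin/rsync (add it with sudo visudo):
user ALL=(root) NOPASSWD: /usr/bin/rsync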
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option.
rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path
I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work. You'll undoubtedly need to experiment.
As far as I know, you cannot chown files to somebody other than yourself if you are not root. So you would have to rsync using the www-data account, as all files will be created with the specified user as owner; otherwise you need to chown the files afterwards.
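For illustration only, that would look something like the following, assuming www-data is actually allowed to log in over SSH on the remote server (by default it usually is not):
rsync -az /var/www/ www-data@10.1.1.1:/var/www/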
The root users for the local system and the remote system are different.
What does this mean? The root user is uid 0. How are they different?
Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
# rsync -az user@10.1.1.2:/var/www/ /var/www/
Make sure your groups match on both machines.
Also, set up access to user@10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
I had a similar problem and cheated with the rsync command:
rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/
The && runs the chown against the folder when the rsync completes successfully (a single '&' would run the chown regardless of whether the rsync had finished or succeeded).
Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
ssh user@remotehost "cd /some/path; sudo tar zxf -"
You'll need to set up your SSH keys as Graham described.
Note that this handles full directory copies, not incremental updates like rsync.
The idea here is that:
you tar up your directory,
instead of creating a tar file, you send the tar output to stdout,
that stdout is piped through an SSH command to a receiving tar on the other host,
but that receiving tar is run by sudo, so it has privileged write access to set usernames.
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use to sync files with the server (Debian):
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
