rsyncing between two non-login users - linux

I have two machines A and B, both of which have 3 users:
root (I don't know its password, but I can switch to it using sudo su -)
login (used for sshing into both machines, has a password, is a sudoer)
mysql (standard non-interactive user running mysql server)
What I need to do is rsync a data directory (dir) belonging to mysql from machine A to machine B.
Obviously I can't just do:
rsync -avpE /dir/ B:/dir/
Because the login user has no read access to dir on either A or B.
I can't do:
sudo -u mysql rsync -avpE /dir/ B:/dir/
Because now the sending side on A has access to dir, but the receiving side on B (still running as login) doesn't.
So is it possible to construct an rsync command so that I can copy the data across without using any temporary space?

rsync has an option called --rsync-path that might help you:
$ rsync |& grep rsync-path
--rsync-path=PROGRAM specify the rsync to run on the remote machine
The idea is to ask the local rsync to ssh to the remote machine (as user login) and then, when it wants to invoke rsync on the remote machine, have it call sudo -u mysql rsync instead of plain rsync. So something like this:
sudo -u mysql rsync -avpE --rsync-path="sudo -u mysql rsync" /dir/ login@B:/dir/
Of course for this to work, the user login on the remote machine must be able to sudo -u mysql without a password.
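For example, an /etc/sudoers rule on B along these lines should do it (a sketch; edit with visudo, and check the rsync path with which rsync, since it may not be /usr/bin/rsync on your system):
login ALL=(mysql) NOPASSWD: /usr/bin/rsync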

Related

How to properly upload a local file to a server using mobaXterm?

I'm trying to upload a file from my local desktop to a server and I'm using this command:
scp myFile.txt cooluser@192.168.10.102:/opt/nicescada/web
following the structure: scp filename user@ip:/remotePath.
But I get "Permission Denied". I tried using sudo, but I get the same message. I'm able to download from the server to my local machine, so I assume I have all the permissions needed.
What can be wrong in that line of code?
In case /desired/path on your destination machine is writable only by root, and you have an account on the destination machine with sudo privileges (super-user privileges gained by prefixing sudo to a command), you could also do it the following way:
Option 1 based on scp:
Copy the file to a location on your destination machine where you have write access, like /tmp:
scp file user@destinationMachine:/tmp
Log in to your destination machine with:
ssh user@destinationMachine
Move the file to your /desired/path with:
sudo mv /tmp/file /desired/path
In case you have a passwordless sudo setup, you could also combine steps 2 and 3 into:
ssh user@destination sudo mv /tmp/file /desired/path
Option 2 based on rsync
Another, maybe even simpler, option would be to use rsync:
rsync -e "ssh -tt" --rsync-path="sudo rsync" file user@destinationMachine:/desired/path
with -e "ssh -tt" added so that sudo can prompt for your password even though rsync doesn't allocate a tty by itself.
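The same pattern also works for a whole directory (a sketch, where localdir is a hypothetical local folder; the trailing slash copies its contents rather than the folder itself):
rsync -av -e "ssh -tt" --rsync-path="sudo rsync" localdir/ user@destinationMachine:/desired/path/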
Try and specify the full destination path:
scp myFile.txt cooluser@192.168.10.102:/opt/nicescada/web/myFile.txt
Of course, double-check cooluser has the right to write (not just read) in that folder: 755, not 644 for the web parent folder.
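A quick way to check and, if needed, fix that on the server (a sketch; the chown assumes cooluser should own the folder):
ls -ld /opt/nicescada/web
sudo chown cooluser /opt/nicescada/web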

cd to directory and su to particular user on remote server in script

I have some tasks to do on a remote Ubuntu CLI-only server in our offices every 2 weeks. I usually type the commands one by one, but I am trying to find a way (write a script maybe?) to decrease the time I spend in repeating those first steps.
Here is what I do:
ssh my_username@my_local_server
# asks for my_username password
cd /path/to/particular/folder
su particular_user_on_local_server
# asks for particular_user_on_local_server password
And then I can do my tasks (run some Ruby script on Rails applications, copy/remove files, restart services, etc.)
I am trying to find a way to do this in a one-step script/command:
"ssh connect then cd to directory then su to this user"
I tried to use the following:
ssh username@server 'cd /some/path/to/folder ; su other_user'
# => does not keep my connection open to the server, just executes my `cd`
# and then tells me `su: must be run from terminal`
ssh username@server 'cd /some/path/to/folder ; bash ; su other_user'
# => keeps my connection open to the server but doesn't switch to user
# and I don't see the usual `username:~/current/folder` prefix in the CLI
Is there a way to open a terminal (keep the connection) on a remote server via ssh and change directory plus switch to a particular user in an automated way? (To make things harder, I'm using Yakuake.)
You can force allocation of a pseudo-terminal with -t, change to the desired directory and then replace the shell with one where you are the desired user:
ssh -t username@server 'cd /some/path/to/folder && exec bash -c "su other_user"'
sudo keeps the current working directory (the -H just sets HOME to the target user's home directory), so you could do:
ssh -t login_user@host.com 'cd /path/to/dir/; sudo -H -u other_user bash'
The -t parameter of ssh is needed, otherwise sudo won't be able to ask you for your password.
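To make this a single local command, either one-liner can be wrapped in a shell alias (a sketch using the hypothetical names above; remotework is just a made-up alias name, placed e.g. in ~/.bashrc on your own machine):
alias remotework='ssh -t login_user@host.com "cd /path/to/dir/ && sudo -H -u other_user bash"'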

Changing user to root when connected to a linux server and copying files

My script is written in a way that doesn't allow connecting to the server directly as root. It copies files from a server to my computer and it works, but I don't have access to many files because only root can read them. How can I connect to the server as a regular user and then copy its files by switching to root?
Code I want to change:
sshpass -p "password" scp -q -r username@74.11.11.11:some_directory copy_it/here/
In other words, I want to be able to remotely copy files which are only accessible to root on a remote server, but don't wish to access the remote server via ssh/scp directly as root.
Is it possible through only ssh and not sshpass?
If I understand your question correctly, you want to be able to remotely copy files which are only accessible to root on the remote machine, but you don't wish to (or can't) access the remote machine via ssh/scp directly as root. And a separate question is whether it could be done without sshpass.
(Please understand that the solutions I suggest below have various security implications and you should weigh up the benefits versus potential consequences before deploying them. I can't know your specific usage scenario to tell you if these are a good idea or not.)
When you ssh/scp as a user, you don't have access to the files which are only accessible to root, so you can't copy all of them. So you need to instead "switch to root" once connected in order to copy the files.
"Switching to root" for a command is accomplished by prefixing it with sudo, so the approach would be to remotely execute commands which copy the files via sudo to /tmp on the remote machine, change their owner to the connecting user, and then remotely copy them from /tmp:
ssh username@74.11.11.11 "sudo cp -R some_directory /tmp"
ssh username@74.11.11.11 "sudo chown -R username:username /tmp/some_directory"
scp -q -r username@74.11.11.11:/tmp/some_directory copy_it/here/
ssh username@74.11.11.11 "rm -r /tmp/some_directory"
However, sudo prompts for the user's password, so you'll get a "sudo: no tty present and no askpass program specified" error if you try this. So you need to edit /etc/sudoers on the remote machine to authorize the user to use sudo for the needed commands without a password. Add these lines:
username ALL=NOPASSWD: /bin/cp
username ALL=NOPASSWD: /bin/chown
(Or, if you're cool with the user being able to execute any command via sudo without being prompted for password, you could instead use:)
username ALL=NOPASSWD: ALL
Now the above commands will work and you'll be able to copy your files.
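If you do this regularly, the four commands can be wrapped in a small local script (a sketch reusing the same hypothetical paths and IP as above; adjust to your layout):
#!/bin/bash
# Pull a root-only directory from the remote host via a /tmp staging copy.
set -e
remote=username@74.11.11.11
ssh "$remote" "sudo cp -R some_directory /tmp && sudo chown -R username:username /tmp/some_directory"
scp -q -r "$remote:/tmp/some_directory" copy_it/here/
ssh "$remote" "rm -r /tmp/some_directory"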
As for avoiding sshpass, you could instead use a public/private key pair: the private key on the local machine is matched against the public key stored on the remote machine to authenticate the user, rather than a password.
To set this up, on your local machine, type ssh-keygen. Accept the default file (/home/username/.ssh/id_rsa). Use an empty passphrase. Then append the file /home/username/.ssh/id_rsa.pub on the local machine to /home/username/.ssh/authorized_keys on the remote machine:
cat /home/username/.ssh/id_rsa.pub | ssh username@74.11.11.11 \
"mkdir -m 0700 -p .ssh && cat - >> .ssh/authorized_keys && \
chmod 0600 .ssh/authorized_keys"
Once you've done this, you'll be able to use ssh or scp from the local machine without password authorization.
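Many systems also ship ssh-copy-id, which performs the same append (and the permission fixes) in one step, assuming the same key location:
ssh-copy-id -i /home/username/.ssh/id_rsa.pub username@74.11.11.11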

mount unmount without sudo

I am trying to write a script that would ssh into a host, perform mount operation there, run some other commands and exit.
The other commands (cd, cp) do not require sudo privileges, but mount requires sudo permission. I want to write a script that would do:
ssh user@server "mount -t nfs xx.xx.xx.xx:/ /nfs -o rsize=4096,wsize=4096 ; cp pqr rst ; umount /nfs ;"
and some other non-sudo commands. How can I do this without a sudo option and without entering any passwords while the script is running?
Desktop Linux distributions use udisks to grant non-root users limited mounting privileges.
udisks version 2
udisksctl mount -b [device]
udisks version 1
udisks --mount [device]
Of course, if we are talking about a server VM, then these tools might not be installed.
Installing them would require root access (once).
You must add an /nfs entry to /etc/fstab on the server host.
The entry's option list must include user or users (user lets only the user who mounted the filesystem unmount it, users lets any user unmount it).
Example:
xx.xx.xx.xx:/ /nfs nfs rsize=4096,wsize=4096,user 0 0
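With such an entry in place, the unprivileged user can mount and unmount by mount point alone, so the ssh line from the question needs no sudo at all (the NFS options now come from fstab):
ssh user@server "mount /nfs ; cp pqr rst ; umount /nfs"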
You can allow that user to run mount via sudo without being asked for a password.
Use the NOPASSWD directive in /etc/sudoers; a sketch follows below.
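A minimal sudoers sketch for that (edit with visudo; the mount/umount paths may differ on your distribution, so check with which mount):
user ALL=(root) NOPASSWD: /bin/mount, /bin/umount
The mount and umount calls in the script then just need a sudo prefix:
ssh user@server "sudo mount -t nfs xx.xx.xx.xx:/ /nfs -o rsize=4096,wsize=4096 ; cp pqr rst ; sudo umount /nfs"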
Or you may prefer to write an expect script which has the password written into it, so the password is entered automatically when the command prompts for it.

rsync over SSH preserve ownership only for www-data owned files

I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are ubuntu linux. I use the following command, and it works well:
rsync -az /var/www/ user#10.1.1.1:/var/www/
The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.
Is this possible? If so, how would I go about doing that?
** EDIT **
There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html
** EDIT 2 **
I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:
sudo rsync -az user@10.1.1.2:/var/www/ /var/www/
This preserves the ownership and groups of the files for user names that exist on both machines, like www-data. Note that running rsync without sudo does not preserve this ownership.
You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify your sudoers file on the target host to avoid sudo's request for your password. man sudoers or run sudo visudo for instructions and samples.
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option.
rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path
I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work. You'll undoubtedly need to experiment.
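One likely wrinkle: the names read via --files-from are interpreted relative to the source directory (leading slashes are stripped), while find prints absolute paths. A hedged refinement, assuming GNU find's -printf is available:
find /path/to/files -user www-data -printf '%P\n' | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path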
As far as I know, you cannot chown files to somebody other than yourself if you are not root. So you would have to rsync using the www-data account, as all files will be created owned by the connecting user; otherwise you need to chown the files afterwards.
The root users for the local system and the remote system are different.
What does this mean? The root user is uid 0. How are they different?
Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
# rsync -az user@10.1.1.2:/var/www/ /var/www/
Make sure your groups match on both machines.
Also, set up access to user#10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
I had a similar problem and cheated with the rsync command:
rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/
The && runs the chown against the folder only when the rsync completes successfully (a single & would run the chown regardless of the rsync's completion status).
Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
ssh user@remotehost "cd /some/path; sudo tar zxf -"
You'll need to set up your SSH keys as Graham described.
Note that this handles full directory copies, not incremental updates like rsync.
The idea here is that:
you tar up your directory,
instead of creating a tar file, you send the tar output to stdout,
that stdout is piped through an SSH command to a receiving tar on the other host,
but that receiving tar is run by sudo, so it has privileged write access to set usernames.
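A variant of the same pipe that avoids the remote cd by using tar's -C option (a sketch; note it extracts files directly under /some/path instead of recreating the leading path components):
sudo tar -zcf - -C /path/to files | ssh user@remotehost "sudo tar -zxf - -C /some/path"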
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use to sync files with the server (Debian):
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
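Note that --chown (added in rsync 3.1.0) only takes effect when the receiving rsync is allowed to change ownership, i.e. usually when it runs as root. If username is not root on the server, a hedged option is to combine it with the --rsync-path trick shown earlier (which requires passwordless sudo for username there; excludes omitted for brevity):
$ rsync -rptgoDvhP --chown=www-data:www-data --rsync-path="sudo rsync" ./website/ username@hostname:/var/www/html/website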
