I'm trying to run rsync in daemon mode on one machine, called node0. I want the machine to accept connections only from node0 (itself) or node1 (another, already defined machine).
The only path I want to be available for use is /tmp/, so any read/write operation to any other path should be disallowed, for any originating machine.
I do not want user restrictions, nor rsync to run over ssh.
I've set up /etc/rsyncd.conf as follows:
syslog facility = syslog
max verbosity = 3
log file = /var/log/mylog.log
port = 873
[Proj1]
path = /tmp/
include = /tmp/
exclude = *
comment = some comment
hosts allow = node0, node1
hosts deny = *
max verbosity = 3
log file = /var/log/filedistributer.log
I then execute rsync in daemon mode with the following command:
/usr/bin/rsync --daemon --config=/etc/rsyncd.conf --verbose
Then from node1 I run:
rsync -a test.out node0:/tmp/ and it works correctly.
But if I run rsync -a max.out node0:/someOtherDir, it also works, where it should not.
If I go to a different machine and run:
rsync -a someFile.out root@node0.MYHOSTNAME:/someOtherDir
It asks me to authorize the SSH key and enter the password for root, and after that it copies the file... It should not.
What am I missing? None of my requirements are met:
Rsync uses SSH, when it should not
Rsync does not restrict hosts.
Rsync does not restrict folders
The log file, despite the level 3 verbosity, only has one line (per daemon startup) of the form:
2013/11/18 12:32:35 [22289] rsyncd version 3.0.6 starting, listening on port 873
Another problem I encounter is that if I run the rsync command from node0 to node1, it still succeeds, even though I have not started rsync in daemon mode on node1.
Your help is appreciated.
Thanks,
Max.
So after some digging, what I came up with is this:
By default rsync will try to use ssh as its transport, so it SSHes to the destination machine and invokes rsync there.
This requires valid credentials on the destination machine, and there is no way to avoid it other than removing the SSH daemon from the destination machine.
In order to stop rsync from working in this mode, it is actually necessary to restrict ssh or rsh (or whatever remote shell is used as the transport) from accepting connections from unrelated hosts. Even then, it is still impossible to force the "white list" hosts to operate only on the desired folders in this mode.
My restrictions on rsync in daemon mode using rsyncd.conf are correct and they work, but in order to use plain TCP as the transport and not ssh, rsync has to be invoked differently:
rsync rsync://node0/Proj1
This actually makes rsync open a TCP connection and talk to the rsync daemon on the destination.
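For example, a minimal sketch of copying a file into the [Proj1] module defined above (module name and host taken from the question's config; the two forms are equivalent):
rsync -av test.out rsync://node0/Proj1/
rsync -av test.out node0::Proj1/
Both contact the daemon over TCP port 873 instead of spawning a remote shell.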
A very valuable source of information on how rsync works can be found at the link below:
http://rsync.samba.org/how-rsync-works.html
Did you make sure the config file is actually read by the daemon when it is launched?
Try for example to deny all hosts and check if the rule applies.
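For instance, a quick sanity check (a sketch based on the module from the question): with only a deny rule and no hosts allow line, every client, including node1, should be refused:
[Proj1]
path = /tmp/
hosts deny = *
If node1 can still push files after restarting the daemon, the daemon is not reading this config file.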
I want to transfer files between two servers; the total size is approximately 170 GB.
On one server there is the DirectAdmin control panel, and on the other one cPanel.
I have FTP and SSH access on both servers. I know about the scp command over SSH, but I tried it and didn't succeed, so I would prefer to use FTP commands: there were connection or other errors over SSH, the transfer kept stopping, and I couldn't resume it by skipping the already uploaded files. So what should I do?
You can use rsync; it will continue where it stopped.
Go to one of the servers and do:
rsync -avz other.server.com:/path/to/directory /where/to/save
You can omit the z option if the data is not compressible.
This assumes that the user name on both servers is the same.
If not, you will need to add -e 'ssh -l login_name' to the above command.
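Putting that together, a hedged example (assuming SSH as the transport and a different remote login; -P combines --partial and --progress, so a restarted run skips data that was already transferred):
rsync -avzP -e 'ssh -l login_name' other.server.com:/path/to/directory /where/to/save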
Often I face this situation: I have SSHed into a remote server and run some programs, and I want to copy their output files back to my local machine. What I do is remember the file path on the remote machine, exit the connection, then run scp user@remote:filepath .
Obviously this is not optimal. What I'm looking for is a way to scp the file back to my local machine without exiting the connection. I did some searching; almost all results tell me how to scp from my local machine, which I already know.
Is this possible? Better still, is it possible without needing to know the IP address of my local machine?
Given that you have an sshd running on your local machine, it's possible and you don't need to know your outgoing IP address. If SSH port forwarding is enabled, you can open a secure tunnel even when you already have an ssh connection opened, and without terminating it.
Assume you have an ssh connection to some server:
local $ ssh user@example.com
Password:
remote $ echo abc > abc.txt # now we have a file here
Now we need to copy that file back to our local machine, and for some reason we don't want to open a new connection. Let's get to the ssh command line by pressing Enter ~C (Enter, then tilde, then capital C):
ssh> help
Commands:
-L[bind_address:]port:host:hostport Request local forward
-R[bind_address:]port:host:hostport Request remote forward
-D[bind_address:]port Request dynamic forward
-KR[bind_address:]port Cancel remote forward
That's just like the regular -L/R/D options. We'll need -R, so we hit Enter ~C again and type:
ssh> -R 127.0.0.1:2222:127.0.0.1:22
Forwarding port.
Here we forward the remote server's port 2222 to the local machine's port 22 (and this is where you need the local SSH server to be listening on port 22; if it's listening on some other port, use that instead of 22).
Now just run scp on the remote server and copy our file to the remote server's port 2222, which is mapped to our local machine's port 22 (where our local sshd is running).
remote $ scp -P2222 abc.txt user@127.0.0.1:
user@127.0.0.1's password:
abc.txt 100% 4 0.0KB/s 00:00
We are done!
remote $ exit
logout
Connection to example.com closed.
local $ cat abc.txt
abc
Tricky, but if you really cannot just run scp from another terminal, it could help.
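If you know in advance that you will want to copy files back, the same reverse tunnel can also be requested when you first connect, instead of via Enter ~C (a sketch, assuming your local sshd listens on port 22):
local $ ssh -R 2222:127.0.0.1:22 user@example.com
remote $ scp -P2222 abc.txt user@127.0.0.1: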
I found this one-liner solution on SU to be a lot more straightforward than the accepted answer. Since it uses an environment variable for the local IP address, I think it also satisfies the OP's request not to know it in advance.
Based on that, here's a bash function to "download" a file (i.e. push it from the SSH session to a set location on the local machine):
function dl(){
# copy the given file to the Downloads folder on the machine we SSHed in from;
# ${SSH_CLIENT%% *} keeps only the client IP (everything before the first space)
scp "$1" "${SSH_CLIENT%% *}":/home/<USER>/Downloads
}
Now I can just call dl somefile.txt while SSH'd into the remote and somefile.txt appears in my local Downloads folder.
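To see what the parameter expansion extracts: SSH_CLIENT holds the client IP, client port and server port separated by spaces, and ${SSH_CLIENT%% *} strips everything after the first space (the address below is just an illustration):
remote $ echo "$SSH_CLIENT"
198.51.100.7 51822 22
remote $ echo "${SSH_CLIENT%% *}"
198.51.100.7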
Extras:
I use RSA keys (ssh-copy-id) to get around the password prompt
I found this trick to prevent the local bashrc from being sourced on the scp call
Note: this requires SSH access to local machine from remote (is this often the case for anyone?)
The other answers are pretty good and most users should be able to work with them. However, I found the accepted answer a tad cumbersome and others not flexible enough. A VPN server in between was also causing trouble for me with figuring out IP addresses.
So, the workaround I use is to generate the required scp command on the remote system using the following function in my .bashrc file:
function getCopyCommand {
# print a ready-to-paste scp command for a file in the current remote directory
echo "scp user@remote:$(pwd)/$1 ."
}
I find rsync more useful if the local system is almost a mirror of the remote server (including the username) and I also need to copy the directory structure.
function getCopyCommand {
echo "rsync -rvPR user#remote:$(pwd)/$1 /"
}
The generated scp or rsync command is then simply pasted on my local terminal to retrieve the file.
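For example (the file name is hypothetical, user@remote is whatever you put in the function, and the path is simply what pwd returns on the remote):
remote $ getCopyCommand results.log
scp user@remote:/home/user/project/results.log .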
You would need a local SSH server running on your machine; then you can just:
scp [-r] local_content your_local_user@your_local_machine_ip:
Anyway, you don't need to close your remote connection to make a remote copy; just open another terminal and run scp there.
On your local computer:
scp root@remotemachine_name_or_IP:/complete_path_to_file /local_path
I wrote a simple Bash script to change the network address of a Linux Host:
#!/bin/sh
REMOTE_HOST=192.168.2.127 # Default Host address
NEW_IP=192.168.30.33 # New IP I want to set
NEW_GW=192.168.30.1 # New Gateway I want to set
sudo ifconfig eth0 192.168.2.1 # Moving to the right network...
#ping $REMOTE_HOST -c 3 # I can correctly ping the host here...
ssh-copy-id root@${REMOTE_HOST} > /dev/null # ...for my comfort...
# Setting the network with new values for the IP addr and the GW...
COMMAND="sed -i 's#address *\\([0-9.]\\+\\)#address ${NEW_IP}#' /etc/network/interfaces\
&& sed -i 's#gateway *\\([0-9.]\\+\\)#gateway ${NEW_GW}#' /etc/network/interfaces"
ssh root@${REMOTE_HOST} $COMMAND
# done!
# Now restart the network services:
ssh root#${REMOTE_HOST} "/etc/init.d/networking restart &" & # (Note the 2nd '&' !!!)
# Come back to my old IP
sudo ifconfig eth0 192.168.30.10
sudo route add default gw 192.168.30.1
This script works almost perfectly but:
1) If I run it from my home folder, there are no problems; if I run it from an NFS-shared folder, the script hangs for a minute or two before ending correctly.
2) If I omit the second '&' when restarting the network on the host, the command never returns...
The questions are:
1) What causes the long wait (NFS, the different IP address, the different gateway)? And is it possible to work around it?
2) Why does this happen? How can I avoid it?
Thanks for any kind of help and sorry for my bad English!
You're restarting networking services, which drops all active connections.
Bash reads the script it is executing line by line. Since your script lives on NFS, a network file system, restarting the network cuts off access to the file, so the system cannot execute the lines after the networking restart until the connection is re-established.
Instead, you should first make a local copy of the entire script and then run it locally.
You could also code a script for that ;-)
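A minimal sketch of that idea (the NFS path prefix is an assumption; adjust it to your mount point): have the script copy itself to local storage and re-execute from there before touching the network:
#!/bin/sh
# if we are running from the NFS mount, copy ourselves to /tmp and re-exec locally
case "$0" in
/nfs/*) cp "$0" /tmp/ && exec /tmp/"$(basename "$0")" "$@" ;;
esac
# ... the rest of the original script (ifconfig, ssh, networking restart) goes here ...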
I would like to script a sequence of commands involving multiple ssh and scp calls. On a daily basis, I find myself manually performing this task:
From LOCAL system, ssh to SYSTEM1
mkdir /tmp/data on SYSTEM1
from SYSTEM1, ssh to SYSTEM2
mkdir /tmp/data on SYSTEM2
from SYSTEM2, SSH to SYSTEM3
scp files from SYSTEM3:/data to SYSTEM2:/tmp/data
exit to SYSTEM2
scp files from SYSTEM2:/data and SYSTEM2:/tmp/data to SYSTEM1:/tmp/data
rm -fr SYSTEM2:/tmp/data
exit to SYSTEM1
scp files from SYSTEM1:/data and SYSTEM1:/tmp/data to LOCAL:/data
rm -fr SYSTEM1:/tmp/data
I do this process at LEAST once a day and it takes approximately 5-10 minutes going between the different systems and then cleaning up afterwards. I would really like to automate this in a bash script, but my amateur attempts so far have been unsuccessful. As you might suspect, communication between the systems is constrained, meaning LOCAL can only see SYSTEM1, SYSTEM2 can only see SYSTEM1 and SYSTEM3, SYSTEM3 can only see SYSTEM2, etc. You get the idea. What is the best way to do this? Additionally, SYSTEM1 is a hub for many other systems, so SYSTEM2 must be indicated by the user (SYSTEM3 will always have the same relative IP/hostname compared to any SYSTEM2).
I tried just putting the commands in the proper order in a shell script and then manually typing in the passwords when prompted (which would already be a huge gain in efficiency) but either the method doesn't work or my execution of the script is wrong. Additionally, I would want to have a command line argument for the script that would take a pattern for which 'system2' to connect to, a pattern for the data to copy, and a target location for the data on the local system.
Such as
./grab_data system2 *05-14* ~/grabbed-data
I did some searching and I think my next step would be to have scripts on each system that perform the local tasks, and then execute those scripts via ssh commands from the respective remote system. Is there a better way? What commands should I look at using, and what would be the general approach to automating this sort of nested ssh and scp problem?
I realize my description may be a bit convoluted so please ask for clarification on any area that I did not properly describe.
Thanks.
You can simplify this process a lot by tunneling ssh connections over other ssh connections (see this previous answer). The way I'd do it is to create an .ssh/config file on the LOCAL system with the following entries:
Host SYSTEM3
ProxyCommand ssh -e none SYSTEM2 exec /usr/bin/nc %h %p 2>/dev/null
HostName SYSTEM3.full.domain
User system3user
Host SYSTEM2
ProxyCommand ssh -e none SYSTEM1 exec /usr/bin/nc %h %p 2>/dev/null
HostName SYSTEM2.full.domain
User system2user
Host SYSTEM1
HostName SYSTEM1.full.domain
User system1user
(That's assuming both intermediate hosts have netcat installed as /usr/bin/nc -- if not, you may have to find/install some equivalent way of gatewaying stdin&stdout into a TCP session.)
With this set up, you can use scp SYSTEM3:/data /data on LOCAL, and it'll automatically tunnel through SYSTEM1 and SYSTEM2 (and ask for the passwords for the three SYSTEMn's in order -- this can be a little confusing, especially if you mistype one).
If you're connecting to multiple systems, and especially if you have to forward connections through intermediate hosts, you will want to use public key authentication with ssh-agent forwarding enabled. That way, you only have to authenticate once.
Scripted SSH with agent forwarding may suffice if all you need to do is check the exit status from your remote commands, but if you're going to do anything complex you might be better off using expect or expect-lite to drive the SSH/SCP sessions in a more flexible way. Expect in particular is designed to be a robust replacement for interactive sessions.
If you stick with shell scripting, and your filenames change a lot, you can always create a wrapper around SSH or SCP like so:
# usage: my_ssh [remote_host] [command_line]
# returns: exit status of remote command, or 255 on SSH error
my_ssh () {
local host="$1"
shift
ssh -A "$host" "$#"
}
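The text above mentions a wrapper around SSH or SCP but only shows the SSH one; a matching scp sketch under the same assumptions (the name my_scp is my own) could look like:
# usage: my_scp [scp arguments...], e.g. my_scp SYSTEM3:/data/somefile /tmp/data/
# returns: exit status of scp
my_scp () {
    scp -o ForwardAgent=yes "$@"
}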
Between ssh-agent and the wrapper function, you should have a reasonable starting point for your own efforts.
Another way could be to use rsync, which automatically creates any needed directories and, if you want, removes the copied source files.
In your case, you could work with the commands
home:~$ ssh system1
system1:~$ ssh system2
system2:~$ rsync -aPSHiv system3:/data /tmp/data
system2:~$ exit
system1:~$ rsync -aPSHiv --remove-source-files system2:/tmp/data /tmp/data
system1:~$ rsync -aPSHiv system2:/data /tmp/data
system1:~$ exit
home:~$ rsync -aPSHiv --remove-source-files system1:/tmp/data /tmp/data
home:~$ rsync -aPSHiv system1:/data /data
If you combine this with Gordon's approach, you can even reduce that to
home:~$ rsync -aPSHiv system1:/data/ system2:/data/ system3:/data/ /data/
Note that rsync distinguishes between ...data and ...data/: the former means the directory and its contents, the latter just the contents. If you mix them up, you might end up with a directory data inside another directory data.
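A quick illustration of the difference (hypothetical paths):
rsync -a system1:/data /tmp/     # creates /tmp/data/... (the directory plus its contents)
rsync -a system1:/data/ /tmp/    # copies only the contents directly into /tmp/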
Besides, you simplify things if you work with public SSH keys instead of passwords.
How do I execute a command/script on a remote Linux box?
Say I want to run service tomcat start on box B from box A.
I guess ssh is the most secure way to do this, for example:
ssh -OPTIONS -p SSH_PORT user@remote_server "remote_command1; remote_command2; remote_script.sh"
where the OPTIONS have to be deployed according to your specific needs (for example, binding to ipv4 only) and your remote command could be starting your tomcat daemon.
Note:
If you do not want to be prompted at every ssh run, please also have a look at ssh-agent, and optionally at keychain if your system allows it. The key is to understand the SSH key exchange process. Please take a careful look at ssh_config (i.e. the ssh client config file) and sshd_config (i.e. the ssh server config file). Configuration filenames depend on your system; anyway, you'll find them somewhere like /etc/sshd_config. Ideally, do not run ssh as root, obviously, but as a specific user on both sides, server and client.
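For example, a minimal key-based setup sketch (commands use OpenSSH defaults; adjust the user and host names to yours):
ssh-keygen -t ed25519                          # generate a key pair on box A (once)
ssh-copy-id user@remote_server                 # install the public key on box B
eval "$(ssh-agent)" && ssh-add                 # load the key into an agent for this session
ssh user@remote_server "service tomcat start"  # now runs without a password prompt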
Some extra docs beyond the source projects' main pages:
ssh and ssh-agent
man ssh
http://www.snailbook.com/index.html
https://help.ubuntu.com/community/SSH/OpenSSH/Configuring
keychain
http://www.gentoo.org/doc/en/keychain-guide.xml
an older tutorial in French (by myself :-) but it might be useful too:
http://hornetbzz.developpez.com/tutoriels/debian/ssh/keychain/
ssh user@machine 'bash -s' < local_script.sh
or you can just
ssh user#machine "remote command to run"
If you don't want to deal with security and want to make it as exposed (aka "convenient") as possible for the short term, and/or don't have ssh/telnet or key generation on all your hosts, you can hack a one-liner together with netcat. Write a command to your target computer's port over the network and it will run it. Then you can block access to that port to a few "trusted" users or wrap it in a script that only allows certain commands to run (see the sketch at the end). And use a low-privilege user.
on the server
mkfifo /tmp/netfifo; nc -lk 4201 0</tmp/netfifo | bash -e &>/tmp/netfifo
This one-liner reads whatever string you send to that port and pipes it into bash to be executed. stderr and stdout are dumped back into the netfifo and sent back to the connecting host via nc.
on the client
To run a command remotely:
echo "ls" | nc HOST 4201