How to transfer a file in an "indirect" ssh connection? - linux

I have to access my server like this: localhost -> remote1 -> remote2 (my server)
[xxxx@localhost] $ ssh yyyy@remote1
[yyyy@remote1] $ ssh zzzz@remote2
[zzzz@remote2] $ echo "now I'm logged into my server..."
I know how to transfer files with scp, but I have no read or write permissions on remote1. How can I transfer a file to remote2?

An alternative is to use a ProxyCommand:
scp -o ProxyCommand='ssh yyyy@remote1 netcat %h %p 2> /dev/null' zzzz@remote2:fromfile tofile
provided remote1 has netcat installed. Other viable options are nc or socat (the latter has a different syntax).
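On newer OpenSSH (ProxyJump arrived in 7.3; scp gained the -J shorthand in 8.0, to my knowledge) the same hop can be expressed without netcat on remote1; a minimal sketch, assuming the same hosts and users:
# ProxyJump opens the connection through remote1; nothing extra needed there
scp -o ProxyJump=yyyy@remote1 zzzz@remote2:fromfile tofile
# equivalent shorthand on recent scp
scp -J yyyy@remote1 zzzz@remote2:fromfile tofile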

Try this,
ssh -L localhost:8022:remote2:22 remote1
Now you can use localhost port 8022 to reach port 22 of remote2 via remote1. This session must stay open whenever you need to transfer. Then use
scp -P 8022 /path/local/file 127.0.0.1:/path/on/remote2
This is commonly called SSH tunneling; it is well worth reading up on.
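If you'd rather not dedicate an interactive session to the tunnel, you can background it; a minimal sketch, assuming the yyyy account on remote1 and zzzz on remote2:
ssh -f -N -L 8022:remote2:22 yyyy@remote1   # -f: go to background, -N: no remote command
scp -P 8022 /path/local/file zzzz@127.0.0.1:/path/on/remote2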

Related

ssh "port 22: no route to host" error in bash script

I wrote a script to execute a couple of remote ssh commands relating to Apache Storm. When I execute the script it says:
ssh: connect to host XXX.XXX.XXX.XXX port 22: No route to host
ssh: connect to host XXX.XXX.XXX.XXX port 22: No route to host
ssh: connect to host XXX.XXX.XXX.XXX port 22: No route to host
ssh: connect to host XXX.XXX.XXX.XXX port 22: Connection refused
If I execute the commands manually it works out well and I can ping the machine, so there has to be something wrong with this code:
while [ $i -le $numVM ]
do
    if [ $i -eq 1 ]; then
        ssh -i file root@IP1 './zookeeper-3.4.6/bin/zkServer.sh start'
    else
        ssh -i file root@IP2 'sed -i '"'"'s/#\ storm.zookeeper.servers.*/storm.zookeeper.servers:/'"'"' /root/apache-storm-0.9.3/conf/storm.yaml'
        ssh -i file root@IP2 'sed -i '"'"'0,/^#[[:space:]]*-[[:space:]]*\"server1\".*/s//" - \"'${IParray[1]}'\""/'"'"' /root/apache-storm-0.9.3/conf/storm.yaml'
        ssh -i file root@IP2 'sed -i '"'"'s/#\ nimbus.host:.*/"nimbus.host: \"'${IParray[2]}'\""/'"'"' /root/apache-storm-0.9.3/conf/storm.yaml'
        ssh -i file root@IP2 './zookeeper-3.4.6/bin/zkCli.sh -server ${IParray[1]} &'
        sleep 10
        ssh -i file root@IP2 './apache-storm-0.9.3/bin/storm nimbus &' &
        sleep 10
        ssh -i file root@IP2 './apache-storm-0.9.3/bin/storm ui &' &
        sleep 10
        ssh -i file root@IP2 './apache-storm-0.9.3/bin/storm supervisor &' &
    fi
    ((i++))
done
I'm starting several processes on 2 virtual machines deployed from the same image, so they are essentially identical. The confusing part is that the first ssh command (zkServer.sh start) works fine, but when the script tries to execute the three sed ssh commands I get the error messages above. The last four ssh commands then work fine again. That doesn't make any sense to me...
Several things I can think of:
Most sshd daemons won't allow root access. Heck, many versions of Unix/Linux no longer allow root login. If you need root access, you need to use sudo.
The sshd daemon on the remote machine isn't running. Although rare, some sites may never have set it up, or may have purposely shut it off as a security measure.
Your ssh commands themselves are incorrect.
Instead of executing the ssh commands in your shell script, modify the script to just print out what it's attempting to execute. Then see if you can run the printed command outside of the shell script. This way you can determine whether the problem is the shell script or the ssh command itself.
If your ssh commands don't work from the command line either, simplify them and see if you can determine what the issue could be. You have ssh -i file root@IP2. Is this supposed to be ssh -i $file root@$IP2? (i.e., you're missing the leading sigils).
$ ssh -i file root@$IP2 ls   # Can't get simpler than this...
$ ssh -i file root@IP2       # See if you can remotely log on...
$ ssh root@IP2               # Try it without an 'identity file'
$ ssh bob@IP2                # Try it as a user other than 'root'
$ telnet IP2 22              # Is port 22 even open on the remote machine?
If these don't work, then you have some very basic issue with the setup of the remote machine's sshd.
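To do the "print instead of execute" test quickly, you can trace the script; a minimal sketch (the script name here is hypothetical):
# print each expanded command as bash executes it
bash -x ./start-storm.sh
# or temporarily prefix the ssh calls inside the script with echo
echo ssh -i file root@IP2 './zookeeper-3.4.6/bin/zkServer.sh start'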

How to scp back to local when I've already sshed into remote machine?

Often I face this situation: I sshed into a remote server and ran some programs, and I want to copy their output files back to my local machine. What I do is remember the file path on the remote machine, exit the connection, then scp user@remote:filepath .
Obviously this is not optimal. What I'm looking for is a way to let me scp file back to local machine without exiting the connection. I did some searching, almost all results are telling me how to do scp from my local machine, which I already know.
Is this possible? Better still, is it possible without needing to know the IP address of my local machine?
Given that you have an sshd running on your local machine, it's possible and you don't need to know your outgoing IP address. If SSH port forwarding is enabled, you can open a secure tunnel even when you already have an ssh connection opened, and without terminating it.
Assume you have an ssh connection to some server:
local $ ssh user@example.com
Password:
remote $ echo abc > abc.txt # now we have a file here
Now we need to copy that file back to our local machine, and for some reason we don't want to open a new connection. OK, let's get the ssh command line by pressing Enter ~C (Enter, then tilde, then capital C):
ssh> help
Commands:
-L[bind_address:]port:host:hostport Request local forward
-R[bind_address:]port:host:hostport Request remote forward
-D[bind_address:]port Request dynamic forward
-KR[bind_address:]port Cancel remote forward
That's just like the regular -L/R/D options. We'll need -R, so we hit Enter ~C again and type:
ssh> -R 127.0.0.1:2222:127.0.0.1:22
Forwarding port.
Here we forward remote server's port 2222 to local machine's port 22 (and here is where you need the local SSH server to be started on port 22; if it's listening on some other port, use it instead of 22).
Now just run scp on the remote server and copy our file to the remote server's port 2222, which is mapped to our local machine's port 22 (where our local sshd is running).
remote $ scp -P2222 abc.txt user@127.0.0.1:
user@127.0.0.1's password:
abc.txt 100% 4 0.0KB/s 00:00
We are done!
remote $ exit
logout
Connection to example.com closed.
local $ cat abc.txt
abc
Tricky, but if you really cannot just run scp from another terminal, it could help.
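If you know in advance that you'll want this, you can request the same reverse forward when you first connect, instead of using the escape sequence; a minimal sketch with the same hosts:
local $ ssh -R 2222:127.0.0.1:22 user@example.com
# then, on the remote side, exactly as above:
remote $ scp -P2222 abc.txt user@127.0.0.1: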
I found this one-liner solution on SU to be a lot more straightforward than the accepted answer. Since it uses an environment variable for the local IP address, I think it also satisfies the OP's request to not know it in advance.
Based on that, here's a bash function to "DownLoad" a file (i.e. push it from the SSH session to a set location on the local machine):
function dl(){
    scp "$1" ${SSH_CLIENT%% *}:/home/<USER>/Downloads
}
Now I can just call dl somefile.txt while SSH'd into the remote and somefile.txt appears in my local Downloads folder.
Extras:
I use RSA keys (ssh-copy-id) to get around the password prompt.
I found this trick to prevent the local bashrc from being sourced on the scp call.
Note: this requires SSH access to local machine from remote (is this often the case for anyone?)
The other answers are pretty good and most users should be able to work with them. However, I found the accepted answer a tad cumbersome and the others not flexible enough. A VPN server in between was also causing me trouble with figuring out IP addresses.
So, the workaround I use is to generate the required scp command on the remote system using the following function in my .bashrc file:
function getCopyCommand {
    echo "scp user@remote:$(pwd)/$1 ."
}
I find rsync to be more useful if the local system is almost a mirror of the remote server (including the username) and I also need to copy the directory structure.
function getCopyCommand {
    echo "rsync -rvPR user@remote:$(pwd)/$1 /"
}
The generated scp or rsync command is then simply pasted on my local terminal to retrieve the file.
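For example (the path and file name here are hypothetical):
remote $ getCopyCommand results/out.log
scp user@remote:/home/user/results/out.log .
# paste that printed line into a local terminal to fetch the file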
You would need a local ssh server running on your machine; then you can just:
scp [-r] local_content your_local_user@your_local_machine_ip:
Anyway, you don't need to close your remote connection to make a remote copy; just open another terminal and run scp there.
On your local computer:
scp root@remotemachine_name_or_IP:/complete_path_to_file /local_path

SCP command Clarification

I'm using the scp command to pull some files from a remote server, and one variation of the command is not working.
I have 2 files named one.xml and two.xml on a remote server, and I'm pulling them into the current dir using the following commands:
scp stuadmin@10.44.220.112:/student/class/Intermediate/one.xml .
scp stuadmin@10.44.220.112:/student/class/Intermediate/two.xml .
The above commands work fine, but if I use a wildcard to pull all the xml files in a single shot, as shown below, it returns scp: No match.
scp stuadmin@10.44.220.112:/student/class/Intermediate/*.xml .
Why does it work if I pull the files individually, but not if I try to pull them using a wildcard?
Yeah, this one is for Super User. The answer is that the asterisk undergoes wildcard expansion FIRST, before the command even runs; it is expanded by your shell. echo *.xml will really run echo file1.xml file2.xml, and that is what echo sees, while bash sees the *.xml.
Since you are passing multiple files and paths to scp, it gets confused when the first (or second) argument is not a host:/path.
Put echo in front of your command to see what is really being executed. You can't use wildcards on a remote host unless you escape them first, as shown below.
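Concretely, quoting or escaping the glob keeps the local shell from touching it, so the remote side expands it instead; a minimal sketch with the same paths:
scp "stuadmin@10.44.220.112:/student/class/Intermediate/*.xml" .
# or escape just the asterisk
scp stuadmin@10.44.220.112:/student/class/Intermediate/\*.xml .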
You can simplify this process a lot by tunneling ssh connections over other ssh connections (see this previous answer). The way I'd do it is to create an .ssh/config file on the LOCAL system with the following entries:
Host SYSTEM3
    ProxyCommand ssh -e none SYSTEM2 exec /usr/bin/nc %h %p 2>/dev/null
    HostName SYSTEM3.full.domain
    User system3user

Host SYSTEM2
    ProxyCommand ssh -e none SYSTEM1 exec /usr/bin/nc %h %p 2>/dev/null
    HostName SYSTEM2.full.domain
    User system2user

Host SYSTEM1
    HostName SYSTEM1.full.domain
    User system1user
(That's assuming both intermediate hosts have netcat installed as /usr/bin/nc -- if not, you may have to find or install some equivalent way of gatewaying stdin and stdout into a TCP session.)
With this set up, you can use scp SYSTEM3:/data /data on LOCAL, and it'll automatically tunnel through SYSTEM1 and SYSTEM2 (and ask for the passwords of the three SYSTEMs in order -- this can be a little confusing, especially if you mistype one).
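On OpenSSH 7.3 or newer, ProxyJump expresses the same chain without needing netcat on the intermediate hosts; a minimal sketch of the equivalent config:
Host SYSTEM3
    HostName SYSTEM3.full.domain
    User system3user
    ProxyJump SYSTEM2

Host SYSTEM2
    HostName SYSTEM2.full.domain
    User system2user
    ProxyJump SYSTEM1

Host SYSTEM1
    HostName SYSTEM1.full.domain
    User system1user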

How to purge connections left open by SSH ProxyCommand?

I have a webserver WWW1 and a front-facing proxy PRX. I use SSH ProxyCommand to connect to WWW1's internal IP (private IP) via PRX (private+public IP). For some connections (not all) I see a network connection left open after I'm finished. These add up!
~/.ssh/config
Host *
    ServerAliveInterval 5
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p

Host WWW1 WWW2 WWW3
    User foo
    ProxyCommand ssh -q -a -x PRX nc %h 22
    IdentityFile ~/.ssh/id_foo_WWWx
On PRX, lsof | grep WWW1:ssh shows 124 open connections at the moment. On WWW1, the same command shows 243 open connections. There are similar open connections for WWW2, WWW3 etc.
WWW1 and PRX are Debian. Client connections come from a mix of Debian, Ubuntu and OS X 10.6. I use Emacs Tramp, but this has no special configuration (AFAIK) outside of my ~/.ssh/config.
I'm concerned about running out of internal ports, and ideally I want these connections to clean themselves up without intervention. Ideally by configuring them to kill themselves off; failing that a command I can kill old processes with is fine!
A better way would be to use the -W option of ssh, so you could put
ProxyCommand ssh -q -a -x -W %h:22 PRX
instead of
ProxyCommand ssh -q -a -x PRX nc %h 22
(note that -W, like any ssh option, must come before the destination host). This way you get rid of the dependence on nc too.
Don't know whether it matters but I use nc -w 1 %h %p
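The lingering connections are most likely the ControlMaster master sessions themselves. One option, assuming your clients run OpenSSH 5.6 or newer, is to let idle masters expire on their own with ControlPersist; a minimal sketch:
Host *
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p
    ControlPersist 10m   # idle master connections exit after 10 minutes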

linux execute command remotely

How do I execute a command/script on a remote Linux box?
Say I want to do service tomcat start on box b from box a.
I guess ssh is the most secure way to do this, for example:
ssh -OPTIONS -p SSH_PORT user@remote_server "remote_command1; remote_command2; remote_script.sh"
where the OPTIONS are chosen according to your specific needs (for example, binding to IPv4 only) and the remote command could be starting your tomcat daemon.
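For the concrete case in the question, a minimal sketch (the boxb alias and the sudo rights are assumptions):
ssh user@boxb "sudo service tomcat start"   # user, boxb and sudo setup are placeholders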
Note:
If you do not want to be prompted at every ssh run, also have a look at ssh-agent, and optionally at keychain if your system allows it. The key is... to understand the ssh key exchange process. Take a careful look at ssh_config (the ssh client config file) and sshd_config (the ssh server config file). Configuration file locations depend on your system, but you'll find them somewhere like /etc/ssh/sshd_config. Ideally, do not run ssh as root but as a specific user on both sides, server and client.
Some extra docs beyond the main project pages:
ssh and ssh-agent
man ssh
http://www.snailbook.com/index.html
https://help.ubuntu.com/community/SSH/OpenSSH/Configuring
keychain
http://www.gentoo.org/doc/en/keychain-guide.xml
an older tutorial in French (by myself :-) but it might be useful too:
http://hornetbzz.developpez.com/tutoriels/debian/ssh/keychain/
ssh user@machine 'bash -s' < local_script.sh
or you can just
ssh user@machine "remote command to run"
If you don't want to deal with security and want to make it as exposed (aka "convenient") as possible in the short term, and/or don't have ssh/telnet or key generation on all your hosts, you can hack a one-liner together with netcat. Write a command to your target computer's port over the network and it will run it. You can then block access to that port to a few "trusted" users, or wrap it in a script that only allows certain commands to run. And use a low-privilege user.
on the server
mkfifo /tmp/netfifo; nc -lk 4201 0</tmp/netfifo | bash -e &>/tmp/netfifo
This one-liner reads whatever string you send to that port and pipes it into bash to be executed. stderr and stdout are dumped back into netfifo and sent back to the connecting host via nc.
on the client
To run a command remotely:
echo "ls" | nc HOST 4201
