ssh proxy command with netcat - linux

In an example like this:
Host destination
    ProxyCommand ssh gateway nc %h %p
Is the connection between the gateway and the destination encrypted? I am confused because I have two hypotheses, and neither is convincing:
It's not encrypted. The stdin at the source goes through the source-gateway SSH connection encrypted, and gets decrypted before being passed to nc, i.e. nc's stdin is the same as the stdin of the ssh client at the source. But %p is 22, the SSH port, which doesn't fit with this hypothesis.
It's encrypted: the sshd daemon at the gateway passes encrypted data to nc. But then, if we executed cat instead of nc, would sshd pass it encrypted data too? That doesn't sound right either.

Of course it is encrypted! Just to understand what is going on here:
[ client $ ssh destination ]
|
'-> [ gateway $ nc destination 22 ]
|
'-> [ destination $ whatever]
On the client you run just ssh destination. This is translated into ssh gateway nc destination 22.
So the first command executed is ssh gateway with a remote command: the first hop is encrypted for sure.
The nc destination 22 command then runs in this session on the gateway server. It simply redirects all its I/O to the destination host, as-is (but we are already inside an encrypted channel!).
Through that relay you do key exchange and authentication with the destination once more, and after that succeeds you will probably get a shell prompt there. So this inner session is again encrypted, end to end between the client and the destination.
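For reference, newer OpenSSH versions have built-in equivalents that avoid running nc on the gateway. A sketch for ~/.ssh/config, with the same placeholder hosts as above (use one option or the other):
Host destination
    # Option 1: ask the gateway's sshd itself to forward stdio to %h:%p
    ProxyCommand ssh gateway -W %h:%p
    # Option 2 (OpenSSH 7.3+), equivalent to the above:
    #ProxyJump gateway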


How to run ssh over an existing TCP connection

I want to be able to SSH to a number of Linux devices at once, behind different NATs. I can't configure the network that they are on. However, I'm having trouble getting ssh to go over an existing connection.
I have full control over both my client and the devices. Here's the process so far:
On my client, I first run
socat TCP-LISTEN:5001,pktinfo,fork EXEC:./create_socket.sh,fdin=3,fdout=4,nofork
Contents of ./create_socket.sh:
ssh -N -M -S "~/sockets/${SOCAT_PEERADDR}" -o "ProxyCommand=socat - FD:3!!FD:4" "root@${SOCAT_PEERADDR}"
On the device, I'm running
socat TCP:my_host:4321 TCP:localhost:22
However, nothing goes in or out of FD:3!!FD:4; I assume this is because the ProxyCommand runs as a subprocess. I've also tried setting fdin=3,fdout=3 and changing ./create_socket.sh to:
ssh -N -M -S "~/sockets/${SOCAT_PEERADDR}" -o "ProxyUseFdpass=yes" -o "ProxyCommand=echo 3" "root@${host}"
This prints an error:
mm_receive_fd: no message header
proxy dialer did not pass back a connection
I believe this is because the fd should be sent in some way using sendmsg, but the fd doesn't originate from the subprocess anyway. I'd like to make it as simple as possible, and this feels close to workable.
You want to turn the client/server model on its head and make a generic server spawn a client on demand, in response to an incoming unauthenticated TCP connection from across a network boundary, and then tell that newly spawned client to use that unauthenticated TCP session. I think that has security considerations you haven't thought of. If a malicious person spams connections to your computer, your computer will spawn a lot of SSH instances to connect back, and these processes can take up a lot of local system resources while authenticating. You're effectively trying to set up SSH to automatically connect to an untrusted (unverified) remote-initiated machine across a network boundary. I can't stress enough how dangerous that could be for your client computer. Using the wrong options could expose any credentials you have or even give a malicious person full access to your machine.
It's also worth noting that the scenario you're describing, building a tunnel between multiple devices to multiplex additional connections across an untrusted network boundary, is exactly the purpose of VPN software. Yes, SSH can build tunnels. VPN software can build tunnels better. The concept would be that you'd run a VPN server on your client machine. The VPN server would create a new (virtual) network interface which represents only your devices. The devices would connect to the VPN server and be assigned an IP address. Then, from the client machine, you'd just initiate SSH to the device's VPN address; it would be routed over the virtual network interface, arrive at the device, and be handled by its SSH daemon. Then you don't need to muck around with socat or SSH options for port forwarding. And you'd get all the tooling and tutorials that exist around VPNs. I strongly encourage you to look at VPN software.
If you really want to use SSH, then I strongly encourage you to learn about securing SSH servers. You've stated that the devices are across network boundaries (NAT) and that your client system is unprotected. I'm not going to stop you from shooting yourself in the foot, but it would be very easy to do so spectacularly in the situation you've described. If you're in a work setting, you should talk to your system administrators to discuss firewall rules, bastion hosts, stuff like that.
Yes, you can do what you've stated. I strongly advise caution, though; strongly enough that I won't suggest anything which works exactly as you've stated. I will instead suggest a variant with the same concepts but more authentication.
First, you've effectively set up your own SSH bounce server, but without any of the common tooling compatible with SSH servers. So that's the first thing I'd fix: use SSH server software to authenticate incoming tunnel requests, by using ssh client software to initiate the connection from the device instead of socat. ssh already has plenty of capabilities to create tunnels in both directions, and you get authentication bundled with it (with bare socat, there's no authentication). The devices should authenticate using encryption keys (ssh calls these identities). You'll need to connect once manually from the device to verify and authorize the remote host key fingerprint. You'll also need to copy the public key file (NOT the private key file) to your client machine and add it to your authorized_keys file. You can ask for help on that separately if you need it.
A second issue is that you appear to be using fd 3 and fd 4. I don't know why you're doing that. If anything, you should be using fd 0 and fd 1, since these are stdin and stdout, respectively. But you don't even need to do that if you're using socat to initiate a connection: just use - where stdin and stdout are meant. It is completely compatible with -o ProxyCommand without specifying any file descriptors. There's an example at the end of this answer.
The invocation from the device side might look like this (put it into a script file):
IDENTITY=/home/WavesAtParticles/.ssh/tunnel.id_rsa          # on device
REMOTE_SOCKET=/home/WavesAtParticles/.ssh/$(hostname).sock  # on client
REMOTEUSER=WavesAtParticles                                 # on client
REMOTEHOST=remotehost   # client hostname or IP address accessible from device

while true
do
    echo "$(date -Is) connecting"
    #
    # Set up your SSH tunnel. Check stderr for known issues.
    ssh \
        -i "${IDENTITY}" \
        -R "${REMOTE_SOCKET}:127.0.0.1:22" \
        -o ExitOnForwardFailure=yes \
        -o PasswordAuthentication=no \
        -o IdentitiesOnly=yes \
        -l "${REMOTEUSER}" \
        "${REMOTEHOST}" \
        "sleep inf" \
        2> >(
            read -r line
            if echo "${line}" | grep -q "Error: remote port forwarding failed"
            then
                ssh \
                    -i "${IDENTITY}" \
                    -o PasswordAuthentication=no \
                    -o IdentitiesOnly=yes \
                    -l "${REMOTEUSER}" \
                    "${REMOTEHOST}" \
                    "rm ${REMOTE_SOCKET}" \
                    2>/dev/null # convince me this is wrong
                echo "$(date -Is) removed stale socket"
            fi
            #
            # Re-print stderr to the terminal
            >&2 echo "${line}"  # the stderr line we checked
            >&2 cat -           # and any unused stderr messages
        )
    echo "disconnected"
    sleep 30
done
Remember, blindly copying and pasting shell scripts is a bad idea. At a minimum, I recommend you read man ssh and man ssh_config, and check the script against shellcheck.net. The intent of the script is:
In a loop, have your device (re)connect to your client to maintain your tunnel.
If the connection drops or fails, then reconnect every 30 seconds.
Run ssh with the following parameters:
-i "${IDENTITY}": specify a private key to use for authentication.
-R "${REMOTE_SOCKET}:127.0.0.1:22": set up a connection forwarder which accepts connections on the remote side at /home/WavesAtParticles/.ssh/$(hostname).sock, then forwards them to the local side by connecting to 127.0.0.1:22.
-o ExitOnForwardFailure=yes: if the remote side fails to set up the connection forwarder, then the local side should emit an error and die (and we check for this error in a subshell).
-o PasswordAuthentication=no: do not fall back to a password prompt, particularly since the local user isn't there to type it in.
-o IdentitiesOnly=yes: do not use any default identity nor any identity offered by any local agent. Use only the one specified by -i.
-l "${REMOTEUSER}": log in as the specified user.
remotehost: e.g. your client machine that you want a device to connect to.
Sleep forever
If the connection failed because of a stale socket, then work around the issue by:
Log in separately
Delete the (stale) socket
Print today's date indicating when it was deleted
Loop again
There's an option which is intended to make this error handling redundant: StreamLocalBindUnlink. However, the option does not work correctly and has had a bug open for years. I imagine that's because there really aren't many people who use ssh to forward over Unix domain sockets. It's annoying, but not difficult to work around.
Using a unix domain socket should limit connectivity to whoever can reach the socket file (which should be only you and root if it's placed in your ${HOME}/.ssh directory and the directory has correct permissions). I don't know if that's important for your case or not.
On the other hand, you can simplify this a lot if you're willing to open a TCP port on 127.0.0.1 for each device. But then any other user on the same system can also connect. You should specifically listen on 127.0.0.1, which only accepts connections from the same host, to prevent external machines from reaching the forwarding port. You'd change the ${REMOTE_SOCKET} variable to, for example, 127.0.0.1:4567 to listen on port 4567 and only accept local connections. You'd lose the named-socket capability and permit any other user on the client machine to connect to your device, but gain a much simpler tunnel script (because you can remove the whole bit about parsing stderr to remove a stale socket file).
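A minimal sketch of that simplified variant, reusing the placeholder names from the script above (port 4567 is an arbitrary example):
# On the device: forward the client's local port 4567 to the device's sshd.
# No stale-socket cleanup is needed for a TCP listener.
while true
do
    ssh \
        -i "${IDENTITY}" \
        -R "127.0.0.1:4567:127.0.0.1:22" \
        -o ExitOnForwardFailure=yes \
        -o PasswordAuthentication=no \
        -o IdentitiesOnly=yes \
        -l "${REMOTEUSER}" \
        "${REMOTEHOST}" \
        "sleep inf"
    sleep 30
done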
As long as your device is online (can reach your workstation's incoming port) and is running that script, and the authentication is valid, the tunnel should be online or coming online. It will take some time to recover after a loss (and restoration) of network connectivity, though. You can tune that with the ConnectTimeout, TCPKeepAlive, and ServerAliveInterval options and the sleep 30 part of the loop. You could run it in a tmux session to keep it going even when you don't have a login session running. You could also run it as a system service on the device to bring it back up after recovering from a power failure.
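For instance, the tmux variant might look like this (the script path is a placeholder for wherever you saved the loop):
# Run the tunnel script in a detached tmux session named "tunnel";
# reattach later with: tmux attach -t tunnel
tmux new-session -d -s tunnel '/usr/local/bin/tunnel.sh'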
Then from your client, you can connect in reverse:
ssh -o ProxyCommand='socat - unix-connect:/home/WavesAtParticles/remotehost.sock' -l WavesAtParticles .
In this invocation, ssh starts and runs the ProxyCommand, socat, which relays ssh's stdin/stdout through a connected AF_UNIX socket at the path provided. You'll need to adjust the path for the remote host you expect. But there's no need to specify file descriptors at all.
If ssh complains:
2019/08/26 18:09:52 socat[29914] E connect(5, AF=1 "/home/WavesAtParticles/remotehost.sock", 7): Connection refused
ssh_exchange_identification: Connection closed by remote host
then the tunnel is currently down and you should investigate the remotehost device's connectivity.
If you use the remote forwarding option with a TCP port listening instead of a unix domain socket, then the client-through-tunnel-to-remote invocation becomes even easier: ssh -p 4567 WavesAtParticles@localhost.
Again, you're trying to invert the client/server model, and I don't think that's a very good idea to do with SSH.
I'm going to try this today:
http://localhost.run/
It seems like what you are looking for.
Not to answer your question but helpful for people who may not know:
ngrok is the easiest way I've found. They do web servers as well as TCP connections. I'd recommend installing it through Homebrew.
https://ngrok.com/product
$ ngrok http 5000
In the terminal, for HTTP, with 5000 being the port of your application.
$ ngrok tcp 5000
In the terminal, for TCP.
It's free for testing (random changing domains).
For TCP connections, remove "http://" from the web address to get the IP address. Sorry, I can't remember exactly; I think the client ports to 80, and I believe you can change that by adding port 5001 or something. Google it to double-check.

keychain ssh-agent incorrect informations

I'm using keychain, which manages my key(s) with ssh-agent perfectly.
I want to check the state of ssh-agent on each Linux host. I tried with:
ssh-add -l
1024 f7:51:28:ea:98:45:XX:XX:XX:XX:XX:XX /root/.ssh/id_rsa (RSA)
Locally, this command works and the output is correct.
But with a remote SSH command, for some reason the result is not the same:
## host1, locally
ssh-add -l
1024 f7:51:28:ea:98:45:XX:XX:XX:XX:XX:XX /root/.ssh/id_rsa (RSA)
## host2, command to host1:
ssh host1 "ssh-add -l"
Could not open a connection to your authentication agent.
Could someone explain this to me? It's confusing, because I want to monitor ssh-agent state.
Thanks.
EDIT: Even with SSH agent forwarding enabled, the remote command returns only the local state of the agent. Other remote commands work, with or without a key loaded.
## Host1, locally
ssh-add -l
1024 f7:51:28:ea:98:45:XX:XX:XX:XX:XX:XX /root/.ssh/id_rsa (RSA)
## From Host2, locally and distant :
ssh-add -l
The agent has no identities.
ssh -A host1 "ssh-add -l"
The agent has no identities.
You seem to misunderstand how keychain/ssh-agent work. When you log onto a system (we'll call it 'home'), it starts a program, the agent. As part of starting, this program exposes a socket file that ssh-add can connect to. When the name of this socket is stored in the SSH_AUTH_SOCK environment variable, it becomes accessible to ssh and to ssh-add when they are run with this environment variable set appropriately.
When you ssh to a remote system, if the ForwardAgent property is set to true in your configuration, a channel is established allowing this key information to pass from your 'home' system to the system that you've ssh'd to. To expose this key information, another SSH_AUTH_SOCK variable is set on the remote system. So now we have:
# home system (host1)
host1$ ssh-add -l
1024 BLAH....
host1$ echo $SSH_AUTH_SOCK
/tmp/ssh-YJNLu2LFMKbO/agent.1472
host1$ ssh -A host2
host2$ ssh-add -l
1024 BLAH
host2$ echo $SSH_AUTH_SOCK
/tmp/ssh-vqdu733feY/agent.23877
host2$ ssh -A host1
host1$ ssh-add -l
1024 BLAH
host1$ echo $SSH_AUTH_SOCK
/tmp/ssh-fuKgOaaQ7b/agent.23951
So with every connection, a socket is created on the remote system to ferry the key data back through the chain to the original 'home' system. In this example:
ssh-agent(on host1) makes a SOCKET -> ssh(host2) [provides a SOCKET connecting back to the SOCKET on host1] -> ssh(host1) [provides a SOCKET connecting back to the socket on host2]
So once you connect to the remote system, ssh provides a socket that connects back to the socket on the system the connection came from.
If you log onto a system directly (e.g. logging onto host2 at the console), then there is absolutely no connection back to host1. If an agent is started on host2, then it provides its own private socket, and that is what you are communicating with.
Where things can go wrong:
You've enabled agent forwarding on your connection from host1 -> host2; however, the script that runs at login on host2 ignores the presence of this socket and starts its own private agent on host2. As a result, when you ask for the list of registered keys using ssh-add -l, it talks to the socket provided by the agent running on host2. This agent does not have access to the keys from host1, because it ignored the forwarded socket.
Agent forwarding can be disabled in sshd_config, which means the server administrator has configured the system to prevent people forwarding their agent information into it (if there's an AllowAgentForwarding no line in sshd_config, this is the case).
The first case occurs when a badly written login script ignores the presence of the variable, i.e. it doesn't detect that the connection is remote and that a socket is being forwarded. This is rare, but it can happen. If it does, the script needs to be rewritten to detect this case.
If the administrator of the remote system has disabled agent forwarding, then you need to ask for it to be enabled.
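A minimal sketch of such a guard for the login script, assuming keychain is what starts the agent (adjust the key name to yours):
# Only start a private agent when no forwarded agent socket already exists
if [ -z "${SSH_AUTH_SOCK}" ]; then
    eval "$(keychain --eval --agents ssh id_rsa)"
fi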

How to scp back to local when I've already sshed into remote machine?

Often I face this situation: I sshed into a remote server and ran some programs, and I want to copy their output files back to my local machine. What I do is remember the file path on the remote machine, exit the connection, then scp user@remote:filepath .
Obviously this is not optimal. What I'm looking for is a way to let me scp file back to local machine without exiting the connection. I did some searching, almost all results are telling me how to do scp from my local machine, which I already know.
Is this possible? Better still, is it possible without needing to know the IP address of my local machine?
Given that you have an sshd running on your local machine, it's possible, and you don't need to know your outgoing IP address. If SSH port forwarding is enabled, you can open a secure tunnel even when you already have an ssh connection open, without terminating it.
Assume you have an ssh connection to some server:
local $ ssh user@example.com
Password:
remote $ echo abc > abc.txt # now we have a file here
OK, now we need to copy that file back to our local machine, and for some reason we don't want to open a new connection. Let's get the ssh command line by pressing Enter ~C (Enter, then tilde, then capital C):
ssh> help
Commands:
-L[bind_address:]port:host:hostport Request local forward
-R[bind_address:]port:host:hostport Request remote forward
-D[bind_address:]port Request dynamic forward
-KR[bind_address:]port Cancel remote forward
Those are just like the regular -L/-R/-D options. We'll need -R, so we hit Enter ~C again and type:
ssh> -R 127.0.0.1:2222:127.0.0.1:22
Forwarding port.
Here we forward the remote server's port 2222 to the local machine's port 22 (and this is where you need the local SSH server to be running on port 22; if it's listening on some other port, use that instead of 22).
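If you know ahead of time that you'll want this, you can request the same reverse forward when you first connect, instead of using the ~C escape. A sketch with the same example ports:
# Ask up front for remote port 2222 to be forwarded to local port 22
local $ ssh -R 127.0.0.1:2222:127.0.0.1:22 user@example.com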
Now just run scp on the remote server and copy our file to the remote server's port 2222, which is mapped to our local machine's port 22 (where our local sshd is running).
remote $ scp -P2222 abc.txt user@127.0.0.1:
user@127.0.0.1's password:
abc.txt 100% 4 0.0KB/s 00:00
We are done!
remote $ exit
logout
Connection to example.com closed.
local $ cat abc.txt
abc
Tricky, but if you really cannot just run scp from another terminal, it could help.
I found this one-liner solution on SU to be a lot more straightforward than the accepted answer. Since it uses an environment variable for the local IP address, I think it also satisfies the OP's request to not know it in advance.
Based on that, here's a bash function to "DownLoad" a file (i.e. push it from the SSH session to a set location on the local machine):
function dl(){
    scp "$1" ${SSH_CLIENT%% *}:/home/<USER>/Downloads
}
Now I can just call dl somefile.txt while SSH'd into the remote and somefile.txt appears in my local Downloads folder.
extras:
I use RSA keys (ssh-copy-id) to get around the password prompt
I found this trick to prevent the local bashrc from being sourced on the scp call
Note: this requires SSH access to the local machine from the remote (is this often the case for anyone?)
The other answers are pretty good and most users should be able to work with them. However, I found the accepted answer a tad cumbersome and the others not flexible enough. A VPN server in between was also causing me trouble with figuring out IP addresses.
So, the workaround I use is to generate the required scp command on the remote system using the following function in my .bashrc file:
function getCopyCommand {
    echo "scp user@remote:$(pwd)/$1 ."
}
I find rsync to be more useful if the local system is almost a mirror of the remote server (including the username) and I also need to copy the directory structure.
function getCopyCommand {
    echo "rsync -rvPR user@remote:$(pwd)/$1 /"
}
The generated scp or rsync command is then simply pasted into my local terminal to retrieve the file.
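Usage might look like this (the paths are made up for illustration):
remote $ getCopyCommand output.log
scp user@remote:/home/user/project/output.log .
Then paste that scp line into a local terminal.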
You would need a local ssh server running on your machine; then you can just:
scp [-r] local_content your_local_user@your_local_machine_ip:
Anyway, you don't need to close your remote connection to make a remote copy, just open another terminal and run scp there.
On your local computer:
scp root@remotemachine_name_or_IP:/complete_path_to_file /local_path

linux execute command remotely

How do I execute a command/script on a remote Linux box?
Say I want to do service tomcat start on box b from box a.
I guess ssh is the best secured way for this, for example :
ssh -OPTIONS -p SSH_PORT user@remote_server "remote_command1; remote_command2; remote_script.sh"
where the OPTIONS are set according to your specific needs (for example, binding to IPv4 only) and your remote command could be starting your Tomcat daemon.
Note:
If you do not want to be prompted for a password at every ssh run, also have a look at ssh-agent, and optionally at keychain if your system allows it. The key is to understand the SSH key exchange process. Take a careful look at ssh_config (the ssh client config file) and sshd_config (the ssh server config file). Configuration file locations depend on your system; you'll find them somewhere like /etc/ssh/sshd_config. Ideally, do not run ssh as root, obviously, but as a specific user on both sides, server and client.
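For example, a minimal ssh-agent session might look like this (the key path is the common default; adjust to yours):
# Start an agent for this shell and load a key once; later ssh runs
# in this shell won't prompt for the passphrase again.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
ssh user@remote_server "service tomcat start"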
Some extra docs over the source project main pages :
ssh and ssh-agent
man ssh
http://www.snailbook.com/index.html
https://help.ubuntu.com/community/SSH/OpenSSH/Configuring
keychain
http://www.gentoo.org/doc/en/keychain-guide.xml
an older tutorial in French (by myself :-) but it might be useful too:
http://hornetbzz.developpez.com/tutoriels/debian/ssh/keychain/
ssh user@machine 'bash -s' < local_script.sh
or you can just
ssh user@machine "remote command to run"
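If the local script takes arguments, bash -s passes anything that follows it on as positional parameters. A small sketch:
# Inside local_script.sh, $1 and $2 will be arg1 and arg2
ssh user@machine 'bash -s' arg1 arg2 < local_script.sh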
If you don't want to deal with security and want to make it as exposed (aka "convenient") as possible for the short term, and/or don't have ssh/telnet or key generation on all your hosts, you can hack a one-liner together with netcat. Write a command to your target computer's port over the network and it will run it. Then you can block access to that port to a few "trusted" users, or wrap it in a script that only allows certain commands to run. And use a low-privilege user.
on the server
mkfifo /tmp/netfifo; nc -lk 4201 0</tmp/netfifo | bash -e &>/tmp/netfifo
This one-liner reads whatever string you send to that port and pipes it into bash to be executed. stderr and stdout are dumped back into netfifo and sent back to the connecting host via nc.
on the client
To run a command remotely:
echo "ls" | nc HOST 4201
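A hedged sketch of the "script that only allows certain commands" idea mentioned above, used in place of bash -e in the server one-liner (the allowed list is just an example):
#!/bin/sh
# allowed.sh: run only whitelisted commands, read line by line from stdin
while read -r cmd; do
    case "$cmd" in
        ls|uptime|"df -h") $cmd ;;
        *) echo "command not allowed: $cmd" ;;
    esac
done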

Loop exits after first iteration

cat hosts.txt | while read h; do telnet $h; done
When I run this, it telnets to the first host, which refuses the connection, but then it exits instead of looping over the other hosts in the file.
Why is it exiting the loop after the first host and how can I fix it?
Thanks.
That works just fine in my bash:
$ cat hosts.txt
machine1
machine3
$ cat hosts.txt | while read h; do telnet $h; done
Trying 9.190.123.47...
telnet: Unable to connect to remote host: Connection refused
Trying 9.190.123.61...
telnet: Unable to connect to remote host: Connection refused
However, when I connect to a machine that doesn't refuse the connection, I get:
$ cat hosts.txt
machine2
machine1
machine3
$ cat hosts.txt | while read h; do telnet $h; done
Trying 9.190.123.48...
Connected to machine2.
Escape character is '^]'.
Connection closed by foreign host.
That's because I'm actually connecting successfully, and all the other host names are being sent to the telnet session. These are no doubt being used as login attempts; the remaining host names are invalid as user/password, and the session is closed because of that.
If you just want to log in interactively to each system, you can use:
for h in $(cat hosts.txt); do telnet $h 1023; done
which will not feed the rest of the host names into the first successful session.
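Alternatively, you can keep the while loop but read the host list on a separate file descriptor, so that telnet's stdin stays attached to the terminal. A sketch:
# Read hosts on fd 3; telnet still gets the terminal as its stdin
while read -r h <&3; do
    telnet "$h"
done 3< hosts.txt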
If you want to truly automate a telnet session, you should look into a tool such as expect or using one of the remoting UNIX tools such as rsh.
telnet is interactive; what are you actually trying to do? If you want to automate a "telnet" session, you can use SSH:
while read -r host
do
    # -n keeps ssh from swallowing the rest of hosts.txt as its stdin
    ssh -n <options> "$host" <commands>
done <"hosts.txt"
As mentioned previously, telnet is an interactive program that expects input.
At a guess, all hosts after the first are consumed as input by the first telnet.
It is not clear what your script is trying to do. Perhaps you need to be clearer about what you are trying to achieve.
