Reverse ssh tunnel fails to bind to port when tunnel is torn down and restarted - linux

I have a host that starts a reverse ssh tunnel upon bootup like this:
ssh -N -R 2222:localhost:22 root@10.1.2.6
It works great and the reverse tunnel is formed. But whenever I reboot the host, the remote server that the tunnel is built to says this:
Sep 28 13:13:59 kali sshd[4547]: error: bind: Address already in use
Sep 28 13:13:59 kali sshd[4547]: error: channel_setup_fwd_listener_tcpip: cannot listen to port: 2222
To resolve this, I have to wait a few minutes for the old ssh tunnel to time out, then find the new ssh connection and kill it; when I rebuild the ssh tunnel after that, it works fine.
Is there an ssh or autossh option that checks whether the remote host can bind that port and, if not, retries a few seconds later?

I believe I have run into the same issue as the original poster. I seem to have found the solution at the end of the accepted answer of this question:
If the client reconnects before the connection has terminated on the server, you can end up in a situation where the new ssh connection is live but has no port forwardings. To avoid that, you need to use the ExitOnForwardFailure keyword on the client side.
I have thus added the following line to my /etc/ssh/ssh_config file at the client side:
ExitOnForwardFailure yes
According to the ssh man page, this option will cause "a client started with -f [to] wait for all remote port forwards to be successfully established before placing itself in the background".
With this option set, ssh exits with an error when it tries to start a tunnel immediately after killing one while the remote port is still bound. That makes it possible to simply repeat the attempt until the tunnel is correctly re-established.
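Combining this with a simple retry loop on the client gives automatic recovery. A minimal sketch using the host and port from the question (the loop itself is my addition, not part of the original answer):
until ssh -N -o ExitOnForwardFailure=yes -R 2222:localhost:22 root@10.1.2.6; do
    sleep 5   # remote port still bound or connection dropped; retry shortly
done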

autossh tunnel hangs because of "Address already in use" regardless of all timeouts

I use autossh to create a remote tunnel with the following command (IPs and port changed):
autossh -M 0 -o "ServerAliveInterval 5" -o "ServerAliveCountMax 3" -f -T -N -i /root/.ssh/id_rsa -R 1602:localhost:443 root@123.123.123.123
And the server has this config in sshd:
GatewayPorts yes
ClientAliveInterval 10
ClientAliveCountMax 6
Most of the time this works like a charm; timeouts and disconnects are handled very well.
But there is one exception:
If there is only a very short interruption of the network connection, the client notices it and starts a reconnect, but the server has not noticed the interruption yet and still holds port 1602. I can then see this message in the server log: sshd[431646]: error: bind [::]:1602: Address already in use.
But autossh does not hang up and try again; it keeps the non-working tunnel open. A few seconds later, the server recognises the disconnect of the old tunnel and frees port 1602.
Now I have an autossh/ssh tunnel that does all the watchdog work (I can see the keep-alive messages in the log every 5 seconds) and stays alive. The port on the server is now unused, and the tunnel is not working because the port is not bound at all.
Autossh does not recover from this state without manual intervention. There are multiple ways to recover manually, but that is not the question.
My questions are:
Why does autossh not hang up and retry if the port is in use? (That would solve the issue.)
Or
How can I force the port to be freed and rebind the new tunnel on reconnect?
Or
How can I detect tunnels that no longer have a port bound to them, in order to kill them (for example, every minute from a cron job)?
I'm searching for a way to automatically recover from this state, and I wonder why this race condition is not mentioned anywhere on the internet, even though it can be reproduced easily.
You need to add -o "ExitOnForwardFailure yes" as an autossh option.
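With that option added, the command from the question would look like this (same placeholder IP and key path as above):
autossh -M 0 -o "ExitOnForwardFailure yes" -o "ServerAliveInterval 5" -o "ServerAliveCountMax 3" -f -T -N -i /root/.ssh/id_rsa -R 1602:localhost:443 root@123.123.123.123
With ExitOnForwardFailure set, ssh exits as soon as the remote bind fails instead of keeping a forwarding-less connection open, and autossh then keeps restarting it until the server has freed port 1602.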

SSH tunnel always trying port 22

I want to create an ssh tunnel between my local machine and a remote server, so I use this command on my local machine:
sudo ssh -R 443:localhost:443 SERVER_IP
Everything works: I can connect to my local machine through the remote server using port 443.
The problem is that sometimes it just doesn't work and I get this message:
connect to host SERVER_IP port 22: Connection refused
The strange thing is that the connection to port 22 does work on the remote (I can connect there without a problem at that exact moment); it is just that sometimes the tunnel command works and sometimes it does not. Do you have any idea why? Or do you know what is going on?
ssh runs on port 22 by default. While your command sets up a forward that passes port 443 on the remote host to port 443 on your local machine, the underlying ssh connection itself still runs on port 22.
Connection refused means that the target host SERVER_IP is not running an sshd daemon and/or is not listening on port 22. You will need to figure out and fix whatever is wrong with the SERVER_IP machine.
22 is the default port; the ssh client will connect to it unless you specify another port using -p, for example:
ssh -R 12345:localhost:12345 SERVER_IP -p 443
The error you have is not about the tunnel but about the server's port.
You should check that the server is indeed started and listening on port 22 and there's no firewall in the way.
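A quick way to test that from the client side (assuming netcat is installed; SERVER_IP is the same placeholder as above):
nc -zv SERVER_IP 22    # succeeds only if something is accepting connections on port 22
If this fails at the same moments the tunnel command fails, the problem is reachability of port 22 (sshd down, or a firewall in the way), not the tunnel itself.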

Listening port putty tunnel does not work

The goal is to connect to my home computer from outside. The ISP blocks all the ports (and demands $$$ for a business package with a static IP address), so simple port forwarding on the home router does not work.
I have used PuTTY to tunnel a listening port to a remote server: R2221:###.###.###.###:2221. (To make things simpler, the test server is a simple FTP server running on my home Windows machine. The entire IP address has to be specified: with OpenSSH 1.0 running on the Linux box, a wildcard address results in refusal of the connection. GatewayPorts is set to on.)
netstat -a confirms that port 2221 on the Linux box is open and listening. However, whenever I try to connect to that port, it simply hangs. The command-line FTP client says "connected to ###.###.###.###" and nothing more. Running netstat again shows dozens of open connections to port 2221, all coming from my Windows box (I tried a browser as well as the command-line FTP client).
Which step am I missing?
Tried with RDP, VNC and FTP: all of them hang, although all of them connect fine through my home network (or my home router).
EDIT The setup is as follows:
PC 1 (Windows) has an FTP service running on port 2221. It uses PuTTY to tunnel a listening port to PC 2 (Linux). PC 2 does show the listening port when running netstat. Connecting to port 2221 on PC 2, either from PC 2 or from PC 3, results in hanging.
EDIT 2 Aaaand it worked. Using 127.0.0.1 instead of the remote machine's ip address was what corrected it. Thanks a lot.
Are you sure your -R command is correct? From what you say, I suppose the command should be R2221:127.0.0.1:2221. The -R ssh option, in the form port:host:hostport, does the following: it opens port port on the remote side and forwards any connection to that port to the address host:hostport, and this connection is made from the local machine.
To make your local machine (the one that is running ssh client, e.g. PuTTY) connect to your local FTP server running on the same machine, use 127.0.0.1 as an address.
This also explains the strange behaviour you saw: when you connect to xxx.xxx.xxx.xxx:2221, the connection is forwarded to the same address xxx.xxx.xxx.xxx:2221, and you end up with a kind of loop.
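For comparison, the equivalent OpenSSH client command for this setup would be something like the following (user and host name are placeholders):
ssh -R 2221:127.0.0.1:2221 user@linux-box
The listening port lives on the remote (Linux) side, and each incoming connection is forwarded back to the FTP server running on the Windows machine itself.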

offering mysql on localhost via a ssh layer

I have two machines: one has a mysql server that runs on localhost; the second one has no mysql server. I want to access the mysql server of the first machine on the second machine, also on localhost. It should be something like a virtual localhost.
The first machine should log in to the second machine via a secure socket and somehow emulate the server there.
Is something like this possible? What is it called, and how does it work?
Is this what is called a tunnel?
Yes, this is what is called a tunnel.
Assuming host A is running the mysql server and host B is the one that does not.
To create the tunnel enter the following on host B:
ssh -L 3306:localhost:3306 username@A
(Add -f -N to the command to not execute any command on the remote host and immediately background the ssh connection).
This creates a listening port 3306 on host B which is forwarded over the ssh tunnel to localhost:3306 on host A.
Now just run mysql on host B and you should be able to connect to the mysql server on host A.
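For example, a connection attempt on host B might look like this (credentials are placeholders; -h 127.0.0.1 forces a TCP connection through the tunnel rather than the local unix socket):
mysql -h 127.0.0.1 -P 3306 -u dbuser -p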
Hope it helps!

SSH Agent forward specific keys rather than all registered ssh keys

I am using agent forwarding and it works fine, but the ssh client shares all registered (ssh-add) keys with the remote server. I have personal keys that I don't want to share with the remote server. Is there a way to restrict which keys are forwarded?
I have multiple github accounts and aws accounts. I don't want to share all the ssh-keys.
It looks like this is possible with OpenSSH 6.7: it supports unix socket forwarding. We could start a secondary ssh-agent with specific keys and forward its socket to the remote host. Unfortunately, this version is not available for my server/client systems at the time of writing.
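For readers on newer systems, that approach might look roughly like this. This is a sketch under the assumption of OpenSSH 6.7 or later on both ends; the socket paths and key name are illustrative, not taken from the original answer:
ssh-agent -a /tmp/fwd-agent.sock                            # secondary agent on a known socket
SSH_AUTH_SOCK=/tmp/fwd-agent.sock ssh-add ~/.ssh/id_shared  # add only the keys to expose
ssh -R /home/user/agent-fwd.sock:/tmp/fwd-agent.sock user@remote
# then, on the remote host: export SSH_AUTH_SOCK=/home/user/agent-fwd.sock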
I have found another possible solution, using socat and standard SSH TCP forwarding.
Idea
On the local host we run a secondary ssh-agent with only the keys we want to see on the remote host.
On the local host we set up forwarding of TCP connections on some port (portXXX) to the secondary ssh-agent's socket.
On the remote host we set up forwarding from some socket to some TCP port (portYYY).
Then we establish an ssh connection with port forwarding from the remote's portYYY to the local portXXX.
Requests to ssh agent go like this:
local ssh-agent (secondary)
^
|
v
/tmp/ssh-.../agent.ZZZZZ - agent's socket
^
| (socat local)
v
localhost:portXXX
^
| (ssh port forwarding)
v
remote's localhost:portYYY
^
| (socat remote)
v
$HOME/tmp/agent.socket
^
| (requests for auth via agent)
v
SSH_AUTH_SOCK=$HOME/tmp/agent.socket
^
| (uses SSH_AUTH_SOCK variable to find agent socket)
v
ssh
Drawbacks
It is not completely secure, because the ssh-agent becomes partially available over TCP: users of the remote host can connect to your local agent on 127.0.0.1:portYYY, and other users of your local host can connect on 127.0.0.1:portXXX. But they will only see the limited set of keys you manually added to this agent. And, as AllenLuce mentioned, they can't grab the keys; they can only use the agent for authentication while it is running.
socat must be installed on the remote host. But it looks like it is possible to simply upload a precompiled binary (I tested it on FreeBSD and it works).
No automation: keys must be added manually via ssh-add, the forwarding requires 2 extra processes (socat) to be run, and multiple ssh connections must be managed manually.
So, this answer is probably just a proof of concept and not a production solution.
Let's see how it can be done.
Instructions
Client side (where ssh-agent is running)
Run a new ssh-agent. It will be used only for the keys you want to see on the remote host.
$ ssh-agent # below is ssh-agent output, DO NOT ACTUALLY RUN THESE COMMANDS BELOW
SSH_AUTH_SOCK=/tmp/ssh-qVnT0UsgV6yO/agent.22982; export SSH_AUTH_SOCK;
SSH_AGENT_PID=22983; export SSH_AGENT_PID;
It prints some variables. Do not set them: you would lose your main ssh agent. Instead, set another variable with the suggested value of SSH_AUTH_SOCK:
SSH_AUTH_SECONDARY_SOCK=/tmp/ssh-qVnT0UsgV6yO/agent.22982
Then establish forwarding from some TCP port to our ssh-agent socket locally:
PORT=9898
socat TCP4-LISTEN:$PORT,bind=127.0.0.1,fork UNIX-CONNECT:$SSH_AUTH_SECONDARY_SOCK &
socat will run in the background. Do not forget to kill it when you're done.
Add some keys using ssh-add, but run it with the modified environment variable SSH_AUTH_SOCK:
SSH_AUTH_SOCK=$SSH_AUTH_SECONDARY_SOCK ssh-add
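To check that only the intended keys ended up in the secondary agent (ssh-add -l lists the keys an agent currently holds):
SSH_AUTH_SOCK=$SSH_AUTH_SECONDARY_SOCK ssh-add -l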
Server side (remote host)
Connect to the remote host with port forwarding. Your main (not secondary) ssh agent will be used for authentication on hostA (but will not be available from it, as we do not forward it).
home-host$ PORT=9898 # same port as above
home-host$ ssh -R $PORT:localhost:$PORT userA@hostA
On the remote host, establish forwarding from an ssh-agent socket to the same TCP port as on your home host:
remote-host$ PORT=9898 # same port as on home host
remote-host$ mkdir -p $HOME/tmp
remote-host$ SOCKET=$HOME/tmp/ssh-agent.socket
remote-host$ socat UNIX-LISTEN:$SOCKET,fork TCP4:localhost:$PORT &
socat will run in the background. Do not forget to kill it when you're done; it does not exit automatically when you close the ssh connection.
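When you are done, one way to clean it up (assuming $SOCKET is still set in that shell; pkill -f matches against the full command line):
remote-host$ pkill -f "UNIX-LISTEN:$SOCKET"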
Connection
On the remote host, set the environment variable so that ssh knows where the agent socket (from the previous step) is. This can be done in the same ssh session or in a parallel one.
remote-host$ export SSH_AUTH_SOCK=$HOME/tmp/ssh-agent.socket
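As a quick sanity check, listing the agent's keys through the forwarded socket should show exactly the keys added to the secondary agent earlier:
remote-host$ ssh-add -l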
Now it is possible to use the secondary agent's keys on the remote host:
remote-host$ ssh userB@hostB # uses the secondary ssh agent
Welcome to hostB!
The keys themselves are not shared by forwarding your agent. What's forwarded is the ability to contact the ssh-agent on your local host. Remote systems send challenge requests through the forwarding tunnel. They do not request the keys themselves.
See http://www.unixwiz.net/techtips/ssh-agent-forwarding.html#fwd for a graphical explanation.
