SSH Agent forward specific keys rather than all registered ssh keys - linux

I am using agent forwarding and it works fine, but the ssh client shares all registered (ssh-add) keys with the remote server. I have personal keys that I don't want to expose to the remote server. Is there a way to restrict which keys are forwarded?
I have multiple GitHub and AWS accounts and I don't want to share all of the ssh keys.

It looks like this is possible with OpenSSH 6.7, which supports unix socket forwarding: we could start a secondary ssh-agent with only specific keys and forward its socket to the remote host. Unfortunately that version is not available for my server/client systems at the time of writing.
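For reference, with OpenSSH 6.7 or newer on both ends the whole recipe below collapses into a single unix-socket forward. A rough sketch, with illustrative paths and key names (run it in a separate shell so the eval does not overwrite your main agent's SSH_AUTH_SOCK):
eval $(ssh-agent)                  # secondary agent for this shell only
ssh-add ~/.ssh/id_work             # add only the key you want to expose
ssh -R /home/user/agent-fwd.sock:$SSH_AUTH_SOCK user@remote-host
# then, on the remote host:
export SSH_AUTH_SOCK=/home/user/agent-fwd.sock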
I have found another possible solution, using socat and standard SSH TCP forwarding.
Idea
On the local host we run a secondary ssh-agent holding only the keys we want to expose on the remote host.
On the local host we set up forwarding of TCP connections on some port (portXXX) to the secondary ssh-agent's socket.
On the remote host we set up forwarding from a unix socket to some TCP port (portYYY).
Then we establish an ssh connection with port forwarding from the remote's portYYY to the local portXXX.
Requests to the ssh agent go like this:
local ssh-agent (secondary)
^
|
v
/tmp/ssh-.../agent.ZZZZZ - agent's socket
^
| (socat local)
v
localhost:portXXX
^
| (ssh port forwarding)
v
remote's localhost:portYYY
^
| (socat remote)
v
$HOME/tmp/agent.socket
^
| (requests for auth via agent)
v
SSH_AUTH_SOCK=$HOME/tmp/agent.socket
^
| (uses SSH_AUTH_SOCK variable to find agent socket)
v
ssh
Drawbacks
It is not completely secure, because the ssh-agent becomes partially exposed over TCP: users of the remote host can connect to your local agent on 127.0.0.1:portYYY, and other users of your local host can connect on 127.0.0.1:portXXX. However, they will only see the limited set of keys you manually added to this agent. And, as AllenLuce mentioned, they cannot grab the keys; they can only use them for authentication while the agent is running.
socat must be installed on the remote host, but it looks like it is possible to simply upload a precompiled binary (I tested this on FreeBSD and it works).
No automation: keys must be added manually via ssh-add, the forwarding requires two extra socat processes to be running, and multiple ssh connections must be managed manually.
So, this answer is probably just a proof of concept and not a production solution.
Let's see how it can be done.
Instructions
Client side (where ssh-agent is running)
Run a new ssh-agent. It will be used only for the keys you want to expose to the remote host.
$ ssh-agent # below is ssh-agent output, DO NOT ACTUALLY RUN THESE COMMANDS BELOW
SSH_AUTH_SOCK=/tmp/ssh-qVnT0UsgV6yO/agent.22982; export SSH_AUTH_SOCK;
SSH_AGENT_PID=22983; export SSH_AGENT_PID;
It prints some variables. Do not set them, or you will lose your main ssh agent. Instead, set a different variable to the suggested value of SSH_AUTH_SOCK:
SSH_AUTH_SECONDARY_SOCK=/tmp/ssh-qVnT0UsgV6yO/agent.22982
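If you prefer not to copy the path by hand, you could instead have started the secondary agent and captured its socket path in one step (a convenience sketch, not part of the original recipe; note that the agent's PID is not captured this way):
SSH_AUTH_SECONDARY_SOCK=$(ssh-agent -s | grep -o '/tmp/ssh-[^;]*')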
Then establish forwarding from some TCP port to our ssh-agent socket locally:
PORT=9898
socat TCP4-LISTEN:$PORT,bind=127.0.0.1,fork UNIX-CONNECT:$SSH_AUTH_SECONDARY_SOCK &
socat will run in the background. Do not forget to kill it when you're done.
Add some keys using ssh-add, but run it with the SSH_AUTH_SOCK environment variable pointing at the secondary agent:
SSH_AUTH_SOCK=$SSH_AUTH_SECONDARY_SOCK ssh-add
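For example, to add one specific key and confirm what the secondary agent now holds (the key path is illustrative):
SSH_AUTH_SOCK=$SSH_AUTH_SECONDARY_SOCK ssh-add ~/.ssh/id_github_work
SSH_AUTH_SOCK=$SSH_AUTH_SECONDARY_SOCK ssh-add -l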
Server side (remote host)
Connect to the remote host with port forwarding. Your main (not secondary) ssh agent will be used for authentication on hostA (but will not be available from hostA, as we do not forward it).
home-host$ PORT=9898 # same port as above
home-host$ ssh -R $PORT:localhost:$PORT userA@hostA
On the remote host, establish forwarding from an agent socket to the same TCP port as on your home host:
remote-host$ PORT=9898 # same port as on home host
remote-host$ mkdir -p $HOME/tmp
remote-host$ SOCKET=$HOME/tmp/ssh-agent.socket
remote-host$ socat UNIX-LISTEN:$SOCKET,fork TCP4:localhost:$PORT &
socat will run in the background. Do not forget to kill it when you're done; it does not exit automatically when you close the ssh connection.
Connection
On the remote host, set the environment variable so that ssh knows where the agent socket (from the previous step) is. This can be done in the same ssh session or in a parallel one.
remote-host$ export SSH_AUTH_SOCK=$HOME/tmp/ssh-agent.socket
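You can verify the whole chain by listing the keys visible through the forwarded socket; only the keys added to the secondary agent should appear:
remote-host$ ssh-add -l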
Now it is possible to use the secondary agent's keys on the remote host:
remote-host$ ssh userB@hostB # uses secondary ssh agent
Welcome to hostB!

The keys themselves are not shared by forwarding your agent. What's forwarded is the ability to contact the ssh-agent on your local host. Remote systems send challenge requests through the forwarding tunnel. They do not request the keys themselves.
See http://www.unixwiz.net/techtips/ssh-agent-forwarding.html#fwd for a graphical explanation.

Related

How to check that a linux ssh tunnel is really successfully active?

I made some tests with the verbose argument while initializing ssh tunnels,
with a GOOD and a WRONG destination address,
but I didn't see a difference between good and bad ssh tunnel initialization.
When I launch my ssh tunnel with a reachable ip address, like this:
ssh -L 3338:<reachable-ip-adress>:4000 my-user@bastion1.amfinesoft.net
verbose mode returns:
debug1: Requesting forwarding of local forward LOCALHOST:3338 -> ip-adress:4000
AND when I launch my ssh tunnel with an UNREACHABLE ip address, like this:
ssh -L 3339:<unreachable-ip-adress>:4001 my-user@bastion1.amfinesoft.net
verbose mode returns the same output!
debug1: Requesting forwarding of local forward LOCALHOST:3339 -> ip-adress:4001
In the first test, I know my ssh tunnel is correctly initialized, but not in the second test.
So, my question is: how can I check, on my bastion1 machine or on my localhost machine, that the desired ssh tunnel has been correctly initialized?
The tunnel was set up correctly in both cases, since the tunnel exists only between your local system and the system you ssh to (bastion1).
Setting up a tunnel does not check whether packets can actually be forwarded, as ssh has no knowledge of the protocol inside the tunnel or of whether the forwarding target is reachable. Only when you send a packet through the tunnel and the sshd on bastion1.amfinesoft.net tries to forward it to the unreachable IP address will you be able to see whether or not your target address is reachable.
So to check whether your tunnel is working end to end, you need to test whether the target system can be reached through it, using the protocol you are tunnelling.
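For example (a sketch, assuming the forwarded service speaks HTTP), send real traffic through the local end of the tunnel and watch ssh's verbose output:
# keep the tunnel open with verbose output in one terminal
ssh -v -L 3339:<unreachable-ip-adress>:4001 my-user@bastion1.amfinesoft.net
# in another terminal, send a request through the local end of the tunnel
curl -v http://localhost:3339/
# if the target cannot be reached, the ssh -v output shows something like
# "channel 3: open failed: connect failed: ..." and the curl request fails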

SSH Tunnel to Ngrok and Initiate RDP

I am trying to access my Linux machine from anywhere in the world. I originally tried port forwarding and then ssh'ing in; however, I believe my school's WiFi won't allow port forwarding (every time I tried it, I got connection refused). I have set up an account with ngrok and I can SSH in remotely, but now I am wondering whether it is possible to RDP. I tried connecting via the Microsoft Remote Desktop app on Mac, but it instantly crashes. I have also looked at trying to connect with localhost, but it's not working. So far, I have tried (with xxxx being the port):
ssh -L xxxx:localhost:xxxx 0.tcp.ngrok.io
and
ssh -L xxxx:localhost:xxxx <user>@0.tcp.ngrok.io
but my computer won't allow it, and after about 2 or 3 tries it warns me of possible DNS spoofing. Is there any way I can run a remote desktop to my Linux machine (which I have ssh tunneled to from my Mac) through ngrok? Thank you!
First you'll need to sign up with ngrok if you haven't already and you'll be given an authtoken. You'll need to install this by running
./ngrok authtoken <insert your token here>
This will save your token to a file located at ../username/.ngrok/ngrok.yml
Then you'll need to ask ngrok to create a TCP tunnel from their servers to your local machine's Remote Desktop port, which should be 3389 by default:
ngrok tcp 3389
Give it 30 seconds or so, then open http://localhost:4040/status to see what TCP address ngrok has allocated to you. It should look something like tcp://1.tcp.ngrok.io:158764.
Now you should be able to remote into your machine using the address 1.tcp.ngrok.io:158764.
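If you prefer the command line, the same information is available from the local ngrok agent API (assuming the default web interface on port 4040 is enabled):
curl -s http://localhost:4040/api/tunnels
# the JSON response includes a "public_url" field such as tcp://1.tcp.ngrok.io:158764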

Reverse ssh tunnel fails to bind to port when tunnel is torn down and restarted

I have a host that starts a reverse ssh tunnel upon bootup like this:
ssh -N -R 2222:localhost:22 root@10.1.2.6
It works great and the reverse tunnel is formed. But whenever I reboot the host, the remote server that the tunnel is built to says this:
Sep 28 13:13:59 kali sshd[4547]: error: bind: Address already in use
Sep 28 13:13:59 kali sshd[4547]: error: channel_setup_fwd_listener_tcpip: cannot listen to port: 2222
To resolve this I have to wait a few minutes for the old ssh tunnel to time out, then find the new ssh connection and kill it; when I rebuild the ssh tunnel after that, it works fine.
Is there an ssh command or autossh command that does something like checks if the remote host can bind that port, if not, try again in a few seconds?
I believe I have run into the same issue as the original poster. I seem to have found the solution at the end of the accepted answer of this question:
If the client reconnects before the connection has terminated on the server, you can end up in a situation where the new ssh connection is live but has no port forwardings. In order to avoid that, you need to use the ExitOnForwardFailure keyword on the client side.
I have thus added the following line to my /etc/ssh/ssh_config file at the client side:
ExitOnForwardFailure yes
According to the ssh man page, this option will cause "a client started with -f [to] wait for all remote port forwards to be successfully established before placing itself in the background".
This seems to cause ssh to fail when attempting to start an ssh tunnel immediately after killing one. The option thus enables repeating the attempt until the tunnel is correctly re-established.
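A minimal sketch of that retry loop, reusing the command from the question (autossh would be a more robust alternative):
while true; do
  ssh -N -o ExitOnForwardFailure=yes -R 2222:localhost:22 root@10.1.2.6
  sleep 5   # ssh exits while the remote port is still bound; retry until it frees up
done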

How to access a host port (bind with ssh -R) from a container?

Using Docker 1.12.1, I face a strange behaviour trying to access a host port created with ssh -R.
Basically I try to access a service running on port 12345 on my local machine from a docker container running on a server.
I opened an ssh connection with ssh -R *:12345:localhost:12345 user@server to open a port 12345 on the server that forwards to port 12345 on my local machine.
Now when I try curl https://172.17.42.1:12345 inside the container (172.17.42.1 is the IP to access the docker host from the docker container) I get:
root@f6873fe1109b:/# curl https://172.17.42.1:12345
curl: (7) Failed to connect to 172.17.42.1 port 12345: Connection refused
But on the server the command curl http://localhost:12345 succeeds (i.e. no Connection refused):
server$ curl http://localhost:12345
curl: (52) Empty reply from server
I don't really understand how the port binding done with ssh differs from a test with nc on the server (which works):
# on server
nc -l -p 12345
# inside a container
root@f6873fe1109b:/# curl http://172.17.42.1:12345
curl: (52) Empty reply from server
NB: the container was started with docker run -it --rm maven:3-jdk-8 bash.
What can I do to allow my container to access the host port corresponding to an ssh binding?
From man ssh:
-R [...]
... Specifying a remote bind_address will only succeed if the server's GatewayPorts option is enabled
And man sshd_config:
GatewayPorts
Specifies whether remote hosts are allowed to connect to ports forwarded for the client. By default, sshd(8) binds remote port forwardings to the loopback address. This prevents other remote hosts from connecting to forwarded ports. GatewayPorts can be used to specify that sshd should allow remote port forwardings to bind to non-loopback addresses, thus allowing other hosts to connect. The argument may be “no” to force remote port forwardings to be available to the local host only, “yes” to force remote port forwardings to bind to the wildcard address, or “clientspecified” to allow the client to select the address to which the forwarding is bound. The default is “no”.
This means that a default sshd installation only allows you to create forwards that bind to the loopback interface. If you want to allow forwards on interfaces other than loopback, you need to set the GatewayPorts option to yes or clientspecified in your /etc/ssh/sshd_config.
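A sketch of the two-step fix, assuming you can edit the server's sshd configuration and restart sshd:
# on the server, in /etc/ssh/sshd_config (restart sshd afterwards)
GatewayPorts clientspecified
# on your local machine, re-open the reverse forward with an explicit wildcard bind address
ssh -R 0.0.0.0:12345:localhost:12345 user@server
# inside the container, the forwarded port should now be reachable via the docker host IP
curl http://172.17.42.1:12345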

offering mysql on localhost via a ssh layer

I have 2 machines: one has a mysql server that runs on localhost, and the second one has no mysql server. I want to access the mysql server of the first machine from the second machine, also on localhost. It should be something like a virtual localhost.
The first machine should log in to the second machine via a secure socket and somehow emulate the server there.
Is something like this possible, what is it called, and how does it work?
Is this what is called a tunnel?
Yes, this is what is called a tunnel.
Assuming host A is running the mysql server and host B is the one that does not.
To create the tunnel enter the following on host B:
ssh -L 3306:localhost:3306 username@A
(Add -f -N to the command to not execute any command on the remote host and immediately background the ssh connection).
This creates a listening port 3306 on host B which is forwarded over the ssh tunnel to localhost:3306 on host A.
Now just run mysql on host B and you should be able to connect to the mysql server on host A.
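For example (a sketch; dbuser is an illustrative account, and -h 127.0.0.1 forces a TCP connection so the mysql client does not try the local unix socket):
host-B$ mysql -h 127.0.0.1 -P 3306 -u dbuser -p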
Hope it helps!
