keychain ssh-agent incorrect information - Linux

I'm using keychain, which manages my key(s) with ssh-agent perfectly.
I want to check the state of ssh-agent on each Linux host. I tried with:
ssh-add -l
1024 f7:51:28:ea:98:45:XX:XX:XX:XX:XX:XX /root/.ssh/id_rsa (RSA)
Locally, this command works and the output is correct.
But when I run the same command remotely over SSH, the result is not the same, and I don't know why:
## host1, locally
ssh-add -l
1024 f7:51:28:ea:98:45:XX:XX:XX:XX:XX:XX /root/.ssh/id_rsa (RSA)
## host2, remote command to host1:
ssh host1 "ssh-add -l"
Could not open a connection to your authentication agent.
Could someone explain this to me? It's a problem, because I would like to monitor the ssh-agent state on each host.
Thanks.
EDIT: Even with SSH agent forwarding enabled, the remote command only returns the state of the local agent. Other remote commands work, with or without a key loaded:
## Host1, locally
ssh-add -l
1024 f7:51:28:ea:98:45:XX:XX:XX:XX:XX:XX /root/.ssh/id_rsa (RSA)
## From Host2, locally and remotely:
ssh-add -l
The agent has no identities.
ssh -A host1 "ssh-add -l"
The agent has no identities.

You seem to misunderstand how keychain/ssh-agent work. When you log onto a system (we'll call it 'home'), keychain starts (or reuses) an ssh-agent process. As part of starting the agent, a socket file is exposed that ssh-add can connect to. When the path of this socket is stored in the SSH_AUTH_SOCK variable, the agent becomes accessible to ssh and to ssh-add when they run with this environment variable set appropriately.
When you ssh to a remote system with the ForwardAgent option set to true in your configuration, a channel is established that allows this key information to pass from your 'home' system to the system you've ssh'd to. To expose this key information, another socket is created on the remote system and SSH_AUTH_SOCK there is set to point to it. So now we have:
# home system (host1)
host1$ ssh-add -l
1024 BLAH....
host1$ echo $SSH_AUTH_SOCK
/tmp/ssh-YJNLu2LFMKbO/agent.1472
host1$ ssh -A host2
host2$ ssh-add -l
1024 BLAH
host2$ echo $SSH_AUTH_SOCK
/tmp/ssh-vqdu733feY/agent.23877
host2$ ssh -A host1
host1$ ssh-add -l
1024 BLAH
host1$ echo $SSH_AUTH_SOCK
/tmp/ssh-fuKgOaaQ7b/agent.23951
So with every connection, a socket is created on the remote system to ferry the key data through the chain back to the original 'home' system. In this example:
ssh-agent (on host1) makes a SOCKET -> ssh (host2) [provides a SOCKET connecting back to the SOCKET on host1] -> ssh (host1) [provides a SOCKET connecting back to the socket on host2]
So once you connect to a remote system, ssh provides a socket there that connects back to the socket on the system you came from.
If you log on to a system directly (e.g. logging onto host2 at the console), then there is absolutely no connection back to host1. If an agent is started on host2, then it provides its own private socket that you are communicating with.
Where things can go wrong:
You've enabled agent forwarding on your connection from host1 -> host2; however, the script that runs at login on host2 ignores the presence of this socket and starts its own private agent on host2. As a result, when you ask for the list of registered keys using ssh-add -l, it talks to the socket provided by the agent running on host2. This agent does not have access to the keys from host1, because it ignored the forwarded socket.
Agent forwarding can be disabled in sshd_config, which means the server administrator has configured the system to prevent people from forwarding their agent into it (this is the case if there is an AllowAgentForwarding no line in sshd_config).
The first case happens when a badly written login script ignores the presence of the variable - i.e. it doesn't detect that the connection is remote and that a socket is being forwarded. This is rare, but it can happen. If it does, the script needs to be rewritten to detect this case (a sketch of such a guard follows below).
If the administrator of the remote system has disabled agent forwarding, then you need to ask for it to be enabled.
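For the first case, a minimal sketch of such a guard in the login script (assuming keychain manages the agent and the key is id_rsa; adjust for your setup):
# ~/.bash_profile (sketch): only start a private agent when no agent,
# forwarded or local, is reachable through $SSH_AUTH_SOCK.
# ssh-add -l exits with status 2 when it cannot talk to an agent at all.
ssh-add -l >/dev/null 2>&1
if [ $? -eq 2 ]; then
    # no reachable agent: let keychain start (or reuse) a local one
    eval "$(keychain --eval --quiet id_rsa)"
fi
# otherwise keep the forwarded socket that ssh -A provided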

Related

ssh-copy-id fails when run from within a remote session

I have a task to copy ssh keys from one node to all the others in an array. For this, I wrote a simple bash script which copies itself to the other nodes and runs there. What confuses me is that ssh-copy-id works fine on the node where the script is executed manually, but it fails when run remotely in an ssh session. Here's the script:
1 #!/bin/bash
2 # keys-exchange.sh
4 nodes=( main worker-01 worker-02 worker-03 )
6 for n in $( echo "${nodes[@]}" ); do
7 [ $n != $HOSTNAME ] && ssh-copy-id $n
8 done
10 if [ -z $REMOTE ]; then
11 for n in $( echo ${nodes[@]} ); do
12 if [ $n != $HOSTNAME ]; then
13 scp $0 $USER@$n:$0 > /dev/null
14 ssh $USER@$n "REMOTE=yes HOSTNAME=$n $0 ; rm -f $0"
15 fi
16 done
17 fi
The code in rows 6-8 works fine, copying the ssh key to all nodes other than itself. Then, if the REMOTE variable is not set, the code in rows 11-16 copies the script to the remote nodes (except the node it's running on, row 12) and runs it there. In row 14, I set and pass the variable REMOTE to skip the code block in rows 10-17 (so the script copies itself only from the source node to the others), and the variable HOSTNAME because I found it's not set in an ssh session. The user name and the script path are exactly the same on the source node and all destination nodes.
When run on the source node, the script works properly, asking for confirmation and the remote host's password. But the very same script fails when run in the remote ssh session: ssh-copy-id reports the following error:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/username/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: ERROR: Host key verification failed.
At that moment, no .ssh/known_hosts file is present on the remote node, so I can't do ssh-keygen -R. What am I missing, and how can I make it work?
ssh $USER@$n "REMOTE=yes HOSTNAME=$n $0 ; rm -f $0"
Try running ssh with the "-tt" option to request a PTY (pseudo-TTY) for the remote session:
ssh -tt $USER@$n "REMOTE=yes HOSTNAME=$n $0 ; rm -f $0"
In the case that you're describing, you're launching ssh on the remote system to connect to a third system. The ssh instance doesn't have a saved copy of the third host's host key. So you'd normally expect ssh to prompt the user whether to continue connecting to the third host. Except that it's not prompting the user--it's just refusing to connect to the third host.
When ssh is invoked with a command to run on the remote system, by default it runs that command without a TTY. In this case, the remote ssh instance sees that it's running without a TTY and runs non-interactively. When it's non-interactive, it doesn't prompt the user for things like passwords and whether to accept a host key or not.
Running the local ssh instance with "-tt" causes it to request a PTY for the remote session. So the remote ssh instance will have a TTY and it will prompt the user--you--for things like host key confirmations.
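To see the difference yourself, a quick check (host name and output shown here are illustrative) is to ask the remote side whether it has a TTY:
$ ssh somehost 'tty'
not a tty
$ ssh -tt somehost 'tty'
/dev/pts/3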
ssh-copy-id is not copying your private keys to remote hosts; it adds your public key to ~/.ssh/authorized_keys there. When you jump to that remote host there are no keys in its ~/.ssh (or are there?), so there is nothing to copy further. Also, if ssh-copy-id is run without the -i option, it will copy (add to authorized_keys) all .pub keys from your ~/.ssh dir, which may not be what you want, so I suggest running it like this: ssh-copy-id -i $key $host
Be sure that on the destination side, the /etc/ssh/sshd_config is configured to accept the type of key that was generated.
PubkeyAcceptedKeyTypes ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa
I generated the key using ssh-keygen -t rsa -b 4096; however, the line above did not include ,ssh-rsa at the end, so even though ssh-copy-id updated my destination, sshd did not accept RSA keys. Once I added ,ssh-rsa and did systemctl restart sshd, it worked!
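If you want to verify what the running sshd actually accepts, one way (needs root on the destination; on newer OpenSSH releases the option is spelled PubkeyAcceptedAlgorithms) is to dump the effective configuration:
sshd -T | grep -i pubkeyaccepted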

ssh proxy command with netcat

In example like this:
Host destination
ProxyCommand ssh gateway nc %h %p
Is the connection between the gateway and the destination encrypted? I am confused because I have two hypotheses, and neither is convincing:
It's not encrypted. The stdin at the source goes through the source-gateway ssh connection encrypted, and gets decrypted before being passed to nc, i.e. nc's stdin is the same as the stdin of the ssh client at the source. But I think the %p is 22, the ssh port, which doesn't fit with this hypothesis.
It's encrypted: the sshd daemon at the gateway passes encrypted data to nc. But then, say we execute cat instead of nc; does the sshd daemon pass it encrypted data too? That doesn't sound right either.
Of course it is encrypted! Just to understand what is going on here:
[ client $ ssh destination ]
|
'-> [ gateway $ nc destination 22 ]
|
'-> [ destination $ whatever]
On the client you run just ssh destination. Because of the ProxyCommand, this is translated into ssh gateway nc destination 22.
So the first command executed is ssh gateway with a command; the first hop is encrypted for sure.
The nc destination 22 command is run in this session on the gateway server. It basically just redirects all I/O to the destination host as-is (but we are already inside an encrypted channel!).
So you do one more key exchange and authentication with destination, and after it succeeds, you will probably get a shell prompt there. So it is again encrypted.
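As a side note, a reasonably recent OpenSSH can open the forwarded connection itself, so nc is not needed on the gateway; a sketch of the equivalent config:
Host destination
ProxyCommand ssh -W %h:%p gateway
or, newer still, simply ProxyJump gateway. Either way, the end-to-end SSH session to destination is encrypted exactly as described above.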

How to scp back to local when I've already sshed into remote machine?

Often I face this situation: I ssh into a remote server and run some programs, and I want to copy their output files back to my local machine. What I do is remember the file path on the remote machine, exit the connection, then scp user@remote:filepath .
Obviously this is not optimal. What I'm looking for is a way to scp the file back to my local machine without exiting the connection. I did some searching; almost all results tell me how to scp from my local machine, which I already know.
Is this possible? Better still, is it possible without needing to know the IP address of my local machine?
Given that you have an sshd running on your local machine, it's possible and you don't need to know your outgoing IP address. If SSH port forwarding is enabled, you can open a secure tunnel even when you already have an ssh connection opened, and without terminating it.
Assume you have an ssh connection to some server:
local $ ssh user@example.com
Password:
remote $ echo abc > abc.txt # now we have a file here
OK, now we need to copy that file back to our local machine, and for some reason we don't want to open a new connection. Let's get the ssh command line by pressing Enter ~C (Enter, then tilde, then capital C):
ssh> help
Commands:
-L[bind_address:]port:host:hostport Request local forward
-R[bind_address:]port:host:hostport Request remote forward
-D[bind_address:]port Request dynamic forward
-KR[bind_address:]port Cancel remote forward
That's just like the regular -L/R/D options. We'll need -R, so we hit Enter ~C again and type:
ssh> -R 127.0.0.1:2222:127.0.0.1:22
Forwarding port.
Here we forward remote server's port 2222 to local machine's port 22 (and here is where you need the local SSH server to be started on port 22; if it's listening on some other port, use it instead of 22).
Now just run scp on the remote server and copy our file to the remote server's port 2222, which is mapped to our local machine's port 22 (where our local sshd is running).
remote $ scp -P2222 abc.txt user@127.0.0.1:
user@127.0.0.1's password:
abc.txt 100% 4 0.0KB/s 00:00
We are done!
remote $ exit
logout
Connection to example.com closed.
local $ cat abc.txt
abc
Tricky, but if you really cannot just run scp from another terminal, it could help.
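If you know up front that you'll want to copy files back, the same remote forward can be requested when the session is opened instead of via the ~C escape; a sketch with the same port assumptions as above:
local $ ssh -R 2222:127.0.0.1:22 user@example.com
remote $ scp -P2222 abc.txt user@127.0.0.1: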
I found this one-liner solution on SU to be a lot more straightforward than the accepted answer. Since it uses an environment variable for the local IP address, I think it also satisfies the OP's request not to know it in advance.
Based on that, here's a bash function to "download" a file (i.e. push it from the SSH session to a set location on the local machine):
function dl() {
    scp "$1" "${SSH_CLIENT%% *}":/home/<USER>/Downloads
}
Now I can just call dl somefile.txt while SSH'd into the remote and somefile.txt appears in my local Downloads folder.
extras:
I use RSA keys (ssh-copy-id) to get around the password prompt
I found this trick to prevent the local bashrc from being sourced on the scp call
Note: this requires SSH access to the local machine from the remote one (is this often the case for anyone?)
The other answers are pretty good and most users should be able to work with them. However, I found the accepted answer a tad cumbersome and others not flexible enough. A VPN server in between was also causing trouble for me with figuring out IP addresses.
So, the workaround I use is to generate the required scp command on the remote system using the following function in my .bashrc file:
function getCopyCommand {
    echo "scp user@remote:$(pwd)/$1 ."
}
I find rsync to be more useful if the local system is almost a mirror of the remote server (including the username) and I also need to copy the directory structure.
function getCopyCommand {
    echo "rsync -rvPR user@remote:$(pwd)/$1 /"
}
The generated scp or rsync command is then simply pasted on my local terminal to retrieve the file.
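For illustration, assuming the file results.txt sits in /home/user/project on the remote host, the call and its output (with the placeholder user@remote from the function) would be:
remote $ getCopyCommand results.txt
scp user@remote:/home/user/project/results.txt .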
You would need a local ssh server running on your machine; then you can just:
scp [-r] local_content your_local_user@your_local_machine_ip:
Anyway, you don't need to close your remote connection to make a remote copy, just open another terminal and run scp there.
On your local computer:
scp root@remotemachine_name_or_IP:/complete_path_to_file /local_path

Trouble executing ssh IPAddressA -l user "ssh -l IPAddressB ls" from my bash script

I'm currently facing a weird problem while executing a command from my bash script.
My script has this command,
ssh IPAddressA -l root "ssh -l root IPAddressB ls"
where IPAddressA and IPAddressB would be replaced by hard-coded IP addresses of two machines accessible from each other.
The user would enter the password whenever asked. But I'm getting this error after I enter IPAddressA's password:
root@IPAddressA's password:
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
]$
There's a better trick for that.
In ~/.ssh/config add a host entry for IPAddressA, configured like so:
Host IPAddressA
User someguy
ProxyCommand ssh -q someguy@IPAddressB nc -q0 %h 22
The slick thing about this method is that you can scp/sftp to IPAddressB without any weird stuff on your shell command line.
For bonus points, generate yourself a key pair and drop the public key on both IPAddressA and IPAddressB in ~/.ssh/authorized_keys. If you don't put a passphrase on it, you won't even be bothered to enter one.
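A minimal sketch of that key setup (the ed25519 type and default file name are just example choices):
ssh-keygen -t ed25519
ssh-copy-id someguy@IPAddressB
ssh-copy-id someguy@IPAddressA
With the Host entry above in place, the second ssh-copy-id is proxied through IPAddressB automatically, since ssh-copy-id uses ssh (and thus your ~/.ssh/config) under the hood.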
Additionally, if you're trying to get access to a remote LAN that only has a single entry point, SSH can actually act as a VPN client, bridging you through the proxy host. Of course, the remote end needs to support tap/tun devices (as does your local machine), but if it's all there already, it's a super painless mechanism to bridge.
When the inner ssh prompts for a password, there's no interactive keyboard available. You can get what you want with ssh tunneling.
ssh root@IPAddressA -L2222:IPAddressB:22 -Nf
ssh root@localhost -p2222
The first line opens a tunnel, so your localhost port 2222 points to IPAddressB:22, and it puts the ssh process in the background (-f) without executing a command (-N).
The second line connects to IPAddressB:22 through the newly opened tunnel.

linux execute command remotely

How do I execute a command/script on a remote Linux box?
Say I want to do service tomcat start on box b from box a.
I guess ssh is the most secure way to do this, for example:
ssh -OPTIONS -p SSH_PORT user@remote_server "remote_command1; remote_command2; remote_script.sh"
where the OPTIONS are set according to your specific needs (for example, binding to IPv4 only) and your remote command could be starting your tomcat daemon.
Note:
If you do not want to be prompted at every ssh run, also have a look at ssh-agent, and optionally at keychain if your system allows it. The key is to understand the ssh key exchange process. Take a careful look at ssh_config (i.e. the ssh client config file) and sshd_config (i.e. the ssh server config file). Configuration file names depend on your system; anyway, you'll find them somewhere like /etc/ssh/sshd_config. Ideally, do not run ssh as root, obviously, but as a specific user on both sides, server and client.
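As a rough sketch of the ssh-agent part (key path and hostname are just examples, and this assumes your public key is already installed on the server):
eval "$(ssh-agent -s)"           # start an agent for this shell
ssh-add ~/.ssh/id_rsa            # enter the key passphrase once
ssh user@remote_server "service tomcat start"   # no prompt this time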
Some extra docs beyond the source projects' main pages:
ssh and ssh-agent
man ssh
http://www.snailbook.com/index.html
https://help.ubuntu.com/community/SSH/OpenSSH/Configuring
keychain
http://www.gentoo.org/doc/en/keychain-guide.xml
an older tutorial in French (by myself :-) but it might be useful too:
http://hornetbzz.developpez.com/tutoriels/debian/ssh/keychain/
ssh user@machine 'bash -s' < local_script.sh
or you can just
ssh user#machine "remote command to run"
If you don't want to deal with security and want to make it as exposed (aka "convenient") as possible for the short term, and/or don't have ssh/telnet or key generation on all your hosts, you can hack a one-liner together with netcat. Write a command to your target computer's port over the network and it will run it. Then you can block access to that port to a few "trusted" users, or wrap it in a script that only allows certain commands to run (a sketch of such a wrapper follows below). And use a low-privilege user.
on the server
mkfifo /tmp/netfifo; nc -lk 4201 0</tmp/netfifo | bash -e &>/tmp/netfifo
This one-liner reads whatever string you send to that port and pipes it into bash to be executed. stderr and stdout are dumped back into the FIFO and sent back to the connecting host via nc.
on the client
To run a command remotely:
echo "ls" | nc HOST 4201
