Adding SSH keys to ssh-agent fails with a running agent and environment variables set - Linux

[SSH] "Could not open a connection to your authentication agent". error
I am trying to add SSH keys to my ssh-agent. I start by launching the agent:
exec ssh-agent bash
Then I check that ssh-agent is running:
ps axu | grep [s]sh
and get the following
root 1562 ... ssh-agent bash
The env variables are set correctly.
SSH_AGENT_PID=1562
SSH_AUTH_SOCK=/tmp/ssh-699iHAxuK4xX/agent.1561
However, when I try to add the private key using
sudo ssh-add ~/.ssh/peter-key
I get the ssh error
Could not open a connection to your authentication agent.
I have tried the suggestions on Stack Overflow and Server Fault, but nothing has worked.
Note: I am running Ubuntu on one of the free-tier AWS machines. My instance's security group (temporarily) allows all incoming and outgoing SSH connections from any IP address. Does anyone know what the error could be?

Just use
ssh-add ~/.ssh/peter-key
...not...
sudo ssh-add ~/.ssh/peter-key
Using sudo (optionally/configurably, but typically) clears a number of environment variables, including the ones you just verified were set. (Compare the output of sudo env and plain env to see this effect.)
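For example, a quick way to see the difference (assuming your shell still has the agent variables exported):
env | grep '^SSH_'        # shows SSH_AGENT_PID and SSH_AUTH_SOCK
sudo env | grep '^SSH_'   # typically prints nothing, because sudo strips them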
If you must use sudo to read the key, then you can ensure that the necessary environment variable is set on the other side by doing so explicitly yourself:
sudo env "SSH_AUTH_SOCK=$SSH_AUTH_SOCK" ssh-add ~/.ssh/peter-key
However, security-sensitive programs that work with UNIX domain sockets may check the ownership and permissions of whatever sits on the other end of the socket, and refuse to communicate with anything running under a user account different from what they expect. So this approach may not be future-proof against security features added to ssh-agent.
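If in doubt, you can check who owns the agent socket before deciding whether sudo is needed at all:
ls -l "$SSH_AUTH_SOCK"   # shows the socket's owner and permissions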

Related

How to disable ssh-agent forwarding

ssh-agent forwarding can be accomplished with ssh -A ....
Most references I have found state that the local machine must configure ~/.ssh/config to enable AgentForwarding with the following code:
Host <trusted_ip>
ForwardAgent yes
Host *
ForwardAgent no
However, with this configuration, I am still able to see my local machine's keys when tunneling into a remote machine with ssh -A user@remote_not_trusted_ip and running ssh-add -l.
From the configuration presented above, I would expect that the ssh-agent forwarding would fail and the keys of the local machine would not be listed by ssh-add -l.
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
Host *
ForwardAgent no
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in ~/.ssh/config?
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in ~/.ssh/config?
That is the default behavior. If you do not allow forwarding in ~/.ssh/config, it will not be forwarded. But command-line arguments have higher priority and override what is defined in the configuration, as explained in the manual page for ssh_config:
ssh(1) obtains configuration data from the following sources in the following order:
command-line options
user's configuration file (~/.ssh/config)
system-wide configuration file (/etc/ssh/ssh_config)
So, as already said, you just need to provide the correct arguments to ssh.
So back to the questions:
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
Host *
ForwardAgent no
Because the command-line argument -A has higher priority than the configuration files.
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in the ~/.ssh/config?
Do not use the -A command-line option if you do not want to forward your ssh-agent. Use the -a command-line option instead.
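A quick way to confirm what ssh will actually do for a given destination, after config files and command-line flags are combined, is ssh -G, which prints the effective options without connecting (available in reasonably recent OpenSSH releases):
ssh -G remote_not_trusted_ip | grep -i forwardagent      # forwardagent no
ssh -A -G remote_not_trusted_ip | grep -i forwardagent   # forwardagent yes, because -A wins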
You are using the -A option to connect. man ssh says:
-A Enables forwarding of the authentication agent connection.
You should connect without -A, just using:
ssh user@remote_not_trusted_ip
Command-line arguments take priority over the ssh config file.
By the way, if you want to connect to your trusted IP without forwarding, you can also use:
ssh -a user@trusted_ip
-a Disables forwarding of the authentication agent connection.
This is over a year old, but I encountered the same issue and landed on a config option that works.
I had a problem where, after connecting from my home computer to my work computer, Git commands no longer worked. I figured out that it was because the connecting home computer's key was being forwarded, and that key was not configured for that GitHub account.
The -a command-line option fixed the problem by not forwarding the authentication agent connection. I thought the equivalent ~/.ssh/config option would be this:
ForwardAgent no
When that didn't work, I looked for other configuration options and finally found one that did:
IdentityAgent none
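A minimal ~/.ssh/config sketch of how that might look in the scenario above (assuming the entry goes in the work computer's config and github.com is the host whose access should ignore the forwarded agent):
Host github.com
    IdentityAgent none
With this, ssh connections to github.com from that machine will not consult any agent, forwarded or otherwise, and will fall back to local key files.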
This part of the man-page is crucial:
Since the first obtained value for each parameter is used, more
host-specific declarations should be given near the beginning of the
file, and general defaults at the end.
Put your Host * with ForwardAgent yes at the end, and the specific Host entries with ForwardAgent no at the start of .ssh/config.
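A sketch of that ordering, using placeholder host names and the yes/no arrangement described in this answer:
Host untrusted-host
    ForwardAgent no

Host *
    ForwardAgent yes
Because ssh keeps the first value it obtains for each parameter, the specific entry wins for that host and everything else falls through to the general default at the end.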
Not an answer to the question, and maybe just semantics:
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
My understanding is that authentication keys are never "forwarded" to a remote computer. Rather, agent forwarding relays authentication challenges from a remote server back to the computer that holds the private key, through whatever chain of remote computers the ssh connection runs through.

the usage of scp and ssh

I'm a newbie to Linux and trying to set up passphrase-less ssh. I'm following the instructions in this link: http://wiki.hands.com/howto/passphraseless-ssh/.
In the above link, it says: "One often sees people using passphrase-less ssh keys for things like cron jobs that do things like this:"
scp /etc/bind/named.conf* otherdns:/etc/bind/
ssh otherdns /usr/sbin/rndc reload
which is dangerous because the key that's being used here is being offered root write access, when it need not be.
I'm kind of confused by the above commands.
I understand the usage of scp. But for ssh, what does "ssh otherdns /usr/sbin/rndc reload" mean?
"the key that's being used here is being offered root write access."
Can anyone also explain this sentence in more detail? Based on my understanding, the key is the public key generated by one server and copied to otherdns. What does "being offered root write access" mean?
It means to run a command on a remote server.
the syntax is
ssh <remote> <cmd>
so in your case
ssh otherdns /usr/sbin/rndc reload
is basically 4 parts:
ssh: run the ssh executable
otherdns: the remote server; since no user is given, the default user is used (the same as the one currently logged in, or the one configured in ~/.ssh/config for this remote machine)
/usr/sbin/rndc: a program on the remote server to be run
reload: an argument to the program to be run on the remote machine
So in plain words, your command means:
run the program /usr/sbin/rndc with the argument reload on the remote machine otherdns
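As for the "root write access" wording in the question: the risk the article points at is that an unrestricted passphrase-less key lets whoever holds it run any command on otherdns, including overwriting root-owned files under /etc/bind/. One common way to narrow that, sketched here under the assumption that rndc reload is the only command the key should ever run, is a forced command in ~/.ssh/authorized_keys on otherdns (the key material and comment are placeholders):
command="/usr/sbin/rndc reload",no-port-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA...placeholder... cron-reload-key
With that line, a connection authenticated by this key can only trigger the reload; it cannot scp files or open an interactive shell.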

SSH public key authentication without changing system files

I am changing parameters like RSAAuthentication, PubkeyAuthentication and PasswordAuthentication (sudo vim /etc/ssh/sshd_config) to disable SSH password authentication and force SSH login via public key only.
The experiments are adversely affecting many users, as they suddenly find "Connection refused" while trying to ssh to the server. I want to avoid disrupting them with these experiments. Is there any workaround to enable public key authentication without touching system files like /etc/ssh/sshd_config?
Sure. Set up an alternative configuration file, and run sshd on another port while you are experimenting:
cp sshd_config sshd_config_working
/usr/sbin/sshd -p 2222 -f sshd_config_working
Now you can connect with:
ssh -p 2222 user@localhost
You can make as many changes as you want until you get it working as desired. At that point, copy your _working config back to the main config file and restart sshd.
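For the experiment described in the question, the directives you would then toggle in the copied file are essentially these (a sketch; all are standard sshd_config options):
# sshd_config_working - test pubkey-only login without touching the live config
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication no
Start it with /usr/sbin/sshd -p 2222 -f sshd_config_working as above, and test with ssh -i ~/.ssh/your_test_key -p 2222 user@localhost; port 22 keeps serving the untouched production configuration the whole time.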
Alternatively, stop mucking about on a production server and set up a virtual machine or container for testing, where you can modify the sshd configuration as much as you want without affecting anybody.

Using a custom ssh-agent for communicating with GitHub

I inherited a deployment system that is currently broken and I'm at a loss at how to fix it.
The basic setup is adding 3 keys to ssh-agent and pulling a few private repos from Github via the Go deployment software from ThoughtWorks.
I seem to need to have one ssh-agent running that can be accessed by multiple user accounts.
I've started an ssh-agent and added the keys to it, and then I was able to clone private repos from the command line without issue, but when the main application (which is using the same user account) tries to clone, it fails with a permission denied error.
My guess is that the ssh-agent that is holding the keys is not accessible to the application for some reason.
Here are the instructions that I have:
export SSH_AUTH_SOCK=/var/go/ssh-agent.sock
ssh-add ~/.ssh/go_deploy_id_rsa
ssh-add ~/.ssh/go_id_rsa
ssh-add ~/.ssh/deploy_id_rsa
When I set the SSH_AUTH_SOCK environment variable, it seems to kill any ssh-agent that is/was running, and when I issue the ssh-add command I get the classic:
"Could not open a connection to your authentication agent."
So basically, how do I start ssh-agent AND have it use the SSH_AUTH_SOCK I defined earlier, and stay running so that the Go application uses it when it communicates with GitHub?
This used to work, so I know that the setup is technically valid.
SOLVED: It turns out the ssh-agent socket that I was using was stale. Deleting the socket and re-creating it allowed the keys to be added and communication worked again.
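For reference, the socket path can also be given to ssh-agent directly with its -a option, which avoids juggling SSH_AUTH_SOCK by hand (same path and key names as in the instructions above; remove any stale socket first, as the asker discovered):
ssh-agent -a /var/go/ssh-agent.sock
export SSH_AUTH_SOCK=/var/go/ssh-agent.sock
ssh-add ~/.ssh/go_deploy_id_rsa ~/.ssh/go_id_rsa ~/.ssh/deploy_id_rsa
The agent stays running after the shell exits, so any process started with SSH_AUTH_SOCK pointing at that path, including the Go server, can use the same keys.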

Define a set keyfile for Ubuntu to use when SSHing into a server

I have two Amazon EC2 Ubuntu instances. When I connect to one of them, I can do
ssh ubuntu@54.123.4.56
and the shell uses the correct keyfile from my ~/.ssh directory.
I just set up a new instance, and I'm trying to figure out how to replicate that behavior for this new one. It's a minor thing, just driving me nuts. When I log in with:
ssh -i ~/.ssh/mykey.pem ubuntu@54.987.6.54
it works fine, but with just
ssh ubuntu@54.987.6.54
I get:
Permission denied (publickey).
I have no idea how I managed to get it to work this way for the first server, but I'd like to be able to ssh into the second server without the "-i abc.pem" argument. Permissions are 600:
-r-------- 1 mdexter mdexter 1692 Nov 11 20:40 abc.pem
What I have tried: I copied the public key from authorized_keys on the remote server and pasted it into authorized_keys on the local server, with mdexter@172.12.34.56 (private key), because I thought that might be what created the association in the shell between that key and that server.
The only difference I can recall between how I set up the two servers is that with the first, I created a .ppk key in PuTTY so that I could connect through FileZilla for SFTP. But I think SSH is still using the .pem given by Amazon.
How can I tell the shell to always use my .pem key for that server when SSHing into that particular IP? It's trivial, but I'm trying to strengthen my (rudimentary) understanding of public/private keys, and I'm wondering if this plays into that.
You could solve this in 3 ways:
By placing the contents of your ~/.ssh/mykey.pem into ~/.ssh/id_rsa on the machine from which you are ssh'ing into the 2nd instance. Make sure you also change the permissions of ~/.ssh/id_rsa to 600.
Using ssh-agent (ssh-agent will manage the keys for you)
Start ssh-agent
eval `ssh-agent -s`
Add the key to ssh-agent using ssh-add
ssh-add mykey.pem
Using the ssh config file:
From the machine from which you are trying to ssh, put the following contents in the ~/.ssh/config file (make sure to give this file 600 permissions):
Host host2
HostName 54.987.6.54
Port 22
User ubuntu
IdentityFile ~/.ssh/mykey.pem
Once you do that, you can ssh like this:
ssh host2
After performing any of the above steps you should be able to ssh into your second instance without specifying the key path.
Note: The second option requires you to add the key using ssh-add every time you log out and log back in; to make it permanent, see this SO question.
