IdentityFile ignored in ssh configuration - linux

My ssh configuration inside /root/.ssh/config:
Host *
IdentityFile /root/.ssh/id_rsa_api
IdentityFile /root/.ssh/id_rsa_ui
I use these keys to clone GitHub repositories. However, only the first IdentityFile (API) works; for the second, cloning fails with Repository not found. When I swap the configuration like this:
Host *
IdentityFile /root/.ssh/id_rsa_ui
IdentityFile /root/.ssh/id_rsa_api
This way I can clone the UI repository, but not the API one. So the keys themselves are correct, but the second IdentityFile is always ignored. What could the problem be?
I cannot use ssh-add because I configure ssh inside a Dockerfile and ssh-agent is not running when the container is built.

Do you have any other keys besides the two you've listed in the question? The OpenSSH server sshd will drop a client after too many failed authentication attempts. If you have enough keys, your client may be trying all of them and being dropped before it gets through all of the keys you've listed. Running ssh with the -v parameter will show which keys ssh tries to use to authenticate.
The sshd_config parameter MaxAuthTries determines how many times a client can attempt to authenticate. The default is 6.
If this is the problem, you may be able to avoid it by setting the ssh_config parameter IdentitiesOnly. This prevents your client from using identities that didn't come from the ssh configuration files. Another thing to consider is to use more specific Host or Match directives, so you only apply a key to the specific hosts where the key should be used.
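For example, a per-host setup along these lines keeps each key scoped to a single alias (the github-api/github-ui aliases and the repository path below are illustrative, not taken from the question):
Host github-api
HostName github.com
User git
IdentityFile /root/.ssh/id_rsa_api
IdentitiesOnly yes

Host github-ui
HostName github.com
User git
IdentityFile /root/.ssh/id_rsa_ui
IdentitiesOnly yes
You would then clone with something like git clone git@github-api:yourorg/api-repo.git, so ssh only ever offers the one key that belongs to that alias.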

https://developer.github.com/guides/managing-deploy-keys/#deploy-keys
Deploy keys only grant access to a single repository. More complex
projects may have many repositories to pull to the same server
So I dropped using deploy keys. Instead I created an ssh key that allows access to all of my private repositories. This way I have a single IdentityFile.

Related

Logging in via SSH to a Linux host with an ssh key always fails on the first try, then works. Is there some configurable timeout?

I have created ssh keys and registered my public key on the target host under .ssh/authorized_keys.
It also generally works. I just observe a strange behavior: when I try to log in for the first time in the morning, I see "Server refused our key" and am forced to enter my passphrase. Any subsequent attempts work fine, and I see in the console output that it authenticates with my key.
If I don't log in for a longer time, a new login shows the same behavior as above and I am again forced to enter my passphrase.
So I was wondering: is there a configurable value that prevents me from authenticating with my key after a certain time, and that I can just increase or deactivate?
You may find your answer here: some servers are configured to verify hosts before they can log in for the first time.
https://unix.stackexchange.com/questions/42643/ssh-key-based-authentication-known-hosts-vs-authorized-keys
We can make SSH automatically write new host keys to the known_hosts file by setting StrictHostKeyChecking to “no” in the ~/.ssh/config file.
StrictHostKeyChecking=no
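As a small sketch of where that setting would live in ~/.ssh/config (your-target-host is a placeholder; scoping the option to one host is safer than applying it everywhere):
Host your-target-host
StrictHostKeyChecking no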

Can't match ssh key for git. Makes local fingerprint instead?

My target is to set up git on my Linux server box so I can commit/push through a batch file from my windows machine.
I was hoping for something similar to how I did it with svn in the past, such that I could create a user that had certain read/write privileges. I am more than happy for it to be ssh key dependant.
Thus far every time I try to put an ssh key on my computer and on the server, it just ignores it and makes its own:
The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
RSA key fingerprint is xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
Are you sure you want to continue connecting (yes/no)?
And this means I have to insert a password every time so my batch file solution won't work.
I placed the key on my windows machine both in "C:/Users/Ryan/.ssh/" and "C:/Program Files (x86)/Git/.ssh/" in the msysgit installation directory. I also installed it onto my server to the suggested git user. I did the basic installation following the git documentation:
- Generated myself an SSH key using puttygen.
- Copied it to my server and cat'd it to authorized_keys in /home/git/.ssh/
- Init'd bare git repository etc.
I can push/pull but I have to use the RSA fingerprint and use the git account password to log in rather than using an ssh key.
Am I doing something wrong, or is it actually supposed to work like this?
I haven't fully read into making a git daemon instead, perhaps that is what I am after?
Make sure you are starting a DOS session with git-cmd.bat from your msysgit distribution: that will set the HOME environment variable properly (usually %USERPROFILE%).
The public (id_rsa.pub) and private (id_rsa) keys need to be in %HOME%\.ssh.
The message "The authenticity of host ... can't be established" should only occur once, at the first ssh connection. Once it is done, don't delete the %HOME%\.ssh\known_hosts file it has created.
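To verify the environment, something along these lines in that DOS session can help (yourserver stands in for your actual git host):
echo %HOME%
dir %HOME%\.ssh
ssh -v git@yourserver
Running ssh with -v shows which identity files are offered and confirms whether %HOME%\.ssh is being picked up.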

Git - SSH - Hosts: How can I delegate different IPs to remote origin, depending on what works each time?

When at home, git fetch origin master should connect over ssh to git@192.168.xx.xx.
When on the road, git fetch origin master should connect over ssh to git@xxx.linkpc.net.
Possible Solutions
Having multiple remotes for the same repository/branch works, but git then tracks multiple heads unnecessarily. This solution just messes with the beauty.
Assigning a hostname to the remote and commenting/uncommenting /etc/hosts entries to delegate IPs is a nice solution, but it involves sudoing and entering the root password, which is kind of tedious.
A per user hosts file is out of the question.
Writing a script that delegates the correct IP each time the origin is called would be ideal.
Writing a script that is executed as root each time a user calls it seems to be a marginally fair solution.
Question
How could someone tackle this issue using a solution that, by all means, shall not be considered "of a hackish character"?
Remember that you are tunneling over SSH, so your ~/.ssh/config brings a per-user hosts file back into the question, e.g.,
Host dynamic-repo-host
HostName 192.168.xx.xx
#HostName xxx.linkpc.net
Then modify the URL of origin as in
git config remote.origin.url git@dynamic-repo-host
Modifying ~/.ssh/config by hand as appropriate then gives the effect you want.
If you want a completely hands off solution, look into using Match in ~/.ssh/config, but the specific commands to execute will depend on the particulars of the networks on which your local machine runs.
I'm going to give you a hint and you can formulate the solution. You might be able to use your SSH config with the Match and exec keywords (a sketch follows the excerpt below):
Match
Restricts the following declarations (up to the next Host or Match keyword) to be used only when the conditions following the Match keyword are satisfied.
...
The exec keyword executes the specified command under the user's shell. If the command returns a zero exit status then the condition is considered true.
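As a rough sketch (untested; the ping reachability test, its timeouts, and User git are assumptions layered on the addresses from the question), the idea could look like this in ~/.ssh/config:
Host dynamic-repo-host
User git
Match host dynamic-repo-host exec "ping -c 1 -W 1 192.168.xx.xx"
HostName 192.168.xx.xx
Match host dynamic-repo-host
HostName xxx.linkpc.net
Because ssh keeps the first value it obtains for each option, the fallback HostName only applies when the ping check fails. Note that the exec test runs on every connection to that host, so keep it fast.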

Check identity of remote-user after password-less ssh-login?

After password-less ssh-login, is there any way in Linux to retrieve the identity of the remote-user that logged in?
I would like to take some different actions in the login-scripts,
depending on from which remote host/userid I do ssh-login.
The originating system's username is not recorded unless you use something like this answer - i.e. push the username as part of the connection. The remote host is encoded in the SSH_CLIENT environment variable, so that can be determined.
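For example, in a login script the client address can be pulled out of SSH_CLIENT, which holds "client-ip client-port server-port":
remote_host="${SSH_CLIENT%% *}"
echo "connection from $remote_host"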
You could try to finger the remote system, but that requires fingerd to be running, which is not a common service these days.
You'll have better luck using specific keys for users, which can have options set at the start of the key such as environment="NAME=value" in the authorized_keys file to kind-of determine the remote user that connected. e.g.
environment="REMOTEUSER=fred" ssh-rsa <blahblahkey> <comment>
The use of the environment option in the key will only work if you've got PermitUserEnvironment set in the sshd config, otherwise the line in the authorized_keys gets ignored and you'll be prompted for a password.
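Putting that together, a minimal sketch (the key material, user name, and action are placeholders, and it assumes PermitUserEnvironment yes on the server):
# ~/.ssh/authorized_keys on the target host
environment="REMOTEUSER=fred" ssh-rsa AAAA...key... fred@laptop

# ~/.bash_profile (or whichever login script you use)
if [ "$REMOTEUSER" = "fred" ]; then
  echo "running Fred-specific setup"
fi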

SSH on Linux: Disabling host key checking for hosts on local subnet (known_hosts)

I work on a network where the systems at an IP address will change frequently. They are moved on and off the workbench and DHCP determines the IP they get.
It doesn't seem straightforward how to disable host key caching/checking so that I don't have to edit ~/.ssh/known_hosts every time I need to connect to a system.
I don't care about the host authenticity, they are all on the 10.x.x.x network segment and I'm relatively certain that nobody is MITM'ing me.
Is there a "proper" way to do this? I don't care if it warns me, but halting and causing me to flush my known_hosts entry for that IP every time is annoying and in this scenario it does not really provide any security because I rarely connect to the systems more than once or twice and then the IP is given to another system.
I looked in the ssh_config file and saw that I can set up groups so that the security of connecting to external machines could be preserved and I could just ignore checking for local addresses. This would be optimal.
From searching I have found some very strong opinions on the matter, ranging from "Don't mess with it, it is for security, just deal with it" to "This is the stupidest thing I have ever had to deal with, I just want to turn it off" ... I'm somewhere in the middle. I just want to be able to do my job without having to purge an address from the file every few minutes.
Thanks.
This is the configuration I use for our ever-changing EC2 hosts:
maxim@maxim-desktop:~$ cat ~/.ssh/config
Host *amazonaws.com
IdentityFile ~/.ssh/keypair1-openssh
IdentityFile ~/.ssh/keypair2-openssh
User ubuntu
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
This disables host confirmation (StrictHostKeyChecking no) and also uses a nice hack to prevent ssh from saving the host identity to a persistent file (UserKnownHostsFile /dev/null). Note that as an added value I've set the default user with which to connect to the host and the option to try several different identity private keys.
Assuming you're using OpenSSH, I believe you can set the
CheckHostIP no
option to prevent host IPs from being checked in known_hosts. From the man page:
CheckHostIP
If this flag is set to 'yes', ssh(1) will additionally check the host IP address in the known_hosts file. This allows ssh to detect if a host key changed due to DNS spoofing. If the option is set to 'no', the check will not be executed. The default is 'yes'.
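If you go this route, CheckHostIP can be scoped in ~/.ssh/config; a small sketch, with the 10.* pattern standing in for the network described in the question:
Host 10.*
CheckHostIP no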
This took me a while to find. The most common use-case I've seen is when you've got SSH tunnels to remote networks. All the solutions here produced warnings which broke my Nagios scripts.
The option I needed was:
NoHostAuthenticationForLocalhost yes
Which, as the name suggests, only applies to localhost.
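For example (ports and host names here are illustrative), when reaching a machine through a locally forwarded port:
ssh -f -N -L 2222:internal-host:22 jumphost
ssh -o NoHostAuthenticationForLocalhost=yes -p 2222 user@localhost
The second connection lands on localhost:2222, so the usual host-key prompt for localhost is skipped without weakening checking for any other host.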
Edit your ~/.ssh/config
nano ~/.ssh/config (if the file doesn't exist yet, don't worry, nano will create it)
Add the following config:
Host 192.168.*
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
If you want to disable this temporarily or without needing to change your SSH configuration files, you can use:
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no username@hostname
Since every other answer explains how to disable the key checking, here are a few ideas that preserve the key checking but avoid the problem:
Use hostnames. This is easy if you control the DHCP server and can assign proper names. After that you can just use the known hostnames; the changing IPs don't matter.
Use hostnames. Even if you don't control the DHCP server, you can use a service like avahi, which will broadcast the name of the server on your local network. It takes care of resolving collisions and other issues.
Use host key signing. After you build a machine, sign its host key with a local CA (you don't need a globally trusted CA for that). After that, you don't need to trust each host separately on your machine; it's enough to trust the signing CA in the known_hosts file (a short sketch follows below). More information is in the ssh-keygen man page or in many blog posts (https://www.digitalocean.com/community/tutorials/how-to-create-an-ssh-ca-to-validate-hosts-and-clients-with-ubuntu)
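A minimal sketch of that CA flow, with all file names, host names, and the validity period as illustrative assumptions:
# on the CA machine: create a CA key, then sign a host's public key with it
ssh-keygen -f host_ca
ssh-keygen -s host_ca -I web01 -h -n web01.example.com,10.0.0.5 -V +52w /etc/ssh/ssh_host_ed25519_key.pub
# point sshd at the certificate it produced:
#   HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

# on each client, trust the CA once in known_hosts instead of per-host keys:
@cert-authority * ssh-ed25519 AAAA...contents-of-host_ca.pub...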
