Bad configuration option: Identityfile - linux

My SSH config was working fine, but recently my GitHub SSH connection stopped working and I also couldn't connect to my private server over SSH. When I try to ssh, I get the following error:
/home/hacku/.ssh/config: line 9: Bad configuration option: Identityfile
/home/hacku/.ssh/config: line 16: Bad configuration option: Identityfile
/home/hacku/.ssh/config: terminating, 2 bad configuration options
And here is my config file:
Host github.com
    User git
    Port 22
    Hostname github.com
    IdentityFile ~/.ssh/github_ssh
    TCPKeepAlive yes

Host linode
    HostName serv_ip_address
    User hackU
    Port 22
    IdentityFile ~/.ssh/private_key
I copied the exact same config file and my private key to another machine and it worked great (Termux, ssh version: OpenSSH_8.6p1, OpenSSL 1.1.1l 24 Aug 2021).
I checked my ssh package version; it was OpenSSH_8.7p1, so I thought maybe the update broke it. I downgraded to OpenSSH_8.6p1 (OpenSSL 1.1.1l 24 Aug 2021), but that didn't work either. Additionally, I tried restarting sshd with
sudo systemctl restart sshd
but none of the above worked.
I'm using Manjaro GNOME edition as my daily driver.
Thanks in advance.

Everything seemed okay in theory, yet it kept throwing this error. After doing some reading, I found this information:
If you use an ssh-agent, ssh will automatically try to use the keys in the agent, even if you have not specified them in ssh_config's IdentityFile (or -i) option. This is a common reason you might get the "Too many authentication failures for user" error. Using the IdentitiesOnly yes option will disable this behavior.
So I removed the IdentityFile option entirely. My final config file now looks like this, and both connections work just fine:
Host github.com
    User git
    Port 22
    Hostname github.com
    TCPKeepAlive yes

Host linode
    HostName server_ip_address
    User hackU
    Port 22
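For anyone who wants to keep an explicit key per host instead of relying on the agent, the note quoted above suggests pairing IdentityFile with IdentitiesOnly yes, so that ssh offers only that key. A minimal sketch, reusing the GitHub key path from my original config (untested in this exact form):
Host github.com
    User git
    Hostname github.com
    IdentityFile ~/.ssh/github_ssh
    IdentitiesOnly yes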
However, the underlying cause is still unknown to me. I would be glad to hear it if someone figures it out.

Related

SSL handshake failed when trying to add remote GitLab account in GitAhead under openSUSE Leap 15

I successfully added a remote (private) GitLab account in GitAhead under Windows 10, but under openSUSE Leap 15 (Linux) I get "Connection failed: SSL handshake failed".
Note that I can clone, pull, fetch, commit, and push in repositories from the GitLab instance I want to add. I also tried resetting the SSH host key with:
$ ssh-keygen -R gitlab.mydomain.net
# Host gitlab.mydomain.net found: line 31
/home/user/.ssh/known_hosts updated.
Original contents retained as /home/user/.ssh/known_hosts.old
$ ssh git@gitlab.mydomain.net
The authenticity of host 'gitlab.mydomain.net (<IP>)' can't be established.
ECDSA key fingerprint is SHA256:**************.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'gitlab.mydomain.net,<IP>' (ECDSA) to the list of known hosts.
Welcome to GitLab, @UserName!
Connection to gitlab.mydomain.net closed.
But it still does not work. Does anyone know if there is something to configure to allow it under Linux?
Thanks
For a start, check the permissions on the server side. The home directory as well as the .ssh directory should be chmod 700, and the key files should be restricted as well (typically chmod 600).
You should aim for passwordless login on your server. As soon as this works, GitAhead should be fine. If you have a git-shell in your server-side /etc/passwd, temporarily replace it with /bin/sh so you can send your pubkey: on the client, run ssh-copy-id -i yourprivatekeyfile somerandomgituser@ipofyourgitserver. After that, if successful, you can set the /etc/passwd line back to the git-shell.
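As a rough sketch of that permission check on the server side (a standard home-directory layout is assumed; adjust the key file names to yours):
$ chmod 700 ~ ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys
$ ls -ld ~ ~/.ssh ~/.ssh/authorized_keys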

How to disable ssh-agent forwarding

ssh-agent forwarding can be accomplished with ssh -A ....
Most references I have found state that the local machine must configure ~/.ssh/config to enable AgentForwarding with the following code:
Host <trusted_ip>
    ForwardAgent yes

Host *
    ForwardAgent no
However, with this configuration I am still able to see my local machine's keys when tunneling into a remote machine with ssh -A user@remote_not_trusted_ip and running ssh-add -l.
From the configuration presented above, I would expect ssh-agent forwarding to fail and the keys of the local machine not to be listed by ssh-add -l.
Why is the machine at remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
Host *
    ForwardAgent no
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in the ~/.ssh/config?
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in the ~/.ssh/config?
It is the default behavior: if you do not allow it in ~/.ssh/config, it will not be forwarded. But command-line arguments have higher priority, so they override what is defined in the configuration, as explained in the manual page for ssh_config:
ssh(1) obtains configuration data from the following sources in the following order:
command-line options
user's configuration file (~/.ssh/config)
system-wide configuration file (/etc/ssh/ssh_config)
So, as already said, you just need to pass the correct arguments to ssh.
So back to the questions:
Why is the machine at remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
Host *
    ForwardAgent no
Because the command-line argument -A has higher priority than the configuration files.
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in the ~/.ssh/config?
Do not use the -A command-line option if you do not want to forward your ssh-agent; use the -a command-line option instead.
You are using the -A option to connect. man ssh says:
-A Enables forwarding of the authentication agent connection.
You should connect without -A, just using:
ssh user@remote_not_trusted_ip
CLI arguments take priority over the ssh config file.
By the way, if you want to connect to your trusted IP without forwarding, you can also use:
ssh -a user@trusted_ip
-a Disables forwarding of the authentication agent connection.
This is over a year old, but I encountered the same issue and landed on a config option that works.
I had a problem where, after connecting from my home computer to my work computer, Git commands no longer worked. I figured out that it was because the connecting home computer's key was offered via the forwarded agent, and that key was not configured for that GitHub account.
The -a command-line option fixed the problem by not forwarding the authentication agent connection. I also thought that the equivalent ~/.ssh/config option would be this:
ForwardAgent no
When that didn't work, I looked for other configuration options and finally found one that did:
IdentityAgent none
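For reference, a minimal sketch of how that option can be scoped in ~/.ssh/config so the agent is ignored by default but still used for a trusted host (the host name is a placeholder):
Host trusted_ip
    IdentityAgent SSH_AUTH_SOCK

Host *
    IdentityAgent none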
This part of the man-page is crucial:
Since the first obtained value for each parameter is used, more host-specific declarations should be given near the beginning of the file, and general defaults at the end.
Put your general Host * block (the default) at the end of .ssh/config, and the specific Host entries with their ForwardAgent settings at the start.
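To check which value actually wins for a given host, ssh -G (available in reasonably recent OpenSSH releases) prints the effective configuration; with the example config above it should report:
$ ssh -G remote_not_trusted_ip | grep -i forwardagent
forwardagent no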
Not an answer to the question, and maybe just semantics:
Why is the machine at remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
My understanding is that authentication keys are never "forwarded" to a remote computer. Rather, agent forwarding relays authentication challenges from the remote server back to the computer that holds the private key, through whatever chain of remote computers the ssh connection runs through.

X11 forwarding request failed on channel 0

When I do "ssh -X abcserver", I got message "X11 forwarding request failed on channel 0".
I checked online and it was suggested to solve it by switching "X11UseLocalhost no" to "X11UseLocalhost yes".
However, both my manager and I don't have this administrative privilege. I am wondering, except this solution, whether there is another option to solve the issue ? I also don't have sudo privilege to directly install X11 on the server.
My local platform is:
Linux version 3.16.0-4-amd64 (debian-kernel@lists.debian.org)
(gcc version 4.8.4 (Debian 4.8.4-1) ) #1 SMP Debian 3.16.7-ckt25-2+deb8u3 (2016-07-02)
The remote platform is:
Linux version 3.13.0-88-generic (buildd@lgw01-16)
(gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) )
#135-Ubuntu SMP Wed Jun 8 21:10:42 UTC 2016
Adding the -v option to ssh when trying to log in will give a lot of debug information, which might give a clue to what exactly the problem is, for instance:
debug1: Remote: No xauth program; cannot forward with spoofing.
In my case, installing xauth on the server fixed the issue.
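For example, to see only the X11-related lines of the debug output (the abcserver host name is taken from the question):
$ ssh -v -X abcserver 2>&1 | grep -iE 'x11|xauth'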
I had to edit the sshd config file on the remote server to fix the issue. It worked on Ubuntu 16.04 Server:
$ sudo vim /etc/ssh/sshd_config
Set `X11UseLocalhost no`
Save the file.
$ sudo service sshd restart
$ exit
Now it works!
$ ssh -X user#remotehost
$ xclock
sudo apt install xauth
Change the line #AddressFamily any to AddressFamily inet in /etc/ssh/sshd_config
sudo service ssh restart
This is enough on Ubuntu 18.04 LTS.
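A quick way to confirm the relevant server settings before restarting, as a sketch:
$ grep -E '^(X11Forwarding|X11UseLocalhost|AddressFamily)' /etc/ssh/sshd_config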
After logging in with ssh -X (or after activating the PuTTY / KiTTY option "Enable X11 forwarding") you should see that the environment variable DISPLAY is automatically set to localhost:10.0 or similar. After the first successful login (with functional X11 forwarding), the file .Xauthority will be generated, which is another positive sign of success.
If you are interested in seeing and understanding the details of X11 forwarding within your session, you can try lsof -i -P | grep ssh.
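Two quick checks on the remote side after logging in, as a sketch (the expected DISPLAY value, localhost:10.0 or similar, follows from the note above):
$ echo $DISPLAY
localhost:10.0
$ xauth list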
1. Make sure that during ssh -X root@server you have root permission.
2. Update /etc/ssh/sshd_config and make sure this line is uncommented:
X11Forwarding yes
3. systemctl restart sshd
4. Exit from the server.
5. ssh -X root@server
6. virt-manager
In my case, as superuser, editing /etc/ssh/sshd_config on the remote host and changing the following line fixed it.
From
#X11Forwarding no
to
X11Forwarding yes
Then run pkill -HUP sshd on the remote host to make sshd reload its config; note that this also closes the current sshd session.
After X11 forwarding suddenly stopped working, with no change other than moving the ssh server to another Wi-Fi network, I followed the answer to a seemingly completely different question and it worked.
In other words, the solution for me was to specify AddressFamily inet in /etc/ssh/sshd_config.

putty connect successfully, while pscp run into "server refused our key"

I created a SUSE Linux EC2 instance in Amazon AWS.
Accessing the instance with PuTTY works fine (using the key-pair file, let's call it key.pem, which I have converted to key.ppk); I log in to the host as the 'root' user without problems.
login as: root
Authenticating with public key "imported-openssh-key"
Last login: Tue Apr 15 15:17:55 2014 from x.x.x.x
SUSE Linux Enterprise Server 11 SP3 x86_64 (64-bit)
As "root" use the:
- zypper command for package management
- yast command for configuration management
Management and Config: https://www.suse.com/suse-in-the-cloud-basics
Documentation: http://www.suse.com/documentation/sles11/
Have a lot of fun...
However, when I try to use 'pscp' to copy files, it always fails with:
Server refused our key
Using Keyboard-interactive authentication.
Password:
My 'pscp' command usage is as follows:
C:\Users\t440s\Downloads\putty\pscp.exe -i key.pps test.txt root@myhost.compute.amazonaws.com:/tmp
Actually, I do not know my password.
And I checked the following section of /etc/ssh/sshd_config; it seems root does not need a password:
# Authentication:
#LoginGraceTime 2m
PermitRootLogin without-password
PasswordAuthentication no
I am using Windows 8.
Please help me.
I would suggest you use the Git Bash tool (http://git-scm.com/download/win); it's free and open source. Download and install it, and you get a Unix-like environment on Windows.
Now, in Git Bash, type ls to check where you are, and then run this command:
scp -i /c/Users/USERNAME/Download/key.pem filename.txt ec2-user@ec2-81.1821.1..eu-west-1.compute.amazonaws.com:/tmp
You can replace the user ec2-user with ubuntu or whatever user is associated with that machine; I don't think root will work. Let me know if that works for you.
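Alternatively, if you want to stay with the PuTTY tools, note that pscp expects the PuTTY-format key (.ppk) with -i. A sketch, assuming the converted key from the question is key.ppk and the same host and user:
C:\Users\t440s\Downloads\putty\pscp.exe -i key.ppk test.txt root@myhost.compute.amazonaws.com:/tmp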

Gitolite Error: gitolite-admin not a repo

Quick Note: Before anyone points it out, I did originally post this on Server Fault, but after doing so I realized this site may be more appropriate. Sorry for the "double post".
I had installed gitolite about 6 months ago and all of a sudden I started getting this error:
fatal: 'gitolite-admin' does not appear to be a git repository
fatal: The remote end hung up unexpectedly
I have read a lot of other topics on this and done everything they suggested, from removing the authorized keys to adding a config file in ~/.ssh. Mine is below:
Host myhost
    User git
    Hostname myhost
    Port 22
    IdentityFile ~/.ssh/id_rsa

Host mygit
    User git
    Hostname myhost
    Port 22
    IdentityFile ~/.ssh/obto
Sadly, though, I'm still getting the fatal error. Does anyone have any ideas?
I solved this issue by doing what you just said: I created a file called config on my client machine:
vim ~/.ssh/config
Host 192.168.0.14
    User git
    Hostname 192.168.0.14
    Port 22
    IdentityFile ~/.ssh/userX
Here userX is the private key file; its public part (userX.pub) is the key you gave to gitolite. Then I cloned the gitolite-admin repository on my client machine by running:
git clone 192.168.0.14:gitolite-admin
Cloning into 'gitolite-admin'... Enter passphrase for key
'/home/userX':
Now you enter the passphrase of your key, and that's it. I hope this helps.
Regards.
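To double-check which key actually reaches gitolite, its built-in info command can be run over the configured host alias; a sketch assuming the mygit alias from the question above:
$ ssh mygit info
$ git clone mygit:gitolite-admin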
