Basic SSH Connection using Paramiko fails - python-3.x

I'm learning the basics of Paramiko, and for that purpose I set up a basic lab where an Ubuntu VM connects to a router emulated in EVE-NG.
The first step was to generate a key pair on the client via ssh-keygen.
Next, I loaded the public key onto the remote device (the Cisco router) using the following commands:
ip ssh pubkey-chain
username administrator
key-hash ssh-rsa 97D0E9B5630D05D78EA9531053124BFF
Right after that I was able to login to the Cisco router from the Ubuntu VM:
$ ssh administrator@192.168.1.1
7206_1.rt#
Then, from the same client I started a Python shell session and tried to establish an SSH session using Paramiko:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('192.168.1.1', username='administrator', password='password', key_filename='/home/administrator/.ssh/id_rsa.pub')
But this time I got the following exception:
Exception: Illegal info request from server
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/paramiko/transport.py", line 2109, in run
handler(self.auth_handler, m)
File "/usr/local/lib/python3.8/dist-packages/paramiko/auth_handler.py", line 661, in _parse_userauth_info_request
raise SSHException("Illegal info request from server")
paramiko.ssh_exception.SSHException: Illegal info request from server
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/paramiko/client.py", line 435, in connect
self._auth(
File "/usr/local/lib/python3.8/dist-packages/paramiko/client.py", line 764, in _auth
raise saved_exception
File "/usr/local/lib/python3.8/dist-packages/paramiko/client.py", line 751, in _auth
self._transport.auth_password(username, password)
File "/usr/local/lib/python3.8/dist-packages/paramiko/transport.py", line 1498, in auth_password
raise SSHException("No existing session")
paramiko.ssh_exception.SSHException: No existing session
The remote router SSH debug shows that authentication failed:
*Aug 16 01:18:07.295: SSH2 0: MAC compared for #5 :ok
*Aug 16 01:18:07.299: SSH2 0: input: padlength 16 bytes
*Aug 16 01:18:07.299: SSH2 0: Using method = publickey
*Aug 16 01:18:07.307: SSH2 0: send:packet of length 432 (length also includes padlen of 4)
*Aug 16 01:18:07.307: SSH2 0: computed MAC for sequence no.#5 type 60
*Aug 16 01:18:07.311: SSH2 0: Authenticating 'administrator' with method: publickey
*Aug 16 01:18:07.327: SSH2 0: SSH ERROR closing the connection
*Aug 16 01:18:07.331: SSH2 0: send:packet of length 80 (length also includes padlen of 15)
*Aug 16 01:18:07.331: SSH2 0: computed MAC for sequence no.#6 type 1
*Aug 16 01:18:07.335: SSH2 0: Pubkey Authentication failed for user administrator
*Aug 16 01:18:07.335: SSH0: password authentication failed for administrator
At this point I can't tell whether the issue is on the Ubuntu VM or on the router, as everything works fine when connecting directly from the VM to the router without Paramiko.
Thanks.

OK, it looks like by default Paramiko searches for discoverable private key files in ~/.ssh/. That's fine when connecting to another server, but since the target here is a router, this behavior needs to be disabled by setting look_for_keys to False. That fixed the issue (as long as this is not a production environment, which is my case).
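For reference, here is a minimal sketch of the connect call with that flag; the address, username and password are the placeholders from the question, and allow_agent=False is an extra assumption to also keep a running SSH agent out of the picture:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Only offer the credentials passed below; do not scan ~/.ssh/ for key files.
ssh.connect('192.168.1.1',
            username='administrator',
            password='password',
            look_for_keys=False,   # the fix described above
            allow_agent=False)     # assumption: also skip any running SSH agent
ssh.close()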

In case it helps anyone else, I was receiving this same "Illegal info request from server" error because the password being used was flagged as needing to be updated. I only noticed this when logging in manually via WinSCP.

"Authentication is done via public key at /home/administrator/.ssh/id_rsa.pub"
Not quite: it is done using the private key of the local user you are using when typing:
ssh administrator@192.168.1.1
'administrator' is the name of the remote account used to open a session on the remote server 192.168.1.1.
The authentication, on the remote side, is done against ~administrator/.ssh/authorized_keys (on the remote machine), which is checked to see whether the local ~/.ssh/id_rsa.pub public key was properly registered there.
Your local account might also be 'administrator', but the account under which you run the Python shell might not be the same one.
When you see
Authenticating 'administrator' with method: publickey
SSH is talking about the remote 'administrator' account on the remote server, irrespective of the local user account you are in.
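Applied to the Paramiko call from the question, that means pointing key_filename at the local private key rather than the .pub file. A sketch, assuming the default ssh-keygen paths on the Ubuntu VM:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# 'administrator' is the remote account on the router; the key file is the
# local private key written by ssh-keygen, not the public id_rsa.pub.
ssh.connect('192.168.1.1',
            username='administrator',
            key_filename='/home/administrator/.ssh/id_rsa',
            look_for_keys=False)
ssh.close()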

Related

DenyHosts on CentOS 7: option DENY_THRESHOLD_INVALID does not work

Using CentOS 7 and DenyHosts 2.9, I noticed some strange behavior.
My config is set to:
DENY_THRESHOLD_INVALID = 3
DENY_THRESHOLD_VALID = 10
Which, in my understanding, means: after 3 failed login attempts by NON-EXISTING users from host X, deny that host.
After 10 failed login attempts by EXISTING users from host X, deny that host.
While the latter works just fine, the DENY_THRESHOLD_INVALID = 3 setting does not work.
What I noticed is that in /var/log/secure, which DenyHosts parses, logins from non-existing accounts and logins from existing accounts that use the wrong password are logged differently.
Aug 10 12:32:42 ftp sshd[27176]: Invalid user adminx from xxx.128.30.135 port 42800
Aug 10 12:32:42 ftp sshd[27176]: input_userauth_request: invalid user adminx [preauth]
Aug 10 12:32:42 ftp sshd[27176]: Connection closed by xxx.128.30.135 port 42800 [preauth]
vs.
Aug 10 12:33:46 ftp sshd[27238]: Failed password for exchange from xxx.128.30.135 port 42802 ssh2
Does anyone know if DenyHosts has problems parsing the /var/log/secure file on CentOS for non-existing accounts vs. existing accounts that use wrong passwords?
The DenyHosts debug log also does not say anything; it seems to ignore the login attempts from non-existent users.
Any help would be appreciated. Thanks.
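For what it's worth, the two sshd message formats from the excerpts above are easy to tell apart; here is a rough sketch of the distinction (not DenyHosts' actual parser):
import re

# Example lines from /var/log/secure, as quoted above.
lines = [
    'Aug 10 12:32:42 ftp sshd[27176]: Invalid user adminx from xxx.128.30.135 port 42800',
    'Aug 10 12:33:46 ftp sshd[27238]: Failed password for exchange from xxx.128.30.135 port 42802 ssh2',
]

# Non-existing accounts show up as "Invalid user ...", existing accounts with a
# wrong password as "Failed password for ..." -- two different message formats.
invalid_user = re.compile(r'Invalid user (\S+) from (\S+)')
failed_password = re.compile(r'Failed password for (\S+) from (\S+)')

for line in lines:
    m = invalid_user.search(line)
    if m:
        print('non-existing account:', m.group(1), 'from', m.group(2))
        continue
    m = failed_password.search(line)
    if m:
        print('existing account, wrong password:', m.group(1), 'from', m.group(2))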

How do I get Ansible to ping other AWS servers?

I am using RHEL 7.x as my control server. I have installed Ansible 2.2.2.0. The managed nodes are running CentOS 6. I cannot upgrade Ansible because of an incompatibility.
Without Ansible, I can ping the managed servers from the control server. From the control server I can SSH to the managed nodes without password authentication. With Ansible from the control server, I cannot ping the managed servers. Why can't I use basic Ansible operations (e.g., ansible -m ping all)?
Here are some details. As root, I run this:
ansible -m ping all -vvvv
I saw this:
| UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data
/etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying
options for *\r\npercent_expand: unknown key %C\r\n",
"unreachable": true
So I rebooted.
I tried it again. I saw this:
[WARNING]: scp transfer mechanism failed on [x.y.z.z]. Use ANSIBLE_DEBUG=1 to see detailed information
x.y.z.z | FAILED! => {
"failed": true,
"msg": "failed to transfer file to Please login as the user \"centos\" rather than the user \"root\"./ping.py:\n\nExecuting: program /usr/bin/ssh host x.y.z.z, user (unspecified), command scp -v -t 'Please login as the user \"centos\" rather than the user \"root\"./ping.py'\nOpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10256\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 4\r\nPlease login as the user \"centos\" rather than the user \"root\".\n" }
[WARNING]: scp transfer mechanism failed on [z.x.y.w]. Use ANSIBLE_DEBUG=1 to see detailed information
z.x.y.w | FAILED! => {
"failed": true,
"msg": "failed to transfer file to Please login as the user \"centos\" rather than the user \"root\"./ping.py:\n\nExecuting: program /usr/bin/ssh host z.x.y.w, user (unspecified), command scp -v -t 'Please login as the user \"centos\" rather than the user \"root\"./ping.py'\nOpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10259\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 4\r\nPlease login as the user \"centos\" rather than the user \"root\".\n" }
I then switched to the Linux user "centos" (su centos) and tried the Ansible commands again. I ran this command:
ansible -m ping all -vvvv
I saw this:
x.y.z.z | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/centos/.ansible/cp/ansible-ssh-x.y.z.z-22-centos\" does not exist\r\ndebug2: ssh_connect: needpriv 0\r\ndebug1: Connecting to x.y.z.z [x.y.z.z] port 22.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: fd 3 clearing O_NONBLOCK\r\ndebug1:
... partially removed because it "looked like spam"
est\r\ndebug2: we sent a publickey packet, wait for reply\r\ndebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic\r\ndebug1: Trying private key: /home/centos/.ssh/id_dsa\r\ndebug3: no such identity: /home/centos/.ssh/id_dsa: No such file or directory\r\ndebug1: Trying private key: /home/centos/.ssh/id_ecdsa\r\ndebug3: no such identity: /home/centos/.ssh/id_ecdsa: No such file or directory\r\ndebug1: Trying private key: /home/centos/.ssh/id_ed25519\r\ndebug3: no such identity: /home/centos/.ssh/id_ed25519: No such file or directory\r\ndebug2: we did not send a packet, disable method\r\ndebug1: No more authentication methods to try.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n",
"unreachable": true }
My ansible.cfg file looks like this:
[defaults]
host_key_checking = False
library = ../extra_modules
roles_path = ../roles
pipelining = True
remote_user = centos
forks = 20
log_path = ./ansible.log
[ssh_connection]
control_path = ~/.ssh/ansible-ssh-%%C
What is wrong? Why can't I ping the Ansible managed nodes?
Can you please share your Ansible hosts/inventory file and your .ssh folder (ls ~/.ssh)?
Also, please try something like the following, passing the SSH private key and the user name via the CLI:
ansible.cfg
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o StrictHostKeyChecking=no
control_path = /tmp/ansible-ssh-%%h-%%p-%%r
command:
ansible -m ping all -i <inventory_file> --private-key=~/.ssh/<your pem key.pem> -u <login user ubuntu/centos>

Bro 2.4.1 generating E-mail notice for SSH Bruteforce Attack

I'm having trouble generating an email notice with Bro (v2.4.1) when someone attempts an SSH brute-force attack on my server. I have a Bro script like this, which redefines the maximum login attempts to 5 per 24 hours:
@load protocols/ssh/detect-bruteforcing
redef SSH::password_guesses_limit=5;
redef SSH::guessing_timeout=1440 mins;
hook Notice::policy(n: Notice::Info)
{
if ( n$note == SSH::Password_Guessing && /192\.168\.178\.16/ in n$sub )
add n$actions[Notice::ACTION_EMAIL];
}
where 192.168.178.16 is the local IP of my server, and I've made sure the script gets loaded by including it in $PREFIX/site/local.bro. The output of broctl scripts shows that the script is loaded just fine on startup. However, I never receive any email notice of SSH brute-forcing attacks.
Connection summaries, dropped packets and invalid SSL certificate notices are emailed just fine, so it's not an email configuration issue. When I check the SSH log output like so:
sudo cat /opt/bro/logs/current/ssh.log | bro-cut -d ts uid id.orig_h id.orig_p id.resp_h id.resp_p version auth_success direction client server cipher_alg
The 6 failed login attempts (which I generated for testing this) are logged just fine in /opt/bro/logs/current/ssh.log:
2016-11-11T14:45:08+0100 CRoENl2L4n5RIkMd0l 84.241.*.* 43415 192.168.178.16 22 2 - INBOUND SSH-2.0-JuiceSSH SSH-2.0-OpenSSH_6.7p1 Raspbian-5+deb8u3 aes128-ctr
2016-11-11T14:45:13+0100 CMflWI2ESA7KVZ3Cmk 84.241.*.* 43416 192.168.178.16 22 2 - INBOUND SSH-2.0-JuiceSSH SSH-2.0-OpenSSH_6.7p1 Raspbian-5+deb8u3 aes128-ctr
2016-11-11T14:45:17+0100 CZuyQO2NxvmpsmsWwg 84.241.*.* 43417 192.168.178.16 22 2 - INBOUND SSH-2.0-JuiceSSH SSH-2.0-OpenSSH_6.7p1 Raspbian-5+deb8u3 aes128-ctr
2016-11-11T14:45:20+0100 CC86Fi3IGZIFCoot2l 84.241.*.* 43418 192.168.178.16 22 2 - INBOUND SSH-2.0-JuiceSSH SSH-2.0-OpenSSH_6.7p1 Raspbian-5+deb8u3 aes128-ctr
2016-11-11T14:45:25+0100 CHqcJ93qRhONQC1bm4 84.241.*.* 43419 192.168.178.16 22 2 - INBOUND SSH-2.0-JuiceSSH SSH-2.0-OpenSSH_6.7p1 Raspbian-5+deb8u3 aes128-ctr
2016-11-11T14:45:28+0100 CdV0xh1rI4heYaFDH2 84.241.*.* 43420 192.168.178.16 22 2 - INBOUND SSH-2.0-JuiceSSH SSH-2.0-OpenSSH_6.7p1 Raspbian-5+deb8u3 aes128-ctr
However, I never get any email notice of this happening. The only reason I can think of is that I have password login over SSH disabled, so maybe the login attempts without a private key are not firing the ssh_failed_login events in Bro? The auth_success column in the table above shows a "-" for the failed login attempts whereas a successful login shows a "T", so maybe that should be an "F" in order for the event to fire?
Any help or suggestions are greatly appreciated!
Due to SSH being encrypted, we've had to resort to heuristics for detecting successful and unsuccessful authentications. Those heuristics have improved over time but are still far from perfect. If the "auth_success" column is unset, as it is in the examples you provided, it means that Bro was unable to guess whether the login was successful or not.
The reason the brute-force detection script isn't working is that it never detects an unsuccessful login. Your suspicion at the end of your question is correct.

SSH command is not working when connecting from Linux to Solaris

I am trying to connect to a remote Solaris machine from a Linux server using SSH, but I am not able to connect to the Solaris machine. I am using the below ssh command to connect to the Solaris machine:
ssh <host_name>
After giving this command, I am not getting any prompt for a username and password. Is this a limitation of Linux-to-Solaris connections?
The output is:
root@host> ssh -v user@solaris_host
OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to solaris_host [solaris_host] port 22.
debug1: connect to address solaris_host port 22: Connection timed out
ssh: connect to host solaris_host port 22: Connection timed out
Go over the following steps:
Check the network connectivity to your target, e.g. with ping.
Check if port 22 is open on your remote host, e.g. nmap -A 192.168.0.5/32 -p 22 (or see the sketch after this list for an nmap-free check).
Check if the SSH daemon is running on your target: svcs ssh.
Come back if the problem still exists.
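If nmap isn't available, a few lines of Python can do the port check from step 2 above; solaris_host is the placeholder name from the question:
import socket

def ssh_port_open(host, port=22, timeout=5.0):
    # Returns True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(ssh_port_open('solaris_host'))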

Gitlab 6.9.2 access denied for deploy key

I am trying to clone a repository via SSH. The public key for my user is set as a deploy key in the project.
I got this error message:
Access denied.
fatal: The remote end hung up unexpectedly
Here is my /var/log/secure for this attempt
Jul 16 11:09:54 gitlab sshd[32217]: Accepted publickey for git from <IP> port 55499 ssh2
Jul 16 11:09:54 gitlab sshd[32217]: pam_unix(sshd:session): session opened for user git by (uid=0)
Jul 16 11:09:54 gitlab sshd[32219]: Received disconnect from <IP>: 11: disconnected by user
Jul 16 11:09:54 gitlab sshd[32217]: pam_unix(sshd:session): session closed for user git
And here is /var/log/gitlab/gitlab-shell/gitlab-shell.log
[2014-07-16T11:09:54.407037 #32220] ERROR -- : API call <GET https://gitlab//api/v3/internal/allowed?action=git-upload-pack&ref=_any&project=group%2Fproject&forced_push=false&key_id=5> failed: 404 => <{"message":"404 Not found"}>.W,
[2014-07-16T11:09:54.407161 #32220] WARN -- : gitlab-shell: Access denied for git command <git-upload-pack 'group/project.git'> by user with key key-5.
Can you please help me to figure out what's wrong?
For many other deploy keys, everything works just fine.
Today I ran into the same behavior as you describe. I found an open issue in gitlabhq (https://github.com/gitlabhq/gitlabhq/issues/6908).
The problem is that the same public key can end up listed twice in /home/git/.ssh/authorized_keys. In my case I had deleted the deploy key and recreated it to give it a better name, and the old key was not removed from the authorized_keys file.
After I deleted the deploy key and the corresponding lines in the authorized_keys file, and then recreated the deploy key in my project, access worked again.
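If you want to check for such duplicates before deleting anything, here is a small sketch (assuming GitLab's default /home/git/.ssh/authorized_keys location) that counts how often each key appears:
from collections import Counter

path = '/home/git/.ssh/authorized_keys'

keys = []
with open(path) as f:
    for line in f:
        parts = line.split()
        # The key material is the base64 blob right after the key type
        # (e.g. "ssh-rsa"); GitLab prefixes each line with command="..." options.
        for i, part in enumerate(parts):
            if part.startswith(('ssh-', 'ecdsa-')) and i + 1 < len(parts):
                keys.append(parts[i + 1])
                break

for key, count in Counter(keys).items():
    if count > 1:
        print('key listed %d times: %s...' % (count, key[:30]))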
