Fabric keeps asking for password over SSH connection - Azure

I'm trying to connect to a Windows Azure instance using Fabric, but even though I configure the SSH connection to execute commands, Fabric keeps asking for a password.
This is my fabfile:
from fabric.api import env, run

def azure1():
    env.hosts = ['host.cloudapp.net:60770']
    env.user = 'adminuser'
    env.key_filename = './azure.key'

def what_is_my_name():
    run('whoami')
I run it as:
fab -f fabfile.py azure1 what_is_my_name
or
fab -k -f fabfile.py -i azure.key -H adminuser@host.cloudapp.net:60770 -p password what_is_my_name
Neither works; it keeps asking for the user's password even though I enter it correctly.
Executing task 'what_is_my_name'
run: whoami
Login password for 'adminuser':
Login password for 'adminuser':
Login password for 'adminuser':
Login password for 'adminuser':
If I try to connect directly with ssh, it works perfectly.
ssh -i azure.key -p 60770 adminuser@host.cloudapp.net
I've tried the advice given in other questions (q1 q2 q3), but nothing works.
Any idea what I am doing wrong?
Thank you

I finally found that the problem is due to how the public/private key pair was generated.
I followed the steps in the Windows Azure guide, where the keys are generated using openssl; that process produces a public key stored in a PEM file which you must upload to your instance during creation.
The problem is that the resulting private key is not correctly recognized by Paramiko, so Fabric won't work. If you try to open an SSH connection using Paramiko from the Python interpreter:
>>> import paramiko, os
>>> paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG)
>>> ssh = paramiko.SSHClient()
>>> ssh.load_host_keys('private_key_file.key') # private key file generated using openssl
>>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
>>> ssh.connect("web1.cloudapp.net", port=56317)
Gives me the error:
DEBUG:paramiko.transport:Trying SSH agent key a9d8dd41609191ebeedbe8df768ad8c9
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../paramiko/client.py", line 337, in connect
    self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys)
  File ".../paramiko/client.py", line 528, in _auth
    raise saved_exception
paramiko.PasswordRequiredException: Private key file is encrypted
This happens even though the key file isn't encrypted.
To solve this, I created the key pair using OpenSSH and then converted the public key to PEM format to upload it to Azure:
# Create key with openssh
ssh-keygen -t rsa -b 2048 -f private_key_file.key
# extract the public key and store it in X.509 PEM format
openssl req -x509 -days 365 -new -key private_key_file.key -out public_key_file.pem
# upload public_key_file.pem during instance creation
# check the connection to the instance
ssh -i private_key_file.key -p 63534 adminweb@host.cloudapp.net
This solved the problem.
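For completeness, here is a minimal sketch of verifying the new key from Python with Paramiko before handing it to Fabric (the hostname, port, user and key path are placeholders taken from the commands above):

import paramiko

# placeholders based on the commands above - adjust to your instance
HOST = 'host.cloudapp.net'
PORT = 63534
USER = 'adminweb'
KEY = 'private_key_file.key'  # private key generated with ssh-keygen

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# pass the private key explicitly instead of relying on the SSH agent
ssh.connect(HOST, port=PORT, username=USER, key_filename=KEY)
stdin, stdout, stderr = ssh.exec_command('whoami')
print(stdout.read().decode())
ssh.close()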

To debug Fabric's SSH connections, add these lines to your fabfile:
import paramiko, os
paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG)
This will print all of Paramiko's debug messages. Paramiko is the SSH library that Fabric uses.
Note that since Fabric 1.4 you have to explicitly enable use of your SSH config:
env.use_ssh_config = True
(Note: I'm absolutely certain that my fabfile used to work with Fabric > 1.5 without this option, but it doesn't now that I've upgraded to 1.10.)
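Putting the pieces together, a minimal fabfile for the setup described above might look like this sketch (host, user and key path are placeholders from the question):

import paramiko
from fabric.api import env, run

# print Paramiko's debug messages to see why authentication fails
paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG)

env.use_ssh_config = True                  # honour ~/.ssh/config (Fabric >= 1.4)
env.hosts = ['host.cloudapp.net:60770']    # placeholder host and port
env.user = 'adminuser'                     # placeholder user
env.key_filename = './azure.key'           # private key generated with ssh-keygen

def what_is_my_name():
    run('whoami')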

Related

SSHTunnel searching for default private key (id_rsa) instead of the ssh_pkey I specify

I am using macOS, and something in my system config keeps telling Python's SSHTunnelForwarder to use my default id_rsa file instead of the file I specify in the configuration below:
>>> from sshtunnel import SSHTunnelForwarder
>>> db_tunnel = SSHTunnelForwarder(
...     ssh_host="localhost",
...     ssh_username="username",
...     ssh_port=1111,
...     ssh_pkey="~/.ssh/test",
...     remote_bind_address=("my-remote-database-domain", 3306)
... )
Gives me this error message:
2022-03-23 13:15:35,715| ERROR | Password is required for key /Users/me/.ssh/id_rsa
Where can I edit the config to override this search for the wrong key?
My ~/.ssh/config:
Host *
  AddKeysToAgent yes
  UseKeychain yes
  IdentityFile ~/.ssh/id_rsa
  IdentityFile ~/.ssh/test
And yes, I have run ssh-add ~/.ssh/test to add this key.
What else could be confusing it?
This error message doesn't matter if you have supplied another key for the connection; add more logging to confirm that your program continues to run.
Also make sure the host in your MySQL connection parameters is 127.0.0.1:
import pymysql

db = pymysql.connect(host="127.0.0.1",
                     user=user,
                     password=password,
                     database='db',
                     port=db_tunnel.local_bind_port)
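For reference, a minimal end-to-end sketch of the tunnel plus the MySQL connection might look like this (host names, ports, user name, password and key path are placeholders from the question):

import pymysql
from sshtunnel import SSHTunnelForwarder

# placeholders - adjust to your own environment
db_tunnel = SSHTunnelForwarder(
    ssh_host="localhost",
    ssh_username="username",
    ssh_port=1111,
    ssh_pkey="~/.ssh/test",
    remote_bind_address=("my-remote-database-domain", 3306),
)
db_tunnel.start()  # the tunnel must be started before connecting

# connect through the local end of the tunnel, not the remote host
db = pymysql.connect(host="127.0.0.1",
                     user="username",
                     password="password",
                     database="db",
                     port=db_tunnel.local_bind_port)
try:
    with db.cursor() as cursor:
        cursor.execute("SELECT 1")
        print(cursor.fetchone())
finally:
    db.close()
    db_tunnel.stop()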

NixOps: How to deploy to an existing NixOS VM?

I have almost the same problem as in this question, but it was never answered:
nixops: how to use local ssh key when deploying on machine with existing nixos (targetEnv is none)?
I'm not using Terraform though. Just NixOS + NixOps. So far, I:
Created a new VM on Vultr
Did a standard NixOS install from the current iso (20.09 pre something), setting a root password
Enabled ssh with root password authentication and did a nixos-rebuild switch
Manually generated an ssh keypair on my laptop
sshed into the VM with the password and added the public key to /root/.ssh/authorized_keys
Now I can ssh into the VM manually with the new key, as expected:
ssh -i .secrets/vultrtest1_rsa root@XXX.XXX.XXX.XXX
Cool. Next, I copied the existing NixOS config files to my laptop and tried to wire them up to NixOps. I tried a minimal test1.nix, as well as adding the deployment."none" and/or users.users.root.openssh sections below.
vultrtest1
├── configuration.nix
└── hardware-configuration.nix
test1.nix
# test1.nix
{
  network.description = "vultr test 1";
  network.enableRollback = true;

  vultrtest1 = { config, pkgs, ... }: {
    deployment.targetHost = "XXX.XXX.XXX.XXX";
    imports = [ ./vultrtest1/configuration.nix ];

    # deployment.targetEnv = "none"; # existing nixos vm

    # same result with or without this section:
    deployment."none" = {
      sshPrivateKey = builtins.readFile ./secrets/vultrtest1_rsa;
      sshPublicKey = builtins.readFile ./secrets/vultrtest1_rsa.pub;
      sshPublicKeyDeployed = true;
    };

    # same result with or without this:
    users.users.root.openssh.authorizedKeys.keyFiles = [ ./secrets/vultrtest1_rsa.pub ];
  };
}
In all cases, when I try to create and deploy the network, NixOps tries to generate another SSH key and then fails to log in with it:
$ nixops create test1.nix -d test1
created deployment ‘b4ac25fa-c842-11ea-9a84-00163e5e6c00’
b4ac25fa-c842-11ea-9a84-00163e5e6c00
$ nixops list
+--------------------------------------+-------+------------------------+------------+------+
| UUID | Name | Description | # Machines | Type |
+--------------------------------------+-------+------------------------+------------+------+
| b4ac25fa-c842-11ea-9a84-00163e5e6c00 | test1 | Unnamed NixOps network | 0 | |
+--------------------------------------+-------+------------------------+------------+------+
$ nixops deploy -d test1
vultrtest1> generating new SSH keypair... done
root@XXX.XXX.XXX.XXX: Permission denied (publickey,keyboard-interactive).
vultrtest1> could not connect to ‘root@XXX.XXX.XXX.XXX’, retrying in 1 seconds...
root@XXX.XXX.XXX.XXX: Permission denied (publickey,keyboard-interactive).
vultrtest1> could not connect to ‘root@XXX.XXX.XXX.XXX’, retrying in 2 seconds...
root@XXX.XXX.XXX.XXX: Permission denied (publickey,keyboard-interactive).
vultrtest1> could not connect to ‘root@XXX.XXX.XXX.XXX’, retrying in 4 seconds...
root@XXX.XXX.XXX.XXX: Permission denied (publickey,keyboard-interactive).
vultrtest1> could not connect to ‘root@XXX.XXX.XXX.XXX’, retrying in 8 seconds...
root@XXX.XXX.XXX.XXX: Permission denied (publickey,keyboard-interactive).
Traceback (most recent call last):
  File "/nix/store/kybdy5m979h4kvswq2gx3la3rpw5cq5k-nixops-1.7/bin/..nixops-wrapped-wrapped", line 991, in <module>
    args.op()
  File "/nix/store/kybdy5m979h4kvswq2gx3la3rpw5cq5k-nixops-1.7/bin/..nixops-wrapped-wrapped", line 412, in op_deploy
    max_concurrent_activate=args.max_concurrent_activate)
  File "/nix/store/kybdy5m979h4kvswq2gx3la3rpw5cq5k-nixops-1.7/lib/python2.7/site-packages/nixops/deployment.py", line 1063, in deploy
    self.run_with_notify('deploy', lambda: self._deploy(**kwargs))
  File "/nix/store/kybdy5m979h4kvswq2gx3la3rpw5cq5k-nixops-1.7/lib/python2.7/site-packages/nixops/deployment.py", line 1052, in run_with_notify
    f()
  File "/nix/store/kybdy5m979h4kvswq2gx3la3rpw5cq5k-nixops-1.7/lib/python2.7/site-packages/nixops/deployment.py", line 1063, in <lambda>
    self.run_with_notify('deploy', lambda: self._deploy(**kwargs))
  File "/nix/store/kybdy5m979h4kvswq2gx3la3rpw5cq5k-nixops-1.7/lib/python2.7/site-packages/nixops/deployment.py", line 996, in _deploy
    nixops.parallel.run_tasks(nr_workers=-1, tasks=self.active_resources.itervalues(), worker_fun=worker)
  File "/nix/store/kybdy5m979h4kvswq2gx3la3rpw5cq5k-nixops-1.7/lib/python2.7/site-packages/nixops/parallel.py", line 44, in thread_fun
    result_queue.put((worker_fun(t), None, t.name))
  File "/nix/store/kybdy5m979h4kvswq2gx3la3rpw5cq5k-nixops-1.7/lib/python2.7/site-packages/nixops/deployment.py", line 979, in worker
    os_release = r.run_command("cat /etc/os-release", capture_stdout=True)
  File "/nix/store/kybdy5m979h4kvswq2gx3la3rpw5cq5k-nixops-1.7/lib/python2.7/site-packages/nixops/backends/__init__.py", line 337, in run_command
    return self.ssh.run_command(command, self.get_ssh_flags(), **kwargs)
  File "/nix/store/kybdy5m979h4kvswq2gx3la3rpw5cq5k-nixops-1.7/lib/python2.7/site-packages/nixops/ssh_util.py", line 280, in run_command
    master = self.get_master(flags, timeout, user)
  File "/nix/store/kybdy5m979h4kvswq2gx3la3rpw5cq5k-nixops-1.7/lib/python2.7/site-packages/nixops/ssh_util.py", line 200, in get_master
    compress=self._compress)
  File "/nix/store/kybdy5m979h4kvswq2gx3la3rpw5cq5k-nixops-1.7/lib/python2.7/site-packages/nixops/ssh_util.py", line 57, in __init__
    "‘{0}’".format(target)
nixops.ssh_util.SSHConnectionFailed: unable to start SSH master connection to ‘root@XXX.XXX.XXX.XXX’
What am I missing? Perhaps I can manually add the key NixOps just generated?
Update: I used SQLiteBrowser to look inside the NixOps state database and pasted the generated public key into authorized_keys. Now I can ssh in with the newly generated key manually, but NixOps still fails to deploy.
Solved it temporarily, in a not-very-satisfying way:
browsed the database for the public + private key NixOps generated
manually added those to authorized_keys on the VM
also added the old key to the local ~/.ssh with an entry in ~/.ssh/config
No idea why NixOps uses the local ssh config, or how to prevent that. The entry that works looks like:
Host XXX.XXX.XXX.XXX
  HostName XXX.XXX.XXX.XXX
  Port 22
  User root
  IdentityFile ~/.ssh/vultrtest1_rsa
I'll wait a couple of days, then mark this as the solution unless anyone can explain how to tell NixOps to use the local key from .secrets instead of ~/.ssh.
Looking at the source at
https://github.com/NixOS/nixops/blob/master/nix/options.nix
there is a deployment.provisionSSHKey option, which says:
deployment.provisionSSHKey = mkOption {
  type = types.bool;
  default = true;
  description = ''
    This option specifies whether to let NixOps provision SSH deployment keys.
    NixOps will by default generate an SSH key, store the private key in its state file,
    and add the public key to the remote host.
    Setting this option to <literal>false</literal> will disable this behaviour
    and rely on you to manage your own SSH keys by yourself and to ensure
    that <command>ssh</command> has access to any keys it requires.
  '';
};
Maybe this can help? Once I get back to my NixOps machine, I'll give it a try.

Paramiko cannot open an ssh connection even with load_system_host_keys + WarningPolicy

I am connecting to a remote server with the following code:
ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.set_missing_host_key_policy(paramiko.WarningPolicy())
ssh.connect(
    hostname=settings.HOSTNAME,
    port=settings.PORT,
    username=settings.USERNAME,
)
When I'm on local server A, I can ssh onto the remote from the command line, suggesting it is in known_hosts. And the code works as expected.
On local server B, I can also ssh onto the remote from the command line. But when I try to use the above code I get:
/opt/mysite/virtualenv/lib/python3.5/site-packages/paramiko/client.py:763: UserWarning: Unknown ssh host key for [hostname]:22: b'12345'
  key.get_fingerprint())))
...
  File "/opt/mysite/virtualenv/lib/python3.5/site-packages/paramiko/client.py", line 416, in connect
    look_for_keys, gss_auth, gss_kex, gss_deleg_creds, t.gss_host,
  File "/opt/mysite/virtualenv/lib/python3.5/site-packages/paramiko/client.py", line 702, in _auth
    raise SSHException('No authentication methods available')
paramiko.ssh_exception.SSHException: No authentication methods available
Unlike "SSH - Python with paramiko issue" I am using both load_system_host_keys and WarningPolicy, so I should not need to programatically add a password or key (and I don't need to on local server A).
Is there some system configuration step I've missed?
Try using Fabric (which is built on top of Invoke and Paramiko) instead of raw Paramiko, and set the following parameters:
con = fabric.Connection('username@hostname', connect_kwargs={'password': 'yourpassword', 'allow_agent': False})
If it keeps failing, check that your password is still valid and that you are not required to change it.
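For completeness, a minimal sketch of how such a Connection might then be used (host and credentials are placeholders):

import fabric

# placeholders - replace with your own host and credentials
con = fabric.Connection(
    'username@hostname',
    connect_kwargs={'password': 'yourpassword', 'allow_agent': False},
)
result = con.run('whoami', hide=True)  # run a command over the connection
print(result.stdout.strip())
con.close()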
I tested with the wrong user on local server B. The user running the Python process did not have ssh permissions after all. (Command line ssh failed for that user.) Once I gave it permissions, the connection worked as expected.

Connecting to FTPS with Python

I am trying to connect to an FTPS server which requires anonymous login with a .pfx certificate.
I have been given instructions for accessing it through the GUI application SmartFTP, which does work, so I know I don't have any firewall issues. However, for this workflow, accessing it through Python would be ideal. Below are the settings I have been given:
Protocol: FTPS (Explicit)
Host: xxx.xxx.xxx.xxx
Port: 21
login type: Anonymous
Client Certificate: Enabled (providing a .pfx file)
Send FEAT: Send before and after login
I am having trouble finding the Python module best suited to this, along with a full example using a .pfx certificate. Currently I have only tried the standard ftplib module using the code below. Does anyone have a worked example?
from ftplib import FTP_TLS

ftps = FTP_TLS(host='xxx.xxx.xxx.xxx',
               keyfile=r"/path/to.pfx")
ftps.login()
ftps.prot_p()
ftps.retrlines('LIST')
ftps.quit()
Using the above code I get:
ValueError: certfile must be specified
Client versions:
Ubuntu == 14.04,
Python == 3.6.2
Update
I think I am a little closer with the code below, but I'm getting a new error:
from ftplib import FTP_TLS
import tempfile
import OpenSSL.crypto

def pfx_to_pem(pfx_path, pfx_password):
    """ Decrypts the .pfx file to be used with requests. """
    with tempfile.NamedTemporaryFile(suffix='.pem') as t_pem:
        f_pem = open(t_pem.name, 'wb')
        pfx = open(pfx_path, 'rb').read()
        p12 = OpenSSL.crypto.load_pkcs12(pfx, pfx_password)
        f_pem.write(OpenSSL.crypto.dump_privatekey(OpenSSL.crypto.FILETYPE_PEM, p12.get_privatekey()))
        f_pem.write(OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_PEM, p12.get_certificate()))
        ca = p12.get_ca_certificates()
        if ca is not None:
            for cert in ca:
                f_pem.write(OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_PEM, cert))
        f_pem.close()
        yield t_pem.name

pfx = pfx_to_pem(r"/path/to.pfx", 'password')
ftps = FTP_TLS(host='xxx.xxx.xxx.xxx',
               context=pfx)
ftps.login()
ftps.prot_p()
# ftps.prot_c()
print(ftps.retrlines('LIST'))
ftps.quit()
Error:
ftplib.error_perm: 534 Local policy on server does not allow TLS secure connections.
Any ideas?
Cheers
It sounds like you are trying to do SFTP. FTP over SSL is not the same as SFTP. As far as I know, SFTP (which is related to SSH) is not possible with the standard library.
See this for more about SFTP in Python: SFTP in Python? (platform independent)
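If the server really does speak SFTP rather than FTPS, a minimal sketch with the third-party Paramiko library might look like this (host, port, username and key path are placeholders; it does not apply if the server only offers FTPS):

import paramiko

# placeholders - replace with your server's details
HOST = 'xxx.xxx.xxx.xxx'
PORT = 22
USER = 'username'
KEY = '/path/to/private_key'

transport = paramiko.Transport((HOST, PORT))
transport.connect(username=USER, pkey=paramiko.RSAKey.from_private_key_file(KEY))
sftp = paramiko.SFTPClient.from_transport(transport)
print(sftp.listdir('.'))  # roughly the SFTP equivalent of retrlines('LIST')
sftp.close()
transport.close()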

SFTP into Google Compute Engine from windows 7 Client

I am trying to SFTP into a Debian-7-Wheezy-V20140807 instance on Google Compute Engine from a Windows 7 64-bit client.
My problem was finally solved using this.
On Windows I first installed Cygwin, set the environment variable CLOUDSDK_PYTHON to python instead of C:\python27\python.exe,
and finally ran this from Cygwin:
curl https://sdk.cloud.google.com | bash
Everything below describes the symptoms. First I tried FileZilla, which errors out with the message:
Status: Waiting to retry...
Status: Connecting to 23.xx.xx.xx..
Response: fzSftp started
Command: open "Abdul@23.236.51.19" 22
Error: Disconnected: No supported authentication methods available (server sent: publickey)
Error: Could not connect to server
User: root
Password: <same as the passphrase set when generating the SSH key>
I have also tried gcloud compute copy-files
gcloud compute copy-files deccan4-clone:/etc/ssh/ssh_host_rsa_key.pub ssh_host_rsa_key.pub --zone=us-central1-b
ssh_host_rsa_key.pub: Permission denied
ERROR: (gcloud.compute.copy-files) exit code 1: /usr/bin/scp -i /home/Abdul/.ssh/google_compute_engine
sudo gcloud compute copy-files deccan4-clone:/etc/ssh/ssh_host_rsa_key.pub ssh_host_rsa_key.pub --zone=us-central1-b
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: ssh-keygen will be executed to generate a key.
This tool needs to create the directory /root/.ssh before being able
to generate SSH keys.
Do you want to continue (Y/n)? Y
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/google_compute_engine.
Your public key has been saved in /root/.ssh/google_compute_engine.pub.
The key fingerprint is:
root@deccan4-clone
ERROR: (gcloud.compute.copy-files) some requests did not succeed:
- Insufficient Permission
I have also tried
Abdul@deccan4-clone:/home/a_rahman_synergywell_com$ gcloud compute copy-files deccan4-clone:test.txt test.txt --zone=us-central1-b
scp: test.txt: No such file or directory
ERROR: (gcloud.compute.copy-files) exit code 1: /usr/bin/scp -i /home/Abdul/.ssh/google_compute_engine -r Abdul@23.236.51.19:test.txt test.txt
Please let me know if I am missing some key setup.
