I'm trying to set up continuous deployment for an Azure website using Bitbucket.
The problem is that I'm using a submodule (which I own) that Azure doesn't have permission to access, because it doesn't add that by default.
I'm trying to figure out how to add an SSH key so that Azure can connect and get the submodule.
Steps I've taken:
Created a new public/private key pair with PuTTYgen, and added the public key to my Bitbucket account under the name Azure.
FTPed into Azure and added both the public and private key files (.ppk) to the .ssh directory (yeah, I didn't know which one I was supposed to add). They are named azurePrivateKey.ppk and azurePublicKey.
Updated my config file to look like this
HOST *
StrictHostKeyChecking no
Host bitbucket.org
HostName bitbucket.org
PreferredAuthentications publickey
IdentityFile ~/.ssh/azurePrivateKey.ppk
(no clue if that's right)
Updated my known_hosts file to look like this
bitbucket.org,131.103.20.168, <!--some key here...it was here when i opened the file, assuming it's the public key for the repo i tried to add-->
bitbucket.org,131.103.20.168, <!--the new public key i tried to add-->
And I still get the same error: no permission to get the submodule. So I'm having trouble figuring out which step I did incorrectly, as I've never done this before.
Better late than never, and it could be useful for others:
A Web App already has an SSH key. To get it:
https://[web-site-name].scm.azurewebsites.net/api/sshkey?ensurePublicKey=1
You can then add this key as a deploy key on your Git repo.
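A minimal sketch of fetching that key from the command line, assuming the site-level deployment credentials found in the publish profile (the site name and password here are placeholders):
curl -u '$your-web-app:DEPLOYMENT_PASSWORD' "https://your-web-app.scm.azurewebsites.net/api/sshkey?ensurePublicKey=1"
# prints the site's public key, ready to paste into Bitbucket or GitHub as a deploy key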
I've never set that up in Azure, but here are some general rules of thumb for handling SSH keys:
The private key in $HOME/.ssh/ must have file mode 600 (RW only for the owner)
You need both the public and the private key in this folder, usually named id_rsa and id_rsa.pub, but you can change the filenames to whatever you like
You have to convert the private key generated by PuTTYgen to an OpenSSH-compatible format; see How to convert SSH keypairs generated using PuttyGen (a command-line sketch follows below)
known_hosts stores the public keys of the servers you've already connected to. That's useful to make sure that you are really connecting to the same server again. more detailed information on this topic
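A minimal sketch of the conversion and permissions, assuming PuTTY's command-line puttygen tool is available and using the file name from the question:
puttygen azurePrivateKey.ppk -O private-openssh -o ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
# optionally pre-populate known_hosts so the first non-interactive clone isn't blocked
ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts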
HTH
So if you, like me, had multiple private submodules on the same GitHub account that the App Service is deployed from, you can give your service access to all your modules by moving the deployment key.
Go to the repo where your service is hosted.
In Settings, go to Deploy keys.
Remove the deployment key.
Get the public key from https://[your-web-app].scm.azurewebsites.net/api/sshkey?ensurePublicKey=1
Add the key to your SSH keys in the account settings for GitHub.
If you have modules on several accounts you can add the same key to each account.
After this the service can access private repos on all accounts with the key.
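Note that the key is only used when the submodule is referenced over SSH rather than HTTPS. A hedged sketch of a .gitmodules entry (submodule name, path, and account are made up):
[submodule "mylib"]
    path = mylib
    url = git@github.com:your-account/mylib.git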
Related
I created a VM using "Use existing public key". When I try logging into the Linux server using SSH, I get the error "Permission denied (publickey)". If I select "Use existing key stored in Azure" instead, it works as expected.
Can you please suggest why I'm getting this error?
Regards,
Santosh
You cannot use a public key that was created in Azure with the "Use existing public key" option. A key created in Azure is only for the "Use existing key stored in Azure" option.
To use "Use existing public key", you need to create a public key on your local machine, or you can create one using the Azure CLI as well.
The following command creates an SSH key pair using RSA encryption and a bit length of 4096:
ssh-keygen -m PEM -t rsa -b 4096
Using the above command you will have two keys: one private (id_rsa) and one public (id_rsa.pub). Use the public one; you can find both under /home/rahul/.ssh/.
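To print the public key so it can be pasted into the portal (same path as above):
cat /home/rahul/.ssh/id_rsa.pub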
Refer to this Microsoft document.
I've recently been working on creating a cloud instance on Azure. Once I create a new VM for the service I need, it always lets me download a .pem file. However, it seems like I can log in to the VM through SSH without using the .pem file.
Besides that, when I check the "authorized_keys" file on the new VM, it includes a public key, which is not the one on my local machine's "id_rsa.pub" file.
I'm wondering how I could log in without the public key stored in the authorized_keys file?
I think this question is related to SSH, thanks in advance!
Why do we get a .pem file when creating a VM on Microsoft Azure?
Disabling password logins to SSH is a common practice for SSH hardening [1,2]. The PEM file provided by default will help you achieve this.
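A hedged sketch of what that hardening typically looks like on the VM (these are standard sshd_config options; the service name varies by distro):
In /etc/ssh/sshd_config:
PasswordAuthentication no
ChallengeResponseAuthentication no
Then reload the daemon, e.g. sudo systemctl restart sshd (or ssh on Debian/Ubuntu).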
Besides that, when I check the "authorized_keys" file on the new VM,
it includes a public key, which is not the one on my local machine's
"id_rsa.pub" file
I believe you are viewing the file for another user or comparing the wrong keys.
I'm wondering how I could log in without the public key stored in the
authorized_keys file?
You could change the authorized_keys file you are referring to by modifying the AuthorizedKeysFile variable in the /etc/ssh/sshd_config file.
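For example, a hedged sketch (the alternative path here is made up):
In /etc/ssh/sshd_config:
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
The %u token expands to the user name; restart sshd afterwards for the change to take effect.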
I'm in the process of creating Ansible scripts to deploy my websites. Some of my sites use SSL for credit card transactions. I'm interested in automating the deployment of SSL as much as possible too. This means I would need to automate the distribution of the private key. In other words, the private key would have to exist in some format off the server (in revision control, for example).
How do I do this safely? Some ideas that I've come across:
1) Use a passphrase to protect the private key (http://red-badger.com/blog/2014/02/28/deploying-ssl-keys-securely-with-ansible/). This would require providing the passphrase during deployment.
2) Encrypt the private key file (aescrypt, openssl, pgp), similar to this: https://security.stackexchange.com/questions/18951/secure-way-to-transfer-public-secret-key-to-second-computer (a sketch of the openssl variant follows the list)
3) A third option would be to generate a new private key with each deployment and try to find a certificate provider who accommodates automatic certificate requests. This could be complicated and problematic if there are delays in the process.
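For option 2, a minimal sketch with openssl (file names are made up; -pbkdf2 requires OpenSSL 1.1.1+):
# encrypt the key before committing it to revision control
openssl enc -aes-256-cbc -salt -pbkdf2 -in example.com.key -out example.com.key.enc
# decrypt it on the target host during deployment
openssl enc -d -aes-256-cbc -pbkdf2 -in example.com.key.enc -out example.com.key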
Have I missed any? Is there a preferred solution or anyone else already doing this?
Another way to do this would be to use Ansible Vault to encrypt your private keys while at rest. This would require you to provide the vault password either on the Ansible command line or from a text file or script that Ansible reads it from.
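A minimal sketch of that Vault workflow, assuming the key sits at files/example.com.key in your repo (the paths are made up):
# encrypt the key at rest before committing it
ansible-vault encrypt files/example.com.key
# provide the vault password at deploy time, either interactively...
ansible-playbook site.yml --ask-vault-pass
# ...or from a file or script that only your operators can read
ansible-playbook site.yml --vault-password-file ~/.vault_pass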
There really isn't a preferred method. My guess is that if you asked 10 users of Ansible you'd get 10 different answers with regard to security. Since our company started using Ansible long before Ansible Vault was available, we basically stored all sensitive files in local directories on servers that only our operations team has access to. At some point we might migrate to Ansible Vault since it's integrated with Ansible, but we haven't gotten to that point yet.
I'm remotely logging into my Raspberry Pi via SSH. I'm starting to use it for web development testing and would like to push to git repositories from the Raspberry Pi. Do I reuse the public key or do I need to make a new pair of keys? Do I need to use ssh-agent to manage the keys?
The public key used for SSH login is written in ~/.ssh/authorized_keys
I already tried making new key pairs with ssh-keygen and adding the new public key as a Bitbucket Deployment key.
Thanks!
A key pair is supposed to represent an identity, i.e. your own machine. Unless you have different machines with different levels of trust (for example, a work machine and a personal machine), you don't need to generate different key pairs on the same machine for different services.
Concerning key pairs, note that they are pairs, i.e. two keys (a public and a private one). id_rsa is the usual name of your private key on that machine. authorized_keys is the usual name of the list of public keys, for other machines, that are authorized to log in on that machine. The name of your public key is most certainly id_rsa.pub. That's why copying your authorized_keys to id_rsa elsewhere doesn't make much sense.
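A hedged sketch of what that looks like on the Pi (the key comment and file names are just the usual defaults, not anything specific to your setup):
# create a key pair on the Pi if ~/.ssh/id_rsa doesn't already exist
ssh-keygen -t rsa -b 4096 -C "pi@raspberrypi"
# print the public half and paste it into Bitbucket -> Personal settings -> SSH keys
cat ~/.ssh/id_rsa.pub
# sanity check: should greet you by username instead of asking for a password
ssh -T git@bitbucket.org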
I have set up an Amazon EC2 instance and am able to SSH into it. Can anyone please tell me how I could allow additional users to SSH into this instance from a different location?
Max.
I started out creating additional users, but it is pointless if you want to give them sudo access anyway, which you probably do, right? Giving them sudo access gives them everything they would want to do anyway, so creating their user account was just a waste of time. Additionally, creating additional users is an onerous task, leads to a lot of different permissions problems, and means you have to monkey around with the sudoers file to allow them to run sudo tasks without entering their password every time.
My recommendation is to get the new user to provide you with a public key and have them use the primary ubuntu or root account directly:
ssh-keygen -f matthew
And get them to give you the .pub key file, which you paste into the .ssh/authorized_keys file on your EC2 server.
Then they can log in with their private key directly into the ubuntu or root account of your EC2 instance.
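A minimal sketch of both ends, assuming the key pair from the step above (matthew / matthew.pub), the default ubuntu account, and a placeholder hostname:
# on the server: append the new user's public key
cat matthew.pub >> ~/.ssh/authorized_keys
# on the new user's machine: log in with the matching private key
ssh -i matthew ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com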
I store the authorized_keys file in my private GitHub account. The public keys are not very useful unless you have the private key component, so putting them in GitHub seems fine to me. I then make deploying my centrally stored authorized_keys file part of my new server production process.
And then remove their public key from the file when they leave your employment. This will lock them out.
Create additional users at a *nix command prompt
useradd
Create a new rule in the security group that has been applied to your instance, enabling SSH for the public IP range of your remote user
For specific instructions check out: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1233.
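A hedged sketch of both steps (the user name, key file, security group ID, and CIDR are all made up):
# on the instance: create the user and install their public key
sudo useradd -m -s /bin/bash newuser
sudo mkdir -p /home/newuser/.ssh
sudo cp newuser.pub /home/newuser/.ssh/authorized_keys
sudo chown -R newuser:newuser /home/newuser/.ssh
sudo chmod 700 /home/newuser/.ssh
sudo chmod 600 /home/newuser/.ssh/authorized_keys
# from your workstation: open SSH in the security group for their IP range
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.0/24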