I'm in the process of creating Ansible scripts to deploy my websites. Some of my sites use SSL for credit card transactions. I'm interested in automating the deployment of SSL as much as possible too. This means I would need to automate the distribution of the private key. In other words, the private key would have to exist in some format off the server (in revision control, for example).
How do I do this safely? Some ideas that I've come across:
1) Use a passphrase to protect the private key (http://red-badger.com/blog/2014/02/28/deploying-ssl-keys-securely-with-ansible/). This would require providing the passphrase during deployment.
2) Encrypt the private key file (aescrypt, openssl, pgp), similar to this: https://security.stackexchange.com/questions/18951/secure-way-to-transfer-public-secret-key-to-second-computer
3) A third option would be to generate a new private key with each deployment and try to find a certificate provider who accommodates automatic certificate requests. This could be complicated and problematic if there are delays in the process.
Have I missed any? Is there a preferred solution or anyone else already doing this?
Another way to do this would be to use Ansible Vault to encrypt your private keys at rest. This would require you to provide the vault password, either on the Ansible command line or via a text file or script that Ansible reads.
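For example, a minimal sketch (file names and paths are placeholders):

# Encrypt the key at rest; you'll be prompted for a vault password
ansible-vault encrypt files/example.com.key

# Deploy; --ask-vault-pass prompts at run time, or point
# --vault-password-file at a file or executable script instead
ansible-playbook site.yml --ask-vault-pass

A vault-encrypted file used as the src of a copy task is decrypted transparently when it is written to the server.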
There really isn't a preferred method. My guess is that if you asked 10 users of Ansible you'd get 10 different answers with regard to security. Since our company started using Ansible long before Ansible Vault was available, we basically stored all sensitive files in local directories on servers that only our operations team has access to. At some point we might migrate to Ansible Vault since it's integrated with Ansible, but we haven't gotten to that point yet.
I am wondering if there is any straightforward way of injecting files/secrets into the VMs of a scale set, either as you perform the (ARM) deployment or change the image.
This would be application-level passwords, certificates, and so on, that we would not want stored on the images.
I am using the Linux custom script extension for the entrypoint script, and realize that it's possible to inject some secrets as parameters to that script. I assume this would not work with certificates, however (too big/long), and it would not be very future-proof, as we would need to redeploy the template (and rewrite the entrypoint script) whenever we want to add or remove a secret.
Windows-based VMSS can get certificates from Key Vault directly during deployment, but Linux ones cannot. Also, there is a customData property that allows you to pass in whatever you want (I think it's limited to 64 KB of base64-encoded data), but that is not really flexible either.
One way of solving this is to write an init script that uses Managed Service Identity to get secrets from the Key Vault (a sketch of such a script follows the list). This way you get several advantages:
You don't store secrets in the template/VM configuration.
You can update the secret, and all the VMSS instances will get the new version on the next deployment.
You don't have to edit the init script unless secret names change or new secrets are introduced.
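A minimal sketch of such an init script, assuming the scale set has a managed identity with read access to the vault (the vault name, secret name, and output path are placeholders):

#!/bin/bash
# Get a Key Vault access token from the instance metadata service
token=$(curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net" \
  | jq -r '.access_token')

# Fetch the secret value from the Key Vault REST API
curl -s -H "Authorization: Bearer $token" \
  "https://example-vault.vault.azure.net/secrets/db-password?api-version=7.3" \
  | jq -r '.value' > /etc/myapp/db-password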
I set up a prototype cluster in Azure Kubernetes Service to test the ability to configure HTTPS ingress with cert-manager. I was able to make everything work, and now I'm ready to set up my production environment.
The problem is that I used the subdomain I needed (sub.domain.com) on the prototype, and now I can't seem to make Let's Encrypt issue a certificate to the production cluster.
I'm still very new to Kubernetes and I can't seem to find a way to export or move the certificate from one cluster to the other.
Update:
It appears that the solution provided below would have worked, but it came down to needing to suspend/turn off the prototype's virtual machine. Within a couple of minutes, the production environment picked up the certificate.
You can just do something like:
kubectl get secret -o yaml
and copy/paste your certificate secret into the new cluster, or use something like Heptio Ark to do backup/restore.
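For example, assuming the certificate lives in a secret named tls-secret (the secret name and kubeconfig context are placeholders):

# On the old cluster: dump the secret to a file
kubectl get secret tls-secret -o yaml > tls-secret.yaml

# Strip cluster-specific metadata (uid, resourceVersion, creationTimestamp)
# from the file, then load it into the production cluster
kubectl --context production apply -f tls-secret.yaml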
P.S. I don't know why it wouldn't let you create a new cert; at worst you would need to wait 7 days for your rate limit to refresh.
I have a pod that runs containers which require access to sensitive information like API keys and DB passwords. Right now, these sensitive values are embedded in the controller definitions like so:
env:
  - name: DB_PASSWORD
    value: password
which are then available inside the Docker container as the $DB_PASSWORD environment variable. All fairly easy.
But reading the Kubernetes documentation on Secrets, it explicitly says that putting sensitive configuration values into your definition breaches best practice and is potentially a security issue. The only other strategy I can think of is the following:
create an OpenPGP key per user community or namespace
use crypt to set the configuration value into etcd (where it is stored encrypted with the corresponding public key)
create a Kubernetes secret containing the private key
associate that secret with the container (meaning that the private key will be accessible as a volume mount)
when the container is launched, it will access the file inside the volume mount for the private key, and use it to decrypt the conf values returned from etcd
this can then be incorporated into confd, which populates local files according to a template definition (such as Apache or WordPress config files)
This seems fairly complicated, but more secure and flexible, since the values will no longer be static and stored in plaintext.
So my question, and I know it's not an entirely objective one, is whether this is really necessary. Only admins will be able to view and execute the RC definitions in the first place; if somebody has breached the Kubernetes master, you have other problems to worry about. The only benefit I see is that there's no danger of secrets being committed to the filesystem in plaintext...
Are there any other ways to populate Docker containers with secret information in a secure way?
Unless you have many megabytes of config, this system sounds unnecessarily complex. The intended usage is for you to just put each config into a secret, and the pods needing the config can mount that secret as a volume.
You can then use any of a variety of mechanisms to pass that config to your task; e.g., if it's environment variables, "source secret/config.sh; ./mybinary" is a simple way.
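For example, a minimal sketch (app-config, config.sh, and myapp are hypothetical names):

# Store the config as a secret
kubectl create secret generic app-config --from-file=config.sh

and in the pod spec (excerpt):

containers:
- name: app
  image: myapp:latest
  command: ["sh", "-c", ". /etc/secret/config.sh && ./mybinary"]
  volumeMounts:
  - name: config
    mountPath: /etc/secret
volumes:
- name: config
  secret:
    secretName: app-config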
I don't think you gain any extra security by storing a private key as a secret.
I would personally resort to using a remote key manager that your software can access over an HTTPS connection. For example, Keywhiz or Vault would probably fit the bill.
I would host the key manager on a separate, isolated subnet and configure the firewall to only allow access from IP addresses that I expected to need the keys. Both Keywhiz and Vault come with an ACL mechanism, so you may not have to do anything with firewalls at all, but it does not hurt to consider it; the key here is to host the key manager on a separate network, and possibly even a separate hosting provider.
Your local configuration file in the container would contain just the URL of the key service, and possibly credentials to retrieve the key from the key manager; the credentials would be useless to an attacker who didn't match the ACL/IP addresses.
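With Vault, for instance, retrieval could be as simple as this (the URL and secret path are placeholders, and this assumes the version-2 KV secrets engine):

# Fetch a secret over HTTPS from the key manager
curl -s -H "X-Vault-Token: $VAULT_TOKEN" \
  https://vault.internal.example.com:8200/v1/secret/data/myapp/db \
  | jq -r '.data.data.password'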
I'm trying to set up continuous deployment for an Azure website using Bitbucket.
The problem is I'm using a submodule (which I own) that Azure doesn't have permission to access, because it doesn't add that by default.
I'm trying to figure out how to add an SSH key so that Azure can connect and get the submodule.
Steps I've taken:
Created a new public/private key pair with PuTTYgen, and added the public key to my Bitbucket account under the name Azure.
FTPed into Azure and added both the public and private key files (.ppk) to the .ssh directory (yeah, I didn't know which one I was supposed to add). They are named azurePrivateKey.ppk and azurePublicKey.
Updated my config file to look like this:
Host *
  StrictHostKeyChecking no
Host bitbucket.org
  HostName bitbucket.org
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/azurePrivateKey.ppk
(no clue if that's right)
Updated my known_hosts file to look like this:
bitbucket.org,131.103.20.168, <!--some key here...it was here when i opened the file, assuming it's the public key for the repo i tried to add-->
bitbucket.org,131.103.20.168, <!--the new public key i tried to add-->
And I still get the same error: no permission to get the submodule. So I'm having trouble figuring out which step I did incorrectly, as I've never done this before.
Better late than never, and it could be useful for others:
A Web App already has an SSH key; to get it:
https://[web-site-name].scm.azurewebsites.net/api/sshkey?ensurePublicKey=1
You can then add this key to your Git repo's deploy keys.
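For example (replace the site name, and authenticate with your deployment credentials):

curl -u "$DEPLOY_USER" "https://web-site-name.scm.azurewebsites.net/api/sshkey?ensurePublicKey=1"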
I've never set that up in Azure, but here are some general rules of thumb for handling SSH keys:
The private key in $HOME/.ssh/ must have file mode 600 (read/write only for the owner).
You need both the public and the private key in this folder, usually named id_rsa and id_rsa.pub, but you can change the filenames to whatever you like.
You have to convert the private key generated by PuTTYgen to an OpenSSH-compatible format; see How to convert SSH keypairs generated using PuttyGen (a command-line example follows this list).
known_hosts stores the public keys of the servers you've already connected to. That's useful to make sure that you are really connecting to the same server again (more detailed information on this topic).
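For example, converting the PuTTY key with the command-line puttygen tool (paths are placeholders):

# Convert the PuTTY .ppk key to OpenSSH format
puttygen azurePrivateKey.ppk -O private-openssh -o ~/.ssh/id_rsa
puttygen azurePrivateKey.ppk -O public-openssh -o ~/.ssh/id_rsa.pub

# Restrict the private key to the owner
chmod 600 ~/.ssh/id_rsa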
HTH
So if, like me, you have multiple private submodules on the same GitHub account as the one the App Service deploys from, you can give your service access to all your modules by moving the deployment key:
Go to the repo where your service is hosted.
In Settings, go to Deploy keys.
Remove the deployment key.
Get the public key from https://[your-web-app].scm.azurewebsites.net/api/sshkey?ensurePublicKey=1
Add the key to your SSH keys in the account settings for GitHub.
If you have submodules on several accounts, you can add the same key to each account.
After this, the service can access private repos on all those accounts with the key.
I have a problem upgrading my deployment to Windows Server 2012. My deployment works fine with osFamily=2 compiled against .NET 4, but fails with .NET 4.5 and osFamily=3.
The exception I see when I remote into the VM is "Keyset does not exist", which seems to be related to certificates. My program uses the certificates to encrypt a stream and should be able to use them to decrypt that stream after I deploy it.
I checked the certs on the VM; they are installed fine, in the right place.
So I suspect this is an issue with a different security policy in 2012 that prevents my role from getting the private key of the certs.
This has blocked me for a while, so thanks a lot for any clue!
"Keyset does not exist" typically refers to an error where your program is trying to access the private key of a certificate and is unable to do so, either because the private key does not exist or because the process has no permission to access it.
You will need to find the certificate in question in your certificate store and verify that it contains a private key (that will show up in the properties of the certificate).
Then verify that your process/application pool has permission to access the private key by right-clicking the certificate in the certificate store and choosing All Tasks -> Manage Private Keys. From there, make sure to add the appropriate users to the allowed list.
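If you'd rather script that, here is a rough PowerShell sketch (the thumbprint and account are placeholders, and this assumes a CSP-backed machine key):

# Locate the certificate and the file backing its private key
$cert = Get-Item Cert:\LocalMachine\My\THUMBPRINT
$keyName = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
$keyPath = "$env:ProgramData\Microsoft\Crypto\RSA\MachineKeys\$keyName"

# Grant the worker process account read access to the key file
icacls $keyPath /grant "NETWORK SERVICE:R"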
Hope this helps