I am wondering if there is any straightforward way of injecting files/secrets into the VMs of a scale set, either as you perform the (ARM) deployment or change the image.
This would be application-level passwords, certificates, and so on, that we would not want to be stored on the images.
I am using the Linux custom script extension for the entrypoint script, and realize that it's possible to inject some secrets as parameters to that script. I assume this would not work with certificates, however (too big/long), and it would not be very future-proof, as we would need to redeploy the template (and rewrite the entrypoint script) whenever we want to add or remove a secret.
Windows-based VMSS can get certificates from the Key Vault directly during deployment, but Linux ones cannot do that. There is also a customData property which allows you to pass in whatever you want (I think it's limited to 64 KB of base64-encoded data), but that is not really flexible either.
One way of solving this is to write an init script that uses Managed Service Identity to get secrets from the Key Vault; this way you get several advantages (a minimal sketch follows the list below):
You don't store secrets in the templates/VM configuration
You can update the secret and all the VMSS instances will get the new version on the next deployment
You don't have to edit the init script unless secret names change or new secrets are introduced.
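For illustration, here is a minimal sketch of such an init script, delivered via cloud-init/customData (it could just as well be run from the custom script extension). It assumes the scale set has a managed identity with "get" permission on the vault; the vault name "myvault", the secret name "db-password", and the target path are placeholders, and python3 is assumed to be on the image.

#cloud-config
write_files:
  - path: /usr/local/bin/fetch-secrets.sh
    permissions: "0700"
    content: |
      #!/bin/bash
      set -euo pipefail
      # Ask the instance metadata service (MSI endpoint) for a Key Vault access token.
      token=$(curl -s -H "Metadata: true" \
        "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net" \
        | python3 -c 'import json,sys; print(json.load(sys.stdin)["access_token"])')
      # Fetch the secret value and write it to a root-only file for the application.
      curl -s -H "Authorization: Bearer ${token}" \
        "https://myvault.vault.azure.net/secrets/db-password?api-version=7.3" \
        | python3 -c 'import json,sys; print(json.load(sys.stdin)["value"])' \
        > /etc/myapp/db-password
runcmd:
  - mkdir -p /etc/myapp
  - /usr/local/bin/fetch-secrets.sh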
I am using a combination of these tools:
Terraform - to deploy the application-specific AWS resources I need (for instance, a secret)
Skaffold - to help with the inner development loop, surrounding the deployment of K8s resources to local and remote clusters
Kustomize - to help with templating of different configurations for different environments
My GitHub Actions steps are as follows:
Terraform to create the AWS resources. At this point it creates an AWS secret ARN.
Skaffold to deploy the K8s manifests. Skaffold in turn delegates K8s manifest generation to Kustomize. Within the Kustomize overlay files I need to be able to access the secret ARN that was created earlier; this ARN needs to be injected into the container that is being deployed. How do I achieve this?
Rephrasing the question: how do I pass resources that were created by Terraform so they can be consumed by something like Kustomize (which is used by Skaffold)?
(P.S. I really like my choice of tools thus far, as each one excels at one thing. I realize that Terraform could possibly do all of it, but that is a choice I don't want to make unless there are no easier options.)
Here is what I have learnt:
I don't think there are any industry standards in terms of how to share this data between the tools across different steps within GitHub Actions. That being said, here are some of the options:
Have Terraform store the secret ARN in a parameter store, and retrieve the ARN from the parameter store in later steps. This means that the steps have to share a static key (see the sketch after this list).
Have Terraform update the Kustomize files directly (or use kustomize_overlays as a data source).
There could be other similar approaches, but none of these tools have a native way of passing/sharing data.
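To make the first option concrete, here is a hedged sketch of a workflow where Terraform writes the ARN to SSM Parameter Store under an agreed static key and a later job reads it back and injects it into the overlay. The parameter name /myapp/secret-arn, the overlay path, and the config map name are placeholders, and the AWS credential/tool setup steps are omitted.

jobs:
  infra:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Terraform applies the infra; an aws_ssm_parameter resource in that config
      # stores the secret ARN under the agreed name /myapp/secret-arn.
      - run: terraform -chdir=infra init && terraform -chdir=infra apply -auto-approve

  deploy:
    needs: infra
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Read the ARN back from Parameter Store
        run: |
          SECRET_ARN=$(aws ssm get-parameter --name /myapp/secret-arn \
            --query Parameter.Value --output text)
          echo "SECRET_ARN=$SECRET_ARN" >> "$GITHUB_ENV"
      - name: Inject the ARN into the overlay, then let Skaffold deploy
        run: |
          cd k8s/overlays/prod
          # The deployment reads SECRET_ARN from this generated config map.
          kustomize edit add configmap app-config --from-literal=SECRET_ARN="$SECRET_ARN"
          skaffold run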
My deploy task uses a PowerShell script, which uses a Service Principal to connect to Azure Key Vault and pull a secret. The secret (password) is stored in the PowerShell script's code as plain text. Maybe there is another solution that would minimize exposure of the token.
I also use PowerShell inline mode (not a separate script) with an Azure DevOps secret variable in the deploy task, but this solution is difficult to support (the script has several different operations, so you have to keep many versions of the script).
The script is stored in a Git repository; anyone who has access to it will be able to see the secret and gain access to other keys. Perhaps I don't understand this concept correctly, but if keys cannot be stored in the code, then what should I do?
In Azure DevOps you can use variable groups and define that the variables are pulled directly from a selected Key Vault (if the service principal you have selected has read/list access to the KV).
This means that you can define all secrets in Key Vault, and they will be pulled before any tasks run in your YAML. To be able to use them in the script, you can define them as an environment variable or a parameter to your script and just reference $env:variable or just $variable, instead of having the secret hardcoded in your script.
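A hedged sketch of what that looks like in pipeline YAML; the variable group name "my-kv-group", the secret name "DbPassword", and deploy.ps1 are placeholders, and the group has to be created under Library and linked to the vault first.

variables:
  - group: my-kv-group        # linked to Key Vault; its secrets become pipeline variables

steps:
  - task: PowerShell@2
    displayName: Deploy
    inputs:
      targetType: inline
      script: |
        # The secret arrives via the environment instead of being hardcoded here.
        ./deploy.ps1 -DbPassword $env:DB_PASSWORD
    env:
      DB_PASSWORD: $(DbPassword)   # secret variables must be mapped into env explicitly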
How to use Desired State Configuration in combination with ARM.
Scope:
- We have an Azure virtual machine that is deployed via an ARM template.
- The VM has an extension resource in the ARM template for the Desired State Configuration
- We need to pass sensitive parameters (in a secure way!) into the Desired State Configuration (we want to create an additional local windows account with the DSC)
- Configuration file is used to know what public key to use for encryption, and to let the VM know which certificate it has to use for decryption (by thumbprint)
- When using ARM, you need to define the configuration data file in a separate property
- I noticed that the DSC service automatically adds a certificate for document encryption to the VM.
Question:
If I want to get this working out of the box, I will need to create the configurationDataFile upfront and store it somewhere (like a blob or something).
However, the 'out-of-the-box' certificate on the VM is only known after the ARM template has been deployed.
I was wondering if there is a way to get the encryption/decryption in DSC working, using the out of the box DSC Certificate on the VM, without using different incremental DSC templates.
So how can I know the out-of-the-box certificate thumbprint at deployment time? (In the ARM template?)
Do I actually need to transform the ConfigurationData file for every deployment (and find the correct thumbprint of the VM), or is there an out-of-the-box way to tell DSC via ARM to use the automatically created certificate for this?
Because the target VM is also the authoring machine, the passwords can be passed as plain text, as they never leave the Virtual Machine.
This has been verified by Microsoft support.
I have a pod that runs containers which require access to sensitive information like API keys and DB passwords. Right now, these sensitive values are embedded in the controller definitions like so:
env:
  - name: DB_PASSWORD
    value: password
which are then available inside the Docker container as the $DB_PASSWORD environment variable. All fairly easy.
But the Kubernetes documentation on Secrets explicitly says that putting sensitive configuration values into your definition breaches best practice and is potentially a security issue. The only other strategy I can think of is the following:
create an OpenPGP key per user community or namespace
use crypt to set the configuration value into etcd (encrypted with the corresponding public key)
create a Kubernetes secret containing the private key
associate that secret with the container (meaning that the private key will be accessible as a volume mount)
when the container is launched, it will access the file inside the volume mount for the private key, and use it to decrypt the conf values returned from etcd
this can then be incorporated into confd, which populates local files according to a template definition (such as Apache or WordPress config files)
This seems fairly complicated, but more secure and flexible, since the values will no longer be static and stored in plaintext.
So my question, and I know it's not an entirely objective one, is whether this is completely necessary or not. Only admins will be able to view and execute the RC definitions in the first place, so if somebody has breached the Kubernetes master, you have other problems to worry about. The only benefit I see is that there's no danger of secrets being committed to the filesystem in plaintext...
Are there any other ways to populate Docker containers with secret information in a secure way?
Unless you have many megabytes of config, this system sounds unnecessarily complex. The intended usage is for you to just put each config into a secret, and the pods needing the config can mount that secret as a volume.
You can then use any of a variety of mechanisms to pass that config to your task; e.g. if it's environment variables, "source secret/config.sh; ./mybinary" is a simple way.
I don't think you gain any extra security by storing a private key as a secret.
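A minimal sketch of that pattern, with illustrative names only (the Secret name, pod, image, and file contents are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: app-config
stringData:
  config.sh: |
    export DB_PASSWORD=password
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myapp:latest
      # Source the mounted config, then start the task.
      command: ["sh", "-c", ". /etc/secret/config.sh; ./mybinary"]
      volumeMounts:
        - name: config
          mountPath: /etc/secret
          readOnly: true
  volumes:
    - name: config
      secret:
        secretName: app-config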
I would personally opt for a remote key manager that your software could access across the net over an HTTPS connection. For example, Keywhiz or Vault would probably fit the bill.
I would host the key manager on a separate, isolated subnet and configure the firewall to only allow access from IP addresses that I expect to need the keys. Both Keywhiz and Vault come with an ACL mechanism, so you may not have to do anything with firewalls at all, but it does not hurt to consider it -- however, the key here is to host the key manager on a separate network, and possibly even a separate hosting provider.
Your local configuration file in the container would contain just the URL of the key service, and possibly credentials to retrieve the key from the key manager -- the credentials would be useless to an attacker if he didn't match the ACL/IP addresses.
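As an illustration (the hostname, token secret, and image below are placeholders, and a real setup would lean on the key manager's own auth and ACL policies), the container spec would carry nothing but the service URL and a retrieval credential:

apiVersion: v1
kind: Pod
metadata:
  name: keyclient
spec:
  containers:
    - name: keyclient
      image: myapp:latest
      env:
        - name: VAULT_ADDR          # just the URL of the key service
          value: https://vault.internal.example.com:8200
        - name: VAULT_TOKEN         # retrieval credential, useless outside the allowed ACL/IP range
          valueFrom:
            secretKeyRef:
              name: vault-access
              key: token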
I'm in the process of creating Ansible scripts to deploy my websites. Some of my sites use SSL for credit card transactions. I'm interested in automating the deployment of SSL as much as possible too. This means I would need to automate the distribution of the private key. In other words, the private key would have to exist in some format off the server (in revision control, for example).
How do I do this safely? Some ideas that I've come across:
1) Use a passphrase to protect the private key (http://red-badger.com/blog/2014/02/28/deploying-ssl-keys-securely-with-ansible/). This would require providing the passphrase during deployment.
2) Encrypt the private key file (aescrypt, openssl, pgp), similar to this: https://security.stackexchange.com/questions/18951/secure-way-to-transfer-public-secret-key-to-second-computer
3) A third option would be to generate a new private key with each deployment and try to find a certificate provider who accommodates automatic certificate requests. This could be complicated and problematic if there are delays in the process.
Have I missed any? Is there a preferred solution or anyone else already doing this?
Another way to do this would be to use Ansible Vault to encrypt your private keys while at rest. This would require you to provide the vault password either on the Ansible command line or from a text file or script that Ansible reads.
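A sketch of what that can look like (paths and names are placeholders): the key sits in the repository only in vault-encrypted form, e.g. after running "ansible-vault encrypt files/example.com.key", and the copy module decrypts it transparently at deploy time when the playbook runs with --ask-vault-pass or --vault-password-file.

- name: Deploy SSL certificate and vaulted private key
  hosts: webservers
  become: true
  tasks:
    - name: Copy certificate
      ansible.builtin.copy:
        src: files/example.com.crt
        dest: /etc/ssl/certs/example.com.crt
        mode: "0644"

    - name: Copy private key (stored vault-encrypted in revision control)
      ansible.builtin.copy:
        src: files/example.com.key
        dest: /etc/ssl/private/example.com.key
        owner: root
        group: root
        mode: "0600"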
There really isn't a preferred method. My guess is that if you asked 10 users of Ansible, you'd get 10 different answers with regard to security. Since our company started using Ansible long before Ansible Vault was available, we basically stored all sensitive files in local directories on servers that only our operations team has access to. At some point we might migrate to Ansible Vault since it's integrated with Ansible, but we haven't gotten to that point yet.