Populating Docker containers with sensitive information using Kubernetes

I have a pod that runs containers which require access to sensitive information like API keys and DB passwords. Right now, these sensitive values are embedded in the controller definitions like so:
env:
  - name: DB_PASSWORD
    value: password
which are then available inside the Docker container as the $DB_PASSWORD environment variable. All fairly easy.
But the Kubernetes documentation on Secrets explicitly says that putting sensitive configuration values into your definition breaches best practice and is potentially a security issue. The only other strategy I can think of is the following:
create an OpenPGP key per user community or namespace
use crypt to set the configuration value into etcd (encrypted with the corresponding public key)
create a Kubernetes Secret containing the private key
associate that Secret with the container, so that the private key is accessible as a volume mount
when the container is launched, it reads the private key from the file inside the volume mount and uses it to decrypt the config values returned from etcd
this can then be incorporated into confd, which populates local files according to a template definition (such as Apache or WordPress config files)
This seems fairly complicated, but more secure and flexible, since the values will no longer be static and stored in plaintext.
So my question, and I know it's not an entirely objective one, is whether this is completely necessary or not? Only admins will be able to view and execute the RC definitions in the first place; so if somebody's breached the kubernetes master, you have other problems to worry about. The only benefit I see is that there's no danger of secrets being committed to the filesystem in plaintext...
Are there any other ways to populate Docker containers with secret information in a secure way?

Unless you have many megabytes of config, this system sounds unnecessarily complex. The intended usage is for you to just put each config into a secret, and the pods needing the config can mount that secret as a volume.
You can then use any of a variety of mechanisms to pass that config to your task; e.g. if it's environment variables, then 'source secret/config.sh; ./mybinary' is a simple way.
I don't think you gain any extra security by storing a private key as a secret.
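For illustration, a minimal sketch of the suggested approach, assuming placeholder names (app-config, mybinary) and a placeholder value; the config lives in a Secret, is mounted as a file, and is sourced at startup:

apiVersion: v1
kind: Secret
metadata:
  name: app-config
type: Opaque
stringData:
  config.sh: |
    export DB_PASSWORD=changeme
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: example/myapp
      # Source the mounted config before starting the binary.
      command: ["sh", "-c", ". /secret/config.sh && ./mybinary"]
      volumeMounts:
        - name: config
          mountPath: /secret
          readOnly: true
  volumes:
    - name: config
      secret:
        secretName: app-config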

I would personally opt to use a remote key manager that your software can access across the net over an HTTPS connection. For example, Keywhiz or Vault would probably fit the bill.
I would host the key manager on a separate, isolated subnet and configure the firewall to only allow access from IP addresses which I expect to need the keys. Both Keywhiz and Vault come with an ACL mechanism, so you may not have to do anything with firewalls at all, but it does not hurt to consider it -- the key point here is to host the key manager on a separate network, and possibly even with a separate hosting provider.
Your local configuration file in the container would contain just the URL of the key service, and possibly credentials to retrieve the key from the key manager -- the credentials would be useless to an attacker who didn't match the ACL/IP addresses.
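As a hypothetical illustration (the URL, role, and paths are made up, not from the answer), the baked-in config file could hold only the key service URL and a reference to a low-value bootstrap credential, never the secrets themselves:

keymanager:
  url: https://vault.internal.example.com:8200
  # Useless to an attacker whose source IP / ACL entry doesn't match.
  auth:
    role: app-backend
    credentials_file: /etc/app/keymanager-credentials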

Related

Vault secrets injected by the Vault sidecar container inside the pod are visible to Kubernetes cluster users/admins

I have integrated an external Vault with the Kubernetes cluster. Vault injects the secrets into the shared volume “/vault/secrets” inside the pod, where they can be consumed by the application container. So far everything looks good.
But I see a security risk in injecting the secrets into a shared volume in plain text, as anyone who has access to the Kubernetes cluster can access the application secrets.
Example: secrets are injected into the shared volume at /vault/secrets/config
Now, if a Kubernetes cluster admin logs in, they can access the pod along with the credentials available in the shared volume in plain text.
The kubectl exec -it <pod> command can be used to enter the pod.
In this case, my concern is that the cluster admin can access the application secrets (e.g. database passwords), which is a security risk. In my scenario the Vault admin and the Kubernetes cluster admin are different people.
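For reference, a rough sketch of the kind of injector setup being described (the role name, secret path, and image are placeholders, not taken from the question); the rendered file ends up at /vault/secrets/config inside the pod:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        # Ask the Vault agent injector to render this secret into the shared volume.
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "myapp"
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/myapp/config"
    spec:
      containers:
        - name: myapp
          image: example/myapp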
Having a shared volume available to all pods in a cluster where all the secrets are stored in plain text doesn't sound too secure, to be honest. You could improve the security a little bit (only a little bit) by setting the use limit (the num_uses token attribute) to 1 (one) and alerting whenever the legitimate application (that is, the one the secret was intended for) gets a "token invalid" error message.
I'm a K8s noob but how about this guide:
https://cloud.redhat.com/blog/integrating-hashicorp-vault-in-openshift-4
I know it's for RH OSE but maybe the concept sparks an idea.

Is it possible to keep secrets out of Terraform state?

For example, could I reference the password as an environment variable? Even if I did that, it would still be stored in state, right?
# Configure the MySQL provider
provider "mysql" {
endpoint = "my-database.example.com:3306"
username = "app-user"
password = "app-password"
}
State snapshots include only the results of resource, data, and output blocks, so that Terraform can compare these with the configuration when creating a plan.
The arguments inside a provider block are not saved in state snapshots, because Terraform only needs the current arguments for the provider configuration, and never needs to compare with the previous.
Even though the provider arguments are not included in the state, it's best to keep specific credentials out of your configuration. Providers tend to offer arguments for credentials as a last resort for unusual situations, but should also offer other ways to provide credentials. For some providers there is some existing standard way to pass credentials, such as the AWS provider using the same credentials mechanisms as the AWS CLI. Other providers define their own mechanisms, such as environment variables.
For the MySQL provider in particular, we should set endpoint in the configuration because that describes what Terraform is managing, but we should use environment variables to specify who is running Terraform. We can use the MYSQL_USERNAME and MYSQL_PASSWORD environment variables to specify the credentials for the individual or system that is running Terraform.
A special exception to this is when Terraform itself is the one responsible for managing the credentials. In that case, the resource that provisioned the credentials will have its data (including the password) stored in the state. There is no way to avoid that because otherwise it would not be possible to use the password elsewhere in the configuration.
For Terraform configurations that manage credentials (rather than just using credentials), they should ideally be separated from other Terraform configurations and have their state snapshots stored in a location where they can be encrypted at rest and accessible only to the individuals or systems that will run Terraform against those configurations. In that case, treat the state snapshot itself as a secret.
No, it's not possible. Your best option is using a safe and encrypted remote backend such as S3 + DynamoDB to keep your state files. I've also read about people using git-crypt, but have never tried it myself.
That said, you can keep secrets out of your source code using environment variables for inputs.

Deploying a VMSS and injecting secrets

I am wondering if there is any straightforward way of injecting files/secrets into the VMs of a scale set, either as you perform the (ARM) deployment or when you change the image.
This would be application-level passwords, certificates, and so on, that we would not want stored on the images.
I am using the Linux custom script extension for the entrypoint script, and realize that it's possible to inject some secrets as parameters to that script. I assume this would not work with certificates however (too big/long), and it would not be very future-proof, as we would need to redeploy the template (and rewrite the entrypoint script) whenever we want to add or remove a secret.
Windows-based VMSS can get certificates from Key Vault directly during deployment, but Linux ones cannot. Also, there is a customData property which allows you to pass in whatever you want (I think it's limited to 64 KB of base64-encoded data), but that is not really flexible either.
One way of solving this is to write an init script that uses Managed Service Identity to get secrets from Key Vault. This way you get several advantages:
You don't store secrets in the templates/VM configuration
You can update the secret, and all the VMSS instances will get the new version on the next deployment
You don't have to edit the init script unless secret names change or new secrets are introduced.
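A rough sketch of such an init script, expressed here as cloud-init user data (the vault name, secret name, and target path are placeholders; it assumes the Azure CLI is present on the image and the scale set's managed identity has "get" permission on the vault's secrets):

#cloud-config
# Sketch only: fetch a secret at boot via the VM's managed identity, so nothing
# sensitive is baked into the image or the ARM template.
runcmd:
  - az login --identity --allow-no-subscriptions
  - mkdir -p /etc/myapp
  - az keyvault secret show --vault-name myvault --name db-password --query value -o tsv > /etc/myapp/db-password
  - chmod 600 /etc/myapp/db-password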

How do I create hierarchical data structures in Azure Key Vaults

I need a way to store hierarchical data in Azure Key Vaults so that I have a structure similar to:
AppName
  /Prod
    /Data
  /Test
    /Data
AppName2
  /Prod
    /Data
...
As far as I can tell I can only store a flat data structure. I am looking to be able to store data similar to Vault by HashiCorp which allows hierarchies.
For instance, in Vault by HashiCorp, I can get data using a 'path': "app/test/TestConnection" and I get the value at the endpoint of the path: TestConnection.
Any suggestion for alternatives would be fine or instruction on how to do what I need to do with Key Vault.
Thanks
Update
I tried some of the suggestions (MySettings--SomeSection--SecretThing, multiple vaults) and neither works in the manner I need as described above. Not faulting the input, but what I want to do just is not available in Key Vault.
@juunas It turns out that your suggestion may be the best solution. I only just discovered in another article that MySettings--SomeSection--Secret translates into something similar to this in .NET Core:
MySettings: {
  SomeSection: "Secret"
}
Since my client wants to use Key Vault we are probably going to go with storing json structured data per a single secret per application.
Any other suggestions are welcome
Key Vault does not support hierarchies for secrets.
To emulate structure, you can do something similar what .NET Core does with its Key Vault configuration provider. You can specify a secret with a name like Settings--SomeCategory--SomeValue, and it'll correspond to the following JSON when loaded:
{
  "Settings": {
    "SomeCategory": {
      "SomeValue": "value goes here"
    }
  }
}
So essentially you can use a separator to emulate the structure, similar also to how Azure Blob Storage emulates folders.
I would advise against mixing secrets for different environments within the same key vault. Access cannot be restricted to individual keys, as access is granted and denied at the Key Vault level only. You probably don't want the same persons/applications to be able to access all the different environments; instead, grant access to the production environment to a selected group of users and applications only, and vice versa.
As the Key Vault service by itself doesn't really cost anything, we at least have taken the approach to create one Key Vault per environment, i.e. dev, test and production. Within that key vault the secrets are "structured" by a prefix, i.e. AppName-Data and AppName2-Data. This gives the added benefit, that when moving from dev to test and to production, the references to the secrets don't need to be changed, as they have the same name in all the environments. Just the reference to the Key Vault needs to be changed, and all is set!

Automate deployment of SSL private key using Ansible

I'm in the process of creating Ansible scripts to deploy my websites. Some of my sites use SSL for credit card transactions. I'm interested in automating the deployment of SSL as much as possible too. This means I would need to automate the distribution of the private key. In other words, the private key would have to exist in some format off the server (in revision control, for example).
How do I do this safely? Some ideas that I've come across:
1) Use a passphrase to protect the private key (http://red-badger.com/blog/2014/02/28/deploying-ssl-keys-securely-with-ansible/). This would require providing the passphrase during deployment.
2) Encrypt the private key file (aescrypt, openssl, pgp), similar to this: https://security.stackexchange.com/questions/18951/secure-way-to-transfer-public-secret-key-to-second-computer
3) A third option would be to generate a new private key with each deployment and try to find a certificate provider who accommodates automatic certificate requests. This could be complicated and problematic if there are delays in the process.
Have I missed any? Is there a preferred solution or anyone else already doing this?
Another way to do this would be to use Ansible Vault to encrypt your private keys while at rest. This would require you to provide the vault password either on the Ansible command line or in a text file or script from which Ansible reads it.
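As a rough sketch of that approach (the paths and the variable name ssl_private_key are placeholders), with the key material kept in an ansible-vault encrypted vars file and written out by a copy task:

# Run with: ansible-playbook site.yml --ask-vault-pass
# group_vars/web/vault.yml (encrypted with ansible-vault) defines ssl_private_key.
- hosts: web
  vars_files:
    - group_vars/web/vault.yml
  tasks:
    - name: Deploy SSL private key
      copy:
        content: "{{ ssl_private_key }}"
        dest: /etc/ssl/private/example.com.key
        owner: root
        group: root
        mode: "0600"
      no_log: true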
There really isn't a preferred method. My guess is that if you asked 10 users of Ansible you'd get 10 different answers with regards to security. Since our company started using Ansible long before Ansible Vault was available we basically stored all sensitive files in local directories on servers that only our operations team has access to. At some point we might migrate to Ansible Vault since its integrated with Ansible, but we haven't gotten to that point yet.
