What kind of security does the `identity` provider in Kubernetes EncryptionConfiguration provide?

Ref: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#providers
According to the docs
Resources written as-is without encryption. When set as the first provider, the resource will be decrypted as new values are written.
The sentence "When set as the first provider, the resource will be decrypted as new values are written." sounds confusing. If resources are written as-is with no encryption into etcd, what does "decrypted as new values are written" mean?
And following that
By default, the identity provider is used to protect secrets in etcd, which provides no encryption.
What kind of security does the identity provider give if no encryption happens, and if encryption does happen, what kind of encryption is it?

As stated in the etcd documentation about security:
Does etcd encrypt data stored on disk drives?
No. etcd doesn't encrypt key/value data stored on disk drives. If a user needs to encrypt data stored in etcd, there are some options:
Let client applications encrypt and decrypt the data
Use a feature of underlying storage systems for encrypting stored data like dm-crypt
First part of the question:
By default, the identity provider is used to protect secrets in etcd, which provides no encryption.
It means that by default the k8s API server uses the identity provider when storing secrets in etcd, and that provider performs no encryption.
Using an EncryptionConfiguration with identity as the only provider gives you the same result as not using an EncryptionConfiguration at all (assuming you didn't have any encrypted secrets before).
All secret data will be stored in plain text in etcd.
Example:
providers:
- identity: {}
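For reference, a complete EncryptionConfiguration file using only this provider would look like the following; it is functionally identical to running without any encryption configuration:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - identity: {}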
Second part of your question:
Resources written as-is without encryption.
This is described and explained in the first part of this answer.
When set as the first provider, the resource will be decrypted as new values are written.
Take a look at this example:
providers:
- aescbc:
    keys:
    - name: key1
      secret: <BASE 64 ENCODED SECRET>
- identity: {}
What this configuration means for you:
The new provider introduced into your EncryptionConfiguration does not affect existing data.
All existing secrets in etcd (from before this configuration was applied) are still in plain text.
Starting with this configuration, all new secrets will be saved using aescbc encryption. All new secrets in etcd will have the prefix k8s:enc:aescbc:v1:key1.
In this scenario you will have a mixture of encrypted and unencrypted data in etcd.
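You can observe this mixture directly in etcd. A sketch, assuming etcdctl is configured with the cluster's certificates and a secret named secret1 exists in the default namespace:
# Encrypted entries start with the k8s:enc:aescbc:v1:key1 prefix;
# plain-text entries show the Secret contents as-is.
ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 | hexdump -C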
So why are we using those two providers?
The aescbc provider is used to write new secrets as encrypted data during write operations and to decrypt existing secrets during read operations.
The identity provider is still necessary to read all the secrets that are not encrypted.
Now let's switch the order of the providers in the EncryptionConfiguration:
providers:
- identity: {}
- aescbc:
    keys:
    - name: key1
      secret: <BASE 64 ENCODED SECRET>
In this scenario you will again have a mixture of encrypted and unencrypted data in etcd.
Starting with this configuration, all new secrets will be saved in plain text.
For all existing secrets in etcd with the prefix k8s:enc:aescbc:v1:key1, the aescbc provider configuration will be used to decrypt them.
This is what the docs mean by: "When set as the first provider, the resource will be decrypted as new values are written."
In order to go from a mixture of encrypted and unencrypted data to a state where all data is unencrypted, you should perform a read/write operation on all secrets:
$ kubectl get secrets --all-namespaces -o json | kubectl replace -f -
Why is it there if it offers no encryption, when the docs seem to talk about decryption and how it protects?
It's necessary to have a provider of type identity if you have a mixture of encrypted and unencrypted data,
or if you want to decrypt all existing secrets (stored in etcd) that were encrypted by another provider.
The following command reads all secrets and then updates them to apply the current server-side encryption configuration. More details can be found in this paragraph of the docs.
$ kubectl get secrets --all-namespaces -o json | kubectl replace -f -
Depending on your EncryptionConfiguration, all secrets will then be saved unencrypted (if the first provider is identity) or encrypted (if the first provider is of a different type).
In addition
Encryption at rest is disabled by default. To use it, you have to add the --encryption-provider-config flag to your kube-apiserver configuration. identity does not encrypt any data; per the providers table in the encryption documentation, all three of its security-related columns are N/A.
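For reference, a minimal sketch of enabling it on a kubeadm-style cluster where the kube-apiserver runs as a static pod; the file paths are assumptions, and the directory holding the config must also be mounted into the kube-apiserver pod:
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
spec:
  containers:
  - command:
    - kube-apiserver
    - --encryption-provider-config=/etc/kubernetes/enc/enc.yaml
    # ...all other existing flags stay unchanged...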

Related

Azure Data Factory - Storing Inline Passwords

I have a pipeline in Azure Data Factory that starts by going to a REST API to obtain an authorization token. In order to obtain this token, the initial POST request needs to contain a username, password, and private key in the request body. It looks like this:
{
    "Username": "<myusername>",
    "Password": "<mypassword>",
    "PrivateKey": "<privatekey>"
}
Currently I just have this stored as plain text in the Web activity in ADF.
To me this doesn't seem very secure and I'm wondering if there is a better way to store this JSON string. I've looked into Azure Key Vault, but that seems to be for storing "data store" credentials.... What is the best practice for storing credentials like this to be used by ADF?
You can save the individual values as secrets in Key Vault and fetch them individually via a Web activity from Key Vault, with secure output enabled so the values are masked in logs, thereby keeping your ADF pipeline secure.
The GitHub location below contains the pipeline JSON:
https://github.com/NandanHegde15/Azure-DataFactory-Generic-Pipelines/blob/main/Get%20Secret%20From%20KeyVault/Pipeline/GetSecretFromKeyVault.json
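A sketch of such a Web activity, assuming the data factory's managed identity has been granted Get permission on the vault's secrets (vault and secret names are illustrative); enable the activity's secure output option so the fetched value is masked in run logs:
{
    "name": "GetPasswordFromKeyVault",
    "type": "WebActivity",
    "typeProperties": {
        "url": "https://myvault.vault.azure.net/secrets/Password?api-version=7.0",
        "method": "GET",
        "authentication": {
            "type": "MSI",
            "resource": "https://vault.azure.net"
        }
    }
}
Later activities can then reference the value as activity('GetPasswordFromKeyVault').output.value.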
Another way would be to use a SecureString parameter, but I would advise avoiding the parameter approach and leveraging Key Vault instead.
The credentials can be saved in a Key Vault secret. The secret can then be referenced for authentication in the linked service that connects to the required base URL; see the sketch after the link below.
Refer https://learn.microsoft.com/en-us/azure/data-factory/connector-http?tabs=data-factory#create-a-linked-service-to-an-http-source-using-ui
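A sketch of an HTTP linked service that pulls its password from Key Vault; the names are illustrative, and a Key Vault linked service named AzureKeyVaultLS is assumed to already exist:
{
    "name": "HttpServerLS",
    "properties": {
        "type": "HttpServer",
        "typeProperties": {
            "url": "https://api.example.com",
            "authenticationType": "Basic",
            "userName": "myusername",
            "password": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "AzureKeyVaultLS",
                    "type": "LinkedServiceReference"
                },
                "secretName": "mypassword"
            }
        }
    }
}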

No XML encryptor configured - When using Key Vault

I have a netcoreapp2.2 containerized application that uses Azure Key Vault to store keys and also uses:
app.UseAuthentication();
And
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
I am building/running a docker image in a hosted Linux environment under App Services. I am using the Azure Container Registry and a DevOps pipeline to maintain my app. Azure controls the deployment process and the "docker run" command.
My app works great, however in the container logs I see:
2019-12-13T17:18:12.207394900Z warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
2019-12-13T17:18:12.207436700Z No XML encryptor configured. Key {...} may be persisted to storage in unencrypted form.
...
2019-12-13T17:18:14.540484659Z Application started. Press Ctrl+C to shut down.
I realize there are many other posts on this that allude to using other storage mechanisms; however, I am using Key Vault to store my sensitive data. JWT is all handled by Key Vault. I have a few application settings that control static variables for DEV/QA/PROD, but they are not sensitive data at all.
I am also not sure what key is being stored in memory as all my sensitive keys are completely outside of the application and are called by:
var azureServiceTokenProvider = new AzureServiceTokenProvider();
var keyVaultClient = new KeyVaultClient(
    new KeyVaultClient.AuthenticationCallback(
        azureServiceTokenProvider.KeyVaultTokenCallback));
config.AddAzureKeyVault(
    $"https://{builtConfig["MY_KEY_VAULT_ID"]}.vault.azure.net/",
    keyVaultClient,
    new DefaultKeyVaultSecretManager());
I am having a difficult time understanding why this warning is being thrown and also if I should take additional steps to mitigate the issue. I have not personally seen side effects, and app restarts do not seem to have any effect as I am using bearer tokens and other concerns such as token expiration, password resets and the like are not applicable.
So I am left with asking are there any additional steps I can take to avoid this warning? Do I need to ensure that there is a better data at rest mechanism for any configuration settings that may be in my linux environment? Can I safely ignore this warning?
It took me a while to find a way that suited the needs of my application, but I want to lend some clarity to a number of other Stack answers that just did not make sense to me, and explain how I finally understood the problem.
TL;DR: Since I was already using Key Vault, I was confused about how .NET Core works. I didn't realize that config.AddAzureKeyVault() has nothing to do with how .NET Core decides to store Data Protection keys at rest on your App Service.
When you see this warning:
No XML encryptor configured. Key {GUID} may be persisted to storage in unencrypted form.
it really doesn't matter what GUID was being set: that string of data was not being stored encrypted at rest.
For my risk analysis any information that is not being encrypted at rest is a bad idea as it could mean at anytime in the future some sort of sensitive data could leak and then be exposed to an attacker. In the end, I chose to classify my data at rest as sensitive and err on the side of caution with a potential attack surface.
I have been struggling to try and explain this in a clear and concise way and it is difficult to sum up in a few words. This is what I learned.
Access control (IAM) is your friend in this situation, as you can declare a system-assigned identity for your application and use role-based access control. In my case I used my application identity to control access to both Key Vault and Azure Storage with RBAC. This makes it much easier to get access without SAS tokens or access keys.
Azure storage will be the final destination for the file you are creating, but it will be the vault that controls the encryption key. I created an RSA key in key vault, and that key is what encrypts the XML file that is throwing the original error.
One of the mistakes I was making in my head was that I wanted to write the encrypted XML to Key Vault. However, that is not really the use case Microsoft describes. There are two mechanisms: PersistKeysTo and ProtectKeysWith. As soon as I got that through my thick head, it all made sense.
I used the following to remove the warning and create encrypted data at rest:
services.AddDataProtection()
    // Create a CloudBlockBlob with AzureServiceTokenProvider
    .PersistKeysToAzureBlobStorage(...)
    // Create a KeyVaultClient with AzureServiceTokenProvider
    // and point to the RSA key by id
    .ProtectKeysWithAzureKeyVault(...);
I had already used RBAC for my application with key vault (with wrap/unwrap permissions), but I also added Storage Blob Data Contributor to the storage account.
How you create your blob is up to you, but one gotcha is creating the access token synchronously:
private static string GetStorageAccessToken()
{
    var tokenProvider = new AzureServiceTokenProvider();
    // Block on the async call; this runs during service configuration.
    return tokenProvider.GetAccessTokenAsync("https://storage.azure.com/")
        .GetAwaiter()
        .GetResult();
}
Then I called it from a method:
var uri = new Uri($"https://{storageAccount}.blob.core.windows.net/{containerName}/{blobName}");
// Credentials
var tokenCredential = new TokenCredential(GetStorageAccessToken());
var storageCredentials = new StorageCredentials(tokenCredential);
return new CloudBlockBlob(uri, storageCredentials);
After this hurdle was overcome, putting the encryption in was straightforward. The Key Vault key identifier is the location of the encryption key you are using:
https://mykeyvaultname.vault.azure.net/keys/my-key-name/{VersionGuid}
And creating the client is:
var token = new AzureServiceTokenProvider();
var client = new KeyVaultClient(
    new KeyVaultClient.AuthenticationCallback(token.KeyVaultTokenCallback));
services.AddDataProtection()
    .ProtectKeysWithAzureKeyVault(client, keyVaultId);
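Putting the pieces together, the registration ends up roughly like this; a sketch, assuming GetKeyBlob() wraps the CloudBlockBlob creation shown earlier and keyVaultId is the key identifier URL above:
var tokenProvider = new AzureServiceTokenProvider();
var keyVaultClient = new KeyVaultClient(
    new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

services.AddDataProtection()
    // The key ring is stored in blob storage...
    .PersistKeysToAzureBlobStorage(GetKeyBlob())
    // ...and each key is wrapped by the RSA key held in Key Vault.
    .ProtectKeysWithAzureKeyVault(keyVaultClient, keyVaultId);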
I also have to give credit to this blog: https://joonasw.net/view/using-azure-key-vault-and-azure-storage-for-asp-net-core-data-protection-keys as this pointed me in the right direction.
https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/default-settings?view=aspnetcore-2.2 this also pointed out why keys are not encrypted
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles - RBAC for apps
https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/overview?view=aspnetcore-3.1 this was confusing at first but has a good warning about how to grant access and limit access in production.
You might have to configure your Data Protection policy to use cryptographic algorithms as follows:
.UseCryptographicAlgorithms(new AuthenticatedEncryptorConfiguration()
{
    EncryptionAlgorithm = EncryptionAlgorithm.AES_256_CBC,
    ValidationAlgorithm = ValidationAlgorithm.HMACSHA256
});
Also, the following are a few caveats around the Data Protection policy:
ASP.NET Core Data Protection stores keys in the HOME directory (/root/.aspnet/DataProtection-Keys), so when the container restarts the keys are lost, and this might break the service.
This can be resolved by persisting the keys in one of two ways:
Persist the keys to a persistent location (a volume) and mount that volume into the Docker container
Persist the keys to an external key store like Azure or Redis
More details about ASP.NET DataProtection:
https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/overview?view=aspnetcore-3.1
https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/introduction?view=aspnetcore-3.1
To mount an external volume (C:/temp-keys) to the Docker container volume (/root/.aspnet/DataProtection-Keys), use the following command:
docker run -d -v /c/temp-keys:/root/.aspnet/DataProtection-Keys container-name
Also, you need to update ConfigureServices in your Startup.cs to configure the Data Protection policy:
services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo(@"C:\temp-keys\"))
    .UseCryptographicAlgorithms(new AuthenticatedEncryptorConfiguration()
    {
        EncryptionAlgorithm = EncryptionAlgorithm.AES_256_CBC,
        ValidationAlgorithm = ValidationAlgorithm.HMACSHA256
    });

Hiding a secret in Terraform for Azure with a caveat

So I am trying to find some way to hide a secret in Terraform. The caveat is the secret is a Service Principal that is used to connect to our Key Vault. I can't store the secret in the Key Vault, as it hasn't connected to the Key Vault yet at that point. This is part of my main tf file.
provider "azurerm" {
alias = "kv_prod"
version = "1.28"
tenant_id = "<tenant id>"
subscription_id = "<sub id>"
client_id = "<SP client id>"
client_secret = "<SP secret>"
}
This is used further down my module to store Storage Account keys and other secrets. It just happens to be in a Prod subscription that not everyone has access to.
Has anyone run into something like this? If so, how would you go about securing that secret?
@maltman There are several ways to hide a secret in Terraform. Here is a blog that talks about them:
https://www.linode.com/docs/applications/configuration-management/secrets-management-with-terraform/
However, if you are only concerned about encrypting the secrets file when checking it in and out of git, you can use something like git-crypt.
You would have to create a couple of files:
variables.tf -> Define your variables here
variable "client_secret" {
description = "Client Secret"
}
terraform.tfvars -> Give the value of the variable here
client_secret = "your-secret-value"
Now use git-crypt to encrypt terraform.tfvars when checking into git, as sketched below.
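A sketch of the git-crypt setup; the GPG key ID is illustrative:
# One-time setup in the repository
git-crypt init
git-crypt add-gpg-user YOUR_GPG_KEY_ID

# .gitattributes: encrypt terraform.tfvars transparently on commit
terraform.tfvars filter=git-crypt diff=git-crypt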
For your requirements, I think there are two comparatively secure ways.
One is to store the credentials as environment variables so that you do not expose the secret in the tf files.
The other is to log in with those credentials via the Azure CLI; then you just need to set the subscription, without exposing the secret in the tf file. A sketch of both options follows.
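A sketch of both options; the ARM_* names are the environment variables the azurerm provider reads, and all values are placeholders:
# Option 1: service principal credentials as environment variables;
# the provider block can then omit client_id/client_secret entirely.
export ARM_CLIENT_ID="<SP client id>"
export ARM_CLIENT_SECRET="<SP secret>"
export ARM_SUBSCRIPTION_ID="<sub id>"
export ARM_TENANT_ID="<tenant id>"

# Option 2: authenticate once via the Azure CLI, then select the subscription.
az login --service-principal -u "<SP client id>" -p "<SP secret>" --tenant "<tenant id>"
az account set --subscription "<sub id>"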
The above two ways are what I think is secure and feasible for you. Hope it helps.
Terraform doesn't have this feature natively, but it can be achieved by using third-party integration.
Storing Secret in Terraform:
Terraform has an external data source that can be used to run an external program and use its return value. I have used the Ansible Vault feature to encrypt and decrypt the secrets, so they are stored encrypted in the repository rather than as plain text.
data "external" "mysecret" {
program = ["bash", "-c", "${path.module}/get_ansible_secret.sh"]
query = {
var = "${var.secret_value}"
vault_password_file = "${path.module}/vault-password.sh"
# The file containing the secret we want to decrypt
file = "${var.encrypted_file}"
}
}
Refer to the working example: github example.
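For illustration: the external program must read a JSON query on stdin and write a JSON object of string values to stdout. A hypothetical get_ansible_secret.sh might look like this (jq and ansible-vault are assumed to be installed):
#!/bin/bash
set -e
# Parse the query Terraform passes on stdin into shell variables.
eval "$(jq -r '@sh "FILE=\(.file) VAULT_PASSWORD_FILE=\(.vault_password_file)"')"
# Decrypt the vaulted file and hand the value back to Terraform as JSON.
SECRET=$(ansible-vault view --vault-password-file "$VAULT_PASSWORD_FILE" "$FILE")
jq -n --arg secret "$SECRET" '{secret: $secret}'
The decrypted value is then available as data.external.mysecret.result.secret.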
I'm going to create an ADO pipeline to handle this instead, so that the code just does not have to be available.

Azure DevOps terraform and AKV

In our case we are doing the following:
1. Infra Agent
a. We create a KV
b. We create a SQL Database in the tf script, including assigning an admin username and password (randomly generated value).
c. We store the username and password as secrets in the newly created KV
2. Data Agent
a. We want to deploy the DDL from the repos onto the SQL Database we created in Infra Agent. We need to use the SQL database username and password stored in the KV to do so
b. In order to read the secrets from the KV, our current thinking is to insert the username and password into pipeline parameters in step 1 (i.e. setting them at runtime) so we can reuse the values across other Agents.
A couple of questions:
- Is that the right approach? Should KV be created in the Infra Agent tf scripts? Should we randomly generate passwords (as secrets)?
- What is best practice to access the Database username and password in other Agents, given that:
- We can’t use variable groups because the KV and values won’t be known until runtime
- We can’t use the Key Vault Task (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops) to read from the KV because the KV name is only known at runtime (via the tf vars file)
b. We create a SQL Database in the tf script, including assigning an admin username and password (randomly generated value).
If you're using Key Vault, then I assume you're talking about Azure SQL Databases. However, at the moment Terraform only supports assigning an administrator username and password for the SQL Server instance, not for individual SQL databases.
In this case, I recommend using random_password resources to assign values to azurerm_key_vault_secret which can then be assigned as the azurerm_sql_server administrator password.
With this setup you know for certain that the password in Key Vault is always in sync, and can be treated as the source of truth for your SQL server passwords (unless someone goes and resets the administrator password manually of course).
Now if you ever want to reset an SQL server password, simply taint the random_password, forcing it to be recreated with a new value, which in turn updates the azurerm_key_vault_secret value and then the azurerm_sql_server password; the sketch below shows the flow.
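The rotation flow is then a two-command sketch:
# Force the password to be regenerated on the next apply; the Key Vault
# secret and the SQL server pick up the new value through the resource graph.
terraform taint random_password.password
terraform apply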
Here's some quick HCL as an example:
resource "random_password" "password" {
length = 16
special = false
}
resource "azurerm_key_vault_secret" "password_secret" {
depends_on = [<the Key Vault access policy for your infra agent which runs terraform apply>]
...
value = random_password.password.result
...
}
resource "azurerm_sql_server" "sql_server" {
...
administrator_login_password = azurerm_key_vault_secret.password_secret.value
...
}
Is that the right approach? Should KV be created in the Infra Agent tf scripts? Should we randomly generate passwords (as secrets)?
This is a sensible approach, but remember that billing is per secret, key, or cert, while Key Vaults themselves are free. It's recommended to create a Key Vault for each application, because access policies can only be applied per Key Vault, not per secret/key/cert.
We can’t use the Key Vault Task (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops) to read from the KV because the KV name is only known at runtime (via the tf vars file)
Why is this only known at runtime? This sounds like a limitation of your own process, since Terraform allows you to specify a name for each Key Vault when you create it. Reconsider whether this is really a requirement and why you are doing this. If it definitely is a requirement and your Key Vault names are dynamically generated, then you can use terraform output to get the Key Vault name during the pipeline and set it as a variable during the build.
To fetch the Key Vault name as an output, just use the following HCL:
output "key_vault_name" {
value = "${azurerm_key_vault.demo_key_vault.name}"
}
and run `terraform output key_vault_name` to write the value to stdout.
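In an Azure DevOps pipeline, that output can then be promoted to a pipeline variable for later tasks. A sketch, assuming a bash step that runs after terraform apply (the variable name is illustrative; ##vso is the Azure DevOps logging-command syntax):
KV_NAME=$(terraform output key_vault_name)
echo "##vso[task.setvariable variable=keyVaultName]$KV_NAME"
Subsequent tasks in the job can then read $(keyVaultName), for example as the KeyVaultName input of the Azure Key Vault task.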

Securing Kubernetes secret files for source control?

According to the Kubernetes secrets docs, creating a secret is as easy as base64-encoding the data and placing it in a file.
How then, if base64 can be decoded as easily as it's encoded, can we secure/encrypt the secret values in the file? It would be nice to be able to commit the secret files into source control, however simply committing the file with base64-encoded data is in no way secure.
For example, here is the example given in the docs:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: dmFsdWUtMg0K
  username: dmFsdWUtMQ0K
If you went to base64decode.org, you would see that those password/username values are simply "value-2" and "value-1". This file is unfit for source control. How can we secure the data in the file so that it is safe for source control? Or is this considered bad practice, and should we just add the file to .gitignore?
It isn't base64 encoded for security, it is to allow binary content to be stored in secrets. You likely should not commit secret definitions to source control.
For confidential secret keys, can you store them in etcd and retrieve them with confd?
Otherwise, if you really want them in SCM, then can you use git-crypt?
https://github.com/AGWA/git-crypt
I'd deploy them with Ansible and encrypt the secrets using ansible-vault, so they could live inside the repository. In addition, they could be stored as plain-text values, applying the base64 filter in a template; see the sketch below.
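A sketch of that approach; file names and variable names are illustrative:
# Encrypt the vars file that holds the plain-text values:
#   ansible-vault encrypt group_vars/all/secrets.yml

# templates/mysecret.yaml.j2 -- base64 is applied at render time:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: "{{ secret_username | b64encode }}"
  password: "{{ secret_password | b64encode }}"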
Anyway, as it was said before, secrets are not secure. They are just encoded in base64 and could be decoded with:
kubectl get secret mysecret -o jsonpath="{.data.username}" | base64 -d
kubectl get secret mysecret -o jsonpath="{.data.password}" | base64 -d
(which is very useful, by the way)
