I have integrated Azure Key Vault using Terraform. I can SSH into the container and view the secrets.
My question is: is it possible to somehow reference that secret value inside the deployment.yml file that I use for deploying my pods in the Kubernetes cluster?
I am using the following deployment file. Normally I access Kubernetes secrets using valueFrom and then referencing the secret name and key, as shown here. How would that be possible if I want to insert the value of a secret from Key Vault?
spec:
  containers:
    - name: test-container
      image: test.azurecr.io/test-image:latest
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
      env:
        - name: testSecret
          valueFrom:
            secretKeyRef:
              name: testSecret
              key: testSecretPassword
Thanks
You will need a Terraform data source. Data sources allow data to be fetched or computed for use elsewhere in Terraform configuration. Use of data sources allows a Terraform configuration to make use of information defined outside of Terraform, or defined by another separate Terraform configuration.
data "azurerm_key_vault_secret" "test" {
name = "secret-sauce"
key_vault_id = "${data.azurerm_key_vault.existing.id}"
}
output "secret_value" {
value = "${data.azurerm_key_vault_secret.test.value}"
}
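From there, one option is to feed that value into a Kubernetes secret with the Terraform kubernetes provider, so the pod's secretKeyRef keeps working unchanged. A minimal sketch, assuming the kubernetes provider is configured against your cluster (the resource and secret names here are illustrative, not from your config):

# Mirror the Key Vault secret into a Kubernetes secret.
# Note: Kubernetes secret names must be lowercase DNS-1123 names.
resource "kubernetes_secret" "test" {
  metadata {
    name = "testsecret"
  }

  data = {
    testSecretPassword = data.azurerm_key_vault_secret.test.value
  }

  type = "Opaque"
}

The deployment can then reference name: testsecret and key: testSecretPassword via secretKeyRef as before. Keep in mind the secret value will also end up in the Terraform state, so treat the state file as sensitive.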
You can look at Key Vault FlexVolume to integrate Key Vault into K8s. Secrets, keys, and certificates in a key management system become a volume accessible to pods. Once the volume is mounted, its data is available directly in the container filesystem for your application.
I will be honest, I have not tried this solution and don't know if it will work outside of our AKS offering.
https://www.terraform.io/docs/providers/azurerm/d/key_vault_secret.html
https://blog.azureandbeyond.com/2019/01/29/terraform-azure-keyvault-secrets/
I hope somebody can help me out here.
I have a basic configuration in Azure which consists of a web app and a database.
The web app is able to connect to the database using managed identity, and everything here works just fine, but I wanted to try the same configuration using AKS.
I deployed AKS and enabled managed identity. I deployed a pod into the cluster as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dockerimage
          ports:
            - containerPort: 80
          env:
            - name: "ConnectionStrings__MyDbConnection"
              value: "Server=server-url; Authentication=Active Directory Managed Identity; Database=database-name"
            - name: "ASPNETCORE_ENVIRONMENT"
              value: "Development"
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: Always
The deployment went through smoothly and everything works just fine. But this is where I have the problem and cannot figure out the best solution.
The env block is in plain text; I would like to protect those environment variables by storing them in a key vault.
I have been looking around in different forums and documentation, and the options start confusing me. Is there any good way to achieve security in this scenario?
In my web app, under configuration, I have managed identity enabled, and using this I can access and retrieve the secrets in a key vault. Can I do the same using AKS?
Thank you so much for any help you can provide.
And please, if my question is not 100% clear, just let me know.
Upgrade an existing AKS cluster with Azure Key Vault Provider for Secrets Store CSI Driver support.
Use a user-assigned managed identity to access Key Vault.
Set an environment variable to reference Kubernetes secrets.
You will need to do some reading, but the process is straightforward; the first step is sketched below.
The Key Vault secrets will be stored in Kubernetes secrets, which you can reference in the pod's environment variables.
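As a rough sketch of that first step, the CSI driver add-on can be enabled on an existing cluster with the Azure CLI (the cluster and resource group names below are placeholders):

az aks enable-addons \
  --addons azure-keyvault-secrets-provider \
  --name myAKSCluster \
  --resource-group myResourceGroup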
You can try replacing the environment key-values, as you did, with Azure App Configuration. Using Azure App Configuration, you can add "ConnectionStrings__MyDbConnection" as a 'Key Vault reference' pointing to your Key Vault secret. Then use the DefaultAzureCredential or ManagedIdentityCredential class to set up the credential for authentication to the App Configuration and Key Vault resources.
var builder = WebApplication.CreateBuilder(args);

// Client ID of the user-assigned managed identity.
var usermanaged_client_id = "";
var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = usermanaged_client_id });

// Add services to the container.
builder.Configuration.AddAzureAppConfiguration(opt =>
{
    opt.Connect(new Uri("https://your-app-config.azconfig.io"), credential)
       .ConfigureKeyVault(kv =>
       {
           kv.SetCredential(credential);
       });
});
Make sure that you grant the user-assigned managed identity access to the Key Vault.
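If the vault uses access policies, that grant might look like the following sketch (the vault name and client ID are placeholders):

az keyvault set-policy \
  --name your-key-vault \
  --secret-permissions get list \
  --spn <user-assigned-identity-client-id>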
I am just wondering what the result of running the following command is:
kubectl create secret generic NAME [--from-literal=key1=value1]
Should we run it inside a specific file of the project?
Where does it save the result of running this command, and how does the application use its result?
This command will create a secret with the data key key1 and its respective value value1, in whatever namespace is set in the current-context of your kubeconfig (if you have not set this yourself, it will be the default namespace).
Should we run it inside a specific file of the project?
For this command, no, it does not matter. This is an imperative command, so it does not matter where you are on your machine when you run it. Imperative commands carry everything needed to create the resource in the command itself, in this case a secret with the key key1 and its respective value. The command does not reference any files in your project, so you can run it from anywhere.
You can contrast this to a similar resource creation command that provides the resource the declarative way:
kubectl apply -f my-file.yaml
This command says to take the resource defined in the file and create or update it - but here I need to supply the path to the file, so in this instance my location would matter.
Where does it save the result of running this command
A request will be sent to the api-server which will try to create the object. Kubernetes will now include this object as part of its desired state, it will be stored in etcd. You can read more about the Kubernetes components in their docs - but insofar as it is relevant for this question, the secret object now exists in the cluster which is scoped to the current namespace the user is working in (see kubeconfig context).
Secrets are namespaced objects. This means if you want a pod to use the secret, it will need to be in the same namespace as the secret. If you have not explicitly set a namespace for your kubeconfig current-context, your command will create a secret in the 'default' namespace. This also means that anyone with access to the default namespace can also access the secret.
To view your secret you can run
kubectl get secret
and if you want to see the data in it
kubectl get secret -o yaml
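The values in that output are base64-encoded. To read a single key back in plain text (NAME as in the question's command, key1 as its data key):

kubectl get secret NAME -o jsonpath='{.data.key1}' | base64 --decode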
and how does the application use its result?
There are a number of ways to consume secrets. Here is an example of how to set an env var on a container from key1 in the secret data.
Note: this snippet only shows a section of a valid YAML template for a deployment, showing a container named app, its image, and its envs.
...
- name: app
  image: your-app
  env:
    - name: NAME
      valueFrom:
        secretKeyRef:
          name: name
          key: key1
          optional: false # same as default; the secret "name" must exist
                          # and include a key named "key1"
...
Kubernetes secret docs: https://kubernetes.io/docs/concepts/configuration/secret/
A Secret is a way of storing information in Kubernetes.
Where does it save the result of running this command
When you run that kubectl create secret command, it will create a Secret resource in your current namespace. If you're unsure what namespace you're currently configured to use, you can view it by running:
$ kubectl config get-contexts $(kubectl config current-context) | awk '{print $NF}'
NAMESPACE
default
How does the application use its result?
You can make the information in a secret available to an application in several ways:
You can expose Secret values as environment variables
You can mount Secrets as files on the filesystem
You can retrieve secrets via the Kubernetes API
The linked documentation contains examples that show what this looks like in practice.
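As an illustration of the file-based option, a pod spec fragment that mounts a secret named my-secret as files might look like this sketch (the container and path names are placeholders); each key in the secret becomes a file under mountPath:

spec:
  containers:
    - name: app
      image: your-app
      volumeMounts:
        - name: secret-volume
          mountPath: "/etc/secrets"
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: my-secret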
When you run:
kubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret
Kubernetes creates the secret in the currently active namespace; you can target a different namespace without switching into it by using -n <namespace>. Secrets are stored in the internal etcd (which is not encrypted by default).
However, if you only want to see what the secret will look like without actually creating it, simply add --dry-run=client -o yaml to the command:
$ kubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret --dry-run=client -o yaml
apiVersion: v1
data:
  key1: c3VwZXJzZWNyZXQ=
  key2: dG9wc2VjcmV0
kind: Secret
metadata:
  creationTimestamp: null
  name: my-secret
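Note that the data values are only base64-encoded, not encrypted, so they decode trivially:

$ echo c3VwZXJzZWNyZXQ= | base64 --decode
supersecret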
I want to create, with Terraform, a PVC in an AKS cluster using the default storage class that comes with the cluster. Here is the doc.
If I do kubectl get sc I can see the pre-created storage classes.
But I am not sure how to reference them in Terraform code. I was trying with:
resource "kubernetes_persistent_volume" "volume" {
metadata {
name = "${var.pv_name}"
}
spec {
capacity {
storage = "50Gi"
}
access_modes = ["ReadWriteOnce"]
persistent_volume_source {
azure_disk {
caching_mode = "None"
disk_name = "managed-premium"
kind = "Managed"
}
}
}
}
But it's saying: The argument "data_disk_uri" is required, but no definition was found.
I get that; it's telling me I should enter the URI of a disk from the Azure portal. But in this case I didn't create a disk in Azure; I want to use the storage class provided by AKS.
Has somebody been able to create this in AKS before?
You cannot create the PV with a storage class only, because a StorageClass provides a way for administrators to describe the "classes" of storage they offer. Each StorageClass has a provisioner that determines what volume plugin is used for provisioning PVs. This field must be specified.
The volume plugin would be Azure Disk, Azure Files, AWSElasticBlockStore, or one of many more; you can refer to the documentation for the volume plugins available for a StorageClass.
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp3
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
The default storage class provisions a standard SSD Azure disk. A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. In this case, a PVC can use one of the pre-created storage classes to create a standard or premium Azure managed disk.
So, based on the above, you need a volume plugin like Azure Disk; there is no option to create a PV in AKS with only a storage class and no volume plugin. If you just want storage from the pre-created classes, create a PVC instead and let AKS provision the disk dynamically, as sketched after the reference below.
Reference : https://kubernetes.io/docs/concepts/storage/storage-classes/
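For completeness, a minimal sketch of dynamic provisioning through the pre-created managed-premium class with the Terraform kubernetes provider (the resource name and size are illustrative):

resource "kubernetes_persistent_volume_claim" "example" {
  metadata {
    name = "example-pvc"
  }

  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "managed-premium"

    resources {
      requests = {
        storage = "50Gi"
      }
    }
  }
}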
I am using Azure Key Vault to save secrets and use them as env variables in deployment.yaml.
But the issue is that I can see these secrets in the Azure Kubernetes cluster in the Azure portal.
I read in the Kubernetes documentation that we can consume these secrets as files instead of env variables for a more secure deployment.
What changes do I need to make to achieve this?
Here are my helm charts -
SecretProviderClass.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-keyvault
spec:
  provider: azure
  secretObjects:
    - secretName: database-configs
      type: Opaque
      data:
        - objectName: DB-URL
          key: DB-URL
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: {{ .Values.spec.parameters.userAssignedIdentityID }}
    resourceGroup: {{ .Values.spec.parameters.resourceGroup }}
    keyvaultName: {{ .Values.spec.parameters.keyvaultName }}
    tenantId: {{ .Values.spec.parameters.tenantId }}
    objects: |
      array:
        - |
          objectName: DB-URL
          objectType: secret
          objectAlias: DB-URL
deployment.yaml
env:
  - name: DB-URL
    valueFrom:
      secretKeyRef:
        name: database-configs
        key: DB-URL
volumeMounts:
  - mountPath: "/mnt/azure"
    name: volume
  - mountPath: "mnt/secrets-store"
    name: secrets-mount
    readOnly: true
volumes:
  - name: volume
    persistentVolumeClaim:
      claimName: azure-managed-disk
  - name: secrets-mount
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "azure-keyvault"
The file where Helm substitutes these values at deployment time -
settings.ini -
[server]
hostname = "localhost"
hot_deployment = false
url = "$env{DB-URL}"
[user_store]
type = "read_only_ldap"
Any help will be really appreciated.
I am looking for a secure way to use Key Vault and Kubernetes together.
The secrets appear in the Azure portal's Kubernetes resource view because the SecretProviderClass azure-keyvault has the spec.secretObjects field. In some cases, you may want to create a Kubernetes Secret to mirror the mounted content; the optional secretObjects field defines the desired state of the synced Kubernetes secret objects. Reference
Removing spec.secretObjects will prevent the sync of mounted secret content into Kubernetes secrets in the AKS cluster.
An environment variable is a dynamic-named value that can affect the way running processes will behave on a computer. They are part of the environment in which a process runs. These should not be confused with files.
The Kubernetes documentation says that a secret can be used with a Pod in 3 ways:
As files in a volume mounted on one or more of its containers.
As container environment variables.
By the kubelet when pulling images for the Pod.
I see that your Helm Chart already has the secret volume mount set up. That leaves the last step from here:
Modify your image or command line so that the program looks for files in the directory where your secrets appear (in this case it looks like /mnt/secrets-store). Each key in the secret data map becomes the filename under mountPath.
Note: Assuming that you missed / in:
- mountPath: "mnt/secrets-store"
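Once the path is fixed, the application (or its entrypoint script) can read each secret as a plain file; for example:

DB_URL=$(cat /mnt/secrets-store/DB-URL)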
I am still looking for a better answer, but this is what I tried.
I deployed a small dummy deployment with all secrets mapped to a volume mount and environment variables, matching the SecretProviderClass. This creates the secrets in Kubernetes.
Now deploying the Helm chart using those secrets works.
I know this is overhead to deploy unwanted things, plus it needs to be highly available, but I could not find any workaround.
Looking for a better answer!
I'm working on integrating AKV and AKS, although I'm hitting a number of roadblocks.
At any rate, what I ultimately want to do is automate pulling credentials and API keys from it for local dev clusters too. That way, devs don't have to be bothered with "go here", "do x", etc. They just start up their dev cluster, and the keys and credentials are pulled automatically and can be managed from a central location.
The AKV and AKS integration, if I could get it working, makes sense because it is the same context. The local dev environments will be entirely different, minikube clusters, so a different context.
I'm trying to wrap my brain around how to grab the keys in the local dev cluster:
Will the secrets-store.csi.k8s.io driver in the following be available to use for local dev clusters (as taken from the AKV-AKS integration documentation)?
apiVersion: v1
kind: Pod
metadata:
  name: nginx-secrets-store-inline
  labels:
    aadpodidbinding: azure-pod-identity-binding-selector
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: azure-kvname
Or do I need to do something like the following as it is outlined here?
az keyvault secret show --name "ExamplePassword" --vault-name "<your-unique-keyvault-name>" --query "value"
Will the secrets-store.csi.k8s.io driver in the following be available to use for local dev clusters (as taken from the AKV-AKS integration documentation)?
No, it will not be available locally.
The secrets-store.csi.k8s.io driver uses a managed identity (MSI) to access the Key Vault; essentially it makes an API call to the Azure instance metadata endpoint to get an access token, then uses that token to fetch the secret automatically. It is only available in an Azure environment that supports MSI.
Or do I need to do something like the following as it is outlined here?
Yes, to get the secret from Azure Key Vault locally, your option is to do it manually, for example with the Azure CLI az keyvault secret show command you mentioned.
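Alternatively, if you would rather fetch secrets from application code than shell out to the CLI, the Azure SDK's DefaultAzureCredential works in both environments: in AKS it can use the managed identity, and locally it falls back to your az login (or Visual Studio) credentials. A minimal C# sketch, where the vault URL and secret name are placeholders:

using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// DefaultAzureCredential tries managed identity first, then falls back
// to developer credentials such as the Azure CLI (az login).
var client = new SecretClient(
    new Uri("https://your-unique-keyvault-name.vault.azure.net/"),
    new DefaultAzureCredential());

// Fetch the secret by name; Response<KeyVaultSecret> converts implicitly.
KeyVaultSecret secret = client.GetSecret("ExamplePassword");
Console.WriteLine(secret.Value);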