Getting Azure Key Vault secrets for local dev clusters - azure

I'm working on integrating AKV and AKS, although I'm hitting a number of roadblocks.
At any rate, what I ultimately want to do is automate pulling credentials and API keys from it for local dev clusters too. That way, devs don't have to be bothered with "go here", "do x", etc. They just start up their dev cluster, the keys and credentials are pulled automatically, and everything can be managed from a central location.
The AKV and AKS integration, if I could get it working, makes sense because it is the same context. The local dev environments will be entirely different minikube clusters, so a different context.
I'm trying to wrap my brain around how to grab the keys in the local dev cluster:
Will the secrets-store.csi.k8s.io in the following be available to use for local dev clusters (as taken from the AKV-AKS integration documentation)?
apiVersion: v1
kind: Pod
metadata:
  name: nginx-secrets-store-inline
  labels:
    aadpodidbinding: azure-pod-identity-binding-selector
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: azure-kvname
Or do I need to do something like the following as it is outlined here?
az keyvault secret show --name "ExamplePassword" --vault-name "<your-unique-keyvault-name>" --query "value"

Will the secrets-store.csi.k8s.io in the following be available to use for local dev clusters (as taken from the AKV-AKS integration documentation)?
No, it will not be available locally.
The secrets-store.csi.k8s.io driver uses a managed identity (MSI) to access the key vault. Essentially, it makes an API call to the Azure Instance Metadata Service endpoint to get an access token, then uses that token to fetch the secret automatically. It is only available in an Azure environment that supports MSI.
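For reference, this is roughly the kind of call the driver relies on under the hood when a VM managed identity is used (a sketch; the exact flow the driver implements may differ). It only works from inside an Azure VM or AKS node, which is why it cannot work from a local cluster:

# Azure Instance Metadata Service (IMDS) token request; 169.254.169.254
# is only reachable from inside an Azure VM/AKS node, not a local machine.
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"
# The returned access_token is then presented to the Key Vault REST API
# to read the secret.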
Or do I need to do something like the following as it is outlined here?
Yes, to get secrets from Azure Key Vault locally, your option is to do it manually, for example with the Azure CLI az keyvault secret show command you mentioned.
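For instance, a dev's start-up script could pull the value with the CLI and feed it into the local cluster as a plain Kubernetes secret (a minimal sketch; the secret name example-credentials and the key password are hypothetical placeholders):

# Sign in (or use a service principal) so the CLI can reach the vault
az login

# Pull the value from Key Vault
SECRET_VALUE=$(az keyvault secret show \
  --name "ExamplePassword" \
  --vault-name "<your-unique-keyvault-name>" \
  --query "value" -o tsv)

# Create or update a secret in the local (e.g. minikube) cluster
kubectl create secret generic example-credentials \
  --from-literal=password="$SECRET_VALUE" \
  --dry-run=client -o yaml | kubectl apply -f -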

Related

Aks pods env variables keyvault

I hope somebody can help me out here.
I have a basic configuration in Azure which consists of a web app and a database.
The web app is able to connect to the database using managed identity and everything works just fine, but I wanted to try the same configuration using AKS.
I deployed AKS and enabled managed identity. I deployed a pod into the cluster as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dockerimage
          ports:
            - containerPort: 80
          env:
            - name: "ConnectionStrings__MyDbConnection"
              value: "Server=server-url; Authentication=Active Directory Managed Identity; Database=database-name"
            - name: "ASPNETCORE_ENVIRONMENT"
              value: "Development"
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: Always
The deployment went through smoothly and everything works just fine. But this is where I have a problem and cannot figure out the best solution.
The env block is in plain text; I would like to protect those environment variables by storing them in a key vault.
I have been looking around different forums and documentation and the options are starting to confuse me. Is there any good way to achieve security in this scenario?
In my web app, under Configuration, I have managed identity enabled, and using this I can access the secrets in a key vault and retrieve them. Can I do the same using AKS?
Thank you so much for any help you can provide.
And please, if my question is not 100% clear, just let me know.
Upgrade an existing AKS cluster with Azure Key Vault Provider for Secrets Store CSI Driver support
Use a user-assigned managed identity to access KV
Set an environment variable to reference Kubernetes secrets
You will need to do some reading, but the process is straightforward.
The KV secrets will be synced into k8s secrets, which you can reference in the pod's environment variables, as sketched below.
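A minimal sketch of what that ends up looking like (the SecretProviderClass, key vault, identity, and secret names here are hypothetical placeholders; note that the synced Kubernetes secret only exists while at least one pod mounts the corresponding CSI volume):

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv-sync
spec:
  provider: azure
  secretObjects:                  # sync the mounted content into a k8s secret
    - secretName: app-secrets
      type: Opaque
      data:
        - objectName: MyDbConnection
          key: MyDbConnection
  parameters:
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "<client-id-of-user-assigned-identity>"
    keyvaultName: "<your-keyvault-name>"
    tenantId: "<your-tenant-id>"
    objects: |
      array:
        - |
          objectName: MyDbConnection
          objectType: secret

The container spec then mounts the CSI volume for this class and references the synced secret:

env:
  - name: ConnectionStrings__MyDbConnection
    valueFrom:
      secretKeyRef:
        name: app-secrets
        key: MyDbConnection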
You can try replacing the environment key-values you set with Azure App Configuration. Using Azure App Configuration, you can add "ConnectionStrings__MyDbConnection" as a 'Key Vault reference' pointing to your KV secret. Then use the DefaultAzureCredential or ManagedIdentityCredential class to set up the credential for authenticating to the App Configuration and Key Vault resources.
var builder = WebApplication.CreateBuilder(args);
var usermanaged_client_id = "";
var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = usermanaged_client_id });
// Add services to the container.
builder.Configuration.AddAzureAppConfiguration(opt =>
{
    opt.Connect(new Uri("https://your-app-config.azconfig.io"), credential)
       .ConfigureKeyVault(kv =>
       {
           kv.SetCredential(credential);
       });
});
Make sure that you grant the user-assigned managed identity access to the Key Vault.
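Once the provider is wired up, the value is read through the normal configuration API, so the plain-text env entry in the deployment is no longer needed (a small sketch, assuming the App Configuration key is named exactly as above):

// Reads the Key Vault-backed value resolved via App Configuration;
// assumes the key was stored as "ConnectionStrings__MyDbConnection".
var connectionString = builder.Configuration["ConnectionStrings__MyDbConnection"];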

How to pass azure key vault secrets to kubernetes pod using file in helm charts

I am using Azure Key Vault to store secrets and use them as env variables in deployment.yaml.
But the issue is that I can see these secrets on the Azure Kubernetes cluster in the Azure portal.
I read in the Kubernetes documentation that we can consume these values as files instead of env variables for a more secure deployment.
What changes do I need to make to achieve this?
Here are my Helm charts:
SecretProviderClass.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-keyvault
spec:
  provider: azure
  secretObjects:
    - secretName: database-configs
      type: Opaque
      data:
        - objectName: DB-URL
          key: DB-URL
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: {{ .Values.spec.parameters.userAssignedIdentityID }}
    resourceGroup: {{ .Values.spec.parameters.resourceGroup }}
    keyvaultName: {{ .Values.spec.parameters.keyvaultName }}
    tenantId: {{ .Values.spec.parameters.tenantId }}
    objects: |
      array:
        - |
          objectName: DB-URL
          objectType: secret
          objectAlias: DB-URL
deployment.yaml
env:
  - name: DB-URL
    valueFrom:
      secretKeyRef:
        name: database-configs
        key: DB-URL
volumeMounts:
  - mountPath: "/mnt/azure"
    name: volume
  - mountPath: "mnt/secrets-store"
    name: secrets-mount
    readOnly: true
volumes:
  - name: volume
    persistentVolumeClaim:
      claimName: azure-managed-disk
  - name: secrets-mount
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "azure-keyvault"
File where Helm substitutes these values at deployment time:
settings.ini:
[server]
hostname = "localhost"
hot_deployment = false
url = "$env{DB-URL}"
[user_store]
type = "read_only_ldap"
Any help will be really appreciated.
I am looking for a secure way to use Key Vault and Kubernetes together.
The secrets appear in the Azure portal Kubernetes resource view because the SecretProviderClass azure-keyvault has the spec.secretObjects field. In some cases, you may want to create a Kubernetes Secret to mirror the mounted content. Use the optional secretObjects field to define the desired state of the synced Kubernetes secret objects. Reference
Removing spec.secretObjects will prevent the mounted secret content from being synced into a Kubernetes Secret in the AKS cluster.
An environment variable is a dynamic-named value that can affect the way running processes will behave on a computer. They are part of the environment in which a process runs. These should not be confused with files.
The Kubernetes documentation says that a secret can be used with a Pod in 3 ways:
As files in a volume mounted on one or more of its containers.
As container environment variable.
By the kubelet when pulling images for the Pod.
I see that your Helm Chart already has the secret volume mount set up. That leaves the last step from here:
Modify your image or command line so that the program looks for files in the directory where your secrets would appear (in this case it looks like /mnt/secrets-store). Each key in the secret data map becomes the filename under mountPath.
Note: I am assuming that you missed the leading / in:
- mountPath: "mnt/secrets-store"
I am still looking for a better answer, but this is what I tried.
I deployed a small dummy deployment with all secrets mapped to a volume mount and environment variables, matching the SecretProviderClass. This creates the secrets in K8s.
Now deploying the Helm chart using those secrets works.
I know this is overhead to deploy unwanted things, plus it needs to be highly available, but I could not find any workaround.
Looking for a better answer!

Pod with Azure File Share configured. Do I need PersistentVolume and PVC as well?

We have defined our YAML with:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      name: mypod
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: aksshare
        readOnly: false
and before the deployment we will create the secret with a kubectl command, with $AKS_PERS_STORAGE_ACCOUNT_NAME and $STORAGE_KEY already set to our storage account name and key:
kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME \
--from-literal=azurestorageaccountkey=$STORAGE_KEY
We already have that existing file share as an Azure File Share resource and we have files stored in it.
I am confused about whether we also need to manage and define YAMLs for
kind: PersistentVolume
and
kind: PersistentVolumeClaim
or whether the above YAML is completely enough?
Are PV and PVC required only if we do not have our file share already created on Azure?
I've read the docs https://kubernetes.io/docs/concepts/storage/persistent-volumes/ but I still feel confused about when they need to be defined and when it is OK not to use them at all during the overall deployment process.
Your Pod YAML is OK.
Kubernetes PersistentVolumes are a newer abstraction. If your application instead uses a PersistentVolumeClaim, it is decoupled from the type of storage you use (in your case Azure File Share), so your app can be deployed to e.g. AWS, Google Cloud, or Minikube on your desktop without any changes. Your cluster needs to have some support for PersistentVolumes, and that part can be tied to a specific storage system.
So, to decouple your app YAML from the specific infrastructure, it is better to use PersistentVolumeClaims.
Persistent Volume Example
I don't know about Azure File Share, but there is good documentation on Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS).
Application config
Persistent Volume Claim
Your app, e.g. a Deployment or StatefulSet, can have this PVC resource:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: my-azurefile
  resources:
    requests:
      storage: 5Gi
Then you need to create a StorageClass resource that probably is unique for each type of environment, but it needs to have the same name and support the same access modes. If the environment does not support dynamic volume provisioning, you may have to manually create a PersistentVolume resource as well. A sketch of the StorageClass follows below.
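For AKS, a StorageClass along these lines would back the PVC above (a minimal sketch based on the linked Azure Files documentation; the skuName and mount options are assumptions you would tune for your environment):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-azurefile            # must match storageClassName in the PVC
provisioner: file.csi.azure.com # Azure Files CSI driver on AKS
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
parameters:
  skuName: Standard_LRS         # assumption: standard storage is sufficient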
Examples in different environments:
The linked doc Dynamically create and use a persistent volume with Azure Files in AKS describes this for Azure.
See AWS EFS doc for creating ReadWriteMany volumes in AWS.
Blog about ReadWriteMany storage in Minikube
Pod using Persistent Volume Claim
You typically deploy apps using a Deployment or a StatefulSet, but the part declaring the Pod template is similar, except that you probably want to use volumeClaimTemplates instead of a PersistentVolumeClaim for a StatefulSet.
See full example on Create a Pod using a PersistentVolumeClaim
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: file-share
      persistentVolumeClaim:
        claimName: my-azurefile # this must match your name of PVC
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: file-share

Azure Container status waiting due to the ImagePullBackOff

I'm new to Azure AKS and Docker. I followed the steps in this article.
Finally, I completed all the steps and got this status.
But the external IP is not giving the actual output. I checked in the Azure portal; the container status is waiting. Am I missing anything here?
Authentication will be needed to pull the images from ACR. We have to create a docker-registry secret for authentication. To do this, open Cloud Shell on the Azure Portal and run the command below.
> kubectl create secret docker-registry mysecretname --docker-server=myacrname.azurecr.io --docker-username=myacrname --docker-password=myacrpwd --docker-email=myportalemail
Don’t forget to change your password and email address.
To access your password, go to your Azure Container Registry:
https://portal.azure.com/ » Your Container registry » Access keys
Finally, make sure that the Docker image URL in your Kubernetes YAML file is right:
https://github.com/husseinsa/kubernetes-multi-container-app/blob/master/k8/frontend.yaml
https://github.com/husseinsa/kubernetes-multi-container-app/blob/master/k8/backend.yaml
spec:
  containers:
    - name: backend
      image: mywebregistry.azurecr.io/backend:v1
      ports:
        - containerPort: 80
Put the image URL from your Azure Container Registry here, and reference the pull secret in the pod spec as sketched below.
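For the docker-registry secret created above to actually be used for the pull, the pod spec also needs to reference it via imagePullSecrets (a minimal sketch; mysecretname is the name from the kubectl create secret command above):

spec:
  containers:
    - name: backend
      image: mywebregistry.azurecr.io/backend:v1
      ports:
        - containerPort: 80
  imagePullSecrets:
    - name: mysecretname   # the docker-registry secret created earlier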

Can i insert secrets into deployment.yml using Azure KeyVault?

I have integrated Azure Key Vault using Terraform. I can SSH into the container and view the secrets.
My question is: is it possible to somehow reference that secret value inside the deployment.yml file that I use for deploying my pods in the Kubernetes cluster?
I am using the following deployment file. Normally I access Kubernetes secrets using valueFrom and then referencing the secret name and key here. How would that be possible if I want to insert the value of the secret from Key Vault here?
spec:
  containers:
    - name: test-container
      image: test.azurecr.io/test-image:latest
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
      env:
        - name: testSecret
          valueFrom:
            secretKeyRef:
              name: testSecret
              key: testSecretPassword
Thanks
You will need a Terraform data source. Data sources allow data to be fetched or computed for use elsewhere in Terraform configuration. Use of data sources allows a Terraform configuration to make use of information defined outside of Terraform, or defined by another separate Terraform configuration.
data "azurerm_key_vault_secret" "test" {
name = "secret-sauce"
key_vault_id = "${data.azurerm_key_vault.existing.id}"
}
output "secret_value" {
value = "${data.azurerm_key_vault_secret.test.value}"
}
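From there, one way to get that value into the cluster is to let Terraform create the Kubernetes secret that your deployment already references (a sketch, assuming you also use the Terraform kubernetes provider; the secret name is illustrative and must match secretKeyRef.name, keeping in mind that Kubernetes secret names must be lowercase):

# Sketch: materialize the Key Vault value as a Kubernetes secret so the
# deployment's existing secretKeyRef keeps working unchanged.
resource "kubernetes_secret" "test_secret" {
  metadata {
    name = "test-secret" # illustrative; secretKeyRef.name must match this
  }

  data = {
    testSecretPassword = data.azurerm_key_vault_secret.test.value
  }

  type = "Opaque"
}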
You can look at Key Vault FlexVolume to integrate Key Vault into K8s. Secrets, keys, and certificates in a key management system become a volume accessible to pods. Once the volume is mounted, its data is available directly in the container filesystem for your application.
I will be honest, I have not tried this solution and don't know if it will work outside of our AKS offering.
https://www.terraform.io/docs/providers/azurerm/d/key_vault_secret.html
https://blog.azureandbeyond.com/2019/01/29/terraform-azure-keyvault-secrets/
