Azure CSI Key Vault Provider Dynamic Update Secrets while Pod is Online

We're using AKS and Azure Key Vault, and we currently use the Secrets Store CSI driver to deliver secret data into our containers at container creation.
The documentation for the CSI driver seems to indicate that it supports dynamic Key Vault value updates via 'autorotation'. However, if we take one of our pods that is online with secrets mounted and change the secret value in the Key Vault, we are not seeing that value change in our pod -- we've waited more than 60 minutes in case there was some kind of polling interval.
Can anyone confirm if CSI Driver key autorotation is supposed to dynamically keep the secrets in running pods up-to-date? Ultimately, we're looking for a way to refresh our secrets in our pods that come from Azure Key Vaults (via the CSI driver) without incurring a pod reboot. If anyone could point us in the right direction, we'd be grateful.

Looking at the documentation, there are additional configuration options you can apply:
enableSecretRotation: Boolean type. If true, periodically updates the pod mount and Kubernetes Secret with the latest content from external secrets store.
rotationPollInterval: Specifies the secret rotation poll interval duration if enableSecretRotation is true. This duration can be adjusted based on how frequently the mounted contents for all pods and Kubernetes secrets need to be resynced to the latest.
syncSecret.enabled: Boolean input. In some cases, you may want to create a Kubernetes Secret to mirror the mounted content. If true, SecretProviderClass allows the secretObjects field to define the desired state of the synced Kubernetes Secret objects.
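For the AKS-managed add-on, note that enableSecretRotation and rotationPollInterval are settings on the driver itself, not on the SecretProviderClass. A minimal sketch of turning them on via the Azure CLI, assuming the azure-keyvault-secrets-provider add-on is already enabled and that the flag names match your CLI version (worth confirming with az aks addon update --help); the resource group and cluster names are placeholders:

# Enable secret rotation for the Key Vault provider add-on and shorten the
# poll interval (the default is 2m at the time of writing).
az aks addon update \
  --resource-group my-rg \
  --name my-aks-cluster \
  --addon azure-keyvault-secrets-provider \
  --enable-secret-rotation \
  --rotation-poll-interval 1m

Also note what rotation actually updates: the files in the CSI volume mount (and, if syncSecret.enabled is used, the mirrored Kubernetes Secret). An application that reads the mounted file just needs to re-read it; a pod that consumes the synced Secret through environment variables still needs a restart to pick up the new value.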

Related

Vault secrets injected by Vault sidecar container inside the pod are visible to Kubernetes cluster users/admins

I have integrated an external Vault with my Kubernetes cluster. Vault injects the secrets into the shared volume "/vault/secrets" inside the pod, where the application container can consume them. So far everything looks good.
But I see a security risk in placing the secrets in a shared volume in plain text, since anyone who has access to the Kubernetes cluster can read the application secrets.
Example: secrets are injected into the shared volume at /vault/secrets/config
Now, if a Kubernetes cluster admin logs in, they can access the pod along with the credentials available in the shared volume in plain text.
A kubectl exec -it <pod> command is enough to get inside the pod.
My concern is that the cluster admin can access the application secrets (e.g. database passwords), which is a security risk. In my scenario, the Vault admin and the Kubernetes cluster admin are different people.
Having a shared volume available to all pods in a cluster where all the secrets are stored in plain text doesn't sound too secure, to be honest. You could improve the security a little bit (only a little bit) by setting the use limit (the num_uses token attribute) to 1 and alerting whenever the legitimate application (that is, the one the secret was intended for) gets a token-invalid error message.
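A rough sketch of that use-limit idea with the Vault CLI; the policy name and TTL here are made up for illustration:

# Issue a token that can be used exactly once. If anything other than the
# intended application spends that single use, the application's own request
# fails with an invalid-token/permission-denied error, which you can alert on.
vault token create -policy="myapp-read" -use-limit=1 -ttl=10m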
I'm a K8s noob but how about this guide:
https://cloud.redhat.com/blog/integrating-hashicorp-vault-in-openshift-4
I know it's for RH OSE but maybe the concept sparks an idea.

Update Snapshot Location Velero Azure

I currently have Velero up and running and it's working great. The only issue I have is that the snapshots of the volumes are being created in the same region as the originals, which kind of defeats the purpose of disaster recovery. This flag
--snapshot-location-config
doesn't have an arg for region. I know there is a config for the default snapshot location
volumesnapshotlocations.velero.io "default"
Does anyone know how to modify the default so I can get my snapshots into new regions?
Snapshot creation from the main region into a different region is not supported.
Azure zone-redundant snapshots and images for managed disks have a decent 99.9999999999% (12 9's) durability. The availability zones in a region are usually physically separated and even if an outage affects one AZ, you can still access your data from a redundant AZ.
However, if you fear calamities that can affect several square kilometers (multiple zones in a region), you can manually move the snapshots to a different region or even automate the process. Here is a guide to do it.
--snapshot-location-config doesn't have arg for region
--snapshot-location-config doesn't create the storage; you must do that yourself. You can specify a different region, a different Azure subscription, or even a different provider, like AWS.
For Azure, follow the instructions here to create your storage container.
If your provider supports a region config (Azure does not - see the Volume Snapshot Location Config doc and the Backup Storage Location Config doc), it is configurable using the --config flag, e.g. --config region=us-west-2. Check your provider plugin to see whether different regions are supported, what the key name is, and what possible values are supported.
Refer to the Velero locations documentation for examples of using multiple snapshot and backup locations.
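As an illustration of the "create the storage yourself" approach with the Azure plugin, a hedged sketch (the location name, resource group, and storage account below are placeholders, and the exact --config keys depend on your velero-plugin-for-microsoft-azure version):

# Storage account and blob container created beforehand in the secondary region,
# e.g.: az storage account create -n velerodrwestus2 -g velero-dr-rg -l westus2
velero backup-location create secondary \
  --provider azure \
  --bucket velero-backups \
  --config resourceGroup=velero-dr-rg,storageAccount=velerodrwestus2

This only covers the backup objects (manifests, logs, file-level backups); as the Update below explains, the volume snapshots themselves still land in the region of the source disks.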
Update:
Although velero snapshot-location create allows you to specify a --provider, the Limitations/Caveats section of the Locations documentation states that only a single set of credentials is supported, and furthermore that Azure does not allow creation of snapshots in a different region:
Velero only supports a single set of credentials for VolumeSnapshotLocations. Velero will always use the credentials provided at install time (stored in the cloud-credentials secret) for volume snapshots.
Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take a Velero backup using a volume snapshot location with a different region than where your cluster’s volumes are, the backup will fail.
I personally find this confusing -- how could one use a different provider without specifying credentials? Regardless, it seems as if storing snapshots in a different region in Azure is not possible.

How to force refresh secret used to mount ADLS Gen2? Azure Databricks mounts using Azure KeyVault-backed scope -- SP secret update

Issue:
Mounted an ADLS Gen2 container using a service principal secret taken from an Azure Key Vault-backed secret scope. All good, can access the data.
Deleted the secret from the service principal in AAD, added a new one, and updated the Azure Key Vault secret (added the new version, disabled the old secret). All was still good, could access the data.
Restarted the cluster. Unable to access the mount point, error: “AADToken: HTTP connection failed for getting token from AzureAD. Http response: 401 Unauthorized”
Unmounting/mounting using the same config helped.
Is there a way to refresh the secret used for the mount point that I could add to init scripts to avoid this issue? I would rather avoid unmounting/mounting all mount points in init scripts and was hoping that there is something like dbutils.fs.refreshMounts() that would help (refreshMounts didn't help with this particular issue).
I mounted ADLS Gen2 using service principal, oauth2.0, and azure key vault-backed secret scope, following this documentation: https://learn.microsoft.com/en-us/azure/databricks/data/data-sources/azure/azure-datalake-gen2#mount-azure-data-lake-gen2
Also, out of curiosity: does anybody know how long a token used to mount ADLS Gen2 lives? As long as the cluster did not restart, I was able to access my mount even though the secret was deleted and updated (i.e., the secret was updated in AAD and Key Vault; there were no failures until the cluster restart, which happened more than 12 hours after the update).
This is a known limitation. Whenever you create a mount point using credentials coming from an Azure Key Vault backed secret scope, the credentials will be stored in the mount point and will never be refreshed again.
This is a one-time read at mount point creation time. So each time you rotate credentials in Azure Key Vault, you need to re-create the mount points to refresh the credentials there.
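A minimal sketch of re-creating such a mount, for example from a notebook or scheduled job after rotating the secret; the mount point, container, storage account, scope, and key names are placeholders, following the OAuth mount pattern in the documentation linked above:

# Re-create an ADLS Gen2 mount so it picks up the rotated service principal secret.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret":
        dbutils.secrets.get(scope="kv-backed-scope", key="sp-client-secret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

mount_point = "/mnt/datalake"

# Unmount only if the mount already exists, then mount again with the fresh secret.
if any(m.mountPoint == mount_point for m in dbutils.fs.mounts()):
    dbutils.fs.unmount(mount_point)

dbutils.fs.mount(
    source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
    mount_point=mount_point,
    extra_configs=configs,
)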
I would suggest you provide feedback on this:
Azure Databricks - Feedback
All of the feedback you share in these forums will be monitored and reviewed by the Microsoft engineering teams responsible for building Azure.

Storage account connectivity method for AKS

I'm setting up a storage account so I can dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS). I'm doing this to:
Have a PV and PVC for the database
A place to store the application files
AKS does create a storage account in the MC_<resource-group>_<aks-name>_<region> resource group that is automatically created. However, that storage account is destroyed if the node size/VM is changed (not node count), so it shouldn't be used since you'll lose your files and database if you need a node size/VM with more resources.
Neither this documentation nor any other I've come across says what the best practice is for the connectivity method:
Public endpoint (all networks)
Public endpoint (selected networks)
Private endpoint
The first option sounds like a bad idea.
The second option allows me to select a virtual network, and there are two choices:
MC_<resource-group>_<aks-name>_<region>... again, doesn't seem like a good idea, because if the node size/VM is changed, the connection will be broken.
aks-vnet-<number>... not sure what this is, but it looks like it is part of the previous resource group, so it will also be destroyed in the previously mentioned scenario.
The third option presents a number of choices, some of which are included in the second option.
So how should I securely set this up for AKS to share files with the application and persist database files?
EDIT
Looking at both the "Firewalls and virtual networks" and "Private endpoint connections" settings for the storage account that comes with the AKS node, it looks like it is just set up for "All networks"... so maybe having that be where my actual PV and PVC are stored isn't such an issue...? Could use some clarity on the topic.
I'm not sure where the problem lies. All the assets generated by AKS are tied to the AKS lifecycle. If you delete AKS it will delete the MC_* resource group (and that is 100% right). I'm not sure what you mean about the storage account being destroyed; it wouldn't get destroyed unless you remove the PVC and the reclaim policy is set to delete.
Reading: https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
As for the networking part, "selected networks" with the AKS node network selected should be the way to go. You can figure that network out by looking at the AKS nodes or the AKS agent pool definition(s). I don't think this is configurable using only Kubernetes primitives, so it would be a manual/scripted action after the storage account is created.
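A hedged sketch of that scripted step, assuming a pre-created storage account in a resource group you control and the aks-vnet-<number> virtual network sitting in the MC_* node resource group; every name below is a placeholder and the commands are worth double-checking against the current az CLI:

# 1. Enable the Microsoft.Storage service endpoint on the AKS node subnet.
az network vnet subnet update \
  --resource-group MC_my-rg_my-aks_westeurope \
  --vnet-name aks-vnet-12345678 \
  --name aks-subnet \
  --service-endpoints Microsoft.Storage

# 2. Allow that subnet on the storage account, then deny everything else.
SUBNET_ID=$(az network vnet subnet show \
  --resource-group MC_my-rg_my-aks_westeurope \
  --vnet-name aks-vnet-12345678 \
  --name aks-subnet \
  --query id --output tsv)

az storage account network-rule add \
  --resource-group my-storage-rg \
  --account-name mystorageaccount \
  --subnet "$SUBNET_ID"

az storage account update \
  --resource-group my-storage-rg \
  --name mystorageaccount \
  --default-action Deny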

Read secrets inside of Red Hat OpenShift Online?

I have gotten a Red Hat OpenShift Online Starter VPS for hosting my Discord bot. I've uploaded it to GitHub, minus my Discord token and other API keys, of course :^)
How would I get OpenShift to store and read client secrets?
I'm using the nodejs8 framework if that helps.
Secrets have no place in a source version control hosting service like GitHub.
Regarding OpenShift, it includes Secrets, a base64-encoded ConfigMap-like object into which you can inject confidential information (see the short example at the end of this answer).
But that long-term confidential information (to be injected into OpenShift Secrets) ought to be stored in a proper vault.
Like, for instance, HashiCorp Vault, as described in the article "Managing Secrets on OpenShift – Vault Integration".
The rest of the answer describes that solution, but even if you don't use that particular product, the general idea (an external vault-type storage) remains:
An Init Container (run before the main container of a pod is started) requests a wrapped token from the Vault Controller over an encrypted connection.
Wrapped credentials allow you to pass credentials around without any of the intermediaries having to actually see the credentials.
The Vault Controller retrieves the pod details from the Kubernetes API server.
If the pod exists and contains the vaultproject.io/policies annotation, the Vault Controller calls Vault and generates a unique wrapped token with access to the Vault policies mentioned in the annotation. This step requires trusting the pod author to have used the right policies. The generated token has a configurable TTL.
The Vault Controller “calls back” the Init Container using the pod IP obtained from the Kubernetes API over an encrypted connection and delivers it the newly created wrapped token. Notice that the Vault Controller does not trust the pod, it only trusts the master API.
The Init Container unwraps the token to obtain the Vault token that will allow access to the credentials.
The Vault token is written to a well-known location in a volume shared between the two containers (emptyDir) and the Init Container exits.
The main container reads the token from the token file. Now the main container can use the token to retrieve all the secrets allowed by the policies considered when the token was created.
If needed, the main container renews the token to keep it from expiring.
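Tying this back to the built-in Secrets option mentioned at the top of this answer, a minimal sketch for the Discord-bot case; the secret and deployment names are invented, and the token value is a placeholder:

# Create a Secret holding the bot token.
oc create secret generic discord-bot-secrets \
  --from-literal=DISCORD_TOKEN=<your-token-here>

# Expose the Secret's keys as environment variables on the workload
# (use dc/discord-bot instead if it is a DeploymentConfig); the Node.js app
# can then read the value via process.env.DISCORD_TOKEN.
oc set env deployment/discord-bot --from=secret/discord-bot-secrets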
