Tutorials show the following way to mount an Azure file share as a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-secret
    shareName: aksshare
    readOnly: false
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
Currently I need to create a secret containing the storage account key to get the PV working.
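The secret currently looks roughly like this (the account name and key below are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
type: Opaque
stringData:
  azurestorageaccountname: <storage-account-name>   # placeholder
  azurestorageaccountkey: <storage-account-key>     # placeholder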
Is there a way to mount the file share without creating a secret, preferably using environment variables? For example, putting the account key in an environment variable or using service principal credentials.
I also attempted to use a kustomize secretGenerator to create the secret from environment variables, but the generated secret name is different each time, so I can't reference it in the PV yaml file.
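For reference, the kustomization I tried was along these lines (the env file path is made up here); by default the generator appends a content hash to the secret name:
# kustomization.yaml (sketch; azure-storage.env is a hypothetical env file
# containing azurestorageaccountname=... and azurestorageaccountkey=...)
secretGenerator:
  - name: azure-secret
    envs:
      - azure-storage.env
# generatorOptions:
#   disableNameSuffixHash: true   # would keep the name stable, but the key still ends up in a secret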
A Kubernetes Secret or ConfigMap is the standard way of injecting sensitive configuration into deployments.
Anything you store in a Secret can also be exposed to a container as an environment variable.
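For example, a value from a secret can be handed to a container as an environment variable (the secret and key names below are just placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "env | grep STORAGE_ && sleep 3600"]
      env:
        - name: STORAGE_ACCOUNT_KEY
          valueFrom:
            secretKeyRef:
              name: azure-secret            # placeholder secret name
              key: azurestorageaccountkey   # placeholder key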
Alternatively, you can create a storage class and use dynamic provisioning, something like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - actimeo=30
parameters:
  skuName: Standard_LRS
With this approach the provisioner creates and references the storage account secret for you, so you don't have to add it yourself each time.
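A claim against that storage class would then look something like this (the claim name and size are just examples), and the provisioner takes care of the share and its secret:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-pvc   # example name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi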
Related
I am trying to sync an Azure Key Vault Secret with a Kubernetes Secret of type dockerconfigjson by applying the following yaml manifest with the 4 objects Pod, SecretProviderClass, AzureIdentity and AzureIdentityBinding.
All configuration around key vault access and managed identity RBAC rules have been done and proven to work, as I have access to the Azure Key Vault secret from within the running Pod.
But when I apply this manifest, then according to the documentation here, I expect the Kubernetes secret regcred to reflect the Azure Key Vault secret once I create the Pod with the mounted secret volume, yet the Kubernetes secret remains unchanged. I have also tried recreating the Pod to trigger the sync, but to no avail.
Since this is a very declarative way of configuring this functionality, I am also unsure where to look for logs when troubleshooting.
Can someone point me to what I may be doing wrong?
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    aadpodidbinding: webapp
spec:
  containers:
    - name: demo
      image: mcr.microsoft.com/oss/azure/aad-pod-identity/demo:v1.6.3
      volumeMounts:
        - name: web-app-secret
          mountPath: "/mnt/secrets"
          readOnly: true
  nodeSelector:
    kubernetes.io/os: linux
  volumes:
    - name: web-app-secret
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: web-app-secret-provide
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: web-app-secret-provide
spec:
  provider: azure
  parameters:
    usePodIdentity: "true"
    keyvaultName: <key-vault-name>
    objects: |
      array:
        - |
          objectName: registryPassword
          objectType: secret
    tenantId: <tenant-id>
  secretObjects:
    - data:
        - key: .dockerconfigjson
          objectName: registryPassword
      secretName: regcred
      type: kubernetes.io/dockerconfigjson
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: kv-managed-identity
spec:
  type: 0
  resourceID: <resource-id>
  clientID: <client-id>
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: kv-managed-binding
spec:
  azureIdentity: kv-managed-identity
  selector: web-app
I used an Azure File Share to create a dynamic PVC for a pod in which I deployed a NodeJS application.
Below is the yaml of the storage class I used to create the PVC:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - actimeo=30
parameters:
  skuName: Standard_LRS
The yaml file I used to create the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: my-azurefile
I took a backup of the namespace where the pod is deployed using Velero. When I restored the backup in a different cluster, I see no data present in the pod. But when I use a dynamic azuredisk PVC, I am able to restore the pod with its data.
NOTE: Before restoring the velero backup, I have created the my-azurefile storageclass in the new cluster where I performed the restoration.
Can anyone please explain why the restoration does not bring back the data when I use a dynamic azurefile PVC? Thanks in advance!
kubectl patch storageclass/<YOUR_AZURE_FILE_STORAGE_CLASS_NAME> \
  --type json \
  --patch '[{"op":"add","path":"/mountOptions/-","value":"nouser_xattr"}]'
After applying the patch above, I am able to restore the data from the azurefile PV.
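After the patch, the mountOptions of the storage class end up roughly like this (only the last entry is new):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - actimeo=30
  - nouser_xattr
parameters:
  skuName: Standard_LRS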
I am using the Azure CSI disk driver for implementing a K8s persistent volume. I have installed the azure-csi-drivers in my K8s cluster and am using the files below for end-to-end testing, but my deployment is failing with the following error:
Warning FailedAttachVolume 23s (x7 over 55s) attachdetach-controller
AttachVolume.Attach failed for volume "pv-azuredisk-csi" : rpc error:
code = NotFound desc = Volume not found, failed with error: could not
get disk name from
/subscriptions/464f9a13-7g6o-730g-hqi4-6ld2802re6z1/resourcegroups/560d_RTT_HOT_ENV_RG/providers/Microsoft.Compute/disks/560d-RTT-PVDisk,
correct format:
./subscriptions/(?:.)/resourceGroups/(?:.*)/providers/Microsoft.Compute/disks/(.+)
Note: I have checked multiple times and my URL is correct, but I am not sure whether the underscores in the resource group name are causing a problem (RG = "560d_RTT_HOT_ENV_RG"). Does anyone have an idea of what is going wrong?
K8 version : 14.9
CSI drivers : v0.3.0
My YAML files are:
csi-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/464f9a13-7g6o-730g-hqi4-6ld2802re6z1/resourcegroups/560d_RTT_HOT_ENV_RG/providers/Microsoft.Compute/disks/560d-RTT-PVDisk
    volumeAttributes:
      cachingMode: ReadOnly
      fsType: ext4
-------------------------------------------------------------------------------------------------
csi-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-azuredisk-csi
  storageClassName: ""
nginx-csi-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  nodeSelector:
    beta.kubernetes.io/os: linux
  containers:
    - image: nginx
      name: nginx-azuredisk-csi
      command:
        - "/bin/sh"
        - "-c"
        - while true; do echo $(date) >> /mnt/azuredisk/outfile; sleep 1; done
      volumeMounts:
        - name: azuredisk01
          mountPath: "/mnt/azuredisk"
  volumes:
    - name: azuredisk01
      persistentVolumeClaim:
        claimName: pvc-azuredisk-csi
It seems you created the disk in another resource group, not the resource group of the AKS nodes. So you must first grant the Azure Kubernetes Service (AKS) service principal for your cluster the Contributor role on the disk's resource group. For more details, see Create an Azure disk.
Update:
Finally, I found out why it cannot find the volume, and it's an unfortunate quirk: the resource ID of the disk used for the persistent volume is treated as case sensitive. So you need to change your csi-pv.yaml file as below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/464f9a13-7g6o-730g-hqi4-6ld2802re6z1/resourcegroups/560d_rtt_hot_env_rg/providers/Microsoft.Compute/disks/560d-RTT-PVDisk
    volumeAttributes:
      cachingMode: ReadOnly
      fsType: ext4
In addition, the first paragraph of the answer is also important.
Update:
Here are the screenshots of the result, showing that the static disk with the CSI driver works on my side:
I have an application that starts with docker-compose up. Some SSH credentials are provided via a JSON file in a volume on the host machine. I want to run the app in Kubernetes; how can I provide the credentials using Kubernetes secrets? My JSON file looks like:
{
  "HOST_USERNAME": "myname",
  "HOST_PASSWORD": "mypass",
  "HOST_IP": "myip"
}
I created a file named mysecret.yml with base64-encoded values and applied it in Kubernetes:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  HOST_USERNAME: c2gaQ=
  HOST_PASSWORD: czMxMDIsdaf0NjcoKik=
  HOST_IP: MTcyLjIeexLjAuMQ==
How do I have to write the volumes in deployment.yml in order to use the secret properly?
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: redis
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
The above is an example of using a secret as a volume in a Pod. You can use the same volume definition in a Deployment.
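For instance, a Deployment would carry the same volume and volumeMounts in its pod template, roughly like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: redis   # placeholder image
          volumeMounts:
            - name: foo
              mountPath: "/etc/foo"
              readOnly: true
      volumes:
        - name: foo
          secret:
            secretName: mysecret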
Please refer to official kubernetes documentation for further info:
https://kubernetes.io/docs/concepts/configuration/secret/
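If the application expects the whole JSON file on disk rather than individual values, the file itself can be stored under a single key of the secret and will then show up as a file under the mount path (a sketch; the key name credentials.json is arbitrary):
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  credentials.json: |
    {
      "HOST_USERNAME": "myname",
      "HOST_PASSWORD": "mypass",
      "HOST_IP": "myip"
    }
With the volume mount shown above, this would appear in the container as /etc/foo/credentials.json.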
I have been trying to mount a file share on a Kubernetes pod hosted on AKS in Azure. So far, I have:
1. Successfully created a secret by base64 encoding the name and the key
2. Created a yaml specifying the correct configuration
3. Applied it using kubectl apply -f azure-file-pod.yaml, which gives me the following error:
Output: mount error: could not resolve address for
demo.file.core.windows.net: Unknown error
I have an Azure File Share by the name of demo.
Here is my yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: azure-files-pod
spec:
  containers:
    - image: microsoft/sample-aks-helloworld
      name: azure
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: demo
        readOnly: false
How can this possibly be resolved?