Use an Azure Files-based persistent volume in a local environment with minikube

I am experiencing problems with mounting an Azure Files persistent volume in my Kubernetes pod.
I get an error:
Warning FailedMount 2m7s kubelet
Unable to attach or mount volumes: unmounted volumes=[customer-service-logs-volume], unattached volumes=[customer-service-logs-volume kube-api-access-6k588]: timed out waiting for the condition
Does anyone have an idea what might be going wrong?
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile
  labels:
    usage: azurefile
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    shareName: pssk8sshare
    secretName: azure-secret
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: customerservice-logs
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      usage: azurefile
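For reference (not part of the original question): the azureFile PV above expects a Secret named azure-secret in the pod's namespace, holding the storage account credentials under the keys azurestorageaccountname and azurestorageaccountkey. A minimal sketch with placeholder values:

apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
type: Opaque
stringData:
  # Placeholders; substitute the real storage account name and key.
  azurestorageaccountname: <storage-account-name>
  azurestorageaccountkey: <storage-account-key>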

Related

How to do bind mounting in Azure Kubernetes to view the files from a pod in an Azure file share?

I have a YAML file that has a deployment and a persistent volume with an Azure file share.
Scenario 1 - The mount succeeds when mounting only the logs folder with the Azure file share. This works as expected.
Scenario 2 - When I try to mount the application configuration file, the mount fails with the Azure file share. The pod keeps restarting each time, and I am unable to find the files either.
What am I trying to achieve here?
The Azure file share folder is empty before running the YAML, and after running it I expect the application files from the pod to show up in the Azure file share. That isn't happening; instead, the empty Azure file share folder overwrites the folder/files in the pod that contain the application.
Is there any way to see the pod's application files in the Azure file share on startup?
e.g. just like a bind mount in docker-compose
Please find the YAML file below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-product
  namespace: my-pool
  labels:
    app: my-product
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-product
  template:
    metadata:
      labels:
        app: my-product
    spec:
      containers:
        - image: myproductimage:latest
          name: my-product
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: configmap
          env:
            - name: env-file
              value: my-product
          volumeMounts:
            - name: azure
              mountPath: /opt/kube/my-product
      imagePullSecrets:
        - name: secret1
      hostname: my-product
      volumes:
        - name: azure
          persistentVolumeClaim:
            claimName: fileshare-pvc
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileshare-pv
  labels:
    usage: fileshare-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azure-file
  azureFile:
    secretName: secret2
    shareName: myfileshare-folder
    readOnly: false
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: fileshare-pvc
  namespace: my-pool
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azure-file
  resources:
    requests:
      storage: 5Gi
  selector:
    # To make sure we match the claim with the exact volume, match the label
    matchLabels:
      usage: fileshare-pv
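As an illustration (this sketch is not part of the original question): one way to make the image's files visible on the initially empty share is to mount the share at a side path and copy the application folder onto it at startup, similar in spirit to the cp approach shown in a later answer below. A minimal sketch, assuming myproductimage contains /bin/sh and keeps its files under /opt/kube/my-product; the pod name, the seed-share container name, and the /mnt/share path are made up for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: my-product-seed-example   # hypothetical name, illustration only
  namespace: my-pool
spec:
  initContainers:
    # Copy the application files from the image onto the (initially empty) share.
    - name: seed-share
      image: myproductimage:latest
      command: ["/bin/sh", "-c", "cp -R /opt/kube/my-product/. /mnt/share/"]
      volumeMounts:
        - name: azure
          mountPath: /mnt/share
  containers:
    - name: my-product
      image: myproductimage:latest
      volumeMounts:
        - name: azure
          # Mounted at a side path so the share does not hide /opt/kube/my-product.
          mountPath: /mnt/share
  volumes:
    - name: azure
      persistentVolumeClaim:
        claimName: fileshare-pvc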

Shared Azure File Storage with StatefulSet on AKS

I have a StatefulSet with 3 instances on Azure Kubernetes 1.16, where I try to use Azure File storage to create a single file share for the 3 instances.
I use Azure Files dynamic provisioning, where everything is declarative, i.e. the storage account, secrets, PVCs, and PVs are created automatically.
Manifest with VolumeClaimTemplate
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: xxx
spec:
  replicas: 3
  ...
  volumeClaimTemplates:
    - metadata:
        name: xxx-data-shared
      spec:
        accessModes: [ ReadWriteMany ]
        storageClassName: azfile-zrs-sc
        resources:
          requests:
            storage: 1Gi
The StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azfile-zrs-sc
provisioner: kubernetes.io/azure-file
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
parameters:
  resourceGroup: xxx
  skuName: Standard_ZRS
  shareName: data
Instead of one share, I end up with 3 PVs, each referring to a separately created Azure storage account, each with a share named data.
Question: Can I use Azure Files dynamic provisioning, with additional configuration in the manifest, to get a single file share? Or will I have to do static provisioning?
It turns out that volumeClaimTemplates is not the right place (reference).
Instead, use persistentVolumeClaim.
For Azure File Storage this becomes:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-shared-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azfile-zrs-sc
  resources:
    requests:
      storage: 1Gi
And refer to it in the manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: xxx
spec:
  replicas: 3
  ...
  template:
    spec:
      containers:
        - ...
          volumeMounts:
            - name: data-shared
              mountPath: /data
      volumes:
        - name: data-shared
          persistentVolumeClaim:
            claimName: data-shared-claim

How to mount a pod container path which has data (I need this data) to my local host path in Kubernetes

I have a pod (kind: Job) which has some code build files under "/usr/src/app", and I need these files on my local k8s host.
But when I try it as per the YAMLs below, I am not able to see any data in the mounted host path, even though it exists in the pod container ("/usr/src/app"). I think mounting is overwriting/hiding that data. Please help me get it onto my local k8s host.
My files are:
apiVersion: batch/v1
kind: Job
metadata:
  name: wf
spec:
  template:
    spec:
      containers:
        - name: wf
          image: 12345678.dkr.ecr.ap-south-1.amazonaws.com/eks:ws
          volumeMounts:
            - name: wf-persistent-storage
              mountPath: /usr/src/app # my data is in (/usr/src/app)
      volumes:
        - name: wf-persistent-storage
          # pointer to the configuration of HOW we want the mount to be implemented
          persistentVolumeClaim:
            claimName: wf-test-pvc
      restartPolicy: Never
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wf-test-pvc
spec:
  storageClassName: mylocalstorage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local
spec:
  storageClassName: mylocalstorage
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/root/mnt/"
    type: DirectoryOrCreate
Don't mount over /usr/src/app, as it will get overwritten by the contents of the PVC. In your case the PVC is initially empty, so all the files will appear to be gone.
Try the code below, where you mount the PVC at /tmp and use a command to copy the files onto the PVC.
apiVersion: batch/v1
kind: Job
metadata:
  name: wf
spec:
  template:
    spec:
      containers:
        - name: wf
          image: 12345678.dkr.ecr.ap-south-1.amazonaws.com/eks:ws
          command:
            - bash
            - -c
            - cp -R /usr/src/app/* /tmp/
          volumeMounts:
            - name: wf-persistent-storage
              mountPath: /tmp # the PVC is mounted here; the data from /usr/src/app is copied into it
      volumes:
        - name: wf-persistent-storage
          # pointer to the configuration of HOW we want the mount to be implemented
          persistentVolumeClaim:
            claimName: wf-test-pvc
      restartPolicy: Never

Azure CSI disk FailedAttachVolume issue: could not get disk name from disk URL

I am using the Azure CSI disk driver method for implementing a K8s persistent volume. I have installed azure-csi-drivers in my K8s cluster and am using the files below for end-to-end testing purposes, but my deployment is failing due to the following error:
Warning FailedAttachVolume 23s (x7 over 55s) attachdetach-controller
AttachVolume.Attach failed for volume "pv-azuredisk-csi" : rpc error:
code = NotFound desc = Volume not found, failed with error: could not
get disk name from
/subscriptions/464f9a13-7g6o-730g-hqi4-6ld2802re6z1/resourcegroups/560d_RTT_HOT_ENV_RG/providers/Microsoft.Compute/disks/560d-RTT-PVDisk,
correct format:
./subscriptions/(?:.)/resourceGroups/(?:.*)/providers/Microsoft.Compute/disks/(.+)
Note: I have checked multiple times and my URL is correct, but I am not sure if the underscore in the resource group name is creating a problem; RG = "560d_RTT_HOT_ENV_RG". Please suggest if anyone has an idea what is going wrong.
K8s version: 14.9
CSI drivers: v0.3.0
My YAML files are:
csi-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/464f9a13-7g6o-730g-hqi4-6ld2802re6z1/resourcegroups/560d_RTT_HOT_ENV_RG/providers/Microsoft.Compute/disks/560d-RTT-PVDisk
    volumeAttributes:
      cachingMode: ReadOnly
      fsType: ext4
-------------------------------------------------------------------------------------------------
csi-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-azuredisk-csi
  storageClassName: ""
nginx-csi-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  nodeSelector:
    beta.kubernetes.io/os: linux
  containers:
    - image: nginx
      name: nginx-azuredisk-csi
      command:
        - "/bin/sh"
        - "-c"
        - while true; do echo $(date) >> /mnt/azuredisk/outfile; sleep 1; done
      volumeMounts:
        - name: azuredisk01
          mountPath: "/mnt/azuredisk"
  volumes:
    - name: azuredisk01
      persistentVolumeClaim:
        claimName: pvc-azuredisk-csi
It seems you created the disk in another resource group, not the AKS node resource group. So you must first grant the Azure Kubernetes Service (AKS) service principal for your cluster the Contributor role on the disk's resource group. For more details, see Create an Azure disk.
Update:
Finally, I found out why it cannot find the volume. I think it's a silly restriction: the resource ID of the disk used for the persistent volume is case-sensitive. So you need to change your csi-pv.yaml file like below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/464f9a13-7g6o-730g-hqi4-6ld2802re6z1/resourcegroups/560d_rtt_hot_env_rg/providers/Microsoft.Compute/disks/560d-RTT-PVDisk
    volumeAttributes:
      cachingMode: ReadOnly
      fsType: ext4
In addition, the first paragraph of the answer is also important.
Update:
(Screenshots showing the static disk with the CSI driver working on my side.)

How to mount a volume with a Windows container in Kubernetes?

I'm trying to mount a persistent volume into my Windows container, but I always get this error:
Unable to mount volumes for pod "mssql-with-pv-deployment-3263067711-xw3mx_default(....)": timeout expired waiting for volumes to attach/mount for pod "default"/"mssql-with-pv-deployment-3263067711-xw3mx". list of unattached/unmounted volumes=[blobdisk01]
I've created a GitHub gist with the console output of "get events" and "describe sc | pvc | po"; maybe someone will find the solution with it.
Below are the scripts I'm using for deployment.
My StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk-sc
provisioner: kubernetes.io/azure-disk
parameters:
  skuname: Standard_LRS
My PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  storageClassName: azure-disk-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
And the deployment of my container:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-with-pv-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql-with-pv
    spec:
      nodeSelector:
        beta.kubernetes.io/os: windows
      terminationGracePeriodSeconds: 10
      containers:
        - name: mssql-with-pv
          image: testacr.azurecr.io/sql/mssql-server-windows-developer
          ports:
            - containerPort: 1433
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: SA_PASSWORD
          volumeMounts:
            - mountPath: "c:/volume"
              name: blobdisk01
      volumes:
        - name: blobdisk01
          persistentVolumeClaim:
            claimName: azure-disk-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-with-pv-deployment
spec:
  selector:
    app: mssql-with-pv
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
What am I doing wrong? Is there another way to mount a volume?
Thanks for any help :)
I would try:
Changing the API version to v1: https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-disk
kubectl get events to see if you have a more detailed error (I could figure out the reason when I used NFS by watching events)
Maybe it is this bug, which I read about in this post?
You will need a new volume on the D: drive; it looks like folders on C: are not supported for Windows containers, see here:
https://github.com/kubernetes/kubernetes/issues/65060
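As an illustration of those suggestions (not part of the original answer), an abbreviated version of the deployment might look like the sketch below; the D: mount path is an assumed example, apps/v1 requires an explicit spec.selector, and the env and ports sections from the original are omitted for brevity:

apiVersion: apps/v1              # instead of apps/v1beta1
kind: Deployment
metadata:
  name: mssql-with-pv-deployment
spec:
  replicas: 1
  selector:                      # required with apps/v1
    matchLabels:
      app: mssql-with-pv
  template:
    metadata:
      labels:
        app: mssql-with-pv
    spec:
      nodeSelector:
        beta.kubernetes.io/os: windows
      containers:
        - name: mssql-with-pv
          image: testacr.azurecr.io/sql/mssql-server-windows-developer
          volumeMounts:
            - name: blobdisk01
              mountPath: "D:"    # assumed example: a D: volume rather than a folder under C:
      volumes:
        - name: blobdisk01
          persistentVolumeClaim:
            claimName: azure-disk-pvc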
Demos:
https://github.com/andyzhangx/demo/tree/master/windows/azuredisk
