I have a StatefulSet with 3 instances on Azure Kubernetes Service 1.16, where I am trying to use Azure File storage to create a single file share for the 3 instances.
I use Azure Files dynamic provisioning, where everything is declarative, i.e. the storage account, secrets, PVCs and PVs are created automatically.
Manifest with VolumeClaimTemplate
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: xxx
spec:
  replicas: 3
  ...
  volumeClaimTemplates:
  - metadata:
      name: xxx-data-shared
    spec:
      accessModes: [ ReadWriteMany ]
      storageClassName: azfile-zrs-sc
      resources:
        requests:
          storage: 1Gi
The StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azfile-zrs-sc
provisioner: kubernetes.io/azure-file
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
parameters:
  resourceGroup: xxx
  skuName: Standard_ZRS
  shareName: data
Instead of one share, I end up with 3 PVs, each referring to a separately created Azure Storage Account, each containing a share named data.
Question: Can I use Azure Files dynamic provisioning, with additional configuration in the manifest, to get a single file share? Or will I have to use static provisioning?
It turns out that volumeClaimTemplates is not the right place (reference).
Instead, use a persistentVolumeClaim.
For Azure File Storage this becomes:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-shared-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azfile-zrs-sc
  resources:
    requests:
      storage: 1Gi
And refer to it in the manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: xxx
spec:
  replicas: 3
  ...
  template:
    spec:
      containers:
        ...
          volumeMounts:
          - name: data-shared
            mountPath: /data
      volumes:
      - name: data-shared
        persistentVolumeClaim:
          claimName: data-shared-claim
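If per-pod storage is still needed in addition to the shared share, both mechanisms can be combined in the same StatefulSet: keep the shared persistentVolumeClaim volume and add volumeClaimTemplates for the per-replica volumes. A minimal sketch only; the per-pod claim name, image, and storage class below are assumptions, not from the original manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: xxx
spec:
  serviceName: xxx
  replicas: 3
  selector:
    matchLabels:
      app: xxx
  template:
    metadata:
      labels:
        app: xxx
    spec:
      containers:
      - name: xxx
        image: xxx                   # application image, as in the elided part of the original
        volumeMounts:
        - name: data-shared          # the shared Azure Files claim from above
          mountPath: /data
        - name: data-local           # per-pod volume from the claim template below
          mountPath: /data-local
      volumes:
      - name: data-shared
        persistentVolumeClaim:
          claimName: data-shared-claim
  volumeClaimTemplates:
  - metadata:
      name: data-local               # hypothetical per-pod claim
    spec:
      accessModes: [ ReadWriteOnce ]
      storageClassName: default      # assumed: the AKS built-in managed-disk class
      resources:
        requests:
          storage: 1Gi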
Related
I am experiencing problems with mounting an Azure Files persistent volume in my Kubernetes pod.
I get an error:
Warning FailedMount 2m7s kubelet
Unable to attach or mount volumes: unmounted volumes=[customer-service-logs-volume], unattached volumes=[customer-service-logs-volume kube-api-access-6k588]: timed out waiting for the condition
Does anyone have an idea what might be going wrong?
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile
  labels:
    usage: azurefile
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    shareName: pssk8sshare
    secretName: azure-secret
    readOnly: false
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: customerservice-logs
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      usage: azurefile
I have a YAML file that has a deployment and a persistent volume with an Azure file share.
Scenario 1 - The file mount happens successfully when trying to mount only the logs folder with the Azure file share. This pretty much works as expected.
Scenario 2 - When I try to mount the application configuration file, the file mount fails with the Azure file share. The pod keeps restarting each time and I am unable to find the files as well.
What am I trying to achieve here?
The Azure file share folder is empty before running the YAML, and after running the YAML I expect the application files from the pod to show up in the Azure file share. I guess that isn't happening; instead, the empty Azure file share folder overwrites (hides) the folder/files in the pod that contain the application.
Is there any way to view the pod's application files in the Azure file share at startup?
e.g. just like a bind mount in docker-compose
Please find the YAML file below
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-product
  namespace: my-pool
  labels:
    app: my-product
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-product
  template:
    metadata:
      labels:
        app: my-product
    spec:
      containers:
      - image: myproductimage:latest
        name: my-product
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: configmap
        env:
        - name: env-file
          value: my-product
        volumeMounts:
        - name: azure
          mountPath: /opt/kube/my-product
      imagePullSecrets:
      - name: secret1
      hostname: my-product
      volumes:
      - name: azure
        persistentVolumeClaim:
          claimName: fileshare-pvc
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileshare-pv
  labels:
    usage: fileshare-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azure-file
  azureFile:
    secretName: secret2
    shareName: myfileshare-folder
    readOnly: false
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: fileshare-pvc
  namespace: my-pool
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azure-file
  resources:
    requests:
      storage: 5Gi
  selector:
    # To make sure we match the claim with the exact volume, match the label
    matchLabels:
      usage: fileshare-pv
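One pattern that may address the bind-mount question above (a sketch under assumptions, not part of the original deployment): mount the share at a separate staging path and copy the application files into it when the container starts, similar to the copy approach shown in the job example further below. The staging path and startup command here are placeholders:
containers:
- image: myproductimage:latest
  name: my-product
  # hypothetical startup: copy the baked-in application files onto the mounted share,
  # then start the application (start.sh stands in for the real entrypoint)
  command: ["sh", "-c", "cp -R /opt/kube/my-product/. /mnt/share/ && exec /opt/kube/my-product/start.sh"]
  volumeMounts:
  - name: azure
    mountPath: /mnt/share   # mount the file share away from the application directory
volumes:
- name: azure
  persistentVolumeClaim:
    claimName: fileshare-pvc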
I have a pod (kind: Job) which has some code build files under "/usr/src/app", and I need these files on my local k8s host.
But when I try the YAMLs below, I am not able to see any data in the mounted host path, even though it exists in the pod container ("/usr/src/app"). I think the mount is overwriting/hiding that data. Please help me get it onto my local k8s host.
My files are:
apiVersion: batch/v1
kind: Job
metadata:
  name: wf
spec:
  template:
    spec:
      containers:
      - name: wf
        image: 12345678.dkr.ecr.ap-south-1.amazonaws.com/eks:ws
        volumeMounts:
        - name: wf-persistent-storage
          mountPath: /usr/src/app # my data is in (/usr/src/app)
      volumes:
      - name: wf-persistent-storage
        # pointer to the configuration of HOW we want the mount to be implemented
        persistentVolumeClaim:
          claimName: wf-test-pvc
      restartPolicy: Never
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wf-test-pvc
spec:
  storageClassName: mylocalstorage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local
spec:
  storageClassName: mylocalstorage
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/root/mnt/"
    type: DirectoryOrCreate
Don't mount over /usr/src/app, as it will get overlaid by the contents of the PVC. In your case the PVC is empty initially, so all the application files will be hidden.
Try the code below, which mounts the PVC at /tmp and uses a command to copy the files onto the PVC.
apiVersion: batch/v1
kind: Job
metadata:
  name: wf
spec:
  template:
    spec:
      containers:
      - name: wf
        image: 12345678.dkr.ecr.ap-south-1.amazonaws.com/eks:ws
        command:
          - bash
          - -c
          - cp -R /usr/src/app/* /tmp/
        volumeMounts:
        - name: wf-persistent-storage
          mountPath: /tmp # the PVC is mounted here; the data in /usr/src/app is copied in by the command above
      volumes:
      - name: wf-persistent-storage
        # pointer to the configuration of HOW we want the mount to be implemented
        persistentVolumeClaim:
          claimName: wf-test-pvc
      restartPolicy: Never
I have been trying to deploy Cassandra using the following documentation:
https://kubernetes.io/docs/tutorials/stateful-application/cassandra/
The deployment of Cassandra works fine, but when I try to create the StatefulSet it gives the following error:
cassandra-0
pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
Can anyone help me with where I am going wrong?
A StatefulSet requires a persistent volume in which to store its state. In the docs you linked there is a section that shows it:
volumeClaimTemplates:
- metadata:
    name: cassandra-data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: fast
    resources:
      requests:
        storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-ssd
These are the docs for creating a PV and/or StorageClass on Azure as you need:
https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
Then you can associate the object with your StatefulSet.
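For example, a minimal StorageClass named fast on AKS; this is a sketch assuming the in-tree azure-file provisioner described in the linked doc, and the Cassandra volumeClaimTemplates above can then reference it by name:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast                          # matches storageClassName in the claim template above
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS               # assumed SKU; adjust to your needs
mountOptions:
  - dir_mode=0777
  - file_mode=0777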
Did you create the correct storage class and name it fast?
Try this one (it should work on Azure):
...
volumeClaimTemplates:
- metadata:
    name: cassandra-data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: fast
    resources:
      requests:
        storage: 1Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
parameters:
  fsType: xfs
  kind: Managed
  storageaccounttype: Premium_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: Immediate
I'm trying to mount a persistent volume into my Windows container, but I always get this error:
Unable to mount volumes for pod "mssql-with-pv-deployment-3263067711-xw3mx_default(....)": timeout expired waiting for volumes to attach/mount for pod "default"/"mssql-with-pv-deployment-3263067711-xw3mx". list of unattached/unmounted volumes=[blobdisk01]
I've created a GitHub gist with the console output of "get events" and "describe sc | pvc | po"; maybe someone will find the solution with it.
Below are the scripts that I'm using for deployment.
my storageclass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk-sc
provisioner: kubernetes.io/azure-disk
parameters:
  skuname: Standard_LRS
my PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  storageClassName: azure-disk-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
and the deployment of my container:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-with-pv-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql-with-pv
    spec:
      nodeSelector:
        beta.kubernetes.io/os: windows
      terminationGracePeriodSeconds: 10
      containers:
      - name: mssql-with-pv
        image: testacr.azurecr.io/sql/mssql-server-windows-developer
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        - mountPath: "c:/volume"
          name: blobdisk01
      volumes:
      - name: blobdisk01
        persistentVolumeClaim:
          claimName: azure-disk-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-with-pv-deployment
spec:
  selector:
    app: mssql-with-pv
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
What am I doing wrong? Is there another way to mount a volume?
Thanks for any help :)
I would try:
Change the API version to v1: https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-disk
kubectl get events to see if you have a more detailed error (I could figure out the reason when I used NFS by watching events)
Maybe it is this bug, which I read about in this post?
You will need a new volume on the D: drive; it looks like folders on C: are not supported for Windows containers, see here:
https://github.com/kubernetes/kubernetes/issues/65060
Demos:
https://github.com/andyzhangx/demo/tree/master/windows/azuredisk
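For example, the volumeMounts section of the deployment above might change along these lines; this is a sketch based on the linked demo, assuming the disk should be exposed as the D: drive rather than a folder under C::
volumeMounts:
- name: blobdisk01
  mountPath: "d:"   # mount the Azure disk as the D: drive instead of c:/volume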