I am trying to dynamically provision storage using a StorageClass I've defined with the azure-file provisioner. I've tried setting both the storageAccount and skuName parameters in the StorageClass. Here is my example with storageAccount set:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuretestfilestorage
  namespace: kube-system
provisioner: kubernetes.io/azure-file
parameters:
  storageAccount: <storage_account_name>
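For reference, the skuName variant would look like this (a sketch; Standard_LRS is an assumed example SKU, and with skuName the provisioner finds or creates a storage account by SKU instead of by name):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuretestfilestorage
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS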
The StorageClass is created successfully, however when I try to create a PersistentVolumeClaim using this StorageClass, the persistent volume provisioning fails with this error:
Failed to provision volume with StorageClass "azuretestfilestorage": failed to find a matching storage account
Here is the YAML for my PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-persistent-volume-claim-test
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: azuretestfilestorage
My storage account is definitely in the same resource group and datacenter location as my ACS cluster. My understanding is that a secret, persistent volume, and file share should be generated automatically. Instead, the claim just gets stuck in a Pending state with the above error.
Here is the output of my kubectl version command:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.7", GitCommit:"8e1552342355496b62754e61ad5f802a0f3f1fa7", GitTreeState:"clean", BuildDate:"2017-09-28T23:56:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Any input would be appreciated. Thanks!
I emailed Microsoft Azure support about this and received an answer.
There is a bug in ACS Kubernetes version 1.7.7 that prevents dynamic persistent volume claims from working if the --cluster-name value in /etc/kubernetes/manifests/kube-controller-manager.yaml on the master node VM is longer than 16 characters. Very obscure bug. The fix is to upgrade your cluster or redeploy it with a shorter name.
Here is the bug report: https://github.com/andyzhangx/demo/blob/master/issues/azurefile-issues.md#4-azure-file-dynamic-provision-failed-due-to-cluster-name-length-issue
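To check whether a cluster is affected, something like this on the master node should work (a sketch based on the manifest path quoted in the support answer):

# Inspect the controller-manager manifest on the master node VM:
sudo grep -- '--cluster-name' /etc/kubernetes/manifests/kube-controller-manager.yaml
# If the value after --cluster-name is longer than 16 characters,
# azure-file dynamic provisioning hits this bug.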
I used an Azure file share to create a dynamic PVC for a pod where I deployed a NodeJS application.
Below is the YAML of the StorageClass I used to create the PVC:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - actimeo=30
parameters:
  skuName: Standard_LRS
The YAML file I used to create the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: my-azurefile
I took a backup of the namespace where the pod is deployed using Velero. When I restored the backup in a different cluster, I see no data present in the pod. But when I use a dynamic azuredisk PVC, I am able to restore the pod with its data.
NOTE: Before restoring the Velero backup, I created the my-azurefile StorageClass in the new cluster where I performed the restoration.
Can anyone please explain why the restoration does not bring the data across when I use a dynamic azurefile PVC? Thanks in advance!
kubectl patch storageclass/<YOUR_AZURE_FILE_STORAGE_CLASS_NAME> \
  --type json \
  --patch '[{"op":"add","path":"/mountOptions/-","value":"nouser_xattr"}]'
After patching the StorageClass with the nouser_xattr mount option as above, I am able to restore the data from the azurefile PV.
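For reference, the patched my-azurefile StorageClass from the question would then carry the extra mount option, roughly like this (a sketch; only mountOptions changes):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - actimeo=30
  - nouser_xattr   # added by the patch; disables user extended attributes on the CIFS mount
parameters:
  skuName: Standard_LRS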
OS: Windows 10
Kubernetes version: 1.14.8
Helm version: 3
Docker Desktop version: 2.1.0.5
Trying to deploy a Kubernetes application using a Helm chart that contains a pod that connects to a statically provisioned Azure file share.
Deploying to an Azure Kubernetes cluster works, but when we try to deploy locally on Docker Desktop, the pod gets this error when trying to mount the share:
Unable to mount volumes for pod "": timeout expired waiting
for volumes to attach or mount for pod "". list of unmounted
volumes=[servicecatalog-persistent-storage]. list of unattached
volumes=[interactor-properties servicecatalog-persistent-storage
default-token-9fp7j]
Mounting arguments: -t cifs -o
username=,password=,file_mode=0777,dir_mode=0777,vers=3.0
//.file.core.windows.net/spps
/var/lib/kubelet/pods/44a70ebf-1b26-11ea-ab13-00155d0a4406/volumes/kubernetes.io~azure-file/servicecatalog-spp-pv
Output: mount error(11): Resource temporarily unavailable
Helm charts (removed redundant information):
Deployment:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: {{ .Release.Name }}-{{ .Chart.Name }}
          volumeMounts:
            - name: servicecatalog-persistent-storage
              mountPath: /data/sppstore
      volumes:
        - name: servicecatalog-persistent-storage
          persistentVolumeClaim:
            claimName: servicecatalog-pv-claim
Persistent Storage / Claims:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: servicecatalog-spp-pv
  labels:
    usage: servicecatalog-spp-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azurefile-secret
    shareName: spps
    readOnly: false
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: servicecatalog-pv-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      usage: servicecatalog-spp-pv
Secret:
apiVersion: v1
kind: Secret
metadata:
name: azurefile-secret
type: Opaque
data:
azurestorageaccountname: <acc name>
azurestorageaccountkey:<acc key>
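(As an aside, the same secret can be created with kubectl, which base64-encodes the literal values itself; a sketch with placeholder values:)

kubectl create secret generic azurefile-secret \
  --from-literal=azurestorageaccountname=<account-name> \
  --from-literal=azurestorageaccountkey=<account-key>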
We have tried:
Using the Azure File diagnostics to ensure ports are open and that we are able to connect from our machine.
Connecting using Azure Storage Explorer (works).
Microsoft says that connecting to an Azure file share locally requires SMB 3.0 for security reasons, which Windows 10 supports, but Kubernetes seems to use CIFS (which is a dialect of SMB?), and we can't figure out whether it is supported for access to Azure file shares. Any ideas?
The recommended way to mount an Azure file share on Linux is using SMB 3.0. By default, Azure Files requires encryption in transit, which is only supported by SMB 3.0. Azure Files also supports SMB 2.1, which does not support encryption in transit, but you may not mount Azure file shares with SMB 2.1 from another Azure region or on-premises for security reasons.
https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-linux
So if you are using SMB 2.1, you can only mount the file share from inside the same Azure region, not from a local workstation or from another Azure region.
Since your cifs mount arguments mention vers=3.0, I would assume this should work in your case. Check the storage account's network access restrictions, and your own network's: outbound port 445 must be reachable, along with the other concerns mentioned in the linked article.
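A quick way to test port 445 reachability from the workstation (a sketch; substitute your storage account name):

# From a Linux/macOS shell:
nc -zvw3 <account>.file.core.windows.net 445
# Or from PowerShell on Windows:
Test-NetConnection -ComputerName <account>.file.core.windows.net -Port 445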
In Kubernetes (Azure AKS), how do I create a PersistentVolume resource that is bound to my own managed disk Azure resource, with a specific diskName and diskURI (resource ID)?
Here is one example, but for a Pod:
kind: Pod
apiVersion: v1
metadata:
  name: mypodrestored
spec:
  containers:
    - name: myfrontendrestored
      image: nginx
      volumeMounts:
        - mountPath: "/mnt/azure"
          name: volume
  volumes:
    - name: volume
      azureDisk:
        kind: Managed
        diskName: pvcRestored
        diskURI: /subscriptions/<guid>/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
You can create your own managed disk in the AKS cluster's resource group and then attach it to whichever pod you need. For more details about the steps, see Volumes with Azure disks.
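A sketch of creating such a disk with the Azure CLI (resource group, disk name, and size are placeholders; the returned ID is what goes into diskURI):

az disk create \
  --resource-group MC_myResourceGroupAKS_myAKSCluster_eastus \
  --name pvcRestored \
  --size-gb 20 \
  --query id --output tsv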
I'm trying to run the Jenkins Helm chart. As part of this setup, I'd like to pass in a persistent volume that I provisioned ahead of time (or perhaps exported from another cluster during a migration).
I'm trying to get my persistent volume (PV) and persistent volume claim (PVC) set up in such a way that when Jenkins starts, it uses my predefined PV and PVC.
I think the problem is that my persistent storage definition for the Azure disk points to a VHD in my storage account. Is there any way to point it to an existing managed disk, and not a blob?
This is how I set up my persistent storage using an Azure managed disk:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-home
spec:
  capacity:
    storage: 10Gi
  storageClassName: default
  azureDisk:
    diskName: jenkins-home
    diskURI: https://<storageaccount>.blob.core.windows.net/jenkins-data/jenkins-home.vhd
    fsType: ext4
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: jenkins-home-pvc
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: default
I then start Helm like this:
helm install --name jenkins stable/jenkins --values=values.yaml
where my values.yaml file looks like:
Persistence:
  ExistingClaim: jenkins-home-pvc
Here is the error I receive when the Jenkins pod starts:
AttachVolume.Attach failed for volume "jenkins-home" : Attach volume "jenkins-home" to instance "aks-agentpool-40897452-0" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="OperationNotAllowed" Message="Addition of a blob based disk to VM with managed disks is not supported."
I posed this question to the Azure team here.
Through their help I arrived at the following solution...
I had tried to use the managed disk resource ID before, but it yelled at me saying it expected a .vhd file. After adding kind: Managed, though, it was perfectly happy to take the managed disk resource ID.
Creating an empty, formatted managed disk is of course a prerequisite for this to work. Copying the managed disk into the same resource group as the AKS cluster was also required.
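Copying the disk into the AKS-managed resource group can be done with the Azure CLI, roughly like this (a sketch with placeholder names; az disk create accepts an existing disk as --source):

az disk create \
  --resource-group {aks-controlled-resource-group-name} \
  --name jenkins-home \
  --source /subscriptions/{subscription-id}/resourceGroups/{source-resource-group}/providers/Microsoft.Compute/disks/jenkins-home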
So now my PV and PVC look like this, and it's working:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-home
spec:
  capacity:
    storage: 10Gi
  storageClassName: default
  azureDisk:
    kind: Managed
    diskName: jenkins-home
    diskURI: /subscriptions/{subscription-id}/resourceGroups/{aks-controlled-resource-group-name}/providers/Microsoft.Compute/disks/jenkins-home
    fsType: ext4
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: jenkins-home-pvc
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: default
I'm trying to create a persistent volume using azureFile, however I keep getting the following error:
MountVolume.SetUp failed for volume "kubernetes.io/azure-file/2882f900-d7de-11e6-affc-000d3a26076e-pv0001" (spec.Name: "pv0001") pod "2882f900-d7de-11e6-affc-000d3a26076e" (UID: "2882f900-d7de-11e6-affc-000d3a26076e") with: mount failed: exit status 32 Mounting arguments: //xxx.file.core.windows.net/test /var/lib/kubelet/pods/2882f900-d7de-11e6-affc-000d3a26076e/volumes/kubernetes.io~azure-file/pv0001 cifs [vers=3.0,username=xxx,password=xxx ,dir_mode=0777,file_mode=0777] Output: mount error(13): Permission denied Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I also tried mounting the share in one of the VMs on which Kubernetes is running, which does work.
I've used the following configuration to create the PV, PVC, and pod:
apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
type: Opaque
data:
  azurestorageaccountkey: [base64 key]
  azurestorageaccountname: [base64 accountname]
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  azureFile:
    secretName: azure-secret
    shareName: test
    readOnly: false
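(The PVC manifest itself is not shown above; a minimal sketch of a pvc0001 claim matching the pod below, assuming the access mode and size used elsewhere in this config:)

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc0001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi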
---
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: nginx
      volumeMounts:
        - mountPath: "/mnt"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pvc0001
This is the version of Kubernetes I'm using, which was built using the Azure Container Service:
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:38:40Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.6", GitCommit:"e569a27d02001e343cb68086bc06d47804f62af6", GitTreeState:"clean", BuildDate:"2016-11-12T05:16:27Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
I wrote a blog post discussing errors encountered when mounting Azure file shares. The permission denied error might be due to one of the following reasons:
The Azure storage account name and/or key were not encoded with the base64 algorithm;
The Azure storage account name and/or key were encoded with echo rather than echo -n (see the sketch below);
The location of the Azure storage account is different from the location of the container host.
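A minimal sketch of the correct encoding (the account name and key are placeholders):

# echo without -n appends a trailing newline, which corrupts the credentials:
echo -n 'mystorageaccount' | base64
echo -n '<storage-account-key>' | base64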