I'm trying to create a persistent volume using azureFile, but I keep getting the following error:
MountVolume.SetUp failed for volume "kubernetes.io/azure-file/2882f900-d7de-11e6-affc-000d3a26076e-pv0001" (spec.Name: "pv0001") pod "2882f900-d7de-11e6-affc-000d3a26076e" (UID: "2882f900-d7de-11e6-affc-000d3a26076e") with: mount failed: exit status 32
Mounting arguments: //xxx.file.core.windows.net/test /var/lib/kubelet/pods/2882f900-d7de-11e6-affc-000d3a26076e/volumes/kubernetes.io~azure-file/pv0001 cifs [vers=3.0,username=xxx,password=xxx,dir_mode=0777,file_mode=0777]
Output: mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I also tried mounting the share directly on one of the VMs on which Kubernetes is running, and that does work.
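For reference, the manual mount test on the VM was along these lines (a sketch using the same placeholder values as the error above, mirroring the arguments kubelet passes to mount.cifs):

# Placeholder values (xxx); mirrors what kubelet runs under the hood:
sudo mkdir -p /mnt/test
sudo mount -t cifs //xxx.file.core.windows.net/test /mnt/test \
  -o vers=3.0,username=xxx,password='xxx',dir_mode=0777,file_mode=0777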
I've used the following configuration to create the PV/PVC/pod.
apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
type: Opaque
data:
  azurestorageaccountkey: [base64 key]
  azurestorageaccountname: [base64 accountname]
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  azureFile:
    secretName: azure-secret
    shareName: test
    readOnly: false
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: nginx
      volumeMounts:
        - mountPath: "/mnt"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pvc0001
This is the version of Kubernetes I'm using, which was built using the Azure Container Service.
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:38:40Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.6", GitCommit:"e569a27d02001e343cb68086bc06d47804f62af6", GitTreeState:"clean", BuildDate:"2016-11-12T05:16:27Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
I wrote a blog post discussing the errors when mounting Azure Files. The permission denied error might be due to one of the following reasons (see the encoding example after this list):
The Azure storage account name and/or key were not encoded with base64;
The Azure storage account name and/or key were encoded with echo rather than echo -n, so a trailing newline was included in the value;
The location of the Azure storage account is different from the location of the container host.
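For the second case, a quick way to see the difference (a minimal sketch using a placeholder account name, not real credentials):

# echo appends a trailing newline, which corrupts the encoded value:
echo 'mystorageaccount' | base64        # bXlzdG9yYWdlYWNjb3VudAo=  (wrong)
# echo -n omits the newline and produces the value Kubernetes expects:
echo -n 'mystorageaccount' | base64     # bXlzdG9yYWdlYWNjb3VudA==  (correct)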
I deployed my first container and got the following info:
deployment.apps/frontarena-ads-deployment created
but then I saw that my container creation was stuck in Waiting status.
Then I checked the logs using kubectl describe pod frontarena-ads-deployment-5b475667dd-gzmlp and saw a MountVolume error which I cannot figure out:
Warning  FailedMount  9m24s  kubelet  MountVolume.SetUp failed for volume "ads-filesharevolume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume --scope -- mount -t cifs -o username=frontarenastorage,password=mypassword,file_mode=0777,dir_mode=0777,vers=3.0 //frontarenastorage.file.core.windows.net/azurecontainershare /var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume
Output: Running scope as unit run-rf54d5b5f84854777956ae0e25810bb94.scope.
mount error(115): Operation now in progress
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Before running the deployment I created a secret in Azure, using the already created Azure file share, which I referenced within the YAML.
$AKS_PERS_STORAGE_ACCOUNT_NAME="frontarenastorage"
$STORAGE_KEY="mypassword"
kubectl create secret generic fa-fileshare-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
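To double-check what actually ended up in the secret, you can decode it back (assuming GNU base64; the secret name is the one created above):

kubectl get secret fa-fileshare-secret -o jsonpath='{.data.azurestorageaccountname}' | base64 -d
kubectl get secret fa-fileshare-secret -o jsonpath='{.data.azurestorageaccountkey}' | base64 -d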
In that file share I have folders and files which I need to mount, and I reference azurecontainershare in the YAML. My YAML looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontarena-ads-deployment
  labels:
    app: frontarena-ads-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-aks-test
      labels:
        app: frontarena-ads-aks-test
    spec:
      containers:
        - name: frontarena-ads-aks-test
          image: faselect-docker.dev/frontarena/ads:test1
          imagePullPolicy: Always
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: ads-filesharevolume
              mountPath: /opt/front/arena/host
      volumes:
        - name: ads-filesharevolume
          azureFile:
            secretName: fa-fileshare-secret
            shareName: azurecontainershare
            readOnly: false
      imagePullSecrets:
        - name: fa-repo-secret
  selector:
    matchLabels:
      app: frontarena-ads-aks-test
The issue was caused by the AKS cluster and the Azure file share being deployed in different Azure regions. If they are in the same region, you will not hit this issue.
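One way to compare the two locations is with the Azure CLI (the cluster name and resource group below are hypothetical; the storage account name is from the question). If the two commands print different regions, that is the problem described above:

az aks show --name myAKSCluster --resource-group myResourceGroup --query location -o tsv
az storage account show --name frontarenastorage --resource-group myResourceGroup --query location -o tsv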
OS: Windows 10
Kubernetes version: 1.14.8
Helm version: 3
Docker Desktop version: 2.1.0.5
We are trying to deploy a Kubernetes cluster using a Helm chart that contains a pod connecting to a statically provisioned Azure file share.
Deploying to an Azure Kubernetes cluster works, but when we try to deploy the cluster locally on Docker Desktop, we get the following error when it tries to mount the share:
Unable to mount volumes for pod "": timeout expired waiting for volumes to attach or mount for pod "". list of unmounted volumes=[servicecatalog-persistent-storage]. list of unattached volumes=[interactor-properties servicecatalog-persistent-storage default-token-9fp7j]

Mounting arguments: -t cifs -o username=,password=,file_mode=0777,dir_mode=0777,vers=3.0 //.file.core.windows.net/spps /var/lib/kubelet/pods/44a70ebf-1b26-11ea-ab13-00155d0a4406/volumes/kubernetes.io~azure-file/servicecatalog-spp-pv
Output: mount error(11): Resource temporarily unavailable
Helm charts (removed redundant information):
Deployment:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: {{ .Release.Name }}-{{ .Chart.Name }}
          volumeMounts:
            - name: servicecatalog-persistent-storage
              mountPath: /data/sppstore
      volumes:
        - name: servicecatalog-persistent-storage
          persistentVolumeClaim:
            claimName: servicecatalog-pv-claim
Persistent Storage / Claims:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: servicecatalog-spp-pv
  labels:
    usage: servicecatalog-spp-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azurefile-secret
    shareName: spps
    readOnly: false
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: servicecatalog-pv-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      usage: servicecatalog-spp-pv
Secret:
apiVersion: v1
kind: Secret
metadata:
  name: azurefile-secret
type: Opaque
data:
  azurestorageaccountname: <acc name>
  azurestorageaccountkey: <acc key>
We have tried:
Using the Azure File Diagnostics to ensure ports are open and that we are able to connect from our machine (link);
Connecting using Azure Storage Explorer (works).
Microsoft says that connecting to an Azure file share locally requires SMB 3.0 for security reasons, which Windows 10 supports. Kubernetes seems to use CIFS (a dialect of SMB), but we can't figure out whether it is supported for access to an Azure file share. Any ideas?
The recommended way to mount an Azure file share on Linux is using SMB 3.0. By default, Azure Files requires encryption in transit, which is only supported by SMB 3.0. Azure Files also supports SMB 2.1, which does not support encryption in transit, but you may not mount Azure file shares with SMB 2.1 from another Azure region or on-premises for security reasons.

https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-linux

So if you are using SMB 2.1, you can only mount the file share from inside the same region, not from a local workstation or from another Azure region.
Since your CIFS mount mentions vers=3.0, I would assume this should work in your case. Check the storage account's network access restrictions, or your own network restrictions (say, port 445), or the other concerns mentioned in the linked article.
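To rule out port blocking quickly, a simple reachability probe from the workstation can help (a sketch assuming netcat is installed; replace the placeholder with your storage account name):

# Outbound TCP 445 is commonly blocked by ISPs and corporate networks:
nc -vz <storage-account>.file.core.windows.net 445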
I am using the Azure CSI disk driver method for implementing Kubernetes persistent volumes. I have installed azure-csi-drivers in my Kubernetes cluster and am using the files below for end-to-end testing, but my deployment is failing with the following error:
Warning FailedAttachVolume 23s (x7 over 55s) attachdetach-controller
AttachVolume.Attach failed for volume "pv-azuredisk-csi" : rpc error: code = NotFound desc = Volume not found, failed with error: could not get disk name from /subscriptions/464f9a13-7g6o-730g-hqi4-6ld2802re6z1/resourcegroups/560d_RTT_HOT_ENV_RG/providers/Microsoft.Compute/disks/560d-RTT-PVDisk, correct format: ./subscriptions/(?:.*)/resourceGroups/(?:.*)/providers/Microsoft.Compute/disks/(.+)
Note: I have checked multiple times and my URL is correct, but I am not sure if the underscores in the resource group name are creating a problem (RG = "560d_RTT_HOT_ENV_RG"). Does anyone have any idea what is going wrong?
Kubernetes version: 14.9
CSI drivers: v0.3.0
My YAML files are :
csi-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/464f9a13-7g6o-730g-hqi4-6ld2802re6z1/resourcegroups/560d_RTT_HOT_ENV_RG/providers/Microsoft.Compute/disks/560d-RTT-PVDisk
    volumeAttributes:
      cachingMode: ReadOnly
      fsType: ext4
-------------------------------------------------------------------------------------------------
csi-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-azuredisk-csi
  storageClassName: ""
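After applying both files, it is worth confirming that the claim actually binds to the pre-created volume before debugging the attach step (file names as given above):

kubectl apply -f csi-pv.yaml -f csi-pvc.yaml
kubectl get pvc pvc-azuredisk-csi -n azure-static-diskpv-csi-fss   # STATUS should show Bound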
nginx-csi-pod.yaml
kind: Pod
apiVersion: v1
metadata:
name: nginx-azuredisk-csi
namespace: azure-static-diskpv-csi-fss
spec:
nodeSelector:
beta.kubernetes.io/os: linux
containers:
image: nginx
name: nginx-azuredisk-csi
command:
"/bin/sh"
"-c"
while true; do echo $(date) >> /mnt/azuredisk/outfile; sleep 1; done
volumeMounts:
name: azuredisk01
mountPath: "/mnt/azuredisk"
volumes:
name: azuredisk01
persistentVolumeClaim:
claimName: pvc-azuredisk-csi
It seems you created the disk in another resource group, not the AKS node resource group. So you must first grant the Azure Kubernetes Service (AKS) service principal for your cluster the Contributor role on the disk's resource group. For more details, see Create an Azure disk.
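Granting that role could look roughly like this with the Azure CLI (the subscription ID and service principal ID are placeholders; the resource group name is from the question):

az role assignment create \
  --assignee <aks-service-principal-client-id> \
  --role Contributor \
  --scope /subscriptions/<subscription-id>/resourceGroups/560d_RTT_HOT_ENV_RG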
Update:
Finally, I found out the reason why it cannot find the volume, and I think it's a stupid definition: the resource ID of the disk used for the persistent volume is case sensitive. So you need to change your csi-pv.yaml file like below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/464f9a13-7g6o-730g-hqi4-6ld2802re6z1/resourcegroups/560d_rtt_hot_env_rg/providers/Microsoft.Compute/disks/560d-RTT-PVDisk
    volumeAttributes:
      cachingMode: ReadOnly
      fsType: ext4
In addition, the first paragraph of the answer is also important.
Update:
Here are the screenshots showing that the static disk for the CSI driver works on my side.
I have been trying to mount a file share on a Kubernetes pod hosted on AKS in Azure. So far, I have:
1. Successfully created a secret by base64 encoding the name and the key
2. Created a yaml specifying the correct configurations
3. Applied it using kubectl apply -f azure-file-pod.yaml, which gives me the following error:
Output: mount error: could not resolve address for demo.file.core.windows.net: Unknown error
I have an Azure File Share by the name of demo.
Here is my yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: azure-files-pod
spec:
  containers:
    - image: microsoft/sample-aks-helloworld
      name: azure
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: demo
        readOnly: false
How can this possibly be resolved?
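Since the error is a DNS failure rather than an authentication failure, one first debugging step (a sketch, assuming the busybox image can be pulled from the cluster) is to check name resolution from inside the cluster:

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup demo.file.core.windows.net

If this fails too, the issue is cluster DNS or network configuration rather than the volume definition.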
GitHub Issue
I'm using Azure ACS with the Kubernetes orchestrator and Windows agents.
But I keep running into an issue when I try to use an azureFile volume: it never seems to find my share.
The volume remains unknown, and when trying to browse to the website it gives access denied, but this is probably because the folder is empty.
I'll show you my .yaml file and storage structure; I'm pretty sure my secret is correct, I double-checked it.
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: azurepod
  labels:
    Volumes: ok
spec:
  containers:
    - image: XXXX
      name: aspvolumes
      volumeMounts:
        - mountPath: C:\site
          name: asp-website-volume
  imagePullSecrets:
    - name: crcatregistry
  nodeSelector:
    OS: windows
  volumes:
    - name: asp-website-volume
      azureFile:
        secretName: azure-secret
        shareName: asptestsite
        readOnly: false
Kubernetes Azure File mounting on Windows nodes is not ready yet; the code has been merged into v1.9 (see https://github.com/Azure/kubernetes/pull/11), and the feature relies on a new Windows version which has not been published yet.