I want to attach a shared disk to multiple Windows containers on AKS.
From this post I learned that it can be done for Linux containers.
I am trying to do the same with a Windows container, but it fails to mount the shared disk with the error below:
MapVolume.MapPodDevice failed for volume "pvc-6e07bdca-2126-4a5b-806a-026016c3798d" : rpc error: code = Internal desc = Could not mount "2" at "\var\lib\kubelet\plugins\kubernetes.io\csi\volumeDevices\publish\pvc-6e07bdca-2126-4a5b-806a-026016c3798d\4e44da87-ea33-4d85-a7db-076db0883bcf": rpc error: code = Unknown desc = not an absolute Windows path: 2
I used the following to dynamically provision the shared Azure disk:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-custom
provisioner: disk.csi.azure.com
parameters:
  skuname: Premium_LRS
  maxShares: "2"
  cachingMode: None
reclaimPolicy: Delete
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
  volumeMode: Block
  storageClassName: managed-csi-custom
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-shared-disk
  name: deployment-azuredisk
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-shared-disk
  template:
    metadata:
      labels:
        app: test-shared-disk
      name: deployment-azuredisk
    spec:
      nodeSelector:
        role: windowsgeneral
      containers:
        - name: deployment-azuredisk
          image: mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019
          volumeDevices:
            - name: azuredisk
              devicePath: "D:\test"
      volumes:
        - name: azuredisk
          persistentVolumeClaim:
            claimName: pvc-azuredisk-dynamic
Is it possible to mount a shared disk for Windows containers on AKS? Thanks for the help.
Azure shared disks is an Azure managed disks feature that enables attaching an Azure disk to multiple agent nodes simultaneously, and it is not limited to Linux node pools only.
To overcome this issue and mount a disk through the Azure Disk CSI driver on a Windows node, you need to provision a Windows node pool first.
Please refer to this MS tutorial to add a Windows node pool.
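For example, a Windows node pool can be added with the Azure CLI along these lines (the resource group, cluster and pool names below are placeholders, not values from your setup):

az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwin \
    --os-type Windows \
    --node-count 1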
Once you have a Windows node pool, you can use the same built-in managed-csi storage class to mount the disk.
For more information and for validating the volume mapping, you can refer to this MS document.
Related
I am using the Azure CSI disk driver method for implementing a Kubernetes persistent volume. I have installed azure-csi-drivers in my Kubernetes cluster and am using the files below for end-to-end testing, but my deployment is failing with the following error:
Warning FailedAttachVolume 23s (x7 over 55s) attachdetach-controller
AttachVolume.Attach failed for volume "pv-azuredisk-csi" : rpc error:
code = NotFound desc = Volume not found, failed with error: could not
get disk name from
/subscriptions/464f9a13-7g6o-730g-hqi4-6ld2802re6z1/resourcegroups/560d_RTT_HOT_ENV_RG/providers/Microsoft.Compute/disks/560d-RTT-PVDisk,
correct format:
./subscriptions/(?:.)/resourceGroups/(?:.*)/providers/Microsoft.Compute/disks/(.+)
Note: I have checked multiple times and my URL is correct, but I am not sure whether the underscores in the resource group name are causing a problem (RG = "560d_RTT_HOT_ENV_RG"). Please suggest if anyone has any idea what is going wrong.
Kubernetes version: 14.9
CSI driver: v0.3.0
My YAML files are :
csi-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/464f9a13-7g6o-730g-hqi4-6ld2802re6z1/resourcegroups/560d_RTT_HOT_ENV_RG/providers/Microsoft.Compute/disks/560d-RTT-PVDisk
    volumeAttributes:
      cachingMode: ReadOnly
      fsType: ext4
-------------------------------------------------------------------------------------------------
csi-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-azuredisk-csi
  storageClassName: ""
nginx-csi-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  nodeSelector:
    beta.kubernetes.io/os: linux
  containers:
    - image: nginx
      name: nginx-azuredisk-csi
      command:
        - "/bin/sh"
        - "-c"
        - "while true; do echo $(date) >> /mnt/azuredisk/outfile; sleep 1; done"
      volumeMounts:
        - name: azuredisk01
          mountPath: "/mnt/azuredisk"
  volumes:
    - name: azuredisk01
      persistentVolumeClaim:
        claimName: pvc-azuredisk-csi
It seems you created the disk in another resource group, not the AKS node resource group. So you must first grant the Azure Kubernetes Service (AKS) service principal for your cluster the Contributor role on the disk's resource group. For more details, see Create an Azure disk.
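As a rough sketch, you can look up the cluster's service principal and grant it Contributor on the disk's resource group with the Azure CLI (the cluster name, resource group name and subscription ID below are placeholders):

az aks show --resource-group <aks-resource-group> --name <aks-cluster-name> --query servicePrincipalProfile.clientId -o tsv
az role assignment create --assignee <client-id-from-above> --role Contributor --scope /subscriptions/<subscription-id>/resourceGroups/560d_RTT_HOT_ENV_RG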
Update:
Finally, I found out the reason why it cannot find the volume, and it is an unfortunate quirk: the resource ID of the disk used for the persistent volume is case sensitive. So you need to change your csi-pv.yaml file as below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azuredisk-csi
  namespace: azure-static-diskpv-csi-fss
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/464f9a13-7g6o-730g-hqi4-6ld2802re6z1/resourcegroups/560d_rtt_hot_env_rg/providers/Microsoft.Compute/disks/560d-RTT-PVDisk
    volumeAttributes:
      cachingMode: ReadOnly
      fsType: ext4
In addition, the first paragraph of the answer is also important.
Update:
Here are the screenshots of the result, showing that the static disk with the CSI driver works on my side:
I have a private Azure Container Registry that contains two images: a Windows-based one (mcr.microsoft.com/dotnet/core/samples:aspnetapp) and a Linux-based one (a custom test). I created a secret etc., which seems OK. When I try to deploy them with Kubernetes, the following happens:
The Linux-based image from the private registry starts normally.
The Windows-based container from Docker Hub starts normally.
The SAME Windows-based container from the private registry throws an error: Back-off pulling image "spintheblackcircleshop.azurecr.io/aspnetapp"
Anyone?
test.yaml:
apiVersion: v1
items:
  # basplus deployment
  - apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: aspnetapp-private
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: private
        spec:
          terminationGracePeriodSeconds: 100
          containers:
            - name: xxx
              image: spintheblackcircleshop.azurecr.io/aspnetapp
          imagePullSecrets:
            - name: mysecret
  - apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: aspnetapp-public
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: public
        spec:
          terminationGracePeriodSeconds: 100
          containers:
            - name: xxx
              image: mcr.microsoft.com/dotnet/core/samples:aspnetapp
          imagePullSecrets:
            - name: mysecret
  - apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: aspnetapp-private-sleep
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: private-sleep
        spec:
          terminationGracePeriodSeconds: 100
          containers:
            - name: xxx
              image: spintheblackcircleshop.azurecr.io/danielm-test-sleep
          imagePullSecrets:
            - name: mysecret
  # end
kind: List
metadata: {}
AKS doesn't support Windows nodes yet. There is no way to run Windows containers in AKS at the time of writing (05/05/2019).
Edit: fair point raised by the other answer. You actually can run Windows containers in ACI from AKS, but it's not exactly in AKS :)
Well, AKS does not support Windows nodes currently, but you can still run Windows containers by installing the Virtual Kubelet in AKS. It takes advantage of ACI.
See the steps to install the Virtual Kubelet and run a Windows container in the document Use Virtual Kubelet with Azure Kubernetes Service (AKS).
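If you go the Virtual Kubelet/ACI route, the pod also has to be scheduled onto the virtual node explicitly. A minimal sketch, assuming the default labels and tolerations of the ACI connector (verify the exact keys against your installation):

apiVersion: v1
kind: Pod
metadata:
  name: aspnetapp-aci
spec:
  containers:
    - name: aspnetapp
      image: spintheblackcircleshop.azurecr.io/aspnetapp
  imagePullSecrets:
    - name: mysecret
  # The selector and toleration keys below are the ACI connector defaults and may differ in your setup.
  nodeSelector:
    kubernetes.io/role: agent
    beta.kubernetes.io/os: windows
    type: virtual-kubelet
  tolerations:
    - key: virtual-kubelet.io/provider
      operator: Exists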
In Kubernetes (Azure AKS), how do I create a PersistentVolume resource that is bound to my own managed disk Azure resource that has a specific diskName and diskURI (resource id)?
Here is one example but for a Pod:
kind: Pod
apiVersion: v1
metadata:
  name: mypodrestored
spec:
  containers:
    - name: myfrontendrestored
      image: nginx
      volumeMounts:
        - mountPath: "/mnt/azure"
          name: volume
  volumes:
    - name: volume
      azureDisk:
        kind: Managed
        diskName: pvcRestored
        diskURI: /subscriptions/<guid>/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
You can create your own managed disk for the AKS cluster and then attach it to the pod that needs it. For more details about the steps, see Volumes with Azure disks.
The result will look like this:
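If you would rather keep the PersistentVolume/PersistentVolumeClaim pattern instead of the inline azureDisk volume, a minimal sketch looks roughly like this (the PV/PVC names are illustrative, the diskName/diskURI are taken from your example, and the capacity should match the size of the existing disk):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-restored
spec:
  capacity:
    storage: 10Gi   # should match the size of the existing managed disk
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  azureDisk:
    kind: Managed
    diskName: pvcRestored
    diskURI: /subscriptions/<guid>/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restored
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-restored   # binds the claim to the PV above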
How can I attach 100GB Persistent Volume Disk to Each Node in the AKS Kubernetes Cluster?
We are using Kubernetes on Azure using AKS.
We have a scenario where we need to attach Persistent Volumes to each Node in our AKS Cluster. We run 1 Docker Container on each Node in the Cluster.
The reason to attach volumes dynamically is to increase the available IOPS and the amount of storage that each Docker container needs to do its job.
The program running inside each Docker container works against very large input data files (10GB) and writes out even larger output files (50GB).
We could mount Azure file shares, but Azure Files is limited to 60 MB/s, which is too slow for us to move around this much raw data. Once the program running in the Docker image has completed, it will move the output file (50GB) to Blob Storage. The total of all output files may exceed 1TB from all the containers.
I was thinking that if we can attach a Persistent Volume to each node, we can increase our available disk space as well as the IOPS without having to go to a high vCPU/RAM VM configuration (i.e. DS14_v2). Our program is more I/O intensive than CPU intensive.
All the Docker images running in the pods are exactly the same; each reads a message from a queue that tells it which input file to work against.
I've followed the docs to create a StorageClass, Persistent Volume Claim and Persistent Volume, and run this against one pod: https://learn.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv
However, when I create a Deployment and scale the number of pods from 1 to 2, I receive the following error (in production we'd scale to as many nodes as necessary, ~100):
Multi-Attach error for volume
"pvc-784496e4-869d-11e8-8984-0a58ac1f1e06" Volume is already used by
pod(s) pv-deployment-67fd8b7b95-fjn2n
I realize that an Azure disk can only be attached to a single node (ReadWriteOnce); however, I'm not sure how to create multiple disks and attach one to each node at the time we load up the Kubernetes cluster and begin our work.
Persistent Volume Claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 100Gi
This is my Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pv-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: myfrontend
          image: nginx
          volumeMounts:
            - name: volume
              mountPath: /mnt/azure
          resources:
            limits:
              cpu: ".7"
              memory: "2.5G"
            requests:
              cpu: ".7"
              memory: "2.5G"
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: azure-managed-disk
If I knew that I was going to scale to 100 nodes, would I have to create .yaml files with 100 Deployments and be explicit for each Deployment to use a specific volume claim?
For example, in my volume claims I'd have azure-claim-01, azure-claim-02, etc., and in each Deployment I would have to reference a specific named volume claim:
volumes:
  - name: volume
    persistentVolumeClaim:
      claimName: azure-claim-01
I can't quite get my head around how I can do all this dynamically.
Can you recommend a better way to achieve the desired result?
You should use a StatefulSet with a volumeClaimTemplates configuration, like the following:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 4
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
          volumeMounts:
            - name: persistent-storage
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: persistent-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: hdd
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: hdd
provisioner: kubernetes.io/azure-disk
parameters:
  skuname: Standard_LRS
  kind: managed
  cachingMode: ReadOnly
You will get a Persistent Volume for every replica:
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS   REASON   AGE
pvc-0e651011-7647-11e9-bbf5-c6ab19063099   2Gi        RWO            Delete           Bound    default/persistent-storage-web-0   hdd                     51m
pvc-17181607-7648-11e9-bbf5-c6ab19063099   2Gi        RWO            Delete           Bound    default/persistent-storage-web-1   hdd                     49m
pvc-4d488893-7648-11e9-bbf5-c6ab19063099   2Gi        RWO            Delete           Bound    default/persistent-storage-web-2   hdd                     48m
pvc-6aff2a4d-7648-11e9-bbf5-c6ab19063099   2Gi        RWO            Delete           Bound    default/persistent-storage-web-3   hdd                     47m
And every replica will get a dedicated Persistent Volume Claim:
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistent-storage-web-0 Bound pvc-0e651011-7647-11e9-bbf5-c6ab19063099 2Gi RWO hdd 55m
persistent-storage-web-1 Bound pvc-17181607-7648-11e9-bbf5-c6ab19063099 2Gi RWO hdd 48m
persistent-storage-web-2 Bound pvc-4d488893-7648-11e9-bbf5-c6ab19063099 2Gi RWO hdd 46m
persistent-storage-web-3 Bound pvc-6aff2a4d-7648-11e9-bbf5-c6ab19063099 2Gi RWO hdd 45m
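When you later need roughly one pod per node (e.g. ~100), you only scale the StatefulSet; each new replica gets its own disk from the volumeClaimTemplates. Assuming the StatefulSet above:

kubectl scale statefulset web --replicas=100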
I would consider using a DaemonSet. This would schedule exactly one of your pods on each node, so ReadWriteOnce will take effect. The constraint is that you cannot scale your application beyond the number of nodes you have.
I'm trying to run the Jenkins Helm chart. As part of this setup, I'd like to pass in a persistent volume that I provisioned ahead of time (or perhaps exported from another cluster during a migration).
I'm trying to get my persistent volume (PV) and persistent volume claim (PVC) setup in a such a way that when Jenkins starts, it uses my predefined PV and PVC.
I think the problem originates from the fact that the persistent storage definition for the Azure disk points to a VHD in my storage account. Is there any way to point it to an existing managed disk and not a blob?
This is how I set up my persistent storage using an Azure managed disk:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-home
spec:
  capacity:
    storage: 10Gi
  storageClassName: default
  azureDisk:
    diskName: jenkins-home
    diskURI: https://<storageaccount>.blob.core.windows.net/jenkins-data/jenkins-home.vhd
    fsType: ext4
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: jenkins-home-pvc
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: default
I then start helm like this...
helm install --name jenkins stable/jenkins --values=values.yaml
Where my values.yaml file looks like:

Persistence:
  ExistingClaim: jenkins-home-pvc
Here is the error I receive when the Jenkins pod starts:
AttachVolume.Attach failed for volume "jenkins-home" : Attach volume "jenkins-home" to instance "aks-agentpool-40897452-0" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="OperationNotAllowed" Message="Addition of a blob based disk to VM with managed disks is not supported."
I posed this question to the Azure team here.
Through their help I arrived at the following solution...
I had tried to use the managed disk resource ID before, but it yelled at me saying it expected a .vhd file. After adding 'kind: Managed', it was perfectly happy to take the managed disk resource ID.
Creating an empty and formatted managed disk is of course a prerequisite for this to work. Copying the managed disk into the same resource group as the AKS cluster was also required.
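For reference, creating the empty managed disk in the AKS-controlled resource group can be done with the Azure CLI roughly like this (the resource group name below is a placeholder for your MC_* group; formatting the disk with ext4 is a separate step, as noted above):

az disk create \
    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
    --name jenkins-home \
    --size-gb 10 \
    --sku Standard_LRS \
    --query id --output tsv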
So now my PV and PVC look like this and it's working...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-home
spec:
  capacity:
    storage: 10Gi
  storageClassName: default
  azureDisk:
    kind: Managed
    diskName: jenkins-home
    diskURI: /subscriptions/{subscription-id}/resourceGroups/{aks-controlled-resource-group-name}/providers/Microsoft.Compute/disks/jenkins-home
    fsType: ext4
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: jenkins-home-pvc
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: default