Mounting a copy of a managed disk on AKS - azure

I am trying to create a pod that uses an existing Managed Disk as the source for the disks that are mounted. I can attach the managed disk directly, but I can't make it work via PV and a PVC.
These are the files I'm using
pvclaim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Gi
  storageClassName: default
pvdisk.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 256Gi
  storageClassName: default
  azureDisk:
    kind: Managed
    diskName: Mongo-Data-Test01
    fsType: xfs
    diskURI: /subscriptions/<SubId>/resourceGroups/Static-Staging-Disks-Centralus/providers/Microsoft.Compute/disks/Mongo-Data-Test01
  accessModes:
    - ReadWriteOnce
  claimRef:
    name: mongo-pvc
    namespace: default
pvpod.yml
apiVersion: v1
kind: Pod
metadata:
  name: adisk
spec:
  containers:
    - image: nginx
      name: azure
      volumeMounts:
        - name: azuremount
          mountPath: /mnt/azure
  volumes:
    - name: azuremount
      persistentVolumeClaim:
        claimName: mongo-pvc
The ultimate goal is to create a Statefulset that will deploy a cluster of Pods with the same Managed disk as the source for them all.
Any pointers would be appreciated!
Updated to add
The above will create a new disk for each instance (pod) that is launched. What I am looking for is to create each new disk using the createOption: fromImage setting, so that the underlying Azure infrastructure creates a copy of the existing managed disk and then attaches that copy to the pod(s) that are launched.

Kubernetes provides three access modes for mounting Persistent Volumes to a Pod:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
In your case, if you want to mount one volume into many pods, you need to use accessModes: ReadWriteMany, so you need to check whether that mode is available for the Azure storage type you use. For more information, see the Kubernetes documentation on persistent volume access modes.
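For illustration, a claim requesting that mode would be a sketch like the following (the claim name is hypothetical; whether the backing Azure storage type actually supports ReadWriteMany is the caveat above):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data               # hypothetical name
spec:
  accessModes:
    - ReadWriteMany               # many nodes may mount read-write, if the backend supports it
  resources:
    requests:
      storage: 256Gi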

After a conversation with one of the AKS developers, I was told that it is only possible either to attach an existing disk or to create a new, empty disk in AKS. It is unclear whether this will change in the future.

Related

After resizing PV/PVC in AKS using standard LRS storage class, Artifactory > monitoring > storage section still shows old storage space. How to fix?

Issue:
I need to increase the filestore size after resizing my PV, but the filestore size is not changing; even if I set the PV/PVC to 800Gi or 300Gi, it's still stuck at 205Gi.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: artifactory-pv-claim
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi
  volumeMode: Filesystem
  storageClassName: "custom-artifactory"
To resize the PVC, you can edit it and increase the storage request. But for the update to take effect, the documentation describes what has to happen:
File system expansion must be triggered by terminating the pod using the volume. More specifically: Edit the PVC to request more space. Once the underlying volume has been expanded by the storage provider, the PersistentVolume object will reflect the updated size and the PVC will have the FileSystemResizePending condition.
And here are the screenshots from my test: before the change; after the change but before recreating the pod; and after recreating the pod.
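As a hedged sketch of the two pieces involved (the custom-artifactory name and the PVC come from the question; the provisioner, SKU, and new size are illustrative assumptions), note that expansion only works when the StorageClass sets allowVolumeExpansion:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: custom-artifactory
provisioner: disk.csi.azure.com   # assumed Azure Disk CSI provisioner
parameters:
  skuName: StandardSSD_LRS        # illustrative SKU
allowVolumeExpansion: true        # required for PVC resizing to work
reclaimPolicy: Retain
---
# Then raise the request on the existing PVC and recreate the pod that uses it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: artifactory-pv-claim
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: "custom-artifactory"
  resources:
    requests:
      storage: 300Gi              # bumped from 250Gi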

Is it possible to mount a shared Azure disk in Azure Kubernetes to multiple PODs/Nodes?

I want to mount an Azure shared disk to multiple deployments/nodes, based on this:
https://learn.microsoft.com/en-us/azure/virtual-machines/disks-shared
So, I created a shared disk in Azure Portal and when trying to mount it to deployments in Kubernetes I got an error:
"Multi-Attach error for volume "azuredisk" Volume is already used by pod(s)..."
Is it possible to use Shared Disk in Kubernetes? If so how?
Thanks for tips.
Yes, you can, and the capability is GA.
An Azure shared disk can be mounted as ReadWriteMany, which means you can mount it to multiple nodes and pods. It requires the Azure Disk CSI driver, and the caveat is that currently only raw block volumes are supported; the application is therefore responsible for managing the control of writes, reads, locks, caches, mounts, and fencing on the shared disk, which is exposed as a raw block device. This means you mount the raw block device (disk) to a pod container as a volumeDevice rather than a volumeMount.
The documentation examples mostly show how to create a StorageClass to dynamically provision a shared Azure disk, but I have also created one statically and mounted it to multiple pods on different nodes.
Dynamically Provision Shared Azure Disk
Create Storage Class and PVC
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi
provisioner: disk.csi.azure.com
parameters:
  skuname: Premium_LRS  # Currently shared disk only available with premium SSD
  maxShares: "2"
  cachingMode: None     # ReadOnly cache is not available for premium SSD with maxShares>1
reclaimPolicy: Delete
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 256Gi  # minimum size of shared disk is 256GB (P15)
  volumeMode: Block
  storageClassName: managed-csi
Create a deployment with 2 replicas and specify volumeDevices and devicePath in the spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: deployment-azuredisk
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      name: deployment-azuredisk
    spec:
      containers:
        - name: deployment-azuredisk
          image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
          volumeDevices:
            - name: azuredisk
              devicePath: /dev/sdx
      volumes:
        - name: azuredisk
          persistentVolumeClaim:
            claimName: pvc-azuredisk
Use a Statically Provisioned Azure Shared Disk
This uses an Azure shared disk that has already been provisioned through ARM, the Azure Portal, or the Azure CLI.
Define a PersistentVolume (PV) that references the DiskURI and DiskName:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azuredisk-shared-block
spec:
  capacity:
    storage: "256Gi"  # 256 is the minimum size allowed for shared disk
  volumeMode: Block   # PV and PVC volumeMode must be 'Block'
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureDisk:
    kind: Managed
    diskURI: /subscriptions/<subscription>/resourcegroups/<group>/providers/Microsoft.Compute/disks/<disk-name>
    diskName: <disk-name>
    cachingMode: None  # Caching mode must be 'None'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk-managed
spec:
  resources:
    requests:
      storage: 256Gi
  volumeMode: Block
  accessModes:
    - ReadWriteMany
  volumeName: azuredisk-shared-block  # The name of the PV (above)
Mounting this PVC is the same for both dynamically and statically provisioned shared disks. Reference the deployment above.
Note
Only raw block devices (volumeMode: Block) are supported with the shared disk feature; the Kubernetes application should manage coordination and control of writes, reads, locks, caches, mounts, and fencing on the shared disk, which is exposed as a raw block device. Multi-node read-write is not supported by common file systems (e.g. ext4, xfs); it is only supported by cluster file systems.
details: https://github.com/kubernetes-sigs/azuredisk-csi-driver/tree/master/deploy/example/sharedisk

Pod with Azure File Share configured. Do I need PersistentVolume and PVC as well?

we have defined our YAML with
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      name: mypod
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: aksshare
        readOnly: false
and before the deployment we will create the secret with kubectl, where $AKS_PERS_STORAGE_ACCOUNT_NAME and $STORAGE_KEY hold the storage account name and key:
kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME \
  --from-literal=azurestorageaccountkey=$STORAGE_KEY
We already have that file share created as an Azure Files resource, and there is a file stored in it.
I am confused whether we also need to define YAMLs for kind: PersistentVolume and kind: PersistentVolumeClaim, or whether the above YAML is enough. Are PV and PVC required only if we do not have our file share already created on Azure?
I've read the docs https://kubernetes.io/docs/concepts/storage/persistent-volumes/ but I'm still confused about when they need to be defined and when it is OK not to use them at all during the overall deployment process.
Your Pod YAML is OK.
Kubernetes Persistent Volumes are a newer abstraction. If your application uses a PersistentVolumeClaim instead, it is decoupled from the type of storage you use (in your case an Azure file share), so your app can be deployed to e.g. AWS, Google Cloud, or Minikube on your desktop without any changes. Your cluster needs to have some support for PersistentVolumes, and that part can be tied to a specific storage system.
So, to decouple your app YAML from specific infrastructure, it is better to use PersistentVolumeClaims.
Persistent Volume Example
I don't know about Azure File Share, but there is good documentation on Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS).
Application config
Persistent Volume Claim
Your app, e.g. a Deployment or StatefulSet, can have this PVC resource:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: my-azurefile
  resources:
    requests:
      storage: 5Gi
Then you need to create a StorageClass resource, which is probably unique for each type of environment but needs to have the same name and support the same access modes. If the environment does not support dynamic volume provisioning, you may have to manually create a PersistentVolume resource as well.
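For the Azure Files case, such a StorageClass might look like this sketch (assuming the Azure Files CSI provisioner shipped with recent AKS versions; the parameters are illustrative):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-azurefile              # must match the storageClassName in the PVC above
provisioner: file.csi.azure.com   # older clusters use the in-tree kubernetes.io/azure-file provisioner instead
parameters:
  skuName: Standard_LRS
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777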
Examples in different environments:
The linked doc Dynamically create and use a persistent volume with Azure Files in AKS describes the Azure case.
See AWS EFS doc for creating ReadWriteMany volumes in AWS.
Blog about ReadWriteMany storage in Minikube
Pod using Persistent Volume Claim
You typically deploy apps using a Deployment or a StatefulSet, and the part declaring the Pod template is similar, except that for a StatefulSet you probably want to use volumeClaimTemplates instead of a separate PersistentVolumeClaim (a sketch follows the Pod example below).
See full example on Create a Pod using a PersistentVolumeClaim
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: file-share
      persistentVolumeClaim:
        claimName: my-azurefile # this must match the name of your PVC
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: file-share
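For completeness, a minimal StatefulSet sketch using volumeClaimTemplates (all names and sizes are illustrative; it assumes the my-azurefile StorageClass described above):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: task-pv-set               # hypothetical name
spec:
  serviceName: task-pv-set
  replicas: 2
  selector:
    matchLabels:
      app: task-pv-set
  template:
    metadata:
      labels:
        app: task-pv-set
    spec:
      containers:
        - name: task-pv-container
          image: nginx
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: file-share
  volumeClaimTemplates:           # one PVC is created per replica from this template
    - metadata:
        name: file-share
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: my-azurefile
        resources:
          requests:
            storage: 5Gi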

Specify Name of Fileshare with Dynamically Provisioned Storage

Is it possible to specify the name (or append a suffix to the automatically generated name) for Azure file share storage that is dynamically provisioned with Kubernetes?
The automatically provisioned storage names look as follows:
kubernetes-dynamic-pvc-1254de92-8668-4245-bf78-2512fsgdges6
And I would like to change this to something like:
kubernetes-dynamic-pvc-1254de92-8668-4245-bf78-2512fsgdges6-username
either by specifying a new name (with a generated UUID) or by appending a suffix to the auto-generated name.
The current deployment only specifies a PVC for the dynamically provisioned storage, so the name cannot be set in a PV file.
The yaml file for the storage-class contains the following:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: retain-fileshare-storage
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
allowVolumeExpansion: True
reclaimPolicy: Retain
The yaml file for the PVC contains the following:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
  labels:
    app: my-app
    chart: my-chart
    release: my-release
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: retain-fileshare-storage
To clarify:
I am not interested in the name of the PVC, but in the name of the actual resource on Azure (a file share in a storage account, in this case).

Use Azure Storage Account for Prometheus database in Azure Kubernetes Service

I currently have an Azure Kubernetes cluster running with Prometheus and Grafana deployments. Prometheus is using local cluster storage for its database, and I want to mount a persistent volume in the Kubernetes cluster that points back to an Azure Storage Account (file share) for the Prometheus database.
I would like to do this because it seems cleaner than setting up a remote-write configuration and addresses the same issue remote writes solve, namely scalability and durability. I've done some testing and proven that this does in fact work for a non-production, low-traffic environment.
I would like to know if there are any pitfalls I should be aware of if I do move forward with this plan. Has anybody else done this and encountered any issues?
Create a storage class to be used for the Prometheus data, then update the storage details in the Prometheus manifest. A sample manifest is given below:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  labels:
    prometheus: k8s
spec:
  replicas: 2
  version: PROMETHEUS_VERSION
  externalUrl: PROMETHEUS_EXTERNAL_URL
  serviceAccountName: prometheus-k8s
  serviceMonitorSelector:
    matchExpressions:
      - {key: k8s-app, operator: Exists}
  ruleSelector:
    matchLabels:
      role: alert-rules
      prometheus: k8s
  nodeSelector:
    node_label_key: node_label_value
  resources:
    requests:
      memory: PROMETHEUS_MEMORY_REQUEST
  retention: PROMETHEUS_STORAGE_RETENTION
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  storage:
    class: STORAGE_CLASS_TYPE
    selector:
    resources:
    volumeClaimTemplate:
      metadata:
        annotations:
          annotation1: prometheus
      spec:
        storageClassName: STORAGE_CLASS_TYPE
        resources:
          requests:
            storage: PROMETHEUS_STORAGE_VOLUME_SIZE
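The STORAGE_CLASS_TYPE placeholder above refers to whatever StorageClass you create for the file share. A hedged sketch of such a class, assuming the in-tree azure-file provisioner and illustrative parameter values:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus-azurefile     # hypothetical name; substitute it for STORAGE_CLASS_TYPE above
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
reclaimPolicy: Retain            # keep the underlying file share if the PVC is deleted
allowVolumeExpansion: true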
