I have an Azure Kubernetes cluster and I need to mount a data volume for an application, as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-db-password
                  key: db-password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: azure-managed-disk-pvc
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script
I see that there are two options for creating the Persistent Volume: one based on Azure Disks and one based on Azure Files.
What is the difference between Azure Disks and Azure Files with respect to Persistent Volumes in Azure Kubernetes, and when should I use Azure Disks vs Azure Files?
For something like MySQL (which needs exclusive access to its files) you are better off using Azure Disks. An Azure Disk behaves pretty much like a regular disk attached to the pod, whereas Azure Files is mostly meant for cases where you need ReadWriteMany access rather than ReadWriteOnce.
https://learn.microsoft.com/en-us/azure/aks/concepts-storage
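To make the difference concrete, here is a minimal sketch (the PVC names are illustrative, not from the original post, and it assumes an AKS cluster where the built-in managed-csi and azurefile-csi storage classes from the CSI drivers are available) of two dynamically provisioned claims, one per backend:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-disk-pvc            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce               # an Azure Disk attaches to a single node at a time
  storageClassName: managed-csi   # built-in AKS class backed by Azure Disks
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files-pvc          # hypothetical name
spec:
  accessModes:
    - ReadWriteMany               # an Azure Files SMB share can be mounted by many pods across nodes
  storageClassName: azurefile-csi # built-in AKS class backed by Azure Files
  resources:
    requests:
      storage: 5Gi

For the MySQL deployment above, the first claim is the usual choice; the second suits workloads that genuinely need shared, concurrent access to the same files.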
A few days ago I deployed 2 services into an Azure Kubernetes cluster. I set up the cluster with 1 node, with these virtual machine parameters: B2s: 2 cores, 4 GB RAM, 8 GB temporary storage. Then I placed 2 pods on the same node:
MySQL database with a 4Gi persistent volume for storage, 5 tables at the moment
Spring Boot Java application
There are no replicas.
Take a look at the kubectl output for the deployed pods:
The purpose is to create an internal application for the company where I work, which will be used by the company team. There won't be a lot of data in the DB.
When we started to test the connection to the API from the front-end, I received a memory alert like the one below:
The MySQL deployment YAML file looks like this:
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-testing-service
  namespace: testing
spec:
  type: LoadBalancer
  ports:
    - port: 3307
      targetPort: 3306
  selector:
    app: mysql-db-testing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-db-testing
  namespace: testing
spec:
  selector:
    matchLabels:
      app: mysql-db-testing
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-db-testing
    spec:
      containers:
        - name: mysql-db-container-testing
          image: mysql:8.0.31
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysqldb-secret-testing
                  key: password
          ports:
            - containerPort: 3306
              name: mysql-port
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              name: mysql-persistent-storage
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: azure-managed-disk-pvc-mysql-testing
      nodeSelector:
        env: preprod
The Spring app deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-app-api-testing
  namespace: testing
  labels:
    app: spring-app-api-testing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-app-api-testing
  template:
    metadata:
      labels:
        app: spring-app-api-testing
    spec:
      containers:
        - name: spring-app-api-testing
          image: techradaracr.azurecr.io/technology-radar-be:$(Build.BuildId)
          env:
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysqldb-secret-testing
                  key: password
            - name: MYSQL_PORT
              valueFrom:
                configMapKeyRef:
                  name: spring-app-testing-config-map
                  key: mysql_port
            - name: MYSQL_HOST
              valueFrom:
                configMapKeyRef:
                  name: spring-app-testing-config-map
                  key: mysql_host
      nodeSelector:
        env: preprod
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: spring-app-api-testing
    k8s-app: spring-app-api-testing
  name: spring-app-api-testing-service
  namespace: testing
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  type: LoadBalancer
  selector:
    app: spring-app-api-testing
First I deployed the MySQL database, then the Java Spring API.
I guess the problem is with the default resource allocation: the MySQL DB is using 90% of the overall RAM, which is why I'm receiving the memory alert.
I know that there are sections for resource allocation in the YAML config:
resources:
  requests:
    cpu: 250m
    memory: 64Mi
  limits:
    cpu: 500m
    memory: 256Mi
for minimum and maximum CPU and memory resources. The question is: how much should I allocate to the Spring app and how much to the MySQL database in order to avoid memory problems?
I would be grateful for help.
First of all, running the whole cluster on only one VM defeats the purpose of using Kubernetes, especially since you are using a small SKU for the VMSS. Have you considered running the application outside of k8s?
To answer your question, there is no given formula or set of values for requests/limits. The values you choose for resource requests and limits will depend on the specific requirements of your application and the resources available in your cluster.
In detail, you should consider the workload characteristics, cluster capacity, performance (if the values are too small, the application will struggle) and cost.
Please refer to the best practices here: https://learn.microsoft.com/en-us/azure/aks/developer-best-practices-resource-management
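As a rough illustration only (these numbers are my own starting point, not values from the linked document, and they should be tuned against what kubectl top pods actually reports on your cluster), on a single B2s node with ~4 GB of RAM you could cap both workloads so that their combined limits leave headroom for the system pods, for example:

In the MySQL deployment (mysql-db-container-testing):
      resources:
        requests:
          cpu: 250m
          memory: 512Mi
        limits:
          cpu: 500m
          memory: 1Gi

In the Spring Boot deployment (spring-app-api-testing):
      resources:
        requests:
          cpu: 250m
          memory: 512Mi
        limits:
          cpu: "1"
          memory: 1Gi

With both limits at 1Gi, the two pods together can use at most ~2 GB, which leaves room for the kube-system components on a 4 GB node; raise the MySQL limit first if you see the database being OOM-killed.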
I have a YAML file that has a deployment and a persistent volume backed by an Azure file share.
Scenario 1 - The mount succeeds when I mount only the logs folder with the Azure file share. This pretty much works as expected.
Scenario 2 - When I try to mount the application configuration files, the mount fails with the Azure file share. The pod keeps restarting each time, and I am unable to find the files as well.
What am I trying to achieve here?
The Azure file share folder is empty before running the YAML, and after running the YAML I expect the application files from the pod to show up in the Azure file share... I guess that isn't happening; instead, the empty Azure file share folder overwrites (hides) the folder/files in the pod that contain the application.
Is there any way to see the pod's application files in the Azure file share when the pod starts?
e.g. just like a bind mount in docker-compose.
Please find the YAML file below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-product
  namespace: my-pool
  labels:
    app: my-product
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-product
  template:
    metadata:
      labels:
        app: my-product
    spec:
      containers:
        - image: myproductimage:latest
          name: my-product
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: configmap
          env:
            - name: env-file
              value: my-product
          volumeMounts:
            - name: azure
              mountPath: /opt/kube/my-product
      imagePullSecrets:
        - name: secret1
      hostname: my-product
      volumes:
        - name: azure
          persistentVolumeClaim:
            claimName: fileshare-pvc
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileshare-pv
  labels:
    usage: fileshare-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azure-file
  azureFile:
    secretName: secret2
    shareName: myfileshare-folder
    readOnly: false
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: fileshare-pvc
  namespace: my-pool
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azure-file
  resources:
    requests:
      storage: 5Gi
  selector:
    # To make sure we match the claim with the exact volume, match the label
    matchLabels:
      usage: fileshare-pv
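One common workaround for this (just a sketch, not from the original post; the init container name, the /mnt/share path and the use of a shell inside the image are all assumptions) is to seed the empty share once before the application container starts, because a volume mounted over a directory always hides whatever the image put there:

      initContainers:
        # Hypothetical seeding step: the share is mounted at a different path here,
        # so the image's original files at /opt/kube/my-product are still visible
        # and can be copied onto the (initially empty) Azure file share.
        - name: seed-share
          image: myproductimage:latest
          command: ["sh", "-c", "cp -a /opt/kube/my-product/. /mnt/share/"]
          volumeMounts:
            - name: azure
              mountPath: /mnt/share

After the share has been seeded, the main container's mount at /opt/kube/my-product shows the same files both inside the pod and in the Azure file share, which is the closest equivalent to a docker-compose bind mount.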
I want to attach a shared disk to multiple Windows containers on AKS.
From this post I learned that it can be done for Linux containers.
I am trying to do the same with a Windows container, but it fails to mount the shared disk with the error below:
MapVolume.MapPodDevice failed for volume "pvc-6e07bdca-2126-4a5b-806a-026016c3798d" : rpc error: code = Internal desc = Could not mount "2" at "\var\lib\kubelet\plugins\kubernetes.io\csi\volumeDevices\publish\pvc-6e07bdca-2126-4a5b-806a-026016c3798d\4e44da87-ea33-4d85-a7db-076db0883bcf": rpc error: code = Unknown desc = not an absolute Windows path: 2
Error occurred
I used the manifest below to dynamically provision a shared Azure disk:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-custom
provisioner: disk.csi.azure.com
parameters:
  skuname: Premium_LRS
  maxShares: "2"
  cachingMode: None
reclaimPolicy: Delete
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
  volumeMode: Block
  storageClassName: managed-csi-custom
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-shared-disk
  name: deployment-azuredisk
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-shared-disk
  template:
    metadata:
      labels:
        app: test-shared-disk
      name: deployment-azuredisk
    spec:
      nodeSelector:
        role: windowsgeneral
      containers:
        - name: deployment-azuredisk
          image: mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019
          volumeDevices:
            - name: azuredisk
              devicePath: "D:\test"
      volumes:
        - name: azuredisk
          persistentVolumeClaim:
            claimName: pvc-azuredisk-dynamic
Is it possible to mount a shared disk for Windows containers on AKS? Thanks for the help.
Azure shared disks is an Azure managed disks feature that enables attaching an Azure disk to multiple agent nodes simultaneously; it is not limited to Linux node pools only.
To mount a volume through the Azure Disk CSI driver on a Windows node, you need to provision (create) the Windows node pool first.
Please refer to this MS tutorial to add a Windows node pool.
After you have a Windows node pool, you can use the same built-in managed-csi storage class to mount the disk.
For more information and for validating the volume mapping, you can refer to this MS document.
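A minimal sketch of what that looks like (all names are illustrative; it assumes a Windows node pool already exists and that the built-in managed-csi storage class is available on the cluster):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk-windows     # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi   # built-in AKS Azure Disk CSI class
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: windows-disk-test         # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: app
      image: mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019
      volumeMounts:
        - name: azuredisk
          mountPath: 'C:\mnt\azuredisk'   # mounted as a directory inside the Windows container
  volumes:
    - name: azuredisk
      persistentVolumeClaim:
        claimName: pvc-azuredisk-windows

Note that this sketch mounts the disk as a filesystem volume (volumeMounts) rather than as a raw block device (volumeDevices), which is what the original manifest attempted with devicePath.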
I'm using Azure AKS to create a StatefulSet with a volume using the Azure Disk provisioner.
I'm trying to find a way to write my StatefulSet YAML file so that when a pod restarts, it gets a new volume and the old volume is deleted.
I know I can delete volumes manually, but is there any way to tell Kubernetes to do this via the StatefulSet YAML?
Here is my YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: janusgraph
  labels:
    app: janusgraph
spec:
  ...
  ...
  template:
    metadata:
      labels:
        app: janusgraph
    spec:
      containers:
        - name: janusgraph
          ...
          ...
          volumeMounts:
            - name: data
              mountPath: /var/lib/janusgraph
          livenessProbe:
            httpGet:
              port: 8182
              path: ?gremlin=g.V(123).count()
            initialDelaySeconds: 120
            periodSeconds: 10
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "default"
        resources:
          requests:
            storage: 7Gi
If you want your data to be deleted when the pod restarts, you can use an ephemeral volume like emptyDir.
When a Pod is removed from a node for any reason, the data in its emptyDir is deleted permanently (the data does survive container crashes within the same Pod, though).
Sample:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
      volumes:
        - name: www
          emptyDir: {}
N.B.:
By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment. However, you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead.
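For completeness, a small sketch of that memory-backed variant (the sizeLimit value is illustrative; it is worth setting one because tmpfs usage counts against the container's memory limit):

      volumes:
        - name: www
          emptyDir:
            medium: Memory     # back the volume with tmpfs (RAM) instead of node storage
            sizeLimit: 256Mi   # optional cap on how much the volume may grow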
I'm trying to mount a persistent volume into my Windows container, but I always get this error:
Unable to mount volumes for pod "mssql-with-pv-deployment-3263067711-xw3mx_default(....)": timeout expired waiting for volumes to attach/mount for pod "default"/"mssql-with-pv-deployment-3263067711-xw3mx". list of unattached/unmounted volumes=[blobdisk01]
I've created a GitHub gist with the console output of "get events" and "describe sc | pvc | po"; maybe someone can find the solution with it.
Below are the scripts I'm using for deployment.
my storageclass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk-sc
provisioner: kubernetes.io/azure-disk
parameters:
  skuname: Standard_LRS
my PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  storageClassName: azure-disk-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
and the deployment of my container:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-with-pv-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql-with-pv
    spec:
      nodeSelector:
        beta.kubernetes.io/os: windows
      terminationGracePeriodSeconds: 10
      containers:
        - name: mssql-with-pv
          image: testacr.azurecr.io/sql/mssql-server-windows-developer
          ports:
            - containerPort: 1433
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: SA_PASSWORD
          volumeMounts:
            - mountPath: "c:/volume"
              name: blobdisk01
      volumes:
        - name: blobdisk01
          persistentVolumeClaim:
            claimName: azure-disk-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-with-pv-deployment
spec:
  selector:
    app: mssql-with-pv
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
What am I doing wrong? Is there another way to mount a volume?
Thanks for any help :)
I would try:
Change API version to v1: https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-disk
kubectl get events to see if you have a more detailed error (I could figure out the reason when I used NFS by watching events)
Maybe it is this bug, which I read about in this post?
You will need a new volume on the D: drive; it looks like folders on C: are not supported for Windows containers, see here:
https://github.com/kubernetes/kubernetes/issues/65060
Demos:
https://github.com/andyzhangx/demo/tree/master/windows/azuredisk
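Following that last point, a hedged sketch of the change (illustrative only; it reuses the existing blobdisk01 volume and azure-disk-pvc claim and simply moves the mount off the C: drive):

      containers:
        - name: mssql-with-pv
          # ... image, env and ports unchanged ...
          volumeMounts:
            - mountPath: "d:"     # mount the Azure Disk as the D: drive instead of a folder under C:
              name: blobdisk01

You would then also need to point SQL Server's data and log directories at the new drive so the databases actually land on the persistent disk.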