How to mount a local hostPath volume with AKS?

I am trying to create a Kubernetes pod and mount a volume from a local hostPath. I am using an Azure Kubernetes (AKS) cluster. The following is my YAML for creating the pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /opt/myfolder
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /Users/kkadam/minikube/myfolder
      # this field is optional
I have a few files under myfolder that I want to use inside the container. The files are present in the local volume but not inside the container.
What could be the issue?

You cannot mount a path from your local machine into a container running on AKS. You have to put the files on the specific node where the pod is scheduled.
If the pod and the files are on the same node, you can mount the files as a volume into the container and use them.
However, if your pod is scheduled onto another node, you will not be able to access the files inside the container.
Also, if your node is restarted or deleted during auto-scaling for any reason, you might lose the data.

Judging by what you said in your comment and your config, especially the path /Users/kkadam/minikube/myfolder, which is typically a macOS path, it seems that you're trying to mount your local volume (probably from your Mac) into a pod deployed on AKS.
That's the problem.
In order to make it work, you need to put the files you're trying to mount on the node running your pod (which is in AKS).
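If copying the files onto the AKS node itself is impractical, a different (hedged) workaround for a handful of files is to copy them straight into the running container with kubectl cp; this bypasses hostPath entirely and requires tar to be present in the image. The paths below simply reuse the ones from the question:

# Copy the local folder into the running container (pod test-pd, container nginx)
kubectl cp /Users/kkadam/minikube/myfolder test-pd:/opt/myfolder -c nginx
# Verify the files arrived
kubectl exec test-pd -c nginx -- ls /opt/myfolder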


Mounting an SMB or NFS Azure File share onto JupyterHub on Kubernetes for a shared directory

Cluster information:
Kubernetes version: 1.19.11
Cloud being used: Azure
Installation method: Manual creation in Azure online UI/Azure CLI
Host OS: Linux
CNI and version: Azure container networking interface, most recent
Hey everyone! I'm a relatively new user of Kubernetes, but I think I've got the basics down. I'm mainly trying to understand a more complex file share feature.
I’m essentially trying to use JupyterHub on Kubernetes for a shared development environment for a team of about a dozen users (we may expand this to larger/other teams later, but for now I want to get this working for just our team), and one feature that would be extremely helpful, and looks doable, is having a shared directory for notebooks, files, and data. I think I’m pretty close to getting this set up, but I’m running into a mounting issue that I can’t quite resolve. I’ll quickly explain my setup first and then the issue. I’d really appreciate any help/comments/hints that anyone has!
Setup
Currently, all of this setup is on a Kubernetes cluster in Azure or other Azure-hosted services. We have a resource group with a kubernetes cluster, App Service Domain, DNS Zone, virtual network, container registry (for our custom docker images), and storage account. Everything works fine, except that in the storage account, I have an Azure NFS (and plain SMB if needed) file share that I’ve tried mounting via a PV and PVC to a JupyterHub server, but to no avail.
To create the PV, I set up an NFS file share in Azure and created the appropriate kubernetes secret as follows:
# Get storage account key
STORAGE_KEY=$(az storage account keys list --resource-group $resourceGroupName --account-name $storageAccountName --query "[0].value" -o tsv)
kubectl create secret generic azure-secret \
--from-literal=azurestorageaccountname=$storageAccountName \
--from-literal=azurestorageaccountkey=$STORAGE_KEY
I then tried to create the PV with this YAML file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-secret
    shareName: aksshare
    readOnly: false
  nfs:
    server: wintermutessd.file.core.windows.net:/wintermutessd/wintermutessdshare
    path: /home/shared
    readOnly: false
  storageClassName: premium-nfs
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
Issue
During the creation of the PV, I get the error Failed to create the persistentvolume 'shared-nfs-pv'. Error: Invalid (422) : PersistentVolume "shared-nfs-pv" is invalid: spec.azureFile: Forbidden: may not specify more than 1 volume type. Removing the azureFile options solves this error, but I feel like it would be necessary to specify the kubernetes secret that I created. If I do remove the azureFile options, it does successfully create and bind the PV. Then I created the corresponding PVC with
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  # Match name of PV
  volumeName: shared-nfs-pv
  storageClassName: premium-nfs
  resources:
    requests:
      storage: 50Gi
which also successfully bound. However, when I add the configuration to my Helm config for JupyterHub with
singleuser:
  storage:
    extraVolumes:
      - name: azure
        persistentVolumeClaim:
          claimName: azurefile
    extraVolumeMounts:
      - name: azure
        mountPath: /home/shared
I get an error when the JupyterHub server tries to spawn and mount the PVC.
Just in case this is relevant, the NFS Azure file share is only accessible via a private endpoint, but this should be fine since my Kubernetes cluster is running in the same virtual network. In fact, Azure tells me that I could just mount this NFS share on Linux with:
sudo apt-get -y update
sudo apt-get install nfs-common
sudo mkdir -p /mount/wintermutessd/wintermutessdshare
sudo mount -t nfs wintermutessd.file.core.windows.net:/wintermutessd/wintermutessdshare /mount/wintermutessd/wintermutessdshare -o vers=4,minorversion=1,sec=sys
But when I add this to my Dockerfile for the docker image that I'm using in my container, the build fails and tells me that systemctl isn't installed. Trying to add this through apt-get install systemd doesn't resolve the issue either.
From looking at other K8s Discourse posts, I found this one (File based data exchange between pods and daemon-set - General Discussions - Discuss Kubernetes), which looked helpful and has a useful link to deploying an NFS server, but I think the fact that my NFS server is an Azure file share makes this a slightly different scenario.
If anyone has any ideas or suggestions, I'd really appreciate it!
P.S. I had previously posted on the JupyterHub Discourse here (Mounting an SMB or NFS Azure File share onto JupyterHub on kubernetes for a shared directory - JupyterHub - Jupyter Community Forum), but it was suggested that my issue is more of a k8s issue than a JupyterHub one. I also looked at this other Stack Overflow post, but, even though I am open to an SMB file share, it has more to do with VMs than with PVs/PVCs on Kubernetes.
Thank you! :)
So I actually managed to figure this out using a dynamically allocated Azure file share. I'm writing internal documentation for this, but I thought I'd post the relevant bit here. I hope this helps people!
Dynamically creating an Azure file share and storage account by defining a PVC and storage class
Here, we're mainly following the documentation for dynamically creating a PV with Azure Files in AKS. The general idea is to create a storage class that defines what kind of Azure file share we want (premium vs. standard and the different redundancy modes) and then create a PVC (persistent volume claim) that uses that storage class. Consequently, when JupyterHub tries to mount the PVC we created, a PV (persistent volume) is automatically provisioned for the PVC to bind to, along with a storage account and file share for the PV to actually store files in. This all happens in the resource group that backs the one we're already using (these generally start with "MC_"). Here, we will be using the premium storage class with zone-redundant storage. First, create the storage class (more info on the available tags can be found in this repository) with the following YAML:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: shared-premium-azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - actimeo=30
parameters:
  skuName: Premium_ZRS
Name this file azure-file-sc.yaml and run
kubectl apply -f azure-file-sc.yaml
Next, we will create a PVC that dynamically provisions an Azure file share for us (it automatically creates a PV as well). Create the file azure-file-pvc.yaml with the following code:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-premium-azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: shared-premium-azurefile
  resources:
    requests:
      storage: 100Gi
and apply it with
kubectl apply -f azure-file-pvc.yaml
This will create the file share and the corresponding PV. We can check that our PVC and storage class were successfully created with
kubectl get storageclass
kubectl get pvc
It might take a couple of minutes for the PVC to bind.
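If you also want to confirm that a PV was dynamically provisioned and that the claim bound to it, a quick check is:

kubectl get pv
kubectl describe pvc shared-premium-azurefile-pvc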
On the Azure side, this is all that has to be done, and the dynamic allocation of the PV and file share are taken care of for us.
Mounting the PVC to JupyterHub in the home directory
JupyterHub, by default, creates a PVC of 10Gi for each new user, but we can also tell it to mount existing PVCs as external volumes (think of this as just plugging in your computer to a shared USB drive). To mount our previously created PVC in the home folder of all of our JupyterHub users, we simply add the following to our config.py Helm config:
singleuser:
  storage:
    extraVolumes:
      - name: azure
        persistentVolumeClaim:
          claimName: shared-premium-azurefile-pvc
    extraVolumeMounts:
      - name: azure
        mountPath: /home/jovyan/shared
Now, when JupyterHub starts up, all users should have a shared directory in their home folders with read and write permission.
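If you want to double-check the mount from inside a running single-user server, a rough sketch (the namespace and the pod name jupyter-someuser are placeholders; JupyterHub normally names user pods jupyter-<username>) is:

# Find a single-user pod
kubectl get pods -n <jupyterhub-namespace>
# Confirm the share is mounted and writable
kubectl exec -n <jupyterhub-namespace> jupyter-someuser -- df -h /home/jovyan/shared
kubectl exec -n <jupyterhub-namespace> jupyter-someuser -- touch /home/jovyan/shared/write-test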

Getting 'didn't match node selector' when running Docker Windows container in Azure AKS

On my local machine I created a Windows Docker/Nano Server container and was able to push this image to an Azure Container Registry using this command (the reason I had to use a Windows container is that I have to use CSOM in ASP.NET Core, which is not possible on Linux):
docker push MyContainerRegistry.azurecr.io/myimage:v1
That Docker image IS visible inside the Azure Container Registry, which is MyContainerRegistry.
I know that in order to run it I have to create a Container Instance; however, our management team doesn't want to go down that path and wants to use AKS instead.
We do have an AKS cluster created.
kubectl IS running in our Azure shell.
I tried to create an AKS pod using this command
kubectl apply -f myyaml.yaml
These are the contents of the YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mypod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mypod
    spec:
      containers:
      - name: mypod
        image: MyContainerRegistry.azurecr.io/itataxsync:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: mysecret
      nodeSelector:
        beta.kubernetes.io/os: windows
The pod was successfully created.
When I run 'kubectl get pods' I see the newly created pod.
However, when I look at the details of this pod, I see the following:
"Warning FailedScheduling 3m (x2 over 3m) default-scheduler 0/3
nodes are available: 3 node(s) didn't match node selector."
Does it mean that I simply can't run Docker Windows container in Azure using AKS?
Is there any way I can run Docker Windows container in Azure at all?
Thank you very much for your help!
You cannot yet have Windows nodes on AKS; you can, however, use AKS Engine (examples).
Bear in mind that Windows support in Kubernetes is a bit lacking, so you will unfortunately run into issues.
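To see why the scheduler rejects the pod, you can inspect the OS label that the selector is matched against; on a Linux-only AKS cluster every node reports linux, so a windows selector can never be satisfied:

# Show the OS label on each node (all of them will say "linux" here)
kubectl get nodes -L beta.kubernetes.io/os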

Kubernetes Mounting Hostpath to specific location within container

I'm trying to translate a bunch of docker-compose files into Kubernetes YAMLs. I have used kompose, which has gotten me part of the way, but I'm getting stuck on one particular part for multiple containers.
This is one of the containers. Notice the Docker container is mounting /u/data/... to /var/lib/mysql. This is actually necessary, as the mysql directory contains the database and configurations.
server1-backend-mysql:
  image: mysql
  container_name: server-backend-mysql
  restart: always
  volumes:
    - /u/data/server-backend-mysql:/var/lib/mysql
  networks:
    - eolnet
What is the correct way to make this happen in Kubernetes? Note that for k8s I will be mounting an NFS volume (the hostPath is only for testing purposes).
I did look into hostPath, but so far no luck.
When declaring a Pod, specify the volume at spec.volumes, and then the volume mount at spec.containers[*].volumeMounts:
apiVersion: v1
kind: Pod
metadata:
  name: server1-backend-mysql
spec:
  containers:
  - image: mysql
    name: mysql
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: mysql-data
  volumes:
  - name: mysql-data
    hostPath:
      path: /u/data/...
      type: Directory
When declaring a Deployment or StatefulSet (which you should do instead of declaring a Pod), move the respective configurations to spec.template.spec.volumes and spec.template.spec.containers[*].volumeMounts.
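For illustration, a minimal sketch of the same MySQL example written as a Deployment might look like this (the names and the elided hostPath are carried over from the Pod above; treat it as a starting point rather than a finished manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: server1-backend-mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server1-backend-mysql
  template:
    metadata:
      labels:
        app: server1-backend-mysql
    spec:
      containers:
      - image: mysql
        name: mysql
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-data
      volumes:
      - name: mysql-data
        hostPath:
          path: /u/data/...
          type: Directory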
For more information, have a look at the documentation.
As a side note unrelated to your question: if you're planning to run MySQL from an NFS volume, keep in mind that running MySQL from NFS is possible, but not something that MySQL is really optimized for. Be sure to configure your MySQL server accordingly, and check whether your environment permits you to use a networked block device (not a network file system) like a Ceph RBD volume or similar.

How to upload a file to a Kubernetes cluster for my apps to access it?

Let's say we have an application which accesses a file. This app is a jar which is packaged into an image and pushed to a registry for Kubernetes to run. But when we create the Pod, we also need to configure a volume in it. When we specify a volume we give a path, so how do we place the file in that volume from, let's say, our virtual machine?
Please help me understand this with an explanation. Also, should we create a storage account so that it's accessible from the Kubernetes cluster? Please explain the relevant topics as well.
Note: we are using the Azure CLI.
I think the best approach would be to create a ConfigMap with the data you want to use from your application. Then you just need to mount the ConfigMap as a volume in the Pods (explained here) that need the data.
You can easily create a ConfigMap from a file like
kubectl create configmap your-configmap-name --from-file=/some/path/to/file
And then mount it in your Pod (the configMap name referenced in the volume must match the name of the ConfigMap you created):
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config

Kubernetes on Windows Persistent Volume

Does Minikube on Windows support a persistent volume with a hostPath? If so, what is the syntax?
I tried:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kbmongo002
  labels:
    type: local
spec:
  storageClassName: mongostorageclass
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/temp/mongo"
  persistentVolumeReclaimPolicy: Retain
---
This passed validation and created the PV, and a PVC claimed it, but nothing was written to my expected location of C:\temp\mongo.
I also tried:
  hostPath:
    path: "c:/temp/mongo"
  persistentVolumeReclaimPolicy: Retain
---
That resulted in:
Error: Error response from daemon: Invalid bind mount spec
"c:/temp/mongo:/data/db": invalid mode: /data/db
Error syncing pod
If you use VirtualBox on Windows, only C:/Users is mapped into the VM (as /c/Users), which is all that Kubernetes can access. This is a feature of VirtualBox.
Minikube uses a VM to simulate the Kubernetes node.
Minikube also provides a mount feature, though it is not very user-friendly for persistence.
You can try one of the solutions below:
use folders under /c/Users for your YAML file
map extra folders into the VirtualBox VM, like C:\Users
use minikube mount (see host folder mount, and the sketch after this list)
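As a rough sketch of the minikube mount option (the host and VM paths here are just placeholders), the command keeps running in a terminal and exposes a Windows folder inside the Minikube VM, which you can then point a hostPath at:

# Expose C:/temp/mongo inside the Minikube VM (leave this running)
minikube mount C:/temp/mongo:/data/host-mongo
# Then use /data/host-mongo as the hostPath path in the PV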
I have tried k8s hostPath on Windows, and it works well.
You should use a drive letter in the pod mount path, see this example: https://github.com/andyzhangx/Demo/blob/master/windows/azuredisk/aspnet-pod-azuredisk.yaml#L14
As there is a Docker bug related to mount paths on Windows, you need to use a drive letter as the mount path in the pod, see this issue: https://github.com/moby/moby/issues/34729
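As a minimal, hedged sketch of the drive-letter idea (the linked example mounts an Azure disk; an emptyDir is used here only to keep the snippet self-contained, and the image is just a placeholder Windows base image):

apiVersion: v1
kind: Pod
metadata:
  name: win-driveletter-demo
spec:
  nodeSelector:
    beta.kubernetes.io/os: windows
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    command: ["powershell", "-Command", "Start-Sleep -Seconds 3600"]
    volumeMounts:
    - name: data
      mountPath: "D:"   # drive letter, not a path like c:/temp/mongo
  volumes:
  - name: data
    emptyDir: {}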
