How to transfer Docker volumes to Kubernetes volumes in Azure?

I'm trying to move one of my applications to Kubernetes. The app uses Docker volumes and saves its data there.
When I deploy the app on Kubernetes it obviously won't save any data to the Docker volumes; it needs Kubernetes volumes instead. The problem is that my data is inside the Docker volumes and I need to transfer it to a Kubernetes volume. All of this runs in Azure, and since Kubernetes integrates with Azure I figured there should be a way to automate this, but I couldn't find how to do it.
If someone can help with this I'll be very thankful.

To transfer the data from a Docker volume to an Azure Kubernetes volume, the approach I can think of is to transfer the data in the Docker volume to an Azure file share, and then mount the file share into the Kubernetes volume.
The cleanest way is to mount the Azure file share on the machine where the Docker server runs at the beginning, create the Docker volumes in the mount path, and use those volumes for your application in Docker. When you then deploy the application in Azure Kubernetes Service, mount the same file share as its volume.
I do not think there is an interface in Azure Kubernetes Service that automates transferring the data from Docker volumes to AKS volumes.
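For example, a minimal sketch of the copy step on the Docker host, where STORAGEACCT, STORAGEKEY, myshare and myvolume are all placeholders for your own storage account, key, file share and Docker volume:

sudo mkdir -p /mnt/myshare
sudo mount -t cifs //STORAGEACCT.file.core.windows.net/myshare /mnt/myshare \
    -o vers=3.0,username=STORAGEACCT,password=STORAGEKEY,serverino
# copy the contents of the Docker volume into the file share
sudo cp -a /var/lib/docker/volumes/myvolume/_data/. /mnt/myshare/

Once the data is in the file share, the same share can be mounted as a volume in AKS, as described below.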

When I deploy the app on Kubernetes it obviously won't save any data
to the Docker volumes; it needs Kubernetes volumes instead.
This is not entirely true. As @Charles Xu mentioned, as long as your Docker containers use Azure File Storage backed volumes, you can seamlessly mount the same data (Azure File Storage) as Persistent Volumes in Azure Kubernetes Service (one of the two types of data storage supported by AKS).
Given that you are currently running your Docker containers in an on-premises environment, you can use the Docker Volume Driver for Azure File Storage to start pushing your data to Azure (check the demo here), or, for a Swarm cluster in Azure, use Cloudstor.
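For instance, a hypothetical sketch with that volume driver, assuming the azurefile plugin is installed on the Docker host and a file share named myshare already exists (mydata and myapp are placeholders):

docker volume create -d azurefile --name mydata -o share=myshare
docker run -d -v mydata:/data myapp

Everything the container writes to /data then lands in the Azure file share, which AKS can later mount as a Persistent Volume.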

According to the Kubernetes documentation:
https://kubernetes.io/docs/concepts/storage/volumes/
Docker volumes and Kubernetes volumes are different and work differently.
When your PVC has been created (either by you or dynamically provisioned for you), mount it into your pod and copy the data into it using:
kubectl cp
Your app will automatically start storing data in the volume.
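For example, a minimal sketch assuming the PVC is mounted at /mnt/data inside a pod named mypod in the default namespace, and the Docker volume is named myvolume (all placeholders):

kubectl cp /var/lib/docker/volumes/myvolume/_data default/mypod:/mnt/data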
Docker volumes are just directories on disk, so you can't simply map them to a Kubernetes volume.

Related

How to attach a disk in a Kubernetes cluster in Azure (AKS)

I have deployed my application in AKS and it is running. I want to add a new disk (a 30 GB hard disk) but I don't know how to do it.
I want to attach 3 disks.
Here are the details of the AKS cluster:
Node size: Standard_DS2_v2
Node pools: 1 node pool
The storage class is:
NAME                PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION
default (default)   kubernetes.io/azure-disk   Delete          WaitForFirstConsumer   true
Please tell me how to add it.
Based on the Kubernetes documentation:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV.
In the Azure documentation one can find clear guides on how to (a sketch of the dynamic Azure Disks case follows the list):
create a static volume using Azure Disks
create a static volume using Azure Files
create a dynamic volume using Azure Disks
create a dynamic volume using Azure Files
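For the dynamic Azure Disks case, a minimal sketch of a PVC, assuming the default storage class from the question and the 30 GB size asked about:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 30Gi

Applying this and referencing azure-disk-pvc from a pod's volumes section lets AKS provision and attach the disk automatically; create three such claims to attach 3 disks.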
NOTE:
Before you begin you should have an existing AKS cluster and Azure CLI version 2.0.59 or later installed and configured. To check your version, run:
az --version
See also this documentation.
A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned.
This article shows you how to dynamically create persistent volumes with Azure disks for use by a single pod in an Azure Kubernetes Service (AKS) cluster.
But if your requirement is to share the persistent volume across multiple nodes, use an Azure file share:
This article shows you how to dynamically create an Azure Files share for use by multiple pods in an Azure Kubernetes Service (AKS) cluster.
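A minimal sketch of such a shared claim, assuming the built-in azurefile storage class that AKS ships with:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi

The ReadWriteMany access mode is what allows pods on different nodes to mount the same volume at the same time.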

Is it possible to attach an NFS (NetApp Files) volume to an Azure Container Instance?

I have an NFS server (NetApp Files) which I wish to mount as a volume on an Azure Container Instance. Is this possible?
I need to do this programmatically and the az CLI appears not to have any options for it.
I can launch the container in the same vnet as the NFS server but I can't then mount the volume as far as I can tell.
I am able to access the volume from within an AKS cluster I run but I was hoping to keep this bit of functionality outside of the cluster.
I also haven't found any workaround (such as mounting the volume inside the container).
Any suggestions? Is this impossible?
Thanks.
The short answer is yes, it's possible to mount an Azure NetApp Files volume in an Azure Container Instance. You can't do it programmatically or through the Azure CLI directly; instead, you can build a custom Docker image that contains the necessary client tools (for example NFS or SAMBA/CIFS) and performs the mount itself, exposing the NFS volume as a directory inside the container.
As references, you can follow the steps in the related documents below.
The blog post Using a SAMBA/CIFS mount as a docker volume introduces the basic idea, and the blog post Mount AWS EFS, NFS or CIFS/Samba volumes in Docker may also be useful here.
Follow the official document Tutorial: Create a container image for deployment to Azure Container Instances to learn how to create a container image.
The official document Mount or unmount a volume for Windows or Linux virtual machines shows the mount instructions for Azure NetApp Files; the same commands can be used inside the custom image you build.
Once the image has been built, you can deploy it as an Azure Container Instance. For that, refer to the two official documents Tutorial: Deploy an Azure container registry and push a container image and Tutorial: Deploy a container application to Azure Container Instances.
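As an illustration, here is a hypothetical entrypoint script for such a custom image. It assumes the NFS client tools (e.g. nfs-common) are installed in the image, the container runs with the privileges required to mount, and 10.0.0.4:/myvolume stands in for your NetApp Files mount target:

#!/bin/sh
# Mount the Azure NetApp Files export before starting the application.
# 10.0.0.4:/myvolume is a placeholder for your mount target.
mkdir -p /mnt/netapp
mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 10.0.0.4:/myvolume /mnt/netapp
exec "$@"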

Issue with persistent storage in Azure Kubernetes Service using Azure Disk

Not able to set up a persistent volume using an Azure disk
We are trying to deploy an application on AKS that uses a persistent volume. We have noticed that with an Azure disk, if the node hosting the pod that runs the application container is stopped or fails, another pod is spun up on another node, but it can no longer access the persistent volume.
As per the documentation, an Azure disk is mapped to a particular node, while a file share is shared across nodes. What is the way to ensure that an application running on AKS with a persistent volume does not lose its data if a pod or node goes down?
We are looking for a persistent storage solution so that an application with 3 pods in a replica set can use an Azure disk persistent volume in AKS.
For an Azure disk to work as a persistent storage volume in AKS, it has to be attached to one particular node, so it cannot share files between pods running on different nodes. If you want to share and persist files between pods no matter which node they run on, an Azure file share is the right choice.
So, if you have multiple nodes and the deployment has 3 replicas, the best way to share and persist data between the pods is to use an Azure file share or NFS.
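A minimal sketch of the deployment side, assuming an Azure Files backed ReadWriteMany PVC named shared-data (a placeholder, e.g. created as in the previous answer):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest        # placeholder image
        volumeMounts:
        - name: shared
          mountPath: /data         # all 3 replicas see the same files here
      volumes:
      - name: shared
        persistentVolumeClaim:
          claimName: shared-data   # the RWX Azure Files claim

Because the claim is ReadWriteMany, the replicas keep access to the data even when a pod is rescheduled onto another node.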

DC/OS on Azure: Automatically mount disk

I just installed a DC/OS Cluster on Azure using Terraform. Now I was wondering if it's possible to automatically mount Data Disks of agent nodes under /dcos/volume<N>. As far as I understood the docs, this is a manual task. Wouldn't it be possible to automate this step with Terraform? I was looking through the DC/OS docs and Terraform docs but I couldn't find anything related to auto mounting.
It seems you can only mount the data disks on the nodes manually as volumes. That is a task for the cluster itself, not for Azure; Azure can only manage the data disk for you.
What you can do through Terraform is attach the data disk to the node itself as a raw disk, not as a cluster volume. The volume can only be created through the cluster, not through Azure, so Terraform cannot automate that step for you either.
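For completeness, a hypothetical sketch of the manual step on an agent node, assuming the attached data disk appears as /dev/sdc and following the /dcos/volume<N> naming convention:

sudo mkfs.ext4 /dev/sdc
sudo mkdir -p /dcos/volume0
sudo mount /dev/sdc /dcos/volume0
echo '/dev/sdc /dcos/volume0 ext4 defaults 0 2' | sudo tee -a /etc/fstab

Note that DC/OS expects the agent to be stopped and its resource state cleared before it detects new /dcos/volume<N> mounts; check the DC/OS storage documentation for the exact procedure.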

Mount Azure storage in Docker Swarm

I have successfully set up a Docker swarm on an Azure scale set, all good. Now I need to mount an Azure storage device on my container for my app. This is of course simple with docker run, since I can add capabilities, but it cannot be achieved through docker service create. What are the possibilities for mounting storage shares (CIFS/Samba) on the autoscaling nodes for my container?
Thanks in advance.
There is a volume plugin for Docker that mounts Azure storage entities as volumes using a volume driver: https://github.com/Azure/azurefile-dockervolumedriver
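With the plugin installed on every node, a hypothetical sketch of using it from a swarm service, where myshare, mydata and myimage are placeholders:

docker service create \
  --name myapp \
  --mount type=volume,source=mydata,target=/data,volume-driver=azurefile,volume-opt=share=myshare \
  myimage

The --mount flag works with docker service create where -v does not, and the azurefile driver creates and attaches the volume on whichever node the task lands on.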
