Attaching an external disk to a Kubernetes pod in Azure

I want to attach an external disk to a Kubernetes pod in an Azure environment. According to the documentation here https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/azure_file it uses the Azure file system.
What if I want to use OS disks (external disks) like we have in the gcloud environment?

The Azure volumes piece was merged: https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/azure_file/README.md
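In newer Kubernetes releases the azureDisk volume plugin also lets a pod mount an existing managed disk directly, much like a GCE persistent disk. A minimal sketch, assuming a pre-created managed disk; the pod name, image, disk name and URI below are placeholders:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: azure-disk-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    azureDisk:
      kind: Managed                # use Shared/Dedicated for unmanaged VHD blobs
      diskName: myExternalDisk     # placeholder: name of an existing managed disk
      diskURI: /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/disks/myExternalDisk
EOF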

Related

How to attach a disk to a Kubernetes cluster in Azure (AKS)

I have deployed my application in AKS and it is running. I want to add a new disk (a 30 GB hard disk), but I don't know how to do it.
I want to attach 3 disks.
Here are the details of the AKS cluster:
Node size: Standard_DS2_v2
Node pools: 1 node pool
Storage class:
default (default)   kubernetes.io/azure-disk   Delete   WaitForFirstConsumer   true
Please tell me how to add the disks.
Based on the Kubernetes documentation:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV.
In the Azure documentation you can find clear guides on how to:
create a static volume using Azure Disks
create a static volume using Azure Files
create a dynamic volume using Azure Disks
create a dynamic volume using Azure Files
NOTE:
Before you begin, you should have an existing AKS cluster and Azure CLI version 2.0.59 or later installed and configured. To check your version, run:
az --version
See also this documentation.
A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned.
This article shows you how to dynamically create persistent volumes with Azure disks for use by a single pod in an Azure Kubernetes Service (AKS) cluster.
But if your requirement is to share the persistent volume across multiple nodes, use an Azure file share:
This article shows you how to dynamically create an Azure Files share for use by multiple pods in an Azure Kubernetes Service (AKS) cluster.
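With the default kubernetes.io/azure-disk storage class shown in the question, a dynamically provisioned 30 GB disk is just a PersistentVolumeClaim plus a pod that mounts it; for three disks, create three claims. A minimal sketch, assuming that default storage class; the claim, pod and mount path names are placeholders:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-disk-1
spec:
  accessModes:
  - ReadWriteOnce              # an Azure disk is attached to one node at a time
  storageClassName: default    # the azure-disk storage class from the question
  resources:
    requests:
      storage: 30Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-disk
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-disk-1
EOF

Because the storage class uses WaitForFirstConsumer, the disk is only created and attached once the pod is scheduled to a node.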

Issue with persistent storage in Azure Kubernetes Service using Azure Disk

Not able to set up a persistent volume using Azure Disk.
We are trying to deploy an application on AKS, and the application needs to use a persistent volume. With Azure Disk we have noticed that if the node hosting the pod that runs the application container is stopped or fails, another pod is spun up on another node, but it can no longer access the persistent volume.
As per the documentation, an Azure disk is mapped to a particular node, while a file share is shared across nodes. What is the way to ensure that the data of an application running on AKS with a persistent volume is not lost if a pod or node fails?
We are looking for a persistent-storage solution so that an application running as a replica set with 3 pods can use an Azure disk persistent volume in AKS.
For an Azure disk to work as a persistent storage volume in AKS, it must be attached to a specific node, so it cannot share files between pods running on multiple nodes. If you want to share and persist files between pods regardless of which node they run on, an Azure file share is the right choice.
In short, if you have multiple nodes and the deployment has 3 replicas, the best way to share and persist data between the pods is an Azure file share or NFS.
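A minimal sketch of that approach, assuming the built-in azurefile storage class that AKS ships with; the claim, deployment and image names are placeholders:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany              # Azure Files can be mounted by pods on multiple nodes
  storageClassName: azurefile
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: shared
          mountPath: /mnt/shared
      volumes:
      - name: shared
        persistentVolumeClaim:
          claimName: shared-data
EOF

All three replicas then read and write the same file share, and a pod rescheduled to another node keeps access to the same data.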

DC/OS on Azure: Automatically mount disk

I just installed a DC/OS Cluster on Azure using Terraform. Now I was wondering if it's possible to automatically mount Data Disks of agent nodes under /dcos/volume<N>. As far as I understood the docs, this is a manual task. Wouldn't it be possible to automate this step with Terraform? I was looking through the DC/OS docs and Terraform docs but I couldn't find anything related to auto mounting.
It seems you can only mount the data disks to the node manually as a volume. That is a Kubernetes task, not Azure's; Azure can only manage the data disk for you.
What you can do through Terraform is attach the data disk to the node itself as a disk, not as a volume. The volume itself can only be created through Kubernetes, not Azure, so Terraform cannot automate that step for you either.

Azure disk replication across VMs

In Azure, is it possible to have a master VM that writes to a disk which has read-only slave replicas on other VMs?
Our app needs to download ~100 GB of files when scaling to a new VM. This is loaded slowly from an external provider, but we want it to be available quickly when we scale out to more VMs.
I don't think you can do streaming replication (which I think is what you're asking for), or a read-only slave, through an Azure service without implementing it yourself over the network or through a relational database management system.
As of this writing, one disk cannot be attached to multiple Azure VMs (see the FAQ for Managed Disks). One option would be to create a snapshot of the disk and create a new disk from the snapshot. You could automate this via the Azure Managed Disks API (e.g. an Azure PowerShell script), and it would have to happen on a VM that isn't running.
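A minimal sketch of that snapshot route with the Azure CLI; the resource group, disk, snapshot and VM names are placeholders, and the same steps exist in Azure PowerShell:

# snapshot the source managed disk
az snapshot create --resource-group myRG --name data-snap --source source-data-disk
# create a new managed disk from the snapshot
az disk create --resource-group myRG --name data-copy --source data-snap
# attach the copy to the newly scaled-out VM as a data disk
az vm disk attach --resource-group myRG --vm-name new-scale-out-vm --name data-copy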
If your data is the same and doesn't change per new VM, you can store it in Azure Files (Standard or Premium) and have the file share mounted on every new VM when it is created. The snapshot-disk approach will make this pretty complex; Azure Files is a good choice in this scenario.

AWS NFS mount needs to be moved to AZure

I have already read this: mount -t nfs vs cifs :(
Our requirement is that we have an application hosted in AWS that uses nfs-utils to mount an EFS file system. My question is how this can be done in Azure. I know Azure has Azure Files, which works in quite a similar way to EFS, but as per the Azure documentation it is mounted only through cifs-utils. The point is: although it will mount an Azure file share in Azure, will it work without any issues, or do we need to change something in our commands to make it happen?
I am not good at Linux, so please pardon me if I am sounding stupid.
Our requirement is that we have an application hosted in AWS that uses nfs-utils to mount an EFS file system. My question is how this can be done in Azure.
Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud.
If you want to do the same thing in Azure, I think you are talking about an Azure storage blob (a new data disk).
In Azure, we can use the Azure portal to add a new disk to an Azure VM as a data disk; it works like adding a physical data disk to a host. Then we can use fdisk to partition the disk and create a file system on the new partition.
We can follow this article to attach a new disk to an Azure VM via the Azure portal.
After that is completed, we can follow this article to initialize the new data disk in Linux.
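A minimal sketch of that initialization, assuming the new data disk shows up as /dev/sdc (check with lsblk); the device name and mount point are placeholders:

# partition the new disk with a single partition spanning it
sudo parted /dev/sdc --script mklabel gpt mkpart datapart ext4 0% 100%
# create a file system on the new partition
sudo mkfs.ext4 /dev/sdc1
# mount it
sudo mkdir -p /datadrive
sudo mount /dev/sdc1 /datadrive
# add a UUID-based entry to /etc/fstab so the mount survives reboots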
I know Azure has Azure Files, which works in quite a similar way to EFS, but as per the Azure documentation it is mounted only through cifs-utils.
You are right, Azure Files works in a similar way to EFS, but Azure Files uses the Server Message Block (SMB) protocol (also known as the Common Internet File System, or CIFS).
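A minimal sketch of mounting an Azure file share on Linux with cifs-utils; the storage account, share name, mount point and key are placeholders:

# install the SMB/CIFS client and create a mount point
sudo apt-get install -y cifs-utils
sudo mkdir -p /mnt/myshare
# mount the share over SMB 3.0 with the storage account name and key
sudo mount -t cifs //mystorageaccount.file.core.windows.net/myshare /mnt/myshare \
  -o vers=3.0,username=mystorageaccount,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino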
The maximum size of an Azure file share is 5 TiB, there is a quota of 20,000 open handles on a single file, and the maximum is 1,000 IOPS per share.
We can create a data disk from an Azure storage blob; the maximum size of a data disk is 4 TiB (we can attach multiple data disks to a VM), and the OS disk limit is 2 TiB.
AWS EFS supports the Network File System versions 4.0 and 4.1 (NFSv4) protocols.
Here is an article about the performance of Azure file shares and Azure storage blobs.
