Storage to cloud - Linux

We have a working NetApp with ESXi (VMware 5.5) setup, with multiple VMs running on three ESXi systems but residing entirely on NetApp storage.
We are thinking of moving this entire setup to a private cloud consisting of HP Nimble cloud storage. This cloud is currently owned by one of our departments, which is ready to give us space (in terms of storage) and ESXi hosts (VMI cluster) to run our VMs on a rental basis. The immediate advantages for us are more space, more network speed, a DR setup, and no more worrying about the hardware.
Of course this is still in the discussion phase, but I would like to ask you experts the following questions.
1. NetApp storage is all about data plus its configuration (snapshots, user quota policies, export rules, etc.). When we talk about storage space in the cloud, how are we going to control/administer those configuration parts? Or will it no longer be possible for us to administer them, with the cloud administrators taking that control into their hands so that we depend on them for every configuration change? This is a very important factor.
2. Can the VMs running on NetApp storage be migrated without much effort? Is there a documented method for this?
Your view on this will be really helpful.
Thanks in advance.
Regards,
Admin

On point #1, a common way to provide multi-tenant administrator access on NetApp is to create a separate SVM (Storage Virtual Machine) [1] that a tenant administrator can use to manage volume capacity, snapshots, quotas, etc.
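For illustration, an SVM with its own delegated admin can be set up from the ONTAP CLI roughly like this. A minimal sketch only; the SVM, aggregate, and volume names are placeholders, and the provider's naming and networking will differ:

    # Create a tenant SVM with its own root volume
    vserver create -vserver svm_tenant1 -rootvolume svm_tenant1_root \
      -aggregate aggr1 -rootvolume-security-style unix

    # Set a password for the SVM's built-in vsadmin account and unlock it,
    # so the tenant can manage volumes, snapshots, quotas, and export rules
    # within this SVM only
    security login password -vserver svm_tenant1 -username vsadmin
    security login unlock -vserver svm_tenant1 -username vsadmin

The tenant admin would also need a management LIF on the SVM to log in with; everything vsadmin can touch is scoped to that one SVM, which is exactly the separation the question is asking about.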
For #2, a common migration path for moving VMware VMs is to use Storage vMotion [2]. The private cloud provider can remap the ESXi hosts in your environment to be managed under their vCenter Server first. Then from there, they will have the ability to (non-disruptively, in most cases) move the VMs from your old NetApp datastores to new datastores on their array. They can do the same for vMotioning these VMs over to their managed ESXi hosts.
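As a rough sketch of what that looks like from PowerCLI (the VM, datastore, and host names here are hypothetical):

    # Connect to the provider's vCenter Server
    Connect-VIServer vcenter.provider.example

    # Storage vMotion: move the VM's files from the old NetApp datastore
    # to a datastore on the new array, without powering the VM off
    Get-VM "app-vm01" | Move-VM -Datastore (Get-Datastore "nimble-ds01")

    # vMotion: move the running VM onto one of the provider's ESXi hosts
    Get-VM "app-vm01" | Move-VM -Destination (Get-VMHost "esxi01.provider.example")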
[1] https://docs.netapp.com/us-en/ontap/concepts/storage-virtualization-concept.html
[2] https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vcenterhost.doc/GUID-AB266895-BAA4-4BF3-894E-47F99DC7B77F.html

Related

Azure: How to Share Files Between Highly Available Web Servers

We have two Ubuntu VMs inside a Virtual Machine Scale Set with Flexible Orchestration that sit behind an Application Gateway and run Apache Tomcat web servers. When a client connects to one of the VMs and uploads files, those files also need to exist on the other virtual machine.
I have found only two options for doing that:
Azure File Share - $80/month for 1 TB of the Hot SKU, but the speed is only about 1 MB/s when mounted as an SMB share on Ubuntu.
Azure NetApp Files - $600/month for a 4 TB minimum.
Neither option is good: the first is too slow and the second is too expensive. What can we use in the development and production environments to achieve file sharing between highly available VMs?
1 MB/s is awfully low; I am not sure where that is coming from. I am fairly sure I get about 30 MB/s for Standard SSD/HDD deployments when mounting them into Linux Docker containers, which if anything should perform worse.
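Before switching services, it may be worth ruling out the mount itself. A hedged sketch of a CIFS mount forcing SMB 3.1.1 with relaxed attribute caching (the account, share, and credentials paths are placeholders); Premium (SSD-backed) file shares also offer far higher throughput than the Hot tier:

    # Requires the cifs-utils package on Ubuntu
    sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/myshare \
      -o credentials=/etc/smbcredentials/mystorageacct.cred,vers=3.1.1,serverino,cache=strict,actimeo=30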
An alternative to the mounted file shares would be to use shared disks. You can basically attach a disk to multiple VMs at the same time.
There are some limitations; in your case the main one is:
Shared disks can be attached to individual VMSS instances but can't be defined in the VMSS models or automatically deployed.
You can still expect to pay $50-200 for the disk, but you should be able to get much better speeds than what you are currently getting.
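A hedged example of creating such a disk with the Azure CLI (resource names and size are placeholders). Note that a shared disk needs a cluster-aware file system or application on top; two VMs writing to a plain ext4/NTFS volume will corrupt it:

    # Shared disks require Premium SSD or Ultra Disk SKUs
    az disk create \
      --resource-group myRG \
      --name mySharedDisk \
      --size-gb 1024 \
      --sku Premium_LRS \
      --max-shares 2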
Use a Blob and grant access via Managed Identity to your Virtual Machines:
https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage
Blob Pricing and IOPS:
https://azure.microsoft.com/es-es/pricing/details/storage/page-blobs/
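A minimal sketch of how that looks from inside one of the VMs, assuming the VM's managed identity has been granted the Storage Blob Data Contributor role on the storage account (account and container names are placeholders):

    # Authenticate as the VM's managed identity; no credentials stored on disk
    az login --identity

    # Upload a file; the peer VM can download it the same way
    az storage blob upload \
      --account-name mystorageacct \
      --container-name uploads \
      --name photo.jpg \
      --file ./photo.jpg \
      --auth-mode login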

Azure - VM Generalize

I am writing this question because I have found only partial information about my scenario.
In Azure, to clone a VM I need to deallocate and generalize it; after that I can create as many copies as I want. The details I could not find are:
Does the generalized VM "cost" me in terms of cores? After azure vm generalize, if I run azure vm list-usage, does the number of cores used decrease or not?
If I generalize a VM with all users/groups/services configured (Apache, DB, etc.) on the same disk as the VM, will I find these configurations again in the new cloned VMs after the generalize -> clone step?
What parameters can I change after generalization, e.g. availability set, network security group, associated NIC, etc.?
Thanks
Does the generalized VM "cost" me in terms of cores? If I run azure vm list-usage, does the number of cores used decrease or not?
Yes, it still costs you in terms of cores. However, you don't need to pay for the VM's compute; you only pay for the storage account holding the OS and data VHDs. When you run vm list-usage, you will find that the core CurrentValue does not change. When you delete the VM, the cores are released.
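With the current Azure CLI the same check looks roughly like this (resource names and region are placeholders):

    # Stop billing for compute, then mark the VM as generalized
    az vm deallocate --resource-group myRG --name myVM
    az vm generalize --resource-group myRG --name myVM

    # Compare the regional core quota usage before and after
    az vm list-usage --location westeurope --output table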
Will I find these configurations again in the new cloned VMs?
Yes, you will. Sysprep removes all your personal account information, among other things, and prepares the machine to be used as an image. For more information, please refer to this link. For a Linux VM, please refer to this link.
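For reference, the generalization step inside the guest is typically the following (run before deallocating; the Windows path is the standard Sysprep location):

    # Windows guest, as administrator:
    #   %WINDIR%\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown

    # Linux guest: remove host-specific data (SSH host keys, DHCP leases, etc.)
    sudo waagent -deprovision+user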
What parameters can I change after generalization, e.g. availability set, network security group, associated NIC, etc.?
You can use the generalized VHD image to create any number of VMs, and you can associate all of those settings with each new VM. For more information about how to create a VM from a generalized VHD image, please refer to this link.
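A sketch with the current CLI, which also shows where the availability set, NSG, and other settings are chosen per clone (all names are placeholders):

    # Capture a managed image from the generalized VM
    az image create --resource-group myRG --name myImage --source myVM

    # Create a clone; availability set, NSG, etc. are picked for each new VM
    az vm create \
      --resource-group myRG \
      --name myClone01 \
      --image myImage \
      --availability-set myAvSet \
      --nsg myNSG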
VMs are billed according to their capacity, not on the basis of whether they are generalized or not; a VM and a VM created from a generalized VM would cost the same (granted they are the same "size").
This question has nothing to do with Azure; it's just Sysprep. Azure adds nothing to this process.
Anything you would expect when creating a new VM.
You need to understand the basic concepts behind virtualization to understand what is going on here.
https://en.wikipedia.org/wiki/Virtualization

Azure VMs and load balancing

I am new to Windows Azure. I recently set up a VM and hosted a website; according to the SLA, I need to have two VMs in the availability set. I have now set up the second VM.
My question is: what do I need to use the second VM for?
If I set up load balancing, does Azure redirect users to the second VM? This second VM has nothing on it.
I would like to understand this, and also: is it possible to replicate the content of the first VM to the second one, so that whenever the first one is down the second VM can take over?
Thanks
First, you must understand the statement about a minimum of two machines to get the 99.95% SLA. It is not about "reserving" resources for use in case of a fault or update (fault domains and update domains in the availability set). Your application must be designed to run on more than one server, so you need to run it on two servers connected to the availability set. You can synchronize storage with GlusterFS (if you use Linux) or another distributed file system, as sketched below. You can also use the Azure Files service (SMB as a service) to share storage. For sharing a DB (for example MySQL), you need a cluster (independent, or distributed across your two machines).
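To make the GlusterFS suggestion concrete, a minimal two-node replicated volume might look like this (host names, brick paths, and the mount point are placeholders; a plain two-node replica is prone to split-brain, so an arbiter node is worth considering):

    # On both VMs: install GlusterFS and prepare a brick directory
    sudo apt-get install -y glusterfs-server
    sudo mkdir -p /data/brick1

    # On vm1: form the trusted pool and create a 2-way replicated volume
    # ("force" is needed if the bricks live on the root file system)
    sudo gluster peer probe vm2
    sudo gluster volume create webdata replica 2 \
      vm1:/data/brick1 vm2:/data/brick1 force
    sudo gluster volume start webdata

    # On each VM: mount the replicated volume where the app expects its files
    sudo mount -t glusterfs localhost:/webdata /var/www/shared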
So... you must start to think the "cloud way" instead of in terms of typical single-VM administration.

Possible to keep two VHDs in sync on Azure VMs

This is more of a scenario than a specific technical question.
I have two Azure VMs that run a web application in load-balanced mode,
as per this article: http://asheej.blogspot.in/2014/03/load-balancing-using-windows-azure.html
Both virtual machines have an additional disk attached which stores images that are referenced by the web application hosted in the VMs' IIS.
Now, what would be the best way to keep the contents of the two VM hard drives in sync?
For example, if I delete or add data on the VHD of the first VM, that change should also be reflected on the second VM.
Is anything possible here, perhaps using a common VHD for both machines, which would take syncing out of the question?
Before going into the solution, let me briefly touch on the VM and disk relationship.
Typically a VM has three kinds of disks attached: 1. the OS disk, 2. a temporary disk, and 3. data disks. The VM holds a lease on all of these disks; the only way to write to the data disks is via the VM.
The C: disk is persistent, meaning that when the VM is rebooted the data on the disk is retained. But D:\ is non-persistent; when you reboot, the disk is wiped clean. So at no point should D:\ be used to store any user data.
So writing a process to sync between two VMs just to keep pictures in sync is not ideal. You may know this already, but I wanted to set the context for the choice of options provided below.
Your potential options are as follows:
1. Set up a file share using the new Azure File service (in preview): http://blogs.technet.com/b/uspartner_ts2team/archive/2014/06/09/setting-up-a-file-share-for-the-new-azure-file-service.aspx. This becomes the single source for all your images, and you don't need to worry about syncing files (see the sketch after this list).
2. Store the images in Azure Blob storage and access them from the application running in the VM: http://blogs.msdn.com/b/yaohuang1/archive/2012/07/02/asp-net-web-api-and-azure-blob-storage.aspx and http://www.nickharris.net/2012/11/how-to-upload-an-image-to-windows-azure-storage-using-mobile-services/
3. Host another VM as a web server and serve your images from there; the two VMs can then refer to those images. The cost here is hosting the extra VM.
The key point with all three options is that there is no need to sync files between two different places; everything lives in a single place.
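For option 1, mounting the share on each Windows VM is a one-liner (storage account, share name, and key are placeholders):

    REM Map the Azure file share as a drive on both IIS VMs
    net use Z: \\mystorageacct.file.core.windows.net\images /u:AZURE\mystorageacct <storage-account-key>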
Edited based on new information:
In your scenario, hosting your files on the VMs is not the right approach. You should take the following into consideration, even for a short-term solution, if you are using the Azure LB.
The Azure Load Balancer uses a 5-tuple (source IP, source port, destination IP, destination port, protocol type) to calculate the hash that maps traffic to the available servers, and the distribution is fairly random. So if you load-balance the VMs, you cannot control which VM the images are served from.
Manual updates are not practical in this scenario.
You either need to set up a virtual network that allows you to create and share a Windows file share, or you should investigate using the Azure File service to create a share that both VMs connect to (see: http://blogs.technet.com/b/uspartner_ts2team/archive/2014/06/09/setting-up-a-file-share-for-the-new-azure-file-service.aspx).

Azure VMs high-availability setup for data disk or storage

I'm currently looking into a high-availability approach for a file server within Azure, for which I will need to deploy VMs. The data on the file server will be constantly changing. From what I have read so far, I will need at least two VMs, grouped together into a shared availability set along with a cloud service. Although this addresses the application and server aspect, what about the storage and the data on it?
I understand that I can't attach a single disk to multiple VMs, so I'm a bit lost on how to proceed. Any thoughts or ideas on how to move forward with this?
In short, I have a VM with a data disk directly attached to it, and I'm looking to provide high availability in the event that the VM goes offline, whether through an outage, host patching, hardware maintenance, etc.
Have a look into Azure Blob storage: don't worry about disks and so on, just let the Azure fabric handle the data redundancy and scalability for you!
Here's an "all you need" introduction to Windows Azure Storage:
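As a quick hedged sketch of the idea with today's Azure CLI (account and container names are placeholders, and this assumes you are authenticated with data-plane access to the account): the file data lives in a container that any number of VMs can read and write, so no single VM going offline takes the data with it.

    # Upload from one VM...
    az storage blob upload --account-name mystorageacct \
      --container-name files --name report.docx --file ./report.docx

    # ...and download from another
    az storage blob download --account-name mystorageacct \
      --container-name files --name report.docx --file ./report-copy.docx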
