Possible to keep two VHDs in sync on Azure VMs

This is more of a scenario than a specific technical question.
I have two Azure VMs that run a web application in load-balanced mode,
as per this article: http://asheej.blogspot.in/2014/03/load-balancing-using-windows-azure.html
Both virtual machines have an additional disk attached, which stores images that are referenced from the web application hosted in each VM's IIS.
Now, what would be the best way to keep the contents of the two VM hard drives in sync?
For example, if I delete or add data on the VHD of the first VM, that change should also be reflected on the second VM.
Is there anything possible, perhaps using a common VHD for both machines, that would take syncing out of the question?

Before going into the solution, let me briefly touch on the relationship between a VM and its disks.
Typically a VM has three kinds of disks attached: 1. the OS disk, 2. the temporary disk, and 3. data disks. The VM holds a lease on all of these disks; the only way to write to the data disks is via the VM.
The C:\ disk is persistent, meaning that when the VM gets rebooted the data on the disk is retained. The D:\ disk, however, is non-persistent: when you reboot, the disk can be wiped clean, so at no point should D:\ be used to store any user data.
So writing a process to sync between two VMs just to keep pictures in sync is not ideal. You might know this already, but I wanted to set the context for the options provided below.
Your potential options are as follows:
1. Set up a file share using the new Azure File Service (in preview): http://blogs.technet.com/b/uspartner_ts2team/archive/2014/06/09/setting-up-a-file-share-for-the-new-azure-file-service.aspx. This will be the single source for all your images, and you don't need to worry about syncing files at all.
2. Store the images in Azure Blob storage and access them from the application running in the VMs (see the sketch after this list): http://blogs.msdn.com/b/yaohuang1/archive/2012/07/02/asp-net-web-api-and-azure-blob-storage.aspx and http://www.nickharris.net/2012/11/how-to-upload-an-image-to-windows-azure-storage-using-mobile-services/
3. Host another VM as a web server and serve your images from there; the two load-balanced VMs can then refer to those images. The cost here is hosting the extra VM.
The key point with all three options is that there is no need to sync files between two different places; everything lives in a single place.
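For option 2, here is a minimal sketch of reading and writing images in Blob storage from the application, assuming the current azure-storage-blob Python SDK (much newer than the links above) and hypothetical account and container names:

    from azure.storage.blob import BlobServiceClient

    # Hypothetical connection string and container name.
    service = BlobServiceClient.from_connection_string("<storage-connection-string>")
    container = service.get_container_client("images")

    # Upload once, from either VM...
    with open("logo.png", "rb") as f:
        container.upload_blob(name="logo.png", data=f, overwrite=True)

    # ...and read from any load-balanced VM; there is nothing to sync.
    data = container.download_blob("logo.png").readall()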
Edited based on new information:
In your scenario, hosting the files on the VMs is not the right approach. Even for a short-term solution, you should take the following into consideration if you are using the Azure Load Balancer.
The Azure Load Balancer uses a 5-tuple (source IP, source port, destination IP, destination port, protocol type) to calculate the hash that maps traffic to the available servers, and the distribution is fairly random. So if you load-balance the VMs, you cannot control which VM the images are accessed from.
Manual updates are therefore not practical in this scenario.

You either need to set up a virtual network that allows you to create and share a Windows file share, or you should investigate the Azure File Service for creating a share that both VMs connect to (see: http://blogs.technet.com/b/uspartner_ts2team/archive/2014/06/09/setting-up-a-file-share-for-the-new-azure-file-service.aspx).
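If you go the Azure File Service route, both VMs just point at the same share. As a rough sketch using the present-day azure-storage-file-share Python SDK (the service was still in preview when this was written), with hypothetical share and file names:

    from azure.storage.fileshare import ShareClient

    # Hypothetical names; every VM that connects to this share sees the same files.
    share = ShareClient.from_connection_string(
        "<storage-connection-string>", share_name="images"
    )
    # share.create_share()  # first run only

    with open("logo.png", "rb") as f:
        share.get_file_client("logo.png").upload_file(f)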

Related

Azure. How to Share Files Between Highly Available Web Servers

We have two Ubuntu VMs inside a Virtual Machine Scale Set with Flexible Orchestration that are behind an Application Gateway and are running Apache Tomcat web servers. When a client connects to one of the VMs and uploads files, those files also need to exist on the other virtual machine.
I have only found 2 options to do that:
Azure File Share - $80/month for 1 TB of the Hot SKU, but the speed is only 1 MB/s when mounted as an SMB share on Ubuntu.
Azure NetApp Files - $600/month for a 4 TB minimum.
Neither option is good: the first is too slow and the second is too expensive. What can we use in the development and production environments to achieve file sharing between highly available VMs?
1 MB/s is awfully low; I am not sure where this is coming from. I am fairly sure I get about 30 MB/s for Standard SSD/HDD deployments when mounting them into Linux Docker containers, and a share should not perform worse.
An alternative to mounted file shares would be shared disks: you can attach a single disk to multiple VMs at the same time (see the sketch below).
There are some limitations; for your case, mainly this one:
Shared disks can be attached to individual VMSS instances but can't be defined in the VMSS models or automatically deployed.
You can still expect to pay $50-200 for the disk, but you should get much better speeds than what you are currently getting.
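As a sketch, creating such a shared disk with the azure-mgmt-compute Python SDK might look like the following; the resource names are hypothetical, and max_shares is the property that permits multiple simultaneous attachments:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # max_shares > 1 is what makes the disk attachable to several VMs at once.
    poller = client.disks.begin_create_or_update(
        "my-rg", "shared-data-disk",
        {
            "location": "westeurope",
            "sku": {"name": "Premium_LRS"},
            "disk_size_gb": 1024,
            "max_shares": 2,
            "creation_data": {"create_option": "Empty"},
        },
    )
    disk = poller.result()

Keep in mind that a shared disk only gives you shared block storage, not a shared file system; the next question below runs into exactly that.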
Use a Blob and grant access via Managed Identity to your Virtual Machines:
https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage
Blob Pricing and IOPS:
https://azure.microsoft.com/es-es/pricing/details/storage/page-blobs/
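A minimal sketch of the managed-identity approach from the tutorial above, assuming the azure-identity and azure-storage-blob Python packages and hypothetical account/container names:

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    # On an Azure VM, DefaultAzureCredential resolves to the VM's managed identity.
    service = BlobServiceClient(
        "https://<account>.blob.core.windows.net",
        credential=DefaultAzureCredential(),
    )

    # The identity needs an RBAC role such as 'Storage Blob Data Reader'.
    data = service.get_blob_client("images", "photo.jpg").download_blob().readall()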

Azure virtual machines with shared premium disk - cannot see data

We've got two Windows Server 2019 virtual machines within the same Azure subscription and subnet. Recently, we created a Premium SSD Azure data disk with 'sharing' enabled and mounted it on those two VMs without any problems. It's perfectly fine to use the disk from both machines, but unfortunately files/folders added from one of them are not visible on the other.
Is it possible to somehow truly share the data between the machines using such an Azure disk attached to both of them? Maybe some super secret PowerShell option/flag when mounting the drives?
The machines are within the same domain, so obviously we can simply share a folder (which is what we do right now), but the problem there is that whenever our application writes something to that share, it takes ages due to latency/long upload times (effectively, it freezes the application for a couple of minutes). Yes, they are in the same region (machines and disk). There's this Proximity Placement Groups feature, but it does not seem to be applicable to disks, unfortunately.
We've also tried Azure Files, but we've got exactly the same problem as with the 'shared folder' within the domain (long upload times whenever our application writes something to the persistent storage).
I've gone through Shared drive between Azure Virtual Machines, but there's nothing there about seeing the same contents from all machines which have the disk attached and mounted.
Thank you! Would appreciate any ideas.
Right, so eventually I found the answer. A shared Azure disk only provides shared block-level access; a plain NTFS volume is not cluster-aware, so each machine keeps its own cached view of the file system and never sees the other's writes. The machines therefore have to be joined into a failover cluster. Assuming the shared SSD:
is formatted as NTFS on both machines
has the same drive letter on both machines (in my case F:)
then once those conditions are met, that disk can be added to Cluster Shared Volumes. The program then has no problem writing data smoothly from both machines.
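For reference, the cluster setup itself is normally done with PowerShell cmdlets; here is a hedged sketch that drives them from Python, where the cmdlet names are real but the cluster, node, and disk names are hypothetical:

    import subprocess

    # Run on one of the domain-joined VMs, with admin rights.
    ps = """
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
    New-Cluster -Name AppCluster -Node vm1, vm2
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name 'Cluster Disk 1'
    """
    subprocess.run(["powershell", "-NoProfile", "-Command", ps], check=True)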

Azure VM complete replication

I have an Azure VM setup that contains custom software, files, folders, blah blah. At any point I may need to create multiple identical instances of the existing VM (maybe up to 30). I ran Sysprep in the VM, used the capture-image option for the VM in the Azure portal, and tried to create a new VM from the image. The new VM created from the image did not contain any of the files from the previous VM.
What would be the best approach to preserving a complete image of the VM so that I can mass-deploy from it at any given time?
I'm not sure what you did that made the files disappear; on my side, it works well. Sysprep just removes all your personal account and security information and then prepares the machine to be used as an image. You can see its features in the Sysprep overview.
To create a custom VM image, see Create a managed image of a generalized VM in Azure. Creating an image directly from the VM ensures that the image includes all of the disks associated with the VM, the OS disk as well as any data disks.
For the creation of multiple identical instances, you can use Azure VM Scale Sets. The VMs are identical and load-balanced in a group, so if you want to specialize one of them, you need to specialize all of them through a custom image.
Alternatively, you can use an Azure template to create multiple individual VMs. These are independent VMs, so you can specialize any one of them rather than all of them, but uniform management is a little more difficult.
You can get more details about the differences between them in Differences between virtual machines and scale sets.
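As a sketch of the capture-and-redeploy flow with the azure-mgmt-compute Python SDK (resource names are hypothetical, and the VM must already be sysprepped):

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Deallocate the sysprepped VM, then mark it as generalized.
    client.virtual_machines.begin_deallocate("my-rg", "source-vm").result()
    client.virtual_machines.generalize("my-rg", "source-vm")

    # Create a managed image; this captures the OS disk and all data disks.
    vm = client.virtual_machines.get("my-rg", "source-vm")
    client.images.begin_create_or_update(
        "my-rg", "golden-image",
        {"location": "westeurope", "source_virtual_machine": {"id": vm.id}},
    ).result()

A scale set or an ARM template can then reference the image ID to stamp out the 30 identical instances.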

Can you move/copy Azure virtual machines to a different instance?

If I set up a server running my application on an Azure instance, for example A1, can I later change the instance to D2?
I might want to experiment with a VM at a lower cost but then move to a higher-performing machine at a later date, without having to rebuild everything.
Yes, you can change the size of an Azure VM on demand. Changing the size will trigger a machine reboot, and if you're using a configuration with an SSD temporary drive, the contents of the SSD will be erased. Other than that, everything else is left untouched.
Drew, the Principal PM in this area, has a great blog about this here.
You can only resize a VM to another offering that does not run on fundamentally different hardware. Since A-Series and D-Series VMs have similar hardware, you would be able to swap between those two. You would not be able to go from A-Series to G-Series, though. In addition, you need to look at VM availability per region if you want to swap to something only offered in certain areas, as well as at whether you are using an ASM or ARM VM.
If you have an existing VM, you can check what it can be swapped with in the new portal under "Size" in the VM settings.
This will let you reboot into a different machine type; however, any temp storage will be erased, as with any VM reboot. You just need to ensure you are storing your persistent data on external storage.
You can learn more about the VM size offerings here.
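The same resize can be scripted; a sketch with the azure-mgmt-compute Python SDK and hypothetical resource names:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Sizes the VM can move to on its current hardware cluster.
    for size in client.virtual_machines.list_available_sizes("my-rg", "my-vm"):
        print(size.name)

    # Resize: this reboots the VM and wipes the temporary drive.
    client.virtual_machines.begin_update(
        "my-rg", "my-vm",
        {"hardware_profile": {"vm_size": "Standard_D2_v3"}},
    ).result()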

Azure VM Capture (Process Overview)

I am planning to capture my VM image in Azure to create a copy for VM deployments (I will use it to deploy multiple VMs and for any redeployment scenarios).
Will any data or configuration be lost during the process, be it application-wise or server-wise? I am expecting it to work as simply as copy-and-paste functionality, with no gotchas. Everything within this VM is critical to my clients (customized apps, web services, etc.).
P.S.: I have done my research here: https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-capture-image-windows-server/
It says it will delete my VM once I have captured the image, and this is the part I am very worried about.
The process of capturing the VM will preserve the installed applications, data, and most settings. However, it does clear a few things, like the computer name and network settings, so that the same image can be used to create multiple VMs later on.
Also, this process will delete your existing VM. You have to create a new VM from the image.
If you are unsure about any required settings that may be lost in this process, the strong recommendation is to create a backup of the existing VM before you begin. You can do that by running AzCopy over all the VHDs of the VM (OS and data disks); see the sketch below. You can delete the backup after verifying the image deployments.
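AzCopy itself is a command-line tool, but the same VHD backup can be sketched as a server-side blob copy in Python (azure-storage-blob; the names and URLs are hypothetical, and the source URL would carry a SAS token in practice):

    from azure.storage.blob import BlobServiceClient

    svc = BlobServiceClient.from_connection_string("<connection-string>")
    backup = svc.get_blob_client("backups", "myvm-osdisk.vhd")

    # Server-side copy of the VHD; no data flows through this machine.
    props = backup.start_copy_from_url(
        "https://mystorage.blob.core.windows.net/vhds/myvm-osdisk.vhd?<sas>"
    )
    print(props["copy_status"])  # 'pending' until the copy completes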
There are two ways of creating VM images:
Without deprovisioning: the source VM is not destroyed. You should switch it off to avoid problems, because if you create a VM from the image while the source is running, their hostnames will clash. The idea is to use this capture method for backups.
After running waagent -deprovision on it: the source VM is destroyed in the process, and you can create many VMs from the image with no problem. This is probably what you want to do. Don't worry, it is harmless, apart from destroying the source VM; you can always create a VM from that image. The idea is to use this capture method to create a base image and then have some kind of process that creates and destroys servers (auto scaling).
For example, you create a web server image for your app and instantiate more VMs at peak times.
What exactly does waagent deprovision?
The waagent -deprovision command clears some configuration on the machine. Specifically:
This command will attempt to clean the system and make it suitable for re-provisioning. This operation performs the following tasks:
Removes SSH host keys (if Provisioning.RegenerateSshHostKeyPair is 'y' in the configuration file)
Clears nameserver configuration in /etc/resolv.conf
Removes the root user's password from /etc/shadow (if Provisioning.DeleteRootPassword is 'y' in the configuration file)
Removes cached DHCP client leases
Resets host name to localhost.localdomain
Deletes the last provisioned user account (obtained from /var/lib/waagent) and associated data.
Apart from this, nothing on your server will be touched.
