How do I increase the size of an Azure CloudDrive?

I have an existing Azure CloudDrive that I want to make bigger. The simplest way I can think of is to create a new drive and copy everything over. I cannot see any way to just increase the size of the VHD. Is there a way?

Since an Azure drive is essentially a page blob, you can resize it. You'll find this blog post by the Windows Azure Storage team useful regarding that: http://blogs.msdn.com/b/windowsazurestorage/archive/2010/04/11/using-windows-azure-page-blobs-and-how-to-efficiently-upload-and-download-page-blobs.aspx. Please read the section titled "Advanced Functionality – Clearing Pages and Changing Page Blob Size" for sample code.
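The sample code in that post uses the older .NET storage client; for reference, a minimal sketch of the same resize using the current azure-storage-blob Python SDK (the connection string and names are placeholders, and the new size must be a multiple of 512 bytes):

    from azure.storage.blob import BlobClient

    blob = BlobClient.from_connection_string(
        "<connection-string>", container_name="drives", blob_name="mydrive.vhd")

    # Page blob sizes must be a multiple of 512 bytes.
    blob.resize_blob(64 * 1024 ** 3)   # grow the blob to 64 GB

Note that growing the blob by itself does not touch the VHD footer or the partition table; the other answer below goes into those details.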

Yes, you can.
I know a program that is very easy to use: you can connect to your VHD, create a new one, upload a VHD and connect it to Azure, and upload or download files into the VHD: http://azuredriveexplorer.codeplex.com/

I have found these methods so far:
1. "The soft way": increase the size of the page blob and fix the VHD data structure (the last 512 bytes). Theoretically this creates unpartitioned disk space after the current partition, but if the partitioning scheme also expects metadata at the end of the disk (GPT, or dynamic disks), that has to be fixed as well. I'm aware of only one tool that can do this in-place modification; unfortunately it is not much more than a one-weekend hack (at the time of this writing) and thus fragile (see the author's disclaimer), but it is fast. Please notify me (or edit this post) if the tool improves significantly. A rough sketch of what the footer fix-up involves follows this list.
2. Create a larger disk and copy everything over, as you've suggested. This may be enough if you don't need to preserve NTFS features like junctions and soft/hard links.
3. Plan for the potential expansion and start with a huge (say 1TB) dynamic VHD made up of a small partition and lots of unpartitioned (reserved) space. Windows Disk Manager will see the unpartitioned space in the VHD and can expand the partition into it whenever you want; this is an in-place operation. The subtle point is that the unpartitioned area isn't billed as long as it stays unpartitioned, because it is never written to. (Note that either formatting or defragmenting does allocate the area and causes billing.) It does, however, count against the quota of your Azure subscription (100TB).
4. "The hard way": download the VHD file, use a VHD-resizer program to insert unpartitioned disk space, mount the VHD locally, extend the partition into the unpartitioned space, unmount, and upload. This preserves everything and even works for an OS partition, but it is very slow due to the download/upload and the software installations involved.
5. The same as above, but performed on a secondary VM in Azure. This speeds up the download/upload a lot. Step-by-step instructions are available here.
Unfortunately, all of these techniques require the drive to be unmounted for quite a long time, i.e. they cannot be performed in a highly available manner.
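For illustration, here is roughly what the footer fix-up in option 1 amounts to, using the current azure-storage-blob Python SDK. This is only a sketch of the idea, not the tool mentioned above: it assumes a fixed-format VHD, leaves the disk-geometry field alone, uses placeholder names and connection string, and the drive must be unmounted while you do it.

    import struct
    from azure.storage.blob import BlobClient

    GB = 1024 ** 3
    FOOTER = 512  # fixed-format VHDs end with a 512-byte footer

    blob = BlobClient.from_connection_string(
        "<connection-string>", container_name="drives", blob_name="mydrive.vhd")

    old_blob_size = blob.get_blob_properties().size
    old_disk_size = old_blob_size - FOOTER      # blob size = disk size + footer
    new_disk_size = old_disk_size + 10 * GB     # grow by 10 GB (keep it 512-aligned)

    # 1. Read the existing footer (the last 512 bytes of the blob).
    footer = bytearray(
        blob.download_blob(offset=old_disk_size, length=FOOTER).readall())

    # 2. Patch the "Current Size" field (offset 48, 8 bytes, big-endian).
    struct.pack_into(">Q", footer, 48, new_disk_size)

    # 3. Recompute the footer checksum (offset 64): one's complement of the byte
    #    sum of the footer with the checksum field itself zeroed.
    struct.pack_into(">I", footer, 64, 0)
    struct.pack_into(">I", footer, 64, ~sum(footer) & 0xFFFFFFFF)

    # 4. Grow the page blob, clear the old footer's pages, and write the patched
    #    footer at the new end of the blob.
    blob.resize_blob(new_disk_size + FOOTER)
    blob.clear_page(offset=old_disk_size, length=FOOTER)
    blob.upload_page(page=bytes(footer), offset=new_disk_size, length=FOOTER)

After this the new space still has to be partitioned or the existing partition extended from inside a VM, which is exactly where options 3-5 come in.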

Related

How to reduce used storage in an Azure Function running a container

A little bit of background first. Maybe, hopefully, this can save someone else some trouble and frustration. Skip down to the TL;DR to get to the actual question.
We currently have a couple of genetics workflows related to gene sequencing running in Azure Batch. Some of them are quite light, and I'd like to move them to an Azure Function running a Docker container. For this purpose I have created a Docker image, based on the azure-functions image, containing Anaconda with the necessary packages to run our most common, lighter workflows. My initial attempt produced a huge image of ~8GB. Moving to Miniconda and a couple of other adjustments reduced the image size to just shy of 3.5GB. Still quite large, but it should be manageable.
For this function I created an Azure Function running on an App Service plan on the P1V2 tier, in the belief that I would have 250GB of storage to work with, as stated in the tier description.
I encountered some issues loading my first image (the large one): after a couple of fixes, the log indicated that there was no space left on the device. This puzzled me, since the quota stated that I'd used some 1.5MB of the 250 total. At this point I reduced the image size and could at least deploy the image successfully again. After enabling SSH support, I logged in to the container via SSH and ran df -h.
So the function does not have the advertised 250GB of storage available at runtime; it only has about 34GB. I spent some time searching the documentation but could not find anything indicating that this should be the case. I did find this related SO question, which clarified things a bit. I also found this still-open issue on the Azure Functions GitHub repo. It seems more people are hitting the same issue and are not aware of the local storage limitation of the SKU. I might have overlooked something, so if this is in fact documented I'd be happy if someone could point me to it.
The reason I need some storage is that I need to fetch the raw data file, which can be anything from a handful of MBs to several GBs, and the workflow then produces multiple files varying between a few bytes and several GBs. The intention was, however, not to store this on the function instances but to complete the workflow and then store the resulting files in blob storage.
TL;DR
You do not get the advertised storage capacity on the local instance for functions running on an App Service plan; you get around 20/60/80GB depending on the SKU.
I need 10-30GB of local storage temporarily until the workflow has finished and the resulting files can be stored elsewhere.
How can I reduce the storage used on the local instance?
Finally, the actual question. You might have noticed in the screenshot from the df -h command that of the available 34GB, a whopping 25GB is already used, which leaves 7.6GB to work with. I already mentioned that my image is ~3.5GB in size. So how come a total of 25GB is used, and is there any chance at all of reducing this, aside from shrinking my image? That said, even if I removed my image completely (freeing 3.5GB of storage) it would still not quite be enough. Maybe the function simply needs over 20GB of storage to run?
Note: it is not a result of cached Docker layers or the like, since I have tried scaling the App Service plan, which clears the cached layers/images and re-downloads the image.
Moving up a tier gives me 60GB of total available storage on the instance, which is enough, but it feels like overkill when I don't need anything else that the higher tier offers.
Attempted solution 1
One thing I have tried, which might help others, is mounting a file share on the function instance. This can be done with very little effort, as shown in the MS docs. Great, now I could write directly to a file share, saving me some headache, and finally move on. Or so I thought. While this mostly worked great, it still threw an exception indicating that it ran out of space on the device at some point, leading me to believe that it may be using local storage as temporary storage, a buffer, or similar. I will continue looking into it and see if I can figure that part out.
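For completeness, this is roughly what I mean by writing the results straight to blob storage instead of staging them on the instance; it is only a sketch, with placeholder names and a stand-in for our actual pipeline:

    from azure.storage.blob import BlobClient

    def run_workflow(raw_data_url):
        # Hypothetical stand-in for our actual pipeline: yields chunks of the
        # produced result file as they are generated.
        yield b"..."

    blob = BlobClient.from_connection_string(
        "<connection-string>", container_name="results", blob_name="run-123/output.vcf")

    # upload_blob accepts any iterable of bytes, so chunks are uploaded as they
    # are produced and never have to fit into the instance's local storage.
    blob.upload_blob(data=run_workflow("<raw-data-url>"), overwrite=True)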
Any suggestions or alternative solutions will be greatly appreciated. I might just decide to move away from Azure Functions for this specific workflow. But I'd still like to clear things up for future reference.
Thanks in advance
niknoe

Azure will not let me swap to my new smaller OS disk

Good Morning, Fellow Stack Overflow-ers,
I have a Windows 2019 DC virtual machine with a 127GiB OS disk on MS Azure. The VM size is Standard B2s (2 vCPUs, 4 GiB memory).
I want to swap this for a smaller 8GiB OS disk, which I have successfully created in the portal and labelled useastOS, but Azure is failing to let me swap from the previous 127GiB disk to the smaller 8GiB disk. On the "Swap OS Disk" menu illustrated, you will see there is no option to use the useastOS disk.
Puzzling.
This is a managed disk, so there is no reason whatsoever why Azure should not give me the option.
So my question is: is there any valid reason why Azure is not allowing me to swap to the smaller useastOS disk, or is this a bug within Azure that I need to make Microsoft aware of?
When you create a managed disk like this, there is no OS installed; it is an empty disk. That's why Azure assumes it is a data disk, not an OS disk.
Now, when you upload your VHD to blob storage, you can tell Azure that the disk is an OS disk and not a data disk.
If you are looking for how to upload a VHD to an Azure blob, here is an example: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/prepare-for-upload-vhd-image.
As I understood it, your question is how to swap the OS disk for a new, smaller one. In case you just want to add a second disk as a data disk, you can go to the VM overview and add it easily from the Disks blade.
Anyway, I hope this helps in some way :)
Just in case, confirm that you selected an operating system when you created the useastOS disk. For example, in my case it is Windows, but the disk can be either Windows or Linux; when you don't select anything, Azure assumes it is a data disk, not an OS disk.
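If you have already created the disk without selecting an OS, you may be able to set it afterwards. A rough sketch with the azure-mgmt-compute Python SDK (subscription and resource group are placeholders, and I'm assuming the disk's state still allows the change):

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient
    from azure.mgmt.compute.models import DiskUpdate

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Mark the empty disk as a Windows OS disk so the portal treats it as an
    # OS disk rather than a data disk.
    poller = client.disks.begin_update(
        "<resource-group>", "useastOS", DiskUpdate(os_type="Windows"))
    print(poller.result().os_type)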

How to expand detached data disk in Azure RM

This link describes how to expand attached VM OS or data disks in an Azure resource group. I want to know how I can extend a detached data disk, so that I can perform this action without restarting the machine. Is that achievable?
https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-windows-expand-os-disk/
You should be able to use the Get-AzureDisk PowerShell command to obtain a reference to the unattached disk. From there you can call Update-AzureDisk to increase the size of the disk.
Two notes which you may already know: you can't shrink a disk once you increase its size (without recreating it from the underlying blob), and generally you should avoid putting data on an OS disk, so if you need more space, just add another data disk.
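If you prefer it over the PowerShell cmdlets, a rough sketch of the same operation with the azure-mgmt-compute Python SDK looks like this (names are placeholders; the disk must be detached, or the VM deallocated, and the size can only grow):

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient
    from azure.mgmt.compute.models import DiskUpdate

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    disk = client.disks.get("<resource-group>", "<disk-name>")
    print(f"current size: {disk.disk_size_gb} GB")

    # Managed disks can only be grown, never shrunk (see the note above).
    poller = client.disks.begin_update(
        "<resource-group>", "<disk-name>", DiskUpdate(disk_size_gb=256))
    print(f"new size: {poller.result().disk_size_gb} GB")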

How do I reclaim unused blob space for my Azure VHD

Maybe I don't fully understand how Azure charges for VHD storage.
When I started out, I had a 120gb VHD with only ~30gb used. I was only getting charged for roughly 1gb per day for Azure. As I filled up the hard drive, the daily usage grew as expected. I ended up using 100gb of the drive and was getting hit with roughly 3.6gb per day from Azure. That makes perfect sense to me.
The other day, I freed up a lot of space on the VHD and now I only use 30gb again, where the other 90gb is free space. However, it seems that I'm still getting charged for roughly 3.6gb per day.
Could someone help explain this to me? Do I need to do something to reclaim the free space? If so, how?
Thanks
It's now possible to manually reclaim unused space by executing the following PowerShell command (starting from Windows Server 2012 / R2):
Optimize-Volume -DriveLetter F -ReTrim
More information: Release unused space from your Windows Azure Virtual Hard Disks to reduce their billable size
Even though the files on the VHD may be deleted, you still pay for the space they once consumed. Check out this post by the Windows Azure storage team - http://blogs.msdn.com/b/windowsazurestorage/archive/2012/06/28/exploring-windows-azure-drives-disks-and-images.aspx.
In the "Storage Capacity" section -
"It is also important to note that when you delete files within the file system used by the VHD, most operating systems do not clear or zero these ranges, so you can still be paying capacity charges within a blob for the data that you deleted via a disk/drive."

Can I create a Windows Azure drive at the full 1TB size straight away?

The max size for a Windows Azure drive is 1TB.
Microsoft only charges for the data in it, not for the allocated size.
My question is: why not just create an Azure Drive at 1TB, so there are no more worries about resizing, etc.?
Or is there a catch if I create a drive bigger than I need?
I often do that when creating an Azure Drive: allocate the maximum drive size of 1TB. There is no discernible penalty. The only advantage to setting a smaller size is protecting yourself against cost overruns. It might possibly take longer to initialize a 1TB drive, but I haven't measured it.
I have not yet found a lot of use for Azure Drives given some of the limitations that they have and the other storage options that are available, so I have only done some playing with them, not actually used one in a production environment.
With that said, based on my understanding and the description you give in your question about only being charged for the amount of content stored on the drive, I do not see any issue with creating a large drive initially and growing into it in the future.
Hope that helps some, even if it is just a "yes, I think you understand it correctly!"
The reason is pretty simple if you tried it. Namely, while you are not charged except for the data inside the drive, it does count against your quota limit. So, if every drive was 1TB, then you could create only 99 drives (think overhead here) before your storage account quota was gone. Also, yes, it does take longer to create a 1TB drive versus a smaller one (in practice).
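To make the quota-versus-billing point concrete, here is a minimal sketch with the azure-storage-blob Python SDK: the backing page blob can be created at the full 1TB up front, and it incurs no capacity charges until pages are written, yet the whole 1TB counts against the storage account quota. Names and the connection string are placeholders, and this only creates the raw page blob; formatting it as an Azure Drive was done through the .NET CloudDrive API.

    from azure.storage.blob import BlobClient

    TB = 1024 ** 4
    blob = BlobClient.from_connection_string(
        "<connection-string>", container_name="drives", blob_name="bigdrive.vhd")

    # Creating the page blob only reserves the 1TB address range; no pages are
    # written, so nothing is billed for capacity yet, but the full 1TB is
    # deducted from the storage account quota.
    blob.create_page_blob(size=1 * TB)

    page_ranges, _ = blob.get_page_ranges()
    print(f"written page ranges: {len(page_ranges)}")   # 0 until data is written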