This link describes how to expand attached VM OS or data disks in an Azure resource group. I want to know how I can extend a detached data disk, so that I could perform this action without restarting the machine. Is that achievable?
https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-windows-expand-os-disk/
You should be able to use the Get-AzureDisk PowerShell command to obtain a reference to the unattached disk. From there you can call Update-AzureDisk to increase the size of the disk.
Two notes which you may already know: you can't shrink a disk (without recreating it from the underlying blob) once you increase its size, and generally you should avoid putting data on an OS disk, so if you need more space, just add another data disk.
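A minimal sketch of that approach with the classic (Service Management) cmdlets named above; the disk name and target size are hypothetical, and -ResizedSizeInGB assumes a module version that supports resizing:

    # Hypothetical disk name; the disk must be unattached.
    $disk = Get-AzureDisk -DiskName "my-detached-data-disk"
    # Grow the disk to 256 GB; shrinking is not supported.
    Update-AzureDisk -DiskName $disk.DiskName -Label $disk.Label -ResizedSizeInGB 256

With the newer Az module, the equivalent flow is Get-AzDisk, New-AzDiskUpdateConfig -DiskSizeGB, and Update-AzDisk.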
Good Morning, Fellow Stack Overflow-ers,
I have a Windows 2019 DC virtual machine with a 127 GiB OS disk on MS Azure. The VM size is Standard B2s (2 vcpus, 4 GiB memory).
I want to swap this with a smaller 8 GiB OS disk, which I have successfully created in my portal and labelled useastOS, but Azure is failing to allow me to swap from the previous 127 GiB disk to the smaller 8 GiB disk. On the "Swap OS Disk" menu illustrated, you will see there is no option to use the useastOS disk.
Puzzling.
This is a managed disk and so there is no reason whatsoever as to why Azure is not giving me the option.
So my question is: is there any valid reason why Azure is not allowing me to swap to the smaller useastOS, or is this a bug within Azure that I need to make Azure aware of?
When you create a managed disk like this, there is no OS installed; it is an empty disk. That's why Azure assumes it is a data disk, not an OS disk.
However, when you upload your VHD to blob storage, you can tell Azure that this disk is an OS disk and not a data disk.
If you are looking for how to upload a VHD to an Azure blob, here is an example: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/prepare-for-upload-vhd-image.
Your question, as I understood it, is how to swap the OS disk for a new, smaller one. In case you just want to add a second disk as a data disk, you can go to the VM overview and add it easily from the Disks blade.
Anyway, I hope that I could help in any way :)
Just in case, confirm that you selected an operating system when you created the disk useastOS. For example, in my case it is Windows, but a disk can be either Windows or Linux; when you don't select anything, Azure assumes it is a data disk, not an operating-system disk.
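For completeness, a hedged sketch of that flow with the Az module; the resource names, URLs, and paths are all hypothetical:

    # Upload the prepared VHD to a storage account.
    Add-AzVhd -ResourceGroupName "my-rg" `
        -Destination "https://mystorage.blob.core.windows.net/vhds/useastOS.vhd" `
        -LocalFilePath "C:\vhds\useastOS.vhd"
    # Create the managed disk from the VHD, explicitly marking it as a Windows OS disk.
    $cfg = New-AzDiskConfig -Location "eastus" -CreateOption Import -OsType Windows `
        -SourceUri "https://mystorage.blob.core.windows.net/vhds/useastOS.vhd"
    New-AzDisk -ResourceGroupName "my-rg" -DiskName "useastOS" -Disk $cfg

Without -OsType, the resulting managed disk is treated as a data disk and won't show up as a swap candidate.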
My Linux VMs each have two disks (OS + data).
The data disks are currently set to 1024 GB but contain less than 15 GB of content.
I have two environments (test and production). The production data disk is premium; the test data disk is standard.
I want to reduce the size of the production data disk because, as I discovered, premium disks are charged on their full provisioned size, not just the amount being used, as standard disks are.
So before doing this in production, I wanted to try it in test. I stop the VM and then try to change the size of the disk through the Azure portal, but I get an error stating that the new size must be greater than the current one; it won't let me reduce the size.
Is that a constraint of premium disks as well? Is it a constraint of the Azure portal, or can I run CLI/PowerShell commands that can do this? Or am I forced to create a new disk, copy the data over, and then remove the old disk?
You can't reduce the size of a disk, so you have to attach another disk and copy the content over using robocopy or another method.
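For the copy step, a minimal sketch using robocopy (drive letters and the log path are hypothetical; E: is the old disk, F: the new, smaller one; run from an elevated prompt so /COPYALL can preserve security info):

    robocopy E:\ F:\ /MIR /COPYALL /R:1 /W:1 /LOG:C:\Temp\disk-copy.log

After the copy, detach the old disk and swap drive letters so applications see the same paths.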
I've recently upgraded my EC2 server from an m1.small to an m1.medium (old EC2 instance types, I know) so that I have more storage, as I recently maxed it out.
When I look at the space available through the terminal, I have extra space available on /dev/sda2.
Is there something I have done wrong when upgrading the server, or will the storage automatically balance between the two if I reach 100% on /dev/sda1?
When I run a check I get the following information back:
I've got 374 GB on /dev/sda2, of which only 1% is used, but I'm unsure how servers access this memory if /dev/sda1 reaches 100%.
I'm a novice at server management so apologies if I'm doing something wrong.
I think you are confusing disk space and memory.
On AWS, different instance types have different memory, CPU, and network performance, but storage space is unrelated: you can extend disk space on an EC2 machine without changing its instance type, by attaching a new disk. I don't understand whether your question is about disk space or memory, and I don't understand how a new disk appeared on your instance by simply upgrading it; probably it was there from instance creation.
Anyway, there isn't any "automatic balancing" of storage space; you have to manage your own files and move some files/folders to the new disk before the old one gets filled. Working on Linux, you can leverage symbolic links to move large directories across disks without too much hassle, as sketched below.
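As a hedged illustration of that symlink trick (paths are hypothetical; /mnt/newdisk stands in for the extra volume), shown here with PowerShell Core, which also runs on Linux; plain mv and ln -s achieve the same:

    # Move a large directory to the new disk, then leave a symlink behind
    # so existing paths keep working.
    Move-Item /var/www /mnt/newdisk/www
    New-Item -ItemType SymbolicLink -Path /var/www -Target /mnt/newdisk/www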
You are using an m1.medium, which is given to you with SSD instance storage, so you should just treat it as "virtual physical storage" given to you. The /dev/sda2 space is NOT extendable into /dev/sda1.
The SSD storage given is called an "instance store". Anything inside /dev/sda is not permanent. You can REBOOT the instance and nothing is lost.
HOWEVER, if you STOP or SHUT DOWN the instance, everything is gone. Do not put important data in there.
EBS volumes normally show up as /dev/xvd*, and those are extendable.
Please check out the EC2 instance store documentation.
Let's say I have an Azure IaaS virtual machine with a mounted data disk (e.g. E: drive). Then I copy 1000 files of varying sizes onto it. As soon as Windows says the copy is complete I take a snapshot of the mounted data disk.
Here's the problem: if I mount that snapshot, some files are missing and others are corrupt.
However, if I wait a while after the copy is complete and then take a snapshot, all of the data is there.
This tells me there is some behind-the-scenes caching being done, which is expected. Is there any documentation available that discusses how Azure caches data before it's actually flushed to blobs?
It depends on the cache settings of your disks when you created them.
If you mounted the drive in Azure with the cache setting "ReadWrite", then writes are cached, and you should wait or turn off the VM to be safe.
You can play with IaaS Management Studio (a 30-day evaluation); this tool, among other things, prevents you from taking a snapshot when it detects that your blob is a disk with its cache setting set to write. (I'm the dev behind this tool :))
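If host caching turns out to be the culprit, a hedged sketch with the current Az module (resource and disk names are hypothetical) that switches a data disk's caching to None so writes go straight to storage before a snapshot:

    $vm = Get-AzVM -ResourceGroupName "my-rg" -Name "my-vm"
    # Disable host caching on the data disk named "data1".
    Set-AzVMDataDisk -VM $vm -Name "data1" -Caching None
    Update-AzVM -ResourceGroupName "my-rg" -VM $vm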
I have an existing Azure CloudDrive that I want to make bigger. The simplest way I can think of is creating a new drive and copying everything over. I cannot see any way to just increase the size of the VHD. Is there a way?
Since an Azure drive is essentially a page blob, you can resize it. You'll find this blog post by Windows Azure Storage team useful regarding that: http://blogs.msdn.com/b/windowsazurestorage/archive/2010/04/11/using-windows-azure-page-blobs-and-how-to-efficiently-upload-and-download-page-blobs.aspx. Please read the section titled "Advanced Functionality – Clearing Pages and Changing Page Blob Size" for sample code.
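As a hedged sketch of that resize (assuming the Az.Storage module, which surfaces the track-1 ICloudBlob object; the account, container, and blob names are hypothetical):

    $ctx  = New-AzStorageContext -StorageAccountName "mystorage" -UseConnectedAccount
    $blob = Get-AzStorageBlob -Container "vhds" -Blob "mydrive.vhd" -Context $ctx
    # A page blob can be grown in place; sizes must be a multiple of 512 bytes.
    ([Microsoft.Azure.Storage.Blob.CloudPageBlob]$blob.ICloudBlob).Resize(64GB)

Note that growing the page blob alone is not enough for a VHD: the 512-byte VHD footer at the end of the blob must be moved and updated as well, as the next answer describes.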
Yes you can. I know this program; it is very easy to use. You can connect to your VHD, create a new one, upload a VHD and connect it with Azure, and upload and download files into the VHD: http://azuredriveexplorer.codeplex.com/
I have found these methods so far:
- “The soft way”: increase the size of the page blob and fix the VHD data structure (the last 512 bytes). Theoretically this creates unpartitioned disk space after the current partition. But if the partition table also expects metadata at the end of the disk (GPT, or dynamic disks), that should be fixed as well. I'm aware of only one tool that can do this in-place modification. Unfortunately that tool is not much more than a one-weekend hack (at the time of this writing) and thus it is fragile (see the disclaimer of the author), but it is fast. Please notify me (or edit this post) if this tool gets improved significantly.
- Create a larger disk and copy everything over, as you've suggested. This may be enough if you don't need to preserve NTFS features like junctions and soft/hard links.
- Plan for the potential expansion and start with a huge (say 1 TB) dynamic VHD, comprised of a small partition and lots of unpartitioned (reserved) space. Windows Disk Manager will see the unpartitioned space in the VHD and can expand the partition into it whenever you want, as an in-place operation. The subtle point is that the unpartitioned area, as long as it stays unpartitioned, won't be billed, because it isn't written to. (Note that either formatting or defragmenting does allocate the area and causes billing.) However, it will count against the quota of your Azure subscription (100 TB).
- “The hard way”: download the VHD file, use a VHD-resizer program to insert unpartitioned disk space, mount the VHD locally, extend the partition into the unpartitioned space, unmount, and upload. This preserves everything and even works for an OS partition, but it is very slow due to the download/upload and the software installations involved. (The partition-extension step is sketched below.)
- The same as above, but performed on a secondary VM in Azure. This speeds up the downloading/uploading a lot. Step-by-step instructions are available here.
Unfortunately, all these techniques require unmounting the drive for quite a long time, i.e. they cannot be performed in a highly available manner.
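For the "mount and extend the partition" step in the last two methods, a hedged sketch (assuming Windows 8/Server 2012 or later with the Hyper-V and Storage PowerShell modules; the path and drive letter are hypothetical):

    # Attach the already-resized VHD.
    Mount-VHD -Path "C:\vhds\mydrive.vhd"
    # Grow the data partition into the unpartitioned space.
    $max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
    Resize-Partition -DriveLetter E -Size $max
    Dismount-VHD -Path "C:\vhds\mydrive.vhd"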