I've recently upgraded my EC2 server from an m1.small to an m1.medium (old EC2 instance types, I know) to get more storage, as I recently maxed it out.
When I look at the space available through Terminal, I can see extra space available on /dev/sda2.
Is there something I have done wrong when upgrading the server, or will the storage automatically balance between the two if I reach 100% on /dev/sda1?
When I run a check I get the following information back:
I've got 374GB on /dev/sda2 at 1% usage, but I'm unsure how the server accesses this memory if /dev/sda1 reaches 100%.
I'm a novice at server management, so apologies if I'm doing something wrong.
I think you are confusing disk space and memory.
On AWS, different instance types have different memory, CPU, and network performance, but storage space is unrelated: you can extend disk space on an EC2 machine without changing its instance type, by attaching a new disk. It isn't clear whether your question is about disk space or memory, and I don't see how a new disk would have appeared on your instance simply by upgrading it; probably it was there from instance creation.
Anyway, there isn't an "automatic balancing" of storage space - you have to manage your own files and move some files/folders to the new disk before the old one fills up. On Linux, you can leverage symbolic links to move large directories across disks without too much hassle, as in the sketch below.
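For example, here is a minimal sketch (the mount point and the directory being moved are hypothetical) of moving a large directory onto the second disk and leaving a symlink behind so existing paths keep working:

sudo mkdir -p /mnt/data                 # create a mount point for the second disk
sudo mount /dev/sda2 /mnt/data          # mount the second disk (if not already mounted)
sudo mv /var/www /mnt/data/www          # move the large directory onto it
sudo ln -s /mnt/data/www /var/www       # symlink from the old location to the new one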
You are using an m1.medium, which comes with local instance storage. Just treat it as "virtual physical storage" given to you. The /dev/sda2 space is NOT extendable into /dev/sda1.
This local storage is called the "instance store". Anything on it (here, /dev/sda2) is not permanent: you can REBOOT the instance and nothing is lost.
HOWEVER, if you STOP or SHUT DOWN the instance, everything on it is gone. Do not put important data there.
EBS volumes normally show up as /dev/xvd*, and those are extendable.
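For illustration, a minimal sketch with the AWS CLI (the volume ID, device name, and ext4 filesystem are assumptions) of growing an EBS volume and then the partition and filesystem on it:

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100   # grow the EBS volume to 100 GiB
sudo growpart /dev/xvda 1     # grow partition 1 to fill the volume (growpart is from cloud-utils)
sudo resize2fs /dev/xvda1     # grow the ext4 filesystem to fill the partition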
Please check out the "EC2 instance store" page in the AWS documentation.
Good Morning, Fellow Stack Overflow-ers,
I have a Windows 2019 DC virtual machine with a 127GiB OS disk on MS Azure. The VM image is Standard B2s (2 vcpus, 4 GiB memory).
I want to swap this for a smaller 8GiB OS disk. Having successfully created this in my portal and labelled it useastOS, Azure is failing to allow me to swap from the previous 127GiB disk to the smaller 8GiB disk. On the "Swap OS Disk" menu illustrated, you will see there is no option to use the useastOS disk.
Puzzling.
This is a managed disk, so there is no reason whatsoever why Azure should not be giving me the option.
So my question is: is there any valid reason why Azure is not allowing me to swap to the smaller useastOS disk, or is this a bug within Azure that I need to make Azure aware of?
When you create a managed disk like this, there is no OS installed on it; it is an empty disk. That's why Azure assumes it is a data disk, not an OS disk.
However, when you upload your own VHD to blob storage, you can tell Azure that the disk is an OS disk and not a data disk.
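As a rough sketch of that step with the Azure CLI (the resource group, disk name, and VHD URL are hypothetical), the --os-type flag is what marks the disk as an OS disk rather than a data disk:

az disk create \
  --resource-group myResourceGroup \
  --name useastOS \
  --source https://mystorageaccount.blob.core.windows.net/vhds/useastOS.vhd \
  --os-type Windows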
If you are looking for how to upload a VHD to Azure blob storage, here is an example: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/prepare-for-upload-vhd-image.
Your question, as I understood it, is how to swap the OS disk for a smaller one. In case you just want to add a second disk as a data disk, you can do that easily from the Disks blade in the VM overview.
Anyway, I hope I could help in some way :)
Just in case, confirm that you selected an operating system when you created the useastOS disk. For example, in my case it is Windows, but a disk can be either Windows or Linux; when you don't select anything, Azure assumes it is a data disk, not an operating system disk.
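A quick way to verify this (a minimal sketch; the resource group name is hypothetical) is to query the disk's osType property:

az disk show --resource-group myResourceGroup --name useastOS --query osType -o tsv
# prints "Windows" or "Linux"; an empty result means Azure treats it as a data disk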
Two or three times nearly every week, my Azure VM stops responding. When it happens, Disk Read Bytes increase dramatically, and I don't know why. My application has no disk-read function (it only writes logs).
As you can see in the figure, Read Bytes and Disk Operations spike for a moment, which causes the server to freeze. I have to restart the VM, and it takes nearly 15 minutes to become available again.
(Image: Azure Ubuntu VM stats)
I am using an Azure Ubuntu 16.04 VM, size Standard B1s (1 vcpu, 1 GiB memory).
I have already checked these conditions:
possible memory leaks in my application
checked all of the application's source code for disk-read functions (my application performs no disk-read operations)
changed Ubuntu's default virtual RAM (swappiness) setting from 60% to 1%
I run my application with Docker, and only one instance of the Docker image runs on the VM
I removed the entire VM and created it again (including all resources, i.e. disk, network, etc.)
I want to know why this is happening and how I can investigate what is causing the problem. I haven't seen any suspicious processes running on Ubuntu when it happens.
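Update: one way to check whether the kernel's OOM killer is involved (a minimal sketch; log locations vary by distro):

dmesg | grep -i -E "out of memory|oom"   # kernel ring buffer
grep -i "oom" /var/log/syslog            # persisted kernel messages on Ubuntu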
This issue was solved by Azure Technical Support Engineers (thanks to Micah_MSFT).
It seems the Azure VM was running out of memory. When that happens, the VM becomes unstable, so Disk Operations increase rapidly and the VM freezes (they are investigating the issue's details and have reported them to the Linux technicians).
As clarified by the Azure Technical Support Engineer, there is a feature named swap space which lets the server use reserved storage space as processing memory.
Here is the configuration.
Open the file /etc/waagent.conf, find these variables, and modify them accordingly:
# Format if unformatted. If 'n', resource disk will not be mounted.
ResourceDisk.Format=n /// change this parameter to "y"
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=n /// change this parameter to "y"
# Size of the swapfile.
ResourceDisk.SwapSizeMB=0 /// change this parameter to "1024"
Restart the service to apply the changes
sudo systemctl restart walinuxagent.service
That's it
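After the restart, you can confirm the swap file is active (a quick check; exact output varies):

free -m          # the Swap row should now show roughly 1024 MB total
swapon --show    # lists the swap file created on the resource disk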
In my opinion, the unnecessary increase in disk usage that results from hitting the out-of-memory limit will degrade an Azure VM's performance.
Ideally, I'm looking for an article that explicitly says which disk(s) are SSD drives. At the moment I can see 3 disks: C, D, and E. I used CrystalDiskMark to test the speed of each disk, and it turns out that D is by far the fastest, so I'm assuming it is the SSD disk. The thing that does not make sense is that it is only 32GB on a D12 VM, when it should be 200GB.
Based on this post: https://azure.microsoft.com/en-in/blog/new-d-series-virtual-machine-sizes/, it is D:\. From this link:
On these new sizes, the temporary drive (D:\ on Windows, /mnt or /mnt/resource on Linux) are local SSDs. This high-speed local disk is best used for workloads that replicate across multiple instances, like MongoDB, or can leverage this high I/O disk for a local and temporary cache, like SQL Server 2014's Buffer Pool Extensions. Note, these drives are not guaranteed to be persistent. Thus, while physical hardware failure is rare, when it occurs, the data on this disk may be lost, unlike your OS disk and any attached durable disks that are persisted in Azure Storage.
I have SQL Server running on a Large Azure Virtual Machine.
I have an attached disk where all the database MDFs, logging, backup and restore files are kept. Nothing is kept on the C:.
I just logged in now and noticed that there is only 58MB left on the C: drive! Is it possible to increase this disk space? Is there something I can delete from the C: drive?
You cannot increase your OS disk size. I'm not sure when you created your virtual machine; I know that a long time ago the size was set at (I believe) 30GB, and now OS disks should be around 127GB. You may want to check what size the OS disk is.
I'm not sure what chewed up your OS disk space, except maybe some type of temporary OS storage.
I have an existing Azure CloudDrive that I want to make bigger. The simplest way I can think of is to create a new drive and copy everything over. I cannot see any way to just increase the size of the VHD. Is there a way?
Since an Azure drive is essentially a page blob, you can resize it. You'll find this blog post by the Windows Azure Storage team useful regarding that: http://blogs.msdn.com/b/windowsazurestorage/archive/2010/04/11/using-windows-azure-page-blobs-and-how-to-efficiently-upload-and-download-page-blobs.aspx. Please read the section titled "Advanced Functionality – Clearing Pages and Changing Page Blob Size" for sample code.
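For illustration, here is a minimal sketch of the underlying REST call (the storage account, container, blob name, and SAS token are hypothetical): resizing a page blob is a Set Blob Properties operation with an x-ms-blob-content-length header, and the new size must be a multiple of 512 bytes:

curl -X PUT \
  -H "x-ms-version: 2015-02-21" \
  -H "x-ms-blob-content-length: 137438953472" \
  -H "Content-Length: 0" \
  "https://myaccount.blob.core.windows.net/vhds/mydrive.vhd?comp=properties&<sas-token>"
# 137438953472 bytes = 128 GiB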
Yes, you can. I know this program; it is very easy to use. You can connect to your VHD, create a new one, upload a VHD and connect it with Azure, and upload/download files into the VHD: http://azuredriveexplorer.codeplex.com/
I have found these methods so far:
“the soft way”: increase the size of the page blob and fix the VHD data structure (the last 512 bytes). Theoretically this creates unpartitioned disk space after the current partition. But if the partition table also expects metadata at the end of the disk (GPT? or dynamic disks), that should be fixed as well. I'm aware of only one tool that can do this in-place modification. Unfortunately that tool is not much more than a one-weekend hack (at the time of this writing) and thus it is fragile. (See the disclaimer of the author.) But it is fast. Please notify me (or edit this post) if this tool gets improved significantly.
create a larger disk and copy everything over, as you've suggested. This may be enough if you don't need to preserve NTFS features like junctions and soft/hard links.
plan for the potential expansion and start with a huge (say 1TB) dynamic VHD, comprised of a small partition and lots of unpartitioned (reserved) space. Windows Disk Manager will see the unpartitioned space in the VHD, and can expand the partition into it whenever you want; this is an in-place operation. The subtle point is that the unpartitioned area, as long as it stays unpartitioned, won't be billed, because it isn't written to. (Note that either formatting or defragmenting does allocate the area and cause billing.) However, it will count against the quota of your Azure subscription (100TB).
“the hard way”: download the VHD file, use a VHD-resizer program to insert unpartitioned disk space, mount the VHD locally, extend the partition into the unpartitioned space, unmount, and upload. This preserves everything, and even works for an OS partition, but it is very slow due to the download/upload and the software installations involved.
same as above, but performed on a secondary VM in Azure. This speeds up downloading/uploading a lot. Step-by-step instructions are available here.
Unfortunately, all these techniques require unmounting the drive for quite a long time, i.e. they cannot be performed in a highly available manner.