Installing a small-footprint Windows Server 2008 on Azure

I want to install Windows Server 2008 (w2k8) on Windows Azure on a 25 GB disk.
If I choose an image from the VM gallery, it installs on a 127 GB disk (five times what I need).
So I guess I must install locally, run sysprep and upload the VHD to Azure (I have found details on how to do this from my googling). But I do not own w2k8. My questions are:
Can I use a 180-day trial of w2k8 to build my setup on a 25 GB VHD?
Will it keep working after the 180-day period, given that it is running on Azure?
Is this process "licence-compliant"?
Is there another way to obtain what I want?
(edit) I am offering a bounty because I only have partial answers to my four questions. Thanks

Maarten Balliauw wrote a nice blog post on how to resize a VHD on Windows Azure. It is focused on extending a virtual disk, not on shrinking one, but it might also work (I only tested the extend, not the shrink).
This approach would at least save you hours of sysprepping and uploading, so I would say it's worth a try: create a new virtual machine from the gallery, shut it down, delete the VM and the disk (but not the VHD), and then apply the resize.
The blog post is here: http://blog.maartenballiauw.be/post/2013/01/07/Tales-from-the-trenches-resizing-a-Windows-Azure-virtual-disk-the-smooth-way.aspx
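If you want to script the shut-down/delete steps described above, a minimal sketch with the classic (Service Management) Azure PowerShell cmdlets of that era could look like this; the cloud service, VM and disk names are placeholders you would replace with your own:
# Sketch only: classic Service Management cmdlets; all names are placeholders.
# Stop the VM that was created from the gallery image
Stop-AzureVM -ServiceName "MyService" -Name "MyVM" -Force
# Remove the VM itself (this does not delete the disk registration or the VHD blob)
Remove-AzureVM -ServiceName "MyService" -Name "MyVM"
# Remove the disk registration but keep the underlying .vhd page blob,
# so it is still available for resizing (do NOT pass -DeleteVHD)
Remove-AzureDisk -DiskName "MyVM-disk"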
Hope it works

Why do you want a smaller disk? You only pay for the amount of storage you are actually using, so for the 127 GB gallery images you are only paying storage costs for the amount of data you have actually written to the disk.

Related

Azure Ultra Disks: How to Downgrade Ultra disk to Premium SSD?

We have tempdb and log files on two separate Ultra disks on an Azure VM. Since usage is not that high, we would like to downgrade them to Premium SSDs.
Is it possible to do that, or do we have to attach new SSDs, move the files to them and then delete the Ultra disks? I am looking around and can't find anything about it.
Thanks.
I could not find anything about it on the Microsoft site.
I tried to downgrade a disk on a VM but could not find an option to do it.
Searched the web, no luck.
As mentioned in this documentation, changing the disk type is not supported for Ultra disks.
You need to create an empty disk, attach both disks to the same VM and copy the data from one disk to the other, or leverage a third-party solution for data migration.
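For the "create an empty disk and attach it" route, a rough Az PowerShell sketch is below; the resource group, VM and disk names, region, size and LUN are placeholders to adapt to your environment:
# Sketch only: Az PowerShell module; names, location, size and LUN are placeholders.
# Create an empty Premium SSD managed disk
$diskConfig = New-AzDiskConfig -Location "westeurope" -CreateOption Empty -DiskSizeGB 512 -SkuName Premium_LRS
$newDisk = New-AzDisk -ResourceGroupName "MyRG" -DiskName "tempdb-premium" -Disk $diskConfig
# Attach it to the VM next to the existing Ultra disk
$vm = Get-AzVM -ResourceGroupName "MyRG" -Name "MySqlVM"
$vm = Add-AzVMDataDisk -VM $vm -Name "tempdb-premium" -ManagedDiskId $newDisk.Id -Lun 3 -CreateOption Attach
Update-AzVM -ResourceGroupName "MyRG" -VM $vm
# Then copy the tempdb/log files inside the guest OS, repoint SQL Server, and detach/delete the Ultra disks.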

How to reduce used storage in an Azure Function for a container

A little bit of background first. Maybe, hopefully, this can save someone else some trouble and frustration. Skip down to the TL;DR to move on to the actual question.
We currently have a couple of genetics workflows related to gene sequencing running in Azure Batch. Some of them are quite light, and I'd like to move those to an Azure Function running a Docker container. For this purpose I have created a Docker image, based on the azure-functions image, containing Anaconda with the necessary packages to run our most common, lighter workflows. My initial attempt produced a huge image of ~8 GB. Moving to Miniconda and a couple of other adjustments reduced the image size to just shy of 3.5 GB. Still quite large, but it should be manageable.
For this function I created an Azure Function running on an App Service plan on the P1V2 tier, in the belief that I would have 250 GB of storage to work with, as stated in the tier description.
I encountered some issues with loading my first image (the large one): after a couple of fixes, the log indicated that there was no more space left on the device. This puzzled me, since the quota stated that I'd used some 1.5 MB of the 250 GB total. At this point I reduced the image size and could at least successfully deploy the image again. After enabling SSH support, I logged in to the container via SSH and ran df -h.
So the function does not have the advertised 250 GB of storage available at runtime; it only has about 34 GB. I spent some time searching the documentation but could not find anything indicating that this should be the case. I did find this related SO question, which clarified things a bit. I also found this still-open issue on the Azure Functions GitHub repo. It seems that more people are having the same issue and are not aware of the local storage limitation of the SKU. I might have overlooked something, so if this is in fact documented I'd be happy if someone could direct me there.
Now, the reason I need some storage is that I need to fetch the raw data file, which can be anything from a handful of MBs to several GBs. The workflow then subsequently produces multiple files varying between a few bytes and several GBs. The intention was, however, not to store these on the function instances but to complete the workflow and then store the resulting files in blob storage.
TL;DR
You do not get the advertised storage capacity on the local instance for functions running on an App Service plan. You get around 20/60/80 GB depending on the SKU.
I need 10-30 GB of local storage temporarily, until the workflow has finished and the resulting files can be stored elsewhere.
How can I reduce the storage already used on the local instance?
Finally, the actual question. You might have noticed from the screenshot of the df -h command that of the available 34 GB, a whopping 25 GB is already used, which leaves 7.6 GB to work with. I already mentioned that my image is ~3.5 GB in size. So how come a total of 25 GB is used, and is there any chance at all to reduce this, aside from shrinking my image? That being said, even if I removed my image completely (freeing 3.5 GB of storage) it would still not be quite enough. Maybe the function simply needs over 20 GB worth of stuff to run?
Note: it is not a result of cached Docker layers or the like, since I have tried scaling the App Service plan, which clears the cached layers/images and re-downloads the image.
Moving up a tier gives me 60 GB of total available storage on the instance, which is enough. But it feels like overkill when I don't need the rest of what that tier offers.
Attempted solution 1
One thing I have tried, which might be of help to others, is mounting a file share on the function instance. This can be done with very little effort, as shown in the MS docs. Great, now I could write directly to a file share, saving me some headache, and finally move on. Or so I thought. While this mostly worked great, it still threw an exception indicating that it ran out of space on the device at some point, leading me to believe that it may be using local storage as temporary storage, a buffer, or whatever. I will continue looking into it and see if I can figure that part out.
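For anyone wanting to script that mount, a sketch with the Az PowerShell module is below, assuming an existing storage account and file share; the resource group, app, account and share names and the mount path are all placeholders:
# Sketch only: mount an Azure Files share into the (Linux container) function app.
# All names and the mount path are placeholders.
$key = (Get-AzStorageAccountKey -ResourceGroupName "MyRG" -Name "mystorageacct")[0].Value
$mount = New-AzWebAppAzureStoragePath -Name "workdata" -Type AzureFiles -AccountName "mystorageacct" -ShareName "workdata" -AccessKey $key -MountPath "/mnt/workdata"
Set-AzWebApp -ResourceGroupName "MyRG" -Name "my-func-app" -AzureStoragePath $mount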
Any suggestions or alternative solutions will be greatly appreciated. I might just decide to move away from Azure Functions for this specific workflow. But I'd still like to clear things up for future reference.
Thanks in advance
niknoe

How do I reclaim unused blob space for my Azure VHD

Maybe I don't fully understand how Azure charges for VHD storage.
When I started out, I had a 120 GB VHD with only ~30 GB used. I was only getting charged for roughly 1 GB per day by Azure. As I filled up the hard drive, the daily usage grew as expected. I ended up using 100 GB of the drive and was getting hit with roughly 3.6 GB per day from Azure. That makes perfect sense to me.
The other day, I freed up a lot of space on the VHD, and now I only use 30 GB again, with the other 90 GB being free space. However, it seems that I'm still getting charged for roughly 3.6 GB per day.
Could someone help explain this to me? Do I need to do something to reclaim the free space? If so, how?
Thanks
It's now possible to manually reclaim unused space by executing the following PowerShell command (starting from Windows Server 2012 / R2):
Optimize-Volume -DriveLetter F -ReTrim
More information: Release unused space from your Windows Azure Virtual Hard Disks to reduce their billable size
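If you want to retrim every fixed volume on the VM in one go, here is a small sketch using the standard Storage-module cmdlets (available on Server 2012 and later):
# Retrim all fixed volumes that have a drive letter, so the freed ranges can be
# released from the underlying page blobs.
Get-Volume |
    Where-Object { $_.DriveType -eq 'Fixed' -and $_.DriveLetter } |
    ForEach-Object { Optimize-Volume -DriveLetter $_.DriveLetter -ReTrim -Verbose }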
Even though the files on the VHD may be deleted, you still pay for the space they once consumed. Check out this post by the Windows Azure storage team - http://blogs.msdn.com/b/windowsazurestorage/archive/2012/06/28/exploring-windows-azure-drives-disks-and-images.aspx.
In the "Storage Capacity" section -
"It is also important to note that when you delete files within the file system used by the VHD, most operating systems do not clear or zero these ranges, so you can still be paying capacity charges within a blob for the data that you deleted via a disk/drive."

Can I create a Windows Azure Drive at 1 TB size straight away

The max size for a Windows Azure Drive is 1 TB.
Microsoft only charges for the data in it, not for the allocated size.
My question is: why not just create the Azure Drive at 1 TB, so there are no more worries about resizing, etc.?
Or is there a catch if I create a drive bigger than I need?
I often do that when creating an Azure Drive: allocating a maximum-size drive of 1 TB. No discernible penalty. The only advantage to setting a smaller size is protecting yourself against cost overruns. There might be a possibility that it takes longer to initialize a 1 TB drive, but I haven't measured it.
I have not yet found a lot of use for Azure Drives given some of the limitations that they have and the other storage options that are available, so I have only done some playing with them, not actually used one in a production environment.
With that said, based on my understanding and the description you give in your question about only being charged for the amount of content stored on the drive, I do not see any issue with creating a large drive initially and growing into it in the future.
Hope that helps some, even if it is just a "yes, I think you understand it correctly!"
The reason is pretty simple once you have tried it. While you are not charged for anything except the data inside the drive, the allocated size does count against your quota limit. So if every drive were 1 TB, you could create only 99 drives (think overhead here) before your storage account quota was gone. Also, yes, in practice it does take longer to create a 1 TB drive than a smaller one.

How do I increase the size of an Azure CloudDrive?

I have an existing Azure CloudDrive that I want to make bigger. The simplest way I can think of is creating a new drive and copying everything over. I cannot see any way to just increase the size of the VHD. Is there a way?
Since an Azure drive is essentially a page blob, you can resize it. You'll find this blog post by Windows Azure Storage team useful regarding that: http://blogs.msdn.com/b/windowsazurestorage/archive/2010/04/11/using-windows-azure-page-blobs-and-how-to-efficiently-upload-and-download-page-blobs.aspx. Please read the section titled "Advanced Functionality – Clearing Pages and Changing Page Blob Size" for sample code.
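As a rough illustration of what that blog post covers, the page blob behind the drive can also be resized from PowerShell via the underlying storage client object. This is only a sketch: the account, key, container and blob names are placeholders, and the ICloudBlob property may not be exposed in the newest Az.Storage versions. Also note that growing the blob alone does not give you a bigger usable VHD; the 512-byte VHD footer at the end of the blob and the partition still have to be fixed, as a later answer here explains:
# Sketch only: resize the page blob that backs the VHD; names and key are placeholders.
# This only changes the blob size; the VHD footer and the partition must still be fixed.
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey $key
$blob = Get-AzStorageBlob -Container "vhds" -Blob "mydrive.vhd" -Context $ctx
$blob.ICloudBlob.Resize(64GB)   # new size in bytes; must be a multiple of 512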
Yes, you can.
I know this program; it is very easy to use. You can connect to your VHD, create a new one, upload a VHD and connect it with Azure, and upload or download files into the VHD: http://azuredriveexplorer.codeplex.com/
I have found these methods so far:
1. "The soft way": increase the size of the page blob and fix the VHD data structure (the last 512 bytes). Theoretically this creates unpartitioned disk space after the current partition. But if the partition table also expects metadata at the end of the disk (GPT, or dynamic disks), that has to be fixed as well. I'm aware of only one tool that can do this in-place modification. Unfortunately, this tool is not much more than a one-weekend hack (at the time of this writing) and thus it is fragile (see the author's disclaimer). But it is fast. Please notify me (or edit this post) if this tool gets improved significantly.
2. Create a larger disk and copy everything over, as you've suggested. This may be enough if you don't need to preserve NTFS features like junctions, soft/hard links, etc.
3. Plan for the potential expansion and start with a huge (say 1 TB) dynamic VHD, comprised of a small partition and lots of unpartitioned (reserved) space. Windows Disk Manager will see the unpartitioned space in the VHD and can expand the partition into it whenever you want, as an in-place operation (see the PowerShell sketch at the end of this answer). The subtle point is that the unpartitioned area, as long as it stays unpartitioned, won't be billed, because it isn't written to. (Note that either formatting or defragmenting does allocate the area and causes billing.) However, it will count against the quota of your Azure subscription (100 TB).
4. "The hard way": download the VHD file, use a VHD-resizer program to insert unpartitioned disk space, mount the VHD locally, extend the partition into the unpartitioned space, unmount and upload. This preserves everything and even works for an OS partition, but it is very slow due to the download/upload and the software installations involved.
5. Same as above, but performed on a secondary VM in Azure. This speeds up downloading/uploading a lot. Step-by-step instructions are available here.
Unfortunately, all these techniques require unmounting the drive for quite a lot of time, i.e. they cannot be performed in a highly available manner.
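For the partition expansion in option 3, the Disk Management GUI is not strictly required; on a guest OS that has the Storage module (Server 2012 or later) the same in-place grow can be scripted (the drive letter is a placeholder):
# Sketch only: grow the existing partition into the unpartitioned space of the VHD.
# "F" is a placeholder for the drive letter of the mounted drive.
$max = (Get-PartitionSupportedSize -DriveLetter F).SizeMax
Resize-Partition -DriveLetter F -Size $max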
