Managed disks in Azure

I want to know the restrictions on Azure Managed Disks; this was asked in an interview.
What are the restrictions for customers using managed disks in Azure?

Azure Managed Disks are managed by Microsoft Azure, so you don't need a storage account when creating a new disk. Because the underlying storage account is managed by Azure, you don't have full control over the disks that are created; that is one limitation of managed disks. There are no other specific restrictions as such. They are widely used in Azure and are highly recommended.
Adding more:
Managed disks brought a new experience and new features to Azure VM storage that permit better control of VM storage. Azure recommends using managed disks, even though the ‘pay as you consume’ model is not adopted there; the features and the simplicity are worth the ‘little’ difference in pricing.
There is also a nice practical summary of the differences between managed and unmanaged disks here that might come in handy, as it will help you understand what is useful for various configuration needs.
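As a minimal sketch of how little setup this takes, the Az PowerShell module can create a standalone managed disk with no storage account anywhere in the picture (the resource group, region, and disk name below are placeholders):

    # Create an empty 128 GB managed disk - no storage account required.
    $diskConfig = New-AzDiskConfig -Location "eastus" -CreateOption Empty `
        -DiskSizeGB 128 -SkuName Premium_LRS
    New-AzDisk -ResourceGroupName "myRG" -DiskName "myDataDisk" -Disk $diskConfig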
Happy Learning!


No Storage account for VM Disks in Azure

According to the Microsoft documentation, there are managed disks and unmanaged disks. Managed disks are managed by Microsoft. All VHDs are stored as page blobs in a storage account.
Question: When I create a Linux VM with an additional disk, I don't see a storage account in the resources list. No storage account is created for either the OS disk or the data disk when VMs are created. I would really appreciate it if someone could answer this.
Are storage accounts and VM disks two separate things?
There's a Storage Account and Page Blob behind the scenes but they are kind of hidden from you. When the disks are managed, the necessary infrastructure is created and managed by Microsoft so that you don't have to worry about them.
This is the reason you don't see any Storage Account and Page Blobs in your Azure Subscription for managed disks. You will see them only when your disks are unmanaged as then you are responsible for the management of these resources.
Please see this link for a nice comparison between managed and unmanaged disks: https://learn.microsoft.com/en-us/answers/questions/3619/what-is-the-difference-between-managed-disk-and-un.html.
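You can see this for yourself with a quick sketch using the Az PowerShell module (the resource group name is a placeholder):

    # Managed disks appear as first-class resources...
    Get-AzDisk -ResourceGroupName "myRG" | Select-Object Name, DiskSizeGB, ManagedBy
    # ...while no storage account for them shows up:
    Get-AzStorageAccount -ResourceGroupName "myRG"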

What are the costs associated with having an Azure VM image?

Azure images do not seem to be a category in the Azure pricing calculator.
Is there a cost for storing VM images on Azure?
Look here for details on Managed Images of a generalized VM in Azure and here for pricing.
If you want to store them as unmanaged disks (i.e. manage/maintain them yourself) then you can use a storage account and look at pricing information here.
If you are talking about container images, you can look here.
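For context, here is a rough sketch of creating a managed image from a generalized VM with the Az PowerShell module (all names are placeholders); you then pay for the storage the image consumes:

    # Assumes the VM was already deallocated and marked generalized
    # (Stop-AzVM, then Set-AzVM -Generalized).
    $vm = Get-AzVM -ResourceGroupName "myRG" -Name "myVM"
    $imageConfig = New-AzImageConfig -Location "eastus" -SourceVirtualMachineId $vm.Id
    New-AzImage -ResourceGroupName "myRG" -ImageName "myImage" -Image $imageConfig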

Difference between new and classic storage accounts in Azure

Azure has Storage accounts and Storage accounts (classic) in the Azure Portal.
What are the differences between them? Is there any reason to migrate from a classic storage account to a new storage account?
Classic storage accounts are created using the existing Service Management APIs (the REST API stack that's been available for the past several years). The newer storage accounts are created with the new Azure Resource Manager (ARM) APIs (which are also wrapped in PowerShell and the CLI now). Ultimately they provide the same resources to your apps, but they're created and managed differently, and there are a few nuanced differences (such as the ability to tag resources that are created via ARM scripts).
You can't convert a classic storage account (or any classic resource) to the newer type. You don't really need to anyway, unless you're trying to mix resources from classic and new, such as adding ARM-based virtual machines to a classic-based virtual network, or spinning up an ARM-based VM from a VHD image sitting in a classic storage account (and for that example, you could always just copy the VHD to a new storage account). Note that, for general storage usage (blobs/tables/queues), you just need the URI and the primary (or secondary) key. With those, you can access your storage resources from anywhere, from any VM/website/etc., regardless of whether you're accessing storage from classic or new virtual machines, for example.
Check out this link for a general list of differences between classic and new resources.
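To illustrate that last point, a sketch with the Az PowerShell module - data access only needs the account name and a key, however the account was created (the account, key, and container names are placeholders):

    # Works the same whether the account is classic or ARM-based.
    $ctx = New-AzStorageContext -StorageAccountName "mystorageacct" `
        -StorageAccountKey "<primary-or-secondary-key>"
    Get-AzStorageBlob -Container "mycontainer" -Context $ctx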
One advantage of the new over the classic storage accounts is Storage Service Encryption (SSE):
Q: I have an existing classic storage account. Can I enable SSE on it?
A: No, SSE is only supported on Resource Manager storage accounts.
Q: How can I encrypt data in my classic storage account?
A: You can create a new Resource Manager storage account and copy your data using AzCopy from your existing classic storage account to your newly created Resource Manager storage account.
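A sketch of that copy using AzCopy v10 syntax (the account names, container, and SAS tokens are placeholders; older AzCopy releases used /Source: and /Dest: parameters instead):

    # Copy an entire container from the classic account to the new ARM account.
    azcopy copy "https://classicaccount.blob.core.windows.net/data?<SAS>" `
        "https://armaccount.blob.core.windows.net/data?<SAS>" --recursive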
There is now a way to migrate Classic resources to the new ARM model. I've done a few myself and it worked as expected. Here's a guide from Microsoft:
https://learn.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-ps-migration-classic-resource-manager
In addition to @David Makogon's answer, the new Azure Storage offers reselling resources to sub-subscriptions.
This means that you are able to buy storage from Azure and sell it to your customers.
You can now migrate classic storage accounts to ARM from within Azure, either in the portal (Settings --> Migrate to ARM) or with PowerShell, as sketched below.
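The PowerShell path goes through the classic (Service Management) module's migration cmdlets, roughly like this (a sketch; the account name is a placeholder, and the subscription must be registered for the classic migration provider first):

    # Validate and prepare the migration, then commit (or abort).
    Move-AzureStorageAccount -Validate -StorageAccountName "myclassicstorage"
    Move-AzureStorageAccount -Prepare -StorageAccountName "myclassicstorage"
    # Inspect the prepared resources, then:
    Move-AzureStorageAccount -Commit -StorageAccountName "myclassicstorage"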

Azure - Multiple Cloud Services, Single Storage Account

I want to create a few cloud services - Int, QA, and Prod. Each of these will connect to a separate DB.
Do these cloud services require "storage accounts"? Conceptually the cloud services have executables and they must be physically located somewhere.
Note: I do not use any blobs/queues/tables.
If so, must I create 3 separate storage accounts or link them up to one?
Storage accounts are more like storage namespaces - each has a URL and a set of access keys. You can use storage from anywhere, whether from the cloud or not, from one cloud service or many.
As @sharptooth pointed out, you need storage for diagnostics with Cloud Services, and also for attached disks (Azure Drives for cloud services) and for the deployments themselves (storing the cloud service package and configuration).
Storage accounts are free: that is, you can create a bunch and still only pay for consumption.
There are some objective reasons why you'd go with separate storage accounts:
You feel that you could exceed the advertised 20,000 transactions/second limit of a single storage account (remember that storage diagnostics consume some of this transaction rate, depending on how aggressively you log).
You are concerned about security/isolation. You may want your dev and QA folks using an entirely different subscription altogether, with their own storage accounts, to avoid any risk of damaging a production deployment
You feel that you'll exceed 500 TB, the limit of a single storage account (previously 200 TB).
Azure Diagnostics uses Azure Table Storage under the hood (and it's more convenient to use one storage account for every service, but it's not required). Other dependencies your service has might also use some of the Azure Storage services. If you're sure that you don't need Azure Storage (and so you don't need persistent storage of data dumped through Azure Diagnostics) - okay, you can go without it.
The service package of your service will be stored and managed by Azure infrastructure - that part doesn't require a storage account.
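So one shared account is often enough. A sketch of creating a single account and pulling its keys, which each cloud service's diagnostics configuration can then reference (the names and region are placeholders):

    # One storage account, shared by the Int/QA/Prod services if you choose.
    New-AzStorageAccount -ResourceGroupName "myRG" -Name "shareddiagstore" `
        -Location "eastus" -SkuName Standard_LRS
    $keys = Get-AzStorageAccountKey -ResourceGroupName "myRG" -Name "shareddiagstore"
    $keys[0].Value  # goes into each service's diagnostics connection string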

Can I use Azure Storage geo-replication as source?

I know Azure will geo-replicate a copy of the current storage account to another location.
My question is: can I access that other location programmatically, even if only read-only?
I ask because this would allow me to build another deployment in a different geo-location for performance and disaster resilience, like what Azure does. With the current setup, if I use the same source storage from a different geo-location, I have to pay extra bandwidth costs.
You can only access your storage account by its primary name. In the event of failover, that name will be mapped to the alternate datacenter. You cannot access the failover storage directly, nor can you choose when to trigger a failover. For a multi-site setup as you described, you'd need to duplicate your data (which would then add the cost of storage in datacenter #2). This does give you ultimate flexibility in your DR and performance planning, but at an added cost of storage and bandwidth (egress-only).
Last week the storage team announced read-only access to the failover storage: Windows Azure Storage Redundancy Options and Read Access Geo Redundant Storage.
This means you can now deploy your application in a different datacenter which can be used for "full" failover (meaning that the storage will also be available there). Even if it's only read-only, your application will still be online - but simply in "degraded" mode.
The steps on how you can implement this with traffic manager are described here: http://fabriccontroller.net/blog/posts/adding-failover-to-your-application-with-read-access-geo-redundant-storage-and-the-windows-azure-traffic-manager/
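A sketch of reading from the secondary with the Az PowerShell module: under RA-GRS, the secondary blob endpoint is the account name with a -secondary suffix, and the regular account key works against it (the account, key, and container names are placeholders):

    # Point a storage context at the read-only secondary endpoint.
    $conn = "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>;" +
        "BlobEndpoint=https://myaccount-secondary.blob.core.windows.net"
    $ctx = New-AzStorageContext -ConnectionString $conn
    Get-AzStorageBlob -Container "mycontainer" -Context $ctx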
