Clarification on Azure SQL Database backup plan (short-term retention)

I am confused about the Azure SQL Database backup plan (short-term backup retention).
As far as I understand:
In the DTU purchasing model, there is no extra charge for backup storage; you only pay for the redundancy type (such as LRS or ZRS).
In the vCore purchasing model, you have to pay for backup storage.
Am I right?
Does that mean I will have no backups if I do not subscribe to backup storage in vCore?
Further, in the Azure pricing calculator, under vCore with the General Purpose option, there are two redundancy drop-down options (I am not talking about the long-term retention plan). What is the difference between them?
Thanks.

Will I have no backups if I do not subscribe to backup storage in vCore?
No. In Azure SQL Database, automated backups are always taken; you cannot opt out of them in either purchasing model. The difference is purely in billing: in the vCore model, backup storage is metered separately, with an amount equal to your maximum data size included at no extra charge, and you pay only for consumption beyond that. If you believe you do not need backups, then you might be a fool ;) Azure will maintain access to your database according to the standard SLAs, but only backups let you point-in-time restore the state of your database. In practice, backup storage is usually a very small component of your overall spend. If you also want an offline copy, you can export the database (for example, as a BACPAC) to a blob, download it for local storage, and then clear the blob, making that aspect virtually cost free; note that the export does require a storage account.
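To illustrate the download-then-clear step, here is a minimal sketch using the azure-storage-blob Python SDK; the connection string, container, and blob names are placeholders for wherever your export lands:

```python
from azure.storage.blob import BlobClient

# Placeholders -- point these at the blob your database export produced.
blob = BlobClient.from_connection_string(
    "<storage-account-connection-string>",
    container_name="backups",
    blob_name="mydb-export.bacpac",
)

# Download the exported backup for local (offline) storage...
with open("mydb-export.bacpac", "wb") as f:
    f.write(blob.download_blob().readall())

# ...then clear the blob so it no longer accrues storage charges.
blob.delete_blob()
```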
In the Azure pricing calculator, under vCore, General Purpose, there are two redundancy drop-down options
Are you referring to the Compute Redundancy?
Zone redundancy for Azure SQL Database general purpose tier
The zone redundant configuration utilizes Azure Availability Zones to replicate databases across multiple physical locations within an Azure region. By selecting zone redundancy, you can make your serverless and provisioned general purpose single databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic. This configuration offers a 99.995% availability SLA and RPO=0. For more information, see general purpose service tier zone redundant availability.
In the other tiers, these redundancy modes are referred to as LRS (Locally Redundant) and ZRS (Zone Redundant). Think of this as your choice about what happens when your data center is affected by some sort of geological or political event that takes the server cluster, pod, or whole data center offline.
Locally Redundant offers redundancy only within a geographically local area (often the same physical site). In general this protects against local hardware failures, but not usually against scenarios that take the whole data center offline. This is the minimal level of redundancy that Azure requires for its hardware management and maintenance plans.
Zone Redundant offers redundancy across multiple geographically independent zones, but still within the same Azure region. Each Azure availability zone is an individual physical location with its own independent networking, power, and cooling. ZRS provides a minimum of 99.9999999999% (twelve nines) durability for objects over a given year.
There is a third type of redundancy offered in the higher tiers: Geo-Redundant Storage (GRS). This has the same zone-level redundancy but also maintains additional replicas in other Azure regions around the world.
In the case of Azure SQL DB, these terms applied to Compute (so the actual server and CPU) have almost identical implications to those for Storage Redundancy. As for the other available options, the pricing calculator is pretty well documented; use the info tips for quick answers and go to the reference pages for extended information.
The specifics are listed under Azure Storage redundancy, but in short, redundancy in Azure is achieved via replication. That means that an entire workable and usable copy of your database is maintained so that, in the event of a failure, the replica takes the load.
A useful side effect of replication is that you can actively utilise the replicated instance for read-only workloads, which gives us as developers and architects some interesting performance opportunities: complex reporting and analytic workloads can be moved out of the transactional path out of the box, where traditionally this was a non-trivial configuration.
The RA prefix on redundancy options is an acronym for Read Access.
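To illustrate the read-access idea, here is a minimal sketch of routing a reporting query to a readable secondary by adding ApplicationIntent=ReadOnly to the connection string; it assumes pyodbc with the Microsoft ODBC driver, a tier that exposes a readable secondary, and placeholder server, database, and credential values:

```python
import pyodbc

# Placeholder connection details -- substitute your own server and credentials.
conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=mydb;"
    "UID=reporting_user;"
    "PWD=<password>;"
    "Encrypt=yes;"
    # Routes this connection to a readable secondary replica when available.
    "ApplicationIntent=ReadOnly;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # Heavy analytic queries run here without loading the primary replica.
    cursor.execute("SELECT COUNT(*) FROM sales.orders")  # hypothetical table
    print(cursor.fetchone()[0])
```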

Related

How can I use Azure App Services to mitigate growing retention costs?

I need to devise a pricing strategy for a SaaS product I plan to go live with, as tricky a task as that is.
Putting product 'value' and things like RoI aside (since they're off-topic here), I'm looking for some assurances against a situation whereby my competitively priced product incurs losses because of increasing blob storage/SQL costs in Azure.
In a nutshell, this web app will allow users to create tasks, to which they may attach any number of hi-res images, documents etc.
So, in order to keep this question specific and technical: what services does the Azure platform offer that help mitigate the escalating costs of data/blob storage? Or which services lend themselves to managing these losses/costs?
For example, I think a DTU option for my SQL Server will be a flat rate, as opposed to a dynamically priced vCore alternative. So I could opt for DTU so that I at least know where I stand.
Question/s
Does Azure offer flat rate services for storage? Would IaaS instead of PaaS give me this?
Does Azure offer a flat rate for SQL Server? (Is my understanding of DTU correct?)

Difference between Managed and Unmanaged Disk

Can someone tell me the main benefits of and differences between managed disks and unmanaged disks, the various pros and cons of each, and how best I can use them?
I would like to highlight some of the benefits of using managed disks:
Simple and scalable VM deployment: Managed Disks will allow you to create up to 10,000 VM disks in a subscription, which will enable you to create thousands of VMs in a single subscription.
Better reliability for Availability Sets: Managed Disks provides better reliability for Availability Sets by ensuring that the disks of VMs in an Availability Set are sufficiently isolated from each other to avoid single points of failure.
Highly durable and available.
Granular access control: You can use Azure Role-Based Access Control (RBAC) to assign specific permissions for a managed disk to one or more users. Managed Disks exposes a variety of operations, including read, write (create/update), delete, and retrieving a shared access signature (SAS) URI for the disk.
Azure Backup service support: Use Azure Backup service with Managed Disks to create a backup job with time-based backups, easy VM restoration and backup retention policies.
Are unmanaged disks still supported? Yes, Azure supports both unmanaged and managed disks. We recommend that you use managed disks for new workloads and migrate your current workloads to managed disks.
Refer to the Azure Managed Disks Overview for more details.
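As a small illustration of the granular access control point above, here is a hedged sketch of retrieving (and revoking) a read-only SAS URI for a managed disk with the azure-mgmt-compute Python SDK; the subscription, resource group, and disk names are placeholders, and exact model details may vary by SDK version:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder identifiers -- substitute your own values.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-rg"
DISK_NAME = "my-managed-disk"

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Ask the platform for a temporary read-only SAS on the disk (one hour).
poller = client.disks.begin_grant_access(
    RESOURCE_GROUP,
    DISK_NAME,
    {"access": "Read", "duration_in_seconds": 3600},
)
print(poller.result().access_sas)  # SAS URI for the disk

# Revoke the SAS once you are done with it.
client.disks.begin_revoke_access(RESOURCE_GROUP, DISK_NAME).result()
```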
Essentially, Managed Disks are easier to use because they don't require you to create a storage account. I think Azure still creates one, but this detail is hidden from you.
The benefit of not having to manage a storage account is that storage accounts have limits, like max IOPS, so that if you place too many disks in a storage account, it is possible that you will reach the IOPS limit. Azure takes care of this for you.
If you have VMs in an Availability Set, Azure will make sure that disks are on different "stamps" ensuring that disks are spread out so that you don't have a single point of failure for the disks.
As for cons, I've encountered two (but there are probably more):
Snapshots are full snapshots, not incremental, so they add to storage cost.
If you are setting up disaster recovery between two Azure regions using Recovery Services, managed disks were not supported at the time of writing. Update: managed disks are now supported by Azure Site Recovery.
Managed and unmanaged drives in Azure are different concepts.
The unmanaged approach treats the drive as a service provided under a storage account; you can use this "service" by connecting it to your VM, but from a management perspective it is a completely different entity.
By contrast, a managed drive is a disk you connect to your VM whose backing storage account is managed by Azure, so you should get the appropriate performance for your disk size. In fact, because VMs have their own IOPS limits associated with the hardware profile size, just resizing the disk generally won't give you better performance.
Since managed drives are a newer and more "sophisticated" service, they are also more expensive.
If you are interested in this topic, I did a fairly complete comparison based on the options available through the az command line here. There is also a nice summary of the practical differences here.
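To get a concrete feel for the managed model, here is a minimal, hedged sketch of creating an empty managed disk with the azure-mgmt-compute Python SDK; note there is no storage account anywhere in the call, and all names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create a 128 GiB empty Premium SSD managed disk. Azure provisions and
# manages the backing storage itself; you never see a storage account.
poller = client.disks.begin_create_or_update(
    "my-rg",         # resource group (placeholder)
    "data-disk-01",  # disk name (placeholder)
    {
        "location": "westeurope",
        "sku": {"name": "Premium_LRS"},
        "creation_data": {"create_option": "Empty"},
        "disk_size_gb": 128,
    },
)
print(poller.result().id)
```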
Managed Disks:
Managed disks provide enhanced manageability and high availability, with the following features:
Simple - Abstracts underlying storage account/blob associated with the VM disks from customers. Eliminates the need to manage storage accounts for IaaS VMs
Secure by default – role-based access control, storage encryption by default, and encryption using your own keys
Storage account limits do not apply – No throttling due to storage account IOPS limits
Big scale - 20,000 disks per region per subscription
Better Storage Resiliency - Prevents single points of failure due to storage
Supports both Standard and Premium Storage disks
Unmanaged Disks:
Less availability: unmanaged disks do not protect against the outage of a single storage scale unit.
Complex upgrade process: if you want to upgrade from standard to premium on unmanaged disks, the process is very involved.
Apart from that, unplanned downtime and security are the downsides of unmanaged disks. Cost differences between managed and unmanaged disks depend on your workload and use case.

SQL Azure reliability and scalability

I need to make sure the availability of my database is high; working with SQL Azure does not make that clear to me.
Is there a way to run multiple servers (where one takes over if another fails) under SQL Azure? Beyond that, is there something equivalent to increasing memory on the DB server to speed up database processing?
Read High Availability in the intro to Azure SQL and then read Business Continuity in Windows Azure SQL Database. To summarize:
Data durability and fault tolerance is enhanced by maintaining multiple copies of all data on different physical nodes located across fully independent physical sub-systems, such as server racks and network routers. At any one time, Windows Azure SQL Database keeps three replicas of data running: one primary replica and two secondary replicas.
Right now there is no way to specify the hardware configuration for SQL Azure databases. It's totally out of your control, and from a SaaS perspective that makes sense. The backend management services are responsible for making sure you get the best performance possible.
If you need dedicated, reserved hardware for your SQL deployment, you may take a look at the IaaS offerings in Azure and start a VM with SQL Server installed; however, make sure you know the main differences between an IaaS and a PaaS offering.
I do not know what your high availability requirements are, but you should look at the SLAs provided by Microsoft. SQL Database offers 99.9% monthly availability.
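Failover between replicas is automatic, but a client can still see brief connection drops while a secondary is promoted, so the usual guidance is to wrap data access in retry logic for transient errors. A minimal sketch, assuming pyodbc and placeholder connection details:

```python
import time

import pyodbc

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"  # placeholder
    "Database=mydb;UID=app_user;PWD=<password>;Encrypt=yes;"
)

def query_with_retry(sql, attempts=5):
    """Run a query, retrying across transient faults such as a failover."""
    for attempt in range(1, attempts + 1):
        try:
            with pyodbc.connect(CONN_STR, timeout=15) as conn:
                return conn.cursor().execute(sql).fetchall()
        except pyodbc.OperationalError:
            if attempt == attempts:
                raise
            # Exponential backoff before reconnecting to the new primary.
            time.sleep(2 ** attempt)

rows = query_with_retry("SELECT TOP 5 name FROM sys.objects")
```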

How to scale out Azure Storage Queues

I am wondering how Azure handles the geographic distribution of Storage Queues?
If I have a storage queue setup in one region and then I want to scale out to other regions, what happens? Do I need to write code to handle the Queues separately?
For example, Amazon Web Services has DynamoDB, which is globally distributed out of the box and provides the same performance everywhere.
I think a more logical comparison would be between Windows Azure Tables and DynamoDB. That said:
Windows Azure queues are assigned to a specific data center, and you can create additional queues in other data centers. Typically you'd place your queue in the same DC as your cloud service working with the queue, but there's no requirement there (you'll get better performance and no outbound bandwidth charges when you access same-DC queues).
DynamoDB, from what I've read here, has the same model: Choose your data center for a table. Data is distributed across servers in the same region, not multiple regions (in other words, if you choose N. Virginia, that's where your data access point is).
Regarding your statements about DynamoDB "being globally distributed out of the box" and providing "the same performance everywhere" - I don't think that's the case (at least, I can't find any evidence supporting that assertion). Rather, DynamoDB is replicated to additional data centers for fault tolerance, as is Windows Azure Storage.
Bottom line: you'd have to manage resources allocated to multiple data centers, whether Windows Azure Tables, Windows Azure Queues, or DynamoDB.
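To make the per-data-center management concrete, here is a hedged sketch using the modern azure-storage-queue Python SDK with one queue per region; the connection strings are placeholders for storage accounts created in different regions, and the queues are assumed to exist already:

```python
from azure.storage.queue import QueueClient

# One storage account per region -- placeholders for your connection strings.
REGIONS = {
    "eastus": "<connection-string-for-east-us-account>",
    "westeurope": "<connection-string-for-west-europe-account>",
}

# There is no built-in global queue spanning regions, so the application
# must address each regional queue explicitly.
queues = {
    region: QueueClient.from_connection_string(conn_str, "tasks")
    for region, conn_str in REGIONS.items()
}

# Route work to whichever region should process it.
queues["eastus"].send_message("resize-image:42")

# Each region's workers drain only their local queue.
for msg in queues["westeurope"].receive_messages():
    print(msg.content)
    queues["westeurope"].delete_message(msg)
```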

How do I make my Windows Azure application resistant to Azure datacenter catastrophic event?

AFAIK Amazon AWS offers so-called "regions" and "availability zones" to mitigate the risk of partial or complete datacenter outages. It looks like if I have copies of my application in two "regions" and one "region" goes down, my application can continue working as if nothing had happened.
Is there something like that with Windows Azure? How do I address risk of datacenter catastrophic outage with Windows Azure?
Within a single data center, your Windows Azure application has the following benefits:
Going beyond one compute instance, your VMs are divided into fault domains, across different physical areas. This way, even if an entire server rack went down, you'd still have compute running somewhere else.
With Windows Azure Storage and SQL Azure, storage is triple replicated. This is not eventual replication - when a write call returns, at least one replica has been written to.
Ok, that's the easy stuff. What if a data center disappears? Here are the features that will help you build DR into your application:
For SQL Azure, you can set up Data Sync. This facility synchronizes your SQL Azure database with either another SQL Azure database (presumably in another data center), or an on-premises SQL Server database. More info here. Since this feature is still considered a Preview feature, you have to go here to set it up.
For Azure storage (tables, blobs), you'll need to handle replication to a second data center yourself, as there is no built-in facility today. This can be done with, say, a background task that pulls data every hour and copies it to a storage account somewhere else (see the sketch after this answer). EDIT: per Ryan's answer, there is data geo-replication for blobs and tables. HOWEVER: aside from a mention in this blog post in December, and possibly at PDC, this is not live.
For Compute availability, you can set up Traffic Manager to load-balance across data centers. This feature is currently in CTP - visit the Beta area of the Windows Azure portal to sign up.
Remember that, with DR, whether in the cloud or on-premises, there are additional costs (such as bandwidth between data centers, storage costs for duplicate data in a secondary data center, and compute instances in additional data centers).
Just like with on-premises environments, DR needs to be carefully thought out and implemented.
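Here is a hedged sketch of the background-copy approach mentioned in the storage point above, using the modern azure-storage-blob Python SDK; the connection strings and container name are placeholders, the destination container is assumed to exist, and the source URL must be readable by the destination service (public, or carrying a SAS token):

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection strings for accounts in two different data centers.
PRIMARY_CONN = "<primary-account-connection-string>"
SECONDARY_CONN = "<secondary-account-connection-string>"
CONTAINER = "app-data"

primary = BlobServiceClient.from_connection_string(PRIMARY_CONN)
secondary = BlobServiceClient.from_connection_string(SECONDARY_CONN)

def replicate_container():
    """Copy every blob in the container to the secondary account.

    Run periodically (e.g. hourly) from a worker role or scheduled job.
    """
    src_container = primary.get_container_client(CONTAINER)
    for blob in src_container.list_blobs():
        src_url = src_container.get_blob_client(blob.name).url
        dst_blob = secondary.get_blob_client(CONTAINER, blob.name)
        # Server-side copy: the secondary account pulls directly from the
        # source URL, so the data never flows through this process.
        dst_blob.start_copy_from_url(src_url)

replicate_container()
```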
David's answer is pretty good, but one piece is incorrect. For Windows Azure blobs and tables, your data is actually geographically replicated today between sub-regions (e.g. North and South US). This is an async process with a target lag of roughly 10 minutes. The process is also out of your control and exists purely for data center loss. In total, your data is replicated 6 times across 2 different data centers when you use Windows Azure blobs and tables (impressive, no?).
If a data center was lost, they would flip over your DNS for blob and table storage to the other sub-region and your account would appear online again. This is true only for blobs and tables (not queues, not SQL Azure, etc).
So, for a true disaster recovery, you could use Data Sync for SQL Azure and Traffic Manager for compute (assuming you run a hot standby in another sub-region). If a datacenter was lost, Traffic Manager would route to the new sub-region and you would find your data there as well.
The one failure that you didn't account for is an error being replicated across data centers. In that scenario, you may want to consider running Azure PaaS as part of the HP Cloud offering, in either a load-balanced or failover configuration.
