I'm building a web app using Azure & SQL Azure. I'm setting it up so each organization has their own database. Low to moderate traffic per customer organization.
I'm thinking about using SQL Azure Data Sync as part of a failover/backup plan, so that if SQL Azure goes down, my app can switch over to my on-premises SQL Server (read-only mode).
I would also be able to do all of my backups on-prem, instead of in the cloud which could incur costs.
One issue may be trying to data-sync multiple databases to my on-premises SQL Server (I'm not sure what the limit is on the number of databases that can be synced to one server).
Bandwidth may be an issue, but I'll probably only sync daily.
Does anyone see any other problems with this approach?
Data Sync is ok, but may or may not be good for your particular DR plan since it's not a transactional sync model.
One option to consider is making a database copy:
CREATE DATABASE destination_database_name
AS COPY OF [source_server_name.]source_database_name
Then you can create a backup from this copy, store the backup in blob storage, and (optionally) delete the database copy. While this does add an additional cost due to a second database being live, you can keep that cost to a minimum if you delete the copy right after creating the backup and storing it in blob storage (remember that databases are amortized daily).
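As a minimal T-SQL sketch of that flow (the database names below are placeholders, and the export of the copy to a .bacpac in blob storage is done from the portal or a client tool rather than from T-SQL):
-- Connect to the master database of your SQL Azure server.
-- 1. Start the copy (same mechanism as the CREATE DATABASE ... AS COPY OF above).
CREATE DATABASE CustomerDb_Copy AS COPY OF CustomerDb;

-- 2. Poll until the copy completes; the row disappears (or percent_complete reaches 100).
SELECT database_id, start_date, percent_complete, error_code
FROM sys.dm_database_copies;

-- 3. After exporting the copy to blob storage, drop it so you stop paying for it.
DROP DATABASE CustomerDb_Copy;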
Since your backups would then be in blob storage, you could keep multiple backups in blob storage, and pull a backup to your on-premises server if needed.
Does Azure Synapse Analytics support geo-redundancy like Storage Accounts and Key Vault? If not, how do I implement high availability for Azure Synapse Analytics? I have the following components as part of the Azure Synapse Analytics solution:
SQL Dedicated Pool
SQL Serverless Pool
Spark Pool
Storage Account (ADLS)
Azure DevOps Git Repo
First, designing and documenting a Disaster Recovery plan is a project unto itself. I’ve been working on one for a client of mine using Synapse for several months part-time.
The first task is to define your Recovery Time Objective (RTO, meaning how long before your solution is back up in the event of a disaster) and your Recovery Point Objective (RPO, meaning how many minutes or hours of data you can afford to lose… and with analytics solutions you can usually reload from the source to catch up). If your RTO and RPO are low for an analytics solution (like 2 hours), then you probably need to spin up parallel environments in another region and load data to both environments in parallel. If your RTO and RPO are typical for an analytics solution (24-48 hours), then you can probably survive by ensuring backups are geo-redundant and restoring in the event of an outage. I would recommend you preconfigure your Synapse workspace and other infrastructure before the outage unless you have a trusted infrastructure-as-code solution. If your RPO and RTO are long (like 7 days), it's extremely unlikely an Azure service or region is going to be down for that long.
ADLS supports RA-GRS redundancy, so you could read all the files from the secondary endpoint in its paired region and copy them to another ADLS account in the secondary region. Unfortunately ADLS accounts don't yet support user-initiated failover.
Dedicated SQL Pools support built-in geo-redundant backups once a day, but you can't control when they are taken. If that isn't acceptable, then you need to proactively create a user-defined restore point, restore it cross-region, and pause the restored SQL pool.
Synapse Serverless SQL pools have no storage, so ensure you have a backup of the schema (views, permissions, external data sources, external tables, etc.) in source control or somewhere. The data will fail over with ADLS.
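For example, the kind of serverless-pool DDL worth keeping in source control looks roughly like this (the data source name, view name, and storage URL are made-up placeholders):
-- External data source pointing at the lake (storage URL is a placeholder).
CREATE EXTERNAL DATA SOURCE LakeData
WITH (LOCATION = 'https://mylake.dfs.core.windows.net/curated');
GO
-- A view over Parquet files; recreating it in another workspace restores the "schema".
CREATE VIEW dbo.Sales AS
SELECT *
FROM OPENROWSET(
        BULK 'sales/*.parquet',
        DATA_SOURCE = 'LakeData',
        FORMAT = 'PARQUET') AS src;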
For Spark Pools ensure you have your notebook artifacts in source control and you can always run them in a different Synapse workspace in another region when needed. Document your cluster configs.
Write out a disaster recovery playbook and do a DR drill periodically (once a quarter or once a year).
Here is another author’s description of the DR plan for Synapse.
Azure SQL has built-in backups. If the SQL database and server are deleted from within the Portal, these backups are lost.
What is best practice for backing up Azure SQL so that the backups withstand deletion of the server from the portal?
I have manually exported the database to a storage location; however, Azure says that this should not be used as a backup. Why should it not be used and what should I be doing instead?
If you want a direct control over your backups, then the best mechanism is fairly straight forward.
Create a copy of your database. This ensures no active transactions because it's a copy.
Use the BACPAC process to export the copy of the database. BACPAC exports are not transactionally consistent, which is why we created the copy first.
Store this wherever you want.
Drop the copy of the database because you're paying for that while it exists.
You can use the BACPAC to import into a new Azure database, or you can import it into a VM in Azure, AWS, or locally.
Azure SQL has built-in backups. If the SQL database and server are deleted from within the Portal, these backups are lost.
Yes. If you delete the Azure SQL server that hosts SQL databases, all databases that belong to the server are also deleted and cannot be recovered. You cannot restore a deleted server.
What is best practice for backing up Azure SQL so that the backups withstand deletion of the server from the portal?
If your Azure SQL Server has been deleted, you need to create a support ticket to restore the databases.
If you really do need to delete a SQL server and still want backups of it, you can configure an Azure Recovery Services vault to store Azure SQL database backups and then recover a database from the backups retained in the vault, using the Azure portal or PowerShell.
Why should it not be used and what should I be doing instead?
I think you can export your database to your local storage, but it's complex to restore it to Azure. Also, the export may change some information in your SQL database and may need migration back to Azure.
Well, there is no real answer to this; you can use any backup method you like. It's mostly a personal preference.
But the easiest way (probably) is using the Azure Backup vault to do long-term backups (which is a native way for Azure). It's pretty easy to configure (next/next/next) and it is not connected to the Azure SQL server, so when you delete the server the backups are still there.
I want to confirm our understanding of how our Azure SQL databases are being backed up to enable point-in-time restore. We have not currently configured geo-replication to have the database available in another region. We may in the future as some data analysis is done. But my understanding is that the database is still being backed up to a geo-redundant location, so I could do a geo-restore if there was an issue with the data center that houses my SQL database. Is that correct, or do I need to enable geo-replication and pay for a second database in order to have a disaster recovery option if the datacenter had an issue?
To clarify further: I think this article states what I'm saying in the Geo-Restore section.
https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity/
Thanks
Yes, all databases have a geo-replicated copy for disaster recovery purposes. For more details, please see the following: https://azure.microsoft.com/en-us/blog/azure-sql-database-geo-restore/
Geo-restore uses the same technology as point in time restore with one important difference. It restores the database from a copy of the most recent daily backup in geo-replicated blob storage (RA-GRS). For each active database, the service maintains a backup chain that includes a weekly full backup, multiple daily differential backups, and transaction logs saved every 5 minutes. These blobs are geo-replicated; this guarantees that daily backups are available even after a massive failure in the primary region.
Yes, Azure SQL Databases are automatically backed up to a different Azure data center using Geo-Replication. This is an automatic feature of Azure SQL that is baked into the service offering.
Here's a blog post with further information about Azure SQL Data Replication:
https://azure.microsoft.com/en-us/blog/azure-sql-database-standard-geo-replication/
I am trying to upgrade an Azure DB in a continuous release scenario. The DB lives in SQL Azure and its size keeps growing; it's now over 50 GB. In my previous on-premises experience, I would usually back up the old DB in a compressed format and save it to an on-premises file server. In case the upgrade fails, I can restore it safely.
But with SQL Azure, I am not sure if it's OK to download such a big DB from SQL Azure. And is there any best practice for the SQL Azure DB upgrade scenario?
ADD
I found this link regarding different SQL Azure backup strategies. But it'll be great if someone can share some field experiences.
Azure now has automatic exports (aka full backups) to blob storage that you can schedule. The .bacpac files are complete compressed copies of your database, and blob storage is pretty cheap. To give you an idea of size, we have a 20 GB database that is backed up to only 500 MB. We typically keep 14 days of backups, but how long to retain them is up to your needs.
It's kind of like the Ron Popeil Rotisserie. You just set it, and forget it.
Obviously after you take a backup you want to restore it somewhere else to verify it worked. It's also a good idea to periodically restore your backups to make sure they keep working over time. You can do all of this in the Azure Portal. Just create a new database based on a .bacpac file that you created from the automated export.
You actually don't have to download the DB on-premises unless you want another copy locally, because if you are using geo-redundant blob storage it's already copied to another region and you have 6 copies in total. But again, it's up to you.
When you log into the management portal, navigate to the SQL Database tab. Click on your DB and then click Configure. There you can set up automated backups for your DB to blob storage.
The path on the management portal looks like this:
https://manage.windowsazure.com/mycompany.com#Workspaces/SqlAzureExtension/SqlServer/coolazuredb/Database/5.coolazuredb/Config
Here is a screenshot of the automated export section:
I would like to move my existing SQL Azure database to another location, but I think there is no functionality right now to do so in the Azure management portal.
I just googled it and found one link http://social.msdn.microsoft.com/Forums/en-US/ssdsgetstarted/thread/e6c961cc-5eea-4f07-82c9-a8805d367b05 that says I need to use the data sync option in Azure's portal but I don't have that feature enabled in my Azure portal.
Also, if I do use that option, is there any charge for it? Finally, are there any other options for moving the SQL Azure location?
To move an existing SQL Azure database to a new region, assuming there are no blob containers associated with the database, follow the steps below. For further reference see:
https://azure.microsoft.com/en-us/blog/migrating-azure-services-to-new-regions/
Upgrade the database, if necessary, to one of the Premium pricing tiers
Add geo-replication to the existing database. You can choose which region holds the secondary copy of the existing database. Create a new database server in the target region of your choice; I suggest provisioning that new server with the same admin username and password as the existing SQL database server. When creating the secondary database, I suggest making the secondary type “Readable”, as that lets you check that all data and schemas were replicated correctly. (A T-SQL sketch of these geo-replication steps follows this list.)
Allow the two databases time to sync. The rule of thumb according to Microsoft AzureCAT is: 3 * (5 minutes + database size / 150 MB per minute).
Configure the Firewall settings of the secondary database to allow the necessary IP addresses to access the database
Temporarily shut down whatever users or applications are accessing the existing database.
From the Azure portal select the existing database and change its geo-replication role from primary to secondary.
Run any DDL scripts that rely on the master database, such as DDL scripts to recreate logins and users
Change the connection strings of any applications to point to the new database.
Users and applications can now connect to the new Database
At your discretion, you can remove the old database (now the secondary) and add secondaries in any new regions as backups.
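A rough T-SQL sketch of the geo-replication steps above; server, database, and login names are placeholders, and every step can equally be done from the portal:
-- 1. On the master database of the EXISTING (old) server: start geo-replication
--    to the new server in the target region, with a readable secondary.
ALTER DATABASE MyAppDb
    ADD SECONDARY ON SERVER NewRegionServer
    WITH (ALLOW_CONNECTIONS = ALL);

-- 2. When ready to switch, on the master database of the NEW server:
--    promote the secondary to primary (a planned failover, no data loss).
ALTER DATABASE MyAppDb FAILOVER;

-- 3. Still on the new server, recreate logins in master and map them to database users,
--    because server-level logins do not replicate with the database.
CREATE LOGIN app_login WITH PASSWORD = 'placeholder-strong-password';
-- then, inside MyAppDb: CREATE USER app_user FOR LOGIN app_login;

-- 4. Once applications point at the new server, break the link
--    (run on the master database of the current primary), then drop the old database.
ALTER DATABASE MyAppDb REMOVE SECONDARY ON SERVER OldRegionServer;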
In terms of charges, there will be charges for upgrading the old database if it isn't already a premium database. There will also be charges for creating the geo-replicated database. However, those charges can be limited to a day to a few days' worth of fees (depending on how long geo-replication takes). Once the new database is up and running, delete the old database as soon as possible to limit additional fees. Finally, if you upgraded the service level of the old database to a premium tier to facilitate the geo-replication, you will want to downgrade the new database to the original service level of the old database to also limit fees.
I think you can use the new Import/Export BACPAC feature. I have used it to move databases between accounts and can't see why it wouldn't also work between regions.
See how here
If you are able to stop writes to the DB for a time then you can use the Copy feature on the Azure Portal.
Create a new SQL Server in the region of your choosing.
Add your service(s) IP addresses to the new SQL Server firewall.
Stop writes to the origin database.
Open the origin database in the Azure Portal and click Copy at the top of the blade.
Choose your new SQL server located in the destination region.
Wait for the copy to complete.
Update your service(s) to point to the destination DB.
Enable DB writes.
Verify everything is working.
Delete origin database (and server if it was the only DB on the server).
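The same copy can also be started with T-SQL instead of the portal. A sketch with placeholder names: it is the COPY OF mechanism shown earlier, run against the master database of the new destination server, using a login that exists (with the same password) on both servers.
-- On the master database of the destination server in the new region:
CREATE DATABASE MyAppDb AS COPY OF oldserver.MyAppDb;

-- Monitor progress from the destination server's master database.
SELECT percent_complete, error_code
FROM sys.dm_database_copies;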
I wouldn't use Data Sync because it creates many objects in your database to perform synchronization (it's an invasive solution). You can indeed try the Import/Export feature; that should work fine. You can also download a trial version of the Enzo backup tool, which comes with a 30-day free trial: http://www.bluesyntax.net/backup.aspx. [disclaimer: I am the author of this tool]
Regarding the pricing question, you may be charged for data being extracted out of the database. Moving data "in" SQL Azure is free of charge for now. If you are transferring the data to a different data center, you will be charged for extracting the data. It's 15 cents per GB in the US and Europe, and 20 cents in Asia. Here are the pricing details: http://www.microsoft.com/windowsazure/pricing/
Keep in mind that a database that requires 4 GB of storage doesn't mean you have 4 GB of data; sometimes indexes can take a lot of space. To estimate the size of the data you will need to transfer, you can either drop your indexes (and wait a little for the database size to shrink; the database size should then be roughly equal to your data transfer needs) or you can calculate the size of your tables by running a command. Here is a link to an article that shows how to do something similar (look at the second command, which is a SELECT statement; just run it for all the tables): http://www.sqldocumentor.com/table-size-in-sql-server-find-rows-and-disk-space-usage
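If it helps, here is a rough T-SQL query along those lines that you can run directly against the SQL Azure database; it sums reserved pages (8 KB each) per table, so the figures approximate on-disk size rather than pure data size:
-- Approximate size and row count per table, largest first.
SELECT t.name AS table_name,
       SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY reserved_mb DESC;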
Azure has released a new tool called Azure Resource Mover.
Resource Mover can currently handle these resources:
Azure VMs and associated disks
NICs
Availability sets
Azure virtual networks
Public IP addresses
Network security groups (NSGs)
Internal and public load balancers
Azure SQL databases and elastic pools
https://learn.microsoft.com/en-us/azure/resource-mover/move-region-within-resource-group
Azure SQL Server is not supported yet, but Azure has a complete guide for this anyway:
https://learn.microsoft.com/en-us/azure/resource-mover/tutorial-move-region-sql#move-the-sql-server