Backup in SQL Azure very slow

I'm currently working on an SQL Database backup strategy in advance of porting our application to Azure. Currently we are using a SQL Server maintenance task to run a backup of our on-premise database once every 15 minutes with a 1 hour retention (thus retaining 4 local copies). We also run a 24 hour backup which gets pushed into Amazon S3.
Now in Azure, I've so far managed to set up a backup of the primary database (to another SQL Database server) by connecting to the target server's master database and running the following T-SQL:
CREATE DATABASE backupName AS COPY OF sourceserver.sourceName;
The source database is approximately 3GB in size and is expanding around 5-10% per month. The problem I'm having is that the copy process is painfully slow! I initiated a copy over 30 minutes ago and it's still running! This means that adopting a 15 minute backup schedule seems untenable in Azure.
So I'm wondering if I can qualify a few things with other users:
Is it normal for a 3GB backup to take over 30 minutes (and counting) to replicate to another server instance?
Should I keep the backups on the same server as the source? I'm very nervous as a few clicks in the Azure portal could wipe out a lot of critical data! I know this is a 'black swan' event but I just wouldn't feel easy having everything running in a single server instance.
Is there a quicker way to back up an SQL Azure database? I've taken a look at Red Gate's offering, but it seems expensive for sub-daily incremental backups.
Any thoughts on this would be much appreciated!
I should add that I am happy to rethink my backup strategy entirely to be more Azure-friendly. The key things are mitigation against administrator error, e.g. dropping a load of important data with a clumsy statement (the shorter the backup interval the better), and a 24-hour backup pushed into a different storage medium, e.g. a blob container. Something like the rotation sketched below is what I have in mind.
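To make that concrete, here is a rough sketch of the rotation I have in mind (the server and database names are placeholders, and each statement has to run as its own batch against the master database of the backup server):

-- every 15 minutes: create a timestamped copy of the live database
CREATE DATABASE [MyDb_copy_0915] AS COPY OF sourceserver.[MyDb];
GO
-- once the new copy is online, drop the oldest so that only four copies remain
DROP DATABASE [MyDb_copy_0815];
GO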
UPDATE:
I cancelled the initial backup request after waiting an hour and re-initiated it. The second backup completed in 5 minutes. I've now gone back to Red Gate to take a look at their hosted backup solution.

How long a database copy takes to run depends not only on the size of the data but also on how many transactions are being run against it at the time, so this option may not be tenable in your situation. Now that you have a backup DB, you can test this for yourself by making a backup of your backup and seeing how long that takes.
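If you want to see how a copy is getting on rather than waiting blind, the copy reports its progress through the system views. A minimal sketch, assuming you are connected to the master database of the destination server (the database name is illustrative):

-- the destination database appears straight away, in the COPYING state
SELECT name, state_desc FROM sys.databases WHERE name = 'backupName';

-- percent_complete and any error details for copies that are still in flight
SELECT database_id, start_date, modify_date, percent_complete, error_desc
FROM sys.dm_database_copies;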
Your other option is to export a .bacpac file and store it in blob storage. There are libraries for this but I don't have the reference to hand. This will also be a much cheaper option. I'm pretty sure this is what Red Gate are doing under the covers of their service.

Related

I'm getting a lot of transactions while idling (Airflow and Azure File Share)

I need to load data from different files into an Azure SQL database, so I set up a VM running Airflow and two Azure File Shares: one for my DAGs (so that I can modify them without SSHing into the VM) and another to drop the files that will be loaded.
I mounted those two file shares on the VM and my PC and use them as normal drives.
The system is currently idling and I can see in Azure's portal that I'm getting about 24k transactions every 5 minutes, but I can't see specifically what is generating them.
Is it possible the VM is constantly requesting a list of files or touching the fileshare to check if it's still there? How can I avoid this?
Thanks!
I can confirm that having the DAGs folder on a shared drive was the cause of the huge number of transactions. I moved the DAGs folder to the VM's local drive and now everything is back to normal.
I was running into a similar issue, having 8k transactions every 5 minutes for just 3 DAGs. I got it down to about 800 transactions every 5 minutes by setting file_parsing_sort_mode to alphabetical.
https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#file-parsing-sort-mode
The default setting for this, which is modified_time, makes the DAG processor retrieve the last modified time of each file from the file share on every loop. Strangely, this action even triggers write operations, which are more costly than read operations.
https://github.com/apache/airflow/blob/2d79d730d7ff9d2c10a2e99a4e728eb831194a97/airflow/dag_processing/manager.py#L982-L1008
Same answer posted on a similar question here: https://stackoverflow.com/a/70524563/6654620

Azure - never ending Full Backup Uploading in Database Migration Service

I have been migrating some databases from a SQL Server to a SQL Managed Instance. 13 of 14 DBs have been restored successfully. Only one remains, the biggest one at almost 600 GB. It has been continuously uploading the initial full backup for more than a week and it is still running.
It is a big database, but it has been a long time and I thought it would have finished by now. For this reason I have been trying some cmd/az commands, but I don't get anything more than a running status.
The strange thing is that I can't see the DB (in recovery mode) in SQL Server Management Studio, and the file has not been created yet in the container of the Storage Account. All the other databases appear in SSMS and in the storage account.
I had around 75 GB more than the total size of the databases in the Storage Account, so I guess that was not the issue. In any case, I added 500 GB more but still no results.
Is it possible to stop the task and restart it to see if this helps? Obviously I would not like to upload all databases again if possible.
Could you please help?
Thank you!
As explained in the comments above, the best options in my case for migrating from an old SQL Server were:
Regularly check the CPU and network performance of the source server.
When you configure your SQL MI, provision at least double the storage of the total size of the full DB backups.
Finally, if you have big DBs (in my case more than 400 GB), create separate activities* for the small ones and the big ones. This also helps if any errors happen in the big DBs: you won't need to upload all of them again.
*NOTE: I had some issues when I had more than two activities: some of them stayed in "Queued" status and still had not run after a day, even when the other activities had already completed. To fix this, I had to delete all the activities and create the remaining one again.
Have a good day.
I would recommend opening a case with Support to make sure there is no patching or failover happening on the SQL MI during the migration.
I have seen this happen before, where the restore of a VLDB is going through and then patching on the SQL MI causes the restore to start over.
Hopefully this will help

Create database within Azure SQL elastic pool takes almost 10 minutes to complete

We use Azure SQL databases and an elastic pool (level "Standard").
Usually the creation of a new customer database takes approximately 1-2 minutes, but suddenly it started taking much longer (up to 10 minutes) and I have no idea why. I checked the pool in the Azure portal and everything seems fine. We are still far away from the given limits (257/500 databases; ~11 GB/200 GB data size). Scaling up for a short period of time has no effect.
Is there anything else I can do?
I think there is an ongoing issue with Microsoft cloud services. Check whether your issue is related to that; if so, it should be temporary.

Are there utilities to help automate backing up Azure SQL databases to Azure Storage?

I know SQL Azure has automated backups that are retained for 30 days, but for archival purposes I also need to take and retain other backups: daily (last 60 days), weekly (first day of each week for the last 8 weeks), and monthly (first day of each month for the last 12 months). At the end of a period, the oldest backup gets deleted (except for monthly): any daily backup older than 60 days gets deleted, etc. The monthly backups would get moved to cold storage, where they are kept for years.
I should note that my databases are only in the 2-4 GB range, so the cost savings of using Azure's cold storage may be so minimal that it's not even worth bothering moving the monthly backups to cold storage.
I was thinking blob storage is probably the way to go. Are there utilities, scripts, etc. that do this? I don't want to reinvent the wheel. I see Azure has a scheduling service, which would be nice to use because the free tier would more than suffice, but I don't want to overcomplicate things. If I need to run a cheap VM just for backups, I will.
There is Cherry Safe
https://www.cherrysafe.com/Home/Features#sqlAzureBackup
This tool is flexible and not very expensive.
This article may also help you
http://fabriccontroller.net/backup-and-restore-your-sql-azure-database-using-powershell/
We have a private preview of the long-term backup retention feature in Azure SQL. If you are interested in joining, please email sashan at microsoft.com for details.

Azure Websites automated and manual backups are not created

Whilst accepting that backups in Windows Azure Websites are a preview feature, I can't seem to get them working at all. My site is approximately 3 GB and on the Standard tier. Backups are configured to go to a geo-redundant storage account with no other containers. There is no database selected; I'm only backing up the files.
In the Admin Portal, if I use the manual Backup Now button, a 0-byte file is created in the designated storage account, dated 01/01/0001 00:00:00. However, even after several days, it is not replaced with the 'actual' file.
If I use the automated backup scheduler, nothing happens at all - no errors, no 0 byte files.
Can anyone shed any light on this please?
The backup/restore feature is still in preview and officially supports only 2 GB of data. From the error message you posted ("backup is currenly in progress") it seems you probably hit a bug that was fixed last week (the result of that bug was that some lingering backups blocked subsequent backups).
Please try it again, you should be able to invoke it now. If you find another error message in operational logs, feel free to post it here (just leave the RequestId in it unscrambled - we can correlate using that) and we can take a look.
However, as I mentioned at the beginning, more than 2 GB is not fully supported yet (you might not be able to do a full round trip with your data, e.g. backup and then restore).
Thanks,
Petr
