I have an application using serverless Cosmos DB along with Premium-plan Function Apps and Consumption-plan Logic Apps. What would be the best disaster recovery strategy for this infrastructure? All resources are deployed in a single region.
I tried to replicate the Cosmos DB account to another region, but serverless Cosmos DB does not support multi-region accounts, so how should disaster recovery be configured?
Thanks in advance,
MVB
Related
We are planning to migrate our self-hosted MongoDB database to Azure Cosmos DB with the MongoDB API. We have 186 GB of data, and serverless Cosmos DB is our plan. But as we dig into the documentation we find that Azure Database Migration Service (DMS) does not support migration to a serverless Cosmos DB account.
So our plan is to create a provisioned-throughput Cosmos DB account, migrate our data there, then migrate from that provisioned account to a serverless Cosmos DB account, and finally delete the provisioned account.
But how can we achieve the second stage of our migration? Is there any particular Azure service for that?
We would prefer an online migration, because our service cannot tolerate a long downtime. We know that the first stage of the migration (i.e. from the native MongoDB server to provisioned Cosmos DB via DMS) can be done online. But is it possible to do the migration from provisioned Cosmos DB to serverless Cosmos DB online, in parallel?
If online migration isn't possible, we are OK with offline mode as well, as long as it doesn't require a long downtime for the application. Is there any estimate of the migration time?
Please shed some light on these concerns; it would be very helpful. Cosmos DB is a great service from Azure, and we can't wait to see our database there.
There is a way to do an offline migration using Spark. You can learn more by reading this article.
That said, you can't use serverless anyway, because a serverless container has a maximum storage capacity of 50 GB.
Update: the asker clarified that no single container holds 186 GB, so serverless is OK.
I just added Azure Data Factory to my subscription. During setup I was able to select only one region. What happens if a disaster hits this region? How does ADF guarantee high availability?
Do we need to wait until the region recovers, or is there a setup similar to ADLS Gen2 (GRS and RA-GRS)?
No statement on disaster recovery can be found in the official ADF documentation. Based on my research, ADF only provides the cloud-based data-integration workflow; disaster recovery is really determined by the data stores that ADF connects to. I can offer some clues for your reference:
1. The explanation of the Location option shown when you create an ADF instance.
2. High availability for the Azure integration runtime is affected by the Data Integration Unit (DIU) setting (the allocation of compute resources): https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-performance-features#data-integration-units
3. High availability for the self-hosted integration runtime, which can be improved by creating multiple nodes in the on-premises environment (a minimal PowerShell sketch follows this list): https://learn.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime#high-availability-and-scalability
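To illustrate clue 3, here is a minimal sketch, assuming the Az.DataFactory PowerShell module, an authenticated Az session, and placeholder resource names: it defines a self-hosted integration runtime and retrieves the authentication key that each additional on-premises node uses when it registers, which is how several nodes end up serving the same IR.

```powershell
# Hedged sketch: create a self-hosted integration runtime and fetch its
# authentication keys so additional nodes can register against the same IR
# for high availability. All names below are placeholders.

$rg = "my-rg"             # hypothetical resource group
$df = "my-data-factory"   # hypothetical data factory name
$ir = "shir-ha"           # hypothetical self-hosted IR name

# Create (or update) the self-hosted integration runtime definition.
Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $rg `
    -DataFactoryName $df -Name $ir -Type SelfHosted `
    -Description "Self-hosted IR with multiple nodes for HA"

# Retrieve the authentication keys; use the same key when installing the
# integration runtime agent on each additional on-premises node, so that
# several nodes back the one logical IR.
$keys = Get-AzDataFactoryV2IntegrationRuntimeKey -ResourceGroupName $rg `
    -DataFactoryName $df -Name $ir
$keys.AuthKey1
```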
Similar to an App Service plan, can we autoscale the pricing tier of an Azure SQL database? Currently my database is in the Standard S2 tier, and I want to scale it up to S3 when CPU utilization reaches 80% and scale it back down to S2 when it drops to 60%.
I went through many links and found the old question/answer below, but wanted to check whether any options are available now for the same.
Autoscaling Azure SQL Database
A single Azure SQL Database supports manual dynamic scaling, but not autoscale. For a more automatic experience, consider using elastic pools, which allow databases in a pool to share resources based on each database's needs. However, there are scripts that can help automate scaling for a single Azure SQL Database; for an example, see Use PowerShell to monitor and scale a single SQL Database.
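As a rough illustration of the pattern behind that approach (not the linked sample itself), here is a minimal sketch assuming the Az.Monitor and Az.Sql modules, placeholder resource names, and that it runs on a schedule, for example from an Azure Automation runbook:

```powershell
# Hedged sketch: read recent CPU usage for the database, then change the
# service objective between S2 and S3 based on thresholds.
# Resource names and the subscription ID are placeholders.

$rg     = "my-rg"
$server = "my-sql-server"
$db     = "my-database"

$resourceId = "/subscriptions/<subscription-id>/resourceGroups/$rg" +
              "/providers/Microsoft.Sql/servers/$server/databases/$db"

# Average CPU percentage over the last 15 minutes.
$values = (Get-AzMetric -ResourceId $resourceId -MetricName "cpu_percent" `
           -StartTime (Get-Date).AddMinutes(-15) -EndTime (Get-Date) `
           -AggregationType Average).Data.Average | Where-Object { $_ -ne $null }
$avgCpu = ($values | Measure-Object -Average).Average

# Current service objective (e.g. "S2" or "S3").
$current = (Get-AzSqlDatabase -ResourceGroupName $rg -ServerName $server `
            -DatabaseName $db).CurrentServiceObjectiveName

# Scale up to S3 above 80% CPU, back down to S2 at or below 60%.
if ($avgCpu -ge 80 -and $current -eq "S2") {
    Set-AzSqlDatabase -ResourceGroupName $rg -ServerName $server `
        -DatabaseName $db -RequestedServiceObjectiveName "S3"
}
elseif ($avgCpu -le 60 -and $current -eq "S3") {
    Set-AzSqlDatabase -ResourceGroupName $rg -ServerName $server `
        -DatabaseName $db -RequestedServiceObjectiveName "S2"
}
```

Note that changing the service objective is an online operation but typically ends with a brief connection drop, so conservative thresholds and retry logic in the application are advisable.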
Since Cosmos DB costs extra for multi-region writes, is it possible to start with single-region writes and upgrade to multi-region writes at a later stage, or will that require a database migration?
From the pricing information it looks like multi-region write pricing kicks in even if you don't have any geo-redundancy configured. So it seems you either have to accept the higher cost from the start, or pay the price of a migration at a later stage. Is this a correct observation?
Multi-region write functionality can be enabled after a single-region account has been created. This can be done via the Azure portal or via PowerShell: Set up Azure Cosmos DB global distribution using the SQL API.
This applies to the SQL (Core) API.
PowerShell example: Replicate an Azure Cosmos DB database account in multiple regions and configure failover priorities using PowerShell
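As a rough sketch of that PowerShell route (account, resource group, and region names below are placeholders; it assumes the Az.CosmosDB module and an authenticated session), the flow is to add the second region to the existing account and then turn on multi-region writes:

```powershell
# Hedged sketch: add a second region to an existing single-region account,
# then enable multi-region writes. Names and regions are placeholders.

$rg      = "my-rg"
$account = "my-cosmos-account"

# Describe the desired region list: the existing write region plus a new one.
$regions = @(
    New-AzCosmosDBLocationObject -LocationName "East US"   -FailoverPriority 0
    New-AzCosmosDBLocationObject -LocationName "West US 2" -FailoverPriority 1
)

# Apply the new region configuration to the account.
Update-AzCosmosDBAccountRegion -ResourceGroupName $rg -Name $account `
    -LocationObject $regions

# Enable multi-region writes on the account.
Update-AzCosmosDBAccount -ResourceGroupName $rg -Name $account `
    -EnableMultipleWriteLocations:$true
```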
I have a site deployed on Azure, using Cloud Services, Storage, and SQL Database.
I want high availability and disaster recovery for our Azure website.
My question is: how do we provide this on Azure? Is it already managed by Azure, or do we need to use additional Azure services for it?
Thanks in Advance
Well, I don't think DR is really needed, since everything you use is a PaaS service: if you trust Azure, it will handle this for you, and if you don't trust Azure, DR inside Azure won't help you anyway ;)
So, in my opinion, the best way to achieve what you are looking for is to use the built-in HA for Cloud Services (increase the instance count), while Storage and Azure SQL are highly available by design.
If you really, really want DR, you can implement Traffic Manager with an extra copy of your Cloud Service in another Azure region, and set up storage replication and Azure SQL geo-replication (a sketch of the Traffic Manager piece follows below).
I won't give links to the documentation, as all of these can be found in under five minutes with any search engine.
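Here is a minimal, hedged PowerShell sketch of the Traffic Manager part of that setup, with placeholder names and hostnames and assuming the Az.TrafficManager module; the second Cloud Service copy and the storage/SQL replication still have to be configured separately:

```powershell
# Hedged sketch: a Priority-routed Traffic Manager profile that sends traffic
# to the primary region and fails over to the secondary copy when the
# health probe against the primary fails. All names are placeholders.

$rg = "my-rg"

# Profile with priority (failover) routing and an HTTPS health probe.
$tmProfile = New-AzTrafficManagerProfile -Name "my-site-tm" -ResourceGroupName $rg `
    -TrafficRoutingMethod Priority -RelativeDnsName "my-site-dr" -Ttl 30 `
    -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/"

# Primary deployment (existing region) gets priority 1.
New-AzTrafficManagerEndpoint -Name "primary" -ProfileName $tmProfile.Name `
    -ResourceGroupName $rg -Type ExternalEndpoints `
    -Target "mysite-primary.cloudapp.net" -EndpointLocation "West Europe" `
    -EndpointStatus Enabled -Priority 1

# Standby copy in the second region only receives traffic if the primary
# endpoint is unhealthy.
New-AzTrafficManagerEndpoint -Name "secondary" -ProfileName $tmProfile.Name `
    -ResourceGroupName $rg -Type ExternalEndpoints `
    -Target "mysite-secondary.cloudapp.net" -EndpointLocation "North Europe" `
    -EndpointStatus Enabled -Priority 2
```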