Azure SQL - Geo-Replication: Data loss?

I have an Azure SQL database in West US with geo-replication enabled to sync to East US, and I want to know:
How often does the geo-replication sync run to keep East US up to date?
In case of a West US regional failure that forces a failover to East US, would there be any data loss?

Update:
Automated backups, according to this documentation: Both SQL Database and SQL Managed Instance use SQL Server technology to create full backups every week, differential backups every 12-24 hours, and transaction log backups every 5 to 10 minutes. The frequency of transaction log backups is based on the compute size and the amount of database activity.
According to this documentation, if an outage is detected, Azure waits for the period you specified by GracePeriodWithDataLossHours. The default value is 1 hour. If you cannot afford data loss, make sure to set GracePeriodWithDataLossHours to a sufficiently large number, such as 24 hours. Use manual group failover to fail back from the secondary to the primary.
According to this answer, Grace period means to allow time for the database to failover within the primary region.
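For reference, here is a minimal sketch of how that grace period can be set when creating a failover group with the Python SDK. This assumes the azure-mgmt-sql package, where the equivalent setting is expressed in minutes rather than hours; all resource names below are hypothetical:

```python
# A minimal sketch, assuming the azure-mgmt-sql package; all names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import (
    FailoverGroup,
    FailoverGroupReadWriteEndpoint,
    PartnerInfo,
)

sub = "00000000-0000-0000-0000-000000000000"  # hypothetical subscription id
client = SqlManagementClient(DefaultAzureCredential(), sub)

partner_id = (
    f"/subscriptions/{sub}/resourceGroups/my-rg"
    "/providers/Microsoft.Sql/servers/eastus-server"
)
db_id = (
    f"/subscriptions/{sub}/resourceGroups/my-rg"
    "/providers/Microsoft.Sql/servers/westus-server/databases/mydb"
)

# GracePeriodWithDataLossHours = 24 in PowerShell corresponds to 1440 minutes here.
client.failover_groups.begin_create_or_update(
    resource_group_name="my-rg",
    server_name="westus-server",  # current primary (West US)
    failover_group_name="my-fog",
    parameters=FailoverGroup(
        read_write_endpoint=FailoverGroupReadWriteEndpoint(
            failover_policy="Automatic",
            failover_with_data_loss_grace_period_minutes=24 * 60,
        ),
        partner_servers=[PartnerInfo(id=partner_id)],
        databases=[db_id],
    ),
).result()
```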

Related

Azure SQL - would Geo-replication cause any performance impact?

I have an Azure SQL database in West US and I want to set up a failover group with East US.
Would Azure SQL geo-replication / a failover group cause any performance impact? If so, what would the impact be?
Talking about the impact in case of failover:
There are two scenarios: planned failover and unplanned failover.
For planned failover:
Your primary database (West US) will first synchronize with the secondary database (East US); then the East US database becomes the primary. This prevents data loss.
For unplanned failover:
The secondary database (East US) immediately takes over as the primary. Data loss may occur, depending on when the last synchronization happened.
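To make the distinction concrete, here is a hedged sketch of both failover calls using the Python SDK (assuming the azure-mgmt-sql package; server and group names are hypothetical). A planned failover is issued against the secondary and synchronizes first; the forced variant takes over immediately and explicitly allows data loss:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Planned failover: issued against the secondary server (East US), which
# synchronizes with the primary before taking over -- no data loss.
client.failover_groups.begin_failover(
    resource_group_name="my-rg",
    server_name="eastus-server",  # the server becoming the new primary
    failover_group_name="my-fog",
).result()

# Unplanned (forced) failover: the secondary takes over immediately;
# unreplicated transactions are lost, hence the explicit method name.
client.failover_groups.begin_force_failover_allow_data_loss(
    resource_group_name="my-rg",
    server_name="eastus-server",
    failover_group_name="my-fog",
).result()
```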
There will be a performance impact in both cases: latency will increase. Microsoft has defined some best practices to minimize this impact.
Refer to: https://learn.microsoft.com/en-us/azure/azure-sql/database/auto-failover-group-overview?tabs=azure-powershell#failover-groups-and-network-security
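For example, one way to reduce the day-to-day impact is to offload read-only queries to the secondary through the failover group's read-only listener. A sketch with pyodbc, where the failover group name, database, and credentials are all hypothetical:

```python
import pyodbc

# The failover group exposes two listener endpoints:
#   <fog-name>.database.windows.net            -> current primary (read-write)
#   <fog-name>.secondary.database.windows.net  -> current secondary (read-only)
READ_ONLY_CONN = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:my-fog.secondary.database.windows.net,1433;"  # hypothetical
    "Database=mydb;Uid=appuser;Pwd=<password>;"
    "Encrypt=yes;ApplicationIntent=ReadOnly;"  # route to the readable secondary
)

with pyodbc.connect(READ_ONLY_CONN) as conn:
    row = conn.cursor().execute("SELECT COUNT(*) FROM dbo.Orders").fetchone()
    print(row[0])
```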

Azure Synapse Billing Model for individuals?

This is a question about the pricing model of Azure Synapse: how it works, and how costs accrue for developers who are doing self-study and exploring/learning the services.
For this purpose I set up a pay-as-you-go subscription. The first question is: is this the right scope/subscription for individuals who want to practice hands-on with Azure services?
Last Sunday, i.e. 19 July 2020 (4 days ago), I provisioned two services: a SQL server and a Synapse SQL pool (data warehouse).
The Synapse SQL pool was set to 100 DWU, and I paused the service immediately after creating it.
I was therefore expecting to be charged only $1.51 (since I had paused the service, and the billing rate for 100 DWU is $1.51 per hour).
However, checking today, i.e. 4 days after the services were provisioned, I see accumulated charges of $20.15.
Does anyone know how this works out?
I have raised an SR for this and am awaiting a response from Microsoft.
Appreciate it if anyone could give me some leads.
Regards
Lokesh
Did your bill include storage, or was the $20 from compute alone? It would be about right for Synapse data warehouse compute + storage: 1 hour of DWU100 compute plus 4 days of 1 TB storage in the Central US region (1 h × $1.51 + 96 h × $0.19 = $1.51 + $18.24 = $19.75).
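Spelling out that arithmetic (rates as quoted above):

```python
# Reproducing the estimate above; the rates are the ones quoted in the answer.
compute_rate = 1.51     # $/hour for DWU100 compute
storage_rate = 0.19     # $/hour for the 1 TB storage slot
compute_hours = 1       # pool ran ~1 hour before being paused
storage_hours = 4 * 24  # storage accrues even while the pool is paused

total = compute_hours * compute_rate + storage_hours * storage_rate
print(f"${total:.2f}")  # $19.75 -- close to the observed $20.15
```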
Storage in the "old" Azure SQL Data Warehouse comes in 1 TB slots, so you will be billed for a full 1 TB even if you don't use it all. With the new Synapse on-demand model you can use Azure Data Lake as your storage, and it is billed by actual usage.

Improve CPU Utilization by Restructuring Nodes

We have a database located in the North Europe region and two App Service nodes on Azure (West Europe and North Europe). We use Traffic Manager to route traffic.
Our SQL database and storage are located in North Europe.
When we started the website, the European locations were closest to our customers.
However, we have seen a shift, and most of our customers are now from the USA.
We have high CPU utilization even though we have a lot of instances in each node.
The question is:
Since most of our customers are from the USA and it's hard to relocate the database, is it better to keep the app structure as it is (North Europe and West Europe), or to create a new node in the USA, even though that node will still need to communicate with the database in North Europe?
Thank you
Having your app in a US region and your database in Europe is not recommended.
These are a few of the things you will run into:
1) High latency, since data queries will have to round-trip to Europe.
2) Higher resource utilization, since each request that accesses the DB will take longer; this increases memory usage while requests wait on data, and it makes the impact of load much more severe on the app.
3) Cross-region data egress: you will need to pay for all the data moving from Europe to the US every time there is a query.
A better solution would be to do the following:
1) Setup a new DB in us region and hook up active geo-replication
At this point you will have a hot/cold configuration where any instance can be used to read data from the DB, but only the primary instance can be used for write operations.
2) Create a new version of the App/App Service plan in US region
3) Adapt your code to understand your geo-distributed topology (see the sketch after this list).
Your app should be able to send all reads to the "closest" region and all writes to the primary database.
4) Deploy the code to all regions
5) Add the new region to the Traffic Manager profile.
While this is not ideal, since write operations might still have to jump the pond, most apps have a read/write pattern that is heavily skewed towards read operations (roughly 85% reads / 15% writes), so this solution works out, with the added benefit of giving you HA in case one of the regions goes down.
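As a rough sketch of the routing logic from step 3 (all server names and endpoints here are hypothetical; writes go through the failover-group listener, reads to the nearest replica):

```python
import os
import pyodbc

# Hypothetical endpoints: writes always go to the failover-group listener
# (the current primary); reads go to the replica closest to this instance.
WRITE_SERVER = "my-fog.database.windows.net"
READ_SERVERS = {
    "northeurope": "neu-server.database.windows.net",
    "eastus": "eus-server.database.windows.net",  # geo-replica in the US
}

REGION = os.environ.get("APP_REGION", "northeurope")  # set per deployment

def connect(server: str) -> pyodbc.Connection:
    return pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{server},1433;Database=mydb;"
        "Uid=appuser;Pwd=<password>;Encrypt=yes;"
    )

def for_read() -> pyodbc.Connection:
    # ~85% of traffic: served by the nearest (possibly read-only) replica.
    return connect(READ_SERVERS[REGION])

def for_write() -> pyodbc.Connection:
    # Writes must cross the pond to the primary until a failover occurs.
    return connect(WRITE_SERVER)
```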
You might want to look at this talk, where I go over how to set up a geo-distributed app using App Service, SQL Azure, and the technique outlined above.
Have you considered sharding your data based on the location of your users? In terms of performance it will be better, and you can carry out maintenance during the off-peak hours of each region. Allow me to recommend this article.

Is there a cost for Azure Site Recovery if you finish one-time migration from on-prem to Azure within 31 days?

"Customers can replicate on-premises workloads to Azure with Azure Site Recovery for 31 days at no charge, effectively making migration to Azure free."
While the above statement from a Microsoft blog indicates that ASR is free for a one-time migration completed within 31 days, I wonder why the word "effectively" was used. I'm looking for confirmation from those who have used it for a one-time migration that you only have to pay for storage, storage transactions, and outbound data transfer, and that there is no per-instance cost.
"Effectively" here means that there is no cost as long as you do not continue using the service, incur data egress fee charges, etc. Basically, it's free to use as long as you remember to fail over your VMs, disable ASR and delete your Recovery Services Vault after you have finished your replication. (It should go without saying that the bandwidth you'll require obviously isn't "free").
Where people get into trouble is forgetting that ASR is still running and doing something like copying a VHD after it has been migrated. Once you fail over the VMs to Azure for the final time, you start incurring normal IaaS charges.
One final note: if you have really busy workloads or a huge number of VMs, it's possible to incur some small charges for storage transactions with ASR, especially if you're using premium storage. I've never seen this come to more than a few dollars a month.
If you think there's a chance that you might use the service after 31 days, you can use this tool to estimate charges: http://oms-calculator-webapp.azurewebsites.net/home

Azure Data Sync Frequency Change

I signed up for an Azure free trial, set up two Azure databases in two different regions, and synced them using Azure Data Sync in the portal. It works pretty well, but my question is how to shorten the sync frequency.
Azure's minimum interval for automated sync is 5 minutes, but I want it to be around 30 seconds to 1 minute, because my website is an online shopping site, so I need fast sync.
Is there any way to work around this?
5 minutes is the lowest interval you can set. Moreover, the service was never meant for near-real-time replication.
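That said, if you need fresher data than the 5-minute schedule allows, one possible workaround (with the caveat above, and not guaranteed to be supported at this rate) is to trigger syncs manually on your own timer. A sketch assuming the azure-mgmt-sql package, with hypothetical resource names:

```python
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fire a manual sync every 60 seconds instead of waiting for the 5-minute
# automated schedule. Data Sync is still not a near-real-time technology,
# so each sync may itself take longer than this interval.
while True:
    client.sync_groups.trigger_sync(
        resource_group_name="my-rg",  # hypothetical
        server_name="hub-server",
        database_name="hub-db",
        sync_group_name="my-sync-group",
    )
    time.sleep(60)
```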
