Using Azure Data Sync between databases in different Azure datacenters

We are looking to host our Azure web application in 2-3 locations globally, to reduce load latency and for BCP if an application server fails (we will use Traffic Manager to direct traffic).
We will be co-locating the Azure SQL databases with the web app, and we want the databases synced in near real time. Data volumes will be under 1 GB on any given day, and there will be no on-premises database. The intent is not a master-slave setup but rather active-active databases.
Given that Azure Data Sync is now in GA:
a) What kind of sync delay should I plan for? (I can tolerate a few seconds of latency.)
b) Will there be any performance impact on either database during sync, and how do conflicts get resolved? Latest timestamp?
c) Can I use out-of-the-box Azure portal functionality, or will I need additional tools?

The minimum sync frequency is 5 minutes, so Data Sync alone won't give you the few seconds of latency you are hoping for. See https://learn.microsoft.com/en-us/azure/sql-database/sql-database-sync-data
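On (b) and (c): conflict resolution in SQL Data Sync is hub-wins or member-wins, chosen when you create the sync group, not latest-timestamp, and the whole setup can be done in the Azure portal or scripted. A hedged sketch with the Az PowerShell module, assuming placeholder resource names (300 seconds is the 5-minute minimum interval):

    # Create a sync group on the hub database; members are added afterwards
    # with New-AzSqlSyncMember. All names below are placeholders.
    New-AzSqlSyncGroup -ResourceGroupName "my-rg" -ServerName "hub-server" `
        -DatabaseName "appdb" -Name "appdb-sync" `
        -IntervalInSeconds 300 -ConflictResolutionPolicy "HubWin" `
        -SyncDatabaseName "syncmetadb" -SyncDatabaseServerName "hub-server" `
        -SyncDatabaseResourceGroupName "my-rg"

Note that Data Sync is hub-and-spoke rather than true active-active: every member syncs through the hub, so for two regions you would make one database the hub and the other a member.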

Related

Virtual machine with SQL Server recovery using Premium disk

I have a VM with SQL Server and an application that serves no more than 50 users. I don't require a zero-downtime application if my VM or datacenter has an issue, but what I do need to ensure is that I can make the app available again in less than 30 minutes.
First approach: using an Availability Set with 2 VMs won't actually work, because my SQL Server lives in the same VM and I don't think an Availability Set will take care of real-time replication of my SQL Server data; it only cares about the availability of the VMs themselves, not the persistent data (if I'm wrong please let me know). So an Availability Set is not for me. It would also be twice as expensive because of the 2 VMs.
Second approach: using Azure Site Recovery for disaster recovery. From what I read, it won't guarantee zero data loss, because there is a minimum replication frequency (I think it is 1 hour), so you have to be prepared to deal with up to 1 hour of data loss, and I don't like that.
Third option: Azure Backup for SQL Server VMs. This could work; the only downside is an RPO of 15 minutes, which is not that much, but the problem is that if the user generates some critical records in the app during that window, we won't be able to get them back into the app, because users always destroy the source information right away once they register it in the app.
Fourth approach: because I don't really require a zero-downtime app, I was thinking of just keeping the current VM with 2 Premium disks, one for SQL Server data files and the other for SQL Server logs. In case of a VM failure I will be notified by users immediately, and what I can do is create snapshots of the OS disk and the SQL Premium disks (3 in total) and then create a new VM from those snapshots, so I will get a working VM, maybe in a different region, with the very last data inserted into SQL before the failure happened.
Of course, I guess I will need a load balancer in front of the VM so I can just reroute traffic to the new VM. The failed VM I will simply kill, and use the new VM as my new system. If a failure happens again I just follow the same process, so this way I only pay for one VM instead of two.
Has someone already tried this? Does it sound reasonable and doable, or am I missing something big, or maybe I won't get what I expect?
You'd be better off using Azure SQL Database (PaaS) instead of a VM; there are many options that fit your needs. Running the OS + SQL Server in the same VM is not recommended. By moving to Azure SQL Database (PaaS) you can reduce the hardware for the application VM and size the SQL tier to support 50 users. You can also use a load balancer as you said, either Traffic Manager (https://learn.microsoft.com/pt-br/azure/traffic-manager/traffic-manager-overview) or Application Gateway (https://learn.microsoft.com/pt-br/azure/application-gateway/overview), to route traffic to the VMs where the application is running. Depending on your application, you could migrate it to Azure Web Apps (https://learn.microsoft.com/en-us/azure/app-service/).
With Azure SQL Database (PaaS) you can certainly stay under 30 minutes; I would say almost zero downtime, even though you don't require it.
Automatic backups and Point-in-time restores
https://learn.microsoft.com/pt-br/azure/sql-database/sql-database-automated-backups
Active geo-replication
https://learn.microsoft.com/pt-br/azure/sql-database/sql-database-active-geo-replication
Zone-redundant databases
https://learn.microsoft.com/pt-br/azure/sql-database/sql-database-high-availability
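As a hedged illustration with the Az PowerShell module (all resource and server names are placeholders), active geo-replication needs only one command to create a readable secondary in another region:

    # Create a readable secondary of appdb on a server in another region.
    New-AzSqlDatabaseSecondary -ResourceGroupName "my-rg" -ServerName "sql-eastus" `
        -DatabaseName "appdb" -PartnerResourceGroupName "my-rg-dr" `
        -PartnerServerName "sql-westus" -AllowConnections "All"

If the primary region fails, you promote the secondary with Set-AzSqlDatabaseSecondary -Failover and point the app at it, which fits comfortably inside a 30-minute window.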
Finally, I don't think an Always On availability group (https://learn.microsoft.com/en-us/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server?view=sql-server-ver15) is a good solution here, since it is expensive and there are only 50 users. That's why I believe you are better off with a PaaS solution for both your application and database. Your fourth option sounds fine, but you would need to create a new VM, configure the IP, install SQL Server, configure it, and so on before SQL is back up.
And what are users going to do if the failure happens while you are not available to fix it immediately? Your 30-minute target won't be met :)

Migrating websites to the Azure platform [closed]

We have several websites hosted on a dedicated server with the following configuration:
Windows OS based dedicated server
MS SQL Server 2012 for the database
IIS 7.5+
and other software for managing the websites, such as Plesk.
We have developed websites in ASP.NET Web Forms on Framework 4.0 or 4.5 (about 10 websites).
We also have a few ASP.NET MVC based websites on Framework 4.5 (about 5 websites).
We use the default InProc session state for sessions.
Besides this, we have other software installed for security, etc.
We also have a mobile application API running on the same server, and the same application is used to send push notifications to iOS & Android devices.
Now I have a few questions regarding migration to the Azure environment.
First, my big concern is session state. I want to migrate to Azure without making any changes to code, except changes to web.config. Is this possible?
Second, we use MS SQL Server 2012 as the database, and on Azure we would have to use Azure SQL Database, which I believe is not the same as MS SQL Server. Can I use Azure SQL Database, or should I stick with MS SQL Server, since I have applications running on it and migrating to Azure SQL Database may create problems?
Third, let's say I choose Web + Mobile --> App Service Standard plan (which comes with up to 10 instances). What are these instances, and will an individual session always connect to the same instance?
Fourth: I have about 20 databases; one of them is about 6 GB and the other databases are about 200 MB-700 MB. Which service should I use for the databases if I go with Azure SQL Database?
Single Database or Elastic?
Can I create multiple databases under elastic mode?
Let's say I choose "100 eDTUs: 10 GB included storage per pool, 200 DBs per pool, $0.20/hour". Will I have a total of 10 GB of space for all databases in the elastic pool? What is "per pool", and how many pools do I get in this option?
Or is MS SQL Server on a virtual machine the better option, since I can run SQL Server based session state there?
Fifth: disk space. Let's say I choose App Service "S2: 2 Core(s), 3.5 GB RAM, 50 GB Storage, $0.200". Does the 50 GB of disk space include the OS, or is it the space allocated to the files we upload?
Sixth: some of our applications are used to send push notifications to iOS & Android devices. I am not sure if they will work in the Azure environment, as they need certain ports to be open and also some sort of certificate installed on the server.
I have asked a lot of questions because I didn't get clarity from MS chat; they just sent links, which can be confusing at times. I hope I get clarity here.
Q: First, my big concern is session state. I want to migrate to Azure without making any changes to code, except changes to web.config. Is this possible?
If one of your concerns is code refactoring, then the model you should choose is Infrastructure-as-a-Service. In this model there is no need to change code, because the infrastructure on Azure can mirror the on-premises setup: you provision virtual machines to run Windows Server, SQL Server and IIS. Software versions are entirely your choice, as long as each version is still supported in the Microsoft product lifecycle when you procure new licenses.
If you'd like to modernize your web application, Azure App Service can be a good destination. Azure App Service can run code compiled against .NET Framework 4.0. InProc session state is not guaranteed in Azure App Service, however, so you need to look into an alternative, for example Azure Redis Cache.
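If Redis is the route you take, the session provider swap can usually be done purely in web.config, which matches your first constraint. A minimal sketch, assuming the Microsoft.Web.RedisSessionStateProvider NuGet package and a placeholder cache name and key:

    <!-- Inside <system.web>; host and accessKey are placeholders for your cache. -->
    <sessionState mode="Custom" customProvider="RedisSessionState">
      <providers>
        <add name="RedisSessionState"
             type="Microsoft.Web.Redis.RedisSessionStateProvider"
             host="contoso.redis.cache.windows.net"
             accessKey="your-access-key"
             ssl="true" />
      </providers>
    </sessionState>

Adding the NuGet package does touch the project, but no application code that reads or writes Session has to change.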
Q: Second, we use MS SQL Server 2012 as the database, and on Azure we would have to use Azure SQL Database, which I believe is not the same as MS SQL Server. Can I use Azure SQL Database, or should I stick with MS SQL Server, since I have applications running on it and migrating may create problems?
Without an impact analysis and a sense of how complex your data model is, it's hard to say whether Azure SQL Database is compatible with your databases. Fortunately, Microsoft provides a tool named Data Migration Assistant (DMA) which assists you in performing a compatibility analysis for Azure SQL Database; this link gives you more details (https://learn.microsoft.com/en-us/azure/sql-database/sql-database-cloud-migrate). Moving from SQL Server to Azure SQL Database gains you more in high availability, disaster recovery and scalability, and the administration effort for server management and OS patching is significantly reduced. With SQL Server in an Azure VM, on the other hand, the migration cost is lower, as you only need to lift and shift (provision the VM, then detach/attach or backup/restore the databases).
Q: Third, let's say I choose Web + Mobile --> App Service Standard plan (which comes with up to 10 instances). What are these instances, and will an individual session always connect to the same instance?
Sessions are not guaranteed to stick. When you choose Azure App Service, your web application runs on virtualized servers running Windows Server and IIS, and the term "instance" means one of those server instances. App Service handles scaling by allocating compute across multiple instances so that your application does not crash for lack of memory or resources. (App Service's ARR affinity cookie can pin a client to an instance, but InProc state is still lost whenever an instance restarts or is scaled in, so don't rely on it.) The default when you first provision your web app is 1 instance, but the number of instances is configurable.
Q: Fourth: I have about 20 databases; one of them is about 6 GB and the other databases are about 200 MB-700 MB. Which service should I use for the databases if I go with Azure SQL Database?
Single Database or Elastic?
Can I create multiple databases under elastic mode?
Let's say I choose "100 eDTUs: 10 GB included storage per pool, 200 DBs per pool, $0.20/hour". Will I have a total of 10 GB of space for all databases in the elastic pool? What is "per pool", and how many pools do I get in this option?
Or is MS SQL Server on a virtual machine the better option, since I can run SQL Server based session state there?
Choosing Single Database or Elastic depends on the performance profile and peak load of your databases. A single database suits an independent database, where you specify the DTUs (Database Transaction Units) for predictable performance. An elastic pool is best for managing a set of databases together, and it is the choice when performance and usage are unpredictable.
In your case, I'd recommend an elastic pool. An elastic pool lets you set eDTUs for the pool as a whole, no matter how many DTUs a specific database in the pool needs at any moment. Elastic pools also monitor and analyze performance in depth, giving you an insight into and an overall picture of each database's performance.
With a pool, you don't need to worry about how much storage is carved out for each database, nor about the number of databases you can put in one. With 20 databases in total, you need only one pool.
The eDTUs you need can be calculated via this website: http://dtucalculator.azurewebsites.net/. Just run one of the scripts given on the website against your SQL Server (where your on-premises databases are running) to capture performance metrics, then upload the resulting file to the website, and it will give you a number. Say, for example, the result is that the 20 databases need 100 eDTUs in total: you then create an elastic pool and set 100 eDTUs on it. However, a Basic pool only includes 10 GB of storage, which is not enough for your roughly 20 GB of data (one 6 GB database plus 19 databases of 200-700 MB each), so you need a Standard pool, where 100 eDTUs comes with a 750 GB maximum. You could instead choose a Basic plan of 1,200 eDTUs to reach a 156 GB maximum, but that is never recommended, because storage is much cheaper than eDTUs.
In a nutshell, with the rough figures above, I'd recommend the Standard elastic pool plan with 100 eDTUs. You can increase the eDTU count if it does not satisfy the performance of all 20 databases; no database downtime is needed when adjusting it.
That said, a single pool is not automatically my recommendation; it depends on your database workloads. For example, if 5 of the 20 databases carry heavy workloads for an ERP or business-critical system while the rest are just normal databases, you'd want two elastic pools: one set with a high number of eDTUs and another with a low number.
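As a hedged sketch with the Az PowerShell module (resource names are placeholders), creating a Standard 100 eDTU pool and moving an existing database into it looks roughly like this:

    # Create a Standard elastic pool with 100 eDTUs; all names are placeholders.
    New-AzSqlElasticPool -ResourceGroupName "my-rg" -ServerName "my-sqlserver" `
        -ElasticPoolName "web-pool" -Edition "Standard" -Dtu 100 `
        -DatabaseDtuMin 0 -DatabaseDtuMax 100

    # Move an existing database into the pool.
    Set-AzSqlDatabase -ResourceGroupName "my-rg" -ServerName "my-sqlserver" `
        -DatabaseName "site1-db" -ElasticPoolName "web-pool"

Repeat the Set-AzSqlDatabase call per database; eDTUs and storage are then shared across everything in the pool.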
Q: Fifth: disk space. Let's say I choose App Service "S2: 2 Core(s), 3.5 GB RAM, 50 GB Storage, $0.200". Does the 50 GB of disk space include the OS, or is it the space allocated to the files we upload?
With Azure App Service, the OS is not counted. The 50 GB of storage goes directly to your application's space (images, compiled DLLs, videos, libraries, and so on).
Q: Sixth: some of our applications are used to send push notifications to iOS & Android devices. I am not sure if they will work in the Azure environment, as they need certain ports to be open and also some sort of certificate installed on the server.
Azure Notification Hubs can handle push notifications for you. A notification hub holds the credential for each platform (e.g. the APNS certificate for iOS) and manages the devices. Here is a sample reference if you are familiar with iOS: https://learn.microsoft.com/en-us/azure/notification-hubs/notification-hubs-ios-apple-push-notification-apns-get-started. Notification Hubs also support token-based authentication for APNS if you need it.
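Once the hub holds your APNS certificate and GCM/FCM key, sending from your existing API is only a few lines with the Microsoft.Azure.NotificationHubs package. A minimal sketch; the connection string and hub name are placeholders:

    // Requires the Microsoft.Azure.NotificationHubs NuGet package.
    using System.Threading.Tasks;
    using Microsoft.Azure.NotificationHubs;

    public static class PushSender
    {
        public static async Task SendHelloAsync()
        {
            // Connection string and hub name below are placeholders.
            var hub = NotificationHubClient.CreateClientFromConnectionString(
                "<DefaultFullSharedAccessSignature connection string>",
                "<hub name>");

            // The hub talks outbound to APNS/GCM for you, so no inbound ports
            // or per-server certificates are needed on your web server.
            await hub.SendAppleNativeNotificationAsync("{\"aps\":{\"alert\":\"Hello iOS\"}}");
            await hub.SendGcmNativeNotificationAsync("{\"data\":{\"message\":\"Hello Android\"}}");
        }
    }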
For each case, please give more details (e.g. your mobile scenario) and specific questions if possible, so that I and others here can elaborate further.

Is there any Azure Cache/database supporting multi-region automatic failover

We have one web app running on Azure which pushes data to Azure Redis; an on-premises component then reads that data from Azure Redis and processes it.
Recently, due to an Azure region failure, that Azure Redis instance went down, and neither the web app nor my on-premises component could reach it.
How can I ensure zero downtime for my web app's access to Azure Redis?
Redis geo-replication doesn't solve my problem, as it is unidirectional with manual failover. My web app and on-premises component would also need to know both Redis endpoints and pick between them, which is not seamless.
Azure Redis doesn't support a cluster with shards in multiple regions.
So my requirement is: the web app and the on-premises component each hold a single cache/database endpoint (with no knowledge of the replication behind it), and if the primary cache/DB fails, that endpoint should automatically route to the replicated cache or DB.
Per the Azure documentation, Azure Redis does not seem to be the right fit for this requirement. Is there any other Azure component which fits it?
I had a look at Azure SQL with failover groups. Per the documentation: "you can configure a grace period that controls the time between the detection of the outage and the failover itself. It is possible that traffic manager initiates the endpoint failover before the failover group triggers the failover of the database. In that case the web application cannot immediately reconnect to the database. But the reconnections will automatically succeed as soon as the database failover completes." We can set that grace period to 1 hour (the minimum).
So does that mean that with Azure SQL as well, if one DB server fails, my web application will not be able to write to the DB for at least 1 hour? Is my understanding correct?
Azure SQL and Azure Cosmos DB both support a single endpoint and HA across regions; you might want to look into those.
They are not caches, but they do allow for a single endpoint and automatic failover.
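For the Azure SQL route, a failover group gives both your web app and the on-prem component one stable listener endpoint. A hedged sketch with the Az PowerShell module, all resource names being placeholders:

    # Create a failover group between two servers; names are placeholders.
    New-AzSqlDatabaseFailoverGroup -ResourceGroupName "my-rg" `
        -ServerName "sql-primary" -PartnerServerName "sql-secondary" `
        -FailoverGroupName "my-fog" -FailoverPolicy Automatic `
        -GracePeriodWithDataLossHours 1

    # Add an existing database to the group. Clients then connect to
    # my-fog.database.windows.net, which always follows the current primary.
    $db = Get-AzSqlDatabase -ResourceGroupName "my-rg" -ServerName "sql-primary" -DatabaseName "appdb"
    Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName "my-rg" -ServerName "sql-primary" `
        -FailoverGroupName "my-fog" -Database $db

On the grace period: with the 1-hour minimum you should indeed plan for up to an hour before automatic failover fires, as you read; you can always trigger a manual failover sooner.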

How do I make my Windows Azure application resistant to a catastrophic Azure datacenter event?

AFAIK, Amazon AWS offers so-called "regions" and "availability zones" to mitigate the risk of partial or complete datacenter outage. It looks like if I have copies of my application in two "regions" and one "region" goes down, my application can continue working as if nothing happened.
Is there something like that with Windows Azure? How do I address the risk of a catastrophic datacenter outage with Windows Azure?
Within a single data center, your Windows Azure application has the following benefits:
Going beyond one compute instance, your VMs are divided into fault domains, across different physical areas. This way, even if an entire server rack went down, you'd still have compute running somewhere else.
With Windows Azure Storage and SQL Azure, storage is triple replicated. This is not eventual replication - when a write call returns, at least one replica has been written to.
Ok, that's the easy stuff. What if a data center disappears? Here are the features that will help you build DR into your application:
For SQL Azure, you can set up Data Sync. This facility synchronizes your SQL Azure database with either another SQL Azure database (presumably in another data center) or an on-premises SQL Server database. More info here. Since this is still a preview feature, you have to go here to set it up.
For Azure storage (tables, blobs), you'll need to handle replication to a second data center yourself, as there is no built-in facility today. This can be done with, say, a background task that pulls data every hour and copies it to a storage account somewhere else. EDIT: per Ryan's answer, there is data geo-replication for blobs and tables. HOWEVER: aside from a mention in a blog post in December, and possibly at PDC, this is not live.
For compute availability, you can set up Traffic Manager to load-balance across data centers. This feature is currently in CTP; visit the Beta area of the Windows Azure portal to sign up.
Remember that with DR, whether in the cloud or on-premises, there are additional costs (such as bandwidth between data centers, storage for duplicate data in a secondary data center, and compute instances in additional data centers).
Just like with on-premises environments, DR needs to be carefully thought out and implemented.
David's answer is pretty good, but one piece is incorrect. For Windows Azure blobs and tables, your data is actually geographically replicated today between sub-regions (e.g. North and South US). This is an async process with a target lag of about 10 minutes; it is out of your control and exists purely for the loss of a data center. In total, your data is replicated 6 times across 2 different data centers when you use Windows Azure blobs and tables (impressive, no?).
If a data center were lost, they would flip the DNS for blob and table storage over to the other sub-region, and your account would appear online again. This is true only for blobs and tables (not queues, not SQL Azure, etc.).
So, for true disaster recovery, you could use Data Sync for SQL Azure and Traffic Manager for compute (assuming you run a hot standby in another sub-region). If a datacenter were lost, Traffic Manager would route to the new sub-region, and you would find your data there as well.
The one failure you didn't account for is an error being replicated across data centers. In that scenario, you may want to consider running Azure PaaS as part of the HP Cloud offering, in either a load-balanced or failover scenario.

Pulling data asynchronously from third-party web service on Windows Azure Platform

I want to pull a large amount of data, frequently, from different third-party API web services and store it in a staging area (this is what I want to decide on right now), from which it will then be moved, one by one as required, into my application's database.
I wanted to know: can I use the Azure platform to achieve the above, and how well suited is the Azure platform to this task?
What if the data to be pulled is large and the pull frequency is high, i.e. maybe half-hourly or hourly, for 2,000 different users?
I assume that if this is possible at all, then bandwidth, data storage, server capability and so on will not be things for me to worry about, but for Microsoft. And obviously, I should be able to access the data whenever I need it.
If I had to implement this on Windows servers, I know I would use a Windows service. But I don't know how it can be done on the Windows Azure platform, if it is possible at all.
As Rinat stated, you can use Lokad's solution. If you choose to do it yourself, you can run a timed task in your worker role: maybe spawn a thread that sleeps, waking every 30 minutes to perform its task. It can then reach out to the web services in question (or maybe one thread per web service?) and fetch data. You can store it temporarily in Azure Table Storage, which costs a fraction of SQL Azure ($0.15 per GB), and then easily read it out of Table Storage on demand and transfer it to SQL Azure.
Assuming your hosted service, storage and SQL Azure are in the same data center (by setting the affinity appropriately), you'd only pay for bandwidth when pulling data from the web services; there'd be no bandwidth charges to read from Table Storage or insert into SQL Azure.
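A minimal sketch of that timed loop in a worker role; the fetch/stage helpers are hypothetical placeholders, not a real API:

    // Classic Windows Azure worker role that wakes every 30 minutes.
    using System;
    using System.Diagnostics;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class PollingWorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            while (true)
            {
                try
                {
                    // Reach out to each third-party web service and stage the results.
                    var data = FetchFromThirdPartyApis();   // hypothetical helper
                    StageToTableStorage(data);              // hypothetical helper
                }
                catch (Exception ex)
                {
                    // Log and keep looping; one failed pull shouldn't kill the role.
                    Trace.TraceError("Pull failed: {0}", ex);
                }
                Thread.Sleep(TimeSpan.FromMinutes(30));
            }
        }

        // Placeholders for the actual API calls and Table Storage writes.
        private object FetchFromThirdPartyApis() { return null; }
        private void StageToTableStorage(object data) { }
    }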
In Windows Azure, a Worker Role is usually what hosts this kind of cloud processing. To accomplish your tasks you'll either need to implement the messaging/scheduling infrastructure yourself or use something like the Lokad.Cloud or Lokad.CQRS open-source projects for Azure.
We use Lokad.Cloud for distributed BI processing of hundreds of thousands of series, and Lokad.CQRS reliably retrieves and synchronizes millions of products on schedule.
Both projects have samples, docs and a community to get you started.
