We are trying to use SQL Azure Geo-Replication for load balancing. 95% of our SQL transactions are read-only and 5% require writes.
In SQL Azure Geo-Replication only one database (the primary) can be read-write (RW); the rest are read-only (RO), so we need to separate the RW and RO traffic. Is there an easy way to use multiple connection strings, one for RW and one for RO?
Assuming you have a web app deployed for each replica as described in Pattern 2, you can create a round-robin Traffic Manager (TM) profile (if replicas are in the same region) or a performance profile (if replicas are in different regions). That way all connections to the TM endpoint will be routed accordingly. Since you have 5% writes, you should also create a failover profile with a different endpoint; it will route all connections to the same web app (and the primary database). You can have up to 4 read-only replicas.
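If you keep the read/write routing inside the app instead, a minimal sketch in Python with pyodbc is enough; the server names, database, and credentials below are placeholder assumptions. Each regional deployment reads from its local secondary and sends writes to the primary:

```python
import pyodbc

# Placeholder names: point READ at the local geo-replica for this region
# and WRITE at the single primary database.
READ_CONN = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver-secondary-westeurope.database.windows.net;"
    "DATABASE=shop;UID=app;PWD=...;"
)
WRITE_CONN = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver-primary.database.windows.net;"
    "DATABASE=shop;UID=app;PWD=...;"
)

def get_connection(readonly: bool) -> pyodbc.Connection:
    """Route the ~95% read-only traffic locally; send writes to the primary."""
    return pyodbc.connect(READ_CONN if readonly else WRITE_CONN)
```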
Azure SQL Database supports a passive, read-only, asynchronously synced replica for disaster recovery.
But our requirement is real-time sync between two active read-write databases, to provide low latency to customers in different locations around the world.
For example: I run an e-commerce website. When I update data in one database server, the other connected databases in sync with it should receive the update. Users from different parts of the world connect to their nearest data center for low latency. If someone buys something or posts a review, it should be updated in all the other databases. That is why we need active-active database sync.
We explored multiple options for this but did not find anything relevant. Can anyone please guide me on how to achieve this?
SQL Server has Peer-to-Peer Transactional Replication, but you need to ensure in the application that conflicting changes are not introduced on multiple nodes.
SQL Server also has Merge Replication, which allows updates at any subscriber, and supports custom conflict resolution.
These are both available on SQL Server VMs. Limited replication options are available on Azure SQL Database Managed Instance. Azure SQL Database also has Data Sync.
Azure Cosmos DB also supports Multi-Master.
In any case, multi-master introduces significant cost and complexity. Often it's better to have a single writable master with regional readable replicas. In that configuration the application needs to connect to the global master for writing, but can read from a local replica. For this pattern you can simply use Failover Groups.
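As a sketch of that pattern: a failover group exposes two stable listener endpoints, a read-write one that always resolves to the current primary and a read-only one that resolves to a secondary. The group name fog-example below is a placeholder:

```python
import pyodbc

# Placeholder failover-group listeners:
#   read-write listener -> always resolves to the current primary
#   read-only listener  -> resolves to a readable secondary in the group
RW_LISTENER = "fog-example.database.windows.net"
RO_LISTENER = "fog-example.secondary.database.windows.net"

def connect(readonly: bool) -> pyodbc.Connection:
    server = RO_LISTENER if readonly else RW_LISTENER
    intent = "ReadOnly" if readonly else "ReadWrite"
    return pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        f"SERVER={server};DATABASE=shop;UID=app;PWD=...;"
        f"ApplicationIntent={intent};"
    )
```

Because the listener names survive a failover, the application never needs to learn which physical server is currently primary.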
I am planning to migrate my existing monolithic cloud RESTful Web API service to Service Fabric in three steps.
The in-process memory cache has been heavily used in my cloud service.
Step 1) Migrate the cloud service to an SF stateful service with 1 replica and a single partition. The cache code stays as-is; no use of Reliable Collections.
Step 2) Horizontally scale the SF monolithic stateful service to 5 replicas and a single partition. The cache code is modified to use Reliable Collections.
Step 3) Break the SF monolithic service down into microservices (stateless / stateful).
Is the above approach sound? Any recommendations? Any drawbacks?
More on Step 2) Horizontal scaling of the SF stateful service:
I am not planning to use an SF partitioning strategy, as I could not come up with a uniform data distribution for my application.
By adding more replicas without partitioning, am I just making my stateful service more reliable (availability)? Is my understanding correct?
I will modify the cache code to use a Reliable Collection (a Reliable Dictionary). The same state data will be available in all replicas.
I understand that a GET can be executed on any replica, but an update/write needs to be executed on the primary replica?
How can I scale my SF stateful service without partitioning?
Can all of the replicas, including the secondaries, listen to my client requests and respond the same way? GET should be able to execute, but how do PUT & POST calls work?
Should I prefer an external cache store (Redis) over Reliable Collections at this step? Use a stateless service?
This document has a good overview of options for scaling a particular workload in Service Fabric and some examples of when you'd want to use each.
Option 2 (creating more service instances, dynamically or upfront) sounds like it would map to your workload pretty well. Whether you use a custom stateful service as your cache or an external store depends on a few things (a cache-aside sketch follows this list):
Whether you have the space on your main compute machines to store the cached data
Whether your service can get away with a simple cache or needs the more advanced features provided by other caching services
Whether your service needs the performance benefit of a cache on the same set of nodes as the web tier, or can afford the latency of calling out to a remote service
Whether you can afford to pay for a caching service, or want to make do with the memory, compute, and local storage you're already paying for with the VMs
Whether you really want to take on building and running your own cache
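If you do opt for an external store, the usual pattern is cache-aside. A minimal sketch with redis-py, assuming a hypothetical Azure Cache for Redis host and a stand-in database helper:

```python
import json
import redis

# Placeholder endpoint: with Azure Cache for Redis this would be your cache host.
cache = redis.Redis(host="mycache.redis.cache.windows.net", port=6380,
                    password="...", ssl=True)

def load_product_from_db(product_id: str) -> dict:
    """Hypothetical stand-in for the real database query."""
    return {"id": product_id, "name": "example"}

def get_product(product_id: str) -> dict:
    """Cache-aside: try Redis first, fall back to the database, then populate."""
    cached = cache.get(f"product:{product_id}")
    if cached is not None:
        return json.loads(cached)
    product = load_product_from_db(product_id)
    cache.setex(f"product:{product_id}", 300, json.dumps(product))  # 5-minute TTL
    return product
```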
To answer some of your other questions:
Yes, adding more replicas increases availability/reliability, not scale. In fact it can hurt write performance, since changes have to be committed on more replicas.
The state data isn't guaranteed to be the same in all replicas, just in a majority of them. Some secondaries can even be ahead of the primary, which is why reading from secondaries is discouraged.
So, to your next question: the recommendation is to perform all reads and writes against the primary, so that you always see consistent, quorum-committed data.
We have an Azure web app and a database we want to replicate all over the world.
So we use Traffic Manager to redirect the user to the closest hosted web app, and with a location setting in the web app, it knows which database to query.
Now, my question is: since only one database (the primary) is writeable and the replicas are read-only, how do I (or Azure) handle that at the moment of calling the database?
For example, if my app is going to add a record to the database, I can't use the nearest DB's connection string; I need to go against the primary one.
Should I handle this myself? Or can I always go against the nearest one, even if it's read-only, and Azure will handle the write by transferring it to the primary DB?
If I am the one who should manage this, then I should handle 2 connection strings, one for the writeable primary DB and one for the closest readable DB, and split my services into write and read actions.
And following this scenario, if I have a stored procedure which WRITES AND READS, how would I handle that?
This is a common issue when using Azure SQL in geo-replication mode. You cannot use traditional load-balancing techniques such as Azure Traffic Manager here. Instead, you should use the retry pattern on your database connections, working from the primary down to the alternate names as required.
AFAIK, there is no easy way to tell, after connecting to a database, whether you are on the primary or a read-only secondary. As per this link, there are some stored procedures you can call to understand the topology. You can also discover this using Azure PowerShell or the API, but then you would have to build that logic into your application.
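One lightweight check worth knowing: DATABASEPROPERTYEX reports whether the current database is updateable, which distinguishes the primary from a read-only secondary. A small sketch with pyodbc:

```python
import pyodbc

def is_primary(conn: pyodbc.Connection) -> bool:
    """True when the current database is updateable, i.e. the primary."""
    row = conn.cursor().execute(
        "SELECT CAST(DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS nvarchar(20))"
    ).fetchone()
    return row[0] == "READ_WRITE"
```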
In short:
You need to handle your database connections yourself and employ retry patterns, etc. (a sketch follows this list)
You should implement CQRS to separate the read and write workloads from each other if you want to take advantage of the read-only secondaries
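A minimal retry sketch for the first point, walking down a candidate list from the primary to the alternates (the candidate list itself is an assumption):

```python
import time
import pyodbc

def connect_with_retry(conn_strings: list, attempts: int = 3) -> pyodbc.Connection:
    """Work down the candidate list (primary first), retrying transient failures."""
    last_error = None
    for conn_str in conn_strings:
        for attempt in range(attempts):
            try:
                return pyodbc.connect(conn_str, timeout=5)
            except pyodbc.Error as exc:
                last_error = exc
                time.sleep(2 ** attempt)  # simple exponential backoff
    raise last_error
```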
Hope that helps.
I have finally got time to start looking at Azure. It looks good, with easy scaling.
Azure SQL, Table Storage and Blob Storage should cover most of my needs: fast access to data, automatic replication, and failover to another datacenter.
Should an app need fast global access, Traffic Manager is there, and one can route users by "Failover" or "Performance".
"Performance" is very nice for Cloud Services and Web Roles / Worker Roles... BUT... what about access to data in SQL Azure / Table Storage / Blob Storage?
I have tried searching the web for what to do about this, but haven't found anything about Traffic Manager that mentions how to access data in such a scenario.
Have I missed anything?
Do people access the storage in the original data center (and, if that fails, use the geo-replication feature)? Is that fast enough? Is internal traffic on the MS network free across datacenters?
This seems like such a simple ...
Take a look at the guidance from Microsoft: Replicating, Distributing, and Synchronizing Data. You could use Service Bus to keep data centers in sync. This can cover SQL databases, storage, search indexes like Solr or Elasticsearch, and so on. The advantage over solutions like SQL Data Sync is that it's technology-independent and can keep virtually all of your data in sync.
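A rough sketch of that idea with the azure-servicebus Python SDK (the connection string and topic name are placeholders): each datacenter publishes change events to a topic, and subscribers in the other datacenters apply them locally.

```python
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Placeholder connection string and topic name.
CONN_STR = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=..."

def publish_change(entity: str, payload: dict) -> None:
    """Publish a data-change event; each datacenter subscribes and applies it."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_topic_sender(topic_name="data-sync") as sender:
            sender.send_messages(
                ServiceBusMessage(json.dumps({"entity": entity, "data": payload}))
            )
```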
In this episode of Channel 9 they state that Traffic Manager only supports Cloud Services as of now (Jan 2014), but support is coming for Azure Web Sites and other services. I agree that you should be able to request a blob using a single global URL and expect the content to be served from the closest datacenter.
There isn't a one-click, easy-to-implement solution for this issue. How you solve it depends on where the data lives (i.e. SQL Azure, blob storage, etc.) and on your access patterns.
Do you have a small number of data requests that are not on a performance-critical path in your code? Consider just using the main datacenter.
Do you have a large number of read-only requests? Consider replicating the data to another datacenter.
Do you do a large number of reads and only a few writes? Consider duplicating the data among all datacenters: each write goes to every datacenter at the same time (incurring a perf penalty) and all reads go to the local datacenter (fast reads). A sketch follows this list.
Is your data in SQL Azure? Consider using SQL Data Sync to keep multiple datacenters in sync.
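A sketch of the write-everywhere / read-local scenario above; the replica endpoints and the run_sql helper are illustrative assumptions:

```python
import pyodbc

# Placeholder replica endpoints, keyed by region.
REPLICAS = {
    "us-east": "server-useast.database.windows.net",
    "eu-west": "server-euwest.database.windows.net",
}
LOCAL_REGION = "eu-west"

def run_sql(server: str, sql: str, params: tuple = ()):
    """Hypothetical helper: execute one statement against one server."""
    conn_str = (
        "DRIVER={ODBC Driver 18 for SQL Server};"
        f"SERVER={server};DATABASE=shop;UID=app;PWD=...;"
    )
    with pyodbc.connect(conn_str, autocommit=True) as conn:
        cursor = conn.execute(sql, params)
        return cursor.fetchall() if cursor.description else None

def write_everywhere(sql: str, params: tuple = ()) -> None:
    """Apply the write to every datacenter (the perf penalty noted above)."""
    for server in REPLICAS.values():
        run_sql(server, sql, params)

def read_local(sql: str, params: tuple = ()):
    """Serve reads from the nearest datacenter for fast reads."""
    return run_sql(REPLICAS[LOCAL_REGION], sql, params)
```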
I need to make sure the availability of my database is high, and working with SQL Azure does not make that clear.
Is there a way to run multiple servers (where one takes over if another fails) under SQL Azure? Beyond that, is there something equivalent to increasing the memory on the DB server to speed up database processing?
Read High Availability in the Intro to Azure SQL and then read Business Continuity in Windows Azure SQL Database. To summarize:
Data durability and fault tolerance is enhanced by maintaining multiple copies of all data in different physical nodes located across fully independent physical sub-systems such as server racks and network routers. At any one time, Windows Azure SQL Database keeps three replicas of data running: one primary replica and two secondary replicas.
Right now there is no way to specify a hardware configuration for SQL Azure databases. It's totally out of your control, and from a SaaS perspective that makes sense: the backend management services are responsible for making sure you get the best performance possible.
If you need dedicated, reserved hardware for your SQL deployment, you may take a look at the IaaS offerings in Azure and start a VM with SQL Server installed; however, make sure you know the main differences between an IaaS and a PaaS offering.
I do not know what your high availability requirements are, but you should look at the SLAs provided by Microsoft. SQL Database offers 99.9% monthly availability.