What is the best way to limit latency for SQL Azure in global applications?
My application uses SQL Azure, and I would like to know whether it is possible, based on users' network locations, to connect them to a SQL Azure database near them.
Logically, I would need a SQL Azure database with global replication, but not geo-replication, since each copy would serve as a master rather than a secondary.
Thank you in advance.
You may want to try Cosmos DB to distribute data globally and obtain low latency, as explained in this article and this documentation.
For replicating data with SQL Data Sync for Azure SQL Database, take paired regions into consideration, as they may reduce latency. With SQL Data Sync you define a hub database and any number of member databases in other regions, and data can be synced in both directions between the hub and any member database.
I want to confirm our understanding of how our Azure SQL databases are being backed up to enable point-in-time restore. We have not currently configured geo-replication to have the database available in another region; we may in the future, once some data analysis is done. But my understanding is that the database is still being backed up to a geo-redundant location, so I could do a geo-restore if there were an issue with the data center that houses my SQL database. Is that correct, or do I need to enable geo-replication and pay for a second database in order to have a disaster recovery option if the data center had an issue?
To clarify further: I think this article states what I'm saying in the Geo-Restore section.
https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity/
Thanks
Yes, all databases have a geo-replicated copy for disaster recovery purposes. For more details, please see the following: https://azure.microsoft.com/en-us/blog/azure-sql-database-geo-restore/
Geo-restore uses the same technology as point-in-time restore, with one important difference: it restores the database from a copy of the most recent daily backup in geo-replicated blob storage (RA-GRS). For each active database, the service maintains a backup chain that includes a weekly full backup, multiple daily differential backups, and transaction logs saved every 5 minutes. These blobs are geo-replicated, which guarantees that daily backups are available even after a massive failure in the primary region.
Yes, Azure SQL databases are automatically backed up to a different Azure data center using geo-replicated storage. This is an automatic feature of Azure SQL that is baked into the service offering.
Here's a blog post with further information about Azure SQL geo-replication:
https://azure.microsoft.com/en-us/blog/azure-sql-database-standard-geo-replication/
I have read here and in other posts and forums that the best place to save session state in Azure is AppFabric Cache, but I find that very expensive and would like to try either Table storage or a SQL database.
I read that a SQL database will be faster, but I can't understand why it would be. Surely the SQL database will cache hot data in memory, but I would expect Table storage to do that too (does it?). Otherwise I don't see why a SQL database would be faster at retrieving a row than Table storage; in the end, both are just retrieving data from a local disk based on a key. I would even expect that, because Table storage scales up well and automatically (vs. a SQL database that needs to be partitioned manually), it would be preferable, since session state isn't a good candidate for local caching.
Does anyone have any experience or opinion on this?
Thanks,
Charles
You mentioned AppFabric Cache, which is a retired service. Regarding SQL vs Table: There isn't really a right answer to this. If you want to spin up a SQL Database instance (running about $2.50 monthly for a Basic-tier database), you'll have 2GB to work with. With Table storage, you'll pay about $0.15 for the same storage. Then there is Redis cache, your own cache (such as memcached), Azure Managed Cache service, etc. Performance-wise, you'd need to do some benchmarking to see how each performs. Any of these would work with Virtual Machines, Cloud Services (web/worker roles), and Web Sites, as they all have very well-defined APIs and, if using ASP.NET MVC, good provider support. Each has different capacity limits and different pricing.
One thing with Table storage: each entity (row) is limited to 1MB, so if you're attempting to cache > 1MB per cache entry, you'll need to consider another option.
@Gaurav mentioned in-role cache. This is a great way to use extra memory in your web/worker role instances. However, it's limited to web/worker Cloud Services; it doesn't help with Web Sites or Virtual Machines. For those, you really need some type of independent cache provider.
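If you do benchmark the SQL option, a minimal keyed session table is enough to exercise the single-seek lookup Charles describes. This is just a sketch; the table and column names are hypothetical:

    -- Hypothetical session-state table: one row per session, keyed by session ID.
    CREATE TABLE dbo.SessionState (
        SessionId  NVARCHAR(88)   NOT NULL PRIMARY KEY CLUSTERED, -- lookup key
        Payload    VARBINARY(MAX) NOT NULL,                       -- serialized session data
        ExpiresUtc DATETIME2      NOT NULL                        -- for expiry cleanup
    );

    -- Point lookup by key: a single clustered-index seek. This is the operation
    -- you'd benchmark against a Table storage GET by PartitionKey/RowKey.
    DECLARE @sessionId NVARCHAR(88) = N'example-session-id';
    SELECT Payload
    FROM dbo.SessionState
    WHERE SessionId = @sessionId;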
I would like to move my existing SQL Azure database to another location, but I think there is currently no functionality to do so in the Azure management portal.
I googled it and found one link http://social.msdn.microsoft.com/Forums/en-US/ssdsgetstarted/thread/e6c961cc-5eea-4f07-82c9-a8805d367b05 that says I need to use the Data Sync option in Azure's portal, but I don't have that feature enabled in my Azure portal.
Also, if I do use that option, is there any charge for it? Finally, are there any other options for moving the SQL Azure location?
Here is how to move an existing SQL database to a new region on Azure, assuming there are no blob containers associated with the database. For further reference, see:
https://azure.microsoft.com/en-us/blog/migrating-azure-services-to-new-regions/
Upgrade the database, if necessary, to one of the Premium pricing tiers
Add geo-replication to the existing database, choosing the region where the copy of the existing database will live. Create a new database server in the target region of your choice. I suggest provisioning that new database server with the same admin username and password as the existing SQL database server. When creating the secondary database, I suggest making the secondary type "Readable", as it will allow you to check that all data and schemas were replicated correctly.
Allow the two databases time to sync. The rule of thumb from Microsoft AzureCAT is: 3 × (5 minutes + database size / 150 MB per minute). For example, a 1.5 GB database works out to roughly 3 × (5 + 1500/150) = 45 minutes.
Configure the Firewall settings of the secondary database to allow the necessary IP addresses to access the database
Temporarily shut down whatever users or applications are accessing the existing database.
From the Azure portal, select the existing database and change its geo-replication role from primary to secondary. This fails the database over to the new region.
Run any DDL scripts that rely on the master database, such as scripts to recreate logins and user profiles (see the T-SQL sketch after these steps).
Change the connection strings of any applications to point to the new database.
Users and applications can now connect to the new Database
At your discretion, you can remove the old database (which now serves as a secondary) and add secondaries in any new regions as backups.
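For reference, the failover and post-failover steps above can also be scripted. A hedged T-SQL sketch, run against the new server; MyAppDb, AppServers, the IP range, and appuser are hypothetical placeholders, while sp_set_firewall_rule, ALTER DATABASE ... FAILOVER, and CREATE LOGIN are standard Azure SQL commands:

    -- In the master database of the NEW (secondary) server:

    -- Allow your application servers through the server firewall.
    EXEC sp_set_firewall_rule
         @name = N'AppServers',
         @start_ip_address = '203.0.113.10',
         @end_ip_address   = '203.0.113.20';

    -- Promote the secondary to primary (the role change described above).
    ALTER DATABASE [MyAppDb] FAILOVER;

    -- Logins live in master and are not geo-replicated, so recreate any
    -- SQL logins your apps use. To keep the replicated database user
    -- correctly mapped, create the login with the same SID it had on the
    -- old server (copy it from sys.sql_logins there).
    CREATE LOGIN appuser WITH PASSWORD = 'ReplaceWithAStrongPassword!1';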
In terms of charges, there will be charges for upgrading the old database if it isn't already a Premium database. There will also be charges for creating the geo-replicated database. However, those charges can be limited to a day's to a few days' worth of fees (depending on how long geo-replication takes). Once the new database is up and running, delete the old database as soon as possible to limit additional fees. Finally, if you upgraded the service tier of the old database to Premium to facilitate the geo-replication, you will want to downgrade the new database to the original service tier of the old database, also to limit fees.
I think you can use the new Import/Export (BACPAC) feature. I have used it to move databases between accounts and can't see why it wouldn't also work between regions.
See how here
If you are able to stop writes to the DB for a time, then you can use the Copy feature in the Azure Portal.
Create a new SQL Server in the region of your choosing.
Add your service(s) IP addresses to the new SQL Server firewall.
Stop writes to the origin database.
Open the origin database in the Azure Portal and click Copy at the top of the blade (or use the T-SQL alternative sketched after these steps).
Choose your new SQL server located in the destination region.
Wait for the copy to complete.
Update your service(s) to point to the destination DB.
Enable DB writes.
Verify everything is working.
Delete origin database (and server if it was the only DB on the server).
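If you prefer scripting to the portal, the same copy can be started with T-SQL. A minimal sketch, assuming a hypothetical source server sourceserver and database MyDb; run it in the master database of the destination server:

    -- Start an asynchronous copy of MyDb from the origin server onto this server.
    CREATE DATABASE [MyDb] AS COPY OF [sourceserver].[MyDb];

    -- Monitor progress; the row for the copy disappears once it completes.
    SELECT * FROM sys.dm_database_copies;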
I wouldn't use Data Sync because it creates many objects in your database to perform synchronization (it's an invasive solution). You can indeed try the Import/Export feature; that should work fine. You can also try the Enzo backup tool, which comes with a 30-day free trial: http://www.bluesyntax.net/backup.aspx. [disclaimer: I am the author of this tool]
Regarding the pricing question, you may be charged for data being extracted out of the database. Moving data into SQL Azure is free of charge for now; if you are transferring the data to a different data center, you will be charged for extracting it. It's 15 cents per GB in the US and Europe, and 20 cents in Asia. Here are the pricing details: http://www.microsoft.com/windowsazure/pricing/
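For example, at those rates, extracting a 10 GB database from a US or European data center would cost about 10 × $0.15 = $1.50.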
Keep in mind that a database that requires 4 GB of storage doesn't necessarily contain 4 GB of data; sometimes indexes can take a lot of space. To estimate the size of the data you will need to transfer, you can either drop your indexes (and wait a little for the database size to shrink; the database size should then be roughly equal to your data transfer needs) or calculate the size of your tables by running a command. Here is a link to an article that shows how to do something similar (look at the second command, which is a SELECT statement; just run it for all the tables): http://www.sqldocumentor.com/table-size-in-sql-server-find-rows-and-disk-space-usage
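In case that article disappears, here is a self-contained query in the same spirit (my own sketch, not the article's exact statement) that reports per-table row counts and reserved space, indexes included:

    -- Per-table rows and reserved space (data + indexes), largest first.
    SELECT  s.name + '.' + t.name AS table_name,
            SUM(CASE WHEN p.index_id IN (0, 1) THEN p.row_count END) AS total_rows,
            SUM(p.reserved_page_count) * 8 / 1024.0 AS reserved_mb
    FROM    sys.dm_db_partition_stats AS p
    JOIN    sys.tables  AS t ON t.object_id = p.object_id
    JOIN    sys.schemas AS s ON s.schema_id = t.schema_id
    GROUP BY s.name, t.name
    ORDER BY reserved_mb DESC;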
Azure has released a new tool called Azure Resource Mover.
Resource Mover can currently handle these resources:
Azure VMs and associated disks
NICs
Availability sets
Azure virtual networks
Public IP addresses
Network security groups (NSGs)
Internal and public load balancers
Azure SQL databases and elastic pools
https://learn.microsoft.com/en-us/azure/resource-mover/move-region-within-resource-group
Azure SQL Server is not supported yet, but Azure has a complete guide for this anyway:
https://learn.microsoft.com/en-us/azure/resource-mover/tutorial-move-region-sql#move-the-sql-server
I am at the planning stage of a web application that will be hosted in Azure with ASP.NET for the web site and Silverlight within the site for a rich user experience. Should I use Azure Tables or SQL Azure for storing my application data?
Azure Table Storage appears to be less expensive than SQL Azure. It is also more highly scalable than SQL Azure.
SQL Azure is easier to work with if you've been doing a lot of relational database work. If you were porting an application that was already using a SQL database, then moving it to SQL Azure would be the obvious choice, but that's the only situation where I would recommend it.
The main limitation on Azure Tables is the lack of secondary indexes. This was announced at PDC '09 and is currently listed as coming soon, but there hasn't been any time-frame announcement. (See http://windowsazure.uservoice.com/forums/34192-windows-azure-feature-voting/suggestions/396314-support-secondary-indexes?ref=title)
I've seen the proposed use of a hybrid system where you use table and blob storage for the bulk of your data, but use SQL Azure for indexes, searching and filtering. However, I haven't had a chance to try that solution yet myself.
Once the secondary indexes are added to table storage, it will essentially be a cloud based NoSQL system and will be much more useful than it is now.
Despite the similar names, SQL Azure tables and Table Storage have very little in common.
Here are two links that might help you:
Table Storage, a 100x cost factor
Fat Entities on Table Storage
Basically, the first question you should ask yourself is: does my app really need to scale? If not, then go for SQL Azure.
For those trying to decide between the two options, be sure to factor reporting requirements into the equation. SQL Azure Reporting and other reporting products support SQL Azure out of the box. If you need to generate complex or flexible reports, you'll probably want to avoid Table Storage.
Azure tables are cheaper, simpler, and scale better than SQL Azure. SQL Azure is a managed SQL environment, multi-tenant in nature, so you should analyze whether your performance requirements are a fit for SQL Azure. A premium version of SQL Azure has been announced and is in preview as of this writing (see HERE).
I think the decisive factors to decide between SQL Azure and Azure tables are the following:
Do you need to do complex joins and use secondary indexes? If yes, SQL Azure is the best option (see the sketch after this list).
Do you need stored procedures? If yes, SQL Azure.
Do you need auto-scaling capabilities? Azure tables is the best option.
Entities (rows) within an Azure table cannot exceed 1 MB in size. If you need to store large data within a row, it is better to store it in blob storage and reference the blob's URI in the table row.
Do you need to store massive amounts of semi-structured data? If yes, Azure tables are advantageous.
Although Azure tables are tremendously beneficial in terms of simplicity and cost, there are some limitations that need to be taken into account. Please see HERE for some initial guidance.
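To make the first factor concrete, here is the kind of query that is trivial in SQL Azure but has no direct equivalent on Azure tables, which only index PartitionKey + RowKey. The schema is hypothetical, purely for illustration:

    -- Two related tables, with a secondary index on the foreign key.
    CREATE TABLE dbo.Customers (
        CustomerId INT          NOT NULL PRIMARY KEY,
        Region     NVARCHAR(50) NOT NULL
    );
    CREATE TABLE dbo.Orders (
        OrderId    INT   NOT NULL PRIMARY KEY,
        CustomerId INT   NOT NULL REFERENCES dbo.Customers (CustomerId),
        Total      MONEY NOT NULL
    );
    CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);

    -- Server-side join and aggregate; on Azure tables you would have to
    -- denormalize, or run multiple queries and combine them client-side.
    SELECT c.Region, SUM(o.Total) AS revenue
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
    GROUP BY c.Region;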
One other consideration is latency. There used to be a site that Microsoft ran with microbenchmarks on the throughput and latency of various object sizes in Table storage and SQL Azure. Since that site is no longer available, I'll give you a rough approximation from what I recall: Table storage tends to have much higher throughput than SQL Azure, while SQL Azure tends to have lower latency (as little as one fifth of Table storage's).
It's already been mentioned that table store is easy to scale. However, SQL Azure can scale as well with Federations. Note that Federations (effectively sharding) adds a lot of complexity to your application. I'm also not sure how much Federations affects performance, but I imagine there's some overhead.
If business continuity is a priority, consider that with Azure Storage you get cheap geo-replication by default. With SQL Azure, you can accomplish something similar, but with more effort, using SQL Data Sync. Note that SQL Data Sync also incurs performance overhead, since it requires triggers on all of your tables to watch for data changes.
I realize this is an old question, but still a very valid one, so I'm adding my reply to it.
CoderDennis and others have pointed out some of the facts: Azure Tables is cheaper, can be much larger, more efficient, etc. If you are 100% sure you will stick with Azure, go with Tables.
However, this assumes you have already decided on Azure. By using Azure Tables, you are locking yourself into the Azure platform. It means writing code very specific to Azure Tables that is not simply going to port over to Amazon; you will have to rewrite those areas of your code. On the other hand, programming against a SQL database with LINQ will port much more easily to another cloud service.
This may not be an issue if you've already decided on your cloud platform.
I suggest looking at Azure Cache in combination with Azure Table storage. Table storage alone has 200-300 ms latencies, with occasional higher spikes, which can significantly slow down response times / UI interactivity. Cache + Table seems to be a winning combination, for me.
For your question, I want to talk about the logic for deciding when to choose a SQL table and when to use an Azure table.
As we know, SQL Azure is a relational database engine, but if you have big data in one table, it may not be suitable, because SQL queries over very large tables are slow.
In that case you can choose Azure Table storage, where queries can be much faster than SQL for big data. For example, on our website users subscribe to many articles, and we deliver each article as a feed item, so every user has a copy of the article title and description. The article table therefore holds a lot of data; if we used a SQL table, a query could take more than 30 seconds, but fetching a user's article feed from Azure Table storage by PartitionKey and RowKey is very fast.
From this example you may see how to choose between a SQL table and an Azure table.
I wonder whether we are going to end up with some "vendor-independent" cloud API libraries in due course?
I think that you first have to define what your application's usage patterns are. Will your data model be subject to frequent changes, or is it stable? Do you need ultra-fast inserts while reads can be less demanding? Do you need advanced Google-like search? Will you be storing BLOBs?
Those are the questions (among others) that you have to ask and answer for yourself in order to decide whether a NoSQL or a SQL approach is the better fit for storing your data.
Please consider that both approaches can easily coexist and can be extended with BLOB storage as well.
Azure Tables and SQL Azure are two different beasts, meant for different scenarios. One con of Azure Tables is that you cannot move from Azure to any other platform unless you write providers in your code that can handle such shifts.