So I need to create a Single Database in Azure through Terraform.
The requirements are 8 vCores / 3 TB.
What edition should I be passing in the azurerm_sql_database resource's edition parameter?
The documentation at https://www.terraform.io/docs/providers/azurerm/r/sql_database.html
says -- Valid values are: Basic, Standard, Premium, or DataWarehouse.
But when I create a similar database through the portal and query the DB, it says the edition is 'GeneralPurpose'.
SQL Azure recently introduced a second set of choices, which you can think of as a parallel (but more powerful) business model. Basic/Standard/Premium still work, but you now have additional choices. The new model separates compute/memory from storage/IOPS more formally. It exposes General Purpose and Business Critical tiers, along with a choice of CPU generation (Gen4 vs. Gen5). As a rough starting point, think of Standard as being close to General Purpose and Business Critical as being close to Premium.
SQL DW is a somewhat different offer, based on the PDW/APS scale-out appliance model (run as a service). So, while 3 TB will fit on a single node on current Gen5 hardware, SQL DW is a great choice if you want to run a data warehouse and think you will grow further, need scale-out processing, etc. You should not expect to move between SQL DB and SQL DW without app changes - in fact, you can't change between those two models once you pick one.
You can read more about the new business model here:
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers-vcore
This week a new option (Hyperscale) was announced as well, which gives scale-out storage within SQL DB.
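To make that concrete on the Terraform side, here is a minimal sketch of an 8 vCore / 3 TB General Purpose database. The arguments (edition, requested_service_objective_name, max_size_bytes) exist on azurerm_sql_database, but whether your provider version accepts the vCore values 'GeneralPurpose' and 'GP_Gen5_8' depends on how recent it is, so treat those values and the referenced resource group/server as illustrative and check them against the provider docs; newer provider releases also expose an azurerm_mssql_database resource whose sku_name takes the same SKU names.
# Sketch only (Terraform 0.12+ syntax) - verify accepted values against your azurerm provider version.
resource "azurerm_sql_database" "example" {
  name                = "example-db"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  server_name         = azurerm_sql_server.example.name

  # vCore model: General Purpose, Gen5 hardware, 8 vCores
  edition                          = "GeneralPurpose"
  requested_service_objective_name = "GP_Gen5_8"

  # 3 TB expressed in bytes
  max_size_bytes = "3298534883328"
}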
Not sure why you are looking at terraform.io rather than here: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-single-databases-manage#azure-cli-manage-logical-servers-and-databases and here: https://learn.microsoft.com/en-us/cli/azure/sql/db?view=azure-cli-latest#az-sql-db-create
SQL Azure storage is a lot more expensive than Windows Azure Storage. Would implementing a no-sql solution like RavenDB allow me to store data on the cheaper Azure Storage?
Are there other things to consider, like backup, speed or security?
Thank you.
You have to consider that with SQL Azure you not only get the storage, but the database server too. If you implement RavenDB, you will need a worker role to host it in and, in order to allow for failure of that worker role, another worker role (replica), which also doubles up the storage.
Bear in mind that with SQL Azure you get a highly available (3x replicated, with failover) SQL solution that surfaces a familiar (ADO.NET) API. Make your choices based on aspects other than storage cost, such as operational effort and development effort. If you choose RavenDB, it should be because of the potential cost savings in development effort (because of the closeness of the document API to the object graph) and operational cost, because RavenDB is 'administered' as part of the application. The cost of storing the actual bytes, particularly at scale, is a marginal consideration.
Adding a bit to @Simon's answer: when considering Table Storage and its low cost, also consider whether you can use it directly, instead of going with an installed-and-managed-by-you NoSQL database engine. As it stands, Table Storage offers a schemaless solution that lets you store essentially a property bag within a row, indexed by PartitionKey + RowKey. Does that work for you? Could you work with a few extra tables to give you additional indexing? If so, your storage cost is going to be really low (and still durable, triple-replicated).
If you find yourself writing significant code to manage Table Storage, then it may be more efficient to invest in the Compute instances needed to run RavenDB. When considering this, also consider that you'll likely want larger VM sizes if you're moving significant data (as you get approx. 100Mbps per core). A database like MongoDB, working with memory-mapped files, really ramps up speed-wise with more RAM. Not sure if this is the same with RavenDB.
I want to host my WCF services in the Azure cloud for scalability reasons. For example, there will be some read-data action, and it will be under high load (1000+ users/sec).
(Like in my previous question)
Also, I have a limitation of a 1-second timeout for any request.
My service will be connected to SQL Azure. I'm choosing it because of its small latency (no more than 7 ms, according to Microsoft's benchmark).
How many concurrent connections can SQL Azure hold per instance/database?
Is there any ability to scale SQL Azure when I reach the limit of connections per instance?
Are there other solutions or options for my scenario?
Thanks.
One thing to keep in mind is that you will need to make sure you are leveraging connection pooling to its maximum. Using a service account instead of different logins is an important step to ensure proper connection pooling.
Another consideration is the use of MARS. If you have many requests coming through, you may want to pool them together into a single request, hence a single connection, and return multiple resultsets. In this post I discuss how to implement one-way queuing of SQL statements; this may not work for you as-is because you may be expecting a response, but it may give you some ideas on how to implement a batch of requests to minimize the number of connections and minimize wait time.
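To illustrate the batching idea (this is not the linked post's implementation, just a minimal sketch with made-up table and column names): a single T-SQL batch sent over one pooled connection returns several result sets, which the caller reads in order instead of opening a connection per query.
-- Minimal sketch: all table/column names below are hypothetical.
-- One batch, one connection, several result sets read in order by the caller.
DECLARE @CustomerId int = 42;

SELECT OrderId, OrderDate, TotalAmount
FROM dbo.Orders
WHERE CustomerId = @CustomerId;

SELECT AddressId, City, PostalCode
FROM dbo.Addresses
WHERE CustomerId = @CustomerId;

SELECT PaymentId, PaidOn, Amount
FROM dbo.Payments
WHERE CustomerId = @CustomerId;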
You can also take a look at this tool I wrote last year to test connections/statements against SQL Azure. The tool automatically turns off connection pooling to measure the effects of concurrency. You can download it here.
Finally, I also wrote the Enzo Shard Library on codeplex. Let me know if you have any questions if you decide to investigate the library for your project. Note that the library will evolve to support the future capabilities of SQL Azure Data Federation as well.
It appears there is no direct limit to the number of connections available per SQL Azure instance, but Microsoft state that they reserve the right to throttle connections in situations where resource use is regarded as "excessive".
There's some information on this here, also details on what may happen in this situation here.
A good work-around is to consider "sharding", where you partition your data on some easily-definable criteria and have multiple databases. This does, of course, incur additional cost. A neat implementation of that is here: http://enzosqlshard.codeplex.com/
Also: Azurescope have had some interesting benchmarks here: http://azurescope.cloudapp.net/BestPractices/#ed6a21ed-ad51-4b47-b69c-72de21776f6a (unfortunately, removed early 2012)
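If you want to keep an eye on how close you are getting to trouble, today's Azure SQL Database exposes DMVs you can query yourself; a rough sketch is below (these views exist now, but not all of them were available when this question was asked).
-- How many connections are currently open against this database?
SELECT COUNT(*) AS current_connections
FROM sys.dm_exec_connections;

-- Recent resource consumption in ~15-second slices; sustained high values
-- here are what typically precede throttling.
SELECT TOP (20) end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;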
Is there any ability to scale SQL Azure when I reach the limit of connections per instance?
In addition to the Enzo sql sharding suggestion, there are a couple of Microsoft products/features under construction to assist with scaling SQL Azure. These are CTP (at best) but may provide some scalability options for you by allowing you to spread the load across multiple SQL Azure databases:
SQL Azure federations - http://convective.wordpress.com/2011/05/02/sql-azure-federations/
SQL Azure Data Sync - http://www.microsoft.com/windowsazure/sqlazure/datasync/
I'm writing a 'proof of concept' application to investigate the possibility of moving a bespoke ASP.NET ecommerce system over to Windows Azure during a necessary re-write of the entire application.
I'm tempted to look at using Azure Table Storage as an alternative to SQL Azure, as the entities being stored are likely to change their schema (properties) over time as the application matures, and I won't need to make endless database schema changes. In addition, we can build referential integrity into the application code - so the case for considering Azure Table Storage is a strong one.
The only potential issue I can see at this time is that we do a small amount of simple reporting - i.e. value of sales between two dates, number of items sold for a particular product, etc.
I know that Table Storage doesn't support aggregate-type functions, and I believe we can achieve what we want with clever use of partitions, multiple entity types to store subsets of the same data, and possibly pre-aggregation, but I'm not 100% sure about how to go about it.
Does anyone know of any in-depth documents about Azure Table Storage design principles, so that we make proper and efficient use of tables, PartitionKeys, entity design, etc.?
There are a few simplistic documents around, and the current books available tend not to go into this subject in much depth.
FYI - the ecommerce site has about 25,000 customers and takes about 100,000 orders per year.
Have you seen this post?
http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx
Pretty thorough coverage of tables
I think there are three potential issues in porting your app to Table Storage:
The lack of reporting - including aggregate functions - which you've already identified
The limited availability of transaction support - with 100,000 orders per year I think you'll end up missing this support.
Some problems with costs - $1 per million operations is only a small cost, but you do need to factor this in if you get a lot of page views.
Honestly, I think a hybrid approach might be the way to go - perhaps EF or NH against SQL Azure for critical data, with large objects stored in Table/Blob storage?
Enough of my opinion! For "in depth":
try the storage team's blog http://blogs.msdn.com/b/windowsazurestorage/ - I've found this very good
try the PDC sessions from Jai Haridas (couldn't spot a link - but I'm sure it's still there)
try articles inside Eric's book - http://geekswithblogs.net/iupdateable/archive/2010/06/23/free-96-page-book---windows-azure-platform-articles-from.aspx
there's some very good best practice based advice on - http://azurescope.cloudapp.net/ - but this is somewhat performance orientated
If you have started looking at Azure storage such as Table Storage, it would do no harm to look at other NoSQL offerings on the market (especially document databases). This would give you insight into the NoSQL space and how solutions around such stores are designed.
You can also think about a hybrid approach of SQL DB + a NoSQL solution. Parts of the system may lend themselves very well to the Azure Table Storage model.
NoSQL solutions such as Azure Table Storage have their own challenges, such as:
Schema changes for data. Check here and here
Transactional support
ACID constraints. Check here
All table design papers I have seen are pretty much exclusively focused on the topics of scalability and search performance. I have not seen anything related to design considerations for reporting or BI.
Now, Azure tables are accessible through REST APIs and via the Azure SDK. Depending on what reporting you need, you might be able to pull out the information you require with minimal effort. If your reporting requirements are very sophisticated, then perhaps SQL Azure together with Windows Azure SQL Reporting Services might be a better option to consider?
Is there any difference between the Web Edition and Business Edition of Azure SQL Database other than the maximum supported database sizes? I'm assuming the naming has some significance but all of the information I find simply talks about the max db size. I want to know if there are any other differences such as SLA, replication, scalability, etc.
Any clues?
The two editions are identical except for capacity. Both offer the same replication and SLA.
EDIT April 3, 2014 - Updated to reflect SQL Database size limit now at 500GB
EDIT June 17, 2013: Since I originally posted this answer, a few things have changed with pricing (but the sizing remains the only difference between web & business editions)
Web Edition scales to 5GB, whereas Business Edition scales to 500GB. Also: with the new MSDN plans (announced at TechEd 2013; see ScottGu's blog post for more details), you'll now get monthly monetary credits toward any services you want to apply your credits to, including SQL Database (up to $150 per month, depending on MSDN tier - see this page for details around the new MSDN benefits).
Both allow you to set maximum size, and both are billed on an amortized schedule, where your capacity is evaluated daily. Full pricing details are here. You'll see that the base pricing begins at $4.995 (up to 100MB), then jumps to $9.99 (up to 1GB), and then starts tiered pricing for additional GB's.
Regardless of edition, you have the exact same set of features - it's all about capacity limits. You can easily change maximum capacity, or even change edition, with T-SQL. For instance, you might start with a Web edition:
CREATE DATABASE Test (EDITION='WEB', MAXSIZE=1GB)
Your needs grow, so you bump up to 5GB:
ALTER DATABASE Test MODIFY (EDITION='WEB', MAXSIZE=5GB)
Now you need even more capacity, so you need to switch to one of the Business Edition tiers:
ALTER DATABASE Test MODIFY (EDITION='BUSINESS', MAXSIZE=10GB)
If you ever need to reduce your database size, that works just fine as well - just alter right back to Web edition:
ALTER DATABASE Test MODIFY (EDITION='WEB', MAXSIZE=5GB)
Web and Business Editions are being deprecated. Check out the latest editions of Azure SQL DB (Basic, Standard, Premium) here: http://azure.microsoft.com/en-us/pricing/details/sql-database/
You can also find info on the latest features in SQL DB V12 here: http://azure.microsoft.com/en-us/documentation/articles/sql-database-preview-whats-new/
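The same T-SQL pattern shown earlier carries over to the newer tiers; you specify the new EDITION plus a SERVICE_OBJECTIVE. A minimal sketch (the database names and the S2 / 250 GB values are just examples):
-- Move an existing database to Standard S2 with a 250 GB cap
ALTER DATABASE Test MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2', MAXSIZE = 250 GB);

-- Or create a new database directly in that tier
CREATE DATABASE Test2 (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2', MAXSIZE = 250 GB);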
Edit (4/29) :
Check out the new Elastic DB offering (Preview) announced at Build today. The pricing page has been updated with Elastic DB price information.
A documented difference is that the Business edition supports Federations:
http://azure.microsoft.com/en-us/documentation/articles/sql-database-scale-out/
"Federations are supported in the Business edition. For more information, see Federations in SQL Database and SQL Database Federations Tutorial ... "
I have noticed a behavioral difference between the two editions. In the Business edition we have set up for QA, the following code snippet gets an error when applying the check constraint unless a "GO" is placed after adding the column; then it works fine. This is not needed in the Web edition databases we have for development.
IF NOT EXISTS (SELECT *
               FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_SCHEMA = 'ASSIGN'
                 AND TABLE_NAME = 'ASSIGNTARGET_EXCEPTION'
                 AND COLUMN_NAME = 'EXCESS_WEAR_FLAG')
    ALTER TABLE [ASSIGN].[ASSIGNTARGET_EXCEPTION] ADD [EXCESS_WEAR_FLAG] [varchar](1) NULL
-- GO -- placing this here makes this section work.
IF NOT EXISTS (SELECT *
               FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
               WHERE TABLE_SCHEMA = 'ASSIGN'
                 AND TABLE_NAME = 'ASSIGNTARGET_EXCEPTION'
                 AND CONSTRAINT_NAME = 'CHK_ATEXCPTN_EXCESSWEARFLAG')
BEGIN
    ALTER TABLE [ASSIGN].[ASSIGNTARGET_EXCEPTION] WITH NOCHECK ADD CONSTRAINT [CHK_ATEXCPTN_EXCESSWEARFLAG] CHECK (([EXCESS_WEAR_FLAG]='N' OR [EXCESS_WEAR_FLAG]='Y'))
    ALTER TABLE [ASSIGN].[ASSIGNTARGET_EXCEPTION] CHECK CONSTRAINT [CHK_ATEXCPTN_EXCESSWEARFLAG]
END
I read on the MS site that SQL Azure does not support SQL Profiler. What are people using to profile queries running on this platform?
I haven't got too far playing around with SQL Azure as yet, but from what I understand there isn't anything you can use at the moment.
From MS (probably the article you read):
Because SQL Azure performs the physical administration, any statements and options that attempt to directly manipulate physical resources will be blocked, such as Resource Governor, file group references, and some physical server DDL statements. It is also not possible to set server options and SQL trace flags or use the SQL Server Profiler or the Database Tuning Advisor utilities.
If there were to be an alternative, I'd imagine it would require the ability to set trace flags, which you can't do; hence I don't think there is an option at the moment.
Solution? I can only suggest you have a local development copy of the db so you can run profiler locally on it. I know that won't help with "live" issues/debugging/monitoring but it depends on what you need it for.
Edit:
Quote from MSDN forum:
Q: Is SQL Profiler supported in SQL Azure?
A: We do not support SQL Profiler in v1 of SQL Azure.
Now, you could interpret that as a hint that Profiler will be supported in future versions. I think it will be a big requirement to get a lot of people on board, using SQL Azure seriously.
Update as of 9/17/2015:
Microsoft just announced a new feature called Index Advisor:
How does Index Advisor work? Index Advisor continuously monitors your database workload, performs the analysis and recommends new indexes that can further improve the DB performance.
Recommendations are always kept up-to-date: as the DB workload and schema evolves, Index Advisor will monitor the changes and adjust the recommendations accordingly. Each recommendation comes with the estimated impact to DB workload performance: you can use this information to prioritize the most impactful recommendations first. In addition, Index Advisor provides a very easy and powerful way of creating the recommended indexes.
Creating new indexes only takes a couple of clicks. Index Advisor measures the impact of newly created indexes and provides a report on index impact to users. You can get started with Index Advisor and improve your database performance with the following simple steps. It literally takes five minutes to get accustomed with Index Advisor's simple and intuitive user interface. Let's get started!
Original Answer:
SQL Azure now has some native profiling. See http://blogs.msdn.com/b/benko/archive/2012/05/19/cloudtip-14-how-do-i-get-sql-profiler-info-from-sql-azure.aspx for details.
Microsoft's stated position is that SQL Server Profiler is deprecated. As much as this is a bad idea, that's what they have said:
SQL Profile is already deprecated in SQL Server, and that’s part of the reason that it doesn’t make sense to bring to SQL DB.
What this means is that you are going back 20+ years in database performance monitoring: everyone is going to have to write their own perf monitoring scripts instead of having a standard, factory-delivered tool that's on every server you go to. It's tantamount to deprecating "sp_help" and making every DBA write their own. Hope you know all your DMVs inside and out, and your INNER JOIN, OUTER JOIN, and CROSS APPLY syntax really well.
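For what it's worth, the usual roll-your-own starting point looks something like the sketch below - recent statements from the plan cache ordered by total CPU, using standard DMVs that exist in both SQL Server and Azure SQL Database:
-- Rough sketch: top statements by total CPU from the plan-cache DMVs.
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time / 1000  AS total_cpu_ms,
       qs.total_elapsed_time / 1000 AS total_elapsed_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;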
Update as of 2017/04/14:
Microsoft's Scott Guthrie today announced a number of new features coming to SQL Azure (as part of SQL Azure Managed Instance, which is currently in preview) that are expected to arrive over the coming months. They are:
1. SQL Agent
2. SQL Profiler
3. SQLCLR
4. Service Broker
5. Log shipping, transactional replication
6. Native backup/restore
7. Additional DMVs and XEvents
8. Cross-database querying
References:
https://youtu.be/0uT46lpjeQE?t=1415
Today I tried a new tool suggested by Microsoft called Azure Data Studio.
In this tool you can download an extension called Profiler, and it seems to work just as expected.
You can use the Query Store feature; look here for more details: http://azure.microsoft.com/blog/2015/06/08/query-store-a-flight-data-recorder-for-your-database/
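Once Query Store is enabled, its catalog views can be queried directly. A minimal sketch for finding the slowest queries by average duration (these are the standard Query Store views; adapt the aggregation to your needs):
-- Minimal sketch: top queries by average duration from the Query Store views.
SELECT TOP (10)
       qt.query_sql_text,
       SUM(rs.count_executions)      AS total_executions,
       AVG(rs.avg_duration) / 1000.0 AS avg_duration_ms  -- avg_duration is reported in microseconds
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY avg_duration_ms DESC;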
The closest thing to SQL Profiler that I found working in Azure SQL is SQL Workload Profiler.
Note, however, that it's a beta version of a tool created by a single person, and it is not too convenient to use.
SQL Azure offers the following features to tune performance, profile queries in its own way, identify long-running queries, and much more:
Intelligent Performance
Performance overview
Performance recommendations
Query Performance Insight
Automatic tuning