I'd like to collect information about the top N queries ranked by execution time, the same as described here: http://msdn.microsoft.com/en-us/library/windowsazure/ff394114.aspx. But I need historical data. Is there any tool for this? Something really simple and cheap. Or is it better to implement this myself, like it is described here: http://wasa.codeplex.com/
If you are open to a paid service to do this monitoring, Cotega may be an option for you. We do monitoring and store historical data on your SQL Azure database. We actually logged the top N queries previously, but stopped because it was hard for DBAs to use this information. It would be great to hear more about how you would like to see and use this information, as it would be pretty easy to add this capability back in.
If you want to do this yourself, there are some great services such as Aditi that can be used to schedule processes, letting you run some code on a regular basis to log this yourself, along the lines of the sketch below.
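For illustration, the scheduled job could run a DMV query like the one below and insert each snapshot into a history table of your own (a minimal sketch; the standard sys.dm_exec_query_stats view only reflects plans currently in the cache, which is exactly why you need to snapshot it on a schedule):

-- Top 10 statements by total elapsed time, from the plan cache.
SELECT TOP 10
    SYSUTCDATETIME() AS captured_at,   -- stamp each snapshot for history
    qs.execution_count,
    qs.total_elapsed_time,
    qs.total_worker_time,
    SUBSTRING(qt.text, (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(qt.text)
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY qs.total_elapsed_time DESC;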
Full disclosure, I work on the Cotega service.
We have a running site using NLog for logging. We don't only log errors; we also use it to measure things related to our business logic.
Now we are moving to Azure, which is why I'm searching for a better way to log this type of info in Azure. I'm looking for something like Graylog.
Things to keep in mind:
Is what Azure provides for logging easy to read?
Can I run queries over the logged data?
Is there an API for logging?
Check out the following options, which are more or less native to Azure. You could probably also use one of the third-party services, like New Relic.
Log Analytics
Application Insights
Operations Management Suite
Application Insights not only has out-of-the-box monitoring but also lets you create your own queries.
P.S. Just my 2 cents: I'd go for OMS. Microsoft is pushing it very hard and it is evolving rapidly; even if some capabilities are missing today, they are likely to arrive soon. In the long run, Microsoft is really unlikely to drop OMS anytime soon, given how hard they have been pushing it for the last year and a half.
I have an app running on Azure which logs (traces, really) to the Azure Diagnostics storage. I'm looking for a good tool which can be used to analyze these logs.
I know it's possible to retrieve these trace logs using the Server Explorer in Visual Studio, but this tool is a bit cumbersome. For instance, I can't specify a time interval for log records.
I also tried Azure Diagnostics Manager from Cerebrata, which is nice, but I wonder if there are any other good alternatives?
(The logging itself works just fine, it's the retrieval and analysis of the logs which I'm interested in)
Cerebrata certainly has the most complete solution for dealing with diagnostics, and it's not especially expensive, but it does still cost money.
If you're just looking at the trace information, then I've found that querying the Azure tables directly works well enough. If you're not able to convert a time into ticks in your head (which is what the PartitionKey of the table is), then you can use LINQPad. Jason Haley has provided full instructions and helper code.
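As a rough illustration of the arithmetic involved (assuming the usual WADLogsTable convention of a zero-padded tick-count PartitionKey): a .NET tick is one 100-nanosecond interval counted from 0001-01-01, so the value for a given UTC time can be computed like this:

-- .NET ticks (100 ns units) since 0001-01-01 for a given UTC time.
-- 1 millisecond = 10,000 ticks; DATEDIFF_BIG avoids integer overflow.
DECLARE @epoch datetime2 = '0001-01-01';
DECLARE @t     datetime2 = SYSUTCDATETIME();
SELECT DATEDIFF_BIG(MILLISECOND, @epoch, @t) * 10000 AS ticks;
-- Millisecond precision is plenty for bounding a PartitionKey range scan.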
Cerebrata's tool is probably the best to date to deal with diagnostics information.
Also try Stackify. Their DevOps solution makes it really easy to remotely see the server details needed for troubleshooting, without using Azure storage accounts. Check out this article: Windows Azure Diagnostics: The Bad, The Ugly, and a Better Way
I just came across this MSDN blog post. It hasn't been updated since September but looks like it has a rich enough feature set.
I'm just wondering if anyone with experience of Azure Table Storage could comment on whether it is a good idea to use one table to store multiple types?
The reason I want to do this is so I can do transactions. However, I also want to get a sense, in terms of development, of whether this approach would be easy or messy to handle. So far I'm using Azure Storage Explorer to assist development, and viewing multiple types in one table has been messy.
To give an example, say I'm designing a community site of blogs. If I store all blog posts, categories, and comments in one table, what problems would I encounter? On the other hand, if I don't, then how do I ensure some consistency between category and post, for example (assume one post has exactly one category)?
Or are there any other different approaches people take to get around this problem using table storage?
Thank you.
If your goal is to have perfect consistency, then using a single table is a good way to go about it. However, I think you are probably going to make things more difficult for yourself and get very little reward. The reason I say this is that table storage is extremely reliable. Transactions are great if you are dealing with very, very important data, but in most cases, such as a blog, I think you would be better off 1) allowing for some very small percentage of inconsistent data and 2) handling failures in a more manual way.
The biggest issue you will have with storing multiple types in the same table is serialization. Most of the current table storage SDKs and utilities were designed to handle a single type. That being said, you can certainly handle multiple schemas, either manually (i.e., deserializing your object to a master object that contains all possible properties) or by interacting directly with the REST services (i.e., not going through the Azure SDK). If you use the REST services directly, you have to handle serialization yourself and can thus handle the multiple types more efficiently, but the trade-off is that you are doing everything manually that is normally handled by the Azure SDK.
There really is no right or wrong way to do this. Both situations will work, it is just a matter of what is most practical. I personally tend to put a single schema per table unless there is a very good reason to do otherwise. I think you will find table storage to be reliable enough without the use of transactions.
You may want to check out the Windows Azure Toolkit. We have designed that toolkit to simplify some of the more common Azure tasks.
I want to log some information about my visitors. Is it better to use the IIS-generated log or to create my own in a SQL 2008 db?
I know I should probably provide more information about my specific scenario, but I'd like just generally, pros and cons of either proposal.
You can add additional information to the IIS logs from ASP.NET using HttpResponse.AppendToLog. Additionally, you could use the Advanced Logging Module to create your own logs with custom filters and custom data, including data from performance counters and more.
It all depends on what information you want to analyse.
If you're doing aggregations and rollups then you'd want to pull this data into a database for analysis. Pulling your data into a database will give you access to indexes and better querying tools.
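As a sketch of the kind of rollup this enables (the table and column names here are hypothetical, not a prescribed schema):

-- Hypothetical visitor-log table, with an index to support rollups.
CREATE TABLE dbo.RequestLog (
    Id          bigint IDENTITY PRIMARY KEY,
    RequestedAt datetime2     NOT NULL,
    Url         nvarchar(400) NOT NULL,
    StatusCode  int           NOT NULL,
    DurationMs  int           NOT NULL
);
CREATE INDEX IX_RequestLog_RequestedAt
    ON dbo.RequestLog (RequestedAt) INCLUDE (Url, DurationMs);

-- Daily rollup: request count and average duration per URL.
SELECT CAST(RequestedAt AS date) AS [Day],
       Url,
       COUNT(*)        AS Requests,
       AVG(DurationMs) AS AvgDurationMs
FROM dbo.RequestLog
GROUP BY CAST(RequestedAt AS date), Url
ORDER BY [Day], Requests DESC;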
If you're doing infrequent, one-off, simple queries then LogParser might be sufficient for your needs. However, you'll be constantly scanning unindexed flat files looking for data, which is I/O-intensive.
But as you say, without knowing more about your specific scenario it's hard to say what would be best.
I read on the MS site that SQL Azure does not support SQL Profiler. What are people using to profile queries running on this platform?
I haven't got too far playing around with SQL Azure as yet, but from what I understand there isn't anything you can use at the moment.
From MS (probably the article you read):
Because SQL Azure performs the physical administration, any statements and options that attempt to directly manipulate physical resources will be blocked, such as Resource Governor, file group references, and some physical server DDL statements. It is also not possible to set server options and SQL trace flags or use the SQL Server Profiler or the Database Tuning Advisor utilities.
If there were to be an alternative, I'd imagine it would require the ability to set trace flags, which you can't do, hence I don't think there is an option at the moment.
Solution? I can only suggest you keep a local development copy of the db so you can run Profiler locally against it. I know that won't help with "live" issues/debugging/monitoring, but it depends on what you need it for.
Edit:
Quote from MSDN forum:
Q: Is SQL Profiler supported in SQL Azure?
A: We do not support SQL Profiler in v1 of SQL Azure.
Now, you could interpret that as a hint that Profiler will be supported in future versions. I think it will be a big requirement for getting a lot of people on board and using SQL Azure seriously.
Update as of 9/17/2015:
Microsoft just announced a new feature called Index Advisor:
How does Index Advisor work? Index Advisor continuously monitors your database workload, performs the analysis and recommends new indexes that can further improve the DB performance.
Recommendations are always kept up-to-date: as the DB workload and schema evolves, Index Advisor will monitor the changes and adjust the recommendations accordingly. Each recommendation comes with the estimated impact to DB workload performance: you can use this information to prioritize the most impactful recommendations first. In addition, Index Advisor provides a very easy and powerful way of creating the recommended indexes. Creating new indexes only takes a couple of clicks. Index Advisor measures the impact of newly created indexes and provides a report on index impact to users.
You can get started with Index Advisor and improve your database performance with the following simple steps. It literally takes five minutes to get accustomed with Index Advisor’s simple and intuitive user interface. Let’s get started!
Original Answer:
SQL Azure now has some native profiling. See http://blogs.msdn.com/b/benko/archive/2012/05/19/cloudtip-14-how-do-i-get-sql-profiler-info-from-sql-azure.aspx for details.
Microsoft's stated position is that SQL Server Profiler is deprecated. As much as this is a bad idea, that's what they have said.
SQL Profile is already deprecated in SQL Server, and that’s part of the reason that it doesn’t make sense to bring to SQL DB.
What this means is you are going back 20+ years in database performance monitoring: everyone is going to have to write their own perf monitoring scripts instead of having a standard, factory-delivered tool that's on every server you go to. It's tantamount to deprecating sp_help and making every DBA write their own. Hope you know all your DMVs inside and out, and your INNER JOIN, OUTER JOIN, and CROSS APPLY syntax really well.
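For what it's worth, the hand-rolled scripts in question look roughly like this sketch, which uses standard DMVs that do work on Azure SQL Database:

-- Currently executing requests with their statement text:
-- roughly what you would previously have watched in Profiler.
SELECT r.session_id,
       r.status,
       r.cpu_time,
       r.total_elapsed_time,
       r.wait_type,
       SUBSTRING(t.text, (r.statement_start_offset / 2) + 1,
                 ((CASE r.statement_end_offset
                       WHEN -1 THEN DATALENGTH(t.text)
                       ELSE r.statement_end_offset
                   END - r.statement_start_offset) / 2) + 1) AS running_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;  -- exclude this monitoring session itself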
Update as of 2017/04/14:
Microsoft's Scott Guthrie today announced a lot of new features coming to SQL Azure (as part of SQL Azure Managed Instance, which is currently in preview), expected to appear in SQL Azure in the coming months. They are:
1. SQL Agent
2. SQL Profiler
3. SQL CLR
4. Service Broker
5. Log shipping, transactional replication
6. Native backup/restore
7. Additional DMVs and XEvents
8. Cross-database querying
References:
https://youtu.be/0uT46lpjeQE?t=1415
Today I tried a new tool suggested by Microsoft called Azure Data Studio.
In this tool you can download an extension called Profiler, and it seems to work just as expected.
You can use the Query Store feature; look here for more details: http://azure.microsoft.com/blog/2015/06/08/query-store-a-flight-data-recorder-for-your-database/
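For example, once Query Store is enabled, the top queries by duration can be pulled straight from its catalog views (a minimal sketch; time-window filtering is omitted for brevity):

-- Enable Query Store on the database (on by default on newer Azure SQL DBs).
ALTER DATABASE CURRENT SET QUERY_STORE = ON;

-- Top 10 queries by average duration recorded by Query Store.
SELECT TOP 10
       qt.query_sql_text,
       SUM(rs.count_executions)      AS executions,
       AVG(rs.avg_duration) / 1000.0 AS avg_duration_ms  -- stored in microseconds
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query         AS q  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan          AS p  ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY AVG(rs.avg_duration) DESC;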
The closest thing to SQL Profiler that I found working in Azure SQL is SQL Workload Profiler.
Note, however, that it is the beta version of a tool created by a single person, and it is not too convenient to use.
SQL Azure offers the following features to tune performance, profile queries in its own way, identify long-running queries, and much more:
Intelligent Performance
Performance overview
Performance recommendations
Query Performance Insight
Automatic tuning