Our system is implemented as a web role backed by a SQL Azure database, and we are experiencing recurring timeouts on a specific query.
These timeouts occur for a few hours during the day and then do not show up anymore.
The query joins two tables on their primary keys, and the number of rows is not very high (about 800,000).
The execution plan is fine, the indexes are used properly, and the query normally takes two seconds to execute.
Tests without Entity Framework give the same result.
Transient fault handling is not applicable in the case of a timeout.
What can be the cause of this behavior?
We have experienced similar issues in the past using SQL Azure; frequently, queries running against tables with fewer than 10 rows, and even the standard .NET membership provider queries, all failed intermittently with timeouts. This usually happens when we have little to no activity on our service, mostly at night.
In commonly used areas where it is safe to retry on a SQL timeout (usually read operations), we have added the timeout exception to our custom error detection strategy, taken from the Transient Fault Handling Block; however, as you stated, this is not appropriate in most cases.
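As a rough illustration, here is a minimal sketch of such a strategy, assuming the Enterprise Library Transient Fault Handling Block ("Topaz") and its built-in SqlDatabaseTransientErrorDetectionStrategy; the class name and retry settings below are our own:

```csharp
using System;
using System.Data.SqlClient;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

// Illustrative name: a detection strategy that also treats SQL command
// timeouts (error number -2) as transient, which is only safe to do for
// idempotent read operations.
public class TimeoutAwareDetectionStrategy : ITransientErrorDetectionStrategy
{
    private readonly SqlDatabaseTransientErrorDetectionStrategy _inner =
        new SqlDatabaseTransientErrorDetectionStrategy();

    public bool IsTransient(Exception ex)
    {
        // Defer to the built-in SQL Azure strategy first.
        if (_inner.IsTransient(ex))
            return true;

        // Additionally retry on command timeouts.
        var sqlEx = ex as SqlException;
        return sqlEx != null && sqlEx.Number == -2;
    }
}

// Usage (retry counts are arbitrary here):
// var policy = new RetryPolicy<TimeoutAwareDetectionStrategy>(
//     3, TimeSpan.FromSeconds(2));
// policy.ExecuteAction(() => RunReadQuery());
```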
The best explanation we have received from Azure support thus far is that, as SQL Azure is really a shared SQL Server instance used by multiple clients, one user performing an intensive operation can affect other users in this way. However, believing this not to be acceptable, we are still in contact with SQL Azure support to ascertain why throttling is not stopping this sort of activity from affecting us.
Your best bet is to:
Contact SQL Azure support, either through the forums or directly (if you have a support package).
If possible, try setting up a new SQL Azure instance and migrating your database across.
Whilst we get this issue intermittently on one SQL Azure instance, we have never experienced it on our other two instances.
As a side note, we are still waiting on Azure support to get back to us regarding why we were still receiving timeout exceptions.
We sometimes observe slowness retrieving an object from the Azure Redis cache when the key follows the pattern "http:____my.website.com", whereas the time to retrieve an object with the key "abc_xyz_def_test_test" is almost consistent, with none of the spikes seen in the first case. The objects stored against "http:____my.website.com" and "abc_xyz_def_test_test" are of almost the same size. We have also verified that serializing the object to a custom type is not playing foul here.
Is the slowness caused by the key pattern? And how can we overcome it?
We are using the Azure Redis P1 tier (without clustering); Redis metrics such as CPU and memory look normal in the Azure portal.
According to the best practices documentation, there are several ways to improve performance:
Chop up bigger data into multiple keys.
Configure your client library to use a connect timeout of at least 15 seconds (see the sketch after this list).
Scale up to P2 Premium to get higher network bandwidth.
Configure Redis clustering for a Premium Azure Cache for Redis.
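For the connect-timeout item, here is a minimal sketch assuming the StackExchange.Redis client; the host name and access key are placeholders:

```csharp
using StackExchange.Redis;

var options = new ConfigurationOptions
{
    EndPoints = { "mycache.redis.cache.windows.net:6380" }, // placeholder host
    Password = "<access-key>",                              // placeholder key
    Ssl = true,
    ConnectTimeout = 15000,    // at least 15 seconds (value is in milliseconds)
    AbortOnConnectFail = false // keep retrying the connection in the background
};

var connection = ConnectionMultiplexer.Connect(options);
var db = connection.GetDatabase();
```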
I'm migrating from SQL Server to Azure SQL, and I'd like to ask those of you with more experience in Azure (I have basically none) some questions, just to understand what I need to do to get the best migration.
Today I do a lot of cross-database queries in some of my tasks that run once a week: I execute stored procedures and run selects, inserts, and updates across the databases. I solved the execution of stored procedures by using external data sources and sp_execute_remote, but as far as I can see, it's only possible to select from an external database, meaning I won't be able to do any inserts or updates across the databases. Is that correct? If so, what's the best way to solve this problem?
I also read that cross-database calls are slow. Does this mean they are slower than in SQL Server? I want to know if I'll face a slower process compared to what I have today.
What I really need is some good guidelines on how to do the best migration without spending loads of time with trial and error. I appreciate any help in this matter.
Cross-database transactions are not supported in Azure SQL DB. You connect to a specific database, and you can't use three-part names or the USE syntax.
You could open two different connections from your program, one to each database. This doesn't give you any kind of transactional consistency, but it would allow you to retrieve data from one Azure SQL DB and insert it into another, as sketched below.
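A minimal sketch of that pattern in ADO.NET; the connection strings, table, and columns are placeholders, and there is no transactional consistency across the two connections:

```csharp
using System.Data.SqlClient;

using (var source = new SqlConnection("<source-db-connection-string>"))
using (var target = new SqlConnection("<target-db-connection-string>"))
{
    source.Open();
    target.Open();

    // Read from the first database...
    var select = new SqlCommand("SELECT Id, Name FROM dbo.Customers", source);
    using (var reader = select.ExecuteReader())
    {
        // ...and insert row by row into the second.
        while (reader.Read())
        {
            var insert = new SqlCommand(
                "INSERT INTO dbo.Customers (Id, Name) VALUES (@id, @name)",
                target);
            insert.Parameters.AddWithValue("@id", reader.GetInt32(0));
            insert.Parameters.AddWithValue("@name", reader.GetString(1));
            insert.ExecuteNonQuery();
        }
    }
}
```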
So, at least for now, if you want your database in Azure and you can't avoid cross-database transactions, you'll be using an Azure VM to host SQL Server.
I am seeing erratic performance with an Azure Search Basic instance. Our index only has 1,544 documents and is 28MB in size, so I would expect searches to be very fast.
Azure Application Insights is reporting 4.7K calls to Azure Search from our app within the last 12 hours, with an average response time of 2.1s and a standard deviation of 35.8s(!).
I am personally seeing erratic performance during my manual testing. A query can take 20+ seconds at one moment, and then just a bit later the same query will take less than 100ms.
The queries are very simple. Here's an example query string:
api-version=2015-02-28&api-key=removed&search=&%24count=true&%24top=10&%24skip=0&searchMode=all&scoringProfile=FieldBoost&%24orderby=sortableTitle
What can I do to further troubleshoot this issue?
First off, I assume you have a fairly even distribution of queries, which based on your numbers means you are only running ~1 query per second. Does that sound correct? If not, and you are seeing large spikes of queries, it is very possible that you do not have enough replicas (copies of the index) to handle the query load. Please note that a single-replica Basic service is targeted to handle low single-digit QPS (although this can vary widely based on the complexity or simplicity of the queries). If you go beyond the limits of the service, latency can certainly become an issue. A good way to drill into this is to use Azure Search Traffic Analytics, which can expose search metrics including the number of queries per second over various timeframes, as well as the latency metrics we are seeing internally.
Also, most importantly, please try to reuse HTTP connections as much as possible and leverage HTTP connection pooling. In .NET, that means reusing a single HttpClient instance, or a single SearchIndexClient instance if you are using our Azure Search SDK; see the sketch below.
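A minimal sketch of that reuse pattern; the service name, index name, and key are placeholders:

```csharp
using System;
using System.Net.Http;

public static class SearchClients
{
    // One HttpClient shared for the lifetime of the process, instead of
    // creating (and disposing) a client per request, which exhausts sockets
    // and pays connection setup latency on every call.
    public static readonly HttpClient Http = new HttpClient
    {
        BaseAddress = new Uri("https://myservice.search.windows.net") // placeholder
    };

    // If using the Azure Search .NET SDK, keep a single SearchIndexClient
    // around in the same way, e.g.:
    // public static readonly SearchIndexClient Index =
    //     new SearchIndexClient("myservice", "myindex",
    //                           new SearchCredentials("<api-key>"));
}
```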
I gathered more data and posted my results over at the Azure Search forum.
The slowdowns are due to the fact that we're running a single Basic instance, and code deployments by the Azure Search team cause a brief (a few minutes, in my experience) interruption or degradation in service.
I find running two basic instances too expensive. Our search traffic doesn't warrant two instances except for availability purposes.
It's my understanding from the forum that the free tier has generally higher availability than a single basic instance. As a result, I have submitted a feedback item suggesting a paid shared tier that would provide more storage than the free tier while retaining higher availability than a single dedicated instance.
We're having issues with TransactionScope in a .NET 4 project.
We have segmented our DALs into domains; that is, we have different Linq2Sql DataContexts pointing to the same database.
The issue arises when, within the same TransactionScope, we insert/update on more than one DataContext: an MSDTC transaction instantly pops up, both locally and on the server, and then just hangs there for 1-2 minutes (I guess it times out). The code then continues to run until t.Complete() and the subsequent implied .Dispose() throw an exception: "Transaction has aborted."
We have configured MSDTC both locally and on the server to allow everything, with no authentication and full trace levels, yet no relevant information shows up in dtctrace.log.
I guess it is standard procedure for MSDTC to kick in when more than one database connection is initiated (even against the same database), but why the timeout? The operations are not conflicting; there is no possible way for a deadlock to occur in our domain.
I have googled and tested extensively; hoping for some seasoned experience here :)
With SQL 2005, any transaction spanning multiple connections will be escalated to DTC. With SQL 2008, several connections with the same connection string can participate in the same transaction without the need for DTC. With the architecture you've chosen, I'd strongly suggest upgrading to SQL 2008 if that is an option; DTC can be a pain to get working correctly.
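A minimal sketch of the pattern, assuming SQL 2008 and Linq2Sql; the DataContext type names and connectionString are placeholders, and note the transaction can only stay local if the two connections are not open at the same time:

```csharp
using System.Transactions;

using (var scope = new TransactionScope())
{
    using (var orders = new OrdersDataContext(connectionString))
    {
        // ... inserts/updates on the first domain ...
        orders.SubmitChanges();
    } // dispose (closing the connection) before opening the next context,
      // so SQL 2008 can keep the transaction local instead of promoting it

    using (var customers = new CustomersDataContext(connectionString))
    {
        // ... inserts/updates on the second domain ...
        customers.SubmitChanges();
    }

    scope.Complete();
}
```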
I want to host my WCF services in Azure for scalability reasons. For example, there will be some read-data action, and it will be under high load (1000+ users/sec).
(Like in my previous question)
I also have a limitation of a 1-second timeout for any request.
My service will be connected to SQL Azure. I chose it because of the small latency (no more than 7 ms, according to Microsoft's benchmark).
How many concurrent connections can SQL Azure hold per instance/database?
Is there any way to scale SQL Azure when I reach the limit of connections per instance?
Are there other solutions or options for my scenario?
Thanks.
One thing to keep in mind is that you will need to make sure you are leveraging connection pooling to its maximum. Using a service account instead of different logins is an important step in ensuring proper connection pooling.
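ADO.NET keys its connection pools on the exact connection string, so per-user logins create one pool per user and defeat pooling. A minimal sketch with a single service account; server, database, and credentials are placeholders:

```csharp
using System.Data.SqlClient;

// One connection string, one service account: every part of the app that
// uses this exact string shares the same connection pool.
const string ConnectionString =
    "Server=tcp:myserver.database.windows.net;Database=mydb;" +
    "User ID=serviceaccount;Password=<secret>;Pooling=true;Max Pool Size=100;";

using (var connection = new SqlConnection(ConnectionString))
{
    connection.Open(); // taken from the pool, not a new physical login
    // ... execute commands ...
} // Dispose returns the connection to the pool rather than closing it
```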
Another consideration is the use of MARS (Multiple Active Result Sets). If you have many requests coming through, you may want to pool them together into a single request, hence a single connection, and return multiple resultsets. In this post I discuss how to implement one-way queuing of SQL statements; it may not work for you as-is, because you may be expecting a response, but it may give you some ideas on how to implement a batch of requests to minimize the number of connections and the wait time.
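As a rough illustration of the batching idea (not the queuing approach from the post itself), several statements can be sent as one command over one connection and read back as multiple resultsets; the table names are placeholders:

```csharp
using System.Data.SqlClient;

using (var connection = new SqlConnection("<connection-string>"))
{
    connection.Open();

    // Two requests batched into a single command on a single connection.
    var command = new SqlCommand(
        "SELECT Id, Name FROM dbo.Products; " +
        "SELECT Id, Total FROM dbo.Orders;",
        connection);

    using (var reader = command.ExecuteReader())
    {
        while (reader.Read()) { /* first resultset: products */ }
        reader.NextResult();   // advance to the second resultset
        while (reader.Read()) { /* second resultset: orders */ }
    }
}
```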
You can also take a look at a tool I wrote last year to test connections/statements against SQL Azure. The tool automatically turns off connection pooling to measure the effects of concurrency. You can download it here.
Finally, I also wrote the Enzo Shard Library on CodePlex. Let me know if you have any questions if you decide to investigate the library for your project. Note that the library will evolve to support the future capabilities of SQL Azure Data Federation as well.
It appears there is no hard limit on the number of connections per SQL Azure instance, but Microsoft states that they reserve the right to throttle connections in situations where resource use is regarded as "excessive".
There's some information on this here, also details on what may happen in this situation here.
A good workaround is to consider "sharding", where you partition your data on some easily definable criteria and spread it across multiple databases, as sketched below. This does, of course, incur additional cost. A neat implementation of the idea is here: http://enzosqlshard.codeplex.com/
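A minimal sketch of the routing side of sharding; the shard key, the modulo routing, and the connection strings are all illustrative, and libraries like the Enzo shard library above generalize this considerably:

```csharp
using System.Data.SqlClient;

public static class ShardMap
{
    // One connection string per shard database (placeholders).
    private static readonly string[] Shards =
    {
        "<connection-string-shard-0>",
        "<connection-string-shard-1>",
        "<connection-string-shard-2>",
    };

    // Route a customer to its shard by a simple modulo on the shard key.
    public static SqlConnection OpenForCustomer(int customerId)
    {
        var connection = new SqlConnection(Shards[customerId % Shards.Length]);
        connection.Open();
        return connection;
    }
}
```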
Also, Azurescope had some interesting benchmarks here: http://azurescope.cloudapp.net/BestPractices/#ed6a21ed-ad51-4b47-b69c-72de21776f6a (unfortunately removed in early 2012).
Is there any way to scale SQL Azure when I reach the limit of connections per instance?
In addition to the Enzo sql sharding suggestion, there are a couple of Microsoft products/features under construction to assist with scaling SQL Azure. These are CTP (at best) but may provide some scalability options for you by allowing you to spread the load across multiple SQL Azure databases:
SQL Azure federations - http://convective.wordpress.com/2011/05/02/sql-azure-federations/
SQL Azure Data Sync - http://www.microsoft.com/windowsazure/sqlazure/datasync/