I have been given the task of combatting occasional slowness when starting up an Azure web app. The web app makes seven separate API controller calls, each of which connects to an Azure SQL server to run stored procedures. Application Insights shows that these calls take less than 250 ms 90 percent of the time, but at other times they take 7-15 seconds. Adding logging shows that the OpenConnection statement accounts for all of the delay:
// Time how long EF Core takes to open the underlying connection
date1 = DateTime.Now;
_dbContext.Database.OpenConnection();
date2 = DateTime.Now;

if (_logger != null)
{
    // Log the elapsed time in milliseconds
    interval = date2 - date1;
    _logger.LogDebug(string.Format("GetUserDetails Opened Connection {0:N0}", interval.TotalMilliseconds));
}
I added a min pool size and a max pool size of 200 to the connection string. It did not help, and Application Insights never shows more than 100 connections at a time. Profiling the Azure SQL server shows the Audit Logout event with a similar delay when connecting.
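For reference, the connection string is along these lines (a sketch only; the server, database, and credential values here are placeholders, not the real ones):

// Pool sizing lives in the connection string itself; "Min Pool Size" pre-opens
// connections and "Max Pool Size" caps the pool.
var connectionString =
    "Server=tcp:myserver.database.windows.net,1433;Database=mydb;" +
    "User ID=appuser;Password=...;Encrypt=True;" +
    "Min Pool Size=200;Max Pool Size=200;";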
Where else should I look to find a cause for this occasional delay in creating connections?
Thanks in advance,
Hank
With performance, the cause could be any number of things. Here are a couple of things you could check.
Is it always the Xth iteration/invocation that causes the delay? In that case, check whether locks held by previous queries are blocking the connection.
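For example (a rough sketch, assuming you have permission to read the server's DMVs; the connection string is a placeholder), you could look for blocked sessions like this:

using System;
using Microsoft.Data.SqlClient;

// Sketch: list sessions that are currently blocked and what they are waiting on.
string connectionString = "<your connection string>";  // placeholder
const string sql = @"
    SELECT session_id, blocking_session_id, wait_type, wait_time, command
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0;";

using var conn = new SqlConnection(connectionString);
conn.Open();
using var cmd = new SqlCommand(sql, conn);
using var reader = cmd.ExecuteReader();
while (reader.Read())
{
    Console.WriteLine($"session {reader["session_id"]} blocked by {reader["blocking_session_id"]} " +
                      $"({reader["wait_type"]}, {reader["wait_time"]} ms)");
}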
If you are connecting to a Managed Instance, check your connection policy. Setting it to Proxy can also lead to throttling when there is high network load (or if you are connecting and querying very rapidly, say in a loop).
This is something simple, but make sure you are reusing the same connection string rather than building the connection details in code every time. Connection pooling only kicks in when the connection string is identical on every open, and each distinct string gets its own pool. (This could also explain why you have 100 connections.)
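To illustrate (a minimal sketch; the connection string values are placeholders): connections opened with byte-for-byte identical strings share one pool, while any variation creates a separate pool.

using Microsoft.Data.SqlClient;

const string cs = "Server=tcp:myserver.database.windows.net;Database=mydb;User ID=appuser;Password=...;";

// These two share one pool because the strings are identical.
using (var a = new SqlConnection(cs)) { a.Open(); }   // opens a physical connection
using (var b = new SqlConnection(cs)) { b.Open(); }   // reuses it from the pool

// Any difference in the string (even just an added option) creates a second,
// separate pool, and therefore a new physical connection.
using (var c = new SqlConnection(cs + "Application Name=other;")) { c.Open(); }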
I am running a JMeter test with 20 concurrent users ("Use KeepAlive" is enabled). All 20 users, with different login IDs, try to log in (1st test) and create a record (2nd test). While creating records I observed that most of them have a connect time of 0, but certain ones (say 5 of 20) have a connect time of 21000 ms, so the elapsed time of those 5 requests alone is much higher than the other 15 requests. Why is it happening for those 5 users alone?
We don't know. According to the JMeter Glossary:
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
So it's more of a network metric which indicates how long it took JMeter to establish a connection with the server.
You need to check:
Your network adapter statistics; it might be that the adapter doesn't have enough bandwidth to send 20 concurrent requests.
Your application's connection pool settings; for example, if the pool holds 15 connections, the remaining 5 requests will be queued and wait until a connection becomes available (a sketch of this behavior follows the list).
Your server's baseline health metrics like CPU and RAM (this can be done using the JMeter PerfMon Plugin), as the server may lack the resources to serve all connections at the same time.
JMeter itself: make sure to follow JMeter Best Practices, as JMeter may be overloaded and unable to send requests fast enough.
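To make the pool-exhaustion point concrete, here is a minimal sketch (using .NET SqlClient purely as an example stack; the connection string values, pool size, and timeout are assumptions):

using System.Collections.Generic;
using Microsoft.Data.SqlClient;

// With Max Pool Size=15, a 16th concurrent open waits in line; if nothing is
// released within Connection Timeout, it throws InvalidOperationException
// ("The timeout period elapsed prior to obtaining a connection from the pool").
const string cs = "Server=tcp:myserver.database.windows.net;Database=mydb;" +
                  "User ID=appuser;Password=...;Max Pool Size=15;Connection Timeout=30;";

var held = new List<SqlConnection>();
for (int i = 0; i < 15; i++)
{
    var c = new SqlConnection(cs);
    c.Open();            // takes one pooled connection and never gives it back
    held.Add(c);
}

using var extra = new SqlConnection(cs);
extra.Open();            // queues behind the 15 held connections, then times out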
I have an API which other microservices call to check whether a particular product exists in the inventory. The API takes only one parameter: the ID of the product.
The API is served through API Gateway in Lambda and it simply queries against a Postgres RDS to check for the product ID. If it finds the product, it returns the information about the product in the response. If it doesn't, it just returns an empty response. The SQL is basically this:
SELECT * FROM inventory WHERE expired = false AND product_id = request.productId;
However, the problem is that many services are calling this particular API very heavily to check the existence of products. Not only that, the calls often come in bursts. I assume those services loop through a list of product IDs and check for their existence individually, hence the burst.
The number of concurrent calls on the API has resulted in it making many queries to the database. The rate can burst beyond 30 queries per second, and there can be a few hundred thousand requests to fulfil. The queries are mostly the same, except for the product ID in the WHERE clause. The column has been indexed, and a query takes an average of only 5-8 ms to complete. Still, the connection to the database occasionally times out when the rate gets too high.
I'm using Sequelize as my ORM, and the error I get on timeout is SequelizeConnectionAcquireTimeoutError. There is a good chance the burst rate was too high and maxed out the pool.
Some options I have considered:
Using a cache layer. But I have noticed that, most of the time, 90% of the product IDs in the requests are not repeated. This would mean that 90% of the time it would be a cache miss, and the API would still query the database.
Auto-scaling the database. But because the calls are bursty and I don't know when they may come, the autoscaling won't complete in time to avoid the timeouts. Moreover, the query is a very simple SELECT statement and the CPU of the RDS instance hardly crosses 80% during the bursts, so I doubt scaling would do much either.
What other techniques can I use to keep the database from being hit hard when the API receives bursts of calls that are mostly unique and difficult to cache?
Use a cache loaded at boot time
You can load all necessary columns into an in-memory data store (Redis) at startup. A scheduled job (cron) then applies every database update to the cached data.
Problems: memory overhead, and the cost of keeping the cache up to date.
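A minimal sketch of that idea (in C# for illustration, with an in-memory set standing in for Redis; LoadLiveProductIdsFromDb and the five-minute refresh interval are made up):

using System;
using System.Collections.Generic;
using System.Threading;

class ProductCache
{
    // Set of live product IDs, warmed at boot and swapped atomically on refresh.
    private volatile HashSet<long> _liveIds;
    private readonly Timer _refresh;

    public ProductCache()
    {
        _liveIds = LoadLiveProductIdsFromDb();   // warm the cache at boot
        _refresh = new Timer(_ => _liveIds = LoadLiveProductIdsFromDb(),
                             null, TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
    }

    // Answer existence checks from memory; no database round trip.
    public bool Exists(long productId) => _liveIds.Contains(productId);

    private static HashSet<long> LoadLiveProductIdsFromDb()
    {
        // Hypothetical data access, e.g.:
        // SELECT product_id FROM inventory WHERE expired = false;
        return new HashSet<long>();
    }
}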
Limit DB calls
Create a buffer for IDs: store n IDs and then make one query for all of them, or flush the buffer every m seconds (a sketch follows below).
Problems: added client response time, plus extra work to map the combined query result back to the individual requests.
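A minimal sketch of that buffering idea (shown in C# for illustration, though the question's stack is Node/Sequelize; the 50 ms flush interval, 500-ID batch limit, and QueryExistingIdsAsync helper are all made up):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// Callers enqueue an ID and await the batched answer; a background loop
// drains the buffer and issues one query for the whole batch.
class ExistenceBatcher
{
    private readonly ConcurrentQueue<(long Id, TaskCompletionSource<bool> Done)> _buffer = new();

    public ExistenceBatcher() => _ = Task.Run(FlushLoop);

    public Task<bool> ExistsAsync(long productId)
    {
        var tcs = new TaskCompletionSource<bool>();
        _buffer.Enqueue((productId, tcs));
        return tcs.Task;
    }

    private async Task FlushLoop()
    {
        while (true)
        {
            await Task.Delay(TimeSpan.FromMilliseconds(50));   // m = 50 ms
            var pending = new List<(long Id, TaskCompletionSource<bool> Done)>();
            while (pending.Count < 500 && _buffer.TryDequeue(out var item))   // n = 500
                pending.Add(item);
            if (pending.Count == 0) continue;

            // One round trip instead of pending.Count round trips, e.g.:
            // SELECT product_id FROM inventory
            // WHERE expired = false AND product_id = ANY(@ids);
            HashSet<long> found = await QueryExistingIdsAsync(pending.ConvertAll(p => p.Id));

            foreach (var (id, done) in pending)
                done.TrySetResult(found.Contains(id));
        }
    }

    // Hypothetical helper standing in for the real data access code.
    private static Task<HashSet<long>> QueryExistingIdsAsync(List<long> ids)
        => Task.FromResult(new HashSet<long>());
}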
Change your database
Use a NoSQL database for this data. According to this article and this one, I think choosing a NoSQL database is a better idea.
Problems: multiple data stores
Start with a covering index to handle your query. You might create an index like this for your table:
CREATE INDEX inv_lkup ON inventory (product_id, expired) INCLUDE (col, col, col);
Mention all the columns in your SELECT in the index, either in the main list of indexed columns or in the INCLUDE clause. Then the DBMS can satisfy your query entirely from the index, which is faster.
You could start using AWS Lambda throttling to handle this problem. But for that to work, the consumers of your API will need to retry when they get 429 responses. That might be super-inconvenient.
Sorry to say, you may need to stop using Lambda. Ordinary web servers have good machinery built in to manage burst workloads.
They have an incoming connection (TCP/IP listen) queue. Each new request lands in that queue, where it waits until the server software accepts the connection. When the server is busy, requests wait in that queue; under high load they wait a bit longer. In Node.js's case, if you use clustering there's just one of these incoming connection queues, and all the processes in the cluster share it.
The server software you run (to handle your API) has a pool of connections to your DBMS. That pool has a maximum number of connections in it. As your server software handles each request, it awaits a connection from the pool. If no connection is immediately available, the request handling pauses until one becomes free, then proceeds. This too smooths out the flow of requests to the DBMS. (Be aware that each process in a Node.js cluster has its own pool.)
Paradoxically, a smaller DBMS connection pool can improve overall performance by avoiding too many concurrent SELECTs (or other queries) on the DBMS.
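One way to picture that bounding effect (a minimal sketch in C#; the limit of 10 concurrent calls is an arbitrary assumption):

using System;
using System.Threading;
using System.Threading.Tasks;

// Cap concurrent DB work at 10; extra callers wait in line instead of piling
// more simultaneous queries onto the DBMS.
static class GatedDb
{
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(10);

    public static async Task<T> RunAsync<T>(Func<Task<T>> dbCall)
    {
        await Gate.WaitAsync();          // queue here when 10 calls are in flight
        try { return await dbCall(); }
        finally { Gate.Release(); }
    }
}

A connection pool with a maximum size behaves the same way: the cap turns a burst of concurrent queries into an orderly queue.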
This kind of server configuration can be scaled out: a load balancer will do, as will a server with more cores and more Node.js cluster processes. An elastic load balancer can also add new server VMs when necessary.
I am currently stress testing a .NET Core application, targeting netcoreapp2.2, that is hosted on Azure as an App Service on a P1V2 (210 ACU, 3.5 GB memory) service plan with 2 instances.
The endpoint that I'm stress testing is very simple: it validates an OAuth 2.0 token, gets the user and some info about the user from a P2 (250 DTU) Azure-hosted database (4 DB queries per request in total), and returns the string "Pong".
When running 15 concurrent users (or more) over 200 loops, I see the stops in processing shown in the image (between the high peaks). The service plan never exceeds roughly 20-35% CPU, and the database never exceeds 2% load. Increasing the number of users decreases the average throughput.
Looking at the slow requests, processing seems to stop at random, never at the same place. When I look at the DB requests I never see one that takes longer than a couple of hundred milliseconds, yet some requests can take upwards of 5-6 seconds to process.
It feels like I'm hitting some limit that causes something to stall for a period of time, but I can't figure out where the problem lies.
When running the same stress test locally I don't see these stops.
I'm using the JMeter CLI to run the stress tests against both environments.
Any help is greatly appreciated, thanks!
This could be because of Azure DDoS protection behaviour.
If Azure decides your application is under a DDoS attack, Microsoft will stop all connections to your endpoint, in effect taking down your service.
To avoid this you need to set up a Web Application Firewall (WAF) so that healthy requests are distinguished from attack traffic and are not cut off.
When we migrated our apps to Azure from Rackspace, we saw almost 50% of HTTP requests getting read timeouts.
We tried placing the client both inside and outside Azure with the same results. The client in this case is also a server, btw, so no geographic/browser issues either.
We even tried increasing the size of the box to ensure Azure wasn't throttling. But even using D boxes for a single request, the result was the same.
Once we moved our apps out of Azure they started functioning properly again.
Each query was made directly against an instance using a public IP, so no load balancer issues either.
Almost 50% of queries ran into this issue. The timeout was set to 15 minutes.
Region was US East 2
Having 50% of HTTP requests timing out is not normal behavior, so you need to analyze what is causing those timeouts, starting by validating that the requests are actually hitting your VM. For this, I would recommend running a packet capture on your server and analyzing response times, as well as looking for a high number of retransmissions; it is even better if you can take a simultaneous network trace on the client machines so you can do TCP sequence-number analysis and compare packets sent vs. received.
If you are seeing high latencies or a high number of retransmissions in the packet capture, detailed analysis is required. I strongly suggest you open a support incident so Microsoft support can help you investigate the issue further.
We are using the Azure SQL Database (Web Edition) for a MVC3 ASP.NET/EF5 application.
Is there a limit to the number of sessions that this SQL Database setup supports? I am just wondering whether the delays we are seeing are due to some form of queuing or pooling. Currently we have about 5 concurrent users.
Thanks.
The SQL Azure Web edition database should support a high number of concurrent users; we've run applications that issue thousands of queries per minute against Web edition databases.
Throttling
SQL Azure does implement database throttling to maintain performance for all users of the platform. If throttling has been applied to the current operation you'll receive error 40501. The link I've provided also shows you how to determine why throttling is being applied. If you receive this error you can treat it as a transient error and wait before retrying.
It doesn't sound like your connections are being throttled, because you mention only 5 concurrent users and talk about delays, whereas the throttling error would occur pretty quickly.
Transient error handling
If you're getting connection timeouts etc., you need to handle them as transient errors. Transient errors are timeouts or dropped connections, as well as error codes 10054, 10053, 40501 (throttling, as described above) and 40197 (usually because an upgrade or failover operation is in progress).
You should ensure you implement retry logic to handle transient errors.
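A minimal sketch of such retry logic (the error numbers come from the list above; the attempt count and backoff schedule are arbitrary choices, and in practice you may prefer an existing library such as the Enterprise Library transient fault handling block):

using System;
using System.Data.SqlClient;
using System.Threading;

static class Transient
{
    private static readonly int[] TransientErrors = { 10053, 10054, 40197, 40501 };

    // Run an operation, retrying on the transient SQL error numbers above.
    public static T Execute<T>(Func<T> operation, int maxAttempts = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (SqlException ex) when (attempt < maxAttempts &&
                (ex.Number == -2 /* timeout */ ||
                 Array.IndexOf(TransientErrors, ex.Number) >= 0))
            {
                // Simple linear backoff: 1 s, 2 s, 3 s, ... before retrying.
                Thread.Sleep(TimeSpan.FromSeconds(attempt));
            }
        }
    }
}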
Query performance
If you're executing long-running queries you can check which ones are slow by logging into the database management URL:
https://<database-id>.database.windows.net/#$database=<database-name>
Log in and click "Query Performance" - take a look at the longest running queries at the top.