Azure SQL Server copy database script failing

Running the following T-SQL to copy a database:

CREATE DATABASE {0}
AS COPY OF {1} ( SERVICE_OBJECTIVE = 'S2' )

fails with:

Execution timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
CREATE DATABASE AS Copy of operation failed. Internal service error.

If setting a higher connection timeout via the connection string doesn't work, you might want to check the CommandTimeout setting on the SqlCommand.
You can also set this with any of the available ORM frameworks, though the property is probably named differently.
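For example, with plain ADO.NET it might look like the following sketch (the server, database, and credential values are placeholders; the copy statement is issued against the target server's master database):

using System.Data.SqlClient;

// A minimal sketch: run the copy with a generous command timeout.
using (var connection = new SqlConnection(
    "Server=tcp:myserver.database.windows.net,1433;Database=master;User ID=admin;Password=...;"))
using (var command = new SqlCommand(
    "CREATE DATABASE [MyDbCopy] AS COPY OF [MyDb] ( SERVICE_OBJECTIVE = 'S2' )", connection))
{
    command.CommandTimeout = 600; // seconds; the default of 30 is far too short for a database copy
    connection.Open();
    command.ExecuteNonQuery();
}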

You have a timeout exception, which indicates that the time needed to complete the command is longer than your timeout. Have a look at the connection string to see the connection timeout, and change it to a larger value.
Depending on what takes the time, you can also create the database at a larger service objective (for example S3) and then scale it down afterwards. Check whether DTU usage is at 100% while the copy is running.
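Also note that the copy operation is asynchronous on the server side, so it should continue even if the client stops waiting; you can monitor progress by querying sys.dm_database_copies in the target server's master database. A rough sketch (the connection string and database name are placeholders):

using System;
using System.Data.SqlClient;
using System.Threading;

// Poll copy progress from the target server's master database.
using (var connection = new SqlConnection(
    "Server=tcp:myserver.database.windows.net,1433;Database=master;User ID=admin;Password=...;"))
{
    connection.Open();
    using (var command = new SqlCommand(
        "SELECT percent_complete FROM sys.dm_database_copies WHERE database_id = DB_ID('MyDbCopy')",
        connection))
    {
        object progress;
        while ((progress = command.ExecuteScalar()) != null)
        {
            Console.WriteLine($"Copy progress: {progress}%");
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
        // The row disappears from sys.dm_database_copies once the copy completes.
    }
}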

Related

Consumption Plan Azure Functions Transient Failures

During heavy load my consumption Azure Functions are timing out with the following errors:
1. System.InvalidOperationException : An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call. ---> System.Data.SqlClient.SqlException : The client was unable to establish a connection because of an error during connection initialization process before login. Possible causes include the following: the client tried to connect to an unsupported version of SQL Server; the server was too busy to accept new connections; or there was a resource limitation (insufficient memory or maximum allowed connections) on the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.) ---> System.ComponentModel.Win32Exception : An existing connection was forcibly closed by the remote host.
2. System.InvalidOperationException : Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
We are using an Azure SQL database at P4 with 500 DTUs. My initial thought was that it might be failing due to too few available worker threads, but worker thread usage is well within limits, peaking at 12%.
We know that some of our LINQ queries are slow and not performing well, but fixing them would require business logic changes.
Is there any solution on the Azure infrastructure side, or any logs I can look into?
We first hit problem 1 a couple of months ago; adding EnableRetryOnFailure() to the database configuration resolved it. Sample code is given below:
var optionsBuilder = new DbContextOptionsBuilder<KasDbContext>();
optionsBuilder.UseSqlServer(getConnectionString(), options =>
{
    // Retry transient SQL failures with an exponential back-off.
    options.EnableRetryOnFailure(
        maxRetryCount: Constants.MaxRetryCountOnDbTransientFailure,
        maxRetryDelay: TimeSpan.FromSeconds(Constants.MaxDelaySecondsOnDbTransientFailure),
        errorNumbersToAdd: null);
});
return new KasDbContext(optionsBuilder.Options); // the options are typed for KasDbContext, so return that type
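The retry policy mainly addresses error 1. For the second, pool-related error, if the load genuinely needs more than the default 100 pooled connections, the pool can be enlarged via the connection string. A hedged example (all values other than Max Pool Size are placeholders), which only helps if contexts are disposed promptly so their connections return to the pool:

// Hypothetical connection string; Max Pool Size (default 100) is the relevant change.
var connectionString =
    "Server=tcp:myserver.database.windows.net,1433;Database=mydb;" +
    "User ID=admin;Password=...;Max Pool Size=200;";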

Azure SQL Database Occasionally Slow to Connect

I have been given the task of combatting an occasional slowness when starting up an Azure web app. The web app makes seven separate Azure API controller calls, each of which connects to an Azure SQL Server to run stored procedures. Application Insights shows that these calls take less than 250 ms 90 percent of the time but take 7-15 seconds at other times. Adding logging shows that the OpenConnection statement accounts for all of the delay:
var date1 = DateTime.Now;
_dbContext.Database.OpenConnection();
var date2 = DateTime.Now;
if (_logger != null)
{
    var interval = date2 - date1;
    _logger.LogDebug(string.Format("GetUserDetails Opened Connection {0:N0}", interval.TotalMilliseconds));
}
I added a min pool size and max pool size of 200 to the connection string. It did not help, and Application Insights does not show more than 100 connections at a time. Profiling the Azure SQL Server shows an Audit Logout event with a similar delay when connecting.
Where else should I look for the cause of this occasional delay in creating connections?
Thanks in advance,
Hank
With performance, it could be a bunch of things. Here are a couple of things you could check.
Is it always the Xth iteration/invocation that causes the delay? In that case, see if you have any locks left over from previous queries.
If you are connecting to a Managed Instance, check your connection policy. Setting it to Proxy can also lead to throttling when there is high network load (or if you are connecting and querying very fast, say in a loop).
This one is simple, but make sure you are reusing the same connection string rather than specifying the connection details in code each time. Connection pooling only kicks in when connections share a connection string, which could also explain why you see 100 connections; see the sketch below.
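As a sketch of that last point, with one shared connection string every SqlConnection draws from the same pool (the connection string and stored procedure name here are hypothetical):

// Pooling is keyed on the exact connection string, so keep a single shared one.
private const string ConnectionString =
    "Server=tcp:myserver.database.windows.net,1433;Database=mydb;User ID=admin;Password=...;";

public void RunGetUserDetails()
{
    using (var connection = new SqlConnection(ConnectionString))
    using (var command = new SqlCommand("dbo.GetUserDetails", connection)) // hypothetical procedure name
    {
        command.CommandType = System.Data.CommandType.StoredProcedure;
        connection.Open(); // fast when a pooled connection is available
        command.ExecuteNonQuery();
    }
}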

"Operation was cancelled" exception is throwing for long running Azure indexer

We are getting an "Operation was cancelled" exception while the Azure indexer runs over larger record sets (around 2M+). Here are the log details:
"The operation was canceled. Unable to read data from the transport connection: The I/O operation has been aborted because of either a thread exit or an application request. The I/O operation has been aborted because of either a thread exit or an application request "
We are running the indexer on a separate thread. It works for smaller record sets, but for larger ones (1M+) it throws a SocketException.
Has anyone seen this error while running an Azure indexer over large record sets (i.e., running for a long time)?
(We have already increased the HttpClient timeout to the maximum value on the serviceClient object.)
This could happen because of excess HTTP connections. Try making your **HttpClient** static and see if anything improves. Setting the **HttpClient** timeout to the maximum value is required to execute with the maximum number of records.
You may also want to work on reducing your SQL query time for the best indexer performance. Also, please share your code if possible.
Hope it helps.
Try setting SearchServiceClient.HttpClient.Timeout to Timeout.InfiniteTimeSpan. You have to set the timeout before you send any request to Azure Cognitive Search.
client.HttpClient.Timeout = Timeout.InfiniteTimeSpan;
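In context, that might look like this (a sketch assuming the Microsoft.Azure.Search SDK; the service name, API key, and indexer name are placeholders):

using System.Threading;
using Microsoft.Azure.Search;

var client = new SearchServiceClient("my-search-service", new SearchCredentials("admin-api-key"));
// Must be set before the first request; HttpClient.Timeout cannot be changed afterwards.
client.HttpClient.Timeout = Timeout.InfiniteTimeSpan;
client.Indexers.Run("my-indexer");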

How to Increase the Command Timeout in CodingHorror("my sql query").Execute();

I am trying to create a database backup and am using CodingHorror to execute my command, as below.
CodingHorror("my sql query").Execute();
My database is large, and the backup takes about 2 minutes to complete when I execute the command in MSSQL. But when executing it from my C# application, an exception is thrown as below:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Is there any way to increase the command timeout in SubSonic's CodingHorror?
CodingHorror uses the normal DataProvider, so it should just use the timeout you've set in the connection string (q.v. Timeout setting for SQL Server).
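If you go that route, the timeout lives in the app's connection string. A hedged example (server and database names are placeholders; note that in standard ADO.NET connection strings this keyword governs connection establishment, so verify that SubSonic's provider applies it to commands as the answer suggests):

// Hypothetical connection string with a larger timeout value, in seconds.
const string connectionString =
    "Server=myserver;Database=mydb;Integrated Security=true;Connect Timeout=300;";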

Connection to Redis cache fails after restart

We are using the following code to connect to our caches (in-memory and Redis):
settings
    .WithSystemRuntimeCacheHandle()
    .WithExpiration(CacheManager.Core.ExpirationMode.Absolute, defaultExpiryTime)
    .And
    .WithRedisConfiguration(CacheManagerRedisConfigurationKey, connectionString)
    .WithMaxRetries(3)
    .WithRetryTimeout(100)
    .WithJsonSerializer()
    .WithRedisBackplane(CacheManagerRedisConfigurationKey)
    .WithRedisCacheHandle(CacheManagerRedisConfigurationKey, true)
    .WithExpiration(CacheManager.Core.ExpirationMode.Absolute, defaultExpiryTime);
It works fine, but sometimes the machine is restarted (automatically by Azure, where we host it), and after the restart the connection to Redis fails with the following exception:
Connection to '{connection string}' failed.
at CacheManager.Core.BaseCacheManager`1..ctor(String name, ICacheManagerConfiguration configuration)
at CacheManager.Core.BaseCacheManager`1..ctor(ICacheManagerConfiguration configuration)
at CacheManager.Core.CacheFactory.Build[TCacheValue](String cacheName, Action`1 settings)
at CacheManager.Core.CacheFactory.Build(Action`1 settings)
According to the Azure Redis FAQ (https://learn.microsoft.com/en-us/azure/redis-cache/cache-faq), section "Why was my client disconnected from the cache?", this can happen after a redeploy.
The questions are:
1. Is there any mechanism to restore the connection after a redeploy?
2. Is anything wrong with the way we initialize the connection?
We are sure the connection string is OK.
Most clients (including StackExchange.Redis) usually connect / re-connect automatically after a connection break. However, your connect timeout setting needs to be large enough for the re-connect to happen successfully. Remember, you only connect once, so it's alright to give the system enough time to be able to reconnect. A higher connect timeout is especially useful when you have a burst of connections or re-connections after a blip: CPU spikes, and some connections might not complete in time.
In this case, I see RetryTimeout set to 100. If this is the connection timeout, check whether it is in milliseconds. 100 milliseconds is far too low; you might want to make it more like 10 seconds (remember it's a one-time thing, so you want to give it time to connect).
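With StackExchange.Redis, which CacheManager's Redis handle builds on, both settings can be put directly into the Redis connection string. A sketch (host and password are placeholders):

// connectTimeout is in milliseconds; abortConnect=false tells the client to keep
// retrying in the background instead of failing outright after a restart or blip.
var redisConnectionString =
    "mycache.redis.cache.windows.net:6380,password=...,ssl=true," +
    "abortConnect=false,connectTimeout=10000";

settings
    .WithRedisConfiguration(CacheManagerRedisConfigurationKey, redisConnectionString);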
