How to increase the command timeout for CodingHorror("my sql query").Execute() - SubSonic

I am trying to create a database backup and am using CodingHorror to execute my command, as below.
CodingHorror("my sql query").Execute();
My database is large, and the backup takes about 2 minutes to complete when I run the command directly in MSSQL. But when I execute it from my C# application, the following exception is thrown:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Is there any way to increase the command timeout in SubSonic's CodingHorror?

CodingHorror uses the normal DataProvider, so it should just use the timeout you've set in the connection string; q.v. Timeout setting for SQL Server.
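For example (a minimal sketch; the server, database, and timeout value are placeholders, and whether the provider applies this to command execution depends on SubSonic's DataProvider), a connection string with a longer timeout can be built like this:
using System.Data.SqlClient;
// Placeholder connection string; substitute your own server and database.
var builder = new SqlConnectionStringBuilder(
    "Data Source=.;Initial Catalog=MyDb;Integrated Security=True")
{
    ConnectTimeout = 300   // seconds; the default is 15
};
// builder.ConnectionString is what you would put into the SubSonic provider configuration.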

Related

"Operation was cancelled" exception is throwing for long running Azure indexer

We are getting an "Operation was cancelled" exception while the Azure indexer is running over larger record sets (around 2M+). Here are the log details:
"The operation was canceled. Unable to read data from the transport connection: The I/O operation has been aborted because of either a thread exit or an application request. The I/O operation has been aborted because of either a thread exit or an application request "
We are running the indexer on a separate thread. It works for smaller record sets, but for larger ones (1M+) it throws a SocketException.
Has anyone seen this error while running the Azure indexer over larger record sets (i.e., running for a long time)?
(We have already increased the HttpClient timeout to its maximum value on the serviceClient object.)
This could happen because of excess HTTP connections. Try making your **HttpClient** static and see if anything improves. Setting the **HttpClient** timeout to its maximum value is required to process the maximum number of records.
You may also want to consider reducing your SQL query time for the best indexer performance. Also, please share your code if possible.
Hope it helps.
Try setting SearchServiceClient.HttpClient.Timeout to Timeout.InfiniteTimeSpan. You have to set the timeout before you send any request to Azure Cognitive Search.
client.HttpClient.Timeout = Timeout.InfiniteTimeSpan;
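As a minimal sketch (assuming the older Microsoft.Azure.Search SDK; the service name, key, and indexer name are placeholders), the timeout has to be applied before the client issues its first request:
using System.Threading;
using Microsoft.Azure.Search;
var credentials = new SearchCredentials("<admin-api-key>");
var serviceClient = new SearchServiceClient("<search-service-name>", credentials);
// Must be set before any request is sent through this client.
serviceClient.HttpClient.Timeout = Timeout.InfiniteTimeSpan;
// Only after that, run indexer operations, e.g.:
await serviceClient.Indexers.RunAsync("<indexer-name>");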

How to increase a query-specific timeout in VoltDB

In VoltDB Community Edition, when I upload a CSV file (size: 550 MB) more than 7 times and then perform basic aggregation operations, it shows a query timeout.
I then tried to increase the query timeout through the web interface, but it still shows the error "query specific timeout is 10s".
What should I do to resolve this issue?
What does your configuration / deployment file look like? To increase the query timeout, the following code should be somewhere in your deployment.xml file:
<systemsettings>
    <query timeout="30000"/>
</systemsettings>
Here 30000 means 30 seconds, for example. The cluster-wide query timeout is set when you first initialize the database with voltdb init. You could forcibly re-initialize with a new deployment file containing the above section:
voltdb init --force --config=new_deployment_file.xml
Or you could keep the cluster running and simply use:
voltadmin update new_deployment_file.xml
The section Query Timeout in this part of the docs contains more information as well:
https://docs.voltdb.com/AdminGuide/HostConfigDBOpts.php
Full disclosure: I work at VoltDB.

Request timed out is not logged on the server side in Cassandra

I have set the server timeout in Cassandra to 60 seconds and the client timeout in the C++ driver to 120 seconds.
I use a BATCH query with 18K operations. I get the "Request timed out" error in the C++ driver logs, but there is no TRACE in the Cassandra server logs, despite enabling ALL logging in Cassandra's logback.xml.
So how can I confirm whether it is thrown from the server side or the client side in Cassandra?
BATCH is not intended to work that way. It's designed to apply 6 or 7 mutations to different tables atomically. You're trying to use it like its RDBMS counterpart (Cassandra just doesn't work that way). The BATCH timeout is designed to protect the node/cluster from crashing due to how expensive that query is for the coordinator.
In the system.log, you should see warnings/failures concerning the sheer size of your BATCH. If you've modified those thresholds and don't see that, you should see a warning about a timeout threshold being exceeded (I think BATCH gets its own timeout in 3.0).
If all else fails, run your BATCH statement (or part of it) in cqlsh with tracing on, and you'll see precisely why this is a bad idea (server side).
Also, the default query timeouts are there to protect your cluster. You really shouldn’t need to alter those. You should change your query/model or approach before looking at adjusting the timeout.
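As an illustration of the "change your approach" point, here is a minimal sketch using the DataStax C# driver (the question uses the C++ driver, but the idea is the same; the keyspace, table, and the 'rows' collection are hypothetical): send the mutations as individual asynchronous statements with a bounded number of in-flight requests instead of one giant BATCH.
using System.Collections.Generic;
using System.Threading.Tasks;
using Cassandra;
var cluster = Cluster.Builder().AddContactPoint("127.0.0.1").Build();
var session = cluster.Connect("my_keyspace");                  // hypothetical keyspace
var prepared = session.Prepare("INSERT INTO events (id, payload) VALUES (?, ?)");
var inFlight = new List<Task>();
foreach (var row in rows)                                      // 'rows' stands in for your 18K items
{
    inFlight.Add(session.ExecuteAsync(prepared.Bind(row.Id, row.Payload)));
    if (inFlight.Count >= 128)                                  // keep concurrency bounded
    {
        await Task.WhenAll(inFlight);
        inFlight.Clear();
    }
}
await Task.WhenAll(inFlight);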

Azure SQL Server copy database script failing

CREATE DATABASE {0}
AS COPY OF {1} ( SERVICE_OBJECTIVE = 'S2' )
Execution timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
CREATE DATABASE AS Copy of operation failed. Internal service error.
If setting a higher connection timeout via the connection string doesn't work, you might want to check out the CommandTimeout setting on the SqlCommand.
You can also set this with any of the available ORM frameworks, though the property is probably named something different.
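For instance, with a plain SqlCommand the sketch would look like this (the connection string, database names, and timeout value are placeholders; a value of 0 means no limit):
using System.Data.SqlClient;
using (var conn = new SqlConnection("<your-connection-string>"))
using (var cmd = new SqlCommand(
    "CREATE DATABASE MyDbCopy AS COPY OF MyDb ( SERVICE_OBJECTIVE = 'S2' )", conn))
{
    cmd.CommandTimeout = 600;   // seconds; the default is 30
    conn.Open();
    cmd.ExecuteNonQuery();
}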
You have a timeout exception, which indicates that the time to complete the command is longer than your timeout. Have a look at the connection string to see the connection timeout, and change it to a larger value.
Depending on what takes the time, you can create the database at a larger size (S3) and then scale it down afterwards. Check whether DTU usage is at 100% while creating the database.

Oracle on Azure closes the connection after a period of time

We have an Oracle 11g DB on a Microsoft Azure VM.
The Oracle connection on the client side closes after a period of time even while active SQLs are running. I'm checking active SQL reports one minute, and the next: BOOM, closed connection.
We have not defined any profile that times out connections, but still, if I keep a connection open in SQL Developer and run a query some time later, the connection is closed.
When running batch programs that take more time on the server itself, the program seems to hang and no session for it remains.
I'm guessing that, after a particular time from when the session was obtained, the DB is closing the connection, thus making the batch hang.
Is this related to Azure or something else? It is not even producing any error codes.
