How to increase the query-specific timeout in VoltDB

In VoltDB Community Edition, when I upload a CSV file (size: 550 MB) more than 7 times and then perform basic aggregation operations, the query times out.
I then tried to increase the query timeout through the web interface, but it still shows the error "query specific timeout is 10s".
What should I do to resolve this issue?

What does your configuration / deployment file look like? To increase the query timeout, the following code should be somewhere in your deployment.xml file:
<systemsettings>
    <query timeout="30000"/>
</systemsettings>
Here 30000 means 30 seconds, for example. The cluster-wide query timeout is set when you first initialize the database with voltdb init. You could re-initialize with --force, using a new deployment file that contains the section above:
voltdb init --force --config=new_deployment_file.xml
Or you could keep the cluster running and simply use:
voltadmin update new_deployment_file.xml
The section Query Timeout in this part of the docs contains more information as well:
https://docs.voltdb.com/AdminGuide/HostConfigDBOpts.php
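Separately from the server-side setting, the VoltDB Java client also has a client-side procedure call timeout that can fire first. A minimal sketch of raising it for an ad hoc query (the host, timeout value, and table name are assumptions, not part of the original answer):

import org.voltdb.client.Client;
import org.voltdb.client.ClientConfig;
import org.voltdb.client.ClientFactory;
import org.voltdb.client.ClientResponse;

public class AdHocWithTimeout {
    public static void main(String[] args) throws Exception {
        ClientConfig config = new ClientConfig();
        // Raise the client-side call timeout to 60s so the client does not
        // give up before the server's query timeout (value is an assumption).
        config.setProcedureCallTimeout(60_000);

        Client client = ClientFactory.createClient(config);
        client.createConnection("localhost"); // assumed host

        ClientResponse resp = client.callProcedure("@AdHoc",
                "SELECT COUNT(*) FROM my_table"); // hypothetical table
        System.out.println(resp.getResults()[0]);
        client.close();
    }
}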
Full disclosure: I work at VoltDB.

Related

How can I increase the request timeout in Java SDK v4 for Cosmos or Spring data for Cosmos v3?

I need to run an aggregate query to calculate the count of records e.g. SELECT r.product_id, r.rating, COUNT(1) FROM product_ratings r GROUP BY r.product_id, r.rating. The query works perfectly fine on the Azure Data Explorer, albeit a little slow. An optimised version of the query takes about 30 seconds when executed on the Data Explorer. However, when I run the same query in my Java app, it appears to be timing out in 5 seconds with the following exception:
com.azure.cosmos.implementation.GoneException: {"innerErrorMessage":"The requested resource is no longer available at the server."}
I believe this is due to a default request timeout of 5 seconds defined in ConnectionPolicy (both Direct and Gateway modes). I can't find a way to override this default. Is there a way to increase the request timeout? Is there another possible reason for this error?
Tried this both on the Java SDK v4 and Spring Data Connector v3 with the same end result i.e. GoneException.
You could consider the following recommendations; they should help to address the issue:
Try increasing the HTTP connection pool size (the default is 1000; you can increase it to 2000).
If you are using Gateway mode, try Direct mode instead; more traffic will go over TCP and less over HTTP.
You can refer to the GitHub sample code on setting the timeout. Both settings are also illustrated in the sketch below.
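A hedged sketch of both settings with the Java SDK v4 (the endpoint, key, and specific values are assumptions; setNetworkRequestTimeout may not exist in very early 4.x releases):

import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.DirectConnectionConfig;
import com.azure.cosmos.GatewayConnectionConfig;
import java.time.Duration;

public class CosmosTimeoutConfig {
    public static void main(String[] args) {
        // Direct mode: raise the network request timeout (the default is 5s).
        DirectConnectionConfig direct = DirectConnectionConfig.getDefaultConfig()
                .setNetworkRequestTimeout(Duration.ofSeconds(10)); // assumed value

        // Gateway config (also used for metadata requests in direct mode):
        // grow the HTTP connection pool from the default 1000 to 2000.
        GatewayConnectionConfig gateway = GatewayConnectionConfig.getDefaultConfig()
                .setMaxConnectionPoolSize(2000);

        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<your-account>.documents.azure.com:443/") // placeholder
                .key("<your-key>") // placeholder
                .directMode(direct, gateway)
                .buildClient();

        // ... run the aggregate query against a container here ...
        client.close();
    }
}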

Request timed out is not logged on the server side in Cassandra

I have set the server timeout in Cassandra to 60 seconds and the client timeout in the C++ driver to 120 seconds.
I use a BATCH query with 18K operations. I get the "Request timed out" error in the C++ driver logs, but there is no TRACE in the Cassandra server logs in spite of enabling ALL logging in Cassandra's logback.xml.
So how can I confirm whether the timeout is thrown from the server side or the client side in Cassandra?
BATCH is not intended to work that way. It’s designed to apply 6 or 7 mutations to different tables atomically. You’re trying to use it like its RDBMS counterpart (Cassandra just doesn’t work that way). The BATCH timeout is designed to protect the node/cluster from crashing due to how expensive that query is for the coordinator.
In the system.log, you should see warnings/failures concerning the sheer size of your BATCH. If you’ve raised those thresholds and don’t see that, you should see a warning about a timeout threshold being exceeded (I think BATCH gets its own timeout in 3.0).
If all else fails, run your BATCH statement (part of it) in cqlsh with tracing on, and you’ll see precisely why this is a bad idea (server side).
Also, the default query timeouts are there to protect your cluster. You really shouldn’t need to alter those. You should change your query/model or approach before looking at adjusting the timeout.
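As an illustration of the "change your approach" point: instead of one 18K-statement BATCH, issue the writes individually and asynchronously with a bounded in-flight window. A hedged sketch using the DataStax Java driver 4.x rather than the C++ driver (keyspace, table, schema, and window size are all assumptions):

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.AsyncResultSet;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class UnbatchedInserts {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) { // localhost by default
            PreparedStatement insert = session.prepare(
                    "INSERT INTO my_ks.my_table (id, val) VALUES (?, ?)"); // hypothetical schema

            List<CompletableFuture<AsyncResultSet>> window = new ArrayList<>();
            for (int i = 0; i < 18_000; i++) {
                window.add(session.executeAsync(insert.bind(i, "v" + i))
                        .toCompletableFuture());
                // Simple backpressure: allow at most 256 writes in flight
                // instead of queueing all 18K mutations at once.
                if (window.size() == 256) {
                    CompletableFuture.allOf(window.toArray(new CompletableFuture[0])).join();
                    window.clear();
                }
            }
            CompletableFuture.allOf(window.toArray(new CompletableFuture[0])).join();
        }
    }
}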

Azure SQL Server copy database script failing

CREATE DATABASE {0}
AS COPY OF {1} ( SERVICE_OBJECTIVE = 'S2' )
Execution timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
CREATE DATABASE AS Copy of operation failed. Internal service error.
If setting a higher connection timeout via the connection string doesn't work, you might want to check out the Command Timeout setting on the SqlCommand.
You can also set this with any of the ORM-frameworks available, though the property is probably named something different.
You have a timeout exception, which indicates that the time to complete the command is longer than your timeout. Have a look at the connection string to see the connection timeout, and change it to a larger value.
Depending on what takes the time, you can create the database at a larger size (S3) and then scale it down afterwards. Check whether the DTU usage is at 100% while creating the database.
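For illustration, the same statement-level timeout idea in Java/JDBC terms, since the statement timeout (the JDBC analogue of SqlCommand's CommandTimeout) rather than the connection timeout governs a long-running command. Server, credentials, and database names are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CopyDatabase {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://<server>.database.windows.net:1433;database=master";
        try (Connection conn = DriverManager.getConnection(url, "<user>", "<password>");
             Statement stmt = conn.createStatement()) {
            // The statement-level timeout, not the connection timeout, limits
            // how long a running command may take; 0 would mean no limit.
            stmt.setQueryTimeout(600); // 10 minutes, an assumed value
            stmt.execute("CREATE DATABASE MyCopy AS COPY OF MySourceDb ( SERVICE_OBJECTIVE = 'S2' )");
        }
    }
}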

Presto Query error for memory

I am running a complex query on Presto 0.148 on HDP 2.3, which errors out:
Query 20161215_175704_00035_tryh6 failed: Query exceeded local memory limit of 1GB
I am able to run simple queries without issues.
Configuration on coordinator and worker nodes:
http-server.http.port=9080
query.max-memory=50GB
query.max-memory-per-node=4GB
discovery.uri=http://host:9080
Query:
CREATE TABLE a.product_id, b.date, LOCATION FROM tblproduct a, day b WHERE b.date BETWEEN a.mfg_date AND a.exp_date
I had to restart Presto, and only then was the configuration picked up. Presto seems to keep the query result set in memory whenever an operation is performed on the result set, so it needs a lot of reserved memory; the default setting of 1 GB is not good enough.
Make sure that you restart Presto after changing the config files; it seems like your configuration files are out of sync with the Presto server.
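One way to check that the server actually picked up the new values is to read the session properties back. A hedged sketch using the Presto JDBC driver (the connection details mirror the discovery.uri above but are assumptions, and the property names may vary by version):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckPrestoMemory {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:presto://host:9080/hive/default", "etl", null); // assumed coordinates
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW SESSION")) {
            while (rs.next()) {
                // Print the memory-related session properties; if these still
                // report 1GB, the server has not reloaded config.properties.
                if (rs.getString(1).startsWith("query_max_memory")) {
                    System.out.println(rs.getString(1) + " = " + rs.getString(2));
                }
            }
        }
    }
}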

BigQuery connector for Excel - Request failed: Error. Unable to execute query. Timeout while fetching URL

I am trying to pull data from BigQuery into Excel.
When I run simple, fast queries, everything runs fine. When running a "heavy" query that takes long to retrieve, I get the following error:
Request failed: Error. Unable to execute query. Timeout while fetching URL: https://www.googleapis.com/bigquery/v2/projects/{my-project}/queries.
I can see the query and retrieve its results in the browser tool's query history.
I manage to retrieve data for simpler queries.
Any ideas?
I believe it has to do with the default timeout configuration. Is there a way to set the timeout parameters for the connector?
Many thanks for your support.
It looks like the BigQuery web connector was not setting a timeout correctly. We have now updated it to 60 seconds, up from 15 seconds. 60 seconds is the longest timeout we can use without major restructuring, because the connector is hosted in an App Engine app.
Your 8-10 minute query, unfortunately, will not work. One alternative may be to run the query yourself and save the result in a BigQuery table (i.e. set a destination table for the query), and then just read that table from Excel (i.e. SELECT *).
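A hedged sketch of that alternative with the BigQuery Java client (a much newer API than the connector discussed above; the dataset and table names are assumptions):

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class MaterializeForExcel {
    public static void main(String[] args) throws Exception {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // Run the slow query once and write the result to a table; Excel can
        // then read the table with a fast SELECT * instead of re-running it.
        QueryJobConfiguration config = QueryJobConfiguration
                .newBuilder("SELECT <your slow query here>") // placeholder query
                .setDestinationTable(TableId.of("my_dataset", "excel_results")) // assumed names
                .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
                .build();

        Job job = bigquery.create(JobInfo.of(config)).waitFor();
        if (job.getStatus().getError() != null) {
            throw new RuntimeException(job.getStatus().getError().toString());
        }
    }
}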
