Presto Query error for memory

I am running a complex query on Presto 0.148 on HDP 2.3, which errors out:
Query 20161215_175704_00035_tryh6 failed: Query exceeded local memory limit of 1GB
I am able to run simple queries without issues.
Configuration on the coordinator and worker nodes:
http-server.http.port=9080
query.max-memory=50GB
query.max-memory-per-node=4GB
discovery.uri=http://host:9080
Query:
CREATE TABLE day_product AS
SELECT a.product_id, b.date, a.location
FROM tblproduct a, day b
WHERE b.date BETWEEN a.mfg_date AND a.exp_date
I had to restart Presto; only then was the configuration applied. I see that Presto keeps the query result set in memory if any operation is performed on the result set.
Hence Presto needs a lot of reserved memory, and the default setting of 1 GB is not good enough.

Make sure that you restart Presto after changing the config files; it seems like your configuration files are out of sync with the running Presto server.
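For example, on each node you can bounce the server with the bundled launcher (the install path is hypothetical; adjust it to your layout):
/opt/presto-server/bin/launcher restart
After the restart, the enforced limit should match query.max-memory-per-node from etc/config.properties.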

Related

How to set row batch size for incrementalCollect in Apache Spark Thrift server?

I enabled spark.sql.thriftServer.incrementalCollect in my Thrift server (Spark 3.1.2) to prevent OutOfMemory exceptions. This worked fine, but my queries are really slow now. I checked the logs and found that the Thrift server is returning batches of 10,000 rows.
INFO SparkExecuteStatementOperation: Returning result set with 10000 rows from offsets [1260000, 1270000) with 169312d3-1dea-4069-94ba-ec73ac8bef80
My hardware would be able to handle 10x-50x of that.
This issue and this documentation page suggest setting spark.sql.inMemoryColumnarStorage.batchSize, but that didn't work.
Is it possible to configure the value?
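No answer is recorded here, but one avenue worth exploring (an assumption, not a confirmed fix) is that the batch size follows the fetch size requested by the Thrift client, so raising the client-side fetch size may enlarge the batches. A minimal sketch using the Hive JDBC driver (the connection details and table name are hypothetical):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FetchSizeExample {
    public static void main(String[] args) throws SQLException {
        // Requires the hive-jdbc driver on the classpath.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "user", "");
             Statement stmt = conn.createStatement()) {
            // Request larger result batches; this assumes the Spark Thrift
            // server honors the client-requested fetch size.
            stmt.setFetchSize(100_000);
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                while (rs.next()) {
                    // process each row...
                }
            }
        }
    }
}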

Timeout to read from Alluxio

I encountered this error while performing a Presto query on Alluxio. What does this timeout mean, and how can I fix it?
com.facebook.presto.spi.PrestoException: Error opening Hive split alluxio://xxxxx:19998/s3/data/m-00020 (offset=134217728,
length=67108864) using org.apache.hadoop.mapred.TextInputFormat:
Timeout to read 39963328512 from [id: 0x23615709, L:/xxxxx:34740 -
R:xxxxx/xxxxx:29999]
You will receive this error when the Alluxio worker takes too long (configurable through alluxio.user.network.netty.timeout) to provide data to the client.
One simple workaround is to increase the timeout (see the sketch after this list).
However, this is generally a symptom of the worker being overloaded in some way. Common things to check in your setup:
Alluxio worker load: this can be a problem if your compute is co-located and there is no resource management.
Load/bandwidth between the Alluxio worker and the under file system: this is often the bottleneck for remote storages like object stores.
If these are the bottlenecks, you can try reducing concurrency or increasing the number of nodes in your cluster.
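If you do raise the timeout, a minimal sketch (assuming the 1.x property named above, set in conf/alluxio-site.properties on the client side; the value is illustrative):
alluxio.user.network.netty.timeout=5min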

How to increase query specific timeout in VoltDB

In VoltDB Community Edition, when I upload a CSV file (size: 550 MB) more than 7 times and then perform basic aggregation operations, it shows a query timeout.
I then tried to increase the query timeout through the web interface, but it still shows the error "query specific timeout is 10s".
What should I do to resolve this issue?
What does your configuration / deployment file look like? To increase the query timeout, the following should appear in your deployment.xml file:
<systemsettings>
    <query timeout="30000"/>
</systemsettings>
where 30000 is 30 seconds, for example. The cluster-wide query timeout is set when you first initialize the database with voltdb init. You could re-initialize, forcing a new deployment file with the above section in it:
voltdb init --force --config=new_deployment_file.xml
Or you could keep the cluster running and simply use:
voltadmin update new_deployment_file.xml
The section Query Timeout in this part of the docs contains more information as well:
https://docs.voltdb.com/AdminGuide/HostConfigDBOpts.php
Full disclosure: I work at VoltDB.

"Request timed out" is not logged on the server side in Cassandra

I have set the server timeout in Cassandra to 60 seconds and the client timeout in the cpp driver to 120 seconds.
I use a batch query with 18K operations. I get the "Request timed out" error in the cpp driver logs, but there is no TRACE in the Cassandra server logs despite enabling ALL logging in Cassandra's logback.xml.
How can I confirm whether the timeout is thrown from the server side or the client side in Cassandra?
BATCH is not intended to work that way. It's designed to apply 6 or 7 mutations to different tables atomically. You're trying to use it like its RDBMS counterpart, and Cassandra just doesn't work that way. The BATCH timeout is designed to protect the node/cluster from crashing due to how expensive that query is for the coordinator.
In the system.log, you should see warnings/failures concerning the sheer size of your BATCH. If you've modified those thresholds and don't see the size warnings, you should still see a warning about a timeout threshold being exceeded (I think BATCH gets its own timeout in 3.0).
If all else fails, run your BATCH statement (or part of it) in cqlsh with tracing on, and you'll see precisely why this is a bad idea on the server side (see the example below).
Also, the default query timeouts are there to protect your cluster. You really shouldn’t need to alter those. You should change your query/model or approach before looking at adjusting the timeout.
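For example, in cqlsh (the keyspace, table, and values are hypothetical):
TRACING ON;
BEGIN BATCH
    INSERT INTO ks.tbl (id, val) VALUES (1, 'a');
    INSERT INTO ks.tbl (id, val) VALUES (2, 'b');
APPLY BATCH;
The trace output lists each coordinator and replica step with timings, which makes the cost of a large batch visible.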

How to monitor the number of connections in Cassandra

I keep getting this exception under small load.
com.datastax.driver.core.exceptions.NoHostAvailableException: All
host(s) tried for query failed (tried: /127.0.0.1:9042
(com.datastax.driver.core.exceptions.BusyPoolException: [/127.0.0.1]
Pool is busy (no available connection and the queue has reached its
max size 256)))
Is there an option to check the number of open connections?
The driver provides a bunch of metrics, provided you do not set withoutMetrics on the cluster builder. You can check the Value attribute of the cluster1-metrics:name=open-connections MBean*.
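You can also read the same gauge programmatically; a minimal sketch assuming the 3.x Java driver (the contact point is illustrative):
import com.datastax.driver.core.Cluster;

public class OpenConnections {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .build()) {
            cluster.init(); // opens the control connection
            // Gauge tracking the total number of currently open connections.
            int open = cluster.getMetrics().getOpenConnections().getValue();
            System.out.println("open connections: " + open);
        }
    }
}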
Which version of Cassandra and the Java driver you're running can make a big difference. With a recent version of C* and the Java driver, you can have a lot more concurrent requests per connection than with, say, a 2.0 version of the Java driver.
You can use the PoolingOptions object to set the number of connections per host or the max queue size and pass it to your cluster builder, as in the sketch below.
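A minimal sketch (assuming the 3.x Java driver; the values are illustrative, not recommendations):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;

public class PoolTuning {
    public static void main(String[] args) {
        // Allow more connections per local host and a deeper request queue.
        PoolingOptions pooling = new PoolingOptions()
                .setMaxConnectionsPerHost(HostDistance.LOCAL, 8)
                .setMaxQueueSize(2048);

        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPoolingOptions(pooling)
                .build();
        // ... create a Session, run queries, then cluster.close()
    }
}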
* Note that the domain cluster1-metrics is generated by clusterName + "-metrics", so if you set withClusterName on the Cluster builder, the domain changes accordingly. It will also auto-increment cluster1 to cluster2 and so on if you create multiple Cluster objects in your JVM.
