I am currently using Cassandra 3.11 with a 3-node cluster and a consistency level of ONE, along with the Node.js express-cassandra client 2.3.0. I am using the saveAsync method to store model data, but I am getting an error: apollo.model.save.dberror
Error during save query on DB -> NoHostAvailableError: All host(s) tried for query failed.
First host tried, <ip>:<port>: OperationTimedOutError: The host <ip>:<port>
did not reply before timeout 12000 ms.
I am unable to work out what is causing this error. It happens for most of the records; only a few get through. I am reading this data from a Kafka topic and pushing it to Cassandra after a few validations. The data rate is around 200 records per second.
I have tried googling and searching Stack Overflow but could not find any details about it. Sample code:
SomeData.insert = function (data) {
  data = transformData(data); // Basically cleans & normalizes the data to suit the model
  let any_data = new ExpressCassandra.instance.data_store(data);
  return any_data.saveAsync()
    .catch(function (err) {
      return Promise.reject(err);
    });
};
The Node.js driver returns NoHostAvailableError after it has tried all hosts and none of them were available.
The OperationTimedOutError means that the driver attempted to contact a node but never got a reply. This is different from a read or write timeout, which is a response returned by the coordinator node.
OperationTimedOutError indicates that the nodes are unresponsive, usually because they are overloaded. Review the logs: if you're seeing lots of GC activity and/or lots of dropped mutations/reads/messages, the nodes cannot keep up with the load and you need to consider increasing the capacity of your cluster by adding more nodes. Cheers!
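For example (assuming nodetool is available on one of the Cassandra nodes and the typical package-install log path), these two checks surface dropped messages and GC pressure directly:

nodetool tpstats                                    # the "Dropped" section lists dropped MUTATION/READ messages
grep -i GCInspector /var/log/cassandra/system.log   # long GC pauses logged by Cassandra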
Related
I have a 3-node Spanner instance, and a single table that contains around 4 billion rows. The DDL looks like this:
CREATE TABLE predictions (
  name STRING(MAX),
  ...,
  model_version INT64,
) PRIMARY KEY (name, model_version)
I'd like to set up a job to periodically remove some old rows from this table using the Python Spanner client. The query I'd like to run is:
DELETE FROM predictions WHERE model_version <> ?
According to the docs, it sounds like I would need to execute this as a Partitioned DML statement. I am using the Python Spanner client as follows, but am experiencing timeouts (504 Deadline Exceeded errors) due to the large number of rows in my table.
# this always throws a "504 Deadline Exceeded" error
database.execute_partitioned_dml(
    "DELETE FROM predictions WHERE model_version <> @model_version",
    params={"model_version": 104},
    param_types={"model_version": Type(code=INT64)},
)
My first intuition was to see if there was some sort of timeout I could increase, but I don't see any timeout parameters in the source :/
I did notice there was a run_in_transaction method in the Spanner lib that accepts a timeout parameter, so I decided to deviate from the partitioned DML approach to see if this method worked. Here's what I ran:
def delete_old_rows(transaction, model_version):
    delete_dml = "DELETE FROM predictions WHERE model_version <> {}".format(model_version)
    dml_statements = [
        delete_dml,
    ]
    status, row_counts = transaction.batch_update(dml_statements)

database.run_in_transaction(
    delete_old_rows,
    model_version=104,
    timeout_secs=3600,
)
What's weird about this is that the timeout_secs parameter appears to be ignored, because I still get a 504 Deadline Exceeded error within a minute or two of executing the above code, despite the timeout of one hour.
Anyways, I'm not too sure what to try next, or whether I'm missing something obvious that would allow me to run a delete query in a timely fashion on this huge Spanner table. The model_version column has pretty low cardinality (generally 2-3 unique model_version values in the entire table), so I'm not sure if that would factor into any recommendations. But if someone could offer some advice or suggestions, that would be awesome :) Thanks in advance
The reason that setting timeout_secs didn't help is that the argument is unfortunately not the timeout for the transaction itself. It is the retry deadline for the transaction: it sets the point after which the transaction will no longer be retried.
We will update the docs for run_in_transaction to explain this better.
The root cause was that the total timeout for the streaming RPC calls was set too low in the client libraries: 120s for streaming APIs (e.g. ExecuteStreamingSQL, which is used by partitioned DML calls).
This has been fixed in the client library source code, changing it to a 60-minute timeout (which is the maximum), and will be part of the next client library release.
As a workaround, in Java, you can configure the timeouts as part of the SpannerOptions when you connect to your database. (I do not know how to set custom timeouts in Python, sorry.)
final RetrySettings retrySettings =
    RetrySettings.newBuilder()
        .setInitialRpcTimeout(Duration.ofMinutes(60L))
        .setMaxRpcTimeout(Duration.ofMinutes(60L))
        .setMaxAttempts(1)
        .setTotalTimeout(Duration.ofMinutes(60L))
        .build();
SpannerOptions.Builder builder =
    SpannerOptions.newBuilder()
        .setProjectId("[PROJECT]");
builder
    .getSpannerStubSettingsBuilder()
    .applyToAllUnaryMethods(
        new ApiFunction<UnaryCallSettings.Builder<?, ?>, Void>() {
          @Override
          public Void apply(UnaryCallSettings.Builder<?, ?> input) {
            input.setRetrySettings(retrySettings);
            return null;
          }
        });
builder
    .getSpannerStubSettingsBuilder()
    .executeStreamingSqlSettings()
    .setRetrySettings(retrySettings);
builder
    .getSpannerStubSettingsBuilder()
    .streamingReadSettings()
    .setRetrySettings(retrySettings);
Spanner spanner = builder.build().getService();
The first suggestion is to try gcloud instead.
https://cloud.google.com/spanner/docs/modify-gcloud#modifying_data_using_dml
Another suggestion is to pass a range of name as well, to limit the number of rows scanned. For example, you could add something like STARTS_WITH(name, 'a') to the WHERE clause to make sure each transaction touches a small number of rows; first, though, you will need to know the domain of the name column's values.
The last suggestion is to try to avoid using '<>' if possible, as it is generally pretty expensive to evaluate; a sketch combining both ideas follows.
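As a purely hypothetical illustration (assuming name values start with lowercase ASCII letters and that 104 is the version to keep), the narrowed statements might look like:

DELETE FROM predictions WHERE model_version < 104 AND STARTS_WITH(name, 'a');
DELETE FROM predictions WHERE model_version > 104 AND STARTS_WITH(name, 'a');
-- ...and so on for each leading character, 'b' through 'z'

Each statement scans only a slice of the primary key space, so each individual operation stays small.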
I have an application where I read CSV files, do some transformations, and then push them to Elasticsearch from Spark itself, like this:
input.write.format("org.elasticsearch.spark.sql")
  .mode(SaveMode.Append)
  .option("es.resource", "{date}/" + type)
  .save()
I have several nodes, and on each node I run 5-6 spark-submit commands that push to Elasticsearch.
I am frequently getting errors like:
Could not write all entries [13/128] (Maybe ES was overloaded?). Error sample (first [5] error messages):
rejected execution of org.elasticsearch.transport.TransportService$7@32e6f8f8 on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@4448a084[Running, pool size = 4, active threads = 4, queued tasks = 200, completed tasks = 451515]]
My Elasticsearch cluster has the following stats:
Nodes - 9 (1 TB space, RAM >= 15 GB, more than 8 cores per node)
I have modified the following parameters for Elasticsearch:
spark.es.batch.size.bytes=5000000
spark.es.batch.size.entries=5000
spark.es.batch.write.refresh=false
Could anyone suggest what I can fix to get rid of these errors?
This occurs because bulk requests are arriving at a rate greater than the Elasticsearch cluster can process, and the bulk request queue is full.
The default bulk queue size is 200.
Ideally you should handle this on the client side:
1) by reducing the number of spark-submit commands running concurrently
2) by retrying in case of rejections, tweaking the es.batch.write.retry.count and
es.batch.write.retry.wait settings
Example:
es.batch.write.retry.wait = "60s"
es.batch.write.retry.count = 6
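Assuming these are passed the same way as your other es.* settings (via Spark conf with the spark. prefix, which elasticsearch-hadoop strips), that would look like:

spark.es.batch.write.retry.wait=60s
spark.es.batch.write.retry.count=6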
On the Elasticsearch cluster side:
1) Check whether there are too many shards per index and try reducing the count.
This blog has a good discussion of criteria for tuning the number of shards.
2) As a last resort, increase thread_pool.bulk.queue_size.
Check this blog for an extensive discussion of bulk rejections.
The bulk queue in your ES cluster is hitting its capacity (200). Try increasing it. See this page for how to change the bulk queue capacity:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html
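For example (the value is hypothetical; size it to your workload, and note that newer ES versions rename this pool), in elasticsearch.yml on each node:

thread_pool.bulk.queue_size: 500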
Also check this other SO answer, where the OP had a very similar issue that was fixed by increasing the bulk pool size:
Rejected Execution of org.elasticsearch.transport.TransportService Error
I am using "node-rdkafka" npm module for our distributed service architecture written in Nodejs. We have a use case for metering where we allow only a certain amount of messages to be consumed and processed every n seconds. For example, a "main" topic has 100 messages pushed by a producer and "worker" consumes from main topic every 30 seconds. There is a lot more to the story of the use case.
The problem I am having is that I need to progamatically get the lag of a given topic(all partitions).
Is there a way for me to do that?
I know that I can use "bin/kafka-consumer-groups.sh" to access some of the data I need, but is there another way?
Thank you in advance
You can retrieve that information directly from your node-rdkafka client via several methods:
Client metrics:
The client can emit metrics at a defined interval that contain the current and committed offsets as well as the end offset, so you can easily calculate the lag.
You first need to enable the metrics events by setting, for example, 'statistics.interval.ms': 5000 in your client configuration. Then set a listener on the event.stats events:
consumer.on('event.stats', function (stats) {
  console.log(stats);
});
The full stats are documented on https://github.com/edenhill/librdkafka/wiki/Statistics but you are probably most interested in the partition stats (which include a per-partition consumer_lag field): https://github.com/edenhill/librdkafka/wiki/Statistics#partitions
Query the cluster for offsets:
You can use queryWatermarkOffsets() to retrieve the first and last offsets for a partition.
consumer.queryWatermarkOffsets(topicName, partition, timeout, function (err, offsets) {
  var high = offsets.highOffset;
  var low = offsets.lowOffset;
});
Then use the consumer's current position (position()) or committed (committed()) offsets to calculate the lag, along the lines of the sketch below.
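A minimal sketch of that calculation (assuming a node-rdkafka version where committed() accepts an explicit topic-partition list, and that topicName and partition are known and offsets have been committed at least once):

consumer.committed([{ topic: topicName, partition: partition }], 5000, function (err, toppars) {
  if (err) throw err;
  consumer.queryWatermarkOffsets(topicName, partition, 5000, function (err2, offsets) {
    if (err2) throw err2;
    // lag = latest offset in the partition minus the last committed offset
    var lag = offsets.highOffset - toppars[0].offset;
    console.log('Partition ' + partition + ' lag: ' + lag);
  });
});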
Kafka exposes "records-lag-max" mbean which is the max records in lag for a partition via jmx, so you can get the lag querying this mbean
Refer to below doc for the exposed jmx mbean in detail .
https://docs.confluent.io/current/kafka/monitoring.html#consumer-group-metrics
When I try to find some docs in DocumentDB, all is good:
collection.find({query})
When I try to remove, all is bad:
[mongooseModel | collection].remove({same-query})
I get:
Request rate is large
The number of documents to remove is ~10,000. I have tested the queries in the Robomongo shell, which limits find results to 50 per page. My remove query also fails with Mongoose. I can't understand this behavior: how can I hit the request limit when the remove query is a single request?
Update
A count with the same query also raises the same error:
db.getCollection('taxonomies').count({query})
"Request rate is large" indicates that the application has exceeded the provisioned RU quota, and should retry the request after a small time interval.
Since you are using the DocumentDB Node.js API, you could check out @Larry Maccherone's answer in Request rate is large on how to avoid this issue by handling retry behavior and logic in your application's error handling routines; a minimal sketch follows.
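A sketch of that kind of retry logic (assuming a hypothetical Mongoose model named Taxonomy, and that the Mongo API surfaces throttling as error code 16500, its documented code for "Request rate is large"):

function removeWithRetry(model, query, retriesLeft, callback) {
  model.remove(query, function (err, result) {
    if (!err) return callback(null, result);
    // 16500 is the Cosmos DB Mongo API error code for "Request rate is large"
    if (err.code !== 16500 || retriesLeft <= 0) return callback(err);
    setTimeout(function () {
      removeWithRetry(model, query, retriesLeft - 1, callback);
    }, 1000); // back off briefly before retrying
  });
}

removeWithRetry(Taxonomy, { /* same query */ }, 10, function (err) {
  if (err) console.error('still failing after retries:', err);
});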
More on this:
Dealing with RequestRateTooLarge errors in Azure DocumentDB and testing performance
Request Units in Azure Cosmos DB
This is an awful part of CosmosDB - I don't see how they intend to become a real player in the cloud database game with limitations like this. That being said - I came up with a hack to delete all records from a collection through the MongoDB API when you are bumping up against the "Rate Exceeded" error. See below:
var loop = true
while (loop) {
  try {
    db.grid.deleteMany({ "myField": "my-query" })
  }
  catch (err) {
    print(err)
    printjson(db.runCommand({ getLastRequestStatistics: 1 }))
    continue
  }
  loop = false
}
Just update the deleteMany query and run this script directly in your mongo shell; it should loop through and delete the records.
I'm using the DataStax Cassandra 2.1 driver and performing read/write operations at ~8000 IOPS. I've used pooling options to configure my sessions, and I use a separate session for reads and for writes, each of which connects to a different node in the cluster as its contact point.
This works fine for, say, 5 minutes, but after that I get a lot of exceptions like:
Failed with: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.0.1.123:9042 (com.datastax.driver.core.TransportException: [/10.0.1.123:9042] Connection has been closed), /10.0.1.56:9042 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
Can anyone help me out here with what could be the problem?
The exception asks me to increase the number of connections per host, but how high a value can I set for this parameter?
Also, I'm not able to set CoreConnectionsPerHost beyond 2, as it throws an exception saying 2 is the max.
This is how I'm creating each read/write session:
PoolingOptions poolingOpts = new PoolingOptions();
poolingOpts.setCoreConnectionsPerHost(HostDistance.REMOTE, 2);
poolingOpts.setMaxConnectionsPerHost(HostDistance.REMOTE, 200);
poolingOpts.setMaxSimultaneousRequestsPerConnectionThreshold(HostDistance.REMOTE, 128);
poolingOpts.setMinSimultaneousRequestsPerConnectionThreshold(HostDistance.REMOTE, 2);

cluster = Cluster
    .builder()
    .withPoolingOptions(poolingOpts)
    .addContactPoint(ip)
    .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
    .withReconnectionPolicy(new ConstantReconnectionPolicy(100L))
    .build();
Session s = cluster.connect(keySpace);
Your problem might not actually be in your code or the way you are connecting. If the problem appears after a few minutes, it could simply be that your cluster is becoming overloaded trying to process the ingested data and cannot keep up. The typical sign of this is JVM garbage collection ("GC") messages in the Cassandra system.log file; too many small collections close together, or large ones on their own, can mean that incoming clients are not being responded to, causing this kind of scenario. Verify that you do not have too many of these events showing up in your logs before you start to look at your code. Here's a good example of a large GC event:
INFO [ScheduledTasks:1] 2014-05-15 23:19:49,678 GCInspector.java (line 116) GC for ConcurrentMarkSweep: 2896 ms for 2 collections, 310563800 used; max is 8375238656
When connecting to a cluster there are some recommendations, one of which is to have only one Cluster object per physical cluster. Per the article I've linked below (apologies if you have already studied this):
Use one cluster instance per (physical) cluster (per application lifetime)
Use at most one session instance per keyspace, or use a single Session and explicitly specify the keyspace in your queries
If you execute a statement more than once, consider using a prepared statement
You can reduce the number of network roundtrips and also have atomic operations by using batches
http://www.datastax.com/documentation/developer/java-driver/2.1/java-driver/fourSimpleRules.html
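As a hypothetical illustration of the prepared-statement rule (the keyspace, table, and columns here are invented):

PreparedStatement ps = s.prepare("INSERT INTO my_keyspace.my_table (id, val) VALUES (?, ?)");
// prepare once, then bind and execute many times; only the bound values are sent per request
s.execute(ps.bind(someId, someVal));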
As you are doing a high number of reads, I'd most definitely recommend also using setFetchSize, if it's applicable to your code; a small sketch follows.
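For illustration with the 2.1 Java driver (the query and page size here are hypothetical):

Statement stmt = new SimpleStatement("SELECT * FROM my_keyspace.my_table");
stmt.setFetchSize(500); // fetch results 500 rows per page instead of all at once
ResultSet rs = s.execute(stmt);
for (Row row : rs) {
    // process each row; the driver fetches subsequent pages transparently
}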
http://www.datastax.com/documentation/developer/java-driver/2.1/common/drivers/reference/cqlStatements.html
http://www.datastax.com/documentation/developer/java-driver/2.1/java-driver/reference/queryBuilderOverview.html
For reference, here are the connection options in case you find them useful:
http://www.datastax.com/documentation/developer/java-driver/2.1/common/drivers/reference/connectionsOptions_c.html
Hope this helps.