Cassandra: Operation time-out - node.js

I am using the Cassandra Node.js driver and I am getting the following error:
error: Database error found %s . On selectAllJobs() call
{ name: 'ResponseError',
message: 'Operation timed out - received only 0 responses.',
info: 'Represents an error message from the server',
code: 4608,
consistencies: 1,
received: 0,
blockFor: 1,
isDataPresent: 0,
query: 'SELECT * FROM cron_tasks WHERE type =? AND starts < ? ALLOW FILTERING ;' }
This error occurred when I moved to a new AWS instance. Earlier, everything worked fine.
Cassandra version:
[cqlsh 4.1.1 | Cassandra 2.0.12 | CQL spec 3.1.1 | Thrift protocol 19.39.0]

A read timeout error means that the coordinator of the query does not know whether the request succeeded or failed, so all it can tell the client is that the request timed out.
In your case, the coordinator sent the request internally to a replica, but the replica did not respond in time. Note that your query uses ALLOW FILTERING, which forces a scan rather than a keyed lookup, so it can easily exceed the read timeout once the table grows or the instance is slower.
You can enable query tracing (TRACING ON; in cqlsh) and re-run the query to understand why it is happening.
You can read more about how Cassandra deals with replica failure.
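To capture that trace without leaving your application, here is a minimal sketch using the DataStax Java driver (2.x/3.x assumed; the keyspace name and bind values below are placeholders, and the Node.js driver exposes an equivalent per-query tracing option):

import java.util.Date;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.QueryTrace;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class TraceCronQuery {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace"); // placeholder keyspace
        // enableTracing() asks the coordinator to record where the time went
        Statement stmt = new SimpleStatement(
                "SELECT * FROM cron_tasks WHERE type = ? AND starts < ? ALLOW FILTERING",
                "email", new Date()) // placeholder bind values
                .enableTracing();
        ResultSet rs = session.execute(stmt);
        // The trace details are fetched from system_traces on first access
        QueryTrace trace = rs.getExecutionInfo().getQueryTrace();
        for (QueryTrace.Event event : trace.getEvents()) {
            System.out.printf("%8d us | %s | %s%n",
                    event.getSourceElapsedMicros(), event.getSource(), event.getDescription());
        }
        cluster.close();
    }
}

The per-event elapsed times usually make it obvious whether the time is going into scanning (the ALLOW FILTERING case) or into a replica that is overloaded or unreachable.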

Related

advanced.session-leak after some time of starting Spark Thrift server with DataStax Cassandra connector

Hi, I am getting the following error after some time of inactivity:
Error: Error running query: com.typesafe.config.ConfigException$Missing: withValue(advanced.reconnection-policy.base-delay): No configuration setting found for key 'advanced.session-leak' (state=,code=0)
Restarting the Thrift server seems to solve the issue for some time.

Cassandra read timeout exception while inserting

I have a 16-node Cassandra cluster and I am inserting 50,000 rows, mostly in parallel, from an external tool (installed on every node) into every Cassandra node with the Simba Cassandra JDBC Driver. While the insertion takes place, I sometimes (rarely) get the following error on (usually) two of the nodes:
Execute failed: [Simba]CassandraJDBCDriver Error setting/closing connection: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.simba.cassandra.shaded.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded))).
java.sql.SQLException: [Simba]CassandraJDBCDriver Error setting/closing connection: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.simba.cassandra.shaded.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded))).
Caused by: com.simba.cassandra.shaded.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.simba.cassandra.shaded.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded)))
The weird thing is that it is a read timeout exception, while I am just trying to insert. I have not changed any read timeout or other parameters in cassandra.yaml, so they are the defaults. On top of that, if I try a count(*) on something from cqlsh, I also get a read timeout exception:
ReadTimeout: Error from server: code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
I do not know whether these two are related, though! Any ideas on what might be going on and how to avoid the first error, "All host(s) tried for query failed"?

Elasticsearch Spark connection for Structured Streaming

I'm trying to connect to Elasticsearch from my Spark program.
My Elasticsearch host uses HTTPS, and I found no connection property for that.
We are using the Spark Structured Streaming Java API, and the connection details are as follows:
SparkSession spark = SparkSession.builder()
        .config(ConfigurationOptions.ES_NET_HTTP_AUTH_USER, "username")
        .config(ConfigurationOptions.ES_NET_HTTP_AUTH_PASS, "password")
        .config(ConfigurationOptions.ES_NODES, "my_host_url")
        .config(ConfigurationOptions.ES_PORT, "9200")
        .config(ConfigurationOptions.ES_NET_SSL_TRUST_STORE_LOCATION, "C:\\certs\\elastic\\truststore.jks")
        .config(ConfigurationOptions.ES_NET_SSL_TRUST_STORE_PASS, "my_password")
        .config(ConfigurationOptions.ES_NET_SSL_KEYSTORE_TYPE, "jks")
        .master("local[2]")
        .appName("spark_elastic")
        .getOrCreate();
spark.conf().set("spark.sql.shuffle.partitions", 2);
spark.conf().set("spark.default.parallelism", 2);
And I'm getting the following error:
19/07/01 12:26:00 INFO HttpMethodDirector: I/O exception (org.apache.commons.httpclient.NoHttpResponseException) caught when processing request: The server 10.xx.xxx.xxx failed to respond
19/07/01 12:26:00 INFO HttpMethodDirector: Retrying request
19/07/01 12:26:00 ERROR NetworkClient: Node [10.xx.xxx.xxx:9200] failed (The server 10.xx.xxx.xxx failed to respond); no other nodes left - aborting...
19/07/01 12:26:00 ERROR StpMain: Error
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverClusterInfo(InitializationUtils.java:344)
Probably it's because it tries to initiate the connection over HTTP, but in my case I need an HTTPS connection, and I'm not sure how to configure that.
The error happened because Spark was not able to locate the truststore file. It seems we need to add "file:\\" to the path for it to be accepted.
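As a concrete sketch of that fix (two assumptions here: ConfigurationOptions.ES_NET_USE_SSL, i.e. es.net.ssl, is the connector's switch for HTTPS, and the exact file:///C:/... URL form should be verified on Windows):

// ConfigurationOptions is org.elasticsearch.hadoop.cfg.ConfigurationOptions, as in the question
SparkSession spark = SparkSession.builder()
        .config(ConfigurationOptions.ES_NET_HTTP_AUTH_USER, "username")
        .config(ConfigurationOptions.ES_NET_HTTP_AUTH_PASS, "password")
        .config(ConfigurationOptions.ES_NODES, "my_host_url")
        .config(ConfigurationOptions.ES_PORT, "9200")
        // es.net.ssl: ask the connector to talk HTTPS to the nodes
        .config(ConfigurationOptions.ES_NET_USE_SSL, "true")
        // prefix the truststore path so it is resolved as a URL
        .config(ConfigurationOptions.ES_NET_SSL_TRUST_STORE_LOCATION,
                "file:///C:/certs/elastic/truststore.jks")
        .config(ConfigurationOptions.ES_NET_SSL_TRUST_STORE_PASS, "my_password")
        .config(ConfigurationOptions.ES_NET_SSL_KEYSTORE_TYPE, "jks")
        .master("local[2]")
        .appName("spark_elastic")
        .getOrCreate();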

Cassandra is throwing NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1 (null))

I am not able to connect to a Cassandra cluster using this code:
public static boolean tableCreate() {
    // Query
    String query = "CREATE KEYSPACE store WITH replication "
            + "= {'class':'SimpleStrategy', 'replication_factor':1};";
    // Creating the Cluster object
    Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").withPort(9042).build();
    // Creating the Session object
    Session session = cluster.connect("tutorialspoint");
    // Executing the query
    session.execute(query);
    // Using the keyspace
    session.execute("USE store");
    System.out.println("Keyspace created with store name");
    return true;
}
It is giving me this error:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1 (null))
What is my mistake in the code above?
Cassandra is running locally on my Windows 10 64-bit machine, and I have also disabled the firewall.
You may need to check and possibly update the version of the DataStax driver you are using. I faced exactly the same error (i.e., the same error message while connecting), and after upgrading the DataStax driver version the problem went away and I could connect to the database.
Similar Issue: Unable to connect to Cassandra cluster running on local host
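Independent of the driver version, it helps to see why each contact point failed: the "(null)" in the message hides the per-host cause, but the exception carries it. A minimal diagnostic sketch, assuming the DataStax Java driver 2.x/3.x:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class ConnectDiagnostics {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(9042)
                .build();
        try {
            // Connect without naming a keyspace; CREATE KEYSPACE afterwards
            cluster.connect();
            System.out.println("Connected to cluster: " + cluster.getClusterName());
        } catch (NoHostAvailableException e) {
            // getErrors() maps each tried host to the underlying Throwable
            e.getErrors().forEach((host, cause) ->
                    System.err.println(host + " failed: " + cause));
        } finally {
            cluster.close();
        }
    }
}

Printing the per-host cause typically points straight at the real problem, for example a protocol version mismatch that a driver upgrade resolves.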

We are running a map reduce/spark job to bulk load HBase data in one of the environments

We are running a map reduce/spark job to bulk load HBase data in one of the environments.
While running it, the connection to the HBase ZooKeeper cannot be initialized, throwing the following error:
16/05/10 06:36:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=c321shu.int.westgroup.com:2181,c149jub.int.westgroup.com:2181,c167rvm.int.westgroup.com:2181 sessionTimeout=90000 watcher=hconnection-0x74b47a30, quorum=c321shu.int.westgroup.com:2181,c149jub.int.westgroup.com:2181,c167rvm.int.westgroup.com:2181, baseZNode=/hbase
16/05/10 06:36:10 INFO zookeeper.ClientCnxn: Opening socket connection to server c321shu.int.westgroup.com/10.204.152.28:2181. Will not attempt to authenticate using SASL (unknown error)
16/05/10 06:36:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.204.24.16:35740, server: c321shu.int.westgroup.com/10.204.152.28:2181
16/05/10 06:36:10 INFO zookeeper.ClientCnxn: Session establishment complete on server c321shu.int.westgroup.com/10.204.152.28:2181, sessionid = 0x5534bebb441bd3f, negotiated timeout = 60000
16/05/10 06:36:11 INFO mapreduce.HFileOutputFormat2: Looking up current regions for table ecpdevv1patents:NormNovusDemo
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
Tue May 10 06:36:11 CDT 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller#3927df20, java.io.IOException: Call to c873gpv.int.westgroup.com/10.204.67.9:60020 failed on local exception: java.io.EOFException
We have executed the same job in Titan DEV too, but we face the same problem. Please let us know if anyone has faced this problem before.
Details are:
• Earlier, the job was failing to connect to localhost/127.0.0.1:2181. Hence the property hbase.zookeeper.quorum has been set in the map reduce code to c149jub.int.westgroup.com,c321shu.int.westgroup.com,c167rvm.int.westgroup.com, which we got from hbase-site.xml.
• We are using jars from CDH version 5.3.3.
