Could not connect to Cassandra using the latest CQL driver and default settings

I am using the latest version of Cassandra (3.0.2) and the latest version of the DataStax Cassandra core Java driver (3.0.0). The settings in cassandra.yaml are unchanged, so whatever the defaults are, they remain in effect. I keep hearing that rpc_address should be 0.0.0.0, but by default it is localhost, and broadcast_rpc_address is commented out by default (its placeholder value is 1.2.3.4). I also don't understand why we need to set rpc_address at all, since I hear the latest versions of Cassandra have moved away from RPC?!
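For reference, the relevant defaults in a stock 3.0.x cassandra.yaml look roughly like this (an abbreviated sketch; 1.2.3.4 is only a placeholder in the commented-out line). Despite its legacy name, rpc_address is also the address the native CQL transport binds to, which is why it still matters even though Thrift RPC is deprecated:

```yaml
# cassandra.yaml (3.0.x defaults, abbreviated sketch)
rpc_address: localhost            # address the native CQL transport binds to
# broadcast_rpc_address: 1.2.3.4  # commented out; only needed when rpc_address is 0.0.0.0
native_transport_port: 9042       # CQL native protocol port
```

With these defaults, connecting to "localhost" on 9042 from the same machine should work, which matches the successful connection seen in the log below.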
Here is a snippet of the code:
Cluster cassandra = Cluster.builder().addContactPoint("localhost").withPort(9042).build();
ListenableFuture<Session> session = cassandra.connectAsync("demo1");
......
Here is the error I get when I turn on the DEBUG flag:
com.datastax.driver.NEW_NODE_DELAY_SECONDS is undefined, using default value 1
com.datastax.driver.NON_BLOCKING_EXECUTOR_SIZE is undefined, using default value 8
com.datastax.driver.NOTIF_LOCK_TIMEOUT_SECONDS is undefined, using default value 60
Starting new cluster with contact points [localhost/127.0.0.1:9042, localhost/0:0:0:0:0:0:0:1:9042]
log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
com.datastax.driver.FORCE_NIO is undefined, using default value false
Did not find Netty's native epoll transport in the classpath, defaulting to NIO.
[localhost/127.0.0.1:9042] preparing to open 1 new connections, total = 1
com.datastax.driver.DISABLE_COALESCING is undefined, using default value false
Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=false] Connection established, initializing transport
[localhost/127.0.0.1:9042] Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=false] Transport initialized, connection ready
[Control connection] Refreshing node list and token map
You listed localhost/0:0:0:0:0:0:0:1:9042 in your contact points, but it wasn't found in the control host's system.peers at startup
[Control connection] Refreshing schema
[Control connection] Refreshing node list and token map
[Control connection] established to localhost/127.0.0.1:9042
Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
New Cassandra host localhost/127.0.0.1:9042 added
[localhost/127.0.0.1:9042] preparing to open 1 new connections, total = 2
Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=false] Connection established, initializing transport
[localhost/127.0.0.1:9042] Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=false] Transport initialized, connection ready
Created connection pool to host localhost/127.0.0.1:9042 (1 connections needed, 1 successfully opened)
Added connection pool for localhost/127.0.0.1:9042
Preparing query SELECT "hash" AS col1,"body" AS col2 FROM demo1.ents WHERE "hash"=?;
[1795897147-1] Doing retry 1 for query com.datastax.driver.core.Statement$1#19be093f at consistency null
[1795897147-1] Error querying localhost/127.0.0.1:9042 : com.datastax.driver.core.exceptions.OperationTimedOutException: [localhost/127.0.0.1] Timed out waiting for server response
error message -> All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.OperationTimedOutException: [localhost/127.0.0.1] Timed out waiting for server response))
error cause -> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.OperationTimedOutException: [localhost/127.0.0.1] Timed out waiting for server response))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.OperationTimedOutException: [localhost/127.0.0.1] Timed out waiting for server response))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.AbstractSession.prepare(AbstractSession.java:98)
at com.datastax.driver.mapping.Mapper.getPreparedQuery(Mapper.java:118)
at com.datastax.driver.mapping.Mapper.getPreparedQuery(Mapper.java:129)
at com.datastax.driver.mapping.Mapper.getQuery(Mapper.java:333)
at com.datastax.driver.mapping.Mapper.getQuery(Mapper.java:325)
at com.datastax.driver.mapping.Mapper.getAsync(Mapper.java:388)
Here is where the exception happens in the Cassandra driver:
stmt = session().prepare(queryString); // Mapper.java

public PreparedStatement prepare(String query) { // AbstractSession.java
    try {
        return Uninterruptibles.getUninterruptibly(prepareAsync(query));
    } catch (ExecutionException e) {
        throw DriverThrowables.propagateCause(e);
    }
}
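Note that the log shows the connection itself succeeding; the failure is an OperationTimedOutException on the prepare request. One thing worth trying is raising the driver's per-request read timeout (12 seconds by default in driver 3.0). A minimal sketch, assuming the 3.0 driver API; the timeout values are illustrative, not recommendations:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.SocketOptions;

public class TimeoutExample {
    public static void main(String[] args) {
        // Raise the per-request read timeout and the connect timeout
        // before building the Cluster (values here are illustrative).
        SocketOptions socketOptions = new SocketOptions()
                .setConnectTimeoutMillis(10_000)
                .setReadTimeoutMillis(30_000);

        Cluster cluster = Cluster.builder()
                .addContactPoint("localhost")
                .withPort(9042)
                .withSocketOptions(socketOptions)
                .build();
        // ... connectAsync("demo1") as in the question ...
        cluster.close();
    }
}
```

If the timeout still fires with a generous value, the problem is more likely on the server side (e.g. the node being overloaded) than in the driver configuration.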

Related

Cassandra read timeout exception while inserting

I have a 16-node Cassandra cluster and I am inserting 50,000 rows, pretty much in parallel, from an external tool (which is installed on every node) into every Cassandra node with the Simba Cassandra JDBC Driver. While the insertion takes place, I sometimes (rarely) get the following error on (mostly/usually) two of the nodes:
Execute failed: [Simba]CassandraJDBCDriver Error setting/closing connection: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.simba.cassandra.shaded.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded))).
java.sql.SQLException: [Simba]CassandraJDBCDriver Error setting/closing connection: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.simba.cassandra.shaded.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded))).
Caused by: com.simba.cassandra.shaded.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.simba.cassandra.shaded.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded)))
The weird thing is that it is a ReadTimeoutException, while I am just trying to insert. I have not changed read_request_timeout_in_ms or any other parameters in the .yaml file, so they are at their defaults. This also means that if I try a count(*) on something from cqlsh, I get a read timeout as well:
ReadTimeout: Error from server: code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
I do not know if these two are related, though! Any ideas on what might be going on, and how to avoid the first error, "All host(s) tried for query failed"?
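If the reads genuinely need more time (an unbounded count(*) over a large table is a range read and is expensive by design), the server-side timeouts in cassandra.yaml can be raised as a stopgap. A sketch with illustrative values; the defaults are shown in the comments:

```yaml
# cassandra.yaml (values are illustrative, not recommendations)
read_request_timeout_in_ms: 10000     # default 5000
range_request_timeout_in_ms: 20000    # default 10000; count(*) is a range read
write_request_timeout_in_ms: 5000     # default 2000
```

Raising timeouts hides load problems rather than fixing them, so if the cluster is timing out under the parallel insert load, throttling the insert rate is usually the better fix.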

Cassandra is throwing NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1 (null))

I am not able to connect to Cassandra cluster using this code:
public static boolean tableCreate() {
    // Query
    String query = "CREATE KEYSPACE store WITH replication "
            + "= {'class':'SimpleStrategy', 'replication_factor':1};";
    // Creating the Cluster object
    Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").withPort(9042).build();
    // Creating the Session object
    Session session = cluster.connect("tutorialspoint");
    // Executing the query
    session.execute(query);
    // Using the keyspace
    session.execute("USE store");
    System.out.println("Keyspace created with store name");
    return true;
}
It is giving me this error:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1 (null))
What is my mistake in the code above?
Cassandra is running on my local Windows 10 64-bit machine, and I have also disabled the firewall.
You may need to check and possibly update the version of the DataStax driver you are using. I faced exactly the same error (i.e. the same error message while connecting), and after upgrading the DataStax driver version the problem went away and I could connect to the DB.
Similar Issue: Unable to connect to Cassandra cluster running on local host
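It is also worth noting that cluster.connect("tutorialspoint") will fail if that keyspace does not exist yet; connecting without naming a keyspace first, then creating and switching to store, avoids that. A hedged sketch of the same method under that assumption (i.e. that "tutorialspoint" was not intended):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class KeyspaceSetup {
    public static boolean tableCreate() {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(9042)
                .build();
        // Connect without naming a keyspace, so this works
        // even before "store" exists.
        Session session = cluster.connect();
        session.execute("CREATE KEYSPACE IF NOT EXISTS store WITH replication "
                + "= {'class':'SimpleStrategy', 'replication_factor':1};");
        // Switch to the new keyspace.
        session.execute("USE store");
        System.out.println("Keyspace created with store name");
        return true;
    }
}
```

This does not explain the "(null)" connection error itself, which (as the answer above suggests) is more likely a driver/server version mismatch.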

We are running a MapReduce/Spark job to bulk load HBase data in one of the environments

We are running a MapReduce/Spark job to bulk load HBase data in one of the environments.
While running it, the connection to the HBase ZooKeeper fails to initialize, throwing the following error:
16/05/10 06:36:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=c321shu.int.westgroup.com:2181,c149jub.int.westgroup.com:2181,c167rvm.int.westgroup.com:2181 sessionTimeout=90000 watcher=hconnection-0x74b47a30, quorum=c321shu.int.westgroup.com:2181,c149jub.int.westgroup.com:2181,c167rvm.int.westgroup.com:2181, baseZNode=/hbase
16/05/10 06:36:10 INFO zookeeper.ClientCnxn: Opening socket connection to server c321shu.int.westgroup.com/10.204.152.28:2181. Will not attempt to authenticate using SASL (unknown error)
16/05/10 06:36:10 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.204.24.16:35740, server: c321shu.int.westgroup.com/10.204.152.28:2181
16/05/10 06:36:10 INFO zookeeper.ClientCnxn: Session establishment complete on server c321shu.int.westgroup.com/10.204.152.28:2181, sessionid = 0x5534bebb441bd3f, negotiated timeout = 60000
16/05/10 06:36:11 INFO mapreduce.HFileOutputFormat2: Looking up current regions for table ecpdevv1patents:NormNovusDemo
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
Tue May 10 06:36:11 CDT 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller#3927df20, java.io.IOException: Call to c873gpv.int.westgroup.com/10.204.67.9:60020 failed on local exception: java.io.EOFException
We have executed the same job in Titan DEV too, but we are facing the same problem there as well. Please let us know if anyone has faced this problem before.
Details:
• Earlier, the job was failing to connect to localhost/127.0.0.1:2181, so the property hbase.zookeeper.quorum has been set in the MapReduce code to c149jub.int.westgroup.com,c321shu.int.westgroup.com,c167rvm.int.westgroup.com, which we got from hbase-site.xml.
• We are using jars of CDH version 5.3.3.
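An alternative to setting the quorum in code is shipping an hbase-site.xml on the job's classpath; a sketch using the hostnames from the log above (the clientPort property mirrors the :2181 seen in the connect string):

```xml
<!-- hbase-site.xml fragment (must be on the job's classpath) -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>c149jub.int.westgroup.com,c321shu.int.westgroup.com,c167rvm.int.westgroup.com</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```

Note that in the log the ZooKeeper session actually establishes successfully; the failure is an EOFException calling the region server on port 60020, so a client/server jar version mismatch is also worth ruling out.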

Error while creating a new thrift connection to Cassandra

I am trying to set up a KairosDB installation using Cassandra as the backend, but I am facing the following error:
[HThriftClient.java:152] - Creating a new thrift connection to localhost(127.0.0.1):9042
ERROR [HConnectionManager.java:418] - MARK HOST AS DOWN TRIGGERED for host localhost(127.0.0.1):9042
ERROR [HConnectionManager.java:422] - Pool state on shutdown: :{localhost(127.0.0.1):9042}; IsActive?: true; Active: 1; Blocked: 0; Idle: 15; NumBeforeExhausted: 49
[HConnectionManager.java:303] - Exception:
me.prettyprint.hector.api.exceptions.HectorTransportException: org.apache.thrift.transport.TTransportException: Read a negative frame size (-2080374784)!
at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:39) ~[hector-core-1.1-4.jar:na]
I have already checked the port Cassandra has open, and it is set to 9042. Also, I set start_rpc to true in the cassandra.yaml file. Any ideas for further troubleshooting?
For Thrift connections, Cassandra uses port 9160, so use port 9160.
What version of Cassandra are you using?
I think Thrift is disabled in newer versions of Cassandra; you may be able to enable the protocol by modifying cassandra.yaml and restarting Cassandra (or maybe by using nodetool).
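Since the log lines mention HThriftClient, KairosDB here is using the Hector/Thrift client, so it should point at the Thrift port rather than the CQL native port. A sketch of the relevant kairosdb.properties line; the property name is taken from older KairosDB releases, so treat it as an assumption and check your installed version's sample config:

```properties
# kairosdb.properties: point the Hector/Thrift client at Cassandra's Thrift
# port (9160), not the CQL native port (9042). Cassandra must also have
# start_rpc: true in cassandra.yaml for 9160 to be open.
kairosdb.datastore.cassandra.host_list=localhost:9160
```

The "Read a negative frame size" error in the log is typical of speaking Thrift to a non-Thrift port, which fits the 9042 setting shown above.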

Cassandra and defuncting connection

I've got a question about Cassandra. I haven't found any understandable answer yet...
I built a cluster of 3 nodes (RackInferringSnitch) on different VMs. I'm using DataStax's Java driver to read and update my keyspace (with CSVs).
When one node is down (i.e. 10.10.6.172), I get this debug warning:
INFO 00:47:37,195 New Cassandra host /10.10.6.172:9042 added
INFO 00:47:37,246 New Cassandra host /10.10.6.122:9042 added
DEBUG 00:47:37,264 [Control connection] Refreshing schema
DEBUG 00:47:37,384 [Control connection] Successfully connected to /10.10.6.171:9042
DEBUG 00:47:37,391 Adding /10.10.6.172:9042 to list of queried hosts
DEBUG 00:47:37,395 Defuncting connection to /10.10.6.172:9042
com.datastax.driver.core.TransportException: [/10.10.6.172:9042] Channel has been closed
at com.datastax.driver.core.Connection$Dispatcher.channelClosed(Connection.java:621)
at
[...]
[...]
DEBUG 00:47:37,400 [/10.10.6.172:9042-1] Error connecting to /10.10.6.172:9042 (Connection refused: /10.10.6.172:9042)
DEBUG 00:47:37,407 Error creating pool to /10.10.6.172:9042 ([/10.10.6.172:9042] Cannot connect)
DEBUG 00:47:37,408 /10.10.6.172:9042 is down, scheduling connection retries
DEBUG 00:47:37,409 First reconnection scheduled in 1000ms
DEBUG 00:47:37,410 Adding /10.10.6.122:9042 to list of queried hosts
DEBUG 00:47:37,423 Adding /10.10.6.171:9042 to list of queried hosts
DEBUG 00:47:37,427 Adding /10.10.6.122:9042 to list of queried hosts
DEBUG 00:47:37,435 Shutting down pool
DEBUG 00:47:37,439 Adding /10.10.6.171:9042 to list of queried hosts
DEBUG 00:47:37,443 Shutting down pool
DEBUG 00:47:37,459 Connected to cluster: WormHole
I wanted to know if I need to handle this exception, or whether it will be handled by itself (I mean, when the node comes back up, will Cassandra do the correct write if the batch was a write?).
EDIT : Current consistency level is ONE.
The DataStax driver keeps track of which nodes are available at all times and routes queries (load balancing) based on this information. The way it retries unavailable nodes is governed by your reconnection policy.
You will see debug-level messages when nodes are detected as down, etc. This is no cause for concern, as the driver will re-route to other available nodes; it will also retry the downed nodes periodically to find out if they are back up. If you had a real problem and the data were not getting saved to Cassandra, you would see timeout errors. No action is necessary in this case.
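The reconnection schedule seen in the log ("First reconnection scheduled in 1000ms") comes from that reconnection policy, which can be configured when building the Cluster. A minimal sketch, assuming the 2.x/3.x driver API; the delays are illustrative:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.ExponentialReconnectionPolicy;

public class ReconnectionExample {
    public static void main(String[] args) {
        // Retry a downed node after 1 s, backing off exponentially
        // up to a 60 s cap between attempts.
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.10.6.171")
                .withReconnectionPolicy(new ExponentialReconnectionPolicy(1000, 60 * 1000))
                .build();
        // ... connect and use the session as usual ...
        cluster.close();
    }
}
```

Note that the reconnection policy only controls when a down node is retried; whether a missed write reaches the node once it is back depends on the cluster (hinted handoff, repair) and on the consistency level of the write.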