Cassandra is throwing NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1 (null))

I am not able to connect to a Cassandra cluster using this code:
public static boolean tableCreate() {
    // Query
    String query = "CREATE KEYSPACE store WITH replication "
            + "= {'class':'SimpleStrategy', 'replication_factor':1};";
    // Creating the Cluster object
    Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").withPort(9042).build();
    // Creating the Session object
    Session session = cluster.connect("tutorialspoint");
    // Executing the query
    session.execute(query);
    // Using the keyspace
    session.execute("USE store");
    System.out.println("Keyspace created with store name");
    return true;
}
It is giving me this error:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1 (null))
What is my mistake in the code above?
Cassandra is running locally on Windows 10 64-bit, and I have also disabled the firewall.

You may need to check and possibly update the version of the DataStax driver you are using. I faced exactly the same error (the same message while connecting), and after upgrading the DataStax driver version the problem went away and I could connect to the DB.
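If the project is built with Maven (an assumption, since the question does not say), upgrading the driver is a one-line change in pom.xml. A minimal sketch, using the classic cassandra-driver-core artifact; the version shown is only an example, so pick a release compatible with your cluster:
<!-- pom.xml: the version below is an example only -->
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.6.0</version>
</dependency>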
Similar Issue: Unable to connect to Cassandra cluster running on local host

Related

SparkSQL Hive Error : "HikariCP" not found in the CLASSPATH

I configured Hive with MySQL as my metastore. I can enter the Hive shell and create tables successfully.
Spark version: 2.4.0
Hive version: 3.1.1
When I try to run a SparkSQL program using spark-submit, I get the error below.
2019-03-02 15:43:41 WARN HiveMetaStore:622 - Retrying creating default database after error: Error creating transactional connection factory
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
......
......
Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient;
......
......
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient;
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "HikariCP" plugin to create a ConnectionPool gave an error : The connection pool plugin of type "HikariCP" was not found in the CLASSPATH!
Please let me know if anyone can help me in this regard.
I don't know if you have already solved this problem, but here is my advice.
The default connection pool configured in hive-site.xml is HikariCP. Search hive-site.xml for datanucleus.connectionPoolingType; its value will be HikariCP. You need to change it to dbcp, since you use MySQL as your metastore.
And finally, don't forget to add the mysql-connector-java-5.x.x.jar to Spark's jars directory, for example:
/home/hadoop/spark-2.3.0-bin-hadoop2.7/jars
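For reference, the property change described above would look roughly like this in hive-site.xml (a sketch of the single property, not a complete configuration):
<property>
    <name>datanucleus.connectionPoolingType</name>
    <value>dbcp</value>
    <description>Use DBCP instead of the default HikariCP connection pool</description>
</property>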

Cassandra Connection with Groovy Script In SoapUI

Thanks for your time. I am trying to access a remote Cassandra DB in order to complete my assertions. I can see that the server is running:
Cassandra V 3.0.8.1293
Driver Type: Cassandra CQL
Datastax Java Driver for Apache Cassandra - Core [3.0.5]
So I am trying to access the DB with the following simple code:
import com.datastax.driver.core.*

Cluster cluster = null;
try {
    cluster = Cluster.builder().addContactPoint("x.x.x.x").withCredentials("xxxxxxx", "xxxxxx").withPort(9042).build()
    Session session = cluster.connect();
    ResultSet rs = session.execute("select * from TABLE");
    Row row = rs.one();
} finally {
    if (cluster != null) cluster.close();
}
When I use cassandra-driver-core-2.0.1.jar, I get this error:
ERROR:com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /x.x.x.x(null))
I read the documentation and a lot of posts here and on other blogs, and saw that there may be an incompatibility with the driver version, so I tried upgrading the driver to several versions (cassandra-driver-core-2.5, cassandra-driver-core-3, cassandra-driver-core-3.2), but with those I get the following:
ERROR:java.lang.ExceptionInInitializerError
I have also tried to connect using JDBC, but to no avail, using the configuration proposed in this thread:
SoapUI JDBC connection with Apache Cassandra
I am running out of ideas. Can anyone point me in some direction on how to achieve this, either a tutorial or any other idea?
Thank you very much.
I think you haven't enabled remote access to Cassandra.
Try enabling remote access using the configuration below.
File path: /etc/cassandra/default.conf/cassandra.yaml
rpc_address: 0.0.0.0
broadcast_rpc_address: <serverIPAddress>
After that, restart the Cassandra service.

could not connect to cassandra using the latest cql driver and default settings?

I am using the latest version of Cassandra (3.0.2) and the latest version of the DataStax Java driver core, 3.0.0. The settings in cassandra.yaml remain unchanged; I have not touched that file, so whatever the defaults are, they still apply. I keep hearing that rpc_address should be 0.0.0.0, but by default it is localhost, and broadcast_rpc_address is commented out by default (its example value is 1.2.3.4). I also don't understand why we need to set rpc_address at all, since I hear the latest versions of Cassandra have moved away from RPC.
Here is a snippet of the code:
Cluster cassandra = Cluster.builder().addContactPoint("localhost").withPort(9042).build();
ListenableFuture<Session> session = cassandra.connectAsync("demo1");
......
Here is the output I get when I turn on the DEBUG flag:
com.datastax.driver.NEW_NODE_DELAY_SECONDS is undefined, using default value 1
com.datastax.driver.NON_BLOCKING_EXECUTOR_SIZE is undefined, using default value 8
com.datastax.driver.NOTIF_LOCK_TIMEOUT_SECONDS is undefined, using default value 60
Starting new cluster with contact points [localhost/127.0.0.1:9042, localhost/0:0:0:0:0:0:0:1:9042]
log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
com.datastax.driver.FORCE_NIO is undefined, using default value false
Did not find Netty's native epoll transport in the classpath, defaulting to NIO.
[localhost/127.0.0.1:9042] preparing to open 1 new connections, total = 1
com.datastax.driver.DISABLE_COALESCING is undefined, using default value false
Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=false] Connection established, initializing transport
[localhost/127.0.0.1:9042] Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=false] Transport initialized, connection ready
[Control connection] Refreshing node list and token map
You listed localhost/0:0:0:0:0:0:0:1:9042 in your contact points, but it wasn't found in the control host's system.peers at startup
[Control connection] Refreshing schema
[Control connection] Refreshing node list and token map
[Control connection] established to localhost/127.0.0.1:9042
Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
New Cassandra host localhost/127.0.0.1:9042 added
[localhost/127.0.0.1:9042] preparing to open 1 new connections, total = 2
Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=false] Connection established, initializing transport
[localhost/127.0.0.1:9042] Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=false] Transport initialized, connection ready
Created connection pool to host localhost/127.0.0.1:9042 (1 connections needed, 1 successfully opened)
Added connection pool for localhost/127.0.0.1:9042
Preparing query SELECT "hash" AS col1,"body" AS col2 FROM demo1.ents WHERE "hash"=?;
[1795897147-1] Doing retry 1 for query com.datastax.driver.core.Statement$1#19be093f at consistency null
[1795897147-1] Error querying localhost/127.0.0.1:9042 : com.datastax.driver.core.exceptions.OperationTimedOutException: [localhost/127.0.0.1] Timed out waiting for server response
error message -> All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.OperationTimedOutException: [localhost/127.0.0.1] Timed out waiting for server response))
error cause -> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.OperationTimedOutException: [localhost/127.0.0.1] Timed out waiting for server response))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.OperationTimedOutException: [localhost/127.0.0.1] Timed out waiting for server response))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.AbstractSession.prepare(AbstractSession.java:98)
at com.datastax.driver.mapping.Mapper.getPreparedQuery(Mapper.java:118)
at com.datastax.driver.mapping.Mapper.getPreparedQuery(Mapper.java:129)
at com.datastax.driver.mapping.Mapper.getQuery(Mapper.java:333)
at com.datastax.driver.mapping.Mapper.getQuery(Mapper.java:325)
at com.datastax.driver.mapping.Mapper.getAsync(Mapper.java:388)
Here is where the exception happens in the Cassandra driver:
stmt = session().prepare(queryString); // Mapper.java

public PreparedStatement prepare(String query) { // AbstractSession.java
    try {
        return Uninterruptibles.getUninterruptibly(prepareAsync(query));
    } catch (ExecutionException e) {
        throw DriverThrowables.propagateCause(e);
    }
}
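One thing worth trying, since the failure above is an OperationTimedOutException during a prepare rather than a connection refusal, is raising the driver's read timeout before connecting. A minimal sketch, assuming the driver 3.x SocketOptions API and reusing the demo1.ents query from the DEBUG output above; the timeout values are arbitrary examples:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SocketOptions;

public class PrepareTimeoutCheck {
    public static void main(String[] args) {
        // Bump the per-request read timeout and the connect timeout (values are examples)
        SocketOptions socketOptions = new SocketOptions()
                .setConnectTimeoutMillis(10000)
                .setReadTimeoutMillis(30000);

        Cluster cluster = Cluster.builder()
                .addContactPoint("localhost")
                .withPort(9042)
                .withSocketOptions(socketOptions)
                .build();
        try {
            Session session = cluster.connect("demo1");
            // The same prepare that was timing out in the DEBUG output above
            session.prepare("SELECT \"hash\" AS col1,\"body\" AS col2 FROM demo1.ents WHERE \"hash\"=?;");
            System.out.println("prepare succeeded");
        } finally {
            cluster.close();
        }
    }
}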

Cassandra 2.1, Datastax Java driver 2.1.1 - Custom authentication/authorization plugin causes NoHostAvailableException

Environment is Red Hat, Cassandra 2.1, Datastax Java driver 2.1.1.
I have developed custom authentication/authorization plugins for Cassandra, and they work beautifully when I try them with cqlsh - I can see my plugins being called, users are authenticated/authorized accordingly, etc. - bottom line, everything works exactly as expected.
Then I tried to test using the Datastax driver. I'm connecting to Cassandra with:
public class CassandraConnection {

    private final Cluster cluster;
    private final Session session;

    public CassandraConnection(final String node, final int port) {
        this.cluster = Cluster.builder()
                .addContactPoint(node)
                .withPort(port)
                .withCredentials("someuser", "somepassword")
                .build();
        this.session = cluster.connect();
    }

    // Etc....
The call to cluster.connect() generates an exception:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.TransportException: [localhost/127.0.0.1:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:196)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:80)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1145)
at com.datastax.driver.core.Cluster.init(Cluster.java:149)
at com.datastax.driver.core.Cluster.connect(Cluster.java:225)
at com.<company...packages...>.CassandraConnection.<init>(CassandraConnection.java:21)
Here is the puzzling part: although I can see my plugins being called when I test them using cqlsh, they are never accessed when I use the Datastax driver - I have added log messages in the beginning of each method, and they are never called. There are no errors in the logs indicating any sort of initialization problem, and I do see a message indicating that my plugins will be used.
That exact same client code works with no problem when:
I don't have my plugin running.
I use Cassandra's PasswordAuthenticator.
So, it looks like there is some problem with my plugins, but how can that be if 1) they work fine with cqlsh and 2) none of their methods are being called when the datastax driver is being used?
A couple of additional points: if I try to connect using DataStax's DevCenter, I see the same behavior as my client, with the exact same exception, so that rules out my (very simple) client code. I have also tried calling:
cluster.getConfiguration().getSocketOptions().setReadTimeoutMillis(10000);
before calling connect(), as suggested in other posts, but that didn't help either. When I step through the client with the debugger, I see the error as soon as I call cluster.connect(), so it's not a timeout issue either.
Any help is appreciated.

Hector Cassandra Connectivity Error

I am trying to connect to a local Cassandra instance through a Java client powered by Hector. I attempt to read rows after connecting. The code snippet is as follows:
Cluster myCluster = HFactory.getOrCreateCluster("test" , "localhost:9160");
KeyspaceDefinition keySpaceDef = myCluster.describeKeyspace("testkeyspace");
.....
However, the connection fails with this error:
Exception in thread "main" java.lang.NoSuchFieldError: DEFAULT_MEMTABLE_OPERATIONS_IN_MILLIONS
at me.prettyprint.cassandra.service.ThriftCfDef.<init>(ThriftCfDef.java:65)
at me.prettyprint.cassandra.service.ThriftCfDef.fromThriftList(ThriftCfDef.java:144)
at me.prettyprint.cassandra.service.ThriftKsDef.<init>(ThriftKsDef.java:34)
at me.prettyprint.cassandra.service.AbstractCluster$4.execute(AbstractCluster.java:192)
at me.prettyprint.cassandra.service.AbstractCluster$4.execute(AbstractCluster.java:187)
at me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:101)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:232)
at me.prettyprint.cassandra.service.AbstractCluster.describeKeyspace(AbstractCluster.java:201)
I have Cassandra and Thrift as dependencies in my pom.xml. Any clues as to what could be wrong?
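A NoSuchFieldError thrown while a class loads usually means the jar versions on the classpath don't match the versions Hector was compiled against, so it is worth checking that the Hector and Cassandra/Thrift dependencies line up. A hypothetical pom.xml excerpt, just to illustrate the dependencies mentioned above; the artifact IDs and versions are assumptions, not taken from the question:
<!-- Illustrative only: make sure these versions are mutually compatible -->
<dependency>
    <groupId>me.prettyprint</groupId>
    <artifactId>hector-core</artifactId>
    <version>1.1-4</version>
</dependency>
<dependency>
    <groupId>org.apache.cassandra</groupId>
    <artifactId>cassandra-thrift</artifactId>
    <version>1.2.19</version>
</dependency>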
