Cassandra Connection with Groovy Script In SoapUI - groovy

Thanks for your time. I am trying to access a remote Cassandra DB in order to complete my assertions. I can see that the server is running:
Cassandra V 3.0.8.1293
Driver Type: Cassandra CQL
Datastax Java Driver for Apache Cassandra - Core [3.0.5]
So I am trying to access the DB with the following simple code:
import com.datastax.driver.core.*
Cluster cluster = null;
try {
    cluster = Cluster.builder().addContactPoint("x.x.x.x").withCredentials("xxxxxxx", "xxxxxx").withPort(9042).build()
    Session session = cluster.connect();
    ResultSet rs = session.execute("select * from TABLE");
    Row row = rs.one();
} finally {
    if (cluster != null) cluster.close();
}
When I use cassandra-driver-core-2.0.1.jar I get the following error:
ERROR:com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /x.x.x.x(null))
I read the documentation and a lot of posts here and on other blogs, and saw that there may be an incompatibility with the driver version, so I tried upgrading the driver to several versions (cassandra-driver-core-2.5, cassandra-driver-core-3, cassandra-driver-core-3.2), but with those I get the following:
ERROR:java.lang.ExceptionInInitializerError
I have also tried to connect using JDBC, but to no avail, using the configuration proposed in this thread:
SoapUI JDBC connection with Apache Cassandra
I am actually running out of ideas. Can anyone suggest a direction on how to achieve this, either by pointing me to a tutorial or with any other idea?
Thank you very much

I think you haven't enabled remote access to Cassandra.
Try enabling remote access using the configuration below:
File path: /etc/cassandra/default.conf/cassandra.yaml
rpc_address: 0.0.0.0
broadcast_rpc_address: <serverIPAddress>
After that, restart the Cassandra service.
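Once remote access is enabled and the service has restarted, a quick way to confirm connectivity from the SoapUI Groovy step is to query system.local, which always exists. A minimal sketch, assuming the driver JAR is on SoapUI's classpath; the contact point, port, and credentials are placeholders, and log is SoapUI's Groovy script logger:
import com.datastax.driver.core.*
Cluster cluster = null
try {
    // Placeholder contact point, port, and credentials - replace with your own
    cluster = Cluster.builder()
        .addContactPoint("x.x.x.x")
        .withPort(9042)
        .withCredentials("username", "password")
        .build()
    Session session = cluster.connect()
    // Querying system.local verifies connectivity without touching your own schema
    Row row = session.execute("SELECT release_version FROM system.local").one()
    log.info("Connected, Cassandra release: " + row.getString("release_version"))
} finally {
    if (cluster != null) cluster.close()
}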

Related

How to connect to Cassandra Database using Python code

I followed the steps given in https://docs.datastax.com/en/developer/python-driver/3.25/getting_started/ to connect to a Cassandra database using Python code, but after running the code snippet I am still getting:
NoHostAvailable: ('Unable to connect to any servers', {'host:port': OperationTimedOut('errors=None, last_host=None'),
Python 2.7 and 3 (classpath is set for both Python versions)
Java 1.8 (classpath has been set)
Apache Cassandra 3.11.6 (Apache home classpath has been set)
I tend to use a very simple app to test connectivity to a Cassandra cluster:
from cassandra.cluster import Cluster

cluster = Cluster(['10.1.2.3'], port=45678)
session = cluster.connect()
row = session.execute("SELECT release_version FROM system.local").one()
if row:
    print(row[0])
Then run it:
$ python HelloCassandra.py
4.0.6
In your comment you mentioned that you're getting OperationTimedOut which indicates that the driver never got a response back from the node within the client timeout period. This usually means (a) you're connecting to the wrong IP, (b) you're connecting to the wrong CQL port, or (c) there's a network connectivity issue between your app and the cluster.
Make sure that you're using the IP address that you've set in rpc_address of cassandra.yaml. Also make sure that the node is listening for CQL clients on the right port. You can easily verify this with Linux utilities like netstat or lsof, for example:
$ sudo lsof -nPi -sTCP:LISTEN
Cheers!
So that error message suggests that the host/port combination either does not have Cassandra running on it or is under heavy load and unable to respond.
Can you edit your question to include the Cassandra connection portion of your code, as well as maybe how you're calling it? I have a test script which I use (and you're welcome to check it out), and here is the connection portion:
import sys

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

protocol = 4
hostname = sys.argv[1]
username = sys.argv[2]
password = sys.argv[3]

nodes = []
nodes.append(hostname)

auth_provider = PlainTextAuthProvider(username=username, password=password)
cluster = Cluster(nodes, auth_provider=auth_provider, protocol_version=protocol)
session = cluster.connect()
I call it like this:
$ python3 testCassandra.py 127.0.0.1 aaron notReallyMyPassword
local
One thing you might also try is running nodetool status on the cluster, just to make sure it's running OK.
Edit
local variable 'session' referenced before assignment
So this sounds to me like you're attempting a session.execute before session = cluster.connect(). Have a look at my Git repo (linked above) to see the correct order for instantiating session.
I am not using default port
In that case, make sure the port is being set in the cluster definition. Ex:
port = 19099
cluster = Cluster(nodes,auth_provider=auth_provider, port=port)

Can we use Cassandra in java without Maven? Can anyone provide sample code?

Can we use Cassandra without Maven in Java? If so, how can we do that?
I've tried using it with a JDBC driver, but that is not helping the situation.
Check the DataStax Java Driver GitHub page, under the section "Getting the driver":
If you can't use a dependency management tool, a binary tarball is available for download.
Note that the link above is for the 3.2 version of the driver.
Untar the tarball, and put the following three JAR files on your classpath:
cassandra-driver-core-3.2.0.jar
cassandra-driver-mapping-3.2.0.jar
cassandra-driver-extras-3.2.0.jar
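If you are compiling and running with plain javac/java, those JARs can be passed with the -cp flag. A rough sketch, assuming the tarball was extracted to a local driver/ directory and that any dependency JARs shipped with it are placed there as well (QuickStart.java is a placeholder file name):
$ javac -cp "driver/*" QuickStart.java
$ java -cp ".:driver/*" QuickStart
On Windows, the classpath separator is ; rather than :.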
Once that is done, you can follow the "Quick Start" section of the linked manual:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
Cluster cluster = null;
try {
    cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")
        .build();
    Session session = cluster.connect();
    ResultSet rs = session.execute("select release_version from system.local");
    Row row = rs.one();
    System.out.println(row.getString("release_version"));
} finally {
    if (cluster != null) cluster.close();
}
Note that the above "quick start" code is exactly that, and assumes that:
You are not using client-to-node SSL.
You are not using user authentication.
You are running Cassandra on your local machine, listening on 127.0.0.1.
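If any of those assumptions do not hold, the builder needs the matching options. A hedged sketch for a remote node with password authentication - the address, port, and credentials below are placeholders, and withSSL() would also be needed for client-to-node SSL:
Cluster cluster = Cluster.builder()
    .addContactPoint("10.0.0.5")                      // remote node instead of 127.0.0.1
    .withPort(9042)                                   // CQL port, change if not the default
    .withCredentials("someuser", "somepassword")      // only if authentication is enabled
    .build();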

Cassandra is throwing NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1 (null))

I am not able to connect to Cassandra cluster using this code:
public static boolean tableCreate() {
    // Query
    String query = "CREATE KEYSPACE store WITH replication "
        + "= {'class':'SimpleStrategy', 'replication_factor':1};";
    // Creating the Cluster object
    Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").withPort(9042).build();
    // Creating the Session object
    Session session = cluster.connect("tutorialspoint");
    // Executing the query
    session.execute(query);
    // Using the keyspace
    session.execute("USE store");
    System.out.println("Keyspace created with store name");
    return true;
}
It is giving me this error:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1 (null))
What is my mistake in the code above?
Cassandra is running on my local Windows 10 64-bit machine, and I have also disabled the firewall.
You may need to check, and possibly update, the version of the DataStax driver that you are using. I faced exactly the same error (i.e. the same error message while connecting), and after upgrading the DataStax driver version the problem went away and I could connect to the DB.
Similar Issue: Unable to connect to Cassandra cluster running on local host

Connecting to Cassandra with Spark

First, I bought the new O'Reilly Spark book and tried its Cassandra setup instructions. I've also found other Stack Overflow posts and various posts and guides over the web. None of them work as-is. Below is as far as I could get.
This is a test with only a handful of records of dummy test data. I am running the most recent Cassandra 2.0.7 Virtual Box VM provided by plasetcassandra.org linked from the main Cassandra project page.
I downloaded Spark 1.2.1 source and got the latest Cassandra Connector code from github and built both against Scala 2.11. I have JDK 1.8.0_40 and Scala 2.11.6 setup on Mac OS 10.10.2.
I run the spark shell with the cassandra connector loaded:
bin/spark-shell --driver-class-path ../spark-cassandra-connector/spark-cassandra-connector/target/scala-2.11/spark-cassandra-connector-assembly-1.2.0-SNAPSHOT.jar
Then I do what should be a simple row count type test on a test table of four records:
import com.datastax.spark.connector._
sc.stop
val conf = new org.apache.spark.SparkConf(true).set("spark.cassandra.connection.host", "192.168.56.101")
val sc = new org.apache.spark.SparkContext(conf)
val table = sc.cassandraTable("mykeyspace", "playlists")
table.count
I get the following error. What is confusing is that it gets errors trying to find Cassandra at 127.0.0.1, even though it also recognizes the host name that I configured, which is 192.168.56.101.
15/03/16 15:56:54 INFO Cluster: New Cassandra host /192.168.56.101:9042 added
15/03/16 15:56:54 INFO CassandraConnector: Connected to Cassandra cluster: Cluster on a Stick
15/03/16 15:56:54 ERROR ServerSideTokenRangeSplitter: Failure while fetching splits from Cassandra
java.io.IOException: Failed to open thrift connection to Cassandra at 127.0.0.1:9160
<snip>
java.io.IOException: Failed to fetch splits of TokenRange(0,0,Set(CassandraNode(/127.0.0.1,/127.0.0.1)),None) from all endpoints: CassandraNode(/127.0.0.1,/127.0.0.1)
BTW, I can also use a configuration file at conf/spark-defaults.conf to do the above without having to close/recreate a Spark context or pass in the --driver-class-path argument. I ultimately hit the same error though, and the above steps seem easier to communicate in this post.
Any ideas?
Check the rpc_address config in your cassandra.yaml file on your Cassandra node. It's likely that the Spark connector is using that value from the system.local/system.peers tables, and it may be set to 127.0.0.1 in your cassandra.yaml.
The Spark connector uses Thrift to get token range splits from Cassandra. Eventually I'm betting this will be replaced, as C* 2.1.4 has a new table called system.size_estimates (CASSANDRA-7688). It looks like it's getting the host metadata to find the nearest host and then making the query using Thrift on port 9160.
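One way to see which addresses the node is actually advertising (the same values the connector reads) is to query the system tables directly. A minimal sketch with the DataStax Java driver, assuming the node is reachable on 192.168.56.101:9042:
import com.datastax.driver.core.*;

Cluster cluster = Cluster.builder().addContactPoint("192.168.56.101").withPort(9042).build();
try {
    Session session = cluster.connect();
    // The local node's advertised addresses come from system.local ...
    Row local = session.execute("SELECT rpc_address, listen_address FROM system.local").one();
    System.out.println("local rpc_address: " + local.getInet("rpc_address"));
    // ... and the other nodes' addresses come from system.peers
    for (Row peer : session.execute("SELECT peer, rpc_address FROM system.peers")) {
        System.out.println("peer " + peer.getInet("peer") + " rpc_address: " + peer.getInet("rpc_address"));
    }
} finally {
    cluster.close();
}
If rpc_address comes back as 127.0.0.1, that is what the splitter will try to contact, regardless of the host you configured in Spark.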

Cassandra 2.1, Datastax Java driver 2.1.1 - Custom authentication/authorization plugin causes NoHostAvailableException

Environment is Red Hat, Cassandra 2.1, Datastax Java driver 2.1.1.
I have developed custom authentication/authorization plugins for Cassandra, and they work beautifully when I try them with cqlsh - I can see my plugins being called, users are authenticated/authorized accordingly, etc. - bottom line, everything works exactly as expected.
Then I tried to test using the Datastax driver. I'm connecting to Cassandra with:
public class CassandraConnection {

    private final Cluster cluster;
    private final Session session;

    public CassandraConnection(final String node, final int port) {
        this.cluster = Cluster.builder()
            .addContactPoint(node)
            .withPort(port)
            .withCredentials("someuser", "somepassword")
            .build();
        this.session = cluster.connect();
    }

    // Etc....
The call to cluster.connect() generates an exception:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.TransportException: [localhost/127.0.0.1:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:196)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:80)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1145)
at com.datastax.driver.core.Cluster.init(Cluster.java:149)
at com.datastax.driver.core.Cluster.connect(Cluster.java:225)
at com.<company...packages...>.CassandraConnection.<init>(CassandraConnection.java:21)
Here is the puzzling part: although I can see my plugins being called when I test them using cqlsh, they are never accessed when I use the DataStax driver - I have added log messages at the beginning of each method, and they are never called. There are no errors in the logs indicating any sort of initialization problem, and I do see a message indicating that my plugins will be used.
That exact same client code works with no problem when:
I don't have my plugin running.
I use Cassandra's PasswordAuthenticator.
So it looks like there is some problem with my plugins, but how can that be if 1) they work fine with cqlsh and 2) none of their methods are called when the DataStax driver is used?
A couple of additional points: if I try to connect using DataStax's DevCenter, I see the same behavior as my client, with the exact same exception, so that rules out my (very simple) client code. I have also tried adding:
cluster.getConfiguration().getSocketOptions().setReadTimeoutMillis(10000);
before calling connect(), as suggested in other posts, but that didn't help either - when I step through the client with the debugger, I see the error as soon as I call cluster.connect(), so it's not a timeout issue either.
Any help is appreciated.
