I have this code:
@Bean
public CqlSession getCqlSession() {
    return CqlSession.builder()
            .addContactPoint(new InetSocketAddress(cassandraHost, cassandraPort))
            .withAuthCredentials(cassandraUsername, cassandraPassword)
            .build();
}
The connection is failing with this exception:
Failed to instantiate [com.datastax.oss.driver.api.core.CqlSession]: Factory method 'getCqlSession' threw
exception; nested exception is com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach
any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors()
for more): Node(endPoint=tinyurl-cassandra.cassandra.cosmos.azure.com/52.230.23.170:10350, hostId=null,
hashCode=237f706): [com.datastax.oss.driver.api.core.DriverTimeoutException: [s0|control|id: 0xb89dacff,
L:/192.168.0.101:59158 - R:tinyurl-cassandra.cassandra.cosmos.azure.com/52.230.23.170:10350] Protocol
initialization request, step 1 (OPTIONS): timed out after 5000 ms]
I am new to Cassandra and have tried the following:
Validated that the credentials are okay.
Tried with cqlsh - could not connect either.
Checked that there's no firewall set up on my machine. I can telnet to the host and port.
Can open the Cassandra Shell from Azure Data Explorer.
What am I missing? Any help will be appreciated.
Looks like you are using the v4.x Java driver. The default load balancing policy in this driver mandates that you provide the local data center, e.g.:
CqlSession.builder()
        .withSslContext(sc)
        .addContactPoint(new InetSocketAddress(cassandraHost, cassandraPort))
        .withLocalDatacenter("UK South")
        .withAuthCredentials(cassandraUsername, cassandraPassword)
        .build();
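Note that the snippet above passes an SSLContext (sc) to withSslContext() that isn't shown. A minimal sketch of how it could be built, assuming the Cosmos DB Cassandra API endpoint presents a certificate signed by a CA that the JVM's default trust store already trusts (worth verifying for your account):

import javax.net.ssl.SSLContext;
import java.security.NoSuchAlgorithmException;

// Builds the SSLContext used in withSslContext(sc); relies on the JVM's default trust store.
private SSLContext buildDefaultSslContext() throws NoSuchAlgorithmException {
    return SSLContext.getDefault();
}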
You could take a look at this getting started sample for further reference: https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started-v4
Related
I've updated my Spring Boot to v3.0.0 and spring-data-cassandra to v4.0.0, which resulted in being unable to connect to the Cassandra cluster that is deployed in the stg env, runs on an IPv6 address, and has a datacenter other than DC1.
I've added a config class which sets the local DC programmatically:
@Bean(destroyMethod = "close")
public CqlSession session() {
    CqlSession session = CqlSession.builder()
            .addContactPoint(InetSocketAddress.createUnresolved("[240b:c0e0:1xx:xxx8:xxxx:x:x:x]", port))
            .withConfigLoader(
                    DriverConfigLoader.programmaticBuilder()
                            .withString(DefaultDriverOption.LOAD_BALANCING_LOCAL_DATACENTER, localDatacenter)
                            .withString(DefaultDriverOption.AUTH_PROVIDER_PASSWORD, password)
                            .withString(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, "10s")
                            .withString(DefaultDriverOption.CONNECTION_CONNECT_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.REQUEST_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.SESSION_KEYSPACE, keyspace)
                            .build())
            //.addContactPoint(InetSocketAddress.createUnresolved(InetAddress.getByName(contactPoints).getHostName(), port))
            .build();
    return session;
}
and this is my application.yml file:
spring:
  data:
    cassandra:
      keyspace-name: xxx
      contact-points: [xxxx:xxxx:xxxx:xxx:xxx:xxx]
      port: xxx
      local-datacenter: xxxx
      use-dc-aware: true
      username: xxxxx
      password: xxxxx
      ssl: true
      SchemaAction: CREATE_IF_NOT_EXISTS
So locally I was able to connect to Cassandra (by default it points to localhost), but in the stg env my application is not able to connect to that cluster.
Logs in my stg env:
caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx, hostId=null, hashCode=4e9ba6a8): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [s0|control|id: 0x984419ed, L:/[240b:c0e0:102:5dd7:xxxx:x:x:xxx]:4xxx - R:/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx] Protocol initialization request, step 1 (OPTIONS): unexpected failure (com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Lost connection to remote peer)]
Network
You appear to have a networking issue. The driver can't connect to any of the nodes because they are unreachable from a network perspective as it states in the error message:
... AllNodesFailedException: Could not reach any contact point ...
You need to check that:
you have configured the correct IP addresses,
you have configured the correct CQL port, and
there is network connectivity between your app and the cluster (see the quick check sketched below).
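As a crude way to verify that last point from inside the environment where the application runs, a plain TCP connection attempt is often enough. This is only a sketch; host and port below are placeholders for your contact point and CQL port:

import java.net.InetSocketAddress;
import java.net.Socket;

public class CqlPortCheck {
    public static void main(String[] args) throws Exception {
        String host = "your-cassandra-host";   // placeholder contact point
        int port = 9042;                       // placeholder CQL port
        try (Socket socket = new Socket()) {
            // Fails with an exception if the node is unreachable from this machine
            socket.connect(new InetSocketAddress(host, port), 5_000);
            System.out.println("TCP connection to " + host + ":" + port + " succeeded");
        }
    }
}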
Security
I also noted that you configured the driver to use SSL:
ssl: true
but I don't see where you've configured the certificate credentials, and this could explain why the driver can't initiate connections.
Check that the cluster has client-to-node encryption enabled. If it does then you need to prepare the client certificates and configure SSL on the driver.
Driver build
This post appears to be a duplicate of another question you posted but is now closed due to lack of clarity and details.
In that question it appears you are running a version of the Java driver not produced by DataStax, as pointed out by @absurdface:
Specifically I note that java-driver-core-4.11.4-yb-1-RC1.jar isn't a Java driver artifact released by DataStax (there isn't even a 4.11.4 Java driver release). This could be relevant for reasons we'll get into ...
We are not aware of where this build came from and without knowing much about it, it could be the reason you are not able to connect to the cluster.
We recommend that you switch to one of the supported builds of the Java driver. Cheers!
A hearty +1 to everything @erick-ramirez mentioned above. I would also expand on his answer with an observation or two.
Normally spring-data-cassandra is used to automatically configure a CqlSession and make it available for injection (or for use in CqlTemplate etc.). That's what you'd normally be configuring with your application.yml file. But you're apparently creating the CqlSession directly in code, which means that spring-data-cassandra isn't involved... and therefore what's in your application.yml likely isn't being used.
This analysis strongly suggests that your CqlSession is not being configured to use SSL. My understanding is that your testing sequence went as follows:
Tested app locally on a local server, everything worked
Tested app against test environment, observed the errors above
If this sequence is correct and you have SSL enabled in your test environment but not on your local Cassandra instance, that could very easily explain the behaviour you're describing.
It would also explain the specific error you cite: "Lost connection to remote peer" indicates that something is unexpectedly killing your socket connection before any protocol messages are exchanged... and an SSL issue would cause almost exactly that behaviour.
I would recommend checking the SSL configuration for both servers involved in your testing. I would also suggest consulting the SSL-related documentation referenced by Erick above and confirm that you have all the relevant materials when building your CqlSession.
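As a side note to the point about application.yml above: if you'd rather let Spring Boot build the session from application.yml and only adjust it in code, one option is to drop the custom session bean and register a customizer instead. This is only a sketch, assuming Spring Boot's Cassandra auto-configuration is active and its CqlSessionBuilderCustomizer hook; the class name and the tweak shown are illustrative:

import org.springframework.boot.autoconfigure.cassandra.CqlSessionBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CassandraCustomizerConfig {

    // Invoked while the auto-configured CqlSession is being built, so the
    // contact points, credentials, SSL flag etc. from application.yml still apply.
    @Bean
    public CqlSessionBuilderCustomizer cqlSessionBuilderCustomizer() {
        return builder -> builder.withApplicationName("my-app"); // example tweak only
    }
}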
I added the certificate in my Spring application:
public CqlSession session() throws IOException, CertificateException, NoSuchAlgorithmException, KeyStoreException, KeyManagementException {
    // Load the CA certificate from the classpath
    Resource resource = new ClassPathResource("root.crt");
    InputStream inputStream = resource.getInputStream();
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    Certificate cert = cf.generateCertificate(inputStream);

    // Build a trust store containing only this CA and wire it into an SSLContext
    TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
    keyStore.load(null);
    keyStore.setCertificateEntry("ca", cert);
    trustManagerFactory.init(keyStore);
    SSLContext sslContext = SSLContext.getInstance("TLSv1.3");
    sslContext.init(null, trustManagerFactory.getTrustManagers(), null);

    return CqlSession.builder()
            .withSslContext(sslContext)
            .addContactPoint(new InetSocketAddress(contactPoints, port))
            .withAuthCredentials(username, password)
            .withLocalDatacenter(localDatacenter)
            .withKeyspace(keyspace)
            .build();
}
So adding the cert file to the CqlSession builder configuration helped me connect to the remote Cassandra cluster.
We are trying to connect to two keyspaces of Cassandra (3.x) in the same application with the same Kerberos credentials. The application is able to connect to one keyspace but not the other. Access to the keyspaces has been verified.
Error on connection:
2022-08-22 13:15:10,972 [cluster-reconnection-0] DEBUG c.d.d.c.ControlConnection [--]- [Control connection] error on 169.24.167.109:9042 connection, trying next host
javax.security.auth.login.LoginException: No LoginModules configured for CassandraJavaClient
at javax.security.auth.login.LoginContext.init(LoginContext.java:264)
at javax.security.auth.login.LoginContext.<init>(LoginContext.java:417)
The JAAS login configuration (which points at the ticket cache) is:
CassandraJavaClient {
com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true ticketCache="/var//krb5cc_userlogin";
};
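For context, a login entry named CassandraJavaClient like the one above is only visible to the JVM if the containing JAAS config file is registered. A minimal sketch of how that is typically done (the file path is a placeholder):

// Usually passed as a JVM flag instead:
//   -Djava.security.auth.login.config=/path/to/cassandra-jaas.conf
System.setProperty("java.security.auth.login.config", "/path/to/cassandra-jaas.conf");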
The same ticket cache file is used by the first connection, which succeeds, while the second connection fails. I am not even sure how to debug it (I tried remote debugging, but since the initial control connection is an async call, I was unable to get to the actual error).
We are using com.datastax.cassandra:cassandra-driver-core:jar:3.6.0
Any ideas/help to debug/resolve this will be highly appreciated.
I followed the official instructions from the Azure Portal. This is my
config.properties:
cassandra_host="demodemodemo.cassandra.cosmosdb.azure.com"
cassandra_username="demo"
cassandra_password="aHaplLoWhRlysBrtJWiOwB79TkqSU9PjKLu5wDeltLqys5NpR9vmtHCJrTF4ScdY69yNSWUvTUphax8RijydTA=="
cassandra_port=10350
ssl_keystore_file_path=
ssl_keystore_password=
Then it throws java.lang.IllegalArgumentException: Failed to add contact point and Caused by: java.net.UnknownHostException: "demodemodemo.cassandra.cosmosdb.azure.com" at this point:
[ CassandraUtils class, getSession() method ]
cluster = Cluster.builder()
.addContactPoint(cassandraHost)
You need to remove the double quotes from the settings.
If your credentials are correct, this should work.
cassandra_host=demodemodemo.cassandra.cosmosdb.azure.com
cassandra_username=demo
cassandra_password=aHaplLoWhRlysBrtJWiOwB79TkqSU9PjKLu5wDeltLqys5NpR9vmtHCJrTF4ScdY69yNSWUvTUphax8RijydTA==
cassandra_port=10350
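The reason is that java.util.Properties does not strip quotes: the value comes back with the literal quote characters included, so the driver tries to resolve a host name that starts and ends with a quote. A quick sketch illustrating this (the file name and key match the config above):

import java.io.FileInputStream;
import java.util.Properties;

public class QuoteDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("config.properties")) {
            props.load(in);
        }
        // With the quoted value this prints: "demodemodemo.cassandra.cosmosdb.azure.com"
        // (including the quotes), which is not a resolvable host name.
        System.out.println(props.getProperty("cassandra_host"));
    }
}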
Also, by default the username is the same as the first part of the host name, so in your case demodemodemo, unless you changed it.
I had a similar problem. My corporate on-prem environment is behind a proxy. Since I was using Cassandra, I could not set up an HTTP proxy (Cassandra has its own protocol). The solution might be to use Azure Private Link. An example tutorial on how to do it is here: https://learn.microsoft.com/en-us/azure/cosmos-db/how-to-configure-private-endpoints
I have a Spring Boot application that uses Spring Data for Cassandra. One of the requirements is that the application will start even if the Cassandra cluster is unavailable. The application logs the situation, and all its endpoints will not work properly, but the application does not shut down. It should keep retrying to connect to the cluster during this time. When the cluster becomes available, the application should start to operate normally.
If I am able to connect during the application start and the cluster becomes unavailable after that, the cassandra java driver is capable of managing the retries.
How can I manage the retries during application start and still use Cassandra Repositories from Spring Data?
Thanx
It is possible to start a Spring Boot application if Apache Cassandra is not available, but you need to define the Session and CassandraTemplate beans on your own with @Lazy. The beans are provided out of the box by CassandraAutoConfiguration but are initialized eagerly (the default behavior), which creates a Session. The Session requires a connection to Cassandra, which will prevent startup if it's not initialized lazily.
The following code will initialize the resources lazily:
@Configuration
public class MyCassandraConfiguration {

    @Bean
    @Lazy
    public CassandraTemplate cassandraTemplate(@Lazy Session session, CassandraConverter converter) throws Exception {
        return new CassandraTemplate(session, converter);
    }

    @Bean
    @Lazy
    public Session session(CassandraConverter converter, Cluster cluster,
            CassandraProperties cassandraProperties) throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster);
        session.setConverter(converter);
        session.setKeyspaceName(cassandraProperties.getKeyspaceName());
        session.setSchemaAction(SchemaAction.NONE);
        return session.getObject();
    }
}
One of the requirements is that the application will start even if the Cassandra Cluster is unavailable
I think you should read this section of the Java driver doc: http://datastax.github.io/java-driver/manual/#cluster-initialization
The Cluster object does not connect automatically; the connection is only established when certain calls are executed.
Since you're using Spring Data Cassandra (which I do not recommend, since it has fewer features than the plain mapper module of the Java driver...), I don't know if the Cluster object or Session object is exposed directly to users...
For retries, you can put the cluster.init() call in a try/catch block; if the cluster is still unavailable, you'll catch a NoHostAvailableException according to the docs. Upon the exception, you can schedule a retry of cluster.init() later, as sketched below.
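A rough sketch of that retry loop with the 2.x/3.x driver (the contact point and the retry interval are placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.exceptions.NoHostAvailableException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ClusterInitRetry {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Cluster cluster = Cluster.builder()
            .addContactPoint("127.0.0.1") // placeholder contact point
            .build();

    public void start() {
        try {
            cluster.init(); // throws NoHostAvailableException if no node is reachable
            // cluster is up; sessions can be created from here on
        } catch (NoHostAvailableException e) {
            // try again later instead of failing application startup
            scheduler.schedule(this::start, 30, TimeUnit.SECONDS);
        }
    }
}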
Environment is Red Hat, Cassandra 2.1, Datastax Java driver 2.1.1.
I have developed custom authentication/authorization plugins for Cassandra, and they work beautifully when I try them with cqlsh - I can see my plugins being called, users are authenticated/authorized accordingly, etc. - bottom line, everything works exactly as expected.
Then I tried to test using the Datastax driver. I'm connecting to Cassandra with:
public class CassandraConnection {

    private final Cluster cluster;
    private final Session session;

    public CassandraConnection(final String node, final int port) {
        this.cluster = Cluster.builder()
                .addContactPoint(node)
                .withPort(port)
                .withCredentials("someuser", "somepassword")
                .build();
        this.session = cluster.connect();
    }

    // Etc....
The call to cluster.connect() generates an exception:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.TransportException: [localhost/127.0.0.1:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:196)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:80)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1145)
at com.datastax.driver.core.Cluster.init(Cluster.java:149)
at com.datastax.driver.core.Cluster.connect(Cluster.java:225)
at com.<company...packages...>.CassandraConnection.<init>(CassandraConnection.java:21)
Here is the puzzling part: although I can see my plugins being called when I test them using cqlsh, they are never accessed when I use the Datastax driver - I have added log messages in the beginning of each method, and they are never called. There are no errors in the logs indicating any sort of initialization problem, and I do see a message indicating that my plugins will be used.
That exact same client code works with no problem when:
I don't have my plugin running.
I use Cassandra's PasswordAuthenticator.
So, it looks like there is some problem with my plugins, but how can that be if 1) they work fine with cqlsh and 2) none of their methods are being called when the datastax driver is being used?
A couple of additional points - if I try to connect using Datastax's DevCenter, I see the same behavior as my client, with the exact same exception, so that rules out my (very simple) client code. I have also tried to:
cluster.getConfiguration().getSocketOptions().setReadTimeoutMillis(10000);
before calling connect() as suggested in other posts, but that didn't help either - when I step through the client with the debugger, I see the error as soon as I call cluster.connect(), so it's not a timeout issue either.
Any help is appreciated.