Not able to connect to Cassandra from my Spring Boot application, throwing "couldn't reach any contact point" exception [closed] - cassandra

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 20 days ago.
Added a config file to set the contact point programmatically:
@Bean(destroyMethod = "close")
public CqlSession session() {
    CqlSession session = CqlSession.builder()
            .addContactPoint(InetSocketAddress.createUnresolved("[240b:c0e0:1xx:xxx8:xxxx:x:x:x]", port))
            .withConfigLoader(
                    DriverConfigLoader.programmaticBuilder()
                            .withString(DefaultDriverOption.LOAD_BALANCING_LOCAL_DATACENTER, localDatacenter)
                            .withString(DefaultDriverOption.AUTH_PROVIDER_USER_NAME, username)
                            .withString(DefaultDriverOption.AUTH_PROVIDER_PASSWORD, password)
                            .withString(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, "10s")
                            .withString(DefaultDriverOption.CONNECTION_CONNECT_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.REQUEST_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.SESSION_KEYSPACE, keyspace)
                            .build())
            // .addContactPoint(InetSocketAddress.createUnresolved(InetAddress.getByName(contactPoints).getHostName(), port))
            .build();
    return session;
}
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.datastax.oss.driver.api.core.CqlSession]: Factory method 'cassandraSession' threw exception with message: Since you provided explicit contact points, the local DC must be explicitly set (see basic.load-balancing-policy.local-datacenter in the config, or set it programmatically with SessionBuilder.withLocalDatacenter). Current contact points are: Node(endPoint=/127.0.0.1:9042, hostId=0323221f-9a0f-ec92-ea4a-c1472c2a8b94, hashCode=39075b16)=datacenter1. Current DCs in this cluster are: datacenter1
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:171) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:648) ~[spring-beans-6.0.2.jar:6.0.2]
... 89 common frames omitted
Caused by: java.lang.IllegalStateException: Since you provided explicit contact points, the local DC must be explicitly set (see basic.load-balancing-policy.local-datacenter in the config, or set it programmatically with SessionBuilder.withLocalDatacenter). Current contact points are: Node(endPoint=/127.0.0.1:9042, hostId=0323221f-9a0f-ec92-ea4a-c1472c2a8b94, hashCode=39075b16)=datacenter1. Current DCs in this cluster are: datacenter1
at com.datastax.oss.driver.internal.core.loadbalancing.helper.MandatoryLocalDcHelper.discoverLocalDc(MandatoryLocalDcHelper.java:91) ~[java-driver-core-4.11.4-yb-1-RC1.jar:na]
at com.datastax.oss.driver.internal.core.loadbalancing.DefaultLoadBalancingPolicy.discoverLocalDc(DefaultLoadBalancingPolicy.java:119) ~[java-driver-core-4.11.4-yb-1-RC1.jar:na]
at
This is the application.yml file:
spring:
  data:
    cassandra:
      keyspace-name: xxx
      contact-points: [xxxx:xxxx:xxxx:xxx:xxx:xxx]
      port: xxx
      local-datacenter: xxxx
      use-dc-aware: true
      username: xxxxx
      password: xxxxx
      ssl: true
      SchemaAction: CREATE_IF_NOT_EXISTS
But the application is still pointing to localhost, even though I've explicitly specified the contact points and local DC.
Logs of the stg env are:
Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx, hostId=null, hashCode=4e9ba6a8): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [s0|control|id: 0x984419ed, L:/[240b:c0e0:102:5dd7:xxxx:x:x:xxx]:4xxx - R:/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx] Protocol initialization request, step 1 (OPTIONS): unexpected failure (com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Lost connection to remote peer)]

Thanks for the question!
I'll try to provide some pointers that might help you identify the problem, but it should be noted that you appear to have some non-standard elements in your application. Specifically, I note that "java-driver-core-4.11.4-yb-1-RC1.jar" isn't a Java driver artifact released by DataStax (there isn't even a 4.11.4 Java driver release). This could be relevant for reasons we'll get into in a moment. I also don't recognize the configuration file you cite above. Could you provide some more detail on how your app is configured? At first blush it looked as though you might be using spring-data-cassandra, but there wasn't any mention of it in your stack trace... so perhaps you're using some kind of custom configuration code?
As to your specific question: my guess is that you might have a Java driver configuration file in your staging environment which is providing a default value for "datastax-java-driver.basic.contact-points". The 4.x Java driver is configured via the Lightbend Config library. Most relevant to our case, it searches for a set of configuration files with various default names on the classpath; these files are then merged together to generate the config passed to the driver. So if you have an application.conf in staging which specifies some contact points and is on the classpath, the code you cite above would run fine in your local environment but fail in staging.
To validate this, create an application.conf file in your local environment in src/main/resources (or somewhere else that's explicitly included in the classpath) and give it the following contents:
datastax-java-driver {
  basic {
    contact-points: ["127.0.0.1:9042"]
  }
}
If you then re-run the app in your local environment you should see the error there as well.
Note that the core Java driver JAR already includes a reference.conf file which serves as a default configuration. Here's where the part about a custom JAR figures in; because you're not using a standard DataStax Java driver JAR, I don't know if you're using the standard reference.conf file defined within that JAR. It's possible that the contact points are defined in that file, although if that were the case I'd expect you to already be seeing the error in any environment where you use that JAR.
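If it helps, you can also confirm which contact points the merged configuration actually contains without opening a session. The following is only a sketch and assumes a standard 4.x driver where DriverConfigLoader.fromClasspath and getInitialConfig are available:
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;
import com.datastax.oss.driver.api.core.config.DriverExecutionProfile;

public class ConfigDebug {
    public static void main(String[] args) {
        // Resolve the merged config the driver would see: application.conf from the
        // classpath layered over the driver's reference.conf, without connecting.
        DriverConfigLoader loader = DriverConfigLoader.fromClasspath("application");
        DriverExecutionProfile profile = loader.getInitialConfig().getDefaultProfile();
        if (profile.isDefined(DefaultDriverOption.CONTACT_POINTS)) {
            System.out.println("Effective contact points: "
                    + profile.getStringList(DefaultDriverOption.CONTACT_POINTS));
        } else {
            System.out.println("No contact points defined in the merged config");
        }
    }
}
Running that in both your local and staging environments should show quickly whether the two resolve different contact points.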
One final note: the Java driver should be fine with IPv6 addresses. The issue described above isn't related to IPv6; it's entirely a function of how you're using the Java driver's configuration mechanism.
Hopefully some of the above is helpful!

Related

Spring Boot app can't connect to Cassandra cluster, driver returning "AllNodesFailedException: Could not reach any contact point"

I've updated my Spring Boot to v3.0.0 and spring-data-cassandra to v4.0.0, which resulted in being unable to connect to the Cassandra cluster that is deployed in the stg env, runs on an IPv6 address, and has a datacenter other than DC1.
I've added a config file which sets the local DC programmatically:
@Bean(destroyMethod = "close")
public CqlSession session() {
    CqlSession session = CqlSession.builder()
            .addContactPoint(InetSocketAddress.createUnresolved("[240b:c0e0:1xx:xxx8:xxxx:x:x:x]", port))
            .withConfigLoader(
                    DriverConfigLoader.programmaticBuilder()
                            .withString(DefaultDriverOption.LOAD_BALANCING_LOCAL_DATACENTER, localDatacenter)
                            .withString(DefaultDriverOption.AUTH_PROVIDER_PASSWORD, password)
                            .withString(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, "10s")
                            .withString(DefaultDriverOption.CONNECTION_CONNECT_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.REQUEST_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.SESSION_KEYSPACE, keyspace)
                            .build())
            // .addContactPoint(InetSocketAddress.createUnresolved(InetAddress.getByName(contactPoints).getHostName(), port))
            .build();
    return session;
}
And this is my application.yml file:
spring:
  data:
    cassandra:
      keyspace-name: xxx
      contact-points: [xxxx:xxxx:xxxx:xxx:xxx:xxx]
      port: xxx
      local-datacenter: xxxx
      use-dc-aware: true
      username: xxxxx
      password: xxxxx
      ssl: true
      SchemaAction: CREATE_IF_NOT_EXISTS
So locally I was able to connect to Cassandra (by default it points to localhost), but in the stg env my application is not able to connect to that cluster.
Logs in my stg env:
Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx, hostId=null, hashCode=4e9ba6a8): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [s0|control|id: 0x984419ed, L:/[240b:c0e0:102:5dd7:xxxx:x:x:xxx]:4xxx - R:/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx] Protocol initialization request, step 1 (OPTIONS): unexpected failure (com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Lost connection to remote peer)]
Network
You appear to have a networking issue. The driver can't connect to any of the nodes because they are unreachable from a network perspective as it states in the error message:
... AllNodesFailedException: Could not reach any contact point ...
You need to check that:
you have configured the correct IP addresses,
you have configured the correct CQL port, and
there is network connectivity between your app and the cluster (see the quick check below).
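To check the last point, a quick connectivity test from the application host can rule out routing and firewall problems. This is only a sketch; substitute your node's address and CQL port for the placeholders:
# run from the application host
nc -vz <node_address> <cql_port>
If that cannot connect, the driver will not be able to either.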
Security
I also noted that you configured the driver to use SSL:
ssl: true
but I don't see anywhere that you've configured the certificate credentials, which could explain why the driver can't initiate connections.
Check that the cluster has client-to-node encryption enabled. If it does then you need to prepare the client certificates and configure SSL on the driver.
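For reference, here is a minimal sketch of what the driver-side SSL settings could look like in the driver's application.conf, assuming the stock DefaultSslEngineFactory; the paths and passwords are placeholders:
datastax-java-driver {
  advanced.ssl-engine-factory {
    class = DefaultSslEngineFactory
    truststore-path = /path/to/client.truststore
    truststore-password = changeit
    # keystore-path / keystore-password are only needed for mutual (client certificate) auth
  }
}
The same thing can also be done programmatically with SessionBuilder.withSslContext, as shown later in this thread.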
Driver build
This post appears to be a duplicate of another question you posted but is now closed due to lack of clarity and details.
In that question it appears you are running a version of the Java driver not produced by DataStax, as pointed out by @absurdface:
Specifically I note that java-driver-core-4.11.4-yb-1-RC1.jar isn't a Java driver artifact released by DataStax (there isn't even a 4.11.4 Java driver release). This could be relevant for reasons we'll get into ...
We are not aware of where this build came from, and without knowing much about it, it could be the reason you are not able to connect to the cluster.
We recommend that you switch to one of the supported builds of the Java driver. Cheers!
A hearty +1 to everything @erick-ramirez mentioned above. I would also expand on his answers with an observation or two.
Normally spring-data-cassandra is used to automatically configure a CqlSession and make it available for injection (or for use in CqlTemplate etc.). That's what you'd normally be configuring with your application.yml file. But you're apparently creating the CqlSession directly in code, which means that spring-data-cassandra isn't involved... and therefore what's in your application.yml likely isn't being used.
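If the goal is to keep the auto-configuration in charge (so the settings in application.yml actually apply) while still adjusting the builder, Spring Boot exposes a CqlSessionBuilderCustomizer hook. The following is only a sketch, assuming the standard Spring Boot Cassandra auto-configuration is on the classpath; the withApplicationName call is just an illustrative tweak:
import com.datastax.oss.driver.api.core.CqlSessionBuilder;
import org.springframework.boot.autoconfigure.cassandra.CqlSessionBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CassandraCustomizerConfig {

    // Adjusts the auto-configured CqlSessionBuilder instead of replacing it,
    // so contact points, local DC, credentials, etc. from application.yml still apply.
    @Bean
    public CqlSessionBuilderCustomizer sessionBuilderCustomizer() {
        return (CqlSessionBuilder builder) -> builder.withApplicationName("my-app");
    }
}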
This analysis strongly suggests that your CqlSession is not being configured to use SSL. My understanding is that your testing sequence went as follows:
Tested app locally on a local server, everything worked
Tested app against test environment, observed the errors above
If this sequence is correct and you have SSL enabled in your test environment but not on your local Cassandra instance, that could very easily explain the behaviour you're describing.
This explanation could also explain the specific error you cite in the error message. "Lost connection to remote peer" indicates that something is unexpectedly killing your socket connection before any protocol messages are exchanged... and an SSL issue would cause almost exactly that behaviour.
I would recommend checking the SSL configuration for both servers involved in your testing. I would also suggest consulting the SSL-related documentation referenced by Erick above and confirm that you have all the relevant materials when building your CqlSession.
I added the certificate in my Spring application:
public CqlSession session() throws IOException, CertificateException, NoSuchAlgorithmException,
        KeyStoreException, KeyManagementException {
    // Load the CA certificate from the classpath
    Resource resource = new ClassPathResource("root.crt");
    InputStream inputStream = resource.getInputStream();
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    Certificate cert = cf.generateCertificate(inputStream);

    // Build a trust store containing that CA and derive an SSLContext from it
    TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
    keyStore.load(null);
    keyStore.setCertificateEntry("ca", cert);
    trustManagerFactory.init(keyStore);
    SSLContext sslContext = SSLContext.getInstance("TLSv1.3");
    sslContext.init(null, trustManagerFactory.getTrustManagers(), null);

    // Pass the SSLContext to the driver along with the usual connection settings
    return CqlSession.builder()
            .withSslContext(sslContext)
            .addContactPoint(new InetSocketAddress(contactPoints, port))
            .withAuthCredentials(username, password)
            .withLocalDatacenter(localDatacenter)
            .withKeyspace(keyspace)
            .build();
}
So adding the cert file to the CqlSession builder configuration helped me connect to the remote Cassandra cluster.

Apache spark error while running with pyspark in a cluster [duplicate]

Here is what I am trying to do.
I have created a two-node DataStax Enterprise cluster, on top of which I have written a Java program to get the count of one table (a Cassandra database table).
This program was built in Eclipse, on a Windows box.
At the time of running this program from windows it's failing with the following error at runtime:
Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
The same code has been compiled and run successfully on the cluster nodes without any issue. What could be the reason I am getting the above error?
Code:
import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SchemaRDD;
import org.apache.spark.sql.cassandra.CassandraSQLContext;
import com.datastax.bdp.spark.DseSparkConfHelper;

public class SparkProject {
    public static void main(String[] args) {
        SparkConf conf = DseSparkConfHelper.enrichSparkConf(new SparkConf())
                .setMaster("spark://10.63.24.14X:7077")
                .setAppName("DatastaxTests")
                .set("spark.cassandra.connection.host", "10.63.24.14x")
                .set("spark.executor.memory", "2048m")
                .set("spark.driver.memory", "1024m")
                .set("spark.local.ip", "10.63.24.14X");
        JavaSparkContext sc = new JavaSparkContext(conf);
        CassandraSQLContext cassandraContext = new CassandraSQLContext(sc.sc());
        SchemaRDD employees = cassandraContext.sql("SELECT * FROM portware_ants.orders");
        //employees.registerTempTable("employees");
        //SchemaRDD managers = cassandraContext.sql("SELECT symbol FROM employees");
        System.out.println(employees.count());
        sc.stop();
    }
}
I faced a similar issue and, after some online research and trial-and-error, narrowed it down to 3 causes (except for the first, the other two are not even close to the error message):
As indicated by the error, you are probably requesting more resources than are available. => This was not my issue
Hostname & IP address mishaps: I took care of this by specifying SPARK_MASTER_IP and SPARK_LOCAL_IP in spark-env.sh (see the sketch after this list)
Disable firewall on the client: this was the solution that worked for me. Since I was working on in-house prototype code, I disabled the firewall on the client node. For some reason the worker nodes were not able to talk back to the client. For production purposes, you would want to open up only the required ports.
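For the second item, the relevant spark-env.sh entries would look something like the lines below; both addresses are placeholders and should be interfaces the other nodes can actually reach:
# spark-env.sh - placeholder addresses
SPARK_MASTER_IP=<master_address>
SPARK_LOCAL_IP=<this_nodes_address>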
My problem was that I was requesting more memory than my workers had available. Try reducing the memory size in the spark-submit command, something like the following:
~/spark-1.5.0/bin/spark-submit --master spark://my-pc:7077 --total-executor-cores 2 --executor-memory 512m
with my ~/spark-1.5.0/conf/spark-env.sh being:
SPARK_WORKER_INSTANCES=4
SPARK_WORKER_MEMORY=1000m
SPARK_WORKER_CORES=2
Please look at Russ's post
Specifically this section:
This is by far the most common first error that a new Spark user will
see when attempting to run a new application. Our new and excited
Spark user will attempt to start the shell or run their own
application and be met with the following message
...
The short term solution to this problem is to make sure you aren’t
requesting more resources from your cluster than exist or to shut down
any apps that are unnecessarily using resources. If you need to run
multiple Spark apps simultaneously then you’ll need to adjust the
amount of cores being used by each app.
In my case, the problem was that I had the following line in $SPARK_HOME/conf/spark-env.sh on each worker:
SPARK_EXECUTOR_MEMORY=3g
and the following line in $SPARK_HOME/conf/spark-defaults.conf on the "master" node:
spark.executor.memory 4g
The problem went away once I changed 4g to 3g. I hope that this will help someone with the same issue. The other answers helped me spot this.
I have faced this issue a few times even though the resource allocation was correct.
The fix was to restart the mesos services.
sudo service mesos-slave restart
sudo service mesos-master restart

How to set DcInferringLoadBalancingPolicy in CqlSessionBuilder programmatically

I am using the 4.4.0 datastax-java-driver. In my scenario, I have to provide contact points as I am connecting to a remote cluster. If I do so, I get the following error - Since you provided explicit contact points, the local DC must be explicitly set.
I also don't have the option to provide this explicitly, as I am connecting to different clusters on demand which can be in different data centres. I have found the option DcInferringLoadBalancingPolicy to infer the data centre, but I am not sure how to set it in CqlSessionBuilder. Please help me with this.
You need to be very careful with it - it's mostly for people who are building tools, such as IDEs, etc. For applications it's better to pass the datacenter name explicitly - either via a config file, or via a Java system property.
In short it could be done as following:
ProgrammaticDriverConfigLoaderBuilder configBuilder =
        DriverConfigLoader.programmaticBuilder();
configBuilder.withClass(DefaultDriverOption.LOAD_BALANCING_POLICY_CLASS,
        DcInferringLoadBalancingPolicy.class);
DriverConfigLoader loader = configBuilder.endProfile().build();

CqlSessionBuilder clusterBuilder = CqlSession.builder()
        .addContactPoints(hosts);
CqlSession session = clusterBuilder.withConfigLoader(loader).build();
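For completeness, if you do end up being able to pass the datacenter explicitly as recommended above, it only takes a single config line or a JVM flag; the value DC1 below is a placeholder:
# in the driver's application.conf
datastax-java-driver.basic.load-balancing-policy.local-datacenter = DC1

# or as a system property on the JVM command line
-Ddatastax-java-driver.basic.load-balancing-policy.local-datacenter=DC1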

UnknownHostException when connecting to Azure Cosmos DB using Cassandra

I followed the official instructions from the Azure Portal. This is my
config.properties:
cassandra_host="demodemodemo.cassandra.cosmosdb.azure.com"
cassandra_username="demo"
cassandra_password="aHaplLoWhRlysBrtJWiOwB79TkqSU9PjKLu5wDeltLqys5NpR9vmtHCJrTF4ScdY69yNSWUvTUphax8RijydTA=="
cassandra_port=10350
ssl_keystore_file_path=
ssl_keystore_password=
Then it throws java.lang.IllegalArgumentException: Failed to add contact point and Caused by: java.net.UnknownHostException: "demodemodemo.cassandra.cosmosdb.azure.com" at this point:
[ CassandraUtils class, getSession() method ]
cluster = Cluster.builder()
.addContactPoint(cassandraHost)
You need to remove the double quotes from the settings.
If your credentials are correct, this should work.
cassandra_host=demodemodemo.cassandra.cosmosdb.azure.com
cassandra_username=demo
cassandra_password=aHaplLoWhRlysBrtJWiOwB79TkqSU9PjKLu5wDeltLqys5NpR9vmtHCJrTF4ScdY69yNSWUvTUphax8RijydTA==
cassandra_port=10350
Also, by default the username is the same as the first part of the host, so in your case demodemodemo, unless you changed it.
I had a similar problem. My corporate on-prem environment is behind a proxy. Since I was using Cassandra, I could not set up an HTTP proxy (it has its own protocol). The solution might be to use Azure Private Link. An example tutorial on how to do it is here: https://learn.microsoft.com/en-us/azure/cosmos-db/how-to-configure-private-endpoints

TeamCity Build Agent name is not picked up from config file

I have recently set up a VM on Azure to use as my build agent.
When the agent is started, its name is calculated based on the Azure instance name (_myservername), and the name I provide in the buildAgent.properties file is ignored completely.
This is particularly problematic when I have a second agent and the same name is chosen, which results in a name conflict.
Looking at teamcity-agent.log I can see the following lines:
[2016-07-14 15:33:04,745] WARN - ds.azure.AzurePropertiesReader - Unable to set self port. Azure integration will experience problems
[2016-07-14 15:33:04,745] INFO - ds.azure.AzurePropertiesReader - Added alternative address is set to
[2016-07-14 15:33:04,745] INFO - ds.azure.AzurePropertiesReader - Instance name and agent name are set to _myservername
...
The questions are:
Why is the name I provide via the config file not taking precedence over any other place it reads the name from? Should it?
How can I force a name on it?
OK, I came to find the answer to this and will share it here in case it is useful for the humans of the future!
The issue was caused by the azure-plugin, which was setting a configuration parameter on the agent called instance name.
https://github.com/JetBrains/teamcity-azure-plugin/issues/17
The issue is fixed in the latest version of the plugin so upgrading it solved my problem. :)
