Lagom external Cassandra authentication - cassandra

I have been trying to set up an external Cassandra for my Lagom setup.
In the root pom I have written:
<configuration>
  <unmanagedServices>
    <cas_native>http://ip:9042</cas_native>
  </unmanagedServices>
  <cassandraEnabled>false</cassandraEnabled>
</configuration>
In my impl module's application.conf:
akka {
  persistent {
    journal {
      akka.persistence.journal.plugin = "this-cassandra-journal"
      this-cassandra-journal {
        contact-points = ["10.15.2.179"]
        port = 9042
        cluster-id = "cas_native"
        keyspace = "hello"
        authentication.username = "cassandra"
        authentication.password = "rodney"
        # Parameter indicating whether the journal keyspace should be auto created
        keyspace-autocreate = true
        # Parameter indicating whether the journal tables should be auto created
        tables-autocreate = true
      }
    }
    snapshot-store {
      akka.persistence.snapshot-store.plugin = "this-cassandra-snapshot-store"
      this-cassandra-snapshot-store {
        contact-points = ["10.15.2.179"]
        port = 9042
        cluster-id = "cas_native"
        keyspace = "hello_snap"
        authentication.username = "cassandra"
        authentication.password = "rodney"
        # Parameter indicating whether the journal keyspace should be auto created
        keyspace-autocreate = true
        # Parameter indicating whether the journal tables should be auto created
        tables-autocreate = true
      }
    }
  }
}
But I get the error
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
[warn] a.p.c.s.CassandraSnapshotStore - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
[error] a.c.s.PersistentShardCoordinator - Persistence failure when replaying events for persistenceId [/sharding/ProductCoordinator]. Last known sequence number [0]
com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
        at com.datastax.driver.core.AuthProvider$1.newAuthenticator(AuthProvider.java:40)
        at com.datastax.driver.core.Connection$5.apply(Connection.java:250)
        at com.datastax.driver.core.Connection$5.apply(Connection.java:234)
        at com.google.common.util.concurrent.Futures$AsyncChainingFuture.doTransform(Futures.java:1442)
        at com.google.common.util.concurrent.Futures$AsyncChainingFuture.doTransform(Futures.java:1433)
        at com.google.common.util.concurrent.Futures$AbstractChainingFuture.run(Futures.java:1408)
        at com.google.common.util.concurrent.Futures$2$1.run(Futures.java:1177)
        at com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
        at com.google.common.util.concurrent.Futures$2.execute(Futures.java:1174)
I also tried providing this config
lagom.persistence.read-side {
  cassandra {
  }
}
How can I make this work by providing credentials for Cassandra?

In Lagom, you can simply use the akka-persistence-cassandra settings for your journal and snapshot-store (see reference.conf in the source code, and scroll down to cassandra-snapshot-store.authentication.*). There's no need to configure the plugin names yourself, because Lagom's support for Cassandra persistence already declares akka-persistence-cassandra as the Akka Persistence implementation:
akka.persistence.journal.plugin = cassandra-journal
akka.persistence.snapshot-store.plugin = cassandra-snapshot-store
See https://github.com/lagom/lagom/blob/c63383c343b02bd0c267ff176bfb4e48c7202d7d/persistence-cassandra/core/src/main/resources/play/reference-overrides.conf#L5-L6
The third and final piece to configure when connecting Lagom to Cassandra is Lagom's read-side. That is also doable via application.conf if you need to override the defaults.
Note how each storage may use a different Cassandra Ring/Keyspace/credentials/... so you can tune them separately.
See extra info in the Lagom docs.
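For reference, here is a minimal application.conf sketch that supplies credentials through those akka-persistence-cassandra settings. The key names come from the 0.x reference.conf that Lagom 1.x pulls in (linked above); double-check them against the reference.conf for your version:

cassandra-journal {
  contact-points = ["10.15.2.179"]
  port = 9042
  keyspace = "hello"
  authentication {
    username = "cassandra"
    password = "rodney"
  }
}

cassandra-snapshot-store {
  contact-points = ["10.15.2.179"]
  port = 9042
  keyspace = "hello_snap"
  authentication {
    username = "cassandra"
    password = "rodney"
  }
}

The read side is configured separately (under lagom.persistence.read-side.cassandra); see Lagom's reference.conf and docs for the exact keys it accepts.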

Related

Spring Boot app can't connect to Cassandra cluster, driver returning "AllNodesFailedException: Could not reach any contact point"

I've updated my spring-boot to v3.0.0 and spring-data-cassandra to v4.0.0, which has left me unable to connect to the Cassandra cluster deployed in the stg env. It runs on an IPv6 address and uses a different datacenter rather than DC1.
I've added a config class which sets the local DC programmatically:
@Bean(destroyMethod = "close")
public CqlSession session() {
    CqlSession session = CqlSession.builder()
        .addContactPoint(InetSocketAddress.createUnresolved("[240b:c0e0:1xx:xxx8:xxxx:x:x:x]", port))
        .withConfigLoader(
            DriverConfigLoader.programmaticBuilder()
                .withString(DefaultDriverOption.LOAD_BALANCING_LOCAL_DATACENTER, localDatacenter)
                .withString(DefaultDriverOption.AUTH_PROVIDER_PASSWORD, password)
                .withString(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, "10s")
                .withString(DefaultDriverOption.CONNECTION_CONNECT_TIMEOUT, "20s")
                .withString(DefaultDriverOption.REQUEST_TIMEOUT, "20s")
                .withString(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, "20s")
                .withString(DefaultDriverOption.SESSION_KEYSPACE, keyspace)
                .build())
        // .addContactPoint(InetSocketAddress.createUnresolved(InetAddress.getByName(contactPoints).getHostName(), port))
        .build();
    return session;
}
and this is my application.yml file
spring:
  data:
    cassandra:
      keyspace-name: xxx
      contact-points: [xxxx:xxxx:xxxx:xxx:xxx:xxx]
      port: xxx
      local-datacenter: xxxx
      use-dc-aware: true
      username: xxxxx
      password: xxxxx
      ssl: true
      SchemaAction: CREATE_IF_NOT_EXISTS
So locally I was able to connect to Cassandra (by default it points to localhost), but in the stg env my application is not able to connect to that cluster.
Logs in my stg env:
Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx, hostId=null, hashCode=4e9ba6a8): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [s0|control|id: 0x984419ed, L:/[240b:c0e0:102:5dd7:xxxx:x:x:xxx]:4xxx - R:/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx] Protocol initialization request, step 1 (OPTIONS): unexpected failure (com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Lost connection to remote peer)]
Network
You appear to have a networking issue. The driver can't connect to any of the nodes because they are unreachable from a network perspective as it states in the error message:
... AllNodesFailedException: Could not reach any contact point ...
You need to check that:
you have configured the correct IP addresses,
you have configured the correct CQL port, and
there is network connectivity between your app and the cluster.
Security
I also noted that you configured the driver to use SSL:
ssl: true
but I don't see where you've configured the certificate credentials, which could explain why the driver can't initiate connections.
Check that the cluster has client-to-node encryption enabled. If it does then you need to prepare the client certificates and configure SSL on the driver.
Driver build
This post appears to be a duplicate of another question you posted, which is now closed due to lack of clarity and details.
In that question it appears you are running a version of the Java driver not produced by DataStax, as pointed out by @absurdface:
Specifically I note that java-driver-core-4.11.4-yb-1-RC1.jar isn't a Java driver artifact released by DataStax (there isn't even a 4.11.4 Java driver release). This could be relevant for reasons we'll get into ...
We are not aware of where this build came from and without knowing much about it, it could be the reason you are not able to connect to the cluster.
We recommend that you switch to one of the supported builds of the Java driver. Cheers!
A hearty +1 to everything @erick-ramirez mentioned above. I would also expand on his answers with an observation or two.
Normally spring-data-cassandra is used to automatically configure a CqlSession and make it available for injection (or for use in CqlTemplate etc.). That's what you'd normally be configuring with your application.yml file. But you're apparently creating the CqlSession directly in code, which means that spring-data-cassandra isn't involved... and therefore what's in your application.yml likely isn't being used.
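For illustration, here's a minimal sketch of the auto-configured path, assuming spring-boot-starter-data-cassandra is on the classpath and the Cassandra properties in application.yml are the ones your Boot version expects (the class name and query here are just hypothetical examples):

import com.datastax.oss.driver.api.core.CqlSession;
import org.springframework.stereotype.Service;

// Hypothetical service: no @Bean CqlSession is declared anywhere in the app.
// Spring Boot builds the session from the Cassandra properties in application.yml
// (contact points, local DC, credentials, SSL) and injects it here.
@Service
public class ClusterCheck {

    private final CqlSession session;

    public ClusterCheck(CqlSession session) {
        this.session = session; // constructor injection of the auto-configured session
    }

    public String clusterName() {
        // simple sanity query against a system table
        return session.execute("SELECT cluster_name FROM system.local")
                      .one()
                      .getString("cluster_name");
    }
}

With that approach the SSL and credential settings live in application.yml rather than in code; if you keep building the CqlSession yourself, the yml settings are effectively ignored.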
This analysis strongly suggests that your CqlSession is not being configured to use SSL. My understanding is that your testing sequence went as follows:
Tested app locally on a local server, everything worked
Tested app against test environment, observed the errors above
If this sequence is correct and you have SSL enabled in your test environment but not on your local Cassandra instance, that could very easily explain the behaviour you're describing.
It would also account for the specific error in your logs. "Lost connection to remote peer" indicates that something is unexpectedly killing your socket connection before any protocol messages are exchanged... and an SSL issue would cause almost exactly that behaviour.
I would recommend checking the SSL configuration for both servers involved in your testing. I would also suggest consulting the SSL-related documentation referenced by Erick above and confirm that you have all the relevant materials when building your CqlSession.
I added the certificate in my Spring application:
public CqlSession session() throws IOException, CertificateException, NoSuchAlgorithmException, KeyStoreException, KeyManagementException {
    // Load the cluster's CA certificate from the classpath
    Resource resource = new ClassPathResource("root.crt");
    InputStream inputStream = resource.getInputStream();
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    Certificate cert = cf.generateCertificate(inputStream);

    // Build a trust store containing that CA and derive an SSLContext from it
    TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
    keyStore.load(null);
    keyStore.setCertificateEntry("ca", cert);
    trustManagerFactory.init(keyStore);
    SSLContext sslContext = SSLContext.getInstance("TLSv1.3");
    sslContext.init(null, trustManagerFactory.getTrustManagers(), null);

    // Pass the SSLContext to the driver along with the connection settings
    return CqlSession.builder()
        .withSslContext(sslContext)
        .addContactPoint(new InetSocketAddress(contactPoints, port))
        .withAuthCredentials(username, password)
        .withLocalDatacenter(localDatacenter)
        .withKeyspace(keyspace)
        .build();
}
Adding the cert file to the CqlSession builder configuration is what let me connect to the remote Cassandra cluster.

How to connect KairosDB with Azure Cosmos (Cassandra)

I use KairosDB on top of Cassandra to store all our time series data. I am now trying to replicate the same KairosDB setup with Azure Cosmos DB (Cassandra API), but it's throwing this error:
16:59:08.364 [main] INFO [LZ4Compressor.java:52] - Using LZ4Factory:JNI
16:59:08.441 [main] INFO [NettyUtil.java:73] - Did not find Netty's native epoll transport in the classpath, defaulting to NIO.
16:59:08.842 [main] ERROR [CassandraModule.java:136] - Unable to setup cassandra schema
com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host ilenstsdb2.cassandra.cosmos.azure.com/40.65.106.154:10350: Cql request had unsupported headers Compression
at com.datastax.driver.core.Connection$8.apply(Connection.java:392)
at com.datastax.driver.core.Connection$8.apply(Connection.java:361)
I don't have a huge amount of exposure to the Cassandra API, but the error indicates that your request contained the Compression header while authenticating and the server didn't like it.
Make sure you're not using withCompression() when building the DataStax driver Cluster:
cluster = Cluster.builder()
        .addContactPoint("X.X.X.X")
        .withCompression(ProtocolOptions.Compression.LZ4)
        .build();
...should be...
cluster = Cluster.builder()
        .addContactPoint("X.X.X.X")
        .build();

How to configure multiple cassandra contact-points for lagom?

In Lagom it appears the contact points get loaded from the service locator, which accepts only a single URI. How can we specify multiple Cassandra contact points?
lagom.services {
  cas_native = "tcp://10.0.0.120:9042"
}
I have tried setting just the contact points in the akka persistence config but that doesn't seem to override the service locator config.
All that I was missing was the session provider, to override the service-locator lookup:
session-provider = akka.persistence.cassandra.ConfigSessionProvider
contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
These lines were needed in the Lagom Cassandra config.
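For context, a sketch of where those lines go, assuming the default cassandra-journal and cassandra-snapshot-store plugin sections used by Lagom with akka-persistence-cassandra 0.x (check the reference.conf for your version for the exact section names):

cassandra-journal {
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
  contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
  port = 9042
}

cassandra-snapshot-store {
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
  contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
  port = 9042
}

With ConfigSessionProvider the contact points come from this config instead of the service locator, so the single-URI cas_native entry no longer limits you.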

HikariCP poolName configuration for Slick 3.0

I am using the following Typesafe configuration in application.conf for Slick 3.0 (HikariCP is the default connection pool of Slick 3.0). I set the poolName to "primaryPool":
slick.dbs.primary = {
  driver = "com.typesafe.slick.driver.ms.SQLServerDriver$"
  db {
    url = "DB URL"
    driver = com.microsoft.sqlserver.jdbc.SQLServerDriver
    user = "myUser"
    password = "myPassword"
    poolName = "primaryPool"
  }
}
From the HikariCP log, I saw
Before cleanup pool stats db (total=21, inUse=0, avail=21, waiting=0)
The default connection pool name "db" is used, not the "primaryPool" I expected. I suspect the configuration format is not correct.
So my question is: how do I configure poolName in application.conf using Typesafe configuration?
Note: Because I will have several connection pools in my application, I want each pool's name logged so I can distinguish the different pools.
I found a workaround by setting poolName in my own code:
val dbConfig = dbConfigProvider.get[JdbcProfile]
val poolName = dbConfig.config.getConfig("db").getString("poolName")
dbConfig.db.source.asInstanceOf[HikariCPJdbcDataSource].ds.setPoolName(poolName)
It is not a good solution since it hard-codes HikariCPJdbcDataSource, but it meets my requirement for now.
I would still like to know how to configure poolName correctly in application.conf.

OpsCenter Community, keep data on different cluster

Trying to set up OpsCenter Community (free) to keep its data on a different cluster, but I'm getting this error:
WARN: Unable to find a matching cluster for node with IP [u'x.x.x.1']; the message was {u'os-load': 0.35}. This usually indicates that an OpsCenter agent is still running on an old node that was decommissioned or is part of a cluster that OpsCenter is no longer monitoring.
I get the same error for the second node in the cluster :(
But if I set [dse].enterprise_override = true in the cluster config, everything works fine.
My config is:
user#casnode1:~/opscenter/conf/clusters# cat ClusterTest.conf
[jmx]
username =
password =
port = 7199
[kerberos_client_principals]
[kerberos]
[agents]
[kerberos_hostnames]
[kerberos_services]
[storage_cassandra]
seed_hosts = x.x.x.2
api_port = 9160
connect_timeout = 6.0
bind_interface =
connection_pool_size = 5
username =
password =
send_thrift_rpc = True
keyspace = OpsCenter2
[cassandra]
username =
seed_hosts = x.x.x.1, x.x.x.4
api_port = 9160
password =
So, the question is: is it possible with OpsCenter Community to set up a different cluster to keep the OpsCenter data?
OpsCenter version is 4.0.3
Is it possible with OpsCenter Community to set up a different cluster to keep the OpsCenter data?
It is not. Storing data on a separate cluster is only supported on DataStax Enterprise clusters.
Note: Using the override you mentioned without permission from DataStax is a violation of the OpsCenter license agreement, and will not be supported.
