I am trying to start Cassandra with SSL. My cassandra.yaml file has:
server_encryption_options:
    internode_encryption: all
    keystore_password: changeme
    truststore_password: changeme
    truststore: /opt/certs/cassandra.truststore
    keystore: /opt/certs/cassandra.keystore
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
When I try to start Cassandra, I get this exception:
ERROR [main] 2014-06-12 22:29:18,844 CassandraDaemon.java (line 513) Exception encountered during startup
java.lang.RuntimeException: Unable to create thrift socket to /0.0.0.0:9160
at org.apache.cassandra.thrift.CustomTThreadPoolServer$Factory.buildTServer(CustomTThreadPoolServer.java:263)
at org.apache.cassandra.thrift.TServerCustomFactory.buildTServer(TServerCustomFactory.java:46)
at org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.<init>(ThriftServer.java:130)
at org.apache.cassandra.thrift.ThriftServer.start(ThriftServer.java:56)
at org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:449)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:509)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Caused by: org.apache.thrift.transport.TTransportException: Could not bind to port 9160
at org.apache.thrift.transport.TSSLTransportFactory.createServer(TSSLTransportFactory.java:117)
at org.apache.thrift.transport.TSSLTransportFactory.getServerSocket(TSSLTransportFactory.java:103)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$Factory.buildTServer(CustomTThreadPoolServer.java:253)
... 6 more
Caused by: java.lang.IllegalArgumentException: Cannot support TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA with currently installed providers
at sun.security.ssl.CipherSuiteList.<init>(CipherSuiteList.java:92)
at sun.security.ssl.SSLServerSocketImpl.setEnabledCipherSuites(SSLServerSocketImpl.java:191)
at org.apache.thrift.transport.TSSLTransportFactory.createServer(TSSLTransportFactory.java:113)
... 8 more
I am using OpenJDK
# rpm -qa|grep java
java-1.7.0-openjdk-1.7.0.55-2.4.7.1.el6_5.x86_64
I have copied the JCE policy jars to /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.55.x86_64/jre/lib/security.
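That is, roughly (the two policy jars come from the Oracle JCE download; the target path matches my OpenJDK install):
# copy the JCE unlimited-strength policy files into the JRE
cp local_policy.jar US_export_policy.jar /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.55.x86_64/jre/lib/security/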
Please help me understand what is going wrong here.
The DataStax documentation (http://www.datastax.com/documentation/datastax_enterprise/4.5/datastax_enterprise/install/installGUI.html) says "NOT OpenJDK" - you need the Oracle version.
Also, you need to provide the Oracle security jars if you're going to do client-to-node encryption.
https://serverfault.com/questions/534614/cannot-bind-to-port-enabling-cassandra-client-encryption
http://www.pathin.org/tutorials/java-cassandra-cannot-support-tls_rsa_with_aes_256_cbc_sha-with-currently-installed-providers/
I got the same error and this article helped me solve it.
Specifically this part:
I think you can get around it by overriding the cipher suites for both the node-to-node and client-to-node properties, e.g.
cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA]
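In cassandra.yaml terms, that would be something like this (a sketch reusing the question's paths and passwords; the client section only applies if you enable client-to-node encryption):
server_encryption_options:
    internode_encryption: all
    keystore: /opt/certs/cassandra.keystore
    keystore_password: changeme
    truststore: /opt/certs/cassandra.truststore
    truststore_password: changeme
    cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA]
client_encryption_options:
    enabled: true
    keystore: /opt/certs/cassandra.keystore
    keystore_password: changeme
    cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA]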
MuleSoft version: 4.3.0
AWS-RTF EKS
DB: AWS RDS (Aurora MySQL) 5.7
I am able to connect to the AWS DB from Anypoint Studio successfully, but unable to connect from the RTF EKS pod.
org.mule.runtime.api.connection.ConnectionException: Could not obtain connection from data source
Caused by: org.mule.db.commons.shaded.api.exception.connection.ConnectionCreationException: Could not obtain connection from data source
Caused by: org.mule.runtime.extension.api.exception.ModuleException: java.sql.SQLException: Cannot get connection for URL jdbc:mysql://<host>:3306/DBNAME?verifyServerCertificate=false&useSSL=true&requireSSL=true : Communications link failure
The last packet successfully received from the server was 99 milliseconds ago. The last packet sent successfully to the server was 94 milliseconds ago.
Caused by: java.sql.SQLException: Cannot get connection for URL jdbc:mysql://<host>:3306/DBNAME?verifyServerCertificate=false&useSSL=true&requireSSL=true : Communications link failure
I'm able to access the DB from EKS by creating a plain pod with --image=mysql:5.7, but not from the Mule app.
Use cases tried:
1. verifyServerCertificate=false&useSSL=true&requireSSL=true
2. verifyServerCertificate=true&useSSL=true&requireSSL=true (passing the truststore in Java arguments)
-Djavax.net.ssl.trustStore=/opt/mule/apps/test-rds/mySqlKeyStore.jks
-Djavax.net.ssl.trustStoreType=JKS
-Djavax.net.ssl.trustStorePassword=xxxxxx
(I generated the JKS file from the .pem bundle using the commands below.)
openssl x509 -outform der -in us-west-2-bundle.pem -out us-west-2-bundle.der
keytool -import -alias mysql -keystore mySqlKeyStore -file us-west-2-bundle.der
What else am I missing here? Please help.
I was able to resolve this.
By adding the JVM argument -M-Djavax.net.debug=ssl, I came to know that it was something related to the SSL handshake.
It gave debug logs like this:
javax.net.ssl|SEVERE|43|[MuleRuntime].uber.03: [test-rds].uber#org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationInstance.testConnectivity:179 #3781e9a3|2021-12-23 09:55:53.715 PST|TransportContext.java:316|Fatal (HANDSHAKE_FAILURE): Couldn't kickstart handshaking (
"throwable" : {
javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
After going through this question, it was clear that I needed to pass enabledTLSProtocols=TLSv1.2:
Why can Java not connect to MySQL 5.7 after the latest JDK update and how should it be fixed? (ssl.SSLHandshakeException: No appropriate protocol)
So here are the properties that I passed in the DB config:
<db:connection-properties>
<db:connection-property key="verifyServerCertificate" value="false" />
<db:connection-property key="useSSL" value="true" />
<db:connection-property key="requireSSL" value="true" />
<db:connection-property key="enabledTLSProtocols" value="TLSv1.2" />
</db:connection-properties>
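Alternatively, the same flag can be appended directly to the JDBC URL instead of using connection properties (a sketch with the same placeholders as above):
jdbc:mysql://<host>:3306/DBNAME?verifyServerCertificate=false&useSSL=true&requireSSL=true&enabledTLSProtocols=TLSv1.2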
Even after adding the enabledTLSProtocols flag, if you are still getting the error, make sure the DB version is correct (I had a discrepancy between non-prod and prod):
Non-prod: MySQL 5.7 worked fine.
Prod: MySQL 5.6 didn't work even with enabledTLSProtocols; I had to update the DB to 5.7 to make it work.
Thank you, I hope this helps someone.
I'm trying to connect Spark to my Elasticsearch cluster with SSL.
Setup
Spark 2.4.0 from CDH 6.3.2 (Cloudera)
ElasticSearch 7.6.1 (Open Distro)
elasticsearch-hadoop-7.6.1.jar
Considering:
1) I already managed to authenticate Logstash with SSL and a manually created PKCS12 keystore
2) Connecting Spark to ES works without security
Here is the Spark conf provided:
spark.es.nodes=node1
spark.es.port=9200
spark.es.net.ssl=true
spark.es.net.ssl.keystore.location= ===> see below for the values I tried
spark.es.net.ssl.keystore.type=PKCS12
spark.es.net.ssl.cert.allow.self.signed=true
spark.es.net.http.auth.user=admin
spark.es.net.http.auth.pass=admin
spark.es.nodes.wan.only=false //tried true
Doing
spark.read.format("org.elasticsearch.spark.sql")
.option("es.query", "?q=*:*")
.load("spark/docs")
.show
====================================================
Filesystem values tried with spark.es.net.ssl.keystore.location (after copying admin.pkcs12 to all nodes):
file:///PATH/certs/admin.pkcs12
Error:
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
... elided
Caused by: org.elasticsearch.hadoop.EsHadoopIllegalStateException: Cannot initialize SSL - Get Key failed: null
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.createSSLContext(SSLSocketFactory.java:175)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.getSSLContext(SSLSocketFactory.java:160)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.createSocket(SSLSocketFactory.java:129)
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport.doExecute(CommonsHttpTransport.java:685)
at org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport.execute(CommonsHttpTransport.java:664)
at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:116)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:432)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:388)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:392)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:168)
at org.elasticsearch.hadoop.rest.RestClient.mainInfo(RestClient.java:745)
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverClusterInfo(InitializationUtils.java:330)
... 61 more
Caused by: java.security.UnrecoverableKeyException: Get Key failed: null
at sun.security.pkcs12.PKCS12KeyStore.engineGetKey(PKCS12KeyStore.java:435)
at java.security.KeyStore.getKey(KeyStore.java:1023)
at sun.security.ssl.SunX509KeyManagerImpl.<init>(SunX509KeyManagerImpl.java:133)
at sun.security.ssl.KeyManagerFactoryImpl$SunX509.engineInit(KeyManagerFactoryImpl.java:70)
at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:256)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.loadKeyManagers(SSLSocketFactory.java:217)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.createSSLContext(SSLSocketFactory.java:173)
... 78 more
Caused by: java.lang.NullPointerException
at sun.security.pkcs12.PKCS12KeyStore.engineGetKey(PKCS12KeyStore.java:374)
... 84 more
====================================================
I copied a valid keystore, admin.pkcs12, to HDFS => /user/company/ with 777 permissions (as I write this: is that too permissive, as with ssh?).
//returns true
FileSystem.get(spark.sparkContext.hadoopConfiguration).exists(new Path("hdfs://namenode:8020/user/company/admin.pkcs12"))
HDFS values tried with spark.es.net.ssl.keystore.location:
hdfs:///namenode:8020/user/company/admin.pkcs12
hdfs://namenode:8020/user/company/admin.pkcs12
/user/company/admin.pkcs12
Error:
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
... elided
Caused by: org.elasticsearch.hadoop.EsHadoopIllegalStateException: Cannot initialize SSL - Expected to find keystore file at [...] but was unable to. Make sure that it is available on the classpath, or if not, that you have specified a valid URI.
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.createSSLContext(SSLSocketFactory.java:175)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.getSSLContext(SSLSocketFactory.java:160)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.createSocket(SSLSocketFactory.java:129)
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport.doExecute(CommonsHttpTransport.java:685)
at org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport.execute(CommonsHttpTransport.java:664)
at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:116)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:432)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:388)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:392)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:168)
at org.elasticsearch.hadoop.rest.RestClient.mainInfo(RestClient.java:745)
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverClusterInfo(InitializationUtils.java:330)
... 61 more
Caused by: org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Expected to find keystore file at [...] but was unable to. Make sure that it is available on the classpath, or if not, that you have specified a valid URI.
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.loadKeyStore(SSLSocketFactory.java:195)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.loadKeyManagers(SSLSocketFactory.java:215)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.createSSLContext(SSLSocketFactory.java:173)
I tried JKS too.
What am I missing?
//Works
file:///PATH/certs/admin.pkcs12
I was getting this error because the keystore password was missing:
spark.es.net.ssl.keystore.pass=PASSWORD
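Putting it together, the SSL-related part of the working conf looked like this (password and path are placeholders):
spark.es.net.ssl=true
spark.es.net.ssl.keystore.location=file:///PATH/certs/admin.pkcs12
spark.es.net.ssl.keystore.type=PKCS12
spark.es.net.ssl.keystore.pass=PASSWORD
spark.es.net.ssl.cert.allow.self.signed=true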
I have installed a DSE 6.0 Cassandra cluster using LCM (OpsCenter 6.5), and the node is up and running. During the LCM cluster install, it installed the DataStax agent as well.
But the agent is not connecting to DSE, and OpsCenter is not showing any details about the node. Later I tried a tarball install of the DataStax agent, but that shows the same issue. Please see the agent.log excerpt below.
WARN [async-dispatch-2] 2018-07-24 09:23:19,915 JMX marked as down, restarting JMX components.
ERROR [async-dispatch-2] 2018-07-24 09:23:19,916 Error starting DynamicEnvrionmentComponent.
java.io.IOException: Process failed: bash -c /tmp/opsc_3882111672138551416/dense.sh
Exit val: 126
Output:
bash: /tmp/opsc_3882111672138551416/dense.sh: Permission denied
at opsagent.proc$handle_proc_results.invokeStatic(proc.clj:61)
at opsagent.proc$handle_proc_results.invoke(proc.clj:51)
at opsagent.proc$run_proc.invokeStatic(proc.clj:84)
at opsagent.proc$run_proc.doInvoke(proc.clj:65)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at opsagent.environment.utils$package_config_paths.invokeStatic(utils.clj:161)
at opsagent.environment.utils$package_config_paths.invoke(utils.clj:141)
at opsagent.environment.utils$all_config_paths.invokeStatic(utils.clj:197)
at opsagent.environment.utils$all_config_paths.doInvoke(utils.clj:190)
at clojure.lang.RestFn.invoke(RestFn.java:805)
at opsagent.environment.dynamic$dynamic_env_state.invokeStatic(dynamic.clj:162)
at opsagent.environment.dynamic$dynamic_env_state.invoke(dynamic.clj:148)
at clojure.lang.AFn.applyToHelper(AFn.java:171)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invokeStatic(core.clj:652)
at clojure.core$partial$fn__4765.doInvoke(core.clj:2534)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at opsagent.jmx$create_jmx_pool_with_config$wrapper__5941.doInvoke(jmx.clj:239)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at opsagent.environment.dynamic$add_dynamic_state.invokeStatic(dynamic.clj:276)
at opsagent.environment.dynamic$add_dynamic_state.invoke(dynamic.clj:264)
at opsagent.environment.dynamic.DynamicEnvironmentComponent.start(dynamic.clj:299)
at com.stuartsierra.component$fn__2593$G__2587__2595.invoke(component.clj:4)
at com.stuartsierra.component$fn__2593$G__2586__2598.invoke(component.clj:4)
at clojure.lang.Var.invoke(Var.java:379)
at clojure.lang.AFn.applyToHelper(AFn.java:154)
at clojure.lang.Var.applyTo(Var.java:700)
at clojure.core$apply.invokeStatic(core.clj:648)
at clojure.core$apply.invoke(core.clj:641)
at com.stuartsierra.component$try_action.invokeStatic(component.clj:116)
at com.stuartsierra.component$try_action.invoke(component.clj:115)
at clojure.lang.Var.invoke(Var.java:401)
at opsagent.config_service$update_system$fn__22445.invoke(config_service.clj:223)
at clojure.lang.ArraySeq.reduce(ArraySeq.java:114)
at clojure.core$reduce.invokeStatic(core.clj:6544)
at clojure.core$reduce.invoke(core.clj:6527)
at opsagent.config_service$update_system.invokeStatic(config_service.clj:217)
at opsagent.config_service$update_system.doInvoke(config_service.clj:213)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at opsagent.config_service$start_system_BANG_.invokeStatic(config_service.clj:243)
at opsagent.config_service$start_system_BANG_.invoke(config_service.clj:236)
at opsagent.config_service$fn__22551$fn__22552$state_machine__4942__auto____22553$fn__22555.invoke(config_service.clj:266)
at opsagent.config_service$fn__22551$fn__22552$state_machine__4942__auto____22553.invoke(config_service.clj:266)
at clojure.core.async.impl.ioc_macros$run_state_machine.invokeStatic(ioc_macros.clj:973)
at clojure.core.async.impl.ioc_macros$run_state_machine.invoke(ioc_macros.clj:972)
at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invokeStatic(ioc_macros.clj:977)
at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invoke(ioc_macros.clj:975)
at clojure.core.async.impl.ioc_macros$take_BANG_$fn__4958.invoke(ioc_macros.clj:986)
at clojure.core.async.impl.channels.ManyToManyChannel$fn__707$fn__708.invoke(channels.clj:95)
at clojure.lang.AFn.run(AFn.java:22)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
INFO [async-dispatch-2] 2018-07-24 09:23:19,917 Starting JMXComponent
Please note "/tmp/opsc_3882111672138551416/dense.sh: Permission denied" in your logs.
You probably don't have permission to execute anything under /tmp/.
You can try to fix the permissions, or reconfigure your temporary directory with -Djava.io.tmpdir in datastax-agent-env.sh:
JVM_OPTS="$JVM_OPTS -Xmx128M -Djava.io.tmpdir=/other/temp/directory"
You can find it here: /usr/share/datastax-agent/bin/
In version 6 there is a datastax-agent file instead of datastax-agent-env.sh.
Remember to add this line at the beginning of the datastax-agent file.
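For example (the directory is just an illustration; use any path the agent user can write to and execute from):
JVM_OPTS="$JVM_OPTS -Djava.io.tmpdir=/var/lib/datastax-agent/tmp"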
Just installed 64-bit DevCenter 1.5.0 on my Windows 7 x64 PC. Now I am trying to connect DevCenter to our Apache Cassandra installation on a remote server, and I'm getting an error when I try. The message from the error log is:
!ENTRY Connection # open 4 0 2016-05-19 08:03:02.529
!MESSAGE com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: host.domain.com/###.##.##.###:9042 (com.datastax.driver.core.TransportException: [host.domain.com/###.##.##.###:9042] Cannot connect))
I can, however, connect from the command line using cqlsh, which shows:
cqlsh 5.0.1 | Cassandra 2.1.14 | CQL spec 3.2.1 | Native protocol v3.
In my cqlshrc file I have to specify [cql] version = 3.2.1 and the certificate to make the connection work.
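The relevant parts of my cqlshrc look roughly like this (host and certificate path are placeholders):
[cql]
version = 3.2.1
[connection]
hostname = host.domain.com
port = 9042
[ssl]
certfile = /path/to/cassandra.pem
validate = false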
From what I have read, to make DevCenter connect I had to do a couple of things:
1) Create a truststore for the certificate and reference it in the connection properties with the password - that seems to be OK, I believe (info from the DataStax site); see the keytool sketch after this list.
2) Change the cassandra.yaml file in C:\Program Files\DataStax-DDC\apache-cassandra\conf: change rpc_address to host.domain.com and verify start_native_transport: true (Stack Overflow).
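For step 1, I built the truststore along these lines (alias and file names are placeholders):
keytool -import -v -trustcacerts -alias cassandra -file cassandra.cer -keystore devcenter.truststore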
I am not sure what I am missing to successfully connect DataStax DevCenter on my Windows 7 machine to the Apache Cassandra installation on the remote server (host.domain.com).
I'm adding a second node to a single-node Cassandra cluster, and getting a stack trace on the second node:
ERROR 18:13:42,841 Exception encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds
at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1193)
at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:446)
at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:655)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:504)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Exception encountered during startup: Unable to gossip with any seeds
ERROR 18:13:42,885 Exception in thread Thread[StorageServiceShutdownHook,5,main]
java.lang.NullPointerException
at org.apache.cassandra.gms.Gossiper.stop(Gossiper.java:1270)
at org.apache.cassandra.service.StorageService$1.runMayThrow(StorageService.java:572)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:744)
There are other SO questions with this same issue, but none of the answers have worked for me:
Apache Cassandra: Unable to gossip with any seeds
new cassandra node can't gossip with seed
Datastax Enterprise is crashing with Unable to gossip with any seeds error
I'm running Cassandra 2.0.8 and JDK 1.7.0_51 on both nodes. One node is hosted at DigitalOcean, the other at Linode. I've tried configuring them as the same datacenter and as different datacenters in cassandra-rackdc.properties; it makes no difference. I've tried listen_address and broadcast_address blank and hardcoded; that makes no difference either. I did limit the list of cipher suites to stop a flood of log messages about missing cipher suites. From the stock cassandra.yaml, I've changed the following entries, excluding entries related to concurrent writes and compaction. For the sake of this question, wherever there's a hardcoded IP address in the config I've replaced it with a placeholder. Each box has a firewall, but I've tried it with the firewalls disabled. I've also tried it with internode_encryption: none and the result is the same. I've used telnet and netcat to confirm that each host can connect to the other's ports 7000 and 7001.
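For example, the reachability checks were along these lines (same <host1> placeholder as in the configs below):
nc -vz <host1> 7000
nc -vz <host1> 7001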
on the original host:
- seeds: "<host1>"
listen_address:
broadcast_address:
endpoint_snitch: GossipingPropertyFileSnitch
internode_encryption: all
cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]
on the new host:
- seeds: "<host1>"
listen_address:
broadcast_address:
endpoint_snitch: GossipingPropertyFileSnitch
internode_encryption: all
cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]
Edit:
Also, using netstat I can see that the new server successfully establishes a tcp connection to port 7001 of the original server.
Edit:
Okay, next day. I've upgraded to Java 1.7.0_60 on both machines. Gossip now works with internode_encryption: none. I very much doubt the new result is related to the change in JDK; it's more likely related to some carelessness in scrubbing directories or the like.
I've commented out the line in each config file that lists ciphers. Gossip still fails in the same way with internode_encryption: all. The seed node's logs are clean, but the other node repeatedly logs "Filtering out TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA as it isnt supported by the socket" until gossip fails. I think the log entry is related to the failure. Why one node logs this and the other doesn't, I don't know. They're both Debian running the same JDK version.
Edit:
Installing the JCE on both nodes made the filtering warning go away. Still no encrypted internode communication at this point.
Edit:
With debug turned on, the seed node logs:
DEBUG 22:44:57,409 Error reading the socket d862c40[SSL_NULL_WITH_NULL_NULL: Socket[addr=/10.128.139.94,port=60611,localport=7001]]
javax.net.ssl.SSLHandshakeException: no cipher suites in common
I've pretty carefully created the certs for both servers, following the instructions at http://www.datastax.com/documentation/cassandra/2.0/cassandra/security/secureSSLCertificates_t.html?scroll=task_ds_c14_xjy_2k.
It's now working with both unencrypted and encrypted communications. Encrypted communication started working after installing the JCE extensions on both servers and making a change in certificate generation. The DataStax instructions for preparing server certificates for Cassandra 2.0 drop a parameter that was present in their Cassandra 1.2 instructions, and including the parameter seemed to make the difference. The additional parameter is -keyalg RSA:
Seed server:
# -keyalg RSA matters at key generation; keytool -export does not accept it
keytool -genkey -alias prod01 -keystore .keystore -keyalg RSA
keytool -export -alias prod01 -file prod01.cer -keystore .keystore
Other server:
keytool -genkey -alias prod00 -keystore .keystore -keyalg RSA
keytool -export -alias prod00 -file prod00.cer -keystore .keystore
Then, make sure both servers have both certs, and use them to create a trust store using these commands on both servers:
keytool -import -v -trustcacerts -alias prod00 -file prod00.cer -keystore .truststore
keytool -import -v -trustcacerts -alias prod01 -file prod01.cer -keystore .truststore
chmod go-rwx .keystore
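To sanity-check, listing the truststore on each server should show both certificates:
keytool -list -v -keystore .truststore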