Access AWS DB using MuleSoft RTF EKS

MuleSoft version: 4.3.0
AWS-RTF EKS
DB: AWS RDS (Aurora MySQL) 5.7
I'm able to connect to the AWS DB from Anypoint Studio successfully, but unable to connect from the RTF EKS pod.
org.mule.runtime.api.connection.ConnectionException: Could not obtain connection from data source
Caused by: org.mule.db.commons.shaded.api.exception.connection.ConnectionCreationException: Could not obtain connection from data source
Caused by: org.mule.runtime.extension.api.exception.ModuleException: java.sql.SQLException: Cannot get connection for URL jdbc:mysql://<host>:3306/DBNAME?verifyServerCertificate=false&useSSL=true&requireSSL=true : Communications link failure
The last packet successfully received from the server was 99 milliseconds ago. The last packet sent successfully to the server was 94 milliseconds ago.
Caused by: java.sql.SQLException: Cannot get connection for URL jdbc:mysql://<host>:3306/DBNAME?verifyServerCertificate=false&useSSL=true&requireSSL=true : Communications link failure
I'm able to access the DB from EKS by creating a default pod with --image=mysql:5.7, but not from the MuleSoft app.
Use cases tried:
1. verifyServerCertificate=false&useSSL=true&requireSSL=true
2. verifyServerCertificate=true&useSSL=true&requireSSL=true (passing the truststore in Java arguments)
-Djavax.net.ssl.trustStore=/opt/mule/apps/test-rds/mySqlKeyStore.jks
-Djavax.net.ssl.trustStoreType=JKS
-Djavax.net.ssl.trustStorePassword=xxxxxx
(Generated the JKS file from the .pem file using the commands below)
openssl x509 -outform der -in us-west-2-bundle.pem -out us-west-2-bundle.der
keytool -import -alias mysql -keystore mySqlKeyStore -file us-west-2-bundle.der
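As a quick sanity check (my addition, not from the original post), you can confirm the RDS CA certificate actually landed in the generated keystore before pointing the JVM arguments at it; the store name and password are the ones used above:
# Should print the mysql alias and the Amazon RDS CA as the certificate owner.
keytool -list -v -keystore mySqlKeyStore -storepass xxxxxx | grep -iE 'alias|owner|valid'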
What else am I missing here? Please help.

I was able to resolve this.
By adding the JVM argument -M-Djavax.net.debug=ssl, I found out that it's something related to the SSL handshake.
It gave debug logs like this:
javax.net.ssl|SEVERE|43|[MuleRuntime].uber.03: [test-rds].uber#org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationInstance.testConnectivity:179 #3781e9a3|2021-12-23 09:55:53.715 PST|TransportContext.java:316|Fatal (HANDSHAKE_FAILURE): Couldn't kickstart handshaking (
"throwable" : {
javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
After going through this question, it's clear that I need to pass enabledTLSProtocols=TLSv1.2:
Why can Java not connect to MySQL 5.7 after the latest JDK update and how should it be fixed? (ssl.SSLHandshakeException: No appropriate protocol)
So here are the params that I passed in the DB config:
<db:connection-properties>
    <db:connection-property key="verifyServerCertificate" value="false" />
    <db:connection-property key="useSSL" value="true" />
    <db:connection-property key="requireSSL" value="true" />
    <db:connection-property key="enabledTLSProtocols" value="TLSv1.2" />
</db:connection-properties>
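Independently of the Mule config, you can cross-check which protocol the server is actually willing to negotiate. This is a hedged sketch of my own (not from the original post); it assumes an OpenSSL build with MySQL STARTTLS support (1.1.1 or newer) and reuses the <host> placeholder from the JDBC URL above:
# Force a TLSv1.2 handshake against the MySQL port and print what was negotiated.
openssl s_client -connect <host>:3306 -starttls mysql -tls1_2 < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'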
Even after adding the enabledTLSProtocols flag, if you are still getting the error, make sure the DB version is correct (I had different versions in non-prod and prod):
Non-prod: MySQL 5.7 worked fine.
Prod: MySQL 5.6 didn't work even with enabledTLSProtocols. I had to update the DB to 5.7 to make it work.
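If you want to confirm what the server itself supports, here is a hedged check of my own using the same kind of throwaway mysql:5.7 pod mentioned in the question (the tls_version variable exists on MySQL 5.7; an empty result on 5.6 is itself a hint, and <host>/<user> are placeholders):
# Run the MySQL client in a temporary pod and ask the server which TLS versions it offers.
kubectl run mysql-client --rm -it --restart=Never --image=mysql:5.7 -- \
  mysql -h <host> -P 3306 -u <user> -p --ssl-mode=REQUIRED \
  -e "SHOW GLOBAL VARIABLES LIKE 'tls_version';"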
Thank you, hope it helps someone.

Related

Cannot connect to cluster with cqlsh using secure connect bundle

I am getting an error when I try to connect to a DataStax Cassandra instance.
bin/cqlsh -u admin -p PASSWORD -b BUNDLE_ZIP_PATH
Connection error: ('Unable to connect to any servers', \
{'xxx:xxx:xxx': ValueError('No host_id to create the SniEndPoint',)} \
)
Has anyone seen this error? This is a connection to a cloud-managed DataStax instance on IBM Cloud, and the connection used to work before.
The error is generated by the embedded Python driver that cqlsh uses to connect to clusters. It indicates that it couldn't get the host from the secure bundle.
The most likely cause is that the secure bundle you're using is corrupted so I'd suggest downloading it from the source again. Cheers!
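One hedged way to sanity-check the bundle itself (my addition, not part of the original answer): the secure connect bundle is just a zip, so a healthy download should pass an integrity test and list config.json plus the TLS cert/key material, while a truncated or corrupted file typically fails right here:
# Test the archive integrity; a corrupted bundle errors out at this step.
unzip -t BUNDLE_ZIP_PATH
# List the contents; expect config.json and the certificate/key/truststore files.
unzip -l BUNDLE_ZIP_PATH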

Databricks DBT Runtime Error, cannot connect to Database. Maybe an SSL error?

I have a custom Databricks instance with a domain name that points to an AWS load balancer. When I put that information in using either the HTTP instructions here or the Databricks cluster instructions here, I get this response in the DBT CLI:
Connection:
host: https://subdomain.domain.com
port: 443
cluster: 123456-stuff00003
endpoint: None
schema: default
organization: 0
16:40:39.470091 [debug] [MainThread]: Acquiring new spark connection "debug"
16:40:39.471632 [debug] [MainThread]: Using spark connection "debug"
16:40:39.472524 [debug] [MainThread]: On debug: select 1 as id
16:40:39.472953 [debug] [MainThread]: Opening a new connection, currently in state init
Connection test: [ERROR]
1 check failed:
dbt was unable to connect to the specified database.
The database returned the following error:
>Runtime Error
Database Error
failed to connect
Unfortunately, DBT's debugging logs are terrible and I am not entirely sure why it is failing. I do know that when I connect to the cluster via IntelliJ I have to provide the CA file, the client certificate file, and the client key file, because I am using a self-signed SSL cert (unfortunately, the self-signed cert is required). Also, when defining my ~/.databrickscfg file I have to provide the argument insecure = true.
I encountered this issue recently and fixed it by installing root certificates: executing the "Install Certificates.command" script in the Python home directory used to run dbt.
Laurent
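For anyone hunting for that script, here is a hedged pointer rather than part of the original answer; it assumes a python.org installer on macOS, and the version directory below is only an example (use whichever interpreter actually runs dbt):
# python.org macOS installs ship the script under /Applications (version directories vary).
ls -d /Applications/Python\ 3.*
# Run it for the interpreter that runs dbt; it installs certifi's root CAs for that Python.
/Applications/Python\ 3.9/Install\ Certificates.command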

Spark ElasticSearch EsHadoopIllegalArgumentException unable to find keystore with valid URI

I'm trying to connect spark to my elasticsearch with SSL.
Setup
Spark 2.4.0 from CDH 6.3.2 (Cloudera)
ElasticSearch 7.6.1 (Open Distro)
elasticsearch-hadoop-7.6.1.jar
Considering
1) I already managed to authenticate logstash with SSL and pkcs12 keystore manually created
2) Connexion Spark to ES works without security
Here is the Spark conf provided:
spark.es.nodes=node1
spark.es.port=9200
spark.es.net.ssl=true
spark.es.net.ssl.keystore.location= ===> see below for what I tried
spark.es.net.ssl.keystore.type=PKCS12
spark.es.net.ssl.cert.allow.self.signed=true
spark.es.net.http.auth.user=admin
spark.es.net.http.auth.pass=admin
spark.es.nodes.wan.only=false //tried true
Doing
spark.read.format("org.elasticsearch.spark.sql")
.option("es.query", "?q=*:*")
.load("spark/docs")
.show
====================================================
FileSystem values tried with spark.es.net.ssl.keystore.location (after copying admin.pkcs12 to all nodes)
file:///PATH/certs/admin.pkcs12
Error :
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
... elided
Caused by: org.elasticsearch.hadoop.EsHadoopIllegalStateException: Cannot initialize SSL - Get Key failed: null
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.createSSLContext(SSLSocketFactory.java:175)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.getSSLContext(SSLSocketFactory.java:160)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.createSocket(SSLSocketFactory.java:129)
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport.doExecute(CommonsHttpTransport.java:685)
at org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport.execute(CommonsHttpTransport.java:664)
at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:116)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:432)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:388)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:392)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:168)
at org.elasticsearch.hadoop.rest.RestClient.mainInfo(RestClient.java:745)
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverClusterInfo(InitializationUtils.java:330)
... 61 more
Caused by: java.security.UnrecoverableKeyException: Get Key failed: null
at sun.security.pkcs12.PKCS12KeyStore.engineGetKey(PKCS12KeyStore.java:435)
at java.security.KeyStore.getKey(KeyStore.java:1023)
at sun.security.ssl.SunX509KeyManagerImpl.<init>(SunX509KeyManagerImpl.java:133)
at sun.security.ssl.KeyManagerFactoryImpl$SunX509.engineInit(KeyManagerFactoryImpl.java:70)
at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:256)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.loadKeyManagers(SSLSocketFactory.java:217)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.createSSLContext(SSLSocketFactory.java:173)
... 78 more
Caused by: java.lang.NullPointerException
at sun.security.pkcs12.PKCS12KeyStore.engineGetKey(PKCS12KeyStore.java:374)
... 84 more
====================================================
I copied a valid keystore admin.pkcs12 to HDFS => /user/company/ with 777 rights (as I'm writing this, is that too permissive, like with ssh?)
//returns true
FileSystem.get(spark.sparkContext.hadoopConfiguration).exists(new Path("hdfs://namenode:8020/user/company/admin.pkcs12"))
HDFS Values tried with spark.es.net.ssl.keystore.location
hdfs:///namenode:8020/user/company/admin.pkcs12
hdfs://namenode:8020/user/company/admin.pkcs12
/user/company/admin.pkcs12
Error :
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
... elided
Caused by: org.elasticsearch.hadoop.EsHadoopIllegalStateException: Cannot initialize SSL - Expected to find keystore file at [...] but was unable to. Make sure that it is available on the classpath, or if not, that you have specified a valid URI.
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.createSSLContext(SSLSocketFactory.java:175)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.getSSLContext(SSLSocketFactory.java:160)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.createSocket(SSLSocketFactory.java:129)
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport.doExecute(CommonsHttpTransport.java:685)
at org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport.execute(CommonsHttpTransport.java:664)
at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:116)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:432)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:388)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:392)
at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:168)
at org.elasticsearch.hadoop.rest.RestClient.mainInfo(RestClient.java:745)
at org.elasticsearch.hadoop.rest.InitializationUtils.discoverClusterInfo(InitializationUtils.java:330)
... 61 more
Caused by: org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Expected to find keystore file at [...] but was unable to. Make sure that it is available on the classpath, or if not, that you have specified a valid URI.
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.loadKeyStore(SSLSocketFactory.java:195)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.loadKeyManagers(SSLSocketFactory.java:215)
at org.elasticsearch.hadoop.rest.commonshttp.SSLSocketFactory.createSSLContext(SSLSocketFactory.java:173)
I tried JKS too.
What am I missing?
I was getting this error because of the missing password:
spark.es.net.ssl.keystore.pass=PASSWORD
With the password set, this keystore location works:
file:///PATH/certs/admin.pkcs12
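As a related check (my addition): the "Get Key failed: null" in the first stack trace is what the connector throws when no keystore password is available, and keytool can be used to confirm that the password you are about to put in spark.es.net.ssl.keystore.pass actually opens the store:
# Should list the admin entry; a wrong password fails immediately instead.
keytool -list -keystore /PATH/certs/admin.pkcs12 -storetype PKCS12 -storepass PASSWORD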

root.crt not found postgresql

I have a Postgres Docker image that I am using, and I am enabling SSL on it. I want it to use verify-full, because I have a root.crt and want to make sure all the certs that can use SSL are verified. So, in my docker-compose file, I have mounted my server.crt and server.key to /var/ssl and my root.crt to /root/.postgresql.
volumes:
- ~/server_certs:/var/ssl
- ~/root_certs:/root/.postgresql
and the error I get is:
ERROR [2018-07-10 20:28:24,355] org.apache.tomcat.jdbc.pool.ConnectionPool: Unable to create initial connections of pool.
! java.io.FileNotFoundException: /root/.postgresql/root.crt (No such file or directory)
! at java.io.FileInputStream.open0(Native Method)
! at java.io.FileInputStream.open(FileInputStream.java:195)
! at java.io.FileInputStream.<init>(FileInputStream.java:138)
! at java.io.FileInputStream.<init>(FileInputStream.java:93)
! at org.postgresql.ssl.jdbc4.LibPQFactory.<init>(LibPQFactory.java:124)
! ... 32 common frames omitted
! Causing: org.postgresql.util.PSQLException: Could not open SSL root certificate file /root/.postgresql/root.crt.
Any help with getting Postgres to find the root.crt would be greatly appreciated (Postgres 10, btw).
As a workaround you can add sslmode=require (no certificate validation!) or sslfactory=org.postgresql.ssl.DefaultJavaSSLFactory (validate the certificate using the JRE trust store) to your JDBC URL.
This behavior and the mentioned workaround are described in https://github.com/pgjdbc/pgjdbc/issues/1307
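For illustration (hostname and database below are placeholders, not from the question), the two workarounds look like this on the JDBC URL:
# Encrypt but skip certificate validation:
jdbc:postgresql://dbhost:5432/mydb?ssl=true&sslmode=require
# Validate the server certificate against the JRE trust store instead of ~/.postgresql/root.crt:
jdbc:postgresql://dbhost:5432/mydb?ssl=true&sslfactory=org.postgresql.ssl.DefaultJavaSSLFactory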

Error on Cassandra server: Unable to gossip with any seeds

I'm adding a second node to a single-node cassandra cluster, and getting a stack trace on the second node:
ERROR 18:13:42,841 Exception encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds
at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1193)
at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:446)
at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:655)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:504)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
java.lang.RuntimeException: Unable to gossip with any seeds
at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1193)
at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:446)
at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:655)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:504)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Exception encountered during startup: Unable to gossip with any seeds
ERROR 18:13:42,885 Exception in thread Thread[StorageServiceShutdownHook,5,main]
java.lang.NullPointerException
at org.apache.cassandra.gms.Gossiper.stop(Gossiper.java:1270)
at org.apache.cassandra.service.StorageService$1.runMayThrow(StorageService.java:572)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:744)
There are other SO questions with this same issue, but none of the answers have worked for me:
Apache Cassandra: Unable to gossip with any seeds
new cassandra node can't gossip with seed
Datastax Enterprise is crashing with Unable to gossip with any seeds error
I'm running Cassandra 2.0.8 and JDK 1.7.0_51 on both nodes. One node is hosted at DigitalOcean, the other at Linode. I've tried configuring them as the same datacenter and as different datacenters in cassandra-rackdc.properties; it makes no difference. I've tried listen_address and broadcast_address both blank and hardcoded; it makes no difference. I did limit the list of cipher suites to stop a flood of log messages about missing cipher suites. From the stock cassandra.yaml, I've changed the following entries, excluding entries related to concurrent writes and compaction. For the sake of this question, wherever there's a hardcoded IP address in the config, I've replaced it with . Each box has a firewall, but I've tried it with the firewalls disabled. I've also tried it with 'internode_encryption: none' and the result is the same. I've used telnet and netcat to confirm that each host can connect to the other's ports 7000 and 7001.
on the original host:
- seeds: "<host1>"
listen_address:
broadcast_address:
endpoint_snitch: GossipingPropertyFileSnitch
internode_encryption: all
cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]
on the new host:
- seeds: "<host1>"
listen_address:
broadcast_address:
endpoint_snitch: GossipingPropertyFileSnitch
internode_encryption: all
cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]
Edit:
Also, using netstat I can see that the new server successfully establishes a tcp connection to port 7001 of the original server.
Edit:
Okay, next day. I've upgraded to Java 1.7.0_60 on both machines. Gossip now works with internode_encryption: none. I very much doubt the new result is related to the change in JDK; it's more likely related to some carelessness in scrubbing directories or the like.
I've commented out the line in each config file that lists ciphers. Gossip still fails in the same way with internode_encryption: all. The seed node's logs are clean, but the other node repeatedly logs "Filtering out TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA as it isnt supported by the socket" until gossip fails. I think the log entry is related to the failure. Why one node logs this and the other doesn't, I don't know. They're both Debian running the same JDK version.
Edit:
Installing the JCE on both nodes made the filtering warning go away. Still no encrypted internode communication at this point.
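A hedged one-liner (my addition) to confirm on each node that the unlimited-strength policy actually took effect: 128 means the default restricted policy is still active, while 2147483647 means the JCE unlimited policy files are in place.
# Prints the maximum AES key length the JVM will allow.
jrunscript -e "print(javax.crypto.Cipher.getMaxAllowedKeyLength('AES'))"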
Edit:
With debug turned on, the seed node logs:
DEBUG 22:44:57,409 Error reading the socket d862c40[SSL_NULL_WITH_NULL_NULL: Socket[addr=/10.128.139.94,port=60611,localport=7001]]
javax.net.ssl.SSLHandshakeException: no cipher suites in common
I've pretty carefully created the certs for both servers, following the instructions at http://www.datastax.com/documentation/cassandra/2.0/cassandra/security/secureSSLCertificates_t.html?scroll=task_ds_c14_xjy_2k.
It's now working using either unencrypted or encrypted communications. Encrypted communications started working after installing the JCE extensions on both servers and making a change in certificate generation. The DataStax instructions for preparing server certificates for Cassandra 2.0 drop a parameter that was present in their Cassandra 1.2 instructions, and including that parameter seemed to make the difference. The additional parameter is -keyalg RSA:
Seed server:
keytool -genkey -alias prod01 -keystore .keystore -keyalg RSA
keytool -export -alias prod01 -file prod01.cer -keystore .keystore
Other server:
keytool -genkey -alias prod00 -keystore .keystore -keyalg RSA
keytool -export -alias prod00 -file prod00.cer -keystore .keystore
Then, make sure both servers have both certs, and use them to create a trust store using these commands on both servers:
keytool -import -v -trustcacerts -alias prod00 -file prod00.cer -keystore .truststore
keytool -import -v -trustcacerts -alias prod01 -file prod01.cer -keystore .truststore
chmod go-rwx .keystore
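Optionally (my addition, not part of the original answer), verify on each node that both certificates ended up in the trust store as trusted entries:
# Both the prod00 and prod01 aliases should appear, each as a trustedCertEntry.
keytool -list -v -keystore .truststore | grep -E 'Alias name|Entry type'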
