I am trying to connect to RDS from a Lambda function (Node.js 12.x) with SSL. However, I am receiving this error:
Error: 4506652096:error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol:
library: 'SSL routines',
function: 'ssl_choose_client_version',
reason: 'unsupported protocol',
code: 'HANDSHAKE_SSL_ERROR'
I am connecting like this:
const fs = require('fs');
const mysql = require('mysql');

const pool = mysql.createPool({
  connectionLimit    : 10,
  host               : 'db.cqgcxllqwqnk.eu-central-1.rds.amazonaws.com',
  ssl                : {
    ca : fs.readFileSync(__dirname + '/rds-ca-2019-root.pem')
  },
  user               : 'xxxxx',
  password           : 'xxxxxx',
  database           : 'xxxxxx',
  multipleStatements : true
});
When I connect with the same certificate through MySQL Workbench, everything works just fine.
Any idea on how to solve this?
Thanks a lot!
The problem was related to the MySQL version and the TLS version. This matrix shows that for MySQL 5.6 only TLS 1.0 is supported, while Node.js 12 requires at least TLS 1.2 by default.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.SSLSupport
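For reference, a minimal sketch of a workaround if upgrading the RDS engine is not an option: let Node 12 negotiate TLS 1.0 again. tls.DEFAULT_MIN_VERSION is a standard Node setting (Node >= 11.4); the ssl.minVersion field is only honoured if your mysql driver version forwards it to the TLS layer, so treat it as an assumption to verify against your driver.

const tls = require('tls');
const fs = require('fs');
const mysql = require('mysql');

// Option 1: lower the process-wide minimum TLS version.
// MySQL 5.6 on RDS only speaks TLS 1.0, which Node 12 rejects by default.
tls.DEFAULT_MIN_VERSION = 'TLSv1';

const pool = mysql.createPool({
  connectionLimit : 10,
  host            : 'db.cqgcxllqwqnk.eu-central-1.rds.amazonaws.com',
  ssl : {
    ca : fs.readFileSync(__dirname + '/rds-ca-2019-root.pem'),
    // Option 2: scope the change to this pool only (only works if the driver
    // passes minVersion through to the TLS layer).
    minVersion : 'TLSv1'
  },
  user     : 'xxxxx',
  password : 'xxxxxx',
  database : 'xxxxxx',
  multipleStatements : true
});

Keep in mind that TLS 1.0 is deprecated, so upgrading the database engine remains the better long-term fix.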
Do this before connecting, in case the process has insufficient permissions on the certificate file:
sudo chmod 755 rds-combined-ca-bundle.pem
Warning: use a strong password; this one is easily guessable.
If you still have problems, check the IAM permissions of the Lambda function, or check whether RDS and the Lambda function are in the same VPC.
I'm getting this error when I try to connect: caught error # main Error: connect ENOENT /cloudsql/<PROJECT ID>:us-central1:<DB NAME>/.s.PGSQL.5432
This is what my typeorm config file looks like
const config2 = {
  database: <DB NAME>,
  entities: Object.values(entities),
  host: '/cloudsql/<project id>:us-central1:<db name>',
  extra: {
    socketPath: '/cloudsql/<project id>:us-central1:<db name>',
  },
  password: ...,
  port: 5432,
  type: process.env.POSTGRES_CONNECTION as DatabaseType,
  username: ...,
  synchronize: false,
  dropSchema:
    process.env.NODE_ENV !== 'production' &&
    process.env.POSTGRES_DROP_SCHEMA === 'true',
  migrations: ['dist/migrations/*.js'],
  migrationsRun: true,
  cache: shouldCache(),
} as PostgresConnectionOptions;
I also tried to connect via a connection URL in Postico 2 and I'm getting the error Hostname not found.
I have the Cloud SQL API enabled in my Google project.
Did you add your network to the instance's authorized networks? If not, Postgres will not allow you to connect.
Try the 'Connections' page of your Cloud SQL instance.
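For example, assuming the gcloud CLI is installed (the instance name and IP below are placeholders), you can authorize your machine's public IP like this:

gcloud sql instances patch <INSTANCE_NAME> --authorized-networks=203.0.113.5/32

Note that this only matters when you connect to the instance's public IP; connections through the Cloud SQL proxy or the /cloudsql socket are authorized via IAM instead.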
To connect to your Google Cloud SQL instance via Postico, you'll need to open a local tunnel on port 15432 using the downloadable cloud_sql_proxy executable.
https://cloud.google.com/sql/docs/mysql/sql-proxy
The user you're signed in as in your terminal needs IAM permissions of:
Service Account Admin (or Service Account User may suffice)
If you aren't sure which user to give these IAM permissions to, run:
gcloud config get account
Once those permissions are confirmed, a command similar to this will establish the tunnel (note the & at the end, which is intentional!):
./cloud_sql_proxy -instances=<CONNECTION_NAME>=tcp:15432 &
The CONNECTION_NAME can be found inside your cloud console under SQL > 'Connect to this instance' section.
Now with the tunnel open, you should be able to connect inside of Postico.
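If you also want to run the app itself against the tunnel during local development, here is a sketch of the TypeORM options; the 127.0.0.1:15432 values assume the proxy was started with tcp:15432 as above, and the environment variable names are hypothetical.

import { PostgresConnectionOptions } from 'typeorm/driver/postgres/PostgresConnectionOptions';

// Local-development sketch: go through the cloud_sql_proxy TCP tunnel instead of
// the /cloudsql unix socket, which only exists inside the GCP runtime.
const localConfig: PostgresConnectionOptions = {
  type: 'postgres',
  host: '127.0.0.1',                      // cloud_sql_proxy listens here
  port: 15432,                            // must match the tcp:15432 used when starting the proxy
  username: process.env.POSTGRES_USER,    // hypothetical env var names
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
};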
To ensure your app can connect, you may just need to assign Cloud SQL Client IAM permission to your app in the same way you assigned service account permissions to your local user.
I hope this helps!
How do I connect to my Elasticsearch cluster (TLS-secured) when the certificates were generated by myself with the elasticsearch-certutil?
According to the ES documentation this code snippet should do it:
const fs = require("fs");
const { Client } = require("@elastic/elasticsearch");

const client = new Client({
  node: config.elastic.node,
  auth: {
    username: "elastic",
    password: config.elastic.password
  },
  tls: {
    ca: fs.readFileSync("./share/es/certs/ca.crt"),
    rejectUnauthorized: false
  }
})
Unfortunately, this gives me this famous error:
ConnectionError: unable to verify the first certificate
I've set up ES via docker-compose. To wrap up, I did the following:
Generating the certs with the elasticsearch-certutil cert command via bin/elasticsearch-certutil cert --silent --pem --in config/instances.yml -out /certs/bundle.zip. instances.yml contains all of my nodes as well as Kibana. bundle.zip contains all certs and keys as well as the certificate for the CA.
Configuring my nodes in docker-compose.yml so that they can read the generated certificates. For instance,
...
- xpack.security.http.ssl.key=${ES_CERTS_DIR}/es01/es01.key
- xpack.security.http.ssl.certificate_authorities=${ES_CERTS_DIR}/ca/ca.crt
- xpack.security.http.ssl.certificate=${ES_CERTS_DIR}/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=${ES_CERTS_DIR}/ca/ca.crt
- xpack.security.transport.ssl.certificate=${ES_CERTS_DIR}/es01/es01.crt
- xpack.security.transport.ssl.key=${ES_CERTS_DIR}/es01/es01.key
...
Validating the connection with curl with this command:
$ curl -X GET "https://elastic:$ES_PASSWORD@my-cluster-domain.com:9201" -H "Content-Type: application/json" --cacert $CACERT --key $KEY --cert $CERT
where $CACERT, $KEY, $CERT are pointing to the CA cert, the key and certificate for the node that I am connecting to. This results in:
{
"name" : "es01",
"cluster_name" : "es-docker-cluster",
...
"tagline" : "You Know, for Search"
}
which is fine I suppose.
But why can't I connect to my cluster from my Express.js application? I read something about creating the certificate chain and letting ES know about it. But is this necessary? I mean, I can connect via curl and also using elasticdump. What does give me an error is accessing the cluster via the browser at https://my-cluster-domain.com:9201. The browser warns me that, although the certificate is valid, the connection is not secure.
Any ideas? Thank you.
Well, after a lot of googling it turned out that adding the CA file to the ES client config is not enough, as indicated in my example configuration above.
...
tls: {
  ca: fs.readFileSync( "./share/es/certs/ca.crt" ),
  rejectUnauthorized: false // don't do this in production
}
Instead, one has to announce the CA certificate to the Node process itself, before configuring your connection to ES. You can do this with the NODE_EXTRA_CA_CERTS environment variable, as described in several other posts on this topic. I now start my process like this and it works:
$ NODE_EXTRA_CA_CERTS="./share/es/certs/ca.crt" NODE_ENV=prod ...
One last remark: you don't have to set rejectUnauthorized: false, as some workarounds do, as long as you have a current version of the elasticsearch client.
My final configuration looks like this:
const client = new Client({
node: config.elastic.node,
auth: {
username: "elastic",
password: config.elastic.password
}
})
I am trying to run, locally (from the GCP terminal), the Python 3 tutorial program to connect to my PostgreSQL database.
I run the proxy, as suggested in the source:
./cloud_sql_proxy -instances=xxxxxxxx:us-central1:testpg=tcp:5432
it works, I can connect to it with:
psql "host=127.0.0.1 sslmode=disable dbname=guestbook user=postgres
Unfortunately when I try to connect from python:
cnx = psycopg2.connect(dbname=db_name, user=db_user,
password=db_password, host=host)
host is 127.0.0.1, as I run it locally. I get this error:
psycopg2.OperationalError: connection to server at "127.0.0.1", port 5432 failed: FATAL: password authentication failed for user "postgres"
I can't figure out what I'm missing.
Thanks in advance ...
I'd recommend using the Cloud SQL Python Connector to manage your connections; best of all, you won't need to worry about running the proxy manually. It supports the pg8000 PostgreSQL driver and can run from Cloud Shell.
Here is an example code snippet showing how to use it:
from google.cloud.sql.connector import connector
import sqlalchemy

# configure Cloud SQL Python Connector properties
def getconn():
    conn = connector.connect(
        "xxxxxxxx:us-central1:testpg",
        "pg8000",
        user="YOUR_USER",
        password="YOUR_PASSWORD",
        db="YOUR_DB"
    )
    return conn

# create connection pool to re-use connections
pool = sqlalchemy.create_engine(
    "postgresql+pg8000://",
    creator=getconn,
)

# query or insert into Cloud SQL database
with pool.connect() as db_conn:
    # query database
    result = db_conn.execute("SELECT * from my_table").fetchall()
    # Do something with the results
    for row in result:
        print(row)
For more detailed examples refer to the README of the repository.
I'm using Python 3.9.5 and PyMongo 3.11.4. The version of my MongoDB database is 4.4.6. I'm using Windows 8.1
I'm learning MongoDB and I have a cluster set up in Atlas that I connect to. Whenever I try to insert a document into a collection, a ServerSelectionTimeoutError is raised, and inside its parentheses there are several occurrences of [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate.
The Troubleshooting TLS Errors section in the PyMongo docs wasn't much help, as it only provides tips for Linux and macOS users.
It's worth mentioning that if I set tlsAllowInvalidCertificates=True when initializing my MongoClient, everything works fine. That sounds insecure, and while I am working on a small project, I would still like to develop good habits and not override any security measures in place, so I'm hoping there is an alternative to that.
From all the searching I've done, I'm guessing that I'm missing certain certificates, or that Python can't find them. I've looked into the certifi package, but this part of the docs makes it seem like that should only be necessary if I'm using Python 2.x, which I'm not.
So yeah, I'm kind of stuck right now.
Well, I eventually decided to install certifi and it worked.
client = MongoClient(CONNECTION_STRING, tlsCAFile=certifi.where())
Wish the docs were a bit clearer on this, but maybe I just didn't look hard enough.
In a Flask server I solved it by using:
import certifi
from flask import Flask
from flask_pymongo import PyMongo

app = Flask(__name__)
app.config['MONGO_URI'] = 'mongodb+srv://NAME:<PWD>@<DBNAME>.9xxxx.mongodb.net/<db>?retryWrites=true&w=majority'
mongo = PyMongo(app, tlsCAFile=certifi.where())
collection_name = mongo.db.collection_name
By default, pymongo relies on the operating system’s root certificates.
You need to install certifi:
pip install certifi
It could be that Atlas itself updated its certificates, or it could be that something on your OS changed. “certificate verify failed” often occurs because OpenSSL does not have access to the system's root certificates or the certificates are out of date. For how to troubleshoot, see the “TLS/SSL and PyMongo” page of the PyMongo documentation.
So try:
client = pymongo.MongoClient(connection, tlsCAFile=certifi.where())
This happens in Django as well; just add the following configuration to your settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        "CLIENT": {
            "name": <your_database_name>,
            "host": <your_connection_string>,
            "username": <your_database_username>,
            "password": <your_database_password>,
            "authMechanism": "SCRAM-SHA-1",
        },
    }
}
But with the host you may get this issue:
pymongo.errors.ServerSelectionTimeoutError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)
So for this you can add:
"mongodb+srv://sampleUser:samplePassword@cluster0-gbdot.mongodb.net/sampleDB?ssl=true&ssl_cert_reqs=CERT_NONE&retryWrites=true&w=majority"
Adding
ssl=true&ssl_cert_reqs=CERT_NONE
after the DB name of your URL string works fine:
"mongodb+srv://username:Password@cluster0-gbdot.mongodb.net/DbName?ssl=true&ssl_cert_reqs=CERT_NONE&retryWrites=true&w=majority"
I saw an answer that worked for me: it appears I had not yet installed the Python certificates on my Mac, so I went to the following path and installed them:
/Applications/Python 3.10/Install Certificates.command
Just change the version to match your Python; after that, everything worked fine for me.
PS: I had been trying to solve the problem for half a day, I even asked ChatGPT
Step 1:
pip install certifi
Step 2:
client = pymongo.MongoClient(connection, tlsCAFile=certifi.where())
My Python script is raising a 'psycopg2.OperationalError: fe_sendauth: no password supplied' error, even though the Postgres server is authorizing the connection.
I am using Python 3.5, psycopg2, Postgres 9.5, and the password is stored in a .pgpass file. The script is part of a RESTful Flask application, using flask-restful. The script is running on the same host as the Postgres server.
I am calling the connect function as follows:
conn_admin = psycopg2.connect("dbname=database user=username")
When I execute the script I get the following stack trace:
File "/var/www/flask/content_provider.py", line 84, in get_report
conn_admin = psycopg2.connect("dbname=database user=username")
File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: fe_sendauth: no password supplied
However, when I look at the Postgres server log I see the following (I enabled the logger to also show all connection requests):
2019-01-04 18:28:35 SAST [17736-2] username#database LOG: connection authorized: user=username database=database
This code is running fine on my development PC; however, when I put it onto the Ubuntu server, I start getting this problem.
To try and find the issue, I have hard-coded the password into the connection string, but I still get the same error.
If I execute the above line directly in my Python terminal on the host, it works fine, with and without the password in the connection string.
EDIT:
One thing I did notice is that on my desktop I use Python 3.6.2, while on the server I use Python 3.5.2.
Try adding the host:
conn_admin = psycopg2.connect("dbname=database user=username host=localhost")
Try adding the password, i.e.:
conn = psycopg2.connect("dbname=database user=username host=localhost password=password")
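If you would rather keep the password out of the connection string and stay with .pgpass, note that libpq looks for the file in the home directory of the OS user that runs the process (on the server that is the WSGI/Apache user, not your own account), and it is ignored unless its permissions are 0600. A hypothetical entry matching the connection above (format: hostname:port:database:username:password):

localhost:5432:database:username:your_actual_password

and the file would need chmod 600 ~/.pgpass for the user running the Flask app.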