I don't seem to be able to connect to Heroku Redis using TLS on Node.
These docs aren't much help: https://devcenter.heroku.com/articles/securing-heroku-redis
Does anyone have a working example? Should I be using REDIS_URL or REDIS_TLS_URL?
I'm using node_redis v3.
I found that the Redis 6 add-on by Heroku generated an "Error: self signed certificate in certificate chain" error when connecting to REDIS_URL without any parameters with ioredis on Node. You can avoid this error by passing in TLS options with rejectUnauthorized set to false.
Setting rejectUnauthorized to false allows self-signed certificates, which would be an issue if you are concerned about MITM attacks. See the Node TLS options for more background.
This is working for me with the latest ioredis package, with both rediss:// and redis:// URLs...
const Redis = require("ioredis");
const url = require("url");

const REDIS_URL = process.env.REDIS_URL;
const redis_uri = url.parse(REDIS_URL);

// For rediss:// URLs, build an options object with explicit TLS settings;
// otherwise pass the plain redis:// URL straight through to ioredis.
const redisOptions = REDIS_URL.includes("rediss://")
  ? {
      port: Number(redis_uri.port),
      host: redis_uri.hostname,
      password: redis_uri.auth.split(":")[1],
      db: 0,
      tls: {
        rejectUnauthorized: false,
      },
    }
  : REDIS_URL;
const redis = new Redis(redisOptions);
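As an aside, ioredis also accepts a URL as the first argument and an options object as the second, so the manual URL parsing above can be skipped. A minimal sketch of that variant (my shorthand, not from the answer above):

const Redis = require("ioredis");

const REDIS_URL = process.env.REDIS_URL;

// ioredis merges these options into what it parses from the URL, so only
// the TLS override needs to be supplied for rediss:// URLs.
const redis = REDIS_URL.startsWith("rediss://")
  ? new Redis(REDIS_URL, { tls: { rejectUnauthorized: false } })
  : new Redis(REDIS_URL);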
Here's my approach. It's easier to pass URL and TLS options separately.
const redis = require('redis');

// Prefer REDIS_TLS_URL (hobby tier), fall back to REDIS_URL (premium tier).
const redisUrl = process.env.REDIS_TLS_URL ? process.env.REDIS_TLS_URL : process.env.REDIS_URL;
const redisDefaults = {
  tls: {
    // Heroku uses a self-signed certificate, which causes a connection
    // error unless certificate verification is disabled.
    rejectUnauthorized: false,
  },
};
const defaultClient = redis.createClient(redisUrl, redisDefaults);
If your test env runs on the hobby plan, the TLS URL is set in REDIS_TLS_URL, while production normally runs on premium and the env var is REDIS_URL. So, to be compatible with both, I look for REDIS_TLS_URL first and then fall back to REDIS_URL, which supports both the test and prod environments.
For devs using node-redis (v4 and later), you'll need to set tls to true in the socket options when initializing your client.
const redis = require('redis');

const client = redis.createClient({
  url: REDIS_URL,
  socket: {
    tls: true,
    rejectUnauthorized: false,
  },
});
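Note that node-redis v4 clients also need an explicit connect call before use; a minimal sketch, reusing the client above:

// node-redis v4 does not connect automatically when the client is created.
(async () => {
  client.on('error', (err) => console.error('Redis client error', err));
  await client.connect();
  console.log('connected');
})();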
I don't know why you can't connect to this Redis add-on, unfortunately.
In the event you want to test on another add-on, I have developed a Redis add-on that is in the "alpha" phase (free) on Heroku. I'll be able to provide you some support if you can't connect to it.
If you are interested, give me your Heroku email in private and I'll send you an invitation :)
For people using Bull, this implementation worked for me. Thanks @Tom McLellan.
const Queue = require('bull');
const redisUrlParse = require('redis-url-parse');

const REDIS_URL = process.env.REDIS_URL || 'redis://127.0.0.1:6379';

const redisUrlParsed = redisUrlParse(REDIS_URL);
const { host, port, password } = redisUrlParsed;
const bullOptions = REDIS_URL.includes('rediss://')
  ? {
      redis: {
        port: Number(port),
        host,
        password,
        tls: {
          rejectUnauthorized: false,
        },
      },
    }
  : REDIS_URL;

const workQueue = new Queue('work', bullOptions);
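To check that the queue actually connects, a quick smoke test can help (the job payload here is made up):

// Log and complete any job that arrives on the queue.
workQueue.process(async (job) => {
  console.log('processing job', job.id, job.data);
});

// Enqueue a test job.
workQueue.add({ hello: 'world' });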
This worked for me using node-redis v3.0.0
const opts = config.REDIS_URL.includes('rediss://')
  ? {
      url: config.REDIS_URL,
      tls: {
        rejectUnauthorized: false,
      },
    }
  : config.REDIS_URL;
const client = redis.createClient(opts);
Use tls, not socket, with node-redis v3. Thanks to this answer.
Related
I've been trying to learn Elasticsearch and decided to try to connect it with Node.js. I have Elasticsearch running, plus an index I created named test-idx. I'm following the Elasticsearch documentation to connect and create a document; however, when I run my code I get "ConnectionError: self signed certificate in certificate chain" followed by a huge meta object.
const elasticsearch = require('@elastic/elasticsearch');

const client = new elasticsearch.Client({
  node: 'https://localhost:9200',
  auth: {
    username: 'elastic',
    password: '123456'
  }
});

client.index({
  index: 'test-idx',
  document: {
    field: 'test123'
  }
});
I tried adding the following when creating the Client instance, but it didn't seem to help:
tls: {
  rejectUnauthorized: false
}
In my case it's working. I'm using Node with NestJS:
tls: { rejectUnauthorized: false }
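For context, here is where that option sits in the client configuration: a minimal sketch assuming the @elastic/elasticsearch v8 client, where the option is named tls (the 7.x client called it ssl):

const { Client } = require('@elastic/elasticsearch');

const client = new Client({
  node: 'https://localhost:9200',
  auth: {
    username: 'elastic',
    password: '123456',
  },
  // Accept the self-signed dev certificate; don't do this in production.
  tls: {
    rejectUnauthorized: false,
  },
});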
I'm trying to download https://www.stackoverflow.com or https://www.google.com using got while I'm behind a proxy.
I keep running into RequestError: unable to get local issuer certificate if rejectUnauthorized: false is not used. I know that this rejectUnauthorized: false workaround is a security issue.
stackoverflow.com and google.com must have trusted, well-known CAs, so why am I getting this error?
import got from "got";
import { HttpsProxyAgent } from "hpagent";

const result = await got("https://www.google.com", {
  agent: {
    https: new HttpsProxyAgent({
      proxy: process.env.https_proxy,
      rejectUnauthorized: false, // If true => RequestError: unable to get local issuer certificate
    }),
  },
}).text();
console.log("result:", result);
On the other hand, this request to https://jsonplaceholder.typicode.com works without setting rejectUnauthorized: false
const result = await got("https://jsonplaceholder.typicode.com", {
agent: {
https: new HttpsProxyAgent({
proxy: process.env.https_proxy,
}),
},
}).text();
Can you please explain this inconsistency and how to resolve it?
Note: I'm using Node.js 14.17.6
I am currently using node-postgres to create my pool. This is my current code:
const { Pool } = require('pg')

const pgPool = new Pool({
  user: process.env.PGUSER,
  password: process.env.PGPASSWORD,
  host: process.env.PGHOST,
  database: process.env.PGDATABASE,
  port: process.env.PGPORT,
  ssl: {
    rejectUnauthorized: true,
    // Would like to add line below
    // ca: process.env.CACERT,
  },
})
I found another post where they read in the cert using fs, as seen below.
const fs = require('fs')

const config = {
  database: 'database-name',
  host: 'host-or-ip',
  user: 'username',
  password: 'password',
  port: 1234,
  // this object will be passed to the TLSSocket constructor
  ssl: {
    ca: fs.readFileSync('/path/to/digitalOcean/certificate.crt').toString()
  }
}
I am unable to do that as I am using git to deploy my application, specifically Digital Ocean's new App Platform. I have attempted reaching out to them with no success. I would prefer not to commit my certificate to source control. I see a lot of posts suggesting to set
ssl: { rejectUnauthorized: false }
That is not the approach I want to take. My code does work with that, but I want it to be secure.
Any help is appreciated, thanks.
Alright, I finally was able to figure it out. I think the issue was the multiline value and just unfamiliarity with dotenv in my local development environment.
I was able to get it all working with my code like this. It also worked with fs.readFileSync(), but I didn't want to commit the certificate to my source control.
const { Pool } = require('pg')
const fs = require('fs')

const pgPool = new Pool({
  user: process.env.PGUSER,
  password: process.env.PGPASSWORD,
  host: process.env.PGHOST,
  database: process.env.PGDATABASE,
  port: process.env.PGPORT,
  ssl: {
    rejectUnauthorized: true,
    // ca: fs.readFileSync(
    //   `${process.cwd()}/cert/ca-certificate.crt`
    // ).toString(),
    ca: process.env.CA_CERT,
  },
})
  .on('connect', () => {
    console.log('connected to the database!')
  })
  .on('error', (err) => {
    console.log('error connecting to database ', err)
  })
Now in my config.env I had to make it look like this:
CA_CERT="-----BEGIN CERTIFICATE-----\nVALUES HERE WITH NO SPACES AND A \n
AFTER EACH LINE\n-----END CERTIFICATE-----"
I had to keep it as a single-line string to have it work. But I was finally able to connect with
{ rejectUnauthorized: true }
For the Digital Ocean App Platform environment variable, I copied everything including the double quotes and pasted it in there. Seems to work great. I do not think you will be able to have this setting set to true with their $7 development database though; I had to upgrade to the managed one in order to find any CA cert to download.
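One gotcha worth noting (my addition, not from the original answer): some platforms hand the variable back with literal \n sequences instead of real newlines, in which case the value needs to be unescaped before passing it to pg. A minimal sketch:

// Convert literal "\n" sequences into real newlines before use.
const ca = process.env.CA_CERT && process.env.CA_CERT.replace(/\\n/g, '\n');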
I am trying to connect to an Amazon PostgreSQL RDS instance using a Node.js lambda.
The lambda is in the same VPC as the RDS instance, and as far as I can tell the security groups are set up to give the lambda access to the RDS. The lambda is called through API Gateway, and I'm using knex.js as a query builder. When the lambda attempts to connect to the database it throws an "unable to get local issuer certificate" error, even though the connection parameters are what I expect them to be.
I know this connection is possible as I've already implemented it in a different environment, without receiving the certificate issue. I've compared the two environments but cannot find any immediate differences.
The connection code looks like this:
import AWS from 'aws-sdk';
import { types } from 'pg';
import { Moment } from 'moment';
import knex from 'knex';

const TIMESTAMP_OID = 1114;

// Example value string: "2018-10-04 12:30:21.199"
types.setTypeParser(TIMESTAMP_OID, (value) => value && new Date(`${value}+00`));

export default class Database {
  /**
   * Gets the connection information through AWS Secrets Manager
   */
  static getConnection = async () => {
    const client = new AWS.SecretsManager({
      region: '<region>',
    });

    if (process.env.databaseSecret == null) {
      throw 'Database secret not defined';
    }

    const response = await client
      .getSecretValue({ SecretId: process.env.databaseSecret })
      .promise();

    if (response.SecretString == undefined) {
      throw 'Cannot find secret string';
    }

    return JSON.parse(response.SecretString);
  };

  static knexConnection = knex({
    client: 'postgres',
    connection: async () => {
      const secret = await Database.getConnection();

      return {
        host: secret.host,
        port: secret.port,
        user: secret.username,
        password: secret.password,
        database: secret.dbname,
        ssl: true,
      };
    },
  });
}
Any guidance on how to solve this issue or even where to start looking would be greatly appreciated.
First of all, it is not a good idea to bypass SSL verification: doing so can make you vulnerable to various exploits and skips a critical step in the TLS handshake.
What you can do instead is programmatically download the CA certificate chain bundle from Amazon and place it in the root directory of the lambda alongside the handler.
wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem -P path/to/handler
Note: you can do this in your buildspec.yaml or in the script that packages the zip file that gets uploaded to AWS.
Then set the ssl configuration option to the contents of the pem file in your postgres client configuration, like this:
const fs = require('fs')
const path = require('path')
const postgres = require('pg')

let pgClient = new postgres.Client({
  user: 'postgres',
  host: 'rds-cluster.cluster-abc.us-west-2.rds.amazonaws.com',
  database: 'mydatabase',
  password: 'postgres',
  port: 5432,
  ssl: {
    ca: fs.readFileSync(path.resolve('rds-combined-ca-bundle.pem'), "utf-8")
  }
})
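Since the question uses knex, the same ssl object can be passed through knex's connection settings. A sketch using the same placeholder values as the pg example above (host, credentials, and bundle path are assumptions):

const fs = require('fs');
const path = require('path');
const knex = require('knex');

// knex forwards the connection object (including ssl) to the pg driver.
const db = knex({
  client: 'postgres',
  connection: {
    host: 'rds-cluster.cluster-abc.us-west-2.rds.amazonaws.com',
    port: 5432,
    user: 'postgres',
    password: 'postgres',
    database: 'mydatabase',
    ssl: {
      ca: fs.readFileSync(path.resolve('rds-combined-ca-bundle.pem'), 'utf-8'),
    },
  },
});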
I know this is old, but just ran into this today. Running with node 10 and an older version of the pg library worked just fine. Updating to node 16 with pg version 8.x caused this error (simplified):
UNABLE_TO_GET_ISSUER_CERT_LOCALLY
In the past, you could indeed just set the ssl parameter to true or 'true' and it would work with the default AWS RDS certificate. Now, it seems we need to at least tell node/pg to ignore the cert verification (since it's self-generated).
Using ssl: 'no-verify' works: it enables SSL and tells pg to skip verification of the cert chain (source).
UPDATE
For clarity, here's what the connection config would look like. With knex, the same client info is passed to pg, so it should look similar to a pg client connection.
static knexConnection = knex({
  client: 'postgres',
  connection: async () => {
    const secret = await Database.getConnection();

    return {
      host: secret.host,
      port: secret.port,
      user: secret.username,
      password: secret.password,
      database: secret.dbname,
      ssl: 'no-verify',
    };
  },
});
For a project I have to connect to an FTPS server over an implicit connection.
I tried node-ftp, because it seems to be the only library that supports implicit connections.
I connect using the following code:
var FTPClient = require('ftp');

var ftpC = new FTPClient();

ftpC.on('ready', function () {
  console.log('Connection successful!');
});

ftpC.on('error', function (err) {
  console.log(err);
});

console.log('Try to connect to FTP Server...');
ftpC.connect({
  host: HOST_TO_CONNECT,
  port: 990,
  secure: 'implicit',
  user: '***',
  password: '***',
  secureOptions: {
    rejectUnauthorized: false
    // secureProtocol: 'SSLv23_method',
    // ciphers: 'ECDHE-RSA-AES128-GCM-SHA256'
  }
})
This code gives me a timeout error every time. If I raise the timeout, the error just comes later.
In secureOptions I tried adding the params rejectUnauthorized, secureProtocol and ciphers, as you can see. None of them works; every time I get this timeout error.
In FileZilla I have no problem connecting. Everything works fine.
Does someone have a solution for this behavior?
Or is there another Node.js library for connecting to an implicit FTPS server?
This appears to be a bug in node-ftp. I've created a PR for it and will update this as soon as it's been fixed.
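In the meantime, one alternative worth trying (my suggestion, not part of the answer above) is the basic-ftp package, which also supports implicit TLS through secure: 'implicit'. A minimal sketch, with host and credentials as placeholders:

const ftp = require('basic-ftp');

async function connectImplicit() {
  const client = new ftp.Client(30000); // timeout in milliseconds

  try {
    await client.access({
      host: 'HOST_TO_CONNECT',
      port: 990,
      user: '***',
      password: '***',
      secure: 'implicit',
      secureOptions: {
        rejectUnauthorized: false,
      },
    });
    console.log('Connection successful!');
  } catch (err) {
    console.log(err);
  } finally {
    client.close();
  }
}

connectImplicit();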