Redis sentinel connection is timing out from Node.js

I'm trying to connect to a Redis sentinel instance from Node.js using ioredis, but I'm unable to connect despite trying the multiple available options. We have not configured a sentinel password. However, I am able to connect to the same Redis sentinel instance from .NET Core using StackExchange.Redis. Please find the Node.js code below:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import IORedis from 'ioredis';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  const ioredis = new IORedis({
    sentinels: [
      { host: 'sentinel-host-1' },
      { host: 'sentinel-host-2' },
      { host: 'sentinel-host-3' },
    ],
    name: 'mastername',
    password: 'password',
    showFriendlyErrorStack: true,
  });
  try {
    await ioredis.set('foo', 'bar'); // await so the catch below can see the rejection
  } catch (exception) {
    console.log(exception);
  }
  await app.listen(3000);
}
bootstrap();
The error we got is:
[ioredis] Unhandled error event: Error: connect ETIMEDOUT
node_modules\ioredis\built\redis\index.js:317:37)
at Object.onceWrapper (node:events:475:28)
at Socket.emit (node:events:369:20)
at Socket._onTimeout (node:net:481:8)
at listOnTimeout (node:internal/timers:557:17)
at processTimers (node:internal/timers:500:7)
The connection string used from .NET Core is below:
Redis_Configuration = "host-1,host-2,host-3,serviceName=mastername,password=password,abortConnect=False,connectTimeout=1000,responseTimeout=1000";

Answering this for the benefit of others. Everything was fine, but this Node.js package resolves the Redis instances to private IPs, which I could not access from my local machine. So I had to put it over a subnet group to make it work. FYI: the .NET Core package does not resolve to private IPs, which is why I was able to access the instances from my local machine.
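For anyone who cannot place their client inside the same network, ioredis also has a natMap option that remaps the privately-announced addresses to ones you can reach. A sketch (the private IPs and public host names below are hypothetical, and sentinel-mode natMap support requires a reasonably recent ioredis 4.x):
const IORedis = require('ioredis');

const ioredis = new IORedis({
  sentinels: [{ host: 'sentinel-host-1' }],
  name: 'mastername',
  // map each privately-announced address to one reachable from this machine
  natMap: {
    '10.0.0.1:6379': { host: 'public-host-1', port: 6379 },
    '10.0.0.2:6379': { host: 'public-host-2', port: 6379 },
  },
});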

"The arguments passed to the constructor are different from the ones you use to connect to a single node"
Try replacing password with sentinelPassword.
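For context, in ioredis's sentinel mode password authenticates against the master/replica data nodes, while sentinelPassword is sent to the sentinels themselves. Since the question's setup has no sentinel password configured, a sketch like this (reusing the question's placeholder hosts) shows where each option belongs:
const ioredis = new IORedis({
  sentinels: [
    { host: 'sentinel-host-1' },
    { host: 'sentinel-host-2' },
    { host: 'sentinel-host-3' },
  ],
  name: 'mastername',
  password: 'password',       // AUTH for the data nodes (master/replicas)
  // sentinelPassword: '...', // only if the sentinels themselves require AUTH
});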

Related

Connection timed out while connecting to AWS DocumentDB outside the VPC

I'm trying to create a very simple Node app that can use DocumentDB. I'm not using Cloud9 or Lambda; I'm coding locally. I was following https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html and https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-ec2.html
I created a poorly secured EC2 instance with the following inbound rules:

port range | protocol | source    | security group
22         | TCP      | 0.0.0.0/0 | demoEC2
This demoEC2 security group has the following inbound rules:

type | protocol | port range | source
SSH  | TCP      | 22         | 0.0.0.0/0
Then I created a DocumentDB cluster with 1 instance available, belonging to a security group with the following inbound rules:

type       | protocol | port range | source
custom tcp | TCP      | 27017      | demoEC2
After that, I opened my terminal and created a tunnel:
ssh -i "mykeypair.pem" -L 27017:<CLUSTER ENDPOINT>:27017 ec2-user@<EC2 PUBLIC IPV4 DNS> -N
And to test that my tunnel is working, I connect using the mongo shell:
> mongo "mongodb://<MASTER USERNAME>:<MASTER PASSWORD>@localhost:27017/<DATABASE>" --tls --tlsAllowInvalidHostnames --tlsCAFile rds-combined-ca-bundle.pem
MongoDB shell version v4.2.13
connecting to: mongodb://localhost:27017/<DATABASE>?compressors=disabled&gssapiServiceName=mongodb
2021-07-29T10:10:59.309+0200 W NETWORK [js] The server certificate does not match the host name. Hostname: localhost does not match docdb-2021-07-27-10-32-49.ctuxybn342pe.eu-central-1.docdb.amazonaws.com docdb-2021-07-27-10-32-49.cluster-ctuxybn342pe.eu-central-1.docdb.amazonaws.com docdb-2021-07-27-10-32-49.cluster-ro-ctuxybn342pe.eu-central-1.docdb.amazonaws.com , Subject Name: C=US,ST=Washington,L=Seattle,O=Amazon.com,OU=RDS,CN=docdb-2021-07-27-10-32-49.ctuxybn342pe.eu-central-1.docdb.amazonaws.com
Implicit session: session { "id" : UUID("63340995-54ad-471b-aa8d-85763f3c7281") }
MongoDB server version: 4.0.0
WARNING: shell and server versions do not match
Warning: Non-Genuine MongoDB Detected
This server or service appears to be an emulation of MongoDB rather than an official MongoDB product.
Some documented MongoDB features may work differently, be entirely missing or incomplete, or have unexpected performance characteristics.
To learn more please visit: https://dochub.mongodb.org/core/non-genuine-mongodb-server-warning.
rs0:PRIMARY>
However, when I try to connect in my node app:
const mongoose = require('mongoose');
const fs = require('fs');
const path = require('path');

const username = ...
const password = ...
const database = ...
const connstring = `mongodb://${username}:${password}@localhost:27017/${database}?tls=true&replicaSet=rs0&readPreference=secondaryPreferred`;
const certFile = path.resolve(__dirname, './rds-combined-ca-bundle.pem');
const certFileBuf = fs.readFileSync(certFile); // I tried this one in the tlsCAFile option as well

mongoose.connect(connstring, {
  tlsCAFile: certFile,
  useNewUrlParser: true,
  tlsAllowInvalidHostnames: true,
}).then(() => console.log('Connection to DB successful'))
  .catch((err) => console.error(err, 'Error'));
I get a connection timeout error after a while:
> node .\index.js
(node:12388) [MONGODB DRIVER] Warning: Current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
MongoNetworkError: failed to connect to server [<CLUSTER ENDPOINT WITHOUT HAVING .cluster->:27017] on first connect [MongoNetworkTimeoutError: connection timed out
at connectionFailureError (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:345:14)
at TLSSocket.<anonymous> (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:313:16)
at Object.onceWrapper (events.js:421:28)
at TLSSocket.emit (events.js:315:20)
at TLSSocket.Socket._onTimeout (net.js:481:8)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7)]
at Pool.<anonymous> (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\topologies\server.js:441:11)
at Pool.emit (events.js:315:20)
at D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\pool.js:564:14
at D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\pool.js:1013:9
at D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:32:7
at callback (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:283:5)
at TLSSocket.<anonymous> (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:313:7)
at Object.onceWrapper (events.js:421:28)
at TLSSocket.emit (events.js:315:20)
at TLSSocket.Socket._onTimeout (net.js:481:8)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7) Error
Since I could connect using the mongo shell, I think the tunnel is working, and I can even do some inserts through it. So why can't Mongoose connect? I also tried using MongoClient (const MongoClient = require('mongodb').MongoClient and MongoClient.connect(same everything)), but it didn't work; I still got the same timeout error.
Turns out all I needed to do was pass the username and password through the options, not in the connection string:
const connstring = `mongodb://localhost:27017/${database}`;
const certFile = path.resolve(__dirname, './rds-combined-ca-bundle.pem');
const certFileBuf = fs.readFileSync(certFile);

mongoose.connect(connstring, {
  tls: true,
  tlsCAFile: certFile,
  useNewUrlParser: true,
  tlsAllowInvalidHostnames: true,
  auth: {
    username,
    password
  }
})
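For completeness, the same fix should apply with the native driver. A sketch, not tested against DocumentDB (note that the auth option takes user/password in the 3.x driver but username/password in 4.x):
const { MongoClient } = require('mongodb');

MongoClient.connect(`mongodb://localhost:27017/${database}`, {
  tls: true,
  tlsCAFile: certFile,
  tlsAllowInvalidHostnames: true,
  auth: { username, password }, // credentials via options, not in the URI
}).then(() => console.log('Connection to DB successful'))
  .catch((err) => console.error(err, 'Error'));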

Managed DigitalOcean Redis instance giving Redis AbortError

I set up managed Redis and managed Postgres on DigitalOcean. DigitalOcean gave me a .crt file; I didn't know what to do with it, so I didn't do anything with it. Could this be the root of the problem below?
Or do I have to allow the Docker container to reach outside of the container on the rediss protocol?
I dockerized a Node app and then put the container onto my droplet. I have my droplet and managed Redis and Postgres in the same region (SFO2). It connects to Redis using this URL:
url: 'rediss://default:REMOVED_THIS_PASSWORD@my-new-app-sfo2-do-user-5053627-0.db.ondigitalocean.com:25061/0',
I then ran my Docker container with docker run.
It then gives me this error:
node_redis: WARNING: You passed "rediss" as protocol instead of the "redis" protocol!
events.js:186
throw er; // Unhandled 'error' event
^
AbortError: Connection forcefully ended and command aborted. It might have been processed.
at RedisClient.flush_and_error (/opt/apps/mynewapp/node_modules/redis/index.js:362:23)
at RedisClient.end (/opt/apps/mynewapp/node_modules/redis/lib/extendedApi.js:52:14)
at RedisClient.onPreConnectionEnd (/opt/apps/mynewapp/node_modules/machinepack-redis/machines/get-connection.js:157:14)
at RedisClient.emit (events.js:209:13)
at RedisClient.connection_gone (/opt/apps/mynewapp/node_modules/redis/index.js:590:14)
at Socket.<anonymous> (/opt/apps/mynewapp/node_modules/redis/index.js:293:14)
at Object.onceWrapper (events.js:298:28)
at Socket.emit (events.js:214:15)
at endReadableNT (_stream_readable.js:1178:12)
at processTicksAndRejections (internal/process/task_queues.js:80:21)
Emitted 'error' event on RedisClient instance at:
at /opt/apps/mynewapp/node_modules/redis/index.js:310:22
at Object.callbackOrEmit [as callback_or_emit] (/opt/apps/mynewapp/node_modules/redis/lib/utils.js:89:9)
at Command.callback (/opt/apps/mynewapp/node_modules/redis/lib/individualCommands.js:199:15)
at RedisClient.flush_and_error (/opt/apps/mynewapp/node_modules/redis/index.js:374:29)
at RedisClient.end (/opt/apps/mynewapp/node_modules/redis/lib/extendedApi.js:52:14)
[... lines matching original stack trace ...]
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
code: 'NR_CLOSED',
command: 'AUTH',
args: [ 'REMOVED_I_DONT_KNOW_IF_THIS_IS_SENSITIVE' ]
The redis protocol is different from rediss because the latter uses a TLS connection. DigitalOcean Managed Redis requires connections to be made over TLS, so you have to use rediss. However, I couldn't find any info about the TLS certificate provided by DigitalOcean for connecting to the Managed Redis service.
Based on your error message, I presume you're using the redis package. If that's the case, you can pass an empty tls object in the options alongside the connection string, like so:
const Redis = require('redis')

const host = 'db-redis.db.ondigitalocean.com'
const port = '25061'
const username = 'user'
const password = 'secret'
const url = `${username}:${password}@${host}:${port}`

const client = Redis.createClient(url, { tls: {} })
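One follow-up worth noting: node_redis reports connection problems as 'error' events, and an unhandled 'error' event crashes the process (that is what "Emitted 'error' event on RedisClient instance" in the trace above means), so attach a listener:
client.on('error', (err) => {
  // surfaces TLS/auth failures instead of crashing the process
  console.error('redis error:', err)
})

client.set('foo', 'bar', (err) => {
  if (!err) console.log('connected to DigitalOcean Redis over TLS')
})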
Further reading/source:
SSL connections arrive for Redis on Compose
Connecting to IBM Cloud Databases for Redis from Node.js
I solved this. Below are snippets from config/env/production.js
Sockets
For sockets, to enable rediss you have to pass all options in through adapterOptions, like this:
sockets: {
  onlyAllowOrigins: ['https://my-website.com'],

  // Pass everything in via adapterOptions so it gets through to redis-adapter.
  // I need "rediss", but that URL scheme is not supported and gives an error,
  // so I pass an empty `tls` object instead. You can see how things get moved
  // into `adapterOptions` here - https://github.com/balderdashy/sails-hook-sockets/blob/master/lib/configure.js#L128
  adapterOptions: {
    user: 'username',
    pass: 'password',
    host: 'host',
    port: 9999,
    db: 2, // pick a number
    tls: {},
  },

  adapter: '@sailshq/socket.io-redis',
},
Session
For session, pass an empty tls: {} object to the config:
session: {
  pass: 'password',
  host: 'host',
  port: 9999,
  db: 1, // pick a number not used by sockets
  tls: {},
  cookie: {
    secure: true,
    maxAge: 24 * 60 * 60 * 1000, // 24 hours
  },
},

AWS lambda with mongoose to Atlas - MongoNetworkError

I am trying to connect to MongoDB Atlas with mongoose and AWS Lambda, but I get a MongoNetworkError.
The same code was tested with serverless-offline and works perfectly; the problem occurs when I deploy it to AWS Lambda.
This is the code snippet:
'use strict';
const mongoose = require('mongoose');
const MongoClient = require('mongodb').MongoClient;

let dbuser = process.env.DB_USER;
let dbpass = process.env.DB_PASSWORD;

let opts = {
  bufferCommands: false,
  bufferMaxEntries: 0,
  socketTimeoutMS: 2000000,
  keepAlive: true,
  reconnectTries: 30,
  reconnectInterval: 500,
  poolSize: 10,
  ssl: true,
};

const uri = `mongodb+srv://${dbuser}:${dbpass}@carpoolingcluster0-bw91o.mongodb.net/awsmongotest?retryWrites=true&w=majority`;

// simple hello test
module.exports.hello = async (event, context, callback) => {
  const response = {
    body: JSON.stringify({ message: 'AWS Testing :: ' + `${dbuser} and ${dbpass}` }),
  };
  return response;
};

// connect using mongoose
module.exports.cn1 = async (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  let conn = await mongoose.createConnection(uri, opts);
  const M = conn.models.Test || conn.model('Test', new mongoose.Schema({ name: String }));
  const doc = await M.find();
  const response = {
    body: JSON.stringify({ data: doc }),
  };
  return response;
};

// connect using mongodb
module.exports.cn2 = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  console.log('Connect to mongo using connectmongo');
  MongoClient.connect(uri).then(client => {
    console.log('Success connect to mongo DB::::');
    client.db('awsmongotest').collection('tests').find({}).toArray()
      .then((result) => {
        let response = {
          body: JSON.stringify({ data: result }),
        };
        callback(null, response);
      });
  }).catch(err => {
    console.log('=> an error occurred: ', err);
    callback(err);
  });
};
In the CloudWatch logs I see this error:
{
"errorType": "MongoNetworkError",
"errorMessage": "failed to connect to server [carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017] on first connect [MongoNetworkError: connection 5 to carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017 closed]",
"stack": [
"MongoNetworkError: failed to connect to server [carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017] on first connect [MongoNetworkError: connection 5 to carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017 closed]",
" at Pool.<anonymous> (/var/task/node_modules/mongodb-core/lib/topologies/server.js:431:11)",
" at Pool.emit (events.js:189:13)",
" at connect (/var/task/node_modules/mongodb-core/lib/connection/pool.js:557:14)",
" at callback (/var/task/node_modules/mongodb-core/lib/connection/connect.js:109:5)",
" at runCommand (/var/task/node_modules/mongodb-core/lib/connection/connect.js:129:7)",
" at Connection.errorHandler (/var/task/node_modules/mongodb-core/lib/connection/connect.js:321:5)",
" at Object.onceWrapper (events.js:277:13)",
" at Connection.emit (events.js:189:13)",
" at TLSSocket.<anonymous> (/var/task/node_modules/mongodb-core/lib/connection/connection.js:350:12)",
" at Object.onceWrapper (events.js:277:13)",
" at TLSSocket.emit (events.js:189:13)",
" at _handle.close (net.js:597:12)",
" at TCP.done (_tls_wrap.js:388:7)"
],
"name": "MongoNetworkError",
"errorLabels": [
"TransientTransactionError"
]
}
Here is an example on GitHub to reproduce the error:
https://github.com/rollrodrig/error-aws-mongo-atlas
Just clone it, npm install, add your Mongo Atlas user and password, and push it to AWS.
Thanks.
Some extra steps are required to let Lambda call an external endpoint:
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
Your Atlas cluster should also whitelist the IP addresses of the servers from which Lambda will connect.
Another option to consider is VPC peering between your Lambda VPC and Atlas.
I have some questions concerning your configuration:
Did you whitelist the AWS Lambda function's IP address in Atlas? Several posts on SO indicate that users get a MongoNetworkError like this if the IP is not whitelisted. [1][4]
Did you read the best-practices guide by Atlas, which states that MongoDB connections should be initiated outside the Lambda handler? [2][3] (A sketch of that pattern follows at the end of this answer.)
Do you use a public Lambda function or a Lambda function inside a VPC? There is a substantial difference between them, and the latter is more error-prone since the VPC configuration (e.g. NAT) must be taken into account.
I was able to ping the instances in the Atlas cluster and was able to establish a connection on port 27017. However, when connecting via the mongo shell, I get the following error:
Unable to reach primary for set CarpoolingCluster0-shard-0.
Cannot reach any nodes for set CarpoolingCluster0-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.
When I use your GitHub sample from AWS lambda I get the exact same error message as described in the question.
As the error messages are not authentication-related but network-related, I assume that something is blocking the connection... Please double-check the three config questions above.
[1] What is a TransientTransactionError in Mongoose (or MongoDB)?
[2] https://docs.atlas.mongodb.com/best-practices-connecting-to-aws-lambda/
[3] https://blog.cloudboost.io/i-wish-i-knew-how-to-use-mongodb-connection-in-aws-lambda-f91cd2694ae5
[4] https://github.com/Automattic/mongoose/issues/5237
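As a sketch of the connection-reuse pattern from [2] (not the questioner's exact code; it reuses the uri and opts defined in the question and caches the connection in a module-level variable):
'use strict';
const mongoose = require('mongoose');

let conn = null; // survives across warm invocations of the same container

module.exports.cn1 = async (event, context) => {
  // don't keep the Lambda alive just because the pool holds open sockets
  context.callbackWaitsForEmptyEventLoop = false;
  if (conn == null) {
    // created once per container, reused while the container stays warm
    conn = await mongoose.createConnection(uri, opts);
    conn.model('Test', new mongoose.Schema({ name: String }));
  }
  const docs = await conn.model('Test').find();
  return { body: JSON.stringify({ data: docs }) };
};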
Well, thanks everyone. I finally found the solution with the help of Mongo support.
Here is the solution for anyone who needs it.
When you create a Mongo Atlas cluster, they ask you to add your local IP, and it is automatically added to the whitelist. You can see it under Your cluster > Network Access > IP Whitelist; there in the list you will see your IP. It means that only people from YOUR network will be able to connect to your Mongo Atlas. AWS Lambda is NOT in your network, so AWS Lambda will never connect to your Mongo Atlas. That is why I got the MongoNetworkError.
Fix
You need to add the AWS Lambda IP to the Mongo Atlas IP whitelist:
Go to Your cluster > Network Access > IP Whitelist.
Click the ADD IP ADDRESS button.
Click ALLOW ACCESS FROM ANYWHERE; it will add the IP 0.0.0.0/0 to the list. Click confirm.
Test your call from AWS Lambda, and it will work.
FINALLY!
What you did is tell Mongo Atlas that ANYONE from ANYWHERE can connect to it.
Of course this is not good practice. What you really want is to add only the AWS Lambda IP, and this is where a VPC comes into the picture.
Creating a VPC is a little complex and has many steps; there are good tutorials in the other comments.
But this small guide does tackle the MongoNetworkError.

Random SSL handshake error when connecting to ElastiCache with ioRedis

I am attempting to connect to an ElastiCache cluster that is encrypted in transit from a node script using ioRedis. Sometimes my script works; other times I get Error: 140736319218624:error:140940E5:SSL routines:ssl3_read_bytes:ssl handshake failure:../deps/openssl/openssl/ssl/s3_pkt.c:1216:
Here is all of my code:
var Redis = require('ioredis');

var nodes = [{
  host: 'clustercfg.name.xxxxxx.region.cache.amazonaws.com',
  port: '6379',
}];

var cluster = new Redis.Cluster(nodes, {
  redisOptions: {
    tls: {}
  }
});

cluster.set('aws', 'test');
cluster.get('aws', function (err, res) {
  console.log(res);
  if (err) {
    console.error(err);
  }
  cluster.disconnect();
});
I believe the ssl handshake error is a side-effect of a race-condition bug in ioredis.
I have been banging my head against the same issue for the last several days (ioredis version 4.0.0). I just couldn't reliably connect ioredis to our ElastiCache cluster; I would see the same intermittent error:
Error: 140618195700616:error:140940E5:SSL routines:ssl3_read_bytes:ssl
handshake failure:../deps/openssl/openssl/ssl/s3_pkt.c:1216:
You can view ioredis debug output by setting "DEBUG=ioredis:*" in your node environment. Once I did this I could see that when the error occurred it was accompanied by several messages similar to the following:
2018-10-06T18:24:38.287Z ioredis:cluster:connectionPool Disconnect
xxx.usw2.cache.amazonaws.com:6379 because the node does not hold any
slot
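For reference, ioredis uses the debug package internally, so enabling that output is just an environment variable (assuming the test script above is saved as cluster-test.js):
DEBUG=ioredis:* node cluster-test.js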
I tried node-redis with redis-clustr and it works fine with ElastiCache.
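That combination would look roughly like this (a sketch built from redis-clustr's documented createClient hook, reusing the question's configuration endpoint; the empty tls object mirrors the ioredis config above):
const RedisClustr = require('redis-clustr');
const redis = require('redis');

const cluster = new RedisClustr({
  servers: [{
    host: 'clustercfg.name.xxxxxx.region.cache.amazonaws.com',
    port: 6379,
  }],
  // let node_redis handle the TLS handshake for each cluster node
  createClient: (port, host) => redis.createClient(port, host, { tls: {} }),
});

cluster.set('aws', 'test', (err) => {
  if (err) console.error(err);
});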

How to use PM2 Cluster with Socket IO?

I am developing an application that relies completely on Socket.IO. As we all know, Node.js by default runs on only one core. Now I would like to scale it across multiple cores. I am finding it difficult to make Socket.IO work with PM2 cluster mode. Any sample code would help.
I am using Artillery to test. When the app runs on a single core I get the response, but when it runs in cluster mode the response is NaN.
(Screenshot: Artillery results when run without cluster.)
The PM2 docs say:
Be sure your application is stateless meaning that no local data is
stored in the process, for example sessions/websocket connections,
session-memory and related. Use Redis, Mongo or other databases to
share states between processes.
Socket.io is not stateless.
Kubernetes implementations get around the stateful issue by routing based on source IP to a specific instance. This is still not 100% reliable, since some sources may present more than one IP address. I know this is not PM2, but it gives you an idea of the complexity.
NESTjs SERVER
I use Socket.IO server 2.4.1, so I get the compatible Redis adapter, which is socket.io-redis 5.4.0.
I need to extend Nest's adapter class IoAdapter; that class only works for normal ws connections, not our PM2 clusters.
import { IoAdapter } from '@nestjs/platform-socket.io';
import * as redisIOAdapter from 'socket.io-redis';
import { config } from './config';

export class RedisIoAdapter extends IoAdapter {
  createIOServer(port: number, options?: any): any {
    const server = super.createIOServer(port, options);
    const redisAdapter = redisIOAdapter({
      host: config.server.redisUrl,
      port: config.server.redisPort,
    });
    server.adapter(redisAdapter);
    return server;
  }
}
That is actually the NestJS implementation.
Now I need to tell Nest I'm using that implementation, so I go to main.ts:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { config } from './config';
import { RedisIoAdapter } from './socket-io.adapter';
import { EventEmitter } from 'events';

async function bootstrap() {
  EventEmitter.defaultMaxListeners = 15;
  const app = await NestFactory.create(AppModule);
  app.enableCors();
  app.useWebSocketAdapter(new RedisIoAdapter(app));
  await app.listen(config.server.port);
}
bootstrap();
I have a lot of events for this one, so I had to raise my max listener count.
Now, for every gateway you've got, you need to use a different connection strategy: instead of using polling, you need to go to websocket directly.
...
@WebSocketGateway({ transports: ['websocket'] })
export class AppGateway implements OnGatewayConnection, OnGatewayDisconnect {
...
or if you are using namespaces
...
@WebSocketGateway({ transports: ['websocket'], namespace: 'user' })
export class UsersGateway {
...
The last step is to install the Redis database on your AWS instance (that is a separate task) and also install PM2:
nest build
npm i -g pm2
pm2 start dist/main.js -i 4
CLIENT
const config: SocketIoConfig = {
  url: environment.server.admin_url, // e.g. http://localhost:3000
  options: {
    transports: ['websocket'],
  },
};
You can now test your websocket server using FireCamp.
Try using this lib:
https://github.com/luoyjx/socket.io-redis-stateless
It makes Socket.IO stateless through Redis.
You need to set up Redis with your Node server. Here is how I managed to get cluster mode to work with Socket.io.
First install Redis. If you are using Ubuntu, follow this link: https://www.digitalocean.com/community/tutorials/how-to-install-and-secure-redis-on-ubuntu-18-04
Then:
npm i socket.io-redis
Now place Redis in your Node server
const redisAdapter = require('socket.io-redis')
global.io = require('socket.io')(server, { transports: [ 'websocket' ]})
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }))
That was all I had to do to get PM2 cluster mode to work with Socket.io on my server.
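For completeness, the server is then started under PM2's cluster mode (assuming the entry point is server.js; -i max spawns one worker per core):
pm2 start server.js -i max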
