Redis client via Amazon Elasticache is ignoring host parameter from function and using default IP address - node.js

I have a Redis cluster set up that I need to connect to from a Lambda function. I'm using the following code to create the client and connect to Redis:
const getClient = async (clientargs) => {
  const params = {
    no_ready_check: true,
    connect_timeout: clientargs.timeout == null ? 30000 : clientargs.timeout,
    enable_offline_queue: false,
    db: clientargs.index,
    retry_strategy: undefined
  };
  logger.debug('db getClient: Connecting to redis client.');
  logger.debug('db getClient params', params);
  const client = redis.createClient(clientargs.host, clientargs.port, params);
  await client.connect();
  // do stuff once it's connected
}
clientargs.host stores the ElastiCache Redis endpoint, and I'm using the default port 6379. I've checked that the host and port are passed correctly into the getClient function, and that they reach redis.createClient as well. I was originally debugging in a local environment and noticed that it was trying to connect to 127.0.0.1 instead of the ElastiCache endpoint, so I ran a few tests within the AWS Console, but it still tries 127.0.0.1:6379.
I have also tried passing the host and port arguments into the client.connect() function, but had no luck with that either.
The Lambda function and the ElastiCache Redis cluster are in the same VPC and security group, so I don't think that is the issue, but I am fairly new to AWS as a whole, so any help/thoughts would be appreciated!
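One plausible cause, assuming node-redis v4 (which is what `await client.connect()` implies): v4 no longer honors the legacy `createClient(host, port, options)` signature and silently falls back to 127.0.0.1:6379. In v4 the host and port go under a `socket` key in a single options object. A sketch; `buildClientOptions` is a hypothetical helper mapping the poster's `clientargs`:

```javascript
// Hypothetical helper: maps the poster's clientargs onto a node-redis v4
// options object. In v4, host/port live under `socket`, `db` becomes
// `database`, and the offline queue is disabled via `disableOfflineQueue`.
function buildClientOptions(clientargs) {
  return {
    socket: {
      host: clientargs.host,
      port: clientargs.port,
      connectTimeout: clientargs.timeout == null ? 30000 : clientargs.timeout,
    },
    database: clientargs.index,
    disableOfflineQueue: true,
  };
}

async function getClient(clientargs) {
  const redis = require('redis'); // node-redis v4+
  const client = redis.createClient(buildClientOptions(clientargs));
  await client.connect();
  return client;
}
```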

Related

Different Behavior Deploying AWS Lambda Standalone vs within an Application Stack

Hi everybody and thanks for taking time to look at my issue/question.
I am getting different results when deploying my AWS Lambda stand-alone versus within an Application Stack.
I'm trying to connect to AWS Elasticache Redis from within my Lambda. I have .Net Core 3.1 Lambdas (using StackExchange.Redis) which can connect. But I also need to be able to connect from my Node.js Lambdas.
For the Node.js Lambdas, I'm using "node-redis" and "async-redis". I have two Lambdas which are essentially identical, except that one is deployed in an Application Stack and the other is deployed as a stand-alone Lambda. Both Lambdas reference the same Lambda Layer (i.e. the same "node_modules"), have the same VPC settings, the same Execution Role, and essentially the same code.
The stand-alone Lambda connects to Redis without issue. The Application Stack Lambda does not and exits processing before completing but without raising any error.
At first I thought I might just need to configure my Application Stack but I cannot find any information indicating we even can configure Application Stacks. So I'm at a loss.
The stand-alone Lambda:
exports.handler = async (event) => {
  const asyncRedis = require("async-redis");
  const redisOptions = {
    host: "XXXXXXXXX.XXXXX.XXXX.use2.cache.amazonaws.com",
    port: 6379
  };
  console.log('A');
  const client = asyncRedis.createClient(redisOptions);
  console.log(client);
  console.log('B');
  const value = await client.get("Key");
  console.log('C');
  console.log(value);
  console.log('D');
  console.log(client);
};
The output of this function is essentially:
A
{RedisClient} --> the "client" object --> Shows connected = false
B
C
{ Correct Data From Redis }
D
{RedisClient} --> the "client" object --> Shows connected = true
The Application Stack Lambda:
async function testRedis2(event, context) {
  console.log('In TestRedis2');
  const asyncRedis = require("async-redis");
  const redisOptions = {
    host: "XXXXXXXXX.XXXXX.XXXX.use2.cache.amazonaws.com",
    port: 6379
  };
  console.log('A');
  const client = asyncRedis.createClient(redisOptions);
  console.log(client);
  console.log('B');
  const value = await client.get("Key");
  console.log('C');
  console.log(value);
  console.log('D');
  console.log(client);
}

module.exports = {
  testRedis2
};
The output of this function is essentially:
In TestRedis2
A
{RedisClient} --> the "client" object --> Shows connected = false
B
I don't understand why these don't perform identically. And I don't get why I don't see further entries in the output.
Has anyone else experienced issues connecting to VPC resources from within an Application Stack?
Thanks
I stumbled across the answer through extensive trial and error. It may be obvious to Node.js developers but, just in case another JavaScript/Node newbie has the same issue, I'll post the answer here.
The import/require and the creation of the client must be at the top of the module, not inside the function itself.
So, the following does work in my application stack:
const asyncRedis = require("async-redis");

const redisOptions = {
  host: "XXXXXXXXX.XXXXX.XXXX.use2.cache.amazonaws.com",
  port: 6379
};
const client = asyncRedis.createClient(redisOptions);

async function redisGet(key) {
  // console.log('In redisGet');
  return await client.get(key);
}

Heroku Node.js RedisCloud Redis::CannotConnectError on localhost instead of REDISCLOUD_URL

When I try to connect my Node.js application to RedisCloud on Heroku, I get the following error:
Redis::CannotConnectError: Error connecting to Redis on 127.0.0.1:6379 (ECONNREFUSED)
I have even tried setting the Redis URL and port directly in the code to test it. But it still tries to connect to localhost on Heroku instead of the RedisCloud URL.
const {Queue} = require('bullmq');
const Redis = require('ioredis');

// Redis Server Connection Configuration
const conn = new Redis(
  'redis://rediscloud:mueSEJFadzE9eVcjFei44444RIkNO@redis-15725.c9.us-east-1-4.ec2.cloud.redislabs.com:15725'
);

console.log('\n==================================================\n');
console.log(conn.options, process.env.REDISCLOUD_URL);

const defaultQueue = () => {
  // Initialize queue instance, by passing the queue-name & redis connection
  const queue = new Queue('default', {conn});
  return queue;
};

module.exports = defaultQueue;
Complete Dump of the Logs https://pastebin.com/N9awJYL9
Set REDISCLOUD_URL in your .env file as follows:
REDISCLOUD_URL=redis://rediscloud:password@hostname:port
import * as Redis from 'ioredis';
export const redis = new Redis(process.env.REDISCLOUD_URL);
I had a hard time figuring out how to connect; the solution below worked for me.
Edit----
Although I had passed the parameters to connect to Redis Cloud, it actually connected to the local Redis installed on my machine. Sorry about that!
I will leave my answer here, just in case anyone needs to connect to a local Redis.
const express = require('express');
const Redis = require('ioredis');

const pwd = 'your_pwd';
const host = 'your_host';
const port = '1234';
// rediss:// URL format: rediss://:password@host:port
const redisConfig = `rediss://:${pwd}@${host}:${port}`;
const client = new Redis(redisConfig);

client.on('connect', function() {
  console.log('-->> CONNECTED');
});
client.on('error', function(error) {
  console.error('REDIS ERROR', error);
});
Just wanted to post my case in case someone has the same problem as me.
In my situation I was trying to use Redis with Bull, so I needed the URL/host/port data to make this happen.
Here is the info:
https://devcenter.heroku.com/articles/node-redis-workers
But basically you can start your worker like this:
let REDIS_URL = process.env.REDISCLOUD_URL || 'redis://127.0.0.1:6379';
// Once you have the Redis info ready, create your task queue
const queue = new Queue('new-queue', REDIS_URL);
If you are running Redis locally (i.e. 'redis://127.0.0.1:6379'), remember to start redis-server:
https://redis.io/docs/getting-started/
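For completeness, a worker sketch following the pattern in the Heroku article above; the queue name and job handling are illustrative, not taken from the article:

```javascript
// Resolve the Redis URL the same way as above: Heroku config var first,
// local fallback second. Exposed as a function so it is easy to test.
function resolveRedisUrl(env) {
  return env.REDISCLOUD_URL || 'redis://127.0.0.1:6379';
}

// worker.js -- processes jobs from the queue created in the snippet above.
function startWorker() {
  const Queue = require('bull'); // lazy require for illustration
  const queue = new Queue('new-queue', resolveRedisUrl(process.env));
  queue.process(async (job) => {
    // job.data holds whatever the producer enqueued
    console.log('processing job', job.id, job.data);
  });
  return queue;
}
```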

AWS lambda with mongoose to Atlas - MongoNetworkError

I am trying to connect to MongoDB Atlas with mongoose from an AWS Lambda function, but I get the error MongoNetworkError.
AWS Lambda
Mongoose
MongoDB Atlas
The same code was tested with serverless-offline and works perfectly; the problem occurs when I deploy it to AWS Lambda.
This is the code snippet:
'use strict';
const mongoose = require('mongoose');
const MongoClient = require('mongodb').MongoClient;

let dbuser = process.env.DB_USER;
let dbpass = process.env.DB_PASSWORD;

let opts = {
  bufferCommands: false,
  bufferMaxEntries: 0,
  socketTimeoutMS: 2000000,
  keepAlive: true,
  reconnectTries: 30,
  reconnectInterval: 500,
  poolSize: 10,
  ssl: true,
};

const uri = `mongodb+srv://${dbuser}:${dbpass}@carpoolingcluster0-bw91o.mongodb.net/awsmongotest?retryWrites=true&w=majority`;

// simple hello test
module.exports.hello = async (event, context, callback) => {
  const response = {
    body: JSON.stringify({message: 'AWS Testing :: ' + `${dbuser} and ${dbpass}`}),
  };
  return response;
};

// connect using mongoose
module.exports.cn1 = async (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  let conn = await mongoose.createConnection(uri, opts);
  const M = conn.models.Test || conn.model('Test', new mongoose.Schema({ name: String }));
  const doc = await M.find();
  const response = {
    body: JSON.stringify({data: doc}),
  };
  return response;
};

// connect using mongodb
module.exports.cn2 = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  console.log("Connect to mongo using MongoClient");
  MongoClient.connect(uri).then(client => {
    console.log("Successfully connected to mongo DB::::");
    client.db('awsmongotest').collection('tests').find({}).toArray()
      .then((result) => {
        let response = {
          body: JSON.stringify({data: result}),
        };
        callback(null, response);
      });
  }).catch(err => {
    console.log('=> an error occurred: ', err);
    callback(err);
  });
};
In the CloudWatch logs I see this error:
{
"errorType": "MongoNetworkError",
"errorMessage": "failed to connect to server [carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017] on first connect [MongoNetworkError: connection 5 to carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017 closed]",
"stack": [
"MongoNetworkError: failed to connect to server [carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017] on first connect [MongoNetworkError: connection 5 to carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017 closed]",
" at Pool.<anonymous> (/var/task/node_modules/mongodb-core/lib/topologies/server.js:431:11)",
" at Pool.emit (events.js:189:13)",
" at connect (/var/task/node_modules/mongodb-core/lib/connection/pool.js:557:14)",
" at callback (/var/task/node_modules/mongodb-core/lib/connection/connect.js:109:5)",
" at runCommand (/var/task/node_modules/mongodb-core/lib/connection/connect.js:129:7)",
" at Connection.errorHandler (/var/task/node_modules/mongodb-core/lib/connection/connect.js:321:5)",
" at Object.onceWrapper (events.js:277:13)",
" at Connection.emit (events.js:189:13)",
" at TLSSocket.<anonymous> (/var/task/node_modules/mongodb-core/lib/connection/connection.js:350:12)",
" at Object.onceWrapper (events.js:277:13)",
" at TLSSocket.emit (events.js:189:13)",
" at _handle.close (net.js:597:12)",
" at TCP.done (_tls_wrap.js:388:7)"
],
"name": "MongoNetworkError",
"errorLabels": [
"TransientTransactionError"
]
}
Here is an example on GitHub to reproduce the error:
https://github.com/rollrodrig/error-aws-mongo-atlas
Just clone it, npm install, add your Mongo Atlas user and password, and push it to AWS.
Thanks.
Some extra steps are required to let Lambda call an external endpoint:
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
Your Atlas cluster should also whitelist the IP addresses of the servers from which Lambda will connect.
Another option to consider is VPC peering between your Lambda VPC and Atlas.
I have some questions concerning your configuration:
Did you whitelist the AWS Lambda function's IP address in Atlas?
Several posts on SO indicate that users get a MongoNetworkError like this if the IP is not whitelisted. [1][4]
Did you read the best-practices guide by Atlas which states that mongodb connections should be initiated outside the lambda handler? [2][3]
Do you use a public lambda function or a lambda function inside a VPC? There is a substantial difference between them and the latter one is more error-prone since the VPC configuration (e.g. NAT) must be taken into account.
I was able to ping the instances in the Atlas cluster and was able to establish a connection on port 27017. However, when connecting via the mongo shell, I get the following error:
Unable to reach primary for set CarpoolingCluster0-shard-0.
Cannot reach any nodes for set CarpoolingCluster0-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.
When I use your GitHub sample from AWS lambda I get the exact same error message as described in the question.
As the error messages are not authentication-related but network-related, I assume that something is blocking the connection... Please double-check the three config questions above.
[1] What is a TransientTransactionError in Mongoose (or MongoDB)?
[2] https://docs.atlas.mongodb.com/best-practices-connecting-to-aws-lambda/
[3] https://blog.cloudboost.io/i-wish-i-knew-how-to-use-mongodb-connection-in-aws-lambda-f91cd2694ae5
[4] https://github.com/Automattic/mongoose/issues/5237
Well, thanks everyone. I finally found the solution with the help of Mongo support.
Here is the solution for anyone who needs it.
When you create a Mongo Atlas cluster, you are asked to add your local IP, and it is automatically added to the whitelist. You can see it under
Your cluster > Network Access > IP Whitelist; there in the list you will see your IP. It means that only people from YOUR network will be able to connect to your Mongo Atlas. AWS Lambda is NOT in your network, so AWS Lambda will never connect to your Mongo Atlas. That is why I got the error MongoNetworkError.
Fix
You need to add the AWS Lambda IP to the Mongo Atlas IP whitelist:
go to Your cluster > Network Access > IP Whitelist
click the button ADD IP ADDRESS
click ALLOW ACCESS FROM ANYWHERE; it will add the IP 0.0.0.0/0 to the list; click confirm
Test your call from AWS Lambda and it will work.
FINALLY!
What you did is tell Mongo Atlas that ANYONE from ANYWHERE can connect to your Mongo Atlas.
Of course this is not a good practice. What you need is to add only the AWS Lambda IP; this is where a VPC comes into the picture.
Creating a VPC is a little complex and has many steps; there are good tutorials in the other comments.
But this small guide at least tackles the MongoNetworkError.

Random SSL handshake error when connecting to ElastiCache with ioRedis

I am attempting to connect to an ElastiCache cluster that is encrypted in transit from a node script using ioRedis. Sometimes my script works, other times I get Error: 140736319218624:error:140940E5:SSL routines:ssl3_read_bytes:ssl handshake failure:../deps/openssl/openssl/ssl/s3_pkt.c:1216:
Here is all of my code:
var Redis = require('ioredis');

var nodes = [{
  host: 'clustercfg.name.xxxxxx.region.cache.amazonaws.com',
  port: '6379',
}];

var cluster = new Redis.Cluster(nodes, {
  redisOptions: {
    tls: {}
  }
});

cluster.set('aws', 'test');
cluster.get('aws', function (err, res) {
  console.log(res);
  if (err) {
    console.error(err);
  }
  cluster.disconnect();
});
I believe the ssl handshake error is a side-effect of a race-condition bug in ioredis.
I have been banging my head over the same issue the last several days (ioredis version 4.0.0). I just couldn't reliably connect ioredis to our elasticache cluster. I would see the same intermittent error.
Error: 140618195700616:error:140940E5:SSL routines:ssl3_read_bytes:ssl
handshake failure:../deps/openssl/openssl/ssl/s3_pkt.c:1216:
You can view ioredis debug output by setting "DEBUG=ioredis:*" in your node environment. Once I did this I could see that when the error occurred it was accompanied by several messages similar to the following:
2018-10-06T18:24:38.287Z ioredis:cluster:connectionPool Disconnect
xxx.usw2.cache.amazonaws.com:6379 because the node does not hold any
slot
I tried node-redis and redis-clustr, and they work fine with ElastiCache.
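If staying on ioredis is an option, the configuration commonly suggested for TLS-enabled ElastiCache clusters adds a `dnsLookup` override alongside `tls`, so the node hostnames advertised by the cluster are used as-is rather than resolved to IPs that fail certificate validation. A sketch; the endpoint below is a placeholder:

```javascript
// Pass-through DNS lookup: hand the hostname back unchanged so TLS
// certificate validation sees a name the certificate actually covers.
const dnsLookup = (address, callback) => callback(null, address);

function createCluster() {
  const Redis = require('ioredis'); // lazy require for illustration
  return new Redis.Cluster(
    [{ host: 'clustercfg.name.xxxxxx.region.cache.amazonaws.com', port: 6379 }],
    {
      dnsLookup,
      redisOptions: { tls: {} },
    }
  );
}
```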

Connect to MySQL database from Lambda function (Node)

I have been unable to connect to MySQL database using Node from Lambda function. The error I receive is Task timed out after 4.00 seconds.
Does anyone have any solutions?
Here is an overview of my state:
The AWS RDS database is a MySQL database. It is not confined to the VPC (I am able to connect using host/user/password from MySQLWorkbench).
The execution role of my Lambda function is set to have Lambda as a trusted entity and given AdministratorAccess.
On my local machine, I installed the mysql module, zipped my index.js and node_modules folder, and uploaded to my Lambda function.
I have tried putting the createConnection and connect function inside the handler. I have tried putting my query inside the callback function of the connection function. I have tried increasing the timeout time to 10 seconds.
My code:
var mysql = require('mysql');

var connection = mysql.createConnection({
  host     : 'amazon-string.rds.amazonaws.com',
  user     : 'myusername',
  password : 'mypassword'
});

connection.connect();

exports.handler = (event, context, callback) => {
  connection.query("SELECT * FROM table", function(err, rows, fields) {
    console.log("rows: " + rows);
    callback(null);
  });
};
Increase the timeout to one minute; the delay could be due to the cold start of the Lambda function.
Only your first call should take time; consecutive calls should be very fast, since you are reusing the same connection.
Also, having a higher timeout does not mean you will be charged for that timeout; you are charged only for the time the Lambda actually runs.
To speed up the cold-start time you can webpack your scripts:
http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/webpack.html
There is one more issue I noticed:
var mysql = require('mysql');

var connection = mysql.createConnection({
  host     : 'amazon-string.rds.amazonaws.com',
  user     : 'myusername',
  password : 'mypassword'
});

connection.connect();

exports.handler = (event, context) => {
  connection.query("SELECT * FROM table", function(err, rows, fields) {
    console.log("rows: " + rows);
    context.succeed('Success');
  });
};
Hope it helps.
Since you're using RDS, go check its security group configuration. By default, RDS's security group allows inbound connections from your own IP and from the default security group of your default VPC. Lambda, however, runs outside any VPC by default, and thus cannot establish a connection to RDS.
Either change your RDS instance to allow all IP addresses, or run your Lambda function in a VPC that can reach your RDS instance, and allow access in the security group.
