AWS Lambda with MongoDB connection - Node.js

Hello, I need an answer to a simple question. I am using AWS Lambda with the Serverless Framework, and my Lambda function opens a MongoDB connection. The connection code lives inside my handler function, and I am using connection pooling.
When I deploy my app with sls deploy and call my Lambda for the first time, the connection is established once, and subsequent Lambda API calls reuse that connection instead of creating a new one. So far, so good.
After that, I run a separate script (unrelated to my AWS app) to test concurrent Lambda requests. It calls the same Lambda API with the request npm module inside a for loop, and in that case a new connection is created on every iteration until the loop terminates, instead of reusing the one from the first call. Can someone tell me why this happens and what the reason behind it is? Why is a new connection created when this script runs, even though a connection was already established on the first Lambda call?
When I call the same API from Postman, it reuses the connection after the first Lambda call, but when I run the script with "node app.js" (calling the API via the request npm module), a new connection is created on every iteration until the loop terminates.
Please help me out with this.
'use strict'
const bodyParser = require('body-parser')
const express = require('express')
const serverless = require('serverless-http')
const cors = require('cors');
const mongoConnection = require('./connection/mongoDb');

const app = express()
app.use(cors())
app.use(bodyParser.json())

const handler = serverless(app);

// Cached connection, declared outside the handler so it survives
// between invocations when the container is reused.
let cachedDb = null;

module.exports.handler = async (event, context) => {
  context.callbackWaitsForEmptyEventLoop = false;
  if (cachedDb == null) {
    let Database = await mongoConnection();
    console.log("DB", Database);
    cachedDb = Database
  }
  const baseRouter = require('./routes/index');
  app.use('/api', baseRouter);
  const result = await handler(event, context);
  return result;
};

Here is a Node.js example that shows the connection parameters. Perhaps this will help?
const express = require("express");
const bodyParser= require("body-parser")
const app = express();
const MongoClient = require("mongodb").MongoClient
MongoClient.connect("mongodb://myusername:mypassword#localhost:27017", (err, client) => {
if (err) return console.log(err)
var db = client.db("mydatabase")
db.collection("mycollection").countDocuments(getCountCallback);
app.listen(3000, () => {
console.log("listening on 3000")
})
})
function getCountCallback(err, data) {
console.log(data);
}
app.use(bodyParser.urlencoded({extended: true}))
app.get("/", (req, res) => {
res.sendFile(__dirname + "/index.html")
})
app.post("/quotes", (req, res) => {
console.log(req.body)
})
Your example code does not show any hostname for your database server, nor does it specify which port to use. Please compare and contrast your code with my example.

I see you defined the cachedDb variable outside the handler scope, which makes it available when the container is reused. However, there is no guarantee that the container will be reused (see my previous link on that), because that's not how Lambda works. If you invoke the same function many times in quick succession, Lambda has to scale out horizontally to handle the requests quickly, and each concurrent invocation gets its own container with its own connection.
When an invocation finishes, AWS keeps the container around for a while (how long depends on many factors, such as function size and RAM limit). If you invoke the function again, those containers can reuse their connections. Try invoking the function 20 times at 1-second intervals and counting the number of connections that have been opened: it will be lower than 20, but higher than 1.
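If you want to run that experiment from a script, a rough sketch could look like the following (API_URL is a placeholder for your deployed endpoint; the calls are made sequentially so each container gets a chance to be reused):

const https = require('https');

// Placeholder: replace with your deployed API endpoint.
const API_URL = 'https://example.execute-api.us-east-1.amazonaws.com/dev/api/test';

function invoke() {
  return new Promise((resolve, reject) => {
    https.get(API_URL, (res) => {
      res.resume();            // drain the response body so the socket is released
      res.on('end', resolve);
    }).on('error', reject);
  });
}

(async () => {
  for (let i = 0; i < 20; i++) {
    await invoke();                                // sequential invocation
    await new Promise((r) => setTimeout(r, 1000)); // 1-second interval
  }
})();

You can then compare the number of connections MongoDB reports (for example via db.serverStatus().connections) against the number of invocations.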

Related

Intermittent (time dependent) error when using Node Package rate-limiter-flexible in Express Node.js application

My overall goal is to apply the basic logic of the "rate-limiter-flexible" package to my Express Node.js application. I've gotten the basic functionality working with the "Memory" version, which does not require any communication with a database (IP address counts are stored in Node server memory). Now I'm trying to get it to work with the MongoDB rate limiter module (using an instance of the RateLimiterMongo object from the package).
The problem I'm encountering is that my rate limiter middleware throws an error intermittently. The only pattern I can find is that the error occurs more frequently if there are more than ~10 seconds between requests to the Node app. This is the error, which occurs every 5-10 requests:
[RateLimiter Error]: Rejection object:
TypeError: Cannot read properties of null (reading 'points')
at RateLimiterMongo._getRateLimiterRes (C:\root\node_modules\rate-limiter-flexible\lib\RateLimiterMongo.js:124:33)
at RateLimiterMongo._afterConsume (C:\root\node_modules\rate-limiter-flexible\lib\RateLimiterStoreAbstract.js:51:22)
at C:\root\node_modules\rate-limiter-flexible\lib\RateLimiterStoreAbstract.js:205:16
at processTicksAndRejections (node:internal/process/task_queues:96:5)
So far, I have tried:
Disabling buffering with Mongoose (a recommendation from the package docs) -- this did not work
Changing from the MongoDB Atlas free tier to a locally hosted MongoDB instance -- this resolved all occurrences of the error, but I need to be able to use the cloud service
Here is a minimal reproduction of the error I'm facing when connecting to a MongoDB Atlas free tier cluster (via MONGO_DB_URL):
// Node packages:
require('dotenv').config();
const { RateLimiterMongo } = require('rate-limiter-flexible');
const mongoose = require('mongoose');
const express = require('express');
const app = express();

// open a Mongoose connection and save it:
const dbUrl = process.env.MONGO_DB_URL;
const connectDB = async function () {
  await mongoose
    .connect(dbUrl, {
      // options
    })
    .catch(error => {
      console.log("DB not connected!");
      // handle error here (initial connection)
    });
};
connectDB();
const mongoConn = mongoose.connection;

// options and passing to the RateLimiterMongo constructor:
const opts = {
  storeClient: mongoConn,
  points: 3, // Number of points
  duration: 1, // Per second(s)
};
const rateLimiterMongo = new RateLimiterMongo(opts);

// create the middleware for the express app:
const rateLimiterMiddleware = (req, res, next) => {
  rateLimiterMongo.consume(req.ip, 1)
    .then((rateLimiterRes) => {
      console.log("[RateLimiter Success]: RateLimiterRes object:\n", rateLimiterRes);
      next();
      // Allowed
    })
    .catch((rej) => {
      console.log("[RateLimiter Error]: Rejection object:\n", rej);
      res.status(500).send("RateLimiter error(s)...");
      // Blocked
    });
};

// Express app code:
app.use(rateLimiterMiddleware);
app.get('/', (req, res) => {
  res.status(200).send("Valid Route!");
});
app.listen(3000, () => {
  console.log(`Serving on port 3000!`);
});
Thanks all for any help you can provide with this. It may just be a side effect of using the MongoDB Atlas free tier...
Most likely you are using Mongoose v6.x, which changed how the connection is established: mongoose.connect() returns a promise that must be awaited before the connection can be used to make queries. There is more context in the migration to v6 guide.
You should await the connectDB() call and only then create the middleware. In other words, your Express app should wait until the connection to MongoDB Atlas is established. You can read through the comments in the related closed issue on GitHub.
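A minimal sketch of that reordering, reusing connectDB, opts, and app from the question (only the startup sequence changes; this is one way to apply the advice, not the package's documented pattern):

const start = async () => {
  await connectDB(); // wait for the Mongoose connection before anything consumes it

  const rateLimiterMongo = new RateLimiterMongo(opts);
  const rateLimiterMiddleware = (req, res, next) => {
    rateLimiterMongo.consume(req.ip, 1)
      .then(() => next())                                            // Allowed
      .catch(() => res.status(500).send("RateLimiter error(s)...")); // Blocked
  };

  app.use(rateLimiterMiddleware);
  app.get('/', (req, res) => {
    res.status(200).send("Valid Route!");
  });
  app.listen(3000, () => {
    console.log(`Serving on port 3000!`);
  });
};

start();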

What is the correct order of requiring and mocking files using Jest?

I'm trying to create an integration test for my Express app using Jest. I think I have a conceptual misunderstanding, as my tests are behaving strangely. My goal is to test the following scenario: I hit a specific endpoint using Supertest, and I want to check that an error handler middleware is called when there is a mocked error, and that it is not called when no error is present. I have the following test file:
test.js
const request = require('supertest')

describe('Error handler', () => {
  let server
  let router

  beforeEach(() => {
    jest.resetModules()
    jest.resetAllMocks()
  })

  afterEach(async () => {
    await server.close()
  })

  it('should be triggered if there is a router error', async () => {
    jest.mock('../../routes/')
    router = require('../../routes/')
    router.mockImplementation(() => {
      throw new Error()
    })
    server = require('../../server')
    const res = await request(server)
      .get('')
      .expect(500)
      .expect('Content-Type', /json/)
    expect(res.body.error).toBe('Error')
    expect(res.body.message).toBe('Something went wrong!')
    expect(res.body.status).toBe(500)
  })

  it('should not be triggered if there is no router error', async () => {
    server = require('../../server')
    const res = await request(server)
      .get('')
      .expect(201)
      .expect('Content-Type', /text/)
  })
})
What I think is happening is the following. Before each test I reset all modules, because I don't want the cached version of my server from the first require; I want to overwrite it. I also reset all mocks, so when the second test runs, no mock is used and no fake error is forced, the middleware is not called, and I get back a vanilla 200 result.
After this is done, I test the error scenario. I mock the routes file that exports my routes so I can force a fake error. Then I require the server, which, I suppose, loads the server with the fake, error-throwing route. Then I wait for the response with Supertest and assert that I indeed got an error back, hence the error handler middleware was triggered and worked.
The afterEach hook is called, the server is closed, then the beforeEach hook initializes everything again. Now I have my vanilla implementation without the mock. I require my server, hit the homepage with a GET request, and get back the correct response.
The strange thing is that, for some reason, the second test does not seem to exit gracefully. If I change the implementation of the second test from async/await to the done callback, and call done at the end of the test, it seems to work.
I have tried a lot of possible permutations, including moving the mocking into the beforeEach hook and starting the server before or after mocking, and I got weird results. I feel like I have conceptual misunderstandings, but I don't know where, because there are so many moving parts.
Any help to make me understand what is wrong would be greatly appreciated.
EDIT:
I thought that most parts could be considered a black box, but now I realize that the fact that I'm building an app with Socket.IO makes the setup process a bit more convoluted.
I don't want Express to automatically create a server for me, because I want to use Socket.IO. So for now I only create a function with the appropriate signature, and that is 'app'. This can be given as an argument to http.Server(). I configure it with the options and middlewares that I want to use. I do not call app.listen, because that way Socket.IO could not do its own thing.
config.js
const path = require('path')
const express = require('express')
const indexRouter = require('./routes/')
const errorHandler = require('./middlewares/express/errorHandler')

const app = express()

app.set('views', path.join(__dirname + '/views'))
app.set('view engine', 'ejs')
app.use(express.static('public'))
app.use('', indexRouter)
app.use(errorHandler)

module.exports = app
In server.js I require this app, and then I create an HTTP server using it. After that, I feed it to 'socket.io', so it is connected to the proper instance. In server.js I do not call server.listen; I want to export the server both to the file that actually starts it up (index.js) and to my tests, so Supertest can spin it up.
server.js
// App is an Express server set up to use specific middlewares
const app = require('./config')
// Create a server instance so it can be used by SocketIO
const server = require('http').Server(app)
const io = require('socket.io')(server)
const logger = require('./utils/logger')
const Game = require('./service/game')

const game = new Game()

io.on('connection', (socket) => {
  logger.info(`There is a new connection! Socket ID: ${socket.id}`)
  // If this is the first connection in the game, start the clock
  if (!game.clockStarted) {
    game.startClock(io)
    game.clockStarted = true
  }
  game.addPlayer(socket)
  socket.on('increaseTime', game.increaseTime.bind(game))
})

module.exports = server
If I understand everything correctly, basically the same thing happens, except for a few additional steps in the example that you provided. There is no need to start the server and then use Supertest on it; Supertest handles starting the server when I use request(server).get, etc.
EDIT 2
Right now I'm not sure whether mocking like that is enough. Something mysterious leaves the Supertest requests hanging, and it might be that somewhere along the way they cannot be ended, although I don't see why that would be the case. Anyway, here is the router:
routes/index.js
const express = require('express')
const router = express.Router()

router.get('', (req, res, next) => {
  try {
    res.status(200).render('../views/')
  } catch (error) {
    next(error)
  }
})

router.get('*', (req, res, next) => {
  try {
    res.status(404).render('../views/not-found')
  } catch (error) {
    next(error)
  }
})

module.exports = router
The order of requiring and mocking is correct but the order of setting up and shutting down a server probably isn't.
A safe way is to make sure the server is available before making requests. Since the Node http server is asynchronous and callback-based, its errors cannot be caught in async functions without promisification. Assuming server.listen(...) was called in server.js, it can be:
...
server = require('../../server')
// Wait until the server is actually listening before issuing requests;
// if it already is, there is nothing to wait for:
if (!server.listening) {
  await new Promise((resolve, reject) => {
    server.once('listening', resolve).once('error', reject);
  });
}
const res = await request(server)
...
close is asynchronous and doesn't return a promise, so there's nothing to await. Since it's in a dedicated block, a short way is to use the done callback:
afterEach(done => {
  server.close(done)
})
In case errors are suppressed in the error listener, server.on('error', console.error) can make troubleshooting easier.
Supertest can handle server creation itself:
You may pass an http.Server, or a Function to request() - if the server is not already listening for connections then it is bound to an ephemeral port for you so there is no need to keep track of ports.
And request() can be provided with an Express instance instead of a Node server, which eliminates the need to handle server instances manually:
await request(app)
...
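For example, a minimal sketch of the second test written against the Express instance (this assumes config.js exports the app as shown in the question, and that routes/index.js responds to the index route with status 200):

it('should not be triggered if there is no router error', async () => {
  const app = require('../../config') // the Express instance, not the http server
  await request(app)
    .get('')
    .expect(200)
})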

How to keep one instance of the database using express

I am using Express along with Sequelize to create a REST API. I want to keep just one instance of my database and use it in all the routes I may create. My first thought was to create the instance as soon as the app runs, something like this:
const express = require('express')
const databaseService = require('./services/database')
const config = require('./config')
const userRoute = require('./routes/user')

const app = express()

databaseService.connect(config.db_options).then(
  connectionInstance => {
    app.use('user', userRoute(connectionInstance))
    app.listen(3000)
  }
)
I have not seen an approach like this anywhere, so I suspect it's not a good one.
Any ideas?
A strategy that I use extensively in several production applications to great success is to define the database connection logic in a single file similar to the following:
database.js
const dbClient = require('some-db-client')

let db

module.exports.connectDB = async () => {
  const client = await dbClient.connect(<DB_URL>)
  db = client.db()
}

module.exports.getDB = () => {
  return db
}
Obviously, the connection logic would need to be more complex than what I have defined above, but you get the idea: connect to your database using whichever database client library you choose, and save a reference to that single instance of your database. Then, wherever you need to interface with the database, you can call the getDB function. But first, you need to call the connectDB function from your server application entry file.
server.js
async function run() {
  const db = require('./database')
  await db.connectDB()

  const app = require('express')()
  const routes = require('./routes')

  app.use('/', routes)
  app.listen(3000, () => {
    console.log('App server listening to port 3000')
  })
}
You would initialize your database connection in the server application entry file before initializing Express. Afterwards, any place in your application that needs access to the database can simply call the getDB() function and use the returned database object as necessary. This avoids overly complicated open/close logic, and it saves the overhead of constantly opening and closing connections, since a single database connection is maintained and can be used anywhere.
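For illustration, a hypothetical route module consuming the shared instance could look like this (the routes/users.js file name and the users collection are my assumptions, not part of the pattern itself):

// routes/users.js (hypothetical)
const express = require('express')
const { getDB } = require('../database')
const router = express.Router()

router.get('/users', async (req, res) => {
  const db = getDB() // the single shared database instance
  const users = await db.collection('users').find().toArray()
  res.json(users)
})

module.exports = router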

Google Firebase Functions - Timeout when doing HTTP requests

So I am trying to make a simple proxy (I think that's the right word), and I've come up with some code that works fine locally. I can run 'firebase serve --only functions' and the function works fine and I get the expected results. But when I deploy this same code and try calling it, it just times out. I have no idea why, so I was hoping I could get some help.
Here's the code:
//Variables
const functions = require('firebase-functions');
const express = require('express');
const cors = require('cors');
const request = require('request');

//App
const app = express();
app.use(cors({ origin: true }));

//Endpoints
app.get('/**', function(req, res) {
  request('https://example.com' + req.url, function(err, proxyRes, body) {
    //Err
    if (err) {
      res.send('Error: ' + err.code);
      return
    }
    //Res
    res.status(200).send(body);
  });
});

//Functions
exports.proxy = functions.https.onRequest(app);
HTTP functions will time out if they don’t send a response. This means your request() is probably failing, and it’s probably failing because, on the free Spark payment plan, you can’t make outgoing requests to services that Google doesn’t fully control.
Your function should send a response in all conditions in order to avoid a timeout. This means you should be checking for errors all the time.
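As a sketch of that advice applied to the code above (the 10-second timeout and the 502 status are arbitrary choices on my part, not Firebase requirements):

app.get('/**', function(req, res) {
  // A timeout ensures a stalled upstream request still fires the error callback:
  request({ url: 'https://example.com' + req.url, timeout: 10000 }, function(err, proxyRes, body) {
    if (err) {
      console.error(err);                         // surface the failure in the function logs
      res.status(502).send('Error: ' + err.code); // always respond, even on failure
      return;
    }
    res.status(200).send(body);
  });
});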

Proper return for service functions in nodejs web applications to respond to client synchronously

I've posted an approximation of my Node web application below. The original problem is that I want to let the client know, on a POST to createentity, whether the insert was successful and what the id of the inserted entity is. However, since connection.query takes a callback rather than running synchronously, I can't use the entityService the way I would in another language, i.e. by simply returning the result synchronously. There are several solutions, and I'm curious which is the best/common practice in Node.js, or whether there is another I'm not thinking of:
Passing res down to the service and responding within a callback; this seems like poor practice
Similarly, passing functions to the service to execute after success/failure; this also seems like poor practice
Returning a promise from the service and setting res based on resolution/failure; it seems like services shouldn't return promises, but I'm new to Node
Some better method using appropriate Node.js features that I'm unaware of
Trying to change the service so that it runs synchronously and simply returns the result; other questions/answers have made me leery that this is possible or wise
Structuring the application some other way
Something else?
//app.js
var express = require('express');
var bodyParser = require('body-parser')
var path = require('path');
var EntityService = require('./entityService.js');

var app = express();
var urlencodedParser = bodyParser.urlencoded({ extended: true })

app.post('/createentity', urlencodedParser, function(req, res){
  EntityService.createEntity(req.body.name);
  res.status(200).json(null);
});

app.listen(3000);

//entityService.js
var mysql = require('mysql');

EntityService = function(){
  var connection = mysql.createConnection({
    host     : CONNECTION_IP,
    user     : 'root',
    password : 'password',
    database : 'entitydb'
  });
  connection.connect();

  this.createEntity = function(name){
    var record = { name: name };
    var query = connection.query('INSERT INTO entity set ?', record, function(error, results, fields){
      //want to return the results.insertId from the service
    });
  }
}

module.exports = new EntityService();
The correct approach here is option 3: have your service return a Promise.

this.createEntity = name => new Promise((resolve, reject) => {
  const query = connection.query('INSERT INTO entity set ?', { name }, (err, results) => {
    if (err) return reject(err);
    return resolve(results.insertId); // the mysql driver exposes insertId on the INSERT result
  });
})
If you're on a recent version of Node, I'd go with the async/await syntax. Your service returns a promise, then your calling code can do:

app.post('/createentity', urlencodedParser, async function(req, res){
  const entity = await EntityService.createEntity(req.body.name);
  res.status(200).json(entity);
});
