NestJS for Lambda with MongoDB creating too many connections - node.js

We are using NestJS for a Lambda that connects to MongoDB for data, via the NestJS Mongoose module. However, after deployment, each invocation creates a new set of connections and the previous ones are not released.
We are using forRootAsync:
MongooseModule.forRootAsync({
  useClass: MongooseConfigService,
})
The service looks like this:
@Injectable({ scope: Scope.REQUEST })
export class MongooseConfigService implements MongooseOptionsFactory {
  constructor(@Inject(REQUEST) private readonly request: Request) {}

  async createMongooseOptions(): Promise<MongooseModuleOptions> {
    if (Buffer.isBuffer(this.request.body)) {
      this.request.body = JSON.parse(this.request.body.toString());
    }
    const { db } = this.request.body;
    console.log('connecting database', db);
    return {
      uri: process.env.MONGO_URL,
      dbName: db || '',
    };
  }
}
I understand we need to reuse the same connection. We have achieved this in plain Node.js by simply checking whether a connection already exists and only connecting if it does not. I am not sure how to achieve the same in NestJS.
I tried changing the scope of the service to Scope.DEFAULT, but that didn't help.
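For reference, the plain Node.js pattern referred to above is typically a module-scoped cache, so that warm Lambda invocations reuse the existing connection. A minimal sketch, assuming Mongoose and an illustrative MONGO_URL environment variable:

// db.ts - cache the connection outside the handler so warm invocations reuse it (sketch)
import mongoose from 'mongoose';

let conn: typeof mongoose | null = null;

export async function getConnection(): Promise<typeof mongoose> {
  if (conn === null) {
    // Only connect on a cold start; later invocations reuse the cached connection
    conn = await mongoose.connect(process.env.MONGO_URL as string, {
      serverSelectionTimeoutMS: 5000,
    });
  }
  return conn;
}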

I would suggest that you make a connection proxy for MongoDB. Every time a Lambda gets invoked, it will open up a new connection to MongoDB. AWS generally provides a service that allows you to proxy requests through one connection; this is a common issue with RDS etc.
This may help you though: https://www.mongodb.com/docs/atlas/manage-connections-aws-lambda/

We were able to resolve the issue by using a durable provider. Following the NestJS documentation, we created a strategy at the root that uses a parameter coming in on each request. NestJS then calls the connection module only when a new connection is required.
Note: when you use durable providers, the request object doesn't have all the parameters anymore and now only has the tenantId, as per the example.
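For reference, a sketch of what such a strategy can look like, based on the NestJS durable providers documentation. The x-tenant-id header and the map key are illustrative (the setup above keys on the db parameter from the request body instead), and the config service itself must be marked durable:

// tenant-context-id.strategy.ts - sketch adapted from the NestJS durable providers docs
import { HostComponentInfo, ContextId, ContextIdFactory, ContextIdStrategy } from '@nestjs/core';
import { Request } from 'express';

const tenants = new Map<string, ContextId>();

export class AggregateByTenantContextIdStrategy implements ContextIdStrategy {
  attach(contextId: ContextId, request: Request) {
    const tenantId = request.headers['x-tenant-id'] as string; // illustrative tenant key
    let tenantSubTreeId: ContextId;
    if (tenants.has(tenantId)) {
      tenantSubTreeId = tenants.get(tenantId)!;
    } else {
      // First request for this tenant: create and cache a dedicated context sub-tree id
      tenantSubTreeId = ContextIdFactory.create();
      tenants.set(tenantId, tenantSubTreeId);
    }
    // Durable providers get the per-tenant sub-tree; everything else keeps the request context
    return (info: HostComponentInfo) => (info.isTreeDurable ? tenantSubTreeId : contextId);
  }
}

// main.ts - register the strategy once at bootstrap:
//   ContextIdFactory.apply(new AggregateByTenantContextIdStrategy());
// and mark the provider durable:
//   @Injectable({ scope: Scope.REQUEST, durable: true })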

Related

NestJS WebSocket Gateway - how to get the full path for the initial connection?

I'm adding a socket.io "chat" implementation to our NestJS app, currently serving a range of HTTP REST APIs. We have fairly complex tenant-based auth using guards for our REST APIs. Users can belong to one or many tenants, and they target a given tenant via the API URL, which can be either subdomain- or path-based, depending on the deployment environment, for example:
//Subdomain based
https://tenant1.api.server.com/endpoint
https://tenant2.api.server.com/endpoint
//Path based
https://api.server.com/tenant1/endpoint
https://api.server.com/tenant2/endpoint
This all works fine for REST APIs, allowing us to determine the intended tenant (and validate the user access to that tenant) within guards.
The new socket.io implementation is being exposed on the same port at endpoint "/socket", meaning that possible full paths for connection could be:
https://tenant1.api.server.com/socket
https://api.server.com/tenant1/socket
Ideally I want to validate the user (via JWT) and their access to the tenant during the connection of the websocket (and if they are not validated they get immediately disconnected). I have been struggling to implement this with guards, so I have done the JWT/user validation in the socket gateway, which works OK. For the tenant validation, as per the above, I need the FULL URL that was used for the connection, because I will be looking at either the subdomain OR the path, depending on the deployment. I can get the host from the client handshake headers, but cannot find any way to get at the path. Is there a way to get the full path either from the handshake, or perhaps from Nest? I think perhaps I am limited in what I have access to in the handleConnection method when implementing OnGatewayConnection.
Code so far:
@WebSocketGateway({
  namespace: 'socket',
  cors: {
    origin: '*',
  },
})
export class ChannelsGateway
  implements OnGatewayInit, OnGatewayConnection, OnGatewayDisconnect {
  @WebSocketServer() public server: Server

  // Init using a separate socket service (allowing other components to push messages)
  afterInit(server: Server) {
    this.socketService.socket = server
    Logger.debug('Socket.io initialized')
  }

  // Method for handling the client's initial connection
  async handleConnection(client: Socket, ...args: any[]) {
    // This line gets me the host, such as tenant1.api.server.com, but without the path
    const host = client.handshake.headers.host
    // Get bearer token from the authorization header and validate it
    // Disconnect and return if not validated
    const bearerJwt = client.handshake.headers.authorization
    const decodedToken = await validateJwt(bearerJwt).catch(error => {
      client.disconnect()
    })
    if (!decodedToken) {
      return
    }
    // What can I use to get at the path, such as:
    //   api.server.com/tenant1/socket
    //   tenant1.api.server.com/socket
    // Then I can extract the "tenant1" with existing code and validate access
    // Else disconnect the client
    // Further code to establish a private room for the user, etc...
  }

  // other methods for receiving messages, etc...
}

Angular Service with Azure Cosmos

I am really new to Angular. I am trying to create a service which I want to consume in my Angular component. While doing so I am getting an error.
Below is the code I am writing:
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { CosmosClient } from '@azure/cosmos';
import { Observable, of } from 'rxjs';

@Injectable({
  providedIn: 'root'
})
export class ApiService {
  databaseId = 'dbName';
  containerId = 'Container Name';

  constructor() { }

  public async getProjects(): Promise<Observable<any>> {
    const endpoint = "https://AAAA.documents.azure.com:443/";
    const key = "==";
    const client = new CosmosClient({ endpoint, key });
    const database = client.database(this.databaseId);
    const container = database.container(this.containerId);
    const querySpec = { query: "SELECT * from c where c.Category=\"Details\"" };
    const { resources: items } = await container.items.query(querySpec).fetchAll();
    return of(items);
  }
}
Any help is really appreciated.
There is an exception to every rule, but the use cases for connecting to a DB directly from a web browser are pretty close to zero. By doing so, you lose all fine-grained control over what a user can do in your database, and the only way to revoke access is to rotate your keys. You may not currently have anything in your database that is sensitive, but it is still considered bad practice.
For this reason, the CosmosDB library is built for Node.js as a server-side environment; whether or not it works with front-end frameworks like Angular or React is incidental. There were some large changes in how Angular compiles projects in version 9, and it looks like the Cosmos client is not compatible with the new Ivy compiler.
You do have a couple of options here.
(recommended) Use an API layer between your database and your front end; you can use Node to keep it within JavaScript. If you are running on Azure, services like Azure Functions can make this even easier to implement securely, or you can run it from the same App Service, VM, or whatever hosting solution you are using. See the sketch after these options.
Disable the new Ivy compiler. You can do this by adding aot: false in your angular.json.
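A minimal sketch of the first option, assuming an Express endpoint; the /api/projects route, environment variable names, and the query are illustrative, not part of the original answer:

// server.ts - server-side API layer; the Cosmos key never reaches the browser
import express from 'express';
import { CosmosClient } from '@azure/cosmos';

const app = express();
const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT!,
  key: process.env.COSMOS_KEY!,
});

app.get('/api/projects', async (_req, res) => {
  const container = client.database('dbName').container('Container Name');
  const { resources } = await container.items
    .query({ query: 'SELECT * FROM c WHERE c.Category = "Details"' })
    .fetchAll();
  res.json(resources);
});

app.listen(3000);

// api.service.ts - the Angular service then only talks HTTP
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class ApiService {
  constructor(private http: HttpClient) {}

  getProjects(): Observable<any[]> {
    return this.http.get<any[]>('/api/projects');
  }
}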

Storing token on server-side using nestjs

I have a nestjs application that consumes third party API for data. In order to use that third party API, I need to pass along an access token. This access token is application-wide and not attached to any one user.
What would be the best place to store such a token in Nestjs, meeting the following requirements:
It must be available in the application and not per given user
It must not be exposed to the frontend application
It must work in a load balancer setup
I am looking at NestJS caching https://docs.nestjs.com/techniques/caching, but I am not sure whether that's the best practice and, if it is, whether I should use it with in-memory storage or something like Redis.
Thank you.
If you're working with load balancing, then in-memory solutions are dead on arrival, as they will only affect one instance of your server, not all of them. Your best bet for speed and accessibility would be Redis: save the token under a simple key and keep it alive from there (updating it as necessary). Just make sure all your instances connect to the same Redis instance, and that the instance can handle the load; that shouldn't be a problem, it's more of a callout.
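A minimal sketch of that approach, assuming ioredis and an illustrative key name and TTL (none of which come from the original answer):

// token.store.ts - shared token storage so every load-balanced instance sees the same value
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');
const TOKEN_KEY = 'third-party-api:access-token'; // illustrative key name

export async function getCachedToken(): Promise<string | null> {
  return redis.get(TOKEN_KEY);
}

export async function setCachedToken(token: string, ttlSeconds: number): Promise<void> {
  // EX sets an expiry so a stale token ages out automatically
  await redis.set(TOKEN_KEY, token, 'EX', ttlSeconds);
}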
I used a custom provider. Nest allows you to load async custom providers.
export const apiAuth = {
  provide: 'API_AUTH',
  useFactory: async (authService: AuthService) => {
    return await authService.createOrUpdateAccessToken()
  },
  inject: [AuthService]
}
And below is my API client:
@Injectable()
export class ApiClient {
  constructor(@Inject('API_AUTH') private auth: IAuth, private authService: AuthService) { }

  public async getApiClient(storeId: string): Promise<ApiClient> {
    if ((Date.now() - this.auth.createdAt.getTime()) > ((this.auth.expiresIn - 14400) * 1000)) {
      this.auth = await this.authService.createOrUpdateAccessToken()
    }
    return new ApiClient(storeId, this.auth.accessToken);
  }
}
This way the token is requested from storage once and lives with the application; when it expires, the token is regenerated and updated.
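For completeness, a sketch of how the factory provider above might be registered in a module; the module name and file paths are illustrative:

// api.module.ts - registers the async factory so 'API_AUTH' can be injected
import { Module } from '@nestjs/common';
import { AuthService } from './auth.service';    // assumed location
import { apiAuth } from './api-auth.provider';   // the factory provider above
import { ApiClient } from './api.client';        // assumed location

@Module({
  providers: [AuthService, apiAuth, ApiClient],
  exports: [ApiClient],
})
export class ApiModule {}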

Use mongoose in an angular project

I am trying to integrate MongoDB into my Angular 6 project. I want to use the Mongoose ODM.
npm install mongoose --save
I created this service, which must manage the connection with the database:
import { Injectable } from '@angular/core';
import { mongoose } from 'mongoose';

@Injectable()
export class BD {
  private bd;
  public test: string;

  constructor(mongoose: mongoose) {}

  Test() {
    mongoose('the url of the database')
    this.bd = mongoose.connection;
    this.bd.on('error', console.error.bind(console, 'error: impossible connection to the database'));
    this.bd.once('open', () => { console.log('connected to the DB :}') })
  }
}
When I launch the application in my browser I get this error:
SCRIPT5009: 'global' is not defined
What does this error mean?
Is this how I have to import mongoose?
Thank you in advance for your help
Mongoose is a library designed specifically for Node.js and won't work in the browser environment. In order to connect to the DB you need to set up a Node.js back-end server, for example with Express. Then you will be able to access your resources through a REST API.
Also, you should not establish a connection to the database directly from the browser, because you need to secure the data inside that database somehow. When you connect to the database directly, everyone who can take a look at your code gains access to the data, and that is a serious security problem.
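A minimal sketch of that split, assuming Express and Mongoose on the server; the connection URL, model, and endpoint names are illustrative:

// server.ts - Node.js back end where Mongoose actually runs; the Angular app calls it over HTTP
import express from 'express';
import mongoose from 'mongoose';

const Item = mongoose.model('Item', new mongoose.Schema({ name: String }));

async function main() {
  await mongoose.connect(process.env.MONGO_URL ?? 'mongodb://localhost:27017/mydb');

  const app = express();
  app.get('/api/items', async (_req, res) => {
    // The browser only ever sees this REST endpoint, never the database itself
    res.json(await Item.find().lean());
  });
  app.listen(3000, () => console.log('API listening on :3000'));
}

main().catch(console.error);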

Connection to Mongodb-Native-Driver in express.js

I am using the mongodb-native-driver in an express.js app. I have around 6 collections in the database, so I have created 6 JS files, each having a collection as a JavaScript object (e.g. function collection(){}) with prototype functions handling all the manipulation on that collection. I thought this would be a good architecture.
But the problem I am having is how to connect to the database. Should I create a connection in each of these files and use it? I think that would be overkill, as connect in mongodb-native-driver creates a pool of connections, and having several of them would not be justified.
So how do I create a single connection pool and use it in all the collection .js files? I want to have the connection handled like it is in Mongoose. Let me know if any of my thought process on the architecture of the app is wrong.
Using Mongoose would solve these problems, but I have read in several places that it's slower than the native driver, and I would also prefer schema-less models.
Edit: I created a module out of the models. Each collection was in its own file and took the database as an argument. Then, in the index.js file, I called the database connection and kept a variable db after I got the database from the connection. (I used the auto-reconnect feature to make sure that the connection wasn't lost.) In the same index.js file I exported each of the collections like this:
exports.model1 = require('./model1')(db)
exports.model2 = require('./model2')(db)
This ensured that the database part was handled in just one module, and the app would just call the functions that each model.js file exported, like save(), findById(), etc. (whatever you do in those functions is up to you to implement).
how to connect to the database?
In order to connect using the MongoDB native driver you need to do something like the following:
var util = require('util');
var mongodb = require('mongodb');
var client = mongodb.MongoClient;

var auth = {
  user: 'username',
  pass: 'password',
  host: 'hostname',
  port: 1337,
  name: 'databaseName'
};

var uri = util.format('mongodb://%s:%s@%s:%d/%s',
  auth.user, auth.pass, auth.host, auth.port, auth.name);

/** Connect to the Mongo database at the URI using the client */
client.connect(uri, { auto_reconnect: true }, function (err, database) {
  if (err) throw err;
  else if (!database) console.log('Unknown error connecting to database');
  else {
    console.log('Connected to MongoDB database server at:');
    console.log('\n\t%s\n', uri);
    // Create or access collections, etc. here using the database object
  }
});
A basic connection is set up like this. This is all I can give you going on just the basic description of what you want. Post up some code you've got so far to get more specific help.
Should I create a connection in each of this files and use them?
No.
So how do I create a single connection pool and use it in all the collections.js files?
You can create a single file with code like the above, lets call it dbmanager.js connecting to the database. Export functions like createUser, deleteUser, etc. which operate on your database, then export functionality like so:
module.exports = {
  createUser: function () { ; },
  deleteUser: function () { ; }
};
which you could then require from another file like so:
var dbman = require('./dbmanager');
dbman.createUser(userData); // using connection established in `dbmanager.js`
EDIT: Because we're dealing with JavaScript and a single thread, the native driver indeed automatically handles connection pooling for you. You can look for this in the StackOverflow links below for more confirmation of this. The OP does state this in the question as well. This means that client.connect should be called only once by an instance of your server. After the database object is successfully retrieved from a call to client.connect, that database object should be reused throughout the entire instance of your app. This is easily accomplished by using the module pattern that Node.JS provides.
My suggestion is to create a module or set of modules which serves as a single point of contact for interacting with the database. In my apps I usually have a single module which depends on the native driver, calling require('mongodb'). All other modules in my app will not directly access the database, but instead all manipulations must be coordinated by this database module.
This encapsulates all of the code dealing with the native driver into a single module or set of modules. The OP seems to think there is a problem with the simple code example I've posted, describing a problem with a "single large closure" in my example. This is all pretty basic stuff, so I'm adding clarification as to the basic architecture at work here, but I still do not feel the need to change any code.
The OP also seems to think that multiple connections could possibly be made here. This is not possible with this setup. If you create a module like I suggest above, then the first time require('./dbmanager') is called it will execute the code in the file dbmanager.js and return the module.exports object. The exports object is cached and is also returned on each subsequent call to require('./dbmanager'); however, the code in dbmanager.js will only be executed on the first require.
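As a concrete illustration of that module pattern, a sketch using the current promise-based driver API (the snippets above use the older callback style); the function names, database name, and collection are illustrative:

// dbmanager.ts - single-module pattern with the modern promise-based driver (sketch)
import { MongoClient, Db, ObjectId } from 'mongodb';

const client = new MongoClient(process.env.MONGO_URL ?? 'mongodb://localhost:27017');
// connect() starts once, when this module is first required; every importer
// awaits the same promise and therefore shares the same connection pool.
const dbPromise: Promise<Db> = client.connect().then((c) => c.db('databaseName'));

export async function createUser(userData: { name: string }): Promise<void> {
  const db = await dbPromise;
  await db.collection('users').insertOne(userData);
}

export async function deleteUser(userId: string): Promise<void> {
  const db = await dbPromise;
  await db.collection('users').deleteOne({ _id: new ObjectId(userId) });
}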
If you don't want to create a module like this then the other option would be to export only the database passed to the callback for client.connect and use it directly in different places throughout your app. I recommend against this however, regardless of the OPs concerns.
Similar, possibly duplicate Stackoverflow questions, among others:
How to manage mongodb connections in nodejs webapp
Node.JS and MongoDB, reusing the DB object
Node.JS - What is the right way to deal with MongoDB connections
As the accepted answer says, you should create only one connection for all incoming requests and reuse it, but the answer is missing a solution that will create and cache that connection. I wrote Express middleware to achieve this: express-mongo-db. At first sight this task is trivial, and most people use this kind of code:
var db;

function createConnection(req, res, next) {
  if (db) {
    req.db = db;
    return next();
  }
  client.connect(uri, { auto_reconnect: true }, function (err, database) {
    req.db = db = database;
    next();
  });
}

app.use(createConnection);
But this code leads you to a connection leak when multiple requests arrive at the same time and db is still undefined. express-mongo-db solves this by holding the incoming requests and calling connect only once, when the module is required (not when the first request arrives).
Hope you find it useful.
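One way to see the fix is to cache the connection promise rather than the connection itself, so concurrent first requests all wait on the same connect() call. A sketch, illustrative only (not the express-mongo-db source) and using the modern promise-based driver:

// db-middleware.ts - cache the promise so simultaneous first requests share one connect()
import { MongoClient, Db } from 'mongodb';
import { Request, Response, NextFunction } from 'express';

let dbPromise: Promise<Db> | null = null;

export function createConnection(req: Request & { db?: Db }, _res: Response, next: NextFunction) {
  if (dbPromise === null) {
    dbPromise = MongoClient.connect(process.env.MONGO_URL ?? 'mongodb://localhost:27017')
      .then((client) => client.db());
  }
  dbPromise.then((db) => {
    req.db = db;
    next();
  }, next); // pass connection errors to Express error handling
}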
I just thought I would add my own method of connecting to MongoDB for others who are interested or having problems with other methods. This method assumes you don't need authentication (I use it on localhost); authentication is still easy to implement.
var MongoClient = require('mongodb').MongoClient;
var Server = require('mongodb').Server;

var client = new MongoClient(new Server('localhost', 27017, {
  socketOptions: { connectTimeoutMS: 500 },
  poolSize: 5,
  auto_reconnect: true
}, {
  numberOfRetries: 3,
  retryMilliseconds: 500
}));

client.open(function (err, client) {
  if (err) {
    console.log("Connection Failed Via Client Object.");
  } else {
    var db = client.db("theDbName");
    if (db) {
      console.log("Connected Via Client Object . . .");
      db.logout(function (err, result) {
        if (!err) {
          console.log("Logged out successfully");
        }
        client.close();
        console.log("Connection closed");
      });
    }
  }
});
Credit goes to Brad Dayley, who goes over this method in his book (pages 231-232).
