How to use TypeScript and mongoose?

I have been working on an application for a few months and I have started to doubt my current backend architecture. I am building a backend with TypeScript, mongoose and routing-controllers.
I have a controller which handles the requests and passes them on to a repository. I have two types of repositories: a normal repository (with which the controller interacts) and a model repository (which interacts with the mongoose model).
I have also defined some interfaces for the mongoose models. Let's say I have a user:
import { Document } from 'mongoose';

export interface User {
  firstName: string;
  lastName: string;
}

export interface UserModel extends User, Document {}
When the model is created it uses the UserModel.
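For reference, the model creation looks roughly like this (a sketch using mongoose's model<T>() generic; the exact schema options and the Users export name are illustrative):

import { Schema, model } from 'mongoose';

const UserSchema = new Schema({
  firstName: { type: String, required: true },
  lastName: { type: String, required: true },
});

// The generic parameter makes queries resolve to UserModel documents
export const Users = model<UserModel>('User', UserSchema);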
Now my questions are:
1) Is this a correct way of using Node.js?
2) When making a request with the model repository should the User interface or the UserModel interface be returned?
Currently I use it like this:
public async getUserById(userId: string): Promise<UserModel | null> {
  // findById resolves to null when no document matches the id
  return this.userModel.findById(userId).exec();
}
And then the normal repository returns an object of the User interface.
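That mapping can be done by stripping the mongoose Document wrapper before handing the data up, e.g. (a sketch; modelRepository is assumed to expose the getUserById method shown above):

public async getUser(userId: string): Promise<User | null> {
  const doc = await this.modelRepository.getUserById(userId);
  // toObject() drops the Document internals and leaves plain User data
  return doc ? doc.toObject() : null;
}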

Related

NestJs for lambda with mongodb creating too many connections

We are using NestJS for a lambda which connects to MongoDB for data, via the NestJS mongoose module. However, on deployment, each invocation creates a new set of connections and the previous ones are not released.
We are using forRootAsync:
MongooseModule.forRootAsync({
  useClass: MongooseConfigService,
})
The service looks like this:
import { Inject, Injectable, Scope } from '@nestjs/common';
import { REQUEST } from '@nestjs/core';
import { MongooseModuleOptions, MongooseOptionsFactory } from '@nestjs/mongoose';
import { Request } from 'express';

@Injectable({ scope: Scope.REQUEST })
export class MongooseConfigService implements MongooseOptionsFactory {
  constructor(@Inject(REQUEST) private readonly request: Request) {}

  async createMongooseOptions(): Promise<MongooseModuleOptions> {
    // On lambda the body may arrive as a raw Buffer, so parse it first
    if (Buffer.isBuffer(this.request.body)) {
      this.request.body = JSON.parse(this.request.body.toString());
    }
    const { db } = this.request.body;
    console.log('connecting database', db);
    return {
      uri: process.env.MONGO_URL,
      dbName: db || '',
    };
  }
}
I understand we need to reuse the same connection. We have achieved this in plain Node.js by simply checking whether a connection already exists and only connecting when it does not. I'm not sure how to achieve the same in NestJS.
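For reference, the plain Node.js pattern I mean is roughly this (a sketch: the connection is cached in module scope so that warm lambda invocations reuse it):

import mongoose from 'mongoose';

let cached: typeof mongoose | null = null;

export async function getConnection(): Promise<typeof mongoose> {
  if (!cached) {
    // Module scope survives between warm invocations of the same lambda
    cached = await mongoose.connect(process.env.MONGO_URL!);
  }
  return cached;
}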
I tried changing the scope of the service to Scope.DEFAULT, but that didn't help.
I would suggest that you use a connection proxy for MongoDB. Every time a lambda is invoked, it will open up a new connection to MongoDB. AWS generally provides a service that allows you to proxy requests through one connection; this is a common issue with RDS etc.
This may help you though: https://www.mongodb.com/docs/atlas/manage-connections-aws-lambda/
We were able to resolve the issue by using a durable provider. Following the NestJS documentation, we created a strategy at the root that keys off a parameter coming in with each request. NestJS would then call the connection module only when a new connection was required.
Note: when you use durable providers, the request object no longer has all the parameters; it only has the tenantId, as per the example.
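The strategy follows the durable-providers example from the NestJS docs; a sketch (the x-tenant-id header is the docs' convention, so substitute whatever parameter identifies a tenant in your requests):

import { ContextId, ContextIdFactory, ContextIdStrategy, HostComponentInfo } from '@nestjs/core';
import { Request } from 'express';

const tenants = new Map<string, ContextId>();

export class AggregateByTenantContextIdStrategy implements ContextIdStrategy {
  attach(contextId: ContextId, request: Request) {
    const tenantId = request.headers['x-tenant-id'] as string;
    let tenantSubTreeId: ContextId;
    if (tenants.has(tenantId)) {
      tenantSubTreeId = tenants.get(tenantId)!;
    } else {
      // One DI subtree per tenant, reused across requests
      tenantSubTreeId = ContextIdFactory.create();
      tenants.set(tenantId, tenantSubTreeId);
    }
    return (info: HostComponentInfo) => (info.isTreeDurable ? tenantSubTreeId : contextId);
  }
}

// Registered once at bootstrap, with the request-scoped provider marked durable:
// ContextIdFactory.apply(new AggregateByTenantContextIdStrategy());
// @Injectable({ scope: Scope.REQUEST, durable: true })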

Resolving a Graphql query from different data sources

I have a GraphQL server I've created using ApolloServer and Type-Graphql.
I have two types defined on my GraphQL server:
User {
  id: string;
  name: string;
  email: string;
  ...
}
UserPrefrences {
  userId: string;
  theme: string;
  color: string;
  ...
}
The data for the User type is stored in a database which I access through a different GraphQL server, by forwarding the request I get from the client.
The data for the UserPrefrences type is stored in a different database which I access directly from my GraphQL server.
I don't want my client to have to know about these two separate types or to run two separate queries.
I'm looking for a way to let my client run the following query against my GraphQL server:
query UserData($userId: String!) {
  user(id: $userId) {
    id
    name
    email
    theme
    color
  }
}
But if I forward this request to the GraphQL server that I'm querying, I will get a response saying that the fields 'theme' and 'color' are unknown to it.
I'm trying to find a way to forward only the relevant fields to that GraphQL server and resolve the rest within my own server. But I receive the query as a string, which makes it a pain to extract only the fields I'm interested in with a regex.
I'd be more than happy for any ideas on how to solve this issue.
The only way I found is to use a GraphQL client in Node.js.
I'm using the npm library called graphql-request:
https://www.npmjs.com/package/graphql-request
import { GraphQLClient, gql } from 'graphql-request';

const client = new GraphQLClient('http://localhost:4000/graphql');

const query = gql`
  {
    yourQuery {
      id
      name
    }
  }
`;

// request() returns a promise that resolves with the parsed data
const response = await client.request(query);
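To keep the client unaware of the two sources, one option (a sketch, not part of the answer above) is to expose a single type and split the work with Type-GraphQL field resolvers: the root query forwards to the remote GraphQL server (e.g. via graphql-request), while theme and color are resolved locally. fetchRemoteUser and prefsDb below are hypothetical stand-ins for the two data sources:

import 'reflect-metadata';
import { Arg, Field, FieldResolver, ID, ObjectType, Query, Resolver, Root } from 'type-graphql';

@ObjectType()
class UserData {
  @Field(() => ID) id!: string;
  @Field() name!: string;
  @Field() email!: string;
}

// Hypothetical helpers standing in for the two data sources
declare function fetchRemoteUser(userId: string): Promise<UserData>;
declare const prefsDb: {
  getTheme(userId: string): Promise<string>;
  getColor(userId: string): Promise<string>;
};

@Resolver(() => UserData)
class UserDataResolver {
  @Query(() => UserData)
  async user(@Arg('userId') userId: string): Promise<UserData> {
    // Forward only the User fields to the remote GraphQL server
    return fetchRemoteUser(userId);
  }

  @FieldResolver(() => String)
  theme(@Root() user: UserData): Promise<string> {
    // Resolved locally from the preferences database
    return prefsDb.getTheme(user.id);
  }

  @FieldResolver(() => String)
  color(@Root() user: UserData): Promise<string> {
    return prefsDb.getColor(user.id);
  }
}

// The resolver is then passed to buildSchema({ resolvers: [UserDataResolver] }).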

session management using core node.js without express.js

How do you handle/create middleware for server-side session management in a core Node.js (non-Express) project? I can find modules for Express-based projects but not for core Node.js. Please suggest modules or middleware for a non-Express project.
Session management can be implemented via a database (MySQL, MongoDB, Redis etc.) or some local cache.
The main logic behind sessions is an object with data.
So on first interaction you can give the user some random id, like a uuid,
and save it to some module which looks like this:
class OwnSession {
  constructor() {
    this.sessions = {};
  }

  getUser(sessionId) {
    return this.sessions[sessionId];
  }

  setUser(sessionId, userData) {
    if (this.sessions[sessionId]) {
      Object.assign(this.sessions[sessionId], userData);
      return;
    }
    this.sessions[sessionId] = userData;
  }
}

// We export new OwnSession() here to keep a singleton across your project.
module.exports = new OwnSession();
And then, in any module, you require OwnSession and call its methods.
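A minimal sketch of wiring that singleton into a core http server (the cookie handling is deliberately naive, and the ./ownSession file name is an assumption):

import { createServer } from 'http';
import { randomUUID } from 'crypto';
import ownSession from './ownSession'; // the singleton exported above

createServer((req, res) => {
  // Naive cookie lookup; a real app needs a proper cookie parser
  const match = /sessionId=([^;]+)/.exec(req.headers.cookie || '');
  let sessionId = match ? match[1] : null;

  if (!sessionId) {
    sessionId = randomUUID();
    res.setHeader('Set-Cookie', `sessionId=${sessionId}; HttpOnly`);
    ownSession.setUser(sessionId, {});
  }

  res.end(JSON.stringify(ownSession.getUser(sessionId) || {}));
}).listen(3000);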

How to share mongoose models with multiple microservices

I have a User model which looks like:
import mongoose from 'mongoose';

const UserSchema = new mongoose.Schema({
  name: String,
  email: {
    type: String,
    required: true,
    unique: true,
  },
  password: {
    type: String,
    required: true,
  },
});

export default mongoose.model('User', UserSchema);
I'm trying to share this model with multiple microservices, but how do I do that? Should I make a database service exposed over HTTP, should I manually duplicate the model in each service, or is there another way?
It's dangerous to share schemas across microservices because they become tightly coupled; at the very least, it should not be done like that. It's normal for microservices to use data from each other, but a model should not be imported wholesale into another microservice. Instead, the dependent microservice should use a subset: a local representation of the remote model. For this you should use an anti-corruption layer (ACL), sketched below. The ACL receives remote models as input and produces a local, immutable/read-only representation of that model as output. It lives at the outer boundary of the microservice, i.e. where the remote calls are made.
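A minimal sketch of such an ACL (every type and field name here is illustrative, not taken from the question):

// Shape the owning microservice sends back over the wire (assumed)
interface RemoteUser {
  _id: string;
  name: string;
  email: string;
  password: string; // a field the consumer must never depend on
}

// The consumer's own, read-only representation
interface LocalUser {
  readonly id: string;
  readonly name: string;
  readonly email: string;
}

// The ACL: translate at the boundary and drop everything else
function toLocalUser(remote: RemoteUser): LocalUser {
  return Object.freeze({
    id: remote._id,
    name: remote.name,
    email: remote.email,
  });
}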
Also, sharing *schema.js files across microservices would force you to use JavaScript/Node.js in all the other microservices, which is not good: each microservice should use whatever programming language is best suited to it.
Should I make a database service exposed over HTTP?
The database is private to the owning microservice. It should not be exposed.

How to deal with calling sequelize.sync() first?

I'm a bit new to developing in nodejs, so this is probably a simple problem. I'm building a typical webapp based on express + sequelize. I'm using sqlite in-memory since I'm just prototyping at the moment. I understand if I were to use a persistent sqlite file, this may not be a problem, but that's not my goal at the moment. Consider the following:
var User = sequelize.define("User", {
  "username": DataTypes.STRING,
  // etc, etc, etc
});

sequelize.sync();

User.build({
  "username": "mykospark"
});
At first, I got an error on User.build() about the Users table not existing yet. I realized that sequelize.sync() runs asynchronously and the insert was happening before the table was created. I then rearranged my code so that the User.build() call was inside sequelize.sync().complete(), which fixes the problem, but I'm not sure how to apply this to the rest of my project.
My project uses models in a bunch of different places. It is my understanding that I just want to call sequelize.sync() once after my models are defined, then they can be used freely. I could probably find some way to block the entire nodejs app until sequelize.sync() finishes, but that doesn't seem like good form. I suppose I could wrap every single model operation into a sequelize.sync().complete() call, but that doesn't seem right either.
So how do people usually deal with this?
Your .sync() call should be made once, within your app.js file. However, you might have additional calls if you manage multiple databases from one server. Typically your .sync() call will live in your server file, and the var User = sequelize.define("ModelName", ...) will live in your models/modelName.js file. Sequelize suggests this kind of layout to "create a maintainable application where the database logic is collected in the models folder". This will help you as your development grows. Later in the answer, I'll provide easy steps for initializing the file structure.
So for your case, you would have app.js, models/index.js and models/user.js, where app.js is your server running the .sync() method. In the models folder you have the required index.js file, where you configure a connection to the database and collect all the model definitions, and finally your user.js file, where you add your model with class and instance methods. Below is an example of the models/user.js file you might find helpful.
user.js
module.exports = function(sequelize, DataTypes) {
  return sequelize.define('User', {
    username: DataTypes.STRING,
  }, {
    classMethods: {
      doSomething: function(successcb, errcb, request) {}
    },
    instanceMethods: {
      someThingElse: function(successcb, errcb, request) {}
    }
  });
};
models/index.js --> See here
EDIT 03/14/17
Now the best option to set up your node app with sequelize is to use sequelize-cli. This provides sequelize migrations and has very useful functionality for development and production environments. For the scope of this question and this revision of the answer, the best approach is the following:
npm install sequelize-cli
Use npm install sequelize-cli -g if you want it installed globally.
Then initialize sequelize migrations:
sequelize init
It should create the following folder and file structure in the folder where you ran the command:
config/
  config.json
models/
  index.js
seeders/
migrations/
If you want to create a model, you can run the following command and it will auto-generate the file structure for you. Here is an example:
sequelize model:create --name User --attributes user:string,email:string
Next you should be able to see the new model file in the models folder (user.js here, plus a page.js generated the same way):
config/
  config.json
models/
  index.js
  user.js
  page.js
seeders/
migrations/
You'll then need to go into your models/index.js and register your new model, so your database is accessed through the correct path for that model. Here is an example:
models/index.js
var Sequelize = require('sequelize');

// dbname, user, password and config come from config/config.json
var sq = new Sequelize(dbname, user, password, config);

var db = {
  Sequelize: Sequelize,
  sequelize: sq,
  page: sq.import(__dirname + '/page.js'),
  user: sq.import(__dirname + '/user.js')
};

module.exports = db;
If you need to make changes to a model, you can go into the migrations folder and add methods; follow the sequelize migration docs here. Now, about the app.js server: before you run your server, you need to initialize your databases. I use the following script to initialize the database before running the server, to set up a postgres db:
postgresInit.sh
[...]
sudo -u postgres createdb -U postgres -O $PG_USER $PG_DB
If you prefer a javascript solution, there is an SO solution here
app.js
[...]
// This will sync your tables to your database, and the console should read out:
//   Executing (default): CREATE TABLE IF NOT EXISTS "TABLE NAME"...
db.sequelize.sync();
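And to answer the original ordering problem directly: gate anything that touches the models on the promise that sync() returns (a sketch; app stands for your express instance):

db.sequelize.sync().then(function () {
  // The tables exist now, so calls like User.build()/save() are safe
  app.listen(3000, function () {
    console.log('listening on 3000');
  });
});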
