One controller, service and model instance per request in Node.js

I am working on a large project and we are reviewing the performance of the system. We are considering creating one (separate) instance of the controller, service, and model for every single request to make the code more readable, but I suspect this would affect the performance of the system.
Is that so?
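For what it's worth, here is a minimal sketch of the pattern being discussed, assuming an Express app; the route, class names, and service logic are illustrative and not taken from the project in question:

const express = require('express');
const app = express();

// Shared (singleton) service: created once at startup and reused by every request.
class UserService {
  getUser(id) { return { id }; } // stateless: no per-request fields
}
const sharedService = new UserService();

// Per-request controller: a new instance is allocated for each incoming request.
class UserController {
  constructor(service) { this.service = service; }
  handle(req, res) { res.json(this.service.getUser(req.params.id)); }
}

app.get('/users/:id', (req, res) => {
  const controller = new UserController(sharedService); // cheap, short-lived allocation
  controller.handle(req, res);
});

app.listen(3000);

Allocating small, short-lived objects per request is usually cheap in V8 compared to the I/O each request performs, so the readability gain is often a reasonable trade; only profiling the real system can confirm that for your workload.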

Related

Using the same class instance for database operation over every different request

In my NodeJS/Express backend implementation, each incoming request instantiates a new controller class, with a handler method for that specific route only.
This means that every incoming request allocates some memory to instantiate the class that will handle the response, and then frees that memory.
To manipulate the database, the newly instantiated controller uses model classes, which are not instantiated on each request but when the server starts.
This means that every model (one per table) is effectively global and is used by every new controller instance.
Those models do not hold state; they are more like a library of functions.
Knowing this, I am still wondering whether any kind of side effect is possible if many requests arrive simultaneously. Even though those models don't have any state (like this.data = 1) that would be shared between two requests, I am not totally sure.
I also chose this implementation because I think instantiating new models for each request may start to consume too much memory under many simultaneous requests, especially because there are also other classes (services and helpers) that follow the same logic, but maybe I am wrong.
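To make that layout concrete, here is a minimal sketch of a stateless, module-level model of the kind described above; the db helper and the users table are illustrative assumptions:

// userModel.js -- loaded once when the server starts and shared by every
// controller instance. Safe for concurrent requests as long as it keeps no
// per-request state on `this` or in module-level variables.
const db = require('./db'); // hypothetical query helper

module.exports = {
  findById(id) {
    // Everything lives in arguments and local variables, which are private
    // to each call, so interleaved requests cannot see each other's data.
    return db.query('SELECT * FROM users WHERE id = ?', [id]);
  },

  create(user) {
    return db.query('INSERT INTO users SET ?', [user]);
  },
};

The only way two simultaneous requests could interfere here is if a handler wrote intermediate results onto the shared module or onto `this`, which is exactly the state the question says these models do not have.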

Splitting up Azure Functions without creating new function app

Our existing system uses App Services with API controllers.
This is not a good setup because our scaling support is poor; it's basically all or nothing.
I am looking at changing over to use Azure Functions.
So effectively each method in a controller would become a new function.
Let's say that we have a taxi booking system.
So we have the following:
Taxis
  GetTaxis
  GetTaxiDrivers
Drivers
  GetDrivers
  GetDriversAvailableNow
In the App Service approach we would simply have a TaxiController and a DriverController with the methods as routes.
How can I achieve the same thing with Azure Functions?
Ideally, I would have 2 function apps, Taxis and Drivers, with functions inside each.
The problem with that approach is that 2 function apps means 2 sets of config settings, and if that is expanded throughout the system it's far too big a change to make right now.
Some of our routes are already quite long, so I can't really add the "controller" name to my function name because I would exceed the 32-character limit.
Has anyone had similar issues migrating from App Services to Azure Functions?
Paul
The problem with that approach is that 2 function apps means 2 sets of config settings, and if that is expanded throughout the system it's far too big a change to make right now.
This is why application settings should be part of the release process. You should compile once and deploy as many times as you want, to different environments, using the same binaries from the compilation process. If you're not there yet, I strongly recommend you start by automating the CI/CD pipeline.
Now, answering your question: the proper way (IMHO) is to decouple taxis and drivers. When a taxi is requested, your controller should add a message to a queue; an Azure Function listening to that queue gets triggered automatically to dequeue and process whatever needs to be processed (see the sketch after the list of advantages below).
Advantages:
Your controller's response time will improve, since it hands the processing off to another process.
The more messages in the queue, the more instances of the function will be created to consume them, so it scales only when needed.
HTTP requests (from one controller to another) are not reliable unless you properly implement a circuit breaker and a retry policy. With the proposed architecture, if something goes wrong, the message either remains in the queue or, if the Azure Function fails to complete it, returns to the queue.
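Here is a minimal sketch of the consuming side, assuming the Node.js programming model for Azure Functions and a hypothetical taxi-requests storage queue declared in function.json:

// index.js -- queue-triggered Azure Function (Node.js).
// function.json is assumed to contain a "queueTrigger" binding pointing
// at the taxi-requests queue.
module.exports = async function (context, message) {
  context.log('Processing taxi request', message);

  // Illustrative processing step: assign a driver, persist the booking, etc.
  // If this throws, the runtime returns the message to the queue and retries,
  // then moves it to a poison queue after the maximum number of attempts.
  await processBooking(message); // hypothetical helper
};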

Scaling Stateful NodeJS Services - Stickiness/Affinity Based on Object ID (not user session)

I am trying to find a good way to horizontally scale a stateful NodeJS service.
The Problem
The problem is that most of the options I find online assume the service is stateless. The NodeJS cluster documentation says:
Node.js [Cluster] does not provide routing logic. It is, therefore important to design an application such that it does not rely too heavily on in-memory data objects for things like sessions and login.
https://nodejs.org/api/cluster.html
We are using Kubernetes, so scaling across multiple machines would also be easy if my service were stateless, but it is not.
Current Setup
I have a list of objects that stay in memory; each object on its own is a transaction boundary. Requests to this service always have the object ID in the URL. Requests to the same object ID are put into a queue and processed one at a time (see the sketch below).
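As an aside, here is a minimal sketch of that per-ID serialization, using a promise chain keyed by object ID; the names and the Express-style usage are illustrative:

// Requests for the same object ID run one at a time; requests for different
// IDs run concurrently. (A real implementation would also prune finished
// chains so the map does not grow without bound.)
const queues = new Map(); // objectId -> tail of that object's promise chain

function enqueue(objectId, task) {
  const tail = queues.get(objectId) || Promise.resolve();
  const next = tail.then(task, task); // run the task even if the previous one failed
  queues.set(objectId, next);
  return next;
}

// Illustrative usage inside a route handler:
// app.post('/objects/:id', (req, res) => {
//   enqueue(req.params.id, () => handleTransaction(req))
//     .then(result => res.json(result), err => res.status(500).send(err.message));
// });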
Desired Setup
I would like to keep this interface to the external world, but internally spread this list of objects across multiple nodes and, based on the ID in the URL, route each request to the appropriate node.
What is the usual way to do this in NodeJS? I've seen people use the user session to make sure a given user always goes to the same node; what I would like to do is the same thing, but using the ID in the URL instead of the user session.
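One common approach is deterministic, ID-based routing. A minimal sketch, assuming a fixed list of backend addresses; the hashing scheme and node URLs are illustrative, and the same mapping could equally live in a proxy or ingress layer:

const crypto = require('crypto');

// Hypothetical set of backend instances that hold the in-memory objects.
const nodes = ['http://objects-0:3000', 'http://objects-1:3000', 'http://objects-2:3000'];

// Map an object ID to the same node every time.
function nodeFor(objectId) {
  const digest = crypto.createHash('md5').update(String(objectId)).digest();
  return nodes[digest.readUInt32BE(0) % nodes.length];
}

console.log(nodeFor('order-42')); // the same ID always routes to the same backend

Note that plain modulo sharding remaps most IDs whenever the node count changes; a consistent-hash ring limits that reshuffling, which matters if the set of pods is expected to scale up and down.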

Unity Container PerResolveLiveTimeManager with multi-database and multiple requests on IIS

I have a question about Unity Container. My MVC application starts in Application_Start in Global.asax, which builds a Unity container as follows:
_container = new UnityContainer();
_container.RegisterType(typeof(MainBCUnitOfWork), new PerResolveLifetimeManager());
From what I know, IIS will instantiate the type MainBCUnitOfWork only once during its life cycle and will use the same instance for all requests, which is why I am using a lifetime manager of type PerResolveLifetimeManager.
My application has always worked well in this mode; however, I am now trying to implement shared / cross-database access, where the required database comes from a (session, querystring) value, and I switch databases with the method below:
public void ChangeDatabase(string database)
{
    // Rewrites the connection string of the shared unit of work at runtime.
    Database.Connection.ConnectionString = "server=localhost;User Id=root;password=mypassword;Persist Security Info=True;database=" + database;
}
In my local testing everything works OK, but I have doubts about production, when IIS is processing many requests at the same time.
I did some research and found references saying that IIS only processes one request at a time, and that to process more than one request I should activate Web Garden, but that would bring other problems. See this link: IIS and HTTP pipelining, processing requests in parallel.
My question is: does the IIS server only process one request at a time, independent of the source?
Can the change of database during execution interfere with prior requests which are still ongoing?
I use Unity 2, which does not have PerRequestLifetimeManager, which would instantiate MainBCUnitOfWork once per request, as suggested here: MVC, EF - DataContext singleton instance Per-Web-Request in Unity.
If I update to a newer version and use one instance per request, what would be the impact on performance?
What is recommended for this situation?
From what I know, IIS will instantiate the type MainBCUnitOfWork only once during its life cycle and will use the same instance for all requests, which is why I am using a lifetime manager of type PerResolveLifetimeManager.
This is wrong. Look at these articles (one and two). PerResolveLifetimeManager is not a singleton lifetime manager: you'll get a new instance of MainBCUnitOfWork for every Resolve.
What is recommended for this situation?
Using PerRequestLifetimeManager is the best choice for web applications. You will get a new, independent instance of your UnitOfWork for every request.

DynamoDB Application Architecture

We are using DynamoDB with node.js and Express to create REST APIs. We have started to go with Dynamo on the backend, for simplicity of operations.
We have started to use the DynamoDB Document SDK from AWS Labs to simplify usage, and make it easy to work with JSON documents. To instantiate a client to use, we need to do the following:
var AWS = require('aws-sdk');
var Doc = require('dynamodb-doc');
var Dynamodb = new AWS.DynamoDB();
var DocClient = new Doc.DynamoDB(Dynamodb);
My question is: where do those last two steps need to take place in order to ensure data integrity? I'm concerned about an object that is waiting for something to happen in Dynamo being taken over by another process and getting its data swapped, resulting in incorrect data being sent back to a client, or incorrect data being written to the database.
We have three parts to our REST API. We have the main server.js file, which starts Express and the HTTP server, assigns resources to it, sets up logging, etc. We do the first two steps of creating the connection to Dynamo, the AWS and Doc requires, at that point; those vars are global in the app. Then, depending on the route being followed through the API, we call a controller that parses the input from the REST call. It then calls a model file, which does the interacting with Dynamo and provides the response back to the controller, which formats the return package along with any errors and sends it to the client. A model is simply a group of methods that cover the same area of the app. We would have a user model, for instance, that covers things like login and account creation.
I have placed the last two steps above, which create the Dynamo objects, in two different places. In one version, I simply put them in one spot at the top of each model file; I do not re-instantiate them in the methods below, I simply use them. In the other, I instantiated them within the methods, when we are preparing to make the call to Dynamo, making them entirely local to the method and passing them to a secondary function if needed. This second approach has always struck me as the safest way to do it. However, under load testing, I have run into situations where we seem to have overwhelmed the outgoing network connections, and I start getting errors telling me that the DynamoDB endpoint is unavailable in the region I'm running in. I believe this is from the additional calls required to make the connections.
So, the question is: is creating those objects local to the model file safe, or do they need to be created locally in the method that uses them? Any thoughts would be much appreciated.
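For reference, here is a minimal sketch of the first placement described above, with one shared client pair per process; the file names are illustrative:

// dynamo.js -- create the AWS and document clients once, at module load time,
// and export them; Node's module cache means every require() of this file
// returns the same instances.
var AWS = require('aws-sdk');
var Doc = require('dynamodb-doc');

var dynamodb = new AWS.DynamoDB();
var docClient = new Doc.DynamoDB(dynamodb);

module.exports = { dynamodb: dynamodb, docClient: docClient };

// userModel.js (illustrative usage)
// var docClient = require('./dynamo').docClient;
// ...call docClient methods here; the client itself holds no per-request state.

Because the clients keep no per-request state, sharing them this way is safe, and it avoids the extra connection setup that the per-method variant performs under load, which is consistent with the endpoint-unavailable errors seen in the load test.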
You should be safe creating just one instance of those clients and sharing them in your code, but that isn't related to your underlying concern.
Concurrent access to various records in DynamoDB is still something you have to deal with. It is possible for different requests to attempt writes to the same object at the same time. This can happen with concurrent requests on a single server, but it is especially true when you have multiple servers.
Writes to DynamoDB are atomic only at the level of an individual item. This means that if your logic requires multiple updates to separate items, potentially in separate tables, there is no way to guarantee that all or none of those changes are made; it is possible that only some of them are made.
DynamoDB natively supports conditional writes, so it is possible to ensure specific conditions are met, such as specific attributes still having certain values; otherwise the write will fail.
With respect to making too many requests to DynamoDB: unless you are overwhelming your machine, there shouldn't be any way to overwhelm the DynamoDB API. If you are performing more reads/writes than you have provisioned, you will receive errors indicating that provisioned throughput has been exceeded, but the API itself is still functioning as intended under those conditions.
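To illustrate the conditional writes mentioned above, here is a minimal sketch using AWS.DynamoDB.DocumentClient from aws-sdk as a stand-in for the dynamodb-doc client in the question; the table, key, and attribute names are assumptions:

var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

// Only update the order if its status is still 'pending'; otherwise the write
// fails with a ConditionalCheckFailedException instead of silently overwriting
// another request's change.
docClient.update({
  TableName: 'orders',            // hypothetical table
  Key: { orderId: 'order-42' },   // hypothetical key
  UpdateExpression: 'SET #s = :next',
  ConditionExpression: '#s = :expected',
  ExpressionAttributeNames: { '#s': 'status' },
  ExpressionAttributeValues: { ':next': 'assigned', ':expected': 'pending' }
}, function (err, data) {
  if (err && err.code === 'ConditionalCheckFailedException') {
    console.log('Another request changed this order first');
  }
});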
