Nginx - Routing Capabilities with Node.js

I am running a Node.js server which is now getting more load, and I need to start running it on multiple cores, since Node.js is single-threaded and can only run on one.
This is simple to solve given the Node.js Cluster module and the many NPM packages for this very thing.
I have a problem in that I need browser sessions to stick to the same Node.js worker after the first request. This is because I store authentication data and the like in a single node worker process and do not want to open the can of worms of messaging between worker processes, etc.
My browser stores a session id cookie once authenticated, and I want a system that re-routes requests to the correct worker based on that session cookie.
Nginx looks promising, but I know nothing about it, and while I am willing to put in the work, I would like to know before I spend hours diving into it whether it is capable of routing to Node.js worker processes based on arbitrary data from the request headers, such as a session cookie.
Is this doable? If I know it is, I'll get down and dirty figuring out Nginx from the ground up.

I assume you are storing your sessions in nodejs memory. You might want to store your sessions in redis instead. That way they are persisted outside of a single server and can be accessed from multiple processes.
In addition to redis, you might also want to look into Amazon Elastic Beanstalk for managing your load balancing. You can set up an nginx proxy to route your requests to multiple servers based on their load.
This link might help you get started: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs_express_elasticache.html
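To answer the routing question directly: yes, nginx can pick an upstream based on arbitrary request data, including a cookie. A rough sketch (the cookie name and ports here are placeholders, and hashing on an arbitrary key requires a reasonably recent nginx):

upstream node_workers {
    # Hash on the session cookie so a given session always
    # lands on the same Node.js worker
    hash $cookie_sessionid consistent;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_workers;
    }
}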

Related

How to direct a user to an available websocket server when she logs in to my multi-server Node.js app?

This is more like a design question but I have no idea where to start.
Suppose I have a realtime Node.js app that runs on multiple servers. When a user logs in, she doesn't know which server she will be assigned to. She will just log in, do something, and log out, and that's it. A user won't be interacting with other users on a different server, nor will her details be stored on another server.
In the backend, I assume the Node.js server will put the user's login details into some queue, and then when there is space it will assign the user to an available server (a server that has the lowest ping value or is not full). Because one physical server can hold only a limited number of users, when a user tries to log in to a "full" server, it will direct her to another available server.
I am using the ws module of node.js. Is there any service available for this purpose, or do I have to build my own? How difficult would that be?
I am not sure how websockets fit into this question, so I'll ignore them. I guess your actual question is about load balancing... Let me try paraphrasing it.
Q: Does NodeJS have any load balancing feature that I can leverage?
Yes, and it is called cluster in NodeJS. Instead of the traditional single node process listening on a single port, this module allows you to spawn a group of node processes and have them all bound to the same port.
This means that all the user knows is the service's endpoint. They send a request to it, and one of the available processes in the group will serve it whenever possible.
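A minimal sketch of the cluster approach (the port and the one-worker-per-core choice are arbitrary here):

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Spawn one worker per CPU core; they all share port 3000
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  http.createServer((req, res) => {
    res.end('Handled by worker ' + cluster.worker.id + '\n');
  }).listen(3000);
}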
Alternatively using Nginx, the web server, as your load balancer is also a very popular approach to this problem.
References:
Cluster API: https://nodejs.org/api/cluster.html
Nginx as load balancer: http://nginx.org/en/docs/http/load_balancing.html
P.S.
I guess the keyword for googling solutions to your problem is load balancer.
Out of the two solutions I would recommend going the nginx way, as it is the more scalable approach: your Node processes can be spread across multiple hosts (horizontal scaling), whereas the cluster module is more about vertical scaling, taking advantage of a multi-core machine.
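For example, a minimal nginx load-balancing sketch (the hostnames and port are hypothetical); by default nginx distributes requests round-robin across the listed hosts:

upstream node_app {
    server app1.example.com:3000;
    server app2.example.com:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_app;
    }
}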

Simple message passing Nodejs server accepting only 4 requests at a time

We have a simple Express node server deployed on Windows Server 2012 that receives GET requests with just 3 parameters. It does some minor processing on these parameters, keeps a very simple in-memory node-cache for some of these parameter combinations, and interfaces with an external license server to fetch a license for the requesting user and set it in a cookie. After that, it interfaces with some workers via a load balancer (running with zmq) to download some large files in chunks, unzip and extract them, and write them to some directories, and then displays them to the user. On deploying these files, some other calls to the workers are initiated as well.
The node server does not talk to any database or disk. It simply waits for responses from the load balancer running on other machines (these are long operations, typically taking 2-3 minutes to respond). So, essentially, the computation and database interactions happen on other machines; the node server is only a simple message-passing/handshaking server that waits for responses in event handlers, initiates other requests, and renders the response.
We are not using the cluster module or nginx at the moment. With a bare-bones node server, is it possible to accept and process at least 16 requests simultaneously? Pages such as this one http://adrianmejia.com/blog/2016/03/23/how-to-scale-a-nodejs-app-based-on-number-of-users/ mention that a simple node server can handle only 2-9 requests at a time. But even with our bare-bones implementation, no more than 4 requests are accepted at a time.
Is using the cluster module or nginx necessary even for this case? How do we scale this application for a few hundred users to begin with?
An Express server can handle many more than 9 requests at a time, especially if it isn't talking to a database.
The article you're referring to assumes database access on each request and serving static assets from node itself rather than a CDN, all on a single CPU with 1 GB of RAM; that is, a database and a web server running together on a single core with minimal memory.
There really are no hard numbers on this sort of thing; you build it and see how it performs. If it doesn't perform well enough, put a reverse proxy such as nginx or haproxy in front of it to do load balancing.
However, if you really are hitting a bottleneck where only 4 connections are possible at a time, it sounds like you're keeping those connections open far too long and blocking others. It would be better to have node kick off those long-running processes, close the connections, and then have those servers call back somehow when they're done.
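A sketch of that pattern (the endpoint names and job store are made up, and startLongRunningWork stands in for your zmq/load-balancer call): the request is acknowledged immediately with a job id and the client polls for the result, so no connection is held open for minutes:

const express = require('express');
const crypto = require('crypto');

const app = express();
const jobs = {}; // in-memory job store, just for illustration

// Stand-in for the real zmq/load-balancer work (hypothetical)
function startLongRunningWork(params, cb) {
  setTimeout(() => cb(null, { params }), 2 * 60 * 1000); // simulate 2 min of work
}

app.get('/process', (req, res) => {
  const jobId = crypto.randomBytes(8).toString('hex');
  jobs[jobId] = { status: 'pending' };
  startLongRunningWork(req.query, (err, result) => {
    jobs[jobId] = err ? { status: 'failed' } : { status: 'done', result };
  });
  res.status(202).json({ jobId }); // respond right away, don't hold the connection
});

app.get('/status/:jobId', (req, res) => {
  res.json(jobs[req.params.jobId] || { status: 'unknown' });
});

app.listen(3000);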

req.session data lost after redirect when running pm2 in cluster mode

We are running a node.js app with express 4.6.1, cookie-parser 1.3.2, connect-flash 0.1.1, and express-session 1.7.0.
We use flash to display messages on pages after redirects, and we sometimes store data in req.session to auto-fill forms when the user makes a mistake and needs to re-enter them. Recently we started using pm2 in cluster mode; most things seem to work fine, but we noticed that we lose our flash data and the data stored in req.session after a redirect.
Here is an example:
req.flash("signup", errorString);
req.session.storedData = {};
req.session.storedData.username = "";
req.session.storedData.password = req.body.password;
req.session.storedData.email = req.body.email;
req.session.storedData.emailConfirm = req.body.emailConfirm;
res.redirect(problemRedirectPath);
This comes from an endpoint that accepts a request after the user tries to sign up but hits an error of some kind. If we run this without cluster mode, the session data and the flash both show up properly, but if we run it in cluster mode they are almost always lost (but not always :/).
Is there a better way to do this in cluster mode?
Unless you use Redis, Memcached, or some other external process to store session data, you will not be able to use more than one Node process to handle requests. Right now your app is only using express-session, which by default stores session data in memory.
https://github.com/expressjs/session#sessionoptions
See the warning section in the above link.
When you run an application with the cluster module, it forks a separate process for each application instance. These processes cannot directly share memory without some work on your part, which means that when requests are distributed round-robin across the application instances, any request that does not land on the same process will not be able to associate its cookie with the server-side session store.
I'd recommend changing your session store to something more production-ready such as Redis or Memcache. If you use Redis you may want to look at using connect-redis.
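A minimal sketch of that change (this uses the classic connect-redis API, where the module is called with the session object; newer versions take a Redis client instead, and the secret here is a placeholder):

const express = require('express');
const session = require('express-session');
const RedisStore = require('connect-redis')(session);

const app = express();
app.use(session({
  store: new RedisStore({ host: '127.0.0.1', port: 6379 }),
  secret: 'keyboard cat', // placeholder; use a real secret
  resave: false,
  saveUninitialized: false
}));

With the store living in Redis, every pm2 worker reads and writes the same sessions, so flash data survives redirects regardless of which process handles the request.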
I had the same issue. After switching from using memory for Express session to memcached, everything works fine with pm2 cluster mode.
https://github.com/balor/connect-memcached
It is generally recommended that applications never store state in memory. A tool like pm2 is a load balancer/process manager that distributes requests across all instances according to an algorithm, so one process will not contain the same in-memory state that the other processes have. The solution is to use external storage that is shared by and accessible to all instances, like mongo/redis/sql/etc. That way all processes read state from the same source (a database rather than memory), solving the problem.

Scaling nodejs app with pm2

I have an app that receives data from several sources in realtime using logins and passwords. After data is received, it's stored in an in-memory store and replaced when new data is available. I also use sessions backed by mongo-db to authenticate user requests. The problem is that I can't scale this app using pm2, since I can use only one connection to my data source per login/password pair.
Is there a way to use a different login/password for each cluster instance, or to get the cluster ID inside the app?
Are in-memory values/sessions shared between cluster instances, or are they separate? Thank you.
So if I understood this question, you have a node.js app that connects to a 3rd party using HTTP or another protocol, and since you only have a single credential, you cannot connect to said 3rd party from more than one instance. To answer your question: yes, it is possible to set up your cluster instances to use unique user/password combinations. The tricky part is how to assign these credentials to each instance (assuming you don't want to hard-code them). You'd have to do the assignment when the servers start up, perhaps using a data store to hold the credentials and introducing some sort of locking mechanism for each credential, so that each credential is bound to a particular instance.
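On the "get the cluster ID inside the app" part: pm2 in cluster mode sets a per-instance index in the NODE_APP_INSTANCE environment variable, which you could use to pick a credential. A sketch (the credential list and password env vars are hypothetical):

// Hypothetical pool of credentials, one per pm2 instance
const credentials = [
  { user: 'feed-user-0', pass: process.env.FEED_PASS_0 },
  { user: 'feed-user-1', pass: process.env.FEED_PASS_1 }
];

// pm2 sets NODE_APP_INSTANCE to 0, 1, 2, ... in cluster mode
const instanceId = parseInt(process.env.NODE_APP_INSTANCE || '0', 10);
const myCredential = credentials[instanceId % credentials.length];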
If I were in your shoes, however, what I would do is create a new server whose sole job is to fetch this "realtime data" and store it somewhere available to the whole cluster, such as redis or some other persistent store. That server would be standalone, doing nothing but fetching the data. You could also attach a RESTful API to it, so that if your other servers need to communicate with it, they can do so via HTTP or a message queue (again, Redis would work fine there as well).
'Realtime' is vague; are you using WebSockets? HTTP requests made often enough could also be considered 'realtime'.
Possibly your problem is like something we encountered scaling SocketStream (WebSocket) apps, where the persistent connection requires that the same client's requests be routed to the same process. (There are other network topologies/architectures which don't require this, but that's another topic.)
You'll need to use fork mode (1 process only) and a solution to make sessions sticky, e.g.:
https://www.npmjs.com/package/sticky-session
I have some example code but need to find it (it's been over a year since I deployed it).
Basically you wind up using pm2 just for its 'always-on' feature; the sticky-session module handles the node clusterisation itself.
I may post an example later.
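In the meantime, a minimal sketch of the sticky-session pattern (the port and handler are arbitrary; check the package README for the current API):

const http = require('http');
const sticky = require('sticky-session');

const server = http.createServer((req, res) => {
  res.end('Handled by process ' + process.pid + '\n');
});

// sticky.listen() forks the workers itself and routes each client
// back to the same worker (it hashes on the remote address)
if (!sticky.listen(server, 3000)) {
  // Master process
  server.once('listening', () => console.log('Listening on port 3000'));
}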

Going session-less with NodeJS

I've been doing a lot of research lately, and it appears to me that going stateless server-side brings benefits to both performance and scalability.
I am trying to figure out, though, how to achieve session-less-ness in Node.JS. It seems to me that basically all I have to do is assign a token to a logged-in user, so I would have something like this in my DB:
{ user:'foo@example.com', pass:'123456', token:'long_id_here' }
so that the token can be sent with every HTTP request like this:
/set/:key/:val/:token
to be checked against the aforementioned DB object. Is this what it actually means to be a session-less web service?
If this is the right way, then I do not understand things like token expiry and other security issues. Could someone point me to an NPM package of some sort?
On a side note, is it best for the token to be a hash of the user+password, or to assign a different one at every login?
The reason to go sessionless is that most default session implementations use an in-memory store. That means that the session information is stored in memory local to that instance. Most websites these days are scaling out as traffic increases. This means they add more servers and balance the load between the servers. The problem with in-memory session stores is your user can log into Server 1, but if their next request is routed to Server 2, they don't have a session created yet and will appear to be logged off.
You don't necessarily need to go sessionless to scale out with node or any other server-side language. You just need a session store that doesn't live in local memory and is accessible to all nodes. If you're using something like Express or Connect, you can easily use a session implementation like connect-redis, which gives you a fast session store accessible to all of your node instances, so it doesn't matter which one is hit.
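As for the question's side note: a token derived from user+password never changes and cannot be revoked, so if you do go token-based it is safer to issue a fresh random token at every login and expire it server-side. A sketch (saveToken stands in for a real DB write):

const crypto = require('crypto');

// Stand-in for a real DB write (hypothetical)
function saveToken(record) {
  return Promise.resolve(record);
}

function issueToken(userId) {
  const token = crypto.randomBytes(32).toString('hex');
  const expiresAt = Date.now() + 24 * 60 * 60 * 1000; // 24-hour expiry
  return saveToken({ userId, token, expiresAt }).then(() => token);
}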
