Node.js: handling login on another server

Suppose you had 4 machines, each running an identical instance of a Node.js app, and users have to log in to access your website. After a user logs in, is it possible to move his connection to one of the other machines?
To clear it up:
Node 1 only holds the main app page, handles login validation, and knows how many users are on each node. It routes the user who logs in to the node with the lowest number of users, or, to make it more complicated, to the server which has the lowest load (based not on the number of users but on the traffic).
Each of the other nodes runs CentOS with a cluster of Node.js server processes.
I am using socket.io intensively, and after login I always have a persistent connection with the client; my client makes no AJAX requests at all, everything is handled using sockets.
In my current source code everything is combined in one Node.js app, and I do socket authentication for login.
The clients have no kind of interaction with each other, which makes the job easier.
Is it possible to pass a socket connection from one Node.js server to another?
How would you solve this problem yourself, considering that the Node.js app that handles the login and the actual Node.js app run on 2 separate machines?

I would keep it simple. I would create a load balancer node running its own balancer application. This node redirects each user to the least loaded "worker" Node.js instance, based on the number of authenticated user sessions on each node. This should happen even before authentication is done. All the "worker" nodes then run the same main app with exactly the same logic: main page, authentication, and application logic.
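A minimal sketch of such a balancer (hostnames are made up, and the session counts are assumed to be reported back by the workers themselves):

    const http = require('http');

    // Hypothetical worker list; `sessions` would be kept up to date
    // from reports sent by the workers.
    const workers = [
      { host: 'http://node2.example.com', sessions: 0 },
      { host: 'http://node3.example.com', sessions: 0 },
    ];

    http.createServer((req, res) => {
      // Redirect to the worker with the fewest authenticated sessions.
      const target = workers.reduce((a, b) => (a.sessions <= b.sessions ? a : b));
      res.writeHead(302, { Location: target.host + req.url });
      res.end();
    }).listen(80);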

Just save an access token (e.g. a cookie in the case of HTTP) into a database and have the client send it to the server every time it connects.
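For example (a sketch only: `app`, `db` and `io` are assumed to already exist, and the `auth` handshake field shown is from socket.io v3+):

    const crypto = require('crypto');

    // On the login server: issue a token and persist it.
    app.post('/login', async (req, res) => {
      // ... credential check omitted ...
      const token = crypto.randomBytes(32).toString('hex');
      await db.collection('sessions').insertOne({ token, userId: req.body.user });
      res.cookie('token', token).sendStatus(200);
    });

    // On any worker: validate the token whenever a new socket connects.
    io.use(async (socket, next) => {
      const session = await db.collection('sessions')
        .findOne({ token: socket.handshake.auth.token });
      session ? next() : next(new Error('unauthorized'));
    });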

Well, it's tough to give a complete answer without having a better sense of your application's architecture... but I'll lay out my assumptions and go from there:
a) www.yourdomain.com points to Node 1.
b) Node 1 is the only server that responds to HTTP requests; Node 2 through Node 5 only communicate through sockets.
c) Once a user is authenticated through Node 1, it does not need to re-authenticate through Node 1 for subsequent communication through sockets. (This one is a bit tricky: if you really want to ensure that only authenticated users can access your app, some authentication must be passed over the socket connection. It can be simpler than the full authentication performed by Node 1, and it sounds like you're doing this; I just want to raise the issue.)
Given those assumptions, I would assign a subdomain to each app server (perhaps node2.yourdomain.com, node3.yourdomain.com, etc.?). Then, when Node 1 is ready to pass the client over to the app server, determine which node you want to send them to, pass that subdomain over to the client, and have the client create a socket connection to the assigned app server; all of its communication will happen through there.
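Something along these lines (the event name and token are illustrative, not part of any API):

    // Node 1 (server side), after authenticating this socket: hand the
    // client its assigned app server plus a token to present there.
    socket.emit('assigned', { server: 'https://node3.yourdomain.com', token });

    // Client side: drop the connection to Node 1 and reconnect to the
    // assigned app server, presenting the token over the new socket.
    socket.on('assigned', ({ server, token }) => {
      socket.disconnect();
      const appSocket = io(server, { auth: { token } }); // socket.io-client v3+
      appSocket.on('connect', () => { /* all app traffic happens here */ });
    });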
If I've misunderstood or over-simplified things, feel free to set me straight in comments.

Related

How to direct a user to an available websocket server when she logs in to my multi-server Node.js app?

This is more like a design question but I have no idea where to start.
Suppose I have a realtime Node.js app that runs on multiple servers. When a user logs in, she doesn't know which server she will be assigned to. She will just log in, do something, and log out, and that's it. A user won't be interacting with other users on a different server, nor will her details be stored on another server.
In the backend I assume the Node.js server will put the user's login details into some queue and then, when there is space, assign this user to an available server (a server that has the lowest ping value or is not full). Because there is a limited number of users on one physical server, when a user tries to log in to a "full" server it will direct her to another available server.
I am using the ws module of Node.js. Is there any service available for this purpose or do I have to build my own? How difficult would that be?
I am not sure how WebSocket fits into this question, so I'll ignore it. I guess your actual question is about load balancing... Let me try paraphrasing it.
Q: Does Node.js have any load balancing feature that I can leverage?
Yes, and it is called cluster in Node.js. Instead of the traditional single node process listening on a single port, this module allows you to spawn a group of node processes and have them all bound to the same port.
This means that all the user knows is the service's endpoint. He sends a request to it, and one of the available processes in the group serves it whenever possible.
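A minimal sketch of the cluster approach (the port number is arbitrary):

    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {
      // Fork one worker per CPU core; they all share port 8000.
      for (let i = 0; i < os.cpus().length; i++) cluster.fork();
      // Replace any worker that dies.
      cluster.on('exit', () => cluster.fork());
    } else {
      // Incoming connections are distributed among the workers.
      http.createServer((req, res) => {
        res.end(`handled by worker ${process.pid}\n`);
      }).listen(8000);
    }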
Alternatively, using the Nginx web server as your load balancer is also a very popular approach to this problem.
References:
Cluster API: https://nodejs.org/api/cluster.html
Nginx as load balancer: http://nginx.org/en/docs/http/load_balancing.html
P.S. I guess the keyword for googling solutions to your problem is "load balancer".
Out of the 2 solutions I would recommend going the Nginx way, as it is the more scalable approach: your Node processes can be spread across multiple hosts (horizontal scaling), whereas the cluster solution is more about vertical scaling, taking advantage of a multi-core machine.
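For example, a minimal upstream configuration along the lines of the linked Nginx doc (hostnames and ports are placeholders):

    upstream node_app {
        least_conn;                     # pick the backend with fewest connections
        server node2.example.com:3000;
        server node3.example.com:3000;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://node_app;
        }
    }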

How to make a big real-time app with socket.io or Lightstreamer and scale horizontally

I have got some questions about real-time apps.
I am making a real-time app at the moment, using socket.io, MongoDB and Node.js. The app works nicely as a prototype, but what will happen when the number of users increases?
I want to scale horizontally.
E.g. I have got two servers (server A, server B):
client A connects to server A
client B connects to server B
How can client A send a message to client B? This has been confusing me, with the clients on different servers.
I found that people use Redis for this. Would a Redis server alone be enough?
In the end, what should I use, and which tech (Redis, Lightstreamer, Jabber, socket.io, nginx)?
You can't send a message directly from A to B, because they aren't connected to the same server.
The solution is to enable communication between the two node servers.
You mentioned Redis, so if you go that route you can have a central Redis server holding two lists (one for each server). When client A wants to reach client B, he sends his message to server A. Server A will not find client B among its local sockets and will push the message to Redis. Sooner or later, server B will collect its pending messages from Redis and dispatch them to client B.
It's a basic implementation that you can change to fit your needs: you could have, for example, a single list of messages per server, or even a list per user (consumed by whichever server holds that user's connection).
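A rough sketch of the per-server list variant (assuming the ioredis client and a local userId -> socket map; all names are illustrative):

    const Redis = require('ioredis');
    const pub = new Redis();
    const blocking = new Redis(); // separate connection for blocking pops

    const SERVER_ID = 'serverB';  // this instance's identity
    const localSockets = {};      // assumed map: userId -> socket.io socket

    // Server A pushes a message destined for a user connected to server B.
    function forward(targetServer, userId, body) {
      pub.lpush(`pending:${targetServer}`, JSON.stringify({ userId, body }));
    }

    // Each server drains its own pending list and delivers locally.
    async function drain() {
      for (;;) {
        const [, raw] = await blocking.brpop(`pending:${SERVER_ID}`, 0);
        const { userId, body } = JSON.parse(raw);
        const socket = localSockets[userId];
        if (socket) socket.emit('message', body);
      }
    }
    drain();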
Also, as a side note, any central data store, such as a database server (Mongo? MySQL?), can do the same job as Redis. It all comes down to what you already have, what you can have, and what type of persistence you want.

Route traffic to multiple node servers based on a condition

I'm coding an online multiplayer game using Node.js and HTML5, and I'm at the point where I would like to have multiple maps for people to play on, but I'm having a scaling issue. The server I'm running this on isn't able to support the game loops for more than a few maps on its own, and even though it has 4 cores I can only utilize one with a single node process.
I'd like to scale this so it isn't necessarily limited to a single server. I'd like to be able to start up a node process for each map in the game, then have a master process that looks up what map a player is in and passes their connection to the correct sub-process for handling, updating with game information, etc.
I've found a few ways to use a proxy like nginx or the built-in node cluster module to load balance, but from what I can tell, the examples I've seen just hand a connection to whatever the next available process is, and I need to hand them out specifically. Is there some way for me to route a connection to a node process based on a condition like that? I'm using Express to serve my static content and socket.io for client-to-server communication currently. The information about what map the player is in will be in MongoDB along with the rest of the player data, if that makes a difference.
There are many ways to address your problem; here are two suggestions based on your description.
1 - Use a router server which dispatches player queries to "area servers": in this topology, all client queries arrive at your router server, which tags each query with a unique id and dispatches it to the right area server. The area server handles the query and sends it back to the router server, which recognizes it by the unique tag and sends the response back to the client.
This solution distributes the CPU/memory load, but not the bandwidth!
2 - Use an authentication server which redirects clients to the server with the least load: in this case you'll have multiple identical servers and one authentication server. When a client authenticates, send the URL of an available server and an auth token to the client, and an authentication ticket to the server.
The client then connects to that server, which recognizes it using the auth token/ticket.
This solution distributes CPU/memory/bandwidth, but it might not suit all games, since you can be sent to a different server on each connection and you won't see the players in the same area unless you are on the same server. A sketch of this second approach follows.
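A sketch of the second approach's authentication server (hostnames and endpoints are made up; the chosen game server would need to learn the issued ticket, e.g. through a shared store):

    const express = require('express');
    const crypto = require('crypto');

    // Assumed list of identical game servers with a reported load figure.
    const servers = [
      { url: 'https://map1.example.com', load: 3 },
      { url: 'https://map2.example.com', load: 7 },
    ];

    const app = express();
    app.post('/login', (req, res) => {
      // ... authenticate the user here (omitted) ...
      const target = servers.reduce((a, b) => (a.load <= b.load ? a : b));
      const ticket = crypto.randomBytes(16).toString('hex');
      // Hand the same ticket to the chosen server out of band, then:
      res.json({ server: target.url, ticket });
    });
    app.listen(3000);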
Those are only two simple suggestions; you can mix the two approaches or add other things (for example, inter-communication between area servers) to solve the mentioned issues, at the cost of added complexity.

Developing XPages for Clusters

I am about to start a new XPages project which will be used world-wide. I am a bit concerned because the client is worried about performance and is therefore thinking about running this application behind a load balancer or in a cluster. I have been looking around and I have seen that there can be issues with scoped variables (for example, if the user starts the session on one server and is then sent to another, certain scoped variables go missing). I have also seen this wonderful article which focuses on performance, but it does not really mention anything about a clustered environment.
Just a bit of extra info: concurrent users should not be higher than 600, but may grow over time; there are about 3000 users in total. The XPages application will be a portal for two data sources (an active database and its archive).
My question is this: as a developer, what must I pay very close attention to when developing an application that may run behind a load balancer or in a clustered environment?
Thanks for your time!
This isn't really an answer...but I can't fit this in a comment.
We faced a very similar problem.
We have an XPages SPA (single-page application) that has been in production for 2-3 years, with a variable user load of up to 300-400 concurrent users who log in for 8-hour sessions. We have 4 clustered Domino servers: 1 is a "workhorse" running all scheduled jobs, and 3 are dedicated HTTP servers.
We use SSO in Domino and the 3 HTTP servers participate in it, so a user only has to authenticate once and can access all HTTP servers. We use a reverse proxy, so all users go to www.ourapp.com but get redirected to servera.ourapp.com, serverb.ourapp.com, etc. Once they get directed to a server, the reverse proxy issues a cookie to the client. This provides a "sticky" session to whichever server they have been directed to, and the reverse proxy will only move them to a different server if the one they are on becomes unavailable.
We use "user" managed session beans to store config for each user, so if the user moves server, if the user's bean does not exist, it will be created. But they key point is: because of sticky session, the user will only move if we bring a server down or the server was to fail. Since our app is a SPA, a lot of the user "config" is stored client side, so if they get booted to a different server (to the user, they are still pointed to www.ourapp.com) nothing really changes.
This has worked really well for us so far.
The app is also accessed by an "offline" external app. It points to the reverse proxy (www.ourapp.com), but we did initially run into problems because this app was not passing back the reverse proxy's "sticky" cookie token, so 1 device was sending a request to the proxy which got routed to server A, then 1 second later to server B, then A..B..C, causing all sorts of headaches: since the cluster can be a few seconds out of sync, requests against the same doc produced conflicts. As soon as we got the external app to pass back the reverse proxy token for each session, problem solved.
The one bit of your question I don't quite understand is: "...The XPage application will be a portal for a single database (no replicas) and an archive database (no replicas)." Does that mean the portal will be clustered, but the databases users are connecting to will not be?
We haven't really coded any differently than if the app were on 1 server, since the user's session is "stuck" to one server. We did need persistent document locking across all the servers. We initially used the native document locking, but $Writers does not cluster, so we had to implement our own: we set a field on the doc so that the "lock" clustered (we also had to then implement a single lock store... sigh, I can talk about that another time). Because of requirements, we have to maintain close to 1 million docs in 3 of the app databases; we generate huge amounts of audit data, but we push that out to SQL.
So I'd say this is more of an admin headache (I'm also the admin for this project, so in our case I can confirm this!) than a designer headache.
I'd also be interested to hear what anyone else has to say on this subject.

Retrieve Socket.io Client from Redis

I'm building a real time data system that allows an Apache/PHP server to send data to my Node.js server, which will then immediately send that data to the associated client via socket.io. So the Apache/PHP server makes a request that includes the data, as well as a user token that tells Node.js which user to send the data to.
Right now this is working fine - I've got an associative array that ties the user's socket.io connection to their user token. The problem is that I need to start scaling this to multiple servers. Naturally, with the default configs of socket.io I can't share connections between node workers.
The solution I had in mind was to use the RedisStore functionality, and just have each of my workers looking at the same Redis store. I've been doing research and there's a lot of documentation on how to use pub/sub functionality for broadcasting messages to large groups (rooms). That's fine, but I need to be able to send messages to a single client, so I need some way to retrieve a user's socket.io connection from the RedisStore.
The only way I can think to do this right now is to create a ton of 'rooms' named with the user's token, and only have one user in each room. Then I could just emit to that room. However, that seems very inefficient.
Is there a better way that I can retrieve user's unique socket.io connections from Redis?
Once a socket connection is made to a server running the node app, it stays connected to that instance.
So it seems you need a way for your PHP server to know which node server a client is connected to.
In your Redis store you could simply store the id of the server as the value, keyed by the client's token. Then PHP looks up which node server to use and makes the request to it.
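A small sketch of that bookkeeping on the node side (assumes the socket.io v3+/v4 API and the ioredis client; the server id is whatever address PHP can reach this instance at):

    const http = require('http');
    const { Server } = require('socket.io');
    const Redis = require('ioredis');

    const redis = new Redis();
    const SERVER_ID = 'node-1.internal:3000'; // hypothetical address of this instance

    const io = new Server(http.createServer().listen(3000));
    io.on('connection', (socket) => {
      const token = socket.handshake.query.token; // user token sent by the client
      redis.set(`owner:${token}`, SERVER_ID);     // PHP can GET owner:<token>
      socket.on('disconnect', () => redis.del(`owner:${token}`));
    });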
