Route traffic to multiple node servers based on a condition - node.js

I'm coding an online multiplayer game using nodejs and HTML5 and I'm at the point where I would like to have multiple maps for people to play on, but I'm having a scaling issue. The server I'm running this on isn't able to support the game loops for more than a few maps on its own, and even though it has 4 cores I can only utilize one with a single node process.
I'd like to be able to scale this to not even necessarily be limited to a single server. I'd like to be able to start up a node process for each map in the game, then have a master process that looks up what map a player is in and passes their connection to the correct sub process for handling, updating with game information, etc.
I've found a few ways to use a proxy like nginx or the built-in node cluster module to load balance, but from what I can tell those examples just hand a connection to whichever process is next available, and I need to hand them out based on a specific condition. Is there some way for me to route a connection to a node process based on a condition like that? I'm currently using Express to serve my static content and socket.io for client-to-server communication. The information about which map a player is in will be in MongoDB along with the rest of the player data, if that makes a difference.

There are many ways to address your problem; here are two suggestions based on your description.
1 - Use a router server which dispatches player queries to "area servers": in this topology all client queries arrive at your router server, which tags each query with a unique id and dispatches it to the right area server. The area server handles the query and sends the result back to the router server, which recognizes it by the unique tag and sends the response back to the client.
This solution spreads the CPU/memory load, but not the bandwidth!
2 - Use an authentication server which redirects clients to the server with the least load: in this case you have multiple identical servers and one authentication server. When a client authenticates, send the client the URL of an available server plus an auth token, and send an authentication ticket to that server.
The client then connects to that server, which recognizes it by matching the auth token against the auth ticket.
This solution spreads CPU, memory and bandwidth, but it might not suit every game, since you can be sent to a different server on each connection and you won't see the players in the same area unless they are on the same server.
Those are only two simple suggestions; you can mix the two approaches or add other pieces (for example inter-communication between area servers) to solve the issues mentioned, at the cost of added complexity.
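To make the first approach concrete for your map setup, a minimal sketch might look like the one below. It assumes the node-http-proxy package, assumes each map process listens on its own local port, and stubs out the MongoDB lookup of a player's map; treat it as an illustration of condition-based routing, not a drop-in implementation.

```js
// Front router: looks up which map a player is on and proxies both the
// HTTP and the WebSocket (socket.io) traffic to that map's node process.
var http = require('http');
var url = require('url');
var httpProxy = require('http-proxy');   // npm install http-proxy

var proxy = httpProxy.createProxyServer({});

// Hypothetical map -> port table; each map runs as its own node process.
var mapPorts = { forest: 8101, desert: 8102, dungeon: 8103 };

// Stub: replace with a MongoDB query for the player's current map.
function lookupMapForPlayer(playerId, callback) {
  callback(null, 'forest');
}

function targetFor(req, callback) {
  var playerId = url.parse(req.url, true).query.playerId;
  lookupMapForPlayer(playerId, function (err, mapName) {
    if (err || !mapPorts[mapName]) return callback(err || new Error('unknown map'));
    callback(null, 'http://127.0.0.1:' + mapPorts[mapName]);
  });
}

var server = http.createServer(function (req, res) {
  targetFor(req, function (err, target) {
    if (err) { res.writeHead(502); return res.end(); }
    proxy.web(req, res, { target: target });
  });
});

// socket.io connections start as HTTP upgrade requests, so route those too.
server.on('upgrade', function (req, socket, head) {
  targetFor(req, function (err, target) {
    if (err) return socket.destroy();
    proxy.ws(req, socket, head, { target: target });
  });
});

server.listen(8080);
```

The routing condition lives entirely in targetFor(), so the same pattern works whether the map processes are on one machine or spread across several hosts.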

Related

How to direct a user to an available websocket server when she logs in to my multi-server Node.js app?

This is more like a design question but I have no idea where to start.
Suppose I have a realtime Node.js app that runs on multiple servers. When a user logs in, she doesn't know which server she will be assigned to. She will just log in, do something, and log out, and that's it. A user won't be interacting with other users on a different server, nor will her details be stored on another server.
In the backend, I assume the Node.js server will put the user's login details into some queue, and then, when there is space, it will assign this user to an available server (a server that has the lowest ping value or is not full). Because there is a limit on the number of users on one physical server, when a user tries to log in to a "full" server it will direct her to another available server.
I am using the ws module for Node.js. Is there any service available for this purpose, or do I have to build my own? How difficult would that be?
I am not sure how WebSocket fits into this question, so I'll ignore it. I guess your actual question is about load balancing... Let me try paraphrasing it.
Q: Does Node.js have any load-balancing feature that I can leverage?
Yes, and it is called cluster in Node.js. Instead of the traditional single node process listening on a single port, this module allows you to spawn a group of node processes and have them all bound to the same port.
This means that all the user knows is the service's endpoint. He sends a request to it, and one of the available servers in the group will serve him whenever possible.
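A minimal sketch of the cluster approach (the port number here is arbitrary):

```js
// The master forks one worker per CPU core; every worker binds to the
// same port and the cluster module spreads incoming connections.
var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
  os.cpus().forEach(function () { cluster.fork(); });
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died, starting a new one');
    cluster.fork();
  });
} else {
  http.createServer(function (req, res) {
    res.end('handled by worker ' + process.pid + '\n');
  }).listen(8000);
}
```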
Alternatively, using Nginx, the web server, as your load balancer is also a very popular approach to this problem.
References:
Cluster API: https://nodejs.org/api/cluster.html
Nginx as load balancer: http://nginx.org/en/docs/http/load_balancing.html
P.S.
I guess the keyword for googling solutions to your problem is "load balancer".
Out of the two solutions I would recommend going the Nginx way, as it is the more scalable approach: your Node processes could then be spread across multiple hosts (horizontal scaling). The former solution is more for vertical scaling, taking advantage of a multi-core machine.

A node.js server for both web and mobile app

I am creating a game in Unity and I want to upload the players' scores to MongoDB. Therefore, I have built a node.js server listening on port 3000, and the scores are sent to the server and stored in the database.
My question is that if I want to create a website for viewing/analyzing players' scores, which approach should I use?
create two node.js servers, one for the web, one for the game
one node.js server but listening on both port 80 and port 3000 (I'm not sure whether that is possible or not)
any other better suggestions?
Thank you.
I would create one Node server to serve both API and web requests.
It sounds like the data served by the API and by the web site will be the same, or subsets of each other, so you'll probably want to share code, look up the same things in the database, etc.
From here, you could either create separate routes for the API and for the web (/api/v1/my_scores vs /my_scores), OR realize that you're just asking for different representations of the same data and do something RESTful, like checking the Accept header and sending back either server-rendered HTML or JSON to the client.
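As a rough Express sketch of the "different representations of the same data" idea (the route paths and the getScores helper are placeholders, not anything from your project):

```js
var express = require('express');
var app = express();

// Placeholder for the real MongoDB query.
function getScores(callback) {
  callback(null, [{ player: 'alice', score: 42 }]);
}

// One handler, two representations, chosen by the Accept header.
app.get('/scores', function (req, res) {
  getScores(function (err, scores) {
    if (err) return res.status(500).end();
    res.format({
      'text/html': function () {
        res.send('<pre>' + JSON.stringify(scores, null, 2) + '</pre>');
      },
      'application/json': function () {
        res.json(scores);
      }
    });
  });
});

// Or keep the JSON representation on a separate API route instead.
app.get('/api/v1/scores', function (req, res) {
  getScores(function (err, scores) {
    if (err) return res.status(500).end();
    res.json(scores);
  });
});

app.listen(3000);
```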
Alternatively, you could just create an API in Node, then use a purely front-end tool like Angular or React to create a web front end for your site.
Using port 3000 is not a good idea, because many users access the internet through firewalls that block non-standard ports.
I would recommend using port 443 and HTTPS to secure the communication for both use cases.
If the site for analyzing scores does not share logic with the API server, then it can be created as a separate site - but at the start it is easier to manage a single application.
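A minimal sketch of serving the same Express app over HTTPS on port 443 with Node's built-in https module; the certificate paths are placeholders, and binding to 443 usually requires elevated privileges:

```js
var https = require('https');
var fs = require('fs');
var express = require('express');

var app = express();
app.get('/scores', function (req, res) {
  res.json([]);   // placeholder route
});

var options = {
  key: fs.readFileSync('/etc/ssl/private/example.key'),   // placeholder path
  cert: fs.readFileSync('/etc/ssl/certs/example.crt')     // placeholder path
};

https.createServer(options, app).listen(443);
```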
If I understand your question correctly, then to my limited knowledge you don't require more than one server with a database. The reason is that on the website you only want to display the high scores; the end user can't insert anything through the website. The complexity is already minimal, so don't bother creating another server. Just expose a separate data-fetching API for the website to use.

Setting up a secure back-end NodeJS server for multiple front-end domains

I've been doing a lot of research recently on creating a backend for all the websites that I run and a few days ago I leased a VPS running Debian.
Long-term, I'd like to use it as the back-end for some web applications. However, these client-side javascript apps are running on completely different domains than the VPS domain. I was thinking about running the various back-end applications on the VPS as daemons. For example, daemon 1 is a python app, daemons 2 and 3 are node js, etc. I have no idea how many of these I might eventually create.
Currently, I only have a single NodeJS app running on the VPS. I want to implement two methods on it, listening on some arbitrary port, port 4000 for example:
/GetSomeData (GET request) - takes some params and serves back some JSON
/AddSomeData (POST request) - takes some params and adds to a back-end MySQL db
These methods should only be useable from one specific domain (called DomainA) which is different than the VPS domain.
Now one issue that I feel I'm going to hit my head against is CORS policy. It sounds like I need to include a response header for Access-Control-Allow-Origin: DomainA. The problem is that in the future, I may want to add another acceptable requester domain, for example DomainB. What would I do then? Would I need to validate the incoming request.connection.remoteAddress, and if it matched DomainA/DomainB, write the corresponding Access-Control-Allow-Origin?
About 5 minutes before posting this question, I came across this on the W3C site:
Resources that wish to enable themselves to be shared with multiple Origins but do not respond uniformly with "*" must in practice generate the Access-Control-Allow-Origin header dynamically in response to every request they wish to allow. As a consequence, authors of such resources should send a Vary: Origin HTTP header or provide other appropriate control directives to prevent caching of such responses, which may be inaccurate if re-used across-origins.
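For what it's worth, a minimal Express sketch of that dynamic header generation could look like the following; the whitelist, the domain names and the routes are placeholders, and this only covers the header logic, not the abuse concern below:

```js
var express = require('express');
var app = express();

// Hypothetical list of allowed front-end origins (DomainA, DomainB, ...).
var allowedOrigins = ['https://domain-a.example', 'https://domain-b.example'];

app.use(function (req, res, next) {
  res.setHeader('Vary', 'Origin');          // responses differ per Origin
  var origin = req.headers.origin;
  if (origin && allowedOrigins.indexOf(origin) !== -1) {
    res.setHeader('Access-Control-Allow-Origin', origin);
  }
  if (req.method === 'OPTIONS') {           // CORS preflight
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
    return res.sendStatus(204);
  }
  next();
});

app.get('/GetSomeData', function (req, res) { res.json({ some: 'data' }); });
app.post('/AddSomeData', function (req, res) { res.json({ ok: true }); });

app.listen(4000);
```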
Even if I do this, I'm a little worried about security. By design, anyone on my DomainA website can use the web app; you don't have to be a registered user. I'm concerned about attackers spoofing their IP address to appear to be DomainA. It seems like it wouldn't matter for the GetSomeData request, since my NodeJS would then send the data back to DomainA rather than the attacker. However, what would happen if an attacker ran a script to POST to AddSomeData a thousand times? I don't want my MySQL table being filled up by malicious requests.
On another note, I've been reading about nginx and virtual hosts and how you can use them to establish different routes depending on the incoming domain but I don't BELIEVE that I need these things; however perhaps I'm mistaken.
Once again, I don't want to use the VPS as a website server; the NodeJS listener is going to be returning some collection of JSON, which is why I'm not making use of port 80. In fact, the primary use of the VPS is to do some heavy manipulation of data (perhaps involving the local MySQL db) and then return a collection of JSON that any number of front-end client browser apps can use.
I've also read some recommendations about making use of NodeJS Restify or ExpressJS. Do I need these for what I'm trying to do?

Nodejs handling login on another server

Suppose you had 4 machines, each running an identical instance of a Node.js app, and users have to log in to access your website. After a user logs in, is it possible to move his connection to one of the other machines?
To clear it up:
Node 1 only holds the main app page, handles login validation and knows how many users are on each node; it routes the user who logs in to the node with the lowest number of users, or, to make it more complicated, to the server with the lowest load (based on traffic rather than on the number of users).
Each of the other nodes runs CentOS with a Node.js server instance (a cluster of Node processes).
I am using socket.io intensively, and after login I always have a persistent connection with the client; my client makes no AJAX requests at all, everything is handled using sockets.
In my current source code everything is combined in one Node.js app, and I do socket authentication for login.
The clients have no kind of interaction with each other, which makes the job easier.
Is it possible to pass a socket connection from one Node.js server to another?
How would you solve this problem yourself, considering that the Node.js app that handles the login and the actual Node.js app are on 2 separate machines?
I would keep it simple. I would create a load-balancer node running its own balancer application. This node redirects to the least loaded "worker" Node.js instance, based on the number of authenticated user sessions on each node, and this should happen even before authentication is done. All the other "worker" nodes run the same main app with exactly the same logic - main page, authentication and application logic.
Just save an access token (i.e. a cookie in the case of HTTP) into a database and send it from the client to the server every time it connects.
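A rough sketch of that idea on the balancer node, assuming Express; the worker URLs and the in-memory stores stand in for your real worker list and shared database, and decrementing the counts on logout/disconnect is left out:

```js
var express = require('express');
var crypto = require('crypto');

var app = express();

var workers = {                        // hypothetical worker nodes
  'https://node2.example.com': 0,      // value = current session count
  'https://node3.example.com': 0
};
var tokens = {};                       // stand-in for the shared database

app.post('/login', function (req, res) {
  // ...validate the credentials here...
  var target = Object.keys(workers).sort(function (a, b) {
    return workers[a] - workers[b];    // fewest sessions first
  })[0];
  var token = crypto.randomBytes(24).toString('hex');
  tokens[token] = { worker: target, issued: Date.now() };
  workers[target]++;
  res.json({ server: target, token: token });   // client connects there next
});

app.listen(3000);
```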
Well, it's tough to give a complete answer without having a better sense of your application's architecture... but I'll lay out my assumptions and go from there:
a) www.yourdomain.com points to Node 1.
b) Node 1 is the only server that responds to HTTP requests, Node 2 through Node 5 only communicate through sockets.
c) Once a user is authenticated through Node 1, it does not need to re-authenticate through Node 1 for subsequent communication through sockets (this one is a bit tricky: if you really want to ensure that only authenticated users can access your app, that authentication must be passed over the socket connection, but it can be simpler than the full authentication performed by Node 1 - and it sounds like you're doing this, I just want to raise the issue).
Given those assumptions, I would assign a subdomain to each app server (perhaps node2.yourdomain.com, node3.yourdomain.com, etc.?). Then, when Node 1 is ready to pass the client over to an app server, determine which node you want to send them to, pass that subdomain to the client, and have the client create a socket connection to the assigned app server; all of its communication will then happen through there.
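A sketch of that handoff, assuming socket.io; the subdomains, the /login route and verifyTicket are placeholders:

```js
// Browser side (assumes the socket.io client script is already loaded):
// ask Node 1 where to go, then open the socket connection there.
fetch('https://www.yourdomain.com/login', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ user: 'alice', pass: 'secret' })
})
  .then(function (res) { return res.json(); })
  .then(function (data) {
    // e.g. data = { server: 'https://node3.yourdomain.com', ticket: '...' }
    var socket = io(data.server, { query: { ticket: data.ticket } });
    socket.on('connect', function () {
      console.log('connected to the assigned app server');
    });
  });
```

```js
// App server side (e.g. node3.yourdomain.com): check the ticket before
// accepting the socket, per assumption (c) above. Port is arbitrary here.
var io = require('socket.io')(4000);

io.use(function (socket, next) {
  verifyTicket(socket.handshake.query.ticket, function (err, ok) {
    if (err || !ok) return next(new Error('not authorized'));
    next();
  });
});

io.on('connection', function (socket) {
  // normal game/app traffic happens here
});

// Placeholder: look the ticket up in whatever store Node 1 writes to.
function verifyTicket(ticket, callback) {
  callback(null, true);
}
```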
If I've misunderstood or over-simplified things, feel free to set me straight in comments.

node.js server with socket.io handling 50000 simultaneous clients

We are developing a Javascript control which should be constantly connected to a server for receiving animation updates.
We are planning to host this stuff on an Amazon cloud.
The scenario is like this: the server connects to an ActiveMQ queue waiting for updates, and for each update it broadcasts it to all connected clients.
Is it even possible to handle such load with node.js + socket.io?
Will a single node.js server be able to handle such load?
How to organize fast transport between different nodes if we will have to use more than one node?
Will single node.js server be able to handle such load?.. How to organize fast transport between different nodes if we will have to use more than one node
You say that you are planning to host on Amazon. So first off, nothing should be scoped for a single server. Amazon machines will simply "disappear"; you have to assume that you are going to use multiple machines.
...handling 50k simultaneous clients
So to start with, 50k connections for a single box is a very big number. Here's a very detailed blog post discussing "getting to 10k" with node.js+socket.io.
Here's a very telling quote:
it seemed as though 10,000 clients simply required more serialization
than my server was able to handle.
So a key component to "getting to 50k" is going to be the amount of work required just pushing data over the wire.
How to organize fast transport between different nodes if we will have to use more than one node.
That blog post is the first of 3. When you're done with the first, read the other two. That should point you in the right direction.
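As an aside, one widely used way to move broadcasts between several socket.io processes (not something those posts prescribe) is a Redis-backed adapter; a minimal sketch, assuming the socket.io-redis package and a Redis instance on localhost:

```js
// Run one of these processes per core/machine; the Redis adapter relays
// broadcasts so io.emit() from any node reaches clients on every node.
var io = require('socket.io')(3000);
var redisAdapter = require('socket.io-redis');   // npm install socket.io-redis

io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));

// e.g. when an ActiveMQ message arrives, push the animation update out:
function onAnimationUpdate(update) {
  io.emit('animation', update);
}
```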
