Hello, I am trying to make a multiplayer game with Node.js and socket.io.
I am using multi-process socket.io with cluster and socket.io-redis. It works well if you want to broadcast messages, emit, etc.
But when I add more complexity to my code, problems start to appear. I want my game to have a matchmaking function.
Assume this scenario:
The server finds 2 users that want to play and starts a game.
Users are on different processes on the same machine.
The problem is that a client can communicate with only one process: the one it first connected to.
As I see it, there are 3 possible solutions:
Matchmake only between users on the same process --- not good.
Create an IPC mechanism between processes, so the one holding the target client can forward that client's answers to the correct process --- too complex, and I am not sure it solves everything.
Move the client's socket.io connection to a different process without the user noticing --- not sure this is even possible.
Is there something I am missing here? Is there another solution I can't think of?
Any help appreciated!
With socket.io-redis, users can communicate even if they are on different servers/processes; that is exactly why it exists.
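For example, a minimal sketch of the adapter setup (assuming socket.io 2.x with the classic socket.io-redis API; exact option names vary between versions):

    const io = require('socket.io')(3000);
    const redisAdapter = require('socket.io-redis');

    // Every clustered process attaches the same Redis-backed adapter.
    io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

    io.on('connection', (socket) => {
        // The matchmaker puts both players in the same room...
        socket.on('join-match', (matchId) => {
            socket.join('match:' + matchId);
        });
    });

    // ...and a broadcast from any process reaches every socket in the
    // room, regardless of which process each socket is connected to.
    io.to('match:42').emit('match-start', { players: 2 });

Newer versions of the adapter also expose helpers such as remoteJoin(), so the process running the matchmaker can place a socket owned by another process into a room.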
I currently have a Node server running that works with MongoDB. It handles some HTTP requests, but it is mostly used for WebSockets. Basically, the server connects multiple users to rooms with WebSockets.
My server currently has around 12k WebSockets open, which is almost crippling my single-threaded server, and now I'm not sure how to convert it to a multi-process setup.
The server holds HashMap variables for the connected users and rooms. When a user performs an action, the server often references those HashMap variables, so I'm not sure how to use clusters for this. I thought about creating a thread for every WebSocket message, but I'm not sure that is the right approach, and those threads would not be able to access the HashMaps for the other users.
Does anyone have any ideas on what to do?
Thank you.
You can look at the socket.io-redis adapter for architectural ideas, or you can just decide to use socket.io with the Redis adapter.
It moves the equivalent of your HashMaps into a separate process (the Redis in-memory database) so that all clustered processes can access them.
The socket.io-redis adapter also supports higher-level functions: you can emit to every socket in a room with one call, and the adapter finds where each member of the room is connected, contacts that specific clustered server, and has it send the message.
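As a sketch of the shared-state idea (the key names and the node_redis client are my own choices for illustration, not anything the adapter prescribes):

    const redis = require('redis');
    const client = redis.createClient();

    // Instead of a per-process HashMap, keep user state in a Redis hash
    // that every clustered worker can read and write.
    function saveUser(userId, roomId, cb) {
        client.hset('users', userId, JSON.stringify({ room: roomId }), cb);
    }

    function getUser(userId, cb) {
        client.hget('users', userId, (err, json) => {
            cb(err, json && JSON.parse(json));
        });
    }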
I thought about creating a thread for every WebSocket message, but I'm not sure that is the right approach, and those threads would not be able to access the HashMaps for the other users
Threads in node.js are not lightweight things (each has its own V8 instance), so you will not want a nodejs thread for every WebSocket connection. You could group a certain number of WebSocket connections per worker, but at that point it is likely easier to use clustering, because nodejs will distribute connections across the clustered processes for you automatically, whereas you would have to do that yourself with your own worker pool.
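A minimal sketch of the clustering route (the './server' entry point is a hypothetical stand-in for your existing app):

    const cluster = require('cluster');
    const os = require('os');

    if (cluster.isMaster) {
        // Fork one worker per CPU core; the master distributes incoming
        // connections across the workers automatically.
        for (let i = 0; i < os.cpus().length; i++) {
            cluster.fork();
        }
        cluster.on('exit', (worker) => {
            console.log('worker ' + worker.process.pid + ' died, restarting');
            cluster.fork();
        });
    } else {
        // Each worker runs the same server code and shares the port.
        require('./server');
    }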
Before going to production, we want to make sure that this is expected behavior.
I conducted an experiment by launching 4 child processes using a PM2 cluster (I have 4 cores on my machine), which means there were 4 WebSocket processes running.
Then on the client I created multiple sockets and sent many messages to the server. One thing I didn't expect was that Node was able to figure out which child process each socket belonged to: every message sent by the client was console-logged by the correct child process.
It seems like the main worker in the cluster keeps track of which sockets belong where.
So, is this managed internally by Node.js's "cluster" module?
Also, is this OK to use in production?
P.S. For WebSockets we use the "ws" module for Node.js.
I asked the same question on GitHub and got an answer...
Also, please look into using ClusterWS - it's awesome!
https://github.com/ClusterWS/ClusterWS/issues/143
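For reference, what you observed is standard cluster behavior rather than anything WebSocket-specific: the master assigns each incoming TCP connection to one worker at accept time, and every frame on that connection is then handled by that worker. A minimal sketch with ws:

    const cluster = require('cluster');
    const http = require('http');
    const WebSocket = require('ws');

    if (cluster.isMaster) {
        for (let i = 0; i < 4; i++) cluster.fork();
    } else {
        const server = http.createServer();
        const wss = new WebSocket.Server({ server });

        wss.on('connection', (socket) => {
            // This connection was assigned to this worker once, at accept
            // time; every subsequent message arrives in the same process.
            socket.on('message', (msg) => {
                console.log('pid ' + process.pid + ' got: ' + msg);
            });
        });

        server.listen(8080);
    }

Note that this pins a single connection, not a user: if a client reconnects, it may land on a different worker, which is why multi-process setups usually add sticky sessions or a shared store like Redis.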
Say I have a REST endpoint which, when called, starts a long-running process server-side, e.g.
http://host/api/program/start
and I want to push any updates/output from that process from the server side to a client.
I'm thinking the REST call would return some sort of unique ID, which the client could then use when connecting to the WebSocket so that it only receives updates about that particular process.
I'd also have to think about buffering the output/updates from the process in case the client doesn't connect before the first output arrives, but irrespective of that, what would be the best way of handling the socket data for this? Could I make use of socket.io rooms/namespaces in some way?
If you really want to do it this way, I would suggest generating the ID in the initial start call, then passing it to the long-running process as an argument. That process then publishes all of its messages under that ID (which the appropriate clients are listening for as well).
However, I would discourage you from going further with this approach as-is. There are plenty of ways to manage a child process in Node, so you might want to look into those options a little more so you don't end up dealing with zombie processes all over the place.
The first that comes to mind is ChildProcess. Another option would be something like WebWorker Threads. Either of these would be right in the vein of what (I think) you're trying to do, while allowing you to maintain much more control over the child processes.
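To make the first suggestion concrete, here is a rough sketch combining the ID idea with child_process.spawn and one socket.io room per job (the route, command, and event names are all made up for illustration):

    const crypto = require('crypto');
    const { spawn } = require('child_process');
    const app = require('express')();
    const server = require('http').createServer(app);
    const io = require('socket.io')(server);

    app.post('/api/program/start', (req, res) => {
        const jobId = crypto.randomBytes(8).toString('hex');
        const child = spawn('long-running-program');  // hypothetical command

        // Forward each chunk of output to the room named after the job.
        child.stdout.on('data', (chunk) => {
            io.to(jobId).emit('progress', chunk.toString());
        });
        child.on('exit', (code) => {
            io.to(jobId).emit('done', { code: code });
        });

        res.json({ jobId: jobId });
    });

    io.on('connection', (socket) => {
        // The client joins the room for the job it started.
        socket.on('watch', (jobId) => socket.join(jobId));
    });

    server.listen(3000);

This does not buffer output emitted before the client joins the room, which is the gap you already identified; you would need to stash early chunks somewhere (e.g. Redis) keyed by the job ID.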
So I am developing (or rather, playing around with) a realtime game in Node.js, using Redis and Socket.io as well. Players create a lobby and join it (kind of like a pre-game chat room, where you can talk to other players and select game settings). The client is written in HTML/CSS/JS. Anyway, I want to be able to tell when players disconnect from the lobby, so I can update the number of joined players (and the joined player names) on the interface.
Two options I have thought about are:
Using Redis' key expiry feature to remove a particular key if it is not refreshed within x amount of time; the host would then check for the existence of this key to detect disconnects (sketched in code below). I do wonder whether this is highly inefficient: many users will potentially be playing, so is it bad to have many expiring keys in Redis and many other users polling them?
I could use socket.io's on('disconnect', ...) event to update the field. However, I am not sure this event will fire if, for example, a user's PC freezes.
Anyway, I am open to any other ideas as well!
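To make option 1 concrete, here is roughly what I have in mind (the key names are just illustrative, using the node_redis client):

    const redis = require('redis');
    const client = redis.createClient();

    // Each client pings every few seconds; every ping refreshes a key
    // that expires on its own if the pings stop.
    function heartbeat(userId) {
        client.set('presence:' + userId, '1', 'EX', 10);
    }

    // The host polls for the key; a missing key means a disconnect.
    function isConnected(userId, cb) {
        client.exists('presence:' + userId, (err, found) => {
            cb(err, found === 1);
        });
    }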
Socket.io has a 'heartbeat' to check that the connection is still alive. The default heartbeat timeout is 15s; you can read more about configuring it in the wiki. If the heartbeat fails (e.g. the user's PC freezes), socket.io will emit the 'disconnect' event.
Socket.io should suffice. You can configure it to use heartbeats to ping the socket and check its health. If a user's computer freezes, it will in effect be unable to respond to these heartbeats, causing a forced disconnect.
To test this, you could set up Socket.io to use heartbeats, then connect via a browser on a different computer and paste an infinite loop into that browser's console to simulate a freeze.
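As a sketch (note that the heartbeat option names have changed across socket.io versions: older releases configured them via io.set, newer ones take pingInterval/pingTimeout in the constructor):

    const io = require('socket.io')(3000, {
        pingInterval: 10000,  // how often the server pings each client
        pingTimeout: 5000     // how long to wait for a pong before giving up
    });

    io.on('connection', (socket) => {
        socket.on('disconnect', (reason) => {
            // Fires on a clean close and, after pingTimeout, on a frozen
            // or vanished client; update the lobby either way.
            console.log(socket.id + ' left the lobby (' + reason + ')');
        });
    });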
I have a Node.js chat app where multiple clients connect to a common chat room using socket.io. I want to scale this to multiple Node processes, possibly on different machines. However, clients that connect to the same room are not guaranteed to hit the same Node process. For example, user 1 might hit process A and user 2 process B. They are in the same room, so if user 1 sends a message, user 2 should get it. What's the best way to make this happen, given that their connections are managed by different processes?
I thought about just having the Node processes connect to Redis. That at least solves the problem of process A knowing there is another user, user 2, in the room, but it still can't send to user 2 because process B controls that connection. Is there a way to register a "value changed" callback with Redis?
I'm in a server environment where I can't control any of the routing or load balancing.
Both Node.js processes can subscribe to some channel through Redis pub/sub and listen for messages passed to that channel. For example, when user 1 connects to process A on the first machine, you can store in Redis information about this user along with which process on which machine manages them. Then, when user 2, connected to process B on the second machine, sends a message to user 1, you publish it to the channel, check which process on which machine is responsible for managing communication with user 1, and respond accordingly.
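A bare-bones sketch of that pattern (channel and key names are illustrative, and localSockets stands in for whatever per-process bookkeeping you keep):

    const redis = require('redis');
    const io = require('socket.io')(3000);
    const pub = redis.createClient();
    const sub = redis.createClient();  // a subscribed client can't issue other commands

    const localSockets = {};  // sockets connected to *this* process, keyed by user id

    io.on('connection', (socket) => {
        socket.on('login', (userId) => {
            localSockets[userId] = socket;
            // Record which process manages this user.
            pub.hset('user-locations', userId, String(process.pid));
        });
    });

    // Every process listens on the shared channel...
    sub.subscribe('chat');
    sub.on('message', (channel, raw) => {
        const msg = JSON.parse(raw);
        // ...but only the process holding the target socket delivers it.
        const socket = localSockets[msg.to];
        if (socket) socket.emit('chat message', msg.text);
    });

    // Any process can send to any user by publishing.
    function sendTo(userId, text) {
        pub.publish('chat', JSON.stringify({ to: userId, text: text }));
    }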
I have done some research on this. Below are my findings:
Like yojimbo87 said, you can start by just using Redis pub/sub (it is very optimized).
http://comments.gmane.org/gmane.comp.lang.javascript.nodejs/22348
Tim Caswell wrote:
It's been my experience that the bottleneck is the serialization and de-serialization of the data, not the actual channel. I'm pretty sure you can use named pipes, but I'm not sure what the API is. msgpack seems like a good format for the data interchange. There are a few libraries out there that implement msgpack or ipc frameworks on top of it.
But when serialization/deserialization becomes your bottleneck, I would try https://github.com/pgriess/node-msgpack. I would also like to test this out myself, because I think the sooner you have this in place, the better.
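If serialization does become the bottleneck, swapping JSON for msgpack is a small change at each end of the channel (a sketch using the node-msgpack package linked above):

    const msgpack = require('msgpack');

    const payload = { to: 'user1', text: 'hello', ts: Date.now() };

    // Binary-pack instead of JSON.stringify before publishing...
    const packed = msgpack.pack(payload);

    // ...and unpack instead of JSON.parse on the subscriber side.
    const restored = msgpack.unpack(packed);
    console.log(restored.text);  // 'hello'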