We are in the process of writing a multiplayer game with NodeJS and websockets. When designing the server architecture, we came up with the idea of objects handling most of their own communication.
For example, on connection we would pass the socket to the client object, which could then handle its own communication. One advantage is not needing a global handler that routes all the messages, which keeps the responsibilities very cleanly separated.
I can also tell the clients themselves to join a specific room, using the socket.io functionality, by sending them the room id.
The main disadvantage we can see is that there may be occurrences where we need to duplicate the sockets, e.g. if a client disconnects from a room and wants to join another.
The other option is the more classical way: handling all the messages globally and then routing them, with associated ids, to the objects concerned.
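For concreteness, here is a rough sketch of the first (per-object) idea, assuming socket.io; the class and event names are just placeholders:

```js
const { Server } = require('socket.io');
const io = new Server(3000);

// Each Client owns its socket and registers its own handlers,
// so there is no global message router.
class Client {
  constructor(socket) {
    this.socket = socket;
    this.currentRoom = null;
    socket.on('joinRoom', (roomId) => this.joinRoom(roomId));
    socket.on('leaveRoom', () => this.leaveRoom());
    socket.on('disconnect', () => this.cleanUp());
  }

  joinRoom(roomId) {
    if (this.currentRoom) this.socket.leave(this.currentRoom);
    this.socket.join(roomId);   // socket.io room membership
    this.currentRoom = roomId;
  }

  leaveRoom() {
    if (this.currentRoom) this.socket.leave(this.currentRoom);
    this.currentRoom = null;
  }

  cleanUp() { /* remove the client from the game state */ }
}

// The connection handler only constructs the object.
io.on('connection', (socket) => new Client(socket));
```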
Would you guys advise one solution over the other and why?
I currently have a Node server running that works with MongoDB. It handles some HTTP requests, but it largely uses WebSockets. Basically, the server connects multiple users to rooms with WebSockets.
My server currently has around 12k WebSockets open, which is almost crippling my single-threaded server, and I'm not sure how to convert it over.
The server holds HashMap variables for the connected users and rooms, and when a user does an action the server often references those HashMaps, so I'm not sure how to use clusters for this. I thought about creating a thread for every WebSocket message, but I'm not sure if this is the right approach, and it would not be able to access the HashMaps for the other users.
Does anyone have any ideas on what to do?
Thank you.
You can look at the socket.io-redis adapter for architectural ideas or you can just decide to use socket.io and the Redis adapter.
They move the equivalent of your hashmap into a separate process (the Redis in-memory database) so that all clustered processes can access it.
The socket.io-redis adapter also supports higher-level functions: you can emit to every socket in a room with one call, and the adapter finds where everyone in the room is connected, contacts those specific clustered servers, and has them deliver the message.
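A minimal sketch of that setup using the legacy socket.io-redis package (the current equivalent is @socket.io/redis-adapter; host, port, and names here are placeholders):

```js
const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

// Every clustered process attaches the same adapter, so room membership
// lives in Redis rather than in a per-process hashmap.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('joinRoom', (roomId) => socket.join(roomId));
});

// Later, from anywhere in the code: one call, and the adapter publishes through
// Redis so each process delivers the event to its own sockets in that room.
io.to('room-42').emit('roomEvent', { some: 'payload' });
```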
I thought about creating a thread for every WebSocket message, but I'm not sure if this is the right approach, and it would not be able to access the HashMaps for the other users
Threads in node.js are not lightweight (each worker thread has its own V8 instance), so you do not want a thread for every WebSocket connection. You could group a certain number of WebSocket connections onto each worker thread, but at that point it is likely easier to use clustering, because node.js will distribute incoming connections across the clustered processes for you automatically, whereas you would have to do that distribution yourself with your own worker pool.
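A rough sketch of the clustering approach (./server is assumed to start the HTTP + socket.io server for one worker):

```js
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  // One worker per CPU core; node distributes incoming connections for you.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', () => cluster.fork());   // replace crashed workers
} else {
  require('./server');
}
```

Note that running socket.io across clustered workers typically also needs sticky sessions (if the HTTP long-polling fallback is enabled) plus a shared adapter such as the Redis adapter above, so that an emit to a room reaches sockets held by other workers.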
I'm planning a non-trivial realtime chat platform. The app has several types of resources: Users, Groups, Channels, Messages. There are roughly 20 types of realtime events having to do with these resources: for instance, submitting a message, a user connecting or disconnecting, a user joining a group, a moderator kicking a user from a group, etc...
Overall, I see two paths to organizing all this complexity.
The first is to build a REST API to manage the resources. For instance, to send a message, POST to /api/v1/messages. Or, to kick a user from a group, POST to /api/v1/group/:group_id/kick/. Then, from within the Express route handler, call io.emit (made accessible through res.locals) with the updated data to notify all related clients. In this case, clients talk to the server through HTTP and the server notifies clients through socket.io.
The other option is to not have a rest API at all, and handle all events through socket.IO. For instance, to send a message, emit a SEND_MESSAGE event. Or, to kick a user, emit a KICK_USER event. Then, from within the socket.io event handler, call io.emit with the updated data to notify all clients.
Yet another option is to have certain actions handled by a REST API, others by socket.IO. For instance, to get all messages, GET api/v1/channel/:id/messages. But to post a message, emit SEND_MESSAGE to the socket.
Which is the most suitable option? How do I determine which actions need to be sent through an API, and which need to be sent through socket.io? Is it better not to have a REST API at all for this type of application?
Some of my thoughts so far, nothing conclusive:
Advantages of REST API over the socket.io-only approach:
Easier to organize hierarchically, more modular
Easier to test
More robust and elegant
Simpler auth implementation with middleware
Disadvantages of REST API over the socket.io-only approach:
Slightly less performant (source)
Since a socket connection needs to be open anyways, why not use it for everything?
Slightly harder to manage on the client side.
Thanks for reading!
This could be achieved using sockets.
Why? Because a chat application will have dozens of actions, like:
'STARTS_TYPING', 'STOPS_TYPING', 'SEND_MESSAGE', 'RECEIVE_MESSAGE', ...
Accommodating all of these features with REST APIs would produce a complex system that performs poorly.
Also, the concept of rooms in socket.io removes a lot of the headache of implementing group chat.
So it's better to build everything on sockets (socket.io, or a clustered socket setup).
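A rough sketch of what socket-only handling with rooms could look like (event names follow the ones above; persistence and validation are omitted):

```js
const { Server } = require('socket.io');
const io = new Server(3000);

io.on('connection', (socket) => {
  socket.on('JOIN_GROUP', (groupId) => socket.join(`group:${groupId}`));

  // Typing indicators go to everyone in the group except the sender.
  socket.on('STARTS_TYPING', (groupId) => {
    socket.to(`group:${groupId}`).emit('STARTS_TYPING', { from: socket.id });
  });

  socket.on('SEND_MESSAGE', (groupId, text) => {
    // Fan the message out to the whole group, sender included.
    io.to(`group:${groupId}`).emit('RECEIVE_MESSAGE', { from: socket.id, text });
  });
});
```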
Here is the solution I found to this problem.
The key mistake in my question was that I assumed a REST API and websockets were mutually exclusive, because I intended to integrate the business and database logic directly into the express routes and socket.io handlers. Thus, choosing between socket.io and HTTP seemed important, because it would influence the core business logic of my app.
Instead, it shouldn't matter which transport to use. The business logic has to be independent from the transport logic, in its own module.
To do this, I developed a service layer that handles CRUD tasks, but also more specific tasks such as authentication. Then, this service layer can be easily consumed from either or both express routes and socket.io handlers.
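A condensed, single-file sketch of the idea (the sendMessage body stands in for real validation, authentication and persistence; routes and event names are illustrative):

```js
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

// Service layer: business logic that knows nothing about transports.
async function sendMessage({ userId, channelId, text }) {
  // Validation, authorization and database writes would live here.
  return { id: Date.now(), userId, channelId, text };
}

// Transports: the HTTP route and the socket handler both consume the service.
const app = express();
const server = http.createServer(app);
const io = new Server(server);
app.use(express.json());

app.post('/api/v1/messages', async (req, res) => {
  const message = await sendMessage(req.body);
  io.to(`channel:${message.channelId}`).emit('NEW_MESSAGE', message);
  res.status(201).json(message);
});

io.on('connection', (socket) => {
  socket.on('SEND_MESSAGE', async (payload) => {
    const message = await sendMessage(payload);
    io.to(`channel:${message.channelId}`).emit('NEW_MESSAGE', message);
  });
});

server.listen(3000);
```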
In the end, this architecture allowed me to switch easily between transport technologies.
I am writing a NodeJS application for several kinds of clients (mobile, single-page app, ...). The application is a game and quite social: users can have friends, and the game needs to share events between them in realtime. So I decided to use Socket.io.
How can I effectively share events between the connected sockets of a user's friends? One user can be connected with more than one client (so he has more than one connected socket).
Is it a good idea to use socket.io "rooms", where each user has one room, and when some event occurs it is emitted to the room of each of my friends?
How can I handle this effectively?
To get a better picture, you can imagine it the same way as the Facebook wall (or the right-hand panel of friend activity).
How would you handle it?
You could try solving this with socket.io rooms, but I think that would kill your application because of the overhead. For example:
1000 users => 1000 rooms (one room for each user)
That's 1000 rooms to maintain even before anyone is actually listening to a friend's activity.
So the approach above already seems not to scale.
I would try the publish/subscribe pattern instead, which is not very difficult to implement; you could use a micro-library, Backbone with .listenTo, or even socket.io with Redis pub/sub.
A user would have the option to start or stop listening to a friend's events.
A user would not get a ton of notifications for a simple friend event.
And the list goes on, depending on your application's behavior.
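A rough in-process sketch of that subscribe/unsubscribe idea using Node's EventEmitter (socket.userId and the event names are assumptions; across multiple processes a Redis pub/sub would take the EventEmitter's place):

```js
const { EventEmitter } = require('events');
const { Server } = require('socket.io');

const io = new Server(3000);
const activityBus = new EventEmitter();
activityBus.setMaxListeners(0);   // many sockets may follow the same user

io.on('connection', (socket) => {
  const subscriptions = new Map();   // friendId -> handler

  // The client explicitly opts in to a friend's events...
  socket.on('follow', (friendId) => {
    const handler = (event) => socket.emit('friendActivity', event);
    subscriptions.set(friendId, handler);
    activityBus.on(`user:${friendId}`, handler);
  });

  // ...and can opt out again.
  socket.on('unfollow', (friendId) => {
    const handler = subscriptions.get(friendId);
    if (handler) activityBus.removeListener(`user:${friendId}`, handler);
    subscriptions.delete(friendId);
  });

  socket.on('disconnect', () => {
    for (const [friendId, handler] of subscriptions) {
      activityBus.removeListener(`user:${friendId}`, handler);
    }
  });

  // Publishing an activity only reaches sockets that subscribed to this user.
  socket.on('activity', (event) => {
    activityBus.emit(`user:${socket.userId}`, event);
  });
});
```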
Some questions that have interesting information.
What should I be using? Socket.io rooms or Redis pub-sub?
Node.js, Socket.io, Redis pub/sub high volume, low latency difficulties
I'm using Backbone.iobind to bind my client Backbone models over socket.io to the back-end server which in turn store it all to MongoDB.
I'm using socket.io so I can synchronize changes back to the other clients' Backbone models.
The problem starts when I try to run the same thing over a cluster of node.js servers.
Setting a session store was easy using connect-mongo which stores the session to MongoDB.
But now I can't inform all the clients on every change, since the clients are distributed between the different node.js servers.
The only solution I found is to set up a pub/sub queue between the different node.js servers (e.g. mubsub), which seems like a very heavyweight solution that will trigger an event on all the servers for every change.
How did you reach the conclusion that pub/sub is a "very heavy weight solution"?
Sounds like you got it right up until that part :-)
Oh, and pub/sub is not a queue.
Let's examine that claim:
The nice thing about pub/sub is that you publish and subscribe to channels/topics.
So, using the classic chat server example, let's say you have a million users connected in total, but #myroom only has 50 users in it.
When a message is sent to #myroom, it's being published once. No duplication whatsoever.
In most use-cases you won't even need to store it on disk/RAM, so we're mostly looking at network/bandwidth here. And, I mean, you're probably throwing more data (probably over the wire?) to MongoDB already, so I assume that's not your bottleneck.
If you also use socket.io's rooms feature (which is basically its own pub/sub mechanism), that means only those 50 users will have that message emitted to them over the websocket.
And no, socket.io won't iterate over 1M clients to find out which of them are in room #myroom ;-)
So the message is published once, each subscriber (node.js instance) will get notified once, and only the relevant clients -- socket.io won't waste CPU cycles in order to find them as it keeps track of them when they join() or leave() a room -- will receive the message.
Doesn't that sound pretty efficient and light-weight?
Give Redis a shot.
It's really simple to set up, runs entirely in memory, is blazing fast, replication is extremely simple, etc.
That's the way socket.io recommends passing events between nodes.
You can find more information/code here.
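A minimal sketch of plain Redis pub/sub between node.js processes, using the classic callback-style redis client (the channel name and payload shape are made up):

```js
const redis = require('redis');
const { Server } = require('socket.io');

const io = new Server(3000);

// Each node.js server keeps one subscriber connection...
const sub = redis.createClient();
sub.subscribe('model:changed');
sub.on('message', (channel, payload) => {
  const change = JSON.parse(payload);
  // ...and re-emits the change to the sockets connected to *this* process.
  io.to(change.room).emit('model:changed', change);
});

// ...and one publisher connection; call this wherever a model is saved,
// so the change is published exactly once.
const pub = redis.createClient();
function publishChange(change) {
  pub.publish('model:changed', JSON.stringify(change));
}
```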
Additionally, if MongoDB can't handle the load at any point, you can use Redis as your session-store as well.
Hope this helps!
I'm experimenting with node.js by building a chat server app, and I'm maintaining client connections in arrays: incrementing an index for every new connection and inserting the client object into the array.
I don't like this solution because I think it's too simplistic: I leave holes in the arrays on every disconnection, and iterating over the arrays to find a certain connection takes a long time.
Which is the "node" way to handle multiple connections?
For the record, my understanding here is that you're trying to track clients to know who to broadcast to? I'm assuming this based on a previous question you asked that I saw.
I believe this is similar to how socket.io handles rooms internally; it's been a while since I looked into it, but I think that's how it's done. That said, what I've done in the past is use a room for each chat, named something along the lines of a concatenation of the usernames/user ids of the members. That way, when you get data on a connection, you can easily broadcast it back to the same room, and it will be sent to every connection in the room except the one it was received from. That is perfect for a chat application. Socket.io handles tracking which connections are in which room.
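A small sketch of that room-per-chat idea (event and room names are illustrative):

```js
const { Server } = require('socket.io');
const io = new Server(3000);

io.on('connection', (socket) => {
  // Join the room for a given chat instead of tracking connections yourself.
  socket.on('joinChat', (chatId) => socket.join(chatId));

  socket.on('chatMessage', (chatId, text) => {
    // Goes to every connection in the room except the sender;
    // socket.io tracks membership and removes sockets on disconnect.
    socket.to(chatId).emit('chatMessage', { from: socket.id, text });
  });
});
```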