I am trying to build a generic publish/subscribe server with node.js and node_redis that receives requests from a browser with a channel name and responds with any data that has been published to that channel. To do this, I am using long-polling requests from the browser and dealing with these requests by sending a response when a message is received on the channel.
For each new request, an object is created to subscribe to the channel (if and only if one does not already exist).
clients = {};
// when a request comes in, create a subscriber for the channel,
// but only if one does not already exist
if (!clients[channel]) {
    clients[channel] = redis.createClient();
    clients[channel].subscribe(channel);
}
Is this the best way to deal with the subscription channels, or is there some other more intuitive way?
I don't know your design, but you can subscribe with one Redis client to multiple channels (once a client has subscribed, that connection can only issue further subscribe or unsubscribe commands: http://redis.io/commands/subscribe). Each received message carries the channel it came from, so you can distribute it to all interested clients.
This helped me, because I could encode the type of message in the channel name and then dynamically choose the action for each message in one small function, instead of creating a separate subscription with separate logic for each channel.
Inside my node.js server I have only 2 redis clients:
simple client for all standard actions - lpush, sadd and so on
subscribe client - which listens for messages on the subscribed channels; these messages are then distributed to all sessions (stored as sets, one per channel type) using the first Redis client, as sketched below.
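As a rough sketch of that two-client setup (the channel naming and the session-set layout here are illustrative assumptions, and deliverToSession is a hypothetical app-specific helper):

var redis = require('redis');

var client = redis.createClient(); // standard actions: lpush, sadd, and so on
var sub = redis.createClient();    // dedicated connection for SUBSCRIBE

// the message type is encoded in the channel name, e.g. 'chat:message'
sub.subscribe('chat:message', 'chat:presence');

sub.on('message', function (channel, message) {
    var type = channel.split(':')[1]; // dispatch on the channel name
    // sessions interested in this type are stored as a Redis set
    client.smembers('sessions:' + type, function (err, sessions) {
        if (err) return;
        sessions.forEach(function (sessionId) {
            deliverToSession(sessionId, message); // hypothetical helper
        });
    });
});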
I would like to point you to my post about pub/sub using socket.io together with Redis. Socket.io is a very good library:
How to use redis PUBLISH/SUBSCRIBE with nodejs to notify clients when data values change?
I think the design is very simple and it should also be very scalable.
That seems like a pretty reasonable solution to me. What don't you like about it?
Something to keep in mind is that you can have multiple subscriptions on each Redis connection. This might end up complicating your logic, which is the opposite of what you are asking for. However, at scale this might be necessary. Each Redis connection is relatively inexpensive, but it does require a file descriptor and some memory.
Complete Redis Pub/Sub Example (Real-time Chat using Hapi.js & Socket.io)
We were trying to understand Redis Publish/Subscribe ("Pub/Sub"), and all the existing examples were either outdated, too simple, or untested.
So we wrote a Complete Real-time Chat using Hapi.js + Socket.io + Redis Pub/Sub Example with End-to-End Tests!
https://github.com/dwyl/hapi-socketio-redis-chat-example
The Pub/Sub component is only a few lines of node.js code:
https://github.com/dwyl/hapi-socketio-redis-chat-example/blob/master/lib/chat.js#L33-L40
Rather than pasting it here (without any context), we encourage you to check out and try the example.
We built it using Hapi.js, but the chat.js file is decoupled from Hapi and can easily be used with a basic node.js HTTP server or Express, etc.
Related
I want to make an app which lets users comment and send messages. However, the notifications for these events will have to come instantly, just like any other social-media or chat application. This is what I'm thinking of:
Web frontend: Angular; mobile: Ionic with Angular
Backend: Node, Mongo
Now, this is how I was thinking I'd implement real-time notification.
There's a constant socket connection between the frontend (web & mobile-app) and the backend.
Whenever a message targeted at a specific user arrives, I'll use some kind of Mongo hook to send the notification to the frontend via the socket connection.
Now, the confusion with this approach is:
Would millions of socket connections even work at scale? If not, what is the way to implement this kind of pub/sub system? I need to build it from scratch, not using Firebase.
What if a user is offline when a message for them arrives at the backend? If the socket is not connected, how would they get the message? Is there a way to do it using Kafka? Please explain if you have some ideas on this.
Is this the correct approach? If not, can you suggest what would be appropriate?
Would millions of socket connections even work at scale? If not, what is the way to implement this kind of pub/sub system? I need to build it from scratch, not using Firebase.
Yes, it can work at scale; you just have to design the architecture for it. You might find these useful:
Scalable architecture for socket.io
https://socket.io/docs/v3/using-multiple-nodes/
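With socket.io specifically, the usual pattern from those docs is the Redis adapter, which lets every node broadcast to clients connected to any other node. A minimal sketch, assuming the socket.io-redis package and a local Redis (check the linked docs for the exact setup for your socket.io version):

const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

// all nodes publish and subscribe through Redis, so io.emit() and
// room broadcasts reach clients on every node, not just this one
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));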
What if a user is offline when a message for them arrives at the backend? If the socket is not connected, how would they get the message?
If the socket is not connected or the user is offline, the client socket will be disconnected. At that point the notification will not be received; whenever the user comes back online, you'll have to make an API call to fetch the missed notifications and reconnect to the socket for further operations.
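A common shape for that flow on the client (the endpoint and event names here are hypothetical, not from any specific library):

const socket = io('https://your-server.example');

socket.on('connect', async () => {
    // hypothetical endpoint: fetch notifications missed while offline
    const res = await fetch('/api/notifications/unread');
    const missed = await res.json();
    missed.forEach(showNotification); // showNotification is app-specific
});

// live notifications while the socket is connected
socket.on('notification', showNotification);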
Is there a way to do it using Kafka?
Yes, you can also do it with Kafka. You'll need the Consumer API (subscriber) and the Producer API (publisher):
https://kafka.apache.org/documentation/#api
https://www.npmjs.com/package/kafka-node
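A minimal sketch of that producer/consumer split with kafka-node (the 'notifications' topic and the message shape are just examples):

const kafka = require('kafka-node');

// producer: publish a notification event
const producer = new kafka.Producer(new kafka.KafkaClient({ kafkaHost: 'localhost:9092' }));
producer.on('ready', () => {
    producer.send([{
        topic: 'notifications',
        messages: JSON.stringify({ userId: 42, text: 'New message' })
    }], (err) => { if (err) console.error(err); });
});

// consumer: receive events and forward them to connected sockets
const consumer = new kafka.Consumer(
    new kafka.KafkaClient({ kafkaHost: 'localhost:9092' }),
    [{ topic: 'notifications' }],
    { autoCommit: true }
);
consumer.on('message', (message) => {
    const event = JSON.parse(message.value);
    // emit to the user's socket if online, otherwise persist for later
});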
Sending Apache Kafka data on web page
What do you use Apache Kafka for?
Real time notification with Kafka and NodeJS
I'm trying to build a real-time (private) chat between users of a video game with 25K+ concurrent connections. We currently run 32 nodes that users can connect to through a load balancer. The problem I'm trying to solve is how to route messages to each user.
Currently, we are using socket.io & socket.io-redis, where each websocket joins a room named after its user ID, and we emit each message the user should receive to that room. The problem with this design is that we are reaching the limits of Redis pub/sub and of Socket.io, which doesn't scale well here: socket.io emits each message to all nodes, and every node checks whether the user is connected to it, which is not viable.
Our current stack is composed of Postgres, Redis & RabbitMQ. I have been thinking about this problem a lot and have come up with 3 different solutions:
Route all messages with RabbitMQ. When a user connects, we create a fanout exchange named after the user ID and a queue per websocket connection (we have to handle multiple connections per user). When we want to emit to that user, we simply publish to that exchange. The problem with that approach is that we would have to create a lot of queues, and I have heard that this may not be very efficient.
Create a queue for each node in RabbitMQ. When a user connects, we save the node & socket ID in a Redis set, so that when we have to send a message to that specific user, we first get the list of nodes and emit to each node's queue, which then handles routing to the specific client in the app. The problem with that approach is that, in the case of a node failure, we may record that a user is connected when this is not the case. To fix that, we would need to expire the user's Redis entry, but this is not a perfect fix. Also, if we later want to implement group chat, it would mean sending duplicate messages through Rabbit, which is not ideal.
Go all in with Firebase Cloud Messaging. We have a mobile app, and we plan to use FCM for push notifications when the user isn't connected, but would it be a good fit even while the user is connected?
What do you think is the best fit for our use case? Do you have any other idea?
I found a better solution: create a binding for each user but use only one queue on each node, then route each message to the right user, as sketched below.
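That pattern (one queue per node, one binding per connected user on a direct exchange) might look roughly like this with amqplib; the exchange and queue names are illustrative:

const amqp = require('amqplib');

async function setup(nodeId, localDeliver) {
    const conn = await amqp.connect('amqp://localhost');
    const ch = await conn.createChannel();

    // direct exchange: the routing key is the user ID
    await ch.assertExchange('chat.direct', 'direct', { durable: false });

    // a single queue for this node; one binding is added per connected user
    const { queue } = await ch.assertQueue('node.' + nodeId, { exclusive: true });
    await ch.consume(queue, (msg) => {
        // route to this node's local socket for that user
        localDeliver(msg.fields.routingKey, msg.content.toString());
    }, { noAck: true });

    return {
        addUser: (userId) => ch.bindQueue(queue, 'chat.direct', userId),
        removeUser: (userId) => ch.unbindQueue(queue, 'chat.direct', userId),
        send: (userId, text) => ch.publish('chat.direct', userId, Buffer.from(text))
    };
}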
I can see only one-way communication in the Pusher docs, i.e., from server to client. How do I do it from client to server with node.js?
Pusher Channels does not support bidirectional transport. If you need to send data from your client to your server you will have to use another solution such as a POST request.
Channels does offer webhooks which can be triggered by certain events in the application and could be consumed by your server if they fit your requirements. However, webhooks are designed to keep you informed of certain events within your application rather than as a means of communication between client and server.
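So the client-to-server leg is just ordinary HTTP; for example (the endpoint is hypothetical, and the server would validate the request and then trigger a Channels event itself):

fetch('/api/messages', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ channel: 'chat', text: 'Hello' })
});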
I'm attempting to create an application which will work as a chat app. I'm currently contemplating the best way to do this, and I'm thinking of going with a server-sent events package such as the following. Every conversation would have an id, and the message would be emitted under that id. For instance,
stream.emit(1512, "Hello") would send the message and
stream.on(1512, function(message){console.log(message)}) would print the message. Only the chat members would have the chatId.
I was initially thinking of using websockets, but not every user should be receiving data, as chats are private, and I didn't want to configure authentication within websockets.
Back to server sent events:
I have a few questions on the topic.
Are they efficient and, if not, what would be a more efficient solution?
Is the method of sending chat through a randomized, hashed, id (such as 309ECC489C12D6EB4CC40F50C902F2B4D) secure?
Would you recommend a different method for sending chat? This is to be implemented as a mobile application where individual users can chat privately with one another, so, again, security is pretty important.
Thanks.
I recommend the client-call package (disclaimer: I wrote it). It provides a very simple method to run a client-side method from the server code.
Besides this, you can always just put the chat messages into a db collection and remove them after some time.
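If you go the database route, MongoDB's TTL indexes can handle the "remove them after some time" part automatically; a sketch in the mongo shell (the collection and field names are just examples):

// expire chat messages one hour after their createdAt timestamp
db.messages.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 });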
The Socket.io API has the ability to send messages to all clients.
With one server and all sockets in memory, I understand how that one server can send a message to all its clients; that's pretty obvious. But what about with multiple servers using Redis to store the sockets?
If I have client a connected to server y and client b connected to server z (and a Redis box for the store) and I do socket.broadcast.emit on one server, the client on the other server will receive this message. How?
How do the clients that are actually connected to the other server get that message?
Is one server telling the other server to send a message to its connected client?
Is the server establishing its own connection to the client to send that message?
Socket.io uses MemoryStore by default, so all the connected clients are stored in memory, making it impossible (well, not quite; more on that later) to send and receive events from clients connected to a different socket.io server.
One way to make all the socket.io servers receive all the events is to have every server use Redis's pub/sub. So, instead of using socket.emit, one can publish to Redis:
// publish to Redis instead of emitting directly with socket.io
redis_client = require('redis').createClient();
redis_client.publish('channelName', data);
All the socket servers subscribe to that channel through Redis and, upon receiving a message, emit it to the clients connected to them:
redis_sub = require('redis').createClient();
redis_sub.subscribe('channelName', 'moreChannels');

redis_sub.on("message", function (channel, message) {
    // io is this server's socket.io instance; emit to every client
    // connected to *this* server
    io.sockets.emit(channel, message);
});
Complicated stuff! But wait, it turns out you don't actually need this sort of code to achieve the goal. Socket.io has RedisStore, which essentially does what the code above is supposed to do, in a nicer way, so that you can write Socket.io code as you would for a single server and it will still be propagated to the other socket.io servers through Redis.
To summarise: socket.io sends messages across multiple servers by using Redis as the channel instead of memory.
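For reference, a RedisStore setup in the socket.io 0.9.x era looked roughly like this (the store API was later replaced by the socket.io-redis adapter, so treat this as a historical sketch; io is the socket.io server instance created elsewhere):

var RedisStore = require('socket.io/lib/stores/redis');
var redis = require('redis');

io.set('store', new RedisStore({
    redisPub: redis.createClient(),
    redisSub: redis.createClient(),
    redisClient: redis.createClient()
}));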
There are a few ways you can do this. More info in this question. A good explanation of how pub/sub in Redis works is here, in Redis' docs. An explanation of how the paradigm works in general is here, on Wikipedia.
Quoting the Redis docs:
SUBSCRIBE, UNSUBSCRIBE and PUBLISH implement the Publish/Subscribe messaging paradigm where (citing Wikipedia) senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be. Subscribers express interest in one or more channels, and only receive messages that are of interest, without knowledge of what (if any) publishers there are. This decoupling of publishers and subscribers can allow for greater scalability and a more dynamic network topology.