I'm working with Redis to publish and subscribe messages between socket.io clients. When a client connects to the server (io.sockets.on('connection', function(socket){...});), I create a subscribe variable using redis.createClient() and then use the subscribe function to subscribe that client to a channel.
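Roughly, the setup looks like this (a simplified sketch assuming the classic node_redis callback API; the channel name is just a placeholder):

var redis = require('redis');

io.sockets.on('connection', function (socket) {
  var subscriber = redis.createClient();   // one Redis connection per socket
  subscriber.subscribe('some-channel');    // placeholder channel name

  subscriber.on('message', function (channel, message) {
    socket.emit('message', message);       // forward published messages to the browser
  });

  socket.on('disconnect', function () {
    subscriber.quit();                     // clean up the Redis connection
  });
});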
My question is: is it fine to use that same subscribe variable to perform a publish action, or is it important to create another instance with redis.createClient() for publishing messages, so that I end up with two instances, one for publishing and one for subscribing?
Thanks
From the Redis docs:
Once the client enters the subscribed state it is not supposed to issue any other commands, except for additional SUBSCRIBE, PSUBSCRIBE, UNSUBSCRIBE and PUNSUBSCRIBE commands.
For this reason, you'll need two clients, one for subscribing and one for publishing (and potentially other commands).
By subscribe variable do you mean the object that redis.createClient() returns? If so, then from the documentation: "When a client issues a SUBSCRIBE or PSUBSCRIBE, that connection is put into 'pub/sub' mode. At that point, only commands that modify the subscription set are valid." So yes, you cannot publish through a client on which you subscribed first; that would raise an Error: Connection in pub/sub mode, only pub/sub commands may be used.
You do need to create one client for subscriptions (which can be modified on the fly) and one client for publishing. Once a client has no remaining subscriptions, it returns to its normal state.
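A minimal sketch of the two-client pattern (assuming the classic node_redis callback API; channel and message names are just placeholders):

var redis = require('redis');

var sub = redis.createClient();   // goes into pub/sub mode once it subscribes
var pub = redis.createClient();   // stays in normal mode, free to PUBLISH and run other commands

sub.subscribe('chat');
sub.on('message', function (channel, message) {
  console.log('received on ' + channel + ': ' + message);
});

pub.publish('chat', 'hello world');   // must go through the non-subscribed client

// sub.publish('chat', 'nope');       // would fail: this connection is in pub/sub mode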
Related
I have used socket.io and Apollo subscriptions before; now I am looking for an alternative solution. I need something similar to subscriptions-transport-ws that allows a client browser to subscribe to events or channels from the server.
Simple example.
Let's say on the server side there is some kind of pub/sub broker. A user opens an article and the comment section is updated in real time, so when the user opens the article, they start listening for events from that channel. It should work like Twitter, displaying every new like and comment.
I'm trying to build a realtime (private) chat between users of a video game with 25K+ concurrent connections. We currently run 32 nodes that users can connect to through a load balancer. The problem I'm trying to solve is how to route messages to each user.
Currently, we are using socket.io & socket.io-redis, where each websocket joins a room named after its user ID, and we emit each message the user should receive to that room. The problem with this design is that we are reaching the limits of Redis Pub/Sub and of Socket.io, which doesn't scale well (socket.io emits messages to all nodes, each of which checks whether the user is connected; this is not viable).
Our current stack is composed of Postgres, Redis & RabbitMQ. I have been thinking about this problem a lot and have come up with 3 different solutions:
Route all messages with RabbitMQ. When a user connects, we create a fanout exchange named after the user ID and a queue per websocket connection (we have to handle multiple connections per user). When we want to emit to that user, we simply publish to that exchange. The problem with that approach is that we have to create a lot of queues, and I have heard that this may not be very efficient.
Create a queue for each node in RabbitMQ. When a user connects, we save the node & socket ID in a Redis set, so when we need to send a message to that specific user, we first get the list of nodes and emit to each node's queue, which then handles routing to the specific client in the app. The problem with that approach is that in the case of a node failure, we may record that a user is connected when they are not. To mitigate that, we would need to expire the user's Redis entry, but this is not a perfect fix. Also, if we later want to implement group chat, we would have to send duplicate messages through Rabbit, which is not ideal.
Go all in with Firebase Cloud Messaging. We have a mobile app, and we plan to use it for push notifications when the user isn't connected, but would it be a good fit even if the user is connected?
What do you think is the best fit for our use case? Do you have any other idea?
I found a better solution: create a binding for each user but use only one queue on each node, then route each message to the right user through those bindings.
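A rough sketch of that idea with amqplib (exchange, queue and function names here are illustrative, not the actual production code):

const amqp = require('amqplib');

async function startNode(nodeId) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();

  // one direct exchange for all user messages, one queue per node
  await ch.assertExchange('user-messages', 'direct', { durable: false });
  const { queue } = await ch.assertQueue('node-' + nodeId, { exclusive: true });

  // everything routed to this node gets dispatched to the locally connected sockets
  await ch.consume(queue, (msg) => {
    const userId = msg.fields.routingKey;
    // look up the local sockets for userId and emit msg.content to them...
    ch.ack(msg);
  });

  return {
    // one binding per user connected to this node
    userConnected: (userId) => ch.bindQueue(queue, 'user-messages', userId),
    userDisconnected: (userId) => ch.unbindQueue(queue, 'user-messages', userId),
    // publish once with the user ID as routing key; every node holding a binding for it receives the message
    sendToUser: (userId, payload) =>
      ch.publish('user-messages', userId, Buffer.from(JSON.stringify(payload))),
  };
}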
There are two methods I am considering. One is to set up a subscriber function with the socket ID that is fired once the webhook is received. The other is to poll the server from the client to check if the DB has been updated.
I don't like the idea of polling the server because it is making unnecessary network calls. However, the problem I see with the subscription method is that if the socket connection disappears for whatever reason, the client will never be notified.
I am using Sails.js and Mongoose
I am implementing a TCP chat server using Node.js and Redis; however, I don't seem to be able to persist chat data in Redis using Publish and Subscribe, and hence when I leave the chat room and re-enter, I don't see the newest messages. How should I implement something like this?
Publish is not meant to store anything in Redis, even if you choose disk persistence. When Redis receives a published message, it just finds the connections subscribed to the requested channel and forwards the message to each of them, so nothing is stored. Even if it did store them, it would have to keep trying to forward messages (because it's a pub/sub model), which would not be very effective. Instead, you should also push the messages onto a list with LPUSH so they are stored. When a client connects and has no messages, it can retrieve them from that list (without popping, so other newcomers can also read them) and then subscribe to the channel to receive new messages.
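A sketch of that pattern (classic node_redis callback API; key and channel names are placeholders):

var redis = require('redis');
var client = redis.createClient();   // normal client: LPUSH, LRANGE, PUBLISH
var sub = redis.createClient();      // dedicated subscriber connection

function sendMessage(room, message) {
  client.lpush('history:' + room, message);   // store it so latecomers can catch up
  client.publish('room:' + room, message);    // forward it to everyone currently subscribed
}

function joinRoom(room, onMessage) {
  // replay stored history for a (re)connecting client, oldest first
  client.lrange('history:' + room, 0, -1, function (err, messages) {
    if (!err) messages.reverse().forEach(onMessage);
  });
  sub.subscribe('room:' + room);
  sub.on('message', function (channel, message) {
    if (channel === 'room:' + room) onMessage(message);
  });
}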
By default, redis is in memory only. You have to enable persistence explicitly.
There are multiple options; AOF with a sync on every query is the safest, but probably the slowest.
More details here: http://redis.io/topics/persistence
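For example, in redis.conf (illustrative snippet; "always" corresponds to the per-query option above, "everysec" is the usual compromise):

appendonly yes
appendfsync everysec   # or "always" for maximum durability, "no" to leave fsync to the OS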
I am trying to build a generic publish/subscribe server with nodejs and node_redis that receives requests from a browser with a channel name and responds with any data that has been published to that channel. To do this, I am using long polling requests from the browser and dealing with these requests by sending a response when a message is received on a channel.
For each new request, an object is created for subscribing to the channel (if and only if one does not already exist).
var redis = require('redis');
var clients = {};

// when a request comes in for a channel, only create a subscriber if one doesn't already exist
if (!clients[channel]) {
  clients[channel] = redis.createClient();
  clients[channel].subscribe(channel);
}
Is this the best way to deal with the subscription channels, or is there some other more intuitive way?
I don't know what your design is, but you can subscribe with one Redis client to multiple channels (once you subscribe with a client, you can only subscribe to other channels or unsubscribe within that connection: http://redis.io/commands/subscribe), because when you receive a message you are told exactly which channel it came from. You can then distribute that message to all interested clients.
This helped me a little, because I could put the type of message in the channel name and then dynamically choose the action for each message in one small function, instead of creating a separate subscription with separate logic for each channel.
Inside my node.js server I have only 2 redis clients:
a simple client for all standard actions (lpush, sadd and so on)
a subscribe client, which listens for messages on the subscribed channels; these messages are then distributed to all sessions (stored as sets for each channel type) using the first Redis client.
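Roughly, that looks like this (classic node_redis callback API; the sessions lookup is a simplified, hypothetical placeholder):

var redis = require('redis');
var client = redis.createClient();   // normal client: lpush, sadd and so on
var sub = redis.createClient();      // subscriber client

sub.subscribe('chat');               // one connection can hold many subscriptions
sub.subscribe('notifications');

sub.on('message', function (channel, message) {
  // the channel name says what kind of message this is and which sessions want it
  // sessions: hypothetical map from channel name to the sockets interested in it
  (sessions[channel] || []).forEach(function (socket) {
    socket.emit(channel, message);
  });
});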
I would like to point you to my post about pub/sub using socket.io together with Redis (Socket.io is a very good library):
How to use redis PUBLISH/SUBSCRIBE with nodejs to notify clients when data values change?
I think the design is very simple and it should also be very scalable.
That seems like a pretty reasonable solution to me. What don't you like about it?
Something to keep in mind is that you can have multiple subscriptions on each Redis connection. This might end up complicating your logic, which is the opposite of what you are asking for. However, at scale this might be necessary. Each Redis connection is relatively inexpensive, but it does require a file descriptor and some memory.
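For example (node_redis-style API; channel names are placeholders), a single connection can hold several subscriptions:

var redis = require('redis');
var sub = redis.createClient();
sub.subscribe('channel:1');
sub.subscribe('channel:2');   // a second subscription on the same connection
sub.on('message', function (channel, message) {
  // dispatch based on which channel the message arrived on
});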
Complete Redis Pub/Sub Example (Real-time Chat using Hapi.js & Socket.io)
We were trying to understand Redis Publish/Subscribe ("Pub/Sub") and all the existing examples were either outdated, too simple or had no tests.
So we wrote a Complete Real-time Chat using Hapi.js + Socket.io + Redis Pub/Sub Example with End-to-End Tests!
https://github.com/dwyl/hapi-socketio-redis-chat-example
The Pub/Sub component is only a few lines of node.js code:
https://github.com/dwyl/hapi-socketio-redis-chat-example/blob/master/lib/chat.js#L33-L40
Rather than pasting it here (without any context), we encourage you to check out and try the example.
We built it using Hapi.js, but the chat.js file is decoupled from Hapi and can easily be used with a basic node.js http server or Express, etc.