Problem: remote systems reconnect to a multi-node websocket server, and for each system a dedicated queue is created/used in RabbitMQ. The queues should be removed automatically when no active connections exist. The websocket connect/disconnect event handlers are asynchronous and quite heavy; the observed problem was that a disconnect event handler sometimes finished after a reconnect, leaving the system in an inconsistent state.
The main issue is with the RabbitMQ queues: the initial solution was to create a unique queue for each connection and remove it on disconnect. That turned out to be heavy.
The second approach was to keep one dedicated queue per remote system (the same queue name for any connection); the problem was that every assertQueue added another consumer to the same queue. I needed a way to remove stale queue consumers without removing the queue itself.
The solution is to store a list of consumers per remote system and, on the disconnect event, call the channel's cancel method with the oldest consumerTag, then update the list of queue consumers for the given remote system.
On the remote system connect event:
import { Replies } from "amqplib";

// bind the callback for queue-specific messages and store the returned consumer description
const result: Replies.Consume = await channel.consume(queueName, this.onSomeMessage.bind(this));

// update the consumers list for the connected remote system
const consumers: Array<string> | undefined = this.consumers.get(remoteId);
if (consumers === undefined) {
  this.consumers.set(remoteId, [result.consumerTag]);
} else {
  consumers.push(result.consumerTag);
}
On the remote system disconnect event:
import { ConfirmChannel } from "amqplib";

// remove the oldest consumer in the list and update the list itself,
// using the cancel method of the amqp channel
const consumers = this.consumers.get(remoteId);
if (consumers === undefined) {
  // shouldn't happen
  console.error(`consumers list for ${remoteId} is empty`);
} else {
  const consumerTag = consumers[0];
  // rxchannel is (presumably) an amqp-connection-manager ChannelWrapper;
  // addSetup runs the function on the current channel and again after reconnects
  await this.rxchannel.addSetup(async (channel: ConfirmChannel) => {
    await channel.cancel(consumerTag);
    consumers.shift();
  });
}
The code snippets are taken from methods of a class (in case you're wondering about "this").
Copyright notice (especially for German colleagues): the code from this answer can be used under the Beerware (https://en.wikipedia.org/wiki/Beerware) or MIT license, whichever you prefer.
Related
I am using the pg-pubsub module to listen for 'notify' events sent from Postgres and push them into a Redis queue. How can I ensure that the order in the queue is the same as the order in which the events were emitted?
To check whether the order of execution is maintained, I ran the following code.
const PGPubsub = require('pg-pubsub');
const pubsub_instance = new PGPubsub(...);
pubsub_instance.addChannel('test_channel', async (event) => {
  await util.sleep(event.time);
  console.log(`done ${event.time}`);
});
Then created the following events.
psql> notify test_channel, '{"time": 5000}';
psql> notify test_channel, '{"time": 3000}';
psql> notify test_channel, '{"time": 1000}';
The output was
done 1000
done 3000
done 5000
Is there a way to ensure that event listeners wait for the previous events to finish execution? Even if I use locks, can I trust that the locks will be taken in the order the listeners were invoked?
Is there a better way to push Postgres notifications to Redis?
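One pattern that would enforce this (pg-pubsub itself does not serialize async handlers, which is exactly why the short sleeps finish first above) is to chain the handlers onto a single promise, so each event's work starts only after the previous event's work has settled. A minimal sketch, reusing pubsub_instance and util.sleep from the question as a stand-in for the real Redis push:

// promise chain acting as a FIFO queue for the async work
let queue = Promise.resolve();

pubsub_instance.addChannel('test_channel', (event) => {
  // the callback itself is invoked in notification order, so chaining
  // here preserves that order for the async work as well
  queue = queue.then(async () => {
    await util.sleep(event.time); // e.g. replace with an LPUSH to Redis
    console.log(`done ${event.time}`);
  });
});

With this, the example above would print done 5000, done 3000, done 1000, matching the order in which the notifications were emitted.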
I am trying to use socket.io alongside a local event listener inside the io.on("connection", client => {...}) handler. The problem is that every time a new socket.io connection is created, a new event listener is added. This eventually leads to a max listeners error in Node.js.
I need that event listener there so it can await data from other parts of the application and then use the returned socket.io client object to emit that data to the connected socket.io client.
Should I simply increase setMaxListeners per the documentation? Or is there something I should be doing differently with my code to prevent the creation of a new event listener each time the client connects (e.g. is there a way to register the event listener globally, but pass and use new client connections into the event listener)?
io.on('connection', client => {
  // console.log("Websockets client connected")
  events.on("initializePage", data => {
    client.emit("initializePage", data)
  })
  client.on('disconnect', () => {
    console.log("Socket.io client disconnected")
  })
})
In the snippet of code you present, the global events EventEmitter will have a new listener attached for each new socket. So indeed, after a while maxListeners will be exhausted if many clients connect.
A first step to avoid adding listeners indefinitely would be to do some cleanup each time a client disconnects, tearing down all the listeners it registered via the off method, as sketched below.
In case the clients only need to be notified once, you can register the listener with the once method instead of on, which performs the cleanup automatically after one trigger.
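A minimal sketch of that cleanup, assuming events is the global EventEmitter from the question:

io.on('connection', client => {
  // keep a reference to the exact listener so it can be removed later
  const onInitializePage = data => client.emit("initializePage", data);
  events.on("initializePage", onInitializePage);

  client.on('disconnect', () => {
    // tear down the listener this particular socket registered
    events.off("initializePage", onInitializePage);
  });
});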
But, to get back more specifically to socket.io, it sounds like you're in a situation where you want to perform some kind of broadcast/multicast. So what about using the namespace system and calling:
events.on("initializePage", data => {
io.sockets.emit("initializePage", data);
})
This code has to be written at the top level of your file, not in the connection handler.
I have one redis client for pub/sub. I'm using a websocket message handler to dynamically subscribe to a redis channel. The payload of the websocket message contains an ID that I use to build the channel name, for example lobby:${lobbyID}:joined.
Subscribing to this channel works fine, messages are received when publishing to that channel.
But the issue I'm having is that I want to unsubscribe from this channel at some point. My assumption from reading the Redis documentation was that I could use punsubscribe to unsubscribe from any channels matching the pattern lobby:*:joined, but messages are still received after trying that.
import redis from 'redis';

const subClient = redis.createClient();

subClient.on('message', (channel, message) => {
  // received message x on channel y
});

const socketHandlerSubscribe = (lobbyID) => {
  subClient.subscribe(`lobby:${lobbyID}:joined`);
};

const socketHandlerUnsubscribe = () => {
  subClient.punsubscribe('lobby:*:joined'); // returns true
};
When using the redis-cli, the pattern seems valid with PUBSUB CHANNELS lobby:*:joined. I could solve this by passing a lobby ID to the unsubscribe handler as well, but punsubscribe should be the solution for it.
I also ran into this earlier in a scenario where I looped through an array of user IDs, created a subscription for each on statuses:${userID}, and tried a punsubscribe on statuses:*, without any success.
Am I doing something wrong, or is this a node-redis issue? I'm using redis version 2.8.0.
I noticed that there are two different types of subscriptions: on channels and on patterns. In my question I was subscribing to a channel but unsubscribing from a pattern; the two are not 'compatible', so this won't work.
I used nc to debug this, as redis-cli won't accept additional commands once it enters the subscribed state.
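In other words, subscribe pairs with unsubscribe and psubscribe pairs with punsubscribe; punsubscribe only removes patterns that were previously passed to psubscribe, it does not match plain channel subscriptions. A minimal sketch of the pattern-based variant, which keeps both sides symmetric (note that pattern subscriptions deliver messages via the pmessage event, not message):

import redis from 'redis';

const subClient = redis.createClient();

// pattern subscriptions emit 'pmessage', which also reports the matching pattern
subClient.on('pmessage', (pattern, channel, message) => {
  // e.g. channel === 'lobby:42:joined'
});

const socketHandlerSubscribe = () => {
  // one pattern subscription covers every lobby channel
  subClient.psubscribe('lobby:*:joined');
};

const socketHandlerUnsubscribe = () => {
  // removes the exact pattern given to psubscribe above
  subClient.punsubscribe('lobby:*:joined');
};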
I use RabbitMQ, Node.js (with the socket.io and amqp modules) and ZF2 to develop a chat.
By default RabbitMQ delivers messages from a queue round-robin.
Does RabbitMQ have a way to deliver the same message to every subscriber of a queue?
For example:
If I create a queue for each connection, it works correctly, but if a user opens 2 tabs in their browser, 2 queues are created. I don't think that's good.
I want to have one queue per user (but if I do that, the first message is delivered to the first tab and the second message to the second tab).
My code:
var exchange = connectionAmqp.exchange('chat', {type: 'direct', passive: false, durable: false, autoDelete: false});
console.log(' [*] Client connected');
connectionAmqp.queue('chat' + userId.toString(), {
  passive: false,
  durable: false,
  exclusive: false,
  autoDelete: false
}, function (queue) {
  // catch new messages from the queue
  queue.bind(exchange, userId.toString());
  queue.subscribe(function (msg) {
    socket.emit('pullMessage', msg); // emit message to browser
  });
});
From another script I publish a message:
var exchange = connectionAmqp.exchange('chat', {type: 'direct', passive: false, durable: false, autoDelete: false});
var data = {chatId: 70, msg: "Text", time: 1375333200};
exchange.publish('1', data, {contentType: 'application/json'});
Make sure the queues are not exclusive. Then make sure the client connects to the same queue. This can be done by having the client create the queue and specify its name. The naming algorithm will make sure that the queue name is unique per client, but for the same client it will produce the same name. Both tabs will then read in turn from the same queue, giving the round-robin effect you are looking for.
If you want to send a message to all queues, you can use an exchange of type fanout. See here! It will broadcast a message to each queue bound to it. However, if you attach two consumers (callbacks) to one queue, those two consumers will still be fed round-robin.
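A minimal sketch of the fanout setup in the same node-amqp style as the question (the exchange name chat-fanout is made up; each tab gets its own server-named, exclusive queue bound to the fanout):

var exchange = connectionAmqp.exchange('chat-fanout', {type: 'fanout', durable: false, autoDelete: false});

// an empty name lets the server generate a unique queue per tab;
// exclusive queues are removed when the connection closes
connectionAmqp.queue('', {exclusive: true}, function (queue) {
  queue.bind('chat-fanout', ''); // the routing key is ignored by fanout exchanges
  queue.subscribe(function (msg) {
    socket.emit('pullMessage', msg); // every tab receives every message
  });
});

// publishing side: the routing key is likewise irrelevant
exchange.publish('', {chatId: 70, msg: "Text", time: 1375333200}, {contentType: 'application/json'});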
Queues are very lightweight and RabbitMQ is built to handle many queues, so it's fine to create a queue for each tab. If you are still unsure, this post may be of interest. The author built a simple chat system and stress-tested it, showing that RabbitMQ easily handles thousands of queues and messages per second.
Although it is possible to do this with just one queue per user, it will be far easier with one queue per tab... and when using RabbitMQ there is usually no need for such optimizations*.
*(of course there are exceptions)
I'm building a simple system like a realtime news feed, using node.js + socket.io.
Since this is a "read-only" system, clients connect and receive data, but clients never actually send any data of their own. The server generates the messages that need to be sent to all clients; no client generates any messages, yet I do need to broadcast.
The documentation for socket.io's broadcast (end of page) says
To broadcast, simply add a broadcast flag to emit and send method calls. Broadcasting means sending a message to everyone else except for the socket that starts it.
So I currently capture the most recent client to connect into a variable, then emit() to that socket and broadcast.emit() from that socket, so that this new client gets the new data along with all the other clients. But it feels like the client's role here is nothing more than a workaround for what I thought socket.io already supported.
Is there a way to send data to all clients based on an event initiated by the server?
My current approach is roughly:
var socket;

io.sockets.on("connection", function (s) {
  socket = s;
});

/* bunch of real logic, yadda yadda ... */

myServerSideNewsFeed.onNewEntry(function (msg) {
  socket.emit("msg", { "msg": msg });
  socket.broadcast.emit("msg", { "msg": msg });
});
Basically the events that cause data to require sending to the client are all server-side, not client-side.
Why not just do it like below?
io.sockets.emit('hello',{msg:'abc'});
Since you are emitting events only server side, you should create a custom EventEmitter for your server.
var io = require('socket.io').listen(80),
    events = require('events'),
    serverEmitter = new events.EventEmitter();

io.sockets.on('connection', function (socket) {
  // here you handle what happens on the 'newFeed' event,
  // which will be triggered by the server later on
  serverEmitter.on('newFeed', function (data) {
    // this message will be sent to all connected users
    socket.emit('newFeed', data);
  });
});

// sometime in the future the server will emit one or more newFeed events
serverEmitter.emit('newFeed', data);
Note: newFeed is just an event example, you can have as many events as you like.
Important
The solution above is also better because in the future you might need to emit certain messages only to some clients, not all (thus needing conditions), as sketched below. For something simpler (just emitting a message to all clients no matter what), io.sockets.emit() is indeed a better fit.
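For illustration, a sketch of such a condition inside the per-socket listener (the topic field and query parameter are made-up examples, not part of the original answer):

io.sockets.on('connection', function (socket) {
  serverEmitter.on('newFeed', function (data) {
    // made-up condition: only forward feeds this client asked for
    if (socket.handshake.query.topic === data.topic) {
      socket.emit('newFeed', data);
    }
  });
});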