Node.js Google PubSub Subscriber does not receive SOME messages - node.js

Summary:
I have a chat functionality in my Node.js app and want to send messages to the clients via socket.io. I trigger the emit to the client via Pub/Sub. Now, when running the Pub/Sub subscription, everything works (i.e. messages are printed out) in roughly 70% of cases, but sometimes it just stops and does nothing (in particular, it also does not print out an error).
I can confirm that the missing ~30% of messages are being published to the topic, though, as a different subscription on the same topic receives them.
Any help with debugging this would be highly appreciated.
Details
This is my Stack:
Node.js
socket.io
express.js
passport.js
MongoDB
react.js
Process:
Anna sends a message in chat (this writes to the database and also publishes to the Pub/Sub topic "messages")
The Node.js Express app runs a subscription and then, based on the message content, emits to whoever else should receive the message.
Bob, who is on the same channel as Anna, would in this case receive the message.
Why do I not emit directly from Anna to Bob? The reason is that I want to have an App Engine service in between the messages so I can potentially add some logic there, and this seemed a good way of doing it.
Subscription
const pubSubClient = require('./client');
const errorHandler = function(error) {
console.error(`ERROR: ${error}`);
throw error;
};
module.exports = (subscriptionName = "messageReceiver", io) => {
  const subscription = pubSubClient.subscription(subscriptionName);

  // Listen for new messages until timeout is hit
  subscription.on("message", (message) => {
    console.log(`Received message ${message.id}:`);
    const parsedMessage = JSON.parse(message.data);
    parsedMessage._id = message.id;
    console.log(parsedMessage);

    if (parsedMessage.to === "admin") {
      io.to(`admin:${parsedMessage.from}`).emit("NewMessage", parsedMessage);
    } else {
      io.to(`${parsedMessage.to}`).emit("NewMessage", parsedMessage);
    }

    message.ack();
  });

  subscription.on('error', errorHandler);
};
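One hedge worth adding to the handler above: `JSON.parse` throws on malformed payloads, and an exception inside the message callback could be one way processing stops without an obvious log line. A minimal guarded parse (the helper name is my own invention):

```javascript
// Hypothetical helper: parse a Pub/Sub message payload defensively so a
// single malformed message cannot take down the whole handler.
function safeParseMessage(data) {
  try {
    return JSON.parse(data);
  } catch (err) {
    console.error(`Could not parse message payload: ${err.message}`);
    return null; // caller decides whether to ack or nack
  }
}

// Usage inside the "message" handler:
// const parsedMessage = safeParseMessage(message.data);
// if (!parsedMessage) { message.ack(); return; }
```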
Server.js
...
const listenForMessages = require("./message_processing/listen");
listenForMessages("messageReceiver", io);
Sample console output
The following console output was generated by running the app locally with two browsers (one in incognito) chatting with each other. It can be seen that only the very last message was actually picked up by the listener (and printed out). Funnily enough, due to the async nature of the calls, the printout of the received message came before the log that the message was sent (i.e. latency surely can't be the problem here).
[0] went into deserialize user
[0] Message 875007020424601 published.
[0] went into deserialize user
[0] Message 875006704834317 published.
[0] went into deserialize user
[0] Message 875006583857400 published.
[0] went into deserialize user
[0] Message 875006520104287 published.
[0] went into deserialize user
[0] Message 875006699141463 published.
[0] went into deserialize user
[0] Received message 875006881073134:
[0] {
[0] from: '5e949f73aeed81beefaf6daa',
[0] to: 'admin',
[0] content: 'i6',
[0] seenByUser: true,
[0] type: 'message',
[0] createdByUser: true,
[0] createdAt: '2020-04-20T07:44:54.280Z',
[0] _id: '875006881073134'
[0] }
[0] Message 875006881073134 published.
In some other cases, earlier messages work and then the listener seems to stop.

There are a couple of things you could do to check what is happening:
Go to the topic page, select the topic to see the details, and look at the publish rate. Ensure that the messages you think are being published are actually being published successfully as far as Pub/Sub is concerned. If publishes are failing, it is possible they could be delivered to one of your subscribers and not the other.
Go to the subscription page, select the subscription to see the details, and look at the "Unacked message count" and "Oldest unacked message age" graphs. If these are nonzero, then that means there are messages not getting delivered to your subscriber. If they are zero, then that means the messages are getting delivered to and acknowledged by your subscriber.
If the number of unacked messages is zero, then the likely cause is a rogue process acting as a subscriber on the subscription and receiving the messages. Perhaps a previous instance of your service is still running unexpectedly? Or possibly another task that should use a different subscription is using the same subscription.
Another thing to be aware of is that subscribers will only receive messages on subscriptions that were created before messages were published. Therefore, if you started up the publisher and published some messages and then created the subscription, say at the time when the subscriber was started up, then the subscriber will not receive those earlier messages.

Related

BotFramework-DirectLine JS - Initial Message Missing

I have a bot built with the MS Bot Framework, hosted on Azure. The bot is built to start the conversation with a welcome message. When I test the bot through the Emulator, or in Azure's "Test in Web Chat", the bot will initiate the conversation as expected with the welcome message.
However, in my chat client using BotFramework-DirectLineJS, it isn't until I send a message that the bot will respond with the Welcome message (along with a response to the message the user just sent).
My expectation is that when I create a new instance of DirectLine and subscribe to its activities, this Welcome message would come through. However, that doesn't seem to be happening.
Am I missing something to get this functionality working?
Given this is working for you in "Test in Web Chat", I'm assuming your if condition isn't the issue, but check whether it is if (member.id === context.activity.recipient.id) { (instead of !==). The default in the template is !==, but that doesn't work for me outside of the Emulator. With === it works both in the Emulator and in other deployed channels.
However, depending on your use cases you may want to have a completely different welcome message for Directline sessions. This is what I do. In my onMembersAdded handler I actually get channelId from the activity via const { channelId, membersAdded } = context.activity;. Then I check that channelId != 'directline' before proceeding.
Instead, I use the onEvent handler to look for and respond to the 'webchat/join' event from Direct Line. That leaves no ambiguity in the welcome response. For a very simple example, it would look something like this:
this.onEvent(async (context, next) => {
  if (context.activity.name && context.activity.name === 'webchat/join') {
    await context.sendActivity('Welcome to the Directline channel bot!');
  }
  await this.userState.saveChanges(context);
  await this.conversationState.saveChanges(context);
  await next();
});
You'll still want to have something in your onMembersAdded for non-directline channel welcome messages if you use this approach.
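The channelId guard described above can be pulled into a small predicate (the function name is my own; the `context.activity` shape follows the Bot Framework snippets above):

```javascript
// Hypothetical predicate: decide whether onMembersAdded should send the
// generic welcome, or leave welcoming to the 'webchat/join' event handler.
function shouldWelcomeInMembersAdded(activity) {
  const { channelId } = activity;
  // Direct Line clients are welcomed from the onEvent handler instead.
  return channelId !== 'directline';
}

// e.g. inside onMembersAdded:
// if (shouldWelcomeInMembersAdded(context.activity)) { /* send welcome */ }
```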

Why am I receiving this error on Azure when using eventhubs?

I started using Azure recently and it has been an overwhelming experience. I started experimenting with Event Hubs and I'm basically following the official tutorials on how to send and receive messages from Event Hubs using Node.js.
Everything worked perfectly so I built a small web app (static frontend app) and I connected it with a node backend, where the communication with eventhubs occurs. So basically my app is built like this:
frontend <----> node server <-----> eventhubs
As you can see, it is very simple. The node server fetches data from Event Hubs and forwards it to the frontend, where the values are shown. It was a cool experience and I was enjoying MS Azure until this error occurred:
azure.eventhub.common.EventHubError: ErrorCodes.ResourceLimitExceeded: Exceeded the maximum number of allowed receivers per partition in a consumer group which is 5. List of connected receivers - nil, nil, nil, nil, nil.
This error is really confusing. I'm using the default consumer group and only one app. I never tried to access this consumer group from another app. It says the limit is 5, and I'm using only one app, so it should be fine, or am I missing something? I can't tell what is happening here.
I wasted too much time googling and researching this, but I didn't get it. In the end, I thought that maybe every time I deploy the app (my frontend and my node server) on Azure, this counts as one consumer, and since I deployed the app more than 5 times, this error is showing up. Am I right, or is this nonsense?
Edit
I'm using websockets as the communication protocol between my app (frontend) and my node server (backend). The node server is using the default consumer group (I didn't change anything); I just followed the official example from Microsoft. I'm basically using the code from the MS docs, which is why I didn't post any code snippet from my node server, and since the error happens in the backend and not the frontend, posting frontend code wouldn't help.
So to wrap up, I'm using websockets to connect the front and back end. It works perfectly for a day or two, and then this error starts to happen. Sometimes I open more than one client (for example, a client in the browser and a client on my smartphone).
I don't think I understand the concept of the consumer group. Is every client a consumer? So if I open my app (the same app) in 5 different tabs in my browser, do I have 5 consumers?
I didn't quite understand the answer below and what is meant by "pooling client"; therefore, I will post code examples here to show what I'm trying to do.
Code snippets
Here is the function I'm using on the server side to communicate with eventhubs and receive/consume a message
async function receiveEventhubMessage(socket, eventHubName, connectionString) {
  const consumerClient = new EventHubConsumerClient(consumerGroup, connectionString, eventHubName);
  const subscription = consumerClient.subscribe({
    processEvents: async (events, context) => {
      for (const event of events) {
        console.log("[ consumer ] Message received : " + event.body);
        io.emit('msg-received', event.body);
      }
    },
    processError: async (err, context) => {
      console.log(`Error : ${err}`);
    }
  });
}
If you notice, I'm passing the event hub name and connection string as arguments in order to be able to change them. Now in the frontend, I have a list of multiple topics, and each topic has its own event hub name, but they share the same Event Hubs namespace.
Here is an example of two event hub names that I have:
{ "EventHubName": "eh-test-command" }
{ "EventHubName": "eh-test-telemetry" }
If the user chooses to send a command (from the frontend, I just have a list of buttons that the user can click to fire an event over websockets) then the CommandEventHubName will be sent from the frontend to the node server. The server will receive that eventhubname and switch the consumerClient in the function I posted above.
Here is the code where I'm calling that:
// io is a socket.io object
io.on('connection', socket => {
  socket.on('onUserChoice', choice => {
    // `choice` is an object sent from the frontend based on what the user chose,
    // e.g. if the user chose command, then choice = { "EventHubName": "eh-test-command", "payload": "whatever" }
    receiveEventhubMessage(socket, choice.EventHubName, choice.EventHubNameSpace)
      .catch(err => console.log(`[ consumerClient ] Error while receiving eventhub messages: ${err}`));
  });
});
The app I'm building will be extended in the future to a real use case in the automotive field, which is why this is important to me. Therefore, I'm trying to figure out how to switch between event hubs without creating a new consumerClient each time the event hub name changes.
I must say that I didn't understand the example with the "pooling client". I am seeking more elaboration or, ideally, a minimal example just to put me on the right track.
Based on the conversation in the issue, it would seem that the root cause of this is that your backend is creating a new EventHubConsumerClient for each request coming from your frontend. Because each client will open a dedicated connection to the service, if you have more than 5 requests for the same Event Hub instance using the same consumer group, you'll exceed the quota.
To get around this, you'll want to consider pooling your EventHubConsumerClient instances so that you're starting with one per Event Hub instance. You can safely use the pooled client to handle a request for your frontend by calling subscribe. This will allow you to share the connection amongst multiple frontend requests.
The key idea is that your consumerClient is not created for every request, but shares an instance among requests. Using your snippet to illustrate the simplest approach, you'd end up hoisting your client creation outside the receive function. It may look something like:
const consumerClient = new EventHubConsumerClient(consumerGroup, connectionString, eventHubName);

async function receiveEventhubMessage(socket, eventHubName, connectionString) {
  const subscription = consumerClient.subscribe({
    processEvents: async (events, context) => {
      for (const event of events) {
        console.log("[ consumer ] Message received : " + event.body);
        io.emit('msg-received', event.body);
      }
    },
    processError: async (err, context) => {
      console.log(`Error : ${err}`);
    }
  });
}
That said, the above may not be adequate for your environment depending on the architecture of the application. If whatever is hosting receiveEventHubMessage is created dynamically for each request, nothing changes. In that case, you'd want to consider something like a singleton or dependency injection to help extend the lifespan.
If you end up having issues scaling to meet your requests, you can consider increasing the number of clients for each Event Hub and/or spreading requests out to different consumer groups.
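The pooling suggestion above can be reduced to a per-event-hub cache of clients. A minimal sketch with a generic factory (the factory argument stands in for `new EventHubConsumerClient(consumerGroup, connectionString, name)`; the helper name is my own):

```javascript
// Minimal client pool: one client per event hub name, created lazily.
// `createClient` is whatever constructs the real client, e.g.
//   name => new EventHubConsumerClient(consumerGroup, connectionString, name)
function makeClientPool(createClient) {
  const pool = new Map();
  return function getClient(eventHubName) {
    if (!pool.has(eventHubName)) {
      pool.set(eventHubName, createClient(eventHubName));
    }
    return pool.get(eventHubName); // every later request shares this instance
  };
}

// Usage sketch:
// const getClient = makeClientPool(name => new EventHubConsumerClient(consumerGroup, connectionString, name));
// const consumerClient = getClient(choice.EventHubName);
```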

Redis punsubscribe not unsubscribing

I have one redis client for pub-sub. I'm using a websocket message handler to dynamically subscribe to a redis channel. The payload of the websocket message contains an ID that I use to create the channel-name. So for example lobby:${lobbyID}:joined.
Subscribing to this channel works fine, messages are received when publishing to that channel.
But the issue I'm having is that I want to unsubscribe from this channel at some point. My assumption from reading the Redis documentation is that I can use punsubscribe to unsubscribe from any channels matching the pattern lobby:*:joined, but messages are still received after trying that.
import redis from 'redis';

const subClient = redis.createClient();

subClient.on('message', (channel, message) => {
  // Received message x on channel y
});

const socketHandlerSubscribe = (lobbyID) => {
  subClient.subscribe(`lobby:${lobbyID}:joined`);
};

const socketHandlerUnsubscribe = () => {
  subClient.punsubscribe('lobby:*:joined'); // true
};
When using the redis-cli, the pattern seems valid with PUBSUB CHANNELS lobby:*:joined. I could solve this by passing a lobby ID to the unsubscribe handler as well, but punsubscribe should be the solution for it.
I also encountered this earlier in a scenario where I looped through an array of user IDs, created a subscription for each on statuses:${userID}, and tried a punsubscribe on statuses:*, without any success.
Am I doing something wrong, or is this a node-redis issue? I'm using redis version 2.8.0.
I noticed that there are two different types of subscriptions: on channels and on patterns. In my question I was subscribing to a channel and unsubscribing on a pattern; these two are not "compatible", so this won't work.
I used nc to debug this, as redis-cli won't allow additional commands when entering subscribed state.
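The channel-vs-pattern distinction can be illustrated with a toy in-memory model (my own sketch, not node-redis or Redis internals): the server keeps channel subscriptions and pattern subscriptions in separate registries, so PUNSUBSCRIBE only ever touches the pattern registry.

```javascript
// Toy model of Redis's two separate subscription registries:
// SUBSCRIBE/UNSUBSCRIBE act on channels, PSUBSCRIBE/PUNSUBSCRIBE act on
// patterns, and the two never interact.
class ToyPubSub {
  constructor() {
    this.channels = new Set(); // filled by subscribe()
    this.patterns = new Set(); // filled by psubscribe()
  }
  subscribe(channel) { this.channels.add(channel); }
  unsubscribe(channel) { this.channels.delete(channel); }
  psubscribe(pattern) { this.patterns.add(pattern); }
  punsubscribe(pattern) { this.patterns.delete(pattern); } // never touches channels
  receives(channel) {
    // Glob-style '*' translated to a regex, as in Redis pattern matching.
    const toRegex = p => new RegExp(
      '^' + p.replace(/[.+?^${}()|[\]\\]/g, '\\$&').replace(/\*/g, '.*') + '$'
    );
    return this.channels.has(channel) ||
      [...this.patterns].some(p => toRegex(p).test(channel));
  }
}

const client = new ToyPubSub();
client.subscribe('lobby:123:joined');   // what the question's code did
client.punsubscribe('lobby:*:joined');  // no effect: the pattern registry is empty
console.log(client.receives('lobby:123:joined')); // still true
client.unsubscribe('lobby:123:joined'); // the matching call removes it
console.log(client.receives('lobby:123:joined')); // false
```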

Consuming error logs with Twilio API

I have developed an application that sends thousands of SMS using Twilio Notify Service.
const bindings = phoneNumbers.map(number =>
  this.createBinding('sms', number)
);

await this.notifyService.notifications.create({
  toBinding: bindings,
  body
});
The code above doesn't give me feedback on whether the messages were received or not, but as I can see in the Twilio dashboard, some messages fail with error codes 30005, 30003, 30006 and 52001.
I'd like to consume all those error logs and unsubscribe the numbers with error codes. I'm thinking of creating a job that runs every night to do that.
I've tried to list all the alerts:
client.monitor.alerts.each(alert => {
  console.log(alert.errorCode);
});
But it seems to fetch only some alerts with error code 52001.
How can I consume all the errors with Twilio API? Is there any other way to be notified of those errors?
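A nightly job like the one described could start by filtering already-fetched alerts down to the codes that should trigger an unsubscribe. A minimal sketch of just that filtering step (the `errorCode` field is taken from the snippet above; the helper name and code list are my own):

```javascript
// Error codes from the question that should lead to unsubscribing a number.
const UNSUBSCRIBE_CODES = new Set([30003, 30005, 30006, 52001]);

// Filter a list of fetched alert objects down to the ones of interest.
// Alerts expose errorCode (possibly as a string, hence Number()).
function alertsToUnsubscribe(alerts) {
  return alerts.filter(a => UNSUBSCRIBE_CODES.has(Number(a.errorCode)));
}

// Usage sketch: collect alerts first, then act on the filtered list.
// client.monitor.alerts.each(alert => collected.push(alert));
// const toUnsubscribe = alertsToUnsubscribe(collected);
```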

How to send one message to all listener queues?

I use RabbitMQ, Node.js (with the socket.io and amqp modules), and ZF2 to develop a chat.
By default, RabbitMQ delivers messages from a queue to consumers round-robin.
Does RabbitMQ have a way to send the same message to every subscribed queue?
For example:
If I create a queue for each connection, it works correctly, but if a user opens 2 tabs in their browser, 2 queues are created. I don't think that's good.
I want one queue per user (but if I do that, the first message goes to the first tab and the second message to the second tab).
My code:
var exchange = connectionAmqp.exchange('chat', {type: 'direct', passive: false, durable: false, autoDelete: false});
console.log(' [*] Client connected');

connectionAmqp.queue('chat' + userId.toString(), {
  passive: false,
  durable: false,
  exclusive: false,
  autoDelete: false
}, function (queue) {
  // Catch new messages from the queue
  queue.bind(exchange, userId.toString());
  queue.subscribe(function (msg) {
    socket.emit('pullMessage', msg); // Emit message to browser
  });
});
From another script I publish a message:
var exchange = connectionAmqp.exchange('chat', {type: 'direct', passive: false, durable: false, autoDelete: false});
var data = {chatId: 70, msg: "Text", time: 1375333200};
exchange.publish('1', data, {contentType: 'application/json'});
Make sure the queues are not exclusive. Then make sure the client connects to the same queue. This can be done by having the client create the queue and specifying the name of that queue. The naming algorithm will make sure that the queue name is unique per client, but for the same client it will produce the same name. Both tabs will read in turn from the same queue, ensuring the round-robin effect that you are looking for.
If you want to send a message to all queues, you can use an exchange of type fanout. See here! It will broadcast a message to each queue bound to it. However, if you are attaching two consumers (callbacks) on one queue, those two consumers (callbacks) will still be fed round-robin wise.
Queues are very lightweight, and RabbitMQ is built to handle many queues, so it's okay to create a queue for each tab. If you are still unsure, this post may be of interest. The author built a simple chat system and stress-tested it, showing that RabbitMQ easily handles thousands of queues and messages per second.
Although it is possible to do this with just one queue per user, it will be far easier with one queue per tab...and when using RabbitMQ there is usually no need to do such optimizations*.
*(of course there are exceptions)
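The fanout-vs-direct behavior described above can be sketched with a toy in-memory model (my own illustration, not the amqp module or RabbitMQ itself):

```javascript
// Toy model of AMQP routing: a direct exchange routes a message only to
// queues bound with a matching routing key, while a fanout exchange copies
// it to every bound queue regardless of key.
class ToyExchange {
  constructor(type) {
    this.type = type;    // 'direct' or 'fanout'
    this.bindings = [];  // { queue, key } pairs
  }
  bind(queue, key = '') { this.bindings.push({ queue, key }); }
  publish(key, msg) {
    for (const b of this.bindings) {
      if (this.type === 'fanout' || b.key === key) b.queue.push(msg);
    }
  }
}

// One queue per tab, both bound to a fanout exchange:
const tab1 = [], tab2 = [];
const fanout = new ToyExchange('fanout');
fanout.bind(tab1);
fanout.bind(tab2);
fanout.publish('1', { chatId: 70, msg: 'Text' });
// Both tabs now hold a copy of the message.
```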
