Message ordering with Smooch - WhatsApp - Node.js

I have a bot, and I use Smooch to run it on WhatsApp.
I use the 'smooch-core' npm package for that.
When I send a lot of messages one after the other, they are sometimes displayed in reverse order in WhatsApp.
Here is the code for sending messages:
for (const dataMessage of data) {
  await sendMessage(dataMessage);
}

function sendMessage(dataMessage) {
  return new Promise((resolve, reject) => {
    smoochClient.appUsers.sendMessage({
      appId: xxxx,
      userId: userId,
      message: dataMessage
    }).then((response) => {
      console.log('response: ' + JSON.stringify(response), 'green');
      resolve();
    }).catch(err => {
      console.log('error: ' + JSON.stringify(err), 'red');
      reject(err);
    });
  });
}
Every dataMessage looks like this:
{
  role: "appMaker",
  type: "text",
  text: txt
}
While looking for a fix, I saw that there is an option to get the message status from a webhook, so I could wait for each message to reach the appropriate status before sending the next one.
But is there something simpler? Is there a parameter that can be added to the message itself to specify its order? Or is there something in the npm package that reports a message's status?

In the doc below, WhatsApp mentions that it does not guarantee message ordering.
https://developers.facebook.com/docs/whatsapp/faq#faq_173242556636267
The same limitation applies to any async messaging platform (and most of them are): backend processing times and other random factors (backend congestion, attachments, message size, etc.) can affect individual message processing/delivery times, and thus the ordering on the user's device.
You can try adding a small, type-dependent delay between sending each message to reduce the frequency of mis-ordered messages (a longer delay for messages with attachments, etc.).
The fool-proof way (with much more complexity) is to queue messages per appUser on your end, only sending the next message after receiving the message:delivery:user webhook event for the previous one, as sketched below.
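For illustration, here is a minimal sketch of that queueing approach, assuming an Express app and the Smooch webhook payload shape (a trigger field and appUser._id); the queue helpers and route path are made up for the example:

// Minimal per-user queue (illustrative): buffer messages per appUser and
// release the next one only after the previous one's delivery webhook fires.
const queues = new Map(); // userId -> { messages: [], busy: boolean }

function enqueue(userId, dataMessage) {
  let q = queues.get(userId);
  if (!q) {
    q = { messages: [], busy: false };
    queues.set(userId, q);
  }
  q.messages.push(dataMessage);
  if (!q.busy) sendNext(userId);
}

async function sendNext(userId) {
  const q = queues.get(userId);
  if (!q || q.messages.length === 0) {
    if (q) q.busy = false;
    return;
  }
  q.busy = true;
  await sendMessage(q.messages.shift()); // your existing sendMessage()
  // Deliberately do NOT send the next message here; wait for the webhook.
}

// Hypothetical webhook endpoint, registered for the message:delivery:user trigger.
app.post('/smooch-webhook', (req, res) => {
  if (req.body.trigger === 'message:delivery:user') {
    sendNext(req.body.appUser._id); // previous message delivered; release the next
  }
  res.sendStatus(200);
});

A production version would also need a timeout or retry path for the case where a delivery event never arrives, or the queue for that user would stall.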

Related

Node.js/NodeMailer/Express/Outlook smtp host - Concurrent connections limit exceeded

Hope you are well. I am in the middle of working on an application that uses Express and Nodemailer. My application sends emails successfully, but the issue is that I cannot send the emails off one at a time in the manner I'd like. I do not want to put an array of addresses into the 'to' field; I'd like each e-mail sent out individually.
I have succeeded in this, but there is a problem: Microsoft imposes a limit on the number of concurrent connections an application may hold at a time (link with an explanation at the end of this post).
I have tried to get around this by a number of expedients, not all of which I'll trouble you with. Most of them involve setInterval() and either map or forEach. I do not intend to send all that many e-mails, certainly not enough to flirt with any kind of limit, and I do not even want any HTML in my emails, just plain text. Yet after my application sends 2-3 e-mails, I encounter their error message (response code 432).
Below you will see my code.
As you can see, I'm at the point where I've even been willing to try adding my incrementer into setInterval, as if changing the interval the e-mails fire at will actually help.
Right now this sends out some e-mails, but I eventually hit that block, usually around 2-3 e-mails in. It is strangely inconsistent, however.
This is the first relevant section of my code.
db.query(sqlEmailGetQuery, param)
  .then(result => {
    handleEmail(result, response);
  }).catch(error => {
    console.error(error);
    response.status(500).json({ error: 'an unexpected error occurred.' });
  });
}); // closes the enclosing route handler
This is the second section of it.
function handleEmail(result, response) {
  const email = result.rows[0];
  let i = 0;
  email.json_agg.map(contact => {
    const msg = {
      from: process.env.EMAIL_USER,
      to: email.json_agg[i].email,
      subject: email.subject,
      text: email.emailBody + ' ' + i
    };
    i++;
    return new Promise((resolve, reject) => {
      setInterval(() => {
        transporter.sendMail(msg, function (error, info) {
          if (error) {
            return console.log(error);
          } else {
            response.status(200).json(msg);
            transporter.close();
          }
        });
      }, 5000 + i);
    });
  });
}
I originally tried a simple for loop over the contacts in email.json_agg, but as soon as I hit the connection limit this stopped working.
I have come onto Stack Overflow and reviewed similar questions. For example, this question was close, but that poster had over 8000 connections, and if you read the Microsoft rule I linked below, the connection limit was introduced after he made that post.
I have tried setInterval with forEach and an await on each promise, but as this was not the source of the issue, it did not work either.
I have tried code similar to what you see above with the interval set as long as 20 seconds.
As my understanding of the issue has grown, I can see that I either have to wait long enough between e-mails without the connection timing out, or break off the connection after every e-mail so that the next one gets a fresh connection. It seems to me that if the latter were possible, though, everyone would be doing it and violating Microsoft's policy.
Is there a way for me to get around this issue and send, say, 3 emails every 3 seconds, then wait and send another three? The volume of e-mails is low enough that I can wait ten seconds if necessary. Is there a different SMTP host that is less restrictive?
Please let me know your thoughts. My transport config is below if that helps.
const transporter = nodemailer.createTransport({
  pool: true,
  host: 'smtp-mail.outlook.com',
  secureConnection: false,
  maxConnections: 1,
  port: 587,
  secure: false,
  tls: { ciphers: 'SSLv3' },
  auth: {
    user: process.env.EMAIL_USER,
    pass: process.env.EMAIL_PASS
  }
});
https://learn.microsoft.com/en-us/exchange/troubleshoot/send-emails/smtp-submission-improvements#new-throttling-limit-for-concurrent-connections-that-submitmessages
First off, the most efficient way to send the same email to lots of users is to send it to yourself and BCC all the recipients. This will let you send one email to the SMTP server and then it will distribute that email to all the recipients with no recipient being able to see the email address of any individual recipient.
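For reference, a minimal sketch of that approach with the transporter from the question (assuming contacts is the recipient array, e.g. email.json_agg):

// One SMTP submission covers all recipients; addresses stay private via BCC.
const msg = {
  from: process.env.EMAIL_USER,
  to: process.env.EMAIL_USER,        // send the message to yourself
  bcc: contacts.map(c => c.email),   // all recipients, hidden from one another
  subject: email.subject,
  text: email.emailBody
};
await transporter.sendMail(msg);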
Second, you cannot use timers to reliably control how many requests are running at once: a timer is not connected to how long a given request takes to complete, so it is just a guess at the average request time; it may work in some conditions and fail when things respond more slowly. Instead, you have to use the completion of one request to know it's OK to send the next.
If you still want to send separate emails and send your emails serially, one after the other to avoid having too many in process at a time, you can do something like this:
async function handleEmail(result) {
  const email = result.rows[0];
  for (let [i, contact] of email.json_agg.entries()) {
    const msg = {
      from: process.env.EMAIL_USER,
      to: contact.email,
      subject: email.subject,
      text: email.emailBody + ' ' + i
    };
    await transporter.sendMail(msg);
  }
}
If you don't pass transporter.sendMail() a callback, it returns a promise that you can use directly; there is no need to wrap it in your own promise.
Note that this code does not send a response to your HTTP request, as that should be the responsibility of the calling code. Your previous code tried to send a response for each email, when you can only send one response per request, and it sent no response at all if there was an error.
This code relies on the promise it returns to communicate to the caller whether it succeeded or encountered an error, and the caller can then decide what to do.
You also probably shouldn't pass result to this function; just pass email instead, since there's no reason for this code to know it has to reach into a database query result to get the value it needs. That should be the caller's responsibility, and the function then becomes much more generic.
If, instead of sending one email at a time, you want to send N emails at a time, you can use something like mapConcurrent(), which iterates an array while keeping at most N requests in flight, as sketched below.
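For reference, a minimal sketch of such a helper (my own illustration, not necessarily the implementation referred to above):

// Maps fn over items while keeping at most `limit` calls in flight at once.
async function mapConcurrent(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // claim the next index (safe: JS is single-threaded)
      results[i] = await fn(items[i], i);
    }
  }
  // Launch `limit` workers; each pulls items until the array is exhausted.
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}

// Usage, e.g. at most 3 emails in flight:
// await mapConcurrent(email.json_agg, 3, (contact, i) =>
//   transporter.sendMail({
//     from: process.env.EMAIL_USER,
//     to: contact.email,
//     subject: email.subject,
//     text: email.emailBody + ' ' + i
//   }));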

How to get a thread reply's content from reaction_added event?

I'm building a Slack FAQ app that uses message reactions to gather the best answers to questions. My plan is to save any Slack message with positive reactions by using the reaction_added event to get the TS attribute and then the conversations.history method to get the message's content.
This works well for parent-level (non-threaded) messages, but it doesn't work for reply messages inside threads: conversations.history returns an unrelated message when given the TS of a thread reply.
I've checked the Slack API conversations.history documentation to see if replies are handled in any special way, and I reviewed the conversations.replies method, but the reaction_added event only provides a TS id for the message, with no thread_ts value that could be passed to conversations.replies.
I'm using the Bolt framework. Here's a snippet of the code that uses the reaction_added event with the conversations.history method to get the message content:
app.event('reaction_added', async ({ event, context, say }) => {
  try {
    const result = await app.client.conversations.history({
      token: process.env.SLACK_USER_TOKEN,
      channel: event.item.channel,
      latest: event.item.ts,
      limit: 1,
      inclusive: true
    });
    save(`${result.messages[0].text}`);
  }
  catch (error) {
    console.error(error);
  }
});
Expected result:
The message contents of the thread reply that the reaction was posted for.
Actual result:
The message contents of the latest message in the Slack channel.
I'm not sure whether this changed recently or I misread the documentation, but the conversations.replies API endpoint does not require a thread_ts value containing the parent thread's timestamp in order to retrieve a thread reply.
The event.item.ts value provided by the reaction_added event is sufficient to retrieve the message contents of the reply where the reaction was added.
So, to get the message contents of a message where a reaction was added, the code from my original question can be updated to:
app.event('reaction_added', async ({ event, context, say }) => {
  try {
    const result = await app.client.conversations.replies({
      token: process.env.SLACK_USER_TOKEN,
      channel: event.item.channel,
      ts: event.item.ts
    });
    save(`${result.messages[0].text}`);
  }
  catch (error) {
    console.error(error);
  }
});

SQS queue - send message without MessageGroupId?

I am trying to send a message without a MessageGroupId, because I basically don't need one. I have a few microservices that should be able to read from the queue at any time, and if I use the same group ID for everything, only one service can read the messages, one by one.
Generating a UUID as the MessageGroupId sounds like bad practice.
Is there a way to disable MessageGroupId, or to send a default value that won't act as one?
const params = {
  MessageDeduplicationId: `${uuidv1()}`,
  MessageBody: JSON.stringify({
    name: 'Ben',
    lastName: 'Beri',
  }),
  QueueUrl: `https://sqs.us-east-1.amazonaws.com/${accountId}/${queueName}`,
};

sqs.sendMessage(params, (err, data) => {
  if (err) {
    console.log('error! ' + err.message);
    return;
  }
  console.log(data.MessageId);
});
error! The request must contain the parameter MessageGroupId.
You can't insert a message into a FIFO queue without a MessageGroupId. If you want messages to be processed sequentially, use the same MessageGroupId for all of them; otherwise, use a unique value for each message, as sketched below.
What implications are you expecting from using a UUID as the MessageGroupId?
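For illustration, a sketch of the unique-group-ID variant, reusing the uuidv1 helper from the question: with a distinct MessageGroupId per message, the FIFO queue imposes no ordering across messages, so several consumers can process them in parallel.

const params = {
  MessageGroupId: `${uuidv1()}`,         // unique per message: no cross-message
                                         // ordering, so consumers work in parallel
  MessageDeduplicationId: `${uuidv1()}`,
  MessageBody: JSON.stringify({
    name: 'Ben',
    lastName: 'Beri',
  }),
  QueueUrl: `https://sqs.us-east-1.amazonaws.com/${accountId}/${queueName}`,
};

sqs.sendMessage(params, (err, data) => {
  if (err) {
    console.log('error! ' + err.message);
    return;
  }
  console.log(data.MessageId);
});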

How to properly throttle message sending with nodemailer SES transport?

The nodemailer documentation says:
If you use rate or connection limiting then you can also use helper methods to detect if the sending queue is full or not. This would help to avoid buffering up too many messages.
It also provides an example:
let transporter = nodemailer.createTransport({
  SES: new aws.SES({
    apiVersion: '2010-12-01'
  }),
  sendingRate: 1 // max 1 message/second
});

// Push next messages to Nodemailer
transporter.on('idle', () => {
  while (transporter.isIdle()) {
    transporter.sendMail(...);
  }
});
Unfortunately, this is rather cryptic to me. Does sendingRate: 1 only provide a helper, or does it actually handle the throttling?
Also, this piece of code looks like it would loop infinitely as soon as sendMail(...) is executed. Am I missing something here?
Is there any example or recommendation on how to use this feature?
Thanks a lot!
From the documentation:
SES can tolerate short spikes but you can’t really flush all your emails at once and expect these to be delivered. To overcome this you can set a rate limiting value and let Nodemailer handle everything – if too many messages are being delivered then Nodemailer buffers these until there is an opportunity to do the actual delivery.
I don't think listening for the idle event is mandatory; it's only needed if you want to avoid Nodemailer buffering messages. I have an SES send rate of 15 messages per second, regularly throw 250 emails at once at Nodemailer, and don't hit any throttling issues.
You are right that the while loop appears to be there only for testing the sending rate. Once you remove it, the code from the documentation should work fine:
transporter.on('idle', () => {
  transporter.sendMail(...);
});
You don't need the while loop or the idle handler. Just set sendingRate and then use sendMail as normal.
transporter = nodemailer.createTransport({
  SES: { ses, aws },
  sendingRate: 14,
});

const params = {
  from: 'EMAIL',
  to: 'EMAIL',
  subject: 'Message',
  html: 'I hope this <b>message</b> gets sent!',
  text: 'I hope this message gets sent!',
  // attachments: [{ filename: 'card.pdf', content: data, contentType: 'application/pdf' }],
};

transporter.sendMail(params, (err, info) => {
  if (err) {
    console.log(JSON.stringify(err));
    return; // info is undefined on error
  }
  console.log(info.envelope);
  console.log(info.messageId);
});
An important thing to note here: Nodemailer waits until the next second to continue with the next batch of throttled emails, and so on. So if you run a script that exits immediately after calling the last sendMail(), the throttled emails will never be sent. Make sure the process stays alive until all emails are sent, either by listening for the idle event, by awaiting the returned promises, or with setTimeout; see the sketch below.
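A minimal sketch of that, assuming an array messages of message objects like params above: sendMail() returns a promise when called without a callback, so awaiting them all keeps a one-shot script alive until the throttled queue has drained.

async function sendAll(messages) {
  // Nodemailer throttles the actual sends to sendingRate; each promise
  // resolves only once its message has gone out, so the process stays alive.
  const results = await Promise.all(messages.map(m => transporter.sendMail(m)));
  console.log(`sent ${results.length} emails`);
}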

Using publisher confirms with RabbitMQ, in which cases publisher will be notified about success/failure?

Quoting the book, RabbitMQ in Depth:
A Basic.Ack request is sent to a publisher when a message that it has published has been directly consumed by consumer applications on all queues it was routed to, or when the message was enqueued and persisted, if requested.
I'm confused by "has been directly consumed". Does it mean the publisher is informed once the consumer acks the message to the broker (i.e. the consumer processed it successfully), or is the publisher notified as soon as the consumer merely receives the message from the queue?
And regarding "or that the message was enqueued and persisted if requested": is this a conjunction, or is the publisher informed when either of those happens? (In the latter case the publisher would be notified twice.)
Using Node.js and amqplib, I wanted to check what actually happens:
// consumer.js
amqp.connect(...)
  .then(connection => connection.createChannel())
  .then(() => { /* assert exchange here */ })
  .then(() => { /* assert queue here */ })
  .then(() => { /* bind queue and exchange here */ })
  .then(() => {
    channel.consume(QUEUE, (message) => {
      console.log('Raw RabbitMQ message received', message)
      // Simulate some job to do
      setTimeout(() => {
        channel.ack(message, false)
      }, 5000)
    }, { noAck: false })
  })
// publisher.js
amqp.connect(...)
  .then(connection => connection.createConfirmChannel())
  .then(() => { /* assert exchange here */ })
  .then(() => {
    channel.publish(exchange, routingKey, new Buffer(...), {}, (err, ok) => {
      if (err) {
        console.log('Error from handling confirmation on publisher side', err)
      } else {
        console.log('From handling confirmation on publisher side', ok)
      }
    })
  })
Running the example, I can see the following logs:
From handling confirmation on publisher side undefined
Raw RabbitMQ message received
Time to ack the message
As far as I can see, at least from this log, the publisher is notified only when the message is enqueued? (So the consumer acking the message does not influence the publisher in any way.)
Quoting further:
If a message cannot be routed, the broker will send a Basic.Nack RPC request indicating the failure. It is then up to the publisher to decide what to do with the message.
Changing the above example so that only the routing key is different (one that should not route anywhere, since no bindings match it), the logs show only the following:
From handling confirmation on publisher side undefined
Now I'm even more confused: what exactly is the publisher notified about here? I would expect it to receive an error like "Can't route anywhere", which would align with the quote above. But as you can see, err is undefined; and as a side question, even though amqplib's official docs use (err, ok), in no case do I ever see either of them defined. The output here is the same as in the example above, so how can one tell a routable message apart from an un-routable one?
So what I'm after here: when exactly is the publisher notified about what is happening with the message? Is there a concrete example of when one would use publisher confirms? From the logging above, I would conclude that they are nice to have when you want to be 100% sure that a message was enqueued.
After searching again and again, I found this:
http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms/
The basic rules are as follows:
- An un-routable mandatory or immediate message is confirmed right after the basic.return.
- A transient message is confirmed the moment it is enqueued.
- A persistent message is confirmed when it is persisted to disk or when it is consumed on every queue.
If more than one of these conditions is met, only the first causes a confirm to be sent. Every published message will be confirmed sooner or later, and no message will be confirmed more than once.
By default, publishers don't know anything about consumers.
Publisher confirms are used to check whether the message reached the broker, not whether the message has been enqueued.
You can use the mandatory flag to be sure the message has been routed (a sketch follows the quote below).
See https://www.rabbitmq.com/reliability.html:
To ensure messages are routed to a single known queue, the producer can just declare a destination queue and publish directly to it. If messages may be routed in more complex ways but the producer still needs to know if they reached at least one queue, it can set the mandatory flag on a basic.publish, ensuring that a basic.return (containing a reply code and some textual explanation) will be sent back to the client if no queues were appropriately bound.
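A sketch of what that looks like with amqplib, building on the publisher example above (exchange and routingKey as before):

// Listen for basic.return: fired when the broker cannot route a mandatory
// message to any queue.
channel.on('return', (msg) => {
  console.log('Returned as un-routable:', msg.fields.replyText);
});

channel.publish(exchange, routingKey, Buffer.from('payload'), { mandatory: true },
  (err, ok) => {
    // The confirm only means the broker took responsibility for the message;
    // on its own it does not prove the message reached a queue. Combine it
    // with the 'return' listener above to detect routing failures.
    if (err) {
      console.log('Message nacked by broker', err);
    } else {
      console.log('Message confirmed by broker');
    }
  });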
I'm not entirely sure about the notification on the ack/nack question, but check out the BunnyBus Node library for a simpler API and RabbitMQ management :)
https://github.com/xogroup/bunnybus
const BunnyBus = require('bunnybus');
const bunnyBus = new BunnyBus({
  user: 'your-user',
  vhost: 'your-vhost', // cloudamqp defaults vhost to the username
  password: 'your-password',
  server: 'your.server.com'
});

const handler = {
  'test.event': (message, ack) => {
    // Do your work here.
    // Acknowledge the message off of the bus.
    return ack();
  }
};

// Create exchange and queue if they do not already exist and then auto connect.
return bunnyBus.subscribe('test', handler)
  .then(() => {
    return bunnyBus.publish({ event: 'test.event', body: 'here\'s the thing.' });
  })
  .catch(console.log);
