Node.js/Nodemailer/Express/Outlook SMTP host - Concurrent connections limit exceeded

Hope you are well. I am in the middle of working on an application that uses Express and Nodemailer. My application sends emails successfully, but the issue is that I cannot send the emails off one at a time in the manner I'd like. I do not want to put an array of addresses into the 'to' field; I'd like each e-mail to go out individually.
I have succeeded in this, however there is an issue. It seems Microsoft has imposed a limit that prevents applications from holding more than a certain number of connections at a time (link with an explanation at the end of this post).
I have tried to get around this by a number of expedients, not all of which I'll trouble you with. The majority of them involve setInterval() and either map or forEach. I do not intend to send very many e-mails - certainly not enough to flirt with any kind of sending limit. I do not even want any HTML in my emails, just plain text. When my application sends out 2 or 3 e-mails, however, I encounter Microsoft's error message (response code 432).
Below you will see my code.
As you can see, I'm at the point where I've even been willing to try adding my incrementer into the setInterval delay, as if changing the interval the e-mails fire at will actually help.
Right now, this is sending out some e-mails, but I eventually hit that block, usually around the 2nd or 3rd e-mail. It is strangely inconsistent, however.
This is the first relevant section of my code.
db.query(sqlEmailGetQuery, param)
    .then(result => {
      handleEmail(result, response);
    })
    .catch(error => {
      console.error(error);
      response.status(500).json({ error: 'an unexpected error occurred.' });
    });
}); // closes the enclosing route handler
This is the second section of it.
function handleEmail(result, response) {
  const email = result.rows[0];
  let i = 0;
  email.json_agg.map(contact => {
    const msg = {
      from: process.env.EMAIL_USER,
      to: email.json_agg[i].email,
      subject: email.subject,
      text: email.emailBody + ' ' + i
    };
    i++;
    return new Promise((resolve, reject) => {
      setInterval(() => {
        transporter.sendMail(msg, function (error, info) {
          if (error) {
            return console.log(error);
          } else {
            response.status(200).json(msg);
            transporter.close();
          }
        });
      }, 5000 + i);
    });
  });
}
I originally tried a simple for loop over the contacts in email.json_agg, but as soon as I hit the connection limit this stopped working.
I have come onto Stack Overflow and reviewed questions that are similar in nature. For example, this question was close to being similar, but that poster had over 8,000 connections, and if you read the Microsoft rule I linked below, they implemented the connection rule after he made that post.
I have tried setInterval with forEach and an await on each promise, but as this was not the source of the issue, it did not work either.
I have tried code similar to what you see above, except with the interval set as long as 20 seconds.
As my understanding of the issue has grown, I can see that I either have to figure out a way to wait long enough between e-mails without the connection timing out, or break off the connection every time I send an e-mail so that the next e-mail gets a fresh connection. It seems to me that if the latter were possible, though, everyone would be doing it and violating Microsoft's policy.
Is there a way for me to get around this issue and send, say, 3 emails every 3 seconds, then wait and send another three? The volume of e-mails is such that I can wait ten seconds if necessary. Is there a different SMTP host that is less restrictive?
Please let me know your thoughts. My transport config is below if that helps.
const transporter = nodemailer.createTransport({
  pool: true,
  host: 'smtp-mail.outlook.com',
  secureConnection: false,
  maxConnections: 1,
  port: 587,
  secure: false,
  tls: { ciphers: 'SSLv3' },
  auth: {
    user: process.env.EMAIL_USER,
    pass: process.env.EMAIL_PASS
  }
});
https://learn.microsoft.com/en-us/exchange/troubleshoot/send-emails/smtp-submission-improvements#new-throttling-limit-for-concurrent-connections-that-submitmessages

First off, the most efficient way to send the same email to lots of users is to send it to yourself and BCC all the recipients. This lets you send one email to the SMTP server, which then distributes it to all the recipients without any recipient being able to see the address of any other recipient.
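As a minimal sketch of that approach, reusing the email/json_agg shape from the question:

// Inside an async function, reusing the question's data shape:
const msg = {
  from: process.env.EMAIL_USER,
  to: process.env.EMAIL_USER,                        // send it to yourself
  bcc: email.json_agg.map(contact => contact.email), // recipients stay hidden from each other
  subject: email.subject,
  text: email.emailBody
};
await transporter.sendMail(msg);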
Second, you cannot use timers to reliably control how many requests are running at once, because timers are not connected to how long a given request takes to complete. A timer is just a guess at the average time for a request; it may work in some conditions and fail when things are responding more slowly. Instead, you have to use the completion of one request to know it's OK to send the next.
If you still want to send separate emails, you can send them serially, one after the other, to avoid having too many in flight at a time, like this:
async function handleEmail(result) {
  const email = result.rows[0];
  for (let [i, contact] of email.json_agg.entries()) {
    const msg = {
      from: process.env.EMAIL_USER,
      to: contact.email,
      subject: email.subject,
      text: email.emailBody + ' ' + i
    };
    await transporter.sendMail(msg);
  }
}
If you don't pass transporter.sendMail() a callback, it returns a promise that you can use directly - no need to wrap it in your own promise.
Note, this code does not send a response to your HTTP request, as that should be the responsibility of the calling code. Your previous code was trying to send a response for each of the emails when you can only send one response, and it was not sending any response at all if there was an error.
This code instead relies on the promise returned to the caller to communicate whether it was successful or encountered an error, and the caller can then decide what to do in each situation.
You also probably shouldn't pass result to this function; instead just pass email, since there's no reason for this code to know it has to reach into a database query result to get the value it needs. That should be the responsibility of the caller. Then this function is much more generic.
If, instead of sending one email at a time, you want to send N emails at a time, you can use something like mapConcurrent() to do that. It iterates an array and keeps a max of N requests in flight at the same time.
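Here is a rough sketch of such a helper (my own minimal version for illustration; the actual mapConcurrent() utility may differ): it runs fn over items with at most maxConcurrent calls in flight at once.

function mapConcurrent(items, maxConcurrent, fn) {
  let index = 0;
  let inFlight = 0;
  const results = new Array(items.length);
  return new Promise((resolve, reject) => {
    function runNext() {
      if (index === items.length && inFlight === 0) {
        return resolve(results);          // everything has completed
      }
      while (inFlight < maxConcurrent && index < items.length) {
        const i = index++;
        inFlight++;
        fn(items[i], i).then(val => {
          results[i] = val;
          inFlight--;
          runNext();                      // start the next item, or resolve
        }, reject);
      }
    }
    runNext();
  });
}

// e.g. send at most 3 emails at a time:
// await mapConcurrent(email.json_agg, 3, (contact, i) =>
//   transporter.sendMail({
//     from: process.env.EMAIL_USER,
//     to: contact.email,
//     subject: email.subject,
//     text: email.emailBody + ' ' + i
//   })
// );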

Related

Is there a way to request something from the backend and continuously search and refresh the database until it finds a match, then send the response?

I am struggling to make my own matchmaking in a multiplayer game of two players. I have a button that says "play", and I want, when the user presses it, to change his schema property inQueue to true in the database.
After that I want to search the database to check who else is online and in queue (and meets the requirement range for ELO, e.g. player1: 1200 ELO, player2: 1288). But I don't want to send multiple requests until it finds a match.
I was wondering if there is a way to send the request one time and, when MongoDB finds a match, then return the response.
(I am using the MERN stack, with the help of sockets too.)
The following middleware attempts to find a match on the database. If that is unsuccessful, another attempt is scheduled for a second later. A JSON response is sent only after a successful attempt, but spaces are sent while waiting, in the hope that this prevents a browser timeout.
.get("/path", function(req, res) {
res.type("json");
async function attempt() {
var match = await User.find(...);
if (match)
res.end(JSON.stringify({match: match.username}));
else {
res.write(" ");
setTimeout(attempt, 1000);
}
}
attempt();
});
If the backend times out requests after, say, 30 seconds, this cannot be used. In this case, consider repeating the requests in the frontend instead:
async function attempt() {
  var response = await axios.get("/path"); // returns 0 or more matches
  if (response.data.length > 0) {
    // fill the results of the match into the HTML page
  } else {
    setTimeout(attempt, 1000);
  }
}
attempt();
But I think this is only a poor substitute for the real solution based on web sockets as suggested by balexandre.

Slow promise chain

I'm fairly new to node.js and Promises in general, although I think I get the gist of how they are supposed to work (I've been forced to use ES5 for a looooong time). I also have little in-depth knowledge of Cloud Functions (GCF), though again I do understand at a high level how they work.
I'm using GCF for part of my app, which is meant to receive an HTTP request, translate it and send it onward to another endpoint. I need to use promises, as there are occasions when the originating HTTP request has multiple 'messages' sent at once.
My function works in regards to making sure messages are sent in the correct order, but the subsequent messages are sent onward very slowly (the logs suggest around a 20 second difference in when they are actually sent).
I'm not entirely sure why that is happening - I would've expected it to be less than a couple of seconds' difference. Maybe it's something to do with GCF and not my code? Or maybe it is my code? Either way, I'd like to know if there's something I can do to speed it up, especially since it's supposed to send onward to a user in Google Chat.
(Before anyone comments on why it's request.body.body, I don't have control over the format of the incoming request)
exports.onward = (request, response) => {
  response.status(200).send();
  let bodyArr = request.body.body;
  // Chain promises over multiple messages sent at once, stored in bodyArr
  bodyArr.reduce(async (previous, next) => {
    await previous;
    return process(next);
  }, Promise.resolve());
};
function process(body) {
  return new Promise((resolve, reject) => {
    // Obtain JWT from Google
    let jwtClient = new google.auth.JWT(
      privatekey.client_email,
      null,
      privatekey.private_key,
      ['https://www.googleapis.com/auth/chat.bot']
    );
    // Authorise JWT, reject promise or continue as appropriate
    jwtClient.authorize((err, tokens) => {
      if (err) {
        console.error('Google OAuth failure ' + err);
        reject();
      } else {
        let payload = copyPayload();
        setValues(payload, body); // other function which sets payload values
        axios.post(url, payload, {
          headers: {
            'Content-Type': 'application/json',
            'Accept': 'application/json',
            'Authorization': 'Bearer ' + tokens.access_token
          }
        })
        .then(response => {
          // HTTP 2xx response received
          resolve();
        })
        .catch(error => {
          // Something bad happened
          reject();
        });
      }
    });
  });
}
EDIT: After testing the same thing again, the delay has gone down a bit, to around 3-6 seconds between promises. Given that the code didn't change, I suspect it's something to do with GCF?
By doing
exports.onward = (request, response) => {
  response.status(200).send();
  let bodyArr = request.body.body;
  // Any work
};
you are incorrectly managing the life cycle of your Cloud Function: as a matter of fact, by doing response.status(200).send(); you are indicating to the Cloud Functions platform that your function successfully reached its terminating condition or state and that, consequently, the platform can shut it down. See here in the doc for more explanation.
Since you send this signal at the beginning of your Cloud Function, the platform may shut it down before the asynchronous work is finished.
In addition, you are potentially generating some "erratic" behavior of the Cloud Function that makes it difficult to debug. Sometimes your Cloud Function is terminated before the asynchronous work is completed, for the reason explained above. But at other times the platform does not terminate the function immediately, and the asynchronous work has the chance to complete before the function is terminated.
So, you should send the response after all the work is completed.
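A minimal sketch of that fix, reusing the process() function above (the error handling shown is illustrative):

exports.onward = async (request, response) => {
  let bodyArr = request.body.body;
  try {
    // Process the messages serially, then signal completion
    for (const body of bodyArr) {
      await process(body);
    }
    response.status(200).send();
  } catch (err) {
    console.error(err);
    response.status(500).send();
  }
};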
If you want to immediately acknowledge the user that the work has been started, without waiting for this work to be completed, you should use Pub/Sub: in your main Cloud Function, delegate the work to a Pub/Sub-triggered Cloud Function and then return the response.
If you want to acknowledge the user when the work is completed (i.e. when the Pub/Sub triggered Cloud Function is completed), there are several options: Send a notification, an email or write to a Firestore document that you watch from your app.
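A rough sketch of that Pub/Sub hand-off, assuming the @google-cloud/pubsub client and an illustrative topic name (neither is from the original answer):

const { PubSub } = require('@google-cloud/pubsub');
const pubsub = new PubSub();

// HTTP function: publish the work, then acknowledge immediately
exports.onward = async (request, response) => {
  await pubsub.topic('onward-work').publishMessage({ json: request.body.body });
  response.status(200).send();
};

// Deployed separately with a Pub/Sub trigger on 'onward-work'
exports.onwardWorker = async (message) => {
  const bodyArr = JSON.parse(Buffer.from(message.data, 'base64').toString());
  for (const body of bodyArr) {
    await process(body); // the promise-returning function from above
  }
};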

Message order with Smooch - WhatsApp

I have a bot, and I use Smooch to run the bot on WhatsApp.
I use the 'smooch-core' npm package for that.
When I send a lot of messages one after the other, the messages are sometimes displayed in reverse order in WhatsApp.
Here is the code for sending messages:
for (const dataMessage of data) {
  await sendMessage(dataMessage);
}
function sendMessage(dataMessage) {
  return new Promise((resolve, reject) => {
    smoochClient.appUsers.sendMessage({
      appId: xxxx,
      userId: userId,
      message: dataMessage
    }).then((response) => {
      console.log('response: ' + JSON.stringify(response), 'green');
      resolve();
    }).catch(err => {
      console.log('error: ' + JSON.stringify(err), 'red');
      reject(err);
    });
  });
}
Each dataMessage looks like this:
{
  role: "appMaker",
  type: "text",
  text: txt
}
I tried to see how I could arrange it, and I saw that there is an option to get the message status from a webhook, and then wait for each message to reach the appropriate status before sending the following message.
But I would like to know: is there something simpler? Is there a parameter I can add to the message itself to say what its order is? Or is there something in the npm package that gives information about the message and its status?
In the doc below, WhatsApp mentions that they do not guarantee message ordering.
https://developers.facebook.com/docs/whatsapp/faq#faq_173242556636267
The same limitation applies to any async messaging platform (and most of them are async), so backend processing times and other random factors can impact individual message processing/delivery times, and thus ordering on the user's device (e.g. backend congestion, attachments, message size, etc.).
You can try to add a small [type-dependent] delay between sending each message to reduce the frequency of mis-ordered messages (longer delay for messages with attachments, etc.).
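For example, a small sketch of that idea on top of the question's loop (the delay values are illustrative guesses, not numbers from Smooch's docs):

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const delayFor = (msg) => (msg.type === 'text' ? 500 : 2000); // longer for attachments

for (const dataMessage of data) {
  await sendMessage(dataMessage);
  await sleep(delayFor(dataMessage)); // pause before releasing the next message
}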
The fool-proof way (with much more complexity) is to queue messages by appUser on your end, only sending the next message after receiving the message:delivery:user webhook event for the previous message.
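A rough sketch of that queueing approach (the structure and names are mine, not Smooch's API; only the message:delivery:user event name comes from the answer above):

const queues = new Map(); // appUser id -> array of pending messages

function enqueue(userId, dataMessage) {
  const q = queues.get(userId) || [];
  q.push(dataMessage);
  queues.set(userId, q);
  if (q.length === 1) sendMessage(q[0]); // nothing in flight, send now
}

// Called from the webhook handler for 'message:delivery:user' events
// (error handling and retries omitted for brevity):
function onDelivered(userId) {
  const q = queues.get(userId) || [];
  q.shift();                             // drop the delivered message
  if (q.length > 0) sendMessage(q[0]);   // release the next one
}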

How to properly throttle message sending with nodemailer SES transport?

The nodemailer documentation says:
If you use rate or connection limiting then you can also use helper methods to detect if the sending queue is full or not. This would help to avoid buffering up too many messages.
It also provides an example:
let transporter = nodemailer.createTransport({
  SES: new aws.SES({
    apiVersion: '2010-12-01'
  }),
  sendingRate: 1 // max 1 messages/second
});

// Push next messages to Nodemailer
transporter.on('idle', () => {
  while (transporter.isIdle()) {
    transporter.sendMail(...);
  }
});
Unfortunately, this is rather cryptic to me. Does sendingRate: 1 only provide a helper, or does it handle the throttling itself?
Also, this piece of code looks to me like it would loop infinitely as soon as sendMail(...) is executed. Am I missing something here?
Is there any example or recommendation on how to use this feature?
Thanks a lot!
From the documentation:
SES can tolerate short spikes but you can’t really flush all your emails at once and expect these to be delivered. To overcome this you can set a rate limiting value and let Nodemailer handle everything – if too many messages are being delivered then Nodemailer buffers these until there is an opportunity to do the actual delivery.
I don't think listening for the idle event is mandatory; it's only needed if you want to avoid Nodemailer buffering messages. I have an SES send rate of 15 messages per second and regularly throw 250 emails at once at Nodemailer without hitting any throttling issues.
You are right, the while loop only appears to be there for testing the sending rate. Once you remove the while loop, the code in the documentation should work fine.
transporter.on('idle', () => {
  transporter.sendMail(...);
});
You don't need the while loop or the on idle handler. Just set the sendingRate and then use sendMail as normal.
transporter = nodemailer.createTransport({
  SES: { ses, aws },
  sendingRate: 14,
});

const params = {
  from: 'EMAIL',
  to: 'EMAIL',
  subject: 'Message',
  html: 'I hope this <b>message</b> gets sent!',
  text: 'I hope this message gets sent!',
  // attachments: [{ filename: 'card.pdf', content: data, contentType: 'application/pdf' }],
};

transporter.sendMail(params, (err, info) => {
  if (err) {
    return console.log(JSON.stringify(err)); // info is undefined on error
  }
  console.log(info.envelope);
  console.log(info.messageId);
});
An important thing to note here: nodemailer waits for the next second to continue with the next batch of throttled emails, and the next, and so on. So if you are running a script that exits immediately after calling the last sendMail(), the throttled emails will never get sent. Make sure the process runs until all emails are sent, by listening to the idle event or using setTimeout.
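For example, one minimal way to keep the process alive is to await the sendMail() promises themselves, since each promise resolves only once its message has actually been handed to SES (sendAll and messages are illustrative names, not nodemailer API):

async function sendAll(messages) {
  // Awaiting every promise keeps the process alive through the throttling
  const results = await Promise.allSettled(
    messages.map((params) => transporter.sendMail(params))
  );
  results
    .filter((r) => r.status === 'rejected')
    .forEach((r) => console.log(JSON.stringify(r.reason)));
  transporter.close();
}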

node-imap :: How to directly fetch an email that has arrived instead of using a search filter

I have written a wrapper over node-imap (https://github.com/mscdex/node-imap). Currently I am using the SINCE flag to search for emails that arrive in a particular inbox. For each new-mail event I receive, I call the imap.search() method and then the imap.fetch() method. Is it possible to fetch the emails directly, without the imap.search() method?
Providing snippets from my current code.
self.imap.on("mail", function(id) {
self.parseUnreadEmails(time)
});
MailListener.prototype.parseUnreadEmails = function(time) {
var self = this;
self.imap.search([["SINCE", time]], function(error, searchResults) {
var fetch;
if (error) {
self.emit("error", error);
} else {
fetch = self.imap.fetch(searchResults, {
bodies: '',
markSeen: true
})
}
//do some action
}
}
If you are looking to monitor and fetch new mails arriving in a particular mailbox, there are many ways to do this; below are some effective ideas you can use.
Using the IDLE command - yes, using this command you can put a folder in the idle state and watch for EXISTS/EXPUNGE responses whenever new mails arrive or there are any flag changes. Do read the IDLE RFC. One constraint of the IDLE command is that you need a dedicated TCP connection per folder; multiple folders cannot be monitored over a single IDLE command. (See the sketch after this list.)
Polling STATUS (UIDNEXT MESSAGES) on a reasonable interval. This will tell you the UIDNEXT value; if it has changed from the previous value, you should find the difference and fetch those mails. Before proceeding, read the IMAP RFC carefully.
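As a sketch of idea 1 with node-imap (my own illustration, not from the answer): node-imap issues IDLE automatically when the connection is quiet, so you can listen for the 'mail' event and fetch the newest messages directly by sequence number, with no search step. Here box is assumed to be the mailbox object returned by imap.openBox().

imap.on("mail", function(numNewMsgs) {
  // The newest numNewMsgs messages sit at the top of the mailbox,
  // so fetch them directly by sequence number instead of searching.
  var start = box.messages.total - numNewMsgs + 1;
  var f = imap.seq.fetch(start + ":*", { bodies: '', markSeen: true });
  f.on("message", function(msg) {
    msg.on("body", function(stream) {
      // parse the message stream, e.g. with mailparser
    });
  });
  f.once("error", function(err) {
    console.error("Fetch error: " + err);
  });
});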
