Gmail API Quota Limit Exceeded When Sending Too Many Emails - node.js

I am building an email automation platform using the Gmail API and Node.js. It works fine unless the email list contains more than 100 addresses.
When I try to send the emails I get this error:
Quota exceeded for quota metric 'Queries' and limit 'Queries per minute per user' of service 'gmail.googleapis.com' for consumer 'project_number:xxxxxxxxxxxxx'.
To send each email I am using the following method:
const r = await gmail.users.messages.send({
  auth, // coming from google oauth client
  userId: "me",
  requestBody: {
    raw: makeEmailBody( // function to create base64 email body with necessary headers
      thread.send_to,
      {
        address: user.from_email,
        name: user.from_name,
      },
      subject,
      template,
      thread.id
    ),
  },
});

Information about Usage limits
Per user rate limit 250 quota units per user per second, moving average (allows short bursts).
and
Method           Quota units
messages.send    100
In other words, sending 100 emails corresponds to using 10 000 quota units, but you are only allowed to use 250 quota units per second.
This means that you need to slow down your code execution to avoid running into quota issues.
This can be done with an exponential backoff algorithm, as explained here; a rough sketch is shown below.
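As a minimal sketch (not production code, reusing gmail, auth, and makeEmailBody from the question), you can pace sends to roughly two per second, since each messages.send costs 100 of the 250 units allowed per user per second, and back off exponentially when the API responds with a rate-limit error (typically HTTP 429 or 403):

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendWithBackoff(requestBody, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await gmail.users.messages.send({ auth, userId: "me", requestBody });
    } catch (err) {
      const status = err.code || (err.response && err.response.status);
      if ((status === 429 || status === 403) && attempt < maxRetries) {
        // Back off 1s, 2s, 4s, ... plus a little jitter, then retry.
        await sleep(2 ** attempt * 1000 + Math.random() * 250);
        continue;
      }
      throw err;
    }
  }
}

async function sendAll(threads) {
  for (const thread of threads) {
    await sendWithBackoff({
      raw: makeEmailBody(/* ...same arguments as in the question... */),
    });
    // ~2 sends per second keeps usage around 200 of the 250 quota units per second.
    await sleep(500);
  }
}

The fixed 500 ms pause is optional; the backoff alone will also pace the sends, at the cost of burning a few failed calls first.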

Related

How to avoid memory leak when using pub sub to call function?

I am stuck on a performance issue when using Pub/Sub to trigger the function.
// this is called from index.ts
export function downloadService() {
  // References an existing subscription
  const subscription = pubsub.subscription("DOWNLOAD-sub");

  // Create an event handler to handle messages
  // let messageCount = 0;
  const messageHandler = async (message: any) => {
    console.log(`Received message ${message.id}:`);
    console.log(`\tData: ${message.data}`);
    console.log(`\tAttributes: ${message.attributes.type}`);

    // "Ack" (acknowledge receipt of) the message
    message.ack();

    await exportExcel(message); // my function
    // messageCount += 1;
  };

  // Listen for new messages until timeout is hit
  subscription.on("message", messageHandler);
}

async function exportExcel(message: any) {
  // get data from the database
  const movies = await Sales.findAll({
    attributes: [
      "SALES_STORE",
      "SALES_CTRNO",
      "SALES_TRANSNO",
      "SALES_STATUS",
    ],
    raw: true,
  });
  // ... processing to Excel (800k rows)
  // ... bucket.upload to GCS
}
The function above works fine if I trigger only ONE Pub/Sub message.
However, it hits a memory leak or a database connection timeout if I trigger many Pub/Sub messages in a short period of time.
The problem I found is that the first message has not finished processing yet, but further Pub/Sub messages immediately invoke the function again, so everything is processed at the same time.
I have no idea how to resolve this, but I was thinking that implementing a queue worker or Google Cloud Tasks might solve the problem?
As mentioned by @chovy in the comments, you need to queue up the exportExcel calls, since the function's execution is not keeping up with the rate of invocation. One module that can be used to queue function calls is async. Please note that the async module is not officially supported by Google.
As an alternative, you can use the flow control features on the subscriber side. Data pipelines often receive sporadic spikes in published traffic, which can overwhelm subscribers trying to catch up. The usual response to high published throughput on a subscription is to dynamically autoscale subscriber resources to consume more messages. However, this can incur unwanted costs, for instance you may need more VMs, which leads to additional capacity planning. Flow control features on the subscriber side let the subscriber regulate the rate at which messages are ingested. Please refer to this blog for more information on flow control features; a sketch is shown below.
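As a minimal sketch (assuming the @google-cloud/pubsub Node.js client and the same DOWNLOAD-sub subscription as in the question), subscriber flow control can cap how many messages are handed to the handler at once, so only one exportExcel run is in flight at a time:

import { PubSub, Message } from "@google-cloud/pubsub";

const pubsub = new PubSub();

// Limit delivery so the handler only ever works on one message at a time.
const subscription = pubsub.subscription("DOWNLOAD-sub", {
  flowControl: {
    maxMessages: 1,
    allowExcessMessages: false,
  },
});

subscription.on("message", async (message: Message) => {
  try {
    await exportExcel(message); // process first...
    message.ack();              // ...then acknowledge, so a failed export is redelivered
  } catch (err) {
    console.error(err);
    message.nack();
  }
});

Compared with the code in the question, acknowledging after exportExcel (rather than before) also means a crashed export is retried instead of being lost.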

Bot Framework: "httpStatusCode": 504, getting "Failed to send activity: bot timed out"

Occasionally the bot sends a timeout error. It seems to happen with requests that carry a larger amount of data. Is it possible to increase the timeout or cache buffer?
Request
https://directline.botframework.com/v3/directline/conversations/xxx/activities
Response
{
  "error": {
    "code": "BotError",
    "message": "Failed to send activity: bot timed out"
  },
  "httpStatusCode": 504
}
Payload
x-ms-bot-agent: DirectLine/3.0 (directlinejs; WebChat/4.9.0 (Full))
x-requested-with: XMLHttpRequest

Request payload:
{
  channelData: { clientActivityID: "", clientTimestamp: "2020-06-05T06:57:43.001Z" },
  channelId: "webchat",
  from: { id: "", … },
  locale: "en-US",
  text: "nohy",
  textFormat: "plain",
  timestamp: "2020-06-05T06:57:43.045Z",
  type: "message"
}
Any idea guys?
You cannot increase the timeout limit. It is imposed by the Direct Line service and is in place for performance and stability reasons. The timeout happens because your bot/code takes too long to respond back to the service, not because of how much data is sent (unless that large payload is what makes the reply take so long).
You should investigate the portions of your bot that take longer and see if you can decrease the time taken. If you know there are areas (say, external calls to other services) that will take a long time and are unavoidable, then you should implement proactive messaging to work around it; a sketch of the pattern follows the links below.
https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-howto-proactive-message?view=azure-bot-service-4.0&tabs=csharp
https://github.com/microsoft/BotBuilder-Samples/tree/master/samples/csharp_dotnetcore/16.proactive-messages
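If your bot happens to be written in Node.js with the botbuilder SDK (the linked samples are C#, and runSlowQuery plus the reply text below are placeholders, not from the question), the proactive pattern looks roughly like this: reply immediately so Direct Line gets a response within its timeout, capture the conversation reference, do the slow work in the background, and push the result with continueConversation.

import {
  BotFrameworkAdapter,
  ConversationReference,
  TurnContext,
} from "botbuilder";

const adapter = new BotFrameworkAdapter({
  appId: process.env.MicrosoftAppId,
  appPassword: process.env.MicrosoftAppPassword,
});

// Placeholder for whatever long-running call is making the bot exceed the timeout.
async function runSlowQuery(): Promise<string> {
  return "the large result";
}

async function onMessage(context: TurnContext): Promise<void> {
  // Acknowledge right away, well inside the Direct Line timeout.
  await context.sendActivity("Working on it, this may take a while...");

  // Capture where to send the proactive message later.
  const reference: Partial<ConversationReference> =
    TurnContext.getConversationReference(context.activity);

  // Do the slow work off the turn, then push the result proactively.
  runSlowQuery().then((result) =>
    adapter.continueConversation(reference, async (proactiveContext) => {
      await proactiveContext.sendActivity(`Here is your result: ${result}`);
    })
  );
}

The onMessage handler would be wired into your existing activity handling; the key point is that the HTTP turn ends quickly and the heavy work completes outside it.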

How to get docusign webhook to retry on failure?

I'm able to correctly set up and monitor any changes to docusign envelopes.
However, I'm trying to get this edge case working: if my listener is not active, I want docusign to retry.
I've tried setting "requireAcknowledgment" to true when creating my envelope; however, that does not seem to change anything. I can see the failures in my admin panel's Connect tab, but they are only retried if I trigger them manually.
event_notification = DocuSign_eSign::EventNotification.new({
  :url => webhook_url,
  :includeDocuments => false,
  :envelopeEvents => [
    DocuSign_eSign::EnvelopeEvent.new({:envelopeEventStatusCode => "sent"}),
    DocuSign_eSign::EnvelopeEvent.new({:envelopeEventStatusCode => "delivered"}),
    DocuSign_eSign::EnvelopeEvent.new({:envelopeEventStatusCode => "completed"}),
    DocuSign_eSign::EnvelopeEvent.new({:envelopeEventStatusCode => "declined"}),
    DocuSign_eSign::EnvelopeEvent.new({:envelopeEventStatusCode => "voided"}),
  ],
  :loggingEnabled => true,
  :requireAcknowledgment => true # retry on failure
})

# create the envelope definition with the template_id
envelope_definition = DocuSign_eSign::EnvelopeDefinition.new({
  :status => 'sent',
  :eventNotification => event_notification,
  :templateId => #template_id
})
Some related threads that I've looked into: Docusign webhook listener - is there a retry?
The DocuSign Connect webhook system has two queuing/retry models. The standard one is the "aggregate" model. The newer one is the "send individual messages" (SIM) model.
You probably have the aggregate queuing model. Its retry procedure is:
The first retry for envelope 1 will not happen until 24 hours have passed and an additional message for the same configuration (for some other envelope 2) has succeeded.
But, if an envelope 1 message fails, and then there is a different message (different event) also for envelope 1, the second message will be tried whenever its event occurs (even if less than 24 hours). If it succeeds, then the first message will never be re-sent (since it was superseded by message 2).
(Drew is partly describing the SIM retry model.)
To switch to the SIM model
Use the eSignatures Administration tool. See the Updates section in the Account part of the navigation menu.
Connect will retry automatically once a successful publish to the same endpoint has occurred. If the first retry fails, a second will not be attempted until 24 hours have passed.

Azure website timing out after long process

Team,
I have a website published on Azure. The application reads around 30,000 employees from an API and, after the read is successful, updates the secondary Redis cache with all 30,000 employees.
The timeout occurs in the second step, when it updates the secondary Redis cache with all the employees. It works fine from my local machine, but as soon as I deploy it to Azure, it gives me a
500 - The request timed out.
The web server failed to respond within the specified time
From various blogs I learned that the default timeout for an Azure website is 4 minutes.
I have tried all the fixes suggested on those blogs, like setting SCM_COMMAND_IDLE_TIMEOUT to 3600 in the application settings.
I even tried putting the Azure Redis cache session state provider settings in the web.config with inflated timeout figures:
<add type="Microsoft.Web.Redis.RedisSessionStateProvider" name="MySessionStateStore" host="[name].redis.cache.windows.net" port="6380" accessKey="QtFFY5pm9bhaMNd26eyfdyiB+StmFn8=" ssl="true" abortConnect="False" throwOnError="true" retryTimeoutInMilliseconds="500000" databaseId="0" applicationName="samname" connectionTimeoutInMilliseconds="500000" operationTimeoutInMilliseconds="100000" />
The offending code responsible for the timeout is this:
public void Update(ReadOnlyCollection<ColleagueReferenceDataEntity> entities)
{
    //Trace.WriteLine("Updating the secondary cache with colleague data");
    var secondaryCache = this.Provider.GetSecondaryCache();
    foreach (var entity in entities)
    {
        try
        {
            secondaryCache.Put(entity.Id, entity);
        }
        catch (Exception ex)
        {
            // if a record fails - log and continue.
            this.Logger.Error(ex, string.Format("Error updating a colleague in secondary cache: Id {0}", entity.Id));
        }
    }
}
Is there anything I can change in this code?
Please, can anyone help me... I have run out of ideas!
You're doing it wrong! Redis is not the problem. The main request thread itself is getting terminated before the process completes. You shouldn't let a request wait that long: there is a hard-coded limit of 230 seconds on in-flight requests, and it can't be changed.
Read here: Why does my request time out after 230 seconds?
Assumption #1: You're loading the data on the very first request from the client side.
Solution: If the 30,000 employee records are for the whole application and not per specific user, you can trigger the data load on app start-up, not on a user request.
Assumption #2: You have individual users, and for each of them you have to store the 30,000 employee records on their first request from the client side.
Solution: Add a background job (maybe a WebJob or Azure Function) to process the task. Upon request from the client, return a 202 (Accepted) with the job-status location in a header. The client can then poll the status of the task at a certain frequency and update the user accordingly; a rough sketch of this pattern is shown below.
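A minimal sketch of that 202-plus-polling shape, assuming classic ASP.NET Web API 2 (the controller, route, and in-memory status store are illustrative; a real app would hand the work to a WebJob/Function and keep status in durable storage):

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class CacheRefreshController : ApiController
{
    // In-memory status store for illustration only.
    private static readonly ConcurrentDictionary<Guid, string> Jobs =
        new ConcurrentDictionary<Guid, string>();

    [HttpPost]
    public HttpResponseMessage Start()
    {
        var jobId = Guid.NewGuid();
        Jobs[jobId] = "Running";

        // Move the long-running cache update off the request thread.
        Task.Run(async () =>
        {
            await RefreshSecondaryCacheAsync(); // the 30,000-record update
            Jobs[jobId] = "Completed";
        });

        // Return 202 Accepted immediately, with the job-status location.
        var response = Request.CreateResponse(HttpStatusCode.Accepted);
        response.Headers.Location = new Uri("/api/cacherefresh/" + jobId, UriKind.Relative);
        return response;
    }

    [HttpGet]
    public HttpResponseMessage Status(Guid id)
    {
        string status;
        return Jobs.TryGetValue(id, out status)
            ? Request.CreateResponse(HttpStatusCode.OK, status)
            : Request.CreateResponse(HttpStatusCode.NotFound);
    }

    // Placeholder for the real secondary-cache refresh.
    private static Task RefreshSecondaryCacheAsync()
    {
        return Task.FromResult(0);
    }
}

Whichever step is slow (reading from the API or writing to Redis), the user-facing request now finishes in milliseconds and the heavy work runs outside the 230-second window.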
Edit 1:
For Assumption #1: you can try batching the objects while pushing them to Redis. Currently you update one object at a time, which means 30,000 round trips, and that will definitely exhaust the 230-second limit. As a quick fix, batch multiple objects into one request to Redis. I hope that does the trick!
UPDATE:
As you're using StackExchange.Redis, use the pattern that has already been mentioned here to batch the objects:
Batch set data from Dictionary into Redis
The number of objects per request varies depending on the payload size and the bandwidth available. As your site is hosted on Azure, I do not think bandwidth will be much of a concern; a rough sketch of the batching idea follows.
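For illustration only (the class name, "colleague:" key prefix, and JSON serialization are assumptions, not the asker's code, and ColleagueReferenceDataEntity is the type from the question), a StackExchange.Redis batch can push a chunk of records in one go instead of one Put per record:

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;
using System.Threading.Tasks;
using Newtonsoft.Json;
using StackExchange.Redis;

public class SecondaryCacheBatchLoader
{
    private readonly IDatabase _db;

    public SecondaryCacheBatchLoader(IConnectionMultiplexer redis)
    {
        _db = redis.GetDatabase();
    }

    public async Task UpdateAsync(ReadOnlyCollection<ColleagueReferenceDataEntity> entities)
    {
        const int chunkSize = 1000; // tune to payload size / bandwidth

        for (var i = 0; i < entities.Count; i += chunkSize)
        {
            var batch = _db.CreateBatch();
            var pending = new List<Task>();

            foreach (var entity in entities.Skip(i).Take(chunkSize))
            {
                // Serialize and queue the write; nothing is sent yet.
                var value = JsonConvert.SerializeObject(entity);
                pending.Add(batch.StringSetAsync("colleague:" + entity.Id, value));
            }

            batch.Execute();             // send the whole chunk in one go
            await Task.WhenAll(pending); // wait for the replies before the next chunk
        }
    }
}

batch.Execute() pipelines the queued commands over the existing connection, so a 1,000-item chunk amortizes the network latency instead of paying a round trip per record.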
Hope that helps!

How to improve the performance of my Azure worker role?

I'd like to know if I can do something about the performance. I am working on a project where we process payments. At the moment the role processes a payment in about 5 seconds, which includes sending the payment to a third-party processor. We maintain the payments with a status (Pending/InProgress/Successful/Failed).
Database - Azure SQL Server - P1 Size
EF6.0
Worker Role - Medium Size
Queue - Azure Service Bus Queue
To avoid duplicate payments we save a status per payment. At the moment we process 1,500 to 2,000 payments a day and this takes 3 to 5 hours; more payments take more and more time.
Once a message is picked up by the worker role, it is executed by the appropriate handler. The handler looks like this:
public class PaymentProcessingHandler
{
    public void Execute()
    {
        foreach (var payment in pendingPayments)
        {
            // ChangePaymentStatusFromPendingToInProgress
            // SaveChanges()
            // MakePayment
            // UpdateResult
            // SaveChanges()
        }
    }
}
If anyone has worked on a payment processing system and has any ideas to improve the performance, that would be helpful.
