Elasticsearch nodejs check if queue is full - node.js

I have the following error with elasticsearch
[remote_transport_exception] [es-0][x.x.x.x:9300][indices:data/write/bulk[s]]
Or
[remote_transport_exception] [es-0][x.x.x.x:9300][indices:data/write/bulk[s][p]]
It seems that the Elasticsearch queue is full.
I am using the Node.js lib https://www.npmjs.com/package/elasticsearch and this error occurred after calling client.index.
I am calling index as a promise inside a RabbitMQ consumer; messages never arrive more than 8 at a time.
client.index().then(...)
It seems that the then is called while the update or create is still in the queue. I tried adding {wait_for_active_shards: 'all'} but I have the same issue.

It was an issue caused by the Elasticsearch server being too busy.
I added a retry system in case of a 429 error code, and now it works fine.
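For reference, a minimal sketch of such a retry, assuming the legacy elasticsearch client exposes the HTTP status on err.status (err.statusCode is also checked, defensively); indexWithRetry and the retry/delay values are illustrative, not from the original code:

const elasticsearch = require('elasticsearch');
const client = new elasticsearch.Client({ host: 'localhost:9200' });

// Retry the index call when Elasticsearch answers 429, which is what it
// returns when the write queue is full (es_rejected_execution_exception).
// The delay doubles on each attempt: 1s, 2s, 4s, ...
function indexWithRetry(params, retries, delayMs) {
  retries = retries === undefined ? 5 : retries;
  delayMs = delayMs === undefined ? 1000 : delayMs;
  return client.index(params).catch(function (err) {
    var status = err.status || err.statusCode; // legacy client uses err.status
    if (status === 429 && retries > 0) {
      return new Promise(function (resolve) { setTimeout(resolve, delayMs); })
        .then(function () { return indexWithRetry(params, retries - 1, delayMs * 2); });
    }
    throw err; // not a queue-full error, or retries exhausted
  });
}

// Inside the RabbitMQ consumer:
// indexWithRetry({ index: 'myindex', type: 'doc', body: message })
//   .then(function () { /* ack the message */ });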

Related

AWS MSK Trigger - Lambda (consumer) running infinitely

I am working on a notification service which is built on AWS infra and uses MSK, lambda and SES.
The lambda is written in Node.js, and the trigger is an MSK topic. Now the weird thing about this lambda is it's getting invoked continuously even after the messages are processed. Inside the lambda is the code to fetch recipients and send emails via SES.
I have ensured that there is no loop present inside the code, so my guess is that for some reason the messages are not getting marked as consumed.
One reason this could happen is if the code executing throws an Error at some point. But I have no error in the logs.
Could execution time (the lambda getting timed out? I don't see anything like that in the logs though) or the volume of messages be responsible for this behavior?
The Lambda is set up using the Serverless Framework:
notificationsKafkaConsumer:
  handler: src/consumers/notifications.consumer
  events:
    - msk:
        arn: ${ssm:/kafka/cluster_arn~true}
        topic: "notifications"
        startingPosition: LATEST
It turned out that the lambdas getting timed out was the issue. The message got lost in the huge volume of logs, and interestingly "timed out" is not an error, so filtering logs with "ERROR" didn't work.
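Two follow-ups that may help others: the Lambda runtime reports timeouts as "Task timed out after X seconds", so filter the logs for "timed out" rather than "ERROR"; and the function timeout can be raised in serverless.yml. A sketch based on the question's config (the 60-second value is illustrative; the Serverless Framework default is 6 seconds):

notificationsKafkaConsumer:
  handler: src/consumers/notifications.consumer
  timeout: 60 # seconds; default is 6
  events:
    - msk:
        arn: ${ssm:/kafka/cluster_arn~true}
        topic: "notifications"
        startingPosition: LATEST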

EventHub Golang client error: amqp:internal-error

I am trying to use the EventHub Go client to send a simple "hello world" event, but got this error message:
*Error{Condition: amqp:internal-error, Description: The service was unable to process the request; please retry the operation. For more information on exception types and proper exception handling, please refer to http://go.microsoft.com/fwlink/?LinkId=761101 TrackingId:be0c66437a1447b7accdc113c84955dd_G5, SystemTracker:gateway5, Timestamp:2021-07-10T21:28:48, Info: map[]}
The code is exactly the same as this sample code here: https://github.com/Azure/azure-event-hubs-go
The SO thread I found with a somewhat similar error message is Getting "amqp:internal-error" when peeking messages from Azure Service Bus Queue using AMQP, rhea and Node, but it is about Service Bus and the Node client.
Any idea why this issue occurred?
This error is pretty non-descriptive.
One way to trigger it is to specify an EventHubs connection string without an EntityPath=<event hub name> in it.
So if you're using a broker-level connection string, you'll need to specify the EventHub you're attempting to connect to by adding EntityPath=eventHubName. The readme snippet does list this, but the error is admittedly not great in that situation.
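For reference, a connection string that includes the entity path looks roughly like this (placeholder values, not a real namespace or key):

Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>;EntityPath=<event-hub-name>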
I've filed this issue to at least improve the error message in that case, as it doesn't really lead you to what's wrong.
https://github.com/Azure/azure-event-hubs-go/issues/222

"Server x timed out" during MongoDB aggregation

I have a script that periodically runs aggregation on a mongodb collection. As the dataset has grown, the amount of time it takes to aggregate has also grown. My aggregation script has recently stopped working consistently, and the error logs show:
error: { [MongoError: server <x> timed out]
name: 'MongoError',
message: 'server <x> timed out' }
I've tried debugging this, and the only pattern I can find is that this timeout seems to only occur when the aggregation takes longer than 2 minutes (it times out right around 2m). Does anyone have additional debugging tips for this? The 2-minute thing is giving me the impression that I just need to configure some timeout somewhere, but I can't figure out where, or if I'm just falling into a red-herring trap.
About the system configuration: This aggregation script is a node.js (v5.9.1) application running in an alpine-based docker (v1.9.1) container. It uses the mongodb node driver (v2.1.19). Single mongodb server (though this is also happening in a separate environment with a replSet) running mongod (v3.2.6)
I got the same problem with log time aggregation, and I think I have the solution for you.
I found that the option socketTimeoutMS is responsible for that.
Check the default socketTimeoutMS value in your driver's mongo_client.js. For me it was 2 minutes (mongodb module version 2.1.18).
So just add this option to your URL:
mongodb://localhost:27017/test?maxPoolSize=2&socketTimeoutMS=600000
It will set the timeout to 10 minutes. That did the trick for me.
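As an alternative to the query string, here is a minimal sketch of passing the same option when connecting with the 2.x Node driver the question mentions (the collection name and pipeline are placeholders):

var MongoClient = require('mongodb').MongoClient;

// socketTimeoutMS=600000 keeps the socket open for 10 minutes, well above
// the ~2 minute aggregations that were hitting the default timeout.
var url = 'mongodb://localhost:27017/test?maxPoolSize=2&socketTimeoutMS=600000';

MongoClient.connect(url, function (err, db) {
  if (err) throw err;
  db.collection('logs').aggregate([
    { $group: { _id: '$level', count: { $sum: 1 } } }
  ]).toArray(function (err, results) {
    if (err) throw err;
    console.log(results);
    db.close();
  });
});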

Azure WebJob QueueTrigger Throwing StorageException 404 Not Found

I am working with Azure Queues, controlling them using a WebJob. In the Functions file I have one QueueTrigger function, which fires up when the Azure Queue receives a message.
Now the problem is that the function (QueueTrigger) executes successfully. I have set up proper exception handling inside the queue trigger function, and everything executes without errors. But when the QueueTrigger function ends, the code throws an exception: StorageException 404 Not Found,
and the message is not getting deleted from the queue. The next time I run the job it still fetches the old message. I have manually created the storage containers:
azure-jobs-host-archive
azure-jobs-host-output
azure-webjobs-dashboard
azure-webjobs-hosts
I have seen an answer in one post but it does not help at all.
I have no idea how to tackle this exception or why it is being thrown in the code.
Thank you
Eman
After updating the following packages, the problem got solved:
Microsoft.WindowsAzure.Storage
Microsoft.Azure.WebJobs
Microsoft.Azure.WebJobs.Host
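In case it helps someone, these packages can be updated from the NuGet Package Manager Console with commands along these lines (versions omitted):

Update-Package Microsoft.WindowsAzure.Storage
Update-Package Microsoft.Azure.WebJobs
Update-Package Microsoft.Azure.WebJobs.Host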

Random 'ECONNABORTED' error when using sendFile in Express/Node

I have set up a Node server with Express middleware. I get the ECONNABORTED error randomly on some files when loading an HTML file which triggers about 10 other loads (js, css, etc.). The exact error is:
{ [Error: Request aborted] code: 'ECONNABORTED' }
Generated by this simplified code (after I tried to debug the issue):
res.sendFile(res.locals.physicalUrl, function (err) {
  if (err)
    console.log(err);
  ...
});
Many posts talk about this error resulting from not specifying the full path name. That is not the situation here. I do specify the full path and indeed the error is randomly generated. There are times when the page and all its subsequent links load perfectly and there are times when they do not. I tried to flush the cache and did not find any pattern to connect it with this.
This specific error appears to be a generic term for a socket connection getting aborted and is discussed in the context of other applications like FTP.
Having realized that the node worker threads can be increased, I tried to do so using:
process.env.UV_THREADPOOL_SIZE = 20;
However, my understanding is that even absent this, at most the file transfer may have to wait for a worker thread to be free and not get aborted. I am not talking about big files here, all files are less than 1 MB.
I have a gut feeling that this has nothing to do with node directly.
Please point to any other possibilities (node or otherwise) to handle this error. Also, any other indirect solutions? Retrying a few times could be one but that would be clumsy. EDIT: No, I cannot retry. Headers are already sent with the error!
A SIDE NOTE:
Many examples on the use of sendFile skip using the callback thereby giving the impression that it is a synchronous call. It is not. Do use the callback at all times, check for success and only then move on to the "next" middleware or take appropriate steps if the send fails for whatever reason. Not doing so can make it difficult to debug the consequences in an asynchronous environment.
See https://stackoverflow.com/a/36949631/2798152
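To make the side note concrete, here is a minimal sketch of the pattern (the route is illustrative; res.locals.physicalUrl follows the question's setup, and next(err) hands the failure to Express's error handling):

app.get('/static/*', function (req, res, next) {
  res.sendFile(res.locals.physicalUrl, function (err) {
    if (err) {
      // By this point headers may already have been sent, so responding
      // again is not always possible; log and delegate to Express.
      console.log(err);
      return next(err);
    }
    // Success: safe to log, record metrics, etc.
  });
});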
Could it be possible that in some cases you terminate the connection by calling res.end before the asynchronous call to res.sendFile ends?
If that's not the case - can you pastebin more of your application code?
Uninstalling and Re-installing MongoDB solved this for me.
I was facing the same problem. It started happening when I had to force restart my laptop because it became unresponsive. After restarting, trying to connect to the Mongo server using Node.js always threw the ECONNABORTED error.
