Payload too large entity add error in Node.js

I am working with Elasticsearch in Node.js. When I try to add my users to Elasticsearch using Node.js, I receive the following exception every time. Notably, up to 15,000 users update successfully, but as soon as the user count goes above 15,000 I get the exception below from Node.js:
PayloadTooLargeError: too many parameters
at queryparse (/home/rizwan/php/elastic-search-node/node_modules/body-parser/lib/types/urlencoded.js:151:13)
at parse (/home/rizwan/php/elastic-search-node/node_modules/body-parser/lib/types/urlencoded.js:75:9)
at /home/rizwan/php/elastic-search-node/node_modules/body-parser/lib/read.js:121:18
at invokeCallback (/home/rizwan/php/elastic-search-node/node_modules/raw-body/index.js:224:16)
at done (/home/rizwan/php/elastic-search-node/node_modules/raw-body/index.js:213:7)
at IncomingMessage.onEnd (/home/rizwan/php/elastic-search-node/node_modules/raw-body/index.js:273:7)
at emitNone (events.js:106:13)
at IncomingMessage.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1055:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
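The trace shows the failure comes from body-parser's urlencoded parser, which caps both the number of form parameters (parameterLimit, default 1000) and the total body size (limit, default 100kb). Below is a minimal sketch of raising those limits in an Express app; the specific numbers are assumptions to tune against your own payloads, not recommended values.

const express = require('express');
const bodyParser = require('body-parser');

const app = express();

// body-parser's urlencoded parser rejects requests with more parameters than
// parameterLimit and bodies larger than limit. Raising both lets larger user
// batches through; the numbers below are examples only.
app.use(bodyParser.urlencoded({
  extended: true,
  parameterLimit: 100000, // allow many more form fields per request
  limit: '50mb',          // allow a larger request body
}));

// If the users are posted as JSON instead, the json parser has its own size limit.
app.use(bodyParser.json({ limit: '50mb' }));

Even with higher limits, sending 15,000+ users in a single HTTP request is fragile; splitting the import into smaller batches (or indexing through Elasticsearch's bulk API) avoids hitting these caps altogether.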

Related

Why does Stripe API output "Request metrics buffer is full, dropping telemetry message."?

In Node v12 with the Stripe API client v, I am running a client.getSubscription() call on about 200 items in a loop.
I get this message in the output (--trace-warnings is turned on):
Getting subscription data for a#email.com...
Getting subscription data for b#email.com...
Getting subscription data for c#email.com...
(node:25078) Stripe: Request metrics buffer is full, dropping telemetry message.
at Object.emitWarning (/home/nick/src/project/node_modules/stripe/lib/utils.js:437:18)
at Constructor._recordRequestMetrics (/home/nick/src/project/node_modules/stripe/lib/StripeResource.js:370:15)
at IncomingMessage.<anonymous> (/home/nick/src/project/node_modules/stripe/lib/StripeResource.js:192:14)
at Object.onceWrapper (events.js:481:28)
at IncomingMessage.emit (events.js:387:35)
at IncomingMessage.emit (domain.js:470:12)
at endReadableNT (internal/streams/readable.js:1317:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
(node:25078) Stripe: Request metrics buffer is full, dropping telemetry message.
at Object.emitWarning (/home/nick/src/project/node_modules/stripe/lib/utils.js:437:18)
at Constructor._recordRequestMetrics (/home/nick/src/project/node_modules/stripe/lib/StripeResource.js:370:15)
at IncomingMessage.<anonymous> (/home/nick/src/project/node_modules/stripe/lib/StripeResource.js:192:14)
at Object.onceWrapper (events.js:481:28)
at IncomingMessage.emit (events.js:387:35)
at IncomingMessage.emit (domain.js:470:12)
at endReadableNT (internal/streams/readable.js:1317:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
(node:15748) Stripe: Request metrics buffer is full, dropping telemetry message.
...
I think this error is coming from an underlying library that Stripe is using, since I cannot find any information on Google or Stripe's docs with that specific error in it.
It also does not appear to be a critical error; the calls succeed.
This is a warning that's emitted from stripe-node when request telemetry can't be sent because the buffer is full: https://github.com/stripe/stripe-node/blob/12ebce4220c22e1e1a6a0480ba96c2baffe01b8c/lib/StripeResource.js#L395
The telemetry is latency information which is sent to Stripe for metrics purposes.
It's safe to ignore these as they are just warnings and won't affect the actual API operations. If, however, they are a bit too noisy for your liking, you can disable telemetry entirely with the config object:
const Stripe = require('stripe');

const stripe = Stripe('sk_test_...', {
  telemetry: false,
});

Application Insights: Unable to verify the first certificate in node js

Application Insights keeps throwing the following error every few minutes.
ApplicationInsights:Sender [ 'Ingestion endpoint could not be reached 5 consecutive times. There may be resulting telemetry loss. Most recent error:',
{ Error: unable to verify the first certificate
at TLSSocket.<anonymous> (_tls_wrap.js:1116:38)
at ZoneDelegate.invokeTask (/usr/src/app/node_modules/zone.js/dist/zone-node.js:275:35)
at Zone.runTask (/usr/src/app/node_modules/zone.js/dist/zone-node.js:151:47)
at TLSSocket.ZoneTask.invoke (/usr/src/app/node_modules/zone.js/dist/zone-node.js:345:33)
at emitNone (events.js:106:13)
at TLSSocket.emit (events.js:208:7)
at TLSSocket._finishInit (_tls_wrap.js:643:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:473:38) code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } ]
I reviewed the GitHub discussion on this issue and tried some of the proposed solutions, but they did not work.
Here is the code that I am using to connect to application insights.
let appInsights = require('applicationinsights');
appInsights.setup(config.APPINSIGHTS_KEY.trim())
.setAutoDependencyCorrelation(true)
.setAutoCollectRequests(true)
.setAutoCollectPerformance(true)
.setAutoCollectExceptions(true)
.setAutoCollectDependencies(true)
.setAutoCollectConsole(true)
.setUseDiskRetryCaching(true)
.start();
I am not 100% sure if there is any telemetry loss or not, but getting these errors all the time is annoying. Please help.
I've updated the github discussion thread. A fix on the ingestion side is in the works.
https://github.com/Microsoft/ApplicationInsights-node.js/issues/180#issuecomment-475699485
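UNABLE_TO_VERIFY_LEAF_SIGNATURE generally means the certificate chain presented to Node is missing an intermediate certificate, which can be caused either by the endpoint itself (as in the ingestion-side incident above) or by an intercepting proxy or firewall on your own network. If you want to check whether the problem is local to your machine rather than on the ingestion side, a small diagnostic like the sketch below can help. The host name dc.services.visualstudio.com is assumed here as the classic ingestion endpoint; substitute whatever endpoint your SDK actually targets.

// Diagnostic sketch (not part of the original answer): connect to the ingestion
// endpoint and report whether Node can verify its certificate chain.
const tls = require('tls');

const host = 'dc.services.visualstudio.com'; // assumed endpoint; adjust if yours differs

const socket = tls.connect(443, host, {
  servername: host,
  rejectUnauthorized: false, // diagnostics only: let the handshake finish so the result can be inspected
}, () => {
  console.log('authorized:', socket.authorized);            // false => chain did not verify on this machine
  console.log('authorizationError:', socket.authorizationError);
  socket.end();
});

socket.on('error', (err) => console.error('connection error:', err));

If the chain fails to verify locally because a corporate proxy re-signs traffic, exporting the proxy's CA certificate via the NODE_EXTRA_CA_CERTS environment variable (set before Node starts) is a safer fix than disabling certificate validation.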

Load testing our elastic cluster

We are currently trying to load test our app, which involves a lot of logging to our Elastic cluster. Under heavy load, I start seeing the error below from ES:
Error: No Living connections
at sendReqWithConnection (D:\home\site\wwwroot\node_modules\elasticsearch\src\lib\transport.js:225:15)
at next (D:\home\site\wwwroot\node_modules\elasticsearch\src\lib\connection_pool.js:213:7)
at _combinedTickCallback (internal/process/next_tick.js:131:7)
at process._tickDomainCallback (internal/process/next_tick.js:218:9)
and before that, we see another batch of errors:
Error: Request Timeout after 30000ms
at D:\home\site\wwwroot\node_modules\elasticsearch\src\lib\transport.js:354:15
at Timeout.<anonymous> (D:\home\site\wwwroot\node_modules\elasticsearch\src\lib\transport.js:383:7)
at ontimeout (timers.js:482:11)
at tryOnTimeout (timers.js:317:5)
at Timer.listOnTimeout (timers.js:277:5)
and
Error: [es_rejected_execution_exception] rejected execution of org.elasticsearch.transport.TransportService$7#4d532edc on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor#6c5897a1[Running, pool size = 1, active threads = 1, queued tasks = 200, completed tasks = 122300]]
at respond (D:\home\site\wwwroot\node_modules\elasticsearch\src\lib\transport.js:307:15)
at checkRespForFailure (D:\home\site\wwwroot\node_modules\elasticsearch\src\lib\transport.js:266:7)
at HttpConnector.<anonymous> (D:\home\site\wwwroot\node_modules\elasticsearch\src\lib\connectors\http.js:159:7)
at IncomingMessage.bound (D:\home\site\wwwroot\node_modules\elasticsearch\node_modules\lodash\dist\lodash.js:729:21)
at emitNone (events.js:111:20)
at IncomingMessage.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1064:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickDomainCallback (internal/process/next_tick.js:218:9)
Is this just caused by heavy load? I'm wondering how I can fix the bottleneck. We currently have 3 data nodes and 3 master nodes running on separate Linux servers.
Should I bring in something like Logstash? How many servers would I need?
Should I bring in a queue to set aside ES tasks for later?
EDIT: a bit more info -
We're performing one insert per request (we send around 100 parallel requests, up to 2,000 in total)
CPU usage hasn't gone very high (< 10%)
We host the machines in Azure. All applications (Node and ES) stay in the same region
I think the problem is that your bulk queue capacity is exceeded; the error says your limit is 200. You didn't mention the memory available on your ES servers, but try increasing the limit and monitor your memory.
Edit elasticsearch.yml:
threadpool.bulk.queue_size: 500
The right value differs from scenario to scenario, so you will need to experiment to find what works for you.
If you have a lot of data to insert at the same time, you may also consider putting a message queue such as Kafka in front of ES to handle the data asynchronously.
You can read more about this here: https://discuss.elastic.co/t/any-idea-what-these-errors-mean-version-2-4-2/70690/4
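Because the test performs one insert per request, every write lands on the bulk queue individually, and the queue (capacity 200) overflows as soon as more requests arrive than the pool can drain. Batching the writes through the bulk API reduces the number of queued tasks dramatically and complements the queue-size and message-queue suggestions above. Here is a minimal sketch using the legacy elasticsearch JavaScript client (the one visible in the stack traces); the index and type names are placeholders.

const elasticsearch = require('elasticsearch');

const client = new elasticsearch.Client({ host: 'localhost:9200' });

// Index a batch of documents in one bulk call instead of one HTTP request each.
// 'logs' / 'doc' are placeholder index and type names.
async function bulkIndex(docs) {
  const body = [];
  for (const doc of docs) {
    body.push({ index: { _index: 'logs', _type: 'doc' } }); // action line
    body.push(doc);                                          // source line
  }
  const response = await client.bulk({ body });
  if (response.errors) {
    // Individual items can still be rejected (e.g. es_rejected_execution_exception);
    // inspect them rather than failing silently.
    const failed = response.items.filter((item) => item.index && item.index.error);
    console.error(`bulk request completed with ${failed.length} failed items`);
  }
  return response;
}

Keeping each bulk request to a few hundred documents and limiting how many bulk calls are in flight at once usually keeps the queue below capacity without needing to raise queue_size at all.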

Application Insights: CorrelationIdManager error in node js

I am using Application Insights in my Node.js application and I keep getting this error. Can you please help me?
ApplicationInsights:CorrelationIdManager [ { Error: unable to verify the first certificate
at TLSSocket.<anonymous> (_tls_wrap.js:1105:38)
at ZoneDelegate.invokeTask (C:\src\xyz\xyz\xyz\node_modules\zone.js\dist\zone-node.js:275:35)
at Zone.runTask (C:\src\xyz\xyz\xyz\node_modules\zone.js\dist\zone-node.js:151:47)
at TLSSocket.ZoneTask.invoke (C:\src\xyz\xyz\xyz\node_modules\zone.js\dist\zone-node.js:345:33)
at emitNone (events.js:106:13)
at TLSSocket.emit (events.js:208:7)
at TLSSocket._finishInit (_tls_wrap.js:639:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:469:38) code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } ]
You're seeing this error because of transient problems on the Application Insights backend, but the error should not be fatal and your app should continue working as expected (albeit with this message being printed).
In the default configuration of the Application Insights SDK for Node.js, communication with the backend is retried - so you should be able to ignore this error and still see telemetry show up in the Azure Portal.
If you've changed the defaults, the setting you'll want to make sure to set is .setUseDiskRetryCaching(true). For example:
const appInsights = require('applicationinsights');

appInsights.setup("key")
  .setUseDiskRetryCaching(true)
  .start();
If you want to suppress messages like this from the SDK, you can disable internal logging (be aware that you'll potentially be suppressing other errors as well):
const appInsights = require('applicationinsights');

appInsights.setup("key")
  .setUseDiskRetryCaching(true)
  .setInternalLogging(false, false)
  .start();
If you're curious about the history of this issue, and why it spontaneously occurs, there's a long-running issue on GitHub here: https://github.com/Microsoft/ApplicationInsights-node.js/issues/180

429 Too Many Requests Error While Running Mocha Test - NodeJS / Swagger API

I used Swagger (swagger.io) to build my app's API and everything works fine in the production environment. However, when I run some Mocha tests against these APIs, I keep getting this error with status code 429 Too Many Requests:
Unhandled rejection Error: Too Many Requests
at Request.callback (..../node_modules/superagent/lib/node/index.js:698:17)
at IncomingMessage.<anonymous> (..../node_modules/superagent/lib/node/index.js:922:12)
at emitNone (events.js:91:20)
at IncomingMessage.emit (events.js:185:7)
at endReadableNT (_stream_readable.js:926:12)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickDomainCallback (internal/process/next_tick.js:122:9)
Usually this error happens after I run the Mocha test twice in a row (i.e. it starts throwing this error from the third run onwards). I suspected I was overloading my localhost server, but the error persists even after I close and reopen the server.
I would be super grateful if someone could point me in the right direction to solve this problem.
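The question doesn't show where the 429 originates, so the sketch below is an assumption: Swagger-generated Node APIs typically run on Express, and a common source of 429s is a rate-limiting middleware or an upstream gateway. If the limiter is something like express-rate-limit inside the app itself, one way to keep repeated Mocha runs from tripping it is to relax it for the test environment; the middleware, the environment check, and the numbers here are all hypothetical.

const rateLimit = require('express-rate-limit');

// Hypothetical setup: an express-rate-limit middleware protecting the API.
// Under test, either skip it entirely or raise the ceiling so back-to-back
// Mocha runs against localhost don't hit the limit.
const isTest = process.env.NODE_ENV === 'test';

const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,    // 15-minute window
  max: isTest ? 100000 : 100,  // effectively unlimited while testing
  skip: () => isTest,          // or bypass rate limiting in tests altogether
});

// app.use('/api/', apiLimiter);

If the limiter sits in front of the app (a proxy, an API gateway, or a shared store such as Redis), its counters will persist across server restarts, which would explain why closing and reopening the server doesn't help.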
