Why am I getting a rate limiting error on POST requests? - node.js

Whenever I send a POST request, I get the following error. I can't track down where it's coming from.
SentryError: HTTP Error (429): Creation of this event was denied due to rate limiting
at new SentryError (/home/bwsuser/bws-verwaltung/node_modules/@sentry/core/dist/error.js:9:28)
at ClientRequest.<anonymous> (/home/bwsuser/bws-verwaltung/node_modules/@sentry/node/dist/transports/base.js:46:44)
at Object.onceWrapper (events.js:315:30)
at emitOne (events.js:116:13)
at ClientRequest.emit (events.js:211:7)
at HTTPParser.parserOnIncomingClient (_http_client.js:543:21)
at HTTPParser.parserOnHeadersComplete (_http_common.js:112:17)
at TLSSocket.socketOnData (_http_client.js:440:20)
at emitOne (events.js:116:13)
at TLSSocket.emit (events.js:211:7)
at addChunk (_stream_readable.js:263:12)
at readableAddChunk (_stream_readable.js:250:11)
at TLSSocket.Readable.push (_stream_readable.js:208:10)
at TLSWrap.onread (net.js:597:20)
Does anyone have any idea what might be causing this, or at least where to start looking? I've run out of ideas here :/

Yes, certainly: you need to look at Quota Management | Sentry Documentation. This mostly happens when you run an app from a local dev server. Here's the summary straight from the docs:
Sentry completes a thorough evaluation of each event to determine if it counts toward your quota, as outlined in this overview. Detailed documentation for each evaluation is linked throughout. Before completing any of these evaluations, Sentry confirms that each event includes a valid DSN and project as well as whether the event can be parsed. In addition, for error events, Sentry validates that the event contains valid fingerprint information. If any of these items are missing or incorrect, the event is rejected.
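If the 429 simply means the project is exceeding its quota, one option is to reduce how many events the client sends in the first place. This is a minimal sketch, not part of the answer above, assuming the @sentry/node SDK; the sample rate and filter are illustrative:

const Sentry = require('@sentry/node');

// Reduce client-side event volume so fewer events count against the quota.
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  sampleRate: 0.25, // send roughly 1 in 4 error events
  beforeSend(event) {
    // Return null to drop an event entirely, e.g. known noisy errors
    // (the 'noisy-module' logger name here is just a placeholder).
    if (event.logger === 'noisy-module') {
      return null;
    }
    return event;
  },
});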
Another hint: try disabling all your browser extensions and check whether that call goes away. I just figured out that, in my case, it was being triggered by a Chrome extension.
Hope this helps.

Related

Connection to MongoDB Atlas cluster is closing unexpectedly

I am getting the Mongo error 'connection to MongoDB closed' for every other database request that I make. I looked it up and the suggested solutions were:
whitelisting the IP - added 0.0.0.0/0 to the allowed IPs,
setting up the URI correctly,
checking the connection - I was able to connect from the Studio 3T application.
The error log says:
MongoError: connection 72 to cluster0-shard-00-01-****.mongodb.net:27017 closed
at Function.MongoError.create (/**/node_modules/mongodb-core/lib/error.js:29:11)
at TLSSocket.<anonymous> (/**/node_modules/mongodb-core/lib/connection/connection.js:214:22)
at Object.onceWrapper (events.js:285:13)
at TLSSocket.emit (events.js:202:15)
at TLSSocket.EventEmitter.emit (domain.js:446:20)
at _handle.close (net.js:611:12)
at Socket.done (_tls_wrap.js:412:7)
at Object.onceWrapper (events.js:285:13)
at Socket.emit (events.js:197:13)
at Socket.EventEmitter.emit (domain.js:446:20)
at TCP._handle.close (net.js:611:12)
I have tried everything with no results. If someone can point me in the right direction, or if anyone who has faced and resolved this can guide me, it would be much appreciated.
Go to your cluster's Network Access settings and add your IP address to the IP whitelist again. I am using Wi-Fi and my IP changes periodically.
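If re-whitelisting the IP doesn't fully fix it, another thing to try (a hedged sketch, not part of the answer above) is passing keep-alive and socket-timeout options through to the driver so idle Atlas connections are less likely to be dropped. Option names below assume mongoose 5.x with the MongoDB 3.x driver:

const mongoose = require('mongoose');

mongoose.connect(process.env.MONGODB_ATLAS_URI, {
  useNewUrlParser: true,
  keepAlive: true,               // send TCP keep-alive probes on idle sockets
  keepAliveInitialDelay: 300000, // start probing after 5 minutes of inactivity
  socketTimeoutMS: 45000,        // give up on a socket after 45s of inactivity
});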

Application Insights: Unable to verify the first certificate in Node.js

Application Insights keeps throwing the following error every few minutes.
ApplicationInsights:Sender [ 'Ingestion endpoint could not be reached 5 consecutive times. There may be resulting telemetry loss. Most recent error:',
{ Error: unable to verify the first certificate
at TLSSocket.<anonymous> (_tls_wrap.js:1116:38)
at ZoneDelegate.invokeTask (/usr/src/app/node_modules/zone.js/dist/zone-node.js:275:35)
at Zone.runTask (/usr/src/app/node_modules/zone.js/dist/zone-node.js:151:47)
at TLSSocket.ZoneTask.invoke (/usr/src/app/node_modules/zone.js/dist/zone-node.js:345:33)
at emitNone (events.js:106:13)
at TLSSocket.emit (events.js:208:7)
at TLSSocket._finishInit (_tls_wrap.js:643:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:473:38) code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } ]
I reviewed the GITHUB DISCUSSION thread and tried some of the proposed solutions, but they did not work.
Here is the code I am using to connect to Application Insights.
let appInsights = require('applicationinsights');

// Initialize the SDK with the instrumentation key and enable auto-collection.
appInsights.setup(config.APPINSIGHTS_KEY.trim())
  .setAutoDependencyCorrelation(true) // correlate telemetry across async calls
  .setAutoCollectRequests(true)       // incoming HTTP requests
  .setAutoCollectPerformance(true)    // performance counters
  .setAutoCollectExceptions(true)     // uncaught exceptions
  .setAutoCollectDependencies(true)   // outgoing HTTP and database calls
  .setAutoCollectConsole(true)        // console.log output
  .setUseDiskRetryCaching(true)       // cache telemetry to disk and retry failed sends
  .start();
I am not 100% sure whether there is any telemetry loss, but getting these errors all the time is annoying. Please help.
I've updated the GitHub discussion thread. A fix on the ingestion side is in the works.
https://github.com/Microsoft/ApplicationInsights-node.js/issues/180#issuecomment-475699485
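While waiting for that fix, it can help to confirm whether the certificate chain fails outside the SDK as well. This is a minimal sketch, assuming the classic ingestion endpoint dc.services.visualstudio.com; use the endpoint shown in your own logs if it differs:

const https = require('https');

https.get('https://dc.services.visualstudio.com/v2/track', (res) => {
  console.log('TLS handshake succeeded, HTTP status:', res.statusCode);
  res.resume(); // drain the response
}).on('error', (err) => {
  // UNABLE_TO_VERIFY_LEAF_SIGNATURE here would point to a proxy or a missing
  // intermediate certificate on the machine, not to the applicationinsights SDK.
  console.error('Request failed:', err.code || err.message);
});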

Google+ API OAuth2 service error

I have followed the document (https://hyperledger.github.io/composer/latest/tutorials/google_oauth2_rest) to set up my business network, using my own Google account.
When I access http://localhost:3000/auth/google and click Allow, I get an error after several minutes of waiting.
The error is as follows:
{
  "error":{
    "statusCode":500,
    "name":"InternalOAuthError",
    "message":"Failed to obtain access token",
    "oauthError":{
      "code":"ECONNREFUSED",
      "errno":"ECONNREFUSED",
      "syscall":"connect",
      "address":"172.217.24.13",
      "port":443
    },
    "stack":"Error: connect ECONNREFUSED 172.217.24.13:443
at Strategy.OAuth2Strategy._createOAuthError (/home/composer/node_modules/passport-oauth2/lib/strategy.js:379:17)
at /home/composer/node_modules/passport-oauth2/lib/strategy.js:166:45
at /home/composer/node_modules/oauth/lib/oauth2.js:191:18
at ClientRequest.<anonymous> (/home/composer/node_modules/oauth/lib/oauth2.js:162:5)
at emitOne (events.js:116:13)
at ClientRequest.emit (events.js:211:7)
at TLSSocket.socketErrorListener (_http_client.js:387:9)
at emitOne (events.js:116:13)
at TLSSocket.emit (events.js:211:7)
at emitErrorNT (internal/streams/destroy.js:64:8)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickDomainCallback (internal/process/next_tick.js:218:9)"
  }
}
It's trying to contact accounts.google.com at the IP address 172.217.24.13.
https://ipinfo.io/172.217.24.13
That is almost certainly some kind of local networking problem. Are you in an office with a firewall, or with a proxy enabled? Because the connection is refused, the flow cannot proceed to the next step, which is the redirect to http://localhost:3000/explorer.
Maybe you need HTTPS_PROXY set for HTTPS requests. AFAIK, it can be either a complete URL or a "host[:port]" value.
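A quick connectivity check (not part of the original answer): if the snippet below also fails with ECONNREFUSED, the problem is at the network/proxy level rather than in the Composer REST server or the OAuth2 configuration.

const https = require('https');

https.get('https://accounts.google.com/.well-known/openid-configuration', (res) => {
  console.log('Reached accounts.google.com, status:', res.statusCode);
  res.resume();
}).on('error', (err) => {
  console.error('Cannot reach accounts.google.com:', err.code || err.message);
});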

429 Too Many Requests Error While Running Mocha Test - NodeJS / Swagger API

I used Swagger (swagger.io) to build my app's API and everything works fine in the production environment. However, when I run some Mocha tests against these APIs, I keep getting this error with status code 429 Too Many Requests:
Unhandled rejection Error: Too Many Requests
at Request.callback (..../node_modules/superagent/lib/node/index.js:698:17)
at IncomingMessage.<anonymous> (..../node_modules/superagent/lib/node/index.js:922:12)
at emitNone (events.js:91:20)
at IncomingMessage.emit (events.js:185:7)
at endReadableNT (_stream_readable.js:926:12)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickDomainCallback (internal/process/next_tick.js:122:9)
Usually this error happens after I have run the Mocha tests twice in a row (i.e. it starts appearing from the third run onwards). I suspected I was overloading my localhost server, but the error persists even after I close and reopen the server.
I would be super grateful if someone could point me in the right direction to solve this problem.
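One place to start looking, as a hedged sketch: superagent (which the stack trace points to) rejects on 4xx responses, and the rejection carries the full response, so you can log the rate-limit headers and see which layer is throttling. The URL and port below are placeholders; use whatever your tests actually hit.

const request = require('superagent');

(async () => {
  try {
    await request.post('http://localhost:10010/some-endpoint').send({});
  } catch (err) {
    if (err.status === 429 && err.response) {
      // Retry-After / X-RateLimit-* headers usually reveal whether the limit
      // comes from your own middleware, a gateway, or a third-party API.
      console.log(err.response.headers);
    } else {
      throw err;
    }
  }
})();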

What does "MongoError: Error: corrupt bson message" mean?

I'm suddenly running into this issue with MongoDB. We are running a NodeJS app with npm mongoose (v4.1.3), npm mongodb (v2.0.42), and MongoDB (v3.0.10). We are running a replica set with a primary and one secondary. This coincides with a recently deployed new version of our app in production, but we haven't changed versions of MongoDB or mongoose recently, so I'm not sure why this is suddenly happening.
I can't find any useful information on what causes this error, or what it means. After the error occurs, our mongoose connection locks up and all interaction with the database becomes a black hole of unresponsiveness. Rebooting the app is the only way to restore database connectivity.
I'm interested in what this error message means and how to prevent it from happening.
The error has happened while interacting with 2 different collections, with 2 different queries. The queries are very basic.
MongoError: Error: corrupt bson message
at .messageHandler (../node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/server.js:245:29)
at Socket.<anonymous> (../node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/connection.js:181:18)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at readableAddChunk (_stream_readable.js:177:18)
at Socket.Readable.push (_stream_readable.js:135:10)
at TCP.onread (net.js:542:20)
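Not an answer, but a way to get more visibility into the lock-up: logging mongoose connection-level events shows whether the driver ever reports a disconnect or reconnect after the "corrupt bson message" error appears. A minimal sketch:

const mongoose = require('mongoose');

mongoose.connection.on('error', (err) => console.error('mongoose error:', err));
mongoose.connection.on('disconnected', () => console.warn('mongoose disconnected'));
mongoose.connection.on('reconnected', () => console.info('mongoose reconnected'));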
