TypeError: i is not a function (AWS Lambda wrapped with Epsagon) - node.js

This error started popping up on a Lambda after I upgraded the got npm module from 9.6.0 to 11.5.x.
I'm using the Serverless Framework to develop and deploy microservices, with the Epsagon wrapper for better monitoring.
I've been struggling with this error for the past 3 days. Any help would be appreciated.
AWS lambda runtime: Node 10.x
Following are the relevant npm packages:
"serverless-webpack": "^5.3.3",
"terser-webpack-plugin": "^4.1.0",
"webpack": "^4.44.1",
"epsagon": "^1.82.0",
"got": "^11.6.0", (with got 9.6.0 there is no issue)
While I can't paste the entire code, the following is a snippet. If I use the same code in a plain index.js file and run it, I can't reproduce this issue.
const { body } = await got(path, {
  headers: { 'custom-key': customKey },
  responseType: 'json',
  method: 'POST',
  json: { ts: 1599227703000 },
});
The following log snippet is from CloudWatch:
{
  "errorType": "TypeError",
  "errorMessage": "i is not a function",
  "stack": [
    "TypeError: i is not a function",
    " at ClientRequest.patchedCallback (/opt/nodejs/node_modules/epsagon/dist/bundle.js:1:60626)",
    " at Object.onceWrapper (events.js:286:20)",
    " at ClientRequest.emit (events.js:203:15)",
    " at ClientRequest.EventEmitter.emit (domain.js:448:20)",
    " at ClientRequest.e.emit.f [as emit] (/var/task/epsagon_handlers/abcdNameChanged-epsagon.js:2:1080726)",
    " at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:565:21)",
    " at HTTPParser.parserOnHeadersComplete (_http_common.js:111:17)",
    " at TLSSocket.socketOnData (_http_client.js:451:20)",
    " at TLSSocket.emit (events.js:198:13)",
    " at TLSSocket.EventEmitter.emit (domain.js:448:20)"
  ]
}

I finally cracked it.
If you enable Epsagon auto-tracing through the Epsagon web app, it actually adds a layer called epsagon-node-layer to that Lambda.
Disabling auto-tracing made the error go away.
For more details, refer to: https://github.com/epsagon/epsagon-node/issues/295
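If you still want tracing after disabling the auto-instrumentation layer, Epsagon can instead be wired up manually in code. A minimal sketch, assuming the epsagon package is bundled with the function; the token and app name are placeholders:

```javascript
// Manual Epsagon instrumentation instead of the auto-tracing layer.
// EPSAGON_TOKEN and appName below are placeholders.
const epsagon = require('epsagon');

epsagon.init({
  token: 'EPSAGON_TOKEN',
  appName: 'my-service',
  metadataOnly: false,
});

// Wrap the handler explicitly rather than relying on the injected layer.
module.exports.handler = epsagon.lambdaWrapper(async (event) => {
  // ... your handler logic ...
  return { statusCode: 200 };
});
```

Since the instrumentation is now an explicit dependency in your bundle, it is versioned together with got and webpack, which makes conflicts like the one above easier to reproduce locally.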

Related

Truffle migrate --network bsc error: header not found

When trying to run truffle migrate --network bsc, Truffle usually (but not always) manages to deploy the Migrations contract, then fails with an error: header not found.
Error [ERR_UNHANDLED_ERROR]: Unhandled error. ({ code: -32000, message: 'header not found' })
at new NodeError (node:internal/errors:363:5)
at Web3ProviderEngine.emit (node:events:354:17)
at D:\Contracts\novaria\node_modules\web3-provider-engine\index.js:54:14
at afterRequest (D:\Contracts\novaria\node_modules\web3-provider-engine\index.js:148:21)
at D:\Contracts\novaria\node_modules\web3-provider-engine\index.js:174:21
at D:\Contracts\novaria\node_modules\web3-provider-engine\index.js:232:9
at D:\Contracts\novaria\node_modules\async\internal\once.js:12:16
at replenish (D:\Contracts\novaria\node_modules\async\internal\eachOfLimit.js:61:25)
at D:\Contracts\novaria\node_modules\async\internal\eachOfLimit.js:71:9
at eachLimit (D:\Contracts\novaria\node_modules\async\eachLimit.js:43:36)
at D:\Contracts\novaria\node_modules\async\internal\doLimit.js:9:16
at end (D:\Contracts\novaria\node_modules\web3-provider-engine\index.js:211:5)
at Request._callback (D:\Contracts\novaria\node_modules\web3-provider-engine\subproviders\rpc.js:70:28)
at Request.self.callback (D:\Contracts\novaria\node_modules\request\request.js:185:22)
at Request.emit (node:events:365:28)
at Request.<anonymous> (D:\Contracts\novaria\node_modules\request\request.js:1154:10)
at Request.emit (node:events:365:28)
at IncomingMessage.<anonymous> (D:\Contracts\novaria\node_modules\request\request.js:1076:12)
at Object.onceWrapper (node:events:471:28)
at IncomingMessage.emit (node:events:377:35)
at endReadableNT (node:internal/streams/readable:1312:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
Here's the config for bsc network:
bsc: {
  provider: () => new HDWalletProvider(mnemonic, `https://bsc-dataseed2.binance.org/`),
  network_id: 56,
  confirmations: 10,
  timeoutBlocks: 200,
  skipDryRun: true,
},
compilers: {
  solc: {
    version: "0.8.7", // Fetch exact version from solc-bin (default: truffle's version)
    // docker: true,  // Use "0.5.1" you've installed locally with docker (default: false)
    settings: {       // See the solidity docs for advice about optimization and evmVersion
      optimizer: {
        enabled: true,
        runs: 200
      }
    }
  }
}
Deploying to testnet and development works without issue. I have deployed to BSC with Truffle in the past (been a while, though). I've tried changing RPC URLs and played with the timeout and confirmations settings (pretty sure those don't affect this error). After searching the internet for solutions, the only answer that seems to have worked for people is changing the RPC endpoint, but I haven't had any luck with that. Does anyone have any suggestions?
I had the same problem today. I fixed it by using the WebSocket endpoint wss://bsc-ws-node.nariox.org:443 from the Smart Chain docs: https://docs.binance.org/smart-chain/developer/rpc.html
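Applied to the network config above, that fix amounts to swapping the HTTPS JSON-RPC URL for the WebSocket one; HDWalletProvider accepts wss URLs directly. A sketch of the changed truffle-config.js entry (all other settings kept from the question):

```javascript
// truffle-config.js (sketch) — same bsc network settings as above,
// but pointed at the WebSocket endpoint from the Binance docs.
const HDWalletProvider = require('@truffle/hdwallet-provider');

module.exports = {
  networks: {
    bsc: {
      provider: () =>
        new HDWalletProvider(mnemonic, 'wss://bsc-ws-node.nariox.org:443'),
      network_id: 56,
      confirmations: 10,
      timeoutBlocks: 200,
      skipDryRun: true,
    },
  },
};
```

"header not found" is typically the load-balanced RPC node lagging behind the chain head between polls, which is why switching transports or endpoints helps.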

Why can't my AWS Lambda node JS app access my MongoDB Atlas cluster?

Context:
I have just created a Node.js application and deployed it with the Serverless Framework on AWS Lambda.
Problem:
I would like that application to be able to access my (free-tier) MongoDB Atlas cluster. For this I am using mongoose.
Setup:
I have an IAM user with AdministratorAccess rights. This user has been authorized on my MongoDB cluster.
I am using authMechanism=MONGODB-AWS, therefore using the token and secret of that IAM user. The password has been correctly URL-encoded.
This is the piece of code used to create the connection:
const uri = "mongodb+srv://myIAMtoken:myIAMsecret@cluster0.tfws6.mongodb.net/DBNAME?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority"
mongoose.connect(uri, { useNewUrlParser: true, useUnifiedTopology: true })
When I run this code on my laptop, the connection is made and I can retrieve the data needed.
However, when I deploy this exact same code on AWS Lambda (through Serverless), I get this response:
message "Internal server error"
The trace on CloudWatch looks like this :
{
  "errorType": "Runtime.UnhandledPromiseRejection",
  "errorMessage": "MongoError: bad auth : aws sts call has response 403",
  "reason": {
    "errorType": "MongoError",
    "errorMessage": "bad auth : aws sts call has response 403",
    "code": 8000,
    "ok": 0,
    "codeName": "AtlasError",
    "name": "MongoError",
    "stack": [
      "MongoError: bad auth : aws sts call has response 403",
      " at MessageStream.messageHandler (/var/task/node_modules/mongodb/lib/cmap/connection.js:268:20)",
      " at MessageStream.emit (events.js:314:20)",
      " at processIncomingData (/var/task/node_modules/mongodb/lib/cmap/message_stream.js:144:12)",
      " at MessageStream._write (/var/task/node_modules/mongodb/lib/cmap/message_stream.js:42:5)",
      " at doWrite (_stream_writable.js:403:12)",
      " at writeOrBuffer (_stream_writable.js:387:5)",
      " at MessageStream.Writable.write (_stream_writable.js:318:11)",
      " at TLSSocket.ondata (_stream_readable.js:718:22)",
      " at TLSSocket.emit (events.js:314:20)",
      " at addChunk (_stream_readable.js:297:12)",
      " at readableAddChunk (_stream_readable.js:272:9)",
      " at TLSSocket.Readable.push (_stream_readable.js:213:10)",
      " at TLSWrap.onStreamRead (internal/stream_base_commons.js:188:23)"
    ]
  },
  "promise": {},
  "stack": [
    "Runtime.UnhandledPromiseRejection: MongoError: bad auth : aws sts call has response 403",
    " at process.<anonymous> (/var/runtime/index.js:35:15)",
    " at process.emit (events.js:314:20)",
    " at processPromiseRejections (internal/process/promises.js:209:33)",
    " at processTicksAndRejections (internal/process/task_queues.js:98:32)"
  ]
}
I thought it was a network-access issue from AWS, so I tried fetching "http://google.com": no problem, my Node app could access the page and return the response. So my app has internet access but cannot reach my MongoDB cloud instance. My MongoDB cluster is accessible from any IP address.
This is reaching the limits of my knowledge :-)
If you are using an IAM-type MongoDB user, you don't need the username + password in the connection string:
const uri = "mongodb+srv://cluster0.tfws6.mongodb.net/DBNAME?authSource=$external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority"
When your Lambda connects to the MongoDB cluster, the IAM role it uses is the Lambda's execution role:
"arn:aws:iam::ACCOUNT_ID:role/SLS_SERVICE_NAME-ENVIRONMENT-AWS_REGION-lambdaRole"
e.g. "arn:aws:iam::123456789012:role/awesome-service-dev-us-east-1-lambdaRole"
Check the default IAM section of the Serverless Framework docs:
https://www.serverless.com/framework/docs/providers/aws/guide/iam/#the-default-iam-role
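Concretely, the credential-free connection string can be assembled like this (host and database name are the placeholders from the question; this only builds the string — the actual connection still needs mongoose and an execution role that Atlas trusts):

```javascript
// Build a MONGODB-AWS connection string with no credentials embedded.
// Host and database name are placeholder values from the question.
const host = "cluster0.tfws6.mongodb.net";
const dbName = "DBNAME";
const params = new URLSearchParams({
  authSource: "$external",
  authMechanism: "MONGODB-AWS",
  retryWrites: "true",
  w: "majority",
});
const uri = `mongodb+srv://${host}/${dbName}?${params}`;
// With no username:password pair in the URI, the Node driver resolves
// AWS credentials from the environment — inside Lambda, that means the
// execution role's temporary credentials.
console.log(uri);
```

Note that Atlas must authorize the execution role's ARN (not the IAM user's), otherwise the STS call is rejected with exactly the 403 seen in the trace above.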

Firebase Auth testing with jest and emulator calling the login throws Error: Headers X-Client-Version forbidden

I am trying to log in a dummy user with the Firebase JS SDK locally. I have the default Firebase emulator running. After calling the function, I get the following exception:
Error: Headers X-Client-Version forbidden
at dispatchError (C:\Users\user\Documents\Projekte\Backend\functions\node_modules\jsdom\lib\jsdom\living\xhr\xhr-utils.js:62:19)
at validCORSPreflightHeaders (C:\Users\user\Documents\Projekte\Backend\functions\node_modules\jsdom\lib\jsdom\living\xhr\xhr-utils.js:99:5)
at Request.<anonymous> (C:\Users\user\Documents\Projekte\Backend\functions\node_modules\jsdom\lib\jsdom\living\xhr\xhr-utils.js:367:12)
at Request.emit (events.js:315:20)
at Request.onRequestResponse (C:\Users\user\Documents\Projekte\Backend\functions\node_modules\request\request.js:1059:10)
at ClientRequest.emit (events.js:315:20)
at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:641:27)
at HTTPParser.parserOnHeadersComplete (_http_common.js:126:17)
at Socket.socketOnData (_http_client.js:509:22)
at Socket.emit (events.js:315:20) undefined
at VirtualConsole.<anonymous> (node_modules/jsdom/lib/jsdom/virtual-console.js:29:45)
at dispatchError (node_modules/jsdom/lib/jsdom/living/xhr/xhr-utils.js:65:53)
at validCORSPreflightHeaders (node_modules/jsdom/lib/jsdom/living/xhr/xhr-utils.js:99:5)
at Request.<anonymous> (node_modules/jsdom/lib/jsdom/living/xhr/xhr-utils.js:367:12)
at Request.onRequestResponse (node_modules/request/request.js:1059:10)
console.log
t {
code: 'auth/network-request-failed',
message: 'A network error (such as timeout, interrupted connection or unreachable host) has occurred.',
a: null
}
at Object.<anonymous> (test/test.ts:275:12)
If I try to connect to my online project it works fine, but I want to perform my testing locally with the emulator.
Example code:
const app = firebase.initializeApp(firebaseConfig);
app.auth().useEmulator("http://localhost:9099");
app.firestore().settings({
  host: "localhost:8080",
  ssl: false,
});

test('Example test case', async () => {
  try {
    const cred: UserCredential = await app.auth().signInWithEmailAndPassword("foo@bar.de", "bla2377");
    expect(cred).toBeTruthy();
    expect(cred.user).toBeTruthy();
  } catch (e) {
    console.log(e);
    expect(true).toBeFalsy();
  }
});
Environment information:
Operating system version: Windows 10 Home 10.0.18363 Build 18363
Firebase SDK version: 8.2.3
Jest version: 26.6.3
Node version: 14
jsdom doesn't support wildcard Access-Control-Allow-Headers, which Firebase uses. This answer fixes the exception.
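One common way to sidestep jsdom's CORS preflight validation entirely is to run these tests in Jest's Node environment, since the emulator tests don't need a simulated DOM. A sketch (whether this matches the linked answer depends on your setup):

```javascript
// jest.config.js — use a plain Node environment instead of jsdom,
// so HTTP requests are not subject to jsdom's CORS header checks.
module.exports = {
  testEnvironment: 'node',
};
```

The same can be done per file with a `@jest-environment node` docblock comment at the top of the test file.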

Node AWS Lambda fetch request failing

I am using node-fetch to perform a request from a Lambda to an API (hosted on AWS Lambda/API Gateway with the Serverless Framework). The Lambda fails with the invocation error below:
{
  "errorType": "FetchError",
  "errorMessage": "request to https://[API].us-east-2.amazonaws.com/[ENDPOINT] failed, reason: connect ETIMEDOUT [IP]:443",
  "code": "ETIMEDOUT",
  "message": "request to https://[API].us-east-2.amazonaws.com/[ENDPOINT] failed, reason: connect ETIMEDOUT [IP]:443",
  "type": "system",
  "errno": "ETIMEDOUT",
  "stack": [
    "FetchError: request to https://[API].us-east-2.amazonaws.com/[ENDPOINT] failed, reason: connect ETIMEDOUT [IP]:443",
    " at ClientRequest.<anonymous> (/var/task/node_modules/node-fetch/lib/index.js:1461:11)",
    " at ClientRequest.emit (events.js:315:20)",
    " at TLSSocket.socketErrorListener (_http_client.js:426:9)",
    " at TLSSocket.emit (events.js:315:20)",
    " at emitErrorNT (internal/streams/destroy.js:92:8)",
    " at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)",
    " at processTicksAndRejections (internal/process/task_queues.js:84:21)"
  ]
}
Here is the Lambda in question, with extraneous code removed:
"use strict";
import { PrismaClient } from "@prisma/client";
import fetch from "node-fetch";

const prisma = new PrismaClient();

module.exports.handler = async (event, context, callback) => {
  const users = await prisma.user.findMany();
  for (const user of users) {
    await fetch(...); // this is where the error occurs
  }
};
The code works fine locally (both the code in the Lambda itself and manually making the request). Because of that, I thought this might be fixed by setting up a NAT for the Lambda/configuring the VPC to have external internet access, though I'm not sure how to do that with the Serverless Framework, if that is indeed the issue. The Lambda attempting the fetch request is in the same VPC as the API. Any help or ideas are greatly appreciated!
I solved this by adding a VPC endpoint for the Lambda function. I believe an alternative (though possibly more expensive) solution is to set up a NAT gateway for the Lambda.
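For context, this is roughly how a function is attached to a VPC in serverless.yml; all IDs below are placeholders. The execute-api interface endpoint itself is created separately (in the AWS console or as a CloudFormation resource), and once it exists, traffic to API Gateway stays inside the VPC:

```yaml
# serverless.yml (sketch) — attach the function to your VPC.
# Security group and subnet IDs are placeholder values.
provider:
  name: aws
  runtime: nodejs14.x
  vpc:
    securityGroupIds:
      - sg-00000000000000000
    subnetIds:
      - subnet-00000000000000000
      - subnet-11111111111111111
```

A Lambda inside a VPC has no internet route by default, which is why the fetch timed out: the choice is between a VPC endpoint (private route to the AWS service) and a NAT gateway (general outbound internet).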

Runtime.ImportModuleError Error: Cannot find module 'axios/lib/utils' Serverless

I am using the Serverless Framework with a Node.js backend. I have several microservices and all the others work fine, but I have now created a new microservice in which I have not used Axios at all, and yet it throws this error in the console.
Another oddity: on my local system it works perfectly, but as soon as I push it to the server it starts failing.
This is the sample code that throws the error:
const { IamAuthenticator } = require('ibm-watson/auth');
const NaturalLanguageUnderstandingV1 = require('ibm-watson/natural-language-understanding/v1');
const HtmlToText = require('html-to-text');

async function textAnalyse(req, res) {
  const naturalLanguageUnderstanding = new NaturalLanguageUnderstandingV1({
    version: '2019-07-12',
    authenticator: new IamAuthenticator({
      apikey: 'API KEY'
    }),
    url: 'https://URL/natural-language-understanding/api'
  });
  const analyzeParams = {
    'text': HtmlToText.fromString('Test text here'),
    'features': {
      'entities': {
        'sentiment': true,
        'limit': 100
      }
    }
  };
  const analysis = await naturalLanguageUnderstanding.analyze(analyzeParams);
  // prepare the response object
  res.send({ analysis: analysis });
}
Error in AWS CloudWatch:
{
  "errorType": "Runtime.ImportModuleError",
  "errorMessage": "Error: Cannot find module 'axios/lib/utils'",
  "stack": [
    "Runtime.ImportModuleError: Error: Cannot find module 'axios/lib/utils'",
    " at _loadUserApp (/var/runtime/UserFunction.js:100:13)",
    " at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)",
    " at Object.<anonymous> (/var/runtime/index.js:45:30)",
    " at Module._compile (internal/modules/cjs/loader.js:778:30)",
    " at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)",
    " at Module.load (internal/modules/cjs/loader.js:653:32)",
    " at tryModuleLoad (internal/modules/cjs/loader.js:593:12)",
    " at Function.Module._load (internal/modules/cjs/loader.js:585:3)",
    " at Function.Module.runMain (internal/modules/cjs/loader.js:831:12)",
    " at startup (internal/bootstrap/node.js:283:19)",
    " at bootstrapNodeJSCore (internal/bootstrap/node.js:623:3)"
  ]
}
I found the fix for this.
Calling a third-party API from the Lambda requires Axios internally (it is pulled in by the SDK), so you need to create a folder containing a package.json file with the dependency:
"dependencies": {
  "axios": "^0.19.2"
}
Then create the layer from the AWS console (left-side menu), and add that layer to your function.
After doing the above, the issue is resolved and the Axios dependency is added individually to the microservice.
Also, what you name the folders matters before zipping. Review these AWS docs on folder naming so you can import the library like you would in any other project:
https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
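The folder naming matters because Lambda extracts layers to /opt, and the Node.js runtime only resolves modules from /opt/nodejs/node_modules. So the layer zip should be laid out roughly like this (a sketch based on the AWS docs linked above; "layer/" is a placeholder name):

```
layer/
└── nodejs/
    ├── package.json      (declares the "axios" dependency above)
    └── node_modules/     (produced by running npm install inside nodejs/)
```

Zip the contents so that nodejs/ sits at the root of the archive; if the modules end up anywhere else, the runtime cannot find them and you get exactly the Cannot find module error shown above.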
