Node AWS Lambda fetch request failing - node.js

I am using node-fetch to perform a request to an API (hosted on AWS Lambda/API Gateway with Serverless Framework) from a lambda. The lambda is failing with the below invocation error:
{
  "errorType": "FetchError",
  "errorMessage": "request to https://[API].us-east-2.amazonaws.com/[ENDPOINT] failed, reason: connect ETIMEDOUT [IP]:443",
  "code": "ETIMEDOUT",
  "message": "request to https://[API].us-east-2.amazonaws.com/[ENDPOINT] failed, reason: connect ETIMEDOUT [IP]:443",
  "type": "system",
  "errno": "ETIMEDOUT",
  "stack": [
    "FetchError: request to https://[API].us-east-2.amazonaws.com/[ENDPOINT] failed, reason: connect ETIMEDOUT [IP]:443",
    " at ClientRequest.<anonymous> (/var/task/node_modules/node-fetch/lib/index.js:1461:11)",
    " at ClientRequest.emit (events.js:315:20)",
    " at TLSSocket.socketErrorListener (_http_client.js:426:9)",
    " at TLSSocket.emit (events.js:315:20)",
    " at emitErrorNT (internal/streams/destroy.js:92:8)",
    " at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)",
    " at processTicksAndRejections (internal/process/task_queues.js:84:21)"
  ]
}
Here is the lambda in question with extraneous code removed:
"use strict";
import { PrismaClient } from "@prisma/client";
import fetch from "node-fetch";

const prisma = new PrismaClient();

module.exports.handler = async (event, context, callback) => {
  const users = await prisma.user.findMany();
  for (const user of users) {
    await fetch(...); // this is where the error occurs
  }
};
The code works fine locally (both the code in the lambda itself and manually making the request). Because of that, I thought this might be fixed by setting up a NAT for the lambda/configuring the VPC to have external internet access, though I'm not sure how to do that with the Serverless Framework if that is indeed the issue. The lambda attempting to perform the fetch request is in the same VPC as the API. Any help or ideas are greatly appreciated!

I solved this by adding a VPC endpoint for the lambda function. I believe an alternative solution (though possibly more expensive) is to set up a NAT gateway for the Lambda.
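A rough sketch of that setup in serverless.yml (all IDs and the region here are placeholders, not values from the original post):

```yaml
# Hypothetical serverless.yml fragment: the function runs inside the VPC,
# and an Interface VPC Endpoint for execute-api lets it reach the private
# API Gateway API without a NAT gateway.
provider:
  name: aws
  vpc:
    securityGroupIds:
      - sg-0123456789abcdef0        # placeholder
    subnetIds:
      - subnet-0123456789abcdef0    # placeholder

resources:
  Resources:
    ExecuteApiVpcEndpoint:
      Type: AWS::EC2::VPCEndpoint
      Properties:
        VpcId: vpc-0123456789abcdef0          # placeholder
        ServiceName: com.amazonaws.us-east-2.execute-api
        VpcEndpointType: Interface
        SubnetIds:
          - subnet-0123456789abcdef0          # placeholder
        SecurityGroupIds:
          - sg-0123456789abcdef0              # placeholder
        PrivateDnsEnabled: true
```

With PrivateDnsEnabled, the API's default execute-api hostname resolves to the endpoint's private IPs from inside the VPC, so the fetch call no longer needs a route to the public internet.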

Related

Node Localstack Nodemailer SES errors

Using Nodemailer, Localstack, and the Node AWS SDK to send an email I am getting various errors:
AWS_SES_ENDPOINT=localhost:4566
Error: connect ECONNREFUSED 127.0.0.1:80
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1161:16)
AWS_SES_ENDPOINT=127.0.0.1:4566
uncaughtException: Invalid URL
TypeError [ERR_INVALID_URL]: Invalid URL
at new NodeError (node:internal/errors:371:5)
at onParseError (node:internal/url:552:9)
at new URL (node:internal/url:628:5)
at parseUrl (/something/node_modules/@aws-sdk/url-parser/dist-cjs/index.js:6:60)
at resolveEndpointsConfig (/something/node_modules/@aws-sdk/config-resolver/dist-cjs/endpointsConfig/resolveEndpointsConfig.js
AWS_SES_ENDPOINT=http://localhost:4578 and AWS_SES_ENDPOINT=http://127.0.0.1:4578
Error: read ECONNRESET
at TCP.onStreamRead (node:internal/stream_base_commons:220:20
AWS_SES_ENDPOINT=http://localhost:4566 and AWS_SES_ENDPOINT=smtp://127.0.0.1:4566
UnknownError
at deserializeAws_querySendRawEmailCommandError (/something/node_modules/@aws-sdk/client-ses/dist-cjs/protocols/Aws_query.js:2779:24)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async /something/node_modules/@aws-sdk/middleware-serde/dist-cjs/deserializerMiddleware.js:7:24
at async /something/node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:11:20
at async StandardRetryStrategy.retry (/something/node_modules/@aws-sdk/middleware-retry/dist-cjs/StandardRetryStrategy.js:51:46)
at async /something/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:6:22
AWS_SES_ENDPOINT=smtp://127.0.0.1:4578
Error [TimeoutError]: socket hang up
at connResetException (node:internal/errors:691:14)
at Socket.socketOnEnd (node:_http_client:471:23)
at Socket.emit (node:events:402:35)
at endReadableNT (node:internal/streams/readable:1343:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
code: 'ECONNRESET',
SES for nodemailer is set up like:
const sesConfig: SESClientConfig = {
  region: "us-east-1",
  endpoint: AWS_SES_ENDPOINT,
};
const ses = new SES(sesConfig);
Localstack is set up in docker-compose.yml like:
services:
  qatool-localstack:
    container_name: "my-localstack"
    image: localstack/localstack
    ports:
      - "127.0.0.1:4566:4566"
      - "127.0.0.1:4572:4572"
      - "127.0.0.1:4578:4578"
    environment:
      - SERVICES=sqs:4566,s3:4572,ses:4578
      - DEFAULT_REGION=us-east-1
      - DATA_DIR=${TMPDIR:-/tmp/}localstack/data
      - HOST_TMP_FOLDER=${TMPDIR:-/tmp/}localstack
    volumes:
      - "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - './localstackSetup.sh:/docker-entrypoint-initaws.d/make-s3.sh'
nodemailer and aws ses versions:
"@aws-sdk/client-ses": "^3.110.0",
"nodemailer": "^6.7.5",
The first two errors, for localhost:4566 and 127.0.0.1:4566, are because a protocol wasn't included. You can see the AWS SES client doesn't parse the endpoint config properly. For example:
ses.config.endpoint().then((val) => { console.log(JSON.stringify(val)); });
gives:
{"hostname":"","protocol":"localhost:","path":"4566"}
Here localhost ends up as the protocol instead of the hostname.
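The misparse can be reproduced with Node's built-in WHATWG URL parser, with no AWS SDK involved, which is roughly what the SDK's url-parser is doing:

```javascript
// Without a protocol, "localhost" is read as the URL scheme and the
// port ends up as the path -- matching the broken endpoint above.
const noProtocol = new URL("localhost:4566");
console.log(noProtocol.protocol); // "localhost:"
console.log(noProtocol.hostname); // ""

// With an explicit protocol, host and port parse as expected.
const withProtocol = new URL("http://localhost:4566");
console.log(withProtocol.hostname); // "localhost"
console.log(withProtocol.port); // "4566"

// A bare IP can't be parsed at all: "127.0.0.1" is not a valid scheme,
// which is where the ERR_INVALID_URL error above comes from.
try {
  new URL("127.0.0.1:4566");
} catch (err) {
  console.log(err.code); // "ERR_INVALID_URL"
}
```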
The connections to port 4578 didn't work for me. After running (replace from-email):
aws ses verify-email-identity --email-address from-email@from.com --endpoint-url=http://localhost:4566
These endpoints worked:
http://127.0.0.1:4566
http://localhost:4566
smtp://localhost:4566
smtp://127.0.0.1:4566
You can verify the email was sent by going to:
http://localhost:4566/_localstack/ses
Localstack doesn't send the actual email in the free version.
You can verify SES is running by going to:
http://localhost:4566/health

Truffle migrate --network bsc error: header not found

When trying to run truffle migrate --network bsc, truffle usually (not always) manages to deploy the migrations contract, then fails with an error: header not found.
Error [ERR_UNHANDLED_ERROR]: Unhandled error. ({ code: -32000, message: 'header not found' })
at new NodeError (node:internal/errors:363:5)
at Web3ProviderEngine.emit (node:events:354:17)
at D:\Contracts\novaria\node_modules\web3-provider-engine\index.js:54:14
at afterRequest (D:\Contracts\novaria\node_modules\web3-provider-engine\index.js:148:21)
at D:\Contracts\novaria\node_modules\web3-provider-engine\index.js:174:21
at D:\Contracts\novaria\node_modules\web3-provider-engine\index.js:232:9
at D:\Contracts\novaria\node_modules\async\internal\once.js:12:16
at replenish (D:\Contracts\novaria\node_modules\async\internal\eachOfLimit.js:61:25)
at D:\Contracts\novaria\node_modules\async\internal\eachOfLimit.js:71:9
at eachLimit (D:\Contracts\novaria\node_modules\async\eachLimit.js:43:36)
at D:\Contracts\novaria\node_modules\async\internal\doLimit.js:9:16
at end (D:\Contracts\novaria\node_modules\web3-provider-engine\index.js:211:5)
at Request._callback (D:\Contracts\novaria\node_modules\web3-provider-engine\subproviders\rpc.js:70:28)
at Request.self.callback (D:\Contracts\novaria\node_modules\request\request.js:185:22)
at Request.emit (node:events:365:28)
at Request.<anonymous> (D:\Contracts\novaria\node_modules\request\request.js:1154:10)
at Request.emit (node:events:365:28)
at IncomingMessage.<anonymous> (D:\Contracts\novaria\node_modules\request\request.js:1076:12)
at Object.onceWrapper (node:events:471:28)
at IncomingMessage.emit (node:events:377:35)
at endReadableNT (node:internal/streams/readable:1312:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
Here's the config for bsc network:
bsc: {
  provider: () => { return new HDWalletProvider(mnemonic, `https://bsc-dataseed2.binance.org/`) },
  network_id: 56,
  confirmations: 10,
  timeoutBlocks: 200,
  skipDryRun: true,
},
compilers: {
  solc: {
    version: "0.8.7", // Fetch exact version from solc-bin (default: truffle's version)
    // docker: true, // Use "0.5.1" you've installed locally with docker (default: false)
    settings: { // See the solidity docs for advice about optimization and evmVersion
      optimizer: {
        enabled: true,
        runs: 200
      },
Deploying to testnet and development works without issue. I have deployed to BSC with truffle in the past (though it's been a while). I've tried changing RPC URLs and messed around with timeout and confirmations (pretty sure those don't make a difference for this error). After searching the internet for solutions, the only answer that seems to have worked for people is to change the RPC, but I haven't had any luck with that. Does anyone have any suggestions?
I had the same problem today. I fixed it by using the WebSocket endpoint wss://bsc-ws-node.nariox.org:443 from the Smart Chain docs: https://docs.binance.org/smart-chain/developer/rpc.html
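In truffle-config.js that amounts to swapping the provider URL (a fragment, untested; mnemonic and the other settings are as in the question):

```javascript
// truffle-config.js sketch: point HDWalletProvider at the WSS endpoint
// instead of the HTTPS RPC that returned "header not found".
bsc: {
  provider: () =>
    new HDWalletProvider(mnemonic, "wss://bsc-ws-node.nariox.org:443"),
  network_id: 56,
  confirmations: 10,
  timeoutBlocks: 200,
  skipDryRun: true,
},
```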

Unable to connect to AWS Aurora Serverless with SSL

I'm working with AWS Aurora Serverless V1 (with Postgres engine) and I'm trying to connect using node-postgres. I was able to connect without SSL perfectly, but when I add the SSL config at the pg client I keep receiving this error:
{
  "message": "unable to get local issuer certificate",
  "stack": "Error: unable to get local issuer certificate\n at TLSSocket.onConnectSecure (_tls_wrap.js:1497:34)\n at TLSSocket.emit (events.js:315:20)\n at TLSSocket.EventEmitter.emit (domain.js:467:12)\n at TLSSocket._finishInit (_tls_wrap.js:932:8)\n at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:706:12)"
}
This is how the SSL config object is declared:
const ca = `-----BEGIN CERTIFICATE-----
MIIEBjCCAu6gAwIBAgIJAMc0ZzaSUK51MA0GCSqGSIb3DQEBCwUAMIGPMQswCQYD
VQQGEwJVUzEQMA4GA1UEBwwHU2VhdHRsZTETMBEGA1UECAwKV2FzaGluZ3RvbjEi
MCAGA1UECgwZQW1hem9uIFdlYiBTZXJ2aWNlcywgSW5jLjETMBEGA1UECwwKQW1h
em9uIFJEUzEgMB4GA1UEAwwXQW1hem9uIFJEUyBSb290IDIwMTkgQ0EwHhcNMTkw
ODIyMTcwODUwWhcNMjQwODIyMTcwODUwWjCBjzELMAkGA1UEBhMCVVMxEDAOBgNV
BAcMB1NlYXR0bGUxEzARBgNVBAgMCldhc2hpbmd0b24xIjAgBgNVBAoMGUFtYXpv
biBXZWIgU2VydmljZXMsIEluYy4xEzARBgNVBAsMCkFtYXpvbiBSRFMxIDAeBgNV
BAMMF0FtYXpvbiBSRFMgUm9vdCAyMDE5IENBMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEArXnF/E6/Qh+ku3hQTSKPMhQQlCpoWvnIthzX6MK3p5a0eXKZ
oWIjYcNNG6UwJjp4fUXl6glp53Jobn+tWNX88dNH2n8DVbppSwScVE2LpuL+94vY
0EYE/XxN7svKea8YvlrqkUBKyxLxTjh+U/KrGOaHxz9v0l6ZNlDbuaZw3qIWdD/I
6aNbGeRUVtpM6P+bWIoxVl/caQylQS6CEYUk+CpVyJSkopwJlzXT07tMoDL5WgX9
O08KVgDNz9qP/IGtAcRduRcNioH3E9v981QO1zt/Gpb2f8NqAjUUCUZzOnij6mx9
McZ+9cWX88CRzR0vQODWuZscgI08NvM69Fn2SQIDAQABo2MwYTAOBgNVHQ8BAf8E
BAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUc19g2LzLA5j0Kxc0LjZa
pmD/vB8wHwYDVR0jBBgwFoAUc19g2LzLA5j0Kxc0LjZapmD/vB8wDQYJKoZIhvcN
AQELBQADggEBAHAG7WTmyjzPRIM85rVj+fWHsLIvqpw6DObIjMWokpliCeMINZFV
ynfgBKsf1ExwbvJNzYFXW6dihnguDG9VMPpi2up/ctQTN8tm9nDKOy08uNZoofMc
NUZxKCEkVKZv+IL4oHoeayt8egtv3ujJM6V14AstMQ6SwvwvA93EP/Ug2e4WAXHu
cbI1NAbUgVDqp+DRdfvZkgYKryjTWd/0+1fS8X1bBZVWzl7eirNVnHbSH2ZDpNuY
0SBd8dj5F6ld3t58ydZbrTHze7JJOd8ijySAp4/kiu9UfZWuTPABzDa/DSdz9Dk/
zPW4CXXvhLmE02TA9/HeCw3KEHIwicNuEfw=
-----END CERTIFICATE-----`
const tls = require('tls') // needed for tls.checkServerIdentity below

module.exports = {
  rejectUnauthorized: true,
  ca,
  checkServerIdentity: (host, cert) => {
    const error = tls.checkServerIdentity(host, cert)
    if (
      error &&
      !cert.subject.CN.endsWith('.rds.amazonaws.com')
    ) {
      return error
    }
  },
}
I am downloading the certificate directly from Amazon https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html
The Aurora Serverless docs say that SSL is enforced by default, so I think I'm doing everything necessary to make the connection, but it isn't working. Maybe there is something about "serverless connections" that works differently from classic ones that I'm not understanding.
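For context, the exported object above is meant to be passed as the ssl option of the node-postgres client. A minimal sketch (the connection details are placeholders, and ./ssl-config is a hypothetical path for the module shown above):

```javascript
const { Pool } = require("pg");
// The module exporting { rejectUnauthorized, ca, checkServerIdentity }.
const ssl = require("./ssl-config");

// Placeholder connection details -- not from the original post.
const pool = new Pool({
  host: "mycluster.cluster-xxxxxxxx.us-east-2.rds.amazonaws.com",
  port: 5432,
  database: "postgres",
  user: "postgres",
  password: process.env.PGPASSWORD,
  ssl,
});

pool.query("SELECT 1").then(() => console.log("TLS connection OK"));
```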

Why can't my AWS Lambda node JS app access my MongoDB Atlas cluster?

Context :
I have just created a Node.JS application and deployed it with the Serverless Framework on AWS Lambda.
Problem :
I would like that application to be able to access my (free tier) MongoDB Atlas Cluster. For this I am using mongoose.
Setup :
I have got a IAM User with AdministratorAccess rights. This user has been authorized on my MongoDB Cluster.
I am using the authMechanism=MONGODB-AWS, therefore using the Token and Secret of that IAM user. The password has been correctly "url encoded".
This is the piece of code used to create a connection:
const uri = "mongodb+srv://myIAMtoken:myIAMsecret@cluster0.tfws6.mongodb.net/DBNAME?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority"
mongoose.connect(uri, {useNewUrlParser: true, useUnifiedTopology: true })
When I run this code on my laptop, the connection is made and I can retrieve the data needed.
However when I deploy this exact same code on AWS Lambda (through Serverless), I get this response :
message "Internal server error"
The trace on CloudWatch looks like this :
{
  "errorType": "Runtime.UnhandledPromiseRejection",
  "errorMessage": "MongoError: bad auth : aws sts call has response 403",
  "reason": {
    "errorType": "MongoError",
    "errorMessage": "bad auth : aws sts call has response 403",
    "code": 8000,
    "ok": 0,
    "codeName": "AtlasError",
    "name": "MongoError",
    "stack": [
      "MongoError: bad auth : aws sts call has response 403",
      " at MessageStream.messageHandler (/var/task/node_modules/mongodb/lib/cmap/connection.js:268:20)",
      " at MessageStream.emit (events.js:314:20)",
      " at processIncomingData (/var/task/node_modules/mongodb/lib/cmap/message_stream.js:144:12)",
      " at MessageStream._write (/var/task/node_modules/mongodb/lib/cmap/message_stream.js:42:5)",
      " at doWrite (_stream_writable.js:403:12)",
      " at writeOrBuffer (_stream_writable.js:387:5)",
      " at MessageStream.Writable.write (_stream_writable.js:318:11)",
      " at TLSSocket.ondata (_stream_readable.js:718:22)",
      " at TLSSocket.emit (events.js:314:20)",
      " at addChunk (_stream_readable.js:297:12)",
      " at readableAddChunk (_stream_readable.js:272:9)",
      " at TLSSocket.Readable.push (_stream_readable.js:213:10)",
      " at TLSWrap.onStreamRead (internal/stream_base_commons.js:188:23)"
    ]
  },
  "promise": {},
  "stack": [
    "Runtime.UnhandledPromiseRejection: MongoError: bad auth : aws sts call has response 403",
    " at process.<anonymous> (/var/runtime/index.js:35:15)",
    " at process.emit (events.js:314:20)",
    " at processPromiseRejections (internal/process/promises.js:209:33)",
    " at processTicksAndRejections (internal/process/task_queues.js:98:32)"
  ]
}
I thought it was a network access issue from AWS, so I tried fetching "http://google.com" , no problem, my node app could access the page and provide the response. My app has an internet access but cannot reach my MongoDB cloud instance. My MongoDB cluster is accessible from any IP address.
This is reaching the limits of my knowledge :-)
If you are using an IAM-type MongoDB user, you don't need the username and password in the connection string.
const uri = "mongodb+srv://cluster0.tfws6.mongodb.net/DBNAME?authSource=$external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority"
When your lambda connects to the MongoDB cluster, the IAM role it uses will be the lambda's execution role:
"arn:aws:iam::ACCOUNT_ID:role/SLS_SERVICE_NAME-ENVIRONMENT-AWS_REGION-lambdaRole"
"arn:aws:iam::123456789012:role/awesome-service-dev-us-east-1-lambdaRole"
Check the default IAM section of the Serverless Framework docs:
https://www.serverless.com/framework/docs/providers/aws/guide/iam/#the-default-iam-role
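As a sketch, the credential-free URI can be assembled like this (buildAwsIamUri is a hypothetical helper for illustration; only the cluster host and query options come from the question):

```javascript
// Build a MONGODB-AWS connection string with no credentials embedded.
// The driver then sources credentials from the environment -- inside a
// lambda, that's the execution role's temporary credentials.
function buildAwsIamUri(clusterHost, dbName) {
  const params = new URLSearchParams({
    authSource: "$external", // "$" is percent-encoded to %24
    authMechanism: "MONGODB-AWS",
    retryWrites: "true",
    w: "majority",
  });
  return `mongodb+srv://${clusterHost}/${dbName}?${params}`;
}

console.log(buildAwsIamUri("cluster0.tfws6.mongodb.net", "DBNAME"));
// mongodb+srv://cluster0.tfws6.mongodb.net/DBNAME?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority
```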

TypeError : i is not a function (AWS Lambda wrapped with epsagon)

This error popped up on a lambda when I upgraded the got npm module from 9.6.0 to 11.5.x.
I'm using the Serverless Framework to develop and deploy microservices, with the Epsagon wrapper for better monitoring.
I've been struggling with this error for the past 3 days. Any help would be much appreciated.
AWS lambda runtime: Node 10.x
following are a few npm packages
"serverless-webpack": "^5.3.3",
"terser-webpack-plugin": "^4.1.0",
"webpack": "^4.44.1"
"epsagon": "^1.82.0",
"got": "^11.6.0", (with got 9.0.6, there is no issue)
While I can't paste the entire code, the following is a snippet. If I use the same code in a plain index.js file and run it, I can't reproduce the issue.
const { body } = await got(path, {
  headers: { 'custom-key': customKey },
  responseType: 'json',
  method: 'POST',
  json: { ts: 1599227703000 },
});
The following log snippet is from cloudWatch.
{
  "errorType": "TypeError",
  "errorMessage": "i is not a function",
  "stack": [
    "TypeError: i is not a function",
    " at ClientRequest.patchedCallback (/opt/nodejs/node_modules/epsagon/dist/bundle.js:1:60626)",
    " at Object.onceWrapper (events.js:286:20)",
    " at ClientRequest.emit (events.js:203:15)",
    " at ClientRequest.EventEmitter.emit (domain.js:448:20)",
    " at ClientRequest.e.emit.f [as emit] (/var/task/epsagon_handlers/abcdNameChanged-epsagon.js:2:1080726)",
    " at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:565:21)",
    " at HTTPParser.parserOnHeadersComplete (_http_common.js:111:17)",
    " at TLSSocket.socketOnData (_http_client.js:451:20)",
    " at TLSSocket.emit (events.js:198:13)",
    " at TLSSocket.EventEmitter.emit (domain.js:448:20)"
  ]
}
Finally I cracked it.
If you enable Epsagon auto-tracing through the Epsagon web app, it adds a layer called epsagon-node-layer to that lambda.
Disabling auto-tracing stopped the error from occurring.
For more details, see: https://github.com/epsagon/epsagon-node/issues/295
