@aws-sdk/client-sts - TypeError: (0 , smithy_client_1.parseRfc3339DateTimeWithOffset) is not a function - node.js

I'm facing an issue using the STS client on Lambdas.
The code below was working two days ago.
const {
  STSClient,
  AssumeRoleCommand,
} = require('@aws-sdk/client-sts')

const stsClient = new STSClient({
  region: process.env.REGION || 'eu-west-1',
})

const params = new AssumeRoleCommand({
  RoleArn: process.env.MARKETPLACE_RESOLVE_CUSTOMER_ROLE_ARN,
  RoleSessionName: `${process.env.AWS_LAMBDA_FUNCTION_NAME}-${new Date().getTime()}`,
})

const assumedRoleOutput = await stsClient.send(params)
Now it always throws the following exception:
2023-02-08T08:07:18.684Z 1a7dd68d-da00-4b07-935c-2f6bc95f996f ERROR TypeError: (0 , smithy_client_1.parseRfc3339DateTimeWithOffset) is not a function
    at deserializeAws_queryCredentials (/opt/nodejs/node_modules/@aws-sdk/client-sts/dist-cjs/protocols/Aws_query.js:860:117)
    at deserializeAws_queryAssumeRoleResponse (/opt/nodejs/node_modules/@aws-sdk/client-sts/dist-cjs/protocols/Aws_query.js:756:32)
    at deserializeAws_queryAssumeRoleCommand (/opt/nodejs/node_modules/@aws-sdk/client-sts/dist-cjs/protocols/Aws_query.js:119:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /opt/nodejs/node_modules/@aws-sdk/client-sts/node_modules/@aws-sdk/middleware-serde/dist-cjs/deserializerMiddleware.js:7:24
    at async /opt/nodejs/node_modules/@aws-sdk/client-sts/node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:14:20
    at async StandardRetryStrategy.retry (/opt/nodejs/node_modules/@aws-sdk/client-sts/node_modules/@aws-sdk/middleware-retry/dist-cjs/StandardRetryStrategy.js:51:46)
    at async /opt/nodejs/node_modules/@aws-sdk/client-sts/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:5:22
    at async getMarketplaceResolveCustomerRoleCredentials (/var/task/utils/marketplaceUtils.js:27:29)
    at async Object.resolveMarketplaceCustomer (/var/task/utils/marketplaceUtils.js:50:5) {
  '$metadata': { attempts: 1, totalRetryDelay: 0 }
}
I've tried it with @aws-sdk/client-sts at versions 3.266.0 and 3.224.0.

The problem was an incorrect aws-sdk version installed during the creation of the layer.
I use a Dockerfile to install all the dependencies used by my Lambdas, and was using commands like:
RUN npm i @aws-sdk/client-sts@3.224.0
RUN npm i @aws-sdk/client-marketplace-entitlement-service@3.266.0
So there was probably an inconsistency between the SDK versions.
I first tried creating the layer without the AWS SDK modules (removed by hand), and it worked (the SDK is included in the Lambda execution environment).
But then I faced another error with @aws-sdk/client-marketplace-entitlement-service that had been fixed some time ago (see the linked GitHub issue).
So I changed the commands in the Dockerfile to install the latest release of the major version I wanted, as follows:
RUN npm i @aws-sdk/client-sts@3
RUN npm i @aws-sdk/client-marketplace-entitlement-service@3
and now it works!
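If you want to confirm this kind of mismatch before rebuilding a layer, one option is to log the versions that actually resolve at runtime (a diagnostic sketch, not part of the fix above; it assumes both packages expose their package.json, which the v3 clients do):
// The error comes from the generated deserializer calling
// parseRfc3339DateTimeWithOffset on @aws-sdk/smithy-client, which only
// exports that helper in newer releases; a client that resolves an older
// smithy-client than it expects reproduces the TypeError.
const stsVersion = require('@aws-sdk/client-sts/package.json').version
const smithyVersion = require('@aws-sdk/smithy-client/package.json').version
console.log({ stsVersion, smithyVersion })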

Related

Puppeteer incompatible with Vercel serverless functions? (Next 13)

I deployed an API route handler with Next.js 13 that uses Puppeteer. When I call this API route in production, I get this error message in 'Function Logs':
[POST] /api/getLinkedin
00:00:21:62
2023-02-02T00:00:23.103Z 7545c99c-ea8f-41d3-8771-97eb786502cb ERROR Error: Could not find Chromium (rev. 1083080). This can occur if either
1. you did not perform an installation before running the script (e.g. `npm install`) or
2. your cache path is incorrectly configured (which is: /home/sbx_user1051/.cache/puppeteer).
For (2), check out our guide on configuring puppeteer at https://pptr.dev/guides/configuration.
at ChromeLauncher.resolveExecutablePath (/var/task/node_modules/puppeteer-core/lib/cjs/puppeteer/node/ProductLauncher.js:127:27)
at ChromeLauncher.executablePath (/var/task/node_modules/puppeteer-core/lib/cjs/puppeteer/node/ChromeLauncher.js:205:25)
at ChromeLauncher.launch (/var/task/node_modules/puppeteer-core/lib/cjs/puppeteer/node/ChromeLauncher.js:93:37)
at async handler (/var/task/.next/server/pages/api/getLinkedin.js:27:21)
at async Object.apiResolver (/var/task/node_modules/next/dist/server/api-utils/node.js:372:9)
at async NextNodeServer.runApi (/var/task/node_modules/next/dist/server/next-server.js:488:9)
at async Object.fn (/var/task/node_modules/next/dist/server/next-server.js:751:37)
at async Router.execute (/var/task/node_modules/next/dist/server/router.js:253:36)
at async NextNodeServer.run (/var/task/node_modules/next/dist/server/base-server.js:384:29)
at async NextNodeServer.handleRequest (/var/task/node_modules/next/dist/server/base-server.js:322:20)
RequestId: 7545c99c-ea8f-41d3-8771-97eb786502cb Error: Runtime exited with error: exit status 1
Runtime.ExitError
I went to the Puppeteer docs and I changed the config file as shown:
const {join} = require('path');

/**
 * @type {import("puppeteer").Configuration}
 */
module.exports = {
  // Changes the cache location for Puppeteer.
  cacheDirectory: join(__dirname, '.cache', 'puppeteer'),
};
That didn't fix it.
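For what it's worth, the usual workaround on serverless platforms is to stop relying on Puppeteer's download cache entirely and point puppeteer-core at a prebuilt binary. A minimal sketch inside the API route handler, assuming @sparticuz/chromium's API, where executablePath is an async function:
const chromium = require('@sparticuz/chromium');
const puppeteer = require('puppeteer-core');

// Launch the prebuilt Chromium shipped in the package instead of the one
// Puppeteer downloads into ~/.cache, which doesn't exist on the serverless host
const browser = await puppeteer.launch({
  args: chromium.args,
  executablePath: await chromium.executablePath(),
  headless: chromium.headless,
});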

Using node-ssh into Runcloud gives a different Node version to what is actually installed/used by NVM on the server

I have a server on Runcloud on which I've used NVM to install version 16.14.2.
If I SSH in via any SSH client and run node -v, I do get 16.14.2.
But when I wrote a script to SSH into the server and run the same command, I get 10.0.
I was previously advised to create an alias; I tried to follow some steps I came across for that, but it did not fix my issue. Furthermore, referencing the path to the desired version of npm inside nvm gives me an error that this version of npm cannot run with Node 10.0:
npm does not support Node.js v10.0.0\n
Below is my code:
import { NodeSSH } from 'node-ssh'

const ssh = new NodeSSH()

const { environment } = await inquirer.prompt([
  {
    name: 'environment',
    message: `Environment?`,
    type: 'input',
    default: 'development'
  }
])

if (build) {
  const sshConfig = sshConfigs[environment]
  console.log(chalk.yellow(`Connecting to ${sshConfig.host}...`))
  await ssh.connect(sshConfig)
  console.log(chalk.green('Connected'))
  console.log(chalk.yellow('Executing release...'))
  const nodePath = '~/.nvm/versions/node/v16.14.2/bin/node'
  const npmPath = '~/.nvm/versions/node/v16.14.2/bin/npm'
  console.log(await ssh.execCommand(`${npmPath} -v`))
  ssh.dispose()
  console.log(chalk.green('Release completed'))
}
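One hedged explanation: node-ssh runs commands in a non-interactive shell, which never sources the ~/.bashrc / ~/.profile lines that initialize nvm, so the distribution's default Node (v10) is found first. A sketch that loads nvm explicitly before calling npm, assuming the standard ~/.nvm install location:
// Source nvm inside the remote command; non-interactive SSH sessions
// skip the shell init files where nvm is normally loaded
const result = await ssh.execCommand('. ~/.nvm/nvm.sh && nvm use 16.14.2 > /dev/null && npm -v')
console.log(result.stdout || result.stderr)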

Puppeteer: TypeError: Readable is not a constructor

I have been trying to use Puppeteer@15.5.0 to generate a PDF on the server side in Node.js.
import { launch } from 'puppeteer';
...
const browser = await launch();
const page = await browser.newPage();
await page.setContent('COME ON!');
console.log(await page.content());
const pdfBuffer = await page.pdf();
The console.log statement gives me the expected output of <html><head></head><body>COME ON!</body></html>
It then runs into the following error:
Error:
TypeError: Readable is not a constructor
at getReadableFromProtocolStream (/Users/kaziehsanaziz/Work/DocSpace/repos/docspace-pay/.webpack/service/src/public-lambda.js:405775:12)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async Page.pdf (/Users/kaziehsanaziz/Work/DocSpace/repos/docspace-pay/.webpack/service/src/public-lambda.js:403129:26)
at async /Users/kaziehsanaziz/Work/DocSpace/repos/docspace-pay/.webpack/service/src/public-lambda.js:329729:31
Puppeteer cannot be bundled using webpack, and the issue was that I was trying to do just that. In my case, since I was using Serverless, the solution was to tell the serverless-bundle plugin not to bundle Puppeteer:
bundle:
  packager: yarn
  esbuild: true
  forceExclude:
    - aws-sdk
    - puppeteer
  externals:
    - puppeteer-core
    - '@sparticuz/chrome-aws-lambda'
forceExclude is doing the trick here for the local environment; externals is what helps in the production environment.
I have also run into this issue. It occurs when webpack (v5 in my case) bundles Puppeteer. I solved it by explicitly declaring webpack's ignore directive when importing a file that uses Puppeteer. I did this via a dynamic ES import, but a static one could be done in a very similar way:
const loadModule = async (modulePath) => {
  try {
    // webpackIgnore tells webpack to leave this import for Node to resolve at runtime
    return await import(/* webpackIgnore: true */ modulePath)
  } catch (e) {
    // ImportError is a custom error class defined elsewhere in this codebase
    throw new ImportError(`Unable to import module ${modulePath}`)
  }
}
const renderPdf = (await loadModule('../../renderPdf/index.js')).default
Use a require('puppeteer') statement instead of an import puppeteer statement.
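Read literally, that suggestion amounts to something like this sketch (whether it helps depends on how your bundler treats CommonJS requires):
// Instead of: import { launch } from 'puppeteer';
const { launch } = require('puppeteer');
const browser = await launch();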

How do I ensure I get the same npm packages?

So I have an application whose tests are not passing. The hypothesis is that the machine I cloned it to had a different version of Node, so I changed the node/npm version to match the machine where the Mocha tests pass. I assumed that would download the exact same packages as the original machine, but it doesn't, not even when I remove the caret from the package version numbers, and not even if I do an rm -rf node_modules.
How do I ensure that I have the exact same packages, including the packages that the packages in the package.json file depend on?
The test that is failing is the following one:
the error is: expected 400 to equal 201
1) should create and get notes for a patient? /api/v2/patients/:patientId/notes

27 passing (3s)
1 pending
1 failing

1) Testing patients end points.
     given an user
       should create and get notes for a patient? /api/v2/patients/:patientId/notes:
     Error: Internal server error: expected 400 to equal 201
      at Object.<anonymous> (test/testutils/auth.utils.ts:226:11)
      at Generator.next (<anonymous>)
      at fulfilled (test/testutils/auth.utils.ts:5:58)
      at runMicrotasks (<anonymous>)
      at processTicksAndRejections (node:internal/process/task_queues:96:5)
Which, I believe, refers to this method:
export async function createPatientNote(
  tokenObjectDto: TokenObjectDto,
  patientPui: string,
  noteText: string,
  isPublic: boolean,
  noteType: NoteType,
  visibleAfter?: Date,
  title?: string
): Promise<ViewNoteDto> {
  const urlPath = p().api.v2.patients.$patientId.notes.$url.replace(':patientId', patientPui)
  const newNoteDto = new NewNoteDto(
    patientPui,
    noteType,
    noteText,
    !isPublic,
    visibleAfter,
    title,
    [
      new class implements ITag {
        key = 'Test';
        value = `createPatientNode for ${patientPui}`;
      }
    ]
  )
  const res = await request(app)
    .post(urlPath)
    .set('Authorization', 'Bearer ' + tokenObjectDto.token)
    .send(newNoteDto);
  expect(res.status).to.equal(StatusCodes.CREATED);
  const viewNoteDto = res.body
  expect(viewNoteDto.pui).equal(patientPui)
  return viewNoteDto
}
This is the package-lock.json file:
https://gist.github.com/ldco2016/3bb682442b6d16976f8ffbd4ec53809d
It seems to throw an error before, or on, the attempt to execute the urlPath request.

Error:"Failed to get the current sub/segment from the context" when use AWS X-ray in Lambda with node.js

I am trying to implement AWS X-Ray in my current project (using Node.js and the Serverless framework). While wiring X-Ray into one of my Lambda functions, I got this problem:
Error: Failed to get the current sub/segment from the context.
at Object.contextMissingRuntimeError [as contextMissing] (/.../node_modules/aws-xray-sdk-core/lib/context_utils.js:21:15)
at Object.getSegment (/.../node_modules/aws-xray-sdk-core/lib/context_utils.js:92:45)
at Object.resolveSegment (/.../node_modules/aws-xray-sdk-core/lib/context_utils.js:73:19).....
Code below:
import { DynamoDB } from "aws-sdk";
import AWSXRay from 'aws-xray-sdk';
export const handler = async (event, context, callback) => {
const dynamo = new DynamoDB.DocumentClient({
service: new DynamoDB({ region })
});
AWSXRay.captureAWSClient(dynamo.service);
try {
// call dynamoDB function
} catch(err) {
//...
}
}
For this problem, I used the solution from
https://forums.aws.amazon.com/thread.jspa?messageID=821510&#821510
The other solution I tried is from https://forums.aws.amazon.com/thread.jspa?messageID=829923&#829923
The code is like:
import AWSXRay from 'aws-xray-sdk';
const AWS = AWSXRay.captureAWS(require('aws-sdk'));
export const handler = async (event, context, callback) => {
const dynamo = new AWS.DynamoDB.DocumentClient({region});
//....
}
Still not working...
Any kind of help is appreciated.
As you mention, this happens because you're running locally (using the serverless-offline plugin), and serverless-offline doesn't provide a valid X-Ray context.
One possible way to get past this error and still be able to call your function locally is to set the AWS_XRAY_CONTEXT_MISSING environment variable to LOG_ERROR instead of RUNTIME_ERROR (the default).
Something like:
serverless invoke local -f functionName -e AWS_XRAY_CONTEXT_MISSING=LOG_ERROR
I didn't test this with the Serverless framework, but it worked when the same error occurred calling an Amplify function locally:
amplify function invoke <function-name>
I encountered this error too. To fix it, I disabled X-Ray when running locally; it isn't needed there, because I can just add debug log statements at that time.
This is what the code would look like:
let AWS = require('aws-sdk');

// Only wrap the SDK with X-Ray when running in AWS; serverless-offline sets IS_OFFLINE
if (!process.env.IS_OFFLINE) {
  const AWSXRay = require('aws-xray-sdk');
  AWS = AWSXRay.captureAWS(require('aws-sdk'));
}
If you don't like this approach, you can set up a context-missing strategy so that a missing context doesn't raise an error (link here):
AWSXRay.setContextMissingStrategy("LOG_ERROR");
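For instance, guarded the same way as the snippet above (a sketch; IS_OFFLINE is the flag serverless-offline sets):
const AWSXRay = require('aws-xray-sdk');

// Downgrade the missing-context runtime error to a log message when offline
if (process.env.IS_OFFLINE) {
  AWSXRay.setContextMissingStrategy('LOG_ERROR');
}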
If you don't want the error cluttering up your output, you can add a helper that ignores only that error.
// Removes the noisy "Failed to get the current sub/segment from the context" X-Ray error
export function disableXrayError() {
  // Keep a reference to the real console.error so the mock doesn't call itself
  const originalError = console.error;
  console.error = jest.fn((err) => {
    if (err.message.includes("Failed to get the current sub/segment from the context")) {
      return;
    }
    originalError(err);
  });
}
