Wrong AWS request signature caused by OpenTelemetry HTTPS plugin - Node.js

When using @opentelemetry/plugin-https and the aws-sdk together in a Node.js application, the OpenTelemetry plugin adds the traceparent header to each AWS request. This works fine as long as the aws-sdk does not need to retry a request. When the aws-sdk retries a request, the following errors can occur:
InvalidSignatureException: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
The first AWS request contains the following headers:
traceparent: '00-32c9b7adee1da37fad593ee38e9e479b-875169606368a166-01'
Authorization: 'AWS4-HMAC-SHA256 Credential=<credential>, SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-target, Signature=<signature>'
Note that SignedHeaders does not include traceparent.
The retried request contains the following headers:
traceparent: '00-c573e391a455a207469ffa4fb75b3cab-6f20c315628cfcc0-01'
Authorization: AWS4-HMAC-SHA256 Credential=<credential>, SignedHeaders=host;traceparent;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-target, Signature=<signature>
Note that SignedHeaders now includes traceparent.
Before the retried request is sent, @opentelemetry/plugin-https sets a new traceparent header, which invalidates the signature of the AWS request.
Here is code that reproduces the issue (you may need to run the script a few times before hitting the rate limit that causes the retries):
const opentelemetry = require("@opentelemetry/api");
const { NodeTracerProvider } = require("@opentelemetry/node");
const { SimpleSpanProcessor } = require("@opentelemetry/tracing");
const { JaegerExporter } = require("@opentelemetry/exporter-jaeger");

const provider = new NodeTracerProvider({
  plugins: {
    https: {
      enabled: true,
      path: "@opentelemetry/plugin-https"
    }
  }
});
const exporter = new JaegerExporter({ serviceName: "test" });
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.register();

const AWS = require("aws-sdk");

const main = async () => {
  const cwl = new AWS.CloudWatchLogs({ region: "us-east-1" });
  const promises = new Array(100).fill(true).map(() => new Promise((resolve, reject) => {
    cwl.describeLogGroups(function (err, data) {
      if (err) {
        console.log(err.name);
        console.log("Got error:", err.message);
        console.log("ERROR Request Authorization:");
        console.log(this.request.httpRequest.headers.Authorization);
        console.log("ERROR Request traceparent:");
        console.log(this.request.httpRequest.headers.traceparent);
        console.log("Retry count:", this.retryCount);
        reject(err);
        return;
      }
      resolve(data);
    });
  }));
  const result = await Promise.all(promises);
  console.log(result.length);
};

main().catch(console.error);
Possible solutions:
1. Ignore all calls to AWS in @opentelemetry/plugin-https.
Ignoring the calls to AWS means losing all spans for AWS requests.
2. Add the traceparent header to the unsignableHeaders in the aws-sdk: AWS.Signers.V4.prototype.unsignableHeaders.push("traceparent");
Changing the prototype feels like a hack, and it also does not handle the case where another node module uses a different version of the aws-sdk.
Is there another solution that would let me keep the spans for AWS requests while guaranteeing that the signature of every AWS request stays correct?
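For reference, a rough sketch of what option 1 could look like as plugin configuration, assuming the https plugin config of that plugin version accepts an ignoreOutgoingUrls matcher list (the regex is only an example); as noted above, this is the option that loses the AWS spans:

const { NodeTracerProvider } = require("@opentelemetry/node");

// Sketch of option 1: suppress tracing for outgoing AWS calls so the
// traceparent header is never added to signed requests.
// Assumes the https plugin config supports `ignoreOutgoingUrls`.
const provider = new NodeTracerProvider({
  plugins: {
    https: {
      enabled: true,
      path: "@opentelemetry/plugin-https",
      ignoreOutgoingUrls: [/amazonaws\.com/]
    }
  }
});
provider.register();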
Update (16.12.2020):
The issue seems to be fixed in AWS SDK v3.
The following code throws the correct error (ThrottlingException):
const opentelemetry = require("@opentelemetry/api");
const { NodeTracerProvider } = require("@opentelemetry/node");
const { SimpleSpanProcessor } = require("@opentelemetry/tracing");
const { JaegerExporter } = require("@opentelemetry/exporter-jaeger");
const { CloudWatchLogs } = require("@aws-sdk/client-cloudwatch-logs");

const provider = new NodeTracerProvider({
  plugins: {
    https: {
      enabled: true,
      path: "@opentelemetry/plugin-https"
    }
  }
});
const exporter = new JaegerExporter({ serviceName: "test" });
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.register();

const main = async () => {
  const cwl = new CloudWatchLogs({ region: "us-east-1" });
  const promises = new Array(100).fill(true).map(() => new Promise((resolve, reject) => {
    cwl.describeLogGroups({ limit: 50 })
      .then(resolve)
      .catch((err) => {
        console.log(err.name);
        console.log("Got error:", err.message);
        reject(err);
      });
  }));
  const result = await Promise.all(promises);
  console.log(result.length);
};

main().catch(console.error);

Related

HTTP 502 Error in an AWS Amplify Project with an API Gateway and Node.js Lambda Function

I have an AWS Amplify project with an API Gateway and a Node.js Lambda function. Whenever I hit the API and it makes a connection to the RDS PostgreSQL DB, I get back an HTTP 502 error. I'm not sure what to do next to resolve it. Can anyone suggest some potential causes of this error and how I can troubleshoot and fix it?
I've been trying to adjust things in the Lambda in the hope that it will fix the issue, but the problem could lie elsewhere in the flow, such as the API Gateway or the RDS DB.
NOTE: This is a sample project that I'm working on; it isn't perfect, I know. Feedback is always appreciated. Thanks!
/**
 * @type {import('@types/aws-lambda').APIGatewayProxyHandler}
 */
var pg = require('pg');

exports.handler = async (event) => {
  try {
    const rds_host = "";
    const name = "";
    const password = "";
    const db_name = "";
    const port = 5432;
    const connString = `postgres://${name}:${password}@${rds_host}:${port}/${db_name}`;
    const client = new pg.Client(connString);
    await client.connect();
    const query = {
      text: 'SELECT * FROM projects'
    };
    const res = await client.query(query);
    const data = res.rows;
    await client.end();
    const response = {
      statusCode: 200,
      headers: {
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Headers": "*"
      },
      body: JSON.stringify(data),
    };
    return response;
  } catch (error) {
    console.log('error: ', error);
  }
};
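One detail worth noting in the handler above: the catch block only logs the error, so on failure the function resolves with undefined, and API Gateway's Lambda proxy integration reports a missing or malformed response as a 502. A minimal sketch of a handler skeleton that always returns a proxy-shaped response; doWork is a hypothetical placeholder for the DB query logic above:

// Minimal sketch: always return an API Gateway proxy-shaped response,
// even on errors, so the integration never sees an undefined result.
exports.handler = async (event) => {
  try {
    const data = await doWork(event); // doWork: hypothetical placeholder for the query above
    return { statusCode: 200, body: JSON.stringify(data) };
  } catch (error) {
    console.log('error: ', error);
    return { statusCode: 500, body: JSON.stringify({ message: 'Internal server error' }) };
  }
};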

S3 getSignedUrl v2 equivalent in AWS JavaScript SDK v3

I just started using the aws-sdk in my app to upload files to S3, and I'm debating whether to use aws-sdk v2 or v3.
V2 is the whole package, which is super bloated considering I only need the S3 services, not the myriad of other options. However, the documentation is very cryptic and I'm having a really hard time getting the equivalent getSignedUrl function to work in v3.
In v2, I have this code to sign the URL and it works fine. I am using Express on the server:
import aws from 'aws-sdk';

const signS3URL = (req, res, next) => {
  const s3 = new aws.S3({ region: 'us-east-2' });
  const { fileName, fileType } = req.query;
  const s3Params = {
    Bucket: process.env.S3_BUCKET,
    Key: fileName,
    ContentType: fileType,
    Expires: 60,
  };
  s3.getSignedUrl('putObject', s3Params, (err, data) => {
    if (err) {
      next(err);
    }
    res.json(data);
  });
}
Now I've been reading documentation and examples trying to get the v3 equivalent to work, but I can't find any working example of how to use it. Here is how I have set it up so far:
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

export const signS3URL = async (req, res, next) => {
  console.log('Sign')
  const { fileName, fileType } = req.query;
  const s3Params = {
    Bucket: process.env.S3_BUCKET,
    Key: fileName,
    ContentType: fileType,
    Expires: 60,
    // ACL: 'public-read'
  };
  const s3 = new S3Client()
  s3.config.region = 'us-east-2'
  const command = new PutObjectCommand(s3Params)
  console.log(command)
  await getSignedUrl(s3, command).then(signature => {
    console.log(signature)
    res.json(signature)
  }).catch(e => next(e))
}
There are some errors in this code, and the first I can identify is the creation of the command variable using the PutObjectCommand function provided by the SDK. The documentation does not make clear to me what I need to pass as the "input": https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/interfaces/putobjectcommandinput.html
Does anyone with experience using aws-sdk v3 know how to do this?
Also, a side question: where can I find the API reference for v2? All I can find is the SDK docs saying "v3 now available", and I can't seem to find the v2 reference.
Thanks for your time.
The following code gives you a signed URL in a JSON body under the key signedUrl.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const signS3URL = async (req, res, next) => {
  const { fileName, fileType } = req.query;
  const s3Params = {
    Bucket: process.env.S3_BUCKET,
    Key: fileName,
    ContentType: fileType,
    // ACL: 'bucket-owner-full-control'
  };
  const s3 = new S3Client({ region: 'us-east-2' });
  const command = new PutObjectCommand(s3Params);
  try {
    const signedUrl = await getSignedUrl(s3, command, { expiresIn: 60 });
    console.log(signedUrl);
    res.json({ signedUrl });
  } catch (err) {
    console.error(err);
    next(err);
  }
}
Keep the ACL as bucket-owner-full-control if you want the AWS account owning the Bucket to access the files.
You can go to the API Reference for both JS SDK versions from here.
In reference to the AWS docs and @GSSwain's answer (I cannot comment yet, as I'm new), this link shows multiple getSignedUrl examples.
Below is an upload example copied from the AWS docs:
// Import the required AWS SDK clients and commands for Node.js
import {
  CreateBucketCommand,
  DeleteObjectCommand,
  PutObjectCommand,
  DeleteBucketCommand
} from "@aws-sdk/client-s3";
import { s3Client } from "./libs/s3Client.js"; // Helper function that creates an Amazon S3 service client module.
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import fetch from "node-fetch";

// Set parameters
// Create a random name for the Amazon Simple Storage Service (Amazon S3) bucket and key
export const bucketParams = {
  Bucket: `test-bucket-${Math.ceil(Math.random() * 10 ** 10)}`,
  Key: `test-object-${Math.ceil(Math.random() * 10 ** 10)}`,
  Body: "BODY"
};

export const run = async () => {
  try {
    // Create an S3 bucket.
    console.log(`Creating bucket ${bucketParams.Bucket}`);
    await s3Client.send(new CreateBucketCommand({ Bucket: bucketParams.Bucket }));
    console.log(`Waiting for "${bucketParams.Bucket}" bucket creation...`);
  } catch (err) {
    console.log("Error creating bucket", err);
  }
  try {
    // Create a command to put the object in the S3 bucket.
    const command = new PutObjectCommand(bucketParams);
    // Create the presigned URL.
    const signedUrl = await getSignedUrl(s3Client, command, {
      expiresIn: 3600,
    });
    console.log(
      `\nPutting "${bucketParams.Key}" using signedUrl with body "${bucketParams.Body}" in v3`
    );
    console.log(signedUrl);
    const response = await fetch(signedUrl, { method: 'PUT', body: bucketParams.Body });
    console.log(
      `\nResponse returned by signed URL: ${await response.text()}\n`
    );
  } catch (err) {
    console.log("Error creating presigned URL", err);
  }
  try {
    // Delete the object.
    console.log(`\nDeleting object "${bucketParams.Key}" from bucket`);
    await s3Client.send(
      new DeleteObjectCommand({ Bucket: bucketParams.Bucket, Key: bucketParams.Key })
    );
  } catch (err) {
    console.log("Error deleting object", err);
  }
  try {
    // Delete the S3 bucket.
    console.log(`\nDeleting bucket ${bucketParams.Bucket}`);
    await s3Client.send(
      new DeleteBucketCommand({ Bucket: bucketParams.Bucket })
    );
  } catch (err) {
    console.log("Error deleting bucket", err);
  }
};
run();

AWS Lambda SSM calls randomly time out

I have a Lambda deployed on AWS, in a VPC that has internet access via NAT. The deployment is done using Serverless.
The Lambda uses some Middy middlewares and fetches some credentials from SSM.
The problem is that the SSM fetch randomly times out!
Here's the lambda code:
/* requirements are omitted */
const authorize = async (_event, _context) => {
  try {
    const ssm = new SSM({
      maxRetries: 6, // lowers a chance to hit service rate limits, default is 3
      retryDelayOptions: { base: 200 }
    });
    const params = {
      Names: ["param1", "param2"],
      WithDecryption: true
    };
    const fetch = () => new Promise(resolve => {
      ssm.getParameters(params, function (err, data) {
        if (err) resolve(err, err.stack); // an error occurred
        else resolve(data); // successful response
      });
    });
    const res = await fetch();
    return {
      statusCode: 200,
      body: JSON.stringify(res)
    };
  } catch (_err) {
    console.error(_err);
    return {
      statusCode: 500,
      body: 'error'
    };
  }
};

export default middy(authorize)
  .use(warmup({ waitForEmptyEventLoop: false }))
  .use(doNotWaitForEmptyEventLoop({ runOnError: true }))
  .use(httpSecurityHeaders());
The Lambda is timing out because SSM is throttling you; with your current configuration (6 retries, 200 ms base delay) it takes around 26 seconds before your Lambda gives up.
You are running into the SSM standard throughput limits.
You can enable increased throughput with:
aws ssm update-service-setting --setting-id arn:aws:ssm:*region*:*account-id*:servicesetting/ssm/parameter-store/high-throughput-enabled --setting-value true
Be aware that an extra cost is incurred for every getParameter call afterwards ($0.05 per 10,000 requests).
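Independently of that service setting, a common way to reduce the number of GetParameters calls (and therefore the throttling) is to cache the parameters in module scope so warm invocations reuse them. A minimal sketch, using the same aws-sdk v2 SSM client and parameter names as the question:

import { SSM } from "aws-sdk";

const ssm = new SSM();

// Cache the SSM response in module scope so warm Lambda invocations
// reuse it instead of calling GetParameters on every request.
let cachedParams;

export const getParams = async () => {
  if (!cachedParams) {
    cachedParams = await ssm
      .getParameters({ Names: ["param1", "param2"], WithDecryption: true })
      .promise();
  }
  return cachedParams;
};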

Unexpected behavior using zip-stream NPM on Google k8s

I am working on creating a zip of multiple files on the server and streaming it to the client while it is being created. Initially, I was using ArchiverJS. It worked fine when I appended buffers to it, but it failed when I needed to add streams. Then, after some discussion on GitHub, I switched to Node zip-stream, which started working fine, thanks to jntesteves. But as I deploy the code on GKE (Kubernetes), I started getting network failed errors for huge files.
Here is my sample code:
const ZipStream = require("zip-stream");
// Modules used to download the source files
const https = require("https");
const http = require("http");
const request = require("request");

/**
 * @summary Adding readable stream provided by https module into zipStreamer using entry method
 */
const handleEntryCB = ({ readableStream, zipStreamer, fileName, resolve }) => {
  readableStream.on("error", error => {
    console.error("Error while listening readableStream : ", error);
    resolve("done");
  });
  zipStreamer.entry(readableStream, { name: fileName }, error => {
    if (!error) {
      resolve("done");
    } else {
      console.error("Error while listening zipStream readableStream : ", error);
      resolve("done");
    }
  });
};

/**
 * @summary Handling downloading of files using native https, http and request modules
 */
const handleUrl = ({ elem, zipStreamer }) => {
  return new Promise((resolve, reject) => {
    let fileName = elem.fileName;
    const url = elem.url;
    // Used in most of the cases
    if (url.startsWith("https")) {
      https.get(url, readableStream => {
        handleEntryCB({ readableStream, zipStreamer, url, fileName, resolve, reject });
      });
    } else if (url.startsWith("http")) {
      http.get(url, readableStream => {
        handleEntryCB({ readableStream, zipStreamer, url, fileName, resolve, reject });
      });
    } else {
      const readableStream = request(url);
      handleEntryCB({ readableStream, zipStreamer, url, fileName, resolve, reject });
    }
  });
};

const downloadZipFile = async (data, resp) => {
  let { urls = [] } = data || {};
  if (!urls.length) {
    throw new Error("URLs are mandatory.");
  }
  // Output zip name
  const outputFileName = `Test items.zip`;
  console.log("Downloading using streams.");
  // Initialize zip-stream instance
  const zipStreamer = new ZipStream();
  // Set headers to response
  resp.writeHead(200, {
    "Content-Type": "application/zip",
    "Content-Disposition": `attachment; filename="${outputFileName}"`,
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS"
  });
  // piping zipStreamer to the resp so that client starts getting response
  // as soon as first chunk is added to the zipStreamer
  zipStreamer.pipe(resp);
  for (const elem of urls) {
    await handleUrl({ elem, zipStreamer });
  }
  zipStreamer.finish();
};

// `app` is an existing Express app and `restPrefix` a route prefix defined elsewhere
app.post(restPrefix + "/downloadFIle", (req, resp) => {
  try {
    const { data } = req.body || {};
    downloadZipFile(data, resp);
  } catch (error) {
    console.error("[FileBundler] unknown error : ", error);
    if (resp.headersSent) {
      resp.end("Unknown error while archiving.");
    } else {
      resp.status(500).end("Unknown error while archiving.");
    }
  }
});
I tested with 7-8 files of ~4.5 GB each locally and it works fine, but when I tried the same on Google Kubernetes Engine, I got a network failed error.
After some more research, I increased the server timeout on k8s to 3000 seconds, and then it started working fine, but I guess increasing the timeout is not a good solution.
Is there anything I am missing at the code level, or can you suggest a good GKE deployment configuration for a server that serves large file downloads to many concurrent users?
I have been stuck on this for the past 1.5+ months. Please help!
Edit 1: I edited the timeout in the ingress, i.e. Network services -> Load Balancing -> edit the timeout in the service.
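Besides the load balancer timeout edited above, the Node HTTP server itself has idle/keep-alive timeouts that can cut off long streamed downloads. A minimal sketch of relaxing them on the server that hosts the Express app (the values are illustrative only, not recommendations):

const http = require("http");
const express = require("express");

const app = express();
// ... routes, including the /downloadFIle handler above ...

const server = http.createServer(app);

// Long-running streamed downloads can exceed Node's default socket/idle
// timeouts; relax them so the connection is not dropped mid-stream.
server.setTimeout(0);              // disable the per-socket inactivity timeout
server.keepAliveTimeout = 620000;  // keep-alive slightly above the LB timeout
server.headersTimeout = 650000;    // must be greater than keepAliveTimeout

server.listen(process.env.PORT || 8080);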

AWS X-Ray Node.js traces not showing

I am using Lambda (Node 8.10) and working with AWS X-Ray. I am calling an external IP address using a promise.
When I make the call, other traces are shown, but I cannot get the custom segment.
I am not using any frameworks, just pure Node.js.
const AWSXRay = require('aws-xray-sdk-core');
AWSXRay.enableManualMode();
AWSXRay.captureHTTPsGlobal(require('https'));
const https = AWSXRay.captureHTTPs(require('https'));

exports.handler = async (event, context, callback) => {
  // other code
  const response = await doSomething(event);
  return callback(error, response);
};

async function doSomething(event) {
  return new Promise((resolve, reject) => {
    const segment = new AWSXRay.Segment('custom_segment_here');
    AWSXRay.captureAsyncFunc('send', (subsegment) => {
      const options = {
        hostname: host,
        port: 443,
        path: '/',
        method: 'GET',
        XRaySegment: subsegment,
      };
      const req = https.request(options, (res) => {
        const code = res.statusCode;
        resolve(code);
      });
      req.on('error', (error) => {
        subsegment.addError(error);
        reject(error);
      });
      subsegment.close();
      req.end();
    }, segment);
  });
}
In the Lambda scenario, Lambda itself is responsible for creating segments; the AWS X-Ray SDK only creates subsegments and then emits them. Based on your code snippet, you created a segment (const segment = new AWSXRay.Segment('custom_segment_here');) inside a Lambda function, which cannot be emitted, so you cannot see it in the console. Hope my answer is clear. :)
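A minimal sketch of that approach, assuming automatic mode (the SDK default) is acceptable so that captureAsyncFunc and the captured https client attach subsegments to the segment Lambda already created; the host example.com is a placeholder:

// Minimal sketch: let Lambda own the segment and only create a subsegment
// for the outbound HTTPS call.
const AWSXRay = require('aws-xray-sdk-core');
const https = AWSXRay.captureHTTPs(require('https'));

const doSomething = (host) =>
  new Promise((resolve, reject) => {
    // Attaches a subsegment to the segment Lambda already created.
    AWSXRay.captureAsyncFunc('send', (subsegment) => {
      const req = https.request({ hostname: host, port: 443, path: '/', method: 'GET' }, (res) => {
        subsegment.close();
        resolve(res.statusCode);
      });
      req.on('error', (error) => {
        subsegment.close(error);
        reject(error);
      });
      req.end();
    });
  });

exports.handler = async (event) => {
  const statusCode = await doSomething('example.com'); // placeholder host
  return { statusCode };
};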
