AWS S3 getSignedUrl() returns a 403 Forbidden Error - node.js

I'm trying to get a pre-signed URL from s3.getSignedUrl so I can upload a file/image directly from the React client. I do get a URL back, but whenever I open that link, or make a PUT request to it from the client, I always get a 403 Forbidden error. I'm not running this in a serverless environment.
// Code to get a presigned URL
const s3 = new AWS.S3({
  accessKeyId: keys.ACCESS_KEY,
  secretAccessKey: keys.SECRET_KEY,
});

const router = express.Router();

// This route returns a URL configured with the key we generate
router.get(
  '/api/image-upload/get-url',
  requireAuth,
  async (req: Request, res: Response) => {
    // We want the key to look like myUserId/12122113.jpeg,
    // where the filename is a random unique string
    const key = `${req.currentUser!.id}/${uuid()}.jpeg`;
    s3.getSignedUrl(
      'putObject',
      {
        Bucket: 'my-first-s3-bucket-1234567',
        ContentType: 'image/jpeg',
        Key: key,
      },
      (err, url) => res.send({ key, url, err })
    );
  }
);
I get an object back with key and url properties, but if I open that URL, or make a PUT request to it from the client side, I get a 403 Forbidden error:
// Error
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<AWSAccessKeyId>AKIA2YLSQ26Z6T3PRLSR</AWSAccessKeyId>
<StringToSign>GET 1616261901 /my-first-s3-bucket-12345/60559b4dc123830023031184/f852ca00-89a0-11eb-a3dc-07c38e9b7626.jpeg</StringToSign>
<SignatureProvided>TK0TFR+I79t8PPbtRW37GYaOo5I=</SignatureProvided>
<StringToSignBytes>47 45 54 0a 0a 0a 31 36 31 36 32 36 31 39 30 31 0a 2f 6d 79 2d 66 69 72 73 74 2d 73 33 2d 62 75 63 6b 65 74 2d 31 32 33 34 35 2f 36 30 35 35 39 62 34 64 63 31 32 33 38 33 30 30 32 33 30 33 31 31 38 34 2f 66 38 35 32 63 61 30 30 2d 38 39 61 30 2d 31 31 65 62 2d 61 33 64 63 2d 30 37 63 33 38 65 39 62 37 36 32 36 2e 6a 70 65 67</StringToSignBytes>
<RequestId>56RQCDS1X5GMF4JH</RequestId>
<HostId>LTA1+vXnwzGcPo70GmuKg0J7QDzW4+t+Ai9mgVqcerRKDbXkHBOnqU/7ZTvMLpyDf1CLZMYwSMY=</HostId>
</Error>
Please have a look at my S3 bucket policy and CORS configuration:
// S3 Bucket Policy
{
  "Version": "2012-10-17",
  "Id": "Policy1616259705897",
  "Statement": [
    {
      "Sid": "Stmt1616259703206",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectAcl"
      ],
      "Resource": "arn:aws:s3:::my-first-s3-bucket-1234567/*"
    }
  ]
}
// S3 CORS
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  },
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT", "POST", "DELETE"],
    "AllowedOrigins": ["https://ticketing.dev"],
    "ExposeHeaders": [
      "x-amz-server-side-encryption",
      "x-amz-request-id",
      "x-amz-id-2",
      "ETag"
    ],
    "MaxAgeSeconds": 3000
  }
]
I'm unable to resolve this issue. Please bear with me; I'm not good at asking questions. Thanks.

After a lot of debugging, I had to give my IAM user AmazonS3FullAccess to make it work.
Presumably some narrower, specific permission would also be enough to make a PUT request to an S3 presigned URL.
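If you'd rather not grant full access, a narrower IAM policy along these lines should suffice for the presigned PUT. This is an untested sketch; the bucket name is the one from the question:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::my-first-s3-bucket-1234567/*"
    }
  ]
}
```

A presigned URL only lets the caller perform actions the signing credentials themselves are allowed to perform, which is why the IAM permissions matter even though the URL is signed.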

Related

change multer uploaded file to ReadStream

I'm using NestJS with multer to read uploaded files.
The file uploads fine via a POST REST API.
I want to convert this file to a ReadableStream.
I'd like to avoid writing the file to disk and reading it again with createReadStream;
it would be better to convert it directly to a ReadableStream using the uploaded file's metadata.
export function ApiFile(fieldName: string) {
  return applyDecorators(UseInterceptors(FileInterceptor(fieldName)));
}

@Post("/file_upload")
@ApiFile('file')
create(
  @Body() createNewsDto: CreateNewsDto,
  @UploadedFile() file: Express.Multer.File,
) {
  console.log({ file });
  return this.myService.create(createNewsDto, file);
}
This is the file metadata:
{
file: {
fieldname: 'file',
originalname: 'screenshot.png',
encoding: '7bit',
mimetype: 'image/png',
buffer: <Buffer 59 10 4f 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 00 02 cf 00 00 02 1b 08 06 00 00 00 14 dd 73 8e 00 00 01 55 61 43 43 50 49 43 43 20 50 72 6f 66 69 ... 298432 more bytes>,
size: 298982
}
}
How can I achieve this?
I figured out how to send my files to a remote server.
You need to use Readable.from, which turns the buffer into a ReadStream.
If your file metadata looks like this,
{
file: {
fieldname: 'file',
originalname: 'screenshot.png',
encoding: '7bit',
mimetype: 'image/png',
buffer: <Buffer 59 10 4f 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 00 02 cf 00 00 02 1b 08 06 00 00 00 14 dd 73 8e 00 00 01 55 61 43 43 50 49 43 43 20 50 72 6f 66 69 ... 298432 more bytes>,
size: 298982
}
}
you can convert this metadata into a stream:
import { Readable } from 'stream';
import * as FormData from "form-data";

const formData = new FormData();
const stream = Readable.from(file.buffer);
formData.append("anyKeyValue", stream, {
  filename: file.originalname,
  contentType: file.mimetype
});
Then send it to the remote server with content type multipart/form-data.

How to fetch xml content from sftp server with nodejs

I want to fetch XML file content from an SFTP server:
let Client = require('ssh2-sftp-client');
let sftp = new Client();

sftp.connect({
  host: 'hostname',
  port: '22',
  username: 'username',
  password: 'password'
}).then(() => {
  return sftp.list('/outgoing/orders');
}).then((dir) => {
  dir.forEach(element => {
    const xmlList = sftp.list('/outgoing/orders/' + element.name);
    xmlList.then(xml => {
      xml.forEach(xmlFile => {
        sftp.get('/outgoing/orders/' + element.name + '/' + xmlFile.name).then(xmlContent => {
          console.log(xmlContent);
        });
      });
    });
  });
}).catch(err => {
  console.error(err);
});
<Buffer 3c 3f 78 6d 6c 20 76 65 72 73 69 6f 6e 3d 22 31 2e 30 22 3f 3e 0d 0a 3c 4f 72 64 65 72 4d 65 73 73 61 67 65 42 61 74 63 68 20 62 61 74 63 68 4e 75 6d ... 15509 more bytes>
Right now it logs the raw buffer shown above.
How can I get the content as XML text, like this?
<?xml version="1.0"?>
<OrderMessageBatch batchNumber="3012566157">
...
I've never used ssh2-sftp-client myself and I'm not sure whether sftp.get(...) takes an encoding parameter, but you can always decode the string from the Buffer manually:
const xmlText = xmlContent.toString("utf8"); // change "utf8" to whatever encoding your file is actually encoded with
console.log(xmlText);
By the way, if the file uses an encoding that is not natively supported by Node, you may need a package like iconv or iconv-lite.
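A self-contained sketch of the decoding step (the buffer here is a stand-in for what sftp.get resolves with):

```javascript
// Stand-in for the Buffer that sftp.get(...) resolves with
const xmlContent = Buffer.from(
  '<?xml version="1.0"?>\n<OrderMessageBatch batchNumber="3012566157">'
);

// Decode the raw bytes into a string; swap "utf8" for the
// file's real encoding if it differs.
const xmlText = xmlContent.toString("utf8");
console.log(xmlText);
```

From here the string can be fed to any XML parser.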

Image upload to s3 using presigned url with ReactJs/NodeJS throws 403 error SignatureDoesNotMatch

I have successfully integrated image upload to S3 using a presigned URL in App 1.
I tried to do the same thing in App 2, but I keep getting the same error:
403 SignatureDoesNotMatch
FRONTEND:
import React from "react";
import { uploadAdImageToS3BucketAndReturnImageUrl } from "../../../api/calls/data/ads/ads-configuration";

export default function UploadImageTutorial1() {
  const onSelectFile = async (event) => {
    const file = event.target.files[0];
    const { type, name } = file;
    // This throws an error
    await uploadAdImageToS3BucketAndReturnImageUrl(file, type);
  };

  return (
    <div>
      <input type="file" accept="image/*" onChange={onSelectFile} />
    </div>
  );
}
api/calls.js
export const uploadAdImageToS3BucketAndReturnImageUrl = async (
  imageFile,
  typeFile
) => {
  try {
    const [axios, cancel] = useAxios();
    const { data: uploadConfig } = await axios.get(
      "/ads/get_image_presigned_url"
    );
    const saved_image_url = await uploadImageFileToCloud(
      uploadConfig,
      imageFile,
      typeFile
    );
    return saved_image_url;
  } catch (e) {}
};

const uploadImageFileToCloud = async (uploadConfig, imageFile, typeFile) => {
  const [axios, cancel] = useAxios();
  try {
    // This throws a 400 error
    await axios.put(uploadConfig.url, imageFile, {
      headers: {
        "Content-Type": typeFile,
        "x-amz-acl": "public-read",
      },
      transformRequest: (data, headers) => {
        delete headers.common["Authorization"];
        return data;
      },
    });
  } catch (error) {
    console.log("🚀 error", error);
  }
};
BACKEND:
s3-config.js
// Load the SDK for JavaScript
const { config } = require("dotenv");
config();
var AWS = require("aws-sdk");
const REGION = "xxxxxxxxx"; //e.g. "us-east-1"
AWS.config.update({ region: REGION });
const { AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_BUCKET_NAME } =
process.env;
const s3 = new AWS.S3({
  accessKeyId: AWS_ACCESS_KEY_ID,
  secretAccessKey: AWS_SECRET_ACCESS_KEY,
  signatureVersion: "v4",
});

var getImageSignedUrl = async function (key) {
  return new Promise((resolve, reject) => {
    s3.getSignedUrl(
      "putObject",
      {
        Bucket: AWS_BUCKET_NAME,
        Key: key,
        ContentType: "image/*",
        ACL: "public-read",
        Expires: 300,
      },
      (err, url) => {
        if (err) {
          reject(err);
        } else {
          resolve(url);
        }
      }
    );
  });
};

exports.getImageSignedUrl = getImageSignedUrl;
routes/upload.js
// @route   GET api/ads/get_image_presigned_url
// @desc    Get image presigned url
// @access  Private
router.get("/get_image_presigned_url", (req, res) => {
  const key = `images/ad_${Date.now()}.jpeg`;
  s3_config
    .getImageSignedUrl(key)
    .then((url) => {
      res.status(200).send({ key, url });
    })
    .catch((error) => {
      res.status(500).send({
        message: "There was an error generating pre-signed url.",
      });
    });
});
When I try to upload an image using the code above, this error gets logged in the frontend:
PUT
https://bucket-name.s3.region.amazonaws.com/images/file_name.jpeg?Content-Type=image%2F%2A&X-Amz-Algorithm=XXX-XXX-SHA256&X-Amz-Credential=XXXXXXX%2F20220526%2Feu-west-3%2Fs3%2Faws4_request&X-Amz-Date=20220526T121600Z&X-Amz-Expires=300&X-Amz-Signature=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&X-Amz-SignedHeaders=host%3Bx-amz-acl&x-amz-acl=public-read
400 (Bad Request)
When I click on it, this opens in a new page:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you
provided. Check your key and signing method.
</Message>
<AWSAccessKeyId>XXXXXXXXXXXXXXXXXXXXXx</AWSAccessKeyId>
<StringToSign>
AWS4-XXXX-XXXX 20220526T121600Z 20220526/region/s3/aws4_request
d058c00da90b745bd2xxxxxxxxxxxxxxxxd68b3c537ff
</StringToSign>
<SignatureProvided>
9b372d73xxxxxxxxxxxxxxxxxxxxxx069c90ab7e24193990f1bcd39a0
</SignatureProvided>
<StringToSignBytes>
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 36
54 31 32 31 36 30 30 5a 0a 32 30 32 32 30 35 32 36 2f 65 75 2d 77 65 73 74
2d 33 2f 73 33 2f 61 77 73 34 5f 72 65 71 75 65 73 74 0a 64 30 35 38 63 30
30 64 61 39 30 62 37 34 35 62 64 32 66 66 37 62 35 33 31 39 62 38 61 37 32
33 34 36 37 35 33 65 64 63 31 38 62 63 62 36 38 37 31 65 62 63 39 64 36 38
62 33 63 35 33 37 66 66
</StringToSignBytes>
<CanonicalRequest>
GET /images/image_name.jpeg
Content-Type=image%2F%2A&X-Amz-Algorithm=AWS4-XXXX-XXXXX&X-Amz-Credential=XXXXXXXXXXXX%2F20220526%2Feu-west-3%2Fs3%2Faws4_request&X-Amz-Date=20220526T121600Z&X-Amz-Expires=300&X-Amz-SignedHeaders=host%3Bx-amz-acl&x-amz-acl=public-read
host:bucket-name.s3.region.amazonaws.com x-amz-acl:public-read
host;x-amz-acl UNSIGNED-PAYLOAD
</CanonicalRequest>
<CanonicalRequestBytes>
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 30
32 31 38 2e 6a 70 65 67 0a 43 6f 6e 74 65 6e 74 2d 54 79 70 65 3d 69 6d 61
67 65 25 32 46 25 32 41 26 58 2d 41 6d 7a 2d 41 6c 67 6f 72 69 74 68 6d 3d
41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 26 58 2d 41 6d 7a 2d 43 72
65 64 65 6e 74 69 61 6c 3d 41 4b 49 41 52 5a 41 52 52 50 50 49 42 4d 56 45
57 4b 55 57 25 32 46 32 30 32 32 30 35 32 36 25 32 46 65 75 2d 77 65 73 74
2d 33 25 32 46 73 33 25 32 46 61 77 73 34 5f 72 65 71 75 65 73 74 26 58 2d
41 6d 7a 2d 44 61 74 65 3d 32 30 32 32 30 35 32 36 54 31 32 31 36 30 30 5a
26 58 2d 41 6d 7a 2d 45 78 70 69 72 65 73 3d 33 30 30 26 58 2d 41 6d 7a 2d
53 69 67 6e 65 64 48 65 61 64 65 72 73 3d 68 6f 73 74 25 33 42 78 2d 61 6d
7a 2d 61 63 6c 26 78 2d 61 6d 7a 2d 61 63 6c 3d 70 75 62 6c 69 63 2d 72 65
61 64 0a 68 6f 73 74 3a 6c 6f 64 65 65 70 2d 73 74 6f 72 61 67 65 2d 33 2e
73 33 2e 65 75 2d 77 65 73 74 2d 33 2e 61 6d 61 7a 6f 6e 61 77 73 2e 63 6f
6d 0a 78 2d 61 6d 7a 2d 61 63 6c 3a 70 75 62 6c 69 63 2d 72 65 61 64 0a 0a
68 6f 73 74 3b 78 2d 61 6d 7a 2d 61 63 6c 0a 55 4e 53 49 47 4e 45 44 2d 50
41 59 4c 4f 41 44
</CanonicalRequestBytes>
<RequestId>XXXXXXXXXXX</RequestId>
<HostId>
q1cQMi/q0jrkk3SeLd3F8/v/mx62XXXXXXXXXXXXXQbVhCdZCDBNfnluTruGLnkhM=
</HostId>
</Error>;
What's weird is that SignatureDoesNotMatch is documented as a 403 error, yet here I get a 400.
I know others have faced the same issue but it seems to be triggered by different causes and I still haven't found what's causing this.
Here, in one answer, this is mentioned:
When making a signed request to S3, AWS checks to make sure that the
signature exactly matches the HTTP Header information the browser
sent.
This may be what's causing the issue, but I don't know what exactly.
In your s3-config.js file, you have the following:
s3.getSignedUrl(
  "putObject",
  {
    Bucket: AWS_BUCKET_NAME,
    Key: key,
    ContentType: "image/*",
    ACL: "public-read",
    Expires: 300,
  },
  (err, url) => {
    if (err) {
      reject(err);
    } else {
      resolve(url);
    }
  }
);
However, a presigned URL for a PUT request doesn't support wildcard characters in the content type. One solution is to pass the file type into the function and set it dynamically.
Like so:
var getImageSignedUrl = async function (key, fileType) {
  return new Promise((resolve, reject) => {
    s3.getSignedUrl(
      "putObject",
      {
        Bucket: AWS_BUCKET_NAME,
        Key: key,
        ContentType: fileType,
        ACL: "public-read",
        Expires: 300,
      },
      (err, url) => {
        if (err) {
          reject(err);
        } else {
          resolve(url);
        }
      }
    );
  });
};
This requires you to add a fileType parameter to your GET route when you're fetching your presigned URL. Let me know if it helps :)
Note: you must also provide the same content type in the headers when you upload the object to the presigned URL, but it looks like your code already does that.
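A minimal sketch of wiring the file type through (the function names here are illustrative, not from the original code):

```javascript
// Client side: send the file's real MIME type as a query parameter
// instead of signing with the "image/*" wildcard.
function buildPresignRequestPath(fileType) {
  return `/ads/get_image_presigned_url?fileType=${encodeURIComponent(fileType)}`;
}

// Server side: read it back out of the request URL and pass it
// to getImageSignedUrl(key, fileType).
function parseFileType(requestUrl) {
  const { searchParams } = new URL(requestUrl, "http://localhost");
  return searchParams.get("fileType");
}

console.log(parseFileType(buildPresignRequestPath("image/png"))); // image/png
```

Because the exact content type is now part of the signature, the subsequent PUT with the matching Content-Type header will validate.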

Node.js: Download file from s3 and unzip it to a string

I am writing an AWS Lambda function which needs to download a file from AWS S3, unzip it, and return the content as a string.
I am trying this
const AWS = require("aws-sdk");
const zlib = require("zlib");
const s3 = new AWS.S3();

function getObject(key) {
  var params = {
    Bucket: "my-bucket",
    Key: key
  };
  return new Promise(function (resolve, reject) {
    s3.getObject(params, function (err, data) {
      if (err) {
        reject(err);
      }
      resolve(zlib.unzipSync(data.Body));
    });
  });
}
But I'm getting this error:
Error: incorrect header check
at Zlib._handle.onerror (zlib.js:363:17)
at Unzip.Zlib._processChunk (zlib.js:524:30)
at zlibBufferSync (zlib.js:239:17)
The data looks like this
{ AcceptRanges: 'bytes',
LastModified: 'Wed, 16 Mar 2016 04:47:10 GMT',
ContentLength: '318',
ETag: '"c3xxxxxxxxxxxxxxxxxxxxxxxxx"',
ContentType: 'binary/octet-stream',
Metadata: {},
Body: <Buffer 50 4b 03 04 14 00 00 08 08 00 f0 ad 6f 48 95 05 00 50 84 00 00 00 b4 00 00 00 2c 00 00 00 30 30 33 32 35 2d 39 31 38 30 34 2d 37 34 33 30 39 2d 41 41 ... >
}
The Body buffer contains zip-compressed data; the first few bytes, 50 4b ("PK"), identify it as a .zip archive, which is not plain zlib/gzip data.
You will need a zip module to parse the archive and extract the files within. One such library is yauzl, which has a fromBuffer() method you can pass your buffer to in order to get the file entries.

How to replicate a curl command with the nodejs request module?

How can I replicate this curl request:
$ curl "https://s3-external-1.amazonaws.com/herokusources/..." \
-X PUT -H 'Content-Type:' --data-binary @temp/archive.tar.gz
With the node request module?
I need to do this to PUT a file up on AWS S3 and to match the signature provided by Heroku in the put_url from Heroku's sources endpoint API output.
I have tried this (where source is the Heroku sources endpoint API output):
// PUT tarball
function (source, cb) {
  var putUrl = source.source_blob.put_url;
  var urlObj = url.parse(putUrl);
  var options = {
    headers: {},
    method: 'PUT',
    url: urlObj
  };
  fs.createReadStream('temp/archive.tar.gz')
    .pipe(request(options, function (err, incoming, response) {
      if (err) {
        cb(err);
      } else {
        cb(null, source);
      }
    }));
}
But I get the following SignatureDoesNotMatch error.
<?xml version="1.0"?>
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<AWSAccessKeyId>AKIAJURUZ6XB34ESX54A</AWSAccessKeyId>
<StringToSign>PUT\n\nfalse\n1424204099\n/heroku-sources-production/heroku.com/d1ed2f1f-4c81-43c8-9997-01706805fab8</StringToSign>
<SignatureProvided>DKh8Y+c7nM/6vJr2pabvis3Gtsc=</SignatureProvided>
<StringToSignBytes>50 55 54 0a 0a 66 61 6c 73 65 0a 31 34 32 34 32 30 34 30 39 39 0a 2f 68 65 72 6f 6b 75 2d 73 6f 75 72 63 65 73 2d 70 72 6f 64 75 63 74 69 6f 6e 2f 68 65 72 6f 6b 75 2e 63 6f 6d 2f 64 31 65 64 32 66 31 66 2d 34 63 38 31 2d 34 33 63 38 2d 39 39 39 37 2d 30 31 37 30 36 38 30 35 66 61 62 38</StringToSignBytes>
<RequestId>A7F1C5F7A68613A9</RequestId>
<HostId>JGW6l8G9kFNfPgSuecFb6y9mh7IgJh28c5HKJbiP6qLLwvrHmESF1H5Y1PbFPAdv</HostId>
</Error>
Here is an example of what the Heroku sources endpoint API output looks like:
{ source_blob:
{ get_url: 'https://s3-external-1.amazonaws.com/heroku-sources-production/heroku.com/2c6641c3-af40-4d44-8cdb-c44ee5f670c2?AWSAccessKeyId=AKIAJURUZ6XB34ESX54A&Signature=hYYNQ1WjwHqyyO0QMtjVXYBvsJg%3D&Expires=1424156543',
put_url: 'https://s3-external-1.amazonaws.com/heroku-sources-production/heroku.com/2c6641c3-af40-4d44-8cdb-c44ee5f670c2?AWSAccessKeyId=AKIAJURUZ6XB34ESX54A&Signature=ecj4bxLnQL%2FZr%2FSKx6URJMr6hPk%3D&Expires=1424156543'
}
}
Update
The key issue here is that the PUT request I send with the request module should be the same as the one sent with curl because I know that the curl request matches the expectations of the AWS S3 Uploading Objects Using Pre-Signed URLs API. Heroku generates the PUT url so I have no control over its creation. I do know that the curl command works as I have tested it -- which is good since it is the example provided by Heroku.
I am using curl 7.35.0 and request 2.53.0.
The Amazon API doesn't like chunked uploads. The file needs to be sent unchunked. So here is the code that works:
// PUT tarball
function (source, cb) {
  console.log('Uploading tarball...');
  var putUrl = source.source_blob.put_url;
  var urlObj = url.parse(putUrl);
  fs.readFile(config.build.temp + 'archive.tar.gz', function (err, data) {
    if (err) {
      cb(err);
    } else {
      var options = {
        body: data,
        method: 'PUT',
        url: urlObj
      };
      request(options, function (err, incoming, response) {
        if (err) { cb(err); } else { cb(null, source); }
      });
    }
  });
},
