I'm learning S3 and I know how to generate a presigned URL:
const aws = require('aws-sdk')
aws.config.update({
  accessKeyId: 'id-omitted',
  secretAccessKey: 'key-omitted'
})
// Create the client after the credentials are set so it picks them up
const s3 = new aws.S3()

const myBucket = 'foo'
const myKey = 'bar.png'
const signedUrlExpireSeconds = 60 * 5

const url = s3.getSignedUrl('getObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
})
console.log(`Presigned URL: ${url}`)
From reading the documentation I know I can retrieve what's in the bucket with headObject, but I've been testing ways to find out whether an object already has a presigned URL:
1st attempt:
let signedUrl = await s3.validSignedURL('getObject', params).promise()
console.log(`Signed URL: ${signedUrl}`)
2nd attempt:
await s3.getObject(params, (err, data) => {
  if (err) console.log(err)
  return data.Body.toString('utf-8')
})
3rd attempt:
let test = await s3.headObject(params).promise()
console.log(`${test}`)
and I'm coming up short. I know I could create a file, or log to one, whenever a presigned URL is created, but I think that would be a hack. Is there a way in Node to check an object and see whether a presigned URL has been created for it? I'm not looking to do this in the dashboard; I'm looking for a way to do it solely from a terminal/script. Going through the tags and querying Google, I'm not having any luck.
Referenced:
S3 pre-signed url - check if url was used?
Creating Pre-Signed URLs for Amazon S3 Buckets
GET Object
Pre-Signing AWS S3 URLs
How to check if an prefix / key exists on S3 before creating a presigned URL?
How to get response from S3 getObject in Node.js?
AWS signed url if the object exists using promises
Is there a way in Node I can check an object to see if it has a presigned URL created for it?
Short answer: No
Long answer: There is no information about signed URLs stored on the object, nor any list of created URLs. You can even create a signed URL entirely on the client side, without invoking any AWS service.
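If the underlying goal is just to avoid handing out URLs for objects that don't exist, a common workaround (sketched below against the question's own aws-sdk v2 setup; signIfExists is a hypothetical helper name) is to call headObject first and only sign if it succeeds, keeping your own record of the URLs you issue, since S3 keeps none:

// Sketch only: verify the object exists, then sign locally.
async function signIfExists(bucket, key, expires = 300) {
  const params = { Bucket: bucket, Key: key };
  await s3.headObject(params).promise(); // throws (e.g. NotFound/Forbidden) if the key isn't reachable
  const url = s3.getSignedUrl('getObject', { ...params, Expires: expires });
  // Optional audit trail: S3 stores nothing about generated URLs, so log them yourself.
  console.log(JSON.stringify({ key, url, createdAt: new Date().toISOString() }));
  return url;
}

signIfExists(myBucket, myKey)
  .then(url => console.log(`Presigned URL: ${url}`))
  .catch(err => console.error(`No URL issued: ${err.code || err.message}`));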
That question is interesting. I tried to find out whether the presigned URL is stored anywhere, but found nothing.
But what gusto2 says is true: you can create a presigned URL without calling any AWS service, which is exactly what the aws-sdk does.
Check this file: https://github.com/aws/aws-sdk-js/blob/cc29728c1c4178969ebabe3bbe6b6f3159436394/ts/cloudfront.ts
There you can see how a presigned URL is generated:
var getRtmpUrl = function (rtmpUrl) {
  var parsed = url.parse(rtmpUrl);
  return parsed.path.replace(/^\//, '') + (parsed.hash || '');
};

var getResource = function (url) {
  switch (determineScheme(url)) {
    case 'http':
    case 'https':
      return url;
    case 'rtmp':
      return getRtmpUrl(url);
    default:
      throw new Error('Invalid URI scheme. Scheme must be one of'
        + ' http, https, or rtmp');
  }
};

getSignedUrl: function (options, cb) {
  try {
    var resource = getResource(options.url);
  } catch (err) {
    return handleError(err, cb);
  }

  var parsedUrl = url.parse(options.url, true),
      signatureHash = Object.prototype.hasOwnProperty.call(options, 'policy')
        ? signWithCustomPolicy(options.policy, this.keyPairId, this.privateKey)
        : signWithCannedPolicy(resource, options.expires, this.keyPairId, this.privateKey);

  parsedUrl.search = null;
  for (var key in signatureHash) {
    if (Object.prototype.hasOwnProperty.call(signatureHash, key)) {
      parsedUrl.query[key] = signatureHash[key];
    }
  }

  try {
    var signedUrl = determineScheme(options.url) === 'rtmp'
      ? getRtmpUrl(url.format(parsedUrl))
      : url.format(parsedUrl);
  } catch (err) {
    return handleError(err, cb);
  }

  return handleSuccess(signedUrl, cb);
}
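For S3 the mechanics are the same: getSignedUrl only computes a signature over the request parameters with whatever credentials are configured, and it never contacts S3. A minimal illustration of that point (the bucket and key below are made up, and credentials are assumed to be available in the environment):

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' });

// No network call happens here; the SDK only signs locally.
const url = s3.getSignedUrl('getObject', {
  Bucket: 'any-bucket-name',          // hypothetical bucket
  Key: 'this-key-may-not-exist.png',  // S3 is never asked whether this exists
  Expires: 300,
});
console.log(url); // returned synchronously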
Related
I am trying to upload files to AWS S3 using getSignedUrlPromise() to obtain the access link. The bucket is completely private, and I want it to be accessible only through the links that the server generates with getSignedUrlPromise().
The problem comes when I try to make a PUT request to the link I obtain: the request fails with an error.
Here is the code for configuring AWS in Node.js:
import AWS from 'aws-sdk';
const bucketName = 'atlasfitness-progress';
const region =process.env.AWS_REGION;
const accessKeyId = process.env.AWS_ACCESS_KEY
const secretAccessKey = process.env.AWS_SECRET_KEY
const URL_EXPIRATION_TIME = 60; // in seconds
const s3 = new AWS.S3({
  region,
  accessKeyId,
  secretAccessKey,
  signatureVersion: 'v4'
})

export const generatePreSignedPutUrl = async (fileName, fileType) => {
  const params = {
    Bucket: bucketName,
    Key: fileName,
    Expires: 60
  }
  const url = await s3.getSignedUrlPromise('putObject', params);
  return url;
}
And then I have an Express controller to send the link when it's requested:
routerProgress.post('/prepare_s3', verifyJWT, async (req, res) => {
  res.send({ url: await generatePreSignedPutUrl(req.body.fileName, req.body.fileType) });
})
export { routerProgress };
But the problem comes in the frontend. Here is the function that first asks for the link and then tries to upload the file to S3.
const upload = async (e) => {
  e.preventDefault();
  await JWT.checkJWT();
  const requestObject = {
    fileName: frontPhoto.name,
    fileType: frontPhoto.type,
    token: JWT.getToken()
  };
  const url = (await axiosReq.post(`${serverPath}/prepare_s3`, requestObject)).data.url;
  // The following call is the one that doesn't work
  const response = await fetch(url, {
    method: "PUT",
    headers: {
      "Content-Type": "multipart/form-data"
    },
    body: frontPhoto
  });
  console.log(response);
}
And with that, everything is in place. I'm a newbie to AWS, so it's quite possible I've made a fairly serious mistake without realizing it, but I've been blocked on this for many days and I'm starting to get desperate. If anyone spots the error or knows how I can make it work, I would be very grateful for your help.
The first thing I notice about your code is that you await async operations but make no provision for exceptions. This is very bad practice, as it hides possible failures. The rule of thumb is: whenever you await a result, wrap the call in a try/catch block.
In your server-side code above, you have two awaits which can fail, and if they do, any error they generate is lost.
A better strategy would be:
export const generatePreSignedPutUrl = async (fileName, fileType) => {
  const params = {
    Bucket: bucketName,
    Key: fileName,
    Expires: 60
  }
  let url;
  try {
    url = await s3.getSignedUrlPromise('putObject', params);
  } catch (err) {
    // do something with the error here
    // and abort the operation.
    return;
  }
  return url;
}
And in your POST route:
routerProgress.post('/prepare_s3', verifyJWT, async (req, res) => {
  let url;
  try {
    url = await generatePreSignedPutUrl(req.body.fileName, req.body.fileType);
  } catch (err) {
    res.status(500).send({ ok: false, error: `failed to get url: ${err}` });
    return;
  }
  res.send({ url });
})
And in your client-side code, follow the same strategy. At the very least, this will give you a far better idea of where your code is failing.
Two things to keep in mind:
Functions declared using the async keyword do not return the value of the expected result; they return a Promise of the expected result, and like all Promises, can be chained to both .catch() and .then() clauses.
When calling async functions from within another async function, you must do something with any exceptions you encounter because, due to their nature, Promises do not share any surrounding runtime context which would allow you to capture any exceptions at a higher level.
So you can use either Promise "thenable" chaining or try/catch blocks within async functions to trap errors, but if you choose not to do either, you run the risk of losing any errors generated within your code.
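Applied to the client-side upload function from the question, that might look like the sketch below. Everything here (frontPhoto, serverPath, JWT, axiosReq) is taken from the question's code; only the try/catch and the response-status check are new:

const upload = async (e) => {
  e.preventDefault();
  try {
    await JWT.checkJWT();
    const requestObject = {
      fileName: frontPhoto.name,
      fileType: frontPhoto.type,
      token: JWT.getToken()
    };
    const url = (await axiosReq.post(`${serverPath}/prepare_s3`, requestObject)).data.url;
    const response = await fetch(url, {
      method: "PUT",
      headers: { "Content-Type": "multipart/form-data" },
      body: frontPhoto
    });
    if (!response.ok) {
      throw new Error(`S3 responded with ${response.status}`);
    }
    console.log(response);
  } catch (err) {
    // at minimum, surface the failure instead of silently losing it
    console.error('Upload failed:', err);
  }
};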
Here's an example of how to create a pre-signed URL that can be used to PUT an MP4 file.
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  apiVersion: '2010-12-01',
  signatureVersion: 'v4',
  region: process.env.AWS_DEFAULT_REGION || 'us-east-1',
});

const params = {
  Bucket: 'mybucket',
  Key: 'videos/sample.mp4',
  Expires: 1000,
  ContentType: 'video/mp4',
};

const url = s3.getSignedUrl('putObject', params);
console.log(url);
The resulting URL will look something like this:
https://mybucket.s3.amazonaws.com/videos/sample.mp4?
Content-Type=video%2Fmp4&
X-Amz-Algorithm=AWS4-HMAC-SHA256&
X-Amz-Credential=AKIASAMPLESAMPLE%2F20200303%2Fus-east-1%2Fs3%2Faws4_request&
X-Amz-Date=20211011T090807Z&
X-Amz-Expires=1000&
X-Amz-Signature=long-sig-here&
X-Amz-SignedHeaders=host
You can test this URL by uploading sample.mp4 with curl as follows:
curl -X PUT -T sample.mp4 -H "Content-Type: video/mp4" "<signed url>"
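If you would rather run the same check from Node instead of curl, something along these lines should behave equivalently (node-fetch and a local sample.mp4 are assumptions, and url is the signed URL from the snippet above):

const fs = require('fs');
const fetch = require('node-fetch');

(async () => {
  const res = await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': 'video/mp4' }, // must match the ContentType that was signed
    body: fs.readFileSync('sample.mp4'),
  });
  console.log(res.status, await res.text()); // expect 200 with an empty body on success
})();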
A few notes:
Hopefully you can use this code to work out where your problem lies.
pre-signed URLs are created locally by the SDK, so there's no need to go async.
I'd advise creating the pre-signed URL and then testing PUT with curl before testing your browser client, to ensure that curl works OK. That way you will know whether to focus your attention on the production of the pre-signed URL or on the use of the pre-signed URL within your client.
If your attempt to upload via curl fails with Access Denied then check that:
the pre-signed URL has not expired (they have time limits)
the AWS credentials you used to sign the URL allow PutObject to that S3 bucket
the S3 bucket policy does not explicitly deny your request
I just started using aws-sdk in my app to upload files to S3, and I'm debating whether to use aws-sdk v2 or v3.
V2 is the whole package, which is super bloated considering I only need the S3 services, not the myriad of other options. However, the documentation is very cryptic and I'm having a really hard time getting the equivalent getSignedUrl function to work in v3.
In v2, I have this code to sign the URL and it works fine. I am using Express on the server.
import aws from 'aws-sdk';

const signS3URL = (req, res, next) => {
  const s3 = new aws.S3({ region: 'us-east-2' });
  const { fileName, fileType } = req.query;
  const s3Params = {
    Bucket: process.env.S3_BUCKET,
    Key: fileName,
    ContentType: fileType,
    Expires: 60,
  };
  s3.getSignedUrl('putObject', s3Params, (err, data) => {
    if (err) {
      next(err);
    }
    res.json(data);
  });
}
Now I've been reading documentation and examples trying to get the v3 equivalent to work, but I can't find any working example of how to use it. Here is how I have set it up so far:
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

export const signS3URL = async (req, res, next) => {
  console.log('Sign')
  const { fileName, fileType } = req.query;
  const s3Params = {
    Bucket: process.env.S3_BUCKET,
    Key: fileName,
    ContentType: fileType,
    Expires: 60,
    // ACL: 'public-read'
  };
  const s3 = new S3Client()
  s3.config.region = 'us-east-2'
  const command = new PutObjectCommand(s3Params)
  console.log(command)
  await getSignedUrl(s3, command).then(signature => {
    console.log(signature)
    res.json(signature)
  }).catch(e => next(e))
}
There are some errors in this code, and the first one I can identify is in creating the command variable using the PutObjectCommand class provided by the SDK. The documentation does not make clear to me what I need to pass as the "input": https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/interfaces/putobjectcommandinput.html
Does anyone with experience using aws-sdk v3 know how to do this?
Also, a side question: where can I find the API reference for v2? All I can find is the SDK docs that say "v3 now available", and I can't seem to find the v2 reference.
Thanks for your time.
The following code would give you a signedUrl in a JSON body with the key as signedUrl.
const signS3URL = async (req, res, next) => {
  const { fileName, fileType } = req.query;
  const s3Params = {
    Bucket: process.env.S3_BUCKET,
    Key: fileName,
    ContentType: fileType,
    // ACL: 'bucket-owner-full-control'
  };
  const s3 = new S3Client({ region: 'us-east-2' })
  const command = new PutObjectCommand(s3Params);
  try {
    const signedUrl = await getSignedUrl(s3, command, { expiresIn: 60 });
    console.log(signedUrl);
    res.json({ signedUrl })
  } catch (err) {
    console.error(err);
    next(err);
  }
}
Keep the ACL as bucket-owner-full-control if you want the AWS account owning the Bucket to access the files.
You can go to the API Reference for both the JS SDK versions from here
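For completeness, one way the handler above could be wired into Express (the route path, port, and app setup here are assumptions, not part of the original answer):

import express from 'express';
// signS3URL is the handler defined above, assumed to be in scope or imported.

const app = express();

// e.g. GET /sign-s3?fileName=photo.png&fileType=image/png
app.get('/sign-s3', signS3URL);

app.listen(3000, () => console.log('Listening on port 3000'));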
In reference to the AWS docs and @GSSwain's answer (I cannot comment yet, I'm new), this link shows multiple getSignedUrl examples.
Below is an example of uploading, copied from the AWS docs:
// Import the required AWS SDK clients and commands for Node.js
import {
  CreateBucketCommand,
  DeleteObjectCommand,
  PutObjectCommand,
  DeleteBucketCommand,
} from "@aws-sdk/client-s3";
import { s3Client } from "./libs/s3Client.js"; // Helper function that creates an Amazon S3 service client module.
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import fetch from "node-fetch";

// Set parameters
// Create a random name for the Amazon Simple Storage Service (Amazon S3) bucket and key
export const bucketParams = {
  Bucket: `test-bucket-${Math.ceil(Math.random() * 10 ** 10)}`,
  Key: `test-object-${Math.ceil(Math.random() * 10 ** 10)}`,
  Body: "BODY"
};

export const run = async () => {
  try {
    // Create an S3 bucket.
    console.log(`Creating bucket ${bucketParams.Bucket}`);
    await s3Client.send(new CreateBucketCommand({ Bucket: bucketParams.Bucket }));
    console.log(`Waiting for "${bucketParams.Bucket}" bucket creation...`);
  } catch (err) {
    console.log("Error creating bucket", err);
  }
  try {
    // Create a command to put the object in the S3 bucket.
    const command = new PutObjectCommand(bucketParams);
    // Create the presigned URL.
    const signedUrl = await getSignedUrl(s3Client, command, {
      expiresIn: 3600,
    });
    console.log(
      `\nPutting "${bucketParams.Key}" using signedUrl with body "${bucketParams.Body}" in v3`
    );
    console.log(signedUrl);
    const response = await fetch(signedUrl, { method: 'PUT', body: bucketParams.Body });
    console.log(
      `\nResponse returned by signed URL: ${await response.text()}\n`
    );
  } catch (err) {
    console.log("Error creating presigned URL", err);
  }
  try {
    // Delete the object.
    console.log(`\nDeleting object "${bucketParams.Key}" from bucket`);
    await s3Client.send(
      new DeleteObjectCommand({ Bucket: bucketParams.Bucket, Key: bucketParams.Key })
    );
  } catch (err) {
    console.log("Error deleting object", err);
  }
  try {
    // Delete the S3 bucket.
    console.log(`\nDeleting bucket ${bucketParams.Bucket}`);
    await s3Client.send(
      new DeleteBucketCommand({ Bucket: bucketParams.Bucket })
    );
  } catch (err) {
    console.log("Error deleting bucket", err);
  }
};
run();
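Note that the example imports a small ./libs/s3Client.js helper that is not shown above. In the AWS docs that helper is essentially just the following (the region value is a placeholder you would set yourself):

// libs/s3Client.js
import { S3Client } from "@aws-sdk/client-s3";

const REGION = "us-east-1"; // set this to your bucket's region
const s3Client = new S3Client({ region: REGION });

export { s3Client };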
I am trying to upload a PDF file to AWS S3 using multipart upload. However, when I send the PUT request for uploading a part, I receive a SignatureDoesNotMatch error.
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
My Server Code (Node) is as below:
CREATE MultiPart Upload
const AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';
const s3 = new AWS.S3({ apiVersion: '2006-03-01' });

const s3Params = {
  Bucket: 'bucket-name',
  Key: 'upload-location/filename.pdf',
}

const createRequest = await s3.createMultipartUpload({
  ...s3Params,
  ContentType: 'application/pdf'
}).promise();
GET Signed URL
let getSignedUrlParams = {
  Bucket: 'bucket-name',
  Key: 'upload-location/filename.pdf',
  PartNumber: 1,
  UploadId: 'uploadId',
  Expires: 10 * 60
}

const signedUrl = await s3.getSignedUrl('uploadPart', getSignedUrlParams);
And the client code (in JS) is:
const response = await axios.put(signedUrl, chunkedFile, {headers: {'Content-Type':'application-pdf'}});
A few things to note:
This code works when I allow all public access to the bucket. However, if all public access is blocked, the code does not work.
With all public access blocked, I am still able to upload to the bucket with the same credentials using aws cli.
I have already tried regenerating the AWS Access Key ID and Secret Access Key, and that didn't help.
I'm not able to figure out what the problem is. Any help would be appreciated.
PS: This is the first question I have posted here, so please forgive me if I haven't posted it appropriately. Let me know if more details are required.
Try something like this; it worked for me.
const fs = require('fs');

var fileName = 'your.pdf';
var filePath = './' + fileName;
var fileKey = fileName;
var buffer = fs.readFileSync(filePath);

// S3 Upload options
var bucket = 'loctest';

// Upload
var startTime = new Date();
var partNum = 0;
var partSize = 1024 * 1024 * 5; // Minimum 5MB per chunk (except the last part) http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html
var numPartsLeft = Math.ceil(buffer.length / partSize);
var maxUploadTries = 3;

var multiPartParams = {
  Bucket: bucket,
  Key: fileKey,
  ContentType: 'application/pdf'
};

var multipartMap = {
  Parts: []
};

function completeMultipartUpload(s3, doneParams) {
  s3.completeMultipartUpload(doneParams, function(err, data) {
    if (err) {
      console.log("An error occurred while completing the multipart upload");
      console.log(err);
    } else {
      var delta = (new Date() - startTime) / 1000;
      console.log('Completed upload in', delta, 'seconds');
      console.log('Final upload data:', data);
    }
  });
}
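The snippet above stops before the part-upload loop, so here is a sketch of the missing middle piece. It assumes the variables defined above (buffer, bucket, fileKey, partSize, partNum, numPartsLeft, multiPartParams, multipartMap) and omits the retry logic that maxUploadTries hints at:

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ signatureVersion: 'v4' });

s3.createMultipartUpload(multiPartParams, function(mpErr, multipart) {
  if (mpErr) return console.error('Error creating multipart upload', mpErr);

  // Slice the buffer into parts and upload each one.
  for (let start = 0; start < buffer.length; start += partSize) {
    partNum++;
    const partParams = {
      Bucket: bucket,
      Key: fileKey,
      UploadId: multipart.UploadId,
      PartNumber: partNum,
      Body: buffer.slice(start, Math.min(start + partSize, buffer.length))
    };
    s3.uploadPart(partParams, function(err, data) {
      if (err) return console.error('uploadPart failed', err);
      multipartMap.Parts[partParams.PartNumber - 1] = {
        ETag: data.ETag,
        PartNumber: partParams.PartNumber
      };
      if (--numPartsLeft > 0) return; // other parts are still in flight
      completeMultipartUpload(s3, {
        Bucket: bucket,
        Key: fileKey,
        MultipartUpload: multipartMap,
        UploadId: multipart.UploadId
      });
    });
  }
});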
You will get an error if the upload fails. We can help you solve this if you print the results of
console.log(this.httpResponse)
and
console.log(this.request.httpRequest)
What worked for me was the signature version. While initializing S3, the signature version should also be specified.
const s3 = new AWS.S3({ apiVersion: '2006-03-01', signatureVersion: 'v4' });
Remove the Content-Type header from the axios call.
const response = await axios.put(signedUrl, chunkedFile);
When adding only a part you're not actually uploading a complete file, so the content type is not application-pdf in your case.
This is different than doing a PUT for a complete object.
I'm trying to upload files from a MERN application I'm working on. I'm almost done with the NodeJS back end part.
The application will allow users to upload images (jpg, jpeg, png, gif, etc.) to an Amazon S3 bucket that I created.
Well, let's put it this way. I created a helper:
const aws = require('aws-sdk');
const fs = require('fs');

// Enter copied or downloaded access ID and secret key here
const ID = process.env.AWS_ACCESS_KEY_ID;
const SECRET = process.env.AWS_SECRET_ACCESS_KEY;

// The name of the bucket that you have created
const BUCKET_NAME = process.env.AWS_BUCKET_NAME;

const s3 = new aws.S3({
  accessKeyId: ID,
  secretAccessKey: SECRET
});

const uploadFile = async images => {
  // Read content from the file
  const fileContent = fs.readFileSync(images);

  // Setting up S3 upload parameters
  const params = {
    Bucket: BUCKET_NAME,
    // Key: 'cat.jpg', // File name you want to save as in S3
    Body: fileContent
  };

  // Uploading files to the bucket
  s3.upload(params, function(err, data) {
    if (err) {
      throw err;
    }
    console.log(`File uploaded successfully. ${data.Location}`);
  });
};

module.exports = uploadFile;
That helper takes three of my environment variables which are the name of the bucket, the keyId and the secret key.
When adding files from the form (which will eventually be added in the front end), the user will be able to send more than one file.
Right now my current post route looks exactly like this:
req.body.user = req.user.id;
req.body.images = req.body.images.split(',').map(image => image.trim());
const post = await Post.create(req.body);
res.status(201).json({ success: true, data: post });
That right there works great, but it takes req.body.images as a string with each image separated by a comma. What would the right approach be to upload (to AWS S3) the many files selected from the Windows file picker? I tried doing this, but it did not work :/
// Add user to req.body
req.body.user = req.user.id;
uploadFile(req.body.images);
const post = await Post.create(req.body);
res.status(201).json({ success: true, data: post });
Thanks, and hopefully you guys can help me out with this one. Right now I'm testing it with Postman, but later on the files will be sent via a form.
Well, you could just call uploadFile once for each file:
try {
  const promises = []
  for (const img of images) {
    promises.push(uploadFile(img))
  }
  await Promise.all(promises)
  // rest of logic
} catch (err) {
  // handle err
}
On a side note, you should wrap S3.upload in a promise:
const AWS = require('aws-sdk')
const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
})

module.exports = ({ params }) => {
  return new Promise((resolve, reject) => {
    s3.upload(params, function (s3Err, data) {
      if (s3Err) return reject(s3Err)
      console.log(`File uploaded successfully at ${data.Location}`)
      return resolve(data)
    })
  })
}
Bonus: if you wish to avoid having your backend handle uploads, you can use AWS S3 signed URLs and let the client browser handle the upload, thus saving your server resources.
One more thing: your Post object should only contain URLs of the media, not the media itself.
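Putting those pieces together in the route handler from the question could look roughly like this. uploadFilePromise is the promise-wrapped module above (its path is hypothetical), and treating each entry of req.body.images as a file path the server can read is an assumption made purely for illustration:

const fs = require('fs');
const uploadFilePromise = require('./uploadFilePromise'); // hypothetical path to the wrapper above

// inside the async route handler:
req.body.user = req.user.id;
const images = req.body.images.split(',').map(image => image.trim());

const results = await Promise.all(
  images.map(image =>
    uploadFilePromise({
      params: {
        Bucket: process.env.AWS_BUCKET_NAME,
        Key: image,                   // file name to save as in S3
        Body: fs.readFileSync(image)  // assumes `image` is a readable local path
      }
    })
  )
);

// store only the S3 URLs on the post, not the media itself
req.body.images = results.map(result => result.Location);
const post = await Post.create(req.body);
res.status(201).json({ success: true, data: post });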
// Setting up S3 upload parameters
const params = {
  Bucket: bucket, // bucket name
  Key: fileName, // File name you want to save as in S3
  Body: Buffer.from(imageStr, 'binary'), // image must be in a buffer
  ACL: 'public-read', // allow file to be read by anyone
  ContentType: 'image/png', // image header for browser to be able to render image
  CacheControl: 'max-age=31536000, public' // caching header for browser
};

// Uploading files to the bucket
try {
  const result = await s3.upload(params).promise();
  return result.Location;
} catch (err) {
  console.log('upload error', err);
  throw err;
}
I've been searching for a way to write to a JSON file in an S3 bucket via a pre-signed URL. From my research it appears it can be done, but these examples are not in Node:
http PUT a file to S3 presigned URLs using ruby
PUT file to S3 with presigned URL
Uploading a file to a S3 Presigned URL
Write to a AWS S3 pre-signed url using Ruby
How to create and read .txt file with fs.writeFile to AWS Lambda
Not finding a Node solution in my searches, and using a 3rd-party API, I'm trying to write its callback data to a JSON file that is in an S3 bucket. I can generate the pre-signed URL with no issues, but when I try to write dummy text via the pre-signed URL I get:
Error: ENOENT: no such file or directory, open
'https://path-to-file-with-signed-url'
When I try to use writeFile:
fs.writeFile(testURL, `This is a write test: ${Date.now()}`, function(err) {
  if (err) return err
  console.log("File written to")
})
and my understanding of the documentation for the file argument is that I can use a URL. I'm starting to believe this might be a permissions issue, but I'm not having any luck with the documentation.
After implementing node-fetch I still get an error (403 Forbidden) when writing to a file in S3 via the pre-signed URL. Here is the full code from the module I've written:
const aws = require('aws-sdk')
const config = require('../config.json')
const fetch = require('node-fetch')
const expireStamp = 604800 // 7 days
const existsModule = require('./existsModule')

module.exports = async function(toSignFile) {
  let checkJSON = await existsModule(`${toSignFile}.json`)
  if (checkJSON == true) {
    let testURL = await s3signing(`${toSignFile}.json`)
    fetch(testURL, {
      method: 'PUT',
      body: JSON.stringify(`This is a write test: ${Date.now()}`),
    }).then((res) => {
      console.log(res)
    }).catch((err) => {
      console.log(`Fetch issue: ${err}`)
    })
  }
}

async function s3signing(signFile) {
  // update the config before creating the client so the credentials apply
  aws.config.update({
    accessKeyId: config.aws.accessKey,
    secretAccessKey: config.aws.secretKey,
    region: config.aws.region,
  })
  const s3 = new aws.S3()
  const params = {
    Bucket: config.aws.bucket,
    Key: signFile,
    Expires: expireStamp
  }
  try {
    // let signedURL = await s3.getSignedUrl('getObject', params)
    let signedURL = await s3.getSignedUrl('putObject', params)
    console.log('\x1b[36m%s\x1b[0m', `Signed URL: ${signedURL}`)
    return signedURL
  } catch (err) {
    return err
  }
}
Reviewing the permissions, I have no issues with uploading, and write access has been set. In Node, how can I write to a file in the S3 bucket using that file's pre-signed URL as the path?
fs is the filesystem module. You can't use it as an HTTP client.
You can use the built-in https module, but I think you'll find it easier to use node-fetch.
fetch('your signed URL here', {
  method: 'PUT',
  body: JSON.stringify(data),
  // more options and request headers and such here
}).then((res) => {
  // do something
}).catch((e) => {
  // do something else
});
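One possible extension of that snippet: checking the response status (and reading the body, which S3 returns as XML on failure) makes a 403 much easier to diagnose than a silently resolved promise. The function name here is just an illustration:

const fetch = require('node-fetch');

async function putToSignedUrl(signedUrl, data) {
  const res = await fetch(signedUrl, {
    method: 'PUT',
    body: JSON.stringify(data),
  });
  if (!res.ok) {
    // S3's XML error body names the exact cause (expired URL, key mismatch, missing permission, etc.)
    throw new Error(`S3 PUT failed: ${res.status} ${await res.text()}`);
  }
  return res;
}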
I was looking for an elegant way to transfer an S3 file to an S3 signed URL using PUT. Most examples I found were using PUT({ body: data }). I came across one suggestion to read the data into a readable stream and then pipe it to the PUT, but I still didn't like the notion of loading large files into memory and then assigning them to the PUT stream; piping read to write is always better for memory and performance. Since s3.getObject().createReadStream() returns a request object which supports pipe, all we need to do is pipe it correctly to the PUT request, which exposes a write stream.
Get object function
async function GetFileReadStream(key) {
  var params = {
    Bucket: bucket,
    Key: key
  };
  var fileSize = await s3.headObject(params)
    .promise()
    .then(res => res.ContentLength);
  return { stream: s3.getObject(params).createReadStream(), fileSize };
}
Put object function
const request = require('request');

async function putStream(presignedUrl, readStream) {
  return new Promise((resolve, reject) => {
    var putRequestWriteStream = request.put({
      url: presignedUrl,
      headers: {
        'Content-Type': 'application/octet-stream',
        'Content-Length': readStream.fileSize
      }
    });
    putRequestWriteStream.on('response', function(response) {
      var etag = response.headers['etag'];
      resolve(etag);
    })
    .on('end', () =>
      console.log("put done"));
    readStream.stream.pipe(putRequestWriteStream);
  });
}
This works great with a very small memory footprint. Enjoy.
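For reference, a minimal sketch of how the two functions above might be combined (the source key and presignedUrl are placeholders, and bucket and s3 are assumed to be initialized elsewhere as in the rest of the answer):

(async () => {
  const source = await GetFileReadStream('path/to/source-object');
  const etag = await putStream(presignedUrl, source);
  console.log('Transfer complete, ETag:', etag);
})();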