How to get url from s3.getSignedUrl() - node.js

I'm trying to store some images using AWS S3. Everything ran smoothly until I started getting 400 responses when PUTting images to the URLs I got from s3.getSignedUrl. At that point my code looked like this:
const s3 = new AWS.S3({
  accessKeyId,
  secretAccessKey
});

const imageRouter = express.Router();

imageRouter.post('/upload', (req, res) => {
  const type = req.body.ContentType;
  const Key = `${req.session.user.id}/${uuid()}.${type}`;
  s3.getSignedUrl(
    'putObject',
    {
      Bucket: 'cms-bucket-06',
      ContentType: type,
      Key
    },
    (err, url) => {
      console.log('URL ', url);
      res.send({ Key, url });
    }
  );
});
I followed the link in the error response and found this: "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256."
So I did, like this:
const s3 = new AWS.S3({signatureVersion: 'v4'});
But now I get no URL in my callback function. It's undefined. What am I still missing here?
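(For reference: as the EDIT below shows, the fix was to keep the credentials in the constructor. A minimal sketch combining both settings; the region value is a placeholder, since SigV4 URLs are region-specific:)

// A sketch: keep the credentials and request SigV4 signing.
// The region value is a placeholder; use the bucket's actual region.
const s3 = new AWS.S3({
  accessKeyId,
  secretAccessKey,
  region: 'eu-west-1',
  signatureVersion: 'v4'
});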
EDIT:
Alright, I added my key back to the constructor and I'm able to upload images. The new problem is that I can't open them; I get Access Denied every time. I added what should be the proper bucket policy, but it doesn't help :(
{
  "Version": "2012-10-17",
  "Id": "Policy1547050603038",
  "Statement": [
    {
      "Sid": "Stmt1547050601490",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}

Related

Getting an 'Access Denied' error when using Multer-S3 to upload file to AWS-S3 Bucket

I have been trying to upload a file from my Express backend application using Multer-S3, but I am getting an 'Access Denied' error. Printing out the error gives me this:
{
  "success": false,
  "errors": {
    "title": "File Upload Error",
    "detail": "Access Denied",
    "error": {
      "message": "Access Denied",
      "code": "AccessDenied",
      "region": null,
      "time": "2021-06-26T16:40:47.074Z",
      "requestId": "7W7EMNWNFWTPNHHG",
      "extendedRequestId": "9tC2dSn8Zu6dplJxxUVIx3Zdr4mCk7ZVg0RcayXHHO86hTIZdO/9YZKsUKwn1ir0AeUg50Y/c94=",
      "statusCode": 403,
      "retryable": false,
      "retryDelay": 76.37236671132325,
      "storageErrors": []
    }
  }
}
My AWS setup:
I have an AWS S3 Bucket with 'Block all public access' turned ON
The S3 bucket does not have any policies or CORS configurations
I have created a new IAM user with an attached policy that I wrote myself. This is the policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::test-bucket/*"
    }
  ]
}
where 'test-bucket' is the name of my bucket.
The user is also set with programmatic access and I have checked that the access keys are updated and active.
From what I understand, this policy will allow my IAM user to put, get and delete objects in my s3 bucket, even though it is not accessible by the public. I also should not need to carry out any further configurations like setting up CORS.
My code:
route.js
router.post('/upload', uploadController.uploadToS3);
uploadController.js
const upload = require("../util/s3");
const singleUpload = upload.single("myfile");
exports.uploadToS3 = (req, res, next) => {
  singleUpload(req, res, function (err) {
    if (err) {
      return res.json({
        success: false,
        errors: {
          title: "File Upload Error",
          detail: err.message,
          error: err
        }
      });
    }
  });
};
s3.js
const S3 = require('aws-sdk/clients/s3');
const multer = require("multer");
const multerS3 = require("multer-s3");
const bucket_name = process.env.AWS_BUCKET_NAME;
const bucket_region = process.env.AWS_BUCKET_REGION;
const aws_access_key = process.env.AWS_ACCESS_KEY;
const aws_secret_key = process.env.AWS_SECRET_KEY;

const s3 = new S3({
  bucket_region,
  aws_access_key,
  aws_secret_key
});
const upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: bucket_name,
    acl: 'private',
    metadata: function (req, file, cb) {
      cb(null, { fieldName: file.fieldname });
    },
    key: function (req, file, cb) {
      cb(null, file.originalname);
    }
  })
});
I have tried some solutions that were suggested online, such as adding a policy for the bucket itself, or setting up CORS configuration.
I've even tried to change my IAM user's policy to S3FullAccess. Even then, I still get the access denied error.
I followed this Youtube video's tutorial for the AWS setup: https://www.youtube.com/watch?v=NZElg91l_ms&t=900s&ab_channel=SamMeech-Ward
It doesn't seem as if anyone else is facing the same issue...
Any help would be much appreciated, thanks!
I've managed to solve my issue. It turns out it was quite a silly mistake, nothing to do with my IAM policies at all. The issue lies in my s3.js code, where I create the s3 object:
const s3 = new S3({
  bucket_region,
  aws_access_key,
  aws_secret_key
});
From the AWS S3 SDK documentation (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html), the correct constructor parameters should be:
const s3 = new S3({
  region,
  accessKeyId,
  secretAccessKey
});
With that, I was able to upload to my bucket using the AWS setup that I described.
The reason I noticed this was that the 'Last Used' column in the 'Access Key' section for my IAM user constantly displayed "N/A". That made me realize my access key was never being used in the first place.
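Putting the fix back into the s3.js from the question, the corrected construction might look like this (a sketch; the environment variable names are the ones used in the question):

const S3 = require('aws-sdk/clients/s3');

// Corrected option names: region, accessKeyId, secretAccessKey.
// The env variable names are the ones from the question.
const s3 = new S3({
  region: process.env.AWS_BUCKET_REGION,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY
});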

Upload an image to an AWS S3 private bucket using a presigned URL

I am facing a problem while uploading an image to a private S3 bucket using the Node.js AWS SDK.
This is what I have done so far.
import aws from 'aws-sdk';

aws.config.update({
  signatureVersion: 'v4',
  region: 'ap-southeast-1',
  accessKeyId: process.env.AWS_S3_ACCESS_KEY,
  secretAccessKey: process.env.AWS_S3_SECRET_KEY,
});

const s3 = new aws.S3();

export const generate_pre_ssigned_url_write = async (req, res, next) => {
  try {
    const { Key, ContentType } = req.query;
    const { userId } = req;
    const params = {
      Bucket: 'test-bucket',
      Key: `${userId}/${Key}`,
      Expires: 200,
    };
    const preSignedUrl = await s3.getSignedUrl('putObject', params);
    res.send({ preSignedUrl });
  } catch (err) {
    console.log(err);
    next(err);
  }
};
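On the client side, the returned preSignedUrl can then be used to upload the file straight to S3 with an HTTP PUT. A minimal browser sketch (the /presign route name and the file variable are assumptions, not from the question):

// A sketch, not code from the question: request a presigned URL from
// the server, then PUT the file body directly to S3. The /presign
// route and `file` (a browser File/Blob) are hypothetical.
async function uploadImage(file) {
  const res = await fetch('/presign?Key=' + encodeURIComponent(file.name));
  const { preSignedUrl } = await res.json();
  await fetch(preSignedUrl, { method: 'PUT', body: file });
}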
I know that by creating a bucket policy on my S3 bucket, the image can be uploaded.
Currently I have the following bucket policy:
{
  "Id": "Policy1592575147579",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1592575144175",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::test-bucket/*",
      "Principal": "*" // what must this value be for an AWS EC2 instance that is in a different region from the S3 bucket's region?
    },
    {
      "Sid": "Stmt1592575144176",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::test-bucket/*",
      "Principal": "*" // having a * here makes the bucket public, which I don't want
    }
  ]
}
In the above policy:
For the first case, what should the Principal attribute be if I only want to put objects into the bucket from a specific AWS EC2 instance? Or is there some other way to insert objects into the S3 bucket using a pre-signed URL?
For the second case, if the Principal is * then the bucket becomes public, which is not desired. So I thought of generating a pre-signed URL, similar to the JavaScript snippet above but with s3.getSignedUrl('getObject', params) (see the sketch below). But I have many images to show on the front end, which would definitely increase the load on my server, since each image would require a pre-signed-URL request to the server, so I want to avoid this approach as well. Is there any other approach?
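(For reference, the getObject variant the question alludes to would look like this, reusing the s3 client and params shape from the snippet above; a sketch only, since the question's point is that one such request per image is too many:)

// A sketch of the getObject variant mentioned above: one presigned
// read URL per image, which is exactly the per-image overhead the
// question wants to avoid.
const readUrl = s3.getSignedUrl('getObject', {
  Bucket: 'test-bucket',
  Key: `${userId}/${Key}`,
  Expires: 200,
});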

s3 check if file exists by getting metadata

I am trying to check whether a file exists in an S3 bucket using the AWS JavaScript SDK.
I have defined my policy to allow HeadBucket for my S3 bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:HeadBucket",
      "Resource": "*"
    }
  ]
}
I have attached the above policy to a user and I am using that user in setting up the config for the s3 as follows:
aws-config.json
{
  "user1": {
    "bucket": "my-bucket",
    "region": "us-east-2",
    "accessKey": "********",
    "secretKey": "*********"
  }
}
In my node.js code, I am trying to use headObject to get the meta data for the object as follows:
var AWS = require('aws-sdk');
var s3Config = require("../data/aws-config.json").user1;

AWS.config.update(s3Config);
var s3 = new AWS.S3();

var params = {
  Bucket: "my-bucket",
  Key: "mykey.PNG"
};

s3.headObject(params, function (err, metadata) {
  console.log(err);
});
This gives me a 403 Forbidden error. I have tried everything from changing the AWS policy to allow all S3 operations to allowing access to all resources; nothing seems to work.
EDIT:
I checked AWS.config.credentials and it is loading some random accessKey and secretKey, not the ones from my config file. I am not sure why this is happening.
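(A side note on the EDIT: AWS.config.update expects the option names accessKeyId and secretAccessKey, so the accessKey/secretKey fields from aws-config.json would be silently ignored and the SDK would fall back to its default credential chain. This is an observation about the SDK's option names, not a fix confirmed in the thread. Mapping the fields explicitly would rule it out:)

// Hypothetical sketch: map the config file's field names onto the
// option names the SDK actually reads.
AWS.config.update({
  region: s3Config.region,
  accessKeyId: s3Config.accessKey,
  secretAccessKey: s3Config.secretKey
});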
You are trying to HEAD an object, but your IAM policy grants s3:HeadBucket, which applies to the HEAD Bucket operation, not to objects. To do a HEAD operation on an object, you need the s3:GetObject permission.
See the docs for more information.
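A corrected policy along those lines might look like this (a sketch; the Sid and bucket name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowHeadObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}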

Access denied when making api calls to s3 bucket with Node.js

Using Node.js, I'm making an API that makes calls to my S3 bucket on AWS. When I try to use the putObject method, I receive this error:
{ message: 'Access Denied',
  code: 'AccessDenied',
  region: null,
  time: 2018-07-27T17:08:29.555Z,
  ... etc
}
I have config and credentials files in the C:/User/{User}/.aws/ directory.
config file:
[default]
region=us-east-2
output=json
credentials file:
[default]
aws_access_key_id=xxxxxxxxxxxxxxx
aws_secret_access_key=xxxxxxxxxxx
I created policies for both IAM user and Bucket. Here's my IAM user inline policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
And my bucket policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1488494182833",
  "Statement": [
    {
      "Sid": "Stmt1488493308547",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::134100338998:user/Test-User"
      },
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:GetBucketLocation",
        "s3:Get*",
        "s3:Put*"
      ],
      "Resource": "arn:aws:s3:::admin-blog-assets"
    }
  ]
}
And finally, my API:
var fs = require('fs'),
    AWS = require('aws-sdk'),
    s3 = new AWS.S3('admin-blog-assets');
...
var params = {
  Bucket: 'admin-blog-assets',
  Key: file.filename,
  Body: fileData,
  ACL: 'public-read'
};

s3.putObject(params, function (perr, pres) {
  if (perr) {
    console.log("Error uploading image: ", perr);
  } else {
    console.log("uploading image successfully");
  }
});
I've been banging my head on this for hours, can anyone help?
I believe the source of the problem is related to how you are defining the s3 object, as s3 = new AWS.S3('admin-blog-assets');
If you look at the example used here, it has this line:
var bucketPromise = new AWS.S3({apiVersion: '2006-03-01'}).createBucket({Bucket: bucketName}).promise();
Where the argument passed to AWS.S3 is an object containing that apiVersion field. But you are passing a string value.
The S3 specific documentation overview section has more information:
Sending a Request Using S3

var s3 = new AWS.S3();
s3.abortMultipartUpload(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Locking the API Version

In order to ensure that the S3 object uses this specific API, you can construct the object by passing the apiVersion option to the constructor:

var s3 = new AWS.S3({apiVersion: '2006-03-01'});

You can also set the API version globally in AWS.config.apiVersions using the s3 service identifier:

AWS.config.apiVersions = {
  s3: '2006-03-01',
  // other service API versions
};

var s3 = new AWS.S3();
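Applied to the question's code, the corrected constructor might look like this (a sketch; binding the default bucket through the params option is an assumption about what the original string argument was meant to do):

// A sketch: pass an options object, not a string. Binding a default
// Bucket via `params` is optional and is an assumption about the
// intent of the original string argument.
var s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  params: { Bucket: 'admin-blog-assets' }
});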
Some of the permissions you were granting were bucket permissions and others were object permissions. There are actions matching s3:Get* and s3:Put* that apply to both buckets and objects.
"Resource": "arn:aws:s3:::example-bucket" is only the bucket itself, not the objects inside it.
"Resource": "arn:aws:s3:::example-bucket/*" is only the objects in the bucket, and not the bucket itself.
You can write two policy statements, or you can combine the resources, like this:
"Resource": [
"arn:aws:s3:::example-bucket",
"arn:aws:s3:::example-bucket/*"
]
Important Security Consideration: By using s3:Put* with both the bucket and object ARNs, your policy likely violates the principle of least privilege, because you have implicitly granted this user s3:PutBucketPolicy which allows these credentials to change the bucket policy. There may be other, similar concerns. You probably do not want to give these credentials that much control.
Credit to @PatNeedham for noticing a second issue that I overlooked: the AWS.S3() constructor expects an object as its first argument, not a string.
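A tighter policy along the lines of this answer might look like the sketch below: bucket-level actions on the bucket ARN, object-level actions on the object ARN, with s3:Put* replaced by the specific actions the code needs (s3:PutObjectAcl is included because the question's putObject call sets ACL: 'public-read'):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketLevel",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::134100338998:user/Test-User" },
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::admin-blog-assets"
    },
    {
      "Sid": "ObjectLevel",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::134100338998:user/Test-User" },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::admin-blog-assets/*"
    }
  ]
}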

Creating public URL for Skipper upload on AWS

I'm uploading my image to Amazon Web Services S3 using Skipper.js, but creating a public URL for the uploaded file hasn't been possible with Skipper.js. I don't want to use skipper-disk; I want to upload to S3 and be able to create a publicly accessible URL to download the file. My code is below, and that's all I've done:
imageUpload: function (req, res) {
  req.file('avatar').upload({
    adapter: skipper,
    key: 'key',
    secret: 'secret',
    bucket: 'bucketName'
  }, function (err, fileUploaded) {
    if (err) {
      console.log(err);
      return res.negotiate(err);
    }
    if (fileUploaded.length === 0) {
      return res.badRequest('No files uploaded');
    }
    var imageUrl = fileUploaded[0].extra.Location;
    var imageKy = fileUploaded[0].extra.Key;
    ImageUpload.create({ urlLink: imageUrl, imageKey: imageKy })
      .then(function (urlAdded) {
        if (urlAdded) {
          // urlLink is the link S3 provides to the image,
          // e.g. https://bucket.s3.amazonaws.com/filename
          return res.json({ linkAdded: urlAdded });
        }
      })
      .catch(function (err) {
        return res.badRequest(err);
      });
  });
}
It looks like you did all the upload code right, assuming that at the top of the file you have:

var skipper = require('skipper-s3');

The actual fix has to do with the S3 config. In the AWS S3 console, with your bucket selected, go to Properties -> Permissions -> Edit Bucket Policy.
Add this policy to give your bucket public GET access, replacing "bucketName" in the Resource param with your bucket name:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucketName/*"
    }
  ]
}
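As a quick usage note: once the policy is applied, the Location value in fileUploaded[0].extra should be directly fetchable. A sanity-check sketch (assumes a global fetch, as in Node 18+):

// Once public GET is allowed, the uploaded object's URL should
// return 200. imageUrl is the extra.Location value from the upload.
fetch(imageUrl).then(function (res) {
  console.log(res.status); // expect 200 after the policy is applied
});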
