S3: check if file exists by getting metadata - node.js

I am trying to check whether a file exists in an S3 bucket using the AWS JavaScript SDK.
I have defined my policy to allow HeadBucket for my S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:HeadBucket",
      "Resource": "*"
    }
  ]
}
I have attached the above policy to a user, and I am using that user's credentials to set up the S3 config as follows:
aws-config.json
{
  "user1": {
    "bucket": "my-bucket",
    "region": "us-east-2",
    "accessKey": "********",
    "secretKey": "*********"
  }
}
In my Node.js code, I am trying to use headObject to get the metadata for the object as follows:
var AWS = require('aws-sdk');
var s3Config = require("../data/aws-config.json").user1;
AWS.config.update(s3Config);

var s3 = new AWS.S3();
var params = {
  Bucket: "my-bucket",
  Key: "mykey.PNG"
};

s3.headObject(params, function (err, metadata) {
  console.log(err);
});
This is giving me a 403 Forbidden error. I have tried everything from changing the policy to allow all S3 operations to allowing access to all resources; nothing seems to work.
EDIT:
I checked AWS.config.credentials and it is loading some other accessKey and secretKey, not the ones from my config file. I am not sure why this is happening.

You are trying to HEAD an object, but your IAM policy only grants s3:HeadBucket, which applies to the bucket, not to objects.
To perform a HEAD operation on an object (headObject), you need the s3:GetObject permission.
See the S3 permissions docs for more information.
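As a sketch (not the poster's exact setup), the policy would grant s3:GetObject on the objects in the bucket, for example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowHeadObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}

A minimal existence check built on headObject might then look like the following. Note that the v2 SDK expects credential keys named accessKeyId and secretAccessKey, so a config object using accessKey / secretKey (as in the question) is likely skipped by AWS.config.update and the SDK falls back to other credential sources, which may explain the unexpected credentials mentioned in the edit. The environment variable names below are assumptions.

var AWS = require('aws-sdk');

AWS.config.update({
  region: 'us-east-2',
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,        // assumption: credentials provided via env vars
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});

var s3 = new AWS.S3();

s3.headObject({ Bucket: 'my-bucket', Key: 'mykey.PNG' }, function (err, metadata) {
  if (err && err.code === 'NotFound') {
    console.log('Object does not exist');
  } else if (err) {
    // a 403 here usually means the s3:GetObject permission is missing
    console.log('Error checking object:', err);
  } else {
    console.log('Object exists:', metadata);
  }
});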

Related

Upload an image to AWS S3 private bucket using presign url

I am facing a problem while uploading an image to a private S3 bucket using the Node.js AWS SDK.
This is what I have done so far:
import aws from 'aws-sdk';

aws.config.update({
  signatureVersion: 'v4',
  region: 'ap-southeast-1',
  accessKeyId: process.env.AWS_S3_ACCESS_KEY,
  secretAccessKey: process.env.AWS_S3_SECRET_KEY,
});

const s3 = new aws.S3();

export const generate_pre_ssigned_url_write = async (req, res, next) => {
  try {
    const { Key, ContentType } = req.query;
    const { userId } = req;
    const params = {
      Bucket: 'test-bucket',
      Key: `${userId}/${Key}`,
      Expires: 200,
    };
    const preSignedUrl = await s3.getSignedUrl('putObject', params);
    res.send({ preSignedUrl });
  } catch (err) {
    console.log(err);
    next(err);
  }
};
I know that by creating a bucket policy on my S3 bucket the image can be uploaded.
Currently, I have the following bucket policy:
{
  "Id": "Policy1592575147579",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1592575144175",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::test-bucket/*",
      "Principal": "*" // what should this value be for an AWS EC2 instance that is in a different region from the S3 bucket's region?
    },
    {
      "Sid": "Stmt1592575144176",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::test-bucket/*",
      "Principal": "*" // having a * here makes the bucket public, which I don't want
    }
  ]
}
In the above policy:
For the first case, what must the Principal attribute be if I only want to putObject into the bucket from a specific AWS EC2 instance? Or is there any other way to insert objects into the S3 bucket using a pre-signed URL?
For the second case, if I have the Principal as * then the bucket becomes public, which is not desired, so I thought of generating a pre-signed URL similar to the JavaScript snippet above, like s3.getSignedUrl('getObject', params). But I have many images to show on the front-end, which will definitely increase the load on my server, since each image will need a pre-signed URL request to the server, so I want to avoid that approach as well. Is there any other approach to this?
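For reference, a hedged sketch (not part of the question) of how a client could upload using the pre-signed URL returned by the handler above, using only Node's built-in https module. Because the URL is signed with the server's IAM credentials, the caller needs no AWS credentials of its own, and no public bucket policy is required as long as those credentials allow s3:PutObject on the bucket. The file path and content type below are assumptions; if ContentType is included when signing, the same value must be sent on upload.

const fs = require('fs');
const https = require('https');

// PUT a local file to a pre-signed S3 URL.
function uploadWithPresignedUrl(preSignedUrl, filePath, contentType) {
  return new Promise((resolve, reject) => {
    const body = fs.readFileSync(filePath);
    const req = https.request(preSignedUrl, {
      method: 'PUT',
      headers: {
        'Content-Type': contentType,
        'Content-Length': body.length,
      },
    }, (res) => {
      res.resume(); // drain the response body
      if (res.statusCode === 200) {
        resolve();
      } else {
        reject(new Error('Upload failed with status ' + res.statusCode));
      }
    });
    req.on('error', reject);
    req.end(body);
  });
}

// hypothetical usage:
// uploadWithPresignedUrl(preSignedUrl, './photo.png', 'image/png');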

Programmatic file upload to AWS S3 bucket

I want to provision an AWS S3 bucket and an IAM user (with programmatic access only) so that I can grant file-upload privileges to that user only. The user will receive the AWS access key ID and secret access key, to use in a simple Node.js or Python console application. What are the minimal steps required to achieve this?
Create an IAM user (with programmatic access), with no permissions - DONE
Create an S3 bucket and block all public access - DONE
Add a bucket policy that looks like this:
{
  "Version": "2012-10-17",
  "Id": "Policy1234567",
  "Statement": [
    {
      "Sid": "Stmt1234567",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::1234567890:user/someuser"
      },
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::some-bucket-name/*"
    }
  ]
}
I have a simple node.js application that will upload a given file to the bucket:
const fs = require('fs');
const zlib = require('zlib');
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});

const bucketName = 'some-bucket-name';
const fileName = 'alargefile.iso';

var body = fs.createReadStream(fileName)
  .pipe(zlib.createGzip());

// Upload the stream
var s3obj = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  params: {
    Bucket: bucketName,
    Key: fileName
  }
});

s3obj.upload({
  Body: body
}, function (err, data) {
  if (err) {
    console.log("An error occurred", err);
  } else {
    console.log("Uploaded the file at", data.Location);
  }
});
Since the user does not have any permissions, do I still need to create a custom policy to apply as a permission for the user? The OOTB policies are either too generous (AmazonS3FullAccess) or too restrictive (AmazonS3ReadOnlyAccess). Another bit of confusion is that I have set a bucket policy that regulates access to the bucket for a specific user, so would that not be sufficient?
You can create a custom policy for the IAM user as well, where you only allow s3:PutObject to the specific bucket.
For example:
{
  "Version": "2012-10-17",
  "Id": "Policy1234567",
  "Statement": [
    {
      "Sid": "Stmt1234567",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::some-bucket-name/*"
    }
  ]
}
If the bucket and the IAM user are in the same account, you don't need a bucket policy if the IAM user has the above policy.
You definitely need an identity-based policy; see the policy evaluation logic documentation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html
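If you prefer to attach the policy programmatically rather than in the console, a sketch using the v2 SDK's IAM client might look like this (the user name and policy name are assumptions):

var AWS = require('aws-sdk');
var iam = new AWS.IAM();

// Same statement as the custom policy above, serialized for the API call.
var policyDocument = JSON.stringify({
  Version: '2012-10-17',
  Statement: [
    {
      Sid: 'Stmt1234567',
      Effect: 'Allow',
      Action: ['s3:PutObject'],
      Resource: 'arn:aws:s3:::some-bucket-name/*'
    }
  ]
});

iam.putUserPolicy({
  UserName: 'someuser',                // assumption: the IAM user from the bucket policy
  PolicyName: 'AllowPutToSomeBucket',  // assumption: any inline policy name works
  PolicyDocument: policyDocument
}, function (err, data) {
  if (err) console.log('Failed to attach inline policy', err);
  else console.log('Inline policy attached');
});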

How to get url from s3.getSignedUrl()

I'm trying to store some images using AWS S3. Everything was running smoothly until I started getting 400s when PUTting images to URLs I got from s3.getSignedUrl. At that time my code looked like this:
const s3 = new AWS.S3({
  accessKeyId,
  secretAccessKey
});

const imageRouter = express.Router();

imageRouter.post('/upload', (req, res) => {
  const type = req.body.ContentType;
  const Key = `${req.session.user.id}/${uuid()}.${type}`;
  s3.getSignedUrl(
    'putObject',
    {
      Bucket: 'cms-bucket-06',
      ContentType: type,
      Key
    },
    (err, url) => {
      console.log('URL ', url);
      res.send({ Key, url });
    }
  );
});
I followed the link from the error and found out that "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256."
So I did, like this:
const s3 = new AWS.S3({signatureVersion: 'v4'});
But now I get no URL in my callback function. It's undefined. What am I still missing here?
EDIT:
Alright, I added my keys back to the constructor and I'm able to upload images. The new problem is that I can't open them: I get Access Denied every time. I added the proper bucket policy but it doesn't help :(
{
  "Version": "2012-10-17",
  "Id": "Policy1547050603038",
  "Statement": [
    {
      "Sid": "Stmt1547050601490",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
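For what it's worth, a minimal sketch of the constructor described in the edit above: keep the credentials and add signatureVersion: 'v4' (and, ideally, the bucket's region) instead of replacing them. The region value below is an assumption.

const s3 = new AWS.S3({
  accessKeyId,             // same credentials as in the original constructor
  secretAccessKey,
  signatureVersion: 'v4',  // required for AWS4-HMAC-SHA256 signing
  region: 'eu-central-1'   // assumption: use the bucket's actual region
});

If public reads are still denied with the GetObject bucket policy above, the bucket's or account's "Block public access" settings may be overriding that policy, which is worth checking.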

Access denied when making api calls to s3 bucket with Node.js

Using Node.js, I'm making an API that makes calls to my S3 bucket on AWS. When I try to use the putObject method, I receive this error:
message: 'Access Denied',
code: 'AccessDenied',
region: null,
time: 2018-07-27T17:08:29.555Z,
... etc
}
I have a config and credentials file in C:/User/{User}/.aws/ directory
config file:
[default]
region=us-east-2
output=json
credentials file:
[default]
aws_access_key_id=xxxxxxxxxxxxxxx
aws_secret_access_key=xxxxxxxxxxx
I created policies for both the IAM user and the bucket. Here's my IAM user inline policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
And my bucket policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1488494182833",
  "Statement": [
    {
      "Sid": "Stmt1488493308547",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::134100338998:user/Test-User"
      },
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:GetBucketLocation",
        "s3:Get*",
        "s3:Put*"
      ],
      "Resource": "arn:aws:s3:::admin-blog-assets"
    }
  ]
}
And finally, my API:
var fs = require('fs'),
    AWS = require('aws-sdk'),
    s3 = new AWS.S3('admin-blog-assets');
...
var params = {
  Bucket: 'admin-blog-assets',
  Key: file.filename,
  Body: fileData,
  ACL: 'public-read'
};

s3.putObject(params, function (perr, pres) {
  if (perr) {
    console.log("Error uploading image: ", perr);
  } else {
    console.log("uploading image successfully");
  }
});
I've been banging my head on this for hours, can anyone help?
I believe the source of the problem is related to how you are defining the s3 object, as s3 = new AWS.S3('admin-blog-assets');
If you look at the example used here, it has this line:
var bucketPromise = new AWS.S3({apiVersion: '2006-03-01'}).createBucket({Bucket: bucketName}).promise();
Here the argument passed to AWS.S3 is an object containing that apiVersion field, but you are passing a string value.
The S3 specific documentation overview section has more information:
Sending a Request Using S3

var s3 = new AWS.S3();
s3.abortMultipartUpload(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});

Locking the API Version

In order to ensure that the S3 object uses this specific API, you can construct the object by passing the apiVersion option to the constructor:

var s3 = new AWS.S3({apiVersion: '2006-03-01'});

You can also set the API version globally in AWS.config.apiVersions using the s3 service identifier:

AWS.config.apiVersions = {
  s3: '2006-03-01',
  // other service API versions
};

var s3 = new AWS.S3();
Some of the permissions you were granting were bucket permissions and others were object permissions. There are actions matching s3:Get* and s3:Put* that apply to both buckets and objects.
"Resource": "arn:aws:s3:::example-bucket" is only the bucket itself, not the objects inside it.
"Resource": "arn:aws:s3:::example-bucket/*" is only the objects in the bucket, and not the bucket itself.
You can write two policy statements, or you can combine the resources, like this:
"Resource": [
"arn:aws:s3:::example-bucket",
"arn:aws:s3:::example-bucket/*"
]
Important Security Consideration: By using s3:Put* with both the bucket and object ARNs, your policy likely violates the principle of least privilege, because you have implicitly granted this user s3:PutBucketPolicy which allows these credentials to change the bucket policy. There may be other, similar concerns. You probably do not want to give these credentials that much control.
Credit to #PatNeedham for noticing a second issue that I overlooked: the AWS.S3() constructor expects an object as its first argument, not a string.
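As a sketch of the least-privilege split suggested above, applied to the IAM user's inline policy (bucket name from the question; s3:PutObjectAcl is included because the upload sets ACL: 'public-read'):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketLevel",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::admin-blog-assets"
    },
    {
      "Sid": "ObjectLevel",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::admin-blog-assets/*"
    }
  ]
}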

AWS S3 GET returns status code 403

I'm using node aws-sdk with a user who has been set up with the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1458935963000",
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket1/*"
      ]
    }
  ]
}
The bucket itself does not have any explicit policy attached to it.
The following produces a 403:
<video src="https://s3-us-west-2.amazonaws.com/mybucket1/default21.mp4">
</video>
The bucket name has been changed, but it does exist on S3 along with the video. Any help will be much appreciated.
UPDATE 1
Getting the same result even if the aws-sdk is initialized with the credentials of the root/owner of the S3 account.
Your <video> tag is not using the authentication mechanism that your Node.js code has set up.
The video tag gets loaded in a browser, and the browser knows nothing about the AWS SDK or your Node server.
You need to use a pre-signed URL for the video tag.
Generate the URL on the server, and then use that URL in the video tag. For example, if you're using Express:
router.get("/whatever", function(req, res, next){
var params = {Bucket: 'mybucket', Key: 'default21.mp4'};
var url = s3.getSignedUrl('getObject', params);
res.render("some/view", {
videoUrl: url
});
});
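If rendering a view isn't a good fit, the same idea works as a plain JSON endpoint; a hedged sketch follows (the route path and Expires value are assumptions; the SDK uses a default expiry of 15 minutes if Expires is omitted):

router.get("/video-url", function (req, res) {
  var params = {
    Bucket: 'mybucket',
    Key: 'default21.mp4',
    Expires: 300 // seconds until the signed URL expires (assumption)
  };
  // Callback form of getSignedUrl; the browser then uses the returned URL
  // directly as the video src.
  s3.getSignedUrl('getObject', params, function (err, url) {
    if (err) return res.status(500).send(err.message);
    res.json({ videoUrl: url });
  });
});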
