AWS S3 403 Access Denied issue with Node.js

The following is my bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddCannedAcl",
      "Effect": "Allow",
      "Principal": {
        "AWS": "==mydetails=="
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::etcetera-dev/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "public-read"
        }
      }
    }
  ]
}
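To make the Condition concrete: this statement only matches requests that explicitly send the x-amz-acl: public-read header. As a sanity check (the file name is illustrative), a CLI upload run as this policy's principal falls under this statement only when the --acl flag is present:

aws s3api put-object --bucket etcetera-dev --key test.txt --body test.txt --acl public-read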
This is my IAM user's inline policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:PutObject",
        "s3:GetObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
Now I'm trying to upload a file using multer-s3 with acl: 'public-read', and I am getting a 403 Access Denied. If I don't use the acl property in multer, I can upload with no issues.
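For context, a minimal multer-s3 setup along these lines is what's being described (the key function is illustrative, not the original code):

const multer = require('multer');
const multerS3 = require('multer-s3');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'etcetera-dev',
    acl: 'public-read', // removing this line makes the upload succeed
    key: function (req, file, cb) {
      cb(null, Date.now().toString() + '-' + file.originalname);
    }
  })
});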

You may have fixed this by now, but if you haven't, there are many different possible fixes (see: https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/).
But I ran into the same problem, and the following is what fixed it for me.
I presume you're calling s3.upload() when trying to upload your file. I found that if there is no Bucket parameter in your upload() options, you will also receive a 403.
I.e. ensure your upload() call looks like the following:
await s3.upload({
  Bucket: s3Config.Bucket, // the target bucket name
  Body: fileStream,        // a Stream or Buffer
  Key: fileName,           // the object key
  ContentType: mimeType    // the file's MIME type
}).promise();
Bucket: I was missing this param in the function call, as I thought that new AWS.S3(config) handled the bucket. It turns out you should always add the bucket to your upload() params.
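To illustrate the misconception, a sketch of both forms, assuming s3Config holds your bucket name (this reflects what happened in my case, not every SDK version):

const AWS = require('aws-sdk');

// Binding the bucket only in the constructor was not enough in my case:
const s3 = new AWS.S3({ params: { Bucket: s3Config.Bucket } });

// Passing Bucket explicitly in the upload() call is what cleared the 403:
await s3.upload({
  Bucket: s3Config.Bucket,
  Body: fileStream,
  Key: fileName
}).promise();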

Related

Getting Access Denied when trying to upload to s3 Bucket

I am trying to upload an object to an AWS bucket using Node.js (aws-sdk), but I am getting an Access Denied error.
The IAM user whose accessKeyId and secretAccessKey I am using has also been given access to the S3 bucket I am trying to upload to.
Backend Code
const s3 = new AWS.S3({
  accessKeyId: this.configService.get<string>('awsAccessKeyId'),
  secretAccessKey: this.configService.get<string>('awsSecretAccessKey'),
  params: {
    Bucket: this.configService.get<string>('awsPublicBucketName'),
  },
  region: 'ap-south-1',
});
const uploadResult = await s3
  .upload({
    Bucket: this.configService.get<string>('awsPublicBucketName'),
    Body: dataBuffer,
    Key: `${folder}/${uuid()}-${filename}`,
  })
  .promise();
Bucket Policy
{
  "Version": "2012-10-17",
  "Id": "PolicyXXXXXXXXX",
  "Statement": [
    {
      "Sid": "StmtXXXXXXXXXXXXXX",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::some-random-bucket"
    },
    {
      "Sid": "StmtXXXXXXXXXXX",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXX:user/some-random-user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::some-random-bucket"
    }
  ]
}
You have an explicit deny statement, denying anyone from doing anything S3-related on some-random-bucket.
This overrides any allow statements in the policy, according to the official IAM policy evaluation logic.
You can do any of the following:
1. Remove the deny statement from the policy.
2. Modify the deny statement & use NotPrincipal to exclude some-random-user from the deny statement (a sketch follows after the example policy below).
3. Modify the deny statement & use the aws:PrincipalArn condition key with the ArnNotEquals condition operator to exclude some-random-user from the deny statement, i.e.:
{
  "Version": "2012-10-17",
  "Id": "PolicyXXXXXXXXX",
  "Statement": [
    {
      "Sid": "StmtXXXXXXXXXXXXXX",
      "Effect": "Deny",
      "Action": "s3:*",
      "Principal": "*",
      "Resource": "arn:aws:s3:::some-random-bucket",
      "Condition": {
        "ArnNotEquals": {
          "aws:PrincipalArn": "arn:aws:iam::XXXXXXXXXX:user/some-random-user"
        }
      }
    },
    {
      "Sid": "StmtXXXXXXXXXXX",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXX:user/some-random-user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::some-random-bucket"
    }
  ]
}
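For option 2, a sketch of what the deny statement could look like with NotPrincipal. Note that when combining NotPrincipal with "Effect": "Deny", AWS recommends also listing the account root ARN to avoid locking out the account, so it is included here:

{
  "Sid": "StmtXXXXXXXXXXXXXX",
  "Effect": "Deny",
  "NotPrincipal": {
    "AWS": [
      "arn:aws:iam::XXXXXXXXXX:user/some-random-user",
      "arn:aws:iam::XXXXXXXXXX:root"
    ]
  },
  "Action": "s3:*",
  "Resource": "arn:aws:s3:::some-random-bucket"
}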

Amazon S3 GET object Access Denied after setting up S3 bucket policy

I'm using the AWS Node.js SDK to upload and download files to S3 buckets. Recently I updated the bucket policy so that no one besides my domain and the EC2 Elastic Beanstalk role can access these images.
Everything seems to be working fine, except actually downloading the files:
AccessDenied: Access Denied at Request.extractError (/node_modules/aws-sdk/lib/services/s3.js:714:35)
S3 Bucket policy:
{
  "Version": "2012-10-17",
  "Id": "http referer policy",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.*.domain.com and *.domain.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::data/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://www.*.domain.com/*",
            "https://*.domain.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Deny get requests originating not from www.*.domain.com and *.domain.com.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::data/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "https://www.*.domain.com/*",
            "https://*.domain.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Allow get/put requests from api.",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::[redacted]:role/aws-elasticbeanstalk-ec2-role"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:GetObjectVersion",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::data",
        "arn:aws:s3:::data/*"
      ]
    }
  ]
}
I am able to list the contents of the bucket, so that's not the issue here, and uploading is working just fine.
This is my code that uploads files:
const params = {
  Bucket: "data",
  Key: String(fileName),
  Body: file.buffer,
  ContentType: file.mimetype,
  ACL: 'public-read',
};
await s3.upload(params).promise();
For downloading:
await s3.getObject({ Bucket: this.bucketS3, Key: fileId }).promise();
Uploading/downloading was working fine before setting up the policies, but I would rather limit who can view/download these files to only the API and the domains.
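One detail worth keeping in mind (an observation, not a confirmed diagnosis): the aws:Referer condition only sees the HTTP Referer header, and SDK calls such as s3.getObject() do not send one, so the StringNotLike deny statement matches them regardless of who signed the request. A quick way to see the header's effect with a plain HTTPS request (hostname and key are illustrative):

const https = require('https');

https.get({
  host: 'data.s3.amazonaws.com',
  path: '/some-key.jpg',
  // With a Referer matching the StringLike allow, the request is permitted;
  // without one (as with SDK calls), the StringNotLike deny applies.
  headers: { Referer: 'https://www.app.domain.com/' }
}, res => console.log(res.statusCode));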

AWS S3 AccessDenied when uploading an object

I have an S3 bucket called MyBucket.
The permissions are as below.
The Bucket Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::MyBucket/files/*"
      ]
    }
  ]
}
Inside the bucket, I have a folder called files. Inside files, objects can be viewed by the public.
For the IAM user, I have attached the inline policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::MyBucket/files/*"
    }
  ]
}
When I upload an object to the bucket using Node.js:
s3.upload({
  ACL: 'public-read',
  Bucket: this.app.settings.aws.s3.bucket,
  Body: bufferFromFile,
  Key: `files/${result.id}/${data.fileName}`,
}, {}).promise();
I got an AccessDenied: Access Denied error.
How can I solve it?
Update 1:
I tried adding s3:PutObject to the bucket policy as suggested in the comments, but the error stays the same.
I am using EC2 to host the Node.js code.
Update 2:
I tried uploading an object to the bucket using the CLI below and it works:
aws s3 cp s3Test.html s3://MyBucket/files/
Update 3:
aws s3api put-object --bucket MyBucket --key files/s3Test.html --body s3Test.html --acl public-read
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Update 4:
I just realized there is another managed policy on the same IAM user which might be related:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetAccessPoint",
        "s3:PutAccountPublicAccessBlock",
        "s3:GetAccountPublicAccessBlock",
        "s3:ListAllMyBuckets",
        "s3:ListAccessPoints",
        "s3:ListJobs",
        "s3:CreateJob",
        "s3:HeadBucket"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::MyBucket",
        "arn:aws:s3:*:*:accesspoint/*",
        "arn:aws:s3:::*/*",
        "arn:aws:s3:*:*:job/*"
      ]
    }
  ]
}
I'm not sure whether this policy affects the issue.
It works after removing ACL: 'public-read' from the code. Since the bucket policy above already grants public s3:GetObject on files/*, public readability shouldn't depend on the object ACL anyway.
@Marcin and @John Rotenstein provided good insight and direction for finding the reasons in the comments. Really appreciate it!
s3.upload({
  ACL: 'public-read', // remove this line
  Bucket: this.app.settings.aws.s3.bucket,
  Body: bufferFromFile,
  Key: `files/${result.id}/${data.fileName}`,
}, {}).promise();

Lambda S3 getObject (fired from ajax call) throws 403 denied

I have a Lambda function which gets an image from one bucket, resizes it and puts it into another bucket. The Lambda function is set to trigger when a file is created in the source bucket. Fairly standard, tutorial-level stuff.
When I use the aws web UI to put an image in the source bucket, everything works as expected.
However, when I use xhr from my web app to put an image into the same bucket, I get the following error (thrown from my s3.getObject call):
AccessDenied: Access Denied
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:585:35
I've searched around extensively, and most folks say 403 errors usually boil down to role/policy permissions for the Lambda function. But when I trawl the logs, the only difference I see between my xhr upload and an AWS web UI upload is the eventName and userIdentity.
For a web UI upload it's Put and principalId:
eventName: 'ObjectCreated:Put',
userIdentity: { principalId: 'AWS:AIDAJ2VMZPNX5NJD2VBLM' }
But on an xhr call it's Post and Anonymous:
eventName: 'ObjectCreated:Post',
userIdentity: { principalId: 'Anonymous' }
My Lambda role has two policies attached:
AWSLambdaExecute
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:*"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
AWSLambdaBasicExecutionRole
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
My S3 buckets have the following policies:
Source bucket:
{
  "Version": "2012-10-17",
  "Id": "Lambda access bucket policy",
  "Statement": [
    {
      "Sid": "All on objects in bucket lambda",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::source-bucket-name/*"
    },
    {
      "Sid": "All on bucket by lambda",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::source-bucket-name"
    }
  ]
}
Destination bucket:
{
  "Version": "2012-10-17",
  "Id": "Lambda access bucket policy",
  "Statement": [
    {
      "Sid": "All on objects in bucket lambda",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::destination-bucket-name/*"
    },
    {
      "Sid": "All on bucket by lambda",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::destination-bucket-name"
    }
  ]
}
Do I need to somehow pass (or assign) a principal ID to my xhr call to get it to work? Or do I need to add permissions/policies/roles to my function so the trigger can fire it without a principal ID attached?
EDIT:
Here's the JS code that sends a POST'ed file to the source bucket:
function uploadFileAttachment(attachment, form) {
  var formButtons = document.querySelectorAll("form.form--trix button.btn");
  formButtons.forEach((button) => {
    button.setAttribute("disabled", "disabled");
  });
  uploadFile(attachment.file, setProgress, setAttributes)

  function setProgress(progress) {
    attachment.setUploadProgress(progress)
  }

  function setAttributes(attributes) {
    attachment.setAttributes(attributes)
    formButtons.forEach((button) => {
      button.removeAttribute("disabled");
    });
  }
}

function uploadFile(file, progressCallback, successCallback) {
  var key = createStorageKey(file)
  var formData = createFormData(key, file)
  var xhr = new XMLHttpRequest()
  xhr.open("POST", global.s3url, true)
  xhr.upload.addEventListener("progress", function(event) {
    var progress = event.loaded / event.total * 100
    progressCallback(progress)
  })
  xhr.addEventListener("load", function(event) {
    if (xhr.status == 204) {
      var attributes = {
        url: global.s3url + key,
        href: global.s3url + key + "?content-disposition=attachment"
      }
      successCallback(attributes)
    }
  })
  xhr.send(formData);
}

function createStorageKey(file) {
  var date = new Date()
  var day = date.toISOString().slice(0,10)
  var name = date.getTime() + "-" + file.name
  return [ "trix", day, name ].join("/")
}

function createFormData(key, file) {
  var data = new FormData()
  data.append("key", key)
  data.append("Content-Type", file.type)
  data.append("file", file)
  return data
}
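For what it's worth, one way to avoid the Anonymous principal on browser uploads is to have the backend generate a presigned POST and have the xhr post to that, so the upload runs under an IAM identity. A sketch with aws-sdk v2; the bucket name, key and expiry are illustrative, and this is one possible approach rather than the required fix:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Server side: create a presigned POST. data.url becomes the form action,
// and data.fields must be appended to the browser's FormData before sending.
s3.createPresignedPost({
  Bucket: 'source-bucket-name',
  Fields: { key: 'trix/2020-01-01/example.jpg' },
  Expires: 300 // seconds the signed form stays valid
}, function(err, data) {
  if (err) throw err;
  console.log(data.url, data.fields);
});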

AWS Lambda function invoking S3.getObject getting Access Denied

I am using a Lambda function to thumbnail images in an S3 bucket, and I found a sample here: Image conversion using Amazon Lambda and S3 in Node.js. However, after refactoring the code, I got Access Denied when invoking s3.getObject(). I checked whether my IAM policy granted permissions incorrectly, but I have full access to Lambda, S3 and CloudFront. Here's where the exception is thrown:
async.waterfall([
  function download(next) {
    console.log(srcBucket + " " + srcKey);
    s3.getObject({
      Bucket: srcBucket,
      Key: srcKey
    }, next);
    console.log(srcBucket + " " + srcKey + " After");
  }
], function(err, result) {
  if (err) {
    console.error(err);
  }
  // result now equals 'done'
  console.log("End of step " + key);
  callback();
});
Also, my matching regex setting is the same as the sample's:
var srcBucket = event.Records[0].s3.bucket.name;
// srcKey definition as in the sample this code is based on:
var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
var typeMatch = srcKey.match(/\.([^.]*)$/);
var fileName = path.basename(srcKey);
if (!typeMatch) {
  console.error('unable to infer image type for key ' + srcKey);
  return;
}
var imageType = typeMatch[1].toLowerCase();
if (imageType != "jpg" && imageType != "gif" && imageType != "png" &&
    imageType != "eps") {
  console.log('skipping non-image ' + srcKey);
  return;
}
My Lambda policy is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
When a Lambda gets an Access Denied error when trying to use S3, it's almost always a problem with the Lambda's Role Policy. In general, you need something like this to grant access to an S3 bucket from Lambda:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket",
        "arn:aws:s3:::mybucket/*"
      ],
      "Effect": "Allow"
    }
  ]
}
For those who reached this question and have already checked all of the steps below:
1. The Resource definition is correct for action "s3:*":
   "Resource": [
     "arn:aws:s3:::mybucket",
     "arn:aws:s3:::mybucket/*"
   ]
2. The Lambda role has S3 permissions (probably S3FullAccess).
3. The file is in the bucket (S3 throws 403 instead of 404 for a missing object when you lack list permission), and the bucket and object key information is correct.
4. Object-level read permissions have not been overwritten.
There is an extra step which solved my problem: check the IAM role definition for a permissions boundary that blocks access to S3. See the Permission Boundary documentation for more information.
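A quick way to check for a boundary (the role name here is illustrative): if one is attached to the role, this prints its type and ARN, otherwise nothing.

aws iam get-role --role-name my-lambda-role --query 'Role.PermissionsBoundary'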
I got the inspiration from this answer: Reference to error
Therefore, I changed my test event to:
"s3": {
"configurationId": "testConfigRule",
"object": {
"eTag": "0123456789abcdef0123456789abcdef",
"sequencer": "0A1B2C3D4E5F678901",
"key": "images/HappyFace.jpg",
"size": 1024
}
by giving the images/ prefix in the key setting, which I should have done in my regex setting.
This error message is really confusing; do check your policy first, and then try giving the full path as your key.
I've given the necessary permissions on the S3 side to the Lambda execution role ARN:
{
  "Version": "2012-10-17",
  "Id": "Policy1663698473036",
  "Statement": [
    {
      "Sid": "Stmt1663698470845",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::434324234:role/file-conversion-lambdaRole"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::file-conversion-bucket/*"
    },
    {
      "Sid": "Stmt1663698470842",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::434324234:role/file-conversion-lambdaRole"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::file-conversion-bucket/processed/*"
    }
  ]
}
