AWS S3 AccessDenied when uploading object - node.js

I have an S3 bucket called MyBucket.
The permissions are set up as below.
The bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::MyBucket/files/*"
      ]
    }
  ]
}
Inside the bucket, I have a folder called files. Inside files, the objects can be viewed by the public.
For the IAM user, I have attached the inline policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::MyBucket/files/*"
    }
  ]
}
When I upload an object to the bucket using Node.js:
s3.upload({
  ACL: 'public-read',
  Bucket: this.app.settings.aws.s3.bucket,
  Body: bufferFromFile,
  Key: `files/${result.id}/${data.fileName}`,
}, {}).promise();
I get an AccessDenied: Access Denied error.
How can I solve it?
Update 1:
I tried adding s3:PutObject to the bucket policy as suggested in the comments, but the error stays the same.
I am using EC2 to host the Node.js code.
Update 2:
I tried uploading an object to the bucket using the CLI command below, and it works.
aws s3 cp s3Test.html s3://MyBucket/files/
Update 3:
aws s3api put-object --bucket MyBucket --key files/s3Test.html --body s3Test.html --acl public-read
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Update 4:
I just realized there is another managed policy attached to the same IAM user, which might be related.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetAccessPoint",
        "s3:PutAccountPublicAccessBlock",
        "s3:GetAccountPublicAccessBlock",
        "s3:ListAllMyBuckets",
        "s3:ListAccessPoints",
        "s3:ListJobs",
        "s3:CreateJob",
        "s3:HeadBucket"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::MyBucket",
        "arn:aws:s3:*:*:accesspoint/*",
        "arn:aws:s3:::*/*",
        "arn:aws:s3:*:*:job/*"
      ]
    }
  ]
}
I am not sure whether this policy affects the issue.

It works after removing ACL: 'public-read' from the code.
@Marcin and @John Rotenstein provided good insight and direction for finding the cause in the comments. Really appreciate it!
s3.upload({
  ACL: 'public-read', // remove this line
  Bucket: this.app.settings.aws.s3.bucket,
  Body: bufferFromFile,
  Key: `files/${result.id}/${data.fileName}`,
}, {}).promise();

Related

Getting Access Denied when trying to upload to s3 Bucket

I am trying to upload an object to an AWS bucket using Node.js (aws-sdk), but I am getting an access denied error.
The IAM user whose accessKeyId and secretAccessKey I am using has also been given access to the S3 bucket I am trying to upload to.
Backend Code
const s3 = new AWS.S3({
  accessKeyId: this.configService.get<string>('awsAccessKeyId'),
  secretAccessKey: this.configService.get<string>('awsSecretAccessKey'),
  params: {
    Bucket: this.configService.get<string>('awsPublicBucketName'),
  },
  region: 'ap-south-1',
});
const uploadResult = await s3
  .upload({
    Bucket: this.configService.get<string>('awsPublicBucketName'),
    Body: dataBuffer,
    Key: `${folder}/${uuid()}-$(unknown)`,
  })
  .promise();
Bucket Policy
{
  "Version": "2012-10-17",
  "Id": "PolicyXXXXXXXXX",
  "Statement": [
    {
      "Sid": "StmtXXXXXXXXXXXXXX",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::some-random-bucket"
    },
    {
      "Sid": "StmtXXXXXXXXXXX",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXX:user/some-random-user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::some-random-bucket"
    }
  ]
}
You have an explicit deny statement, denying anyone from doing anything S3-related on some-random-bucket.
This will override any allow statements in the policy, according to the official IAM policy evaluation logic.
You can do any of the following:
Remove the deny statement from the policy
Modify the deny statement & use NotPrincipal to exclude some-random-user from the deny statement
Modify the deny statement & use the aws:PrincipalArn condition key with the ArnNotEquals condition operator to exclude some-random-user from the deny statement i.e.
{
  "Version": "2012-10-17",
  "Id": "PolicyXXXXXXXXX",
  "Statement": [
    {
      "Sid": "StmtXXXXXXXXXXXXXX",
      "Effect": "Deny",
      "Action": "s3:*",
      "Principal": "*",
      "Resource": "arn:aws:s3:::some-random-bucket",
      "Condition": {
        "ArnNotEquals": {
          "aws:PrincipalArn": "arn:aws:iam::XXXXXXXXXX:user/some-random-user"
        }
      }
    },
    {
      "Sid": "StmtXXXXXXXXXXX",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXX:user/some-random-user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::some-random-bucket"
    }
  ]
}
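The "explicit deny overrides allow" rule can be sketched as a toy evaluator in Node.js. This is a hypothetical helper for illustration, not the real IAM engine; statements are reduced to an Effect plus a matches predicate:

```javascript
// Minimal sketch of IAM's deny-overrides-allow evaluation for one request.
// Hypothetical helper, not the real policy engine.
function evaluate(statements, request) {
  let allowed = false;
  for (const stmt of statements) {
    if (!stmt.matches(request)) continue;
    if (stmt.Effect === 'Deny') return 'Deny';   // explicit deny always wins
    if (stmt.Effect === 'Allow') allowed = true;
  }
  return allowed ? 'Allow' : 'ImplicitDeny';     // no matching statement => implicit deny
}

// The original bucket policy: a blanket Deny on the bucket plus an Allow for the user.
const statements = [
  { Effect: 'Deny',  matches: (r) => r.resource === 'some-random-bucket' },
  { Effect: 'Allow', matches: (r) => r.user === 'some-random-user' },
];

console.log(evaluate(statements, { user: 'some-random-user', resource: 'some-random-bucket' }));
// 'Deny' - the explicit deny overrides the allow, which is exactly the 403 seen above
```

Even though the user is explicitly allowed, the deny statement also matches, so the request is denied.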

Amazon S3 GET object Access Denied after setting up S3 bucket policy

I'm using the AWS Node.js SDK to upload and download files to S3 buckets. Recently I updated the bucket policy so that no one besides my domain and the EC2 Elastic Beanstalk role can access these images.
Everything seems to be working fine, except actually downloading the files:
AccessDenied: Access Denied at Request.extractError (/node_modules/aws-sdk/lib/services/s3.js:714:35)
S3 Bucket policy:
{
  "Version": "2012-10-17",
  "Id": "http referer policy",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.*.domain.com and *.domain.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::data/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://www.*.domain.com/*",
            "https://*.domain.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Deny get requests originating not from www.*.domain.com and *.domain.com.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::data/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "https://www.*.domain.com/*",
            "https://*.domain.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Allow get/put requests from api.",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::[redacted]:role/aws-elasticbeanstalk-ec2-role"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:GetObjectVersion",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::data",
        "arn:aws:s3:::data/*"
      ]
    }
  ]
}
I am able to list the contents of the bucket, so that's not the issue in this case; uploading is working just fine.
This is my code that uploads files:
const params = {
  Bucket: "data",
  Key: String(fileName),
  Body: file.buffer,
  ContentType: file.mimetype,
  ACL: 'public-read',
};
await s3.upload(params).promise();
For downloading:
await s3.getObject({ Bucket: this.bucketS3, Key: fileId }).promise();
Uploading/downloading was working fine before setting up the policies, but I would rather limit who can view/download these files to only the API and the domains.
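A likely cause, though not confirmed in the thread: SDK calls send no Referer header at all, and negated condition operators like StringNotLike evaluate to true when the condition key is missing from the request, so the Deny statement catches every getObject call made through the SDK. A toy Node.js sketch of that condition behavior (hypothetical helpers, not the real IAM matcher):

```javascript
// Sketch of how a StringNotLike condition on aws:Referer behaves.
// Hypothetical helper: '*' in a policy pattern matches any run of characters.
function stringLike(value, pattern) {
  const re = new RegExp('^' + pattern.split('*')
    .map(p => p.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'))  // escape regex metacharacters
    .join('.*') + '$');
  return re.test(value);
}

// Negated operators match when the key is absent from the request entirely.
function stringNotLike(value, patterns) {
  if (value === undefined) return true; // no Referer header => the deny condition matches
  return patterns.every(p => !stringLike(value, p));
}

const patterns = ['https://www.*.domain.com/*', 'https://*.domain.com/*'];

// Browser request from the site: the deny condition does NOT match.
console.log(stringNotLike('https://app.domain.com/page', patterns)); // false

// SDK request with no Referer at all: the deny condition matches, GetObject is denied.
console.log(stringNotLike(undefined, patterns)); // true
```

Under this reading, the third statement's Allow for the Elastic Beanstalk role is overridden by the Referer-based Deny, which would explain why uploads (s3:PutObject, not covered by the Deny) still work while downloads fail.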

Access Denied issue in AWS Cross Account S3 PutObject encrypted by AWS Managed Key

I am trying to put a text file from a Lambda function in Account B into an S3 bucket in Account A. The S3 bucket (test-bucket) has AWS-KMS encryption enabled with the aws/s3 managed key.
I added the below permissions to the Account A S3 bucket (test-bucket):
{
  "Version": "2012-10-17",
  "Id": "ExamplePolicy",
  "Statement": [
    {
      "Sid": "ExampleStmt",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AccountB:role/Lambda-Role"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test-bucket/*"
    }
  ]
}
I added the below inline policy to my Lambda execution role in Account B:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:GenerateDataKey",
        "kms:DescribeKey",
        "kms:ReEncrypt*"
      ],
      "Resource": [
        "arn:aws:kms:us-west-2:AccountA:key/AWS-KMS-ID"
      ]
    }
  ]
}
This is my Lambda Code:
res = s3.put_object(
    Body=message,
    Key=file_name,
    Bucket='test-bucket',
    ACL='bucket-owner-full-control'
)
I get the below error when running this code from the Account B Lambda:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Since the S3 bucket is encrypted with an AWS managed key, I cannot edit the KMS key policy as we would for a customer managed key.
Please guide me on what I am missing.
Try granting your Lambda function permission for the s3:PutObject action. The inline policy of your Lambda role should then be something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:GenerateDataKey",
        "kms:DescribeKey",
        "kms:ReEncrypt*"
      ],
      "Resource": [
        "arn:aws:kms:us-west-2:AccountA:key/AWS-KMS-ID"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test-bucket/*"
    }
  ]
}
I've been troubleshooting this for a couple of hours myself.
I don't believe this is possible with the default AWS managed key when using SSE-KMS. Instead, you have to create a CMK and grant the cross-account user access to that key.
HTH
Cross-account access cannot be granted for an AWS managed key. You need to use a customer managed key or default encryption.
This can be useful: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-access-default-encryption/
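Following the answers above, once the bucket uses a customer managed key, the cross-account writer can also name that key explicitly in the request. A hedged Node.js sketch of what the put parameters might look like; the key ARN is a placeholder, and ServerSideEncryption / SSEKMSKeyId are standard S3 request parameters:

```javascript
// Sketch: PutObject parameters naming a customer managed KMS key explicitly.
// The key ARN below is a placeholder, not a real key.
const putParams = {
  Bucket: 'test-bucket',
  Key: 'file.txt',
  Body: 'hello world',
  ACL: 'bucket-owner-full-control',   // grants the bucket owner (Account A) full control
  ServerSideEncryption: 'aws:kms',
  SSEKMSKeyId: 'arn:aws:kms:us-west-2:AccountA:key/CMK-ID',  // placeholder CMK ARN
};

// With the AWS SDK for JavaScript v2 this would be sent as:
// await s3.putObject(putParams).promise();
console.log(putParams.ServerSideEncryption); // aws:kms
```

The Account B role then needs the kms:GenerateDataKey and kms:Decrypt permissions on that CMK (as in the inline policy above), and the CMK's own key policy must allow the Account B role, which is exactly what the aws/s3 managed key does not permit.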

AWS S3 403 access denied issue with nodeJS

The following is my bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddCannedAcl",
      "Effect": "Allow",
      "Principal": {
        "AWS": "==mydetails=="
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::etcetera-dev/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "public-read"
        }
      }
    }
  ]
}
This is my IAM user's inline policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:PutObject",
        "s3:GetObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
Now I'm trying to upload a file using multer-s3 with acl: 'public-read', and I am getting a 403 access denied. If I don't use the acl property in multer, I am able to upload with no issues.
You may have fixed this by now, but if you haven't, there are many different possible fixes (see: https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/).
But I ran into the same problem, and what fixed it for me was the following.
I presume you're calling s3.upload() when trying to upload your file. I found that if there is no Bucket parameter within your upload() options, you will also receive a 403.
i.e. ensure your upload() call looks like the following:
await s3.upload({
  Bucket: // some s3Config.Bucket
  Body: // Stream or File,
  Key: // Filename,
  ContentType: // Mimetype
}).promise();
Bucket: // some s3Config.Bucket - I was missing this param in the function call, as I thought that new AWS.S3(config) handled the bucket. Turns out, you should always add the bucket to your upload params.

IAM policy attached to role not working

I have a node application that is invoking assumeRoleWithWebIdentity in the following manner:
var params = {
  DurationSeconds: 3600,
  RoleArn: "arn:aws:iam::role/my_test_role",
  RoleSessionName: "session_name",
  WebIdentityToken: req.body.id_token
};
sts.assumeRoleWithWebIdentity(params, function(err, data) {
  // create s3 client with data.Credentials.SecretAccessKey, AccessKeyId, sessionToken
  // call s3.listObjectsV2({Bucket: 'my-bucket'})
});
Now, I have a role in IAM called my_test_role. Attached to that role is a policy called my_test_policy, which looks as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my_bucket",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "",
            "home/",
            "home/BOB/*"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my_bucket/home/BOB",
        "arn:aws:s3:::my_bucket/home/BOB/*"
      ]
    }
  ]
}
In S3, I have a bucket called my_bucket, and in that bucket is the folder home. In home are a bunch of user folders:
my_bucket/home/ALICE
my_bucket/home/BOB
my_bucket/home/MARY
When my Node.js application lists objects, it lists all the objects in home. The intention of my policy is to limit the listing to the user that has assumed the role. So if BOB has assumed the role, he should only see my_bucket/home/BOB and nothing else. I'll eventually replace the hardcoded 'BOB' in the policy with ${my_oidc_url:sub}, but before I get to that step, I thought I would just hardcode "BOB" and see if that works. It does not. The assumed role sees all of the folders. Any suggestions?
In your s3:ListBucket policy you have allowed the home/ prefix to be listed, so of course it will list everything in there.
If you only allowed the home/BOB/* prefix, then I think you would get the desired behavior.
To test this situation, I did the following:
Created an Amazon S3 bucket and uploaded files.
The contents are:
2018-05-11 08:57:55 10096 foo
2018-05-11 08:57:38 10096 home/alice/foo
2018-05-11 08:57:32 10096 home/bob/foo
2018-05-11 08:57:51 10096 home/foo
2018-05-11 08:57:45 10096 home/mary/foo
Created an IAM Role called bob.
The permissions are:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjects",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "home/bob/*"
          ]
        }
      }
    },
    {
      "Sid": "AccessObjects",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-bucket/home/bob/*"
    }
  ]
}
(ListObjects is equivalent to ListBucket)
Assumed the bob role
Via the CLI command:
aws sts assume-role --role-arn arn:aws:iam::123456789012:role/bob --role-session-name bob
Saved the resulting credentials to a bob profile
I could then do anything in the home/bob path, but nothing in other paths:
$ aws --profile bob s3 ls s3://my-bucket/home/bob/
2018-05-11 09:16:23 10096 foo
$ aws --profile bob s3 cp foo s3://my-bucket/home/bob/foo
upload: ./foo to s3://my-bucket/home/bob/foo
$ aws --profile bob s3 cp s3://my-bucket/home/bob/foo .
download: s3://my-bucket/home/bob/foo to ./foo
$ aws --profile bob s3 ls s3://my-bucket/home/
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
$ aws --profile bob s3 ls s3://my-bucket/
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
Policy Variables
While an IAM User can be easily substituted into a policy variable, this does not seem as easy with an Assumed Role. This is because the variables will be set to:
aws:username will be undefined
aws:userid will be set to role id:caller-specified-role-name
This is not as easy as simply referencing a value of 'bob'. You'd effectively need to name the S3 path something like: AIDAJQABLZS4A3QDU576Q:bob
OK, it ended up being a few things:
My Node.js app wasn't using my temporary credentials, but was instead using the static ones. This was because my S3 client was initialized incorrectly after assuming the role.
My Node.js app was sending an empty prefix, as well as an incorrect prefix in one instance.
The following policy worked for me:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToSeeBucketListInTheConsole",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Sid": "AllowRootAndHomeListingOfCompanyBucket",
      "Action": "s3:ListBucket",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-bucket2"
    },
    {
      "Sid": "DenyAllListingExpectForHomeAndUserFolders",
      "Effect": "Deny",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket2",
      "Condition": {
        "Null": {
          "s3:prefix": "false"
        },
        "StringNotLike": {
          "s3:prefix": [
            "",
            "home/",
            "home/${MY_OIDC_URL:sub}/*"
          ]
        }
      }
    },
    {
      "Sid": "AllowAllS3ActionsInUserFolder",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-bucket2/home/${MY_OIDC_URL:sub}/*"
    }
  ]
}
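The Null plus StringNotLike pair in the deny statement can be sketched as a toy helper in Node.js. This is a hypothetical illustration, not the real IAM engine; a literal sub value stands in for the ${MY_OIDC_URL:sub} variable:

```javascript
// Sketch of the deny condition above: deny a ListBucket call when a prefix
// IS present ("Null": {"s3:prefix": "false"}) AND it is NOT one of the
// allowed patterns (StringNotLike). Hypothetical helper for illustration.
function listDenied(prefix, sub) {
  if (prefix === undefined) return false;          // no prefix at all => the Null check fails, deny doesn't apply
  const allowedPatterns = ['', 'home/', `home/${sub}/*`];
  const matches = (value, pattern) => {
    // '*' in a policy pattern matches any run of characters
    const re = new RegExp('^' + pattern.split('*')
      .map(p => p.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'))
      .join('.*') + '$');
    return re.test(value);
  };
  return allowedPatterns.every(p => !matches(prefix, p)); // StringNotLike across all patterns
}

console.log(listDenied('home/', 'bob'));       // false - listing home/ itself is allowed
console.log(listDenied('home/bob/', 'bob'));   // false - bob's own folder is fine
console.log(listDenied('home/alice/', 'bob')); // true  - bob cannot list alice's folder
```

This shows why the earlier policy failed: its allow list included home/ with no matching deny, so listing home/ returned every user's folder, while here the deny statement blocks any prefix outside the caller's own folder.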
