Getting Access Denied when trying to upload to s3 Bucket - node.js

I am trying to upload an object to an AWS S3 bucket using Node.js (aws-sdk), but I am getting an Access Denied error.
The IAM user whose accessKeyId and secretAccessKey I am using has also been given access to the S3 bucket I am trying to upload to.
Backend Code
const s3 = new AWS.S3({
  accessKeyId: this.configService.get<string>('awsAccessKeyId'),
  secretAccessKey: this.configService.get<string>('awsSecretAccessKey'),
  params: {
    Bucket: this.configService.get<string>('awsPublicBucketName'),
  },
  region: 'ap-south-1',
});

const uploadResult = await s3
  .upload({
    Bucket: this.configService.get<string>('awsPublicBucketName'),
    Body: dataBuffer,
    Key: `${folder}/${uuid()}-${filename}`,
  })
  .promise();
Bucket Policy
{
  "Version": "2012-10-17",
  "Id": "PolicyXXXXXXXXX",
  "Statement": [
    {
      "Sid": "StmtXXXXXXXXXXXXXX",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::some-random-bucket"
    },
    {
      "Sid": "StmtXXXXXXXXXXX",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXX:user/some-random-user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::some-random-bucket"
    }
  ]
}

You have an explicit deny statement, denying anyone from doing anything S3-related on some-random-bucket.
This will override any allow statements in the policy, according to the official IAM policy evaluation logic.
You can do any of the following:
Remove the deny statement from the policy
Modify the deny statement & use NotPrincipal to exclude some-random-user from the deny statement (a sketch follows the full policy example below)
Modify the deny statement & use the aws:PrincipalArn condition key with the ArnNotEquals condition operator to exclude some-random-user from the deny statement, i.e.
{
  "Version": "2012-10-17",
  "Id": "PolicyXXXXXXXXX",
  "Statement": [
    {
      "Sid": "StmtXXXXXXXXXXXXXX",
      "Effect": "Deny",
      "Action": "s3:*",
      "Principal": "*",
      "Resource": "arn:aws:s3:::some-random-bucket",
      "Condition": {
        "ArnNotEquals": {
          "aws:PrincipalArn": "arn:aws:iam::XXXXXXXXXX:user/some-random-user"
        }
      }
    },
    {
      "Sid": "StmtXXXXXXXXXXX",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXX:user/some-random-user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::some-random-bucket"
    }
  ]
}
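For the NotPrincipal option mentioned above, a minimal sketch of just the deny statement could look like the following. Note that AWS's guidance on using NotPrincipal with Deny recommends also excluding the account root ARN so the account itself is not locked out (the account IDs here are placeholders):
{
  "Sid": "StmtXXXXXXXXXXXXXX",
  "Effect": "Deny",
  "NotPrincipal": {
    "AWS": [
      "arn:aws:iam::XXXXXXXXXX:user/some-random-user",
      "arn:aws:iam::XXXXXXXXXX:root"
    ]
  },
  "Action": "s3:*",
  "Resource": "arn:aws:s3:::some-random-bucket"
}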

Related

How can I get PutObject access to S3 from a specific EC2 instance

I created an S3 static website (public bucket), and by default all the EC2 instances in my account can upload files to the S3 bucket.
My goal is to limit upload access to the bucket to just one specific instance (my bastion instance).
So I created a role with all S3 permissions, attached the role to my bastion instance, and then put this policy in the bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::name/*"
    },
    {
      "Sid": "allow only OneUser to put objects",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::3254545218:role/Ec2AccessToS3"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::name/*"
    }
  ]
}
But now none of the EC2 instances, including the bastion instance, can upload files to the S3 bucket.
I tried changing this ARN line:
"NotPrincipal": {
  "AWS": "arn:aws:iam::3254545218:role/Ec2AccessToS3"
to a user ARN and it works, but I want it to work with the role.
I was able to make the operation work for a specific user but not for a specific instance (role).
What am I doing wrong?
Refer to the "Granting same-account bucket access to a specific role" section of this AWS blog. The gist is given below.
Each IAM entity (user or role) has a defined aws:userid variable. You will need this variable for use within the bucket policy to specify the role or user as an exception in a conditional element. An assumed-role’s aws:userId value is defined as UNIQUE-ROLE-ID:ROLE-SESSION-NAME (for example, AROAEXAMPLEID:userdefinedsessionname).
To get AROAEXAMPLEID for the IAM role, do the following:
Be sure you have installed the AWS CLI, and open a command prompt or shell.
Run the following command: aws iam get-role --role-name ROLE-NAME.
In the output, look for the RoleId string, which begins with AROA. You will be using this in the bucket policy to scope bucket access to only this role.
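As a sanity check (this part is my suggestion, not from the blog), you can also run aws sts get-caller-identity from the bastion instance itself; for an EC2 instance profile, the UserId field comes back in the UNIQUE-ROLE-ID:ROLE-SESSION-NAME form with the instance ID as the session name, e.g. (illustrative values):
{
  "UserId": "AROAEXAMPLEID:i-0123456789abcdef0",
  "Account": "111111111111",
  "Arn": "arn:aws:sts::111111111111:assumed-role/Ec2AccessToS3/i-0123456789abcdef0"
}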
Use this aws:userId in the policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::MyExampleBucket",
        "arn:aws:s3:::MyExampleBucket/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userId": [
            "AROAEXAMPLEID:*",
            "111111111111"
          ]
        }
      }
    }
  ]
}
{
  "Role": {
    "Description": "Allows EC2 instances to call AWS services on your behalf.",
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "sts:AssumeRole",
          "Effect": "Allow",
          "Principal": {
            "Service": "ec2.amazonaws.com"
          }
        }
      ]
    },
    "MaxSessionDuration": 3600,
    "RoleId": "AROAUXYsdfsdfsdfsdfL",
    "CreateDate": "2023-01-09T21:36:26Z",
    "RoleName": "Ec2AccessToS3",
    "Path": "/",
    "RoleLastUsed": {
      "Region": "eu-central-1",
      "LastUsedDate": "2023-01-10T05:43:20Z"
    },
    "Arn": "arn:aws:iam::32sdfsdf218:role/Ec2AccessToS3"
  }
}
Just to update: I also tried giving access to a specific user instead, and this is not working either:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::name.com",
        "arn:aws:s3:::name.com/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userId": [
            "AIDOFTHEUSER",
            "ACCOUNTID"
          ]
        }
      }
    }
  ]
}
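For reference, the same blog lists the aws:userId shapes to match for each principal type: for an IAM user it is the user's unique ID, which begins with AIDA (no :* suffix), and the plain account ID entry covers root credentials. The unique ID can be looked up with aws iam get-user --user-name USER-NAME; illustrative output:
{
  "User": {
    "Path": "/",
    "UserName": "some-random-user",
    "UserId": "AIDAEXAMPLEID",
    "Arn": "arn:aws:iam::111111111111:user/some-random-user",
    "CreateDate": "2023-01-01T00:00:00Z"
  }
}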

While enabling an AWS EventBridge rule I am getting an AccessDeniedException

I have added the following permissions to my event bus:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allow_account_to_put_events",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::406342097594:root"
      },
      "Action": "events:PutEvents",
      "Resource": "arn:aws:events:us-east-2:406342097594:event-bus/default"
    },
    {
      "Sid": "allow_account_to_manage_rules_they_created",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::406342097594:root"
      },
      "Action": [
        "events:PutRule",
        "events:PutTargets",
        "events:DeleteRule",
        "events:RemoveTargets",
        "events:DisableRule",
        "events:EnableRule",
        "events:TagResource",
        "events:UntagResource",
        "events:DescribeRule",
        "events:ListTargetsByRule",
        "events:ListTagsForResource"
      ],
      "Resource": "arn:aws:events:us-east-2:406342097594:rule/default",
      "Condition": {
        "StringEqualsIfExists": {
          "events:creatorAccount": "406342097594"
        }
      }
    }
  ]
}
I am getting the error below:
INFO AccessDeniedException: User: arn:aws:sts::406342097594:assumed-role/SDL-role-kz8ds7y3/SDL-Connector is not authorized to perform: events:EnableRule on resource: arn:aws:events:us-east-2:406342097594:rule/SDL-Connector because no identity-based policy allows the events:EnableRule action
In my role, I added an inline policy, copying the AmazonEventBridgeFullAccess policy JSON from https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-use-identity-based.html, and it worked.
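If you want something narrower than full access, a minimal sketch of an identity-based policy covering just the failing action might look like the following (the rule ARN is copied from the error message; events:DisableRule is added on the assumption you will want to toggle the rule both ways):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "events:EnableRule",
        "events:DisableRule"
      ],
      "Resource": "arn:aws:events:us-east-2:406342097594:rule/SDL-Connector"
    }
  ]
}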

Amazon S3 GET object Access Denied after setting up S3 bucket policy

I'm using the AWS Node.js SDK to upload and download files to S3 buckets. Recently I updated the bucket policy so that no one besides my domain and the EC2 Elastic Beanstalk role can access these images.
Everything seems to be working fine, except actually downloading the files:
AccessDenied: Access Denied at Request.extractError (/node_modules/aws-sdk/lib/services/s3.js:714:35)
S3 Bucket policy:
{
  "Version": "2012-10-17",
  "Id": "http referer policy",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.*.domain.com and *.domain.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::data/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://www.*.domain.com/*",
            "https://*.domain.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Deny get requests originating not from www.*.domain.com and *.domain.com.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::data/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "https://www.*.domain.com/*",
            "https://*.domain.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Allow get/put requests from api.",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::[redacted]:role/aws-elasticbeanstalk-ec2-role"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:GetObjectVersion",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::data",
        "arn:aws:s3:::data/*"
      ]
    }
  ]
}
I am able to list the contents of the bucket, so that's not the issue here; uploading is working just fine.
This is my code that uploads files:
const params = {
  Bucket: "data",
  Key: String(fileName),
  Body: file.buffer,
  ContentType: file.mimetype,
  ACL: 'public-read',
};
await s3.upload(params).promise();
For downloading:
await s3.getObject({ Bucket: this.bucketS3, Key: fileId }).promise();
Uploading/downloading was working fine before setting up the policies, but I would rather limit who can view/download these files to only the API and the domains.

Lambda S3 getObject (fired from ajax call) throws 403 denied

I have a lambda function which gets an image from one bucket, resizes it and puts it into another bucket. The lambda function is set to trigger when a file is created in the source bucket. Fairly standard, tutorial-level stuff.
When I use the aws web UI to put an image in the source bucket, everything works as expected.
However, when I use xhr from my web app to put an image into the same bucket, I get the following error (thrown from my s3.getObject call):
AccessDenied: Access Denied
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:585:35
Having searched around extensively, most folks say 403 errors usually boil down to role/policy permissions for the lambda function. But when I trawl the logs, the only difference I see between my xhr upload and an aws web UI upload is the eventName and userIdentity.
For a web UI upload it's Put and principalId:
eventName: 'ObjectCreated:Put',
userIdentity: { principalId: 'AWS:AIDAJ2VMZPNX5NJD2VBLM' }
But on an xhr call it's Post and Anonymous:
eventName: 'ObjectCreated:Post',
userIdentity: { principalId: 'Anonymous' }
My Lambda role has two policies attached:
AWSLambdaExecute
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:*"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
AWSLambdaBasicExecutionRole
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
My S3 buckets have the following policies:
Source bucket:
{
  "Version": "2012-10-17",
  "Id": "Lambda access bucket policy",
  "Statement": [
    {
      "Sid": "All on objects in bucket lambda",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::source-bucket-name/*"
    },
    {
      "Sid": "All on bucket by lambda",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::source-bucket-name"
    }
  ]
}
Destination bucket:
{
  "Version": "2012-10-17",
  "Id": "Lambda access bucket policy",
  "Statement": [
    {
      "Sid": "All on objects in bucket lambda",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::destination-bucket-name/*"
    },
    {
      "Sid": "All on bucket by lambda",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::destination-bucket-name"
    }
  ]
}
Do I need to somehow pass (or assign) a principal ID to my xhr call to get it to work? Or do I need to add permissions/policies/roles so that the function can fire without a principal ID attached to the trigger?
EDIT:
Here's the JS code that sends a POSTed file to the source bucket:
function uploadFileAttachment(attachment, form) {
  var formButtons = document.querySelectorAll("form.form--trix button.btn");
  formButtons.forEach((button) => {
    button.setAttribute("disabled", "disabled");
  });
  uploadFile(attachment.file, setProgress, setAttributes)

  function setProgress(progress) {
    attachment.setUploadProgress(progress)
  }

  function setAttributes(attributes) {
    attachment.setAttributes(attributes)
    formButtons.forEach((button) => {
      button.removeAttribute("disabled");
    });
  }
}

function uploadFile(file, progressCallback, successCallback) {
  var key = createStorageKey(file)
  var formData = createFormData(key, file)
  var xhr = new XMLHttpRequest()
  xhr.open("POST", global.s3url, true)
  xhr.upload.addEventListener("progress", function(event) {
    var progress = event.loaded / event.total * 100
    progressCallback(progress)
  })
  xhr.addEventListener("load", function(event) {
    if (xhr.status == 204) {
      var attributes = {
        url: global.s3url + key,
        href: global.s3url + key + "?content-disposition=attachment"
      }
      successCallback(attributes)
    }
  })
  xhr.send(formData);
}

function createStorageKey(file) {
  var date = new Date()
  var day = date.toISOString().slice(0,10)
  var name = date.getTime() + "-" + file.name
  return [ "trix", day, name ].join("/")
}

function createFormData(key, file) {
  var data = new FormData()
  data.append("key", key)
  data.append("Content-Type", file.type)
  data.append("file", file)
  return data
}
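For context (an observation about the setup, not something stated in the thread): a plain browser POST like the one above carries no AWS credentials, which is why S3 records the upload as Anonymous. One common pattern is to have the backend generate a presigned POST so the request is tied to an IAM identity; a minimal sketch with the v2 JavaScript SDK, where the region and object key are placeholder values, could look like this:
const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'us-east-1' }); // region is a placeholder

// Generate presigned POST fields: the returned fields embed a signed policy,
// so S3 attributes the upload to the signing credentials instead of
// treating it as anonymous.
const presigned = s3.createPresignedPost({
  Bucket: 'source-bucket-name',
  Fields: { key: 'trix/2023-01-01/example.png' }, // hypothetical key
  Expires: 300, // seconds the signature stays valid
  Conditions: [['content-length-range', 0, 10485760]], // cap uploads at 10 MB
});

// presigned.url is the form action; every entry in presigned.fields must be
// appended to the FormData before the file entry, then POSTed as above.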

Python3.7 script to export CloudWatch logs to S3

I am using the code below to copy CloudWatch Logs to S3:
import boto3
import collections
from datetime import datetime, date, time, timedelta

region = 'eu-west-1'

def lambda_handler(event, context):
    yesterday = datetime.combine(date.today() - timedelta(1), time())
    today = datetime.combine(date.today(), time())
    unix_start = datetime(1970, 1, 1)
    client = boto3.client('logs')
    response = client.create_export_task(
        taskName='Export_CloudwatchLogs',
        logGroupName='/aws/lambda/stop-instances',
        fromTime=int((yesterday - unix_start).total_seconds() * 1000),
        to=int((today - unix_start).total_seconds() * 1000),
        destination='bucket',
        destinationPrefix='bucket-{}'.format(yesterday.strftime("%Y-%m-%d"))
    )
    return 'Response from export task at {} :\n{}'.format(datetime.now().isoformat(), response)
I gave the role the policy below:
policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams",
        "logs:CreateExportTask",
        "logs:DescribeExportTasks",
        "logs:DescribeLogGroups"
      ],
      "Resource": [
        "arn:aws:logs:*:*:*"
      ]
    }
  ]
}
EOF
Second policy:
policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetBucketAcl"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::${var.source_market}-${var.environment}-${var.bucket}/*"],
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } }
    }
  ]
}
EOF
I am getting the error below when I execute this in the AWS console:
{
  "errorMessage": "An error occurred (InvalidParameterException) when calling the CreateExportTask operation: GetBucketAcl call on the given bucket failed. Please check if CloudWatch Logs has been granted permission to perform this operation.",
  "errorType": "InvalidParameterException"
}
I have referred to many blogs and attached the appropriate policies to the role.
Check the encryption settings on your bucket. I had the same problem, and it was because encryption was set to AWS-KMS. I was getting this error with the same permissions you have, and it started working as soon as I switched the encryption to AES-256.
It seems like an issue with the S3 bucket permissions. You need to attach this policy to your S3 bucket, amending it with your bucket name and the AWS region for CloudWatch:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:GetBucketAcl",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-exported-logs",
      "Principal": { "Service": "logs.us-west-2.amazonaws.com" }
    },
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-exported-logs/random-string/*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
      "Principal": { "Service": "logs.us-west-2.amazonaws.com" }
    }
  ]
}
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html
I had the same error. The issue was that I put something like bucket/something in the destination parameter, while the policy only had bucket; removing the something prefix from the parameter fixed the problem. So check that the policy and the parameter match.
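A minimal sketch with hypothetical names: if the export task is created with destination='my-exported-logs' and destinationPrefix='export', the PutObject statement in the bucket policy needs to cover that prefix, e.g.:
{
  "Action": "s3:PutObject",
  "Effect": "Allow",
  "Resource": "arn:aws:s3:::my-exported-logs/export/*",
  "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
  "Principal": { "Service": "logs.us-west-2.amazonaws.com" }
}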
