Access Denied in file upload to Amazon S3 - Node.js

I am using aws-sdk to upload a file to an Amazon S3 bucket, and an AccessDenied error occurs.
If I swap in a different access key ID, secret access key, and S3 bucket name, it works fine.
So there is no problem in the code.
I think the issue is either in the S3 bucket settings or in the bucket name given in the code.
I set the S3 bucket name as s3.amazonaws.com/[my bucket name]/[folder name]
and set the access control list and bucket policy to public.
But it still doesn't work.
Please help me.

I figured out my problem.
I changed the S3 bucket name to [my bucket name]/[folder name],
i.e. removed s3.amazonaws.com.
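For illustration, a minimal sketch of a working upload with the AWS SDK for JavaScript v2, assuming credentials come from the environment or a shared credentials file; the bucket, folder, and file names below are placeholders. The essential point is that the Bucket value must not contain the s3.amazonaws.com endpoint.

```javascript
// Hypothetical example -- replace the placeholder names with your own.
const fs = require('fs');
const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'us-east-1' }); // credentials resolved from env vars or ~/.aws

s3.upload({
  Bucket: 'my-bucket-name',            // bucket name only, no "s3.amazonaws.com/" prefix
  Key: 'folder-name/photo.jpg',        // the folder can be expressed as a key prefix
  Body: fs.createReadStream('./photo.jpg')
}, (err, data) => {
  if (err) return console.error('Upload failed:', err);   // AccessDenied surfaces here
  console.log('Uploaded to', data.Location);
});
```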

Related

AWS: cannot open or download files after copying them to a different Amazon S3 bucket using Boto3

I have created a Lambda with boto3 that copies files from one Amazon S3 bucket to a different account's Amazon S3 bucket. Everything works fine, but when the other user tries to open or download the files, they get access denied or cannot download them.
I have the other account's bucket location and KMS key, and I have created a policy/role for that on my bucket. My bucket has encryption enabled.
Do I need to decrypt my files and re-encrypt them with the other account's KMS key? I am testing with https://docs.aws.amazon.com/kms/latest/developerguide/programming-encryption.html#reencryption. Is this correct?
Thanks
This is probably an object ownership issue. You need to grant the destination bucket owner bucket-owner-full-control on the object when uploading. You can also set a bucket policy on the destination bucket that blocks uploads unless the uploader grants this access:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html
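A minimal sketch of granting that ACL during the copy. The question uses boto3, whose copy_object call accepts the same ACL argument; to stay with the Node.js theme of this page the sketch below uses the AWS SDK for JavaScript v2, and the bucket and key names are placeholders.

```javascript
// Hypothetical cross-account copy that grants the destination bucket owner full control.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.copyObject({
  Bucket: 'destination-bucket',              // bucket owned by the other account
  Key: 'copied/report.csv',
  CopySource: 'source-bucket/report.csv',    // "<source bucket>/<source key>"
  ACL: 'bucket-owner-full-control'           // without this, the destination owner may get access denied
}, (err, data) => {
  if (err) return console.error('Copy failed:', err);
  console.log('Copied, ETag:', data.CopyObjectResult.ETag);
});
```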

AWS S3 Cross-account file transfer via Spark: Getting access denied on the transferred objects in the destination bucket

I have a use case where I want to leverage Spark to transfer files between S3 buckets in two different AWS accounts.
I have Spark running in a different AWS account (say Account A). I do not have access to this AWS account.
I have AWS Account B, which holds the source S3 bucket (S3_SOURCE_BUCKET), and AWS Account C, which holds the destination S3 bucket (S3_DESTINATION_BUCKET).
I have created an IAM role in Account C (say CrossAccountRoleC) to read and write from the destination S3 bucket.
I have set up the primary IAM role in Account B (say CrossAccountRoleB) by:
adding Account A's Spark IAM role as a trusted entity,
adding read/write permissions for the S3 buckets in both Account B and Account C, and
adding an inline policy to assume CrossAccountRoleC.
I added CrossAccountRoleB as a trusted entity in CrossAccountRoleC,
and also added CrossAccountRoleB to the bucket policy on the S3_DESTINATION_BUCKET.
I am using Hadoop's FileUtil.copy to transfer files between the source and destination S3 buckets. While the transfer happens successfully, I get 403 Access Denied on the copied objects.
When I specify hadoopConfiguration.set("fs.s3.canned.acl", "BucketOwnerFullControl"), I get an error that says "The requester is not authorized to perform action [ s3:GetObject, s3:PutObject, or kms:Decrypt ] on resource [ s3 Source or Sink ]". From the logs, it seems the operation fails while writing to the destination bucket.
What am I missing?
You are better off using S3A per-bucket settings and just using a different set of credentials for the different buckets. It is not as "pure" as IAM role games, but since nobody understands IAM roles or knows how to debug them, it's more likely to work.
(Do not take the fact that the IAM roles aren't working as a personal skill failing. Everyone fears support issues related to them.)
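A sketch of what those per-bucket settings can look like, assuming the job reads and writes through the s3a:// connector (Hadoop 2.8+) and that Hadoop properties are passed via spark-defaults.conf using the spark.hadoop. prefix; the bucket names and credential values are placeholders.

```
# Hypothetical spark-defaults.conf entries: a separate set of credentials per bucket,
# picked up by S3A instead of a single shared identity.
spark.hadoop.fs.s3a.bucket.my-source-bucket.access.key        AKIA...SOURCE
spark.hadoop.fs.s3a.bucket.my-source-bucket.secret.key        <secret for Account B>
spark.hadoop.fs.s3a.bucket.my-destination-bucket.access.key   AKIA...DEST
spark.hadoop.fs.s3a.bucket.my-destination-bucket.secret.key   <secret for Account C>
```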

Can I get information on an S3 bucket's public access settings from boto3?

I am using boto3 to extract information about my S3 buckets.
However, I am stuck at this point: I am trying to extract information about a bucket's public access settings (see attached screenshot).
How can I get this information? So far I have failed to find any boto3 function that allows me to do so.
You can use get_public_access_block():
Retrieves the PublicAccessBlock configuration for an Amazon S3 bucket.
When Amazon S3 evaluates the PublicAccessBlock configuration for a bucket or an object, it checks the PublicAccessBlock configuration for both the bucket (or the bucket that contains the object) and the bucket owner's account. If the PublicAccessBlock settings are different between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings.
If you wish to modify the settings, you can use put_public_access_block().
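The answer refers to boto3, but the same S3 operation exists across SDKs; as a hedged sketch in Node.js (the AWS SDK for JavaScript v2 exposes it as getPublicAccessBlock), with a placeholder bucket name:

```javascript
// Hypothetical check of a bucket's Block Public Access settings.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.getPublicAccessBlock({ Bucket: 'my-bucket-name' }, (err, data) => {
  if (err) return console.error(err);   // errors if no public access block configuration is set
  // e.g. { BlockPublicAcls: true, IgnorePublicAcls: true,
  //        BlockPublicPolicy: true, RestrictPublicBuckets: true }
  console.log(data.PublicAccessBlockConfiguration);
});
```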

NodeJS Multer-S3 can upload to S3 without using credentials?

I'm a little bit lost as to what's going on; I've been trying to solve this for a few days now. I'm trying to allow only my IAM user to upload an image with public read access. However, I can comment out the IAM user credentials from the AWS SDK and it will still upload to my S3 bucket with no problem. This is not how I intended it to work. I have a feeling it's my policies, but I'm not really sure where to start.
Here are the AWS-SDK credentials being commented out in my code
Here is the code for uploading an image to S3
Here is another piece of code used for uploading an image
For some reason, this is enough to upload to my S3 bucket. Just to clarify, I want to make sure the file is being uploaded only if it has the proper credentials. Currently, the file is being uploaded even when S3 credentials are commented out.
The following are my AWS S3 policies/permissions.
AWS public access bucket settings (my account settings also look like this, since those settings override the bucket's settings)
AWS bucket policy
Bucket ACL
Bucket Cors
If you can point me in the right direction, that'll be fantastic. I'm pretty new to using AWS S3 and am a little lost.
Thanks a bunch.
This happened to me as well. If there are no credentials in your code, the SDK will default to using the ones in your .aws directory, if you have credentials stored there on your local filesystem.
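A minimal sketch of how to see this in action, assuming the AWS SDK for JavaScript v2; the environment variable names in the second half are placeholders for wherever you keep the intended IAM user's keys.

```javascript
const AWS = require('aws-sdk');

// With no keys in code, the SDK walks its default chain (env vars, then ~/.aws/credentials,
// then instance/task roles). This prints whichever identity is actually signing requests:
new AWS.STS().getCallerIdentity({}, (err, data) => {
  if (err) return console.error(err);
  console.log('Requests are signed as:', data.Arn);
});

// To pin uploads to a specific IAM user instead of the chain (placeholder env var names):
const s3 = new AWS.S3({
  credentials: new AWS.Credentials(
    process.env.MY_IAM_ACCESS_KEY_ID,
    process.env.MY_IAM_SECRET_ACCESS_KEY
  )
});
```

If you pass this s3 client to multer-s3, uploads should only succeed when those specific keys are valid.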

The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'eu-central-1'

Using Node.js with the following config file:
{
  "accessKeyId": "XXX",
  "secretAccessKey": "XXXX",
  "region": "eu-central-1",
  "signatureVersion": "v4"
}
I still receive this error message, as if the AWS SDK were trying to access the us-east-1 region.
Any idea?
According to AWS, there are three situations in which this can happen:
1. You are creating a bucket with a name that is already being used as a bucket name in your AWS account or in any other AWS account. (Please note that S3 bucket names are globally unique.)
2. You are doing an operation on your S3 bucket and you have set the region variable (either when configuring the SDK or through environment variables, etc.) to a region other than the one in which the bucket is actually present.
3. You have recently deleted an S3 bucket in a particular region (say us-east-1) and you are trying to create a bucket with the same name as the deleted bucket in another region right after deleting it.
For point 3, give it up to two days and retry.
If a bucket which is present in a certain region (say us-east-1) is deleted, you can always create a bucket with the same name in another region. There is no such restriction in S3 that states you cannot do this. However, you will be able to do this only after you allow some time after deleting the bucket. This is because S3 buckets follow the eventual consistency model for DELETE operations.
It means that after you delete a bucket, it takes a few hours, generally up to 24 to 48 hours, for the DELETE operation to be replicated across all our data centres. Once this change has propagated, you can go ahead and create the bucket again in the desired region.
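In the common case behind point 2, the config is correct but it is applied after an S3 client has already been constructed, or a client falls back to the SDK's default region. A minimal sketch with the AWS SDK for JavaScript v2; the bucket name is a placeholder, and us-east-1 is reported as an empty LocationConstraint:

```javascript
// Hypothetical sketch: apply the config *before* constructing any S3 client,
// or override the region per client; then confirm the bucket's real region.
const AWS = require('aws-sdk');
AWS.config.loadFromPath('./config.json');   // global config only affects clients created afterwards

const s3 = new AWS.S3({ region: 'eu-central-1', signatureVersion: 'v4' }); // per-client override also works

s3.getBucketLocation({ Bucket: 'my-bucket-name' }, (err, data) => {
  if (err) return console.error(err);       // the error text itself names the region S3 expects
  console.log('Bucket region:', data.LocationConstraint || 'us-east-1');
});
```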
