The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'eu-central-1' - node.js

Using Node.js with the following config file:
{
  "accessKeyId": "XXX",
  "secretAccessKey": "XXXX",
  "region": "eu-central-1",
  "signatureVersion": "v4"
}
I still receive this error message, as if the AWS SDK were trying to access the us-east-1 region.
Any ideas?

According to AWS, there are three situations in which this can happen:
1. You are creating a bucket with a name that is already being used as a bucket name in your AWS account or in any other AWS account (please note that S3 bucket names are globally unique).
2. You are performing an operation on your S3 bucket and have set the region variable (either when configuring the SDK or via environment variables, etc.) to a region other than the one in which the bucket actually resides.
3. You have recently deleted an S3 bucket in a particular region (say, us-east-1) and are trying to create a bucket with the same name in another region right after deleting the original.
For point 3, allow up to two days and retry.
If a bucket that exists in a certain region (say, us-east-1) is deleted, you can always create a bucket with the same name in another region; there is no restriction in S3 that prevents this. However, you will only be able to do so after allowing some time to pass after deleting the bucket, because S3 buckets follow the eventual consistency model for DELETE operations. This means that after you delete a bucket, it takes a few hours, generally up to 24 to 48 hours, for the DELETE operation to be replicated across all of AWS's data centres. Once this change has propagated, you can go ahead and create the bucket again in the desired region.
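For the second situation, it can help to confirm which region the bucket actually lives in before configuring the SDK. A minimal check, shown here with boto3 (the bucket name is a placeholder; the equivalent GetBucketLocation call also exists in the Node.js SDK):
import boto3

s3 = boto3.client('s3')
# Returns None for buckets in us-east-1, otherwise the region name
location = s3.get_bucket_location(Bucket='my-bucket-name')['LocationConstraint']
print(location or 'us-east-1')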

Related

How do I read from one S3 bucket using assume role and write to a different bucket (using my original session)?

I have an AWS IAM role with permissions to read from a bucket of a different account.
I'm assuming the role and reading from the bucket successfully.
I'm currently downloading the objects and then writing them to my bucket.
I would love to use a copy command to copy the objects directly to my bucket without the unnecessary download.
I don't want to add a bucket policy to my bucket that would allow the role to write to it because I don't want the account that created the role to be able to write to it.
A short diagram:
Account 1:
  Bucket A
  Role #
Account 2:
  Bucket B
Currently:
  Read from Bucket A (using Role #) > to server
  Write from server (using Account 2) > to Bucket B
Desirable:
  Clone from Bucket A > to Bucket B
Can I use boto3 with multiple sessions? Can I create a role (one that is unavailable to the other account, which I don't control) that will allow me to use the permissions of the original role?
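For reference, here is a minimal sketch of the download-then-upload flow described above using two boto3 clients, one built from the assumed role and one from the original Account 2 credentials; the role ARN, bucket names, and object key are placeholders:
import boto3

# Assume Role # in Account 1 to read from Bucket A
sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::111111111111:role/role-number',  # placeholder ARN
    RoleSessionName='cross-account-read'
)['Credentials']

source_s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
dest_s3 = boto3.client('s3')  # original Account 2 credentials

# Download from Bucket A with the assumed role, then upload to Bucket B with Account 2
obj = source_s3.get_object(Bucket='bucket-a', Key='example.txt')
dest_s3.put_object(Bucket='bucket-b', Key='example.txt', Body=obj['Body'].read())
A direct server-side copy would avoid this round trip, but it requires a single set of credentials that can both read Bucket A and write Bucket B, as discussed in the copy question further down.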

AWS S3 Cross-account file transfer via Spark: Getting access denied on the transferred objects in the destination bucket

I have a use-case where I want to leverage Spark to transfer files between S3 Buckets in 2 different AWS Accounts.
I have Spark running in a different AWS Account (say Account A). I do not have access to this AWS Account.
I have AWS Account B, which holds the source S3 bucket (S3_SOURCE_BUCKET), and AWS Account C, which holds the destination S3 bucket (S3_DESTINATION_BUCKET).
I have created an IAM role in Account C (say: CrossAccountRoleC) to read and write from the destination S3 bucket.
I have set up the primary IAM role in Account B (say: CrossAccountRoleB) by:
Adding Account A's Spark IAM role as a trusted entity
Adding read/write permissions to the S3 buckets in both Account B and Account C
Adding an inline policy to assume CrossAccountRoleC
I have also added CrossAccountRoleB as a trusted entity in CrossAccountRoleC, and added CrossAccountRoleB to the bucket policy on the S3_DESTINATION_BUCKET.
I am using Hadoop's FileUtil.copy to transfer files between the source and destination S3 buckets. While the transfer itself succeeds, I am getting 403 Access Denied on the copied objects.
When I specify hadoopConfiguration.set("fs.s3.canned.acl", "BucketOwnerFullControl"), I get an error that says "The requester is not authorized to perform action [ s3:GetObject, s3:PutObject, or kms:Decrypt ] on resource [ s3 Source or Sink ]". From the logs, it seems that the operation is failing while writing to the destination bucket.
What am I missing?
You are better off using s3a per-bucket settings and just using a different set of credentials for the different buckets. It is not as "pure" as IAM role games, but since nobody understands IAM roles or knows how to debug them, it's more likely to work.
(Do not take the fact that the IAM roles aren't working as a personal skill failing. Everyone fears support issues related to them.)
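As a sketch of what per-bucket settings look like in practice, here is one way to attach different credentials to each bucket through Spark's Hadoop configuration; written as PySpark, with bucket names and keys as placeholders, and assuming the s3a filesystem is used:
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # Credentials used only when talking to the source bucket
    .config("spark.hadoop.fs.s3a.bucket.source-bucket.access.key", "SOURCE_ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.bucket.source-bucket.secret.key", "SOURCE_SECRET_KEY")
    # Credentials used only when talking to the destination bucket
    .config("spark.hadoop.fs.s3a.bucket.dest-bucket.access.key", "DEST_ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.bucket.dest-bucket.secret.key", "DEST_SECRET_KEY")
    .getOrCreate()
)

# Reads and writes through s3a:// pick up the per-bucket credentials automatically
df = spark.read.parquet("s3a://source-bucket/data/")
df.write.parquet("s3a://dest-bucket/data/")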

Can I get information on an S3 Bucket's public access bucket Settings from boto3?

I am using boto3 to extract information about my S3 buckets.
However, I am stuck at this point: I am trying to extract information about a bucket's public access settings (see attached screenshot).
How can I get this information? So far I have not found a boto3 function that allows me to do so.
You can use get_public_access_block():
Retrieves the PublicAccessBlock configuration for an Amazon S3 bucket.
When Amazon S3 evaluates the PublicAccessBlock configuration for a bucket or an object, it checks the PublicAccessBlock configuration for both the bucket (or the bucket that contains the object) and the bucket owner's account. If the PublicAccessBlock settings are different between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings.
If you wish to modify the settings, you can use: put_public_access_block()
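A minimal sketch with boto3 (the bucket name is a placeholder); note that the call raises an error if no public access block configuration has been set on the bucket:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

try:
    response = s3.get_public_access_block(Bucket='my-bucket')
    # Dict with BlockPublicAcls, IgnorePublicAcls, BlockPublicPolicy, RestrictPublicBuckets
    print(response['PublicAccessBlockConfiguration'])
except ClientError as e:
    if e.response['Error']['Code'] == 'NoSuchPublicAccessBlockConfiguration':
        print('No public access block configuration is set on this bucket')
    else:
        raise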

InvalidLocationConstraint creating a bucket in af-south-1 (Cape Town) region using node.js aws-sdk

I am getting an "InvalidLocationConstraint: The specified location-constraint is not valid" error when trying to create an S3 bucket in the af-south-1 (Cape Town) region using the node.js aws-sdk at version 2.726.0 (the latest at the time of writing).
The region has been enabled and I am able to create a bucket using the management console. The IAM user I am using for debugging has full administrative access in the account.
My create bucket call is:
let res = await s3.createBucket({
  Bucket: 'bucketname',
  CreateBucketConfiguration: { LocationConstraint: 'af-south-1' }
}).promise();
This works for regions other than af-south-1.
The documentation gives a list of location constraints. Is this list exhaustive of all possible options, or just a set of examples?
Is it possible to create a bucket in af-south-1 using the sdk, or am I doing something wrong?
This is similar to this question.
Newer AWS regions only support regional endpoints. Thus, if you are creating buckets in more than one region and any of them is a newer region, a separate instance of the S3 class needs to be created for each region:
const s3 = new AWS.S3({
  region: 'af-south-1',
});

How to copy from S3 production to S3 development using Python with different roles?

I need to copy files from S3 production (where I have only read access) to S3 development (where I have write access). The challenge I face is switching the roles.
While copying I need to use the prod role, and while writing I need to use the developer role.
I am trying the code below:
import boto3

boto3.setup_default_session(profile_name='prod_role')
s3 = boto3.resource('s3')

copy_source = {
    'Bucket': 'prod_bucket',
    'Key': 'file.txt'
}

bucket = s3.Bucket('dev_bucket')
bucket.copy(copy_source, 'file.txt')
I need to know how to switch the role.
The most efficient way to move data between buckets in Amazon S3 is to use the resource.copy() or client.copy_object() command. This allows the two buckets to directly communicate (even between different regions), without the need to download/upload the objects themselves.
However, the credentials used to call the command require both read permission from the source and write permission to the destination. It is not possible to provide two different sets of credentials for this copy.
Therefore, you should pick ONE set of credentials and ensure it has the appropriate permissions. This means either:
Give the Prod credentials permission to write to the destination, or
Give the non-Prod credentials permission to read from the Prod bucket
This can be done either by creating a Bucket Policy, or by assigning permissions directly to the IAM Role/User being used.
If this is a regular task that needs to happen, you could consider automatically copying the files by using an Amazon S3 event on the source bucket to trigger a Lambda function that copies the object to the non-Prod destination immediately. This avoids the need to copy files in a batch at some later time.
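For example, once the credentials you pick have been granted both permissions (say, a dev profile allowed to read the prod bucket via a bucket policy), a single session can perform the copy server-side; a minimal sketch, where the dev_role profile name is an assumption and the bucket names are taken from the question:
import boto3

# One set of credentials that can read prod_bucket AND write dev_bucket
session = boto3.Session(profile_name='dev_role')  # hypothetical profile
s3 = session.resource('s3')

copy_source = {
    'Bucket': 'prod_bucket',
    'Key': 'file.txt'
}

# Server-side copy; the object data never passes through this machine
s3.Bucket('dev_bucket').copy(copy_source, 'file.txt')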
