Using boto3, when copying a whole folder or a file from one S3 bucket to another in the same region, how do I provide the access key and secret access key? - python-3.x

I want to copy a file from one S3 bucket to another in the same region. The two buckets use different access keys and secret keys. How do I provide these credentials using the following Python code snippet:
import boto3

s3 = boto3.resource('s3')
copy_source = {
    'Bucket': 'mybucket',
    'Key': 'mykey'
}
bucket = s3.Bucket('otherbucket')
bucket.copy(copy_source, 'otherkey')

You don't. Copying objects, whether from one bucket to another or within the same bucket, requires you to use one set of credentials that has the necessary permissions in both buckets.
When you perform an object copy, your client actually sends the request to the destination bucket, which then fetches the content from the source bucket over a path internal to S3, using the same credentials you supplied for the original request. The object is transferred without you needing to download it and then upload it again.
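With boto3, that single set of credentials can be passed when creating the resource. A minimal sketch, using placeholder key values rather than real credentials:
import boto3

# One set of credentials that can read the source bucket and write to
# the destination bucket (placeholder values).
s3 = boto3.resource(
    's3',
    aws_access_key_id='YOUR_ACCESS_KEY_ID',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
)
copy_source = {'Bucket': 'mybucket', 'Key': 'mykey'}
s3.Bucket('otherbucket').copy(copy_source, 'otherkey')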
If you don't have a single set of credentials that can access both buckets, you have to resort to downloading and re-uploading.
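In that case, a rough sketch of the fallback, assuming one boto3 client per set of credentials and a small object that fits in memory:
import io
import boto3

# One client per set of credentials (placeholder values).
source = boto3.client('s3', aws_access_key_id='SOURCE_KEY_ID',
                      aws_secret_access_key='SOURCE_SECRET_KEY')
dest = boto3.client('s3', aws_access_key_id='DEST_KEY_ID',
                    aws_secret_access_key='DEST_SECRET_KEY')

# Download with the source credentials...
buf = io.BytesIO()
source.download_fileobj('mybucket', 'mykey', buf)
buf.seek(0)

# ...then upload with the destination credentials.
dest.upload_fileobj(buf, 'otherbucket', 'otherkey')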

Related

Upload a file from a form to an S3 bucket using boto3, with the handler created in Lambda

I want to upload small image and audio files from a form to S3, using Postman for testing. I successfully uploaded files to an AWS S3 bucket from my application running on my local machine. The following is the part of the code I used for file uploading:
import os
import uuid

import boto3

s3_client = boto3.client(
    's3',
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
)

async def save_file_static_folder(file, endpoint, user_id):
    _, ext = os.path.splitext(file.filename)
    raw_file_name = f'{uuid.uuid4().hex}{ext}'
    # Save image file in folder
    if ext.lower() in image_file_extension:
        relative_file_folder = user_id + '/' + endpoint
        contents = await file.read()
        try:
            response = s3_client.put_object(
                Bucket=S3_BUCKET_NAME,
                Key=relative_file_folder + '/' + raw_file_name,
                Body=contents,
            )
        except Exception:
            return FileEnum.ERROR_ON_INSERT
I called this function from another endpoint; the form data (e.g. name, date of birth and other details) is successfully saved in the MongoDB database and the files are uploaded to the S3 bucket.
The app uses FastAPI, and file uploads to the S3 bucket work while the app runs on my local machine.
The same app is deployed to AWS Lambda, with the S3 bucket as storage. To handle the whole app, the following is added in the endpoint file:
handler = Mangum(app)
After deploying the app to AWS and creating the Lambda function from the AWS root user account, files did not get uploaded to the S3 bucket.
If I do not provide files in the form, the AWS API endpoint works successfully: the form data gets stored in the MongoDB database (MongoDB Atlas) and the app works fine hosted on Lambda.
The app deployed as a Lambda function works successfully except for the file uploads in the form. On my local machine, file uploads to S3 succeed.
EDIT
While tracing in CloudWatch I got the following error:
exception An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
I checked the AWS Access Key Id and secret key many times; they are correct, and they are the root user credentials.
It looks like you have configured your Lambda function with an execution IAM role, but you are overriding the AWS credentials supplied to the boto3 SDK here:
s3_client = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
You don't need to provide credentials explicitly because the boto3 SDK (and all language SDKs) will automatically retrieve credentials dynamically for you. So, ensure that your Lambda function is configured with the correct IAM role, and then change your code as follows:
s3_client = boto3.client('s3')
As an aside, you indicated that you may be using AWS root credentials. It's generally a best security practice in AWS to not use root credentials. Instead, create IAM roles and IAM users.
We strongly recommend that you do not use the root user for your everyday tasks, even the administrative ones. Instead, adhere to the best practice of using the root user only to create your first IAM user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks.

Add encryption when uploading an object to S3

import requests

url = 'https://s3.amazonaws.com/<some-bucket-name>'
data = {'key': 'test/test.jpeg'}
files = {'file': open('test.jpeg', 'rb')}
r = requests.post(url, data=data, files=files)
I want to upload an image to the S3 bucket as above. The S3 bucket has AES256 encryption enabled. How can I specify the encryption in POST requests?
Warning
It seems like you have configured your bucket in a way that allows unauthenticated PUT requests into it. This is dangerous and may become expensive, because essentially anybody who knows your bucket name can put data into it, and you'll have to pay the bill. I recommend you change that.
If you want it to stay that way, you can use headers to configure the encryption type for each object as described in the PutObject API-Reference.
The most relevant (excluding SSE-C encryption) are these two:
x-amz-server-side-encryption
The server-side encryption algorithm used when storing this object in Amazon S3 (for example, AES256, aws:kms).
Valid Values: AES256 | aws:kms
x-amz-server-side-encryption-aws-kms-key-id
If x-amz-server-side-encryption is present and has the value of aws:kms, this header specifies the ID of the AWS Key Management Service (AWS KMS) symmetrical customer managed customer master key (CMK) that was used for the object.
If the value of x-amz-server-side-encryption is aws:kms, this header specifies the ID of the symmetric customer managed AWS KMS CMK that will be used for the object. If you specify x-amz-server-side-encryption:aws:kms, but do not provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the AWS managed CMK in AWS KMS to protect the data.
You can add these in your requests.post call.
The API-Docs of the requests library specify how to do that, so it should look roughly like this:
requests.post(
    url,
    data=data,
    files=files,
    headers={"x-amz-server-side-encryption": "AES256"}
)
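If the bucket expected aws:kms instead, the same call would carry both headers from the quote above; a sketch with a placeholder key ID:
requests.post(
    url,
    data=data,
    files=files,
    headers={
        # Request SSE-KMS; the key ID is a placeholder.
        "x-amz-server-side-encryption": "aws:kms",
        "x-amz-server-side-encryption-aws-kms-key-id": "<your-kms-key-id>"
    }
)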

S3 upload using server-side encryption (python SDK)

I'm using the following snippet to upload my files to the AWS S3 buckets:
import boto3

def upload_to_s3(bucket_name, local_name, name):
    bucket = boto3.resource('s3').Bucket(bucket_name)
    bucket.upload_file(local_name, name)
Is there any way to modify this code to enable SSE?
There are two ways:
1. Pass the encryption settings on each upload, as described here: https://www.justdocloud.com/2018/09/21/upload-download-s3-using-aws-kms-python/
s3_client.upload_file(
    filename, bucketname, objectkey,
    ExtraArgs={"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": "<your-kms-key-id>"}
)
2. Enable default bucket encryption with KMS on the bucket and make sure the user/role you're using to upload has KMS permissions; this way you don't need to specify any KMS key here.
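Applied to the snippet from the question, a minimal sketch of the first option, assuming plain SSE-S3 (AES256) rather than KMS:
import boto3

def upload_to_s3(bucket_name, local_name, name):
    bucket = boto3.resource('s3').Bucket(bucket_name)
    # ExtraArgs asks S3 to encrypt the object at rest with SSE-S3.
    bucket.upload_file(local_name, name,
                       ExtraArgs={"ServerSideEncryption": "AES256"})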

Encrypt object in AWS S3 bucket

I am saving some images/objects in an AWS S3 bucket from my application. First I get a signed URL from a Node.js service API, then I upload images or files to that signed URL using jQuery AJAX. I can open an image or object using the link provided in the properties (https://s3.amazonaws.com/bucketname/objectname).
I want to provide security for each uploaded object. Even if an anonymous user gets the link (https://s3.amazonaws.com/bucketname/objectname) somewhere by chance, he should not be able to open it. The objects should be accessible and openable only in cases where, for example, the request has certain header key values. I tried server-side encryption by specifying header key values in the request as shown below.
var file = document.getElementById('fileupload').files[0];
$.ajax({
    url: signedurl,
    type: "PUT",
    data: file,
    headers: { 'x-amz-server-side-encryption': 'AES256' },
    contentType: file.type,
    processData: false,
    success: function (result) {
        var res = result;
    },
    error: function (error) {
        alert(error);
    }
});
Doesn't server-side encryption keep the object encrypted in S3 storage? Or does it only encrypt in transit and decrypt before saving to S3 storage?
If it stores the encrypted object in S3 storage, then how can I open it using the link shown in the properties?
Server-Side Encryption (SSE) in Amazon S3 encrypts objects at rest (stored on disk) but decrypts objects when they are retrieved. Therefore, it is a transparent form of encryption.
If you wish to keep objects in Amazon S3 private, but make them available to specific authorized users, I would recommend using Pre-Signed URLs.
This works by having your application generate a URL that provides time-limited access to a specific object in Amazon S3. The objects are otherwise kept private so they are not accessible.
See documentation: Share an Object with Others
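For example, a pre-signed GET URL can be generated server-side with boto3; a minimal sketch with placeholder bucket and key names:
import boto3

s3_client = boto3.client('s3')

# Grant read access to one private object for one hour;
# after that the link stops working.
url = s3_client.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'bucketname', 'Key': 'objectname'},
    ExpiresIn=3600,
)
print(url)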

Send recorded audio to S3

I am using RecorderJs to record audio. When done, I want to save it to Amazon S3 via my server (I am using the knox library), because I don't want to share the key.
recorder.exportWAV(function(blob) {
    // sending it to server
});
On the server side, using knox ...
knox.putBuffer(blob, path, {
    "Content-Type": "audio/wav",
    "Content-Length": blob.length
}, function(e, r) {
    if (!e) {
        console.log("saved at " + path);
        future.return(path);
    } else {
        console.log(e);
    }
});
And this is saving just 2 bytes!!
Also, is this the best way to save server memory, or are there better alternatives?
I also see this: Recorder.forceDownload(blob[, filename])
Should I force the download and then send the file to the server?
Or should I save to S3 directly from my domain? Is there an option in S3 which cannot be hacked by another user trying to store data on my server?
Or should I save to S3 directly from my domain? Is there an option in S3 which cannot be hacked by another user trying to store data on my server?
You can use S3 bucket policies or IAM policies on S3 buckets to restrict access to your buckets.
Bucket policies: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucketPolicies.html
IAM policies: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingIAMPolicies.html
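As an illustration, a bucket policy can be attached with boto3; a minimal sketch in which the account ID, user name, and bucket name are all placeholders:
import json
import boto3

s3_client = boto3.client('s3')

# Allow only one specific IAM user to put objects into the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/uploader"},
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::mybucket/*"
    }]
}

s3_client.put_bucket_policy(Bucket='mybucket', Policy=json.dumps(policy))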
There are several related threads on SO about this too, for example:
Enabling AWS IAM Users access to shared bucket/objects
AWS s3 bucket policy invalid group principal
