S3 upload using server-side encryption (Python SDK)

I'm using the following snippet to upload my files to an AWS S3 bucket:
import boto3

def upload_to_s3(bucket_name, local_name, name):
    bucket = boto3.resource('s3').Bucket(bucket_name)
    bucket.upload_file(local_name, name)
Is there any way to modify this code to enable SSE?

There are two ways.
1. Pass ExtraArgs to the upload call, as described here: https://www.justdocloud.com/2018/09/21/upload-download-s3-using-aws-kms-python/
s3_client.upload_file(filename, bucketname, objectkey, ExtraArgs={"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": "<your-kms-key-id>"})
2. Enable default bucket encryption with KMS on the bucket, and make sure the user/role you upload with has the necessary KMS permissions; that way you don't need to specify any KMS key in the code.
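For instance, adapting the snippet from the question, here is a minimal sketch of both options (the SSEKMSKeyId value is a placeholder you would replace with your own key ID or ARN):
import boto3

def upload_to_s3(bucket_name, local_name, name):
    bucket = boto3.resource('s3').Bucket(bucket_name)
    # Option 1: SSE-KMS with an explicit key (placeholder ID below)
    bucket.upload_file(local_name, name,
                       ExtraArgs={'ServerSideEncryption': 'aws:kms',
                                  'SSEKMSKeyId': '<your-kms-key-id>'})
    # Option 2: with default bucket encryption enabled, a plain
    # bucket.upload_file(local_name, name) is encrypted automatically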

Related

Upload a file from a form to an S3 bucket using boto3, where the handler is created in Lambda

I want to upload small image and audio files from a form to S3, using Postman for testing. I successfully uploaded files to an AWS S3 bucket from my application running on my local machine. The following is the part of the code I used for file uploading.
import os
import uuid

import boto3

s3_client = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)

async def save_file_static_folder(file, endpoint, user_id):
    _, ext = os.path.splitext(file.filename)
    raw_file_name = f'{uuid.uuid4().hex}{ext}'
    # Save image file in folder
    if ext.lower() in image_file_extension:
        relative_file_folder = user_id + '/' + endpoint
        contents = await file.read()
        try:
            response = s3_client.put_object(Bucket=S3_BUCKET_NAME, Key=relative_file_folder + '/' + raw_file_name, Body=contents)
        except Exception:
            return FileEnum.ERROR_ON_INSERT
I called this function from another endpoint, and the form data (e.g. name, date of birth and other details) is successfully saved in the MongoDB database and the files are uploaded to the S3 bucket.
The app uses FastAPI, and files are uploaded to the S3 bucket when the app runs on my local machine.
The same app is deployed to AWS Lambda with an S3 bucket as storage. To handle the whole app, the following is added in the endpoint file.
handler = Mangum(app)
After deploying the app to AWS, with the Lambda function created from the AWS root user account, files did not get uploaded to the S3 bucket.
If I don't provide files in the form, the AWS API endpoint works: the form data gets stored in the MongoDB database (MongoDB Atlas) and the app works fine hosted on Lambda.
The app deployed via the Lambda function works successfully except for file uploads from the form; on my local machine, file uploads to S3 succeed.
EDIT
While tracing in CloudWatch I got the following error:
exception An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
I checked the AWS access key ID and secret key many times and they are correct, and they are the root user's credentials.
It looks like you have configured your Lambda function with an execution IAM role, but you are overriding the AWS credentials supplied to the boto3 SDK here:
s3_client = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
You don't need to provide credentials explicitly because the boto3 SDK (and all language SDKs) will automatically retrieve credentials dynamically for you. So, ensure that your Lambda function is configured with the correct IAM role, and then change your code as follows:
s3_client = boto3.client('s3')
As an aside, you indicated that you may be using AWS root credentials. It's generally a best security practice in AWS to not use root credentials. Instead, create IAM roles and IAM users.
We strongly recommend that you do not use the root user for your everyday tasks, even the administrative ones. Instead, adhere to the best practice of using the root user only to create your first IAM user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks.
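As a quick sanity check (a debugging sketch, not part of the fix itself), you can log which credentials the SDK actually resolved from inside the function:
import boto3

# Prints the account and ARN of whatever identity boto3 resolved;
# inside Lambda this should be the function's execution role.
print(boto3.client('sts').get_caller_identity())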

Add encryption when uploading an object to S3

import requests
url = 'https://s3.amazonaws.com/<some-bucket-name>'
data = { 'key': 'test/test.jpeg' }
files = { 'file': open('test.jpeg', 'rb') }
r = requests.post(url, data=data, files=files)
I want to upload an image to the S3 bucket as above. The S3 bucket has AES-256 encryption enabled. How can I specify the encryption in POST requests?
Warning
It seems like you have configured your bucket in a way that allows unauthenticated uploads into it - this is dangerous and may become expensive, because essentially anybody who knows your bucket name can put data into it and you'll have to pay the bill. I recommend you change that.
If you want it to stay that way, you can use headers to configure the encryption type for each object as described in the PutObject API-Reference.
The most relevant (excluding SSE-C encryption) are these two:
x-amz-server-side-encryption
The server-side encryption algorithm used when storing this object in Amazon S3 (for example, AES256, aws:kms).
Valid Values: AES256 | aws:kms
x-amz-server-side-encryption-aws-kms-key-id
If the value of x-amz-server-side-encryption is aws:kms, this header specifies the ID of the symmetric customer managed AWS KMS customer master key (CMK) that will be used for the object. If you specify x-amz-server-side-encryption:aws:kms but do not provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the AWS managed CMK to protect the data.
You can add these in your requests.post call.
The API-Docs of the requests library specify how to do that, so it should look roughly like this:
requests.post(
    url,
    data=data,
    files=files,
    headers={"x-amz-server-side-encryption": "AES256"}
)
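Note that the headers above follow the PutObject reference; for unauthenticated browser-style uploads, S3's POST Object operation reads these parameters from form fields instead. If the header variant doesn't take effect, a sketch of the form-field alternative (same placeholder bucket name as above) would be:
import requests

url = 'https://s3.amazonaws.com/<some-bucket-name>'
data = {
    'key': 'test/test.jpeg',
    # POST Object reads x-amz-* parameters from form fields
    'x-amz-server-side-encryption': 'AES256',
}
files = {'file': open('test.jpeg', 'rb')}
r = requests.post(url, data=data, files=files)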

nodejs client s3 getSignedUrl gives Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4

I'm generating a presigned url to an s3 bucket using the following code
const presignedUrl = s3.getSignedUrl('getObject', {
  Bucket: config.parsedResumeDestination,
  Key: tmpKey,
  Expires: 60 * 60 // 1 hour
});
However, when I just copy and paste the generated URL into the browser, I get the following error:
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4
I saw the following solution: How to generate AWS S3 pre-signed URL using signature version 4. However, the Node.js client for AWS does not seem to have this property. Can someone please tell me what is going wrong here?
When you construct the s3 service object, pass in a signatureVersion.
Here is one way to do it:
const AWS = require("aws-sdk");
const s3 = new AWS.S3({
  signatureVersion: "v4",
  ...credentials
});
There are lots of options when constructing AWS service objects and they are mostly universal:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#constructor-property
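For reference, the same fix in Python's boto3 looks roughly like this (a sketch; the bucket name and key are placeholders):
import boto3
from botocore.config import Config

# Signature Version 4 is required for objects encrypted with KMS keys
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))
presigned_url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'my-key'},
    ExpiresIn=3600,  # 1 hour
)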

Using boto3, when copying a whole folder or a file from one S3 bucket to another in the same region, how do I provide the access key and secret access key?

I want to copy a file from one S3 bucket to another in the same region. The two buckets use different access keys and secret keys. How do I provide these credentials in the following Python code snippet:
import boto3

s3 = boto3.resource('s3')
copy_source = {
    'Bucket': 'mybucket',
    'Key': 'mykey'
}
bucket = s3.Bucket('otherbucket')
bucket.copy(copy_source, 'otherkey')
You don't. Copying objects, whether from one bucket to another or within the same bucket, requires you to use one set of credentials that has the necessary permissions in both buckets.
When you perform an object copy, your client actually sends the request to the destination bucket, which then requests the content from the source bucket over a path internal to S3, using the same credentials as the first request. The object is transferred without you needing to download it and then upload it again.
If you don't have a single set of credentials that can access both buckets, you have to resort to downloading and re-uploading.
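A minimal sketch of that fallback, assuming the two credential sets are configured as hypothetical named profiles 'source' and 'dest':
import boto3

# One session per set of credentials (profile names are hypothetical)
src_s3 = boto3.Session(profile_name='source').client('s3')
dst_s3 = boto3.Session(profile_name='dest').client('s3')

# Download with the source credentials, re-upload with the destination's
obj = src_s3.get_object(Bucket='mybucket', Key='mykey')
dst_s3.put_object(Bucket='otherbucket', Key='otherkey', Body=obj['Body'].read())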

AWS Lambda: How to store a secret for an external API?

I'm building a monitoring tool based on AWS Lambda. Given a set of metrics, the Lambdas should be able to send SMS using the Twilio API. To be able to use the API, Twilio provides an account SID and an auth token.
How and where should I store these secrets?
I'm currently thinking to use AWS KMS but there might be other better solutions.
Here is what I've come up with. I'm using AWS KMS to encrypt my secrets into a file that I upload with the code to AWS Lambda, and I decrypt them when I need to use them.
Here are the steps to follow.
First create a KMS key. You can find documentation here: http://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html
Then encrypt your secret and put the result into a file. This can be achieved from the CLI with:
aws kms encrypt --key-id some_key_id --plaintext "This is the secret you want to encrypt" --query CiphertextBlob --output text | base64 -D > ./encrypted-secret
(Note: base64 -D is the macOS flag; on Linux, use base64 -d.)
You then need to upload this file as part of the Lambda. You can decrypt and use the secret in the Lambda as follows.
var fs = require('fs');
var AWS = require('aws-sdk');

var kms = new AWS.KMS({region: 'eu-west-1'});
var secretPath = './encrypted-secret';
var encryptedSecret = fs.readFileSync(secretPath);

var params = {
    CiphertextBlob: encryptedSecret
};

kms.decrypt(params, function(err, data) {
    if (err) {
        console.log(err, err.stack);
    } else {
        var decryptedSecret = data['Plaintext'].toString();
        console.log(decryptedSecret);
    }
});
I hope you'll find this useful.
As of AWS Lambda support for NodeJS 4.3, the correct answer is to use Environment Variables to store sensitive information. This feature integrates with AWS KMS, so you can use your own master keys to encrypt the secrets if the default is not enough.
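A minimal sketch of that pattern in Python (TWILIO_AUTH_TOKEN is a hypothetical environment variable encrypted with the Lambda KMS encryption helpers, which store the ciphertext base64-encoded):
import base64
import os

import boto3

kms = boto3.client('kms')

def lambda_handler(event, context):
    # The encryption helpers store the ciphertext base64-encoded
    ciphertext = base64.b64decode(os.environ['TWILIO_AUTH_TOKEN'])
    auth_token = kms.decrypt(CiphertextBlob=ciphertext)['Plaintext'].decode('utf-8')
    # ... call the Twilio API with auth_token ...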
Well...that's what KMS was made for :) And certainly more secure than storing your tokens in plaintext in the Lambda function or delegating to a third-party service.
If you go down this route, check out this blog post for an existing usage example to get up and running faster. In particular, you will need to add the following to your Lambda execution role policy:
"kms:Decrypt",
"kms:DescribeKey",
"kms:GetKeyPolicy",
The rest of the code for the above example is a bit convoluted; you should really only need describeKey() in this case.
There is a blueprint for a Node.js Lambda function that starts off by decrypting an API key from KMS. It provides an easy way to decrypt using a promise interface, and it also gives you the role permissions you need to grant the Lambda function so it can access KMS. The blueprint can be found by searching for "algorithmia-blueprint".
Whatever you choose to do, you should use a tool like GitMonkey to monitor your code repositories and make sure your keys aren't committed or pushed to them.
