Add encryption when uploading an object to S3 - python-3.x

import requests
url = 'https://s3.amazonaws.com/<some-bucket-name>'
data = { 'key': 'test/test.jpeg' }
files = { 'file': open('test.jpeg', 'rb') }
r = requests.post(url, data=data, files=files)
I want to upload an image to the S3 bucket as above. The S3 bucket has AES256 encryption enabled. How can I specify the encryption in the POST request?

Warning
It seems like you have configured your bucket in a way that allows unauthenticated uploads into it - this is dangerous and may become expensive, because essentially anybody who knows your bucket name can put data into it and you'll have to pay the bill. I recommend you change that.
If you want it to stay that way, you can use headers to configure the encryption type for each object as described in the PutObject API-Reference.
The most relevant (excluding SSE-C encryption) are these two:
x-amz-server-side-encryption
The server-side encryption algorithm used when storing this object in Amazon S3 (for example, AES256, aws:kms).
Valid Values: AES256 | aws:kms
x-amz-server-side-encryption-aws-kms-key-id
If x-amz-server-side-encryption is present and has the value of aws:kms, this header specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that was used for the object.
If the value of x-amz-server-side-encryption is aws:kms, this header specifies the ID of the symmetric customer managed AWS KMS CMK that will be used for the object. If you specify x-amz-server-side-encryption:aws:kms, but do not provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the AWS managed CMK in AWS to protect the data.
You can add these in your requests.post call.
The API-Docs of the requests library specify how to do that, so it should look roughly like this:
requests.post(
    url,
    data=data,
    files=files,
    headers={"x-amz-server-side-encryption": "AES256"}
)
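If you want to use your own KMS key instead, the same pattern applies with the second header. A minimal sketch, where the key ARN is a placeholder you would replace with your own:
import requests

url = 'https://s3.amazonaws.com/<some-bucket-name>'
data = {'key': 'test/test.jpeg'}
files = {'file': open('test.jpeg', 'rb')}

# Placeholder KMS key ARN - substitute the ID or ARN of your own key
kms_key_id = 'arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID'

r = requests.post(
    url,
    data=data,
    files=files,
    headers={
        'x-amz-server-side-encryption': 'aws:kms',
        'x-amz-server-side-encryption-aws-kms-key-id': kms_key_id,
    },
)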

Related

Generating Cloud Storage Signed URL from Google Cloud Function without using explicit key file

I'd like to create a pre-signed upload URL to a storage bucket, and would like to avoid an explicit reference to a json key.
Currently, I'm attempting to do this with the Default App Engine Service Account
I'm attempting to follow along with this answer but am getting this error:
AttributeError: you need a private key to sign credentials.the credentials you are currently using <class 'google.auth.compute_engine.credentials.Credentials'> just contains a token. see https://googleapis.dev/python/google-api-core/latest/auth.html#setting-up-a-service-account for more details.
My Cloud Function code looks like this:
from google.cloud import storage
import datetime
import google.auth
def generate_upload_url(blob_name, additional_metadata: dict = {}):
    credentials, project_id = google.auth.default()

    # Perform a refresh request to get the access token of the current credentials (Else, it's None)
    from google.auth.transport import requests
    r = requests.Request()
    credentials.refresh(r)

    client = storage.Client()
    bucket = client.get_bucket("my_bucket")
    blob = bucket.blob(blob_name)

    service_account_email = credentials.service_account_email
    print(f"attempting to create signed url for {service_account_email}")
    url = blob.generate_signed_url(
        version="v4",
        service_account_email=service_account_email,
        access_token=credentials.token,
        # This URL is valid for 120 minutes
        expiration=datetime.timedelta(minutes=120),
        # Allow PUT requests using this URL.
        method="PUT",
        content_type="application/octet-stream",
    )
    return url

def get_upload_url(request):
    blob_name = get_param(request, "blob_name")
    url = generate_upload_url(blob_name)
    return url
When you use version v4 of the signed URL, the first line of the method calls the ensure_signed_credentials method, which checks whether the current service account can generate a signature in standalone mode (that is, with a private key). And so, that breaks the current behavior.
In the comment of the function, it's clearly described that a service account JSON file is required:
If you are on Google Compute Engine, you can't generate a signed URL.
Follow `Issue 922`_ for updates on this. If you'd like to be able to
generate a signed URL from GCE, you can use a standard service account
from a JSON file rather than a GCE service account.
So, use the v2 version instead.
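A minimal sketch of that change, assuming everything else in the function above stays the same (only the generate_signed_url call differs):
# Same call as above, but with a v2 signature, which can sign via the IAM API
# using the default service account's access token (no private key file needed).
url = blob.generate_signed_url(
    version="v2",
    service_account_email=service_account_email,
    access_token=credentials.token,
    # This URL is valid for 120 minutes
    expiration=datetime.timedelta(minutes=120),
    # Allow PUT requests using this URL.
    method="PUT",
    content_type="application/octet-stream",
)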

Using boto3, while copying from whole folder or file from one s3 bucket to another in same region, how to provide access key and secret access key?

I want to copy a file from one S3 bucket to another in the same region. The buckets have different access keys and secret keys. How do I provide these access and secret keys in the following Python code snippet?
import boto3

s3 = boto3.resource('s3')
copy_source = {
    'Bucket': 'mybucket',
    'Key': 'mykey'
}
bucket = s3.Bucket('otherbucket')
bucket.copy(copy_source, 'otherkey')
You don't. Copying objects, whether from one bucket to another or within the same bucket, requires you to use one set of credentials that has the necessary permissions in both buckets.
When you perform an object copy, the request is actually sent by your client to the destination bucket, which then requests the content from the source bucket over a path internal to S3, using the same credentials you used for the first request. The object is transferred without you needing to download it and then upload it again.
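For completeness, a minimal sketch of what that looks like with explicit credentials passed to boto3, assuming a single key pair (placeholder values) that can read the source object and write to the destination bucket:
import boto3

# One set of credentials (placeholders) with permissions on both buckets
s3 = boto3.resource(
    's3',
    aws_access_key_id='SINGLE_ACCESS_KEY',
    aws_secret_access_key='SINGLE_SECRET_KEY',
)

copy_source = {'Bucket': 'mybucket', 'Key': 'mykey'}
s3.Bucket('otherbucket').copy(copy_source, 'otherkey')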
If you don't have a single set of credentials that can access both buckets, you have to resort to downloading and re-uploading.
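In that case, a rough sketch of the download-and-re-upload fallback with two boto3 resources, one per credential pair (all key values are placeholders):
import boto3

source_s3 = boto3.resource(
    's3',
    aws_access_key_id='SOURCE_ACCESS_KEY',
    aws_secret_access_key='SOURCE_SECRET_KEY',
)
dest_s3 = boto3.resource(
    's3',
    aws_access_key_id='DEST_ACCESS_KEY',
    aws_secret_access_key='DEST_SECRET_KEY',
)

# Download the object with the source credentials...
body = source_s3.Object('mybucket', 'mykey').get()['Body'].read()

# ...then upload it with the destination credentials
dest_s3.Object('otherbucket', 'otherkey').put(Body=body)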

S3 Pre-signed URL with custom endpoint via API Gateway, MethodNotAllowed

I'm attempting to use a pre-signed url for an S3 bucket with a custom endpoint. I seem so close, but I keep getting a Method Not Allowed error. Here's where I'm at.
I have an API Gateway which connects an endpoint to a Lambda function. That function, among other things, generates a pre-signed url. Like so,
var s3 = new AWS.S3({
    endpoint: 'custom.domain.com/upload',
    s3BucketEndpoint: true,
    signatureVersion: 'v4'
});
//...
s3.getSignedUrl('putObject', {
    ACL: 'bucket-owner-full-control',
    Bucket: process.env.S3_BUCKET_NAME,
    ContentType: "image/png",
    Key: asset.id + ".png"
});
This code successfully returns a url with what appears to be all the correct query params, correct key name, and the url is pointing to my endpoint. When attempting to upload however, I receive the following error:
MethodNotAllowed: The specified method is not allowed against this resource. (Method: PUT, ResourceType: SERVICE, RequestId: [request id was here], HostId: [host id was here])
If I remove my custom endpoint declaration from my S3 config, I receive a standard domain prefixed pre-signed url and the upload works fine.
Other notes on my setup.
I have configured the /upload resource on API Gateway to be an S3 passthrough for the PUT method.
I have enabled CORS where needed. On the bucket and on my API. I have confirmed CORS is good, as the browser passes checks.
I have setup my policies. The lambda function has access to the internet from my VPC, it has full S3 access, and it has a trust relationship with both S3 and API Gateway. This execution role is shared amongst the resources.
I am using the axios package to upload the file via PUT.
I have added a CloudTrail log, but it reports the exact same error as the browser...
Temporarily making my bucket public makes no difference.
I've attempted to add the query strings to the API Gateway Request/Response integrations without success.
I've added the necessary content type headers to the request and to the pre-signed url config.
I Googled the heck out of this thing.
Am I missing something? Is this possible? I plan to disable my custom endpoint and move forward with the default pre-signed url for the time being, but long term, I would like the custom endpoint. Worst case I may pay for some AWS support.
Thanks for the help!
I can't find documentation stating that a pre-signed URL supports a proxy (alternate/custom domain). IMO, the way to authenticate requests and grant them access to AWS resources from an API Gateway (regardless of whether you are proxying S3) is to use an API Gateway Lambda authorizer, allowing the request to assume an IAM role that has access to the AWS resources (in this case, PutObject on an S3 bucket):
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html
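As an illustration only, a minimal token-based Lambda authorizer in Python might look like this; the token value and principal ID are placeholders, and the real authorization logic is up to you:
def lambda_handler(event, context):
    # For a TOKEN authorizer, API Gateway passes the configured identity
    # source (e.g. a header value) in event['authorizationToken'].
    token = event.get('authorizationToken', '')
    effect = 'Allow' if token == 'expected-placeholder-token' else 'Deny'

    return {
        'principalId': 'upload-client',
        'policyDocument': {
            'Version': '2012-10-17',
            'Statement': [{
                'Action': 'execute-api:Invoke',
                'Effect': effect,
                # event['methodArn'] identifies the API method being invoked
                'Resource': event['methodArn'],
            }],
        },
    }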

How to generate Cloudfront signed url for sse kms encrypted files using boto3?

How can I generate a signed url for Cloudfront for sse kms encrypted files using boto3? I'm using a custom domain so that https can be used.
<Error>
<Code>InvalidArgument</Code>
<Message>
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.
</Message>
<ArgumentName>Authorization</ArgumentName>
<ArgumentValue>null</ArgumentValue>
<RequestId>063D9D2F5214E53A</RequestId>
<HostId>
jVazJY0g4jSDZSKB1iYHzFz7CWGlulU3eBEmg1E2OilYURzrdKGQI0xDVCWalQWtdNYSGz/5+DM=
</HostId>
</Error>
The code below is what I was using to create signed URLs before switching to SSE-KMS; the URLs it generates now produce the error above:
import datetime
import rsa
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    private_key = open('./pk-APKAJPF6OMQQZWEXQPUA.pem', 'r').read()
    return rsa.sign(
        message,
        rsa.PrivateKey.load_pkcs1(private_key.encode('utf8')),
        'SHA-1')  # CloudFront requires SHA-1 hash

def make_signed_url(url):  # wrapper added for readability; the original excerpt runs inside a larger function
    key_id = 'APKAJPF6OMQQZWEXQPUA'
    cf_signer = CloudFrontSigner(key_id, rsa_signer)
    expires = datetime.datetime.now() + datetime.timedelta(minutes=15)
    signed_url = cf_signer.generate_presigned_url(
        url,
        date_less_than=expires)
        # ExpiresIn=100
    return signed_url
I don't know whether this is possible with a CloudFront pre-signed URL, at least natively. The CloudFront origin access identity creates a second signed URL (or something equivalent) behind the scenes...
CloudFront typically uses signature version 2 for authentication when it requests objects in your Amazon S3 bucket.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-origin-access-identity-signature-version-4
I'd have to test to be sure, but I suspect there may not be a native solution.
Modifying the request in flight using a Lambda@Edge Origin Request trigger to generate a V4 signature and inject it might be a viable workaround, and indeed might be the only workaround.
It's also possible that if the objects were in a bucket in a region that only supports Signature Version 4, CloudFront might do the right thing automatically, since it does work correctly with S3 in those regions.
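If you want to experiment with that, here is a rough, untested sketch of such an origin-request handler; the region is an assumption, and whether S3 accepts the injected signature in your particular setup would need verification:
import boto3
from botocore.auth import S3SigV4Auth
from botocore.awsrequest import AWSRequest

REGION = 'us-east-1'  # assumed bucket region

def handler(event, context):
    request = event['Records'][0]['cf']['request']
    host = request['origin']['s3']['domainName']
    url = 'https://' + host + request['uri']

    # Build and SigV4-sign an equivalent GET request using the
    # function's execution-role credentials
    signable = AWSRequest(method='GET', url=url, data=b'')
    credentials = boto3.Session().get_credentials()
    S3SigV4Auth(credentials, 's3', REGION).add_auth(signable)

    # Copy the generated auth headers onto the CloudFront origin request
    for name, value in signable.headers.items():
        request['headers'][name.lower()] = [{'key': name, 'value': value}]
    return request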

encrypt object in aws s3 bucket

I am saving some images/objects in an AWS S3 bucket from my application. First I get a signed URL from a Node.js service API and upload the images or files to that signed URL using jQuery AJAX. I can open the image or object using the link provided in the properties (https://s3.amazonaws.com/bucketname/objectname).
I want to provide security for each uploaded object. Even if an anonymous user somehow gets the link (https://s3.amazonaws.com/bucketname/objectname), they should not be able to open it. The objects should be accessible only in cases where, for example, the request carries certain header keys and values. I tried server-side encryption by specifying header key values in the request as shown below.
var file = document.getElementById('fileupload').files[0];
$.ajax({
    url: signedurl,
    type: "PUT",
    data: file,
    headers: {'x-amz-server-side-encryption': 'AES256'},
    contentType: file.type,
    processData: false,
    success: function (result) {
        var res = result;
    },
    error: function (error) {
        alert(error);
    }
});
Doesn't server-side encryption keep the encrypted object in S3 storage? Or does it only encrypt while transferring and decrypt before saving to S3 storage?
If it stores the encrypted object in S3 storage, then how can I open it using the link shown in the properties?
Server-Side Encryption (SSE) in Amazon S3 encrypts objects at rest (stored on disk) but decrypts objects when they are retrieved. Therefore, it is a transparent form of encryption.
If you wish to keep objects in Amazon S3 private, but make them available to specific authorized users, I would recommend using Pre-Signed URLs.
This works by having your application generate a URL that provides time-limited access to a specific object in Amazon S3. The objects are otherwise kept private so they are not accessible.
See documentation: Share an Object with Others
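For example, a minimal sketch of generating such a time-limited download link with boto3 (bucket and key names are placeholders):
import boto3

s3 = boto3.client('s3')

# URL that grants GET access to one private object for 1 hour
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'bucketname', 'Key': 'objectname'},
    ExpiresIn=3600,
)
print(url)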
