encrypt object in aws s3 bucket - node.js

I am saving some images/objects in an AWS S3 bucket from my application. First I get a signed URL from a Node.js service API, then I upload images or files to that signed URL using jQuery AJAX. I can open the image or object using the link shown in its properties (https://s3.amazonaws.com/bucketname/objectname).
I want to secure each uploaded object. Even if an anonymous user somehow gets the link (https://s3.amazonaws.com/bucketname/objectname), they should not be able to open it. Objects should only be accessible in certain cases, for example when the request carries specific header key/value pairs. I tried server-side encryption by specifying header key/values in the request, as shown below.
var file = document.getElementById('fileupload').files[0];
$.ajax({
    url: signedurl,
    type: "PUT",
    data: file,
    headers: { 'x-amz-server-side-encryption': 'AES256' }, // note: the jQuery option is "headers", not "header"
    contentType: file.type,
    processData: false,
    success: function (result) {
        var res = result;
    },
    error: function (error) {
        alert(error);
    }
});
Doesn't server-side encryption keep the object encrypted in S3 storage? Or does it only encrypt during transfer and decrypt before saving to S3 storage?
If it stores the object encrypted in S3 storage, then how can I open it using the link shown in the properties?

Server-Side Encryption (SSE) in Amazon S3 encrypts objects at rest (stored on disk) but decrypts objects when they are retrieved. Therefore, it is a transparent form of encryption.
If you wish to keep objects in Amazon S3 private, but make them available to specific authorized users, I would recommend using Pre-Signed URLs.
This works by having your application generate a URL that provides time-limited access to a specific object in Amazon S3. The objects are otherwise kept private so they are not accessible.
See documentation: Share an Object with Others
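For example, here is a minimal Node.js sketch using the aws-sdk v2 client; the bucket name and object key are placeholders:
// Generate a time-limited pre-signed GET URL for a private object.
// Assumes credentials are configured via the environment or an IAM role.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var url = s3.getSignedUrl('getObject', {
    Bucket: 'bucketname',   // placeholder bucket name
    Key: 'objectname',      // placeholder object key
    Expires: 300            // URL is valid for 5 minutes
});
// Hand this URL to the authorized user; it stops working once it expires.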

Related

Add encryption on uploading object in S3

import requests
url = 'https://s3.amazonaws.com/<some-bucket-name>'
data = { 'key': 'test/test.jpeg' }
files = { 'file': open('test.jpeg', 'rb') }
r = requests.post(url, data=data, files=files)
I want to upload an image to the S3 bucket as shown above. The S3 bucket has AES256 encryption enabled. How can I specify the encryption in POST requests?
Warning
It seems like you have configured your bucket in a way that allows unauthenticated uploads into it - this is dangerous and may become expensive, because essentially anybody who knows your bucket name can put data into it and you'll have to pay the bill. I recommend you change that.
If you want it to stay that way, you can use headers to configure the encryption type for each object as described in the PutObject API-Reference.
The most relevant (excluding SSE-C encryption) are these two:
x-amz-server-side-encryption
The server-side encryption algorithm used when storing this object in Amazon S3 (for example, AES256, aws:kms).
Valid Values: AES256 | aws:kms
x-amz-server-side-encryption-aws-kms-key-id
If x-amz-server-side-encryption is present and has the value of aws:kms, this header specifies the ID of the symmetric customer managed AWS KMS customer master key (CMK) that will be used for the object. If you specify x-amz-server-side-encryption:aws:kms but do not provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the AWS managed CMK to protect the data.
You can add these to your requests.post call.
The API docs of the requests library show how to pass custom headers, so it should look roughly like this:
requests.post(
    url,
    data=data,
    files=files,
    headers={"x-amz-server-side-encryption": "AES256"}
)

How to render images (files) from an S3 bucket with all public access blocked in the frontend (private write, private read)

I have uploaded file to S3 bucket using aws-sdk as:
async function uploadFileToAws(file) {
    const fileName = `new_file_${new Date().getTime()}_${file.name}`;
    const mimetype = file.mimetype;
    const params = {
        Bucket: config.awsS3BucketName,
        Key: fileName,
        Body: file.data,
        ContentType: mimetype,
        // ACL: 'public-read'
    };
    const res = await new Promise((resolve, reject) => {
        s3.upload(params, (err, data) => err == null ? resolve(data) : reject(err));
    });
    return { secure_url: res.Location };
}
If we allow public read on the bucket there is no problem, but we are required to block public access and make bucket objects (images) visible only in our own products (mobile and web apps), using an access key ID and secret key or some similar approach. Is this possible? Does AWS S3 provide such a service?
I have gone through the AWS S3 documentation, googled, and walked through multiple StackOverflow threads and some blogs, but no luck. I would really appreciate any suggestions, tips, or help.
You could consider two options.
The first one would be through CloudFront with signed URLs or cookies, as explained in
Serving Private Content with Signed URLs and Signed Cookies
Basically, in this approach you would set up a CloudFront distribution to serve your private images. Since the users are authenticated, your back-end would verify whether they can access the given image and, if so, generate a signed URL for the file. The signed URL would enable access to that file. Details of this procedure are described in How Signed URLs Work.
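As a rough sketch of that flow using the aws-sdk v2 CloudFront signer (the key pair ID, private key file, and distribution URL below are all placeholders):
// Sign a CloudFront URL so an authenticated user can fetch a private image.
var AWS = require('aws-sdk');
var fs = require('fs');

var signer = new AWS.CloudFront.Signer(
    'APKAEXAMPLEKEYID',                           // placeholder CloudFront key pair ID
    fs.readFileSync('cf-private-key.pem', 'utf8') // placeholder private key file
);

var signedUrl = signer.getSignedUrl({
    url: 'https://d111111abcdef8.cloudfront.net/images/photo.png', // placeholder distribution URL
    expires: Math.floor(Date.now() / 1000) + 300  // epoch seconds, 5 minutes from now
});
// Return signedUrl to the authenticated client; CloudFront rejects the request after expiry.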
The second possibility would be through pre-signed S3 URLs. It is somewhat similar to the first one, except that it does not involve any extra service such as CloudFront. Again, since users are authenticated, your back-end would verify their rights to view the given image and generate a pre-signed S3 URL granting them temporary access to it.
In both cases, the buckets do not need to be public. Access to the images is controlled by your back-end.

S3 Pre-signed URL with custom endpoint via API Gateway, MethodNotAllowed

I'm attempting to use a pre-signed url for an S3 bucket with a custom endpoint. I seem so close, but I keep getting a Method Not Allowed error. Here's where I'm at.
I have an API Gateway which connects an endpoint to a Lambda function. That function, among other things, generates a pre-signed url. Like so,
var s3 = new AWS.S3({
    endpoint: 'custom.domain.com/upload',
    s3BucketEndpoint: true,
    signatureVersion: 'v4'
});
//...
s3.getSignedUrl('putObject', {
    ACL: 'bucket-owner-full-control',
    Bucket: process.env.S3_BUCKET_NAME,
    ContentType: "image/png",
    Key: asset.id + ".png"
});
This code successfully returns a url with what appears to be all the correct query params, correct key name, and the url is pointing to my endpoint. When attempting to upload however, I receive the following error:
<Error>
  <Code>MethodNotAllowed</Code>
  <Message>The specified method is not allowed against this resource.</Message>
  <Method>PUT</Method>
  <ResourceType>SERVICE</ResourceType>
  <RequestId>[request id was here]</RequestId>
  <HostId>[host id was here]</HostId>
</Error>
If I remove my custom endpoint declaration from my S3 config, I receive a standard domain prefixed pre-signed url and the upload works fine.
Other notes on my setup:
- I have configured the /upload resource on API Gateway to be an S3 passthrough for the PUT method.
- I have enabled CORS where needed, on the bucket and on my API. I have confirmed CORS is good, as the browser passes checks.
- I have set up my policies. The Lambda function has access to the internet from my VPC, it has full S3 access, and it has a trust relationship with both S3 and API Gateway. This execution role is shared amongst the resources.
- I am using the axios package to upload the file via PUT.
- I have added a CloudTrail log, but it reports the exact same error as the browser...
- Temporarily making my bucket public makes no difference.
- I've attempted to add the query strings to the API Gateway request/response integrations without success.
- I've added the necessary content-type headers to the request and to the pre-signed URL config.
- I Googled the heck out of this thing.
Am I missing something? Is this possible? I plan to disable my custom endpoint and move forward with the default pre-signed url for the time being, but long term, I would like the custom endpoint. Worst case I may pay for some AWS support.
Thanks for the help!
I can't find documentation stating that a presigned URL supports a proxy (alternate/custom domain). In my opinion, the way to authenticate requests and grant them access to AWS resources from API Gateway (regardless of whether you are proxying S3) would be to use an API Gateway Lambda authorizer, allowing the request to assume an IAM role that has access to the AWS resources (in this case, PutObject on an S3 bucket):
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html
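For illustration, a token-based Lambda authorizer is essentially a function that returns an IAM policy for the incoming method ARN; this minimal sketch uses a placeholder token check, not a real validation scheme:
// Minimal sketch of a TOKEN-type API Gateway Lambda authorizer.
// A real authorizer would validate a JWT or look the token up somewhere.
exports.handler = async (event) => {
    const allowed = event.authorizationToken === 'expected-token'; // placeholder check
    return {
        principalId: 'user',
        policyDocument: {
            Version: '2012-10-17',
            Statement: [{
                Action: 'execute-api:Invoke',
                Effect: allowed ? 'Allow' : 'Deny',
                Resource: event.methodArn // the API method being invoked
            }]
        }
    };
};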

Secure external links for Firebase Storage on NodeJS server-side

I'm having issues generating external links to files stored in my Firebase Storage bucket.
I've been using Google Cloud Storage for a while now and used this library (which is based on this answer) to generate external links for regular Storage buckets, but using it on the Firebase-assigned bucket doesn't seem to work.
I can't generate any secure HTTPS links and keep getting certificate validation error NET::ERR_CERT_COMMON_NAME_INVALID stating that my connection is not private. If I remove the 'S' from the HTTPS, the link works.
NOTE: Using the same credentials and private key to generate links for other buckets in my project, works just fine. It's only the Firebase bucket that is refusing to accept my signing...
I recommend using the official GCloud client, and then you can use getSignedUrl() to get a download URL to the file, like so:
// Setup for the snippet below; the bucket name is a placeholder.
const { Storage } = require('@google-cloud/storage');
const request = require('request');
const bucket = new Storage().bucket('your-bucket-name');

bucket.file(filename).getSignedUrl({
    action: 'read',
    expires: '03-17-2025'
}, function (err, url) {
    if (err) {
        console.error(err);
        return;
    }
    // The file is now available to read from this URL.
    request(url, function (err, resp) {
        // resp.statusCode = 200
    });
});
Per Generate Download URL After Successful Upload, this seems to work with both Firebase and GCS buckets.

Send recorded audio to S3

I am using RecorderJs to record audio. When done, I want to save it to Amazon S3 (I am using the knox library) via the server (because I don't want to share the key).
recorder.exportWAV(function(blob) {
// sending it to server
});
On the server side, using knox ...
knox.putBuffer(blob, path, {
    "Content-Type": 'audio/wav',
    "Content-Length": blob.length
}, function (e, r) {
    if (!e) {
        console.log("saved at " + path);
        future.return(path);
    } else {
        console.log(e);
    }
});
And this is saving just 2 bytes!!
Also, is this the best way to save server memory, or are there better alternatives?
I also see this: Recorder.forceDownload(blob[, filename])
Should I force the download and then send it to the server?
Or should I save to S3 directly from my domain? Is there an option in S3 that prevents other users from storing data in my bucket?
Or should I save to S3 directly from my domain? Is there an option in S3 that prevents other users from storing data in my bucket?
You can use S3 bucket policies or IAM policies on S3 buckets to restrict access to your buckets.
Bucket policies: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucketPolicies.html
IAM policies: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingIAMPolicies.html
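As an illustration (the account ID, user, and bucket name are placeholders), a bucket policy granting s3:PutObject to a single IAM user could be applied from Node.js with the aws-sdk v2 like this:
// Attach a bucket policy that lets only one IAM user upload objects.
// Buckets are private by default, so no one else can write to it.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var policy = {
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Principal: { AWS: 'arn:aws:iam::123456789012:user/uploader' }, // placeholder IAM user
        Action: 's3:PutObject',
        Resource: 'arn:aws:s3:::my-audio-bucket/*' // placeholder bucket
    }]
};

s3.putBucketPolicy({
    Bucket: 'my-audio-bucket', // placeholder bucket
    Policy: JSON.stringify(policy)
}, function (err, data) {
    if (err) console.error(err);
});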
There are several related threads on SO about this too, for example:
Enabling AWS IAM Users access to shared bucket/objects
AWS s3 bucket policy invalid group principal
