How to render images (files) in the frontend from an S3 bucket that blocks all public access (private write, private read) - node.js

I have uploaded a file to an S3 bucket using aws-sdk as follows:
async function uploadFileToAws(file) {
    // Prefix the original file name with a timestamp to keep keys unique
    const fileName = `new_file_${new Date().getTime()}_${file.name}`;
    const mimetype = file.mimetype;
    const params = {
        Bucket: config.awsS3BucketName,
        Key: fileName,
        Body: file.data,
        ContentType: mimetype,
        // ACL: 'public-read'
    };
    const res = await new Promise((resolve, reject) => {
        s3.upload(params, (err, data) => (err == null ? resolve(data) : reject(err)));
    });
    return { secure_url: res.Location };
}
If we grant the bucket public-read permission there is no problem, but our requirement is to block public read (public access) entirely and make the bucket objects (images) visible only inside our own products (mobile and web apps), for example with the help of the access key ID and secret key or some similar approach. Is this possible? Does AWS S3 provide such a service?
I have gone through the AWS S3 documentation, googled, and walked through multiple StackOverflow threads and some blogs, but no luck. I would really appreciate any suggestions, tips, or help.

You could consider two options.
The first one would be through CloudFront and signed urls or cookies as explained in
Serving Private Content with Signed URLs and Signed Cookies
Basically, in this approach you would set up a CloudFront distribution which would be used to serve your private images. Since the users are authenticated, your back-end would verify whether they can access the given image and, if so, generate a signed URL for the file. The signed URL grants temporary access to that file. Details of this procedure are described in How Signed URLs Work.
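As a rough sketch of that flow (the key pair ID, private key path, and distribution domain below are placeholders, not values from the question), the aws-sdk can sign CloudFront URLs like this:

const AWS = require('aws-sdk');
const fs = require('fs');

// CloudFront signer built from a trusted key pair (placeholder ID and key path)
const signer = new AWS.CloudFront.Signer(
    'APKAEXAMPLEKEYPAIRID',
    fs.readFileSync('./cloudfront-private-key.pem', 'utf8')
);

function getSignedImageUrl(fileName) {
    return signer.getSignedUrl({
        url: `https://dexample123456.cloudfront.net/${fileName}`, // placeholder distribution domain
        expires: Math.floor(Date.now() / 1000) + 5 * 60           // valid for 5 minutes (epoch seconds)
    });
}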
The second possibility would be through pre-signed S3 URLs. It is somewhat similar to the first approach, except that it does not involve any extra service such as CloudFront. Again, since users are authenticated, your back-end would verify their right to view the given image and generate a pre-signed S3 URL that gives them temporary access to it.
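A minimal sketch of that second approach, reusing the s3 client and bucket name from the upload code in the question (the expiry value is just an example):

// Returns a time-limited GET URL for an otherwise private object
function getPresignedImageUrl(fileName) {
    return s3.getSignedUrl('getObject', {
        Bucket: config.awsS3BucketName,
        Key: fileName,
        Expires: 5 * 60 // URL valid for 5 minutes (seconds)
    });
}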
In both cases, the buckets do not need to be public. Access to the images is controlled by your back-end.
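For completeness, a sketch of how the back-end might hand such a URL to the mobile/web apps; the route path, requireAuth middleware, and userCanViewImage check are hypothetical placeholders for whatever authentication and authorization you already have:

const express = require('express');
const app = express();

app.get('/images/:key/url', requireAuth, (req, res) => {
    // Only sign the URL after confirming this user may see this image
    if (!userCanViewImage(req.user, req.params.key)) { // placeholder authorization check
        return res.status(403).json({ error: 'Forbidden' });
    }
    res.json({ url: getPresignedImageUrl(req.params.key) });
});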

Related

S3 Pre-signed URL with custom endpoint via API Gateway, MethodNotAllowed

I'm attempting to use a pre-signed url for an S3 bucket with a custom endpoint. I seem so close, but I keep getting a Method Not Allowed error. Here's where I'm at.
I have an API Gateway which connects an endpoint to a Lambda function. That function, among other things, generates a pre-signed url. Like so,
var s3 = new AWS.S3({
    endpoint: 'custom.domain.com/upload',
    s3BucketEndpoint: true,
    signatureVersion: 'v4'
});
//...
s3.getSignedUrl('putObject', {
    ACL: 'bucket-owner-full-control',
    Bucket: process.env.S3_BUCKET_NAME,
    ContentType: "image/png",
    Key: asset.id + ".png"
});
This code successfully returns a url with what appears to be all the correct query params, correct key name, and the url is pointing to my endpoint. When attempting to upload however, I receive the following error:
MethodNotAllowed: The specified method is not allowed against this resource. (Method: PUT, ResourceType: SERVICE, RequestId: [request id was here], HostId: [host id was here])
If I remove my custom endpoint declaration from my S3 config, I receive a standard domain prefixed pre-signed url and the upload works fine.
Other notes on my setup.
I have configured the /upload resource on API Gateway to be an S3 passthrough for the PUT method.
I have enabled CORS where needed, on the bucket and on my API, and I have confirmed CORS is good, as the browser checks pass.
I have set up my policies. The lambda function has access to the internet from my VPC, it has full S3 access, and it has a trust relationship with both S3 and API Gateway. This execution role is shared amongst the resources.
I am using the axios package to upload the file via PUT.
I have added a CloudTrail log, but it reports the exact same error as the browser...
Temporarily making my bucket public makes no difference.
I've attempted to add the query strings to the API Gateway Request/Response integrations without success.
I've added the necessary content type headers to the request and to the pre-signed url config.
I Googled the heck out of this thing.
Am I missing something? Is this possible? I plan to disable my custom endpoint and move forward with the default pre-signed url for the time being, but long term, I would like the custom endpoint. Worst case I may pay for some AWS support.
Thanks for the help!
I can't find documentation stating that a pre-signed URL supports a proxy (alternate/custom domain). IMO, the way to authenticate requests and grant them access to AWS resources from API Gateway (regardless of whether you are proxying S3) would be to use an API Gateway Lambda authorizer, which lets the request assume an IAM role that has access to the AWS resources (in this case, PutObject on an S3 bucket).
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html
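As a rough illustration (not from the original answer), a minimal API Gateway Lambda TOKEN authorizer just returns an IAM policy allowing or denying execute-api:Invoke; the token check below is a placeholder for whatever validation you actually use:

// Minimal TOKEN authorizer sketch; replace the placeholder check with real token validation
exports.handler = async (event) => {
    const token = event.authorizationToken;
    const effect = token === 'allow' ? 'Allow' : 'Deny'; // placeholder check

    return {
        principalId: 'user',
        policyDocument: {
            Version: '2012-10-17',
            Statement: [{
                Action: 'execute-api:Invoke',
                Effect: effect,
                Resource: event.methodArn
            }]
        }
    };
};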

Google Cloud Storage access without providing credentials?

I'm using Google Cloud Storage and have a few buckets that contain objects which are not shared publicly. Yet I was able to retrieve a file from a local server using NodeJS without supplying any service account keys or authentication tokens.
I can't access the files from browser via the url formats (which is good):
https://www.googleapis.com/storage/v1/b/mygcstestbucket/o/20180221164035-user-IMG_0737.JPG
https://storage.googleapis.com/mygcstestbucket/20180221164035-user-IMG_0737.JPG
However, when I tried retrieving the file from NodeJS without credentials, it surprisingly downloaded the file to disk. I checked process.env to make sure there were no GOOGLE_APPLICATION_CREDENTIALS or any pem keys, and even did a gcloud auth revoke --all on the command line just to make sure I was logged out, and I was still able to download the file. Does this mean that the files in my GCS bucket are not properly secured? Or am I somehow authenticating myself with the GCS API in a way I'm not aware of?
Any guidance or direction would be greatly appreciated!!
// Imports the Google Cloud client library
const Storage = require('@google-cloud/storage');
// Express app used to expose the download endpoint
const express = require('express');
const app = express();
// Your Google Cloud Platform project ID
const projectId = [projectId];
// Creates a client
const storage = new Storage({
    projectId: projectId
});
// The name for the new bucket
const bucketName = 'mygcstestbucket';
var userBucket = storage.bucket(bucketName);

app.get('/getFile', function(req, res){
    let fileName = '20180221164035-user-IMG_0737.JPG';
    var file = userBucket.file(fileName);
    const options = {
        destination: `${__dirname}/temp/${fileName}`
    };
    // Downloads the object to local disk; this succeeded even though the bucket is private
    file.download(options, function(err){
        if (err) return console.log('could not download due to error: ', err);
        console.log('File completed');
        res.json("File download completed");
    });
});
Client libraries use Application Default Credentials to authenticate against Google APIs. So when you don't explicitly use a specific Service Account via GOOGLE_APPLICATION_CREDENTIALS, the library will use the Default Credentials. You can find more details in this documentation.
Based on your sample, I'd assume the Application Default Credentials were used for fetching those files.
Lastly, you could always run echo $GOOGLE_APPLICATION_CREDENTIALS (or the equivalent for your OS) to confirm whether the variable points to a service account's key file.
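Alternatively, if you want the client to use a specific service account rather than whatever the environment provides, the Storage constructor from the question's snippet accepts an explicit key file. A minimal sketch, where the project ID and key path are placeholders:

const Storage = require('@google-cloud/storage');

// Explicit credentials instead of Application Default Credentials
const storage = new Storage({
    projectId: 'my-project-id',                   // placeholder
    keyFilename: '/path/to/service-account.json'  // placeholder key file path
});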
Create a new Service Account in GCP for the project and download the JSON key file. Then set the environment variables like the following (PowerShell):
$env:GCLOUD_PROJECT="YOUR PROJECT ID"
$env:GOOGLE_APPLICATION_CREDENTIALS="YOUR_PATH_TO_JSON_ON_LOCAL"

Secure external links for Firebase Storage on NodeJS server-side

I'm having issues generating external links to files stored in my Firebase Storage bucket.
I've been using Google Cloud Storage for a while now and used this library (which is based on this answer) to generate external links for regular Storage buckets, but using it on the Firebase-assigned bucket doesn't seem to work.
I can't generate any secure HTTPS links and keep getting the certificate validation error NET::ERR_CERT_COMMON_NAME_INVALID, stating that my connection is not private. If I remove the 'S' from HTTPS, the link works.
NOTE: Using the same credentials and private key to generate links for other buckets in my project, works just fine. It's only the Firebase bucket that is refusing to accept my signing...
I recommend using the official GCloud client, and then you can use getSignedUrl() to get a download URL to the file, like so:
// Assumes the official Google Cloud Storage client and the `request` module;
// the Firebase-assigned bucket is usually named <project-id>.appspot.com (placeholder below)
const Storage = require('@google-cloud/storage');
const request = require('request');

const storage = new Storage();
const bucket = storage.bucket('your-project-id.appspot.com');

bucket.file(filename).getSignedUrl({
    action: 'read',
    expires: '03-17-2025'
}, function(err, url) {
    if (err) {
        console.error(err);
        return;
    }
    // The file is now available to read from this URL.
    request(url, function(err, resp) {
        // resp.statusCode = 200
    });
});
Per Generate Download URL After Successful Upload this seems to work with Firebase and GCS buckets.

encrypt object in aws s3 bucket

I am saving some images/objects in an AWS S3 bucket from my application. First I get a signed URL from a nodejs service API and upload images or files to that signed URL using jQuery ajax. I can open the image or object using the link shown in its properties (https://s3.amazonaws.com/bucketname/objectname).
I want to secure each uploaded object. Even if an anonymous user somehow gets the link (https://s3.amazonaws.com/bucketname/objectname), they should not be able to open it. The objects should be accessible only when, for example, the request carries certain header key/values. I tried server-side encryption by specifying header key values in the request as shown below.
var file = document.getElementById('fileupload').files[0];
$.ajax({
    url: signedurl,
    type: "PUT",
    data: file,
    headers: {'x-amz-server-side-encryption': 'AES256'},
    contentType: file.type,
    processData: false,
    success: function (result) {
        var res = result;
    },
    error: function (error) {
        alert(error);
    }
});
Doesn't server-side encryption keep the object encrypted in S3 storage? Or does it only encrypt while transferring and decrypt before saving to S3 storage?
If it stores the object encrypted in S3 storage, then how can I open it using the link shown in the properties?
Server-Side Encryption (SSE) in Amazon S3 encrypts objects at rest (stored on disk) but decrypts objects when they are retrieved. Therefore, it is a transparent form of encryption.
If you wish to keep objects in Amazon S3 private, but make them available to specific authorized users, I would recommend using Pre-Signed URLs.
This works by having your application generate a URL that provides time-limited access to a specific object in Amazon S3. The objects are otherwise kept private so they are not accessible.
See documentation: Share an Object with Others
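Since the question already uploads through a signed URL, here is a minimal sketch (bucket name, key, and expiry are placeholders, not the poster's actual setup) of generating a pre-signed PUT URL with the aws-sdk that also requests SSE, so the object is encrypted at rest while the bucket itself stays private:

var AWS = require('aws-sdk');
var s3 = new AWS.S3({ signatureVersion: 'v4' });

var uploadUrl = s3.getSignedUrl('putObject', {
    Bucket: 'my-bucket',            // placeholder bucket name
    Key: 'uploads/image.png',       // placeholder object key
    ContentType: 'image/png',
    ServerSideEncryption: 'AES256', // the upload request must then send x-amz-server-side-encryption: AES256
    Expires: 300                    // URL valid for 5 minutes (seconds)
});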

Send recorded audio to S3

I am using RecorderJs to record audio. When done, I want to save it to Amazon S3 (I am using the knox library) via the server (because I don't want to share the key).
recorder.exportWAV(function(blob) {
    // sending it to server
});
On the server side, using knox ...
knox.putBuffer(blob, path,
    {"Content-Type": 'audio/wav', "Content-Length": blob.length},
    function(e, r) {
        if (!e) {
            console.log("saved at " + path);
            future.return(path);
        } else {
            console.log(e);
        }
    });
And this is saving just 2 bytes!!
Also, is this the best way to save server memory, or are there better alternatives?
I also see this: Recorder.forceDownload(blob[, filename])
Should I force download and then send it to server?
Or should I save to S3 directly from my domain? Is there an option in S3 that cannot be hacked by another user trying to store data on my server?
Or should I save to S3 directly from my domain? Is there an option in S3 that cannot be hacked by another user trying to store data on my server?
You can use S3 bucket policies or IAM policies to restrict access to your buckets.
Bucket Policies: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucketPolicies.html
IAM Policies: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingIAMPolicies.html
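As a hedged sketch only (the account ID, user name, and bucket name below are placeholders), a bucket policy applied via the aws-sdk could restrict who is allowed to put objects into the bucket, while everything else stays private by default:

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Only this IAM user may upload objects to the bucket
var policy = {
    Version: '2012-10-17',
    Statement: [{
        Sid: 'AllowUploaderOnly',
        Effect: 'Allow',
        Principal: { AWS: 'arn:aws:iam::123456789012:user/uploader' }, // placeholder principal
        Action: 's3:PutObject',
        Resource: 'arn:aws:s3:::my-audio-bucket/*'                     // placeholder bucket
    }]
};

s3.putBucketPolicy({
    Bucket: 'my-audio-bucket', // placeholder bucket
    Policy: JSON.stringify(policy)
}, function(err) {
    if (err) console.log(err);
    else console.log('Bucket policy applied');
});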
There are several related threads on SO about this too, for example:
Enabling AWS IAM Users access to shared bucket/objects
AWS s3 bucket policy invalid group principal
