CloudFront: securing files in Amazon S3? - node.js

I have uploaded my file to an Amazon S3 bucket.
Clicking the uploaded file in S3 shows the properties of the file and a link to it. When I copy and paste that link into a browser, the file gets downloaded.
My bucket policy is as below:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
But I do not want my audio/video/image files to be downloadable when the link is pasted into a browser. Instead, the audio/video/image files should be displayed only through my website.
To achieve this I have used the aws-cloudfront-sign npm module:
var cfUtil = require('aws-cloudfront-sign');
I have created a signed URL using the above npm module:
var cfKeypairId = 'AJDS2LD3KSD5SJSDKJSA'; // sample key pair ID
var cfURL = 'http://my_domain_name' + file_path;
// my domain name is something that starts with smb...cloudfront.net
var signedUrl = cfUtil.getSignedUrl(cfURL, {
  keypairId: cfKeypairId,
  expireTime: Date.now() + 60000,
  privateKeyString: ???
});
What should I give in privateKeyString?
What should my bucket policy be?
What should I do with CNAMEs?
Can somebody explain this briefly?

You can obtain your CloudFront key pair ID and private key from the Security Credentials section in the AWS web console (log in using the root account).
In your S3 bucket policy you can deny public access and allow only the CloudFront Origin Access Identity to access S3.
If you plan to customize the domain name (URL) where you serve the files, you can use a CNAME mapping for it with AWS Route 53 or any other DNS provider.
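For privateKeyString, pass the contents of the .pem private key downloaded when the CloudFront key pair was created. A minimal sketch with aws-cloudfront-sign (the key pair ID, key path, and CloudFront domain below are placeholders, not values from the question):
var fs = require('fs');
var cfUtil = require('aws-cloudfront-sign');

var signedUrl = cfUtil.getSignedUrl('https://dxxxxxxxxxxxx.cloudfront.net' + file_path, {
  keypairId: 'APKAXXXXXXXXXXXXXXXX',                          // CloudFront key pair ID (placeholder)
  expireTime: Date.now() + 60000,                             // valid for 60 seconds
  privateKeyString: fs.readFileSync('/path/to/private-key.pem', 'utf8')
});
The module also accepts privateKeyPath if you prefer to point it at the .pem file directly.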

Related

How to add public access for images while uploading them to AWS S3

I am new to AWS. I have a web app with two pages: one page to upload an image to an S3 bucket, and another page that displays that image. I have successfully set up the AWS S3 upload with 'aws-sdk', as I'm using a Node backend.
Uploading files succeeds and I get the URL back after upload, but when I try to fetch the image from that URL it throws 'Access Denied'. I found that after every upload I need to go to that file in the S3 console and enable public access before the previously returned link works.
I have already enabled public access for the whole bucket.
So my question is: is there a way to set the permission while uploading, so that I don't need to enable it for each file afterwards?
This works for me:
const s3Params = {
  Bucket: process.env.AWS_BUCKET_NAME,
  Key: `${fileName}.${fileType}`,
  Body: req.file.buffer,
  ACL: "public-read", // enable public access for this object
};
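For context, a sketch of how these params might be passed to an upload call with aws-sdk v2 (the surrounding handler is an assumption, not part of the original answer):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.upload(s3Params, (err, data) => {
  if (err) throw err;
  console.log(data.Location); // publicly readable thanks to ACL: "public-read"
});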
If you want to enable this for every object in the bucket, try adding this bucket policy (note the /* in the Resource: s3:GetObject applies to objects, not to the bucket itself).
{
  "Id": "Policy1623923889950",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1623923886911",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::yourbucket/*",
      "Principal": "*"
    }
  ]
}

How can I assign bucket-owner-full-control when creating an S3 object with boto3?

I'm using the Amazon boto3 library in Python to upload a file into another user's bucket. The bucket policy applied to the other user's bucket is configured like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DelegateS3BucketList",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::uuu"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bbb"
    },
    {
      "Sid": "DelegateS3ObjectUpload",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::uuu"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::bbb",
        "arn:aws:s3:::bbb/*"
      ]
    }
  ]
}
where uuu is my user ID and bbb is the bucket name belonging to the other user. My user and the other user are IAM accounts belonging to different organisations. (I know this policy can be written more simply, but the intention is to add a check on upload that blocks objects created without the appropriate permissions.)
I can then use the following code to list all objects in the bucket and to upload new objects to it. This works; however, the owner of the bucket has no access to the object, due to Amazon's default of making objects private to the creator of the object.
import base64
import hashlib

from boto3.session import Session

access_key = "value generated by Amazon"
secret_key = "value generated by Amazon"
bucketname = "bbb"
content_bytes = b"hello world!"
content_md5 = base64.b64encode(hashlib.md5(content_bytes).digest()).decode("utf-8")
filename = "foo.txt"

sess = Session(aws_access_key_id=access_key, aws_secret_access_key=secret_key)
bucket = sess.resource("s3").Bucket(bucketname)
for o in bucket.objects.all():
    print(o)

s3 = sess.client("s3")
s3.put_object(
    Bucket=bucketname,
    Key=filename,
    Body=content_bytes,
    ContentMD5=content_md5,
    # ACL="bucket-owner-full-control"  # Uncomment this line to generate the error
)
As soon as I uncomment the ACL option, the code generates an Access Denied error message. If I point this at a bucket inside my own organisation, the ACL option succeeds and the owner of the bucket is given full permission on the object.
I'm now at a loss to figure this out, especially as Amazon's own advice appears to be to do it the way I have shown.
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/
https://aws.amazon.com/premiumsupport/knowledge-center/s3-require-object-ownership/
It's not enough to have permission in the bucket policy only.
Check whether your user (or role) is missing the s3:PutObjectAcl permission in IAM.
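For instance, the identity-based policy attached to your IAM user would need a statement along these lines (a sketch; the bucket name bbb is taken from the question):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::bbb/*"
    }
  ]
}
In a cross-account setup, both the bucket policy and the caller's own IAM policy must allow the action.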
When using the resource methods in boto3, several different API calls may be made under the hood, and it isn't always obvious which ones.
In comparison, when using client methods in boto3, there is a 1-to-1 mapping between the API call made in boto3 and the API call received by AWS.
Therefore, it is likely that the resource-level put_object() method is calling an additional API such as PutObjectAcl. You can confirm this by looking in AWS CloudTrail to see which API calls are being made by your app.
In such a case, you would need the additional s3:PutObjectAcl permission: the upload process first creates the object and then updates the object's Access Control List in a second call.
When using the client methods to upload a file, there is also the ability to specify an ACL, which I think gets applied directly rather than requiring a second API call. Thus, using the client method to create the object probably would not require this additional permission.

How to protect S3 files from being downloaded with the DownloadHelper plugin

I have my video files stored in an S3 bucket.
My files are downloadable using a plugin called Video DownloadHelper. It has two options: download using the browser, and download using a companion app.
I'm restricting access to the S3 files by setting a bucket policy with a specific HTTP referer.
After adding this policy it is no longer possible to download using the browser, but downloading using the companion app still works.
How can I restrict downloading via the second method as well? The bucket policy I have set is given below. Any help would be appreciated. Thanks.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {"aws:Referer": ["http://www.example.com/*", "http://example.com/*"]}
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringNotLike": {"aws:Referer": ["http://www.example.com/*", "http://example.com/*"]}
      }
    }
  ]
}
Once your file is loaded in the browser, it can be downloaded. What you can do is generate a signed URL for the images, valid only for a short time.
Set the src of each image to an API endpoint that redirects to a signed URL generated by the backend. Inspecting the page will show the signed URL as the src, but that URL can't be reused. This only prevents reuse of the image URLs, or the download helper using the URL to fetch the image; whenever the user refreshes the page, a new URL is generated and sent to the browser.
E.g.:
<img src="https://api-url/image/image-id">
and in your backend do something similar to
response.redirect(signedUrl)
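A hedged sketch of such an endpoint in Express, using an S3 pre-signed URL (the route shape, bucket name, and expiry are assumptions for illustration):
const express = require('express');
const AWS = require('aws-sdk');

const app = express();
const s3 = new AWS.S3();

app.get('/image/:id', (req, res) => {
  // Short-lived pre-signed URL: it stops working after Expires seconds,
  // so copying it out of the page source is of little use.
  const signedUrl = s3.getSignedUrl('getObject', {
    Bucket: 'examplebucket',   // bucket from the policy above
    Key: req.params.id,        // assumes the image id is the object key
    Expires: 60
  });
  res.redirect(signedUrl);
});

app.listen(3000);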

User: anonymous is not authorized to perform: es:ESHttpPost on resource:

I'm having this issue with my app.
My app is deployed to a Heroku server, and I'm using Elasticsearch, which is deployed on AWS.
When I access Elasticsearch directly on the AWS domain, everything works.
But when I go through my Heroku domain (both from Postman), I get a 403 error with this message:
2017-12-21T13:36:52.982331+00:00 app[web.1]: statusCode: 403,
2017-12-21T13:36:52.982332+00:00 app[web.1]: response: '{"Message":"User: anonymous is not authorized to perform: es:ESHttpPost on resource: houngrymonkey"}',
My access policy is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-central-1:[ACCOUNT_ID]:domain/[ES_DOMAIN]/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "[heroku static ip]"
        }
      }
    }
  ]
}
Can anyone tell me what my problem is here?
Thanks!
I've experienced the same issue with ES and Lambda. It's not exactly your case, but maybe it'll be helpful. Here is what I did to resolve the issue.
1) In Lambda (Node.js v6.10) I added the following code:
var creds = new AWS.EnvironmentCredentials('AWS');
....
// inside "post to ES"-method
var signer = new AWS.Signers.V4(req, 'es');
signer.addAuthorization(creds, new Date());
....
// post request to ES goes here
With those lines my exception changed from
"User: anonymous..."
to
"User: arn:aws:sts::xxxx:assumed-role/yyyy/zzzzz"
That was exactly the case.
2) I've updated the ES policy in the following way:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:sts::xxxx:assumed-role/yyyy/zzzzz" (the ARN from the exception)
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:[region]:[account-id]:domain/[es-domain]/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:[region]:[account-id]:domain/[es-domain]/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "1.2.3.4/32",
            ....
          ]
        }
      }
    }
  ]
}
Hope that will help.
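For reference, a fuller sketch of a signed request to Amazon ES with aws-sdk v2, filling in around the elided lines above (the domain, index, and document are placeholders):
var AWS = require('aws-sdk');

var region = 'eu-central-1';
var domain = 'search-mydomain-xxxxxxxx.eu-central-1.es.amazonaws.com'; // placeholder
var endpoint = new AWS.Endpoint(domain);

var request = new AWS.HttpRequest(endpoint, region);
request.method = 'POST';
request.path += 'my-index/_doc';                       // placeholder index
request.body = JSON.stringify({ title: 'hello' });
request.headers['host'] = domain;
request.headers['Content-Type'] = 'application/json';
request.headers['Content-Length'] = Buffer.byteLength(request.body);

// Sign with SigV4 so ES sees an IAM principal instead of "anonymous"
var creds = new AWS.EnvironmentCredentials('AWS');
var signer = new AWS.Signers.V4(request, 'es');
signer.addAuthorization(creds, new Date());

new AWS.HttpClient().handleRequest(request, null, function (response) {
  var chunks = [];
  response.on('data', function (chunk) { chunks.push(chunk); });
  response.on('end', function () { console.log(Buffer.concat(chunks).toString()); });
}, function (error) { console.error(error); });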
More solutions to the error mentioned in the title are described here:
If you are using a client that doesn't support request signing (such as a browser), consider the following:
Use an IP-based access policy. IP-based policies allow unsigned requests to an Amazon ES domain.
Be sure that the IP addresses specified in the access policy use CIDR notation, because access policies use CIDR notation when checking an IP address against the policy.
Verify that the IP addresses specified in the access policy are the same ones used to access your Elasticsearch cluster. You can get the public IP address of your local computer at https://checkip.amazonaws.com/.
Note: If you're receiving an authorization error, check to see if you are using a public or private IP address. IP-based access policies can't be applied to Amazon ES domains that reside within a virtual private cloud (VPC). This is because security groups already enforce IP-based access policies. For public access, IP-based policies are still available. For more information, see About access policies on VPC domains.
If you are using a client that supports request signing, check the following:
Be sure that your requests are correctly signed. AWS uses the Signature Version 4 signing process to add authentication information to AWS requests. Requests from clients that aren't compatible with Signature Version 4 are rejected with a "User: anonymous is not authorized" error. For examples of correctly signed requests to Amazon ES, see Making and signing Amazon ES requests.
Verify that the correct Amazon Resource Name (ARN) is specified in the access policy.
If your Amazon ES domain resides within a VPC, configure an open access policy with or without a proxy server. Then, use security groups to control access. For more information, see About access policies on VPC domains.

Securing S3 files: how to avoid displaying the content when the link is entered directly in a browser?

I am using Node.js with the multer-s3 npm package to upload my video/audio/image files to an Amazon S3 bucket.
I am using the policy below to enable permission for viewing my files through my mobile application:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
But the problem is that whenever I copy the link to one of my S3 files and paste it in a browser, the file gets downloaded (or shown).
How can I prevent this?
I don't want my files to be downloaded or shown when the link is entered in the address bar.
My files should be shown or streamed only through my mobile and web applications.
How can I achieve this?
You might want to consider serving your content through CloudFront in this case, using either signed URLs or signed cookies, together with an Origin Access Identity to restrict access to your Amazon S3 content.
This way, only CloudFront can access your S3 content, and only clients with a valid signed URL/cookie can access your CloudFront distribution.
After you set up your Origin Access Identity in CloudFront, your bucket policy should be something like:
{
  "Version": "2012-10-17",
  "Id": "Policy1476619044274",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <Your Origin Access Identity ID>"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
The format for specifying the Origin Access Identity in a Principal statement is:
"Principal": {
  "CanonicalUser": "<Your Origin Access Identity Canonical User ID>"
}
or
"Principal": {
  "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <Your Origin Access Identity ID>"
}
See: Serving Private Content through CloudFront.
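On the application side, the signed URL itself can then be generated, for example with the aws-sdk v2 CloudFront signer (the key pair ID, key path, and URL below are placeholders):
const fs = require('fs');
const AWS = require('aws-sdk');

const signer = new AWS.CloudFront.Signer(
  'APKAXXXXXXXXXXXXXXXX',                          // CloudFront key pair ID (placeholder)
  fs.readFileSync('/path/to/private-key.pem', 'utf8')
);

const signedUrl = signer.getSignedUrl({
  url: 'https://dxxxxxxxxxxxx.cloudfront.net/videos/sample.mp4',
  expires: Math.floor(Date.now() / 1000) + 60     // epoch seconds, 60s from now
});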
