CloudFront caching with signed URLs: does it cache the resource regardless of the signature? - amazon-cloudfront

We're caching some S3 bucket content using CloudFront. Our URLs to these assets are signed, with an expiration date 24 hours after they've been generated.
So after one day our customers need to hit an API endpoint of ours to generate a new signature to regain access to those assets. However, we would like CloudFront to cache those assets for longer than 24 hours, regardless of the URL signature; that way we minimize cache misses from CloudFront and clients get content quicker. Is this possible, and if so, what settings should I be looking at in CloudFront / S3?
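For context, a CloudFront signed URL with a 24-hour expiry, like the ones described above, can be generated with botocore's CloudFrontSigner. This is a minimal sketch only; the key pair ID, private key path, distribution domain and object path are placeholders, not values from the question.

import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign the CloudFront policy with the RSA private key (SHA-1, PKCS#1 v1.5).
    with open("private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("KEY_PAIR_ID", rsa_signer)  # placeholder key pair ID
expires = datetime.datetime.utcnow() + datetime.timedelta(hours=24)
signed_url = signer.generate_presigned_url(
    "https://dxxxxxxxxxxxx.cloudfront.net/assets/video.mp4",  # placeholder URL
    date_less_than=expires,
)
print(signed_url)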

Related

S3: what is the correct design for invalidation of presigned urls?

As it turns out, there is no API for invalidating presigned URLs, but it is possible to drop access via an IAM policy. If I have a service with many users (in a Cognito user pool), what is the correct design for some kind of URL invalidation? Do I need as many IAM accounts as I have users? Right now I'm thinking of using {directory-names-of-long-random-crypto-tokens-for-s3-that-I-will-rename}/{when-filename-will-remained-the-same.txt}, with tokens from secrets.token_urlsafe(99), but with such an approach I will get duplicated files, since different users can have access to the same files.
Another solution: I could use a Lambda for every request for a file, but I think it would be simpler, cheaper and faster to have duplicated files in S3 (maybe a few percent of wasted storage).
I checked CloudFront; it looks like there is no way to map different URLs/params to the same S3 file.
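As a rough illustration of the random-token-prefix idea described above, here is a minimal sketch using boto3 and the standard secrets module; the bucket name and file name are placeholders, and revocation would still mean renaming or copying the object to a new token prefix.

import secrets
import boto3

s3 = boto3.client("s3")

# Hide the object behind an unguessable prefix, as in the scheme described above.
token = secrets.token_urlsafe(99)
key = f"{token}/report.txt"
s3.upload_file("report.txt", "my-example-bucket", key)  # placeholder bucket/file

# Hand out a time-limited presigned URL for that key.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": key},
    ExpiresIn=3600,  # one hour
)
print(url)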

Azure - Shared Access Signature with Social Media sharing

I am in a situation where I have a few terabytes of data in Azure blobs. I recently added SAS (shared access signature) functionality when returning the direct blob URLs back to the client, with an expiry of 2 minutes for all the images going out. So the resource is guaranteed read access for 2 minutes and expires after that.
All good up to now. I also have social media buttons to share my content to Facebook, Google+ or Twitter etc. When a user shares the content to Facebook, let's say, the thumbnail stays with the link for the specified time but expires after that, giving a bad user experience.
The example of URL generated is
https://storageaccount.blob.core.windows.net/images/2016-May-23-03-28-56_14.jpg?sv=2015-07-08&sr=c&sig=LENHRuvX1Q1d8r2DCJTdMp6fB8mv82DJsMAN8ZAM3C4%3D&se=2016-06-14T22%3A55%3A37Z&sp=r
I want to ask if there is any way to avoid this situation? I don't have a clue right now.
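For reference, a 2-minute read-only SAS of the kind shown in the URL above can be produced with the azure-storage-blob Python SDK (this SDK postdates the original question). A minimal sketch, with the account key as a placeholder and the container and blob names taken from the example URL:

from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

sas_token = generate_blob_sas(
    account_name="storageaccount",
    container_name="images",
    blob_name="2016-May-23-03-28-56_14.jpg",
    account_key="ACCOUNT_KEY",                 # placeholder
    permission=BlobSasPermissions(read=True),  # read-only, i.e. "sp=r"
    expiry=datetime.utcnow() + timedelta(minutes=2),
)
url = (
    "https://storageaccount.blob.core.windows.net/images/"
    "2016-May-23-03-28-56_14.jpg?" + sas_token
)
print(url)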

AWS S3 The security of a signed URL as a hyperlink

Is this safe? Can I maintain security using a pre-signed URL to an AWS S3 bucket object?
my link
In other words - part 1...
Say I'm storing a bunch of separate individuals' files in a bucket. I want to provide a link to a file for a user. Each file is uniquely but consecutively named, and I don't want people to be able to change the link from 40.pdf to 30.pdf and get a different file. This URL seems to prevent that.
part 2, and more importantly....
Is this a safe or a dangerous method of displaying a URL, in terms of the security of my bucket? Clearly, I will be giving away my "access key" here, but of course, not my "secret".
Already answered 3 years ago... sorry.
How secure are Amazon AWS Access keys?
AWS Security Credentials are used when making API calls to AWS. They consist of two components:
Access Key (eg AKIAISEMTXNOG4ABPC6Q): This is similar to a username. It is okay for people to see it.
Secret Key: This is a long string of random characters that is a shared secret between you and AWS. When making API calls, the SDK uses the shared secret to 'sign' your API calls. This is a one-way hash, so people cannot reverse-engineer your secret key. The secret key should be kept private.
A Signed URL is a method of granting time-limited access to an S3 object. The URL contains the Access Key and a Signature, which is a one-way hash calculated from the object, expiry time and the Secret Key.
A Signed URL is safe because:
It is valid for only a limited time period that you specify
It is valid only for the Amazon S3 object that you specify
It cannot be used to retrieve a different object nor can the time period be modified (because it would invalidate the signature)
However, anyone can use the URL during the valid time period. So, if somebody Tweets the URL, many people could potentially access the object until the expiry time. This potential security threat should be weighed against the benefit of serving traffic directly from Amazon S3 rather than having to run your own web servers.
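To make the "one-way hash" point above concrete, here is a small conceptual sketch in Python of how an HMAC signature binds the secret key, the object path and the expiry time together. It illustrates the idea only; it is not the exact string-to-sign format AWS uses.

import base64
import hashlib
import hmac

secret_key = b"example-secret-key"  # kept private; placeholder value
# Illustrative layout only: the signed string covers the object and the expiry time.
string_to_sign = "GET\n/mybucket/40.pdf\nExpires=1700000000"

signature = base64.b64encode(
    hmac.new(secret_key, string_to_sign.encode(), hashlib.sha256).digest()
).decode()

# Changing "40.pdf" to "30.pdf", or editing the expiry, yields a completely
# different signature, so a tampered URL no longer matches and is rejected.
print(signature)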

Windows Azure Shared Access Signatures SAS

I'm developing a file-sharing website and I have a couple of questions regarding Windows Azure Shared Access Signatures.
About my website: registered users can upload, share and store their files using blob storage. The files can be up to 2GB in size, so I want the upload and download to be as fast as possible. It's also important that the administration cost for me as a host stays at a minimum. User-stored files must be private.
I'm OK with using SAS URIs for uploads, but for downloads I'm a bit spooked.
Questions:
1. Users can store files on their account and these files should only be accessed by that user. If I were to use SAS URI downloads here, the file will always be available via the URI for as long as the URI lives (it doesn't require you to be logged in; if you know the URI, you can just download the file). This is quite scary if you want the file to be private. I know the signature in the SAS URI is "HMAC computed over a string-to-sign and key using the SHA256 algorithm, and then encoded using Base64 encoding", but is this safe? Is it acceptable to use SAS URIs for downloads even if the files are private? Or should I instead stream the file between the server and the website (this would be much safer, but speed would suffer and the administration cost would rise)?
2. How much slower, and how much more expensive, will it be if I stream the downloads through (server, website, user) instead of using SAS (server directly to user)?
3. If I set the SAS URI expiry time to 1 hour and the download takes longer than 1 hour, will the download be cancelled if it started before the expiry time?
4. My website is registered at x.azurewebsites.net and I'm using a purchased domain so I can access my website at www.x.com. Is it possible to make the SAS URIs look something like https://x.com/blobpath instead of https://x.blob.core.windows.net/blobpath? (My guess is no..)
Sorry for the wall of text!
There's nothing that stops someone from sharing a URI, whether with or without a SAS. So from a safety perspective, if you leave the expiry date far off in the future, the blob will remain accessible to anyone who has the SAS-encoded URI. From an overall security perspective: since your blob is private, nobody else has access to it without a SAS-encoded URI. To limit SAS use: if, instead of being issued a long-standing SAS URI, the user visited a web page (or API) to request file access, you could generate a new SAS URI for a smaller time window; the end user would still direct-access the blob without streaming the content through the VM (this just adds an extra network hop for obtaining the URI, along with whatever is needed to host the web / API server). Also related to security: if you use a stored access policy, you can modify access after issuing the SAS, rather than embedding the start+end time directly into the SAS URI itself (see here for info about access policies; a sketch follows this answer).
You'll incur the cost of the VM(s) used for fronting the URI requests. Outbound bandwidth costs are the same as with direct blob access: you pay for outbound bandwidth only. Performance will be affected by many things if you go through a VM: VM size, VM resource use (e.g. if your VM is running at 100% CPU, you might see performance degradation), number of concurrent accesses, etc.
Yes, if the download hits the expiry time, the link is no longer valid.
Yes, you can use a SAS combined with a custom domain name configured for your storage account. See here for more information about setting up custom domain names for storage.
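Expanding on the stored access policy point, here is a minimal sketch with the azure-storage-blob Python SDK (which postdates the original answer); the connection string, account key, container, blob and policy names are placeholders. Revoking or clearing the policy invalidates every SAS issued against it.

from datetime import datetime, timedelta
from azure.storage.blob import (
    BlobServiceClient, AccessPolicy, ContainerSasPermissions, generate_blob_sas
)

service = BlobServiceClient.from_connection_string("CONNECTION_STRING")  # placeholder
container = service.get_container_client("userfiles")  # placeholder container

# Define a stored access policy on the container; SAS tokens reference it by ID.
policy = AccessPolicy(
    permission=ContainerSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(days=7),
)
container.set_container_access_policy(signed_identifiers={"download-policy": policy})

# Issue a SAS that points at the policy instead of embedding its own start/expiry.
sas = generate_blob_sas(
    account_name=service.account_name,
    container_name="userfiles",
    blob_name="bigfile.bin",        # placeholder blob
    account_key="ACCOUNT_KEY",      # placeholder
    policy_id="download-policy",
)

# Later, removing the policy revokes all SAS tokens issued against it.
container.set_container_access_policy(signed_identifiers={})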

Azure upload blob with SAS, correct?

Just wanted to ask a quick question.
I want users to upload blobs directly to blob storage. Here's what I came up with:
public ActionResult Upload(HttpPostedFileBase file)
{
    // Server-side upload into the container referenced by the SAS URI.
    CloudBlobContainer blobContainerSas = new CloudBlobContainer(new Uri(blobSasUri));
    CloudBlockBlob blob = blobContainerSas.GetBlockBlobReference(BLOBNAMEHERE);
    blob.UploadFromStream(file.InputStream);
    return new EmptyResult();
}
Before this code I set blobSasUri (I left that part out).
I was wondering if blob.UploadFromStream(file.InputStream); is correct? Am I uploading directly now?
I'm using a SAS URI for the blob being uploaded (not the container). Is this even possible? The blob isn't even uploaded yet... Should I be using a SAS URI for the container instead? //Confused
Thanks!
No, in the example above your users are not directly uploading files into the storage account. They are uploading a file from their browsers to your web server by calling the Upload action, which in turn uploads the data into storage.
To upload data directly into storage from the client browser, you need to make use of CORS and AJAX. First you need to enable CORS on your storage account. This is a one-time operation, and it is what allows browsers to interact directly with your storage.
When setting up CORS, you could start with the following settings:
Allowed Domains: your website address (you could also specify *, which means all websites have access to your storage. That may be fine while you're starting out, but once you have grasped the concepts you should always specify very specific website addresses)
Allowed Methods: PUT, POST (these are the HTTP verbs you will be using in your JavaScript AJAX calls)
Allowed Headers: * (* means all headers sent by browsers are allowed. You should stick with that, as different browsers tend to send different headers, which otherwise makes debugging quite hard)
Exposed Headers: * (* means all headers will be sent by the storage service to the browser)
Max Age in Seconds: 3600
Once CORS is enabled, you can write an application where, using JavaScript/AJAX, your users directly upload a file into your storage account (a sketch of applying these CORS rules programmatically follows after the links below).
You may want to read this blog post about understanding CORS concepts in Azure: http://msdn.microsoft.com/library/azure/dn535601.aspx.
I have also written a blog post about Azure and CORS in which I demonstrate uploading a file from the browser using JavaScript/AJAX; you can read it here: http://gauravmantri.com/2013/12/01/windows-azure-storage-and-cors-lets-have-some-fun/
For configuring CORS rules, you may find this free utility developed by my company useful: http://blog.cynapta.com/2013/12/cynapta-azure-cors-helper-free-tool-to-manage-cors-rules-for-windows-azure-blob-storage/
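As a rough sketch of applying the CORS settings above programmatically, here is how they could be set with the azure-storage-blob Python SDK (which postdates the original answer); the connection string and origin are placeholders, and the portal or the tool linked above works just as well.

from azure.storage.blob import BlobServiceClient, CorsRule

service = BlobServiceClient.from_connection_string("CONNECTION_STRING")  # placeholder

# Mirror the suggested settings: your site origin, PUT/POST, all headers, 1 hour max age.
cors_rule = CorsRule(
    allowed_origins=["https://www.example.com"],  # placeholder website address
    allowed_methods=["PUT", "POST"],
    allowed_headers=["*"],
    exposed_headers=["*"],
    max_age_in_seconds=3600,
)
service.set_service_properties(cors=[cors_rule])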
