AWS S3: The security of a signed URL as a hyperlink

Is this safe? Maintaining security using a pre-signed URL with an AWS S3 bucket object?
my link
In other words - part 1...
Say I'm storing a bunch of separate individuals' files in a bucket, and I want to provide a link to a file for a user. Obviously, each file is uniquely but consecutively named, and I don't want people to be able to change the link from 40.pdf to 30.pdf and get a different file. This URL seems to do that.
part 2, and more importantly....
Is this safe, or is it a dangerous method of displaying a URL in terms of the security of my bucket? Clearly, I will be giving away my "access key" here, but of course not my "secret".
Already answered 3 years ago... sorry.
How secure are Amazon AWS Access keys?

AWS Security Credentials are used when making API calls to AWS. They consist of two components:
Access Key (eg AKIAISEMTXNOG4ABPC6Q): This is similar to a username. It is okay for people to see it.
Secret Key: This is a long string of random characters that is a shared secret between you and AWS. When making API calls, the SDK uses the shared secret to 'sign' your API calls. This is a one-way hash, so people cannot reverse-engineer your secret key. The secret key should be kept private.
A Signed URL is a method of granting time-limited access to an S3 object. The URL contains the Access Key and a Signature, which is a one-way hash calculated from the object, expiry time and the Secret Key.
A Signed URL is safe because:
It is valid for only a limited time period that you specify
It is valid only for the Amazon S3 object that you specify
It cannot be used to retrieve a different object nor can the time period be modified (because it would invalidate the signature)
However, anyone can use the URL during the valid time period. So, if somebody Tweets the URL, many people could potentially access the object until the expiry time. This potential security threat should be weighed against the benefit of serving traffic directly from Amazon S3 rather than having to run your own web servers.
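For illustration, here is a minimal sketch of generating such a URL with boto3 (the bucket and key names are placeholders); the query string of the result carries the credential (Access Key), the expiry, and the signature:

import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "files/40.pdf"},  # placeholder names
    ExpiresIn=3600,  # link valid for one hour
)
# Changing the object key or the expiry in the resulting URL invalidates the signature.
print(url)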

Related

GCP Cloud Storage Signed Urls - Bound to an object or just calculated?

GCS Signed URLs let you make objects temporarily available to the public via a signed URL.
https://cloud.google.com/storage/docs/access-control/signed-urls
As I understand it, you calculate a signature based on several parameters, and then you get access to the object without any other security layer.
Unfortunately, the documentation is not completely clear on whether the GCS service explicitly activates an object for this and records a flag like "signature XYZ valid for XX minutes on object ABC", or whether GCS serves all files at all times as long as the signature is correct.
If any object can be made available with a proper signature, then all files in GCS are de facto publicly accessible, if the signature is guessed correctly (brute force).
Is this right?
Is there any information on the level of security that protects URL signing?
Is there a way to disable the url-signing?
Do the VPC Service Perimeters apply to signed urls?
These are some points that I need to find out for compliance evaluations; they are more theoretical questions than concrete security doubts I have with this service.
Thanks for your comments.
Unfortunately, the documentation is not completely clear on whether the GCS service explicitly activates an object for this and records a flag like "signature XYZ valid for XX minutes on object ABC", or whether GCS serves all files at all times as long as the signature is correct.
The signed URL contains a timestamp after which the signed URL is no longer valid. The timestamp is part of what the signature signs, which means it can't be changed without invalidating the signature. The signatures themselves are generated entirely on the client with no change in server state.
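As a sketch of that client-side flow (assuming the google-cloud-storage Python client and a local service-account key file; the bucket and object names are placeholders):

import datetime
from google.cloud import storage

# Signing happens locally with the service account's private key;
# GCS is not contacted and keeps no record of the URL.
client = storage.Client.from_service_account_json("sa-key.json")
blob = client.bucket("my-bucket").blob("reports/report.pdf")
url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="GET",
)
print(url)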
If any object can be made available with a proper signature, then all files in GCS are de facto publicly accessible, if the signature is guessed correctly (brute force). Is this right?
Yes, this is true. Anyone who knows the name of a service account permitted to read the file, the bucket name, and the object name, and who correctly guesses which of the 2^256 possible hashes is correct could indeed view your file. Mind you, 2^256 is a fairly large number. If an attacker could make one trillion guesses per nanosecond, we would expect that on average they would discover the correct hash in about 3.6×10^44 years, which is about 2.7×10^34 times the age of the universe.
Is there any information on the level of security that protects URL signing?
Certainly. GCS uses a common signed URL pattern identical to S3's "AWS Signature Version 4." The exact signing process is described on both services' websites. The signature itself is an HMAC based on SHA-256 (a SHA-2 hash), a well-studied algorithm whose strengths and weaknesses are widely discussed: https://en.wikipedia.org/wiki/SHA-2
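As a toy illustration of that primitive (this is not the exact canonical string either service signs), an HMAC over the object name and expiry cannot be altered or reversed without the secret:

import hashlib
import hmac

secret_key = b"example-secret"  # stands in for the real signing secret
string_to_sign = b"GET\n/my-bucket/40.pdf\nExpires=1700000000"  # simplified

signature = hmac.new(secret_key, string_to_sign, hashlib.sha256).hexdigest()
# Changing the object name or the expiry changes the signature completely,
# and the signature cannot be reversed to recover the secret key.
print(signature)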
Is there a way to disable the url-signing?
VPC service controls are the simplest way. They can prevent any user outside of your project from accessing the data regardless of credentials.
Do the VPC Service Perimeters apply to signed urls?
Yes, signed URLs would only work from within the configured security perimeter.
Looking at the library code, the signed URLs are created using your service account private key without cooperation from GCP.
Regarding the brute force: each Signed URL has a credential attached, which is linked to the signer. This signer account needs to have the permissions you want the user to have on the object. If the URL is stolen, it can be revoked by revoking the signer's permissions (though this revokes all Signed URLs for that signer). To minimize risk, you could create a service account that has access only to specific objects in the bucket.
To decrease the possibility of brute-force attacks, you can also rotate the Service Account key by creating a new one and deleting the previous one.
Regarding the question: Can Signed URLs be fully disabled? You could "disable" this feature by not granting project-wide storage permissions to any Service Account for which you have generated a Private Key.

How do I pass a secret key to a URL via a QR code?

I am looking for a secure way to pass a secret key when the user scans a QR code and goes to my URL. This secret key is the key that is connected to one of my products (a smart speaker). If the secret key is valid, the user will be asked to log in or register on my webpage to couple their account to the product.
However, from my research, QR codes can only pass data that is visible in the URL. This brings security issues even if the key is encrypted: users may type in adjacent values, the keys get saved in browser history (meaning malicious code could sweep through a user's browsing history and extract passwords, tokens, etc.), and they are probably saved in my server's logs and memory, ... .
Is there a more secure way to pass secret information via a QR-code to a url?
Long story short - there is not. One would usually pass secrets as headers or in the body of the request, but you don't have this kind of flexibility when using QR codes.
Without understanding your business requirements fully, I would try to tackle the problem in the following way.
Embed the secret in the URL, encode it into a QR code, and hide the code in the product's package for the customer to find after buying and opening the product.
After the URL is used, redirect the user to a page to create credentials, or use a federation protocol to create an account.
After the account has been created, mark the URL's secret as invalid.
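A minimal sketch of that flow, assuming the qrcode package and a hypothetical /claim endpoint; the server-side store that marks secrets as used is only sketched in comments:

import secrets
import qrcode

# One secret per manufactured unit, stored server-side as "unclaimed".
unit_secret = secrets.token_urlsafe(32)

# Hypothetical endpoint; this URL is printed inside the product's package.
claim_url = f"https://example.com/claim?key={unit_secret}"
qrcode.make(claim_url).save("claim-qr.png")

# Server side (pseudologic): on the first visit with a valid, unclaimed secret,
# create or link the account, couple it to the unit, then mark the secret as used.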
You can put the secret data in the fragment part of the URL (after #); then it won't get sent to the server, but it can be read by JavaScript in the web page.

How do I determine which AWS Access Keys are used for boto3 calls in Python?

I'm writing a script to automatically rotate AWS Access Keys on Developer laptops. The script runs in the context of the developer using whichever profile they specify from their ~/.aws/credentials file.
The problem is that if they have two API keys associated with their IAM User account, I cannot create a new key pair until I delete an existing one. However, if I delete whichever key the script is using (which is probably from the ~/.aws/credentials file, but might be from environment variables or session tokens or something), the script won't be able to create a new key. Is there a way to determine which AWS Access Key ID is being used to sign boto3 API calls within Python?
My fall back is to parse the ~/.aws/credentials file, but I'd rather a more robust solution.
Create a default boto3 session and retrieve the credentials:
import boto3
print(boto3.Session().get_credentials().access_key)
That said, I'm not necessarily a big fan of the approach that you are proposing. Both keys might legitimately be in use. I would prefer a strategy that notifies users of multiple keys, asks them to validate their usage, and suggests they deactivate or delete keys that are no longer in use.
You can also use IAM's get_access_key_last_used() to retrieve information about when the specified access key was last used.
Maybe it would be reasonable to delete keys that are a) inactive and b) haven't been used in N days, but I think that's still a stretch and would require careful handling and awareness among your users.
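A sketch of such an audit using list_access_keys and get_access_key_last_used (the user name is a placeholder and the 90-day threshold is an assumption):

import datetime
import boto3

iam = boto3.client("iam")
user = "some-developer"                   # placeholder IAM user name
threshold = datetime.timedelta(days=90)   # assumed "stale" cutoff
now = datetime.datetime.now(datetime.timezone.utc)

for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
    used = last["AccessKeyLastUsed"].get("LastUsedDate")  # absent if never used
    stale = used is None or now - used > threshold
    print(key["AccessKeyId"], key["Status"], "stale" if stale else "recently used")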
The real solution here is to move your users to federated access and 100% use of IAM roles. Thus no long-term credentials anywhere. I think this should be the ultimate goal of all AWS users.

S3: what is the correct design for invalidation of presigned urls?

As it turns out, there is no API for invalidating presigned URLs, but it is possible to drop access via IAM policy. If I have a service with many users (in a Cognito User Pool), what is the correct design for some kind of URL invalidation? Do I need as many IAM accounts as I have users? Right now I think I can use {directory-names-of-long-random-crypto-tokens-for-s3-that-I-will-rename}/{filename-that-remains-the-same.txt}, with tokens from secrets.token_urlsafe(99), but with such an approach I will get duplicated files - different users can have access to the same files.
Another solution: I could use a Lambda for every file request, but I think it would be simpler, cheaper, and faster to have duplicated files in S3 (maybe a few percent of wasted storage).
I checked CloudFront - it looks like there is no way to map different URLs/params to the same S3 file.
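A minimal sketch of the token-prefix approach described above (bucket and key names are placeholders; this illustrates the idea from the question, not an official AWS pattern): copy the object under a random per-user prefix, presign that key, and later "invalidate" it by deleting the copy.

import secrets
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"                 # placeholder
source_key = "shared/report.txt"     # the canonical object

token = secrets.token_urlsafe(99)    # per-user random prefix
user_key = f"{token}/report.txt"     # the filename stays the same

s3.copy_object(Bucket=bucket, Key=user_key,
               CopySource={"Bucket": bucket, "Key": source_key})
url = s3.generate_presigned_url("get_object",
                                Params={"Bucket": bucket, "Key": user_key},
                                ExpiresIn=3600)
# To revoke access later, delete the per-user copy:
# s3.delete_object(Bucket=bucket, Key=user_key)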

Shared content from S3 or elsewhere

If you have an app where users have data in S3 buckets but can select who they share it with, what's the best technique for protecting this data? For example, how would Instagram protect their image data if they were using S3 (or some other centralized storage provider) so you could only see pictures you were authorized to see?
Obscurity via long URL strings seems like one approach, but I was curious whether there is a better technique.
By default, all objects in Amazon S3 are private. You can then add permissions so that people can access your objects. This can be done via:
An Access Control List that applies to individual objects
A Bucket Policy that applies rules to the whole bucket
IAM to apply permissions to specific Users and Groups
A Pre-Signed URL that grants temporary access to an individual object
If you wish to "select who to share it with", there are two choices:
If the person is defined as a User in IAM, then assign permissions against that User
If the person is not defined in IAM (eg an Instagram user), then use a pre-signed URL
A Pre-Signed URL grants access to S3 objects as a way of "overriding" access controls. A normally private object can be accessed via a URL by appending an expiry time and signature. This is a great way to serve private content from Amazon S3.
Basically, if the application determines that the user is entitled to access an object in Amazon S3, it can generate a link that provides temporary access to the object. Anyone with that link can access the object, but it will no longer work once the time period has expired.
The pre-signed URL can be generated via the AWS SDK (available for most popular programming languages). It can also be generated via the aws s3 presign command in the AWS Command-Line Interface (CLI).
Pre-signed URLs can even be used within web pages. For example, the HTML might refer to a picture using an <img> tag, where the src is a pre-signed URL. That way, a private picture can be displayed on the page, but search engines would not be able to scrape the picture.
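As a sketch of that application-side check (the SHARES store and the get_share_link function are hypothetical; only generate_presigned_url is real boto3):

import boto3

s3 = boto3.client("s3")

# Hypothetical permission store: object key -> set of user ids allowed to view it
SHARES = {"photos/alice/beach.jpg": {"alice", "bob"}}

def get_share_link(viewer_id, bucket, key, ttl=300):
    # Only generate a short-lived link if the viewer is entitled to the object.
    if viewer_id not in SHARES.get(key, set()):
        raise PermissionError("not shared with this user")
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=ttl,  # long enough for the <img> tag to load
    )

# Example: url = get_share_link("bob", "my-app-media", "photos/alice/beach.jpg")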
