I am trying to understand the purpose of signed identifiers when using Shared Access Signatures with Blob storage in Azure. I know that signed identifiers are applied at the container level and are named. I also know that they allow a shared access policy to remain valid for longer than an hour (as opposed to when no signed identifier is specified). I guess my question is: couldn't you just apply a shared access signature at the container level with the appropriate permissions and expiry time? Thanks to all who reply.
Okay, I think I get it now. The best way to interpret signed identifiers is that they are another level of abstraction for access control at the container level. They also let you specify how long a policy applies before it is revoked. In both the explicit and the signed-identifier case, revocation is essentially the expiry time.
So my next question is: say a policy has been compromised. How exactly do I immediately revoke or change it (given that I've defined this policy in my code, how would I change it without having to redeploy the code)?
The Signed Identifier is how you reference an ACL on a particular Container. These are required for you to create revocable access to your blobs.
If you create an expiry time longer than one hour, the Blob service may return a 400 (Bad Request) error, or simply ignore the expiry time and cap it at one hour.
This is done as part of the platform to ensure that your data is secure.
There is more information about the lifetime of a SAS in the MSDN Library.
The main reason for using a signed identifier rather than explicitly specifying all the parameters comes down to security. If a SAS was created with all the parameters specified and a valid HMAC signature, the blob service would honor it. Imagine there were no limit on the expiry time, and then imagine the SAS leaked. In the normal case, it can only do damage for up to an hour. Remember, you have specified all the parameters in it, so you cannot change it. If you could specify an unlimited time, it could not be revoked without actually changing your main storage key (which would invalidate the signature and break all existing SAS). The signed identifier gives you one more layer of abstraction to avoid having to roll storage keys.
The signed identifier (or policy, as I like to call it) is the way to extend past an hour and still be able to (a) immediately revoke the SAS if necessary or (b) immediately change it. With a signed identifier you can change the permissions, delete it, or change the expiry, all of which give you greater control over the life and access of your existing SAS (the ones that use a signed identifier, anyway).
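To make that concrete, here is a minimal sketch using the azure-storage-blob v12 Python SDK (the connection string, account key, container name, and policy id are placeholders, not anything from the question): it creates a stored access policy on a container and then generates a SAS that references the policy by id instead of embedding permissions and expiry.

```python
# Sketch: create a stored access policy (signed identifier) and a SAS that
# references it (assumed: azure-storage-blob v12 SDK; all values are placeholders).
from datetime import datetime, timedelta, timezone
from azure.storage.blob import (
    AccessPolicy,
    BlobServiceClient,
    ContainerSasPermissions,
    generate_container_sas,
)

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("mycontainer")

# Define a named policy (the signed identifier) with read access for 30 days.
policy = AccessPolicy(
    permission=ContainerSasPermissions(read=True),
    start=datetime.now(timezone.utc),
    expiry=datetime.now(timezone.utc) + timedelta(days=30),
)
container.set_container_access_policy(signed_identifiers={"read-only-30d": policy})

# The SAS only carries the policy id; permissions and expiry live in the policy
# on the container, so they can be changed or revoked later without new keys.
sas_token = generate_container_sas(
    account_name=service.account_name,
    container_name="mycontainer",
    account_key="<account-key>",
    policy_id="read-only-30d",
)
print(f"{container.url}?{sas_token}")
```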
Actually, I've just answered my own question. I can write code to reference the containers in question and clear out the access policies currently set on any container.
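For example, clearing a container's stored access policies would look roughly like this (again the azure-storage-blob v12 Python SDK, with placeholder names); any SAS that referenced one of those policies stops working immediately:

```python
# Sketch: revoke all SAS issued against a container's stored access policies by
# clearing the policies (assumed: azure-storage-blob v12 SDK; placeholder names).
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("mycontainer")

# An empty mapping removes every signed identifier on the container, which
# immediately invalidates any SAS that referenced one of them.
container.set_container_access_policy(signed_identifiers={})
```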
GCS signed URLs let you make objects temporarily public via a signed URL.
https://cloud.google.com/storage/docs/access-control/signed-urls
As I understand it, you need to calculate a signature based on several parameters, and then you get access to the object without any other security layer.
Unfortunately, the documentation is not completely clear on whether an object gets explicitly activated by the GCS service for this and receives a flag like "signature XYZ valid for XX minutes on object ABC", or whether GCS serves all files at all times as long as the signature is correct.
If any object can be made available with a proper signature, then all files in GCS are de facto publicly accessible, provided the signature is guessed correctly (brute force).
Is this right?
Is there any information on the level of security that protects URL signing?
Is there a way to disable URL signing?
Do the VPC Service Perimeters apply to signed urls?
These are some points I need to find out for compliance evaluations; they are more theoretical questions than concrete security doubts I have about this service.
Thanks for your comments.
Unfortunately, the documentation is not completely clear on whether an object gets explicitly activated by the GCS service for this and receives a flag like "signature XYZ valid for XX minutes on object ABC", or whether GCS serves all files at all times as long as the signature is correct.
The signed URL contains a timestamp after which the signed URL is no longer valid. The timestamp is part of what the signature signs, which means it can't be changed without invalidating the signature. The signatures themselves are generated entirely on the client with no change in server state.
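As an illustration, a V4 signed URL can be generated with the google-cloud-storage Python client roughly as follows (bucket and object names are placeholders, and the client must be authenticated with a service account that has a private key); note that nothing is sent to GCS and no server-side state is created:

```python
# Sketch: client-side generation of a V4 signed URL (assumed: google-cloud-storage
# Python library, service-account credentials with a private key available).
from datetime import timedelta
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("path/to/object.txt")

# The expiration is part of the signed string; the URL simply stops validating
# after 15 minutes. No API call is made here and nothing is stored server-side.
url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),
    method="GET",
)
print(url)
```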
If any object can be made available with a proper signature, then all files in GCS are de facto publicly accessible, provided the signature is guessed correctly (brute force). Is this right?
Yes, this is true. Anyone who knows the name of a service account permitted to read the file, the bucket name, and the object name, and who correctly guesses which of the 2^256 possible hashes is correct could indeed view your file. Mind you, 2^256 is a fairly large number. If an attacker could make one trillion guesses per nanosecond, we would expect that on average they would discover the correct hash in about 3.6 × 10^44 years, which is about 2.7 × 10^34 times the age of the universe.
Is there any information on the level of security that protects URL signing?
Certainly. GCS uses a common signed URL pattern identical to S3's "AWS Signature Version 4." The exact signing process is described on both services' websites. The signature itself is an HMAC-SHA256, which is a type of SHA-2 hash, a well-studied algorithm whose strengths and weaknesses are widely discussed: https://en.wikipedia.org/wiki/SHA-2.
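For illustration only, the HMAC-SHA256 primitive mentioned above looks like this with Python's standard library; the key and string-to-sign below are placeholders, since the real V4 process prescribes a specific canonical request and string-to-sign:

```python
# Generic illustration of the HMAC-SHA256 step, not the full V4 signing process.
import hashlib
import hmac

signing_key = b"<derived-signing-key>"          # placeholder
string_to_sign = b"<canonical-string-to-sign>"  # placeholder

signature = hmac.new(signing_key, string_to_sign, hashlib.sha256).hexdigest()
print(signature)  # 64 hex characters, i.e. a 256-bit value an attacker would have to guess
```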
Is there a way to disable URL signing?
VPC service controls are the simplest way. They can prevent any user outside of your project from accessing the data regardless of credentials.
Do the VPC Service Perimeters apply to signed urls?
Yes, signed URLs would only work from within the configured security perimeter.
Looking at the library code, the signed URLs are created using your service account private key without cooperation from GCP.
Regarding brute force: each signed URL has a credential attached, which is linked to the signer. That signer account needs to have the permissions you want the user to have on the object. If a URL is stolen, it can be revoked by revoking the signer's permissions (though that revokes all signed URLs for that signer). To minimize risk, you could create a service account that only has access to specific objects in the bucket.
To decrease the possibility of brute-force attacks, you can also rotate the service account key by creating a new one and deleting the previous one.
Regarding the question: Can Signed URLs be fully disabled? You could "disable" this feature by not granting project-wide storage permissions to any Service Account for which you have generated a Private Key.
Hello, I have a web page that loads a blob resource using a SAS policy every time a hyperlink is clicked. That means if I click the link twice or more, I generate two or more different signed URLs to the same blob resource. My question is: is there a way to overwrite or cancel the previously generated SAS policies and keep only the URL generated by the user's last click?
Thank you in advance.
Technically it is possible to do so, but it is not a recommended approach. The reason is that there can only be 5 access policies on a blob container at any point in time, and changing access policies requires a round trip to storage (i.e. a network call). Suppose there are hundreds of users on your website, all accessing the same resource: changing the access policy on the fly would result in errors for some of them, and because it involves a network call, the overall experience may be degraded.
One thing you could do is keep the SAS expiry time short, so that the SAS URL is only valid for a short period and there is less chance of it being misused.
To change the access policy, you would first need to fetch the existing access policies on the container. Then you could either update the policy under its identifier, or remove that access policy and create a new one, and finally save the access policies back.
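A rough sketch of that flow with the azure-storage-blob v12 Python SDK (connection string, container name, and policy ids are placeholders):

```python
# Sketch: fetch the existing access policies, replace one, and save them back
# (assumed: azure-storage-blob v12 SDK; at most 5 policies per container).
from datetime import datetime, timedelta, timezone
from azure.storage.blob import AccessPolicy, BlobServiceClient, ContainerSasPermissions

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("mycontainer")

# 1. Fetch the existing policies into an id -> AccessPolicy mapping.
current = container.get_container_access_policy()
identifiers = {si.id: si.access_policy for si in current["signed_identifiers"]}

# 2. Remove the old policy and add a replacement under a new id.
identifiers.pop("old-policy", None)
identifiers["new-policy"] = AccessPolicy(
    permission=ContainerSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)

# 3. Save the updated set back to the container (a network call, as noted above).
container.set_container_access_policy(signed_identifiers=identifiers)
```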
Here are three questions for you!
Is it possible to revoke an active SAS URI without refreshing the storage key or using a stored access policy?
In my application, all users share the same blob container. Because of this, using a stored access policy (max 5 per container) or refreshing the storage key (which would invalidate ALL SAS URIs) is not an option for me.
Is it possible to show custom errors if the SAS URI is incorrect or expired?
If I let users create their own SAS URI for uploading/downloading, do I need to think about setting restrictions? Can this be abused?
Currently, in my application, there are restrictions on how much you are allowed to upload, but no restrictions on how many SAS URIs you are allowed to create. Users can acquire as many SAS URIs as they like as long as they don't complete their upload or exceed the allowed stored bytes.
How do real file-sharing websites deal with this?
How much does a SAS URI cost to create?
Edit - Clarification of question 3.
Before you can upload or download a blob, you must first get the SAS URI. I was wondering if it's "expensive" to create a SAS URI. Imagine a user exploiting this and creating a SAS URI over and over again without finishing the upload/download.
I was also wondering how real file-sharing websites deal with this. It's easy to store information about how much storage a user is using and put restrictions on that, but... if a user keeps uploading files to 99%, then cancels, restarts, and does the same thing again, I imagine it would cost a lot for the host.
To answer your questions:
No, ad-hoc SAS tokens (i.e. tokens created without a stored access policy) can't be revoked; the only revocation mechanisms are changing the storage key or using a stored access policy in the first place.
No, at this time it is not possible to customize the error message. The standard error returned by the storage service will be shown.
You need to provide more details regarding 3. As it stands, I don't think we have enough information to comment.
UPDATE
Regarding your question about how expensive creating a SAS URI is: creating a SAS URI does not involve making a REST API call to the storage service, so there is no storage transaction involved. From the storage side, there is no cost in creating a SAS URI. Assuming your service is a web application, the only cost I can think of is the user making a call to your service to create the SAS URI.
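To make that concrete, a SAS is computed purely from local inputs, for example like this (azure-storage-blob v12 Python SDK; account, container, blob, and key are placeholders):

```python
# Sketch: building a SAS locally -- no REST call to the storage service is made
# (assumed: azure-storage-blob v12 SDK; all values below are placeholders).
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas = generate_blob_sas(
    account_name="myaccount",
    container_name="mycontainer",
    blob_name="upload.bin",
    account_key="<account-key>",
    permission=BlobSasPermissions(create=True, write=True),
    expiry=datetime.now(timezone.utc) + timedelta(minutes=10),
)
# 'sas' is just a signed query string; nothing was created or stored server-side.
```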
Regarding your comment about how real file sharing websites deal with it, I think unless someone with a file sharing website answers it, it would be purely speculative.
(My Speculative response :)) If I were running a file sharing website, I would not worry too much about this kind of thing simply because folks don't have time to "mess around" with your site/application. It's not that the users would come to your website with an intention of "let's just upload files till the upload is 99%, cancel the upload and do that again" :). But again, it is purely a speculative response :).
I have a situation where I want to send one user a link to a file with 3 months of access, and send the same file to another user with 1 month of access. Therefore I want to create two different SAS for the file.
Is this supported, or does every SAS I get for a file overwrite the previous one?
A SAS is a token you generate based on a set of policies/keys. Azure Storage (since you're using blobs) doesn't track these tokens server side. When the SAS token is presented, it includes the "hashed" signature value and either the permissions or the policy that was used to sign the request. The storage service then re-computes the hash using the same keys that were used to generate the original hash and compares it to the signature value presented.
Because the tokens themselves are not tracked by the service, you can theoretically generate an infinite number of them.
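For example, with the azure-storage-blob v12 Python SDK (names and key are placeholders) you could hand out two independent read SAS for the same blob, one valid for 3 months and one for 1 month, and neither affects the other:

```python
# Sketch: two independent SAS for the same blob with different expiry times
# (assumed: azure-storage-blob v12 SDK; names and key are placeholders).
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

def read_sas(days: int) -> str:
    return generate_blob_sas(
        account_name="myaccount",
        container_name="shared",
        blob_name="report.pdf",
        account_key="<account-key>",
        permission=BlobSasPermissions(read=True),
        expiry=datetime.now(timezone.utc) + timedelta(days=days),
    )

sas_three_months = read_sas(90)  # link for the first user
sas_one_month = read_sas(30)     # link for the second user
```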
The answer is yes. I have just uploaded one file and generated two different SAS - one expiring at 10:59 and the other at 11:10. At 11:00 the first one expired and "AuthenticationFailed" was returned on the given URL.
The start and end date-times are embedded in the SAS itself, so the difference in expiry travels with each URL and the service can tell them apart.
I'm creating an application that will be hosted in Azure. In this application, users will be able to upload their own content. They will also be able to configure a list of other trusted app users who will be able to read their files. I'm trying to figure out how to architect the storage.
I think that I'll create a storage container named after each user's application ID, and they will be able to upload files there. My question relates to how to grant read access to all files to which a user should have access. I've been reading about shared access signatures and they seem like they could be a great fit for what I'm trying to achieve. But, I'm evaluating the most efficient way to grant access to users. I think that Stored access policies might be useful. But specifically:
Can I use one shared access signature (or stored access policy) to grant a user access to multiple containers? I've found one piece of information which I think is very relevant:
http://msdn.microsoft.com/en-us/library/windowsazure/ee393341.aspx
"A container, queue, or table can include up to 5 stored access policies. Each policy can be used by any number of shared access signatures."
But I'm not sure if I'm understanding that correctly. If a user is connected to 20 other people, can I grant him or her access to twenty specific containers? Of course, I could generate twenty individual stored access policies, but that doesn't seem very efficient, and when they first log in, I plan to show a summary of content from all of their other trusted app users, which would equate to demanding 20 signatures at once (if I understand correctly).
Thanks for any suggestions...
-Ben
Since you are going to have a container per user (for now I'll equate a user with what you called a user application ID), you'll have a storage account containing many different containers for many users. If you want the application to be able to upload to only one specific container while reading from many, two options come to mind.
First: Create an API that lives somewhere and handles all the requests. Behind the API, your code has full access to the entire storage account, so your business logic determines what users do and do not have access to. The upside is that you don't have to create shared access signatures (SAS) at all; your app only knows how to talk to the API. You can even combine the data for that summary of content by making parallel calls to the various containers from a single call to the application. The downside is that you are now hosting an API service that has to broker ALL of these calls. You'd still need an API service to generate SAS if you go the SAS route, but then it would only be needed to generate the SAS, and the client applications would make their calls directly to the Windows Azure storage service, which bears the load and reduces the resources you actually need.
Second: Go the SAS route and generate SAS as needed, but this will get a bit tricky.
You can only create up to five stored access policies on each container. One of these five will be a policy for the "owner" of the container that gives them read and write permissions. Now, since you are allowing folks to give read permissions to other folks, you'll run into the policy count limit unless you reuse the same policy for read; but then you won't be able to revoke it selectively if the user removes someone from their "trusted" list of readers. For example, if I gave both Bob and James permissions to my container and they were both handed a copy of the read SAS, and I then needed to remove Bob, I'd have to cancel the read policy they shared and reissue a new read SAS to James. That's not really that bad of an issue, though, as the app can detect when it no longer has permission and ask for a renewed SAS.
In any case, you still kind of want the policies to be short-lived. If I removed Bob from my trusted readers, I'd pretty much want him cut off immediately. This means you'll be going back to get a renewed SAS quite a bit and recreating the shared access signature, which reduces the usefulness of the stored access policies. This really depends on how long you were planning to allow the policy to live and how quickly you'd want someone cut off once they become "untrusted".
Now, a better option could be to create ad-hoc signatures. You can actually have as many ad-hoc signatures as you want, but they can't be revoked and can last at most one hour. Since you'd make them short-lived, the length or lack of revocation shouldn't be an issue. Going that route means the application will come back to get them as needed, but given what I mentioned above about wanting the SAS to run out when someone is removed, this may not be a big deal. As you pointed out, though, this does increase the complexity of things because you're generating a lot of SAS; however, since these are ad-hoc, you don't really need to track them.
If you were going to go the SAS route, I'd suggest that your API generate the ad-hoc signatures as needed. They shouldn't last more than a few minutes, since people can have their permission to a container removed, and all you are trying to do is reduce the load on the hosted service for actually doing the upload and download. Again, all the logic for deciding which containers someone can see is still in your API service, and the applications just get signatures they can use for short periods of time.
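As a sketch of that API-side helper (azure-storage-blob v12 Python SDK; caller_is_trusted_reader is a hypothetical authorization check, and the names are placeholders, not part of any real API):

```python
# Sketch: the API service mints short-lived, ad-hoc read SAS for containers the
# caller is allowed to see (assumed: azure-storage-blob v12 SDK).
from datetime import datetime, timedelta, timezone
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

def issue_read_sas(account_name: str, account_key: str,
                   container_name: str, caller_id: str) -> str:
    # Hypothetical check against the owner's "trusted readers" list.
    if not caller_is_trusted_reader(caller_id, container_name):
        raise PermissionError("caller is not a trusted reader of this container")

    # A few minutes is enough for the client to talk to storage directly, and
    # short enough that removal from the trusted list takes effect quickly.
    return generate_container_sas(
        account_name=account_name,
        container_name=container_name,
        account_key=account_key,
        permission=ContainerSasPermissions(read=True, list=True),
        expiry=datetime.now(timezone.utc) + timedelta(minutes=5),
    )
```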