Earlier I was retrieving a blob without using SAS authorization.
But now I want only users who have the SAS token to be able to access the blob.
Let's say I want to access a file at
https://storageaccount.blob.core.windows.net/sascontainer/sasblob.txt
Now I have the SAS token too, so the new URL would be
https://storageaccount.blob.core.windows.net/sascontainer/sasblob.txt?sv=2012-02-12&st=2013-04-12T23%3A37%3A08Z&se=2013-04-13T00%3A12%3A08Z&sr=b&sp=rw&sig=dF2064yHtc8RusQLvkQFPItYdeOz3zR8zHsDMBi4S30%3D
What do I do next so that only those with the second link can retrieve the "sasblob.txt" file?
What changes do I have to make in the Azure portal?
I guess the only change I have to make on the client side is the URL: I need to replace the URL without the SAS token with the URL containing the SAS token.
As long as the blob is private (which can be set at the container level), nobody will have access without the SAS-augmented URI. Even if you kept giving out the public URI, it wouldn't work if the container was marked as private.
Also, in your example, you've created a fictitious sascontainer. Note that shared access signatures work on any blob in any container. You don't need a special designated container.
A SAS-based URI remains valid until its expiry time passes (or you delete the blob). If you want more control, such as the ability to revoke a URI, you'd need to use a stored access policy. Just something for you to consider looking into, and there's plenty of documentation on that, should you go that route.
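To generate the SAS-augmented URI server-side, here is a minimal sketch using the Azure SDK for Java, assuming the account and blob names from your example and a placeholder connection string; the read/write permissions and 35-minute lifetime mirror the sp=rw token in the question:
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobClientBuilder;
import com.azure.storage.blob.sas.BlobSasPermission;
import com.azure.storage.blob.sas.BlobServiceSasSignatureValues;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

public class SasUrlExample {
    public static void main(String[] args) {
        // Placeholder connection string; replace with your storage account's key.
        String connectionString = "DefaultEndpointsProtocol=https;AccountName=storageaccount;AccountKey=account-key";

        BlobClient blobClient = new BlobClientBuilder()
                .connectionString(connectionString)
                .containerName("sascontainer")
                .blobName("sasblob.txt")
                .buildClient();

        // Read/write permissions, valid for 35 minutes (like the sp=rw token above).
        BlobSasPermission permissions = new BlobSasPermission()
                .setReadPermission(true)
                .setWritePermission(true);
        BlobServiceSasSignatureValues values = new BlobServiceSasSignatureValues(
                OffsetDateTime.now(ZoneOffset.UTC).plusMinutes(35), permissions);

        // Hand out this URL instead of the bare blob URL.
        String sasUrl = blobClient.getBlobUrl() + "?" + blobClient.generateSas(values);
        System.out.println(sasUrl);
    }
}
As long as the container's public access level is private, anyone without that query string is refused.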
How to generate an Azure Blob Storage SAS URL using Java?
Do we have any API with which I can expire an existing SAS URL?
Requirement: generate a SAS URL with an expiry time of 7 days, and have some API with which I can revoke that SAS URL at any point within the expiry window.
It looks like this doesn't exist: every call just creates a new SAS URL, and there is no way to expire an existing SAS URL at an arbitrary point in time.
You can create an access policy on the container with a set start and end date, then create a SAS token on the container or an individual blob using that policy. Once the policy is deleted, all SAS tokens that were generated with it are invalidated.
public BlobServiceSasSignatureValues(String identifier)
//Where identifier is the name of the access policy
It's limited to five stored access policies per container, but if you are a bit flexible on the expiration date you could create one policy each week, so every blob would be accessible for at least one week and at most two. It also implies that once you remove a policy, all URLs for that week stop working.
I don't think there's any other way to invalidate the generated URLs, apart from regenerating the account access keys.
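Here is a minimal sketch of that pattern with the Azure SDK for Java; the policy name "weekly-read" and the already-authenticated BlobContainerClient are assumptions:
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.models.BlobAccessPolicy;
import com.azure.storage.blob.models.BlobSignedIdentifier;
import com.azure.storage.blob.sas.BlobServiceSasSignatureValues;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.util.Collections;

public class PolicySasExample {
    public static String createPolicyBoundSas(BlobContainerClient containerClient) {
        // Stored access policy valid for 7 days ("weekly-read" is a made-up name).
        BlobSignedIdentifier policy = new BlobSignedIdentifier()
                .setId("weekly-read")
                .setAccessPolicy(new BlobAccessPolicy()
                        .setStartsOn(OffsetDateTime.now(ZoneOffset.UTC))
                        .setExpiresOn(OffsetDateTime.now(ZoneOffset.UTC).plusDays(7))
                        .setPermissions("r"));
        containerClient.setAccessPolicy(null, Collections.singletonList(policy));

        // The SAS references the policy by name; deleting the policy later revokes the SAS.
        return containerClient.generateSas(new BlobServiceSasSignatureValues("weekly-read"));
    }
}
Note that setAccessPolicy replaces the container's entire policy list, so if you rotate policies weekly you'd want to read the existing list and merge rather than overwrite.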
I have blob storage with some resources. I provide SAS tokens to clients, and every token is generated for a specific blob for a specific client. After some amount of time I want to rotate my account keys, so all current clients' tokens will be invalidated (clients do not have the account key, they only have a token).
I was wondering if someone has had a similar case where, when using the REST API against Azure Storage, you have to provide new SAS tokens to clients after key rotation. I know that in this situation the client will get a 403 error, so one option is for it to send another request for a new token and then retry the request for the resource.
Or maybe I could send back a 301 Moved HTTP code with a link to the REST endpoint that generates a new token, so the client wouldn't need additional knowledge about another endpoint.
Does anyone have experience with token rotation like this?
As mentioned in the comment, because your clients access the blob directly, you wouldn't know that they got a 403 error unless they tell you about it.
If it is acceptable, you could take a look at Authorize access to Azure blobs and queues using Azure Active Directory. Once it has been configured, the client can still access the storage even if you rotate the account keys. Note, however, that this feature applies at the container level and above, not at the blob level, so it may not fit your scenario.
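If you go the Azure AD route, a minimal sketch with the Azure SDK for Java might look like the following; the account URL and names are placeholders, and DefaultAzureCredential picks up whatever identity is configured in the environment:
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

public class AadAccessExample {
    public static void main(String[] args) {
        // Azure AD authentication instead of account keys: rotating the keys
        // does not invalidate this client's access.
        BlobServiceClient serviceClient = new BlobServiceClientBuilder()
                .endpoint("https://storageaccount.blob.core.windows.net") // placeholder
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        // The identity needs an RBAC role such as "Storage Blob Data Reader",
        // which can be granted at container level or above, not per blob.
        BlobClient blob = serviceClient.getBlobContainerClient("sascontainer")
                .getBlobClient("sasblob.txt");
        System.out.println(blob.exists());
    }
}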
I am generating a shared access signature (SAS) for one of my blob containers, which is v2, using the Azure portal. I am trying to upload a file from the front end, for which the SAS is required. The problem is that the SAS expires every day. Is there a way to renew the SAS automatically from code, or a way to do the authentication using Azure AD?
Basically, I have a front end where the user logs in using Azure AD, and I want to utilize his session to allow him to upload to Azure Storage. As he is already authenticated, I feel there should be a way to generate a SAS on the fly for his session.
Shared access signatures are useful for providing limited permissions to your storage account to clients that should not have the account key.
If you are the one writing data to the storage account, do so server-side. That way you can validate that the user is logged in and, if so, let your backend write to the storage account using one of the access keys (or better yet, a managed identity).
Of course, you could have your front end request a SAS token from the back end, for instance from an API; this could be implemented quite simply using an Azure Function, and the SAS token could use near-term expiration times. In the end, you're still opening up part of the storage account to anyone who can access the front end.
With near-term expiration, even if a SAS is compromised, it's valid only for a short time. This practice is especially important if you cannot reference a stored access policy. Near-term expiration times also limit the amount of data that can be written to a blob by limiting the time available to upload to it.
Source: Using shared access signatures (SAS)
Taken from that same article:
The following code example creates an account SAS that is valid for the Blob and File services and gives the client read, write, and list permissions to access service-level APIs. The account SAS restricts the protocol to HTTPS, so requests must be made over HTTPS.
static string GetAccountSASToken()
{
    // To create the account SAS, you need to use your shared key credentials. Modify for your account.
    const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key";
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);

    // Create a new access policy for the account.
    SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
    {
        Permissions = SharedAccessAccountPermissions.Read | SharedAccessAccountPermissions.Write | SharedAccessAccountPermissions.List,
        Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.File,
        ResourceTypes = SharedAccessAccountResourceTypes.Service,
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
        Protocols = SharedAccessProtocol.HttpsOnly
    };

    // Return the SAS token.
    return storageAccount.GetSharedAccessSignature(policy);
}
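That account SAS is fairly broad (and its ResourceTypes = Service setting only covers service-level APIs, not blob uploads). For the upload scenario in the question, the back end would more likely hand out a narrowly scoped, short-lived blob SAS. A sketch with the Azure SDK for Java, where the "uploads" container name and the 15-minute window are assumptions:
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobClientBuilder;
import com.azure.storage.blob.sas.BlobSasPermission;
import com.azure.storage.blob.sas.BlobServiceSasSignatureValues;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

public class UploadSasExample {
    // Returns a short-lived, create/write-only SAS URL for a single blob.
    public static String getUploadUrl(String connectionString, String blobName) {
        BlobClient blobClient = new BlobClientBuilder()
                .connectionString(connectionString)
                .containerName("uploads") // assumed container name
                .blobName(blobName)
                .buildClient();

        // Only create/write, and only for 15 minutes: if the token leaks,
        // the exposure window is small.
        BlobSasPermission permissions = new BlobSasPermission()
                .setCreatePermission(true)
                .setWritePermission(true);
        BlobServiceSasSignatureValues values = new BlobServiceSasSignatureValues(
                OffsetDateTime.now(ZoneOffset.UTC).plusMinutes(15), permissions);

        return blobClient.getBlobUrl() + "?" + blobClient.generateSas(values);
    }
}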
I'm trying to build a kind of Instagram clone for me and my friends only. I'm using React on the frontend and Node on the backend.
I'm also using Azure to store the files my users upload. I've already managed to get a shared access signature (SAS), but I don't know if there's a more appropriate way to do this. Is generating a new SAS every time I want to show a post overkill? Is there a way to get a token that gives an authenticated user read access to a whole container of blobs, so I don't have to make a SAS for every blob?
At the moment, given a blob URL with a valid (unexpired) SAS token in it, anyone can access the blob, even in a browser with no login to my site.
Edit: To illustrate what I want to do, here is an example: when you send a picture in a DM on Twitter, you can get the URL of that picture, but if you try to access that URL in incognito mode, you can't access the file. I can't find any resources on how to do something like that.
I wonder if it is possible to do the following:
- I have a blob storage container with some HTML webpages. The storage is private; it cannot be set to public access, and only users with tokens may access it.
It is possible to access single files using SAS token-based authentication by generating a URI with a query string, but that only works for one file. I.e., I can access an index.html page, but when I click a link on that page, the access token is not passed along, so I get a 403 error for that subpage.
Is it possible to make it so that the token allows access to all the subpages?
I wonder if it is even achievable.
Assuming:
- by access token you mean a Shared Access Signature (SAS) token, and
- all the files are in the same private container,
it is certainly possible to access the subpages.
For that, the first thing you would need to do is create the SAS token on the blob container and not on an individual file (index.html in your case).
Since the page is a static HTML page and not generated dynamically, what you would need to do is: when someone clicks a link to a subpage, use JavaScript to append that SAS token to the link.
For example, if there's a subpage called index2.html, then when someone clicks its link you would use JavaScript to read the query string from your main page's URL (which is essentially the SAS token), append that SAS token to the link, and then redirect the user to that link.
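For the first step, here is a minimal sketch of creating the container-level SAS with the Azure SDK for Java (the "site" container name, one-day lifetime, and connection string are assumptions); the link rewriting itself is the few lines of client-side JavaScript described above:
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobContainerClientBuilder;
import com.azure.storage.blob.sas.BlobContainerSasPermission;
import com.azure.storage.blob.sas.BlobServiceSasSignatureValues;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

public class ContainerSasExample {
    public static void main(String[] args) {
        // Placeholder connection string and container name.
        BlobContainerClient container = new BlobContainerClientBuilder()
                .connectionString("DefaultEndpointsProtocol=https;AccountName=storageaccount;AccountKey=account-key")
                .containerName("site")
                .buildClient();

        // Read permission on the whole container, so one token works for
        // index.html and every subpage it links to.
        BlobContainerSasPermission permissions = new BlobContainerSasPermission()
                .setReadPermission(true);
        BlobServiceSasSignatureValues values = new BlobServiceSasSignatureValues(
                OffsetDateTime.now(ZoneOffset.UTC).plusDays(1), permissions);

        // Append this query string to any blob URL in the container.
        System.out.println(container.generateSas(values));
    }
}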