I am using the Azure 12-month trial account and hosting an Excel file in a Storage account through the Azure Portal.
I generate a Shared Access Signature with an end date three months from today and append the generated SAS token to the file's URL.
I am able to access the file using this process. However, the token seems to expire after only a few invocations of the URL. The issue was most recently observed after overwriting the file with an updated version in the Azure storage account, followed by regenerating the SAS token.
The URL with the SAS token appended looks like:
https://xxxxxx.file.core.windows.net/folder_name/yyyyy.xlsx?sv=2019-02-02&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-12-30T16:04:08Z&st=2019-10-22T08:04:08Z&spr=https,http&sig=xxxxx%yyyyy%zzzz
Here is the error I see:
<Error>
<Code>ConditionHeadersNotSupported</Code>
<Message>
Condition headers are not supported. RequestId:<XXXXX> Time:<YYYYYY>
</Message>
</Error>
The error is random and the URL works intermittently.
Has anyone observed this issue, and what could be a fix?
I can reproduce your error.
This does not mean that the SAS token has expired: if you run the same test against Azure Blob Storage, everything works. The error comes from the browser you are using, which adds a conditional (If-*) header to the request.
When there is no conditional header, you can use the URL normally.
That is because File Storage does not support conditional headers, so any request that carries one is rejected by the File service.
Here is the official doc on which headers the File Storage service supports.
So the SAS token is not at fault; it is the browser. Unless you specifically need File Storage, I suggest you use Azure Blob Storage, which will not cause this problem.
The problem you're having is due to Microsoft's Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0 setup.
The service wasn't designed primarily for browsers to access files, and it is limited in which headers it supports.
Browsers typically check their locally cached copies of files before attempting to download a new copy. They do this by examining the local file attributes and asking the web server to return the file only IF it has been modified after the date of the cached copy, using the If-Modified-Since header, just as BowmanZhu said.
Instead of ignoring the header, the server throws an error. To overcome this, you need to perform a hard reload of the page. In Chrome, you can do this by pressing CTRL + SHIFT + R.
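If you need to fetch the file from code rather than a browser, a plain GET with no conditional headers avoids the error entirely. Here is a minimal sketch with Python's requests library, using a placeholder URL shaped like the one in the question:

import requests

# Placeholder URL in the same shape as the question's; sig elided.
sas_url = ("https://xxxxxx.file.core.windows.net/folder_name/yyyyy.xlsx"
           "?sv=2019-02-02&sp=r&se=2019-12-30T16:04:08Z&sig=...")

resp = requests.get(sas_url)  # plain GET: no If-Modified-Since header is sent
resp.raise_for_status()
with open("yyyyy.xlsx", "wb") as f:
    f.write(resp.content)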
I have an azcopy.exe command that I copied out of Microsoft Azure Storage Explorer in order to use it in a script. The command works perfectly, but I want to understand the query string parameters that are being used.
?sv=2020-04-08&se=2021-10-29T15:07:01Z&sr=c&sp=rwl
I understand that sv is the signed version, which I found in Versioning for the Azure Storage services, and that section references the other parameters, but I haven't been able to locate the actual docs.
I suspect that I'm close, but I need some help.
You can find information about SAS querystring parameters here: https://learn.microsoft.com/en-us/rest/api/storageservices/create-service-sas.
To specifically answer your question:
sv: This is the storage REST API version.
se: This is the date/time value in UTC when your SAS URL will expire.
sr: This is the signed resource type. In your context, sr=c means the SAS token is scoped to a blob container.
sp: These are the permissions included in your SAS token. Currently your signed permissions include (r)ead, (w)rite and (l)ist permissions.
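If you want to decode such a query string yourself, the Python standard library does it in a couple of lines. A quick sketch using the exact query string from the question:

from urllib.parse import parse_qs

qs = "sv=2020-04-08&se=2021-10-29T15:07:01Z&sr=c&sp=rwl"
for name, values in parse_qs(qs).items():
    print(name, "=", values[0])
# sv = 2020-04-08            REST API version
# se = 2021-10-29T15:07:01Z  UTC expiry of the SAS
# sr = c                     signed resource: container
# sp = rwl                   permissions: read, write, list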
Is it possible to update the expiry time of a shared access policy on a blob container through the Azure portal?
Is it possible to set an indefinite expiry time for a shared access policy? I have an application with an HTML editor where users upload images to Azure Blob Storage, so they can upload images and view them through the generated URI. I used a shared access policy with READ permission so that users can see the images inside the HTML. Is it good practice to set an indefinite expiry time on a shared access policy with READ permission?
I don't want my images to be public; I just want authenticated users to be able to see them. I don't understand the advantage of using SAS in my case, as any user who has the SAS can see the image (for example, a friend who receives an image URI with the SAS). So, is there any advantage? Can anyone explain this to me?
Is it possible to update the expiry time of a shared access policy on a blob container through the Azure portal?
As of today, it is not possible to manage shared access policies on the blob container using Azure Portal. You would need to use some other tools or write code to do so.
Is it possible to set indefinite expiry time for shared access policy?
You can create a shared access policy without specifying an expiry time. What that means is that you would need to specify an expiry time when creating a shared access signature. What you could do (though it is not recommended - more on this below) is use something like 31st December 9999 as expiry date for shared access policy.
Is it good practice to set an indefinite expiry time on a shared access policy with READ permission?
What is recommended is that you set the expiry time to an appropriate value based on your business needs. Generally you should keep the expiry time in a shared access signature small, so that the SAS cannot be misused: you're the one paying for the data in your storage account and for the outbound bandwidth.
I don't understand the advantage of using SAS in my case, as any user who has the SAS can see the image (for example, a friend who receives an image URI with the SAS). So, is there any advantage? Can anyone explain this to me?
The biggest advantage of SAS is that you can share resources in your storage account without sharing your storage access keys. Furthermore, you can restrict access to those resources by specifying appropriate permissions and expiry. It is true that anyone with the SAS URL can access the resource (in case your user decides to share the SAS URL with someone else), so it is not a 100% fool-proof solution, but there are ways to mitigate this: you can create short-lived SAS URLs, and you can restrict the usage of a SAS URL to certain IP addresses only (IP ACLing).
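To illustrate both mitigations, here is a sketch using the azure-storage-blob Python SDK; the account, container, blob name, key, and IP address are all hypothetical:

from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

sas = generate_blob_sas(
    account_name="mystorageaccount",     # hypothetical names throughout
    container_name="images",
    blob_name="photo.png",
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(minutes=15),  # short-lived
    ip="203.0.113.50",  # IP ACLing: the SAS is only honoured from this address
)
url = "https://mystorageaccount.blob.core.windows.net/images/photo.png?" + sas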
You may find this link helpful regarding some of the best practices around shared access signature: https://azure.microsoft.com/en-in/documentation/articles/storage-dotnet-shared-access-signature-part-1/#best-practices-for-using-shared-access-signatures.
Firstly, if you want to change the expiry time of an ad-hoc Shared Access Signature (SAS), you need to regenerate it (re-sign it) and redistribute this to all users of the SAS. This doesn't affect the validity of the existing SAS you're currently using.
If you want to revoke a SAS, you need to regenerate the Access Key for the Storage Account that signed the SAS in the first place (which also revokes all other SASs that it has signed). If you've used the Access Key elsewhere, you'll need to update those references as well.
A good practice is to use an Access Policy rather than ad-hoc SASs, as this provides a central point of control for:
Start Time
End Time
Access permissions (Read, Write, etc.)
SASs linked to an Access Policy can be revoked by changing the policy's expiry time to a moment in the past. While you can delete the Access Policy to achieve the same effect, the old SASs will become valid again if you re-create the Access Policy with the same name.
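As a concrete illustration, here is a sketch with the azure-storage-blob Python SDK (the connection string, container, and policy name are hypothetical) that creates a stored access policy and later revokes its SASs by moving the expiry into the past:

from datetime import datetime, timedelta
from azure.storage.blob import ContainerClient, AccessPolicy, ContainerSasPermissions

container = ContainerClient.from_connection_string("<connection-string>", "images")

# Create (or update) a stored access policy named "read-only".
policy = AccessPolicy(permission=ContainerSasPermissions(read=True),
                      expiry=datetime.utcnow() + timedelta(days=30))
container.set_container_access_policy(signed_identifiers={"read-only": policy})

# Later: revoke every SAS linked to the policy by expiring it,
# rather than deleting it (deletion + re-creation would revive old SASs).
expired = AccessPolicy(permission=ContainerSasPermissions(read=True),
                       expiry=datetime.utcnow() - timedelta(days=1))
container.set_container_access_policy(signed_identifiers={"read-only": expired})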
It's not possible to set an indefinite expiry time for a SAS, and neither would it be good practice. Think of SASs as being like shortcuts in a file system that bypass file permissions: a shortcut can't be revoked per se or modified after you've created it, and anyone anywhere in the world who obtains a copy receives the same access.
For example, anyone with access to your application (or to the network traffic, if you are using HTTP) could keep a copy of the SAS URL and access any of the resources in that container, or distribute the URL and allow other unauthorised users to do so.
In your case, without SASs you would have to serve the images from a web server that requires authentication (and maybe authorisation) before responding to requests. That introduces overhead, costs, and potential complexity, which SASs were partly designed to avoid.
As you require authentication/authorisation in your application anyway, I suggest you set up a service that generates SASs dynamically (programmatically) with a reasonable expiry time, and refer your users to those URLs.
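A sketch of what such a service might hand back, assuming the hypothetical "read-only" stored access policy from the earlier snippet, so the SAS inherits its expiry and permissions and can be revoked centrally:

from azure.storage.blob import generate_blob_sas

sas = generate_blob_sas(
    account_name="mystorageaccount",   # hypothetical
    container_name="images",
    blob_name="photo.png",
    account_key="<account-key>",
    policy_id="read-only",  # permissions and expiry come from the stored policy
)
url = "https://mystorageaccount.blob.core.windows.net/images/photo.png?" + sas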
Reference: Using Shared Access Signatures (SAS)
Edit: Microsoft Azure Storage Explorer is really useful for managing Access Policies and generating SASs against them.
You can set a very long expiry time, but this is not recommended by Microsoft, and no security expert would ever recommend it, as it defeats the idea of SAS.
https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-1/
As for the second part of your question, I am not sure how you allow users to upload the images: do they upload directly using a SAS at the container level, or do they post to some code in your application, which then connects to Azure Storage and uploads the files?
If you have a back-end service, then you can eliminate SAS and make your service work as a proxy: only your service reads and writes to Azure Storage, using the Storage Account Access Keys, and clients read and write the images they need through your service. In this case the clients have no direct access to Azure Storage.
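A minimal sketch of that proxy idea in Python, assuming a hypothetical connection string; the client talks only to your service, which is the sole holder of the account key:

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")

def get_image(container: str, name: str) -> bytes:
    # Called after your own authentication check has passed; the client
    # never receives a storage URL, SAS, or key.
    blob = service.get_blob_client(container=container, blob=name)
    return blob.download_blob().readall()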
I am building an ASP.NET Azure Web Application (Web Role) which controls access to files stored in Azure Blob Storage.
On a GET request, my HttpHandler authenticates the user and creates a Shared Access Signature for that specific file and user with a short time frame (say, 30 minutes). The client is a media player which checks for updated media files using HEAD and, if the Last-Modified header differs, makes a GET request. I therefore do not want to create a SAS URL for the HEAD request, but rather return Last-Modified, ETag, and Content-Length headers in response to it. Is this bad practice? If the file is up to date, there is no need to download it again, and thus no need to create a SAS URL.
Example request:
GET /testblob.zip
Host: myblobapp.azurewebsites.net
Authorization: Zm9v:YmFy
Response:
HTTP/1.1 303 See other
Location: https://myblobstorage.blob.core.windows.net/blobcontainer/testblob.zip?SHARED_ACCESS_SIGNATURE_DATA
Any thoughts?
Is there a specific reason to force the client to make a HEAD request first? It can instead authenticate using your service, get a SAS token, make a GET request using If-Modified-Since header against Azure Storage, and download the blob only if it was modified since the last download. Please see Specifying Conditional Headers for Blob Service Operations for more information on conditional headers that Azure Storage Blob service supports.
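A sketch of that flow with Python's requests library, assuming sas_url was obtained from your service and last_download is the timestamp of the cached copy (RFC 1123 format):

import requests

sas_url = "https://myblobstorage.blob.core.windows.net/blobcontainer/testblob.zip?<sas>"
last_download = "Wed, 01 Jan 2020 00:00:00 GMT"

resp = requests.get(sas_url, headers={"If-Modified-Since": last_download})
if resp.status_code == 304:
    pass  # cached copy is current; nothing to download
else:
    resp.raise_for_status()
    with open("testblob.zip", "wb") as f:
        f.write(resp.content)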
Here's what I am trying to accomplish:
We have files stored in Azure blobs and need to secure access to them so that only our installed Windows 8 Store App can download these blobs. My first thought was to use some sort of certificate: when the app is installed, it is installed with a certificate that it then passes in the header of the request to the server to obtain the blob.
I read about Shared Access Signatures and it kind of makes sense to me. It seems like an API that the client could use to obtain a temporary token granting access to the blobs. Great. How do I restrict access to the API for obtaining SAS tokens to only our installed client apps?
Thank you.
Using SAS URLs is the proper way to do this; this way you can grant access to a specific resource for a limited amount of time (15 minutes, for example) and with limited permissions (read-only, for example).
Since the app is installed on the user's machine, you must assume the user can see whatever the app is doing, so there is no absolute way to restrict your API to only your app. But you can make it a little more difficult to replicate by using an SSL (HTTPS) endpoint and providing some "secret key" that only your app knows.
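For example, here is a sketch of such an endpoint using Flask and the azure-storage-blob SDK; the header name, secret, and storage names are all hypothetical, and the secret only raises the bar rather than providing real security:

from datetime import datetime, timedelta
from flask import Flask, abort, request
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

app = Flask(__name__)
APP_SECRET = "<secret-baked-into-the-app>"  # hypothetical shared secret

@app.route("/sas/<name>")
def issue_sas(name):
    # Reject callers that do not present the app's secret; serve over HTTPS
    # only, since the header is visible in plain HTTP traffic.
    if request.headers.get("X-App-Key") != APP_SECRET:  # hypothetical header
        abort(403)
    sas = generate_blob_sas(
        account_name="mystorageaccount",  # hypothetical
        container_name="files",
        blob_name=name,
        account_key="<account-key>",
        permission=BlobSasPermissions(read=True),
        expiry=datetime.utcnow() + timedelta(minutes=15),
    )
    return f"https://mystorageaccount.blob.core.windows.net/files/{name}?{sas}"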
I have a system providing access to private blobs based on a user's login credentials. If they have permission, they will be given a SAS blob URL to view a document or image stored in Azure.
I want to be able to resize the images, but still maintain the integrity of the short window of access via the SAS.
What is the best approach with ImageResizer? Should I use the AzureReader2 plugin, or should I just use the RemoteReader with the SAS URL?
Thanks
ImageResizer will disk-cache the resized result images indefinitely, regardless of restrictions on the source file.
You need to implement your authorization logic within the application, using Authorize_Request or Config.Current.Pipeline.AuthorizeImage.
There's no way to pass through the storage authorization unless you disable all caching.