ETags are required to be quoted, but Azure CDN generates ETags that are not quoted. Has anyone seen, or would you expect, problems with intermediate caches because of this?
How do you distribute your CDN content? Are you using a Windows Azure Web Role or just the CDN configuration? It would be interesting to see how you verified that the ETag is not quoted.
I just checked both sources, and in the HTTP headers they both show the ETag within quotes, so would you please check the HTTP headers and verify how it is listed?
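To verify, a minimal sketch using Python's requests library (the endpoint URL is a placeholder, not a real CDN endpoint) that fetches a file and checks whether the ETag value is quoted:

```python
import requests

# Placeholder URL; substitute your own CDN endpoint and asset path.
url = "https://example.azureedge.net/some/file.js"

resp = requests.get(url)
etag = resp.headers.get("ETag")

print("ETag:", etag)
if etag is not None:
    # Per RFC 7232, an entity-tag must be a quoted string,
    # optionally prefixed with W/ for weak validators.
    quoted = etag.startswith('"') or etag.startswith('W/"')
    print("Properly quoted:", quoted)
```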
I want to add caching to the application we have exposed over APIM. My preferred approach would be to add Cache-Control headers to the responses from the backend service. Can I configure Azure APIM to respect Cache-Control headers that are part of the response from the underlying service? All the documentation I can find covers configuring caching policies and rules in APIM itself, whereas I just want a simple rule that says "respect the headers from the underlying service".
There is no built-in policy for exactly that, but you can build such a mechanism yourself using the policies that are available. There is a documented example of how to control the API Management response cache duration using the Cache-Control headers sent by the backend service.
Starting from that example, you can extend it to handle other directives.
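The policy itself is written in APIM's XML policy language with C# expressions, but the core step is simply parsing max-age out of the backend's Cache-Control header to use as the cache duration. A rough sketch of that parsing logic in Python (the directive handling and default value are illustrative, not part of any documented policy):

```python
import re

def cache_duration_from_header(cache_control: str, default: int = 0) -> int:
    """Return the max-age value (in seconds) from a Cache-Control
    header, or `default` if there is no max-age directive or the
    header explicitly forbids caching."""
    if "no-store" in cache_control or "no-cache" in cache_control:
        return 0
    match = re.search(r"max-age=(\d+)", cache_control)
    return int(match.group(1)) if match else default

# Example backend response headers:
print(cache_duration_from_header("public, max-age=3600"))  # 3600
print(cache_duration_from_header("no-store"))              # 0
```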
I am using the Azure 12-month trial account and hosting an Excel file in a storage account through the Azure Portal.
I generate a Shared Access Signature with an end date three months from today and append the generated SAS token to the file's URL.
I am able to access the file using this process. However, the URL quickly stops working after a few invocations. The issue was most recently observed after overwriting the file with an updated version in the storage account, followed by regenerating the SAS token.
The URL with the SAS token appended looks like:
https://xxxxxx.file.core.windows.net/folder_name/yyyyy.xlsx?sv=2019-02-02&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-12-30T16:04:08Z&st=2019-10-22T08:04:08Z&spr=https,http&sig=xxxxx%yyyyy%zzzz
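For reference, the portal steps above correspond roughly to this sketch using the azure-storage-file-share SDK (the account name, key, and file path are placeholders):

```python
from datetime import datetime, timedelta
from azure.storage.fileshare import (
    AccountSasPermissions,
    ResourceTypes,
    generate_account_sas,
)

# Placeholders: substitute your real account name and account key.
sas_token = generate_account_sas(
    account_name="xxxxxx",
    account_key="<storage-account-key>",
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, write=True, list=True),
    expiry=datetime.utcnow() + timedelta(days=90),  # roughly three months out
)

# Append the token to the file URL, as described above.
url = f"https://xxxxxx.file.core.windows.net/folder_name/yyyyy.xlsx?{sas_token}"
print(url)
```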
Here is the error I see:
<Error>
<Code>ConditionHeadersNotSupported</Code>
<Message>
Condition headers are not supported. RequestId:<XXXXX> Time:<YYYYYY>
</Message>
</Error>
The error is random; the URL works intermittently.
Has anyone observed this issue, and what could be a fix?
I can reproduce your error.
This does not mean that the SAS token has expired. If you run the same test against Azure Blob Storage, everything works fine. The error comes from the browser you are using: the browser adds a conditional If-* header to the request.
When no conditional header is present, the URL works normally.
That is because File Storage does not support conditional headers, and a request that carries one will not be accepted by File Storage.
The official documentation describes which headers File Storage supports.
So this is not the fault of the SAS token; it is the browser's behavior. Unless you have a specific reason to use File Storage, I suggest you use Azure Blob Storage instead, which will not cause this problem.
The problem you're having is due to Microsoft's Windows-Azure-File/1.0 Microsoft-HTTPAPI/2.0 setup.
The service hasn't been designed primarily for browsers to access files, and is limited in what headers it supports.
Browsers will typically look at the locally cached copies of files before attempting to download a new copy. They do this by examining the local file attributes and asking the web server to give them the file only "IF" it has been modified after the date of the cached copy, through the use of the If-Modified-Since header, just as BowmanZhu said.
Instead of ignoring the header, the server throws an error. To work around this, you need to perform a hard reload of the page. In Chrome, you can do this by pressing CTRL + SHIFT + R.
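You can confirm this behavior outside the browser. A sketch using Python's requests (the URL is a placeholder for your File Storage URL with the SAS token appended): the plain GET succeeds, while the same request with the conditional header a browser would add fails with ConditionHeadersNotSupported.

```python
import requests

# Placeholder: substitute your File Storage URL with the SAS token appended.
url = "https://xxxxxx.file.core.windows.net/folder_name/yyyyy.xlsx?<sas-token>"

# Plain request: no conditional headers, so it succeeds (HTTP 200).
plain = requests.get(url)
print(plain.status_code)

# Same request with the conditional header a browser sends when it holds
# a cached copy: File Storage rejects it (HTTP 400, ConditionHeadersNotSupported).
conditional = requests.get(
    url, headers={"If-Modified-Since": "Tue, 22 Oct 2019 08:04:08 GMT"}
)
print(conditional.status_code)
print(conditional.text)
```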
Is CORS supported in the Standard edition of Azure CDN, or is it only available in the Premium tier? I am looking at the "wildcard or single origin" scenario.
This is what they mention at the link below:
CORS on Azure CDN will work automatically with no additional configuration when the Access-Control-Allow-Origin header is set to wildcard (*) or a single origin.
https://learn.microsoft.com/en-us/azure/cdn/cdn-cors#wildcard-or-single-origin-scenarios
@juunas, as noted in the comment, the document states that the way standard Azure CDN allows for multiple origins is to use query string caching.
Enable the query string setting for the CDN endpoint and then use a unique query string for requests from each allowed domain. Doing so will result in the CDN caching a separate object for each unique query string. This approach is not ideal, however, as it will result in multiple copies of the same file cached on the CDN.
So, the best way is to use Azure CDN Premium from Verizon, which exposes more advanced functionality, including a rules engine. With it, you need to create a rule that checks the Origin header on the request. If it is a valid origin, the rule sets the Access-Control-Allow-Origin header on the response with the origin provided; if it is not, the rule omits the header and the browser will reject the request.
Just set the Access-Control-Allow-Origin header on your origin server. Standard Azure CDN will respect your CORS headers. It's working just fine for me, and I'm glad I tried setting the header on the origin server instead of upgrading to the Premium CDN.
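A minimal sketch of what "set it on the origin" means, using Flask as a stand-in for whatever actually serves your origin content; the standard CDN tier then simply caches and forwards the header along with the response.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/assets/<path:name>")
def asset(name):
    # Serve your content however you normally would; the key point is
    # attaching the CORS header at the origin. A wildcard (or a single
    # fixed origin) is the case the standard tier handles automatically.
    body = f"contents of {name}"
    return body, 200, {"Access-Control-Allow-Origin": "*"}

if __name__ == "__main__":
    app.run()
```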
I'm trying to integrate my existing SCIM 2.0 API with OneLogin, but during my first test I got an Internal Provisioning Error. According to my logs, only one request was made: the one checking for the existence of the user (GET /Users?filter=userName eq foo@bar.com).
After several attempts, I noticed that user provisioning worked when that initial request made by OneLogin was answered with Content-Type: application/json (as in SCIM 1.0) instead of the content type defined in the SCIM 2.0 specification, Content-Type: application/scim+json.
Is there any way I can tell OneLogin that my API works with SCIM 2.0 and that the SCIM content type should be used? If not, should I assume the plain JSON content type needs to be sent in all my endpoints' responses?
Sounds like a bug on the OneLogin side of things (one I think folks have run into before).
Mind you, I think you'll be fine just sending Content-Type: application/json to the various SCIM consumers out there, as just about everyone (except, apparently, OneLogin) ignores this header.
If you need further debugging help, feel free to shoot an email to Devsupport@onelogin.com. From there we can get into more specifics around setting up and testing your application (especially if you want to have it as an official application in our catalog).
Cheers
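If it helps, here is a sketch of the pragmatic workaround described above (Flask, with a single hard-coded user and trivial filter matching, both purely illustrative): answer the /Users filter query, but respond with plain application/json rather than application/scim+json.

```python
import json
from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/Users")
def list_users():
    # e.g. filter=userName eq "foo@bar.com" (filter parsing kept trivial here)
    flt = request.args.get("filter", "")
    results = []
    if "foo@bar.com" in flt:
        results.append({
            "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
            "id": "123",
            "userName": "foo@bar.com",
        })
    body = {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
        "totalResults": len(results),
        "Resources": results,
    }
    # The SCIM 2.0 spec says application/scim+json, but plain JSON is
    # what OneLogin accepted in the case described above.
    return Response(json.dumps(body), mimetype="application/json")

if __name__ == "__main__":
    app.run()
```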
I am building an ASP.NET Azure Web Application (Web Role) which controls access to files stored in Azure Blob Storage.
On a GET request, my HttpHandler authenticates the user and creates a Shared Access Signature for that specific file and user, with a short lifetime (say, 30 minutes). The client is a media player that checks for updated media files using HEAD requests and only issues a GET if the Last-Modified header differs. Therefore, in response to a HEAD request, I do not want to create a SAS URL, but rather return just the Last-Modified, ETag, and Content-Length headers. Is this bad practice? If the file is up to date, there is no need to download it again, and thus no need to create a SAS URL.
Example request:
GET /testblob.zip
Host: myblobapp.azurewebsites.net
Authorization: Zm9v:YmFy
Response:
HTTP/1.1 303 See Other
Location: https://myblobstorage.blob.core.windows.net/blobcontainer/testblob.zip?SHARED_ACCESS_SIGNATURE_DATA
Any thoughts?
Is there a specific reason to force the client to make a HEAD request first? It could instead authenticate with your service, get a SAS token, make a GET request with an If-Modified-Since header directly against Azure Storage, and download the blob only if it was modified since the last download. Please see Specifying Conditional Headers for Blob Service Operations for more information on the conditional headers that the Azure Storage Blob service supports.
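Put differently, the suggested flow looks roughly like this (the SAS URL and timestamp are placeholders; plain requests is used here rather than the Azure SDK):

```python
import requests

# Placeholder: SAS URL obtained from your authentication service.
blob_url = "https://myblobstorage.blob.core.windows.net/blobcontainer/testblob.zip?<sas>"

# Timestamp recorded from the Last-Modified header of the previous download.
last_download = "Wed, 01 Jan 2020 00:00:00 GMT"

resp = requests.get(blob_url, headers={"If-Modified-Since": last_download})

if resp.status_code == 304:
    # Blob unchanged since the last download: nothing to do.
    print("Cached copy is still current.")
else:
    resp.raise_for_status()
    with open("testblob.zip", "wb") as f:
        f.write(resp.content)
    print("Downloaded new copy, Last-Modified:", resp.headers.get("Last-Modified"))
```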