I have a custom domain pointing to a static website using Azure CDN. After a deployment, my website was no longer showing up. This was because an old version of index.html was being served from the CDN itself.
I fixed it by purging the CDN manually, but this is not ideal because I frequently update the files for this website via a build process.
What is the best practice to avoid this outcome? Do I need to add a purge to the build process itself or is there a better way?
You need to set the Cache-Control header's max-age directive to control caching; Azure CDN honors the header's duration value.
Add a Cache-Control header for index.html:
public, no-cache
All Cache-Control directives are supported by Azure CDN Standard/Premium from Verizon and Azure CDN Standard from Microsoft.
In the Azure Portal => Your Static Web App => Configuration => Application Settings, add the settings below:
Set the WEBSITE_LOCAL_CACHE_OPTION to Never and WEBSITE_DYNAMIC_CACHE to 0
For more information, refer to Manage expiration of web content in Azure CDN.
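As a sketch of the header step above, assuming an IIS-based origin (a web role or App Service; adjust to your hosting setup), a web.config fragment can attach the header to index.html:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Scope the header to index.html only, so other assets can still cache normally -->
  <location path="index.html">
    <system.webServer>
      <httpProtocol>
        <customHeaders>
          <!-- no-cache forces the CDN and browsers to revalidate before reuse -->
          <add name="Cache-Control" value="public, no-cache" />
        </customHeaders>
      </httpProtocol>
    </system.webServer>
  </location>
</configuration>
```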
I've configured Azure CDN (standard Microsoft profile/tier) over an Azure storage account to serve my static frontend website. I've added a custom domain to the Azure CDN endpoint, let's call this www.example.com. Now, let's assume the storage account is suddenly unavailable due to an outage in that region.
Questions
1. If the user hits www.example.com, would they be able to view the frontend website?
2. If the CDN endpoint caches the website, for how long would it serve the frontend website while the underlying storage account is down?
P.S.
I've read this answer about setting up Azure Front Door, but I'm trying not to modify the setup unless absolutely required.
If the user hits www.example.com, would they be able to view the frontend website?
Yes, users should be able to view the website because the content is cached by CDN. From this link:
An object that's already cached in Azure CDN remains cached until the time-to-live period for the object expires or until the endpoint is purged. When the time-to-live period expires, Azure CDN determines whether the CDN endpoint is still valid and the object is still anonymously accessible. If they are not, the object will no longer be cached.
If the CDN endpoint caches the website, for how long would it serve the frontend website while the underlying storage account is down?
That depends on how you have configured the cache settings for the CDN. As long as the content is cached, the CDN will not hit the origin to get new content. To learn more about caching and expiration, you may find this link useful: https://learn.microsoft.com/en-us/azure/cdn/cdn-manage-expiration-of-blob-content.
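As a rough way to reason about that window: the TTL is the max-age value (in seconds) from the Cache-Control header the origin sent. A small sketch (this helper is my own, not part of any Azure SDK):

```python
def max_age_seconds(cache_control):
    """Return the max-age directive (seconds) from a Cache-Control header, or None."""
    for directive in cache_control.split(","):
        name, _, value = directive.strip().partition("=")
        if name.lower() == "max-age" and value.isdigit():
            return int(value)
    return None

# A 7-day TTL means the CDN can keep serving the cached copy for up to
# 7 days after it was last fetched, even while the origin is down.
print(max_age_seconds("public, max-age=604800"))  # 604800
```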
I am trying to configure Azure CDN caching (Microsoft Standard), but something does not work for me.
I pointed the CDN origin to https://yaplex.com, and the CDN endpoint is configured as https://www.yaplex.com (the whole website under www is served from the CDN).
When I request the www website, I get it back from the CDN, but it looks like the CDN does not do any caching.
For example, if I make a change to the content and refresh the www website, it returns a version with the content updated.
I have set up caching rules that instruct the CDN to cache everything for 5 days, but it looks like either my rule is wrong or I am doing something else wrong.
Can you please help to configure CDN caching for Azure or at least point me in the right direction?
Update: what I am trying to achieve is to serve the www version from the CDN cache, so even if the website is down, users can still open it from the CDN.
I have a CDN endpoint with its origin set to a production web app. When I swap staging to production, since it is a VIP swap, the CDN endpoint now points to the staging VIP instead of the production VIP. How do I handle this scenario?
Generally speaking, CDNs don't care about the ORIGIN changing; most caching rules leverage TTL along with Last-Modified or ETags. This means that the CDN will eventually be consistent with your ORIGIN, depending on the cache headers you send to the CDN in responses.
If you want the content in the CDN to always match what is in your PROD slot, you need to add a version hash to your CDN links OR call the CDN API and purge the CDN assets after deployment to ensure the latest content is fetched again from the ORIGIN to the PoPs.
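The version-hash option can be sketched as a small build-step helper (the function and URL here are illustrative, not an Azure API): hashing each asset's bytes and appending the digest to the URL makes every content change a brand-new, never-before-cached object.

```python
import hashlib

def versioned_url(base_url, content):
    # Hash the file's bytes; any change to the content changes the URL,
    # so the CDN fetches the new version instead of serving a stale copy.
    digest = hashlib.md5(content).hexdigest()[:8]
    return f"{base_url}?v={digest}"

# Hypothetical endpoint and asset, for illustration only.
print(versioned_url("https://mycdn.azureedge.net/js/app.js", b"console.log('v1');"))
```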
I have a web site hosted on Azure (http://mike-ward.azurewebsites.net/). I set up an Azure CDN from the Azure portal that points to (references?) my web site. According to the articles and docs I've read, content is only served from the /cdn/ folder (http://az667460.vo.msecnd.net/cdn/images/favicon.ico for example). However, it also seems to serve the dynamic web site stuff by simply referencing the root (http://az667460.vo.msecnd.net/).
Has the policy changed with regard to serving content from other than the /cdn/ folder? If not, what's happening here?
All website content is now available through the CDN; see this new example:
http://azure.microsoft.com/en-us/documentation/articles/cdn-websites-with-cdn/
In your site you control the URL, so if a resource is referenced with the CDN URL (http://az667460.vo.msecnd.net), it will be served from the CDN.
The special /cdn folder isn't required anymore.
Has anyone successfully configured Azure CDN for HTTP compression using their hosted web role? We are having trouble compressing HTTP content at the Azure edge servers. The CDN is only caching the uncompressed version of the content.
If we hit our resource link (webresource.axd) from a non-Azure approach it compresses via gzip (using the xxxx.cloudapp.net/cdn/webresource.axd) as expected. However, as soon as we point our resource link to Azure CDN (xxxx.vo.msecnd.net), the content is served up uncompressed, despite the browser telling the Azure CDN it accepts gzip.
I posted this same issue to Azure Forums, but nobody has responded as of yet.
While troubleshooting the problem, it appears that the Azure CDN is stripping out the Accept-Encoding HTTP header. Just curious if others have had this same issue.
Azure CDN Best Practices states...
How does the Windows Azure CDN work with compressed content?
The Windows Azure CDN will not modify (or add) compression to your objects. The Windows Azure CDN respects whatever compression is provided by the origin based on the "Accept-Encoding" header. As of 1.4, Azure Storage does not support compression. If you are using hosted-service object delivery, you can configure IIS to return compressed objects.
What we are seeing is that the CDN is not respecting the origin Accept-Encoding, it's being stripped away.
It was discovered through trial and error that Azure CDN has a current limitation: it won't pass the Accept-Encoding HTTP header unless it finds a query string parameter containing a compressible filename type (.js, .css) or you are requesting a file by its original name (jquery.js, site.css, etc.).
What this means is that if you are using an AXD resource handler (WebResource.axd, etc.), HTTP compression will not be performed. The Azure CDN will only pass Accept-Encoding if you append a query string parameter with a .css or .js extension.
We are using a custom AXD resource handler, so this was easy for us to implement. We just applied &group=core.js and &group=core.css for our combined minified resources and the compression worked as expected. It's unfortunate this doesn't exist in the current Azure CDN documentation.
In short, we had to transform our URIs from this:
https://xxxx.vo.msecnd.net/resourceManager.axd?token=HL80vX5hf3lIAAA
to this:
https://xxxx.vo.msecnd.net/resourceManager.axd?token=HL80vX5hf3lIAAA&group=core.js
Once the Azure CDN sees the .js in the querystring, it will return the compressed version of the resource.
Hope this helps someone else using web resources (AXDs) served up via the Azure CDN.
The CDN picks up compression from the origin, and Windows Azure Storage does not support compression directly, so CDN content served from an Azure Storage origin will not be compressed. To serve compressed content, you need to host it in a hosted service, such as a web role, as the origin. Because that type of origin is IIS-based, compression is supported.
Windows Azure CDN supports compressed content over HTTP 1.0, and most of the problems I have seen are related to an HTTP 1.0 vs. HTTP 1.1 issue. When you request your CDN object directly from your web role via HTTP 1.0 (using the wget command), you should get compressed content if all is correct. If you get non-compressed content, then you know where the problem is. Please be sure you've configured your application and IIS itself to deliver compressed content to HTTP 1.0 clients as well as HTTP 1.1 clients.
I have written a detailed blog entry on adding HTTP compression with the Azure CDN through a web role:
http://blogs.msdn.com/b/avkashchauhan/archive/2012/03/05/enableing-gzip-compression-with-windows-azure-cdn-through-web-role.aspx
These answers about adding .css/.js extensions don't appear to apply anymore with the recently updated (Q1 2014) Azure CDN service back-end.
I ran an isolated test with a new Cloud Service Web Role project today and a new CDN instance.
I placed a /cdn/style-1.css file in my web role (single instance) and accessed it via the CDN. It was not compressed. Accessing it directly WAS compressed.
The fix for my Web Role serving gzip'd content was to ensure the IIS configuration option noCompressionForProxies is "false" (default is true).
This made the Azure CDN then send me down gzip'd content.
Appending css/js extensions made no difference.
Note that when testing this change, it is a host configuration change so you must restart IIS via IIS Manager (not iisreset) for it to take effect. Lastly, to test the change, be sure to create a new file (eg, style-2.css) and request that via the CDN so it will fetch it from the origin server again.
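A sketch of the configuration change described above: noCompressionForProxies lives in the <httpCompression> section, which is typically locked at the server level, so it generally belongs in applicationHost.config rather than a site-level web.config (verify the attributes against your IIS version's documentation):

```xml
<!-- applicationHost.config, inside <system.webServer> -->
<httpCompression noCompressionForProxies="false" noCompressionForHttp10="false">
  <!-- keep the existing scheme / mime-type child entries as they are -->
</httpCompression>
```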