Purge single file cache from Azure CDN using query string - azure

I have an azure storage account + cdn + endpoint.
The content of a particular file rarely changes, but when it does we'd like a non-technical team to be able to purge the cache for that file without using the API or PowerShell.
Can we purge the azure cdn cache for a single file using query string and cache rules engine?
Something like:
https://our-endpoint.azureedge.net/file.html?clearcache=true
Or alternatively, could we use the rules engine to set the TTL for this file only to 1 second, and then afterwards set the TTL back to 604800 seconds (1 week)?
https://our-endpoint.azureedge.net/file.html?setTTL=1
https://our-endpoint.azureedge.net/file.html?setTTL=604800

This feature is not available yet, and no ETA has been shared. I would recommend posting a feature request on the Azure feedback forum so it can be considered for the future.

There is a Purge button on the Azure CDN endpoint blade:
once you click it, enter /file.html and purge.
Alternatively, you can set the query string caching behavior to "Cache every unique URL" in the caching rules.
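With "Cache every unique URL" enabled, every distinct query string is cached as a separate object, so bumping a version parameter effectively bypasses the old cached copy. A minimal sketch (the `v` parameter name and the helper are illustrative assumptions, not an Azure feature):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def versioned_url(url: str, version: str) -> str:
    """Append a cache-busting version parameter to a URL.

    With the CDN's query string caching set to "Cache every unique URL",
    each distinct version value is treated as new content, so bumping the
    version makes edge servers fetch a fresh copy from the origin.
    """
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["v"] = version  # 'v' is an arbitrary parameter name
    return urlunparse(parts._replace(query=urlencode(query)))

print(versioned_url("https://our-endpoint.azureedge.net/file.html", "2"))
# https://our-endpoint.azureedge.net/file.html?v=2
```

A non-technical team would only need to bump the version in the link, rather than run a purge at all.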

Cache picsum images with azure CDN

I created an ad hoc application to test the Azure cache features while studying for the AZ-204.
It is a simple Node application in App Service that renders a large image using Lorem Picsum.
<img src="https://picsum.photos/2000" style="width: 100%;">
I created a Standard Azure CDN Profile and added the Endpoint.
Then I set the global caching rule to Override with an Always cache expiration of 30 minutes.
Expected Result:
From that moment I expected my application to cache the image, which means that when I reload the page through the CDN's URL I should get the same image as before for at least 30 minutes.
Actual Result:
But the actual result is that when I load the page it always loads a different image, just as it would without a CDN.
I also tried creating a new rule overriding 30 minutes for the image/jpeg content type, but it didn't work.
How can I return a cached image from Lorem Picsum using the Azure Front Door Standard CDN cache?
I tried to reproduce the same in my environment. Azure CDN caching rules apply the cache expiration duration to content served from your own origin, such as storage blob files.
I created a Front Door CDN profile endpoint and added a caching behavior override like below:
This caching rule applies to a file in your storage container like below:
In the storage container -> files, each file can be accessed through its URL with the Override or Set if missing caching behavior.
https://lorem.azureedge.net/container1/lorem pic 1.jpg
Set the TTL to 86400 seconds (1 day), i.e. Cache-Control: public,max-age=86400.
Changing the cache duration for images fetched on the fly from Lorem Picsum is not possible this way.
If you check the Front Door designer, these are the different caching behaviors that can be implemented at the edge.
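The Cache-Control value above can be sanity-checked with a small helper. This parser is a standalone illustration of how max-age works, not part of any Azure SDK:

```python
from typing import Optional

def max_age_seconds(cache_control: str) -> Optional[int]:
    """Extract the max-age directive (in seconds) from a Cache-Control value."""
    for directive in cache_control.split(","):
        name, _, value = directive.strip().partition("=")
        if name.lower() == "max-age" and value.isdigit():
            return int(value)
    return None  # no max-age directive present

# 86400 seconds is the 1-day TTL set by the caching rule above
print(max_age_seconds("public,max-age=86400"))          # 86400
print(max_age_seconds("public,max-age=86400") // 3600)  # 24 (hours)
```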
References:
Azure Storage, Azure Front Door, Azure CDN
Integrate Caching And CDNs Within Solutions by Thiago Vivas

Azure CDN purge not working due to local browser cache with long TTL / Cache - how to force refresh global rule?

I encounter the problem reported in X-cache hit after Azure CDN Purge: if I set a long Always cache expiration / TTL of, for example, 1 year, the local browser will not fetch new content from the Azure CDN even after a manual CDN purge, because the browser just uses its local cache.
In our web setup we would like the Azure CDN to cache everything until our next deployment, which is why we set a long global Always cache expiration of 1 year. We assumed that a manual CDN purge would force the browser to refresh, but it didn't, because the 1-year TTL is locked into the local browser cache. Now users still get old apps after our deployments.
Can this be solved with additional directives?
When we set "Last-Modified": "<date>" as a global directive after each deployment, we get an error. ETag does not seem to be supported by the standard Microsoft CDN plan.
{"ErrorMessage":"Invalid RulesEngineConfig: (Header \"Last-Modified\" is not allowed to be modified by the rules engine)."}
What are the exact rule settings in Azure CDN to solve this use case and control that the browser gets new content only after we deployed?
The browser is doing exactly what you instructed it to do with those caching rules.
What you could do is give the assets unique, fingerprinted filenames and a very long cache TTL, but serve the HTML page with a short TTL. When you update the assets, users get a fresh HTML page, which in turn pulls the latest assets.
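The fingerprinting idea can be sketched as a small build-time helper; the naming scheme (8 hex chars of a SHA-256 digest) is an assumption, chosen for illustration:

```python
import hashlib
from pathlib import Path

def fingerprinted_name(path: str, content: bytes) -> str:
    """Build a content-hashed filename, e.g. app.js -> app.2d711642.js.

    Because the name changes whenever the content changes, the asset can be
    cached with a very long TTL; the HTML (served with a short TTL)
    references the new name after each deployment.
    """
    digest = hashlib.sha256(content).hexdigest()[:8]
    p = Path(path)
    return f"{p.stem}.{digest}{p.suffix}"

print(fingerprinted_name("app.js", b"console.log('v1');"))
```

Most bundlers (webpack, Vite, etc.) do this automatically, so in practice you would enable their hashed-filename output rather than hand-roll it.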

Azure CDN content takes time to refresh

I'm using Azure CLI to purge the contents from Azure CDN endpoint. I got a reference from Microsoft Docs: https://learn.microsoft.com/en-us/cli/azure/cdn/endpoint?view=azure-cli-latest
https://learn.microsoft.com/en-us/azure/cdn/cdn-purge-endpoint
I use the following command to refresh specific png file as shown below:
az cdn endpoint purge -g cdnRG --profile-name cdnprofile2 --content-paths "/img/cdn.png" --name cdnprofileendpoint2
The command executed successfully, but to my surprise the content is not refreshing, or sometimes takes a long time to refresh.
Is this the expected pattern?
Kindly advise.
Purging an Azure CDN endpoint only clears the cached content on the CDN edge servers. Any downstream caches, such as proxy servers and local browser caches, may still hold a cached copy of the file. You can force a downstream client to request the latest version of your file by giving it a unique name every time you update it, or by taking advantage of query string caching.
I suggest purging the same paths from the Azure portal and comparing with the Azure CLI command. You can also try purging the CDN endpoint with Azure PowerShell.
The important thing is that the purge time depends on the CDN provider:
Purge requests take approximately 10 minutes to process with Azure CDN from Microsoft, approximately 2 minutes with Azure CDN from Verizon (standard and premium), and approximately 10 seconds with Azure CDN from Akamai. Azure CDN has a limit of 50 concurrent purge requests at any given time at the profile level.
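If several paths need purging after each deployment, the CLI call from the question can be assembled programmatically. This sketch only builds the argument vector (the resource names are the ones from the question); executing it would require `subprocess.run` and a logged-in Azure CLI:

```python
def purge_command(group: str, profile: str, endpoint: str,
                  paths: list[str]) -> list[str]:
    """Build the `az cdn endpoint purge` argument vector for the given paths."""
    cmd = ["az", "cdn", "endpoint", "purge",
           "-g", group, "--profile-name", profile, "--name", endpoint,
           "--content-paths"]
    return cmd + paths

cmd = purge_command("cdnRG", "cdnprofile2", "cdnprofileendpoint2",
                    ["/img/cdn.png"])
print(" ".join(cmd))
# e.g. pass to subprocess.run(cmd) to actually purge
```

Passing a list (rather than a shell string) avoids quoting issues with paths that contain special characters.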

Fallback storage account (or multiple storage accounts) for Azure CDN

I can currently configure Azure CDN against a single storage account. What I'm wondering is what happens in the event of a disaster where that particular region becomes unavailable (outages etc.). If I need to refresh the cache at that point, I don't have any regional fallbacks. What is the correct way of supporting multiple storage accounts with the CDN?
One way that I can see is Traffic Manager. Traffic Manager receives the request and sends it to one of X CDNs configured for X storage accounts based on performance. That way, if one of the regions becomes unavailable, Traffic Manager should fall back to another one. This is an expensive solution, though, so I'm looking for something where I ideally get one CDN and X storage accounts, and the CDN handles the worldwide performance along with a fallback region.
Here are the steps to configure AFD:
Create AFD from Portal.
Click on Front Door Designer. You will see 3 sections. The first is Frontend, which will already be configured; then Backend Pools and Routing rules.
Click on Backend Pools and add a new backend pool. Select Storage as the host type, pick your primary storage blob endpoint, and give it priority 1.
Once that is done, configure the health probes. Then add your secondary storage blob endpoint and give it priority 2.
Configure the routing rules and make sure you have /* as the matching pattern. You can also enable caching in the rule and cache based on the query string. Moreover, if you have a dynamic page, you can enable dynamic compression.
Once that is done, try accessing AFD URL and check how it works.
Here is the Public Documentation for your reference: https://learn.microsoft.com/en-us/azure/frontdoor/front-door-routing-methods
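The priority-based failover described in the steps above can be sketched as plain selection logic. This is a simplification of what Front Door does internally, and the data structures are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Backend:
    host: str
    priority: int   # 1 = primary, 2 = secondary, ...
    healthy: bool   # as reported by the health probe

def pick_backend(pool: list[Backend]) -> Backend:
    """Route to the healthy backend with the lowest priority number,
    mimicking Front Door's priority-based traffic routing."""
    healthy = [b for b in pool if b.healthy]
    if not healthy:
        raise RuntimeError("no healthy backends in the pool")
    return min(healthy, key=lambda b: b.priority)

pool = [
    Backend("primary.blob.core.windows.net", priority=1, healthy=False),
    Backend("secondary.blob.core.windows.net", priority=2, healthy=True),
]
print(pick_backend(pool).host)  # secondary takes over when primary is down
```

As long as the priority-1 backend passes its health probe, all traffic goes there; the secondary only receives traffic during an outage.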
Try using Azure Front Door. It is a combination of a CDN and an L7 load balancer, and you can implement your scenario with it.
Let me know if you face any difficulties.

Azure CDN vs Azure Blob storage origin pull takes way too long

I am using Azure Blob Storage to store images in a public container and embedding them in a public website. Everything works fine; blobs are publicly available on xxxxx.blob.core.windows.net the instant I upload them. I wanted to use Azure CDN for its edge caching infrastructure and set one up at xxxxx.vo.msecnd.net.
But now, when I point my images to the CDN, it returns 404 for a good 15 minutes or so, and only then starts serving them. The documentation mentions that we should not use the CDN for highly volatile or frequently changing blobs, but a simple CMS with an image upload feature for a public site warrants a CDN, doesn't it?
I am in exactly the same situation at the moment for product images that are uploaded to my e-commerce site. I prefer to use Azure CDN on top of Azure blob storage for all of the obvious reasons but cannot wait 15 minutes for the image to be available.
For now I have resolved to store the blob storage URL initially but then later rewrite it to use the CDN domain via an Azure WebJob running once daily. It seems like an unnecessary amount of extra work but I haven't yet found a better option and really want to use the Azure CDN.
What I'm doing right now: for website-related images and files, I upload manually before deployment (https://abc.blob.core.windows.net/cdn), and if a website user uploads an image or file through my website, I upload that file internally to blob storage (a separate container, not the CDN one) using CloudBlobClient.
A CDN is used for static content delivery, but in your case you need dynamic content delivery via the CDN. You could use a Cloud Service + CDN, which lets dynamic content be delivered from the CDN using ASP.NET caching concepts.
Please refer this link for more details: Using the Windows Azure Content Delivery Network (CDN)
CDN enables a user to fetch content from a CDN-POP that is geographically closest to the user thus allowing lower read latencies.
Without a CDN, every request would reach the origin server (in your case Azure Storage). The low latency offered by CDN is achieved when there are cache hits. On a cache miss, a CDN-POP will fetch the content from the origin server reducing the latency benefit offered by CDN. Cache hits are usually dependent on whether the content is static (results in cache hits) or dynamic (results in cache miss) and its popularity (hot objects result in cache hit).
Your choice of using a CDN or not depends on a) whether your files are static or dynamic (if dynamic, the benefit of using a CDN is lower); b) whether low latency is important to your application; c) request rate: with a low number of requests your files are likely to be evicted from the cache, so a CDN may not be that useful; and d) whether you have high scalability requirements. Note that Azure Storage has documented scalability limits; if your application exceeds them, it is recommended to use a CDN.
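The latency trade-off can be made concrete with a back-of-the-envelope model. The numbers below are illustrative assumptions, not Azure measurements:

```python
def expected_latency_ms(hit_ratio: float, edge_ms: float,
                        origin_ms: float) -> float:
    """Expected read latency: cache hits are served from the CDN edge,
    misses go through the edge to the origin."""
    return hit_ratio * edge_ms + (1 - hit_ratio) * (edge_ms + origin_ms)

# Static, popular content (high hit ratio) benefits most from the CDN:
print(expected_latency_ms(0.95, 20, 200))  # ~30 ms
# Dynamic content (near-zero hit ratio) sees almost no benefit:
print(expected_latency_ms(0.05, 20, 200))  # ~210 ms
```

With a near-zero hit ratio, every request pays the origin round trip plus the edge hop, which is why dynamic content can even be slightly slower through a CDN.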
