I'm using the Azure CLI to purge content from an Azure CDN endpoint. I'm following these references from Microsoft Docs: https://learn.microsoft.com/en-us/cli/azure/cdn/endpoint?view=azure-cli-latest
https://learn.microsoft.com/en-us/azure/cdn/cdn-purge-endpoint
I use the following command to refresh a specific PNG file:
az cdn endpoint purge -g cdnRG --profile-name cdnprofile2 --content-paths "/img/cdn.png" --name cdnprofileendpoint2
The command executed successfully, but to my surprise the content is not refreshing, or sometimes it takes a while.
Is this an acceptable pattern?
Kindly advise.
Purging an Azure CDN endpoint only clears the cached content on the CDN edge servers. Any downstream caches, such as proxy servers and local browser caches, may still hold a cached copy of the file. You can force a downstream client to request the latest version of your file by giving it a unique name every time you update it, or by taking advantage of query string caching.
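For example, assuming the resource names from your question, query string caching can be switched on from the CLI roughly like this (a sketch; double-check the flag name against az cdn endpoint update --help):
# Treat each unique query string as a separate cache entry
az cdn endpoint update -g cdnRG --profile-name cdnprofile2 --name cdnprofileendpoint2 --query-string-caching-behavior UseQueryString
# Then reference a new version of the file after each update, e.g. /img/cdn.png?v=2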
I suggest purging the same path in the Azure portal and comparing the result with the Azure CLI command. You can also try purging the CDN endpoint with Azure PowerShell.
The important thing is that the purge time depends on the CDN provider:
Purge requests take approximately 10 minutes to process with Azure CDN from Microsoft, approximately 2 minutes with Azure CDN from Verizon (standard and premium), and approximately 10 seconds with Azure CDN from Akamai. Azure CDN has a limit of 50 concurrent purge requests at any given time at the profile level.
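If you want to confirm whether the edge is still serving a cached copy after the purge, a quick check with curl can help (the hostname below assumes the default azureedge.net domain for your endpoint, and the exact cache header varies by provider):
# Inspect the response headers for cache status and modification date
curl -sI https://cdnprofileendpoint2.azureedge.net/img/cdn.png | grep -iE "x-cache|last-modified"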
I created an ad hoc application to test the Azure caching features while studying for the AZ-204.
It is a simple Node application in App Service that renders a large image using Lorem Picsum.
<img src="https://picsum.photos/2000" style="width: 100%;">
I created a Standard Azure CDN Profile and added the Endpoint.
Then I set the global rule caching behavior to Override, with an "Always" cache expiration of 30 minutes.
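(For reference, I believe the CLI equivalent of that rule would look roughly like the sketch below; I actually configured it in the portal, and a Front Door Standard profile uses a different rule set, so treat the names and flags as assumptions.)
# Override cache expiration to 30 minutes for image extensions on a classic CDN endpoint
az cdn endpoint rule add -g <resource-group> -n <endpoint> --profile-name <profile> --order 1 --rule-name cacheimages --match-variable UrlFileExtension --operator Equal --match-values jpg png --action-name CacheExpiration --cache-behavior Override --cache-duration 00:30:00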
Expected Result:
From this moment I expected my application to cache the image, which means that when I reload the page accessed through the CDN's URL, I should get the same image as before for at least 30 minutes.
Actual Result:
But the actual result is that every time I load the page, it shows a different image, just as it would without a CDN.
I also tried creating a new rule for the image type jpeg with a 30-minute override, but it didn't work.
How can I return a cached image from Lorem Picsum using the Azure Front Door Standard CDN cache?
I tried to reproduce the same in my environment. The Azure CDN caching rule (cache expiration duration) does take effect, but the condition applies to files served from storage blobs.
I created a Front Door CDN profile endpoint and added a caching behavior override like below:
This cache rule applies to your storage container files like below:
In the storage container -> files. These files can be accessed through a URL, and the caching behavior can be set to Override or "Set if missing".
https://lorem.azureedge.net/container1/lorem pic 1.jpg
Set the TTL to 86400 seconds (1 day) via the cache control property: CacheControl="public, max-age=86400"
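If it helps, the same Cache-Control value can be set on the blob from the CLI (a sketch with a placeholder account name; in newer CLI versions the flag may be --content-cache instead of --content-cache-control):
# Set a one-day Cache-Control header on the blob
az storage blob update --account-name <storage-account> --container-name container1 --name "lorem pic 1.jpg" --content-cache-control "public, max-age=86400"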
Changing the cache duration for content fetched directly from the Lorem Picsum URL is not possible.
When you check the Front Door designer, these are the different cache behaviors that can be applied at the edge.
References:
Azure Storage, Azure Front Door, Azure CDN
Integrate Caching And CDNs Within Solutions by Thiago Vivas
I encountered the problem reported in "X-cache hit after Azure CDN Purge": if I set a long "Always" cache expiration / TTL of, for example, 1 year, the local browser will not fetch new content from the Azure CDN EVEN after a manual Azure CDN purge, because the browser just uses its local cache.
In our web setup, we would like the Azure CDN to cache everything UNTIL our next deployment, that's why we set a long global "Always cache expiration" of 1 year. We assumed that a manual CDN purge would force the browser to refresh, but it didn't, because of the local browser cache where the 1-year TTL is locked in. Now users still get old apps after our deployments.
Can this be solved with additional directives?
When we set "Last-Modified": "<date>" as a global directive after each deployment, we get an error. ETag does not seem to be supported by the standard Microsoft CDN plan.
{"ErrorMessage":"Invalid RulesEngineConfig: (Header \"Last-Modified\" is not allowed to be modified by the rules engine)."}
What are the exact rule settings in Azure CDN to solve this use case and ensure that the browser gets new content only after we deploy?
The browser is doing what you instructed it to do given the caching rules.
What you could do is configure the assets with unique filenames or a cache fingerprint and a very long cache TTL, but serve the HTML page with a shorter TTL. When you update assets, users get a fresh HTML page and pull the latest assets.
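As a minimal sketch of the fingerprinting idea (assuming a Linux shell and a hypothetical assets/app.js file):
# Derive a short content hash and copy the asset to a fingerprinted name
HASH=$(md5sum assets/app.js | cut -c1-8)
cp assets/app.js "assets/app.${HASH}.js"
# Reference assets/app.${HASH}.js from the HTML, and give the HTML itself a short TTL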
I have an Azure storage account + CDN + endpoint.
The content of a particular file rarely changes, but when it does, we'd like a non-technical team to be able to purge the cache for that file without using the API or PowerShell.
Can we purge the Azure CDN cache for a single file using a query string and the cache rules engine?
Something like:
https://our-endpoint.azureedge.net/file.html?clearcache=true
Or, alternatively, could we use the rules engine to set the TTL for this file only to 1 second, and then afterwards set the TTL back to 604800 seconds (1 week)? Something like the example URLs below (a rough rule sketch follows them):
https://our-endpoint.azureedge.net/file.html?setTTL=1
https://our-endpoint.azureedge.net/file.html?setTTL=604800
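(Roughly, I imagine such a rule would look something like the sketch below; I haven't verified the match variables or whether this would actually evict content that is already cached.)
# Hypothetical rule: override the TTL to 1 second when the query string contains setTTL=1
az cdn endpoint rule add -g <resource-group> -n our-endpoint --profile-name <profile> --order 1 --rule-name shortttl --match-variable QueryString --operator Contains --match-values "setTTL=1" --action-name CacheExpiration --cache-behavior Override --cache-duration 00:00:01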
This feature is not available yet, and no ETA has been shared. I would recommend posting a feature request in the Azure feedback section so it can be considered for future availability.
There is a Purge button on the Azure CDN endpoint:
Once you click on it, enter /file.html and purge.
Alternatively, you can set "Cache every unique URL" in the caching rules.
From my Azure web app service (ASP.NET MVC), I am serving up images via an anonymous controller method from Azure classic blob storage in two ways:
one, as a redirect straight to the storage blob URI, and
two, served up via an HttpClient fetch and then a FileContentResult from the retrieved bytes (with the advantage of hiding the storage URI).
Both controller methods involve several database select calls, but I'm using a P2-tier Azure database.
I'm still testing the performance differences between the two - obviously the second requires double the traffic overall: bytes from storage into web app, and then bytes from web app to client.
In both cases however, I'm seeing pretty unacceptable response times and errors from blob storage under load, using the Azure performance testing tool (250 concurrent simulated users over 5 mins).
When using the second approach (fetch on web app from storage and stream back) I'm getting lots of HTTP request errors when requesting from storage, so I have an instant retry mechanism (max 3 tries) to mitigate this. The end result is an avg response time of between 12 and 25 secs for an image, which isn't much good for displaying in an email these days.
Using the first approach (a clean redirect to the storage URI), this drops to 3-6 secs on average to serve the redirect - but I have no control over whether the subsequent client redirect request to storage then fails (which it clearly does at times - between 80% and 95% success rate according to the diagnostic logs). So I'm looking at a fourfold latency increase by 'guaranteeing' the image is definitely served to the client - which is effectively just as bad.
Is this an all-out stupid approach? I'm probably being quite the noob about all this. Surely there are architectures built on Azure storage that are far larger than mine and with fast response rates?
This is just anecdotal, but we've seen good results with Azure CDN using blob storage as the source. So instead of redirecting to the blob storage URL, you use the Azure CDN URL.
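Setting that up is straightforward; a sketch with hypothetical names (the origin is just the blob storage hostname):
# Create a CDN profile and an endpoint that uses blob storage as its origin
az cdn profile create -g myRG -n myCdnProfile --sku Standard_Microsoft
az cdn endpoint create -g myRG -n myimages --profile-name myCdnProfile --origin mystorageacct.blob.core.windows.net --origin-host-header mystorageacct.blob.core.windows.net
# Images are then served from https://myimages.azureedge.net/<container>/<blob>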
I am using Azure blob storage to store images in a public container and embedding them in a public website. Everything works fine: blobs are publicly available on xxxxx.blob.core.windows.net the instant I upload them. I wanted to use Azure CDN for its edge caching infrastructure and set one up at xxxxx.vo.msecnd.net.
But now, when I point my images to the CDN, it returns 404 for a good 15 mins or so, and then it starts serving. Their documentation mentions that we should not use the CDN for highly volatile or frequently changing blobs, but a simple CMS with an image upload feature for a public site warrants a CDN, doesn't it?
I am in exactly the same situation at the moment for product images that are uploaded to my e-commerce site. I prefer to use Azure CDN on top of Azure blob storage for all of the obvious reasons but cannot wait 15 minutes for the image to be available.
For now I have resolved to store the blob storage URL initially but then later rewrite it to use the CDN domain via an Azure WebJob running once daily. It seems like an unnecessary amount of extra work but I haven't yet found a better option and really want to use the Azure CDN.
What I'm doing right now: for website-related images and files, I upload them manually before deployment (https://abc.blob.core.windows.net/cdn), and if a website user uploads an image or file through my website, I upload that file internally to blob storage (a separate container, not the CDN one) using CloudBlobClient.
A CDN is used for static content delivery, but in your case you need dynamic content delivery via the CDN. You could use a Cloud Service + CDN, which lets dynamic content be delivered from the CDN using ASP.NET caching concepts.
Please refer to this link for more details: Using the Windows Azure Content Delivery Network (CDN)
CDN enables a user to fetch content from a CDN-POP that is geographically closest to the user thus allowing lower read latencies.
Without a CDN, every request would reach the origin server (in your case Azure Storage). The low latency offered by CDN is achieved when there are cache hits. On a cache miss, a CDN-POP will fetch the content from the origin server reducing the latency benefit offered by CDN. Cache hits are usually dependent on whether the content is static (results in cache hits) or dynamic (results in cache miss) and its popularity (hot objects result in cache hit).
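A quick way to see this effect is to time the same object from the origin and from the CDN (hypothetical URLs; repeat the CDN request so the second one is a cache hit):
# Compare total request time against the origin vs the CDN endpoint
curl -s -o /dev/null -w "origin: %{time_total}s\n" https://mystorageacct.blob.core.windows.net/images/photo.jpg
curl -s -o /dev/null -w "cdn:    %{time_total}s\n" https://myendpoint.azureedge.net/images/photo.jpg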
Your choice of using a CDN or not depends on a) whether your files are static or dynamic - if dynamic, the benefit of using a CDN is lower; b) whether low latency is important to your application; c) request rate - with a low number of requests your files are likely to be evicted from the cache, so a CDN may not be that useful; and d) whether you have high scalability requirements. Note that Azure Storage has documented scalability limits; if your application exceeds them, it is recommended to use a CDN.