Azure Media Player - cache-control

I'm using Azure Media Player for video playback and that works great. However, the media player CSS/JS/WOFF files do not have any Cache-Control headers set. They come from the AMP CDN (amp.azure.net). Am I doing something wrong? I cannot find any information whatsoever regarding Azure Media Player and client-side caching. What is the recommended way to set up client-side caching when using amp.azure.net?

According to https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching, "HTTP is designed to cache as much as possible, so even if no Cache-Control is given, responses will get stored and reused if certain conditions are met. This is called heuristic caching." In the responses for azuremediaplayer.min.js, azuremediaplayer.min.css, and the woff2 file, I see no Cache-Control directive, as you mentioned. Therefore, there are no specific restrictions on caching, and in most cases this means that all three files should be cached normally.
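If you want to confirm for yourself what the CDN actually returns, a quick check along these lines works; the amp.azure.net URL here is an assumption, so substitute whatever player version your page actually references:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class CheckAmpCacheHeaders
    {
        static async Task Main()
        {
            using var client = new HttpClient();

            // Assumed URL: substitute the exact player version your page loads from amp.azure.net.
            var url = "https://amp.azure.net/libs/amp/2.3.11/azuremediaplayer.min.js";

            using var request = new HttpRequestMessage(HttpMethod.Head, url);
            using var response = await client.SendAsync(request);

            // If nothing is printed, no Cache-Control header was returned and the browser
            // falls back to heuristic caching as described above.
            Console.WriteLine(response.Headers.CacheControl?.ToString() ?? "(no Cache-Control header)");
        }
    }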

Related

Serving some files from Azure CDN to only certain users

Suppose I have a website that is served by an Azure CDN endpoint (via files that have been uploaded to blob storage).
I want the minified website content to be available to everyone -- that part is easy, since that's what the CDN does by default.
Ideally, I would also have the sourcemaps available on that same CDN (so that the default behavior of //# sourceMappingURL=0-8d1d0e3cc4594b2c2758.js.map within my JS files would "just work"). However, I'd like for those sourcemaps to only be served to a subset of users.
Is there a way of accomplishing this scenario? I'm happy to define "subset" in any way that would make this scenario work (e.g., being connected to a certain VPN, being in a certain IP-address range, using Fiddler to set a secret header, etc.)
Thanks!
I assume that what you need is a system that, in production, serves sourcemaps to a certain group of users, for instance a team of developers, but not to everyone; the sourcemaps should not be publicly accessible.
There are different alternatives that can help achieve this goal.
On the one hand, we can try to use a rules engine that analyzes the incoming HTTP traffic and serves one response or another depending on the criteria deemed appropriate.
These rules engines allow you to customize how HTTP requests are handled by defining a set of match conditions on the incoming requests and actions to be performed if the match conditions apply.
Azure CDN provides two types of rules engines: a standard rules engine for Azure CDN from Microsoft, and a premium one from Verizon, which provides more advanced features.
How you use these rules engines depends largely on how you need to identify your group of users and how you want to condition the response your application gives to a sourcemap request.
For instance, one of the standard rules engine's match conditions, also available in the premium rules engine, is the remote IP address the request comes from: maybe that could be a good criterion to discriminate between your different subsets of users.
Or, as you suggested with the use of Fiddler, you can analyze the incoming request headers in search of a custom one.
The Azure CDN Premium from Verizon rules engine provides more advanced match conditions based on browser, device type, and so on.
Once the users have been identified, the system must consider the action to take depending on whether they belong to one or another group.
Both the standard and Verizon rules engines provide actions that could be relevant for this purpose.
I think the best option, if you can use the Verizon rules engine, would be to deny access to HTTP requests sent by users who do not belong to the group allowed to access the sourcemaps.
Other options, although I think they are more difficult to implement if you are working with webpack and a SPA, would be to redirect the requests received from one subset of users to the files that contain the sourcemaps (or to different index.html pages if you are using a SPA in your frontend, each with different JS and CSS resources, with or without sourcemaps), or to rewrite the URL to directly deliver a different set of files.
Another possible action is to not include the inline sourcemap location in your minified files and instead take advantage of the ability to modify response headers and append a SourceMap header that points to the actual sourcemaps. This header would only be sent for the desired group of users. Again, depending on how you are building your frontend, it may not be an easy task.
Finally, if you are using webpack and the SourceMapDevToolPlugin to build your frontend, you can use the publicPath option to point your sourcemaps, in production, to a non-public, more developer-oriented URL. This is the approach followed in this article. I think this approach is also worth looking into.

Caching Azure-served images in the browser

I have a load of images stored as blobs in an Azure container.
I am attempting to get the browser (Chrome) to cache these images to save bandwidth. I have read many posts stating that this can be achieved by setting the Cache-Control response header. I am using the Microsoft Azure Storage Explorer to modify these headers, e.g:
public, max-age=7776000
When I load this image (https://XXXXXXXX.blob.core.windows.net/XXXXXXXX/Themes/summer.jpg) and inspect it with Google Chrome's Developer Tools, the header setting doesn't make any difference to the caching of the image. I have tried many different permutations of the allowed CacheControl attributes but I don't see any caching going on at all. The status is always 200, but I was expecting 304 for a cache hit. Is this correct?
Whichever CacheControl string I provide is always displayed in Chrome's results; it just doesn't seem to make any difference to the caching aspect. I've tried variations of public, private, max-age, s-maxage, must-revalidate. And just to be complete, no-cache and no-store. No differences were observed.
The above image takes 900ms+ to load for me. However, when saved locally, the same image takes 19ms. I would expect that if the browser were caching the image, its timing would be equivalent to the local time.
Other posts suggest that an Azure CDN be used. However, I don't want to go down this route as the site that uses these images would not need that.
Am I missing a setting in Azure to allow caching? Loading the images directly in the browser, or within a web page makes no difference either.
Can anyone provide assistance? Let me know if any other information is required.
The CacheControl settings should take effect.
When using Chrome, please make sure you didn't select the "Disable cache" option in the Developer Tools.
I can see the expected behavior when setting max-age=xx for CacheControl in Chrome.
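If you prefer to set the header from code rather than Storage Explorer, a sketch using the Azure.Storage.Blobs SDK could look roughly like this; the connection string and names are placeholders:

    using Azure.Storage.Blobs;
    using Azure.Storage.Blobs.Models;

    class SetBlobCacheControl
    {
        static void Main()
        {
            // Placeholder connection string, container, and blob name: substitute your own.
            var container = new BlobContainerClient("<connection-string>", "<container>");
            BlobClient blob = container.GetBlobClient("Themes/summer.jpg");

            // Equivalent to editing the CacheControl property in Storage Explorer.
            // Note: SetHttpHeaders replaces all HTTP headers on the blob, so restate ContentType too.
            blob.SetHttpHeaders(new BlobHttpHeaders
            {
                CacheControl = "public, max-age=7776000",
                ContentType = "image/jpeg"
            });
        }
    }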

How to record the total size of a request in ASP.NET Core

I'm trying to log the total size of a request sent out from an ASP.NET Core website hosted in Azure. The goal is to be able to attribute some sense of what the data out cost is for specific functionality within the application. (Specifically we are looking at some functionality that uses Azure Blob Storage and allows downloads of the blobs but I don't think that's relevant to the question.)
I think the solution is some simple middleware that logs out the sizes to a storage mechanism (less concerned with that part) but not sure what to put inside the middleware.
Does anyone know how you would "construct" the total size of the response being sent out of the web app?
I would presume it will include HttpContext.Response.Body.Length, but I'm pretty confident that doesn't include the headers. I'm also not sure whether that's the compressed size of the response body or not.
Thanks
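As a rough sketch of the kind of middleware described above, and assuming it is acceptable to buffer each response body in memory (which may not hold for large blob downloads), something like this could log an approximate response size. Note that compression and the server's own header serialization mean the true on-the-wire egress can differ from these numbers:

    using System.IO;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;
    using Microsoft.Extensions.Logging;

    // Hypothetical middleware: buffers the response body to count its bytes and
    // approximates the header size. Treat it as a starting point, not a drop-in solution.
    public class ResponseSizeLoggingMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly ILogger<ResponseSizeLoggingMiddleware> _logger;

        public ResponseSizeLoggingMiddleware(RequestDelegate next, ILogger<ResponseSizeLoggingMiddleware> logger)
        {
            _next = next;
            _logger = logger;
        }

        public async Task InvokeAsync(HttpContext context)
        {
            var originalBody = context.Response.Body;
            await using var buffer = new MemoryStream();
            context.Response.Body = buffer;

            try
            {
                await _next(context);

                long bodyBytes = buffer.Length; // size of the body as written by the app
                long headerBytes = context.Response.Headers
                    .Sum(h => h.Key.Length + h.Value.ToString().Length + 4); // rough ": " + CRLF estimate

                _logger.LogInformation(
                    "Response to {Path}: body {BodyBytes} bytes, headers ~{HeaderBytes} bytes",
                    context.Request.Path, bodyBytes, headerBytes);

                // Copy the buffered body on to the real response stream.
                buffer.Position = 0;
                await buffer.CopyToAsync(originalBody);
            }
            finally
            {
                context.Response.Body = originalBody;
            }
        }
    }

It would be registered early in the pipeline with app.UseMiddleware<ResponseSizeLoggingMiddleware>(); and the header figure is only an estimate, since the server writes the status line and headers itself.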

How can I verify that javascript and images are being cached?

I want to verify that the images, CSS, and JavaScript files that are part of my page are being cached by my browser. I've used Fiddler and Google Page Speed and it's unclear whether either is giving me the information I need. Fiddler shows the HTTP 304 response for images, CSS, and JavaScript, which should tell the browser to use the cached copy. Google Page Speed shows the 304 response but doesn't show a transfer size of zero; instead it shows the full file size of the resource. Note also, I have seen Google Page Speed report a 200 response but then put the word (cache) next to the 200 (so the status is 200 (cache)), which doesn't make a lot of sense.
Any other suggestions as to how I can verify whether the server is sending back images, css, javascript after they've been retrieved and cached by a previous page hit?
In-browser HTTP debuggers are probably the easiest to use in your situation. Try HttpFox for Firefox, or Opera, which has Dragonfly built in. Both of these indicate when the local browser cache has been used.
If you appear to be getting conflicting information, then wireshark/tcpdump will show you if the objects are being downloaded or not as it is monitoring the actual network packets being transmitted and received. If you haven't looked at network traces before, this might be a little confusing at first.
In Fiddler, check that the response body (for images, CSS) is empty. Also make sure your max-age is long enough in the Cache-Control header. Most browsers (Safari, Firefox) have good traffic analyzer tools.
Your server's access logs can give you a lot of information on how effective your caching strategy is.
Let's say you have an HTML page /home.html, which references /some.js and /lookandfeel.css. For a given time period, aggregate the number of requests to all three files.
If your caching is effective, you should see a huge number of requests for home.html but very few for the CSS or JS. Somewhere in between is when you see an identical number of requests for all three, but the CSS and JS return 304s. The worst case is when you are only seeing 200s.
Obviously, you have to know your application to do such a study. The JS and CSS files may be shared across multiple pages, which may complicate your analysis. But the general idea still holds.
The advantage of such a study is that you can find out how effective your caching strategy is for your users, as opposed to "Is caching working on my machine?". However, this is no substitute for using an HTTP proxy/Fiddler.
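To make that log study concrete, here is a rough sketch of the aggregation; the log file name and the assumption that each request sits on its own line are hypothetical:

    using System;
    using System.IO;
    using System.Linq;

    class CacheEffectivenessCheck
    {
        static void Main()
        {
            // Counts requests per resource in a plain-text access log ("access.log" is a
            // placeholder), assuming one request per line with the path appearing in it.
            string[] paths = { "/home.html", "/some.js", "/lookandfeel.css" };
            string[] lines = File.ReadAllLines("access.log");

            foreach (var path in paths)
            {
                int requests = lines.Count(line => line.Contains(path));
                Console.WriteLine($"{path}: {requests} requests");
            }
        }
    }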
An HTTP 304 response is forbidden to have a body. Hence, the full response isn't sent; instead you just get back the headers of the 304 response. But the round trip itself isn't free, so sending proper expiration information is good practice to improve performance by avoiding the conditional request that returns the 304 in the first place.
http://www.fiddler2.com/redir/?id=httpperf explains this topic in some detail.
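If you want to watch that conditional round trip yourself, a small client sketch (the URL is a placeholder) can send If-Modified-Since and observe the 304:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ConditionalRequestDemo
    {
        static async Task Main()
        {
            using var client = new HttpClient();
            var url = "https://example.com/lookandfeel.css"; // placeholder URL

            // First request: a normal 200 with a full body; remember the Last-Modified validator.
            using var first = await client.GetAsync(url);
            var lastModified = first.Content.Headers.LastModified;

            // Second request: conditional. If the resource hasn't changed, the server
            // answers 304 Not Modified with headers only and no body.
            using var conditional = new HttpRequestMessage(HttpMethod.Get, url);
            conditional.Headers.IfModifiedSince = lastModified;
            using var second = await client.SendAsync(conditional);

            Console.WriteLine($"{first.StatusCode} then {second.StatusCode}");
        }
    }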

IIS 6 caches static image

Even if the image is changed, overwritten, modified, IIS still serves the cached copy.
I am trying to upload an image from a webcam taken every 15 seconds. The image makes it onto the server but when I refresh the browser with the image FROM the server it does not refresh.
IIS caches the file apparently for more than 2 minutes. I want this to be in real-time. Tried disabling caching everywhere I could think of. No luck.
Embed your image as follows:
<img src="WebCamImage.aspx?data={auto-generated guid}" ... >
And create a page (WebCamImage.aspx) that streams the static image file back to the browser while ignoring the "data" request parameter, which is only used to avoid any caching (make sure to set the response content type to "image/jpeg" or whatever is adequate in the @ Page directive).
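For illustration, a minimal sketch of what the WebCamImage.aspx code-behind could look like; the snapshot path is an assumption:

    using System;
    using System.Web;

    // Hypothetical code-behind for WebCamImage.aspx: streams the current snapshot and
    // disables caching; the "data" query parameter is ignored (it exists only to make
    // each request URL unique).
    public partial class WebCamImage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Response.ContentType = "image/jpeg";
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.WriteFile(Server.MapPath("~/WebCamImage.jpg")); // assumed file location
            Response.End();
        }
    }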
Are you sure that the image is cached on the server and not on the client? Have you tried requesting the same image from a different client?
If this IS server side caching then this article has all the answers for you:
http://blogs.msdn.com/david.wang/archive/2005/07/07/HOWTO-Use-Kernel-Response-Cache-with-IIS-6.aspx
You are most likely "affected" by the kernel-mode caching.
See that scavenger time?
Scavenger - 120 seconds by default and controlled by the registry key HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\UriScavengerPeriod
That is probably what you experience (2min caching)
Try turning kernel-mode caching off to see if it makes a difference (performance may suffer but it will be no worse than IIS5)
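If you want to check what scavenger period is actually configured on the box, a small sketch reading that registry value (run with sufficient privileges) could look like this:

    using System;
    using Microsoft.Win32;

    class CheckScavengerPeriod
    {
        static void Main()
        {
            // Reads the HTTP.sys scavenger period (in seconds) mentioned above.
            // If the value is not set, the default of 120 seconds applies.
            object value = Registry.GetValue(
                @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters",
                "UriScavengerPeriod",
                null);

            Console.WriteLine(value ?? "(not set: default of 120 seconds)");
        }
    }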
