My company uses ASHX files to serve some dynamic images. Because the content type is image/jpeg, IIS sends headers with them as it would for static images.
Depending on settings (I don't know all of the settings involved, hence the question) the headers may include any of:
Last-Modified, ETag, Expires
These cause the browser to treat the images as cacheable, which leads to all sorts of bugs where users see stale images.
Is there a setting I can apply somewhere that will cause ASHX files to behave the same way as other dynamic pages, like ASPX files? Short of that, is there a setting that will let me, across the board, remove Last-Modified, ETag, Expires, etc. and add a no-cache header instead?
The only solutions I've found so far are:
1) Adding Response.CacheControl = "no-cache" to each handler.
I don't like this because it requires changing every handler and requires every developer to be aware of it.
2) Setting HTTP Header override on a folder where the handlers live
I don't like this one because it requires the handlers to be in their own directory. While this may be good practice in general, unfortunately our application is not structured that way, and I cannot just move them because it would break client-facing links.
If nobody provides a better answer I'll have to accept that these are the only two choices.
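If a config-only route is acceptable, one hedged sketch (mine, not from the answers above, and worth testing against whatever headers the runtime already emits) is to scope a Cache-Control header to the handler's path in web.config with a <location> element, which can target a single file and therefore doesn't require moving the handlers into their own directory:

<!-- Hypothetical web.config sketch: add a no-cache header for one handler path
     without relocating the file. Path and header value are placeholders. -->
<configuration>
  <location path="myimages.ashx">
    <system.webServer>
      <httpProtocol>
        <customHeaders>
          <add name="Cache-Control" value="no-cache" />
        </customHeaders>
      </httpProtocol>
    </system.webServer>
  </location>
</configuration>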
Add a randomly generated string to the request query string. This tricks the browser into treating each request as a different resource. Example: document.getElementById("myimgcontl").src = "myimages.ashx?15923763";.
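A hedged variant of the same idea uses the current timestamp so the value never repeats (the element id is taken from the example above and is otherwise a placeholder):

// Cache-busting sketch: the timestamp makes every request URL unique,
// so the browser cannot reuse a cached copy.
const img = document.getElementById("myimgcontl");
img.src = "myimages.ashx?nocache=" + Date.now();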
I'm just testing Workers Sites with a Hugo-created static site. It already existed, so I used the docs' instructions for adapting an existing site. The Cache-Control headers for the woff2 and CSS files all show up as no-cache, contrary to what I'd have expected based on https://support.cloudflare.com/hc/en-us/articles/200172516#h_a01982d4-d5b6-4744-bb9b-a71da62c160a. The Workers site in question is https://hosts-test-hugo.brycewray.workers.dev/.
I found the following at https://levelup.gitconnected.com/use-cloudflare-javascript-workers-to-deploy-you-static-generated-site-ssg-1c518e078646 but don't know if it's related:
A Cloudflare Worker is a piece of JavaScript code that runs every time you access a specific route on a website proxied by Cloudflare. The code is executed on every request before they reach Cloudflare’s cache. This means Worker responses are not cached (although requests made by the worker to other web services might be cached with the appropriate caching headers).
Does the site need to have a custom domain — i.e., rather than being a “.workers.dev” URL — before it will have normal caching behavior? Is that even related?
[Note: I am posting this here because I’ve been unsuccessful in getting a response on either the Cloudflare community forum or the Cloudflare subreddit — hoping for better results here.]
Thanks again to Cloudflare’s Kenton Varda for his answer here, without which I’d have been totally stuck. I also thank Brian Li for additional and equally valuable help he provided separately.
Adding the following for others who may find it useful . . .
The remaining problem I encountered was that I didn't know how to assign differing cache-control values to most of the static assets (such as one month, i.e. 2592000 seconds) while leaving the HTML at a much smaller value (like 3600 seconds, or 0). Then, a few days later, I found the answer in a comment on Issue #81 of the Cloudflare kv-asset-handler repository. So, now, I have the following code within my Worker's index.js file:
options.cacheControl = {
  browserTTL: 0,
  edgeTTL: 0,
  bypassCache: false // default
}

const filesRegex = /(.*\.(ac3|avi|bmp|br|bz2|css|cue|dat|doc|docx|dts|eot|exe|flv|gif|gz|ico|img|iso|jpeg|jpg|js|json|map|mkv|mp3|mp4|mpeg|mpg|ogg|pdf|png|ppt|pptx|qt|rar|rm|svg|swf|tar|tgz|ttf|txt|wav|webp|webm|webmanifest|woff|woff2|xls|xlsx|xml|zip))$/

if (url.pathname.match(filesRegex)) {
  options.cacheControl.edgeTTL = 2592000
  options.cacheControl.browserTTL = 2592000
}
If you look at that linked issue comment and wonder what’s different: the only thing is that I removed html (and, to be safe, htm) from the list of extensions. As a result, my Worker site’s HTML has zero caching while each CSS, font, or image file has a one-month cache-control setting — exactly the desired result. Note: The vast majority of the site’s images are hosted elsewhere, but I do still host a small number for favicons and fallback in general.
By default, Workers Sites does not serve a Cache-Control header, but you can customize it to do so. (EDIT to clarify: Workers Sites are cached on Cloudflare's edge by default, and support "etags" for revalidation. The Cache-Control header controls whether they are also cached in the browser without requiring revalidation.)
Note that Workers Sites works very differently from using Cloudflare with a classic origin server. If you're reading something about caching on Cloudflare, but it doesn't specifically mention Workers Sites, then it probably does not apply to Workers Sites.
With Workers Sites, your site is served by a Cloudflare Worker -- code that runs directly on Cloudflare's servers. So, you have no "origin" server behind Cloudflare, and Cloudflare's cache doesn't work in the normal way. The Worker code is completely responsible for serving the content, including setting any headers like Cache-Control.
In fact, when you create a new Workers Sites project using wrangler, the code for this Worker is generated for you -- but you are allowed to edit it! You can customize the code all you want to do whatever you want. The code for the Worker is found in your project directory under workers-site/index.js. The code looks like this -- in fact, it is initialized as a copy of that file from GitHub.
This worker code depends on a library (npm module) called @cloudflare/kv-asset-handler to do most of the work. This library can be customized to handle caching in various ways through its cacheControl option.
But where do you set this option? Well, in your worker code!
Open up workers-site/index.js and look for the part that looks like this:
async function handleEvent(event) {
  const url = new URL(event.request.url)
  let options = {}

  /**
   * You can add custom logic to how we fetch your assets
   * by configuring the function `mapRequestToAsset`
   */
  // options.mapRequestToAsset = handlePrefix(/^\/docs/)
The comment mentions one way that you can use options to customize how your site is served, but you can also set cacheControl here. Try adding this:
options.cacheControl = {
  browserTTL: 3600 // 1 hour
}
Now re-deploy your site, and you should see assets are served with Cache-Control: max-age=3600. Of course, this means that your content may be cached in people's browsers for up to an hour (3600 seconds); you may prefer a longer or shorter period.
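For context, here is a trimmed sketch of how the generated handler typically wires this together (details vary by wrangler version, so treat the exact shape as an assumption): the options object, including cacheControl, is passed straight to getAssetFromKV, which applies it when serving each asset.

import { getAssetFromKV } from '@cloudflare/kv-asset-handler'

addEventListener('fetch', event => {
  event.respondWith(handleEvent(event))
})

async function handleEvent(event) {
  let options = {}
  options.cacheControl = {
    browserTTL: 3600 // 1 hour in the visitor's browser
  }
  // getAssetFromKV looks up the asset in Workers KV and applies cacheControl.
  const asset = await getAssetFromKV(event, options)
  return new Response(asset.body, asset)
}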
Note that if you aren't a programmer, this may all seem a bit daunting. Workers Sites is really designed for people who want to be able to customize how their sites are served by editing JavaScript code. For those not interested in writing code, you will be limited to the default behavior, which may or may not suit your needs.
I've been running some penetration tests using OWASP ZAP and it raises the following alert for all requests: X-Content-Type-Options Header Missing.
I understand the header, and why it is recommended. It is explained very well in this StackOverflow question.
However, I have found various references that indicate that it is only used for .js and .css files, and that it might actually be a bad thing to set the header for other MIME types:
Note: nosniff only applies to "script" and "style" types. Also applying nosniff to images turned out to be incompatible with existing web sites. [1]
Firefox ran into problems supporting nosniff for images (Chrome doesn't support it there). [2]
Note: Modern browsers only respect the header for scripts and stylesheets and sending the header for other resources (such as images) when they are served with the wrong media type may create problems in older browsers. [3]
The above references (and others) indicate that it is bad to simply set this header for all responses, but despite following any relevant-looking links and searching on Google, I couldn't find any reason behind this argument.
What are the risks/problems associated with setting X-Content-Type-Options: nosniff and why should it be avoided for MIME types other than text/css and text/javascript?
Or, if there are no risks/problems, why are Mozilla (and others) suggesting that there are?
The answer by Sean Thorburn was very helpful and pointed me to some good material, which is why I awarded the bounty. However, I have now done some more digging and I think I have the answer I need, which turns out to be the opposite of the answer given by Sean.
I will therefore answer my own questions:
The above references (and others) indicate that it is bad to simply set this header for all responses, but despite following any relevant-looking links and searching on Google, I couldn't find any reason behind this argument.
There is a misinterpretation here - this is not what they are indicating.
The resources I found during my research referred to the header only being respected for "script and style types", which I interpreted to mean files that were served as text/javascript or text/css.
However, what they were actually referring to was the context in which the file is loaded, not the MIME type it is served as. For example, <script> or <link rel="stylesheet"> tags.
Given this interpretation, everything makes a lot more sense and the answer becomes clear:
You need to serve all files with a nosniff header to reduce the risk of injection attacks from user content.
Serving up only CSS/JS files with this header is pointless, as these types of file would be acceptable in this context and don't need any additional sniffing.
However, for other types of file, by disallowing sniffing we ensure that only files whose MIME type matches the expected type are allowed in each context. This mitigates the risk of a malicious script being hidden in an image file (for example) in a way that would bypass upload checks and allow third-party scripts to be hosted from your domain and embedded into your site.
What are the risks/problems associated with setting X-Content-Type-Options: nosniff and why should it be avoided for MIME types other than text/css and text/javascript?
Or, if there are no risks/problems, why are Mozilla (and others) suggesting that there are?
There are no problems.
The problems being described are issues regarding the risk of the web browser breaking compatibility with existing sites if they apply nosniff rules when accessing content. Mozilla's research indicated that enforcing a nosniff option on <img> tags would break a lot of sites due to server misconfigurations and therefore the header is ignored in image contexts.
Other contexts (e.g. HTML pages, downloads, fonts, etc.) either don't employ sniffing, don't have an associated risk or have compatibility concerns that prevent sniffing being disabled.
Therefore, they are not suggesting that you should avoid using this header at all.
However, the issues that they talk about do result in an important footnote to this discussion:
If you are using a nosniff header, make sure you are also serving the correct Content-Type header!
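As an illustrative sketch only (not part of the original answer), a Cloudflare Worker-style proxy that stamps the header onto every response might look like this, assuming the upstream already sets a correct Content-Type for each asset:

// Add X-Content-Type-Options: nosniff to every proxied response.
addEventListener('fetch', event => {
  event.respondWith(handle(event.request))
})

async function handle(request) {
  const upstream = await fetch(request)
  // Re-wrap in a new Response so the headers are mutable.
  const response = new Response(upstream.body, upstream)
  response.headers.set('X-Content-Type-Options', 'nosniff')
  return response
}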
Some references that helped me understand this a bit more fully:
The WhatWG Fetch standard that defines this header.
A discussion and code commit relating to this header for the webhint.io site checking tool.
I'm a bit late to the party, but here's my 2c.
This header makes a lot of sense when serving User Generated Content. So people don't upload a .png file that actually has some JS code in it, and then use that .png in a <script> tag.
You don't necessarily have to set it for the static files that you have 100% control of.
I would stick to js, css, text/html, json and xml.
Google recommends using unguessable CSRF tokens provided by the protected resources for other content types, i.e. generating the token using a JS resource protected by the nosniff header.
You could add it to everything, but that would just be tedious and as you mentioned above - you may run into compatibility and user issues.
https://www.chromium.org/Home/chromium-security/corb-for-developers
I am using Amazon CloudFront to deliver some HDS files. I have an origin server which checks the HTTP Referer header and blocks the request if the referer is not allowed.
The problem is that CloudFront is removing the Referer header, so it is not forwarded to the origin.
Is it possible to tell Amazon not to do it?
Within days of writing the answer below, changes were announced to Cloudfront. Cloudfront will now pass through headers you select and can add some headers of its own.
However, much of what I stated below remains true. Note that in the announcement, an option is offered to forward all headers which, as I suggested, would effectively disable caching. There's also an option to forward specific headers, which will cause Cloudfront to cache the object against the complete set of forwarded headers -- not just the uri -- meaning that the effectiveness of the cache is somewhat reduced, since Cloudfront has no option but to assume that the inclusion of the header might modify the response the server will generate for that request.
Each of your CloudFront distributions now contains a list of headers that are to be forwarded to the origin server. You have three options:
None - This option requests the original behavior.
All - This option forwards all headers and effectively disables all caching at the edge.
Whitelist - This option gives you full control of the headers that are to be forwarded. The list starts out empty, and grows as you add more headers. You can add common HTTP headers by choosing them from a list. You can also add "custom" headers by simply entering the name.
If you choose the Whitelist option, each header that you add to the list becomes part of the cache key for the URLs associated with the distribution. Adding a header to the list simply tells CloudFront that the value of the header can affect the content returned by the origin server.
http://aws.amazon.com/blogs/aws/enhanced-cloudfront-customization/
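For anyone managing the distribution as code rather than through the console, a hedged CloudFormation fragment using the classic (pre-cache-policy) settings might look like the following; the origin id and protocol policy are placeholders, and newer distributions would typically use cache/origin-request policies instead:

# Forward (and cache on) the Referer header for the default behavior.
DefaultCacheBehavior:
  TargetOriginId: my-origin
  ViewerProtocolPolicy: redirect-to-https
  ForwardedValues:
    QueryString: false
    Headers:
      - Referer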
Cloudfront does remove the Referer header along with several others that are not particularly meaningful -- or whose presence would cause illogical consequences -- in the world of cached content.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html
Just like cookies, if the Referer: header were allowed to remain, such that the origin could see it and react to it, that would imply that the object should be cached based on the request plus the referring page, which would seem to largely defeat the cachability of objects. Otherwise, if the origin did react to an undesired referer and send no-cache responses, that would be all well and good until the first legitimate request came in, the response to which would be served to subsequent requesters regardless of their referer, also largely defeating the purpose.
RFC-2616 Section 13 requires that a cache return a response that has been "checked for equivalence with what the origin server would have returned," and this implies that the response be valid based on all headers in the request.
The same thing goes for User-agent and other headers an origin server might use to modify its response... if you need to react to these values at the origin, there's little obvious purpose for serving them with a CDN.
Referring page-based tests are quite a primitive measure, the way many people use them, since headers are so trivial to forge.
If you are dealing with a platform that you don't control, and this is something you need to override (with a dummy value, just to keep the existing system "happy,") then a reverse proxy in front of the origin server could serve such a purpose, with Cloudfront using the reverse proxy as its origin.
In today's newsletter, Amazon announced that it is now possible to forward request headers with CloudFront. See: http://aws.amazon.com/de/about-aws/whats-new/2014/06/26/amazon-cloudfront-device-detection-geo-targeting-host-header-cors/
We have a fairly high-traffic static site (i.e. no server code), with lots of images, scripts, css, hosted by IIS 7.0
We'd like to turn on some caching to reduce server load, and are considering setting the expiry of web content to some time in the future. In IIS, we can do this at a global level via the "Expire web content" section of the common HTTP headers in the IIS response headers module. Perhaps setting content to expire 7 days after serving.
All this actually does is set the max-age HTTP response header, so far as I can tell, which makes sense, I guess.
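(For reference, the web.config equivalent of that UI setting appears to be the staticContent/clientCache element; a minimal sketch with a 7-day max-age, assuming IIS 7+:)

<configuration>
  <system.webServer>
    <staticContent>
      <!-- Sends Cache-Control: max-age=604800 (7 days) for static files -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>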
Now, the confusion:
Firstly, all browsers I've checked (IE9, Chrome, FF4) seem to ignore this and still make conditional requests to the server to see if content has changed. So I'm not entirely sure what the max-age response header will actually affect. Could it be older browsers? Or web caches?
It is possible that we may want to change an image on the site at short notice... I'm guessing that if the max-age is actually honored by something, then by its very nature it won't check whether the image has changed for 7 days... so that's not what we want either.
I wonder if a best practice is to partition one's site into folders of content that really won't change often and only turn on long-term expiry for those folders? Perhaps varying the querystring to force a refresh of content in those folders if needed (e.g. /assets/images/background.png?version=2)?
Anyway, having looked through the (rather dry!) HTTP specification, and some of the tutorials, I still don't really have a feel for what's right in our situation.
Any real-world experience of a situation similar to ours would be most appreciated!
Browsers fetch the HTML first, then all the resources inside (css, javascript, images, etc).
If you make the HTML expire soon (e.g. 1 hour or 1 day) and then make the other resources expire after 1 year, you can have the best of both worlds.
When you need to update an image, or other resource, you just change the name of that file, and update the HTML to match.
The next time the user gets fresh HTML, the browser will see a new URL for that image, and get it fresh, while grabbing all the other resources from a cache.
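As a toy sketch of that renaming idea (the element id and file names are made up), the freshly fetched HTML/JS simply points at a new fingerprinted name, so the long-cached old file is never requested again:

// Referencing a fingerprinted asset name from fresh HTML/JS.
const backgroundVersion = 'v2' // bump only when the image actually changes
document.getElementById('hero').style.backgroundImage =
  `url(/assets/images/background.${backgroundVersion}.png)`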
Also, at the time of this writing (December 2015), Firefox limits the maximum number of concurrent connections to a server to six (6). This means if you have 30 or more resources that are all hosted on the same website, only 6 are being downloaded at any time until the page is loaded. You can speed this up a bit by using a content delivery network (CDN) so that everything downloads at once.
I use Amazon CloudFront to host all my site's images and videos, to serve them faster to my users, who are pretty scattered across the globe. I also apply pretty aggressive forward caching to the elements hosted on CloudFront, setting Cache-Control to public, max-age=7776000.
I've recently discovered to my annoyance that third party sites are hotlinking to my Cloudfront server to display images on their own pages, without authorization.
I've configured .htaccess to prevent hotlinking on my own server, but haven't found a way of doing this on CloudFront, which doesn't seem to support the feature natively. And, annoyingly, Amazon's Bucket Policies, which could be used to prevent hotlinking, only apply to S3; they have no effect on CloudFront distributions [link]. If you want to take advantage of the policies, you have to serve your content from S3 directly.
Scouring my server logs for hotlinkers and manually changing the file names isn't really a realistic option, although I've been doing this to end the most blatant offenses.
You can forward the Referer header to your origin
Go to CloudFront settings
Edit Distributions settings for a distribution
Go to the Behaviors tab and edit or create a behavior
Set Forward Headers to Whitelist
Add Referer as a whitelisted header
Save the settings in the bottom right corner
Make sure to handle the Referer header on your origin as well.
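A minimal sketch of such an origin-side check (assuming a Node.js/Express origin and example.com as the only allowed referring site; remember the Referer only reaches the origin once it is whitelisted as above):

const express = require('express')
const app = express()

app.use('/images', (req, res, next) => {
  const referer = req.headers.referer || ''
  // Only reject requests whose Referer is present and points to another site;
  // empty referers (direct visits, strict privacy settings) are allowed here.
  if (referer && !referer.startsWith('https://www.example.com/')) {
    return res.status(403).send('Hotlinking not allowed')
  }
  next()
})

app.use('/images', express.static('images'))
app.listen(8080)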
We had numerous hotlinking issues. In the end we created CSS sprites for many of our images, either adding white space to the bottom/sides or combining images together.
We displayed them correctly on our pages using CSS, but any hotlinks would show the images incorrectly unless they copied the CSS/HTML as well.
We've found that they don't bother (or don't know how).
The official approach is to use signed URLs for your media. For each media item that you want to distribute, you can generate a specially crafted URL that works within a given constraint of time and source IPs.
One approach for static pages is to generate temporary URLs for the media included in that page that are valid for twice the page's caching time. Let's say your page's caching time is 1 day. Every 2 days the links would be invalidated, which forces the hotlinkers to update their URLs. It's not foolproof, as they can build tools to get the new URLs automatically, but it should deter most people.
If your page is dynamic, you don't need to worry about trashing your page's cache, so you can simply generate URLs that only work for the requester's IP.
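A hedged sketch of generating such a URL with the AWS SDK v3 CloudFront signer (package and parameter names as I recall them; the key pair id, domain and key file are placeholders tied to a key group you configure separately):

import { getSignedUrl } from '@aws-sdk/cloudfront-signer'
import { readFileSync } from 'node:fs'

const signedUrl = getSignedUrl({
  url: 'https://d111111abcdef8.cloudfront.net/videos/clip.mp4',
  keyPairId: 'K2JCJMDEHXQW5F',
  privateKey: readFileSync('./private_key.pem', 'utf8'),
  // valid for roughly twice a 1-day page cache, per the scheme above
  dateLessThan: new Date(Date.now() + 2 * 24 * 3600 * 1000).toISOString()
})
console.log(signedUrl)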
As of Oct. 2015, you can use AWS WAF to restrict access to CloudFront files. Here's an article from AWS that announces WAF and explains what you can do with it. Here's an article that helped me set up my first ACL to restrict access based on the referrer.
Basically, I created a new ACL with a default action of DENY. I added a rule that checks the end of the referer header string for my domain name (lowercase). If it passes that rule, it ALLOWS access.
After assigning my ACL to my Cloudfront distribution, I tried to load one of my data files directly in Chrome and I got this error:
As far as I know, there is currently no solution, but I have a few possibly relevant, possibly irrelevant suggestions...
First: Numerous people have asked this on the Cloudfront support forums. See here and here, for example.
Clearly AWS benefits from hotlinking: the more hits, the more they charge us for! I think we (Cloudfront users) need to start some sort of heavily orchestrated campaign to get them to offer referer checking as a feature.
Another temporary solution I've thought of is changing the CNAME I use to send traffic to cloudfront/s3. So let's say you currently send all your images to:
cdn.blahblahblah.com (which redirects to some cloudfront/s3 bucket)
You could change it to cdn2.blahblahblah.com and delete the DNS entry for cdn.blahblahblah.com
As a DNS change, that would knock out all the people currently hotlinking before their traffic got anywhere near your server: the DNS entry would simply fail to look up. You'd have to keep changing the cdn CNAME to make this effective (say once a month?), but it would work.
It's actually a bigger problem than it seems because it means people can scrape entire copies of your website's pages (including the images) much more easily - so it's not just the images you lose and not just that you're paying to serve those images. Search engines sometimes conclude your pages are the copies and the copies are the originals... and bang goes your traffic.
I am thinking of abandoning Cloudfront in favor of a strategically positioned, super-fast dedicated server (serving all content to the entire world from one place) to give me much more control over such things.
Anyway, I hope someone else has a better answer!
This question mentioned image and video files.
Referer checking cannot be used to protect multimedia resources from hotlinking because some mobile browsers do not send the Referer header when requesting an audio or video file played using HTML5.
I am sure this is the case for Safari and Chrome on iPhone and Safari on Android.
Too bad! Thank you, Apple and Google.
How about using signed cookies? Create a signed cookie using a custom policy, which supports the various kinds of restrictions you want to set and also allows wildcards.
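A hedged sketch of what such a custom policy looks like (the domain is a placeholder); once signed, CloudFront expects it back as the three cookies CloudFront-Policy, CloudFront-Signature and CloudFront-Key-Pair-Id:

// Wildcard custom policy limiting access to /images/* for one day.
const policy = JSON.stringify({
  Statement: [
    {
      Resource: 'https://d111111abcdef8.cloudfront.net/images/*',
      Condition: {
        DateLessThan: { 'AWS:EpochTime': Math.floor(Date.now() / 1000) + 86400 }
      }
    }
  ]
})
// After signing the policy with your CloudFront key, set (roughly):
//   Set-Cookie: CloudFront-Policy=<base64 policy>; Path=/; Secure
//   Set-Cookie: CloudFront-Signature=<signature>; Path=/; Secure
//   Set-Cookie: CloudFront-Key-Pair-Id=<key pair id>; Path=/; Secure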