When should I use HTTP header "X-Content-Type-Options: nosniff" - browser

I've been running some penetration tests using OWASP ZAP and it raises the following alert for all requests: X-Content-Type-Options Header Missing.
I understand the header, and why it is recommended. It is explained very well in this StackOverflow question.
However, I have found various references that indicate that it is only used for .js and .css files, and that it might actually be a bad thing to set the header for other MIME types:
Note: nosniff only applies to "script" and "style" types. Also applying nosniff to images turned out to be incompatible with existing web sites. [1]
Firefox ran into problems supporting nosniff for images (Chrome doesn't support it there). [2]
Note: Modern browsers only respect the header for scripts and stylesheets and sending the header for other resources (such as images) when they are served with the wrong media type may create problems in older browsers. [3]
The above references (and others) indicate that it is bad to simply set this header for all responses, but despite following any relevant-looking links and searching on Google, I couldn't find any reason behind this argument.
What are the risks/problems associated with setting X-Content-Type-Options: nosniff and why should it be avoided for MIME types other than text/css and text/javascript?
Or, if there are no risks/problems, why are Mozilla (and others) suggesting that there are?

The answer by Sean Thorburn was very helpful and pointed me to some good material, which is why I awarded the bounty. However, I have now done some more digging and I think I have the answer I need, which turns out to be the opposite of the answer given by Sean.
I will therefore answer my own questions:
The above references (and others) indicate that it is bad to simply set this header for all responses, but despite following any relevant-looking links and searching on Google, I couldn't find any reason behind this argument.
There is a misinterpretation here - this is not what they are indicating.
The resources I found during my research referred to the header only being respected for "script and style types", which I interpreted to mean files that were served as text/javascript or text/css.
However, what they were actually referring to was the context in which the file is loaded, not the MIME type it is served as. For example, <script> or <link rel="stylesheet"> tags.
Given this interpretation, everything makes a lot more sense and the answer becomes clear:
You need to serve all files with a nosniff header to reduce the risk of injection attacks from user content.
Serving up only CSS/JS files with this header is pointless, as these types of file would be acceptable in this context and don't need any additional sniffing.
However, for other types of file, by disallowing sniffing we ensure that only files whose MIME type matches the expected type are allowed in each context. This mitigates the risk of a malicious script being hidden in an image file (for example) in a way that would bypass upload checks and allow third-party scripts to be hosted from your domain and embedded into your site.
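For illustration, here is a minimal sketch of "serve all files with a nosniff header" in a Node/Express-style server (Express, the port and the public directory are my assumptions, not something from the question):
const express = require("express"); // assumption: an Express app, purely for illustration
const app = express();
// Send nosniff on every response, not only on scripts and stylesheets.
app.use((req, res, next) => {
  res.setHeader("X-Content-Type-Options", "nosniff");
  next();
});
// Static files still need a correct Content-Type; express.static derives it from the file extension.
app.use(express.static("public"));
app.listen(3000);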
What are the risks/problems associated with setting X-Content-Type-Options: nosniff and why should it be avoided for MIME types other than text/css and text/javascript?
Or, if there are no risks/problems, why are Mozilla (and others) suggesting that there are?
There are no problems.
The problems being described concern the risk of web browsers breaking compatibility with existing sites if they apply nosniff rules when accessing content. Mozilla's research indicated that enforcing nosniff on <img> tags would break a lot of sites due to server misconfigurations, and therefore the header is ignored in image contexts.
Other contexts (e.g. HTML pages, downloads, fonts, etc.) either don't employ sniffing, don't have an associated risk or have compatibility concerns that prevent sniffing being disabled.
Therefore, they are not suggesting that you should avoid using this header at all.
However, the issues that they talk about do result in an important footnote to this discussion:
If you are using a nosniff header, make sure you are also serving the correct Content-Type header!
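As a hedged sketch of that footnote, continuing the Express sketch above (the route and upload directory are hypothetical): when sniffing is disabled, the declared type has to match the real bytes, so an uploaded image should be sent with an explicit, correct Content-Type:
// Hypothetical route serving user-uploaded avatars.
app.get("/avatars/:id", (req, res) => {
  res.setHeader("X-Content-Type-Options", "nosniff");
  res.setHeader("Content-Type", "image/png"); // must match the actual file contents
  res.sendFile(req.params.id + ".png", { root: "uploads/avatars" });
});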
Some references that helped me to understand this a bit more fully:
The WhatWG Fetch standard that defines this header.
A discussion and code commit relating to this header for the webhint.io site checking tool.

I'm a bit late to the party, but here's my 2c.
This header makes a lot of sense when serving user-generated content, so that people can't upload a .png file that actually has some JS code in it and then use that .png in a <script> tag.
You don't necessarily have to set it for the static files that you have 100% control of.

I would stick to js, css, text/html, json and xml.
Google recommends using unguessable CSRF tokens provided by the protected resources for other content types, i.e. generate the token using a JS resource protected by the nosniff header.
You could add it to everything, but that would just be tedious, and as you mentioned above, you may run into compatibility and user issues.
https://www.chromium.org/Home/chromium-security/corb-for-developers

Related

Should Content-Security-Policy header be applied to all resources?

Is it necessary to apply the Content-Security-Policy Header to all resources on your domain (images/CSS/JavaScript) or just web pages?
For example, I noticed that https://content-security-policy.com/images/csp-book-cover-sm.png has a CSP header.
It is only necessary to apply it to web pages that are rendered in a browser, as CSP controls the allowed sources for content, framing, etc. of such pages. Typically you will only need to set it on non-redirect responses with the content type "text/html". As CSP can be set in a meta tag, another way to look at it is that it only makes sense on responses that could include a meta tag.
As it is often simpler, or only possible, to add a response header to all responses, CSPs are often applied to all content types and status codes even though they are not strictly needed. Additionally, it is recommended to add a CSP with a strict frame-ancestors to REST APIs to prevent drag-and-drop style clickjacking attacks; see https://cheatsheetseries.owasp.org/cheatsheets/REST_Security_Cheat_Sheet.html#security-headers.
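A rough sketch of that approach in a Node/Express-style app (the routes, policy values and directory are placeholders, not from the answer): the full policy goes on HTML pages, a strict frame-ancestors goes on the API, and static assets are left alone:
const express = require("express"); // assumption, for illustration only
const app = express();
// HTML pages rendered in the browser get the full policy.
app.get("/", (req, res) => {
  res.setHeader("Content-Security-Policy", "default-src 'self'");
  res.send("<!doctype html><html><body>Home</body></html>");
});
// A REST API can still get a strict frame-ancestors policy against clickjacking.
app.use("/api", (req, res, next) => {
  res.setHeader("Content-Security-Policy", "frame-ancestors 'none'");
  next();
});
// Images, CSS and JS do not strictly need the header.
app.use(express.static("assets"));
app.listen(3000);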

Where to specify the Content Security Policy (CSP): on a backend or on a frontend?

As far as I understand, there are two ways to specify the Content Security Policy:
On a server side via headers:
res.setHeader("content security-policy", "default-src: 'none';")
In an HTML-page via meta-tag:
<meta http-equiv="Content-Security-Policy" content="default-src 'none';" />
My questions:
What is the difference between these two techniques?
Is it enough to use just one of them?
Which one should I use? Backend, frontend, or both?
P.S. Thanks to How does Content Security Policy (CSP) work?, I know what CSP is and how it works. What I want to know, however, is where exactly it is better to set the CSP.
Delivering CSP via an HTTP header is the preferred way.
The meta tag has the same functionality, but for technical reasons it does not support some directives: frame-ancestors, report-uri, report-to and sandbox. Content-Security-Policy-Report-Only is also not supported in a meta tag.
In an SPA (Single Page Application), a meta tag is traditionally used for CSP delivery, because a lot of hosting providers do not allow you to manage HTTP headers.
With SSR (Server Side Rendering), an HTTP header is used more often.
You can use any technically convenient CSP delivery method (keeping in mind the limitations of the meta tag), but do not use both at the same time. Both policies will be enforced one after the other, so in case of differences the stricter one will effectively apply.
Note that:
The CSP meta tag should be placed in <head>, otherwise it will not work.
Changing the meta tag via JavaScript will result in both the old and the new policies being in effect.
For non-HTML files, the meta tag technically cannot be used to deliver CSP.
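As a small illustration of why the header is preferred (following the res.setHeader style from the question; the policy values are placeholders): the directives the meta tag cannot carry have to be delivered from the backend:
// frame-ancestors is ignored in a meta tag, so it must come as a header.
res.setHeader("Content-Security-Policy", "default-src 'self'; frame-ancestors 'none'");
// Report-Only mode is also header-only.
res.setHeader("Content-Security-Policy-Report-Only", "default-src 'self'; report-uri /csp-violations");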

CSP being applied despite no Content-Security-Policy header

I'm having trouble figuring out why CSP is being applied to a page, when inspection of the request/response shows no Content-Security-Policy header being sent (see screenshot).
The application is a Jenkins instance, serving some static HTML content generated by a job, and it's previously had the restrictions relaxed as described here: https://wiki.jenkins.io/display/JENKINS/Configuring+Content+Security+Policy. This fixed the original instances of the static content not showing because of the CSP restrictions. Now, however, it came back in a different place, and the original solution is ineffective (for obvious reasons, as there's no header to modify). Just in case, I've verified that the custom CSP value is still set inside Jenkins. The problem happens in all of Firefox 64, Chromium 71, and Chrome 55.
How can I figure out where the CSP originates? Have browsers started to apply it by default now? I thought the whole point of CSP was that it was opt-in and degraded to same-origin policy if absent.
EDIT: There's no <meta http-equiv="content-security-policy"> in the source either.
Figured it out eventually: it turned out to be a caching thing. Despite disabling the cache and doing non-cached reloads, apparently I wasn't doing it hard enough, and that was hiding the original request, which did indeed have a CSP header. After clicking enough things in DevTools and settings, I was able to get the browser to re-issue the original request and could see in the request view what was being applied.

Node js - Bundler for http2

I'm currently using Babel to transform ES6 code to ES5 and Browserify to bundle it for use in the browser. Now I've begun using an HTTP/2 server (Nginx).
HTTP/2 is more effective when it can load multiple small files instead of one big bundle.
How to best serve multiple js files instead of one big bundle?
I know that SystemJS can load multiple files in development without bundling, and for production you can use a depCache to define the dependency trees of the modules you are importing:
https://github.com/systemjs/systemjs/blob/master/docs/production-workflows.md
This approach would require you to ditch Browserify and change to SystemJS, as Browserify only produces bundles.
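For illustration, a depCache configuration could look roughly like this (the module names are invented and the exact API may differ between SystemJS versions; see the linked production-workflows document for the real syntax):
SystemJS.config({
  baseURL: "/js",
  // depCache lists each module's dependencies up front, so the loader can
  // request them in parallel instead of discovering them one round-trip at a time.
  depCache: {
    "app/main.js": ["app/view.js", "app/store.js"],
    "app/view.js": ["app/templates.js"]
  }
});
SystemJS.import("app/main.js");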
I see that you haven't received an answer to your question yet, so I'll try to help even though HTTP/2 is new to me too (which explains the long text of my answer :-)).
Good information about HTTP/2 can be found at https://blog.cloudflare.com/http-2-for-web-developers/. To summarize briefly:
stop concatenating files
stop inlining assets
stop sharding domains
continue minifying CSS/JavaScript files
continue loading from CDNs
continue DNS prefetching via <link rel='dns-prefetch' href='...' /> included in <head>
...
I want to add two additional points about the importance of setting HTTP headers Cache-Control and Link:
think about setting Cache-Control HTTP headers (especially max-age, expires and etag) on all content of your page. See details below. I strongly recommend reading the Caching Tutorial.
set the Link HTTP header to use the server push feature of HTTP/2.
Setting the Link: HTTP header is important for using the server push feature of HTTP/2 (see here, here). RFC 5988 and Section 19.6.1.2 of RFC 2068 describe the feature as already existing in HTTP 1.1. Everybody knows Content-Type: application/json, but in the same way one could set the less well-known Link: <...>; rel=prefetch, described here. For example, one can use
Link: </app/script.js>; rel=preload; as=script
Link: </fonts/font.woff>; rel=preload; as=font
Link: </app/style.css>; rel=preload; as=style
Such links, set on an HTML page (like index.html), inform the HTTP server to push the resources together with the response for your HTML page. As a result you save unneeded round-trips and later requests (after parsing the HTML file), and the resources are displayed immediately. You can consider setting Link headers on all images of your page to improve how quickly it becomes visible. See here for additional information with nice pictures, which demonstrate the advantage of HTTP/2 server push. If you use PHP then this code could be interesting for you.
Most web developers perform some optimization steps, directly or indirectly. These steps are done either during the build process or by setting HTTP headers in HTTP responses. One has to review these processes, switch some off and include others. I'll try to summarize my results.
You can consider using webpack instead of Browserify to exclude some dependencies from bundling. I don't know Browserify well enough, but I know that webpack supports externals (see here), which allows some modules to be loaded from a CDN. As a next step you can remove bundling altogether, but minify and set Cache-Control on all your modules.
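A minimal webpack sketch of the externals idea (the entry, output and jQuery mapping are just examples):
// webpack.config.js
module.exports = {
  entry: "./src/main.js",
  output: { filename: "bundle.js" },
  // Do not bundle jQuery: resolve require("jquery") to the global jQuery
  // variable provided by a CDN script tag instead.
  externals: {
    jquery: "jQuery"
  }
};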
It's strongly recommended to load the CSS/JS/fonts that you use, but didn't develop yourself, from a CDN. You should never merge such resources with your own JavaScript files (which is what you are probably doing with Browserify now). Loading Bootstrap CSS from your own server is not a good idea. One should better follow the advice from here and use a CDN instead of downloading all the files locally.
The main reason for using a CDN is very easy to understand if you examine the HTTP headers of the response from https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.1/jquery.min.js, for example. You will find something like cache-control: public, max-age=30672000 and expires: Mon, 06 Mar 2017 21:25:04 GMT. Chrome will typically show Status Code: 200 (from cache) and you will see no traffic over the wire. If you explicitly reload the page (by pressing F5) then you will see a 222-byte response with Status Code: 304. In other words, the file is typically not loaded at all. jQuery 2.2.1 stays the same forever; the next version will have another URL. The usage of HTTPS makes sure that the user really loads jQuery 2.2.1. If that's not enough, then you can use https://www.srihash.org/ to calculate the sha384 value and use the extended form of <link> or <script>:
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.1/jquery.min.js"
integrity="sha384-8C+3bW/ArbXinsJduAjm9O7WNnuOcO+Bok/VScRYikawtvz4ZPrpXtGfKIewM9dK"
crossorigin="anonymous"></script>
If the user opens your page with that link, the sha384 hash will be recalculated and verified (by Chrome and Firefox). If the file is not yet in the local cache, it will be loaded really quickly too. One short remark: loading the same file from https://code.jquery.com/jquery-2.2.1.min.js uses HTTP 1.1 today, but loading it from https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.1/jquery.min.js uses the HTTP/2 protocol. I recommend testing the protocol when choosing a CDN. You can find here the list of CDNs which now support HTTP/2. In the same way, loading Bootstrap from https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css uses HTTP 1.1 today, but one would use HTTP/2 by loading the same data from https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.6/css/bootstrap.min.css.
I spent a lot of time on CDNs to make clear that the main advantage of a CDN is the setting of caching headers on the HTTP response and the use of immutable URLs. You can do the same with your own modules too.
One should think about how long every piece of content returned from the server is cached. You can use URLs for your modules which contain the version number of your component (like /script/mycomponent1.1.12341) and change the last part of the version number every time the module changes. You can set a long enough max-age value in Cache-Control and your components will be cached by the client's web browser.
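A rough sketch of that versioning idea with a Node/Express-style route (the URL scheme, dist directory and max-age value are just examples):
const express = require("express"); // assumption, for illustration only
const app = express();
// The version segment changes whenever the module changes, so each URL is
// effectively immutable and can be cached by the client's browser for a long time.
app.get("/script/:version/:name", (req, res) => {
  res.setHeader("Cache-Control", "public, max-age=31536000, immutable");
  res.sendFile(req.params.name, { root: "dist" });
});
app.listen(3000);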
Finally, I'd recommend that you verify that you have installed the latest versions of OpenSSL and nginx. I also recommend checking your web site with http://www.webpagetest.org/ and https://www.ssllabs.com/ssltest/ to be sure that you haven't forgotten any simple steps.

How to prevent IIS from sending cache headers with ASHX files

My company uses ASHX files to serve some dynamic images. Since the content type is image/jpeg, IIS sends headers with them as would be appropriate for static images.
Depending on settings (I don't know all of the settings involved, hence the question), the headers may include any of:
Last-Modified, ETag, Expires
causing the browser to treat them as cacheable, which leads to all sorts of bugs with the user seeing stale images.
Is there a setting that I can set somewhere that will cause ASHX files to behave the same way as other dynamic pages, like ASPX files? Short of that, is there a setting that will allow me to, across the board, remove Last-Modified, ETag, Expires, etc. and add a no-cache header instead?
Only solutions I've found were:
1) Adding Response.CacheControl = "no-cache" to each handler.
I don't like this because this requires all of the handlers to change and for all developers to be aware of it.
2) Setting HTTP Header override on a folder where the handlers live
I don't like this one because it requires the handlers to be in their own directory. While this may be good practice in general, unfortunately our application is not structured that way, and I cannot just move them because it would break client-facing links.
If nobody provides a better answer I'll have to accept that these are the only two choices.
Add a randomly generated string to the request query. This will trick the browser into thinking it is a different call. Example: document.getElementById("myimgcontl").src="myimages.ashx?15923763";
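If you want the value to actually differ on every request rather than being hard-coded, a tiny sketch (reusing the element id from the example above):
// A timestamp (or Math.random()) makes each URL unique, so the browser
// will not reuse a cached copy of the image.
document.getElementById("myimgcontl").src = "myimages.ashx?" + Date.now();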
