I'm having trouble figuring out why CSP is being applied to a page when inspection of the request/response shows no Content-Security-Policy header being sent.
The application is a Jenkins instance serving some static HTML content generated by a job, and it previously had its restrictions relaxed as described here: https://wiki.jenkins.io/display/JENKINS/Configuring+Content+Security+Policy. That fixed the original cases of the static content not showing because of the CSP restrictions. Now, however, the problem has come back in a different place, and the original solution is ineffective (for obvious reasons, as there's no header to modify). Just in case, I've verified that the custom CSP value is still set inside Jenkins. The problem happens in all of Firefox 64, Chromium 71, and Chrome 55.
How can I figure out where the CSP originates? Have browsers started to apply one by default now? I thought the whole point of CSP was that it was opt-in and degraded to the same-origin policy if absent.
EDIT: There's no <meta http-equiv="content-security-policy"> in the source either.
Figured it out eventually: it turned out to be a caching issue. Despite doing what I thought were non-cached reloads, apparently I wasn't doing it hard enough, and the cache was hiding the original request, which did indeed have a CSP header. After clicking enough things in DevTools and settings, I was able to get the browser to re-issue the original request and could then see the header being applied in the request view.
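In case it helps anyone else, here is a quick way to confirm which headers the server is actually sending, bypassing the HTTP cache entirely. This is just a sketch to run in the DevTools console on the affected page (it assumes the page URL is same-origin and fetchable):

    // Fetch a fresh copy of the current page, skipping the HTTP cache,
    // and print the CSP header the server really sends (null if absent).
    fetch(window.location.href, { cache: 'no-store' })
      .then((res) => console.log(res.headers.get('content-security-policy')));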
Is it necessary to apply the Content-Security-Policy Header to all resources on your domain (images/CSS/JavaScript) or just web pages?
For example, I noticed that https://content-security-policy.com/images/csp-book-cover-sm.png has a CSP header.
It is only necessary to apply it to web pages that are rendered in a browser, as CSP controls the allowed sources for content, framing, etc. of such pages. Typically you will only need to set it on non-redirect responses with content type text/html. Since CSP can also be set in a meta tag, another way to look at it is that it only makes sense on responses that could include a meta tag.
As it is often simpler, or the only available option, to add a response header to all responses, CSP is often applied to all content types and status codes even though that is not strictly needed. Additionally, it is recommended to add a CSP with a strict frame-ancestors directive to REST APIs to prevent drag-and-drop style clickjacking attacks; see https://cheatsheetseries.owasp.org/cheatsheets/REST_Security_Cheat_Sheet.html#security-headers.
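To illustrate, a minimal sketch using Node's built-in http module (the port and response body are placeholders; the same idea applies to any server or framework):

    const http = require('http');

    // Send a locked-down CSP with every API response so the JSON can never
    // be framed, which blocks drag-and-drop style clickjacking.
    http.createServer((req, res) => {
      res.setHeader('Content-Security-Policy', "frame-ancestors 'none'");
      res.setHeader('Content-Type', 'application/json');
      res.end(JSON.stringify({ ok: true }));
    }).listen(3000);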
I'm developing a Chrome extension that renders a sidebar iframe in Gmail.
A while back, we ran into a Content Security Policy issue that broke our extension, hitting a well-known error that shows up in your console as
Refused to frame '[OUR DOMAIN]' because it violates the following Content Security Policy directive: ...
To fix it, we had to rewrite the CSP header as explained in this blog post.
However, recently, we've been getting the same error, even after having implemented the fix suggested in the blog post.
Even though we added the chrome.webRequest.onHeadersReceived listener to overwrite the CSP header, our domain still gets blocked. As a result, our sidebar does not load.
Notes and attempts:
Note that upon refreshing the entire page, the sidebar sometimes loads successfully, roughly 50% of the time, which leads me to believe that we're not adding the above event listener fast enough. To address this, I pulled the event listener out of our background class and placed it at the top of the background JS script. However, the issue still persists.
We've tried to remove and re-add the extension but that didn't yield any permanent results.
Throughout testing, I also noticed that adding another extension that implements a similar override on the CSP header will cause our extension to break. Removing and re-adding our extension brings us back to the 50-50 issue in the first note.
[Update]
It seems that only being able to reproduce it 50% of the time has to do with the version of Chrome. After upgrading to 72, I'm now able to reproduce the issue consistently. However, doing a hard refresh on the page will sometimes allow it to load.
More details on our setup: we have a background.html that contains HTML templates, one of which contains an iframe tag. To get the template with the iframe, we do a sendMessage, grab the response (the template as a string), convert it to a DOM element via jQuery, append it to the body, and change the iframe's src attribute. The iframe loads fine initially, but when you click any link in the iframe that redirects to another page, the same error above shows up.
I can provide snippets and more info if needed.
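For reference, this is the shape of the override we're using; a simplified sketch, with the domain, URL pattern, and exact directive rewrite as placeholders rather than our real values:

    // background.js -- registered at the top level so it runs as soon as the
    // background page loads. Requires the webRequest and webRequestBlocking
    // permissions plus a matching host permission in the manifest.
    chrome.webRequest.onHeadersReceived.addListener(
      (details) => {
        const responseHeaders = details.responseHeaders.map((header) => {
          if (header.name.toLowerCase() === 'content-security-policy') {
            // Placeholder rewrite: allow our domain in the frame-src directive.
            header.value = header.value.replace(
              'frame-src',
              'frame-src https://our-domain.example'
            );
          }
          return header;
        });
        return { responseHeaders };
      },
      { urls: ['https://mail.google.com/*'] },
      // 'extraHeaders' only exists in newer Chrome versions (72+), where it is
      // required before some security-sensitive headers become visible to the
      // listener -- possibly related to the behavior change we saw after
      // upgrading to 72 (an assumption, not something we've confirmed).
      ['blocking', 'responseHeaders', 'extraHeaders']
    );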
I've been running some penetration tests using OWASP ZAP and it raises the following alert for all requests: X-Content-Type-Options Header Missing.
I understand the header, and why it is recommended. It is explained very well in this StackOverflow question.
However, I have found various references that indicate that it is only used for .js and .css files, and that it might actually be a bad thing to set the header for other MIME types:
Note: nosniff only applies to "script" and "style" types. Also applying nosniff to images turned out to be incompatible with existing web sites. [1]
Firefox ran into problems supporting nosniff for images (Chrome doesn't support it there). [2]
Note: Modern browsers only respect the header for scripts and stylesheets and sending the header for other resources (such as images) when they are served with the wrong media type may create problems in older browsers. [3]
The above references (and others) indicate that it is bad to simply set this header for all responses, but despite following any relevant-looking links and searching on Google, I couldn't find any reason behind this argument.
What are the risks/problems associated with setting X-Content-Type-Options: nosniff and why should it be avoided for MIME types other than text/css and text/javascript?
Or, if there are no risks/problems, why are Mozilla (and others) suggesting that there are?
The answer by Sean Thorburn was very helpful and pointed me to some good material, which is why I awarded the bounty. However, I have now done some more digging and I think I have the answer I need, which turns out to be the opposite of the answer given by Sean.
I will therefore answer my own questions:
The above references (and others) indicate that it is bad to simply set this header for all responses, but despite following any relevant-looking links and searching on Google, I couldn't find any reason behind this argument.
There is a misinterpretation here - this is not what they are indicating.
The resources I found during my research referred to the header only being respected for "script and style types", which I interpreted to mean files that were served as text/javascript or text/css.
However, what they are actually referring to is the context in which the file is loaded, not the MIME type it is being served as. For example, <script> or <link rel="stylesheet"> tags.
Given this interpretation, everything makes a lot more sense and the answer becomes clear:
You need to serve all files with a nosniff header to reduce the risk of injection attacks from user content.
Serving only CSS/JS files with this header is pointless, as files correctly served as text/css or text/javascript already match the type expected in those contexts and don't need any additional sniffing.
However, for other types of file, by disallowing sniffing we ensure that only files whose MIME type matches the expected type are allowed in each context. This mitigates the risk of a malicious script being hidden in an image file (for example) in a way that would bypass upload checks and allow third-party scripts to be hosted from your domain and embedded into your site.
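To make that concrete, here is the shape of the attack that nosniff shuts down (every name here is hypothetical):

    // An attacker has uploaded "avatar.png" to victim.example. The file
    // passed the image upload checks but actually contains JavaScript.
    const s = document.createElement('script');
    s.src = 'https://victim.example/uploads/avatar.png'; // really JS inside
    document.head.appendChild(s);

    // Without nosniff, the browser may execute that response as a script even
    // though it is served as image/png. With X-Content-Type-Options: nosniff,
    // a script context requires a JavaScript MIME type, so the load is refused.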
What are the risks/problems associated with setting X-Content-Type-Options: nosniff and why should it be avoided for MIME types other than text/css and text/javascript?
Or, if there are no risks/problems, why are Mozilla (and others) suggesting that there are?
There are no problems.
The problems being described are issues regarding the risk of the web browser breaking compatibility with existing sites if they apply nosniff rules when accessing content. Mozilla's research indicated that enforcing a nosniff option on <img> tags would break a lot of sites due to server misconfigurations and therefore the header is ignored in image contexts.
Other contexts (e.g. HTML pages, downloads, fonts, etc.) either don't employ sniffing, don't have an associated risk or have compatibility concerns that prevent sniffing being disabled.
Therefore, they are not suggesting that you should avoid the use of this header at all.
However, the issues that they talk about do result in an important footnote to this discussion:
If you are using a nosniff header, make sure you are also serving the correct Content-Type header!
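For example, a minimal sketch assuming an Express server (any server that sets accurate Content-Type values works the same way):

    const express = require('express');
    const app = express();

    // nosniff on every response...
    app.use((req, res, next) => {
      res.setHeader('X-Content-Type-Options', 'nosniff');
      next();
    });

    // ...combined with accurate Content-Type values: express.static derives
    // the type from each file's extension, which is exactly what nosniff
    // relies on -- the declared type must be the real one.
    app.use(express.static('public'));
    app.listen(3000);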
Some references that helped me to understand this a bit more fully:
The WhatWG Fetch standard that defines this header.
A discussion and code commit relating to this header for the webhint.io site checking tool.
I'm a bit late to the party, but here's my 2c.
This header makes a lot of sense when serving User Generated Content. So people don't upload a .png file that actually has some JS code in it, and then use that .png in a <script> tag.
You don't necessarily have to set it for the static files that you have 100% control of.
I would stick to js, css, text/html, json and xml.
Google recommends using unguessable CSRF tokens provided by the protected resources for other content types, i.e. generating the token using a JS resource protected by the nosniff header.
You could add it to everything, but that would just be tedious and, as you mentioned above, you may run into compatibility and user issues.
https://www.chromium.org/Home/chromium-security/corb-for-developers
When using JS/CSS from an unsecured CDN on an https page:
A. Some pages block loading the js/css, and the missing JS code causes runtime errors.
B. Some pages do not block loading the js/css, and the page is shown as entirely insecure content.
What is the difference between these behaviors?
Even using the same browser (Chrome 51.0.2704.103 (64-bit) on Mac OS X) and viewing the same page, the behavior sometimes changes...
Do some response headers of index.html (or similar) control this behavior?
Does anyone know?
Example:
My friend created the page https://cfn-iot-heatmap.herokuapp.com/; previously, its behavior was like A: the content was totally whited out.
In this case, the insecure CDN contents are:
https://cdn.leafletjs.com/leaflet-0.6.4/leaflet.js
https://cdn.leafletjs.com/leaflet-0.6.4/leaflet.css
I took the source code of this page and deployed it to my own Heroku repository, https://kinkyujitai.herokuapp.com/, where it is shown like B.
Curiously, after I deployed my repository, my friend's page also started behaving like B: it shows a security warning, but the content is displayed.
It is very curious, so I want to know the reason for this phenomenon...
From a secure (https) origin, you should always include secure elements.
If you don't, the browser can block the insecure request and/or remove the visual indication of security.
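There is also a response header that directly influences this behavior: the CSP upgrade-insecure-requests directive asks the browser to rewrite http:// subresource URLs to https:// before fetching them, instead of blocking them or marking the page insecure (it only helps if the CDN actually serves https). A minimal sketch with Node's built-in http module:

    const http = require('http');

    // Serve the page with upgrade-insecure-requests so http:// subresources
    // (scripts, styles, images) are automatically requested over https://.
    http.createServer((req, res) => {
      res.setHeader('Content-Security-Policy', 'upgrade-insecure-requests');
      res.setHeader('Content-Type', 'text/html; charset=utf-8');
      res.end('<html><!-- page referencing http:// CDN assets --></html>');
    }).listen(3000);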
We've partnered with a company whose website will display our content in an IFRAME. I understand what the header is and what it does and why, what I need help with is tracking down where it's coming from!
Windows Server 2003/IIS6
Container page: https://testDomain.com/test.asp
IFRAME Content: https://ourDomain.com/index.asp?lots_of_parameters,_wheeeee
Testing in Firefox 24 with Firebug installed. (IE and Chrome do the same thing.) Also running Fiddler so I can watch network traffic while I'm at it.
For simplicity's sake, I created a page with nothing on it but the IFRAME in question - same physical server, different domain/site - and it failed with
Load denied by X-Frame-Options: https://www.google.com/ does not permit cross-origin framing.
(That's in the Firebug console.) I'm confused because:
Google is not referenced anywhere in the containing app, or in the IFRAMEd app. All javascript libraries are kept locally; there is no analytics in the app. No Google, nowhere.
The containing page has NOTHING on it, except the IFRAME. No html tags, no head tag, no body tag. IFRAME. That's it.
The X-FRAME-OPTIONS header does not exist in IIS on the server: not at the "Websites" node, not in the individual sites.
So where the h-e-double-sticks is that coming from? What am I missing?
Interesting point: if I change the IFRAME URL from https to http, it works. Given the nature of the data, though, SSL is required.
You might check global.asax.cs; the app could be adding the header to every response automatically. If you search the whole app for "x-frame-options", you might find something as well.