How to remove 'unsafe-inline' and 'unsafe-eval' from a Content Security Policy? - node.js

Does anyone know how to remove 'unsafe-inline' and 'unsafe-eval' from a Content Security Policy using a nonce?
Does anyone have a resource on how to implement a nonce? I am unable to implement it properly and I keep getting errors in my node.js application.
I have tried helmet, meta tags, and setting the headers manually; nothing worked.

You can't remove 'unsafe-eval' with nonces; you'll need to rewrite or replace the code that relies on eval().
Nonces are hard. You will need to insert the nonce into the CSP and into the page dynamically, and it needs to change with every page load to be secure. Frameworks don't always permit this. Also remember that inline event handlers aren't nonceable and need to be rewritten. I would suggest trying to move all static script and style into separate files, and then see if you still need nonces for the rest or if you can handle it in other ways.
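For the node.js side, here is a minimal sketch of per-request nonces in an Express app using the helmet package; the middleware ordering, the cspNonce name and the template wiring are assumptions, not a drop-in fix for your application:

```js
const crypto = require('crypto');
const express = require('express');
const helmet = require('helmet');

const app = express();

// Generate a fresh nonce for every response and expose it to templates.
app.use((req, res, next) => {
  res.locals.cspNonce = crypto.randomBytes(16).toString('base64');
  next();
});

// helmet accepts functions as directive values, evaluated per request.
app.use(
  helmet.contentSecurityPolicy({
    directives: {
      scriptSrc: ["'self'", (req, res) => `'nonce-${res.locals.cspNonce}'`],
      styleSrc: ["'self'", (req, res) => `'nonce-${res.locals.cspNonce}'`],
    },
  })
);

// The same nonce must be rendered into every inline tag, e.g. <script nonce="...">.
app.get('/', (req, res) => {
  res.send(`<script nonce="${res.locals.cspNonce}">console.log('inline allowed');</script>`);
});

app.listen(3000);
```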

https://github.com/wyday/mod_cspnonce is an Apache module that can be used to implement nonces in place of 'unsafe-inline'.

Related

What is a good replacement for 'styled-components' in React application to produce secure web app?

I came to a project that uses styled-components in the frontend written in React.
It seems the decision to use it was quite unfortunate, as this library ignores security aspects and generates inline styles.
Currently, in order to have the app running, security must be weakened by specifying style-src 'unsafe-inline' in the Content Security Policy, which is not acceptable in enterprise applications (at least in our corporation).
It seems there is no workaround with this library except using a nonce when server-side rendering, but we currently have a static web app and would prefer not to add SSR, so this does not seem to be the right direction for us.
There is quite a lot of code, and rewriting the app completely would take weeks, maybe even a few months.
Any experience with gradual moving away from styled-components? Is there some less painful way to get the security back?
Inline styles can be of three kinds:
<style>...</style> blocks; these can be allowed with a 'hash-value' source.
style attributes in tags like <tag style='color:green; padding:0;'>; there is no way to allow these other than the 'unsafe-inline' token.
JavaScript calls to HTMLElement.setAttribute('style', ...); there is a workaround (shown below) of substituting HTMLElement.setAttribute('style', ...) with assignments of the form HTMLElement.style.property = val.
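A minimal sketch of that third workaround; the element and property names are only illustrative:

```js
// Sketch: avoid element.setAttribute('style', ...), which CSP treats as an inline
// style attribute and blocks without 'unsafe-inline'; set CSSOM properties instead.
const box = document.createElement('div');

// Blocked under style-src without 'unsafe-inline':
// box.setAttribute('style', 'color: green; padding: 0;');

// Allowed: direct CSSOM property assignments are not subject to the inline-style check.
box.style.color = 'green';
box.style.padding = '0';

document.body.appendChild(box);
```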

Reuse CSP script nonce throughout the session?

This is a follow-up of https://github.com/w3c/webappsec-csp/issues/215. arturjanc suggested moving this discussion to Stack Overflow.
We are trying to implement CSP for scripts in JSF and don't know if it is safe to reuse a script nonce throughout the session. Or, as arturjanc suggested, have the original document send its current nonce to the server which generates future responses.
Assuming that it is unsafe to reuse the nonce throughout the session, would it be okay to just include the initial nonce in a hidden form input as currently implemented here? (Ignoring the CSP header/XSS injection vulnerabilities for the moment - it's just a prototype.)
@arturjanc: Would you like to chime in again?
Edit: Additional thoughts regarding arturjanc's answer:
Could you please elaborate a bit more on how to implement per-response nonces in a typical present-day JSF application, i.e. one with a single full page load at the very beginning and XHR-only communication afterwards?
If I understand you correctly, your suggestion would then be to always resend the initially generated nonce in every XHR request. However, in practice this is effectively the same as a per-session nonce, isn't it? Just more complicated to implement.
Strictly implementing per-response nonces would imply that subsequent responses must also include all nonces created earlier in that session, so we would have to track all nonces of the session somehow.
Setting a new CSP header for each XHR response containing only the newly created per-response nonce would probably not work, because browsers merge multiple CSP headers by intersection, i.e. Content-Security-Policy: 'nonce-1' in response 1 and Content-Security-Policy: 'nonce-2' in response 2 would render both nonces invalid after response 2.
Sadly, there is no single correct answer to your questions: it's not obviously wrong to have a per-session nonce, but it introduces the risk that whenever a nonce can be leaked by an attacker it will be reusable on another page load, allowing the attacker to exploit an XSS bug that would otherwise be mitigated by CSP.
Specifically, when you include the nonce in a hidden input field, it allows the value to be exfiltrated by abusing CSS selectors; note that the same attack would not work against the script#nonce attribute because the browser hides the value of that attribute from the DOM to protect against such attacks.
My recommendation would be two-fold:
Try to make the nonce per-response rather than per-session. This way, even if the nonce can be exfiltrated, it will be difficult for an attacker to re-use it.
If you need to re-use the nonce to allow asynchronously fetched markup from the server to contain scripts with the correct nonce value, do this without copying the nonce to the DOM. For example, if you use an XHR to a URL with a nonce parameter, do something like xhr.open("GET", "/my/url?nonce=" + document.currentScript.nonce).
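A minimal sketch of that second recommendation, assuming it runs inside an inline <script nonce="..."> block; the URL and parameter name are only illustrative:

```js
// Read the nonce from the executing script itself; script.nonce is hidden from the DOM
// and from CSS attribute selectors, unlike a hidden form input.
const nonce = document.currentScript.nonce;

// Send it to the server so the markup it returns can carry scripts with a usable nonce.
const xhr = new XMLHttpRequest();
xhr.open('GET', '/my/url?nonce=' + encodeURIComponent(nonce));
xhr.onload = () => {
  console.log(xhr.responseText);
};
xhr.send();
```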

Implementing CSP into ASP web app

We're trying to implement CSP on one of our sites, and although we understand how to allow or disallow scripts, we're still confused about the nonce/sha* part. We have outside scripts such as Bootstrap and jQuery that come with integrity="sha*" attributes, and we understand that inline scripts or styles should be avoided: everything inline should be refactored into an external file.
The question we have is: do we create a sha* hash/nonce-* for every js or css file in our site (not external), or is just putting 'self' after script-src/style-src in the Content-Security-Policy sufficient?
Thanks for any help on this.
You don't need nonces for linked JavaScript files; nonces authorize inline <script> tags. The 'self' source (or an explicit URL) authorizes the linked files. The cleanup you have to do is get rid of "onclick" attributes in the HTML. The SHA hashes are there to verify the integrity of the linked files.
So:
Clean up inline "onclick" (and similar) and "style" attributes.
Use nonces for inline script (and style) tags.
Use SHA hashes for linked scripts from outside sources.
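A sketch of what such a policy could look like, expressed as Node.js/Express middleware for consistency with the rest of this page (in an ASP app you would set the same header value via web.config or code); the CDN host and nonce value are placeholders:

```js
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy',
    [
      // 'self' covers your own linked .js/.css files; no per-file hash or nonce needed.
      "script-src 'self' https://cdn.example.com 'nonce-R4nd0mPerRequest'",
      // SRI (integrity="sha384-...") on the <script>/<link> tags still verifies CDN files.
      "style-src 'self' https://cdn.example.com 'nonce-R4nd0mPerRequest'",
    ].join('; ')
  );
  next();
});

app.listen(3000);
```

In practice the nonce value would be regenerated per request, as shown in the first answer on this page.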

Safe implementation of script tag hack to do XSS?

Like a lot of developers, I want to make JavaScript served up by Server "A" talk to a web service on Server "B" but am stymied by the current incarnation of same origin policy. The most secure means of overcoming this (that I can find) is a server script that sits on Server "A" and acts as a proxy between it and "B". But if I want to deploy this JavaScript in a variety of customer environments (RoR, PHP, Python, .NET, etc. etc.) and can't write proxy scripts for all of them, what do I do?
Use JSONP, some people say. Well, Doug Crockford pointed out on his website and in interviews that the script tag hack (used by JSONP) is an unsafe way to get around the same origin policy. There's no way for the script being served by "A" to verify that "B" is who they say they are and that the data it returns isn't malicious or will capture sensitive user data on that page (e.g. credit card numbers) and transmit it to dastardly people. That seems like a reasonable concern, but what if I just use the script tag hack by itself and communicate strictly in JSON? Is that safe? If not, why not? Would it be any more safe with HTTPS? Example scenarios would be appreciated.
Addendum: Support for IE6 is required. Third-party browser extensions are not an option. Let's stick with addressing the merits and risks of the script tag hack, please.
Currently browser vendors are split on how cross-domain JavaScript should work. A secure and easy-to-use option is Flash's crossdomain.xml file. Most languages have cross-domain proxies written for them, and they are open source.
A more nefarious solution would be to use XSS the way the Samy worm spread. XSS can be used to "read" a remote domain using XMLHttpRequest. XSS isn't even required if the other domain has added a <script src="https://YOUR_DOMAIN"></script>; a script tag like this allows you to evaluate your own JavaScript in the context of another domain, which is identical to XSS.
It is also important to note that even with the restrictions of the same origin policy you can get the browser to transmit requests to any domain; you just can't read the response. This is the basis of CSRF. You could write invisible image tags to the page dynamically to get the browser to fire off an unlimited number of GET requests. This use of image tags is how an attacker obtains document.cookie using XSS on another domain. CSRF POST exploits work by building a form and then calling .submit() on the form object (see the sketch below).
To understand the same origin policy, CSRF and XSS better, you should read the Google Browser Security Handbook.
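A minimal sketch of the form-based CSRF POST just described, with placeholder target URL and field names; this is the textbook pattern, not code from any particular site:

```js
// The attacker's page builds a form targeting the victim site and submits it;
// the browser attaches the victim's cookies automatically, but the response is unreadable.
const form = document.createElement('form');
form.method = 'POST';
form.action = 'https://victim.example.com/transfer'; // placeholder URL

const amount = document.createElement('input');
amount.type = 'hidden';
amount.name = 'amount'; // placeholder field
amount.value = '1000';
form.appendChild(amount);

document.body.appendChild(form);
form.submit(); // this is exactly why state-changing endpoints need CSRF tokens
```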
Take a look at easyXDM, it's a clean javascript library that allows you to communicate across the domain boundary without any server side interaction. It even supports RPC out of the box.
It supports all 'modern' browsers, as well as IE6, with transit times < 15 ms.
A common usecase is to use it to expose an ajax endpoint, allowing you to do cross-domain ajax with little effort (check out the small sample on the front page).
What if I just use the script tag hack by itself and communicate strictly in JSON? Is that safe? If not, why not?
Lets say you have two servers - frontend.com and backend.com. frontend.com includes a <script> tag like this - <script src="http://backend.com/code.js"></script>.
When the browser evaluates code.js, it is considered part of frontend.com and NOT part of backend.com. So, if code.js contained XHR code to communicate with backend.com, it would fail.
Would it be any more safe with HTTPS? Example scenarios would be appreciated.
If you just converted your <script src="http://backend.com/code.js"> to https, it would NOT be any more secure. If the rest of your page is plain http, an attacker could easily man-in-the-middle the page and change that https back to http - or worse, include his own JavaScript file.
If you convert the entire page and all its components to https, it would be more secure. But if you are paranoid enough to do that, you should also be paranoid enough NOT to depend on an external server for your data. If an attacker compromises backend.com, he has effectively got enough leverage over frontend.com, frontend2.com and all of your websites.
In short, https is helpful, but it won't help you one bit if your backend server gets compromised.
So, what are my options?
Add a proxy server to each of your client applications. You don't need to write any code; your web server can do that for you automatically. If you are using Apache, look up mod_rewrite.
If your users are using the latest browsers, you could consider using Cross-Origin Resource Sharing (see the sketch after this list).
As The Rook pointed out, you could also use Flash + Crossdomain. Or you could use Silverlight and its equivalent of Crossdomain. Both technologies allow you to communicate with javascript - so you just need to write a utility function and then normal js code would work. I believe YUI already provides a flash wrapper for this - check YUI3 IO
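A minimal sketch of the CORS option, assuming Server "B" is a Node.js HTTP server; the origin and host names are placeholders:

```js
const http = require('http');

// Server "B": allow pages from Server "A" to read responses cross-origin.
http
  .createServer((req, res) => {
    res.setHeader('Access-Control-Allow-Origin', 'https://frontend.example.com');
    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify({ ok: true }));
  })
  .listen(8080);

// On Server "A"'s page, an ordinary XHR/fetch to "B" now succeeds in CORS-capable browsers:
// fetch('https://backend.example.com:8080/').then(r => r.json()).then(console.log);
```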
What do you recommend?
My recommendation is to create a proxy server, and use https throughout your website.
Apologies to all who attempted to answer my question. It proceeded under a false assumption about how the script tag hack works. The assumption was that one could simply append a script tag to the DOM and that the contents of that appended script tag would not be restricted by the same origin policy.
If I'd bothered to test my assumption before posting the question, I would've known that it's the source attribute of the appended tag that's unrestricted. JSONP takes this a step further by establishing a protocol that wraps traditional JSON web service responses in a callback function.
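For readers landing here, a minimal sketch of the JSONP pattern described above; the host name, endpoint and callback name are only illustrative:

```js
// The page on Server "A" defines a callback, then appends a <script> tag pointing at "B".
// The src of a script tag is not restricted by the same origin policy.
function handleData(data) {
  // Server "B" wraps its JSON response as: handleData({"user": "alice"});
  console.log(data.user);
}

const tag = document.createElement('script');
tag.src = 'https://backend.example.com/api?callback=handleData';
document.head.appendChild(tag);
```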
Regardless of how the script tag hack is used, however, there is no way to screen the response for malicious code since browsers execute whatever JavaScript is returned. And neither IE, Firefox nor Webkit browsers check SSL certificates in this scenario. Doug Crockford is, so far as I can tell, correct. There is no safe way to do cross domain scripting as of JavaScript 1.8.5.

Cross-site scripting from an Image

I have a rich-text editor on my site that I'm trying to protect against XSS attacks. I think I have pretty much everything handled, but I'm still unsure about what to do with images. Right now I'm using the following regex to validate image URLs, which I'm assuming will block inline javascript XSS attacks:
"https?://[-A-Za-z0-9+&##/%?=~_|!:,.;]+"
What I'm not sure of is how open this leaves me to XSS attacks from the remote image. Is linking to an external image a serious security threat?
The only thing I can think of is that the URL entered references a resource that returns "text/javascript" as its MIME type instead of some sort of image, and that javascript is then executed.
Is that possible? Is there any other security threat I should consider?
Another thing to worry about is that you can often embed PHP code inside an image and upload it. The only thing an attacker would then have to do is find a way to include the image (only the PHP code will get executed; the rest is just echoed). Checking the MIME type won't help you here, because the attacker can easily upload an image with the correct first few bytes, followed by arbitrary PHP code. (The same is somewhat true for HTML and JavaScript code.)
If the end viewer is in a password-protected area and your app contains URLs that initiate actions based on GET requests, an image URL can make a request on the user's behalf.
Examples:
src="http://yoursite.com/deleteuser.xxx?userid=1234"
src="http://yoursite.com/user/delete/1234"
src="http://yoursite.com/dosomethingdangerous"
In that case, look at the context around it: do users only supply a URL? If so, it's fine to just validate the URL's semantics and MIME type. If the user also gets to input tags of some sort, you'll have to make sure that they can't be manipulated to do anything other than display images.