If a hash-source is used in script-src in a browser that does not understand hash-source, will the browser ignore all of script-src, or even the entire CSP? Or will it only ignore the hash-source part?
More generally, do browsers implement CSP in forward compatible manner?
What oreoshake stated about backward compatibility is accurate. The process of determining an element match is described in section 6.6.2.2 of the CSP draft standard: In the presence of hash-source or nonce-source, unsafe-inline is ignored by conforming user agents:
A source list allows all inline behavior of a given type if it contains the keyword-source expression 'unsafe-inline', and does not override that expression as described in the following algorithm:
[...]
If expression matches the nonce-source or hash-source grammar, return "Does Not Allow".
Furthermore, CSP 2 specifies the process of parsing a source list with unknown tokens as follows:
For each token returned by splitting source list on spaces, if the token matches the grammar for source-expression, add the token to the set of source expressions.
Otherwise, it should be ignored. So clearly the authors intended at least a certain level of forward compatibility.
Browsers that do not understand hash sources may emit a warning in the console, but they may not. The recommended approach is either to use user-agent sniffing to detect support, or to send both 'unsafe-inline' and your hash-source values.
User agents that understand hash sources will ignore the 'unsafe-inline', and those that do not will fall back to the 'unsafe-inline'. So it's backwards compatible.
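For concreteness, here is a minimal sketch of that fallback approach (Python is used purely for illustration; the inline script text and the rest of the policy are hypothetical, not from the question):

```python
# Send both 'unsafe-inline' and the hash of each inline script.
# Hash-aware browsers ignore 'unsafe-inline'; older browsers ignore
# the hash-source and honor 'unsafe-inline'.
import base64
import hashlib

inline_script = "console.log('hello');"  # exact text between the <script> tags
digest = base64.b64encode(
    hashlib.sha256(inline_script.encode("utf-8")).digest()
).decode()

csp = f"script-src 'self' 'unsafe-inline' 'sha256-{digest}'"
print("Content-Security-Policy:", csp)
```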
I'd like to specify a hash to my CSP of an allowed font.
Currently my default-src is none, then for font-src I have 'self'.
My font is currently included as data, like so: "data:font/ttf;base64,AAEAAAARAQ..."
Instead of just adding data: to my font-src, I'd like to add the hash. I'm not sure if this is possible, or how to properly do it. I've taken the sha256 hash of "data:font/ttf;base64,AAEAAAARAQ..." and included it as 'sha256-asldfkj' in my font-src, but that did not work.
Any insight would be greatly appreciated!
1) Hash values such as 'sha256-he03geRc75f', 'sha384-nd78ro9==', etc. apply to inline scripts and inline styles only; see the second "Note" in section 5 of the CSP3 spec.
2) The CSP3 spec did extend hash usage to external scripts (though Firefox still has a bug with this). Note that in this case you have to use the integrity= attribute on the tag.
Therefore hashes are not applicable to fonts because of point 1) above (plus, you probably forgot to use the integrity= attribute).
A data: URL is treated as a URL to an external resource, not as inline content, so hashes are not applicable because of point 2) above either.
Note: Chrome does support hash values for allowing external scripts loaded from data: URLs.
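Since hashes don't apply here, the usual alternative is the one you already mentioned: allow data: URLs in font-src. A minimal sketch (Python only for illustration; the directive values mirror the policy described in the question):

```python
# Hashes cannot be used for fonts, so permit the base64-embedded font
# by allowing the data: scheme in font-src.
headers = {
    "Content-Security-Policy": "default-src 'none'; font-src 'self' data:"
}
```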
I'm trying to lock down my pages with a content security policy (CSP). The default CSP is too restrictive (and I cannot change the code to make it compliant, as it comes from a 3rd party), so I'm trying to define the minimal set of permissions in the CSP. To that end, I'd like to use style-src-attr and script-src-attr. And I'd like to use these with a nonce. I can see how to specify the nonce for both of these in the CSP. What I'm not sure about is how to specify the nonce for the html element (in the case of style-src-attr) and the javascript object (in the case of script-src-attr). I looked for an example, but couldn't find anything. Please give an example of how this could be done.
I stumbled over this question in actually preparing a lecture on the topic. The answer to the question is: you cannot.
Looking at the CSP spec (https://www.w3.org/TR/CSP3/#match-element-to-source-list), you'll see that only script or style tags can be nonced. The -attr variants do not apply to stand-alone elements (script tags, style tags, or links to CSS files), as per the spec (https://w3c.github.io/webappsec-csp/#directive-script-src-attr)
The script-src-attr directive applies to event handlers and, if present, it will override the script-src directive for relevant checks.
Bottom line: in the current specification, it should not be possible to allow event handlers through nonces. It is possible to rely on 'unsafe-hashes' and put the hashes of known event handlers in there, but even that is not fully supported in browsers (Firefox and Safari lack support, see https://caniuse.com/mdn-http_headers_csp_content-security-policy_unsafe-hashes)
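If you do want to try the 'unsafe-hashes' route, here is a hedged sketch (Python only for illustration; the handler text and policy are hypothetical, not from the question):

```python
# Hash the exact text of a known inline event handler and allow it via
# script-src-attr with 'unsafe-hashes'. Only works in browsers that
# support 'unsafe-hashes'.
import base64
import hashlib

handler = "doSubmit()"  # the exact value of, e.g., onclick="doSubmit()"
digest = base64.b64encode(
    hashlib.sha256(handler.encode("utf-8")).digest()
).decode()

csp = f"script-src-attr 'unsafe-hashes' 'sha256-{digest}'"
print("Content-Security-Policy:", csp)
```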
I'm setting up a web environment where users can create links but they can only modify the href attribute, not type in the <a> tag themselves.
So basically any href value is allowed; http/ftp/mailto/whatever.
Are there any XSS or other risks for my site if I leave the href attribute open like this? If yes, what would they be and how should I handle them?
There are URL schemes, such as javascript: or possibly data:, that could, in themselves, serve as XSS vectors if the user is tricked into clicking them. You should maintain a whitelist of known, safe URL schemes (like http, https, ftp, etc.) and disallow any URLs that don't begin with such a scheme.
Note that simply blacklisting known dangerous URL schemes is not a safe approach, since you cannot possibly know all the schemes that might be used as attack vectors (as this may depend on things like what third-party software the user has installed). In particular, keep in mind that URL schemes are supposed to be case-insensitive; a naïve blacklisting implementation that simply disallowed URLs beginning with javascript: might be trivially bypassed with a jAvAsCrIpT: URL.
(You could allow schemeless relative URLs if you wanted, but if so, make sure that you parse them conservatively and according to the standard, so that an attacker can't possibly disguise a harmful absolute URL as a relative one. In particular, I would recommend that any URL that includes a colon (:) before the first slash (/), if any, be treated as an absolute URL subject to whitelisting. Just to be sure, you may also want to prepend the string "./" to any relative URLs that don't already begin with "/" or "./" in order to eliminate any potential parsing ambiguity.)
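As a rough sketch of that whitelisting logic (Python only for illustration; the exact set of allowed schemes and the "./" prefix trick are taken from the answer above, but this is not a complete sanitizer):

```python
# Allow only known-safe absolute URL schemes; treat anything with a colon
# before the first slash as absolute. Relative URLs get a "./" prefix to
# remove parsing ambiguity.
SAFE_SCHEMES = {"http", "https", "ftp", "mailto"}

def sanitize_href(url: str):
    slash = url.find("/")
    colon = url.find(":")
    if colon != -1 and (slash == -1 or colon < slash):
        scheme = url[:colon].lower()  # schemes are case-insensitive
        return url if scheme in SAFE_SCHEMES else None  # None = reject
    if not url.startswith(("/", "./")):
        return "./" + url
    return url
```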
The other thing you need to ensure is that you properly HTML-escape any strings, including URLs (especially user-supplied ones), that will be embedded in HTML attributes. In particular, any & characters will need to be replaced with the &amp; character entity, and (for double-quoted attributes) any " characters with &quot;. Replacing < with &lt; and ' with &#39; may also be a good idea, and the safest approach may be to actually replace any characters (other than known safe ones, like alphanumerics) with their corresponding HTML character entities. In any case, your programming language probably has a standard function or library to do this (e.g. htmlspecialchars() in PHP).
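For example, in Python the standard library's html.escape does this (a minimal sketch; it assumes the URL has already passed the scheme whitelist above):

```python
# Escape &, <, > and, with quote=True (the default), " and ' before
# embedding the URL in a double-quoted attribute.
from html import escape

def href_attr(url: str) -> str:
    return f'href="{escape(url, quote=True)}"'

print(href_attr("https://example.com/?a=1&b=2"))
# -> href="https://example.com/?a=1&amp;b=2"
```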
Ps. See also the OWASP XSS Filter Evasion Cheat Sheet for some examples of possible attacks that your implementation should be able to resist.
You must ensure that the href value is a valid URL. If you do not escape user input, it could also open the door to SQL injection attacks.
Also, the user could enter a javascript: URL whose JavaScript, for example, closes the browser window on click.
We'd like to double-check our http headers for security before we send them out. Obviously we can't allow '\r' or '\n' to appear, as that would allow content injection.
I see just two options here:
Truncate the value at the newline character.
Strip the invalid character from the header value.
Also, from reading RFC 2616, it seems that only ASCII-printable characters are valid in HTTP header values. Should I also follow the same policy for the other 154 possible invalid bytes?
Or, is there any authoritative prior art on this subject?
This attack is called "header splitting" or "response splitting".
That OWASP link points out that removing CRLF is not sufficient. \n can be just as dangerous.
To mount a successful exploit, the application must allow input that contains CR (carriage return, also given by 0x0D or \r) and LF (line feed, also given by 0x0A or \n) characters into the header.
(I do not know why OWASP (and other pages) list \n as a vulnerability or whether that only applies to query fragments pre-decode.)
Serving a 500 on any attempt to set a header that contains a character not allowed by the spec in a header key or value is perfectly reasonable, and will allow you to identify offensive requests in your logs. Failing fast when you know your filters are failing is a fine policy.
If the language you're working in allows it, you could wrap your HTTP response object in one that raises an exception when a bad header is seen, or you could change the response object to enter an invalid state, set the response code to 500, and close the response body stream.
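A rough sketch of such a wrapper (Python only for illustration; the wrapped response object and its set_header method are hypothetical):

```python
# Fail fast on any header name or value containing characters outside
# printable ASCII, which also catches CR, LF and non-ASCII newlines such
# as U+0085, U+2028 and U+2029.
class HeaderInjectionError(ValueError):
    pass

def is_printable_ascii(s: str) -> bool:
    return all(0x20 <= ord(c) <= 0x7E for c in s)

class SafeResponse:
    def __init__(self, response):
        self._response = response

    def set_header(self, name: str, value: str) -> None:
        if not (is_printable_ascii(name) and is_printable_ascii(value)):
            # The caller can turn this into a 500 and log the offending input.
            raise HeaderInjectionError(f"illegal character in header {name!r}")
        self._response.set_header(name, value)
```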
EDIT:
Should I strip non-ASCII inputs?
I prefer to do that kind of normalization in the layer that receives trusted input unless, as in the case of entity-escaping to convert plain text to HTML, there is a clear type conversion. If it's a type conversion, I do it when the output type is required, but if it is not a type conversion, I do it as early as possible so that all consumers of data of that type see a consistent value. I find this approach makes debugging and documentation easier since layers below input handling never have to worry about unnormalized inputs.
When implementing the HTTP response wrapper, I would make it fail on all non-ascii characters (including non-ASCII newlines like U+85, U+2028, U+2029) and then make sure my application tests include a test for each third-party URL input to makes sure that any Location headers are properly %-encoded before the Location reaches setHeader, and similarly for other inputs that might reach the request headers.
If your cookies include things like a user-id or email address, I would make sure the dummy accounts for tests include a dummy account with a user-id or email address containing a non-ASCII letter.
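A small sketch of the %-encoding step mentioned above for redirect targets built from third-party input (Python only for illustration; the target URL is hypothetical, and the set_header call refers to the wrapper sketched earlier):

```python
# Percent-encode a redirect target before it reaches the Location header;
# CR and LF become %0D and %0A instead of splitting the response.
from urllib.parse import quote

unsafe_target = "/next?msg=hello\r\nSet-Cookie: evil=1"
location = quote(unsafe_target, safe="/?=&:#%")
# response.set_header("Location", location)
```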
The simple removal of new lines \n will prevent HTTP Response Splitting. Even though a CRLF is used as a delimiter in the RFC, the new line alone is recognized by all browsers.
You still have to worry about user content within a Set-Cookie or Content-Type header. Attributes within these headers are delimited using a ;, so it may be possible for an attacker to change the content type to UTF-7 and bypass your XSS protection for IE users (and only IE users). It may also be possible for an attacker to create a new cookie, which introduces the possibility of session fixation.
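One way to address the cookie case is to constrain user-derived cookie values so they cannot introduce new attributes or cookies. A hedged sketch (Python only for illustration; the allowed character set is an assumption, not from the answer above):

```python
# Reject cookie values containing ; , = or other characters that could
# add attributes or additional cookies.
import re

def safe_cookie_value(value: str) -> str:
    if not re.fullmatch(r"[A-Za-z0-9._~-]*", value):
        raise ValueError("cookie value contains characters outside the allowed set")
    return value
```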
Non-ASCII characters are allowed in header fields, although the spec doesn't really clearly say what they mean; so it's up to sender and recipient to agree on their semantics.
What made you think otherwise?
How do I safeguard a form against script injection attacks? This is one of the most common forms of attack, in which an attacker attempts to inject a JS script through a form field. Validation for this case must check for special characters in the form fields. I'm looking for suggestions and recommendations (from the internet, jQuery, etc.) for permissible characters and character-masking validation JS code.
You can use HTML Purifier (if you are using PHP; there are other options for other languages) to avoid XSS (cross-site scripting) attacks to a great extent, but remember that no solution is perfect or 100% reliable. This should help you, and always remember that server-side validation is best rather than relying on JavaScript, which the bad guys can easily bypass by disabling JavaScript.
For SQL Injection, you need to escape invalid characters from queries that can be used to manipulate or inject your queries and use type-casting for all your values that you want to insert into the database.
See the Security Guide for more security risks and how to avoid them. Note that even if you are not using PHP, the basic ideas for the security are same and this should get you in a better position about security considerations.
If you output user-controlled input in an HTML context, then you could follow what others have suggested and sanitize when processing input (HTML Purifier, custom input validation) and/or HTML-encode the values before output.
Cases where HTML encoding / stripping tags (when no tags are needed) is not sufficient:
user input appears in attributes: it then depends on whether you always (double-)quote attributes or not (not quoting is bad)
user input is used in on* handlers (such as onload=".."): HTML encoding is not sufficient, since the JavaScript parser runs after the HTML is decoded
user input appears in a JavaScript section: it depends on whether this is in a quoted region (HTML-entity encoding is not sufficient) or an unquoted region (very bad)
user input is returned as JSON which may be eval'ed: JavaScript escaping is required
user input appears in CSS: CSS escaping is different, and CSS allows JavaScript (expression())
Also, these do not account for browser flaws such as incomplete UTF-8 sequence exploit, content-type sniffing exploits (UTF-7 flaw), etc.
Of course you also have to treat data to protect against other attacks (SQL or command injection).
Probably the best reference for this is at the OWASP XSS Prevention Cheat Sheet
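To illustrate two of the contexts listed above (a minimal sketch in Python, chosen only for illustration; the user input is hypothetical):

```python
# HTML body/attribute context vs. a value embedded in a <script> block.
# html.escape alone is not sufficient inside a script; json.dumps gives a
# JavaScript string literal, and escaping "<" prevents a literal </script>
# from terminating the script block early.
import json
from html import escape

user_input = '</script><script>alert(1)</script>'

html_context = f"<p>{escape(user_input)}</p>"
js_literal = json.dumps(user_input).replace("<", "\\u003c")
js_context = f"var value = {js_literal};"
```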
ASP.NET has a feature called Request Validation that will prevent unencoded HTML from being processed by the server. For extra protection, one can use the AntiXSS library.
You can prevent script injection by encoding HTML content, for example:
Server.HtmlEncode(input)
There is the OWASP ESAPI too.