ICEFaces Security

I have two security concerns that my client has raised, and I am stuck.
In order to keep browsers from caching sensitive information, the client's security guidelines require that POST requests do not return a 200 response. Initially I set up a PhaseListener to deal with this, but the only requests that came through it were GETs. I discovered that the POST requests the client's security team were complaining about were Ajax calls to the BlockingServlet. How can I set up something similar for those? I don't really understand how ICEFaces handles the information stored on a form, or how I can ensure that this information is not stored by the browser. I have implemented the no-cache headers, but that's not exactly solid security.
The PhaseListener I used was basically the one from http://balusc.blogspot.com/2007/03/post-redirect-get-pattern.html
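For reference, a rough sketch of that listener (this assumes JSF 2.x, where FacesContext.isPostback() is available, and the class still has to be registered in faces-config.xml):

    import java.io.IOException;
    import javax.faces.FacesException;
    import javax.faces.context.ExternalContext;
    import javax.faces.context.FacesContext;
    import javax.faces.event.PhaseEvent;
    import javax.faces.event.PhaseId;
    import javax.faces.event.PhaseListener;

    // After the action has run, answer every POST (postback) with a 302
    // redirect to a GET of the resulting view, so the browser never gets
    // a 200 response to a POST.
    public class PostRedirectGetPhaseListener implements PhaseListener {

        @Override
        public void afterPhase(PhaseEvent event) {
            FacesContext context = event.getFacesContext();
            if (context.isPostback() && !context.getResponseComplete()) {
                ExternalContext external = context.getExternalContext();
                String viewId = context.getViewRoot().getViewId();
                String url = context.getApplication().getViewHandler()
                        .getActionURL(context, viewId);
                try {
                    external.redirect(external.encodeActionURL(url));
                } catch (IOException e) {
                    throw new FacesException(e);
                }
            }
        }

        @Override
        public void beforePhase(PhaseEvent event) {
            // Nothing to do before the phase.
        }

        @Override
        public PhaseId getPhaseId() {
            return PhaseId.INVOKE_APPLICATION;
        }
    }

Note that a redirect discards request-scoped state, so anything the next view needs must live in a longer-lived scope.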
The client is also concerned that input parameters are not properly validated, providing an entry point for XSS. The example they gave was also going through the BlockingServlet. I suspect that ICEFaces has something built in to deal with this, but I can't find any information about it. Can anyone help?

XSS is an output problem: you can't cram all input data through some magic function and expect your application to be 100% safe from XSS. That will never work for any application, because XSS just doesn't work that way; the encoding has to happen at the point where data is written into a page. Make sure you test your application for vulnerabilities like XSS and SQL injection; there are free solutions like Sitewatch and the open-source skipfish.
To prevent caching, make sure this HTTP response header is set:
Cache-Control: no-store
No other method should be used.
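One way to also cover the Ajax calls from the question above is a plain servlet filter mapped to /* in web.xml, so that it runs in front of the BlockingServlet too (a minimal sketch; the class name and mapping are my own):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    // Adds Cache-Control: no-store to every response that passes through,
    // including the Ajax responses served by the BlockingServlet.
    public class NoStoreFilter implements Filter {

        @Override
        public void doFilter(ServletRequest request, ServletResponse response,
                             FilterChain chain) throws IOException, ServletException {
            ((HttpServletResponse) response).setHeader("Cache-Control", "no-store");
            chain.doFilter(request, response);
        }

        @Override
        public void init(FilterConfig config) {
            // Nothing to configure.
        }

        @Override
        public void destroy() {
            // Nothing to clean up.
        }
    }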

Related

SameSite Flag against CSRF

I was wondering if the SameSite flag on the session cookie is enough protection against CSRF attacks.
I see the CSRF token solution everywhere, but I am not sure about the need for a CSRF token if the cookie used for authentication is already protected by the SameSite flag (in Strict mode).
On top of that, if I understood it correctly, the cookie would still be sent along with requests to subdomain URLs like api.myapp.com, which would be perfect for my needs.
This is somewhat opinionated, or at least depends on your target audience and risk appetite.
SameSite=Strict is supported in almost all fairly recent browsers, as seen here, but note the exception of IE11. Not many people use IE11 anymore, but for those who do it will not be good enough. Only you can answer whether that's acceptable for your use case; as of this writing, a significant number of users would not be protected.
Also, the general consensus seems to be that SameSite should only be used as defense in depth (e.g. here, or here in a similar question), but most of the concern is around Lax and less about Strict. However, Strict is very user-unfriendly: every cross-site link into the application arrives without the session cookie, so in a real-world application you probably can't really use Strict, because that's very bad UX.
The usual arguments are around browser support (as above), GET requests changing state (only relevant to Lax), and some special cases still revolving around GET changing state.
So my take currently is that the reason SameSite=Strict is not good enough in general is the lack of full browser support (IE11), and a strong point against it is bad user experience. I can imagine circumstances where it is good enough. SameSite=Lax I think is only a defense in depth measure, because of the issues above, which probably don't affect your application now, but might in the future, and nobody will remember to think about SameSite settings.
Excellent answer from Gabor, which explains the problem well. There is a way to solve this, though, if you set your code up like an SPA that works like this:
Requests for web content (such as navigation) do not require a cookie
The SameSite=strict cookie is only used in Ajax requests to get data
HOW IT WORKS
If your web app runs at https://mywebapp.com
Then issue the secure cookie as the result of calling an API at https://api.mywebapp.com
The cookie has a domain of .mywebapp.com and is SameSite + Cross Origin
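A sketch of issuing such a cookie from the API (the javax.servlet Cookie class has no SameSite support, so the Set-Cookie header is written directly; the cookie name and value are illustrative):

    import javax.servlet.http.HttpServletResponse;

    public final class AuthCookieWriter {

        // Issued by https://api.mywebapp.com; the Domain attribute makes the
        // cookie available to Ajax calls from both subdomains, while
        // SameSite=Strict keeps it off all cross-site requests.
        public static void writeAuthCookie(HttpServletResponse response, String value) {
            response.addHeader("Set-Cookie",
                    "session=" + value
                    + "; Domain=.mywebapp.com"
                    + "; Path=/"
                    + "; Secure; HttpOnly"
                    + "; SameSite=Strict");
        }
    }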
OAUTH
For an OAuth back end for front end solution you would do this:
Host the API at https://api.mywebapp.com
When the web app receives an OAuth response, it sends the Authorization Code to the API
The API processes it, then issues a strongly encrypted cookie containing tokens
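A rough sketch of that exchange (the token-endpoint call and the encryption are labelled placeholders, not a real implementation; a production version would call the identity provider and use a vetted crypto library):

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical endpoint at https://api.mywebapp.com that swaps the
    // Authorization Code for tokens and hides them in an encrypted cookie,
    // so the browser never holds raw tokens.
    public class OAuthCallbackServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response) {
            String code = request.getParameter("code");
            String tokens = exchangeCodeForTokens(code);
            String encrypted = encrypt(tokens);
            response.addHeader("Set-Cookie",
                    "session=" + encrypted
                    + "; Domain=.mywebapp.com; Path=/; Secure; HttpOnly; SameSite=Strict");
        }

        private String exchangeCodeForTokens(String code) {
            // Placeholder: POST the code to the identity provider's token endpoint.
            throw new UnsupportedOperationException("not implemented in this sketch");
        }

        private String encrypt(String tokens) {
            // Placeholder: authenticated encryption (e.g. AES-GCM) with a server-side key.
            throw new UnsupportedOperationException("not implemented in this sketch");
        }
    }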

Possible values for X-Requested-With header?

The x-requested-with header is kind of confusing to me. I know it can be used to defend against CSRF attacks, and that it is used to identify Ajax calls...but what is it really?
It just tells you what the request was...requested with?
Could there ever be a reasonable situation in which the header is present but set to some value other than "XMLHttpRequest"? I would imagine so, but I've never seen it set to anything else.
Just like the User-Agent header, it is provided by the client and can contain literally anything.
It is not at all reliable for any server-side security check.
Android sets X-Requested-With to the package ID of the app, for third-party apps that use the WebView component to embed a browser into their UI.
Presumably this could be used for debugging and/or statistics, but the values cannot be trusted because it would be possible for an attacker to write a custom client that sets it to anything just to try to break your server.
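To make that concrete, any non-browser client can claim whatever it likes (a plain HttpURLConnection sketch; the URL is a placeholder):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SpoofedHeaderExample {
        public static void main(String[] args) throws Exception {
            URL url = new URL("https://example.com/api/data");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Nothing stops a client from sending an arbitrary value here.
            conn.setRequestProperty("X-Requested-With", "DefinitelyNotXMLHttpRequest");
            System.out.println("Response code: " + conn.getResponseCode());
        }
    }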

Is CSRF protection for side effect free GET requests needed?

I'm developing a web application in which all dynamic content is retrieved as JSON with Ajax requests. I'm considering whether I should protect GET API calls from being invoked from different origins.
GET requests do not modify state, and the common wisdom is that they do not require CSRF protection. But I wonder whether there are corner cases in which the browser leaks the result of such requests to a different-origin site.
For example, if a different-origin site GETs /users/emails as a script, CSS, or image, is it possible that a browser would leak the resulting JSON to the calling site (for example via a JavaScript onerror handler)?
Do browsers give strong enough guarantees that the content of a cross-origin JSON response won't be leaked? Do you think protecting GET requests against cross-origin calls makes sense, or is it overkill?
You have nailed a corner case and yet a highly relevant issue. Indeed, there is this possibility, and it's called JSON Inclusion, Cross-Site Script Inclusion (XSSI), or JavaScript Inclusion, depending on whom you ask. The attack is, basically, doing a <script src="https://yoursite/users/emails"> include on an evil site, and then accessing the results via JavaScript once the JS engine has parsed it.
The short story is that ALL your JSON responses have to be contained in an object, not an array or JSONP (so: {...}), and for good measure you should start all responses with a parser-breaking prefix (while(1);, for(;;);, or similar). Look at Facebook's or Google's JSON responses for a live example.
Or, you can make your URLs unguessable by adding a CSRF token to them; both approaches work.
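A sketch of the prefix technique on the server side (the servlet and payload are illustrative; your own JavaScript fetches the URL with XHR, strips the known prefix, and only then calls JSON.parse):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class EmailsServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            response.setContentType("application/json; charset=UTF-8");
            PrintWriter out = response.getWriter();
            // A hostile page that script-includes this URL just hangs in the
            // infinite loop and never reaches the data.
            out.print("while(1);");
            // Always an object at the top level, never a bare array.
            out.print("{\"emails\":[\"alice@example.com\"]}");
        }
    }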
No:
This is not a CSRF issue: as long as you're returning pure JSON and your GETs are side-effect free, they DO NOT have to be CSRF protected.
What Paradoxengine mentioned is another vulnerability: if you are using JSONP, it is possible for an attacker to read the JSON sent to an authenticated user. Users of very old browsers (IE 5.5) can also be attacked this way even with regular JSON.
You can send requests to a different domain (which is what CSRF attacks do), but you can't read the responses.
I learned this in another Stack Overflow question: It seems like I understand CSRF incorrectly?
Hope this helps you understand the question.

How to fix CSRF in the HTTP protocol spec?

What changes to the HTTP protocol spec, and to browser behaviour, would be required to prevent dangerous cases of cross-site request forgery?
I am not looking for suggestions as to how to patch my own web app. There are millions of vulnerable web apps and forms. It would be easier to change HTTP and/or the browsers.
If you agree with my premise, please tell me what changes to HTTP and/or browser behaviour are needed. This is not a competition to find the best single answer; I want to collect all the good answers.
Please also read and comment on the points in my 'answer' below.
Roy Fielding, author of the HTTP specification, disagrees with your opinion that CSRF is a flaw in HTTP that would need to be fixed there. As he wrote in a reply in a thread named The HTTP Origin Header:
CSRF is not a security issue for the Web. A well-designed Web service should be capable of receiving requests directed by any host, by design, with appropriate authentication where needed. If browsers create a security issue because they allow scripts to automatically direct requests with stored security credentials onto third-party sites, without any user intervention/configuration, then the obvious fix is within the browser.
And in fact, CSRF attacks were possible right from the beginning using plain HTML. The introduction of newer technologies like JavaScript and CSS only added further attack vectors and techniques that made request forging easier and more efficient.
But that didn't change the fact that a legitimate and authentic request from a client is not necessarily based on the user's intention, because browsers send requests automatically all the time (e.g. for images and style sheets) and send any authentication credentials along.
Again, CSRF attacks happen inside the browser, so the only possible fix would need to be to fix it there, inside the browser.
But as that is not entirely possible (see above), it's the application's duty to implement a scheme that allows it to distinguish between authentic and forged requests. The oft-recommended CSRF token is such a technique. And it works well when implemented properly and protected against other attacks (many of them, again, only possible due to the introduction of modern technologies).
I agree with the other two; this could be done on the browser side, but it would make it impossible to perform authorized cross-site requests.
Anyway, a CSRF protection layer could be added quite easily on the application side (and maybe even on the web-server side, to avoid changing pre-existing applications) using something like this:
A cookie is set to a random value, known only by the server (and, of course, the client receiving it, but not a third-party server)
Each POST form must contain a hidden field whose value must be the same as the cookie's. If it is not, the form submission must be rejected and a 403 page returned to the user.
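A sketch of the server-side comparison for that scheme (a classic double-submit cookie; the parameter and cookie names are my own):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletRequest;

    public final class DoubleSubmitCheck {

        // True only when the hidden form field echoes the random anti-CSRF
        // cookie. A third-party page cannot read the cookie, so it cannot
        // forge a matching field value.
        public static boolean isValid(HttpServletRequest request) {
            String field = request.getParameter("csrf_token");
            String cookieValue = null;
            Cookie[] cookies = request.getCookies();
            if (cookies != null) {
                for (Cookie c : cookies) {
                    if ("csrf_token".equals(c.getName())) {
                        cookieValue = c.getValue();
                    }
                }
            }
            if (field == null || cookieValue == null) {
                return false; // reject and return the 403 page
            }
            // Constant-time comparison avoids timing side channels.
            return MessageDigest.isEqual(
                    field.getBytes(StandardCharsets.UTF_8),
                    cookieValue.getBytes(StandardCharsets.UTF_8));
        }
    }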
Enforce the Same Origin Policy for form submission locations. Only allow a form to be submitted back to the place it came from.
This, of course, would break all sorts of other things.
If you look at the CSRF prevention cheat sheet you can see that there are ways of preventing CSRF by relying upon the HTTP protocol. A good example is checking the HTTP Referer, which is commonly used on embedded devices because it doesn't require additional memory.
However, this is a weak form of protection. A vulnerability like HTTP response splitting on the client side could be used to influence the Referer value, and this has happened.
cookies should be declared 'local' (default) or 'remote'
the browser must not send 'local' cookies with a cross-site request
the browser must never send http-auth headers with a cross-site request
the browser must not send a cross-site POST or GET ?query without permission
the browser must not send LAN address requests from a remote page without permission
the browser must report and control attacks, where many cross-site requests are made
the browser should send 'Origin: (local|remote)', even if 'Referer' is disabled
other common web security issues such as XSHM should be addressed in the HTTP spec
a new HTTP protocol version 1.2 is needed, to show that a browser is conforming
browsers should update automatically to meet new security requirements, or warn the user
It can already be done:
Referer header
This is a weaker form of protection. Some users may disable the Referer header for privacy purposes, meaning that they won't be able to submit such forms on your site. Also, this can be tricky to implement correctly: some systems allow a URL such as http://example.com?q=example.org to pass the Referer check for example.org. Finally, any open redirect vulnerabilities on your site may allow an attacker to send their CSRF attack through the open redirect in order to get the correct Referer header.
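A sketch of the parse-then-compare style of check that avoids that pitfall (a naive substring test would accept http://example.com?q=example.org; comparing the parsed host does not):

    import java.net.URI;
    import javax.servlet.http.HttpServletRequest;

    public final class RefererCheck {

        // Accepts the request only when the Referer's host is exactly ours.
        public static boolean isSameSiteReferer(HttpServletRequest request,
                                                String expectedHost) {
            String referer = request.getHeader("Referer");
            if (referer == null) {
                return false; // privacy tools strip it; decide your policy here
            }
            try {
                String host = URI.create(referer).getHost();
                return expectedHost.equalsIgnoreCase(host);
            } catch (IllegalArgumentException e) {
                return false; // unparseable Referer
            }
        }
    }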
Origin header
This is a new header. Unfortunately you will get inconsistencies between browsers that support it and those that do not. See this answer.
Other headers
For AJAX requests only, adding a header that is not allowed cross domain such as X-Requested-With can be used as a CSRF prevention method. Old browsers will not send XHR cross domain and new browsers will send a CORS preflight instead and then refuse to make the main request if it is explicitly not allowed by the target domain. The server-side code will need to ensure that the header is still present when the request is received. As HTML forms cannot have custom headers added, this method is incompatible with them. However, this also means that it protects against attackers using an HTML form in their CSRF attack.
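A sketch of that server-side check as a filter (the header name and the set of exempt methods are choices, not requirements):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Rejects state-changing requests that lack the custom header. HTML
    // forms cannot add custom headers, and cross-origin XHR cannot add
    // them without passing a CORS preflight, so a missing header points
    // to a forged request.
    public class RequireCustomHeaderFilter implements Filter {

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            String method = request.getMethod();
            boolean stateChanging = !("GET".equals(method)
                    || "HEAD".equals(method) || "OPTIONS".equals(method));
            if (stateChanging && request.getHeader("X-Requested-With") == null) {
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
            chain.doFilter(req, res);
        }

        @Override
        public void init(FilterConfig config) {
            // Nothing to configure.
        }

        @Override
        public void destroy() {
            // Nothing to clean up.
        }
    }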
Browsers
Browsers such as Chrome allow third-party cookies to be blocked. Although the explanation says that this stops cookies from being set by a third-party domain, it also prevents any existing cookies from being sent with the request. This will block "background" CSRF attacks. However, attacks that open a full page or a popup will still succeed, though they will be more visible to the user.

GWT RPC - Does it do enough to protect against CSRF?

UPDATE: GWT 2.3 introduces a better mechanism to fight XSRF attacks. See http://code.google.com/webtoolkit/doc/latest/DevGuideSecurityRpcXsrf.html
GWT's RPC mechanism does the following things on every HTTP Request -
Sets two custom request headers - X-GWT-Permutation and X-GWT-Module-Base
Sets the content-type as text/x-gwt-rpc; charset=utf-8
The HTTP request is always a POST, and on the server side GET methods throw an exception (method not supported).
Also, if these headers are not set or have the wrong value, the server fails processing with an exception "possibly CSRF?" or something to that effect.
Question is: Is this sufficient to prevent CSRF? Is there a way to set custom headers and change the content type in a pure cross-site request forgery attack?
If this GWT RPC is being used by a browser then it is 100% vulnerable to CSRF. The content type can be set in the HTML <form> element. X-GWT-Permutation and X-GWT-Module-Base are not on Flash's blacklist of banned headers, so it is possible to conduct a CSRF attack using Flash. The only header you can trust for CSRF protection is the Referer, but this isn't always the best approach. Use token-based CSRF protection whenever possible.
Here are some exploits that I have written, which should shed some light on the obscure attack I am describing. A Flash exploit for this will look something like this, and here is a JS/HTML exploit that changes the content type.
My exploit was written for Flex 3.2, and the rules have changed in Flex 4 (Flash 10). Here are the latest rules: most headers can only be manipulated for POST requests.
Flash script that uses navigateToURL() for CSRF:
https://github.com/TheRook/CSRF-Request-Builder
GWT 2.3 introduces a better mechanism to fight XSRF attacks. See GWT RPC XSRF protection
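Condensed from that documentation page, the opt-in looks roughly like this (service names are illustrative):

    import com.google.gwt.user.client.rpc.XsrfProtectedService;
    import com.google.gwt.user.server.rpc.XsrfProtectedServiceServlet;

    // Shared interface: extending XsrfProtectedService makes every method
    // require a valid XSRF token.
    interface AccountService extends XsrfProtectedService {
        void removeAccount(String accountId);
    }

    // Server implementation: the base servlet rejects calls whose token
    // does not match the one derived from the session cookie.
    class AccountServiceImpl extends XsrfProtectedServiceServlet
            implements AccountService {
        public void removeAccount(String accountId) {
            // ... perform the state-changing operation ...
        }
    }

On the client, the application first fetches an XsrfToken from the XsrfTokenServiceServlet declared in web.xml and attaches it to the service proxy via ((HasRpcToken) service).setRpcToken(token); since the token is derived from the session cookie, a cross-site attacker cannot compute it.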
I know I asked this question, but after about a day's research (thanks to pointers from Rook!), I think I have the answer.
What GWT provides out of the box will not protect you from CSRF. You have to take the steps documented in Security for GWT Applications to stay secure.
GWT RPC sets the Content-Type header to "text/x-gwt-rpc; charset=utf-8". While I didn't find a way to set this using HTML forms, it is trivial to do so in Flash.
The custom headers, X-GWT-Permutation and X-GWT-Module-Base, are a bit trickier. They cannot be set using HTML. Also, they cannot be set using Flash unless your server specifically allows it in crossdomain.xml. See Flash Player 10 Security:
In addition, when a SWF file wishes to send custom HTTP headers anywhere other than its own host of origin, there must be a policy file on the HTTP server to which the request is being sent. This policy file must enumerate the SWF file's host of origin (or a larger set of hosts) as being allowed to send custom request headers to that host.
Now, GWT's RPC comes in two flavours: the old, custom-serialization-format RPC, and the new, JSON-based de-RPC. AFAICT, client code always sets these request headers. The old-style RPC does not enforce these headers on the server side, so a CSRF attack is possible. The new-style de-RPC enforces these headers, so it may or may not be possible to attack it.
Overall, I'd say if you care about security, make sure you send strong CSRF tokens in your request, and don't rely on GWT to prevent it for you.
I'm not sure if there's an easy way (I'd be extremely interested in finding that out, too!), but there seem to be some advanced ways to achieve arbitrary cross-site requests with arbitrary headers: http://www.springerlink.com/content/h65wj72526715701/ I haven't bought the paper, but the abstract and introduction sound very interesting.
Maybe somebody here has already read the full version of the paper and can expand on it a little?
