State Parameter in Authorization Request (OAuth2) - security

I have read some documentation online about preventing CSRF attacks on OAuth2 requests using the state parameter. However, my understanding is that the state parameter, although random and unguessable, is still part of the request URL, and can easily be copied by an attacker, who can then intercept the server response, alter some information, and then put back the state value that was read from the initial request. When the client validates the response, the state value would still match! Could you tell me what I am missing here?

I think the only protection against interception and alteration of the request and/or response, which is a specific type of man-in-the-middle attack, is to use secure transmission between the client and the server.
Traditionally, this is done by using https: rather than http: as the protocol.
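For reference, here is a minimal sketch of how the state value is usually generated and validated. It is bound to the user's own session, so it defends against an attacker tricking a victim's browser into completing a flow the attacker started (CSRF); it does not defend against someone who can already read the victim's traffic, which is what TLS is for. This assumes a Java servlet client; class and attribute names are illustrative.

import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;

public class OAuthStateHelper {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Called when building the authorization request URL.
    public static String issueState(HttpServletRequest request) {
        byte[] buf = new byte[32];
        RANDOM.nextBytes(buf);
        String state = Base64.getUrlEncoder().withoutPadding().encodeToString(buf);
        // Bind the value to this browser session so only this user agent can redeem it.
        request.getSession().setAttribute("oauth_state", state);
        return state;
    }

    // Called on the redirect_uri callback.
    public static boolean isValidCallback(HttpServletRequest request) {
        Object expected = request.getSession().getAttribute("oauth_state");
        String received = request.getParameter("state");
        return expected != null && expected.equals(received);
    }
}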

Related

Is there a way to make HTTP request headers immutable?

There are various ways web applications can be attacked using vectors in the HTTP request itself. Attacks like HTTP response splitting work by modifying the request headers to exploit vulnerable applications. Apart from input validation and sanitization on the server side, the question came to my mind whether one can make the request headers immutable.
Is it possible to make it immutable?
Request headers are sent from the client to the server.
The browser itself constructs the HTTP request to send. A user with control over the client can of course change the HTTP request, including the headers, to anything they want.
Therefore, making them immutable is impossible. Remember, as a general rule, anything on the client-side is up for grabs.
You can prevent headers from being altered in transit, that is, while the HTTP request is on the wire from the client to the server. For this, a technology called TLS is used (it used to be called SSL, and most of the time it still is). TLS encrypts and authenticates the connection, making it effectively immutable while in transit.
You can see if TLS/SSL is being used because the browser address bar will display HTTPS at the very beginning of the URL.

Is it necessary to generate anti-XSRF/CSRF token in server side?

Almost all documentation about anti-CSRF mechanisms states that the CSRF token should be generated on the server side. However, I'm wondering whether that is necessary.
I want to implement anti-CSRF in these steps:
There is no server-side-generated CSRF token;
On the browser side, on every AJAX or form submission, our JavaScript generates a random string as the token. This token is written into a cookie named csrf before the actual AJAX or form submission happens, and the token is also added as a _csrf parameter.
On the server side, each request is expected to carry both the csrf cookie and the submitted _csrf argument. These two values are compared; if they differ, the request is treated as a CSRF attack.
The server side doesn't need to issue the CSRF token, just do the checking; the token is generated entirely on the browser side. Of course, this is only for anti-CSRF; there still needs to be an authentication process on the server side to validate the user's identity.
It sounds like a valid solution for CSRF, but I'm not sure why there is no documentation about this approach.
Is there any fault in this anti-CSRF mechanism?
As far as I understand, what you want to do is create your anti-CSRF token on the client side, store it in a cookie, and also add it as a request parameter, so that when the server reads your request it simply verifies that your CSRF token cookie and parameter match, and decides whether the request is valid.
The reason to generate the anti-forgery token on the server side is that the server creates the token and only the server knows the correct value, so if that parameter is tampered with even slightly on the client side, it will not be identical to the one stored on the server, and that is enough to flag the request as a cross-site request forgery attack.
Any client-side generated data can be tampered with by an attacker, and because of that you can't rely on it. For example, in your approach you create a random value on the client side and assign it to both your csrf cookie and your _csrf parameter, say "h246drvhd4t2cd98". But since you're only verifying that the two client-side values match, an attacker can simply set his own csrf cookie and _csrf parameter to something like "I'mByPassingThis" on both of them, and your server will accept it as a valid request. You're not getting any security at all.
On the other hand, if the token is generated on the server, the attacker has no way to know the expected value, and that value will be different on every request, so the attacker's best option is to guess it, which should be virtually impossible unless you're using a predictable random number generator on the server side.
Also, if you want to create your own anti-forgery token mechanism, you need to use a cryptographically secure pseudo-random number generator; but honestly, you should not bother with that, since the usual server-side generation process is exactly what you need (assuming your framework has a built-in mechanism for this; if not, you still need to make sure you use a cryptographically secure pseudo-random number generator to create your anti-forgery tokens).
Remember never to trust user-submitted information. Since it can always be tampered with, you always need to double-check it on the server side, and in this case generating the anti-forgery token on the server is what allows you to verify the integrity of the submitted token.
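As an illustration, here is a minimal sketch of the usual server-generated synchronizer token pattern, assuming a Java servlet container; the attribute and parameter names (csrf_token, _csrf) are illustrative, not a specific framework's API.

import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class CsrfTokens {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate once per session and embed the token in the page or form.
    public static String tokenFor(HttpSession session) {
        String token = (String) session.getAttribute("csrf_token");
        if (token == null) {
            byte[] buf = new byte[32];
            RANDOM.nextBytes(buf);
            token = Base64.getUrlEncoder().withoutPadding().encodeToString(buf);
            session.setAttribute("csrf_token", token);
        }
        return token;
    }

    // On every state-changing request, compare the stored token with the submitted one.
    public static boolean isValid(HttpServletRequest request) {
        String expected = (String) request.getSession().getAttribute("csrf_token");
        String submitted = request.getParameter("_csrf");
        return expected != null && submitted != null
                && MessageDigest.isEqual(expected.getBytes(), submitted.getBytes());
    }
}

The point is that the expected value lives only in server-side session state, so a forged cross-site request has no way to supply it.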
I suggest using the following approach, which I have used on a large-scale project:
From: https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html#use-of-custom-request-headers
Use of Custom Request Headers
Adding CSRF tokens, a double submit cookie and value, an encrypted token, or other defense that involves changing the UI can frequently be complex or otherwise problematic. An alternate defense that is particularly well suited for AJAX or API endpoints is the use of a custom request header. This defense relies on the same-origin policy (SOP) restriction that only JavaScript can be used to add a custom header, and only within its origin. By default, browsers do not allow JavaScript to make cross origin requests with custom headers.
If this is the case for your system, you can simply verify the presence of this header and value on all your server side AJAX endpoints in order to protect against CSRF attacks. This approach has the double advantage of usually requiring no UI changes and not introducing any server side state, which is particularly attractive to REST services. You can always add your own custom header and value if that is preferred.
This technique obviously works for AJAX calls, but you still need to protect <form> tags with approaches described in this document, such as tokens. Also, the CORS configuration should be robust to make this solution work effectively (as custom headers for requests coming from other domains trigger a pre-flight CORS check).
So, instead of sending the token through a request body parameter, you could store it and send it to the server in a custom request header.
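A minimal sketch of that check on the server side, assuming a Java servlet back end (Servlet 4.0+); the header name X-Requested-With is just one common choice, and your client-side JavaScript must add that header to its AJAX calls.

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CustomHeaderCsrfFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        String method = request.getMethod();
        // Safe methods don't change state, so let them through.
        boolean stateChanging = !("GET".equals(method) || "HEAD".equals(method)
                || "OPTIONS".equals(method));
        if (stateChanging && request.getHeader("X-Requested-With") == null) {
            // A cross-origin HTML form cannot set custom headers, so reject the request.
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(req, res);
    }
}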

GWT Sessions and XSRF - the optimal solution?

Ok, first I was a bit confused when reading

"Remember - you must never rely on the sessionID sent to your server in the cookie header; look only at the sessionID that your GWT app sends explicitly in the payload of messages to your server."

at https://code.google.com/p/google-web-toolkit-incubator/wiki/LoginSecurityFAQ because I didn't understand the nature of XSRF completely and thought: why does it matter how the id gets transmitted?
Then I read http://www.gwtproject.org/doc/latest/DevGuideSecurityRpcXsrf.html and now I understand that XSRF works despite the attacker NOT knowing the cookie content (your browser just attaches it to the request, so the attack exploits your browser's knowledge of the cookie's content, although the browser does not reveal the content to you or to the attacker; the cookie content itself remains uncompromised by the attack). So any proof of knowing the cookie's content demonstrates that the request is not part of an XSRF attack.
I don't like the solution as implemented by GWT (http://www.gwtproject.org/doc/latest/DevGuideSecurityRpcXsrf.html) because it needs a separate call to the server. Please tell me if my ansatz is secure and if I understand the XSRF stuff correctly:
To prevent XSRF, I just copy the session ID contained in the cookie into a non-standard HTTP header field, i.e. "X-MY-GWT-SESSION-ID: $sessionId", when making RPC calls.
That way, I do not need to make any additional calls during app startup, because session validation is already done while delivering the GWT app by destroying the cookie if the session is no longer valid (see How can delete information from cookies?).
So here is the complete security implementation:
registration: the client submits cleartext credentials via an RPC call to the server, which stores a hash of the password in the server's database during registration (How can I hash a password in Java?)
login: the client sends the cleartext password via https+RPC; the server checks the password and, if it is correct, stores and returns (via https) a random UUID. That UUID is the shared secret stored on the server and the client that is used to identify the authenticated user across possibly many browser sessions, to avoid requiring the user to log in each time he visits the site.
the server sets the cookie expiry time to 0 if the session is no longer valid, so that the client clears the session id and the GWT app detects that it needs to re-authenticate
on the server side, only accept session UUIDs sent through the special HTTP header field, to prevent XSRF
handle invalidated sessions on the client side (either there is no session cookie, or an RPC request produced an auth failure)
to prevent re-authentication shortly after the GWT app loads, the server-side delivery mechanism (i.e. index.jsp) deletes the cookie some time before the timeout actually happens; delivering a page and then asking for authentication a few seconds later is a bit dumb.
Example sources for the GWT part can be found here: https://stackoverflow.com/a/6319911/1050755. The solution basically uses the GWT XSRF classes, but embeds the MD5-hashed session ID directly into the web page instead of getting the token via a separate RPC call. The client never actually calls any cookie-related code, and the server has only embedded a request.getSession().getId() call into the JSP page.
Any comments, suggestions, critique? Do I miss something important?
Disclaimer: I'm not a security expert.
Actually, if you obtain your xsrf token via an RPC call, then you're still subject to XSRF, as an attacker could conceivably forge both requests (this is very unlikely though, because they would have to read the response of the first call, which is most of the time prevented by the cross-origin nature of the request and/or the way it has to be executed).
So ideally you'll make your xsrf token available to the GWT app by some other means.
You'll generally want your session cookie to be inaccessible to scripts (HttpOnly flag), so you'll need to find another way of passing the value (e.g. write it into the HTML host page that's delivered to the browser, as a JS variable or a special attribute on a special HTML element, and read it from there with GWT, either through Dictionary, JSNI or the DOM).
Also, you'll probably want to use both the cookie and the request header to validate the request (they must match), or you might be vulnerable to session fixation attacks (which would probably need an XSS vulnerability too to be truly useful).
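For what it's worth, the server-side check for the header-plus-cookie variant proposed in the question could look roughly like this (Java servlet filter, Servlet 4.0+; only the header name comes from the question, the rest is illustrative):

import java.io.IOException;
import java.security.MessageDigest;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class GwtXsrfFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpSession session = request.getSession(false);        // session backed by the cookie
        String headerId = request.getHeader("X-MY-GWT-SESSION-ID");
        // Both the cookie-backed session and the explicitly sent id must exist and match.
        if (session == null || headerId == null
                || !MessageDigest.isEqual(session.getId().getBytes(), headerId.getBytes())) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(req, res);
    }
}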

HTTP Referer for Single Sign On

As part of a project with a partner, we are required to provide single-sign-on service on our app. Basically, people will log in through our partner's website, then they are redirected to ours. The redirected request will have the user's data in the HTTP header fields.
Here's where it gets "iffy". The process of authenticating if this request is valid or not is dependent on the value of the HTTP Referer field. Our partner tells us to check this field to see that the source is a legitimate one.
Now I know (and I'm glad to be proven wrong) that this field is easy enough to forge, and since no other method of authentication is given to us, a malicious user could easily construct a false HTTP request and gain access to our web app.
I'm a programmer first, and admittedly know very little about the intricacies of HTTP. So are my concerns real? Would using SSL (somehow) void this concern?
Remember that rule number one is never trust client input. Like any other client input, the Referer header is trivial to forge. SSL does nothing for you because you still rely on client input. Also, note that browsers SHOULD NOT send Referer to http pages when referred by https pages.
Additionally, consider that many privacy-conscious people and proxies (that individuals may not have any control over) might strip Referer headers from their requests, breaking your scheme.
To do this properly, you need to use something like OAuth or OpenID, where the protocols have been designed to be secure.
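To see just how trivial forging is: any client outside the browser can put whatever it likes in the Referer header. A sketch using the JDK's built-in HTTP client (Java 11+); the URLs are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ForgedReferer {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://your-app.example/sso-landing"))
                .header("Referer", "https://trusted-partner.example/login")  // attacker-chosen value
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}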
The HTTP Referer header is unreliable: depending on the browser used, it may not be sent at all.
Does http-equiv="refresh" keep referrer info and metadata?
Yes - It is forgeable.
No - A client can just as easily send a (fake) HTTPS request as a (fake) HTTP request. The only difference is that the connection is encrypted; that says nothing about the trustworthiness of the data transmitted.
That being said, it is another precaution that can be used. It should not be relied upon for security, however.
I would look at Microsoft Federation -- it's likely overkill, but it shows one way to implement SSO securely.

How to accept authentication on a web API without SSL?

I'm building a web API very similar to what StackOverflow provide.
However, in my case security is important since the data is private.
I must use HTTP.
I can't use SSL.
What solution(s) do you recommend me?
EDIT: authentication != encryption
Nearly every public API works by passing an authentication token for each web request.
This token is usually assigned in one of two ways.
First, some other mechanism (usually logging into a website) will allow the developer to retrieve a permanent token for use in their particular application.
The other way is to provide a temporary token on request. Usually you have a web method to which they pass a username / password, and you return a limited-use token if the caller is authenticated and authorized to perform API actions.
After the developer has the token, they pass it as a parameter to every web method you expose. Your methods first validate the token before performing the action.
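Sketched in Java (the token store and names here are hypothetical, just to show the shape of the check):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ApiTokens {

    // token -> user id; in practice this would be a database or cache populated when tokens are issued.
    private static final Map<String, String> TOKEN_STORE = new ConcurrentHashMap<>();

    public static String requireUser(String token) {
        String userId = TOKEN_STORE.get(token);
        if (userId == null) {
            throw new SecurityException("invalid or expired API token");
        }
        return userId;
    }

    // Example web method: validate the token first, then act.
    public static String getPrivateData(String token, String resourceId) {
        String userId = requireUser(token);
        // ... authorization check for userId and lookup of resourceId would go here ...
        return "data for " + userId;
    }
}

Keep in mind that without SSL this token travels in the clear, so anyone who can sniff the traffic can replay it.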
As a side note, the comment you made that "security is important" is obviously not true; if it were, you'd do this over SSL.
I wouldn't even consider this "minimal" security in any context, as it only provides a false sense that you have any sort of security in place. As Piskvor pointed out, anyone with even a modicum of interest could either listen in or break this in some way.
First of all, I suggest you read this excellent article: http://piwik.org/blog/2008/01/how-to-design-an-api-best-practises-concepts-technical-aspects/
The solution is very simple. It is a combination of a Flickr-like API (token based) and the authentication method used by the payment gateway I use (highly secure), but with a private password/salt instead.
To prevent unauthorized users from using the API without having to send the password in the request (in my case, in the clear since there is no SSL), they must add a signature that consists of an MD5 hash of a concatenation of both private and public values:
Well-known values, such as the username or even the API route
A user pass phrase
A unique code generated by the user (can be used only once)
If we request /api/route/ and the pass phrase is kdf8*s#, the signature will be computed as follows:
string ticks = DateTime.UtcNow.Ticks.ToString();   // single-use value (see the note on ticks below)
byte[] hash = System.Security.Cryptography.MD5.HashData(
    System.Text.Encoding.UTF8.GetBytes("/api/route/" + "kdf8*s#" + ticks));
string signature = Convert.ToHexString(hash).ToLowerInvariant();
The URL of the HTTP request will then be:
string requestUrl =
    string.Format("http://example.org/api/route/?code={0}&sign={1}", ticks, signature);
On the server side, you will have to reject any new request carrying a code that has already been used, preventing an attacker from simply reusing the same URL to his advantage, which was exactly the situation I wanted to avoid.
Since I didn't want to store the codes already used by API consumers, I decided to replace the unique code with a tick count. Ticks represent the number of 100-nanosecond intervals that have elapsed since 12:00:00 midnight, January 1, 0001.
On the server side, I only accept ticks (timestamps) within a tolerance of +/- 3 minutes (in case the client and server clocks are not synchronized). This means a potential attacker could reuse the URL within that window, but not permanently. Security is reduced a little, but it is still good enough for my case.
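The matching server-side check would look roughly like this; I've sketched it in Java for illustration (the question's own snippets are C#), and the names and the epoch conversion constant are my assumptions:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.time.Instant;

public class SignatureCheck {

    private static final long TICKS_PER_MILLI = 10_000L;                      // ticks are 100 ns units
    private static final long TICKS_AT_UNIX_EPOCH = 621_355_968_000_000_000L; // .NET ticks at 1970-01-01
    private static final long WINDOW_TICKS = 3L * 60 * 1000 * TICKS_PER_MILLI; // +/- 3 minutes

    public static boolean isValid(String route, String passPhrase, long ticks, String sign)
            throws Exception {
        // 1. Replay window: only accept timestamps close to the server's current time.
        long nowTicks = TICKS_AT_UNIX_EPOCH + Instant.now().toEpochMilli() * TICKS_PER_MILLI;
        if (Math.abs(nowTicks - ticks) > WINDOW_TICKS) {
            return false;
        }
        // 2. Recompute the MD5 signature from the shared pass phrase and compare.
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest((route + passPhrase + ticks).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return MessageDigest.isEqual(hex.toString().getBytes(StandardCharsets.UTF_8),
                sign.getBytes(StandardCharsets.UTF_8));
    }
}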
Short answer: if it's supposed to be usable through usual clients (browser requests/AJAX), you're screwed.
As long as you are using an unencrypted transport, an attacker could just remove any sort of in-page encryption code through a MITM attack. Even SSL doesn't provide perfect security - but plain HTTP would require some out-of-page specific extensions.
HTTP provides only transport - no secure identification, no secure authentication, and no secure authorization.
Example security hole - a simple HTTP page:
<script src="http://example.com/js/superstrongencryption.js"></script>
<script>
encryptEverything();
</script>
This may look secure, but it has a major flaw: you don't have any guarantee, at all, that you're actually loading the superstrongencryption.js file you're requesting. With plain HTTP, you send a request somewhere and something comes back; there is no way to verify that it actually came from example.com, nor do you have any way to verify that it is actually the right file (and not just function encryptEverything(){return true}).
That said, you could theoretically build something very much like SSL into your HTTP requests and responses: cryptographically encrypt and sign every request, same with every response. You'll need to write a special client (plus server-side code of course) for this though - it won't work with standard browsers.
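If you do go down that road, the signing half could look something like this HMAC sketch (pre-shared key; it authenticates and integrity-protects the message, but it does not hide anything, and distributing the key securely remains the hard part):

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class RequestSigner {

    // Sign the method, path and body with a shared secret; the server recomputes and compares.
    public static String sign(String sharedSecret, String method, String path, String body)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] tag = mac.doFinal((method + "\n" + path + "\n" + body).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : tag) hex.append(String.format("%02x", b));
        return hex.toString();  // send e.g. as an X-Signature header alongside the request
    }
}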
HTTP digest authentication provides very good authentication. All the HTTP client libraries I've used support it. It doesn't provide any encryption at all.

Resources