In the Implicit Grant flow, the access token is sent back in the callback URL. Isn't this a security risk, since the callback URL could be cached at intermediate hops? In general it is advised not to send sensitive data in URL parameters, and this access token grants access to all of the user's secured resources. So why is it passed as a fragment in the URL?
Hmmm, I am afraid there are some misunderstandings in the answers above. While URL query strings are secured when using TLS, and thus the access token is protected in flight, it is exposed in the user's browser (as part of their history) and also in the destination web server's logs. Most web servers will log the entire URL of the incoming request. There is an additional issue known as the "referer leak" problem, wherein the query string is passed to third-party sites. A good overview may be found at:
http://blog.httpwatch.com/2009/02/20/how-secure-are-query-strings-over-https/
Elaborating on @vlatko's response...
To mitigate the risk of sending the token in the fragment (or via any other OAuth2 grant):
ensure that the OAuth endpoint and the callback endpoint are TLS (https) (See countermeasures)
send a state parameter to prevent cross-site request forgery (also see https://www.rfc-editor.org/rfc/rfc6749#section-4.2.1, and the sketch after this list)
Issuing short-lived access tokens (as @vlatko said) will reduce the impact of a leaked token, but it is not a preventative measure.
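A minimal sketch of the state idea in Python, using only the standard library (the endpoint, client id, and redirect URI are made-up placeholders, and a plain dict stands in for real session storage):

```python
import secrets
from urllib.parse import urlencode

# All endpoint/client values below are made-up placeholders for this sketch.
AUTHORIZE_ENDPOINT = "https://auth.example.com/authorize"
CLIENT_ID = "my-client-id"
REDIRECT_URI = "https://client.example.com/callback"

def build_authorization_url(session):
    # Generate an unguessable state value and remember it for this user's session.
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    params = {
        "response_type": "token",   # implicit grant
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "state": state,
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

def state_is_valid(session, returned_state):
    # Compare the state echoed back on the callback against the stored value.
    expected = session.pop("oauth_state", None)
    return expected is not None and secrets.compare_digest(expected, returned_state)

# Example usage, with a plain dict standing in for server-side session storage.
session = {}
url = build_authorization_url(session)
print(url)
print(state_is_valid(session, url.rsplit("state=", 1)[1]))  # True for a genuine callback
```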
Like you pointed out, the token is passed in the URI fragment. Since browsers don't send URL fragments to HTTP servers, the chances that someone will eavesdrop and pick up the access token are drastically reduced.
There are also additional security measures, like only issuing short lived access tokens in the implicit grant flow.
More info in the OAuth2 threat models document.
Related
I'm working on authentication part with Google OAuth2 API.
I'm using "server" flow, not "implicit".
When implementing step 1 (obtaining the code), the guidelines recommend using the state parameter to ensure that, in the end, the service provider receives the response that correlates to the auth request it initiated.
See https://developers.google.com/identity/protocols/OpenIDConnect#createxsrftoken
I understand that there is a possibility that the code could be stolen and the redirect_uri guessed.
But I don't understand why this is called anti-CSRF protection?
According to Wikipedia, a CSRF attack "exploits the trust that a site has in a user's browser".
As I understand it, something sensitive must be kept in the browser to make a CSRF attack possible. The most classic example is an authentication cookie.
But what is kept in the browser in relation to the OpenID Connect code flow?
It is just two consecutive redirects: from the service provider to the identity provider and back. The code itself is passed on the callback as a URL parameter. The browser's role is passive: to be redirected twice. Nothing is stored here.
So my question is: what kind of CSRF attack exactly does state prevent? Can anyone give a detailed example? Or is it just a misuse of the term "CSRF" in Google's guidelines?
When using the authorization code flow in OAuth2, the user browses to the client application, which then redirects the browser to the authorization server to be logged in and get an access token.
302 https://auth.server/authorize?response_type=code&client_id=....
After you've signed in on the authorization server, it will redirect you back to the registered redirection URI with the issued code. The client application will then exchange the code for an access token and optionally go to a URL encoded in the state parameter.
Now, an attacker could trick you into clicking a link (from an email or something) like:
<a href="https://auth.server/authorize?response_type=code&client_id=...."
This would execute the same request to the authorization server. Once you have logged in and the authorization server redirects you back to the client application, the client application has no way of knowing whether the incoming GET request with the authorization code was caused by a 302 redirect it initiated or by you clicking that hyperlink in the malicious email.
The client application can prevent this by sticking some entropy in the state parameter of the authorization request. Since the state parameter will be included as a query parameter on the final redirect back to the client application, the client application can check if the entropy matches what it kept locally (or in a secure HTTP only cookie) and know for sure that it initiated the authorization request, since there's no way an attacker can craft a URL with entropy in the state parameter that will match something the client application keeps.
The objective of CSRF is to dupe the user into performing an action (usually a destructive write action that the user wouldn't do under normal circumstances) in a website by clicking on a link sent by the attacker. An example of such an activity could be deletion of user's own account in the website. Assuming that the user is logged into the website, the requests originating from the user's browser are trusted by the website server which has no way to find out (without the CSRF token) that the user is actually duped into making that request.
In the case of Google OAuth2 (authorization code grant type), note that the initial request to the Google auth server contains the URL that the user actually wants to visit after successful authentication. An attacker can carefully construct that URL with some malicious intent and make the user use it.
The state token check enables the server to ensure that the request indeed originated from its own website and not from anywhere else. If the attacker crafts the URL with some random state token, the server won't recognise it and will reject the request.
If you have such doubts, the best resource to refer to is the specification. For OAuth 2.0, that is RFC 6749. There is a dedicated section in the specification that discusses Cross-Site Request Forgery:
Cross-site request forgery (CSRF) is an exploit in which an attacker
causes the user-agent of a victim end-user to follow a malicious URI
(e.g., provided to the user-agent as a misleading link, image, or
redirection) to a trusting server (usually established via the
presence of a valid session cookie).
From the specification's perspective, you must implement a state-handling mechanism in your application:
The client MUST implement CSRF protection for its redirection URI.
This is typically accomplished by requiring any request sent to the
redirection URI endpoint to include a value that binds the request to
the user-agent's authenticated state (e.g., a hash of the session
cookie used to authenticate the user-agent). The client SHOULD
utilize the "state" request parameter to deliver this value to the
authorization server when making an authorization request.
And for a detailed explanation, directly from the specification:
A CSRF attack against the client's redirection URI allows an attacker
to inject its own authorization code or access token, which can
result in the client using an access token associated with the
attacker's protected resources rather than the victim's
That is, the attacker injects their own authorization code, then manipulates your application into behaving the way the attacker wants.
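To make the spec's suggestion concrete, here is a minimal sketch in Python (the secret and cookie values are made up) of deriving the state value from the user-agent's session cookie, so an injected authorization code can be detected and rejected:

```python
import hashlib
import hmac

# Assumptions for this sketch: the client application has a per-deployment secret,
# and each user-agent already holds a session cookie issued by the client app.
CLIENT_STATE_KEY = b"replace-with-a-real-secret"

def state_for(session_cookie: str) -> str:
    # Bind the authorization request to this particular user-agent, as the spec
    # suggests (a keyed hash of the session cookie used to authenticate it).
    return hmac.new(CLIENT_STATE_KEY, session_cookie.encode(), hashlib.sha256).hexdigest()

def redirect_belongs_to(returned_state: str, session_cookie: str) -> bool:
    # A code injected by an attacker arrives with a state value that cannot match
    # the victim's own session, so the client rejects it instead of exchanging it.
    return hmac.compare_digest(returned_state, state_for(session_cookie))

victim_cookie = "session=abc123"
print(redirect_belongs_to(state_for(victim_cookie), victim_cookie))       # True: self-initiated
print(redirect_belongs_to(state_for("attacker-session"), victim_cookie))  # False: injected
```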
I've read through a lot of long explanations of CSRF, and IIUC the core thing that enables the attack is cookie-based identification of server sessions.
So, in other words, if the browser (and note I'm specifically narrowing the scope to web browsers here) does not use a cookie-based session key to identify the session on the server, then a CSRF attack cannot happen. Did I understand this correctly?
So for example suppose someone creates a link like:
href="http://notsosecurebank.com/transfer.do?acct=AttackerA&amount;=$100">Read more!
You are tricked into clicking on this link while logged into http://notsosecurebank.com, however http://notsosecurebank.com does not use cookies to track your login session, and therefore the link does not do any harm since the request cannot be authenticated and just gets thrown in the garbage?
Assumptions
The OpenID Connect Server / OAuth Authorization server has been implemented correctly and will not send authentication redirects to any URL that you ask it to.
The attacker does not know the client id and client secret
Footnotes
The scenario I'm targeting in this question is the CSRF scenario most commonly talked about. There are other scenarios that fit the CSRF tag. These scenarios are highly unlikely, yet good to be aware of, and prepared for. One of them has the following components:
1) The attacker is able to direct you to a bad client
2) The attacker owns that client
3) The attacker has the secret for that client registered with the OAuth Authorization Server
4) The attacker is able to tell the Authorization Server that authenticates you to redirect back to the bad client after you have been authenticated with the proper server.
So setting this up is a little bit like breaking into Fort Knox, but it is certainly good to be aware of. For example, OpenID Connect or OAuth authorization providers should most likely flag clients that register redirect URLs that other clients have also registered.
The most common / usually discussed CSRF (Cross-Site Request Forgery) scenario can only happen when the browser stores credentials (as a cookie or as Basic authentication credentials).
OAuth2 implementations (client and authorization server) must be careful about CSRF attacks. CSRF attacks can happen against the client's redirection URI and against the authorization server. According to the specification (RFC 6749):
A CSRF attack against the client's redirection URI allows an attacker
to inject its own authorization code or access token, which can
result in the client using an access token associated with the
attacker's protected resources rather than the victim's (e.g., save
the victim's bank account information to a protected resource
controlled by the attacker).
The client MUST implement CSRF protection for its redirection URI.
This is typically accomplished by requiring any request sent to the
redirection URI endpoint to include a value that binds the request to
the user-agent's authenticated state (e.g., a hash of the session
cookie used to authenticate the user-agent). The client SHOULD
utilize the "state" request parameter to deliver this value to the
authorization server when making an authorization request.
[...]
A CSRF attack against the authorization server's authorization
endpoint can result in an attacker obtaining end-user authorization
for a malicious client without involving or alerting the end-user.
The authorization server MUST implement CSRF protection for its
authorization endpoint and ensure that a malicious client cannot
obtain authorization without the awareness and explicit consent of
the resource owner
In theory, CSRF is not related to the authentication method. If an adversary can make a victim user perform actions in another application that the victim didn't intend, then that application is vulnerable to CSRF.
This can manifest in several ways, the most common being that a victim user visits a malicious website, which in turn makes requests from the victim's browser to another application, thus performing actions the user didn't want. This is possible if credentials are sent by the victim's browser automatically. By far the most common scenario is a session cookie, but there can be others as well, for example HTTP Basic auth (the browser remembers that as well), Windows authentication in a domain (Kerberos/SPNEGO), client certificates, or even some kind of SSO under certain circumstances.
Also, sometimes application authentication is cookie-based, and all non-GET (POST, PUT, etc.) requests are protected against CSRF, but GETs are not, for obvious reasons. In languages like PHP, it is easy to make calls intended to be POST requests also work as GETs (think of using $_REQUEST in PHP). In that case, any other website can include something like <img src='http://victim.com/performstuff?param=123'> to have actions performed silently.
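To illustrate why that pattern is dangerous and what the usual fix looks like, here is a minimal framework-free sketch in Python rather than PHP (the session store, route name, and token handling are assumptions for illustration, not any particular framework's API):

```python
import hmac
import secrets

# Hypothetical server-side session store (session id -> session data) for this sketch.
SESSIONS = {"sess-123": {"user": "alice", "csrf_token": secrets.token_urlsafe(32)}}

def handle_transfer(method: str, session_id: str, form: dict) -> str:
    session = SESSIONS.get(session_id)
    if session is None:
        return "401 Unauthorized"           # no valid session cookie at all
    if method != "POST":
        # Never perform state-changing work on GET: an <img> tag or a bare link
        # triggers a GET that carries the victim's cookie automatically.
        return "405 Method Not Allowed"
    submitted = form.get("csrf_token", "")
    if not hmac.compare_digest(submitted, session["csrf_token"]):
        # A third-party page cannot know the per-session token rendered into our forms.
        return "403 Forbidden (CSRF check failed)"
    return "200 OK (transfer performed)"

# The <img>-style attack: a cross-site GET with the victim's cookie but no token.
print(handle_transfer("GET", "sess-123", {}))
# A legitimate same-site POST that includes the token from the rendered form.
print(handle_transfer("POST", "sess-123", {"csrf_token": SESSIONS["sess-123"]["csrf_token"]}))
```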
There are also less obvious CSRF attacks in complex systems or flows, like for example the CSRF attacks against OAuth.
So if a web application uses, say, tokens (sent as request headers instead of session cookies) for authentication, meaning the client has to explicitly add the token to each request, it is probably not vulnerable to CSRF; but as always, the devil is in the details.
I am integrating a legacy application (an ASP.NET MVC 4 app) with OpenID Connect. Once I obtain the id_token and access_token from my OIDC provider I need to store them. In typical fashion they have to be sent 'over the wire' from the client side to the server side because the server side must process the id_token to determine which user made the request. The access_token is not processed by my application. It's just stored in my application until I need to make a request to an API that requires JWT Bearer authentication.
The way I see it, the id_token and access_token are sent between client and server either way, whether in a header or in a cookie. Can I store the id_token and access_token securely in a cookie if it's marked as HTTP only?
Edit:
I should add a little more information about my scenario.
1) My application always uses HTTPS, and all cookies are marked as secure. This removes MITM (Man In The Middle) vulnerabilities
2) Every PUT, POST and DELETE request uses ASP.NET's anti forgery token classes. This protects against XSRF.
3) All input is escaped and sanitized using ASP.NET libraries which removes XSS vulnerabilities.
4) The cookie that would contain the id_token would be marked as http only, removing the ability to read and access the cookie from the client side.
You should probably not store the tokens in cookies. Ideally, the access token would be stored in memory on the client. That way it isn't sent automatically with every request to the server, which is the main risk involved with cookies. Storing it anywhere else could open you up to potential vulnerabilities.
The RFC 6819 specification, titled "OAuth 2.0 Threat Model and Security Considerations" touches on the risks and vulnerabilities around OAuth tokens. Specifically, I would recommend reading the following sections:
4.1.3. Threat: Obtaining Access Tokens
4.4.2.2. Threat: Access Token Leak in Browser History
5.1.6. Access Tokens
In applications I have written the tokens have been stored in local storage and in memory.
I'd recommend reading through the OAuth 2.0 specification so you know the risks involved when using OAuth 2.0.
Please don't count on that. HttpOnly is a flag that tells the browser that this cookie should not be accessible to client-side scripts, and it is effective only if the browser supports it.
You can find more info here: https://www.owasp.org/index.php/HttpOnly
I also suggest diving a little into the OWASP website, as they have documents covering best practices for problems like the one you listed.
You can see if your browsers support HttpOnly here: https://caniuse.com/?search=httponly
As of 2021, 95% of browsers support it.
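For reference, a minimal sketch of what setting such a cookie looks like, using Python's standard library (the cookie name and token value are placeholders; the attribute flags are the point):

```python
from http.cookies import SimpleCookie

# Placeholder cookie name and token value; a real app would store its own id_token here.
cookie = SimpleCookie()
cookie["id_token"] = "eyJhbGciOiJSUzI1NiJ9.placeholder.signature"
cookie["id_token"]["httponly"] = True    # not readable from client-side script (XSS mitigation)
cookie["id_token"]["secure"] = True      # only sent over HTTPS
cookie["id_token"]["samesite"] = "Lax"   # extra CSRF mitigation in modern browsers (Python 3.8+)
cookie["id_token"]["path"] = "/"

# Emits a Set-Cookie header carrying the HttpOnly, Secure, SameSite, and Path attributes.
print(cookie.output())
```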
My current architecture is based on LDAP + JSON Web Token authentication, and I'm passing the token via the URL this way:
https://myHostApp?jwt={myToken}
Is it safe to proceed this way, or should I pass the tokens another way?
Assume also that SSL is enabled.
I disagree with the accepted answer.
It is right to say that the use of HTTPS will prevent data leaks in transit. However, there are a lot of attacks that are still achievable if tokens are put in the query string. For example:
Using the browser history
Using a transparent proxy
Furthermore, web servers log access requests, so if an attacker gets access to your server, all tokens will be available in the logs.
Even RFC 6750 (OAuth2 Bearer Token Usage) does NOT recommend the use of this transport mode:
Don't pass bearer tokens in page URLs: Bearer tokens SHOULD NOT be
passed in page URLs (for example, as query string parameters).
Instead, bearer tokens SHOULD be passed in HTTP message headers or
message bodies for which confidentiality measures are taken.
Browsers, web servers, and other software may not adequately
secure URLs in the browser history, web server logs, and other
data structures. If bearer tokens are passed in page URLs,
attackers might be able to steal them from the history data, logs,
or other unsecured locations.
Please note that RFC 6750 refers to the OAuth2 framework protocol, but its advice is not limited to it and should be considered for every token transmission in a web context.
You should pass the token in a header on every request.
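A minimal client-side sketch of that, assuming a Python client and the `requests` library (the URL path and the token value are placeholders based on the question above):

```python
import requests  # third-party, but the de facto standard Python HTTP client

API_URL = "https://myHostApp/api/resource"      # placeholder endpoint from the question
token = "eyJhbGciOiJIUzI1NiJ9.placeholder.sig"  # the JWT obtained after LDAP authentication

# Per RFC 6750, the bearer token travels in the Authorization header, never in the URL,
# so it stays out of browser history, referrer headers, and server access logs.
response = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```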
I am building a mobile application that includes users doing various things in the app, and I started off by authenticating all user actions inside the app using a token that is stored locally on the device. My biggest concern was that anyone could sniff the network, look at the HTTP requests I make inside the app, and thus send false requests on behalf of a real user. Something like this:
http://mywebsite.com/postmessage?user=abcd&token=35sxt&msg=Hi
Now, I am using HTTPS though and no one can see my domain name nor the data being sent. So I'm inclined to get rid of tokens all together and do just this:
https://mywebsite.com/postmessage?user=abcd&msg=Hi
Am I correct in assuming I don't need tokens anymore? The whole purpose of them for me was making sure that no one can make an action on behalf of another user without authorization and now it seems pointless that I still use tokens. Am I missing something else?
Firstly, you were correct that having the token in the URL (or anywhere else) was a security risk over HTTP. However, now that you are on HTTPS, it should not matter whether you have the API token in the URL or you are transmitting it in some other way. The URL should be as secure as any other part of the transaction. I say "should" because in practice your internal infrastructure may do logging, metrics collection or reporting that reveals the URL slightly more easily than you intend. And the client may submit the visited URL (but not other info) to its own logging system or to a smart search service like Google, etc. But for most use cases and in most configurations this is not a major issue.
But it sounds from your question like you are talking not about moving the token out of the URL and into the HTTP headers (or some other mechanism), but about removing the token concept entirely.
So what you should ask is, what is special about HTTPS that makes the token unnecessary? HTTPS secures the communication but it does not authenticate the client. Except in very unusual configurations, anyone can connect via HTTPS and issue commands, and unless you have some method of authentication the HTTPS will not protect you from unauthorized access. If you are using cookies for authentication, or if you are passing the token via HTTP headers (which is actually how I prefer to handle tokens when possible) then your need for authentication is satisfied and you do not need the token. If you do not have any other form of authentication, and you need authentication for security on your website, then you do need the token.
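To make that concrete, here is a minimal server-side sketch in Python of checking a bearer token sent in the Authorization header (the token store and its values are assumptions, reusing the question's example user and token):

```python
import hmac

# Hypothetical token store (token -> user); a real app would validate a signed JWT or
# look the token up in a database. Values echo the question's example.
VALID_TOKENS = {"35sxt": "abcd"}

def authenticated_user(headers: dict):
    # HTTPS protects the bytes in transit, but this check is what proves *who* is
    # calling; without it, any HTTPS client could post messages as anyone.
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return None
    presented = auth[len("Bearer "):]
    for token, user in VALID_TOKENS.items():
        if hmac.compare_digest(presented, token):
            return user
    return None

print(authenticated_user({"Authorization": "Bearer 35sxt"}))  # 'abcd'
print(authenticated_user({}))                                  # None -> reject the request
```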
HTTPS is basically used to ensure that you are communicating with the website that you intended to, and to encrypt communication data so that even if someone intercepts your data, it makes no sense to them.
For example, if you are placing an order on Amazon and making a payment,
HTTPS will ensure that:
you are actually submitting payment details to Amazon
your payment data is encrypted when flowing from your browser to Amazon's web server.
When communicating over HTTPS, the browser validates the server's digital certificate to confirm its identity, then a key is exchanged between the server and the browser to encrypt the data flowing between them.
By default, HTTPS does not authenticate the client. So if you have actions specific to a particular user, you still need an authentication token from the client.
But if the token is passed as a query parameter in the URL itself, it is still exposed to attackers, so send the token in a cookie over HTTPS.
It is also recommended to mark your cookies as Secure, to ensure that they are sent only over a secure (HTTPS) connection and not over HTTP, which could reveal user details.
Hope it helps.