Does an OAuth 2 client really need TLS?

I intend to build a delegated login system for an existing app. I'll be implementing both the OAuth client (in a web application) and the OAuth server (a simple authorization and resource server that really only has a 'user' resource for now).
With that in mind, I came across the following section in the current OAuth 2 draft (version 22):
3.1.2.1. Endpoint Request Confidentiality
If a redirection request will result in the transmission of an authorization code or access token over an open network (between the resource owner's user-agent and the client), the client SHOULD require the use of a transport-layer security mechanism.
Lack of transport-layer security can have a severe impact on the security of the client and the protected resources it is authorized to access. The use of transport-layer security is particularly critical when the authorization process is used as a form of delegated end-user authentication by the client (e.g. third-party sign-in service).
This specifically warns me that I should be using TLS on the client. We will be using HTTPS on the server, of course, but enabling HTTPS on all clients will be difficult if not impossible.
From my limited understanding of security, I imagine someone could steal the authorization grant. This brings me to my question:
Won't client authentication (using the client secret) prevent an eavesdropper from using the authorization grant? (Because the malicious party won't know the client secret, hopefully.)
If it doesn't, or if there's another attack vector here I'm not seeing, is there anything I can do to make this work securely without HTTPS on the clients? Would, for example, OAuth 1 help? (Perhaps because it has the additional request token step.)
P.S.: I was planning on doing client authentication using TLS client certificates, rather than secrets, if that makes the situation any better.

I think you are misinterpreting part of this warning. This OAuth warning is addressing OWASP A9 violations (insufficient transport layer protection). It is saying that even though you are using OAuth, you still need a secure transport layer between the browser and your client application. The client doesn't require a key pair for authentication; OAuth is the client's form of authentication. However, the browser still authenticates with your application using a session id stored as a cookie value. The concern is that if an attacker is able to intercept this value, they will have the same access as the victim.
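As a minimal sketch of that last point, assuming an Express-style web client using express-session (all names and values here are illustrative, not from the question), the session id the answer refers to is protected by serving the client over HTTPS and marking the cookie accordingly:

```typescript
// Sketch: protect the session id with TLS-only, HTTP-only cookies.
// Assumes an Express web client using express-session; all values are illustrative.
import express from "express";
import session from "express-session";

const app = express();
app.set("trust proxy", 1); // needed if TLS terminates at a reverse proxy

app.use(
  session({
    secret: process.env.SESSION_SECRET ?? "change-me",
    resave: false,
    saveUninitialized: false,
    cookie: {
      secure: true,   // only ever send the cookie over HTTPS
      httpOnly: true, // not readable from page JavaScript
      sameSite: "lax",
    },
  })
);

app.listen(3000);
```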

Related

Demystifying CSRF?

I've read through a lot of long explanations of CSRF, and IIUC the core thing that enables the attack is cookie-based identification of server sessions.
So in other words, if the browser (and note I'm specifically narrowing the scope to web browsers here) does not use a cookie-based session key to identify the session on the server, then a CSRF attack cannot happen. Did I understand this correctly?
So for example suppose someone creates a link like:
href="http://notsosecurebank.com/transfer.do?acct=AttackerA&amount;=$100">Read more!
You are tricked into clicking on this link while logged into http://notsosecurebank.com, however http://notsosecurebank.com does not use cookies to track your login session, and therefore the link does not do any harm since the request cannot be authenticated and just gets thrown in the garbage?
Assumptions
The OpenID Connect Server / OAuth Authorization server has been implemented correctly and will not send authentication redirects to any URL that you ask it to.
The attacker does not know the client id and client secret
Footnotes
The scenario I'm targeting in this question is the CSRF scenario most commonly talked about. There are other scenarios that fit the CSRF tag; they are highly unlikely, yet good to be aware of and prepared for. One of them has the following components:
1) The attacker is able to direct you to a bad client
2) The attacker owns that client
3) The attacker has the secret for that client registered with the OAuth Authorization Server
4) The attacker is able to tell the Authorization Server that authenticates you to redirect back to the bad client after you have been authenticated with the proper server.
So setting this up is a little bit like breaking into Fort Knox, but it is certainly good to be aware of. For example, OpenID Connect or OAuth Authorization Providers should most likely flag clients that register redirect URLs that other clients have also registered.
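For example, a registration-time check roughly like this could surface such overlaps (purely illustrative; RegisteredClient and the stored-client list are assumptions, not any real server's API):

```typescript
// Illustrative registration-time check for the "shared redirect URL" idea above.
// RegisteredClient and the stored-client list are assumptions, not a real API.
interface RegisteredClient {
  clientId: string;
  redirectUris: string[];
}

function findRedirectUriConflicts(
  candidate: RegisteredClient,
  existingClients: RegisteredClient[]
): string[] {
  const conflicts: string[] = [];
  for (const existing of existingClients) {
    if (existing.clientId === candidate.clientId) continue;
    for (const uri of candidate.redirectUris) {
      if (existing.redirectUris.includes(uri)) {
        conflicts.push(uri); // same redirect URI already registered by another client
      }
    }
  }
  return conflicts; // non-empty result: flag the registration for review
}
```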
The most commonly discussed CSRF (Cross-Site Request Forgery) scenario can only happen when the browser stores credentials that it sends automatically (such as a session cookie or Basic authentication credentials).
OAuth2 implementations (client and authorization server) must be careful about CSRF attacks. CSRF attacks can happen against the client's redirection URI and against the authorization server. According to the specification (RFC 6749):
A CSRF attack against the client's redirection URI allows an attacker to inject its own authorization code or access token, which can result in the client using an access token associated with the attacker's protected resources rather than the victim's (e.g., save the victim's bank account information to a protected resource controlled by the attacker).
The client MUST implement CSRF protection for its redirection URI. This is typically accomplished by requiring any request sent to the redirection URI endpoint to include a value that binds the request to the user-agent's authenticated state (e.g., a hash of the session cookie used to authenticate the user-agent). The client SHOULD utilize the "state" request parameter to deliver this value to the authorization server when making an authorization request.
[...]
A CSRF attack against the authorization server's authorization endpoint can result in an attacker obtaining end-user authorization for a malicious client without involving or alerting the end-user.
The authorization server MUST implement CSRF protection for its authorization endpoint and ensure that a malicious client cannot obtain authorization without the awareness and explicit consent of the resource owner
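A minimal client-side sketch of the "state" mechanism described in the first part of that quote, assuming Express and a session middleware are already configured (the URLs, client id, and route names are placeholders):

```typescript
// Client-side sketch of the "state" parameter: generate it, tie it to the
// user-agent session, and verify it on the redirect back. Assumes express and
// a session middleware are already configured; URLs and ids are placeholders.
import crypto from "crypto";
import express from "express";

const router = express.Router();

router.get("/login", (req, res) => {
  const state = crypto.randomBytes(32).toString("hex");
  (req as any).session.oauthState = state; // bind the request to this user-agent

  const authorizeUrl = new URL("https://auth.example.com/authorize"); // placeholder
  authorizeUrl.searchParams.set("response_type", "code");
  authorizeUrl.searchParams.set("client_id", "my-client-id");         // placeholder
  authorizeUrl.searchParams.set("redirect_uri", "https://client.example.com/callback");
  authorizeUrl.searchParams.set("state", state);
  res.redirect(authorizeUrl.toString());
});

router.get("/callback", (req, res) => {
  const expected = (req as any).session.oauthState;
  if (!expected || req.query.state !== expected) {
    return res.status(400).send("Invalid state - possible CSRF");
  }
  delete (req as any).session.oauthState; // one-time use
  // ...exchange req.query.code for an access token here...
  res.send("Logged in");
});
```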
In theory, CSRF is not tied to the authentication method. If an adversary can make a victim user perform actions in another application that the victim didn't intend, then that application is vulnerable to CSRF.
This can manifest in several ways, the most common being that a victim user visits a malicious website which in turn makes requests from the victim's browser to another application, thereby performing actions the user didn't want. This is possible whenever credentials are sent by the victim's browser automatically. By far the most common case is a session cookie, but there can be others as well, for example HTTP Basic auth (the browser remembers that too), Windows authentication in a domain (Kerberos/SPNEGO), client certificates, or even some kinds of SSO under certain circumstances.
Also, application authentication is sometimes cookie-based and all non-GET (POST, PUT, etc.) requests are protected against CSRF, but GETs are not, for obvious reasons. In languages like PHP, it is easy for calls intended to be POST requests to also work as GETs (think of using $_REQUEST in PHP). In that case, any other website can include something like <img src='http://victim.com/performstuff?param=123'> to have actions performed silently.
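A small sketch of how to avoid that pitfall in an Express-style app (the route name, the session-stored token, and the transfer action are all hypothetical): keep state-changing actions off GET and require a per-session CSRF token on POST.

```typescript
// Sketch: keep state-changing actions off GET and require a per-session CSRF
// token on POST. Route names, the session token, and the action are hypothetical.
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

// Assumes a session middleware has already stored a token in session.csrfToken.
app.post("/transfer", (req, res) => {
  const expected = (req as any).session?.csrfToken;
  if (!expected || req.body.csrfToken !== expected) {
    return res.status(403).send("CSRF token missing or invalid");
  }
  // ...perform the transfer only after the token check passes...
  res.send("done");
});

// Deliberately no GET handler for /transfer: a bare <img src="..."> request
// cannot trigger the state change, and it carries no CSRF token anyway.
```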
There are also less obvious CSRF attacks in complex systems or flows, such as the CSRF attacks against OAuth.
So if a web application uses, say, tokens sent as request headers instead of session cookies for authentication, meaning the client has to explicitly add the token to each request, it is probably not vulnerable to CSRF. But as always, the devil is in the details.
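For instance, a browser client that keeps the token in memory and attaches it explicitly as a header (a rough sketch; the API URL is a placeholder) leaves nothing for a third-party page to replay automatically:

```typescript
// The browser never attaches this header on its own, so a malicious page
// making a cross-site request cannot include the victim's token.
async function callApi(accessToken: string): Promise<unknown> {
  const response = await fetch("https://api.example.com/accounts", { // placeholder URL
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!response.ok) {
    throw new Error(`API call failed: ${response.status}`);
  }
  return response.json();
}
```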

OAuth 2.0 public client impersonation

I'd like to develop a native application (for a mobile phone) that uses OAuth 2.0 Authorization to access protected resources from a resource API. As defined in section 2.1 the type of my client is public.
Upon registration, the Authorization Server provides a client_id for public identification and a redirect_uri.
The client will make use of the Authorization Code flow to receive its Authorization Grant from the Authorization Server. This all seems secure (if implemented correctly) against any attacker in the middle.
In section 10.2, client impersonation is discussed. In my case, the resource owner grants the client authorization by providing their credentials via the user agent to the Authorization Server. This section says that the Authorization Server:
SHOULD utilize other means to protect resource owners from such potentially malicious clients. For example, the authorization server can engage the resource owner to assist in identifying the client and its origin.
My main concern is that it's easy to impersonate my client once the client_id and redirect_uri are retrieved.
Due to the nature of a public client, this information can easily be reverse engineered. In my case the project will also be open source, so it can simply be retrieved from the web.
As far as I've understood from section 10.2, it's the resource owner's responsibility to check that the client is legitimate, with assistance that the Authorization Server SHOULD provide.
In my experience with third-party applications requesting an Authorization Grant from me, all I get is a page with some information about the client that is supposedly making the request. Using nothing but common sense, I have to judge whether the client requesting the grant really is the client the Authorization Server tells me it should be.
So whenever we are dealing with PEBKAC (which I think occurs frequently), isn't it true that impersonators (which might look identical to my legitimate client) can easily access protected resources if the resource owner simply grants them authorization?
TL;DR - You want OAuth access tokens to be issued only to valid clients - in this case, devices that installed your app, yes?
First - OAuth2 has multiple workflows for issuing tokens. When YOU are running the OAuth2 service and it's issuing tokens to devices running YOUR app, authorization code / redirect URL is not the relevant workflow. I suggest you read my answer here - https://stackoverflow.com/a/17670574/116524 .
Second - No luck here. Just run your services entirely on HTTPS. There is no real way to know whether the client registration request is coming from an app installed from the official app store. You can bake some secret into the app, but it can be found via reverse engineering. The only way this could work would be some sort of authentication information provided by the app store itself, which does not exist yet.

Avoiding replay attacks on Resource Owner flow by using nonce

I'm currently implementing an OpenID Connect authentication system for some apps I'm building, and one of the clients is a native mobile app. Having read about the different options for using OpenID Connect with a native client, it's clear that the current industry recommendation is to use the Hybrid Flow (i.e. show an embedded browser to collect the user's credentials, and then issue a token for the app to use). The alternative is to use the Resource Owner Flow, which has a better user experience in that the credentials are collected inside the app itself. But this seems to be discouraged for two main reasons:
It means that the native client will collect the credentials - so the native client has the opportunity to save the credentials or do something nefarious with them. In our case, we are creating the native client ourselves so this is not a concern. We will not be opening up the authentication system to other applications to use.
Because the Resource Owner Flow is from the OAuth 2 spec, rather than OpenID Connect, it lacks the replay attack prevention features of the other flows. Specifically, someone could record the authentication process, and then replay it themselves in order to obtain a user token from our identity server.
Since issue 1 is not a concern in our case, what I'd like to understand is whether there is a way to add replay attack prevention to the Resource Owner flow by using a nonce/temporary token of some kind. The scenario I'm thinking of is: the app would request a nonce from the identity server, which would include some sort of timestamp or other unique identifier for that request; the app would then need to provide that nonce with the authentication request; the identity server would validate the nonce before it allows the authentication request to be processed. That way, if someone was able to replay the entire message, the server would discover that the nonce is invalid and would reject the authentication request.
It's possible that an attacker could go and request a nonce from the server themselves, but then they would have needed to decrypt the (HTTPS) authentication request to be able to replace the original message's nonce with the new nonce they generated.
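A rough server-side sketch of that nonce scheme (all names are hypothetical, and a real implementation would need shared, persistent storage rather than an in-memory map):

```typescript
// Rough server-side sketch of the nonce idea: issue short-lived, single-use
// nonces and check them before processing a Resource Owner credentials request.
// All names are hypothetical; a real deployment needs shared, persistent storage.
import crypto from "crypto";

const issuedNonces = new Map<string, number>(); // nonce -> expiry (ms since epoch)
const NONCE_TTL_MS = 60_000;

export function issueNonce(): string {
  const nonce = crypto.randomBytes(32).toString("hex");
  issuedNonces.set(nonce, Date.now() + NONCE_TTL_MS);
  return nonce;
}

export function consumeNonce(nonce: string): boolean {
  const expiry = issuedNonces.get(nonce);
  issuedNonces.delete(nonce); // single use, even if the check below fails
  return expiry !== undefined && expiry > Date.now();
}
```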
My questions are:
Are there any other reasons why the Resource Owner Flow is not a good idea in this situation?
If we did use the Resource Owner Flow, would a nonce approach like I described be a good way of avoiding replay attacks?

OAuth 2.0 Implicit Grant Flow - clientId and accessToken exposure security

Since the OAuth 2.0 Implicit Grant Flow runs its mechanism, e.g. JavaScript, in the client app in front of the resource owner, the client id and the access token are exposed. I have not been able to find a clear answer on what can be done to prevent this exposure from being exploited.
What are some measures to prevent problems with the following scenario? If it's apparent that I am not understanding the flow correctly, please do point out.
Scenario
Client A - a legit client who has been granted its own unique client Id from the authorization server.
Client B - a client the authorization server is not aware of, copies the client Id of Client A, draws in innocent resource owners and uses their access tokens to gain access to their private information.
These are some options I can think of to fix the issue.
Create an IP white list and map to each known client. Check against the authorization server when authorizing and calling the resource server.
Set throttling on the end points of the resource server to detect abnormal activities.
Well, this is the reason why the OAuth specification (RFC 6749) warns about security weaknesses of the implicit flow in Section 10.6. It's not clear that the counter-measures you describe would be effective in a general setting on the internet. For example, IP headers are insecure and can be easily spoofed. I would only use the implicit flow for applications that require the lowest level of security (e.g., read-only display of information).
The token is secured using SSL between the client and the server, so the content is encrypted in transit. You can store the token in the HTML body because it is reasonably safe there, with the exception of browser add-ons; and don't use third-party content servers to host JavaScript, because if they are compromised their scripts can read your HTML. The user can see the token and copy it to their own app if they want, but it's protecting their own resources, so... Ultimately I like the implicit flow because of its simplicity.
Ultimately the server's handling of the token can be a problem out of your control. Choose a server that does not include the token in the URI; tokens in URIs can leak through logs, browser history, and Referer headers. Similarly, you shouldn't post sensitive information back to the server in the URL.
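One common client-side precaution along those lines is to read the token out of the URL fragment once and immediately scrub it from the address bar (a browser-side sketch; the parameter name follows the implicit-grant convention):

```typescript
// Browser-side sketch: read the access token out of the URL fragment after the
// implicit-flow redirect, keep it in memory only, and scrub it from the address
// bar so it doesn't linger in history.
function extractAccessTokenFromFragment(): string | null {
  const params = new URLSearchParams(window.location.hash.slice(1));
  const token = params.get("access_token");
  if (token) {
    history.replaceState(null, "", window.location.pathname + window.location.search);
  }
  return token;
}
```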
If you find a library that guarantees security, please post it.

Application token/secrets when creating an OAuth API

Background: I am using node.js and express to create an API. I have implemented OAuth in my API server in a standard consumer/user key/secret fashion (the same way Twitter, Facebook, etc. do). I expect 3rd parties to connect to my API, again in the same manner as these common APIs.
Normally, a client would connect with an application token/secret (e.g., you create a Facebook app as a Facebook developer and these are given to you). However, there are times when the client cannot keep a secret for the application because the code runs in an insecure environment. Specifically, I am referring to JavaScript libraries: developers do not want to expose their application secret in JavaScript code because it is plaintext and could be read by malicious users.
I've noticed that Facebook avoided this problem. The developer needs to provide only an application token (not secret) to the Javascript library. I do not understand how to provide a similar option for my API without fundamentally making my library insecure. Namely, if requests are being made by a Javascript client library to an API without providing a well-secured token/secret, how are those requests authenticated by the OAuth API?
Intellectually, the best solution I could think of would be to have some sort of token handoff between the JavaScript client library and the API server via an HTTPS connection, in order to return a secret for the library to use. I'm not quite sure how I'd secure this handoff to prevent spoofs, though.
In most cases it is better to follow the standards than to implement some custom scheme. OAuth2 specifies four methods in the latest draft (28) for performing the authorization grant flow. The implicit flow is the one you saw on Facebook.
As the standard says for that:
When issuing an access token during the implicit grant flow, the authorization server does not authenticate the client. In some cases, the client identity can be verified via the redirection URI used to deliver the access token to the client. The access token may be exposed to the resource owner or other applications with access to the resource owner's user-agent.
Implicit grants improve the responsiveness and efficiency of some clients (such as a client implemented as an in-browser application) since it reduces the number of round trips required to obtain an access token. However, this convenience should be weighed against the security implications of using implicit grants, especially when the authorization code grant type is available.
So it has some security drawbacks.
But as far as I can see, the other methods don't work for you, as they would expose secrets to either the client (third-party website owner) or the resource owner (user), so you should stay with the implicit flow.
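As an illustrative sketch of that recommendation (the endpoint, client id, and redirect URI are placeholders), a JavaScript-library client using the implicit flow essentially just sends the user to your authorization endpoint with response_type=token and later reads the token back out of the redirect fragment:

```typescript
// Sketch: build the implicit-grant authorization URL that a browser-based client
// redirects the user to. All endpoint and parameter values are placeholders.
function buildImplicitAuthorizeUrl(clientId: string, redirectUri: string, state: string): string {
  const url = new URL("https://api.example.com/oauth/authorize");
  url.searchParams.set("response_type", "token"); // implicit grant: token comes back in the fragment
  url.searchParams.set("client_id", clientId);
  url.searchParams.set("redirect_uri", redirectUri);
  url.searchParams.set("state", state);
  return url.toString();
}

// Usage: window.location.href = buildImplicitAuthorizeUrl("abc123", "https://thirdparty.example/cb", someRandomState);
```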
