Since the OAuth 2.0 Implicit Grant Flow runs its mechanism (e.g. JavaScript in the client app) in front of the resource owner, the client ID and the access token are exposed. I have not been able to find a clear answer on what can be done to prevent that exposure from being exploited.
What are some measures to prevent problems in the following scenario? If it's apparent that I am not understanding the flow correctly, please do point that out.
Scenario
Client A - a legitimate client that has been granted its own unique client ID by the authorization server.
Client B - a client the authorization server is not aware of; it copies the client ID of Client A, lures in unsuspecting resource owners, and uses their access tokens to gain access to their private information.
These are some options I can think of to fix the issue.
Create an IP white list mapped to each known client, and check against it at the authorization server when authorizing and at the resource server when it is called.
Set throttling on the endpoints of the resource server to detect abnormal activity (see the sketch after this list).
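For illustration only, here is a minimal sketch of the throttling idea, assuming a simple fixed-window counter keyed by client ID; the limits, names, and in-memory storage are made up, not taken from any specification.

    // Minimal fixed-window throttle per client ID (illustrative values only).
    const WINDOW_MS = 60_000;               // 1-minute window
    const MAX_REQUESTS_PER_WINDOW = 100;    // arbitrary threshold

    const counters = new Map<string, { windowStart: number; count: number }>();

    // Returns true if the request should be allowed, false if it looks abnormal.
    function allowRequest(clientId: string, now: number = Date.now()): boolean {
      const entry = counters.get(clientId);
      if (!entry || now - entry.windowStart >= WINDOW_MS) {
        counters.set(clientId, { windowStart: now, count: 1 });
        return true;
      }
      entry.count += 1;
      return entry.count <= MAX_REQUESTS_PER_WINDOW;
    }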
Well, this is the reason why the OAuth specification (RFC 6749) warns about the security weaknesses of the implicit flow in Section 10.6. It's not clear that the counter-measures you describe would be effective in a general setting on the internet. For example, IP headers are insecure and can be easily spoofed. I would only use the implicit flow for applications that require the lowest level of security (e.g., read-only display of information).
The token is secured using SSL between the client and the server. Therefore the content is encrypted but the URI is not. You can store the token in the HTML body because it is secure, with the exception of browser add-ons. Don't use third-party content servers to host JavaScript; if they are compromised, their scripts can read your HTML. The user can see the token and copy it to their own app if they want, but it's protecting their resources so... Ultimately I like the Implicit flow because of its simplicity.
Ultimately the server's handling of the token can be a problem out of your control. Choose a server that does not include the token in the URI; that's not safe. Similarly, you shouldn't post sensitive information back to the server in the URL.
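To make the "header, not URI" advice concrete, here is a minimal sketch of calling a resource server with the token kept in memory and sent in the Authorization header; the endpoint and names are illustrative.

    // Keep the access token in memory (set after the implicit-flow redirect)
    // and send it in the Authorization header, never in the URL.
    let accessToken: string | null = null;

    async function getProfile(): Promise<unknown> {
      if (!accessToken) throw new Error("Not authorized yet");
      const response = await fetch("https://api.example.com/me", {
        headers: { Authorization: `Bearer ${accessToken}` }, // token in header, not query string
      });
      if (!response.ok) throw new Error(`Request failed: ${response.status}`);
      return response.json();
    }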
If you find a library that guarantees security, please post it.
Related
I am developing a SPA in Angular and I am quite confused about the correct way to implement authentication and authorization.
First of all, the application is a first-party app, which means that I am developing both the authorization server and resource servers.
The users that log in to the application must have full access to their resources on the platform.
So I am doing it using OAuth 2.0, and I have a couple of doubts about what the protocol is actually meant for, as well as some security concerns.
First question:
The first question is whether OAuth should actually be used to authorize first-party applications. From my understanding, this is a delegation protocol used to grant a third-party application controlled access to the user's resources on the platform, upon user consent. How does this fit in the context of a first-party app? In that case the app should get an access token with a scope that allows full access, right?
Second question:
Since this is a Single Page Application, I can't store a secret on the client side. So I am opting for the authorization code grant with PKCE, which seems appropriate for this scenario. In this case I wouldn't ask for a refresh token; I would only retrieve the access token and use silent refresh to renew it, because I do not want a refresh token insecurely stored in the browser. Is this PKCE really secure? The secret is generated dynamically, but an attacker could eventually create a system using the same public client ID, handle the PKCE exchange properly, and finally get an access token that, in my case, gives full access to the user's resources.
I could in the future allow third-party apps controlled access to my app's resources; that's also one of the reasons why I am sticking with OAuth.
The first question is whether OAuth should actually be used to authorize first-party applications. From my understanding, this is a delegation protocol used to grant a third-party application controlled access to the user's resources on the platform, upon user consent. How does this fit in the context of a first-party app? In that case the app should get an access token with a scope that allows full access, right?
Yes, this makes sense to me. We skip the 'grant permissions' step for our own apps.
Is this PKCE really secure?
Yes, even without PKCE, authorization_code is pretty secure. Adding PKCE solves a few potential security issues, but I would be somewhat inclined to call them edge cases. It is definitely the recommended choice right now.
The PKCE RFC has more information about the motivations behind PKCE:
https://www.rfc-editor.org/rfc/rfc7636#section-1
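For reference, here is a minimal browser-side sketch of how the code_verifier and S256 code_challenge from that RFC can be generated with the Web Crypto API; the function names are illustrative and this is not a complete client.

    // Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636).
    function base64UrlEncode(bytes: Uint8Array): string {
      const binary = Array.from(bytes, (b) => String.fromCharCode(b)).join("");
      return btoa(binary).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
    }

    async function createPkcePair(): Promise<{ verifier: string; challenge: string }> {
      // 32 random bytes -> a 43-character verifier, within the allowed 43-128 range.
      const verifier = base64UrlEncode(crypto.getRandomValues(new Uint8Array(32)));
      const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(verifier));
      return { verifier, challenge: base64UrlEncode(new Uint8Array(digest)) };
    }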
I actually came here looking for the answer to Question 1. My take is that in situations where we have no third-party apps requiring access to our APIs, we do not need OAuth. If we still need to use OAuth, then we can use the Resource Owner Password Flow for first-party apps. I have not seen any convincing answer anywhere confirming or rejecting this opinion, but this is purely based on my understanding of OAuth.
Now, I am mainly writing this to answer Question 2. The PKCE protocol is secure and an attacker would not get a token in this scenario. The reason is that the Authorization Server uses the pre-registered "Redirect Uri" to send the token to. To be precise, the Auth Server would simply ask the browser to redirect the user to the "Redirect Uri appended with the Access Token". Browsers do not allow JavaScript interception of redirection requests. Therefore, an attacker would not be able to get hold of the token, and the user will be redirected from the attacker's site to yours in the end.
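As a sketch of that pre-registration check (the client registry and names here are made up), an authorization server typically does an exact match before redirecting anywhere:

    // Only redirect to URIs registered for the client; exact match, no wildcards.
    const registeredRedirectUris: Record<string, string[]> = {
      "client-a": ["https://app.example.com/callback"],
    };

    function isAllowedRedirectUri(clientId: string, redirectUri: string): boolean {
      return (registeredRedirectUris[clientId] ?? []).includes(redirectUri);
    }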
I have a web application which uses OAuth 2.0 to talk to a third-party service. I want both my server and my web app to talk to the authorized service on behalf of the user. I go through the normal authorization steps of doing the redirect, getting the auth code, exchanging it for the access token, all that jazz. Once complete, my server has the access token and can talk to the service. However, I'd like the web app to talk to the service as well so I don't have to route everything through my server.
Can I send the access token to the web app so I can achieve this? Or is the access token supposed to be kept confidential between my server and the service, never being disclosed to the user, just like the client secret is?
I've tried to find an answer for this in the spec and various blog posts, but haven't found a definitive answer either way. I know there is an implicit auth method for client-side apps which don't involve a server-side component at all. Therefore my initial guess is that I can send the token to the client. I would like to verify this, though.
The token is considered very sensitive information because it allows access to the service. Anyone could issue requests if they had this token.
This is why the token is passed in the Authorization header, and why it's highly recommended you make all calls over HTTPS, to protect the headers and body information. This is also why it is recommended that tokens have a short life span, so that if one is indeed compromised, it doesn't last for long.
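If (and only if) the access token happens to be a JWT, a client can at least check the remaining lifetime before using it; this is a sketch under that assumption and does not apply to opaque tokens.

    // Rough expiry check, assuming the access token is a JWT with an `exp` claim.
    function isExpired(jwt: string, skewSeconds = 30): boolean {
      const payloadPart = jwt.split(".")[1];
      if (!payloadPart) return true; // not a JWT, treat as unusable here
      const payload = JSON.parse(atob(payloadPart.replace(/-/g, "+").replace(/_/g, "/")));
      return typeof payload.exp !== "number" || payload.exp <= Date.now() / 1000 + skewSeconds;
    }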
Yes, you can share this token between your own applications and it should work, provided the receiver of the token does not also check the callers' IP addresses or have some other check mechanism in place.
The ideal situation however would be for you to issue a different set of ClientID and Client Secret to each application which requires access.
Don't forget that this is the way the applications identify themselves to the receiver side and it might be important for reporting and analysis purposes.
I'd like to develop a native application (for a mobile phone) that uses OAuth 2.0 Authorization to access protected resources from a resource API. As defined in section 2.1 the type of my client is public.
Upon registration, the Authorization Server provides a client_id for public identification and a redirect_uri.
The client will make use of an Authorization Code to receive its Authorization Grant from the Authorization Server. This all seems secure (if implemented correctly) against any attacker in the middle.
In section 10.2, client impersonation is discussed. In my case, the resource owner grants the client authorization by providing their credentials via the user agent to the Authorization Server. This section states that the Authorization Server:
SHOULD utilize other means to protect resource owners from such
potentially malicious clients. For example, the authorization server
can engage the resource owner to assist in identifying the client and
its origin.
My main concern is that it's easy to impersonate my client once the client_id and redirect_uri are retrieved.
Due to the nature of a public client, this information can easily be reverse engineered; or, in my case, the project will be open source, so it can simply be retrieved from the web.
As far as I've understood from section 10.2, it's the resource owner's responsibility to check that the client is legitimate, with whatever assistance the Authorization Server SHOULD provide.
In my experience with third-party applications requesting an Authorization Grant from me, all I get is a page with some information about the client that should actually be making the request. Based on pure logical sense, I can only judge whether the client that's requesting the grant is actually the client that the Authorization Server tells me it should be.
So whenever we are dealing with PEBKAC (which I think occurs frequently), isn't it true that impersonators can easily obtain authorization and access protected resources if the resource owner just grants it to them (when they might look identical to my legitimate client)?
TL;DR - You want OAuth access tokens to be issued only to valid clients - in this case devices that installed your app, yes?
First - OAuth2 has multiple workflows for issuing tokens. When YOU are running the OAuth2 service and it's issuing tokens to devices running YOUR app, authorization code / redirect URL is not the relevant workflow. I suggest you read my answer here - https://stackoverflow.com/a/17670574/116524 .
Second - No luck here. Just run your services entirely on HTTPS. There is no real way to know whether the client registration request is coming from an app installed from the official app store. You can bake some secret into the app, but it can be found via reverse engineering. The only way this could possibly work would be some sort of authentication information provided by the app store itself, which does not exist yet.
I'm currently implementing an OpenID Connect authentication system for some apps I'm building, and one of the clients is a native mobile app. Having read about the different options for using OpenID Connect with a native client, it's clear that the current industry recommendation is to use the Hybrid Flow (i.e. show an embedded browser to collect the user's credentials, and then issue a token for the app to use). The alternative is to use the Resource Owner Flow, which has a better user experience in that the credentials are collected inside the app itself. But this seems to be discouraged for two main reasons:
It means that the native client will collect the credentials - so the native client has the opportunity to save the credentials or do something nefarious with them. In our case, we are creating the native client ourselves so this is not a concern. We will not be opening up the authentication system to other applications to use.
Because the Resource Owner Flow is from the OAuth 2 spec, rather than OpenID Connect, it lacks the replay attack prevention features of the other flows. Specifically, someone could record the authentication process, and then replay it themselves in order to obtain a user token from our identity server.
Since issue 1 is not a concern in our case, what I'd like to understand is whether there is a way to add replay attack prevention to the Resource Owner flow by using a nonce/temporary token of some kind. The scenario I'm thinking of is: the app would request a nonce from the identity server, which would include some sort of timestamp or other unique identifier for that request; the app would then need to provide that nonce with the authentication request; the identity server would validate the nonce before it allows the authentication request to be processed. That way, if someone was able to replay the entire message, the server would discover that the nonce is invalid and would reject the authentication request.
It's possible that an attacker could go and request a nonce from the server themselves, but then they would have needed to decrypt the (HTTPS) authentication request to be able to replace the original message's nonce with the new nonce they generated.
My questions are:
Are there any other reasons why the Resource Owner Flow is not a good idea in this situation?
If we did use the Resource Owner Flow, would a nonce approach like I described be a good way of avoiding replay attacks?
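For concreteness, here is a minimal server-side sketch of the nonce scheme described in the question: the identity server issues a short-lived, single-use nonce and consumes it when the authentication request arrives. The storage, TTL, and names are illustrative, not a recommendation.

    // Issue and consume short-lived, single-use nonces (Node-style sketch).
    import { randomBytes } from "crypto";

    const NONCE_TTL_MS = 2 * 60_000;                 // 2 minutes, illustrative
    const issuedNonces = new Map<string, number>();  // nonce -> issued-at timestamp

    function issueNonce(): string {
      const nonce = randomBytes(32).toString("hex");
      issuedNonces.set(nonce, Date.now());
      return nonce;
    }

    // Returns true exactly once per nonce, and only while it is still fresh.
    function consumeNonce(nonce: string): boolean {
      const issuedAt = issuedNonces.get(nonce);
      if (issuedAt === undefined) return false;      // unknown or already used
      issuedNonces.delete(nonce);                    // single use
      return Date.now() - issuedAt <= NONCE_TTL_MS;  // reject stale replays
    }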
I intend to build a delegated login system for an existing app. I'll be implementing both the OAuth client (in a web application) and the OAuth server (a simple authorization and resource server, that really only has a 'user' resource for now.)
With that in mind, I came across the following section in the current OAuth 2 draft (version 22):
3.1.2.1. Endpoint Request Confidentiality
If a redirection request will result in the transmission of an
authorization code or access token over an open network (between the
resource owner's user-agent and the client), the client SHOULD
require the use of a transport-layer security mechanism.
Lack of transport-layer security can have a severe impact on the
security of the client and the protected resources it is authorized
to access. The use of transport-layer security is particularly
critical when the authorization process is used as a form of
delegated end-user authentication by the client (e.g. third-party
sign-in service).
This specifically warns me that I should be using TLS on the client. We will be using HTTPS on the server, of course, but enabling HTTPS on all clients will be difficult if not impossible.
From my limited understanding of security, I imagine someone could steal the authorization grant. This brings me to my question:
Won't client authentication (using the client secret) prevent an eavesdropper from using the authorization grant? (Because the malicious party won't know the client secret, hopefully.)
If it doesn't, or if there's another attack vector here I'm not seeing, is there anything I can do to make this work securely without HTTPS on the clients? Would, for example, OAuth 1 help? (Perhaps because it has the additional request token step.)
P.S.: I was planning on doing client authentication using TLS client certificates, rather than secrets, if that makes the situation any better.
I think you are misinterpreting part of this warning. This OAuth warning is addressing OWASP A9 violations. It is saying that even though you are using OAuth, you still need a secure transport layer to communicate with the client. The client doesn't require a key pair for authentication; OAuth is the client's form of authentication. However, the browser still authenticates with your application using a session id stored as a cookie value. The concern is that if an attacker is able to intercept this value, then he will have the same access as the victimized client.
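To illustrate protecting that session id when TLS is in place, here is a minimal sketch of a hardened Set-Cookie value; the cookie name and attributes are illustrative, not prescriptive.

    // Build a Set-Cookie value that keeps the session id off plain HTTP and out
    // of client-side scripts.
    function sessionCookieHeader(sessionId: string): string {
      return [
        `sid=${encodeURIComponent(sessionId)}`,
        "Secure",       // only sent over HTTPS
        "HttpOnly",     // not readable from JavaScript
        "SameSite=Lax",
        "Path=/",
      ].join("; ");
    }
    // e.g. res.setHeader("Set-Cookie", sessionCookieHeader(id)) in a Node handler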