Is there any security benefit to making a new OAuth2 client ID for each customer?

Main question
Should we be making a new OAuth client ID & secret for each customer or just using one for all customers of our app?
Some of my colleagues seem to think that if we use one and it somehow leaks, it will impact all customers, so if we make one for each customer, we minimize the damage. But my (limited) understanding is that the main risk to customers is only if the access token / refresh token leaks.
If the OAuth client ID & secret leak, someone could try to phish a customer into believing they were granting permission to our app to perform actions, when they would really be granting access to a malicious actor. But this would still require customers to fall for the phishing attempt, and even then, it would only impact the customers who fall for it and not the others.
The part I am less sure of is the impact of our actions AFTER we find out about a leaked client ID/secret. We would likely need to delete the OAuth client and make a new one to prevent further phishing attempts with the leaked one(s).
I believe that under a single-client-ID model, this would break all API calls for all customers, since the access tokens we stored internally would be the ones generated from the previous client ID (the one we just deleted). The user would need to open our app, log in to their Google account again, and click "Allow" again so we can get a new access token from the new client ID and call APIs again. Whereas if we had a different client ID for each customer, we could do this process for just a subset of customers or a single customer.
But I also feel that under the multi-client-ID model, if a client ID did leak for one customer, how can we be sure it didn't leak for the others? Wouldn't you want to get new client IDs for all customers anyway?
I see the following scenarios:
The leaker got access to our Google Cloud project and has access to all active client IDs. They can even make themselves new ones, so having multiple client IDs doesn't help.
The leaker got access to our internal DB and was able to decrypt it, so they can get all the client IDs stored in the DB anyway; having multiple client IDs doesn't help. (They might also be able to get access tokens/refresh tokens, which is a much bigger concern!)
Someone mistakenly included a client ID & secret in some log file or email. This may expose less than the full list of client IDs, but again, how do we know how many log files leaked or how many emails were sent? Wouldn't it be bad practice not to change all of them?
Please help me if I'm overlooking anything.
Here's a diagram for more visual thinkers (values in DB would be encrypted/salted/etc.)
Update
Since there seems to be confusion, here is the terminology from auth0.com
Resource Owner(s): School1, School2, etc.
Client: Our "app" server
Resource Server: Google - specifically "Google Workspace" or "Google Workspace Admin SDK APIs" (I believe)
Authorization Server: Google - specifically "Google Identity" (I believe)
User Agent: Browser
Is the Client the Resource Owner?: No. School owns the chromebooks and manages the student/teacher accounts using them through Google Workspace.
Is the Client a web app executing on the server?: Yes. We are following "Authorization Code Flow" to get access tokens from Google for each customer. We store them on our private DB.
Customer's experience - They download and run an installer and enter the license activation code. A website with the UI is then hosted (let's say www.school1.com/App). It can be on their network, in the cloud, wherever; they just need to make sure the hosting server's network can reach our internal server (let's say at 1.2.3.4). When opening www.school1.com/App they need to set up a new admin account and log in. Then they click a setup button and get a Google pop-up asking them to log in to their Google account (this is how we get the access token). After that they can interact with the website in their browser, clicking buttons to make actions happen.
API flow - Clicks in the browser become API calls to our server (1.2.3.4), along with information for us to authenticate/authorize the caller as a valid App customer. If authorized, our server then uses the access token we have stored internally to call the Google APIs.
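To make that proxy step concrete, here is a minimal sketch of how a browser click could be relayed through our server, which is the only place the Google token lives. The Express route, header name, and helper functions are hypothetical stand-ins rather than our actual code.

    import express from "express";

    const app = express();
    app.use(express.json());

    // Hypothetical helpers standing in for our real customer auth and token storage.
    declare function authorizeAppCustomer(appKey: string): Promise<string | null>; // -> customerId or null
    declare function loadStoredGoogleToken(customerId: string): Promise<string>;

    // A click in the browser becomes a call to our server; the Google access token
    // is read from our internal storage and never returned to the browser.
    app.post("/api/action", async (req, res) => {
      const customerId = await authorizeAppCustomer(req.header("x-app-key") ?? "");
      if (!customerId) {
        res.status(401).send("not a valid App customer");
        return;
      }
      const accessToken = await loadStoredGoogleToken(customerId);
      // Illustrative placeholder URL; a real call would target a specific Admin SDK method.
      const googleResp = await fetch("https://admin.googleapis.com/admin/directory/v1/", {
        headers: { Authorization: `Bearer ${accessToken}` },
      });
      res.status(googleResp.status).send(await googleResp.text());
    });

    app.listen(8080);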
Optional Background
My company is looking to make a product for schools using Google Workspace to manage their Chromebooks.
Google already has some APIs that we plan to use, and we hope to leverage these with our own business logic. As a dummy example, let's imagine we know schools want to reboot their Chromebooks every day at a specific time. The issueCommand REST API can be used to do the reboot; our app would handle the scheduling of calling the API.
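To make the dummy example a bit more tangible, the underlying call might look roughly like the sketch below. It assumes the Admin SDK Chrome OS issueCommand endpoint and an already-obtained access token with an appropriate device-management scope; the function name and scheduling comment are purely illustrative.

    // Illustrative sketch: issue a REBOOT command for one managed Chromebook.
    // Assumes an OAuth access token with a suitable Admin SDK device-management scope.
    async function rebootChromebook(accessToken: string, deviceId: string): Promise<void> {
      const url =
        "https://admin.googleapis.com/admin/directory/v1/customer/my_customer" +
        `/devices/chromeos/${deviceId}:issueCommand`;
      const resp = await fetch(url, {
        method: "POST",
        headers: {
          Authorization: `Bearer ${accessToken}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ commandType: "REBOOT" }),
      });
      if (!resp.ok) throw new Error(`issueCommand failed: ${resp.status}`);
    }

    // Our business logic would own the scheduling, e.g. a cron-style job that calls
    // rebootChromebook(...) for each of the school's devices at the chosen time.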
Being able to call these APIs requires permission from the Google Workspace administrator, granted to us via OAuth 2.0 (no other authorization protocols are supported). There seem to be two ways to do this:
A service account, apparently for server-to-server applications
An OAuth client ID, apparently for server-side web apps
The service account requires the administrator to log in to their Admin Console and take a bunch of manual steps to grant our app permission, whereas the OAuth client ID seems more user-friendly: the admin just signs in via the Google pop-up, is shown all the scopes we request, and can simply click "Allow".
(There are other differences such as the service account will keep working even if the admin changes, but let's just assume we're committed to OAuth Client ID)

There are 2 parts to the authorization code flow:
A front channel request in the browser that uses the client ID and gets an authorization code
A back channel request that uses both a client ID and secret, to swap the code for tokens
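As a rough sketch of those two parts (Google's standard endpoints are shown; the client values and redirect URI are placeholders you would substitute):

    declare const CLIENT_ID: string;     // public identifier, visible to end users
    declare const CLIENT_SECRET: string; // private to the web back end

    // Front channel: the browser is redirected to the Authorization Server with the
    // client ID and the registered redirect URI; no secret is involved here.
    const authorizeUrl =
      "https://accounts.google.com/o/oauth2/v2/auth" +
      "?response_type=code" +
      `&client_id=${encodeURIComponent(CLIENT_ID)}` +
      `&redirect_uri=${encodeURIComponent("https://app.example.com/oauth/callback")}` +
      `&scope=${encodeURIComponent("openid email")}` +
      "&access_type=offline";

    // Back channel: the web back end swaps the returned code for tokens,
    // sending the client secret server-to-server only.
    async function exchangeCode(code: string) {
      const resp = await fetch("https://oauth2.googleapis.com/token", {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
          grant_type: "authorization_code",
          code,
          client_id: CLIENT_ID,
          client_secret: CLIENT_SECRET, // this can be rotated without touching end users
          redirect_uri: "https://app.example.com/oauth/callback",
        }),
      });
      return resp.json(); // { access_token, refresh_token, id_token, ... }
    }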
If the client ID and secret leak, you can replace the client secret without impacting end users, since it is private between the web back end and the Authorization Server. You may need to redeploy the web app, but that should be something you already have a plan for.
I'd recommend keeping the client ID the same though. This is not a secret anyway and any end user can see what it is when the app redirects them to sign in - eg if they use browser tools to view the HTTP request.
You have a single app, so use a single client ID and secret. Trying to do otherwise would not work: when a user starts authentication you don't yet know who they are, since they have not identified themselves.
The above is standard OAuth and I would stick to that since it results in simple code and a solution that is easy to manage.

Related

OAuth 2 - how to log in without providing permissions

I'm completely new to OAuth, and have a workflow question. I'm using node/express/passport, and have successfully set up the app to redirect when requesting my /auth/google endpoint.
However, I consistently get routed to the Google permissions page where I have to offer my application access to my information. What is the mechanism by which I could log in/out without providing that access every time? Essentially, how do I let users log in without requesting permissions again, but still let them log in through Google?
The typical flow is to have your users log in on Google, like you're doing. Once they confirm your application's requested scopes, Google can provide your server with an authorization code which can be traded for an access / refresh token to be stored and used in the future.
Passport should abstract a lot of the back and forth away from you, though. Are you utilizing this library? And if so, are you storing the access and refresh tokens in your own local database for re-use (or at least the refresh token, so you can get a new valid access token when you need it)?
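If you are using Passport, a rough sketch of that storage step with the passport-google-oauth20 strategy might look like this; the upsertUser helper and the exact options are placeholders rather than a definitive setup:

    import express from "express";
    import passport from "passport";
    import { Strategy as GoogleStrategy } from "passport-google-oauth20";

    // Hypothetical persistence helper: store at least the refresh token for re-use.
    declare function upsertUser(googleId: string, accessToken: string, refreshToken?: string): Promise<void>;

    passport.use(new GoogleStrategy(
      {
        clientID: process.env.GOOGLE_CLIENT_ID!,
        clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
        callbackURL: "/auth/google/callback",
      },
      async (accessToken, refreshToken, profile, done) => {
        await upsertUser(profile.id, accessToken, refreshToken);
        done(null, profile);
      }
    ));

    const app = express();

    // Request offline access once; on later logins Google normally skips the consent
    // screen for scopes the user has already granted (unless you force prompt=consent).
    const googleAuthOptions = {
      scope: ["profile", "email"],
      accessType: "offline",
    };
    app.get("/auth/google", passport.authenticate("google", googleAuthOptions));

    app.get("/auth/google/callback",
      passport.authenticate("google", { session: false }),
      (_req, res) => res.redirect("/"));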

how to secure azure mobile service / html - javascript

When I call an OAuth provider like Gmail and I get the token back, how can I make sure that all future calls I make are from the same client that did the authentication? That is, is there some kind of security token I should pass back? Do I pass that token back every time?
For example, say I have a simple data table used for a guest book with first, last, birthdate, id. How can I make sure that the user who "owns" that record is the only one who can update it? Also, how can I make sure that the only person who can see their own birthday is the person who authenticated?
Sorry for the confusing question; I'm having trouble understanding how Azure Mobile Services (from an HTML client) is going to be secure in any way.
I recently tried to figure this out as well, and here's how I understand it (with maybe a little too much detail), using the canonical ToDoList application with server authentication enabled for Google:
When you outsource authentication to Google in this case, you're doing a standard OAuth 2.0 authorization code grant flow. You register your app with Google and get a client ID and secret, which you then register with AMS for your app. Fast forwarding to when you click "log in" on your HTML ToDoList app: AMS requests an authorization code on your app's behalf by providing info about it (client ID and secret), which ultimately results in an account chooser/login screen for Google. After you select the account and log in successfully, Google redirects to your AMS app's URL with the authorization code appended as a query string parameter. AMS then redeems this authorization code for an access token from Google on your application's behalf, creates a new user object (shown below), and returns this to your app:
"userId":"Google:11223344556677889900"
"authenticationToken":"eyJhbGciOiJb ... GjNzw"
These properties are returned after the Login function is called, wrapped in a User object. The authenticationToken can be used to make authenticated calls to AMS by appending it in the X-ZUMO-AUTH header of the request, at least until it expires.
In terms of security, all of the above happens under HTTPS, the token applies only to the currently signed-in user, and the token expires at a predetermined time (I don't know how long).
Addressing your theoretical example, if your table's permissions have been configured to only allow authenticated users, you can further lock things down by writing logic to store and check the userId property when displaying a birthday. See the reference docs for the User object for more info.
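As a rough illustration of both sides (the AMS service name and table are placeholders), the token is sent in the X-ZUMO-AUTH header from the client, and a server-side table script can filter rows by the caller's userId:

    // Client side: call the AMS table endpoint with the token from the login response above.
    declare const authenticationToken: string;

    async function loadGuestbook(): Promise<unknown> {
      const resp = await fetch("https://myapp.azure-mobile.net/tables/guestbook", {
        headers: { "X-ZUMO-AUTH": authenticationToken },
      });
      return resp.json();
    }

    // Server side (AMS table read script): only return rows owned by the caller.
    function read(query: any, user: any, request: any) {
      query.where({ userId: user.userId }); // user.userId comes from the validated token
      request.execute();
    }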

client secret in OAuth 2.0

To use the Google Drive API, I have to deal with authentication using OAuth 2.0, and I have a few questions about this.
Client ID and client secret are used to identify what my app is. But they must be hardcoded if it is a client application, so anyone can decompile my app and extract them from the source code. Does this mean that a bad app can pretend to be a good app by using the good app's client ID and secret? So the user would be shown a screen asking them to grant permission to the good app even though it is actually being requested by a bad app? If yes, what should I do? Or should I actually not worry about this?
In a mobile application, we can embed a webview in our app. And it is easy to extract the password field from the webview, because the app asking for permission is effectively the "browser". So OAuth in a mobile application does not provide the benefit that the client application has no access to the user's credentials for the service provider?
I had the same question as question 1 here, and did some research myself recently, and my conclusion is that it is OK to not keep the "client secret" a secret.
The type of client that does not keep the client secret confidential is called a "public client" in the OAuth2 spec.
The possibility of someone malicious being able to get an authorization code, and then an access token, is prevented by the following facts.
1. The client needs to get the authorization code directly from the user, not from the service
Even if the user indicates to the service that he/she trusts the client, the client cannot get an authorization code from the service just by showing the client ID and client secret.
Instead, the client has to get the authorization code directly from the user. (This is usually done by URL redirection, which I will talk about later.)
So, for a malicious client, it is not enough to know the client ID/secret trusted by the user. It has to somehow involve or trick the user into giving it the authorization code, which should be harder than just knowing the client ID/secret.
2. The redirect URL is registered with the client ID/secret
Let's assume that the malicious client somehow managed to involve the user and make her/him click the "Authorize this app" button on the service page.
This will trigger a URL redirect response from the service to the user's browser, with the authorization code included.
Then the authorization code will be sent from the user's browser to the redirect URL, and the client is supposed to be listening at the redirect URL to receive the authorization code.
(The redirect URL can be localhost too, and I figured that this is a typical way that a "public client" receives the authorization code.)
Since this redirect URL is registered at the service along with the client ID/secret, the malicious client has no way to control where the authorization code is delivered.
This means a malicious client with your client ID/secret faces another obstacle in obtaining the user's authorization code.
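To illustrate the localhost case mentioned above, a public client might receive the code with something like this minimal listener (the port and path are arbitrary choices, not anything mandated by the spec):

    import http from "node:http";

    // Minimal sketch of a "public client" waiting for the authorization code on the
    // redirect URL it registered with the service (here http://localhost:8123/callback).
    const server = http.createServer((req, res) => {
      const url = new URL(req.url ?? "/", "http://localhost:8123");
      if (url.pathname === "/callback") {
        const code = url.searchParams.get("code"); // only delivered to the registered redirect URL
        res.end("You can close this window now.");
        console.log("authorization code received:", code);
        server.close();
        // Next step: exchange the code for tokens (PKCE is recommended for public clients).
      } else {
        res.statusCode = 404;
        res.end();
      }
    });
    server.listen(8123);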
I started writing a comment on your question but then found there was too much to say, so here are my views on the subject as an answer.
Yes, there is a real possibility of this, and there have been some exploits based on it. The suggestion is not to keep the client secret in your app; there is even a part of the spec saying that distributed apps should not use this secret. Now you might ask: but XYZ requires it in order to work. In that case they are not implementing the spec properly, and you should either (A) not use that service (not likely) or (B) try to protect the secret using some obfuscation methods to make it harder to find, or use your own server as a proxy.
For example, there were some bugs in the Facebook library for Android where it was leaking tokens to the logs; you can find out more about it here
http://attack-secure.com/all-your-facebook-access-tokens-are-belong-to-us
and here https://www.youtube.com/watch?v=twyL7Uxe6sk.
All in all, be extra cautious in your usage of third-party libraries (common sense, really, but if token hijacking is a big concern for you, be even more cautious).
I have been ranting about point 2 for quite some time. I have even done some workarounds in my apps in order to modify the consent pages (for example, changing the zoom and design to fit the app), but there was nothing stopping me from reading values from the username and password fields inside the web view. Therefore I totally agree with your second point and find it a big "bug" in the OAuth spec. The claim that "the app doesn't get access to user credentials" is just a dream and gives users a false sense of security… Also, I guess people are usually suspicious when an app asks them for their Facebook, Twitter, Dropbox or other credentials. I doubt many ordinary people read the OAuth spec and say "Now I am safe"; instead they use common sense and generally don't use apps they don't trust.
Answering the 2nd question: for security reasons, Google APIs mandate that authentication/sign-in cannot be done within the app itself (webviews are not allowed) and must be done outside the app using the browser, which is further explained here:
https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html

How to authenticate requests using ServiceStack, own user repository, and device ids?

I'm building a mobile app and a ServiceStack web service back-end. The authentication stuff in ServiceStack looks great, but it's easy to get lost in its flexibility - guidance much appreciated. I'll be using my own db tables for storing users etc. within the web service. I'd like to have a registration process and subsequent authentication something like this:
the user initially provides just an email address, my web service then emails a registration key to the user
the user enters the key. The app sends the email, key & a unique device identifier to the web service for registration.
the web service verifies the key and stores the email & device id. It responds back with an auth token that the app will use for later authentication.
Then subsequent web service requests would provide the device id and auth token (or a hash created with it). The app is not very chatty so I'm tempted to send the authentication details on each web request.
Question 1: Should I hook into ServiceStack's registration API or just add a couple of custom web service calls? e.g. without using ServiceStack's registration I would:
post to a registration web service with the email address and device id. My web service would send the registration email with a key and add a record to the user db table.
when the user enters the key it would again post to the registration web service, this time also with the key. My web service would validate the key and update the user table marking the user as registered, creating and recording the auth token & returning it to the caller
subsequent requests would be sent using http basic auth with the device id as username and the auth token as password. The service is not very chatty so creds will be sent with each request.
I'll implement a CredentialsAuthProvider that'll get the creds with httpRequest.GetBasicAuthUserAndPassword() and validate them against the db data.
But it feels like I should use registration built in to ServiceStack.
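Concretely, I imagine the client side of that scheme looking roughly like this (the endpoint, device id and token values are placeholders):

    // Sketch of the client side of the scheme above: send the device id and auth token
    // as HTTP Basic credentials on every request (over HTTPS only).
    function basicAuthHeader(deviceId: string, authToken: string): string {
      return "Basic " + Buffer.from(`${deviceId}:${authToken}`).toString("base64");
    }

    async function callService(): Promise<unknown> {
      const resp = await fetch("https://api.example.com/orders", {
        headers: { Authorization: basicAuthHeader("device-1234", "token-from-registration") },
      });
      return resp.json();
    }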
Question 2: What's wrong with passing the authentication details with each request? This would make it easier to compose my app requests, but it doesn't seem to be 'done' based on the ServiceStack examples. Presumably that's because it's inefficient to re-authenticate every call if you have lots of requests - any other reasons? My app will only make a single web request at most every few minutes, so it seems simpler to avoid having sessions and just re-auth each request.
Question 3: Am I on the right track subclassing CredentialsAuthProvider?
Question 4: Is there any point using the auth token to generate a hash instead of sending the auth token each time? All communication will be over https.
Answer 1: It will be OK to add a couple of custom web service calls if that's what your requirements call for. Normally authentication works based on a cookie; you can store it on the client and/or on the server and match the user with it. And here, since you are working with a device, you can always map and authenticate against the device instead of the user, depending on your requirements.
I would prefer to use a provider, as it hides many details that you would otherwise need to handle manually. You are on the right track. There are many blog posts specifically about authentication and how to create custom authentication with ServiceStack. If you like, let me know - I have bookmarked some and will give them to you. The best way to find the latest ones is to check out the Twitter account of ServiceStack.
Answer 2: Again, this depends on your requirements. If your users will only ever be in a Wi-Fi zone (mostly true for business users), then there is no real limit on calls: just make the API call and do the authentication in the background. A simple JSON token will not hurt; it is only a few bytes. But if you have a big user base without good internet connections, it will be better to store the authentication details on the device and check against that, just to save a network call. In any case, a network call is resource-heavy.
Answer 3: Yes, you are on the right track. Still, check out blog entries for more details. I don't remember the code snippet or how it works with the last update, so I am not putting up code here.
Answer 4: This answer is a little complicated. Passing data over HTTPS and protecting the user from identity fraud are slightly different things. If you are not generating a hash-based auth token, then the credentials you pass over HTTP or HTTPS can be captured and used by another user to impersonate the first user and send data - the data travels over HTTPS, but it can still be spoofed. A hash-based value is used to avoid this situation, and there are also a couple of other business use cases that can be covered using such an auth token.
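One common way to realize that hash-based value is an HMAC request signature; a minimal sketch follows (the shared secret, the fields being signed, and the freshness window are all choices you would make yourself):

    import { createHmac } from "node:crypto";

    // Sketch: instead of sending the raw auth token, sign each request with a secret
    // shared only between the device and the server, so a captured request cannot
    // simply be replayed or altered by someone who saw it.
    function signRequest(sharedSecret: string, deviceId: string, body: string, timestamp: number): string {
      return createHmac("sha256", sharedSecret)
        .update(`${deviceId}.${timestamp}.${body}`)
        .digest("hex");
    }

    // The client sends deviceId, timestamp, body and the signature; the server recomputes
    // the HMAC with its copy of the secret and also rejects requests with stale timestamps.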
Please let me know if I have understood your questions correctly and answered them, or if any further details are required.

How to keep the client credentials confidential, while using OAuth2's Resource Owner Password Credentials grant type

We are building a REST service and we want to use OAuth 2 for authorization. The current draft (v2-16 from May 19th) describes four grant types. They are mechanisms or flows for obtaining authorization (an access token).
Authorization Code
Implicit Grant
Resource Owner Credentials
Client Credentials
It seems we need to support all four of them, since they serve different purposes. The first two (and possibly the last one) can be used from third-party apps that need access to the API. The authorization code is the standard way to authorize a web application that is lucky enough to reside on a secure server, while the implicit grant flow would be the choice for a client application that can’t quite keep its credentials confidential (e.g. mobile/desktop application, JavaScript client, etc.).
We want to use the third mechanism ourselves to provide a better user experience on mobile devices – instead of taking the user to a login dialog in a web browser and so on, the user will simply enter his or her username and password directly in the application and log in.
We also want to use the Client Credentials grant type to obtain an access token that can be used to view public data, not associated with any user. In this case this is not so much authorization, but rather something similar to an API key that we use to give access only to applications that have registered with us, giving us an option to revoke access if needed.
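For reference, the Client Credentials request itself is just a single back-channel call, roughly like the sketch below (the token endpoint URL is a placeholder for whatever the authorization server exposes):

    // Sketch of the Client Credentials grant: the registered application exchanges its own
    // id/secret for a token, with no user involved (endpoint URL is a placeholder).
    async function getAppToken(clientId: string, clientSecret: string): Promise<unknown> {
      const resp = await fetch("https://auth.example.com/oauth/token", {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
          grant_type: "client_credentials",
          client_id: clientId,
          client_secret: clientSecret,
        }),
      });
      return resp.json(); // { access_token, token_type, expires_in, ... }
    }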
So my questions are:
Do you think I have understood the purpose of the different grant types correctly?
How can you keep your client credentials confidential? In both the third and fourth case, we need to have the client id and client secret somewhere on the client, which doesn't sound like a good idea.
Even if you use the implicit grant type and you don’t expose your client secret, what stops another application from impersonating your app using the same authorization mechanism and your client id?
To summarize, we want to be able to use the client credentials and resource owner credentials flow from a client application. Both of these flows require you to store the client secret somehow, but the client is a mobile or JavaScript application, so these could easily be stolen.
I'm facing similar issues, and am also relatively new to OAuth. I've implemented "Resource Owner Password Credentials" in our API for our official mobile app to use -- the web flows just seem like they'd be so horrible to use on a mobile platform, and once the user installs an app and trusts that it's our official app, they should feel comfortable typing username/password directly into the app.
The problem is, as you point out, there is no way for my API server to securely verify the client_id of the app. If I include a client_secret in the app code/package, then it's exposed to anyone who installs the app, so requiring a client_secret wouldn't make the process any more secure. So basically, any other app can impersonate my app by copying the client_id.
Just to direct answers at each of your points:
I keep re-reading different drafts of the spec to see if anything has changed, and am focused mostly on the Resource Owner Password Credentials section, but I think you're correct on these. Client Credentials (4) I think could also be used by an in-house or third-party service that might need access to more than just "public" information, like maybe you have analytics or something that needs to get information across all users.
I don't think you can keep anything confidential on the client.
Nothing stops someone else from using your client id. This is my issue too. Once your code leaves the server and is either installed as an app or is running as Javascript in a browser, you can't assume anything is secret.
For our website, we had a similar issue to what you describe with the Client Credentials flow. What I ended up doing is moving the authentication to the server side. The user can authenticate using our web app, but the OAuth token to our API is stored on the server side, and associated with the user's web session. All API requests that the Javascript code makes are actually AJAX calls to the web server. So the browser isn't directly authenticated with the API, but instead has an authenticated web session.
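That pattern looks roughly like the sketch below, written here with Express and express-session purely as stand-ins for whatever web stack is in use:

    import express from "express";
    import session from "express-session";

    // Hypothetical helper that performs the back-channel code-for-token exchange.
    declare function exchangeCodeForToken(code: string): Promise<string>;

    const app = express();
    app.use(session({ secret: "replace-me", resave: false, saveUninitialized: false }));

    // After the server-side OAuth callback, keep the API token in the user's web session;
    // the browser only ever holds the session cookie, never the OAuth token itself.
    app.get("/oauth/callback", async (req, res) => {
      const apiToken = await exchangeCodeForToken(String(req.query.code));
      (req.session as any).apiToken = apiToken;
      res.redirect("/");
    });

    // The JavaScript front end makes AJAX calls to the web server, which relays them to
    // the API using the token tied to the authenticated web session.
    app.get("/ajax/data", async (req, res) => {
      const apiToken = (req.session as any).apiToken;
      if (!apiToken) {
        res.status(401).end();
        return;
      }
      const apiResp = await fetch("https://api.example.com/data", {
        headers: { Authorization: `Bearer ${apiToken}` },
      });
      res.status(apiResp.status).send(await apiResp.text());
    });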
It seems like your use-case for Client Credentials is different, in that you're talking about third-party apps, and are only serving public data through this method. I think your concerns are valid (anyone can steal and use anyone else's API key), but if you only require a free registration to get an API key, I don't see why anyone would really want to steal one.
You could monitor/analyze the usage of each API key to try to detect abuse, at which point you could invalidate one API key and give the legitimate user a new one. This might be the best option, but it's in no way secure.
You could also use a Refresh Token-like scheme for this if you wanted to lock it up a bit tighter, although I don't know how much you would really gain. If you expired the Javascript-exposed api tokens once a day and required the third-party to do some sort of server-side refresh using a (secret) refresh token, then stolen api tokens would never be good for more than a day. Might encourage potential token thieves to just register instead. But sort of a pain for everyone else, so not sure if this is worth it.
