I've played with a couple of Alexa skills, but there seems to be no chance of publishing either of them because they use external APIs for which there isn't a persistent access token. Account linking seems to assume that external accounts use an OAuth2 or similar authentication system where the private credentials can be requested by a page on the server, and the skill can use a rather less private access token.
Obviously it's not permissible for my lambda code to see other people's passwords.
One service I've connected to uses a session ID that lasts only half an hour or so; the other actually uses a password in the call structure.
Has anyone found a way around this issue? I have to admit it seems impossible.
Main question
Should we be making a new OAuth client ID & secret for each customer or just using one for all customers of our app?
Some of my colleagues seem to think that if we use one and it somehow leaks, it will impact all customers, so if we make one for each customer, we minimize the damage. But my (limited) understanding is that the main risk to customers only arises if the access token / refresh token leaks.
If the OAuth client ID & secret leak, someone could try phishing users into believing they were granting permission to our app to perform actions, when they would really be granting access to a malicious actor. But this would still require customers to fall for the phishing attempt, and even then it would only impact the customers who fall for it, not the other customers.
The part I am less sure of is the impact of our actions AFTER we find out about a leaked client ID/secret. We would likely need to delete the OAuth client and create a new one so that phishing attempts using the leaked one(s) are no longer possible.
I believe that under a single-client-ID model this would break all API calls for all customers, since the access tokens we stored internally would be the ones generated from the previous client ID (the one we just deleted). Each user would need to open our app, log in to their Google account again, and click "Allow" again to get us a new access token from the new client ID we made, allowing us to call APIs again. Whereas if we had a different client ID for each customer, we could do this process for just a subset of customers or a single customer.
But I also feel that under the multi-client ID model, if a client ID did leak for one customer, how can we be sure it didn't leak for the others? Wouldn't you want to get new client IDs for all customers anyways?
I see the following leak scenarios:
The leaker got access to our Google Cloud project and has access to all active client IDs. They can even make themselves new ones, so having multiple client IDs doesn't help.
The leaker got access to our internal DB and was able to decrypt it. So they can get all the client IDs stored in the DB anyway, so having multiple client IDs doesn't help. (They might also be able to get access tokens/refresh tokens, which is a much bigger concern!)
Someone mistakenly included client ID & secret in some log file or email. This may provide access to less than the full list of client IDs, but again how do we know how many log files leaked or how many emails were sent? Wouldn't it be a bad practice to not change all of them?
Please help me if I'm overlooking anything.
Here's a diagram for more visual thinkers (values in DB would be encrypted/salted/etc.)
Update
Since there seems to be confusion, here is the terminology from auth0.com
Resource Owner(s): School1, School2, etc.
Client: Our "app" server
Resource Server: Google - specifically "Google Workspace" or "Google Workspace Admin SDK APIs" (I believe)
Authorization Server: Google - specifically "Google Identity" (I believe)
User Agent: Browser
Is the Client the Resource Owner?: No. School owns the chromebooks and manages the student/teacher accounts using them through Google Workspace.
Is the Client a web app executing on the server?: Yes. We are following "Authorization Code Flow" to get access tokens from Google for each customer. We store them on our private DB.
Customer's experience - They download and run an installer and enter the license activation code. A website with the UI is then hosted (let's say www.school1.com/App). It can be in their network, in the cloud, wherever; they just need to make sure the hosting server's network can reach our internal server (let's say at 1.2.3.4). When opening www.school1.com/App they need to set up a new admin account and log in. Then they click a setup button and get a Google pop-up asking them to log in to their Google account (this is how we get the access token). Then they can interact with the website in their browser, clicking buttons to make actions happen.
API flow - Clicks in browser become API calls to our server (1.2.3.4) with info for us to authenticate/authorize them as a valid App customer making the call. If authorized, our server then uses the access token we have stored internally to call the Google APIs.
Optional Background
My company is looking to make a product for schools using Google Workspace to manage their Chromebooks.
Google already has some APIs that we plan to use. We hope to leverage these with our own business logic. As a dummy example let's imagine we know schools want to reboot their Chromebooks every day at a specific time. The issueCommand REST API can be used to do the reboot. Our app would handle the scheduling of calling the API.
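To make the dummy example concrete, the scheduled job could boil down to a call like the sketch below. I'm assuming the Admin SDK Directory API's chromeosdevices.issueCommand endpoint, the my_customer alias, and the REBOOT command type here, so double-check the current Admin SDK reference before relying on any of it.

```python
import requests


def reboot_chromebook(access_token: str, device_id: str) -> dict:
    """Issue a REBOOT command to one managed ChromeOS device.

    Assumes the Admin SDK Directory API's chromeosdevices.issueCommand
    endpoint and an access token that carries the ChromeOS device scope.
    """
    url = (
        "https://admin.googleapis.com/admin/directory/v1/"
        f"customer/my_customer/devices/chromeos/{device_id}:issueCommand"
    )
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {access_token}"},
        json={"commandType": "REBOOT"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

Our scheduler would look up the customer's stored access token and run something like this at the configured time.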
Calling these APIs requires permission from the Google Workspace administrator, who uses OAuth 2.0 to authorize us to make the requests (no other authorization protocols are supported). There seem to be two ways to do this.
Service account, apparently for server-to-server applications
OAuth Client ID, apparently for server-side web apps
The service account requires the administrator to log in to their Admin Console and take a bunch of manual steps to grant our app permission, whereas the OAuth Client ID seems more user-friendly: the admin just signs in via the Google pop-up, is shown all the scopes we request, and can simply click "Allow".
(There are other differences such as the service account will keep working even if the admin changes, but let's just assume we're committed to OAuth Client ID)
There are 2 parts to the authorization code flow:
A front channel request in the browser that uses the client ID and gets an authorization code
A back channel request that uses both the client ID and secret to swap the code for tokens (see the sketch just below)
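Assuming Google's standard token endpoint (as in your setup) and the redirect URI registered for the client, that back channel step looks roughly like this; treat it as an illustrative sketch rather than production code:

```python
import requests

TOKEN_URL = "https://oauth2.googleapis.com/token"  # Google's OAuth 2.0 token endpoint


def exchange_code_for_tokens(code: str, client_id: str,
                             client_secret: str, redirect_uri: str) -> dict:
    """Back channel: swap the authorization code for access/refresh tokens.

    This request never goes through the browser, which is why the client
    secret can live only on the web back end.
    """
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": client_id,
            "client_secret": client_secret,
            "redirect_uri": redirect_uri,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # typically contains access_token, refresh_token, expires_in
```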
If these leak you can replace the client secret without impacting end users, since it is private between the web back end and the Authorization Server. You may need to redeploy the web app but that should be something you already have a plan for.
I'd recommend keeping the client ID the same though. This is not a secret anyway and any end user can see what it is when the app redirects them to sign in - eg if they use browser tools to view the HTTP request.
You have a single app, so use a single client ID and secret. Trying to do otherwise would not work: when a user starts authentication you don't know who they are yet, because they have not identified themselves.
The above is standard OAuth and I would stick to that since it results in simple code and a solution that is easy to manage.
For example, say I have built a mobile application and am using a Node.js REST API to access the backend.
I want to restrict access to the application with the same login credentials to a maximum of two devices.
For example, my friend and I can both access the application with the same login credentials, but a third friend must not be allowed to access the account with those credentials.
Can it be implemented with some kind of token? Can anyone please help me understand the concept needed to implement this?
Posting as an answer, since it does appear to be a solution.
It can be implemented with a token, but I think it's important here to maintain sessions. Also, you need to keep track of who is connected to what account, and from what device. You'll definitely need unique identifiers, and to know how many logins the account is already utilizing. If a user logs out, remove that device from the list until they login again. Read up on session management. I have had good success using PassportJS for stuff like this :)
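The core bookkeeping is small. Sketched here in Python just to show the idea (the same logic maps onto PassportJS session handling; all names are invented for illustration, and an in-memory dict stands in for a real sessions table):

```python
from datetime import datetime, timezone

MAX_DEVICES = 2

# account_id -> {device_id: last_login_time}; a real app would keep this in a DB
active_devices: dict[str, dict[str, datetime]] = {}


def try_login(account_id: str, device_id: str) -> bool:
    """Allow the login only if this device is already known or a slot is free."""
    devices = active_devices.setdefault(account_id, {})
    if device_id in devices or len(devices) < MAX_DEVICES:
        devices[device_id] = datetime.now(timezone.utc)
        return True
    return False  # a third distinct device: reject (or prompt to log one out)


def logout(account_id: str, device_id: str) -> None:
    """Free the slot so another device can log in later."""
    active_devices.get(account_id, {}).pop(device_id, None)
```

The device identifier could come from a token you issue the first time a device logs in; the point is to count distinct devices per account, not individual sessions or requests.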
For the last few months I've been working on a REST API for a web app for the company I work for. The endpoints supply data such as transaction history, user data, and data for support tickets. However, I keep running into one issue that always seems to set me back to some extent.
The issue I seem to keep running into is how to handle user authentication for the REST API securely. All data is going to be sent over an SSL connection, but there's a part of me that's paranoid about potential security problems that could arise. As it currently stands, when a client attempts to log in, the client must provide a username or email address and a password to a login endpoint (e.g. "/api/login"). Along with this information, a browser fingerprint must be supplied through a header of the request that's sending the login credentials. The API then validates whether or not the specified user exists, checks whether or not the password supplied is correct, and stores the fingerprint in a database model. To access any other endpoints in the API, a valid token from logging in and a valid browser fingerprint are required.
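For reference, the current flow looks roughly like this (simplified; the header name is one I made up, and in-memory dicts stand in for the real database models):

```python
import datetime

import jwt  # PyJWT
from flask import Flask, jsonify, request
from werkzeug.security import check_password_hash, generate_password_hash

app = Flask(__name__)
app.config["SECRET_KEY"] = "replace-me"  # placeholder signing key

# In-memory stand-ins for the real user and fingerprint tables.
USERS = {"alice": generate_password_hash("correct horse battery staple")}
FINGERPRINTS: dict[str, str] = {}


@app.route("/api/login", methods=["POST"])
def login():
    body = request.get_json(silent=True) or {}
    fingerprint = request.headers.get("X-Device-Fingerprint")  # hypothetical header name
    username = body.get("username", "")
    if username not in USERS or not check_password_hash(USERS[username], body.get("password", "")):
        return jsonify({"error": "invalid credentials"}), 401

    FINGERPRINTS[username] = fingerprint  # remembered so later requests can be checked against it
    token = jwt.encode(
        {"sub": username,
         "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1)},
        app.config["SECRET_KEY"],
        algorithm="HS256",
    )
    return jsonify({"token": token})
```

Every other endpoint then verifies both the token signature and that the fingerprint header matches the stored one.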
I've been using browser fingerprints as a means to prevent token hijacking, and as a way to make sure that the same device used to log in is being used to make the requests. However, I have noticed a scenario where this practice backfires on me. The client-side library I'm using to generate browser fingerprints isn't always accurate; sometimes it spits out a different fingerprint entirely, which causes some client requests to fail because the new fingerprint isn't recognized by the API as valid. I would like to keep track of what devices are used to make requests to the API. Is there a more consistent way of doing so, while still protecting tokens from being hijacked?
While thinking about the previous question, another one also comes to mind: how do I store auth tokens on the client side securely, or in a way that makes it difficult for someone to obtain the tokens through malicious means such as an XSS attack? I understand that setting a strict Content-Security-Policy on browser-based clients can be effective in defending against XSS attacks. However, I still get paranoid about storing tokens as cookies or in local storage.
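Concretely, what I have in mind for the cookie route is an httpOnly, Secure cookie (so page scripts can never read the token) combined with a strict Content-Security-Policy header; a rough Flask sketch with placeholder values, which I'm not sure fully solves the problem:

```python
from flask import Flask, make_response

app = Flask(__name__)


@app.after_request
def set_csp(response):
    # Strict CSP: only same-origin scripts, which blunts most injected-script XSS.
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    return response


@app.route("/api/login", methods=["POST"])
def login():
    token = "example-token"  # placeholder; issue the real JWT here
    resp = make_response({"ok": True})
    # httponly keeps the token out of reach of page JavaScript (and thus XSS);
    # secure restricts it to HTTPS; samesite helps against CSRF.
    resp.set_cookie("access_token", token, httponly=True, secure=True, samesite="Strict")
    return resp
```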
I understand OAuth2 is usually a good solution to user authentication, and I have considered using it before to deal with this problem. However, I'm writing the API using Flask, and I'm also using JSON Web Tokens. As it currently stands, Flask's implementation of OAuth2 has no way to use JWTs as access tokens when using OAuth for authentication.
This is my first large-scale project where I have had to deal with this issue, and I am not sure what to do. Any help, advice, or critiques are appreciated. I'm in need of the help right now.
Put an API gateway in front of your API. The API gateway is publicly exposed (i.e. in the DMZ) while the actual APIs are internal.
You can look into Kong.
I have an app that offers an API. This app is an OAuth2 provider.
I want to access this API (read & write) with a client-side only app. I'm using JSO to make this easier.
It works great.
The thing is, I don't have to enter my client secret (of the application I registered in my app) anywhere. And I understand why, it would then be available to anyone.
So, if I can access my API without the client secret, could you explain to me what its purpose is?
This discussion provides an excellent explanation of why the client secret is much more important for server-side apps than client-side apps. An excerpt:
Web apps [server-side apps] use client secrets because they represent huge attack vectors. Let us say that someone poisons a DNS entry and sets up a rogue app "lookalike", the juxtapose might not be noticed for months, with this intermediary sucking up tons of data. Client secrets are supposed to mitigate this attack vector. For single user clients, compromise has to come one device at a time, which is horribly inefficient in comparison.
The client secret was used in OAuth 1.0 to sign the request, so it was required. Some OAuth2 servers (such as the Google Web Server API) required the client secret to be sent in order to receive the access token (whether exchanging an authorization code or a refresh token).
OAuth 2.0 has reduced the role of the client secret significantly, but it is still passed along for the servers that use it.
This was also driving me insane until I saw an example that made the answer blindingly obvious.
I have to be logged into The Server before The Server will return a token granting access to My stuff.
In other words, The Server will present Me, the human, with a login screen if I don't already have a valid login session with The Server. This is why explanations always say something like "it's up to the server to authenticate".
Sure, The Server does not have to require that I am logged in. Is this realistic? Will Dropbox really grant access to My files to anyone without a login? Of course not. Most of the explanations I've read gloss over this point as if it doesn't matter, when it's practically the only thing that does matter.
We are building a REST service and we want to use OAuth 2 for authorization. The current draft (v2-16 from May 19th) describes four grant types. They are mechanisms or flows for obtaining authorization (an access token).
Authorization Code
Implicit Grant
Resource Owner Credentials
Client Credentials
It seems we need to support all four of them, since they serve different purposes. The first two (and possibly the last one) can be used from third-party apps that need access to the API. The authorization code is the standard way to authorize a web application that is lucky enough to reside on a secure server, while the implicit grant flow would be the choice for a client application that can’t quite keep its credentials confidential (e.g. mobile/desktop application, JavaScript client, etc.).
We want to use the third mechanism ourselves to provide a better user experience on mobile devices – instead of taking the user to a login dialog in a web browser and so on, the user will simply enter his or her username and password directly in the application and log in.
We also want to use the Client Credentials grant type to obtain an access token that can be used to view public data, not associated with any user. In this case this is not so much authorization, but rather something similar to an API key that we use to give access only to applications that have registered with us, giving us an option to revoke access if needed.
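For context, the Client Credentials request itself would be something like the following against our own token endpoint (the URL is a placeholder):

```python
import requests


def get_app_token(client_id: str, client_secret: str) -> str:
    """Client Credentials grant: the app itself authenticates; no user is involved."""
    resp = requests.post(
        "https://api.example.com/oauth/token",  # placeholder token endpoint
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),  # HTTP Basic client authentication
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```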
So my questions are:
Do you think I have understood the purpose of the different grant types correctly?
How can you keep your client credentials confidential? In both the third and fourth cases, we need to have the client ID and client secret somewhere on the client, which doesn't sound like a good idea.
Even if you use the implicit grant type and you don’t expose your client secret, what stops another application from impersonating your app using the same authorization mechanism and your client id?
To summarize, we want to be able to use the client credentials and resource owner credentials flow from a client application. Both of these flows require you to store the client secret somehow, but the client is a mobile or JavaScript application, so these could easily be stolen.
I'm facing similar issues, and am also relatively new to OAuth. I've implemented "Resource Owner Password Credentials" in our API for our official mobile app to use -- the web flows just seem like they'd be so horrible to use on a mobile platform, and once the user installs an app and trusts that it's our official app, they should feel comfortable typing username/password directly into the app.
The problem is, as you point out, there is no way for my API server to securely verify the client_id of the app. If I include a client_secret in the app code/package, then it's exposed to anyone who installs the app, so requiring a client_secret wouldn't make the process any more secure. So basically, any other app can impersonate my app by copying the client_id.
Just to direct answers at each of your points:
I keep re-reading different drafts of the spec to see if anything's changed, and am focused mostly on the Resource Owner Password Credentials section, but I think you're correct on these. Client Credentials (4) I think could also be used by an in-house or third-party service that might need access to more than just "public" information, like maybe you have analytics or something that needs to get information across all users.
I don't think you can keep anything confidential on the client.
Nothing stops someone else from using your client id. This is my issue too. Once your code leaves the server and is either installed as an app or is running as Javascript in a browser, you can't assume anything is secret.
For our website, we had a similar issue to what you describe with the Client Credentials flow. What I ended up doing is moving the authentication to the server side. The user can authenticate using our web app, but the OAuth token to our API is stored on the server side, and associated with the user's web session. All API requests that the Javascript code makes are actually AJAX calls to the web server. So the browser isn't directly authenticated with the API, but instead has an authenticated web session.
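Roughly, the shape of it is something like this (a simplified Flask-style sketch with invented names and a placeholder API base URL, not our actual code):

```python
import requests
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "replace-me"  # placeholder session-signing key

API_BASE = "https://api.example.com"  # placeholder for the actual API
api_tokens: dict[str, str] = {}       # user id -> OAuth access token, kept server-side only


@app.route("/ajax/transactions")
def transactions():
    user_id = session.get("user_id")  # set when the user logged in to the web app
    if user_id is None or user_id not in api_tokens:
        abort(401)
    # The browser never sees the OAuth token; the web server proxies the call.
    resp = requests.get(
        f"{API_BASE}/transactions",
        headers={"Authorization": f"Bearer {api_tokens[user_id]}"},
        timeout=10,
    )
    return jsonify(resp.json()), resp.status_code
```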
It seems like your use-case for Client Credentials is different, in that you're talking about third-party apps, and are only serving public data through this method. I think your concerns are valid (anyone can steal and use anyone else's API key), but if you only require a free registration to get an API key, I don't see why anyone would really want to steal one.
You could monitor/analyze the usage of each API key to try to detect abuse, at which point you could invalidate one API key and give the legitimate user a new one. This might be the best option, but it's in no way secure.
You could also use a Refresh Token-like scheme for this if you wanted to lock it up a bit tighter, although I don't know how much you would really gain. If you expired the JavaScript-exposed API tokens once a day and required the third party to do some sort of server-side refresh using a (secret) refresh token, then stolen API tokens would never be good for more than a day. Might encourage potential token thieves to just register instead. But sort of a pain for everyone else, so not sure if this is worth it.