As I understand it, the idea is that Azure allows the registration of multiple applications (client IDs), each with multiple secrets.
I (think I) get the part about multiple application registrations, since each app gets fine-grained access control.
The question then is why would it be possible to create multiple (client) secrets for the same application (client) id?
Don't all the secrets provide the exact same access (since they are all bound to the same application/client id)?
Why would someone need even a second (client) secret?
Correct, all secrets have the same access as they are similar to passwords for a user.
The point is that secrets expire, and having more than one allows you to rotate them without downtime:
1. Before secret 1 expires, create a new secret 2.
2. Update the application to use secret 2.
3. Remove secret 1.
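To make the overlap concrete, here is a minimal sketch (Python with requests, which is an assumption, not anything from the question) of the standard client-credentials call against the Azure AD token endpoint: while both secrets exist on the app registration, either one yields a token for the same client ID, so you can switch the application over before deleting the old secret. The tenant ID, client ID, scope and secret values are placeholders.

```python
import requests

TENANT_ID = "<your-tenant-id>"   # placeholder
CLIENT_ID = "<your-client-id>"   # placeholder
TOKEN_URL = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"

def get_token(client_secret: str) -> str:
    """Client-credentials grant: any non-expired secret of the app works."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

# During the rotation window either call succeeds; once everything uses
# secret 2, secret 1 can be deleted without any downtime.
token = get_token("<secret-2>")   # or get_token("<secret-1>") while it still exists
```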
Related
Suppose my application has 2 main services, both using JWT functionality for different purposes.
One is an authentication service, while the other handles sending emails with JWT-signed tokens.
Would it be OK and safe to use one secret key for both, or should I rather use a different secret for each service?
If by JWT secret you mean the token signing key: it is fine to have the same signing key for all tokens in your system.
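If it helps, here is a minimal sketch (Python with the PyJWT package, assumed rather than taken from the question) of one shared HS256 signing key used by both services, with the token's purpose carried in the `aud` claim so an email token cannot be replayed against the authentication service:

```python
import time
import jwt  # pip install PyJWT

SIGNING_KEY = "one-strong-random-key"   # placeholder; keep it in a secret store

def issue(purpose: str, subject: str, ttl_seconds: int) -> str:
    now = int(time.time())
    return jwt.encode(
        {"sub": subject, "aud": purpose, "iat": now, "exp": now + ttl_seconds},
        SIGNING_KEY,
        algorithm="HS256",
    )

auth_token = issue("auth", "user-123", ttl_seconds=900)
email_token = issue("email-verification", "user-123", ttl_seconds=86400)

# Each service accepts only its own audience, even though the key is shared.
jwt.decode(auth_token, SIGNING_KEY, algorithms=["HS256"], audience="auth")
jwt.decode(email_token, SIGNING_KEY, algorithms=["HS256"], audience="email-verification")
```

Scoping each token with an audience (or a purpose claim) is what keeps the two services separated, not the key itself.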
I am implementing a login flow which uses the Google Client ID and Google Client Secret. I am considering the security implications of the Google Client Secret and who should be able to have access to it.
Currently the client secret is stored in an environment variable, but I would like to know what someone with access to this secret could do with it, so I can determine which developers should have access to this environment variable and whether I should set up a different OAuth2 application in development vs production.
Client ID and client secret are similar to a login and password. They give your application the ability to request a user's consent to access their data. If you are storing refresh tokens, they would also give anyone holding them the ability to create access tokens from those refresh tokens.
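As a hedged illustration of what that means in practice (Python with requests, assumed; all credential values are placeholders): anyone holding your client ID, client secret and a stored refresh token can mint fresh access tokens from Google's token endpoint.

```python
import requests

resp = requests.post("https://oauth2.googleapis.com/token", data={
    "grant_type": "refresh_token",
    "client_id": "<google-client-id>",          # placeholder
    "client_secret": "<google-client-secret>",  # placeholder
    "refresh_token": "<stored-refresh-token>",  # placeholder
})
resp.raise_for_status()
print(resp.json()["access_token"])  # a live access token for the user's data
```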
Google's TOS states:
Asking developers to make reasonable efforts to keep their private keys private and not embed them in open source projects.
You should not be sharing this with anyone. It should only be used by you and your developers.
Yes, ideally you should have separate test and production client IDs. The test client ID can be used by your developers; the only thing that should be using your production (verified project) client ID is your production environment. Personally, I would store them in some form of secret store.
It depends on which type of OAuth application you specified.
When creating an OAuth client ID in Google Cloud (and with that, a client secret), you are asked to specify the type of application you are creating:
If you choose Web App, your client secret should really be secret, as it's treated as such by Google and is used to authenticate your own server. You should therefore hide it and especially not include it in open-sourced code.
However, there is also the option of creating a Desktop app, which means you want to use OAuth without having your own server. For this case the documentation by Google says:
The process results in a client ID and, in some cases, a client secret, which you embed in the source code of your application. (In this context, the client secret is obviously not treated as a secret.)
So in this case it's fine (even required) to include the client secret in your app for your users.
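For the Desktop app case, a minimal sketch of the installed-app flow (Python with the google-auth-oauthlib package, which is an assumption; the file name and scope are placeholders) shows why the secret ships with the app: it is exchanged locally, together with the user's consent, for credentials.

```python
from google_auth_oauthlib.flow import InstalledAppFlow

flow = InstalledAppFlow.from_client_secrets_file(
    "client_secret.json",   # downloaded from Google Cloud; contains client_id and client_secret
    scopes=["https://www.googleapis.com/auth/userinfo.email"],
)
creds = flow.run_local_server(port=0)   # opens a browser for the user to consent
print(creds.token)
```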
Our scenario is that we have an API which is currently only secured by a subscription key in APIM.
We plan to change this to also secure it with OAuth 2 following this guidance from Microsoft, we will then use the JWT validation policies within APIM to ensure that the user requesting access is a member of the appropriate groups to access given endpoints etc.
However as part of our release process we need to run some automated tests which call the API and check that certain data is returned.
Because these tests are run as part of an automated release pipeline we are struggling to understand how OAuth will fit into this process - as a user is required to enter credentials for a token to be issued...
We originally thought that we could just request a token manually once and then hard code it into the tests, but as tokens are only valid for a short time this isn't a good solution.
Other things we are considering are :
Creating a "test user" in AD and storing their credentials in the test project and then when the tests run we can request a token using the "Password" grant type and passing the username and password" however this doesn't seem like the best from a security point of view, even though the user would only have access to a very limited subset of the APIs functionality it still doesn't seem like a good practice.
Requesting a token using the client secret, however the downside to that is this is that the JWT does not contain the groups claim so this token will not pass JWT Validation.
This must be something that others have encountered? What is best practice in this scenario?
As you can see in the article you reference, Azure API Management will be the entry point for access to your API. So, within API Management you will have subscriptions with keys for your API. You just need to create a subscription for your automated testing and save its key in Azure Key Vault. Then, during the deployment, you pull the subscription key from Key Vault and use it to call the API Management endpoint, which in turn will call your API.
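A sketch of what that test step could look like (Python with the azure-identity, azure-keyvault-secrets and requests packages, all assumed; the vault name, secret name and API URL are placeholders):

```python
import requests
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Pull the test subscription key from Key Vault using the pipeline's identity.
kv = SecretClient(
    vault_url="https://<your-key-vault>.vault.azure.net",
    credential=DefaultAzureCredential(),
)
subscription_key = kv.get_secret("apim-test-subscription-key").value

# Call the API Management endpoint with the subscription key header.
resp = requests.get(
    "https://<your-apim>.azure-api.net/orders/123",
    headers={"Ocp-Apim-Subscription-Key": subscription_key},
)
assert resp.status_code == 200
```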
The solution that we went with in the end was to create a new App Registration for the Test project, then in APIM we added a rule so that the JWT policy is not applied to connections from that app.
Might not be the best solution but it works.
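For anyone taking the same route, a hedged sketch of how the test project can authenticate with its own App Registration via the client credentials grant (Python with MSAL, assumed; the tenant, client ID, secret and scope are placeholders), which APIM can then recognise, for example by the token's app ID, when deciding whether to apply the JWT policy:

```python
import msal

app = msal.ConfidentialClientApplication(
    client_id="<test-app-client-id>",                        # placeholder
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<test-app-client-secret>",            # from the pipeline's secret store
)
result = app.acquire_token_for_client(scopes=["api://<api-app-id>/.default"])
access_token = result["access_token"]   # sent as a Bearer token by the test run
```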
I have an application using HashiCorp Vault to store a username and password secret. The application is deployed to a cloud hosting platform and passed a token as an environment variable. On application start, the secret is read from Vault using the token and used to open a session to a remote service. The application and its session to the remote service are long-lived. If all goes well, the application rarely restarts and therefore rarely reads from Vault. When the application does restart, the token will likely have expired, resulting in failure.
Is there any best practice guidance for how clients should use Vault? The token lifetime could be extended, but the longer the lifetime, the weaker the security. The application could re-establish the session with the remote service every time it is needed, but this would be inefficient. Is there another alternative I'm not considering? Any thoughts would be appreciated.
You should use Vault's AppRole auth method instead of passing in a plain token. With AppRole, you bake a role ID into your app and then deploy a secret ID for that role in your environment variable.
Your app can then combine these to get a real token from Vault on startup, and periodically renew that token as it is running.
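A minimal sketch with the hvac client (an assumption; the Vault address, role and secret path are placeholders):

```python
import os
import hvac

client = hvac.Client(url="https://vault.example.com:8200")

# The role ID is baked into the app; the secret ID arrives via the environment.
client.auth.approle.login(
    role_id="<baked-in-role-id>",
    secret_id=os.environ["VAULT_SECRET_ID"],
)

# Read the username/password that the long-lived remote session needs.
creds = client.secrets.kv.v2.read_secret_version(path="myapp/remote-service")
username = creds["data"]["data"]["username"]
password = creds["data"]["data"]["password"]

# ... open the long-lived session to the remote service ...

# From a periodic background task, keep the Vault token alive within its max TTL.
client.auth.token.renew_self()
```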
I am working on a web application which uses OAuth to authenticate against different services.
Is there any risk in storing these tokens and secrets directly in the database, or should I encrypt them?
What are the general security patterns for saving OAuth tokens and secrets?
This thread answers all of your questions:
Securely Storing OpenID identifiers and OAuth tokens
Essentially, the following are interdependent in one way or another:
Consumer key
Consumer secret
Access token
Access token secret
Unless the consumer key/secret are also at risk, you don't need to encrypt the access token/secret. The access tokens can only be used in combination with the consumer key/secret which generated them.
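To illustrate the dependency (Python with requests-oauthlib, assumed; the URL and credential values are placeholders): an OAuth 1.0a request is signed with all four values, so a leaked access token and token secret are useless without the matching consumer key and secret.

```python
import requests
from requests_oauthlib import OAuth1

auth = OAuth1(
    "<consumer-key>",          # placeholders; the request signature needs all four values
    "<consumer-secret>",
    "<access-token>",
    "<access-token-secret>",
)
resp = requests.get("https://api.example.com/v1/me", auth=auth)
```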
I'm assuming you're talking about the typical "Service Provider," "Consumer" and "User" setup?
If so, the session and cookies are good enough for saving tokens, but the problem is that it's your Consumers (your clients, as I understand) that need to be saving them and not you. Is there a session/cookie available in the scope of the calls to your API?
In either case, if the tokens are stored in the session or cookies, they will be "temporary" keys and the User will have to re-authenticate when they expire. But there is nothing wrong with that as far as the OAuth spec is concerned, as long as the Users don't mind re-authenticating.
Also bear in mind that the tokens are tied to a given service and user, and not to any IP address or device UUID, for example. They could not be used with different API and secret keys, as they are tied to the application they were issued for.
This way the user can de-authorize on a per-application basis, and every app can have a different set of permissions (e.g. read-only access). So the answer is: you don't need to encrypt them, and you need them in plaintext anyway (if you're the User).
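If you do end up keeping tokens in the session on the Consumer side, a minimal sketch (Python with Flask, purely as an assumed example; the route and token source are placeholders) would be a signed session cookie set in the OAuth callback:

```python
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "random-key-used-to-sign-the-session-cookie"   # placeholder

@app.route("/oauth/callback")
def oauth_callback():
    # ... exchange the authorization code / request token for an access token here ...
    session["access_token"] = "<token-from-the-provider>"
    return "Signed in"
```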