With MSAL v2, I don't see how I can get tokens for different resources.
This is a simple SPA that calls 2 different APIs (2 different resources on B2C).
How is this supported?
Why do I need to log in for every different service?
With the login method, you have to specify the scopes, and I cannot set scopes from more than one resource.
Following this example, but with 2 resources, won't work:
https://github.com/Azure-Samples/ms-identity-javascript-react-spa-dotnetcore-webapi-obo
If we follow this kind of architecture, aren't we creating a funnel problem where all the calls are redirected to a single point?
Your understanding is correct. Currently the AAD B2C /authorize endpoint only allows consent to a single resource, and the refresh token can only be traded for access tokens covering the scopes consented to at that /authorize call.
Understandably, as you say, this means your backend APIs are essentially modelled as one single resource. You can consent to 30 scopes on one resource.
Work is ongoing to change this behaviour so that the refresh token can be traded for multiple different resources.
If you are using a SPA application, the acquisition of new tokens relies on session cookies, and that approach does not have the above limitation. Only PKCE SPA apps relying on refresh tokens have this issue; .NET apps also have this limitation.
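To make the per-resource behaviour concrete, here is a rough sketch using @azure/msal-browser (MSAL v2). The client ID, authority, and scope URIs are placeholders, and in a B2C tenant the silent call for the second resource may still fall back to interaction because of the refresh-token limitation described above.

```typescript
// Illustrative only: each acquireToken* call may request scopes for ONE resource,
// so call it once per API. Configuration values below are placeholders.
import {
  PublicClientApplication,
  InteractionRequiredAuthError,
} from "@azure/msal-browser";

const msalInstance = new PublicClientApplication({
  auth: {
    clientId: "<spa-client-id>",
    authority: "https://<tenant>.b2clogin.com/<tenant>.onmicrosoft.com/<policy>",
    knownAuthorities: ["<tenant>.b2clogin.com"],
  },
});

async function getTokenFor(scopes: string[]): Promise<string> {
  const account = msalInstance.getAllAccounts()[0]; // assumes the user has already signed in
  try {
    const result = await msalInstance.acquireTokenSilent({ scopes, account });
    return result.accessToken;
  } catch (e) {
    // With B2C, the refresh token is bound to the resource consented at /authorize,
    // so requesting a second resource silently can require interaction.
    if (e instanceof InteractionRequiredAuthError) {
      const result = await msalInstance.acquireTokenPopup({ scopes });
      return result.accessToken;
    }
    throw e;
  }
}

async function callBothApis() {
  const tokenA = await getTokenFor(["https://<tenant>.onmicrosoft.com/api-a/read"]);
  const tokenB = await getTokenFor(["https://<tenant>.onmicrosoft.com/api-b/read"]);
  // ...call each API with its own access token
}
```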
Related
I want to authenticate users with Azure Active Directory (AD) in a mobile app that calls its own REST API, and, if possible, keep it simple.
It looks like the documented way (here or here) to do so is to:
register the API app with AD, expose some scope(s) as delegated permissions
register the mobile app, add these scopes as API permissions to this app
make authorization decisions in the API app based on these scopes
Question:
Now, since I feel the front-end and back-end parts of my app should belong in the same "black box", and there are no fine-grained user roles within the app that would justify multiple scopes or require the user to consent to them, I'm wondering whether there is a recommended (and secure) way to go with just one app registration instead of two?
What I tried:
When using Okta in a similar scenario, I only had one app (clientId), and the back-end configuration pretty much validated the JWT token's issuer, domain, and a default audience string (as I understand it). I tried inspecting tokens from AD acquired via the authorization code flow for the usual scopes (openid profile) to see what their audience was and whether this could be reproduced. This is what I got:
the well-known GUID of Microsoft Graph (for the access token) - this one doesn't feel "correct" to validate, as pretty much any AD user could present an access token for MS Graph, and only assigned users should be able to use my app
the client ID of the app (for the ID token) - but the docs say these should not be used for authorization, so I'm not sure it's a great idea to pass them as Bearer tokens to the API
The standard thing in OAuth technologies is to only register a client for your mobile app. Azure has some vendor-specific behaviour when it comes to APIs, related to the resource indicators spec. When there is no API registration, your mobile app will receive JWT access tokens with a nonce field in the JWT header. These are intended to be sent to Graph and will fail validation if you ever try to validate them in your own APIs.
If, like me, you want to stay close to standards, one option is to add a single logical app registration to represent your set of APIs. You might design an audience of api.mycompany.com, though Azure will give you a technical value like cb398b43-96e8-48e6-8e8e-b168d5816c0e. You can then expose scopes and use them in client apps. This is fairly easy to manage. Some info in my blog post from a couple of years back might help clarify this.
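As a rough illustration of validating such tokens in your own API (not taken from the blog post above), the sketch below assumes an Express API using the jsonwebtoken and jwks-rsa packages; the tenant ID is a placeholder, and the audience should be whatever value Azure assigned to your logical API registration.

```typescript
// Hypothetical token validation middleware for the logical API registration.
import express from "express";
import jwt, { JwtHeader, SigningKeyCallback } from "jsonwebtoken";
import jwksClient from "jwks-rsa";

const keyClient = jwksClient({
  jwksUri: "https://login.microsoftonline.com/<tenant-id>/discovery/v2.0/keys",
});

function getKey(header: JwtHeader, callback: SigningKeyCallback): void {
  keyClient.getSigningKey(header.kid as string, (err, key) => {
    callback(err, key?.getPublicKey());
  });
}

const app = express();

app.use((req, res, next) => {
  const token = (req.headers.authorization ?? "").replace("Bearer ", "");
  jwt.verify(
    token,
    getKey,
    {
      // The audience Azure assigned to the API registration (GUID or api://... URI).
      audience: "api://cb398b43-96e8-48e6-8e8e-b168d5816c0e",
      issuer: "https://login.microsoftonline.com/<tenant-id>/v2.0",
    },
    (err, payload) => {
      if (err) return res.status(401).send("Invalid token");
      (req as any).user = payload; // make validated claims available to route handlers
      next();
    }
  );
});
```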
Now, since I feel the front-end and back-end parts of my app should belong in the same "black box", and there are no fine-grained user roles within the app that would justify multiple scopes or require the user to consent to them, I'm wondering whether there is a recommended (and secure) way to go with just one app registration instead of two?
You can create just one app registration; in fact the newer app registration model was designed to do that.
In the case where an API is used only by the app itself, I would register a single scope, something like MobileApp.Access.
Your API should verify the presence of this scope to prevent unauthorized applications from calling it.
In addition to verifying the scope, you will need to verify user permissions and filter the data based on the user's identity, depending on your use case.
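For example, a check along these lines might sit in the API. This is a sketch that assumes a validation middleware has already verified the token and attached its payload as req.user; the scope name and data-access call are illustrative.

```typescript
// Illustrative scope check plus user-level filtering in an Express API.
import { NextFunction, Request, Response } from "express";

export function requireScope(required: string) {
  return (req: Request, res: Response, next: NextFunction) => {
    const scopes: string[] = ((req as any).user?.scp ?? "").split(" ");
    if (!scopes.includes(required)) {
      return res.status(403).send("Required scope missing");
    }
    next();
  };
}

// Usage: verify the app-level scope, then filter data by the signed-in user's identity.
// app.get("/orders", requireScope("MobileApp.Access"), async (req, res) => {
//   const userId = (req as any).user.oid;        // object ID of the signed-in user
//   res.json(await getOrdersForUser(userId));    // hypothetical data-access call
// });
```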
Your question seems to suggest that you might be mixing up scopes and user roles.
Scopes are permissions given to an application. Also called delegated permissions, they allow an app to perform some actions on behalf of the signed-in user.
My situation is this. I have a legacy Angular application which calls a Node API server. This Node server currently exposes a /login endpoint to which I pass a username/password from my Angular SPA. The Node server queries a local Active Directory instance (not ADFS), and if the user authenticates, it uses roles and privileges stored in the application database (not AD) to build a JWT containing this user's claims. The Angular applications (there are actually 2) can then use the token contents to suppress menu options/views based on a user's permissions. When the API is called, the right to use that endpoint is also evaluated against the passed-in token.
We are now looking at moving our source of authentication to an OAuth 2.0 provider so that customers can use their own ADFS or other identity provider. They will, however, need to retain control of authorization rules within my application itself, as administrators do not typically have access to Active Directory to maintain user rights there.
I can't seem to find an OIDC pattern/workflow that addresses this use case. I was wondering if I could invoke the /authorize endpoint from my clients, but then pass the returned code into my existing Node server to invoke the /token endpoint. If that call succeeded within Node, I thought I could keep building my custom JWT as I do now, using a mix of information from the OAuth 2.0 token/userinfo and the application database. I'm happy for my existing mechanisms to take care of token refreshes and revocation.
I think I'm making things harder by wanting to know my specific application claims within my client applications so that I can hide menu options. If it were just a case of protecting the API when called I'm guessing I could just do a lookup of permissions by sub every time a protected API was called.
I'm spooked that I can't find any posts of anyone doing anything similar. Am I missing the point of OIDC (to which I am very new)?
Thanks in advance...
Good question, because pretty much all real world authorization is based on domain specific claims, and this is often not explained well. The following notes describe the main behaviors to aim for, regardless of your provider. The Curity articles on scopes and claims provide further background on designing your authorization.
CONFIDENTIAL TOKENS
UIs can read claims from ID tokens, but should not read access tokens. Also, tokens returned to UIs should not contain sensitive data such as names or emails. There are two ways to keep tokens confidential:
The ID token should be a JWT with only a subject claim
The access token should be a JWT with only a subject claim, or should be an opaque token that is introspected
GETTING DOMAIN SPECIFIC CLAIMS IN UIs
How does a UI get the domain-specific data it needs? The logical answer here is to send the access token to an API and get back one or both of these types of information (a small sketch follows the list):
Identity information from the token
Domain specific data that the API looks up
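As a small sketch of that idea, the API might expose something like the following. The endpoint name and lookup function are made up, and a token validation middleware is assumed to have already run.

```typescript
// Hypothetical endpoint: the UI sends its access token and receives both identity
// information and domain-specific data that the API looks up.
import express from "express";

const app = express();

// Placeholder for a database lookup keyed by the token's subject claim.
async function lookupUserSettings(subject: string) {
  return { roles: ["orders:view"], regions: ["EMEA"] };
}

app.get("/api/userinfo", async (req, res) => {
  const claims = (req as any).user; // set by the API's token validation middleware
  const domainData = await lookupUserSettings(claims.sub);
  res.json({
    subject: claims.sub,       // identity information from the token
    roles: domainData.roles,   // domain-specific data the API looked up
    regions: domainData.regions,
  });
});
```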
GETTING DOMAIN SPECIFIC CLAIMS IN APIs
How does an API get the domain specific data it needs from a JWT containing only a UUID subject claim? There are two options here:
The Authorization Server (AS) reaches out to domain specific data at the time of token issuance, to include custom claims in access tokens. The AS then stores the JWT and returns an opaque access token to the UI.
The API looks up domain specific claims when an access token is first received, and forms a Claims Principal consisting of both identity data and domain specific data. See my Node.js API code for an example; a simplified sketch follows below.
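The sketch below illustrates the second option. It is not the linked Node.js sample; the ClaimsPrincipal shape and the lookup function are assumptions made purely for illustration.

```typescript
// Build a Claims Principal from token claims plus domain-specific data.
interface ClaimsPrincipal {
  subject: string;    // from the access token
  scopes: string[];   // from the access token
  roles: string[];    // looked up in the API's own data
  tenantId?: string;  // looked up in the API's own data
}

// Placeholder for the API's own data store; a real API would cache the result
// per token so the lookup only happens when a token is first received.
async function loadUserRecord(subject: string) {
  return { roles: ["admin"], tenantId: "tenant-123" };
}

async function buildClaimsPrincipal(payload: { sub: string; scope?: string }): Promise<ClaimsPrincipal> {
  const domain = await loadUserRecord(payload.sub);
  return {
    subject: payload.sub,
    scopes: (payload.scope ?? "").split(" "),
    roles: domain.roles,
    tenantId: domain.tenantId,
  };
}
```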
MAPPING IDENTITY DATA TO BUSINESS DATA
At Curity we have a recent article on this topic that may also be useful to you for your migration. This will help you to design tokens and plan end-to-end flows so that the correct claims are made available to your APIs and UIs.
EXTERNAL IDENTITY PROVIDERS
These do not affect the architecture at all. Your UIs always redirect to the AS using OIDC, and the AS manages connections to the IDPs. The tokens issued to your applications are fully determined by the AS, regardless of whether the IDP used SAML etc.
You'll only get authentication from your OAuth provider; you'll have to manage authorization yourself. You won't be able to rely on the claims in the OIDC or SAML response or in userinfo unless you can hook into the authentication process to inject the values you need. (AWS has a pre-token-gen hook that lets you add custom claims to your SAML response.)
If I understand your current process correctly, you'll have to move the data you get from /userinfo to your application's database and provide a way for admins to manage those permissions.
I'm not sure this answer gives you enough information to figure out how to accomplish what you want. If you could let us know what frameworks and infrastructure you use, we might be able to point you to some specific tools that can help.
I'm developing a bunch of APIs that will have both internal and external (to the company) consumers. I'm using AzureAD for authentication. Whilst these consumers will be integrations written in code, I don't want to have to create and manage dedicated "app registrations" for each client/consumer. I also want to be able to use roles for more granular permissions.
It feels like a long-lived refresh token is the best option for this, and I've written a working proof-of-concept for this, which meets the requirements perfectly.
Given this is security though - I wanted to ask if I'm doing anything stupid or wrong.
First question - is it okay to treat a refresh token as a long-lived secret that consumers can store in their secure config, then their systems programmatically use that to query an access token to use against our APIs?
If this is okay, my second question is regarding the client ID and secret. Because the implicit flow doesn't support refresh tokens, I'm using the authorisation code flow. For this, it looks like I have to pass the client ID and client secret, as well as the authorisation code or refresh token, to Azure AD. This means I need to create a dedicated "auth API" that the consumers call to request these tokens. This auth API then just makes a downstream call to Azure AD, passing the client ID and secret (which the consumer obviously doesn't know about). It feels like if the implicit flow supported refresh tokens, I wouldn't have to implement this "auth API" at all. But because I have to use the authorisation code flow, it's forcing me to implement a proxy "auth API" for all token requests to go through. Am I missing something, or is this the way I should be doing it? It's fine if so, as this is what my PoC is doing, and it's working. But again, I just wanted a sanity check on this with it being security related.
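Roughly, the flow I'm describing looks like this. This is a simplified sketch rather than my actual code; it assumes Node 18+ (global fetch) and Express, and the endpoint path and environment variable names are placeholders.

```typescript
// Sketch of the "auth API" proxy: consumers send a refresh token (or an authorization
// code) and the proxy adds the client ID/secret before calling Azure AD's token endpoint.
import express from "express";

const app = express();
app.use(express.json());

const TOKEN_ENDPOINT = `https://login.microsoftonline.com/${process.env.TENANT_ID}/oauth2/v2.0/token`;

app.post("/auth/token", async (req, res) => {
  const body = new URLSearchParams({
    client_id: process.env.CLIENT_ID!,
    client_secret: process.env.CLIENT_SECRET!, // never exposed to consumers
    grant_type: "refresh_token",
    refresh_token: req.body.refresh_token,
    scope: "api://<your-api-id>/.default offline_access",
  });

  const response = await fetch(TOKEN_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body,
  });

  // Pass the token response (access_token, refresh_token, expires_in) back to the caller.
  res.status(response.status).json(await response.json());
});
```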
Ps. I know Azure API Management gives a lot of this functionality - but for reasons out of scope of this question, this isn't a good fit for us.
Update
To add another couple of reasons why this method fits my use-case really well...
A lot of internal developers will also be using these APIs (internal to the company). They already have Azure AD accounts anyway. So this then becomes super simple to manage: we just have a bunch of security groups with certain roles in the app registration, and we can just add devs to those groups. And they don't need to know the client ID/secret; they just use their own user account.
The APIs have Swagger UIs. Using users instead of client ID/secret means developers can use the Swagger UIs with single sign-on.
WEB CLIENTS
So a web app is used by developers to sign in via their Azure AD account. Authorization Code Flow is fine, after which each user will get access tokens and refresh tokens for calling APIs. Tokens will include the user's role and APIs can use them for authorization.
EXTERNAL API CLIENTS
These might be B2B clients and therefore use the Client Credentials Flow. Tokens issued to these clients would then have no user context.
INTERNAL API CLIENTS
It is a little unusual for a developer to login to a web client and then take tokens issued and use them in other apps. This is partly about reliability and partly because different apps generally access different areas of data. See this scopes article for details on designing how components call each other.
REFRESH TOKENS
A refresh token is something that will expire so you need a plan for this. Avoid using them in static configuration. Consumers need to handle refresh token expiry in order for their app to be reliable.
CODED INTEGRATIONS
Are these web clients or APIs, and how many distinct apps are there? It feels like combining these into one or a few client registrations is the right option. A common setup might work like this:
All developers might share the same registration
All web clients use the Authorization Code Flow - and your Auth API
API clients forward tokens to other APIs, to maintain user context
We have a case where we have 'clients'. Every client is a different Azure tenant, but we keep their tenant ID in our database. We have an Angular application where we want a dropdown with all the clients and, based on the selected client, to query that tenant's users so we can add them to our database and give them permissions to all the other applications. From my reading, this is not achievable.
This permissions application will only be used by 3-4 people, who are part of our tenant only.
Is there a way we can achieve that?
You would need to use the User.Read.All Application permissions and authenticate using the Client Credentials grant. You would then need to retrieve a token from each tenant prior to calling /v1.0/users.
Note that this will require receiving Admin Consent from each tenant you need to query.
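To make that concrete, a rough sketch of the per-tenant flow might look like the following. It assumes Node 18+ (global fetch); the function and variable names are illustrative, and each tenant must already have granted admin consent.

```typescript
// Get an app-only Graph token from a specific tenant, then list that tenant's users.
async function getUsersForTenant(tenantId: string, clientId: string, clientSecret: string) {
  // 1. Client credentials grant against the target tenant.
  const tokenResponse = await fetch(
    `https://login.microsoftonline.com/${tenantId}/oauth2/v2.0/token`,
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        client_id: clientId,
        client_secret: clientSecret,
        grant_type: "client_credentials",
        scope: "https://graph.microsoft.com/.default",
      }),
    }
  );
  const { access_token } = await tokenResponse.json();

  // 2. Call Microsoft Graph with that tenant-scoped token.
  const usersResponse = await fetch("https://graph.microsoft.com/v1.0/users", {
    headers: { Authorization: `Bearer ${access_token}` },
  });
  return usersResponse.json();
}
```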
Rohit's comment below is an excellent point. If your app is a SPA, meaning the authorization is happening entirely in the browser via JavaScript, you're really limited to OAuth's Implicit Grant.
To use Client Credentials or Authorization Code grants, you need some kind of backend API to handle the authentication and calls to authenticated APIs. I would argue that you should be doing this anyway, if for no other reason than forcing your user to reauthenticate every hour isn't a great user experience.
If you don't mind requiring each user in the tenant to authenticate, you could use the Authorization Code grant. This is a bit more complex to set up because it requires you to keep track of a separate refresh token for each user. Your backend would need to retrieve the refresh token, exchange it for a set of new tokens (access_token and refresh_token), store the new refresh token, and then call the API using the new access token.
Since there is a 1:1 relationship between tokens and users, at scale you're looking at a lot of tokens. You'll also need a bunch of maintenance workflows to handle issues that may come up (refreshing the token fails, new scope requirements, etc.).
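A hedged sketch of that per-user refresh cycle could look like this; the TokenStore interface, scope, and environment variables are placeholders, and Node 18+ (global fetch) is assumed.

```typescript
// Retrieve, exchange, and rotate a stored refresh token for a given user.
interface TokenStore {
  getRefreshToken(userId: string): Promise<string>;
  saveRefreshToken(userId: string, refreshToken: string): Promise<void>;
}

async function getAccessTokenForUser(userId: string, store: TokenStore) {
  const refreshToken = await store.getRefreshToken(userId);

  const response = await fetch(
    `https://login.microsoftonline.com/${process.env.TENANT_ID}/oauth2/v2.0/token`,
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        client_id: process.env.CLIENT_ID!,
        client_secret: process.env.CLIENT_SECRET!,
        grant_type: "refresh_token",
        refresh_token: refreshToken,
        scope: "https://graph.microsoft.com/User.Read.All offline_access",
      }),
    }
  );

  if (!response.ok) {
    // Refresh failed: this user needs to re-authenticate interactively.
    throw new Error(`Token refresh failed for user ${userId}`);
  }

  const tokens = await response.json();
  await store.saveRefreshToken(userId, tokens.refresh_token); // rotate the stored token
  return tokens.access_token;
}
```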
It really comes down to the depth of the relationship between your app and the tenant. If you're providing security and analysis to the entire organization, then asking for global Mail.Read is certainly reasonable. If you're providing a service to just part of an organization, it can be hard to get IT to sign off on such a broad permission scope.
I'm working on building a RESTful API for one of the applications I maintain. We're currently looking to build various things into it that require more controlled access and security. While researching how to go about securing the API, I found a few different opinions on what form to use. I've seen some resources say HTTP-Auth is the way to go, while others prefer API keys, and even others (including the questions I found here on SO) swear by OAuth.
Then, of course, those who prefer, say, API keys argue that OAuth is designed for applications getting access on behalf of a user (as I understand it, such as signing into a non-Facebook site using your Facebook account), and not for a user directly accessing resources on a site they've specifically signed up for (such as the official Twitter client accessing the Twitter servers). However, OAuth seems to be recommended even for the most basic of authentication needs.
My question, then, is - assuming it's all done over HTTPS, what are some of the practical differences between the three? When should one be considered over the others?
It depends on your needs. Do you need:
Identity – who claims to be making an API request?
Authentication – are they really who they say they are?
Authorization – are they allowed to do what they are trying to do?
or all three?
If you just need to identify the caller to keep track of volume or the number of API calls, use a simple API key. Bear in mind that if the user to whom you have issued the API key shares it with someone else, that person will be able to call your API as well.
But if you also need authorization, that is, you need to provide access only to certain resources based on the caller of the API, then use OAuth.
Here's a good description: http://www.srimax.com/index.php/do-you-need-api-keys-api-identity-vs-authorization/
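For the API-key case above, the check can be very small. Here is a sketch with an illustrative header name and an in-memory store; a real service would persist keys and counters in a database.

```typescript
// Minimal API-key check: identifies the caller and counts calls per key.
import express from "express";

const app = express();

const apiKeys = new Map<string, { owner: string; calls: number }>([
  ["k-123", { owner: "client-a", calls: 0 }],
]);

app.use((req, res, next) => {
  const key = req.header("x-api-key");
  const record = key ? apiKeys.get(key) : undefined;
  if (!record) return res.status(401).send("Unknown API key");
  record.calls += 1; // identity and volume tracking, but no user-level authorization
  next();
});
```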
API Keys or even Tokens fall into the category of direct Authentication and Authorization mechanisms, as they grant access to the exposed resources of the REST APIs. Such direct mechanisms can be used in delegation use cases.
In order to get access to a resource or a set of resources exposed by REST endpoints, the requestor's privileges need to be checked according to its identity. The first step of the workflow is verifying the identity by authenticating the request; the next step is checking the identity against a set of defined rules to authorize the level of access (i.e. read, write, or read/write). Once those steps are accomplished, a typical further concern is the allowed request rate, meaning how many requests per second the requestor may perform against the given resource(s).
OAuth (Open Authorization) is a standard protocol for delegated access, often used by major Internet companies to grant access without providing the password. As is clear, OAuth is a protocol which fulfils the above-mentioned concerns, Authentication and Authorization, by providing secure delegated access to server resources on behalf of the resource owner. It is based on an access-token mechanism which allows a third party to access the resources managed by the server on behalf of the resource owner. For example, ServiceX wants to access John Smith's Google Account on John's behalf; once John has authorized the delegation, ServiceX will be issued a time-limited token to access the Google Account details, very likely with read-only access.
The concept of an API Key is very similar to the OAuth Token described above. The major difference is the absence of delegation: the user requests the key directly from the service provider for subsequent programmatic interactions. The API Key is time-based as well: like the OAuth Token, the key is subject to a time lease, or expiration period.
As an additional aspect, both the key and the token may be subject to rate limiting by service contract, i.e. only a given number of requests per second will be served.
To recap, in reality there is no real difference between traditional Authentication and Authorization mechanisms and the Key/Token-based versions. The paradigm is slightly different though: instead of reusing credentials at each and every interaction between client and server, a supporting Key/Token is used, which makes the overall interaction experience smoother and likely more secure (often, following the JWT standard, Keys and Tokens are digitally signed by the server to prevent crafting).
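As a tiny illustration of that signing point, using the jsonwebtoken package; the secret and claims are placeholders, and a real server would typically use an asymmetric key pair.

```typescript
// The server signs tokens so clients cannot craft or alter them without the key.
import jwt from "jsonwebtoken";

const SIGNING_SECRET = "replace-with-a-real-secret";

// Issue a signed, time-limited token with the caller's claims.
const token = jwt.sign({ sub: "user-42", scope: "read" }, SIGNING_SECRET, { expiresIn: "1h" });

// On each request, verify the signature and expiry before honouring the token.
const payload = jwt.verify(token, SIGNING_SECRET);
console.log(payload);
```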
Direct Authentication and Authorization: Key-based protocols as a variant of the traditional credentials-based versions.
Delegated Authentication and Authorization: like OAuth-based protocols, which in turn use Tokens, again as a variant of credential-based versions (the overall goal is not disclosing the password to any 3rd party).
Both categories use a traditional identity verification workflow for the very first interaction with the server that owns the resource(s) of interest.