Azure AD device flow verification_url

Consider this Azure AD OAuth 2.0 device flow grant request:
POST https://login.microsoftonline.com/common/oauth2/devicecode
Content-Type: application/x-www-form-urlencoded
client_id=12345678-1234-1234-1234-123456789012
&grant_type=device_code
&resource=https://graph.microsoft.com
(skipped urlencoding for readability)
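For completeness, the encoded body can be produced with standard tooling; a quick sketch (JavaScript, using the placeholder client_id from above):

```javascript
// Build the x-www-form-urlencoded body for the device code request.
// URLSearchParams applies the percent-encoding skipped above for readability.
const params = new URLSearchParams({
  client_id: "12345678-1234-1234-1234-123456789012",
  grant_type: "device_code",
  resource: "https://graph.microsoft.com",
});
const body = params.toString();
// the resource value ends up as "resource=https%3A%2F%2Fgraph.microsoft.com"
```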
According to the IETF device flow draft, the response should include a verification_uri parameter:
verification_uri
REQUIRED. The end-user verification URI on the authorization server. The URI should be short and easy to remember as end-users will be asked to manually type it into their user-agent.
{
"device_code": "GMMhmHCXhWEzkobqIHGG_EnNYYsAkukHspeYUk9E8",
"user_code": "WDJB-MJHT",
"verification_uri": "https://www.example.com/device",
...
However the response from Azure AD contains
verification_url instead (note url instead of uri):
"verification_url": "https://aka.ms/devicelogin"
Is this just a typo in Azure AD's Device Flow implementation?
Should I treat both variants as valid? Is this being renamed to verification_url in the next draft?
One additional question: can I request a device flow grant from an Azure AD v2 endpoint?
The token endpoint seems to exist as /common/oauth2/v2.0/token, but its device code counterpart, /common/oauth2/v2.0/devicecode, returns 404.
There is a /common/oauth2/devicecode, but I'm unable to use the code it issues against the v2.0 token endpoint (it immediately returns AADSTS70019 Verification code expired.).

It's probably not a typo. The IETF draft (that you referred to) is backed by both Google and Microsoft, yet both companies implemented it without regard to this difference, namely "verification_uri" vs. "verification_url".
Google came first: they implemented the device flow years ago. I'm not sure of the exact date of first publication, but it was already available in 2012, and they used "verification_url" from the start. The IETF draft's first version dates back to 2015, and for some reason the Google team responsible for the draft decided to use "verification_uri" despite the fact that their own implementation had already used "verification_url" for years. And they never changed either the draft or their implementation; they use "verification_url" in their documentation as well.
https://developers.google.com/identity/protocols/OAuth2ForDevices
https://developers.google.com/identity/sign-in/devices
Facebook on the other hand uses the draft's version for the field name, i.e. "verification_uri". Check out their documentation (and the implementation is aligned with the doc): https://developers.facebook.com/docs/facebook-login/for-devices
I've yet to find official documentation for Microsoft's (i.e. Azure's) device flow implementation, but here are the few posts/articles about this subject that are on a *.microsoft.com domain:
https://blogs.msdn.microsoft.com/azuredev/2018/02/13/assisted-login-using-the-oauth-deviceprofile-flow/
https://azure.microsoft.com/en-us/resources/samples/active-directory-dotnet-deviceprofile/
The latter is accompanied by a GitHub repo: https://github.com/Azure-Samples/active-directory-dotnet-deviceprofile
And here are a few non-MS sources:
https://www.jkawamoto.info/blogs/device-authorization-for-azure/
https://tsmatz.wordpress.com/2016/03/12/azure-ad-device-profile-oauth-flow/
The latter one (it's in Japanese) is actually the first detailed example of Azure's device flow implementation that I could find. :-) And it has "verification_url" as well.
As for your "additional question" ("can I request a device flow grant from an Azure AD v2 endpoint?"), I've no idea. Microsoft's device flow implementation is not even officially supported yet (at least the lack of documentation suggests this), so it's subject to change.
The v2.0 protocol pages do not mention the "devicecode" endpoint either.
See:
https://learn.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-protocols-oauth-code
https://learn.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-limitations
https://learn.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-compare
So for now I suggest not to build anything production-like on Azure's device flow.
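Given that both spellings exist in the wild, a defensive client can simply accept either field name. A minimal sketch (illustrative only, not from any official SDK):

```javascript
// Normalize a device-code response so callers can rely on a single field
// name, whether the provider returns "verification_url" (Azure AD, Google)
// or "verification_uri" (Facebook, the IETF draft).
function normalizeDeviceCodeResponse(response) {
  const verificationUri = response.verification_uri || response.verification_url;
  if (!verificationUri) {
    throw new Error("Device code response contains no verification URI/URL");
  }
  return { ...response, verification_uri: verificationUri };
}
```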


If I already have ../auth/documents.currentonly scope, do I also need ../auth/documents?

I'm transitioning a Google Docs add-on that was approved when the add-on concept first started (many years ago) from a Docs-only add-on to one that works for both Slides and Docs. In the process, I have had to redefine a lot of things (create a new project) and request authorization for OAuth scopes.
I had assumed that if my add-on had ../auth/documents.currentonly (which is truly all it needs), then I was good to go. I did have to request authorization for external_service and container.ui, which I obtained quickly from Google. So, I published the add-on, and all looked OK. I was able to install it on my test accounts, etc. I've seen the number of public users go from 0 to 63 in about a week.
However, I just got an obscure email from Google saying I had to take action because I didn't have the authorizations:
Apps requesting risky OAuth scopes that have not completed the OAuth developer verification process are limited to 100 new user grants.
The email doesn't specify which scope is risky, however. The OAuth consent screen shows that all my APIs that needed authorization are approved (I also have an email showing they were granted authorization).
The consent screen doesn't allow me to request verification (the button is grayed out) in its current state. I assume that, since no verification is requested or given for them, the currentonly scopes are not "risky".
I have replied to Google's email (which seems to be automated), and will hopefully get some more info.
In the meantime, I wondered if perhaps I misunderstood the scopes. It was a complex process and I don't remember if ../auth/documents.currentonly was automatically added to the screen, or if I had to add it at some point. I know it comes from a comment in the code of the add-on:
/**
 * @OnlyCurrentDoc
 */
This is explained on https://developers.google.com/apps-script/guides/services/authorization
I'm wondering if the problem is that since my add-on is published, I also need to explicitly add a broader scope: ../auth/documents, which is indeed a scope that requires authorization ("risky"?). My add-on doesn't use other documents than the current one, so that wouldn't make sense to need it. It's how I understood the Google documentation about this.
As an experiment, I added that scope (and the corresponding one for presentations) to the consent screen. With those added I can request another verification (although I am unsure if it's really needed). Do the currentonly scopes also require the broader ones?
Update 2019-12-13
Today, even though I still have no reply to my response to the automated email, I see that my add-on has more than 100 users. That should not have happened according to the email I received, unless something changed. I'm assuming someone resolved the inconsistency on the Google side of things.

passport-azure-ad / msal.js and Dynamic Scopes

Azure AD v2.0 presents Dynamic Consent as one of its advantages (https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/api-scopes#request-dynamic-scopes-for-incremental-consent).
What is this supposed to look like? I thought a typical use case would be to supply the roles/scopes that apply to a certain endpoint, for example the @OAuthBearer() annotation on:
@Get("/hello-auth")
@OAuthBearer({"scopes": ["app.special.scope"]})
helloAuth() {
    return {text: "Authorised hello"};
}
I cannot find any information on how to do this. It seems to me (looking at the protocol diagram at https://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-implicit-grant-flow#protocol-diagram) that the only activity passport-azure-ad performs is to receive a bearer token and verify it. That makes sense, but then how are the scopes on the annotation assessed, given that they are server-side and thus not known to the client so that they could be included in the token?
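For concreteness, here is how I imagine the server side would assess the annotation once the bearer token is validated: the consented scopes arrive inside the token itself (the scp claim in Azure AD v2.0 access tokens) and are compared against the endpoint's required scopes. A sketch (my assumption, not passport-azure-ad's actual code):

```javascript
// After the bearer token is verified, check the token's "scp" claim
// (a space-separated list of consented scopes in Azure AD v2.0 tokens)
// against the scopes declared on the endpoint's annotation.
function hasRequiredScopes(tokenClaims, requiredScopes) {
  const granted = (tokenClaims.scp || "").split(" ");
  return requiredScopes.every((scope) => granted.includes(scope));
}
```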
I asked this at https://github.com/AzureAD/passport-azure-ad/issues/430, but my contract ends next week and I want to finish this off, so I cross-posted it here.
As in that post, I thought of using the msal.js library but can't see how I'd make that work either.
Is there any best approach to this problem?

How to use ThinkTecture IdentityServer 3 in Web Api 2

I have been reading a lot about how to implement a full authentication and authorization system in ASP.NET Web API 2, including registering, sending email confirmations, issuing both access tokens and refresh tokens, etc. I have successfully done all of that, but it seems like unnecessary overhead to have to do it for every single project.
I am still not sure, but I believe the "Thinktecture IdentityServer" is a package that has been put together to provide all of this, am I right?
If yes, can anyone tell me (in a very straightforward way) how I can create a new Web API project and easily get all the above-mentioned features using this package?
Thinktecture identity server v3 is a collection of highly configurable modules, so there is a fair amount of code to write to set it up how you want it. The Thinktecture wiki has a good 'hello world' example that might be enough to get you going:
Hello world
After that, download the samples, find the one that most closely matches your situation, and build upon that. In particular, you'll want to set up a database to save your registered users to. The related 'MembershipReboot' project is generally the one you use to do data access, along with the 'MembershipReboot.Ef' addon that will autocreate your database using EntityFramework.
MembershipReboot is where you set up which email events you want to use.
Email config in membership reboot
Here's how to use the IdentityServer3 instance that you set up separately:
(IdentityServer3 has some out-of-the-box server-setup examples that may be good enough for you, or might only need slight configuration.)
Install the Microsoft OpenID Connect middleware from NuGet (I think it's called Microsoft.Owin.Security.OpenIdConnect).
Point the OpenID Connect middleware (also in Startup.cs) to the IdentityServer.
app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
    Authority = "https://myIdsrv3Path/identity",
    ClientId = "myapi",
    RedirectUri = "https://myIdsrv3Path/", // or
    ResponseType = "id_token",
    SignInAsAuthenticationType = "Cookies"
});
In the IdentityServer3 set the accepted clients to include "myapi", with the claims you need.
There is more to explain about authorization, but this answers your basic question for securing an api.
See the IdentityServer3 documentation:
https://identityserver.github.io/Documentation/docsv2/overview/mvcGettingStarted.html
Scroll down to the section called: Adding and configuring the OpenID Connect authentication middleware.

SharePoint 2013 and OAuth 2.0

I need some clarification on how SharePoint uses OAuth and what I can/can't do with bearer tokens.
What I would like to be able to do is either retrieve a bearer token from SharePoint, cross-domain via JavaScript, and/or set up SharePoint to use the same machine key as my current OAuth server.
I've read most of this article and several others, but it has me bouncing around without a clear example:
https://msdn.microsoft.com/en-us/magazine/dn198245.aspx
Recap:
I need a code snippet for retrieving a bearer token from SharePoint using JavaScript, cross-domain, and...
I need a walkthrough of sharing the same machine key for claims-based bearer tokens with OAuth 2.0.
And to clarify what I'm trying to do:
I will need to read/write to SharePoint lists from different platforms and I want a standard way to do it. REST seems like the way to go. Our apps are being developed using RESTful services and OAuth. We've got all of that covered with HTML and JavaScript. I'd like to understand how to continue using our current OAuth and REST patterns to create secure SharePoint interfaces in our HTML apps as well as Java and C#, using claims-based bearer tokens. If I'm on the right track, please confirm and provide some clear examples/resources. If there's a better way to do this, I'm all ears.
Bearer tokens work similarly to money: whoever holds the token is treated as its rightful owner. That is where the term "bearer" (whoever bears the token) comes from. The tokens rely mainly on SSL/TLS for security; whoever "bears" an access token will be allowed in.
To answer your first question, I did some research and found what you are trying to do. If you want to write it in JavaScript and use the cross-domain library, you won't need to provide the access token.
var executor = new SP.RequestExecutor(appweburl);
executor.executeAsync({
    url: appweburl +
        "/_api/SP.AppContextSite(@target)/web/lists?@target='" +
        hostweburl + "'",
    method: "GET",
    success: successHandler,
    error: errorHandler
});
I got that answer from here: https://msdn.microsoft.com/en-us/library/jj164022.aspx
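Conversely, when you are not using the cross-domain library (e.g. calling the REST API from Java or C# as you mention), the access token travels in the Authorization header. A sketch of the request shape (the field names here are illustrative, not from any SharePoint SDK):

```javascript
// Build the pieces of a token-bearing SharePoint REST call. Whoever
// presents this header is treated as the rightful caller, which is why
// the transport must be HTTPS.
function buildSharePointRequest(siteUrl, accessToken) {
  return {
    url: siteUrl + "/_api/web/lists",
    method: "GET",
    headers: {
      Accept: "application/json;odata=verbose",
      Authorization: "Bearer " + accessToken,
    },
  };
}
```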
For your second question, I think it is possible but uncommon. Unfortunately I am not too familiar with using the same machine key as your current OAuth server, sorry! If I ever come across that in the near future I will be sure to answer that question.
To clarify what you are doing: yes, it does look like you are on the right track. If your apps are all using RESTful services, REST is the way to go for sure. REST is probably easier in the same sense, because it uses plain HTTP requests, which are simpler than, say, CORBA, RPC, or SOAP. If you are looking for security above all, something like SOAP is often suggested, though that is debatable.
Some good resources may be the Microsoft libraries. They have pretty good tutorials, though some are not too clear. Microsoft has documentation about the difference between SOAP and REST here: https://msdn.microsoft.com/en-us/magazine/dd942839.aspx and this is the link to Microsoft's library: https://msdn.microsoft.com/en-us/library/ms310241 OAuth, REST, etc. can be rough to understand. Documentation is out there, but certain things, like using the same machine key as your OAuth 2.0 server, are hard to find.
Sorry if I wasn't too clear, but if you need more help just reply to this answer. I hope this helped you somewhat; enjoy your day!

How To Become a SAML Service Provider

My company currently develops a Java web application. A couple of our clients have internal SAML servers (identity providers?) and have requested that we integrate with them. So recently I've been reading up on it and playing around with OpenAM. After about 3 days of this, I have a general understanding of it, but there are still some gaps in my knowledge. My hope is that someone can clear this up for me.
So here's how I imagine the workflow of a user logging in.
Let's define our customer's SAML server as https://their.samlserver.com. So a user comes to our web application for a resource that's protected. Let's say that URL is http://my.app.com/something.
So if I'm correct, my.app.com is what SAML defines as a Service Provider. Our application realizes that this user needs to log in. We then present a page like this to the user...
<script>/* jQuery script to auto-submit this form on ready */</script>
<form method="post" action="https://their.samlserver.com/Post/Servlet">
    <input type="hidden" name="SAMLRequest" value="someBase64Data" />
    <input type="submit" value="Submit" />
</form>
And that someBase64Data should be the base64-encoded version of this...
<samlp:AuthnRequest
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="identifier_1"
    Version="2.0"
    IssueInstant="2004-12-05T09:21:59Z"
    AssertionConsumerServiceIndex="0">
    <saml:Issuer>http://my.app.com</saml:Issuer>
    <samlp:NameIDPolicy
        AllowCreate="true"
        Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient"/>
</samlp:AuthnRequest>
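The base64 step itself is mechanical; for the HTTP-POST binding the XML is plain base64 (the HTTP-Redirect binding would additionally DEFLATE-compress it first). A sketch in Node.js using a trimmed-down request string:

```javascript
// Base64-encode the AuthnRequest XML for the SAMLRequest form field
// (HTTP-POST binding). The XML here is abbreviated for the example.
const authnRequestXml =
  '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" ' +
  'ID="identifier_1" Version="2.0"></samlp:AuthnRequest>';

const samlRequestValue = Buffer.from(authnRequestXml, "utf8").toString("base64");
```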
So my first couple of questions:
What is the ID value suppose to be?
And why can I declare myself as an Issuer?
Does the Identity Provider know about me? Maybe this is that Circle of trust I've been seeing on OpenAM. And if it does know about me, how does it know about me and what does it need to know?
So after the user is forwarded that page, they are taken to a page provided by the IdP at https://their.samlserver.com. They authenticate on that page and the IdP does its magic to validate the authentication and look up the user. After the authentication is successful, the IdP sends back a <samlp:Response> defined here.
A few more questions.
First, how does the <samlp:Response> get back to my web application so I can check it?
And what should I be looking for in that response to validate that it was successful? What does a failure look like?
We currently use the email address (LDAP) to identify users, so we'll probably grab that from the response and use that in the same way we do now. Anything else I should be mindful of in that response?
So now that we've checked that response for validity, we can grant the user a session like we do currently. But when they want to log out, is there a workflow for that? Do I have to notify the IDP that the user has left?
And finally, there are a couple of topics that have been thrown around in my reading and I'm not sure how they fit into this workflow. They are Circle of trust, Tokens, and Artifacts.
Thanks for any help everyone. I've found a lot of information in the last couple days, and it's possible that I could piece them together after a bit more playing. But I have yet to find a straightforward "Here's the Post" workflow article yet. Maybe that's because I'm wrong on how this works. Maybe it's because this isn't that popular. But I really wanted to make sure that I got the workflow so I didn't miss a crucial step in something as important as user authentication.
In response to your specific questions:
1.) What is the "ID" value supposed to be?
This should be a unique identifier for the SAML request. The SAML 2.0 specification states that how this is achieved is implementation-specific, but it makes the following recommendation:

The mechanism by which a SAML system entity ensures that the identifier is unique is left to the implementation. In the case that a random or pseudorandom technique is employed, the probability of two randomly chosen identifiers being identical MUST be less than or equal to 2^-128 and SHOULD be less than or equal to 2^-160. This requirement MAY be met by encoding a randomly chosen value between 128 and 160 bits in length.
2.) How does the IdP know about you?
Your SP needs to be registered with the IdP. To accomplish this, the SAML specification defines a format for "SAML Metadata" which tells the IdP where your SAML receivers are, what your certificates are, attributes you exchange, etc. OpenAM likely dictates some minimum requirements for configuring a trusted SP. This varies in each product.
3.) Where's the Response go, and what to check?
The Response will be sent to your Assertion Consumer Service (ACS) URL, usually defined in the SAML metadata you exchanged with the IdP during initial setup. When you receive a SAML Response you need to check many things, but most importantly: the SAML status code should be "Success", the InResponseTo ID should match the ID of the AuthnRequest you sent, and you must validate the digital signature on the Assertion. For that, you'll need to trust the IdP's public verification certificate, and you'll probably also want to do revocation checking.
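The status and InResponseTo checks can be sketched as follows (hand-rolled string matching purely for illustration; a real SP must parse the XML properly and, crucially, verify the signature with a proper SAML library, which is omitted here):

```javascript
// Minimal illustration of two of the checks an SP performs on a
// SAML Response. NOT production code: no XML parsing, and signature
// verification is deliberately left out.
function checkSamlResponse(xml, expectedRequestId) {
  // 1. Status code must indicate success.
  const statusOk = xml.includes("urn:oasis:names:tc:SAML:2.0:status:Success");
  // 2. InResponseTo must match the ID of the AuthnRequest we sent.
  const match = xml.match(/InResponseTo="([^"]+)"/);
  const inResponseToOk = match !== null && match[1] === expectedRequestId;
  return statusOk && inResponseToOk;
}
```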
4.) What about Logout?
SAML 2.0 also defines a profile for Single LogOut (SLO). This will not only log you out of the SP, but also the IdP and potentially any other SP's you've established a session with. It has a similar request/response flow as Single Sign-On (SSO), and thus similar things to set up and check (status codes, signatures, etc.).
So in short - this can be quite complex to implement from scratch. It's best to use tried & true libraries and/or products like Ian suggests. Companies like his have invested hundreds of hours of developer time to implement according to the spec and test interoperability with other vendors.
If you're just trying to set up a single Java application as a Service Provider, you should consider using a Fedlet from either Oracle (as a standalone) or ForgeRock (bundled with OpenAM). The ForgeRock Fedlet has some issues interacting with Shibboleth 2.2.1 as an Identity Provider, but I find it to be somewhat simpler to configure and more informative.
Each has explicit instructions contained in the README to help you deploy. Once the Fedlet is configured and communicating with the IDP, the success page shows you all the code you need to integrate federated SSO into your application. It does the background work of sending and receiving AuthnRequests and Responses.
Scott's answer responds quite well to the questions you had, but I think that trying to write code on your own that generates the SAML is reinventing the wheel. The Fedlet was designed with precisely this use case in mind.
