I've looked at the various questions on this topic but none of them QUITE fit the problem I'm having.
I've developed an MVC4 app which uses DNOA (DotNetOpenAuth) to call into a particular provider (Intuit). All worked perfectly on my local IIS (testing), but when I deployed to Windows Azure I get the proverbial "strange, intermittent" behavior. Specifically, 99% of the time the initial sign-in request results in the "No OpenID Endpoint Found" error; however, SUBSEQUENT sign-ins go through without a hitch.
I've added the code referred to here: ServiceManagerCode, to no avail. I've checked, and the OpenID URL is correct. I've also attempted to add log4net to see what might be occurring, but have been unable to do this correctly; some other answers seem to suggest it returns nothing anyway. I've also asked Intuit but, so far, no responses.
Again, if this weren't occurring on just the first attempt, there would be numerous relevant posts, but with this peculiar behavior I am wary of wasting inordinate amounts of time on a wild goose chase.
Any suggestions, however slight, would be very much appreciated.
I am not familiar with OpenID. Is the OpenID sign-in service hosted by you in Windows Azure as well? Please make sure the sign-in service has started without any problems. One suggestion is to check the federation configuration: most federation providers require you to configure the realm and return URL, and if they're not properly configured, the application won't work.
Best Regards,
Ming Xu.
Since you say that your Azure relying party works reliably after the first failed attempt, perhaps you can work around it by having the Application_Start event in your Azure web role call DotNetOpenAuth's OpenIdRelyingParty.CreateRequest method, not doing anything with its result, just to 'prime the pump'?
We're currently using Okta for SSO for our IIS web app, and it seems to work fine 99% of the time. However, there is a single user who, when attempting to log in, gets this exception about a missing nonce. I've tried reducing all of the variables as much as possible, and I've gotten to the point where two users are trying to log in to the same application with the same Okta credentials from the same machine and using the same browser (default Chrome, freshly installed with no plugins or browsing history). The only differences are where they are RDP'd into the machine from and which Windows domain account they're logged in as. Neither of these seems like it should make any difference whatsoever.
However, one user successfully logs in and the other gets this obtuse Exception about a missing nonce.
I've seen several other questions about solving this error (IDX21323), and I'm not seeing any actual solutions or explanations that make sense. I've tried a couple of hacks, like issuing a new challenge when authentication fails and the auth-failed notification exception contains the text "IDX21323", but it doesn't have any effect.
I don't understand the problem well enough to ask a more detailed question because I can't, for the life of me, understand how it could be happening in one case but not the other. I'm not even sure what to investigate at this point.
I've followed the guides / processes I can find about adding / binding an SSL certificate to a custom domain for my Azure Function, but cannot get a secure connection to work. How can I go about finding (and fixing) the problem?
The app is in place, and the custom domain is added. I've generated the certificate and uploaded it along with the private key. I can access the function (and it runs) when called via the custom domain, but the browser will insist on showing "this is not a private connection".
I'm running on a pay-as-you-go / consumption plan, but my understanding is that this should be fine?
Is there something that I need to be doing within the Function code itself to allow it to make use of the certificate?
Is there any other way that I can see / debug why this binding isn't letting me create the right type of secure connection? I'm not sure how to go about diagnosing this properly.
I'd start with browser tools to view the certificate it is loading. Modern browsers are pretty good about flagging what aspects of certs are off (mismatch of domain, expiration, etc.). Also make sure you are calling the https endpoint, as it may be defaulting to http.
I learned a lot while trying to fix this, so herewith, my road to solution.
tl;dr - the problem was with the certificate and its loading into Azure... I think. There was an issue with the DNS, but after that, reloading and rebinding the certificate in the Azure Portal over and over again seemed to do the trick.
Diagnosing
@jeffhollan mentioned this in his response below, but the browser did give a hint as to why it was reporting a non-private connection: while it did show some correct details about the certificate, it showed some other host name instead.
Suspicious about the SSL certificate itself (I'd gotten mine from namecheap), I used these online tools to investigate the certificate, both quite helpful to me. There are lots of others like them, I'm sure:
https://www.ssllabs.com/ssltest/analyze.html
https://decoder.link/sslchecker
These pointed to two strange things - the hostname wasn't right (it was pointing to some strange shortener service) and the certificate was not signed by a trusted certificate authority.
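Those two findings can also be checked from a script. Here's a minimal Python sketch of the same checks the online tools perform, using only the standard library; the hostname is a placeholder, and `hostname_matches` is a deliberately simplified stand-in for the full RFC 6125 matching rules:

```python
import socket
import ssl

def hostname_matches(pattern: str, host: str) -> bool:
    """Simplified RFC 6125-style match: a leading '*.' wildcard
    covers exactly one DNS label; anything else must match exactly."""
    pattern, host = pattern.lower(), host.lower()
    if pattern.startswith("*."):
        return host.split(".", 1)[-1] == pattern[2:] and "." in host
    return pattern == host

def diagnose(host: str, port: int = 443) -> str:
    """Report whether the certificate the server presents is trusted
    and actually issued for the host we asked about."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
    except ssl.SSLCertVerificationError as err:
        # verify_message distinguishes e.g. "self-signed certificate"
        # from a hostname mismatch -- the two problems the tools found.
        return f"FAILED: {err.verify_message}"
    names = [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]
    covered = any(hostname_matches(n, host) for n in names)
    return f"OK: cert names {names}; covers this host: {covered}"

if __name__ == "__main__":
    print(diagnose("myfunction.example.com"))  # placeholder custom domain
```

In my case this would have reported both symptoms at once: the untrusted issuer and the wrong host name.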
The DNS Issue
When I obtained the domain (from GoDaddy), I set up the CNAME record to point to the Azure Function URL. That worked.
BUT, I didn't remove / update the A record, which was still pointing to that weird shortener service URL (was there when I got the domain).
Since I am currently only interested in having this single Azure Function accessible, deleting the A record was sufficient for me. This is probably not the best solution, but if I knew what it was, I wouldn't have had this problem in the first place...
I reran the tools above, and at least the host name was pointing to something Azure related, but still not working properly.
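A quick way to see which record a resolver is actually following, without the online tools, is the standard library's `gethostbyname_ex`, which reports the canonical name alongside the addresses. A rough sketch (the domain is a placeholder):

```python
import socket

def classify(host: str, canonical: str) -> str:
    """Interpret a resolver answer: if the canonical name differs from
    the name we looked up, the lookup followed a CNAME (or a chain)."""
    if canonical.rstrip(".").lower() == host.rstrip(".").lower():
        return f"{host} resolves directly (A record, no CNAME followed)"
    return f"{host} is an alias (CNAME) for {canonical}"

def check(host: str) -> str:
    # gethostbyname_ex returns (canonical_name, alias_list, address_list).
    # A leftover A record is exactly what masked the CNAME to Azure here.
    canonical, _aliases, addresses = socket.gethostbyname_ex(host)
    return f"{classify(host, canonical)}; addresses: {addresses}"

if __name__ == "__main__":
    print(check("myfunction.example.com"))  # placeholder custom domain
```

If the output mentions an alias pointing at an `azurewebsites.net` name, the CNAME is being honored; anything else suggests a stray record is winning.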
The Self-Signed Certificate Issue
I spent a lot of time with Namecheap's online support - really great service and very helpful. The suggestions did boil down to re-issuing the certificate and going through the various steps to upload and bind the certificate in the Azure Portal.
I only re-issued the certificate once. But I re-uploaded and re-bound it in the Azure portal 6-10 times. On the final time, it just seemed to work.
I don't know what the problem was. The support team and the online tools all pointed to the server (Azure) not having accepted / loaded the certificate correctly. I'm almost certain that I wasn't doing anything different during my upload attempts, so perhaps persistence was just the key.
I struggle to understand how it could have been a propagation / timing thing. I was fiddling with this over the course of 3 days. The DNS was sorted on day 1, and there was a lot of time between certificate re-uploads. From what I understand, DNS propagation should take about 30 minutes, rarely more than 60, but certificates and SSL don't generally form part of that? I don't know.
Learnings
As @jeffhollan mentioned, the browser gives some good hints. I'm embarrassed that this didn't occur to me sooner, but yeah...
The online SSL tools are useful for not only diagnosing SSL issues, but also testing their strength. Good finds.
Getting a handle on whether the issue is with (a) the certificate, (b) the server or (c) the DNS is a worthwhile first step. Having each managed by different parties probably makes it a little harder...
I'm interested in creating a Linux Pluggable Authentication Module (PAM) that authenticates against Azure Active Directory. It appears that OAuth 2.0 is what Microsoft uses for this.
In reviewing the Authentication Scenarios, it seems that "Daemon or Server Application" probably makes the most sense, but I'm not positive. "Native Application to Web API" might also be a possibility, but all the app flows given show kicking off a pop-up browser instance to authenticate, which doesn't seem possible in PAM. As a result, unless I'm scraping responses, that flow doesn't appear to work, and scraping responses seems like a bad idea.
My questions:
What is the best way to validate a user's credentials for this scenario? Is it a Daemon or Native App?
What is the rough flow I would be looking at to do this? (e.g. If I'm using a Daemon, what calls do I make to validate the user creds?)
Any idea on what this looks like if 2FA is enabled for a user?
Thank you for your help. I feel like none of the available options really fits here, and I want to make sure I'm heading in the right direction before I invest a bunch of time in this.
bureado's PAM you point to uses what's known as the OAuth "Resource Owner Password Credentials Grant". It basically takes the user's username & password and passes them to Azure AD for authentication. It has a bunch of limitations, several of which Vittorio describes here. A core problem you pointed out is that MFA does not work.
For scenarios like this Azure AD also supports the OAuth "Device Profile Flow". There's a code sample here that shows how to do it in .NET: https://github.com/Azure-Samples/active-directory-dotnet-deviceprofile. I'd recommend going that route.
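The linked sample is .NET, but the flow itself is plain HTTP, so it can be sketched in any language. Below is a rough Python outline against the v2.0 endpoints (which postdate this answer); the tenant and client ID are placeholders, and this is an untested sketch rather than production code. The key property for PAM: the browser step happens on some other device, so MFA works even though the module itself never opens a browser.

```python
import json
import time
import urllib.error
import urllib.parse
import urllib.request

TENANT = "common"              # placeholder -- use your tenant ID
CLIENT_ID = "YOUR_CLIENT_ID"   # placeholder app registration
AUTHORITY = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0"

def device_code_payload(client_id: str, scope: str = "openid profile") -> dict:
    return {"client_id": client_id, "scope": scope}

def token_payload(client_id: str, device_code: str) -> dict:
    return {
        "client_id": client_id,
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": device_code,
    }

def post(url: str, fields: dict) -> dict:
    body = urllib.parse.urlencode(fields).encode()
    with urllib.request.urlopen(urllib.request.Request(url, data=body)) as resp:
        return json.load(resp)

def sign_in() -> dict:
    # Step 1: ask AAD for a device code plus a message telling the user
    # where to browse and what code to enter.
    dc = post(f"{AUTHORITY}/devicecode", device_code_payload(CLIENT_ID))
    print(dc["message"])
    # Step 2: poll the token endpoint until the user completes sign-in
    # (including any MFA prompt) in a browser on another device.
    while True:
        time.sleep(dc.get("interval", 5))
        try:
            return post(f"{AUTHORITY}/token", token_payload(CLIENT_ID, dc["device_code"]))
        except urllib.error.HTTPError as err:
            if json.load(err).get("error") != "authorization_pending":
                raise
```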
Okay, I found the following module that appears to mostly do what I'm looking to do anyway:
https://github.com/bureado/aad-login
He chose to use "Native App". I'm still not positive if this is the "best option" and therefore I'm going to leave this open to other answers. I'd love it if someone could explain why this is the best option.
Meanwhile, I'm now trying to get AAD group memberships to be imported in like pam_ldap, or pam_kerberos, but I'm having a hard time figuring out how that's supposed to work, and have posted another question:
How to write a PAM module which changes group membership?
Should anyone come across this later and want to do the same thing, we're planning on open sourcing the final modified solution with this extended functionality. It doesn't do it yet, but the code is on GitHub here:
https://github.com/CyberNinjas/aad-login
I have three laptops and one desktop joined to Azure Active Directory. I am trying to join a new workstation to Azure AD using the email address of a person who already has a laptop connected to Azure.
Here are my steps:
1. Connect to Work or School.
2. Connect.
3. Join this device to Azure Active Directory.
4. Enter the user's email address and password.
I receive the following error when trying:
"Looks like the MDM Terms of Use endpoint is not correctly configured."
I've checked whether "Users may join devices to Azure AD" is set to ALL. (It is.)
The number of devices per user is set to 20.
Where do I go in the portal to resolve the issue?
I know that this is an old question, but I'm hoping this can help others avoid hours or days of trying to figure it out. Even Microsoft couldn't figure this one out, which is sad. Their documentation actually even contradicts the solution.
During your domain setup, there are two CNAME records that you are instructed to create: EnterpriseEnrollment and EnterpriseRegistration. What they don't tell you is that these are only used if you are on the free MDM for Office 365 solution. If you are using, or switch to, an Azure Active Directory Premium and/or Intune license, you MUST remove these CNAME records in order to allow your devices to register. It worked for me instantly upon removing the records on Cloudflare, though there may be a delay depending on who you use for DNS management.
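After deleting the records, you can confirm they've actually dropped out of DNS (propagation delay varies by provider) with a few lines of Python; the domain is a placeholder:

```python
import socket

# Placeholder -- substitute your verified AAD domain.
DOMAIN = "example.com"

def record_gone(name: str) -> bool:
    """True once the deleted record no longer resolves to anything."""
    try:
        socket.gethostbyname_ex(name)
        return False
    except socket.gaierror:
        return True

if __name__ == "__main__":
    for prefix in ("enterpriseenrollment", "enterpriseregistration"):
        fqdn = f"{prefix}.{DOMAIN}"
        print(fqdn, "removed" if record_gone(fqdn) else "still resolving")
```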
I hope this helps anyone encountering this issue. Microsoft really needs to work on the detail of their error messages.
I see you got a couple of answers on Reddit - but here goes:
Firstly, make sure you have one of the more advanced AAD tiers (such as P2), not the free one, which has almost nothing whatsoever to do with AD.
AAD seems to "propagate" slowly, a la Y2K-era domains. I get this error often, and there might be more than one root cause (thanks to the cryptic message in the first place).
Similar symptoms:
MDM TOU error when activating brand-new PC
Vague error regarding connectivity when setting PIN
"Successfully" connecting to work but no listing in Intune
For all of the above, I find that simply waiting about 24 hours before trying again often helps, as the newly created user/device/passport/hello propagates through Microsoft's complex cloud ID servers.
I have had it fail with your message and then work on a retry 30 seconds later (and forever from then on), and I have had devices which "join the workplace successfully" but not show up in Intune/AAD for almost 48 hours!
I have a WSS installation that's behind Basic authentication/SSL (it's hosted at a public web host). I'm creating a sister site in ASP.NET, and am considering just passing the credentials through and allowing users to log into the new system provided there is no 401 Unauthorized error returned.
Both are internet-facing applications that will be used by about 20-50 people.
What am I missing? I've never heard of this recommended before, but I don't see why it wouldn't work.
I can't see any major problems with that - you'll obviously want to make sure both servers are using SSL if you've got to send credentials over the Internet, but other than that it sounds like an elegant way to share credentials between applications.
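For what it's worth, the check itself is a few lines in any language. Here's a hedged Python sketch (the URL is a placeholder; the real sister site is ASP.NET, but the idea is identical): replay the user's credentials against the WSS site and treat a 401 as a failed login.

```python
import base64
import urllib.error
import urllib.request

# Placeholder -- any page on the WSS site that requires Basic auth.
PROTECTED_URL = "https://wss.example.com/default.aspx"

def basic_auth_header(username: str, password: str) -> str:
    """Build the RFC 7617 Basic credentials: base64 of 'user:pass'."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

def credentials_valid(url: str, username: str, password: str) -> bool:
    """Replay the credentials against the protected site: a 401 means
    they were rejected; any success status means they were accepted."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", basic_auth_header(username, password))
    try:
        urllib.request.urlopen(req, timeout=10)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 401:
            return False
        raise  # other status codes are not an auth verdict
```

Over SSL on both ends this is safe enough for 20-50 users, though it does mean the sister site briefly handles the plaintext password.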