We're currently using Okta for SSO in our IIS web app, and it works fine 99% of the time. However, there is a single user who, when attempting to log in, gets an exception about a missing nonce. I've reduced the variables as much as possible: two users are now logging in to the same application with the same Okta credentials, from the same machine, using the same browser (a freshly installed default Chrome with no plugins or browsing history). The only differences are where they RDP into the machine from and which Windows domain account they're logged in as. Neither of these seems like it should make any difference whatsoever.
However, one user successfully logs in and the other gets this opaque exception about a missing nonce.
I've seen several other questions about this error (IDX21323), but none of the answers offer an actual solution or an explanation that makes sense. I've tried a couple of hacks, such as issuing a new challenge when authentication fails and the authentication-failed notification's exception contains the text "IDX21323", but it has no effect.
I don't understand the problem well enough to ask a more detailed question because I can't, for the life of me, understand how it could be happening in one case but not the other. I'm not even sure what to investigate at this point.
Context
I have a Node.js application that has a "complex" set of OAuth flows in order to make the UX simpler.
I have the usual login and registration flow, where you may use an OAuth provider to authenticate. I don't request any special scope here, since OAuth is being used purely for authentication: the user has no reason to grant me elevated access (say, to private GitHub repositories), might even find the request shady, and could walk away and never visit my product again. So, no scope for the pure authentication flow.
The application also has an import functionality where you can import a list of entities from an OAuth provider (say, GitHub repositories). By default, you aren't asked for any scope here either.
Clicking on the "Looking for your private repositories?" button authenticates you against GitHub again, asking for the repo scope. This is all fine and well.
The issue
The issue is that when the user tries to log in again, or does anything else that authenticates them without explicitly requesting the repo scope, GitHub treats this as an explicit downgrade request.
The user wouldn't want to downgrade during login for no particular reason, and I don't want to ask for more permissions than I need at login either.
Leaving things in this state would be even worse than asking for repo at login, though that would be an extremely poor choice as well.
Potential Solutions
Besides the two non-solutions, the potential solutions I've come up with are:
Ask GitHub explicitly for unique access tokens based on the requested scope, store the tokens separately, and use them as needed afterwards
That'd be great, except it'd be way too stateful, and I haven't found a way to do it anyway; GitHub seems to give you a single token per application user, and I suspect this is how OAuth works for the most part, though I'm hardly an expert on the matter.
Tell GitHub explicitly not to downgrade a token if it has more privilege than what is being asked for.
This sounds like it should be the default behavior to me. Anyway, is there any way I can tell GitHub not to downgrade a token?
If not, is there any other way I can fix this without resorting to asking for the same scope across the entire application? This would partially defeat the purpose of scopes in the first place.
Also, is this a GitHub-specific issue? Will I have to deal with this on a provider-by-provider basis? Is there a protocol-level solution that miraculously makes the problem go away? Or is OAuth just not built with UX in mind?
FWIW I'm using iojs and passportjs, but I don't think that has anything to do with the question.
Turns out the issue was in my code, as it usually goes. I was explicitly setting a property (options.scope = [], for those using passport) on the authentication flow, which resulted in a GitHub authorization URL containing &scope=&, meaning I was explicitly asking for a downgrade.
Removing the option when I have no explicit scope to ask for fixed the issue. Woo!
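For anyone hitting the same thing, here's a minimal sketch of the fix. The helper name is mine, not part of passport's API; the point is simply that an empty scope list should omit the scope key entirely, so the GitHub authorization URL never contains &scope=&:

```javascript
// Build the options object passed to passport.authenticate('github', ...).
// An empty or missing scope list omits the `scope` key entirely, which
// avoids generating `&scope=&` (an explicit downgrade request to GitHub).
function buildAuthOptions(scopes) {
  const options = {};
  if (Array.isArray(scopes) && scopes.length > 0) {
    options.scope = scopes; // only sent when we actually want a scope
  }
  return options;
}

// buildAuthOptions([])       -> {}                  (no scope parameter sent)
// buildAuthOptions(['repo']) -> { scope: ['repo'] } (elevated access requested)
```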
Dear StackOverflow community,
======================================
TL;DR VERSION:
Before we proceed further in our relationship with a cloud web portal provider, I'd like to insist that they provide us a secure way to obtain a copy of our data from their web server.
Secure for authenticating ourselves without leaving ourselves vulnerable to having our credentials stolen or spoofed and
Secure for the file in transit on its way back to us.
I suspect I might have to point them in the right direction myself despite my own inexperience in the field. What kinds of simple-yet-secure approaches to authenticating us could I ask them to look into?
======================================
FULL POST
BACKGROUND:
At work, we are evaluating a cloud-based portal through which our current and former customers will be able to network with each other (we have customers who interact with us in cohorts).
The user interface of the portal is well-designed, which is why we're thinking about buying it, but the company providing it is young. So, for example, their idea of "helping us integrate our portal data with SalesForce" was to have a link within the administrative control panel to a page that returns a CSV file containing the entire contents of our database.
"Fetch a CSV" actually is fine, because we already do it with other CSV files from our ERP (pushing to SalesForce with a data loader and scheduled Windows batch scripting on an always-on PC).
I said we could work with it as long as they provided us a way to fetch the CSV file programmatically, without human intervention, at 5AM. They did so, but the solution seems vulnerable to exploitation and I'd like guidance redirecting their efforts.
A DIVERSION ABOUT THE HUMAN UI:
The link one sees as a human using the web interface to the portal under consideration is http://www.OurBrandedDomain.com/admin/downloaddatabase
If you aren't already logged in, you will be redirected to http://www.OurBrandedDomain.com/Admin/login?returnUrl=admin/downloaddatabase , and as soon as you log in, the CSV file will be offered to you.
(Yes, I know, it's HTTP and it's customer data ... I'm planning to talk to them about turning off HTTP access to the login/signup forms and to the internals of the site, too. Not the focus of my question, though.)
THEIR PROPOSAL:
So, as I said, I asked for something programmatically usable.
What they gave us was instructions to go to http://www.OurFlavorOfTheirSite.com/admin/fetchdatabase?email=AdminsEmail#Domain.com&password=AdminsPassword
Please correct me if I'm wrong, but this seems like a really insecure way to authenticate ourselves to the web server.
HOW I NEED HELP:
Before we proceed further in our relationship with this portal provider, I'd like to insist that they provide us a secure way to obtain a CSV copy of our data.
Secure for authenticating ourselves without leaving ourselves vulnerable to having our credentials stolen or spoofed and
Secure for the file in transit on its way back to us.
However, I don't get the sense that they've really thought about security much, and I suspect I might have to point them in the right direction myself despite my own inexperience in the field.
What kinds of simple-yet-secure approaches to authenticating us could I ask them to look into, knowing nothing more about the architecture of their servers than can be inferred from what I've just described here?
The solution doesn't have to involve us using a browser to interact with their server. Since we'll be downloading the file in a Windows scripting environment without human intervention, it's fine to suggest solutions that we can only test programmatically (even though that will make my learning curve a bit steeper).
(I suppose the solution could even get away from the server providing the data in the form of a CSV file, though then we'd probably just end up rebuilding a CSV file locally because we have infrastructure in place for CSV->SalesForce.)
Thanks in advance.
Yes, that is insecure.
You should insist on using TLS. For this they need to install a certificate from a Certification Authority to verify that they own the domain OurFlavorOfTheirSite.com. This will enable the URL to use HTTPS which means communication is encrypted, and authenticated (i.e. another website cannot spoof OurFlavorOfTheirSite.com without a browser warning being displayed).
Although the email=AdminsEmail#Domain.com&password=AdminsPassword parameters will be encrypted under HTTPS, they should be submitted via POST rather than GET. GET query-string parameters are stored in browser history, logged by proxies and servers by default, and can be transmitted in the Referer header when resources are included from other domains.
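Since the download will run unattended anyway, the client side of this is small. Here's a hedged Node.js sketch of what the fetch could look like once the vendor supports HTTPS POST; the hostname and path are assumptions carried over from the vendor's current URL, and the function only builds the request so the credentials demonstrably stay out of the URL:

```javascript
// Sketch only: construct an HTTPS POST request for the CSV download so the
// admin credentials travel in the request body, not in the query string.
// Hostname and path are assumptions based on the vendor's existing endpoint.
function buildCsvRequest(email, password) {
  const body = new URLSearchParams({ email, password }).toString();
  return {
    options: {
      hostname: 'www.OurFlavorOfTheirSite.com', // must be served over HTTPS
      port: 443,
      path: '/admin/fetchdatabase', // no credentials in the URL any more
      method: 'POST',
      headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Content-Length': Buffer.byteLength(body),
      },
    },
    body,
  };
}

// Usage (Node.js), scheduled from the same always-on PC:
//   const https = require('https');
//   const fs = require('fs');
//   const { options, body } = buildCsvRequest(adminEmail, adminPassword);
//   https.request(options, res => res.pipe(fs.createWriteStream('data.csv'))).end(body);
```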
I'm building an application that interfaces heavily with MSDCRM 2011 through its WCF service, using AD-based authentication but manually specifying credentials at runtime. The app is TDDed, so we have a sizable test suite, some of which talks to a development instance of CRM to ensure that code and developer alike are generating not only syntactically valid queries, but executable and correct ones. (As most Linq-savvy people know, that's not guaranteed when running a query against a mock or sim; there's always some limitation of the queryable provider that makes a beautiful query against an in-memory collection fail on the real system.)
Frequently, as in at least once a day, multiple fixtures' worth of tests that touch the web service will fail for at least one developer with the error message:
System.ServiceModel.Security.SecurityNegotiationException : The caller was not authenticated by the service.
----> System.ServiceModel.FaultException : The request for security token could not be satisfied because authentication failed.
All of these same tests will have passed on the previous run, and will pass again after some uncertain interval between attempts. This seems to happen after the tests have been run a few times in quick succession, especially on one machine. Once it has happened, logging that user account into Remote Desktop on a terminal server returns the more helpful error that the account is locked. All this makes me think the repeated logins are tripping some sort of intrusion detection/prevention measure in our domain system (maybe they look like a DoS or crackbot attack). However, our IT department, which maintains the domain, has no clue what is doing this; they don't knowingly enforce any such ruleset for successful logins (only failed ones), so they insist it's a CRM problem (and therefore the development team's responsibility).
I realize it's not much for you guys to go on, but if it is CRM locking the account, where would I change the settings (ideally for the one user account used for the tests), and if it's not CRM but something in the Windows domain/AD system, where can I tell the IT team to look to change these settings (again ideally for just the one user)?
The problem turned out to be incorrect saved credentials for the CRM server in the Windows Credential Store. Windows retries these credentials regularly, even when neither the user nor any running program is trying to connect, and each failed try counts against the number of authentication failures allowed before lockout. Removing the stored credentials from the credential store solved the problem.
I've looked at the various questions on this topic but none of them QUITE fit the problem I'm having.
I've developed an MVC4 app which utilizes DNOA to call into a particular provider (Intuit). All worked perfectly on my local IIS (testing) but when I deployed to Windows Azure I get the proverbial wonderful "strange, intermittent" behavior. Specifically, 99% of the time, the initial sign-in request results in the "No OpenID Endpoint Found" error; however, SUBSEQUENT sign-ins go through without a hitch.
I've added the code referred to here: ServiceManagerCode, to no avail. I've checked and the OpenID URL is correct. I've also attempted to add log4net to see what might be occurring but have been unable to do this correctly, some other answers seem to suggest this returns nothing anyway. I've also asked Intuit but, so far, no responses.
Again, if this wasn't occurring on just the first attempt then there would be numerous relevant posts but with this peculiar behavior I am wary of wasting inordinate amounts of time on a wild goose chase.
Any suggestions, however slight, would be very much appreciated.
I am not familiar with OpenID. Is the OpenID sign-in service hosted by you in Windows Azure as well? Please make sure the sign-in service has started without any problems; one suggestion is to check the federation configuration. Most federation providers require you to configure the realm and return URL. If they're not properly configured, the application won't work.
Best Regards,
Ming Xu.
Since you say that your Azure relying party works reliably after the first failed attempt, perhaps you can workaround it by having your app_start event in your Azure web role call DotNetOpenAuth's OpenIdRelyingParty.CreateRequest method, not doing anything with its result, just to 'prime the pump'?
I have been building quite a few MVC based websites locally and am finally ready to deploy the first, but, I am getting rather nervous.
During testing, I noticed several things that worried me - I am using the default forms authentication with a few tweaks (although nothing to the underlining security).
I noticed that if I created a user in one application and logged in, then launched another application... it would keep me logged in* as the user from the previous application. The user doesn't even exist in the new application!
* - I used [Authorize] on controllers, and was surprised I could just get straight in without any sort of authentication
I assume it is because the cookie is being set for localhost instead of the application/port (although, not too much I can do about this in development).
Based on this, how secure is the default authentication?
1. Is there anyway to check from the code that the user doesn't have a "faked" cookie? / Check the user has logged in from my application?
2. I was just wondering if there are any sort of check lists or anything I can go through before deploying?
3. (Sort of a follow-up to question 1.) As of writing this, I'm guessing I could add a column with a random number that is saved to the cookie, and then check that number every time any authentication is done... however, I didn't want to start mucking around with the membership provider... but I think this could work. Is this a good idea?
Try using IIS on your machine instead of the VS Dev Server. That solves your problem 1.
Other than that, I don't think you'll need any extra effort to make ASP.NET's default membership mechanisms more secure, unless of course you need to do something really custom in your projects. These mechanisms have been around for a while now, and I think they have been well tested in terms of security.
You just need to remember to put the [Authorize] attribute in the right places. If not on your controllers, then on the right action methods.
Basic Web Authentication shouldn't be trusted for applications which contain truly sensitive information. That being said it's sufficient for most applications. Be sure to check your application as often as possible before and after release for XSS vulnerabilities.
Here is Microsoft's recommended "Secure yourself" list. http://msdn.microsoft.com/en-us/library/ff649310.aspx
No matter how strong your authentication is, one small XSS mistake and a malicious user can do as they wish with your site and your users' data!
I recently read a good book, Wrox Professional ASP.NET, which talks about these steps in more detail and shows examples of the problems. After reading it I was able to "deface and steal" my own site's information with relative ease; it was a good eye-opener on the importance of securing against XSS.