Need help understanding an issue with IIS, IWA, and maybe Kerberos

At work, I have IE8 on XP calling a .NET 4.0 web app on Windows Server 2003 with IIS 6. IWA is turned on. When I request the initial .aspx page, Fiddler of course shows 3 requests. The first is an anonymous request, the second has a short Authorization: Negotiate header, and the third has a longer value for the same header (probably a token). The first two come back as 401s, the last is a 200 success. This is expected.
The issue: when the accompanying resource files (CSS, JS, images) are requested, most of them (but not all) go through the same 3-step exchange. The first 2 are 401s, then the 3rd returns 200 and the content.
Is this normal? I thought only the initial page request needed the 3-step process.
FYI - we had OAM and WebGate on the server, but we uninstalled them and we are still seeing this behavior. Could it be that the app pool's identity/domain account isn't set up right? I really don't want to switch the style, script, and image folders to Anonymous.

This is expected if your server is configured to protect every resource. Since HTTP is stateless, each request has to be authenticated on its own. It can be avoided if a session cookie is used to remember the authentication; otherwise the handshake happens on every request.
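As a rough way to observe this outside the browser, here is a minimal Python sketch (assuming the requests and requests_ntlm packages, NTLM rather than Kerberos, and placeholder URLs/credentials). Whether the extra 401 round trips repeat on a kept-alive connection also depends on the server's auth-persistence settings, so treat it as an experiment, not a fix:

    # Sketch only: assumes Python 3 with the requests and requests_ntlm packages,
    # NTLM instead of Kerberos, and placeholder server/credentials.
    import requests
    from requests_ntlm import HttpNtlmAuth

    AUTH = HttpNtlmAuth(r"DOMAIN\user", "password")   # placeholder credentials
    BASE = "http://myserver/myapplication"            # placeholder URL
    FILES = ["styles/site.css", "scripts/app.js", "images/logo.png"]

    # One-off requests: every call opens a new connection, so Fiddler shows the
    # full 401 -> 401 -> 200 handshake for each resource.
    for path in FILES:
        r = requests.get(f"{BASE}/{path}", auth=AUTH)
        print("fresh connection:", path, r.status_code)

    # A shared Session reuses the authenticated connection; whether the later
    # requests still get challenged depends on the server's auth persistence
    # settings (e.g. IIS 6's AuthPersistSingleRequest metabase property).
    with requests.Session() as s:
        s.auth = AUTH
        for path in FILES:
            print("kept-alive:", path, s.get(f"{BASE}/{path}").status_code)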

Related

Selenium add_cookie fails sporadically with concurrency

I'm using Selenium chromedriver with Python on Linux to access a website. I use cookies previously extracted from that site in order to log in: I set the cookies once on the site (with the same domain) and then refresh the window to start doing things. I save the cookies in a file and read them from it on each iteration.
I use concurrency on different machines with different IPs and different user agents to access different URLs of that site, but I always need to be logged in, and that's why I use cookies.
Everything works fine with one instance and even with several concurrent instances, but from time to time I get "WebDriverException: invalid cookie domain".
My code is spread over tons of lines in different files, so I'm not going to paste it here. In fact, since I execute the same code on all machines, I can't understand why it fails only sporadically.
Anyway, what I do is roughly this (a sketch follows the list):
loop:
Create a chromedriver instance with a random IP from a set and a random user agent from a set, and open a random (but valid) URL on domain X
Add the cookies (for domain X) and refresh the page
Do things on the site
Close chromedriver and exit
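For concreteness, here is a minimal sketch of that loop (assumptions: Python 3, Selenium with Chrome; the proxy/user-agent/URL pools and the cookie file are placeholders, not the original code):

    # Sketch of one iteration; pools and file names are placeholders.
    import json
    import random
    from selenium import webdriver

    def run_iteration(proxies, user_agents, urls, cookie_file):
        options = webdriver.ChromeOptions()
        options.add_argument(f"--proxy-server={random.choice(proxies)}")
        options.add_argument(f"--user-agent={random.choice(user_agents)}")
        driver = webdriver.Chrome(options=options)
        try:
            url = random.choice(urls)        # a valid URL on domain X
            driver.get(url)                  # must be on domain X before add_cookie
            with open(cookie_file) as f:
                for cookie in json.load(f):  # cookies previously saved for domain X
                    driver.add_cookie(cookie)
            driver.refresh()                 # now logged in; do the real work here
        finally:
            driver.quit()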
Inside the same loop, with the same cookies, everything works fine roughly 80% of the time, but it fails on about 2 out of 10 iterations.
When it fails, I see that the cookie domain and the current_url are always on the same domain. I've read some "solutions" that suggest adding just the (name, value) pair to the cookie, and many other combinations. I've tried them all and I always get the same error with the same sporadic frequency.
So the question is more theoretical than code-related...
Is there any limitation on using the same cookies concurrently?
Could it be something to do with the server accepting several simultaneous logins from the same user/password?
Could it be related to using different IPs/user agents with the same login credentials simultaneously?
Could it be something to do with the CMS I'm accessing?
I'm accessing my own website to test it. It is a PrestaShop site and I'm using my own credentials.
Any idea what might be happening? Thanks in advance.
My next try will be creating several test accounts and using a random set of cookies (user/password) on each iteration. If I stop receiving the "invalid cookie domain" error, then there is some limitation on using the same cookies concurrently.
Well, I found the problem. Since I was using a headless browser, I was getting an ERR_CONNECTION_CLOSED error and had no way to see it. I was trying to set cookies for a domain while on a blank page, and that's why I received the "invalid cookie domain" error.
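A small guard like the following (a sketch that assumes the driver and url from the loop above, not the poster's code) makes that failure mode visible instead of surfacing as a misleading cookie error:

    # Verify the navigation actually landed on domain X before adding cookies.
    # Depending on the Chrome version, a failed headless load can leave you on
    # "data:,", "about:blank", or a chrome-error:// page, so the hostname check
    # catches the blank-page case described above (though not every failure).
    from urllib.parse import urlparse

    driver.get(url)
    expected_host = urlparse(url).hostname
    actual_host = urlparse(driver.current_url).hostname
    if actual_host != expected_host:
        raise RuntimeError(f"Navigation failed; still on {driver.current_url!r}")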

Pingdom breaks IIS output caching when using varyByHeaders (Cookie)

I've been doing a lot of research on output caching lately and have been able to successfully implement output caching in IIS via web.config with either varyByQueryString or varyByHeaders.
However, then there's the issue of Pingdom's Performance & Real User Monitoring (or PRUM). They have a "fun" little beforeUnload routine that sets a PRUM_EPISODES cookie just as you navigate away from the page so it can time your next page load. The value of this cookie is basically a unixtimestamp() which changes every second.
As you can imagine, this completely breaks user-mode output caching, because every request now arrives with a different Cookie header.
So two questions:
My first inclination is to find a way to drop the PRUM_EPISODES cookie before it reaches the server, since it serves no purpose for the actual application (this is also my informal request for a ClientOnly flag in the next HTTP version). Is anyone familiar with a technique for dropping individual cookies before they reach IIS's output caching engine, or some other technique to leverage varyByHeaders="Cookie" while ignoring PRUM_EPISODES? I haven't found such a technique for web.config so far.
Do all monitoring systems manipulate cookies in this manner (changing every page request) for their tracking mechanisms and do they not realize that by doing so, they break user-mode output caching?
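To make the cache-key effect concrete, here is a toy model in Python (purely illustrative; it is not how IIS builds its internal key). It shows why a per-second PRUM_EPISODES value defeats a cache that varies by the Cookie header, and why stripping that one pair before the key is computed would restore hits:

    # Toy model only: a cache keyed on (path, Cookie header) misses whenever the
    # PRUM_EPISODES value changes, and hits again if that pair is stripped first.
    def cache_key(path, cookie_header):
        return (path, cookie_header)

    def strip_cookie(cookie_header, name="PRUM_EPISODES"):
        pairs = [p.strip() for p in cookie_header.split(";")]
        return "; ".join(p for p in pairs if not p.startswith(name + "="))

    first = "ASP.NET_SessionId=abc123; PRUM_EPISODES=1385159000"
    second = "ASP.NET_SessionId=abc123; PRUM_EPISODES=1385159001"

    print(cache_key("/home", first) == cache_key("/home", second))    # False: miss
    print(cache_key("/home", strip_cookie(first))
          == cache_key("/home", strip_cookie(second)))                # True: hit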

How to probe for authorized access on the client side

So far I have mostly worked with the single ASP.NET app model, where the Razor pages are served together with the data from the same app, so I can protect the UI controller actions and the data controller actions with the same security.
My new app has a completely independent API site (using ServiceStack) and a separate ASP.NET UI app that consumes the API. Right now the two apps sit on the same server, but I want to support the ability for anybody to write a UI app that consumes my data, so the UI could sit anywhere and could be a PHP app for all I care.
The new UI uses Razor and MVC, but it is really a fully client-side app that requests data from the API site.
So, the problem is right there. I am used to redirecting to the login page from the server side when someone hasn't logged in yet. My UI's server side has no concept of the login state on the API site.
The best I can do right now is, at the beginning of ANY UI page's load, make a lightweight AJAX call to the API site to get the current user info. If it's null, I do a client-side redirect via document.location.href.
This works, but it has issues, primarily a lot of client-side busyness. An initial version of the UI loads empty (no data), then an awkward redirect to the login page happens. The client-side app also becomes chatty: every page that loads makes an additional call to the API site. A server-side redirect is clean because the first page you see is either a UI page you already have access to or the login page.
My first question is: what is the best practice for this kind of thing? My second question is: is there a client-side cookie on my UI page that deterministically tells me I am logged in to the API site? I inspected the cookies on a UI page before and after the login that establishes security with the API site, and the cookies seem to be the same. My third question is: is there some kind of security action filter I can write for my UI MVC site that determines from the request's cookies whether the UI is currently logged in to the API site, and if it is, lets the request serve the page; if not, does the server-side redirect to the login page?
Thanks
Edit:
Scott - thanks so much for the detailed explanation. One clarification: I use MVC only to serve my client-side HTML. I use KnockoutJS and AJAX calls to ServiceStack to render the data. My question is really about how to react to AJAX calls that return security exceptions, and how to avoid presenting an empty HTML UI because the user is not logged in. My login page authenticates directly from HTML to ServiceStack, bypassing the MVC part. It's not an SPA where I can keep a single client-side login state that applies to all views. There are multiple cshtml pages that all need to probe for login so that they don't load empty and then redirect to the login page...
So MVC just serves the blank template, which includes the Knockout JS that calls the API to populate it? As I understand it, your current pages test for a session by making a lightweight AJAX call to the API.
The problem with that approach, as you have noted, is that it has overhead and a noticeable delay if there isn't a session.
Solution:
You can't test for the ss-id cookie in your JavaScript client application because of the origin difference. Knowing that this cookie exists would give you an indication of whether a user might have a valid session, but since you can't access it, you have to work around this. When your login page successfully creates a session by calling the API, have the success handler create a cookie that denotes that you have a session, such as a hasSession cookie.
You can check for the existence of this cookie on each page load, and it doesn't require a server trip to verify. If that cookie has the same expiration policy as the ServiceStack API cookie, it should stay in sync.
The initial cshtml page should hide the unpopulated form content using CSS and show a loading indicator until the data is loaded from the API.
When the page first loads, it should check whether the hasSession cookie exists. If it doesn't, it shouldn't make any API calls and should redirect immediately to login.
How would I know that I can invoke ajax calls and succeed without making a test call?
You should just assume you have a valid ss-id session cookie if you have the hasSession cookie, since you must have logged in successfully to get it. So make your call for the page data. If you get data back from the call rather than a 401, populate the form and display it by altering the CSS.
If you get a 401, redirect to the login screen and delete the hasSession cookie. The user won't have seen a blank unpopulated form because the CSS prevented it; they get a loading indicator while waiting, which is a perfectly reasonable state.
The 401 Unauthorized error should only occur once and redirect to login, and even that shouldn't happen if your hasSession and ss-id cookie expirations remain in sync.
I am just confused why you are trying to change the ServiceStack attributes now, subclassing [Authorize]. You shouldn't need to change the behaviour of the API.

IE/IIS integrated authentication problem

In IIS I've got:
http://myserver/myapplication
http://myserver/reports
The reports app is in fact Reporting Services, which uses Windows authentication. myapplication is an ASP.NET application that uses forms authentication.
The server is outside the company domain. If I access the reports first and type in the user and password (local credentials created on the server) when prompted, I can access the reports page, no problem. If I then go straight to my application's login page and try to log in, the login page just refreshes without doing anything. This always happens in IE 6. In IE 7 it happens intermittently. It does not happen in Firefox, or when Fiddler is running in the background, which seems to fix the problem on the fly.
I used Wireshark to see what's going on and found that IE 6 sends the Windows authentication token obtained from the reports app to myapp. That was the only difference between IE and Firefox. IIS seems to freak out and simply interpret my POST to the login page as a GET and return.
If I add Windows authentication to myapplication in IIS, everything seems to work fine in any browser.
Why is this happening? A bug in IE or am I missing something?
It's sorta a bug in IE, and sorta a bug in the design of NTLM/Negotiate (aka Integrated) authentication over HTTP.
NTLM/Negotiate are connection-oriented auth protocols, which HTTP wasn't really designed for. As a result, when you require this auth mechanism for one page on your server, IE will typically assume that other pages on the server have the same requirement.
Furthermore, for performance and security reasons, if IE expects a Negotiate/NTLM challenge for a given POST request, it will first send a 0-byte POST, expecting the server to return an HTTP/401 challenge, to which it will authenticate and then properly send the POST body.
However, in your case, the folder which doesn't require Integrated auth gets the 0 byte POST and says "Hrm, weird, a 0 byte post. Okay, HTTP/200, here's the page as if you'd used GET."
Because IE never gets the 401 challenge it expects, it never actually sends the POST body.
(Fiddler may confuse you a bit due to how HTTP connection reuse works).
The workaround is to be consistent: if you're using Integrated auth anywhere on the host, use it everywhere.

How to ensure http requests originate from a specific location?

HTTP Referer is the way I'm doing it at the moment. As everyone who's used this method knows, it is not 100% reliable, since the Referer header is optional and may be fiddled with.
Looking at how-to-ensure-access-to-my-web-service-from-my-code-only I'm still unsure of how to go about this in a minimal way.
The situation:
Advertising on someone else's site. I use an iFrame so I can change content/functionality at will. I pay $x.xx every time an action is completed, so I need to ensure that the action is being completed from the place I've said it is allowed to be completed from.
What I'm trying to prevent:
some other webmaster coming along going - "hey that's a nice tool, let me put that on my site"
So, as I said at the top, what I do at the moment is: if the referer doesn't match, I redirect to a page that has the same tool, but whatever actions are performed on that page don't cost me any money.
While trying to prevent the above, allow the following:
I don't mind if the webmaster/site owner I'm paying cash to for "actions complete" puts the code on other sites - obviously this is a good thing. Lots more coverage: the site owner gets more cash and I get more actions completed, which generates me more cash.
Question
What can I get the other party to do so that I know all the requests coming into my web page are from the party I have an agreement with, and not from some random site?
Thanks :)
Info about the app:
The other party's website has an iFrame. The iFrame displays an HTML/JS/PHP page of mine that sits on one of my domains. This page uses AJAX requests to interact with the actual web service, which is a Ruby/Sinatra app. I have lots of different pages that fit the look and feel of the other party's website.
So I'm thinking some sort of chatter between the other party's server and my server would be a good idea, with the result of that chatter somehow present during the iFrame request.
However I'm not sure if the other party would be able to set a cookie for the domain being served in the iFrame - in fact I'm pretty sure it can't.
Now to get around that limitation I could have a script included as part of the iFrame on the page that could set a cookie.
Ok the above ideas summarised:
The other party's server sends a request to my server and gets a response.
It renders the page with that response as a param to a <script src="...?param"></script>
My script sets a cookie.
Since the script comes before the iFrame, the script is loaded first.
The iFrame loads, and because the cookie was set on that domain, the previously set cookie is sent along with the iFrame request.
Bingo, the request is verified as legit.
Does this sound ok?
By the way, the tool I want actions completed on only works if JS is enabled, so...
If you really want to secure who can load your iframe, then one way to do this is via 2-legged OAuth (i.e. have your trusted partner "sign" the iframe GET request). Then your server can grant access based on a cryptographically valid signature and a known signing party. You'll want to enforce relatively short valid lifetimes for the signed requests to prevent someone else from just copying them and embedding them in their own site.
This also gives you the advantage of only having to do an initial, offline key exchange, without your partner having to make extra server requests to you ahead of the iframe insertion.
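A minimal sketch of the signing idea, written in Python for illustration (the real partner page is PHP and the service is Ruby/Sinatra, so this shows the scheme rather than drop-in code; the parameter names, the shared secret, and the 300-second window are assumptions):

    # Signed iframe URL sketch: the partner signs the src with a shared secret,
    # and the Sinatra service would perform the equivalent of verify() below.
    import hashlib
    import hmac
    import time
    from urllib.parse import urlencode

    SECRET = b"shared-secret-exchanged-offline"   # assumption: agreed out of band

    def sign_iframe_url(base_url, partner_id):
        params = {"partner": partner_id, "ts": str(int(time.time()))}
        payload = urlencode(sorted(params.items())).encode()
        params["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return f"{base_url}?{urlencode(params)}"

    def verify(params, max_age=300):
        params = dict(params)
        sig = params.pop("sig", "")
        payload = urlencode(sorted(params.items())).encode()
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        fresh = abs(time.time() - int(params.get("ts", "0"))) <= max_age
        return fresh and hmac.compare_digest(sig, expected)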
