OWIN POST to localhost fails unless Fiddler is running - wpf-controls

I have a self-hosted OWIN application that embeds a System.Windows.Controls.WebBrowser control in a WPF view. The browser connects to a specific site, which then communicates with my application via POSTs to 127.0.0.1:8002. The architecture is not under my control, so it is not open to change.
The site loads and runs fine, but the localhost communication only works while Fiddler is running. If I take Fiddler out, I get an error.
I can POST directly to the application using Postman without Fiddler - no problems there.
My guess is that normally everything runs in-process and that causes the problem, while Fiddler forces the traffic out-of-process and some marshalling magic along the way fixes it. Just a guess. I've tried running the OWIN service on a different thread; it didn't help.
I've seen a similar (working) sample application, but it used Awesomium running in a separate process. That is not an option; we must use a specific version of IE.
Any thoughts on how to get OWIN to talk to the browser control?

The problem was solved by initialising OWIN with a wider range of URLs. Initially, I was just doing this:
StartOptions options = new StartOptions();
options.Urls.Add(String.Format("http://localhost:{0}", port));
WebApp.Start<ServiceStartup>(options);
When I added further URLs, the problem went away. I'm guessing that Fiddler changes the URLs on the way through:
StartOptions options = new StartOptions();
options.Urls.Add(String.Format("http://localhost:{0}", port));
options.Urls.Add(String.Format("http://127.0.0.1:{0}", port));
options.Urls.Add(string.Format("http://{0}:{1}", Environment.MachineName, port));
WebApp.Start<ServiceStartup>(options);
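If enumerating every alias gets unwieldy, a wildcard prefix is another option. This is a hedged sketch rather than part of the original fix; note that a non-localhost prefix like this normally needs a URL ACL reservation (netsh http add urlacl) or an elevated process:
StartOptions options = new StartOptions();
// "+" matches any host name: localhost, 127.0.0.1, the machine name, etc.
options.Urls.Add(String.Format("http://+:{0}", port));
WebApp.Start<ServiceStartup>(options);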

Related

ASP.NET Core 5 MVC web app returning bad request errors in some pages after deployment to IIS

I have tried everything. I configured Windows Server 2019 according to the Microsoft documentation and successfully deployed a .NET 5 web application to IIS.
I can get to the login page, and even the forgot-password page, and they render fine. However, when I try to perform any action (send the forgot-password link or log in), I get a "Bad Request" from the server, and I haven't found anything that explains why.
I have tried several, and I mean several, things found by Googling around, but nothing helps. These include disabling HTTPS within the .NET Core application, trying to get a detailed error page with app.UseDeveloperExceptionPage(); inside Startup, and so on, but nothing works; every action returns the same Bad Request page.
If someone could help or point me in the right direction, I would really, REALLY appreciate it.
Thank you
PS: In case it has anything to do with the problem, the two errors I can reproduce (I can't even log in) seem to happen, I think (maybe not), when Microsoft Identity redirects to another page.
EDIT: one of you asked for the code. Thank you.
As you can see, there's nothing specific to my application in the forgot-password page. It is scaffolded Microsoft Identity code. I even edited it down to a single line, which is the default return statement anyway:
public async Task<IActionResult> OnPostAsync()
{
    return RedirectToPage("./ForgotPasswordConfirmation");
}
As you can see, there's nothing special about that code. The HTML that calls it is, again, a Microsoft Identity scaffold with little to no changes (by little, I mean maybe some CSS and a new view-data value).
Then again, the forgot-password page renders and looks fine in the front end. But the moment I enter my email and press Enter on that page (also just a Microsoft Identity scaffold), nothing happens. I receive the Bad Request. There is NO magic or custom code here. Something silly is going on.
EDIT II: YES, locally it works perfectly. The strange behavior happens only when deployed to IIS.
EDIT III: I added logging to my .NET Core app, writing it to a file, and I think I finally found the error, if not yet the reason: the requests are failing antiforgery validation.
But why? Cookies are enabled in the browser on the server, to no avail; same issue. Does someone have a better idea than disabling the antiforgery rules on the login and forgot-password pages?
Thank you
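For reference, "disabling the antiforgery rules" on a single page would look roughly like the sketch below, mirroring the scaffolded page model above. It is a last resort rather than the fix that was eventually found, since it removes CSRF protection from the form:
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using System.Threading.Tasks;

// Skips antiforgery validation for POSTs to this page only.
[IgnoreAntiforgeryToken]
public class ForgotPasswordModel : PageModel
{
    public async Task<IActionResult> OnPostAsync()
    {
        return RedirectToPage("./ForgotPasswordConfirmation");
    }
}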
For some reason, when I deployed the first version of my app to IIS, I thought it was a good idea to just browse it from the link in IIS. Of course, on a freshly installed Windows Server 2019, IE is still the default browser. I connected directly to the IP of my web app via VPN, but used Chrome this time. Guess what? All the problems disappeared. Yes, it's a bad idea to try to use a modern framework like .NET Core Identity with IE - all the more so on server editions, where IE runs with Enhanced Security Configuration by default, which likely blocks the scripts and cookies the Identity pages depend on.

ASP.NET Core Application Hanging on Requests

We have an ASP.NET Core 2.0 application running on .NET Framework 4.7.1, hosted in IIS using Kestrel.
The application works fine on most machines; however, on my machine it is really slow. I have stripped the application down to a single controller returning a string, with all but the MVC and logging middleware removed. It appears that around the fifth GET request there is a hang of about 30 seconds before the controller action is hit. The application is not restarting; it's just hanging.
Has anyone had a similar problem? Thanks
Maybe this is just a browser issue (in my case Vivaldi), but I experienced a similar "hanging" problem with ASP.NET Core 3.1 - a couple of requests work, then I change the URL to hit some other APIs and it hangs (eventually giving a "connection was reset" error).
It turned out it was trying to hit http://localhost:(port) but with my SSL port, which just makes it hang. The issue was masked by the browser's URL bar, which doesn't show the protocol (http/https).
You can type https:// at the start of the URL to fix it.
Or change your browser settings to make it more obvious what's going on (Vivaldi, for example, can show the full address).
I had the same problem. The CreateWebHostBuilder method in my Program class looked like this:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel()
        .UseStartup<Startup>();
Then I realized the problem was with .UseKestrel(). The Kestrel documentation says that CreateDefaultBuilder calls UseKestrel behind the scenes, so I removed this extension method and the problem was gone!
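For reference, the corrected builder with the redundant call removed:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args) // already wires up Kestrel internally
        .UseStartup<Startup>();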

Setting a browser cookie

My problem: my browser isn't getting the session cookie set. This causes requests to the server not to be associated with one another (for example, (1) authenticate and then (2) get some data).
Background/Context:
I'm building a product that has a mobile and a web side to it. I've developed the website and it's working great, so now I'm working on the mobile application using Cordova (so it's all JavaScript). I want to use the same backend for the mobile app as I do for the website.
While I'm testing everything, I want to simply run the app in the browser so I don't have to emulate an iOS device all the time, and I get better debugging tools in the browser. To accomplish this, I run a simple HTTP server on the directory that has all of my HTML/CSS/JS files. Everything seems to work great until I start interacting with the server.
My Setup:
The server is running on localhost:3000. The Cordova app is being served up on localhost:3001. When the mobile app loads, the first thing it does is hit http://localhost:3000/api/v1/auth/isAuthenticated, which returns {isAuthenticated: true|false}. What the endpoint does is irrelevant. What is relevant is that the mobile app in the browser doesn't get the sessionId cookie set, and therefore every request to the server on localhost:3000 has a different sessionId. So even though I am able to authenticate properly, the next request I make is not associated with the authenticated user, because it carries no sessionId cookie.
My question: What is a good way to solve this problem? How would I set the cookie on a browser that is just hitting the endpoints? Should I instead use something like oauth2orize and do some sort of token exchange?
Other interesting notes:
I'm using Express.js sessions. I have actually tried this with both the latest 3.x version and the release candidate for 4.x. Neither did the trick.
When I simulate the mobile app in an iOS emulator, everything works great (it's just not an optimal environment for development).
I'm using CORS to allow my localhost:3000 server to respond to requests from localhost:3001. The requests are working; it's just the cookie not getting set that is the problem.
Thanks!
It looks like it's a security issue: servers are not allowed to set cookies on browsers from other domains. The industry has come up with a solution for this: JSON Web Tokens. I implemented this after an hour or two and it seems to be working great.
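The backend in this question is Express, but the token flow is the same in any stack. For illustration only, here is a hedged C# sketch of minting a signed token at login with the System.IdentityModel.Tokens.Jwt package (the issuer, claim, and key are made up); the client stores the returned string and sends it back on every request, typically in an Authorization: Bearer header, instead of relying on a cookie:
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

class JwtSketch
{
    static void Main()
    {
        // 256-bit (or longer) shared secret; in real code, load it from config.
        var key = new SymmetricSecurityKey(
            Encoding.UTF8.GetBytes("a-32-byte-or-longer-secret-key-here!"));

        var token = new JwtSecurityToken(
            issuer: "http://localhost:3000",                   // hypothetical API host
            claims: new[] { new Claim("sub", "user-42") },     // hypothetical user id
            expires: DateTime.UtcNow.AddHours(1),
            signingCredentials: new SigningCredentials(key, SecurityAlgorithms.HmacSha256));

        // Serialized token handed to the client after a successful login.
        Console.WriteLine(new JwtSecurityTokenHandler().WriteToken(token));
    }
}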

QUnit and PhantomJS testing of AJAX requests only works through proxy

I'm attempting to use grunt-contrib-qunit to run a pre-existing suite of QUnit tests (testing the parsing of AJAX request results) in headless mode with PhantomJS on Windows 8.
The tests complete fine in these scenarios:
When the remote page is accessed directly from any browser without Fiddler or another proxy running
When Phantom runs the tests from a command prompt with Fiddler open and running
Oddly, if I don't have Fiddler open and monitoring the requests, the AJAX requests I'm testing never seem to initialize. I've checked my default IE LAN settings and there is no proxy enabled; I've also tried flipping the "Automatically detect settings" checkbox there, with no change.
Any thoughts?
Details on my setup:
Node v0.10.4
Latest grunt-contrib-qunit
Windows 8
QUnit is divided into 4 or 5 modules with setup and teardown tasks in some modules, asynchronous and synchronous tests, and autorun is set to false.
Update:
If I turn off the "Reuse client connections" and "Reuse connections to servers" options in Fiddler, I get the same failure behavior as when Fiddler is off. This led me to believe it's a problem with connections being closed prematurely, so I tried setting a custom keep-alive header - but it still errors out.
Update 2:
I still question this, because the page itself loads fine and only the requests fail, but it looks like this could be related to NTLM authentication; Fiddler might somehow be facilitating the handshake. There is an open issue for NTLM on the PhantomJS GitHub page.
Update 3:
After continued troubleshooting this evening, it looks like the issue is only with authentication on POST requests; GET requests seem to work fine. I'm working around it for now by routing all requests through an ASHX handler and thus dropping the auth component. The only thing I had to change was to disable web security in PhantomJS to let the cross-domain requests through.
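A minimal sketch of the kind of pass-through ASHX handler described above (the backend URL and handler name are made up): the handler makes the NTLM-authenticated call server-side, so PhantomJS never has to complete the handshake itself.
<%@ WebHandler Language="C#" Class="PassThroughHandler" %>

using System.Net;
using System.Web;

public class PassThroughHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Hypothetical protected backend endpoint.
        var request = (HttpWebRequest)WebRequest.Create("http://backend.example.com/api/data");
        request.Method = context.Request.HttpMethod;
        request.UseDefaultCredentials = true; // NTLM handled here, not in PhantomJS

        if (context.Request.HttpMethod == "POST")
        {
            request.ContentType = context.Request.ContentType;
            using (var upstream = request.GetRequestStream())
            {
                context.Request.InputStream.CopyTo(upstream);
            }
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var stream = response.GetResponseStream())
        {
            context.Response.ContentType = response.ContentType;
            stream.CopyTo(context.Response.OutputStream);
        }
    }

    public bool IsReusable
    {
        get { return false; }
    }
}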
I was going to say you need to turn off security, which is done by passing --web-security=no to phantomjs. This will sort out the CORS issues. However, I see in your Update 3 that you've already discovered this.
For the POST authentication problem, I blogged about the workaround here:
http://darrendev.blogspot.jp/2013/04/phantomjs-post-auth-and-timeouts.html
I've heard the most recent version has fixed this, so upgrading might be the actual answer?
BTW, be careful with auth in PhantomJS, as the auth details are sent on all requests. E.g. if your test page fetches jQuery from a CDN, the CDN will be sent your authentication headers. (SlimerJS has some new features for getting around this; AFAIK PhantomJS does not yet.)

Pitfalls of accessing a webserver on 127.0.0.1 from JS with a public site

I'm thinking about exploring the idea of having our client software run as a service on a high port, listening for simple HTTP GET requests from 127.0.0.1. The theory is that I would be able to access this service via JS from a web page served from my site.
1) The user installs client software that installs itself as a service and waits for authenticated requests on 127.0.0.1:8080
2) When the user hits my home page, JS on the page makes an XMLHttpRequest to 127.0.0.1:8080 and asks for the status
3) The home page then makes another JS request back to my web server, sending the status it received.
This would allow my users to upload/download and edit files on a USB-attached device in real time from a browser. Polling could be the fallback method, which is close to what we do today.
Has anyone done this and what potential pitfalls are there? Will this even work?
I can't see any potential pitfalls. I do have a couple of points, however.
1/ You probably want to make sure your service only accepts incoming connections from the local machine (127.0.0.1); see the sketch after this list. Otherwise, anyone could look at your JavaScript, figure out that it's talking to [your-ip]:8080, and then try that themselves from a remote site (a security hole).
2/ I wouldn't use port 8080, as it's commonly used for other things (alternate HTTP servers, etc.). Make it configurable and choose a nice high, random-ish value.
3/ I'm not sure what you're trying to do with point 3, but I think you're trying to send the status back to the user. In that case, why wouldn't the JavaScript on your home page just get the status in a single session and output/update the HTML presented to the user? Your "another JS request back to my web server" doesn't make sense to me.
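On point 1/, here is a minimal sketch of a loopback-only service, written in C# with HttpListener purely for illustration (the port, path, and payload are made up). Binding to 127.0.0.1 rather than a wildcard means the OS only accepts connections from the local machine:
using System;
using System.Net;
using System.Text;

class LoopbackStatusService
{
    static void Main()
    {
        var listener = new HttpListener();
        // Registering 127.0.0.1 rather than "+" or "*" means HTTP.SYS will
        // refuse connections that don't originate on this machine.
        listener.Prefixes.Add("http://127.0.0.1:18293/status/"); // hypothetical high port
        listener.Start();
        Console.WriteLine("Listening on 127.0.0.1:18293 ...");

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext(); // blocks until a request arrives
            byte[] body = Encoding.UTF8.GetBytes("{\"status\":\"ok\"}");
            ctx.Response.ContentType = "application/json";
            ctx.Response.ContentLength64 = body.Length;
            ctx.Response.OutputStream.Write(body, 0, body.Length);
            ctx.Response.Close();
        }
    }
}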
You may not be able to make an XMLHttpRequest to 127.0.0.1, as XMLHttpRequest is usually limited to the same domain the main content is served from. I'm not sure whether this restriction applies when the server is on the client's machine. That said, you could still create a <script> tag with its src pointing at 127.0.0.1 and have the local web server return some JavaScript to run. If you only need a simple response, this could work well.
I think it is much better to avoid implementing application logic in JavaScript and HTML. Once the user clicks a button on the web page, the JavaScript should send a request to your service and let it do the rest of the work.
You could have problems with step 1 (the client installing itself) depending on your target user base.
You will need a customised installer for each supported environment (Win2K, Vista, Linux, Mac OS 9/10, etc.).
If your user is on a locked-down work PC, this simply won't be allowed.
To some users this might look distressingly similar to a trojan unless you explicitly point out that you will be installing software that runs as a service.
You didn't mention an uninstall procedure. Users resent Adobe-style software that installs itself and provides no sensible uninstall options.
Otherwise the approach is sound, and there are a couple of commercial products out there that use exactly this approach!
