I am trying to debug IIS + ARR in a reverse proxy scenario. I have a bunch of URL Rewrite rules that change the hostname of an incoming request to another hostname; the requests come in via HTTPS. I need to capture the headers of the outbound request made by the ARR reverse proxy to the rewritten host.
The flow is:
Client calls https://originalhostname.com/foo/bar.aspx
ARR receives the request and rewrites it to https://newhostname.com/foo/bar.aspx
After it hears back from newhostname.com, ARR returns the response to the client.
So I need to capture the request initiated by the ARR box for newhostname.com
I set up Fiddler to intercept the outbound request following this link
Outbound requests are visible in Fiddler and can be decrypted too, but the hostname of the requests is not newhostname; instead they all target originalhostname.
I do notice that an HTTPS decryption tunnel is set up for newhostname, but then I see the following in the Fiddler log, and the subsequent requests all target originalhostname.
03:21:48:2877 Session #25 detaching ServerPipe. Had: 'direct->https/newhostname:443' but needs: 'direct->https/originalhostname:443'
What could be wrong? How can I debug this further?
I have two servers: S1 and S2. Server S2 is not public; access to it is possible only from S1. On S1 I want to host a web app (front end), and on S2 I want to host the API (back end) for this application. How can I configure a new application on my IIS site so that it sends all requests to the other app hosted in IIS on S2?
Expectations
WWW app is available on https://s1.com
WWW app connects to API at https://s1.com/api
WWW sends a request to 's1.com/api' --request-> 's2.com/api' --response-> 's1.com/api' --response-> WWW.
Is it possible to do this on IIS?
If I understand correctly, since your server S2 is not public, users cannot access S2 directly, only from S1. You want the client to send a request to s1.com/api and have it forwarded to s2.com/api. After server S2 processes the request, it returns the response to s1.com/api, which then returns it to the client.
This requires S1 to act as a reverse proxy server: client → S1 (reverse proxy server) → S2 (target server).
Follow the steps below to set up a reverse proxy server S1:
Suppose the proxied URL for the Web API uses port 8082 on server s1, and the real URL for the Web API uses port 8085 on server s2.
(The different ports are just for easy distinction; of course, you can also use the default port.)
1. Download and install the URL Rewrite module.
(URL Rewrite must be installed before ARR because ARR depends on URL Rewrite.)
2. Download and install the Application Request Routing module.
After installation, you should be able to see the Application Request Routing Cache and URL Rewrite features in IIS Manager.
3. Open IIS Manager, double-click ARR, click "Server Proxy Settings" on the right, select Enable Proxy, and apply. This makes ARR a server-level proxy.
4. Select the website (here listening on port 8082) and double-click URL Rewrite to open the feature.
5. Click Add Rule(s), select Reverse Proxy, and click OK. In the Add Reverse Proxy Rule dialog, enter the URL of the Web API on the s2 server and click OK.
6. At this point, a ReverseProxyInboundRule will be generated automatically; you can modify the rule according to your actual situation (a sketch of the generated rule is shown after this list).
7. Eventually you can reach s2.com/api by typing s1.com/api in your browser.
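For reference, the rule the wizard writes into the site's web.config looks roughly like the sketch below. The hostname and port follow the example above (s2.com, 8085) and are placeholders for your real values; if only /api should be proxied, you can narrow the match pattern (for example api/(.*)) instead of matching everything.

<system.webServer>
  <rewrite>
    <rules>
      <!-- Generated by the Reverse Proxy wizard: forward every request to the API on s2 -->
      <rule name="ReverseProxyInboundRule1" stopProcessing="true">
        <match url="(.*)" />
        <action type="Rewrite" url="http://s2.com:8085/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>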
For more information, you can refer to the official documentation. I hope my answer helps.
I need some help digging deeper into why IIS is behaving in a certain way. Edge/Chrome makes an HTTP/2.0 request to IIS, using the IPv6 address in the header (https://[ipv6]/), which results in the server generating a 302 response. An ISAPI filter makes some modifications to the 302 response and replaces the response buffer. IIS drops the request/response and logs the following in the HTTPERR log:
<date> <time> fe80::a993:90bf:89ff:1a54%2 1106 fe80::bdbf:5254:27d2:33d8%2 443 HTTP/2.0 GET <url> 1 - 1 Connection_Dropped_List_Full <pool>
I suspect this is related to HTTP/2.0: when putting Fiddler in the middle, it isn't HTTP/2.0 anymore, it downgrades to HTTP/1.1, and it works.
When using an IPv4 address, it works. In either case the filter goes through the identical steps. There is no indication in the filter that anything went wrong.
Failed Request Tracing will not write buffers for incomplete/dropped requests that appear in the HTTPERR log.
Is there a place where I can find out more detailed information about why IIS is dropping the request?
I did a network capture, and it looks like the browser is initiating the FIN teardown of the session.
Do you use any load balancer or reverse proxy before the request gets to IIS? This error indicates that the log cannot store any more dropped connections, so the underlying problem is that your connections are being dropped.
If you use a load balancer, the web application may be under heavy load, and because of this no threads are available to provide logging data to HTTP.sys. Check this.
Or, before IIS responds to the client, the client has already closed the request but IIS still sends the response. This is more likely to be a problem with the application itself, not IIS and HTTP.sys. Check this.
One thing I noticed is that if you change HTTP/2 to HTTP/1.1, it works well. The main difference between HTTP/1.1 and HTTP/2 is performance.
HTTP/1.1 practically allows only one outstanding request per TCP connection (though HTTP pipelining allows more than one outstanding request, it still doesn’t solve the problem completely).
HTTP/2.0 allows using the same TCP connection for multiple parallel requests.
So it looks like when you use HTTP/2, one connection carries multiple requests, and the application cannot handle these requests well, especially requests for images.
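If you want to confirm the HTTP/2 angle without putting Fiddler in the middle, one test (my suggestion, not something from your trace) is to temporarily disable HTTP/2 in HTTP.sys through the registry and retry the same IPv6 request. As far as I know these are the documented HTTP.sys registry switches; a reboot is needed afterwards, and deleting the values (or setting them to 1) re-enables HTTP/2:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters" /v EnableHttp2Tls /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters" /v EnableHttp2Cleartext /t REG_DWORD /d 0 /f

If the drop disappears with HTTP/2 off, that at least narrows the problem to the HTTP/2 path rather than the filter logic itself.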
Another thing is that Failed Request Tracing can capture all requests and responses, including those with status codes 200 and 302.
I have an https website (using LAMP stack) and I want to send an http request to port 3000 of a separate node.js server when you click a button (using an AJAX call and jsonp). It worked when my website was not secured (http), but after I switched to using a load balancer to make it secure (I'm using Amazon Lightsail), the http request no longer works. Is this because an https website does not allow http requests since all information on the website is supposed to be secure? And if so, should I send an https request instead? This would require me to make the node.js server https-secured by adding it to the load balancer. However, would this prevent me from requesting to port 3000 since load balancers only accept requests to ports 80 (http) and 443 (https)? I've looked into listeners but it seems like Amazon Lightsail does not support listeners with its load balancers.
Put that Node server behind the same load balancer, acting as a reverse proxy, with another route or DNS name, and it will probably work for you.
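One concrete way to implement the "another route" idea, assuming the Node app is reachable from the web server machine (the path /node/ and localhost:3000 below are only placeholders): let the Apache vhost that already sits behind the Lightsail load balancer proxy a sub-path to the Node server, so the browser only ever talks HTTPS to your main domain.

# Requires mod_proxy and mod_proxy_http to be enabled
<VirtualHost *:80>
    ServerName example.com

    # Forward /node/ to the Node.js app; everything else is served by the LAMP app as before
    ProxyPreserveHost On
    ProxyPass        "/node/" "http://localhost:3000/"
    ProxyPassReverse "/node/" "http://localhost:3000/"
</VirtualHost>

The AJAX call then targets https://example.com/node/... instead of http://host:3000/..., so there is no mixed-content problem and no extra listener is needed on the load balancer.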
I have a server over HTTPS on NodeJS with Express.
When uploading a file, I have used the req.protocol property in the controller to get either the HTTP or HTTPS "part" of the URL, so that I can save the file with an absolute URL. The problem is that without enabling the "trust proxy" setting of Express (http://expressjs.com/en/api.html#trust.proxy.options.table), HTTPS doesn't get detected.
I thought this setting was used in the case of an actual redirect (when using the HTTP URL and the server does a 301 redirect to HTTPS).
So this is more of an explanation question, rather than a solution one:
Why doesn't HTTPS get detected when calling the URL this way?
trust proxy has nothing to do with 301 redirects.
That setting is important when running your Node server behind a proxy:
   +--------HTTPS---------+---HTTP---+
   |                      |          |
client --> internet --> proxy --> node.js
It is typical to have some sort of proxy between the internet and your Node server, for example a CDN, a load balancer, or simply an nginx instance. The HTTPS connection is established between the client and that proxy. The proxy takes care of the necessary SSL certificate wrangling and connection encryption and doesn't burden your application server (Node) with those details. It then forwards only the relevant details of the request via plain HTTP to your Node server. Your server only sees the proxy as the origin of the request, not the client.
Since the Node server didn't itself handle the HTTPS connection, how could it know whether the connection between the client and the proxy was HTTPS? It can't. The proxy needs to voluntarily forward that information too. It does so in the X-Forwarded-* HTTP headers. Whether the connection was specifically HTTP or HTTPS is sent in the X-Forwarded-Proto header.
The thing is, those are just HTTP headers. Anyone can set them; the client itself could set them. That's why you need to explicitly opt into using those headers with the trust proxy setting, if and when you know your app will be running behind a proxy that sets them. When you're not running behind a proxy and your Node server is directly exposed to the internet, you must switch that setting off; otherwise anyone could set those headers, and your server would obey them and be led to use false information.
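A minimal sketch of the difference the setting makes (the route, port and file name are made up for illustration; the app is assumed to sit behind one proxy that sends X-Forwarded-Proto):

const express = require('express');
const app = express();

// Only enable this when the app really runs behind a proxy you control.
// The value 1 means: trust exactly one hop in front of the app (the proxy).
app.set('trust proxy', 1);

app.post('/upload', (req, res) => {
  // With 'trust proxy' off, req.protocol reports 'http' (the proxy-to-node hop).
  // With it on, Express reads X-Forwarded-Proto and reports 'https'.
  const absoluteUrl = `${req.protocol}://${req.get('host')}/uploads/some-file.png`;
  res.json({ savedAs: absoluteUrl });
});

app.listen(3000);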
In Node.js, how do I create a server that is accessible via a name rather than a port?
instead of:
https://example.com:port
this kind of thing:
https://example.com/name/
A server (of any kind) is only identified by the domain and port in the URL - it is not identified by the path at all. The browser parses the URL, takes the domain and port, looks up that domain in DNS to get the IP address, then makes a TCP connection to that specific IP address and port. So, in your example, that would be:
https://example.com:port
or
https://example.com
where the latter just uses the default port (443 for https). Only those portions of the URL specify the server that the browser will connect to. The path is then sent to that server, and the server can decide what it wants to do with that path when it receives the request.
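You can see that split with the WHATWG URL parser that ships with Node.js (the values are purely illustrative):

const u = new URL('https://example.com:8443/name/');
console.log(u.hostname); // 'example.com' -> resolved via DNS to pick the server
console.log(u.port);     // '8443' -> would be '' (default 443) if the URL had no explicit port
console.log(u.pathname); // '/name/' -> sent to that server in the request, not used to choose it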
That said, there are server-side tools you can use that will handle a request at the above server, look at the path, and then forward that request to a different server/port. This is often called a reverse proxy. So, for example, you can run nginx (a pre-built, configurable proxy) and configure it so that a request to https://example.com/name/ goes to some other host (which you can configure as some other IP address and port).
The browser will connect to example.com (which is your proxy) and send the HTTP request for /name. The proxy will receive that request, look at the path, see that it is configured to forward that request to a different host, then connect to that other host, send the request to it, get the response back, and return the response to the browser. The browser will not necessarily know that this "forwarding" is going on behind the scenes. It makes a request and gets an answer.
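A minimal sketch of that nginx configuration, assuming the other app listens on 127.0.0.1:3001 (the names and port are placeholders; plain HTTP is shown for brevity, and in practice this would be the HTTPS server block with the certificate directives added):

server {
    listen 80;
    server_name example.com;

    # Everything under /name/ is forwarded to the other server/port
    location /name/ {
        proxy_pass http://127.0.0.1:3001/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Because both location and proxy_pass end with a slash, nginx replaces the /name/ prefix with /, so a request for /name/foo reaches the backend as /foo.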