How to enable Windows authentication through a reverse proxy? (IIS)

Sorry if this is a duplicate; since I am neither a security nor a network expert, I may have missed the correct terminology to find existing information.
I am working on an application to intercept and modify HTTP requests and responses between a web browser and a web server (see "How to intercept and modify HTTP responses on server side?" for the background). I decided to implement a reverse proxy in ASP.NET which forwards client requests to the back-end HTTP server, translates links and headers from the response to the properly "proxified" URLs, and sends the response to the client after having extracted the relevant information from it.
It is working as expected, except for the authentication part: the web server uses NTLM authentication by default, and just forwarding requests and responses through the reverse proxy does not authenticate the user on the remote application. Both the reverse proxy and the web application are on the same physical machine and run in the same IIS server (Windows Server 2008 / IIS 7, if that matters). I tried both enabling and disabling authentication on the reverse proxy app, with no luck.
I have looked for information about it, and it seems to be related to the "double-hop problem", which I do not understand. My question is: is there a way to authenticate the user on the remote application through the reverse proxy using NTLM? If not, are there alternative authentication methods I could use?
Even if you don't have a solution to my problem, just pointing me to relevant information that would help me get out of this confusion would be great!

I found what the problem was (and it is NTLM): in order to have the browser ask the user for credentials, the response must have a 401 status code. My reverse proxy was forwarding the response to the browser, but IIS was adding its standard HTML error page explaining that the requested page could not be accessed, which prevented the browser from prompting for credentials.
The problem was solved by removing the response content when the status code is a 401.
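
For illustration, here is a minimal sketch of that fix as it might look inside an ASP.NET (System.Web) reverse proxy handler; the method and variable names are hypothetical, and the point is simply that a 401 from the back end is relayed with its WWW-Authenticate challenge but without any substitute HTML body:

using System.IO;
using System.Net;
using System.Web;

// Hypothetical helper inside an IHttpHandler-based reverse proxy.
// "backendResponse" is the HttpWebResponse received from the back-end server;
// "clientResponse" is the HttpResponse going back to the browser.
static void RelayResponse(HttpResponse clientResponse, HttpWebResponse backendResponse)
{
    clientResponse.StatusCode = (int)backendResponse.StatusCode;

    // Copy the authentication challenge so the browser can react to it.
    string challenge = backendResponse.Headers["WWW-Authenticate"];
    if (challenge != null)
        clientResponse.AppendHeader("WWW-Authenticate", challenge);

    if (clientResponse.StatusCode == 401)
    {
        // Do not forward a body and do not let IIS substitute its own
        // HTML error page: a bare 401 lets the browser prompt for credentials.
        clientResponse.SuppressContent = true;
        clientResponse.TrySkipIisCustomErrors = true;
        return;
    }

    // For any other status, stream the body through unchanged.
    using (Stream body = backendResponse.GetResponseStream())
        body.CopyTo(clientResponse.OutputStream);
}
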
With all due respect to the person who answered this some years ago, I must admit this is plainly false. The problem was indeed solved AFTER removing the response content when the status code is a 401, but that had nothing to do with the initial problem.
The truth is that Windows authentication was made to authenticate people over local Windows networks, where no proxy server is present or even needed.
The main problem with NTLM authentication is that this protocol does not authenticate the HTTP session but the underlying TCP connection, and as far as I know there is no way to access it from ASP.NET code.
Every proxy server I tried broke NTLM authentication.
Windows authentication is comfortable for users, because they never have to enter a password for whatever application may lie in your intranet; frightening for a security guy, because there is an auto-login without even a prompt if the site's domain is trusted by IE; and shocking for a network administrator, because it melts the application, transport and network layers into some "Windows ball of mud" instead of just plain HTTP traffic.

> NTLM won't work if the TCP packets are not forwarded exactly as the reverse proxy received them. And that's why many reverse proxies don't work with NTLM authentication (like nginx). They forward HTTP requests correctly, but not the TCP packets.
Nginx does have the functionality to work with NTLM authentication. Keepalive needs to be enabled, which is only available through the upstream module (ngx_http_upstream_module). Additionally, in the location block you need to specify that you will be using HTTP/1.1 and that the "Connection" header field should be cleared for each proxied request. The nginx config should look something like:
upstream http_backend {
    server 1.1.1.1:80;
    keepalive 16;
}

server {
    ...
    location / {
        proxy_pass http://http_backend/;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
I scratched my head over this issue for quite some time, but the above works for me. Note that if you need to proxy HTTPS traffic, a separate upstream block is needed. To clarify a bit more, "keepalive 16;" specifies the maximum number of idle keepalive connections to the upstream that each worker process keeps open. Adjust the number according to the expected number of simultaneous visitors on the site.

Although this is an old post, I just want to report that it works for me quite well with an Apache 2.2 reverse proxy and the keepalive=on option. Obviously, this keeps the connection between the proxy and the SharePoint host open and "pinned" to the client<>proxy connection. I don't know exactly which mechanisms are behind this, but it works fairly well.
But: sometimes my users encounter the issue that they are logged in as another user, so there seems to be some mixing-up of sessions. I will have to give this some further testing.
Solution for everything (in case you have a valid, signed SSL certificate): switch IIS to Basic Auth. This works absolutely fine, and even Windows (i.e. Office with a SharePoint connection, all WebClient-based processes, etc.) won't complain at all.
But they will complain when you're just using HTTP without SSL/TLS, and also with self-signed certificates.

I confirm that it works with keepalive=on on Apache 2.2.
I examined frames with Wireshark, and I know why it doesn't work. NTLM won't work if the TCP packets are not forwarded exactly as the reverse proxy received them. That's why many reverse proxies, like nginx, don't work with NTLM authentication. Reverse proxies forward HTTP requests correctly but not the TCP packets.
NTLM requires a TCP reverse proxy.
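
To make the "TCP reverse proxy" point concrete, here is a minimal, hypothetical C# sketch of a byte-level relay (the addresses and ports are placeholders): each accepted client connection gets exactly one back-end connection, and bytes are copied verbatim in both directions, which is the connection affinity the NTLM handshake depends on (no connection pooling, no re-dispatching of individual requests):

using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class TcpRelay
{
    // Listen on a front port and relay every accepted connection to the back end.
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Any, 8080);   // hypothetical front port
        listener.Start();
        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = RelayAsync(client, "192.168.0.10", 80);        // hypothetical back end
        }
    }

    static async Task RelayAsync(TcpClient client, string backendHost, int backendPort)
    {
        using (client)
        using (var backend = new TcpClient())
        {
            await backend.ConnectAsync(backendHost, backendPort);

            // One client connection <-> one back-end connection, bytes copied verbatim.
            // The NTLM handshake rides on this single TCP pair and stays intact.
            Task upstream = client.GetStream().CopyToAsync(backend.GetStream());
            Task downstream = backend.GetStream().CopyToAsync(client.GetStream());
            await Task.WhenAny(upstream, downstream);
        }
    }
}
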

Related

Is HTTPS behind reverse proxy needed?

I have an API server running behind an nginx reverse proxy. It is important to have all requests to my API server be secured via TLS since it handles sensitive data.
I've set up nginx to work with TLS (Let's Encrypt), so that seems to be okay. However, requests from nginx to my API server are still insecure HTTP requests (this is all happening across Docker containers, by the way).
Is it a best practice to also set up HTTPS between the reverse proxy and the API server? If so, how would I go about doing that without over-engineering it?
It all comes down to how secure or paranoid you'd like your implementation to be. It may also depend on the type of data you're playing with. For instance: I'd definitely do this for credit card numbers or other sensitive information.
As the comments have already stated, you would typically terminate SSL connections at the front-facing web server, assuming the API backend is also inside your LAN, which you trust and control. If you want to go that extra mile, you could also set up SSL on the API backend. Details of how to do that depend on the software you're using on your backend.
If you do decide to implement SSL on the API backend, the setup would be similar to what you did to setup Nginx with SSL on the frontend, with the main difference being you don't need to use a public certificate on the backend. It can be self-signed, since no one else besides your web server will be talking to it. Then it's just a matter of fixing all the URIs in your code to use HTTPS.
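
(If the piece of software calling the API backend happens to be .NET rather than nginx, the same idea - trust exactly one self-signed certificate and nothing else - can be sketched with a custom certificate validation callback; the thumbprint and URL below are placeholders:)

using System;
using System.Net.Http;

var handler = new HttpClientHandler();

// Accept only the backend's own self-signed certificate (pinned by thumbprint),
// instead of disabling certificate validation wholesale.
const string pinnedThumbprint = "0000000000000000000000000000000000000000"; // placeholder
handler.ServerCertificateCustomValidationCallback =
    (request, cert, chain, errors) =>
        cert != null &&
        string.Equals(cert.Thumbprint, pinnedThumbprint, StringComparison.OrdinalIgnoreCase);

using var apiClient = new HttpClient(handler);
var response = await apiClient.GetAsync("https://api-backend.internal/health"); // hypothetical URL
Console.WriteLine(response.StatusCode);
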

How to set up a forward proxy on Windows Server for outgoing HTTP and HTTPS requests?

I have a Windows Server 2012 VPS running a web app behind Cloudflare. The app needs to initiate outbound connections based on user actions (e.g. upload an image from a URL). The problem is that this "leaks" my server's IP address and increases the risk of DDoS attacks.
So I would like to prevent my server's IP from being discovered by setting up a forward proxy. So far my research has shown that this is no simple task and would involve setting up another VPS to act as a proxy.
Does this extra forward-proxy VPS have to be running Windows? Are there any paid services that could act as a forward proxy for my server (like Cloudflare's reverse proxy system)?
Also, it seems that the suggested IIS forward proxy plugin, Application Request Routing, does not work for HTTPS.
Is there a solution for both types of outgoing (HTTPS + HTTP) requests?
I'm really lost here, so any help or suggestions would be appreciated.
You are correct in needing a "Forward Proxy". A good analogy for this is the proxy settings your browser has for outbound requests. In your case, the web application behaves like a desktop browser and can be configured to make the resource request through a proxy.
Often you can control this for individual requests at the application layer. An example of doing so with C#: C# Connecting Through Proxy
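
As a rough, hedged illustration of what that looks like in C# (the proxy address and credentials are placeholders), an outbound request can be routed through a forward proxy at the application layer like this:

using System;
using System.Net;
using System.Net.Http;

// Route this application's outbound request through a forward proxy
// (address and credentials are placeholders).
var handler = new HttpClientHandler
{
    Proxy = new WebProxy("http://10.0.0.5:3128")
    {
        Credentials = new NetworkCredential("proxyUser", "proxyPassword")
    },
    UseProxy = true
};

using var client = new HttpClient(handler);
var imageBytes = await client.GetByteArrayAsync("https://example.com/photo.jpg");
Console.WriteLine($"Downloaded {imageBytes.Length} bytes via the proxy.");
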
As far as the actual proxy server: No, it does not need to run Windows or IIS. Yes, you can use a proxy service. The vast majority of proxy services are targeted towards consumers and are used for personal privacy or to get around network restrictions. As such, I have no direct recommendations.
Cloudflare actually has recommendations regarding this: https://blog.cloudflare.com/ddos-prevention-protecting-the-origin/.
Features like "upload from URL" that allow the user to upload a photo from a given URL should be configured so that the server doing the download is not the website origin server.
This may be a more comfortable risk mitigator, as it wouldn't depend on a third party proxy service. A request for upload could be handled as a web service call to a dedicated "file downloader" server. Keep in mind that if you have a queued process for another server to do the work, and that server is hosted in the same infrastructure, both might be impacted by a DDoS, depending on the type of DDoS.
Your question implies that you may be comfortable using a non-Windows server. Much software exists that can operate as a proxy (most web servers can), but most of it suffers from the same problem as ARR - lack of support for the HTTP "CONNECT" verb, which modern browsers use to start an HTTPS connection before issuing a "GET". Squid is very popular, open source, and supports connecting to just about anything, but it's not trivial to set up. Apache also has support for this in "mod_proxy_connect", but I have no experience with it and the online documentation isn't very robust. It's Apache, though, so it may be worth the extra investigation.
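
For what it's worth, the CONNECT handshake mentioned above is tiny: a proxy that supports HTTPS only has to accept a request like the one this hypothetical C# fragment sends, answer "200 Connection established", and then relay bytes blindly (the proxy address is a placeholder):

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

// What a browser (or HttpClient) sends to a forward proxy before an HTTPS request:
// a CONNECT request naming the target host, after which the proxy just relays bytes.
using var proxy = new TcpClient("10.0.0.5", 3128);   // placeholder proxy address
var stream = proxy.GetStream();

var connect = "CONNECT example.com:443 HTTP/1.1\r\nHost: example.com:443\r\n\r\n";
byte[] bytes = Encoding.ASCII.GetBytes(connect);
stream.Write(bytes, 0, bytes.Length);

// Expect "HTTP/1.1 200 Connection established" before starting the TLS handshake.
var reader = new StreamReader(stream, Encoding.ASCII);
Console.WriteLine(reader.ReadLine());
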

When should HSTS be enabled?

If I am running an HTTPS-only service, is there any reason not to enable HSTS? Is there a strategy to test HSTS without permanently enabling it, or a way "out of" HSTS?
I'd like to add to Mike's answer the warning that you are probably not running an HTTPS-only service. The reason is that if your server doesn't listen on port 80, then when someone types in only the domain and not the protocol (stackoverflow.com instead of https://stackoverflow.com), the browser will not automatically try port 443 (https); it will simply show a connection error. Thus, for most sites, an HTTPS-only service is out of the question.
The classical way to ensure an HTTPS connection, by forwarding every HTTP page to an HTTPS page via 301/303 redirects, is not a sufficient replacement for HSTS. In fact, HSTS was built for exactly that case. The reason is that many bookmarks and links will still point to http, and every time a user enters a URL without specifying the protocol - which is practically always - the browser will first try the HTTP connection. An active attacker can hijack that first connection and simply never forward the user to the HTTPS site.
To give you a more vivid image of such an attack, imagine a state that spoofs every DNS request to twitter and answers with its own IPs. When it receives an HTTPS request, it forwards it to twitter without any action (and without any chance of interception). But when it receives an HTTP request, it uses the sslstrip tool Mike mentioned to transparently forward the content of the connection to twitter's TLS port. Neither the user nor twitter notices that anything is off (except for the very alert users who check for TLS encryption), but the state has access to every login password.
HSTS can protect those users who have had a legitimate HTTPS connection with the server before and have already seen an HSTS header. The header instructs the browser to replace every http URL of the domain with an https URL itself (before any HTTP connection is established at all) and to deny any unencrypted connection to this domain. Thus, in the scenario above, almost all users will not end up on the compromised HTTP connection and are safe against the nationwide attack.
From a defense-in-depth perspective, you should still enable HTTP Strict Transport Security (HSTS). There are some issues that could crop up in the future that would benefit from HSTS, including:
Server misconfiguration, where HTTP is accidentally turned on. There's one site I visited recently that takes credit card details; it has an HTTPS site, but Google links to its HTTP site, so depending on how you got there you could be submitting your details in the clear.
A malicious attacker poisons or hijacks DNS records to redirect the client to their own HTTP-only server, perhaps in conjunction with an sslstrip attack.
You should also ensure a sufficiently long HSTS lifetime, e.g. a year or more.
You can disable support for HSTS by setting the max-age to 0. You'll need to leave this header in place for as long as you had originally set the value. E.g. if you had set it to 2 years and then change your mind, you'll need to leave max-age=0 in place for at least 2 years (and continue to offer an HTTPS service on that domain) so past clients won't have any issues connecting to it.
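
Mechanically, sending (or later retiring) the HSTS header is a one-liner in most stacks. A minimal ASP.NET Core sketch, assuming the usual middleware pipeline, might look like the following; note that ASP.NET Core also ships built-in AddHsts()/UseHsts() helpers that do the same thing:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var app = WebApplication.CreateBuilder(args).Build();

// Send HSTS only on HTTPS responses; a one-year max-age with includeSubDomains.
// To later back out of HSTS, serve "max-age=0" instead, for as long as the original max-age.
app.Use(async (context, next) =>
{
    if (context.Request.IsHttps)
    {
        context.Response.Headers["Strict-Transport-Security"] =
            "max-age=31536000; includeSubDomains";
    }
    await next();
});

app.MapGet("/", () => "hello over https");
app.Run();
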

node http proxy SSL transparent

In my setup, I have 2 layers of transparent proxies. When a client makes an SSL request, I wish to have the first proxy it meets simply forward the traffic to another one without attempting to do the handshake with the client.
The setup seems funny, but it is justified in my case - the 2nd proxy registers itself with the first one (through some other service) only occasionally. It tells the first: "I'm interested in some traffic that looks like ___". In most cases, the 1st proxy simply does the work.
Can an httpProxy (in node-proxy) proxy SSL requests? Must I use an httpsProxy (which will then do the handshake with the client)?
You could do all of this with the existing httpsProxy if you wanted to. Unless you want to use a non-Node proxy or proxy to a different server, I can't see what you would gain by having two.
Simply add the required logging/signing logic to the existing httpsProxy.
Typically, I use HTTPS on the proxy both to restrict the number of open ports and to remove the need to do HTTPS on all of the Node servers behind it. You can also add Basic Auth using the http-basic library too.
See my example code: https://github.com/TotallyInformation/node-proxy-https-example/blob/master/proxy.js
EDIT 2012-05-15: Hmm, after some thought, I wonder if you shouldn't be looking at something like stunnel to do what you want rather than Node?
(For reference, I've already made some of those points in my answer to your similar question on ServerFault.)
If you are after a MITM proxy (that is, a proxy that can look inside the SSL content by using its own certificates, which can work provided the clients are configured to trust them), it will hardly be fully transparent, since you will at least have to configure its clients to trust its certificates.
In addition, unless all your clients use the Server Name Indication (SNI) extension, the proxy itself will be unable to determine reliably which host to issue its certificate for (something that a normal HTTPS proxy would have been able to know by looking at the CONNECT request issued by the client).
If you're not after a MITM proxy, then you might as well let the initial connection through via your router. If you want to record that traffic, your router might be able to log the encrypted packets.
Having your router catch the SSL/TLS packets to send them transparently to a proxy that will merely end up relaying that traffic untouched anyway to the target server doesn't make much sense. (By nature, the transparent proxy will imply the client isn't configured to know about it, so it won't even send its CONNECT method with which you could have had the requested host and port. Here, you'll really have nothing more than what the router can do.)
EDIT: Once again, you simply won't be able to use an HTTP proxy to analyse the content of the connection transparently. Even when using a normal proxy, an HTTPS connection is relayed straight through to the target server. The SSL/TLS connection itself is established between the original client and the target server. The point of using SSL/TLS is to protect this connection, and to make the client notice if something is trying to look inside the connection.
Plain HTTP transparent proxy servers work because (a) the traffic can be seen (in particular, the request line and the HTTP Host header are visible so that the proxy can know which request to make itself) and (b) the traffic can be altered transparently so that the initial client doesn't notice that the request wasn't direct and works as if it was.
Neither of these conditions is true with HTTPS. HTTPS connections that go through an HTTP proxy are simply tunnelled, after an explicit request from the client, which has sent a CONNECT command and was configured to make use of such a proxy.
To do something close to what you're after, you'd need an SSL/TLS server that accepts the SSL/TLS connection and decrypts it (perhaps something like stunnel) in front of your HTTP proxy. However, this won't be transparent, because it won't be able to generate the right certificates.

SSL/HTTPS, is it that simple?

I'm just setting up an SSL area of a website, and was just wondering... is it as simple as adding https to the URL?
(this is presuming I have a valid certificate from the hosting company?)
Or is there something more to it?
Thanks.
You have to set up the server to allow SSL connections. That includes generating a certificate signing request (CSR). You send this CSR to the certificate authority (Verisign, etc.), and they send you a cert to install on the server. If you are behind a firewall, you also need to open port 443.
If you don't control the server i.e. shared hosting, there is probably a page in your control panel to do it all for you using a GUI.
When you replace http: in a URL with https: you are asking your web browser to do two things:
To attempt an encrypted (SSL) connection
To change which port to use on the remote server if none is specified in the URL
Most web browsers use port 80 for unencrypted traffic and port 443 for encrypted traffic by default. So, the first thing you need is a web server that is listening on port 443. If you are using a hosting company, this is probably already the case or becomes the case when you configure SSL.
You do not have to use port 443, but that is where browsers will look when users do not specify a port. With the right configuration, you could also force everybody who connects on port 80 to use SSL as well. That would mean ALL traffic to your site is encrypted.
To get the encryption up and running you generally need three things: a certificate, an encryption key, and a certificate signing request (CSR).
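
You would normally generate the key and CSR with openssl or your hosting control panel, but as a sketch of what those pieces actually are, here is a hypothetical .NET example that creates an RSA key pair and a PEM-encoded CSR (the subject name is a placeholder):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

// Generate a 2048-bit RSA key pair and a PEM-encoded certificate signing request (CSR).
using var rsa = RSA.Create(2048);
var request = new CertificateRequest(
    "CN=www.example.com, O=Example Ltd, C=GB",   // placeholder subject
    rsa,
    HashAlgorithmName.SHA256,
    RSASignaturePadding.Pkcs1);

byte[] csrDer = request.CreateSigningRequest();
string csrPem = new string(PemEncoding.Write("CERTIFICATE REQUEST", csrDer));

// The CSR goes to the certificate authority; the private key stays with "rsa"
// and must be kept on (and only on) the server.
File.WriteAllText("www_example_com.csr", csrPem);
Console.WriteLine(csrPem);
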
How you configure these is extremely dependent on how you are hosting the web server. Most hosting companies have 'control panels' that you log into for configuration. Common ones are Plesk and cPanel. If either of those rings a bell, you can post more information to get a better answer.
If you are managing the server yourself, the big question is whether you are hosting on Windows or Linux. If it is Windows, you are most likely going to configure IIS (Internet Information Services), while on Linux you are probably going to configure Apache.
If you are using IIS, this link might help:
http://www.petri.co.il/configure_ssl_on_your_website_with_iis.htm
If it is Apache, Byron gave a good link above:
http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html
You can use other web servers. For example, I use nginx:
http://rubypond.com/blog/setting-up-nginx-ssl-and-virtual-hosts
So, I guess the real step one is finding out more about your server. :-)
Once your web server has the SSL cert installed, it is as easy as using HTTPS on the URLs. There are some considerations to be aware of:
Port 443 must be open between the user and web server. (obvious)
Browser caching will be reduced to in-memory session cache and not stored on disk. Also, caching proxies in between will not be able to cache anything, since everything is encrypted. This means an increase in load times and bandwidth requirements of the web server.
When using HTTPS to receive sensitive data, be sure to disallow its submission over HTTP. E.g. if you have a page that accepts credit card numbers in a POST, the app should fail validation if the request was not made over HTTPS. This can be done in your code or in the web server configuration. It prevents a bug or malware from systematically sending sensitive data in the clear without the user knowing.
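
As a hedged illustration of that last point, a server-side check in classic ASP.NET (System.Web) might look like this (the class name is hypothetical); call it at the top of any handler that accepts sensitive POST data:

using System.Web;

// Guard for handlers that accept sensitive form posts (e.g. credit card numbers):
// refuse the request outright if it did not arrive over HTTPS.
public static class SecureFormGuard
{
    // Usage: SecureFormGuard.RequireHttps(Request); at the start of the POST handler.
    public static void RequireHttps(HttpRequest request)
    {
        if (!request.IsSecureConnection)
        {
            throw new HttpException(403, "This form may only be submitted over HTTPS.");
        }
    }
}
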
