node http proxy SSL transparent - node.js

In my setup, I have 2 layers of transparent proxies. When a client makes an SSL request, I wish to have the first proxy it meets simply forward the traffic to another one without attempting to do the handshake with the client.
The setup seems funny, but it is justified in my case: the 2nd proxy registers itself with the first one (through some other service) only occasionally. It tells the first: "I'm interested in some traffic that looks like___". In most cases, the 1st proxy simply does the work.
Can an httpProxy (in node-proxy) proxy SSL requests? Must I use an httpsProxy (which will then do the handshake with the client)?

You could do all of this with the existing httpsProxy if you wanted to. Unless you want to use a non-Node proxy or to proxy to a different server, I can't see what you would gain by having two.
Simply add the required logging/signing logic to the existing httpsProxy.
Typically, I use HTTPS on the proxy both to restrict the number of open ports and to remove the need to do HTTPS on every Node server behind it. You can also add Basic Auth using the http-basic library.
See my example code: https://github.com/TotallyInformation/node-proxy-https-example/blob/master/proxy.js
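In case that link goes stale, here is a minimal sketch of the same idea, assuming node-http-proxy's createProxyServer API; the cert paths and the backend target are placeholders:

const fs = require('fs');
const https = require('https');
const httpProxy = require('http-proxy');

// Placeholder backend: a plain-HTTP Node server sitting behind the proxy.
const proxy = httpProxy.createProxyServer({ target: 'http://localhost:8000' });

https.createServer({
    key: fs.readFileSync('/path/to/key.pem'),
    cert: fs.readFileSync('/path/to/cert.pem')
}, (req, res) => {
    // This handler is the hook for logging, signing or Basic Auth checks.
    console.log(req.method + ' ' + req.url);
    proxy.web(req, res);
}).listen(443);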
EDIT 2012-05-15: Hmm, after some thought, I wonder if you shouldn't be looking at something like stunnel to do what you want rather than Node?

(For reference, I've already made some of those points in my answer to your similar question on ServerFault.)
If you are after a MITM proxy (that is, a proxy that can look inside the SSL content by issuing its own certificates), it will hardly be fully transparent: at the very least, you will have to configure the clients to trust its certificates.
In addition, unless all your clients use the Server Name Indication (SNI) extension, the proxy itself will be unable to determine reliably which host to issue its certificate for (something that a normal HTTPS proxy would have been able to know by looking at the CONNECT request issued by the client).
If you're not after a MITM proxy, then you might as well let the initial connection through via your router. If you want to record that traffic, your router might be able to log the encrypted packets.
Having your router catch the SSL/TLS packets to send them transparently to a proxy that will merely end up relaying that traffic untouched anyway to the target server doesn't make much sense. (By nature, the transparent proxy will imply the client isn't configured to know about it, so it won't even send its CONNECT method with which you could have had the requested host and port. Here, you'll really have nothing more than what the router can do.)
EDIT: Once again, you simply won't be able to use an HTTP proxy to analyse the content of the connection transparently. Even when using a normal proxy, an HTTPS connection is relayed straight through to the target server. The SSL/TLS connection itself is established between the original client and the target server. The point of using SSL/TLS is to protect this connection, and to make the client notice if something is trying to look inside the connection.
Plain HTTP transparent proxy servers work because (a) the traffic can be seen (in particular, the request line and the HTTP Host header are visible, so the proxy knows which request to make itself) and (b) the traffic can be altered transparently, so that the initial client doesn't notice that the request wasn't direct and everything works as if it was.
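To illustrate point (a), here is a toy sketch of a plain-HTTP proxy that re-issues a request using nothing but what is visible in the request itself (Node core only; error handling omitted):

const http = require('http');

http.createServer((clientReq, clientRes) => {
    // Everything the proxy needs is readable: method, path, Host header.
    const [host, port] = (clientReq.headers.host || '').split(':');
    const upstream = http.request({
        host: host,
        port: port || 80,
        path: clientReq.url,
        method: clientReq.method,
        headers: clientReq.headers
    }, (upstreamRes) => {
        clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
        upstreamRes.pipe(clientRes);
    });
    clientReq.pipe(upstream);
}).listen(8080);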
Neither of these conditions holds for HTTPS. HTTPS connections that go through an HTTP proxy are simply tunnelled, after an explicit request from the client, which has sent a CONNECT command and was configured to make use of such a proxy.
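For contrast, here is roughly what an explicit (non-transparent) proxy does with HTTPS, sketched with Node's 'connect' event: on a CONNECT request it just opens a TCP connection and shovels encrypted bytes, never seeing inside.

const http = require('http');
const net = require('net');

const proxy = http.createServer();

proxy.on('connect', (req, clientSocket, head) => {
    // req.url is "host:port", straight from the client's CONNECT line.
    const [host, port] = req.url.split(':');
    const serverSocket = net.connect(Number(port) || 443, host, () => {
        clientSocket.write('HTTP/1.1 200 Connection Established\r\n\r\n');
        serverSocket.write(head);
        // From here on, both directions carry opaque SSL/TLS bytes.
        serverSocket.pipe(clientSocket);
        clientSocket.pipe(serverSocket);
    });
    serverSocket.on('error', () => clientSocket.end());
});

proxy.listen(3128);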
To do something close to what you're after, you'd need an SSL/TLS server that accepts the SSL/TLS connection and deciphers it (perhaps something like stunnel) in front of your HTTP proxy. However, this won't be transparent, because it won't be able to generate the right certificates.

Related

How to create a secure internal route in the CloudFoundry environment (Swisscom AppCloud)

I would like to create a secure internal route between two applications within the same space/organization. It should never be possible to reach the Node.js application from the outside. My Java application connects via HTTP to the Node application (running on express).
I have now tried to set up the desired configuration by creating a route called example-route.apps.internal and assigning it to the Node application. As a next step, I've opened the port (I've tried 443, 80, 8080) in the network configuration of the Java application (with the destination being the Node app). I restaged both applications.
Then, I opened a Java connection to the link http://example-route.apps.internal/test123. I've also tried to use https. The result was the same: Java refused to connect to this URL.
Now, the following questions:
How can I properly set up this communication? Should I resolve this internal DNS somehow? Which port is the correct one if I just use the port of the env variable? How should I read this port from the other application?
How secure is the communication, if HTTP is used instead of HTTPS? (I assume HTTPS is not possible internally). Is it as safe as an HTTPS connection from the outside? Which devices are between, how far out does the connection go?
Thank you!
I think you're almost there.
Then, I opened a Java connection to the link http://example-route.apps.internal/test123. I've also tried to use https. The result was the same: Java refused to connect to this URL.
You should use http://example-route.apps.internal:8080/test123. Your app is set to listen on $PORT, which is always 8080 in current versions of CF.
Normally you don't need to worry about this because your traffic goes in through Gorouter, which translates for you (mapping external port 80 to internal port 8080). With internal routes, traffic is direct, so there is no translation. That's why you need to use port 8080 in your URL.
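On the Node side, then, the only thing that matters is binding to the CF-assigned port. A minimal sketch, assuming Express as in the question (the route path is taken from it):

const express = require('express');
const app = express();

app.get('/test123', (req, res) => res.send('hello over the internal route'));

// Cloud Foundry sets PORT (currently always 8080); fall back for local runs.
app.listen(process.env.PORT || 8080);

The Java application would then call http://example-route.apps.internal:8080/test123 directly.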
Alternatively, you could use a service discovery mechanism like Eureka or Consul, but it's not a requirement. In this case, the service would know it's listening on 8080 and register that in the registry.
As far as HTTPS goes, that's tricky. Your app is only listening for plain HTTP (on 8080). You would have to change it to listen for HTTPS instead, but then you need certs and a different server configuration. It's technically possible, but it's a whole can of worms.
In some newer versions of CF, Envoy is present and accepts HTTPS traffic into the container, which can make HTTPS easier, but it's still not a slam dunk (at the time of writing, at least). I expect this will get better in the future.
Should I resolve this internal DNS somehow?
Internal DNS helps with locating your other apps, not the port. Otherwise you'd need to manage IP addresses, which change often, and that would require something like Eureka or Consul.
Which port is the correct one if I just use the port of the env variable?
See above.
How should I read this port from the other application?
It's always 8080 at the moment, and has been for multiple years. It's unlikely to change, so you could probably hard code or set it in a config file safely.
How secure is the communication, if HTTP is used instead of HTTPS? (I assume HTTPS is not possible internally).
Is it as safe as an HTTPS connection from the outside? Which devices are between, how far out does the connection go?
The traffic would not be accessible externally, as in some cases it wouldn't leave the Cell, or at worst it would travel between two Cells; but it would be visible internally, since it's not encrypted. That means you need to place more trust in your CF provider, who has access to internal traffic.
If it were HTTPS, only someone with the key would be able to decrypt it. You would still have to trust your provider, though, as they could likely get the key and use it to decrypt the traffic. It would just be more work for them than if the traffic were unencrypted.
Hope that helps!

Is HTTPS behind reverse proxy needed?

I have an API server running behind an nginx reverse proxy. It is important to have all requests to my API server be secured via TLS since it handles sensitive data.
I've set up nginx to work with TLS (LetsEncrypt), so that seems to be okay. However, requests from nginx to my API server are still insecure http requests (this is all happening across Docker containers, by the way).
Is it a best practice to also setup https between the reverse proxy and the API server? If so, how would I go about doing that without over-engineering it?
It all comes down to how secure or paranoid you'd like your implementation to be. It may also depend on the type of data you're playing with. For instance: I'd definitely do this for credit card numbers or other sensitive information.
As the comments have already stated, you would typically terminate SSL connections at the front-facing web server, assuming the API backend is also inside your LAN, which you trust and control. If you want to go that extra mile, you could also set up SSL on the API backend. The details of how to do that depend on the software you're using on the backend.
If you do decide to implement SSL on the API backend, the setup would be similar to what you did to setup Nginx with SSL on the frontend, with the main difference being you don't need to use a public certificate on the backend. It can be self-signed, since no one else besides your web server will be talking to it. Then it's just a matter of fixing all the URIs in your code to use HTTPS.
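For instance, if the API backend happened to be a Node service (an assumption; the question doesn't say what it runs), terminating TLS there with a self-signed certificate could look like this:

// Generate a self-signed pair first, e.g.:
//   openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout key.pem -out cert.pem
const fs = require('fs');
const https = require('https');

https.createServer({
    key: fs.readFileSync('key.pem'),
    cert: fs.readFileSync('cert.pem')
}, (req, res) => {
    res.end('response from the TLS-terminating API backend');
}).listen(8443);

nginx would then proxy_pass to https://your-api:8443 (a placeholder name) instead of http://; since nginx does not verify upstream certificates by default (proxy_ssl_verify is off), the self-signed certificate works without extra trust configuration.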

Create a Reverse Proxy in NodeJS that can handle multiple secure domains

I'm trying to create a reverse proxy in NodeJS. But I keep running into the issue that I can only serve one cert/key pair on the same port (443), even though I want to serve multiple domains. I have done the research and keep running into the same roadblock:
A Node script that can serve multiple secure domains from a non-secure local source (accessed locally over HTTP, served publicly over HTTPS)
Let me dynamically serve SSL certificates via the domain header
Example:
https://www.someplace.com:443 will pull from http://thisipaddress:8000 and use the cert and key files for www.someplace.com
https://www.anotherplace.com:443 will pull from http://thisipaddress:8080 and use the cert and key files for www.anotherplace.com
etc.
I have looked at using NodeJS's https.createServer(options, [requestListener])
But this method supports just one cert/key pair per port
I can't find a way to dynamically switch certs based on domain header
I can't ask my people to use custom https ports
And I'll run into browser SSL certificate errors if I serve the same SSL certificate for multiple domain names, even if the connection is otherwise secure
I looked at node-http-proxy but as far as I can see it has the same limitations
I looked into Apache mod-proxy and nginx but I would rather have something I have more direct control of
If anyone can show me an example of serving multiple secure domains each with their own certificate from the same port number (443) using NodeJS and either https.createServer or node-http-proxy I would be indebted to you.
Let me dynamically serve SSL certificates via the domain header
There is no "domain header", so I guess you mean the Host header in the HTTP request.
But this will not work, because:
HTTPS is HTTP encapsulated inside SSL,
therefore you first have to do the SSL layer (the SSL handshake, which requires the certificates), and only then comes the HTTP layer,
but the Host header is inside the HTTP layer :(
In former times you would have needed a separate IP address for each SSL certificate. Current browsers support SNI (Server Name Indication), which sends the expected target host already inside the SSL layer. It looks like node.js supports this; look for SNICallback.
But beware that there are still plenty of libraries out there which either don't support SNI on the client side at all, or where one needs to enable it explicitly. As long as you only want to support browsers, though, this should be ok.
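A sketch of how SNICallback fits together with a proxy on a single port, assuming node-http-proxy for the relaying; the hostnames and backend ports are taken from the question, the cert paths are placeholders:

const fs = require('fs');
const tls = require('tls');
const https = require('https');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({});

// One cert/key pair and one local backend per served domain.
const sites = {
    'www.someplace.com':    { target: 'http://127.0.0.1:8000', dir: '/etc/ssl/someplace' },
    'www.anotherplace.com': { target: 'http://127.0.0.1:8080', dir: '/etc/ssl/anotherplace' }
};

const contextFor = (dir) => tls.createSecureContext({
    key: fs.readFileSync(dir + '/key.pem'),
    cert: fs.readFileSync(dir + '/cert.pem')
});

https.createServer({
    // Fallback pair for clients that send no SNI at all.
    key: fs.readFileSync('/etc/ssl/default/key.pem'),
    cert: fs.readFileSync('/etc/ssl/default/cert.pem'),
    // Called during the handshake with the hostname the client asked for.
    SNICallback: (servername, cb) => {
        const site = sites[servername];
        cb(null, site ? contextFor(site.dir) : undefined);
    }
}, (req, res) => {
    const site = sites[(req.headers.host || '').split(':')[0]];
    if (!site) { res.writeHead(404); return res.end(); }
    proxy.web(req, res, { target: site.target });
}).listen(443);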
Redbird actually does this very gracefully, and it's not too hard to configure either.
https://github.com/OptimalBits/redbird
Here is the solution you might be looking for; I found it very useful for my implementation, though you will need to do a lot of customization to handle multiple domains.
node-http-proxy:
https://github.com/nodejitsu/node-http-proxy
Bouncy is a good library for doing this and has an example of what you need.
As Steffen Ullrich says, it will depend on browser support for SNI.
How about creating the SSL servers on different ports and using node-http-proxy as a server on 443 to relay the request based on the domain?
You stated you don't want to use nginx for this, and I don't understand why. You can just set up multiple server blocks in your nginx config, have each of them listen for a different hostname, all on port 443, and give each of them a proxy_pass to your Node.js server. To my understanding, that serves all of your requirements and is state of the art.

How to broker secure connection across firewalls using untrusted host?

I have an interesting network security challenge that I can't figure out the best way to attack.
I need to provide a way to allow two computers (A and B) that are behind firewalls to make a secure connection to each other using only a common "broker" untrusted server on the internet (somewhere like RackSpace). (The server is considered untrusted because the customers behind the firewalls won't trust it, since it is on an open server.) I cannot adjust the firewall settings to allow the networks to connect directly to each other, because the connections are not known ahead of time.
This is very similar to a NAT to NAT connection problem like that handled by remote desktop help tools (crossloop, copilot, etc).
What I would really like to find is a way to open an SSL connection between the two hosts and have the public server broker the connection. Preferably when host A tries to connect to host B, it should have to provide a token that the broker can check with host B before establishing the connection.
To add another wrinkle to this, the connection mechanism needs to support two types of communication. First, HTTP request/response to a REST web service and second persistent socket connection(s) to allow for real-time message passing.
I have looked at the techniques I know about like OpenSSL using certificates, OAuth, etc, but I don't see anything that quite does what I need.
Has anyone else handled something like this before? Any pointers?
You can solve your problem with plain SSL.
Just have the untrusted server forward connections between the client hosts as opaque TCP connections. The clients then establish an end-to-end SSL connection over that forwarded TCP tunnel - with OpenSSL, one client calls SSL_accept() and the other calls SSL_connect().
Use certificates, probably including client certificates, to verify that the other end of the SSL connection is who you expect it to be.
(This is conceptually similar to the way that HTTPS connections work over web proxies - the browser just says "connect me to this destination", and establishes an SSL connection with the desired endpoint. The proxy just forwards encrypted SSL data backwards and forwards, and since it doesn't have the private key for the right certificate, it can't impersonate the desired endpoint).
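A toy sketch of the broker half in Node: there is deliberately no TLS code on the broker at all, and the pairing logic (the token check described in the question) is reduced to a first-come stub:

const net = require('net');

let pending = null; // in real use, match peers by the token host A presents

net.createServer((socket) => {
    if (!pending) {
        pending = socket;
        socket.on('close', () => { if (pending === socket) pending = null; });
    } else {
        const peer = pending;
        pending = null;
        // Relay opaque bytes both ways; the SSL handshake and all data
        // run end-to-end between the two clients, not with the broker.
        socket.pipe(peer);
        peer.pipe(socket);
    }
}).listen(9000);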
In general, SSL is a packet-based protocol (for the purpose of solving your task). If you can have the host forward the packets back and forth, you can easily have an SSL-secured communication channel. One thing you need is something like our SSL/TLS components, which allow any transport and not just sockets, i.e. the component tells your code "send this packet to the other side" or asks "do you have anything for me to receive?", and your code communicates with your intermediate server.

How to enable windows authentication through a reverse proxy?

Sorry if this is a duplicate; as I am neither a security nor a network expert, I may have missed the correct lingo to find information.
I am working on an application to intercept and modify HTTP requests and responses between a web browser and a web server (see how to intercept and modify HTTP responses on server side? for the background). I decided to implement a reverse proxy in ASP.Net which forwards client requests to the back-end HTTP server, translates links and headers from the response to the properly "proxified" URL, and sends the response to the client after having extracted relevant information from the response.
It is working as expected, except for the authentication part: the web server uses NTLM authentication by default, and just forwarding requests and responses through the reverse proxy does not allow the user to be authenticated on the remote application. Both the reverse proxy and the web application are on the same physical machine and are executed in the same IIS server (Windows server 2008/IIS 7 if that matters). I tried both enabling and disabling authentication on the reverse proxy app with no luck.
I have looked for information about it, and it seems to be related to the "double-hop problem", which I do not understand. My question is: is there a way to authenticate the user on the remote application through the reverse proxy using NTLM? If there is none, are there alternative authentication methods I could use?
Even if you don't have a solution to my problem, just pointing me to relevant information about it to help me get out of the confusion would be great!
I found what the problem was (and it is NTLM): in order to have the browser ask the user for credentials, the response must have a 401 status code. My reverse proxy was forwarding the response to the browser, but IIS was adding a standard HTML body explaining that the requested page cannot be accessed, thus preventing the browser from asking for credentials.
The problem was solved by removing the response content when the status code is a 401.
With all due respect to the person who answered this some years ago, I must say this is plainly false. The problem was indeed solved after removing the response content when the status code is a 401, but it had nothing to do with the initial problem.
The truth is that Windows authentication was made to authenticate people over local Windows networks, where no proxy server is present or even needed.
The main problem with NTLM authentication is that this protocol does not authenticate the HTTP session but the underlying TCP connection, and as far as I know there is no way to access it from ASP code.
Every proxy server I tried broke NTLM authentication.
Windows authentication is comfortable for users, because they never have to enter their password for whatever application may lie in your intranet; frightening for a security guy, because there is an auto-login without even a prompt if the site domain is trusted by IE; and shocking for a network administrator, because it melts the application, transport and network layers into some "Windows ball of mud" instead of just plain HTTP traffic.
"NTLM won't work if the TCP packets are not forwarded exactly as the reverse proxy received them. And that's why many reverse proxies (like nginx) don't work with NTLM authentication. They forward HTTP requests correctly but not the TCP packets."
Nginx does have the functionality to work with NTLM authentication. Keepalive needs to be enabled, which is only available through the http_upstream_module. Additionally, in the location block you need to specify that you will be using HTTP/1.1 and that the "Connection" header field should be cleared for each proxied request. The nginx config should look something like:
upstream http_backend {
    server 1.1.1.1:80;
    keepalive 16;
}

server {
    ...
    location / {
        proxy_pass http://http_backend/;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
I scratched my head for quite some time over this issue, but the above works for me. Note that if you need to proxy HTTPS traffic, a separate upstream block is needed. To clarify a bit more, "keepalive 16;" specifies the maximum number of idle keepalive connections to the upstream that each worker process keeps open. Adjust the number according to the expected number of simultaneous visitors on the site.
Although this is an old post, I just want to report that it works quite well for me with an Apache 2.2 reverse proxy and the keepalive=on option. Obviously, this keeps the connection between the proxy and the SharePoint host open and "pinned" to the client<>proxy connection. I don't exactly know the mechanisms behind this, but it works fairly well.
But: sometimes my users encounter the issue that they're logged in as another user. So there seems to be some mixing-up of sessions. I will have to give this some further testing.
Solution for everything (in case you have a valid, signed SSL certificate): Switch IIS to Basic Auth. This works absolutely fine, and even Windows (i.e. Office with SharePoint connection, all WebClient-based processes etc.) won't complain at all.
But they will complain when you're just using plain HTTP without SSL/TLS, and also with self-signed certificates.
I confirm that it works with "keep-alive=on" on apache2.2
I examined frames with Wireshark, and I know why it doesn't work. NTLM won't work if the TCP packets are not forwarded exactly as the reverse proxy received them. That's why many reverse proxies, like nginx, don't work with NTLM authentication. Reverse proxies forward HTTP requests correctly but not the TCP packets.
NTLM requires a TCP reverse proxy.
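In Node terms, the distinction looks like this: a connection-level relay keeps one dedicated upstream TCP connection per client connection, so NTLM's connection-bound handshake survives. A sketch, with placeholder host and ports:

const net = require('net');

net.createServer((clientSocket) => {
    // One upstream connection for the lifetime of this client connection;
    // NTLM authenticates the TCP connection itself, not individual requests.
    const upstream = net.connect(80, 'backend.internal.example');
    clientSocket.pipe(upstream);
    upstream.pipe(clientSocket);
    clientSocket.on('error', () => upstream.destroy());
    upstream.on('error', () => clientSocket.destroy());
}).listen(8080);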
