I have a Node application sitting behind a corporate proxy -- McAfee Web Gateway in direct proxy mode (not transparent). I have access to a username/password and domain for authentication, but I'm not sure how to use them here.
For other proxies that use NTLM, cntlm generally works well, but it does not in this case. I am also successful when behind an HTTP proxy that just uses "basic" authentication. However, I have never run into this kind of beast before and was wondering where to start.
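For reference, the only variant I know how to express in code is the plain Basic-auth case; a rough sketch with Node's core http module, where the proxy host, port, and credentials are placeholders:

// Sketch: plain-HTTP request routed through an authenticating proxy.
// Assumes the gateway accepts Basic credentials; host/port/user are placeholders.
const http = require('http');

const proxyHost = 'proxy.corp.example.com'; // placeholder
const proxyPort = 9090;                     // placeholder
const auth = Buffer.from('DOMAIN\\username:password').toString('base64');

const req = http.request({
  host: proxyHost,
  port: proxyPort,
  method: 'GET',
  path: 'http://example.com/',              // absolute URI, as proxies expect
  headers: {
    Host: 'example.com',
    'Proxy-Authorization': `Basic ${auth}`,
  },
}, (res) => {
  console.log('status:', res.statusCode);
  res.pipe(process.stdout);
});

req.on('error', (err) => console.error('proxy request failed:', err.message));
req.end();

I don't know what the equivalent looks like when the gateway insists on NTLM challenge/response.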
Thanks!
I'm writing an open-source Node.js application that implements an HTTP server for API calls. Supporting HTTPS in Node.js isn't hard, but it adds a little complexity and cases you need to think about:
Paths to the key and cert should be configurable => More settings / documentation
The app should handle errors when the key or cert is missing or a path is wrong (sketched below) => More code and tests
The Docker image must pass an external key and cert to the application running in the container => More code and documentation
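For illustration, a minimal sketch of that direct-HTTPS variant; the TLS_KEY_PATH / TLS_CERT_PATH environment variable names are just examples, not an established convention:

// Minimal sketch of the "HTTPS directly in the app" variant.
const fs = require('fs');
const http = require('http');
const https = require('https');

const keyPath = process.env.TLS_KEY_PATH;   // more settings to document
const certPath = process.env.TLS_CERT_PATH;

const handler = (req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ ok: true }));
};

if (keyPath && certPath) {
  let options;
  try {
    options = { key: fs.readFileSync(keyPath), cert: fs.readFileSync(certPath) };
  } catch (err) {
    // more code and tests: missing files, wrong paths, bad permissions
    console.error('Could not read TLS key/cert:', err.message);
    process.exit(1);
  }
  https.createServer(options, handler).listen(8443);
} else {
  // plain HTTP, e.g. when running behind a TLS-terminating reverse proxy
  http.createServer(handler).listen(8080);
}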
It feels a bit like reinventing the wheel. I'm personally using a reverse proxy that handles the HTTPS part of all my sites; the servers behind it are all just HTTP.
Is it OK to require a reverse proxy, or is it better to support HTTPS directly, since most users aren't using a reverse proxy? What's the common server setup and recommended way when writing an open-source Node.js application? How do I make it as easy as possible for most users to use my app?
A reverse proxy is preferable in most scenarios, since it gives you security, load balancing, cache control, and similar features. You can also use it for logging, so you maintain a security layer over all server activity and data behind the proxy. As you mentioned, it means a bit more configuration to write, but the system ends up more capable. I recommend using a reverse proxy to make things more robust and secure.
I have an API server running behind an nginx reverse proxy. It is important to have all requests to my API server be secured via TLS since it handles sensitive data.
I've set up nginx to work with TLS (LetsEncrypt), so that seems to be okay. However, requests from nginx to my API server are still insecure HTTP requests (this is all happening across Docker containers, by the way).
Is it a best practice to also setup https between the reverse proxy and the API server? If so, how would I go about doing that without over-engineering it?
It all comes down to how secure or paranoid you'd like your implementation to be. It may also depend on the type of data you're playing with. For instance: I'd definitely do this for credit card numbers or other sensitive information.
As the comments have already stated, you would typically terminate SSL connections at the front-facing web server, assuming the API backend is also inside your LAN, which you trust and control. If you want to go that extra mile, you could also set up SSL on the API backend. Details of how to do that depend on the software you're using on your backend.
If you do decide to implement SSL on the API backend, the setup would be similar to what you did to setup Nginx with SSL on the frontend, with the main difference being you don't need to use a public certificate on the backend. It can be self-signed, since no one else besides your web server will be talking to it. Then it's just a matter of fixing all the URIs in your code to use HTTPS.
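If the backend happens to be a Node.js service, a minimal sketch of that setup might look like the following; the file paths are placeholders, and the nginx directives mentioned in the comments are only one way to wire it up:

// Sketch of an API backend that itself speaks HTTPS with a self-signed cert,
// so the nginx -> backend hop is encrypted too. Paths are placeholders; the
// pair could be generated with e.g. `openssl req -x509 -newkey rsa:2048 ...`.
const fs = require('fs');
const https = require('https');

const server = https.createServer({
  key: fs.readFileSync('/etc/api/selfsigned.key'),   // placeholder path
  cert: fs.readFileSync('/etc/api/selfsigned.crt'),  // placeholder path
}, (req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ secure: true }));
});

server.listen(3000, () => {
  // nginx would then use `proxy_pass https://api-container:3000;`
  // (with proxy_ssl_verify off, or a pinned cert, depending on how strict you want to be)
  console.log('API listening on https://0.0.0.0:3000');
});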
So I have 0% experience with web programming, and the project I am working on doesn't have anything to do with it, but I hit a small roadblock and need to solve a small port problem.
We want to build a cluster of GPU machines on Azure for some deep learning calculations, install some applications on them, and let our scientists use the apps' GUIs to launch and monitor their jobs. The problem is that app A, for example, runs on port 5050, but our firewall doesn't let us communicate with unusual ports. That part is easy to fix from the Azure side, but our IT team won't let us modify our security policies.
That's why I need to find a hacky and fast solution to overcome this, I don't want to spend my whole internship doing something complicated for it, just something that does the job.
What I thought about was to have some kind of server running on the machines (let's say machine A has public IP address ipbA and private IP ipvA) so that when we type "http://ipbA/app1" in our browser, the server on A fetches the page "http://ipvA:5050" (or "http://ipbA/app2" -> "http://ipvA:5051") and displays it. But does this work if the page needs to be interactive? We would like to launch jobs from it.
I have no clue how to do this. If you could please tell me what I should look into, google, and read about, or if there is an easier way to handle it (maybe some VPN stuff, which I'd rather avoid since we're moving towards a hybrid cloud architecture and I don't think we would want to VPN into all the different cloud platforms), that would be awesome :)
Two common solutions for your problem:
Set up a reverse proxy on a standard port (such as 80, or 443 if you want some SSL certificate headaches).
All your domain names will point to the reverse proxy (single IP) but the reverse proxy will forward the traffic transparently to the real servers on their special ports.
https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html
https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension
For the technical details, in short: you keep in file(s) the configuration for each domain or subdomain and where they should be forwarded.
Chain of events:
User types http://interface-1.company.com
Browser resolves interface-1.company.com (DNS: IP Reverse Proxy)
Browser connects to the reverse proxy (port 80)
Reverse proxy reads its configuration, which says where to forward
Proxy forwards the request to realserver.company.com:5050
Real server sends the response back to the reverse proxy
Reverse proxy relays the response to the browser
I think that is what you are trying to achieve.
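If you want to see the idea in code before settling on Apache or nginx, here's a hand-rolled sketch in Node.js; the app names and ports are taken from your example, and in practice the Apache configs linked above are the more robust route:

// Sketch of option 1: a reverse proxy on port 80 that maps path prefixes
// to the apps' "unusual" ports. App names/ports are the ones from the question.
const http = require('http');

const routes = { '/app1': 5050, '/app2': 5051 }; // prefix -> backend port

http.createServer((clientReq, clientRes) => {
  const prefix = Object.keys(routes).find((p) => clientReq.url.startsWith(p));
  if (!prefix) {
    clientRes.writeHead(404);
    return clientRes.end('unknown app');
  }
  const backendReq = http.request({
    host: '127.0.0.1',                        // the app's private address
    port: routes[prefix],
    method: clientReq.method,
    path: clientReq.url.slice(prefix.length) || '/',
    headers: clientReq.headers,
  }, (backendRes) => {
    clientRes.writeHead(backendRes.statusCode, backendRes.headers);
    backendRes.pipe(clientRes);
  });
  backendReq.on('error', () => { clientRes.writeHead(502); clientRes.end(); });
  clientReq.pipe(backendReq);
}).listen(80);

Interactive pages generally keep working as long as the apps use relative URLs; if they generate absolute links or ignore the path prefix, mapping each app to its own hostname (e.g. interface-1.company.com, interface-2.company.com) on the proxy avoids the rewriting problem.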
Set up a VPN service that is reachable through your company's proxy, and provide VPN clients to the end users. OpenVPN clients can use an HTTPS proxy connection (your company proxy) to establish the connection to a remote VPN server.
Once connected to the VPN, everyone uses the VPN's IP address and firewall policy, and is therefore no longer restricted by the company's firewall policy. Any kind of traffic can also be forwarded. This is harder to set up, and your security team might not accept it. However, it's a fully functional solution, and it can also offer additional security features if implemented properly.
I do not recommend going this way because of all the paperwork it would involve.
I have a Windows Server 2012 VPS running a web app behind Cloudflare. The app needs to initiate outbound connections based on user actions (e.g. upload an image from a URL). The problem is that this 'leaks' my server's IP address and increases the risk of DDoS attacks.
So I would like to prevent my server's IP from being discovered by setting up a forward proxy. So far my research has shown that this is no simple task and would involve setting up another VPS to act as a proxy.
Does this extra forward proxy VPS have to be running Windows? Are there any paid services that could act as a forward proxy for my server (like Cloudflare's reverse proxy system)?
Also, it seems that the suggested IIS forward proxy plugin, Application Request Routing, does not work for HTTPS.
Is there a solution for both types of outgoing (HTTPS + HTTP) requests?
I'm really lost here, so any help or suggestions would be appreciated.
You are correct in needing a "Forward Proxy". A good analogy for this is the proxy settings your browser has for outbound requests. In your case, the web application behaves like a desktop browser and can be configured to make the resource request through a proxy.
Often you can control this for individual requests at the application layer. An example of doing so with C#: C# Connecting Through Proxy
As far as the actual proxy server: No, it does not need to run Windows or IIS. Yes, you can use a proxy service. The vast majority of proxy services are targeted towards consumers and are used for personal privacy or to get around network restrictions. As such, I have no direct recommendations.
Cloudflare actually has recommendations regarding this: https://blog.cloudflare.com/ddos-prevention-protecting-the-origin/.
Features like "upload from URL" that allow the user to upload a photo from a given URL should be configured so that the server doing the download is not the website origin server.
This may be a more comfortable risk mitigator, as it wouldn't depend on a third party proxy service. A request for upload could be handled as a web service call to a dedicated "file downloader" server. Keep in mind that if you have a queued process for another server to do the work, and that server is hosted in the same infrastructure, both might be impacted by a DDoS, depending on the type of DDoS.
Your question implies that you may be comfortable using a non-Windows server. Plenty of software can operate as a proxy (most web servers can), but much of it suffers from the same problem as ARR: lack of support for the HTTP "CONNECT" verb, which is used by modern browsers to start an HTTPS connection before issuing a "GET". Squid is very popular, open source, and supports connecting to just about anything, though it's not trivial to set up. Apache also has support for this in "mod_proxy_connect", but I have no experience with it and the online documentation isn't very robust. It's Apache, though, so it may be worth the extra investigation.
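For a sense of what that CONNECT handling involves, here is a sketch of the client side of the tunnel using only Node's core modules; the proxy host and port are placeholders (e.g. a Squid instance):

// Ask the forward proxy to open a raw tunnel to example.com:443, then run TLS
// over that socket. This is the CONNECT step that ARR and many web servers lack.
const http = require('http');
const tls = require('tls');

const connectReq = http.request({
  host: 'squid.internal',   // placeholder forward proxy
  port: 3128,
  method: 'CONNECT',
  path: 'example.com:443',  // the tunnel destination
});
connectReq.end();

connectReq.on('connect', (res, socket) => {
  // the proxy replies "200 Connection Established"; from here on it only
  // shuffles bytes, so we can negotiate TLS directly with example.com
  const secure = tls.connect({ socket, servername: 'example.com' }, () => {
    secure.write('GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n');
  });
  secure.on('data', (chunk) => process.stdout.write(chunk));
});
connectReq.on('error', (err) => console.error('CONNECT failed:', err.message));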
Sorry if this is a duplicate; as I am neither a security nor a network expert, I may have missed the correct lingo to find information.
I am working on an application to intercept and modify HTTP requests and responses between a web browser and a web server (see how to intercept and modify HTTP responses on server side? for the background). I decided to implement a reverse proxy in ASP.Net which forwards client requests to the back-end HTTP server, translates links and headers from the response to the properly "proxified" URL, and sends the response to the client after having extracted relevant information from the response.
It is working as expected, except for the authentication part: the web server uses NTLM authentication by default, and just forwarding requests and responses through the reverse proxy does not allow the user to be authenticated on the remote application. Both the reverse proxy and the web application are on the same physical machine and are executed in the same IIS server (Windows server 2008/IIS 7 if that matters). I tried both enabling and disabling authentication on the reverse proxy app with no luck.
I have looked for information about it, and it seems to be related to the "double-hop problem", which I do not understand. My question is: is there a way to authenticate the user on the remote application through the reverse proxy using NTLM? If there is none, are there alternative authentication methods I could use?
Even if you don't have a solution to my problem, just pointing me to relevant information about it to help me get out of the confusion would be great!
I found what the problem was (and it is NTLM): in order to have the browser ask the user for credentials, the response must have a 401 status code. My reverse proxy was forwarding the response to the browser, but IIS was adding standard HTML content explaining that the requested page cannot be accessed, which prevented the browser from prompting for credentials.
The problem was solved by removing the response content when the status code is a 401.
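In other words, sketched here as a generic forwarding handler in Node.js rather than the actual ASP.Net code, so the names are purely illustrative:

// Pass the 401 status and WWW-Authenticate challenge through, but drop the body
// so no error page replaces the browser's own credential prompt.
function relayResponse(backendRes, clientRes) {
  clientRes.writeHead(backendRes.statusCode, backendRes.headers);
  if (backendRes.statusCode === 401) {
    backendRes.resume();   // discard the upstream body
    clientRes.end();
    return;
  }
  backendRes.pipe(clientRes);
}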
With all due respect I have for the one who answered this some years ago, I must admit this is plainly false. The problem was indeed solved AFTER removing the response content when the status code is a 401, but that had nothing to do with the initial problem.
The truth is that Windows authentication was made to authenticate people over local Windows networks, where no proxy server is present or even needed.
The main problem with NTLM authentication is that this protocol does not authenticate the HTTP session but the underlying TCP connection, and as far as I know there is no way to access it from ASP code.
Every proxy server I tried broke NTLM authentication.
Windows authentication is comfortable for a user, because they never need to enter a password for whatever application may lie in the intranet; frightening for a security person, because there is an auto-login without even a prompt if the site's domain is trusted by IE; and shocking for a network administrator, because it melts the application, transport, and network layers into some "Windows ball of mud" instead of just plain HTTP traffic.
NTLM won't work if the TCP packets are not forwarded exactly as the reverse proxy received them. And that's why many reverse proxies (like nginx) don't work with NTLM authentication: they forward HTTP requests correctly, but not the TCP packets.
Nginx can be made to work with NTLM authentication. Keepalive needs to be enabled, which is only available through the upstream module (http_upstream_module). Additionally, in the location block you need to specify HTTP/1.1 and clear the "Connection" header field for each proxied request. The nginx config should look something like:
upstream http_backend {
    server 1.1.1.1:80;
    keepalive 16;
}

server {
    ...
    location / {
        proxy_pass http://http_backend/;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
I scratched my head for quite some time with this issue, but the above works for me. Note that if you need to proxy HTTPS traffic, a separate upstream block is needed. To clarify a bit more, "keepalive 16;" specifies the number of idle keepalive connections to the upstream that the proxy is allowed to keep open. Adjust the number based on the expected number of simultaneous visitors to the site.
Although this is an old post, I just want to report that it works for me quite well with an Apache 2.2 reverse proxy and the keepalive=on option. Obviously, this keeps the connection between the proxy and the SharePoint host open and "pinned" to the client<>proxy connection. I don't exactly know the mechanics behind it, but it works fairly well.
But: sometimes my users encounter the issue that they're logged in as another user, so there seems to be some mixing-up of sessions. I will have to give this some further testing.
Solution for everything (in case you have a valid, signed SSL certificate): Switch IIS to Basic Auth. This works absolutely fine, and even Windows (i.e. Office with SharePoint connection, all WebClient-based processes etc.) won't complain at all.
But they will complain when you're just using HTTP without SSL/TLS, and also with self-signed certificates.
I confirm that it works with "keep-alive=on" on Apache 2.2.
I examined frames with Wireshark, and I know why it doesn't work. NTLM won't work if the TCP packets are not forwarded exactly as the reverse proxy received them. That's why many reverse proxies, like nginx, don't work with NTLM authentication. Reverse proxies forward HTTP requests correctly but not the TCP packets.
NTLM requires a TCP reverse proxy.
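To make that concrete, here is roughly what a TCP-level pass-through looks like, sketched with Node's net module; the backend address is a placeholder:

// Every client connection gets its own dedicated upstream connection, and bytes
// are piped through untouched, so the NTLM handshake stays bound to one TCP
// connection end to end.
const net = require('net');

net.createServer((client) => {
  const upstream = net.connect(80, 'iis-backend.internal'); // placeholder backend
  client.pipe(upstream);
  upstream.pipe(client);
  client.on('error', () => upstream.destroy());
  upstream.on('error', () => client.destroy());
}).listen(8080);

The trade-off is that the proxy never looks inside the HTTP traffic, so you lose URL rewriting, caching, and per-path routing; you are simply keeping each client's NTLM handshake bound to a single upstream connection.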