Support HTTPS in Node.js server or require reverse proxy for HTTPS - node.js

I'm writing an open-source Node.js application that implements an HTTP server for API calls. Supporting HTTPS in Node.js isn't hard, but it adds some complexity and cases you need to think about (a rough sketch follows the list below):
Paths to the key and cert should be configurable => More settings / documentation
The app should handle errors when the key or cert is missing or the path is wrong => More code and tests
The Docker image must pass an external key and cert to the application running in the container => More code and documentation
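For a sense of scale, here is a minimal sketch of what that direct HTTPS support could look like, assuming the key/cert paths come from hypothetical environment variables (TLS_KEY_PATH, TLS_CERT_PATH) and that falling back to plain HTTP is acceptable:

```js
const fs = require('fs');
const http = require('http');
const https = require('https');

// Hypothetical settings: the key/cert paths have to be configurable somehow.
const keyPath = process.env.TLS_KEY_PATH;
const certPath = process.env.TLS_CERT_PATH;

// The app also has to handle a missing or wrong key/cert path gracefully.
let tlsOptions = null;
try {
  tlsOptions = {
    key: fs.readFileSync(keyPath),
    cert: fs.readFileSync(certPath),
  };
} catch (err) {
  console.error(`Could not load TLS key/cert (${err.message}); falling back to plain HTTP`);
}

const handler = (req, res) => res.end('API response\n');

if (tlsOptions) {
  https.createServer(tlsOptions, handler).listen(8443);
} else {
  http.createServer(handler).listen(8080);
}
```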
It feels a bit like reinventing the wheel. I personally use a reverse proxy that handles the HTTPS part for all my sites; the servers behind it are all plain HTTP.
Is it OK to require a reverse proxy, or is it better to support HTTPS directly since most users aren't running a reverse proxy? What's the common server setup and the recommended approach when writing an open-source Node.js application? How can I make it as easy as possible for most users to use my app?

A reverse proxy is preferable in most scenarios, since it gives you security features, load balancing, cache control, and the like. It can also centralize logging, so you maintain a security layer over all server activity and the data behind the proxy. As you mentioned, it is one more piece to set up, but the overall system ends up more capable. I recommend putting a reverse proxy in front to make things more robust and secure.

Related

NGINX, The Edge, HAProxy

I was going through the Uber Engineering website and came across this paragraph, which confused me a lot. If anyone can make it clear for me, I would be thankful:
The Edge The frontline API for our mobile apps consists of over 600
stateless endpoints that join together multiple services. It routes
incoming requests from our mobile clients to other APIs or services.
It’s all written in Node.js, except at the edge, where our NGINX front
end does SSL termination and some authentication. The NGINX front end
also proxies to our frontline API through an HAProxy load balancer.
This is the link.
NGINX is already a reverse proxy + load balancer, so where does the HAProxy load balancer come into the picture, and where exactly does it fit? What is "the edge" they talk about? Either the author wrote it confusingly or my English is failing me.
Please help.
It seems like they're using HAProxy strictly as a load balancer, and NGINX strictly to terminate SSL and handle authentication. It isn't necessary in most cases to use HAProxy along with NGINX; as you mentioned, NGINX has load-balancing capabilities, but being Uber, they probably ran into some unique problems that required both. According to the information I've read, such as http://www.loadbalancer.org/blog/nginx-vs-haproxy/ and https://thehftguy.com/2016/10/03/haproxy-vs-nginx-why-you-should-never-use-nginx-for-load-balancing/, NGINX works extremely well as a web server, including the use case where it serves as a reverse proxy for a Node application, but its load-balancing capabilities are basic and not nearly as performant as HAProxy's. Additionally, HAProxy exposes many more metrics for monitoring and has more advanced routing capabilities.
Load balancing is not the core feature of NGINX. In the context of a Node.js application, NGINX is usually used as a reverse proxy: NGINX is the public-facing web server, and HTTP requests come through it. Then, based on the hostname and other rules, it forwards the request to whatever port your Node.js application is running on. As part of this flow, NGINX often handles SSL termination, so that this computationally intensive task is not handled by Node.js. Additionally, NGINX is often used to serve static assets for Node.js apps, as it is more efficient at that, especially when compressing assets.
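To make the Node side of that flow concrete, here is a minimal sketch. The local port is arbitrary, and the X-Forwarded-* headers are only the usual convention; exactly which headers get set depends entirely on how the proxy is configured.

```js
const http = require('http');

const server = http.createServer((req, res) => {
  // Headers a reverse proxy such as NGINX typically forwards (convention, not guaranteed).
  const clientIp = req.headers['x-forwarded-for'] || req.socket.remoteAddress;
  const scheme = req.headers['x-forwarded-proto'] || 'http';
  res.end(`Handled by Node behind the proxy (client ${clientIp}, original scheme ${scheme})\n`);
});

// Bind to localhost only: the proxy is the process exposed to the outside world.
server.listen(3000, '127.0.0.1');
```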

Is HTTPS behind reverse proxy needed?

I have an API server running behind an nginx reverse proxy. It is important to have all requests to my API server be secured via TLS since it handles sensitive data.
I've set up nginx to work with TLS (Let's Encrypt), so that seems to be okay. However, requests from nginx to my API server are still insecure http requests (this is all happening across Docker containers, by the way).
Is it a best practice to also setup https between the reverse proxy and the API server? If so, how would I go about doing that without over-engineering it?
It all comes down to how secure or paranoid you'd like your implementation to be. It may also depend on the type of data you're playing with. For instance: I'd definitely do this for credit card numbers or other sensitive information.
As the comments have already stated, you would typically terminate SSL connections at the front-facing web server, assuming the API backend is also inside your LAN, which you trust and control. If you want to go the extra mile, you could also set up SSL on the API backend. The details of how to do that depend on the software you're using on the backend.
If you do decide to implement SSL on the API backend, the setup would be similar to what you did to set up Nginx with SSL on the frontend, with the main difference being that you don't need a public certificate on the backend. It can be self-signed, since no one besides your web server will be talking to it. Then it's just a matter of fixing all the URIs in your code to use HTTPS.
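As a rough sketch of the backend side, assuming a Node.js API server: the file paths and port below are placeholders, and the openssl command in the comment is just one example of generating a self-signed pair.

```js
const fs = require('fs');
const https = require('https');

// Hypothetical self-signed pair, e.g. generated with:
//   openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
//     -keyout backend.key -out backend.crt -subj "/CN=api-backend"
const options = {
  key: fs.readFileSync('/etc/myapp/backend.key'),
  cert: fs.readFileSync('/etc/myapp/backend.crt'),
};

https.createServer(options, (req, res) => {
  res.end('API response over TLS\n');
}).listen(8443, '127.0.0.1');
```

On the nginx side you would then point proxy_pass at an https:// upstream instead of http://, and either explicitly trust the self-signed certificate or accept that the proxy skips verification for that internal hop.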

How to set up a forward proxy on a Windows server for outgoing HTTP and HTTPS requests?

I have a Windows Server 2012 VPS running a web app behind Cloudflare. The app needs to initiate outbound connections based on user actions (e.g. uploading an image from a URL). The problem is that this 'leaks' my server's IP address and increases the risk of DDoS attacks.
So I would like to prevent my server's IP from being discovered by setting up a forward proxy. So far my research has shown that this is no simple task, and would involve setting up another VPS to act as a proxy.
Does this extra forward proxy VPS have to be running Windows? Are there any paid services that could act as a forward proxy for my server (like Cloudflare's reverse proxy system)?
Also, it seems that the suggested IIS forward proxy plugin, Application Request Routing, does not work for HTTPS.
Is there a solution for both types of outgoing (HTTPS + HTTP) requests?
I'm really lost here, so any help or suggestions would be appreciated.
You are correct in needing a "Forward Proxy". A good analogy for this is the proxy settings your browser has for outbound requests. In your case, the web application behaves like a desktop browser and can be configured to make the resource request through a proxy.
Often you can control this for individual requests at the application layer. An example of doing so with C#: C# Connecting Through Proxy
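The linked example is C#; since most of this page concerns Node.js, here is the same idea sketched in Node instead. The proxy host and port are placeholders, and this only covers the plain-HTTP case.

```js
const http = require('http');

// Hypothetical forward proxy the application sends its outbound requests through.
const PROXY_HOST = 'proxy.example.com';
const PROXY_PORT = 3128;

http.get({
  host: PROXY_HOST,
  port: PROXY_PORT,
  // For a forward proxy, the request path is the absolute URL of the real target.
  path: 'http://example.com/user-supplied-image.png',
  headers: { Host: 'example.com' },
}, (res) => {
  console.log('Fetched via proxy, status:', res.statusCode);
  res.resume();
});

// HTTPS targets work differently: the client first asks the proxy to open a tunnel
// with the CONNECT verb and then speaks TLS through it (see the proxy sketch below).
```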
As far as the actual proxy server: No, it does not need to run Windows or IIS. Yes, you can use a proxy service. The vast majority of proxy services are targeted towards consumers and are used for personal privacy or to get around network restrictions. As such, I have no direct recommendations.
Cloudflare actually has recommendations regarding this: https://blog.cloudflare.com/ddos-prevention-protecting-the-origin/.
Features like "upload from URL" that allow the user to upload a photo from a given URL should be configured so that the server doing the download is not the website origin server.
This may be a more comfortable risk mitigator, as it wouldn't depend on a third party proxy service. A request for upload could be handled as a web service call to a dedicated "file downloader" server. Keep in mind that if you have a queued process for another server to do the work, and that server is hosted in the same infrastructure, both might be impacted by a DDoS, depending on the type of DDoS.
Your question implies that you may be comfortable using a non-Windows server. A lot of software can operate as a proxy (most web servers can), but much of it suffers from the same problem as ARR: lack of support for the HTTP CONNECT verb, which modern clients use to start an HTTPS connection before issuing a GET. Squid is very popular, open source, and supports everything needed to connect to... anything. It's not trivial to set up. Apache also supports this via mod_proxy_connect, but I have no experience with it and the online documentation isn't very thorough. It's Apache, though, so it may be worth the extra investigation.
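To make the CONNECT point concrete, here is a deliberately minimal, illustration-only forward proxy in Node.js (no authentication, no filtering, not a substitute for Squid). It shows the two cases a forward proxy has to handle: plain HTTP requests and CONNECT tunnels for HTTPS.

```js
const http = require('http');
const net = require('net');

const proxy = http.createServer((clientReq, clientRes) => {
  // Plain HTTP: the client sends an absolute URL as the request path.
  const target = new URL(clientReq.url);
  const upstream = http.request({
    host: target.hostname,
    port: target.port || 80,
    path: target.pathname + target.search,
    method: clientReq.method,
    headers: clientReq.headers,
  }, (upstreamRes) => {
    clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(clientRes);
  });
  clientReq.pipe(upstream);
});

// HTTPS: the client issues "CONNECT host:port" and the proxy blindly tunnels bytes,
// never seeing the plaintext. This is the verb that ARR lacks support for.
proxy.on('connect', (req, clientSocket, head) => {
  const [host, port] = req.url.split(':');
  const serverSocket = net.connect(Number(port) || 443, host, () => {
    clientSocket.write('HTTP/1.1 200 Connection Established\r\n\r\n');
    serverSocket.write(head);
    serverSocket.pipe(clientSocket);
    clientSocket.pipe(serverSocket);
  });
});

proxy.listen(3128);
```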

Is there a reason why I should have a proxy in front of my Node.js server?

Why should I proxy requests through to my Node.js server using something like Nginx?
Why not allow access to the node server directly?
The Internet is an unfriendly place, and it's hard to survive out there.
Having Nginx between your clients and Node.js lets you use the advantages Nginx offers. So what are these?
HTTPS can be set up without touching any code in your Node application. Doing it in Node is trickier, and your HTTPS private key can be compromised if the Node app is exploited.
Gzipping responses can be done without any changes to the Node application.
Authorization. For instance, Nginx can automatically block unwelcome spiders.
If you have more than one application, Nginx will serve as an independent router.
And more...
Summing up, Nginx does the server-level work, so that while developing your application you do not need to worry about the administrative and common configuration aspects of a web server.
I recommend reading chapter 3 of "Deploying Node.js" by Sandro Pasquali (Packt Publishing), especially from page 69 onwards.
I will cite a couple of relevant paragraphs:
Using Nginx
According to
http://www.linuxjournal.com/magazine/nginx-high-performance-web-server-and-reverse-proxy:
"Nginx is able to serve more requests per second with less resources
because of its architecture. It consists of a master process, which
delegates work to one or more worker processes. Each worker handles
multiple requests in an event-driven or asynchronous manner using
special functionality from the Linux kernel (epoll/select/poll). This
allows Nginx to handle a large number of concurrent requests quickly
with very little overhead."
Load balancing with Node
File serving speeds are, of course, not the only reason you might use
a proxy such as Nginx. It is often true that network topology
characteristics make a reverse proxy the better choice, especially
when the centralization of common services, such as compression, makes
sense.

Designing real-time web application (Node.js and socket.io)

I want to ask about some good practices. I have a Node.js (Express) web server and a socket.io push server (in case the technology matters). I could turn both of them into one application, but I want them separated (they can communicate with each other if necessary). There are two reasons to do that:
It will be easier to manage, debug and develop the app;
It will be a lot easier to scale the app. I can just add another instance of push server or web server if necessary;
This is at least what I believe. The only problem is that when a client connects to the separate socket.io server, it won't send cookies (different port, cross-domain policy).
The workaround I came up with is to put a reverse proxy (written in Node.js as well) in front, check what kind of request we are dealing with, and send it to the web server or the push server accordingly. Great, now we have cookies in both the web server and the push server. The reverse proxy can also act as a load balancer, which is an additional bonus.
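A rough sketch of such a routing proxy, assuming the node-http-proxy module (npm package http-proxy), socket.io's default /socket.io/ path, and hypothetical local ports for the two back ends:

```js
const http = require('http');
const httpProxy = require('http-proxy'); // npm install http-proxy

const proxy = httpProxy.createProxyServer({});

// Hypothetical local targets for the two separate processes.
const WEB_TARGET  = 'http://127.0.0.1:3000'; // Express web server
const PUSH_TARGET = 'http://127.0.0.1:3001'; // socket.io push server

const server = http.createServer((req, res) => {
  // socket.io traffic uses the /socket.io/ prefix by default; everything else is the web app.
  const target = req.url.startsWith('/socket.io/') ? PUSH_TARGET : WEB_TARGET;
  proxy.web(req, res, { target });
});

// WebSocket upgrades (used by socket.io) have to be proxied explicitly.
server.on('upgrade', (req, socket, head) => {
  proxy.ws(req, socket, head, { target: PUSH_TARGET });
});

server.listen(80);
```

Since the browser now talks to a single host and port, the cookie and cross-domain issue disappears, and the same front process could later take on load balancing.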
It looks like a good idea to me. What do you think about this design? Is there perhaps another workaround for the cookie problem?
I recently did something similar. We initially used a Node.js reverse proxy but ran into reliability/scalability problems. We found that serving static files and proxying requests was best left to nginx; HAProxy is also a very viable solution for standalone proxying.
HAProxy
Nginx as a reverse proxy
