Do I have to put a reverse proxy in front of NodeJS?

I've been creating web applications using Ruby on Rails for a while, and I'm switching to NodeJS/ExpressJS for my next web application.
I'm used to putting nginx as a reverse proxy in front of the Rack stack, but do I have to put a reverse proxy in front of NodeJS/ExpressJS as well? If I have to, can you explain why?

First of all, whether or not to put a reverse proxy in front is your decision alone. I can only list some arguments in favor of doing it.
A reverse proxy (especially nginx) can be used for load balancing. If you have several back-end servers, nginx can distribute requests between them, and if one goes down the service keeps working.
Nginx can serve static files faster than node.js/RoR.
Nginx can terminate SSL connections, which makes your application a little lighter.
After adding a reverse proxy, you can bind the application to 127.0.0.1 only, so it cannot be reached remotely without going through nginx, which also logs every request.
Hope this helps you decide whether to put nginx in front in a production environment.
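As a small illustration of the binding point above, here is a minimal sketch assuming an Express app (the port and route are placeholders, not values from the question):
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('hello');
});

// Bind to 127.0.0.1 only, so the app is reachable solely through the
// reverse proxy running on the same machine.
app.listen(3000, '127.0.0.1');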

Why use nginx if there is a proxy middleware for nodejs?

I'm really confused about reverse proxies. What I understood is that with a forward proxy the client knows the destination server but the server doesn't know the client, while with a reverse proxy the server knows the client but the client doesn't know that the "server" it's visiting is actually proxying to some other server. To run a reverse proxy you can use NGINX. But if we can use that, why do Express middlewares like http-proxy-middleware exist?
If my understanding of proxies and reverse proxies is wrong, please correct me.
Let's take an abstract example:
You are probably running NodeJS on port 3000 or something similar, right?
And let's say your frontend (Angular/React or plain HTML+CSS) runs as its own service, say on port 4200 (the Angular default).
Now what if you only have one server and want both services (the Angular frontend and the NodeJS backend) to run on that single server?
You need something between your clients and the server that looks at each request and decides whether to forward it to Angular, to NodeJS, or to any other service running on the same machine.
A reverse proxy such as NGINX lets you define exactly those rules, so the administrator can use the same server to serve several services.
This is the simplest example I can think of off the top of my head.
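To make it concrete, here is a rough sketch of that idea in Node itself, assuming the http-proxy package and the ports from the example above (3000 for the backend, 4200 for the frontend):
var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

// Listen on port 80 and route by path: /api goes to the NodeJS backend,
// everything else goes to the frontend server.
http.createServer(function (req, res) {
  if (req.url.indexOf('/api') === 0) {
    proxy.web(req, res, { target: 'http://127.0.0.1:3000' });
  } else {
    proxy.web(req, res, { target: 'http://127.0.0.1:4200' });
  }
}).listen(80);
NGINX does the same job with rules in its configuration file, just outside your application code.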

NGINX, The Edge, HAProxy

I was going through the Uber Engineering website where I came across this paragraph, and it confused me a lot. If anyone can make it clear for me, I would be thankful:
The Edge: The frontline API for our mobile apps consists of over 600 stateless endpoints that join together multiple services. It routes incoming requests from our mobile clients to other APIs or services. It's all written in Node.js, except at the edge, where our NGINX front end does SSL termination and some authentication. The NGINX front end also proxies to our frontline API through an HAProxy load balancer.
This is the link.
NGINX is already a reverse proxy + load balancer, so where does the HAProxy load balancer come into the picture, and where exactly does it fit? What is "the edge" he talked about? Either the person who wrote this used confusing wording or I'm misreading it.
Please help.
It seems like they're using HAProxy strictly as a load balancer, and NGINX strictly to terminate SSL and handle authentication. It isn't necessary in most cases to use HAProxy along with NGINX; as you mentioned, NGINX has load-balancing capabilities, but being Uber, they probably ran into some unique problems that required the use of both. According to the information I've read, such as http://www.loadbalancer.org/blog/nginx-vs-haproxy/ and https://thehftguy.com/2016/10/03/haproxy-vs-nginx-why-you-should-never-use-nginx-for-load-balancing/, NGINX works extremely well as a web server, including the use case where it serves as a reverse proxy for a node application, but its load-balancing capabilities are basic and not nearly as performant as HAProxy's. Additionally, HAProxy exposes many more metrics for monitoring and has more advanced routing capabilities.
Load balancing is not the core feature of NGINX. In the context of a node.js application, usually what you would see NGINX used for is to serve as a reverse proxy, meaning that NGINX is the web server, and http requests come through it. Then, based on the hostname and other rules, it forwards on the HTTP request to whatever port your node.js application is running on. As part of this flow, often NGINX will handle SSL termination, so that this computationally-intensive task is not being handled by node.js. Additionally, NGINX is often used to serve static assets for node.js apps, as it is more efficient, especially when compressing assets.
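On the Node side of that flow, the application usually just needs to know it is sitting behind a proxy. A minimal sketch assuming an Express app behind NGINX (the port and route are placeholders):
var express = require('express');
var app = express();

// Trust the first proxy hop (NGINX) so the X-Forwarded-For and
// X-Forwarded-Proto headers are used to populate req.ip and req.protocol.
app.set('trust proxy', 1);

app.get('/whoami', function (req, res) {
  res.json({ ip: req.ip, protocol: req.protocol });
});

// Bind to localhost only; NGINX is the public-facing listener.
app.listen(3000, '127.0.0.1');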

How to point different subdomains to different applications on the same server? (use node.js as a proxy?)

I'm setting up a node.js server but I would also like to have Apache running on there at the same time. Node is going to be the main website, and there will be subdomains that point to Apache.
The only way I can think of to do this is to have the different applications listen on different ports, and then have a proxy application listening on port 80 that "redirects" to the right port according to the subdomain used. I'm not sure if this is the right way to do it, or how to do it if it is.
Research has shown me that it could be possible to use Apache as this proxy, though I would prefer it if I didn't have to. If I could somehow use node.js to do it, that would be fantastic (my preferred solution). If that is impractical/impossible, then of course I am open to other ideas.
I would really appreciate some guidance as to how to do this.
You want a setup that can serve both Node.js and Apache at the same time, and you want Node.js to do the reverse proxying. However, it is best to use a program that is designed for reverse proxying (Nginx, HAProxy) for that job; using Node.js as a reverse proxy server will be less efficient.
Nginx is what I recommend. It is simple and highly efficient. You can have the Nginx server at the very front, taking in all the requests.
Here is how to set up Nginx as a reverse proxy for Node:
http://www.nginxtips.com/how-to-setup-nginx-as-proxy-for-nodejs/
And here is how to set up Nginx as a reverse proxy for Apache:
http://www.howtoforge.com/how-to-set-up-nginx-as-a-reverse-proxy-for-apache2-on-ubuntu-12.04
Simply combining the two configuration files will let you serve Apache and Node at the same time.
Have a look through this thread
While it discusses some issues using http-proxy with WebSockets on <= 0.8.x, if you aren't doing that, you should be fine.
You can create a very basic proxy listener like so:
var http = require('http'),
    httpProxy = require('http-proxy');

// Proxy: listen on port 80 and forward every request to localhost:8888
// (this is the older node-http-proxy createServer(port, host) form).
httpProxy.createServer(8888, 'localhost').listen(80);
And create a back-end server like so:
// Back-end server on port 8888 that the proxy forwards to
require('http').createServer(function (req, res) { res.end('hello from the back end'); }).listen(8888);
But of course, more complex implementations can be accomplished by reading the http-proxy documentation.
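Since the original question was about routing by subdomain, one such refinement is switching on the Host header. A rough sketch; the hostnames and ports here are made-up placeholders, not values from the question:
var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

// Route by the Host header: a subdomain goes to Apache, everything else to Node.
http.createServer(function (req, res) {
  var host = req.headers.host || '';
  if (host.indexOf('blog.') === 0) {
    proxy.web(req, res, { target: 'http://127.0.0.1:8080' }); // Apache
  } else {
    proxy.web(req, res, { target: 'http://127.0.0.1:8888' }); // Node app
  }
}).listen(80);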

Designing real-time web application (Node.js and socket.io)

I want to ask about some good practices. I have a Node.js (Express) web server and socket.io push server (in case technology matters). I can turn both of them into one application but I want them separated (they can communicate with each other if necessary). There are two reasons to do that:
It will be easier to manage, debug and develop the app;
It will be a lot easier to scale the app. I can just add another instance of push server or web server if necessary;
This is at least what I believe. The only problem is that when a client connects to the separate socket.io server, it won't send cookies (different port, cross-domain policy).
The workaround I came up with is to put a reverse proxy (written in Node.js as well) in front and check what kind of request we are dealing with and send it to web server or push server accordingly. Great, now we have cookies in both web server and push server. The reverse proxy can be a load balancer which is an additional bonus.
It looks like a good idea to me. What do you think about this design? Or is there another workaround for the cookie problem?
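To make the design concrete, here is a rough sketch of the proxy described above, using http-proxy (the ports are assumptions): requests whose path starts with /socket.io, plus WebSocket upgrades, go to the push server; everything else goes to the web server.
var http = require('http');
var httpProxy = require('http-proxy');

var webProxy  = httpProxy.createProxyServer({ target: 'http://127.0.0.1:3000' });
var pushProxy = httpProxy.createProxyServer({ target: 'http://127.0.0.1:3001', ws: true });

var server = http.createServer(function (req, res) {
  // socket.io handshake/polling requests go to the push server, so both
  // servers see the same origin and receive the same cookies.
  if (req.url.indexOf('/socket.io') === 0) {
    pushProxy.web(req, res);
  } else {
    webProxy.web(req, res);
  }
});

// Forward WebSocket upgrade requests to the push server as well.
server.on('upgrade', function (req, socket, head) {
  pushProxy.ws(req, socket, head);
});

server.listen(80);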
I recently did something similar. We initially used a node.js reverse proxy but ran into reliability/scalability problems; we found that serving static files and proxying requests was best left to nginx. HAProxy is also a very viable solution for stand-alone proxying.
HAProxy
Nginx as a reverse proxy

Nginx + SSL + Rails + Juggernaut (Node.js) + Engineyard

I have two different applications on the same server. One of them is running on port 80 (mydomain.com), the other on port 443 (sub.mydomain.com) with a wildcard certificate.
The first application is only for informational purposes and doesn't need websocket support.
The second application should have secure websocket support (the wss protocol).
I tried to set up the juggernaut gem (for websockets) for my Rails app with an nginx server on the Engine Yard cloud, but I have one problem: Engine Yard cloud provides only two open ports, 80 and 443. I know that nginx does not fully support HTTP/1.1 reverse proxying, so I can't use nginx to proxy websocket requests to a specific local port (in my case, port 8080).
I tried HAProxy and it works for me when I use only unsecured websockets, but I need to support secure websockets. As far as I know, in this case I should use something like stunnel to handle the HTTPS connection and then pass it to HAProxy, but when I tested it the server ran several times slower and I still couldn't get the secure socket connection working :(
Maybe I'm doing something wrong? Could someone explain how to set up nginx for multiple applications (one of which should work over HTTPS) and secure websockets using only two ports (80 and 443)?
P.S. I also tried node-http-proxy; with it I was able to set up the proxy for the different nginx applications, but I couldn't get websockets working (only the "handshake" happened via nginx, not the "switching protocols" step).
I did some research on the various reverse proxies and websockets not too long ago. The bottom line is that WebSockets are new, and reverse proxy support for them is very poor right now.
The recommendation I saw and I agree with is that you should run your websockets on a different stack than the rest of your items. That usually means putting it on a separate domain or subdomain.
You still have to deal with the complexities of getting the reverse proxies working, but it will be less complicated if you don't have to worry about breaking the other stuff.
Also, I agree that maybe you'll get better answers at serverfault or superuser.
