Nginx and Node.js — am I doing it wrong?

“If #nginx isn’t sitting in front of your node server, you’re probably doing it wrong.”
— Bryan Hughes via Twitter
For a short while now, I have been making applications with Node.js and, as Mr. Hughes advises, serving them with Nginx as a reverse proxy. But I am not sure that I am doing it correctly, because I can still access my Node.js application over the internet without going through the Nginx server.
At its core, the typical application is quite simple. It is served with ExpressJS like this:
var express = require("express");
var app = express();
// ...
app.listen(3000);
and Nginx is configured as a reverse-proxy like so:
# ...
location / {
    proxy_pass http://127.0.0.1:3000;
}
# ...
And this works wonderfully! However, I have noticed a behaviour that I am not sure is desirable, and that may defeat much of the purpose of using Nginx as a reverse-proxy in the first place:
Assuming example.org is a domain name pointing to my server, I can navigate to http://www.example.org:3000 and interact with my application from anywhere, without touching the Nginx server.
The typical end user would never have any reason to navigate to http://<whatever-the-server-host-name-or-IP-may-be>:<the-port-the-application-is-being-served-on>, so this would never affect them. My concern, though, is that this may have security implications that a not-so-end-user could take advantage of.
Should the application be accessible directly even though Nginx is being used as a reverse-proxy?
How can you configure the application so that it is only available to the local machine / network / Nginx server?

It is best for the application not to be directly accessible (imho).
You can specify the hostname (interface) to listen on:
app.listen(3000, 'localhost');
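Binding to a specific interface like this means the app only accepts connections from the local machine, so the public can only reach it through Nginx. A fuller sketch of the same idea (the log message is just illustrative):
var express = require("express");
var app = express();

// ... routes ...

// Bind only to the loopback interface; remote clients can no longer
// reach port 3000 directly, but Nginx on the same host still can.
app.listen(3000, "127.0.0.1", function () {
    console.log("Listening on 127.0.0.1:3000 (local connections only)");
});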

Related

Is there a way to "host" an existing web service on port X as a network path of another web service on port 80?

What I'm trying to do is create an access website for my own services that run on my Linux server at home.
The services I'm using are accessible through <my_domain>:<respective_port_num>.
For example, there's a Plex instance listening on port X, transmission-remote (a torrenting client) listening on port Y, and another custom processing service on port Z.
I've created a simple website using Python Flask, which I can access remotely, that redirects paths to ports (so <my_domain>/plex turns into <my_domain>:X). Is there a way to serve these services on the network paths I've assigned to them, so I don't need to open a port for each service? I want to be able to channel an existing service on :X to <my_domain>/plex without having to modify it; I'm sure it's possible.
I have a bit of a hard time understanding your question.
You certainly can use e.g. nginx as a reverse proxy in front of your web application: listen on any port and forward requests to the upstream application on any other port - e.g. your Flask application.
Let's say, my domain is example.com.
I can then configure e.g. nginx to listen on port 80 (and 443 for SSL), and proxy all requests to e.g. port 8000, where Flask is running locally.
Yes, this is called using nginx as a reverse proxy. It is well documented on the internet and even in the official docs. Your nginx.conf would have something like:
location /my/flask/app/ {
    # Assuming your flask app is at localhost:8000
    proxy_pass http://localhost:8000;
}
From the user's perspective, they will be connecting to your.nginx.server.com/my/flask/app/. But behind the scenes, nginx will forward the request to your app and serve its response back to the user.
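One subtlety worth knowing if you want Flask to see clean paths: with proxy_pass, a trailing slash on the target changes what path is forwarded. A sketch (use one variant or the other):
# Without a trailing slash, Flask receives the full original path,
# e.g. /my/flask/app/page.
location /my/flask/app/ {
    proxy_pass http://localhost:8000;
}

# With a trailing slash, the matched prefix is stripped, so Flask
# receives just /page.
location /my/flask/app/ {
    proxy_pass http://localhost:8000/;
}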
You can deploy nginx as a Docker container; I recommend doing this, as it keeps the local files and configs separate from your own work and makes it easier to fiddle with as you learn. Keep in mind that nginx is only HTTP, though: you can't use it to proxy things like SSH or arbitrary protocols (not without a lot of hassle, anyway). If the services generate their own URLs, you might also need to configure them to anticipate the nginx redirects.
BTW, usually Flask is not served directly to the internet; instead, nginx talks to something like Gunicorn, which handles various network-related concerns: https://vsupalov.com/what-is-gunicorn/
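A typical invocation binds Gunicorn to localhost so only nginx can reach it (a sketch; app:app assumes your Flask instance is named app in app.py):
# Serve the Flask app on 127.0.0.1:8000 with 4 worker processes.
gunicorn --workers 4 --bind 127.0.0.1:8000 app:app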

Setting up internal web server with Node.js

I want to host a web app with Node.js on a Linux virtual machine using the HTTP module.
As the app will be visualising sensitive data I want to ensure it can only be accessed from PCs on the same LAN.
My understanding is that using the HTTP module creates a web server that's initially only accessible by other PCs on the same LAN. I've seen that, either by tunnelling or port forwarding, a node.js server can be exposed if desired.
Question
Are there any other important considerations/ways the server could be accessed externally?
Is there a particular way I can setup a node.js server to be confident that it's only accessible to local traffic?
It really depends on what you are protecting against.
For example, somebody on your LAN could port forward your service using something like ngrok. There are a few things you can check for:
In this case, the x-forwarded-for header is set. So, to protect against this, you can check for this header on the incoming request and reject the request if it is present.
The host header is also set and will indicate how the client referred to your service. If it is what you expect (maybe a direct local LAN address such as 192.168.0.xxx:3000), then all is OK; if not (I ran ngrok on a local service and got something of the form xxxxxxxx.ngrok.io), then reject it.
Of course, a malicious somebody could create their own server to redirect requests. The only defence there is to put in usernames and passwords or similar. At least you then know who is (allegedly) accessing your service and can do something about it.
However, if you are not trying to protect against a malicious internal actor, then you should be good as you are - I can't think of any way (unless there is a security hole in your LAN) for your service to be made public without somebody actively setting that up.
My last suggestion would be to use something like Express rather than the http module by itself. It really does make life a lot simpler. I use it a lot for just this kind of simple internal server.
Thought I'd add a quick example. I've tested this with ngrok: it blocks access via the public address but works fine via localhost. Change the host test to whatever local address (or addresses) you want to serve this service from.
const express = require('express');
const app = express();

// Reject any request that didn't arrive directly as localhost:3000 -
// a tunnelled/forwarded request (e.g. via ngrok) sets x-forwarded-for
// and/or a different host header.
app.use((req, res, next) => {
    if (req.headers.host !== 'localhost:3000' || req.headers['x-forwarded-for']) {
        res.status(403).send('Invalid access!');
    } else next();
});

app.get('/', (req, res) => res.send('Hello World!'));

app.listen(3000, () => {
    console.log('Service started. Try it at http://localhost:3000/');
});
I would prefer using nginx as a proxy here and relying on nginx's configuration to accept traffic to the node.js web server only from the local LAN. If this is not possible, a local firewall would be the best tool for the job.
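A sketch of that nginx approach (assuming your LAN uses the 192.168.0.0/24 range and the Node app listens on port 3000):
server {
    listen 80;

    # Accept clients from this machine and the local network only.
    allow 127.0.0.1;
    allow 192.168.0.0/24;
    deny all;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}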

How to use Nginx to load pages through express router

So I'm building an end-to-end application (with a Node.js/MySQL back end, a React front end, and the Express router), but I'm having trouble setting up a local development server. I don't need it to be accessible from the outside world; I just need to be able to load different pages through the Express router. I don't have any dev-ops experience with this, so I'm trying to use nginx to point to the router, which I can't figure out. Is there an easier way to do this?
I also need to run this on a Windows machine, which just makes everything slightly more complicated.
It's not entirely clear from your description how your application is set up and what the role of Nginx is.
So I'll start from the beginning...
Nginx is primarily an HTTP server which can also function as a proxy for HTTP requests. If you've written a Node.js application using Express, you have written an HTTP server which can handle any routes you have set up and can also serve your static assets (i.e. HTML pages, images, front-end JavaScript, CSS, etc.). In this case, there is no need for Nginx: if you wrote something like the Express "Hello World" app, then you will see a message like "Example app listening on port 3000" and you can connect to your app by visiting http://localhost:3000 in your browser.
That's it - there's literally nothing else to your app and there is no need for Nginx (or any other HTTP server) to run your application.
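For reference, that "Hello World" app is only a few lines (a minimal sketch along the lines of the Express docs):
const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => res.send('Hello World!'));

app.listen(port, () => console.log(`Example app listening on port ${port}`));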
Now that's not to say that there is no role for Nginx in your application, but it may not be as an HTTP server. One possibility is that you may want to set up Nginx as a proxy, to handle certain routes by sending the requests to your Node application. For example, I set up an application some time ago which uses Nginx to proxy API routes for my application to a Node application and to serve static assets directly. This may be what you have in mind - if it is, you will need to configure different routes in Nginx to serve different things (and unfortunately there's not enough information in your question to give suggestions on this).
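As a generic illustration of that kind of split (the paths here are hypothetical; your actual routes would differ):
location /api/ {
    # Forward API requests to the Node/Express app.
    proxy_pass http://127.0.0.1:3000;
}

location / {
    # Serve the built front-end files directly from disk.
    root /var/www/my-app/build;
    try_files $uri /index.html;
}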
As an aside, you're probably going to find this much easier to set up using Linux - perhaps the Windows Subsystem for Linux, a virtual machine running Linux, or Docker.
You'll probably want to use create-react-app:
https://github.com/facebook/create-react-app
create-react-app my-app will set up everything you need (webpack, etc.), and then npm start will start a local development server.
It should work on Windows, but I can't say for sure, because I wouldn't use/recommend Windows ;-)

How to point different subdomains to different applications on the same server? (use node.js as a proxy?)

I'm setting up a node.js server but I would also like to have Apache running on there at the same time. Node is going to be the main website, and there will be subdomains that point to Apache.
The only way I can think of to do this is to have the different applications listen on different ports and then have a proxy application that listens on port 80 and "redirects" to the right port according to the subdomain used. I'm not sure if this is the right way to do it, or how to do it if it is.
Research has shown me that it could be possible to use Apache as this proxy, though I would prefer it if I didn't have to. If I could somehow use node.js to do it, that would be fantastic (my preferred solution). If that is impractical/impossible, then of course I am open to other ideas.
I would really appreciate some guidance as to how to do this.
You want a solution that can serve both Node.js and Apache at the same time, and you want Node.js to do the reverse proxying. However, it is best to use a program designed for reverse proxying (Nginx, HAProxy) for that job; using Node.js as a reverse proxy server would be inefficient.
Nginx is something I recommend. It is simple and highly efficient. You can have the Nginx server at the very front, taking in all the requests.
Here is how to set up Nginx to reverse proxy to Node:
http://www.nginxtips.com/how-to-setup-nginx-as-proxy-for-nodejs/
And here is how to set up Nginx to reverse proxy to Apache:
http://www.howtoforge.com/how-to-set-up-nginx-as-a-reverse-proxy-for-apache2-on-ubuntu-12.04
Simply combining the two sets of settings will enable you to serve Apache and Node at the same time.
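A combined sketch (hypothetical names; it assumes Node listens on 3000 and Apache has been moved to 8080 so nginx can own port 80):
# Main site -> Node.js
server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}

# Subdomain -> Apache
server {
    listen 80;
    server_name legacy.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}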
Have a look through this thread
While it discusses some issues using http-proxy with WebSockets on Node <= 0.8.x, if you aren't doing that, you should be fine.
You can create a very basic proxy listener like so:
var http = require('http'),
    httpProxy = require('http-proxy');

// Forward all incoming traffic on port 80 to localhost:8888.
httpProxy.createServer(8888, 'localhost').listen(80);
And create a back-end server like so:
var server = require('http').createServer(function (req, res) {
    res.end('Hello from the back-end!');
}).listen(8888);
But of course, more complex implementations can be accomplished by reading the http-proxy documentation.
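Note that the snippet above uses the old node-http-proxy 0.x API. In http-proxy 1.x, the equivalent looks like this (per the project README):
var httpProxy = require('http-proxy');

// Listen on port 80 and forward everything to localhost:8888.
httpProxy.createProxyServer({ target: 'http://localhost:8888' }).listen(80);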

Hosting PHP and Node.js apps on the same server with multiple domains

I have a Linode VPS, currently running lighttpd to serve up my PHP websites and listening on port 80.
I'm also running Node.js, which listens on port 81, and uses websockets and HTTP to interact with the client.
There are a couple of different domains that I would like to point at this server. Ideally, the domains which host the PHP sites would all talk to the same lighttpd server, and the sites which use node.js would somehow be routed to the port node.js is listening on, unbeknownst to the client (i.e. no 30x redirect).
example-php1.com:80 -> linodebox:80 lighttpd /var/www/example1
example-php2.com:80 -> linodebox:80 lighttpd /var/www/example2
example-node.com:80 -> linodebox:81 node.js
Is there a way to do this, either by setting DNS entries or tweaking iptables? Does lighttpd need to act as a proxy for node.js? The websockets feature needs to work without any fallbacks, and visiting a non-node domain, e.g. example-php1.com:81, should not expose the node application.
I feel the perfect solution wouldn't require changes to existing application code nor require proxying between software web servers, but I could be wrong.
What's up Tom!?
I recommend HAProxy; it's one of the highest-performance proxies out there and should accomplish what you're trying to do.
I'm doing something similar with nginx acting as a proxy; it's easy, but not the fastest.
HAProxy's website is here: http://haproxy.1wt.eu
If you wanted a 'pure' solution, you could probably get the answer from looking at HAProxy's source code. You can't really do it with iptables: something has to read the HTTP Host header to determine which domain the request was for and route it locally.
I had basically the same problem and I ended up using node-http-proxy (also available in npm as http-proxy).
You just need a simple config file:
{
    router: {
        'example-php1.com': 'linodebox:80',
        'example-php2.com': 'linodebox:80',
        'example-node.com': 'linodebox:81'
    }
}
Then just run node-http-proxy --config options.json and you're set. If you want to run lighttpd and node on the same machine, you'll have to start lighttpd on a different port (I use 81 for php and 3000 for node - adjusting the config is easy). I also use forever to manage my node instances.
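If you haven't used forever before, typical usage looks like this (assuming your proxy script is saved as proxy.js):
# Keep the proxy running, restarting it if it crashes.
forever start proxy.js

# List managed processes, or stop one later.
forever list
forever stop proxy.js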
Y'all are gonna hate me...
I ended up going with a second IP address, then followed the Linode tutorial to set up multiple static IPs. Then I configured lighttpd to bind to one IP address and Node.js to bind to the other.
This isn't a great solution, as it doesn't scale.
Update: lighttpd 1.4.46 (released back in 2017) added multiple ways to accept WebSocket connections:
lighttpd mod_wstunnel
lighttpd mod_proxy
lighttpd mod_cgi
