nginx or node app using ssl - node.js

I have a Node.js Express app sitting behind nginx. Currently everything works fine. nginx handles SSL for https and then simply forwards the request along to the running Node application at the specified port. I'm wondering if this is the best way to do it, though. Here's what I currently have...
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name mysite.somethingelse.com www.mysite.somethingelse.*;

    ssl_certificate /path/to/my/cert.pem;
    ssl_certificate_key /path/to/my/key.key;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
What if I simply implement an https server on the express end? And then proxy the request to that, and let that do all the decoding? Something like this:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name mysite.somethingelse.com www.mysite.somethingelse.*;

    location / {
        proxy_pass https://localhost:3000;
        proxy_http_version 1.1;
        proxy_cert path/to/cert.pem;
        proxy_key path/to/key.key;
    }
}
This second version is likely not even correct. But what I'm going for is implementing SSL on the node app rather than letting nginx do it.
Do I gain anything from doing one vs the other?
What's the best practice here... letting nginx or the node app do this?
And, assuming it's better to do it on the node app, what is the correct way to set up nginx here?
Thank you!

If you are in pursuit of performance and plan to implement load balancing across several Node instances, it is a good idea to terminate SSL on a standalone machine (or machines). But if you are going to run a single instance of your Node app and the foreseen load is not high, then it is probably simpler to set up SSL in Node. In that case I would also recommend dropping nginx in favor of NAT (using the firewall), because this approach uses fewer resources.
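For reference, terminating SSL directly in Node takes only a few lines with the built-in https module. A minimal sketch, reusing the certificate paths from the question (the Express route is just a placeholder):

var https = require('https');
var fs = require('fs');
var express = require('express');

var app = express();

app.get('/', function (req, res) {
  res.send('hello over https');
});

// Paths taken from the question; substitute your own.
var options = {
  key: fs.readFileSync('/path/to/my/key.key'),
  cert: fs.readFileSync('/path/to/my/cert.pem')
};

// Node terminates TLS itself; nothing sits in front of it.
https.createServer(options, app).listen(443);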
Another argument in favor of terminating SSL on nginx is documentation and configuration best practice. You should know that configuring SSL is not only about setting up a certificate and private key; it involves many security considerations around ciphers, protocols and vulnerabilities. And it is easier to find working solutions, tips and configuration examples for nginx than for Node.
So, regarding your questions:
Do you gain anything from one vs. the other? It depends on your goals.
What's the best practice? Again, it depends on your goals, but for now I would recommend using nginx for SSL termination.
And if you do terminate SSL in the Node app, I would recommend implementing NAT instead of nginx in that case.

Related

node.js server and client: one or two node instances?

I am writing a website with node.js, and, until now, I've always separated the client and server parts in two different node.js instances (and processes):
one for the server part (APIs, interaction with databases, etc.)
one for the client part (js code is executed in the browser)
Is this the correct way of doing it? Or is there a way to collapse client and server into one node.js instance?
Thanks.
You do not need node.js to provide clients with static files.
Nginx (or any other reverse proxy) can do it more efficiently, conserving your server's resources and allowing higher loads.
I suggest using nginx to serve static files and forward API requests to the node.js service.
Here is an example of how you could do it (note the named location, which in nginx is written with @):
server {
    listen 80 default_server;
    root /client-code;

    location / {
        try_files $uri $uri/ @node;
    }

    location @node {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:8000;
    }
}
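On the Node side, the API service then only needs to listen on the port the config proxies to. A minimal sketch, assuming Express on port 8000 as above (the /api/items route is hypothetical):

var express = require('express');
var app = express();

// Only API routes live here; nginx serves the static client files
// and falls through to this service via the @node location.
app.get('/api/items', function (req, res) {
  res.json([{ id: 1 }, { id: 2 }]);
});

// Bind to localhost only, matching proxy_pass http://127.0.0.1:8000.
app.listen(8000, '127.0.0.1');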

jHipster dev profile reverse proxy?

I created a skeleton app with jhipster and added some entities with import-jdl. Now I'm trying to run the dev profile, and it hosts the app on localhost:8080, which is fine. But I want to proxy it to the public Internet through nginx and put it behind SSL.
Now if I were using Tomcat as an app server, I could set the proxyHost property on the Connector to tell the app server what its public-facing URL is so it generates URLs for the client properly.
But I don't know what app server jhipster uses for the dev profile or how to configure it.
There are a few ways to solve your problem. The simplest one is to reverse proxy using nginx, like this:
server {
    listen [::]:80;
    listen 80;
    server_name your-domain.com;

    access_log /var/log/nginx/your-app-access.log;
    error_log /var/log/nginx/your-app-error.log;

    return 301 https://$host:443$request_uri;
}

server {
    listen [::]:443 ssl;
    listen 443 ssl;
    server_name your-domain.com;

    access_log /var/log/nginx/your-app-access.log;
    error_log /var/log/nginx/your-app-error.log;

    ssl_certificate /path/to/ssl/server.crt;
    ssl_certificate_key /path/to/ssl/server.key;
    keepalive_timeout 70;
    add_header Alternate-Protocol 443:npn-spdy/2;

    location / {
        proxy_pass http://jhipster;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Request-Start $msec;
    }
}

upstream jhipster {
    server 127.0.0.1:8080;
}
which should work on any nginx.
This expects your app to be running on port 8080 on localhost, which is the case when you start it locally. It also requires you to install Java and other dependencies on your server.
A better way is to use the Docker option to create Docker images. There are many ways to handle Docker images, such as public repositories like Docker Hub, as well as private solutions like the GitLab Container Registry. You can even run the registry Docker image on a server with SSL yourself to get a private registry.
Then you can deploy your app behind the same nginx configuration as written above, directing traffic to a running Docker container. With this, you only need an arbitrary Linux distribution with Docker and nginx running.
To gain the power of CI/CD systems, you can deploy these images to complex systems like Kubernetes, but also to Docker Swarm (+ Docker Shipyard), or to smaller and easier-to-set-up solutions like Deis or Dokku. You can read this article, which guides you through a setup of GitLab + GitLab CI + Registry + Dokku, where you can deploy your JHipster application using git push origin master.
Note: I suggest not using the dev profile in production. To keep up with your application logs, consider a specific Logback configuration or solutions such as the JHipster Console (ELK stack).

get client ip of the request in net library nodejs

I am using sticky session in nodejs which is behind nginx.
Sticky session does the load balancing by checking the remoteAddress of the connection.
Now the problem is that it always takes the IP of the nginx server:
server = net.createServer({ pauseOnConnect: true }, function(c) {
    // Get int31 hash of ip
    var worker,
        ipHash = hash((c.remoteAddress || '').split(/\./g), seed);

    // Pass connection to worker
    worker = workers[ipHash % workers.length];
    worker.send('sticky-session:connection', c);
});
Can we get the client ip using net library?
Nginx Configuration:
server {
    listen 80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;

    #auth_basic "Restricted";
    #auth_basic_user_file /etc/nginx/.htpasswd;
    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        set_real_ip_from 0.0.0.0/0;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://socket_nodes;
        proxy_read_timeout 3000;
    }
}
As mef points out, sticky-session doesn't, at present, work behind a reverse proxy, where remoteAddress is always the same.
The pull request in the aforementioned issue, as well as an earlier pull request, might indeed solve the problem, though I haven't tested them myself.
However, those fixes rely on partially parsing packets, doing low-level routing while peeking into headers at a higher level... As the comments on the pull requests indicate, they're unstable, depend on undocumented behavior, suffer from compatibility issues, might degrade performance, etc.
If you don't want to rely on experimental implementations like that, one alternative would be leaving load balancing entirely up to nginx, which can see the client's real IP and so keep sessions sticky. All you need is nginx's built-in ip_hash load balancing.
Your nginx configuration might then look something like this:
upstream socket_nodes {
    ip_hash;
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
    server 127.0.0.1:8004;
    server 127.0.0.1:8005;
    server 127.0.0.1:8006;
    server 127.0.0.1:8007;
}

server {
    listen 80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        # Note: Trusting all addresses like this means anyone
        # can pretend to have any address they want.
        # Only do this if you're absolutely certain only trusted
        # sources can reach nginx with requests to begin with.
        set_real_ip_from 0.0.0.0/0;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://socket_nodes;
        proxy_read_timeout 3000;
    }
}
Now, to get this to work, your server code would also need to be modified somewhat:
var cluster = require('cluster');
var http = require('http');
var app = require('./app'); // Your Express app (path is just an example).

if (cluster.isMaster) {
    var STARTING_PORT = 8000;
    var NUMBER_OF_WORKERS = 8;

    for (var i = 0; i < NUMBER_OF_WORKERS; i++) {
        // Passing each worker its port number as an environment variable.
        cluster.fork({ port: STARTING_PORT + i });
    }

    cluster.on('exit', function(worker, code, signal) {
        // Create a new worker, log, or do whatever else you want.
    });
}
else {
    server = http.createServer(app);

    // Socket.io initialization would go here.

    // process.env.port is the port passed to this worker by the master.
    server.listen(process.env.port, function(err) {
        if (err) { /* Error handling. */ }
        console.log("Server started on port", process.env.port);
    });
}
The difference is that instead of using cluster to have all worker processes share a single port (load balanced by cluster itself), each worker gets its own port, and nginx can distribute load between the different ports to get to the different workers.
Since nginx chooses which port to go to based on the IP it gets from the client (or the X-Forwarded-For header in your case), all requests in the same session will always end up at the same process.
One major disadvantage of this method, of course, is that the number of workers becomes far less dynamic. If the ports are "hard-coded" in the nginx configuration, the Node server has to be sure to always listen to exactly those ports, no less and no more. In the absence of a good system for syncing the nginx config and the Node server, this introduces the possibility of error, and makes it somewhat more difficult to dynamically scale to e.g. the number of cores in an environment.
Then again, I imagine one could overcome this issue by either programmatically generating/updating the nginx configuration, so it always reflects the desired number of processes, or possibly by configuring a very high number of ports for nginx and then making Node workers each listen to multiple ports as needed (so you could still have exactly as many workers as there are cores). I have not, however, personally verified or tried implementing either of these methods so far.
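As a rough sketch of that second idea (untested; the ports variable and app module are hypothetical), a worker could simply create one server per port it has been assigned:

var http = require('http');
var app = require('./app'); // Hypothetical: your Express app module.

// Hypothetical: the master forked this worker with, e.g.,
// cluster.fork({ ports: '8000,8008' }), assigning it several ports.
process.env.ports.split(',').forEach(function (port) {
  // A single http.Server can only listen once, so create one per port.
  http.createServer(app).listen(parseInt(port, 10), function () {
    console.log('Worker listening on port', port);
  });
});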
Note regarding an nginx server behind a proxy
In the nginx configuration you provided, you seem to have made use of ngx_http_realip_module. While you made no explicit mention of this in the question, please note that this may in fact be necessary, in cases where nginx itself sits behind some kind of proxy, e.g. ELB.
The real_ip_header directive is then needed to ensure that it's the real client IP (in e.g. X-Forwarded-For), and not the other proxy's, that's hashed to choose which port to go to.
In such a case, nginx is actually serving a fairly similar purpose to what the pull requests for sticky-session attempted to accomplish: using headers to make the load balancing decisions, and specifically to make sure the same real client IP is always directed to the same process.
The key difference, of course, is that nginx, as a dedicated web server, load balancer and reverse proxy, is designed to do exactly these kinds of operations. Parsing and manipulating the different layers of the protocol stack is its bread and butter. Even more importantly, while it's not clear how many people have actually used these pull requests, nginx is stable, well-maintained and used virtually everywhere.
It seems that the module you're using does not yet support running behind a reverse proxy.
Have a look at this GitHub issue; some pull requests seem to fix your problem, so you may have a solution by using a fork of the module (you can point at it on GitHub from your package.json file).
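For example, a dependency in package.json can point straight at a GitHub fork; the user and branch below are placeholders:

{
  "dependencies": {
    "sticky-session": "some-user/sticky-session#some-branch"
  }
}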

Deploying a nodejs app with ExpressJS

So I have a nodejs app running on port 8081:
http://mysite.com:8081/
I want to access it simply by going to http://mysite.com/, so I set up a virtual host with expressjs:
app.use(express.vhost('yugentext.com', app));
That seems too easy, and it doesn't work. Am I confused about how expressjs vhosts work?
If you want to do this via Express, the problem comes from your DNS setup, not from the Express code.
Add an A record to your domain like this:
127.0.0.1 localhost *.mysite.com *.www.mysite.com
Then wait for DNS propagation (from seconds to hours).
If Apache or another web server is running any vhost on port 80, there will be conflicts.
And the other way:
Node.js and Express are far from the performance offered by Apache and nginx for vhost/proxy work.
Nginx > Apache (it fits better with Node.js).
Create a proxy from mysite.com to mysite.com:8080.
This way Node.js and Express handle the UI, methods, HTTP server, etc., and nginx or Apache handles the proxying, the vhosts, and serving your static assets very fast.
Check this config here: Trouble with Nginx and Multiple Meteor/Nodejs Apps
I think you're doing app.listen(8081). You should be doing app.listen(80). I have no experience with express vhosts, but you don't need them for this simple use case.
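In other words, something like this minimal sketch (note that binding a port below 1024 usually requires root or a capability such as setcap):

var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('reachable at http://mysite.com/ with no port in the URL');
});

// 80 is the default HTTP port, so the browser needs no :8081 suffix.
app.listen(80);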
upstream node-apps {
    server host_ip_1:3000;
    server host_ip_2:3000;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://node-apps/;
        proxy_redirect off;
    }
}
This is my nginx config; it proxy-passes to multiple servers. Good luck :p

deploying Node.js app for production

In most of the tutorials I've come across, you set up a Node.js web app by setting the server to listen on a port, and you access it in the browser by specifying that port. However, how would I deploy a Node.js app to be fully accessible by, say, a domain like foobar.com?
You have to bind your domain's apex (naked domain), and usually www, to your web server's IP or its CNAME.
Since you cannot bind an apex domain with a CNAME, you have to specify the server IP (or IPs, or your load balancers' IPs).
Your question is a little vague. If your DNS is already configured, you could bind to port 80 and be done with it. However, if you already have Apache or some other httpd running on port 80 to serve other hosts, that obviously won't work.
If you prefer to run the node process as non-root (and you should), it's much more likely that you're looking for a reverse proxy. My main httpd is nginx; the relevant option is proxy_pass. If you're using Apache you probably want mod_proxy.
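A minimal sketch of the Node side under that setup (port 8080 is just an assumption):

var http = require('http');

// Listen on an unprivileged port, localhost only; nginx's proxy_pass
// (or Apache's mod_proxy) forwards port-80 traffic here, so the Node
// process never needs root.
http.createServer(function (req, res) {
  res.end('hello from behind the reverse proxy');
}).listen(8080, '127.0.0.1');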
I just created an "A record" at my registrar pointing to my web server's IP address. Then you can start your node app on port 80.
An alternative would be to redirect:
http://www.foobar.com to http://www.foobar.com:82
Regards.
Use pm2 to run your node apps on the server.
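For example, pm2 can be driven from an ecosystem file; a minimal sketch, where the app name and entry point are placeholders:

// ecosystem.config.js -- app name and entry point are placeholders.
module.exports = {
  apps: [{
    name: 'my-app',
    script: './server.js',
    env: { NODE_ENV: 'production', PORT: 8080 }
  }]
};

Start it with pm2 start ecosystem.config.js, and pm2 will keep the app alive across crashes.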
Then use Nginx to proxy to your node server. I know this sounds weird, but that's the way it's done. Eventually, if you need to set up a load balancer, you do that all in Nginx too.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://APP_PRIVATE_IP_ADDRESS:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
This is the best tutorial I've found on setting up node.js for production.
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-14-04
For performance, you can also set up nginx to serve your public files:
location /public {
    allow all;
    access_log off;
    root /opt/www/site;
}
