get client ip of the request in net library nodejs

I am using sticky-session in Node.js, which is behind nginx.
sticky-session does the load balancing by checking the remoteAddress of the connection.
The problem is that it always takes the IP of the nginx server.
server = net.createServer({ pauseOnConnect: true }, function(c) {
    // Get int31 hash of ip
    var worker,
        ipHash = hash((c.remoteAddress || '').split(/\./g), seed);

    // Pass connection to worker
    worker = workers[ipHash % workers.length];
    worker.send('sticky-session:connection', c);
});
Can we get the client IP using the net library?
Nginx Configuration:
server {
    listen 80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;

    #auth_basic "Restricted";
    #auth_basic_user_file /etc/nginx/.htpasswd;
    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        set_real_ip_from 0.0.0.0/0;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://socket_nodes;
        proxy_read_timeout 3000;
    }
}

As mef points out, sticky-session doesn't, at present, work behind a reverse proxy, where remoteAddress is always the same.
The pull request in the aforementioned issue, as well as an earlier pull request, might indeed solve the problem, though I haven't tested myself.
However, those fixes rely on partially parsing packets, doing low-level routing while peeking into headers at a higher level... As the comments on the pull requests indicate, they're unstable, depend on undocumented behavior, suffer from compatibility issues, might degrade performance, etc.
If you don't want to rely on experimental implementations like that, one alternative would be leaving load balancing entirely up to nginx, which can see the client's real IP and so keep sessions sticky. All you need is nginx's built-in ip_hash load balancing.
Your nginx configuration might then look something like this:
upstream socket_nodes {
    ip_hash;
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
    server 127.0.0.1:8004;
    server 127.0.0.1:8005;
    server 127.0.0.1:8006;
    server 127.0.0.1:8007;
}
server {
    listen 80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        # Note: Trusting all addresses like this means anyone
        # can pretend to have any address they want.
        # Only do this if you're absolutely certain only trusted
        # sources can reach nginx with requests to begin with.
        set_real_ip_from 0.0.0.0/0;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://socket_nodes;
        proxy_read_timeout 3000;
    }
}
Now, to get this to work, your server code would also need to be modified somewhat:
var cluster = require('cluster');
var http = require('http');
var express = require('express');
var app = express(); // Your Express app setup would go here.

if (cluster.isMaster) {
    var STARTING_PORT = 8000;
    var NUMBER_OF_WORKERS = 8;

    for (var i = 0; i < NUMBER_OF_WORKERS; i++) {
        // Passing each worker its port number as an environment variable.
        cluster.fork({ port: STARTING_PORT + i });
    }

    cluster.on('exit', function(worker, code, signal) {
        // Create a new worker, log, or do whatever else you want.
    });
}
else {
    var server = http.createServer(app);
    // Socket.io initialization would go here.

    // process.env.port is the port passed to this worker by the master.
    server.listen(process.env.port, function(err) {
        if (err) { /* Error handling. */ }
        console.log("Server started on port", process.env.port);
    });
}
The difference is that instead of using cluster to have all worker processes share a single port (load balanced by cluster itself), each worker gets its own port, and nginx can distribute load between the different ports to get to the different workers.
Since nginx chooses which port to go to based on the IP it gets from the client (or the X-Forwarded-For header in your case), all requests in the same session will always end up at the same process.
One major disadvantage of this method, of course, is that the number of workers becomes far less dynamic. If the ports are "hard-coded" in the nginx configuration, the Node server has to be sure to always listen to exactly those ports, no less and no more. In the absence of a good system for syncing the nginx config and the Node server, this introduces the possibility of error, and makes it somewhat more difficult to dynamically scale to e.g. the number of cores in an environment.
Then again, I imagine one could overcome this issue by either programmatically generating/updating the nginx configuration, so it always reflects the desired number of processes, or possibly by configuring a very high number of ports for nginx and then making Node workers each listen to multiple ports as needed (so you could still have exactly as many workers as there are cores). I have not, however, personally verified or tried implementing either of these methods so far.
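For illustration, here's a minimal, untested sketch of that second idea, where each worker covers a slice of a fixed port range. The port arithmetic and constants here are my own assumptions, not something sticky-session or nginx provides:

var cluster = require('cluster');
var http = require('http');
var os = require('os');

var STARTING_PORT = 8000;
var TOTAL_PORTS = 16; // Must match the ports listed in the nginx upstream.

if (cluster.isMaster) {
    var numWorkers = os.cpus().length;
    for (var i = 0; i < numWorkers; i++) {
        // Tell each worker which slice of the port range it owns.
        cluster.fork({ workerIndex: i, totalWorkers: numWorkers });
    }
}
else {
    var index = parseInt(process.env.workerIndex, 10);
    var total = parseInt(process.env.totalWorkers, 10);
    // Each worker claims every port whose offset modulo the worker count
    // matches its index, so all ports stay covered regardless of how many
    // cores (and thus workers) the machine has.
    for (var p = STARTING_PORT; p < STARTING_PORT + TOTAL_PORTS; p++) {
        if ((p - STARTING_PORT) % total === index) {
            // One server instance per port; a single server can't listen twice.
            http.createServer(function(req, res) {
                res.end('Handled by worker ' + index);
            }).listen(p);
        }
    }
}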
Note regarding an nginx server behind a proxy
In the nginx configuration you provided, you seem to have made use of ngx_http_realip_module. While you made no explicit mention of this in the question, please note that this may in fact be necessary, in cases where nginx itself sits behind some kind of proxy, e.g. ELB.
The real_ip_header directive is then needed to ensure that it's the real client IP (in e.g. X-Forwarded-For), and not the other proxy's, that's hashed to choose which port to go to.
In such a case, nginx is actually serving a fairly similar purpose to what the pull requests for sticky-session attempted to accomplish: using headers to make the load balancing decisions, and specifically to make sure the same real client IP is always directed to the same process.
The key difference, of course, is that nginx, as a dedicated web server, load balancer and reverse proxy, is designed to do exactly these kinds of operations. Parsing and manipulating the different layers of the protocol stack is its bread and butter. Even more importantly, while it's not clear how many people have actually used these pull requests, nginx is stable, well-maintained and used virtually everywhere.
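As a quick sanity check of what each worker actually sees behind nginx, you could add a small debug route like this (a sketch assuming an Express app; the route name is made up):

app.get('/whoami', function(req, res) {
    res.json({
        // Behind a reverse proxy this is nginx's address, not the client's.
        socketAddress: req.connection.remoteAddress,
        // The real client IP, as forwarded by nginx.
        xForwardedFor: req.headers['x-forwarded-for'],
        xRealIp: req.headers['x-real-ip']
    });
});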

It seems that the module you're using doesn't yet support running behind a reverse proxy.
Have a look at this GitHub issue; some pull requests seem to fix your problem, so you may have a solution by using a fork of the module (you can point to it on GitHub from your package.json file).
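For reference, pointing a dependency at a GitHub fork in package.json looks roughly like this (the user, repo, and branch below are placeholders, not an actual fork):

{
    "dependencies": {
        "sticky-session": "git+https://github.com/someuser/sticky-session.git#some-branch"
    }
}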

Related

Nodejs Child Process on another server using server to server communication

I want to run a child process using Node.js from one server on another. I have a process that is too heavy to run, and it causes my main server to slow down, so I want to run my heavy processes on another server that will perform heavy tasks like data modifications and return a buffer of that data, but I could not find anything similar to this.
For example, I have server A that is running my website, and users are sharing their content using it. When user traffic jumps high, my server gets slow because of data like image and video uploads and PDF report generation on top of those images and videos, while also serving the site content. I want to perform these tasks on server B, so that server A will only work for data serving and traffic management.
Apparently at this point you probably need to split your webserver frontend routes into different worker servers.
Let's suppose you're using nginx as a website frontend. If you're not, your first step would be to set up an nginx web frontend.
1 - If you haven't done so already, serve all public static content (like pdf files, videos, images, etc.) directly from nginx, using different rules for static content and node server routes:
Something as basic as this:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }

    location /static {
        root /dir/to/my/static/files; # here you have your videos, images, etc.
    }
}
2 - Now, if you need to separate your node server onto 2 services, you can just create 2 (or more) nginx proxy rules:
server {
    listen 80;
    server_name example.com;

    location /api {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.2:5000; # server 2
    }

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000; # server 1
    }

    location /static {
        root /dir/to/my/static/files;
    }
}
That way example.com/api/* routes will go to your secondary Node server (at IP 127.0.0.2), example.com/static will be served directly by nginx at blazing-fast speeds, while the non-mapped routes will be served by the default main Node server on 127.0.0.1.
There are many ways to set up proxies and optimize nginx so that it can, for instance, go through a pool of node servers in round-robin fashion; you can also compress data and use protocols like HTTP/2 to take the load off the slower node-based webserver (i.e. Express).
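To complete the picture, here's a minimal sketch of the two Node services the configuration above assumes (two Express apps bound to the two loopback addresses; the routes are placeholders). In practice these would run as separate processes, possibly on separate machines:

var express = require('express');

// Server 1 (127.0.0.1:5000): serves the main site's non-mapped routes.
var site = express();
site.get('/', function(req, res) {
    res.send('main site');
});
site.listen(5000, '127.0.0.1');

// Server 2 (127.0.0.2:5000): handles the heavy /api routes. Note that nginx
// passes the full URI through, so handlers still see the /api prefix.
var api = express();
api.get('/api/report', function(req, res) {
    // Heavy work (e.g. PDF report generation) would happen here.
    res.send('report generated');
});
api.listen(5000, '127.0.0.2');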

nginx behind haproxy to static html ssl getting real IP address

My problem is getting the "real" IP address from the web at the nginx level while serving a static Vue.js site via SSL.
I want to block certain IP addresses. How can I get the real IP address if I can't use proxy_pass, since I only link to a static location?
haproxy (tcp) (port: 443) ==> encrypted request ==> nginx (port: 8085) ==> request passed to the '/' location, getting the real IP for range blocking.
Please also see questions/comments in the nginx vhost file. Am I on the right track here or does this need to be done entirely differently?
haproxy setup:
frontend ssl_front_433 xx.xx.xx.xx:443
    mode tcp
    option tcplog
    use_backend ssl_nginx_backend_8085

backend ssl_nginx_backend_8085
    mode tcp
    balance roundrobin
    option tcp-check
    server srv-2 127.0.0.1:8085 check fall 3 rise 2 inter 4s
nginx setup:
server {
    listen 8085 ssl;
    server_name mydomain;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ssl_certificate ./fullchain.pem;
    ssl_certificate_key ./privkey.pem;
    include include.d/ssl.conf;

    # I want to only allow certain IP addresses.
    # haproxy of course always returns 127.0.0.1, thus this is not working.
    include include.d/ip_range.conf;

    location / {
        # How do I get the proxy headers to be applied here?
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        # Do I need a proxy_pass, and if so where should I pass to,
        # in order to use it with static html/js?
        # Can I use an upstream to a static location?
        #proxy_pass http://;
        try_files $uri $uri/ /index.html;
    }
}
On the nginx side, you can control which IP addresses or ranges are permitted by adding a deny all and an allow range to your server block, like so:
allow 192.168.1.0/24;
deny all;
Note: The nginx docs are always an excellent place to start; here are the docs for restricting access by IP addresses and ranges.
First, I would challenge you to reconsider why you need a load balancer with haproxy for something as simple as a html/css/js static site. More infrastructure introduces more complications.
Second, an upstream in nginx is only needed if you want to point requests at, for example, a local wsgi server; in your case this is static content, so you shouldn't need to point to an upstream – not unless you have some sort of wsgi service you want to forward requests to.
Finally, as for haproxy only forwarding requests as 127.0.0.1: first make sure the IP is in the header (i.e. X-Real-IP), then you can try adding something like this to your haproxy config (source), if you indeed want to keep haproxy:
frontend all_https
    option forwardfor header X-Real-IP
    http-request set-header X-Real-IP %[src]
The haproxy documentation is also a good resource for preserving source IP addresses.

nginx or node app using ssl

I have nodejs express sitting behind nginx. Currently everything works fine. I have Nginx implemented with SSL to utilize https, which then simply forwards the request along to the running node application at the specified port. I'm wondering if this is the best way to do it though? Here's what I currently have...
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name mysite.somethingelse.com www.mysite.somethingelse.*;

    ssl_certificate /path/to/my/cert.pem;
    ssl_certificate_key /path/to/my/key.key;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
What if I simply implement an https server on the express end? And then proxy the request to that, and let that do all the decoding? Something like this:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name mysite.somethingelse.com www.mysite.somethingelse.*;

    location / {
        proxy_pass https://localhost:3000;
        proxy_http_version 1.1;
        proxy_cert path/to/cert.pem
        proxy_key path/to/key.key
    }
}
This second version is likely not even correct. But what I'm going for is implementing SSL on the node app rather than letting nginx do it.
Do I gain anything from doing one vs the other?
What's the best practice here... letting nginx or the node app do this?
And, assuming it's better to do it on the node app, what is the correct nginx setup here?
Thank you!
If you are in pursuit of performance and are going to implement load balancing across several node instances, it is a good idea to terminate SSL on a standalone machine (or machines). But if you are going to run a single instance of your node app and the foreseeable load is not high, then it is probably simpler to set up SSL in node. In that case I would also recommend refraining from using nginx and switching to NAT (using the firewall), because this approach will use fewer resources.
Another argument in favor of terminating SSL on nginx is documentation and configuration best practices. You should know that configuring SSL is not only about setting up a certificate and private key; it involves lots of security considerations around different ciphers, protocols, and vulnerabilities. And it is easier to find working solutions, tips, and configuration examples for nginx than for node.
So, regarding your questions:
1. It depends on your goals.
2. It depends on your goals, but currently I would recommend using nginx for SSL termination.
3. I would recommend implementing NAT instead of nginx in this case.
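For completeness, terminating SSL directly in Node (the alternative discussed above) would look roughly like this minimal sketch; the certificate paths are placeholders:

var https = require('https');
var fs = require('fs');
var express = require('express');

var app = express();

var options = {
    // The same certificate and key nginx would otherwise load.
    key: fs.readFileSync('/path/to/my/key.key'),
    cert: fs.readFileSync('/path/to/my/cert.pem')
};

// Node handles the TLS handshake itself; no nginx in front.
https.createServer(options, app).listen(443);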

Mixed content error when proxying websocket through nginx with SSL

I am working on a node.js application using express to serve content and socket.io for websocket communication. The setup has been working fine, but now I want to be able to access the websocket via SSL, too. I thought using nginx (which we already used for other stuff) as a proxy was a good idea, and configured it like this:
upstream nodejs {
    server 127.0.0.1:8080;
}

server {
    listen 443 ssl;

    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    server_name _;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://nodejs;
        proxy_redirect off;
    }
}
The node.js server is set up like this:
var express = require('express'),
    http = require('http'),
    app = express();

// some app configuration that I don't think matters

server = http.createServer(app).listen(8080);
var io = require('socket.io').listen(server);

io.configure(function() {
    io.set('match original protocol', true);
    io.set('log level', 0);
    io.set('store', redisStore); // creation of redisStore not shown
});
Both nginx and node.js run inside a Vagrant box which forwards port 443 (which nginx listens on) to port 4443 on the host system.
With this setup, navigating to https://localhost:4443 (using Firefox 23) gives me access to the files served by Express, but when socket.io tries to connect to the socket, it throws the following error:
Blocked loading mixed active content "http://localhost:4443/socket.io/1/?t=1376058430540"
This outcome is sadly obvious, as it tries to load the JS file via HTTP from inside an HTTPS page, which Firefox does not allow. The question is why it does so in the first place.
Socket.io tries to determine which protocol is used to access the web page, and uses the same protocol in the construction of the above URL. In this case, it thinks it is being accessed over HTTP, which may be the result of being proxied. However, as I understand it, setting match original protocol to true in the socket.io config is supposed to help in situations like this, but it does not in my case.
I have found numerous questions and answers here about websocket proxying, but none that deal with this particular issue. So I'm pretty much at wit's end, and would really appreciate some advice.
Change match original protocol to match origin protocol:
io.configure(function() {
    //io.set('match original protocol', true);
    io.set('match origin protocol', true);
    ...
});

Nginx + (nodejs, socketio, express) + php site

I'm working on a fully JS, HTML5 canvas game and want it to be 'real-time'. Based on my research I found out node.js is an exciting prospect, so I configured it on my Ubuntu 12 webserver with socket.io, express, etc.
I'm a programmer, but just a rookie in the world of webserver backends; that's why I ask for your help. I'm confused about the overall system model and want clarification on how it works. Maybe I've read too many articles in a short time.
First of all: I run nginx 1.2.x on my webserver. As I understand it, nginx handles the requests; it's bound to port 80 (for me) and serves HTTP requests (also using php-fpm to serve PHP).
Then again, I have a successfully running node.js server on port 8080. I want the connection to be via websocket (due to its nature and protocol); since nginx does not support websockets yet, I'm confused about what's going on.
If I go to http://mydomain.tld:8080, does this go through the node server and bypass nginx? In this case the connection could be via websocket and not fall back to xhr or anything else (I don't want that, because of scalability), right?
Then what should I do to have the same effect at http://mydomain.tld/game/? Just proxy the request in nginx.conf to the node server? Like:
# If a file does not exist in the specified root and nothing else is defined,
# we want to serve the request via node.js.
try_files $uri @nodejs;

location @nodejs {
    proxy_pass http://127.0.0.1:8080;
    break;
}
From: https://stackoverflow.com/a/14025374/2039342
And is it a good workaround to proxy the websocket communication via nginx? What do we do when we want a regular PHP site with a socket.io connection inside it? By this point I presume the goal is to run the traffic on port 80 and separate standard requests from websocket traffic. In my case, what is the simplest solution?
In this article (http://www.exratione.com/2012/07/proxying-websocket-traffic-for-nodejs-the-present-state-of-play/) I found out HAProxy could be the one for me until nginx 1.3; is it?
I know my questions are a bit chaotic, but I'm struggling to understand the exact technique. Please give me some hint | article to read | starting point | basic config.
P.S.: I've read most of the related topics here.
P.S.2: To look less dumb: I've already done this game in red5 (a Java-based Flash server) + Flash, so I just want to reconsider and publish it with proper, current technologies.
Finally, my basic problem was configuring nginx in the right way.
First I reinstalled nginx as a patched version with nginx_tcp_proxy_module.
The next step was setting up the right config to handle requests: via http or tcp.
I wanted the standard files to be served normally from the webroot, the game logic by node.js (and the socket.io JS itself, of course), and .php files by php-fpm.
So I ended up with the following working nginx setup:
user www-data;
worker_processes 16;

events {
    worker_connections 1024;
}

http {
    upstream node-js-myapp {
        server 127.0.0.1:3000;
    }

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    server {
        listen 80;
        server_name domain.xx; # Multiple hostnames separated by spaces
        root /var/www/domain.xx; # Replace this
        charset utf-8;

        access_log /var/log/nginx/domain.xx.access.log combined;
        error_log /var/log/nginx/domain.xx.error.log;

        location ~ \.php$ {
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/conf.d/php_fpm; # Includes config for PHP-FPM (see below)
        }

        location / {
            index index.html index.htm;
        }

        location ^~ /socket.io/ {
            try_files $uri @node-js-myapp;
        }

        location /status {
            check_status;
        }

        location @node-js-myapp {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_pass http://node-js-myapp;
        }
    }
}

tcp {
    upstream websocket-myapp {
        server 127.0.0.1:8080;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 3000;
        server_name _;

        access_log /var/log/nginx/domain.xx.access.log;

        proxy_read_timeout 200000;
        proxy_send_timeout 200000;
        proxy_pass websocket-myapp;
    }
}
It's working well with this node.js server:
var app = require('express').createServer()
var io = require('socket.io').listen(app);
io.set('transports', [
'websocket'
, 'flashsocket'
, 'htmlfile'
, 'xhr-polling'
, 'jsonp-polling'
]);
app.listen(8080);
The requesting page is on the public side of my server, with this in its HEAD section:
<script src="/socket.io/socket.io.js"></script>
I'm pretty sure my nginx config is not complete and could contain bulls..., but it's kind of working and a good starting point.
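For completeness, the client-side counterpart of that script include is just this (a sketch assuming the old socket.io 0.9-era API used above):

var socket = io.connect('http://domain.xx');
socket.on('connect', function() {
    // Game events would be wired up here.
});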
