I'm working on a pure JavaScript, HTML5 canvas game and want it to be 'real-time'. Based on my research, Node.js looked like an exciting prospect, so I set it up on my Ubuntu 12 web server with socket.io, express, etc.
I'm a programmer, but just a rookie in the world of web server backends, which is why I'm asking for your help. I'm confused about the overall system model and want to clarify how it works. Maybe I've read too many articles in a short time.
First of all: I run nginx 1.2.x on my web server. As far as I know, nginx handles the requests; it is bound to port 80 (in my case) and serves HTTP requests (also using php-fpm to serve PHP).
I also have a successfully running Node.js server on port 8080. I want the connection to use WebSockets (due to their nature and protocol), but since nginx does not support WebSockets yet, I got confused about what's going on.
If I go to http://mydomain.tld:8080, does the request go straight to the Node server and bypass nginx entirely? In that case the connection could be a WebSocket and not fall back to XHR or anything else (which I don't want, because of scalability), right?
Then what should I do to get the same effect at http://mydomain.tld/game/? Just proxy the request in nginx.conf to the Node server? Like:
# If a file does not exist in the specified root and nothing else is defined,
# we want to serve the request via node.js
try_files $uri @nodejs;

location @nodejs {
    proxy_pass http://127.0.0.1:8080;
    break;
}
From: https://stackoverflow.com/a/14025374/2039342
And is such a proxy a good workaround when we need WebSocket communication through nginx? Do we need it when we want a regular PHP site with a socket.io connection inside it? By this point I presume the goal is to run all traffic on port 80 and separate standard requests from WebSocket traffic. In my case, what is the simplest solution?
In http://www.exratione.com/2012/07/proxying-websocket-traffic-for-nodejs-the-present-state-of-play/ I found that HAProxy could be the one for me until nginx 1.3; is that right?
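(For reference: native WebSocket proxying landed in nginx 1.3.13, so with a new enough nginx the /game/ case can be handled without HAProxy or a TCP-proxy patch. A minimal sketch, with the port taken from my Node setup above:)

location /game/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}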
I know my questions are a bit chaotic, but I'm struggling to understand the exact technique. Please give me some hints | articles to read | a starting point | a basic config.
PS: I've read most of the related topics here.
PS2: To look less dumb: I've already built this game with Red5 (a Java-based Flash server) + Flash, so I just want to reconsider it and publish it with proper current technologies.
In the end, my basic problem was configuring nginx the right way.
First I reinstalled nginx as a patched version with nginx_tcp_proxy_module.
The next step was setting up the right config to route each request: via HTTP or TCP.
I wanted the standard files to be served normally from the web root, just the game logic by Node.js (and the socket.io client script itself, of course), and .php files by php-fpm.
So I ended up with the following working nginx setup:
user www-data;
worker_processes 16;

events {
    worker_connections 1024;
}

http {
    upstream node-js-myapp {
        server 127.0.0.1:3000;
    }

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    server {
        listen 80;
        server_name domain.xx;   # Multiple hostnames separated by spaces
        root /var/www/domain.xx; # Replace this

        charset utf-8;
        access_log /var/log/nginx/domain.xx.access.log combined;
        error_log /var/log/nginx/domain.xx.error.log;

        location ~ \.php$ {
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/conf.d/php_fpm; # Includes config for PHP-FPM (see below)
        }

        location / {
            index index.html index.htm;
        }

        location ^~ /socket.io/ {
            try_files $uri @node-js-myapp;
        }

        location /status {
            check_status;
        }

        location @node-js-myapp {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_pass http://node-js-myapp;
        }
    }
}

tcp {
    upstream websocket-myapp {
        server 127.0.0.1:8080;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 3000;
        server_name _;
        access_log /var/log/nginx/domain.xx.access.log;

        proxy_read_timeout 200000;
        proxy_send_timeout 200000;
        proxy_pass websocket-myapp;
    }
}
It's working well with this node.js server:
var app = require('express').createServer();
var io = require('socket.io').listen(app);

io.set('transports', [
    'websocket'
  , 'flashsocket'
  , 'htmlfile'
  , 'xhr-polling'
  , 'jsonp-polling'
]);

app.listen(8080);
while the requested file lives on the public side of my server and has this in its HEAD section:
<script src="/socket.io/socket.io.js"></script>
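For completeness, the page then opens the connection from the client like this (a minimal sketch for the socket.io 0.9 client; the hostname and the 'join' event are just placeholders for my domain and game logic):

<script>
    // Connects back through nginx on port 80, which hands /socket.io/ traffic to Node.
    var socket = io.connect('http://domain.xx');

    socket.on('connect', function () {
        socket.emit('join', { room: 'lobby' });
    });
</script>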
I'm pretty sure my nginx config is not complete and could contain bulls..., but it kind of works and is a good starting point.
Related
I have the following nginx configuration to run node.js with websockets behind an nginx reverse proxy:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    gzip on;

    upstream nodejsserver {
        server 127.0.0.1:3456;
    }

    server {
        listen 443 ssl;
        server_name myserver.com;
        error_log /var/log/nginx/myserver.com-error.log;

        ssl_certificate /etc/ssl/myserver.com.crt;
        ssl_certificate_key /etc/ssl/myserver.com.key;

        location / {
            proxy_pass https://nodejsserver;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 36000s;
        }
    }
}
The node.js server uses the same certificates as specified in the nginx configuration.
My issue is that in my browser (Firefox, though this issue occurs in other browsers too), my websocket connection resets every few minutes with a 1006 code. I have researched the reason for this error in this particular (or similar) constellation, and most of the answers here as well as on other resources point to the proxy_read_timeout nginx configuration variable not being set or being set too low. But this is not the case in my configuration.
Worthy of note is also that when I run node.js and access it directly, I do not experience these disconnects, both locally and on the server.
In addition, I've tried running nginx and node.js insecurely (port 80), and accessing ws:// instead of wss:// in my client. The issue remains the same.
There are a few things you need to do to keep a connection alive.
You should establish a keepalive connection count per worker process, and the documentation states you need to be explicit about your protocol as well. Other than that, you may be running into other kinds of timeouts, so edit your upstream and server blocks:
upstream nodejsserver {
    server 127.0.0.1:3456;
    keepalive 32;
}

server {
    # Stuff...
    location / {
        # Stuff...
        # The value can be different for each timeout; tune them to your application's needs
        proxy_read_timeout 36000s;
        proxy_connect_timeout 36000s;
        proxy_send_timeout 36000s;
        send_timeout 36000s; # This is stupid, try a smaller number
    }
}
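Independently of the nginx timeouts, it can also help to send periodic pings from the Node side so the connection never sits idle long enough to be reaped by an intermediary. A rough sketch, assuming the server uses the ws library (socket.io has its own built-in heartbeats); the port is just illustrative:

// Ping every connected client every 30 seconds; browsers answer pongs automatically.
var WebSocket = require('ws');
var wss = new WebSocket.Server({ port: 3456 });

setInterval(function () {
    wss.clients.forEach(function (client) {
        if (client.readyState === WebSocket.OPEN) {
            client.ping();
        }
    });
}, 30000);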
There are a number of other discussions on SO about the same subject; check this answer out.
I'm currently running into some configuration problems with NGINX where I keep getting a 502 error instead of NGINX falling back to a different directory if either the server is down or the directory doesn't exist.
I'm running a Node.js application on port 3000, have SSL set up, and have all HTTP requests redirect to HTTPS. In the scenario where my Node.js application is offline, I want to send the client to the default NGINX root directory /usr/share/nginx/html (index index.htm index.html) if possible.
I'm trying to have the Node.js application on port 3000 served at /, but in case the server is down, to fall back on the NGINX default directory and display the index.html there instead. Can anyone help or guide me through this process?
Thank you
Edit: I've tried what jfriend00 said in the comments, but now my proxy_pass doesn't seem to work. It defaults to 500.html regardless of whether my server is running or not. I've attached my nginx.conf file; I would appreciate any help.
events {
    worker_connections 1024;
}

http {
    upstream nodejs {
        server <<INTERNAL-PRIVATE-IP>>:3000; # 3000 is the default port
    }
    ...
    server {
        listen 80;
        server_name <<PUBLIC-IP>>;
        return 301 $scheme://<<DOMAIN>>$request_uri;
    }

    server {
        listen 443;
        ssl on;
        server_name <<DOMAIN>>.com www.<<DOMAIN>>.com;
        ...
        location / {
            proxy_pass http://nodejs;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }

        error_page 501 502 503 /500.html;
        location = /500.html {
            root /usr/share/nginx/html;
        }
    }
}
Adding the error_page as I did above works, and it successfully kicks back to the static page. Thanks @jfriend00.
If you're deploying to a live server, you might want to check this out, since I had a hard time figuring out why my proxy_pass and my NGINX configuration weren't working on CentOS deployed on EC2. It had nothing to do with the error_page.
I am using sticky sessions in Node.js, which sits behind nginx.
Sticky-session does its load balancing by checking the remoteAddress of the connection.
The problem is that it always gets the IP of the nginx server:
server = net.createServer({ pauseOnConnect: true }, function(c) {
    // Get int31 hash of ip
    var worker,
        ipHash = hash((c.remoteAddress || '').split(/\./g), seed);

    // Pass connection to worker
    worker = workers[ipHash % workers.length];
    worker.send('sticky-session:connection', c);
});
Can we get the client IP using the net library?
Nginx Configuration:
server {
    listen 80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;

    #auth_basic "Restricted";
    #auth_basic_user_file /etc/nginx/.htpasswd;
    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        set_real_ip_from 0.0.0.0/0;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://socket_nodes;
        proxy_read_timeout 3000;
    }
}
As mef points out, sticky-session doesn't, at present, work behind a reverse proxy, where remoteAddress is always the same.
The pull request in the aforementioned issue, as well as an earlier pull request, might indeed solve the problem, though I haven't tested myself.
However, those fixes rely on partially parsing packets, doing low-level routing while peeking into headers at a higher level... As the comments on the pull requests indicate, they're unstable, depend on undocumented behavior, suffer from compatibility issues, might degrade performance, etc.
If you don't want to rely on experimental implementations like that, one alternative would be leaving load balancing entirely up to nginx, which can see the client's real IP and so keep sessions sticky. All you need is nginx's built-in ip_hash load balancing.
Your nginx configuration might then look something like this:
upstream socket_nodes {
    ip_hash;
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
    server 127.0.0.1:8004;
    server 127.0.0.1:8005;
    server 127.0.0.1:8006;
    server 127.0.0.1:8007;
}

server {
    listen 80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        # Note: Trusting all addresses like this means anyone
        # can pretend to have any address they want.
        # Only do this if you're absolutely certain only trusted
        # sources can reach nginx with requests to begin with.
        set_real_ip_from 0.0.0.0/0;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://socket_nodes;
        proxy_read_timeout 3000;
    }
}
Now, to get this to work, your server code would also need to be modified somewhat:
if (cluster.isMaster) {
    var STARTING_PORT = 8000;
    var NUMBER_OF_WORKERS = 8;

    for (var i = 0; i < NUMBER_OF_WORKERS; i++) {
        // Passing each worker its port number as an environment variable.
        cluster.fork({ port: STARTING_PORT + i });
    }

    cluster.on('exit', function(worker, code, signal) {
        // Create a new worker, log, or do whatever else you want.
    });
}
else {
    server = http.createServer(app);

    // Socket.io initialization would go here.

    // process.env.port is the port passed to this worker by the master.
    server.listen(process.env.port, function(err) {
        if (err) { /* Error handling. */ }
        console.log("Server started on port", process.env.port);
    });
}
The difference is that instead of using cluster to have all worker processes share a single port (load balanced by cluster itself), each worker gets its own port, and nginx can distribute load between the different ports to get to the different workers.
Since nginx chooses which port to go to based on the IP it gets from the client (or the X-Forwarded-For header in your case), all requests in the same session will always end up at the same process.
One major disadvantage of this method, of course, is that the number of workers becomes far less dynamic. If the ports are "hard-coded" in the nginx configuration, the Node server has to be sure to always listen to exactly those ports, no less and no more. In the absence of a good system for syncing the nginx config and the Node server, this introduces the possibility of error, and makes it somewhat more difficult to dynamically scale to e.g. the number of cores in an environment.
Then again, I imagine one could overcome this issue by either programmatically generating/updating the nginx configuration, so it always reflects the desired number of processes, or possibly by configuring a very high number of ports for nginx and then making Node workers each listen to multiple ports as needed (so you could still have exactly as many workers as there are cores). I have not, however, personally verified or tried implementing either of these methods so far.
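For instance, a rough sketch of the "generate the config" idea (the include path and the reload command are assumptions for illustration, not part of the setup above):

var fs = require('fs');
var os = require('os');
var exec = require('child_process').exec;

var STARTING_PORT = 8000;
var workerCount = os.cpus().length;

// Build one "server 127.0.0.1:PORT;" line per worker.
var servers = [];
for (var i = 0; i < workerCount; i++) {
    servers.push('    server 127.0.0.1:' + (STARTING_PORT + i) + ';');
}
var upstream = 'upstream socket_nodes {\n    ip_hash;\n' + servers.join('\n') + '\n}\n';

// Hypothetical include path; the main nginx.conf would include this file.
fs.writeFileSync('/etc/nginx/conf.d/socket_nodes_upstream.conf', upstream);
exec('nginx -s reload', function (err) {
    if (err) { console.error('nginx reload failed:', err); }
});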
Note regarding an nginx server behind a proxy
In the nginx configuration you provided, you seem to have made use of ngx_http_realip_module. While you made no explicit mention of this in the question, please note that this may in fact be necessary, in cases where nginx itself sits behind some kind of proxy, e.g. ELB.
The real_ip_header directive is then needed to ensure that it's the real client IP (in e.g. X-Forwarded-For), and not the other proxy's, that's hashed to choose which port to go to.
In such a case, nginx is actually serving a fairly similar purpose to what the pull requests for sticky-session attempted to accomplish: using headers to make the load balancing decisions, and specifically to make sure the same real client IP is always directed to the same process.
The key difference, of course, is that nginx, as a dedicated web server, load balancer and reverse proxy, is designed to do exactly these kinds of operations. Parsing and manipulating the different layers of the protocol stack is its bread and butter. Even more importantly, while it's not clear how many people have actually used these pull requests, nginx is stable, well-maintained and used virtually everywhere.
It seems that the module you're using does not yet support running behind a reverse proxy.
Have a look at this GitHub issue; some pull requests seem to fix your problem, so you may have a solution by using a fork of the module (you can point at it on GitHub from your package.json file).
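For example, a dependency entry pointing at a fork might look like this in package.json (the user, repo and branch names are placeholders, not a specific fork recommendation):

{
  "dependencies": {
    "sticky-session": "someuser/sticky-session#fix-remote-address"
  }
}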
I am using Nginx to point a subdomain to a different port that a node.js server is listening on.
It works fine for http, but now I need to switch over to https.
This is what I have right now in sites-available/default:
server {
    listen 80;
    listen 443;
    server_name sub.example.com;

    location / {
        proxy_pass http://example.com:2222;
    }
}
Now that I am switching my node server over to https do I need to change the proxy_pass to https://example.com:2222?
Now that I am switching my node server over to https do I need to change the proxy_pass to https://example.com:2222?
The short answer is no. The proxied connection doesn't need to use the same protocol as the incoming request.
But you may require another directive:
proxy_set_header X-Forwarded-Proto https;
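If the Node side also needs to know whether the original request came in over HTTPS (for redirects, secure cookies, etc.) and you happen to be using Express, a minimal sketch of reading that header would be:

var express = require('express');
var app = express();

// Trust the X-Forwarded-* headers set by nginx.
app.enable('trust proxy');

app.get('/', function (req, res) {
    // req.protocol is 'https' when nginx sends X-Forwarded-Proto: https.
    res.send('Original protocol: ' + req.protocol);
});

app.listen(2222);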
I found this in my searches.
It turns out it's best to handle the SSL termination with nginx and leave Node running a plain HTTP server.
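A minimal sketch of that split, with nginx terminating TLS on 443 and proxying to a plain-HTTP Node server (the server name, certificate paths and port are placeholders matching the question above):

server {
    listen 443 ssl;
    server_name sub.example.com;

    ssl_certificate     /etc/ssl/sub.example.com.crt;
    ssl_certificate_key /etc/ssl/sub.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:2222;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}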
var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(9000, "127.0.0.1");

console.log('Server running at http://127.0.0.1:9000/');
I have the above code to get started with Node.js. When I start the process and open it in a browser I get a response once, but after that I don't get any response. Every time I restart I get one response and then, as always, it stops. How can I get this to run continuously? Thanks in advance!
Just adding more information related to this issue. Here is a snippet from the nginx conf file
server {
    listen 80;
    client_max_body_size 2M;
    server_name my_domain;
    root /home/node/My_Folder;
    # access_log /var/log/nginx.vhost.access.log main;
    send_timeout 1;

    location ~* ^.+\.(jpg|jpeg|JPG|JPEG|GIF|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mov|html)$ {
        autoindex on;
        root /home/node/My_Folder;
        expires 30d;
        break;
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        #proxy_connect_timeout 50ms;
        #proxy_send_timeout 200ms;
        #proxy_read_timeout 200ms;
        proxy_next_upstream error;
        proxy_pass http://Handler;
        #index no_ads.html no_ads.htm;
        break;
    }
}

upstream Handler {
    server 127.0.0.1:8010;
    server 127.0.0.1:8011;
    server 127.0.0.1:8012;
    server 127.0.0.1:8013;
    server 127.0.0.1:8014;
    server 127.0.0.1:8015;
    server 127.0.0.1:8016;
    server 127.0.0.1:8017;
    server 127.0.0.1:8018;
    server 127.0.0.1:8019;
    server 127.0.0.1:9000;
}
I tried using both
node app.js
forever start -a app.js
to start the app, but either way I get just one response and then a time-out. I do have a couple of other Node apps running on the same server and those seem to be working fine, so I am totally lost.
Your Node.js application runs on port 9000.
Inside your NGinx configuration file, you have the setting
proxy_pass http://Handler;
which should forward the incoming requests to the Node.js application. However, you are not forwarding the requests there directly, but to an upstream that is configured as follows:
upstream Handler {
    server 127.0.0.1:8010;
    server 127.0.0.1:8011;
    server 127.0.0.1:8012;
    server 127.0.0.1:8013;
    server 127.0.0.1:8014;
    server 127.0.0.1:8015;
    server 127.0.0.1:8016;
    server 127.0.0.1:8017;
    server 127.0.0.1:8018;
    server 127.0.0.1:8019;
    server 127.0.0.1:9000;
}
As NGinx by default uses round-robin for upstreams, only one request out of eleven goes to port 9000 (which works); the other ten try to reach servers that do not exist.
Hence no connection can be made, and you get the error.
Either remove all the other server entries within the upstream block, remove the upstream block entirely and configure the single Node.js server directly as the proxy target, or start additional Node.js servers on ports 8010, 8011, ..., and everything should work (see the sketch below for the simplest variant).
For details on how to configure upstreams, please have a look at the NGinx documentation on the HttpUpstreamModule.
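For example, the simplest variant of the fix, pointing the proxy at the single port the Node app actually listens on (a sketch against the configuration shown above, not a complete drop-in file):

location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://127.0.0.1:9000;
}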