The Goal:
Run multiple live node.js servers, independent of each other, under different doc roots.
Using NGINX:
server {
    server_name .lolwut1.com;
    root /var/www/html/lolwut1;

    # proxy pass to nodejs
    location / {
        proxy_pass http://127.0.0.1:5001/;
    }
}
server {
    server_name .lolwut2.com;
    root /var/www/html/lolwut2;

    # proxy pass to nodejs
    location / {
        proxy_pass http://127.0.0.1:5002/;
    }
}
/var/www/html/lolwut1/app.js
var http = require('http');
var server = http.createServer(function (request, response) {
    response.writeHead(200, {"Content-Type": "text/plain"});
    response.end("lolwut1\n");
});
server.listen(5001);
/var/www/html/lolwut2/app.js
var http = require('http');
var server = http.createServer(function (request, response) {
    response.writeHead(200, {"Content-Type": "text/plain"});
    response.end("lolwut2\n");
});
server.listen(5002);
So when I...
run node app.js in /var/www/html/lolwut1/ and hit lolwut1.com, I'm all good.
Questions:
But now, what if I want to start the second node server?
Is this a bad approach?... Am I thinking about this the wrong way?
What are the advantages/disadvantages of using node.js with a connect.vhost directive as a router rather than NGINX?
Use forever to start and stop your node apps.
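For example, assuming forever is installed globally (npm install -g forever), you could manage the two apps from the question like this:
cd /var/www/html/lolwut1 && forever start app.js
cd /var/www/html/lolwut2 && forever start app.js

forever list     # shows the running scripts and their index
forever stop 0   # stops one of them by the index shown in the list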
You're doing it right! This approach has worked well for me for quite a while.
Connect vhost Advantage: You don't have to install and configure nginx. The whole stack is node.js.
Nginx Advantage: Nginx is a mature and stable web server. It's very unlikely to crash or exhibit strange behavior. It can also host your static site, PHP site, etc.
If it were me, unless I needed some particular feature of Nginx, I'd pick Connect vhost or node-http-proxy for the sake of having an all-node.js stack.
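For reference, a minimal sketch of the Connect-based routing (this assumes connect 2.x, where the vhost middleware ships with connect itself; in connect 3+ it moved to the separate vhost package):
var connect = require('connect');
var http = require('http');

// one plain HTTP server per site; these could just as well be Express apps
var site1 = http.createServer(function (req, res) { res.end('lolwut1\n'); });
var site2 = http.createServer(function (req, res) { res.end('lolwut2\n'); });

// the front server dispatches on the Host header
var router = connect()
    .use(connect.vhost('lolwut1.com', site1))
    .use(connect.vhost('lolwut2.com', site2));

http.createServer(router).listen(80);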
But now what if I want to start the second node server? Is this a bad approach?...
When you cd to /var/www/html/lolwut2/ and run node app.js, it should start the second server on port 5002, and lolwut2.com should work.
Am I thinking about this the wrong way?
That's a valid way to run multiple node apps on the same server, provided you have enough memory and plenty of CPU power. It's also a good way to scale a single node app on the same machine across multiple cores: run several node processes and balance between them with the upstream directive (like here: https://serverfault.com/questions/179247/can-nginx-round-robin-to-a-server-list-on-different-ports); a rough sketch follows.
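As an illustration of that scaling idea, here is a hypothetical nginx snippet that round-robins two instances of the lolwut1 app (the second port, 5011, is made up; you would start a second copy of app.js listening on it):
upstream lolwut1_nodes {
    server 127.0.0.1:5001;
    server 127.0.0.1:5011;
}

server {
    server_name .lolwut1.com;

    location / {
        proxy_pass http://lolwut1_nodes;
    }
}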
Related
I have this node.js server file:
var app = require('http').createServer(handler),
    io = require('socket.io').listen(app),
    fs = require('fs');

app.listen(80);

function handler(req, res) {
    fs.readFile("/client.html", function (err, data) {
        if (err) {
            console.log(err);
            res.writeHead(500);
            return res.end('Error loading client');
        }
        res.writeHead(200);
        res.end(data);
    });
}
Is there a way to make this node.js file run automatically through the Apache default port number when a client tries to connect, without having to run it through the cmd?
without having to run it through the cmd
Short answer: not quite. Think of this node.js file as creating a server on a par with Apache.
The script creates a server with .createServer() and then tells it to listen on port 80 with .listen(80).
Since socket.io is bound to this node server (and can't just be plugged in to Apache) you will have to execute the script (run it through the cmd) to be able to utilize it.
That being said:
I'm sure one could make a daemon (a background program) out of the node server, so it fires up automatically on system start. If you then run it on port xxxx, you could tell Apache to map that port into its own local space (its folders). On a local machine this directive would look like this: ProxyPass /app http://127.0.0.1:xxxx/
There would then be two servers running on one machine, and Apache's http://127.0.0.1/app would hand requests off to the node server (listening on xxxx).
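A hypothetical sketch of that Apache side, with port 3000 standing in for xxxx (this assumes mod_proxy and mod_proxy_http are enabled):
<VirtualHost *:80>
    ServerName example.com

    # hand /app off to the node server listening locally on port 3000
    ProxyPass /app http://127.0.0.1:3000/
    ProxyPassReverse /app http://127.0.0.1:3000/
</VirtualHost>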
I would not advise going down that rabbit hole just yet. To start having fun with socket.io on Windows, just create a batch file with the command to run your server, node [path/to/your/server_file.js], for ease of use. Expand your node script. And stop using Apache. (Express is a nice module for serving web content with node...)
Grunt is not giving me an error, but when I navigate to the IP address, there is nothing there.
I know node works, because I used node to create a barebones server, and it serves at the same port and works just fine.
When I try to run grunt while the barebones server is running, it says that port is taken, so I know it at least thinks it's serving at that port.
When I use the same files on my local machine, everything works just fine: I can navigate to the port and it works, and so does the barebones server.
Any clues as to what could be causing this? By the way, I'm using yo to install angular-bootstrap.
For the barebones server, I just do this:
DIR=~/proj2/www
FILE=hello.js
mkdir -p $DIR
cat <<EOF >$DIR/$FILE
var http = require('http');
var server = http.createServer(function (request, response) {
    response.writeHead(200, {"Content-Type": "text/html"});
    response.end("<html><body>Hello World</body></html>");
});
server.listen(9000);
EOF
// Change this to '0.0.0.0' to access the server from outside.
hostname: 'localhost'
Well, as the code comment says, you just have to change 'localhost' to '0.0.0.0'.
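In a Yeoman-generated Gruntfile that setting lives in the grunt-contrib-connect options; a hypothetical excerpt after the change (the port and livereload values are just the generator defaults):
// Gruntfile.js (excerpt)
connect: {
    options: {
        port: 9000,
        // '0.0.0.0' listens on all interfaces, so the server is reachable
        // from other machines rather than just localhost
        hostname: '0.0.0.0',
        livereload: 35729
    }
}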
Thanks, me; you've been very helpful!
I've hit the need to put a load balancer in front of some Node.js servers, so I decided to compare Nginx and Node.js.
To do this test I simply spun up an EC2 micro instance (running Ubuntu 14.04) and installed Nginx and Node.js.
My nginx.conf file is:
user www-data;
worker_processes 1;
pid /run/nginx.pid;
http {
    server {
        listen 443 ssl;
        return 200 "hello world!";

        ssl_certificate /home/bitnami/server.crt;
        ssl_certificate_key /home/bitnami/server.key;
    }
}

events {
    worker_connections 768;
}
And my Node.js code is:
var http = require('https');
var fs = require('fs');
var serverOptions = {
    key: fs.readFileSync("/home/bitnami/server.key"),
    cert: fs.readFileSync("/home/bitnami/server.crt")
};

http.createServer(serverOptions, function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(443);
console.log('Server running');
I then used another EC2 server (an m3.medium, due to memory needs) to run wrk with the command:
./wrk -t12 -c400 -d30s https://ec2-54-190-184-119.us-west-2.compute.amazonaws.com
The end result was that Nginx could consistently pump through 5x more reqs/second than Node.js (12,748 vs 2,458), while using less memory (both were CPU limited).
My question is, since I'm not exactly great/experienced/knowledgeable in server admin or setup, am I doing something to severely mess up Node.js? And can I confidently draw the conclusion that in this situation, Nginx is absolutely the better choice?
I'm using the Wercker Continuous Integration & Delivery Platform to test and deploy a BitBucket repository to a node.js OpenShift server. Wercker loads the BitBucket repository, builds it, tests it in the node.js environment, and passes it without any issue. It also checks the code with JSHint and returns no errors.
Wercker also indicates that the deployment passes without errors, as does OpenShift. The problem is that when I check the application URL provided to me by OpenShift, it results in a server error:
503 Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
In troubleshooting this, I restarted the server (I'm running the basic account, and I have that option), but that doesn't seem to resolve the issue. Neither Wercker nor OpenShift indicates that there is a problem, but for some reason I'm simply unable to access that domain without an error.
How can I fix this (with the most basic tier)?
This was the solution:
I installed the RHC client tools available on the OpenShift website, checked the application logs, and found that OpenShift was unable to find a server.js file in the root directory. So I renamed my app.js file to server.js, and in my package.json I changed the "start" value to server.js. Then I configured the code in server.js to use the OpenShift environment variables, and that did it!
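For reference, a hypothetical package.json excerpt after the rename (the name and version are made up; the point is that "main" and the "start" script now point at server.js):
{
  "name": "myapp",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  }
}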
The server.js now reads:
var http = require('http');
var ip = process.env.OPENSHIFT_NODEJS_IP || '127.0.0.1',
    port = process.env.OPENSHIFT_NODEJS_PORT || '8080';

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(port, ip);

console.log('Server running at http://' + ip + ':' + port + '/');
I'm now able to connect to the application URL and get the basic "Hello World" response.
(If at this point you're still unable to connect to your application, restart your server, and that should do the trick.)
I hope this helps someone else in the future.
Here's a helpful resource that I leaned on: https://gist.github.com/ryanj/5267357
Your app should be able to listen on the IP and port defined by OpenShift's reverse proxy.
You need to change the port number and perhaps the IP in the server configuration.
Explained here: OpenShift node.js Error: listen EACCES
According to this answer:
You should run multiple Node servers on one box, 1 per core and split request traffic between them. This provides excellent CPU-affinity and will scale throughput nearly linearly with core count.
Got it, so let's say our box has 2 cores for simplicity.
I need a complete example of a Hello World app being load balanced between two Node servers using NGINX.
This should include any NGINX configuration as well.
app.js
var http = require('http');
var port = parseInt(process.argv[2]);
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(port);
console.log('Server running at http://localhost:' + port + '/');
nginx configuration
upstream app {
    server localhost:8001;
    server localhost:8002;
}

server {
    location / {
        proxy_pass http://app;
    }
}
Launch your app
node app.js 8001
node app.js 8002
HttpUpstreamModule documentation
Additional reading material
cluster module - still experimental, but you don't need nginx (see the sketch after this list)
forever module - in case your app crashes
nginx and websockets - how to proxy websockets in the new nginx version
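Since the cluster module is mentioned above, here is a minimal sketch of the same Hello World app served without nginx, forking one worker per core (port 8000 is arbitrary):
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    // fork one worker per core; the workers all share the same listening port
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
    cluster.on('exit', function (worker) {
        console.log('worker ' + worker.process.pid + ' died, starting a new one');
        cluster.fork();
    });
} else {
    http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('Hello World\n');
    }).listen(8000);
}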