Sharing one port among multiple node.js HTTP processes - linux

I have a root server running with several node.js projects on it. They are supposed to run separately in their own processes and directories. Consider this file structure:
/home
+-- /node
+-- /someProject | www.some-project.com
| +-- index.js
| +-- anotherFile.img
| +-- ...
+-- /anotherProject | www.another-project.com
| +-- /stuff
| +-- index.js
| +-- ...
+-- /myWebsite | www.my-website.com
| +-- /static
| +-- index.js
| +-- ...
+-- ... | ...
Each index.js should be started as an individual process with its cwd set to its parent folder (someProject, anotherProject, etc.).
Think of vHosts: each project starts a web server which listens on its own domain. And there's the problem. Only one script can start, since they all try to bind to port 80. I dug into the node.js API and looked for a possible solution: child_process.fork().
Sadly this doesn't work very well. When I try to send a server instance to the master process (to emit a request on it later), or an object consisting of request and response from the master to the slave, I get errors. This is because node.js internally tries to convert these advanced objects to a JSON string and then reconvert them to their original form. This makes all the objects lose their references and functionality.
Second approach: child.js
var http = require("http");
var server = http.createServer(function(req, res) {
// stuff...
});
server.listen(80);
process.send(server); // Nope
First approach: master.js
var http = require("http"),
cp = require("child_process");
var child = cp.fork("/home/node/someProject/index.js", [], { env: "/home/node/someProject" });
var router = http.createServer(function(req, res) {
// domaincheck, etc...
child.send({ request: req, response: res }); // Nope
});
router.listen(80);
So this is a dead end. But, hey! Node.js offers a way to send handles between processes. Here's an example from the documentation:
master.js
var server = require('net').createServer();
var child = require('child_process').fork(__dirname + '/child.js');

// Open up the server object and send the handle.
server.listen(1337, function() {
    child.send({ server: true }, server._handle);
});
child.js
process.on('message', function(m, serverHandle) {
    if (serverHandle) {
        var server = require('net').createServer();
        server.listen(serverHandle);
    }
});
Here the child listens directly on the master's server, so there is no domain check in between. So this is a dead end too.
I also thought about Cluster, but this uses the same technology as the handle and therefore has the same limitations.
So... are there any good ideas?
What I currently do is rather hack-ish. I've made a package called distroy. It binds to port 80 and internally proxies all requests to Unix domain socket paths like /tmp/distroy/http/www.example.com, on which the separate apps listen. This also (kinda) works for HTTPS (see my question on SNI).
The remaining problem is that the original IP address is lost, as it's now always 127.0.0.1. I think I can circumvent this by monkey-patching net.Server so that I can transmit the IP address before opening the connection.
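For illustration, here's a minimal sketch of the idea behind distroy, not the actual package code: pick a per-domain Unix socket from the Host header and pipe the request through. The socket path layout mirrors the one mentioned above; error handling and HTTPS are omitted.

var http = require("http");

http.createServer(function (req, res) {
    // Derive the target socket from the Host header (strip any port suffix).
    var host = (req.headers.host || "").split(":")[0];

    // Forward the request to the app listening on the per-domain Unix socket.
    var proxyReq = http.request({
        socketPath: "/tmp/distroy/http/" + host,
        path: req.url,
        method: req.method,
        headers: req.headers
    }, function (proxyRes) {
        res.writeHead(proxyRes.statusCode, proxyRes.headers);
        proxyRes.pipe(res);
    });

    req.pipe(proxyReq);
}).listen(80);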

If you are interested in a node.js solution, check out bouncy, a WebSocket- and HTTPS-capable HTTP router proxy/load balancer written in node.js.
Define your routes.json like
{
    "beep.example.com" : 8000,
    "boop.example.com" : 8001
}
and then run bouncy using
bouncy routes.json 80
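If you'd rather route programmatically, bouncy also exposes a JavaScript API; here's a rough sketch based on its documented callback style, reusing the hosts and ports from routes.json above:

var bouncy = require('bouncy');

var routes = {
    "beep.example.com": 8000,
    "boop.example.com": 8001
};

bouncy(function (req, res, bounce) {
    var port = routes[req.headers.host];
    if (port) {
        bounce(port); // proxy the request to the matching backend
    } else {
        res.statusCode = 404;
        res.end('no such host');
    }
}).listen(80);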

Personally, I'd just have them all listen on dedicated ports or preferably sockets and then stick everything behind either a dedicated router script or nginx. It's the simplest approach IMO.
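For the nginx route, a minimal sketch of one vhost could look like this; the backend port 8001 is made up, and you'd repeat a server block per project:

server {
    listen 80;
    server_name www.some-project.com;

    location / {
        # forward to the node process for this project
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        # preserve the client IP for the app
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}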

For Connect middleware there is the vhost extension. Maybe you could copy some of its concepts.
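As a sketch of that concept, host-based routing with the vhost middleware could look like the following (Express 3 style, where express.vhost re-exports the Connect middleware; the app names are made up). Note that, unlike the setup asked about, this runs all projects in a single process:

var express = require('express');

var someProject = express();
someProject.get('/', function (req, res) {
    res.send('Hello from some-project!');
});

var anotherProject = express();
anotherProject.get('/', function (req, res) {
    res.send('Hello from another-project!');
});

// One listener on port 80 dispatches by Host header.
var router = express();
router.use(express.vhost('www.some-project.com', someProject));
router.use(express.vhost('www.another-project.com', anotherProject));
router.listen(80);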

Related

express.js - serve different pages based on ip

I'm developing a NodeJS server that runs on a Raspberry Pi. The Pi is attached to a screen and shows a website in kiosk mode. Users that see that screen can also connect to the server, but should be given a different page. I want both sites to run at the root (http://localhost/ and http://serverIP/) but serve different pages.
The server should see if the localhost is making a request or if any other device is making the request and serve the appropriate page.
Currently I can reach the same page on both localhost and a remote device, and I can also see whether the request comes from localhost or a remote device. But when I try to redirect the remote IPs, they get stuck in a redirect loop.
app.use('/', function(request, response, next) {
    let clientIP = getClientIP(request);
    console.log('client IP: ', clientIP);
    if (clientIP == '::1' || clientIP == '::ffff:127.0.0.1') {
        // if localhost request, go to the next middleware to direct it to public/index.html
        next();
    } else {
        // if remote device request, return the mobile page in public/mobilepage/mobile.html.
        // This gets stuck in a loop: when redirected, app.use() gets called again with the
        // redirected request, sees it is not localhost, and redirects again. What should I
        // better do here?
        response.redirect('/mobilepage');
    }
}, express.static('public'));

function getClientIP(request) {
    return request.headers['x-forwarded-for'] || request.connection.remoteAddress;
}
My project folder looks like this
ProjectFolder/
| - server.js //the node server
| - public/ //the folder with the website
| | - index.html //the localhost main website
| | - assets/ //folder with all the css and js for the localhost website
| | - mobilepage/ //folder with the page for the remote devices
| | | - mobile.html //the page for the remote devices
| | | - mobileStyle.css //the style for the remote devices page
| | | - mobileScript.js //the script for the remote devices page
| | - sites/ //folder with all other sites for the localhost website
|
| - ... //other files like node_modules etc. that are not important to the question
I hope I explained my situation well enough. If something is unclear please let me know, I would love to get some help on this.
You can install a package called express-ip:
const express = require('express');
const expressip = require('express-ip');
const app = express();

app.use(expressip().getIpInfoMiddleware);

app.get('/', function (req, res) {
    res.send(req.ipInfo);
});
Note: inside app.use you can then add conditional logic based on the value of req.ipInfo and your objective. Also, this won't work on localhost, since the middleware needs a public IP to do a lookup. To test it, use ngrok to expose http://localhost:yourportvalue.

Setup of AWS ElasticBeanstalk with Websockets

I'm trying to set up WebSockets in order to send messages to AWS, so I can then process the message, send some payload to other resources in the cloud, and deliver custom responses to the client part. But I cannot get that to work.
The main target is to send messages to AWS through WSS:// (first approach with WS://, in case that's possible); depending on payload content, it shall return a custom response, then close the connection if no further operation is needed.
I've tried the suggestions posted here, here and here. But either my lack of knowledge about load balancing, WebSockets, TCP and HTTP is not letting me see the missing pieces of the solution, I'm doing everything wrong, or both.
As for now, I have an Elastic Beanstalk example project structure like this:
+ nodejs-v1
|--+ .ebextensions
| |--- socketupgrade.config
|
|--+ .elasticbeanstalk
| |--- config.yaml
|
|--- .gitignore
|--- app.js
|--- cron.yaml
|--- index.html
|--- package.json
The Elastic Beanstalk environment and application are created with the standard setup, and I also made sure that the load balancer is an Application Load Balancer, not a Classic one, since the Application Load Balancer works with WebSockets out of the box, as many sources and the documentation state.
It's set up with HTTP on port 80. Stickiness is enabled for a day.
Here's the code being used:
app.js:
'use strict';

const express = require('express');
const socketIO = require('socket.io');
const path = require('path');

const PORT = process.env.PORT || 3000;
const INDEX = path.join(__dirname, 'index.html');

const serber = express()
    .use((req, res) => res.sendFile(INDEX))
    .listen(PORT, () => console.log(`Listening on ${PORT}`));

const io = socketIO(serber);

io.on('connection', (socket) => {
    console.log('Client connected');
    socket.on('disconnect', () => console.log('Client disconnected'));
});

setInterval(() => io.emit('time', new Date().toTimeString()), 1000);
index.html:
<script src="/socket.io/socket.io.js"></script>
<script>
var socket = io.connect('http://localhost');
socket.on('news', function (data) {
console.log(data);
});
</script>
package.json:
{
    "name": "Elastic-Beanstalk-Sample-App",
    "version": "0.0.1",
    "private": true,
    "dependencies": {
        "express": "*",
        "socket.io": "*"
    },
    "scripts": {
        "start": "node app.js"
    }
}
.ebextensions/socketupgrade.config:
container_commands:
  enable_websockets:
    command: |
      sed -i '/\s*proxy_set_header\s*Connection/c \
      proxy_set_header Upgrade $http_upgrade;\
      proxy_set_header Connection "upgrade";\
      ' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
I'm only getting 504 and 502; sometimes, when randomly tweaking configurations in pointless attempts, I get 200, and on other attempts no protocol error, but messages like disconnections and such...
I appreciate your time and attention reading this desperate topic! Any hint will be appreciated as well... Just, anything... T-T
Thanks for your time and attention!
Kind regards,
Jon M.
Update 1
I'll start by quoting @RickBaker:

"Personally, what I would do first is remove the load balancer from the equation. If your ec2 instance has a public ip, go into your security groups and make sure the proper port your app is listening to is open to the public. And see if you can at least get it working without the load balancer complicating things." – Rick Baker, 21 hours ago
I changed the scaling feature of the Elastic Beanstalk environment's application from Load Balancing, Auto Scaling Environment Type to Single Instance Environment. It's important to note that I changed it from the Elastic Beanstalk web console, not from EC2 directly, since I think doing it in EC2 could break the Elastic Beanstalk environment application as a whole.
Anyway, after the environment and the environment's application finished setting up again, I changed and deployed the following:
index.html:
<script src="/socket.io/socket.io.js"></script>
<script>
    var socket = io();
</script>
After everything got running, I tested with a call via web page to the index page. And the logs from node show life:
-------------------------------------
/var/log/nodejs/nodejs.log
-------------------------------------
Listening on 8081
Client connected
Client disconnected
Client connected
Then I started to search for a server-to-server setup, found these docs, and then started to dig a bit in order to connect to a WSS server.
So, the main goal is to establish, and maintain, a session from the AWS EB application to another server that accepts WSS connections. The AWS EB application should be responsible for establishing and maintaining that connection, so that when events happen at the Network Server, the application at EB can send responses to the requests of the events happening.
So then I read this topic, and realized that the NodeJS/socket.io approach won't work, based on the posts read. So, I don't know what to do now. ( '-')
AWS EB can set up an environment with Python and WSGI but, geez... I don't know what to do next. I'll try things in order to connect via WS if possible, if not then WSS, and see if something works out. I'll update right after I have results, whether positive or not.
Jon over and out.
After combining previous iterations with some more documentation reading, I came to realize that the connection indeed starts from AWS, via NodeJS using ws. So I'm able to communicate with the Network Server via WSS and both request and provide data.
The app.js:
var WebSocket = require('ws');
var wss = new WebSocket('wss://example.com');

wss.on('open', function connection() {
    console.log("WSS connection opening");
});

wss.on('message', function incoming(data) {
    console.log("Jot:");
    console.log(data);
    setTimeout(function timeout() {
        console.log("Sending response");
        wss.send(JSON.stringify({
            "key": "Hi there"
        }));
    }, 500);
});
The package.json:
{
    "name": "Elastic-Beanstalk-Sample-App",
    "version": "0.0.1",
    "private": true,
    "dependencies": {
        "express": "*",
        "socket.io": "*",
        "ws": "*"
    },
    "scripts": {
        "start": "node app.js"
    }
}
The structure of the project remains almost the same:
+ nodejs-v1
|--+ .ebextensions
| |--- socketupgrade.config
|
|--+ .elasticbeanstalk
| |--- config.yaml
|
|--- .gitignore
|--- app.js
|--- cron.yaml
|--- package.json
As you can see, there's no index.html, since it's not used.
From here, how sending/receiving data is used is up to the solution's requirements, as is making sure the connection is established and recovered.
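To cover the "recovered" part, a simple reconnect-on-close wrapper around the ws client could look like this (a sketch; the flat 5-second retry is arbitrary, and a real setup would want backoff):

var WebSocket = require('ws');

function connect() {
    var socket = new WebSocket('wss://example.com');

    socket.on('open', function () {
        console.log('WSS connection opened');
    });

    socket.on('close', function () {
        // Re-establish the session after a short pause.
        console.log('WSS connection lost, retrying in 5s');
        setTimeout(connect, 5000);
    });

    socket.on('error', function (err) {
        // A 'close' event follows, which triggers the retry.
        console.log('WSS error:', err.message);
    });
}

connect();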

How to setup gulp browser-sync for a node / react project that uses dynamic url routing

I am trying to add BrowserSync to my react.js node project. My problem is that my project manages the URL routing, listening port and mongoose connection through the server.js file, so obviously when I run a browser-sync task and check the localhost URL http://localhost:3000 I get a Cannot GET /.
Is there a way to force browser-sync to use my server.js file? Should I be using a secondary nodemon server or something (and if I do, how can the cross-browser syncing work)? I am really lost, and all the examples I have seen add more confusion. Help!!
gulp.task('browser-sync', function() {
    browserSync({
        server: {
            baseDir: "./"
        },
        files: [
            'static/**/*.*',
            '!static/js/bundle.js'
        ]
    });
});
We had a similar issue that we were able to fix by using proxy-middleware (https://www.npmjs.com/package/proxy-middleware). BrowserSync lets you add middleware, so you can process each request. Here is a trimmed-down example of what we were doing:
var proxy = require('proxy-middleware');
var url = require('url');

// the base url where to forward the requests
var proxyOptions = url.parse('https://appserver:8080/api');
// which route browserSync should forward to the gateway
proxyOptions.route = '/api';

// An ajax request to browserSync at http://localhost:3000/api/users will be
// sent via proxy to http://appserver:8080/api/users, while any requests that
// don't have /api at the beginning of the path fall back to the default behavior.
browserSync({
    // other browserSync options
    // ....
    server: {
        middleware: [
            // proxy /api requests to the api gateway
            proxy(proxyOptions)
        ]
    }
});
The cool thing about this is that you can change where the proxy points, so you can test against different environments. One thing to note is that all of our routes start with /api, which makes this approach a lot easier. It would be a little more tricky to pick and choose which routes to proxy, but hopefully the example above gives you a good starting point.
The other option would be to use CORS, but if you aren't dealing with that in production it may not be worth messing with for your dev environment.
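Another common pattern, if it fits your setup: keep server.js in charge of routing and the mongoose connection, run it on its own port (via nodemon, for instance), and point BrowserSync's built-in proxy option at it instead of using its static server. A sketch, assuming your Express app listens on port 5000:

gulp.task('browser-sync', function() {
    browserSync({
        // hand every request to the node server instead of serving ./
        proxy: 'http://localhost:5000',
        files: [
            'static/**/*.*',
            '!static/js/bundle.js'
        ]
    });
});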

Restructing app into server-side & client-side containers

What I'm trying to achieve is splitting up app.js into separate pieces. That's been successful so far. My structure currently looks like this:
package.json
app.js
app/
- server/
- views/
- router.js
- public/
- css/
- images/
- js/
- robots.txt
Sounds good? Inside my app.js I have the following code:
http.createServer(app).listen(app.get('port'), function() {
    console.log("app:" + app.get('port') + " running through express server.");
});
I feel that my http.createServer call is so small and so vulnerable that I want to extend it. Is there a way to put it inside ./app/server/http.js and include the toobusy module (which, judging from the examples, seems too hard for me)?
Is there a solution?
If I understand your question correctly, all you need to do is create an app/server/http.js file and place this code inside:
var http = require('http'),
    toobusy = require('toobusy'),
    app = require('../../app');

var server = http.createServer(app);
server.listen(app.get('port'), function () { ... });

process.on('SIGINT', function() {
    server.close();
    // calling .shutdown allows your process to exit normally
    toobusy.shutdown();
    process.exit();
});
Then you can do one of two things:
1) Inside your package.json file, set the main field to app/server/http.js, and you can start your app with npm start if that's your thing.
2) The other option (preferred, IMO) is to create an index.js file in the root of your project that looks something like:
module.exports = require('./app/server/http');
And then you can simply start your server with
$ NODE_ENV=production node /path/to/project <arguments go here>
Either way, you'll get the benefits of toobusy while achieving the separation you desire.
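For the load-shedding side of toobusy, the module's README wires it in as Express middleware along these lines (the 503 message here is illustrative):

var toobusy = require('toobusy');
var express = require('express');
var app = express();

// Reject requests while the event loop is lagging behind.
app.use(function (req, res, next) {
    if (toobusy()) {
        res.send(503, 'Server is too busy right now.'); // Express 3 style
    } else {
        next();
    }
});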

basic node website returns "Cannot GET /" when hosted with forever.js

I'm trying to get my first production node website up (just a basic hello world on my production web server).
The below is what I'm using (a basic http proxy that passes apache websites to port :9000 and node websites to port :8000). I know this part works, because apache vhosts are forwarded as I expect. What does not work, however, is the node part; instead I get the error below:
"Cannot GET /"
This is running node 0.8.1 on Ubuntu 12.04.
I'm hosting this with forever.js (forever start foo.js). When I echo NODE_ENV, it shows "production".
It might also be noted that I don't have node_modules on the path (as you will see in my require statements); not sure if this has anything to do with my issue.
var httpProxy = require('/usr/local/lib/node_modules/http-proxy/lib/node-http-proxy');
var express = require('/usr/local/lib/node_modules/express/lib/express');

httpProxy.createServer(function (req, res, proxy) {
    var nodeVhosts = ['www.mysite.com'];
    var host = req.headers['host'];
    var port = nodeVhosts.indexOf(host) > -1
        ? 8000
        : 9000;
    proxy.proxyRequest(req, res, { host: 'localhost', port: port });
}).listen(80);

var one = express.createServer();
one.get('/', function(req, res) {
    res.send('Hello from app one!');
});

var app = express.createServer();
app.use(express.vhost('localhost', one));
app.listen(8000);
Since you are running Ubuntu, you might take a look at upstart. In case you don't know, upstart replaces the old-school Unix init-scripts approach to starting and stopping services. (Those were dark, scary days!) If you want your app to start automatically when your box boots/reboots and restart automatically after it (your app) crashes, then you want upstart. Learning the basics of upstart is easy, and then you have a tool that you can use over and over again, whether it's node, apache, nginx, postfix, mongodb, mysql, whatever!
I mean no disrespect to the good folks who work on the forever module. Arguably, it does have a solid use case, but too often it is used to imperfectly duplicate the bedrock that is already on your system: upstart. Also, you might Google for some of the comments made by experienced users and some node.js committers about the forkability of node.js and the pitfalls, which are very relevant to forever.
I'd like to post links, but I don't have enough rep yet. Hopefully what I wrote is enough to google on.
Good luck!
The http-proxy module doesn't change the Host header of the request, and that's what connect/express vhost uses to distinguish virtual hosts.
In this line:
proxy.proxyRequest(req, res, {host: 'localhost', port: port});
you tell the proxy server to proxy the unchanged request to localhost:port.
So what you need to do is to change:
var app = express.createServer();
app.use(express.vhost('localhost', one));
app.listen(8000);
to:
var app = express.createServer();
app.use(express.vhost('www.mysite.com', one));
app.listen(8000);
and it should work.
Alternatively you can simply set the req.headers.host to localhost before proxying.
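That alternative could look like this; it's the question's proxy code with only the Host header rewrite added:

httpProxy.createServer(function (req, res, proxy) {
    var nodeVhosts = ['www.mysite.com'];
    var port = nodeVhosts.indexOf(req.headers['host']) > -1 ? 8000 : 9000;

    if (port === 8000) {
        // make express.vhost('localhost', one) match on the node side
        req.headers.host = 'localhost';
    }

    proxy.proxyRequest(req, res, { host: 'localhost', port: port });
}).listen(80);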
