Very simply, I'm currently using Express's vhost method to route requests to the appropriate script given a domain name. I really like this approach since it means I don't need separate node.js instances listening on separate ports for each virtual host script, nor a separate process for each virtual host. However, there is a glaring flaw for me in this method: anything in the vhost server runs with root privileges, not merely the privileges of the user whose script it is. I now need to find some way of sandboxing, or otherwise running each vhost's script as the user it belongs to. Needless to say, I can't have lower-privileged users on the server with access to root.
TL;DR: what method exists by which I can route requests to each domain name's associated app without having to designate ports the app would need to know about, while still preventing the author of that script from having access beyond their own user account?
In my apps, I use Bouncy:
var bouncy = require( "bouncy" );

var server = bouncy(function( req, res, bounce ) {
    var port;
    // Route on the first label of the Host header (the subdomain).
    var subdomain = req.headers.host.split( "." )[ 0 ];

    switch ( subdomain ) {
        case "xyz":
            port = 4002;
            break;
        default:
            port = 4001;
            break;
    }

    bounce( port );
});

server.listen( 4000 );
This way, you can have various apps listening on different ports and in different processes. They will all be proxied to work under port 4000 as well, so:
xyz.localhost:4002 = xyz.localhost:4000
localhost:4001 = localhost:4000
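Since each app now runs as its own OS process, each one can also drop root privileges once its port is bound, which speaks to the sandboxing concern in the question. A minimal sketch of one such per-user app, assuming it is started as root and that "appuser" is a placeholder account:

var http = require( "http" );

var server = http.createServer(function( req, res ) {
    res.end( "hello from the per-user app" );
});

server.listen( 4002, function() {
    // Drop privileges once the port is bound (setgid must come before setuid).
    if ( process.getuid && process.getuid() === 0 ) {
        process.setgid( "appuser" );
        process.setuid( "appuser" );
    }
});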
I hope it helps ;)
I'm starting out learning Linux and Node.js development, and my current project has an API for which I'm serving documentation with Swagger UI. To support Swagger's "try it out" functionality I need to specify the server's host name in the API specs. Everything works fine when I'm running things locally with the server hard-coded to localhost:3000, but in production I obviously want this to show up as myactualdomain.example and not localhost.
Is there a convention for communicating the domain name of a server back to itself? I tried using a HOSTNAME environment variable as follows:
const express = require("express");
const swaggerUI = require("swagger-ui-express");
const YAML = require("yamljs");
const app = express();

const HOSTNAME = process.env.HOSTNAME || "localhost";
const PORT = process.env.PORT || 80;
const apiSpecs = YAML.load("./api-spec.yml");
apiSpecs.servers = [{ url: `http://${HOSTNAME}:${PORT}` }];
app.use("/api-docs", swaggerUI.serve, swaggerUI.setup(apiSpecs));
This works, but it sets the URL to the random host name assigned to the Docker container my app runs in. I could of course override HOSTNAME to myactualdomain.example, but I'm not sure if this is the "correct" way to do it, or whether the convention is to use a different environment variable or another method entirely.
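For reference, overriding it at deploy time would look something like this (the image name here is just a placeholder):

docker run -e HOSTNAME=myactualdomain.example -e PORT=80 my-api-image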
I can't find a resource for this anywhere online; all I can find are references for nginx.
I need help with this quickly, as my server is live with users accessing it, and somehow Google indexed my IP address, so users are accessing my site through my IP.
I plan to migrate servers tonight and am aware of why my IP was indexed, but in the meantime I need a method to prevent direct access via my IP.
This obviously isn't working, and I don't have much room to test unless I stop the server and kick all of my users off for an extended period of time:
app.get('myiphere', function(req, res){
    res.redirect('domain.com');
});
You can implement application-level middleware that checks that the request's host name is nothing other than your domain. That way, a request made with the IP address never gets processed at the application level.
const SITE_ADDRESS = 'yourwebsite.com';

app.use((req, res, next) => {
    // Only let requests through when they were made with the domain name.
    if (req.hostname.includes(SITE_ADDRESS)) {
        next();
    } else {
        res.status(403).end(`Access with ${req.hostname} is restricted. Use ${SITE_ADDRESS} instead.`);
    }
});
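One caveat, assuming the app sits behind a reverse proxy: you may need to enable Express's trust proxy setting so that req.hostname reflects the original Host header (via X-Forwarded-Host) rather than the proxy's:

// Only needed when running behind a reverse proxy such as nginx.
app.set('trust proxy', true);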
To prevent direct access to your site by IP entirely, you can bind the server to the loopback address so it only accepts local connections (this assumes a reverse proxy on the same machine forwards real traffic to it):
app.listen(3000, '127.0.0.1', () => console.log('Server running on port 3000'))
Prevent indexing by creating a robots.txt at your server root directory. See https://stackoverflow.com/a/390379/11191351
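Building on the SITE_ADDRESS constant above, here is a sketch of how you could serve a deny-all robots.txt only to crawlers that arrived via the bare IP, so the real domain can still be indexed:

app.get('/robots.txt', (req, res) => {
    res.type('text/plain');
    // Deny everything when crawled via the IP; allow the real domain.
    if (req.hostname.includes(SITE_ADDRESS)) {
        res.send('User-agent: *\nDisallow:');
    } else {
        res.send('User-agent: *\nDisallow: /');
    }
});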
I have a project where users connect to my router and then enter the address http://192.168.1.50:9091 into the address bar to reach a page. I would like there to be a way to enter something easier than this long IP address.
Here is what I have so far:
var express = require('express');
var app = express();
var http = require('http');
var server = http.createServer(app);
var io = require('socket.io').listen(server);
var fs = require('fs');

server.listen(9093);

var nRequest = 0;
var nConnexs = 0;

app.get('/', function (req, res) {
    res.sendfile(__dirname + '/client_app.html');
});

app.get('/2', function (req, res) {
    res.sendfile(__dirname + '/client_app2.html');
});
I am not sure what you mean by:
i have a project where users connect to my router
The Node.js application is sitting on your machine. Until somebody accesses your machine (through IP), the application is never accessed.
As such, you must put something between the user and the application that will direct the user to the IP. For real websites, this is accomplished using DNS: you register a domain name and tell the DNS service which IP the name should resolve to.
If you have control over the router (internal network, etc.), you can map an alias to your IP address. It all depends on whether your router runs a DNS server or supports DNSMasq. You will need to check your router manufacturer/model/etc.
Finally, if there are a small number of users that will be accessing your website, you could always have them use the hosts file to map a name to the IP. The location of this file is dependent upon the operating system; just Google: edit hosts file
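For example, an entry mapping a friendly name to your server might look like this (the name is just a placeholder):

192.168.1.50    mypage.local

Users on those machines could then browse to http://mypage.local:9091 instead.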
I assume users are initially connecting to your router's public IP address or a DNS name that points to it. If that's the case, you can usually configure your router to do port forwarding, such that incoming requests on a particular port are automatically routed to a particular private IP address on your network. In that case, users would connect to your router, but the request would be automatically forwarded to your internal node server, so users would end up directly connected to your node server.
This type of configuration comes with the usual security warnings. Doing this means that your node server must be properly hardened against internet attack and your node app must be written with appropriate security precautions, particularly because anyone who compromises the computer the node server runs on gains access to your internal network.
If you show us what users are initially connecting to when they connect to your router, we could offer a more explicit example of how you could configure the router to take advantage of port forwarding.
I'm currently running two StrongLoop LoopBack apps (Node.js apps) on a single server on different ports. Both apps were created using slc lb project and slc lb model from the command line.
Is it possible to run these apps on a single port with different paths and/or subdomains? If so, how do I do that on a Linux machine?
Example:
http://api.server.com:3000/app1/ for first app.
http://api.server.com:3000/app2/ for second app.
Thanks.
Since LoopBack applications are regular Express applications, you can mount them on a path of the master app.
var loopback = require('loopback');

var app1 = require('path/to/app1');
var app2 = require('path/to/app2');

var root = loopback(); // or express();
root.use('/app1', app1);
root.use('/app2', app2);
root.listen(3000);
The obvious drawback is high runtime coupling between app1 and app2: whenever you upgrade either of them, you have to restart the whole server (i.e. both of them), and a fatal failure in one app brings down the whole server.
The solution presented by @fiskeben is more robust, since each app is isolated.
On the other hand, my solution is probably easier to manage (you have only one Node process instead of nginx plus a Node process per app) and also allows you to configure middleware shared by both apps:
var root = loopback();
// Shared middleware goes here; note that in Express 4+,
// express.logger() was split out into the "morgan" module.
root.use(express.logger());
root.use('/app1', app1);
root.use('/app2', app2);
root.listen(3000);
You would need some sort of proxy in front of your server, for example nginx. nginx will listen on a port (say, 80) and proxy incoming requests to other servers on the machine based on rules you define (hostname, path, headers, etc.).
I'm no expert on nginx but I would configure it something like this:
server {
    listen 80;
    server_name api.server.com;

    location /app1 {
        proxy_pass http://localhost:3000;
    }

    location /app2 {
        proxy_pass http://localhost:3001;
    }
}
nginx also supports passing query strings, paths and everything else, but I'll leave it up to you to put the pieces together :)
Look at the proxy server documentation for nginx.
I have a Heroku node.js app running under the domain foo.com. I want to proxy all URLs beginning with foo.com/bar/ to a second node.js process, but I want the process to be controlled within the same Heroku app. Is this possible?
If not, is it possible to proxy a subdirectory to a second heroku app? I haven't been able to find much control over how to do routing outside of the web app's entry point. That is, I can easily control routing within node.js using Express for example, but that doesn't let me proxy to a different app.
My last resort is simply using a subdomain instead of a subdirectory, but I'd like to see if a subdirectory is possible first. Thanks!
Edit: I had to solve my problem using http-proxy. I have two express servers listening on different ports and then a third externally-facing server that routes to either of the two depending on the URL. Not ideal of course, but I couldn't get anything else to work. The wrap-app2 approach described below had some URL issues that I couldn't figure out.
Just create a new express server and mount it as middleware in the main one, so it handles requests that come in on your desired path:
var app2 = express();

app2.use(function(req, res){
    res.send('Hey, I\'m another express server');
});

app.use('/foo', app2);
I haven't tried it yet on Heroku, but it's the same process and doesn't create any new TCP binding or process, so it will work. For reference, a modified plain express template.
And if you really want another express process handling the connection, you need to use cluster. Check the worker.send utility.
app.use('/foo', function(req, res){
    // You can send req too if you want.
    worker.send('foo', res);
});
This is possible. The most elegant way I could think of is using clustering. One Heroku dyno contains four cores, so you can run four worker processes under a single Node master process.
Here is an introduction to clustering.
What you're looking at is initializing two express apps (assuming you're using express) and serving each of them from its own set of worker processes.
var cluster = require('cluster');

if (cluster.isMaster) {
    // Let's make four child processes, alternating between the two app configs.
    // envForApp1 and envForApp2 are the env objects to pass to each app,
    // e.g. { APP: 'app1' } and { APP: 'app2' }.
    for (var i = 0; i < 4; i++) {
        if (i % 2 === 0) {
            cluster.fork(envForApp1);
        } else {
            cluster.fork(envForApp2);
        }
    }
} else {
    // In the worker, check the env passed in from fork() above to decide
    // whether this process should start app1 or app2, then listen.
    app.listen(8080);
}