I have our first NodeJS server that's being deployed to a client server along with the rest of the application. I have a few items that need to be configured on a server-specific basis but I'm unable to find any easy way to handle this.
On googling I've found a few choices: using a framework that has support built in (too late), or setting up a configuration file that can be loaded (this would work, but I don't want to have to worry about keeping one config file for each server we have and keeping those out of git).
I'd love to just have node determine what the request domain is (dev.ourdomain vs www.ourdomain) and just do a condition. It sounds easy, it likely IS easy, but I'm having trouble finding any way to determine that domain data.
As #drachenstern mentioned, you could use request.headers.host, as in:
// get the host portion of the request URL, without the optional port
var domain = request.headers.host.replace(/:\d+$/, '');
but this wouldn't provide a canonical domain if the request was made using an IP address rather than the server's name.
A better option might be to use the hostname of the server, as in:
// pick the environment variable that holds the server's hostname
// (note: which variable is actually set depends on the OS and shell)
var domain = process.env[
  process.env['OS'] && process.env['OS'].match(/^Win/) ? 'HOSTNAME' : 'HOST'
];
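Note that a simpler and more portable alternative is Node's built-in os module, which reports the machine's hostname directly:

var os = require('os');

// works the same on Windows and Unix-like systems
var domain = os.hostname();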
You might consider if request.host has the data you need. It most likely would.
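If you are on Express, here is a minimal sketch of the domain-based conditional the question asks about (hostnames and settings are illustrative; the property is req.hostname in Express 4, req.host in older versions):

var express = require('express'); // npm install express
var app = express();

// per-server settings, keyed by the request's host name (illustrative values)
var settings = {
  'dev.ourdomain': { db: 'dev-db' },
  'www.ourdomain': { db: 'prod-db' }
};

app.use(function (req, res, next) {
  // req.hostname is the Host header minus the port
  req.settings = settings[req.hostname] || settings['dev.ourdomain'];
  next();
});

app.listen(8080);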
Why don't you just hardcode that information in an init.js and change it for each server? How often are you going to move the servers that you'd need to automate this? Just do this:
init.js
module.exports = { domain: "dev.ourdomain"};
main.js
var domain = require( "./init.js" ).domain;
I assume you are developing on dev.ourdomain and deploying to www.ourdomain. Just skip copying init.js when you deploy, so that the server's own version remains ^_^ This is what I do, and it saves me from bloating the project with another module just for one setting.
Hope this helps others who encounter this situation.
I am trying to build a hapi REST (API) server. I think I'd like to make a separate NodeJS server for the front end and separate the two entirely. It would be nice if they didn't know about each other at all, to simplify development (like both having access to the database - but I assume that would allow for collisions and crazy things).
The idea is so I can scale one and not the other, or I can secure them differently (user/pass for front end, api key for back end), or replace one and not the other.
I assume I should have two different servers, but how do I do this? I have seen people just make "two instances" listening on different ports, but that is the same code, so it can't actually be two separate server instances, can it?
Perhaps I am thinking about this wrong. I assume this MUST be common, what is the regular approach?
I think you're on the right track. Have you read this part of the documentation?
There's a github repo that suggests a starting point.
One strategy might be to embed a Jetty server at a custom context path in your Java app and respond to HAPI FHIR queries.
You should then be able to proxy all your requests at the server level for secure things like user auth, or open up certain resources to be queried openly from NodeJS or any REST API.
Finding out how to embed a Jetty server should be simple. Proxying requests and auth, maybe not so much.
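On the "two instances" point in the question: two genuinely separate servers are simply two separate Node processes, each with its own entry point and codebase. A minimal sketch with the built-in http module (a hapi server would be structured the same way; ports are illustrative):

// api.js, started with: node api.js
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ ok: true }));
}).listen(3001);

// front.js would be a completely separate file and process
// (node front.js), listening on its own port, e.g. 3000.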
If you go here:
http://armygrounds.com/jsgame/server.js
It's publicly visible and anyone could get the DB credentials.
How do I prevent this? Is it a file permission setting?
This is an issue with your webserver configuration. You should not expose your nodejs source to the web. In this case, you want to move the server side code out of the location that is visible from the website. You probably want to set up your web server to proxy to nodejs when it needs to be called.
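In production the front proxy is usually nginx or Apache, but the idea can be sketched in Node with the http-proxy package (assumed installed via npm; ports are illustrative):

var http = require('http');
var httpProxy = require('http-proxy'); // npm install http-proxy

var proxy = httpProxy.createProxyServer({});

// the only thing exposed to the web; the Node app itself
// listens on a local port outside the webroot
http.createServer(function (req, res) {
  proxy.web(req, res, { target: 'http://127.0.0.1:3000' });
}).listen(8080); // 80 in production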
It's a little difficult to answer your question more accurately without knowing more about your setup.
When I'm connecting to a database in Node, I have to add the DB name, username, password, etc. If I'm right, every user can access that JS file if they know its address. So... how does this work? Is it safe?
Node.js server side source files should never be accessible to end-users.
In frameworks like Express, the convention is that requests for static assets are handled by the static middleware, which serves files only from a specific folder in your solution. Explicit requests for other source files that exist in your code base are thus ignored (a 404 is passed down the pipeline).
Consult https://expressjs.com/en/starter/static-files.html for more details.
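A minimal sketch of that convention (the folder name follows the Express default):

var express = require('express');
var path = require('path');
var app = express();

// only files under ./public are reachable from the web;
// app.js itself and the rest of the server sources are never served
app.use(express.static(path.join(__dirname, 'public')));

app.listen(3000);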
Although there are other options to further limit the visibility of sensitive data, note that anyone with admin rights who gets access to your server would of course be able to retrieve the data (and that is to be expected).
I am assuming from the question that the DB and Node are on the same server. I am also assuming you have created either a JSON or env file or a function which picks up your DB parameters.
The one server = everything (code + DB) setup is not the best in the world. However, if you are limited to it, then it depends on the DB you are using. MongoDB Community Edition will let you set up limited security protocols, such as creating users within the DB itself. A user is a {username, password, rights} combination which grants scoped rights based upon the type of user you set up. This is not foolproof, but it is some protection even if someone gets hold of your DB parameters. If you are using a more extended version of MongoDB, then this question would be superfluous. As for other DBs, consult their documentation.
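For MongoDB Community Edition, a hedged mongo-shell sketch of such a limited user (the database, user name and password are all illustrative):

// run inside the mongo shell
db = db.getSiblingDB('mydb');

// a user restricted to read/write on this one database
db.createUser({
  user: 'appUser',
  pwd: 'aStrongPassword',
  roles: [{ role: 'readWrite', db: 'mydb' }]
});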
However, all that being said, you should really have the DB set up behind a public server, allow only SSH into it, and leave an open port to receive information from your program. The one server = everything format is not safe in the long run, though it is fine for development.
If you are using MongoDB, you may want to take a look at Mongoose coupled with Mongoose Encryption. I personally do not use them but it may solve your problem in the short run.
If your DB is MySQL etc. then I suggest you look at the documentation.
I'm using Express and I want to put some configuration in a file (like database configuration, API credentials and other basic stuff).
Right now I'm putting this configuration in a JSON file and reading it using readAsync.
Reading some code, I've noticed a lot of people don't use JSON. Instead, they use a plain JS file and export a module.
Is there any difference between these approaches, like performance?
The latter way probably simplifies version control, testing and builds, and makes it easier to have separate configurations for production and development. It also lets you do a little "preprocessing" like defining "constants" for common setups.
In a well-designed application, the performance of configuration-reading is going to be completely irrelevant.
If you go with the latter, you need to practice some discipline: a configuration module should consist almost entirely of literals, with only enough executable code to handle things like distinguishing between development and production. Beware of letting application logic creep into it.
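A minimal sketch of that discipline (keys and values are illustrative): the module is all literals except for a single NODE_ENV branch.

// config.js
var env = process.env.NODE_ENV || 'development';

var common = {
  apiTimeoutMs: 5000 // a shared "constant"
};

var perEnv = {
  development: { db: '127.0.0.1/database' },
  production: { db: 'db.internal/database' }
};

module.exports = Object.assign({}, common, perEnv[env]);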
In Node.js, require works synchronously, but that's not very important if you load the configuration once at application start. An asynchronous approach is really only needed if you load the configuration many times (for each request, for example).
In node.js you can simply require your json files:
config.json:
{
"db": "127.0.0.1/database"
}
app.js:
var config = require('./config');
console.log(config);
If you need something more full-featured, I would use flatiron/nconf.
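A short sketch of nconf, following its README (the file name is illustrative): command-line arguments override environment variables, which override the file.

var nconf = require('nconf'); // npm install nconf

nconf.argv()
     .env()
     .file({ file: './config.json' });

console.log(nconf.get('db'));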
I'm thinking about adding another static server to a web app, so I'd have static1.domain.tld and static2.domain.tld.
The point would be to use different domains in order to load static content faster (more parallel connections at the same time), but what 'troubles' me is: how do I get the user's browser to see static1.domain.tld/images/whatever.jpg and static2.domain.tld/images/whatever.jpg as the same file?
Is there a trick to accomplish this with headers, or will I have to define which file is on which server?
No, there's no way to tell the browser that two URLs are the same -- the browser caches by full URL.
What you can do is make sure you always use the same URL for the same image, e.g. all images whose names start with A-M go on server 1, and N-Z go on server 2. For a real implementation, I'd use a hash based on the name or something like that; there are probably libraries that do this kind of thing for you, and a sketch follows below.
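A minimal sketch of that idea (the hostnames come from the question; the md5 choice is illustrative):

var crypto = require('crypto');

var hosts = ['static1.domain.tld', 'static2.domain.tld'];

// the same path always hashes to the same host, so the browser only
// ever sees one URL per file and caching works as usual
function staticUrl(path) {
  var hash = crypto.createHash('md5').update(path).digest();
  return 'http://' + hosts[hash[0] % hosts.length] + path;
}

console.log(staticUrl('/images/whatever.jpg'));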
You need to have both servers able to respond to requests sent to static.domain.tld. I've seen a number of ways of achieving this, but they're all rather low level. The two I'm aware of:
Use a DNS round-robin so that the mapping of hostnames to IP addresses changes over time; very large websites often use variations on this so that content is actually served from a CDN closer to the client.
Use a hacked router config so that an IP address is answered by multiple machines (with different MAC addresses); this is very effective in practice, but requires the machines to be physically close.
You can also do the spreading out at the "visible" level by directing to different servers based on something that might as well be random (e.g., a particular bit from the MD5 hash of the path). And, best of all, all of these techniques use independent parts of the software stack to work; you can use them in any combination you want.
This serverfault question will give you a lot of information:
Best way to load balance across multiple static file servers for an even bandwidth distribution?