Node.js downgraded from root using setuid: what about log file ownership?

I've roughly followed a scheme described in http://onteria.wordpress.com/2011/05/31/dropping-privileges-using-process-setuid-in-node-js/ whereby I start node as root, then downgrade the user. This way I can listen on 80 without the need for a proxy. Pretty standard stuff. I have an upstart script to manage the process (Ubuntu server).
The upstart script redirects stdout/err to a log file (which gets owned by root). Internally I'm using winston to log to the console and a file (which also gets owned by root).
In my perfect and happy world I would be able to transparently chown the log files (both the redirected stdout/err one and the one winston made) to the downgraded user. I've tried (naively) chowning them when I setuid'ed from inside the node app, which worked but meant that they never got written to again.
How can I achieve this? Is this possible or should I try to live with (at least some) log files owned by root?
Many Thanks!

What I've ended up with is a version of Peter Lyons' solution (I've cut'n'pasted the following from a few places, so it may not actually run; the idea works, though):
var logger = new (winston.Logger)();
logger.add(winston.transports.Console, {
  timestamp: true
});

// start server and downgrade user
httpsServer.listen(443, function() {
  logger.info('Ready on port 443');
  fs.stat(__filename, function(err, stats) {
    fs.chownSync('stdouterr.log', stats.uid, stats.gid);
    process.setgid(stats.gid);
    process.setuid(stats.uid);
    logger.add(winston.transports.File, {
      filename: 'mylogfile.log',
      handleExceptions: true
    });
    logger.info('downgraded to non-root uid', { uid: stats.uid });
  });
});
When I've successfully bound to port 443, I log to say so. logger is a winston logger configured with only console output (which gets redirected to the stdouterr.log file by starting node with node app.js >> stdouterr.log 2>&1), so this log message appears only in stdouterr.log.
Then I look up the owner of the current file and chown stdouterr.log to be owned by that user. Then I set the gid and uid of the current process (the privilege-dropping part).
Then I add in my file logging to the winston logger.
Lastly, I log to say I've downgraded the user. This message appears in both stdouterr.log and mylogfile.log.
Not quite as beautiful as I'd hoped (no file logging while the process is running as root) but it means that the log files are easy to secure and manage.

Related

Local node process running in wrong directory

I am using PhpStorm 9 to run a Node instance in my local dev environment, but the process seems to be running in server.js's parent directory.
My folder structure looks like:
app/
  app_server/
    server.js
    user_resources/
  user_resources/
When I write a file from my local instance it writes to user_resources in app/, and when I run the same process in the live environment it writes to user_resources in app_server/.
pdf.create(html, options).toFile(path + filename, function (err, result) {
  callback();
});
Using fs writeFile, readFile or readdir gives similar behavior.
The local Node server is run with PhpStorm and the live server runs with forever.
Both local and live are Ubuntu systems.
Any suggestions as to why the local node process seems to be running in server.js's parent directory?
The node server is probably executed from the app directory by PHPStorm, while the live process runs from app/app_server.
If no other hint is provided in the server code on where exactly to put user_resources, it will reside within the current working directory (the path from which the node process was invoked).
You may want to specify a path relative to the location of server.js, which can easily be done like this:
var userResourcePath = __dirname + '/user_resources';
Node always ensures that __dirname is set to the directory of the file it appears in.
I made the assumption the user_resources path of your live environment (app/app_server/user_resources) is the one you want for local development.

Within Docker VM, Gulp-Watch Seems to not work well on volumes hosted from the host OS

So I have a setup, probably like most people's, where the app code is mounted into a Docker container through a separate volume.
The problem is that if I run gulp, and specifically gulp-watch, inside Docker to watch for file modifications on the mounted app code and rebuild/restart node as necessary, it becomes so CPU-intensive (polling for file changes instead of listening for file-change events) that my machine buckles.
I think this is due to a limitation of mounting the file system from the native host into the Docker container, but how are folks working around this? Are they doing all of their work in the container? Working on the native host and constantly rebuilding? Or am I missing something and my gulp-watch / nodemon setup is incorrect?
For anyone using gulp 4:
The only way I could get this to work was with usePolling, like below:
gulp.watch('./**/*', {interval: 1000, usePolling: true}, gulp.series('superTask'));
Try changing the gulp.watch options. This has been much better for me:
gulp.watch('./**/*', {interval: 1000, mode: 'poll'}, ['build']);
You should use the plugin gulp-watch instead of gulp.watch. The latter uses stat polling, which is much too heavy for the shared file system. gulp-watch uses native file-system events (inotify on Linux, FSEvents on OS X) to watch the file system.
The previous answer of usePoll: true didn't work. This one did:
gulp.watch('./**/*', {interval: 1000, usePolling: true}, ['build']);
Jesse's answer didn't work for me, but it was really close. Now, the option seems to be:
gulp.watch('./**/*', {interval: 1000, usePoll: true}, ['build']);
The mode option has been replaced by the usePoll flag.
See the API section for more details.
In a Docker container that has nodemon installed (npm i -g nodemon), there is an alternative to gulp-watch.
Let's say that one wants to watch changes to a swagger.yaml file in ./swagger/swagger.yaml and convert it to a project.json file for use with swagger UI.
Assuming that the correct node modules are installed, or that a stand-alone yaml to json convert tool is installed, one could run the following:
nodemon -L --watch ./editor/api/swagger/* --exec "node ./cvt_yaml_to_json.js"
where:
./editor/api/swagger/* is the directory to watch for file changes
"node ./cvt_yaml_to_json.js" is the command to execute (it can be an arbitrary command). In this case it is a JavaScript script which depends on js-yaml module (npm i js-yaml) and performs YAML to JSON conversion like this:
const yaml = require("js-yaml");
const fs = require("fs");

const swaggerYamlFile = "/api/project/editor/api/swagger/swagger.yaml";
const swaggerJsonFile = "/api/project/project.json";

// Convert YAML to JSON and write it out
const doc = yaml.safeLoad(fs.readFileSync(swaggerYamlFile));
fs.writeFileSync(swaggerJsonFile, JSON.stringify(doc, null, " "));

Busboy field values are empty during file upload with POST

I'm using request.post() from Mikeal's Request module on the client and processing the upload with Busboy on the server.
On the server, the
busboy.on('field', function(fieldname, val, fieldnameTruncated, valTruncated)
event fires the correct number of times with the expected field names, but val is always empty. This happens both when I run the integration tests through mocha and when I use a browser against a locally running web server.
The catch is that this problem is not seen on the prod server or on other developers' workstations. The other developers on the project (and the prod server) are running either MacOS or Ubuntu; I am running Linux Mint 17 on the workstation where I'm experiencing this.
The problem appears not to be with the way I'm using Request or Busboy (unless it's an edge case) but rather some configuration issue on my workstation.
This is what solved the problem:
sudo chown -R $USER /usr/local

configure monitrc to monitor node js app process, can't find node app pidfile running with screen ec2 linux

I run my app.js (node js application) via screen on my ec2 linux instance.
I'm trying to config my monitrc file and I need the app pidfile.
It's not in /var/run (and there isn't a /var/www).
I would really appreciate it if someone has any idea where the pidfile is, or how I can find out.
Thank you!
In your app you can get the current pid with process.pid, so:
var fs = require('fs');
fs.writeFileSync('/tmp/pidfile', String(process.pid));
and you get a pidfile in /tmp. (Note that fs.writeFile without a callback is deprecated, and the pid must be written as a string, hence writeFileSync and String here.)
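With that pidfile in place, a monitrc entry could point at it along these lines (a sketch only; the process name, paths and start/stop commands are assumptions about your setup):

```
check process myapp with pidfile /tmp/pidfile
  start program = "/usr/bin/node /home/ec2-user/nodejs/app.js"
  stop program = "/usr/bin/pkill -f app.js"
```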
Seems like there isn't a pid file created, so I used forever-monitor to restart my app.js script in case of an error.
Looks like it is working.
What you need to do is npm install forever
and write server.js:
var forever = require('forever'),
    child = new (forever.Monitor)('app.js', {
      'silent': false,
      'pidFile': '/var/run/app.pid',
      'watch': false,
      'options': ['8383'],           // Additional arguments to pass to the script
      'sourceDir': '.',              // Directory that the source script is in
      'watchDirectory': '.',         // Top-level directory to watch from
      'watchIgnoreDotFiles': true,   // Whether to ignore dot files
      'watchIgnorePatterns': [],     // Array of glob patterns to ignore, merged with contents of watchDirectory + '/.foreverignore' file
      'logFile': 'logs/forever.log', // Path to log output from forever process (when daemonized)
      'outFile': 'logs/forever.out', // Path to log output from child stdout
      'errFile': 'logs/forever.err'
    });
child.start();
forever.startServer(child);
and then run it with node server.js (I run it from the ~/nodejs directory).
The pid file that is supposed to be in /var/run still isn't there, which is weird, but I don't need monit anymore.
I still don't understand why I should additionally use upstart (as all the related posts suggested); in any case, when I tried to run upstart it didn't work.

Flow of work when working with a Node.js HTTP server

I'm learning Node.js at the moment.
Everything is going fine; it's just that I have a little challenge with the flow of work.
I create an HTTP server that listens on a particular port. For example:
var http = require("http");
http.createServer(function(request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Hello World");
  response.end();
}).listen(8888);
It works fine. The only problem is that when I edit the file containing the above code and try to start the node process again by typing node server.js, I get the following error:
Error: EADDRINUSE, Address already in use.
So I learnt that I need to kill the node process (found via ps) before the changes can take effect and I can restart.
But this seems like a pain. Do I need to kill the node process every time I make changes to the server I am writing?
I am sure there is a better way. Any help?
During development I tend to just run node from the command line in a terminal window. When I'm finished testing I press Ctrl-C to interrupt the current process, which kills node, and then press up-arrow and Enter to restart it.
My solution is as simple as:
npm install dev -g
node-dev app.js
node-dev is the same as node but automatically reruns app.js every time any file in the application dir (or a subdir) is changed. That means it also restarts when static files change, but that should be acceptable for development.
There isn't any easy way; the authors of Node do not like the hot-reloading idea, so this is the way it works.
You can hack around it if you put your server in a module, require it from the main script, fs.watchFile the .js file for changes, and then, when it changes, manually stop the server, delete the module from the cache (require.cache, if I am not mistaken), and require it again.
Or run it in a child process and kill and respawn it after a file change (that is, doing automatically what you now do by hand).
You can use something like nodemon or node-supervisor so that the server auto-restarts when you edit files.
If you want to do this manually, just interrupt the process (Ctrl+C) and re-run the last command (node server.js).