How to add a Node.js app's console output to the NGINX access log file? - node.js

I have a Node.js app set up with systemd. The app runs behind NGINX.
I would like to add the console output of my Node.js application to the NGINX access log file.
How can I do this?
Thanks in advance.

The simplest way is to hook console.log and then call console.log as usual.
var util = require('util');
var JFile = require('jfile');

var nxFile = new JFile('/var/log/nginx/access.log');
...
process.stdout.write = (function (write) {
  return function (text, encoding, fd) {
    write.apply(process.stdout, arguments);                   // write to the console
    nxFile.text += util.format.apply(util, arguments) + '\n'; // append to the nginx log
  };
})(process.stdout.write);
You can also hook console.error by changing stdout to stderr in the code above.
P.S. I don't have nginx available to verify this code, so it may contain errors :)

Brief:
Using the JFile package, file logging can be as simple as:
nxFile.text += '\n' + message;
Details:
Add a function that logs to both (terminal + nginx log), then use it instead of calling console.log directly:
var customLog = function (message) {
  console.log(message);
  logNginx(message);
};
Then implement logNginx, which is called inside customLog:
var JFile = require('jfile'); // "npm install jfile --save" required

let nxFile = new JFile('/var/log/nginx/access.log'); // check that this path exists on your system; if not, change it to an available one

function logNginx(message) {
  nxFile.text += '\n' + message; // append a new line to the nginx log file
}
Don't forget to install JFile (npm install jfile), which makes handling files quick.

If you're running Node as a systemd process, with console.log going to stdout (which I believe is the default), and your goal is just to see the logs (or get them on disk somewhere), there's an easier way than all this Node meddling and hooking.
You should already have access to the console log without doing anything through journalctl. For instance, my systemd unit file (at /etc/systemd/system/myapp.service in this example) looks something like this:
[Unit]
Description=My Webapp
[Service]
WorkingDirectory=/srv/webapp-dir
ExecStart=/usr/local/bin/node server.js
Restart=always
RestartSec=5
Environment=NODE_ENV=production PORT=1984
User=myuser
Group=myuser
[Install]
WantedBy=multi-user.target
And running journalctl -u myapp shows me the console logs from my app.
If you want, you can also send those logs to the syslog with some additional parameters. I've added the following to my [Service] section to do so:
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=myapp
This results in my logs going to the syslog tagged with myapp, where I could filter them into their own log file with rsyslog filtering if I wanted to.
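For illustration, a minimal rsyslog filter for that tag could look like the following (the file name and log path are illustrative; it would go in something like /etc/rsyslog.d/30-myapp.conf):

```
# route everything tagged "myapp" to its own file, then stop processing it
if $programname == 'myapp' then /var/log/myapp.log
& stop
```

After adding the file, restart rsyslog for the filter to take effect.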

You can add the following to your init script. This should work:
env NODE_BIN=/usr/bin/node
env SCRIPT_FILE="server.js"
env LOG_FILE=/var/log/logfilename.log
env RUN_AS="root"
$RUN_AS -- $NODE_BIN $SCRIPT_FILE >> $LOG_FILE 2>&1

Related

NodeJS server stops on error

I am very new to NodeJS. I am developing live-streaming-based speech-to-text for my web application. It works well, but the problem is:
Sometimes Node.js throws a 'Request Time-out' error and the HTTP server stops. Then I need to manually re-run the program (with the command node app.js).
I used the example here.
A screenshot is below.
Please help, and thanks in advance.
First, you need to wrap your code in try {} catch (ex) {} to handle exceptions.
You may also use pm2, which can handle the autostart if the app crashes.
When using pm2, please make use of the --max-memory-restart option; otherwise the app can restart indefinitely and slow down your server. That option lets you specify how much memory the auto-restarted app can consume.
Install pm2
npm install -g pm2
# or
yarn global add pm2
Run the app
pm2 start app.js --max-memory-restart 100M --name node-speech
Using pm2 is even recommended in the repository README.
You can always have a global error handler so that your project won't fail, and you can also take an appropriate action:
process.on('uncaughtException', function (err) {
  console.log(err);
  var stack = err.stack;
  // you can also send the err/stack to support via email or other APIs
});

readFileSync throws an error when the server is launched as a Linux service

I'm trying to make a simple API for myself using a Node/Express server running on DigitalOcean. In the server file I have something like this:
var data = fs.readFileSync('path/to/data.json','utf8');
which works perfectly fine when I launch the server manually from the command line:
node server
But what I have set up is a Linux service, so that every time I restart my DigitalOcean machine it will automatically launch the server. The service (kept in /etc/init/) looks like this:
start on filesystem and started networking
respawn
exec node /path/to/server.js
The issue is that the request to the server that runs the readFileSync call works fine if the server was launched manually from the command line, but when the server was launched via the service, readFileSync throws the following error:
Error: ENOENT, no such file or directory 'path/to/data.json'
at Error (native)
at Object.fs.openSync (fs.js:500:18)
at Object.fs.readFileSync (fs.js:352:15)
The file and the directory do exist (if I request the data.json file directly in my browser, I can see it).
What am I missing? Is there something about launching the server as a service that conflicts with using readFileSync? Is there an alternative approach to what I'm trying to do? Should I use some kind of request/fetch resource module to access that JSON file?
You're using a relative path, but the process is not being started from where you think it is. Instead of relative paths, use absolute paths.
So if your layout looks like:
server.js
path/
to/
data.json
Then inside your server.js, you can just do something like:
var path = require('path');
// ...
var data = fs.readFileSync(path.join(__dirname, 'path/to/data.json'), 'utf8');
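Alternatively, since the job file in the question lives in /etc/init/ (an Upstart job), Upstart 1.4+ can set the working directory itself with a chdir stanza. A sketch under that assumption (the path is illustrative):

```
start on filesystem and started networking
respawn
chdir /path/to/app
exec node /path/to/app/server.js
```

With the working directory pinned this way, the original relative readFileSync path would also resolve correctly, though absolute paths remain the more robust fix.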

node.js winston logger: no colors with nohup

We are using winston logger in our project with the following transport settings:
file: {
filename: __base + '/log/server.log',
colorize : true,
timestamp : true,
json : false,
prettyPrint : true
}
If the application is started with nohup, the log file is not colorized. It only works without nohup.
nohup supervisor -w . -i node_modules/ server.js &
Is it problem with winston or nohup?
It's caused by the colors package (used by winston), which performs the following check when trying to determine whether to support colors:
if (process.stdout && !process.stdout.isTTY) {
return false;
}
This means that when your application is running in the background, it doesn't have a terminal and colors are not used. This affects commands/apps other than nohup as well (see issue #121).
A simple workaround is to start your application with the --color=true argument (or to simulate it with process.argv.push('--color=true') before require('winston') is called).
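As a sketch of the second workaround, the flag just has to be on process.argv before winston (and therefore colors) is first required:

```javascript
// Make the colors package believe color support was requested explicitly,
// even when stdout is not a TTY. This must run before winston is first required.
process.argv.push('--color=true');

// var winston = require('winston'); // require winston only after the flag is set
```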
Alternatively, you can patch winston - just add one line to lib/winston/config.js:
var colors = require('colors/safe');
colors.enabled = true; // add this line
However, all of these workarounds will most likely make the console logger use colors even when there is no terminal.

Configure monitrc to monitor a Node.js app process; can't find the node app pidfile when running with screen on EC2 Linux

I run my app.js (Node.js application) via screen on my EC2 Linux instance.
I'm trying to configure my monitrc file, and I need the app's pidfile.
It's not in:
/var/run
(and there isn't a /var/www)
I would really appreciate it if someone has any idea where the pidfile is or how I can find it.
Thank you!
In your app you can get the current pid with process.pid, so:
var fs = require('fs');
fs.writeFileSync('/tmp/pidfile', String(process.pid));
and you get a pidfile in /tmp.
It seems there wasn't a pid file created, so I used forever-monitor to restart my app.js script in case of an error.
It looks like it is working.
What you need to do is npm install forever
and write server.js :
var forever = require('forever'),
    child = new (forever.Monitor)('app.js', {
      'silent': false,
      'pidFile': '/var/run/app.pid',
      'watch': false,
      'options': ['8383'],           // additional arguments to pass to the script
      'sourceDir': '.',              // directory that the source script is in
      'watchDirectory': '.',         // top-level directory to watch from
      'watchIgnoreDotFiles': true,   // whether to ignore dot files
      'watchIgnorePatterns': [],     // array of glob patterns to ignore, merged with the contents of the watchDirectory + '/.foreverignore' file
      'logFile': 'logs/forever.log', // path to log output from the forever process (when daemonized)
      'outFile': 'logs/forever.out', // path to log output from the child's stdout
      'errFile': 'logs/forever.err'  // path to log output from the child's stderr
    });

child.start();
forever.startServer(child);
and then run it with node server.js (I run it from the ~/nodejs directory).
Still, the pid file that is supposed to be in /var/run isn't there. Weird, but I don't need monit anymore.
I still don't understand why I should additionally use upstart (like all the related posts suggested); anyhow, when I tried to run upstart, it didn't work.

nodejs downgraded from root using setuid, what about log file ownership?

I've roughly followed a scheme described in http://onteria.wordpress.com/2011/05/31/dropping-privileges-using-process-setuid-in-node-js/ whereby I start node as root, then downgrade the user. This way I can listen on 80 without the need for a proxy. Pretty standard stuff. I have an upstart script to manage the process (Ubuntu server).
The upstart script redirects stdout/err to a log file (which gets owned by root). Internally I'm using winston to log to the console and a file (which also gets owned by root).
In my perfect and happy world I would be able to transparently chown the log files (both the redirected stdout/err one and the one winston made) to the downgraded user. I've tried (naively) chowning them when I setuid'ed from inside the node app, which worked but meant that they never got written to again.
How can I achieve this? Is this possible or should I try to live with (at least some) log files owned by root?
Many Thanks!
What I've ended up with is a version of Peter Lyons' solution (I've cut and pasted the following from a few places, so it may not actually run; the idea works, though):
var logger = new (winston.Logger)();
logger.add(winston.transports.Console, {
  timestamp: true
});

// start the server and downgrade the user
httpsServer.listen(443, function () {
  logger.info('Ready on port 443');
  fs.stat(__filename, function (err, stats) {
    fs.chownSync('stdouterr.log', stats.uid, stats.gid);
    process.setgid(stats.gid);
    process.setuid(stats.uid);
    logger.add(winston.transports.File, {
      filename: 'mylogfile.log',
      handleExceptions: true
    });
    logger.info('downgraded to non-root uid', { "uid": stats.uid });
  });
});
When I've successfully bound to port 443, I log a message to say so. logger is a winston logger configured with only console output (which gets redirected to the stdouterr.log file by starting node with node app.js >> stdouterr.log 2>&1), so this log message only appears in stdouterr.log.
Then I find the owner of the current file and chown stdouterr.log to that user. Then I set the gid and uid of the current process (the dropping-privileges part).
Then I add in my file logging to the winston logger.
Lastly, I log to say I've downgraded the user. This message appears in both stdouterr.log and mylogfile.log.
Not quite as beautiful as I'd hoped (no file logging while the process is running as root) but it means that the log files are easy to secure and manage.
