From what I understand, Pino (v7.5.1) does synchronous logging by default. From the docs:
In Pino's standard mode of operation log messages are directly written to the output stream as the messages are generated with a blocking operation.
I am using pino.multistream like so:
const pino = require('pino')
const pretty = require('pino-pretty')
const fs = require('fs')

const logdir = '/Users/punkish/Projects/z/logs'
const streams = [
    {stream: fs.createWriteStream(`${logdir}/info.log`, {flags: 'a'})},
    {stream: pretty()},
    {level: 'error', stream: fs.createWriteStream(`${logdir}/error.log`, {flags: 'a'})},
    {level: 'debug', stream: fs.createWriteStream(`${logdir}/debug.log`, {flags: 'a'})},
    {level: 'fatal', stream: fs.createWriteStream(`${logdir}/fatal.log`, {flags: 'a'})}
]
Strangely, Pino is behaving asynchronously: the output of a curl operation appears out of sequence, before earlier events that are logged with log.info.
log.info('1')
// .. code to do something (1)
log.info('2')
// .. code to do something (2)
log.info('3')
// .. code to do something (3)
const execSync = require('child_process').execSync
execSync(`curl --silent --output ${local} '${remote}'`)
and my console output is
1
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 39.5M 100 39.5M 0 0 108M 0 --:--:-- --:--:-- --:--:-- 113M
2
3
This is a bit annoying and confusing. Maybe this is not Pino's fault, and curl is causing the problem. But if I replace the Pino logging with console.log, the order is as expected. So it seems the problem is Pino behaving asynchronously. How can I go back to synchronous logging?
The trick is to call pino.destination({...}) to create a SonicBoom output stream: a pino-specific alternative to fs.createWriteStream. The SonicBoom options have a boolean property sync. You also need the sync option in pretty({...}).
const pino = require('pino')
const pretty = require('pino-pretty')
const logdir = '/Users/punkish/Projects/z/logs'
const createSonicBoom = (dest) =>
pino.destination({dest: dest, append: true, sync: true})
const streams = [
{stream: createSonicBoom(`${logdir}/info.log`)},
{stream: pretty({
colorize: true,
sync: true,
})},
{level: 'error', stream: createSonicBoom(`${logdir}/error.log`)},
{level: 'debug', stream: createSonicBoom(`${logdir}/debug.log`)},
{level: 'fatal', stream: createSonicBoom(`${logdir}/fatal.log`)}
]
Test:
const log = pino({ level: 'info' }, pino.multistream(streams))
console.log('Before-Fatal')
log.fatal('Fatal')
log.error('Error')
log.warn('Warn')
console.log('After-Warn, Before-Info')
log.info('Info')
console.log('After-Info')
Output:
Before-Fatal
[1234567890123] FATAL (1234567 on host): Fatal
[1234567890127] ERROR (1234567 on host): Error
[1234567890127] WARN (1234567 on host): Warn
After-Warn, Before-Info
[1234567890128] INFO (1234567 on host): Info
After-Info
Seems like using pino.multistream (or multiple transports, which appear to have the same effect as multistream) automatically forces Pino to behave asynchronously, and there is no way around it. Since synchronous logging is more important to me than speed (in this project), I will look for an alternative logging solution.
Related
I am trying to create a daily rotating log with log entries from throughout the application as well as uncaught exceptions:
const { createLogger, format, transports } = require('winston')
import 'winston-daily-rotate-file'
const httpContext = require('express-http-context')
const requestIdFormat = format((info, opts) => {
    const requestId = httpContext.get('requestId')
    if (requestId) {
        info.requestId = requestId
    } else {
        info.requestId = ''
    }
    return info
})
const allTransport = new transports.DailyRotateFile({
filename: 'logs/%DATE%-application.log',
datePattern: 'YYYY-MM-DD',
maxSize: '20m',
maxFiles: '14d',
level: 'info',
handleExceptions: true
})
const errorTransport = new transports.DailyRotateFile({
filename: 'logs/%DATE%-error.log',
datePattern: 'YYYY-MM-DD',
maxSize: '20m',
maxFiles: '14d',
level: 'error',
handleExceptions: true
})
export const logger = createLogger({
format: format.combine(
    requestIdFormat(),
    format.timestamp(),
    format.printf(i => `${i.timestamp} | ${i.requestId} | ${i.level}: ${i.message}`),
    format.errors({stack: true})
),
transports: [
allTransport,
errorTransport,
],
exitOnError: false
})
But the exceptions, e.g. throw Error('hello?'), are not logged to the log files.
I've tried other variations (cf. https://github.com/winstonjs/winston#handling-uncaught-exceptions-with-winston), e.g. setting exceptionHandlers in createLogger, but that does not work either.
How should I alter the code to include uncaught exceptions in the log?
UPDATE: I now see that an exception thrown on e.g. an invalid import IS in fact logged, so maybe the issue is that the exception I test with is thrown in an Express service - maybe it is caught by the Express framework, and that is why it is not logged?
Thanks,
-Louise
I ended up just "manually" logging exceptions, i.e. logger.error(err), since I already have a middleware hook for showing a catch-all error page.
I am still not sure why some exceptions were automatically logged, while I had to manually log the exceptions I explicitly throw via "throw new Error('some message')".
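For anyone curious, a minimal sketch of what such a middleware hook might look like (illustrative only; it assumes the logger from the question and a standard Express app):
// Express catches synchronous exceptions thrown inside route handlers and
// forwards them to error-handling middleware, so winston's process-level
// handleExceptions never sees them. Log them manually here instead.
app.use((err, req, res, next) => {
    logger.error(err.stack || err.message)
    res.status(500).send('Something went wrong')
})
That would also explain the UPDATE above: an invalid import throws before Express can catch anything, so it reaches winston's exception handler, while exceptions thrown inside route handlers never escape Express.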
I've faced this kind of situation before, and for some reason the node process sometimes gets killed/exited before all log handling finishes executing. Some libraries may conflict with each other's handling, because they call process.exit() while other libraries are still trying to handle the same event. So we need to investigate and debug to find what is terminating the process before the logs are flushed/written to disk. I strongly recommend not doing any async processing when catching exceptions: do everything synchronously, and fast.
In a project I worked on, I spent hours until I found a process.exit() call in a module, probably copied from the internet. process.exit() being called all over an app is a root cause of errors and misbehavior.
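As a sketch of what "sync and fast" can mean in practice (my own illustration, not from the original project): log the failure with synchronous writes only, then exit deliberately:
const fs = require('fs')

process.on('uncaughtException', (err) => {
    // fs.writeSync is blocking, so the message reaches stderr (fd 2)
    // before the process dies, even if something else calls process.exit()
    fs.writeSync(2, `uncaught exception: ${err.stack}\n`)
    process.exit(1)
})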
I'm attempting to format my logs so that Google Cloud will correctly extract the log level. This is running on Cloud Run, with TypeScript. Cloud Run grabs the logs from the container output.
If I do the following, google correctly parses the log line:
console.log(JSON.stringify({
severity: 'ERROR',
message: 'This is testing a structured log error for GCP'
}));
And the log output looks like this (screenshot omitted; the entry is parsed with severity ERROR):
I've tried a number of different ways to format the logs with winston, and ended up with the following:
useFormat = format.combine(
format((info, opts) => {
info['severity'] = info.level;
delete info.level;
return info;
})(),
format.json());
this.winston = winston.createLogger({
level: logLevel,
format: useFormat,
transports: [new winston.transports.Console()]
});
Which looks like it should work (it correctly outputs the JSON line), but I get this in the GCP logs (screenshot omitted; the whole JSON appears as the message, with the default severity):
Any help appreciated.
Turns out I was close, I just needed to .toUpperCase() the log level (and I'm mapping Verbose -> Debug; I don't really understand why GCP decided on a totally different log-level system than everyone else). New code:
useFormat =
format.combine(
format((info, opts) => {
let level = info.level.toUpperCase();
if(level === 'VERBOSE') {
level = 'DEBUG';
}
info['severity'] = level;
delete info.level;
return info;
})(),
format.json());
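With that format in place, the behaviour would be roughly as follows (assuming the logger is exposed as logger; JSON key order may differ):
logger.verbose('starting worker');
// -> {"message":"starting worker","severity":"DEBUG"}
logger.error('request failed');
// -> {"message":"request failed","severity":"ERROR"}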
The last bit of the question is confusing. The problem the OP is pointing to is that the whole JSON is printed as the message and the severity stays at the default. The JSON should not be printed out, only the message, and the severity should be debug. The answer the OP provides does what is wanted.
For others that may be confused in the same way I was.
I want to get the console contents of the currently running Node.js script.
I've tried listening for this event, but it doesn't work:
setInterval(function() { console.log("Hello World!") }, 1000);
process.stdout.on('message', (message) => {
console.log('stdout: ' + message.toString())
})
The event never fires.
This is not a pure Node.js solution, but it works very well if you run Linux.
Create a start.sh file.
Put the following into it:
start.sh:
#!/bin/bash
touch ./console.txt
node ./MyScript.js |& tee console.txt &
wait
Now open your Node.js script (MyScript.js) and add this Express.js route:
MyScript.js:
const fs = require('fs');
const express = require('express');
const app = express();

app.get('/console', function(req, res){
    const console2 = fs.readFileSync("./console.txt", 'utf8');
    res.send(console2);
});
Always start your Node.js application by calling start.sh.
Now calling http://example.com/console should output the console!
A part of this answer was used.
NOTE: To format the line breaks of the console output so they show correctly in the browser, you can use a module like nl2br.
A piece of advice: problems aren't always solved the direct way; most of them are solved indirectly. Keep searching for possible ways to achieve what you want, and don't search only for exactly what you're looking for.
There's no 'message' event on process.stdout.
I want to make a GET in my Express.js app called /getconsole .. it
should return the console of the current running Node.js script (which
is running the Express.js app too)
What you should use is a custom logger. I recommend winston with a file transport; you can then read from that file when a request hits your endpoint.
const express = require('express');
const fs = require('fs');
const winston = require('winston');
const path = require('path');
const logFile = path.join(__dirname, 'out.log');
const app = express();
const logger = winston.createLogger({
level: 'info',
format: winston.format.json(),
transports: [
new winston.transports.Console({
format: winston.format.simple()
}),
new winston.transports.File({
filename: logFile
})
]
});
// Don't use console.log anymore.
logger.info('Hi');
app.get('/console', (req, res) => {
// Secure this endpoint somehow
fs.createReadStream(logFile)
.pipe(res);
});
app.get('/log', (req, res) => {
    logger.info('Log: ' + req.query.message);
    res.sendStatus(204); // respond so the request doesn't hang
});
app.listen(3000);
You can also use a websocket connection, and create a custom winston transport to emit the logs.
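For the websocket variant, here is a rough sketch of a custom transport built on the winston-transport base class, broadcasting via the ws package (the server setup and port are assumptions):
const Transport = require('winston-transport');
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8081 }); // hypothetical port

class WsTransport extends Transport {
    log(info, callback) {
        // broadcast each log entry to every connected client
        const line = JSON.stringify(info);
        wss.clients.forEach((client) => {
            if (client.readyState === WebSocket.OPEN) client.send(line);
        });
        callback();
    }
}

// then: logger.add(new WsTransport());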
stdout, when going to a tty (terminal), is an instance of a writable stream. The same is true of stderr, to which node writes error messages. These streams don't have 'message' events. The on() method lets you subscribe to any named event, even one that will never fire.
Your requirement is not clear from your question. If you want to intercept and inspect console.log operations, you can pipe stdout to some other stream. Similarly, you can pipe stderr to some other stream to intercept and inspect errors.
Or, in a burst of ugliness and poor maintainability, you can redefine the console.log and console.error functions to do what you need.
It sounds like you want to buffer up the material written to the console, and then return it to an http client in response to a GET operation. To do that you would either:
stop using console.log for that output, and switch to a high-quality logging npm package like winston, or
redefine console.log (and possibly console.error) to save output in some simple express-app-scope data structure, perhaps an array of strings, and then implement your GET handler to read that array, format it, and return it (see the sketch below).
My first suggestion is more scalable.
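If you do go the redefinition route, a minimal sketch might look like this (illustrative only; it assumes the Express app from the question):
const logBuffer = []; // simple app-scope buffer of console output

const originalLog = console.log;
console.log = (...args) => {
    logBuffer.push(args.join(' '));
    originalLog.apply(console, args); // still write to the real console
};

app.get('/getconsole', (req, res) => {
    res.type('text/plain').send(logBuffer.join('\n'));
});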
By the way, please consider the security implications of making your console log available to malicious strangers.
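For instance, the /console route from the snippet above could require a shared secret before streaming the file (a minimal sketch; the CONSOLE_TOKEN environment variable is a placeholder):
app.get('/console', (req, res) => {
    // reject requests that don't present the expected token
    if (req.query.token !== process.env.CONSOLE_TOKEN) {
        return res.sendStatus(403);
    }
    fs.createReadStream(logFile).pipe(res);
});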
I'm using node.js with morgan as a logger, like this:
var logger = require('morgan');

// create a rotating write stream
var accessLogStream = require("stream-file-archive")({
    path: "logs/app-%Y-%m-%d.log", // Write logs rotated by the day
    symlink: "logs/current.log",   // Maintain a symlink called current.log
    compress: true                 // Gzip old log files
});
app.use(logger('combined', {stream: accessLogStream}));
and I want to know how to limit the maximum file size of the access log.
Thanks
In my web analytics, I log the data to a plain text file. I want to rotate the log on a daily basis because it's logging too much data. Currently I am using bunyan to rotate the logs.
Problem I am facing
It rotates the file correctly, but the rotated log files are named log.0, log.1, etc. I want the file names to be log.05-08-2013, log.04-08-2013, etc.
I can't edit the source of the bunyan package because we install the modules using package.json via npm.
So my question is: is there any other log rotation in Node.js that meets my requirement?
Winston does support log rotation using a date in the file name. Take a look at this pull request, which adds the feature and was merged four months ago. Unfortunately the documentation isn't listed on the site yet, but there is another pull request pending to fix that. Based on that documentation, and the tests for the log rotation feature, you should be able to just add it as a new transport to enable log rotation. Something like the following:
winston.add(winston.transports.DailyRotateFile, {
filename: './logs/my.log',
datePattern: '.dd-MM-yyyy'
});
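Note that this snippet uses the older winston API; with winston 3.x the rotating transport lives in the separate winston-daily-rotate-file package (as in the earlier question above). A minimal sketch:
const winston = require('winston');
require('winston-daily-rotate-file'); // side effect: registers winston.transports.DailyRotateFile

const logger = winston.createLogger({
    transports: [
        new winston.transports.DailyRotateFile({
            filename: 'logs/my-%DATE%.log',
            datePattern: 'DD-MM-YYYY' // produces names like my-05-08-2013.log
        })
    ]
});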
If you also want logrotate behaviour (e.g. removing logs that are older than a week) in addition to saving logs by date, you can add the following code:
var fs = require('fs');
var path = require("path");
var CronJob = require('cron').CronJob;
var _ = require("lodash");
var logger = require("./logger");
var job = new CronJob('00 00 00 * * *', function(){
    // Runs every day at 00:00:00.
    fs.readdir(path.join("/var", "log", "ironbeast"), function(err, files){
        if(err){
            logger.error("error reading log files");
        } else{
            var currentTime = new Date();
            var weekInMs = 7 * 24 * 60 * 60 * 1000; // one week in milliseconds
            _(files).forEach(function(file){
                var fileDate = file.split(".")[2]; // get the date from the file name
                if(fileDate){
                    fileDate = fileDate.replace(/-/g,"/");
                    var fileTime = new Date(fileDate);
                    if((currentTime - fileTime) > weekInMs){
                        console.log("deleting file", file);
                        fs.unlink(path.join("/var", "log", "ironbeast", file),
                            function (err) {
                                if (err) {
                                    logger.error(err);
                                } else {
                                    logger.info("deleted log file: " + file);
                                }
                            });
                    }
                }
            });
        }
    });
}, function () {
    // This function is executed when the job stops
    console.log("finished logrotate");
},
true, /* Start the job right now */
'Asia/Jerusalem' /* Time zone of this job. */
);
where my logger file is:
var path = require("path");
var winston = require('winston');
var logger = new winston.Logger({
transports: [
new winston.transports.DailyRotateFile({
name: 'file#info',
level: 'info',
filename: path.join("/var", "log", "MY-APP-LOGS", "main.log"),
datePattern: '.MM-dd-yyyy'
}),
new winston.transports.DailyRotateFile({
name: 'file#error',
level: 'error',
filename: path.join("/var", "log", "MY-APP-LOGS", "error.log"),
datePattern: '.MM-dd-yyyy',
handleExceptions: true
})
]});
module.exports = logger;
There's the logrotator module for log rotation, which you can use regardless of the logging mechanism.
You can specify the format option to control the date format of the rotated file names (or any other naming scheme, for that matter):
var logrotate = require('logrotator');
// use the global rotator
var rotator = logrotate.rotator;
// or create a new instance
// var rotator = logrotate.create();
// check file rotation every 5 minutes, and rotate the file if its size exceeds 10 mb.
// keep only 3 rotated files and compress (gzip) them.
rotator.register('/var/log/myfile.log', {
schedule: '5m',
size: '10m',
compress: true,
count: 3,
format: function(index) {
    // note: getMonth() is zero-based (0 = January)
    var d = new Date();
    return d.getDate() + "-" + d.getMonth() + "-" + d.getFullYear();
}
});
mongodb
winston itself does not support log rotation. My bad.
mongodb has a log rotation use case. You could then export the logs to file names matching your requirement.
winston also has a mongodb transport, but I don't think it supports log rotation out of the box, judging from its API.
This may be overkill, though.
forking bunyan
You can fork bunyan and put your repo's URL in package.json.
This is the easiest solution if you're fine with freezing bunyan's features or maintaining your own fork.
Since it is an open source project, you could even add your feature to it and submit a pull request to help improve bunyan.