Winston log files lost on app restart - node.js

I'm using the Winston logger for Node.js, and every time I restart the app, the logs get overwritten by new, blank ones starting at the moment of the restart.
I need to keep the logs, especially when I have to restart the app, since a restart is almost always due to an error.
I've read the documentation on GitHub, but found nothing about this.
This is how I'm using the transports:
winston.add(winston.transports.Console, {
  level: config.logLevel,
  silent: false,
  colorize: true,
  timestamp: true
});
winston.add(winston.transports.File, {
  filename: config.logFile,
  maxsize: 524288000, // 500MB
  maxFiles: 4,
  handleExceptions: true,
  json: false,
  level: 'debug'
});
Is there any way to rotate the logs on app restart so I can see what happened?
Thanks!

You can do that by passing a stream, instead of a filename, as an option to the File transport: https://github.com/winstonjs/winston/blob/master/docs/transports.md#file-transport. Open a file for appending and then:
winston.add(winston.transports.File, { stream: my_already_opened_file })
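For instance, a minimal sketch using append mode, assuming the same winston 2.x-style winston.add API and config.logFile setting as in the question (everything else is illustrative):
const fs = require('fs');
const winston = require('winston');
// Open the log file in append mode ('a') so a restart keeps the existing entries.
const logStream = fs.createWriteStream(config.logFile, { flags: 'a' });
winston.add(winston.transports.File, {
  stream: logStream,
  handleExceptions: true,
  json: false,
  level: 'debug'
});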
If that's not possible, you could try generating a new filename based on the process number, for instance, or use another logging option; there does not seem to be a way in winston to append to existing log files. There are also several ways of generating random or temporary file names (see: nodejs - Temporary file name), but you can always use the filename itself as a store for a sequence number: "log-x.log", for instance. Read the file at the beginning of your program and create another one with the sequence number incremented by one.

Related

How to set different meta for different transports in winston?

I am trying to implement logging for my Node.js application, which uses socket.io for connections. My initial goal was to implement a logging mechanism that stores logs in two different files, based on the authenticated Account and the socket.io connection id, so I can quickly review logs related to a specific account and/or to a specific session when the Account was active.
For this, I've created a logger that uses two File-type transports, like this:
/** ... some constant declarations omitted for brevity **/
const customerTransport = new winston.transports.File({
  dirname: customerSessionDir,
  filename: 'main.log',
  level: 'info',
});
const sessionTransport = new winston.transports.File({
  dirname: path.resolve(customerSessionDir, 'sessions'),
  filename: moment.now().toString() + "." + socketSessionId + ".log",
  level: 'info'
});
const logger = winston.createLogger({
  levels: winston.config.syslog.levels,
  level: 'info',
  format: loggingFormat,
  transports: [customerTransport, sessionTransport]
});
This works as expected, but now I would like to back-reference the socketSessionId variable in the main.log file, so that when I am reviewing the log for a specific account, I get a reference in that file pointing to the smaller session log file. I know this is achievable by setting the defaultMeta property when creating the logger, but if I do it like that, the information will also appear in the session log files, and I would not want to bloat those files with unnecessary information.
Is it somehow possible to automagically add metadata to specific transports only? Is this the intended way to achieve my end goal, or should I create two separate loggers for this scenario?
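One possible approach (a sketch only, not necessarily the intended way): use winston 3's per-transport format option instead of defaultMeta, moving the shared loggingFormat down to each transport so the injected field is actually rendered. It assumes the socketSessionId, customerSessionDir, loggingFormat, path and moment variables from the snippet above; withSessionRef is a made-up name.
// Tag each record with the session id; applied only to the customer transport,
// so the per-session log files are left untouched.
const withSessionRef = winston.format((info) => {
  info.sessionRef = socketSessionId;
  return info;
});
const customerTransport = new winston.transports.File({
  dirname: customerSessionDir,
  filename: 'main.log',
  level: 'info',
  // the back-reference runs before the shared format so it ends up in the output
  format: winston.format.combine(withSessionRef(), loggingFormat)
});
const sessionTransport = new winston.transports.File({
  dirname: path.resolve(customerSessionDir, 'sessions'),
  filename: moment.now().toString() + "." + socketSessionId + ".log",
  level: 'info',
  format: loggingFormat
});
const logger = winston.createLogger({
  levels: winston.config.syslog.levels,
  level: 'info',
  transports: [customerTransport, sessionTransport]
});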

On Azure, bunyan stops logging after a few seconds

I have a NodeJS web app and I've added logging via bunyan. Works perfectly on my desktop. On Azure it works perfectly for 1-10 seconds and then nothing else is ever logged. The app continues to run and operates correctly otherwise. I can't figure out why this is happening. Logging to a plain local file, not a blob or Azure Storage.
Log type is rotating-file, set to rotate 1/day and keep 3 days. The web app has Always On and ARR Affinity set to On, and Application Logging (Filesystem) though I'm not sure that factors here. Instance Count is 1 and Autoscale is not enabled. Node version is 8.7.0. In Console:
> df -h .
D:\home\site\wwwroot\logs
Filesystem Size Used Avail Use% Mounted on
- 100G -892G 99G 113% /d/home/site
Frankly I don't know what that's trying to tell me. Somehow we've used 113% of something, which is impossible. We've used a negative amount, which is impossible. There is still 99G/100G available, so we really are only using 1%. So is it a 'disk full' problem? I don't know. I haven't seen such an error message anywhere.
Prior, the app was using console.log(). We added code to intercept console.X and write to a file first, then call the normal function. The same thing happened - it would work for a few seconds and then not log anything else. I had assumed it was because some component of Azure was also intercepting console calls in order to redirect them to XXX-stdout.txt, and both of us doing that somehow broke it. Now it seems the cause may have been something else.
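A minimal sketch of the console-interception pattern described above (the actual code is not shown in the question; the file name and details are illustrative):
const fs = require('fs');
// Append every console.log call to a local file, then delegate to the original function.
const interceptStream = fs.createWriteStream('console-copy.log', { flags: 'a' });
const originalLog = console.log;
console.log = function (...args) {
  interceptStream.write(args.join(' ') + '\n'); // write to the file first
  originalLog.apply(console, args);             // then call the normal function
};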
Does anyone know why this is happening?
11/12 - Created an app from scratch to log a heartbeat once per second and it worked fine. It also worked once per minute. I'll have to add in pieces from the failing project until it breaks.
11/13 - I don't think there's anything special about the logger configuration.
'use strict';
const bunyan = require('bunyan');
const fs = require('fs');
const path = require('path');
const logname = 'tracker';
const folder = 'logs';
const filename = path.join(folder, logname + ".json");
if (!fs.existsSync(folder)) {
  fs.mkdirSync(folder);
}
var log = bunyan.createLogger({
  name: logname,
  streams: [{
    type: 'rotating-file',
    path: filename,
    level: process.env.LOG_LEVEL || "info",
    period: '1d', // daily rotation
    count: 3 // keep 3 back copies
  }]
});
module.exports = { log };
Still working on reproducing it with something less than the entire project.
11/14 - I've satisfied myself that after the bunyan logging stops the app is still continuing to call it. The "calling log2" console.logs are visible in the Azure Log Stream, but beyond ~30 seconds nothing more gets added to the bunyan log. I never see the "ERROR" logged. This is still in the context of the project, I still can't reproduce it separately.
var log2 = bunyan.createLogger({
  name: logname,
  streams: [{
    type: 'rotating-file',
    path: filename,
    level: process.env.LOG_LEVEL || "info",
    period: '1d', // daily rotation
    count: 3 // keep 3 back copies
  }]
});
var log = {};
log.info = function() {
  console.log("calling log2.info");
  try {
    log2.info(...arguments);
  } catch (err) {
    console.log("log.info ERROR " + err);
  }
};
11/14 - Changed from 'rotating-file' to 'file', same behavior. Enabled xxx logging and it prints the "writing log rec" message but does not add to the file. Did something happen to the file stream? Added code to catch close/finish/cork right where we catch 'error' from the stream, but none of those events fired.
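For reference, those event hooks looked roughly like this (a sketch only; it relies on bunyan's internal streams array, where streams[0].stream is the underlying file write stream):
const fileStream = log2.streams[0].stream;
['error', 'close', 'finish'].forEach(function (event) {
  // Purely diagnostic: report when/why the underlying write stream stops accepting writes.
  fileStream.on(event, function (err) {
    console.log('bunyan file stream event: ' + event + (err ? ' ' + err : ''));
  });
});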
11/15 - I can see from pids and log messages that Azure is restarting my app. I don't know why, nothing in logging-errors.txt, nothing dire on stderr. I also don't know why that second run isn't logging to the file, when the first one did. But if I can figure out why it's restarting and prevent that, then I won't care about the second problem. Azure is so opaque to me.
After much head-banging we've determined that what we're trying to do isn't compatible with an Azure web app. We'll need to stand up a VM. Closing this question.

Directory/File based logging selector

I am looking for a logging solution for my node.js app that would allow me to set the logging level via file/folder selectors.
For example, I would like to be able to set the logging level for all files in /app/schema to 'info', and everything else to 'error'.
Example configuration:
{
  "*": "error",
  "/app/schema": "info" // <-- a regex expression would be great too
}
I constantly comment/uncomment/remove logging statements when I need to examine something. I would rather do that via a configuration change and leave the logging statements in place. A global debug level just creates way too much noise and volume (which matters when storing logs).
Is there something like this? Apache log4j is similar; you can set the logging level at the package level.
If I get your question right, I'd suggest Bunyan. It lets you configure as many log streams and log levels as you like, e.g. from the docs:
var log = bunyan.createLogger({
  name: 'myapp',
  streams: [
    {
      level: 'info',
      stream: process.stdout // log INFO and above to stdout
    },
    {
      level: 'error',
      path: '/var/tmp/myapp-error.log' // log ERROR and above to a file
    }
  ]
});
You'll have to set up the logic that maps each path to its log level yourself.
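A rough sketch of that wiring (illustrative only; the levelConfig object mirrors the example configuration from the question, and levelFor/loggerFor are made-up helper names):
const bunyan = require('bunyan');
const levelConfig = {
  '*': 'error',
  '/app/schema': 'info'
};
// Pick the level for a module: the first matching path prefix wins, '*' is the fallback.
function levelFor(modulePath) {
  for (const prefix of Object.keys(levelConfig)) {
    if (prefix !== '*' && modulePath.startsWith(prefix)) {
      return levelConfig[prefix];
    }
  }
  return levelConfig['*'];
}
const rootLog = bunyan.createLogger({ name: 'myapp' });
// Each module asks for a child logger set to the level chosen for its path.
function loggerFor(modulePath) {
  const child = rootLog.child({ module: modulePath }, true);
  child.level(levelFor(modulePath));
  return child;
}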

Add timestamp to the transports file with Nodejs Winston daily transports in stream mode

To fix losing logs from the file transport under high concurrency, I changed the transport to stream mode. That fixed the lost-logs issue, but now I have another problem: the transport file can no longer be created day by day (even though I set the transport mode to DailyRotateFile).
I want to know: is there any option that can be set to handle this case, or do I have to hack it?
Thanks guys.
Demo code below:
new (winston.Logger)({
  transports: [
    new (winston.transports.DailyRotateFile)({
      level: 'info',
      stream: fs.createWriteStream('performance.log')
    })
  ]
});
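For comparison, the usual daily-rotation setup drives rotation from filename/datePattern rather than a fixed stream (a sketch assuming the winston-daily-rotate-file transport; the exact datePattern syntax depends on the version, so check the transport's docs):
new (winston.Logger)({
  transports: [
    new (winston.transports.DailyRotateFile)({
      level: 'info',
      filename: 'performance.log',
      datePattern: 'yyyy-MM-dd' // version-dependent; passing a fixed stream instead bypasses rotation
    })
  ]
});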

Node.js winston logger: How to start from a new line when inserting a log into the log file?

I'm making a Node.js app and I am using Winston for most of my logging.
But I find all the records in the log file on one line; I want each log record to start on a new line. Is there any way to do this?
My code:
var winston = require("winston");
var logger = new (winston.Logger)({
  transports: [
    new (winston.transports.Console)(),
    new (winston.transports.File)({ filename: './log/logFile.log', handleExceptions: true, json: true })
  ]
});
Just like that:
{"level":"info","message":"test","timestamp":"2012-12-05T07:12:23.774Z"}
{"level":"info","message":"test","timestamp":"2012-12-05T07:15:16.780Z"}
It is not a winston issue. Winston uses Unix-style newlines (i.e., only the single character 0x0A).
You just need to stop using Windows Notepad and start using another text editor (like Notepad++ or Sublime) or an IDE like Enide Studio.
BTW, newer winston versions have options for the timestamp format.
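For example, in winston 3 (a sketch; not the winston version from the question) the timestamp format can be set like this:
const winston = require('winston');
const logger = winston.createLogger({
  format: winston.format.combine(
    winston.format.timestamp({ format: 'YYYY-MM-DD HH:mm:ss' }), // custom time format
    winston.format.json()
  ),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: './log/logFile.log', handleExceptions: true })
  ]
});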
