Directory/File based logging selector - node.js

I am looking for a logging solution for my node.js app that would allow me to set the logging level via file/folder selectors.
For example, I would like to set logging for all files in /app/schema to 'info', and everything else to 'error'.
Example configuration:
{
  "*": "error",
  "/app/schema": "info" // <-- a regex selector would be great too
}
I constantly comment/uncomment/remove logging statements when I need to examine something. I would rather do that via a configuration change and leave the files intact. A global debug level just creates way too much noise and volume (which matters when storing logs).
Is there something like this? Apache log4j is similar: it lets you set the logging level per package.
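For what it's worth, the selector idea is small enough to sketch in plain Node, with no library assumed — `config`, `levelFor`, and `logAt` below are illustrative names, not an existing API:

```javascript
// Lower number = more severe; a message is emitted when its level is
// within the threshold configured for its file's path.
const levels = { error: 0, warn: 1, info: 2, debug: 3 };

const config = {
  '*': 'error',          // fallback for everything
  '/app/schema': 'info'  // more verbose under this folder
};

// Longest matching path prefix wins; '*' is the fallback.
function levelFor(file) {
  let best = config['*'];
  let bestLen = 0;
  for (const [prefix, lvl] of Object.entries(config)) {
    if (prefix !== '*' && file.startsWith(prefix) && prefix.length >= bestLen) {
      best = lvl;
      bestLen = prefix.length;
    }
  }
  return best;
}

function logAt(file, level, msg) {
  const enabled = levels[level] <= levels[levelFor(file)];
  if (enabled) console.log(`[${level}] ${file}: ${msg}`);
  return enabled;
}

logAt('/app/schema/user.js', 'info', 'schema query'); // enabled: 'info' under /app/schema
logAt('/app/routes.js', 'info', 'request in');        // suppressed: level is 'error' here
```

Regex selectors would be a one-line change: test `new RegExp(prefix).test(file)` instead of `startsWith`.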

If I get your question right, I'd suggest Bunyan; it lets you configure as many log streams and log levels as you like. For example, from the docs:
var log = bunyan.createLogger({
  name: 'myapp',
  streams: [
    {
      level: 'info',
      stream: process.stdout // log INFO and above to stdout
    },
    {
      level: 'error',
      path: '/var/tmp/myapp-error.log' // log ERROR and above to a file
    }
  ]
});
You'll have to set up the logic for each path with each log-level.


How to set different meta for different transports in winston?

I am trying to implement logging for my node.js application, which uses socket.io for connections. My initial goal was a logging mechanism that stores logs in two different files, based on the authenticated Account and the socket.io connection id, so I can quickly review logs related to a specific account, and/or to a specific session when the Account was active.
For this, I've created a logger, that uses two File type transports, like this:
/** ... some constant declarations omitted for brevity **/
const customerTransport = new winston.transports.File({
  dirname: customerSessionDir,
  filename: 'main.log',
  level: 'info',
});
const sessionTransport = new winston.transports.File({
  dirname: path.resolve(customerSessionDir, 'sessions'),
  filename: moment.now().toString() + "." + socketSessionId + ".log",
  level: 'info'
});
const logger = winston.createLogger({
  levels: winston.config.syslog.levels,
  level: 'info',
  format: loggingFormat,
  transports: [customerTransport, sessionTransport]
});
This works as expected, but now I want to back-reference the socketSessionId variable in the main.log file, so that when I review the log for a specific account I get a pointer to the smaller session log file. I know this is achievable by setting the defaultMeta property when creating the logger, but then the information will also appear in the session log files, and I would not want to bloat those files with unnecessary information.
Is it possible, somehow, to automagically add metadata to specific transports only? Is this the intended way to achieve my end goal? Or should I create two separate loggers and use those for this scenario?
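For reference, winston 3.x lets each transport take its own `format` option, and a format is at its core just a transform over the `info` object — so a transform applied only to the customer transport could tag main.log entries without touching the per-session files. A library-free sketch of that transform (the `sessionRef` field and record shape are illustrative; in winston it would be wrapped with `winston.format(...)` and passed as the customer transport's `format`):

```javascript
// A winston-style format boils down to: take an info object, return an
// enriched copy. Applied per-transport, only that transport sees the field.
function addSessionRef(sessionId) {
  return (info) => ({ ...info, sessionRef: sessionId });
}

const record = { level: 'info', message: 'socket connected' };
const tagged = addSessionRef('abc123')(record);
// `tagged` carries sessionRef: 'abc123'; the original record is untouched,
// so a transport without this transform never sees the extra field.
```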

What are logging levels?

I know transports are the places where I want to keep my logs, but I don't understand what the level of logging is. I have the following code to create a logger with multiple transports.
const logger = winston.createLogger({
  format: winston.format.json(),
  transports: [
    new winston.transports.File({ filename: 'logs/error.log', level: 'error' }),
    new winston.transports.File({ filename: 'logs/info.log', level: 'info' }),
  ],
})
When I log with logger.log('error', err), it logs to both the info.log and error.log files. Why is this happening? Can somebody explain the idea of levels in logging, please?
Geno's comment is correct; in pretty much all logging platforms (winston, log4js, etc.), the log level is a threshold: a transport prints messages at that severity and above.
Setting log level to ERROR means "only print FATAL and ERROR messages".
Setting log level to INFO means "print FATAL, ERROR, WARN, and INFO messages".
There is no way (in Winston, at least, but I think it's generally true across the board) to specify a log transport that only carries INFO messages and not ERROR messages. This is by design.
When you set a log level, you are actually specifying a level of detail - FATAL is the least detailed logging, DEBUG is the most detailed logging. It wouldn't make sense to ask for more detail, and then have fatal errors disappear from the log. That is why every error level also includes all messages from levels "below" it.
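The threshold rule can be sketched numerically. The table below mirrors winston's default npm levels, where a lower number means a more severe message (the `accepts` helper is illustrative, not winston's internal name):

```javascript
// winston's default npm levels: lower number = more severe.
const levels = { error: 0, warn: 1, info: 2, http: 3, verbose: 4, debug: 5, silly: 6 };

// A transport set to `transportLevel` accepts a message iff the message is
// at least as severe as the threshold (numerically <=).
function accepts(transportLevel, messageLevel) {
  return levels[messageLevel] <= levels[transportLevel];
}

accepts('info', 'error');  // true  — the info.log transport also receives errors
accepts('error', 'info');  // false — the error.log transport drops info messages
```

This is exactly why logger.log('error', err) in the question lands in both files: both transports' thresholds admit an error-level message.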

Proper error logging for node applications

I am developing an express project which will have multiple modules/services in it. The folder structure looks mostly like this:
-- app.js
-- payment_service
   -- routes.js
   -- index.js
   -- models
      -- model_1.js
      -- model_2.js
APIs in index.js are the only exposed APIs and they work as a gateway for all requests coming for this module/service.
Most of the services can throw operational errors under many circumstances, so manual intervention may be needed to fix things. So I need to:
Log errors properly, with enough context that some person/script can do the needful.
Figure out the reason for failure.
There will be dedicated teams owning each service, so I should be able to differentiate between error logs for each service so that they can be aggregated and forwarded to the concerned person.
I decided to go with the ELK stack so that I can generate reports by script.
The main problem that I am facing is that I can't maintain correlation between logs. For example; If a request comes and it travels through five functions and each function logs something then I can't relate those logs.
One way is to create a child logger for each request and pass it to all the functions, but passing a logger instance to every function seems like extra overhead.
Another option is to use something like verror and do the logging only at the entry point of the service/module, so that the whole context is contained in the log. This approach looks OK for logging errors, but it can't help with info and debug logs, which help me a lot in the development and testing phase.
For the sake of differentiating between error logs, I am going to create
A dedicated logger for each service with log level error.
An application wide generic logger for info and debug purpose.
Is this the correct approach?
What will be the best way so that I can achieve all the requirements in simplest way?
I'd recommend you use a logger and you don't need anything too complex. For example:
npm install 12factor-log
Then create a file in your root folder near app.js (or in a /lib folder is where I'd place libraries)
logger.js
const Log = require('12factor-log');

module.exports = (params) => {
  return new Log(params);
}
Then in your modules, import your logger and pass in the module name when you instantiate it so you can track where statements come from...
model_1.js
var log = require('./logger')({name: 'model_1'});
// ...
log.info("Something happened here");
// ...
try {
  // ...
} catch (error) {
  const message = `Error doing x, y, z with value ${val}`;
  log.error(message);
  throw new Error(message);
}
Then handle errors gracefully at your controller -> view layer for a user-friendly experience.
Your logs would print something like this:
{"ts":"2018-04-27T16:37:24.914Z","msg":"Something happened here","name":"model_1","type":"info","level":3,"hostname":"localhost","pid":18}
As far as correlation of logs: as you can see in the output above, each entry includes the hostname of the machine it's running on, the name of the module, and the severity level. You can import this JSON into Logstash and load it into Elasticsearch, which will store the JSON for easy search and indexing.
See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html
Logging is complex and many people have worked on it. I would suggest not doing so yourself.
So, not following my own advice, I created my own logging package:
https://www.npmjs.com/package/woveon-logger
npm install woveon-logger
This prints file and line numbers of errors and messages, has logging levels and aspect-oriented logging, and can dump a stack trace in one call. It even has color coding options. If you get stuck and need some feature in logging, let me know.
let log1 = new Logger('log1', {level: 'info', debug: true, showname: true});
let log2 = new Logger('log2', {level: 'verbose', debug: true, showname: true});
// ...
log1.info('Here is a log message, that is on line 23.');
log1.verbose('Does not show');
log2.verbose('Shows because log2 is verbose logging');
log2.setAspect('IO', true);
log2.aspect('IO', 'Showing aspect IO logging, for logs for IO related operations');
[2018-06-10T10:43:20.692Z] [INFO--] [log1 ] [path/to/myfile:23] Here is a log message, that is on line 23.
[2018-06-10T10:43:20.792Z] [VERBOS] [log2 ] [path/to/myfile:25] Shows because log2 is verbose logging
[2018-06-10T10:43:20.892Z] [IO----] [log2 ] [path/to/myfile:27] Showing aspect IO logging, for logs for IO related operations
Also, some other features like:
log1.throwError('Logs this as both a line of logging, and throws the error with the same message');
log1.printStack('Prints this label next to the stack trace.');
Hope it helps!
You can use the grackle_tracking library: https://www.getgrackle.com/analytics_and_tracking
It logs errors and traffic to your db.

On Azure, bunyan stops logging after a few seconds

I have a NodeJS web app and I've added logging via bunyan. Works perfectly on my desktop. On Azure it works perfectly for 1-10 seconds and then nothing else is ever logged. The app continues to run and operates correctly otherwise. I can't figure out why this is happening. Logging to a plain local file, not a blob or Azure Storage.
Log type is rotating-file, set to rotate 1/day and keep 3 days. The web app has Always On and ARR Affinity set to On, and Application Logging (Filesystem) though I'm not sure that factors here. Instance Count is 1 and Autoscale is not enabled. Node version is 8.7.0. In Console:
> df -h .
D:\home\site\wwwroot\logs
Filesystem Size Used Avail Use% Mounted on
- 100G -892G 99G 113% /d/home/site
Frankly I don't know what that's trying to tell me. Somehow we've used 113% of something, which is impossible. We've used a negative amount, which is impossible. There is still 99G/100G available, so we really are only using 1%. So is it a 'disk full' problem? I don't know. I haven't seen such an error message anywhere.
Prior, the app was using console.log(). We added code to intercept console.X and write to a file first, then call the normal function. The same thing happened - it would work for a few seconds and then not log anything else. I had assumed it was because some component of Azure was also intercepting console calls in order to redirect them to XXX-stdout.txt, and both of us doing that somehow broke it. Now it seems the cause may have been something else.
Does anyone know why this is happening?
11/12 - Created an app from scratch to log a heartbeat once per second and it worked fine. Also worked once per minute. I'll have to add in pieces from the failing project until it breaks.
11/13 - I don't think there's anything special about the logger configuration.
'use strict'

const bunyan = require('bunyan');
const fs = require('fs');
const path = require('path');

const logname = 'tracker';
const folder = 'logs';
const filename = path.join(folder, logname + ".json");

if (!fs.existsSync(folder)) {
  fs.mkdirSync(folder);
}

var log = bunyan.createLogger({
  name: logname,
  streams: [{
    type: 'rotating-file',
    path: filename,
    level: process.env.LOG_LEVEL || "info",
    period: '1d', // daily rotation
    count: 3      // keep 3 back copies
  }]
});

module.exports = { log };
Still working on reproducing it with something less than the entire project.
11/14 - I've satisfied myself that after the bunyan logging stops the app is still continuing to call it. The "calling log2" console.logs are visible in the Azure Log Stream, but beyond ~30 seconds nothing more gets added to the bunyan log. I never see the "ERROR" logged. This is still in the context of the project, I still can't reproduce it separately.
var log2 = bunyan.createLogger({
  name: logname,
  streams: [{
    type: 'rotating-file',
    path: filename,
    level: process.env.LOG_LEVEL || "info",
    period: '1d', // daily rotation
    count: 3      // keep 3 back copies
  }]
});

var log = {};
log.info = function() {
  console.log("calling log2.info");
  try {
    log2.info(...arguments);
  } catch (err) {
    console.log("log.info ERROR " + err);
  }
}
11/14 - Changed from 'rotating-file' to 'file', same behavior. Enabled xxx logging and it prints the "writing log rec" message but does not add to the file. Something happened to the file stream? Added code to catch close/finish/cork right where we catch 'error' from the stream, didn't catch any of those events.
11/15 - I can see from pids and log messages that Azure is restarting my app. I don't know why, nothing in logging-errors.txt, nothing dire on stderr. I also don't know why that second run isn't logging to the file, when the first one did. But if I can figure out why it's restarting and prevent that, then I won't care about the second problem. Azure is so opaque to me.
After much head-banging we've determined that what we're trying to do isn't compatible with an Azure web app. We'll need to stand up a VM. Closing this question.

Groovy and Log4J Config.groovy Configuration

I am using Groovy and Log4J.
I am not a Log4J expert but after searching many sites for answers I thought I had a configuration that should work in the “Config.groovy” file.
Here’s the result:
I get console logging.
However, the log files named “project.log” and “StackTrace.log” are empty.
I also get another file created named “StackTrace.log.1” (2KB size) that contains an exception message (a non-critical error) posted after I run the application.
Questions:
Why am I not getting logging messages in the “project.log” and “StackTrace.log” files?
Why is a file named “StackTrace.log.1” getting created and written to instead of the stack trace messages getting logged to the “StackTrace.log” file?
Any help or clues as to what I'm doing wrong will be greatly appreciated.
Here is my “Config.groovy” file (log4j portion):
// log4j configuration
log4j = {
  // Set default level for all, unless overridden below.
  root { debug 'stdout', 'file' }

  // Set level for all application artifacts
  info "grails.app"

  error "org.hibernate.SQL", "org.hibernate.type"
  error 'org.codehaus.groovy.grails.web.servlet',        // controllers
        'org.codehaus.groovy.grails.web.pages',          // GSP
        'org.codehaus.groovy.grails.web.sitemesh',       // layouts
        'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
        'org.codehaus.groovy.grails.web.mapping',        // URL mapping
        'org.codehaus.groovy.grails.commons',            // core / classloading
        'org.codehaus.groovy.grails.plugins',            // plugins
        'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
        'org.springframework',
        'org.hibernate',
        'net.sf.ehcache.hibernate'

  warn 'org.mortbay.log'

  appenders {
    rollingFile name: 'file', file: 'project.log', maxFileSize: 1024, append: true
    rollingFile name: 'stacktrace', file: "StackTrace.log", maxFileSize: 1024, append: true
  }
}
Is it possible that StackTrace.log.1 was created because the maxFileSize of 1024 bytes was reached, so the rolling file rolled over? With a limit that small, the appender rolls almost immediately, which would explain why StackTrace.log itself looks empty while the .1 backup holds the content.
I would also begin by removing all the class names listed there, so that the debug level defined in the root closure is applied to all loggers, and work from there.
