Azure Application Insights can collect log messages from the Log4j framework in Java, as shown here:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/java-trace-logs
Is there anything similar for a Node.js application, without using the Azure Node SDK to log messages? I am looking to send log4js log messages to App Insights through configuration changes alone.
If you just need to log messages, you can use the log package for Node.js.
Installation:
npm i log
This is a universal logging utility: configurable, environment- and presentation-agnostic, with log levels and namespacing (debug-style) support.
Usage
Writing logs
// Default logger writes at 'info' level
let log = require("log"); // 'let', since the namespaced loggers below reassign it
// Log 'info' level message:
log("some info message %s", "injected string");
// Get namespaced logger (debug lib style)
log = log.get("my-lib");
// Log 'info' level message in context of 'my-lib' namespace:
log("some info message in 'my-lib' namespace context");
// Namespaces can be nested
log = log.get("func");
// Log 'info' level message in context of 'my-lib:func' namespace:
log("some info message in 'my-lib:func' namespace context");
// Log 'error' level message in context of 'my-lib:func' namespace:
log.error("some error message");
// log output can be dynamically enabled/disabled during runtime
const { restore } = log.error.disable();
log.error("error message not really logged");
// Restore previous logs visibility state
restore();
log.error("error message to be logged");
Available log levels
Mirror of applicable syslog levels (in severity order):
debug - debugging information (hidden by default)
info - a purely informational message (hidden by default)
notice - condition normal, but significant
warning (also aliased as warn) - condition warning
error - error condition - to notify of errors accompanied by a recovery mechanism (hence reported as a log and not as an uncaught exception)
Note: critical, alert, and emergency are not exposed, as they do not seem to serve a use case in the context of JS applications; such errors should be exposed as typical exceptions
Output message formatting
log doesn't force any specific argument handling. Still, it is recommended to assume a printf-like message format, as all currently available writers are set up to support it. Placeholder support reflects the one implemented in Node.js' format util.
Excerpt from Node.js documentation:
The first argument is a string containing zero or more placeholder tokens. Each placeholder token is replaced with the converted value from the corresponding argument. Supported placeholders are:
%s - String.
%d - Number (integer or floating point value).
%i - Integer.
%f - Floating point value.
%j - JSON. Replaced with the string '[Circular]' if the argument contains circular references.
%o - Object. A string representation of an object with generic JavaScript object formatting. Similar to util.inspect() with options { showHidden: true, depth: 4, showProxy: true }. This will show the full object including non-enumerable symbols and properties.
%O - Object. A string representation of an object with generic JavaScript object formatting. Similar to util.inspect() without options. This will show the full object not including non-enumerable symbols and properties.
%% - single percent sign ('%'). This does not consume an argument.
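For example, combining a few of these placeholders with the log package above (a minimal sketch; the message contents are illustrative):

log("user %s retried %d times", "alice", 3); // -> user alice retried 3 times
log.error("unexpected payload: %j", { id: 42 }); // object serialized as JSON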
Note to log writer configuration developers: for cross-env compatibility it is advised to base the implementation on sprintf-kit
Enabling log writing
log on its own doesn't write anything to the console or by any other means (it just emits events to be consumed by preloaded log writers).
To have logs written, the pre-chosen log writer needs to be initialized in the main (starting) module of a process, as in the sketch after the list below.
List of available log writers
log-node - For typical Node.js processes
log-aws-lambda - For AWS lambda environment
Note: if some writer is missing, propose a PR
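For a typical Node.js process this boils down to one call in the entry module (a minimal sketch; assumes log-node has been installed with npm i log-node):

// main.js - initialize the writer once, before any logging happens
require("log-node")();

const log = require("log").get("my-app"); // "my-app" is an arbitrary namespace
log.notice("service starting");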
Logs Visibility
Default visibility depends on the environment (see the chosen log writer for more information), and in most cases is set up through the following environment variables:
LOG_LEVEL
(defaults to notice) Lowest log level from which (upwards) all logs will be exposed.
LOG_DEBUG
Optional list of namespaces to expose at levels below the LOG_LEVEL threshold.
The list is comma-separated, e.g. foo,-foo:bar (expose all of foo but not foo:bar).
It follows the convention configured within debug. To ease eventual migration from debug, the configuration falls back to the DEBUG env var if LOG_DEBUG is not present.
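For example, to expose the foo namespaces (except foo:bar) while keeping everything else at error level (a hypothetical invocation; app.js and the namespace names are placeholders):

$ LOG_LEVEL=error LOG_DEBUG=foo,-foo:bar node app.js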
Timestamps logging
Writers are recommended to expose timestamps alongside each log when the following env var is set:
LOG_TIME
rel (default) - Logs time elapsed since logger initialization
abs - Logs absolute time in ISO 8601 format
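For example (assuming a writer that honors this variable):

$ LOG_TIME=abs node app.js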
Tests
$ npm test
Related
I am using QFileSystemWatcher to monitor log file changes.
For creating and updating the log file I am using the Boost library.
When I log a few messages in one method, the file-changed signal is emitted only once (for the last message), even though I can see that the file is updated every time a log message is added.
So, the code for QFileSystemWatcher is:
std::string fn = "app.log";
logging::init_log(fn);
QFileSystemWatcher* watcher = new QFileSystemWatcher();
auto success = QObject::connect(watcher, SIGNAL(fileChanged(QString)), this, SLOT(handleFileChanged(QString)));
Q_ASSERT(success);
watcher->addPath(QString::fromStdString(fn));
Adding log messages:
void a(){
/* some code */
logging::write_log("test error", logging::kError);
logging::write_log("test info", logging::kInfo);
}
QFileSystemWatcher emits the signal only once, for the Info-level message.
In the file manager, however, I can see the file being updated after each call (test error, test info).
In the log file initialization I use:
sink->locked_backend()->auto_flush(true);
so the file updates immediately.
How can I fix this? Or maybe there is another approach to handling log file updates so the messages can be shown in a GUI?
Similar filesystem event notifications are usually collapsed into one, unless they are consumed by a reader. For example, if the writer writes 10 bytes to a file, the thread that monitors that file for writes will typically see one event instead of 10. This is explicitly outlined in inotify description notes on Linux, which is likely used internally by QFileSystemWatcher.
This should not matter for any correct implementation of a filesystem monitoring software. The notification only allows the monitor to notice that some event happened (e.g. a write occurred), and it is up to the software to discover further details about the event (e.g. the amount of data that was written, and writing position).
If you aim to just display the written logs, you should be able to just read the file contents from the current reading position to the end of the file. That read operation may return one log record or more. It can return an incomplete log record, too, if the C++ standard library is implemented in a certain way (e.g. when auto_flush is disabled, and the stream buffer fills the internal buffer with part of the log record content before issuing write). The monitoring software should parse the read content to separate log records and detect incomplete log records (e.g. split data by newline characters).
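To illustrate, here is a sketch of that read-to-end approach in Node.js (chosen since the rest of this thread is Node-oriented; the same logic, remembering the last offset, reading to EOF on each notification, and keeping the trailing partial record, maps directly onto a QFileSystemWatcher slot):

const fs = require("fs");

let offset = 0;   // position up to which the file has been consumed
let partial = ""; // possibly incomplete trailing record

fs.watch("app.log", () => {
  const size = fs.statSync("app.log").size;
  if (size <= offset) return; // nothing new (or the file was truncated/rotated)
  const stream = fs.createReadStream("app.log", { start: offset, end: size - 1 });
  stream.on("data", (chunk) => { partial += chunk.toString(); });
  stream.on("end", () => {
    offset = size;
    const records = partial.split("\n");
    partial = records.pop(); // keep the incomplete last record for the next read
    records.forEach((r) => console.log("new log record:", r));
  });
});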
I am developing an express project which will have multiple modules/services in it. The folder structure looks mostly like this:
-- app.js
-- payment_service
   -- routes.js
   -- index.js
   -- models
      -- model_1.js
      -- model_2.js
The APIs in index.js are the only exposed APIs; they work as a gateway for all requests coming to this module/service.
Most of the services can throw operational errors under many circumstances, so manual intervention may be needed to fix things. So I need to:
Log errors properly, with enough context that a person or script can take the necessary action.
Figure out the reason for the failure.
There will be dedicated teams owning each service, so I should be able to differentiate between error logs for each service so that they can be aggregated and forwarded to the concerned person.
I decided to go with the ELK stack so that I can generate reports by script.
The main problem that I am facing is that I can't maintain correlation between logs. For example: if a request comes in and travels through five functions, and each function logs something, then I can't relate those logs.
One way is to create a child logger for each request and pass it to all the functions, but passing a logger instance to every function seems like extra overhead.
Another option is to use something like verror and do the logging only at the entry point of the service/module so that the whole context can be contained in the log. This approach looks OK for logging errors; however, it can't help with info and debug logs, which help me a lot during the development and testing phases.
For the sake of differentiating between error logs, I am going to create:
A dedicated logger for each service with log level error.
An application-wide generic logger for info and debug purposes.
Is this the correct approach?
What would be the best way to achieve all the requirements in the simplest way?
I'd recommend you use a logger and you don't need anything too complex. For example:
npm install 12factor-log
Then create a file in your root folder near app.js (or in a /lib folder, which is where I'd place libraries):
logger.js
const Log = require('12factor-log');
module.exports = (params) => {
return new Log(params);
}
Then in your modules, import your logger and pass in the module name when you instantiate it so you can track where statements come from...
model_1.js
var log = require('./logger')({name: 'model_1'});
// ...
log.info("Something happened here");
// ...
try {
  // ...
} catch (error) {
  const message = `Error doing x, y, z with value ${val}`;
  log.error(message);
  throw new Error(message);
}
Then handle errors gracefully at your controller -> view layer for a user-friendly experience.
Your logs would print something like this:
{"ts":"2018-04-27T16:37:24.914Z","msg":"Something happened here","name":"model_1","type":"info","level":3,"hostname":"localhost","pid":18}
As far as correlation of logs: as you can see in the output above, it includes the hostname of the machine it's running on, as well as the name of the module and the severity level. You can import this JSON into Logstash and load it into Elasticsearch, and it will store the JSON for easy search and indexing.
See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html
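For the correlation problem specifically, a common pattern is a child logger per request that carries a request id. Here is a minimal sketch using Bunyan (log.child() is Bunyan's documented API; the requestId field name and the route are illustrative):

const express = require("express");
const bunyan = require("bunyan");
const { randomUUID } = require("crypto");

const app = express();
const log = bunyan.createLogger({ name: "payment_service" });

// Attach a child logger with a unique id to every incoming request
app.use((req, res, next) => {
  req.log = log.child({ requestId: randomUUID() });
  next();
});

// Every function that receives req logs with the same requestId, so all
// entries for one request can be related later in Elasticsearch
app.get("/pay", (req, res) => {
  req.log.info("processing payment");
  res.end("ok");
});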
Logging is complex and many people have worked on it; I would suggest not building it yourself.
So, not following my own advice, I created my own logging package:
https://www.npmjs.com/package/woveon-logger
npm install woveon-logger
This prints file and line numbers of errors and messages, has logging levels and aspect-oriented logging, and can dump a stack trace in one call. It even has color coding options. If you get stuck and need some feature in logging, let me know.
let log1 = new Logger('log1', {level: 'info', debug: true, showname: true});
let log2 = new Logger('log2', {level: 'verbose', debug: true, showname: true});
...
log1.info('Here is a log message, that is on line 23.');
log1.verbose('Does not show');
log2.verbose('Shows because log2 is verbose logging');
log2.setAspect('IO', true);
log2.aspect('IO', 'Showing aspect IO logging, for logs for IO related operations');
[2018-06-10T10:43:20.692Z] [INFO--] [log1 ] [path/to/myfile:23] Here is a log message, that is on line 23.
[2018-06-10T10:43:20.792Z] [VERBOS] [log2 ] [path/to/myfile:25] Shows because log2 is verbose logging
[2018-06-10T10:43:20.892Z] [IO----] [log2 ] [path/to/myfile:27] Showing aspect IO logging, for logs for IO related operations
Also, some other features like:
log1.throwError('Logs this as both a line of logging, and throws the error with the same message');
log1.printStack('Prints this label next to the stack trace.');
Hope it helps!
You can use the grackle_tracking library: https://www.getgrackle.com/analytics_and_tracking
It logs errors & traffic to your db.
I am looking for a logging solution for my node.js app that would allow me to set the logging level via file/folder selectors.
For example I would like to be able to set the logging for all files in /app/schema to 'info'. And all the rest to 'error'.
Exemplary configuration:
{
"*":"error",
"/app/schema":"info" //<--Regex expression would be great too.
}
I constantly comment, uncomment, or remove logging statements when I need to examine something. I would rather do that via a configuration change and leave the source files intact. A global debugging level just creates way too much noise and volume (which matters when storing logs).
Is there something like this? Apache log4j is similar: you can set the logging level at the package level.
If I get your question right, I'd suggest Bunyan; it lets you configure as many log streams and log levels as you like. For example, from the docs:
var log = bunyan.createLogger({
name: 'myapp',
streams: [
{
level: 'info',
stream: process.stdout // log INFO and above to stdout
},
{
level: 'error',
path: '/var/tmp/myapp-error.log' // log ERROR and above to a file
}
]
});
You'll have to set up the logic for each path with each log-level.
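There is no built-in per-path selector, but the configuration format from the question can be emulated with a small helper that resolves a level by the longest matching path prefix (a hypothetical sketch; levelFor and the config map are my own, not part of Bunyan):

const levels = { "*": "error", "/app/schema": "info" };

function levelFor(modulePath) {
  const match = Object.keys(levels)
    .filter((p) => p !== "*" && modulePath.startsWith(p))
    .sort((a, b) => b.length - a.length)[0]; // longest prefix wins
  return match ? levels[match] : levels["*"];
}

// Each module creates its logger with the level resolved from its own path:
const log = bunyan.createLogger({ name: "schema", level: levelFor("/app/schema") });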
Is there a way by which I can turn certain logs on/off, based on their type/level?
For eg:
I have defined 3 levels: ALL, WARNING, CRITICAL
And I have my Log class, where I will set this. Say I set Level: 'ALL'
So this will log everything, wherever I have logged messages.
Now, when I set Level: 'WARNING'
This will only log messages that are of warning type.
Can I do this with Bunyan ?
Or any other module?
Please help!
One workaround would be to use Bunyan's DTrace facilities. Keep the log level higher, and if you need to inspect lower-level logs (like debug) you can run a DTrace command.
Examples
Trace all log messages coming from any Bunyan module on the system:
dtrace -x strsize=4k -qn 'bunyan*:::log-*{printf("%d: %s: %s", pid, probefunc, copyinstr(arg0))}'
Trace all log messages coming from the "wuzzle" component:
dtrace -x strsize=4k -qn 'bunyan*:::log-*/strstr(this->str = copyinstr(arg0), "\"component\":\"wuzzle\"") != NULL/{printf("%s", this->str)}'
Note that you need to install the "dtrace-provider" lib separately, via npm install dtrace-provider.
Check out the documentation here
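If DTrace is not available, Bunyan's documented level() setter also covers the simpler case of switching levels at runtime (a minimal sketch):

const bunyan = require("bunyan");
const log = bunyan.createLogger({ name: "myapp", level: "warn" });

log.debug("hidden");   // below the current 'warn' threshold
log.level("debug");    // lower the threshold at runtime
log.debug("now visible");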
Try using the winston module for logging. It is good for logging and has log rotation and other features.
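A minimal winston setup for this kind of level-based filtering might look as follows (a sketch against the winston 3 API; the level and transport choices are illustrative):

const winston = require("winston");

const logger = winston.createLogger({
  level: "warn", // everything less severe than 'warn' is suppressed
  format: winston.format.json(),
  transports: [new winston.transports.Console()],
});

logger.info("hidden");  // filtered out by the 'warn' threshold
logger.warn("shown");
logger.error("shown as well");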
I am using Groovy and Log4J.
I am not a Log4J expert, but after searching many sites for answers I thought I had a configuration that should work in the “Config.groovy” file.
Here’s the result:
I get console logging.
However, the log files named “project.log” and “StackTrace.log” are empty.
I also get another file created, named “StackTrace.log.1” (2KB), that contains an exception message (a non-critical error) posted after I run the application.
Questions:
Why am I not getting logging messages in the “project.log” and “StackTrace.log” files?
Why is a file named “StackTrace.log.1” getting created and written to instead of the stack trace messages getting logged to the “StackTrace.log” file?
Any help or clues as to what I'm doing wrong will be greatly appreciated.
Here is my “Config.groovy” file (log4j portion):
// log4j configuration
log4j = {
// Set default level for all, unless overridden below.
root { debug 'stdout', 'file' }
// Set level for all application artifacts
info "grails.app"
error "org.hibernate.SQL", "org.hibernate.type"
error 'org.codehaus.groovy.grails.web.servlet', // controllers
'org.codehaus.groovy.grails.web.pages', // GSP
'org.codehaus.groovy.grails.web.sitemesh', // layouts
'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
'org.codehaus.groovy.grails.web.mapping', // URL mapping
'org.codehaus.groovy.grails.commons', // core / classloading
'org.codehaus.groovy.grails.plugins', // plugins
'org.codehaus.groovy.grails.orm.hibernate', // hibernate integration
'org.springframework',
'org.hibernate',
'net.sf.ehcache.hibernate'
warn 'org.mortbay.log'
appenders {
rollingFile name: 'file', file:'project.log', maxFileSize:1024, append: true
rollingFile name: 'stacktrace', file: "StackTrace.log", maxFileSize: 1024, append: true
}
}
Is it possible that StackTrace.log.1 has been created because the maxFileSize of 1024 bytes has been reached, triggering the rolling file behavior?
I would also begin by removing all the class names listed there, so that the debug level defined in the root closure is applied to all loggers, and work from there.