Pino manually set level when logging - node.js

How can I manually set the level of a log when using Pino?
Here's some sample code:
const baseLogger = pino(loggerOptions);
const activityLogger = baseLogger.child({ name: "activity" });
const workerLogger = baseLogger.child({ name: "worker" });
Runtime.install({
  logger: new DefaultLogger("INFO", (entry) => {
    workerLogger.error({
      level: entry.level.toLowerCase(),
      message: entry.message,
      timestamp: Number(entry.timestampNanos / BigInt(1000000)),
      ...entry.meta,
    });
  }),
});
which produces logs like the following:
{"level":"error","time":1674573001943,"pid":95258,"name":"worker","level":"info","message":"Workflow bundle created","timestamp":1674573001943,"size":"0.70MB"}
Note that level appears twice. Ideally I'd like to invoke workerLogger.log and manually pass a level field but it seems pino does not make this easy. Is there a way to log with pino but not use one of the default level functions like .info, .debug, etc...?
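One possible workaround (a sketch, not from the question): a Pino logger exposes one method per level (trace, debug, info, warn, error, fatal), so the level method can be selected by name at runtime instead of hard-coding .error. The forward helper and the stub logger below are assumptions for illustration; with a real Pino child logger the same workerLogger[lvl](...) indexing applies.

```javascript
// Sketch: dispatch to a Pino-style level method chosen at runtime.
// `logger` is a minimal stand-in with the same method names Pino exposes;
// with a real Pino logger, logger[entry.level.toLowerCase()](...) works the same way.
const records = [];
const logger = {};
for (const lvl of ["trace", "debug", "info", "warn", "error", "fatal"]) {
  logger[lvl] = (obj, msg) => records.push({ level: lvl, msg, ...obj });
}

function forward(entry) {
  const lvl = entry.level.toLowerCase();
  // Fall back to info if the runtime hands us an unknown level name.
  const log = typeof logger[lvl] === "function" ? logger[lvl] : logger.info;
  log({ timestamp: entry.timestamp, ...entry.meta }, entry.message);
}

forward({
  level: "INFO",
  message: "Workflow bundle created",
  timestamp: 1674573001943,
  meta: { size: "0.70MB" },
});
```

This avoids both the duplicate level key and the need for a generic .log method.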

Related

Datadog APM Resource column is not giving correct values

I am running into an issue where the Datadog RESOURCE column is not showing the correct value, as shown in the image. I could really use some help here.
My assumption is that this happens because the http tags are not set correctly. I believe Datadog itself adds the http tags and their values.
The http.path_group and http.route tags should have the value "/api-pim2/v1/attribute/search", but for some reason they don't.
I am using the dd-trace library on the backend. The tracer options I provided are:
{"logInjection":true,"logLevel":"debug","runtimeMetrics":true,"analytics":true,"debug":true,"startupLogs":true,"tags":{"env":"dev02","region":"us-east-1","service":"fabric-gateway-pim-ecs"}}
The initialising code, which runs at the start of my app, looks like this:
app/lib/tracer.js:
const config = require('config')
const tracerOptions = config.get('dataDog.tracer.options')
const logger = require('app/lib/logger')

const tracer = require('dd-trace').init({
  ...tracerOptions,
  enabled: true,
  logger
})

module.exports = tracer
I also tried setting the http.path_group and http.route tags manually, but it still doesn't update the values. However, I can add new tags such as http.test, which gets the same value I was trying to write into http.path_group and http.route:
const addTagsToRootSpan = tags => {
  const span = tracer.scope().active()
  if (span) {
    const root = span.context()._trace.started[0]
    for (const tag of tags) {
      root.setTag(tag.key, tag.value)
    }
    log.debug('Tags added')
  } else {
    log.debug('Trace span could not be found')
  }
}
...
const tags = [
  { key: 'http.path_group', value: request.originalUrl },
  { key: 'http.route', value: request.originalUrl },
  { key: 'http.test', value: request.originalUrl }
]
addTagsToRootSpan(tags)
...
I was requiring the tracer.js file at the start of my app, where the server starts listening:
require('app/lib/tracer')

app.listen(port, err => {
  if (err) {
    log.error(err)
    return err
  }
  log.info(`Your server is ready for ${config.get('stage')} on PORT ${port}!`)
})
By enabling the debug option in the Datadog tracer's init function, I can see the tracer logs and the values the library passes for http.route and resource.
I was confused by this line: according to the Datadog tracer docs, you should call init before importing any instrumented module.
// This line must come before importing any instrumented module.
const tracer = require('dd-trace').init();
But for me, the http.route and resource values become correct if I initialise the tracer in my routing file. They start giving the complete route "/api-pim2/v1/attribute/search" instead of only "/api-pim2":
routes/index.js:
const router = require('express').Router()
require('app/lib/tracer')
const attributeRouter = require('app/routes/v1/attribute')
router.use('/v1/attribute', attributeRouter)
module.exports = router
I am not accepting this answer yet because I am still confused about where to initialise the tracer; maybe someone can explain it better. I am writing this answer so that someone else facing the issue can try this approach, as it might resolve their problem.
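For reference, the ordering the docs describe would look like this at the entry point (a sketch; the file names, route paths, and port are assumptions). dd-trace patches modules when they are first required, so init() must run before express, or any other instrumented module, is loaded anywhere in the process:

```javascript
// server.js (hypothetical entry point)
// dd-trace instruments modules by patching them at require time,
// so the tracer must be initialised before express is required anywhere.
require('app/lib/tracer')            // internally calls require('dd-trace').init(...)

const express = require('express')   // only required after the tracer is up
const routes = require('app/routes') // same for anything that pulls in express

const app = express()
app.use('/api-pim2', routes)
app.listen(3000)
```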

Replace datasource in test environment for Loopback

The default acceptance test for PingController created on project startup fails because my application has a PostgreSQL datasource that is not reachable in the test environment. I tried to replace this datasource with an in-memory one, but it doesn't work; the application still uses the "real" one.
I changed the setupApplication method this way:
export async function setupApplication(): Promise<AppWithClient> {
  const restConfig = givenHttpServerConfig({});

  const app = new MyApplication({
    rest: restConfig,
  });

  const datasource = new juggler.DataSource({
    name: 'myds',
    connector: 'memory',
  });
  app.bind('datasources.myds').to(datasource);

  await app.boot();
  await app.start();

  const client = createRestAppClient(app);
  return {app, client};
}
What am I doing wrong?
Thanks for your help.
app.boot() scans the project root for artifacts and will override the bindings.
For unit tests or tests that are limited to only a few components, it's preferred to remove app.boot() and then explicitly bind each artifact that's required for that test. This will make it easier to detect unexpected artifact dependencies.
Otherwise, ensure that app.boot() is called before any manual bindings:
export async function setupApplication(): Promise<AppWithClient> {
  const restConfig = givenHttpServerConfig({});

  const app = new MyApplication({
    rest: restConfig,
  });

  const datasource = new juggler.DataSource({
    name: 'myds',
    connector: 'memory',
  });

  await app.boot();
  // Move manual bindings after `app.boot()`
  app.bind('datasources.myds').to(datasource);

  await app.start();

  const client = createRestAppClient(app);
  return {app, client};
}

Using Pino as a logger for Sequelize

I am trying to use Pino with Sequelize's options.logging:
A function that gets executed every time Sequelize would log something. Function may receive multiple parameters but only first one is printed by console.log. To print all values use (...msg) => console.log(msg)
Here's what I've tried:
const pino = require('pino')
const logger = pino({ level: 'debug', prettyPrint: true })
const Sequelize = require('sequelize')
const sequelize = new Sequelize({
  dialect: 'sqlite',
  storage: '../db.sqlite3',
  logging: logger.debug()
})
But nothing is printed to the console. I know logging is working, as logger.debug('test') works when called elsewhere in the code.
I found this library (from this issue) but I am not really sure how to use it with Sequelize.
You do not need to call your function, you just need to pass it to Sequelize.
So basically you should write logging: msg => logger.info(msg), for example. Don't worry about losing the other parameters; console.log only uses the first one (as described in the documentation).
Simple working example:
{
  // ...
  logging: sql => logger.info(sql),
  // ...
}
Full (or almost full) clone of console.log behavior:
{
  // ...
  logging: (sql, timing) => logger.info(sql, typeof timing === 'number' ? `Elapsed time: ${timing}ms` : ''),
  // ...
}
Tip: You can use the logging option for each of your queries and they will obviously work the same way.
Tip #2: You can also use logging: logger.info.bind(logger). But you will probably search for another workaround if you choose this one :)
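The difference between calling and passing can be shown without a database at all. callLogging below is a stand-in for how Sequelize invokes options.logging, and the one-method logger is a minimal stub; both are assumptions for illustration, not Sequelize or Pino API:

```javascript
// Stand-in for how Sequelize invokes options.logging: it calls the
// function you passed with the SQL string (and sometimes a timing value).
function callLogging(options, sql, timing) {
  if (typeof options.logging === 'function') options.logging(sql, timing);
}

const lines = [];
const logger = { debug: (msg) => { lines.push(msg); } }; // minimal Pino-like stub

// Broken: logger.debug() is *called* here, once, at setup time, and its
// return value (undefined) is what ends up in options.logging.
const broken = { logging: logger.debug() };
callLogging(broken, 'SELECT 1'); // never logs the SQL

// Correct: pass a function; Sequelize calls it for every statement.
const fixed = { logging: sql => logger.debug(sql) };
callLogging(fixed, 'SELECT 1'); // logs 'SELECT 1'
```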

Can I add a second function export to module.exports without changing the way it's called?

I have a logging module that I use in many of my projects, which generally exports a single Winston logger, so all I did was define a logger and its transports, then export it:
module.exports = logger;
When importing with const logger = require('mylogger.js'), I then use the various built-in levels (logger.info, logger.debug, etc.).
I've now decided that I want to create a second logging function that will write logs to a different file, so I need to create and export a new transport. Thing is, if I switch to module.exports = {logger, mynewlogger}, that will change the way I import and call the functions, and I have that in many places.
Besides creating second file and importing both, is there any other way to add a second export without having to change my code everywhere else?
It's either new module that re-exports both:
logger-and-mynewlogger.js
module.exports = {logger, mynewlogger}
Or a separate module:
mynewlogger.js
module.exports = mynewlogger
Or using existing function as module object:
logger.mynewlogger = ...
module.exports = logger;
The first two options are preferable because they result in reasonably designed modules, while the last one is a quick and dirty fix.
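A minimal sketch of the third option (the logger here is a stub object rather than a real Winston logger; the point is only that existing require sites keep working unchanged):

```javascript
// mylogger.js (sketch): the original export, an object with level methods.
const logger = {
  info:  msg => `[info] ${msg}`,
  debug: msg => `[debug] ${msg}`,
};

// Quick-and-dirty second logger attached as a property of the first, so
// existing `const logger = require('mylogger.js')` call sites and
// `logger.info(...)` calls are untouched.
logger.mynewlogger = {
  info: msg => `[new:info] ${msg}`,
};

module.exports = logger;
```

Callers that want the new behaviour use logger.mynewlogger.info(...); everyone else is unaffected.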
Yes, you can define multiple transports for a single exported logger. When creating your Winston log, the 'transports' property is an array which allows you to define multiple outputs.
Here's an example of one I have with two transports: the first to the console, and the second to a daily rotating log file.
const winston = require('winston');
const Rotate = require('winston-daily-rotate-file');

// Assumed to be defined elsewhere in the original module; example values:
const logDir = 'logs';
const logName = 'myapp';
const env = process.env.NODE_ENV;

const tsFormat = () => (new Date()).toLocaleTimeString();

const logger = new (winston.Logger)({
  transports: [
    // colorize the output to the console
    new (winston.transports.Console)({
      timestamp: tsFormat,
      colorize: true,
      level: 'info',
    }),
    // daily rotating log file
    new (Rotate)({
      filename: `${logDir}/${logName}-app.log`,
      timestamp: tsFormat,
      datePattern: 'YYYY-MM-DD',
      prepend: true,
      level: env === 'development' ? 'verbose' : 'info',
    }),
  ],
});

module.exports = logger;

How to set log level in Winston/Node.js

I am using Winston logging with my Node.js app and have defined a file transport. Throughout my code, I log using either logger.error, logger.warn, or logger.info.
My question is, how do I specify the log level? Is there a config file and value that I can set so that only the appropriate log messages are logged? For example, I'd like the log level to be "info" in my development environment but "error" in production.
If you are using the default logger, you can adjust the log levels like this:
const winston = require('winston');
// ...
winston.level = 'debug';
will set the log level to 'debug'. (Tested with winston 0.7.3, default logger is still around in 3.2.1).
However, the documentation recommends creating a new logger with the appropriate log levels and then using that logger:
const myLogger = winston.createLogger({
  level: 'debug'
});
myLogger.debug('hello world');
If you are already using the default logger in your code base this may require you to replace all usages with this new logger that you are using:
const winston = require('winston');
// default logger
winston.log('debug', 'default logger being used');
// custom logger
myLogger.log('debug', 'custom logger being used');
It looks like there is a level option in the options object passed to each transport, covered here.
From that doc:
var logger = new (winston.Logger)({
  transports: [
    new (winston.transports.Console)({ level: 'error' }),
    new (winston.transports.File)({ filename: 'somefile.log' })
  ]
});
Now, those examples show passing level in the option object to the console transport. When you use a file transport, I believe you would pass an options object that not only contains the filepath but also the level.
That should lead to something like:
var logger = new (winston.Logger)({
  transports: [
    new (winston.transports.File)({ filename: 'somefile.log', level: 'error' })
  ]
});
Per that doc, note also that as of 2.0, it exposes a setLevel method to change at runtime. Look in the Using Log Levels section of that doc.
There are 6 default levels in winston: silly=0 (lowest), debug=1, verbose=2, info=3, warn=4, error=5 (highest). (This ordering is from older winston releases; in current winston the default npm levels are reversed, with error=0 as the most severe.)
While creating the logger transports, you can specify the log level like:
new (winston.transports.File)({ filename: 'somefile.log', level: 'warn' })
The above code sets the log level to warn, which means silly, debug, verbose and info will not be output to somefile.log, while warn and error will.
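The filtering rule itself is just a numeric comparison; here is a sketch outside winston using the ordering described above (shouldLog is an illustrative helper, not winston API):

```javascript
// Level ordering as described above (older winston style: higher = more severe).
const levels = { silly: 0, debug: 1, verbose: 2, info: 3, warn: 4, error: 5 };

// A transport configured with level 'warn' only emits messages whose
// level is at least as severe as 'warn'.
function shouldLog(transportLevel, messageLevel) {
  return levels[messageLevel] >= levels[transportLevel];
}
```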
You can also define your own levels:
var myCustomLevels = {
  levels: {
    foo: 0,
    bar: 1,
    baz: 2,
    foobar: 3
  }
};

var customLevelLogger = new (winston.Logger)({ levels: myCustomLevels.levels });
customLevelLogger.foobar('some foobar level-ed message');
Note that it's better to always include the 6 predefined levels in your own custom levels, in case the predefined levels are used somewhere.
You can change the logging level in runtime by modifying the level property of the appropriate transport:
var log = new (winston.Logger)({
  transports: [
    new (winston.transports.Console)({ level: 'silly' })
  ]
});
...
// Only messages with level 'info' or higher will be logged after this.
log.transports.Console.level = 'info';
I guess, it works similarly for file but I haven't tried that.
If you want to change the log level on the fly, for example when you need to trace a production issue for a short time and then revert to the error level, you can use a dynamic logger, provided you can expose a service on the web: https://github.com/yannvr/Winston-dynamic-loglevel
Apart from this, you can achieve it cleanly by implementing runtime-node-refresh; follow this link for more.
