What is the recommended way to implement logging levels for AWS Lambda functions in Node.js? I was going through many third-party libraries, e.g. Winston, winston-cloudwatch, and logplease, but it seems we can also achieve this using the native console, e.g.:
console.log(), console.error(), console.warn(), console.info()
Any recommendations?
The relevant code is here:
https://github.com/aws/aws-lambda-nodejs-runtime-interface-client/blob/a850fd5adad5f32251350ce23ca2c8934b2fa542/src/utils/LogPatch.ts#L69-L89
So, you can use 7 console methods to get 6 CloudWatch log levels:
FATAL: console.fatal()
ERROR: console.error()
WARN: console.warn()
INFO: console.info() or console.log()
DEBUG: console.debug()
TRACE: console.trace()
console.trace() doesn't produce the same stack trace it produces in plain Node 14
console.fatal() is missing in plain Node 14; it's added by the AWS Lambda runtime
This was tested with the Node 14.x runtime on Aug 25, 2022. YMMV.
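For illustration, here is a minimal handler sketch exercising those methods; the handler name is arbitrary, and the typeof guard reflects that console.fatal() only exists inside the Lambda runtime:

exports.handler = async (event) => {
  console.trace('entering handler');    // TRACE
  console.debug('debugging details');   // DEBUG
  console.info('normal operation');     // INFO (console.log maps here too)
  console.warn('something looks off');  // WARN
  console.error('something failed');    // ERROR
  // console.fatal() is only patched in by the Lambda runtime, not plain Node:
  if (typeof console.fatal === 'function') {
    console.fatal('unrecoverable state'); // FATAL
  }
  return { statusCode: 200 };
};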
Since Lambda console output goes directly into CloudWatch Logs, you really don't need something like winston-cloudwatch if that is your preferred log destination. If you want to send the logs somewhere else, like Loggly, then you might want something like winston-loggly.
However, even if you just want to send all console output to CloudWatch Logs, I would still recommend a basic Winston configuration, so that you can quickly and easily enable debug logging, for example through an environment variable, and then turn debug logging off once the Lambda function is ready for production.
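As a rough sketch of that idea (LOG_LEVEL is an assumed variable name, not a Lambda convention):

const winston = require('winston');

// The level comes from the environment, so debug logging can be toggled
// per stage without a code change.
const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info', // set LOG_LEVEL=debug to see debug logs
  format: winston.format.json(),
  transports: [new winston.transports.Console()], // stdout lands in CloudWatch Logs
});

logger.debug('only emitted when LOG_LEVEL=debug');
logger.info('emitted at the default level');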
I'm having trouble getting Pino logs to show up in Datadog APM traces, even though it appears that the log injection is working fine.
I have dd-trace running fine, with traces and spans appearing perfectly in APM. I then hooked up Pino with all env vars set correctly, and when my Pino log outputs I can see the trace_id and span_id in the log... but under Logs in APM I see nothing.
My Pino log looks like this:
{
  "level": 30,
  "time": 1658480164226,
  "pid": 20400,
  "hostname": "local",
  "dd": {
    "trace_id": "1314152611599688171",
    "span_id": "6560268894829180062",
    "service": "datadog-sandbox",
    "version": "development",
    "env": "development"
  },
  "foo": "bar",
  "msg": "How am I doing?"
}
As you can see, the trace_id and span_id have been injected into the log. But when I look at this trace and span in APM, I see no logs connected at all.
Am I missing some configuration here? I'm happy to supply any other code if that helps.
Thanks
dd-trace just injects the tracing context into the logs in order to correlate traces with logs. It does not send the logs anywhere on its own.
Sending the logs can be done by the Datadog Agent or directly via HTTP. I was able to make it work with agentless logging.
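For reference, a minimal sketch of the injection side; this only gets the trace IDs into each log line, and the lines still have to be shipped to Datadog by the Agent or an HTTP intake:

// dd-trace must be initialized before the logger is required so it can
// patch Pino; logInjection adds trace_id/span_id to every log record.
const tracer = require('dd-trace').init({
  logInjection: true,
});
const pino = require('pino');

const logger = pino();
logger.info({ foo: 'bar' }, 'How am I doing?');
// The JSON written to stdout is not sent to Datadog by dd-trace itself.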
I am using the Serverless Framework to create Lambda functions on AWS. I would like any unhandled errors to go to CloudWatch.
How can I set this up? I have tried using Winston, but I cannot use a process.on handler for unhandled errors (Serverless seems to override my handling and exit the code).
Just a simple console.log() should have your logs showing up in CloudWatch. You can also click on the Monitoring tab in the AWS console to access CloudWatch.
This should help:
https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html
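A minimal sketch of that; the try/catch is optional, since Lambda also logs uncaught errors on its own, but an explicit console.error gives you control over the format:

exports.handler = async (event) => {
  try {
    console.log('received event:', JSON.stringify(event));
    // ... business logic ...
    return { statusCode: 200 };
  } catch (err) {
    console.error('unhandled error:', err); // shows up in CloudWatch Logs
    throw err; // rethrow so the invocation is still recorded as a failure
  }
};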
I have a Node.js app running in Elastic Beanstalk, logging with console.log, console.error, etc., and I have CloudWatch Logs turned on. When I go to Insights and run a query, the output shows up, but somehow it is logged line by line instead of as the entire error.
I want the entire output of a single console.log call to go into a single log record, one to one, instead of being split by newlines. Is there a way to do this without removing all line breaks during console.log? Say, a configuration option or something?
The output of the application is sent to standard out (stdout) and standard error (stderr). The AWS Elastic Beanstalk environment leverages Linux rsyslog to capture stdout and stderr to write the information into log files.
This is done through standard rsyslog configuration found here: /etc/rsyslog.d/web.conf
if $programname == 'web' then {
*.=warning;*.=err;*.=crit;*.=alert;*.=emerg /var/log/web.stderr.log
*.=info;*.=notice /var/log/web.stdout.log
}
It is rsyslog that interprets the stack trace from stdout as multiple entries and writes multiple lines to AWS CloudWatch Logs.
I wrote a small article on GitHub that describes the solution for a Java environment, but you can do something similar for Node.js.
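One Node.js workaround, sketched below under the assumption that escaped newlines are acceptable, is to collapse each record to a single line before writing, so rsyslog has nothing to split (logOneLine is a hypothetical helper, not a library function):

// Flatten every argument to one line before it reaches stdout.
function logOneLine(...args) {
  const line = args
    .map((a) => (a instanceof Error ? a.stack : typeof a === 'string' ? a : JSON.stringify(a)))
    .join(' ')
    .replace(/\n/g, '\\n'); // escape line breaks so rsyslog sees one record
  console.log(line);
}

logOneLine('request failed', new Error('boom')); // one CloudWatch record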
I've been working on a set-up script for a database and this evening I began getting some MaxListenersExceededWarning warnings in my console.
I debugged the warnings to be coming from specific request-promise-native calls. I initially thought it might have something to do with payload size; however, this warning is not happening on the largest request. I'm really lost on how to debug this further and get to the bottom of these warnings.
I figured out what I was doing wrong:
I am using a logger called Pino in my application. Many of my setup methods had an optional logger parameter that defaults to a stdout instance of Pino when undefined. This default behavior was creating multiple write streams, hence the MaxListenersExceededWarning. I changed my code to pass the same stdout logger instance to all methods, and now there are no more warnings!
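Sketched roughly, the fix looks like this (setupUsers and setupOrders are illustrative names):

const pino = require('pino');

// One shared logger instance, and therefore one stdout write stream.
const logger = pino();

async function setupUsers(db, log = logger) {
  log.info('setting up users');
}

async function setupOrders(db, log = logger) {
  log.info('setting up orders');
}
// Every method defaults to the same instance, so only one stream listener
// is registered and the MaxListenersExceededWarning goes away.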
I can't find much on logging on Google App Engine using the Flexible environment and Node.js.
As the docs say, you can write your own log messages using the standard stdout and stderr. But this is simple logging, and I would like to have something a little more refined.
In particular, the Log Views in Google Cloud Platform Console allows the user to filter logs on their severity levels:
Critical
Error
Warning
Info
Debug
Any log level
I would like to find a way to use those levels in my applications, so that I can read logs much better.
By default, if I print to stdout using console.log() the logs will only appear if I filter by "Any log level", and I have no option to select a severity level.
I have tried to use winston-gae as described in the docs, but without any success. Maybe I have configured it wrong?
To be more clear, I would like to be able to do something like this in Python (source):
import logging
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        logging.debug('This is a debug message')
        logging.info('This is an info message')
        logging.warning('This is a warning message')
        logging.error('This is an error message')
        logging.critical('This is a critical message')
        self.response.out.write('Logging example.')

app = webapp2.WSGIApplication([
    ('/', MainPage)
], debug=True)
I would recommend looking at the Google Cloud Node.js client library, which can help you call the Stackdriver Logging API in order to write structured log entries: https://github.com/GoogleCloudPlatform/google-cloud-node#google-stackdriver-logging-beta
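A minimal sketch of writing an entry with an explicit severity via that library; 'my-app-log' is an arbitrary log name, and the client assumes default application credentials:

const { Logging } = require('@google-cloud/logging');

const logging = new Logging();
const log = logging.log('my-app-log');

// Severity is set per entry, so it shows up in the console's level filter.
async function logWithSeverity(severity, message) {
  const entry = log.entry({ severity, resource: { type: 'global' } }, message);
  await log.write(entry);
}

logWithSeverity('DEBUG', 'This is a debug message').catch(console.error);
logWithSeverity('ERROR', 'This is an error message').catch(console.error);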