Winston not logging at any log level, what could be wrong? - node.js

I have run into a weird bug and don't know how to proceed or debug it. I have an app written in Node.js that uses Winston for logging. Everything was working fine until I brought up a new production server yesterday and retired the old one.
My prod server has 4 Node.js processes running. On the new production server, Winston logs only the very first log message per .js file, period. It stops logging after that, and changing the log level doesn't help. My app has about 6 .js files, and if an error occurs in any of those files, the very first error message gets logged, but any subsequent errors, warnings, or info messages are not.
The funny thing is that Winston worked just fine on the old prod server, and the dev server still works fine.
I am on Winston 0.6.2 on both dev and prod. As far as I know, all the software packages are the same between dev and prod.
How can I debug this issue?

After some research, I came across this issue => https://github.com/flatiron/winston/issues/227
It looks like the new way of handling streams in the latest version of Node has broken the file transport in Winston. I am going back to Node v0.8.22 for the time being as a workaround.

What transports are you using for logging? Does the console transport work? Perhaps the new production server has a network issue that prevents it from logging to a remote service such as CouchDB or Loggly.
If you add a simple console.log('...') line next to your Winston log lines, do those get fired? This will confirm or deny that your Winston log lines are actually being called on the production server:
winston.info('winston log test')
console.log('console log test')

You can expose the logger instance and have a URL that triggers the required log level.
I had the same need, so I came up with a dynamic log level setter for Winston: https://github.com/yannvr/Winston-dynamic-loglevel
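As a rough sketch of the idea (this is not code from the linked repo; it assumes winston 3.x and an Express app, and the /log-level route is hypothetical):
const express = require('express');
const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  transports: [new winston.transports.Console()],
});

const app = express();

// Hypothetical endpoint: GET /log-level/debug switches the logger to debug.
app.get('/log-level/:level', (req, res) => {
  const { level } = req.params;
  if (!(level in winston.config.npm.levels)) {
    return res.status(400).send('Unknown level: ' + level);
  }
  logger.level = level;
  res.send('Log level set to ' + level);
});

app.listen(3000);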

Related

Logs with Pino not showing in Datadog APM trace logs

I'm having trouble getting Pino logs to show up in Datadog APM traces, even though it appears that the log injection is working fine.
So I have dd-trace running fine, and traces and spans appear perfectly in APM. I then hook up Pino; I have all the env vars set correctly, and when my Pino log outputs I can see the trace_id and span_id in the log... but under Logs in APM I see nothing.
My Pino log looks like this:
{
  "level": 30,
  "time": 1658480164226,
  "pid": 20400,
  "hostname": "local",
  "dd": {
    "trace_id": "1314152611599688171",
    "span_id": "6560268894829180062",
    "service": "datadog-sandbox",
    "version": "development",
    "env": "development"
  },
  "foo": "bar",
  "msg": "How am I doing?"
}
As you can see, the trace_id and span_id have been injected into the log. But when I look at this trace and span in APM, I see no logs connected at all.
Am I missing some configuration here? I'm happy to supply any other code if that helps.
Thanks
dd-trace just injects the tracing context into the logs in order to correlate traces with logs. It does not send the logs anywhere on its own.
Sending the logs can be done by the Datadog Agent or directly via HTTP. I was able to make it work with agentless logging.
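A minimal sketch of the agentless approach (the intake hostname, path, and header names are assumptions based on Datadog's Logs HTTP API; verify them for your Datadog site and region):
const https = require('https');

// Ship one Pino log record (already parsed as an object) to Datadog's
// HTTP logs intake. DD_API_KEY must hold a valid Datadog API key.
function shipToDatadog(logRecord) {
  const body = JSON.stringify([logRecord]);
  const req = https.request({
    hostname: 'http-intake.logs.datadoghq.com',
    path: '/api/v2/logs',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'DD-API-KEY': process.env.DD_API_KEY,
      'Content-Length': Buffer.byteLength(body),
    },
  });
  req.on('error', (err) => console.error('log shipping failed:', err));
  req.write(body);
  req.end();
}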

Stackdriver-trace on Google Cloud Run failing, while working fine on localhost

I have a node server running on Google Cloud Run. Now I want to enable Stackdriver tracing. When I run the service locally, I am able to get the traces in GCP. However, when I run the service on Google Cloud Run, I am getting an error:
"@google-cloud/trace-agent ERROR TraceWriter#publish: Received error with status code 403 while publishing traces to cloudtrace.googleapis.com: Error: The request is missing a valid API key."
I made sure that the service account has the tracing agent role.
The first line in my app.js is:
require('@google-cloud/trace-agent').start();
Running locally, I am using a .env file containing:
GOOGLE_APPLICATION_CREDENTIALS=<path to credentials.json>
According to https://github.com/googleapis/cloud-trace-nodejs, these values are auto-detected if the application is running on Google Cloud Platform, so I don't have these credentials in the GCP image.
There are two challenges to using this library with Cloud Run:
Despite the note about auto-detection, Cloud Run is an exception: it is not yet auto-detected. This can be addressed for now with some explicit configuration.
Because Cloud Run services only have CPU resources while they are handling a request, queued-up trace data may not be sent before CPU is withdrawn. This can be addressed for now by configuring the trace agent to flush ASAP:
const tracer = require('@google-cloud/trace-agent').start({
  serviceContext: {
    service: process.env.K_SERVICE || "unknown-service",
    version: process.env.K_REVISION || "unknown-revision"
  },
  flushDelaySeconds: 1,
});
On a quick review I couldn't see how to trigger the trace flush, but the shorter timeout should help avoid some delays in seeing the trace data appear in Stackdriver.
EDIT: While nice in theory, in practice there's still significant race conditions with CPU withdrawal. Filed https://github.com/googleapis/cloud-trace-nodejs/issues/1161 to see if we can find a more consistent solution.

@google-cloud/logging-winston not logging from NodeJS after some time inside a GCE instance

Node: 10.16.0
@google-cloud/logging-winston: 2.0.0
Server: GCE VM Instance
I'm logging to Stackdriver from my node process running on a GCE instance. I'm adding the following object to my Winston transports:
new (require("@google-cloud/logging-winston").LoggingWinston)({
  projectId: "my-google-project-id"
})
After deploying to GCP and starting the node process, I'm getting the logs in the GCP Logs Viewer. So far so good. After a couple of hours (or in some cases minutes), I stop getting any logs in the Logs Viewer. When I check the node process on my VM instance, it is still running and writing logs to the console, but the google-cloud transport does not work at all. If I stop the node process and start a new one, I start getting logs in the Logs Viewer again, but again it stops logging after some time. I tried downgrading from @google-cloud/logging-winston 2.0.0 to 1.1.1, but it's still the same. Could it be that I'm hitting some quotas? Or could there be some uncaught error after which @google-cloud/logging-winston fails?
In some instances it's possible that some logs are skipped due to permission-related issues. Make sure that your service account has the appropriate permissions [1].
Here is our documentation on how to set up Winston, in case you haven't seen it yet [2].
[1] https://cloud.google.com/logging/docs/agent/troubleshooting#verify-creds
[2] https://cloud.google.com/logging/docs/setup/nodejs#using_winston
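If the transport is dying on an uncaught error, attaching an 'error' listener can at least make the failure visible on the console. A minimal sketch, assuming winston 3.x and @google-cloud/logging-winston (the question uses an older winston, so the exact API may differ):
const winston = require('winston');
const { LoggingWinston } = require('@google-cloud/logging-winston');

const stackdriver = new LoggingWinston({ projectId: 'my-google-project-id' });

const logger = winston.createLogger({
  transports: [new winston.transports.Console(), stackdriver],
});

// Surface transport failures locally instead of letting the
// Stackdriver transport fail silently.
logger.on('error', (err) => console.error('winston transport error:', err));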

How to debug a Cumulocity microservice written in Python that does not start up?

How do I enable logging for a microservice developed in Python?
I can run the hello-microservice example without a glitch. However, my own microservice does not seem to start up after I upload the zip file. I waited for hours and it's still the same. I have run the Docker image locally without issue.
A call to any REST end point of the microservice returns,
{"error":"Microservice/Bad gateway","message":"Microservice not available Connection refused : Connection refused: a2c-microservice-scope-xxx-prod.svc.cluster.local/80","info":"https://www.cumulocity.com/guides/reference-guide/#error_reporting"}
I assume that for some reason the microservice I uploaded didn't start up properly. How can I enable logging to find out what's going wrong?
You just need to log to stdout, and then you can access the logs through the user interface in Cumulocity IoT. They are available through the application UI in Administration.
Here is a Python example of how to configure logging:
import logging
import sys

LOG_FORMATTER = logging.Formatter("%(asctime)s [%(threadName)-12.12s] [%(levelname)-5.5s] %(message)s")
LOGGER = logging.getLogger()
LOGGER.setLevel(logging.DEBUG)
CONSOLE_HANDLER = logging.StreamHandler(sys.stdout)
CONSOLE_HANDLER.setFormatter(LOG_FORMATTER)
LOGGER.addHandler(CONSOLE_HANDLER)

Implement logging levels for AWS Lambda

What is the recommended way to implement logging levels for AWS Lambda functions in Node.js? I was going through many third-party libraries, e.g. winston, winston-cloudwatch, and logplease, but it seems we can also achieve this using the native console, e.g.
console.log(), console.error(), console.warn(), console.info()
Any recommendations?
The relevant code is here:
https://github.com/aws/aws-lambda-nodejs-runtime-interface-client/blob/a850fd5adad5f32251350ce23ca2c8934b2fa542/src/utils/LogPatch.ts#L69-L89
So, you can use 7 console methods to get 6 CloudWatch log levels:
FATAL: console.fatal()
ERROR: console.error()
WARN: console.warn()
INFO: console.info() or console.log()
DEBUG: console.debug()
TRACE: console.trace()
console.trace() doesn't produce the same stack trace it produces in plain Node 14.
console.fatal() is missing in plain Node 14; it's added by the AWS Lambda runtime.
This was tested with the Node 14.x runtime on Aug 25 2022. YMMV.
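As a quick way to verify the mapping above in your own function (a throwaway sketch; the handler name and messages are arbitrary):
exports.handler = async () => {
  console.trace('trace-level message');
  console.debug('debug-level message');
  console.info('info-level message');
  console.warn('warn-level message');
  console.error('error-level message');
  // console.fatal('fatal-level message'); // only exists inside the Lambda runtime, per the notes above
  return 'ok';
};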
Since the Lambda console output goes directly into CloudWatch Logs, you really don't need to use something like winston-cloudwatch if that is your preferred log destination. If you want to send the logs somewhere else, like Loggly, then you might want to use something like winston-loggly.
However, even if you just want to send all console output to CloudWatch Logs, I would still recommend a basic Winston configuration, so that you can quickly enable debug logging, for example through an environment variable, and then turn it off once you are ready to use the Lambda function in production.
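A minimal sketch of that idea, assuming winston 3.x and a LOG_LEVEL environment variable (both names are my choice, not part of the question):
const winston = require('winston');

// Console output from Lambda ends up in CloudWatch Logs, so a single
// Console transport is enough; the level comes from the environment.
const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.json(),
  transports: [new winston.transports.Console()],
});

exports.handler = async (event) => {
  logger.debug('full event payload', { event }); // only emitted when LOG_LEVEL=debug
  logger.info('handler invoked');
  return { statusCode: 200 };
};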
