I'm using Gauge (Python) to test my WebSocket API, and I'm trying to print some debugging info from my step implementation code with Python's logging module, but I get no output.
How do I log info in Python step implementation code?
Thanks.
I'm trying to get the writerIdentity for a given Log Router sink. I'm able to get it using the gcloud CLI: gcloud logging sinks describe --format='value(writerIdentity)' <sink-name>
But for some reason the Python SDK equivalent (shown below) returns None for all my sinks:
from google.cloud import logging_v2 as logging

logging_client = logging.Client()
for i in logging_client.list_sinks():
    print(i, i.name, i.writer_identity)
Am I missing something?
I was able to find Google Cloud documentation regarding the sink writer identity that could probably help you get more information on how to construct the command; the part that focuses on the writer identity is:
https://cloud.google.com/logging/docs/export/configure_export_v2#gcloud_3
Other than these options there is also gcloud-python, which is provided by the Cloud team, though it seems it's not up to date with the beta APIs, logging being one of them.
gcloud CLI command:
gcloud logging sinks describe --format='value(writerIdentity)' <SINK_NAME>
For anyone running into this issue using the Python SDK to retrieve the writer identity: I was missing a step. Apparently, when using the Python SDK you need to reload the sink configuration in order to get the writer identity. The following is the working solution:
from google.cloud import logging_v2 as logging

logging_client = logging.Client()
for i in logging_client.list_sinks():
    i.reload()
    print(i, i.name, i.writer_identity)
sink.reload source code
I am trying to use the standard logging module in my AzureML run but nothing gets printed out to stdout.
I tried following the instructions from here, but I have not been successful.
This is what I am running:
import logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.info('WHY IS THIS NOT WORKING?')
Am I doing something wrong?
stdout is generally redirected to a log file called "70_driver_log.txt", and with no configuration a local run submission should honor the basicConfig and you should see the output there.
You can fetch the logs from the UI or by streaming the logs in the SDK via run.wait_for_completion(show_output=True)
This is the link to the UI screenshot (I don't have the rep to post an image): driver log in the ML studio
The following piece of code worked for me. I guess setting the handler is the key here.
import logging

# Attach a StreamHandler explicitly so log records are written to the stream
# that AzureML captures, then raise the logger's level to INFO.
handler = logging.StreamHandler()
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info('This message now shows up in the run output')
A more detailed response, as well as an alternative approach to the issue, can be found here.
What is the recommended way to implement logging levels for AWS Lambda functions in Node.js? I was going through many third-party libraries, e.g. winston, winston-cloudwatch, logplease, but it seems like we can also achieve this using the native console, e.g.:
console.log(), console.error(), console.warn(), console.info()
Any recommendations?
The relevant code is here:
https://github.com/aws/aws-lambda-nodejs-runtime-interface-client/blob/a850fd5adad5f32251350ce23ca2c8934b2fa542/src/utils/LogPatch.ts#L69-L89
So, you can use 7 console methods to get 6 CloudWatch log levels:
FATAL: console.fatal()
ERROR: console.error()
WARN: console.warn()
INFO: console.info() or console.log()
DEBUG: console.debug()
TRACE: console.trace()
console.trace() doesn't produce the same trace it produces in plain Node 14.
console.fatal() is missing in plain Node 14; it's added by the AWS Lambda runtime.
This was tested with the Node 14.x runtime on Aug 25 2022. YMMV.
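As a quick illustration (a hypothetical handler; the messages are made up), you can call the patched console methods directly and each line lands in CloudWatch Logs tagged with the corresponding level:

// Hypothetical Lambda handler just to show the patched console methods.
exports.handler = async (event) => {
  console.trace('TRACE level entry');
  console.debug('DEBUG level entry');
  console.info('INFO level entry');    // console.log() maps to INFO as well
  console.warn('WARN level entry');
  console.error('ERROR level entry');
  console.fatal('FATAL level entry');  // only exists inside the Lambda runtime
  return { statusCode: 200 };
};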
Since the Lambda console output goes directly into CloudWatch Logs, you really don't need to use something like Winston CloudWatch if that is your preferred log destination. If you wanted to send the logs somewhere else like Loggly then you might want to use something like Winston Loggly.
However, even if you just want to send all console output to CloudWatch Logs, I would still recommend using a basic Winston configuration, so that you could quickly and easily enable debug logging, for example through an environment variable, and then turn off debug logging once you are ready to use the Lambda function in production.
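For example, a minimal Winston setup along those lines might look like the sketch below (the LOG_LEVEL variable name is just an illustration):

const winston = require('winston');

// The level comes from an environment variable, so debug logging can be
// switched on or off per environment without a code change.
const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.simple(),
  transports: [new winston.transports.Console()],
});

logger.debug('only emitted when LOG_LEVEL=debug');
logger.info('always emitted at the default level');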
I've been working on a set-up script for a database and this evening I began getting some MaxListenersExceededWarning warnings in my console.
I traced the warnings back to specific request-promise-native calls. I initially thought it might have something to do with payload size; however, the warning is not happening on the largest request. I'm really lost on how to debug this further and get to the bottom of these warnings.
I figured out what I was doing wrong:
I am using a logger called pino in my application. For a lot of my setup methods I had an optional logger parameter that defaults to a stdout instance of pino when undefined. This default behavior was creating multiple write streams, hence the MaxListeners warning. I changed my code to pass the same stdout logger instance to all methods, and now there are no more warnings!
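For reference, here is a simplified sketch of that change (the setup function name is made up): create one pino instance up front and pass the same instance everywhere instead of letting each method default to its own.

const pino = require('pino');

// One shared stdout logger for the whole set-up script.
const logger = pino();

// Before: a default parameter like `log = pino()` gave every call its own
// instance, and therefore its own write stream.
// After: the shared instance is passed in explicitly.
function setupTable(options, log = logger) {
  log.info({ table: options.name }, 'creating table');
  // ...
}

setupTable({ name: 'users' }, logger);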
I can't find much on logging on Google App Engine using the Flexible environment and Node.js.
As the docs say, one is able to write one's own log messages using standard stdout and stderr. But this is simple logging, and I would like something a little more refined.
In particular, the Log Views in Google Cloud Platform Console allows the user to filter logs on their severity levels:
Critical
Error
Warning
Info
Debug
Any log level
I would like to find a way to use those levels in my applications, so that I can read logs much better.
By default, if I print to stdout using console.log() the logs will only appear if I filter by "Any log level", and I have no option to select a severity level.
I have tried to use winston-gae as reported on the docs, but without any success. Maybe I have configured it wrong?
To be more clear, I would like to be able to do something like this in Python (source):
import logging
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        logging.debug('This is a debug message')
        logging.info('This is an info message')
        logging.warning('This is a warning message')
        logging.error('This is an error message')
        logging.critical('This is a critical message')
        self.response.out.write('Logging example.')

app = webapp2.WSGIApplication([
    ('/', MainPage)
], debug=True)
I would recommend looking at the Google Cloud Node.js client library, which can help you call the Stackdriver Logging API in order to log structured log entries: https://github.com/GoogleCloudPlatform/google-cloud-node#google-stackdriver-logging-beta
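As a rough sketch of what that looks like with today's @google-cloud/logging package (the current form of the library linked above; the log name is arbitrary and Application Default Credentials are assumed), you attach a severity to each entry so it becomes filterable in the Logs Viewer:

const {Logging} = require('@google-cloud/logging');

const logging = new Logging();
const log = logging.log('my-app-log'); // arbitrary log name

async function logWithSeverity() {
  // The severity set in the entry metadata is what drives the level filter.
  const metadata = {resource: {type: 'global'}, severity: 'WARNING'};
  const entry = log.entry(metadata, {message: 'This is a warning message'});
  await log.write(entry);
}

logWithSeverity().catch(console.error);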