How to get execution ID for Google Cloud Functions triggered from HTTP? - python-3.x

I am trying to write logs to Cloud Logging from Python applications using the Cloud Logging API client library, with an "execution ID" that matches the one Google attaches to a function's default logs.
logger setup:
from google.cloud import logging
from google.cloud.logging.resource import Resource
log_client = logging.Client()
# This is the resource type of the log
log_name = 'cloudfunctions.googleapis.com%2Fcloud-functions'
# Inside the resource, nest the required labels specific to the resource type
res = Resource(type="cloud_function",
               labels={
                   "function_name": "my-function",
                   "region": "asia-east2"
               })
logger = log_client.logger(log_name.format("my-project"))
write log:
logger.log_struct({"message": request.remote_addr}, resource=res, severity='INFO')

It's currently not possible to do this using the Cloud Functions Framework itself, but you can try to extract the executionId from the request headers:
request.headers.get('function-execution-id')
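To tie that ID back to your own entries, here is a minimal sketch reusing the logger and res from the question's setup; note the execution_id label name is my own choice for illustration, not an official one:
def cf_endpoint(request):
    execution_id = request.headers.get('function-execution-id')
    logger.log_struct(
        {"message": request.remote_addr},
        resource=res,
        severity='INFO',
        labels={"execution_id": execution_id},  # hypothetical label mirroring the ID GCF stamps on its own logs
    )
    return 'ok'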
There is an issue in the Cloud Functions GitHub tracker about implementing a native way to get those values; you can follow that thread for updates if you'd like.

I had the same issue using an older version of google-cloud-logging. I was able to get this working using the standard Python logging module. In a Cloud Function running Python 3.8 with google-cloud-logging==2.5.0, the executionId is correctly attached to log entries, as is the severity, within Stackdriver.
main.py:
# Imports the Cloud Logging client library
import google.cloud.logging
# Instantiates a client
client = google.cloud.logging.Client()
# Retrieves a Cloud Logging handler based on the environment
# you're running in and integrates the handler with the
# Python logging module. By default this captures all logs
# at INFO level and higher
client.get_default_handler()
client.setup_logging()
# Imports Python standard library logging
import logging
def hello_world(req):
    # Emits the data using the standard logging module
    logging.info('info')
    logging.warning('warn')
    logging.error('error')
    return ""
requirements.txt:
google-cloud-logging==2.5.0
Triggering this cloud function results in the following in Stackdriver: [screenshot omitted]

Related

How to get writerIdentity for a given Sink using Cloud Logging Python SDK

I'm trying to get the writerIdentity for a given Log Router sink. I'm able to get it using the gcloud CLI:
gcloud logging sinks describe --format='value(writerIdentity)' <sink-name>
But for some reason the Python SDK equivalent (shown below) returns None for all my sinks:
from google.cloud import logging_v2 as logging
logging_client = logging.Client()
for i in logging_client.list_sinks():
    print(i, i.name, i.writer_identity)
Am I missing something?
I was able to find gcloud documentation regarding sink writer identities that could help you get more information on how to construct the command; the page focused on writer identity is:
https://cloud.google.com/logging/docs/export/configure_export_v2#gcloud_3
Other than these options, there is also gcloud-python, provided by the Cloud team, though it seems it's not up to date with beta APIs, logging being one of them.
gcloud CLI command:
gcloud logging sinks describe --format='value(writerIdentity)' <SINK_NAME>
For anyone running into this issue using the Python SDK to retrieve writer identity: I was missing a step. While using the Python SDK, you need to reload the sink configuration in order to get the writer identity. The following is the working solution:
from google.cloud import logging_v2 as logging
logging_client = logging.Client()
for i in logging_client.list_sinks():
    i.reload()
    print(i, i.name, i.writer_identity)
sink.reload source code

Duplicated logs Flask - Google Cloud Logging

I'm developing a web application in Flask, using GAE.
My issue here is: every time my application tries to log, I get multiple entries in the log file:
[screenshot of the log viewer showing duplicated entries]
My dbconnection class imports a default logger class that I created and calls unexpected_error_log() to write whenever needed.
My logger class:
import logging
from google.cloud import logging as cloudlogging
class LoggerDB:
    def __init__(self):
        log_client = cloudlogging.Client()
        log_handler = log_client.get_default_handler()
        self.cloud_logger = logging.getLogger("cloudLogger")
        self.cloud_logger.setLevel(logging.INFO)
        self.cloud_logger.addHandler(log_handler)

    def unexpected_error_log(self, name, error="Unhandled Exception"):
        self.cloud_logger.error("Unexpected Error on %s: %s", name, error)
Code when executed:
def insertVenda(self, venda):
    try:
        query = "xxxxx"
        self.cursor.execute(query)
        self.connection.commit()
        return "Success"
    except Exception as error:
        self.logger.unexpected_error_log(__name__, error)
        self.connection.rollback()
        return "Error"
I suspect that gunicorn/app logging is duplicating my logs, but I don't know how to handle this case.
Has anyone had the same problem?
I am struggling with this at the moment. My suspicion is that it happens if you include something like this:
# Imports Python standard library logging
import logging
import google.cloud.logging
# Instantiates a client
client = google.cloud.logging.Client()
# Retrieves a Cloud Logging handler based on the environment
# you're running in and integrates the handler with the
# Python logging module. By default this captures all logs
# at INFO level and higher
client.get_default_handler()
client.setup_logging()
I get logs OK, but with multiple duplicates.
If I omit it, I just get single stdout print statements going to the Stackdriver logs.
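One mitigation worth trying (my assumption about the cause, not a confirmed fix) is to stop the named logger from propagating records to the root logger, so each record is handled only once instead of also being re-emitted via gunicorn's/App Engine's stdout handling:
import logging
from google.cloud import logging as cloudlogging

log_client = cloudlogging.Client()
log_handler = log_client.get_default_handler()
cloud_logger = logging.getLogger("cloudLogger")
cloud_logger.setLevel(logging.INFO)
cloud_logger.addHandler(log_handler)
cloud_logger.propagate = False  # don't also hand records to root handlers, avoiding double emission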

How do I query GCP log viewer and obtain json results in Python 3.x (like gcloud logging read)

I'm building a tool to download GCP logs, save the logs to disk as single-line JSON entries, then perform processing against those logs. The program needs to support both logs exported to Cloud Storage and logs currently in Stackdriver (to partially support environments where export to Cloud Storage hasn't been pre-configured). The Cloud Storage piece is done, but I'm having difficulty downloading logs from Stackdriver.
I would like to implement functionality similar to the gcloud command 'gcloud logging read' in Python. Yes, I could use gcloud; however, I would like to build everything into the one tool.
I currently have this sample code to print the result of hits, however I can't get the full log entry in JSON format:
def downloadStackdriver():
    client = logging.Client()
    FILTER = "resource.type=project"
    for entry in client.list_entries(filter_=FILTER):
        a = entry.payload.value
        print(a)
How can I obtain full JSON output of matching logs like it works using gcloud logging read?
Based on other Stack Overflow pages, I've attempted to use MessageToDict and MessageToJson; however, I receive the error:
"AttributeError: 'ProtobufEntry' object has no attribute 'DESCRIPTOR'"
You can use the to_api_repr function on the LogEntry class from the google-cloud-logging package to do this:
from google.cloud import logging
client = logging.Client()
logger = client.logger('log_name')
for entry in logger.list_entries():
    print(entry.to_api_repr())
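Since the goal is to save entries to disk as single-line JSON, here is a small sketch building on that answer (the logs.jsonl file name and the filter string are placeholders of my own):
import json
from google.cloud import logging

client = logging.Client()
# Stream matching entries and write each one as a single JSON line,
# similar to `gcloud logging read --format=json`.
with open("logs.jsonl", "w") as f:
    for entry in client.list_entries(filter_="resource.type=project"):
        f.write(json.dumps(entry.to_api_repr()) + "\n")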

stackdriver logging client library missing severity with python

I would like to send more expressive log entries to Stackdriver Logging from my App Engine standard Python 3 app.
By following the official documentation I was able to send my logs to Stackdriver, and it seems the timestamp is parsed correctly.
But I'm missing the severity levels. In addition, I see no way to link the logs of a certain request together into an operation, something the Java logging seems to do out of the box.
For reference here is my code:
import logging
import os
from flask import Flask
from google.cloud import logging as glog
app = Flask(__name__)
log_client = glog.Client(os.getenv('GOOGLE_CLOUD_PROJECT'))
# Attaches a Google Stackdriver logging handler to the root logger
log_client.setup_logging()
@app.route('/_ah/push-handlers/cloudbuild', methods=['POST'])
def pubsub_push_handle():
    logging.info("stdlib info")
    logging.warning("stdlib warn")
    logging.error("stdlib error")
Logs resulting in Stackdriver: [screenshot omitted]
As you can see, the timestamps and messages are available, while the severity is strangely missing and everything gets classified as 'Any'.
Can someone point me in the right direction or is this level of incorporation not yet available?
Thanks for your time!
Carsten
You need to create your own logger and add the google-cloud-logging default handler to it:
import logging
from flask import Flask
from google.cloud import logging as cloudlogging
log_client = cloudlogging.Client()
log_handler = log_client.get_default_handler()
cloud_logger = logging.getLogger("cloudLogger")
cloud_logger.setLevel(logging.INFO)
cloud_logger.addHandler(log_handler)
app = Flask(__name__)
@app.route('/_ah/push-handlers/cloudbuild', methods=['POST'])
def pubsub_push_handle():
    cloud_logger.info("info")
    cloud_logger.warning("warn")
    cloud_logger.error("error")
    return 'OK'
Produces: [screenshot showing the expected severities]

Google Cloud Functions Python Logging issue

I'm not sure how to say this, but I feel like something under the hood was changed by Google without my knowing about it. I used to get the logs from my Python Cloud Functions in the Google Cloud Console logging dashboard, and now it has just stopped working.
So after investigating for a long time, I made a logging "hello world" Python Cloud Function:
import logging
def cf_endpoint(req):
    logging.debug('log debug')
    logging.info('log info')
    logging.warning('log warning')
    logging.error('log error')
    logging.critical('log critical')
    return 'ok'
So this is my main.py that I deploy as a Cloud Function with an http trigger.
Since I had a log ingestion exclusion filter for all "debug"-level logs, I wasn't seeing anything in the logging dashboard. But when I removed it, I discovered this: [screenshot omitted]
So it seems like whatever was parsing the Python built-in log records into Stackdriver stopped parsing the log severity parameter! I'm sorry if I look stupid, but that's the only thing I can think of :/
Do you have any explanations or solutions for this? Am I doing it the wrong way?
Thank you in advance for your help.
UPDATE 2022/01:
The output now looks, for example, like:
[INFO]: Connecting to DB ...
The severity drop-down menu offers "Default" as the filter needed to show the Python logging output, which means showing just any log available; all of the Python logs fall under "Default", and the severity is still dropped.
Stackdriver Logging severity filters are no longer supported when using the Python native logging module.
However, you can still create logs with certain severity by using the Stackdriver Logging Client Libraries. Check this documentation in reference to the Python libraries, and this one for some usage-case examples.
Notice that in order to place the logs under the correct resource, you will have to configure them manually; see this list for the supported resource types.
As well, each resource type has some required labels that need to be present in the log structure.
As an example, the following code will write a log to the Cloud Function resource, in Stackdriver Logging, with an ERROR severity:
from google.cloud import logging
from google.cloud.logging.resource import Resource
log_client = logging.Client()
# This is the resource type of the log
log_name = 'cloudfunctions.googleapis.com%2Fcloud-functions'
# Inside the resource, nest the required labels specific to the resource type
res = Resource(type="cloud_function",
               labels={
                   "function_name": "YOUR-CLOUD-FUNCTION-NAME",
                   "region": "YOUR-FUNCTION-LOCATION"
               })
logger = log_client.logger(log_name.format("YOUR-PROJECT-ID"))
logger.log_struct(
    {"message": "message string to log"}, resource=res, severity='ERROR')
return 'Wrote logs to {}.'.format(logger.name)  # Return cloud function response
Notice that the strings in YOUR-CLOUD-FUNCTION-NAME, YOUR-FUNCTION-LOCATION and YOUR-PROJECT-ID, need to be specific to your project/resource.
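If you would rather not hard-code those strings, some of them may be available as environment variables; here is a hedged sketch (FUNCTION_NAME and FUNCTION_REGION were set automatically on the older python37 runtime, while newer runtimes expose K_SERVICE instead, so check what exists in your environment):
import os

# Assumption: which variables are set depends on the runtime generation.
function_name = os.getenv("FUNCTION_NAME") or os.getenv("K_SERVICE", "unknown")
region = os.getenv("FUNCTION_REGION", "unknown")  # not populated on newer runtimes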
I encountered the same issue.
In the link that @Joan Grau shared, I also see there is a way to integrate the cloud logger with the Python logging module, so that you can use the Python root logger as usual, and all logs will be sent to Stackdriver Logging.
https://googleapis.github.io/google-cloud-python/latest/logging/usage.html#integration-with-python-logging-module
...
I tried it and it works. In short, you can do it in two ways.
One simple way is to bind the cloud logger to the root logger:
from google.cloud import logging as cloudlogging
import logging
lg_client = cloudlogging.Client()
lg_client.setup_logging(log_level=logging.INFO) # to attach the handler to the root Python logger, so that for example a plain logging.warn call would be sent to Stackdriver Logging, as well as any other loggers created.
Alternatively, you can set up a logger with more fine-grained control:
from google.cloud import logging as cloudlogging
import logging
lg_client = cloudlogging.Client()
lg_handler = lg_client.get_default_handler()
cloud_logger = logging.getLogger("cloudLogger")
cloud_logger.setLevel(logging.INFO)
cloud_logger.addHandler(lg_handler)
cloud_logger.info("test out logger carrying normal news")
cloud_logger.error("test out logger carrying bad news")
Not wanting to deal with cloud logging libraries, I created a custom Formatter that emits a structured log with the right fields, as cloud logging expects it.
import json
import logging

class CloudLoggingFormatter(logging.Formatter):
    """Produces messages compatible with google cloud logging"""

    def format(self, record: logging.LogRecord) -> str:
        s = super().format(record)
        return json.dumps(
            {
                "message": s,
                "severity": record.levelname,
                "timestamp": {"seconds": int(record.created), "nanos": 0},
            }
        )
Attaching this handler to a logger results in logs being parsed and shown properly in the logging console. In cloud functions I would configure the root logger to send to stdout and attach the formatter to it.
# setup logging
import logging
import sys

root = logging.getLogger()
handler = logging.StreamHandler(sys.stdout)
formatter = CloudLoggingFormatter(fmt="[%(name)s] %(message)s")
handler.setFormatter(formatter)
root.addHandler(handler)
root.setLevel(logging.DEBUG)
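With that in place, a plain stdlib call such as logging.info("hello") emits one JSON line on stdout, roughly like the following (illustrative output derived from the formatter above, not captured from a real run):
{"message": "[root] hello", "severity": "INFO", "timestamp": {"seconds": 1640995200, "nanos": 0}}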
From Python 3.8 onwards you can simply print a JSON structure with severity and message properties. For example:
import json

print(
    json.dumps(
        dict(
            severity="ERROR",
            message="This is an error message",
            custom_property="I will appear inside the log's jsonPayload field",
        )
    )
)
Official documentation: https://cloud.google.com/functions/docs/monitoring/logging#writing_structured_logs
To use the standard python logging module on GCP (tested on python 3.9), you can do the following:
import google.cloud.logging
logging_client = google.cloud.logging.Client()
logging_client.setup_logging()
import logging
logging.warning("A warning")
See also: https://cloud.google.com/logging/docs/setup/python
I use a very simple custom logging function to log to Cloud Logging:
import json
def cloud_logging(severity, message):
    print(json.dumps({"severity": severity, "message": message}))

cloud_logging(severity="INFO", message="Your logging message")
