Log severity levels are always "Any" on Google App Engine standard - python-3.x

Log severity levels always show "Any" on the Google App Engine standard environment. Here is the snippet I am using:
import logging

import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler, setup_logging

client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client)
handler.setFormatter(CustomFormatter())  # CustomFormatter is defined elsewhere in the app
setup_logging(handler)
logging.getLogger("name").setLevel(logging.INFO)

This scenario is known and there is already a feature request for it; it seems to come from an internal limitation. I found this comment related to logging in GAE:
Bubbling up the application log severity in the request logs is not feasible in App Engine second generation.
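If I understand the limitation correctly, the request log's severity will keep showing "Any", but individual application log entries still carry their own severity. As a minimal sketch (the log name "my-app-log" and the message are placeholders), you can also write entries directly through the Cloud Logging API logger with an explicit severity:
import google.cloud.logging

client = google.cloud.logging.Client()
cloud_logger = client.logger("my-app-log")  # placeholder log name
# Each entry written this way records its own severity in Cloud Logging
cloud_logger.log_text("Request handled", severity="INFO")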

Related

Getting functionName and executionId inside running Firebase function

Our product is heavily based on Node.js v10 Firebase functions, and up until now we have been using the Firebase functions logger SDK for logging. However, it is no longer enough for us, as we need to send additional properties with each log for better filtering in the GCP Logging Explorer.
Unfortunately, the very helpful functionName and executionId labels are not attached to logs triggered by the Cloud Logging SDK. Are these labels exposed somehow by the Node.js Firebase SDK so that I can attach them manually to the metadata?
Regards
We have the exact same stack at work, and we just create a logger that sits on top of the Firebase logger and basically do manual logging.
// Use the firebase functions logger to report all logs from our logger.
// Supports logger.info, .error, .log and .warn.
// logger is a custom function that sits on top of console or the firebase logger;
// this initialises a logger singleton using the firebase functions logger as a base.
logger(functions.logger)
E.g. a typical function log would be:
// Use the logger global singleton initialised above
logger.log(`[endpoint-name.methodName] - Execution failed with error code ${error.code} and message ${error.message}`)
We then just use the function log search field to find an instance of a particular error. We also report all internal errors to Sentry and use that to debug as well.
Hope this gives you some ideas.

Logging for Azure Function in python with SEQ

I'm working on an Azure Function (durable function) that implements an HTTP trigger. All it does is wait for an HTTP call from the backend that shares a link to a blob storage object (an image) so it can be processed by the function. I need to implement a reliable logging solution using SEQ, which is being used for other projects in our company (mostly .NET).
Following the official documentation, all I'm receiving in the SEQ console is a stream of unstructured events, and it's hard to tell where and when the processing starts, how long it took, etc. That makes it impossible to troubleshoot.
With .NET projects we were using Serilog, which lets you write so-called enrichers and filters so you can structure the logs around the information that is really needed, including call performance (e.g. elapsed time). I don't see anything even close to that available for Python 3. Can anyone suggest where to start? What's the best approach to capture the events I'm looking for?
Thanks.
Ok guys, here's the answer:
You need to install the lib called seqlog via requirements.txt for this purpose.
In the Python script where you plan to use the logger, import the respective namespace, i.e. import logging.
Define SEQ configuration in a JSON file (something like this):
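A hedged example of what that file can look like (the keys follow seqlog's standard dictConfig schema; the server URL and API key are placeholders):
{
  "version": 1,
  "disable_existing_loggers": false,
  "root": {
    "level": "INFO",
    "handlers": ["seq"]
  },
  "handlers": {
    "seq": {
      "class": "seqlog.structured_logging.SeqLogHandler",
      "server_url": "https://my-seq-server:5341",
      "api_key": "<your-api-key>",
      "batch_size": 10,
      "auto_flush_timeout": 2
    }
  }
}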
In the __init__.py load the SEQ config and apply it:
import json
import logging.config

with open('./seq.config.json', 'r') as f:
    seq_config = json.load(f)
logging.config.dictConfig(seq_config)
Use the logging object to stream the logs to SEQ:
logging.info("[" + obj.status + "] >> Data has been processed!")
Enjoy the logs posted to the SEQ console
p.s. if you're debugging locally, set http://localhost:<port> in the seq.config.json instead of the remote console address.
Hope this info will help someone.
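As a follow-up, seqlog also supports Serilog-style named properties, which gives you the structured events the question asks about. A minimal sketch, assuming you configure seqlog through its own helper instead of the JSON file above (the server URL, API key, and property names are placeholders):
import logging
import seqlog

seqlog.log_to_seq(
    server_url="http://localhost:5341",
    api_key="<your-api-key>",
    level=logging.INFO,
    batch_size=10,
    auto_flush_timeout=2,  # seconds
    override_root_logger=True,
)

# Named format arguments become structured properties on the SEQ event
logging.info("Processed blob {blob_name} in {elapsed_ms} ms",
             blob_name="image-001.png", elapsed_ms=125)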

Problem with Google Cloud Live Logging with Python

I want to live log some events in the project I am working on, and after searching I found the Google Cloud Logging API.
After searching around and reading different guides, I created a project and enabled the Logging API.
Then I tried testing it with the below code.
# Imports the Google Cloud client library
import google.cloud.logging
from google.oauth2 import service_account

# Imports the Python standard library logging module.
# (Note: do not also do "from google.cloud import logging" here, or the two
# modules will shadow each other.)
import logging

# Instantiates a client
credentials = service_account.Credentials.from_service_account_file('my_path_to_service_account_json')
client = google.cloud.logging.Client(project='name_of_my_project', credentials=credentials)

# Connects the logger to the root logging handler; by default this captures
# all logs at INFO level and higher
client.setup_logging()

# The data to log
text = 'Hello, world!'

# Emits the data using the standard logging module
logging.warning(text)
It executed successfully, and I went to the Logs Viewer in Google Cloud to check the logs.
Nothing was there, although requests showed up in the API Overview.
Do you have any idea what is going wrong? Is it something in the code snippet above, or did I miss something in the Google Cloud configuration?
Thanks in advance.
I finally managed to solve this problem.
The problem wasn't the code, but the fact that I hadn't chosen "Global" as the resource in the Logs Viewer.
When I chose "Global", I saw all my logs.

How to log every http.request made in Node

I want to log every HTTP request made by a particular Node app and all of its modules. Wrapping requests in a function could work for all non-module code; the obvious disadvantages are that it doesn't cover module code and that it is cumbersome to do.
This is for apps already in production; the only other option I've thought of is tcpdump.
Setting NODE_DEBUG=http will make Node log detailed HTTP request information to the console.
Examples:
NODE_DEBUG=http,http2 node index.js
NODE_DEBUG=http,http2 npm start
For more information see:
NODE_DEBUG documentation
This blog post: Debugging tools and practices in node.js.
As of 2022-11-23, this is the list of the available NODE_DEBUG attributes (based on the blog post above and checking the nodejs source code):
child_process
cluster
esm
fs
http
http2
inspect
module
net
policy
repl
source_map
stream
stream_socket
timer
tls
tracing
worker
You can pass multiple modules as a comma-separated list: NODE_DEBUG=http,http2,tls
How to find the module IDs in nodejs source code:
Unfortunately, there isn't a listing of the available debug module IDs in the nodejs documentation.
To find the IDs in the source: search/grep the nodejs .js files for .debuglog( usage:
# Grep inside the nodejs project
$ grep -r -E --color "\.debuglog\('" nodejs/lib
This will return results like:
let debug = require('internal/util/debuglog').debuglog('esm', (fn) => {
let debug = require('internal/util/debuglog').debuglog('http2', (fn) => {
The string passed to .debuglog(...) (e.g. 'esm' and 'http2') is the module id that can be passed to NODE_DEBUG.
The easiest / least intrusive way is with a web proxy, either an off-the-shelf one or one you write yourself in Node. The machines the apps live on would have to be configured* to send all outbound traffic through the proxy, and then the proxy can log the traffic. Implementation details will vary based on which proxy / approach you pick.
*Arguably, there are ways to do this such that the machines don't even know they're being proxied, but I've found in practice that's really hard to get right, especially with HTTPS traffic.
In 2022, you can use the grackle_tracking library. It helps you track all traffic, errors, and analytics into your database or just to the console: https://www.getgrackle.com/analytics_and_tracking
Check out the HTTP request logger middleware called Morgan here.

Application Insights setting user in RequestTelemetry context not showing up in event details

I am using Application Insights to track usage of my application. In my service layer I use the App Insights API to manually track requests and exceptions, and I have disabled the default set of telemetry initializers and modules that track requests and exceptions in favor of custom instrumentation based on the framework I am using. However, when I manually track a request like this, I do not see any user information in the request details on the Application Insights dashboard.
A simplified version that demonstrates the issue I am having:
Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active
.InstrumentationKey = "<InstrumentationKey>";
var tc = new TelemetryClient();
var request = new RequestTelemetry();
request.Context.User.Id = "1234";
request.Name = "Test Request";
tc.TrackRequest(request);
EDIT 11/30/15
This was a bug in the application insights portal. It has since been resolved.
It should work. We are investigating the issue.
I created an issue here:
https://github.com/Microsoft/ApplicationInsights-dotnet/issues/111
That should work; I've done something similar.
To clarify: you are seeing your custom request telemetry show up in the dashboard, but you're not seeing any data in your users charts?
You could also try setting the AuthenticatedUserId field to see if that works.
Try setting SessionId in addition to UserId. We automatically nullify UserId when SessionId is null.
I recently discovered that if UserAgent is not set in the telemetry context, the user and session data will be ignored.
Try setting Context.User.UserAgent to something (ideally HttpContext.Request.UserAgent but any non-empty string should work).
