How to view logging.info from Google App Engine? - python-3.x

I have the following function:
@app.route('/db')
def db():
    _, cursor = get_db()
    print('11111111111')
    logging.info('2222222222')
    return jsonify(x="new")
For whatever reason, the logging items don't show up in Logs Explorer; I just see the print statements, which show up with the default severity. Where are my logging logs going, and how can I view them?

Did you set the level? Remember the default is WARNING, so you have to do something like logging.basicConfig(level=logging.INFO) before you start using logging.info().
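For instance, a minimal sketch of that fix applied to the handler above (the Flask setup and the get_db() helper are assumed from the question's app):

import logging

from flask import Flask, jsonify

app = Flask(__name__)

# Raise the root logger's threshold before the first logging call;
# the default is WARNING, so INFO records are otherwise dropped.
logging.basicConfig(level=logging.INFO)

@app.route('/db')
def db():
    logging.info('2222222222')  # now actually emitted
    return jsonify(x="new")

Note that basicConfig is a no-op if a handler is already configured on the root logger; in that case, set the level directly with logging.getLogger().setLevel(logging.INFO).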

Related

apscheduler: how to prevent console printing of job misfire warning message?

How to prevent apscheduler from printing a job misfire (error) warning to the console?
As you can see in the console output, the job misfire event is captured and handled in a proper way.
But the red message from apscheduler scares normal users; they think the program has crashed, while nothing is wrong at all.
Why print this to the console if an event listener is defined? After defining a scheduler (EVENT_JOB_MISSED) event listener, the programmer is responsible for the console output.
Apscheduler is a great module, but this issue is a minor annoyance.
def SetScheduler():
    global shedul
    from apscheduler.schedulers.background import BackgroundScheduler
    from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_MISSED
    shedul = BackgroundScheduler()
    shedul.add_listener(shed_listener, EVENT_JOB_MISSED | EVENT_JOB_ERROR)
Console output: (screenshot from the original post, not reproduced here)
You have to adjust your logging configuration to filter out these messages.
If you don't need to display any apscheduler logging, you could do:
import logging
logging.getLogger("apscheduler").propagate = False
If you want to display other messages but not these specific ones, you may need to add a filter to that logger, though I have no experience with that; check out the logging documentation yourself. A rough sketch of the idea follows.
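A minimal sketch, assuming the warnings are emitted by the "apscheduler.scheduler" logger and contain the word "missed" (check your console output to confirm both):

import logging

class MisfireFilter(logging.Filter):
    # Drop only the misfire warnings; let every other apscheduler record through.
    def filter(self, record):
        return "missed" not in record.getMessage()

logging.getLogger("apscheduler.scheduler").addFilter(MisfireFilter())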

Azure Function Python Exceptions not logging correctly to App Insights

I am seeing some unexpected behaviour when using logging.error(..) in a Python Azure Function.. essentially, the error is getting logged as a trace, which seems a bit odd.. it's the same w. logging.exception(..) too.
Some simple sample code:
import logging

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    logging.error('[Error] I am an error.')
    return func.HttpResponse()
When this is executed, the following winds up in Application Insights.. why is this not available under exceptions?
Query for ref:
union *
| where operation_Id == 'ca8f0c7d7798a54996b486940641b159'
| project operation_Id, message, severityLevel, itemType, customDimensions.['LogLevel']
Any clues as to what I am doing wrong?? 👀
These will be classified as traces in Application Insights; the only thing that distinguishes them is the severity level (at least, that is how Application Insights behaves). If you need them classified as exceptions in Application Insights, you need to use the code below (and not handle it: once you wrap it in try-except or other processing, you are back in traces again):
raise Exception('Boom!')
But this will stop the function, so it doesn't make much sense. I suggest you query the traces table and set filters on the severity level to extract the exception or error entries.
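For example, a query along these lines (reusing the column names from the question's query; severityLevel 3 and 4 are Application Insights' Error and Critical levels) pulls out just the error-level traces:

traces
| where severityLevel >= 3
| project timestamp, message, severityLevel, operation_Id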
Currently, the only way to log messages to the exceptions table in Application Insights is to call logger.exception() inside a try/except block, like below:
import logging

logger = logging.getLogger(__name__)

try:
    result = 1 / 0
except Exception:
    logger.exception("it is an exception 88 which is in the try block...")
Then, in Application Insights, you can find the messages in the exceptions table (screenshot not reproduced here).
There is an open issue about this; let us wait for feedback from Microsoft.

Dynamically set Sanic log level in Python

I have exposed a route in my Sanic app to set the log-level based on the client call. E.g.
from sanic import Sanic
from sanic.response import json
from sanic.log import logger, logging

app = Sanic("myapp")

@app.route("/main")
async def sanic_main(request):
    logger.info("Info mesg")
    logger.debug("Debug mesg")
    return json("processed")

@app.route("/setlevel")
async def setlevel(request):
    level = request.json["level"]
    if level == "info":
        loglevel = logging.INFO
    elif level == "debug":
        loglevel = logging.DEBUG
    logger.setLevel(loglevel)
    return json("done")
On setting log levels between DEBUG and INFO, however, I am observing flaky behavior: the DEBUG messages (from "/main") get printed only sometimes, and vice versa.
NOTE: I am running multiple Sanic workers
How should I go about dynamically setting the log level?
I have never done anything like this, but sanic.log.logger is just an instance of <class 'logging.Logger'>, so using setLevel should be fine.
The question is how you are running your app and how many workers you are using. If you have multiple processes, a request to /setlevel only changes the logger in the one worker that happens to handle it, which would explain the flaky behavior.
One way to do this is using aioredis, as captured in the blog here:
https://medium.com/@sandeepvaday/route-triggered-state-update-across-python-sanic-workers-a0f7ab0f6e4
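For illustration, a minimal sketch of that idea, assuming aioredis 1.x and a local Redis instance (the "loglevel" channel name is made up here): every worker runs a subscriber task and applies level changes to its own logger.

import aioredis
from sanic.log import logger, logging

async def listen_for_level_changes():
    # Runs inside each worker, so every process updates its own logger.
    redis = await aioredis.create_redis("redis://localhost")
    channel, = await redis.subscribe("loglevel")
    while await channel.wait_message():
        level = (await channel.get()).decode()
        logger.setLevel(getattr(logging, level.upper(), logging.INFO))

Register it in every worker with app.add_task(listen_for_level_changes()), and have the /setlevel handler publish to the channel instead of calling setLevel directly.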

Jmeter: how to disable listener by code (groovy)

I've tried to disable the View Results Tree listener from Groovy code. The code runs, and correctly shows and changes the name and enabled property (as reported by the log), but neither does the info in the GUI actually stop nor does the listener stop writing to its file (in both GUI and non-GUI mode). Listeners are processed at the end, so IMHO code executed in a setUp thread should affect the logging of other threads. What does the enabled property do?
I've seen a workaround that edits the JMeter plan file in place (JMeter: how can I disable a View Results Tree element from the command line?), but I'd like an internal jmeter/groovy solution.
The code (interestingly, each listener is processed twice: first it prints View Results Tree, then already foo):
import org.apache.jmeter.engine.StandardJMeterEngine
import org.apache.jorphan.collections.HashTree
import org.apache.jorphan.collections.SearchByClass
import org.apache.jmeter.reporters.ResultCollector

def engine = ctx.getEngine()
def test = engine.getClass().getDeclaredField("test")
test.setAccessible(true)
def testPlanTreeRC = (HashTree) test.get(engine)
def rcSearch = new SearchByClass<>(ResultCollector.class)
testPlanTreeRC.traverse(rcSearch)
def rcTrees = rcSearch.getSearchResults()

for (rc in rcTrees) {
    log.error(rc.getName())
    if (rc.isEnabled()) { log.error("yes") } else { log.error("no") }
    rc.setEnabled(false)
    if (rc.isEnabled()) { log.error("yes") } else { log.error("no") }
    if (rc.getName() == "View Results Tree") { rc.setName("foo") }
}
ADDED: when a listener is disabled in the test plan in the GUI, it's not found by the traverse code above.
The enabled property is used/checked by JMeter on startup, so this would require a change in JMeter's code.
I opened an enhancement request, Add option to disable View Results Tree/Listeners in non GUI, which you can vote on.
There are other options: executing JMeter externally using the Taurus tool, or running JMeter from Java and disabling the listeners there:
// Assumes the usual JMeter API imports (SaveService, HashTree, SearchByClass,
// ResultCollector, TestElement) and a StandardJMeterEngine instance:
StandardJMeterEngine jmeter = new StandardJMeterEngine();
HashTree testPlanTree = SaveService.loadTree(new File("/path/to/your/testplan"));
SearchByClass<ResultCollector> listenersSearch = new SearchByClass<>(ResultCollector.class);
testPlanTree.traverse(listenersSearch);
Collection<ResultCollector> listeners = listenersSearch.getSearchResults();
listeners.forEach(listener -> listener.setProperty(TestElement.ENABLED, false));
jmeter.configure(testPlanTree);
jmeter.run();

Python logging, suppressing output

Here is a simple example that gets a new logger and attempts to log an info and a debug message:
import logging
log = logging.getLogger("MyLog")
log.setLevel(logging.DEBUG)
log.info("hello")
log.debug("world")
If I call logging.basicConfig(level=logging.INFO) right after importing, "hello" will print but not "world" (which seems strange, since I set the level to debug).
How can the logging API be adjusted so all built-in levels are printed to stdout?
If you call basicConfig with level X, log messages below X will not be printed.
You called logging.basicConfig(level=logging.INFO), and logging.INFO doesn't cover logging.DEBUG.
Maybe you wanted it the other way round:
logging.basicConfig(level=logging.DEBUG)
This prints both info and debug output:
import logging
logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("MyLog")
log.setLevel(logging.DEBUG)
log.info("hello")
log.debug("world")
