Connecting to gremlin server from python API duplicates log messages - python-3.x

After connecting to a Gremlin Server, all of my log messages are duplicated. I am using the following code to connect.
graph = anonymous_traversal.traversal().withRemote(DriverRemoteConnection("ws://localhost:8182/gremlin", "g"))
I guess that the gremlin-python API somehow enables the root logger, but I cannot find where. Maybe I overlooked some setting. Any advice on how to overcome this issue is very welcome.
Here is the whole sample code which I used to replicate this problem.
import logging
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process import anonymous_traversal
I run this as the main entry point of the script.
if __name__ == '__main__':
Setting up my local logger instance
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.getLevelName('DEBUG'))
    log_format = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    log_file_handler = logging.FileHandler('foo.log', mode='w')
    log_file_handler.setFormatter(log_format)
    log_file_handler.setLevel(logging.DEBUG)
    logger.addHandler(log_file_handler)
    console_log_handler = logging.StreamHandler()
    console_log_handler.setLevel(logging.getLevelName('INFO'))
    console_log_handler.setFormatter(log_format)
    logger.addHandler(console_log_handler)
And sending some messages to the log before the connection is established.
logger.debug("foo")
logger.info("foo")
logger.warning("and")
logger.error("foo again")
Comment/uncomment the following line to test without the Gremlin connection.
    graph = anonymous_traversal.traversal().withRemote(DriverRemoteConnection("ws://localhost:8182/gremlin", "g"))
Finally, sending some more messages after the connection.
logger.debug("after foo")
logger.info("after foo")
logger.warning("after and")
logger.error("after foo again")
The result is something like this.
2019-04-18 16:09:57,746 - INFO - foo
2019-04-18 16:09:57,746 - WARNING - and
2019-04-18 16:09:57,746 - ERROR - foo again
DEBUG:__main__:after foo
2019-04-18 16:09:59,107 - INFO - after foo
INFO:__main__:after foo
2019-04-18 16:09:59,107 - WARNING - after and
WARNING:__main__:after and
2019-04-18 16:09:59,108 - ERROR - after foo again
ERROR:__main__:after foo again
I think gremlin-python should use its own logger and should not cause side effects in other logger instances.
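The second copy of each message is in the root handler's default "LEVEL:name:message" format, which suggests that opening the connection attaches a StreamHandler to the root logger and my records then propagate up to it. A small diagnostic sketch (not part of the original script) to check that assumption:

import logging
print(logging.root.handlers)  # expect [] before the connection is opened
# ... open the Gremlin connection ...
print(logging.root.handlers)  # a StreamHandler appearing here would explain the duplicates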

I found a very simple workaround. It is not pretty.
def non_root_logger():
    # iterate over a copy: removing handlers mutates the list while looping
    for handler in logging.root.handlers[:]:
        logging.root.removeHandler(handler)
    return logging.getLogger(__name__)
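A less invasive alternative, assuming the duplicates really do come from a handler added to the root logger, is to leave the root logger intact and simply stop your own logger from propagating records upward:

logger = logging.getLogger(__name__)
logger.propagate = False  # records no longer bubble up to the root logger's handlers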

Related

How to send a message to only one file handler from a logger class

So I've set up a single logger with multiple handlers in my Flask app. From the research I've done thus far, it seems you can only have one logger in a Flask app, so to manage different streams I need to add handlers. My issue is that every message goes to all of the handlers. Does anyone know of a way to send a message to a single handler from a Logger instance?
I have a workaround, but I'm not exactly excited about it. I feel like I'm missing something basic here, but my google-fu is weak today, I guess.
import logging
from contextlib import suppress
log_fmt = logging.Formatter(
    fmt='%(levelname)s: %(asctime)s - %(name)s - : %(message)s',
    datefmt='%m/%d/%Y %H:%M:%S'
)
v = logging.FileHandler('log1.log')
v.setFormatter(log_fmt)
v.setLevel(logging.INFO)
v.name = 'log1'
lg = logging.getLogger('log1')
lg.setLevel(logging.INFO)
lg.addHandler(v)
b = logging.FileHandler('log2.log')
b.setFormatter(log_fmt)
b.setLevel(logging.INFO)
b.name = 'log2'
# I can write to different files if I create different loggers, but then we run into the Flask limitation
#lg2 = logging.getLogger('log2')
#lg2.setLevel(logging.INFO)
lg.addHandler(b)
l1 = logging.getLogger('log1')
# probably there is a better way to handle this but it's just an example
def to_handler(msg, name='log1'):
    msg = logging.LogRecord(name, logging.INFO, '', 0, msg, [], '')
    with suppress(IndexError):
        [x for x in l1.handlers if getattr(x, 'name', '') == name][0].emit(msg)
l1.to_handler = to_handler
# in a client module
import logging
l1 = logging.getLogger('log1')
l1.to_handler('some mildly important information')
When setting up the logger in Flask I have to do
app.logger = l1
and that's why I'm not sure how I can use multiple loggers.
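For what it's worth, a standard-library way to steer one message to one handler is a per-handler logging.Filter keyed on an attribute passed via extra=. A sketch continuing the snippet above (the target attribute is a made-up convention, not a logging built-in):

class OnlyFor(logging.Filter):
    # let a record through when it is untargeted or targeted at this handler
    def __init__(self, handler_name):
        super().__init__()
        self.handler_name = handler_name
    def filter(self, record):
        return getattr(record, 'target', None) in (None, self.handler_name)

v.addFilter(OnlyFor('log1'))
b.addFilter(OnlyFor('log2'))
lg.info('goes to both files')
lg.info('goes to log2.log only', extra={'target': 'log2'})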

How to redirect abseil logging messages to Stackdriver using google.cloud.logging without duplicates with the wrong "label"?

I am using AI Platform Training to run an ML training job with Python 3.7.6, and the abseil module (absl-py 0.9.0) for logging messages. I followed the instructions from a Medium article on how to direct Python logging messages to Stackdriver, and I am using google-cloud-logging 1.15.0. I wrote some very basic code to understand the issue with my configuration.
from absl import logging
from absl import flags
from absl import app
import logging as logger
import google.cloud.logging
import sys
import os

FLAGS = flags.FLAGS

def main(argv):
    logging.get_absl_handler().python_handler.stream = sys.stdout
    # Instantiates a client
    client = google.cloud.logging.Client()
    # Connects the logger to the root logging handler; by default this captures
    # all logs at INFO level and higher
    client.setup_logging()
    fmt = "[%(levelname)s %(asctime)s %(filename)s:%(lineno)s] %(message)s"
    formatter = logger.Formatter(fmt)
    logging.get_absl_handler().setFormatter(formatter)
    # set level of verbosity
    logging.set_verbosity(logging.DEBUG)
    print(' 0 print --- ')
    logging.info(' 1 logging:')
    logging.info(' 2 logging:')
    print(' 3 print --- ')
    logging.debug(' 4 logging-test-debug')
    logging.info(' 5 logging-test-info')
    logging.warning(' 6 logging-test-warning')
    logging.error(' 7 logging test-error')
    print(' 8 print --- ')
    print(' 9 print --- ')

if __name__ == '__main__':
    app.run(main)
First, abseil sends all logs to stderr; I am not sure whether this is expected. In the Stackdriver output we see:
Messages printed with print are displayed (later in the Stackdriver log file).
Abseil logging messages appear twice: once with the right label in Stackdriver (DEBUG, INFO, WARNING or ERROR), and once more with the special formatting [%(levelname)s %(asctime)s %(filename)s:%(lineno)s] %(message)s but always with the ERROR label in Stackdriver.
When I run the code locally I don't see duplicates.
Any idea how to set this up properly so that the logging messages (using abseil) appear once, with the proper "label", in Stackdriver?
----- EDIT --------
I am seeing the issue locally too, not only when running on GCP.
The duplicate log messages appear when I add this line: client.setup_logging(). Before that, I have no duplicates and all log messages are in the stdout stream.
If I look at logger.root.manager.loggerDict.keys(), I see a lot of loggers:
dict_keys(['absl', 'google.auth.transport._http_client', 'google.auth.transport', 'google.auth', 'google','google.auth._default', 'grpc._cython.cygrpc', 'grpc._cython', 'grpc', 'google.api_core.retry', 'google.api_core', 'google.auth.transport._mtls_helper', 'google.auth.transport.grpc', 'urllib3.util.retry', 'urllib3.util', 'urllib3', 'urllib3.connection', 'urllib3.response', 'urllib3.connectionpool', 'urllib3.poolmanager', 'urllib3.contrib.pyopenssl', 'urllib3.contrib', 'socks', 'requests', 'google.auth.transport.requests', 'grpc._common', 'grpc._channel', 'google.cloud.logging.handlers.transports.background_thread', 'google.cloud.logging.handlers.transports', 'google.cloud.logging.handlers', 'google.cloud.logging', 'google.cloud', 'google_auth_httplib2'])
If I look at:
root_logger = logger.getLogger()
for handler in root_logger.handlers:
    print("handler ", handler)
I see:
handler <ABSLHandler (NOTSET)>
handler <CloudLoggingHandler <stderr> (NOTSET)>
handler <StreamHandler <stderr> (NOTSET)>
and we can see that the stream is stderr and not stdout. I didn't manage to change it.
I saw this discussion in a Stack Overflow thread and tried the last solution by @Andy Carlson, but then all my logging messages are gone.
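One thing worth trying, assuming the second ERROR-labelled copy comes from the ABSLHandler writing its formatted lines to stderr (which Stackdriver tags as ERROR), is to hand absl's messages over to standard Python logging so that only the CloudLoggingHandler emits them. A sketch, not a confirmed fix:

from absl import logging as absl_logging
import google.cloud.logging

client = google.cloud.logging.Client()
client.setup_logging()  # attaches a CloudLoggingHandler to the root logger

# Route absl records through the standard logging root handlers instead of
# absl's own stderr handler (quiet=True suppresses the switch-over notice)
absl_logging.use_python_logging(quiet=True)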

Python logging printing double to console

It seems that on a couple of machines I'm getting double output like this:
INFO LED NOTIFICATION STARTED
INFO:output_logger:LED NOTIFICATION STARTED
This is the function I'm using:
import logging
from logging.handlers import RotatingFileHandler

def setup_logger(name, log_file, level=logging.INFO, ContentFormat='%(asctime)s %(levelname)s %(message)s', DateTimeFormat="%Y-%m-%d %H:%M:%S", CreateConsoleLogger=False):
    """Function to set up as many loggers as you want"""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    if CreateConsoleLogger:
        # create console handler
        handler = logging.StreamHandler()
        handler.setLevel(level)
        formatter = logging.Formatter("%(levelname)s %(message)s")
        handler.setFormatter(formatter)
        logger.addHandler(handler)
    # create a file handler
    handler = RotatingFileHandler(log_file, maxBytes=2000000, backupCount=5)
    handler.setLevel(level)
    formatter = logging.Formatter(ContentFormat, DateTimeFormat)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    return logger
This is how I'm creating the logger:
output_logger = setup_logger('output_logger', 'log/autofy.log', level=logging.DEBUG, CreateConsoleLogger=True)
And this is how I call it:
output_logger.info("LED NOTIFICATION STARTED")
On most computers I just see the same message printed to the console that's saved to the file, as expected ("INFO LED NOTIFICATION STARTED"), but on other computers it does this weird double-output thing. My code is exactly the same from one computer to another, so any idea what could be causing this on some computers and not others?
EDIT
I'm writing the script using notepad++ and running it in a terminal window on an Ubuntu 16.04 machine. I'm using python3.
Try adding this to your code:
logging.getLogger('output_logger').propagate = False
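The duplicate line in the root-handler format (INFO:output_logger:...) means the record is also propagating up to a configured root logger on the affected machines (for example, something there calls logging.basicConfig()). A minimal sketch of the same fix placed inside setup_logger:

logger = logging.getLogger(name)
logger.setLevel(level)
logger.propagate = False  # don't forward records up to the root logger's handlers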

Logging to console with Pyramid pshell

In the process of developing a Pyramid web application, I've found it very useful to use the command-line pshell to load the application and interact with various code. However, log statements are not echoed on the console, and I'm not sure why.
For instance, let's say in common.utilities.general I have a function:
import logging
log = logging.getLogger(__name__)
def my_util():
    log.debug("Executing utility.")
    return "Utility was executed."
Then in my command line:
(pyenv)rook:swap nateford$ pshell src/local.ini
2015-10-08 14:44:01,081 INFO [common.orm.pymongo_core][MainThread] PyMongo Connection to replica set successful: localhost:27017
2015-10-08 14:44:01,082 INFO [common.orm.pymongo_core][MainThread] Connected to Mongo Database = turnhere
Python 3.4.3 (default, Mar 10 2015, 14:53:35)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.56)] on darwin
Type "help" for more information.
Environment:
app The WSGI application.
registry Active Pyramid registry.
request Active request object.
root Root of the default resource tree.
root_factory Default root factory used to create `root`.
>>> from common.utilities.general import my_util
>>> my_util()
'Utility was executed.'
>>>
As you can see, there is no log to the console. I would expect:
>>> from common.utilities.general import my_util
>>> my_util()
[some date/server info][DEBUG]: Executing utility.
'Utility was executed.'
>>>
Here are the (relevant) contents of my local.ini file:
<Various elided application settings>
###
# logging configuration
# http://docs.pylonsproject.org/projects/pyramid/en/1.5-branch/narr/logging.html
###
[loggers]
keys = root, common, webapp, services, sqlalchemy
[handlers]
keys = console, applog
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console, applog
[logger_common]
level = DEBUG
handlers =
qualname = common
[logger_services]
level = DEBUG
handlers =
qualname = common.services
[logger_webapp]
level = DEBUG
handlers =
qualname = webapp
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
# "level = INFO" logs SQL queries.
# "level = DEBUG" logs SQL queries and results.
# "level = WARN" logs neither. (Recommended for production systems.)
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = DEBUG
formatter = generic
[handler_applog]
class = FileHandler
args = (r'%(here)s/log/app.log','a')
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
Your root logger's level is set to INFO, which is a higher level than the DEBUG level you log your messages with. Changing the root logger's level to DEBUG should help.
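In the local.ini above, that means changing the [logger_root] section; the console handler is already at DEBUG, so only the logger level needs to change:

[logger_root]
level = DEBUG
handlers = console, applog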

Python logging not outputting correct levels?

I'm trying to add some logging to my application; however, I can't seem to get the correct log level statements to display. I have set my logger to INFO, yet it only displays warnings and errors in both the console and the log file.
Am I missing anything?
import logging
logger = logging.getLogger("mo_test")
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh = logging.FileHandler('test.log')
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
fh.setLevel(logging.INFO)
fh.setFormatter(formatter)
ch.setFormatter(formatter)
logger.addHandler(fh)
logger.addHandler(ch)
logger.info("This is an Info")
logger.debug("this is a debug")
logger.warning("This is a warning")
logger.error("Oh error")
The content of the log file and console is then only:
2015-03-09 15:32:44,601 - mo_test - WARNING - This is a warning
2015-03-09 15:32:44,601 - mo_test - ERROR - Oh error
Thanks
Set the logging level on the logger. If you don't set it, the default logging level is 30, i.e. logging.WARNING:
logger = logging.getLogger("mo_test")
logger.setLevel(logging.INFO)
The logging flow chart shows how the logger itself filters by logging level ("Logger enabled for level of call?") even before the handlers get a chance to handle the record ("Handler enabled for level of LogRecord?").
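A quick way to confirm that the logger itself is doing the filtering (a small diagnostic sketch):

import logging
lg = logging.getLogger("mo_test")
print(lg.getEffectiveLevel())  # 30, i.e. WARNING, until you call lg.setLevel(...)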
You seem to be missing a call to logging.basicConfig(). Without this call, logging is in an unconfigured state and anything can happen.
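For reference, a minimal sketch of that call. Be aware that basicConfig() attaches its own StreamHandler to the root logger, so combined with the handlers added in the question it would print every message twice, the same duplication discussed elsewhere on this page; setting the level on the logger, as in the first answer, is the safer fix.

import logging
logging.basicConfig(level=logging.INFO)  # configures a root StreamHandler; propagated records land here too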
