Different loggers for classes in python - python-3.x

I have several classes in the same module, and I want each to have its own logger (from the logging module) as a class attribute. Because of the structure of the logging module, when I add:
logger = logging.getLogger(__name__)
each of their loggers has the same name (the module), so they are actually all the same logger. I'd prefer each logger to be class-specific (so their names are something like package.module.ClassName). I'm aware "best practice" is generally to name your loggers __name__, and I'm also aware I could just name them whatever I want. But I was mainly looking for what's recommended for this use case.

Solution
Consider adding the required contextual information via logging.LoggerAdapter.
Example
import logging

class CustomAdapter(logging.LoggerAdapter):
    """
    This example adapter expects the passed in dict-like object to have a
    'connid' key, whose value in brackets is prepended to the log message.
    """
    def process(self, msg, kwargs):
        return '[%s] %s' % (self.extra['connid'], msg), kwargs

logger = logging.getLogger(__name__)
adapter = CustomAdapter(logger, {'connid': some_conn_id})  # some_conn_id supplied by the caller
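If the goal is simply a logger named package.module.ClassName, a minimal sketch (the class name MyClass is hypothetical) is to combine __name__ with the class's __qualname__, which is available inside the class body in Python 3:

import logging

class MyClass:
    # Class attribute; yields a logger named e.g. 'package.module.MyClass'.
    logger = logging.getLogger('{}.{}'.format(__name__, __qualname__))

    def do_work(self):
        self.logger.info('working')

Because the dotted name still begins with the module's name, any handlers and levels configured on the module or package logger continue to apply through the logger hierarchy.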
References
Logging cookbook (search for LoggerAdapter): https://docs.python.org/3/howto/logging-cookbook.html

Related

Pyramid routing to class methods

From the pyramid documentation, there exists an attr argument on configurator's add_view that states:
The attr value allows you to vary the method attribute used
to obtain the response. For example, if your view was a
class, and the class has a method named index and you
wanted to use this method instead of the class' __call__
method to return the response, you'd say attr="index" in the
view configuration for the view.
With this in mind, I'd like to route all requests under /myrequest to the class MyHandler. Given the following class:
@view_defaults(renderer='json')
class MyHandler(object):
    def __init__(self, request):
        self.request = request

    def start(self):
        return {'success': True}

    def end(self):
        return {'success': True}
It would seem the way to do this would be to add these lines to the configuration:
config.add_view(MyHandler, '/myrequest', attr='start')
config.add_view(MyHandler, '/myrequest', attr='end')
and so on, for all the methods I want routed under MyHandler. Unfortunately this doesn't work. The correct way to do this appears to be:
config.add_route('myroutestart', '/myroute/start')
config.add_route('myrouteend', '/myroute/end')
config.add_view(MyHandler, attr='start', route_name='myroutestart')
config.add_view(MyHandler, attr='end', route_name='myrouteend')
This seems like an awful lot of boilerplate. Is there a way to bring this down to 1 line per route? Or more ideally, 1 line per class?
Example #4 in the Route and View Examples from The Pyramid Community Cookbook v0.2, Pyramid for Pylons Users, offers the following.
# Pyramid
config.add_route("help", "/help/{action}")

@view_config(route_name="help", match_param="action=help", ...)
def help(self):  # In some arbitrary class.
    ...
Although this cookbook recipe mentions pyramid_handlers as one option to do this, the article "Outgrowing Pyramid Handlers" by one of the maintainers of Pyramid encourages the use of Pyramid's configuration.
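Applied to the MyHandler class from the question, that recipe collapses the boilerplate to one route per class. A minimal, untested sketch (the route name myrequest is illustrative):

from pyramid.view import view_config, view_defaults

# One route covers every action on the class.
config.add_route('myrequest', '/myrequest/{action}')

@view_defaults(route_name='myrequest', renderer='json')
class MyHandler(object):
    def __init__(self, request):
        self.request = request

    @view_config(match_param='action=start')
    def start(self):
        return {'success': True}

    @view_config(match_param='action=end')
    def end(self):
        return {'success': True}

A config.scan() is still required so the decorators are registered.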

Parent class to expose standard methods, child class to provide sub-methods to do the work

I'd like to set up a parent class that defines a standard interface and performs common things for all children instances. However, each child will have different specifics for how these methods get the job done. For example, the parent class would provide standard methods as follows:
class Camera():
    camera_type = None

    def __init__(self, save_to=None):
        self.file_loc = save_to

    def connect(self):
        self.cam_connect()
        with open(self.file_loc, 'w'):
            pass  # do something common to all cameras

    def start_record(self):
        self.cam_start_record()
        # do something common to all cameras
Each of these methods refers to another method located only in the child. The child classes will have the actual details on how to perform the task required, which may include the combination of several methods. For example:
class AmazingCamera(Camera):
    camera_type = 'Amazing Camera'

    def __init__(self, host_ip='10.10.10.10', **kwargs):
        super(AmazingCamera, self).__init__(**kwargs)
        self.host_ip = host_ip

    def cam_connect(self):
        print('I are connectifying to {}'.format(self.host_ip))
        # do a bunch of custom things including calling other
        # local methods to get the job done.

    def cam_start_record(self):
        print('Recording from {}'.format(self.host_ip))
        # do a bunch more things specific to this camera

### etc...
With the outcome of the above providing an interface such as:
mycamera = AmazingCamera(host_ip='1.2.3.4', save_to='/tmp/asdf')
mycamera.connect()
mycamera.start_record()
I understand fully that I can simply override the parent methods, but in cases where the parent methods do other things like handling files, I'd prefer not to have to. What I have above seems to work just fine so far, but before I continue building this I'd like to know whether there is a better, more Pythonic way to achieve what I'm after.
TIA!
I opted to keep the standard methods identical between the parent and child and minimize the use of child-specific helper methods. Just seemed cleaner.
As an example:
class Camera():
    camera_type = None

    def connect(self):
        with open(self.file_loc, 'w'):
            pass  # do something common to all cameras
Then in the child I'm overriding the methods, but calling the method of the parent in the override as follows:
class AmazingCamera(Camera):
    camera_type = 'Amazing Camera'

    def connect(self):
        print('I are connectifying to {}'.format(self.host_ip))
        # call the parent's method
        super().connect()
        # do a bunch of custom things specific to
        # AmazingCamera
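For reference, the hook-method design in the original question is essentially the template method pattern, and the contract can be made explicit with abc.abstractmethod. A minimal sketch along those lines:

from abc import ABC, abstractmethod

class Camera(ABC):
    camera_type = None

    def __init__(self, save_to=None):
        self.file_loc = save_to

    def connect(self):
        self.cam_connect()  # child-specific hook
        # ... do something common to all cameras ...

    @abstractmethod
    def cam_connect(self):
        """Each camera subclass must supply its own connection logic."""

class AmazingCamera(Camera):
    camera_type = 'Amazing Camera'

    def cam_connect(self):
        print('Connecting...')

With this, instantiating a subclass that forgets to implement cam_connect fails immediately with a TypeError rather than with an AttributeError at call time.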

best practices for passing initialization arguments to superclasses?

I'm trying to figure out the best way to initialize sub/superclasses in Python3. Both the base and subclasses will take half a dozen parameters, all of which will be parsed from command line arguments.
The obvious way to implement this is to parse all the args at once, and pass them all in:
class Base:
    def __init__(self, base_arg1, base_arg2, base_arg3):
        ...

class Sub(Base):
    def __init__(self, sub_arg1, sub_arg2, sub_arg3,
                 base_arg1, base_arg2, base_arg3):
        super().__init__(base_arg1, base_arg2, base_arg3)

def main():
    # parse args here
    options = parser.parse_args()
    obj = Sub(options.sub_arg1, options.sub_arg2, options.sub_arg3,
              options.base_arg1, options.base_arg2, options.base_arg3)
If I have a Sub-subclass (which I will), things get really hairy in terms of the list of arguments passed up through successive super().__init__() calls.
But it occurs to me that argparse.parse_known_args() offers another path: I could have each subclass parse out the arguments it needs/recognizes and pass the rest of the arguments up the hierarchy:
class Base:
    def __init__(self, args):
        base_options, _ = base_parser.parse_known_args(args)

class Sub(Base):
    def __init__(self, args):
        (sub_options, other_args) = sub_parser.parse_known_args(args)
        super().__init__(other_args)

def main():
    obj = Sub(sys.argv[1:])
This seems cleaner from an API point of view. But I can imagine that it violates some tenet of The Way Things Are Done In Python and is a bad idea for all sorts of reasons. My search of the web has not turned up any examples either way - could the mighty and all-knowing mind of Stack Overflow help me understand the Right Way to do this?
Look inside the argparse.py code. An ArgumentParser is a subclass of an _ActionsContainer. All the actions are subclasses of Action.
When you call
parser.add_argument('foo', action='store', ...)
the parameters are passed, mostly as *args and **kwargs, to _StoreAction, which in turn passes them on to its super (after setting some defaults, etc.).
As a module that is meant to be imported, and never run as a standalone script, it does not have an if __name__... block. But often I'll include such a block to invoke test code. That's the place to put the command-line parser, or at least to invoke it. It might be defined in a function in the body, but it normally shouldn't be called when the module is imported.
In general argparse is a scripting tool and shouldn't be part of a class definition, unless you are subclassing ArgumentParser to add some new functionality.
You might also want to look at https://pypi.python.org/pypi/plac. This package provides a different interface to argparse, and is a good example of subclassing this parser.
Thanks hpaulj! I think your response helped me figure out an even simpler way to go about it. I can parse all the options at the top level, then just pass the option namespace in, and let each subclass pull out the ones it needs. Kind of face-palm simple, compared to the other approaches:
class Base:
    def __init__(self, options):
        self.base_arg1 = options.base_arg1
        self.base_arg2 = options.base_arg2

class Sub(Base):
    def __init__(self, options):
        super().__init__(options)  # initialize base class
        self.sub_arg1 = options.sub_arg1
        self.sub_arg2 = options.sub_arg2

def main():
    options = parser.parse_args()
    obj = Sub(options)
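A self-contained version of that final approach, with the parser wired up (the argument names are made up for illustration):

import argparse

class Base:
    def __init__(self, options):
        self.base_arg1 = options.base_arg1

class Sub(Base):
    def __init__(self, options):
        super().__init__(options)
        self.sub_arg1 = options.sub_arg1

def main():
    parser = argparse.ArgumentParser()
    # argparse turns '--base-arg1' into the attribute name 'base_arg1'
    parser.add_argument('--base-arg1', default='base default')
    parser.add_argument('--sub-arg1', default='sub default')
    options = parser.parse_args()
    obj = Sub(options)
    print(obj.base_arg1, obj.sub_arg1)

if __name__ == '__main__':
    main()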

Log4J pass logger to helper class

I have two classes with one logger each, and each logger has one appender writing to its own logfile.
Both classes call the same helper class, and within the helper class I want to do some logging. How can I make the helper class write its logs to the logfile of whichever class called it?
The only way I can think of is to pass the logger instance (or the logger's name) to the helper class. But isn't there a better way to do that? Up to now I declare one new logger inside the helper class, but then the logs do not go to the right appender.
Hopefully you understand my question.
Many greetings,
Hauke

Avoiding of printing full package name of method in log4j

I have an API that uses log4j for logging. When I use the API in my project, log statements from my own project are printed with only the method name, but log statements coming from the API are printed with the fully-qualified package name.
In the log4j.properties file I am using "%c" (lowercase).
How can I force all log statements to be printed with only the class and method name?
Let's say I have two classes, Main.java and AlarmCategoryImpl.java. AlarmCategoryImpl.java is located in the API, and Main.java is defined in my project:
static Logger logger = Logger.getLogger(AlarmCategoryImpl.class);
static Logger logger = Logger.getLogger(Main.class);
and this is the log4j output:
2012-12-01/18:13:22.220/EET [INFO][Main->main] starting...
2012-12-01/18:13:22.447/EET [INFO][com.monitor.base.alarmmanagement.alarmconfigurationImpl.AlarmCategoryImpl->copyStructureRecursive] Copying AlarmCategoryImpl
%c means "category name", which is synonymous with "logger name". That means that %c will be expanded to the logger's name.
The logger name is not necessarily the fully-qualified class name. The logger name is the string that is passed in to Logger.getLogger(). Therefore, if you have a class named x.y.z.MyClass, and it has this:
private static final Logger logger = Logger.getLogger("hello");
Then log statements will be generated with %c expanded to hello.
That means that the classes in your API are using getLogger(), passing the class name as a parameter. That causes %c to be expanded to the fully-qualified class name when the logs print.
I'm guessing that your non-API classes (in other words, your own project's classes) don't pass in any value to Logger.getLogger(), or perhaps they use the root logger. To be sure, paste here the line of your code that retrieves the Logger instance.
EDIT as per comment:
Well, is it possible that your Main class is in the default package (that is, it is not associated with any package)? If so, then I don't see any problem:
[INFO][Main->main]: INFO is the level, Main is the class, main is the method.
[INFO][com.monitor.base.alarmmanagement.alarmconfigurationImpl.AlarmCategoryImpl->copyStructureRecursive]: INFO is the level, com.monitor.base.alarmmanagement.alarmconfigurationImpl.AlarmCategoryImpl is the class, copyStructureRecursive is the method.
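As an aside, if the goal is to shorten the package portion, log4j's PatternLayout accepts a precision specifier on %c: %c{1} prints only the rightmost component of the logger name. A sketch of such a conversion pattern (the appender name A1 is hypothetical):

# %c{1} = last component of the logger name, %M = method name, %p = level
log4j.appender.A1.layout.ConversionPattern=%d [%p][%c{1}->%M] %m%n

Note that %M relies on location information, which the log4j documentation warns is expensive to generate.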
