I'm trying to create a Python class that inherits from the Elasticsearch class and builds on it with some custom methods. The issue I'm facing is that I'd like the class constructor to connect to the server, so initialisation is simple. Usually, connecting to the server looks something like this:
from elasticsearch import Elasticsearch
es = Elasticsearch([{'host': 'XXXXXXX', 'port': XXXX}])
In my class, which I'm calling "Elastic", I'd like to connect to the server and return the Elasticsearch object upon initialisation of the class, i.e.:
es = Elastic()
which I can then use to perform existing Elasticsearch class methods, and my own custom operations, e.g.:
es.search() # existing class method
es.custom_method_example1() # new class method
I've been trying and failing to come up with a way to do this. My most recent attempt involved the __new__ dunder method, so that I could return the connected es object as the new class:
class Elastic(Elasticsearch):
    def __new__(cls, timeout=10, max_retries=5, retry_on_timeout=True, *args, **kwargs):
        """Connect to our ES server."""
        return Elasticsearch([{'host': 'XXXXX', 'port': XXXX}], timeout=timeout,
                             max_retries=max_retries, retry_on_timeout=retry_on_timeout,
                             *args, **kwargs)

    def custom_method_example1(self, *args, **kwargs):
        """
        Performs some custom method that wasn't possible with the standalone
        Elasticsearch class.
        """
Firstly, it doesn't work:
AttributeError: 'Elasticsearch' object has no attribute 'custom_method_example1'
It seems that it's no longer inheriting from the class but replacing it?
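That replacement can be reproduced with small stand-in classes (no Elasticsearch needed): when __new__ returns an instance of a different class, Python skips __init__ and none of the subclass's methods exist on the returned object. The class names here are illustrative:

```python
class Connection:
    """Stand-in for the Elasticsearch client class."""

class Elastic(Connection):
    def __new__(cls, *args, **kwargs):
        # Like the question's __new__: returns a plain base-class
        # instance instead of letting Python create an Elastic.
        return Connection()

    def custom_method_example1(self):
        return 'custom'

es = Elastic()
print(type(es).__name__)                      # Connection, not Elastic
print(hasattr(es, 'custom_method_example1'))  # False
```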
And secondly, I gather from reading around that __new__ generally doesn't have much use (particularly for amateur programmers like me), so I'm probably taking the wrong approach and overcomplicating things here. If anyone knows the "right" way to do this it would be much appreciated. I've been reading a bit about the factory design pattern and it seems like the right way to go in general, but I'm still making sense of it all (I'm an analyst by trade). I figure decorators might come into use somewhere?
Thanks and sorry for the waffle
Indeed I was very much overcomplicating it. I hadn't considered that class inheritance includes inheritance of the constructor itself, so I can instantiate the subclass Elastic just as I was the parent Elasticsearch:
from elasticsearch import Elasticsearch
class Elastic(Elasticsearch):
    def custom_method_example1(self, *args, **kwargs):
        """
        Performs some custom method that wasn't possible with the standalone
        Elasticsearch class.
        """
To initialise the class and call its methods:
es = Elastic([{'host': 'XXXXX', 'port': XXXX}], timeout=10, max_retries=5, retry_on_timeout=True)
es.custom_method_example1()
EDIT: I still had the issue of wanting to set new default parameters for my subclass constructor. I've now found out how to do this using super(), which explicitly calls the parent's constructor, passing on the arguments I set in the subclass constructor, leaving me with:
from elasticsearch import Elasticsearch
class Elastic(Elasticsearch):
    def __init__(self, hosts=[{'host': 'XXXXXX', 'port': XXX}],
                 timeout=10, max_retries=5, retry_on_timeout=True):
        super().__init__(hosts=hosts, timeout=timeout, max_retries=max_retries,
                         retry_on_timeout=retry_on_timeout)

    def custom_method_example1(self, *args, **kwargs):
        """
        Performs some custom method that wasn't possible with the standalone
        Elasticsearch class.
        """
allowing me to initialise the class like so:
es = Elastic()
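The pattern can be exercised without a live server by substituting a stand-in base class. The names below (FakeClient, Client, the localhost default) are illustrative, not the real Elasticsearch API:

```python
class FakeClient:
    """Stand-in for Elasticsearch: just records its settings."""
    def __init__(self, hosts=None, timeout=None, max_retries=None):
        self.hosts = hosts
        self.timeout = timeout
        self.max_retries = max_retries

class Client(FakeClient):
    def __init__(self, hosts=None, timeout=10, max_retries=5):
        # fill in our preferred defaults, then defer to the parent constructor
        if hosts is None:
            hosts = [{'host': 'localhost', 'port': 9200}]
        super().__init__(hosts=hosts, timeout=timeout, max_retries=max_retries)

    def custom_method_example1(self):
        return 'works'

c = Client()                       # no arguments needed
print(c.timeout, c.max_retries)    # 10 5
print(c.custom_method_example1())  # works
```

Using a None sentinel instead of a mutable list default also sidesteps the shared-default-argument pitfall.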
From the Pyramid documentation, there exists an attr argument on the Configurator's add_view that states:
The attr value allows you to vary the method attribute used to obtain the response. For example, if your view was a class, and the class has a method named index and you wanted to use this method instead of the class' __call__ method to return the response, you'd say attr="index" in the view configuration for the view.
With this in mind, I'd like to route all requests under /myrequest to the class MyHandler. Given the following class:
@view_defaults(renderer='json')
class MyHandler(object):
    def __init__(self, request):
        self.request = request

    def start(self):
        return {'success': True}

    def end(self):
        return {'success': True}
It would seem the way to do this would be to add these lines to the configuration:
config.add_view(MyHandler, '/myrequest', attr='start')
config.add_view(MyHandler, '/myrequest', attr='end')
and so on, for all the methods I want routed under MyHandler. Unfortunately this doesn't work. The correct way to do this appears to be:
config.add_route('myroutestart', '/myroute/start')
config.add_route('myrouteend', '/myroute/end')
config.add_view(MyHandler, attr='start', route_name='myroutestart')
config.add_view(MyHandler, attr='end', route_name='myrouteend')
This seems like an awful lot of boilerplate. Is there a way to bring this down to 1 line per route? Or more ideally, 1 line per class?
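One way to cut the boilerplate down is a small helper that registers one route/view pair per method. add_handler_views below is hypothetical (not a Pyramid API), and FakeConfig is a stand-in that only records the calls a real Configurator would receive, with simplified signatures:

```python
def add_handler_views(config, handler_cls, prefix, attrs):
    # hypothetical helper: one add_route/add_view pair per listed method
    for attr in attrs:
        route_name = '{}.{}'.format(handler_cls.__name__, attr)
        config.add_route(route_name, '{}/{}'.format(prefix, attr))
        config.add_view(handler_cls, attr=attr, route_name=route_name)

class FakeConfig:
    """Stand-in that records calls instead of configuring Pyramid."""
    def __init__(self):
        self.calls = []
    def add_route(self, name, pattern):
        self.calls.append(('route', name, pattern))
    def add_view(self, view, attr, route_name):
        self.calls.append(('view', view.__name__, attr, route_name))

class MyHandler(object):
    def start(self):
        return {'success': True}
    def end(self):
        return {'success': True}

cfg = FakeConfig()
add_handler_views(cfg, MyHandler, '/myroute', ['start', 'end'])
print(cfg.calls[0])  # ('route', 'MyHandler.start', '/myroute/start')
```

With a real Configurator this is one line per class plus the list of method names.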
Example #4 in the Route and View Examples from The Pyramid Community Cookbook v0.2, Pyramid for Pylons Users, offers the following.
# Pyramid
config.add_route("help", "/help/{action}")

@view_config(route_name="help", match_param="action=help", ...)
def help(self):  # in some arbitrary class
    ...
Although this cookbook recipe mentions pyramid_handlers as one option to do this, the article "Outgrowing Pyramid Handlers" by one of the maintainers of Pyramid encourages the use of Pyramid's configuration.
What is the best way to implement abstract classes in Python?
This is the main approach I have seen:
from abc import ABC, abstractmethod

class A(ABC):
    @abstractmethod
    def foo(self):
        pass
However, it does not prevent the abstract method from being called when you extend that class. In Java you get an error if you try something similar, but not in Python:
class B(A):
    def foo(self):
        super().foo()

B().foo()  # does not raise an error
In order to replicate the same Java's behaviour, you could adopt this approach:
class A(ABC):
    @abstractmethod
    def foo(self):
        raise NotImplementedError
However, in practice I have rarely seen this latter solution, even though it is apparently the more correct one. Is there a specific reason to prefer the first approach over the second?
If you really want an error to be raised when a subclass calls the superclass's abstract method, then yes, you should raise it manually. (And prefer passing an instance of the exception class to the raise command, raise NotImplementedError(), even though raising the class directly also works.)
However, the existing behavior is actually convenient: if your abstract method contains just a pass, then you can have any number of subclasses inheriting from your base class, and as long as at least one of them implements the abstract method, it will work, even if all of them call the super() equivalent method without checking anything else.
If an error (NotImplementedError or any other) were raised, then in a complex hierarchy making use of mixins and such, each time you called super() you would need to check whether the error was raised, just to skip it. For the record, it is possible to check with a conditional whether super() would hit the class where the method is abstract:
if not getattr(super().foo, "__isabstractmethod__", False):
    super().foo(...)
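A runnable version of that guard, with illustrative class names (abstractmethod sets __isabstractmethod__ on the function, and bound methods forward attribute lookups to it):

```python
from abc import ABC, abstractmethod

class A(ABC):
    @abstractmethod
    def foo(self):
        raise NotImplementedError

class B(A):
    def foo(self):
        # only delegate upward if the parent implementation is real
        if not getattr(super().foo, "__isabstractmethod__", False):
            super().foo()
        return 'B handled it'

print(B().foo())  # B handled it; the abstract parent is never called
```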
Since what you want when you reach the base of the hierarchy is for the method to do nothing, it is far simpler if nothing just happens!
I mean, check this:
import abc

class A(abc.ABC):
    @abc.abstractmethod
    def validate(self, **kwargs):
        pass

class B(A):
    def validate(self, *, first_arg_for_B, second_arg_for_B=None, **kwargs):
        super().validate(**kwargs)
        # perform validation:
        ...

class C(A):
    def validate(self, *, first_arg_for_C, **kwargs):
        super().validate(**kwargs)
        # perform validation:
        ...

class Final(B, C):
    ...
Neither B.validate nor C.validate need to worry about any other class in the hierarchy, just do their thing and pass on.
If A.validate raised, both methods would have to wrap super().validate(...) in a try: ... except: pass statement, or in a weird if block, for a gain of... nothing.
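The cooperative chain can be verified with a runnable reduction of the example above, recording calls in a list instead of doing real validation (the argument names are the placeholders from the example):

```python
from abc import ABC, abstractmethod

class A(ABC):
    @abstractmethod
    def validate(self, **kwargs):
        pass  # quiet end-point for the cooperative super() chain

class B(A):
    def validate(self, *, first_arg_for_B=None, **kwargs):
        super().validate(**kwargs)
        self.seen.append(('B', first_arg_for_B))

class C(A):
    def validate(self, *, first_arg_for_C=None, **kwargs):
        super().validate(**kwargs)
        self.seen.append(('C', first_arg_for_C))

class Final(B, C):
    def __init__(self):
        self.seen = []

f = Final()
f.validate(first_arg_for_B=1, first_arg_for_C=2)
print(f.seen)  # [('C', 2), ('B', 1)]
```

Each class consumes only its own keyword arguments and passes the rest along the MRO; the pass-only abstract method quietly terminates the chain.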
update: I just found this note in the official documentation:
Note: Unlike Java abstract methods, these abstract methods may have an implementation. This implementation can be called via the super() mechanism from the class that overrides it. This could be useful as an end-point for a super-call in a framework that uses cooperative multiple-inheritance.
https://docs.python.org/3/library/abc.html#abc.abstractmethod
I will even ask you a question in return, if you can reply in the comments: I understand this is much less relevant in Java, where one can't have multiple inheritance, so even in a big hierarchy the first subclass to implement the abstract method would usually be well known. But otherwise, in a Java project where one could pick one of various concrete base classes and proceed with others in an arbitrary order, since the abstract method raises, how is that resolved?
I'd like to set up a parent class that defines a standard interface and performs common things for all children instances. However, each child will have different specifics for how these methods get the job done. For example, the parent class would provide standard methods as follows:
class Camera():
    camera_type = None

    def __init__(self, save_to=None):
        self.file_loc = save_to

    def connect(self):
        self.cam_connect()
        with open(self.file_loc, 'w'):
            # do something common to all cameras
            ...

    def start_record(self):
        self.cam_start_record()
        # do something common to all cameras
Each of these methods refers to another method located only in the child. The child classes will have the actual details on how to perform the task required, which may include the combination of several methods. For example:
class AmazingCamera(Camera):
    camera_type = 'Amazing Camera'

    def __init__(self, host_ip='10.10.10.10', **kwargs):
        super(AmazingCamera, self).__init__(**kwargs)
        self.host_ip = host_ip

    def cam_connect(self):
        print('I are connectifying to {}'.format(self.host_ip))
        # do a bunch of custom things including calling other
        # local methods to get the job done.

    def cam_start_record(self):
        print('Recording from {}'.format(self.host_ip))
        # do a bunch more things specific to this camera

    ### etc...
With the outcome of the above providing an interface such as:
mycamera = AmazingCamera(host_ip='1.2.3.4', save_to='/tmp/asdf')
mycamera.connect()
mycamera.start_record()
I fully understand that I can simply override the parent methods, but in cases where the parent methods do other things, like handling files, I'd prefer not to have to. What I have above seems to work fine so far, but before I continue building this I'd like to know whether there is a better, more Pythonic way to achieve what I'm after.
TIA!
I opted to keep the standard methods identical between the parent and child and minimize the use of child-specific helper methods. Just seemed cleaner.
As an example:
class Camera():
    camera_type = None

    def connect(self):
        with open(self.file_loc, 'w'):
            # do something common to all cameras
            ...
Then in the child I'm overriding the methods, but calling the method of the parent in the override as follows:
class AmazingCamera(Camera):
    camera_type = 'Amazing Camera'

    def connect(self):
        print('I are connectifying to {}'.format(self.host_ip))
        # call the parent's method
        super().connect()
        # do a bunch of custom things specific to
        # AmazingCamera
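A minimal runnable sketch of that override-and-delegate arrangement, with the file handling replaced by log entries so the call flow is visible (names and ordering are illustrative):

```python
class Camera:
    camera_type = None

    def __init__(self, save_to=None):
        self.file_loc = save_to
        self.log = []

    def connect(self):
        # common setup shared by all cameras
        self.log.append('common setup: {}'.format(self.file_loc))

class AmazingCamera(Camera):
    camera_type = 'Amazing Camera'

    def __init__(self, host_ip='10.10.10.10', **kwargs):
        super().__init__(**kwargs)
        self.host_ip = host_ip

    def connect(self):
        # camera-specific work first, then the shared parent logic
        self.log.append('connect to {}'.format(self.host_ip))
        super().connect()

cam = AmazingCamera(host_ip='1.2.3.4', save_to='/tmp/asdf')
cam.connect()
print(cam.log)  # ['connect to 1.2.3.4', 'common setup: /tmp/asdf']
```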
Let's say I have a class that requires some arguments via __init_subclass__:
class AbstractCar:
    def __init__(self):
        self.engine = self.engine_class()

    def __init_subclass__(cls, *, engine_class, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.engine_class = engine_class

class I4Engine:
    pass

class V6Engine:
    pass

class Compact(AbstractCar, engine_class=I4Engine):
    pass

class SUV(AbstractCar, engine_class=V6Engine):
    pass
Now I want to derive another class from one of those derived classes:
class RedCompact(Compact):
    pass
The above does not work, because it expects me to re-provide the engine_class parameter. Now, I understand perfectly, why that happens. It is because the Compact inherits __init_subclass__ from AbstractCar, which is then called when RedCompact inherits from Compact and is subsequently missing the expected argument.
I find this behavior rather non-intuitive. After all, Compact specifies all the required arguments for AbstractCar and should be usable as a fully realized class. Am I completely wrong to expect this? Is there some other mechanism that lets me achieve this kind of behavior?
I already have two solutions but I find both lacking. The first one adds a new __init_subclass__ to Compact:
class Compact(AbstractCar, engine_class=I4Engine):
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(engine_class=I4Engine, **kwargs)
This works but it shifts responsibility for the correct working of the AbstractCar class from the writer of that class to the user. Also, it violates DRY as the engine specification is now in two places that must be kept in sync.
My second solution overrides __init_subclass__ in derived classes:
class AbstractCar:
    def __init__(self):
        self.engine = self.engine_class()

    def __init_subclass__(cls, *, engine_class, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.engine_class = engine_class

        @classmethod
        def evil_black_magic(cls, **kwargs):
            AbstractCar.__init_subclass__(engine_class=engine_class, **kwargs)

        if '__init_subclass__' not in cls.__dict__:
            cls.__init_subclass__ = evil_black_magic
While this works fine for now, it is purest black magic and bound to cause trouble down the road. I feel like this cannot be the solution to my problem.
Indeed—the way this works in Python is counter-intuitive—I agree with you on your reasoning.
The way to go to fix it is to have some logic in the metaclass. Which is a pity, since avoiding the need for metaclasses is exactly what __init_subclass__ was created for.
Even with metaclasses it would not be an easy thing—one would have to annotate the parameters given to __init_subclass__ somewhere in the class hierarchy, and then insert those back when creating new subclasses.
On second thought, that can work from within __init_subclass__ itself. That is: when __init_subclass__ "perceives" it did not receive a parameter that should have been mandatory, it checks for it in the classes in the mro (mro "method resolution order"—a sequence with all base classes, in order).
In this specific case, it can just check for the attribute itself—if it is already defined for at least one class in the mro, just leave it as is, otherwise raises.
If the code in __init_subclass__ should do something more complex than simply annotating the parameter as passed, then, besides that, the parameter should be stored in an attribute in the new class, so that the same check can be performed downstream.
In short, for your code:
class AbstractCar:
    def __init__(self):
        self.engine = self.engine_class()

    def __init_subclass__(cls, *, engine_class=None, **kwargs):
        super().__init_subclass__(**kwargs)
        if engine_class:
            cls.engine_class = engine_class
            return
        for base in cls.__mro__[1:]:
            if getattr(base, "engine_class", False):
                return
        raise TypeError("parameter 'engine_class' must be supplied as a class named argument")
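A self-contained check of that approach (classes repeated so the snippet runs on its own): a second-level subclass no longer has to repeat engine_class, while omitting it at the first level still fails:

```python
class AbstractCar:
    def __init__(self):
        self.engine = self.engine_class()

    def __init_subclass__(cls, *, engine_class=None, **kwargs):
        super().__init_subclass__(**kwargs)
        if engine_class:
            cls.engine_class = engine_class
            return
        # not passed explicitly: accept it if some base already set it
        for base in cls.__mro__[1:]:
            if getattr(base, "engine_class", False):
                return
        raise TypeError("parameter 'engine_class' must be supplied "
                        "as a class named argument")

class I4Engine:
    pass

class Compact(AbstractCar, engine_class=I4Engine):
    pass

class RedCompact(Compact):  # no engine_class needed any more
    pass

print(type(RedCompact().engine).__name__)  # I4Engine

try:
    class Careless(AbstractCar):  # no engine anywhere in the MRO
        pass
except TypeError:
    print('still raises for a direct subclass')
```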
I think this is a nice solution. It could be made more general with a decorator meant specifically for __init_subclass__ that could store the parameters in a named class attribute and perform this check automatically.
(I wrote the code for such a decorator, but handling all the corner cases for named and unnamed parameters, even using the inspect module, can make things ugly.)
I'm trying to figure out the best way to initialize sub/superclasses in Python3. Both the base and subclasses will take half a dozen parameters, all of which will be parsed from command line arguments.
The obvious way to implement this is to parse all the args at once, and pass them all in:
class Base:
    def __init__(self, base_arg1, base_arg2, base_arg3):
        ...

class Sub(Base):
    def __init__(self, sub_arg1, sub_arg2, sub_arg3,
                 base_arg1, base_arg2, base_arg3):
        super().__init__(base_arg1, base_arg2, base_arg3)
        ...

def main():
    # parse args here
    options = parser.parse_args()
    obj = Sub(options.sub_arg1, options.sub_arg2, options.sub_arg3,
              options.base_arg1, options.base_arg2, options.base_arg3)
If I have a Sub-subclass (which I will), things get really hairy in terms of the list of arguments passed up through successive super().__init__() calls.
But it occurs to me that argparse.parse_known_args() offers another path: I could have each subclass parse out the arguments it needs/recognizes and pass the rest of the arguments up the hierarchy:
class Base:
    def __init__(self, args):
        (base_options, _) = base_parser.parse_known_args(args)

class Sub(Base):
    def __init__(self, args):
        (sub_options, other_args) = sub_parser.parse_known_args(args)
        super().__init__(other_args)

def main():
    obj = Sub(sys.argv[1:])
This seems cleaner from an API point of view. But I can imagine that it violates some tenet of The Way Things Are Done In Python and is a bad idea for all sorts of reasons. My search of the web has not turned up any examples either way - could the mighty and all-knowing mind of Stack Overflow help me understand the Right Way to do this?
Look inside the argparse.py code. An ArgumentParser is a subclass of an _ActionsContainer. All the actions are subclasses of Action.
When you call
parser.add_argument('foo', action='store', ...)
the parameters are passed, mostly as *args and **kwargs, to _StoreAction, which in turn passes them on to its super (after setting some defaults, etc.).
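That return value can be inspected directly. Note that _StoreAction is a CPython implementation detail of argparse, shown here only to illustrate the internals described above:

```python
import argparse

parser = argparse.ArgumentParser()
action = parser.add_argument('foo')  # the default action is 'store'

print(type(action).__name__)  # _StoreAction (implementation detail)
print(action.dest)            # foo
```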
As a module that is meant to be imported, and never run as a standalone script, it does not have an if __name__ == '__main__': block. But often I'll include such a block to invoke test code. That's the place to put the command-line parser, or at least to invoke it. It might be defined in a function in the body, but it normally shouldn't be called when the module is imported.
In general argparse is a scripting tool, and shouldn't be part of a class definition - unless you are subclassing ArgumentParser to add some new functionality.
You might also want to look at https://pypi.python.org/pypi/plac. This package provides a different interface to argparse, and is a good example of subclassing this parser.
Thanks hpaulj! I think your response helped me figure out an even simpler way to go about it. I can parse all the options at the top level, then just pass the option namespace in, and let each subclass pull out the ones it needs. Kind of face-palm simple, compared to the other approaches:
class Base:
    def __init__(self, options):
        self.base_arg1 = options.base_arg1
        self.base_arg2 = options.base_arg2

class Sub(Base):
    def __init__(self, options):
        super().__init__(options)  # initialize base class
        self.sub_arg1 = options.sub_arg1
        self.sub_arg2 = options.sub_arg2

def main():
    options = parser.parse_args()
    obj = Sub(options)
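An end-to-end runnable version of this pattern, using illustrative argument names and an explicit argument list in place of the real command line:

```python
import argparse

class Base:
    def __init__(self, options):
        self.base_arg1 = options.base_arg1

class Sub(Base):
    def __init__(self, options):
        super().__init__(options)  # note: no explicit self argument
        self.sub_arg1 = options.sub_arg1

parser = argparse.ArgumentParser()
parser.add_argument('--base-arg1')   # argparse stores this as base_arg1
parser.add_argument('--sub-arg1')

options = parser.parse_args(['--base-arg1', 'a', '--sub-arg1', 'b'])
obj = Sub(options)
print(obj.base_arg1, obj.sub_arg1)  # a b
```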