I'm creating interface mixins containing methods that raise an error if they are not implemented. However, this only happens at run time, when the method is called; I would like Python to check whether the method is implemented before run time.
class TestInterface():
    def get_testing_name(self):
        raise NotImplementedError

    def do_something(self):
        return self.get_testing_name()

class Testing(TestInterface):
    def __init__(self):
        super().do_something()
In my Testing class I didn't define the get_testing_name method, hence it raises NotImplementedError. However, this only happens at run time.
How do I make Python check whether a method is not implemented before run time?
I'm not sure I understood you correctly. Maybe this is what you wanted:
try:
    t = Testing()
except NotImplementedError:
    print("Fail")
Related
What is the best way to implement abstract classes in Python?
This is the main approach I have seen:
from abc import ABC, abstractmethod

class A(ABC):
    @abstractmethod
    def foo(self):
        pass
However, it does not prevent you from calling the abstract method when you extend that class.
In Java you get an error if you try to do something similar, but not in Python:
class B(A):
    def foo(self):
        super().foo()

B().foo()  # does not raise an error
In order to replicate Java's behaviour, you could adopt this approach:
class A(ABC):
    @abstractmethod
    def foo(self):
        raise NotImplementedError
However, in practice I have rarely seen this latter solution, even though it is apparently the more correct one. Is there a specific reason to prefer the first approach over the second one?
If you really want the error to be raised when one of the subclasses calls the superclass's abstract method, then yes, you should raise it manually. (And then, pass an instance of the exception class to the raise command, raise NotImplementedError(), even though it also works with the class directly.)
However, the existing behavior is actually convenient: if your abstract method contains just a pass, then you can have any number of subclasses inheriting from your base class, and as long as at least one of them implements the abstract method, it will work, even if all of them call the super() equivalent method without checking anything else.
If an error (NotImplementedError or any other) were raised in a complex hierarchy making use of mixins and the like, you'd need to check, every time you called super(), whether the error was raised, just to skip it. For the record, checking with a conditional whether super() would hit the class where the method is abstract is possible, this way:
if not getattr(super().foo, "__isabstractmethod__", False):
    super().foo(...)
Since what you want when you reach the base of the hierarchy for a method is for it to do nothing, it is far simpler if nothing simply happens!
I mean, check this:
import abc
from abc import abstractmethod

class A(abc.ABC):
    @abstractmethod
    def validate(self, **kwargs):
        pass

class B(A):
    def validate(self, *, first_arg_for_B, second_arg_for_B=None, **kwargs):
        super().validate(**kwargs)
        # perform validation:
        ...

class C(A):
    def validate(self, *, first_arg_for_C, **kwargs):
        super().validate(**kwargs)
        # perform validation:
        ...

class Final(B, C):
    ...
Neither B.validate nor C.validate needs to worry about any other class in the hierarchy; each just does its own thing and passes the rest on.
If A.validate raised, both methods would have to wrap super().validate(...) inside a try: ...; except ...: pass statement, or inside a weird if block, for the gain of... nothing.
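To make the cooperative chain concrete, here is a small usage sketch of the classes above (the keyword names are just the placeholders from the example):

f = Final()
# MRO is Final -> B -> C -> A: B.validate consumes its own keyword
# arguments and forwards the rest, C.validate does the same, and
# A.validate silently ends the chain instead of raising.
f.validate(first_arg_for_B=1, first_arg_for_C=2)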
update - I just found this note in the official documentation:
Note: Unlike Java abstract methods, these abstract methods may have an implementation. This implementation can be called via the super() mechanism from the class that overrides it. This could be useful as an end-point for a super-call in a framework that uses cooperative multiple-inheritance.
https://docs.python.org/3/library/abc.html#abc.abstractmethod
I will even return a personal question to you, if you can reply in the comments: I understand this is much less relevant in Java, where one can't have multiple inheritance, so even in a big hierarchy the first subclass to implement the abstract method would usually be well known. But otherwise, in a Java project where one could pick one of various concrete base classes and proceed with others in an arbitrary order, since the abstract method raises, how is that resolved?
In JVM languages there is a data type called Optional, which expresses that a value can be null/None. By declaring a function's return type as Optional (for example Optional<Int>), the calling code can take action based on whether a value is present.
How does one implement a similar approach in Python?
I want to create a function whose return value tells me:
1. The function was successful, so I have a value returned.
2. The function was not successful, so there is nothing to return.
I also wanted to tackle this problem, so I made a library called optional.py to do so. It can be installed using pip install optional.py. It is fully covered by tests and supports Python 2 and Python 3. I would love feedback, suggestions, and contributions.
To address the concerns of the other answer, and to head off any haters: yes, raising exceptions is more idiomatic Python, however it leads to ambiguity between exceptions used for control flow and exceptions for actually exceptional situations.
There are long discussions about avoiding defensive programming that masks the core logic of an application, with smart people on both sides of the conversation. My personal preference is towards using optionals, so I provided a library to support that preference. Doing it with exceptions (or returning None) are acceptable alternatives, just not my preference.
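For readers who prefer not to take on a dependency, the pattern itself is small. Here is a hand-rolled sketch of an Optional-style wrapper (this is not the optional.py API, just an illustration of the idea):

class Option:
    """A tiny Optional-style container: either holds a value or is empty."""
    _EMPTY = object()  # sentinel, so that None itself can still be a valid value

    def __init__(self, value=_EMPTY):
        self._value = value

    def is_present(self):
        return self._value is not Option._EMPTY

    def get_or_else(self, default):
        return self._value if self.is_present() else default

def find_surname(name):
    return Option("Python") if name == "Monty" else Option()

print(find_surname("Monty").get_or_else("unknown"))  # Python
print(find_surname("foo").get_or_else("unknown"))    # unknown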
To your original question: Raise an exception instead of returning None. This is an idiom in Python and therefore well understood by users. For example
def get_surname(name):
    if name == "Monty":
        return "Python"
    else:
        raise KeyError(name)
You'll see this usage pattern a lot in Python:
try:
    print(get_surname("foo"))
except KeyError:
    print("Oops, no 'foo' found")
Based on your feedback, it also seemed like you wanted to make sure that certain return values are actually used. This is quite tricky, and I don't think there is an elegant way to enforce this in Python, but I'll give it a try.
First, we'll put the return value in a property so we can track if it has indeed been read.
class Result(object):
    def __init__(self, value):
        self._value = value
        self.used = False

    @property
    def value(self):
        self.used = True  # value was read
        return self._value
Secondly, we require that the Result object be retrieved in a with-block. When the block is exited (__exit__), we check whether the value has been read. To handle cases where a user doesn't use the with-statement, we also check that the value has been read when the object is garbage collected (__del__); an exception raised there will be converted into a warning. I usually shy away from __del__, though.
class RequireUsage(object):
    def __init__(self, value):
        self._result = Result(value)

    def __enter__(self):
        return self._result

    def __exit__(self, type, value, traceback):
        if type is None:  # No exception raised
            if not self._result.used:
                raise RuntimeError("Result was unused")

    def __del__(self):
        if not self._result.used:
            raise RuntimeError("Result was unused (gc)")
To use this system, simply wrap the return value in RequireUsage like so:
def div2(n):
    return RequireUsage(n/2)
To use such functions, wrap them in a block and extract the value:
with div2(10) as result:
    print(result.value)
If you don't invoke result.value anywhere, you will now get an exception:
with div2(10) as result:
    pass  # exception will be thrown
Likewise, if you just call div2 without with at all, you will also get an exception:
div2(10) # will also raise an exception
I wouldn't particularly recommend this exact approach in real programs, because I feel it makes usage more difficult while adding little. Also, there are still edge-cases that aren't covered, like if you simply do res = div2(10), because a held reference prevents __del__ from being invoked.
One approach would be to return either () or (<type>,) (int in your case). You could then come up with some helper functions like this one:
def get_or_else(option: tuple, default):
    if option:
        return option[0]
    else:
        return default
One benefit of this approach is that you don't need to depend on a third-party library to implement this. As a counterpart, you basically have to make your own little library.
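A short usage sketch of the tuple convention, with find_surname as a hypothetical example function reusing the name lookup from the earlier answer:

def find_surname(name) -> tuple:
    # () means "no value"; a 1-tuple means "a value is present"
    return ("Python",) if name == "Monty" else ()

print(get_or_else(find_surname("Monty"), "unknown"))  # Python
print(get_or_else(find_surname("foo"), "unknown"))    # unknown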
I'm having trouble getting my head around assigning a function to a variable when the function uses arguments. The arguments appear to be required but no matter what arguments I enter it doesn't work.
The scenario is that I'm creating my first GUI which has been designed in QT Designer. I need the checkbox to be ticked before the accept button allows the user to continue.
Currently this is coded to let me know whether ticking the checkbox returns anything (which it does), but I don't know how to pass that result on to the next function, accept_btn. I thought the easiest way would be to create a variable, however it requires positional arguments and that's where I'm stuck.
My code:
class MainWindow(QtWidgets.QMainWindow, Deleter_Main.Ui_MainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()
        self.setupUi(self)
        self.ConfirmBox.stateChanged.connect(self.confirm_box)
        self.Acceptbtn.clicked.connect(self.accept_btn)

    def confirm_box(self, state):
        if self.ConfirmBox.isChecked():
            print("checked")
        else:
            print("not checked")

    checked2 = confirm_box(self, state)

    def accept_btn(self):
        if checked2 == True:
            print("clicked")
        else:
            print("not clicked")

app = QApplication(sys.argv)
form = MainWindow()
form.show()
app.exec_()
The code gets stuck on 'checked2' with the error:
NameError: name 'self' is not defined
I thought there might be other solutions for running this all within one function but I can't seem to find a way whilst the below is required.
self.ConfirmBox.stateChanged.connect(self.confirm_box)
I would especially appreciate it if anyone could help me understand exactly why I need the 'self' argument in the function and in the variable.
Thanks in advance,
If you just need to enable a button when the checkbox is checked, it can be easily done within the signal connection:
self.ConfirmBox.toggled.connect(self.Acceptbtn.setEnabled)
QWidget.setEnabled requires a bool argument, which is the argument type passed on by the toggled signal, so the connection is very simple in this case.
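Put together, a minimal sketch of that approach, assuming the widget names ConfirmBox and Acceptbtn from your code and that the button should start disabled:

class MainWindow(QtWidgets.QMainWindow, Deleter_Main.Ui_MainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)
        self.Acceptbtn.setEnabled(False)                # disabled until confirmed
        self.ConfirmBox.toggled.connect(self.Acceptbtn.setEnabled)
        self.Acceptbtn.clicked.connect(self.accept_btn)

    def accept_btn(self):
        print("accepted")  # only reachable while the checkbox is ticked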
Apart from this, there are some mistakes in your understanding of classes in Python: it seems like you are thinking in a "procedural" way, which doesn't work well with typical PyQt implementations and common Python usage. Code in the class body only makes sense if you really need some processing to be done when the class is created, for example to define some class attributes or to manipulate the way some methods behave; but even in that case, those will be class attributes, which will be inherited by every new instance.
The line checked2 = confirm_box(self, state) will obviously give you an error, since you are defining checked2 as a class attribute. This means that its value will be processed and assigned while the class is being created: at that point the instance of the class does not exist yet; Python just executes the code that is not part of the methods until it reaches the end of the class definition (its primary indentation). When it reaches the checked2 line, it tries to call the confirm_box method, but the names "self" and "state" do not exist yet, as they have not been defined among the class attributes, hence the NameError exception.
Conceptually, what you have done is something similar to this:
class SomeObject(object):
    print(something)
This wouldn't make any sense, since there is no "something" defined anywhere.
self is a Python convention used for instance methods: it is the name commonly used to refer to the instance of a class; you could actually use any other valid Python identifier instead.
The first argument of any method is always the reference to the class instance; the only exceptions are the classmethod and staticmethod decorators, but that's another story. When you call a method of an instantiated class, the instance object is automatically bound to the first argument of the called method: the self is the instance itself.
For example, you could create a class like this:
class SomeObject(object):
    def __init__(Me):
        Me.someValue = 0

    def setSomeValue(Myself, value):
        Myself.someValue = value

    def multiplySomeValue(I, multi):
        I.setSomeValue(I.someValue * multi)
        return I.someValue
But that would be a bit confusing...
Is there any way to know the context in which an object is instantiated? So far I've been searching and have tried the inspect module (currentcontext) with poor results.
For example
class Item:
    pass

class BagOfItems:
    def __init__(self):
        item_1 = Item()
        item_2 = Item()
        item_3 = Item()
I'd want to raise an exception on the instantiation of item_3 (because it's outside a BagOfItems), while not doing so for item_1 and item_2. I don't know if a metaclass could be a solution to this, since the problem occurs at instantiation, not at declaration.
The holder class (BagOfItems) can't implement the check, because when an Item instantiation happens outside of it there would be no check at all.
When you instantiate an object with something like Item(), you are basically doing type(Item).__call__(), which will call Item.__new__() and Item.__init__() at some point in the calling sequence. That means that if you browse up the sequence of calls that led to Item.__init__(), you will eventually find code that does not live in Item or in type(Item). Your requirement is that the first such "context" up the stack belong to BagOfItems somehow.
In the general case, you cannot determine the class that contains the method responsible for a stack frame [1]. However, if you restrict the requirement so that instantiation may only happen inside a method, you are no longer working with the "general case". The first argument of a method is always an instance of the class. We can therefore move up the stack trace until we find a call whose first argument is neither an instance of Item nor a subclass of type(Item). If that frame has arguments (i.e., it is not a module or class body) and the first argument is an instance of BagOfItems, proceed; otherwise, raise an error.
Keep in mind that non-obvious calls like type(Item).__call__() may not appear in the stack trace at all; I just want to be prepared for them.
The check can be written something like this:
import inspect

def check_context(base, restriction):
    it = iter(inspect.stack())
    next(it)  # Skip this function, jump to caller
    for f in it:
        args = inspect.getargvalues(f.frame)
        self = args.locals[args.args[0]] if args.args else None
        # Skip the instantiating calling stack
        if self is not None and isinstance(self, (base, type(base))):
            continue
        if self is None or not isinstance(self, restriction):
            raise ValueError('Attempting to instantiate {} outside of {}'.format(base.__name__, restriction.__name__))
        break
You can then embed it in Item.__init__:
class Item:
    def __init__(self):
        check_context(Item, BagOfItems)
        print('Made an item')

class BagOfItems:
    def __init__(self):
        self.items = [Item(), Item()]

boi = BagOfItems()
i = Item()
The result will be:
Made an item
Made an item
Traceback (most recent call last):
...
ValueError: Attempting to instantiate Item outside of BagOfItems
Caveats
All this prevents you from calling methods of one class outside the methods of another class. It will not work properly in a staticmethod or classmethod, or in the module scope. You could probably work around that if you had the motivation. I have already learned more about introspection and stack tracing than I wanted to, so I will call it a day. This should be enough to get you started, or better yet, show you why you should not continue down this path.
The functions used here might be CPython-specific. I really don't know enough about inspection to be able to tell for sure. I did try to stay away from the CPython-specific features as much as I could based on the docs.
References
1. Python: How to retrieve class information from a 'frame' object?
2. How to get value of arguments passed to functions on the stack?
3. Check if a function is a method of some object
4. Get class that defined method
5. Python docs: inspect.getargvalues
6. Python docs: inspect.stack
I have a QWidget created via Qt Designer that has a QPushButton named foo, and the QWidget has a method named on_foo_clicked:
class MyWidget(QWidget):
    def __init__(self):
        ...

    @pyqtSlot()
    def on_foo_clicked(self):
        pass
Firstly, on_foo_clicked is only called if it is decorated with pyqtSlot(). If I remove that decorator, I have to manually connect self.ui.foo (of type QPushButton) to self.on_foo_clicked in MyWidget's initializer, yet I could not find any documentation about this.
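For reference, the manual connection looks something like this (a sketch; Ui_MyWidget is a hypothetical name for the Designer-generated form class):

class MyWidget(QWidget):
    def __init__(self):
        super().__init__()
        self.ui = Ui_MyWidget()  # hypothetical Designer-generated form class
        self.ui.setupUi(self)
        # explicit connection replacing the automatic on_<objectName>_<signal> hookup
        self.ui.foo.clicked.connect(self.on_foo_clicked)

    def on_foo_clicked(self):
        pass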
Secondly, if I want to use my own decorator like this:
def auto_slot(func):
    @pyqtSlot()
    def wrapper(self):
        func(self)
    return wrapper

...

@auto_slot
def on_foo_clicked(self):
    pass
it no longer works. But the following works:
def auto_slot(func):
    return pyqtSlot()(func)
So the issue is not the replacement of pyqtSlot by another decorator, but rather that for some reason the wrapper function causes the auto-connection mechanism to fail. Note that the above issue only affects automatic connections; if I add a line in MyWidget.__init__ to explicitly connect the self.ui.foo button to self.on_foo_clicked, then the auto_slot decorator with wrapper works as expected and the method gets called when the button is clicked.
Any ideas if there is something I can do to auto_slot with wrapper so that it will work even with automatically connected slots?
The reason I want this is so that the wrapper can trap exceptions raised by the slot (an indication of a bug) and print them to the console.
It just occurred to me that the problem is that the wrapper function does not have the right name to be found by the auto-connection system. So the following fixes the problem:
from functools import wraps

def auto_slot(func):
    @wraps(func)
    def wrapper(self):
        func(self)
    return pyqtSlot()(wrapper)
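Building on that, here is a sketch of what the decorator can look like once it also traps exceptions from the slot and prints them to the console, which was the original motivation (assuming PyQt5; the traceback handling is just one possible choice):

import traceback
from functools import wraps
from PyQt5.QtCore import pyqtSlot

def auto_slot(func):
    @wraps(func)  # keeps the original name so the auto-connection system still finds the slot
    def wrapper(self):
        try:
            func(self)
        except Exception:
            # an exception escaping a slot usually indicates a bug; report it
            traceback.print_exc()
    return pyqtSlot()(wrapper)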