Simplifying Init Method Python - python-3.x

Is there a better way of doing this?
def __init__(self, **kwargs):
    self.ServiceNo = kwargs["ServiceNo"]
    self.Operator = kwargs["Operator"]
    self.NextBus = kwargs["NextBus"]
    self.NextBus2 = kwargs["NextBus2"]
    self.NextBus3 = kwargs["NextBus3"]
The attributes (ServiceNo, Operator, ...) always exist.

That depends on what you mean by "simpler".
For example, is what you wrote simpler than what I would write, namely
def __init__(self, ServiceNo, Operator, NextBus, NextBus2, NextBus3):
    self.ServiceNo = ServiceNo
    self.Operator = Operator
    self.NextBus = NextBus
    self.NextBus2 = NextBus2
    self.NextBus3 = NextBus3
True, I've repeated each attribute name an additional time, but I've made it much clearer which arguments are legal for __init__. The caller is not free to add any additional keyword argument they like, only to see it silently ignored.
Of course, there's a lot of boilerplate here; that's something a dataclass can address:
from dataclasses import dataclass

@dataclass
class Foo:
    ServiceNo: int
    Operator: str
    NextBus: Bus
    NextBus2: Bus
    NextBus3: Bus
(Adjust the types as necessary.)
Now each attribute is mentioned once, and you get the __init__ method shown above for free.
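For example, instantiation then looks exactly like calling the explicit __init__ above (the values here are hypothetical, and bus1 through bus3 stand in for whatever Bus objects you actually have):

foo = Foo(ServiceNo=15, Operator="SBST", NextBus=bus1, NextBus2=bus2, NextBus3=bus3)
foo = Foo(15, "SBST", bus1, bus2, bus3)  # positional arguments work too
foo = Foo(15, "SBST", bus1, bus2, bus3, Typo=1)  # TypeError: unexpected keyword argument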

Better how? You don’t really describe what problem you’re trying to solve.
If it's error handling, you can use the dictionary's .get() method, which returns None (or a default you supply) in the event that a key doesn't exist.
If you just want a more succinct way of initializing variables, you could remove the ** and have the dictionary as a variable itself, then use it elsewhere in your code, but that depends on what your other methods are doing.
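A minimal sketch of both suggestions (the "Unknown" default is made up for illustration):

# Option 1: tolerate missing keys with .get()
def __init__(self, **kwargs):
    self.ServiceNo = kwargs.get("ServiceNo")           # None if absent
    self.Operator = kwargs.get("Operator", "Unknown")  # supplied default if absent

# Option 2: drop the ** and store the dict itself
def __init__(self, info):
    self.info = info  # later: self.info["NextBus"], etc.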

A hacky solution, available because the attribute names and the argument names match exactly, is to copy directly from the kwargs dict to the instance's dict, then check that you got all the keys you expected, e.g.:
def __init__(self, **kwargs):
    vars(self).update(kwargs)
    if vars(self).keys() != {"ServiceNo", "Operator", "NextBus", "NextBus2", "NextBus3"}:
        raise TypeError(f"{type(self).__name__} missing required arguments")
I don't recommend this; chepner's options are all superior to this sort of hackery, and they're more reliable (for example, this solution fails if you use __slots__ to prevent autovivification of attributes, as the instance won't have a backing dict you can pull with vars).

Related

Extract type hints for object attributes in Python [duplicate]

I want to get the type hints for an object's attributes. I can only get the hints for the class and not an instance of it.
I have tried using foo_instance.__class__ from here but that only shows the class variables.
So in the example how do I get the type hint of bar?
from typing import get_type_hints

class foo:
    var: int = 42
    def __init__(self):
        self.bar: int = 2

print(get_type_hints(foo))  # returns {'var': <class 'int'>}
I just had the same problem. The Python doc isn't that clear, since its example is made with what is now officially called a dataclass:

from typing import Annotated, NamedTuple, get_type_hints

class Student(NamedTuple):
    name: Annotated[str, 'some marker']

get_type_hints(Student) == {'name': str}
get_type_hints(Student, include_extras=False) == {'name': str}
get_type_hints(Student, include_extras=True) == {
    'name': Annotated[str, 'some marker']
}
This gives the impression that get_type_hints() works on the class directly. It turns out get_type_hints() can be called on a class or on a function, and each sees different annotations, so it can be used with both once you know that. A class is obviously not instantiated at its declaration: none of the variables set within the __init__() method exist yet, because that method hasn't been called, so only class-wide variables are visible when you pass the class. It couldn't really work any other way if we want the possibility of getting the type hints of class-wide variables.
So you can instead call it on __init__(), provided the variables are passed in as arguments (yes, I see that's not the case in your example, but it might help others, since I didn't find this anywhere in hours of searching):
from typing import get_type_hints

class foo:
    var: int = 42
    def __init__(self, bar: int = 2):
        self.bar = bar

print(get_type_hints(foo.__init__))  # {'bar': <class 'int'>}
Finally, for your exact example, I believe you have two choices. You could instantiate a temporary object and use del to clean it up right after, if your logic allows it. Or you could declare your variables as class-wide ones, with or without default values, so you can get them with get_type_hints() and assign them later at instantiation.
Maybe this is a hack, and you have to be the creator of your instances, but there is a subset of cases in which using a dataclass will get you what you want:
Python 3.7+

from dataclasses import dataclass
from typing import get_type_hints

@dataclass
class Foo:
    bar: str = 2

if __name__ == '__main__':
    f = Foo()
    print(f.bar)
    print(get_type_hints(f))

2
{'bar': <class 'str'>}
Hints only exist at the class level — by the time an instance is created the type of its attributes will be that of whatever value has been assigned to them. You can get the type of any instance attribute by using the first form of the built-in type() function — e.g. type(foo_instance.var).
This information isn't evaluated and only exists in the source code.
If you must get this information, you can use the ast module and extract it from the source code yourself, provided you have access to the source.
You should also ask yourself whether you really need this information, because in most cases re-evaluating the source code will be too much effort.
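A minimal sketch of that ast approach (the traversal logic is my own illustration, assuming the class source is reachable via inspect and you're on Python 3.9+ for ast.unparse):

import ast
import inspect

class foo:
    var: int = 42
    def __init__(self):
        self.bar: int = 2

# Walk the class source and collect annotated assignments of the form self.<name>: <hint>
tree = ast.parse(inspect.getsource(foo))
hints = {}
for node in ast.walk(tree):
    if (isinstance(node, ast.AnnAssign)
            and isinstance(node.target, ast.Attribute)
            and isinstance(node.target.value, ast.Name)
            and node.target.value.id == "self"):
        hints[node.target.attr] = ast.unparse(node.annotation)

print(hints)  # {'bar': 'int'}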

Check if string is part of object variables

I want to pass a string to a method/class function which resolves the correct attribute to modify. I'm pretty sure i've done this before, but I seem to have forgotten how to.
class A:
    def __init__(self):
        self.word = B.getWord()
        self.phrase = "Some default string"
    def set_dynamically(self, attribute, value):
        self[attribute] = value
This would let me do something like A.set_dynamically('word', C.getWord())
I've tried searching for a question and answer for this but I'm having a hard time defining what this is called, so I didn't really find anything.
Python objects have a built-in method called __setattr__(self, name, value) that does this. You can invoke it by calling the built-in setattr() function with the object as the first argument:
a = A()
setattr(a, 'word', C.getWord())
There's no reason to do this when you could just write a.word = C.getWord() (which, in fact, resolves down to calling __setattr__() the same way the built-in setattr() function does), but if the attribute you're setting is named dynamically, then this is how you get around that limitation.
If you want to customize how your class behaves when you call setattr() on it (or when you set an attribute normally), you can override the __setattr__(self, name, value) method in much the same way as you're overriding __init__(). Be careful if you do this, because it's really easy to accidentally produce an infinite recursion error - to avoid this, call object.__setattr__(self, name, value) inside your overridden __setattr__(self, name, value).
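A minimal sketch of such an override (the logging behaviour is just for illustration):

class LoggedAttrs:
    def __setattr__(self, name, value):
        print(f"setting {name!r} = {value!r}")
        # Delegate to object.__setattr__ to avoid infinite recursion
        object.__setattr__(self, name, value)

obj = LoggedAttrs()
obj.word = "hello"            # prints: setting 'word' = 'hello'
setattr(obj, "phrase", "hi")  # prints: setting 'phrase' = 'hi'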
Just wanted to add my own solution as well. I created a mapping method:
def _mapper(self, attr, object):
    m = {
        "funcA": object.funcA,
        "funcB": object.funcB,
        # ... more entries ...
    }
    return m.get(attr)

def call_specific_func_(self, attr):
    # do stuff
    for a in some_list:
        attr = a.get_attr()
        retvals = self._mapper(attr, a)
        # etc.

Mypy: annotating a variable with a class type

I am having some trouble assigning the variables in a Python 3.6 class to a particular type, a pathlib Path. Following an example from link, I tried to create a TypeVar, but mypy is still throwing errors. I want to make sure that the class variables initialized in __init__ only receive a particular type at type-checking time. So this is just a check to make sure I don't inadvertently set a string or something else to these class variables.
Can anyone suggest the correct way to do this?
Here is some simple code.
import pathlib
from typing import Union, Dict, TypeVar, Type

Pathtype = TypeVar('Pathtype', bound=pathlib.Path)

class Request:
    def __init__(self, argsdict):
        self._dir_file1: Type[Pathtype] = argsdict['dir_file1']
        self._dir_file2: Type[Pathtype] = argsdict['dir_file2']
The error that I am getting is:
Request.py:13: error: Invalid type "Request.Pathtype"
Request.py:14: error: Invalid type "Request.Pathtype"
Neither Type, TypeVar, nor NewType is correct to use here. What you simply want to do is use Path itself:
from pathlib import Path

class Request:
    def __init__(self, argsdict):
        self._dir_file1: Path = argsdict['dir_file1']
        self._dir_file2: Path = argsdict['dir_file2']
If you annotate your argsdict as being of type Dict[str, Path], you can skip having to annotate your fields entirely: mypy will infer the correct type:
from typing import Dict
from pathlib import Path

class Request:
    def __init__(self, argsdict: Dict[str, Path]):
        self._dir_file1 = argsdict['dir_file1']
        self._dir_file2 = argsdict['dir_file2']
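With that annotation in place, mypy will flag a call site that passes strings instead of Paths (a sketch; the error text is approximate):

req = Request({'dir_file1': Path('/tmp/a'), 'dir_file2': Path('/tmp/b')})  # OK
bad = Request({'dir_file1': '/tmp/a', 'dir_file2': '/tmp/b'})
# error: Dict entry 0 has incompatible type "str": "str"; expected "str": "Path"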
Here's a brief explanation of what the various type constructs you were attempting to use (or that were suggested to you) actually do:
TypeVar is used when you are trying to create a generic data structure or function. For example, take List[int], which represents a list containing ints. List[...] is an example of a generic data structure: it can be parameterized by any arbitrary type.
You use TypeVar as a way of adding "parameterizable holes" if you decide you want to create your own generic data structure.
It's also possible to use TypeVars when writing generic functions. For example, suppose you want to declare that you have some function that can accept a value of any type -- but that function is guaranteed to return a value of the exact same type. You can express ideas like these using TypeVars.
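For instance, a minimal sketch of such a generic function:

from typing import List, TypeVar

T = TypeVar('T')

def first(items: List[T]) -> T:
    # Whatever element type the list has, the result has that same type:
    # mypy infers first([1, 2]) as int and first(['a']) as str
    return items[0]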
The Type[...] annotation is used to indicate that some expression must be a type itself, rather than an instance of that type. For example, to declare that some variable must hold an int, we would write my_var: int = 4. But what if we want to write something like my_var = int? What sort of type hint could we give that variable? In this case, we could do my_var: Type[int] = int.
NewType basically lets you "pretend" that you're taking some type and making a subclass of it -- but without requiring you to actually subclass anything at runtime. If you're careful, you can take advantage of this feature to help catch bugs where you mix different "kinds" of strings or ints or whatever -- e.g. passing in a string representing HTML into a function expecting a string representing SQL.
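A short sketch of that idea (the HtmlStr/SqlStr names are made up for illustration):

from typing import NewType

HtmlStr = NewType('HtmlStr', str)
SqlStr = NewType('SqlStr', str)

def run_query(query: SqlStr) -> None:
    ...

page = HtmlStr("<p>hello</p>")
run_query(page)                # mypy error: expected SqlStr, got HtmlStr
run_query(SqlStr("SELECT 1"))  # OK; at runtime SqlStr is just str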
Replace TypeVar with NewType and remove the Type[] modifier.

Patching superclass methods with mocks

There are a number of similar(ish) questions here about how, in Python, you are supposed to patch the superclasses of your class, for testing. I've gleaned some ideas from them, but I'm still not where I need to be.
Imagine I have two base classes:
class Foo(object):
    def something(self, a):
        return a + 1

class Bar(object):
    def mixin(self):
        print("Hello!")
Now I define the class that I want to test as such:
class Quux(Foo, Bar):
    def something(self, a):
        self.mixin()
        return super().something(a) + 2
Say I want to test that mixin has been called and I want to replace the return value of the mocked Foo.something, but importantly (and necessarily) I don't want to change any of the control flow or logic in Quux.something. Presuming patching superclasses "just worked", I tried unittest.mock.patch:
with patch("__main__.Foo", spec=True) as mock_foo:
with patch("__main__.Bar", spec=True) as mock_bar:
mock_foo.something.return_value = 123
q = Quux()
assert q.something(0) == 125
mock_bar.mixin.assert_called_once()
This doesn't work: the superclasses' definitions of something and mixin aren't mocked when Quux is instantiated, which is not surprising, as the class's inheritance is established before the patch.
I can get around the mixin problem, at least, by explicitly setting it:
# This works to mock the mixin method
q = Quux()
setattr(q, "mixin", mock_bar.mixin)
However, a similar approach doesn't work for the overridden method, something.
As I mentioned, other answers to this question suggest overriding Quux's __bases__ value with the mocks. However, this doesn't work at all as __bases__ must be a tuple of classes and the mocks' classes appear to just be the originals:
# This doesn't do what I want
Quux.__bases__ = (mock_foo.__class__, mock_bar.__class__)
q = Quux()
Other answers suggested overriding super. This does work, but I feel that it's a bit dangerous as any calls to super you don't want to patch will probably break things horribly.
So is there a better way of doing what I want than this:
with patch("builtins.super") as mock_super:
mock_foo = MagicMock(spec=Foo)
mock_foo.something.return_value = 123
mock_super.return_value = mock_foo
mock_bar = MagicMock(spec=Bar)
q = Quux()
setattr(q, "mixin", mock_bar.mixin)
assert q.something(0) == 125
mock_bar.mixin.assert_called_once()
The matter is actually simple: the subclass contains a reference to the original classes inside its own structure (the publicly visible attributes __bases__ and __mro__). That reference is not changed when you mock those base classes; the mocking only affects code that looks those names up explicitly while the patching is "turned on". In other words, the mocks would only be used if your Quux class were itself defined inside the with blocks, and that would not work either, as the mock object replacing the classes cannot be a proper superclass.
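You can see this directly (a quick sketch, assuming the classes live in __main__):

with patch("__main__.Foo") as mock_foo:
    print(Foo is mock_foo)  # True: the module attribute now names the mock...
    print(Quux.__bases__)   # ...but this is still (<class '__main__.Foo'>, <class '__main__.Bar'>)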
However, the workaround, which is also the right way to do it, is quite simple: you just have to mock the methods you want replaced, not the classes.
The question is a bit old now, and I hope you have moved on, but the right thing to do there is:
with patch("__main__.Foo.something", spec=True) as mock_foo:
with patch("__main__.Bar.mixin", spec=True) as mock_bar:
mock_foo.return_value = 123
q = Quux()
assert q.something(0) == 125
mock_bar.assert_called_once()

Having trouble returning through multiple classes in Python

I'm still learning and like to build things that I will eventually be doing on a regular basis in the future, to give me a better understanding on how x does this or y does that.
I haven't learned much about how classes work entirely yet, but I set up a call that will go through multiple classes.
getattr(monster, monster_class.str().lower())(1)
Which calls this:
class monster:
    def vampire(x):
        monster_loot = {'Gold': 75, 'Sword': 50.3, 'Good Sword': 40.5, 'Blood': 100.0, 'Ore': .05}
        if x == 1:
            loot_table.all_loot(monster_loot)
Which in turn calls this...
class loot_table:
    def all_loot(monster_loot):
        loot = ['Gold', 'Sword', 'Good Sword', 'Ore']
        loot_dropped = {}
        for i in monster_loot:
            if i in loot:
                loot_dropped[i] = monster_loot[i]
        drop_chance.chance(loot_dropped)
And then, finally, gets to the last class.
import random

class drop_chance:
    def chance(loot_list):
        loot_gained = []
        for i in loot_list:
            x = random.uniform(0.0, 100.0)
            if loot_list[i] >= x:
                loot_gained.append(i)
        return loot_gained
And it all works, except it's not returning loot_gained. I'm assuming it's just being returned to the loot_table class and I have no idea how to bypass it all the way back down to the first line posted. Could I get some insight?
Keep using return.
def foo():
    return bar()

def bar():
    return baz()

def baz():
    return 42

print(foo())
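Applied to the code in the question, that means returning the result at every hop (a sketch of just the lines that change):

# in monster.vampire:
if x == 1:
    return loot_table.all_loot(monster_loot)

# in loot_table.all_loot:
return drop_chance.chance(loot_dropped)

# the original call then yields the loot:
loot_gained = getattr(monster, monster_class.str().lower())(1)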
I haven't learned much about how classes work entirely yet...
Rather informally, a class definition is a description of the objects of that class (a.k.a. instances of the class) that are to be created in the future. The class definition contains the code (the definitions of the methods). The object (the class instance) basically contains the data. A method is a kind of function that can take arguments and that is capable of manipulating the object's data.
This way, classes should represent the behaviour of real-world objects; the class instances simulate the existence of those real-world objects. The methods represent actions that the objects apply to themselves.
From that point of view, a class identifier should be a noun that describes the category of objects of the class. A class instance identifier should also be a noun that names the object. A method identifier is usually a verb that describes the action.
In your case, the class drop_chance is suspicious, if only because of its naming.
If you want to print something reasonable about the object (say, using print(monster)), then define the __str__() method of the class; see the doc.
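A minimal sketch of that last suggestion (the Monster class and its name attribute are made up for illustration):

class Monster:
    def __init__(self, name):
        self.name = name
    def __str__(self):
        # Called by print() and str()
        return f"a fearsome {self.name}"

print(Monster("vampire"))  # a fearsome vampire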
