Access instance from passed callback - python-3.x

I'm writing a game in python that has items in it. The player can use items, and different items can do different things. Instead of making a bunch of subclasses for each item type, I'm passing a callback function upon initialization. However, in some cases I need to access the item instance from the callback function - this is where I'm stuck.
Here's what I'm trying to do:
class Item:
    def __init__(self, use_callback, regen=0):
        self.use_callback = use_callback
        self.regen = regen

def heal(self, player):
    player.health += self.regen

item = Item(heal, regen=30)
item.use_callback(player)
However, only the player object is passed to the heal function and not the item object: TypeError: heal() missing 1 required positional argument.
It's inconvenient for me to use subclasses, since I'm using a table of item drops for each enemy that contains information about the items they drop, and it's easier to instantiate an Item upon death than to figure out which subclass I need to instantiate.
Any ideas on how to get the reference to the item object?

How about wrapping the callback to pass in the object:
class Item:
    def __init__(self, use_callback, regen=0):
        self.use_callback = lambda *args, **kwargs: use_callback(self, *args, **kwargs)
        self.regen = regen

def heal(item, player):
    if item.regen is not None:
        player.health += item.regen

item = Item(heal, regen=30)
item.use_callback(player)
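A related option, for what it's worth, is binding the callback with types.MethodType instead of a lambda, so it behaves like a real bound method of the instance (a sketch reusing the question's names):
import types

class Item:
    def __init__(self, use_callback, regen=0):
        # Bind the plain function to this instance; item.use_callback(player)
        # then invokes use_callback(item, player) like a normal method.
        self.use_callback = types.MethodType(use_callback, self)
        self.regen = regen

def heal(self, player):
    player.health += self.regen

item = Item(heal, regen=30)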
An alternate architecture I would put some thought into is having the Player object provide a consume method. The advantage here is that the complexity is taken out of your Item object. There is probably a slightly neater way to write this.
item = Item(effects=[(heal, {'regen': 30}),
                     (gravity_boost, {'multiplier': 2}),
                     (speed, {})])

class Player:
    def consume(self, item):
        for effect in item.effects:
            callback, kwargs = effect
            callback(self, item, **kwargs)
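For completeness, a callback matching that callback(player, item, **kwargs) call might look like this (a hypothetical signature, not from the original):
def heal(player, item, regen=0):
    # 'regen' arrives via the kwargs stored alongside the callback.
    player.health += regen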
Beyond this, it might be worth considering a simple 'publish subscribe' system that would decouple your objects so that they have no cross dependencies. This adds architectural simplicity at the cost of some code complexity.
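For illustration, a minimal publish/subscribe sketch (all names here are hypothetical): items publish named events and handlers are wired up elsewhere, so the objects never reference each other directly.
class EventBus:
    def __init__(self):
        self._subs = {}

    def subscribe(self, event, handler):
        self._subs.setdefault(event, []).append(handler)

    def publish(self, event, **payload):
        for handler in self._subs.get(event, []):
            handler(**payload)

bus = EventBus()
bus.subscribe('item_used', lambda player, item: print(player, 'used', item))
bus.publish('item_used', player='hero', item='potion')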

Related

Accessing variables from a method in class A and using it in Class B in python3.5

I have a BaseClass and two classes (Volume and testing) which inherit from the BaseClass. The class Volume uses a method driving_style from another Python module. I am trying to write another method, test_score, which needs access to variables computed in the method driving_style so I can use them for further computation. Those results should then be accessible to the class testing, as shown.
import pandas
from training import Accuracy
import ComputeData
import model
import numpy.ma as ma  # assumption: ma.average/ma.mean come from numpy's masked-array module

class BaseClass(object):
    def __init__(self, connections):
        self.Type = 'Stock'
        self.A = connections.A
        self.log = self.B.log

    def getIDs(self, assets):
        ids = pandas.Series(assets.ids, index=assets.B)
        return ids

class Volume(BaseClass):
    def __init__(self, connections):
        BaseClass.__init__(self, connections)
        self.daystrade = 30
        self.high_low = True

    def learning(self, data, rootClass):
        params.daystrade = self.daystrade
        params.high_low = self.high_low
        style = Accuracy.driving_style()
        return self.Object(data.universe, style)

class testing(BaseClass):
    def __init__(self, connections):
        BaseClass.__init__(self, connections)

    def learning(self, data, rootClass):
        test_score = Accuracy.test_score()
        return self.Object(data.universe, test_score)

def driving_style(date, modelDays, params):
    daystrade = params.daystrade
    high_low = params.high_low
    DriveDays = model.DateRange(date, params.daystrade)
    StopBy = ComputeData.instability(DriveDays)
    if high_low:
        style = ma.average(StopBy)
    else:
        style = ma.mean(StopBy)
    return style

def test_score(date, modelDays, params):
    # want to access the following from the method driving_style:
    DriveDays = ...
    StopBy = ...
    # ...which I compute using the values DriveDays and StopBy, and use
    # test_score in the method learning inside the class testing, which
    # inherits some params from the BaseClass
    return test_score
You can't use locals from a call to a function that was made elsewhere and has already returned.
A bad solution is to store them as globals that you can read later (but that get replaced on every new call). A better solution might be to return the relevant info to the caller along with the existing return values (return style, DriveDays, StopBy) and somehow get it to where it needs to go. If necessary, you could wrap the function in a class and store the computed values as attributes on an instance of the class, while keeping the return type the same.
But the best solution is probably to refactor, so the stuff you want is computed by dedicated methods that you can call directly from test_score and driving_style independently, without duplicating code or creating complicated state dependencies.
In short, basically any time you think you need to access locals from another function, you're almost certainly experiencing an XY problem.
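For concreteness, here is a hedged sketch of that refactor, reusing the question's own names (model, ComputeData, ma, params) as assumed dependencies: the shared computation moves into a helper that both functions call directly.
def compute_drive_data(date, params):
    # Shared computation, callable from both entry points.
    DriveDays = model.DateRange(date, params.daystrade)
    StopBy = ComputeData.instability(DriveDays)
    return DriveDays, StopBy

def driving_style(date, modelDays, params):
    DriveDays, StopBy = compute_drive_data(date, params)
    return ma.average(StopBy) if params.high_low else ma.mean(StopBy)

def test_score(date, modelDays, params):
    DriveDays, StopBy = compute_drive_data(date, params)
    ...  # compute the score from DriveDays and StopBy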

Collection of objects that are set up only if actually used

I'm writing a class to manage a collection of objects that I'd like to "load" only if actually used (imagine that each object is a heavy document). I also want to refer to each object both by a numeric key and by a string name.
I decided to create a class that inherits from OrderedDict:
from collections import OrderedDict

class MyClass:
    def load_me(self, key):
        print(f"Object {key} loaded")

class MyClassColl(OrderedDict):
    def __getitem__(self, key):
        if isinstance(key, int):
            key = list(self.keys())[key]
        res = super().get(key).load_me(key)
        return res
When I initialise the collection and retrieve a single object everything works well and:
my_coll = MyClassColl([('Obj1', MyClass()), ('Obj2', MyClass()), ('Obj3', MyClass())])
my_obj = my_coll['Obj2'] # or my_obj = my_coll[1]
prints:
Object Obj2 loaded
But using a loop, the objects are not properly loaded, so:
for key, item in my_coll.items():
    obj = item
produces no output.
This is because the __getitem__ method is not getting called when you loop through the dictionary like that. It is only called when you use an index operator (as far as I know). So, a super easy fix would be to do your for loop like this:
for key in my_coll:
    item = my_coll[key]
Alternatively you could try playing around with the __iter__ method but I think the way you've done it is probably ideal.
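If the loop really must stay as for key, item in my_coll.items(), a hedged option along those lines is to route items() through __getitem__ (a sketch building on the question's class):
from collections import OrderedDict

class MyClassColl(OrderedDict):
    def __getitem__(self, key):
        if isinstance(key, int):
            key = list(self.keys())[key]
        return super().__getitem__(key).load_me(key)

    def items(self):
        # Go through __getitem__ so load_me() also runs inside loops.
        for key in self:
            yield key, self[key]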

Python: why do I need super().__init__() call in metaclasses?

I have one question: why do I need to call super().__init__() in a metaclass? Since a metaclass is a factory of classes, I thought we wouldn't need to call the initialization to make objects of class Shop. Or does super().__init__() initialize the class itself? (My IDE says that I should call it, but even without super().__init__() nothing bad happens; my class works without mistakes.)
Can you explain why?
Thanks in advance!
from collections import OrderedDict

class Descriptor:
    _counter = 0

    def __init__(self):
        self.attr_name = f'Descriptor attr#{Descriptor._counter}'
        Descriptor._counter += 1

    def __get__(self, instance, owner):
        return self if instance is None else instance.__dict__[self.attr_name]

    def __set__(self, instance, value):
        if value > 0:
            instance.__dict__[self.attr_name] = value
        else:
            msg = 'Value must be > 0!'
            raise AttributeError(msg)

class Shop():
    weight = Descriptor()
    price = Descriptor()

    def __init__(self, name, price, weight):
        self.name = name
        self.price = price
        self.weight = weight

    def __repr__(self):
        return f'{self.name}: price - {self.price} weight - {self.weight}'

    def buy(self):
        return self.price * self.weight

class Meta(type):
    def __init__(cls, name, bases, attr_dict):
        super().__init__(name, bases, attr_dict)  # <- this is that func. call
        for key, value in attr_dict.items():
            if isinstance(value, Descriptor):
                # Here I rename the attr_name of the descriptor object.
                value.attr_name = key

    @classmethod
    def __prepare__(metacls, name, bases):
        return OrderedDict()
You don't "need" to - and if your code uses no other custom metaclasses, not calling super().__init__() in your metaclass will work just the same.
But if one needs to combine your metaclass with another through inheritance, it won't work "out of the box" without the super() call: the super() call is the way to ensure all methods in the inheritance chain are called.
And if at first it looks like metaclasses are extremely rare and combining them would likely never take place: quite a few libraries and frameworks have their own metaclasses, including Python's abc (abstract base classes), PyQt, ORM frameworks, and so on. If every metaclass under your control is well behaved, with proper super() calls in __new__, __init__ and __call__ (if you override those), then combining your metaclass with another one into a working metaclass can be done in a single line:
CompatibleMeta = type("CompatibleMeta", (Meta, type(OtherClassBase)), {})
This way, if you want to use the mechanisms in your metaclass in a class that also uses the ABCMeta functionality, for example, you just do it: the __init__ method in your Meta will call the other metaclass's __init__. Without the super() call it would not run, some subtle unexpected thing would not be initialized in your classes, and that could be a very hard bug to find.
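For instance, a hedged sketch combining the Meta from the question with abc.ABCMeta (it assumes Meta and Descriptor are already defined as above):
from abc import ABCMeta, abstractmethod

CompatibleMeta = type("CompatibleMeta", (Meta, ABCMeta), {})

class BaseShop(metaclass=CompatibleMeta):
    weight = Descriptor()  # still renamed by Meta.__init__

    @abstractmethod
    def buy(self):
        ...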
On a side note: there is no need to declare __prepare__ in a metaclass if all it does is create an OrderedDict, at least on Python 3.6 and newer: since that version, the dictionaries used as the "locals()" while executing class bodies are ordered by default. Also, if another metaclass you are combining with has a __prepare__ of its own, there is no way to make that work automatically with super() - you have to check the code and decide which of the two __prepare__s should be used, or create a new mapping type that satisfies both metaclasses.

Load inconsistent data in pymongo

I am working with pymongo and want to ensure that saved data can still be loaded even if additional data elements have been added to the schema since it was saved.
I have used this for classes that don't need to have the information processed before assigning it to class attributes:
class MyClass(object):
    def __init__(self, instance_id):
        # set default values
        self.database_id = instance_id
        self.myvar = 0
        # load values from database
        self.__load()

    def __load(self):
        data_dict = Collection.find_one({"_id": self.database_id})
        for key, attribute in data_dict.items():
            self.__setattr__(key, attribute)
However, this doesn't work in classes where I have to process the data from the database:
class Example(object):
    def __init__(self, name):
        self.name = name
        self.database_id = None
        self.member_dict = {}
        self.load()

    def load(self):
        data_dict = Collection.find_one({"name": self.name})
        self.database_id = data_dict["_id"]
        for element in data_dict["element_list"]:
            self.process_element(element)
        for member_name, member_info in data_dict["member_class_dict"].items():
            self.member_dict[member_name] = MemberClass(member_info)

    def process_element(self, element):
        print("Do Stuff")
Two example use cases I have are:
1) A list of strings that are used to set flags; this is done by calling a function with the string as the argument (process_element above).
2) A dictionary of dictionaries that is used to create a list of instances of a class (MemberClass(member_info) above).
I tried creating properties to handle this but found that __setattr__ doesn't look for properties.
I know I could redefine __setattr__ to look for specific names but it is my understanding that this would slow down all set interactions with the class and I would prefer to avoid that.
I also know I could use a bunch of try/excepts to catch the errors but this would end up making the code very bulky.
I don't mind the load function being slowed down a bit for this but very much want to avoid anything that will slow down the class outside of loading.
So the solution I came up with borrows the idea of changing the __setattr__ method, but handles the special cases in the load function instead of in __setattr__ itself.
def load(self):
    data_dict = Collection.find_one({"name": self.name})
    for key, attribute in data_dict.items():
        if key == "_id":
            self.database_id = attribute
        elif key == "element_list":
            for element in attribute:
                self.process_element(element)
        elif key == "member_class_dict":
            for member_name, member_info in attribute.items():
                self.member_dict[member_name] = MemberClass(member_info)
        else:
            self.__setattr__(key, attribute)
This provides all of the functionality of overriding the __setattr__ method without slowing down any future calls to __setattr__ outside of loading the class.
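A hedged variant of the same idea keeps the per-key handling in a dispatch table, so adding a new special case is one entry instead of another elif (Collection and MemberClass are the question's own names):
def load(self):
    special = {
        "_id": lambda value: setattr(self, "database_id", value),
        "element_list": lambda value: [self.process_element(e) for e in value],
        "member_class_dict": lambda value: self.member_dict.update(
            {name: MemberClass(info) for name, info in value.items()}),
    }
    data_dict = Collection.find_one({"name": self.name})
    for key, attribute in data_dict.items():
        handler = special.get(key)
        if handler is not None:
            handler(attribute)
        else:
            setattr(self, key, attribute)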

Round robin on iterable

Consider the following, simple round-robin implementation:
from itertools import chain, repeat

class RoundRobin:
    def __init__(self, iterable):
        self._iterable = set(iterable)

    def __iter__(self):
        for value in chain.from_iterable(repeat(self._iterable)):
            yield value
Example usage:
machines = ['test1', 'test2', 'test3', 'test4']
rr_machines = RoundRobin(machines)

for machine in rr_machines:
    # Do something
    pass
While this works, I was wondering if there was a way to modify the iterable in the RoundRobin class that would also impact existing iterators.
E.g. suppose that while I'm consuming values from the iterator, one of the machines from the set has become unavailable, and I want to prevent it from being returned.
The only solution I could think of was to implement a separate Iterator class. Of course, that still leaves the question of what to do when all machines have become unavailable and no more values can be returned (a StopIteration exception?).
itertools.repeat yields the very same object over and over - in this case, your set containing the elements - and mutating a set while it is being iterated over raises a RuntimeError.
It is a matter of creating another implementation of repeat that hands out a fresh copy of the whole set on each cycle. That is possible because in this case we know the object to be repeated is a container, while itertools.repeat has to work with arbitrary objects:
def mutable_repeat(container):
    # Hand out a fresh snapshot of the container on each cycle, so that
    # mutations to the original take effect on the next pass.
    while True:
        yield container.copy()
Just using this in place of repeat allows you to make "on the fly" changes to your self._iterable set, and values can be added to or removed from it. (Although a removed value will most likely be issued one last time before disappearing.)
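A self-contained sketch of that drop-in, reusing mutable_repeat from above (MutableRoundRobin is an illustrative name; the machine names are the ones from the example):
from itertools import chain

class MutableRoundRobin:
    def __init__(self, iterable):
        self._iterable = set(iterable)

    def __iter__(self):
        for value in chain.from_iterable(mutable_repeat(self._iterable)):
            yield value

rr = MutableRoundRobin(['test1', 'test2'])
it = iter(rr)
next(it); next(it)         # one full pass over the current set
rr._iterable.add('test3')  # picked up when the next copy is taken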
If you need to guard against issuing a removed value even once, you can do so by adding some more logic to the whole thing: instead of interacting with self._iterable directly from outside your class, you could do:
class RoundRobin:
    def __init__(self, iterable):
        self._iterable = set(iterable)
        self._removed = set()

    def __iter__(self):
        # repeat() below already yields individual items,
        # so no chain.from_iterable is needed here.
        yield from self.repeat()

    def remove(self, item):
        self._removed.add(item)
        self._iterable.remove(item)

    def add(self, item):
        self._iterable.add(item)

    def repeat(self):
        while True:
            for item in self._iterable.copy():
                # Checked at yield time, so a remove() call takes
                # effect immediately, even mid-cycle.
                if item not in self._removed:
                    yield item
            self._removed = set()
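A quick usage sketch of the guarded class above (set iteration order is arbitrary, so the exact sequence will vary):
rr = RoundRobin(['test1', 'test2', 'test3'])
it = iter(rr)
first = [next(it) for _ in range(6)]   # two full passes
rr.remove('test2')                     # guarded removal
later = [next(it) for _ in range(4)]   # 'test2' is never issued again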
