First off, I'm relatively new to Python, so pardon the rough code I have. Here goes: I have a class with a constructor. It inherits a few properties from a superclass. This same class can also be a child object of itself. So an Epic can hold an Epic, which I'm appending to the Children property as a list.
class Epic(V1Superclass):
    Type = 'Epic'
    Children = []
    ParentAssetID = []

    @classmethod
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            if key == 'Name':
                self.Name = value
            elif key == 'Description':
                self.Description = value
            elif key == 'Super':
                self.Super = value
            elif key == 'Scope':
                self.Scope = value
        if not bool(self.Super):
            self.Super = None
        if not bool(self.Scope):
            self.Scope = 445082

    def populateChildren(self, env, key):
        children = V1.utl.getChildren(env, key, 'Epic', self.AssetID)
        for child in children:
            if child['_oid'].split(':')[0] == 'Story':
                print('Got another backlog child')
            elif child['_oid'].split(':')[0] == 'Defect':
                print('Got another Defect child')
            elif child['_oid'].split(':')[0] == 'Epic':
                childEpic = V1.Epic()
                self.Children.append(childEpic)
If I create two instances of this class in the Python console, everything's fine. I do "a = V1.Epic()" and "b = V1.Epic()", and the world is good: they both initialize to the proper default values (or empty). However, when I instantiate a new Epic object inside the populateChildren function, rather than creating a default Epic it creates a new instance that has all the properties of the parent (self) object. In essence it's an exact copy, yet if I do a "self is childEpic" check it returns False, which (if I understand things correctly) means childEpic is at least not the very same object as the parent. I can manipulate the child object and set properties with no problem, but obviously that's not how this should work. This is kind of maddening, as I'm not even sure what to google to see what I'm doing wrong syntax-wise.
TIA!
I've tried adding an addChild function that instantiates an instance outside the scope of the parent object, but even within that function the object created is a duplicate of the parent.
Your __init__ function is defined as a class method, which means that its first parameter is not a reference to the newly created instance but a reference to the class itself. Attributes set there end up on the class and are visible from every instance, so it appears that they are being copied over.
Remove the @classmethod decorator from the __init__ function and the self parameter will refer to the object being instantiated.
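For illustration, here is a minimal sketch of what the corrected constructor might look like (it reuses V1Superclass and the attribute names from your question, simplifies the kwargs handling, and also moves the class-level Children list into __init__ so each Epic gets its own list rather than sharing one):

class Epic(V1Superclass):
    Type = 'Epic'

    def __init__(self, **kwargs):       # no @classmethod here
        # self is now the new instance, so these become per-instance attributes
        self.Children = []               # fresh list per Epic, not shared across instances
        self.Name = kwargs.get('Name')
        self.Description = kwargs.get('Description')
        self.Super = kwargs.get('Super') or None
        self.Scope = kwargs.get('Scope') or 445082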
I'm writing a class to manage a collection of objects that I'd like to "load" only if actually used (imagine that each object is a heavy document). Also I want to refer to each object both with a numeric key and a string name.
I decided to create a class that inherits from OrderedDict:
from collections import OrderedDict
class MyClass:
    def load_me(self, key):
        print(f"Object {key} loaded")

class MyClassColl(OrderedDict):
    def __getitem__(self, key):
        if isinstance(key, int):
            key = list(self.keys())[key]
        res = super().get(key).load_me(key)
        return res
When I initialise the collection and retrieve a single object everything works well and:
my_coll = MyClassColl([('Obj1', MyClass()), ('Obj2', MyClass()), ('Obj3', MyClass())])
my_obj = my_coll['Obj2'] # or my_obj = my_coll[1]
prints:
Object Obj2 loaded
But using a loop, the objects are not properly loaded, so:
for key, item in my_coll.items():
    obj = item
has no output.
This is because the __getitem__ method is not getting called when you loop through the dictionary like that. It is only called when you use an index operator (as far as I know). So, a super easy fix would be to do your for loop like this:
for key in my_coll:
    item = my_coll[key]
Alternatively you could try playing around with the __iter__ method but I think the way you've done it is probably ideal.
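If you do want iteration itself to trigger the load, a rough sketch (building on the MyClassColl from your question, untested) could override items() so it goes through __getitem__; note this variant returns the stored object itself rather than the return value of load_me:

class MyClassColl(OrderedDict):
    def __getitem__(self, key):
        if isinstance(key, int):
            key = list(self.keys())[key]
        obj = super().__getitem__(key)
        obj.load_me(key)          # trigger the lazy load
        return obj

    def items(self):
        # yield through __getitem__ so looping also loads each object
        for key in self.keys():
            yield key, self[key]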
I'm writing a game in python that has items in it. The player can use items, and different items can do different things. Instead of making a bunch of subclasses for each item type, I'm passing a callback function upon initialization. However, in some cases I need to access the item instance from the callback function - this is where I'm stuck.
Here's what I'm trying to do:
class Item:
    def __init__(self, use_callback, regen=0):
        self.use_callback = use_callback
        self.regen = regen

def heal(self, player):
    player.health += self.regen

item = Item(heal, regen=30)
item.use_callback(player)
However, only the player object is passed to the heal function and not the item object:
TypeError: heal() missing 1 required positional argument
It's inconvenient for me to use subclasses since I'm using a table of item drops for each enemy which contains information about the items they drop, and it's easier to instantiate an Item upon death than to figure out which subclass I need to instantiate.
Any ideas on how to get the reference to the item object?
How about wrapping the callback to pass in the object:
class Item:
    def __init__(self, use_callback, regen=0):
        self.use_callback = lambda *args, **kwargs: use_callback(self, *args, **kwargs)
        self.regen = regen

def heal(item, player):
    if item.regen is not None:
        player.health += item.regen

item = Item(heal, regen=30)
item.use_callback(player)
Example code
An alternate architecture I would put some thought to is having the Player object have a consume method. The advantage here is that the complexity is taken out of your Item object. There is probably a slightly neater way to write this:
item = Item(effects=[(heal, {'regen': 30}), (gravity_boost, {'multiplier': 2}), (speed, {})])

class Player:
    def consume(self, item):
        for callback, kwargs in item.effects:
            callback(self, item, **kwargs)
Beyond this it might be worth considering a simple 'publish subscriber' system that would separate your objects so that they would not have cross dependencies. This will add architectural simplicity at the cost of some code complexity.
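For what it's worth, a bare-bones sketch of such a publish/subscribe layer might look like this (all names here are invented for illustration, and player and item are assumed to exist as in the question):

class EventBus:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event_name, handler):
        self._subscribers.setdefault(event_name, []).append(handler)

    def publish(self, event_name, **payload):
        for handler in self._subscribers.get(event_name, []):
            handler(**payload)

def on_item_used(player, item):
    # hypothetical handler: apply the item's regen to the player
    player.health += item.regen

bus = EventBus()
bus.subscribe('item_used', on_item_used)
# neither Player nor Item needs a direct reference to the other;
# whoever triggers the use just publishes an event
bus.publish('item_used', player=player, item=item)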
I am working with pymongo and am wanting to ensure that data saved can be loaded even if additional data elements have been added to the schema.
I have used this for classes that don't need to have the information processed before assigning it to class attributes:
class MyClass(object):
    def __init__(self, instance_id):
        # set default values
        self.database_id = instance_id
        self.myvar = 0
        # load values from database
        self.__load()

    def __load(self):
        data_dict = Collection.find_one({"_id": self.database_id})
        for key, attribute in data_dict.items():
            self.__setattr__(key, attribute)
However, in classes where I have to process the data from the database, this doesn't work:
class Example(object):
    def __init__(self, name):
        self.name = name
        self.database_id = None
        self.member_dict = {}
        self.load()

    def load(self):
        data_dict = Collection.find_one({"name": self.name})
        self.database_id = data_dict["_id"]
        for element in data_dict["element_list"]:
            self.process_element(element)
        for member_name, member_info in data_dict["member_class_dict"].items():
            self.member_dict[member_name] = MemberClass(member_info)

    def process_element(self, element):
        print("Do Stuff")
Two example use cases I have are:
1) A list of strings that are used to set flags; this is done by calling a function with the string as the argument (process_element above).
2) A dictionary of dictionaries which is used to create a list of instances of a class (MemberClass(member_info) above).
I tried creating properties to handle this but found that __setattr__ doesn't look for properties.
I know I could redefine __setattr__ to look for specific names but it is my understanding that this would slow down all set interactions with the class and I would prefer to avoid that.
I also know I could use a bunch of try/excepts to catch the errors but this would end up making the code very bulky.
I don't mind the load function being slowed down a bit for this but very much want to avoid anything that will slow down the class outside of loading.
So the solution that I came up with is to borrow the idea of changing the __setattr__ method, but to handle the special keys in the load function instead of in __setattr__ itself.
def load(self):
    data_dict = Collection.find_one({"name": self.name})
    for key, attribute in data_dict.items():
        if key == "_id":
            self.database_id = attribute
        elif key == "element_list":
            for element in attribute:
                self.process_element(element)
        elif key == "member_class_dict":
            for member_name, member_info in attribute.items():
                self.member_dict[member_name] = MemberClass(member_info)
        else:
            self.__setattr__(key, attribute)
This provides all of the functionality of overriding the __setattr__ method without slowing down any future calls to __setattr__ outside of loading the class.
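If more special keys show up later, one possible variation on the same idea (just a sketch, reusing the names from above; the handler table itself is hypothetical) is to keep a small dispatch dict so the loop stays flat:

def load(self):
    data_dict = Collection.find_one({"name": self.name})
    # hypothetical handler table: keys that need processing get a function,
    # everything else falls through to a plain attribute assignment
    handlers = {
        "_id": lambda value: setattr(self, "database_id", value),
        "element_list": lambda value: [self.process_element(e) for e in value],
        "member_class_dict": lambda value: self.member_dict.update(
            {name: MemberClass(info) for name, info in value.items()}),
    }
    for key, attribute in data_dict.items():
        if key in handlers:
            handlers[key](attribute)
        else:
            setattr(self, key, attribute)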
How do I create instances of classes from a list of classes? I've looked at other SO answers but didn't understand them.
I have a list of classes:
list_of_classes = [Class1, Class2]
Now I want to create instances of those classes, where the variable name storing the class is the name of the class. I have tried:
for cls in list_of_classes:
    str(cls) = cls()
but I get the error "SyntaxError: can't assign to function call", which is of course obvious, but I don't know what else to do.
I really want to be able to access the class by name later on. Let's say we store all the instances in a dict and that one of the classes is called ClassA; then I would like to be able to access the instance by dict['ClassA'] later on. Is that possible? Is there a better way?
You say that you want "the variable name storing the class [to be] the name of the class", but that's a very bad idea. Variable names are not data. The names are for programmers to use, so there's seldom a good reason to generate them using code.
Instead, you should probably populate a list of instances, or if you are sure that you want to index by class name, use a dictionary mapping names to instances.
I suggest something like:
list_of_instances = [cls() for cls in list_of_classes]
Or this:
class_name_to_instance_mapping = {cls.__name__: cls() for cls in list_of_classes}
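With the mapping in place, lookup by name then works the way you described (assuming one of your classes really is called ClassA):

instance_a = class_name_to_instance_mapping['ClassA']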
One of the rare cases where it can sometimes make sense to automatically generate variables is when you're writing code to create or manipulate class objects themselves (e.g. producing methods automatically). This is somewhat easier and less fraught than creating global variables, since at least the programmatically produced names will be contained within the class namespace rather than polluting the global namespace.
The collections.namedtuple factory from the standard library, for example, creates tuple subclasses on demand, with special descriptors as attributes that allow the tuple's values to be accessed by name. Here's a very crude example of how you could do something vaguely similar yourself, using getattr and setattr to manipulate attributes on the fly:
def my_named_tuple(attribute_names):
    class Tup:
        def __init__(self, *args):
            if len(args) != len(attribute_names):
                raise ValueError("Wrong number of arguments")
            for name, value in zip(attribute_names, args):
                setattr(self, name, value)  # this programmatically sets attributes by name!

        def __iter__(self):
            for name in attribute_names:
                yield getattr(self, name)  # you can look up attributes by name too

        def __getitem__(self, index):
            name = attribute_names[index]
            if isinstance(index, slice):
                return tuple(getattr(self, n) for n in name)
            return getattr(self, name)

    return Tup
It works like this:
>>> T = my_named_tuple(['foo', 'bar'])
>>> i = T(1, 2)
>>> i.foo
1
>>> i.bar
2
If I understood your question correctly, I think you can do something like this using globals():
class A:
    pass

class B:
    pass

class C:
    pass

def create_new_instances(classes):
    for cls in classes:
        name = '{}__'.format(cls.__name__)
        obj = cls()
        obj.__class__.__name__ = name
        globals()[name] = obj

if __name__ == '__main__':
    classes = [A, B, C]
    create_new_instances(classes)
    classes_new = [globals()[k] for k in globals() if k.endswith('__') and not k.startswith('__')]
    for cls in classes_new:
        print(cls.__class__.__name__, repr(cls))
Output (you'll get a similar output):
A__ <__main__.A object at 0x7f7fac6905c0>
C__ <__main__.C object at 0x7f7fac6906a0>
B__ <__main__.B object at 0x7f7fac690630>
This question already has answers here:
Why does comparing strings using either '==' or 'is' sometimes produce a different result?
(15 answers)
Closed 8 years ago.
So I am pretty new to python so be kind.
I am trying to create a global sort of object directory as a base for all my programs, as a means to keep track of all created object instances.
so..
I create a class that holds a list of object references ( not id's ).
example:
class objDirectory:
    name = ""
    count = 0
    objDir = []

    def __init__(self):
        print("Initiating object directory.")
        name = "objDirectory"
        self.count = 1
        self.objDir = [self]
        return

    def getObj(self, name):
        print("Searching directory for:", name)
        for o in self.objDir:
            if o.name is name:
                print("Returning:", name)
                return obj
        else:
            print("Search failed")
            return
But once an object is added to the list and I run the get function, it does not return my object. I even verify by using directory.objDir[x] (this always references my object).
What am I not doing, or doing wrong?
Thanks.
Results:
setDirectory()
Initiating object directory.
Global directory reference set
test = obj( "test" )
Initiating: test
Duplicate check
Logging: test1
Object count: 2
t1 = directory.getObj( "test1" )
Searching directory for: test1
Search failed
print( directory.objDir )
[<__main__.objDirectory object at 0x032C86F0>, <__main__.obj object at 0x032C8710>]
I don't quite understand how you plan to use this object, however I find that there are a few problems with the __init__ method.
First of all, I guess that your object should be a singleton, however it is not. You define the class attributes here:
class objDirectory:
    name = ""
    count = 0
    objDir = []
But in your __init__ method you always override them, instead of adding to them. This means that every time you write something like a = objDirectory(), count will be set to 1 for that instance. Also, notice that (at least in python 2.7) you'll need a @classmethod decorator for every function you wish to be able to modify class attributes (as opposed to the instance's attributes). Like so:
class objDirectory:
    count = 0

    @classmethod
    def add_object(cls):
        cls.add_count()

    def __init__(self):
        self.add_object()
This goes for objDir as well.
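To make that first point concrete, here is a tiny sketch (with made-up names) showing a class attribute updated through cls being shared by every instance, while assignments through self stay on the individual object:

class Registry:
    count = 0                 # class attribute, shared by every instance

    @classmethod
    def _register(cls):
        cls.count += 1        # updates the attribute on the class itself

    def __init__(self):
        self._register()
        self.name = ""        # instance attribute, unique to this object

a = Registry()
b = Registry()
print(Registry.count)         # 2 -- both instances bumped the same class-level counter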
Second, in your __init__ method, you have a variable called name, which is never used. Perhaps you meant self.name?
There are better ways to implement singletons, however here is a simple example that might be easier to understand if you're new to python:
class mySingleton(object):
    singletonObject = None

    def __new__(cls):
        if not cls.singletonObject:
            cls.singletonObject = super(mySingleton, cls).__new__(cls)
        return cls.singletonObject
The __new__ method is a static method which creates the actual instance; its return value is the new instance. Overriding it like this means that every call to mySingleton() hands back the same instance, so the expression mySingleton() is mySingleton() will always be True.
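A quick check of the sketch above:

a = mySingleton()
b = mySingleton()
print(a is b)  # True -- every call returns the same instance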