Create instances from list of classes - python-3.x

How do I create instances of classes from a list of classes? I've looked at other SO answers but didn't understand them.
I have a list of classes:
list_of_classes = [Class1, Class2]
Now I want to create instances of those classes, where the variable name storing the class is the name of the class. I have tried:
for cls in list_of_classes:
    str(cls) = cls()
but I get the error: "SyntaxError: can't assign to function call". That is of course obvious, but I don't know what else to do.
I really want to be able to access the class by name later on. Let's say we store all the instances in a dict and that one of the classes is called ClassA; then I would like to be able to access the instance by dict['ClassA'] later on. Is that possible? Is there a better way?

You say that you want "the variable name storing the class [to be] the name of the class", but that's a very bad idea. Variable names are not data. The names are for programmers to use, so there's seldom a good reason to generate them using code.
Instead, you should probably populate a list of instances, or if you are sure that you want to index by class name, use a dictionary mapping names to instances.
I suggest something like:
list_of_instances = [cls() for cls in list_of_classes]
Or this:
class_name_to_instance_mapping = {cls.__name__: cls() for cls in list_of_classes}
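The dictionary version gives exactly the by-name lookup you asked about; for example (assuming one of the classes in list_of_classes is literally named ClassA):
instance_a = class_name_to_instance_mapping['ClassA']
print(type(instance_a).__name__)  # 'ClassA'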
One of the rare cases where it can sometimes make sense to automatically generate variables is when you're writing code to create or manipulate class objects themselves (e.g. producing methods automatically). This is somewhat easier and less fraught than creating global variables, since at least the programmatically produced names will be contained within the class namespace rather than polluting the global namespace.
The collections.namedtuple factory from the standard library, for example, creates tuple subclasses on demand, with special descriptors as attributes that allow the tuple's values to be accessed by name. Here's a very crude example of how you could do something vaguely similar yourself, using getattr and setattr to manipulate attributes on the fly:
def my_named_tuple(attribute_names):
    class Tup:
        def __init__(self, *args):
            if len(args) != len(attribute_names):
                raise ValueError("Wrong number of arguments")
            for name, value in zip(attribute_names, args):
                setattr(self, name, value)  # this programmatically sets attributes by name!
        def __iter__(self):
            for name in attribute_names:
                yield getattr(self, name)  # you can look up attributes by name too
        def __getitem__(self, index):
            name = attribute_names[index]
            if isinstance(index, slice):
                return tuple(getattr(self, n) for n in name)
            return getattr(self, name)
    return Tup
It works like this:
>>> T = my_named_tuple(['foo', 'bar'])
>>> i = T(1, 2)
>>> i.foo
1
>>> i.bar
2
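Since Tup also defines __iter__ and __getitem__ (including slice support), the same instance can be iterated and indexed; continuing the session above:
>>> list(i)
[1, 2]
>>> i[0]
1
>>> i[0:2]
(1, 2)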

If I understood your question correctly, I think you can do something like this using globals():
class A:
    pass

class B:
    pass

class C:
    pass

def create_new_instances(classes):
    for cls in classes:
        name = '{}__'.format(cls.__name__)
        obj = cls()
        obj.__class__.__name__ = name
        globals()[name] = obj

if __name__ == '__main__':
    classes = [A, B, C]
    create_new_instances(classes)
    classes_new = [globals()[k] for k in globals() if k.endswith('__') and not k.startswith('__')]
    for cls in classes_new:
        print(cls.__class__.__name__, repr(cls))
Output (you'll get similar output):
A__ <__main__.A object at 0x7f7fac6905c0>
C__ <__main__.C object at 0x7f7fac6906a0>
B__ <__main__.B object at 0x7f7fac690630>

Related

Can anyone explain __init__ in Python?

class Car:
    def __init__(self, color, brand, number_of_seats):
        self.color = color
        self.brand = brand
        self.number_of_seats = number_of_seats
        self.number_of_wheels = 4
        self.registration_number = GenerateRegistrationNumber()
Hi all,
1) Referring to the above example, could anyone tell me what the difference is between the specific attributes and "the other" attributes? What will happen if registration_number is treated as a specific attribute?
2)
class MyInteger:
    def __init__(self, newvalue):
        # imagine self as an index card.
        # under the heading of "value", we will write
        # the contents of the variable newvalue.
        self.value = newvalue
If we consider this example, shouldn't it be self.newvalue = newvalue?
I think I know what you're asking (let me know if I'm wrong): what the difference is between the attributes assigned from the parameters of __init__ (instance attributes), the ones assigned inside __init__ but not from parameters (also instance attributes), and the ones not assigned in the initialiser at all (class attributes). The difference here is that all (well, pretty much all) cars have 4 wheels, and the number plate is generated, not supplied. You could also do this, for example:
class Car:
    number_of_wheels = 4
    def __init__(self, color, brand, number_of_seats):
        self.color = color
        self.brand = brand
        self.number_of_seats = number_of_seats
        self.registration_number = GenerateRegistrationNumber()
As the number of wheels here is always assigned the same value across all instances, it is said to be a "class attribute" in this case. All other attributes here are "instance attributes", as they are specifically assigned to each instance. For a slightly better explanation, I recommend reading this:
https://www.geeksforgeeks.org/class-instance-attributes-python/
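To make the distinction concrete, here is a small sketch (a hypothetical, stripped-down Car, not from the original answer):
class Car:
    number_of_wheels = 4        # class attribute: shared by every Car
    def __init__(self, color):
        self.color = color      # instance attribute: unique to each instance

a = Car("red")
b = Car("blue")
print(a.number_of_wheels, b.number_of_wheels)  # 4 4  (same class attribute)
print(a.color, b.color)                        # red blue  (per-instance values)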
It doesn't actually matter what the instance attribute (self.value here) is called; you could call it whatever you want and it'd still work, but in most cases you would indeed want to name the attribute the same as the parameter.
The __init__ method, one of the so-called magic (dunder) methods, is the constructor-like initialiser for a class. If you remember, in Java the constructor must have the same name as the class, but that is not the case in Python: in Python we use the __init__ method instead.
The difference between class attributes and instance attributes is that class attributes are common to every object that is created, while instance attributes are only accessible through the object they belong to.
Consider an example where data about students in a class is stored. The class division is the same for all students of that particular class, so it can be shared, but the students' names, marks and so on differ from student to student.
In that scenario the class division can be a class attribute, while the student data such as name and marks have to be instance attributes.
An example of a class attribute is shown below:
class A:
    Division = "5A"
Here, Division is a class attribute.
class B:
    def __init__(self, name, marks):
        self.name = name
        self.marks = marks
Here, name and marks are instance variables.
Note that we could also write self.username = name here, because we are just storing the value of the name variable in self.username; you can use any attribute name, there is no constraint on that.
Also, whenever you prefix a method or attribute name with a double underscore (__), its name is mangled so that it is effectively private and meant to be accessed only within the class.
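For what it's worth, the double underscore triggers name mangling rather than strict privacy; a quick sketch with a hypothetical Account class:
class Account:
    def __init__(self, balance):
        self.__balance = balance       # stored as _Account__balance (name mangling)

acct = Account(100)
# print(acct.__balance)                # AttributeError: no attribute '__balance'
print(acct._Account__balance)          # 100 -- still reachable, so "private" by convention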

Python: why do I need super().__init__() call in metaclasses?

I have one question: why do I need to call super().__init__() in metaclasses? Since a metaclass is a factory of classes, I thought we don't need to call the parent's initialisation for making objects of class Shop. Or is it that by using super().__init__() we are initialising the class itself? (My IDE says that I should call it, but without super().__init__() nothing bad happens; my class works without mistakes.)
Can you explain why?
Thanks in advance!
from collections import OrderedDict

class Descriptor:
    _counter = 0
    def __init__(self):
        self.attr_name = f'Descriptor attr#{Descriptor._counter}'
        Descriptor._counter += 1
    def __get__(self, instance, owner):
        return self if instance is None else instance.__dict__[self.attr_name]
    def __set__(self, instance, value):
        if value > 0:
            instance.__dict__[self.attr_name] = value
        else:
            msg = 'Value must be > 0!'
            raise AttributeError(msg)

class Shop():
    weight = Descriptor()
    price = Descriptor()
    def __init__(self, name, price, weight):
        self.name = name
        self.price = price
        self.weight = weight
    def __repr__(self):
        return f'{self.name}: price - {self.price} weight - {self.weight}'
    def buy(self):
        return self.price * self.weight

class Meta(type):
    def __init__(cls, name, bases, attr_dict):
        super().__init__(name, bases, attr_dict)  # <- this is that func. call
        for key, value in attr_dict.items():
            if isinstance(value, Descriptor):  # Here I rename the descriptor object's attribute name.
                value.attr_name = key
    @classmethod
    def __prepare__(metacls, name, bases):
        return OrderedDict()
You don't "need" to - and if your code use no other custom metaclasses, not calling the metaclass'__init__.super() will work just the same.
But if one ever needs to combine your metaclass with another through inheritance, it won't work "out of the box" without the super() call: the super() call is the way to ensure all methods in the inheritance chain are called.
And if at first it looks like custom metaclasses are extremely rare and combining them would likely never take place: quite a few libraries and frameworks have their own metaclasses, including Python's "abc" module (abstract base classes), PyQt, ORM frameworks, and so on. If every metaclass under your control is well behaved, with proper super() calls in __new__, __init__ and __call__ (if you override those), then combining two metaclasses into one working metaclass can be done in a single line:
CompatibleMeta = type("CompatibleMeta", (Meta, type(OtherClassBase)), {})
This way, for example, if you want to use the mechanisms of your metaclass in a class that also uses the ABCMeta functionality, you just do it: the __init__ method in your Meta will call the other metaclass's __init__. Otherwise the other __init__ would not run, some subtle thing would not be initialised in your classes, and that could be a very hard bug to find.
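As a minimal sketch of that combination (assuming the Meta metaclass from the question is in scope, and using abc from the standard library):
from abc import ABCMeta, abstractmethod

# One line creates a metaclass compatible with both Meta and ABCMeta; the
# cooperative super().__init__() call is what lets the two chain cleanly.
CompatibleMeta = type("CompatibleMeta", (Meta, ABCMeta), {})

class AbstractShop(metaclass=CompatibleMeta):
    @abstractmethod
    def buy(self):
        ...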
On a side note: there is no need to declare __prepare__ in a metaclass if all it does is create an OrderedDict, on Python 3.6 or newer: since that version, the dictionaries used as the "locals()" while executing class bodies preserve insertion order by default. Also, if another metaclass you are combining with also has a __prepare__, there is no way to make that work automatically with super() - you have to check the code and decide which of the two __prepare__ methods should be used, or create a new mapping type with the features needed by both metaclasses.

python property referring to property/attribute of member attribute?

I'm wondering if I have:
class A(object):
    def __init__(self):
        self.attribute = 1
        self._member = 2
    def _get_member(self):
        return self._member
    def _set_member(self, member):
        self._member = member
    member = property(_get_member, _set_member)

class B(object):
    def __init__(self):
        self._member = A()
    def _get_a_member(self):
        return self._member.member
    def _set_a_member(self, member):
        self._member.member = member
    member = property(_get_a_member, _set_a_member)
Can I somehow avoid writing getters/setters for A.member, and simply refer to the attribute or property of the A object?
Where the getters/setters do logic they're of course needed, but if I simply want to expose the members/attributes of a member attribute, then writing getters/setters seems like overhead.
I think it would already help if I could at least write the getters/setters inline?
I find the question a bit unclear; however, I'll try to explain some context.
Where the getters/setters do logic they're of course needed, but if I simply want to expose the members/attributes of a member attribute
If there is no logic in getter/setters, then there is no need to define the attribute as a property, but the attribute can be used directly (in any context).
So
class A(object):
    def __init__(self):
        self.attribute = 1
        self.member = 2

class B(object):
    def __init__(self):
        self.member = A()

B().member.member  # returns 2
B().member.member = 10
In some languages it's considered good practice to abstract instance properties with getter/setter methods; that's not necessarily the case in Python.
Python properties are useful when you need more control over the attribute, for example:
- when there is logic (validation, etc.)
- to define a read-only attribute (providing only a getter without a setter); see the sketch below
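A small illustration of the read-only case (a hypothetical Circle class, not from the original answer):
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def area(self):          # read-only: no setter defined
        return 3.14159 * self._radius ** 2

c = Circle(2)
print(c.area)    # 12.56636
# c.area = 5     # would raise AttributeError: can't set attribute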
Update (after the comment)
Properties are not necessarily a tool to "hide" some internal implementation. Hiding in Python is a bit different than in, say, Java, due to the very dynamic nature of the language. It's always possible to introspect and even change objects on the fly; you can add new attributes (even methods) to objects at runtime:
b = B()
b.foo = 4  # define a new attribute at runtime
b.foo      # returns 4
So Python developers rely more on conventions to hint their intentions of abstractions.
About the polymorphic members: I think it's most natural for Python classes to just share an interface; that's what's meant by duck typing. So as long as your next implementation of A supports the same interface (provides the same methods for callers), changing its implementation should not be an issue.
So this is what I came up with - use a helper function to generate the properties, with the assumption that the object has a _member attribute:
def generate_cls_a_property(name):
    """Small helper for generating a 'dumb' property for the A object"""
    def getter(obj):
        return getattr(obj._member, name)
    def setter(obj, new_value):
        setattr(obj._member, name, new_value)
    return property(getter, setter)
This allows me to add properties like so:
class B(object):
    def __init__(self):
        self._member = A()

    member = generate_cls_a_property('member')  # generates a dumb/pass-through property
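A quick usage check (assuming the A class from the question):
b = B()
print(b.member)           # 2, read through b._member.member
b.member = 10
print(b._member.member)   # 10, written through the generated setter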
I'll accept my own, unless someone tops it within a week.. :)

Load inconsistent data in pymongo

I am working with pymongo and want to ensure that saved data can be loaded even if additional data elements have been added to the schema.
I have used this for classes that don't need to have the information processed before assigning it to class attributes:
class MyClass(object):
    def __init__(self, instance_id):
        # set default values
        self.database_id = instance_id
        self.myvar = 0
        # load values from the database
        self.__load()
    def __load(self):
        data_dict = Collection.find_one({"_id": self.database_id})
        for key, attribute in data_dict.items():
            self.__setattr__(key, attribute)
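To illustrate why this tolerates extra fields, here is a minimal sketch with a plain dict standing in for the pymongo document (hypothetical names, not from the original code):
class Simple(object):
    def __init__(self, data_dict):
        self.myvar = 0                       # defaults first
        for key, attribute in data_dict.items():
            setattr(self, key, attribute)    # unknown keys simply become new attributes

s = Simple({"myvar": 3, "field_added_later": "ok"})
print(s.myvar, s.field_added_later)          # 3 ok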
However, in classes that I have to process the data from the database this doesn't work:
class Example(object):
    def __init__(self, name):
        self.name = name
        self.database_id = None
        self.member_dict = {}
        self.load()
    def load(self):
        data_dict = Collection.find_one({"name": self.name})
        self.database_id = data_dict["_id"]
        for element in data_dict["element_list"]:
            self.process_element(element)
        for member_name, member_info in data_dict["member_class_dict"].items():
            self.member_dict[member_name] = MemberClass(member_info)
    def process_element(self, element):
        print("Do Stuff")
Two example use cases I have are:
1) A list of strings that are used to set flags; this is done by calling a function with the string as the argument (process_element above).
2) A dictionary of dictionaries which is used to create a list of instances of a class (MemberClass(member_info) above).
I tried creating properties to handle this but found that __setattr__ doesn't look for properties.
I know I could redefine __setattr__ to look for specific names but it is my understanding that this would slow down all set interactions with the class and I would prefer to avoid that.
I also know I could use a bunch of try/excepts to catch the errors but this would end up making the code very bulky.
I don't mind the load function being slowed down a bit for this but very much want to avoid anything that will slow down the class outside of loading.
So the solution I came up with borrows the idea of overriding the __setattr__ method, but handles the special cases in the load function instead of in __setattr__ itself.
def load(self):
    data_dict = Collection.find_one({"name": self.name})
    for key, attribute in data_dict.items():
        if key == "_id":
            self.database_id = attribute
        elif key == "element_list":
            for element in attribute:
                self.process_element(element)
        elif key == "member_class_dict":
            for member_name, member_info in attribute.items():
                self.member_dict[member_name] = MemberClass(member_info)
        else:
            self.__setattr__(key, attribute)
This provides all of the functionality of overriding the __setattr__ method without slowing down any future calls to __setattr__ outside of loading the class.

Dynamically add methods to a class in Python 3.0

I'm trying to write a database abstraction layer in Python which lets you construct SQL statements using chained function calls such as:
results = db.search("book")
.author("J. K. Rowling")
.price("<40.00")
.title("Harry")
.execute()
but I am running into problems when I try to dynamically add the required methods to the db class.
Here are the important parts of my code:
import inspect

def myName():
    return inspect.stack()[1][3]

class Search():
    def __init__(self, family):
        self.family = family
        self.options = ['price', 'name', 'author', 'genre']
        # self.options is generated based on family, but this is an example
        for opt in self.options:
            self.__dict__[opt] = self.__Set__
        self.conditions = {}
    def __Set__(self, value):
        self.conditions[myName()] = value
        return self
    def execute(self):
        return self.conditions
However, when I run the example such as:
print(db.search("book").price(">4.00").execute())
outputs:
{'__Set__': 'harry'}
Am I going about this the wrong way? Is there a better way to get the name of the function being called or to somehow make a 'hard copy' of the function?
You can simply add the search functions (methods) after the class is created:
class Search:  # The class does not include the search methods, at first
    def __init__(self):
        self.conditions = {}

def make_set_condition(option):  # Factory function that generates a "condition setter" for "option"
    def set_cond(self, value):
        self.conditions[option] = value
        return self
    return set_cond

for option in ('price', 'name'):  # The class is extended with additional condition setters
    setattr(Search, option, make_set_condition(option))
Search().name("Nice name").price('$3').conditions # Example
{'price': '$3', 'name': 'Nice name'}
PS: This class has an __init__() method that does not have the family parameter (the condition setters are dynamically added at runtime, but are added to the class, not to each instance separately). If Search objects with different condition setters need to be created, then the following variation on the above method works (the __init__() method has a family parameter):
import types

class Search:  # The class does not include the search methods, at first
    def __init__(self, family):
        self.conditions = {}
        for option in family:  # Each instance is extended with its own condition setters
            # The new 'option' attributes must be methods, not regular functions:
            setattr(self, option, types.MethodType(make_set_condition(option), self))

def make_set_condition(option):  # Factory function that generates a "condition setter" for "option"
    def set_cond(self, value):
        self.conditions[option] = value
        return self
    return set_cond
>>> o0 = Search(('price', 'name')) # Example
>>> o0.name("Nice name").price('$3').conditions
{'price': '$3', 'name': 'Nice name'}
>>> dir(o0) # Each Search object has its own condition setters (here: name and price)
['__doc__', '__init__', '__module__', 'conditions', 'name', 'price']
>>> o1 = Search(('director', 'style'))
>>> o1.director("Louis L").conditions # New method name
{'director': 'Louis L'}
>>> dir(o1) # Each Search object has its own condition setters (here: director and style)
['__doc__', '__init__', '__module__', 'conditions', 'director', 'style']
Reference: http://docs.python.org/howto/descriptor.html#functions-and-methods
If you really need search methods that know about the name of the attribute they are stored in, you can simply set it in make_set_condition() with
set_cond.__name__ = option # Sets the function name
(just before the return set_cond). Before doing this, the method Search.price has the following name:
>>> Search.price
<function set_cond at 0x107f832f8>
after setting its __name__ attribute, you get a different name:
>>> Search.price
<function price at 0x107f83490>
Setting the method name this way makes error messages involving the method easier to understand.
Firstly, you are not adding anything to the class, you are adding it to the instance.
Secondly, you don't need to access __dict__ directly. The self.__dict__[opt] = self.__Set__ is better done with setattr(self, opt, self.__Set__).
Thirdly, don't use __xxx__ as attribute names. Those are reserved for Python-internal use.
Fourthly, as you noticed, Python is not easily fooled. The internal name of the method you call is still __Set__, even though you access it under a different name. :-) The name is set when you define the method as a part of the def statement.
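A tiny illustration of that point (hypothetical names):
def original(self, value):
    return value

alias = original        # binding the function under a new name...
print(alias.__name__)   # 'original' -- __name__ still comes from the def statement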
You probably want to create and set the option methods with a metaclass. You also might want to actually create those methods instead of trying to use one method for all of them. If you really want to use only one, __getattr__ is the way, but it can be a bit fiddly and I generally recommend against it. Lambdas or other dynamically generated methods are probably better.
Here is some working code to get you started (not the whole program you were trying to write, but something that shows how the parts can fit together):
class Assign:
    def __init__(self, searchobj, key):
        self.searchobj = searchobj
        self.key = key
    def __call__(self, value):
        self.searchobj.conditions[self.key] = value
        return self.searchobj

class Book():
    def __init__(self, family):
        self.family = family
        self.options = ['price', 'name', 'author', 'genre']
        self.conditions = {}
    def __getattr__(self, key):
        if key in self.options:
            return Assign(self, key)
        raise RuntimeError('There is no option for: %s' % key)
    def execute(self):
        # XXX do something with the conditions.
        return self.conditions
b = Book('book')
print(b.price(">4.00").author('J. K. Rowling').execute())
