Setting instance variables explicitly or via a function - python-3.x

If we have an instance variable that can be set either randomly or from a list input, is it better to set the instance variable explicitly (via a function return) or as a side effect of a function? I.e., which of the versions below is better?
class A:
    def __init__(self, *input):
        if input:
            self.property = self.create_property_from_input(input)
        else:
            self.property = self.create_property_randomly()

    @staticmethod
    def create_property_from_input(input):
        # Do something useful with the input.
        return result

    @staticmethod
    def create_property_randomly():
        # Do something useful.
        return result
or
class A:
    def __init__(self, *input):
        if input:
            self.create_property_from_input(input)
        else:
            self.create_property_randomly()

    def create_property_from_input(self, input):
        # Do something useful with the input.
        self.property = result
        # return None

    def create_property_randomly(self):
        # Do something useful.
        self.property = result
        # return None
I think that in the first version it is not strictly necessary to make the two create_property functions static methods. However, since they do not need to know anything about the instance, I thought it was clearer to do it that way. Personally, I tend to think that the first version is more explicit, but the use of static methods tends to make it look more advanced than it is.
Which version would you think is closer to best practices?
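For reference, here is a minimal runnable rendering of the first version; the concrete method bodies below (summing the input, drawing a random integer) are placeholder assumptions, not part of the question:

import random

class A:
    def __init__(self, *input):
        if input:
            self.property = self.create_property_from_input(input)
        else:
            self.property = self.create_property_randomly()

    @staticmethod
    def create_property_from_input(input):
        # hypothetical body: derive the property from the given values
        return sum(input)

    @staticmethod
    def create_property_randomly():
        # hypothetical body: draw a random value instead
        return random.randint(0, 100)

a = A(1, 2, 3)  # a.property == 6
b = A()         # b.property is a random integer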

Related

Change the class inside a class with different arguments

I'm a structured-programming guy, so my attempts at object-oriented programming are always a "work in progress..."
My intent is to have a class that adapts itself according to an external input. I saw in another post (which I was unable to find again) that I can change the class of an object, so I made this MWE, which works:
class Base:
    def __init__(self, name):
        self.name = name

    def set_text(self, text):
        self.text = text

class Terminator(Base):
    terminator = '!'

    def __init__(self):
        super().__init__('terminator')

    def get(self):
        return self.text + self.terminator

class Prefix(Base):
    def __init__(self):
        super().__init__('prefix')

    def get(self):
        return str(len(self.text)) + self.text

class_list = {
    'terminator': Terminator,
    'prefix': Prefix,
}

class Selector:
    def __init__(self, option):
        self.__class__ = class_list[option]

def main():
    selection = input("Choose 'terminator' or 'prefix': ")
    obj = Selector(selection)
    obj.set_text('something')
    print(obj.get())

if __name__ == '__main__':
    main()
Terminator is a class that produces text terminated with a special character (!); Prefix produces the same text prefixed with its length.
With Selector, I can use o = Selector('prefix') to get o as a Prefix instance.
The question
My question is if I can add extra arguments to Selector and pass them to the respective class. For example:
o = Selector('prefix', number_of_digits=2)  # '05hello' instead of '5hello'
or
o = Selector('terminator', terminator='$')  # use '$' instead of '!'
So far I couldn't figure out how to accomplish this task. I tried to use *args and **kwargs, but unsuccessfully.
Additional information
The code I'm working on is intended for undergraduate students and I want to keep it simple for teaching purposes, so Selector should be used to hide the other classes and their details from the students (to hide Terminator and Prefix, for example).
I expect to have about 15 distinct classes to hide behind Selector.
Also, I'm ready to hear I'm completely wrong about this approach if there are alternatives.
Try calling the appropriate class's __init__() manually, and set the variables like you otherwise would:
class Terminator(Base):
    # make terminator an instance variable instead of a class variable,
    # and set it as an overridable default arg for the constructor
    def __init__(self, terminator='!'):
        super().__init__('terminator')
        self.terminator = terminator

    def get(self):
        return self.text + self.terminator

class Selector:
    def __init__(self, option, *args, **kwargs):
        self.__class__ = class_list[option]
        self.__class__.__init__(self, *args, **kwargs)

...

o = Selector('terminator', terminator='$')
o.set_text("Hello World")
print(o.get())
# Hello World$
I should leave a disclaimer: what you're trying to do is essentially a version of the Factory method pattern, which is usually easier to maintain if you bundle it into a method instead of messing with class types and reflection:
def Selector(option: str, *args, **kwargs) -> Base:
    # this will do .__new__() and .__init__() normally,
    # and is indistinguishable from normal class creation
    return class_list[option](*args, **kwargs)
Using a method to do this instead of overriding the class metadata also has the advantage of being easy to fit into a type system (see the type hinting in the above snippet), which is difficult to do with .__init__(). This is a common design pattern in Java, for example, which is strongly and statically typed, requires a factory method to declare a return type that is a superclass of anything it could possibly return, and makes it impossible for an object to change its own type at runtime.
The disadvantage of your current approach, dynamically changing .__class__, is that the .__new__() and .__init__() methods which were called on the resulting object will not match each other (it would be using Selector.__new__() but Terminator.__init__(), for example), which may cause weird and hard-to-diagnose problems down the line. It's a fun experiment, but be aware of the risks before using this in something you'll have to maintain for a long time.
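One concrete way the mismatch can bite, sketched below: __class__ assignment requires both classes to have compatible instance layouts, so if any class hidden behind Selector used __slots__ (a hypothetical addition, not in the question), CPython would reject the switch outright:

class Slotted:
    __slots__ = ('text',)  # hypothetical memory-optimized variant

class Selector:
    def __init__(self):
        # raises TypeError: __class__ assignment: 'Slotted' object
        # layout differs from 'Selector' (exact message varies by version)
        self.__class__ = Slotted

Selector()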

How to switch between two sets of attribute values, depending on an internal state?

I have a class holding some scientific data. Depending on an internal state, the values of this class can appear as normalized (i.e. unitless) or non-normalized. The values are always stored as normalized, but if the object is set to non-normalized status, the user-accessible properties (and methods) will give the non-normalized values. This way the class appears non-normalized, while there's no need to duplicate the stored values.
Right now I implement this using getters. While it works, it produces a lot of repeated structure, and I wonder if there's a more Pythonic way of managing this without overcomplicating things.
Am I doing this right? Is there a more elegant way to switch between two sets of data in a similar fashion?
class CoolPhysicsData(object):
    def __init__(self, lambda0, *args, normed=False):
        self.lambda0 = lambda0  # some normalization factor (wavelength of some wave)
        self.normalized = normed  # user can change this state as he pleases
        self._normed_tmin, self._normed_tmax, self._normed_r = self.calculate_stuffs(*args)
        ...

    @property
    def tmin(self):
        if self.normalized:
            return self._normed_tmin
        else:
            return denormalize(self.lambda0, self._normed_tmin, unit_type="time")

    @property
    def tmax(self):
        if self.normalized:
            return self._normed_tmax
        else:
            return denormalize(self.lambda0, self._normed_tmax, unit_type="time")

    @property
    def r(self):
        if self.normalized:
            return self._normed_r
        else:
            return denormalize(self.lambda0, self._normed_r, unit_type="len")

    ...  # about 15 getters like these
One way is to avoid using properties and implement __getattr__, __setattr__ and __delattr__ instead. Since you need to know which quantity you're denormalizing, there's really no way to escape the definitions: these must be hand-coded somewhere. I'd do it this way:
class CoolPhysicsData:
    def _get_normalization_params(self, name):
        # set up how individual properties should be denormalized
        params = {
            # 'property_name': (norm_factor, normed_value, 'unit_type')
            'tmin': (self.lambda0, self._normed_tmin, 'time'),
            'tmax': (self.lambda0, self._normed_tmax, 'time'),
            'r': (self.lambda0, self._normed_r, 'len'),
        }
        return params[name]
and I would implement __getattr__ something like this:
...

def __getattr__(self, name):
    # extract the parameters needed
    norm_factor, normed_value, unit_type = self._get_normalization_params(name)
    if self.normed:
        return normed_value
    else:
        return self.denormalize(norm_factor, normed_value, unit_type)

...
Note that you might want to write __setattr__ and __delattr__ too.
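As a sketch of what __setattr__ could look like under this scheme: note that __getattr__ only runs when normal lookup fails, while __setattr__ intercepts every assignment, so it must delegate to object.__setattr__ to avoid infinite recursion. The normalize() helper here is an assumption (an inverse of denormalize()), not something from the question:

def __setattr__(self, name, value):
    if name in ('tmin', 'tmax', 'r'):
        # keep storage normalized no matter which form was assigned
        if not self.normed:
            unit_type = self._get_normalization_params(name)[2]
            value = normalize(self.lambda0, value, unit_type)  # assumed helper
        object.__setattr__(self, f'_normed_{name}', value)
    else:
        object.__setattr__(self, name, value)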
One little addition: dataclasses might be useful to you. I'm not sure if *args in your __init__ function is the exact signature, or you just simplified for the sake of the example. If you have known arguments (no varargs), this can be easily turned into a dataclass.
from dataclasses import dataclass, field

@dataclass
class CoolPhysicsData:
    lambda0: float
    normed: bool = field(default=False)

    def __post_init__(self):
        # set up some test values for simplicity;
        # of course you can run custom calculations here
        self._normed_tmin = 1
        self._normed_tmax = 2
        self._normed_r = 3

    def __getattr__(self, name):
        norm_factor, normed_value, unit_type = self._get_normalization_params(name)
        if self.normed:
            return normed_value
        else:
            return self.denormalize(norm_factor, normed_value, unit_type)

    # you may want to implement the following methods too:
    # def __setattr__(self, name, value):
    #     # your custom logic here
    #     ...
    # def __delattr__(self, name):
    #     # your custom logic here
    #     ...

    def denormalize(self, v1, v2, v3):
        # just for simplicity
        return 5

    def _get_normalization_params(self, name):
        # set up how individual properties should be denormalized
        params = {
            # 'property_name': (norm_factor, normed_value, 'unit_type')
            'tmin': (self.lambda0, self._normed_tmin, 'time'),
            'tmax': (self.lambda0, self._normed_tmax, 'time'),
            'r': (self.lambda0, self._normed_r, 'len'),
        }
        return params[name]
Is it more pythonic? It's up to you to decide. It surely takes away some repetition, but you introduce a little more complexity, and - in my opinion - it's more prone to bugs.
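One such bug is worth calling out: __getattr__ is expected to raise AttributeError for unknown names, but params[name] raises KeyError, which breaks hasattr() and confuses copy and pickle. A small guard, sketched here, fixes that:

def __getattr__(self, name):
    try:
        norm_factor, normed_value, unit_type = self._get_normalization_params(name)
    except KeyError:
        # unknown attribute: signal it the way Python expects
        raise AttributeError(name) from None
    if self.normed:
        return normed_value
    return self.denormalize(norm_factor, normed_value, unit_type)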

Python: why do I need super().__init__() call in metaclasses?

I have one question: why do I need to call super().__init__() in metaclasses? Since a metaclass is a factory of classes, I would think we don't need to call the initialization to make objects of class Shop. Or does super().__init__() initialize the class itself? (My IDE says that I should call it, but without super().__init__() nothing breaks; my class works without errors.)
Can you explain why?
Thanks in advance!
from collections import OrderedDict

class Descriptor:
    _counter = 0

    def __init__(self):
        self.attr_name = f'Descriptor attr#{Descriptor._counter}'
        Descriptor._counter += 1

    def __get__(self, instance, owner):
        return self if instance is None else instance.__dict__[self.attr_name]

    def __set__(self, instance, value):
        if value > 0:
            instance.__dict__[self.attr_name] = value
        else:
            msg = 'Value must be > 0!'
            raise AttributeError(msg)

class Shop:
    weight = Descriptor()
    price = Descriptor()

    def __init__(self, name, price, weight):
        self.name = name
        self.price = price
        self.weight = weight

    def __repr__(self):
        return f'{self.name}: price - {self.price} weight - {self.weight}'

    def buy(self):
        return self.price * self.weight

class Meta(type):
    def __init__(cls, name, bases, attr_dict):
        super().__init__(name, bases, attr_dict)  # <- this is that func. call
        for key, value in attr_dict.items():
            if isinstance(value, Descriptor):
                # here I rename the attr_name of the descriptor object
                value.attr_name = key

    @classmethod
    def __prepare__(metacls, name, bases):
        return OrderedDict()
You don't "need" to - and if your code uses no other custom metaclasses, not calling super().__init__() in the metaclass will work just the same.
But if one needs to combine your metaclass with another through inheritance, it won't work "out of the box" without the super() call: the super() call is what ensures all methods in the inheritance chain are called.
And if at first it looks like metaclasses are extremely rare and combining two of them would never happen: quite a few libraries and frameworks have their own metaclasses, including Python's abc module (abstract base classes), PyQt, ORM frameworks, and so on. If every metaclass under your control is well behaved, with proper super() calls in __new__, __init__ and __call__ (if you override those), combining your metaclass with another into a working one can be done in a single line:
CompatibleMeta = type("CompatibleMeta", (Meta, type(OtherClassBase)), {})
This way, for example, if you want to use the mechanisms in your metaclass in a class that also uses the ABCMeta functionality in Python, you just do it. The __init__ method in your Meta will call the other metaclass's __init__. Otherwise it would not run, some subtle unexpected thing would not be initialized in your classes, and that could be a very hard-to-find bug.
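For instance, a minimal sketch of the one-liner in action, reusing Meta and Descriptor from the question and abc from the standard library:

import abc

CompatibleMeta = type("CompatibleMeta", (Meta, abc.ABCMeta), {})

class AbstractShop(metaclass=CompatibleMeta):
    weight = Descriptor()

    @abc.abstractmethod
    def buy(self): ...

# AbstractShop() raises TypeError because buy() is abstract, and
# Meta.__init__ still ran: AbstractShop.weight.attr_name == 'weight'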
On a side note: there is no need to declare __prepare__ in a metaclass if all it does is create an OrderedDict, on any Python newer than 3.6: since that version, the dictionaries used as the "locals()" while executing class bodies are ordered by default. Also, if another metaclass you are combining with also has a __prepare__, there is no way to make that work automatically with super() - you have to check the code and decide which of the two __prepare__s should be used, or create a new mapping type with features to satisfy both metaclasses.

How to Call Multiple Methods in a Python Class Without Calling Each One Individually?

I have a class that contains a number of methods:
class PersonalDetails(ManagedObjectABC):
    def __init__(self, personal_details):
        self.personal_details = personal_details

    def set_gender(self):
        self.gender = 'Male'

    def set_age(self):
        self.set_age = 22

etc.
I have many such methods, all of which begin with the word set. I want to create a new method within this class that will execute all the methods that begin with set, like this:
def execute_all_settings(self):
    '''
    wrapper for setting all variables that start with set.
    Will skip anything not matching regex '^set'
    '''
    to_execute = [f'''self.{i}()''' for i in dir(self) if re.search('^set', i)]
    print(to_execute)
    [exec(i) for i in to_execute]
However, this reports an error:
NameError: name 'self' is not defined
How can I go about doing this?
more info
The reason I want to do it this way, rather than simply calling each method individually, is that new methods may be added in the future, so I want to execute all methods that start with "set", no matter what they are.
Do not use either exec or eval. Instead use getattr.
Also note that set_age is both a method and an attribute in your code; try to avoid that.
import re

class PersonalDetails:
    def __init__(self, personal_details):
        self.personal_details = personal_details

    def set_gender(self):
        self.gender = 'Male'

    def set_age(self):
        self.age = 22

    def execute_all_settings(self):
        '''
        wrapper for setting all variables that start with set.
        Will skip anything not matching regex '^set'
        '''
        to_execute = [i for i in dir(self) if re.search('^set', i)]
        print(to_execute)
        for func_name in to_execute:
            getattr(self, func_name)()

pd = PersonalDetails('')
pd.execute_all_settings()
print(pd.gender)
# ['set_age', 'set_gender']
# Male
This solution will work as long as all the "set" methods either do not expect any arguments (which is the current use-case), or they all expect the same arguments.
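If the setters later grow arguments, a hedged variant (assuming every set_* method accepts the same parameters, which the question does not specify) simply forwards them:

def execute_all_settings(self, *args, **kwargs):
    for func_name in dir(self):
        if func_name.startswith('set'):
            # assumes a shared signature across all set_* methods
            getattr(self, func_name)(*args, **kwargs)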

How to ignore a parameter in functools.lru_cache?

This is a skeleton of the function I want to enhance with a cache, because doing an RPC (remote procedure call) involves a TCP connection to another host.
def rpc(rpc_server, rpc_func, arg):
    return rpc_server.do_rpc(rpc_func, arg)
However, the most convenient way of simply decorating it with:
@functools.lru_cache()
does not work well, because rpc_server objects come and go and this parameter should be ignored by the cache.
I can write simple memoization code myself, no problem with that. Actually, I see no other solution.
I am unable to rewrite this function in such a way that the @lru_cache() decorator can be applied and rpc_server is still passed as an argument (i.e. I don't want to make rpc_server a global variable). Is it possible?
I'm posting this just for completeness. Comments are welcome, but please do not vote.
I have found a way to satisfy the conditions from my question. I'm not going to use this code, but it shows how flexible Python is.
import functools

class BlackBox:
    """All BlackBoxes are the same."""
    def __init__(self, contents):
        # TODO: use a weak reference for contents
        self._contents = contents

    @property
    def contents(self):
        return self._contents

    def __eq__(self, other):
        return isinstance(other, type(self))

    def __hash__(self):
        return hash(type(self))

@functools.lru_cache()
def _cached_func(blackbox, real_arg):
    print("called with args:", blackbox.contents, real_arg)
    return real_arg + 1000

def cached_func(ignored_arg, real_arg):
    # ignored means ignored by the cache
    return _cached_func(BlackBox(ignored_arg), real_arg)

cached_func("foo", 1)  # cache miss
cached_func("bar", 1)  # cache hit
cached_func("bar", 2)  # cache miss
cached_func("foo", 2)  # cache hit
