How to inherit from a class with __init__ and take multiple input params with default values - python-3.x

I have two classes as follows:
class Buttons():
    def __init__(self, dDict):
        self.TCT = Tool()
        self.dDict = dDict

    def btnName(self):
        # I will use both self.TCT and self.dDict here
        a = self.dDict['setup']

class Switch(Buttons):
    def __init__(self, iButtonType, sButtName=None):
        self.TCT = Tool()
        self.iButtonType = iButtonType
        # sButtName is used below ...
I need to instantiate the Switch class, but I also need to provide dDict so that the Buttons class has the data it expects. What is the right way to instantiate Switch? I'm not sure how to handle multiple input params when some of them have default values.
The other suggested solution is not an exact match: both of my classes take input params, and one of them has a default value set for its param.
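One way to wire this up, as a minimal sketch: let Switch accept dDict alongside its own parameters and forward it to Buttons via super().__init__() (Tool and the attribute names are taken from the question; the stub class and usage values below are placeholders):

class Tool:
    pass  # stand-in for the question's Tool class, just to make the sketch runnable

class Buttons():
    def __init__(self, dDict):
        self.TCT = Tool()
        self.dDict = dDict

class Switch(Buttons):
    def __init__(self, dDict, iButtonType, sButtName=None):
        # let Buttons set up self.TCT and self.dDict
        super().__init__(dDict)
        self.iButtonType = iButtonType
        self.sButtName = sButtName

# hypothetical usage: the dict key and values are placeholders
sw = Switch({'setup': 'some value'}, iButtonType=1)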

Python create dynamic class and set multi bases from imported module

I found several examples here, but none is exactly what I'm looking for, so I will try to explain. Starting from this answer, I tried to achieve my result, but it is not quite what I need:
How can I dynamically create derived classes from a base class
I have a module that holds many classes. Importing the module:

import importlib

# many classes are available in this module
forms = importlib.import_module('my_forms')
Now, based on forms, I need to create a new class whose bases are all of the classes available in forms. This is what I tried, but I cannot find a way to assign the bases:
import inspect

def create_DynamicClass():
    class DynamicClass(BaseClass):
        pass

    for form_name, class_name in inspect.getmembers(forms):
        for i in class_name():
            # here goes the code to add all bases to DynamicClass
            ...
    return DynamicClass()
Here is an example of what the my_forms module looks like:
class MyClass1(BaseClass):
    attr1 = 1
    attr2 = 2

    @coroutine
    def prepare(self):
        # some code for each class
        ...

class MyClass2(BaseClass):
    attr3 = 3
    attr4 = 4

    @coroutine
    def prepare(self):
        # some code for each class
        ...

class MyClass3(BaseClass):
    attr5 = 5
    attr6 = 6

    @coroutine
    def prepare(self):
        # some code for each class
        ...
The result I want to achieve is the following. I'll write out a static class to show the desired result, but it needs to be built dynamically, because the my_forms module can contain any number of classes:
# inherits all classes from the my_forms module
class MyResultClass(MyClass1, MyClass2, MyClass3):
    # here, all available attributes from all classes are inherited

    @coroutine
    def prepare(self):
        # each class's prepare function needs to run as well
        yield MyClass1().prepare()
        yield MyClass2().prepare()
        yield MyClass3().prepare()
Simply declare the dynamic class with all of your base classes. To do so, put all of your base classes in a list, and unpack the list in the class definition statement with the * operator like this:
def createClass(baseClasses):
    class NewClass(*baseClasses):
        pass

    return NewClass

DynamicClass = createClass([class1, class2, ...])
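For what it's worth, the same idea can also be sketched with the built-in type() constructor, which takes a class name, a tuple of bases, and a namespace dict (the class names below are placeholders from the example module):

def create_class(base_classes):
    # type(name, bases, namespace) builds a new class object at runtime
    return type('DynamicClass', tuple(base_classes), {})

# hypothetical usage with classes collected from the imported module
DynamicForm = create_class([MyClass1, MyClass2, MyClass3])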
I have managed to find a solution and will post it here; any recommendations to make it better are appreciated.
import importlib
import inspect

forms = importlib.import_module('my_forms')

class Form(BaseForm):

    @coroutine
    def prepare(self):
        for form_name, class_name in inspect.getmembers(forms, inspect.isclass):
            try:
                yield class_name().prepare()
            except TypeError:
                continue

def createClass(meta):
    for form_name, class_name in inspect.getmembers(forms, inspect.isclass):
        try:
            Form.__bases__ += (class_name, )
            for field in class_name():
                field_type = fl.SelectField() if hasattr(field, 'choices') else fl.StringField()
                setattr(Form, field.name, field_type)
        except TypeError:
            continue
    return Form(meta=meta)

Accessing variables from a method in class A and using it in Class B in python3.5

I have a BaseClass and two classes (Volume and testing) that inherit from it. The class Volume uses a function driving_style from another Python module. I am trying to write another function, test_score, that needs to access variables computed inside driving_style so it can compute further results. Those results are then used by the class testing, as shown.
from training import Accuracy
import ComputeData
import model

class BaseClass(object):
    def __init__(self, connections):
        self.Type = 'Stock'
        self.A = connections.A
        self.log = self.B.log

    def getIDs(self, assets):
        ids = pandas.Series(assets.ids, index=assets.B)
        return ids

class Volume(BaseClass):
    def __init__(self, connections):
        BaseClass.__init__(self, connections)
        self.daystrade = 30
        self.high_low = True

    def learning(self, data, rootClass):
        params.daystrade = self.daystrade
        params.high_low = self.high_low
        style = Accuracy.driving_style()
        return self.Object(data.universe, style)

class testing(BaseClass):
    def __init__(self, connections):
        BaseClass.__init__(self, connections)

    def learning(self, data, rootClass):
        test_score = Accuracy.test_score()
        return self.Object(data.universe, test_score)
def driving_style(date, modelDays, params):
    daystrade = params.daystrade
    high_low = params.high_low
    DriveDays = model.DateRange(date, params.daystrade)
    StopBy = ComputeData.instability(DriveDays)
    if high_low:
        style = ma.average(StopBy)
    else:
        style = ma.mean(StopBy)
    return style
def test_score(date, modelDays, params):
    "want to access the following from the method driving_style:"
    DriveDays =
    StopBy =
    return test_score  # which I compute using the values DriveDays and StopBy, and use
                       # test_score in the method learning inside the class testing,
                       # which inherits some params from the BaseClass
You can't use locals from a call to a function that was made elsewhere and has already returned.
A bad solution is to store them as globals that you can read later (but that get replaced on every new call). A better solution might be to return the relevant info to the caller along with the existing return values (return style, DriveDays, StopBy) and somehow get it to where it needs to go. If necessary, you could wrap the function in a class and store the computed values as attributes on an instance of the class, while keeping the return type the same.
But the best solution is probably to refactor, so the stuff you want is computed by dedicated methods that you can call directly from test_score and driving_style independently, without duplicating code or creating complicated state dependencies.
In short, basically any time you think you need to access locals from another function, you're almost certainly experiencing an XY problem.
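A hedged sketch of that refactor, reusing only the names from the question (model, ComputeData, ma and params are assumed to behave as in the question's code): the shared values are computed by one helper that both functions call.

def compute_drive_data(date, params):
    # shared computation previously done inside driving_style
    DriveDays = model.DateRange(date, params.daystrade)
    StopBy = ComputeData.instability(DriveDays)
    return DriveDays, StopBy

def driving_style(date, modelDays, params):
    DriveDays, StopBy = compute_drive_data(date, params)
    return ma.average(StopBy) if params.high_low else ma.mean(StopBy)

def test_score(date, modelDays, params):
    DriveDays, StopBy = compute_drive_data(date, params)
    # compute the score from DriveDays and StopBy here
    ...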

How can a class property have a function and getter/setter on an arbitrary number of arguments?

Let's say I have the following code and I would like to reverse engineer it:
class ParentClass(object):
    settings = None

    def __init__(self):
        pass

class MyClass(ParentClass):
    def __init__(self):
        super().__init__()
        self.settings.add(ConfigParam(
            name='setting_name', conf_type=str))
        # I can also get that new setting back
        my_setting_val = self.settings.setting_name.value
My parent class has a settings attribute, and inside MyClass I can add a new configuration parameter to it; I can also read the newly added setting back via self.settings.setting_name.value. If I used @property I would need a specific getter for each attribute, and here my settings are dynamic!
How can I implement this behavior in Python 3?
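One possible sketch of such a dynamic settings container, assuming ConfigParam simply carries a name, a type and a value (the question does not show its definition, so this is only an illustration):

class ConfigParam:
    def __init__(self, name, conf_type, value=None):
        self.name = name
        self.conf_type = conf_type
        self.value = value

class Settings:
    def add(self, param):
        # expose each ConfigParam as an attribute named after it
        setattr(self, param.name, param)

class ParentClass(object):
    def __init__(self):
        # per-instance container; a class-level one would be shared by all subclasses
        self.settings = Settings()

class MyClass(ParentClass):
    def __init__(self):
        super().__init__()
        self.settings.add(ConfigParam(name='setting_name', conf_type=str))
        my_setting_val = self.settings.setting_name.value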

Templating Python class level attributes to create generic rest serializers

I'm using the Django Rest Framework and would like to serialize different types using the same format: a list of all instances of a specific type, plus a certain selected instance.
My problem is that I have to write a different serializer class for every type that I want to serialize. In C++ I'd solve this by giving the type and type serializer as a template argument. How can I do this in Python?
The generic Object I'd like to serialize:
class OptionSelect(object):
    def __init__(self, options, selected):
        self.options = options
        self.selected = selected
What I currently need to serialize it:
class TypeAOptionSerializer(serializers.Serializer):
    options = TypeASerializer(many=True)
    selected = TypeASerializer()

class TypeBOptionSerializer(serializers.Serializer):
    options = TypeBSerializer(many=True)
    selected = TypeBSerializer()

class TypeCOptionSerializer(serializers.Serializer):
    options = TypeCSerializer(many=True)
    selected = TypeCSerializer()
Instead I'd like to create a Serializer like this:
class OptionSerializer(serializers.Serializer):
    options = serializer(many=True)
    selected = serializer()

    def __init__(self, serializer):
        self.serializer = serializer
        super().__init__()
Is there maybe a different approach that I should be taking?
You can try the following:
def create_serializer(serializer):
    class MySerializer(serializers.Serializer):
        options = serializer(many=True)
        selected = serializer()

    return MySerializer

TypeAOptionSerializer = create_serializer(TypeASerializer)
TypeBOptionSerializer = create_serializer(TypeBSerializer)
TypeCOptionSerializer = create_serializer(TypeCSerializer)
This should be equivalent to your current approach with three separate classes.
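For example (hypothetical usage; the instance names are placeholders), the generated class behaves like any other DRF serializer:

# wrap the data in the question's OptionSelect and serialize it
option = OptionSelect(options=[type_a_1, type_a_2], selected=type_a_1)
data = TypeAOptionSerializer(option).data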

python property referring to property/attribute of member attribute?

I'm wondering if I have:
class A(object):
    def __init__(self):
        self.attribute = 1
        self._member = 2

    def _get_member(self):
        return self._member

    def _set_member(self, member):
        self._member = member

    member = property(_get_member, _set_member)

class B(object):
    def __init__(self):
        self._member = A()

    def _get_a_member(self):
        return self._member.member

    def _set_a_member(self, member):
        self._member.member = member

    member = property(_get_a_member, _set_a_member)
Can I somehow avoid writing getters/setters for A.member and simply refer to the attribute or property of the A object?
Where the getters/setters do some logic they are of course needed, but if I simply want to expose the members/attributes of a member attribute, writing getters/setters seems like overhead.
Even being able to write the getters/setters inline would help, I think.
I find the question a bit unclear; however, I'll try to explain some context.
Where the getters/setters do some logic they are of course needed, but if I simply want to expose the members/attributes of a member attribute
If there is no logic in the getters/setters, then there is no need to define the attribute as a property; the attribute can be used directly (in any context).
So
class A(object):
    def __init__(self):
        self.attribute = 1
        self.member = 2

class B(object):
    def __init__(self):
        self.member = A()

B().member.member  # returns 2
B().member.member = 10
In some languages it's considered good practice to abstract instance properties with getter/setter methods. That's not necessarily the case in Python.
Python properties are useful when you'd need more control over the attribute, for example:
when there is logic (validation, etc.)
to define a read-only attribute (providing only a getter, without a setter); a short sketch of this follows below
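A minimal sketch of the read-only case, using a getter-only property:

class ReadOnlyExample(object):
    def __init__(self):
        self._member = 2

    @property
    def member(self):
        # no setter is defined, so "obj.member = ..." raises AttributeError
        return self._member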
Update (after the comment)
Properties are not necessarily a tool to "hide" some internal implementation. Hiding in Python is a bit different than in, say, Java, due to the very dynamic nature of the language. It's always possible to introspect and even change objects on the fly; you can add new attributes (and even methods) to objects at runtime:
b = B()
b.foo = 4  # define a new attribute at runtime
b.foo      # returns 4
So Python developers rely more on conventions to signal their intended abstractions.
As for polymorphic members, I think it's most natural for Python classes to simply share an interface; that's what's meant by duck typing. So as long as your next implementation of A supports the same interface (provides the same methods to callers), changing its implementation should not be an issue.
So this is what I came up with: use a helper function to generate the properties, under the assumption that the object has a _member attribute:
def generate_cls_a_property(name):
    """Small helper for generating a 'dumb' pass-through property for the A object."""
    def getter(obj):
        return getattr(obj._member, name)

    def setter(obj, new_value):
        setattr(obj._member, name, new_value)

    return property(getter, setter)
This allows me to add properties like so:
class B(object):
    def __init__(self):
        self._member = A()

    member = generate_cls_a_property('member')  # generates a dumb/pass-through property
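Hypothetical usage of the generated pass-through property:

b = B()
b.member        # reads the wrapped A instance's member, returns 2
b.member = 10   # writes through to the wrapped A instance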
I'll accept my own, unless someone tops it within a week.. :)
