How do you define a custom primitive with parameters using the Featuretools package? - featuretools

I'm trying to create a custom transform primitive with the Featuretools package where I can pass in a parameter to change the behaviour of the function.
For example, for the following custom log transformation class, I wish to add a base parameter so I can take log transformations of features with different bases:
class Log(TransformPrimitive):
    """Computes the logarithm for a numeric column."""
    name = 'log'
    input_types = [Numeric]
    return_type = Numeric

    def get_function(self):
        return np.log
How would I go about implementing such a primitive, and moreover, how would it be used with the featuretools.dfs() function?

Consider an __init__ function within the class.
For example,
class Log(TransformPrimitive):
    """Computes the logarithm for a numeric column."""
    name = 'log'
    input_types = [Numeric]
    return_type = Numeric

    def __init__(self, n=3):
        self.n = n

    def get_function(self):
        def log_base_n(array):
            # change of base: log_n(x) = ln(x) / ln(n)
            return np.log(array) / np.log(self.n)
        return log_base_n
To call it: log_base_n = Log(n=2).
In DFS you would add the corresponding class instance to the list of primitives.
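A minimal sketch of that call, assuming a hypothetical entityset es with an entity named 'log_values' (the exact dfs keyword arguments vary between Featuretools versions):

import featuretools as ft

log_base_2 = Log(n=2)
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_entity='log_values',
                                      trans_primitives=[log_base_2])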

Related

Python create dynamic class and set multi bases from imported module

I found several examples here, but none are exactly what I'm looking for, so I'll try to explain.
Starting from this answer, I tried to achieve my result, but it isn't quite what I need:
How can I dynamically create derived classes from a base class
I have a module that holds many classes.
Importing the module:
import importlib

# many classes are available here
forms = importlib.import_module('my_forms')
Now, based on forms, I need to create a new class and add, as bases to my new class, all the classes that are available in forms.
This is what I tried, but I cannot find a way to assign the bases:
import inspect

def create_DynamicClass():
    class DynamicClass(BaseClass):
        pass

    for form_name, class_name in inspect.getmembers(forms):
        for i in class_name():
            # here the code to add all bases to DynamicClass
            pass

    return DynamicClass()
Here is an example of how the my_forms module looks:
class MyClass1(BaseClass):
    attr1 = 1
    attr2 = 2

    @coroutine
    def prepare(self):
        ...  # some code for each class

class MyClass2(BaseClass):
    attr3 = 3
    attr4 = 4

    @coroutine
    def prepare(self):
        ...  # some code for each class

class MyClass3(BaseClass):
    attr5 = 5
    attr6 = 6

    @coroutine
    def prepare(self):
        ...  # some code for each class
The result I want to achieve is the following; I will write a static class to show the desired result, but it needs to be created dynamically.
I need to create my class dynamically because the my_forms module can contain any number of classes.
# inherits all classes from the my_forms module
class MyResultClass(MyClass1, MyClass2, MyClass3):
    # here, get all available attributes from all the classes

    @coroutine
    def prepare(self):
        # each class's prepare function is needed as well
        yield MyClass1().prepare()
        yield MyClass2().prepare()
        yield MyClass3().prepare()
Simply declare the dynamic class with all of your base classes. To do so, put all of your base classes in a list, and unpack the list in the class definition statement with the * operator like this:
def createClass(baseClasses):
    class NewClass(*baseClasses):
        pass
    return NewClass

DynamicClass = createClass([class1, class2, ...])
I have managed to find a solution and will post it here; any recommendations for making it better are appreciated.
forms = importlib.import_module('my_forms')

class Form(BaseForm):
    @coroutine
    def prepare(self):
        for form_name, class_name in inspect.getmembers(forms, inspect.isclass):
            try:
                yield class_name().prepare()
            except TypeError:
                continue

def createClass(meta):
    for form_name, class_name in inspect.getmembers(forms, inspect.isclass):
        try:
            Form.__bases__ += (class_name, )
            for field in class_name():
                field_type = fl.SelectField() if hasattr(field, 'choices') else fl.StringField()
                setattr(Form, field.name, field_type)
        except TypeError:
            continue
    return Form(meta=meta)
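As a side note, a sketch of an alternative approach (assuming the my_forms module and the common BaseClass from the question are in scope): type() can build the derived class in one step, so __bases__ never has to be patched in place.

import importlib
import inspect

forms = importlib.import_module('my_forms')

def create_dynamic_class():
    # collect every class defined in my_forms that derives from BaseClass
    bases = tuple(cls for _, cls in inspect.getmembers(forms, inspect.isclass)
                  if issubclass(cls, BaseClass) and cls is not BaseClass)
    # type(name, bases, namespace) creates a class inheriting from every base
    return type('DynamicForm', bases, {})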

Accessing variables from a method in class A and using it in Class B in python3.5

I have a BaseClass and two classes (Volume and testing) which inherit from the BaseClass. The class "Volume" uses a method "driving_style" from another Python module. I am trying to write another method "test_score" that needs to access variables computed in the method "driving_style", which I want to use for further computation. These results will then be used by the class "testing" as shown.
from training import Accuracy
import ComputeData
import model

class BaseClass(object):
    def __init__(self, connections):
        self.Type = 'Stock'
        self.A = connections.A
        self.log = self.B.log

    def getIDs(self, assets):
        ids = pandas.Series(assets.ids, index=assets.B)
        return ids

class Volume(BaseClass):
    def __init__(self, connections):
        BaseClass.__init__(self, connections)
        self.daystrade = 30
        self.high_low = True

    def learning(self, data, rootClass):
        params.daystrade = self.daystrade
        params.high_low = self.high_low
        style = Accuracy.driving_style()
        return self.Object(data.universe, style)

class testing(BaseClass):
    def __init__(self, connections):
        BaseClass.__init__(self, connections)

    def learning(self, data, rootClass):
        test_score = Accuracy.test_score()
        return self.Object(data.universe, test_score)

def driving_style(date, modelDays, params):
    daystrade = params.daystrade
    high_low = params.high_low
    DriveDays = model.DateRange(date, params.daystrade)
    StopBy = ComputeData.instability(DriveDays)
    if high_low:
        style = ma.average(StopBy)
    else:
        style = ma.mean(StopBy)
    return style

def test_score(date, modelDays, params):
    # want to access the following from the method driving_style:
    DriveDays = ...
    StopBy = ...
    # test_score is computed using the values DriveDays and StopBy, and is then used in
    # the method 'learning' inside the class 'testing', which inherits some params from the BaseClass
    return test_score
You can't use locals from a call to a function that was made elsewhere and has already returned.
A bad solution is to store them as globals that you can read from later (but that get replaced on every new call). A better solution might be to return the relevant info to the caller along with the existing return values (return style, DriveDays, StopBy) and somehow get it to where it needs to go. If necessary, you could wrap the function into a class and store the computed values as attributes on an instance of the class, while keeping the return type the same.
But the best solution is probably to refactor, so the stuff you want is computed by dedicated methods that you can call directly from test_score and driving_style independently, without duplicating code or creating complicated state dependencies.
In short, basically any time you think you need to access locals from another function, you're almost certainly experiencing an XY problem.
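A minimal sketch of that refactor, assuming model, ComputeData, ma and params behave as in the question (the helper names compute_drive_days and compute_stop_by are hypothetical):

def compute_drive_days(date, params):
    # shared helper: the date range both functions need
    return model.DateRange(date, params.daystrade)

def compute_stop_by(drive_days):
    # shared helper: the instability measure both functions need
    return ComputeData.instability(drive_days)

def driving_style(date, modelDays, params):
    drive_days = compute_drive_days(date, params)
    stop_by = compute_stop_by(drive_days)
    return ma.average(stop_by) if params.high_low else ma.mean(stop_by)

def test_score(date, modelDays, params):
    drive_days = compute_drive_days(date, params)
    stop_by = compute_stop_by(drive_days)
    # ...compute and return the actual score from drive_days and stop_by here...
    return drive_days, stop_by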

How to inherit from a class with __init__ and take multiple input parms with default values

I have two classes as follows:
class Buttons():
    def __init__(self, dDict):
        self.TCT = Tool()
        self.dDict = dDict

    def btnName(self):
        # I will use both self.TCT and self.dDict here
        a = self.dDict['setup']

class Switch(Buttons):
    def __init__(self, iButtonType, sButtName=None):
        self.TCT = Tool()
        self.iButtonType = iButtonType
        # sButtName being used below ....
I need to instantiate the Switch class, but I also need to provide dDict so that the Buttons class will have the data it is looking for. What is the right way to instantiate the class Switch? I'm not sure how to handle multiple input parameters with default values.
This other possible solution is not an exact match: both my classes take input parameters, and one of the classes has a default value set for its input parameter.
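A minimal sketch of one common approach, assuming Switch should simply forward dDict up to Buttons via super().__init__() and that an empty dict is an acceptable default (the extra dDict parameter on Switch is an assumption):

class Switch(Buttons):
    def __init__(self, iButtonType, sButtName=None, dDict=None):
        # let Buttons.__init__ create self.TCT and store the dict
        super().__init__(dDict if dDict is not None else {})
        self.iButtonType = iButtonType
        self.sButtName = sButtName

switch = Switch(iButtonType=1, dDict={'setup': 'value'})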

Dynamically assigning sub class dependent decorators

I have a class that has a basic method, and subclasses that have the same base functionality, but additional behaviour, which can be implemented with decorators.
class cls_with_basic_method:
    # if ... exec("@decoratorA")
    # if ... exec("@decoratorB")
    # ...
    def basic_method(self, arg):
        # ...
        return arg

class cls_with_basic_method_and_decoratorA(cls_with_basic_method):
    # ...

class cls_with_basic_method_and_decoratorB(cls_with_basic_method):
    # ...

# ...
It seems the quickest solution would be if I were able to execute the particular decorator as the subclass method is called, but I can't think of a way of expressing it in Python. Can this easily be done?
A decorated function or method is usually a different object than the function or method it decorates [*] - so you can just wrap the original class's method in an explicit way. This is rather straightforward, and rather boring - but it will work if you need to decorate just a few methods of the sub-classes:
class cls_with_basic_method:
    def basic_method(self, arg):
        # ...
        return arg

class cls_with_basic_method_and_decoratorA(cls_with_basic_method):
    basic_method = decoratorA(cls_with_basic_method.basic_method)

class cls_with_basic_method_and_decoratorB(cls_with_basic_method):
    basic_method = decoratorB(cls_with_basic_method.basic_method)
The only special thing done there is using the decorators with the syntax of regular function calls instead of the "@..." syntax - this way they can be used inside expressions.
This method is also tedious because you have to hardcode the superclass name within the class body at each decoration, since you can't use super from the class body, only from inside methods.
[*] Although some decorators just add metadata to the callable object they decorate and return the object itself - this approach won't work for such decorators, as they will affect the method in the superclass as well.
Now, taking your problem further - what you want is just to wrap arbitrary methods on the superclass when they are called on the subclasses. That can be done more or less automatically if you override the class's __getattribute__ - you could then create a class hierarchy with a special "decorator" attribute that is applied on each method call - more or less like this:
class cls_with_basic_method:
    _auto_decorate = {"basic_method"}  # add any other method names to auto-decorate
    _decorator = lambda x: x  # NOP decorator

    def basic_method(self, arg):
        # ...
        return arg

    def __getattribute__(self, attrname):
        attr = object.__getattribute__(self, attrname)
        # short-circuit non-method retrievals as fast as possible:
        if attrname not in __class__._auto_decorate or not callable(attr):
            return attr
        return self.__class__._decorator(attr)

class cls_with_basic_method_and_decoratorA(cls_with_basic_method):
    _decorator = decoratorA

class cls_with_basic_method_and_decoratorB(cls_with_basic_method):
    _decorator = decoratorB
Of course, if you need different decorators for different methods, just change the code in __getattribute__ accordingly - the easiest way would be to make the _decorator attribute a dictionary instead of pointing to a single function.
(On a side note: the __class__ magic variable, when used inside a method, is a Python 3 thing: it automatically contains a reference to the class the method is defined in - in this case, cls_with_basic_method.)
This approach will redecorate the method on each call - it is not as much overhead as it seems to be - Python's default method retrieval mechanism itself is similarly complicated - but if you prefer to decorate the methods at class creation instead, then you can use a similar mechanism in a metaclass instead of relying on __getattribute__.
from itertools import chain

class AutoDecorate(type):
    def __new__(metacls, name, bases, dct):
        if "_decorator" not in dct:
            dct["_decorator"] = lambda x: x  # NOP decorator
        all_bases = list(chain.from_iterable(base.__mro__ for base in bases))
        for base in all_bases:
            if "_auto_decorate" not in base.__dict__:
                continue
            for method_name in base._auto_decorate:
                if method_name not in dct:
                    dct[method_name] = dct["_decorator"](getattr(base, method_name))
        return super().__new__(metacls, name, bases, dct)

class cls_with_basic_method(metaclass=AutoDecorate):
    _auto_decorate = {"basic_method"}  # add any other method names to auto-decorate

    def basic_method(self, arg):
        # ...
        return arg

class cls_with_basic_method_and_decoratorA(cls_with_basic_method):
    _decorator = decoratorA

class cls_with_basic_method_and_decoratorB(cls_with_basic_method):
    _decorator = decoratorB
This is actually simpler than it might look: upon creating a new class in the hierarchy, it just searches all superclasses for those which have the _auto_decorate attribute - it then fetches the methods in that list and decorates them with the decorator in the _decorator attribute of the class being created.
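For illustration, a hypothetical usage sketch that works with either variant above (decoratorA is just an ordinary wrapping decorator, assumed here for the example; it would have to be defined before the subclasses above are created):

import functools

def decoratorA(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("decoratorA: before basic_method")
        result = func(*args, **kwargs)
        print("decoratorA: after basic_method")
        return result
    return wrapper

obj = cls_with_basic_method_and_decoratorA()
obj.basic_method(42)  # runs decoratorA's wrapper around basic_method and returns 42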
From what you are asking, I'd say you are dealing with a project where you need an "aspect oriented programming" approach. There are several Python libraries that provide that functionality - maybe you should take a look at those. If you think so, search for modules that provide appropriate Python aspect-oriented capabilities and use them.

Dynamically add methods to a class in Python 3.0

I'm trying to write a Database Abstraction Layer in Python which lets you construct SQL statements using chained function calls such as:
results = db.search("book")
            .author("J. K. Rowling")
            .price("<40.00")
            .title("Harry")
            .execute()
but I am running into problems when I try to dynamically add the required methods to the db class.
Here are the important parts of my code:
import inspect

def myName():
    return inspect.stack()[1][3]

class Search():
    def __init__(self, family):
        self.family = family
        self.options = ['price', 'name', 'author', 'genre']
        # self.options is generated based on family, but this is an example
        for opt in self.options:
            self.__dict__[opt] = self.__Set__
        self.conditions = {}

    def __Set__(self, value):
        self.conditions[myName()] = value
        return self

    def execute(self):
        return self.conditions
However, when I run the example such as:
print(db.search("book").price(">4.00").execute())
outputs:
{'__Set__': '>4.00'}
Am I going about this the wrong way? Is there a better way to get the name of the function being called or to somehow make a 'hard copy' of the function?
You can simply add the search functions (methods) after the class is created:
class Search:  # The class does not include the search methods, at first
    def __init__(self):
        self.conditions = {}

def make_set_condition(option):  # Factory function that generates a "condition setter" for "option"
    def set_cond(self, value):
        self.conditions[option] = value
        return self
    return set_cond

for option in ('price', 'name'):  # The class is extended with additional condition setters
    setattr(Search, option, make_set_condition(option))

Search().name("Nice name").price('$3').conditions  # Example
{'price': '$3', 'name': 'Nice name'}
PS: This class has an __init__() method that does not have the family parameter (the condition setters are dynamically added at runtime, but are added to the class, not to each instance separately). If Search objects with different condition setters need to be created, then the following variation on the above method works (the __init__() method has a family parameter):
import types

class Search:  # The class does not include the search methods, at first
    def __init__(self, family):
        self.conditions = {}
        for option in family:  # The instance is extended with additional condition setters
            # The new 'option' attributes must be methods, not regular functions:
            setattr(self, option, types.MethodType(make_set_condition(option), self))

def make_set_condition(option):  # Factory function that generates a "condition setter" for "option"
    def set_cond(self, value):
        self.conditions[option] = value
        return self
    return set_cond

>>> o0 = Search(('price', 'name'))  # Example
>>> o0.name("Nice name").price('$3').conditions
{'price': '$3', 'name': 'Nice name'}
>>> dir(o0)  # Each Search object has its own condition setters (here: name and price)
['__doc__', '__init__', '__module__', 'conditions', 'name', 'price']
>>> o1 = Search(('director', 'style'))
>>> o1.director("Louis L").conditions  # New method name
{'director': 'Louis L'}
>>> dir(o1)  # Each Search object has its own condition setters (here: director and style)
['__doc__', '__init__', '__module__', 'conditions', 'director', 'style']
Reference: http://docs.python.org/howto/descriptor.html#functions-and-methods
If you really need search methods that know about the name of the attribute they are stored in, you can simply set it in make_set_condition() with
set_cond.__name__ = option # Sets the function name
(just before the return set_cond). Before doing this, the method Search.price has the following name:
>>> Search.price
<function set_cond at 0x107f832f8>
after setting its __name__ attribute, you get a different name:
>>> Search.price
<function price at 0x107f83490>
Setting the method name this way makes error messages involving the method easier to understand.
Firstly, you are not adding anything to the class, you are adding it to the instance.
Secondly, you don't need to access __dict__ directly: self.__dict__[opt] = self.__Set__ is better done with setattr(self, opt, self.__Set__).
Thirdly, don't use __xxx__ as attribute names. Those are reserved for Python-internal use.
Fourthly, as you noticed, Python is not easily fooled. The internal name of the method you call is still __Set__, even though you access it under a different name. :-) The name is set when you define the method as a part of the def statement.
You probably want to create and set the options methods with a metaclass. You also might want to actually create those methods instead of trying to use one method for all of them. If you really want to use only one, __getattr__ is the way to go, but it can be a bit fiddly; I generally recommend against it. Lambdas or other dynamically generated methods are probably better.
Here is some working code to get you started (not the whole program you were trying to write, but something that shows how the parts can fit together):
class Assign:
    def __init__(self, searchobj, key):
        self.searchobj = searchobj
        self.key = key

    def __call__(self, value):
        self.searchobj.conditions[self.key] = value
        return self.searchobj

class Book():
    def __init__(self, family):
        self.family = family
        self.options = ['price', 'name', 'author', 'genre']
        self.conditions = {}

    def __getattr__(self, key):
        if key in self.options:
            return Assign(self, key)
        raise RuntimeError('There is no option for: %s' % key)

    def execute(self):
        # XXX do something with the conditions.
        return self.conditions

b = Book('book')
print(b.price(">4.00").author('J. K. Rowling').execute())
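For reference, that final print would show the collected conditions, something like:
{'price': '>4.00', 'author': 'J. K. Rowling'}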
