What's the correct way to call and use this class? Also getting TypeError: missing 1 required positional argument: 'self' - python-3.x

I'm still learning the various uses for class methods. I have some code that performs linear regression, so I decided to make a general class called LinRegression and use more specific methods that call the class based on the type of linear regression (i.e. use one trailing day, or 5 trailing days, etc. for the regression).
Anyway, here it goes. I feel like I am doing something wrong here with regard to how I defined the class and how I am calling it.
This is from the main.py file:
lin_reg = LinRegression(daily_vol_result)
lin_reg.one_day_trailing()
And this is from the linear_regression file (just showing the one day trailing case):
class LinRegression:
    import matplotlib.pyplot as plt
    import numpy as np
    from sklearn.linear_model import LinearRegression as lr
    from sklearn.metrics import mean_squared_error as mse
    from SEplot import se_plot as SE

    def __init__(self, daily_vol_result):
        """
        :param daily_vol_result: result from def daily_vol_calc
        """
        import numpy as np
        data = np.asarray(daily_vol_result['Volatility_Daily'])
        self.data = data

    @classmethod
    def one_day_trailing(cls, self):
        """
        Compute one day trailing volatility
        :return: Mean Squared error, slope: b, and y-int: c
        """
        x = self.data[:-1]
        y = self.data[1:]
        x = x.reshape(len(x), 1)
        cls.lr.fit(x, y)
        b = cls.lr.coef_[0]
        c = cls.lr.intercept_
        y_fit1 = b * x + c
        MSE1 = cls.mse(y, y_fit1)
        print("MSE1 is " + str(MSE1))
        print("intercept is " + str(c))
        print("slope is " + str(b))
        cls.SE(y, y_fit1)
        return MSE1, b, c
What I "think" I am doing is that when I call lin_reg, I already have the daily_vol_result passed, then lin_reg.one_day_trailing() should just execute the one_day_trailing def using the self defined in init.
However, I get TypeError: one_day_trailing() missing 1 required positional argument: 'self'. Some other info, the variable, daily_vol_result is a DataFrame and I convert to np array to do the linear regression with sklearn.
Also, when I tried messing around with the code to work, I had an additional issue where the line: lr.fit(x, y) gave me a type error with no positional arg for y. I checked the existence and length of y to see if it matched x and it checks out. I am pretty confused as to how I was only passing one arg.
Your ideas and advice are welcome, thanks!

The thing is, you are using the wrong position for self in the method one_day_trailing(cls, self): you have put self in the second position of the method definition.
When you call the method with nothing passed in, as you do in the second line of your code:
lin_reg.one_day_trailing()
only one argument is bound automatically, and it goes to the first parameter, cls; nothing is left for the self parameter of one_day_trailing(), hence the error.
Interchanging the arguments in the def, like this:
def one_day_trailing(self, cls):
would be better. But then you need to pass in the cls object, whatever it is meant to be.
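If you do not actually need a second object at all, the simpler fix is to drop cls and make one_day_trailing a plain instance method. A minimal sketch of that signature change, reusing the names from the question:

class LinRegression:
    def __init__(self, daily_vol_result):
        import numpy as np
        self.data = np.asarray(daily_vol_result['Volatility_Daily'])

    def one_day_trailing(self):   # only self, which lin_reg.one_day_trailing() binds automatically
        x = self.data[:-1].reshape(-1, 1)
        y = self.data[1:]
        ...                       # fit the regression here and return MSE1, b, c

With that signature, lin_reg.one_day_trailing() no longer raises the TypeError.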
See the following questions to know more:
missing 1 required positional argument:'self'
TypeError: attack() missing 1 required positional argument: 'self'

I found out that the linear regression package was acting like a class, so lr.fit(self, x, y) was the signature it actually wanted. I first instantiated the class as A = lr(), then called A.fit(x, y).
I had this line in my main file:
ASDF = LinRegression.one_day_trailing(daily_vol_result)
I also figured out a more general way to produce these functions. I did not end up needing to use @classmethod or @staticmethod.
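For reference, a rough sketch of that pattern (using the sklearn aliases from the question's imports; this is my reconstruction, not the poster's exact final code):

from sklearn.linear_model import LinearRegression as lr
from sklearn.metrics import mean_squared_error as mse

def one_day_trailing(self):
    x = self.data[:-1].reshape(-1, 1)
    y = self.data[1:]
    model = lr()                 # instantiate the estimator first
    model.fit(x, y)              # then fit only needs x and y
    b, c = model.coef_[0], model.intercept_
    y_fit1 = b * x.ravel() + c
    return mse(y, y_fit1), b, c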

Related

Class body visibility in Python

Even after reading the answer by @ncoghlan in
Python nonlocal statement in a class definition
(which, by the way, I didn't fully understand),
I'm not able to understand this behavior.
# The script that fails, and I can't explain the failure
class DW:
    def r(): return 3
    def rr(): return 2 + r()
    y = r()
    x = rr()

# A solution that I don't like: I don't want higher-order functions
class W:
    def r(): return 3
    def rr(r): return 2 + r()
    y = r()
    x = rr(r)
Class bodies act more or less like scripts. This feature is still somewhat strange to me. Around this, I have some newbie questions, and I thank you in advance if you can help me with them.
Can I define functions inside the body of a class, use them, and delete them before the end of the class definition? (In that case, naturally, the deleted functions will not exist for instances of the class.)
How can I give functions and attributes inside a class definition better visibility to each other, without having to pass arguments around to get access to them?
You can use self to refer to a method of the object itself:
class W:
    def r(): return 3
    def rr(self): return 2 + self.r()
    y = r()
    x = rr(r)
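As written, the last line of that snippet still fails at class-definition time: rr(r) passes the plain function r in as self, and a function object has no .r attribute. The self-based rr only works when called on an instance (w = W(); w.rr()). For the first question in the post, deleting helpers before the class definition ends does work; a small sketch (the class name V and helper _r are just for illustration):

class V:
    def _r():            # plain helper, callable while the class body is executing
        return 3
    y = _r()             # 3
    x = 2 + _r()         # 5
    del _r               # the helper is removed before the class definition ends

print(V.y, V.x)          # 3 5
hasattr(V, '_r')         # False: neither the class nor its instances keep the helper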

Using `functools.partial` and `map` with built-in `getattr`?

I apologize if I am completely missing something obvious or have not dug into the documentation hard enough, but after 30 minutes or so I found a workaround (without having understood the error I was getting), hence the question here. Suppose I have a class:
class RGB(object):
    def __init__(self, r, g, b):
        super(RGB, self).__init__()
        self.red = r
        self.blue = b
        self.green = g
and I define a list of RGB instances as follows:
from random import random
rr, gg, bb = [[random() for _ in range(20)] for _ in range(3)]
list_of_rgbs = [RGB(*item) for item in zip(rr, gg, bb)]
why can't I extract a list of red values by doing:
from functools import partial
*reds, = map(partial(getattr, name="red"), list_of_rgbs)
or
*reds, = map(partial(getattr, "red"), list_of_rgbs)
I know I can make it do what I want by saying reds = [x.red for x in list_of_rgbs], but that would be difficult if the list of attributes to extract comes from elsewhere, like attribs_to_get = ['red', 'blue']. In this particular case I can still do what I want by:
reds, blues = [[getattr(x, attrib) for x in list_of_rgbs] for attrib in attribs_to_get]
but my question is about what causes the error. Can someone explain why, or how to make it work using partial and map? I have a hunch it has something to do with this behavior (and so maybe the partial function needs a reference to self?) but I can't quite tease it out.
For reference I was on Python 3.7.
partial can only fill in positional arguments from the left, starting with the first one; you can't fix the second argument positionally while leaving the first open, and you can't fix it as a keyword here either, because getattr is a built-in that takes no keyword arguments at all. So partial(getattr, name="red") fails outright, and partial(getattr, "red") fixes "red" as the object rather than as the attribute name. Since the first argument of getattr is the object, it won't work well together with map and partial.
What you can use, however, is operator.attrgetter():
from operator import attrgetter
*reds, = map(attrgetter("red"), list_of_rgbs)
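attrgetter also accepts several attribute names at once, which covers the attribs_to_get case from the question; a small sketch reusing the question's variables:

from operator import attrgetter

attribs_to_get = ['red', 'blue']
# each call returns a tuple of the requested attributes; zip(*...) transposes them
reds, blues = zip(*map(attrgetter(*attribs_to_get), list_of_rgbs))

(reds and blues come back as tuples here; wrap them in list() if you need lists.)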

How to specify value of theano.tensor.ivector?

I would like to create a theano.tensor.ivector variable and specify its values. In most code examples on the internet, I find v = T.ivector(). This creates the tensor variable but doesn't specify its value.
I tried this:
import theano.tensor as T
val = [1,5]
v = T.ivector(value=val, name='v')
but I get the following error:
File "<stdin>", line 1, in <module>
TypeError: __call__() got an unexpected keyword argument 'value'
I think you may be a little confused about the use of tensors: a tensor isn't a traditional variable that you assign a value to on declaration. A tensor is really a placeholder variable with a specified format that you will use in a function later. Extending your example:
import theano.tensor as T
from theano import function
val = [1, 5]
v = T.ivector('v')
f = function([v], [v]) # Create a function that just returns the input
# Evaluate the function
f(val)
In the above code we just create a function that takes the tensor v and returns it. The value is not assigned until we call the function: f(val).
You may find the baby steps page of the documentation helpful.
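As a further illustration (my own sketch, not part of the original answer), the same pattern with a function that actually computes something from the vector:

import theano.tensor as T
from theano import function

v = T.ivector('v')           # symbolic placeholder for a vector of int32
doubled = v * 2              # build a symbolic expression from it
f = function([v], doubled)   # compile the expression into a callable
f([1, 5])                    # evaluates to [2, 10]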

Mypy reports an incompatible supertype error with overridden method

Below is a simplified example of a problem I've encountered with mypy.
The A.transform method takes an iterable of objects, transforms each one (the transformation is defined in the subclass B, and potentially in other subclasses), and returns an iterable of transformed objects.
from typing import Iterable, TypeVar

T = TypeVar('T')

class A:
    def transform(self, x: Iterable[T]) -> Iterable[T]:
        raise NotImplementedError()

class B(A):
    def transform(self, x: Iterable[str]) -> Iterable[str]:
        return [x.upper() for x in x]
However mypy says:
error: Argument 1 of "transform" incompatible with supertype "A"
error: Return type of "transform" incompatible with supertype "A"
If I remove [T] from A.transform(), then the error goes away. But that seems like the wrong solution.
After reading about covariance and contravariance, I thought that setting
T = TypeVar('T', covariant=True) might be a solution, but this produces the same error.
How can I fix this? I have considered binning the design altogether and replacing the A class with a higher order function.
Making T covariant or contravariant isn't really going to help you in this case. Suppose that the code you had in your question was allowed by mypy, and suppose a user wrote the following snippet of code:

def uses_a_or_subclass(foo: A) -> None:
    # This is perfectly typesafe (though it'll crash at runtime)
    print(foo.transform([1, 2, 3]))

# Uh-oh! B.transform expects strs, so we just broke typesafety!
uses_a_or_subclass(B())
The golden rule to remember is that when you override or redefine a function (when subclassing, for example, as you're doing here), functions are contravariant in their parameters and covariant in their return type. This means that when you're redefining a function, it's legal to make the parameters broader (a supertype of the original parameter type), but not a subtype.
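A tiny illustration of that rule, separate from the classes above (Base and Sub are names made up for this example):

class Base:
    def f(self, x: int) -> int:
        return x

class Sub(Base):
    # Broadening the parameter (int -> object) is accepted: every call that was
    # valid against Base is still valid against Sub.
    def f(self, x: object) -> int:
        return 0
    # A narrower parameter, e.g. x: str, would be rejected, just like B.transform above.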
One possible fix is to make your entire class generic with respect to T. Then, instead of subclassing A (which is now equivalent to subclassing A[Any] and is probably not what you want if you'd like to stay perfectly typesafe), you'd subclass A[str].
Now, your code is perfectly typesafe, and your redefined function respects function variance:
from typing import Iterable, TypeVar, Generic

T = TypeVar('T')

class A(Generic[T]):
    def transform(self, x: Iterable[T]) -> Iterable[T]:
        raise NotImplementedError()

class B(A[str]):
    def transform(self, x: Iterable[str]) -> Iterable[str]:
        return [x.upper() for x in x]
Now, our uses_a_or_subclass function from up above should be rewritten to either be generic, or to accept specifically classes that subtype A[str]. Either way works, depending on what you're trying to do.
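For instance, the two options might look roughly like this (my sketch, not from the original answer, reusing A, B and T as defined above):

from typing import Iterable

def uses_a_str(foo: A[str]) -> None:
    # accepts A[str] and any of its subtypes, such as B
    print(list(foo.transform(["a", "b"])))

def uses_a_generic(foo: A[T], items: Iterable[T]) -> None:
    # stays generic: works for any A[T] together with matching items
    print(list(foo.transform(items)))

uses_a_str(B())                   # fine: B subtypes A[str]
uses_a_generic(B(), ["a", "b"])   # fine: T is inferred as str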

PyCharm: 'Function Doesn't Return Anything'

I just started working with PyCharm Community Edition 2016.3.2 today. Every time I assign a value from my function at_square, it warns me that 'Function at_square doesn't return anything,' but it definitely does in every instance unless an error is raised during execution, and every use of the function is behaving as expected. I want to know why PyCharm thinks it doesn't and if there's anything I can do to correct it. (I know there is an option to suppress the warning for that particular function, but it does so by inserting a commented line in my code above the function, and I find it just as annoying to have to remember to take that out at the end of the project.)
This is the function in question:
def at_square(self, square):
    """ Return the value at the given square """
    if type(square) == str:
        file, rank = Board.tup_from_an(square)
    elif type(square) == tuple:
        file, rank = square
    else:
        raise ValueError("Expected tuple or AN str, got " + str(type(square)))
    if not 0 <= file <= 7:
        raise ValueError("File out of range: " + str(file))
    if not 0 <= rank <= 7:
        raise ValueError("Rank out of range: " + str(rank))
    return self.board[file][rank]
If it matters, this is more precisely a method of an object. I stuck with the term 'function' because that is the language PyCharm is using.
My only thought is that my use of error raising might be confusing PyCharm, but that seems too simple. (Please feel free to critique my error raising, as I'm not sure this is the idiomatic way to do it.)
Update: Humorously, if I remove the return line altogether, the warning goes away and returns immediately when I put it back. It also goes away if I replace self.board[file][rank] with a constant value like 8. Changing file or rank to constant values does not remove the warning, so I gather that PyCharm is somehow confused about the nature of self.board, which is a list of 8 other lists.
Update: Per the suggestion of @StephenRauch, I created a minimal example that reflects everything relevant to data assignment done by at_square:
class Obj:
    def __init__(self):
        self.nested_list = [[0], [1]]

    @staticmethod
    def tup_method(data):
        return tuple(data)

    def method(self, data):
        x, y = Obj.tup_method(data)
        return self.nested_list[x][y]

    def other_method(self, data):
        value = self.method(data)
        print(value)

x = Obj()
x.other_method([1, 2])
PyCharm doesn't give any warnings for this. In at_square, I've tried commenting out every single line down to the two following:
def at_square(self, square):
    file, rank = Board.tup_from_an(square)
    return self.board[file][rank]
PyCharm gives the same warning. If I leave only the return line, then and only then does the warning disappear. PyCharm appears to be confused by the simultaneous assignment of file and rank via tup_from_an. Here is the code for that method:
@staticmethod
def tup_from_an(an):
    """ Convert a square in algebraic notation into a coordinate tuple """
    if an[0] in Board.a_file_dict:
        file = Board.a_file_dict[an[0]]
    else:
        raise ValueError("Invalid an syntax (file out of range a-h): " + str(an))
    if not an[1].isnumeric():
        raise ValueError("Invalid an syntax (rank out of range 1-8): " + str(an))
    elif int(an[1]) - 1 in Board.n_file_dict:
        rank = int(an[1]) - 1
    else:
        raise ValueError("Invalid an syntax (rank out of range 1-8): " + str(an))
    return file, rank
Update: In its constructor, the class Board (which is the parent class for all these methods) saves a reference to the instance in a static variable instance. self.at_square(square) gives the warning, while Board.instance.at_square(square) does not. I'm still going to use the former where appropriate, but that could shed some light on what PyCharm is thinking.
PyCharm assumes a missing return value if the return value statically evaluates to None. This can happen when values are initialised with None and their type is changed later on.
class Foo:
    def __init__(self):
        self.qux = [None]  # infers type for Foo().qux as List[None]

    def bar(self):
        return self.qux[0]  # infers return type as None
At this point, Foo.bar is statically inferred as (self: Foo) -> None. Dynamically changing the type of qux via side-effects does not update this:
foo = Foo()
foo.qux = [2] # *dynamic* type of foo.bar() is now ``(self: Foo) -> int``
foo_bar = foo.bar() # Function 'bar' still has same *static* type
The problem is that you are overwriting a statically inferred class attribute by means of a dynamically assigned instance attribute. That is simply not feasible for static analysis to catch in general.
You can fix this with an explicit type hint.
import typing

class Foo:
    def __init__(self):
        self.qux = [None]  # type: typing.List[int]

    def bar(self):
        return self.qux[0]  # infers return type as int
Since Python 3.6, you can also use inline type hints. These are especially useful for return types.
import typing

class Foo:
    def __init__(self):
        # initial type hint to enable inference
        self.qux: typing.List[int] = [None]

    # explicit return type hint to override inference
    def bar(self) -> int:
        return self.qux[0]  # infers return type as int
Note that it is still a good idea to rely on inference where it works! Annotating only self.qux makes it easier to change the type later on. Annotating bar is mostly useful for documentation and to override incorrect inference.
If you need to support older Python versions, you can also use stub files. Say your class is in foomodule.py; create a file called foomodule.pyi. Inside, just add the annotated fields and function signatures; you can (and should) leave out the bodies.
import typing

class Foo:
    # type hint for fields
    qux: typing.List[int]

    # explicit return type hint to override inference
    def bar(self) -> int:
        ...
Type hinting as of Python 3.6
The style in the example below is now recommended:
from typing import List

class Board:
    def __init__(self):
        self.board: List[List[int]] = []
Quick Documentation
PyCharm's 'Quick Documentation' shows whether you got the typing right. Place the cursor in the middle of the object of interest and hit Ctrl+Q. I suspect the types coming out of tup_from_an(an) are not going to be as desired. You could try to type hint all the args and internal objects, but it may be better value to type hint just the function return types. Type hinting means I don't need to trawl external documentation, so I focus the effort on objects that'll be used by external users and try not to do too much internal stuff. Here's both arg and return type hinting:
from typing import Tuple

@staticmethod
def tup_from_an(an: str) -> Tuple[int, int]:
    ...
Clear Cache
PyCharm can lock onto outdated definitions. It doesn't hurt to go help > find action... > clear caches.
Nobody's perfect
Python is constantly improving (type hinting was updated in 3.7), and PyCharm is also constantly improving. The price of the fast pace of development on these relatively immature advanced features is that checking or submitting to their issue tracker may be the next call.
