How do I return a value from a higher-order function? - python-3.x

Guys, how can I make it so that calling make_repeater(square, 0)(5) returns 5 instead of 25? I'm guessing I would need to change the line "function_successor = h", because then I'm just getting square(5), but I'm not sure what I need to change it to...
square = lambda x: x * x

def compose1(h, g):
    """Return a function f, such that f(x) = h(g(x))."""
    def f(x):
        return h(g(x))
    return f

def make_repeater(h, n):
    iterations = 1
    function_successor = h
    while iterations < n:
        function_successor = compose1(h, function_successor)
        iterations += 1
    return function_successor
It needs to satisfy a bunch of other requirements, like:
make_repeater(square, 2)(5) = square(square(5)) = 625
make_repeater(square, 4)(5) = square(square(square(square(5)))) = 152587890625

To achieve that, you have to use the identity function (f(x) = x) as the initial value for function_successor:
def compose1(h, g):
    """Return a function f, such that f(x) = h(g(x))."""
    def f(x):
        return h(g(x))
    return f

IDENTITY_FUNCTION = lambda x: x

def make_repeater(function, n):
    function_successor = IDENTITY_FUNCTION
    # simplified loop
    for i in range(n):
        function_successor = compose1(function, function_successor)
    return function_successor

if __name__ == "__main__":
    square = lambda x: x * x
    print(make_repeater(square, 0)(5))
    print(make_repeater(square, 2)(5))
    print(make_repeater(square, 4)(5))
and the output is
5
625
152587890625
This isn't optimal for performance, though, since the identity function (which doesn't do anything useful) always remains part of the composed function. An optimized version would look like this:
def make_repeater(function, n):
    if n <= 0:
        return IDENTITY_FUNCTION
    function_successor = function
    for i in range(n - 1):
        function_successor = compose1(function, function_successor)
    return function_successor
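For what it's worth, the same repeated composition can also be written as a fold with functools.reduce; this is just an alternative sketch (it assumes compose1, IDENTITY_FUNCTION and square as defined above), not something the exercise requires:

from functools import reduce

def make_repeater_reduce(function, n):
    # fold compose1 over n copies of the function, starting from the identity
    return reduce(lambda acc, f: compose1(f, acc), [function] * n, IDENTITY_FUNCTION)

print(make_repeater_reduce(square, 0)(5))  # 5
print(make_repeater_reduce(square, 2)(5))  # 625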

Related

How to use scipy's fminbound function when function returns a dictionary

I have a simple function for which I want to calculate the minimum value:
from scipy import optimize

def f(x):
    return x**2

optimize.fminbound(f, -1, 2)
This works fine. Now I modify the above function so that it returns a dictionary:
def f1(x):
    y = x ** 2
    return_list = {}
    return_list['x'] = x
    return_list['y'] = y
    return return_list
While it returns multiple objects, x and y, I want to apply fminbound only to y; the other object, x, is just for informational purposes for other uses of this function.
How can I use fminbound with this setup?
You need a simple wrapper function that extracts y:
def f1(x):
    y = x ** 2
    return_list = {}
    return_list['x'] = x
    return_list['y'] = y
    return return_list

optimize.fminbound(lambda x: f1(x)['y'], -1, 2)
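fminbound returns the minimizing x, so if you also want the rest of the dictionary at the optimum you can simply call f1 once more on the result (just a usage sketch):

x_min = optimize.fminbound(lambda x: f1(x)['y'], -1, 2)
print(f1(x_min))  # the full dictionary evaluated at the minimizing x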

Composing arbitrary functions in Python

I am following Category Theory for Programmers and trying to implement a compose function in Python. What I currently have is:
def compose(f, g):
    def result(*args):
        return f(g(*args))
    return result

def square(x):
    return x * x

def add(x, y):
    return x + y

def f(x, y):
    return min(x, y)

def g(x, y):
    return x + y, x * y

result = compose(square, add)(1, 2)
print(result)
result = compose(f, g)(1, 2)
print(result)
But this will not work, because I don't know whether g(*args) in compose will return an iterable of values or just a single return value. Is there a way to write compose in Python without using isinstance?
One option is to always return a list; then you can unpack g without checking:
def compose(f, g):
    def result(*args):
        return f(*g(*args))  # we can unpack g here if we know every function returns a list
    return result

def square(x):
    return [x * x]

def add(x, y):
    return [x + y]

def f(x, y):
    return [min(x, y)]

def g(x, y):
    return x + y, x * y

result = compose(square, add)(1, 2)
print(result)
result = compose(f, g)(1, 2)
print(result)
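If you adopt the return-a-list convention everywhere, a compose for an arbitrary chain of functions can be sketched with functools.reduce (the name compose_all below is just for illustration, and every function in the chain is assumed to return a list or tuple):

from functools import reduce

def compose_all(*functions):
    # compose_all(f, g, h)(x...) == f(*g(*h(x...)))
    def composed(*args):
        return reduce(lambda acc, func: func(*acc), reversed(functions), args)
    return composed

print(compose_all(square, add)(1, 2))  # [9], using the list-returning square and add above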

Converting Python iterative function to recursive function

Please consider these two coupled sequences:
U(0) = 1
V(0) = 2
U(n) = 2*U(n-1) + 3*V(n-1)
V(n) = U(n-1) + V(n-1)
I want to solve them with this iterative Python function:
def myfunction(n=5):
    u = 1
    v = 2
    for i in range(n):
        w = u
        u = 2*u + 3*v
        v = w + v
    return (u, v)
I would prefer a recursive function, but I have no idea how to convert my function.
Do you have any ideas?
Thanks.
As simple as:
def u(n):
    if n == 0:
        return 1
    else:
        return 2*u(n-1) + 3*v(n-1)

def v(n):
    if n == 0:
        return 2
    else:
        return u(n-1) + v(n-1)

print((u(5), v(5)))  # -> (905, 393)
This is possible because Python resolves names dynamically at call time, so u and v can simply refer to each other (mutual recursion).
Edit:
def myfunction(n):
    def u(n):
        if n == 0:
            return 1
        else:
            return 2*u(n-1) + 3*v(n-1)
    def v(n):
        if n == 0:
            return 2
        else:
            return u(n-1) + v(n-1)
    return u(n), v(n)

print(myfunction(5))  # -> (905, 393)
Just to conform with your exact problem/function.
The naive recursive implementation proposed by @moctarjallo is very slow because the same values have to be calculated over and over again. For example, calculating u(22) takes a ridiculous 0.8 s per call:
from timeit import timeit
timeit('u(22)', globals=globals(), number=10)
# 8.138428802136332
Consider using an lru_cache decorator that saves the previously computed values in an invisible table and actually calls the functions only when they have not been called with the same parameters before:
from functools import lru_cache

@lru_cache(None)  # decorate with a cache
def fast_u(n):
    return 1 if not n else 2 * fast_u(n-1) + 3 * fast_v(n-1)

@lru_cache(None)  # decorate with a cache
def fast_v(n):
    return 2 if not n else fast_u(n-1) + fast_v(n-1)

timeit('fast_u(22)', globals=globals(), number=10)
# 9.34056006371975e-05
I also made the functions' code a little more idiomatic. Enjoy the difference!
P.S. If you are looking for a one-function implementation:
def uv(n):
    if n == 0: return 1, 2
    u, v = uv(n-1)
    return 2 * u + 3 * v, u + v

uv(5)
# (905, 393)
Caching still helps here when the function is called repeatedly, since previously computed results are reused.
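A sketch of that, again with functools.lru_cache (uv_cached is just a renamed copy of uv above):

from functools import lru_cache

@lru_cache(None)
def uv_cached(n):
    # identical to uv above, but each n is computed only once across calls
    if n == 0:
        return 1, 2
    u, v = uv_cached(n - 1)
    return 2 * u + 3 * v, u + v

print(uv_cached(22))  # later calls with any m <= 22 are answered from the cache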

Pytorch autograd: Make gradient of a parameter a function of another parameter

In Pytorch, how can I make the gradient of a parameter a function itself?
Here is a simple code snippet:
import torch

def fun(q):
    def result(w):
        l = w * q
        l.backward()
        return w.grad
    return result

w = torch.tensor((2.), requires_grad=True)
q = torch.tensor((3.), requires_grad=True)
f = fun(q)
print(f(w))
In the code above, how can I make f(w) have gradient with respect to q?
EDIT: Based on the accepted answer, I was able to write code that works. Essentially I am alternating between two optimization steps. For dim == 1 it works, but for dim == 2 it does not: I get the error "RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time."
import torch

class f_class():
    def __init__(self, dim):
        self.dim = dim
        if self.dim == 1:
            self.w = torch.tensor((3.), requires_grad=True)
        elif self.dim == 2:
            self.w = [torch.tensor((3.), requires_grad=True), torch.tensor((5.), requires_grad=True)]
        else:
            raise ValueError("dim 1 or 2")
    def forward(self, x):
        if self.dim == 1:
            return torch.mul(self.w, x)
        elif self.dim == 2:
            return torch.mul(torch.mul(self.w[0], self.w[1]), x)
    def set_w(self, w):
        self.w = w
    def get_w(self):
        return self.w

class g_class():
    def __init__(self):
        self.q = torch.tensor((4.), requires_grad=True)
    def forward(self, f):
        return torch.mul(self.q, f)
    def set_q(self, q):
        self.q = q
    def get_q(self):
        return self.q

def w_new(f, g, dim):
    loss_g = g.forward(f.forward(xd))
    if dim == 1:
        grads = torch.autograd.grad(loss_g, f.get_w(), create_graph=True, only_inputs=True)[0]
        temp = f.get_w().detach() + grads
    else:
        grads = torch.autograd.grad(loss_g, f.get_w(), create_graph=True, only_inputs=True)
        temp = [wi.detach() + gi for wi, gi in zip(f.get_w(), grads)]
    return temp

def q_new(f, g):
    loss_f = 2 * f.forward(xd)
    loss_f.backward()
    temp = g.get_q().detach() + g.get_q().grad
    temp.requires_grad = True
    return temp

dim = 1
xd = torch.tensor((2.))
f = f_class(dim)
g = g_class()
for _ in range(3):
    print(f.get_w(), g.get_q())
    wnew = w_new(f, g, dim)
    f.set_w(wnew)
    print(f.get_w(), g.get_q())
    qnew = q_new(f, g)
    g.set_q(qnew)
    print(f.get_w(), g.get_q())
When computing gradients, if you want to construct a computation graph for the gradient itself you need to specify create_graph=True to autograd.
A potential source of error in your code is using Tensor.backward within f. The problem here is that w.grad and q.grad will be populated with the gradient of l. This means that when you call f(w).backward(), the gradients of both f and l will be added to w.grad and q.grad. In effect you will end up with w.grad being equal to dl/dw + df/dw, and similarly for q.grad.

One way to get around this is to zero the gradients after f(w) but before .backward(). A better way is to use torch.autograd.grad within f. With the latter approach, the grad attributes of w and q will not be populated when calling f, only when calling .backward(). This leaves room for things like gradient accumulation during training.
import torch

def fun(q):
    def result(w):
        l = w * q
        return torch.autograd.grad(l, w, only_inputs=True, create_graph=True)[0]
    return result

w = torch.tensor((2.), requires_grad=True)
q = torch.tensor((3.), requires_grad=True)
f = fun(q)
f(w).backward()
print('w.grad:', w.grad)
print('q.grad:', q.grad)
which results in
w.grad: None
q.grad: tensor(1.)
Note that w.grad was not populated. This is because f(w) = dl/dw = q is not a function of w, and therefore w is not part of the computation graph. If you're using a standard pytorch optimizer this is fine since None gradients are implicitly assumed to be zero.
If l were instead a non-linear function of w, then w.grad would have been populated after f(w).backward(). For example
import torch

def fun(q):
    def result(w):
        # now dl/dw = 2 * w * q
        l = w**2 * q
        return torch.autograd.grad(l, w, only_inputs=True, create_graph=True)[0]
    return result

w = torch.tensor((2.), requires_grad=True)
q = torch.tensor((3.), requires_grad=True)
f = fun(q)
f(w).backward()
print('w.grad:', w.grad)
print('q.grad:', q.grad)
which results in
w.grad: tensor(6.)
q.grad: tensor(4.)
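If you want the derivative with respect to q as a tensor rather than an accumulated .grad attribute, you can also ask torch.autograd.grad for it directly; this is just a variation of the example above (same fun, w and q):

import torch

def fun(q):
    def result(w):
        l = w**2 * q
        # create_graph=True keeps the returned gradient differentiable w.r.t. q
        return torch.autograd.grad(l, w, create_graph=True)[0]
    return result

w = torch.tensor((2.), requires_grad=True)
q = torch.tensor((3.), requires_grad=True)
dfdq = torch.autograd.grad(fun(q)(w), q)[0]  # d(2*w*q)/dq = 2*w
print(dfdq)  # tensor(4.)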

How can I pass different types of parameters (ex: array) into a functional class?

I am trying to learn how to group functions by class. As an example, I tried to code a generalized least squares method to find the equation of a best-fitting line between a set of (x,y) coordinates. For my particular case, I chose a simple line y = x + 5, so slope should be close to 1 and y-intercept should be close to 5. Running my attempt at a coded solution below produces the error TypeError: set_x() takes 1 positional argument but 2 were given, though I am trying to pass an array of x-points. How can I circumvent this error?
import numpy as np
from scipy.optimize import minimize

class GeneralizedLeastSquares:
    def __init__(self, residuals=None, parameters=None, x=None, y_true=None, y_fit=None, weights=None, method=None):
        self.residuals = residuals
        self.parameters = parameters
        self.x = x
        self.y_true = y_true
        self.y_fit = y_fit
        self.weights = weights
        self.method = method
    def set_residuals(self, residuals):
        self.residuals = residuals
    def set_parameters(self, parameters):
        self.parameters = parameters
    def set_x(self, x):
        self.x = x
    def set_y_true(self, y_true):
        self.y_true = y_true
    def set_y_fit(self, y_fit):
        self.y_fit = y_fit
    def set_weights(self, weights):
        self.weights = weights
    def set_method(self, method):
        self.method = method
    def get_residuals(self):
        return [(self.y_true[idx] - self.y_fit[idx])**2 for idx in range(len(self.y_true)) if len(self.y_true) == len(self.y_fit)]
    def get_parameters(self):
        return self.parameters
    def get_x(self):
        return self.x
    def get_y_true(self):
        return self.y_true
    def get_y_fit(self):
        return [self.parameters[0] * self.x[idx] + self.parameters[1] for idx in range(len(self.x))]
    def get_weights(self):
        return self.weights
    def update_weights(self):
        inverse_residuals = [1/self.residuals[idx] for idx in range(len(residuals))]
        inverse_residuals_abs = [abs(inverse_residual) for inverse_residual in inverse_residuals]
        residual_abs_total = sum(inverse_residuals_abs)
        return [inverse_residuals_abs[idx]/residual_abs_total for idx in range(len(inverse_residuals_abs))]
    def get_method(self):
        return self.method
    def get_error_by_residuals(self):
        return sum([self.weights[idx] * self.residuals[idx] for idx in range(len(self.residuals))])
    def get_error_by_std_mean(self):
        return np.std(self.y_true)/np.sqrt(len(self.y_true))
    def get_linear_fit(self):
        """
        """
        if self.parameters == 'estimate':
            slope_init = (self.y_true[-1] - self.y_true[0]) / (self.x[-1] - self.x[0])
            b_init = np.mean([self.y_true[-1] - slope_init * self.x[-1], self.y_true[0] - slope_init * self.x[0]])
            self.parameters = [slope_init, b_init]
        elif not isinstance(self.parameters, (list, np.ndarray)):
            raise ValueError("parameters = 'estimate' or [slope, y-intercept]")
        meths = ['residuals', 'std of mean']
        funcs = [get_error_by_residuals, get_error_by_std_mean]
        func = dict(zip(meths, funcs))[self.method]
        res = minimize(func, x0=self.parameters, args=(self,), method='Nelder-Mead')
        self.parameters = [res.x[0], res.x[1]]
        self.y_fit = get_y_fit(self)
        self.residuals = get_residuals(self)
        self.weights = update_weights(self)
        return self.parameters, self.y_fit, self.residuals, self.weights

x = np.linspace(0, 4, 5)
y_true = np.linspace(5, 9, 5)  ## using slope=1, y-intercept=5
y_actual = np.array([4.8, 6.2, 7, 8.1, 8.9])  ## test data

GLS = GeneralizedLeastSquares()
GLS.set_x(x)
GLS.set_y_true(y_actual)
GLS.set_weights(np.ones(len(x)))
GLS.set_parameters('estimate')
# GLS.set_parameters([1.2, 4.9])
GLS.set_method('residuals')
results = GLS.get_linear_fit()
print(results)
Your method is not taking an argument. It should be:
def set_x(self, x):
    self.x = x
Wrapping properties in get/set methods is a very Java-like / outdated way of doing things. It is much easier to access the underlying attribute directly from outside your class, i.e. rather than GLS.set_x(12), consider the more Pythonic GLS.x = 12. This way you don't have to write a get and set method for each property.
Also, it might make more sense for the heavy-lifting method of your object, get_linear_fit, to be put in the __call__ method. This way, you can run the regression by just typing GLS() rather than GLS.get_linear_fit().
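As a rough, hypothetical skeleton of both suggestions (plain attributes instead of get/set methods, and __call__ for the fit) — it reuses your existing get_linear_fit and is not a full implementation:

class GeneralizedLeastSquares:
    def __init__(self, x=None, y_true=None, parameters='estimate', method='residuals'):
        # plain attributes: read and write them directly, e.g. GLS.x = 12
        self.x = x
        self.y_true = y_true
        self.parameters = parameters
        self.method = method

    def __call__(self):
        # the heavy lifting lives here, so the regression runs as GLS()
        return self.get_linear_fit()

# usage sketch:
# GLS = GeneralizedLeastSquares(x=x, y_true=y_actual)
# results = GLS()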
