Broadcast error in scipy but the arrays are the same shape - python-3.x

I am trying to use the scipy statistical functions:
prior = scipy.stats.truncnorm.pdf(x, a, b,
                                  loc=loc, scale=scale)
I'm getting the following error:
<ipython-input-32-9c0390b95343> in prior(values1, values2)
69 # line 6352 return _norm_pdf(x) / self._delta
70 prior = scipy.stats.truncnorm.pdf(x, a, b,
---> 71 loc=loc, scale=scale)
72
/opt/anaconda3/envs/pymc3/lib/python3.6/site-packages/scipy/stats/_distn_infrastructure.py in pdf(self, x, *args, **kwds)
1669 goodargs = argsreduce(cond, *((x,)+args+(scale,)))
1670 scale, goodargs = goodargs[-1], goodargs[:-1]
-> 1671 place(output, cond, self._pdf(*goodargs) / scale)
1672 if output.ndim == 0:
1673 return output[()]
/opt/anaconda3/envs/pymc3/lib/python3.6/site-packages/scipy/stats/_continuous_distns.py in _pdf(self, x, a, b)
6350
6351 def _pdf(self, x, a, b):
-> 6352 return _norm_pdf(x) / self._delta
6353
6354 def _logpdf(self, x, a, b):
ValueError: operands could not be broadcast together with shapes (4084,) (4100,)
But the arrays are all the same shape. When I put a print statement in right before the call to truncnorm.pdf,
print(x.shape, a.shape, b.shape, loc.shape, scale.shape)
prior = scipy.stats.truncnorm.pdf(x, a, b,
                                  loc=loc, scale=scale)
I get
(4100,) (4100,) (4100,) (4100,) (4100,)
confirming that.
I followed the process of calculating the pdf of the truncated normal in the _continuous_distns.py file to try to recreate it for myself and see if something was happening to mask one of the arrays internally:
import scipy.special as sc

def norm_cdf(x):
    return sc.ndtr(x)

def norm_sf(x):
    return norm_cdf(-x)

nb = norm_cdf(b)
na = norm_cdf(a)
sb = norm_sf(b)
sa = norm_sf(a)

# lines 6345 - 6347, defining self._delta
print(np.where(a > 0,
               -(sb - sa),
               nb - na).shape)

# line 162: return np.exp(-x**2/2.0) / _norm_pdf_C
# defining _norm_pdf(x)
norm_pdf_C = np.sqrt(2*np.pi)
print((np.exp(-x**2/2.0) / norm_pdf_C).shape)
I get
(4100,)
(4100,)
which are correct and the same shape. So I'm at a loss as to what the problem is. I'd appreciate any suggestions.
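One way to see where the shapes could diverge (a diagnostic of my own, not from the question) is to count how many standardised points actually fall inside the truncation interval, since the traceback shows argsreduce(cond, ...) masking the inputs before _pdf is called:
# Hypothetical diagnostic: scipy masks out points that fail its argument/support
# checks before calling _pdf, so x may reach _norm_pdf(x) with fewer elements
# than the (4100,) arrays used to build self._delta.
xs = (x - loc) / scale              # standardised values, as scipy uses internally
inside = (xs >= a) & (xs <= b)      # support of the truncated normal
print(inside.sum())                 # if this prints 4084, the 16 out-of-support
                                    # points would explain the (4084,) vs (4100,) mismatch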


TypeError: bad operand type for abs(): 'vpython.cyvector.vector'

I'm trying to implement the gravitational force between two objects using this code:
def force(pos1, pos2, m1, m2):
    """
    Returns the gravitational force exerted by object 2 on object 1.
    Input:
      - pos1 = position vector of first object
      - pos2 = position vector of second object
      - m1 = mass of first object
      - m2 = mass of second object
    Depends on:
      - G = gravitational constant (global variable)
    """
    f12 = - G * (m1*m2)/(abs(-pos2+pos1))**2
    return(f12)
def test_force(pos1, pos2, m1, m2, expected_force):
    epsilon = 1e-10
    f = force(pos1, pos2, m1, m2)
    if not isinstance(f, vector):
        print(f"ERROR: function should return a vector but returns {f}.")
        return
    args_as_string = f"({pos1}, {pos2}, {m1}, {m2})"
    error = mag(f - expected_force)
    if error < epsilon:
        print(f"OK: correct results for input {args_as_string}")
    else:
        print(f"ERROR: wrong results for input {args_as_string}")
        print(f"   expected: {expected_force}")
        print(f"   got:      {f}")

test_force(vector(0,0,0), vector(1,0,0), 1, 1, vector(1,0,0))     # distance = 1 in x direction
test_force(vector(1,0,0), vector(0,0,0), 1, 1, vector(-1,0,0))    # swap objects
test_force(vector(0,0,0), vector(2,0,0), 1, 1, vector(0.25,0,0))  # distance = 2
test_force(vector(0,0,0), vector(0,1,0), 1, 1, vector(0,1,0))     # distance = 1 in y direction
test_force(vector(10,0,0), vector(10,1,0), 1, 1, vector(0,1,0))   # displaced from origin
test_force(vector(0,0,0), vector(1,0,0), 2, 1, vector(2,0,0))     # non-unit mass 1
test_force(vector(0,0,0), vector(1,0,0), 1, 2, vector(2,0,0))     # non-unit mass
Then I got this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-fc2e944ca785> in <module>
22 print(f" got: {f}")
23
---> 24 test_force(vector(0,0,0),vector(1,0,0),1,1,vector(1,0,0)) # distance = 1 in x direction
25 test_force(vector(1,0,0),vector(0,0,0),1,1,vector(-1,0,0)) # swap objects
26 test_force(vector(0,0,0),vector(2,0,0),1,1,vector(0.25,0,0)) # distance = 2
<ipython-input-7-fc2e944ca785> in test_force(pos1, pos2, m1, m2, expected_force)
9 def test_force(pos1, pos2, m1, m2, expected_force):
10 epsilon = 1e-10
---> 11 f = force(pos1, pos2, m1, m2)
12 if not isinstance(f,vector):
13 print(f"ERROR: function should return a vector but returns {f}.")
<ipython-input-6-a94ae125c120> in force(pos1, pos2, m1, m2)
10 - G = gravitational constant (global variable)
11 """
---> 12 f12 = - G * (m1*m2)/(abs(-pos2+pos1))**2
13 return(f12)
TypeError: bad operand type for abs(): 'vpython.cyvector.vector'
The TypeError appears because an invalid type is given to the abs function, at the line:
f12 = - G * (m1*m2)/(abs(-pos2+pos1))**2
The builtin abs works only with scalars (int, float, complex numbers) or objects that define __abs__, and a vpython.cyvector.vector is neither, so passing it as the argument fails.
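A possible fix (my own sketch, not part of the original answer): compute the distance with vpython's mag() and the direction with norm(), so that force() returns a vector as the tests expect. G = 1 is assumed here because the expected test values imply it.
from vpython import vector, mag, norm

G = 1   # the tests above (distance 1, unit masses, force 1) imply G = 1

def force(pos1, pos2, m1, m2):
    r = pos2 - pos1                           # vector from object 1 to object 2
    return G * m1 * m2 / mag(r)**2 * norm(r)  # attractive force on object 1

print(force(vector(0,0,0), vector(1,0,0), 1, 1))   # <1, 0, 0>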

Why doesn't exception handling print text?

My question is why Python doesn't execute the print statement in the exception handling code below. I am trying to calculate the log of the volumes for a bunch of stocks. Each stock has 1259 volume values. But Python generates a RuntimeWarning "divide by zero encountered in log". So I try to use exception handling to locate where the log input is zero, but Python doesn't execute the print statement under except. The print statement is supposed to print the name of the stock and the index in the array where the volume is zero. Why?
Here is the code:
for i, stock in enumerate(df.columns):
    volumes = df[stock].to_numpy()
    for r in range(len(volumes)):   # len(volumes) = 1259
        try:
            v = np.log(volumes[r])
        except:
            print(stock, r)
Here is the error that follows the RuntimeWarning.
LinAlgError Traceback (most recent call last)
<ipython-input-6-6aa283671e2c> in <module>
13 closes = df_close[stock].to_numpy()
14 volumes = df_vol[stock].to_numpy()
---> 15 indicator_values_all_stocks[i] = indicator.price_volume_fit(volumes, closes, histLength)
16
17 indicator_values_all_stocks_no_NaN = indicator_values_all_stocks[:, ~np.isnan(indicator_values_all_stocks).any(axis=0)]
~\Desktop\Python Projects Organized\Finance\Indicator Statistics\B.57. Price Volume Fit\indicator.py in price_volume_fit(volumes, closes, histLength)
1259 x = log_volumes[i - histLength:i]
1260 y = log_prices[i - histLength:i]
-> 1261 model = np.polyfit(x, y, 1, full = True)
1262 slope[i] = model[0][0]
1263
<__array_function__ internals> in polyfit(*args, **kwargs)
c:\users\donald seger\miniconda3\envs\tensorflow\lib\site-packages\numpy\lib\polynomial.py in polyfit(x, y, deg, rcond, full, w, cov)
629 scale = NX.sqrt((lhs*lhs).sum(axis=0))
630 lhs /= scale
--> 631 c, resids, rank, s = lstsq(lhs, rhs, rcond)
632 c = (c.T/scale).T # broadcast scale coefficients
633
<__array_function__ internals> in lstsq(*args, **kwargs)
c:\users\donald seger\miniconda3\envs\tensorflow\lib\site-packages\numpy\linalg\linalg.py in lstsq(a, b, rcond)
2257 # lapack can't handle n_rhs = 0 - so allocate the array one larger in that axis
2258 b = zeros(b.shape[:-2] + (m, n_rhs + 1), dtype=b.dtype)
-> 2259 x, resids, rank, s = gufunc(a, b, rcond, signature=signature, extobj=extobj)
2260 if m == 0:
2261 x[...] = 0
c:\users\donald seger\miniconda3\envs\tensorflow\lib\site-packages\numpy\linalg\linalg.py in _raise_linalgerror_lstsq(err, flag)
107
108 def _raise_linalgerror_lstsq(err, flag):
--> 109 raise LinAlgError("SVD did not converge in Linear Least Squares")
110
111 def get_linalg_error_extobj(callback):
LinAlgError: SVD did not converge in Linear Least Squares
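For what it's worth (my own note, not part of the question): np.log(0) does not raise an exception at all; it only emits a RuntimeWarning and returns -inf, so the except branch never runs, and the -inf values later reaching np.polyfit are the likely source of the LinAlgError. One way to make the except branch fire is to promote the floating-point warning to an error with np.errstate:
import numpy as np

v = np.log(0.0)          # RuntimeWarning: divide by zero encountered in log
print(v)                 # -inf, but no exception is raised

# Promote the warning to an exception so that except can catch it:
with np.errstate(divide='raise'):
    try:
        v = np.log(0.0)
    except FloatingPointError:
        print("zero volume found")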

How to handle JAX reshape with JIT

I am trying to implement entmax-alpha as described here.
Here is the code.
import jax
import jax.numpy as jnp
from jax import custom_jvp
from jax import jit
from jax import lax
from jax import vmap


@jax.partial(jit, static_argnums=(2,))
def p_tau(z, tau, alpha=1.5):
    return jnp.clip((alpha - 1) * z - tau, a_min=0) ** (1 / (alpha - 1))


@jit
def get_tau(tau, tau_max, tau_min, z_value):
    return lax.cond(z_value < 1,
                    lambda _: (tau, tau_min),
                    lambda _: (tau_max, tau),
                    operand=None)


@jit
def body(kwargs, x):
    tau_min = kwargs['tau_min']
    tau_max = kwargs['tau_max']
    z = kwargs['z']
    alpha = kwargs['alpha']

    tau = (tau_min + tau_max) / 2
    z_value = p_tau(z, tau, alpha).sum()
    taus = get_tau(tau, tau_max, tau_min, z_value)
    tau_max, tau_min = taus[0], taus[1]
    return {'tau_min': tau_min, 'tau_max': tau_max, 'z': z, 'alpha': alpha}, None


@jax.partial(jit, static_argnums=(1, 2,))
def map_row(z_input, alpha, T):
    z = (alpha - 1) * z_input
    tau_min, tau_max = jnp.min(z) - 1, jnp.max(z) - z.shape[0] ** (1 - alpha)
    result, _ = lax.scan(body, {'tau_min': tau_min, 'tau_max': tau_max, 'z': z, 'alpha': alpha},
                         xs=None, length=T)
    tau = (result['tau_max'] + result['tau_min']) / 2
    result = p_tau(z, tau, alpha)
    return result / result.sum()


@jax.partial(custom_jvp, nondiff_argnums=(1, 2, 3,))
def entmax(input, axis=-1, alpha=1.5, T=10):
    reduce_length = input.shape[axis]
    input = jnp.swapaxes(input, -1, axis)
    input = input.reshape(input.size / reduce_length, reduce_length)
    result = vmap(jax.partial(map_row, alpha=alpha, T=T), 0)(input)
    return jnp.swapaxes(result, -1, axis)


@jax.partial(jit, static_argnums=(1, 2,))
def _entmax_jvp_impl(axis, alpha, T, primals, tangents):
    input = primals[0]
    Y = entmax(input, axis, alpha, T)

    gppr = Y ** (2 - alpha)
    grad_output = tangents[0]
    dX = grad_output * gppr
    q = dX.sum(axis=axis) / gppr.sum(axis=axis)
    q = jnp.expand_dims(q, axis=axis)
    dX -= q * gppr
    return Y, dX


@entmax.defjvp
def entmax_jvp(axis, alpha, T, primals, tangents):
    return _entmax_jvp_impl(axis, alpha, T, primals, tangents)
When I call it with the following code:
import numpy as np
from jax import value_and_grad

input = jnp.array(np.random.randn(64, 10))
weight = jnp.array(np.random.randn(64, 10))

def toy(input, weight):
    return (weight * entmax(input, axis=-1, alpha=1.5, T=20)).sum()

value_and_grad(toy)(input, weight)
I got the following error.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-3a62e54c67d2> in <module>()
7 return (weight*entmax(input, axis=-1, alpha=1.5, T=20)).sum()
8
----> 9 value_and_grad(toy)(input, weight)
35 frames
<ipython-input-1-d85b1daec668> in entmax(input, axis, alpha, T)
49 #jax.partial(custom_jvp, nondiff_argnums=(1, 2, 3,))
50 def entmax(input, axis=-1, alpha=1.5, T=10):
---> 51 reduce_length = input.shape[axis]
52 input = jnp.swapaxes(input, -1, axis)
53 input = input.reshape(input.size / reduce_length, reduce_length)
TypeError: tuple indices must be integers or slices, not DynamicJaxprTracer
It always seems to be connected to the reshape operations. I am not sure why this happens, and any help would be really appreciated.
To recreate the problem, here is the colab notebook.
Thanks a lot.
The error comes from the fact that you are attempting to index a Python tuple with a traced quantity, axis. You can fix this error by making axis a static argument:
@jax.partial(jit, static_argnums=(0, 1, 2,))
def _entmax_jvp_impl(axis, alpha, T, primals, tangents):
    ...
Unfortunately, this uncovers another problem: p_tau declares that the alpha parameter is static, but body() calls this with a traced quantity. This quantity cannot be easily marked static in body because it is passed within a dictionary of parameters that contains the input that is being traced.
To fix this, you'll have to rewrite your function signatures, carefully marking in each one which inputs are static and which are not, and making sure the two do not mix across the layers of function calls.
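As a minimal standalone illustration of the same principle (my own sketch, not from the thread): indexing a shape tuple with a traced value fails under jit, while marking that argument static keeps it a concrete Python int:
import jax
import jax.numpy as jnp

def take_axis_length(x, axis):
    return x.shape[axis]          # tuple indexing needs a concrete integer

# jax.jit(take_axis_length)(jnp.ones((2, 3)), 1)   # axis is traced here ->
# TypeError: tuple indices must be integers or slices, not DynamicJaxprTracer

take_axis_length_static = jax.jit(take_axis_length, static_argnums=(1,))
print(take_axis_length_static(jnp.ones((2, 3)), 1))   # 3, because axis stays a plain int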

Product feature optimization with constraints

I have trained a LightGBM model on a learning-to-rank dataset. The model predicts the relevance score of a sample, so the higher the prediction the better it is. Now that the model has learned, I would like to find the best values of some features that give me the highest prediction score.
So, let's say I have features u, v, w, x, y, z and the features I would like to optimize over are x, y, z.
maximize f(u,v,w,x,y,z) w.r.t. features x,y,z, where f is the lightgbm model
subject to the constraints:
    y = Ax + b
    z = 4 if y < thresh_a else 4-0.5 if y >= thresh_b else 4-0.3
    thresh_m < x <= thresh_n
The numbers are randomly made up, but the constraints are linear.
The objective function with respect to x is very spiky and non-smooth (the plot is omitted here). I also don't have gradient information, since f is a lightgbm model.
Using Nathan's answer I wrote down the following class:
import numpy as np
from scipy.optimize import minimize

class ProductOptimization:

    def __init__(self, estimator, features_to_change, row_fixed_values,
                 bnds=None):
        self.estimator = estimator
        self.features_to_change = features_to_change
        self.row_fixed_values = row_fixed_values
        self.bounds = bnds

    def get_sample(self, x):
        new_values = {k: v for k, v in zip(self.features_to_change, x)}
        return self.row_fixed_values.replace({k: {self.row_fixed_values[k].iloc[0]: v}
                                              for k, v in new_values.items()})

    def _call_model(self, x):
        pred = self.estimator.predict(self.get_sample(x))
        return pred[0]

    def constraint1(self, vector):
        x = vector[0]
        y = vector[2]
        return  # some float value

    def constraint2(self, vector):
        x = vector[0]
        y = vector[3]
        return  # some float value

    def optimize_slsqp(self, initial_values):
        con1 = {'type': 'eq', 'fun': self.constraint1}
        con2 = {'type': 'eq', 'fun': self.constraint2}
        cons = [con1, con2]

        result = minimize(fun=self._call_model,
                          x0=np.array(initial_values),
                          method='SLSQP',
                          bounds=self.bounds,
                          constraints=cons)
        return result
The results that I get are always around the initial guess, and I think it's because of the non-smoothness of the function and the absence of any gradient information, which is important for the SLSQP optimizer. Any advice on how I should deal with this kind of problem?
It's been a good minute since I last wrote some serious code, so I apologize if it's not entirely clear what everything does; please feel free to ask for more explanation.
The imports:
from sklearn.ensemble import GradientBoostingRegressor
import numpy as np
from scipy.optimize import minimize
from copy import copy
First I define a new class that allows me to easily redefine values. This class has 5 inputs:
value: this is the 'base' value. In your equation y = Ax + b it's the b part.
minimum: this is the minimum value this type will evaluate as.
maximum: this is the maximum value this type will evaluate as.
multipliers: the first tricky one. It's a list of [InputType, multiplier] pairs. In your example y = Ax + b you would have [[x, A]]; if the equation were y = Ax + Bz + Cd it would be [[x, A], [z, B], [d, C]].
relations: the most tricky one. It's also a list, where each item has four elements: the first is the other InputType, the second is the builtin min or max (min for an upper boundary, max for a lower boundary), the third is the threshold value, and the fourth is the output value returned when that boundary is crossed.
Watch out: if you define your input values too strangely, I'm sure there will be weird behaviour.
class InputType:

    def __init__(self, value=0, minimum=-1e99, maximum=1e99, multipliers=[], relations=[]):
        """
        :param float value: base value
        :param float minimum: value can never be lower than x
        :param float maximum: value can never be higher than y
        :param multipliers: [[InputType, multiplier], [InputType, multiplier]]
        :param relations: [[InputType, min, threshold, output_value], [InputType, max, threshold, output_value]]
        """
        self.val = value
        self.min = minimum
        self.max = maximum
        self.multipliers = multipliers
        self.relations = relations

    def reset_val(self, value):
        self.val = value

    def evaluate(self):
        """
        - relations to other variables are done first; if there are none, the rest is evaluated
        - at most self.max
        - at least self.min
        - self.val + i_x * w_x
          i_x is input i, w_x is multiplier (weight) of i
        """
        for term, min_max, value, output_value in self.relations:
            # check for each term if it falls outside of the expected range
            if min_max(term.evaluate(), value) != term.evaluate():
                return self.return_value(output_value)

        output_value = self.val + sum([i[0].evaluate() * i[1] for i in self.multipliers])
        return self.return_value(output_value)

    def return_value(self, output_value):
        return min(self.max, max(self.min, output_value))
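As a quick sanity check of the class above (my own example, not part of the original answer), a relation like y = 5x + 2 with x clipped to [4, 6] evaluates as expected:
x = InputType(minimum=4, maximum=6)
y = InputType(value=2, multipliers=[[x, 5]])   # y = 5*x + 2

x.reset_val(4.5)
print(y.evaluate())   # 24.5
x.reset_val(10)       # outside x's allowed range
print(y.evaluate())   # 32: x clips to its maximum of 6, so y = 5*6 + 2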
Using this, you can fix the input types sent from the optimizer, as shown in _call_model:
class Example:

    def __init__(self, lst_args):
        self.lst_args = lst_args

        self.X = np.random.random((10000, len(lst_args)))
        self.y = self.get_y()
        self.clf = GradientBoostingRegressor()
        self.fit()

    def get_y(self):
        # sum of squares, is minimum at x = [0, 0, 0, 0, 0 ... ]
        return np.array([[self._func(i)] for i in self.X])

    def _func(self, i):
        return sum(i * i)

    def fit(self):
        self.clf.fit(self.X, self.y)

    def optimize(self):
        x0 = [0.5 for i in self.lst_args]
        initial_simplex = self._get_simplex(x0, 0.1)
        result = minimize(fun=self._call_model,
                          x0=np.array(x0),
                          method='Nelder-Mead',
                          options={'xatol': 0.1,
                                   'initial_simplex': np.array(initial_simplex)})
        return result

    def _get_simplex(self, x0, step):
        simplex = []
        for i in range(len(x0)):
            point = copy(x0)
            point[i] -= step
            simplex.append(point)

        point2 = copy(x0)
        point2[-1] += step
        simplex.append(point2)
        return simplex

    def _call_model(self, x):
        print(x, type(x))
        for i, value in enumerate(x):
            self.lst_args[i].reset_val(value)

        input_x = np.array([i.evaluate() for i in self.lst_args])
        prediction = self.clf.predict([input_x])
        return prediction[0]
I can define your problem as shown below (be sure to define the inputs in the same order as the final list, otherwise not all the values will get updated correctly in the optimizer!):
A = 5
b = 2
thresh_a = 5
thresh_b = 10
thresh_c = 10.1
thresh_m = 4
thresh_n = 6

u = InputType()
v = InputType()
w = InputType()
x = InputType(minimum=thresh_m, maximum=thresh_n)
y = InputType(value=b, multipliers=[[x, A]])
z = InputType(relations=[[y, max, thresh_a, 4], [y, min, thresh_b, 3.5], [y, max, thresh_c, 3.7]])

example = Example([u, v, w, x, y, z])
Calling the results:
result = example.optimize()
for i, value in enumerate(result.x):
    example.lst_args[i].reset_val(value)
print(f"final values are at: {[i.evaluate() for i in example.lst_args]}: {result.fun}")

How to define a callable derivative function

For a project I am working on, I am creating a class of polynomials that I can operate on. The polynomial class can do addition, subtraction, multiplication, synthetic division, and more. It also represents itself properly.
For the project, we are required to create a class for Newton's Method. I was able to create a callable function class for f, such that
>f=polynomial(2,3,4)
>f
2+3x+4x^2
>f(3)
47
I have a derivative function polynomial.derivative(f) that outputs 3+8x.
I want to define a function labeled Df so that in my Newton's Method code I can say Df(x). It would work so that if x=2:
>Df(2)
19
The derivative of a polynomial is still a polynomial. Thus, instead of returning the string 3+8x, your polynomial.derivative function should return a new polynomial.
class polynomial:
    def __init__(c, b, a):
        self.coefs = [c, b, a]

    [...]

    def derivative(self):
        return polynomial(*[i*c for i, c in enumerate(self.coefs) if i > 0], 0)
Hence you can use it as follows:
> f = polynomial(2, 3, 4)
> Df = f.derivative()
> f
2+3x+4x^2
> Df
3+8x+0x^2
> f(3)
47
> Df(2)
19
Edit
Of course, it is enumerate and not enumerates. Also, the __init__ is missing the self argument. I coded this directly on SO without any syntax check.
Of course you can write this in a .py file. Here is a complete working example:
class Polynomial:
    def __init__(self, c, b, a):
        self.coefs = [c, b, a]
        self._derivative = None

    @property
    def derivative(self):
        if self._derivative is None:
            self._derivative = Polynomial(*[i*c for i, c in enumerate(self.coefs) if i > 0], 0)
        return self._derivative

    def __str__(self):
        return "+".join([
            str(c) + ("x" if i > 0 else "") + (f"^{i}" if i > 1 else "")
            for i, c in enumerate(self.coefs)
            if c != 0
        ])

    def __call__(self, x):
        return sum([c * (x**i) for i, c in enumerate(self.coefs)])


if __name__ == '__main__':
    f = Polynomial(2, 3, 4)
    print(f"f: y={f}")
    print(f"f(3) = {f(3)}")
    print(f"f': y={f.derivative}")
    print(f"f'(2) = {f.derivative(2)}")
f: y=2+3x+4x^2
f(3) = 47
f': y=3+8x
f'(2) = 19
You can rename the property with the name you prefer: derivative, Df, prime, etc.
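Tying this back to the original Newton's Method goal, here is a minimal sketch of my own (the tolerance and iteration cap are illustrative choices) that uses the Polynomial class above, since its derivative is itself callable:
def newton(f, x0, tol=1e-10, max_iter=50):
    """Find a root of the polynomial f, starting from x0."""
    Df = f.derivative             # a Polynomial, so it is callable
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / Df(x)
    return x

# x^2 - 2 is Polynomial(-2, 0, 1) in the (c, b, a) order used above; its positive root is sqrt(2)
print(newton(Polynomial(-2, 0, 1), x0=1.0))   # ~1.4142135623730951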
