Theano: implementing an integral function

I am trying to implement this function in theano.
This is not about solving the integral (which is immediate) but rather how to implement it.
So far I have gotten this
import theano
from theano import tensor as T
import numpy as np
import scipy.integrate as integrate
x = T.vector('x')
h = T.vector('h')
t = T.scalar('t')
A = np.asarray([[0,1],[1,0]])
A = theano.shared(name='A', value=A)
B = np.asarray([[-1,0],[0,-1]])
B = theano.shared(name='B', value=B)
xn = A.dot(x)
hn = B.dot(h)
res = (t + xn.dot(hn))**(-2)
g = theano.function([t,x,h],res) # this computes the integrand
f = theano.function([x,h], integrate.quad(lambda t: g(t,x,h), 10, np.inf))
Unfortunately, this doesn't work. I am getting the error "missing 2 required positional arguments: 'x' and 'h'". Maybe the integrate.quad function cannot "see" the inputs x and h.
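For reference, a sketch of the workaround direction I am considering (the helper name f_numeric and the example inputs are made up; it reuses g, np and integrate from above): keep Theano only for the integrand and do the outer integration in plain Python over concrete NumPy arrays, so that quad never sees symbolic variables. This gives a plain numeric evaluation, not a Theano op.
def f_numeric(x_val, h_val):
    # x_val and h_val are concrete NumPy arrays; quad passes t as a float,
    # so the compiled Theano function g only ever receives numeric inputs
    return integrate.quad(lambda t_val: g(t_val, x_val, h_val), 10, np.inf)[0]

print(f_numeric(np.asarray([1.0, 2.0]), np.asarray([0.5, 0.5])))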
Thanks a lot for the help!

Related

can't convert expression to float problem

I am trying to use the "subs" function for a differential equation,
but I get the error: "can't convert expression to float".
I tried to check the type of the arrays, but they are all float.
import sympy as sym
from sympy.integrals import inverse_laplace_transform
from sympy.abc import s,t,y
import numpy as np
U = 1
G =(s+1)/(s*(s+2))
Y = G*U
y = inverse_laplace_transform(Y, s, t)
tm = np.linspace(0,2,3)
y_val = np.zeros(len(tm))
for i in range(len(tm)):
    y_val[i] = y.subs(t, tm[i])
print(y)
print(y_val)
line 17
y_val[i] = y.subs(t, tm[i])
TypeError: can't convert expression to float
The issue here is that, because tm[0] == 0, the evaluated y in the first iteration of your loop is Heaviside(0), which has no defined real value by default (see https://docs.sympy.org/latest/modules/functions/special.html#heaviside). This is because you have
from sympy.functions import exp, Heaviside
assert y == Heaviside(t) / 2 + exp(-2 * t) * Heaviside(t) / 2
The simplest workaround here is defining a linear space excluding 0, for instance
epsilon = 1e-15
tm = np.linspace(epsilon, 2, 3)
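An alternative sketch, not from the answer above and somewhat version-dependent (recent SymPy versions already evaluate Heaviside(0) to 1/2, in which case the rewrite below is a no-op): keep t = 0 but give Heaviside an explicit value at zero before converting to float.
import numpy as np
from sympy import Heaviside, S
from sympy.abc import s, t
from sympy.integrals import inverse_laplace_transform

y = inverse_laplace_transform((s + 1) / (s * (s + 2)), s, t)
# rewrite every Heaviside(arg) as Heaviside(arg, 1/2) so Heaviside(0.0) evaluates
y = y.replace(Heaviside, lambda *args: Heaviside(args[0], S.Half))

tm = np.linspace(0, 2, 3)
y_val = np.array([float(y.subs(t, ti)) for ti in tm])
print(y_val)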
With y_val = np.zeros(len(tm)), the default datatype of the array is float. If you run the loop, you will find that one of the y_val elements is a SymPy object, not a float. You can use a list as a placeholder, or you can specify the datatype of the NumPy array as object:
import sympy as sym
from sympy.integrals import inverse_laplace_transform
from sympy.abc import s,t,y
import numpy as np
U = 1
G =(s+1)/(s*(s+2))
Y = G*U
y = inverse_laplace_transform(Y, s, t)
tm = np.linspace(0,2,3)
# y_val = [0 for _ in range(len(tm))]
y_val = np.zeros(len(tm), dtype=object)
for i in range(len(tm)):
    y_val[i] = y.subs(t, tm[i])
print(y_val)
result: [Heaviside(0.0) 0.567667641618306 0.509157819444367]
I have a similar problem and your answers work for me, but I still need to plot the data. I modified my problem for this question:
import sympy as sym
from sympy.integrals import inverse_laplace_transform
from sympy.abc import s,t,y
import numpy as np
import matplotlib.pyplot as plt
Y = (5*(1 - 5*s))/(s*(4*(s**2) + s + 1))*(1/s)
y = inverse_laplace_transform(Y, s, t)
tm = np.linspace(1e-15, 20, 100)
y_val = np.zeros(len(tm), dtype=object)
for i in range(len(tm)):
    y_val[i] = y.subs(t, tm[i])
plt.plot(y_val, tm)
plt.show()
Running this code I get the same error:
TypeError: can't convert expression to float
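A sketch of one way around this for plotting (my own suggestion, not from the answers above): the substituted values are SymPy expressions rather than plain Python floats, so matplotlib cannot convert them; converting each value with float() forces numerical evaluation first.
import numpy as np
import matplotlib.pyplot as plt
from sympy.abc import s, t
from sympy.integrals import inverse_laplace_transform

Y = (5 * (1 - 5 * s)) / (s * (4 * s**2 + s + 1)) * (1 / s)
y = inverse_laplace_transform(Y, s, t)

tm = np.linspace(1e-15, 20, 100)
# float(...) evaluates each expression numerically, giving a plain float array
y_val = np.array([float(y.subs(t, ti)) for ti in tm])

plt.plot(tm, y_val)  # time on the x-axis
plt.show()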

Finding conditional mutual information from 3 discrete variables

I am trying to find the conditional mutual information between three discrete random variables using the pyitlib package for Python, with the help of the formula:
I(X;Y|Z) = H(X|Z) + H(Y|Z) - H(X,Y|Z)
The expected conditional mutual information value is 0.011.
My 1st code:
import numpy as np
from pyitlib import discrete_random_variable as drv
X=[0,1,1,0,1,0,1,0,0,1,0,0]
Y=[0,1,1,0,0,0,1,0,0,1,1,0]
Z=[1,0,0,1,1,0,0,1,1,0,0,1]
a=drv.entropy_conditional(X,Z)
##print(a)
b=drv.entropy_conditional(Y,Z)
##print(b)
c=drv.entropy_conditional(X,Y,Z)
##print(c)
p=a+b-c
print(p)
The answer I am getting here is 0.4632245116328402.
My 2nd code:
import numpy as np
from pyitlib import discrete_random_variable as drv
X=[0,1,1,0,1,0,1,0,0,1,0,0]
Y=[0,1,1,0,0,0,1,0,0,1,1,0]
Z=[1,0,0,1,1,0,0,1,1,0,0,1]
a=drv.information_mutual_conditional(X,Y,Z)
print(a)
The answer I am getting here is 0.1583445441575102,
while the expected result is 0.011.
Can anybody help? I am in big trouble right now. Any kind of help will be appreciated.
Thanks in advance.
I think that the library function entropy_conditional(x,y,z) has some errors. I also tested my samples, and the same problem happens.
However, the function entropy_conditional with two variables is OK.
So I coded my own entropy_conditional(x,y,z) as entropy(x,y,z), and the result is correct.
The code may not be beautiful.
import math
import numpy as np
from pyitlib import discrete_random_variable as drv

def gen_dict(x):
    dict_z = {}
    for key in x:
        dict_z[key] = dict_z.get(key, 0) + 1
    return dict_z

def entropy(x, y, z):
    x = np.array([x, y, z]).T
    x = x[x[:, -1].argsort()]  # sorted by the last column
    w = x[:, -3]
    y = x[:, -2]
    z = x[:, -1]
    # dict_w = gen_dict(w)
    # dict_y = gen_dict(y)
    dict_z = gen_dict(z)
    list_z = [dict_z[i] for i in set(z)]
    p_z = np.array(list_z) / sum(list_z)
    pos = 0
    ent = 0
    for i in range(len(list_z)):
        w = x[pos:pos + list_z[i], -3]
        y = x[pos:pos + list_z[i], -2]
        z = x[pos:pos + list_z[i], -1]
        pos += list_z[i]
        list_wy = np.zeros((len(set(w)), len(set(y))), dtype=float, order="C")
        list_w = list(set(w))
        list_y = list(set(y))
        for j in range(len(w)):
            pos_w = list_w.index(w[j])
            pos_y = list_y.index(y[j])
            list_wy[pos_w, pos_y] += 1
            # print(pos_w)
            # print(pos_y)
        list_p = list_wy.flatten()
        list_p = np.array([k for k in list_p if k > 0] / sum(list_p))
        ent_t = 0
        for j in list_p:
            ent_t += -j * math.log2(j)
        # print(ent_t)
        ent += p_z[i] * ent_t
    return ent
X=[0,1,1,0,1,0,1,0,0,1,0,0]
Y=[0,1,1,0,0,0,1,0,0,1,1,0]
Z=[1,0,0,1,1,0,0,1,1,0,0,1]
a=drv.entropy_conditional(X,Z)
##print(a)
b=drv.entropy_conditional(Y,Z)
c = entropy(X, Y, Z)
p=a+b-c
print(p)
0.15834454415751043
Based on the definitions of conditional entropy, calculating in bits (i.e. base 2), I obtain H(X|Z) = 0.784159, H(Y|Z) = 0.325011, H(X,Y|Z) = 0.950826. Based on the definition of conditional mutual information you provide above, I obtain I(X;Y|Z) = H(X|Z) + H(Y|Z) - H(X,Y|Z) = 0.158344. Noting that pyitlib uses base 2 by default, drv.information_mutual_conditional(X,Y,Z) appears to be computing the correct result.
Note that your use of drv.entropy_conditional(X,Y,Z) in your first example to compute conditional entropy is incorrect; you can, however, use drv.entropy_conditional(XY,Z), where XY is a 1D array representing the joint observations of X and Y, for example XY = [2*xy[0] + xy[1] for xy in zip(X,Y)].
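A small sketch of that joint-encoding approach (assuming pyitlib is installed; the encoding 2*x + y works here because X and Y are binary):
from pyitlib import discrete_random_variable as drv

X = [0,1,1,0,1,0,1,0,0,1,0,0]
Y = [0,1,1,0,0,0,1,0,0,1,1,0]
Z = [1,0,0,1,1,0,0,1,1,0,0,1]

# encode each (x, y) pair as a single integer observation for H(X,Y|Z)
XY = [2 * xy[0] + xy[1] for xy in zip(X, Y)]

p = (drv.entropy_conditional(X, Z)
     + drv.entropy_conditional(Y, Z)
     - drv.entropy_conditional(XY, Z))
print(p)  # should match drv.information_mutual_conditional(X, Y, Z) ≈ 0.158344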

Python partial derivative

I am trying to put numbers into a function that contains partial derivatives, but I can't find a correct way to do it. I have searched all over the internet and I always get an error. Here is the code:
from sympy import symbols,diff
import sympy as sp
import numpy as np
from scipy.misc import derivative
a, b, c, d, e, g, h, x= symbols('a b c d e g h x', real=True)
da=0.1
db=0.2
dc=0.05
dd=0
de=0
dg=0
dh=0
f = 4*a*b+a*sp.sin(c)+a**3+c**8*b
x = sp.sqrt(pow(diff(f, a)*da, 2)+pow(diff(f, b)*db, 2)+pow(diff(f, c)*dc, 2))
def F(a, b, c):
    return x
print(derivative(F(2 ,3 ,5)))
I get the following error: derivative() missing 1 required positional argument: 'x0'.
I am new to Python, so maybe it's a stupid question, but I would be grateful if someone helped me.
You can find the three partial derivatives of the function foo with respect to the variables a, b, and c at the point (2,3,5):
f = 4*a*b+a*sp.sin(c)+a**3+c**8*b
foo = sp.sqrt(pow(diff(f, a)*da, 2)+pow(diff(f, b)*db, 2)+pow(diff(f, c)*dc, 2))
foo_da = diff(foo, a)
foo_db = diff(foo, b)
foo_dc = diff(foo, c)
print(foo_da," = ", float(foo_da.subs({a:2, b:3, c:5})))
print(foo_db," = ", float(foo_db.subs({a:2, b:3, c:5})))
print(foo_dc," = ", float(foo_dc.subs({a:2, b:3, c:5})))
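As an aside (a sketch reusing foo and the symbols defined above): if what you actually need is the numeric value of the error expression itself at the point (2,3,5), rather than its derivatives, you can substitute and convert directly:
# numeric value of the propagated-error expression at a=2, b=3, c=5
print(float(foo.subs({a: 2, b: 3, c: 5})))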
I have used the Python package sympy to perform the partial derivative. The point at which the partial derivative is to be evaluated is val. The argument val can be passed as a list or tuple.
# Sympy implementation to return the derivative of a function in x,y
# Enter ginput as a string expression in x and y and val as 1x2 array
def partial_derivative_x_y(ginput, der_var, val):
    import sympy as sp
    x, y = sp.symbols('x y')
    function = lambda x, y: ginput
    derivative_x = sp.lambdify((x, y), sp.diff(function(x, y), x))
    derivative_y = sp.lambdify((x, y), sp.diff(function(x, y), y))
    if der_var == 'x':
        return derivative_x(val[0], val[1])
    if der_var == 'y':
        return derivative_y(val[0], val[1])
input1 = 'x*y**2 + 5*log(x*y +x**7) + 99'
partial_derivative_x_y(input1,'y',(3,1))

Lambdifying Nested Parametric Integrals

I have a loss function L(u1,...,un) that takes the form
L(u) = Integral( (I**2 + J**2) * h, (t,c,d) ) //h=h(t), I=I(t,u)
where
I = Integral( f, (x,a,b) ) // f=f(x,t,u)
J = Integral( g, (x,a,b) ) // g=g(x,t,u)
I want to numerically minimize L with scipy, hence I need to lambdify the expression.
However at this point in time lambdify does not natively support translating integrals. With some tricks one can get it to work with single parametric integrals, see Lambdify A Parametric Integral. However I don't see how the proposed solution could possibly be extended to this generalised case.
One idea that in principle should work is the following:
Take the computational graph defining L. Recursively, starting from the leaves, replace each symbolic operation with the corresponding numerical operation, expressed as a lambda function. However, this would lead to an immense nesting of lambda functions, which I suspect has a very bad influence on performance.
Ideally we would want to arrive at the same result as one would by hand crafting:
L = lambda u: quad(lambda t: (quad(lambda x: f,a,b)[0]**2
+ quad(lambda x: g,a,b)[0]**2)*h, c, d)[0]
MWE: (using code from old thread)
from sympy import *
import sympy as sp
from scipy.integrate import quad
import numpy as np

def integral_as_quad(function, limits):
    x, a, b = limits
    param = tuple(function.free_symbols - {x})
    f = sp.lambdify((x, *param), function, modules=['numpy'])
    return quad(f, a, b, args=param)[0]
x,a,b,t = sp.symbols('x a b t')
f = (x/t-a)**2
g = (x/t-b)**2
h = exp(-t)
I = Integral( f**2,(x,0,1))
J = Integral( g**2,(x,0,1))
K = Integral( (I**2 + J**2)*h,(t,1,+oo))
F = lambdify( (a,b), K, modules=['sympy',{"Integral": integral_as_quad}])
L = lambda a,b: quad(lambda t: (quad(lambda x: (x/t-a)**2,0,1)[0]**2
+ quad(lambda x: (x/t-b)**2,0,1)[0]**2)*np.exp(-t), 1, np.inf)[0]
display(F(1,1))
display(type(F(1,1)))
display(L(1,1))

python numba fingerprint error

I'm attempting to use numba to optimise some code. I've worked through the initial examples in section 1.3.1 of the 0.26.0 user guide (http://numba.pydata.org/numba-doc/0.26.0/user/jit.html) and get the expected results, so I don't think the problem is installation.
Here's my code:
import numba
import numpy
import random
a = 8
b = 4
def my_function(a, b):
    all_values = numpy.fromiter(range(a), dtype=int)
    my_array = []
    for n in range(a):
        some_values = (all_values[all_values != n]).tolist()
        c = random.sample(some_values, b)
        my_array.append(sorted([n] + c))
    return my_array
print(my_function(a, b))
my_function_numba = numba.jit()(my_function)
print(my_function_numba(a, b))
Which after printing out the expected results from the my_function call returns the following error message:
ValueError Traceback (most recent call last)
<ipython-input-8-b5d8983a58f6> in <module>()
19 my_function_numba = numba.jit()(my_function)
20
---> 21 print(my_function_numba(a, b))
ValueError: cannot compute fingerprint of empty list
Fingerprint of empty list?
I'm not sure about that error in particular, but in general, to be fast numba requires a particular subset of numpy/python (see here and here for more). So I might rewrite it like this.
import numpy as np
import numba

@numba.jit(nopython=True)
def fast_my_function(a, b):
    all_values = np.arange(a)
    my_array = np.empty((a, b + 1), dtype=np.int32)
    for n in range(a):
        some = all_values[all_values != n]
        c = np.empty(b + 1, dtype=np.int32)
        c[1:] = np.random.choice(some, b)
        c[0] = n
        c.sort()
        my_array[n, :] = c
    return my_array
Main things to note:
no lists; I'm pre-allocating everything.
no use of generators (in both Python 2 & 3, for n in range(a) will get converted to a fast native loop).
adding nopython=True to the decorator makes numba complain if I use something that can't be efficiently JITed.
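For completeness, a small usage sketch (not from the original answer; the output varies between runs because of the random sampling, and it assumes a numba version whose nopython mode supports np.random.choice):
a, b = 8, 4
result = fast_my_function(a, b)
print(result.shape)  # (8, 5): each row holds n plus b sampled values, sorted
print(result)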
