User-defined joint prior distribution in PyMC3 & Theano exception

I'm trying to build a model that samples from the following joint prior distribution in Pymc3:
f(a,b) ~ (a+b)^(-5/2) where a, b > 0
with pm.Model() as the:
    def ab_dist(value=[1.0, 1.0]):
        return T.switch(any(T.le(value, 0)), -np.Inf, T.log(np.power((value[0] + value[1]), -2.5)))
    ab = pm.DensityDist('ab', ab_dist, shape=2, testval=[1, 1])
    a = ab[0]
    b = ab[1]
    p = pm.Beta('p', a, b)
    trace = pm.sample(20000)
I've followed an example from an issue opened on Pymc3's github page, but am still getting the following error:
ValueError: length not known: Elemwise{le,no_inplace} [id A] ''
|ab [id B]
|DimShuffle{x} [id C] ''
|TensorConstant{0} [id D]
I'm new to Theano and haven't had any success in debugging. I'd like to know the proper way to set this up, as well as why I'm receiving the "length not known" exception. My code is above.

I believe I found my answer. The issue seems to have been with using Python's built-in any() on a Theano tensor inside the conditional. After changing it to T.le(value[0], 0) | T.le(value[1], 0), it works without issues.
Updated code below:
with pm.Model() as the:
    def ab_dist(value=[1.0, 1.0]):
        return T.switch(T.le(value[0], 0) | T.le(value[1], 0), -np.Inf, T.log(np.power((value[0] + value[1]), -2.5)))
    ab = pm.DensityDist('ab', ab_dist, shape=2, testval=[1, 1])
    a = ab[0]
    b = ab[1]
    p = pm.Beta('p', a, b)
    trace = pm.sample(10000)
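For reference, Theano also provides a tensor reduction T.any, so another way to express the same log-density would be the following sketch (algebraically equivalent, since log((a + b)^(-5/2)) = -2.5 * log(a + b), but not tested against the model above):
import numpy as np
import theano.tensor as T

def ab_dist(value):
    # -inf outside the support a, b > 0; otherwise -2.5 * log(a + b)
    return T.switch(T.any(T.le(value, 0)), -np.inf, -2.5 * T.log(value[0] + value[1]))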

Related

How to loop for choosing data in a Jupyter Notebook with Python

I have a DataFrame with 2 columns:
Column 1 = Source
Column 2 = Target
My source data:
Source Target
A B
W X
B C
C D
C Z
A Z
Z Y
For input A, the output should be displayed as below.
Source Target
A B
B C
C D
C Z
A Z
Z Y
I have tried to code it as below, but it is not finished yet.
In [1]:
a = input()
b = []
for Source, Target in zip(data.Source, data.Target):
    if Source == a:
        b.append(True)
    else:
        b.append(False)
Input = A
In [2]: is_long = pd.Series(b)
is_long
Out [2]: 0 True
1 False
2 True
3 True
4 ...
In [3]: data[is_long]
Out [3]: Source Target
A B
B C
C D
C Z
A Z
Z Y
As I understand it, the idea is:
check each edge (row) of the source DataFrame in turn,
the current edge is accepted when its Source node has been visited
before,
the visited list initially contains only the node given by the user and
is extended with the Target node of each accepted edge.
Start by defining the following class:
class Visitor:
    def __init__(self):
        self.clear()

    def addNode(self, node):
        if not self.isVisited(node):
            self._nodes.append(node)

    def isVisited(self, node):
        return node in self._nodes

    def clear(self):
        self._nodes = []
This class keeps a register of visited nodes. It will be used soon.
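For example, a quick check of the interface:
vs = Visitor()
vs.addNode('A')
print(vs.isVisited('A'))  # True
print(vs.isVisited('B'))  # False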
Then define a function checking the "continuation criterion" for the current
row:
def isContinued(row, vs):
    res = vs.isVisited(row.Source)
    if res:
        vs.addNode(row.Target)
    return res
The first argument is the current row and the second is a Visitor object.
Then run:
vs = Visitor()
vs.addNode(a)
df[df.apply(isContinued, axis=1, vs=vs)]
The first line creates a Visitor object.
The second adds the "starting node" (just given by the user) to the
"visited list".
Then df.apply(isContinued, axis=1, vs=vs) creates a Boolean vector: the
continuation criterion for each edge.
As isContinued is applied to successive edges, the "visited list" is
extended with their Target nodes, and the just-updated list is then used
to compute the criterion for the edges that follow.
The result is the set of edges meeting the continuation criterion.
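Putting it together on the sample data from the question (a sketch; it assumes a pandas DataFrame named df with the Source/Target columns shown above and relies on apply visiting the rows in order):
import pandas as pd

df = pd.DataFrame({
    'Source': ['A', 'W', 'B', 'C', 'C', 'A', 'Z'],
    'Target': ['B', 'X', 'C', 'D', 'Z', 'Z', 'Y'],
})

vs = Visitor()
vs.addNode('A')                                  # starting node given by the user
print(df[df.apply(isContinued, axis=1, vs=vs)])  # keeps every row except W -> X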

Finding conditional mutual information from 3 discrete variables

I am trying to find the conditional mutual information between three discrete random variables using the pyitlib package for Python, with the help of the formula:
I(X;Y|Z) = H(X|Z) + H(Y|Z) - H(X,Y|Z)
The expected conditional mutual information value is 0.011.
My 1st code:
import numpy as np
from pyitlib import discrete_random_variable as drv
X=[0,1,1,0,1,0,1,0,0,1,0,0]
Y=[0,1,1,0,0,0,1,0,0,1,1,0]
Z=[1,0,0,1,1,0,0,1,1,0,0,1]
a=drv.entropy_conditional(X,Z)
##print(a)
b=drv.entropy_conditional(Y,Z)
##print(b)
c=drv.entropy_conditional(X,Y,Z)
##print(c)
p=a+b-c
print(p)
The answer I am getting here is 0.4632245116328402.
My 2nd code:
import numpy as np
from pyitlib import discrete_random_variable as drv
X=[0,1,1,0,1,0,1,0,0,1,0,0]
Y=[0,1,1,0,0,0,1,0,0,1,1,0]
Z=[1,0,0,1,1,0,0,1,1,0,0,1]
a=drv.information_mutual_conditional(X,Y,Z)
print(a)
The answer I am getting here is 0.1583445441575102,
while the expected result is 0.011.
Can anybody help? I am in big trouble right now. Any kind of help will be appreciated.
Thanks in advance.
I think that the library function entropy_conditional(x,y,z) has some errors. I also tested it on my own samples, and the same problem happens.
However, the function entropy_conditional with two variables is OK.
So I wrote my own entropy_conditional(x,y,z) as entropy(x,y,z), and the result is correct.
The code may not be beautiful.
import math
import numpy as np

def gen_dict(x):
    # count occurrences of each value in x
    dict_z = {}
    for key in x:
        dict_z[key] = dict_z.get(key, 0) + 1
    return dict_z

def entropy(x, y, z):
    # conditional joint entropy H(X,Y|Z) for discrete samples x, y, z
    x = np.array([x, y, z]).T
    x = x[x[:, -1].argsort()]  # sort rows by the last column (z)
    w = x[:, -3]
    y = x[:, -2]
    z = x[:, -1]
    dict_z = gen_dict(z)
    list_z = [dict_z[i] for i in set(z)]
    p_z = np.array(list_z) / sum(list_z)
    pos = 0
    ent = 0
    for i in range(len(list_z)):
        # rows belonging to the i-th value of z
        w = x[pos:pos + list_z[i], -3]
        y = x[pos:pos + list_z[i], -2]
        z = x[pos:pos + list_z[i], -1]
        pos += list_z[i]
        # joint counts of (w, y) within this z-slice
        list_wy = np.zeros((len(set(w)), len(set(y))), dtype=float, order="C")
        list_w = list(set(w))
        list_y = list(set(y))
        for j in range(len(w)):
            pos_w = list_w.index(w[j])
            pos_y = list_y.index(y[j])
            list_wy[pos_w, pos_y] += 1
        list_p = list_wy.flatten()
        list_p = np.array([k for k in list_p if k > 0] / sum(list_p))
        ent_t = 0
        for j in list_p:
            ent_t += -j * math.log2(j)
        ent += p_z[i] * ent_t
    return ent
X=[0,1,1,0,1,0,1,0,0,1,0,0]
Y=[0,1,1,0,0,0,1,0,0,1,1,0]
Z=[1,0,0,1,1,0,0,1,1,0,0,1]
a=drv.entropy_conditional(X,Z)
##print(a)
b=drv.entropy_conditional(Y,Z)
c = entropy(X, Y, Z)
p=a+b-c
print(p)
0.15834454415751043
Based on the definitions of conditional entropy, calculating in bits (i.e. base 2), I obtain H(X|Z) = 0.784159, H(Y|Z) = 0.325011 and H(X,Y|Z) = 0.950826. Based on the definition of conditional mutual information you provide above, I obtain I(X;Y|Z) = H(X|Z) + H(Y|Z) - H(X,Y|Z) = 0.158344. Noting that pyitlib uses base 2 by default, drv.information_mutual_conditional(X,Y,Z) appears to be computing the correct result.
Note that your use of drv.entropy_conditional(X,Y,Z) in your first example to compute conditional entropy is incorrect; you can, however, use drv.entropy_conditional(XY,Z), where XY is a 1D array representing the joint observations of X and Y, for example XY = [2*xy[0] + xy[1] for xy in zip(X,Y)].
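A short sketch of that check, re-running the numbers quoted above with the joint (X, Y) observations encoded as a single array:
from pyitlib import discrete_random_variable as drv

X = [0,1,1,0,1,0,1,0,0,1,0,0]
Y = [0,1,1,0,0,0,1,0,0,1,1,0]
Z = [1,0,0,1,1,0,0,1,1,0,0,1]

XY = [2 * xy[0] + xy[1] for xy in zip(X, Y)]          # joint observations of (X, Y)
a = drv.entropy_conditional(X, Z)                     # H(X|Z)
b = drv.entropy_conditional(Y, Z)                     # H(Y|Z)
c = drv.entropy_conditional(XY, Z)                    # H(X,Y|Z)
print(a + b - c)                                      # ~0.158344
print(drv.information_mutual_conditional(X, Y, Z))    # should agree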

Python partial derivative

I am trying to put numbers into a function that has partial derivatives, but I can't find a correct way to do it. I have searched all over the internet and I always get an error. Here is the code:
from sympy import symbols,diff
import sympy as sp
import numpy as np
from scipy.misc import derivative
a, b, c, d, e, g, h, x= symbols('a b c d e g h x', real=True)
da=0.1
db=0.2
dc=0.05
dd=0
de=0
dg=0
dh=0
f = 4*a*b+a*sp.sin(c)+a**3+c**8*b
x = sp.sqrt(pow(diff(f, a)*da, 2)+pow(diff(f, b)*db, 2)+pow(diff(f, c)*dc, 2))
def F(a, b, c):
    return x
print(derivative(F(2, 3, 5)))
I get the following error: derivative() missing 1 required positional argument: 'x0'
I am new to Python, so maybe it's a stupid question, but I would be grateful if someone helped me.
You can find the three partial derivatives of the function foo with respect to the variables a, b and c at the point (2, 3, 5):
f = 4*a*b+a*sp.sin(c)+a**3+c**8*b
foo = sp.sqrt(pow(diff(f, a)*da, 2)+pow(diff(f, b)*db, 2)+pow(diff(f, c)*dc, 2))
foo_da = diff(foo, a)
foo_db = diff(foo, b)
foo_dc = diff(foo, c)
print(foo_da," = ", float(foo_da.subs({a:2, b:3, c:5})))
print(foo_db," = ", float(foo_db.subs({a:2, b:3, c:5})))
print(foo_dc," = ", float(foo_dc.subs({a:2, b:3, c:5})))
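If the goal is simply to evaluate the error-propagation expression itself at a = 2, b = 3, c = 5 (rather than differentiating it again), a minimal sketch reusing the symbols from the question and the expression foo from above would be:
F_num = sp.lambdify((a, b, c), foo)    # turn the sympy expression into an ordinary numeric function
print(F_num(2, 3, 5))
# equivalently:
print(float(foo.subs({a: 2, b: 3, c: 5})))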
I have used the Python package sympy to perform the partial derivative. The point at which the partial derivative is to be evaluated is val. The argument val can be passed as a list or a tuple.
# Sympy implementation to return the derivative of a function in x, y
# Enter ginput as a string expression in x and y, and val as a 1x2 array
def partial_derivative_x_y(ginput, der_var, val):
    import sympy as sp
    x, y = sp.symbols('x y')
    function = lambda x, y: ginput
    derivative_x = sp.lambdify((x, y), sp.diff(function(x, y), x))
    derivative_y = sp.lambdify((x, y), sp.diff(function(x, y), y))
    if der_var == 'x':
        return derivative_x(val[0], val[1])
    if der_var == 'y':
        return derivative_y(val[0], val[1])

input1 = 'x*y**2 + 5*log(x*y + x**7) + 99'
partial_derivative_x_y(input1, 'y', (3, 1))
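For this sample call, the partial derivative with respect to y is 2*x*y + 5*x/(x*y + x**7), which at (x, y) = (3, 1) evaluates to 6 + 15/2190, roughly 6.00685 (assuming log means the natural logarithm, as it does in sympy).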

Using theano.scan within PyMC3 gives TypeError: slice indices must be integers or None or have an __index__ method

I would like to use theano.scan within pymc3. I run into problems when I add more than one variable as sequences. Here is a simple example:
import numpy as np
import pymc3 as pm
import theano
import theano.tensor as T

a = np.ones(5)
b = np.ones(5)

basic_model = pm.Model()
with basic_model:
    a_plus_b, _ = theano.scan(fn=lambda a, b: a + b, sequences=[a, b])
This results in the following error:
Traceback (most recent call last):
File "StackOverflowExample.py", line 23, in <module>
sequences=[a, b])
File "\Anaconda3\lib\site-packages\theano\scan_module\scan.py", line 586, in scan
scan_seqs = [seq[:actual_n_steps] for seq in scan_seqs]
File "\Anaconda3\lib\site-packages\theano\scan_module\scan.py", line 586, in <listcomp>
scan_seqs = [seq[:actual_n_steps] for seq in scan_seqs]
TypeError: slice indices must be integers or None or have an __index__ method
However, when I run the same theano.scan outside a pymc model block, everything works fine:
a = T.vector('a')
b = T.vector('b')
a_plus_b, update = theano.scan(fn=lambda a, b: a + b, sequences=[a, b])
a_plus_b_function = theano.function(inputs=[a, b], outputs=a_plus_b, updates=update)
a = np.ones(5)
b = np.ones(5)
print(a_plus_b_function(a, b))
prints [2. 2. 2. 2. 2.], like it should.
In addition, the problem seems to be specific to adding more than one sequence. Everything works just fine when there is one variable in sequences and one in non_sequences. The following code works:
a = np.ones(5)
c = 2

basic_model = pm.Model()
with basic_model:
    a_plus_c, _ = theano.scan(fn=lambda a, c: a + c, sequences=[a], non_sequences=[c])
    a_plus_c_print = T.printing.Print('a_plus_c')(a_plus_c)
prints a_plus_c __str__ = [ 3. 3. 3. 3. 3.], as expected.
Note: I can't just use a + b instead of theano.scan because my actual function is more complex. I actually want to have something like this:
rewards = np.array([1, 1, 1, 1]) # reward (1) or no reward (0)
choices = np.array([1, 0, 1, 0]) # action left (1) or right (0)
Q_old = 0 # initial Q-value
alpha = 0.1 # learning rate
def update_Q(reward, choice, Q_old, alpha):
    return Q_old + choice * alpha * (reward - Q_old)

Q_left, _ = theano.scan(fn=update_Q,
                        sequences=[rewards, choices],
                        outputs_info=[Q_old],
                        non_sequences=[alpha])
Turns out it was a simple mistake! Everything works as soon as I define a and b as tensor variables. Adding these two lines did the job:
a = T.as_tensor_variable(np.ones(5))
b = T.as_tensor_variable(np.ones(5))
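The same fix should carry over to the Q-learning example from the question; a sketch (same update_Q as above, with the inputs wrapped as float64 tensor variables so the scan output matches the initial state's dtype) would be:
rewards = T.as_tensor_variable(np.array([1., 1., 1., 1.]))  # reward (1) or no reward (0)
choices = T.as_tensor_variable(np.array([1., 0., 1., 0.]))  # action left (1) or right (0)
Q_old = T.as_tensor_variable(np.float64(0.0))               # initial Q-value
alpha = np.float64(0.1)                                     # learning rate

Q_left, _ = theano.scan(fn=update_Q,
                        sequences=[rewards, choices],
                        outputs_info=[Q_old],
                        non_sequences=[alpha])
print(Q_left.eval())    # Q-value after each trial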

python numba fingerprint error

I'm attempting to use numba to optimise some code. I've worked through the initial examples in section 1.3.1 of the 0.26.0 user guide (http://numba.pydata.org/numba-doc/0.26.0/user/jit.html) and get the expected results, so I don't think the problem is the installation.
Here's my code:
import numba
import numpy
import random

a = 8
b = 4

def my_function(a, b):
    all_values = numpy.fromiter(range(a), dtype=int)
    my_array = []
    for n in range(a):
        some_values = (all_values[all_values != n]).tolist()
        c = random.sample(some_values, b)
        my_array.append(sorted([n] + c))
    return my_array

print(my_function(a, b))
my_function_numba = numba.jit()(my_function)
print(my_function_numba(a, b))
After printing the expected results from the my_function call, this returns the following error message:
ValueError Traceback (most recent call last)
<ipython-input-8-b5d8983a58f6> in <module>()
19 my_function_numba = numba.jit()(my_function)
20
---> 21 print(my_function_numba(a, b))
ValueError: cannot compute fingerprint of empty list
Fingerprint of empty list?
I'm not sure about that error in particular, but in general, to be fast numba requires a particular subset of numpy/python (see here and here for more). So I might rewrite it like this.
import numba
import numpy as np

@numba.jit(nopython=True)
def fast_my_function(a, b):
    all_values = np.arange(a)
    my_array = np.empty((a, b + 1), dtype=np.int32)
    for n in range(a):
        some = all_values[all_values != n]
        c = np.empty(b + 1, dtype=np.int32)
        c[1:] = np.random.choice(some, b)
        c[0] = n
        c.sort()
        my_array[n, :] = c
    return my_array
Main things to note:
no lists; I'm pre-allocating everything.
no use of generators (in both Python 2 and 3, for n in range(a) will get converted to a fast native loop).
adding nopython=True to the decorator makes numba complain if I use anything that can't be efficiently JITed.
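Assuming the rewrite above compiles on your numba version, a quick usage check with the same a = 8, b = 4 as in the question would be:
result = fast_my_function(8, 4)
print(result.shape)   # (8, 5): one row per n, each holding n plus 4 sampled values
print(result[0])      # sorted row containing 0 plus 4 values drawn from 1..7
One behavioural difference worth noting: np.random.choice draws with replacement by default, whereas the original random.sample draws without replacement.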
