The functionality of updates in theano.function

I am trying to understand the following piece of theano code.
self.sgd_step = theano.function(
    [x, y, learning_rate, theano.Param(decay, default=0.9)],
    [],
    updates=[(E, E - learning_rate * dE / T.sqrt(mE + 1e-6)),
             (U, U - learning_rate * dU / T.sqrt(mU + 1e-6)),
             (W, W - learning_rate * dW / T.sqrt(mW + 1e-6)),
             (V, V - learning_rate * dV / T.sqrt(mV + 1e-6)),
             (b, b - learning_rate * db / T.sqrt(mb + 1e-6)),
             (c, c - learning_rate * dc / T.sqrt(mc + 1e-6)),
             (self.mE, mE),
             (self.mU, mU),
             (self.mW, mW),
             (self.mV, mV),
             (self.mb, mb),
             (self.mc, mc)
             ])
Can someone please tell me what the author of the above code is trying to do here? Is there a value, [x, y, learning_rate, theano.Param(decay, default=0.9)], that is being updated, and is it going to be updated by []? And what is the function of updates here?
I would be grateful for any idea of what is going on in the above code.

The documentation of the updates parameter is as follows (taken from here).
updates must be supplied with a list of pairs of the form (shared-variable, new expression). It can also be a dictionary whose keys are shared-variables and values are the new expressions. Either way, it means "whenever this function runs, it will replace the .value of each shared variable with the result of the corresponding expression". Above, our accumulator replaces the state's value with the sum of the state and the increment amount.
So when you call the above Theano function with the required inputs, it will update the values of the shared variables, namely E, U, W, V, b, c, ..., self.mc. The new value for each is given by the second element of its tuple: basically, E = E - learning_rate * dE / T.sqrt(mE + 1e-6), and so on.
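For intuition, here is a minimal sketch mirroring the accumulator example from the Theano documentation; it shows how updates replaces a shared variable's value on every call of the compiled function:
import theano
import theano.tensor as T

state = theano.shared(0)   # shared variable holding persistent state
inc = T.iscalar('inc')     # ordinary input

# Each call returns the old state, then replaces state with state + inc.
accumulator = theano.function([inc], state, updates=[(state, state + inc)])

accumulator(1)    # returns 0; state is now 1
accumulator(10)   # returns 1; state is now 11
In sgd_step above, the same mechanism applies the parameter updates as a side effect of each call, which is why the output list [] is empty: the function is called only for its side effects.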

Related

Solving vector second order differential equation while indexing into an array

I'm attempting to solve the differential equation:
m(t) = M(x)x'' + C(x, x') + B x'
where x and x' are vectors with 2 entries representing the angles and angular velocities in a dynamical system. M(x) is a 2x2 matrix that is a function of the components of x, C is a 2x1 vector that is a function of x and x', and B is a 2x2 matrix of constants. m(t) is a 2x1001 array containing the torques applied to each of the two joints at the 1001 time steps, and I would like to calculate the evolution of the angles over those 1001 time steps.
I've transformed it to standard form, such that:
x'' = M(x)^-1 (m(t) - C(x, x') - B x')
Then substituting y_1 = x and y_2 = x' gives the first-order system of equations:
y_1' = y_2
y_2' = M(y_1)^-1 (m(t) - C(y_1, y_2) - B y_2)
(I've used theta and phi in my code for x and y)
def joint_angles(theta_array, t, torques, B):
    phi_1 = np.array([theta_array[0], theta_array[1]])
    phi_2 = np.array([theta_array[2], theta_array[3]])

    def M_func(phi):
        M = np.array([[a_1 + 2.*a_2*np.cos(phi[1]), a_3 + a_2*np.cos(phi[1])],
                      [a_3 + a_2*np.cos(phi[1]), a_3]])
        return np.linalg.inv(M)

    def C_func(phi, phi_dot):
        return a_2 * np.sin(phi[1]) * np.array([-phi_dot[1] * (2. * phi_dot[0] + phi_dot[1]), phi_dot[0]**2])

    dphi_2dt = M_func(phi_1) @ (torques[:, t] - C_func(phi_1, phi_2) - B @ phi_2)
    return dphi_2dt, phi_2
t = np.linspace(0,1,1001)
initial = theta_init[0], theta_init[1], dtheta_init[0], dtheta_init[1]
x = odeint(joint_angles, initial, t, args = (torque_array, B))
I get the error that I cannot index into torques using the t array, which makes perfect sense; however, I am not sure how to have it use the current value of the torques at each time step.
I also tried putting the odeint call in a for loop, evaluating only one time step at a time and using the solution as the initial conditions for the next loop, but the function simply returned the initial conditions, so every loop was identical. This leads me to suspect I've made a mistake in my implementation of the standard form, but I can't work out what it is. It would be preferable, however, not to have to call the odeint solver in a for loop, and rather to do it all in one call.
If helpful, my initial conditions and constant values are:
theta_init = np.array([10*np.pi/180, 143.54*np.pi/180])
dtheta_init = np.array([0, 0])
L_1 = 0.3
L_2 = 0.33
I_1 = 0.025
I_2 = 0.045
M_1 = 1.4
M_2 = 1.0
D_2 = 0.16
a_1 = I_1+I_2+M_2*(L_1**2)
a_2 = M_2*L_1*D_2
a_3 = I_2
Thanks for helping!
The solver uses internal step sizes that are adapted to the problem. The given time list is just a list of points at which the internal solution is interpolated to produce the output samples. The internal and external time lists are in no way related; the internal steps depend only on the given tolerances.
There is no natural relation between array indices and sample times.
Translating a given time into an index and constructing a sample value from the surrounding table entries is called interpolation (by a piecewise polynomial function).
Torque, as a physical phenomenon, is at least continuous, so a piecewise linear interpolation is the easiest way to transform the given table of function values into an actual continuous function. Of course, one also needs the time array.
So use numpy.interp or the more advanced routines of scipy.interpolate (such as interp1d) to define a torque function that can be evaluated at the arbitrary times demanded by the solver and its integration method.
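A minimal sketch of that approach, assuming torque_array has shape (2, 1001) on the grid t as in the question, and reusing the question's M_func and C_func (assumed hoisted to module level). Note that odeint also expects the derivative in the same order as the state vector, so the angle derivatives phi_2 come first in the return value:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

t = np.linspace(0, 1, 1001)
# Turn the torque table into a continuous function of time,
# interpolating linearly along the time axis.
torque_func = interp1d(t, torque_array, axis=1, fill_value="extrapolate")

def joint_angles(theta_array, t, torque_func, B):
    phi_1 = theta_array[:2]   # angles
    phi_2 = theta_array[2:]   # angular velocities
    m_t = torque_func(t)      # torque at the solver's current (arbitrary) time
    dphi_2dt = M_func(phi_1) @ (m_t - C_func(phi_1, phi_2) - B @ phi_2)
    return np.concatenate([phi_2, dphi_2dt])  # (x', x'') matches state order

x = odeint(joint_angles, initial, t, args=(torque_func, B))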

How to make a difference between two function close to zero through iteration?

I need to construct a loop (simulation) that iterates a certain number of times and displays the value of the warrant once the new firm value is close to the guessed firm value. Specifically, the idea is to start out with a guess for the firm value (for example, the stock price multiplied by the number of shares). You then value the warrant as a call option (the code below) on this value multiplied by the dilution factor, using the same volatility as the share price. You then recompute the value of the firm (number of shares times share price plus number of warrants times warrant price). This value will be different from the firm value you started with, so you redo the procedure; after a few iterations the difference in firm values tends to zero. I have the following code for this, but what I get is:
TypeError: 'int' object is not subscriptable
Please, help me to figure out the error given the code below:
def bsm_call_value(S0, K, T, r, sigma):
    from math import log, sqrt, exp
    from scipy import stats
    S0 = float(S0)
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = (log(S0 / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    value = (S0 * stats.norm.cdf(d1, 0.0, 1.0)
             - K * exp(-r * T) * stats.norm.cdf(d2, 0.0, 1.0))
    return value

def warrant_1unobservable(S0, K, T, r, sigma, k, N, M, Iteration):
    for i in range(1, Iteration):
        Guess_FirmValue = S0 * N
        dilution = N / (N + k * M)
        warrant[i] = bsm_call_value(Guess_FirmValue[i] / N, 100, 1, 0.1, 0.2) * dilution
        New_FirmValue[i] = Guess_FirmValue[i] + warrant[i]
        Guess_FirmValue[i] - New_FirmValue[i] == 0
    return warrant

print(warrant_1unobservable(100, 100, 1, 0.1, 0.2, 1, 100, 10, 1000))
I'm not really a python expert and I'm not familiar with the algorithm you're using, but I'll point out a few things that could be causing the issue.
1) In warrant_1unobservable, you first assign Guess_FirmValue a scalar value (since both S0 and N are scalars the way you call the function), and then you try to access it with an index as Guess_FirmValue[i]. My guess would be that this is causing the error you displayed, since you're trying to index/subscript a variable that, based on your function input values, would be an integer.
2) Both warrant[i] and New_FirmValue[i] are attempts to assign values to an indexed position in a list, but nowhere do you initialize these variables as lists. Lists in python are initialized as warrant = []. Also, it's likely that you would have to either a) pre-allocate the lists to the correct size based on the Iteration or b) use append to push new values onto the back of the list.
3) Guess_FirmValue[i] - New_FirmValue[i] == 0 is a vacuous line of code. All this does is evaluate to either true or false, while performing no other operation. I imagine you're trying to check if the values are equal and then return, but that won't happen even if you stick this in an if statement. It is extremely unlikely that the floating-point representation of the values will ever be identical. This kind of break is accomplished by checking if the difference of the values is below some tolerance, which is set to be a very small number. Ex.:
if abs(Guess_FirmValue[i] - New_FirmValue[i]) <= 1e-9:
    return ...
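Putting those three fixes together, a sketch of the iteration might look like the following. This is one plausible reading of the algorithm described in the question, not a verified implementation; the recomputation new_firm_value = S0 * N + M * warrant and the tolerance are assumptions:
def warrant_iteration(S0, K, T, r, sigma, k, N, M, max_iter, tol=1e-9):
    guess_firm_value = S0 * N              # initial guess: share price times shares
    dilution = N / (N + k * M)
    warrant = 0.0
    for _ in range(max_iter):
        warrant = bsm_call_value(guess_firm_value / N, K, T, r, sigma) * dilution
        new_firm_value = S0 * N + M * warrant   # shares plus warrants
        if abs(guess_firm_value - new_firm_value) <= tol:
            break                          # converged: difference close to zero
        guess_firm_value = new_firm_value  # iterate with the updated firm value
    return warrant

print(warrant_iteration(100, 100, 1, 0.1, 0.2, 1, 100, 10, 1000))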

Implementing Oja's Learning rule in Hopfield Network using python

I am following this paper to implement Oja's Learning rule in python
Oja's Learning Rule
u = 0.01
V = np.dot(self.weight, input_data.T)
print(V.shape, self.weight.shape, input_data.shape)  #(625, 2) (625, 625) (2, 625)
So far, I am able to follow the paper, however on arriving at the final equation from the link, I run into numpy array dimension mismatch errors which seems to be expected. This is the code for the final equation
self.weight += u * V * (input_data.T - (V * self.weight))
If I break it down like so:
u = 0.01
V = np.dot(self.weight, input_data.T)
temp = u * V                               #(625, 2)
x = input_data - np.dot(V.T, self.weight)  #(2, 625)
k = np.dot(temp, x)                        #(625, 625)
self.weight = np.add(self.weight, k, casting='same_kind')
This clears the dimension constraints, but the resulting pattern is wrong by a long stretch (I was just fixing the dimension order, knowing full well the result would be incorrect). I want to know if my interpretation of the equation in the first approach, which seemed like the logical way to do it, is correct. Any suggestions on implementing the equation properly?
I have implemented the rule based on this link: Oja Rule. The results I get are similar to those of the Hebbian learning rule, so I am not exactly sure about the correctness of the implementation. However, I am posting it so anyone looking for an implementation can get a few ideas and correct the code if it is wrong.
u = 0.01
V = np.dot(self.weight, input_data.T)
i = 0
for inp in input_data:
    v = V[:, i].reshape((n_features, 1))  # n_features is the number of columns
    self.weight += (inp * v) - u * np.square(v) * self.weight
    i += 1
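For reference, here is a sketch of the textbook single-neuron form of Oja's rule, w += eta * y * (x - y * w) with y = w . x. It is not the matrix form used above, but it is the base case any implementation should reduce to:
import numpy as np

def oja_update(w, x, eta=0.01):
    # One step of Oja's rule for a single linear neuron:
    # y = w . x, then w += eta * y * (x - y * w).
    y = np.dot(w, x)
    return w + eta * y * (x - y * w)

# Tiny usage example on random data; w converges toward the
# dominant principal-component direction of the inputs.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
for _ in range(1000):
    w = oja_update(w, rng.normal(size=5))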

Theano multiplying by zero

Can anybody explain to me what is the meaning behind these two lines of code from here: https://github.com/Newmu/Theano-Tutorials/blob/master/4_modern_net.py
acc = theano.shared(p.get_value() * 0.)
acc_new = rho * acc + (1 - rho) * g ** 2
Is it a mistake? Why do we initialize acc to zero and then multiply it by rho in the next line? It looks like this will not achieve anything and acc will remain zero. Will there be any difference if we replace "rho * acc" with just "acc"?
The full function is given below:
def RMSprop(cost, params, lr=0.001, rho=0.9, epsilon=1e-6):
    grads = T.grad(cost=cost, wrt=params)
    updates = []
    for p, g in zip(params, grads):
        acc = theano.shared(p.get_value() * 0.)
        acc_new = rho * acc + (1 - rho) * g ** 2
        gradient_scaling = T.sqrt(acc_new + epsilon)
        g = g / gradient_scaling
        updates.append((acc, acc_new))
        updates.append((p, p - lr * g))
    return updates
This is just a way to tell Theano "create a shared variable and initialize its value to be zero in the same shape as p."
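The line is roughly equivalent to the following sketch; multiplying by 0. preserves the shape (and floating dtype) of p while zeroing every entry:
import numpy as np
import theano

# Both initializations produce an all-zero shared variable shaped like p.
acc = theano.shared(p.get_value() * 0.)
acc = theano.shared(np.zeros_like(p.get_value()))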
This RMSprop method is a symbolic method. It does not actually compute the RMSprop parameter updates; it only tells Theano how the parameter updates should be computed when the eventual Theano function is executed.
If you look further down the tutorial code you linked to, you'll see that the symbolic execution graph for the parameter updates is constructed by RMSprop via a call on line 67. These updates are then compiled into a Theano function called train on line 69, and the train function is executed many times on line 74, within the for loops of lines 72 and 73. The Python function RMSprop will be called only once, irrespective of how many times the train function is called within those loops.
Within RMSprop, we are telling Theano that, for each parameter p, we need a new Theano variable whose initial value has the same shape as p and is 0. throughout. We then go on to tell Theano how it should update both this new variable (unnamed as far as Theano is concerned but named acc in Python) and how to update the parameter p itself. These commands do not alter either p or acc, they just tell Theano how p and acc should be updated later, once the function has been compiled (line 69) each time it is executed (line 74).
The function executions on line 74 will not call the RMSprop Python function, they execute a compiled version of RMSprop. There will be no initialization inside the compiled version because that already happened in the Python version of RMSprop. Each train execution of the line acc_new = rho * acc + (1 - rho) * g ** 2 will use the current value of acc not its initial value.
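A condensed sketch of that compile-once / run-many pattern (the variable names X, Y, and minibatches are illustrative, not the tutorial's exact code):
# Build the symbolic update graph once...
updates = RMSprop(cost, params, lr=0.001)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates,
                        allow_input_downcast=True)

# ...then execute the compiled function many times; each call applies
# the updates to every p and acc in place.
for epoch in range(10):
    for xb, yb in minibatches:
        train(xb, yb)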

SAS search for value of variable

I am relatively new to SAS with limited programming experience. I need to write code that searches for the value of a specific variable that will form an equality. For example, I need to find the value of k that makes the following algebraic equation hold:
A = B + (C - k*B)/(1+k) + (D - k*E)/(1+k)^2, etc.
In this equation, I know the values of A, B, C, D, etc. and need to search for a value of k (the discount rate) that fits the equality.
Here's the proc model code I'm trying to use:
proc model data = test noprint;
   p = bv0 + ((e1 - (k * bv0)) / (1+k))
           + ((e2 - (k * bv1)) / ((1+k)**2))
           + ((e3 - (k * bv2)) / ((1+k)**3))
           + ((e3 - k * (bv2)) * (1+g)) / (((1+k)**3) * (k - g));
   ENDOGENOUS k;
   solve k / out = est;
run;
When I run this code, I receive the following error message:
WARNING: No equations are defined in the model. (Check for missing VAR or ENDOGENOUS statement.)
ERROR: The following solve variables do not appear in any of the equations to be solved: k
Any help anyone can provide would be great! Thanks!
If p is supposed to be the name of an equation, try adding the eq. prefix before p (i.e. eq.p). If p is instead a data set variable that the expression on the right should equal, replace p with eq.equation1 and move -p to the right-hand side, so that the equation's residual is driven to zero.
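A sketch of the second variant, assuming p is a variable in the data set test (the equation name equation1 is arbitrary):
proc model data = test noprint;
   eq.equation1 = bv0 + ((e1 - (k * bv0)) / (1+k))
                + ((e2 - (k * bv1)) / ((1+k)**2))
                + ((e3 - (k * bv2)) / ((1+k)**3))
                + ((e3 - k * (bv2)) * (1+g)) / (((1+k)**3) * (k - g))
                - p;
   ENDOGENOUS k;
   solve k / out = est;
run;
PROC MODEL then searches for the value of k that makes the residual of equation1 zero for each observation.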
