WinBUGS: "expected a comma" error

Hi all!
I am using WinBUGS to do a simple linear regression. However, the system always gives the error message "expected a comma".
Here is my model statement:
model {
  for (i in 1:I)
  {
    Z[i] ~ dnorm(beta0 + beta1 * X[i], tau)
  }
  tau <- 1/(sigma*sigma)
  sigma ~ dunif(0, 100)
  beta0 ~ dnorm(0, 1E-6)
  beta1 ~ dnorm(0, 1E-6)
}
What is wrong with it? Thank you.

WinBUGS does not allow expressions as distribution parameters, as in
dnorm(beta0 + beta1 * X[i], tau).
The solution to your problem is:
model {
  for (i in 1:I)
  {
    Z[i] ~ dnorm(mu[i], tau)
    mu[i] <- beta0 + beta1 * X[i]
  }
  tau <- 1/(sigma*sigma)
  sigma ~ dunif(0, 100)
  beta0 ~ dnorm(0, 1.0E-6)
  beta1 ~ dnorm(0, 1.0E-6)
}

Related

Tensor("pow:0", ...) must be from the same graph as Tensor("Cast_2:0", ...)

I am trying to model something which requires computing a definite integral. The code is shown below:
import tensorflow as tf
from numpy import pi, inf
from tensorflow import log, sqrt, exp, pow
from scipy.integrate import quad  # for integration

def risk_neutral_pdf(phi, a, S, K, r, sigma, Mt, p_dict):
    phii = tf.complex(0., phi)
    A = tf.cast(0., tf.complex64)
    B = tf.cast(0., tf.complex64)
    p_dict['gamma'] = p_dict['gamma'] + p_dict['lamda'] + .5
    p_dict['lamda'] = -.5
    for t in range(Mt-1, -1, -1):
        temp = 1. - 2. * p_dict['alpha'] * B
        A = A + (phii + a) * r + p_dict['omega'] * B - .5 * log(temp)
        B = B * p_dict['beta'] + (phii + a) * (p_dict['lamda'] + p_dict['gamma']) - \
            .5 * p_dict['gamma']**2. + (.5*((phii + a) - p_dict['gamma'])**2. / temp)
    return tf.real(S**a * (S/K)**phii * exp(A + B * sigma**2.) / phii)

p_dict = {'lamda': 0.205, 'omega': 5.02e-6, 'beta': 0.589, 'gamma': 421.39, 'alpha': 1.32e-6}
S = 100.
K = 100.
r = 0.
Mt = 0
sq_ht = sqrt(.15**2/252.)
sigma = sq_ht

P1 = tf.py_func(lambda z: quad(risk_neutral_pdf, z, inf, args=(1., S, K, r, sigma, Mt, p_dict))[0],
                [0.], tf.float64)

with tf.Session() as sess:
    res = sess.run(P1)
    print(res)
The result returns "InvalidArgumentError (see above for traceback): ValueError: Tensor("pow:0", shape=(), dtype=float32) must be from the same graph as Tensor("Cast_2:0", shape=(), dtype=complex64)." However, no matter how I change the code or follow the solution in "ValueError: Tensor A must be from the same graph as Tensor B", it does not work. I am wondering whether I went wrong by putting tf.reset_default_graph() at the top, or whether the code needs some other changes.
Thank you. (TensorFlow version: 1.6.0)
Update:
I found that the sigma variable is square-rooted before being passed into the risk_neutral_pdf function and then squared again on return, which is unnecessary. So I modified the return statement to return tf.real(S**a * (S/K)**phii * exp(A + B * sigma) / phii) and changed sq_ht to .15**2/252.. The error then changes to "TypeError: a float is required", which I think is caused by mixing quad and Tensors. Any ideas how to solve this?
Many thanks.
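(For anyone hitting the same wall: the snippet below is a minimal sketch of one way to combine quad with a TF 1.x graph, not the asker's model. Build the graph once with a placeholder for the integration variable, and hand quad a plain Python function that evaluates the graph with sess.run and returns a float; the integrand here is a simple stand-in, not the pricing formula.)

import numpy as np
import tensorflow as tf
from scipy.integrate import quad

phi = tf.placeholder(tf.float64, shape=())  # integration variable, fed in from quad
integrand = tf.exp(-phi * phi)              # stand-in integrand, NOT the pricing formula

with tf.Session() as sess:
    def f(z):
        # quad requires a plain float, so evaluate the graph here
        return float(sess.run(integrand, feed_dict={phi: z}))
    result, abs_err = quad(f, 0.0, np.inf)
    print(result)  # ~0.8862, i.e. sqrt(pi)/2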

Define a value that minimizes a function through iterations

Currently I have the following code:
from math import exp, log, sqrt
from scipy import stats

call = []
diff = []

def results(S0, K, T, r, sigma, k, N, M, Iteration):
    for i in range(1, Iteration):
        S0 = float(S0)
        d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = (log(S0 / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        call1 = (S0 * stats.norm.cdf(d1, 0.0, 1.0) - K * exp(-r * T) * stats.norm.cdf(d2, 0.0, 1.0))
        call.append(call1)
        dilution = N/(N + k*M)
        Value_2 = Value_1 + call*M
        diff1 = Value_1 - Value_2 == 0
        diff.append(diff1)
    return call

print(results(100, 100, 1, 0.1, 0.2, 1, 100, 10, 1000))
I am trying to set up the iterations so that the program finds the value of "call" that gives the minimum value of "Value_1 - Value_2", based on the number of iterations. Can you please advise me how to advance the code? Specifically, I don't know how to code "return the output of 'call' such that 'Value_1 - Value_2' is minimal over the given number of iterations".
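(Not from the thread, but the usual pattern for "return the candidate that minimizes an objective" is to track a running best inside the loop. A minimal Python sketch, where objective stands in for abs(Value_1 - Value_2) and both names are mine:)

def best_call(candidates, objective):
    """Return (candidate, score) with the smallest objective value."""
    best_c, best_score = None, float("inf")
    for c in candidates:
        score = objective(c)
        if score < best_score:  # keep the running minimum
            best_c, best_score = c, score
    return best_c, best_score

# hypothetical usage with the asker's list of computed call values:
# best, score = best_call(call, lambda c: abs(value_1 - value_2_from(c)))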

How can I set a random seed using the jags() function?

Each time I run my JAGS model using the jags() function, I get very different values of the fitted parameters. However, I want other people to be able to reproduce my results.
I tried adding set.seed(123), but it didn't help. This link describes how to achieve my goal using the run.jags() function. I wonder how I can do something similar using jags(). Thank you!
Below is my model in R:
##------------- read data -------------##
m <- 6
l <- 3
node <- read.csv("answer.csv", header = F)
n <- nrow(node)
# values of nodes
## IG
IG <- c(c(0.0, 1.0, 0.0), c(0.0, 0.0, 1.0), c(1.0, 0.0, 0.0), c(1.0, 0.0, 0.0), c(0.0, 1.0, 0.0), c(0.0, 0.0, 1.0))
IG <- matrix(IG, nrow=6, ncol=3, byrow=T)
V_IG <- array(0, dim=c(n, m, l))
for (i in 1:n){
  for (j in 1:m){
    for (k in 1:l)
    {
      V_IG[i,j,k] <- IG[j,k] # alternatively, V[i,j,k] <- PTS[j,k]
    }
  }
}
## PTS
PTS <- c(c(1.0, 0.5, 0.0), c(1.0, 0.0, 0.5), c(1.0, 1.0, 0.0), c(1.0, 0.0, 1.0), c(0.0, 0.5, 1.0), c(0.0, 1.0, 0.5))
PTS <- matrix(PTS, nrow=m, ncol=3, byrow=T)
V_PTS <- array(0, dim=c(n, m, l))
for (i in 1:n){
  for (j in 1:m){
    for (k in 1:l)
    {
      V_PTS[i,j,k] <- PTS[j,k]
    }
  }
}
##------------- fit model -------------##
set.seed(123)
data <- list("n", "m", "V_IG", "V_PTS", "node")
myinits <- list(list(tau = rep(1,n), theta = rep(0.5,n)))
parameters <- c("tau", "theta")
samples <- jags(data, inits=myinits, parameters,
                model.file = "model.txt", n.chains=1, n.iter=10000,
                n.burnin=1, n.thin=1, DIC=T)
And my model file model.txt:
model{
  # data: which node (1, 2, 3) was chosen by each child in each puzzle
  for (i in 1:n) # for each child
  {
    for (j in 1:m) # for each problem
    {
      # node chosen
      node[i,j] ~ dcat(mu[i,j,1:3])
      mu[i,j,1:3] <- exp_v[i,j,1:3] / sum(exp_v[i,j,1:3])
      for (k in 1:3) {
        exp_v[i,j,k] <- exp((V_IG[i,j,k]*theta[i] + V_PTS[i,j,k]*(1-theta[i]))/tau[i])
      }
    }
  }
  # priors on tau and theta
  for (i in 1:n)
  {
    tau[i] ~ dgamma(0.001,0.001)
    theta[i] ~ dbeta(1,1)
  }
}
I know this is an older question, but for anyone using the jagsUI package, the jags() function has an argument for setting the seed, seed = ####. So, for example, a JAGS call could be:
np.sim1 <- jags(data = data1, parameters.to.save = params1, model.file = "mod1_all.txt",
                n.chains = nc, n.iter = ni, n.burnin = nb, n.thin = nt, seed = 4879)
summary(np.sim1)
Here is a toy example for linear regression. First the model:
model{
  a0 ~ dnorm(0, 0.0001)
  a1 ~ dnorm(0, 0.0001)
  tau ~ dgamma(0.001,0.001)
  for (i in 1:100) {
    y[i] ~ dnorm(mu[i], tau)
    mu[i] <- a0 + a1 * x[i]
  }
}
Now we generate some data and use the set.seed function to generate identical results from multiple calls to the jags function.
# make the data and prepare what we need to fit the model
x <- rnorm(100)
y <- 1 + 1.2 * x + rnorm(100)
data <- list("x", "y")
parameters <- c("a0", "a1", "tau")
inits <- list(list(a0 = 1, a1 = 0.5, tau = 1))
# first fit
set.seed(121)
samples <- jags(data, inits, parameters,
                model.file = "./sov/lin_reg.R",
                n.chains = 1, n.iter = 5000, n.burnin = 1, n.thin = 1)
# second fit
set.seed(121) # with set.seed at the same value
samples2 <- jags(data, inits, parameters,
                 model.file = "./sov/lin_reg.R",
                 n.chains = 1, n.iter = 5000, n.burnin = 1, n.thin = 1)
If we pull out the draws for one of the parameters from samples and samples2, we can see that they have generated the same values.
a0_1 <- samples$BUGSoutput$sims.list$a0
a0_2 <- samples2$BUGSoutput$sims.list$a0
head(cbind(a0_1, a0_2))
[,1] [,2]
[1,] 1.0392019 1.0392019
[2,] 0.9155636 0.9155636
[3,] 0.9497509 0.9497509
[4,] 1.0706620 1.0706620
[5,] 0.9901852 0.9901852
[6,] 0.9307072 0.9307072

SGD with L2 regularization in mllib

I am having difficulty reading the open-source mllib code for SGD with L2 regularization.
The code is:
class SquaredL2Updater extends Updater {
  override def compute(
      weightsOld: Vector,
      gradient: Vector,
      stepSize: Double,
      iter: Int,
      regParam: Double): (Vector, Double) = {
    // add up both updates from the gradient of the loss (= step) as well as
    // the gradient of the regularizer (= regParam * weightsOld)
    // w' = w - thisIterStepSize * (gradient + regParam * w)
    // w' = (1 - thisIterStepSize * regParam) * w - thisIterStepSize * gradient
    val thisIterStepSize = stepSize / math.sqrt(iter)
    val brzWeights: BV[Double] = weightsOld.toBreeze.toDenseVector
    brzWeights :*= (1.0 - thisIterStepSize * regParam)
    brzAxpy(-thisIterStepSize, gradient.toBreeze, brzWeights)
    val norm = brzNorm(brzWeights, 2.0)
    (Vectors.fromBreeze(brzWeights), 0.5 * regParam * norm * norm)
  }
}
The part I am having trouble with is
brzWeights :*= (1.0 - thisIterStepSize * regParam)
The Breeze library has documentation that explains the :*= operator:
/** Mutates this by element-wise multiplication of b into this. */
final def :*=[TT >: This, B](b: B)(implicit op: OpMulScalar.InPlaceImpl2[TT, B]): This = {
  op(repr, b)
  repr
}
It looks like it's just multiplication of a vector by a scalar.
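(For readers who don't know Breeze, here is a NumPy analogue of that in-place scalar multiply, purely for illustration:)

import numpy as np

w = np.array([1.0, 2.0, 3.0])
w *= 0.9      # in-place elementwise multiply by a scalar, like brzWeights :*= 0.9
print(w)      # [0.9 1.8 2.7]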
The formula I found for the gradient in the case of L2 regularization is
gradient_total = gradient + regParam * w
(the regularization term is (regParam/2) * ||w||^2, whose gradient is regParam * w).
How does the code represent this gradient in this update? Can someone help, please?
OK, I figured it out. The updater equation is
w' = w - thisIterStepSize * (gradient + regParam * w)
Rearranging terms gives
w' = (1 - thisIterStepSize * regParam) * w - thisIterStepSize * gradient
Recognizing that the last term is just the scaled loss gradient, this is equivalent to the code, which has
brzAxpy(-thisIterStepSize, gradient.toBreeze, brzWeights)
Breaking that out:
brzWeights = brzWeights + -thisIterStepSize * gradient.toBreeze
and the previous line, brzWeights :*= (1.0 - thisIterStepSize * regParam), means
brzWeights = brzWeights * (1.0 - thisIterStepSize * regParam)
so, finally,
brzWeights = brzWeights * (1.0 - thisIterStepSize * regParam) + (-thisIterStepSize) * gradient.toBreeze
Now the code and equation match within a normalization factor, which I believe is taken care of in the following line.
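For illustration only (not part of the original answer), the whole update can also be written as a short Python/NumPy sketch that mirrors the Scala, with w' = (1 - step * regParam) * w - step * gradient:

import numpy as np

def squared_l2_update(weights, gradient, step_size, iteration, reg_param):
    step = step_size / np.sqrt(iteration)
    new_w = (1.0 - step * reg_param) * weights      # shrink step from the regularizer
    new_w -= step * gradient                        # axpy step from the loss gradient
    reg_val = 0.5 * reg_param * float(np.dot(new_w, new_w))  # 0.5 * regParam * ||w||^2
    return new_w, reg_val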

Color interpolation between 3 colors

I use the following equation to get a nice color gradient from colorA to colorB, but I have no idea how to do the same for 3 colors, so that the gradient goes from colorA to colorB to colorC:
colorT = colorA * p + colorB * (1.0 - p); where p is a percentage from 0.0 to 1.0
Thanks
Thanks for the formula. But I had to make some modifications to it, as it didn't interpolate between the 3 colors properly (there were jumps in the color change).
Here is the fix for that:
if (p < 0.5)
{
COLORx = (COLORb * p * 2.0) + COLORa * (0.5 - p) * 2.0;
}
else
{
COLORx = COLORc * (p - 0.5) * 2.0 + COLORb * (1.0 - p) * 2.0;
}
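(As an illustration of the corrected weighting, not from the original answer: a small Python sketch, assuming colors are (r, g, b) tuples of floats, that hits colorB exactly at p = 0.5:)

def lerp3(color_a, color_b, color_c, p):
    """Blend A -> B -> C as p runs from 0.0 to 1.0."""
    if p < 0.5:
        w = p * 2.0                # 0..1 across the A-B half
        return tuple(a * (1.0 - w) + b * w for a, b in zip(color_a, color_b))
    w = (p - 0.5) * 2.0            # 0..1 across the B-C half
    return tuple(b * (1.0 - w) + c * w for b, c in zip(color_b, color_c))

print(lerp3((1, 0, 0), (0, 1, 0), (0, 0, 1), 0.25))  # (0.5, 0.5, 0.0)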
Well, for 3 colors, you can just do the same with p = 0.0 to 2.0:
if p <= 1.0
    colorT = colorA * p + colorB * (1.0 - p);
else
    colorT = colorB * (p - 1.0) + colorC * (2.0 - p);
Or scale it so you can still use p = 0.0 to 1.0:
if p <= 0.5
    colorT = colorA * p * 2.0 + colorB * (0.5 - p) * 2.0;
else
    colorT = colorB * (p - 0.5) * 2.0 + colorC * (1.0 - p) * 2.0;
It might be possible to construct a single expression for that, but the simplest is to use a condition to use different expressions depending on whether you are in the A - B part or B - C part of the range:
colorT = p < 0.5
    ? colorA * p * 2.0 + colorB * (1.0 - p * 2.0)
    : colorB * (p - 0.5) * 2.0 + colorC * (1.0 - (p - 0.5) * 2.0);
One possible solution is to use interpolation via a Bézier curve:
http://en.wikipedia.org/wiki/B%C3%A9zier_curve
If you look at the special case of the quadratic Bézier curve, you can see a formula that interpolates between 3 points, or colors in your case:
colorT = (1-p)*(1-p)*Color0 + 2*(1-p)*p*Color1 + p*p*Color2, 0 <= p <= 1
This is a generalization of your linear formula.
EDIT:
On second thought, this method doesn't give the results you want, as the intermediate point is never touched.
To get a smooth curve that touches all of your points (colors), you have to use a spline: http://en.wikipedia.org/wiki/Spline_interpolation
You want to be able to create a 3-color gradient from equal segments? Do exactly the same: after you're done with this gradient, start a new one where colorA is the current colorB and colorB is the new color. Append the results and you're done:
colorA ---- colorB, then colorB ---- colorC
Good luck!
