Error in update.jags(model, n.iter, ...) : Error in node sd[1] Invalid parent values - gaussian

I am getting an error in node sd[1]; the compiler says "invalid parent values". I am working with a Gaussian mixture model for the "galaxies" data from the "MASS" package in R.
library(rjags)
library(MASS)
library(mcsm)
data("galaxies")
summary(galaxies)
y = galaxies
ngroups = 2
jags_data = list(y=y, n=length(y), ngroups=ngroups)
gaussmodel = "
model {
  for (i in 1:n) {
    y[i] ~ dnorm(mu[z[i]], tau[z[i]])
    z[i] ~ dcat(group_probs)
  }
  group_probs ~ ddirich(d)
  for (j in 1:ngroups) {
    mu_raw[j] ~ dnorm(0, 1E-6)
    tau[j] ~ dgamma(0.001, 0.001)
    sd[j] = pow(tau[j], -0.5)
    d[j] = 2
  }
  mu = sort(mu_raw)
}
"
model = jags.model(textConnection(gaussmodel), data=jags_data,
n.chains=4)
update(model,n.iter=1E4)
samples = coda.samples(model=model, variable.names=c("mu", "sd", "group_probs"), n.iter=1E4, thin=5)

I don't know much about rjags and Bayesian analysis in detail, but I think your problem is in the sd line of the code, where sd[j] = pow(tau[j], -0.5).
I believe the -0.5 is the problem. I am not sure whether you intended the exponent to be negative, but it seemed to cause problems to surface in your Dirichlet model.
Taking away the negative value seemed to do the trick.

Related

Hi all. I am trying to run the BYM2 model and at this stage I have run into a problem. Can you help me?

library("rstan")
library("rstudioapi")
library("parallel")
library("brms")
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
library(pkgbuild) # load package
find_rtools() # should be TRUE, assuming you have Rtools 3.5
# fit model icar.stan to NYC census tracts neighborhood map
install.packages('tidyverse', dependencies = TRUE)
install.packages('rstanarm', dependencies = TRUE)
library(rstan);
library(tidyverse)
library(rstanarm)
"data {
int<lower=0> N;
int<lower=0> N_edges;
int<lower=1, upper=N> node1[N_edges]; // node1[i] adjacent to node2[i]
int<lower=1, upper=N> node2[N_edges]; // and node1[i] < node2[i]
int<lower=0> y[N]; // count outcomes
vector<lower=0>[N] E; // exposure
int<lower=1> K; // num covariates
matrix[N, K] x; // design matrix
real<lower=0> scaling_factor; // scales the variance of the spatial effects
}
transformed data {
vector[N] log_E = log(E);
}
parameters {
real beta0; // intercept
vector[K] betas; // covariates
real<lower=0> sigma; // overall standard deviation
real<lower=0, upper=1> rho; // proportion unstructured vs. spatially structured variance
vector[N] theta; // heterogeneous effects
vector[N] phi; // spatial effects
}
transformed parameters {
vector[N] convolved_re;
// variance of each component should be approximately equal to 1
convolved_re = sqrt(1 - rho) * theta + sqrt(rho / scaling_factor) * phi;
}
model {
y ~ poisson_log(log_E + beta0 + x * betas + convolved_re * sigma); // co-variates
// This is the prior for phi! (up to proportionality)
target += -0.5 * dot_self(phi[node1] - phi[node2]);
beta0 ~ normal(0.0, 1.0);
betas ~ normal(0.0, 1.0);
theta ~ normal(0.0, 1.0);
sigma ~ normal(0, 1.0);
rho ~ beta(0.5, 0.5);
// soft sum-to-zero constraint on phi
sum(phi) ~ normal(0, 0.001 * N); // equivalent to mean(phi) ~ normal(0,0.001)
}
generated quantities {
real logit_rho = log(rho / (1.0 - rho));
vector[N] eta = log_E + beta0 + x * betas + convolved_re * sigma; // co-variates
vector[N] mu = exp(eta);
}"
options(mc.cores = parallel::detectCores())
library(INLA)
source("mungecardata4stan.R")
source("iran_data.R")
y = data$y;
E = data$E;
K = 1;
x = 0.1 * data$x;
nbs = mungeCARdata4stan(data$adj, data$num);
N = nbs$N;
node1 = nbs$node1;
node2 = nbs$node2;
N_edges = nbs$N_edges;
adj.matrix = sparseMatrix(i=nbs$node1,j=nbs$node2,x=1,symmetric=TRUE)
Q= Diagonal(nbs$N, rowSums(adj.matrix)) - adj.matrix
Q_pert = Q + Diagonal(nbs$N) * max(diag(Q)) * sqrt(.Machine$double.eps)
Q_inv = inla.qinv(Q_pert, constr=list(A = matrix(1,1,nbs$N),e=0))
scaling_factor = exp(mean(log(diag(Q_inv))))
scot_stanfit = stan("bym2_predictor_plus_offset.stan", data=list(N,N_edges,node1,node2,y,x,E,scaling_factor), warmup=5000, iter=6000);
Error in new_CppObject_xp(fields$.module, fields$.pointer, ...) :
Exception: variable does not exist; processing stage=data initialization; variable name=N; base type=int (in 'string', line 3, column 2 to column 17)
In addition: Warning message:
In readLines(file, warn = TRUE) :
incomplete final line found on 'C:\Users\Uaer\Downloads\bym2_predictor_plus_offset.stan'
failed to create the sampler; sampling not done
In my opinion, in source("mungecardata4stan.R") you should give the full path to mungecardata4stan.R as it sits on your PC, and the same for source("iran_data.R"), like this: source("C:/Users/me/Desktop/iran_data.R").
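For example (the paths below are just placeholders; point them at wherever the files actually live on your machine):
# give source() the full locations of the helper scripts
source("C:/Users/me/Desktop/mungecardata4stan.R")
source("C:/Users/me/Desktop/iran_data.R")
# or set the working directory once and keep the relative calls
setwd("C:/Users/me/Desktop")
source("mungecardata4stan.R")
source("iran_data.R")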

Math.Net Exponential Moving Average

I'm using the simple moving average in Math.NET, but now that I also need to calculate an EMA (exponential moving average) or some kind of weighted moving average, I don't find it in the library.
I looked over all the methods under MathNet.Numerics.Statistics and beyond, but didn't find anything similar.
Is it missing from the library, or do I need to reference some additional package?
I don't see any EMA in MathNet.Numerics; however, it's trivial to program. The routine below is based on the definition at Investopedia.
public double[] EMA(double[] x, int N)
{
// x is the input series
// N is the notional age of the data used
// k is the smoothing constant
double k = 2.0 / (N + 1);
double[] y = new double[x.Length];
y[0] = x[0];
for (int i = 1; i < x.Length; i++) y[i] = k * x[i] + (1 - k) * y[i - 1];
return y;
}
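For example, a 10-period EMA of a short price series (the numbers below are just placeholder data):
double[] prices = { 22.27, 22.19, 22.08, 22.17, 22.18, 22.13, 22.23, 22.43, 22.24, 22.29 };
double[] ema10 = EMA(prices, 10);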
Incidentally, I found this package: https://daveskender.github.io/Stock.Indicators/docs/INDICATORS.html It targets the latest .NET framework and has very detailed documentation.
Try this:
public IEnumerable<double> EMA(IEnumerable<double> items, int notationalAge)
{
double k = 2.0d / (notationalAge + 1), prev = 0.0d;
var e = items.GetEnumerator();
if (!e.MoveNext()) yield break;
yield return prev = e.Current;
while(e.MoveNext())
{
yield return prev = (k * e.Current) + (1 - k) * prev;
}
}
It will still work with arrays, but also with List, Queue, Stack, IReadOnlyCollection, etc.
Although it's not explicitly stated, I also get the sense this is working with money, in which case it really ought to use decimal instead of double.
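If decimal is the better fit, the same recurrence carries over directly; here is a minimal sketch (EMADecimal is just a made-up name, not something provided by Math.NET):
public IEnumerable<decimal> EMADecimal(IEnumerable<decimal> items, int notionalAge)
{
    // same smoothing recurrence, but in decimal to avoid binary floating-point rounding on currency values
    decimal k = 2.0m / (notionalAge + 1), prev = 0.0m;
    var e = items.GetEnumerator();
    if (!e.MoveNext()) yield break;
    yield return prev = e.Current;
    while (e.MoveNext())
        yield return prev = k * e.Current + (1 - k) * prev;
}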

Is it possible to define a random limit for a loop in JAGS?

I am trying to implement a Weibull proportional hazards model with a cure fraction, following the approach outlined by Chen, Ibrahim and Sinha (1999), "A New Bayesian Model for Survival Data with a Surviving Fraction". However, I am not sure whether it is possible to define a random limit for a loop in JAGS.
library(R2OpenBUGS)
library(rjags)
set.seed(1234)
censored <- c(1, 1)
time_mod <- c(NA, NA)
time_cens <- c(5, 7)
tau <- 4
design_matrix <- rbind(c(1, 0, 0, 0), c(1, 0.2, 0.2, 0.04))
jfun <- function() {
  for (i in 1:nobs) {
    censored[i] ~ dinterval(time_mod[i], time_cens[i])
    time_mod[i] <- ifelse(N[i] == 0, tau, min(Z))
    for (k in 1:N[i]) {
      Z[k] ~ dweib(1, 1)
    }
    N[i] ~ dpois(fc[i])
    fc[i] <- exp(inprod(design_matrix[i, ], beta))
  }
  beta[1] ~ dnorm(0, 10)
  beta[2] ~ dnorm(0, 10)
  beta[3] ~ dnorm(0, 10)
  beta[4] ~ dnorm(0, 10)
}
inits <- function() {
  time_init <- rep(NA, length(time_mod))
  # give censored observations a starting time above their censoring time
  time_init[censored == 1] <- time_cens[censored == 1] + 1
  out <- list(beta = rnorm(4, 0, 10),
              time_mod = time_init,
              N = rpois(length(time_mod), 5))
  return(out)
}
data_base <- list('time_mod' = time_mod, 'time_cens' = time_cens,
'censored' = censored, 'design_matrix' = design_matrix,
'tau' = tau,
'nobs' = length(time_cens[!is.na(time_cens)]))
tc1 <- textConnection("jmod", "w")
write.model(jfun, tc1)
close(tc1)
# Calling JAGS
tc2 <- textConnection(jmod)
j <- jags.model(tc2,
data = data_base,
inits = inits(),
n.chains = 1,
n.adapt = 1000)
I get the following error:
Error in jags.model(tc2, data = data_base, inits = inits(), n.chains = 1, :
RUNTIME ERROR:
Compilation error on line 6.
Unknown variable N
Either supply values for this variable with the data
or define it on the left hand side of a relation.
I am not entirely certain, but I am pretty sure that you cannot declare a random number of nodes in BUGS in general, so this is not a JAGS-specific quirk.
Nevertheless, you can work around it.
Since BUGS is a declarative language rather than a procedural one, it is enough to declare an arbitrary but deterministic number of nodes (say, "large enough") and then associate only a random number of them with a distribution and with observed data, leaving the remaining nodes deterministic.
Once you have fixed an upper bound for N[i] (say N.max), you can pass it to JAGS with the data and then change this code of yours:
for (k in 1:N[i]){
Z[k] ~ dweib(1, 1)
}
into this:
for (k in 1:N.max){
if (k <= N[i]){
Z[k] ~ dweib(1, 1)
} else {
Z[k] <- 0
}
}
I hope this will do the trick in your case, so please give feedback about it later.
Needless to say, if you have some non-zero observed data associated with a deterministic Z[k], then all hell breaks loose inside JAGS...
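For what it's worth, here is a minimal sketch of how that masking could look in actual JAGS syntax, assuming N.max is passed in the data list: the model block has no procedural if, so step() and ifelse() do the masking, and Z gets a second index so that each observation has its own latent draws.
for (i in 1:nobs) {
  for (k in 1:N.max) {
    Z[i, k] ~ dweib(1, 1)
    # draws beyond N[i] are replaced by tau so they can never be the minimum
    Zuse[i, k] <- ifelse(step(N[i] - k), Z[i, k], tau)
  }
  time_mod[i] <- ifelse(equals(N[i], 0), tau, min(Zuse[i, ]))
}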

How to calculate a geometric cross field inside an arbitrary polygon?

I'm having trouble finding a way to calculate a "cross field" inside an arbitrary polygon.
A cross field, as defined by one paper, is the smoothest field that is tangential to the domain boundary (in this case the polygon). I come across it a lot in quad re-topology papers, but surprisingly I can't even find a definition of a cross field on Wikipedia.
I have images, but since I'm new here the system says I need at least 10 reputation points to upload them.
Any ideas?
I think it could be something along the lines of an interpolation: given an inner point, determine the distance to each edge, then integrate or weight-sum the tangent and perpendicular vectors of every edge by that distance (or any other factor, in fact).
But maybe simpler approaches exist?
Thanks in advance!
// I've come up with something like this (for the 3D case); very raw, for educational purposes
float distance2segment(Vector3D p, Vector3D p0, Vector3D p1){
    Vector3D v = p1 - p0;
    Vector3D w = p - p0;
    float c1 = v.Dot(w);
    if (c1 <= 0) // p projects before p0, so p0 is the closest point
        return (p - p0).Length();
    float c2 = v.Dot(v);
    if (c2 <= c1) // p projects past p1, so p1 is the closest point
        return (p - p1).Length();
    float b = c1 / c2;
    Vector3D pb = p0 + b * v;
    return (p - pb).Length();
}
void CrossFieldInterpolation(List<Vector3D>& Contour, List<Vector3D>& ContourN, Vector3D p, Vector3D& crossU, Vector3D& crossV){
    int N = Contour.Amount();
    // crossU and crossV are assumed to come in zero-initialized; they accumulate
    // the distance-weighted edge tangents and perpendiculars
    for (int i = 0; i < N; i++){
        Vector3D u = Contour[(i + 1) % N] - Contour[i];           // edge tangent
        Vector3D n = 0.5 * (ContourN[(i + 1) % N] + ContourN[i]); // averaged edge normal
        Vector3D v = -Vector3D::Cross(u, n);                      // perpendicular vector
        u = Vector3D::Normalize(u);
        n = Vector3D::Normalize(n);
        v = Vector3D::Normalize(v);
        float dist = distance2segment(p, Contour[i], Contour[(i + 1) % N]);
        crossU += u / (1 + dist); // +1 avoids a singularity for points lying on the segment
        crossV += v / (1 + dist);
    }
    crossU = Vector3D::Normalize(crossU);
    crossV = Vector3D::Normalize(crossV);
}
You can check the open-source Graphite software that I'm developing; it implements the "Periodic Global Parameterization" algorithm [1] that was developed in my research team. You may also be interested in the following research articles with algorithms that we developed more recently [2], [3].
Graphite website:
http://alice.loria.fr/software/graphite
How to use Periodic Global Parameterization:
http://alice.loria.fr/WIKI/index.php/Graphite/PGP
[1] http://alice.loria.fr/index.php/publications.html?Paper=TOG_pgp%402006
[2] http://alice.loria.fr/index.php/publications.html?Paper=DGF#2008
[3] http://alice.loria.fr/index.php/publications.html?redirect=0&Paper=DFD#2008&Author=vallet

Initial Conditions in OpenModelica

Will somebody please explain why the initial conditions are properly taken care of in the following OpenModelica model, compiled and simulated in OMEdit v1.9.1 beta2 on Windows, but if line 5 is commented out and line 6 uncommented, (x, y) is initialized to (0.5, 0)?
Thank you.
class Pendulum "Planar Pendulum"
  constant Real PI = 3.141592653589793;
  parameter Real m = 1, g = 9.81, L = 0.5;
  Real F "Force of the Rod";
  output Real x(start = L * sin(PI / 4)), y(start = -0.35355);
  //output Real x(start = L * sin(PI / 4)), y(start = -L * sin(PI / 4));
  output Real vx, vy;
equation
  m * der(vx) = -x / L * F;
  m * der(vy) = (-y / L * F) - m * g;
  der(x) = vx;
  der(y) = vy;
  x ^ 2 + y ^ 2 = L ^ 2;
end Pendulum;
The short answer is that start values are treated merely as hints; you have to add the fixed=true attribute to force them, as in:
output Real x(start=L*cos(PI/4),fixed=true);
If initialized variables are constrained, the fixed attribute should not be used on all initialized variables but on a 'proper' subset, in this case on just one.
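Applied to the pendulum above, that means something like the following sketch, which fixes x and leaves y free (fixing only one of the two constrained states):
output Real x(start = L * sin(PI / 4), fixed = true); // start value enforced exactly at initialization
output Real y(start = -L * sin(PI / 4));              // start stays a hint; the constraint x^2 + y^2 = L^2 determines y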
The long answer can be found here
