unused variable(s) warning in runjags model - jags

I am running JAGS models through the R package runjags. I just updated to JAGS 4.0.0 from JAGS 3.4, and have noticed some unexpected behavior that seems to be related to the update.
First, when I run a model, I now get a warning message WARNING: Unused variable(s) in data table: followed by a list of data objects that are referenced in the model and provided as data. It doesn't seem to affect the results, but it is very puzzling. I have, however, noticed a few times while playing around with this that for some variables the posteriors were virtually identical to the priors (indicating that no updating occurred). I can't recreate that update failure right now, but below is a reproducible code example illustrating the odd warning message. The code example on the run.jags help page also produces the same warning.
Second, I thought I'd check whether the same message pops up if I use the R package R2jags instead of runjags, but R2jags won't load because apparently rjags (one of the dependencies) is not compatible with JAGS 4.0 (it's looking for JAGS 3.X). Also, in the runjags function run.jags, the argument method="rjags" doesn't seem to work anymore, but method="parallel" does work.
I'm using runjags_2.0.1-4 and R 3.2.2.
So my questions are:
1) Is rjags really incompatible with JAGS 4.0? The motivation to go to 4.0 was to use vectors as indices (see https://martynplummer.wordpress.com/2015/08/16/whats-new-in-jags-4-0-0-part-34-r-style-features/).
2) What is up with the unused variable(s) warning, and should I be concerned about it?
Thanks,
Glenn
Code:
#--- GENERATE DATA ------------------------
rm(list=ls())
# Number of sites and observations per site
N <- 200
nobs <- 3
# generate covariates and standardize (where appropriate)
set.seed(123)
forest <- rnorm(N)
# relationship between occupancy and covariates
b0 <- 0.5
b.for <- 0.5
psi <- plogis(b0 + b.for*forest)
# draw occupancy for each site
z <- rbinom(n=N, size=1,prob=psi)
# specify detection probability
p <- 0.5
pz <- p*z
# generate the observations
Y <- rbinom(n=N, size=nobs,prob=pz)
#---- BUGS model ------------------------
model1 <- "model {
for (i in 1:N){
logit(eta[i]) <- b0 + b.for*forest[i]
z[i] ~ dbern(eta[i])
pz[i] <- z[i]*p
y[i] ~ dbin(pz[i],nobs)
} #i
b0.0 ~ dunif(0,1)
b0 <- log(b0.0/(1-b0.0))
b.for ~ dnorm(0,0.01)
p ~ dunif(0,1)
}"
occ.data1 <-list(y=Y,N=N,nobs=nobs,forest=forest)
inits1 <- function(){list(b0.0=runif(1),b.for=rnorm(1),p=runif(1),z=as.numeric(Y>0))}
parameters1 <- c("b0","b.for","p")
#---- RUN MODEL ------------------------
library(runjags)
ni <- 2000
nt <- 1
nb <- 1000
nc <- 3
ad <- 100
out <- run.jags(model=model1,data=occ.data1,monitor=parameters1,n.chains=nc,inits=inits1,burnin=nb,
sample=ni,adapt=ad,thin=nt,modules=c("glm","dic"),method="parallel")

To answer your questions:
1) rjags and JAGS use linked (non-interchangeable) versions, and the CRAN systems are still running JAGS 3.4.0, so the version of rjags on CRAN matches that. This will be updated soon; in the meantime you can grab the correct version of rjags from the SourceForge page, as @jbaums notes.
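If it helps, here is a quick way to check which JAGS installation your R session actually sees (a small sketch of my own; testjags() is from the runjags package, and rjags prints the JAGS version it was built against when it loads):
library(runjags)
testjags()                 # reports the JAGS executable and version found on this system
packageVersion("runjags")  # the runjags version in use
# library(rjags)           # prints "Linked to JAGS x.x.x" on load; a mismatch with the
#                          # installed JAGS is what causes the load failure described above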
2) This is a helpful message from JAGS/rjags telling you that you have specified something as data that the model isn't using. Remember that variable names are case sensitive, e.g.:
library('runjags')
model <- "model {
m ~ dunif(-1000,1000)
#data# M
#inits# m
#monitor# m
}"
M <- 0
m <- list(-10, 10)
results <- run.jags(model, method="interruptible", n.chains=2)
results <- run.jags(model, method="rjags", n.chains=2)
... gives you a warning because M does not match m. Also note that the warning looks a bit different between the two function calls: in the first it appears halfway down the JAGS output, and in the second it comes as an R warning after the function has completed.
As for 'should I be concerned' - yes, if you think these variables should be in your model. If you can't find the problem, try posting the code you are using - it got cut off from your original post.
Matt

Related

Anova for multiple point patterns not working for Strauss model

I just started getting into spatial analysis and am fitting some models to my data. My main goal is to test for spatial regularity (whether there is inhibition between points).
I created my hyperframe for the data below. There are 6 point patterns (Areas), 4 in subhabitat 1, and 2 in subhabitat 2.
ALL_ppp <- list(a1ppp, a2ppp, a3ppp, a4ppp, a5ppp, a6ppp)
H <- hyperframe(Area = c("A1","A2","A3","A4","A5","A6"), Subhabitat = c("sbh1","sbh1","sbh1","sbh1","sbh2","sbh2"), Points = ALL_ppp )
I then created some models. This model fits a Strauss process with a different interaction radius for each area, with intensity depending on subhabitat type. It is very similar to the example in the book on page 700.
radii <- c(mean(area1$diameter), mean(area2$diameter),mean(area3$diameter),mean(area4$diameter),mean(area5$diameter),mean(area6$diameter))
Rad <- hyperframe(R=radii)
Str <- with(Rad, Strauss(R))
Int <- hyperframe(str=Str)
fittest8 <- mppm(Points ~ Subhabitat, H, interaction=Int, iformula = ~str:Area)
I would like to conduct a formal test for significance for the Strauss interaction parameters using anova.mppm to test for regularity. However, I am not sure if I am doing this properly, as I cannot seem to get this to work. I have tried:
fittest8 <- mppm(Points ~ Subhabitat, H, interaction=Int, iformula = ~str:Area)
fitex <- mppm(Points ~ Subhabitat, H)
anova.mppm(fittest8, fitex, test = "Chi")
I get the error "Error: Coefficient ‘str’ is missing from new.coef" and cannot find a way to resolve this. Any advice would be greatly appreciated.
Thanks!
First, please learn how to make a minimal reproducible example. This will make it easier for people to help you solve the problem, without having to guess what was in your data.
In your example, the columns named Area and Subhabitat in the hyperframe H are character vectors, but in your code, the call to mppm would require that they are factors. I assume you converted them to factors in order to be able to fit the model fittest8. (Another reason to make a working example)
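For reference, that conversion would presumably look something like this (my own sketch of the assumed step, not code from the question; hyperframe columns can be replaced with $<-):
# assumed step: make the grouping columns factors before fitting
H$Area <- factor(H$Area)
H$Subhabitat <- factor(H$Subhabitat)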
You said that your example was similar to one on page 700 of the spatstat book, which does work. In that case, a good strategy is to modify your example to make it as similar as possible to the example that works, because this will narrow down the possible cause.
A working example of the problem, similar to the one in the book, is:
Str <- hyperframe(str=with(simba, Strauss(mean(nndist(Points)))))
fit1 <- mppm(Points ~ group, simba, interaction=Str, iformula=~str:group)
fit0 <- mppm(Points ~ group, simba)
anova(fit0, fit1, test="Chi")
which yields the same error Error: Coefficient ‘str’ is missing from new.coef
The simplest way to avoid this is to replace the interaction formula ~str:group with ~str + str:group:
fit1x <- mppm(Points ~ group, simba, interaction=Str,
iformula = ~str + str:group)
anova(fit0, fit1x, test="Chi")
or in your example
fittest8X <- mppm(Points ~ Subhabitat, H, interaction=Int,
iformula=~str + str:Area)
anova(fittest8X, fitex, test="Chi")
Note that fittest8X and fittest8 are equivalent models but are expressed in a slightly different way.
The interaction formula and the trend formula are connected in a complicated way and the software is not always successful in disentangling them. If you get this kind of problem again, try different versions of the interaction formula.

Using Z3 with parallelization from SBV

I'd like to use Z3 via SBV using multiple cores. Based on this answer I should be able to do that just by passing parallel.enable=true to the z3 executable on the command line. Since I am using SBV, I need to go through SBV's interface to various SMTLib solvers, so here's what I tried:
foo = runSMTWith z3par $ do
        ...
  where
    z3par = z3 { SBV.solver = (SBV.solver z3)
                   { SBV.options = \cfg ->
                       SBV.options (SBV.solver z3) cfg ++ ["parallel.enable=true"]
                   }
               }
However, I am not seeing any signs of Z3 running with parallelization enabled:
CPU usage doesn't go above one core
No speedup compared to running without this flag
How do I enable Z3 parallelization, when going via SBV?
What you're doing is essentially how it is done from SBV. You might want to increase verbosity of z3 and output the diagnostics to a file to inspect later. Something like:
import Data.SBV
import Data.SBV.Control
foo :: IO (Word64, Word64)
foo = runSMTWith z3{solver = par} $ do
        x <- sWord64 "x"
        y <- sWord64 "y"
        setOption $ DiagnosticOutputChannel "diagnostic_output"
        constrain $ x * y .== 13
        constrain $ x .> 1
        constrain $ y .> 1
        query $ do ensureSat
                   (,) <$> getValue x <*> getValue y
  where par    = (solver z3) {options = \cfg -> options (solver z3) cfg ++ extras}
        extras = [ "parallel.enable=true"
                 , "-v:3"
                 ]
Here, we're not only setting z3's parallel mode, but we're also telling it to increase verbosity and put all the diagnostics in a file. (Side note: there are many other settings in the parallel section of the z3 config; you can see what they are by issuing z3 -pd on your command line and looking at the output. You can set any of those parameters by adding them to the extras variable above.)
When I run the above, I get:
*Main> foo
(6379316565415788539,3774100875216427415)
But I also get a file named diagnostic_output created in the current directory, which contains the following lines, amongst others:
(tactic.parallel :progress 0% :open 1)
(tactic.parallel :split-cube 0)
(parallel.tactic simplify-1)
(tactic.parallel :progress 100.00% :status sat :open 0)
So z3 is indeed in parallel mode and things are happening. Of course, what exactly it does is more or less a black box, and it's impossible to interpret the above output without inspecting z3 internals. (I don't think the meaning of these stats or the strategies for the parallel solver are that well documented. If you find good documentation on the details, please do report!)
Update
As of this commit, you can now simply say:
runSMTWith z3{extraArgs = ["parallel.enable=true"]} $ do ...
simplifying the programming a bit further.
Solver agnostic concurrency directly from SBV
Note that SBV also has combinators for running things concurrently directly from Haskell. See the functions:
satConcurrentWithAny
satConcurrentWithAll
proveConcurrentWithAny
proveConcurrentWithAll
These functions are solver agnostic; you can use them with any solver of your choosing. Of course, they require you to restructure your problem and do a manual decomposition to take advantage of the multiple cores in your computer, and to stitch the solutions together yourself. But they also give you full control over how you want to structure your expensive search.

how to make an optimal combinatorial selection in R

The problem I'm trying to solve is basically the same as the one in this post:
https://stats.stackexchange.com/questions/339935/python-library-for-combinatorial-optimization
And my current implementation uses indeed a genetic algorithm based optimizer.
However, I would like to solve it as a binary linear programming problem (at least try, even though it's 'NP-hard', apparently).
My question is how to formulate the LP in the best way, because I am not sure I am doing it right.
The following is a simplified version of what I'm dealing with, which however shows exactly where the problem lies.
We make m*n (in this case 6) objects by a combinatorial process taking m (3) objects of type 'R1' (say {A,B,C}) and n (2) objects of type 'R2' (say {X,Y}).
The 6 objects {AX,AY,BX,BY,CX,CY} are evaluated and each gets a score D, in this case {0.8,0.7,0.5,0.9,0.4,0.0}, in this order.
CL <- cbind(expand.grid(R2=LETTERS[24:25],R1=LETTERS[1:3],stringsAsFactors = FALSE),D=c(0.8,0.7,0.5,0.9,0.4,0.0))
Now we want to select 2 distinct R1's and 1 R2 such that the sum of D is maximal.
In this example, the answer is R1 = {A,B}, R2 = {Y}.
However, one would not get to such a conclusion by taking, for instance, the 2 R1's and the R2 with the highest average D (a quick brute-force confirmation of the optimum is sketched below).
It would work for R1, but not for R2:
aggregate(D~R1,CL,mean)
# R1 D
#1 A 0.75
#2 B 0.70
#3 C 0.20
aggregate(D~R2,CL,mean)
# R2 D
#1 X 0.5666667
#2 Y 0.5333333
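For this toy example you can confirm the stated optimum by brute force (my own sketch, not part of the original question), enumerating every pair of distinct R1's crossed with every R2 and summing D:
# brute-force check of the small example above
pairs <- combn(unique(CL$R1), 2, simplify = FALSE)
grid <- expand.grid(pair = seq_along(pairs), R2 = unique(CL$R2), stringsAsFactors = FALSE)
grid$score <- mapply(function(p, r2) sum(CL$D[CL$R1 %in% pairs[[p]] & CL$R2 == r2]),
                     grid$pair, grid$R2)
best <- grid[which.max(grid$score), ]
pairs[[best$pair]]  # "A" "B"
best$R2             # "Y"
best$score          # 0.7 + 0.9 = 1.6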
I know how to formulate this as a linear programming problem, but I am not sure my formulation is efficient, because basically it results in a problem with m*n + m + n variables and 2(m+n) + 2 constraints.
The main difficulty is that I need somehow to count the number of distinct R1's and R2's chosen, and I don't know any way of doing that apart from what I will show below (and is also described in my other post here).
This is what I would do:
CL["Entry"] <- seq_len(dim(CL)[[1]])
R1.mat <- table(CL$R1,CL$Entry)
R2.mat <- table(CL$R2,CL$Entry)
N_R1 <- dim(R1.mat)[[1]]
N_R2 <- dim(R2.mat)[[1]]
N_Entry <- dim(CL)[[1]]
constr.mat <- NULL
dir <- NULL
rhs <- NULL
constr.mat <- rbind(constr.mat,cbind(R1.mat,-diag(table(CL$R1)),matrix(0,N_R1,N_R2)))
dir <- c(dir,rep("<=",N_R1))
rhs <- c(rhs,rep(0,N_R1))
constr.mat <- rbind(constr.mat,cbind(R2.mat,matrix(0,N_R2,N_R1),-diag(table(CL$R2))))
dir <- c(dir,rep("<=",N_R2))
rhs <- c(rhs,rep(0,N_R2))
constr.mat <- rbind(constr.mat,constr.mat)
dir <- c(dir,rep(">=",N_R1+N_R2))
rhs <- c(rhs,1-table(CL$R1),1-table(CL$R2))
constr.mat <- rbind(constr.mat,c(rep(0,N_Entry),rep(1,N_R1),rep(0,N_R2)))
dir <- c(dir,"==")
rhs <- c(rhs,2)
constr.mat <- rbind(constr.mat,c(rep(0,N_Entry),rep(0,N_R1),rep(1,N_R2)))
dir <- c(dir,"==")
rhs <- c(rhs,1)
obj <- c(aggregate(D~Entry,CL,c)[["D"]],rep(0,N_R1+N_R2))
Which can be solved for instance by lpSolve:
sol <- lp("max", obj, constr.mat, dir, rhs, all.bin = TRUE,num.bin.solns = 1, use.rw=FALSE, transpose.constr=TRUE)
sol$solution
#[1] 0 1 0 1 0 0 1 1 0 0 1
showing that products {AY,BY} were selected, corresponding to R1 = {A,B} and R2 = {Y}:
CL[as.logical(sol$solution[1:N_Entry]),]
# R2 R1 D Entry
#2 Y A 0.7 2
#4 Y B 0.9 4
I found that on large problems lpSolve gets stuck for ages; Rsymphony seemed to perform better.
But again, my main question is: is this way of formulating the LP efficient? Should I do it differently?
Thanks!
EDIT
In the meantime, working on a somewhat related problem, I found that only one set of constraints may be sufficient, if one adds 'costs' (in this example, negative) to the objective vector for the 'distinct R1 and R2' variables.
So here, instead of:
obj <- c(aggregate(D~Entry,CL,c)[["D"]],rep(0,N_R1+N_R2))
I would do:
obj <- c(aggregate(D~Entry,CL,c)[["D"]],rep(-1,N_R1+N_R2))
This would make m+n constraints unnecessary.
It still remains a huge problem to solve, even for relatively small m, n, so if anyone can advise how to do it better...
I had a look at lp.transport, but that would be limited to 2 dimensions (i.e. only R1 and R2, not R1, R2, R3 for instance), and I don't think you can constrain the number of distinct objects per category in that kind of solver.

Random effects modeling using mgcv and using lmer. Basically identical fits but VERY different likelihoods and DF. Which to use for testing?

I am aware that there is a duality between random effects and smooth curve estimation. At this link, Simon Wood describes how to specify random effects using mgcv. Of particular note is the following passage:
For example if g is a factor then s(g,bs="re") produces a random coefficient for each level of g, with the random coefficients all modelled as i.i.d. normal.
After a quick simulation, I can see this is correct, and that the model fits are almost identical. However, the likelihoods and degrees of freedom are VERY different. Can anyone explain the difference? Which one should be used for testing?
library(mgcv)
library(lme4)
set.seed(1)
x <- rnorm(1000)
ID <- rep(1:200,each=5)
y <- x
for(i in 1:200) y[which(ID==i)] <- y[which(ID==i)] + rnorm(1)
y <- y + rnorm(1000)
ID <- as.factor(ID)
# gam (mgcv)
m <- gam(y ~ x + s(ID,bs="re"))
gam.vcomp(m)
coef(m)[1:2]
logLik(m)
# lmer
m2 <- lmer(y ~ x + (1|ID))
sqrt(VarCorr(m2)$ID[1])
summary(m2)$coef[,1]
logLik(m2)
mean( abs( fitted(m)-fitted(m2) ) )
Full disclosure: I encountered this problem because I want to fit a GAM that also includes random effects (repeated measures), but need to know if I can trust likelihood-based tests under those models.
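Not from the thread, but one thing worth ruling out before comparing the likelihoods: the two fits above are not estimated under the same criterion, since lmer defaults to REML while gam here uses GCV. A minimal sketch, assuming you simply want both models fitted by maximum likelihood so the logLik values are on a comparable footing:
# sketch: refit both models by ML for a like-for-like logLik comparison
m.ml  <- gam(y ~ x + s(ID, bs = "re"), method = "ML")
m2.ml <- lmer(y ~ x + (1 | ID), REML = FALSE)
logLik(m.ml)
logLik(m2.ml)
# the reported df will still differ: logLik.gam carries the effective degrees of
# freedom, while lmer counts the fixed effects plus the variance parameters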

JAGS: multivariate normal with both, unobservable and observable variables?

I have a "simple" problem with JAGS that drives me crazy. In essence, consider the following example that works:
x2[i] ~ dnorm(mu[i,1], tau1);
u[i] ~ dnorm(mu[i,2], tau2);
Here, x2 is an observable variable (that is, data), while u is a latent variable. In the example, both are drawn independently from two distinct normal distributions.
However, I want them to be (possibly) dependent, that is, to be drawn from one multivariate normal distribution. So I would like to do:
c(x2[i], u[i]) ~ dmnorm(mu[i,1:2], Omega[1:2,1:2]);
Unfortunately, this doesn't work because the syntax is not correct. I have tried many different syntaxes, but none of them works. E.g.,
y[i,1] <- x2[i];
y[i,2] <- u[i];
y[i,1:2] ~ dmnorm(mu[i,1:2], Omega[1:2,1:2]);
leads to the error Node y[1,1:2] overlaps previously defined nodes, which is understandable.
So what can I do? Please, help me, I'm getting mad...
UPDATE: I figured out that I can at least do the following:
(in R:)
p <- 1/(1+exp(-x2));
t <- rep(10000, length(x2));
s <- rbinom(length(x2), t, p);
(in JAGS:)
nul[i,1] <- 0;
nul[i,2] <- 0;
e[i,1:2] ~ dmnorm(nul[i,1:2], Omega[1:2,1:2]);
u[i] <- mu[i,2] + e[i,2];
x2g[i] <- mu[i,1] + e[i,1];
pg[i] <- 1/(1+exp(-x2g[i]));
s[i] ~ dbin(pg[i], t[i]);
This works (a bit), but of course loses efficiency, since an observable variable (x2) is treated as if it were only indirectly observable (through s).
You are defining y twice. Once in:
y[i,1] <- x2[i];
y[i,2] <- u[i];
And once in
y[i,1:2] ~ dmnorm(mu[i,1:2], Omega[1:2,1:2]);
You might be able to get away with:
x2[i] <- y[i,1];
Or you might be able to simply write out the regression (after all it is bivariate, so not that difficult).
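For what it's worth, here is a rough sketch of that 'write out the regression' route (my own illustration, not from this answer, written as a runjags-style model string; sig1, sig2, rho and their uniform priors are placeholder assumptions): factor the bivariate normal into a marginal for the observed x2 and a conditional for the latent u, so no combined y node is needed.
model_cond <- "model {
for (i in 1:N) {
# marginal for the observed variable (sketch only; sig1, sig2, rho are assumed names)
x2[i] ~ dnorm(mu[i,1], 1/(sig1*sig1))
# conditional for the latent variable given x2[i]
u[i] ~ dnorm(mu[i,2] + rho*(sig2/sig1)*(x2[i] - mu[i,1]), 1/((1 - rho*rho)*sig2*sig2))
}
sig1 ~ dunif(0, 100)
sig2 ~ dunif(0, 100)
rho ~ dunif(-1, 1)
# ... definition of mu and its priors
}"
Algebraically this is the same joint bivariate normal, just factored as marginal times conditional, so the overlapping-node problem never arises.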
You also might get a faster response on the JAGS mailing list (which Martyn Plummer regularly monitors).
You can use the data block as follows:
data{
for(i in 1:length(x2)) {
y[i,1] <- x2[i]
y[i,2] <- u[i]
}
}
model{
for(i in 1:length(x2)) {
y[i,1:2] ~ dmnorm(mu[i,], Tau)
}
# ... definition of mu, Tau, and their prior distribution
}
However, be sure there are no missing values in x2 or u, since a multivariate node cannot be partly observed.
Regards! :)
