p-values for estimates in flexsurvreg - survival-analysis

I fitted a survival model using an inverse Weibull distribution in flexsurvreg:
library(flexsurv)   # flexsurvreg() is from the 'flexsurv' package
if (require("actuar")) {
  invweibull <- list(name = "invweibull",
                     pars = c("shape", "scale"),
                     location = "scale",
                     transforms = c(log, log),
                     inv.transforms = c(exp, exp),
                     inits = function(t){ c(1, median(t)) })
  invweibull <- flexsurvreg(formula = kpnsurv ~ iaas, data = kpnrs2,
                            dist = invweibull)
  invweibull
}
And I got the following output:
Call:
flexsurvreg(formula = kpnsurv ~ iaas, data = kpnrs2, dist = invweibull)
Estimates:
       data mean       est      L95%      U95%        se  exp(est)    L95%    U95%
shape         NA    0.4870    0.4002    0.5927    0.0488        NA      NA      NA
scale         NA   62.6297   36.6327  107.0758   17.1371        NA      NA      NA
iaas      0.4470   -0.6764   -1.2138   -0.1391    0.2742    0.5084  0.2971  0.8701
N = 302, Events: 54, Censored: 248
Total time at risk: 4279
Log-likelihood = -286.7507, df = 3
AIC = 579.5015
How can I get the p-value of the covariate estimate (in this case iaas)? Thank you for your help.

Just in case this is still useful to anyone, this worked for me. First extract the matrix of coefficient information from the model:
invweibull.res <- invweibull$res
Then divide the estimated coefficients by their standard errors to calculate the Wald statistics, which have asymptotic standard normal distributions:
invweibull.wald <- invweibull.res[,1]/invweibull.res[,4]
Finally, get the p-values:
invweibull.p <- 2*pnorm(-abs(invweibull.wald))
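If it helps, the pieces can be assembled into one small summary table (this assumes the default column names "est" and "se" of the $res matrix):
invweibull.tab <- round(cbind(est = invweibull.res[, "est"],
                              se  = invweibull.res[, "se"],
                              z   = invweibull.wald,
                              p   = invweibull.p), 4)
invweibull.tab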

Related

How can I plot multiple ROC curves together?

I want to find some good predictors (genes). This is my data (log-transformed RNA-seq):
           TRG    CDK6   EGFR  KIF2C  CDC20
Sample 1   TRG12  11.39  10.62   9.75  10.34
Sample 2   TRG12  10.16   8.63   8.68   9.08
Sample 3   TRG12   9.29  10.24   9.89  10.11
Sample 4   TRG45  11.53   9.22   9.35   9.13
Sample 5   TRG45   8.35  10.62  10.25  10.01
Sample 6   TRG45  11.71  10.43   8.87   9.44
I have calculated confusion matrices for different models, built as follows:
1. I tested each of the 23 genes individually with the code below; each gene with a p-value < 0.05 was kept as a good predictor. For example, for CDK6 I fit:
glm=glm(TRG ~ CDK6, data = df, family = binomial(link = 'logit'))
Finally, I obtained five genes and put them together in this model:
final <- glm(TRG ~ CDK6 + CXCL8 + IL6 + ISG15 + PTGS2 , data = df, family = binomial(link = 'logit'))
I want a plot like this showing the ROC curve of each model, but I don't know how to do that.
Any help, please?
I will give you an answer using the pROC package. Disclaimer: I am the author and maintainer of the package. There are alternative ways to do it.
The plot you are seeing was probably generated by the ggroc function of pROC. In order to generate such a plot from glm models, you need to 1) use the predict function to generate the predictions, 2) generate the ROC curves and store them in a list, preferably named so that you get a legend automatically, and 3) call ggroc.
glm.cdk6 <- glm(TRG ~ CDK6, data = df, family = binomial(link = 'logit'))
final <- glm(TRG ~ CDK6 + CXCL8 + IL6 + ISG15 + PTGS2 , data = df, family = binomial(link = 'logit'))
rocs <- list()
library(pROC)
rocs[["CDK6"]] <- roc(df$TRG, predict(glm.cdk6))
rocs[["final"]] <- roc(df$TRG, predict(final))
ggroc(rocs)
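If you also want the AUC of each model in the legend, one option (a small sketch using pROC's auc() function; the label format is just an example) is to rename the list elements before calling ggroc:
names(rocs) <- paste0(names(rocs), " (AUC = ",
                      sapply(rocs, function(r) round(auc(r), 3)), ")")
ggroc(rocs)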

How to convert intensities to Probabilities in a point pattern using Spatstat in R?

I have two point pattern (ppp) objects, p1 and p2. There are X and Y points in p1 and p2, respectively. I have fitted a ppm model (with the location coordinates as independent variables) to p1 and then used it to predict the "intensity" at each of the Y points in p2.
Now I want to get the probability of event occurrence at each of those points/zones in p2. How can I use the predicted intensities for this purpose?
Can I do this using spatstat?
Are there any other alternatives?
The intensity is the expected number of points per unit area. In small areas (such as pixels) you can just multiply the intensity by the pixel area to get the probability of presence of a point in the pixel.
fit <- ppm(p1, .......)
inten <- predict(fit)
pixarea <- with(inten, xstep * ystep)
prob <- inten * pixarea
This rule is accurate provided the prob values are smaller than about 0.4.
In a larger region W, the expected number of points is the integral of the intensity function over that region:
EW <- integral(inten, domain=W)   # integral() / integral.im() from spatstat
The result EW is a numeric value, the expected total number of points in W. To get the probability of at least one point,
P <- 1- exp(-EW)
You can also compute prediction intervals for the number of points, using predict.ppm with argument interval="prediction".
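For example, a minimal call with the fitted model above (interval and level are arguments of predict.ppm):
pint <- predict(fit, interval = "prediction", level = 0.95)  # prediction limits for the number of points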
Your question, objective and current method are not very clear to me. It would be beneficial if you could provide code and graphics that explain more clearly what you have done and what you are trying to obtain. If you cannot share your data, you can use e.g. the built-in dataset chorley as an example (or simply simulate artificial data):
library(spatstat)
plot(chorley, cols = c(rgb(0,0,0,1), rgb(.8,0,0,.2)))
X <- split(chorley)
X1 <- X$lung
X2 <- X$larynx
mod <- ppm(X1 ~ polynom(x, y, 2))
inten <- predict(mod)
summary(inten)
#> real-valued pixel image
#> 128 x 128 pixel array (ny, nx)
#> enclosing rectangle: [343.45, 366.45] x [410.41, 431.79] km
#> dimensions of each pixel: 0.18 x 0.1670312 km
#> Image is defined on a subset of the rectangular grid
#> Subset area = 315.291058349571 square km
#> Subset area fraction = 0.641
#> Pixel values (inside window):
#> range = [0.002812544, 11.11172]
#> integral = 978.5737
#> mean = 3.103715
plot(inten)
Predicted intensities at the 58 locations in X2
intenX2 <- predict.ppm(mod, locations = X2)
summary(intenX2)
#>    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
#>  0.1372  4.0025  6.0544  6.1012  8.6977 11.0375
These predicted intensities intenX2[i] say that in a small neighbourhood around each point X2[i], the estimated number of points from X1 is Poisson distributed with mean intenX2[i] times the area of the small neighbourhood. So in fact you have estimated a model in which, for any small area, you have a probability distribution for the number of points occurring in that area. If you want the distribution in a bigger region, you just have to integrate the intensity over that region.
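For example, a rough sketch with the objects above (the 1 km disc around the first point of X2 is purely illustrative):
# expected number of X1 points in a small disc around X2[1], and the
# probability of at least one point there
W1 <- disc(radius = 1, centre = c(X2$x[1], X2$y[1]))
EW1 <- integral(inten, domain = W1)
1 - exp(-EW1)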
To get a better answer you have to provide more details about your problem.
Created on 2018-12-12 by the reprex package (v0.2.1)

How can I interpret the P-value of nonlinear term under rms: anova?

Here I demonstrate a survival model with an rcs() term. Is anova() from the rms package the way to test for linearity of the association? And how can I interpret the P-value of the Nonlinear term (0.094 here): does it support adding an rcs() term to the Cox model?
library(rms)
data(pbc)
d <- pbc
rm(pbc, pbcseq)
d$status <- ifelse(d$status != 0, 1, 0)
dd = datadist(d)
options(datadist='dd')
# rcs model
m2 <- cph(Surv(time, status) ~ rcs(albumin, 4), data=d)
anova(m2)
                Wald Statistics          Response: Surv(time, status)

 Factor      Chi-Square  d.f.  P
 albumin     82.80       3     <.0001
  Nonlinear   4.73       2     0.094
 TOTAL       82.80       3     <.0001
The proper way to test is by comparing the log-likelihood (a.k.a. deviance) of two models: a full and a reduced one:
m2 <- cph(Surv(time, status) ~ rcs(albumin, 4), data=d)
anova(m2)
m <- cph(Surv(time, status) ~ albumin, data=d)
p.val <- 1 - pchisq(2 * (m2$loglik[2] - m$loglik[2]), 2)  # LR statistic is twice the log-likelihood difference
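Equivalently, assuming the two nested cph fits above, the lrtest() function in rms carries out the same likelihood-ratio comparison directly:
lrtest(m, m2)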
You can see the difference in the inference using the less accurate Wald statistic (which in your case was not significant anyway since the p-value was > 0.05) versus this more accurate method in the example that Harrell used in his ?cph help page. Using his example:
> anova(f)
                Wald Statistics          Response: S

 Factor      Chi-Square  d.f.  P
 age          57.75      3     <.0001
  Nonlinear    8.17      2     0.0168
 sex          18.75      1     <.0001
 TOTAL        75.63      4     <.0001
You would incorrectly conclude that the nonlinear term was "significant" at the conventional 0.05 level, despite the fact that the code creating the model was constructed to be entirely linear in age (on the log-hazard scale):
h <- .02*exp(.04*(age-50)+.8*(sex=='Female'))
Create a reduced model and compare:
f0 <- cph(S ~ age + sex, x=TRUE, y=TRUE)
anova(f0)
#-------------
                Wald Statistics          Response: S

 Factor   Chi-Square  d.f.  P
 age       56.64      1     <.0001
 sex       16.26      1     1e-04
 TOTAL     75.85      2     <.0001
The difference in deviance is not significant with 2 degrees of freedom difference:
1-pchisq((f$loglik[2]- f0$loglik[2]),2)
[1] 0.1243212
I don't know why Harrell leaves this example in, because I've taken his RMS course and know that he endorses the cross-model comparison of deviance as the more accurate approach.

Using APCluster on Pre-Classified Data - Coloring Dendrograms and Nicely Formatted Output

Question 1:
I am trying to work with the plot() function on an AggExResult object and the clusters in the documentation (https://cran.r-project.org/web/packages/apcluster/apcluster.pdf) work as expected.
In my own data, I have an additional column in the input which provides a pre-defined "target" for classification purposes, and I am wondering if there is a way to have the dendrogram labels highlighted by color (e.g. red = class 0, blue = class 1), with the classes of the targets being factors (or characters). I am ultimately trying to visually display how many clusters contain "pure" vs. "mixed" classes. Here is some slightly modified code from the online documentation to show roughly what my input data looks like:
cl1Targ <- matrix(nrow=50,ncol=1)
for(c1t in 1:nrow(cl1Targ)){ cl1Targ[c1t] <- as.factor(0) }
cl2Targ <- matrix(nrow=50,ncol=1)
for(c2t in 1:nrow(cl2Targ)){ cl2Targ[c2t] <- as.factor(1) }
## create two Gaussian clouds
#cl1 <- cbind(rnorm(50,0.2,0.05),rnorm(50,0.8,0.06))
#cl2 <- cbind(rnorm(50,0.7,0.08),rnorm(50,0.3,0.05))
cl1 <- cbind(rnorm(50,0.2,0.05),rnorm(50,0.8,0.06),cl1Targ)
cl2 <- cbind(rnorm(50,0.7,0.08),rnorm(50,0.3,0.05),cl2Targ)
x <- rbind(cl1,cl2)
colnames(x) <- c('Column 1','Column 2','Class_ID')
## compute similarity matrix (negative squared Euclidean)
sim <- negDistMat(x, r=2)
## run affinity propagation
apres <- apcluster(sim, q=0.7)
## compute agglomerative clustering from scratch
aggres1 <- aggExCluster(sim)
## plot dendrogram
plot(aggres1, main='aggres1 w/ target')
How would I color the dendrogram by the target defined in the input?
Question 2:
When I show() the example data’s APResult, I see the following:
show(apres)
APResult object
Number of samples = 100
Number of iterations = 165
Input preference = -0.01281384
Sum of similarities = -0.1222309
Sum of preferences = -0.1409522
Net similarity = -0.2631832
Number of clusters = 11
Exemplars:
8 17 24 37 43 52 58 68 92 95 99
Clusters:
Cluster 1, exemplar 8:
7 8 9 25 31 36 39 42 47 48
Cluster 2, exemplar 17:
6 11 13 15 17 18 19 23 32 35
Cluster 3, exemplar 24:
2 5 10 24 45
When I use my own data, I see the following (the row.names are the drugs being clustered by gene-expression mean fold-change values):
show(apclr2q05_mean)
APResult object
Number of samples = 1045
Number of iterations = 429
Input preference = -390.0822
Sum of similarities = -89326.99
Sum of preferences = -83477.58
Net similarity = -172804.6
Number of clusters = 214
Exemplars:
amantadine_58mg6h_fc amiodarone_147mg3d_fc clarithromycin_56mg1d_fc fluconazole_394mg5d_fc ketoconazole_114mg5d_fc ketoconazole_2274mg1d_fc
pantoprazole_1100mg1d_fc pantoprazole_1100mg3d_fc quetiapine_500mg5d_fc roxithromycin_312mg5d_fc torsemide_3mg3d_fc acetazolamide_250mg3d_fc
Clusters:
Cluster 1, exemplar amantadine_58mg6h_fc:
amantadine_58mg6h_fc promazine_100mg1d_fc cyproteroneAcetate_2500mg6h_fc danazol_2g5d_fc ivermectin_7500ug1d_fc letrozole_250mg6h_fc
mefenamicAcid_93mg3d_fc olanzapine_23mg1d_fc secobarbital_20mg6h_fc zaleplon_100mg3d_fc
Cluster 2, exemplar amiodarone_147mg3d_fc:
amiodarone_147mg3d_fc amiodarone_147mg5d_fc aspirin_375mg5d_fc betaNapthoflavone_80mg5d_fc clofibrate_130mg3d_fc finasteride_800mg5d_fc
Cluster 3, exemplar clarithromycin_56mg1d_fc:
ciprofloxacin_72mg5d_fc ciprofloxacin_450mg6h_fc clarithromycin_56mg1d_fc clarithromycin_56mg3d_fc clarithromycin_56mg5d_fc
Cluster 4, exemplar fluconazole_394mg5d_fc:
fluconazole_394mg5d_fc
This is also what I would expect in terms of content, but I would like to format it for reporting purposes. I have tried to export it using dput(), but I get a lot of extra, unnecessary information in the output file. I am wondering how I might export the same information as above, along with the object name and the target classifier mentioned earlier, into a table that would look like the following:
Name of object = apclr2q05_mean
Number of samples = 1045
Number of iterations = 429
Input preference = -390.0822
Sum of similarities = -89326.99
Sum of preferences = -83477.58
Net similarity = -172804.6
Number of clusters = 214
Exemplars: Target
amantadine_58mg6h_fc 1
amiodarone_147mg3d_fc 1
clarithromycin_56mg1d_fc 1
fluconazole_394mg5d_fc 0
ketoconazole_114mg5d_fc 0
ketoconazole_2274mg1d_fc 0
Clusters:
Cluster 1, exemplar amantadine_58mg6h_fc:
Drug Target
amantadine_58mg6h_fc 1
promazine_100mg1d_fc 1
cyproteroneAcetate_2500mg6h_fc 1
danazol_2g5d_fc 0
ivermectin_7500ug1d_fc 0
Cluster 2, exemplar amiodarone_147mg3d_fc:
Drug Target
Etc…
A big THANK YOU to Ulrich for his quick response to these questions by email and we wanted to share our discussion with the community so I will let him respond with his solution so that he gets the credit he deserves :-)
As an update, I tried to implement the answer to Question 1 and the sample code works as expected, but I am having trouble getting this to work on my data. The input data has two parts. The first is a matrix with the numeric measurement data including column and row labels:
> fci[1:3,1:3]
M30596_PROBE1 AI231309_PROBE1 NM_012489_PROBE1
amantadine_58mg1d_fc 0.05630744 -0.10441722 0.41873201
amantadine_58mg6h_fc -0.42780274 -0.26222322 0.02703001
amantadine_220mg1d_fc 0.35260779 -0.09902214 0.04067055
The second is the "target" values in factor format, each of which corresponds to the same row in fci above:
> targs[1:3]
amantadine_58mg1d_fc amantadine_58mg6h_fc amantadine_220mg1d_fc
0 0 0
Levels: 0 1
From here, the tree was built as below:
# build the AggExResult:
aglomr1 <- aggExCluster(negDistMat(r=2), fci)
# convert the data
tree <- as.dendrogram(aglomr1)
# assign the color codes
colorCodes <- c("0"="red", "1"="green")
names(targs) <- rownames(fci)
xColor <- colorCodes[as.character(targs)]
names(xColor) <- rownames(fci)
# plot the colored tree (labels_colors() is from the 'dendextend' package)
library(dendextend)
labels_colors(tree) <- xColor[order.dendrogram(tree)]
plot(tree, main="Colored Tree")
The tree was generated but the leaves were not colored. Doing some digging:
> head(xColor)
0 0 0 0 0 0
"red" "red" "red" "red" "red" "red"
That part seems to work as expected in that the targets have the correct colors assigned, but the rownames are not in xColor, and the line labels_colors(tree) <- xColor[order.dendrogram(tree)] does not return matching labels, but rather what appear to be row numbers, or NAs:
> head(order.dendrogram(tree))
[1] "295" "929" "488" "493" "233" "235"
> head(labels_colors(tree))
295 929 488 493 233 235
> head(xColor[order.dendrogram(tree)])
<NA> <NA> <NA> <NA> <NA> <NA>
NA NA NA NA NA NA
How would I get the line labels_colors(tree) <- xColor[order.dendrogram(tree)] to behave in the same way as the example provided? Specifically, what I am trying to show is the leaf labels, such as amantadine_58mg1d_fc, highlighted in the color that corresponds to the target (0/1).
Here is my answer to your Question 1: the plot() method for 'AggExResult' objects internally uses the plot.dendrogram() method. Since this method does not allow for coloring the leaves of dendrograms, this will not work. However, there is the 'dendextend' package, which offers such functionality. (BTW, I found that solution in another thread: Label and color leaf dendrogram in r.) Since 'apcluster' offers some casts to 'hclust' and 'dendrogram' objects, this package's functionality can be used more or less directly.
So, here is some sample code:
library(apcluster)
## create two Gaussian clouds along with class labels 0/1
cl1 <- cbind(rnorm(50, 0.2, 0.05), rnorm(50, 0.8, 0.06))
cl2 <- cbind(rnorm(50, 0.7, 0.08), rnorm(50, 0.3, 0.05))
x <- cbind(Columns=data.frame(rbind(cl1, cl2)),
"Class_ID"=factor(as.character(c(rep(0, 50), rep(1, 50)))))
## compute similarity matrix (negative squared Euclidean)
sim <- negDistMat(x[, 1:2], r=2)
## compute agglomerative clustering from scratch
aggres1 <- aggExCluster(sim)
## load 'dendextend' package
## install.packages("dendextend") ## if not yet installed
library(dendextend)
## convert object
tree <- as.dendrogram(aggres1)
## assign color codes
colorCodes <- c("0"="red", "1"="green")
xColor <- colorCodes[x$Class_ID]
names(xColor) <- rownames(x)
## plot color-labeled tree
labels_colors(tree) <- xColor[order.dendrogram(tree)]
plot(tree)
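If you also want a legend mapping the colours back to the classes, an optional base-graphics call after plotting the tree is:
legend("topright", legend = names(colorCodes), fill = colorCodes, title = "Class_ID")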
Here is my answer to your Question 2: Sorry, no such functionality is implemented in the 'apcluster' package. And since this is quite a special request, I am reluctant to include it in the package (let alone the fact that show() methods cannot have additional arguments). So, alternatively, I want to provide you with a custom function that allows for labeling/grouping exemplars and samples:
library(apcluster)
## create two Gaussian clouds along with class labels 0/1
cl1 <- cbind(rnorm(50, 0.2, 0.05), rnorm(50, 0.8, 0.06))
cl2 <- cbind(rnorm(50, 0.7, 0.08), rnorm(50, 0.3, 0.05))
x <- cbind(Columns=data.frame(rbind(cl1, cl2)),
"Class_ID"=factor(as.character(c(rep(0, 50), rep(1, 50)))))
## compute similarity matrix (negative squared Euclidean)
sim <- negDistMat(x[, 1:2], r=2)
## special show() function with labeled data
show.ExClust.labeled <- function(object, labels=NULL)
{
    if (!is(object, "ExClust"))
        stop("'object' is not of class 'ExClust'")
    if (is.null(labels))
    {
        show(object)
        return(invisible(NULL))
    }
    cat("\n", class(object), " object\n", sep="")
    if (!is.finite(object@l) || !is.finite(object@it))
        stop("object is not result of an affinity propagation run; ",
             "it is pointless to create 'APResult' objects yourself.")
    cat("\nNumber of samples = ", object@l, "\n")
    if (length(object@sel) > 0)
    {
        cat("Number of sel samples = ", length(object@sel),
            paste(" (", round(100*length(object@sel)/object@l,1),
                  "%)\n", sep=""))
        cat("Number of sweeps = ", object@sweeps, "\n")
    }
    cat("Number of iterations = ", object@it, "\n")
    cat("Input preference = ", object@p, "\n")
    cat("Sum of similarities = ", object@dpsim, "\n")
    cat("Sum of preferences = ", object@expref, "\n")
    cat("Net similarity = ", object@netsim, "\n")
    cat("Number of clusters = ", length(object@exemplars), "\n\n")
    if (length(object@exemplars) > 0)
    {
        if (length(names(object@exemplars)) == 0)
        {
            cat("Exemplars:\n")
            df <- data.frame("Sample"=object@exemplars,
                             Label=labels[object@exemplars])
            print(df, row.names=FALSE)
            for (i in 1:length(object@exemplars))
            {
                cat("\nCluster ", i, ", exemplar ",
                    object@exemplars[i], ":\n", sep="")
                df <- data.frame(Sample=object@clusters[[i]],
                                 Label=labels[object@clusters[[i]]])
                print(df, row.names=FALSE)
            }
        }
        else
        {
            df <- data.frame("Exemplars"=names(object@exemplars),
                             Label=labels[names(object@exemplars)])
            print(df, row.names=FALSE)
            for (i in 1:length(object@exemplars))
            {
                cat("\nCluster ", i, ", exemplar ",
                    names(object@exemplars)[i], ":\n", sep="")
                df <- data.frame(Sample=names(object@clusters[[i]]),
                                 Label=labels[names(object@clusters[[i]])])
                print(df, row.names=FALSE)
            }
        }
    }
    else
    {
        cat("No clusters identified.\n")
    }
}
## create label vector (with proper names)
label <- x$Class_ID
names(label) <- rownames(x)
## run apcluster()
apres <- apcluster(sim, q=0.3)
## show with labels
show.ExClust.labeled(apres, label)
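If you then want this report in a file for reporting purposes rather than on the console, one simple option is base R's capture.output(); the file name here is only an example:
capture.output(show.ExClust.labeled(apres, label), file = "apres_report.txt")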

Latent variable with Gaussian mixture model to impute the missing data

I'm currently trying to impute missing data using a Gaussian mixture model.
My reference paper is here:
http://mlg.eng.cam.ac.uk/zoubin/papers/nips93.pdf
I currently focus on a bivariate dataset with 2 Gaussian components.
This is the code to define the weight for each Gaussian component:
library(mvtnorm)  # dmvnorm() comes from the 'mvtnorm' package

myData <- faithful[, 1:2]  # the data matrix
for (i in 1:N) {
  prob1 <- pi1 * dmvnorm(na.exclude(myData[, 1:2]), m1, Sigma1)  # probabilities of sample points under model 1
  prob2 <- pi2 * dmvnorm(na.exclude(myData[, 1:2]), m2, Sigma2)  # same for model 2
  Z <- rbinom(no, 1, prob1 / (prob1 + prob2))  # Z is the latent variable assigning each data point to a component
  pi1 <- rbeta(1, sum(Z) + 1/2, no - sum(Z) + 1/2)
  if (pi1 > 1/2) {
    pi1 <- 1 - pi1
    Z <- 1 - Z
  }
}
This is my code to define the missing values:
> whichMissXY<-myData[ which(is.na(myData$waiting)),1:2]
> whichMissXY
eruptions waiting
11 1.833 NA
12 3.917 NA
13 4.200 NA
14 1.750 NA
15 4.700 NA
16 2.167 NA
17 1.750 NA
18 4.800 NA
19 1.600 NA
20 4.250 NA
My problem is how to impute the missing data in the "waiting" variable based on the particular component.
This code is my first attempt, using conditional mean imputation. I know it is definitely the wrong way: the imputed values would not belong to the particular component and could produce outliers.
miss.B2 <- which(is.na(myData$waiting))
for (i in miss.B2) {
  myData[i, "waiting"] <- m1[2] +
    (rho * sqrt(Sigma1[2, 2] / Sigma1[1, 1])) * (myData[i, "eruptions"] - m1[1]) +
    rnorm(1, 0, Sigma1[2, 2])
  # print(miss.B[i, ])
}
I would appreciate any advice on how to improve the imputation technique so that it works with the latent/hidden variable of the Gaussian mixture model.
Thank you in advance.
This is a solution for one type of covariance structure.
devtools::install_github("alexwhitworth/emclustr")
library(emclustr)
data(faithful)
set.seed(23414L)
ff <- apply(faithful, 2, function(j) {
  na_idx <- sample.int(length(j), 50, replace = FALSE)
  j[na_idx] <- NA
  return(j)
})
ff2 <- em_clust_mvn_miss(ff, nclust=2)
# hmm... seems I don't return the imputed values.
# note to self to update the code
plot(faithful, col= ff2$mix_est)
And the parameter outputs
$it
[1] 27
$clust_prop
[1] 0.3955708 0.6044292
$clust_params
$clust_params[[1]]
$clust_params[[1]]$mu
[1] 2.146797 54.833431
$clust_params[[1]]$sigma
[1] 13.41944
$clust_params[[2]]
$clust_params[[2]]$mu
[1] 4.317408 80.398192
$clust_params[[2]]$sigma
[1] 13.71741
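As a rough sketch of the imputation step itself (not part of emclustr): one possible approach, assuming the 'mixtools' package and the myData object from the question (with NAs in waiting), is to fit the mixture on the complete cases and then impute each missing waiting value with its conditional mean under the most likely component:
library(mixtools)

cc  <- complete.cases(myData)
fit <- mvnormalmixEM(as.matrix(myData[cc, ]), k = 2)  # fit a 2-component mixture on the complete rows

for (i in which(is.na(myData$waiting))) {
  x1 <- myData$eruptions[i]
  # responsibility of each component given the observed 'eruptions' only
  r <- sapply(1:2, function(k)
    fit$lambda[k] * dnorm(x1, fit$mu[[k]][1], sqrt(fit$sigma[[k]][1, 1])))
  k <- which.max(r)  # hard assignment; one could also sample from r/sum(r)
  mu <- fit$mu[[k]]; S <- fit$sigma[[k]]
  # conditional mean of 'waiting' given 'eruptions' under component k
  myData$waiting[i] <- mu[2] + S[2, 1] / S[1, 1] * (x1 - mu[1])
}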
