Spatially adaptive smoothing using mgcv package in R - gam

I am using the gam function in the mgcv package to fit a spatially adaptive smooth to heterogeneous data. This is my R code for the fit:
library(MASS)
data(mcycle)
fit <- gam(accel ~ s(times, k = 20, bs = 'ad'), data = mcycle, method = 'REML')
The output contains 5 smoothing parameters. I am trying to extract the values for each smoothing parameter (S[[i]] for i = 1,...,5), and I used fit$S[[1]] to get the first smoothing parameter's values, but it does not work. Could someone help me with this?

You want the $sp component
> fit$sp
s(times)1 s(times)2 s(times)3 s(times)4 s(times)5
1.364206e+01 5.204389e-04 2.036490e-03 8.565542e+00 2.428618e+03
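Since fit$sp is just a named numeric vector, individual smoothing parameters can be pulled out by position or by name, for example:
fit$sp[1]            # first smoothing parameter
fit$sp["s(times)4"]  # fourth smoothing parameter, selected by name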
The $S component of the $smooth list contains the penalty matrices associated with the five smoothing parameters.
See ?gamObject and ?smooth.construct for further details on what is returned in the fit.
If you really want the penalty matrices, then look at the structure of the smooth component:
> str(fit$smooth, max = 1)
List of 1
$ :List of 26
..- attr(*, "class")= chr [1:2] "pspline.smooth" "mgcv.smooth"
..- attr(*, "qrc")=List of 4
.. ..- attr(*, "class")= chr "qr"
..- attr(*, "nCons")= int 1
Even though there is only a single smooth, $smooth is a list, so we need fit$smooth[[1]] to access this smooth. Now, if we look at the $S component of the smooth, we see
> str(fit$smooth[[1]]$S, max = 1)
List of 5
$ : num [1:19, 1:19] 0.4446 -0.2845 0.0913 0.0426 0.0943 ...
$ : num [1:19, 1:19] 0.3417 -0.2441 0.0845 0.0341 0.0654 ...
$ : num [1:19, 1:19] 0.0913 -0.0734 0.0271 0.0109 0.0141 ...
$ : num [1:19, 1:19] 4.13e-05 -3.46e-05 4.10e-05 1.32e-04 -3.96e-05 ...
$ : num [1:19, 1:19] 1.68e-06 2.43e-06 3.49e-06 4.62e-06 1.08e-05 ...
This indicates that there are five penalty matrices associated with this smooth, each stored as a component of the S list. Hence, for the ith penalty matrix we need
fit$smooth[[1]]$S[[i]]
For example, for the second penalty matrix we need
fit$smooth[[1]]$S[[2]]
the first six rows and columns of which look like this
> fit$smooth[[1]]$S[[2]][1:6, 1:6]
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 0.34168394 -0.24407752 0.084500619 0.03412496 0.06538967 0.054028500
[2,] -0.24407752 0.36254851 -0.255915616 0.05368650 -0.03418746 -0.019895116
[3,] 0.08450062 -0.25591562 0.352961000 -0.21961696 0.04421239 0.001082056
[4,] 0.03412496 0.05368650 -0.219616955 0.35168761 -0.18138207 0.077301400
[5,] 0.06538967 -0.03418746 0.044212389 -0.18138207 0.25012833 -0.178018503
[6,] 0.05402850 -0.01989512 0.001082056 0.07730140 -0.17801850 0.264159096
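If what you are ultimately after is the overall penalty used in the fit, a small sketch (using only the objects shown above) is to weight each penalty matrix by its estimated smoothing parameter and add them up, since the penalty in mgcv takes the form sum_i sp[i] * S[[i]]:
## smoothing-parameter-weighted sum of the five penalty matrices
S.total <- Reduce(`+`, Map(`*`, fit$sp, fit$smooth[[1]]$S))
dim(S.total)  # 19 x 19, matching the constrained basis dimension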

Question about array multiplication in JAGS

I am working with race-stratified population estimates, and I want to integrate race-stratified populations from three different data sources (census, PEP, and ACS). I developed a model that uses information from all three sources to estimate the true population, defined as gamma.ctr for county c, time t, and race r (1 = white, 2 = non-white).
The problem is that the PEP data are not race-stratified, and I need to find a way to estimate race-stratified PEP data.
Previously, I used one of the other two sources (census or ACS) to estimate ethnicity proportions and multiplied the PEP data by these proportions to obtain a race-stratified PEP population as input data to the model.
Now I have decided to do this multiplication within the model, based on ethnicity proportions defined by gamma.ctr (true population in county c, year t, and race r), which is updated by all data sources rather than just one of them.
So I considered the input PEP data as peppop.ct (the population for county c and time t, not race-stratified). Then I defined ethnicity proportion as:
prob[c,t]=gamma.ctr[c,t,1]/(gamma.ctr[c,t,1]+gamma.ctr[c,t,2])
I want to multiply PEP data by these proportions to find race-stratified estimates within the JAGS model:
for (c in 1:Narea){
  for (t in 1:nyears){
    prob.ct[c,t] <- gamma.ctr[c,t,1] / (gamma.ctr[c,t,1] + gamma.ctr[c,t,2])
    peppop.ctr[c,t,1] <- peppop.ct[c,t] * prob.ct[c,t]
    peppop.ctr[c,t,2] <- peppop.ct[c,t] * (1 - prob.ct[c,t])
  }
}
I want to use this peppop.ctr as a response variable later, like this:
for (t in 1:nyears){
peppop.ctr[c,t,r] ~ dnorm(gamma.ctr[c,t,r], taupep.ctr[c,t,r])
}
But I receive this error:
Attempt to redefine node peppop.ctr[1,1,1]
I think the reason for this error is that peppop.ctr is defined twice on the left-hand side, and the error relates to redefining peppop.ctr in the line:
peppop.ctr[c,t,1]<-peppop.ct[c,t] * prob.ct[c,t]
Could you help me solve this error? I need to estimate peppop.ctr first and then use these estimates to update the gamma.ctr parameters. Any help is really appreciated.
You can use the zeros trick to both define a variable (e.g., y below) and then also use that variable as the dependent variable in some subsequent analysis. The trick works because a Poisson observation of zero with mean phi[i] contributes a likelihood factor of exp(-phi[i]), so setting phi[i] = -log(L[i]) + C makes each data point contribute a likelihood proportional to L[i]. Here's an example:
library(runjags)
x <- rnorm(1000)
y <- 2 + 3 * x + rnorm(1000)
p <- runif(1000, .1, .9)
w <- y*p
z <- y-w
datl <- list(
x=x,
w=w,
z=z,
zeros = rep(0, length(x)),
N = length(x)
)
mod <- "model{
y <- w + z
C <- 10000 # this just has to be large enough to ensure all phi[i]'s > 0
for (i in 1:N) {
L[i] <- dnorm(y[i], mu[i], tau)
mu[i] <- b[1] + b[2]*x[i]
phi[i] <- -log(L[i]) + C
zeros[i] ~ dpois(phi[i])
}
#sig ~ dunif(0, sd(y))
#tau <- pow(sig, -2)
tau ~ dgamma(1,.1)
b[1] ~ dnorm(0, .0001)
b[2] ~ dnorm(0, .0001)
}
"
out <- run.jags(model = mod, data=datl, monitor = c("b", "tau"), n.chains = 2)
summary(out)
#> Lower95 Median Upper95 Mean SD Mode MCerr MC%ofSD
#> b[1] 1.9334538 1.991586 2.051245 1.991802 0.03026722 NA 0.0002722566 0.9
#> b[2] 2.9019547 2.963257 3.023057 2.963190 0.03057184 NA 0.0002744883 0.9
#> tau 0.9939587 1.087178 1.183521 1.087744 0.04845667 NA 0.0004280217 0.9
#> SSeff AC.10 psrf
#> b[1] 12359 -0.010240572 1.0000684
#> b[2] 12405 -0.006480322 0.9999677
#> tau 12817 0.010135609 1.0000195
summary(lm(y ~ x))
#>
#> Call:
#> lm(formula = y ~ x)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -3.2650 -0.6213 -0.0032 0.6528 3.3956
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 1.99165 0.03034 65.65 <2e-16 ***
#> x 2.96340 0.03013 98.34 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9593 on 998 degrees of freedom
#> Multiple R-squared: 0.9065, Adjusted R-squared: 0.9064
#> F-statistic: 9671 on 1 and 998 DF, p-value: < 2.2e-16
Created on 2022-05-14 by the reprex package (v2.0.1)
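In principle the same pattern maps onto the model in the question: keep the deterministic definition of peppop.ctr, and let it contribute to the likelihood through a zeros-trick Poisson node rather than a second definition with ~. The following is only an untested sketch using the variable names from the question (Narea, nyears, peppop.ct, gamma.ctr, taupep.ctr); zeros would have to be supplied as data, an array of 0s of dimension Narea x nyears x 2, and the priors and the census/ACS parts of the model are omitted.
pepmod <- "model{
  C <- 10000  # large enough that all phi[] stay positive
  for (c in 1:Narea){
    for (t in 1:nyears){
      prob.ct[c,t] <- gamma.ctr[c,t,1] / (gamma.ctr[c,t,1] + gamma.ctr[c,t,2])
      peppop.ctr[c,t,1] <- peppop.ct[c,t] * prob.ct[c,t]
      peppop.ctr[c,t,2] <- peppop.ct[c,t] * (1 - prob.ct[c,t])
      for (r in 1:2){
        # likelihood contribution of the derived PEP count via the zeros trick
        L[c,t,r] <- dnorm(peppop.ctr[c,t,r], gamma.ctr[c,t,r], taupep.ctr[c,t,r])
        phi[c,t,r] <- -log(L[c,t,r]) + C
        zeros[c,t,r] ~ dpois(phi[c,t,r])
      }
    }
  }
  # priors for gamma.ctr and taupep.ctr, and the census/ACS data models, go here
}"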

How can I plot multiple ROC curves together?

I want to find some good predictors (genes). This is my data, log transformed RNA-seq:
TRG CDK6 EGFR KIF2C CDC20
Sample 1 TRG12 11.39 10.62 9.75 10.34
Sample 2 TRG12 10.16 8.63 8.68 9.08
Sample 3 TRG12 9.29 10.24 9.89 10.11
Sample 4 TRG45 11.53 9.22 9.35 9.13
Sample 5 TRG45 8.35 10.62 10.25 10.01
Sample 6 TRG45 11.71 10.43 8.87 9.44
I have calculated a confusion matrix for different models, as below.
1- I tested each of 23 genes individually with this code; each gene that gave a p-value < 0.05 was kept as a good predictor. For example, for CDK6 I did:
glm=glm(TRG ~ CDK6, data = df, family = binomial(link = 'logit'))
Finally I obtained five genes and I put them in this model:
final <- glm(TRG ~ CDK6 + CXCL8 + IL6 + ISG15 + PTGS2 , data = df, family = binomial(link = 'logit'))
I want a plot like this showing the ROC curve of each model, but I don't know how to do that.
Any help, please?
I will give you an answer using the pROC package. Disclaimer: I am the author and maintainer of the package. There are alternative ways to do it.
The plot you are seeing was probably generated by the ggroc function of pROC. In order to generate such a plot from glm models, you need to 1) use the predict function to generate the predictions, 2) generate the ROC curves and store them in a list, preferably named so that you get a legend automatically, and 3) call ggroc.
glm.cdk6 <- glm(TRG ~ CDK6, data = df, family = binomial(link = 'logit'))
final <- glm(TRG ~ CDK6 + CXCL8 + IL6 + ISG15 + PTGS2 , data = df, family = binomial(link = 'logit'))
rocs <- list()
library(pROC)
rocs[["CDK6"]] <- roc(df$TRG, predict(glm.cdk6))
rocs[["final"]] <- roc(df$TRG, predict(final))
ggroc(rocs)
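If you also want each model's AUC in the legend, one option (not part of the original answer, but using only pROC's auc() function on the rocs list built above) is to rename the list elements before calling ggroc:
# append the AUC of each curve to its legend label
names(rocs) <- sprintf("%s (AUC = %.3f)", names(rocs), sapply(rocs, auc))
ggroc(rocs)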

Using APCluster on Pre-Classified Data - Coloring Dendrograms and Nicely Formatted Output

Question 1:
I am trying to work with the plot() function on an AggExResult object, and the examples in the documentation (https://cran.r-project.org/web/packages/apcluster/apcluster.pdf) work as expected.
In my own data, I have an additional column in the input which provides a pre-defined "target" for classification purposes, and I am wondering if there is a way to have the dendrogram labels highlighted by color (e.g. red = class 0, blue = class 1), with the class of the targets being factors (or characters). I am ultimately trying to visually display how many clusters contain "pure" vs. "mixed" classes. Here is some slightly modified code from the online documentation to show roughly what my input data looks like:
cl1Targ <- matrix(nrow=50,ncol=1)
for(c1t in 1:nrow(cl1Targ)){ cl1Targ[c1t] <- as.factor(0) }
cl2Targ <- matrix(nrow=50,ncol=1)
for(c2t in 1:nrow(cl2Targ)){ cl2Targ[c2t] <- as.factor(1) }
## create two Gaussian clouds
#cl1 <- cbind(rnorm(50,0.2,0.05),rnorm(50,0.8,0.06))
#cl2 <- cbind(rnorm(50,0.7,0.08),rnorm(50,0.3,0.05))
cl1 <- cbind(rnorm(50,0.2,0.05),rnorm(50,0.8,0.06),cl1Targ)
cl2 <- cbind(rnorm(50,0.7,0.08),rnorm(50,0.3,0.05),cl2Targ)
x <- rbind(cl1,cl2)
colnames(x) <- c('Column 1','Column 2','Class_ID')
## compute similarity matrix (negative squared Euclidean)
sim <- negDistMat(x, r=2)
## run affinity propagation
apres <- apcluster(sim, q=0.7)
## compute agglomerative clustering from scratch
aggres1 <- aggExCluster(sim)
## plot dendrogram
plot(aggres1, main='aggres1 w/ target') #
How would I color the dendrogram by the target defined in the input?
Question 2:
When I show() the example data’s APResult, I see the following:
show(apres)
APResult object
Number of samples = 100
Number of iterations = 165
Input preference = -0.01281384
Sum of similarities = -0.1222309
Sum of preferences = -0.1409522
Net similarity = -0.2631832
Number of clusters = 11
Exemplars:
8 17 24 37 43 52 58 68 92 95 99
Clusters:
Cluster 1, exemplar 8:
7 8 9 25 31 36 39 42 47 48
Cluster 2, exemplar 17:
6 11 13 15 17 18 19 23 32 35
Cluster 3, exemplar 24:
2 5 10 24 45
When I use my own data, I see the following (the row names are the drugs, which are being clustered by gene-expression mean fold-change values):
show(apclr2q05_mean)
APResult object
Number of samples = 1045
Number of iterations = 429
Input preference = -390.0822
Sum of similarities = -89326.99
Sum of preferences = -83477.58
Net similarity = -172804.6
Number of clusters = 214
Exemplars:
amantadine_58mg6h_fc amiodarone_147mg3d_fc clarithromycin_56mg1d_fc fluconazole_394mg5d_fc ketoconazole_114mg5d_fc ketoconazole_2274mg1d_fc
pantoprazole_1100mg1d_fc pantoprazole_1100mg3d_fc quetiapine_500mg5d_fc roxithromycin_312mg5d_fc torsemide_3mg3d_fc acetazolamide_250mg3d_fc
Clusters:
Cluster 1, exemplar amantadine_58mg6h_fc:
amantadine_58mg6h_fc promazine_100mg1d_fc cyproteroneAcetate_2500mg6h_fc danazol_2g5d_fc ivermectin_7500ug1d_fc letrozole_250mg6h_fc
mefenamicAcid_93mg3d_fc olanzapine_23mg1d_fc secobarbital_20mg6h_fc zaleplon_100mg3d_fc
Cluster 2, exemplar amiodarone_147mg3d_fc:
amiodarone_147mg3d_fc amiodarone_147mg5d_fc aspirin_375mg5d_fc betaNapthoflavone_80mg5d_fc clofibrate_130mg3d_fc finasteride_800mg5d_fc
Cluster 3, exemplar clarithromycin_56mg1d_fc:
ciprofloxacin_72mg5d_fc ciprofloxacin_450mg6h_fc clarithromycin_56mg1d_fc clarithromycin_56mg3d_fc clarithromycin_56mg5d_fc
Cluster 4, exemplar fluconazole_394mg5d_fc:
fluconazole_394mg5d_fc
This is also what I would expect in terms of content, but I would like to format it for reporting purposes. I have tried to export this using dput(), but I get a lot of extra unnecessary information in the output file. I am wondering how I might be able to export the same type of information from above, along with the object name and the target classifier mentioned above, into a table that would look like the following (and add the name of the object to the output):
Name of object = apclr2q05_mean
Number of samples = 1045
Number of iterations = 429
Input preference = -390.0822
Sum of similarities = -89326.99
Sum of preferences = -83477.58
Net similarity = -172804.6
Number of clusters = 214
Exemplars: Target
amantadine_58mg6h_fc 1
amiodarone_147mg3d_fc 1
clarithromycin_56mg1d_fc 1
fluconazole_394mg5d_fc 0
ketoconazole_114mg5d_fc 0
ketoconazole_2274mg1d_fc 0
Clusters:
Cluster 1, exemplar amantadine_58mg6h_fc:
Drug Target
amantadine_58mg6h_fc 1
promazine_100mg1d_fc 1
cyproteroneAcetate_2500mg6h_fc 1
danazol_2g5d_fc 0
ivermectin_7500ug1d_fc 0
Cluster 2, exemplar amiodarone_147mg3d_fc:
Drug Target
Etc…
A big THANK YOU to Ulrich for his quick response to these questions by email. We wanted to share our discussion with the community, so I will let him respond with his solution so that he gets the credit he deserves :-)
As an update, I tried to implement the answer to Question 1 and the sample code works as expected, but I am having trouble getting this to work on my data. The input data has two parts. The first is a matrix with the numeric measurement data including column and row labels:
> fci[1:3,1:3]
M30596_PROBE1 AI231309_PROBE1 NM_012489_PROBE1
amantadine_58mg1d_fc 0.05630744 -0.10441722 0.41873201
amantadine_58mg6h_fc -0.42780274 -0.26222322 0.02703001
amantadine_220mg1d_fc 0.35260779 -0.09902214 0.04067055
The second is the "target" values in Factor format, each of which corresponds to same row in fci above:
> targs[1:3]
amantadine_58mg1d_fc amantadine_58mg6h_fc amantadine_220mg1d_fc
0 0 0
Levels: 0 1
From here, the tree was built as below:
# build the AggExResult:
aglomr1 <- aggExCluster(negDistMat(r=2), fci)
# convert the data
tree <- as.dendrogram(aglomr1)
# assign the color codes
colorCodes <- c("0"="red", "1"="green")
names(targs) <- rownames(fci)
xColor <- colorCodes[as.character(targs)]
names(xColor) <- rownames(fci)
# plot the colored tree
labels_colors(tree) <- xColor[order.dendrogram(tree)]
plot(tree, main="Colored Tree")
The tree was generated but the leaves were not colored. Doing some digging:
> head(xColor)
0 0 0 0 0 0
"red" "red" "red" "red" "red" "red"
That part seems to work as expected in terms of the targets having the correct colors assigned, but the row names are not present among the names of xColor, and order.dendrogram(tree) does not return those labels either, but rather what appear to be row numbers, so the line labels_colors(tree) <- xColor[order.dendrogram(tree)] produces NAs:
> head(order.dendrogram(tree))
[1] "295" "929" "488" "493" "233" "235"
> head(labels_colors(tree))
295 929 488 493 233 235
> head(xColor[order.dendrogram(tree)])
<NA> <NA> <NA> <NA> <NA> <NA>
NA NA NA NA NA NA
How would I get the line labels_colors(tree) <- xColor[order.dendrogram(tree)] to behave in the same way as in the example provided? Specifically, what I am trying to show is the leaf labels, such as amantadine_58mg1d_fc, highlighted in the color that corresponds to the target (0/1).
Here is my answer to your Question 1: the plot() method for 'AggExResult' objects internally uses the plot.dendrogram() method. Since this method does not allow for coloring leaves of dendrograms, this will not work. However, there is the 'dendextend' package which offers such a functionality. (BTW, I found that solution in another thread: Label and color leaf dendrogram in r) Since 'apcluster' offers some casts to 'hclust' and 'dendrogram' objects, this package's functionality can be used more or less directly.
So, here is some sample code:
library(apcluster)
## create two Gaussian clouds along with class labels 0/1
cl1 <- cbind(rnorm(50, 0.2, 0.05), rnorm(50, 0.8, 0.06))
cl2 <- cbind(rnorm(50, 0.7, 0.08), rnorm(50, 0.3, 0.05))
x <- cbind(Columns=data.frame(rbind(cl1, cl2)),
"Class_ID"=factor(as.character(c(rep(0, 50), rep(1, 50)))))
## compute similarity matrix (negative squared Euclidean)
sim <- negDistMat(x[, 1:2], r=2)
## compute agglomerative clustering from scratch
aggres1 <- aggExCluster(sim)
## load 'dendextend' package
## install.packages("dendextend") ## if not yet installed
library(dendextend)
## convert object
tree <- as.dendrogram(aggres1)
## assign color codes
colorCodes <- c("0"="red", "1"="green")
xColor <- colorCodes[x$Class_ID]
names(xColor) <- rownames(x)
## plot color-labeled tree
labels_colors(tree) <- xColor[order.dendrogram(tree)]
plot(tree)
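To make the color coding easier to read, you could optionally add a legend mapping the colors back to the class labels; this uses base graphics and the colorCodes vector defined above, and is not part of the original answer:
## add a legend linking leaf colors to the Class_ID levels
legend("topright", legend = names(colorCodes), fill = colorCodes, title = "Class_ID")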
Here is my answer to your Question 2: Sorry, no such functionality is implemented in the 'apcluster' package. And since this is quite a special request, I am reluctant to include it in the package (let alone the fact that show() methods cannot have additional arguments). So, as an alternative, I want to provide you with a custom function that allows for labeling/grouping exemplars and samples:
library(apcluster)
## create two Gaussian clouds along with class labels 0/1
cl1 <- cbind(rnorm(50, 0.2, 0.05), rnorm(50, 0.8, 0.06))
cl2 <- cbind(rnorm(50, 0.7, 0.08), rnorm(50, 0.3, 0.05))
x <- cbind(Columns=data.frame(rbind(cl1, cl2)),
"Class_ID"=factor(as.character(c(rep(0, 50), rep(1, 50)))))
## compute similarity matrix (negative squared Euclidean)
sim <- negDistMat(x[, 1:2], r=2)
## special show() function with labeled data
show.ExClust.labeled <- function(object, labels=NULL)
{
    if (!is(object, "ExClust"))
        stop("'object' is not of class 'ExClust'")
    if (is.null(labels))
    {
        show(object)
        return(invisible(NULL))
    }
    cat("\n", class(object), " object\n", sep="")
    if (!is.finite(object@l) || !is.finite(object@it))
        stop("object is not result of an affinity propagation run; ",
             "it is pointless to create 'APResult' objects yourself.")
    cat("\nNumber of samples = ", object@l, "\n")
    if (length(object@sel) > 0)
    {
        cat("Number of sel samples = ", length(object@sel),
            paste(" (", round(100*length(object@sel)/object@l,1),
                  "%)\n", sep=""))
        cat("Number of sweeps = ", object@sweeps, "\n")
    }
    cat("Number of iterations = ", object@it, "\n")
    cat("Input preference = ", object@p, "\n")
    cat("Sum of similarities = ", object@dpsim, "\n")
    cat("Sum of preferences = ", object@expref, "\n")
    cat("Net similarity = ", object@netsim, "\n")
    cat("Number of clusters = ", length(object@exemplars), "\n\n")
    if (length(object@exemplars) > 0)
    {
        if (length(names(object@exemplars)) == 0)
        {
            cat("Exemplars:\n")
            df <- data.frame("Sample"=object@exemplars,
                             Label=labels[object@exemplars])
            print(df, row.names=FALSE)
            for (i in 1:length(object@exemplars))
            {
                cat("\nCluster ", i, ", exemplar ",
                    object@exemplars[i], ":\n", sep="")
                df <- data.frame(Sample=object@clusters[[i]],
                                 Label=labels[object@clusters[[i]]])
                print(df, row.names=FALSE)
            }
        }
        else
        {
            df <- data.frame("Exemplars"=names(object@exemplars),
                             Label=labels[names(object@exemplars)])
            print(df, row.names=FALSE)
            for (i in 1:length(object@exemplars))
            {
                cat("\nCluster ", i, ", exemplar ",
                    names(object@exemplars)[i], ":\n", sep="")
                df <- data.frame(Sample=names(object@clusters[[i]]),
                                 Label=labels[names(object@clusters[[i]])])
                print(df, row.names=FALSE)
            }
        }
    }
    else
    {
        cat("No clusters identified.\n")
    }
}
## create label vector (with proper names)
label <- x$Class_ID
names(label) <- rownames(x)
## run apcluster()
apres <- apcluster(sim, q=0.3)
## show with labels
show.ExClust.labeled(apres, label)
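If you need the labelled report in a file (e.g. for the reporting purposes mentioned in the question), one simple option, not specific to 'apcluster', is to capture the printed output with base R:
## write the labelled cluster report to a text file (file name is just an example)
capture.output(show.ExClust.labeled(apres, label), file = "apres_report.txt")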

The running sequential average of a list of numbers in J

I'm trying to generate the Sierpinski triangle (chaos game version) in J. The general iterative algorithm to generate it, given 3 vertices, is:
point = (0, 0)
loop:
v = randomly pick one of the 3 vertices
point = (point + v) / 2
draw point
I'm trying to create the idiomatic version in J. So far this is what I have:
load 'plot'
numpoints =: 200000
verticesx =: 0 0.5 1
verticesy =: 0 , (2 o. 0.5) , 0
rolls =: ?. numpoints$3
pointsx =: -:@+ /\. rolls { verticesx
pointsy =: -:@+ /\. rolls { verticesy
'point' plot pointsx ; pointsy
This works, but I'm not sure I understand what's going on with -:@+ /\.. I think it's only working because of a mathematical quirk. I was trying to make a dyadic average function that would run as an accumulation through the list of points in the same way that + does in +/\ i. 10, but I couldn't get anything like that to work. How would I do that?
Update:
To be clear, I'm trying to create a binary function avg that I could use in this way:
avg /\ randompoints
avg =: -:@+ doesn't work with this, for some reason. So I think what I'm having trouble with is properly defining an avg function with the proper variadicity.
To be as close to the algorithm as possible, I would probably do something like this:
v =: 3 2$ 0 0 0.5, (2 o. 0.5), 1 0
ps =: 1 2 $ (?3) { v
next =: 4 :'y,((?x){v) -:@+ ({: y)'
ps =: (3&next)^:20000 ps
'point' plot ({.;{:) |: ps
but your version is much more efficient.

R: Call matrices from a vector of string names?

Imagine I've got 100 numeric matrices with 5 columns each.
I keep the names of those matrices in a vector or list:
Mat <- c("GON1EU", "GON2EU", "GON3EU", "NEW4", ....)
I also have a vector of coefficients "coef",
coef <- c(1, 2, 2, 1, ...)
And I want to calculate a resulting vector in this way:
coef[1]*GON1EU[,1]+coef[2]*GON2EU[,1]+coef[3]*GON3EU[,1]+coef[4]*NEW4[,1]+.....
How can I do it in a compact way, using the vector of names?
Something like:
coef*(Object(Mat)[,1])
I think the key is how to refer to an object from a string containing its name and then use vectorised notation, but I don't know how.
get() allows you to refer to an object by a string. It will only get you so far, though; you'd still need to construct the repeated calls to get() over the vector of matrix names, etc. However, I wonder if an alternative approach might be feasible? Instead of storing the matrices separately in the workspace, why not store the matrices in a list?
Then you can use sapply() on the list to extract the first column of each matrix in the list. The sapply() step returns a matrix, which we multiply by the coefficient vector. The column sums of that matrix are the values you appear to want from your above description. At least I'm assuming that coef[1]*GON1EU[,1] is a vector of length(GON1EU[,1]), etc.
Here's some code implementing this idea.
vec <- 1:4 ## don't use coef - there is a function with that name
mat <- matrix(1:12, ncol = 3)
myList <- list(mat1 = mat, mat2 = mat, mat3 = mat, mat4 = mat)
colSums(sapply(myList, function(x) x[, 1]) * vec)
Here is some output:
> sapply(myList, function(x) x[, 1]) * vec
mat1 mat2 mat3 mat4
[1,] 1 1 1 1
[2,] 4 4 4 4
[3,] 9 9 9 9
[4,] 16 16 16 16
> colSums(sapply(myList, function(x) x[, 1]) * vec)
mat1 mat2 mat3 mat4
30 30 30 30
The above example suggests you create, or read in, your 100 matrices as components of a list from the very beginning of your analysis. This will require you to alter the code you used to generate the 100 matrices. Seeing as you already have your 100 matrices in your workspace, to get myList from these matrices we can use the vector of names you already have and a loop:
Mat <- c("mat","mat","mat","mat")
## set up an empty list, then fill it by looking up each name
myList <- vector(mode = "list", length = length(Mat))
for(i in seq_along(Mat)) {
  myList[[i]] <- get(Mat[i])
}
## or as an lapply call - Kudos to Ritchie Cotton for pointing that one out!
## myList <- lapply(Mat, get)
myList <- setNames(myList, paste(Mat, 1:4, sep = ""))
## In your case you only need
## myList <- setNames(myList, Mat)
## as you already have the proper names of the matrices
I used "mat" repeatedly in Mat as that is the name of my matrix above. You would use your own Mat. If vec contains what you have in coef, and you create myList using the for loop above, then all you should need to do is:
colSums(sapply(myList, function(x) x[, 1]) * vec)
To get the answer you wanted.
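As an aside (not part of the original answer): base R's mget() can fetch several objects by name in one call, which gives you the same list without an explicit loop:
myList <- mget(Mat, envir = globalenv())  # assumes the matrices live in the global workspace
colSums(sapply(myList, function(x) x[, 1]) * vec)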
See help(get) and that's that.
If you'd given us a reproducible example I'd have said a bit more. For example:
> a=1;b=2;c=3;d=4
> M=letters[1:4]
> M
[1] "a" "b" "c" "d"
> sum = 0 ; for(i in 1:4){sum = sum + i * get(M[i])}
> sum
[1] 30
Put whatever you need in the loop, or use apply over the vector M and get the object:
> sum(unlist(lapply(M,function(n){get(n)^2})))
[1] 30
