I have a problem with the following code.
It fits the dose-response curve (with drc) correctly, but gets the EC50 wrong, although the two are closely related...
library(drc)
library(ggplot2)

x <- c(-1, -0.114074, 0.187521, 0.363612, 0.488551, 0.585461, 0.664642, 0.730782, 0.788875, 0.840106, 0.885926, 0.92737, 0.965202, 1)
y <- c(100, 3.978395643, 0.851717911, 0.697307565, 0.512455497, 0.512455497, 0.482273052, 0.479293487, 0.361024717, 0.355324864, 0.303120838, 0.286539832, 0.465692047, 0.358045152)
mat <- cbind(x, y)
df <- as.data.frame(mat)

calc <- drm(
  formula = y ~ x,
  data = df,
  fct = L.4(names = c("hill", "min_value", "max_value", "ec50"))
)
plot <- ggplot(df, aes(x = x, y = y), color = "black") +
  geom_point() +
  labs(x = "x", y = "y") +
  theme(
    axis.title.x = element_text(color = "black", size = 10),
    axis.title.y = element_text(color = "black", size = 10),
    axis.line.x = element_line(color = "black"),
    axis.line.y = element_line(color = "black")
  ) +
  stat_smooth(
    formula = y ~ x,
    method = "drm", color = "black",
    method.args = list(fct = L.4(names = c("hill", "min_value", "max_value", "ec50"))),
    se = FALSE
  ) +
  theme(panel.background = element_rect(fill = "white")) +
  ylim(0, NA)

ec50 <- ED(calc, 50)
print(ec50)
print(calc)
print(plot)
This is the graph I obtain:
But when I print the parameters of the fitted L.4 model, I get the following result:
hill:(Intercept) 6.3181
min_value:(Intercept) 0.3943
max_value:(Intercept) 111.0511
ec50:(Intercept) -0.6520
max_value:(Intercept) is obviously wrong (it should be 100) and, as a consequence, ec50 is wrong too.
I would also add that for other data sets the min_value:(Intercept) is wrong too (with values < 0...).
I cannot find the mistake, because the curve drawn from the same fitted L.4 function shows the right values.
Thank you very much for your help!
The upper asymptote in your case comes from fitting a symmetrical curve (that is what the 4PL model assumes): the curve approaches the lower and the upper asymptote in the same way on either side of the inflection point.
Your data may top out at 100, but the fit places the upper asymptote above 100 (about 111) because that is where the fitted asymptote lies, not at the end of your data.
So the graph is drawn through your data, but the estimated parameters force a symmetrical curve onto it, and the upper asymptote is pushed up. This also shifts the EC50.
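For reference, the four-parameter logistic that L.4() fits can be written with your parameter names as

y = min_value + (max_value - min_value) / (1 + exp(hill * (x - ec50)))

so max_value is the asymptote the fitted curve levels off towards, not the largest observed y, and ED(calc, 50) is defined relative to the two estimated asymptotes (0.39 and 111 here), not relative to 0 and 100. That is why an inflated max_value drags the reported EC50 along with it. If you really need the upper plateau pinned at 100, you would have to fix that parameter in the model rather than estimate it.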
Due to privacy issues I don't have the original raw data matrices; instead I can get covariance matrices of the x and y datasets (x'x, y'y, x'y) or the correlation matrix between the two of them (or any other sort of matrix that is not the original data matrix).
I need to find a way to apply canonical correlation analysis directly to those matrices. Browsing the net I didn't find any solution to my problem, so I want to ask whether there is an already implemented algorithm able to work on such data. R would be best, but other languages are OK.
Example from the tutorial in R for the CCA package (https://stats.idre.ucla.edu/r/dae/canonical-correlation-analysis/):
mm <- read.csv("https://stats.idre.ucla.edu/stat/data/mmreg.csv")
colnames(mm) <- c("Control", "Concept", "Motivation", "Read", "Write",
                  "Math", "Science", "Sex")
You divide the dataset into x and y :
x <- mm[, 1:3]
y <- mm[, 4:8]
Then the function works by taking these two datasets as input: cc(x, y) (note that the function standardizes the data by itself).
What I want to know is whether there is a way to perform CCA starting from the matrices centred around the mean:
x <- scale(x, scale = FALSE)
y <- scale(y, scale = FALSE)
and then computing the cross-product matrices x'x, y'y and x'y:
cvx <- crossprod(x); cvy <- crossprod(y); cvxy <- crossprod(x, y)
The algorithm should take those matrices as input and compute the canonical variates and correlation coefficients,
like: f(cvx, cvy, cvxy)
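For concreteness, what f(cvx, cvy, cvxy) needs to solve is the standard textbook eigenproblem (see e.g. the Weenink notes linked below): writing Sxx = x'x, Syy = y'y and Sxy = x'y, the canonical correlations rho and the x-side weights a satisfy

Sxx^{-1} Sxy Syy^{-1} Syx a = rho^2 a,    with b proportional to Syy^{-1} Syx a

for the y-side weights; any common scaling of the three matrices cancels out.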
This article, for example, describes a solution that starts from the covariance matrices, but I don't know whether it is just theory or whether someone has actually implemented it:
http://graphics.stanford.edu/courses/cs233-20-spring/ReferencedPapers/CCA_Weenik.pdf
I hope this is exhaustive enough!
In short: the correlations are used internally in most (probably all) CCA implementations.
At greater length: you will need to work out the details a bit, depending on your case. Let me show you an example below.
What is Canonical-correlation analysis (CCA)?
Canonical-correlation analysis (CCA) helps you identify the best possible linear relations you could create between two datasets. See Wikipedia and the references below for examples. I will follow this post for the data and the libraries used.
Set up the libraries, load the data, select some variables, remove NaNs, standardize the data.
import pandas as pd
import numpy as np

df = pd.read_csv('2016 School Explorer.csv')

# choose relevant features
df = df[['Rigorous Instruction %',
         'Collaborative Teachers %',
         'Supportive Environment %',
         'Effective School Leadership %',
         'Strong Family-Community Ties %',
         'Trust %',
         'Average ELA Proficiency',
         'Average Math Proficiency']]
df.corr()

# drop missing values
df = df.dropna()

# separate X and Y groups
X = df[['Rigorous Instruction %',
        'Collaborative Teachers %',
        'Supportive Environment %',
        'Effective School Leadership %',
        'Strong Family-Community Ties %',
        'Trust %']]
Y = df[['Average ELA Proficiency',
        'Average Math Proficiency']]

# the X columns are strings like "89%": strip the sign and convert to int
for col in X.columns:
    X[col] = X[col].str.strip('%')
    X[col] = X[col].astype('int')

# Standardise the data
from sklearn.preprocessing import StandardScaler
sc = StandardScaler(with_mean=True, with_std=True)
X_sc = sc.fit_transform(X)
Y_sc = sc.fit_transform(Y)
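A quick sanity check I am adding here (not in the original post): because the columns are standardized, the cross-product matrix divided by the number of rows is just the correlation matrix, which is why the crosscovs printed further down look like correlations scaled by a constant (the 1217 on their diagonal is the number of rows).

n = X_sc.shape[0]
# for standardized columns, X'X / n equals the correlation matrix of X
print(np.allclose(X_sc.T @ X_sc / n, np.corrcoef(X_sc, rowvar=False)))  # expected: True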
What are Correlations?
I am pausing here to talk about the idea and its implementation.
First of all, CCA is naturally based on that idea; however, for the numerical solution there are different ways to carry it out.
The definition from Wikipedia: the first pair of canonical variables maximizes the correlation corr(a'X, b'Y) over the weight vectors a and b.
I am talking about this because I am going to modify a function of the pyrcca library, and I want you to really pay attention to this point.
The relevant expression is Eq. 4 in Bilenko et al. (2016), but you need to be careful about where exactly it enters the computation.
Notice that, strictly speaking, you do not need the correlations themselves.
Let me show you the function that works out that expression in the pyrcca library:
def kcca(data, reg=0., numCC=None, kernelcca=True, ktype='linear',
         gausigma=1.0, degree=2):
    """Set up and solve the kernel CCA eigenproblem
    """
    if kernelcca:
        kernel = [_make_kernel(d, ktype=ktype, gausigma=gausigma,
                               degree=degree) for d in data]
    else:
        kernel = [d.T for d in data]

    nDs = len(kernel)
    nFs = [k.shape[0] for k in kernel]
    numCC = min([k.shape[1] for k in kernel]) if numCC is None else numCC

    # Get the auto- and cross-covariance matrices
    crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]

    # Allocate left-hand side (LH) and right-hand side (RH):
    LH = np.zeros((sum(nFs), sum(nFs)))
    RH = np.zeros((sum(nFs), sum(nFs)))

    # Fill the left and right sides of the eigenvalue problem
    for i in range(nDs):
        RH[sum(nFs[:i]) : sum(nFs[:i+1]),
           sum(nFs[:i]) : sum(nFs[:i+1])] = (crosscovs[i * (nDs + 1)]
                                             + reg * np.eye(nFs[i]))

        for j in range(nDs):
            if i != j:
                LH[sum(nFs[:j]) : sum(nFs[:j+1]),
                   sum(nFs[:i]) : sum(nFs[:i+1])] = crosscovs[nDs * j + i]

    LH = (LH + LH.T) / 2.
    RH = (RH + RH.T) / 2.

    maxCC = LH.shape[0]
    r, Vs = eigh(LH, RH, eigvals=(maxCC - numCC, maxCC - 1))
    r[np.isnan(r)] = 0
    rindex = np.argsort(r)[::-1]
    comp = []
    Vs = Vs[:, rindex]
    for i in range(nDs):
        comp.append(Vs[sum(nFs[:i]):sum(nFs[:i + 1]), :numCC])
    return comp
The output from here is the canonical covariates (comp); those are a and b in Eq. 4 of Bilenko et al. (2016).
I just want you to pay attention to this part:
# Get the auto- and cross-covariance matrices
crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]
That is exactly the place where that operation happens. Notice that it is not exactly the definition from Wikipedia, but it is mathematically equivalent.
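To make that concrete: with kernelcca=False the "kernels" are just the transposed data matrices, so for the two standardized blocks X (n x p) and Y (n x q) the list crosscovs is

[ X'X, X'Y, Y'X, Y'Y ]

and each entry is n times the corresponding correlation matrix, because the columns have mean 0 and standard deviation 1. The Wikipedia definition only differs by that constant factor and by the (here unit) standard deviations in the denominator.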
Calculation of the correlations
I am going to calculate the correlations as on Wikipedia; later I will modify that function, so a couple of details will change, to make sure this answers the original question clearly.
# Get the auto- and cross-covariance matrices
crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]
print(crosscovs)
[array([[1217. , 746.04496925, 736.14178336, 575.21073838,
517.52474332, 641.25363806],
[ 746.04496925, 1217. , 732.6297358 , 1094.38480773,
572.95747557, 1073.96490387],
[ 736.14178336, 732.6297358 , 1217. , 559.5753228 ,
682.15312862, 774.36607617],
[ 575.21073838, 1094.38480773, 559.5753228 , 1217. ,
495.79248754, 1047.31981248],
[ 517.52474332, 572.95747557, 682.15312862, 495.79248754,
1217. , 632.75610906],
[ 641.25363806, 1073.96490387, 774.36607617, 1047.31981248,
632.75610906, 1217. ]]), array([[367.74099904, 391.82683717],
[348.78464015, 355.81358426],
[440.88117453, 514.22183796],
[326.32173163, 311.97282341],
[216.32441793, 269.72859023],
[288.27601974, 304.20209135]]), array([[367.74099904, 348.78464015, 440.88117453, 326.32173163,
216.32441793, 288.27601974],
[391.82683717, 355.81358426, 514.22183796, 311.97282341,
269.72859023, 304.20209135]]), array([[1217. , 1139.05867099],
[1139.05867099, 1217. ]])]
Have a look at the output. I am going to change it a bit so that the entries lie between -1 and 1. Again, this modification is minor: following the definition from Wikipedia, the authors keep only the numerator, and I am now just including the denominator as well.
max_unit = 0
for crosscov in crosscovs:
    max_unit = np.max([max_unit, np.max(crosscov)])

# normalize so the entries lie between -1 and 1
crosscovs_new = []
for crosscov in crosscovs:
    crosscovs_new.append(crosscov / max_unit)
print(crosscovs_new)
[array([[1. , 0.6130197 , 0.60488232, 0.47264646, 0.4252463 ,
0.52691342],
[0.6130197 , 1. , 0.6019965 , 0.89924799, 0.47079497,
0.88246911],
[0.60488232, 0.6019965 , 1. , 0.45979895, 0.56052024,
0.63629094],
[0.47264646, 0.89924799, 0.45979895, 1. , 0.40738906,
0.86057503],
[0.4252463 , 0.47079497, 0.56052024, 0.40738906, 1. ,
0.51993107],
[0.52691342, 0.88246911, 0.63629094, 0.86057503, 0.51993107,
1. ]]), array([[0.30217009, 0.32196125],
[0.28659379, 0.29236942],
[0.36226884, 0.42253232],
[0.26813618, 0.25634579],
[0.17775219, 0.22163401],
[0.2368743 , 0.24996063]]), array([[0.30217009, 0.28659379, 0.36226884, 0.26813618, 0.17775219,
0.2368743 ],
[0.32196125, 0.29236942, 0.42253232, 0.25634579, 0.22163401,
0.24996063]]), array([[1. , 0.93595618],
[0.93595618, 1. ]])]
For clarity, here is a slightly different way to see that these numbers are indeed correlations of the original data.
df.corr()

                          Average ELA Proficiency  Average Math Proficiency
Average ELA Proficiency                  1.000000                  0.935956
Average Math Proficiency                 0.935956                  1.000000
That also lets you see the variable names. I just want to show you that the numbers above make sense and are what you are calling correlations.
Calculation of the CCA
So now I will modify the kcca function from pyrcca a bit. The idea is for that function to accept the previously calculated correlation matrices.
from rcca import _make_kernel
from scipy.linalg import eigh

def kcca_working(data, reg=0.,
                 numCC=None,
                 kernelcca=False,
                 ktype='linear',
                 gausigma=1.0,
                 degree=2,
                 crosscovs=None):
    """Set up and solve the kernel CCA eigenproblem
    """
    if kernelcca:
        kernel = [_make_kernel(d, ktype=ktype, gausigma=gausigma,
                               degree=degree) for d in data]
    else:
        kernel = [d.T for d in data]

    nDs = len(kernel)
    nFs = [k.shape[0] for k in kernel]
    numCC = min([k.shape[1] for k in kernel]) if numCC is None else numCC

    if crosscovs is None:
        # Get the auto- and cross-covariance matrices
        crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]

    # Allocate left-hand side (LH) and right-hand side (RH):
    LH = np.zeros((sum(nFs), sum(nFs)))
    RH = np.zeros((sum(nFs), sum(nFs)))

    # Fill the left and right sides of the eigenvalue problem
    for i in range(nDs):
        RH[sum(nFs[:i]) : sum(nFs[:i+1]),
           sum(nFs[:i]) : sum(nFs[:i+1])] = (crosscovs[i * (nDs + 1)]
                                             + reg * np.eye(nFs[i]))

        for j in range(nDs):
            if i != j:
                LH[sum(nFs[:j]) : sum(nFs[:j+1]),
                   sum(nFs[:i]) : sum(nFs[:i+1])] = crosscovs[nDs * j + i]

    LH = (LH + LH.T) / 2.
    RH = (RH + RH.T) / 2.

    maxCC = LH.shape[0]
    r, Vs = eigh(LH, RH, eigvals=(maxCC - numCC, maxCC - 1))
    r[np.isnan(r)] = 0
    rindex = np.argsort(r)[::-1]
    comp = []
    Vs = Vs[:, rindex]
    for i in range(nDs):
        comp.append(Vs[sum(nFs[:i]):sum(nFs[:i + 1]), :numCC])
    return comp, crosscovs
Let's run the function:
comp, crosscovs = kcca_working([X_sc, Y_sc], reg=0.,
                               numCC=2, kernelcca=False, ktype='linear',
                               gausigma=1.0, degree=2, crosscovs=crosscovs_new)
print(comp)
[array([[-0.00375779, 0.0078263 ],
[ 0.00061439, -0.00357358],
[-0.02054012, -0.0083491 ],
[-0.01252477, 0.02976148],
[ 0.00046503, -0.00905069],
[ 0.01415084, -0.01264106]]), array([[ 0.00632283, 0.05721601],
[-0.02606459, -0.05132531]])]
So I took the original function and made it possible to pass in the correlations; I also return them, just for checking.
I print the canonical covariates (comp); those are a and b in Eq. 4 of Bilenko et al. (2016).
Comparing results
Now I am going to compare results from the original and the modified function. I will show you that the results are equivalent.
I can obtain the original results this way, with crosscovs = None, so that the matrices are calculated internally as in the original instead of being supplied by us:
comp, crosscovs = kcca_working([X_sc, Y_sc], reg=0.,
                               numCC=2, kernelcca=False, ktype='linear',
                               gausigma=1.0, degree=2, crosscovs=None)
print(comp)
[array([[-0.13109264, 0.27302457],
[ 0.02143325, -0.12466608],
[-0.71655285, -0.2912628 ],
[-0.43693303, 1.03824477],
[ 0.01622265, -0.31573818],
[ 0.49365965, -0.44098996]]), array([[ 0.2205752 , 1.99601077],
[-0.90927705, -1.79051045]])]
I print the canonical covariates (comp); those are a' and b' in Eq. 4 of Bilenko et al. (2016).
a, b and a', b' are different, but they differ only by a scale factor, so for all practical purposes they are equivalent. This comes from how the correlations were rescaled (the division by max_unit).
To show that, let me pick numbers from each case and compute the ratio:
print(0.00061439/-0.00375779)
-0.16349769412340764
print(0.02143325/-0.13109264)
-0.16349697435340382
The ratios are the same, so the two sets of weights differ only by a common scale factor.
Once that modification is in place, you can just build on top of it.
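To tie this back to the original question: if all you ever get are the centred cross-product (or covariance, or correlation) matrices, you can also run classical CCA directly on them without pyrcca, following the Weenink notes linked in the question. A minimal numpy sketch (the function name and the whitening-by-matrix-square-root route are my own choices, not pyrcca code):

import numpy as np
from scipy.linalg import sqrtm

def cca_from_cov(cvx, cvy, cvxy):
    # Classical CCA from x'x, y'y and x'y of centred data.
    # Any common scaling of the three matrices (1, n, n-1, ...) cancels,
    # so cross-product, covariance or correlation matrices all give the
    # same canonical correlations.
    sx_inv = np.linalg.inv(np.real(sqrtm(cvx)))   # Sxx^{-1/2}
    sy_inv = np.linalg.inv(np.real(sqrtm(cvy)))   # Syy^{-1/2}
    K = sx_inv @ cvxy @ sy_inv
    # singular values of K are the canonical correlations
    U, rho, Vt = np.linalg.svd(K, full_matrices=False)
    A = sx_inv @ U       # canonical weights for the x block
    B = sy_inv @ Vt.T    # canonical weights for the y block
    return rho, A, B

# usage with the matrices from this post
rho, A, B = cca_from_cov(X_sc.T @ X_sc, Y_sc.T @ Y_sc, X_sc.T @ Y_sc)
print(rho)

The weights A and B should agree with the components above up to ordering, sign and an overall scale, and rho gives the canonical correlations themselves (the kcca code above computes them as r but does not return them).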
References:
A nice post with an example and explanations in Python, using the pyrcca library: https://towardsdatascience.com/understanding-how-schools-work-with-canonical-correlation-analysis-4c9a88c6b913
Bilenko, Natalia Y., and Jack L. Gallant. "Pyrcca: regularized kernel canonical correlation analysis in python and its applications to neuroimaging." Frontiers in neuroinformatics 10 (2016): 49. Paper in which pyrcca is explained: https://www.frontiersin.org/articles/10.3389/fninf.2016.00049/full
Let's say I have this dictionary of PuLP variables:
from pulp import LpVariable

cars = ["car1", "car2", "car3", "car4", "car5"]
x = LpVariable.dicts("car", cars, cat='Integer', lowBound=0, upBound=800)
Is there any way to give each car its own lowBound and upBound, please?
Note
The easy code version looks like this :
car1 = LpVariable("car1", 0, 40)
car2 = LpVariable("car2", 0, 1000)
Please notice that the car1 upBound is 40 and the car2 upBound is 1000.
Finally,
I've done it, using the great code from this answer:
How do I generate PuLP variables and constraints without using exec?
Thanks a lot, DSM!
from pulp import LpProblem, LpMaximize, LpVariable

prob = LpProblem("problem", LpMaximize)

# Setting LP variables
lpVars = ["car1", "car2", "car3"]
upbounds = [40, 80, 30]
xs = [LpVariable("car{}".format(i+1), lowBound=0, upBound=upbounds[i], cat='Integer')
      for i in range(len(lpVars))]
# add objective
margin = [3,2,3]
total_prof = sum(x * value for x,value in zip(xs, margin))
prob += total_prof
# add constraint
labour = [2,1,4]
total_labour = sum(x * w for x,w in zip(xs, labour))
prob += total_labour <= 100
# Solve the problem
prob.solve()
The next step is getting the array variables (upbounds, margin, labour, etc.) from the front-end app. Thanks again!
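For completeness, here is a dictionary-based variant of the same idea, which keeps the LpVariable.dicts-style structure from the question; the per-car bounds below are made up for illustration:

from pulp import LpVariable

cars = ["car1", "car2", "car3", "car4", "car5"]
# hypothetical bounds per car, keyed by car name
bounds = {"car1": (0, 40), "car2": (0, 1000), "car3": (0, 800),
          "car4": (0, 800), "car5": (0, 800)}

# same kind of dict that LpVariable.dicts("car", cars, ...) returns,
# but each variable gets its own lowBound/upBound
x = {c: LpVariable("car_{}".format(c),
                   lowBound=bounds[c][0],
                   upBound=bounds[c][1],
                   cat="Integer")
     for c in cars}

The entries x["car1"], x["car2"], ... can then go into the objective and constraints exactly as before.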
I'm trying to run a binomial-beta model in JAGS (see example code below). I keep getting the error:
Error: The following error was encountered while attempting to run the JAGS model:
Error in node a0
Slicer stuck at value with infinite density
which I am struggling to make sense of. I thought perhaps the initial conditions were sending the beta distribution into infinite regions of parameter space but after some investigation that doesn't seem to be the case.
Any thoughts on what this error means or how to adjust the code to accommodate it?
I've put my code below along with some made up sample data. This is the kind of data I might expect in my dataset.
#Generate some sample data
counts = c(80,37,10,43,55,23,53,100,7,11)
n = c(100,57,25,78,55,79,65,100,9,11)
consp = c(1.00, 0.57, 0.25, 0.78, 0.55, 0.79, 0.65, 1.00, 0.09, 0.11)
treat = c(0.5,0.5,0.2,0.9,0.5,0.2,0.5,0.9,0.5,0.2)
#Model spec
model1.string <- "model{
  for (i in 1:length(counts)){
    counts[i] ~ dbin(p[i], n[i])
    p[i] ~ dbeta((mu[i] * theta[i]), ((1 - mu[i]) * theta[i]))
    mu[i] <- ilogit(m0 + m1*consp[i] + m2*treat[i])
    theta[i] <- exp(k0 + k1*consp[i])
  }
  m0 ~ dnorm(0, 1)
  m1 ~ dnorm(0, 1)
  m2 ~ dnorm(-1, 1)
  k0 ~ dnorm(1, 1)
  k1 ~ dnorm(0, 1)
}"
#Specify number of chains
chains=5
#Generate initial conditions
inits <- replicate(chains, list(m0 = runif(1, 0.05, 0.25),
                                m1 = runif(1, 0, 0.2),
                                m2 = runif(1, -1, 0),
                                k0 = runif(1, 0.5, 1.5),
                                k1 = runif(1, 0, 0.3)), simplify = FALSE)
#Run
library(runjags)
model1.spec <- textConnection(model1.string)
results <- autorun.jags(model1.string,
                        data = list('counts' = counts,
                                    'n' = n,
                                    'consp' = consp,
                                    'treat' = treat),
                        startsample = 10000,
                        startburnin = 5000,
                        psrf.target = 1.02,
                        n.chains = 5,
                        monitor = c("m0", "m1", "m2", "k0", "k1"),
                        inits = inits)
The slice sampler (which JAGS uses) doesn't work when the probability density of the sampled variable is infinite at a point. This can happen with the beta distribution at 0 or 1 whenever one of its shape parameters is less than 1.
A workaround is to truncate the node that creates the problem, as in:
p[i] ~ dbeta((mu[i] * theta[i]), ((1 - mu[i]) * theta[i])) T(0.001, 0.999)
(If I remember the syntax right, truncation in JAGS is written T(lower, upper); in any case, JAGS definitely allows truncated random variables.)
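For intuition on why this happens: with the mean/precision parameterisation in the model, each p[i] has

p[i] ~ Beta(alpha_i, beta_i),  alpha_i = mu[i]*theta[i],  beta_i = (1 - mu[i])*theta[i],
f(p) proportional to p^(alpha_i - 1) * (1 - p)^(beta_i - 1),

so whenever alpha_i < 1 the density goes to infinity at p = 0 (and whenever beta_i < 1 it does so at p = 1). That is exactly where the slicer can get stuck, and truncating the node away from 0 and 1 avoids it.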