Measurement invariance in Lavaan (subgroup analysis); Warning: covariance matrix of latent variables is not positive definite

I have a good fitting CFA model and am now conducting some additional analyses by different subgroups (e.g., gender, age, education, ...). Specifically, I'm calculating fit indices for measurement invariance (configural, metric, scalar). Everything looks fine, except for one of my subgroups. I get the following Warning message in Lavaan for a group that has a relatively small sample size:
Warning message:
In lav_object_post_check(object) :
lavaan WARNING: covariance matrix of latent variables
is not positive definite;
use lavInspect(fit, "cov.lv") to investigate.
Model specification and fit:
model <- '
F1 =~ a + b + c + d + e
F2 =~ f + g + h + i + j
F3 =~ k + l + m + n + o
F4 =~ p + q + r + s + t
F5 =~ u + v + w + x + y
Crossload =~ g + j + n + o + q
'
fit.model.group3 <- sem(model, data=MyData.group3, estimator='WLSMV', missing='pairwise', ordered = T)
fit.model.configural.edu <- sem(model, data=MyData.edu, estimator='WLSMV', missing='pairwise', ordered = T, group = "edu")
This grouping variable (edu) has 3 levels (Ns: group1 = 1150, group2 = 215, group3 = 120), and I only get the warning for group3 (the smallest group). I can fix it by combining some factors (which were strongly correlated for this group) and then fitting the more parsimonious (4-factor) model. However, as the point of this subgroup analysis is to investigate the same CFA model across the different subgroups, I don't think it's appropriate to change the model specification(?). And so, is it okay to ignore this warning message for the sake of the measurement invariance analysis (the model still converges normally, and I get the necessary metrics)? Note that some of the eigenvalues are negative, and I get the following output from this call:
> det(lavInspect(fit.model.group3, "cov.lv"))
[1] 2.544325e-06
As it's close to zero, could it be a machine precision issue?
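For reference, a minimal way to inspect this directly (a sketch; lavInspect() is the lavaan call named in the warning, while eigen() and cov2cor() are base R):
lv.cov <- lavInspect(fit.model.group3, "cov.lv")
eigen(lv.cov, symmetric = TRUE)$values  # negative values => matrix is not positive definite
cov2cor(lv.cov)                         # latent correlations at or above 1 point to the offending factors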
Any advice would be very much appreciated. Unfortunately, my technical knowledge is lacking and so I've struggled to follow some of the existing advice elsewhere.
Best wishes,
Mary

Related

Canonical correlation analysis on covariance matrices instead of raw data

Due to privacy issues I don't have the original raw data matrices, but I can have the covariance matrices of the x and y datasets (x'x, y'y, x'y), or the correlation matrix between the two of them (or any other sort of matrix that is not the original data matrix).
I need to find a way to apply canonical correlation analysis directly on those matrices. Browsing the net I didn't find any solution to my problem. I want to ask if there is already an implemented algorithm that can work on these data; R would be best, but other languages are fine too.
Example from the tutorial in R for cca package: (https://stats.idre.ucla.edu/r/dae/canonical-correlation-analysis/)
mm <- read.csv("https://stats.idre.ucla.edu/stat/data/mmreg.csv")
colnames(mm) <- c("Control", "Concept", "Motivation", "Read", "Write", "Math",
"Science", "Sex")
You divide the dataset into x and y:
x <- mm[, 1:3]
y <- mm[, 4:8]
Then the function takes these two datasets as input: cc(x, y) (note that the function standardizes the data by itself).
What I want to know is whether there is a way to perform CCA by first centering the matrices around the mean:
x = scale(x, scale = F)
y = scale(y, scale = F)
and then computing the covariance matrices x'x, y'y, x'y:
cvx = crossprod(x); cvy = crossprod(y); cvxy = crossprod(x,y)
The algorithm should take those matrices as input and compute the canonical variates and correlation coefficients, something like f(cvx, cvy, cvxy).
This article, for example, describes a solution starting from covariance matrices, but I don't know whether it is just theory or whether someone has actually implemented it:
http://graphics.stanford.edu/courses/cs233-20-spring/ReferencedPapers/CCA_Weenik.pdf
I hope this is detailed enough!
In short: the correlations are used internally in most (probably all) CCA implementations.
At length: you will need to work out how to do that depending on your case. Let me show you an example below.
What is Canonical-correlation analysis (CCA)?
Canonical-correlation analysis (CCA) helps you identify the best possible linear relations you could create between two datasets. See Wikipedia, and see the references for examples. I will follow this post for the data and the libraries used.
Set up the libraries, load the data, select some variables, drop missing values, and standardize the data.
import pandas as pd
import numpy as np
df = pd.read_csv('2016 School Explorer.csv')
# choose relevant features
df = df[['Rigorous Instruction %',
'Collaborative Teachers %',
'Supportive Environment %',
'Effective School Leadership %',
'Strong Family-Community Ties %',
'Trust %','Average ELA Proficiency',
'Average Math Proficiency']]
df.corr()
# drop missing values
df = df.dropna()
# separate X and Y groups
X = df[['Rigorous Instruction %',
'Collaborative Teachers %',
'Supportive Environment %',
'Effective School Leadership %',
'Strong Family-Community Ties %',
'Trust %'
]]
Y = df[['Average ELA Proficiency',
'Average Math Proficiency']]
for col in X.columns:
    X[col] = X[col].str.strip('%')
    X[col] = X[col].astype('int')
# Standardise the data
from sklearn.preprocessing import StandardScaler
sc = StandardScaler(with_mean=True, with_std=True)
X_sc = sc.fit_transform(X)
Y_sc = sc.fit_transform(Y)
What are Correlations?
I am pausing here to talk about the idea and the implementation.
First of all, CCA is naturally based on that idea; however, for the numerical solution there are different ways to do it.
The definition I am referring to is the one from Wikipedia: the correlation between the two projections a'X and b'Y.
I am talking about this because I am going to modify a function of that library and I want you to really pay attention to that.
See Eq. 4 in Bilenko et al. (2016); you just need to be careful about how the terms are placed.
Notice that strictly speaking you do not need the correlations.
Let me show you the function that works out that expression in the pyrcca library:
def kcca(data, reg=0., numCC=None, kernelcca=True,
         ktype='linear',
         gausigma=1.0, degree=2):
    """Set up and solve the kernel CCA eigenproblem
    """
    if kernelcca:
        kernel = [_make_kernel(d, ktype=ktype, gausigma=gausigma,
                               degree=degree) for d in data]
    else:
        kernel = [d.T for d in data]
    nDs = len(kernel)
    nFs = [k.shape[0] for k in kernel]
    numCC = min([k.shape[1] for k in kernel]) if numCC is None else numCC
    # Get the auto- and cross-covariance matrices
    crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]
    # Allocate left-hand side (LH) and right-hand side (RH):
    LH = np.zeros((sum(nFs), sum(nFs)))
    RH = np.zeros((sum(nFs), sum(nFs)))
    # Fill the left and right sides of the eigenvalue problem
    for i in range(nDs):
        RH[sum(nFs[:i]) : sum(nFs[:i+1]),
           sum(nFs[:i]) : sum(nFs[:i+1])] = (crosscovs[i * (nDs + 1)]
                                             + reg * np.eye(nFs[i]))
        for j in range(nDs):
            if i != j:
                LH[sum(nFs[:j]) : sum(nFs[:j+1]),
                   sum(nFs[:i]) : sum(nFs[:i+1])] = crosscovs[nDs * j + i]
    LH = (LH + LH.T) / 2.
    RH = (RH + RH.T) / 2.
    maxCC = LH.shape[0]
    r, Vs = eigh(LH, RH, eigvals=(maxCC - numCC, maxCC - 1))
    r[np.isnan(r)] = 0
    rindex = np.argsort(r)[::-1]
    comp = []
    Vs = Vs[:, rindex]
    for i in range(nDs):
        comp.append(Vs[sum(nFs[:i]):sum(nFs[:i + 1]), :numCC])
    return comp
The output here is the canonical covariates (comp); those are a and b in Eq. 4 of Bilenko et al. (2016).
I just want you to pay attention to this:
# Get the auto- and cross-covariance matrices
crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]
That is exactly the place where that operation happens. Notice that it is not exactly the definition from Wikipedia, but it is mathematically equivalent.
Calculation of the correlations
I am going to calculate the correlations as in Wikipedia, and later I will modify that function. It takes a couple of extra details, but it makes sure the original question is answered clearly.
# Get the auto- and cross-covariance matrices
crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]
print(crosscovs)
[array([[1217. , 746.04496925, 736.14178336, 575.21073838,
517.52474332, 641.25363806],
[ 746.04496925, 1217. , 732.6297358 , 1094.38480773,
572.95747557, 1073.96490387],
[ 736.14178336, 732.6297358 , 1217. , 559.5753228 ,
682.15312862, 774.36607617],
[ 575.21073838, 1094.38480773, 559.5753228 , 1217. ,
495.79248754, 1047.31981248],
[ 517.52474332, 572.95747557, 682.15312862, 495.79248754,
1217. , 632.75610906],
[ 641.25363806, 1073.96490387, 774.36607617, 1047.31981248,
632.75610906, 1217. ]]), array([[367.74099904, 391.82683717],
[348.78464015, 355.81358426],
[440.88117453, 514.22183796],
[326.32173163, 311.97282341],
[216.32441793, 269.72859023],
[288.27601974, 304.20209135]]), array([[367.74099904, 348.78464015, 440.88117453, 326.32173163,
216.32441793, 288.27601974],
[391.82683717, 355.81358426, 514.22183796, 311.97282341,
269.72859023, 304.20209135]]), array([[1217. , 1139.05867099],
[1139.05867099, 1217. ]])]
Have a look at the output. I am going to change it a bit so the values are between -1 and 1. Again, this modification is minor: following the definition from Wikipedia, the authors only keep the numerator, and I am now going to include the denominator as well.
max_unit = 0
for crosscov in crosscovs:
    max_unit = np.max([max_unit, np.max(crosscov)])
# Normalise
crosscovs_new = []
for crosscov in crosscovs:
    crosscovs_new.append(crosscov / max_unit)
print(crosscovs_new)
[array([[1. , 0.6130197 , 0.60488232, 0.47264646, 0.4252463 ,
0.52691342],
[0.6130197 , 1. , 0.6019965 , 0.89924799, 0.47079497,
0.88246911],
[0.60488232, 0.6019965 , 1. , 0.45979895, 0.56052024,
0.63629094],
[0.47264646, 0.89924799, 0.45979895, 1. , 0.40738906,
0.86057503],
[0.4252463 , 0.47079497, 0.56052024, 0.40738906, 1. ,
0.51993107],
[0.52691342, 0.88246911, 0.63629094, 0.86057503, 0.51993107,
1. ]]), array([[0.30217009, 0.32196125],
[0.28659379, 0.29236942],
[0.36226884, 0.42253232],
[0.26813618, 0.25634579],
[0.17775219, 0.22163401],
[0.2368743 , 0.24996063]]), array([[0.30217009, 0.28659379, 0.36226884, 0.26813618, 0.17775219,
0.2368743 ],
[0.32196125, 0.29236942, 0.42253232, 0.25634579, 0.22163401,
0.24996063]]), array([[1. , 0.93595618],
[0.93595618, 1. ]])]
For clarity, I will show you in a slightly different way that these numbers are indeed the correlations of the original data.
df.corr()
Average ELA Proficiency Average Math Proficiency
Average ELA Proficiency 1.000000 0.935956
Average Math Proficiency 0.935956 1.000000
That also lets you see the variable names. I just want to show you that the numbers above make sense and are what you are calling correlations.
Calculations of the CCA
So now I will modify the kcca function from pyrcca a little. The idea is for that function to accept the previously calculated correlation matrices.
from rcca import _make_kernel
from scipy.linalg import eigh
def kcca_working(data, reg=0.,
                 numCC=None,
                 kernelcca=False,
                 ktype='linear',
                 gausigma=1.0,
                 degree=2,
                 crosscovs=None):
    """Set up and solve the kernel CCA eigenproblem
    """
    if kernelcca:
        kernel = [_make_kernel(d, ktype=ktype, gausigma=gausigma,
                               degree=degree) for d in data]
    else:
        kernel = [d.T for d in data]
    nDs = len(kernel)
    nFs = [k.shape[0] for k in kernel]
    numCC = min([k.shape[1] for k in kernel]) if numCC is None else numCC
    if crosscovs is None:
        # Get the auto- and cross-covariance matrices
        crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]
    # Allocate left-hand side (LH) and right-hand side (RH):
    LH = np.zeros((sum(nFs), sum(nFs)))
    RH = np.zeros((sum(nFs), sum(nFs)))
    # Fill the left and right sides of the eigenvalue problem
    for i in range(nDs):
        RH[sum(nFs[:i]) : sum(nFs[:i+1]),
           sum(nFs[:i]) : sum(nFs[:i+1])] = (crosscovs[i * (nDs + 1)]
                                             + reg * np.eye(nFs[i]))
        for j in range(nDs):
            if i != j:
                LH[sum(nFs[:j]) : sum(nFs[:j+1]),
                   sum(nFs[:i]) : sum(nFs[:i+1])] = crosscovs[nDs * j + i]
    LH = (LH + LH.T) / 2.
    RH = (RH + RH.T) / 2.
    maxCC = LH.shape[0]
    r, Vs = eigh(LH, RH, eigvals=(maxCC - numCC, maxCC - 1))
    r[np.isnan(r)] = 0
    rindex = np.argsort(r)[::-1]
    comp = []
    Vs = Vs[:, rindex]
    for i in range(nDs):
        comp.append(Vs[sum(nFs[:i]):sum(nFs[:i + 1]), :numCC])
    return comp, crosscovs
Let's run the function:
comp, crosscovs = kcca_working([X_sc, Y_sc], reg=0.,
numCC=2, kernelcca=False, ktype='linear',
gausigma=1.0, degree=2, crosscovs = crosscovs_new)
print(comp)
[array([[-0.00375779, 0.0078263 ],
[ 0.00061439, -0.00357358],
[-0.02054012, -0.0083491 ],
[-0.01252477, 0.02976148],
[ 0.00046503, -0.00905069],
[ 0.01415084, -0.01264106]]), array([[ 0.00632283, 0.05721601],
[-0.02606459, -0.05132531]])]
So I took the original function and made it possible to pass in the correlations; I also return them, just for checking.
The printed output is the canonical covariates (comp); those are a and b in Eq. 4 of Bilenko et al. (2016).
Comparing results
Now I am going to compare results from the original and the modified function. I will show you that the results are equivalent.
I can obtain the original results this way, with crosscovs = None, so the matrices are calculated internally as before instead of being supplied by us:
comp, crosscovs = kcca_working([X_sc, Y_sc], reg=0.,
numCC=2, kernelcca=False, ktype='linear',
gausigma=1.0, degree=2, crosscovs = None)
print(comp)
[array([[-0.13109264, 0.27302457],
[ 0.02143325, -0.12466608],
[-0.71655285, -0.2912628 ],
[-0.43693303, 1.03824477],
[ 0.01622265, -0.31573818],
[ 0.49365965, -0.44098996]]), array([[ 0.2205752 , 1.99601077],
[-0.90927705, -1.79051045]])]
These are the canonical covariates (comp); they are a' and b' in Eq. 4 of Bilenko et al. (2016).
a, b and a', b' are different, but they differ only in scale, so for all practical purposes they are equivalent. This is a consequence of how the correlations were defined (the division by max_unit above).
To show that, let me pick numbers from each case and calculate the ratio:
print(0.00061439/-0.00375779)
-0.16349769412340764
print(0.02143325/-0.13109264)
-0.16349697435340382
The ratios match, so the two solutions agree up to scale.
Once the function is modified like this, you can build on top of it.
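For completeness, here is a compact standalone sketch (standard linear algebra, not part of pyrcca; the names are illustrative) of classical CCA computed directly from covariance matrices Sxx, Syy, Sxy, assuming Sxx and Syy are full rank:
import numpy as np
from scipy.linalg import inv, sqrtm, svd

def cca_from_cov(Sxx, Syy, Sxy):
    # Whiten each block: K = Sxx^(-1/2) Sxy Syy^(-1/2)
    Sxx_inv_sqrt = inv(sqrtm(Sxx))
    Syy_inv_sqrt = inv(sqrtm(Syy))
    K = Sxx_inv_sqrt @ Sxy @ Syy_inv_sqrt
    # The singular values of K are the canonical correlations
    U, s, Vt = svd(K, full_matrices=False)
    a = Sxx_inv_sqrt @ U     # canonical weights for the x block
    b = Syy_inv_sqrt @ Vt.T  # canonical weights for the y block
    return s, a, b
With the matrices computed earlier (crosscovs_new[0] = Sxx, crosscovs_new[3] = Syy, crosscovs_new[1] = Sxy), a quick check could look like
corrs, a, b = cca_from_cov(crosscovs_new[0], crosscovs_new[3], crosscovs_new[1])
and the weights should differ from comp only by scale and sign, in line with the comparison above.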
References:
Cool post with example and explanations in Python, using library pyrcca: https://towardsdatascience.com/understanding-how-schools-work-with-canonical-correlation-analysis-4c9a88c6b913
Bilenko, Natalia Y., and Jack L. Gallant. "Pyrcca: regularized kernel canonical correlation analysis in python and its applications to neuroimaging." Frontiers in neuroinformatics 10 (2016): 49. Paper in which pyrcca is explained: https://www.frontiersin.org/articles/10.3389/fninf.2016.00049/full

Calculating a custom probability distribution in python (numerically)

I have a custom (discrete) probability distribution defined somewhat in the form: f(x)/(sum(f(x')) for x' in a given discrete set X). Also, 0<=x<=1.
So I have been trying to implement it in python 3.8.2, and the problem is that the numerator and denominator both come out to be really small and python's floating point representation just takes them as 0.0.
After calculating these probabilities, I need to sample a random element from an array, whose each index may be selected with the corresponding probability in the distribution. So if my distribution is [p1,p2,p3,p4], and my array is [a1,a2,a3,a4], then probability of selecting a2 is p2 and so on.
So how can I implement this in an elegant and efficient way?
Is there any way I could use the np.random.beta() in this case? Since the difference between the beta distribution and my actual distribution is only that the normalization constant differs and the domain is restricted to a few points.
Note: The Probability Mass function defined above is actually in the form given by the Bayes theorem and f(x)=x^s*(1-x)^f, where s and f are fixed numbers for a given iteration. So the exact problem is that, when s or f become really large, this thing goes to 0.
You could well compute things by working with logs. The point is that while both the numerator and denominator might underflow to 0, their logs won't unless your numbers are really astonishingly small.
You say
f(x) = x^s*(1-x)^t
(renaming your second exponent to t so it doesn't clash with the function name f), so
logf(x) = s*log(x) + t*log(1-x)
and you want to compute, say
p = f(x) / Sum{ y in X | f(y)}
so
p = exp( logf(x) - log Sum{ y in X | f(y)} )
  = exp( logf(x) - log Sum{ y in X | exp(logf(y)) } )
The only difficulty is in computing the second term, but this is a common problem (the log-sum-exp trick), and ready-made implementations exist, for example scipy.special.logsumexp.
On the other hand, computing logsumexp is easy enough to do by hand.
We want
S = log( sum{ i | exp(l[i])})
if L is the maximum of the l[i] then
S = log( exp(L)*sum{ i | exp(l[i]-L)})
= L + log( sum{ i | exp( l[i]-L)})
The last sum can be computed as written, because each term is now between 0 and 1 so there is no danger of overflow, and one of the terms (the one for which l[i]==L) is 1, and so if other terms underflow, that is harmless.
This may however lose a little accuracy. A refinement would be to recognize the set A of indices where
l[i]>=L-eps (eps a user set parameter, eg 1)
And then compute
N = Sum{ i in A | exp(l[i]-L)}
B = log1p( Sum{ i not in A | exp(l[i]-L)}/N)
S = L + log( N) + B
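Putting it together, a minimal sketch in Python (assuming f(x) = x^s*(1-x)^t as above; scipy.special.logsumexp does the stable summation, and numpy's choice does the sampling):
import numpy as np
from scipy.special import logsumexp

def sample_custom(xs, items, s, t, size=None, rng=None):
    """Sample from items with probabilities proportional to xs**s * (1-xs)**t,
    working entirely in log space so large s and t do not underflow."""
    rng = np.random.default_rng() if rng is None else rng
    xs = np.asarray(xs, dtype=float)
    with np.errstate(divide='ignore'):            # log(0) -> -inf is fine here
        logf = s * np.log(xs) + t * np.log1p(-xs)
    logp = logf - logsumexp(logf)                 # log of the normalized probabilities
    return rng.choice(items, size=size, p=np.exp(logp))

# Illustrative usage: four support points, four items, large exponents
xs = np.array([0.2, 0.4, 0.6, 0.8])
print(sample_custom(xs, ['a1', 'a2', 'a3', 'a4'], s=2000, t=3000, size=5))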

How is the full adder's carry out term derived?

I'm reading the section of the full adder in Digital Design by Morris Mano and I can't seem to figure out how it got from equation A to equation B.
From a full adder's truth table and k-map using inputs x, y, and z, the carry out term, C, is defined as:
C = xy + xz + yz (equation A)
I could understand the above, but in order to leverage the xor already used by the summation term of x, y, and z, the book redefines C as:
C = z(xy' + x'y) + xy = xy'z + x'yz + xy (equation B)
How are these two equivalent? I've tried to derive one from the other on paper but I'm not able to come up with the steps in between.
Sorry my comment (which I removed) was hastily stated.
Consider the following logic table (I'm using ^ to represent XOR and + to represent OR, for brevity):
x y z | x+y | x^y | xy + xz + yz | xy + (x^y)z
0 0 0 |  0  |  0  |      0       |      0
0 0 1 |  0  |  0  |      0       |      0
0 1 0 |  1  |  1  |      0       |      0
0 1 1 |  1  |  1  |      1       |      1
1 0 0 |  1  |  1  |      0       |      0
1 0 1 |  1  |  1  |      1       |      1
1 1 0 |  1  |  0  |      1       |      1
1 1 1 |  1  |  0  |      1       |      1
The results of xy + xz + yz are the same as xy + (x ^ y)z because, for the first 6 cases, the value of x + y and x ^ y are the same. For the last two cases where they are different, the xy term being OR'ed in is 1 which makes their difference irrelevant to the final value.
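For completeness, the same equivalence can also be derived algebraically (a standard Boolean-algebra argument; this is not from the book's text):
C = xy + xz + yz
  = xy + (x + y)z          factor z out of the last two terms
  = xy + (x^y + xy)z       since x + y = x^y + xy
  = xy + (x^y)z + xyz      distribute z
  = xy(1 + z) + (x^y)z     group the xy terms
  = xy + (x^y)z            since 1 + z = 1
  = xy + (xy' + x'y)z
  = xy + xy'z + x'yz
which is equation B.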

gam in mgcv R with big number of covariates

I would like to know if there is another way to write the function:
gam(VariableResponse ~ s(CovariateName1) + s(CovariateName2) + ... + s(CovariateName100),
family = gaussian(link = identity), data = MyData)
in the mgcv package, without typing all 100 covariate names as above?
Supposing that in MyData I have only VariableResponse in column 1, CovariateName1 in column 2, etc.
Many thanks!
Yes, use the brute-force approach: generate a formula by pasting the covariate names together with the strings 's(' and ')' and then collapsing the whole thing with ' + '. Then convert the resulting string to a formula and pass that to gam(). You may need to fix issues with the formula's environment if gam() can't find the variables you name, as it is going to do some non-standard evaluation on the formula to identify which terms need smooths estimated and hence need to be replaced by a basis expansion.
library(mgcv)
set.seed(2) ## simulate some data...
df <- gamSim(1, n=400, dist = "normal", scale = 2)
> names(df)
[1] "y" "x0" "x1" "x2" "x3" "f" "f0" "f1" "f2" "f3"
We'll ignore the last 5 of those columns for the purposes of this example
df <- df[1:5]
Make the formula
fm <- paste('s(', names(df[ -1 ]), ')', sep = "", collapse = ' + ')
fm <- as.formula(paste('y ~', fm))
Now fit the model
m <- gam(fm, data = df)
> m
Family: gaussian
Link function: identity
Formula:
y ~ s(x0) + s(x1) + s(x2) + s(x3)
Estimated degrees of freedom:
2.5 2.4 7.7 1.0 total = 14.6
GCV score: 4.050519
You do have to be careful about fitting GAMs this way, however; concurvity (the nonlinear counterpart to multicollinearity in linear models) can cause catastrophically bad estimates of smooth functions.
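As an aside, base R's reformulate() can build the same formula in one line (a sketch; it should give the same model as the paste()/as.formula() approach above):
fm <- reformulate(paste0("s(", names(df)[-1], ")"), response = "y")
m <- gam(fm, data = df)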

Line segment intersection

I found this code snippet on raywenderlich.com, but the link to the explanation isn't valid anymore. I "translated" the answer into Swift; I hope you can follow it, as it's actually quite easy even without knowing the language. Could anyone explain what exactly is going on here? Thanks for any help.
class func linesCross(#line1: Line, line2: Line) -> Bool {
let denominator = (line1.end.y - line1.start.y) * (line2.end.x - line2.start.x) -
(line1.end.x - line1.start.x) * (line2.end.y - line2.start.y)
if denominator == 0 { return false } //lines are parallel
let ua = ((line1.end.x - line1.start.x) * (line2.start.y - line1.start.y) -
(line1.end.y - line1.start.y) * (line2.start.x - line1.start.x)) / denominator
let ub = ((line2.end.x - line2.start.x) * (line2.start.y - line1.start.y) -
(line2.end.y - line2.start.y) * (line2.start.x - line1.start.x)) / denominator
//lines may touch each other - no test for equality here
return ua > 0 && ua < 1 && ub > 0 && ub < 1
}
You can find a detailed segment-intersection algorithm
in the book Computational Geometry in C, Sec. 7.7.
The SegSegInt code described there is available here.
I recommend avoiding slope calculations.
There are several "degenerate" cases that require care: collinear segments
overlapping or not, one segment endpoint in the interior of the other segments,
etc. I wrote the code to return an indication of these special cases.
This is what the code is doing.
Every point P in the segment AB can be described as:
P = A + u(B - A)
for some constant 0 <= u <= 1. In fact, when u=0 you get P=A, and you get P=B when u=1. Intermediate values of u will give you intermediate values of P in the segment. For instance, when u = 0.5 you will get the point in the middle. In general, you can think of the parameter u as the ratio between the lengths of AP and AB.
Now, if you have another segment CD you can describe the points Q on it in the same way, but with a different u, which I will call v:
Q = C + v(D - C)
Again, keep in mind that Q lies between C and D if, and only if, 0 <= v <= 1 (same as above for P).
To find the intersection between the two segments you have to equate P=Q. In other words, you need to find u and v, both between 0 and 1 such that:
A + u(B - A) = C + v(D - C)
So, you have this equation and you have to see if it is solvable within the given constraints on u and v.
Given that A, B, C and D are points with two coordinates x,y each, you can open the equation above into two equations:
ax + u(bx - ax) = cx + v(dx - cx)
ay + u(by - ay) = cy + v(dy - cy)
where ax = A.x, ay = A.y, etc., are the coordinates of the points.
Now we are left with a 2x2 linear system. In matrix form:
|bx-ax  cx-dx| |u|   |cx-ax|
|by-ay  cy-dy| |v| = |cy-ay|
The determinant of the matrix is
det = (bx-ax)(cy-dy) - (by-ay)(cx-dx)
This quantity corresponds to the denominator of the code snippet (please check).
Now, multiplying both sides by the cofactor matrix:
|cy-dy dx-cx|
|ay-by bx-ax|
we get
det*u = (cy-dy)(cx-ax) + (dx-cx)(cy-ay)
det*v = (ay-by)(cx-ax) + (bx-ax)(cy-ay)
which correspond to the variables ua and ub defined in the code (check this too! note that the names end up swapped: the code's ub is u here, the parameter along line1, and the code's ua is v here, the parameter along line2).
Finally, once you have u and v you can check whether they are both between 0 and 1 and in that case return that there is intersection. Otherwise, there isn't.
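As a quick sanity check with concrete numbers (an illustrative example): take A=(0,0), B=(2,2) for one segment and C=(0,2), D=(2,0) for the other. Then
det   = (bx-ax)(cy-dy) - (by-ay)(cx-dx) = 2*(2-0) - 2*(0-2) = 8
det*u = (cy-dy)(cx-ax) + (dx-cx)(cy-ay) = 2*0 + 2*2 = 4,  so u = 0.5
det*v = (ay-by)(cx-ax) + (bx-ax)(cy-ay) = (-2)*0 + 2*2 = 4,  so v = 0.5
Both parameters are strictly between 0 and 1, so the segments cross, at the common midpoint (1,1).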
For a given line the slope is
m=(y_end-y_start)/(x_end-x_start)
if two slopes are equal, the lines are parallel
m1 = m2
(y1_end-y1_start)/(x1_end-x1_start) = (y2_end-y2_start)/(x2_end-x2_start)
Cross-multiplying gives (y1_end-y1_start)*(x2_end-x2_start) = (y2_end-y2_start)*(x1_end-x1_start), which is exactly the condition that the denominator in the code is zero; that is why the code returns false (parallel lines) when denominator == 0.
Regarding the rest of the code, find the explanation on Wikipedia under "Given two points on each line".
