Implications of regressing Y = f(x1, x2) where Y = x1 + x2 + x3

In various papers I have seen regressions of the form Y = f(x1, x2), where f() is usually a simple OLS and, importantly, Y = x1 + x2 + x3. In other words, the regressors are literally components of Y.
These papers use the regression as a way to describe the data rather than to isolate a causal effect of X on Y. I was wondering what the implications of this strategy are. To begin with, do the numbers / significance tests make any sense at all? Thanks.
I understand that the approach mechanically fails if the regressors included in the analysis completely describe Y, for obvious reasons (perfect collinearity). However, I would like to understand better the implications of including only some of the x's.
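As a small illustration of what happens (a made-up simulation, not from any of the papers), regressing Y = x1 + x2 + x3 on only x1 and x2 simply treats x3 as an omitted variable: the estimates are fine when x3 is independent of the included regressors and biased when it is not.

# assumed R sketch: Y is an exact sum of three components, but only two are included
set.seed(1)
n  <- 1000
x1 <- rnorm(n)
x2 <- rnorm(n)
x3 <- 0.5 * x1 + rnorm(n)      # x3 correlated with an included regressor
Y  <- x1 + x2 + x3

coef(lm(Y ~ x1 + x2))          # x1 coefficient is about 1.5 (1 + 0.5), x2 about 1
coef(lm(Y ~ x1 + x2 + x3))     # all components included: exact fit, coefficients 1, 1, 1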

Multiple regression correlation effect

I would like to investigate the effects of two independent variables on a dependent variable. Suppose we have independent variables X1 and X2, and a dependent variable Y.
I use two different approaches. In the first approach, to eliminate the effect of X1 on Y, I generate the conditional distribution of Y|X1 and regress it on the second variable X2. When I check the correlation between X2 and Y|X1, I obtain a relatively high fit (R2 > 0.50). However, when I perform a multiple regression over a wide range of data (X1 and X2 together), the effect of X2 on Y shrinks and becomes insignificant. Why do these approaches give conflicting results? What is the most appropriate approach to determine the effect of X2 on Y for a given value of X1? Thanks.
It would help to see the code, or the above in mathematical notation.
For instance: did you include a constant term?
What do you see when you fit:
Y = B0 + B1X1 + B2X2
That will be the easiest to check, and B2 will probably give you what you want.
That model is still simple; you could explore something like:
Y = B0 + B1X1 + B2X2 + B3X1X2
or
Y = B0 + B1X1 + B2X2 + B3X1X2 + B4X1^2 + B5X2^2
And see if there are changes in the coefficients and if there are new significant coefficients.
You could go further and explore Structural Equation Models.
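A minimal R sketch of that comparison (with simulated data standing in for yours, since the original data and code were not posted):

# hypothetical data; replace with your own X1, X2, Y
set.seed(42)
n  <- 200
X1 <- rnorm(n)
X2 <- rnorm(n)
Y  <- 1 + 2 * X1 + 0.5 * X2 + rnorm(n)

m1 <- lm(Y ~ X1 + X2)                        # B0 + B1*X1 + B2*X2
m2 <- lm(Y ~ X1 * X2)                        # adds the B3*X1*X2 interaction
m3 <- lm(Y ~ X1 * X2 + I(X1^2) + I(X2^2))    # adds the quadratic terms

summary(m1)$coefficients    # compare estimates and p-values across the three fits
summary(m2)$coefficients
summary(m3)$coefficients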

How is the full adder's carry out term derived?

I'm reading the section on the full adder in Digital Design by Morris Mano, and I can't seem to figure out how it gets from equation A to equation B.
From a full adder's truth table and k-map using inputs x, y, and z, the carry out term, C, is defined as:
C = xy + xz + yz (equation A)
I could understand the above, but in order to leverage the xor already used by the summation term of x, y, and z, the book redefines C as:
C = z(xy' + x'y) + xy = xy'z + x'yz + xy (equation B)
How are these two equivalent? I've tried to derive one from the other on paper but I'm not able to come up with the steps in between.
Sorry, my comment (which I removed) was hastily stated.
Consider the eight rows of the truth table for x, y, z (I'm using ^ to represent XOR for brevity).
The results of xy + xz + yz (which factors as xy + (x + y)z) are the same as xy + (x ^ y)z because, in the six rows where x and y are not both 1, x + y and x ^ y have the same value. In the two rows where they differ (x = y = 1), the xy term being OR'ed in is 1, which makes the difference irrelevant to the final value.
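If you'd rather see the algebra than the truth-table argument, here is one route between the two forms (not the book's derivation, just the standard identities a(1 + b) = a and y + y' = 1):

C = z(xy' + x'y) + xy
  = xy'z + x'yz + xy
  = xy'z + x'yz + xy + xyz      (since xy = xy(1 + z) = xy + xyz)
  = xz(y' + y) + x'yz + xy
  = xz + x'yz + xy
  = xz + x'yz + xyz + xy        (expand xy with 1 + z again)
  = xz + yz(x' + x) + xy
  = xz + yz + xy                which is equation A.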

How can I scale a 2D rotation vector without trig functions?

I have a normalized 2D vector that I am using to rotate other 2D vectors. In one instance it indicates "spin" (or "angular momentum") and is used to rotate the "orientation" of a simple polygon. My vector class contains this method:
rotateByXY(x, y) {
    // (x, y) is a unit vector (cos θ, sin θ); this applies the standard
    // 2D rotation by θ without calling any trig functions.
    let rotX = x * this.x - y * this.y;
    let rotY = y * this.x + x * this.y;
    this.x = rotX;
    this.y = rotY;
}
So far, this is all efficient and uses no trig whatsoever.
However, I want the "spin" to decay over time. This means that the angle of the spin should tend towards zero. And here I'm at a loss as to how to do this without expensive trig calls like this:
let angle = Math.atan2(spin.y, spin.x);
angle *= SPIN_DECAY;
spin = new Vector2D(Math.cos(angle), Math.sin(angle));
Is there a better/faster way to accomplish this?
If it's really the trigonometric functions that are slowing down your computation, you might try approximating them with their Taylor expansions.
For x close to zero the following identities hold:
cos(x) = 1 - (x^2)/2! + (x^4)/4! - (x^6)/6! + ...
sin(x) = x - (x^3)/3! + (x^5)/5! - (x^7)/7! + ...
atan(x) = x - (x^3)/3 + (x^5)/5 - (x^7)/7 + ...
Based on the degree of accuracy you need for your application, you can trim the series. For instance,
cos(x) = 1 - (x^2)/2
has an error on the order of x^3 (actually x^4, since the x^3 term is zero anyway).
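To put a number on that (a quick check of my own, not from the original answer): for x = 0.2 rad, cos(0.2) ≈ 0.980067 while 1 - 0.2^2/2 = 0.980000, an error of about 6.7e-5, which matches x^4/4! = 0.0016/24 ≈ 6.7e-5.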
However, I don't think this is going to solve your problem: the library implementation of atan2 is most likely already using the same kind of trick, written by someone with lots of experience in speeding these things up. So this is not really a proper answer, but I hope it is still useful.

Spectrogram of two audio files (Added together)

Assume for a moment I have two input signals f1 and f2. I could add these signals to produce a third signal f3 = f1 + f2. I would then compute the spectrogram of f3 as log(|stft(f3)|^2).
Unfortunately, I don't have the original signals f1 and f2. I do, however, have their spectrograms A = log(|stft(f1)|^2) and B = log(|stft(f2)|^2). What I'm looking for is a way to approximate log(|stft(f3)|^2) as closely as possible using A and B. If we do some math, we can derive:
log(|stft(f1 + f2)|^2) = log(|stft(f1) + stft(f2)|^2)
express stft(f1) = x1 + i * y1 & stft(f2) = x2 + i * y2 to write
... = log(|x1 + i * y1 + x2 + i * y2|^2)
... = log((x1 + x2)^2 + (y1 + y2)^2)
... = log(x1^2 + x2^2 + y1^2 + y2^2 + 2 * (x1 * x2 + y1 * y2))
... = log(|stft(f1)|^2 + |stft(f2)|^2 + 2 * (x1 * x2 + y1 * y2))
So at this point I could use the approximation:
log(|stft(f3)|^2) ~ log(exp(A) + exp(B))
but that ignores the cross term 2 * (x1 * x2 + y1 * y2). So my question is: is there a better approximation than this?
Any ideas? Thanks.
I'm not 100% sure I understand your notation, but I'll give it a shot. Addition in the time domain corresponds to addition in the frequency domain. Adding two time domain signals x1 and x2 produces a third time domain signal x3. x1, x2 and x3 all have a frequency domain spectrum, F(x1), F(x2) and F(x3). F(x3) is equal to F(x1) + F(x2), where the addition is performed by adding the real parts of F(x1) to the real parts of F(x2) and the imaginary parts of F(x1) to the imaginary parts of F(x2). So if x1[0] is 1+0j and x2[0] is 0.5+0.5j, their sum is 1.5+0.5j.
Judging from your notation, you are trying to add the magnitudes, which in this example would be |1+0j| + |0.5+0.5j| = sqrt(1*1) + sqrt(0.5*0.5 + 0.5*0.5) = 1 + sqrt(0.5). Obviously not the same thing as |1.5+0.5j| = sqrt(2.5). I think you want something like this:
log(|stft(a) + stft(b)|^2) ~ log(|stft(a)|^2 + |stft(b)|^2)
Take the exp() of the two log magnitudes, add them, then take the log of the sum.
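A small R sketch of that recipe (assuming A and B are matrices of log-power values on the natural-log scale, as in the question):

combine_log_spectrograms <- function(A, B) {
  # log(exp(A) + exp(B)) element-wise, written in the numerically
  # stable "log-sum-exp" form so large log-power values don't overflow exp()
  pmax(A, B) + log1p(exp(-abs(A - B)))
}

It still drops the 2 * (x1 * x2 + y1 * y2) cross term, of course.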
Stepping back from the math for a minute, we can see that at a fundamental level, this just isn't possible.
Consider a 1st signal f1 that is a pure tone at frequency F and amplitude A.
Consider a 2nd signal f2 that is a pure tone at frequency F and amplitude A, but perfectly out of phase with f1.
In this case, the spectrograms of f1 & f2 are identical.
Now consider two possible combined signals.
f1 added to itself is a pure tone at frequency F and amplitude 2A.
f1 added to f2 is complete silence.
From the spectrograms of f1 and f2 alone (which are identical), you've no way to know which of these very different situations you're in. And this doesn't just hold for pure tones. Any signal and its reflection about the axis suffer the same problem. Generalizing even further, there's just no way to know how much your underlying signals cancel and how much they reinforce each other. That said, there are limits. If, for a particular frequency, your underlying signals had amplitudes A1 and A2, the biggest possible amplitude is A1+A2 and the smallest possible is abs(A1-A2).
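A tiny R illustration of that ambiguity (plain FFT magnitudes of pure tones rather than full spectrograms, just to show the cancellation; the frequency and length here are arbitrary):

t  <- seq(0, 1, length.out = 1024)
f1 <- sin(2 * pi * 5 * t)
f2 <- -f1                                # same tone, 180 degrees out of phase

max(abs(Mod(fft(f1)) - Mod(fft(f2))))    # ~0: the magnitude spectra are identical
max(abs(f1 + f2))                        # 0: the two signals cancel completely
max(abs(f1 + f1))                        # ~2: the two copies reinforce (doubled amplitude)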

Random effects modeling using mgcv and using lmer. Basically identical fits but VERY different likelihoods and DF. Which to use for testing?

I am aware that there is a duality between random effects and smooth curve estimation. At this link, Simon Wood describes how to specify random effects using mgcv. Of particular note is the following passage:
For example if g is a factor then s(g,bs="re") produces a random coefficient for each level of g, with the random coefficients all modelled as i.i.d. normal.
After a quick simulation, I can see this is correct, and that the model fits are almost identical. However, the likelihoods and degrees of freedom are VERY different. Can anyone explain the difference? Which one should be used for testing?
library(mgcv)
library(lme4)

set.seed(1)
# simulate 200 subjects with 5 observations each: slope 1 on x,
# a random intercept per subject, and unit residual noise
x <- rnorm(1000)
ID <- rep(1:200, each = 5)
y <- x
for (i in 1:200) y[which(ID == i)] <- y[which(ID == i)] + rnorm(1)
y <- y + rnorm(1000)
ID <- as.factor(ID)

# gam (mgcv): random intercept specified as s(ID, bs = "re")
m <- gam(y ~ x + s(ID, bs = "re"))
gam.vcomp(m)        # variance components
coef(m)[1:2]        # fixed effects
logLik(m)

# lmer (lme4): the equivalent random-intercept model
m2 <- lmer(y ~ x + (1 | ID))
sqrt(VarCorr(m2)$ID[1])      # random-intercept SD
summary(m2)$coef[, 1]        # fixed effects
logLik(m2)

# the fitted values agree almost exactly
mean(abs(fitted(m) - fitted(m2)))
Full disclosure: I encountered this problem because I want to fit a GAM that also includes random effects (repeated measures), but need to know if I can trust likelihood-based tests under those models.
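One sanity check worth running (my own suggestion, not something from Wood's post): make sure the two fits are maximizing the same criterion before comparing log-likelihoods, since lmer() defaults to REML while the gam() call above does not use REML by default. Something like:

# refit both models by plain maximum likelihood so the criteria match
m_ml  <- gam(y ~ x + s(ID, bs = "re"), method = "ML")
m2_ml <- lmer(y ~ x + (1 | ID), REML = FALSE)
logLik(m_ml)
logLik(m2_ml)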
