Difference between Least Squares (LS) and Ordinary Least Squares (OLS) with respect to Linear regression.
What I found:
On searching a bit, I found the claimed difference to be that in ordinary least squares we consider only the vertical distance between the predicted value and the given dependent variable, whereas in least squares we consider both the vertical and the horizontal distance.
Please correct me if I'm wrong, and add details if the explanation is incomplete or inaccurate.
This is even more confusing because what "least squares" means depends on which deviations are considered.
Generally, the usual OLS minimizes the deviations in the ordinates of the points (the vertical distances).
Copied from pp. 7-8 in https://fr.scribd.com/doc/14819165/Regressions-coniques-quadriques-circulaire-spherique
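To make the distinction concrete, here is a minimal sketch in Python/NumPy on made-up data, contrasting OLS (vertical deviations) with total least squares (perpendicular deviations):

```python
import numpy as np

# Made-up data for illustration: y = 2x + 1 plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# OLS: minimizes the sum of squared *vertical* deviations y - (a*x + b).
a_ols, b_ols = np.polyfit(x, y, 1)

# Total least squares: minimizes squared *perpendicular* distances.
# The fitted line runs along the leading singular vector of the centered data.
pts = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(pts, full_matrices=False)
dx, dy = vt[0]
a_tls = dy / dx
b_tls = y.mean() - a_tls * x.mean()

print("OLS:", a_ols, b_ols, "  TLS:", a_tls, b_tls)
```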
Related
I have to compute the intersection of two lines in 3D space.
This question itself has already been addressed.
The reason why I am posting is that the lines both come with some uncertainty in their direction.
This is represented in the figure below. Each line comes with its own coordinate axes. The uncertainty is represented by a covariance matrix, generally diagonal with the azimuth and elevation variances as elements. Geometrically, the uncertainty is a cone with the line as its central axis, delimited by the standard deviation (the square root of the variance).
So, ideally, what I would like to compute is the intersection volume between these cones. Whether they really intersect depends on the underlying probability distributions. If you assume them to be Gaussian, the intersection will always exist, except that if the line directions are very far apart it will have a very low probability.
This is what I would like to assess numerically: get the intersection volume and its probability.
I assume the probability distributions of both lines to be independent.
For the moment, what I have figured out is to compute the distance between the two lines.
If it is zero, they truly intersect. This distance is realized along a segment perpendicular to both lines. My guess is that the midpoint of this segment is the point with the best probability for the lines to intersect.
Then, I would assume the probability distribution around that point to be Gaussian and compute its covariance matrix numerically.
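In NumPy terms, the midpoint computation I have in mind looks roughly like this (a sketch assuming non-parallel lines; perpendicular_midpoint is just a name I made up):

```python
import numpy as np

def perpendicular_midpoint(p1, d1, p2, d2):
    """Feet of the common perpendicular between two 3D lines and their midpoint.

    Each line is given by a point p and a direction d. Returns
    (midpoint, distance); raises if the lines are (nearly) parallel.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    b = d1 @ d2
    d, e = d1 @ w, d2 @ w
    denom = 1.0 - b * b                  # a = c = 1 for unit directions
    if denom < 1e-12:
        raise ValueError("lines are (nearly) parallel")
    s = (b * e - d) / denom              # parameter on line 1
    t = (e - b * d) / denom              # parameter on line 2
    q1, q2 = p1 + s * d1, p2 + t * d2    # closest points on each line
    return (q1 + q2) / 2.0, np.linalg.norm(q1 - q2)
```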
Do you agree with this method, or do you think there would be a better way?
I'm using a polygonal chain to approximate a curve. I want to approximate the average of a function of curvature of all points that lie on the curve. One function of curvature that I need is, for example, the square of curvature.
I can get near that by choosing some points on the chain, calculating the curvature in those points, applying the function on it (for example squaring it), and then averaging the calculated values.
I need both accuracy and speed, so I appreciate fast-but-approximate as well as accurate-but-slow solutions. I'm working in Java, but the answer doesn't need to be written in Java; it doesn't even need to contain any code at all.
Polygonal chain with uniform segment length
If the polygonal chain's segments all have equal length, I can just calculate the curvature at the vertices and then average that. I see two ways to get the curvature at a vertex.
One way is to get the circle that goes through the selected vertex, the vertex before it, and the one after it. The curvature is then 1/radius of the circle.
The other way is to calculate the external angle (in radians) of the two segments connected at the selected vertex and then divide its absolute value by the length of a segment. In the following image, φ marks the external angle:
I am not sure if this method is correct, as I haven't mathematically derived it, but I've noticed through experimentation that it gives similar results to the above method.
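For concreteness, here is roughly what I mean by the two methods (a Python sketch, even though I work in Java; both function names are mine):

```python
import numpy as np

def curvature_circumcircle(p0, p1, p2):
    """Method 1: curvature at p1 as 1/R of the circle through p0, p1, p2
    (equivalently 4 * triangle_area / (a*b*c), the Menger curvature)."""
    a = np.linalg.norm(p1 - p0)
    b = np.linalg.norm(p2 - p1)
    c = np.linalg.norm(p2 - p0)
    # Twice the signed triangle area via the 2D cross product.
    area2 = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
    if area2 == 0.0:
        return 0.0                       # collinear points: zero curvature
    return 2.0 * abs(area2) / (a * b * c)

def curvature_external_angle(p0, p1, p2, seg_len):
    """Method 2: curvature at p1 as |external angle| / segment length."""
    u, v = p1 - p0, p2 - p1
    cos_phi = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))
    return phi / seg_len
```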
Polygonal chain with non-uniform segment length
Unfortunately, though, there's no guarantee that the segments have uniform length.
If I try using the first of the above methods, vertices connected to longer segments give lower curvatures, even if they are visibly sharper. I tried substituting previous and next vertices with a point x units before the selected vertex and a point x units after it. I don't know what to set the x constant to, to get accurate results. All the values I've tried seemed to give inaccurate results.
If I try using the second method, I don't know what length to divide the angle by. If I don't divide by anything at all, I actually get pretty good results for comparing two curves and determining which one is curvier, but I need to be able to determine the actual curvature in a point.
With both of these methods there's also the problem that parts with shorter segments (where points are denser) will affect the average more.
Another possible solution would be to ignore the vertices and instead use an array of points on the chain that are evenly spaced, treat them as a new polygonal chain (connect the points with straight lines), and then calculate curvatures on this new chain instead, using one of the methods I mentioned under the header titled "Polygonal chain with uniform segment length".
Finding such an array of points is not trivial, though, because I have to choose a segment length, and only after producing the points can I see whether the length of the resulting chain is divisible by the chosen segment length.
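One way I could sidestep the divisibility issue is to fix the number of samples instead of the segment length. A rough Python sketch of what I mean (resample_evenly is a made-up name; the resampled segments are only approximately uniform, since spacing is measured along the original chain):

```python
import numpy as np

def resample_evenly(points, n_samples):
    """Resample an (N, 2) polygonal chain at n_samples points spaced
    evenly by arc length along the original chain."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])    # cumulative arc length
    t = np.linspace(0.0, s[-1], n_samples)         # even spacing, ends included
    return np.column_stack([np.interp(t, s, points[:, 0]),
                            np.interp(t, s, points[:, 1])])
```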
If you aren't short on space, the last solution you mentioned would be the best, because the "sphere" approximation, as you've perhaps realized, would give awful results in more extreme cases, especially if the curvature is large or changes sign quickly.
There are many ways to do interpolation, the simplest being quadratic and cubic splines. However, if you have more pre-processing time, Lagrange polynomials produce very good results: https://en.wikipedia.org/wiki/Lagrange_polynomial.
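For example, here is a rough sketch of the spline route in Python with SciPy (mean_squared_curvature is my name; note it averages over the chord-length parameter rather than true arc length, which is close when the sampling is dense):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def mean_squared_curvature(points, n_samples=1000):
    """Cubic-spline the vertices with chord-length parametrization and
    average curvature^2 over evenly spaced parameter values."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])     # chord-length parameter
    sx = CubicSpline(t, points[:, 0])
    sy = CubicSpline(t, points[:, 1])
    u = np.linspace(t[0], t[-1], n_samples)
    dx, dy = sx(u, 1), sy(u, 1)                     # first derivatives
    ddx, ddy = sx(u, 2), sy(u, 2)                   # second derivatives
    k = (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
    return float(np.mean(k ** 2))
```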
Side note on your angle-division method: picture two consecutive chords of length l inscribed in a circle of radius r, meeting at external angle θ (the original diagram is not reproduced here). From simple geometry, the central angle subtended by one chord is also θ, so l = 2r·sin(θ/2) ≈ r·θ when θ is small. The curvature is therefore κ = 1/r ≈ θ/l, so your approximation is in fact correct for small curvatures.
An alternative is to use a local parabola approximation to estimate the curvature. Basically, to estimate the curvature at point P(i), you take P(i-1), P(i) and P(i+1) and construct a parabola from these 3 points. Then, you compute the curvature at P(i) from the parabola. Remember to use chord-length (or centripetal) parametrization when constructing the parabola.
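A rough sketch of that in Python/NumPy (parabola_curvature is a made-up name):

```python
import numpy as np

def parabola_curvature(p0, p1, p2):
    """Curvature at p1 from a parametric parabola through p0, p1, p2,
    using chord-length parametrization."""
    t = np.array([0.0,
                  np.linalg.norm(p1 - p0),
                  np.linalg.norm(p1 - p0) + np.linalg.norm(p2 - p1)])
    # Fit quadratics x(t) and y(t) through the three points.
    cx = np.polyfit(t, [p0[0], p1[0], p2[0]], 2)
    cy = np.polyfit(t, [p0[1], p1[1], p2[1]], 2)
    t1 = t[1]
    dx, dy = 2 * cx[0] * t1 + cx[1], 2 * cy[0] * t1 + cy[1]   # first derivatives
    ddx, ddy = 2 * cx[0], 2 * cy[0]                           # second derivatives
    return abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
```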
I've seen several similar questions, and have some ideas of what I might try, but I don't remember seeing anything about spread.
So: I am working on a measurement system, ultimately computer vision based.
I take N captures, and process them using a library which outputs pose estimations in the form of 4x4 affine transformation matrices of translation and rotation.
There's some noise in these pose estimations. The standard deviation in Euler angles for each axis of rotation is less than 2.5 degrees, so all orientations are pretty close to each other (for a case where all Euler angles are close to 0 or 180). Standard errors of less than 0.25 degrees are important to me. But I have already run into the problems endemic to Euler angles.
I want to average all these pretty-close-together pose estimates to get a single final pose estimate. And I also want to find some measure of spread so that I can estimate accuracy.
I'm aware that "average" isn't actually well defined for rotations.
(For the record, my code is in Numpy-heavy Python.)
I also may want to weight this average, since some captures (and some axes) are known to be more accurate than others.
My impression is that I can just take the mean and standard deviation of the translation vector, and that for the rotation I can convert to quaternions, take the mean, and re-normalize with OK accuracy since these quaternions are pretty close together.
I've also heard mentions of least-squares across all the quaternions, but most of my research into how this would be implemented has been a dismal failure.
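In NumPy terms, the two candidates I have in mind look roughly like this (a sketch; both function names are mine, and the eigenvector approach is the quaternion least-squares average usually attributed to Markley et al.):

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Average unit quaternions (rows of an (N, 4) array).

    Eigenvector method: the average is the eigenvector of
    sum(w_i * q_i q_i^T) with the largest eigenvalue. This is the
    weighted least-squares rotation average and is insensitive to
    the q vs -q sign ambiguity.
    """
    q = np.asarray(quats, dtype=float)
    w = np.ones(len(q)) if weights is None else np.asarray(weights, float)
    m = (w[:, None, None] * q[:, :, None] * q[:, None, :]).sum(axis=0)
    vals, vecs = np.linalg.eigh(m)
    return vecs[:, -1]        # unit eigenvector for the largest eigenvalue

def naive_quaternion_mean(quats):
    """Renormalized arithmetic mean, after flipping signs to match q[0].
    Fine when all rotations are close together."""
    q = np.asarray(quats, dtype=float).copy()
    q[np.einsum('ij,j->i', q, q[0]) < 0] *= -1   # resolve q vs -q
    mean = q.mean(axis=0)
    return mean / np.linalg.norm(mean)
```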
Is this workable? Is there a reasonably well-defined measure of spread in this context?
Without more info about your geometry setup it is hard to answer. Anyway, for rotations I would:
create 3 unit vectors
x=(1,0,0),y=(0,1,0),z=(0,0,1)
apply the rotation to them, and call the outputs
x(i),y(i),z(i)
(this is just applying matrix(i) while treating the position as (0,0,0), i.e. using only the rotation part)
do this for all measurements you have
now average all vectors
X=avg(x(1),x(2),...x(n))
Y=avg(y(1),y(2),...y(n))
Z=avg(z(1),z(2),...z(n))
correct the vector values
so make each of X, Y, Z a unit vector again, then take the axis closest to the rotation axis as the main axis. That one stays as is; recompute the remaining two axes as cross products of the main axis and the other vector to ensure orthogonality. Beware of the multiplication order (the wrong order of operands will negate the output).
construct averaged transform matrix
see transform matrix anatomy; as the origin you can use the averaged origin of the measurement matrices
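A rough NumPy sketch of the above scheme, keeping X as the main axis for simplicity (average_rotations_axes is a made-up name; it assumes the column-vector convention, so the matrix columns are the rotated axes):

```python
import numpy as np

def average_rotations_axes(matrices):
    """Average rotation matrices by averaging their basis vectors and
    re-orthogonalizing. matrices: (N, 3, 3) array of rotations."""
    ms = np.asarray(matrices, dtype=float)
    # Columns of a rotation matrix are the rotated x, y, z unit vectors.
    X = ms[:, :, 0].mean(axis=0)
    Y = ms[:, :, 1].mean(axis=0)
    X /= np.linalg.norm(X)
    # Keep X as the main axis; rebuild Z and Y with cross products
    # (watch the operand order, or the axes flip sign).
    Z = np.cross(X, Y)
    Z /= np.linalg.norm(Z)
    Y = np.cross(Z, X)            # already unit length, orthogonal to both
    return np.column_stack([X, Y, Z])
```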
Moakher wrote a paper explaining that there are basically two ways to take an average of rotation matrices. The first is a weighted average followed by a projection back to SO(3) using the SVD. The second is the Riemannian center of mass. That one is a closer notion to the geometric mean, and it's more complicated to compute.
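For the first of those two, a minimal NumPy sketch (function names are mine):

```python
import numpy as np

def project_to_so3(m):
    """Nearest rotation matrix to m in the Frobenius norm, via the SVD."""
    u, _, vt = np.linalg.svd(m)
    d = np.sign(np.linalg.det(u @ vt))   # guard against reflections
    return u @ np.diag([1.0, 1.0, d]) @ vt

def average_rotations_svd(matrices, weights=None):
    """Weighted arithmetic mean of rotation matrices, projected back to SO(3)."""
    ms = np.asarray(matrices, dtype=float)
    w = np.ones(len(ms)) if weights is None else np.asarray(weights, float)
    mean = np.tensordot(w / w.sum(), ms, axes=1)
    return project_to_so3(mean)
```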
here is what I want to do (preferably with Matlab):
Basically I have several traces of cars driving through an intersection. Each one is noisy, so I want to take the mean over all measurements to get a better approximation of the real route. In other words, I am looking for a way to approximate the curve which has the smallest distance to all of the measured traces (in a least-squares sense).
At first glance, this is quite similar to what can be achieved with spap2 of the Curve Fitting Toolbox (there is a good example in the Least-Squares Approximation section here).
But this algorithm has a major drawback: it assumes a function (with exactly one y(x) for every x), but what I want is a curve in 2D (which may have several y values for one x). This leads to problems when cars turn left or right by more than 90 degrees.
Furthermore, it uses the vertical offsets and not the perpendicular offsets (according to the definition on Wolfram).
Does anybody have an idea how to solve this problem? I thought of using a B-spline and changing the number of knots and the degree until I reach a certain fitting quality, but I can't find a way to solve this problem analytically or with the functions provided by the Curve Fitting Toolbox. Is there a way to solve this without numerical optimization?
mbeckish is right. In order to get sufficient flexibility in the curve shape, you must use a parametric curve representation (x(t), y(t)) instead of an explicit representation y(x). See Parametric equation.
Given n successive points on the curve, assign them their true time if you know it or just integers 0..n-1 if you don't. Then call spap2 twice with vectors T, X and T, Y instead of X, Y. Now for arbitrary t you get a point (x, y) on the curve.
This won't give you a true least squares solution, but should be good enough for your needs.
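In case a Python equivalent helps, here is the same idea sketched with SciPy's least-squares spline (names and knot placement are mine; this mirrors calling spap2 twice):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_parametric_curve(x, y, n_knots=10, degree=3):
    """Least-squares spline fit of a 2D trace, parametrized by sample index
    (the analogue of calling spap2 twice, with T, X and T, Y)."""
    t = np.arange(len(x), dtype=float)      # use true timestamps if you have them
    # Interior knots, evenly spaced in the parameter; keep n_knots well
    # below the number of samples.
    knots = np.linspace(t[0], t[-1], n_knots + 2)[1:-1]
    sx = LSQUnivariateSpline(t, x, knots, k=degree)
    sy = LSQUnivariateSpline(t, y, knots, k=degree)
    return sx, sy                           # point on the curve: (sx(u), sy(u))
```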
I want to use a linear regression model, but I want to use ordinary least squares, which I think is a type of linear regression. The software I use is SPSS. It only has linear regression, partial least squares and 2-stage least squares. I have no idea which one is ordinary least squares (OLS).
Yes: although 'linear regression' refers to any approach to modeling the relationship between a dependent variable and one or more explanatory variables, OLS is the method used to find the simple linear regression of a set of data.
Linear regression is a broad term that just says we are finding a relationship between the dependent and independent variable(s), no matter what technique we are using.
OLS is just one of the techniques for doing linear regression.
Let's say
error (e) = observed value - predicted value
Observed values: the blue dots in the picture.
Predicted values: the points on the line (vertically below the observed values).
The vertical lines between them represent e. We square these errors, add them up to get the total error, and try to reduce this total error.
For OLS, as the name says (ordinary least squares method), we minimize the sum of all e², i.e. we make the squared error as small as possible.
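A minimal NumPy sketch on made-up numbers, showing exactly this sum of squared vertical errors being minimized:

```python
import numpy as np

# Fit y ≈ a*x + b by minimizing sum((y - (a*x + b))**2).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

A = np.column_stack([x, np.ones_like(x)])        # design matrix [x, 1]
(a, b), residual_ss, *_ = np.linalg.lstsq(A, y, rcond=None)

e = y - (a * x + b)                              # vertical errors
print(a, b, np.sum(e ** 2))                      # slope, intercept, total error
```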