I apologise in advance for how basic this question is, but I am stuck. I am trying to solve this question;
I can do parts (i)-(iv), but I am stuck on (v). I know that to calculate the margin y, you use
y=2/||W||
and I know that W is the normal to the hyperplane; I just don't know how to calculate it. Is it always
W = [1; 1]?
Similarly, for the bias b in
W^T * x + b = 0,
how do I find the value of x from the data points? Thank you for your help.
Consider building an SVM over the (very little) data set shown in the picture. For an example like this, the maximum-margin weight vector will be parallel to the shortest line connecting points of the two classes, that is, the line between (1, 1) and (2, 3), giving a weight vector of (1, 2). The optimal decision surface is orthogonal to that line and intersects it at the halfway point. Therefore, it passes through (1.5, 2). So, the SVM decision boundary is:

x_1 + 2*x_2 - 5.5 = 0
Working algebraically, with the standard constraint that y_i (w . x_i + b) >= 1, we seek to minimize ||w||. This happens when the constraint is satisfied with equality by the two support vectors. Further, we know that the solution is w = (a, 2a) for some a. So we have that:

a + 2a + b = -1  (for the support vector (1, 1))
2a + 6a + b = +1  (for the support vector (2, 3))
Therefore a = 2/5 and b = -11/5, so w = (2/5, 4/5). The optimal hyperplane is thus given by
w = (2/5, 4/5) and b = -11/5.
The margin is 2/||w|| = 2/sqrt(4/25 + 16/25) = sqrt(5).
This answer can be confirmed geometrically by examining the picture.
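The algebra can be double-checked numerically. A Python/NumPy sketch, using the two support vectors (1, 1) and (2, 3) from the worked example above with the usual -1/+1 labels:

```python
import numpy as np

# Support vectors from the worked example (labels -1 and +1 by convention).
x_neg = np.array([1.0, 1.0])   # label -1
x_pos = np.array([2.0, 3.0])   # label +1

a = 2 / 5
w = np.array([a, 2 * a])       # solution has the form (a, 2a)
b = -11 / 5

# Both support vectors satisfy the margin constraints with equality.
assert np.isclose(w @ x_neg + b, -1)
assert np.isclose(w @ x_pos + b, +1)

# The margin 2/||w|| equals sqrt(5).
margin = 2 / np.linalg.norm(w)
print(margin)  # ~2.236
```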
I am starting this thread to ask for your help with Excel.
The main goal is to determine the coordinates of the intersection point P=(x,y) between two curves (curve A, curve B) modeled by points.
The curves are non-linear, and each defining point is determined using complex equations (the equations depend on many parameters chosen by the user, and the user also chooses the number of points, which determines the accuracy of the curves). That is to say, both curve A and curve B change in the XY plane (the Z coordinate is always zero; we are working in the XY plane) according to the input parameters, and the number of defining points also depends on the user's choice.
My first attempt was to determine the intersection point through the trend equations of each curve (I used the LINEST function to determine the coefficients of the polynomial equations) and then to solve them as a system. The problem is that Excel does not interpolate the curves very well because they are too wide, so the intersection point (the solution of the system) ends up very far from the real one.
So what I want to do is shorten the ranges of points so that I can find two trend equations that define the curves well, cutting away the portions of the curves where the intersection cannot lie.
Today, in order to find the solution, I plot the curves in Siemens NX CAD using multi-segment splines of order 3, and then I can easily find the coordinates of the intersection point. Please note that I am using multi-segment splines to approximate the functions of curve A and curve B more precisely.
Since I want to avoid the CAD tool and stay in Excel, is there a way to select a shorter range of the defining points close to the intersection point in order to better approximate curve A and curve B with trend equations (the LINEST function with 4 points and a 3rd-order spline) and then find the solution?
I attach a picture to give you an example of Curve A and Curve B on the plane:
https://postimg.cc/MfnKYqtk
At the following link you can find the Excel file with the coordinate points and the curve plot:
https://www.mediafire.com/file/jqph8jrnin0i7g1/intersection.xlsx/file
I hope to solve this problem with your help, thank you in advance!
kalo86
Your question gave me some days of thinking and research.
With the help of https://pomax.github.io/bezierinfo/
§ 27 - Intersections (Line-line intersections)
and
§ 28 - Curve/curve intersection
your problem can be solved in Excel.
About the mystery of Excel smoothed lines you find details here:
https://blog.splitwise.com/2012/01/31/mystery-solved-the-secret-of-excel-curved-line-interpolation/
The author of this fit is Dr. Brian T. Murphy, PhD, PE from www.xlrotor.com. You find details here:
https://www.xlrotor.com/index.php/our-company/about-dr-murphy
https://www.xlrotor.com/index.php/knowledge-center/files
=>see Smooth_curve_bezier_example_file.xls
https://www.xlrotor.com/smooth_curve_bezier_example_file.zip
Knitting these together, you get the following results for the intersection of your given curves:
for the straight line intersection:
(x = -1.02914127711195, y = 23.2340949174492)
for the smooth line intersection:
(x = -1.02947493047196, y = 23.2370611219553)
For a full automation of your task you would need to add more details regarding the required accuracy and what you need for further processing (and that is actually beyond the scope of this website ;-)).
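The straight-line case from § 27 is easy to reproduce outside Excel as well. A hypothetical minimal Python sketch, walking both point lists segment by segment (the point data here is a toy example, not the workbook's curves):

```python
def seg_intersect(p1, p2, p3, p4):
    """Intersection point of segments p1-p2 and p3-p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if den == 0:  # parallel (or degenerate) segments
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def polyline_intersection(curve_a, curve_b):
    """First intersection of two point-defined curves (straight segments)."""
    for p1, p2 in zip(curve_a, curve_a[1:]):
        for p3, p4 in zip(curve_b, curve_b[1:]):
            hit = seg_intersect(p1, p2, p3, p4)
            if hit is not None:
                return hit
    return None

# Toy example: two crossing polylines.
a = [(0, 0), (1, 1), (2, 2)]
b = [(0, 2), (1, 1), (2, 0)]
print(polyline_intersection(a, b))  # (1.0, 1.0)
```

The smoothed-line result additionally requires the Bezier reconstruction of Excel's curved lines described in the links above.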
[chart: intersection of the straight lines]
[chart: intersection of the smoothed lines]
[comparison charts]
Thank you very much for the answer; you captured my goal perfectly.
Your solution (for the smoothed lines) is very close to what I determined in Siemens NX.
I'm going to read the documentation at the provided link https://pomax.github.io/bezierinfo/ in order to better understand the math behind this argument.
To sum up my request: you were able to find the coordinates (x, y) of the intersection point between two curves with very good precision, without going through an advanced CAD system.
I am starting to study now, best regards!
kalo86
I was working with one dataset and found the curve to be sigmoidal. I have fitted the curve and got the equation A2+((A1-A2)/1+exp((x-x0)/dx)), where:
x0 : Mid point of the curve
dx : slope of the curve
I need to find the slope and midpoint in order to give a generalized equation. Any suggestions?
You should be able to simplify the modeling of the sigmoid with a four-parameter logistic function of the form:

f(x) = A2 + (A1 - A2) / (1 + exp((x - x0)/dx))

The source includes code in R showing how to fit your data to the sigmoid curve, which you can adapt to whatever language you're writing in. The nice thing about a general function like this is that you can take its derivative in closed form. Also note that the midpoint of the sigmoid is where the second derivative is zero (where the first derivative changes from increasing to decreasing, or vice versa), which for this form is simply x = x0.
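A quick numeric check of the midpoint/slope claims, in Python/NumPy with illustrative parameter values (not the asker's fitted ones):

```python
import numpy as np

# Illustrative parameters only, not fitted values.
A1, A2, x0, dx = 1.0, 5.0, 2.0, 0.5

def sigmoid(x):
    return A2 + (A1 - A2) / (1 + np.exp((x - x0) / dx))

x = np.linspace(x0 - 3, x0 + 3, 60001)
y = sigmoid(x)

d1 = np.gradient(y, x)   # numerical first derivative
d2 = np.gradient(d1, x)  # numerical second derivative

# The second derivative crosses zero at the midpoint x0 ...
mid = x[np.argmin(np.abs(d2))]
print(mid)  # ~2.0

# ... and the closed-form slope at the midpoint is (A2 - A1) / (4 * dx).
slope_mid = d1[np.argmin(np.abs(x - x0))]
print(slope_mid, (A2 - A1) / (4 * dx))  # both ~2.0
```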
Assuming your equation is a misprint of
A2+(A1-A2)/(1+exp((x-x0)/dx))
then your graph does not reflect zero residual, since in your graph the upper shoulder is sharper than the lower shoulder.
Likely the problem is your starting values. Try using the native R function SSfpl, as in
nls(y ~ SSfpl(x,A2,A1,x0,dx))
I have an issue with curve fitting process using Gnuplot. I have data with the time starting at 0.5024. I want to use a linear sin/cos combo to fit a value M over time (M=a+bsin(wt)+ccos(wt)). For further processing I only need the c value.
My code is
f(x)=a+b*sin(w*x)+c*cos(w*x)
fit f(x) "data.dat" using 1:2 via a,b,c,w
The asymptotic standard error is 66% for parameter c, which seems quite high. I suspect it has to do with the fact that the time starts at 0.5024 instead of 0. What I could do, of course, is
fit f(x) "data.dat" using ($1-0.5024):2 via a,b,c,w
which gives an asymptotic error of about 10%, which is way lower. The question is: can I do that? Does my new fit with the time offset still represent the original curve? Any other ideas?
Thanks in advance for your help :-)
It's a bit difficult to answer this without having seen your data, but your observation is typical.
The problem is an effect of the fit itself, or even of your formula. Let me explain it using an example data set. (Well, this is going to drift off-topic...)
A statistics digression
The data follows the function f(x)=x, and all y-values have been shifted by Gaussian random numbers. In addition, the data lies in the x-range [600:800].
You can now simply apply a linear fit f(x)=m*x+b. By Gaussian error propagation, the error is df(x)=sqrt((dm*x)²+(db)²). So you can plot the data, the linear function, and the error margin f(x) +/- df(x).
Here is the result:
The parameters:
m = 0.981822 +/- 0.1212 (12.34%)
b = 0.974375 +/- 85.13 (8737%)
The correlation matrix:
m b
m 1.000
b -0.997 1.000
You may notice three things:
The error for b is very large!
The error margin is small at x=0, but increases with x. Shouldn't it be smallest where the data is, i.e. at x=700?
The correlation between m and b is -0.997, which is near the maximum (absolute) value of 1.
The third point can be understood from the plot: if you increase the slope m, the y-offset b decreases. The two parameters are highly correlated, and an error in one of them is distributed to the other!
From statistics you may know that a linear regression line always goes through the center of gravity (cog) of the data. So, let's shift the data so that the cog is at the origin (it would be enough to shift it so that the cog lies on the y-axis, but I shifted it completely).
Result:
m = 1.0465 +/- 0.1211 (11.57%)
b = -12.0611 +/- 7.027 (58.26%)
Correlation:
m b
m 1.000
b -0.000 1.000
Compared to the first fit, the value and error of m are almost the same, but the very large error of b is much smaller now. The reason is that m and b are no longer correlated, so a (tiny) variation of m does not produce a (very big) variation of b. It is also nice to see that the error margin has shrunk a lot.
Here is a last plot with the original data, the first fit function and the "back-shifted function for the shifted data":
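The centering trick can be reproduced in a few lines of Python/NumPy. The data here is synthetic (the original set isn't shown, and the noise level is an arbitrary choice), but the correlation behaviour is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data like in the example: y = x plus Gaussian noise, x in [600, 800].
x = np.linspace(600, 800, 200)
y = x + rng.normal(0, 20, x.size)

def fit_with_correlation(x, y):
    """Linear fit y = m*x + b; returns (m, b, correlation between m and b)."""
    (m, b), cov = np.polyfit(x, y, 1, cov=True)
    corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
    return m, b, corr

# Raw fit: m and b are almost perfectly anti-correlated.
m1, b1, corr_raw = fit_with_correlation(x, y)

# Shift the data so its center of gravity lies on the y-axis.
m2, b2, corr_shifted = fit_with_correlation(x - x.mean(), y)

print(corr_raw)      # close to -1
print(corr_shifted)  # close to 0
```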
About your fit function:
First, there is a big correlation problem: b and c are extremely correlated, as both together define the phase and amplitude of your oscillation. It would help a lot to use another, equivalent function:
f(x)=a+N*sin(w*x+p)
Here, amplitude and phase are separated. You can still calculate your c from the fit results, and I expect its error will be much better.
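The two parameterisations are related by b = N*cos(p) and c = N*sin(p), i.e. N = sqrt(b²+c²) and p = atan2(c, b). A quick numeric check with arbitrary illustrative values:

```python
import numpy as np

# Arbitrary illustrative parameters, not fitted values.
a, b, c, w = 1.0, 3.0, 4.0, 2.0

# Equivalent amplitude/phase form: a + N*sin(w*x + p)
N = np.hypot(b, c)     # amplitude, sqrt(b^2 + c^2)
p = np.arctan2(c, b)   # phase

x = np.linspace(0, 10, 1000)
f1 = a + b * np.sin(w * x) + c * np.cos(w * x)
f2 = a + N * np.sin(w * x + p)

print(N)                             # 5.0
print(np.allclose(f1, f2))           # True
# c can be recovered from the fit results: c = N*sin(p)
print(np.isclose(N * np.sin(p), c))  # True
```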
As in my example, if the data is far away from the y-axis, a small variation of w will have a big impact on p. So I would suggest shifting your data so that its cog lies on the y-axis, to get almost completely rid of this effect.
Is this shift allowed?
Yes. You do not alter the data; you simply change your coordinate system to get better errors. Also, the fit function should describe the data, so it should be most accurate in the range where the data lies. In my first plot, the highest accuracy is at the y-axis, not where the data is.
Important
You should always note which tricks you applied. Otherwise, someone may check your results, fit the data without the tricks, see the red curve instead of your green one, and accuse you of cheating...
Whether you can do that or not depends on whether the curve you're fitting to represents the physical phenomena you're studying and is consistent with the physical model you need to comply with. My suggestion is that you provide those and ask this question again in a physics forum (or chemistry, biology, etc., depending on your field).
Here is what I want to do (preferably with Matlab):
Basically, I have several traces of cars driving through an intersection. Each one is noisy, so I want to take the mean over all measurements to get a better approximation of the real route. In other words, I am looking for a way to approximate the curve which has the smallest distance to all of the measured traces (in a least-squares sense).
At first glance, this is quite similar to what can be achieved with spap2 from the Curve Fitting Toolbox (there is a good example in the Least-Squares Approximation section here).
But this approach has a major drawback: it assumes a function (with exactly one y(x) for every x), whereas what I want is a curve in 2D (which may have several y-values for one x). This leads to problems when cars turn right or left by more than 90 degrees.
Furthermore, it minimizes the vertical offsets and not the perpendicular offsets (according to the definition on Wolfram).
Does anybody have an idea how to solve this problem? I thought of using a B-spline and changing the number of knots and the degree until I reach a certain fitting quality, but I can't find a way to do this analytically or with the functions provided by the Curve Fitting Toolbox. Is there a way to solve this without numerical optimization?
mbeckish is right. In order to get sufficient flexibility in the curve shape, you must use a parametric curve representation (x(t), y(t)) instead of an explicit representation y(x). See Parametric equation.
Given n successive points on the curve, assign them their true times if you know them, or just the integers 0..n-1 if you don't. Then call spap2 twice, with the vectors T, X and T, Y instead of X, Y. Then for an arbitrary t you get a point (x, y) on the curve.
This won't give you a true least squares solution, but should be good enough for your needs.
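A minimal sketch of the parametric idea in Python/NumPy, with plain least-squares polynomials standing in for spap2's splines (the trace data is invented, and degree 5 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented noisy trace of a turn of more than 90 degrees: 3/4 of a circle,
# so several y-values can occur for the same x.
t_true = np.linspace(0, 1.5 * np.pi, 60)
x = np.cos(t_true) + rng.normal(0, 0.01, t_true.size)
y = np.sin(t_true) + rng.normal(0, 0.01, t_true.size)

# Parameterise by sample index 0..n-1 instead of trying to model y(x).
t = np.arange(x.size)

# Fit x(t) and y(t) separately, as in the spap2 suggestion above.
px = np.polynomial.Polynomial.fit(t, x, 5)
py = np.polynomial.Polynomial.fit(t, y, 5)

# Evaluate the fitted curve at any parameter value.
ts = np.linspace(0, x.size - 1, 200)
curve = np.column_stack([px(ts), py(ts)])
print(curve.shape)  # (200, 2)
```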
I want to check whether the linear system derived from a radiosity problem is convergent.
I also want to know: is there any book or paper giving a proof of the convergence of the radiosity problem?
Thanks.
I assume you're solving (I - rho*F) B = E (based on the Wikipedia article).
Gauss-Seidel and Jacobi iteration methods are both guaranteed to converge if the matrix is diagonally dominant (Gauss-Seidel is also guaranteed to converge if the matrix is symmetric and positive definite).
The rows of the F matrix (the view factors) sum to 1, so if rho (the reflectivity) is < 1, as it physically should be, the matrix will be diagonally dominant.
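A small numeric illustration in Python/NumPy (a random row-stochastic F and rho < 1 are assumptions standing in for real view factors):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
rho = 0.8  # reflectivity < 1 (physical assumption)

# Random view-factor matrix with rows summing to 1.
F = rng.random((n, n))
F /= F.sum(axis=1, keepdims=True)

M = np.eye(n) - rho * F

# Strict diagonal dominance: |M_ii| > sum_{j != i} |M_ij| for every row,
# since 1 - rho*F_ii > rho*(1 - F_ii) is equivalent to rho < 1.
diag = np.abs(np.diag(M))
off = np.abs(M).sum(axis=1) - diag
print(np.all(diag > off))  # True

# Jacobi iteration on M*B = E therefore converges.
E = rng.random(n)
B = np.zeros(n)
for _ in range(200):
    B = (E - (M - np.diag(np.diag(M))) @ B) / np.diag(M)
print(np.allclose(M @ B, E))  # True
```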