I saw a few posts before regarding the Excel solver, but I have been unable to find a clear answer on how to implement the GRG nonlinear solver in Python (also with the option to apply constraints). Is this possible? If so, could somebody please let me know how to achieve this, possibly with an example?
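For reference on the Python side: SciPy does not ship a GRG implementation, but scipy.optimize.minimize with method="SLSQP" solves the same class of problems (a smooth nonlinear objective with equality/inequality constraints and variable bounds). A minimal sketch, with a made-up objective and constraints:

    # Sketch only: SLSQP is not GRG, but it targets the same problem class.
    # The objective, constraints, and bounds below are invented examples.
    import numpy as np
    from scipy.optimize import minimize

    def objective(x):
        return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

    constraints = [
        {"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},  # g(x) >= 0
        {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3},      # h(x) == 0
    ]
    bounds = [(0, None), (0, None)]  # x >= 0, like Solver's non-negativity box

    result = minimize(objective, x0=[2.0, 0.0], method="SLSQP",
                      bounds=bounds, constraints=constraints)
    print(result.x, result.fun)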
"Fit" in Gnuplot uses which method (Algorithm) for fitting any curve? How does it calculate the error in fitting parameters?
A rough idea about the method or the algorithm would be enough.
I use the fit command quite often. gnuplot uses least squares with the Marquardt-Levenberg algorithm. All the available information is at this link [fit]. What they say about the error can be found here [error].
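For comparison, the same algorithm is the default in SciPy: scipy.optimize.curve_fit uses Levenberg-Marquardt when no bounds are given, and parameter uncertainties come out of the returned covariance matrix, much like gnuplot's asymptotic standard errors. A minimal sketch with a made-up exponential model:

    # Sketch: curve_fit defaults to Levenberg-Marquardt (the algorithm gnuplot's
    # fit command uses) when no bounds are given. Model and data are made up.
    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b):
        return a * np.exp(b * x)

    xdata = np.linspace(0, 4, 50)
    ydata = model(xdata, 2.5, -1.3) + 0.05 * np.random.default_rng(0).normal(size=50)

    popt, pcov = curve_fit(model, xdata, ydata, p0=[1.0, -1.0])
    perr = np.sqrt(np.diag(pcov))  # 1-sigma errors on the fitted parameters
    print(popt, perr)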
These are quite robust methods and are not easy to implement from scratch. However, if you want to have a look at the code that does the job, you can find it on GitHub [code]; that's the advantage of open source :-).
Hope that helps.
I have a mixed integer/binary linear programming problem. The free version of the Excel solver can find a solution that satisfies all the constraints; however, the lp_solve API that I call from C++ cannot find one. I suspect that the Excel solver simplifies the problem. So far I have located two parameters in the Excel solver options: MIP gap and constraint precision. The former is set to 1%, and I set lp_solve's MIP gap to 1% as well, but I do not know what the equivalent of constraint precision is in lp_solve. Can anyone help? Thanks!
Well, I have a data set (length 20) and I want to check its Gaussianity or non-Gaussianity. I'm a somewhat advanced user of MATLAB, so I already know the answer (it doesn't fit a Gaussian), but I have to prove it with Excel, and I have never used its statistical tools very much. What's the best way to do it?
EDIT: I had several ideas, but none of them appears to be practical in Excel. First, I thought there would be a fitting tool (something like MATLAB's histfit), but I didn't find one. Second, I thought I could say my data is approximately Gaussian if the deciles of my data set are approximately the same as those of the Gaussian distribution with mean = dataSetMean and variance = dataSetVariance, but I couldn't find functions for either in Excel.
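For what it's worth, Excel does have PERCENTILE and NORM.INV, which are exactly the two pieces the decile idea needs. Here is the same comparison sketched in Python on made-up data, plus a Shapiro-Wilk test, which is more defensible than eyeballing deciles with only 20 points:

    # Sketch of the decile comparison; Excel's PERCENTILE and NORM.INV compute
    # the same two quantities. The data here is an invented placeholder.
    import numpy as np
    from scipy import stats

    data = np.random.default_rng(0).exponential(size=20)  # placeholder data

    probs = np.arange(0.1, 1.0, 0.1)                    # deciles
    sample_q = np.percentile(data, probs * 100)         # like Excel PERCENTILE
    normal_q = stats.norm.ppf(probs, loc=data.mean(),   # like Excel NORM.INV
                              scale=data.std(ddof=1))
    print(np.column_stack([sample_q, normal_q]))

    # A formal test; a small p-value argues against normality.
    print(stats.shapiro(data))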
I'm trying to solve a question from a Chinese textbook on linear statistical models, and the chapter containing this question is about weighted least squares. The question and the way I solved it are as follows:
As you can see, the predicted values are very different from the actual values, so I wonder whether I solved it correctly.
Could somebody tell me what is wrong with it?
And if there are mistakes, how do I correct them?
The predicted values are actually not that far off from the actual values. This seems fine and looks like a sensible result here.
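If you want to double-check the numbers independently, weighted least squares is easy to compute by hand from the normal equations beta = (X' W X)^(-1) X' W y. A sketch with invented data, since the textbook numbers are not visible here; substitute your own x, y, and weights:

    # Sanity-check WLS by solving the weighted normal equations directly.
    # x, y, and the weights below are made up; replace them with yours.
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 4.3, 5.8, 8.4, 9.9])
    w = 1.0 / x  # example weighting, e.g. variance proportional to x

    X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    y_hat = X @ beta
    print(beta)   # fitted coefficients
    print(y_hat)  # predictions to compare against the actual y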
Yesterday, I posted a question about the general concept of an SVM primal form implementation:
Support Vector Machine Primal Form Implementation
and "lejlot" helped me understand that what I am solving is a QP problem.
But I still don't understand how my objective function can be expressed as a QP problem
(http://en.wikipedia.org/wiki/Support_vector_machine#Primal_form)
Also, I don't understand how QP and the quasi-Newton method are related.
All I know is that the quasi-Newton method will solve my QP problem, which is supposedly formulated from my objective function (I don't see the connection).
Can anyone walk me through this, please?
For SVMs, the goal is to find a classifier. This problem can be expressed in terms of a function that you are trying to minimize.
Let's first consider the Newton iteration. The Newton iteration is a numerical method for finding a solution to a problem of the form F(x) = 0.
Instead of solving it analytically, we can solve it numerically with the following iteration:
x^(k+1) = x^k - DF(x^k)^(-1) * F(x^k)
Here x^(k+1) is the (k+1)th iterate, x^k is the kth iterate, and DF(x^k)^(-1) is the inverse of the Jacobian of F evaluated at x^k.
This update runs as long as we make progress, i.e. until the step size (delta x) becomes small or the function value gets close enough to 0. The termination criteria can be chosen accordingly.
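A minimal sketch of this iteration in one dimension, where DF(x)^(-1) is just division by the derivative (the example function is chosen arbitrarily):

    # One-dimensional Newton iteration for F(x) = 0.
    def newton(f, df, x, tol=1e-10, max_iter=50):
        for _ in range(max_iter):
            step = f(x) / df(x)   # DF(x^k)^(-1) * F(x^k)
            x -= step             # x^(k+1) = x^k - step
            if abs(step) < tol:   # terminate on small step size
                break
        return x

    # Example: root of x^2 - 2, i.e. sqrt(2).
    print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x=1.0))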
Now consider solving the problem F'(x) = 0, i.e. finding a stationary point of F. If we formulate the Newton iteration for that, we get
x^(k+1) = x^k - HF(x^k)^(-1) * DF(x^k)
where HF(x^k)^(-1) is the inverse of the Hessian matrix of F and DF(x^k) is the gradient of F, both evaluated at x^k. Note that we are working in n dimensions, so we cannot just divide by the derivative; we have to invert the matrix.
Now we face some problems: in each step, we have to calculate the Hessian matrix at the updated x, which is very expensive. We also have to solve a system of linear equations for the step y, namely HF(x^k) * y = DF(x^k) (equivalently, y = HF(x^k)^(-1) * DF(x^k)).
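A sketch of this Newton step for minimization, solving HF(x) y = DF(x) for the step y rather than forming the inverse explicitly; the gradient and Hessian below belong to the made-up function F(x) = x0^4 + x0*x1 + (1 + x1)^2:

    # Newton's method for minimization: solve the linear system for the step.
    import numpy as np

    def grad(x):   # DF(x) of the made-up F above
        return np.array([4 * x[0] ** 3 + x[1], x[0] + 2 * (1 + x[1])])

    def hess(x):   # HF(x) of the made-up F above
        return np.array([[12 * x[0] ** 2, 1.0],
                         [1.0,            2.0]])

    x = np.array([0.75, -1.25])
    for _ in range(20):
        y = np.linalg.solve(hess(x), grad(x))  # HF(x^k) y = DF(x^k)
        x = x - y                              # x^(k+1) = x^k - y
    print(x)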
So instead of computing the Hessian in every iteration, we start off with an initial guess of the Hessian (for example the identity matrix) and perform low-rank updates (rank one or rank two, depending on the scheme) after each iterate. For the exact formulas, have a look here.
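In practice you rarely write the quasi-Newton update yourself; SciPy's BFGS implementation, for example, does the Hessian-approximation bookkeeping for you (same made-up function as above):

    # Quasi-Newton (BFGS) via SciPy; no Hessian needs to be supplied.
    from scipy.optimize import minimize

    def f(x):
        return x[0] ** 4 + x[0] * x[1] + (1 + x[1]) ** 2

    result = minimize(f, x0=[0.75, -1.25], method="BFGS")
    print(result.x, result.nit)  # minimizer and iteration count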
So how does this link to SVMs?
When you look at the function you are trying to minimize, you can formulate a primal problem, which you can then reformulate as a dual Lagrangian problem; this dual is convex and can be solved numerically. It is all well documented in the Wikipedia article, so I will not try to reproduce the formulas here in lower quality.
But the idea is the following: if you have a dual problem, you can solve it numerically, and there are multiple solvers available. In the link you posted, they recommend coordinate descent, which solves the optimization problem one coordinate at a time. Or you can use subgradient descent (sketched below). Another method is L-BFGS; it is explained really well in this paper.
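To make the subgradient-descent option concrete, here is a hedged, Pegasos-style sketch on the primal objective (lambda/2)*||w||^2 + (1/n) * sum_i max(0, 1 - y_i*(w.x_i + b)) from the Wikipedia page, run on an invented toy data set:

    # Subgradient descent on the primal SVM objective; toy data, not tuned.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(+1.0, 1.0, (50, 2)),   # class +1
                   rng.normal(-1.0, 1.0, (50, 2))])  # class -1
    y = np.hstack([np.ones(50), -np.ones(50)])

    lam, w, b, n = 0.1, np.zeros(2), 0.0, len(y)
    for t in range(1, 1001):
        eta = 1.0 / (lam * t)          # standard decaying step size
        mask = y * (X @ w + b) < 1     # points with nonzero hinge loss
        g_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        g_b = -y[mask].sum() / n
        w, b = w - eta * g_w, b - eta * g_b

    print((np.sign(X @ w + b) == y).mean())  # training accuracy on toy data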
Another popular algorithm for solving problems like this is ADMM (the alternating direction method of multipliers). To use ADMM, you have to reformulate the given problem into an equivalent one that yields the same solution but has the form ADMM expects. For that, I suggest reading Boyd's notes on ADMM.
In general: first understand the function you are trying to minimize, and then choose the numerical method that is best suited. In this case, subgradient descent and coordinate descent are the best suited, as stated in the Wikipedia link.