Question about implicit methods for the Navier-Stokes equations

I am working on the incompressible Navier-Stokes equations in 2D and am trying to solve them using an implicit method in time. My question is: do we need to derive a pressure Poisson equation to solve them? If not, how can I ensure that the continuity equation is satisfied?
Thank you in advance.
Best,
Ann

Related

Adding time dependence to the variables

I am trying to add time dependence to the variables. I have used sympy to define the variables (theta and theta_dot). There is no problem computing the partial derivatives, but I am having trouble calculating the total derivative with respect to time.
The equation I am handling is the Euler-Lagrange equation.
I have used sympy
diff(L,theta)
and
diff(L,theta_dot)
to find the partial derivatives.
Ideally, I would like to know a good method to integrate the time derivative into the equation.
You could use dynamicsymbols in sympy.
list_of_variables = [dynamicsymbols("theta"), dynamicsymbols("theta", 1)]
f = diff(diff(L, list_of_variables[1]), 't') - diff(L, list_of_variables[0])
where L is your Lagrangian written in terms of these dynamic symbols.
Here is a similar post.
https://math.stackexchange.com/questions/3014868/euler-lagrange-formalism-with-sympy
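For completeness, here is a minimal runnable sketch along those lines, using a simple pendulum Lagrangian purely as a stand-in (the pendulum and the symbols m, l, g are assumptions for illustration, not from the original question):

from sympy import symbols, diff, cos
from sympy.physics.mechanics import dynamicsymbols

t = symbols('t')
m, l, g = symbols('m l g')              # assumed example parameters
theta = dynamicsymbols('theta')          # theta(t)
theta_dot = dynamicsymbols('theta', 1)   # d(theta)/dt

# Example Lagrangian of a simple pendulum: L = T - V
L = (m * l**2 * theta_dot**2) / 2 + m * g * l * cos(theta)

# Euler-Lagrange equation: d/dt (dL/d theta_dot) - dL/d theta = 0
eom = diff(diff(L, theta_dot), t) - diff(L, theta)
print(eom)   # m*l**2*theta'' + g*l*m*sin(theta)

The key point is that dynamicsymbols makes theta a function of time, so diff(..., 't') produces the total time derivative rather than treating theta as a constant.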

Generalized reduced gradient nonlinear solver in Python

I saw a few posts before regarding the Excel solver, but I have been unable to find a clear answer as to how to implement the GRG nonlinear solver in Python (also with the option to apply constraints). Is this possible? If so, could somebody please let me know how to achieve this, possibly with an example?
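I am not aware of a drop-in GRG implementation in the scientific Python stack. As a hedged sketch of the closest commonly used route, scipy.optimize.minimize with the SLSQP method also handles nonlinear constraints; note this is a different algorithm than Excel's GRG, and the objective and constraints below are made up for illustration:

import numpy as np
from scipy.optimize import minimize

# Made-up example problem: minimize (x-1)^2 + (y-2.5)^2 subject to constraints
def objective(v):
    x, y = v
    return (x - 1.0)**2 + (y - 2.5)**2

# Inequality constraints in scipy's dict form: each 'fun' must be >= 0
constraints = [
    {'type': 'ineq', 'fun': lambda v:  v[0] - 2 * v[1] + 2},
    {'type': 'ineq', 'fun': lambda v: -v[0] - 2 * v[1] + 6},
]
bounds = [(0, None), (0, None)]   # x >= 0, y >= 0

result = minimize(objective, x0=np.array([2.0, 0.0]),
                  method='SLSQP', bounds=bounds, constraints=constraints)
print(result.x)   # constrained minimizer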

Fitting a regression model

I'm trying to solve a question from a Chinese textbook on linear statistical models;
the chapter containing this question is about weighted least squares.
The question and the way I solved it are as follows:
As you can see, the predicted values are very different from the actual values, so I wonder whether I solved it correctly.
Could somebody tell me what is wrong with it?
And if there are mistakes, how do I correct them?
The predicted values are actually not that far off from the actual values. This seems fine and looks like a sensible result here.
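Since the original data and worked solution are in images that are not reproduced here, the following is only a generic sketch of fitting a weighted least squares model in Python with statsmodels; the data and weights are made up for illustration:

import numpy as np
import statsmodels.api as sm

# Made-up data for illustration; replace with the textbook's values
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 5.8, 8.4, 9.9])
weights = 1.0 / x           # e.g. if the variance is assumed proportional to x

X = sm.add_constant(x)       # design matrix with an intercept column
model = sm.WLS(y, X, weights=weights).fit()
print(model.params)          # intercept and slope
print(model.predict(X))      # fitted (predicted) values, to compare against y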

Quadratic programming and the quasi-Newton method BFGS

Yesterday, I posted a question about the general concept of SVM Primal Form Implementation:
Support Vector Machine Primal Form Implementation
and "lejlot" helped me out to understand that what I am solving is a QP problem.
But I still don't understand how my objective function can be expressed as QP problem
(http://en.wikipedia.org/wiki/Support_vector_machine#Primal_form)
Also, I don't understand how QP and the quasi-Newton method are related.
All I know is that a quasi-Newton method will SOLVE my QP problem, which is supposedly formulated from
my objective function (and that is the connection I don't see).
Can anyone walk me through this, please?
For SVMs, the goal is to find a classifier. This problem can be expressed in terms of a function that you are trying to minimize.
Let's first consider the Newton iteration. The Newton iteration is a numerical method for finding a solution to a problem of the form F(x) = 0.
Instead of solving it analytically, we can solve it numerically by the following iteration:
x^(k+1) = x^k - DF(x^k)^(-1) * F(x^k)
Here x^(k+1) is the (k+1)-th iterate, x^k is the k-th iterate, and DF(x^k)^(-1) is the inverse of the Jacobian of F evaluated at x^k.
This update runs as long as we make progress in terms of step size (delta x), or until the function value approaches 0 to a good degree. The termination criterion can be chosen accordingly.
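As a small illustration of that iteration (a toy 1-D root-finding example, unrelated to the SVM problem):

import numpy as np

def newton(F, DF, x0, tol=1e-10, max_iter=50):
    """Newton iteration x^(k+1) = x^k - DF(x^k)^(-1) F(x^k) for F(x) = 0."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        step = np.linalg.solve(np.atleast_2d(DF(x)), np.atleast_1d(F(x)))
        x = x - step
        if np.linalg.norm(step) < tol:   # stop once progress stalls
            break
    return x

# Toy example: F(x) = x^2 - 2, whose positive root is sqrt(2)
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))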
Now consider solving the problem F'(x) = 0, i.e. minimizing F. If we formulate the Newton iteration for that, we get
x^(k+1) = x^k - HF(x^k)^(-1) * DF(x^k)
where HF(x^k)^(-1) is the inverse of the Hessian matrix and DF(x^k) is the gradient of the function F. Note that we are working in n dimensions and cannot simply divide; we have to invert the matrix.
Now we face some problems: in each step, we have to calculate the Hessian matrix at the updated x, which is very expensive. We also have to solve a system of linear equations, namely HF(x)*y = DF(x) (i.e. y = HF(x)^(-1) * DF(x)).
So instead of computing the Hessian in every iteration, we start off with an initial guess for the Hessian (for example the identity matrix) and perform low-rank updates after each iterate (BFGS uses a rank-two correction). For the exact formulas, have a look here.
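As a small hedged example of handing a toy objective to a quasi-Newton (BFGS) solver via scipy (the quadratic below is made up and is not the SVM objective):

import numpy as np
from scipy.optimize import minimize

# Toy convex quadratic: f(x) = 1/2 x^T Q x - c^T x (an unconstrained QP)
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])
c = np.array([1.0, 1.0])

f = lambda x: 0.5 * x @ Q @ x - c @ x
grad = lambda x: Q @ x - c

result = minimize(f, x0=np.zeros(2), jac=grad, method='BFGS')
print(result.x)                   # numerical minimizer found by BFGS
print(np.linalg.solve(Q, c))      # exact minimizer Q^(-1) c, for comparison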
So how does this link to SVMs?
When you look at the function you are trying to minimize, you can formulate a primal problem, which you can then reformulate as a dual Lagrangian problem; this dual problem is convex and can be solved numerically. It is all well documented in the article, so I will not try to reproduce the formulas here in lower quality.
But the idea is the following: if you have a dual problem, you can solve it numerically. There are multiple solvers available. In the link you posted, they recommend coordinate descent, which solves the optimization problem one coordinate at a time. Or you can use subgradient descent (a short sketch follows below). Another method is to use L-BFGS. It is really well explained in this paper.
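For concreteness, here is a hedged sketch of subgradient descent on the primal soft-margin (hinge-loss) SVM objective; the tiny data set, penalty C, step size, and iteration count are all made up for illustration:

import numpy as np

# Tiny made-up two-class data set, labels in {-1, +1}
X = np.array([[1.0, 1.0], [2.0, 3.0], [0.0, 0.5], [3.0, 2.5]])
y = np.array([-1.0, 1.0, -1.0, 1.0])

C = 1.0          # soft-margin penalty
w = np.zeros(2)
b = 0.0
lr = 0.01        # step size

for _ in range(1000):
    margins = y * (X @ w + b)
    active = margins < 1                   # points violating the margin
    # Subgradient of 1/2 ||w||^2 + C * sum(max(0, 1 - y_i (w.x_i + b)))
    grad_w = w - C * (y[active][:, None] * X[active]).sum(axis=0)
    grad_b = -C * y[active].sum()
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)      # approximate primal solution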
Another popular algorithm for solving problems like this is ADMM (the alternating direction method of multipliers). To use ADMM you would have to reformulate the given problem into an equivalent one that yields the same solution but has the right format for ADMM. For that, I suggest reading Boyd's notes on ADMM.
In general: first understand the function you are trying to minimize, and then choose the numerical method that is best suited. In this case, subgradient descent and coordinate descent are best suited, as stated in the Wikipedia link.

Calculating margin and bias for SVMs

I apologise in advance for the newbishness of this question, but I am stuck. I am trying to solve this question.
I can do parts i)-iv) but I am stuck on v). I know that to calculate the margin y, you do
y = 2/||W||
and I know that W is the normal to the hyperplane; I just don't know how to calculate it. Is this always
W = [1; 1] ?
Similarly, for the bias in W^T * x + b = 0,
how do I find the value of x from the data points? Thank you for your help.
Consider building an SVM over the (very little) data set shown in the picture. For an example like this, the maximum-margin weight vector will be parallel to the shortest line connecting points of the two classes, that is, the line between (1, 1) and (2, 3), giving a weight vector of (1, 2). The optimal decision surface is orthogonal to that line and intersects it at the halfway point. Therefore, it passes through (1.5, 2). So the SVM decision boundary is:
x1 + 2*x2 - 5.5 = 0
Working algebraically, with the standard constraint that y_i * (W^T * x_i + b) >= 1, we seek to minimize ||W||. This happens when this constraint is satisfied with equality by the two support vectors. Further, we know that the solution is W = [a; 2a] for some a. So we have that:
a + 2a + b = -1   (support vector (1, 1), class -1)
2a + 6a + b = +1  (support vector (2, 3), class +1)
Therefore a = 2/5 and b = -11/5, and W = [2/5; 4/5]. So the optimal hyperplane is given by
W = [2/5; 4/5] and b = -11/5.
The margin is
y = 2/||W|| = 2/sqrt((2/5)^2 + (4/5)^2) = sqrt(5).
This answer can be confirmed geometrically by examining the picture.
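A short check of the arithmetic above in Python; the two support vectors are the ones used in the worked example:

import numpy as np

# Support vectors from the worked example, with labels -1 and +1
x_neg, x_pos = np.array([1.0, 1.0]), np.array([2.0, 3.0])

# With W = [a; 2a], enforce a*(x1 + 2*x2) + b = label for each support vector
A = np.array([[x_neg[0] + 2 * x_neg[1], 1.0],
              [x_pos[0] + 2 * x_pos[1], 1.0]])
a, b = np.linalg.solve(A, np.array([-1.0, 1.0]))

W = np.array([a, 2 * a])
print(a, b)                    # 0.4 (= 2/5) and -2.2 (= -11/5)
print(2 / np.linalg.norm(W))   # margin = sqrt(5) ~ 2.236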

Resources