If Then Constraints in non-linear programming - modeling

I have several constraints in a nonlinear problem.
For example:
In m(x+y-n)^2
If x+y-n>=0 Then m=0,
Else m=1.
How can I write this conditional constraint as linear or non-linear constraint?

Well you could write this as [min(x+y-n,0)]^2. Not sure if that will do you any good (this is non-differentiable, and thus difficult for many solvers). We can make the min() expression linear using additional binary variables:
z <= x+y-n
z <= 0
z >= x+y-n - b * M
z >= 0 - (1-b) * M
b in {0,1}
with M a large enough constant. In many cases better reformulations can be applied but that depends on the rest of the model.
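To see that these four inequalities really pin z to min(x+y-n, 0), it helps to enumerate a small grid. A minimal Python sanity check (the bound M = 100 and the integer grid are illustrative assumptions, not from the post):

```python
# Brute-force check that the four big-M constraints admit exactly
# z = min(s, 0), where s = x + y - n and b ranges over {0, 1}.
M = 100  # "large enough" constant for this grid (assumed)

def z_values(s):
    """All integer z admitted by the big-M system for slack s."""
    admitted = set()
    for b in (0, 1):
        lo = max(s - b * M, 0 - (1 - b) * M)  # z >= s - b*M, z >= -(1-b)*M
        hi = min(s, 0)                        # z <= s,       z <= 0
        admitted.update(range(lo, hi + 1))
    return admitted

for s in range(-10, 11):
    assert z_values(s) == {min(s, 0)}
```

Note that for each choice of b the constraints pin z uniquely: b = 0 forces z = s (feasible only when s <= 0), and b = 1 forces z = 0 (feasible only when s >= 0), which is exactly the min.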

If you use a constraint-programming solver, such as Choco Solver, you can use IfThenElse constraints directly, as well as other nonlinear constraints such as square.

Related

Conditional Constraint Solving

How would you approach the following constraint optimization problem:
I have a set of integer variables x[i] that can take only 4 values, in the [1,4] range
There are constraints of the form C <= x[i], x[i] <= C, and x[i] <= x[j]
There are also conditional constraints, but exclusively of the form "if 2 <= x[i] then 3 <= x[j]"
I want to minimize the number of variables that have the value 3
Edit: because I have a large number (thousands) of variables and constraints and performance is critical, I’m looking for a dedicated algorithm rather than a general-purpose constraint solver.
You could encode each variable as a pair of binary variables:
x[i] = 1 + 2*x2[i] + x1[i]
The inequality constraints can now be partly resolved:
1 <= x[i] can be ignored, as always true for any variable
2 <= x[i] implies (x2[i] or x1[i])
3 <= x[i] implies x2[i]
4 <= x[i] implies (x2[i] and x1[i])
1 >= x[i] implies (!x2[i] and !x1[i])
2 >= x[i] implies !x2[i]
3 >= x[i] implies (!x2[i] or !x1[i])
4 >= x[i] can be ignored, as always true for any variable
x[i] <= x[j] implies (!x2[i] or x2[j]) and
(!x1[i] or x2[j] or x1[j]) and
(!x2[i] or !x1[i] or x1[j])
Conditional constraint
if 2 <= x[i] then 3 <= x[j]
translates to
(!x2[i] or x2[j]) and (!x1[i] or x2[j])
The encoding shown above can be directly written as Conjunctive Normal Form (CNF) suitable for a SAT solver. Tools like SATInterface or bc2cnf help to automate this translation.
To minimize the number of variables which have value 3, a counting circuit combined with a digital comparator could be constructed/modelled.
Variable x[i] has value 3, if (x2[i] and !x1[i]) is true. These expressions could be inputs of a counter. The counting result could then be compared to some value which is decreased until no more solutions can be found.
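Since each variable has only four bit patterns, the clause encodings above can be verified exhaustively. A minimal Python check (helper names are illustrative):

```python
# Enumerate the 2-bit encodings (x = 1 + 2*x2 + x1) and verify that each
# clause set is equivalent to the arithmetic constraint it encodes.
def val(x2, x1):
    return 1 + 2 * x2 + x1

bits = [(x2, x1) for x2 in (0, 1) for x1 in (0, 1)]

for x2i, x1i in bits:
    xi = val(x2i, x1i)
    # threshold constraints on a single variable
    assert (2 <= xi) == bool(x2i or x1i)
    assert (3 <= xi) == bool(x2i)
    assert (4 <= xi) == bool(x2i and x1i)
    assert (1 >= xi) == (not x2i and not x1i)
    assert (2 >= xi) == (not x2i)
    assert (3 >= xi) == (not x2i or not x1i)
    # the three-clause encoding of x[i] <= x[j]
    for x2j, x1j in bits:
        xj = val(x2j, x1j)
        leq = bool((not x2i or x2j) and
                   (not x1i or x2j or x1j) and
                   (not x2i or not x1i or x1j))
        assert (xi <= xj) == leq
```

Running this over all 16 pairs confirms the encoding of the inequality constraints.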
Bottom line:
The problem can be solved with a general-purpose solver like a SAT solver (CaDiCaL, Z3, CryptoMiniSat) or a constraint solver like MiniZinc. I am not aware of a dedicated algorithm that would outperform the general-purpose solvers.
Actually, there is a fairly simple and efficient algorithm for this particular problem.
It is enough to maintain and propagate intervals and start propagating the conditional constraints when the lower bounds become >= 2.
At the end, if the interval is exactly [3,4], the optimal solution is to select 4.
More precisely:
initialize l[i]:=1, u[i]:=4
propagate constraints until fixpoint as follows:
Constraint "C<=x[i]": l[i]:=max(l[i],C)
Constraint "x[i]<=C": u[i]:=min(u[i],C)
Constraint "x[i]<=x[j]": l[j]:=max(l[j],l[i]) and u[i]:=min(u[i],u[j])
Constraint 2<=x[i] ==> 3<=x[j]: if 2<=l[i], then l[j]:=max(l[j], 3)
If u[i]<l[i], there is no solution
Otherwise, select:
x[i]=l[i] if u[i]<=3
x[i]=u[i] otherwise
This is correct and optimal because:
any solution is such that l[i]<=x[i]<=u[i], so if u[i]<l[i], there is no solution
otherwise, x[i]=l[i] is clearly a solution (but NOT x[i]=u[i] because it can be that u[i]>=2 but u[j] is not >=3)
bumping all x[i] from 3 to 4 when possible is still a solution because this change doesn't activate any new conditional constraints
what remains are the variables that are forced to be 3 (l[i]=u[i]=3), so we have found a solution with the minimal number of 3
In more details, here is a full proof:
assume that a solution x[i] is such that l[i]<=x[i]<=u[i] and let's prove that this invariant is preserved by application of any propagation rule:
Constraint "x[i]<=x[j]": x[i]<=x[j]<=u[j] and so x[i] is both <=u[i] and <=u[j] and hence <=min(u[i],u[j]). Similarly, l[i]<=x[i]<=x[j] so max(l[i],l[j])<=x[j]
The constraints "x[i]<=C" and "C<=x[i]" are similar
For the constraint "2<=x[i] ==> 3<=x[j]": either l[i]<2 and the propagation rule doesn't apply or 2<=l[i] and then 2<=l[i]<=x[i] implying 3<=x[j]. So 3<=x[j] and l[j]<=x[j] hence max(3,l[j])<=x[j]
as a result, when the fixpoint is reached and no rule can be applied anymore, if any i is such that u[i]<l[i], then there is no solution
otherwise, let's prove that this x[i] is a solution where: x[i]=l[i] if u[i]<=3 and x[i]=u[i] otherwise:
Note that x[i] is either l[i] or u[i], so l[i]<=x[i]<=u[i]
For all constraints "C<=x[i]", at fixpoint, we have l[i]=max(l[i],C), i.e., C<=l[i]<=x[i] and the constraint is satisfied
For all constraints "x[i]<=C", at fixpoint, we similarly have u[i]<=C and x[i]<=u[i]<=C and the constraint is satisfied
For all "x[i]<=x[j]", at fixpoint, we have: u[i] = min(u[i],u[j]) so u[i]<=u[j] and l[j] = max(l[j],l[i]), so l[i]<=l[j]. Then:
If u[j]<=3 then u[i]<=u[j]<=3 so x[i]=l[i]<=l[j]=x[j]
Otherwise, x[j]=u[j] and x[i]<=u[i]<=u[j]=x[j]
For all "2<=x[i] ==> 3<=x[j]": assume 2<=x[i]:
If u[i]<=3, then either:
l[i]<=2 and the fixpoint means l[j]:=max(l[j], 3) so 3<=l[j]<=x[j]
or l[i]=3 and 3=l[i]<=l[j]<=x[j]
If u[i]>3, then 3<u[i]<=u[j] and 3<u[i]<=u[j]=x[j]
Finally the solution is optimal because:
if l[i]=u[i]=3, any solution must have x[i]=3
otherwise, x[i] != 3: if u[i]<=3, then either u[i]=3 and x[i]=l[i]<3 or x[i]<=u[i]<3; and if u[i]>3 then x[i]=u[i]!=3
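A minimal Python sketch of this propagation scheme (the input format — lists of (i, C) and (i, j) pairs — and the naive fixpoint loop are illustrative choices, not from the answer):

```python
def minimize_threes(n, lower, upper, leq, cond):
    """Interval propagation for the scheme above.
    lower/upper: (i, C) pairs for C <= x[i] / x[i] <= C;
    leq: (i, j) pairs for x[i] <= x[j];
    cond: (i, j) pairs for "if 2 <= x[i] then 3 <= x[j]"."""
    l, u = [1] * n, [4] * n
    for i, c in lower:
        l[i] = max(l[i], c)
    for i, c in upper:
        u[i] = min(u[i], c)
    changed = True
    while changed:                       # propagate until fixpoint
        changed = False
        for i, j in leq:                 # x[i] <= x[j]
            if l[i] > l[j]:
                l[j] = l[i]; changed = True
            if u[j] < u[i]:
                u[i] = u[j]; changed = True
        for i, j in cond:                # 2 <= x[i]  ==>  3 <= x[j]
            if l[i] >= 2 and l[j] < 3:
                l[j] = 3; changed = True
    if any(u[i] < l[i] for i in range(n)):
        return None                      # infeasible
    # select l[i] when u[i] <= 3, else u[i] (bump 3s to 4 when possible)
    return [l[i] if u[i] <= 3 else u[i] for i in range(n)]
```

For example, with constraints 2 <= x[0], x[2] <= 3, x[0] <= x[1], and "if 2 <= x[0] then 3 <= x[2]", only x[2] is forced to 3 and the other variables are bumped to 4.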

XOR statement in PuLP python

I am working on PuLP in python and I want to model the following statement :
x is positive XOR y is positive, where x and y are integers.
How can I convert this in PuLP code ?
I started with
XOR
I agree with @kabdulla. Binary variables would be the way to go here.
Expanding on that idea a little further: you can use a binary variable to indicate whether X is positive (TRUE/1) with a constraint such as X <= M * binary_variable_for_x, where M is a number large enough that it does not limit X. This forces binary_variable_for_x to be 1 whenever X > 0. If you also need the converse (the binary may only be 1 when X really is positive), add X >= binary_variable_for_x, which works for non-negative integer X.
Do the same for when Y is positive (TRUE/1).
Then a constraint requiring these two booleans to sum to exactly 1 expresses the XOR:
binary_variable_for_x + binary_variable_for_y == 1
There are multiple ways to formulate the problem, but this is one of them.
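A quick brute-force check of this indicator encoding, assuming non-negative integers and an illustrative bound M = 100 (note the sum is constrained to exactly 1, since XOR means exactly one of the two is positive):

```python
# Verify that the big-M indicator constraints express "x positive XOR
# y positive" for small non-negative integers.
M = 100  # illustrative upper bound on x and y (assumed)

def feasible(x, y, bx, by):
    return (x <= M * bx and x >= bx and   # bx = 1  iff  x > 0
            y <= M * by and y >= by and   # by = 1  iff  y > 0
            bx + by == 1)                 # exactly one is positive

for x in range(5):
    for y in range(5):
        ok = any(feasible(x, y, bx, by) for bx in (0, 1) for by in (0, 1))
        assert ok == ((x > 0) != (y > 0))
```

In an actual PuLP model, x, y would be LpVariable(..., cat="Integer") and bx, by would be LpVariable(..., cat="Binary"), with each line of feasible() added as a constraint.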

Bachelier Normal Implied Vol Python Calculation (Help) Jekel

Writing a Python script to calculate implied normal vol, in line with the Jäckel article (industry standard).
https://jaeckel.000webhostapp.com/ImpliedNormalVolatility.pdf
They say they are using a Generalized Incomplete Gamma Function Inverse.
For a call:
For a call:
G(x) = v/(K - F) -> find x that makes this true
where G is the inverse incomplete gamma function
and x = (K - F)/(sigma*sqrt(T)); v is the value of a call
for that x, IV = (K - F)/(x*sqrt(T))
Example I am working with:
F=40
X=38
T=100/365
v=5.25
Vol= 20%
Using the equations I should be able to backout Vol of 20%
Scipy has upper and lower Incomplete Gamma Function Inverse in their special functions.
Lower: scipy.special.gammaincinv(a, y) : {a must be positive param}
Upper: scipy.special.gammainccinv(a, y) : {a must be positive param}
Implementation:
from scipy import special, optimize

F = 40
K = 38
T = 100 / 365

def Objective(sig):
    y = ((F - K) ** 2) / (2 * T * sig ** 2)
    return (special.gammaincinv(.5, y)
            + special.gammainccinv(.5, y)
            + 5.25 / (K - F))

root, info = optimize.brentq(Objective, -20.00, 20.00, xtol=1.48e-8,
                             rtol=1.48e-8, maxiter=1000, full_output=True)
IV = (K - F) / root * T ** .5
print(IV)
I know I am wrong, but Where am I going wrong / how do I fix it and use what I read in the article ?
Did you also post this on the Quantitative Finance Stack Exchange? You may get a better response there.
This is not my field, but it looks like your main problem is that brentq requires the passed Objective function to return values with opposite signs when passed the -20 and 20 arguments. That can never happen here: sig only enters through sig**2, so Objective is an even function and Objective(-20) equals Objective(20) exactly. (Also, according to the scipy docs, gammaincinv and gammainccinv always return values between 0 and infinity.)
I'm not sure how to fix this, unfortunately. Did you try implementing the analytic solution (rather than iterative root finding) in the second part of the paper?

Rendering Fractals: The Mobius Transformation and The Newtonian Basin

I understand how to render (two dimensional) "Escape Time Group" fractals (Julia and Mandelbrot), but I can't seem to get a Mobius Transformation or a Newton Basin rendered.
I'm trying to render them using the same method (by recursively using the polynomial equation on each pixel 'n' times), but I have a feeling these fractals are rendered using totally different methods. Mobius 'Transformation' implies that an image must already exist, and then be transformed to produce the geometry, and the Newton Basin seems to plot each point, not just points that fall into a set.
How are these fractals graphed? Are they graphed using the same iterative methods as the Julia and Mandelbrot?
Equations I'm Using:
Julia: Zn+1 = Zn^2 + C
Where Z is a complex number representing a pixel, and C is a complex constant (Correct).
Mandelbrot: Cn+1 = Cn^2 + Z
Where Z is a complex number representing a pixel, and C is the complex number (0, 0), and is compounded each step (The reverse of the Julia, correct).
Newton Basin: Zn+1 = Zn - (Zn^x - a) / (Zn^y - a)
Where Z is a complex number representing a pixel, x and y are exponents of various degrees, and a is a complex constant (Incorrect - creating a centered, eight legged 'line star').
Mobius Transformation: Zn+1 = (aZn + b) / (cZn + d)
Where Z is a complex number representing a pixel, and a, b, c, and d are complex constants (Incorrect, everything falls into the set).
So how are the Newton Basin and Mobius Transformation plotted on the complex plane?
Update: Mobius Transformations are just that; transformations.
"Every Möbius transformation is a composition of translations, rotations, zooms (dilations) and inversions."
To perform a Mobius Transformation, a shape, picture, smear, etc. must be present already in order to transform it.
Now how about those Newton Basins?
Update 2: My math was wrong for the Newton Basin. The denominator at the end of the equation is (supposed to be) the derivative of the original function. The function can be understood by studying 'NewtonRoot.m' from the MIT MatLab source-code. A search engine can find it quite easily. I'm still at a loss as to how to graph it on the complex plane, though...
Newton Basin:
Zn+1 = Zn - f(Zn) / f'(Zn)
In Mandelbrot and Julia sets you terminate the inner loop when the orbit exceeds a certain threshold, as a measure of how fast it escapes to infinity:
if(|z| > 4) { stop }
For Newton fractals it is the other way round: since Newton's method usually converges towards a certain value, we are interested in how fast it reaches its limit. That can be checked by stopping when the difference between two consecutive values drops below a certain threshold (usually 10^-9 is a good value):
if(|z[n] - z[n-1]| < epsilon) { stop }
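Putting that stopping rule together with nearest-root coloring, here is a minimal Newton-basin renderer for the cubic f(z) = z^3 - 1 (the polynomial, the [-2, 2] viewport, and the coloring scheme are illustrative assumptions):

```python
import numpy as np

def newton_basin(width=80, height=60, max_iter=40, eps=1e-9):
    """Color each pixel by which cube root of 1 Newton's method reaches."""
    roots = [np.exp(2j * np.pi * k / 3) for k in range(3)]
    img = np.zeros((height, width), dtype=int)
    for py in range(height):
        for px in range(width):
            # map pixel to a point in the [-2, 2] x [-2, 2] window
            z = complex(-2 + 4 * px / (width - 1),
                        -2 + 4 * py / (height - 1))
            for _ in range(max_iter):
                if abs(z) < eps:            # avoid division by zero at z = 0
                    break
                z_new = z - (z**3 - 1) / (3 * z**2)   # Newton step
                if abs(z_new - z) < eps:    # converged: stop iterating
                    z = z_new
                    break
                z = z_new
            # color by the nearest root (0, 1, or 2)
            img[py, px] = min(range(3), key=lambda k: abs(z - roots[k]))
    return img
```

Unlike an escape-time fractal, every pixel gets a color: the basin index of the root it converged to (optionally shaded by the iteration count at convergence).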

Is there a language with constrainable types?

Is there a typed programming language where I can constrain types like the following two examples?
A Probability is a floating point number with minimum value 0.0 and maximum value 1.0.
type Probability subtype of float
where
min_value = 0.0
max_value = 1.0
A Discrete Probability Distribution is a map, where: the keys should all be the same type, the values are all Probabilities, and the sum of the values = 1.0.
type DPD<K> subtype of map<K, Probability>
where
sum(values) = 1.0
As far as I understand, this is not possible with Haskell or Agda.
What you want is called refinement types.
It's possible to define Probability in Agda: Prob.agda
The probability mass function type, with sum condition is defined at line 264.
There are languages with more direct refinement types than in Agda, for example ATS
You can do this in Haskell with Liquid Haskell which extends Haskell with refinement types. The predicates are managed by an SMT solver at compile time which means that the proofs are fully automatic but the logic you can use is limited by what the SMT solver handles. (Happily, modern SMT solvers are reasonably versatile!)
One problem is that I don't think Liquid Haskell currently supports floats. If it doesn't though, it should be possible to rectify because there are theories of floating point numbers for SMT solvers. You could also pretend floating point numbers were actually rational (or even use Rational in Haskell!). With this in mind, your first type could look like this:
{p : Float | p >= 0 && p <= 1}
Your second type would be a bit harder to encode, especially because maps are an abstract type that's hard to reason about. If you used a list of pairs instead of a map, you could write a "measure" like this:
measure total :: [(a, Float)] -> Float
total [] = 0
total ((_, p):ps) = p + total ps
(You might want to wrap [] in a newtype too.)
Now you can use total in a refinement to constrain a list:
{dist: [(a, Float)] | total dist == 1}
The neat trick with Liquid Haskell is that all the reasoning is automated for you at compile time, in return for using a somewhat constrained logic. (Measures like total are also very constrained in how they can be written—it's a small subset of Haskell with rules like "exactly one case per constructor".) This means that refinement types in this style are less powerful but much easier to use than full-on dependent types, making them more practical.
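For contrast, here is a runtime-checked analogue of the two types in Python (Python has no refinement types, so these invariants are enforced at construction time rather than at compile time; all names are illustrative):

```python
class Probability(float):
    """A float constrained to [0.0, 1.0], checked on construction."""
    def __new__(cls, value):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"not a probability: {value!r}")
        return super().__new__(cls, value)

class DPD(dict):
    """A discrete probability distribution: every value is a Probability
    and the values sum to 1.0 (within a tolerance for float rounding)."""
    def __init__(self, mapping):
        checked = {k: Probability(v) for k, v in mapping.items()}
        if abs(sum(checked.values()) - 1.0) > 1e-9:
            raise ValueError("probabilities must sum to 1.0")
        super().__init__(checked)
```

So DPD({"heads": 0.5, "tails": 0.5}) succeeds, while Probability(1.5) or a distribution summing to 1.4 raises ValueError — at run time, which is exactly the gap that refinement types close.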
Perl6 has a notion of "type subsets" which can add arbitrary conditions to create a "sub type."
For your question specifically:
subset Probability of Real where 0 .. 1;
and
role DPD[::T] {
    has Map[T, Probability] $.map
        where [+](.values) == 1; # calls `.values` on Map
}
(Note: in current implementations, the "where" part is checked at run-time. But "real types" are checked at compile-time (that includes your classes), and there are pure annotations (is pure) inside the std (which is mostly Perl 6), including on operators like *, so checking subsets at compile time is only a matter of implementation effort, and it shouldn't take much more.)
More generally:
# (%% is the "divisible by", which we can negate, becoming "!%%")
subset Even of Int where * %% 2; # * creates a closure around its expression
subset Odd of Int where -> $n { $n !%% 2 } # using a real "closure" ("pointy block")
Then you can check if a number matches with the Smart Matching operator ~~:
say 4 ~~ Even; # True
say 4 ~~ Odd; # False
say 5 ~~ Odd; # True
And, thanks to multi subs (or multi whatever, really – multi methods or others), we can dispatch based on that:
multi say-parity(Odd $n) { say "Number $n is odd" }
multi say-parity(Even) { say "This number is even" } # we don't name the argument, we just put its type
#Also, the last semicolon in a block is optional
Nimrod is a new language that supports this concept. They are called subranges. Here is an example; you can learn more about the language on the Nim website.
type
  TSubrange = range[0..5]
For the first part, yes, that would be Pascal, which has integer subranges.
The Whiley language supports something very much like what you are saying. For example:
type natural is (int x) where x >= 0
type probability is (real x) where 0.0 <= x && x <= 1.0
These types can also be implemented as pre-/post-conditions like so:
function abs(int x) => (int r)
ensures r >= 0:
    //
    if x >= 0:
        return x
    else:
        return -x
The language is very expressive. These invariants and pre-/post-conditions are verified statically using an SMT solver. This handles examples like the above very well, but currently struggles with more complex examples involving arrays and loop invariants.
For anyone interested, I thought I'd add an example of how you might solve this in Nim as of 2019.
The first part of the question is straightforward, since in the interval since this question was asked, Nim has gained the ability to define subrange types on floats (as well as on ordinal and enum types). The code below defines two new float subrange types, Probability and ProbOne.
The second part of the question is more tricky -- defining a type with constraints on a function of its fields. My proposed solution doesn't directly define such a type but instead uses a macro (makePmf) to tie the creation of a constant Table[T, Probability] object to the ability to create a valid ProbOne object (thus ensuring that the PMF is valid). The makePmf macro is evaluated at compile time, ensuring that you can't create an invalid PMF table.
Note that I'm a relative newcomer to Nim so this may not be the most idiomatic way to write this macro:
import macros, tables

type
  Probability = range[0.0 .. 1.0]
  ProbOne = range[1.0 .. 1.0]

macro makePmf(name: untyped, tbl: untyped): untyped =
  ## Construct a Table[T, Probability] ensuring
  ## Sum(Probabilities) == 1.0

  # helper templates
  template asTable(tc: untyped): untyped =
    tc.toTable

  template asProb(f: float): untyped =
    Probability(f)

  # ensure that the passed value is already
  # a table constructor
  tbl.expectKind nnkTableConstr

  var
    totprob: Probability = 0.0
    fval: float
    newtbl = newTree(nnkTableConstr)

  # create Table[T, Probability]
  for child in tbl:
    child.expectKind nnkExprColonExpr
    child[1].expectKind nnkFloatLit
    fval = floatVal(child[1])
    totprob += Probability(fval)
    newtbl.add(newColonExpr(child[0], getAst(asProb(fval))))

  # this serves as the check that probs sum to 1.0
  discard ProbOne(totprob)

  result = newStmtList(newConstStmt(name, getAst(asTable(newtbl))))

makePmf(uniformpmf, {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25})

# this static block will show that the macro was evaluated at compile time
static:
  echo uniformpmf

# the following invalid PMF won't compile
# makePmf(invalidpmf, {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.15})
Note: A cool benefit of using a macro is that nimsuggest (as integrated into VS Code) will even highlight attempts to create an invalid Pmf table.
Modula-3 has subrange types (subranges of ordinals). So for your Example 1, if you're willing to map probability to an integer range of some precision, you could use this:
TYPE PROBABILITY = [0..100]
Add significant digits as necessary.
Ref: More about subrange ordinals here.
