How to define a predicate in SMT-LIB

How would I define a predicate such as even: Int -> Bool, which takes an integer and outputs whether it is even or not?
I tried something like
(set-logic AUFNIRA)
(declare-fun even (Int) Bool)
I want to know how to declare, for example, that even(2) is true.

There are several ways to do this.
You can use the interpreted predicate (_ divisible 2).
(assert ((_ divisible 2) 6))
You can use a define-fun to capture even exactly.
(define-fun even ((x Int)) Bool ((_ divisible 2) x))
Note that this may not be in your logic of choice, say QF_LIA.
You can declare an uninterpreted predicate, and define
its semantics pointwise.
(declare-fun even (Int) Bool)
(assert (even 2))
(assert (not (even 3)))
You can declare an uninterpreted predicate and define
its semantics via quantifiers.
(declare-fun even (Int) Bool)
(assert (forall ((x Int)) (= (even x) (exists ((y Int)) (= x (* 2 y))))))
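The quantified definition fixes the predicate's meaning in every model. As a sanity check outside the solver, the intended semantics can be mimicked in Python (a hedged sketch: the bounded search below merely stands in for the unbounded SMT exists):

```python
def even(x: int) -> bool:
    # bounded witness search standing in for (exists ((y Int)) (= x (* 2 y)))
    return any(x == 2 * y for y in range(-abs(x), abs(x) + 1))

# mirrors the pointwise assertions: (even 2) holds, (even 3) does not
assert even(2) and not even(3)
```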


Cosine Similarity in linear time using Lisp

The cosine similarity of two lists can be calculated in linear time using a for-loop. I'm curious as to how one would achieve this using a Lisp-like language. Below is an example of my code in Python and Hy (Hylang).
Python:
def cos_sim(A, B):
    import math as _math
    n, da, db, d = 0, 0, 0, 0
    for a, b in zip(A, B):
        n += a*b
        da += a*a
        db += b*b
    da = _math.sqrt(da)
    db = _math.sqrt(db)
    d = da*db
    return n / (d + 1e-32)
Hy (Lisp):
(import math)
(defn l2norm [a]
  (math.sqrt (reduce + (map (fn [s] (* s s)) a))))
(defn dot [a b]
  (reduce + (map * a b)))
(defn cossim [a b]
  (/ (dot a b) (* (l2norm a) (l2norm b))))
"I'm curious as to how one would achieve this using a Lisp-like language." This really depends on which Lisp you are using. In Scheme you might do something similar to the posted Hy solution:
(define (cos-sim-1 u v)
  (/ (dot-prod u v)
     (* (norm u) (norm v))))
(define (dot-prod u v)
  (fold-left + 0 (map * u v)))
(define (norm u)
  (sqrt (fold-left (lambda (acc x) (+ acc (* x x)))
                   0
                   u)))
This is linear in time complexity, but it could be improved by a constant factor by passing over the input only once. Scheme provides a named let construct that can be used to bind a name to a procedure; this is convenient here as it provides a simple mechanism for building the dot product and norms:
(define (cos-sim-2 u v)
  (let iter ((u u)
             (v v)
             (dot-product 0)
             (U^2 0)
             (V^2 0))
    (if (null? u)
        (/ dot-product (sqrt (* U^2 V^2)))
        (let ((x (car u))
              (y (car v)))
          (iter (cdr u)
                (cdr v)
                (+ dot-product (* x y))
                (+ U^2 (* x x))
                (+ V^2 (* y y)))))))
Both of these procedures assume that the input lists have the same length; it might be useful to add some validation code that checks this. Note that fold-left is standard in R6RS Scheme; other standards rely on SRFIs for it, and some implementations use different names, but the functionality is commonly available (perhaps as foldl or reduce).
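For comparison, here is the same single-pass shape with the suggested length check, sketched in Python (the choice of ValueError as the error signal is mine):

```python
import math

def cos_sim(u, v):
    if len(u) != len(v):
        raise ValueError("inputs must have the same length")
    dot = u2 = v2 = 0
    for x, y in zip(u, v):   # one pass builds all three accumulators
        dot += x * y
        u2 += x * x
        v2 += y * y
    return dot / math.sqrt(u2 * v2)
```

cos_sim([1, 1, 0], [0, 1, 0]) gives roughly 0.7071, matching the sample interactions shown later.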
It is possible to solve the problem in Common Lisp using either of the basic methods shown above, though in Common Lisp you would use labels instead of named let. But it would be typical to see a Common Lisp solution using the loop macro. The Common Lisp standard does not guarantee tail call elimination (though some implementations do support that), so explicit loops are seen much more often than in Scheme. The loop macro is pretty powerful, and one way that you could solve this problem while passing over the input lists only once is this:
(defun cos-sim (u v)
  (loop :for x :in u
        :for y :in v
        :sum (* x y) :into dot-product
        :sum (* x x) :into u2
        :sum (* y y) :into v2
        :finally (return (/ dot-product (sqrt (* u2 v2))))))
Here are some sample interactions:
Scheme (Chez Scheme):
> (cos-sim-1 '(1 0 0) '(1 0 0))
1
> (cos-sim-1 '(1 0 0) '(-1 0 0))
-1
> (cos-sim-1 '(1 0 0) '(0 1 0))
0
> (cos-sim-1 '(1 1 0) '(0 1 0))
0.7071067811865475
> (cos-sim-2 '(1 0 0) '(1 0 0))
1
> (cos-sim-2 '(1 0 0) '(-1 0 0))
-1
> (cos-sim-2 '(1 0 0) '(0 1 0))
0
> (cos-sim-2 '(1 1 0) '(0 1 0))
0.7071067811865475
Common Lisp:
CL-USER> (cos-sim '(1 0 0) '(1 0 0))
1.0
CL-USER> (cos-sim '(1 0 0) '(-1 0 0))
-1.0
CL-USER> (cos-sim '(1 0 0) '(0 1 0))
0.0
CL-USER> (cos-sim '(1 1 0) '(0 1 0))
0.70710677
A simple option is to translate the Python version literally to Hy, like this:
(defn cos_sim [A B]
  (import math :as _math)
  (setv [n da db d] [0 0 0 0])
  (for [[a b] (zip A B)]
    (+= n (* a b))
    (+= da (* a a))
    (+= db (* b b)))
  (setv
    da (_math.sqrt da)
    db (_math.sqrt db)
    d (* da db))
  (/ n (+ d 1e-32)))
I think your proposed solution is fairly 'lispy': build several short, easy-to-read functions that combine into your solution. E.g.:
(defun n (A B)
  (sqrt (reduce #'+ (map 'list #'* A B))))
(defun da (A)
  (sqrt (reduce #'+ (map 'list #'* A A))))
(defun db (B)
  (sqrt (reduce #'+ (map 'list #'* B B))))
(defun cos-sim (A B)
  (let ((n (n A B))
        (da (da A))
        (db (db B)))
    (/ (* n n) (+ (* da db) 1e-32))))
But notice that n, da, and db look very similar. We can see if we can make those a single function, or a macro. In this case, a function with an optional second list parameter is easy enough. (Note that I've defined n in a slightly weird way to emphasize the similarity; we might prefer not to take a square root and then square it for the final calculation. That would be easy to change by checking whether the optional parameter was passed (included as B-p below); here I chose to move the square-root calls out of the combined function and into cos-sim.) Anyway, this gives us:
(defun d (A &optional (B A B-p))
  (reduce #'+ (map 'list #'* A B)))
(defun cos-sim (A B)
  (let ((n (d A B))
        (da (sqrt (d A)))
        (db (sqrt (d B))))
    (/ n (+ (* da db) 1e-32))))
Alternately, using loop is very Common Lisp-y, and is more directly similar to the Python:
(defun cos-sim (A B)
  (loop for a in A
        for b in B
        sum (* a b) into n
        sum (* a a) into da
        sum (* b b) into db
        finally (return (/ n (+ (sqrt (* da db)) 1e-32)))))
Here is a fairly natural (I think) approach in Racket. Essentially this is a process of folding over a pair of sequences of numbers, so that's what we do. Note that this uses no explicit assignment, and it also pulls the square root up a level (sqrt(a) * sqrt(b) = sqrt(a*b)), since taking roots is likely expensive (though this probably does not matter in practice). It also doesn't add the tiny float, which I presume was an attempt to coerce a value that might not be a float to a float? If so, that's the wrong way to do it, and it's also not needed in a language like Racket (and most Lisps), which strive to do arithmetic correctly where possible.
(define (cos-sim a b)
  ;; a and b are sequences of numbers
  (let-values ([(a^2-sum b^2-sum ab-sum)
                (for/fold ([a^2-running 0]
                           [b^2-running 0]
                           [ab-running 0])
                          ([ai a] [bi b])
                  (values (+ (* ai ai) a^2-running)
                          (+ (* bi bi) b^2-running)
                          (+ (* ai bi) ab-running)))])
    (/ ab-sum (sqrt (* a^2-sum b^2-sum)))))
You can relatively easily turn this into typed Racket:
(define (cos-sim (a : (Sequenceof Number))
                 (b : (Sequenceof Number)))
        : Number
  (let-values ([(a^2-sum b^2-sum ab-sum)
                (for/fold ([a^2-running : Number 0]
                           [b^2-running : Number 0]
                           [ab-running : Number 0])
                          ([ai a] [bi b])
                  (values (+ (* ai ai) a^2-running)
                          (+ (* bi bi) b^2-running)
                          (+ (* ai bi) ab-running)))])
    (/ ab-sum (sqrt (* a^2-sum b^2-sum)))))
This probably is no faster, but it is fussier.
This might be faster though:
(define (cos-sim/flonum (a : (Sequenceof Flonum))
                        (b : (Sequenceof Flonum)))
        : Flonum
  (let-values ([(a^2-sum b^2-sum ab-sum)
                (for/fold ([a^2-running : Flonum 0.0]
                           [b^2-running : Flonum 0.0]
                           [ab-running : Flonum 0.0])
                          ([ai a] [bi b])
                  (values (+ (* ai ai) a^2-running)
                          (+ (* bi bi) b^2-running)
                          (+ (* ai bi) ab-running)))])
    (/ ab-sum (assert (sqrt (* a^2-sum b^2-sum)) flonum?))))
I have not checked that it is, however.
Your Hy example is already linear time: none of its loops are nested, so no loop multiplies another's number of iterations. It can be simplified to make this easier to see:
(import math)
(defn dot [a b]
(sum (map * a b)))
(defn l2norm [a]
(math.sqrt (dot a a)))
(defn cossim [a b]
(/ (dot a b) (* (l2norm a) (l2norm b))))
I think this version is clearer than the Python version, because it's closer to the math notation.
Let's also inline l2norm to make the number of loops easier to see.
(defn cossim [a b]
  (/ (dot a b)
     (* (math.sqrt (dot a a))
        (math.sqrt (dot b b)))))
Python's map() is lazy, so the sum() and map() together only loop once. You effectively have three loops, one for each dot, and none of them are nested. Your Python version had one loop, but it did more calculations per iteration. Theoretically it doesn't matter whether you make one pass accumulating all three sums or three passes computing one sum each: it's the same number of calculations either way.
However, in practice, Python does have significant overhead for function calls, so I would expect the Hy version using higher-order functions to be slower than the Python version that doesn't have any function calls in the loop body. This is a constant factor slowdown, so it's still linear time.
If you want fast loops for calculations in Python, put your data in a matrix and use Numpy.

Number type boundaries in Common Lisp and stack flowing over in GHCi

First question ever here, and newbie in both Common LISP and Haskell, please be kind.
I have a function in Common LISP - code below - which is intended to tell whether the area of a triangle is an integral number (integer?).
(defun area-int-p (a b c)
  (let* ((s (/ (+ a b c) 2))
         (area (sqrt (* s (- s a) (- s b) (- s c)))))
    (if (equal (ceiling area) (floor area))
        t
        nil)))
This is supposed to use Heron's formula to calculate the area of the triangle, given the lengths of the three sides, and decide whether it is an integer by comparing the ceiling and the floor. We are told that the area of an equilateral triangle is never an integer. Therefore, to test whether the function is working, I ran it with 333 for all three arguments. Here is what I got in return:
CL-USER> (area-int-p 333 333 333)
NIL
Perfect! It works. To test it even more, I ran it with 3333 for all three arguments. This is what I got in return:
CL-USER> (area-int-p 3333 3333 3333)
T
Something is wrong, this is not supposed to happen!
So, I try the following, hopefully equivalent Haskell function to see what happens:
areaIntP :: (Integral a) => a -> a -> a -> Bool
areaIntP a b c =
  let aa = fromIntegral a
      bb = fromIntegral b
      cc = fromIntegral c
      perimeter = aa + bb + cc
      s = perimeter / 2
      area = sqrt (s * (s - aa) * (s - bb) * (s - cc))
  in if ceiling area == floor area
       then True
       else False
This is what I get:
*Main> areaIntP 3333 3333 3333
False
*Main> areaIntP 333 333 333
False
Looks perfect. Encouraged by this, I use the functions below in Haskell to sum the perimeters of isosceles triangles whose third side differs by just one unit from the other sides, whose area is integral, and whose perimeter is below 1,000,000,000.
toplamArtilar :: Integral a => a -> a -> a -> a
toplamArtilar altSinir ustSinir toplam =
  if ustSinir == altSinir
    then toplam
    else if areaIntP ustSinir ustSinir (ustSinir + 1) == True
      then toplamArtilar altSinir (ustSinir - 1) (toplam + (3 * ustSinir + 1))
      else toplamArtilar altSinir (ustSinir - 1) toplam

toplamEksiler :: Integral a => a -> a -> a -> a
toplamEksiler altSinir ustSinir toplam =
  if ustSinir == altSinir
    then toplam
    else if areaIntP ustSinir ustSinir (ustSinir - 1) == True
      then toplamEksiler altSinir (ustSinir - 1) (toplam + (3 * ustSinir - 1))
      else toplamEksiler altSinir (ustSinir - 1) toplam

sonuc altSinir ustSinir =
  toplamEksiler altSinir ustSinir (toplamArtilar altSinir ustSinir 0)
(ustSinir means upper limit, altSinir lower limit by the way.)
When I run sonuc with the arguments 2 and 333333333, however, the stack overflows. Running the equivalent functions in Common Lisp, the stack is fine, but the area-int-p function is not reliable, probably because of the boundaries of the number type the interpreter deduces.
After all this, my question is two-fold:
1) How do I get round the problem in the Common LISP function area-int-p?
2) How do I prevent the stack overflow with the Haskell functions above, either within Emacs or with GHCi run from the terminal?
Note for those who figure out what I am trying to achieve here: please don't tell me to use Java BigDecimal and BigInteger.
Edit after very good replies: I asked two questions in one, and received perfectly satisfying, newbie friendly answers and a note on style from very helpful people. Thank you.
Let's define an intermediate Common Lisp function:
(defun area (a b c)
(let ((s (/ (+ a b c) 2)))
(sqrt (* s (- s a) (- s b) (- s c)))))
Your tests give:
CL-USER> (area 333 333 333)
48016.344
CL-USER> (area 3333 3333 3333)
4810290.0
In the second case, it should be clear that both the ceiling and floor are equal. This is not the case in Haskell where the second test, with 3333, returns:
4810290.040910754
Floating point
In Common Lisp, the value from which we take a square root is:
370222244442963/16
This is because computations are made with rational numbers. Up to this point, the precision is maximal. However, SQRT is free to return either a rational, when possible, or an approximate result. As a special case, the result can be an integer on some implementations, as Rainer Joswig pointed out in a comment. It makes sense because both integer and ratio are disjoint subtypes of the rational type. But as your problem shows, some square roots are irrational (e.g. √2), and in that case CL can return a float approximating the value (or a complex float).
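The exact rational radicand quoted above is easy to reproduce in Python, whose Fraction type plays the role of CL's rationals; as in CL, precision is only lost when the value is converted to a machine float for the square root (a sketch for comparison):

```python
from fractions import Fraction
import math

a = b = c = 3333
s = Fraction(a + b + c, 2)
radicand = s * (s - a) * (s - b) * (s - c)
print(radicand)             # 370222244442963/16, exactly the value quoted above
print(math.sqrt(radicand))  # the lossy conversion to a double happens here
```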
The relevant section regarding floats and mathematical functions is 12.1.3.3 Rule of Float Substitutability. Long story short, the result is converted to a single-float when you compute the square root, which happens to lose some precision. In order to have a double, you have to be more explicit:
(defun area (a b c)
  (let ((s (/ (+ a b c) 2)))
    (sqrt (float (* s (- s a) (- s b) (- s c)) 0d0))))
I could also have used (coerce ... 'double-float), but here I chose to call the FLOAT conversion function. The optional second argument is a float prototype, i.e. a value of the target type. Above, it is 0d0, a double float. You could also use 0l0 for long floats or 0s0 for short ones. This parameter is useful if you want the same precision as an input float, but it can be used with literals too, as in the example. The exact meaning of the short, single, double, and long float types is implementation-defined, but they must respect some rules. Current implementations generally give more precision than the minimum required.
CL-USER> (area 3333 3333 3333)
4810290.040910754d0
Now, if I wanted to test if the result is integral, I would truncate the float and look if the second returned value, the remainder, is zero.
CL-USER> (zerop (nth-value 1 (truncate 4810290.040910754d0)))
NIL
Arbitrary-precision
Note that regardless of the implementation language (Haskell, CL or another one) the approach is going to give incorrect results for some inputs, given how floats are represented. Indeed, the same problem you had with CL could arise for some inputs with more precise floats, where the result would be very close to an integer. You might need another mathematical approach or something like MPFR for arbitrary precision floating point computations. SBCL ships with sb-mpfr:
CL-USER> (require :sb-mpfr)
("SB-MPFR" "SB-GMP")
CL-USER> (in-package :sb-mpfr)
#<PACKAGE "SB-MPFR">
And then:
SB-MPFR> (with-precision 256
           (sqrt (coerce 370222244442963/16 'mpfr-float)))
.4810290040910754427104204965311207243133723228299086361205561385039201180068712e+7
-1
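As the answer suggests, a different mathematical approach can sidestep floats entirely. For integer sides, Heron's formula gives 16·area² = (a+b+c)(−a+b+c)(a−b+c)(a+b−c), an ordinary integer, and the area is an integer exactly when that product equals (4k)² for some integer k. A hedged sketch in Python using exact integer square roots:

```python
from math import isqrt

def area_int_p(a: int, b: int, c: int) -> bool:
    # 16 * area^2, expanded so that everything stays an exact integer
    p = (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)
    if p < 0:
        return False  # the side lengths do not form a triangle
    r = isqrt(p)
    # area is an integer iff p is a perfect square whose root is divisible by 4
    return r * r == p and r % 4 == 0

print(area_int_p(3, 4, 5))           # True: the area is 6
print(area_int_p(3333, 3333, 3333))  # False, with no rounding anywhere
```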
I will answer your second question; I'm not sure about the first. In Haskell, because it's a lazy language, when you use tail recursion with an accumulator parameter, an "accumulation of thunks" can take place. A thunk is an expression that is suspended and not yet evaluated. To take a much simpler example, summing all the numbers from 0 to n:
tri :: Int -> Int -> Int
tri 0 accum = accum
tri n accum = tri (n-1) (accum + n)
If we trace the evaluation, we can see what's going on:
tri 3 0
= tri (3-1) (0+3)
= tri 2 (0+3)
= tri (2-1) ((0+3)+2)
= tri 1 ((0+3)+2)
= tri (1-1) (((0+3)+2)+1)
= tri 0 (((0+3)+2)+1)
= ((0+3)+2)+1 -- here is where ghc uses the C stack
= (0+3)+2 (+1) on stack
= 0+3 (+2) (+1) on stack
= 0 (+3) (+2) (+1) on stack
= 3 (+2) (+1) on stack
= 5 (+1) on stack
= 6
This is a simplification of course, but it's an intuition that can help you understand both stack overflows and space leaks caused by thunk buildup. GHC only evaluates a thunk when it's needed. We ask whether the value of n is 0 each time through tri, so there is no thunk buildup in that parameter, but nobody needs to know the value of accum until the very end, and by then it might be a really huge thunk, as you can see from the example. Evaluating that huge thunk is what can overflow the stack.
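The buildup can even be simulated in a strict language. In this illustrative Python sketch (my own analogy, not GHC's mechanism), each closure plays the role of a thunk: nothing is added until the chain is finally forced, and forcing recurses once per suspended addition, which is exactly where the stack gives out:

```python
def tri_lazy(n):
    acc = lambda: 0  # "thunk" holding the initial accumulator
    for i in range(1, n + 1):
        # suspend (acc + i) instead of computing it, as a lazy language would
        acc = (lambda prev, j: lambda: prev() + j)(acc, i)
    return acc()  # forcing the final thunk walks the whole chain

print(tri_lazy(100))  # 5050
# tri_lazy(100_000) raises RecursionError: the analogue of the stack overflow
```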
The solution is to make tri evaluate accum sooner. This is usually done using a BangPattern (but can be done with seq if you don't like extensions).
{-# LANGUAGE BangPatterns #-}
tri :: Int -> Int -> Int
tri 0 !accum = accum
tri n !accum = tri (n-1) (accum + n)
The ! before accum means "evaluate this parameter at the moment of pattern matching" (even though the pattern doesn't technically need to know its value). Then we get this evaluation trace:
tri 3 0
= tri (3-1) (0+3)
= tri 2 3 -- we evaluate 0+3 because of the bang pattern
= tri (2-1) (3+2)
= tri 1 5
= tri (1-1) (5+1)
= tri 0 6
= 6
I hope this helps.
About style:
(if (predicate? ...) t nil)
is just
(predicate? ...)
Your IF checks whether the predicate returned T and then returns T. But the predicate's result is already T or NIL, so you can just return it directly.

Odd reuse in the evaluation of these expressions

I'm trying to use Debug.Trace.trace to figure out how many times a function is evaluated, and seeing some surprising results.
ghci> import Debug.Trace
ghci> let x = (trace " Eval'd!" (* 2)) 3 in (x, x)
( Eval'd!
6, Eval'd!
6)
ghci> let x = (trace " Eval'd!" (* 2)) 3 in x `seq` (x, x)
Eval'd!
( Eval'd!
6, Eval'd!
6)
ghci> let x = (trace " Eval'd!" (* 2)) (3 :: Int) in (x, x)
( Eval'd!
6,6)
I'm making the assumption that Eval'd is printed once for each evaluation of the (* 2) function. Is that a correct assumption?
Secondly, why is the message ever printed more than once? I suspect that x having some unspecified type in the Num typeclass has something to do with it, given that the third case works, but I can't think of an explanation.
(x, x) :: Num a => (a, a) guarantees that the two elements of the tuples have the same value just as much as (x, x) :: (Int, Int), so why eval x twice?
UPDATE:
Actually I had assumed that the type of (x, x) was Num a => (a, a). But it's apparently (x, x) :: (Num t, Num t1) => (t, t1).
Why does GHC not realize that t ~ t1 here? I suspect it's related to the answer to my question.
They're not guaranteed to be the same type:
Prelude Debug.Trace> :t let x = (trace " Eval'd!" (* 2)) 3 in (x, x)
let x = (trace " Eval'd!" (* 2)) 3 in (x, x)
:: (Num t, Num t1) => (t, t1)
Also, if you put it in a file, it only gets evaluated once, even when called from GHCi. (This is because in declarations in files but not GHCi, the dreaded monomorphism restriction is on by default):
import Debug.Trace
main = print $ let x = (trace " Eval'd!" (* 2)) 3 in (x, x)
Also,
let x = (trace " Eval'd!" 6) in (x,x)
behaves about the same, so it's really all in the type (class) ambiguity.
The reason it doesn't share all uses of x is that at the GHC Core level, x, unoptimized, is really a function taking a Num typeclass dictionary argument. To share it, GHC has to do enough analysis to see that the types are the same.
The reason why it doesn't realize is basically that GHCi is intended for fast code experimentation turnaround, not for creating good code, so it does nearly no optimization at all, so it's almost pure luck whether it detects such things or not.
(There's also an outstanding hole in the GHCi bytecode design that means you cannot enable optimization levels for it, even if you'd want to. Basically it doesn't support "unboxed tuples", which GHC optimization uses a lot.)

How to compare two functions for equivalence, as in (λx.2*x) == (λx.x+x)?

Is there a way to compare two functions for equality? For example, (λx.2*x) == (λx.x+x) should return true, because those are obviously equivalent.
It's pretty well known that function equality is undecidable in general, so you'll have to pick a subset of the problem that you're interested in. You might consider some of these partial solutions:
Presburger arithmetic is a decidable fragment of first-order logic + arithmetic.
The universe package offers function equality tests for total functions with finite domain.
You can check that your functions are equal on a whole bunch of inputs and treat that as evidence for equality on the untested inputs; check out QuickCheck.
SMT solvers make a best effort, sometimes responding "don't know" instead of "equal" or "not equal". There are several bindings to SMT solvers on Hackage; I don't have enough experience to suggest a best one, but Thomas M. DuBuisson suggests sbv.
There's a fun line of research on deciding function equality and other things on compact functions; the basics of this research is described in the blog post Seemingly impossible functional programs. (Note that compactness is a very strong and very subtle condition! It's not one that most Haskell functions satisfy.)
If you know your functions are linear, you can find a basis for the source space; then every function has a unique matrix representation.
You could attempt to define your own expression language, prove that equivalence is decidable for this language, and then embed that language in Haskell. This is the most flexible but also the most difficult way to make progress.
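The "check on a whole bunch of inputs" option from the list above is easy to sketch in Python (hedged: agreement on sampled inputs is only evidence, never a proof of equality):

```python
import random

def counterexample(f, g, trials=1000, lo=-10**6, hi=10**6):
    """Return an input where f and g disagree, or None if none was found."""
    for _ in range(trials):
        x = random.randint(lo, hi)
        if f(x) != g(x):
            return x
    return None

print(counterexample(lambda x: 2 * x, lambda x: x + x))      # None: looks equal
print(counterexample(lambda x: 2 * x, lambda x: x + x + 1))  # some witness input
```

QuickCheck does essentially this, but with smarter input generation and shrinking of counterexamples.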
This is undecidable in general, but for a suitable subset, you can indeed do it today effectively using SMT solvers:
$ ghci
GHCi, version 8.0.1: http://www.haskell.org/ghc/ :? for help
Prelude> :m Data.SBV
Prelude Data.SBV> (\x -> 2 * x) === (\x -> x + x :: SInteger)
Q.E.D.
Prelude Data.SBV> (\x -> 2 * x) === (\x -> 1 + x + x :: SInteger)
Falsifiable. Counter-example:
s0 = 0 :: Integer
For details, see: https://hackage.haskell.org/package/sbv
In addition to the practical examples given in the other answer, let us pick the subset of functions expressible in typed lambda calculus; we can also allow product and sum types. Although checking whether two functions are equal can be as simple as applying them to a variable and comparing the results, we cannot build the equality function within the programming language itself.
ETA: λProlog is a logic programming language for manipulating (typed lambda calculus) functions.
2 years have passed, but I want to add a little remark to this question. Originally, I asked if there is any way to tell if (λx.2*x) is equal to (λx.x+x). Addition and multiplication on the λ-calculus can be defined as:
add = (a b c -> (a b (a b c)))
mul = (a b c -> (a (b c)))
Now, if you normalize the following terms:
add_x_x = (λx . (add x x))
mul_x_2 = (mul (λf x . (f (f x))))
You get:
result = (a b c -> (a b (a b c)))
for both programs. Since their normal forms are equal, the two programs are obviously equal. While this doesn't work in general, it does work for many terms in practice: (λx.(mul 2 (mul 3 x))) and (λx.(mul 6 x)) have the same normal form, for example.
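Hedged illustration: the doubling example can be replayed in Python with the textbook Church encodings (slightly different from the shorthand above); decoding both sides back to machine integers shows them agreeing on concrete inputs, which is the "apply and compare results" idea rather than normalization:

```python
# textbook Church numerals: a numeral n applies f to x exactly n times
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))
two  = succ(succ(zero))

def church(i):          # build the numeral for a machine integer
    n = zero
    for _ in range(i):
        n = succ(n)
    return n

def decode(n):          # read a numeral back: count applications of f
    return n(lambda k: k + 1)(0)

double_by_add = lambda n: add(n)(n)  # (λx. (add x x))
double_by_mul = mul(two)             # (mul 2)

print([decode(double_by_add(church(i))) for i in range(5)])  # [0, 2, 4, 6, 8]
print([decode(double_by_mul(church(i))) for i in range(5)])  # [0, 2, 4, 6, 8]
```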
In a language with symbolic computation like Mathematica, you can compare the simplified symbolic forms directly. The same idea works in C# with a computer algebra library:
MathObject f(MathObject x) => x + x;
MathObject g(MathObject x) => 2 * x;
{
    var x = new Symbol("x");
    Console.WriteLine(f(x) == g(x));
}
The above displays 'True' at the console.
Proving two functions equal is undecidable in general but one can still prove functional equality in special cases as in your question.
Here's a sample proof in Lean
def foo : (λ x, 2 * x) = (λ x, x + x) :=
begin
  apply funext, intro x,
  cases x,
  { refl },
  { simp,
    dsimp [has_mul.mul, nat.mul],
    have zz : ∀ a : nat, 0 + a = a := by simp,
    rw zz }
end
One can do the same in other dependently typed language such as Coq, Agda, Idris.
The above is a tactic style proof. The actual definition of foo (the proof) that gets generated is quite a mouthful to be written by hand:
def foo : (λ (x : ℕ), 2 * x) = λ (x : ℕ), x + x :=
funext
(λ (x : ℕ),
nat.cases_on x (eq.refl (2 * 0))
(λ (a : ℕ),
eq.mpr
(id_locked
((λ (a a_1 : ℕ) (e_1 : a = a_1) (a_2 a_3 : ℕ) (e_2 : a_2 = a_3), congr (congr_arg eq e_1) e_2)
(2 * nat.succ a)
(nat.succ a * 2)
(mul_comm 2 (nat.succ a))
(nat.succ a + nat.succ a)
(nat.succ a + nat.succ a)
(eq.refl (nat.succ a + nat.succ a))))
(id_locked
(eq.mpr
(id_locked
(eq.rec (eq.refl (0 + nat.succ a + nat.succ a = nat.succ a + nat.succ a))
(eq.mpr
(id_locked
(eq.trans
(forall_congr_eq
(λ (a : ℕ),
eq.trans
((λ (a a_1 : ℕ) (e_1 : a = a_1) (a_2 a_3 : ℕ) (e_2 : a_2 = a_3),
congr (congr_arg eq e_1) e_2)
(0 + a)
a
(zero_add a)
a
a
(eq.refl a))
(propext (eq_self_iff_true a))))
(propext (implies_true_iff ℕ))))
trivial
(nat.succ a))))
(eq.refl (nat.succ a + nat.succ a))))))

Partial Application of Infix Functions in F#

In Haskell it is possible to partially apply an infix function using sections; for instance, given the infix function < (less than), one can partially apply either of the function's arguments: (5 <), (< 5)
In other words, in Haskell we have the following shorthand notation:
op :: a -> b -> c
(`op` y) === \x -> x `op` y
(x `op`) === \y -> x `op` y
Does F# have a similar concept?
No, neither of those (apart from standard partial application like (=) x).
While I like the succinctness of Seq.find ((=) x), things like Seq.filter ((<) 3) (or even Seq.map (flip (-) 1)) are simply awkward to read and should immediately be replaced by a lambda expression, imo.
If you want to invent your own standards...
let lsection x f y = f x y
let rsection f y x = f x y
Then lsection 5 (<) === (5 <) and rsection (<) 5 === (< 5).
Though really, without language support, just put a lambda in there and it'll be clearer.
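For what it's worth, the same pair of helpers can be sketched in Python, which also lacks operator sections (the helper names here are made up):

```python
import operator

def lsection(x, f):
    # (x `op`)  ==  lambda y: op(x, y)
    return lambda y: f(x, y)

def rsection(f, y):
    # (`op` y)  ==  lambda x: op(x, y)
    return lambda x: f(x, y)

less_than_5    = rsection(operator.lt, 5)  # like Haskell's (< 5)
five_less_than = lsection(5, operator.lt)  # like (5 <)

print(less_than_5(3), five_less_than(3))  # True False
```

As in F#, a plain lambda is usually clearer than inventing such helpers.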
