I'm working with linear problems over the rationals in Z3. To use Z3, I go through SBV.
An example of a problem I pose is:
import Data.SBV
solution1 = do
  x <- sRational "x"
  w <- sRational "w"
  constrain $ x .< w
  constrain $ x + 2*w .>= 0 .|| x .== 1
My question is:
Are these kinds of problems decidable?
I couldn't find a list of decidable theories or a way to tell if a theory is decidable.
The closest I found is this. The theory of the reals is decidable, but is the same true for the rationals? Intuition tells me that it is, but I have not found anything that allows me to be sure of it.
Thanks in advance
SBV models rationals using the standard "two integers" idea; that is, it represents the numerator and the denominator separately as integers. This means that if you add two symbolic rationals, you'll have a non-linear term over the integers. So, in theory, the problem will be in the semi-decidable fragment. That is, even if you restrict your multiplications to concrete scalars, addition of symbolic rationals will give rise to non-linear terms over integers.
Having said that, I've had good luck using rationals: z3 was able to decide most problems of interest without much difficulty. If it proves to be an issue, you should switch to the SReal type (i.e., algebraic reals), for which z3 has a decision procedure. But of course, the models you get can now include algebraic reals, such as square-root-of-2, etc. (i.e., the roots of any polynomial with integer coefficients).
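For instance, here is the same problem over SReal; a minimal sketch (solution2 is an illustrative name, not from your post; note sReal in place of sRational, everything else unchanged):

import Data.SBV

-- Same constraints as solution1, but over algebraic reals,
-- for which z3 has a decision procedure.
solution2 :: IO SatResult
solution2 = sat $ do
  x <- sReal "x"
  w <- sReal "w"
  constrain $ x .< w
  constrain $ x + 2*w .>= 0 .|| x .== 1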
Side note: If your problem allows for delta-sat (i.e., satisfiability with perturbations), you should look into dReal (http://dreal.github.io), which SBV also supports as a backend solver. But perhaps that's not what you had in mind.
Theoretical note
Strictly speaking, linear arithmetic over the rationals is decidable; see Section 3 of https://www.cs.ox.ac.uk/people/james.worrell/lecture15-2015.pdf for a proof. However, SMT solvers do not support rationals out-of-the-box, and SBV (as I mentioned above) uses two symbolic integers to represent a rational. So, adding two rationals gives rise to multiplication of two symbolic integers, taking you out of the decidable fragment. Of course, in practice the solvers are quite adept at coming up with solutions even in the presence of non-linear terms; you're just not always guaranteed one. So, a stricter answer to your question is: while linear arithmetic over the rationals is decidable, the translation used by SBV puts the problem into the non-linear integer arithmetic domain, and hence decidability is not guaranteed. In any case, SMTLib does not come with a theory of rationals, so you're kind of out of luck when it comes to first-class support for them.
I guess a rational solution will exist iff an integer solution exists to a suitably scaled collection of constraints. For example, x=1/2(=5/10), w=3/5(=6/10) is a solution to your example problem. Scaling your problem by 10, we have the equivalent constraint set:
10*x < 10*w
(10*x + 20*w >= 0) || (10*x == 10)
Writing x'=10*x and w'=10*w, this means that x'=5, w'=6 is an integer solution to:
x' < w'
(x' + 2*w' >= 0) || (x' == 10)
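As a quick illustration of the idea, here's a sketch of the scaled system over symbolic integers in SBV (solution1Scaled is an illustrative name):

import Data.SBV

-- The scaled problem, now entirely in linear integer arithmetic.
solution1Scaled :: IO SatResult
solution1Scaled = sat $ do
  x' <- sInteger "x'"
  w' <- sInteger "w'"
  constrain $ x' .< w'
  constrain $ x' + 2*w' .>= 0 .|| x' .== 10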
Presburger famously showed that first-order logic plus integers and addition is decidable. (Multiplication by a constant is also allowed, since it can be expanded into additions -- e.g. 3*x is x+x+x.)
I guess the only trick left is to show that it's possible to choose what scaling to use without having solved the problem yet. Nothing obvious occurs to me off the top of my head, but it seems reasonable that this should be doable. For example, perhaps if you take the product of all the nonzero numerators and denominators in your constraint set, you can show that the set of rationals with that product as their denominator is indistinguishable from the full set of rationals. (If so, you could look through the proof to see if it still works with a smaller denominator.)
I'm not a z3 expert, so I can't talk about how this translates to whether that tool specifically is suitable, but it seems likely to me that it is possible to create a suitable tool.
A total (functional) language is one in which everything can be shown to terminate. Obviously, there are lots of places where I don't want this - throwing exceptions is sometimes handy, a web-server isn't supposed to terminate, etc. But sometimes, I would like a local totality check to enable certain optimizations. For example, if I have a provably-total function
commutativity :: forall (n :: Nat) (m :: Nat). n + m :~: m + n
commutativity = ...
then, since :~: has exactly one inhabitant (Refl), GHC could optimize
gcastWith (commutativity #n #m) someExpression
==>
someExpression
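(For reference, here is roughly what such a proof looks like with hand-rolled singletons; this is a sketch using explicit singleton arguments rather than the implicit style above, and the helpers plusZero and plusSucc are illustrative names:)

{-# LANGUAGE DataKinds, GADTs, TypeFamilies, TypeOperators #-}
import Data.Type.Equality ((:~:)(..), gcastWith)

data Nat = Z | S Nat

data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

type family (n :: Nat) + (m :: Nat) :: Nat where
  'Z   + m = m
  'S n + m = 'S (n + m)

-- n + Z == n, by induction on n: builds a chain of n Refls at runtime
plusZero :: SNat n -> (n + 'Z) :~: n
plusZero SZ     = Refl
plusZero (SS n) = gcastWith (plusZero n) Refl

-- n + S m == S (n + m), also by induction on n
plusSucc :: SNat n -> SNat m -> (n + 'S m) :~: 'S (n + m)
plusSucc SZ     _ = Refl
plusSucc (SS n) m = gcastWith (plusSucc n m) Refl

-- Commutativity, by induction on n: O(n) recursive calls at runtime,
-- even though the final answer is always just Refl.
commutativity :: SNat n -> SNat m -> (n + m) :~: (m + n)
commutativity SZ     m = gcastWith (plusZero m) Refl
commutativity (SS n) m =
  gcastWith (commutativity n m) (gcastWith (plusSucc m n) Refl)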
And my proof of commutativity goes from having an O(n) runtime cost to being free. So, now for my question:
What are some of the subtle difficulties in making a totality checker for Haskell?
Obviously, such a checker is conservative, so whenever GHC isn't sure something is total (or is too lazy to check) it could assume it isn't... Seems to me it might not be too difficult to cobble together a not-so-smart checker that would still be very useful (at least it should be straightforward to eliminate all my arithmetic proofs). Yet, I can't seem to find any efforts to build such a thing into GHC, so obviously I'm missing some pretty big constraints. Go ahead SO, crush my dreams. :)
Relevant but not recent: Unfailing Haskell by Neil Mitchell, 2005.
Liquid Haskell has totality checking: https://github.com/ucsd-progsys/liquidhaskell#termination-check
By default a termination check is performed on all recursive functions.
Use the no-termination option to disable the check:
liquid --no-termination test.hs
In recursive functions the first algebraic or integer argument should be decreasing.
The default decreasing measure for lists is length, and for Integers, their value.
(I included a screenshot and quote for posterity.)
As in Agda and other languages with totality checking, the arguments to the function must become structurally smaller over time to reach the base case. Combined with the termination check, this makes for a reliable totality check for a number of functions. LH also supports helping the checker along by indicating how things decrease, which you can do even for a data type that's opaque or comes from an FFI. It's really quite practical.
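For a flavor of what the checker accepts, here is a minimal sketch (the function and the explicit decreasing metric / [n] are illustrative):

-- LiquidHaskell accepts this: the metric n strictly decreases on
-- every recursive call and is bounded below at the base case.
{-@ sumTo :: n:Nat -> Nat / [n] @-}
sumTo :: Int -> Int
sumTo 0 = 0
sumTo n = n + sumTo (n - 1)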
I have a program that's almost pure mathematical computation. The problem is that some of those computations operate on Monte Carlo-generated values.
It seems like I have two design options:
Either all my computation functions take an additional parameter containing a pre-generated Monte Carlo chain. This lets me keep pure functions everywhere, but since functions call other functions, it adds a lot of line noise to the code base.
The other option is to make all the computation functions monadic. This seems unfortunate, since some of the functions aren't even using those random values; they're just calling a function which calls a function which needs the random values.
Is there any guidance regarding the preferred design here? Specifically, the separation of monadic / non-monadic functions in the code where monte carlo values are concerned?
The other option is to make all the computation functions monadic. This seems unfortunate, since some of the functions aren't even using those random values; they're just calling a function which calls a function which needs the random values.
I would suggest following this approach, and I disagree with your assessment that it's "unfortunate." What monads are good at precisely is separating your pure code from your side effecting code. Your pure functions can just have pure types, and the Functor/Applicative/Monad methods serve to "hook them up" with the random generation parts. Meditate on the signatures of the standard operations (here specialized to some idealized Random monad type):
-- Apply a pure function to a randomly selected value.
fmap :: (a -> b) -> Random a -> Random b
-- Apply a randomly selected function to a randomly selected argument.
-- The two random choices are independent.
(<*>) :: Random (a -> b) -> Random a -> Random b
-- Apply a two-argument function to two randomly selected arguments.
-- The two random choices are independent.
liftA2 :: (a -> b -> c) -> Random a -> Random b -> Random c
-- Make a `Random b` choice whose distribution depends on the value
-- sampled from the `Random a`.
(>>=) :: Random a -> (a -> Random b) -> Random b
So the reformulated version of your approach is:
Write pure functions wherever you can.
Adapt these pure functions to work on the random values by using the Functor/Applicative/Monad class operations.
Wherever you spot a function that's mentioning the Random type superfluously, figure out how to factor the Random part out using those classes' operations (or the copious utility functions that exist for them).
This is not specific to random number generation, by the way, but applies to any monad.
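To make that concrete, here is a minimal sketch (mean and estimate are illustrative names; any monad you can sample in, e.g. IO with randomIO, will do):

import Control.Monad (replicateM)

-- Pure core: knows nothing about randomness.
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

-- Monadic edge: only the sampling lives in the monad; the pure
-- function is hooked up with <$> (i.e., fmap).
estimate :: Monad m => m Double -> Int -> m Double
estimate sample n = mean <$> replicateM n sample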
You might enjoy reading this article, and might want to check out the author's random generation monad library:
"Encoding Statistical Independence, Statically"
https://hackage.haskell.org/package/mwc-probability
I doubt you need to follow the article's approach of using free monads for modeling, but the conceptual bits about probability distribution monads will likely be of some help.
tl;dr:
Consider abstracting the random value generator and passing it as an argument. Haskell's type classes should help you hide that abstraction as much as possible.
Unfortunately, there is no silver bullet here. Since you are using side effects, your "functions" simply aren't functions in the proper sense. Haskell does not allow you to hide that fact (which makes up the largest part of its safety guarantees). So in some way you will need to express it. You also seem to conflate monadic operations and (plain) functions: a function that (indirectly) uses random values is implicitly monadic, while a non-monadic function can always be used inside a monadic operation. So you should probably implement all truly non-monadic functions as such and see how far that carries you.
As a completely unrelated side note: if laziness is not a requirement and Haskell's strong safety guarantees are too much of a burden for you, but you still want to write (mostly) functional code, you could give OCaml a try (or any other ML dialect for that matter).
I'm starting to learn functional programming languages like Haskell and ML, and most of the exercises show off things like:
foldr (+) 0 [1..10]
which is equivalent to
sum = 0
for (i in [1..10])
  sum += i
So that leads me to think: why can't the compiler recognize that this is an arithmetic progression and use the O(1) formula to calculate it?
Especially for pure FP languages without side effects?
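(For reference, the O(1) closed form I have in mind; gaussSum is an illustrative name:)

-- Gauss's formula: sum [1..n] == n * (n + 1) `div` 2
gaussSum :: Int -> Int
gaussSum n = n * (n + 1) `div` 2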
The same applies for
sum reverse list == sum list
Given a + b = b + a
and the definition of reverse, can compilers/languages prove it automatically?
Compilers generally don't try to prove this kind of thing automatically, because it's hard to implement.
As well as adding the logic to the compiler to transform one fragment of code into another, you have to be very careful that it only tries to do it when it's actually safe - i.e. there are often lots of "side conditions" to worry about. For example, in your example above, someone might have written an instance of the type class Num (and hence the (+) operator) where a + b is not b + a.
However, GHC does have rewrite rules which you can add to your own source code and could be used to cover some relatively simple cases like the ones you list above, particularly if you're not too bothered about the side conditions.
For example, and I haven't tested this, you might use the following rule for one of your examples above:
{-# RULES
"sum/reverse" forall list . sum (reverse list) = sum list
#-}
Note the parentheses around reverse list - what you've written in your question actually means (sum reverse) list and wouldn't typecheck.
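In the same untested spirit, one could imagine a rule for the arithmetic-progression case too. This is only a sketch: it is restricted to Int, the intermediate product can wrap around differently than the iterated sum would, and GHC may warn that rules on class methods like sum are unlikely to fire:

{-# RULES
"sum/enumFromTo" forall (a :: Int) b.
  sum (enumFromTo a b) = if a > b then 0 else (b - a + 1) * (a + b) `div` 2
#-}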
EDIT:
As you're looking for official sources and pointers to research, I've listed a few.
Obviously it's hard to prove a negative but the fact that no-one has given an example of a general-purpose compiler that does this kind of thing routinely is probably quite strong evidence in itself.
As others have pointed out, even simple arithmetic optimisations are surprisingly dangerous, particularly on floating point numbers, and compilers generally have flags to turn them off - for example Visual C++, gcc. Even integer arithmetic isn't always clear-cut and people occasionally have big arguments about how to deal with things like overflow.
As Joachim noted, integer variables in loops are one place where slightly more sophisticated optimisations are applied because there are actually significant wins to be had. Muchnick's book is probably the best general source on the topic but it's not that cheap. The wikipedia page on strength reduction is probably as good an introduction as any to one of the standard optimisations of this kind, and has some references to the relevant literature.
FFTW is an example of a library that does all kinds of mathematical optimization internally. Some of its code is generated by a customised compiler the authors wrote specifically for the purpose. It's worthwhile because the authors have domain-specific knowledge of optimizations that, in the specific context of the library, are both worth the effort and safe.
People sometimes use template metaprogramming to write "self-optimising libraries" that again might rely on arithmetic identities, see for example Blitz++. Todd Veldhuizen's PhD dissertation has a good overview.
If you descend into the realms of toy and academic compilers all sorts of things go. For example my own PhD dissertation is about writing inefficient functional programs along with little scripts that explain how to optimise them. Many of the examples (see Chapter 6) rely on applying arithmetic rules to justify the underlying optimisations.
Also, it's worth emphasising that the last few examples are of specialised optimisations being applied only to certain parts of the code (e.g. calls to specific libraries) where it is expected to be worthwhile. As other answers have pointed out, it's simply too expensive for a compiler to go searching for all possible places in an entire program where an optimisation might apply. The GHC rewrite rules that I mentioned above are a great example of a compiler exposing a generic mechanism for individual libraries to use in a way that's most appropriate for them.
The answer
No, compilers don’t do that kind of stuff.
One reason why
And for your examples, it would even be wrong: Since you did not give type annotations, the Haskell compiler will infer the most general type, which would be
foldr (+) 0 [1..10] :: Num a => a
and similarly
(\list -> sum (reverse list)) :: Num a => [a] -> a
and the Num instance for the type that is being used might well not fulfil the mathematical laws required for the transformation you suggest. The compiler should, before everything else, avoid changing the meaning (i.e. the semantics) of your program.
More pragmatically: The cases where the compiler could detect such large-scale transformations rarely occur in practice, so it would not be worth it to implement them.
An exception
Notable exceptions are linear transformations in loops. Most compilers will rewrite
for (int i = 0; i < n; i++) {
  ... 200 + 4 * i ...
}
to
for (int i = 0, j = 200; i < n; i++, j += 4) {
  ... j ...
}
or something similar, as that pattern often occurs in code working on arrays.
The optimizations you have in mind will probably not be done even in the presence of monomorphic types, because there are so many possibilities and so much knowledge required. For example, in this example:
sum list == sum (reverse list)
The compiler would need to know or take into account the following facts:
sum = foldl (+) 0
(+) is commutative and associative
reverse list is a permutation of list
foldl x c l, where x is commutative and associative and c is a constant, yields the same result for all permutations of l.
This all seems trivial. Sure, the compiler can most probably look up the definition of sum and inline it. It could be required that (+) be commutative, but remember that + is just another symbol without attached meaning to the compiler. The third point would require the compiler to prove some non-trivial properties about reverse.
But the point is:
You don't want the compiler to perform those calculations with each and every expression. Remember, to make this really useful, you'd have to heap up a lot of knowledge about many, many standard functions and operators.
You still can't replace the expression above with True unless you can rule out the possibility that list or some list element is bottom. Usually, one cannot do this. You can't even perform the following "trivial" optimization of f x == f x in all cases:
f x `seq` True
For, consider
f x = (undefined :: Bool, x)
then
f x `seq` True ==> True
f x == f x ==> undefined
That being said, regarding your first example slightly modified for monomorphism:
f n = n * foldl (+) 0 [1..10] :: Int
it is imaginable to optimize the program by moving the expression out of its context and replacing it with the name of a constant, like so:
const1 = foldl (+) 0 [1..10] :: Int
f n = n * const1
This is because the compiler can see that the expression must be constant.
What you're describing looks like super-compilation. In your case, if the expression had a monomorphic type like Int (as opposed to the polymorphic Num a => a), the compiler could infer that the expression foldr (+) 0 [1..10] has no external dependencies, therefore it could be evaluated at compile time and replaced by 55. However, AFAIK no mainstream compiler currently does this kind of optimization.
(In functional programming "proving" is usually associated with something different. In languages with dependent types, types are powerful enough to express complex propositions, and then, through the Curry-Howard correspondence, programs become proofs of such propositions.)
As others have noted, it's unclear that your simplifications even hold in Haskell. For instance, I can define
newtype NInt = N Int
instance Num NInt where
  N a + _ = N a
  N b * _ = N b
  ... -- etc
and now sum . reverse :: Num a => [a] -> a does not equal sum :: Num a => [a] -> a, since I can specialize each to [NInt] -> NInt, where sum . reverse == sum clearly does not hold.
This is one general tension that exists around optimizing "complex" operations: you actually need quite a lot of information in order to successfully prove that it's okay to optimize something. This is why the syntax-level compiler optimizations which do exist are usually monomorphic and related to the structure of programs. It's usually such a simplified domain that there's "no way" for the optimization to go wrong. Even then it is often unsafe because the domain is never quite so simplified and well-known to the compiler.
As an example, a very popular "high-level" syntactic optimization is stream fusion. In this case the compiler is given enough information to know that stream fusion can occur and is basically safe, but even in this canonical example we have to skirt around notions of non-termination.
So what does it take to have \x -> sum [0..x] get replaced by \x -> x*(x + 1)/2? The compiler would need a theory of numbers and algebra built-in. This is not possible in Haskell or ML, but becomes possible in dependently typed languages like Coq, Agda, or Idris. There you could specify things like
revCommute :: (_+_ :: a -> a -> a)
           -> Commutative _+_
           -> foldr _+_ z (reverse as) == foldr _+_ z as
and then, theoretically, tell the compiler to rewrite according to revCommute. This would still be difficult and finicky, but at least we'd have enough information around. To be clear, I'm writing something very strange above: a dependent type. The type not only depends on the ability to introduce both a type and a name for the argument inline, but also on the existence of the entire syntax of your language "at the type level".
There are a lot of differences between what I just wrote and what you'd do in Haskell, though. First, in order to form a basis where such promises can be taken seriously, we must throw away general recursion (and thus we already don't have to worry about questions of non-termination like stream fusion does). We also must have enough structure around to create something like the promise Commutative _+_: this likely depends upon there being an entire theory of operators and mathematics built into the language's standard library, else you would need to create that yourself. Finally, the richness of the type system required to even express these kinds of theories adds a lot of complexity to the entire system and tosses out type inference as you know it today.
But, given all that structure, I'd never be able to create an obligation Commutative _+_ for the _+_ defined to work on NInts and so we could be certain that foldr (+) 0 . reverse == foldr (+) 0 actually does hold.
But now we'd need to tell the compiler how to actually perform that optimization. For stream-fusion, the compiler rules only kick in when we write something in exactly the right syntactic form to be "clearly" an optimization redex. The same kinds of restrictions would apply to our sum . reverse rule. In fact, already we're sunk because
foldr (+) 0 . reverse
foldr (+) 0 (reverse as)
don't match. They're "obviously" the same due to some rules we could prove about (.), but that means that now the compiler must invoke two built-in rules in order to perform our optimization.
At the end of the day, you need a very smart optimization search over the sets of known laws in order to achieve the kinds of automatic optimizations you're talking about.
So not only do we add a lot of complexity to the entire system, require a lot of base work to build in some useful algebraic theories, and lose Turing completeness (which might not be the worst thing), we also only get a finicky promise that our rule would even fire, unless we perform an exponentially painful search during compilation.
Blech.
The compromise that exists today tends to be that sometimes we have enough control over what's being written to be mostly certain that a certain obvious optimization can be performed. This is the regime of stream fusion and it requires a lot of hidden types, carefully written proofs, exploitations of parametricity, and hand-waving before it's something the community trusts enough to run on their code.
And it doesn't even always fire. For an example of battling that problem take a look at the source of Vector for all of the RULES pragmas that specify all of the common circumstances where Vector's stream-fusion optimizations should kick in.
All of this is not at all a critique of compiler optimizations or dependent type theories. Both are really incredible. Instead it's just an amplification of the tradeoffs involved in introducing such an optimization. It's not to be done lightly.
Fun fact: given two arbitrary formulas, do they both give the same output for the same inputs? The answer to this seemingly trivial question is not computable! In other words, it is mathematically impossible to write a computer program that always gives the correct answer in finite time.
Given this fact, it's perhaps not surprising that nobody has a compiler that can magically transform every possible computation into its most efficient form.
Also, isn't this the programmer's job? If you want the sum of an arithmetic sequence commonly enough that it's a performance bottleneck, why not just write some more efficient code yourself? Similarly, if you really want Fibonacci numbers (why?), use the O(1) algorithm.
Consider these (more or less) equivalent type signatures:
f1 :: [a] -> Int -> a
f2 :: Integral b => [a] -> b -> a
f2 is more general than f1, and that is a big advantage, but is there an advantage of f1 over f2?
It seems that in the solutions to the H-99 problems, and in many posted solutions to various Project Euler problems, the f1 form appears much more often than the f2 form.
I'm not sure why. Is it simply programmer laziness, or is there a performance cost in going with the more general version (f2), or is there another reason?
Often this is due to people wanting to use the index in conjunction with results from length and other functions already constrained to Int. While there are often generic variants such as genericLength, those names are longer and not in the Prelude (see the sketch after this list).
Performance of Int is more easily understood. Performance for Integral a depends in large part on whether the function gets specialized for the particular instance of Integral.
Int is often not just sufficient but also honest. For example, strict ByteStrings can't exceed Int size (typically; please don't comment with esoteric examples), assuming your compiler uses machine-word-sized or larger Ints and not the darned standard 29+ bits.
Overuse of type classes muddies up the signatures, making the language less readable. Were extreme generalization to become common, particularly for the primitive functions, I think we would need to find a more succinct way to express the constraints to keep from driving programmers mad.
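To illustrate the first point, a minimal sketch (the function bodies and the midpoint examples are mine, purely illustrative):

import Data.List (genericLength)

-- The Int version composes directly with length-based code.
f1 :: [a] -> Int -> a
f1 xs n = xs !! n

midpoint1 :: [a] -> a
midpoint1 xs = f1 xs (length xs `div` 2)

-- The generic version needs fromIntegral or genericLength at call sites.
f2 :: Integral b => [a] -> b -> a
f2 xs n = xs !! fromIntegral n

midpoint2 :: [a] -> a
midpoint2 xs = f2 xs (genericLength xs `div` 2 :: Integer)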