Structural induction for multi-way (rose) trees - Haskell

Since multi-way trees can be defined as a recursive type:
data RoseTree a = Node {leaf :: a, subTrees :: [RoseTree a]}
is there a corresponding principle for performing structural induction on this type?

To prove that a property P holds for all (*) rose trees, you have to prove that
if l :: [RoseTree a] is a list of rose trees whose elements all satisfy P, and x :: a is arbitrary, then Node x l satisfies P.
The part about P holding on the elements of l is the induction hypothesis, which you can use to prove P(Node x l).
There is no explicit base case here, because there is no explicit base-case constructor. Yet Node x [] acts as an implicit base case for the trees: when l is empty, the hypothesis "all the elements of l satisfy P" is vacuously true, so the principle above yields P(Node x []) with nothing left to assume.
(*) More precisely, this principle proves P for every finite-depth rose tree. If you really have to consider infinite-depth ones (e.g. circular trees), you need coinduction.
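As a Haskell rendering of this principle, the corresponding structural recursion scheme is a fold over rose trees (foldRose and size are names made up here for illustration):

foldRose :: (a -> [r] -> r) -> RoseTree a -> r
foldRose step (Node x ts) = step x (map (foldRose step) ts)

-- The single equation mirrors the single proof obligation above: `step`
-- receives the results for all the subtrees (the induction hypotheses)
-- and must produce the result for the whole tree. For example, counting
-- the nodes of a finite-depth rose tree:
size :: RoseTree a -> Int
size = foldRose (\_ ns -> 1 + sum ns)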

Related

What is the main difference between Free Monoid and Monoid?

Looks like I have a pretty clear understanding of what a Monoid is in Haskell, but recently I heard about something called a free monoid.
What is a free monoid and how does it relate to a monoid?
Can you provide an example in Haskell?
As you already know, a monoid is a set with an element e and an operation <> satisfying
e <> x = x <> e = x (identity)
(x<>y)<>z = x<>(y<>z) (associativity)
Now, a free monoid, intuitively, is a monoid which satisfies only those equations above, and, obviously, all their consequences.
For instance, the Haskell list monoid ([a], [], (++)) is free.
By contrast, the Haskell sum monoid (Sum Int, Sum 0, \(Sum x) (Sum y) -> Sum (x+y)) is not free, since it also satisfies additional equations. For instance, it's commutative
x<>y = y<>x
and this does not follow from the first two equations.
Note that it can be proved, in maths, that every free monoid is isomorphic to the list monoid [a] (over the appropriate element type). So, "free monoid" in programming is only a fancy term for any data structure which 1) can be converted to a list, and back, with no loss of information, and 2) vice versa, a list can be converted to it, and back, with no loss of information.
In Haskell, you can mentally substitute "free monoid" with "list-like type".
In a programming context, I usually translate free monoid to [a]. In his excellent series of articles about category theory for programmers, Bartosz Milewski describes free monoids in Haskell as the list monoid (assuming one ignores some problems with infinite lists).
The identity element is the empty list, and the binary operation is list concatenation:
Prelude Data.Monoid> mempty :: [Int]
[]
Prelude Data.Monoid> [1..3] <> [7..10]
[1,2,3,7,8,9,10]
Intuitively, I think of this monoid as being 'free' because it is a monoid that you can always apply, regardless of the type of value you want to work with (just like the free monad is a monad you can always create from any functor).
Additionally, when more than one monoid exists for a type, the free monoid defers the decision on which specific monoid to use. For example, for integers, infinitely many monoids exist, but the most common are addition and multiplication.
If you have two (or more) integers, and you know that you may want to aggregate them, but you haven't yet decided which type of aggregation you want to apply, you can instead 'aggregate' them using the free monoid - practically, this means putting them in a list:
Prelude Data.Monoid> [3,7]
[3,7]
If you later decide that you want to add them together, then that's possible:
Prelude Data.Monoid> getSum $ mconcat $ Sum <$> [3,7]
10
If, instead, you wish to multiply them, you can do that as well:
Prelude Data.Monoid> getProduct $ mconcat $ Product <$> [3,7]
21
In these two examples, I've deliberately chosen to elevate each number to a type (Sum, Product) that embodies a more specific monoid, and then use mconcat to perform the aggregation.
For addition and multiplication, there are more succinct ways to do this, but I did it that way to illustrate how you can use a more specific monoid to interpret the free monoid.
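For completeness, the more succinct versions alluded to above are just the Prelude's sum and product:
Prelude Data.Monoid> sum [3,7]
10
Prelude Data.Monoid> product [3,7]
21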
A free monoid is a specific type of monoid. Specifically, it’s the monoid you get by taking some fixed set of elements as characters and then forming all possible strings from those elements. Those strings, with the underlying operation being string concatenation, form a monoid, and that monoid is called the free monoid.
A monoid (M, •, 1) is a mathematical structure such that:
1. M is a set
2. 1 is a member of M
3. • : M × M -> M
4. a • 1 = a = 1 • a
5. Given elements a, b and c in M, we have a • (b • c) = (a • b) • c.
A free monoid on a set M is a monoid (M', •, 0) together with a function e : M -> M' such that, for any monoid (N, *, 1), given a (set) map f : M -> N we can extend this to a monoid morphism f' : (M', •, 0) -> (N, *, 1), i.e.
f a = f' (e a)
f' 0 = 1
f' (a•b) = (f' a) * (f' b)
In other words, it is a monoid that does nothing special.
An example monoid is the integers with the operation being addition and the identity being 0. Another monoid is sequences of integers with the operation being concatenation and the identity being the empty sequence. Now the integers under addition are not a free monoid on the integers. Consider the map into sequences of integers taking n to (n). For addition to be free, we would need to extend this to a morphism taking n + m to (n,m); i.e. it would have to take 0 to (0), but also (since 0 = 0 + 0) to (0,0), to (0,0,0) and so on, which is impossible.
On the other hand if we try to look at sequences of integers as a free monoid on the integers, we see that it seems to work in this case. The extension of the map into the integers with addition is one that takes the sum of a sequence (with the sum of () being 0).
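A small Haskell sketch of that extension (the names e and extend are made up here):

e :: Integer -> [Integer]
e n = [n]

extend :: [Integer] -> Integer
extend = sum

-- extend is a monoid morphism from ([Integer], ++, []) to (Integer, +, 0)
-- agreeing with e: extend [] == 0, extend (xs ++ ys) == extend xs + extend ys,
-- and extend (e n) == n.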
So what is the free monoid on a set S? Well one thing we could try is just arbitrary binary trees of S. In a Haskell type this would look like:
data T a = Unit | Single a | Conc (T a) (T a)
And it would have an identity of Unit, e = Single and (•) = Conc.
And we can write a function to show how it is free:
-- here the second argument represents a monoid structure on b
free :: (a -> b) -> (b -> b -> b, b) -> T a -> b
free f ((*), zero) = f'
  where
    f' (Single a) = f a
    f' Unit       = zero
    f' (Conc a b) = f' a * f' b
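For instance, assuming the definitions above, interpreting a tree of integers into the sum monoid:

example :: Integer
example = free id ((+), 0) (Conc (Single 1) (Conc Unit (Single 2)))
-- example == 3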
It should be quite obvious that this satisfies the required laws for a free monoid on a. Except for one thing: T a is not a monoid, because it does not quite satisfy laws 4 or 5 (identity and associativity).
So now we should ask if we can make this into a simpler free monoid, i.e. one that is an actual monoid. The answer is yes. One way is to observe that Conc Unit a and Conc a Unit and Single a should be the same. So let's make the first two forms unrepresentable:
data TInner a = Single a | Conc (TInner a) (TInner a)
data T a = Unit | Inner (TInner a)
A second observation we can make is that there should be no difference between Conc (Conc a b) c and Conc a (Conc b c). This is due to law 5 above. We can then flatten our tree:
data TInner a = Single a | Conc (a,TInner a)
data T a = Unit | Inner (TInner a)
The strange construction with Conc forces us to only have a single way to represent Single a and Unit. But we see we can merge these all together: change the definition of Conc to Conc [a] and then we can change Single x to Conc [x], and Unit to Conc [] so we have:
data T a = Conc [a]
Or we can just write:
type T a = [a]
And the operations are:
unit = []
e a = [a]
(•) = (++)   -- list append
free f ((*), zero) = f'
  where
    f' []     = zero
    f' (x:xs) = f x * f' xs
So in Haskell, the list type is called the free monoid.
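In terms of the standard library, the extension that witnesses this freeness is essentially foldMap, specialised to lists; for example:

import Data.Monoid (Sum (..))

-- foldMap plays the role of `free` for ordinary lists: it extends any
-- function into a monoid to a monoid morphism from lists.
total :: Int
total = getSum (foldMap Sum [1, 2, 3])
-- total == 6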

number of leaves is one greater than number of nodes in Haskell

I have to prove that given a binary tree, the number of leaves is equal to the number of nodes plus one using induction in Haskell.
Given the following type called Tree:
data Tree = Leaf Int | Node Tree Tree
I defined two functions called leaves and nodes which return the number of leaves and nodes respectively:
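(Presumably something along these lines, counting each Leaf as a leaf and each Node as an internal node:)

leaves :: Tree -> Int
leaves (Leaf _)   = 1
leaves (Node l r) = leaves l + leaves r

nodes :: Tree -> Int
nodes (Leaf _)   = 0
nodes (Node l r) = 1 + nodes l + nodes r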
With induction, I know that I need to prove the base case, which is when the number of nodes is 0, and for the induction step I need to use the induction hypothesis. But what's throwing me off here is that there are two functions, and I don't really know how to proceed. In the base case, am I supposed to show that if the number of nodes is 0, the number of leaves is 1?
The tractable way to do this "by induction" is not using induction on natural numbers but rather using structural induction. The proof breaks down like this:
Base case
The base case is for Leaf x, where x is an Int. So you have to prove that for any x
leaves (Leaf x) = 1 + nodes (Leaf x)
Inductive step
In the inductive step, you assume two inductive hypotheses:
leaves t = 1 + nodes t
leaves u = 1 + nodes u
to prove that
leaves (Node t u) = 1 + nodes (Node t u)
I'll let you fill in the actual proofs.
Side note:
Structural induction is a generalization of induction on natural numbers. In particular, you can define the natural numbers as
data Nat = Z | S Nat
You can now do induction with a base case of p Z, and an inductive step that assumes p n and proves p (S n).
Structural induction can itself be generalized further, to well-founded induction, which is the most general mathematical notion of induction of which I am aware. Note that the Wikipedia page is based on a classical notion of well-foundedness; nLab gives a constructive version that is more tightly tied to well-founded induction.

Would the ability to detect cyclic lists in Haskell break any properties of the language?

In Haskell, some lists are cyclic:
ones = 1 : ones
Others are not:
nums = [1..]
And then there are things like this:
more_ones = f 1 where f x = x : f x
This denotes the same value as ones, and certainly that value is a repeating sequence. But whether it's represented in memory as a cyclic data structure is doubtful. (An implementation could do so, but this answer explains that "it's unlikely that this will happen in practice".)
Suppose we take a Haskell implementation and hack into it a built-in function isCycle :: [a] -> Bool that examines the structure of the in-memory representation of the argument. It returns True if the list is physically cyclic and False if the argument is of finite length. Otherwise, it will fail to terminate. (I imagine "hacking it in" because it's impossible to write that function in Haskell.)
Would the existence of this function break any interesting properties of the language?
Would the existence of this function break any interesting properties of the language?
Yes it would. It would break referential transparency (see also the Wikipedia article). A Haskell expression can always be replaced by its value. In other words, it depends only on the passed arguments and nothing else. If we had
isCycle :: [a] -> Bool
as you propose, expressions using it would not satisfy this property any more. They could depend on the internal memory representation of values. In consequence, other laws would be violated. For example the identity law for Functor
fmap id === id
would not hold any more: you'd be able to distinguish between ones and fmap id ones, as the latter would be acyclic. And compiler optimizations such as applying the above law would no longer preserve program properties.
However, another question would be having a function
isCycleIO :: [a] -> IO Bool
as IO actions are allowed to examine and change anything.
A pure solution could be to have a data type that internally distinguishes the two:
import qualified Data.Foldable as F
data SmartList a = Cyclic [a] | Acyclic [a]
instance Functor SmartList where
    fmap f (Cyclic xs)  = Cyclic (map f xs)
    fmap f (Acyclic xs) = Acyclic (map f xs)

instance F.Foldable SmartList where
    foldr f z (Acyclic xs) = F.foldr f z xs
    foldr f _ (Cyclic xs)  = let r = F.foldr f r xs in r
Of course it wouldn't be able to recognize if a generic list is cyclic or not, but for many operations it'd be possible to preserve the knowledge of having Cyclic values.
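For example, relying on the Foldable instance above (and Data.Foldable's toList), folding a Cyclic value behaves like folding the corresponding infinite repeating list:

firstSeven :: [Int]
firstSeven = take 7 (F.toList (Cyclic [1, 2, 3]))
-- firstSeven == [1,2,3,1,2,3,1]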
In the general case, no, you can't identify a cyclic list. However, if the list is being generated by an unfold operation then you can. Data.List contains this:
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
The first argument is a function that takes a "state" argument of type "b" and may return an element of the list and a new state. The second argument is the initial state. "Nothing" means the list ends.
If the state ever recurs then the list will repeat from the point of the last state. So if we instead use a different unfold function that returns a list of (a, b) pairs we can inspect the state corresponding to each element. If the same state is seen twice then the list is cyclic. Of course this assumes that the state is an instance of Eq or something.
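Here is a sketch of that idea (unfoldDetect is a made-up name; it uses Ord b rather than Eq b only so that the seen states can be kept in a Data.Set):

import qualified Data.Set as Set

-- Unfold as usual, but remember every state seen so far. Returns Nothing
-- if a state repeats (the unfolded list would be cyclic), and Just the
-- finite list if the unfold stops. It still diverges for infinite lists
-- whose states never repeat (like [1..]).
unfoldDetect :: Ord b => (b -> Maybe (a, b)) -> b -> Maybe [a]
unfoldDetect step = go Set.empty
  where
    go seen s
      | s `Set.member` seen = Nothing
      | otherwise =
          case step s of
            Nothing      -> Just []
            Just (x, s') -> (x :) <$> go (Set.insert s seen) s'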

Why in algebraic data types, if I can define a special `from` and `to` function for two types, can the two types be considered equal?

I'm reading this blog: http://chris-taylor.github.io/blog/2013/02/10/the-algebra-of-algebraic-data-types/
It says:
However, when I talk about equality, I don’t mean Haskell equality, in the sense of the (==) function. Instead, I mean that the two types are in one-to-one correspondence – that is, when I say that two types a and b are equal, I mean that you could write two functions
from :: a -> b
to :: b -> a
that pair up values of a with values of b, so that the following equations always hold (here the == is genuine, Haskell-flavored equality):
to (from a) == a
from (to b) == b
And later, there are many laws based on this definition:
Add Void a === a
Add a b === Add b a
Mul Void a === Void
Mul () a === a
Mul a b === Mul b a
I can't understand why we can safely get these laws based on this definition of "equality". Can we use other definitions? What can we do with this definition? Does it make sense for Haskell type systems?
The term that the author is skating around, so as not to "mention category theory or advanced math", is cardinality. He defines two types to be ===-equal to each other if they have equal cardinality -- that is, if there are as many possible values of one as there are of the other.
Because if two types have equal cardinality, there exists an isomorphism between them. Mul () Bool may be a different type than Bool, but there are exactly as many members of one as the other, and one can trivially define a function to go from one to the other, or the other to the one. (Not that there is only one such isomorphism -- the point is, you could choose one.)
It's not a great approach. It works fine for finite sets, basically, but it introduces unfortunate side effects for infinite sets, like Add Int Int === Int. Still, for the basic description of addition and multiplication of types, it seems to serve.
Informally speaking, when two mathematical structures A, B have two "nice" functions from, to satisfying from . to == id and to . from == id, the structures A, B are said to be isomorphic.
The actual definition of "nice" function varies with the kind of structure at hand (and sometimes, different definitions of "nice" give rise to distinct notions of isomorphism).
The idea behind isomorphic structures is that, roughly, they "work" in exactly the same way. For instance, consider a structure A made of the booleans True, False with &&, || as operations. Then let structure B be made of the two naturals 1, 0 with min, max as operations. These are different structures, yet they share the same rules. For instance True && x == x and 1 `min` x == x for all x. A and B are isomorphic: function to will map True to 1, and False to 0, while from will perform the opposite mapping.
(Note that while we could instead map True to 0 and False to 1, and we would still get from . to == id and its dual, this mapping would not be considered "nice" since it would not preserve the structure: preserving && would require to (True && x) == to True `min` to x == 0 `min` to x == 0, yet to (True && x) == to x, which is 1 whenever x is True.)
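A minimal Haskell rendering of this isomorphism, modelling the naturals 0 and 1 as Int for illustration:

to :: Bool -> Int
to True  = 1
to False = 0

from :: Int -> Bool
from 0 = False
from _ = True

-- from . to == id on Bool, to . from == id on {0, 1}, and the mapping
-- preserves the structure: to (x && y) == to x `min` to y and
-- to (x || y) == to x `max` to y.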
Another example in a different setting: consider A to be a circle in a plane, while B is a square in such plane. One can then define continuous mappings to,from between them. This can be done with any two "closed loops", loosely speaking, which can be said to be isomorphic. Instead a circle and an "eight" shape do not admit such continuous mappings: the self-intersecting point in the "eight" can not be mapped to any point in the circle in a continuous way (roughly, four "ways" depart from it, while points in the circle have only two such "ways").
In Haskell, types are similarly said to be isomorphic when two Haskell-definable functions from,to exist between them satisfying the rules above. Here being a "nice" function just means being definable in Haskell. The linked web blog shows a few such isomorphic types. Here's another example, using recursive types:
List1 a = Add Unit (Mul a (List1 a))
List2 a = Add Unit (Add a (Mul a (Mul a (List2 a))))
Intuitively, the first reads as: "a list is either the empty list, or a pair made of an element and a list". The second reads as: "a list is either the empty list, or a single element, or a triple made of an element, another element, and a list". One can convert between the two by handling the elements two at a time.
Another example:
Tree a = Add Unit (Mul a (Mul (Tree a) (Tree a)))
You can prove that the type Tree Unit is isomorphic to List1 (Tree Unit) by exploiting the algebraic laws found in the blog. Below, = stands for isomorphism.
List1 (Tree Unit)
-- definition of List1 a
= Add Unit (Mul (Tree Unit) (List1 (Tree Unit)))
-- by inductive hypothesis, the inner `List1 (Tree Unit)` is isomorphic to `Tree Unit`
= Add Unit (Mul (Tree Unit) (Tree Unit))
-- definition of Tree a
= Tree Unit
The above proof sketch induces the function to as follows.
data Add a b = InL a | InR b
data Mul a b = P a b
type Unit = ()
newtype List1 a = List1 (Add Unit (Mul a (List1 a)))
newtype Tree a = Tree (Add Unit (Mul a (Mul (Tree a) (Tree a))))
to :: List1 (Tree Unit) -> Tree Unit
to (List1 (InL ())) = Tree (InL ())
to (List1 (InR (P t ts))) = Tree (InR (P () (P t (to ts))))
Note how the recursive call plays the role the inductive hypothesis has in the proof.
Writing from is left as an exercise :-P
Why in algebraic data types, if I can define a special from and to function for two types, the two types can be considered equal?
Well, the better term to use here isn't "equal," but rather isomorphic. The thing is that when two types are isomorphic, they are basically interchangeable with each other; any program written in terms of A could, in principle, be written in terms of B instead, without changing the meaning of the program. Suppose you have:
from :: A -> B
to :: B -> A
and these two functions constitute an isomorphism, that is:
to (from a) == a
from (to b) == b
Now, if you have any function that takes A as an argument, you can for example write a counterpart that takes B as an argument instead:
foo :: B -> Something
foo = originalFoo . to
  where originalFoo :: A -> Something
        originalFoo a = ...
And for any function that produces an A, you can likewise do this:
bar :: Something -> B
bar = from . originalBar
  where originalBar :: Something -> A
        originalBar something = ...
Now you've hidden all uses of A inside the where subdefinitions. You could continue down this path and mechanically eliminate all uses of A entirely, and you're guaranteed the program will work the same as when you started.
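As a tiny concrete instance, here is such a pair of functions witnessing that Maybe a and Either () a are isomorphic (the primed names are only to avoid clashing with Data.Maybe.fromMaybe):

fromMaybe' :: Maybe a -> Either () a
fromMaybe' Nothing  = Left ()
fromMaybe' (Just x) = Right x

toMaybe' :: Either () a -> Maybe a
toMaybe' (Left ()) = Nothing
toMaybe' (Right x) = Just x

-- toMaybe' (fromMaybe' m) == m and fromMaybe' (toMaybe' e) == e, so any
-- program written in terms of Maybe a could be rewritten in terms of
-- Either () a by the mechanical procedure sketched above.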

How can quotient types help safely expose module internals?

Reading up on quotient types and their use in functional programming, I came across this post. The author mentions Data.Set as an example of a module which provides a ton of functions which need access to module's internals:
Data.Set has 36 functions, when all that are really needed to ensure the meaning of a set ("These elements are distinct") are toList and fromList.
The author's point seems to be that we need to "open up the module and break the abstraction" if we forgot some function which can be implemented efficiently only using module's internals.
He then says
We could alleviate all of this mess with quotient types.
but gives no explanation to that claim.
So my question is: how are quotient types helping here?
EDIT
I've done a bit more research and found a paper "Constructing Polymorphic Programs with Quotient Types". It elaborates on declaring quotient containers and mentions the word "efficient" in abstract and introduction. But if I haven't misread, it does not give any example of an efficient representation "hiding behind" a quotient container.
EDIT 2
A bit more is revealed in the "Programming in Homotopy Type Theory" paper, in Chapter 3. The fact that a quotient type can be implemented as a dependent sum is used. Views on abstract types are introduced (which look very similar to type classes to me) and some relevant Agda code is provided. Yet the chapter focuses on reasoning about abstract types, so I'm not sure how this relates to my question.
I recently made a blog post about quotient types, and I was led here by a comment. The blog post may provide some additional context in addition to the papers referenced in the question.
The answer is actually pretty straightforward. One way to arrive at it is to ask the question: why are we using an abstract data type in the first place for Data.Set?
There are two distinct and separable reasons. The first reason is to hide the internal type behind an interface so that we can substitute a completely new type in the future. The second reason is to enforce implicit invariants on values of the internal type. Quotient types and their dual, subset types, allow us to make the invariants explicit and enforced by the type checker, so that we no longer need to hide the representation. So let me be very clear: quotient (and subset) types do not provide you with any implementation hiding. If you implement Data.Set with quotient types using lists as your representation, then later decide you want to use trees, you will need to change all code that uses your type.
Let's start with a simpler example (leftaroundabout's). Haskell has an Integer type but not a Natural type. A simple way to specify Natural as a subset type using made up syntax would be:
type Natural = { n :: Integer | n >= 0 }
We could implement this as an abstract type using a smart constructor that threw an error when given a negative Integer. This type says that only a subset of the values of type Integer are valid. Another approach we could use to implement this type is to use a quotient type:
type Natural = Integer / ~ where n ~ m = abs n == abs m
Any function h :: X -> T for some type T induces a quotient type on X quotiented by the equivalence relation x ~ y = h x == h y. Quotient types of this form are more easily encoded as abstract data types. In general, though, there may not be such a convenient function, e.g.:
type Pair a = (a, a) / ~ where (a, b) ~ (x, y) = a == x && b == y || a == y && b == x
(As to how quotient types relate to setoids, a quotient type is a setoid that enforces that you respect its equivalence relation.) This second definition of Natural has the property that there are two values that represent 2, say. Namely, 2 and -2. The quotient type aspect says we are allowed to do whatever we want with the underlying Integer, so long as we never produce a result that differentiates between these two representatives. Another way to see this is that we can encode a quotient type using subset types as:
X/~ = forall a. { f :: X -> a | forEvery (\(x, y) -> x ~ y ==> f x == f y) } -> a
Unfortunately, that forEvery is tantamount to checking equality of functions.
Zooming back out, subset types add constraints on producers of values and quotient types add constraints on consumers of values. Invariants enforced by an abstract data type may be a mixture of these. Indeed, we may decide to represent a Set as the following:
data Tree a = Empty | Branch (Tree a) a (Tree a)
type BST a = { t :: Tree a | isSorted (toList t) }
type Set a = { t :: BST a | noDuplicates (toList t) } / ~
  where s ~ t = toList s == toList t
Note, nothing about this ever requires us to actually execute isSorted, noDuplicates, or toList. We "merely" need to convince the type checker that the implementations of functions on this type would satisfy these predicates. The quotient type allows us to have a redundant representation while enforcing that we treat equivalent representations in the same way. This doesn't mean we can't leverage the specific representation we have to produce a value, it just means that we must convince the type checker that we would have produced the same value given a different, equivalent representation. For example:
maximum :: Set a -> a
maximum s = exposing s as t in go t
  where go Empty              = error "maximum of empty Set"
        go (Branch _ x Empty) = x
        go (Branch _ _ r)     = go r
The proof obligation for this is that the right-most element of any binary search tree with the same elements is the same. Formally, it's go t == go t' whenever toList t == toList t'. If we used a representation that guaranteed the tree would be balanced, e.g. an AVL tree, this operation would be O(log N), while converting to a list and picking the maximum from the list would be O(N). Even with this representation, this code is strictly more efficient than converting to a list and getting the maximum from the list. Note that we could not implement a function that displayed the tree structure of the Set. Such a function would be ill-typed.
I'll give a simpler example where it's reasonably clear. Admittedly I myself don't really see how this would translate to something like Set, efficiently.
data Nat = Nat (Integer / abs)
To use this safely, we must be sure that any function Nat -> T (with some non-quotient T, for simplicity's sake) does not depend on the actual integer value, but only on its absolute. To do so, it's not really necessary to hide Integer completely; it would be sufficient to prevent you from matching on it directly. Instead, the compiler might rewrite the matches, e.g.
even' :: Nat -> Bool
even' (Nat 0) = True
even' (Nat 1) = False
even' (Nat n) = even' . Nat $ n - 2
could be rewritten to
even' (Nat n') = case abs n' of
  [|abs 0|] -> True
  [|abs 1|] -> False
  n         -> even' . Nat $ n - 2
Such a rewriting would point out equivalence violations, e.g.
bad (Nat 1) = "foo"
bad (Nat (-1)) = "bar"
bad _ = undefined
would rewrite to
bad (Nat n') = case n' of
  1 -> "foo"
  1 -> "bar"
  _ -> undefined
which is obviously an overlapped pattern.
Disclaimer: I just read up on quotient types upon reading this question.
I think the author's just saying that sets can be described as quotient types over lists. I.e. (making up some Haskell-like syntax):
data Set a = Set [a] / (sort . nub) deriving (Eq)
I.e., a Set a is just a [a], with equality between two Set a values determined by whether the sort . nub of the underlying lists are equal.
We could do this explicitly like this, I guess:
import Data.List
data Set a = Set [a] deriving (Show)
instance (Ord a, Eq a) => Eq (Set a) where
    (Set xs) == (Set ys) = (sort $ nub xs) == (sort $ nub ys)
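A quick check of the intended quotient behaviour with this instance: duplicates and ordering in the underlying list don't affect equality.

sameSet :: Bool
sameSet = Set [1, 2, 2, 3 :: Int] == Set [3, 2, 1]
-- sameSet == True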
Not sure if this is actually what the author intended as this isn't a particularly efficient way of implementing a set. Someone can feel free to correct me.
