I'm struggling a little to come up with a notion of membership proof for Data.AVL trees. I would like to be able to pass around a value of type n ∈ m, to mean that n appears as a key in the AVL tree m, so that given such a proof, get n m can always successfully yield a value associated with n. You can assume my AVL trees always contain values drawn from a join semilattice (A, ∨, ⊥) over a setoid (A, ≈), although below the idempotence is left implicit.
module Temp where
open import Algebra.FunctionProperties
open import Algebra.Structures renaming (IsCommutativeMonoid to IsCM)
open import Data.AVL
open import Data.List
open import Data.Nat hiding (_⊔_)
import Data.Nat.Properties as ℕ-Prop
open import Data.Product hiding (_-×-_)
open import Function
open import Level
open import Relation.Binary renaming (IsEquivalence to IsEq)
open import Relation.Binary.PropositionalEquality
module ℕ-AVL {v} (V : Set v)
= Data.AVL (const V) (StrictTotalOrder.isStrictTotalOrder ℕ-Prop.strictTotalOrder)
data ≈-List {a ℓ : Level} {A : Set a} (_≈_ : Rel A ℓ) : Rel (List A) (a ⊔ ℓ) where
[] : ≈-List _≈_ [] []
_∷_ : {x y : A} {xs ys : List A} → (x≈y : x ≈ y) → (xs≈ys : ≈-List _≈_ xs ys) → ≈-List _≈_ (x ∷ xs) (y ∷ ys)
_-×-_ : {a b c d ℓ₁ ℓ₂ : Level} {A : Set a} {B : Set b} {C : Set c} {D : Set d} →
REL A C ℓ₁ → REL B D ℓ₂ → A × B → C × D → Set (ℓ₁ ⊔ ℓ₂)
(R -×- S) (a , b) (c , d) = R a c × S b d
-- Two AVL trees are equal iff they have equal toList images.
≈-AVL : {a ℓ : Level} {A : Set a} → Rel A ℓ → Rel (ℕ-AVL.Tree A) (a ⊔ ℓ)
≈-AVL _≈_ = ≈-List ( _≡_ -×- _≈_ ) on (ℕ-AVL.toList _)
_∈_ : {a ℓ : Level} {A : Set a} {_≈_ : Rel A ℓ} {_∨_ : Op₂ A} {⊥ : A}
{{_ : IsCM _≈_ _∨_ ⊥}} → ℕ → ℕ-AVL.Tree A → Set (a ⊔ ℓ)
n ∈ m = {!!}
get : {a ℓ : Level} {A : Set a} {_≈_ : Rel A ℓ} {_∨_ : Op₂ A} {⊥ : A} →
{{_ : IsCM _≈_ _∨_ ⊥}} → (n : ℕ) → (m : ℕ-AVL.Tree A) → n ∈ m → A
get n m n∈m = {!!}
I feel like this should be easy, but I'm finding it hard. One option would be to use my notion of equivalence for AVL-trees, which says that two trees are equal iff they have the same toList image, and exploit the commutative monoid over A, defining
n ∈ m = ≈-AVL _≈_ m (ℕ-AVL.unionWith _ _∨_ m (ℕ-AVL.singleton _ n ⊥))
This essentially says that m contains n iff the singleton map (n, ⊥) is "below" m in the partial order induced by the commutative monoid (technically we need the idempotence for this interpretation to make sense). However, given such a definition I'm not at all sure how to implement get.
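For concreteness, the order I'm appealing to is the usual one a join semilattice induces. Here's a minimal standalone sketch of it (the names Order-Sketch and _≼_ are ad hoc and not part of the code above):

open import Level using (Level)
open import Relation.Binary using (Rel)

module Order-Sketch {a ℓ : Level} {A : Set a}
                    (_≈_ : Rel A ℓ) (_∨_ : A → A → A) where

  -- x is below y exactly when joining x into y changes nothing (up to ≈);
  -- under this reading, the candidate n ∈ m above says "singleton n ⊥ is below m".
  _≼_ : Rel A ℓ
  x ≼ y = (x ∨ y) ≈ y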
I have also experimented with defining my own inductive ∈ relation but found that hard as I seemed to end up having to know about the complicated internal indices used by Data.AVL.
Finally I also tried using a value of type n ∈? m ≡ true, where ∈? is defined in Data.AVL, but didn't have much success there either. I would appreciate any suggestions.
I think your best bet is to define _∈_ as an inductive relation. Yes, this requires you to know the internals of Data.AVL, but I'm fairly sure this will be the most pleasant representation to work with.
The internal structure of Data.AVL is actually quite simple. We have a type Indexed.Tree, which is indexed by three values: the lower bound, the upper bound and the height. Given a tree t : Indexed.Tree lb ub h, all keys stored in t fall within the range (lb, ub).
There's a slight twist to it, though: since we need trees that can contain arbitrary keys, we need to artificially extend the _<_ relation given by isStrictTotalOrder with two new values - you can think of those as a negative and a positive infinity. In the Data.AVL module, these are called ⊥⁺ and ⊤⁺. Trees that can contain arbitrary keys are then of type Tree ⊥⁺ ⊤⁺ h.
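To give you an idea, here's a hand-rolled, standalone sketch of that extension (the real code is Data.AVL's Extended-key module, which is also universe-polymorphic; I'm reusing its names but simplifying everything to Set):

open import Data.Empty using (⊥)
open import Data.Unit using (⊤)

-- A key type extended with a least element ⊥⁺ and a greatest element ⊤⁺.
data Key⁺ (Key : Set) : Set where
  ⊥⁺ ⊤⁺ : Key⁺ Key
  [_]   : Key → Key⁺ Key

-- Lifting a strict order on Key to the extended type.
module Extend {Key : Set} (_<_ : Key → Key → Set) where

  _<⁺_ : Key⁺ Key → Key⁺ Key → Set
  ⊥⁺    <⁺ [ _ ] = ⊤        -- ⊥⁺ is below every real key
  ⊥⁺    <⁺ ⊤⁺    = ⊤        -- ... and below ⊤⁺
  [ x ] <⁺ [ y ] = x < y    -- real keys compare as before
  [ _ ] <⁺ ⊤⁺    = ⊤        -- ⊤⁺ is above every real key
  _     <⁺ _     = ⊥        -- nothing else holds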
The last piece is the balancing: each node requires the heights of its subtrees to be at most one level apart. We don't actually need to touch the balancing, but the function signatures might mention it.
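The balance proofs you'll see below (the b : hˡ ∼ hʳ arguments) have roughly this shape - a sketch of the _∼_ type from Data.AVL's Height-invariants module (the real module also provides the operations needed for rebalancing, which we won't touch):

open import Data.Nat using (ℕ; suc)

-- hˡ ∼ hʳ witnesses that two subtree heights differ by at most one.
data _∼_ : ℕ → ℕ → Set where
  ∼+ : ∀ {n} →     n ∼ suc n   -- right subtree is one level taller
  ∼0 : ∀ {n} →     n ∼ n       -- both subtrees have equal height
  ∼- : ∀ {n} → suc n ∼ n       -- left subtree is one level taller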
Anyways, I'm working directly with this raw (indexed) variant. The opaque, unindexed version is just something like:
data Tree : Set (k ⊔ v ⊔ ℓ) where   -- the levels come from Data.AVL's module parameters
  tree : ∀ {h} → Indexed.Tree ⊥⁺ ⊤⁺ h → Tree
Some boilerplate first:
open import Data.Empty
open import Data.Product
open import Level
open import Relation.Binary
open import Relation.Binary.PropositionalEquality as P using (_≡_)
open import Relation.Nullary
import Data.AVL
module Membership
{k v ℓ}
{Key : Set k} (Value : Key → Set v)
{_<_ : Rel Key ℓ}
(isStrictTotalOrder : IsStrictTotalOrder _≡_ _<_) where
open Data.AVL Value isStrictTotalOrder public
open Extended-key public
open Height-invariants public
open IsStrictTotalOrder isStrictTotalOrder
Here's the _∈_ as an inductive relation:
data _∈_ {lb ub} (K : Key) :
∀ {n} → Indexed.Tree lb ub n → Set (k ⊔ v ⊔ ℓ) where
here : ∀ {hˡ hʳ} V
{l : Indexed.Tree lb [ K ] hˡ}
{r : Indexed.Tree [ K ] ub hʳ}
{b : hˡ ∼ hʳ} →
K ∈ Indexed.node (K , V) l r b
left : ∀ {hˡ hʳ K′} {V : Value K′}
{l : Indexed.Tree lb [ K′ ] hˡ}
{r : Indexed.Tree [ K′ ] ub hʳ}
{b : hˡ ∼ hʳ} →
K < K′ →
K ∈ l →
K ∈ Indexed.node (K′ , V) l r b
right : ∀ {hˡ hʳ K′} {V : Value K′}
{l : Indexed.Tree lb [ K′ ] hˡ}
{r : Indexed.Tree [ K′ ] ub hʳ}
{b : hˡ ∼ hʳ} →
K′ < K →
K ∈ r →
K ∈ Indexed.node (K′ , V) l r b
This is the sort of inductive definition you'd expect: either the key is in this inner node or it's down in one of the subtrees. We could also do without the less-than and greater-than proofs, but this is more convenient - when you want to show that a tree does not contain a particular element, you only have to track the path a lookup would take, instead of searching the whole tree.
How to interpret those l and r implicit arguments? Notice that it makes perfect sense: we have a key K, and naturally we require that the keys contained in l fall between lb and K (it's actually [ K ], since we are using the extended _<_) and the keys in r fall between K and ub. The balance proof (b : hˡ ∼ hʳ) is there just so we can construct an actual tree node.
Your get function is then very simple:
get : ∀ {h lb ub n} {m : Indexed.Tree lb ub h} → n ∈ m → Value n
get (here V) = V
get (left _ p) = get p
get (right _ p) = get p
Well, I told you that this representation is fairly convenient to work with, and I'm going to prove it. One of the properties we'd like _∈_ to have is decidability: that is, we can construct a program that tells us whether an element is in a tree or not:
find : ∀ {h lb ub} n (m : Indexed.Tree lb ub h) → Dec (n ∈ m)
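(For reference, Dec comes from Relation.Nullary, which we imported above; up to formatting it is just the following data type, packaging either a proof or a refutation.)

data Dec {p} (P : Set p) : Set p where
  yes : ( p :   P) → Dec P
  no  : (¬p : ¬ P) → Dec P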
find will return either yes p, where p is a proof that n is inside m (n ∈ m), or no ¬p, where ¬p is a refutation of n ∈ m, i.e. a function of type n ∈ m → ⊥. We'll need one lemma:
lem : ∀ {lb ub hˡ hʳ K′ n} {V : Value K′}
{l : Indexed.Tree lb [ K′ ] hˡ}
{r : Indexed.Tree [ K′ ] ub hʳ}
{b : hˡ ∼ hʳ} →
n ∈ Indexed.node (K′ , V) l r b →
(n ≯ K′ → n ≢ K′ → n ∈ l) × (n ≮ K′ → n ≢ K′ → n ∈ r)
lem (here V) =
(λ _ eq → ⊥-elim (eq P.refl)) , (λ _ eq → ⊥-elim (eq P.refl))
lem (left x p) = (λ _ _ → p) , (λ ≮ _ → ⊥-elim (≮ x))
lem (right x p) = (λ ≯ _ → ⊥-elim (≯ x)) , (λ _ _ → p)
This tells us that if we know n is contained in t, and we know n is neither greater than nor equal to the key at the root of t, then n must be in the left subtree (and symmetrically for the right subtree).
Here's the implementation of the find function:
find : ∀ {h lb ub} n (m : Indexed.Tree lb ub h) → Dec (n ∈ m)
find n (Indexed.leaf _) = no λ ()
find n (Indexed.node (k , v) l r _) with compare n k
find n (Indexed.node (k , v) l r _) | tri< a ¬b ¬c with find n l
... | yes p = yes (left a p)
... | no ¬p = no λ ¬∈l → ¬p (proj₁ (lem ¬∈l) ¬c ¬b)
find n (Indexed.node (k , v) l r _) | tri≈ ¬a b ¬c
rewrite (P.sym b) = yes (here v)
find n (Indexed.node (k , v) l r _) | tri> ¬a ¬b c with find n r
... | yes p = yes (right c p)
... | no ¬p = no λ ¬∈r → ¬p (proj₂ (lem ¬∈r) ¬a ¬b)
The implementation is fairly straightforward, but I would suggest loading it up in Emacs, replacing some of the right-hand sides with holes and seeing what the types are. And finally, here are some tests:
open import Data.Nat
open import Data.Nat.Properties
open Membership
(λ _ → ℕ)
(StrictTotalOrder.isStrictTotalOrder strictTotalOrder)
un-tree : Tree → ∃ λ h → Indexed.Tree ⊥⁺ ⊤⁺ h
un-tree (tree t) = , t
test : Indexed.Tree _ _ _
test = proj₂ (un-tree
(insert 5 55 (insert 7 77 (insert 4 44 empty))))
Extract : ∀ {p} {P : Set p} → Dec P → Set _
Extract {P = P} (yes _) = P
Extract {P = P} (no _) = ¬ P
extract : ∀ {p} {P : Set p} (d : Dec P) → Extract d
extract (yes p) = p
extract (no ¬p) = ¬p
∈-test₁ : ¬ (2 ∈ test)
∈-test₁ = extract (find 2 test)
∈-test₂ : 4 ∈ test
∈-test₂ = extract (find 4 test)
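As a further sanity check - this relies on the closed terms above normalising fully, which they should, since comparison of literal naturals computes - get reads the stored value straight off the membership proof:

open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- Key 4 was inserted with value 44 above, so the proof ∈-test₂ should
-- let us read that value back.
∈-test₃ : get ∈-test₂ ≡ 44
∈-test₃ = refl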
Related
I'm fairly new to Lean, so apologies if this is obvious. I'm trying to learn Lean and category theory, by doing some category theory exercises in Lean. I have these definitions for arrows and categories:
variable {α : Type u}
inductive Arrow : α → α → Type u
| Id (x : α) : Arrow x x
| Comp (g : Arrow b c) (f : Arrow a b) : Arrow a c
notation a " -→ " b => Arrow a b
notation a " ∘ " b => Arrow.Comp a b
structure Category :=
(assoc {a b c d : α} : ∀ (f : a -→ b) (g : b -→ c) (k : c -→ d),
(k ∘ (g ∘ f)) = ((k ∘ g) ∘ f))
(unitl {a b : α} : ∀ (f : a -→ b), ((Arrow.Id b) ∘ f) = f)
(unitr {a b : α} : ∀ (f : a -→ b), (f ∘ (Arrow.Id a)) = f)
This all compiles fine, so I try to define a discrete category as follows:
def IsDiscrete :=
∀ (x y : α) (f : Arrow x y), x = y ∧ f = Arrow.Id x
The intent is to express "all arrows are identity arrows", but the compiler complains that f has type "Arrow x y", not "Arrow x x". Of course, the whole point is that if arrow f exists, then x = y, so the comparison between f and Id x is sensible. How do I express this in Lean?
Alternatively, is there a better way to express arrows and/or categories in Lean? If so, why is that way better?
Let us look at an example lemma (whose statement, and whether it is true or not, is irrelevant for this discussion):
lemma L1 : forall (n m: ℕ) (p : ℕ → Prop), (p n ∧ ∃ (u:ℕ), p u ∧ p m) ∨ (¬p n ∧ p m) → n = m :=
begin
intros n m p H, cases H with H H,
{cases H with H1 H2, cases H2 with u H2, cases H2 with H2 H3, sorry},
{cases H with H1 H2, sorry}
end
The point I wish to highlight here is that when destructing my hypothesis with the cases tactic, I did not know any way other than to use the tactic several times (once for each 'layer', so to speak).
If I look at the same lemma in Coq:
Lemma L1 : forall (n m:nat) (p:nat -> Prop),
(p n /\ exists (u:nat), p u /\ p m) \/ (~p n /\ p m) -> n = m.
Proof.
intros n m p [[H1 [u [H2 H3]]]|[H1 H2]].
- admit.
-
Show.
I am able to destruct my assumption with a single nested pattern match.
I am guessing I can do the same sort of thing in Lean but I do not know how. I would be grateful to be told as I find the nested pattern match very convenient in practice.
You'll need mathlib for this, and import tactic.rcases. You can use the rcases tactic.
import tactic.rcases
lemma L1 : forall (n m: ℕ) (p : ℕ → Prop), (p n ∧ ∃ (u:ℕ), p u ∧ p m) ∨ (¬p n ∧ p m) → n = m :=
begin
intros n m p H,
rcases H with ⟨H1, u, H2, H3⟩ | ⟨H1, H2⟩,
end
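If you also want to fold the intros into the destructuring, mathlib's rintro tactic (it lives alongside rcases in tactic.rcases, if I remember correctly) accepts the same patterns, much like your Coq intro pattern. A sketch, with the two remaining goals left as sorry:

import tactic.rcases

lemma L1' : forall (n m : ℕ) (p : ℕ → Prop),
  (p n ∧ ∃ (u:ℕ), p u ∧ p m) ∨ (¬p n ∧ p m) → n = m :=
begin
  -- introduce n, m, p and destructure the hypothesis in a single step
  rintro n m p (⟨H1, u, H2, H3⟩ | ⟨H1, H2⟩),
  { sorry },
  { sorry }
end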
This is an almost valid zipWith definition in Morte:
zipWith
= λ (u : *)
-> λ (f : (u -> u -> u))
-> λ (a : (#List u))
-> λ (b : (#List u))
-> λ (List : *)
-> λ (cons : (u -> List -> List))
-> λ (nil : List)
-> ((λ (A:*) -> λ (B:*) ->
(a (B -> List)
(λ (h:u) -> λ (t : (B -> List)) -> λ (k : B) -> (k h t))
(λ (k:B) -> nil)
(b (u -> A -> List)
(λ (h:u) -> λ (t:(u -> A -> List)) -> λ (H:u) -> λ (k:A) -> (cons (f h H) (k t)))
(λ (H:u) -> λ (k:A) -> nil)))
) (fix A . ((u -> A -> List) -> List))
(fix B . (u -> (B -> List) -> List)))
It isn't actually typeable due to the use of fix, which Morte lacks. András posted this clever Agda solution without fix last year. It isn't obvious to me how it translates to Morte, though, because it also lacks inductive types. How to approach this problem?
Edit: seems like my zipWith was incorrect even with fix. This one seems to check, though.
I'll be using regular Haskell lists for simplicity. First, let's define zipWith using foldr:
zipWith' :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith' f xs ys = foldr step (const []) xs ys where
step x r [] = []
step x r (y:ys) = f x y : r ys
Here we fold over xs, pass ys as an argument and split it at each iteration. The problem is that we want to emulate Church-encoded lists, which can't be pattern matched on. It is, however, possible to define split
split :: [a] -> Maybe (a, [a])
split [] = Nothing
split (x:xs) = Just (x, xs)
in terms of foldr:
split :: [a] -> Maybe (a, [a])
split = snd . foldr (\x ~(r, _) -> (x : r, Just (x, r))) ([], Nothing)
Now we can define zipWith using only right folds:
czipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
czipWith f xs ys = foldr step (const []) xs ys where
step x r = maybe [] (\(y, ys) -> f x y : r ys) . split
However, while split traverses a list lazily (so split [1..] ≡ Just (1, [2..])), it nevertheless deconstructs and reconstructs the entire list, and hence each split introduces an O(n) overhead, where n is the length of the list being split. Since ys is split at each iteration, the total complexity of the algorithm is O(n^2).
So yes, you can type zipWith using only non-recursive types, but it'll be O(n^2).
Also, eliminators are dependent paramorphisms, and paramorphisms do give you pattern matching, so if you have eliminators it's straightforward to define an O(n) zipWith, and it doesn't have to be as complicated as in András' answer that you linked.
Some reading:
In a typed setting Church encoding is called Boehm-Berarducci encoding.
How to zip folds.
How to take a TAIL of a functional stream.
That definition of split in terms of foldr I'm using is described somewhere in TAPL.
The clever definition of zipWith (which comes from Launchbury et al., I believe) doesn't work in Morte, because typing it without negative recursive types (which Morte doesn't have, and which imply fix, as seen in my previously mentioned answer) requires induction at least on natural numbers. Here's a simple Agda version of Launchbury's definition without Church encoding; to reproduce this in Morte we'd need functions whose return type depends on natural numbers (the lengths of the input lists).
Without induction, the best we can do is an O(N^2) definition that uses O(N) pattern matching on lists, i.e. a List A -> Maybe (A, List A) function. It's O(N) because we can only get the tail of a list by rebuilding it from the end.
In Morte-compliant Agda (to get Morte, we need to desugar let style definitions to applications and function definitions to annotated lambdas):
Pair : Set → Set → Set
Pair A B = ∀ P → (A → B → P) → P
pair : ∀ A B → A → B → Pair A B
pair A B a b P p = p a b
List : Set → Set
List = λ A → ∀ L → (A → L → L) → L → L
Maybe : Set → Set
Maybe A = ∀ M → (A → M) → M → M
just : ∀ A → A → Maybe A
just A a M j n = j a
nothing : ∀ A → Maybe A
nothing A M j n = n
nil : ∀ A → List A
nil A L c n = n
cons : ∀ A → A → List A → List A
cons A a as L c n = c a (as L c n)
match : ∀ A → List A → Maybe (Pair A (List A))
match A as =
as
(Maybe (Pair A (List A)))
(λ a m M j n →
m M
(λ p → p M (λ a' as → j (pair A (List A) a (cons A a' as))))
(j (pair A (List A) a (nil A))))
(nothing (Pair A (List A)))
zipWith : ∀ A B C → (A → B → C) → List A → List B → List C
zipWith A B C f as =
as
(List B → List C)
(λ a hyp bs → match B bs (List C)
(λ p → p (List C) (λ b bs' → cons C (f a b) (hyp bs')))
(nil C))
(λ _ → nil C)
I'm trying to derive a commutative monoid of AVL trees of element type A, given a commutative monoid (A, +, epsilon), where the derived operation is unionWith +. The notion of equivalence for AVL trees is such that two trees are equal iff they have the same toList image.
I'm stuck trying to prove that unionWith + is associative (up to my notion of equivalence). I have commutativity and congruence (∙-cong) as postulates, because I want to use them in the proof of associativity, but haven't yet proved them.
I've isolated the problem to the term bibble in the following code:
module Temp
{A : Set}
where
open import Algebra.FunctionProperties
open import Algebra.Structures
import Data.AVL
import Data.Nat.Properties as ℕ-Prop
open import Function
open import Relation.Binary
open import Relation.Binary.PropositionalEquality
open ≡-Reasoning
-- Specialise AVL trees to keys of type ℕ.
module ℕ-AVL
= Data.AVL (const A) (StrictTotalOrder.isStrictTotalOrder ℕ-Prop.strictTotalOrder)
open ℕ-AVL
-- Equivalence of AVL tree normal form (ordered list of key-value pairs).
_≈_ : Tree → Tree → Set
_≈_ = _≡_ on toList
infix 4 _≈_
-- Extend a commutative monoid (A, ⊕, ε) to AVL trees of type A, with union and empty.
comm-monoid-AVL-∪ : {⊕ : Op₂ A} → IsCommutativeMonoid _≈_ (unionWith ⊕) empty
comm-monoid-AVL-∪ {⊕} = record {
isSemigroup = record {
isEquivalence = record { refl = refl; sym = sym; trans = trans }; assoc = assoc; ∙-cong = {!!}
};
identityˡ = λ _ → refl;
comm = comm
} where
_∪_ = unionWith ⊕
postulate comm : Commutative _≈_ _∪_
postulate ∙-cong : _∪_ Preserves₂ _≈_ ⟶ _≈_ ⟶ _≈_
assoc : Associative _≈_ _∪_
assoc (tree (Indexed.leaf l<u)) y z = refl
assoc x (tree (Indexed.leaf l<u)) z =
let bibble : (x ∪ tree (Indexed.leaf l<u)) ∪ z ≈ ((tree (Indexed.leaf l<u)) ∪ x) ∪ z
bibble = ∙-cong (comm x (tree (Indexed.leaf l<u))) refl in
begin
toList ((x ∪ tree (Indexed.leaf l<u)) ∪ z)
≡⟨ bibble ⟩
toList (((tree (Indexed.leaf l<u)) ∪ x) ∪ z)
≡⟨ refl ⟩
toList (x ∪ z)
≡⟨ refl ⟩
toList (x ∪ ((tree (Indexed.leaf l<u)) ∪ z))
∎
assoc x (tree (Indexed.node kv τ₁ τ₂ bal)) z = {!!} -- TODO
In bibble, I have a proof that z ≈ z (namely refl), and also a proof that x ∪ tree (Indexed.leaf l<u) ≈ (tree (Indexed.leaf l<u)) ∪ x (by commutativity), and I believe I should then be able to use ∙-cong to derive a proof that the corresponding unions are also related by ≈.
However, the compiler seems to be left with some unresolved meta-variables, and unfortunately I don't really understand how to read the messages. Am I just doing something wrong (proof-wise), or do I just need to make some arguments explicit, or what?
The proof is fine; the compiler just needs a few more hints. Implicit arguments are filled in by unification, and while it can do some cool stuff, sometimes you have to help it out and push it in the right direction.
Let's take a look at the definition of Preserves₂:
_Preserves₂_⟶_⟶_ :
∀ {a b c ℓ₁ ℓ₂ ℓ₃} {A : Set a} {B : Set b} {C : Set c} →
(A → B → C) → Rel A ℓ₁ → Rel B ℓ₂ → Rel C ℓ₃ → Set _
_+_ Preserves₂ P ⟶ Q ⟶ R =
∀ {x y u v} → P x y → Q u v → R (x + u) (y + v)
If we specialize it to _+_ = _∪_ and P, Q, R = _≈_, we get:
∀ {x y u v} → x ≈ y → u ≈ v → (x ∪ u) ≈ (y ∪ v)
There are four implicit arguments, let's fill in some of them and watch what happens:
bibble = ∙-cong
{x = x ∪ tree (Indexed.leaf l<u)}
(comm x (tree (Indexed.leaf l<u)))
refl
Nope, it's still yellow. Apparently, knowing only x is not enough. Now y:
bibble = ∙-cong
{x = x ∪ tree (Indexed.leaf l<u)}
{y = tree (Indexed.leaf l<u) ∪ x}
(comm x (tree (Indexed.leaf l<u)))
refl
And that's it! Giving x and y explicitly was enough for the compiler to figure out u and v.
Also, I wrote something about implicit arguments in this answer, you might want to check that out.
Anyways, the "unsolved meta" error messages are indeed a bit scary (and usually not very helpful); what I usually do is something like this:
bibble = ∙-cong
{?} {?} {?} {?}
(comm x (tree (Indexed.leaf l<u)))
refl
That is, I just prepare holes for all (or the most relevant) implicit arguments and fill them one by one until the compiler is happy.
In Agda, how can I define a pair of vectors that must have the same length?
-- my first try, but need to have 'n' implicitly
b : ∀ ( n : ℕ ) → Σ (Vec ℕ n) (λ _ → Vec ℕ n)
b 2 = (1 ∷ 2 ∷ []) , (3 ∷ 4 ∷ [])
b 3 = (1 ∷ 2 ∷ 3 ∷ []) , (4 ∷ 5 ∷ 6 ∷ [])
b _ = _
-- how can I define VecSameLength correctly?
VecSameLength : Set
VecSameLength = _
c : VecSameLength
c = (1 ∷ 2 ∷ []) , (1 ∷ 2 ∷ [])
d : VecSameLength
d = (1 ∷ 2 ∷ 3 ∷ []) , (4 ∷ 5 ∷ 6 ∷ [])
-- another try
VecSameLength : Set
VecSameLength = Σ (Vec ℕ ?) (λ v → Vec ℕ (len v))
If you want to keep the length as a part of the type, you just need to pack up two vectors with the same size index. Necessary imports first:
open import Data.Nat
open import Data.Product
open import Data.Vec
Nothing extra fancy: just as you would write your ordinary vector of size n, you can do this:
2Vec : ∀ {a} → Set a → ℕ → Set a
2Vec A n = Vec A n × Vec A n
That is, 2Vec A n is the type of pairs of vectors of As, both with n elements. Note that I took the opportunity to generalize it to an arbitrary universe level - which means you can have vectors of Sets, for example.
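For instance, with the imports above in scope, this also checks - a pair of singleton vectors whose elements are themselves types (test-Set is just a throwaway name):

test-Set : 2Vec Set 1
test-Set = (ℕ ∷ []) , (ℕ ∷ [])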
The second useful thing to note is that I used _×_, which is an ordinary non-dependent pair. It is defined in terms of Σ as a special case where the second component doesn't depend on the value of the first.
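Spelled out at the lowest universe for brevity (the primed name is mine, so it doesn't clash with the real _×_):

open import Data.Product using (Σ)

-- A non-dependent pair is a Σ whose second component ignores the first.
_×′_ : Set → Set → Set
A ×′ B = Σ A (λ _ → B)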
And before I move to the example where we'd like to keep the size hidden, here's an example of a value of this type:
test₁ : 2Vec ℕ 3
-- We can also infer the size index just from the term:
-- test₁ : 2Vec ℕ _
test₁ = 0 ∷ 1 ∷ 2 ∷ [] , 3 ∷ 4 ∷ 5 ∷ []
You can check that Agda rightfully complains when you attempt to stuff two vectors of uneven size into this pair.
Hiding indices is a job exactly suited to dependent pairs. As a starter, here's how you'd hide the length of one vector:
data SomeVec {a} (A : Set a) : Set a where
some : ∀ n → Vec A n → SomeVec A
someVec : SomeVec ℕ
someVec = some _ (0 ∷ 1 ∷ [])
The size index is kept outside of the type signature, so we only know that the vector inside has some unknown size (effectively giving you a list). Of course, writing a new data type every time we needed to hide an index would be tiresome, so the standard library gives us Σ.
someVec : Σ ℕ λ n → Vec ℕ n
-- If you have newer version of standard library, you can also write:
-- someVec : Σ[ n ∈ ℕ ] Vec ℕ n
-- Older version used unicode colon instead of ∈
someVec = _ , 0 ∷ 1 ∷ []
Now, we can easily apply this to the type 2Vec given above:
∃2Vec : ∀ {a} → Set a → Set a
∃2Vec A = Σ[ n ∈ ℕ ] 2Vec A n
test₂ : ∃2Vec ℕ
test₂ = _ , 0 ∷ 1 ∷ 2 ∷ [] , 3 ∷ 4 ∷ 5 ∷ []
copumpkin raises an excellent point: you can get the same guarantee just by using a list of pairs. These two representations encode exactly the same information; let's take a look.
Here, we'll use a different import list to prevent name clashes:
open import Data.List
open import Data.Nat
open import Data.Product as P
open import Data.Vec as V
open import Function
open import Relation.Binary.PropositionalEquality
Going from two vectors to one list is a matter of zipping the two vectors together:
vec⟶list : ∀ {a} {A : Set a} → ∃2Vec A → List (A × A)
vec⟶list (zero , [] , []) = []
vec⟶list (suc n , x ∷ xs , y ∷ ys) = (x , y) ∷ vec⟶list (n , xs , ys)
-- Alternatively:
vec⟶list = toList ∘ uncurry V.zip ∘ proj₂
Going back is just unzipping - taking a list of pairs and producing a pair of lists:
list⟶vec : ∀ {a} {A : Set a} → List (A × A) → ∃2Vec A
list⟶vec [] = 0 , [] , []
list⟶vec ((x , y) ∷ xys) with list⟶vec xys
... | n , xs , ys = suc n , x ∷ xs , y ∷ ys
-- Alternatively:
list⟶vec = ,_ ∘ unzip ∘ fromList
Now, we know how to get from one representation to the other, but we still have to show that these two representations give us the same information.
Firstly, we show that if we take a list, convert it to a vector (via list⟶vec) and then back to a list (via vec⟶list), we get the same list back.
pf₁ : ∀ {a} {A : Set a} (xs : List (A × A)) → vec⟶list (list⟶vec xs) ≡ xs
pf₁ [] = refl
pf₁ (x ∷ xs) = cong (_∷_ x) (pf₁ xs)
And then the other way around: first vector to list and then list to vector:
pf₂ : ∀ {a} {A : Set a} (xs : ∃2Vec A) → list⟶vec (vec⟶list xs) ≡ xs
pf₂ (zero , [] , []) = refl
pf₂ (suc n , x ∷ xs , y ∷ ys) =
cong (P.map suc (P.map (_∷_ x) (_∷_ y))) (pf₂ (n , xs , ys))
In case you are wondering what cong does:
cong : ∀ {a b} {A : Set a} {B : Set b}
(f : A → B) {x y} → x ≡ y → f x ≡ f y
cong f refl = refl
We've shown that list⟶vec and vec⟶list form an isomorphism between List (A × A) and ∃2Vec A, so the two representations indeed carry exactly the same information.