Suppose I have this definition of Subset in Agda
Subset : ∀ {α} → Set α → {ℓ : Level} → Set (α ⊔ suc ℓ)
Subset A {ℓ} = A → Set ℓ
and I have a set
data Q : Set where
  a : Q
  b : Q
Is it possible to prove that every subset of Q is decidable, and why?
Qs? : (qs : Subset Q {zero}) → Decidable qs
Decidable is defined here:
-- Membership
infix 10 _∈_
_∈_ : ∀ {α ℓ}{A : Set α} → A → Subset A → Set ℓ
a ∈ p = p a
-- Decidable
Decidable : ∀ {α ℓ}{A : Set α} → Subset A {ℓ} → Set (α ⊔ ℓ)
Decidable as = ∀ a → Dec (a ∈ as)
Not for that definition of Subset, since decidability would require checking whether "p a" is inhabited or not, i.e. excluded middle.
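To see why, note that such a blanket decider would give you Dec P for an arbitrary P : Set, by viewing the constant family λ _ → P as a subset of Q. A quick sketch (the name all-dec is mine, using your Dec from Relation.Nullary):
all-dec : ((qs : Subset Q {zero}) → Decidable qs) → (P : Set) → Dec P
all-dec dec P = dec (λ _ → P) a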
Decidable subsets would exactly be maps into Bool:
Subset : ∀ {α} (A : Set α) → Set α
Subset A = A → Bool

_∈_ : ∀ {α} {A : Set α} → A → Subset A → Set
a ∈ p = T (p a)
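With this Bool-valued Subset, membership in any subset is decidable simply by inspecting the boolean. A minimal sketch (the name ∈-dec is mine, assuming Dec/yes/no from Relation.Nullary and T/tt from the standard library):
open import Data.Bool using (true; false; T)
open import Data.Unit using (tt)
open import Relation.Nullary using (Dec; yes; no)

∈-dec : ∀ {α} {A : Set α} (p : Subset A) (x : A) → Dec (x ∈ p)
∈-dec p x with p x
... | true = yes tt
... | false = no (λ ())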
But if you want more flexibility in the shape of the membership proofs, you could use your definition of Subset and carry around a proof that it is Decidable.
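For example, with your original Subset and Decidable you could bundle a subset with its decision procedure in a Σ-type. A sketch (the names DecSubset, is-a and a-only are mine):
open import Level using (Level; _⊔_; suc; zero)
open import Data.Product using (Σ; _,_)
open import Data.Unit using (⊤; tt)
open import Data.Empty using (⊥)
open import Relation.Nullary using (Dec; yes; no)

DecSubset : ∀ {α} (A : Set α) {ℓ : Level} → Set (α ⊔ suc ℓ)
DecSubset A {ℓ} = Σ (Subset A {ℓ}) Decidable

-- the subset {a} of Q, together with its decision procedure
is-a : Subset Q {zero}
is-a a = ⊤
is-a b = ⊥

is-a? : Decidable is-a
is-a? a = yes tt
is-a? b = no (λ ())

a-only : DecSubset Q
a-only = is-a , is-a?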
In Lean, the axiom choice is implemented as follows:
axiom choice {α : Sort u} : nonempty α → α
Given only the assertion h that α is nonempty, choice h magically produces an element of α.
Now if I read literature (Jech) on set theory and the axiom of choice:
Axiom of Choice (AC). Every family of nonempty sets has a choice function.
If S is a family of sets and ∅ is not in S, then a choice function for S is a function f on S such that f(X) ∈ X for every X ∈ S.
To me, these things don't seem to be equivalent. Can someone elaborate on this?
The axiom choice in Lean is indeed not the same as the axiom of choice in set theory. The axiom choice in Lean doesn't really have a corresponding statement in set theory. It says that there is a function that takes a proof that some type α is nonempty, and produces an inhabitant of α. In set theory, we cannot define functions that take proofs as arguments since proofs are not objects in set theory; they are in the layer of logic on top of that.
That said, the two choice axioms are not completely unrelated. From Lean's axiom choice, you can prove the more familiar axiom of choice from set theory, which you can find here:
theorem axiomOfChoice {α : Sort u} {β : α → Sort v} {r : ∀ x, β x → Prop}
    (h : ∀ x, ∃ y, r x y) : ∃ (f : ∀ x, β x), ∀ x, r x (f x) :=
  ⟨_, fun x => choose_spec (h x)⟩
In other parts of the library, other equivalent statements to the axiom of choice are proven, like the one stating that every surjective function has a right inverse.
Maybe the statement closest to the version of the axiom of choice you quoted is the following:
theorem axiomOfChoice' {α : Sort u} {β : α → Sort v} (h : ∀ x, Nonempty (β x)) :
    Nonempty (∀ x, β x) :=
  ⟨fun x => Classical.choice (h x)⟩
In words, this says that given a family of nonempty types (sets), the type of choice functions is nonempty. As you can see the proof is immediate from Lean's choice.
In the Idris Effects library effects are represented as
||| This type is parameterised by:
||| + The return type of the computation.
||| + The input resource.
||| + The computation to run on the resource given the return value.
Effect : Type
Effect = (x : Type) -> Type -> (x -> Type) -> Type
If we allow resources to be values and swap the first two arguments, we get (the rest of the code is in Agda)
Effect : Set -> Set₁
Effect R = R -> (A : Set) -> (A -> R) -> Set
Having some basic type-context-membership machinery
data Type : Set where
  nat : Type
  _⇒_ : Type -> Type -> Type

data Con : Set where
  ε : Con
  _▻_ : Con -> Type -> Con

data _∈_ (σ : Type) : Con -> Set where
  vz  : ∀ {Γ} -> σ ∈ Γ ▻ σ
  vs_ : ∀ {Γ τ} -> σ ∈ Γ -> σ ∈ Γ ▻ τ
we can encode the lambda term constructors as follows:
app-arg : Bool -> Type -> Type -> Type
app-arg b σ τ = if b then σ ⇒ τ else σ
data TermE : Effect (Con × Type) where
  Var : ∀ {Γ σ}   -> σ ∈ Γ -> TermE (Γ , σ)     ⊥    λ()
  Lam : ∀ {Γ σ τ} ->          TermE (Γ , σ ⇒ τ) ⊤    (λ _ -> Γ ▻ σ , τ)
  App : ∀ {Γ σ τ} ->          TermE (Γ , τ)     Bool (λ b -> Γ , app-arg b σ τ)
In TermE i r i′, i is an output index (e.g. lambda abstractions (Lam) construct function types (σ ⇒ τ); for ease of description I'll ignore that indices also contain contexts besides types), r represents the number of inductive positions (Var doesn't receive any TermE (⊥), Lam receives one (⊤), App receives two (Bool): a function and its argument), and i′ computes an index at each inductive position (e.g. the index at the first inductive position of App is σ ⇒ τ and the index at the second is σ, i.e. we can apply a function to a value only if the type of the first argument of the function equals the type of the value).
To construct a real lambda term we must tie the knot using something like a W data type. Here is the definition:
data Wer {R} (Ψ : Effect R) : Effect R where
  call : ∀ {r A r′ B r′′} -> Ψ r A r′ -> (∀ x -> Wer Ψ (r′ x) B r′′) -> Wer Ψ r B r′′
It's an indexed variant of Oleg Kiselyov's Freer monad (effects stuff again), but without return. Using this we can recover the usual constructors:
_<∨>_ : ∀ {B : Bool -> Set} -> B true -> B false -> ∀ b -> B b
(x <∨> y) true = x
(x <∨> y) false = y
_⊢_ : Con -> Type -> Set
Γ ⊢ σ = Wer TermE (Γ , σ) ⊥ λ()
var : ∀ {Γ σ} -> σ ∈ Γ -> Γ ⊢ σ
var v = call (Var v) λ()
ƛ_ : ∀ {Γ σ τ} -> Γ ▻ σ ⊢ τ -> Γ ⊢ σ ⇒ τ
ƛ b = call Lam (const b)
_·_ : ∀ {Γ σ τ} -> Γ ⊢ σ ⇒ τ -> Γ ⊢ σ -> Γ ⊢ τ
f · x = call App (f <∨> x)
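Just to check that the smart constructors behave as expected, here are a couple of throwaway example terms (my own names; they rely on the fixity declarations from the linked code):
-- λ x. x at type nat ⇒ nat in the empty context
I : ε ⊢ nat ⇒ nat
I = ƛ (var vz)

-- apply the identity function to a variable from the context
apply-id : ε ▻ nat ⊢ nat
apply-id = (ƛ (var vz)) · var vz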
The whole encoding is very similar to the corresponding encoding in terms of indexed containers: Effect corresponds to IContainer and Wer corresponds to ITree (the type of Petersson-Synek Trees). However, the above encoding looks simpler to me, because you don't need to think about what to put into the shapes in order to recover the indices at the inductive positions. Instead, everything is in one place and the encoding process is really straightforward.
So what am I doing here? Is there some real relation to the indexed containers approach (besides the fact that this encoding has the same extensionality problems)? Can we do something useful this way? One natural thought is to build an effectful lambda calculus: we can freely mix lambda terms with effects, since a lambda term is itself just an effect. But it's an external effect, so we either need the other effects to be external as well (which means we can't say something like tell (var vz), because var vz is not a value but a computation), or we need to somehow internalize this effect and the whole effects machinery (and I don't know what that would mean).
The code used.
Interesting work! I don't know much about effects and I have only a basic understanding of indexed containers, but I am doing stuff with generic programming, so here's my take on it.
The type of TermE : Con × Type → (A : Set) → (A → Con × Type) → Set reminds me of the type of descriptions used to formalize indexed induction recursion in [1]. The second chapter of that paper tells us that there is an equivalence between Set/I = (A : Set) × (A → I) and I → Set. This means that the type of TermE is equivalent to Con × Type → (Con × Type → Set) → Set or (Con × Type → Set) → Con × Type → Set. The latter is an indexed functor, which is used in the polynomial functor ('sum-of-products') style of generic programming, for instance in [2] and [3]. If you have not seen it before, it looks something like this:
data Desc (I : Set) : Set1 where
  `Σ : (S : Set) → (S → Desc I) → Desc I
  `var : I → Desc I → Desc I
  `ι : I → Desc I

⟦_⟧ : ∀{I} → Desc I → (I → Set) → I → Set
⟦ `Σ S x ⟧ X o = Σ S (λ s → ⟦ x s ⟧ X o)
⟦ `var i xs ⟧ X o = X i × ⟦ xs ⟧ X o
⟦ `ι o′ ⟧ X o = o ≡ o′

data μ {I : Set} (D : Desc I) : I → Set where
  ⟨_⟩ : {o : I} → ⟦ D ⟧ (μ D) o → μ D o
natDesc : Desc ⊤
natDesc = `Σ Bool (λ { false → `ι tt ; true → `var tt (`ι tt) })
nat-example : μ natDesc tt
nat-example = ⟨ true , ⟨ true , ⟨ false , refl ⟩ , refl ⟩ , refl ⟩
finDesc : Desc Nat
finDesc = `Σ Bool (λ { false → `Σ Nat (λ n → `ι (suc n))
                     ; true → `Σ Nat (λ n → `var n (`ι (suc n)))
                     })
fin-example : μ finDesc 5
fin-example = ⟨ true , 4 , ⟨ true , 3 , ⟨ false , 2 , refl ⟩ , refl ⟩ , refl ⟩
So the fixpoint μ corresponds directly to your Wer datatype, and the interpreted descriptions (using ⟦_⟧) correspond to your TermE. I'm guessing that some of the literature on this topic will be relevant for you. I don't remember whether indexed containers and indexed functors are really equivalent but they are definitely related. I do not entirely understand your remark about tell (var vz), but could that be related to the internalization of fixpoints in these kinds of descriptions? In that case maybe [3] can help you with that.
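As an aside, the Set/I ≅ (I → Set) equivalence mentioned above can be written out concretely (a sketch with my own names, glossing over the proof that the two maps are mutually inverse, which needs extensionality):
toFam : {I : Set} → (Σ Set λ A → (A → I)) → (I → Set)
toFam (A , f) i = Σ A (λ x → f x ≡ i)

toSlice : {I : Set} → (I → Set) → (Σ Set λ A → (A → I))
toSlice F = (Σ _ F) , proj₁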
[1]: Peter Hancock, Conor McBride, Neil Ghani, Lorenzo Malatesta, Thorsten Altenkirch - Small Induction Recursion (2013)
[2]: James Chapman, Pierre-Evariste Dagand, Conor McBride, Peter Morris - The gentle art of levitation (2010)
[3]: Andres Löh, José Pedro Magalhães - Generic programming with indexed functors
Given the following type family (supposed to reflect the isomorphism A×1 ≅ A)
type family P (x :: *) (a :: *) :: * where
  P x () = x
  P x a = (x, a)
and data type defined in terms thereof
data T a = T Integer (P (T a) a)
is it possible by some type hackery to write a Functor instance for the latter?
instance Functor T where
  fmap f = undefined -- ??
Intuitively, it's obvious what to do depending on the type of f, but I don't know how to express it in Haskell.
I tend to reason about these kinds of higher-kinded programs using Agda.
The problem here is that you want to pattern match on * (Set in Agda), which violates parametricity, as mentioned in the comments. That is not good, so you cannot just do it; you have to provide a witness. I.e. the following is not possible:
P : Set → Set → Set
P Unit b = b
P a b = a × b
You can overcome the limitation by using an auxiliary type:
P : Aux → Set → Set
P auxunit b = b
P (auxpair a) b = a × b
Or in Haskell:
data Aux x a = AuxUnit x | AuxPair x a

type family P (x :: Aux * *) :: * where
  P (AuxUnit x) = x
  P (AuxPair x a) = (x, a)
But doing so you'll have problems expressing T, as you need to pattern match on its parameter again to select the right Aux constructor.
A 'simple' solution is to express T a ~ Integer when a ~ (), and T a ~ (Integer, a) otherwise, directly:
module fmap where

record Unit : Set where
  constructor tt

data ⊥ : Set where

data Nat : Set where
  zero : Nat
  suc : Nat → Nat

data _≡_ {ℓ} {a : Set ℓ} : a → a → Set ℓ where
  refl : {x : a} → x ≡ x

¬_ : ∀ {ℓ} → Set ℓ → Set ℓ
¬ x = x → ⊥

-- GADTs
data T : Set → Set1 where
  tunit : Nat → T Unit
  tpair : (a : Set) → ¬ (a ≡ Unit) → a → T a

test : T Unit → Nat
test (tunit x) = x
test (tpair .Unit contra _) with contra refl
test (tpair .Unit contra x) | ()
You can try to encode this in Haskell too. You can express it using e.g. 'idiomatic' Haskell type inequality.
I'll leave the Haskell version as an exercise :)
Hmm, or did you mean, by data T a = T Integer (P (T a) a):
T () ~ Integer × (P (T ()) ())
     ~ Integer × (T ())
     ~ Integer × Integer × ...            -- infinite list of integers?

-- a /= ()
T a  ~ Integer × (P (T a) a)
     ~ Integer × (T a × a)
     ~ Integer × T a × a
     ~ Integer × Integer × ... × a × a
Those are easier to encode directly as well.
I have an Agda formalisation of pi-calculus with de Bruijn indices. Most of the setup is irrelevant to my problem, so I'll use empty types for renamings Ren and actions, and simply postulate a basic renaming sucᴿ, plus some useful operations and properties of renamings.
module Concur where

open import Relation.Binary.PropositionalEquality

-- de Bruijn indices:
open import Data.Fin using () renaming (Fin to Name; module Fin to Name)
open import Data.Nat as Nat using () renaming (ℕ to Cxt; module ℕ to Cxt)

-- empty types will do here
data Ren : Cxt → Cxt → Set where
data Action (Γ : Cxt) : Set where

postulate
  _∘_ : ∀ {Γ₁ Γ₂ Γ₃} → Ren Γ₂ Γ₃ → Ren Γ₁ Γ₂ → Ren Γ₁ Γ₃
  -- push a renaming under a binder
  suc : ∀ {Γ Γ′} → Ren Γ Γ′ → Ren (Cxt.suc Γ) (Cxt.suc Γ′)
  -- successor function on ℕ, qua renaming
  sucᴿ : ∀ {Γ} → Ren Γ (Cxt.suc Γ)
  -- apply a renaming to an action
  _*ᴬ_ : ∀ {Γ Γ′} → Ren Γ Γ′ → Action Γ → Action Γ′
  -- here's a useful property
  suc-comm : ∀ {Γ Γ′} (ρ : Ren Γ Γ′) (a : Action Γ) →
             suc ρ *ᴬ (sucᴿ *ᴬ a) ≡ sucᴿ *ᴬ (ρ *ᴬ a)
Now I'll define processes, transitions, and the application of a renaming to a process, but limiting myself to just two kinds of process and a single kind of transition.
data Proc (Γ : Cxt) : Set where
  Ο : Proc Γ
  ν_ : Proc (Cxt.suc Γ) → Proc Γ

data _—[_]→_ {Γ} : Proc Γ → Action Γ → Proc Γ → Set where
  ν_ : ∀ {P R} {a : Action Γ} → P —[ sucᴿ *ᴬ a ]→ R → ν P —[ a ]→ ν R

infixl 0 _—[_]→_

-- Apply a renaming to a process.
_*_ : ∀ {Γ Γ′} → Ren Γ Γ′ → Proc Γ → Proc Γ′
ρ * Ο = Ο
ρ * (ν P) = ν (suc ρ * P)
Here's my problem. I want to use my useful property suc-comm to show that I can lift an arbitrary renaming ρ to a transition. In the ν-binder case, this involves recursing into the sub-transition E, using suc to push the renaming under the binder, and the suc-comm lemma to show that things commute in the right way.
-- A transition survives an arbitrary renaming.
_*¹_ : ∀ {Γ Γ′ P R} {a : Action Γ} (ρ : Ren Γ Γ′) →
       P —[ a ]→ R → ρ * P —[ ρ *ᴬ a ]→ ρ * R
ρ *¹ (ν_ {a = a} E) rewrite sym (suc-comm ρ a) =
  ν {!suc ρ *¹ E!} -- rewriting here doesn't help
Unfortunately, and as indicated above, using rewrite to apply the lemma doesn't help because the goal isn't of a suitable type until I'm inside the ν. But I can only apply rewrite at the top level of a function body. So I seem to need an auxiliary function, so that I can rewrite in the correct context:
_*²_ : ∀ {Γ Γ′ P R} {a : Action Γ} (ρ : Ren Γ Γ′) →
       P —[ a ]→ R → ρ * P —[ ρ *ᴬ a ]→ ρ * R

aux : ∀ {Γ Γ′ P R} (a : Action Γ) (ρ : Ren Γ Γ′) →
      P —[ sucᴿ *ᴬ a ]→ R → suc ρ * P —[ sucᴿ *ᴬ (ρ *ᴬ a) ]→ suc ρ * R

ρ *² (ν_ {a = a} E) = ν aux a ρ E
aux a ρ E rewrite sym (suc-comm ρ a) = suc ρ *² E
Indeed this works, but it seems a bit awkward. Am I missing a trick here? Is there a way to avoid the auxiliary function, and somehow make use of the lemma inside the earlier goal?
You could inline rewrite, but it's still awkward:
_*¹_ : ∀ {Γ Γ′ P R} {a : Action Γ} (ρ : Ren Γ Γ′) →
       P —[ a ]→ R → ρ * P —[ ρ *ᴬ a ]→ ρ * R
ρ *¹ (ν_ {a = a} E) with suc ρ *ᴬ (sucᴿ *ᴬ a) | suc-comm ρ a | suc ρ *¹ E
... | ._ | refl | E' = ν E'
Or introduce suc ρ *¹ E first and then rewrite:
_*¹_ : ∀ {Γ Γ′ P R} {a : Action Γ} (ρ : Ren Γ Γ′) →
       P —[ a ]→ R → ρ * P —[ ρ *ᴬ a ]→ ρ * R
ρ *¹ (ν_ {a = a} E) with suc ρ *¹ E
... | E' rewrite suc-comm ρ a = ν E'
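Another option, if you'd rather avoid with/rewrite altogether, is to transport the recursive call along the equation using subst from Relation.Binary.PropositionalEquality (a sketch; the name _*³_ is mine):
_*³_ : ∀ {Γ Γ′ P R} {a : Action Γ} (ρ : Ren Γ Γ′) →
       P —[ a ]→ R → ρ * P —[ ρ *ᴬ a ]→ ρ * R
ρ *³ (ν_ {P = P} {R = R} {a = a} E) =
  ν subst (λ a′ → suc ρ * P —[ a′ ]→ suc ρ * R) (suc-comm ρ a) (suc ρ *³ E)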
I have two closely related questions:
First, how can the Haskell's Arrow class be modeled / represented in Agda?
class Arrow a where
  arr :: (b -> c) -> a b c
  (>>>) :: a b c -> a c d -> a b d
  first :: a b c -> a (b,d) (c,d)
  second :: a b c -> a (d,b) (d,c)
  (***) :: a b c -> a b' c' -> a (b,b') (c,c')
  (&&&) :: a b c -> a b c' -> a b (c,c')
(the following Blog Post states that it should be possible...)
Second, in Haskell, (->) is a first-class citizen and just another higher-order type, and it's straightforward to define (->) as an instance of the Arrow class. But how is that in Agda? I could be wrong, but I feel that Agda's -> is a more integral part of Agda than Haskell's -> is. So, can Agda's -> be seen as a higher-order type, i.e. a type function yielding Set, which can be made an instance of Arrow?
Type classes are usually encoded as records in Agda, so you can encode the Arrow class as something like this:
open import Data.Product -- for tuples
record Arrow (A : Set → Set → Set) : Set₁ where
  field
    arr : ∀ {B C} → (B → C) → A B C
    _>>>_ : ∀ {B C D} → A B C → A C D → A B D
    first : ∀ {B C D} → A B C → A (B × D) (C × D)
    second : ∀ {B C D} → A B C → A (D × B) (D × C)
    _***_ : ∀ {B C B' C'} → A B C → A B' C' → A (B × B') (C × C')
    _&&&_ : ∀ {B C C'} → A B C → A B C' → A B (C × C')
While you can't refer to the function type directly (something like _→_ is not valid syntax), you can write your own name for it, which you can then use when writing an instance:
_=>_ : Set → Set → Set
A => B = A → B
fnArrow : Arrow _=>_ -- Alternatively: Arrow (λ A B → (A → B)) or even Arrow _
fnArrow = record
  { arr = λ f → f
  ; _>>>_ = λ g f x → f (g x)
  ; first = λ { f (x , y) → (f x , y) }
  ; second = λ { f (x , y) → (x , f y) }
  ; _***_ = λ { f g (x , y) → (f x , g y) }
  ; _&&&_ = λ f g x → (f x , g x)
  }
While hammar's answer is a correct port of the Haskell code, the definition of _=>_ is too limited compared to ->, since it doesn't support dependent functions. When adapting code from Haskell, that's a standard change you need to make if you want to apply your abstractions to the functions you can actually write in Agda.
Moreover, by the usual convention of the standard library, this typeclass would be called RawArrow because to implement it you do not need to provide proofs that your instance satisfies the arrow laws; see RawFunctor and RawMonad for other examples (note: definitions of Functor and Monad are nowhere in sight in the standard library, as of version 0.7).
Here's a more powerful variant, which I wrote and tested with Agda 2.3.2 and the 0.7 standard library (should also work on version 0.6). Note that I only changed the type declaration of RawArrow's parameter and of _=>_, the rest is unchanged. When creating fnArrow, though, not all alternative type declarations work as before.
Warning: I only checked that the code typechecks and that => can be used sensibly; I didn't check whether examples using RawArrow typecheck.
module RawArrow where
open import Data.Product --actually needed by RawArrow
open import Data.Fin --only for examples
open import Data.Nat --ditto
record RawArrow (A : (S : Set) → (T : {s : S} → Set) → Set) : Set₁ where
  field
    arr : ∀ {B C} → (B → C) → A B C
    _>>>_ : ∀ {B C D} → A B C → A C D → A B D
    first : ∀ {B C D} → A B C → A (B × D) (C × D)
    second : ∀ {B C D} → A B C → A (D × B) (D × C)
    _***_ : ∀ {B C B' C'} → A B C → A B' C' → A (B × B') (C × C')
    _&&&_ : ∀ {B C C'} → A B C → A B C' → A B (C × C')
_=>_ : (S : Set) → (T : {s : S} → Set) → Set
A => B = (a : A) -> B {a}
test1 : Set
test1 = ℕ => ℕ
-- With → we can also write:
test2 : Set
test2 = (n : ℕ) → Fin n
-- But also with =>, though it's more cumbersome:
test3 : Set
test3 = ℕ => (λ {n : ℕ} → Fin n)
--Note that since _=>_ uses Set instead of being level-polymorphic, it's still
--somewhat limited. But I won't go the full way.
--fnRawArrow : RawArrow _=>_
-- Alternatively:
fnRawArrow : RawArrow (λ A B → (a : A) → B {a})
fnRawArrow = record
  { arr = λ f → f
  ; _>>>_ = λ g f x → f (g x)
  ; first = λ { f (x , y) → (f x , y) }
  ; second = λ { f (x , y) → (x , f y) }
  ; _***_ = λ { f g (x , y) → (f x , g y) }
  ; _&&&_ = λ f g x → (f x , g x)
  }
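If you do want to go a bit further, making _=>_ itself level-polymorphic is straightforward (a sketch with my own names, kept in a nested module so it doesn't clash with the definitions above); making RawArrow level-polymorphic as well takes a bit more plumbing:
module LevelPoly where
  -- rename Level's _⊔_ so it cannot clash with Data.Nat._⊔_ opened above
  open import Level using () renaming (_⊔_ to _⊔ˡ_)

  _=>ᴸ_ : ∀ {ℓ₁ ℓ₂} (S : Set ℓ₁) (T : {s : S} → Set ℓ₂) → Set (ℓ₁ ⊔ˡ ℓ₂)
  A =>ᴸ B = (x : A) → B {x}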