Do Hask or Agda have equalisers?

I was somewhat undecided as to whether this was a math.SE question or an SO one, but I suspect that mathematicians in general are fairly unlikely to know or care much about this category in particular, whereas Haskell programmers might well do.
So, we know that Hask has products, more or less (I'm working with idealised-Hask, here, of course). I'm interested in whether or not it has equalisers (in which case it would have all finite limits).
Intuitively it seems not, since you can't do separation like you can on sets, and so subobjects seem hard to construct in general. But for any specific case you'd like to come up with, it seems like you'd be able to hack it by working out the equaliser in Set and counting it (since after all, every Haskell type is countable and every countable set is isomorphic either to a finite type or the naturals, both of which Haskell has). So I can't see how I'd go about finding a counterexample.
Now, Agda seems a bit more promising: there it is relatively easy to form subobjects. Is the obvious sigma type Σ A (λ x → f x == g x) an equaliser? If the details don't work, is it morally an equaliser?

tl;dr the proposed candidate is not quite an equaliser, but its irrelevant counterpart is
The candidate for an equaliser in Agda looks good. So let's just try it. We'll need some basic kit. Here are my refusenik ASCII dependent pair type and homogeneous intensional equality.
record Sg (S : Set)(T : S -> Set) : Set where
  constructor _,_
  field
    fst : S
    snd : T fst
open Sg

data _==_ {X : Set}(x : X) : X -> Set where
  refl : x == x
Here's your candidate for an equaliser for two functions
Q : {S T : Set}(f g : S -> T) -> Set
Q {S}{T} f g = Sg S \ s -> f s == g s
with the fst projection sending Q f g into S.
What it says: an element of Q f g is an element s of the source type, together with a proof that f s == g s. But is this an equaliser? Let's try to make it so.
To say what an equaliser is, I should define function composition.
_o_ : {R S T : Set} -> (S -> T) -> (R -> S) -> R -> T
(f o g) x = f (g x)
So now I need to show that any h : R -> S which identifies f o h and g o h must factor through the candidate fst : Q f g -> S. I need to deliver both the other component, u : R -> Q f g, and the proof that h indeed factors as fst o u. Here's the picture: (Q f g , fst) is an equaliser if, whenever the diagram commutes without u, there is a unique way to add u with the diagram still commuting.
Here goes existence of the mediating u.
mediator : {R S T : Set}(f g : S -> T)(h : R -> S) ->
           (q : (f o h) == (g o h)) ->
           Sg (R -> Q f g) \ u -> h == (fst o u)
Clearly, I should pick the same element of S that h picks.
mediator f g h q = (\ r -> (h r , ?0)) , ?1
leaving me with two proof obligations
?0 : f (h r) == g (h r)
?1 : h == (\ r -> h r)
Now, ?1 can just be refl, as Agda's definitional equality has the eta-law for functions. For ?0, we are blessed by q. Equal functions respect application:
funq : {S T : Set}{f g : S -> T} -> f == g -> (s : S) -> f s == g s
funq refl s = refl
so we may take ?0 = funq q r.
But let us not celebrate prematurely, for the existence of a mediating morphism is not sufficient. We require also its uniqueness. And here the wheel is likely to go wonky, because == is intensional, so uniqueness means there's only ever one way to implement the mediating map. But then, our assumptions are also intensional...
Here's our proof obligation. We must show that any other mediating morphism is equal to the one chosen by mediator.
mediatorUnique :
  {R S T : Set}(f g : S -> T)(h : R -> S) ->
  (qh : (f o h) == (g o h)) ->
  (m : R -> Q f g) ->
  (qm : h == (fst o m)) ->
  m == fst (mediator f g h qh)
We can immediately substitute via qm and get
mediatorUnique f g .(fst o m) qh m refl = ?
? : m == (\ r -> (fst (m r) , funq qh r))
which looks good, because Agda has eta laws for records, so we know that
m == (\ r -> (fst (m r) , snd (m r)))
but when we try to make ? = refl, we get the complaint
snd (m _) != funq qh _ of type f (fst (m _)) == g (fst (m _))
which is annoying, because identity proofs are unique (in the standard configuration). Now, you can get out of this by postulating extensionality and using a few other facts about equality
postulate ext : {S T : Set}{f g : S -> T} -> ((s : S) -> f s == g s) -> f == g
sndq : {S : Set}{T : S -> Set}{s : S}{t t' : T s} ->
       t == t' -> _==_ {Sg S T} (s , t) (s , t')
sndq refl = refl
uip : {X : Set}{x y : X}{q q' : x == y} -> q == q'
uip {q = refl}{q' = refl} = refl
? = ext (\ s -> sndq uip)
but that's overkill, because the only problem is the annoying equality proof mismatch: the computable parts of the implementations match on the nose. So the fix is to work with irrelevance. I replace Sg by the existential quantifier Ex, whose second component is marked as irrelevant with a dot. Now it matters not which proof we use that the witness is good.
record Ex (S : Set)(T : S -> Set) : Set where
  constructor _,_
  field
    fst : S
    .snd : T fst
open Ex
and the new candidate equaliser is
Q : {S T : Set}(f g : S -> T) -> Set
Q {S}{T} f g = Ex S \ s -> f s == g s
The entire construction goes through as before, except that in the last obligation
? = refl
is accepted!
So yes, even in the intensional setting, eta laws and the ability to mark fields as irrelevant give us equalisers.
No undecidable typechecking was involved in this construction.

Hask
Hask doesn't have equalizers. An important thing to remember is that reasoning about types (or the objects in any category) and their isomorphism classes really requires thinking about the arrows. What you say about the underlying sets is true, but types with isomorphic underlying sets certainly aren't necessarily isomorphic. One difference between Hask and Set is that Hask's arrows must be computable, and in fact for idealized Hask, they must be total.
I spent a while trying to come up with a real defensible counterexample, and found some references suggesting it cannot be done, but without proofs. However, I do have some "moral" counterexamples if you will; I cannot prove that no equalizer exists in Haskell, but it certainly seems impossible!
Example 1
f, g: ([Int], Int) -> Int
f (p,v) = treat p as a polynomial with given coefficients, and evaluate p(v).
g _ = 0
The equalizer "should" be the type of all pairs (p,n) where p(n) = 0, along with a function injecting these pairs into ([Int], Int). By Hilbert's 10th problem, membership in this set is undecidable. It seems to me that this should exclude the possibility of it being a Haskell type, but I can't prove that (is it possible that there's some bizarre way to construct this type that nobody has discovered?). Maybe I haven't connected a dot or two -- perhaps proving this is impossible isn't hard?
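The two arrows in Example 1 can be written out as a quick Haskell sketch; the polynomial evaluation is my own rendering of the informal description:

```haskell
-- f evaluates the coefficient list p as a polynomial at v;
-- g is constantly zero. Their equalizer would carve out the
-- (undecidable) set of integer roots of integer polynomials.
f :: ([Int], Int) -> Int
f (p, v) = sum (zipWith (\c i -> c * v ^ i) p [0 :: Int ..])

g :: ([Int], Int) -> Int
g _ = 0
```

For instance, f ([-4,0,1], 2) evaluates x^2 - 4 at x = 2 and yields 0, so that pair would inhabit the would-be equalizer.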
Example 2
Say you have a programming language. You have a compiler that takes the source code and an input and produces a function, whose fixed point is the output. (While we don't have compilers like this, specifying semantics sort of like this isn't unheard of.) So, you have
compiler : String -> Int -> (Int -> Int)
(Un)curry that into a function
compiler' : (String, Int, Int) -> Int
and add a function
id' : (String, Int, Int) -> Int
id' (_,_,x) = x
Then the equalizer of compiler' and id' would be the collection of triples of source program, input, and output -- and this is uncomputable because the programming language is fully general.
More Examples
Pick your favorite undecidable problem: it generally involves deciding whether an object is a member of some set. You often have a total function that can be used to check this property for a particular object. You can use this function to create an equalizer whose type should be all the items in your undecidable set. That's where the first two examples came from, and there are tons more.
Agda
I'm not as familiar with Agda. My intuition is that your sigma-type should be an equalizer: you can write the type down, along with the necessary injection function, and it looks like it satisfies the definition entirely. However, as someone who doesn't use Agda, I don't think I'm really qualified to check the details.
The real practical issue, though, is that typechecking that sigma type won't always be computable, so it's not always useful to do this. In all the examples above, you can write down the sigma type you provided, but you won't be able to readily check whether something is a member of that type without a proof.
Incidentally, this is why Haskell shouldn't be able to have equalizers: if it did, typechecking would be undecidable! Dependent types are what make everything tick. They can express interesting mathematical structures in their types, while Haskell can't, since its type system is decidable. So, I would naturally expect idealized Agda to have all finite limits (I would be disappointed otherwise). The same goes for other dependently typed languages; Coq, for example, should definitely have all limits.

Related

Are codatatypes really terminal algebras?

(Disclaimer: I'm not 100% sure how codatatypes work, especially when not referring to terminal algebras.)
Consider the "category of types", something like Hask but with whatever adjustment that fits the discussion. Within such a category, it is said that (1) the initial algebras define datatypes, and (2) terminal algebras define codatatypes.
I'm struggling to convince myself of (2).
Consider the functor T(t) = 1 + a * t. I agree that the initial T-algebra is well-defined and indeed defines [a], the list of a. By definition, the initial T-algebra is a type X together with a function f :: 1+a*X -> X, such that for any other type Y and function g :: 1+a*Y -> Y, there is exactly one function m :: X -> Y such that m . f = g . T(m) (where . denotes the function composition operator as in Haskell). With f interpreted as the list constructor(s), g the initial value and the step function, and T(m) the recursion operation, the equation essentially asserts the unique existence of the function m given any initial value and any step function defined in g, which necessitates an underlying well-behaved fold together with the underlying type, the list of a.
For example, g :: Unit + (a, Nat) -> Nat could be () -> 0 | (_,n) -> n+1, in which case m defines the length function, or g could be () -> 0 | (_,n) -> 0, in which case m defines a constant zero function. An important fact here is that, for whatever g, m can always be uniquely defined, just as fold does not impose any constraint on its arguments and always produces a unique well-defined result.
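The two choices of g above can be replayed as runnable Haskell, presenting the algebra over Either () (a, x) to mirror 1 + a * x (foldList and the algebra names are mine):

```haskell
-- foldList is the fold induced by initiality of the list datatype,
-- with the algebra presented as Either () (a, x) -> x.
foldList :: (Either () (a, x) -> x) -> [a] -> x
foldList alg []       = alg (Left ())
foldList alg (a : as) = alg (Right (a, foldList alg as))

-- g as "length": () -> 0 | (_, n) -> n + 1
lengthAlg :: Either () (a, Int) -> Int
lengthAlg (Left ())      = 0
lengthAlg (Right (_, n)) = n + 1

-- g as "constant zero": () -> 0 | (_, n) -> 0
zeroAlg :: Either () (a, Int) -> Int
zeroAlg _ = 0
```

Either algebra determines a unique fold, just as the text says: foldList lengthAlg is length, and foldList zeroAlg is constantly zero.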
This does not seem to hold for terminal algebras.
Consider the same functor T defined above. The definition of the terminal T-algebra is the same as the initial one, except that m is now of type X -> Y and the equation now becomes m . g = f . T(m). It is said that this should define a potentially infinite list.
I agree that this is sometimes true. For example, when g :: Unit + (Unit, Int) -> Int is defined as () -> 0 | (_,n) -> n+1 like before, m then behaves such that m(0) = () and m(n+1) = Cons () m(n). For non-negative n, m(n) should be a finite list of units. For any negative n, m(n) should be of infinite length. It can be verified that the equation above holds for such g and m.
With any of the two following modified definition of g, however, I don't see any well-defined m anymore.
First, when g is again () -> 0 | (_,n) -> n+1 but is of type g :: Unit + (Bool, Int) -> Int, m must satisfy m(g((b,i))) = Cons b m(g(i)), which means that the result depends on b. But this is impossible, because m(g((b,i))) is really just m(i+1), which makes no mention of b whatsoever, so the equation is not well-defined.
Second, when g is again of type g :: Unit + (Unit, Int) -> Int but is defined as the constant zero function g _ = 0, m must satisfy that m(g(())) = Nil and m(g(((),i))) = Cons () m(g(i)), which are contradictory because their left hand sides are the same, both being m(0), while the right hand sides are never the same.
In summary, there are T-algebras that have no morphism into the supposed terminal T-algebra, which implies that the terminal T-algebra does not exist. The theoretical modeling of the codatatype Stream (or infinite list), if any, cannot be based on the nonexistent terminal algebra of the functor T(t) = 1 + a * t.
Many thanks to any hint of any flaw in the story above.
(2) terminal algebras define codatatypes.
This is not right: codatatypes are terminal coalgebras. For your T functor, a coalgebra is a type x together with f :: x -> T x. A T-coalgebra morphism between (x1, f1) and (x2, f2) is a g :: x1 -> x2 such that fmap g . f1 = f2 . g. Using this definition, the terminal T-coalgebra defines the possibly infinite lists (so-called "colists"), and the terminality is witnessed by the unfold function:
unfold :: (x -> Unit + (a, x)) -> x -> Colist a
Note though that a terminal T-algebra does exist: it is simply the Unit type together with the constant function T Unit -> Unit (and this works as a terminal algebra for any T). But this is not very interesting for writing programs.
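That unfold can be written out directly, using Either () in place of Unit + (Colist and countdown are my own names; note that in Haskell, [a] itself already plays the colist role thanks to laziness):

```haskell
data Colist a = Nil | Cons a (Colist a) deriving (Eq, Show)

-- unfold (anamorphism): repeatedly ask the coalgebra for the next step
unfold :: (x -> Either () (a, x)) -> x -> Colist a
unfold step x = case step x of
  Left ()       -> Nil
  Right (a, x') -> Cons a (unfold step x')

-- e.g. counting down from a seed produces a finite colist,
-- while (\n -> Right (n, n)) would produce an infinite one
countdown :: Int -> Colist Int
countdown = unfold (\n -> if n <= 0 then Left () else Right (n, n - 1))
```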
it is said that (1) the initial algebras define datatypes, and (2) terminal algebras define codatatypes.
On the second point, it is actually said that terminal coalgebras define codatatypes.
A datatype t is defined by its constructors and a fold.
Constructors can be modelled by an algebra F t -> t (for example, the Peano constructors O : Nat and S : Nat -> Nat are collected as a single function in : Unit + Nat -> Nat).
The fold then gives the catamorphism fold f : t -> x for any algebra f : F x -> x (for nats, fold : ((Unit + x) -> x) -> Nat -> x).
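Concretely, for Nat this reads as the following Haskell sketch (all names mine):

```haskell
data Nat = O | S Nat deriving (Eq, Show)

-- in: the two constructors collected as one algebra Unit + Nat -> Nat
inNat :: Either () Nat -> Nat
inNat (Left ()) = O
inNat (Right n) = S n

-- fold: the catamorphism, for any algebra Either () x -> x
foldNat :: (Either () x -> x) -> Nat -> x
foldNat f O     = f (Left ())
foldNat f (S n) = f (Right (foldNat f n))
```

With the algebra that sends Left () to 0 and Right n to n + 1, foldNat converts a Nat back to an Int.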
A codatatype t is defined by its destructors and an unfold.
Destructors can be modelled by a coalgebra t -> F t (for example, streams have two destructors head : Stream a -> a and tail : Stream a -> Stream a, and they are collected as a single function out : Stream a -> a * Stream a).
The unfold then gives the anamorphism unfold f : x -> t for any coalgebra f : x -> F x (for streams, unfold : (x -> a * x) -> x -> Stream a).
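And for streams, the destructors and the unfold look like this (again a sketch; all names, including the finite-observation helper stake, are mine):

```haskell
data Stream a = SCons a (Stream a)

shead :: Stream a -> a
shead (SCons a _) = a

stail :: Stream a -> Stream a
stail (SCons _ s) = s

-- out: the two destructors collected as one coalgebra
out :: Stream a -> (a, Stream a)
out (SCons a s) = (a, s)

-- unfold: the anamorphism, for any coalgebra x -> (a, x)
unfoldStream :: (x -> (a, x)) -> x -> Stream a
unfoldStream f x = let (a, x') = f x in SCons a (unfoldStream f x')

-- observe a finite prefix of an infinite stream
stake :: Int -> Stream a -> [a]
stake n (SCons a s)
  | n <= 0    = []
  | otherwise = a : stake (n - 1) s
```

For instance, unfoldStream (\n -> (n, n + 1)) 0 is the stream of naturals.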
(Disclaimer: I'm not 100% sure how codatatypes work, especially when not referring to terminal algebras.)
A codata type, or coinductive data type, is just one defined by its eliminations rather than its introductions.
It seems that sometimes terminal algebra is used (very confusingly) to refer to a final coalgebra, which is what actually defines a codata type.
Consider the same functor T defined above. The definition of the terminal T-algebra is the same as the initial one, except that m is now of type X -> Y and the equation now becomes m . g = f . T(m). It is said that this should define a potentially infinite list.
So I think this is where you’ve gone wrong: “m ∘ g = f ∘ T(m)” should be reversed, and read “T(m) ∘ f = g ∘ m”. That is, the final coalgebra is defined by a carrier set S and a map g : S → T(S) such that for any other coalgebra (R, f : R → T(R)) there is a unique map m : R → S such that T(m) ∘ f = g ∘ m.
m is defined uniquely and recursively as the map that returns Left () whenever f maps to Left (), and Right (x, m xs) whenever f maps to Right (x, xs). That is, it assigns to each coalgebra its unique morphism into the final coalgebra, and denotes the unique anamorphism/unfold of this type, which should be easy to convince yourself is in fact a possibly-empty and possibly-infinite stream.

Proper way of applying two (or many) option values to a function in F#

Recently I discovered a style of programming which is very useful (and pretty) in the functional world, called Railway Oriented Programming.
For example, when we want to create a pipeline of functions that produce an option type, and we want to return None if any of them fails, we can do something like this:
someOptionValue // : 'a option
>>= func1 // func1: 'a -> 'b option
>>= func2 // func2: 'b -> 'c option
and so on...
where (>>=) : 'a option -> ('a -> 'b option) -> 'b option is an operator that applies the function on the right to the contents of the left-hand side if it is Some value, and returns None otherwise.
But here is my problem: what if we have a function that "takes" two (or many) option types? Say funcAB : 'a -> 'b -> 'c option, valA : 'a option and valB : 'b option, and we still want to create this pipeline or use some nice operator (not create a new one specifically for this, but use some standard approach; in particular I don't want to use match ... with to "unpack" the option values).
Currently I have something like this:
valA
>>= (funcAB >> Some)
>>= (fun ctr -> valB >>= ctr)
But it doesn't seem 'correct' (or fun is the better word ;] ), and it doesn't scale well if a function takes more parameters or we want to create a longer pipeline. Is there a better way to do this?
I've used F# syntax but I think this question can be applied to any functional programming language, like OCaml and Haskell.
EDIT (Solution):
Thanks to chi's answer I've created the following F# code, which is much more idiomatic than what I previously had:
funcAB <!> valA <*> valB |> Option.flatten
And it scales well if we have more values: funcAB <!> valA <*> valB <*> valC <*> ....
I've used operators defined in YoLo.
In Haskell, we can use Applicative syntax for that:
If
valA :: f a
valB :: f b
funAB :: a -> b -> f c
then
join $ funAB <$> valA <*> valB :: f c
provided f is a monad (like Maybe, Haskell's option).
It should be adaptable to F# as well, I guess, as long as you define your operators
(<$>) :: (a -> b) -> f a -> f b
(<*>) :: f (a -> b) -> f a -> f b
join :: f (f a) -> f a
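For instance, with f = Maybe and a made-up funAB of my own (safe division), the trick runs like this:

```haskell
import Control.Monad (join)

-- a two-argument function whose result may fail
funAB :: Int -> Int -> Maybe Int
funAB _ 0 = Nothing            -- division by zero fails
funAB a b = Just (a `div` b)

-- the applicative lift produces Maybe (Maybe Int); join collapses it
example1 :: Maybe Int
example1 = join (funAB <$> Just 10 <*> Just 2)   -- Just 5

-- any None/Nothing input short-circuits the whole pipeline
example2 :: Maybe Int
example2 = join (funAB <$> Just 10 <*> Nothing)  -- Nothing
```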
The above trick is a poor man's version of Idris !-notation (bang notation).
Another common option is using do
do a <- valA
   b <- valB
   funAB a b
but this is comparable with using >>=, indeed:
valA >>= \a ->
valB >>= \b ->
funAB a b
is not much more complex.
One option is to use computation expressions. For option there is no standard one but you can easily create your own:
type OptionBuilder() =
    member this.Bind (x, f) = Option.bind f x
    member this.Return x = Some x

let optional = OptionBuilder()

let a, b = Some(42), Some(7)
let f x y = x + y

let res = optional {
    let! x = a
    let! y = b
    return f x y
}
which closely resembles Haskell's do notation.
For more advanced features, have a look at F#+ which also has a generic applicative functor operator <*>.

Does a natural monoidal structure on copoints of a Functor induce a Comonad?

The situation is as follows (I changed to more standard-ish Haskell notation):
class Functor f => MonoidallyCopointed f where
  copointAppend :: (∀r.f(r)->r) -> (∀r.f(r)->r) -> (∀r.f(r)->r)
  copointEmpty :: ∀r.f(r)->r
such that for every instance F of MonoidallyCopointed and for all
x,y,z :: ∀r.F(r)->r
the following holds:
x `copointAppend` copointEmpty == copointEmpty `copointAppend` x == x
x `copointAppend` (y `copointAppend` z) == (x `copointAppend` y) `copointAppend` z
Then is it true that F has a natural Comonad instance defined from copointAppend and copointEmpty?
N.B. The converse holds (with copointEmpty = extract and copointAppend f g = f . g . duplicate).
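That converse direction can be spelled out as a small sketch. I use a local Comonad class (to stay self-contained) and the pair comonad (e, a) as a concrete instance; all names are mine:

```haskell
{-# LANGUAGE RankNTypes #-}

class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)

-- the "pair" (environment) comonad
instance Comonad ((,) e) where
  extract (_, a)   = a
  duplicate (e, a) = (e, (e, a))

-- a copoint of w is a polymorphic projection out of it
type Copoint w = forall r. w r -> r

-- any Comonad induces a monoid on copoints, as the N.B. states
copointEmpty :: Comonad w => Copoint w
copointEmpty = extract

copointAppend :: Comonad w => Copoint w -> Copoint w -> Copoint w
copointAppend f g w = f (g (duplicate w))
```

For the pair comonad every copoint is extract (by parametricity), so copointAppend extract extract behaves exactly like extract, as the unit law demands.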
EDIT
As Bartosz pointed out in the comments, this is mostly the definition of comonads via the co-Kleisli adjunction. So the question is really about the constructivity of this notion. Accordingly, the following question is probably more interesting in terms of real-world applications:
Does there exist a constructive isomorphism between the set of possible Comonad instances of f and the set of possible MonoidallyCopointed instances of f?
This can be useful in practice, because a direct definition of a Comonad instance can involve a bit of technical, hard-to-read code that cannot be verified by the type checker. For example,
data W a = W (Maybe a) (Int -> a) (Either (String -> a) (a,a,a,a))
has a Comonad instance, but the direct definition (with the proof that it's indeed a Comonad!) may not be so easy. On the other hand, providing a MonoidallyCopointed instance may be a little easier (but I'm not perfectly sure of this point).

Pattern matching in Observational Type Theory

At the end of the "5. Full OTT" section of Towards Observational Type Theory, the authors show how to define coercible-under-constructors indexed data types in OTT. The idea is basically to turn indexed data types into parameterized ones, like this:
data IFin : ℕ -> Set where
  zero : ∀ {n} -> IFin (suc n)
  suc  : ∀ {n} -> IFin n -> IFin (suc n)

data PFin (m : ℕ) : Set where
  zero : ∀ {n} -> suc n ≡ m -> PFin m
  suc  : ∀ {n} -> suc n ≡ m -> PFin n -> PFin m
Conor also mentions this technique at the bottom of observational type theory (delivery):
The fix, of course, is to do what the GADT people did, and define
inductive families explicitly up to propositional equality. And then of
course you can transport them, by transitivity.
However, a type checker in Haskell is aware of equality constraints in scope and actually uses them during type checking. E.g. we can write
f :: a ~ b => a -> b
f x = x
It doesn't work like that in type theory, since it's not enough to have a proof of a ~ b in scope to be able to rewrite by this equation: that proof must also be refl, because in the presence of a false hypothesis type checking becomes undecidable due to termination issues (something like this). So when you pattern match on Fin m in Haskell, m gets rewritten to suc n in each branch, but that can't happen in type theory; instead you're left with an explicit proof of suc n ~ m. In OTT it's not possible to pattern match on proofs at all, hence you can neither pretend the proof is refl nor actually require that. It's only possible to supply the proof to coerce or just ignore it.
This makes it very hard to write anything that involves indexed data types. E.g. the usual three-lines (including the type signature) lookup for vectors becomes this beast:
vlookupₑ : ∀ {n m a} {α : Level a} {A : Univ α} -> ⟦ n ≅ m ⇒ fin n ⇒ vec A m ⇒ A ⟧
vlookupₑ p (fzeroₑ q) (vconsₑ r x xs) = x
vlookupₑ {n} {m} p (fsucₑ {n′} q i) (vconsₑ {m′} r x xs) =
  vlookupₑ (left (suc n′) {m} {suc m′} (trans (suc n′) {n} {m} q p) r) i xs
vlookupₑ {n} {m} p (fzeroₑ {n′} q) (vnilₑ r) =
  ⊥-elim $ left (suc n′) {m} {0} (trans (suc n′) {n} {m} q p) r
vlookupₑ {n} {m} p (fsucₑ {n′} q i) (vnilₑ r) =
  ⊥-elim $ left (suc n′) {m} {0} (trans (suc n′) {n} {m} q p) r

vlookup : ∀ {n a} {α : Level a} {A : Univ α} -> Fin n -> Vec A n -> ⟦ A ⟧
vlookup {n} = vlookupₑ (refl n)
It could be simplified a bit, since if two elements of a data type that has decidable equality are observably equal, then they are also equal in the usual intensional sense, and natural numbers do have decidable equality, so we can coerce all the equations to their intensional counterparts and pattern match on them. But that would break some computational properties of vlookup and is verbose anyway. And it's nearly impossible to deal with more complicated cases, with indices whose equality cannot be decided.
Is my reasoning correct? How is pattern matching in OTT meant to work? If this is a problem indeed, are there any ways to mitigate it?
I guess I'll field this one. I find it a strange question, but that's because of my own particular journey. The short answer is: don't do pattern matching in OTT, or in any kernel type theory. Which is not the same thing as to not do pattern matching ever.
The long answer is basically my PhD thesis.
In my PhD thesis, I show how to elaborate high-level programs written in a pattern matching style into a kernel type theory which has only the induction principles for inductive datatypes and a suitable treatment of propositional equality. The elaboration of pattern matching introduces propositional equations on datatype indices, then solves them by unification. Back then, I was using an intensional equality, but observational equality gives you at least the same power. That is: my technology for elaborating pattern matching (and thus keeping it out of the kernel theory), hiding all the equational jiggery-pokery, predates the upgrade to observational equality. The ghastly vlookup you've used to illustrate your point might correspond to the output of the elaboration process, but the input need not be that bad. The nice definition
vlookup : Fin n -> Vec X n -> X
vlookup fz (vcons x xs) = x
vlookup (fs i) (vcons x xs) = vlookup i xs
elaborates just fine. The equation-solving that happens along the way is just the same equation-solving that Agda does at the meta-level when checking a definition by pattern matching, or that Haskell does. Don't be fooled by programs like
f :: a ~ b => a -> b
f x = x
In kernel Haskell, that elaborates to some sort of
f {q} x = coerce q x
but it's not in your face. And it's not in compiled code, either. OTT equality proofs, like Haskell equality proofs, can be erased before computing with closed terms.
Digression. To be clear about the status of equality data in Haskell, the GADT
data Eq :: k -> k -> * where
  Refl :: Eq x x
really gives you
Refl :: x ~ y -> Eq x y
but because the type system is not logically sound, type safety relies on strict pattern matching on that type: you can't erase Refl and you really must compute it and match it at run time, but you can erase the data corresponding to the proof of x~y. In OTT, the entire propositional fragment is proof-irrelevant for open terms and erasable for closed computation. End of digression.
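To make the digression's point concrete, here is a tiny sketch of strict matching on an equality GADT (castWith is my own name here; a similar function lives in base's Data.Type.Equality):

```haskell
{-# LANGUAGE GADTs #-}

data Eq' a b where
  Refl :: Eq' a a

-- The strict match on Refl is what brings a ~ b into scope for the
-- right-hand side; the constructor itself must be computed and matched
-- at run time, even though the equation it carries is erasable.
castWith :: Eq' a b -> a -> b
castWith Refl x = x
```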
The decidability of equality on this or that datatype is not especially relevant (at least, not if you have uniqueness of identity proofs; if you don't always have UIP, decidability is one way to get it sometimes). The equational problems which show up in pattern matching are on arbitrary open expressions. That's a lot of rope. But a machine can certainly decide the fragment which consists of first-order expressions built from variables and fully applied constructors (and that's what Agda does when you split cases: if the constraints are too weird, the thing just barfs). OTT should allow us to push a bit further into the decidable fragments of higher-order unification. If you know (forall x. f x = t[x]) for unknown f, that's equivalent to f = \ x -> t[x].
So, "no pattern matching in OTT" has always been a deliberate design choice: we always intended it to be an elaboration target for a translation we already knew how to do. Far from being a restriction, it's a strict upgrade in kernel theory power.

How can a function be "transparently augmented" in Haskell?

Situation
I have a function f, which I want to augment with a function g, resulting in a function named h.
Definitions
By "augment", in the general case, I mean: transform either input (one or more arguments) or output (return value) of function f.
By "augment", in the specific case, (specific to my current situation) I mean: transform only the output (return value) of function f while leaving all the arguments intact.
By "transparent", in the context of "augmentation", (both the general case and the specific case) I mean: To couple g's implementation as loosely to f's implementation as possible.
Specific case
In my current situation, this is what I need to do:
h a b c = g $ f a b c
I am interested in rewriting it to something like this:
h = g . f -- Doesn't type-check.
Because from the perspective of h and g, it doesn't matter what arguments f takes; they only care about the return value, hence it would be tight coupling to mention the arguments in any way. For instance, if f's argument count changes in the future, h will also need to be changed.
So far
I asked lambdabot on the #haskell IRC channel: #pl h a b c = g $ f a b c to which I got the response:
h = ((g .) .) . f
Which is still not good enough, since the number of (.)'s depends on the number of f's arguments.
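As a sanity check, lambdabot's answer does agree with the pointful definition, here for a throwaway three-argument f and g of my own:

```haskell
f :: Int -> Int -> Int -> Int
f a b c = a + b * c

g :: Int -> Int
g = (* 2)

-- pointful version
h :: Int -> Int -> Int -> Int
h a b c = g $ f a b c

-- lambdabot's point-free version: one (.) per argument of f
h' :: Int -> Int -> Int -> Int
h' = ((g .) .) . f
```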
General case
I haven't done much research in this direction, but erisco on #haskell pointed me towards http://matt.immute.net/content/pointless-fun which hints to me that a solution for the general case could be possible.
So far
Using the functions defined by Luke Palmer in the above article this seems to be an equivalent of what we have discussed so far:
h = f $. id ~> id ~> id ~> g
However, it seems that this method sadly also suffers from depending on the number of arguments of f if we want to transform the return value of f -- just as the previous methods do.
Working example
In JavaScript, for instance, it is possible to achieve transparent augmentation like this:
function h () { return g(f.apply(this, arguments)) }
Question
How can a function be "transparently augmented" in Haskell?
I am mainly interested in the specific case, but it would be also nice to know how to handle the general case.
You can sort-of do it, but since there is no way to specify a behavior for everything that isn't a function, you'll need a lot of trivial instances for all the other types you care about.
{-# LANGUAGE TypeFamilies, DefaultSignatures #-}

class Augment a where
  type Result a
  type Result a = a

  type Augmented a r
  type Augmented a r = r

  augment :: (Result a -> r) -> a -> Augmented a r
  default augment :: (a -> r) -> a -> r
  augment g x = g x

instance Augment b => Augment (a -> b) where
  type Result (a -> b) = Result b
  type Augmented (a -> b) r = a -> Augmented b r
  augment g f x = augment g (f x)

instance Augment Bool
instance Augment Char
instance Augment Integer
instance Augment [a]
-- and so on for every result type of every function you want to augment...
Example:
> let g n x ys = replicate n x ++ ys
> g 2 'a' "bc"
"aabc"
> let g' = augment length g
> g' 2 'a' "bc"
4
> :t g
g :: Int -> a -> [a] -> [a]
> :t g'
g' :: Int -> a -> [a] -> Int
Well, technically, with just enough IncoherentInstances you can do pretty much anything:
{-# LANGUAGE MultiParamTypeClasses, TypeFamilies,
             FlexibleInstances, UndecidableInstances, IncoherentInstances #-}

class Augment a b f h where
  augment :: (a -> b) -> f -> h

instance (a ~ c, h ~ b) => Augment a b c h where
  augment = ($)

instance (Augment a b d h', h ~ (c -> h')) => Augment a b (c -> d) h where
  augment g f = augment g . f

-- Usage
t1 = augment not not
r1 = t1 True

t2 = augment (+1) (+)
r2 = t2 2 3

t3 = augment (+1) foldr
r3 = t3 (+) 0 [2,3]
The problem is that the real return value of something like a -> b -> c isn't c, but b -> c. What you want requires some kind of test that tells you whether a type is a function type. You could enumerate the types you are interested in, but that's not so nice. I think HList solves this problem somehow; look at the paper. I managed to understand a bit of the solution with overlapping instances, but the rest goes a bit over my head, I'm afraid.
JavaScript works because its arguments are a sequence, or a list, so there is really just one argument. In that sense it is the same as a curried version of the function, with a tuple representing the collection of arguments.
In a strongly typed language you need a lot more information to do that "transparently" for a function type - for example, dependent types can express this idea, but they require the functions to be of specific types, not an arbitrary function type.
I think I saw a workaround in Haskell that can do this too, but, again, it works only for specific types which capture the arity of the function, not any function.
