Partially applying a type in a type constructor - haskell

In the book "Learn you a Haskell", chapter 11 we have the introduction of the newtype keyword and it all makes sense to me up to the part where we see the Pair type being made an instance of a Functor.
This discussion begins with a problem statement that "only type constructors that take exactly one parameter can be made an instance of Functor." I don't understand this statement. Where is this restriction ever described?
In the context of the discussion we have a type Pair defined as:
newtype Pair b a = Pair { getPair :: (a,b) }
Given this, I would have thought that we could have used something like:
instance Functor (Pair a b) where ...
I would have thought that (Pair a b) would have substituted for 'f' in the definition of Functor without any problem. What keeps this from working?
Next, the author goes on to make Pair an instance of a Functor using the following:
instance Functor (Pair c) where ...
Here is where I get lost. I don't recall ever seeing 'partially applying' types to a type constructor before. I am not really sure what this even means in this context, let alone how it solves the problem identified above it.
I think I have a misconception somewhere but I don't know where to go looking for it. I did find this answer but it only seems to go part of the way in answering the question.

"only type constructors that take exactly one parameter can be made an instance of Functor." I don't understand this statement. Where is this restriction ever described?
The restriction comes from the type of fmap in the definition of Functor:
class Functor f where
  fmap :: (a -> b) -> f a -> f b
Because the class variable f is applied to one argument in f a, the type used to instantiate Functor must be able to take one parameter. And because the application f a must itself be a complete type (it appears as part of a function type), f must not take more than one parameter, either. (There's one more subtle piece about what kinds of arguments f can take, but I think it's worth skipping that for now; hopefully your tutorial will touch on this subtlety later.)
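A quick way to see the parameter-count requirement is to compare kinds, which GHCi's :kind command will report (written here with the older * syntax), using the Pair type from the question:
Pair     :: * -> * -> *   -- still needs two type arguments
Pair c   :: * -> *        -- needs exactly one more: this is the shape Functor expects
Pair c d :: *             -- a fully applied type; it cannot be applied to anything else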
Given this, I would have thought that we could have used something like:
instance Functor (Pair a b) where ...
I would have thought that (Pair a b) would have substituted for 'f' in the definition of Functor without any problem. What keeps this from working?
It is once again the type of fmap that keeps this from working. Filling in Pair c d for f (modifying a to c and b to d to avoid name clashes), we would get
fmap :: (a -> b) -> Pair c d a -> Pair c d b
which doesn't make a lot of sense.
Here is where I get lost. I don't recall ever seeing 'partially applying' types to a type constructor before.
Well... now you have. =)
I am not really sure what this even means in this context, let alone how it solves the problem identified above it.
It means basically the same thing partial application at the term level means: Pair a is akin to a type function which takes another type as an argument. So, e.g., Pair a applied to Int would give the Pair a Int type. It solves the problem because now we are supplying a "type-function-like" thing rather than a "type-like" thing, so applying it to another type makes sense; that is, the substituted type of fmap for a Pair c instance would be
fmap :: (a -> b) -> Pair c a -> Pair c b
which now makes sense because Pair c a and Pair c b are sensible types, whereas before Pair c d a and Pair c d b were over-applied and therefore not sensible types.
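For concreteness, here is roughly the instance the book is building towards (a sketch using the question's Pair definition):
newtype Pair b a = Pair { getPair :: (a, b) }

instance Functor (Pair c) where
  fmap f (Pair (x, y)) = Pair (f x, y)
Note that fmap only touches the first tuple component, the one corresponding to the type parameter that was left unapplied; the c that was partially applied stays fixed.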

Related

What is the difference between functor and monad, intuitively?

I have learned the definitions of functor and monad, but I am still unable to figure out the difference between them beyond the definitions. I have read some answers to this problem; in "What is the difference between a Functor and a Monad?" one comment says
A functor takes a pure function (and a functorial value) whereas a monad takes a Kleisli arrow, i.e. a function that returns a monad (and a monadic value). Hence you can chain two monads and the second monad can depend on the result of the previous one. You cannot do this with functors.
This comment is interesting and gives me a sense of the difference. But I still have some questions.
Why can't a functor use the result of the previous one? Since fmap :: (a -> b) -> f a -> f b, when I curry fmap with a pure function I get an f a -> f b function, and f b depends on f a. Does "the result" mean the data inside the functor?
In category theory I can understand the comment, since I cannot get at the elements there, but in Haskell I find that I can use the result of a functor, since Haskell exposes the data constructor. Does Haskell get in the way of understanding this comment? Should I understand it in purely category-theoretic terms?
When using fmap :: Functor f => (a -> b) -> f a -> f b, the argument function can never depend on any of the structure from the functor f itself; how can it, when all it ever gets passed are a values, nothing to do with f? When you use fmap to get an f a -> f b function, the code that is responsible for building the final f b value is the code of fmap itself, not the code of the a -> b function being mapped. The function being mapped might not even be called at all (e.g. if you're using the Maybe functor and the f a value is Nothing, there are no a values inside that for fmap to pass to the a -> b function, and it will never be called).
But fmap is fully polymorphic in a and b, meaning the code of fmap doesn't know what those types are and cannot assume anything about them. So fmap has to build the final f b value in a way that only depends on the functor structure added by f; the code of fmap cannot look at any a values and make decisions about what to do. In fact really the only thing a lawful fmap can do is return a value with exactly the same structure as the input f a value, only with any as that were inside replaced by the b value returned by the a -> b function. That's why you can just deriving Functor any data type; there's at most one way to make any given data type a Functor, and the compiler can just do it for you.
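For instance, the standard DeriveFunctor extension will write that unique fmap for you (the Tree type here is just an assumed example):
{-# LANGUAGE DeriveFunctor #-}
data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Functor)
-- the derived fmap necessarily preserves the tree's shape and only
-- replaces the a values stored at the nodes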
But the monad =<< function1 has this type: Monad m => (a -> m b) -> m a -> m b. Here the mapped function has type a -> m b. That means the final m b structure returned at the end isn't purely determined by the code of =<< (at least not necessarily). The a -> m b function knows what monad is being used and returns some monadic structure in its m b value, not just a raw b. The mapped function still doesn't receive any monadic structure, so its intermediate m b result can't depend on that; any dependency the final m b value has on the monadic structure in the original m a has to be determined by the code for =<< (which again, cannot make any decisions based on a values itself). But the mapped function (if it is called) does get to contribute some monadic structure, and that can depend on the a values (because the mapped function doesn't have to be completely polymorphic in a; it can inspect a values and make decisions based on them).
This means that ultimately any part of the final output m b might depend on any part of the input (if we don't know what function was mapped, or how the =<< function works internally). This is quite different from the functor case, where even without knowing anything about the specific functor or the mapped function we know that the final output will always look like "a copy of the input with as replaced by bs" (and specifically which b replaces each a is determined by the mapped function).
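A small Maybe-based sketch of that contrast (the functions here are made up purely for illustration):
-- With fmap, the mapped function only transforms values; the Just/Nothing
-- structure of the result always mirrors the input.
viaFmap :: Maybe Int -> Maybe Int
viaFmap mx = fmap (+ 1) mx

-- With =<<, the mapped function can itself decide what structure to return,
-- based on the value it is given.
viaBind :: Maybe Int -> Maybe Int
viaBind mx = (\x -> if even x then Just (x * 2) else Nothing) =<< mx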
It's possibly worth noting: almost everything I've said here is a direct consequence of what these types mean:
Functor f => (a -> b) -> f a -> f b
Monad m => (a -> m b) -> m a -> m b
These aren't special properties of functors and monads you have to remember, it's just a consequence of those operations having those types. (The functor and monad laws express things that the types don't, but I've not needed to invoke those to explain this difference)
Once you are really deeply used to thinking about types the Haskell way, you can figure this out for yourself just by looking at the types. I didn't of course; I had it explained to me (a dozen times) when I was learning, and I've remembered it since then. But now I can figure out similar things about new APIs I've never seen before, without any tutorials or explanations.
1 I'm using the reversed =<< operator rather than the traditional >>= operator merely because it lines up better with the fmap / <$> operator.
A monad is a functor, or rather a functor along with two additional operations. (The original(?) name for a monad was a triple.)
It helps to look at the original mathematical definition, which provided an operation unit :: Monad m => a -> m a, but instead of (>>=) :: Monad m => m a -> (a -> m b) -> m b, defined an operation called join :: Monad m => m (m a) -> m a, which lets you "remove" a layer of lifting. The two formulations are equivalent, as join can be defined in terms of (>>=)
join ma = ma >>= id
and vice versa:
ma >>= f = join $ fmap f ma
The last definition gives a different interpretation of a monad: it's a functor that you can "compress"; chaining comes for free by mapping a function over an already lifted value, then combining the two layers of lifting back to a single layer.
unit is required in either case because there's no way to get a lifted value in the first place with just fmap.
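To make "removing a layer of lifting" concrete, here is what join amounts to for two familiar monads (Control.Monad already exports join; these standalone versions are only for illustration):
joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe (Just (Just x)) = Just x
joinMaybe _               = Nothing

joinList :: [[a]] -> [a]
joinList = concat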

What's the meaning of `fmap` over set of predicates in Haskell?

The student assignment I'm doing for my Haskell programming course includes a task I'm a little puzzled about how to solve. It is given like this: an instance of the Functor class is to be created for a set-based newtype, declared as:
newtype Set x = Set { contains :: (x -> Bool) }
My difficulty is understanding what it means for fmap to be applied to something like a set of predicates. In the previous tasks I had only defined fmap with functions like (+3) (to alter integers), toUpper (for strings), etc. This is the first time I'm dealing with anything beyond normal types (various numerics, String, Char). Here is my humble attempt at a start:
instance Functor Set where
  fmap f (Set x) = if (contains x) == True then Set (f x) else Set x
Surely it's nonsense code, but I suppose some True/False needs to be evaluated before the fmap application can go well. But, first of all, would you please explain what a set of predicates amounts to, so I can work out a more sensible approach?
With this definition, it is actually impossible to define a Functor instance for Set. The reason for this is that your Set a type contains a in a negative position... that is, a is an argument to a function, not a value. When that happens, the type constructor can be a contravariant functor (type class Contravariant from the contravariant package), but it cannot be a covariant functor (the Functor type class).
Here are the definitions of those classes:
class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Contravariant f where
  contramap :: (a -> b) -> f b -> f a
See the difference? In a contravariant functor, the direction of the function you pass in is flipped when it's lifted to operate on the functor type.
In the end, this should make sense. Your notion of a "set" is a test that tells you whether something qualifies or not. This is a perfectly good mathematical definition of a set, but it gives you different computational powers than the standard one. For example, you cannot get at the elements of your predicate-based sets; only wait to be given a potential element. If you have a Set Integer, and a function f :: String -> Integer, then you can test Strings by first converting them to Integers and testing those, so you can get from Set Integer to Set String. But having a g :: Integer -> String doesn't let you test Strings!
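A minimal sketch of such an instance for the question's Set type (using Data.Functor.Contravariant, which the contravariant package provides and which newer versions of base also ship):
import Data.Functor.Contravariant (Contravariant (..))

newtype Set x = Set { contains :: x -> Bool }

instance Contravariant Set where
  -- to test a candidate element, first convert it, then apply the existing test
  contramap f (Set p) = Set (p . f)
With f :: String -> Integer, contramap f turns a Set Integer into a Set String, exactly as described above.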
If this is an assignment, then either you've misunderstood the assignment, you've done some earlier step differently than expected (e.g., if you defined Set yourself in an earlier part, maybe you need to change the definition), or perhaps your instructor is hoping you'll struggle with this and understand why Functor cannot be defined.

What does adjoining mean in practical uses?

I often run into the term "adjoin" when trying to understand some concepts. These things are too abstract for me, as I'm an expert in neither field theory nor category theory.
The simplest case I found is the Monoid (Maybe a) instance, which sometimes doesn't behave as I would expect with regard to Nothing.
From Wikipedia we can learn that by "adjoining" an element to a semigroup we can get a different Monoid instance. I don't understand the sentence, but the equations given suggest it's exactly what I need (and is not the default for some reason):
Any semigroup S may be turned into a monoid simply by adjoining an element e not in S and defining e • s = s = s • e for all s ∈ S.
Doe "adjoining" mean the same as "adding" at least in this case?
Are there other simple examples of this concept?
What would be simplest possible instance of something that is "left-adjoint"?
Sometimes "adjoining" means "adding something new", as in the semigroup-related sentence you quote. E.g. someone might say that using Maybe a means adding/adjoining a new element Nothing to the type a. Personally, I would only use "adding" for this, though.
This has nothing to do with adjoints in the categorical sense, which are a tricky concept.
Roughly put, assume you have a functional type of the form
F a -> b
where F is some mapping from types to types (more precisely, a functor). Sometimes, you can express an isomorphic type to the above one having the form
a -> G b
where "magically" the function F on the left side moved to the right side, changing into G.
The canonical example is currying: Let e.g.
F T = (T, Int)
G T = Int -> T
Then we have
(F a) -> b
-- definition of F
= (a, Int) -> b
-- currying
=~ a -> (Int -> b)
-- definition of G
= a -> G b
In such a case, we write F -| G, which is read as "F is left adjoint to G".
Every time you can "nicely move" an operation on the input type on the other side of the arrow, changing it into another operation on the output type, you have an adjoint. (Technically, "nicely" means we have a natural isomorphism)
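For the currying example, the two directions of that natural isomorphism can be written out directly in Haskell (the names here are just illustrative):
-- F a = (a, Int),  G b = Int -> b
leftAdjunct :: ((a, Int) -> b) -> (a -> (Int -> b))
leftAdjunct f = \a i -> f (a, i)

rightAdjunct :: (a -> (Int -> b)) -> ((a, Int) -> b)
rightAdjunct g = \(a, i) -> g a i
These are just curry and uncurry specialized to Int, which is why people say the adjunction F -| G "is" currying.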

Any advantage of using type constructors in type classes?

Take for example the class Functor:
class Functor a
instance Functor Maybe
Here Maybe is a type constructor.
But we can do this in two other ways:
Firstly, using multi-parameter type classes:
class MultiFunctor a e
instance MultiFunctor (Maybe a) a
Secondly using type families:
class MonoFunctor a
instance MonoFunctor (Maybe a)
type family Element a
type instance Element (Maybe a) = a
Now there's one obvious advantage of the two latter methods, namely that it allows us to do things like this:
instance MultiFunctor Text Char
Or:
instance MonoFunctor Text
type instance Element Text = Char
So we can work with monomorphic containers.
The second advantage is that we can make instances of types that don't have the type parameter as the final parameter. Lets say we make an Either style type but put the types the wrong way around:
data Silly t errorT = Silly t errorT
instance Functor Silly -- oh no we can't do this without a newtype wrapper
Whereas
instance MultiFunctor (Silly t errorT) t
works fine and
instance MonoFunctor (Silly t errorT)
type instance Element (Silly t errorT) = t
is also good.
Given these flexibility advantages of using only complete types (not partially applied type constructors) in type class definitions, is there any reason to use the original style of definition, assuming you're using GHC and don't mind using the extensions? That is, is there anything special you can do by putting a type constructor, rather than just a full type, in a type class that you can't do with multi-parameter type classes or type families?
Your proposals ignore some rather important details about the existing Functor definition because you didn't work through the details of writing out what would happen with the class's member function.
class MultiFunctor a e where
  mfmap :: (e -> ??) -> a -> ????

instance MultiFunctor (Maybe a) a where
  mfmap = ???????
An important property of fmap at the moment is that its first argument can change types. fmap show :: (Functor f, Show a) => f a -> f String. You can't just throw that away, or you lose most of the value of fmap. So really, MultiFunctor would need to look more like...
class MultiFunctor s t a b | s -> a, t -> b, s b -> t, t a -> s where
  mfmap :: (a -> b) -> s -> t

instance (a ~ c, b ~ d) => MultiFunctor (Maybe a) (Maybe b) c d where
  mfmap _ Nothing = Nothing
  mfmap f (Just a) = Just (f a)
Note just how incredibly complicated this has become to try to make inference at least close to possible. All the functional dependencies are in place to allow instance selection without annotating types all over the place. (I may have missed a couple possible functional dependencies in there!) The instance itself grew some crazy type equality constraints to allow instance selection to be more reliable. And the worst part is - this still has worse properties for reasoning than fmap does.
Supposing my previous instance didn't exist, I could write an instance like this:
instance MultiFunctor (Maybe Int) (Maybe Int) Int Int where
  mfmap _ Nothing = Nothing
  mfmap f (Just a) = Just (if f a == a then a else f a * 2)
This is broken, of course - but it's broken in a new way that wasn't even possible before. A really important part of the definition of Functor is that the types a and b in fmap don't appear anywhere in the instance definition. Just looking at the class is enough to tell the programmer that the behavior of fmap cannot depend on the types a and b. You get that guarantee for free. You don't need to trust that instances were written correctly.
Because fmap gives you that guarantee for free, you don't even need to check both Functor laws when defining an instance. It's sufficient to check the law fmap id x == x. The second law comes along for free when the first law is proven. But with that broken mfmap I just provided, mfmap id x == x is true, even though the second law is not.
As the implementer of mfmap, you have more work to do to prove your implementation is correct. As a user of it, you have to put more trust in the implementation's correctness, since the type system can't guarantee as much.
If you work out more complete examples for the other systems, you find that they have just as many issues if you want to support the full functionality of fmap. And this is why they aren't really used. They add a lot of complexity for only a small gain in utility.
Well, for one thing, the traditional functor class is just much simpler. That alone is a valid reason to prefer it, even though this is Haskell and not Python. And it also better represents the mathematical idea of what a functor is supposed to be: a mapping from objects to objects (f :: *->*), with the extra property (->Constraint) that each (forall (a::*) (b::*)) morphism (a->b) is lifted to a morphism on the corresponding mapped objects (-> f a->f b). None of that can be seen very clearly in the * -> * -> Constraint version of the class, or its TypeFamilies equivalent.
On a more practical account, yes, there are also things you can only do with the (*->*)->Constraint version.
In particular, what this constraint guarantees you right away is that all Haskell types are valid objects you can put into the functor, whereas for MultiFunctor you need to check every possible contained type, one by one. Sometimes that's just not possible (or is it?), like when you're mapping over infinitely many types:
data Tough f a = Doable (f a)
               | Tough (f (Tough f (a, a)))

instance (Applicative f) => Semigroup (Tough f a) where
  Doable x <> Doable y = Tough . pure . Doable $ (,) <$> x <*> y
  Tough xs <> Tough ys = Tough $ (<>) <$> xs <*> ys
  -- The following actually violates the semigroup associativity law. Hardly matters here I suppose...
  xs <> Doable y = xs <> Tough (pure . Doable $ fmap twice y)
  Doable x <> ys = Tough (pure . Doable $ fmap twice x) <> ys

twice x = (x, x)
Note that this uses the Applicative instance of f not just on the a type, but also on arbitrary tuples thereof. I can't see how you could express that with a MultiParamTypeClasses- or TypeFamilies-based applicative class. (It might be possible if you make Tough a suitable GADT, but without that... probably not.)
BTW, this example is perhaps not as useless as it may look – it basically expresses read-only vectors of length 2^n in a monadic state.
The expanded variant is indeed more flexible. It was used e.g. by Oleg Kiselyov to define restricted monads. Roughly, you can have
class MN2 m a where
  ret2 :: a -> m a

class (MN2 m a, MN2 m b) => MN3 m a b where
  bind2 :: m a -> (a -> m b) -> m b
allowing monad instances to be parametrized over a and b. This is useful because you can restrict those types to members of some other class:
import qualified Data.Set as Set

instance MN2 Set.Set a where
  -- does not require Ord
  ret2 = Set.singleton

instance Prelude.Ord b => MN3 Set.Set a b where
  -- Set.union requires Ord
  bind2 m f = Set.fold (Set.union . f) Set.empty m
Note that because of that Ord constraint, we are unable to define Monad Set.Set using unrestricted monads. Indeed, the Monad class requires the monad to be usable at all types.
Also see: parameterized (indexed) monad.

Why doesn't Haskell have a stronger alternative to Eq?

The reason why Set is not a functor is given here. It seems to boil down to the fact that a == b && f a /= f b is possible. So, why doesn't Haskell have as standard an alternative to Eq, something like
class Eq a => StrongEq a where
  (===) :: a -> a -> Bool
  (/==) :: a -> a -> Bool
  x /== y = not (x === y)
  x === y = not (x /== y)
for which instances are supposed to obey the laws
∀a,b,f. not (a === b) || (f a === f b)
∀a. a === a
∀a,b. (a === b) == (b === a)
and maybe some others? Then we could have:
instance StrongEq a => Functor (Set a) where
-- ...
Or am I missing something?
Edit: my problem is not “Why are there types without an Eq instance?”, like some of you seem to have answered. It's the opposite: “Why are there instances of Eq that aren't extensionally equal? Why are there too many Eq instances?”, combined with “If a == b does imply extensional equality, why is Set not an instance of Functor?”.
Also, my instance declaration is rubbish (thanks @n.m.). I should have said:
newtype StrongSet a = StrongSet (Set a)

instance Functor StrongSet where
  fmap :: (StrongEq a, StrongEq b) => (a -> b) -> StrongSet a -> StrongSet b
  fmap f (StrongSet s) = StrongSet (map f s)
instance StrongEq a => Functor (Set a) where
This makes sense neither in Haskell nor in the grand mathematical/categorical scheme of things, regardless of what StrongEq means.
In Haskell, Functor requires a type constructor of kind * -> *. The arrow reflects the fact that in category theory, a functor is a kind of mapping. [] and (the hypothetical) Set are such type constructors. [a] and Set a have kind * and cannot be functors.
In Haskell, it is hard to define Set such that it can be made into a Functor because equality cannot be sensibly defined for some types no matter what. You cannot compare two things of type Integer->Integer, for example.
Let's suppose there is a function
goedel :: Integer -> Integer -> Integer
goedel x y = -- compute the result of a function with
             -- Goedel number x, applied to y
Suppose you have a value s :: Set Integer. What should fmap goedel s look like? How do you eliminate duplicates?
In your typical set theory equality is magically defined for everything, including functions, so Set (or Powerset to be precise) is a functor, no problem with that.
Since I'm not a category theorist, I'll try to write a more concrete/practical explanation (i.e., one I can understand):
The key point is the one that @leftaroundabout made in a comment:
== is supposed to witness "equivalent by all observable means" (that doesn't necessarily require that a == b hold only for identical implementations; but anything you can "officially" do with a and b should again yield equivalent results. So unAlwaysEq should never be exposed in the first place). If you can't ensure this for some type, you shouldn't give it an Eq instance.
That is, there should be no need for your StrongEq because that's what Eq is supposed to be already.
Haskell values are often intended to represent some sort of mathematical or "real-life" value. Many times, this representation is one-to-one. For example, consider the type
data PlatonicSolid = Tetrahedron | Cube |
                     Octahedron | Dodecahedron | Icosahedron
This type contains exactly one representation of each Platonic solid. We can take advantage of this by adding deriving Eq to the declaration, and it will produce the correct instance.
In many cases, however, the same abstract value may be represented by more than one Haskell value. For example, the red-black trees Node B (Node R Leaf 1 Leaf) 2 Leaf and Node B Leaf 1 (Node R Leaf 2 Leaf) can both represent the set {1,2}. If we added deriving Eq to our declaration, we would get an instance of Eq that distinguishes things we want to be considered the same (outside of the implementation of the set operations).
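A sketch of that situation in code (the type itself is assumed purely for illustration; the constructor names follow the ones used above):
data Color = R | B deriving (Eq)
data RB a = Leaf | Node Color (RB a) a (RB a) deriving (Eq)

-- Two different representations of the same abstract set {1,2}:
treeA, treeB :: RB Int
treeA = Node B (Node R Leaf 1 Leaf) 2 Leaf
treeB = Node B Leaf 1 (Node R Leaf 2 Leaf)
-- With the derived instance, treeA == treeB is False,
-- even though both trees represent the same set.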
It's important to make sure that types are only made instances of Eq (and Ord) when appropriate! It's very tempting to make something an instance of Ord just so you can stick it in a data structure that requires ordering, but if the ordering is not truly a total ordering of the abstract values, all manner of breakage may ensue. Unless the documentation absolutely guarantees it, for example, a function called sort :: Ord a => [a] -> [a] may not only be an unstable sort, but may not even produce a list containing all the Haskell values that go into it. sort [Bad 1 "Bob", Bad 1 "James"] can reasonably produce [Bad 1 "Bob", Bad 1 "James"], [Bad 1 "James", Bad 1 "Bob"], [Bad 1 "James", Bad 1 "James"], or [Bad 1 "Bob", Bad 1 "Bob"]. All of these are perfectly legitimate. A function that uses unsafePerformIO in the back room to implement a Las Vegas-style randomized algorithm or to race threads against each other to get an answer from the fastest may even give different results different times, as long as they're == to each other.
tl;dr: Making something an instance of Eq is a way of making a very strong statement to the world; don't make that statement if you don't mean it.
Your second Functor instance also doesn't make any sense. The biggest reason why Set can't be a Functor in Haskell is that fmap can't have constraints. Inventing a different notion of equality such as StrongEq doesn't change the fact that you can't write those constraints on fmap in your Set instance.
fmap in general shouldn't have the constraints you need. It makes perfect sense to have functors of functions, for example (without it the whole notion of using Applicative to apply functions inside a functor breaks down), and functions can't be members of Eq or your StrongEq in general.
fmap can't have extra constraints on only some instances, because of code like this:
fmapBoth :: (Functor f, Functor g) => (a -> b, c -> d) -> (f a, g c) -> (f b, g d)
fmapBoth (h, j) (x, y) = (fmap h x, fmap j y)
This code claims to work regardless of the functors f and g, and regardless of the functions h and j. It has no way of checking whether one of the functors is a special one that has extra constraints on fmap, nor any way of checking whether one of the functions it's applying would violate those constraints.
Saying that Set is a Functor in Haskell is saying that there is a (lawful) operation fmap :: (a -> b) -> Set a -> Set b, with that exact type. That is precisely what Functor means. fmap :: (Eq a, Eq b) => (a -> b) -> Set a -> Set b is not an example of such an operation.
It is possible, I understand, to use the ConstraintKinds GHC extension to write a different Functor class that permits constraints on the values which vary by Functor (and what you actually need is an Ord constraint, not just Eq). This blog post talks about doing so to make a new Monad class which can have an instance for Set. I've never played around with code like this, so I don't know much more than that the technique exists. It wouldn't help you hand off Sets to existing code that needs Functors, but you should be able to use it instead of Functor in your own code if you wish.
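A rough sketch of what such a class can look like (this is only an illustration, not the blog post's actual API; the names CFunctor and Suitable are made up here):
{-# LANGUAGE ConstraintKinds, TypeFamilies #-}
import Data.Kind (Constraint)
import qualified Data.Set as Set

class CFunctor f where
  type Suitable f a :: Constraint
  cfmap :: (Suitable f a, Suitable f b) => (a -> b) -> f a -> f b

instance CFunctor Set.Set where
  type Suitable Set.Set a = Ord a
  cfmap = Set.map   -- Set.map :: Ord b => (a -> b) -> Set a -> Set b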
This notion of StrongEq is tough. In general, equality is a place where computer science becomes significantly more rigorous than typical mathematics in the kind of way which makes things challenging.
In particular, typical mathematics likes to talk about objects as though they exist in a set and can be uniquely identified. Computer programs usually deal with types which are not always computable (as a simple counterexample, tell me what the set corresponding to the type data U = U (U -> U) is). This means that it may be undecidable as to whether two values are identifiable.
This becomes an enormous topic in dependently typed languages since typechecking requires identifying like types and dependently typed languages may have arbitrary values in their types and thus need a way to project equality.
So, StrongEq could be defined over a restricted part of Haskell containing only the types which can be decidably compared for equality. We can consider this a category with the arrows as computable functions and then see Set as an endofunctor from types to the type of sets of values of that type. Unfortunately, these restrictions have taken us far from standard Haskell and make defining StrongEq or Functor (Set a) a little less than practical.
