I am working through the section on Functors in the Typeclassopedia.
A simple intuition is that a Functor represents a “container” of some sort, along with the ability to apply a function uniformly to every element in the container.
OK. So, functors appear pretty natural for inductive types like lists or trees.
Functors also appear pretty simple if the number of elements is fixed to a low number. For example, with Maybe you just have to be concerned about "Nothing" or "Just a" -- two things.
So, how would you make something like a graph, that could potentially have loops, an instance of Functor? I think a more generalized way to put it is, how do non-inductive types "fit into" Functors?
The more I think about it, the more I realize that inductive / non-inductive doesn't really matter. Inductive types are just easier to define fmap for...
If I wanted to make a graph an instance of Functor, I would have to implement a graph traversal algorithm inside fmap; for example it would probably have to use a helper function that would keep track of the visited nodes. At this point, I am now wondering why bother defining it as a Functor instead of just writing this as a function itself? E.g. map vs fmap for lists...?
I hope someone with experience, war stories, and scars can shed some light. Thanks!
Well let's assume you define a graph like this
data Graph a = Node a [Graph a]
Then fmap is just defined precisely as you would expect
instance Functor Graph where
  fmap f (Node a ns) = Node (f a) (map (fmap f) ns)
Now, if there's a loop then we'd have something like
foo = Node 1 [bar]
bar = Node 2 [foo]
Now fmap is sufficiently lazy that you can evaluate part of its result without forcing the rest of the computation, so it works just as well as any knot-tied graph representation would!
In general this is the trick: fmap is lazy, so you can treat its results just as you would treat any non-inductive values in Haskell (i.e. carefully).
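To make the laziness point concrete, here is a small usage sketch against the Graph type above (label and neighbours are just illustrative helpers I'm naming here, not part of the answer):

label :: Graph a -> a
label (Node a _) = a

neighbours :: Graph a -> [Graph a]
neighbours (Node _ ns) = ns

-- With foo = Node 1 [bar] and bar = Node 2 [foo], parts of the mapped graph
-- can be inspected without ever forcing the whole (cyclic) unfolding:
--   label (fmap (*10) foo)                                         -- 10
--   label (head (neighbours (fmap (*10) foo)))                     -- 20
--   label (head (neighbours (head (neighbours (fmap (*10) foo))))) -- 10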
Also, you should define fmap rather than some random other function, since
fmap is a good, well known API with rules
Your container now plays well with things expecting Functors
You can abstract away other bits of your program so they depend on Functor, not your Graph
In general when I see something is a functor I think "Ah wonderful, I know just how to use that" and when I see
superAwesomeTraversal :: (a -> b) -> Foo a -> Foo b
I get a little worried that it will do unexpected things.
I'm trying to understand what exactly is needed from the Applicative interface in order to perform any traverse. I'm stuck because its methods aren't used in the default implementations below, as if the constraint were too strict. Is Haskell's type system too weak to describe the actual requirements?
-- | Map each element of a structure to an action, evaluate these actions
-- from left to right, and collect the results. For a version that ignores
-- the results see 'Data.Foldable.traverse_'.
traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
traverse f = sequenceA . fmap f
-- | Evaluate each action in the structure from left to right,
-- and collect the results. For a version that ignores the results
-- see 'Data.Foldable.sequenceA_'.
sequenceA :: Applicative f => t (f a) -> f (t a)
sequenceA = traverse id
A possibly related side question, why is sequenceA_ defined in Foldable?
traverse and sequenceA both need to deal with what happens when the Traversable is empty. Then you won't have any elements in an Applicative context that you can use to glom other stuff onto so you'll need pure.
The definitions you've presented are a bit misleading since, as you pointed out, they're mutually dependent. When you go to actually implement one of them you'll run into the empty collection problem. And you'll run into the need for <*> as Functor provides no facility to aggregate different values of f a for some functor f.
Therefore the Applicative constraint is there because for most types, in order to implement either traverse or sequenceA you'll need the tools that Applicative provides.
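To see that concretely, here is a hand-written traversal for a simple binary tree (the Tree type is just an illustration):

data Tree a = Leaf | Branch (Tree a) a (Tree a)

traverseTree :: Applicative f => (a -> f b) -> Tree a -> f (Tree b)
traverseTree _ Leaf           = pure Leaf       -- the empty case needs pure
traverseTree f (Branch l x r) =                 -- combining several effectful results needs <*>
  Branch <$> traverseTree f l <*> f x <*> traverseTree f r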
That being said, there are certain types where you don't need pure or don't need <*>. If your collection can never be empty you don't need pure, e.g. NonEmpty. If your collection never has more than one element you don't need <*>, e.g. Maybe. Sometimes you don't need either and you can get away with just fmap, e.g. a tuple section such as (a,).
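Here is a rough sketch of those three cases written by hand, so you can see which pieces of Applicative each one actually uses (the function names are made up):

import Data.List.NonEmpty (NonEmpty(..))

-- Maybe: never more than one element, so no <*>, but pure is needed for Nothing.
traverseMaybe :: Applicative f => (a -> f b) -> Maybe a -> f (Maybe b)
traverseMaybe _ Nothing  = pure Nothing
traverseMaybe f (Just a) = Just <$> f a

-- A pair section (c,): exactly one element, so fmap alone is enough.
traversePair :: Functor f => (a -> f b) -> (c, a) -> f (c, b)
traversePair f (c, a) = (,) c <$> f a

-- NonEmpty: never empty, so pure is never needed; only fmap and <*> are used.
traverseNE :: Applicative f => (a -> f b) -> NonEmpty a -> f (NonEmpty b)
traverseNE f (a :| as) = (:|) <$> f a <*> traverse f as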
Haskell could have a more fine-grained typeclass hierarchy that breaks Applicative down into more fine-grained parts with separate classes for pure and <*> which would then allow you to make different versions of Traversable with weaker constraints. Edward Kmett's library semigroupoids goes in this direction, although it isn't perfect since it can't add actual superclasses to the base classes. It has Apply which is Applicative but without pure, and Traversable1 which is a variant of Traversable that uses Apply instead of Applicative and thus requires that its types can never be empty.
Note that other ecosystems have chosen to have a more fine-grained typeclass hierarchy (see Scala's cats or scalaz libraries). I personally find such a distinction occasionally useful but not overwhelmingly so.
As for your second question: if all you know how to do is tear something down, you can still perform effects along the way, but you can't necessarily recover the original structure. That's why sequenceA_ lives in Foldable. It is strictly less powerful than sequenceA.
It seems like there are a lot of functions that do the same thing, particularly relating to Monads, Functors, and Applicatives.
Examples (from most to least generic):
fmap == liftA == liftM
(<*>) == ap
liftA[2345] == liftM[2345]
pure == return
(*>) == (>>)
An example not directly based on the FAM class tree:
fmap == map
(I thought there were quite a few more with List, Foldable, Traversable, but it looks like most were made more generic some time ago, as I only see the old, less generic type signatures in old stack overflow / message board questions)
I personally find this annoying. If I need to do x, and some function such as liftM lets me do x, then I will have made my function less generic than it could have been, and I will only notice that by carefully reasoning about the differences between types (such as FAM, or perhaps List, Foldable, and Traversable combinations as well). That is not beginner friendly at all: simply using those types isn't all that hard, but reasoning about their properties and laws takes a lot more mental effort.
I am guessing a lot of these equivalencies come from the Applicative Monad Proposal. If that is the reason for them (and not some other reason I am missing for having less generic functions available for confusion), are they going to be deprecated / deleted ever? I can understand waiting a long time to delete them, due to breaking existing code, but surely deprecation is a good idea?
The short answers are "history" and "regularity".
Originally "map" was defined for lists. Then type-classes were introduced, with the Functor type class, so the generalised version of "map" for any functor had to be called something different, otherwise existing code would be broken. Hence "fmap".
Then monads came along. Instances of monads did not need to be functors, so "liftM" was created, along with "liftM2", "liftM3" etc. Of course if a type is an instance of both Monad and Functor then fmap = liftM.
Monads also have "ap", used in expressions like f `ap` arg1 `ap` arg2. This was very handy, but then Applicative Functors were added. (<*>) did the same job for applicative functors as 'ap', but because many applicative functors are not monads it had to be called something different. Likewise liftAx versus liftMx and "pure" versus "return".
They aren't equivalent, though. Equivalent things in Haskell can be interchanged with no difference at all in functionality. Consider for example pure and return.
EDIT: I wrote some examples down, but they were really bad since they involved Maybe a, a type that is both an applicative and a monad, so the functions could be used pretty interchangeably.
There are types that are applicatives but not monads though (see this question for examples), and by studying the type of the following expression, we can see that this could lead to some roadbumps:
pure 1 >>= pure :: (Monad m, Num b) => m b
I personally find this annoying, as it means that if I need to do x, and some function such as liftM allows me to do x, then I will have made my function less generic than it could have been
This logic is backwards.
Normally you know in advance the type of the thing you want to write, be it IO String or (Foldable f, Monoid t, Monad m) => f (m t) -> m t or whatever. Let's take the first case, getLineCapitalized :: IO String. You could write it as
getLineCapitalized = liftM (map toUpper) getLine
or
getLineCapitalized = fmap (fmap toUpper) getLine
Is the former "less generic" because it uses the specialized functions liftM and map? Of course not. This is intrinsically an IO action that produces a list. It cannot become "more generic" by changing it to the second version since those fmaps will have their types fixed to IO and [] anyways. So, there is no advantage to the second version.
By writing the first version, you provide contextual information to the reader for free. In liftM (map foo) bar, the reader knows that bar is going to be an action in some monad that returns a list. In fmap (fmap foo) bar, it could be any sort of doubly-nested structure whatsoever. If bar is something complicated rather than just getLine, then this kind of information is helpful for understanding more easily what is going on in bar.
In general, you should write a function in two steps.
Decide what the type of the function should be. Make it as general or as specific as you want. The more general the type of the function, the stronger guarantees you get on its behavior from parametricity.
Once you have decided on the type of your function, implement it using the most specific available functions. By doing so, you are providing the most information to the reader of your function. You never lose any generality or parametricity guarantees by doing so, since those only depend on the type, which you already determined in step 1.
Edit in response to comments: I was reminded of the biggest reason to use the most specific function available, which is catching bugs. The type length :: [a] -> Int is essentially the entire reason that I still use GHC 7.8. It's never happened that I wanted to take the length of an unknown Foldable structure. On the other hand, I definitely do not want to ever accidentally take the length of a pair, or take the length of foo bar baz which I think has type [a], but actually has type Maybe [a].
In the use cases for Foldable that are not already covered by the rest of the Haskell standard, lens is a vastly more powerful alternative. If I want the "length" of a Maybe t, lengthOf _Just :: Maybe t -> Int expresses my intent clearly, and the compiler can check that the program actually matches my intent; and I can go on to write lengthOf _Nothing, lengthOf _Left, etc. Explicit is better than implicit.
There are some "redundant" functions like liftM, ap, and liftA that have a very real use and taking them out would cause loss of functionality --- you can use liftM, ap, and liftA to implement your Functor or Applicative instances if all you've written is a Monad instance. It lets you be lazy and do, say:
instance Monad Foo where
  return = ...
  (>>=) = ...
Now you've done all of the rewarding work of defining a Monad instance, but this won't compile. Why? Because you also need a Functor and Applicative instance.
So, because you're quickly prototyping, or lazy, or can't think of a better way, you can just get a free Functor and Applicative instance:
instance Functor Foo where
  fmap = liftM

instance Applicative Foo where
  pure = return
  (<*>) = ap
In fact, you can just copy-and-paste that chunk of code everywhere you need to quickly define a Functor or Applicative instance when you already have a Monad instance defined.
The same goes for fmapDefault from Data.Traversable. If you've implemented Traversable, you can also implement Foldable and Functor:
instance Functor Bar where
  fmap = fmapDefault
no extra work required!
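A tiny self-contained sketch of that trick, with Bar as an illustrative wrapper type:

import Data.Traversable (fmapDefault, foldMapDefault)

newtype Bar a = Bar [a]

instance Traversable Bar where
  traverse f (Bar xs) = Bar <$> traverse f xs

-- Both of these come for free from the Traversable instance above.
instance Functor Bar where
  fmap = fmapDefault

instance Foldable Bar where
  foldMap = foldMapDefault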
There are some redundant functions, however, that really have no use other than being historical accidents from a time when Functor was not a superclass of Monad. These have essentially no reason to exist any more, and include things like liftM2, liftM3, etc., and (>>) and friends.
Comonoids are mentioned, for example, in Haskell's distributive library docs:
Due to the lack of non-trivial comonoids in Haskell, we can restrict ourselves to requiring a Functor rather than some Coapplicative class.
After a little searching I found a StackOverflow answer that explains this a bit more with the laws that comonoids would have to satisfy. So I think I understand why there's only one possible instance for a hypothetical Comonoid typeclass in Haskell.
Thus, to find a nontrivial comonoid, I suppose we'd have to look in some other category. Surely, if category theorists have a name for comonoids, then there are some interesting ones. The other answers on that page seem to hint at an example involving Supply, but I couldn't figure one out that still satisfies the laws.
I also turned to Wikipedia: there's a page for monoids that doesn't reference category theory, which seems to me as an adequate description of Haskell's Monoid typeclass, but "comonoid" redirects to a category-theoretic description of monoids and comonoids together that I can't understand, and there still don't seem to be any interesting examples.
So my questions are:
Can comonoids be explained in non-category-theoretic terms like monoids?
What is a simple example of an interesting comonoid, even if it's not a Haskell type? (Could one be found in a Kleisli category over a familiar Haskell monad?)
edit: I am not sure if this is actually category-theoretically correct, but what I was imagining in the parenthetical of question 2 was nontrivial definitions of delete :: a -> m () and split :: a -> m (a, a) for some specific Haskell type a and Haskell monad m that satisfy Kleisli-arrow versions of the comonoid laws in the linked answer. Other examples of comonoids are still welcome.
As Phillip JF mentioned, comonoids are interesting to talk about in substructural logics. Let's talk about linear lambda calculus. This is much like your normal typed lambda calculus except that every variable must be used exactly once.
To get a feel, let's count linear functions of given types, i.e.
a -> a
has exactly one inhabitant, id. While
(a,a) -> (a,a)
has two, id and swap. Note that in regular lambda calculus (a,a) -> (a,a) has four inhabitants
(a, b) ↦ (a, a)
(a, b) ↦ (b, b)
(a, b) ↦ (a, b)
(a, b) ↦ (b, a)
but the first two require that we use one of the arguments twice while discarding the other. This is exactly the essence of linear lambda calculus—disallowing those kinds of functions.
As a quick aside, what's the point of linear LC? Well, we can use it to model linear effects or resource usage. If, for instance, we have a file type and a few transformers it might look like
data File
open :: String -> File
close :: File -> () -- consumes a file, but we're ignoring purity right now
t1 :: File -> File
t2 :: File -> File
and then the following are valid pipelines:
close . t1 . t2 . open
close . t2 . t1 . open
close . t1 . open
close . t2 . open
but this "branching" computation isn't
let f1 = open "foo"
    f2 = t1 f1
    f3 = t2 f1
in close f3
since we used f1 twice.
Now, you might be wondering at this point which things must follow the linear rules. For instance, I decided that some pipelines don't have to include both t1 and t2 (compare the enumeration exercise from before). Further, I introduced the open and close functions, which happily create and destroy the File type despite that being a violation of linearity.
Indeed, we might posit the existence of functions which violate linearity—but not all clients may. It's much like the IO monad—all of the secrets live inside the implementation of IO so that users work in a "pure" world.
And this is where Comonoid comes in.
class Comonoid m where
  destroy :: m -> ()
  split :: m -> (m, m)
A type that instantiates Comonoid in a linear lambda calculus is a type which has carry-along destruction and duplication rules. In other words, it's a type which isn't very much bound by linear lambda calculus at all.
Since Haskell doesn't implement the linear lambda calculus rules at all, we can always instantiate Comonoid
instance Comonoid a where
  destroy a = ()
  split a = (a, a)
Or, perhaps the other way to think of it is that Haskell is a linear LC system that just happens to instantiate Comonoid for every type and applies destroy and split for you automatically.
A monoid in the usual sense is the same as a categorical monoid in the category of sets. One would expect that a comonoid in the usual sense is the same as a categorical comonoid in the category of sets. But every set in the category of sets is a comonoid in a trivial way, so apparently there is no non-categorical description of comonoids which would be parallel to that of monoids.
Just like a monad is a monoid in the category of endofunctors (what's the problem?), a comonad is a comonoid in the category of endofunctors (what's the coproblem?) So yes, any comonad in Haskell would be an example of a comonoid.
Well, one way we can think of a monoid is as hooked to a particular product construction that we're using; so in Set, dualizing would take this signature:
mul : A * A -> A
one : A
to this one:
dup : A -> A * A
one : A
but the idea of duality is that the logical statements you can make all have duals which can be applied to the dual objects. There is another way of stating what a monoid is that's agnostic to the choice of product construction, and when we take the costructure that way we can take the coproduct in the output, like:
div : A -> A + A
one : A
where + is a tagged sum. Here we essentially have that every term of this type is always ready to produce a new bit of information, implicitly derived from the tag used to denote the left or the right instance of A. I personally think this is really damn cool. I think the cool version of what people were talking about above comes when you construct this not for monoids, but for monoid actions.
A monoid M is said to act on a set A if there's a function
act : M * A -> A
where we have the following rules
act identity a = a
act f (act g a) = act (f * g) a
If we want a co-action, what exactly do we want?
act : A -> M * A
This generates a stream of our comonoid's type! I'm having a lot of trouble coming up with the laws for these systems, but I think they must be around somewhere, so I'm going to keep looking. If somebody can tell me what they are, or that I'm wrong about this in some way, I'd be interested in that too.
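Here is a rough Haskell sketch of the "generates a stream" remark (the names are made up, and this ignores the question of laws entirely):

-- Repeatedly applying a co-action  coact : A -> M * A  peels off one M at a time.
coactStream :: (a -> (m, a)) -> a -> [m]
coactStream coact = go
  where
    go a = let (m, a') = coact a in m : go a'

-- e.g. coactStream (\n -> (n, n + 1)) 0 produces the infinite stream [0,1,2,..]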
As a physicist, the most common example I deal with is coalgebras, which are comonoid objects in the category of vector spaces, with the monoidal structure usually given by the tensor product.
In that case, there is a bijection between monoid and comonoid objects, since you can just take the adjoint or transpose of the product and unit maps to get a coproduct and a counit that satisfy the comonoid axioms.
In some branches of physics, it is very common to see objects that have both an algebra and a coalgebra structure with some compatibility axioms. The two most common cases are Hopf algebras and Frobenius algebras. They are very convenient for constructing states or solutions that are entangled or correlated.
In programming, the simplest nontrivial example I can think of would be reference counted pointers such as shared_ptr in C++ and Rc in Rust, along with their weak equivalents. You can copy them, which is a nontrivial operation that bumps up the refcount (so the two copies are distinct from the initial state). You can drop (call the destructor) on one, which is nontrivial because it bumps down the refcount of any other refcounted pointer that points to the same piece of data.
Furthermore, weak pointers are a great example of a comonoid action. You can use the co-action to generate a weak pointer from a shared pointer. This can be easily checked by noting that creating one from a shared pointer and immediately dropping it is a unit operation, and creating one & cloning it is equivalent to creating two from the shared pointer.
This is a general thing you see with nontrivial coproducts and their co-actions: when they don't reduce to a copying operation, they intuitively imply some form of action at a distance between the two halves, while also adding an operation that erases one half to leave the other independent.
I'm new to Haskell, and I'm reading about functors and applicative functors. Ok, I understand functors and how I can use them, but I don't understand why applicative functors are useful and how I can use them in Haskell. Can you explain to me with a simple example why I need applicative functors?
Applicative functors are a construction that provides the midpoint between functors and monads, and are therefore more widespread than monads, while more useful than functors. Normally you can just map a function over a functor. Applicative functors allow you to take a "normal" function (taking non-functorial arguments) and use it to operate on several values that are in functor contexts. As a corollary, this gives you effectful programming without monads.
A nice, self-contained explanation full of examples can be found here. You can also read a practical parsing example developed by Bryan O'Sullivan, which requires no prior knowledge.
Applicative functors are useful when you need sequencing of actions, but don't need to name any intermediate results. They are thus weaker than monads, but stronger than functors (they do not have an explicit bind operator, but they do allow running arbitrary functions inside the functor).
When are they useful? A common example is parsing, where you need to run a number of actions that read parts of a data structure in order, then glue all the results together. This is like a general form of function application:
f a b c d
where you can think of a, b and so on as the arbitrary actions to run, and f as the pure function to apply to their results. In applicative style this is written:
f <$> a <*> b <*> c <*> d
I like to think of them as overloaded 'whitespace'. Or, that regular Haskell functions are in the identity applicative functor.
See "Applicative Programming with Effects"
Conor McBride and Ross Paterson's Functional Pearl on the style has several good examples. It's also responsible for popularizing the style in the first place. They use the term "idiom" for "applicative functor", but other than that it's pretty understandable.
It is hard to come up with examples where you need applicative functors. I can understand why an intermediate Haskell programmer would ask themselves that question, since most introductory texts present instances derived from Monads, using Applicative Functors only as a convenient interface.
The key insight, as mentioned both here and in most introductions to the subject, is that Applicative Functors are between Functors and Monads (even between Functors and Arrows). All Monads are Applicative Functors but not all Functors are Applicative.
So necessarily, sometimes we can use applicative combinators for something that we can't use monadic combinators for. One such thing is ZipList (see also this SO question for some details), which is just a wrapper around lists in order to have a different Applicative instance than the one derived from the Monad instance of list. The Applicative documentation uses the following line to give an intuitive notion of what ZipList is for:
f <$> ZipList xs1 <*> ... <*> ZipList xsn = ZipList (zipWithn f xs1 ... xsn)
As pointed out here, it is possible to make quirky Monad instances that almost work for ZipList.
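To make the intuition concrete, here is the difference between the default list instance and ZipList on a tiny example:

import Control.Applicative (ZipList(..))

cartesian :: [Int]
cartesian = (+) <$> [1, 2] <*> [10, 20]
-- [11,21,12,22]  (the Monad-derived list instance: all combinations)

zipped :: [Int]
zipped = getZipList ((+) <$> ZipList [1, 2] <*> ZipList [10, 20])
-- [11,22]        (ZipList: pointwise, like zipWith (+))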
There are other Applicative Functors that are not Monads (see this SO question) and they are easy to come up with. Having an alternative interface for Monads is nice and all, but sometimes making a Monad is inefficient, complicated, or even impossible, and that is when you need Applicative Functors.
Disclaimer: making Applicative Functors might also be inefficient, complicated, or impossible; when in doubt, consult your local category theorist for correct usage of Applicative Functors.
In my experience, Applicative functors are great for the following reasons:
Certain kinds of data structures admit powerful types of compositions, but cannot really be made monads. In fact, most of the abstractions in functional reactive programming fall into this category. While we might technically be able to make e.g. Behavior (aka Signal) a monad, it typically cannot be done efficiently. Applicative functors allow us to still have powerful compositions without sacrificing efficiency (admittedly, it is a bit trickier to use an applicative than a monad sometimes, just because you don't have quite as much structure to work with).
The lack of data-dependence in an applicative functor allows you to e.g. traverse an action looking for all the effects it might produce without having the data available. So you could imagine a "web form" applicative, used like so:
userData = User <$> field "Name" <*> field "Address"
and you could write an engine which would traverse the form to find all the fields used and display them, then, when you get the data back, run it again to get the constructed User. This cannot be done with a plain functor (because it combines two forms into one), nor with a monad, because with a monad you could express:
userData = do
  name <- field "Name"
  address <- field $ name ++ "'s address"
  return (User name address)
which cannot be rendered, because the name of the second field cannot be known without already having the response from the first. I'm pretty sure there's a library that implements this forms idea -- I've rolled my own a few times for this and that project.
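Here is a minimal sketch of such a form applicative; every name in it is made up for illustration, and real form libraries are more elaborate:

{-# LANGUAGE DeriveFunctor #-}
import Data.Maybe (fromMaybe)

-- A form knows which fields it needs (inspectable without any input)
-- and how to build a value once the submitted key/value pairs arrive.
data Form a = Form
  { formFields :: [String]
  , formParse  :: [(String, String)] -> a
  } deriving Functor

instance Applicative Form where
  pure x = Form [] (const x)
  Form fs f <*> Form gs g = Form (fs ++ gs) (\input -> f input (g input))

field :: String -> Form String
field name = Form [name] (fromMaybe "" . lookup name)

data User = User String String deriving Show

userData :: Form User
userData = User <$> field "Name" <*> field "Address"

-- formFields userData == ["Name","Address"]            -- render the blank form
-- formParse userData [("Name","Ada"),("Address","1 Main St")]
--   == User "Ada" "1 Main St"                           -- rebuild after submission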
The other nice thing about applicative functors is that they compose. More precisely, the composition functor:
newtype Compose f g x = Compose (f (g x))
is applicative whenever f and g are. The same cannot be said for monads, which is what creates the whole monad transformer story, which is complicated in some unpleasant ways. Applicatives are super clean this way, and it means you can build up the structure of a type you need by focusing on small composable components.
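For reference, here is a sketch of what those instances look like for the Compose type above; this is essentially what Data.Functor.Compose in base provides:

instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap h (Compose x) = Compose (fmap (fmap h) x)

instance (Applicative f, Applicative g) => Applicative (Compose f g) where
  pure = Compose . pure . pure
  Compose h <*> Compose x = Compose ((<*>) <$> h <*> x)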
Recently the ApplicativeDo extension has appeared in GHC, which allows you to use do notation with applicatives, easing some of the notational complexity, as long as you don't do any monady things.
One good example: applicative parsing.
See Real World Haskell, chapter 16: http://book.realworldhaskell.org/read/using-parsec.html#id652517
This is the parser code with do-notation:
-- file: ch16/FormApp.hs
p_hex :: CharParser () Char
p_hex = do
  char '%'
  a <- hexDigit
  b <- hexDigit
  let ((d, _):_) = readHex [a,b]
  return . toEnum $ d
Using applicative style makes it much shorter:
-- file: ch16/FormApp.hs
a_hex :: CharParser () Char
a_hex = hexify <$> (char '%' *> hexDigit) <*> hexDigit
  where hexify a b = toEnum . fst . head . readHex $ [a,b]
'Lifting' can hide the underlying details of repetitive code, so you can use fewer words to tell the exact, precise story.
I would also suggest taking a look at this.
At the end of the article there's an example:
import Control.Applicative
hasCommentA blogComments =
  BlogComment <$> lookup "title" blogComments
              <*> lookup "user" blogComments
              <*> lookup "comment" blogComments
This illustrates several features of the applicative programming style.
I'm making a Fibonacci heap implementation in Haskell, and I'm not sure exactly what the clean way to do it is.
For example, I want to order the nodes. So I can do something like:
instance Ord (FibNode e) where
  f1 `compare` f2 = (key f1) `compare` (key f2)
This would be more easily done if I made FibNode a monad. But other times I want to fold across the node's siblings, or fold across their children etc. So defining a functor where f x = f $ key x won't work all the time.
Apart from defining my own fmapKey, fmapSibs, fmapKids... is there a way to do this?
You can't make a type constructor an instance of Functor in more than one way.
But you can make various newtype wrappers around your type that each has its own Functor instance. But that's not really any more convenient than defining your own fmap functions.
The (Fibonacci) heap is an inherently impure, destructively-updated data structure with specific asymptotic runtime guarantees for various operations on it. I would be surprised if you can maintain these guarantees with a straightforward translation to a pure version. My suggestion would be to make it an impure data structure using something like ST or IO arrays. This would make for a much more direct implementation of the classic algorithms.
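To illustrate the impure direction, here is a small sketch of mutable node links using STRefs; the names are made up and this is nowhere near a full Fibonacci heap:

import Control.Monad.ST
import Data.STRef

-- A node whose key and child list can be updated in place.
data MutNode s e = MutNode
  { nodeKey  :: STRef s e
  , nodeKids :: STRef s [MutNode s e]
  }

newNode :: e -> ST s (MutNode s e)
newNode k = MutNode <$> newSTRef k <*> newSTRef []

-- Destructive updates, as the classic algorithms assume:
addChild :: MutNode s e -> MutNode s e -> ST s ()
addChild parent child = modifySTRef' (nodeKids parent) (child :)

decreaseKey :: MutNode s e -> e -> ST s ()
decreaseKey node k = writeSTRef (nodeKey node) k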