I'm currently working on understanding the lens library in detail by following the explanation in https://en.wikibooks.org/wiki/Haskell/Lenses_and_functional_references#The_scenic_route_to_lenses, which starts with a Traversal and then introduces the Lens. The wiki makes the distinction that a Traversal can have multiple targets, while a Lens has a single target.
This argument is also made in the documentation for Traversal, "[Traversals] have also been known as multilenses".
What would be example code to illustrate this distinction? E.g. how can I use a Traversal to get or set multiple values in a way I cannot do with a Lens?
Suppose you have a type
data Pair a = Pair a a
This offers two natural lenses and two additional natural traversals.
-- Work with the first or second component
_1, _2 :: Lens' (Pair a) a
-- Work with both components, either left to
-- right or right to left
forwards, backwards :: Traversal' (Pair a) a
There are fewer things you can do with a traversal than a lens, but you (often) get more traversals than lenses.
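To make the distinction concrete, here is a minimal sketch of how those signatures might be implemented and exercised with the lens library's over and toListOf; the implementations are one plausible choice, not the only one:

{-# LANGUAGE RankNTypes #-}
import Control.Lens (Lens', Traversal', over, toListOf)

data Pair a = Pair a a deriving Show

-- A lens focuses exactly one component.
_1, _2 :: Lens' (Pair a) a
_1 f (Pair x y) = (\x' -> Pair x' y) <$> f x
_2 f (Pair x y) = (\y' -> Pair x y') <$> f y

-- A traversal may visit both components, in either order.
forwards, backwards :: Traversal' (Pair a) a
forwards  f (Pair x y) = Pair <$> f x <*> f y
backwards f (Pair x y) = flip Pair <$> f y <*> f x

-- over _1       (+1) (Pair 1 2)   evaluates to   Pair 2 2   -- one target
-- over forwards (+1) (Pair 1 2)   evaluates to   Pair 2 3   -- both targets
-- toListOf forwards  (Pair 1 2)   evaluates to   [1,2]      -- several values at once

Conversely, toListOf on a Lens' can never return more than one element, which is exactly the wiki's point about single versus multiple targets.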
So, there's a lot of buzz about categories all around the Haskell ecosystem. But I feel one piece is missing from the common sense I have so far absorbed by osmosis. (I did read the first few pages of Mac Lane's famous introduction as well, but I don't believe I have enough mathematical maturity to carry the wisdom from this text to actual programming I have at hand.) I will now follow with a real world example involving a binary function that I have trouble depicting in categorical terms.
So, I have this function chain that takes me from S to A, where A is a type synonym for a function, akin to a -> b. Now, I want to depict a process that does S -> a -> b, but I end up with an arrow pointing to another arrow rather than an object. How do I deal with such a predicament?
I did overhear someone talking about a thing called n-category but I don't know if I should even try to understand what it is and how it's useful.
Though I believe my abstraction is accurate, the actual functions are parsePath >>> either error id >>> toAxis :: String -> Text.XML.Cursor.Axis from selectors and Axis = Text.XML.Cursor.Cursor -> [Text.XML.Cursor.Cursor] from xml-conduit.
There are two approaches to modelling binary functions as morphisms in category theory (n-ary functions are dealt with similarly -- no new machinery is needed). One is to consider the uncurried version:
(A * B) -> C
where we take the product of the types A and B as a starting object. For that we need the category to contain such products. (In Haskell, products are written (A, B). Well, technically in Haskell this is not exactly the product as in categories, but let's ignore that.)
Another is to consider the result type (B -> C) as an object in the category. Usually, this is called an exponential object, written as C^B. Assuming our category has such objects, we can write
A -> C^B
These two representations of binary functions are isomorphic: using curry and uncurry we can transform each one into the other.
Indeed, when there is such a (natural) isomorphism, we get a so called cartesian closed category, which is the simplest form of category which can describe a simply typed lambda calculus -- the core of every typed functional language.
This isomorphism is often cited as an adjunction between two functors
(- * B) -| (- ^ B)
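In Haskell, the two encodings and the isomorphism between them are just curry and uncurry. A minimal sketch, with the question's concrete types appearing only in comments (selectAxis is a hypothetical name for the parsePath >>> either error id >>> toAxis chain):

-- curry and uncurry witness the isomorphism between the two encodings
toExponential :: ((a, b) -> c) -> a -> b -> c
toExponential = curry

toProduct :: (a -> b -> c) -> (a, b) -> c
toProduct = uncurry

-- For the question's concrete case this reads (types only):
--   selectAxis         :: String -> Axis          -- i.e. String -> (Cursor -> [Cursor])
--   uncurry selectAxis :: (String, Cursor) -> [Cursor]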
I can use tuple projections to depict this situation, as follows:
(diagram omitted: it showed fst and snd as reversed projection arrows out of the tuple, with Cursor receiving a single incoming arrow)
This diagram features backwards fst & snd arrows in place of a binary function that constructs the tuple from its constituents, and that I can in no way depict directly. The caveat is that, while in this diagram Cursor has only one incoming arrow, I should remember that in actual code some real arrows X -> Axis & Y -> Cursor should go to both of the projections of the tuple, not just the symbolic projecting functions. The flow will then be uniformly left to right.
Pragmatically speaking, I traded an arrow with two sources (that constructs a tuple and isn't a morphism) for two reversed arrows (the tuple's projections that are legal morphisms in all regards).
I'm trying to understand what exactly is needed from the Applicative interface in order to perform any traverse. I'm stuck because the Applicative methods aren't used in the default implementations below, as if the constraint were too strict. Is Haskell's type system too weak to describe the actual requirements?
-- | Map each element of a structure to an action, evaluate these actions
-- from left to right, and collect the results. For a version that ignores
-- the results see 'Data.Foldable.traverse_'.
traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
traverse f = sequenceA . fmap f
-- | Evaluate each action in the structure from left to right, and
-- collect the results. For a version that ignores the results
-- see 'Data.Foldable.sequenceA_'.
sequenceA :: Applicative f => t (f a) -> f (t a)
sequenceA = traverse id
A possibly related side question, why is sequenceA_ defined in Foldable?
traverse and sequenceA both need to deal with what happens when the Traversable is empty. Then you won't have any elements in an Applicative context that you can use to glom other stuff onto so you'll need pure.
The definitions you've presented are a bit misleading since, as you pointed out, they're mutually dependent. When you go to actually implement one of them you'll run into the empty collection problem. And you'll run into the need for <*> as Functor provides no facility to aggregate different values of f a for some functor f.
Therefore the Applicative constraint is there because for most types, in order to implement either traverse or sequenceA you'll need the tools that Applicative provides.
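For instance, here's a sketch with an illustrative binary tree type (not from the question) where writing traverse directly makes both requirements visible:

{-# LANGUAGE DeriveFunctor, DeriveFoldable #-}

data Tree a = Leaf | Branch (Tree a) a (Tree a)
  deriving (Functor, Foldable)

instance Traversable Tree where
  -- the empty case needs pure
  traverse _ Leaf           = pure Leaf
  -- combining the effects of the subtrees and the label needs <*>
  traverse g (Branch l x r) = Branch <$> traverse g l <*> g x <*> traverse g r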
That being said, there are certain types where you don't need pure or don't need <*>. If your collection can never be empty you don't need pure, e.g. NonEmpty. If your collection never has more than one element you don't need <*>, e.g. Maybe. Sometimes you don't need either and you can get away with just fmap, e.g. a tuple section such as (a,).
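A rough sketch of those three situations (assuming base's Data.List.NonEmpty):

import Data.List.NonEmpty (NonEmpty(..), (<|))

-- Maybe: at most one element, so <*> is never needed, but pure is
-- (for the Nothing case).
traverseMaybe :: Applicative f => (a -> f b) -> Maybe a -> f (Maybe b)
traverseMaybe _ Nothing  = pure Nothing
traverseMaybe g (Just x) = Just <$> g x

-- NonEmpty: never empty, so pure is never needed, but <*> is
-- (to combine the effects of neighbouring elements).
traverseNE :: Applicative f => (a -> f b) -> NonEmpty a -> f (NonEmpty b)
traverseNE g (x :| [])     = (:| []) <$> g x
traverseNE g (x :| (y:ys)) = (<|) <$> g x <*> traverseNE g (y :| ys)

-- A pair with a fixed first component: exactly one element, so fmap suffices.
traversePair :: Functor f => (a -> f b) -> (c, a) -> f (c, b)
traversePair g (c, x) = (,) c <$> g x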
Haskell could have a more fine-grained typeclass hierarchy that breaks Applicative down into more fine-grained parts with separate classes for pure and <*> which would then allow you to make different versions of Traversable with weaker constraints. Edward Kmett's library semigroupoids goes in this direction, although it isn't perfect since it can't add actual superclasses to the base classes. It has Apply which is Applicative but without pure, and Traversable1 which is a variant of Traversable that uses Apply instead of Applicative and thus requires that its types can never be empty.
Note that other ecosystems have chosen to have a more fine-grained typeclass hierarchy (see Scala's cats or scalaz libraries). I personally find such a distinction occasionally useful but not overwhelmingly so.
As for your second question: if all you know how to do is tear something down, you can still perform effects along the way, but you can't necessarily recover the original structure. Hence sequenceA_ is in Foldable. It is strictly less powerful than sequenceA.
I am working through the section on Functors in the Typeclassopedia.
A simple intuition is that a Functor represents a “container” of some sort, along with the ability to apply a function uniformly to every element in the container.
OK. So, functors appear pretty natural for inductive types like lists or trees.
Functors also appear pretty simple if the number of elements is fixed to a low number. For example, with Maybe you just have to be concerned about "Nothing" or "Just a" -- two things.
So, how would you make something like a graph, that could potentially have loops, an instance of Functor? I think a more generalized way to put it is, how do non-inductive types "fit into" Functors?
The more I think about it, the more I realize that inductive / non-inductive doesn't really matter. Inductive types are just easier to define fmap for...
If I wanted to make a graph an instance of Functor, I would have to implement a graph traversal algorithm inside fmap; for example it would probably have to use a helper function that would keep track of the visited nodes. At this point, I am now wondering why bother defining it as a Functor instead of just writing this as a function itself? E.g. map vs fmap for lists...?
I hope someone with experience, war stories, and scars can shed some light. Thanks!
Well let's assume you define a graph like this
data Graph a = Node a [Graph a]
Then fmap is just defined precisely as you would expect
instance Functor Graph where
  fmap f (Node a ns) = Node (f a) (map (fmap f) ns)
Now, if there's a loop, then we'd have something like
foo = Node 1 [bar]
bar = Node 2 [foo]
Now fmap is sufficiently lazy that you can evaluate part of its result without forcing the rest of the computation, so it works just as well as any knot-tied graph representation would!
In general this is the trick: fmap is lazy, so you can treat its results just as you would treat any non-inductive values in Haskell (that is, carefully).
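For example (with hypothetical helper names, not from the answer), you can force just a finite part of the mapped result of the looping graph above:

-- peek at the label and children without forcing the whole unfolding
value :: Graph a -> a
value (Node a _) = a

children :: Graph a -> [Graph a]
children (Node _ ns) = ns

-- ghci> value (head (children (fmap (+1) foo)))
-- 3
-- Only bar's label is forced; the rest of the (infinite) unfolding stays unevaluated.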
Also, you should define fmap rather than some random other mapping function, since
fmap is a good, well known API with rules
Your container now plays well with things expecting Functors
You can abstract away other bits of your program so they depend on Functor, not your Graph
In general when I see something is a functor I think "Ah wonderful, I know just how to use that" and when I see
superAwesomeTraversal :: (a -> b) -> Foo a -> Foo b
I get a little worried that this will do unexpected things.
Comonoids are mentioned, for example, in Haskell's distributive library docs:
Due to the lack of non-trivial comonoids in Haskell, we can restrict ourselves to requiring a Functor rather than some Coapplicative class.
After a little searching I found a StackOverflow answer that explains this a bit more with the laws that comonoids would have to satisfy. So I think I understand why there's only one possible instance for a hypothetical Comonoid typeclass in Haskell.
Thus, to find a nontrivial comonoid, I suppose we'd have to look in some other category. Surely, if category theorists have a name for comonoids, then there are some interesting ones. The other answers on that page seem to hint at an example involving Supply, but I couldn't figure one out that still satisfies the laws.
I also turned to Wikipedia: there's a page for monoids that doesn't reference category theory, which seems to me as an adequate description of Haskell's Monoid typeclass, but "comonoid" redirects to a category-theoretic description of monoids and comonoids together that I can't understand, and there still don't seem to be any interesting examples.
So my questions are:
Can comonoids be explained in non-category-theoretic terms like monoids?
What is a simple example of an interesting comonoid, even if it's not a Haskell type? (Could one be found in a Kleisli category over a familiar Haskell monad?)
edit: I am not sure if this is actually category-theoretically correct, but what I was imagining in the parenthetical of question 2 was nontrivial definitions of delete :: a -> m () and split :: a -> m (a, a) for some specific Haskell type a and Haskell monad m that satisfy Kleisli-arrow versions of the comonoid laws in the linked answer. Other examples of comonoids are still welcome.
As Phillip JF mentioned, comonoids are interesting to talk about in substructural logics. Let's talk about linear lambda calculus. This is much like your normal typed lambda calculus except that every variable must be used exactly once.
To get a feel, let's count linear functions of given types, i.e.
a -> a
has exactly one inhabitant, id. While
(a,a) -> (a,a)
has two, id and flip. Note that in regular lambda calculus (a,a) -> (a,a) has four inhabitants
(a, b) ↦ (a, a)
(a, b) ↦ (b, b)
(a, b) ↦ (a, b)
(a, b) ↦ (b, a)
but the first two require that we use one of the arguments twice while discarding the other. This is exactly the essence of linear lambda calculus—disallowing those kinds of functions.
As a quick aside, what's the point of linear LC? Well, we can use it to model linear effects or resource usage. If, for instance, we have a file type and a few transformers it might look like
data File
open :: String -> File
close :: File -> () -- consumes a file, but we're ignoring purity right now
t1 :: File -> File
t2 :: File -> File
and then the following are valid pipelines:
close . t1 . t2 . open
close . t2 . t1 . open
close . t1 . open
close . t2 . open
but this "branching" computation isn't
let f1 = open "foo"
f2 = t1 f1
f3 = t2 f1
in close f3
since we used f1 twice.
Now, you might be wondering at this point about which things must follow the linear rules. For instance, I decided that some pipelines don't have to include both t1 and t2 (compare the enumeration exercise from before). Further, I introduced the open and close functions, which happily create and destroy the File type despite that being a violation of linearity.
Indeed, we might posit the existence of functions which violate linearity—but not all clients may. It's much like the IO monad—all of the secrets live inside the implementation of IO so that users work in a "pure" world.
And this is where Comonoid comes in.
class Comonoid m where
  destroy :: m -> ()
  split   :: m -> (m, m)
A type that instantiates Comonoid in a linear lambda calculus is a type which has carry-along destruction and duplication rules. In other words, it's a type which isn't very much bound by linear lambda calculus at all.
Since Haskell doesn't implement the linear lambda calculus rules at all, we can always instantiate Comonoid
instance Comonoid a where  -- this catch-all instance needs FlexibleInstances
  destroy a = ()
  split a = (a, a)
Or, perhaps the other way to think of it is that Haskell is a linear LC system that just happens to instantiate Comonoid for every type and applies destroy and split for you automatically.
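As a small illustration (my own sketch, not part of the answer), code that needs to use a value twice or drop it can be written against the class instead of copying or discarding silently:

-- use a value in two places by explicitly splitting it first
twice :: Comonoid m => (m -> r) -> (m -> r') -> m -> (r, r')
twice f g m = let (m1, m2) = split m in (f m1, g m2)

-- and discarding a value becomes an explicit destroy
ignore :: Comonoid m => m -> ()
ignore = destroy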
A monoid in the usual sense is the same as a categorical monoid in the category of sets. One would expect that a comonoid in the usual sense is the same as a categorical comonoid in the category of sets. But every set in the category of sets is a comonoid in a trivial way, so apparently there is no non-categorical description of comonoids which would be parallel to that of monoids.
Just like a monad is a monoid in the category of endofunctors (what's the problem?), a comonad is a comonoid in the category of endofunctors (what's the coproblem?) So yes, any comonad in Haskell would be an example of a comonoid.
Well, one way we can think of a monoid is as being hooked to whatever particular product construction we're using, so in Set we'd take this signature:
mul : A * A -> A
one : A
to this one:
dup : A -> A * A
one : A
but the idea of duality is that the logical statements you can make all have duals which can be applied to the dual objects. There is another way of stating what a monoid is that's agnostic to the choice of product construction, and when we take the costructure we can take the coproduct in the output, like:
div : A -> A + A
one : A
where + is a tagged sum. Here we essentially have that every single term of this type is always ready to produce a new bit, implicitly derived from the tag used to denote the left or the right instance of A. I personally think this is really damn cool. I think the cool version of what people were talking about above is when you construct this not for monoids but for monoid actions.
A monoid M is said to act on a set A if there's a function
act : M * A -> A
where we have the following rules
act identity a = a
act f (act g a) = act (f * g) a
If we want a co-action, what exactly do we want?
act : A -> M * A
this generates a stream of values of our comonoid's type! I'm having a lot of trouble coming up with the laws for these systems, but I think they must be around somewhere, so I'm going to keep looking tonight. If somebody can tell me what they are, or that I'm wrong about these things in some way or another, I'm also interested in that.
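A sketch in Haskell of why such a co-action "generates a stream" (hypothetical names, and ignoring the still-missing laws):

-- Unfolding a co-action a -> (m, a) yields an infinite stream of m values,
-- dual to folding a list of m values into an a with a monoid action.
coactStream :: (a -> (m, a)) -> a -> [m]
coactStream coact = go
  where go a = let (m, a') = coact a in m : go a'

-- e.g. a counter emitting successive integers:
--   take 5 (coactStream (\n -> (n, n + 1)) 0)   evaluates to   [0,1,2,3,4]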
As a physicist, the most common example I deal with is coalgebras, which are comonoid objects in the category of vector spaces, with the monoidal structure usually given by the tensor product.
In that case, there is a bijection between monoid and comonoid objects, since you can just take the adjoint or transpose of the product and unit maps to get a coproduct and a counit that satisfy the comonoid axioms.
In some branches of physics, it is very common to see objects that have both an algebra and a coalgebra structure with some compatibility axioms. The two most common cases are Hopf algebras and Frobenius algebras. They are very convenient for constructing states or solutions that are entangled or correlated.
In programming, the simplest nontrivial example I can think of would be reference counted pointers such as shared_ptr in C++ and Rc in Rust, along with their weak equivalents. You can copy them, which is a nontrivial operation that bumps up the refcount (so the two copies are distinct from the initial state). You can drop (call the destructor) on one, which is nontrivial because it bumps down the refcount of any other refcounted pointer that points to the same piece of data.
Furthermore, weak pointers are a great example of a comonoid action. You can use the co-action to generate a weak pointer from a shared pointer. This can be easily checked by noting that creating one from a shared pointer and immediately dropping it is a unit operation, and creating one & cloning it is equivalent to creating two from the shared pointer.
This is a general thing you see with nontrivial coproducts and their co-actions: when they don't reduce to a copying operation, they intuitively imply some form of action at a distance between the two halves, while also adding an operation that erases one half to leave the other independent.
I'm currently trying to come up with a data structure that fits the needs of two automata learning algorithms I'd like to implement in Haskell: RPNI and EDSM.
Intuitively, something close to what zippers are to trees would be perfect: those algorithms are state-merging algorithms that maintain some sort of focus (the Blue Fringe) on states and would therefore benefit from some kind of zipper to reach interesting points quickly. But I'm kinda lost because a DFA (Deterministic Finite Automaton) is more of a graph-like structure than a tree-like structure: transitions can take you back in the structure, which doesn't sit well with zippers.
So my question is: how would you go about representing a DFA (or at least its transitions) so that you could manipulate it in a fast fashion?
Let me begin with the usual opaque representation of automata in Haskell:
newtype Auto a b = Auto (a -> (b, Auto a b))
This represents a function that takes some input and produces some output along with a new version of itself. For convenience it's a Category as well as an Arrow. It's also a family of applicative functors. Unfortunately this type is opaque. There is no way to analyze the internals of this automaton. However, if you replace the opaque function by a transparent expression type you should get automata that you can analyze and manipulate:
{-# LANGUAGE GADTs, KindSignatures #-}

data Expr :: * -> * -> * where
  -- Stateless
  Id :: Expr a a
  -- Combinators
  Connect :: Expr a b -> Expr b c -> Expr a c
  -- Stateful
  Counter :: (Enum b) => b -> Expr a b
This gives you access to the structure of the computation. It is also a Category, but not an arrow. Once it becomes an arrow you have opaque functions somewhere.
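For instance, a small interpreter sketch (my own, written against the definitions above) shows what access to the structure buys you: you can run the expression one input at a time and also inspect or rewrite it:

-- Run one input through the expression, returning the output and the
-- updated expression (the Counter advances its state). Pattern matching
-- on Counter brings its Enum constraint into scope.
runExpr :: Expr a b -> a -> (b, Expr a b)
runExpr Id            x = (x, Id)
runExpr (Connect f g) x =
  let (y, f') = runExpr f x
      (z, g') = runExpr g y
  in (z, Connect f' g')
runExpr (Counter n)   _ = (n, Counter (succ n))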
Can you just use a graph to get started? I think the fgl package is part of the Haskell Platform.
Otherwise you can try defining your own structure with 'deriving (Data)' and use the "Scrap Your Zipper" library to get the Zipper.
If you don't need any fancy graph algorithms you can represent your DFA's transition function as a Map (State, Symbol) State (or a nested Map State (Map Symbol State)). This gives you fast access and manipulation. You also get focus by keeping track of the current state.
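A minimal sketch of that idea (names and record layout are mine, assuming Data.Map from containers):

import qualified Data.Map as Map
import Data.Map (Map)

type State  = Int
type Symbol = Char

data DFA = DFA
  { transitions :: Map (State, Symbol) State
  , focus       :: State      -- the current state acts as the point of focus
  }

-- follow one transition from the focused state, if it exists
step :: Symbol -> DFA -> Maybe DFA
step c dfa = do
  s' <- Map.lookup (focus dfa, c) (transitions dfa)
  pure dfa { focus = s' }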
Take a look at the regex-tdfa package: http://hackage.haskell.org/package/regex-tdfa
The source is pretty complex, but it's an implementation of regexes with tagged DFAs tuned for performance, so it should illustrate some good practices for representing DFAs efficiently.