Lens setter to add element to end of tuple - haskell

Is there any lens that will help me do the following transformation for a tuple of any length (say, up to 10-15 elements, at least):
(a, b, c) -> d -> (a, b, c, d)

To get a lens you need getter and setter functions. Unfortunately, there is no way to obtain a fourth element from a triple (except for Nothing, or some other value of a unit-like type). So you end up with a bunch of setters, which are trivial pattern-matching functions (one for each n-tuple), but not a lens.
If you really need a non-uniform container of flexible length, you probably want a plain list, or some free construction, instead.
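For what it's worth, those per-arity setters look like this (a sketch; snoc2 and snoc3 are made-up names, and one such definition is needed for every tuple size):

```haskell
-- Hypothetical per-arity "append" functions for tuples. Each tuple size
-- is a distinct type, so each size needs its own trivial definition;
-- no single polymorphic function (or lens) covers them all.
snoc2 :: (a, b) -> c -> (a, b, c)
snoc2 (a, b) c = (a, b, c)

snoc3 :: (a, b, c) -> d -> (a, b, c, d)
snoc3 (a, b, c) d = (a, b, c, d)
```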

Manipulating Tuples in Haskell

I'm new to Haskell and I have a question regarding tuples. Is there not a way to traverse a tuple? I understand that traversal is very easy with lists, but if the input is given as a tuple, is there a way to examine the entire tuple as you would a list? If not, would it be possible to extract the values from the tuple into a list and perform the traversal that way?
In Haskell, it’s not considered idiomatic (nor is it really possible) to use the tuple as a general-purpose traversable container. Any tuple you deal with is going to have a fixed number of elements, with the types of these elements also being fixed. (This is quite different from how tuples are idiomatically used in, for example, Python.) You ask about a situation where “the input is given as a tuple” but if the input is going to have a flexible number of elements then it definitely won’t be given as a tuple—a list is a much more likely choice.
This makes tuples seem less flexible than in some other languages. The upside is that you can examine them using pattern matching. For example, if you want to evaluate some predicate for each element of a tuple and return True if the predicate passes for all of them, you would write something like
all2 :: (a -> Bool) -> (a, a) -> Bool
all2 predicate (x, y) = predicate x && predicate y
Or, for three-element tuples,
all3 :: (a -> Bool) -> (a, a, a) -> Bool
all3 predicate (x, y, z) = predicate x && predicate y && predicate z
You might be thinking, “Wait, you need a separate function for each tuple size?!” Yes, you do, and you can start to see why there’s not a lot of overlap between the use cases for tuples and the use cases for lists. The advantages of tuples are exactly that they are kind of inflexible: you always know how many values they contain, and what type those values have. The former is not really true for lists.
Is there not a way to traverse a tuple?
As far as I know, there’s no built-in way to do this. It would be easy enough to write down instructions for traversing a 2-tuple, traversing a 3-tuple, and so on, but this would have the big limitation that you’d only be able to deal with tuples whose elements all have the same type.
Think about the map function as a simple example. You can apply map to a list of type [a] as long as you have a function a -> b. In this case map looks at each a value in turn, passes it to the function, and assembles the list of resulting b values. But with a tuple, you might have three elements whose values are all different types. Your function for converting as to bs isn’t sufficient if the tuple consists of two a values and a c! If you try to start writing down the Foldable instance or the Traversable instance even just for two-element tuples, you quickly realize that those typeclasses aren’t designed to handle containers whose values might have different types.
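If the element types do all coincide, one workaround (a sketch using GHC's deriving extensions; Pair is a made-up name) is to wrap the tuple in a newtype with a single type parameter, which gives Foldable and Traversable something to abstract over:

```haskell
{-# LANGUAGE DeriveTraversable #-}

-- A homogeneous pair wrapped so that the element type is a parameter;
-- GHC can then derive the container instances mechanically.
newtype Pair a = Pair (a, a)
  deriving (Show, Eq, Functor, Foldable, Traversable)
```

With this, fmap, sum, all and friends work on Pair exactly as they would on a two-element list.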
Would it be possible to just extract the values from the tuple into a list?
Yes, but you would need a separate function for each possible size of the input tuple. For example,
tupleToList2 :: (a, a) -> [a]
tupleToList2 (x, y) = [x, y]
tupleToList3 :: (a, a, a) -> [a]
tupleToList3 (x, y, z) = [x, y, z]
The good news, of course, is that you’re never going to get a situation where you have to deal with tuples of arbitrary size, because that isn’t a thing that can happen in Haskell. Think about the type signature of a function that accepted a tuple of any size: how could you write that?
In any situation where you’re accepting a tuple as input, it’s probably not necessary to convert the tuple to a list first, because the pattern-matching syntax means that you can just address each element of the tuple individually—and you always know exactly how many such elements there are going to be.
If your tuple is homogeneous and you don't mind using a third-party package, then lens provides functions to traverse each element of an arbitrary tuple.
ghci> :m +Control.Lens
ghci> over each (*10) (1, 2, 3, 4, 5) --traverse each element
(10,20,30,40,50)
Control.Lens.Tuple provides lenses to get and set the nth element, up to the 19th.
You can explore the lens package for more information. If you want to learn it, Optics by Example by Chris Penner is a good book.
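For intuition, here is a sketch of what those tuple lenses amount to, using only base (primed names to avoid clashing with the real library): a van Laarhoven lens is just a higher-rank function, and _1 for pairs is a one-liner.

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Identity (Identity (..))

-- The van Laarhoven lens shape used by the lens package.
type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

-- Focus the first element of a pair; Control.Lens.Tuple defines the
-- analogous lenses for larger tuples.
_1' :: Lens (a, b) (a', b) a a'
_1' k (a, b) = fmap (\a' -> (a', b)) (k a)

-- Apply a pure update by instantiating f to Identity.
over' :: Lens s t a b -> (a -> b) -> s -> t
over' l f = runIdentity . l (Identity . f)
```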

Are there useful applications for the Divisible Type Class?

I've lately been working on an API in Elm where one of the main types is contravariant. So I've googled around to see what one can do with contravariant types and found that the contravariant package in Haskell defines the Divisible type class.
It is defined as follows:
class Contravariant f => Divisible f where
divide :: (a -> (b, c)) -> f b -> f c -> f a
conquer :: f a
It turns out that my particular type does suit the definition of the Divisible type class. While Elm does not support type classes, I do look at Haskell from time to time for some inspiration.
My question: Are there any practical uses for this type class? Are there known APIs out there in Haskell (or other languages) that benefit from this divide-conquer pattern? Are there any gotchas I should be aware of?
Thank you very much for your help.
One example:
Applicative is useful for parsing, because you can turn Applicative parsers of parts into a parser of wholes, needing only a pure function for combining the parts into a whole.
Divisible is useful for serializing (should we call this coparsing now?), because you can turn Divisible serializers of parts into a serializer of wholes, needing only a pure function for splitting the whole into parts.
I haven't actually seen a project that worked this way, but I'm (slowly) working on an Avro implementation for Haskell that does.
When I first came across Divisible I wanted it for divide, and had no idea what possible use conquer could be other than cheating (an f a out of nowhere, for any a?). But to make the Divisible laws check out for my serializers conquer became a "serializer" that encodes anything to zero bytes, which makes a lot of sense.
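To make that concrete, here is a sketch of such a serializer (Encoder, divideE and conquerE are made-up names; the real class methods live in Data.Functor.Contravariant.Divisible):

```haskell
-- A toy serializer: encode a value to a list of output chunks.
newtype Encoder a = Encoder { encode :: a -> [String] }

-- divide for Encoder: split the whole into parts with a pure function,
-- encode each part, and concatenate the results.
divideE :: (a -> (b, c)) -> Encoder b -> Encoder c -> Encoder a
divideE split eb ec = Encoder $ \a ->
  let (b, c) = split a in encode eb b ++ encode ec c

-- conquer for Encoder: the serializer that encodes anything to nothing.
conquerE :: Encoder a
conquerE = Encoder (const [])

-- A primitive encoder to build on.
string :: Encoder String
string = Encoder (\s -> [s])
```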
Here's a possible use case.
In streaming libraries, one can have fold-like constructs like the ones from the foldl package, that are fed a sequence of inputs and return a summary value when the sequence is exhausted.
These folds are contravariant on their inputs, and can be made Divisible. This means that if you have a stream of elements where each element can be somehow decomposed into b and c parts, and you also happen to have a fold that consumes bs and another fold that consumes cs, then you can build a fold that consumes the original stream.
The actual folds from foldl don't implement Divisible, but they could, using a newtype wrapper. In my process-streaming package I have a fold-like type that does implement Divisible.
divide requires the return values of the constituent folds to be of the same type, and that type must be an instance of Monoid. If the folds return different, unrelated monoids, a workaround is to put each return value in a separate field of a tuple, leaving the other field as mempty. This works because a tuple of monoids is itself a Monoid.
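A sketch of that idea, with made-up names (MFold stands in for a contravariant fold; the real Fold type in foldl is richer):

```haskell
import Data.Monoid (Sum (..))

-- A contravariant fold: map each input to a monoid and combine.
newtype MFold a m = MFold { step :: a -> m }

runMFold :: Monoid m => MFold a m -> [a] -> m
runMFold f = foldMap (step f)

-- divide for MFold: decompose each element, feed the halves to the two
-- folds, and merge their (shared) monoidal results.
divideF :: Semigroup m => (a -> (b, c)) -> MFold b m -> MFold c m -> MFold a m
divideF split fb fc = MFold $ \a ->
  let (b, c) = split a in step fb b <> step fc c
```

The tuple-of-monoids workaround then amounts to one fold returning (Sum x, mempty) and the other (mempty, Sum 1).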
I'll examine the example of the core data types in Fritz Henglein's generalized radix sort techniques as implemented by Edward Kmett in the discrimination package.
While there's a great deal going on there, it largely focuses around a type like this
data Group a = Group (forall b . [(a, b)] -> [[b]])
If you have a value of type Group a you essentially must have an equivalence relationship on a because if I give you an association between as and some type b completely unknown to you then you can give me "groupings" of b.
groupId :: Group a -> [a] -> [[a]]
groupId (Group grouper) = grouper . map (\a -> (a, a))
You can see this as a core type for writing a utility library of groupings. For instance, we might want to know that if we can Group a and Group b then we can Group (a, b) (more on this in a second). Henglein's core idea is that if you can start with some basic Groups on integers—we can write very fast Group Int32 implementations via radix sort—and then use combinators to extend them over all types then you will have generalized radix sort to algebraic data types.
So how might we build our combinator library?
Well, f :: Group a -> Group b -> Group (a, b) is pretty important in that it lets us make groups of product-like types. Normally we'd get this from Applicative and liftA2, but Group, you'll notice, is Contravariant, not a Functor.
So instead we use Divisible
divided :: Group a -> Group b -> Group (a, b)
Notice that this arises in a strange way from
divide :: (a -> (b, c)) -> Group b -> Group c -> Group a
as it has the typical "reversed arrow" character of contravariant things. We can now understand things like divide and conquer in terms of their interpretation on Group.
Divide says that if I want to build a strategy for equating as using strategies for equating bs and cs, I can do the following for any type x
Take your partial relation [(a, x)] and map over it with a function f :: a -> (b, c), and a little tuple manipulation, to get a new relation [(b, (c, x))].
Use my Group b to discriminate [(b, (c, x))] into [[(c, x)]]
Use my Group c to discriminate each [(c, x)] into [[x]] giving me [[[x]]]
Flatten the inner layers to get [[x]] like we need
instance Divisible Group where
conquer = Group $ return . fmap snd
divide k (Group l) (Group r) = Group $ \xs ->
-- a bit more cleverly done here...
l [ (b, (c, d)) | (a,d) <- xs, let (b, c) = k a] >>= r
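Put together as a self-contained sketch (with a naive partition-based Group Bool standing in for the package's fast radix-sort base cases; divideG and groupBool are made-up names):

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.List (partition)

newtype Group a = Group (forall b. [(a, b)] -> [[b]])

groupId :: Group a -> [a] -> [[a]]
groupId (Group grouper) = grouper . map (\a -> (a, a))

-- A naive base case: discriminate on a Bool key.
groupBool :: Group Bool
groupBool = Group $ \xs ->
  let (ts, fs) = partition fst xs
  in filter (not . null) [map snd fs, map snd ts]

-- divide, specialized to Group, following the four steps above.
divideG :: (a -> (b, c)) -> Group b -> Group c -> Group a
divideG k (Group l) (Group r) = Group $ \xs ->
  l [(b, (c, d)) | (a, d) <- xs, let (b, c) = k a] >>= r
```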
We also get interpretations of the more tricky Decidable refinement of Divisible
class Divisible f => Decidable f where
lose :: (a -> Void) -> f a
choose :: (a -> Either b c) -> f b -> f c -> f a
instance Decidable Group where
lose :: (a -> Void) -> Group a
choose :: (a -> Either b c) -> Group b -> Group c -> Group a
These read as saying that for any type a of which we can guarantee there are no values (we cannot produce a value of Void by any means; a function a -> Void would be a means of producing Void given an a, so we must not be able to produce values of a by any means either!), we immediately get a grouping of zero values
lose _ = Group (\_ -> [])
We can also play a similar game to divide above, except instead of sequencing our use of the input discriminators, we alternate between them.
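A hedged sketch of that alternation (chooseG is a made-up name; the real instance in discrimination is more clever): tag each element with its Either side, discriminate the lefts and rights independently, and concatenate the two groupings.

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Either (partitionEithers)

newtype Group a = Group (forall b. [(a, b)] -> [[b]])

-- choose, specialized to Group: split by the Either tag and run the
-- two discriminators side by side instead of one inside the other.
chooseG :: (a -> Either b c) -> Group b -> Group c -> Group a
chooseG k (Group l) (Group r) = Group $ \xs ->
  let (ls, rs) = partitionEithers
        [either (\b -> Left (b, d)) (\c -> Right (c, d)) (k a) | (a, d) <- xs]
  in l ls ++ r rs
```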
Using these techniques we build up a library of "Groupable" things, namely Grouping
class Grouping a where
grouping :: Group a
and note that nearly all the definitions arise from the basic definition atop groupingNat, which uses fast monadic vector manipulations to achieve an efficient radix sort.

Are functions of arity-n really just an n-category due to currying? Can they be made into a 1-category?

This question has a longish prelude before I can actually ask it :)
Let's say type A and B represent categories, then the function
f :: B -> A
is a morphism between the two categories. We can create a new category with A and B as objects and f as the arrow, like this: [diagram omitted]
Now, let's introduce a new category C and function g:
g :: C -> B -> A
I would like to be able to add C and g to my category above, but am unsure how to do it. Intuitively, I want something that looks like this: [diagram omitted]
But I've never seen anything like that in a category diagram before. To make this kosher, I could introduce a dummy arrow g' and construct a 2-category like this: [diagram omitted]
But that seems like an obtuse picture. (We could, of course, use the picture I drew above as shorthand for the proper one.) Also, it's not exactly clear anymore what g and g' even are. g is no longer a function that takes as input a category C and returns a morphism :: B -> A. Instead,
g' :: (C -> C)
g :: (C -> C) -> (B -> A)
If we pass g the identity, then everything will work fine. But if we pass it some other function, then who knows what could happen?
So my question is: Is an n-arrow within an n-category really the way we should think about functions with arity n? Or is there some easier way to represent this function down into a standard category that I missed?
Talking about "morphisms between categories" here sounds like a possible category error (ha, ha). In Haskell we most often talk about an alleged category Hask, which is some inconsistently idealized(0) version of the category whose objects are types of kind * and morphisms are functions. It's unclear what "functions" between categories would be here, if they're not morphisms of Hask.
On the other hand, in a more general setting you can certainly define a category whose objects are other categories(1), with whatever morphisms you want such that the necessary properties are satisfied. The usual example of this is Cat, the category of small categories whose morphisms are functors.
That said, in either case the answer to your question is essentially the same. To talk about the collection of morphisms between two objects as if that collection was itself an object--i.e., as source or destination of other morphisms--you need an object to fill that role and some way to indirectly talk about morphisms so that you can translate back and forth.
One way to do this, if we already have a way to talk about pairs of objects as a single object (usually called a "product" of some sort), is to define an equivalence between the collection of morphisms A⊗B→C and the collection of morphisms A→C^B, which allows the object C^B to stand in for the collection of morphisms B→C.
If the "pairs of objects" in question are in fact a categorical product, we have a cartesian closed category, which both Hask and Cat are. In Haskell, the above equivalence is given by the functions curry and uncurry(2).
That's not the only way to talk about morphisms-as-objects, of course. The general concept is simply called a "closed category". But if you're thinking in terms of higher-order functions and functional programming, a cartesian closed category is probably what you have in mind.
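Concretely in Haskell, the equivalence reads as follows (a small illustration; toArrowForm and toPairForm are just names for the two directions):

```haskell
-- curry and uncurry witness the correspondence between morphisms
-- (a, b) -> c and morphisms a -> (b -> c) in Hask.
toArrowForm :: ((a, b) -> c) -> (a -> b -> c)
toArrowForm = curry

toPairForm :: (a -> b -> c) -> ((a, b) -> c)
toPairForm = uncurry
```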
(0) This usually involves things like pretending ⊥ doesn't exist (so that all functions are total) and treating functions that produce the same output as identical (e.g., ignoring differences in performance).
(1) But don't try to talk about a category whose objects are all categories, or else Bertrand Russell's gonna give you the business.
(2) Named after, of course, the logician Haskell Curry.
I'm fairly unschooled in category theory, but:
In Haskell programming we often (pretend that we) work with the category Hask, whose objects are Haskell types and morphisms are Haskell functions.
Applying that understanding to your example, I see that B and A are objects, and f is a morphism between them.
g is however not a morphism between C and f, so there should be no attempt to draw g as an arrow between C and the f arrow.
If we apply the right-associativity of the -> type constructor, we get g :: C -> (B -> A). B -> A itself is a Haskell type, and so should be an object of Hask in its own right. f is however not that object; it is one particular value in the type B -> A, but the B -> A object would be the type itself.
This also makes sense thinking purely in Haskell terms. Just because g applied to a value of type C gives us some function of type B -> A, that doesn't mean g's return value has anything to do with f, which is some other function of type B -> A.
So that gives us f as a morphism drawn between the object B and the object A, and g as a morphism drawn between the object C and the object B -> A.
Here's where my category theory knowledge breaks down. It seems obvious that there should be some sort of relationship between f and the object B -> A, as in Haskell f is a value in the type B -> A. I don't know what that relationship is in category theory terms.
C. A. McCann's answer makes it sound like you need to handle that by some "extra" relationships that aren't modelled directly by the category. So as far as the category is concerned, the object B -> A might as well be called D; it has no relationship to anything else except as given by the morphisms connecting it to other objects. It's only in combination with other information from "outside" the category that we can identify the connection between A, B, f, and D (really B -> A). But I may be misunderstanding that description.

Haskell: "Append" type class

From what I understand Foldable basically represents structures that have a number of elements of the same type that can be iterated over, i.e. lists, maps, sets, etc.
Is there a class like Appendable or Insertable which basically represents structures one can add elements to? Of course, there would be no guarantee of the order in which elements are retrieved.
I'd rather not create a class myself if there already is one.
You should look at the Data.Collections package. It contains the Unfoldable typeclass with the following methods:
class Unfoldable c i | c -> i where
insert :: i -> c -> c
empty :: c
singleton :: i -> c
It also provides the insertMany and insertManySorted methods, to insert all the elements from a Foldable into an Unfoldable.
If you make your type an instance of both Foldable and Unfoldable then you can both insert and retrieve elements from it.
I think insertion by itself isn't really a sensible concept; there are better ways to generalize this. For example, Alternative is a sensible type class here: you get pure for singletons and a generic union operation in the form of <|>.
Is there a class like Appendable or Insertable which basically represents structures one can add elements to?
You want to be clearer on what you mean by "add elements." Because there are two ways this could go:
class Insertable c where
-- Add one element to the collection.
insert :: a -> c a -> c a
class Appendable c where
-- Append a collection to another.
append :: c a -> c a -> c a
The latter, you will note, does not support adding a lone a to the collection unless you add an operation like this:
class Pointed c where
singleton :: a -> c a
Note that if you have Appendable and Pointed instances you can define an Insertable:
instance (Appendable c, Pointed c) => Insertable c where
insert x xs = append (singleton x) xs
The Insertable class, together with operations for actually accessing the collection's elements (e.g. the Foldable class), can likewise be used to define an Appendable instance.
In any case, my mock Appendable class above is really just Monoid in disguise, and my Insertable can likewise be seen as a disguised version of Chris Taylor's suggested Unfoldable class. I'd go with the Collection class from that package, though, which combines Unfoldable with Foldable.
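As a runnable sketch of how those pieces fit together (class names follow the answer above, not any published library), with list instances:

```haskell
-- The two "add elements" flavors from above, plus singleton.
class Appendable c where
  append :: c a -> c a -> c a

class Pointed c where
  singleton :: a -> c a

instance Appendable [] where
  append = (++)

instance Pointed [] where
  singleton x = [x]

-- insert as a derived operation (written as a plain function rather
-- than a blanket instance, which would need language extensions).
insert :: (Appendable c, Pointed c) => a -> c a -> c a
insert x xs = append (singleton x) xs
```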

How do I define map and fold on search trees?

I have a search tree that's defined as:
data (Ord a) => Stree a = Null | Fork (Stree a) a (Stree a) deriving Show
and I have to define two functions, mapStree:
mapStree :: (Ord b, Ord a) => (a -> b) -> Stree a -> Stree b
and foldStree:
foldStree :: (Ord a) => (b -> a -> b -> b) -> b -> Stree a -> b
I don't fully understand what's going on and don't know how to do this.
You want your map to apply a function to every label carried by your tree. This means that any occurrence of an a is to be changed to an occurrence of a b, using the given function as the transformation.
To do this, you'll need to figure out what to do with each possible constructor of the Stree. Now, Null is easy -- it doesn't depend on a in the first place. Trickier is what to do with Fork. In a Fork, there is one a and two further Strees, so you need a function a -> b and a function Stree a -> Stree b. For the former, the argument f to mapStree is exactly what you need, and for the latter, mapStree f has the signature you need (by partial application!).
For foldStree, you have some accumulation type b and your label type a, and an accumulation function that takes two values of type b and a value of type a and produces a b. This is helpful, not least because that accumulation function mirrors what you find at any given Fork in the tree: by recursion you can assume you have results from both the left and right Stree, and it only remains to combine those with the a value in the middle to give a new b value to hand up the recursion. The b parameter to foldStree provides a base value for each Null, which is what gets the whole thing started.
Thus, your foldStree will also need to be defined on the possible constructors: picking out the parameter for a Null value, and then for a Fork value, it needs to recurse into both Stree values before combining everything with the parameter combining function.
Please clarify in comments whether this helps you enough to deal with the problem: I (and many others here) can clarify, but the hope is for you to learn how to do it rather than to just hand you code.
I highly recommend Lecture 5 from this course.
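For reference, the recursion sketched in the first answer works out to something like the following (one possible solution; the deprecated datatype context is dropped from the data declaration, with the Ord constraints kept on the functions to match the required signatures):

```haskell
data Stree a = Null | Fork (Stree a) a (Stree a) deriving Show

-- Apply f to every label; recurse into both subtrees.
mapStree :: (Ord b, Ord a) => (a -> b) -> Stree a -> Stree b
mapStree _ Null = Null
mapStree f (Fork l x r) = Fork (mapStree f l) (f x) (mapStree f r)

-- Replace Null with the base value z and each Fork with the combining
-- function applied to the folded subtrees and the label.
foldStree :: (Ord a) => (b -> a -> b -> b) -> b -> Stree a -> b
foldStree _ z Null = z
foldStree f z (Fork l x r) = f (foldStree f z l) x (foldStree f z r)
```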