I'm trying to match data constructors in a generic way, so that any Task of a certain type will be executed.
data Task = TaskTypeA Int | TaskTypeB (Float, Float)

genericTasks :: StateLikeMonad s
genericTasks = do
  want (TaskTypeA 5)
  TaskTypeA #> \input -> do
    want (TaskTypeB (1.2, 4.3))
    runTaskTypeA input
  TaskTypeB #> \(x, y) -> runTaskTypeB x y

main = runTask genericTasks
In this, the genericTasks function goes through the do-instructions, building a list of stuff to do from want handled by some sort of state monad, and a list of ways to do it, via the (#>) function. The runTask function will run the genericTasks, use the resulting list of to-do and how-to-do, and do the computations.
However, I'm having quite some trouble figuring out how to extract the "type" (TaskTypeA, TaskTypeB) from (#>), such that one can call it later. If you do a :t TaskTypeA, you get Int -> Task.
I.e., How to write (#>)?
I'm also not entirely confident that it's possible to do what I'm thinking here in such a generic way. For reference, I'm trying to build something similar to the Shake library, where (#>) is similar to (*>). However Shake uses a String as the argument to (*>), so the matching is done entirely using String matching. I'd like to do it without requiring strings.
Your intuition is correct, it's not possible to write (#>) as you have specified. The only time a data constructor acts as a pattern is when it is in pattern position, namely, appearing as a parameter to a function
f (TaskTypeA z) = ...
as one of the alternatives of a case statement
case tt of
  TaskTypeA z -> ...
or in a monadic bind or pattern binding
do TaskTypeA z <- Just tt
   return z
When used in value position (e.g. as an argument to a function), it loses its patterny nature and becomes a regular function. That means, unfortunately, that you cannot abstract over patterns this easily.
There is, however, a simple formalization of patterns:
type Pattern d a = d -> Maybe a
It's a little bit of work to make them.
taskTypeA :: Pattern Task Int
taskTypeA (TaskTypeA z) = Just z
taskTypeA _ = Nothing
If you also need to use the constructor "forwards" (i.e. a -> d), then you could pair the two together (plus some functions to work with it):
data Constructor d a = Constructor (a -> d) (d -> Maybe a)
apply :: Constructor d a -> a -> d
apply (Constructor f _) = f
match :: Constructor d a -> d -> Maybe a
match (Constructor _ m) = m
taskTypeA :: Constructor Task Int
taskTypeA = Constructor TaskTypeA $ \case   -- needs {-# LANGUAGE LambdaCase #-}
  TaskTypeA z -> Just z
  _           -> Nothing
This is known as a "prism", and (a very general form of) it is implemented in lens.
There are advantages to using an abstraction like this -- namely, that you can construct prisms which may have more structure than data types are allowed to (e.g. d can be a function type), and you can write functions that operate on constructors, composing simpler ones to make more complex ones generically.
If you are using plain data types, though, it is a pain to have to implement the Constructor objects for each constructor like I did for TaskTypeA above. If you have a lot of these to work with, you can use Template Haskell to write your boilerplate for you. The necessary Template Haskell routine is already implemented in lens -- it may be worth it to learn how to use the lens library because of that. (But it can be a bit daunting to navigate)
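For illustration, here is a hedged sketch of what that looks like with lens's Template Haskell, assuming the Task type from the question (the generated _TaskTypeA and _TaskTypeB names follow lens's naming convention):

{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

data Task = TaskTypeA Int | TaskTypeB (Float, Float)

-- generates _TaskTypeA :: Prism' Task Int and _TaskTypeB :: Prism' Task (Float, Float)
makePrisms ''Task

matched :: Maybe Int
matched = preview _TaskTypeA (TaskTypeA 5)   -- Just 5

rebuilt :: Task
rebuilt = review _TaskTypeA 7                -- TaskTypeA 7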
(Style note: the second Constructor above and its two helper functions can be written equivalently using a little trick:
data Constructor d a = Constructor { apply :: a -> d, match :: d -> Maybe a }
)
With this abstraction in place, it is now possible to write (#>). A simple example would be
(#>) :: Constructor d a -> (a -> State d ()) -> State d ()
cons #> f = do
  d <- get
  case match cons d of
    Nothing -> return ()
    Just a  -> f a
or perhaps something more sophisticated, depending on what precisely you want.
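As a quick usage sketch (hypothetical, relying on the Task type from the question and the Constructor, taskTypeA and (#>) definitions above):

import Control.Monad.State

demo :: State Task ()
demo = taskTypeA #> \n -> put (TaskTypeB (fromIntegral n, 0))

-- execState demo (TaskTypeA 5)      == TaskTypeB (5.0, 0.0)
-- execState demo (TaskTypeB (1, 2)) == TaskTypeB (1.0, 2.0)   (no match, state unchanged)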
Reading http://www.seas.upenn.edu/~cis194/spring13/lectures/04-higher-order.html, it states:
In particular, note that function arrows associate to the right, that
is, W -> X -> Y -> Z is equivalent to W -> (X -> (Y -> Z)). We can
always add or remove parentheses around the rightmost top-level arrow
in a type.
Function arrows associate to the right, but since function application associates to the left, what is the usefulness of this information? I feel I'm not understanding something, because to me it seems a meaningless point that function arrows associate to the right. Since function application always associates to the left, is that the only associativity I should be concerned with?
Function arrows associate to the right, but [...] what is the usefulness of this information?
If you see a type signature like, for example, f :: String -> Int -> Bool, you need to know the associativity of the function arrow to understand what the type of f really is:
if the arrow associates to the left, then the type means (String -> Int) -> Bool, that is, f takes a function as argument and returns a boolean.
if the arrow associates to the right, then the type means String -> (Int -> Bool), that is, f takes a string as argument and returns a function.
That's a big difference, and if you want to use f, you need to know which one it is. Since the function arrow associates to the right, you know that it has to be the second option: f takes a string and returns a function.
Function arrows associate to the right [...] function application associates to the left
These two choices work well together. For example, we can call the f from above as f "answer" 42 which really means (f "answer") 42. So we are passing the string "answer" to f which returns a function. And then we're passing the number 42 to that function, which returns a boolean. In effect, we're almost using f as a function with two arguments.
This is the standard way of writing functions with two (or more) arguments in Haskell, so it is a very common use case. Because of the associativity of function application and of the function arrow, we can write this common use case without parentheses.
When defining a two-argument curried function, we usually write something like this:
f :: a -> b -> c
f x y = ...
If the arrow did not associate to the right, the above type would instead have to be spelled out as a -> (b -> c). So the usefulness of ->'s associativity is that it saves us from writing too many parentheses when declaring function types.
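A minimal sketch of this point (the names are made up): the same definition read with and without the implicit parentheses, plus a partial application.

add :: Int -> Int -> Int      -- i.e. Int -> (Int -> Int)
add x y = x + y

addFive :: Int -> Int         -- add applied to one argument is itself a function
addFive = add 5               -- addFive 3 == 8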
If an operator # is 'right associative', it means this:
a # b # c # d = a # (b # (c # d))
... for any number of arguments. The grouping nests to the right, just like foldr does.
This means that:
a -> b -> c -> d = a -> (b -> (c -> d))
Note: a -> (b -> (c -> d)) =/= ((a -> b) -> c) -> d ! This is very important.
What this tells us is that, say, foldr:
λ> :t foldr
foldr :: (a -> b -> b) -> b -> [a] -> b
Takes a function of type (a -> b -> b), and then returns... a function that takes a b, and then returns... a function that takes a [a], and then returns... a b. This means that we can apply functions like this
f a b c
because
f a b c = ((f a) b) c
and f returns a new function each time it is applied to an argument, until all arguments have been supplied.
Essentially, this isn't very useful as such, but is important information for when we want to interpret and call function types.
However, for an operator like (++), the operator's own associativity matters too: if (++) were left-associative, long chains of appends would be very slow, so it is declared right-associative.
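Coming back to the foldr example, here is a minimal sketch (hypothetical name) of how the right-associated type is consumed one argument at a time by left-associated application:

-- foldr :: (a -> b -> b) -> b -> [a] -> b
--       == (a -> b -> b) -> (b -> ([a] -> b))
sumList :: [Int] -> Int
sumList = foldr (+) 0        -- (foldr (+)) 0: two arguments applied, one remaining

-- sumList [1,2,3] == 6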
The early functional language Lisp suffered from excessively nested parentheses (which make code (or even text (if you do not mind considering a broader context)) difficult to read). With time, functional language designers opted to make functional code easy to read and write for pros, even at the cost of confusing rookies with less uniform rules.
In functional code, function types like (String -> Int) -> Bool are much rarer than types like String -> (Int -> Bool), because functions that return functions are a trademark of the functional style. Making the arrow associate to the right therefore reduces the number of parentheses needed on average. For function application it is vice versa, which is why application associates to the left.
The main purpose is convenience, because partial function application goes from left to right.
Every time you partially apply a function to a set of values, the remaining type has to be valid.
You can think of arrow types as a queue of types, where the queue itself is a type. During partial function application, you dequeue as many types from the queue as the number of arguments, yielding whatever remains of the queue. The resulting queue is still a valid type.
This is why types associate to the right. If types associate to the left, it will behave like a stack, and you won't be able to partially apply it the same way without leaving "holes" or undefined domains. For instance, say you have the following function:
foo :: a -> b -> c -> d
If Haskell types were left-associative, then passing a single parameter to foo would yield the following invalid type:
((? -> b) -> c) -> d
You will then be forced to circumvent it by adding parentheses, which could hamper readability.
I'm struggling with using the lens library for a particular problem. I'm trying to pass
an updated data structure
a lens focussed on part of that updated structure
to another function, g. I pass both the lens and the data structure because g needs some shared information from the data structure as well as the specific piece the lens focuses on. (If it helps, the data structure contains information on a joint probability distribution, but g only works on one marginal at a time and needs to know which marginal I'm looking at. The only difference between the two marginals is their mean, with the rest of their definition being shared in the data structure.)
My first attempt looked like this
f :: Functor f => Params -> ((Double -> f Double) -> Params -> f Params) -> a
f p l = g (l %~ upd $ p) l
  where upd = ...

g p x = go p p^.x
but that fails during compilation because f gets inferred as being Identity for the update and Const Double for the getter.
What's the best way to accomplish what I want to do? I can imagine being able to do one of the following:
make a copy of the lens so that the type inference can be different in each case
rather than passing the updated structure and the lens, I pass the original structure and a lens which returns a modified value (if I only want to update the part of the structure that the lens looks at).
making a better design choice for my functions/data structure
something completely different
Thanks for any help!
András Kovács' answer shows how to achieve this with RankNTypes. If you wish to avoid RankNTypes, then you can use ALens and cloneLens:
f :: a -> ALens' a Int -> (Int, a)
f a l = (newvalue, a & cloneLens l .~ newvalue)
  where oldvalue = a ^. cloneLens l
        newvalue = if oldvalue == 0 then 0 else oldvalue - 1
Control.Lens.Loupe provides operators and functions that work on ALens instead of Lens.
Note that in many cases, you should also be able to use <<%~, which is like %~ but also returns the old value, or <%~, which returns the new value:
f :: a -> LensLike' ((,) Int) a Int -> (Int, a)
f a l = a & l <%~ g
  where g oldvalue = if oldvalue == 0 then 0 else oldvalue - 1
This has the advantage that it can also work with Isos or sometimes also with Traversals (when the target type is a Monoid).
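As a quick check (a hypothetical GHCi session, using the tuple lens _1 from lens with the second version of f above; the first version behaves the same way):

λ> f (3, "x") _1
(2,(2,"x"))
λ> f (0, "x") _1
(0,(0,"x"))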
You want your type signature to look like this:
f :: Params -> Lens Params Params Double Double -> ...
-- alternatively, instead of the long Lens form you can write
-- Lens' Params Double
This is not equivalent to what you wrote out in the signature, because the functor parameter is quantified inside Lens:
type Lens s t a b = forall f. Functor f => (a -> f b) -> (s -> f t)
The correct signature translates to:
f :: Params -> (forall f. Functor f => (Double -> f Double) -> Params -> f Params) -> ...
This prevents the compiler from unifying the different f parameters of different lens usages, i.e. you can use the lens polymorphically. Note that you need the RankNTypes or Rank2Types GHC extension in order to be able to write out the signature.
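A self-contained sketch of the idea, with a hypothetical bump function that uses the same rank-2 lens argument once as a setter (f ~ Identity) and once as a getter (f ~ Const Double):

{-# LANGUAGE RankNTypes #-}
import Control.Lens

bump :: s -> Lens' s Double -> (Double, s)
bump s l = let s' = s & l %~ (+ 1)   -- setter use
           in  (s' ^. l, s')         -- getter use of the very same lens

-- bump (1.5, "x") _1 == (2.5, (2.5, "x"))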
Benno gave the best general purpose answer.
There are two other options, however, which I offer here for completeness.
1.)
There are several Loupe combinators in Lens.
http://hackage.haskell.org/package/lens-4.1.2/docs/Control-Lens-Loupe.html
They all have names that involve #.
^# and #%= both take ALens which is a lens instantiated at a particular concrete choice of functor.
This can be useful if you need to pass around lists of lenses, or if you really really need multiple passes.
2.)
Another option, and my preferred tactic, is to figure out how to do both operations at the same time.
Here you are modifying, but want the value you just set. Well, you can get exactly that by using <%~ instead of %~.
Now you only instantiate the lens at one choice of functor and your code gets faster.
In Haskell, you can use unsafeCoerce to override the type system. How to do the same in F#?
For example, to implement the Y-combinator.
I'd like to offer a different solution, based on embedding the untyped lambda calculus in a typed functional language. The idea is to create a data type that allows us to change between types α and α → α, which subsequently allows to escape the restrictions of a type system. I'm not very familiar with F# so I'll give my answer in Haskell, but I believe it could be adapted easily (perhaps the only complication could be F#'s strictness).
-- | Roughly represents a morphism between @a@ and @a -> a@.
-- Therefore we can embed an arbitrary closed λ-term into @Any a@. Any time we
-- need to create a λ-abstraction, we just nest into one @Any@ constructor.
--
-- The type parameter allows us to embed ordinary values into the type and
-- retrieve results of computations.
data Any a = Any (Any a -> a)
Note that the type parameter isn't significant for combining terms. It just allows us to embed values into our representation and extract them later. All terms of a particular type Any a can be combined freely without restrictions.
-- | Embed a value into a λ-term. If viewed as a function, it ignores its
-- input and produces the value.
embed :: a -> Any a
embed = Any . const
-- | Extract a value from a λ-term, assuming it's a valid value (otherwise it'd
-- loop forever).
extract :: Any a -> a
extract x@(Any x') = x' x
With this data type we can use it to represent arbitrary untyped lambda terms. If we want to interpret a value of Any a as a function, we just unwrap its constructor.
First let's define function application:
-- | Applies a term to another term.
($$) :: Any a -> Any a -> Any a
(Any x) $$ y = embed $ x y
And λ abstraction:
-- | Represents a lambda abstraction
l :: (Any a -> Any a) -> Any a
l x = Any $ extract . x
Now we have everything we need for creating complex λ-terms. Our definitions mimic the classical λ-term syntax; all we do is use l to construct λ-abstractions.
Let's define the Y combinator:
-- λf.(λx.f(xx))(λx.f(xx))
y :: Any a
y = l (\f -> let t = l (\x -> f $$ (x $$ x))
             in  t $$ t)
And we can use it to implement Haskell's classical fix. First we'll need to be able to embed a function of type a -> a into Any a:
embed2 :: (a -> a) -> Any a
embed2 f = Any (f . extract)
Now it's straightforward to define
fix :: (a -> a) -> a
fix f = extract (y $$ embed2 f)
and subsequently a recursively defined function:
fact :: Int -> Int
fact = fix f
  where
    f _ 0 = 1
    f r n = n * r (n - 1)
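A quick check, assuming the definitions above:

main :: IO ()
main = print (fact 5)   -- prints 120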
Note that in the above text there is no recursive function. The only recursion is in the Any data type, which allows us to define y (which is also defined non-recursively).
In Haskell, unsafeCoerce has the type a -> b and is generally used to assert to the compiler that the thing being coerced actually has the destination type and it's just that the type-checker doesn't know it.
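For example, a minimal Haskell sketch of that first use (the types are hypothetical; for a newtype like this, Data.Coerce would be the safer modern choice):

import Unsafe.Coerce (unsafeCoerce)

newtype Age = Age Int

-- An Age has exactly the same runtime representation as an Int, but the
-- type checker does not treat the two as interchangeable here.
ageToInt :: Age -> Int
ageToInt = unsafeCoerce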
Another, less common use, is to reinterpret a pattern of bits as another type. For example an unboxed Double# could be reinterpreted as an unboxed Int64#. You have to be sure about the underlying representations for this to be safe.
In F#, the first application can be achieved with box |> unbox as John Palmer said in a comment on the question. If possible use explicit type arguments to make sure that you don't accidentally have the wrong coercion inferred, e.g. box<'a> |> unbox<'b> where 'a and 'b are type variables or concrete types that are already in scope in your code.
For the second application, look at the BitConverter class for specific conversions of bit-patterns. In theory you could also do something like interfacing with unmanaged code to achieve this, but that seems very heavyweight.
These techniques won't work for implementing the Y combinator because the cast is only valid if the runtime objects actually do have the target type, but with the Y combinator you actually need to call the same function again but with a different type. For this you need the kinds of encoding tricks mentioned in the question John Palmer linked to.
For a silly challenge I am trying to implement a list type using as little of the prelude as possible and without using any custom types (the data keyword).
I can construct and modify a list using tuples like so:
import Prelude (Int(..), Num(..), Eq(..))
cons x = (x, ())
prepend x xs = (x, xs)
head (x, _) = x
tail (_, x) = x
at xs n = if n == 0 then xs else at (tail xs) (n-1)
I cannot think of how to write an at (!!) function. Is this even possible in a static language?
If it is possible could you try to nudge me in the right direction without telling me the answer.
There is a standard trick known as Church encoding that makes this easy. Here's a generic example to get you started:
data Foo = A Int Bool | B String
fooValue1 = A 3 False
fooValue2 = B "hello!"
Now, a function that wants to use this piece of data must know what to do with each of the constructors. So, assuming it wants to produce some result of type r, it must at the very least have two functions, one of type Int -> Bool -> r (to handle the A constructor), and the other of type String -> r (to handle the B constructor). In fact, we could write the type that way instead:
type Foo r = (Int -> Bool -> r) -> (String -> r) -> r
You should read the type Foo r here as saying "a function that consumes a Foo and produces an r". The type itself "stores" a Foo inside a closure -- so that it will effectively apply one or the other of its arguments to the value it closed over. Using this idea, we can rewrite fooValue1 and fooValue2:
fooValue1 = \consumeA consumeB -> consumeA 3 False
fooValue2 = \consumeA consumeB -> consumeB "hello!"
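For instance, a hypothetical consumer of the encoded values above (assuming the Foo r synonym):

describe :: Foo String -> String
describe foo = foo (\i b -> "A " ++ show i ++ " " ++ show b)
                   (\s -> "B " ++ s)

-- describe fooValue1 == "A 3 False"
-- describe fooValue2 == "B hello!"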
Now, let's try applying this trick to real lists (though not using Haskell's fancy syntax sugar).
data List a = Nil | Cons a (List a)
Following the same format as before, consuming a list like this involves either giving a value of type r (in case the constructor was Nil) or telling what to do with an a and another List a. At first, this seems problematic, since:
type List a r = (r) -> (a -> List a -> r) -> r
isn't really a good type (it's recursive!). But we can instead demand that the recursive arguments be reduced to r first... then we can adjust this type to make something more reasonable.
type List a r = (r) -> (a -> r -> r) -> r
(Again, we should read the type List a r as being "a thing that consumes a list of as and produces an r".)
There's one final trick that's necessary. What we would like to do is to enforce the requirement that the r that our List a r returns is actually constructed from the arguments we pass. That's a little abstract, so let's give an example of a bad value that happens to have type List a r, but which we'd like to rule out.
badList = \consumeNil consumeCons -> False
Now, badList has type List a Bool, but it's not really a function that consumes a list and produces a Bool, since in some sense there's no list being consumed. We can rule this out by demanding that the type work for any r, no matter what the user wants r to be:
type List a = forall r. (r) -> (a -> r -> r) -> r
This enforces the idea that the only way to get an r that gets us off the ground is to use the (user-supplied) consumeNil function. Can you see how to make this same refinement for our original Foo type?
If it is possible could you try and nudge me in the right direction without telling me the answer.
It's possible, in more than one way. But your main problem here is that you've not implemented lists. You've implemented fixed-size vectors whose length is encoded in the type.
Compare the types from adding an element to the head of a list vs. your implementation:
(:) :: a -> [a] -> [a]
prepend :: a -> b -> (a, b)
To construct an equivalent of the built-in list type, you'd need a function like prepend with a type resembling a -> b -> b. And if you want your lists to be parameterized by element type in a straightforward way, you need the type to further resemble a -> f a -> f a.
Is this even possible in a static language?
You're also on to something here, in that the encoding you're using works fine in something like Scheme. Languages with "dynamic" type systems can be regarded as having a single static type with implicit conversions and metadata attached, which obviously solves the type mismatch problem in a very extreme way!
I cannot think of how to write an at (!!) function.
Recalling that your "lists" actually encode their length in their type, it should be easy to see why it's difficult to write functions that do anything other than increment/decrement the length. You can actually do this, but it requires elaborate encoding and more advanced type system features. A hint in this direction is that you'll need to use type-level numbers as well. You'd probably enjoy doing this as an exercise as well, but it's much more advanced than encoding lists.
Solution A - nested tuples:
Your lists are really nested tuples - for example, they can hold items of different types, and their type reveals their length.
It is possible to write an indexing-like function for nested tuples, but it is ugly, and it won't correspond to Prelude's lists. Something like this:
class List a b where ...
instance List () b where ...
instance List a b => List (b,a) b where ...
Solution B - use data
I recommend using the data construct. Tuples are internally something like this:
data (,) a b = Pair a b
so you aren't avoiding data. The division between "custom types" and "primitive types" is rather artificial in Haskell, as opposed to C.
Solution C - use newtype:
If you are fine with newtype but not data:
newtype List a = List (Maybe (a, List a))
Solution D - rank-2-types:
Use rank-2-types:
type List a = forall b. b -> (a -> b -> b) -> b
list :: List Int
list = \n c -> c 1 (c 2 n) -- [1,2]
and write functions for them. I think this is closest to your goal. Google for "Church encoding" if you need more hints.
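As a hedged illustration of how such an encoding relates back to ordinary lists (this is not the at function, just a hypothetical conversion; it assumes the List a synonym above and the RankNTypes extension):

toList :: List a -> [a]
toList xs = xs [] (:)

-- toList list == [1,2]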
Let's set aside at, and just think about your first four functions for the moment. You haven't given them type signatures, so let's look at those; they'll make things much clearer. The types are
cons :: a -> (a, ())
prepend :: a -> b -> (a, b)
head :: (a, b) -> a
tail :: (a, b) -> b
Hmmm. Compare these to the types of the corresponding Prelude functions[1]:
return :: a -> [a]
(:) :: a -> [a] -> [a]
head :: [a] -> a
tail :: [a] -> [a]
The big difference is that, in your code, there's nothing that corresponds to the list type, []. What would such a type be? Well, let's compare, function by function.
cons/return: here, (a,()) corresponds to [a]
prepend/(:): here, both b and (a,b) correspond to [a]
head: here, (a,b) corresponds to [a]
tail: here, (a,b) corresponds to [a]
It's clear, then, that what you're trying to say is that a list is a pair. And prepend indicates that you then expect the tail of the list to be another list. So what would that make the list type? You'd want to write type List a = (a,List a) (although this would leave out (), your empty list, but I'll get to that later), but you can't do this—type synonyms can't be recursive.
After all, think about what the type of at/!! would be. In the prelude, you have (!!) :: [a] -> Int -> a. Here, you might try at :: (a,b) -> Int -> a, but this won't work; you have no way to convert a b into an a. So you really ought to have at :: (a,(a,b)) -> Int -> a, but of course this won't work either. You'll never be able to work with the structure of the list (neatly), because you'd need an infinite type.
Now, you might argue that your type does stop, because () will finish a list. But then you run into a related problem: now, a length-zero list has type (), a length-one list has type (a,()), a length-two list has type (a,(a,())), etc. This is the problem: there is no single "list type" in your implementation, and so at can't have a well-typed first parameter.
You have hit on something, though; consider the definition of lists:
data List a = []
            | a : [a]
Here, [] :: [a], and (:) :: a -> [a] -> [a]. In other words, a list is isomorphic to something which is either a singleton value, or a pair of a value and a list:
newtype List' a = List' (Either () (a,List' a))
You were trying to use the same trick without creating a type, but it's this creation of a new type which allows you to get the recursion. And it's exactly your missing recursion which allows lists to have a single type.
[1] On a related note, cons should be called something like singleton, and prepend should be cons, but that's not important right now.
You can implement the datatype List a as a pair (f, n), where f :: Int -> a and n :: Int is the length of the list:
type List a = (Int -> a, Int)
Implementing the empty list, the list operations cons, head, tail, and null, and a function convert :: List a -> [a] is left as an easy exercise.
(Disclaimer: stole this from Bird's Introduction to Functional Programming in Haskell.)
Of course, you could represent tuples via functions as well. And then True and False and the natural numbers ...
I tried to learn the meaning of arrows, but I didn't understand them.
I used the Wikibooks tutorial. I think Wikibook's problem is mainly that it seems to be written for somebody who already understands the topic.
Can somebody explain what arrows are and how I can use them?
I don't know a tutorial, but I think it's easiest to understand arrows if you look at some concrete examples. The biggest problem I had learning how to use arrows was that none of the tutorials or examples actually show how to use arrows, just how to compose them. So, with that in mind, here's my mini-tutorial. I'll examine two different arrows: functions and a user-defined arrow type MyArr.
-- type representing a computation
data MyArr b c = MyArr (b -> (c,MyArr b c))
1) An Arrow is a calculation from input of a specified type to output of a specified type. A fully applied arrow type has three parts: the arrow type constructor itself, the input type, and the output type. Looking at the instance heads for arrow instances we find:
instance Arrow (->) where
instance Arrow MyArr where
The Arrow (either (->) or MyArr) is an abstraction of a computation.
For a function b -> c, b is the input and c is the output.
For a MyArr b c, b is the input and c is the output.
2) To actually run an arrow computation, you use a function specific to your arrow type. For functions you simply apply the function to an argument. For other arrows, there needs to be a separate function (just like runIdentity, runState, etc. for monads).
-- run a function arrow
runF :: (b -> c) -> b -> c
runF = id
-- run a MyArr arrow, discarding the remaining computation
runMyArr :: MyArr b c -> b -> c
runMyArr (MyArr step) = fst . step
3) Arrows are frequently used to process a list of inputs. For functions these can be done in parallel, but for some arrows output at any given step depends upon previous inputs (e.g. keeping a running total of inputs).
-- run a function arrow over multiple inputs
runFList :: (b -> c) -> [b] -> [c]
runFList f = map f
-- run a MyArr over multiple inputs.
-- Each step of the computation gives the next step to use
runMyArrList :: MyArr b c -> [b] -> [c]
runMyArrList _ [] = []
runMyArrList (MyArr step) (b:bs) = let (this, step') = step b
                                   in  this : runMyArrList step' bs
This is one reason Arrows are useful. They provide a computation model that can implicitly make use of state without ever exposing that state to the programmer. The programmer can use arrowized computations and combine them to create sophisticated systems.
Here's a MyArr that keeps count of the number of inputs it has received:
-- count the number of inputs received:
count :: MyArr b Int
count = count' 0
  where
    count' n = MyArr (\_ -> (n+1, count' (n+1)))
Now the function runMyArrList count will take a list of length n as input and return a list of Ints from 1 to n.
Note that we still haven't used any "arrow" functions, that is either Arrow class methods or functions written in terms of them.
4) Most of the code above is specific to each Arrow instance[1]. Everything in Control.Arrow (and Control.Category) is about composing arrows to make new arrows. If we pretend that Category is part of Arrow instead of a separate class:
-- combine two arrows in sequence
(>>>) :: Arrow a => a b c -> a c d -> a b d
-- the function arrow instance
-- (>>>) :: (b -> c) -> (c -> d) -> (b -> d)
-- this is just flip (.)
-- MyArr instance
-- (>>>) :: MyArr b c -> MyArr c d -> MyArr b d
The >>> function takes two arrows and uses the output of the first as input to the second.
Here's another operator, commonly called "fanout":
-- (&&&) applies two arrows to a single input in parallel
(&&&) :: Arrow a => a b c -> a b c' -> a b (c,c')
-- function instance type
-- (&&&) :: (b -> c) -> (b -> c') -> (b -> (c,c'))
-- MyArr instance type
-- (&&&) :: MyArr b c -> MyArr b c' -> MyArr b (c,c')
-- first and second omitted for brevity, see the accepted answer from KennyTM's link
-- for further details.
Since Control.Arrow provides a means to combine computations, here's one example:
-- function that, given an input n, returns "n+1" and "n*2"
calc1 :: Int -> (Int,Int)
calc1 = (+1) &&& (*2)
I've frequently found functions like calc1 useful in complicated folds, or functions that operate on pointers for example.
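For example, a hypothetical follow-up that feeds both results of calc1 (assumed from above) into a further arrow:

import Control.Arrow

calc2 :: Int -> Int
calc2 = calc1 >>> arr (uncurry (+))   -- calc2 5 == (5+1) + (5*2) == 16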
The Monad type class provides us with a means to combine monadic computations into a single new monadic computation using the >>= function. Similarly, the Arrow class provides us with means to combine arrowized computations into a single new arrowized computation using a few primitive functions (first, arr, and ***, with >>> and id from Control.Category). Also similar to Monads, the question of "What does an arrow do?" can't be generally answered. It depends on the arrow.
Unfortunately I don't know of many examples of arrow instances in the wild. Functions and FRP seem to be the most common applications. HXT is the only other significant usage that comes to mind.
[1] Except count. It's possible to write a count function that does the same thing for any instance of ArrowLoop.
From a glance at your history on Stack Overflow, I'm going to assume that you're comfortable with some of the other standard type classes, particularly Functor and Monoid, and start with a brief analogy from those.
The single operation on Functor is fmap, which serves as a generalized version of map on lists. This is pretty much the entire purpose of the type class; it defines "things that you can map over". So, in a sense Functor represents a generalization of that particular aspect of lists.
The operations for Monoid are generalized versions of the empty list and (++), and it defines "things that can be combined associatively, with a particular thing that's an identity value". Lists are pretty much the simplest thing that fits that description, and Monoid represents a generalization of that aspect of lists.
In the same way as the above two, the operations on the Category type class are generalized versions of id and (.), and it defines "things connecting two types in a particular direction, that can be connected head-to-tail". So, this represents a generalization of that aspect of functions. Notably not included in the generalization are currying or function application.
The Arrow type class builds off of Category, but the underlying concept is the same: Arrows are things that compose like functions and have an "identity arrow" defined for any type. The additional operations defined on the Arrow class itself just define a way to lift an arbitrary function to an Arrow and a way to combine two arrows "in parallel" as a single arrow between tuples.
So, the first thing to keep in mind here is that expressions building Arrows are, essentially, elaborate function composition. The combinators like (***) and (>>>) are for writing "pointfree" style, while proc notation gives a way to assign temporary names to inputs and outputs while wiring things up.
A useful thing to note here is that, even though Arrows are sometimes described as being "the next step" from Monads, there's really not a terribly meaningful relationship there. For any Monad you can work with Kleisli arrows, which are just functions with a type like a -> m b. The (<=<) operator in Control.Monad is arrow composition for these. On the other hand, Arrows don't get you a Monad unless you also include the ArrowApply class. So there's no direct connection as such.
The key difference here is that whereas Monads can be used to sequence computations and do things step-by-step, Arrows are in some sense "timeless" just like regular functions. They can include extra machinery and functionality that gets spliced in by (.), but it's more like building a pipeline, not accumulating actions.
The other related type classes add additional functionality to an arrow, such as being able to combine arrows with Either as well as (,).
My favorite example of an Arrow is stateful stream transducers, which look something like this:
data StreamTrans a b = StreamTrans (a -> (b, StreamTrans a b))
A StreamTrans arrow converts an input value to an output and an "updated" version of itself; consider the ways that this differs from a stateful Monad.
Writing instances for Arrow and its related type classes for the above type may be a good exercise for understanding how they work!
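For reference, here is one possible sketch of those instances (just one way to do the exercise; the details are my own, not taken from the answer above):

import Prelude hiding (id, (.))
import Control.Category
import Control.Arrow

data StreamTrans a b = StreamTrans (a -> (b, StreamTrans a b))

instance Category StreamTrans where
  id = StreamTrans (\a -> (a, id))
  StreamTrans f . StreamTrans g = StreamTrans $ \a ->
    let (b, g') = g a
        (c, f') = f b
    in  (c, f' . g')

instance Arrow StreamTrans where
  arr f = StreamTrans (\a -> (f a, arr f))
  first (StreamTrans f) = StreamTrans $ \(b, d) ->
    let (c, f') = f b
    in  ((c, d), first f')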
I also wrote a similar answer previously that you may find helpful.
I'd like to add that arrows in Haskell are far simpler than they might appear
based on the literature. They are simply abstractions of functions.
To see how this is practically useful, consider that you have a bunch of
functions you want to compose, where some of them are pure and some are
monadic. For example, f :: a -> b, g :: b -> m1 c, and h :: c -> m2 d.
Knowing each of the types involved, I could build a composition by hand, but
the output type of the composition would have to reflect the intermediate
monad types (in the above case, m1 (m2 d)). What if I just wanted to treat
the functions as if they were just a -> b, b -> c, and c -> d? That is,
I want to abstract away the presence of monads and reason only about the
underlying types. I can use arrows to do exactly this.
Here is an arrow which abstracts away the presence of IO for functions in the
IO monad, such that I can compose them with pure functions without the
composing code needing to know that IO is involved. We start by defining an
IOArrow to wrap IO functions:
import Prelude hiding (id, (.))
import Control.Category
import Control.Arrow
import Control.Monad ((<=<))

data IOArrow a b = IOArrow { runIOArrow :: a -> IO b }

instance Category IOArrow where
  id = IOArrow return
  IOArrow f . IOArrow g = IOArrow $ f <=< g

instance Arrow IOArrow where
  arr f = IOArrow $ return . f
  first (IOArrow f) = IOArrow $ \(a, c) -> do
    x <- f a
    return (x, c)
Then I make some simple functions that I want to compose:
foo :: Int -> String
foo = show
bar :: String -> IO Int
bar = return . read
And use them:
main :: IO ()
main = do
  let f = arr (++ "!") . arr foo . IOArrow bar . arr id
  result <- runIOArrow f "123"
  putStrLn result
Here I am calling IOArrow and runIOArrow, but if I were passing these arrows
around in a library of polymorphic functions, they would only need to accept
arguments of type "Arrow a => a b c". None of the library code would need to
be made aware that a monad was involved. Only the creator and end user of the
arrow needs to know.
Generalizing IOArrow to work for functions in any Monad is called the "Kleisli
arrow", and there is already a builtin arrow for doing just that:
main :: IO ()
main = do
  let g = arr (++ "!") . arr foo . Kleisli bar . arr id
  result <- runKleisli g "123"
  putStrLn result
You could of course also use arrow composition operators, and proc syntax, to
make it a little clearer that arrows are involved:
arrowUser :: Arrow a => a String String -> a String String
arrowUser f = proc x -> do
  y <- f -< x
  returnA -< y
main :: IO ()
main = do
  let h = arr (++ "!")
          <<< arr foo
          <<< Kleisli bar
          <<< arr id
  result <- runKleisli (arrowUser h) "123"
  putStrLn result
Here it should be clear that although main knows the IO monad is involved,
arrowUser does not. There would be no way of "hiding" IO from arrowUser
without arrows -- not without resorting to unsafePerformIO to turn the
intermediate monadic value back into a pure one (and thus losing that context
forever). For example:
arrowUser' :: (String -> String) -> String -> String
arrowUser' f x = f x
main' :: IO ()
main' = do
  let h = (++ "!") . foo . unsafePerformIO . bar . id
      result = arrowUser' h "123"
  putStrLn result
Try writing that without unsafePerformIO, and without arrowUser' having to
deal with any Monad type arguments.
There are John Hughes's Lecture Notes from an AFP (Advanced Functional Programming) workshop. Note they were written before the Arrow classes were changed in the Base libraries:
http://www.cse.chalmers.se/~rjmh/afp-arrows.pdf
When I began exploring Arrow compositions (in this example, essentially monadic composition), my approach was to break out of the functional syntax and composition they are most commonly associated with, and to begin by understanding the principles through a more explicit, step-by-step breakdown. With this in mind, I find the following breakdown more intuitive:
function(x) {
  func1result = func1(x)
  if (func1result == null) {
    return null
  } else {
    func2result = func2(func1result)
    if (func2result == null) {
      return null
    } else {
      return func3(func2result)
    }
  }
}
So, essentially, for some value x: first call one function that we assume may return null (func1), then a second function that may also return null (func2), and finally a third function that may return null as well (func3). Given the value x, pass x to func1; only if it does not return null, pass the result to func2; and only if that result is not null, pass it to func3. It's more deterministic, and the control flow allows you to construct more sophisticated exception handling.
Here we can utilise the (Kleisli) arrow composition (func3 <=< func2 <=< func1) x, which applies func1 first.
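In Haskell terms, a minimal sketch of that pipeline with the Maybe monad (the concrete functions are hypothetical stand-ins for func1..func3 above):

import Control.Monad ((<=<))

func1, func2, func3 :: Int -> Maybe Int
func1 x = if x > 0  then Just (x + 1) else Nothing
func2 x = if even x then Just (x * 2) else Nothing
func3   = Just . subtract 1

pipeline :: Int -> Maybe Int
pipeline = func3 <=< func2 <=< func1   -- func1 runs first, then func2, then func3

-- pipeline 3 == Just 7;  pipeline 0 == Nothing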