I have some pretty common Haskell boilerplate that shows up in a lot of places. It looks something like this (when instantiating classes):
a <= b = (modify a) <= (modify b)
like this (with normal functions):
fn x y z = fn (foo x) (foo y) (foo z)
and sometimes even with tuples, as in:
mod (x, y) = (alt x, alt y)
It seems like there should be an easy way to reduce all of this boilerplate and not have to repeat myself quite so much. (These are simple examples, but it does get annoying). I imagine that there are abstractions created for removing such boilerplate, but I'm not sure what they're called nor where to look. Can any haskellites point me in the right direction?
For the (<=) case, consider defining compare instead; you can then use Data.Ord.comparing, like so:
instance Ord Foo where
compare = comparing modify
Note that comparing can simply be defined as comparing f = compare `on` f, using Data.Function.on.
For your fn example, it's not clear. There's no way to simplify this type of definition in general. However, I don't think the boilerplate is too bad in this instance.
For mod:
mod = alt *** alt
using Control.Arrow.(***) — read a b c as b -> c in the type signature; arrows are just a general abstraction (like functors or monads) of which functions are an instance. You might like to define both = join (***) (which is itself shorthand for both f = f *** f); I know at least one other person who uses this alias, and I think it should be in Control.Arrow proper.
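To make that concrete, here is a minimal sketch (both is not in any standard module; the name is just the alias suggested above):
import Control.Arrow ((***))
import Control.Monad (join)

-- join (***) duplicates its argument: both f = f *** f
both :: (a -> b) -> (a, a) -> (b, b)
both = join (***)

modPair :: (Int, Int) -> (Int, Int)
modPair = both (+ 1)   -- e.g. both (+ 1) (3, 4) == (4, 5)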
So, in general, the answer is: combinators, combinators, combinators! This ties directly in with point-free style. It can be overdone, but when the combinators exist for your situation, such code can not only be cleaner and shorter, it can be easier to read: you only have to learn an abstraction once, and can then apply it everywhere when reading code.
I suggest using Hoogle to find these combinators; when you think you see a general pattern underlying a definition, try abstracting out what you think the common parts are, taking the type of the result, and searching for it on Hoogle. You might find a combinator that does just what you want.
So, for instance, for your mod case, you could abstract out alt, yielding \f (a,b) -> (f a, f b), then search for its type, (a -> b) -> (a, a) -> (b, b) — there's an exact match, but it's in the fgl graph library, which you don't want to depend on. Still, you can see how the ability to search by type can be very valuable indeed!
There's also a command-line version of Hoogle with GHCi integration; see its HaskellWiki page for more information.
(There's also Hayoo, which searches the entirety of Hackage, but is slightly less clever with types; which one you use is up to personal preference.)
For some of the boilerplate, Data.Function.on can be helpful, although in these examples it doesn't gain much:
instance Ord Foo where
(<=) = (<=) `on` modify -- maybe
mod = uncurry ((,) `on` alt) -- Not really
One of the most powerful ways pattern matching and lazy evaluation can come together is to bypass expensive computation. However I am still shocked that Haskell only permits the pattern matching of constructors, which is barely pattern matching at all!
Is there some way to implement the following functionality in Haskell:
exp :: Double -> Double
exp 0 = 1
exp (log a) = a
--...
log :: Double -> Double
log 1 = 0
log (exp a) = a
--...
The original problem I found this useful in was writing an associativity preference / rule in a Monoid class:
class Monoid m where
iden :: m
(+) :: m -> m -> m
(+) iden a = a
(+) a iden = a
--Line with issue
(+) ((+) a b) c = (+) a ((+) b c)
There's no reason to be shocked about this. How would it be even remotely feasible to pattern match on arbitrary functions? Most functions aren't invertible, and even for those that are it is typically nontrivial to actually compute the inverses.
Of course the compiler could in principle handle trivial examples like replacing literal exp (log x) with x, but that would be almost completely useless in practice (in the unlikely event somebody were to literally write that, they could as well reduce it right there in the source), and would generally lead to very weird unpredictable behaviour if inlining order changes whether or not the compiler can see that a match applies.
(There is however a thing called rewrite rules, which is similar to what you proposed but is seen as only an optimisation tool.)
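For illustration, a GHC rewrite rule of the kind alluded to might look like the following; it is purely an optimisation hint that GHC may or may not apply, and this particular rule is numerically dubious for Doubles:
{-# RULES
  "exp/log"  forall x.  exp (log x) = x
  #-}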
Even the two lines from the Monoid class that don't error don't make sense, but for different reasons. First, when you write
(+) iden a = a
(+) a iden = a
this doesn't do what you seem to think. These are actually two redundant catch-all clauses, equivalent to
(+) x y = y
(+) x y = x
...which is an utterly nonsensical thing to write. What you want to state could in fact be written as
default (+) :: Eq m => m -> m -> m
x+y
| x==iden = y
| y==iden = x
| otherwise = ...
...but this still doesn't accomplish anything useful, because this is never going to be a full definition of +. And as soon as a concrete instance even begins to define its own + operator, the complete default one is going to be ignored.
Moreover, if you were to have these kinds of clauses all over your Haskell project, it would in practice just mean you're performing a lot of unnecessary, redundant extra checks. A law-abiding Monoid instance needs to fulfill mempty <> a ≡ a anyway, so there's no point explicitly special-casing it.
I think what you really want is tests. It would make sense to specify laws right in a class declaration in a way that they could automatically be checked, but standard Haskell has no syntax for this. Most projects just do it in a separate test suite, using QuickCheck to generate example inputs. I think there's also a tool that allows you to put the test cases right in your source file, but I forgot what it's called.
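For example, a hedged sketch of the usual separate-test-suite approach, checking the monoid laws at a concrete type with QuickCheck (the property names are mine):
import Test.QuickCheck

prop_leftIdentity :: [Int] -> Bool
prop_leftIdentity a = mempty <> a == a

prop_rightIdentity :: [Int] -> Bool
prop_rightIdentity a = a <> mempty == a

prop_assoc :: [Int] -> [Int] -> [Int] -> Bool
prop_assoc a b c = (a <> b) <> c == a <> (b <> c)

main :: IO ()
main = do
  quickCheck prop_leftIdentity
  quickCheck prop_rightIdentity
  quickCheck prop_assoc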
To me it seems that you can always pass function arguments rather than using a typeclass. For example rather than defining equality typeclass:
class Eq a where
(==) :: a -> a -> Bool
And using it in other functions to indicate type argument must be an instance of Eq:
elem :: (Eq a) => a -> [a] -> Bool
Can't we just define our elem function without using a typeclass and instead pass a function argument that does the job?
Yes. This is called "dictionary passing style". Sometimes when I am doing some especially tricky things, I need to scrap a typeclass and turn it into a dictionary, because dictionary passing is more powerful1, yet often quite cumbersome, making conceptually simple code look quite complicated. I use dictionary passing style sometimes in languages that aren't Haskell to simulate typeclasses (but have learned that that is usually not as great an idea as it sounds).
Of course, whenever there is a difference in expressive power, there is a trade-off. While you can use a given API in more ways if it is written using DPS, the API gets more information if you can't. One way this shows up in practice is in Data.Set, which relies on the fact that there is only one Ord dictionary per type. The Set stores its elements sorted according to Ord, and if you build a set with one dictionary and then insert an element using a different one, as would be possible with DPS, you could break Set's invariant and cause it to crash. This uniqueness problem can be mitigated using a phantom existential type to mark the dictionary, but, again, at the cost of quite a bit of annoying complexity in the API. This also shows up in pretty much the same way in the Typeable API.
The uniqueness bit doesn't come up very often. What typeclasses are great at is writing code for you. For example,
catProcs :: (i -> Maybe String) -> (i -> Maybe String) -> (i -> Maybe String)
catProcs f g = f <> g
which takes two "processors" which take an input and might give an output, and concatenates them, flattening away Nothing, would have to be written in DPS something like this:
catProcs f g = (<>) (funcSemi (maybeSemi listSemi)) f g
We essentially had to spell out the type we're using it at again, even though we already spelled it out in the type signature, and even that was redundant because the compiler already knows all the types. Because there's only one way to construct a given Semigroup at a type, the compiler can do it for you. This has a "compound interest" type effect when you start defining a lot of parametric instances and using the structure of your types to compute for you, as in the Data.Functor.* combinators, and this is used to great effect with deriving via where you can essentially get all the "standard" algebraic structure of your type written for you.
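For concreteness, a minimal sketch of what that dictionary-passing version assumes; the record type and the field sappend are my invention (standing in for the (<>) selector used above), while funcSemi, maybeSemi and listSemi are the illustrative builder names:
newtype SemigroupDict a = SemigroupDict { sappend :: a -> a -> a }

listSemi :: SemigroupDict [a]
listSemi = SemigroupDict (++)

maybeSemi :: SemigroupDict a -> SemigroupDict (Maybe a)
maybeSemi d = SemigroupDict combine
  where
    combine Nothing  my       = my
    combine mx       Nothing  = mx
    combine (Just x) (Just y) = Just (sappend d x y)

funcSemi :: SemigroupDict b -> SemigroupDict (i -> b)
funcSemi d = SemigroupDict (\f g i -> sappend d (f i) (g i))

catProcs' :: (i -> Maybe String) -> (i -> Maybe String) -> (i -> Maybe String)
catProcs' = sappend (funcSemi (maybeSemi listSemi))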
And don't even get me started on MPTC's and fundeps, which feed information back into typechecking and inference. I have never tried converting such a thing to DPS -- I suspect it would involve passing around a lot of type equality proofs -- but in any case I'm sure it would be a lot more work for my brain than I would be comfortable with.
--
1Unless you use reflection in which case they become equivalent in power -- but reflection can also be cumbersome to use.
Yes. That (called dictionary passing) is basically what the compiler does to typeclasses anyway. For that function, done literally, it would look a bit like this:
elemBy :: (a -> a -> Bool) -> a -> [a] -> Bool
elemBy _ _ [] = False
elemBy eq x (y:ys) = eq x y || elemBy eq x ys
Calling elemBy (==) x xs is now equivalent to elem x xs. And in this specific case, you can go a step further: eq has the same first argument every time, so you can make it the caller's responsibility to apply that, and end up with this:
elemBy2 :: (a -> Bool) -> [a] -> Bool
elemBy2 _ [] = False
elemBy2 eqx (y:ys) = eqx y || elemBy2 eqx ys
Calling elemBy2 (x ==) xs is now equivalent to elem x xs.
...Oh wait. That's just any. (And in fact, in the standard library, elem = any . (==).)
I don't think I quite understand currying, since I'm unable to see any massive benefit it could provide. Perhaps someone could enlighten me with an example demonstrating why it is so useful. Does it truly have benefits and applications, or is it just an over-appreciated concept?
(There is a slight difference between currying and partial application, although they're closely related; since they're often mixed together, I'll deal with both terms.)
The place where I first realized the benefits was when I saw operator sections:
incElems = map (+1)
--non-curried equivalent: incElems = (\elems -> map (\i -> (+) 1 i) elems)
IMO, this is totally easy to read. Now, if the type of (+) was (Int,Int) -> Int *, which is the uncurried version, it would (counter-intuitively) result in an error -- but curried, it works as expected, and has type [Int] -> [Int].
You mentioned C# lambdas in a comment. In C#, you could have written incElems like so, given a function plus:
var incElems = xs => xs.Select(x => plus(1,x))
If you're used to point-free style, you'll see that the x here is redundant. Logically, that code could be reduced to
var incElems = xs => xs.Select(curry(plus)(1))
which is awful due to the lack of automatic partial application with C# lambdas. And that's the crucial point to decide where currying is actually useful: mostly when it happens implicitly. For me, map (+1) is the easiest to read, then comes .Select(x => plus(1,x)), and the version with curry should probably be avoided, if there is no really good reason.
Now, as long as it stays readable, the benefits sum up to shorter, more readable and less cluttered code -- unless some abuse of point-free style is done with it (I do love (.).(.), but it is... special).
Also, lambda calculus would be impossible without curried functions, since it has only single-argument (but therefore higher-order) functions.
* Of course it's actually Num a => a -> a -> a, but it's more readable like this for the moment.
Update: how currying actually works.
Look at the type of plus in C#:
int plus(int a, int b) {..}
You have to give it a tuple of values -- not in C# terms, but mathematically speaking; you can't just leave out the second value. In Haskell terms, that's
plus :: (Int,Int) -> Int,
which could be used like
incElem = map (\x -> plus (1, x)) -- equal to .Select (x => plus (1, x))
That's way too many characters to type. Suppose you'd want to do this more often in the future. Here's a little helper:
curry f = \x -> (\y -> f (x,y))
plus' = curry plus
which gives
incElem = map (plus' 1)
Let's apply this to a concrete value.
incElem [1]
= (map (plus' 1)) [1]
= [plus' 1 1]
= [(curry plus) 1 1]
= [(\x -> (\y -> plus (x,y))) 1 1]
= [plus (1,1)]
= [2]
Here you can see curry at work. It turns a standard haskell style function application (plus' 1 1) into a call to a "tupled" function -- or, viewed at a higher level, transforms the "tupled" into the "untupled" version.
Fortunately, most of the time, you don't have to worry about this, as there is automatic partial application.
It's not the best thing since sliced bread, but if you're using lambdas anyway, it's easier to use higher-order functions without using lambda syntax. Compare:
map (max 4) [0,6,9,3] --[4,6,9,4]
map (\i -> max 4 i) [0,6,9,3] --[4,6,9,4]
These kinds of constructs come up often enough in functional programming that it's a nice shortcut to have, and it lets you think about the problem from a slightly higher level--you're mapping against the "max 4" function, not some random function that happens to be defined as (\i -> max 4 i). It lets you start to think in higher levels of indirection more easily:
let numOr4 = map $ max 4
let numOr4' = (\xs -> map (\i -> max 4 i) xs)
numOr4 [0,6,9,3] --ends up being [4,6,9,4] either way;
--which do you think is easier to understand?
That said, it's not a panacea; sometimes your function's parameters will be the wrong order for what you're trying to do with currying, so you'll have to resort to a lambda anyway. However, once you get used to this style, you start to learn how to design your functions to work well with it, and once those neurons start to connect inside your brain, previously complicated constructs can start to seem obvious in comparison.
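Two small illustrations of dodging the wrong argument order without a full lambda (the function names are just mine):
decAll :: [Int] -> [Int]
decAll = map (subtract 1)     -- instead of map (\x -> x - 1); the section (-1) would parse as negative one

halveAll :: [Double] -> [Double]
halveAll = map (flip (/) 2)   -- same as map (/ 2), written with flip for the general case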
One benefit of currying is that it allows partial application of functions without the need of any special syntax/operator. A simple example:
mapLength = map length
mapLength ["ab", "cde", "f"]
>>> [2, 3, 1]
mapLength ["x", "yz", "www"]
>>> [1, 2, 3]
map :: (a -> b) -> [a] -> [b]
length :: [a] -> Int
mapLength :: [[a]] -> [Int]
The map function can be considered to have type (a -> b) -> ([a] -> [b]) because of currying, so when length is applied as its first argument, it yields the function mapLength of type [[a]] -> [Int].
Currying has the convenience features mentioned in other answers, but it also often serves to simplify reasoning about the language or to implement some code much more easily than it could be otherwise. For example, currying means that any function at all has a type that's compatible with a -> b. If you write some code whose type involves a -> b, that code can be made to work with any function at all, no matter how many arguments it takes.
The best known example of this is the Applicative class:
class Functor f => Applicative f where
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
And an example use:
-- All possible products of numbers taken from [1..5] and [1..10]
example = pure (*) <*> [1..5] <*> [1..10]
In this context, pure and <*> adapt any function of type a -> b to work with lists of type [a]. Because of partial application, this means you can also adapt functions of type a -> b -> c to work with [a] and [b], or a -> b -> c -> d with [a], [b] and [c], and so on.
The reason this works is because a -> b -> c is the same thing as a -> (b -> c):
(+) :: Num a => a -> a -> a
pure (+) :: (Applicative f, Num a) => f (a -> a -> a)
[1..5], [1..10] :: Num a => [a]
pure (+) <*> [1..5] :: Num a => [a -> a]
pure (+) <*> [1..5] <*> [1..10] :: Num a => [a]
Another, different use of currying is that Haskell allows you to partially apply type constructors. E.g., if you have this type:
data Foo a b = Foo a b
...it actually makes sense to write Foo a in many contexts, for example:
instance Functor (Foo a) where
fmap f (Foo a b) = Foo a (f b)
I.e., Foo is a two-parameter type constructor with kind * -> * -> *; Foo a, the partial application of Foo to just one type, is a type constructor with kind * -> *. Functor is a type class that can only be instantiated for type constructors of kind * -> *. Since Foo a is of this kind, you can make a Functor instance for it.
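As a quick, made-up usage example (reusing the Foo defined above), fmap touches only the second field:
example :: Foo String Int
example = fmap (+ 1) (Foo "label" 41)   -- Foo "label" 42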
The "no-currying" form of partial application works like this:
We have a function f : (A ✕ B) → C
We'd like to apply it partially to some a : A
To do this, we build a closure out of a and f (we don't evaluate f at all, for the time being)
Then some time later, we receive the second argument b : B
Now that we have both the A and B argument, we can evaluate f in its original form...
So we recall a from the closure, and evaluate f(a,b).
A bit complicated, isn't it?
When f is curried in the first place, it's rather simpler:
We have a function f : A → B → C
We'd like to apply it partially to some a : A – which we can just do: f a
Then some time later, we receive the second argument b : B
We apply the already evaluated f a to b.
So far so nice, but more important than being simple, this also gives us extra possibilities for implementing our function: we may be able to do some calculations as soon as the a argument is received, and these calculations won't need to be done later, even if the function is evaluated with multiple different b arguments!
To give an example, consider this audio filter, an infinite impulse response filter. It works like this: for each audio sample, you feed an "accumulator function" (f) with some state parameter (in this case, a simple number, 0 at the beginning) and the audio sample. The function then does some magic, and spits out the new internal state1 and the output sample.
Now here's the crucial bit – what kind of magic the function does depends on the coefficient2 λ, which is not quite a constant: it depends both on what cutoff frequency we'd like the filter to have (this governs "how the filter will sound") and on what sample rate we're processing in. Unfortunately, the calculation of λ is a bit more complicated (lp1stCoeff $ 2*pi * (νᵥ ~*% δs)) than the rest of the magic, so we wouldn't like having to do this for every single sample, all over again. Quite annoying, because νᵥ and δs are almost constant: they change very seldom, certainly not at each audio sample.
But currying saves the day! We simply calculate λ as soon as we have the necessary parameters. Then, at each of the many many audio samples to come, we only need to perform the remaining, very easy magic: yⱼ = yⱼ₋₁ + λ ⋅ (xⱼ − yⱼ₋₁). So we're being efficient, and still keeping a nice safe referentially transparent purely-functional interface.
1 Note that this kind of state-passing can generally be done more nicely with the State or ST monad, that's just not particularly beneficial in this example
2 Yes, this is a lambda symbol. I hope I'm not confusing anybody – fortunately, in Haskell it's clear that lambda functions are written with \, not with λ.
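To make the currying benefit concrete, here is a hedged, simplified sketch; lowpassStep and its coefficient formula are my own stand-ins, not the code the answer refers to:
lowpassStep :: Double -> Double -> (Double -> Double -> Double)
lowpassStep cutoff sampleRate =
  let lambda = 1 - exp (negate (2 * pi * cutoff / sampleRate))  -- done once, when both parameters are known
  in  \yPrev x -> yPrev + lambda * (x - yPrev)                  -- done cheaply for every sample

filterSamples :: [Double] -> [Double]
filterSamples = drop 1 . scanl (lowpassStep 1000 44100) 0       -- partially apply once, then fold over the samples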
It's somewhat dubious to ask what the benefits of currying are without specifying the context in which you're asking the question:
In some cases, like functional languages, currying is merely a local choice: you could replace curried functions with explicitly tupled domains. However, this isn't to say that currying is useless in these languages. In some sense, programming with curried functions makes you "feel" like you're programming in a more functional style, because you more typically face situations where you're dealing with higher-order functions. Certainly, most of the time, you will "fill in" all of the arguments to a function, but in the cases where you want to use the function in its partially applied form, this is a bit simpler to do in curried form. We typically tell our beginning programmers to use this when learning a functional language just because it feels like better style and reminds them they're programming in more than just C. Having things like curry and uncurry also helps with certain conveniences within functional programming languages; arrows within Haskell are a specific example of where you would use curry and uncurry a bit to apply things to different pieces of an arrow, etc.
In some cases, you want to think about more than functional programs: you can present currying / uncurrying as a way to state the introduction and elimination rules for conjunction (∧) in constructive logic, which provides a connection to a more elegant motivation for why it exists.
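Concretely, the Prelude's curry and uncurry are the two directions of that correspondence; a sketch of the logical reading (the primed names are just to avoid clashing with the Prelude):
-- (A ∧ B) → C and A → (B → C) are interderivable
curry' :: ((a, b) -> c) -> a -> b -> c
curry' f x y = f (x, y)       -- the body builds the pair (∧-introduction)

uncurry' :: (a -> b -> c) -> (a, b) -> c
uncurry' g (x, y) = g x y     -- the body takes the pair apart (∧-elimination)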
In some cases, for example, in Coq, using curried functions versus tupled functions can produce different induction schemes, which may be easier or harder to work with, depending on your applications.
I used to think that currying was simple syntax sugar that saves you a bit of typing. For example, instead of writing
(\ x -> x + 1)
I can merely write
(+1)
The latter is instantly more readable, and less typing to boot.
So if it's just a convenient short cut, why all the fuss?
Well, it turns out that because function types are curried, you can write code which is polymorphic in the number of arguments a function has.
For example, the QuickCheck framework lets you test functions by feeding them randomly-generated test data. It works on any function whose input types can be auto-generated. But, because of currying, the authors were able to rig it so this works with any number of arguments. Were functions not curried, there would be a different testing function for each number of arguments - and that would just be tedious.
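A heavily simplified sketch of the trick (QuickCheck's real Testable class is richer; the class and names below are mine): one inductive instance over a -> b covers every arity, precisely because function types are curried.
import Test.QuickCheck (Arbitrary, arbitrary, generate)

class SimpleTestable prop where
  runOnce :: prop -> IO Bool

instance SimpleTestable Bool where
  runOnce = pure

instance (Arbitrary a, SimpleTestable b) => SimpleTestable (a -> b) where
  runOnce f = do
    x <- generate arbitrary   -- generate one random argument
    runOnce (f x)             -- peel off one argument and recurse

-- e.g. runOnce (\x y -> x + y == y + (x :: Int)) checks a two-argument property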
let a = b in c can be thought as a syntactic sugar for (\a -> c) b, but in a typed setting in general it's not the case. For example, in the Milner calculus let a = \x -> x in (a True, a 1) is typable, but seemingly equivalent (\a -> (a True, a 1)) (\x -> x) is not.
However, the latter is typable in System F with a rank 2 type for the first lambda.
My questions are:
Is let polymorphism a rank-2 feature that sneaked secretly into the otherwise rank-1 world of the Milner calculus?
The purpose of having a separate let construct seems to be to specify which types should be generalized by the type checker, and which should not. Does it serve any other purposes? Are there any reasons to extend more powerful systems, e.g. System F, with a separate let which is not sugar? Are there any papers on the rationale behind the design of the Milner calculus, which no longer seems obvious to me?
Is there the most general type for \a -> (a True, a 1) in System F?
Are there type systems closed under beta expansion? I.e. if P is typable and M N = P then M is typable?
Some clarifications:
By equivalence I mean equivalence modulo type annotations. Is 'System F a la Church' the correct term for that?
I know that in general the principal typing property doesn't hold in F, but a principal type could exist for my particular term.
By let I mean the non-recursive flavour of let. Extension of system F with recursive let is obviously useful as it allows for non-termination.
W.r.t. the four questions asked:
A key insight in this matter is that rather than just typing a lambda-abstraction with a potentially polymorphic argument type, we are typing a (sugared) abstraction that is (1) applied exactly once and, moreover, that is (2) applied to a statically known argument. That is, we can first subject the "argument" (i.e. the definiens of the local definition) to type reconstruction to find its (polymorphic) type; then assign the found type to the "parameter" (the definiendum); and then, finally, type the body in the extended type context.
Note that that is considerably easier than general rank-2 type inference.
Note that, strictly speaking, let .. = .. in .. is only syntactic sugar in System F if you demand that the definiendum carries a type annotation: let .. : .. = .. in .. .
Here are two solutions for T in (\a :: T -> (a True, a 1)) in System F: forall b. (forall a. a -> b) -> (b, b) and forall c d. (forall a b. a -> b) -> (c, d). Note that neither one of them is more general than the other. In general, System F does not admit principal types.
I suppose this holds for the simply typed lambda-calculus?
Types are not preserved under beta-expansion in any calculus that can express the concept of "dead code". You can probably figure out how to write something similar to this in any usable language:
if True then something typable else utter nonsense
For example, let M = (\x y -> x) (something typable) and N = (utter nonsense) and P = (something typable), so that M N = P, and P is typable, but M N isn't.
...rereading your question, I see that you only demand that M be typable, but that seems like a very strange meaning to give to "preserved under beta-expansion" to me. Anyway, I don't see why some argument like the above couldn't apply: simply let M have some untypable dead code in it.
You could type (\a -> (a True, a 1)) (\x -> x) if instead of generalizing only let expressions, you generalized all lambda abstractions. Having done so, one also needs to instantiate type schemas at every use point, not simply at the point where the binder which refers to them is actually used. I don't think there's any problem with this actually, outside of the fact that it's vastly less efficient. I recall some discussion of this in TAPL, in fact, making similar points.
I recall many years ago seeing in a book about lambda calculus (possibly Barendregt) a type system preserved by beta expansion. It had no quantification, but it had conjunction (intersection types) to express that a term needed to be of more than one type simultaneously, as well as a special type omega which every term inhabited. As I recall, the latter avoids Daniel Wagner's dead code objection. While every expression was well-typed, restricting the position of omega in the type allowed you to characterize which expressions had (weak?) head normal forms.
Also if I recall correctly, fully normal form expressions had principal types, which did not contain omega.
For example the principal type of \f x -> f (f x) (the Church numeral 2) would be something like ((A -> B) /\ (B -> C)) -> A -> C
Not able to answer all your very specialized questions, but no, it's not a rank-2 feature. As you write, it's just that let definitions are being quantified, which yields a fully polymorphic rank-1 type unless the definition depends on some monomorphic value in an outer scope.
Please also note that Haskell let is known as let rec in other languages and allows definition of mutually recursive functions and values. This is something you would not want to code manually with lambda-expressions and Y-combinators.
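For comparison, here is what a recursive let gives you for free, written by hand with fix from Data.Function (Haskell's typed stand-in for a Y combinator):
import Data.Function (fix)

lengthViaFix :: [a] -> Int
lengthViaFix = fix (\recur xs -> case xs of
                      []       -> 0
                      (_ : ys) -> 1 + recur ys)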
How do these functions in the Applicative typeclass work?
(<*>) :: f (a -> b) -> f a -> f b
(*>) :: f a -> f b -> f b
(<*) :: f a -> f b -> f a
(That is, if they weren't operators, what might they be called?)
As a side note, if you could rename pure to something more friendly to non-mathematicians, what would you call it?
Sorry, I don't really know my math, so I'm curious how to pronounce the functions in the Applicative typeclass.
Knowing your math, or not, is largely irrelevant here, I think. As you're probably aware, Haskell borrows a few bits of terminology from various fields of abstract math, most notably Category Theory, from whence we get functors and monads. The use of these terms in Haskell diverges somewhat from the formal mathematical definitions, but they're usually close enough to be good descriptive terms anyway.
The Applicative type class sits somewhere between Functor and Monad, so one would expect it to have a similar mathematical basis. The documentation for the Control.Applicative module begins with:
This module describes a structure intermediate between a functor and a monad: it provides pure expressions and sequencing, but no binding. (Technically, a strong lax monoidal functor.)
Hmm.
class (Functor f) => StrongLaxMonoidalFunctor f where
. . .
Not quite as catchy as Monad, I think.
What all this basically boils down to is that Applicative doesn't correspond to any concept that's particularly interesting mathematically, so there are no ready-made terms lying around that capture the way it's used in Haskell. So, set the math aside for now.
If we want to know what to call (<*>) it might help to know what it basically means.
So what's up with Applicative, anyway, and why do we call it that?
What Applicative amounts to in practice is a way to lift arbitrary functions into a Functor. Consider the combination of Maybe (arguably the simplest non-trivial Functor) and Bool (likewise the simplest non-trivial data type).
maybeNot :: Maybe Bool -> Maybe Bool
maybeNot = fmap not
The function fmap lets us lift not from working on Bool to working on Maybe Bool. But what if we want to lift (&&)?
maybeAnd' :: Maybe Bool -> Maybe (Bool -> Bool)
maybeAnd' = fmap (&&)
Well, that's not what we want at all! In fact, it's pretty much useless. We can try to be clever and sneak another Bool into Maybe through the back...
maybeAnd'' :: Maybe Bool -> Bool -> Maybe Bool
maybeAnd'' x y = fmap ($ y) (fmap (&&) x)
...but that's no good. For one thing, it's wrong. For another thing, it's ugly. We could keep trying, but it turns out that there's no way to lift a function of multiple arguments to work on an arbitrary Functor. Annoying!
On the other hand, we could do it easily if we used Maybe's Monad instance:
maybeAnd :: Maybe Bool -> Maybe Bool -> Maybe Bool
maybeAnd x y = do x' <- x
y' <- y
return (x' && y')
Now, that's a lot of hassle just to translate a simple function--which is why Control.Monad provides a function to do it automatically, liftM2. The 2 in its name refers to the fact that it works on functions of exactly two arguments; similar functions exist for 3, 4, and 5 argument functions. These functions are better, but not perfect, and specifying the number of arguments is ugly and clumsy.
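For reference, the liftM2 version of the do-block above (maybeAnd2 is just my name for it):
import Control.Monad (liftM2)

maybeAnd2 :: Maybe Bool -> Maybe Bool -> Maybe Bool
maybeAnd2 = liftM2 (&&)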
Which brings us to the paper that introduced the Applicative type class. In it, the authors make essentially two observations:
Lifting multi-argument functions into a Functor is a very natural thing to do
Doing so doesn't require the full capabilities of a Monad
Normal function application is written by simple juxtaposition of terms, so to make "lifted application" as simple and natural as possible, the paper introduces infix operators to stand in for application, lifted into the Functor, and a type class to provide what's needed for that.
All of which brings us to the following point: (<*>) simply represents function application--so why pronounce it any differently than you do the whitespace "juxtaposition operator"?
But if that's not very satisfying, we can observe that the Control.Monad module also provides a function that does the same thing for monads:
ap :: (Monad m) => m (a -> b) -> m a -> m b
Where ap is, of course, short for "apply". Since any Monad can be Applicative, and ap needs only the subset of features present in the latter, we can perhaps say that if (<*>) weren't an operator, it should be called ap.
We can also approach things from the other direction. The Functor lifting operation is called fmap because it's a generalization of the map operation on lists. What sort of function on lists would work like (<*>)? There's what ap does on lists, of course, but that's not particularly useful on its own.
In fact, there's a perhaps more natural interpretation for lists. What comes to mind when you look at the following type signature?
listApply :: [a -> b] -> [a] -> [b]
There's something just so tempting about the idea of lining the lists up in parallel, applying each function in the first to the corresponding element of the second. Unfortunately for our old friend Monad, this simple operation violates the monad laws if the lists are of different lengths. But it makes a fine Applicative, in which case (<*>) becomes a way of stringing together a generalized version of zipWith, so perhaps we can imagine calling it fzipWith?
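That "zippy" instance does exist in base as the ZipList newtype in Control.Applicative; a small illustration:
import Control.Applicative (ZipList (..))

pairwiseSums :: [Int]
pairwiseSums = getZipList ((+) <$> ZipList [1, 2, 3] <*> ZipList [10, 20, 30])
-- [11, 22, 33]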
This zipping idea actually brings us full circle. Recall that math stuff earlier, about monoidal functors? As the name suggests, these are a way of combining the structure of monoids and functors, both of which are familiar Haskell type classes:
class Functor f where
fmap :: (a -> b) -> f a -> f b
class Monoid a where
mempty :: a
mappend :: a -> a -> a
What would these look like if you put them in a box together and shook it up a bit? From Functor we'll keep the idea of a structure independent of its type parameter, and from Monoid we'll keep the overall form of the functions:
class (Functor f) => MonoidalFunctor f where
mfEmpty :: f ?
mfAppend :: f ? -> f ? -> f ?
We don't want to assume that there's a way to create a truly "empty" Functor, and we can't conjure up a value of an arbitrary type, so we'll fix the type of mfEmpty as f ().
We also don't want to force mfAppend to need a consistent type parameter, so now we have this:
class (Functor f) => MonoidalFunctor f where
mfEmpty :: f ()
mfAppend :: f a -> f b -> f ?
What's the result type for mfAppend? We have two arbitrary types we know nothing about, so we don't have many options. The most sensible thing is to just keep both:
class (Functor f) => MonoidalFunctor f where
mfEmpty :: f ()
mfAppend :: f a -> f b -> f (a, b)
At which point mfAppend is now clearly a generalized version of zip on lists, and we can reconstruct Applicative easily:
mfPure x = fmap (\() -> x) mfEmpty
mfApply f x = fmap (\(f, x) -> f x) (mfAppend f x)
This also shows us that pure is related to the identity element of a Monoid, so other good names for it might be anything suggesting a unit value, a null operation, or such.
That was lengthy, so to summarize:
(<*>) is just a modified function application, so you can either read it as "ap" or "apply", or elide it entirely the way you would normal function application.
(<*>) also roughly generalizes zipWith on lists, so you can read it as "zip functors with", similarly to reading fmap as "map a functor with".
The first is closer to the intent of the Applicative type class--as the name suggests--so that's what I recommend.
In fact, I encourage liberal use, and non-pronunciation, of all lifted application operators:
(<$>), which lifts a single-argument function into a Functor
(<*>), which chains a multi-argument function through an Applicative
(=<<), which binds a function that enters a Monad onto an existing computation
All three are, at heart, just regular function application, spiced up a little bit.
Since I have no ambitions of improving on C. A. McCann's technical answer, I'll tackle the more fluffy one:
If you could rename pure to something more friendly to podunks like me, what would you call it?
As an alternative, especially since there is no end to the constant angst-and-betrayal-filled cries against the Monad version, called "return", I propose another name, which suggests its function in a way that can satisfy the most imperative of imperative programmers, and the most functional of...well, hopefully, everyone can complain the same about: inject.
Take a value. "Inject" it into the Functor, Applicative, Monad, or what-have-you. I vote for "inject", and I approved this message.
In brief:
<*> you can call it apply. So Maybe f <*> Maybe a can be pronounced as apply Maybe f over Maybe a.
You could rename pure to of, like many JavaScript libraries do. In JS you can create a Maybe with Maybe.of(a).
Also, Haskell's wiki has a page on pronunciation of language operators here
(<*>) -- Tie Fighter
(*>) -- Right Tie
(<*) -- Left Tie
pure -- also called "return"
Source: Haskell Programming from First Principles, by Chris Allen and Julie Moronuki