I'm writing a custom language that features some functional elements. When I get stuck somewhere, I usually check how Haskell does it. This time, though, the problem is a bit too complicated for me to think of an example to give to Haskell.
Here's how it goes.
Say we have the following line
a . b
in Haskell.
Obviously, we are composing two functions, a and b. But what if the function a takes two other functions as parameters? What's stopping it from operating on . and b? You can surround the expression in brackets, but that shouldn't make a difference, since it still evaluates to a function, a prefix one, and prefix functions have precedence over infix functions.
If you do
(+) 2 3 * 5
for example, it will output 25 instead of 17.
Basically, what I'm asking is: what mechanism does Haskell use when you want an infix function to operate before a preceding prefix function?
So, if a is a function that takes two functions as its parameters, how do you stop Haskell from interpreting
a . b
as "apply . and b to the function a"
and instead interpret it as "compose functions a and b"?
If you don't put parens around an operator, it's always parsed as infix; i.e. as an operator, not an operand.
E.g. if you have f g ? i j, there are no parens around ?, so the whole thing is a call to (?) (parsed as (f g) ? (i j), equivalent to (?) (f g) (i j)).
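As a runnable sketch (every name here, f, g, ?, i, and j, is a hypothetical stand-in, defined only to make that parse concrete):

```haskell
-- Hypothetical definitions that mirror the shape of f g ? i j.
f, i :: Int -> Int
f = (* 2)
i = (+ 1)

(?) :: Int -> Int -> Int
(?) = (-)

g, j :: Int
g = 10
j = 3

main :: IO ()
main = print (f g ? i j)  -- parsed as (f g) ? (i j), i.e. 20 - 4 = 16
```

Since ? is not wrapped in parentheses, it is the top-level operator; the two juxtaposition chains f g and i j become its operands.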
I think what you're looking for are fixity declarations (see The Haskell Report).
They basically allow you to declare the operator precedence of infix functions.
For instance, there is
infixl 7 *
infixl 6 +
which means that + and * are both left associative infix operators.
* has precedence 7 while + has precedence 6, i.e. * binds more tightly than +.
In the Report, you can also see that . is defined as infixr 9 .
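For instance, you can mirror the fixities of + and * with your own operators (the names |+| and |*| here are hypothetical, defined only for illustration):

```haskell
-- Hypothetical operators with the same fixities as + and *.
infixl 6 |+|
infixl 7 |*|

(|+|) :: Int -> Int -> Int
a |+| b = a + b

(|*|) :: Int -> Int -> Int
a |*| b = a * b

main :: IO ()
main = print (2 |+| 3 |*| 5)  -- |*| binds more tightly: 2 |+| (3 |*| 5) = 17
```

Changing the two fixity declarations (e.g. giving |+| the higher precedence) changes the parse, and hence the result, without touching the definitions themselves.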
Basically what I'm asking is, what mechanism does Haskell use when you
want an infix function to operate before a preceding prefix function.
Just to point out a misconception: This is purely a matter of how expressions are parsed. The Haskell compiler does not know (or: does not need to know) if, in
f . g
f, g and (.) are functions, or whatever.
It goes the other way around:
The parser sees f . g (or the syntactically equivalent i + j).
It hands this up as something like App (App (.) f) g, following the lexical and syntax rules.
Only then, when the typechecker sees App a b, does it conclude that a must be a function.
(+) 2 3 * 5
is parsed as
((+) 2 3) * 5
and thus
(2 + 3) * 5
That is because function application (as in (+) 2 3) binds more tightly than any infix operator, such as *.
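A quick way to check both readings in code (nothing here beyond Prelude arithmetic):

```haskell
main :: IO ()
main = do
  print ((+) 2 3 * 5)   -- prefix application binds first: (2 + 3) * 5 = 25
  print (2 + 3 * 5)     -- plain infix: * binds more tightly than +, giving 17
```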
Related
I have 2 snippets of Haskell code.
This one works fine:
(3+) $ 5
This one throws an error:
3+ $ 5
Why is this happening? Aren't both of them parsed the same because of the higher precedence of + relative to $?
Short Answer: The expression (3+) is an example of a "section". It is syntactically distinct from usual infix operator syntax (like 2+3) or usual function application by juxtaposition (like sqrt 16), and the enclosing parentheses are not the usual grouping parentheses used for ordering operations. Instead, the parentheses in the section are part of the syntax, and you simply cannot write a section without those enclosing parentheses, just as you cannot write a tuple (1,2) without parentheses.
Very Long Answer:
There are three distinct function call expression syntaxes in Haskell:
Usual function application by juxtaposition, where an expression representing a function is placed adjacent to (i.e., juxtaposed with) an expression representing an argument, as in f x. In this example, both f and x are expressions consisting of single identifiers. Parentheses are not needed, but can be added without affecting the meaning ((f) (((x)))), and with more complicated expressions, parentheses might be required to get the order of operations right. However, these are "grouping parentheses", part of the syntax of expressions used to order operations, and they are separate from the juxtaposition syntax itself.
Infix operators where a binary operator is placed between two arguments, as in x / y, and the function represented by the operator / is applied to argument x and the result of that application is applied to argument y. Again, parentheses might be added to the two expression arguments or around the entire resulting expression: ((x) / (y)). In more complicated expressions, they may be required to get the order of operations right. However, again these are "grouping parentheses", part of the syntax of the expressions themselves and not part of the infix operator syntax.
A section, which comes in two varieties: a right section (/ 2) and a left section (2 /) consisting of an infix operator and an expression placed next to each other, in either order (operator first or operator last) between parentheses. Note that the parentheses are part of the syntax. A section cannot be written without parentheses, just as a tuple cannot be written without parentheses. A section creates a partially applied function: (2 /) applies the operator / to 2 as if it had appeared on the left-hand side of a more usual infix expression. The result is a function that expects one more argument (the missing right-hand side). Therefore, (2 /) 3 == 2 / 3. Similarly, the section (/ 2) applies the operator / to 2 as if 2 had appeared on the right-hand side of an infix expression. The result is a function expecting the missing left-hand side. Therefore (/ 2) 3 == 3 / 2. To be clear, that's juxtaposition of a section and a number on the left of == versus infix operator syntax on the right.
There's never any ambiguity about whether a piece of syntax is juxtaposition within grouping parentheses (i.e., parentheses not part of the syntax) versus a section (i.e., with parentheses part of the syntax). Even though the expressions (f x) and (+ x) look similar, f and x are expressions while + is an operator. Because + is an operator and not an expression, (+ x) must be a section. Likewise, because both f and x are expressions and not operators, (f x) must be juxtaposition placed within grouping parentheses.
There is a further complication. The operator + may be turned into a standalone expression by placing it within parentheses, like (+). Again, the parentheses are part of the syntax. In much the same way that sections (2 +) and (+ 2) turn the operator into a function expression (in this case, of a function of one variable), the syntax (+) turns the operator into the function expression representing the two-argument function associated with the operator. Just as with sections, there's no possibility of confusing (+) (parentheses part of the syntax) with (15) (grouping parentheses) because + is an operator while 15 is an expression.
As a result, the following are all equivalent ways of applying the two operators + and $
(3+) $ 5 -- infix operator $ applied to section (3+) and 5
((3)+) $ 5 -- add optional grouping parentheses to expression part of section
($5) (3+) -- juxtaposition of section ($5) and (3+)
((3+)$) 5 -- juxtaposition of section ((3+)$) with 5; here, the section ((3+)$) itself
-- consists of a section expression (3+) and the operator $.
-- both sets of parentheses are part of the section syntax and must be present
((+)3) $ 5 -- infix $ applied to juxtaposition of (+) 3 and 5: the parentheses in (+) are
-- part of the syntax, the outer parentheses in ((+)3) are grouping parentheses
(+)3 $ 5 -- because of order of operations (juxtaposition before infix),
-- those grouping parentheses are optional
In contrast, there's no valid syntactic construction that allows two binary operators to appear next to each other without intervening parentheses, which is why the following is invalid syntax:
3+ $ 5
Bonus trivia #1: The expression 3+$5 is valid syntax, but that's because +$ is considered a new multi-character operator, like ++ or >>=.
Bonus trivia #2: There is one case where two operators may be placed next to each other: a binary operator can be placed next to the unary negation operator -, provided the binary operator's precedence is lower than that of binary - (i.e. lower than precedence level 6).
-5 == -5
^^^^ these two operators can appear next to each other, because == is
precedence level 4
-5 + -3
^^^ precedence parsing error because + is level 6
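Both cases can be checked directly (standard Prelude only; note the extra parentheses required in the second line):

```haskell
main :: IO ()
main = do
  print (-5 == -5)    -- fine: == is precedence 4, below unary negation's 6
  print (-5 + (-3))   -- + is level 6, so the second operand needs its own parens
```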
+ and $ are infix operators, and you cannot directly give + one argument on the left to curry it - you need to put it in parentheses as (3+), (+3) or (+) to make it a function that can be used with $. Your second piece of code is syntactically invalid because + cannot be treated as a postfix operator, although you can partially apply it inside parentheses.
If $ had been a function f instead, it would be parsed as 3 + (f 5), but there is no way to parse two operators next to each other.
I wanted to flip a list constructor usage, to have type:
[a] -> a -> [a]
(for use in a fold), so tried:
(flip :)
but it gives the type:
Prelude> :t (flip :)
(flip :) :: [(a -> b -> c) -> b -> a -> c] -> [(a -> b -> c) -> b -> a -> c]
This surprised me, but it appears that this was parsed as a left section of (:), instead of a partial application of flip. Rewriting it using flip as infix seems to overcome this,
Prelude> :t ((:) `flip`)
((:) `flip`) :: [a] -> a -> [a]
But I couldn't find the rule defining this behavior, and I thought that function application was the highest precedence, and was evaluated left->right, so I would have expected these two forms to be equivalent.
What you want to do is this:
λ> :t (flip (:))
(flip (:)) :: [a] -> a -> [a]
Operators in Haskell are infix. So when you write flip :, the : operates in infix fashion, i.e. the left section applies the operator : to flip as its operand. By explicitly putting parentheses around the operator, as in flip (:), you say that (:) should be passed to flip as an argument. You can also use backticks to make flip itself infix, which you have already tried.
It was putting : in parentheses that made your second example work, not using backticks around flip.
We often say that "function application has highest precedence" to emphasise that e.g. f x + 1 should be read as (f x) + 1, and not as f (x + 1). But this isn't really wholly accurate. If it were, and (flip :) parsed as you expected, then the next-highest-precedence step in f x + 1 would be applying (f x) to +; the whole expression would end up being parsed as f applied to 3 arguments: x, +, and 1. But this would happen with all expressions involving infix operators! Even a simple 1 + 1 would be recognised as 1 applied to + and 1 (and the compiler would then complain about the missing Num instance that would allow 1 to be a function).
Essentially this strict understanding of "function application has highest precedence" would mean that function application would be all that ever happens; infix operators would always end up as arguments to some function, never actually working as infix operators.
Actually precedence (and associativity) are mechanisms for resolving the ambiguity of expressions involving multiple infix operators. Function application is not an infix operator, and simply doesn't take part in the precedence/associativity system. Chains of terms that don't involve operators are resolved as function application before precedence is invoked to resolve the operator applications (hence "highest precedence"), but it's not really precedence that causes it.
Here's how it works. You start with a linear sequence of terms and operators; there's no structure, they were simply written next to each other.
What I'm calling a "term" here can be a non-operator identifier like flip; or a string, character, or numeric literal; or a list expression; or a parenthesised subexpression; etc. They're all opaque as far as this process is concerned; we only know (and only need to know) that they're not infix operators. We can always tell an operator because it will either be a "symbolic" identifier like ++!#>, or an alphanumeric identifier in backticks.
So, sequence of terms and operators. You find all chains of one or more terms in a row that contain no operators. Each such chain is a chain of function applications, and becomes a single term.1
Now if you have two operators directly next to each other you've got an error. If your sequence starts or ends in an operator, that's also an error (unless this is an operator section).
At this point you're guaranteed to have a strictly alternating sequence like term operator term operator term operator term, etc. So you pick the operator with the highest precedence together with the terms to its left and right, call that an operator application, and those three items become a single term. Associativity acts as a tie break when you have multiple operators with the same precedence. Rinse and repeat until the whole expression has become a single term (or associativity fails to break a tie, which is also an error). This means that in an expression involving operators, the "top level application" is always one of the operators, never ordinary function application.
A consequence of this is that there are no circumstances under which an operator can end up passed as the argument to a function. It's simply impossible. This is why we need the (:) syntax to disable the "operator-ness" of operators, and get at their identity as values.
For flip :, the only chain of non-operator terms is just flip, so there's no ordinary function application to resolve "at highest precedence". : then goes looking for its left and right arguments (but this is a section, so there's no right argument), and finds flip on its left.
To make flip receive : as an argument instead of the other way around, you must write flip (:). (:) is not an operator (it's in parentheses, so it doesn't matter what's inside), and so we have a chain of two terms with no operators, so that gets resolved to a single expression by applying flip to (:).
1 The other way to look at this is that you identify all sequences of terms not otherwise separated by operators and insert the "function application operator" between them. This "operator" has higher precedence than it's possible to assign to other operators and is left-associative. Then the operator-resolution logic will automatically treat function application the way I've been describing.
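To confirm that flip (:) has the fold-ready type the question wanted:

```haskell
main :: IO ()
main = do
  print (flip (:) [2, 3] (1 :: Int))           -- flip (:) xs x == x : xs, so [1,2,3]
  print (foldl (flip (:)) [] [1, 2, 3 :: Int]) -- prepends each element: [3,2,1]
```

Because flip (:) takes the list first and the element second, it slots directly into foldl, building the input list in reverse.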
I want to update a record using lens with a value parsed by attoparsec.
fmap (myRecord & _2 . someField .~) double
And it totally doesn't work:
Iddq3.hs:99:48:
The operator ‘.~’ [infixr 4] of a section
must have lower precedence than that of the operand,
namely ‘&’ [infixl 1]
in the section: ‘myRecord & _2 . someField .~’
What does this error mean? What are infixr and infixl? How can I rewrite the function to correct it?
You just can't mix operators of those fixities in a section. Fixity in Haskell is just operator precedence and associativity, much like the "Please Excuse My Dear Aunt Sally" mnemonic (if you're American, this is probably what you learned) for remembering the order of operations: Parentheses, Exponents, Multiplication, Division, Addition, Subtraction. Here, the .~ operator has a higher, right-associative precedence (infixr 4), while & has a low, left-associative precedence (infixl 1). The real problem comes from mixing right- and left-associative operators of those precedences in a section: the compiler doesn't know what order to apply them in.
Instead, you can re-formulate it as two operator sections composed together
fmap ((myRecord &) . (_2 . someField .~)) double
So that you give the compiler an explicit grouping, or you can use the prefix set function for a cleaner look
fmap (\v -> set (_2 . someField) v myRecord) double
Or if you want to get rid of the lambda (my preference is to leave it alone), you can use flip as
fmap (flip (set (_2 . someField)) myRecord) double
This is a somewhat strange restriction on the way operators can be used in sections.
Basically, when you have an operator section like (e1 & e2 .~), there are two ways you can imagine desugaring it to sectionless notation. One is to turn the operator into prefix position:
(.~) (e1 & e2)
(If the operator were in front, a flip would also need to be added.)
The other way is to turn it into a lambda expression:
\x -> e1 & e2 .~ x
These two ways of thinking of sections are supposed to give the same result.
There's a problem, though, if as here, there is another operator & of lower fixity/precedence than the sectioned operator. Because that means the lambda expression parses as
\x -> e1 & (e2 .~ x)
In other words, the lambda expression definitely isn't equivalent to simply moving the operator and keeping the rest as a unified expression.
While the Haskell designers could have chosen to interpret a section in one of the two ways, instead they chose to disallow sections where the two interpretations don't match, and make them errors. Possibly because, as I understand it, the lambda expression interpretation is more intuitive to humans, while the operator movement is easier to implement in a parser/compiler.
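A lens-free sketch of the same restriction, assuming the analogy carries over to + (infixl 6) and * (infixl 7): the section (1 + 2 *) should be rejected with the same kind of error, since * binds more tightly than the + inside its operand, and both workarounds apply:

```haskell
-- (1 + 2 *) is not a legal section, for the same reason as the
-- (e1 & e2 .~) example: the sectioned operator * (infixl 7) does not
-- have lower precedence than the operand's operator + (infixl 6).
main :: IO ()
main = do
  print ((\x -> 1 + 2 * x) (5 :: Int))  -- explicit lambda: 1 + 2*5 = 11
  print (((1 +) . (2 *)) (5 :: Int))    -- composed sections: 1 + (2*5) = 11
```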
You can always use the lambda expression explicitly, though, or make your own point-free version like bheklilr showed.
The Haskell definition says:
An expression is in weak head normal form (WHNF), if it is either:
a constructor (possibly applied to arguments), like True, Just (square 42) or (:) 1
a built-in function applied to too few arguments (perhaps none) like (+) 2 or sqrt.
or a lambda abstraction \x -> expression.
Why do built-in functions receive special treatment? According to lambda calculus, there is no difference between a partially applied function and any other function, because at the end we have only one argument functions.
A normal function applied to an argument, like the following:
(\x y -> x + 1 : y) 1
Can be reduced, to give:
\y -> 1 + 1 : y
In the first expression, the "outermost" thing was an application, so it was not in WHNF. In the second, the outermost thing is a lambda abstraction, so it is in WHNF (even though we could do more reductions inside the function body).
Now lets consider the application of a built-in (primitive) function:
(+) 1
Because this is a built-in, there's no function body in which we can substitute 1 for the first parameter. The evaluator "just knows" how to evaluate fully "saturated" applications of (+), like (+) 1 2. But there's nothing that can be done with a partially applied built-in; all we can do is produce a data structure describing "apply (+) to 1 and wait for one more argument", but that's exactly what the thing we're trying to reduce is. So we do nothing.
Built-ins are special because they're not defined by lambda calculus expressions, so the reduction process can't "see inside" their definition. Thus, unlike normal functions applications, built-in function applications have to be "reduced" by just accumulating arguments until they are fully "saturated" (in which case reduction to WHNF is by running whatever the magic implementation of the built-in is). Unsaturated built-in applications cannot be reduced any further, and so are already in WHNF.
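This can be observed with Control.Exception.evaluate, which forces its argument exactly to WHNF; none of these calls diverge, because each expression is already in WHNF (the undefineds are hidden under a constructor or a lambda):

```haskell
import Control.Exception (evaluate)

main :: IO ()
main = do
  _ <- evaluate (Just (undefined :: Int))    -- constructor application: WHNF
  _ <- evaluate ((+) (1 :: Int))             -- unsaturated built-in: WHNF
  _ <- evaluate (\x -> undefined + x :: Int) -- lambda abstraction: WHNF
  putStrLn "all three were already in WHNF"
```

Replacing any of the three with a bare undefined would make evaluate throw, since undefined is not in WHNF.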
Consider
Prelude> let f n = [(+x) | x <- [1..]] !! n
Prelude> let g = f 20000000 :: Int -> Int
g is at this point not in WHNF! You can see this by evaluating, say, g 3, which takes a noticeable amount of time, because you need WHNF before you can apply an argument. That's when the list is traversed in search of the right built-in function. But afterwards, this choice is fixed; g is in WHNF (and indeed NF: those coincide for lambdas, which is perhaps what you meant with your question), and thus any subsequent calls will give a result immediately.
I want to know what the Haskell operator % does. It is hard to find on google, I could not find it in the Haskell Report either.
I saw it used in this piece of code:
fi=zz.bu
bu=zz.(:).(++"zz")
[]#zz=zz;zz#__=zz
zZ%zz=zZ zz$zZ%zz
zz=(([[],[]]++).)
zZ=zipWith;z=zZ((#).show)[1..]$zZ(++)(bu%"Fi")(fi%"Bu")
taken from: https://codegolf.stackexchange.com/questions/88/obfuscated-fizzbuzz-golf/110#110
Here is the relevant section of the Haskell Report:
Haskell provides special syntax to support infix notation. An operator is a function that can be applied using infix syntax (Section 3.4), or partially applied using a section (Section 3.5).
An operator is either an operator symbol, such as + or $$, or is an ordinary identifier enclosed in grave accents (backquotes), such as `op`. For example, instead of writing the prefix application op x y, one can write the infix application x `op` y. If no fixity declaration is given for `op` then it defaults to highest precedence and left associativity (see Section 4.4.2).
Dually, an operator symbol can be converted to an ordinary identifier by enclosing it in parentheses. For example, (+) x y is equivalent to x + y, and foldr (*) 1 xs is equivalent to foldr (\x y -> x * y) 1 xs.
That is to say, there is nothing special about "operators" in Haskell other than their syntax. A function whose name is made from symbols defaults to infix, a function whose name is alphanumeric defaults to prefix, and either can be used in the other style with a bit of extra syntax.
Incidentally, since it's often impossible to search based on operator names using Google, to find operators that are declared in the standard libraries there are two search engines specifically for finding things on Hackage.
In general, we can define a new function foo like so:
foo a b c = (something involving a, b, and c)
Similarly, we can define a binary operator % (whose name may be built from any combination of symbol characters) like so:
a % b = (something involving a and b)
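For example, here is a hypothetical, fresh definition of % (unrelated to the golfed one quoted above), complete with a fixity declaration:

```haskell
-- A made-up % for illustration: % is not exported by the Prelude,
-- so a plain module is free to define it.
infixl 6 %

(%) :: Int -> Int -> Int
a % b = a * 10 + b

main :: IO ()
main = print (1 % 2 % 3)  -- left-associative: (1 % 2) % 3 = 12 % 3 = 123
```

(Note that Data.Ratio also exports a % for constructing rationals; as long as you don't import it, the name is yours to define.)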