I want to update a record using lens with a value parsed by attoparsec.
fmap (myRecord & _2 . someField .~) double
And it totally doesn't work:
Iddq3.hs:99:48:
The operator ‘.~’ [infixr 4] of a section
must have lower precedence than that of the operand,
namely ‘&’ [infixl 1]
in the section: ‘myRecord & _2 . someField .~’
What does this error mean? What are infixr and infixl? How can I rewrite the function to correct it?
You just can't mix operators of those fixities in a section like that. Fixity in Haskell is just operator precedence plus associativity, much like "Please Excuse My Dear Aunt Sally" (if you're American, this is probably what you learned) for remembering what order to apply operations in, i.e. Parentheses, Exponents, Multiplication, Division, Addition, Subtraction. Here, the .~ operator is right associative with precedence 4, while the & operator is left associative with the low precedence 1. The real problem comes from sectioning the higher-precedence .~ over an operand that contains the lower-precedence &: the compiler can't tell what grouping you meant!
Instead, you can re-formulate it as two operator sections composed together
fmap ((myRecord &) . (_2 . someField .~)) double
So that you give the compiler an explicit grouping, or you can use the prefix set function for a cleaner look
fmap (\v -> set (_2 . someField) v myRecord) double
Or if you want to get rid of the lambda (my preference is to leave it alone), you can use flip as
fmap (flip (set (_2 . someField)) myRecord) double
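To make this concrete, here's a minimal self-contained sketch of the fixed versions (assuming the lens package; myPair stands in for the asker's myRecord, and a Maybe value stands in for the attoparsec result, since someField isn't shown):

import Control.Lens

myPair :: (String, Double)
myPair = ("label", 0)

main :: IO ()
main = do
  let parsed = Just 3.14                       -- stands in for the parse result
  print $ fmap (\v -> set _2 v myPair) parsed  -- Just ("label",3.14)
  print $ fmap ((myPair &) . (_2 .~)) parsed   -- same result, via composed sections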
This is a somewhat strange restriction on the way operators can be used in sections.
Basically, when you have an operator section like (e1 & e2 .~), there are two ways you can imagine desugaring it to sectionless notation. One is to turn the operator into prefix position:
(.~) (e1 & e2)
(If the operator were in front, a flip would also need to be added.)
The other way is to turn it into a lambda expression:
\x -> e1 & e2 .~ x
These two ways of thinking of sections are supposed to give the same result.
There's a problem, though, if, as here, there is another operator (&) of lower fixity/precedence than the sectioned operator. Because that means the lambda expression parses as
\x -> e1 & (e2 .~ x)
In other words, the lambda expression definitely isn't equivalent to simply moving the operator and keeping the rest as a unified expression.
While the Haskell designers could have chosen to interpret a section in one of the two ways, instead they chose to disallow sections where the two interpretations don't match, and make them errors. Possibly because, as I understand it, the lambda expression interpretation is more intuitive to humans, while the operator movement is easier to implement in a parser/compiler.
You can always use the lambda expression explicitly, though, or make your own point-free version like @bheklilr showed.
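A tiny illustration of the rule with plain arithmetic (my own example): since * [infixl 7] binds more tightly than + [infixl 6], a + section whose operand uses * is accepted, because the two interpretations agree, while the reverse is rejected just like the lens section above:

ok :: Int -> Int
ok = (2 * 3 +)      -- fine: both readings give \x -> (2 * 3) + x
-- bad = (2 + 3 *)  -- rejected: ‘*’ of a section must have lower
--                  -- precedence than the operand's ‘+’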
Related: How does outermost evaluation work on an application of a curried function? says:
in Haskell, whitespace is an operator: it applies the lhs function to the rhs argument.
Is this true? I can't find it in the documentation.
When the Haskell compiler lexes a program, is whitespace recognized as a function application operator or just as a token separator?
I’ve never heard anyone say that whitespace is an operator before. I suppose you could consider it to be an operator in the context of a function application, but in most contexts it is not an operator. For instance, I don’t see any way to consider whitespace as an operator in the following code sample, where whitespace is used only to separate tokens:
module Main where
x = "test 1"
y = "test 2"
main = do
  (z : zs) <- getLine
  putStrLn $ z : (x ++ y ++ zs)
It seems fairly obvious here that whitespace is acting purely as a token separator. The apparent ‘operator-ness’ in something like f x y z can be best thought of as saying that if two values are placed next to each other, the first is applied to the second. For instance, putStrLn"xxx" and putStrLn "xxx" both apply putStrLn to "xxx"; the space is completely irrelevant.
EDIT: In a comment, @DanielWagner provided two great examples. Firstly, (f)x is the same as f x, yet has no whitespace; here we see confirmation that the space is acting purely as a token separator, so can be replaced by a bracket (which also separates tokens) without any impact on the lexemes of the expression. Secondly, f {x=y} does not apply {x=y} to f, but rather uses record syntax to create a new record based on f; again, we can remove the space to get f{x=y}, which does an equally good job of separating the lexemes.
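Both points are easy to check in a small runnable sketch (the type P and the value f are my own stand-ins):

data P = P { x :: Int, y :: Int } deriving Show

f :: P
f = P { x = 1, y = 2 }

main :: IO ()
main = do
  print ((id)f)    -- like (f)x: the bracket separates tokens just as a space would
  print f{x = 10}  -- record update, not application: P {x = 10, y = 2}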
The white space is in most cases "function application": it applies the function on the left to the argument on the right, just like the ($) operator, which can also be used to make your code clearer. Some examples:
plusOne = (1 +)
you can either do
plusOne 2
or
plusOne $ 2
:t ($)
($) :: (a -> b) -> a -> b
I forgot a useful example:
imagine you want to keep the elements greater than 3, but before that you want to add one to each element:
filter (>3) $ map plusOne [1,2,3,4]
That will compile, but this won't:
filter (>3) map plusOne [1,2,3,4]
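To see why, make the grouping explicit: application is left-associative, so the failing line parses as (((filter (>3)) map) plusOne) [1,2,3,4], i.e. filter applied to four arguments. The working version is equivalent to writing the parentheses yourself:

filter (>3) (map plusOne [1,2,3,4])  -- [4,5], same as the $ form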
But in other cases it is not function application, as @bradrn's answer and @DanielWagner's comment show.
While in Haskell, the following works:
> (+) `liftM` (Just 3) `ap` (Just 5)
Just 8
Frege hints to use parentheses:
frege> (+) `liftM` (Just 3) `ap` (Just 5)
E <console>.fr:12: invalid expression, none-associative operator liftM
found on same level as none-associative operator ap
H <console>.fr:12: Use parentheses to disambiguate an expression like a
liftM b ap c
I found this section in the Haskell report:
Expressions involving infix operators are disambiguated by the
operator's fixity (see Section 4.4.2). Consecutive unparenthesized
operators with the same precedence must both be either left or right
associative to avoid a syntax error. Given an unparenthesized
expression "x qop(a,i) y qop(b,j) z", parentheses must be added around
either "x qop(a,i) y" or "y qop(b,j) z" when i=j unless a=b=l or
a=b=r.
In the code above, both the "operators" have no associativity and have the same default precedence, so it seems like Frege's behavior is consistent with the Haskell report.
Am I understanding this right? Why Frege needs parentheses in this case whereas Haskell is able to disambiguate? or How is Haskell able to disambiguate in this case?
Well, this is because, as it stands, an operator like `foo` defaults to non-associative in Frege, while in Haskell it defaults to left associative (infixl 9).
This should be corrected in the Frege compiler in order to make it more Haskell compatible.
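You can check the Haskell default (infixl 9 for a backticked function with no fixity declaration) by adding the left-associated parentheses yourself and seeing that nothing changes:

import Control.Monad (ap, liftM)

main :: IO ()
main = do
  print ((+) `liftM` Just 3 `ap` Just 5)    -- Just 8
  print (((+) `liftM` Just 3) `ap` Just 5)  -- Just 8, the left-associated reading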
Say I have a general recursive definition in haskell like this:
foo a0 a1 ... = base_case
foo b0 b1 ...
| cond1 = recursive_case_1
| cond2 = recursive_case_2
...
Can it always rewritten using foldr? Can it be proved?
If we interpret your question literally, we can write const value foldr to produce any value while still syntactically "using" foldr, as @DanielWagner pointed out in a comment.
A more interesting question is whether we can instead forbid general recursion from Haskell, and "recurse" only through the eliminators/catamorphisms associated to each user-defined data type, which are the natural generalization of foldr to inductively defined data types. This is, essentially, (higher-order) primitive recursion.
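For lists, that eliminator is foldr itself, and ordinary structural recursions can be routed through it. A quick sketch (names are mine):

lengthE :: [a] -> Int
lengthE = foldr (\_ n -> 1 + n) 0

mapE :: (a -> b) -> [a] -> [b]
mapE f = foldr (\a bs -> f a : bs) []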
When this restriction is in place, we can only compose terminating functions (the eliminators) together. This means that we can no longer define non-terminating functions.
As a first example, we lose the trivial recursion
f x = f x
-- or even
a = a
since, as said, the language becomes total.
More interestingly, the general fixed point operator is lost.
fix :: (a -> a) -> a
fix f = f (fix f)
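For instance, fix is what lets full Haskell tie arbitrary recursive knots; the restricted fragment cannot define it, and so loses definitions like this one (my example, using the fix just defined):

factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))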
A more intriguing question is: what about the total functions we can express in Haskell? We do lose all the non-total functions, but do we lose any of the total ones?
Computability theory states that, since the language becomes total (no more non-termination), we lose expressiveness even on the total fragment.
The proof is a standard diagonalization argument. Fix any enumeration of programs in the total fragment so that we can speak of "the i-th program".
Then, let eval i x be the result of running the i-th program on the natural number x as input (for simplicity, assume this is well typed, and that the result is a natural number). Note that, since the language is total, a result must exist. Moreover, eval can be implemented in unrestricted Haskell, since we can write an interpreter of Haskell in Haskell (left as an exercise :-P), and that would work just as well for the fragment. Then, we simply take
f n = succ $ eval n n
The above is a total function (a composition of total functions) which can be expressed in Haskell, but not in the fragment. Indeed, otherwise there would be a program to compute it, say the i-th program. In that case we would have
eval i x = f x
for all x. But then,
eval i i = f i = succ $ eval i i
which is impossible -- contradiction. QED.
In type theory, it is indeed the case that you can elaborate all definitions by dependent pattern-matching into ones only using eliminators (a more strongly-typed version of folds, the generalisation of lists' foldr).
See e.g. Eliminating Dependent Pattern Matching (pdf)
I wanted to flip a list constructor usage, to have type:
[a] -> a -> [a]
(for use in a fold), so I tried:
(flip :)
but it gives the type:
Prelude> :t (flip :)
(flip :) :: [(a -> b -> c) -> b -> a -> c] -> [(a -> b -> c) -> b -> a -> c]
This surprised me, but it appears that this was parsed as a left section of (:), instead of a partial application of flip. Rewriting it using flip as infix seems to overcome this:
Prelude> :t ((:) `flip`)
((:) `flip`) :: [a] -> a -> [a]
But I couldn't find the rule defining this behavior, and I thought that function application was the highest precedence, and was evaluated left->right, so I would have expected these two forms to be equivalent.
What you want to do is this:
λ> :t (flip (:))
(flip (:)) :: [a] -> a -> [a]
Operators in Haskell are infix. So when you write flip : it is parsed in an infix fashion, i.e. : is the operator and flip is its left operand. By putting parentheses explicitly around (:) in flip (:), you tell the compiler that flip should be applied to (:). You can also put flip in backticks to make it infix, which is what you tried.
It was putting : in parentheses that made your second example work, not using backticks around flip.
We often say that "function application has highest precedence" to emphasise that e.g. f x + 1 should be read as (f x) + 1, and not as f (x + 1). But this isn't really wholly accurate. If it were, and (flip :) parsed as you expected, then in f x + 1 the next "highest precedence" step after applying f to x would be applying (f x) to +; the whole expression f x + 1 would end up being parsed as f applied to 3 arguments: x, +, and 1. But this would happen with all expressions involving infix operators! Even a simple 1 + 1 would be recognised as 1 applied to + and 1 (and then the compiler would complain about the missing Num instance that would allow 1 to be a function).
Essentially this strict understanding of "function application has highest precedence" would mean that function application would be all that ever happens; infix operators would always end up as arguments to some function, never actually working as infix operators.
Actually precedence (and associativity) are mechanisms for resolving the ambiguity of expressions involving multiple infix operators. Function application is not an infix operator, and simply doesn't take part in the precedence/associativity system. Chains of terms that don't involve operators are resolved as function application before precedence is invoked to resolve the operator applications (hence "highest precedence"), but it's not really precedence that causes it.
Here's how it works. You start with a linear sequence of terms and operators; there's no structure, they were simply written next to each other.
What I'm calling a "term" here can be a non-operator identifier like flip; or a string, character, or numeric literal; or a list expression; or a parenthesised subexpression; etc. They're all opaque as far as this process is concerned; we only know (and only need to know) that they're not infix operators. We can always tell an operator because it will either be a "symbolic" identifier like ++!#>, or an alphanumeric identifier in backticks.
So, sequence of terms and operators. You find all chains of one or more terms in a row that contain no operators. Each such chain is a chain of function applications, and becomes a single term.¹
Now if you have two operators directly next to each other you've got an error. If your sequence starts or ends in an operator, that's also an error (unless this is an operator section).
At this point you're guaranteed to have a strictly alternating sequence like term operator term operator term operator term, etc. So you pick the operator with the highest precedence together with the terms to its left and right, call that an operator application, and those three items become a single term. Associativity acts as a tie break when you have multiple operators with the same precedence. Rinse and repeat until the whole expression has become a single term (or associativity fails to break a tie, which is also an error). This means that in an expression involving operators, the "top level application" is always one of the operators, never ordinary function application.
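For example, here is the process run on a small expression (f and g are my own placeholders):

f, g :: Int -> Int
f = (+ 1)
g = (* 10)

-- f 1 + g 2 * 3
-- application pass:  (f 1) + (g 2) * 3
-- (*) [infixl 7]:    (f 1) + ((g 2) * 3)
-- (+) [infixl 6]:    ((f 1) + ((g 2) * 3)) = 2 + 60 = 62
main :: IO ()
main = print (f 1 + g 2 * 3)  -- 62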
A consequence of this is that there are no circumstances under which an operator can end up passed as the argument to a function. It's simply impossible. This is why we need the (:) syntax to disable the "operator-ness" of operators, and get at their identity as values.
For flip : the only chain of non-operator terms is just flip, so there's no ordinary function application to resolve "at highest precedence". : then goes looking for its left and right arguments (but this is a section, so there's no right argument), and finds flip on its left.
To make flip receive : as an argument instead of the other way around, you must write flip (:). (:) is not an operator (it's in parentheses, so it doesn't matter what's inside), and so we have a chain of two terms with no operators, so that gets resolved to a single expression by applying flip to (:).
¹ The other way to look at this is that you identify all sequences of terms not otherwise separated by operators and insert the "function application operator" between them. This "operator" has higher precedence than it's possible to assign to other operators and is left-associative. Then the operator-resolution logic will automatically treat function application the way I've been describing.
I'm writing a custom language that features some functional elements. When I get stuck somewhere I usually check how Haskell does it. This time though, the problem is a bit too complicated for me to think of an example to give to Haskell.
Here's how it goes.
Say we have the following line
a . b
in Haskell.
Obviously, we are composing two functions, a and b. But what if the function a took another two functions as parameters. What's stopping it from operating on . and b? You can surround it in brackets but that shouldn't make a difference since the expression still evaluates to a function, a prefix one, and prefix functions have precedence over infix functions.
If you do
(+) 2 3 * 5
for example, it will output 25 instead of 17.
Basically what I'm asking is, what mechanism does Haskell use when you want an infix function to operate before a preceding prefix function.
So. If "a" is a function that takes two functions as its parameters. How do you stop Haskell from interpreting
a . b
as "apply . and b to the function a"
and interpret it as "compose functions a and b"?
If you don't put parens around an operator, it's always parsed as infix; i.e. as an operator, not an operand.
E.g. if you have f g ? i j, there are no parens around ?, so the whole thing is a call to (?) (parsed as (f g) ? (i j), equivalent to (?) (f g) (i j)).
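A concrete instance of that shape (the (?) operator here is my own definition):

(?) :: Int -> Int -> Int
a ? b = a * 100 + b

main :: IO ()
main = print (negate 3 ? abs (-4))  -- (negate 3) ? (abs (-4)) = -296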
I think what you're looking for are fixity declarations (see The Haskell Report).
They basically allow you to declare the operator precedence of infix functions.
For instance, there is
infixl 7 *
infixl 6 +
which means that + and * are both left associative infix operators.
* has precedence 7 while + has precedence 6, i.e. * binds more tightly than +.
In the report page, you can also see that . is defined as infixr 9 .
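Declaring a fixity for your own operator works the same way (|+| is a made-up example):

infixl 6 |+|  -- same fixity as (+)

(|+|) :: Int -> Int -> Int
a |+| b = a + b

main :: IO ()
main = print (1 |+| 2 * 3)  -- * binds more tightly: 1 |+| 6 = 7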
Basically what I'm asking is, what mechanism does Haskell use when you
want an infix function to operate before a preceding prefix function.
Just to point out a misconception: This is purely a matter of how expressions are parsed. The Haskell compiler does not know (or: does not need to know) if, in
f . g
f, g and (.) are functions, or whatever.
It goes the other way around:
The parser sees f . g (or the syntactically equivalent: i + j).
It hands this up as something like App (App (.) f) g following the lexical and syntax rules.
Only then, when the typechecker sees App a b, does it conclude that a must be a function.
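A toy version of that split (this Expr type is my own sketch, not GHC's actual AST):

data Expr = Var String | App Expr Expr

-- "f . g" reaches the typechecker as:
example :: Expr
example = App (App (Var ".") (Var "f")) (Var "g")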
(+) 2 3 * 5
is parsed as
((+) 2 3) * 5
and thus
(2 + 3) * 5
That is because function application (like (+) 2 3) binds more tightly than, and is therefore resolved before, any operator in infix notation, like *.
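You can verify the two readings directly (my own check):

main :: IO ()
main = do
  print ((+) 2 3 * 5)  -- 25: parsed as ((+) 2 3) * 5
  print (2 + 3 * 5)    -- 17: infix + binds less tightly than *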