Why is it that when I do range in Haskell, this works:
[LT .. GT]
but this doesn't:
[LT..GT]
and what does this cryptic error mean:
<interactive>:1:2:
Failed to load interface for `LT':
Use -v to see a list of the files searched for.
<interactive>:1:2:
A section must be enclosed in parentheses thus: (`LT..` GT)
However, when I use Ints, the second form (without spaces) works:
[1..3]
It's because LT.. is interpreted as the . operator in the LT module.
<interactive>:1:2:
Failed to load interface for `LT':
Use -v to see a list of the files searched for.
It means GHC cannot find a module named LT. The same message appears if you use a qualified name with a non-existing library:
Prelude> SDJKASD.sdfhj
<interactive>:1:1:
Failed to load interface for `SDJKASD':
Use -v to see a list of the files searched for.
<interactive>:1:2:
A section must be enclosed in parentheses thus: (`LT..` GT)
In Haskell, a section is an infix operator with a partial application, e.g. (* 3), which is equivalent to \x -> x * 3.
In your case, LT.. is interpreted as an infix . operator, and the GT is part of the section formed with this operator.
A section must be enclosed in parentheses, and since the misinterpreted section here is not, the parser complains like this.
Another example of the error:
Prelude> [* 3]
<interactive>:1:2:
A section must be enclosed in parentheses thus: (* 3)
Because of the maximal munch rule, LT.. gets interpreted as the qualified name of the (.) operator in the LT module. Since you can define your own operators in Haskell, the language allows you to fully qualify the names of operators in the same way as you can with functions.
This leads to an ambiguity with the .. used in ranges whenever an operator name starts with .; the ambiguity is resolved by the maximal munch rule, which says that the longest match wins.
For example, Prelude.. is the qualified name of the function composition operator.
> :info Prelude..
(.) :: (b -> c) -> (a -> b) -> a -> c -- Defined in GHC.Base
infixr 9 .
> (+3) Prelude.. (*2) $ 42
87
The reason why [1..3] or [x..y] works, is because a module name must begin with an upper case letter, so 1.. and x.. cannot be qualified names.
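To see the rule in action, any whitespace before the dots stops the lexer from reading LT.. as one qualified name; a small sketch (Ordering derives Enum, which is what makes the range work):

```haskell
-- With a space after LT, the tokens are LT, .., GT, and we get a range
-- over the Ordering type instead of a qualified-name parse error.
orderings :: [Ordering]
orderings = [LT .. GT]

-- A space before the dots alone is already enough to disambiguate.
orderings' :: [Ordering]
orderings' = [LT ..GT]
```

Both evaluate to [LT,EQ,GT].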
Failed to load interface for `LT':
Kenny and Hammar have explained what this means: LT.. is assumed to be the . function in the LT module. Since there is no LT module, your interpreter naturally cannot load it.
A section must be enclosed in parentheses thus: (`LT..` GT)
In the same vein, assuming that LT.. is a reference to the . function in the LT module, your interpreter apparently assumes that you made the mistake of using square brackets instead of parentheses in order to form a "section" (a section is, for example, (+1)).
This is simply an obnoxious little wart in the Haskell language; just remember to use spaces.
I wanted to flip a list constructor usage, to have type:
[a] -> a -> [a]
(for use in a fold), so tried:
(flip :)
but it gives the type:
Prelude> :t (flip :)
(flip :) :: [(a -> b -> c) -> b -> a -> c] -> [(a -> b -> c) -> b -> a -> c]
This surprised me, but it appears that this was parsed as a left section of (:), instead of a partial application of flip. Rewriting it using flip as infix seems to overcome this,
Prelude> :t ((:) `flip`)
((:) `flip`) :: [a] -> a -> [a]
But I couldn't find the rule defining this behavior, and I thought that function application was the highest precedence, and was evaluated left->right, so I would have expected these two forms to be equivalent.
What you want to do is this:
λ> :t (flip (:))
(flip (:)) :: [a] -> a -> [a]
Operators in Haskell are infix. So when you write flip : it is parsed infix, i.e. the : operator is applied with flip as its (left) argument. By putting explicit parentheses around it, as in flip (:), you say that flip should be applied to (:). You can also wrap flip in backticks to use it infix, which you have tried already.
It was putting : in parentheses that made your second example work, not using backticks around flip.
We often say that "function application has highest precedence" to emphasise that e.g. f x + 1 should be read as (f x) + 1, and not as f (x + 1). But this isn't really wholly accurate. If it was, and (flip :) parsed as you expected, then the highest precedence after (f x) + 1 would be the application of (f x) to +; the whole expression f x + 1 would end up being parsed as f applied to 3 arguments: x, +, and 1. But this would happen with all expressions involving infix operators! Even a simple 1 + 1 would be recognised as 1 applied to + and 1 (and then complain about the missing Num instance that would allow 1 to be a function).
Essentially this strict understanding of "function application has highest precedence" would mean that function application would be all that ever happens; infix operators would always end up as arguments to some function, never actually working as infix operators.
Actually precedence (and associativity) are mechanisms for resolving the ambiguity of expressions involving multiple infix operators. Function application is not an infix operator, and simply doesn't take part in the precedence/associativity system. Chains of terms that don't involve operators are resolved as function application before precedence is invoked to resolve the operator applications (hence "highest precedence"), but it's not really precedence that causes it.
Here's how it works. You start with a linear sequence of terms and operators; there's no structure, they were simply written next to each other.
What I'm calling a "term" here can be a non-operator identifier like flip; or a string, character, or numeric literal; or a list expression; or a parenthesised subexpression; etc. They're all opaque as far as this process is concerned; we only know (and only need to know) that they're not infix operators. We can always tell an operator because it will either be a "symbolic" identifier like ++!#>, or an alphanumeric identifier in backticks.
So, sequence of terms and operators. You find all chains of one or more terms in a row that contain no operators. Each such chain is a chain of function applications, and becomes a single term.1
Now if you have two operators directly next to each other you've got an error. If your sequence starts or ends in an operator, that's also an error (unless this is an operator section).
At this point you're guaranteed to have a strictly alternating sequence like term operator term operator term operator term, etc. So you pick the operator with the highest precedence together with the terms to its left and right, call that an operator application, and those three items become a single term. Associativity acts as a tie break when you have multiple operators with the same precedence. Rinse and repeat until the whole expression has become a single term (or associativity fails to break a tie, which is also an error). This means that in an expression involving operators, the "top level application" is always one of the operators, never ordinary function application.
A consequence of this is that there are no circumstances under which an operator can end up passed as the argument to a function. It's simply impossible. This is why we need the (:) syntax to disable the "operator-ness" of operators, and get at their identity as values.
For flip : the only chain of non-operator terms is just flip, so there's no ordinary function application to resolve "at highest precedence". : then goes looking for its left and right arguments (but this is a section, so there's no right argument), and finds flip on its left.
To make flip receive : as an argument instead of the other way around, you must write flip (:). (:) is not an operator (it's in parentheses, so it doesn't matter what's inside), and so we have a chain of two terms with no operators, so that gets resolved to a single expression by applying flip to (:).
1 The other way to look at this is that you identify all sequences of terms not otherwise separated by operators and insert the "function application operator" between them. This "operator" has higher precedence than it's possible to assign to other operators and is left-associative. Then the operator-resolution logic will automatically treat function application the way I've been describing.
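To tie this back to the original goal (a flipped constructor for use in a fold), flip (:) has exactly the accumulator shape foldl wants; a sketch:

```haskell
-- flip (:) :: [a] -> a -> [a]; folding it over a list conses each
-- element onto the front of the accumulator, reversing the list.
rev :: [a] -> [a]
rev = foldl (flip (:)) []
```

For example, rev [1,2,3] is [3,2,1].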
I want to know what the Haskell operator % does. It is hard to find on google, I could not find it in the Haskell Report either.
I saw it used in this piece of code:
fi=zz.bu
bu=zz.(:).(++"zz")
[]#zz=zz;zz#__=zz
zZ%zz=zZ zz$zZ%zz
zz=(([[],[]]++).)
zZ=zipWith;z=zZ((#).show)[1..]$zZ(++)(bu%"Fi")(fi%"Bu")
taken from: https://codegolf.stackexchange.com/questions/88/obfuscated-fizzbuzz-golf/110#110
Here is the relevant section of the Haskell Report:
Haskell provides special syntax to support infix notation. An operator is a function that can be applied using infix syntax (Section 3.4), or partially applied using a section (Section 3.5).
An operator is either an operator symbol, such as + or $$, or is an ordinary identifier enclosed in grave accents (backquotes), such as `op`. For example, instead of writing the prefix application op x y, one can write the infix application x `op` y. If no fixity declaration is given for `op` then it defaults to highest precedence and left associativity (see Section 4.4.2).
Dually, an operator symbol can be converted to an ordinary identifier by enclosing it in parentheses. For example, (+) x y is equivalent to x + y, and foldr (*) 1 xs is equivalent to foldr (\x y -> x * y) 1 xs.
That is to say, there is nothing special about "operators" in Haskell other than their syntax. A function whose name is made from symbols defaults to infix, a function whose name is alphanumeric defaults to prefix, and either can be used in the other style with a bit of extra syntax.
Incidentally, since it's often impossible to search based on operator names using Google, to find operators that are declared in the standard libraries there are two search engines specifically for finding things on Hackage.
In general, we can define a new function foo like so:
foo a b c = (something involving a, b, and c)
Similarly, we can define a binary operator % (constructed out of any combination of symbol characters) like so:
a % b = (something involving a and b)
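As a concrete illustration, here is a small operator invented just for this example (this particular % and its behaviour are made up, not anything standard):

```haskell
-- A user-defined (%): pair each list element with the result of
-- applying a function to it. Purely illustrative.
(%) :: (a -> b) -> [a] -> [(a, b)]
f % xs = [ (x, f x) | x <- xs ]
```

Once defined, (*2) % [1,2,3] gives [(1,2),(2,4),(3,6)], and the same function can be used prefix as (%) (*2) [1,2,3].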
Many functions in Haskell whose names are made up of special characters are infix functions. These include *, +, ==, /, etc. To get the type signature of such a function you put it in parentheses and execute :t, like so:
GHCi> :t (==)
(==) :: Eq a => a -> a -> Bool
I wanted to try and get the type signature of the range function, [a..a], but it seems that this function is infix and can only be used within a list []. I tried all the following, but none worked:
GHCi> :t (..)
<interactive>:1:2: parse error on input `..'
GHCi> :t ([..])
<interactive>:1:3: parse error on input `..'
GHCi> :t [..]
<interactive>:1:2: parse error on input `..'
GHCi> :t ..
<interactive>:1:1: parse error on input `..'
Does anyone know how to get the type signature of the range function?
The .. is not a function, it's actually syntax sugar. It gets translated to one of several functions: enumFrom, enumFromThen, enumFromTo or enumFromThenTo.
It can't be a normal function because it has four forms that work in different ways. That is, all four of these are valid:
[1..] -- enumFrom 1
[1,2..] -- enumFromThen 1 2
[1..10] -- enumFromTo 1 10
[1,2..10] -- enumFromThenTo 1 2 10
These forms use the four functions I mentioned respectively.
If it were just a normal operator, 1.. would give you a partially applied function; instead, it produces a list. Moreover, for a normal function, the [1,2..10] notation would be parsed as [1,(2..10)], when in reality it all gets turned into a single function taking all three numbers as arguments.
These functions are all part of the Enum class, so the .. notation works for any type that is part of it. For example, you could write [False ..] and get the list [False, True]. (Unfortunately, due to current parsing ambiguities, you can't write [False..] because it then assumes False is a module.)
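Since the enumFrom* functions are ordinary exported members of the Enum class, the desugaring is easy to check directly; a sketch:

```haskell
-- Each pair holds a sugared range and its desugared equivalent.
checks :: [([Int], [Int])]
checks =
  [ (take 5 [1 ..],    take 5 (enumFrom 1))
  , (take 4 [1, 3 ..], take 4 (enumFromThen 1 3))
  , ([1 .. 10],        enumFromTo 1 10)
  , ([1, 3 .. 9],      enumFromThenTo 1 3 9)
  ]

-- And Enum membership is all [False ..] needs (note the space).
bools :: [Bool]
bools = [False ..]
```

Every pair in checks is equal, and bools is [False,True].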
Try using a lambda.
> :t \x y -> [x..y]
The notation is just syntactic sugar for enumFrom and enumFromTo so it doesn't really have a conventional type.
It is considered good practice to enable GHC warnings with -Wall. However, I've found out that fixing those warnings has a negative effect for some types of code constructs.
Example 1:
Using the do-notation equivalent of f >> will generate a warning if I don't explicitly use the _ <- f form:
Warning: A do-notation statement discarded a result of type Char.
Suppress this warning by saying "_ <- f",
or by using the flag -fno-warn-unused-do-bind
I understand that I can forget to do something with the result of f. However, it is legitimate to ignore the result (very common in parsers). There is no warning when using >>, right? Using _ <- is heavier than it should be.
Example 2:
Naming a pattern variable with the same name of a visible function will give:
Warning: This binding for `map' shadows the existing binding
imported from Prelude
This gets worse when using record syntax, as the namespace gets polluted quickly. The solution is to give an alternate name in the pattern expression. So I end up using a less appropriate name just to avoid a warning. I don't feel that's a good-enough reason.
I know I can use -fno-warn-... options but should I stick with -Wall after all?
Example 1:
I have re-learned to write parsers in Applicative style -- they are much more concise. E.g., instead of:
funCallExpr :: Parser AST
funCallExpr = do
    func <- atom
    token "("
    arg <- expr
    token ")"
    return $ FunCall func arg
I instead write:
funCallExpr :: Parser AST
funCallExpr = FunCall <$> atom <* token "(" <*> expr <* token ")"
But what can I say, if you don't like the warning, disable it as it suggests.
Example 2:
Yeah I find that warning a bit irritating as well. But it has saved me a couple times.
It ties into naming conventions. I like to keep modules pretty small, and keep most imports qualified (except for "notation" imports like Control.Applicative and Control.Arrow). That keeps the chances of name conflict low, and it just makes things easy to work with. hothasktags makes this style tolerable if you are using tags.
If you are just pattern matching on a field with the same name, you can use -XNamedFieldPuns or -XRecordWildCards to reuse the name:
data Foo = Foo { baz :: Int, bar :: String }
-- RecordWildCards
doubleBaz :: Foo -> Int
doubleBaz (Foo {..}) = baz*baz
-- NamedFieldPuns
reverseBar :: Foo -> String
reverseBar (Foo {bar}) = reverse bar
Another common convention is to add a hungarian prefix to record labels:
data Foo = Foo { fooBaz :: Int, fooBar :: String }
But yeah, records are no fun to work with in Haskell. Anyway, keep your modules small and your abstractions tight and this shouldn't be a problem. Consider it as a warning that says simplifyyyy, man.
I think that use of -Wall may lead to less readable code, especially if it does some arithmetic.
Some other examples, where the use of -Wall suggests modifications with worse readability.
(^) with -Wall requires type signatures for exponents
Consider this code:
norm2 x y = sqrt (x^2 + y^2)
main = print $ norm2 1 1
With -Wall it gives two warnings like this:
rt.hs:1:18:
Warning: Defaulting the following constraint(s) to type `Integer'
`Integral t' arising from a use of `^' at rt.hs:2:18-20
In the first argument of `(+)', namely `x ^ 2'
In the first argument of `sqrt', namely `(x ^ 2 + y ^ 2)'
In the expression: sqrt (x ^ 2 + y ^ 2)
Writing (^(2::Int)) everywhere instead of (^2) is not nice.
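One workaround that keeps the call sites readable is to avoid ^ for small fixed powers and pin the types once at the top level; a sketch:

```haskell
-- Squaring via (*) has no Integral exponent, so nothing defaults,
-- and a single signature on norm2 fixes the Floating type.
sq :: Num a => a -> a
sq x = x * x

norm2 :: Double -> Double -> Double
norm2 x y = sqrt (sq x + sq y)
```

With this, print (norm2 1 1) compiles under -Wall without defaulting warnings.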
Type signatures are required for all top-levels
When writing quick and dirty code, it's annoying. For simple code, where there are at most one or two data types in use (for example, I know that I work only with Doubles), writing type signatures everywhere may complicate reading. In the example above there are two warnings just for the lack of a type signature:
rt.hs:1:0:
Warning: Definition but no type signature for `norm2'
Inferred type: norm2 :: forall a. (Floating a) => a -> a -> a
...
rt.hs:2:15:
Warning: Defaulting the following constraint(s) to type `Double'
`Floating a' arising from a use of `norm2' at rt.hs:2:15-23
In the second argument of `($)', namely `norm2 1 1'
In the expression: print $ norm2 1 1
In the definition of `main': main = print $ norm2 1 1
As a distraction, one of them refers to a line different from the one where the type signature is needed.
Type signatures for intermediate calculations with Integral are necessary
This is a general case of the first problem. Consider an example:
stripe x = fromIntegral . round $ x - (fromIntegral (floor x))
main = mapM_ (print . stripe) [0,0.1..2.0]
It gives a bunch of warnings, one everywhere fromIntegral is used to convert back to Double:
rt2.hs:1:11:
Warning: Defaulting the following constraint(s) to type `Integer'
`Integral b' arising from a use of `fromIntegral' at rt2.hs:1:11-22
In the first argument of `(.)', namely `fromIntegral'
In the first argument of `($)', namely `fromIntegral . round'
In the expression:
fromIntegral . round $ x - (fromIntegral (floor x))
And everyone knows how often one needs fromIntegral in Haskell...
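For reference, the warning-free version of stripe pins each intermediate Integral type explicitly, which shows just how noisy the annotations get; a sketch:

```haskell
-- Explicit Integer annotations silence the defaulting warnings,
-- at a clear cost in readability.
stripe :: Double -> Double
stripe x = fromIntegral (round frac :: Integer)
  where
    frac = x - fromIntegral (floor x :: Integer)
```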
There are more cases like these, where numeric code risks becoming unreadable just to fulfill the -Wall requirements. But I still use -Wall on the code I'd like to be sure of.
I would recommend continuing to use '-Wall' as the default option, and disable any checks you need to on local, per-module basis using an OPTIONS_GHC pragma at the top of relevant files.
The one I might make an exception for is indeed '-fno-warn-unused-do-bind', but one suggestion might be to use an explicit 'void' function ... writing 'void f' seems nicer than '_ <- f'.
As for name shadowing - I think it's generally good to avoid if you can - seeing 'map' in the middle of some code will lead most Haskellers to expect the standard library fn.
Name shadowing can be quite dangerous. In particular, it can become difficult to reason about what scope a name is introduced in.
Unused pattern binds in do notation are not as bad, but can indicate that a less efficient function than necessary is being used (e.g. mapM instead of mapM_).
As BenMos pointed out, using void or ignore to explicitly discard unused values is a nice way to be explicit about things.
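With void from Control.Monad, the discarded result is explicit without the _ <- noise; a sketch using an invented stand-in action:

```haskell
import Control.Monad (void)
import Data.IORef (IORef, modifyIORef, newIORef, readIORef)

-- An invented stand-in for any IO action that returns a value.
bump :: IORef Int -> IO Int
bump ref = modifyIORef ref (+ 1) >> readIORef ref

run :: IO Int
run = do
  ref <- newIORef 0
  _ <- bump ref      -- the explicit form the warning suggests
  void (bump ref)    -- same effect; the intent to discard is named
  readIORef ref
```

Both styles silence the unused-do-bind warning; run returns 2 here.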
It would be quite nice to be able to disable warnings for just a section of code, rather than for everything at once. Also, cabal flags and command line ghc flags take precedence over flags in a file, so I can't have -Wall by default everywhere and even easily just disable it for the entirety of a single file.
All these warnings help to prevent mistakes and should be respected, not suppressed.
If you want to define a function with a name from Prelude, you can hide it using
import Prelude hiding (map)
'Hiding' syntax should only be used for Prelude and modules of the same package, otherwise you risk code breakage by API changes in the imported module.
See: http://www.haskell.org/haskellwiki/Import_modules_properly
There is also the much less intrusive -W option, which enables a set of reasonable warnings mostly related to general coding style (unused imports, unused variables, incomplete pattern matches, etc.).
In particular it does not include the two warnings you mentioned.
From what I'm reading, $ is described as "applies a function to its arguments." However, it doesn't seem to work quite like (apply ...) in Lisp, because it's a binary operator, so really the only thing it looks like it does is help to avoid parentheses sometimes, like foo $ bar quux instead of foo (bar quux). Am I understanding it right? Is the latter form considered "bad style"?
$ is preferred to parentheses when the distance between the opening and closing parens would otherwise be greater than good readability warrants, or if you have several layers of nested parentheses.
For example
i (h (g (f x)))
can be rewritten
i $ h $ g $ f x
In other words, it represents right-associative function application. This is useful because ordinary function application associates to the left, i.e. the following
i h g f x
...can be rewritten as follows
(((i h) g) f) x
Other handy uses of the ($) function include zipping a list with it:
zipWith ($) fs xs
This applies each function in a list of functions fs to a corresponding argument in the list xs, and collects the results in a list. Contrast with sequence fs x which applies a list of functions fs to a single argument x and collects the results; and fs <*> xs which applies each function in the list fs to every element of the list xs.
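A concrete side-by-side of the three behaviours (the lists here are invented for the example):

```haskell
fs :: [Int -> Int]
fs = [(+ 1), (* 2)]

xs :: [Int]
xs = [10, 20]

pairwise :: [Int]
pairwise = zipWith ($) fs xs   -- each function with its partner

oneArg :: [Int]
oneArg = sequence fs 5         -- every function, one argument

cross :: [Int]
cross = fs <*> xs              -- every function, every argument
```

These give [11,40], [6,10], and [11,21,20,40] respectively.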
You're mostly understanding it right---that is, about 99% of the use of $ is to help avoid parentheses, and yes, it does appear to be preferred to parentheses in most cases.
Note, though:
> :t ($)
($) :: (a -> b) -> a -> b
That is, $ is a function; as such, it can be passed to functions, composed with, and anything else you want to do with it. I think I've seen it used by people screwing with combinators before.
The documentation of ($) answers your question. Unfortunately it isn't listed in the automatically generated documentation of the Prelude.
However it is listed in the sourcecode which you can find here:
http://darcs.haskell.org/packages/base/Prelude.hs
However this module doesn't define ($) directly. The following, which is imported by the former, does:
http://darcs.haskell.org/packages/base/GHC/Base.lhs
I included the relevant code below:
infixr 0 $
...
-- | Application operator. This operator is redundant, since ordinary
-- application #(f x)# means the same as #(f '$' x)#. However, '$' has
-- low, right-associative binding precedence, so it sometimes allows
-- parentheses to be omitted; for example:
--
-- > f $ g $ h x = f (g (h x))
--
-- It is also useful in higher-order situations, such as #'map' ('$' 0) xs#,
-- or #'Data.List.zipWith' ('$') fs xs#.
{-# INLINE ($) #-}
($) :: (a -> b) -> a -> b
f $ x = f x
Lots of good answers above, but one omission:
$ cannot always be replaced by parentheses
But any application of $ can be eliminated by using parentheses, and any use of ($) can be replaced by id, since $ is a specialization of the identity function. Uses of (f$) can be replaced by f, but a use like ($x) (take a function as argument and apply it to x) doesn't have any obvious replacement that I see.
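For completeness, the ($ x) section in action; there is no direct parenthesised equivalent, though a lambda would do the same job:

```haskell
-- ($ 3) means "apply the given function to 3"; handy for mapping
-- over a list of functions.
applied :: [Int]
applied = map ($ 3) [(+ 1), (* 2), subtract 1]
```

Here applied is [4,6,2], the same as map (\f -> f 3) [(+ 1), (* 2), subtract 1].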
If I look at your question and the answers here, Apocalisp and you are both right:
$ is preferred to parentheses under certain circumstances (see his answer)
foo (bar quux) is certainly not bad style!
Also, please check out difference between . (dot) and $ (dollar sign), another SO question very much related to yours.