Different results between interactive and compiled Haskell (Project Euler 20)

I'm doing problem 20 on Project Euler - finding the sum of the digits of 100! (factorial, not enthusiasm).
Here is the program I wrote:
import Data.Char
main = print $ sumOfDigits (product [1..100])
sumOfDigits :: Int -> Int
sumOfDigits n = sum $ map digitToInt (show n)
I compiled it with ghc -o p20 p20.hs and executed it, getting only 0 on my command line.
Puzzled, I invoked ghci and ran the following line:
sum $ map Data.Char.digitToInt (show (product [1..100]))
This returned the correct answer. Why didn't the compiled version work?

The reason is the type signature
sumOfDigits :: Int -> Int
sumOfDigits n = sum $ map digitToInt (show n)
use
sumOfDigits :: Integer -> Int
and you will get the same thing as in GHCi (what you want).
Int is the type for machine-word-sized "ints", while Integer is the type for mathematically correct, arbitrary-precision integers.
if you type
:t product [1..100]
into GHCi you will get something like
product [1..100] :: (Enum a, Num a) => a
that is, for ANY type that has instances of the Enum and Num type classes, product [1..100] could be a value of that type
product [1..100] :: Integer
should return 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000, which is far bigger than your machine can represent in a single word. Because of roll over (overflow),
product [1..100] :: Int
will return 0
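You can check this in GHCi (the figures assume a 64-bit Int; 100! contains 97 factors of 2, so the value wraps around to exactly 0):
maxBound :: Int
9223372036854775807
product [1..100] :: Int
0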
given this, you might think
sum $ map Data.Char.digitToInt (show (product [1..100]))
would not type check, because it has multiple possible incompatible interpretations. But, in order to be usable as a calculator, Haskell defaults to using Integer in situations like this, thus explaining your behavior.
For the same reason, if you had NOT given sumOfDigits an explicit type signature it would have done what you want, since the most general type is
sumOfDigits :: Show a => a -> Int
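For completeness, here is a minimal sketch of the fix applied to the original program -- only the type signature changes (dropping it entirely would also work, as explained above):
import Data.Char
main = print $ sumOfDigits (product [1..100])
sumOfDigits :: Integer -> Int
sumOfDigits n = sum $ map digitToInt (show n)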

Related

Why does ghc warn that ^2 requires "defaulting the constraint to type 'Integer'"?

If I compile the following source file with ghc -Wall:
main = putStr . show $ squareOfSum 5
squareOfSum :: Integral a => a -> a
squareOfSum n = (^2) $ sum [1..n]
I get:
powerTypes.hs:4:18: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Integer’
(Integral b0) arising from a use of ‘^’ at powerTypes.hs:4:18-19
(Num b0) arising from the literal ‘2’ at powerTypes.hs:4:19
• In the expression: (^ 2)
In the expression: (^ 2) $ sum [1 .. n]
In an equation for ‘squareOfSum’:
squareOfSum n = (^ 2) $ sum [1 .. n]
|
4 | squareOfSum n = (^2) $ sum [1..n]
| ^^
I understand that the type of (^) is:
Prelude> :t (^)
(^) :: (Integral b, Num a) => a -> b -> a
which means it works for any a^b provided a is a Num and b is an Integral. I also understand the type hierarchy to be:
Num --> Integral --> Int or Integer
where --> denotes "includes" and the first two are typeclasses while the last two are types.
Why does ghc not conclusively infer that 2 is an Int, instead of "defaulting the constraints to Integer"? Why is ghc defaulting anything? Is replacing 2 with 2 :: Int a good way to resolve this warning?
In Haskell, numeric literals have a polymorphic type
2 :: Num a => a
This means that the expression 2 can be used to generate a value in any numeric type. For instance, all these expression type-check:
2 :: Int
2 :: Integer
2 :: Float
2 :: Double
2 :: MyCustomTypeForWhichIDefinedANumInstance
Technically, each time we use 2 we would have to write 2 :: T to choose the actual numeric type T we want. Fortunately, this is often not needed since type inference can frequently deduce T from the context. E.g.,
foo :: Int -> Int
foo x = x + 2
Here, x is an Int because of the type annotation, and + requires both operands to have the same type, hence Haskell infers 2 :: Int. Technically, this is because (+) has type
(+) :: Num a => a -> a -> a
Sometimes, however, type inference can not deduce T from the context. Consider this example involving a custom type class:
class C a where bar :: a -> String
instance C Int where bar x = "Int: " ++ show x
instance C Integer where bar x = "Integer: " ++ show x
test :: String
test = bar 2
What is the value of test? Well, if 2 is an Int, then we have test = "Int: 2". If it is an Integer, then we have test = "Integer: 2". If it's another numeric type T, we can not find an instance for C T.
This code is inherently ambiguous. In such a case, Haskell mandates that numeric types that can not be deduced are defaulted to Integer (the programmer can change this default to another type, but it's not relevant now). Hence we have test = "Integer: 2".
While this mechanism makes our code type check, it might cause an unintended result: for all we know, the programmer might have wanted 2 :: Int instead. Because of this, GHC chooses the default, but warns about it.
In your code, (^) can work with any Integral type for the exponent. But, in principle, x ^ (2::Int) and x ^ (2::Integer) could lead to different results. We know this is not the case since we know the semantics of (^), but to the compiler (^) is just some function with that type, which could behave differently on Int and Integer. Consider, e.g.,
a ^ n = if n + 3000000000 < 0 then 0 else 1
When n = 2, if we use n :: Int the if guard could be true on a 32 bit system. This is not the case when using n :: Integer which never overflows.
The standard solution, in these cases, is to resolve the warning using something like x ^ (2 :: Int).
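Applied to your code, a minimal sketch of that fix -- only the exponent literal gets the annotation, which resolves the warning about (^):
squareOfSum :: Integral a => a -> a
squareOfSum n = (^ (2 :: Int)) $ sum [1..n]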

Besides as-pattern, what else can @ mean in Haskell?

I am studying Haskell currently and trying to understand a project that uses Haskell to implement cryptographic algorithms. After reading Learn You a Haskell for Great Good online, I began to understand the code in that project. Then I found I am stuck at the following code with the "@" symbol:
-- | Generate an @n@-dimensional secret key over @rq@.
genKey :: forall rq rnd n . (MonadRandom rnd, Random rq, Reflects n Int)
       => rnd (PRFKey n rq)
genKey = fmap Key $ randomMtx 1 $ value @n
Here the randomMtx is defined as follows:
-- | A random matrix having a given number of rows and columns.
randomMtx :: (MonadRandom rnd, Random a) => Int -> Int -> rnd (Matrix a)
randomMtx r c = M.fromList r c <$> replicateM (r*c) getRandom
And PRFKey is defined below:
-- | A PRF secret key of dimension @n@ over ring @a@.
newtype PRFKey n a = Key { key :: Matrix a }
All information sources I can find say that @ is the as-pattern, but this piece of code is apparently not that case. I have checked the online tutorial, blogs and even the Haskell 2010 language report at https://www.haskell.org/definition/haskell2010.pdf. There is simply no answer to this question.
More code snippets can be found in this project using @ in this way too:
-- | Generate public parameters (\( \mathbf{A}_0 \) and \(
-- \mathbf{A}_1 \)) for @n@-dimensional secret keys over a ring @rq@
-- for gadget indicated by @gad@.
genParams :: forall gad rq rnd n .
             (MonadRandom rnd, Random rq, Reflects n Int, Gadget gad rq)
          => rnd (PRFParams n gad rq)
genParams = let len = length $ gadget @gad @rq
                n = value @n
            in Params <$> (randomMtx n (n*len)) <*> (randomMtx n (n*len))
I deeply appreciate any help on this.
That @n is an advanced feature of modern Haskell, which is usually not covered by tutorials like LYAH, nor can it be found in the Report.
It's called a type application and is a GHC language extension. To understand it, consider this simple polymorphic function
dup :: forall a . a -> (a, a)
dup x = (x, x)
Intuitively calling dup works as follows:
the caller chooses a type a
the caller chooses a value x of the previously chosen type a
dup then answers with a value of type (a,a)
In a sense, dup takes two arguments: the type a and the value x :: a. However, GHC is usually able to infer the type a (e.g. from x, or from the context where we are using dup), so we usually pass only one argument to dup, namely x. For instance, we have
dup True :: (Bool, Bool)
dup "hello" :: (String, String)
...
Now, what if we want to pass a explicitly? Well, in that case we can turn on the TypeApplications extension, and write
dup @Bool True :: (Bool, Bool)
dup @String "hello" :: (String, String)
...
Note the @... arguments carrying types (not values). Those exist only at compile time -- at runtime the arguments do not exist.
Why do we want that? Well, sometimes there is no x around, and we want to prod the compiler to choose the right a. E.g.
dup @Bool :: Bool -> (Bool, Bool)
dup @String :: String -> (String, String)
...
Type applications are often useful in combination with some other extensions which make type inference unfeasible for GHC, like ambiguous types or type families. I won't discuss those, but you can simply understand that sometimes you really need to help the compiler, especially when using powerful type-level features.
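A small everyday example (a sketch, not from your code): read :: Read a => String -> a often leaves a undetermined by the context, and a type application picks it without a separate annotation.
{-# LANGUAGE TypeApplications #-}
n :: Int
n = read @Int "42"   -- same as (read "42" :: Int)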
Now, about your specific case. I don't have all the details, I don't know the library, but it's very likely that your n represents a kind of natural-number value at the type level. Here we are diving into rather advanced extensions, like the above-mentioned ones plus DataKinds, maybe GADTs, and some typeclass machinery. While I can't explain everything, hopefully I can provide some basic insight. Intuitively,
foo :: forall n . some type using n
takes as argument @n, a kind-of compile-time natural, which is not passed at runtime. Instead,
foo :: forall n . C n => some type using n
takes @n (compile-time), together with a proof that n satisfies constraint C n. The latter is a run-time argument, which might expose the actual value of n. Indeed, in your case, I guess you have something vaguely resembling
value :: forall n . Reflects n Int => Int
which allows the code to bring the type-level natural down to the term level, essentially accessing the "type" as a "value". (The above type is considered an "ambiguous" one, by the way -- you really need @n to disambiguate.)
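To make this concrete without the library, here is a hedged sketch using GHC's built-in type-level naturals; value' is a hypothetical stand-in for the library's value, not its actual implementation:
{-# LANGUAGE AllowAmbiguousTypes, DataKinds, ScopedTypeVariables, TypeApplications #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, natVal)

-- Brings a type-level natural down to the term level.
value' :: forall n . KnownNat n => Int
value' = fromInteger (natVal (Proxy @n))

-- value' @3  ==>  3   (the @3 is required; nothing else determines n)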
Finally: why should one want to pass n at the type level if we then later on convert that to the term level? Wouldn't it be easier to simply write functions like
foo :: Int -> ...
foo n ... = ... use n
instead of the more cumbersome
foo :: forall n . Reflects n Int => ...
foo ... = ... use (value @n)
The honest answer is: yes, it would be easier. However, having n at the type level allows the compiler to perform more static checks. For instance, you might want a type to represent "integers modulo n", and allow adding those. Having
data Mod = Mod Int -- Int modulo some n
foo :: Int -> Mod -> Mod -> Mod
foo n (Mod x) (Mod y) = Mod ((x+y) `mod` n)
works, but there is no check that x and y are of the same modulus. We might add apples and oranges, if we are not careful. We could instead write
data Mod n = Mod Int -- Int modulo n
foo :: Int -> Mod n -> Mod n -> Mod n
foo n (Mod x) (Mod y) = Mod ((x+y) `mod` n)
which is better, but still allows calling foo 5 x y even when n is not 5. Not good. Instead,
data Mod n = Mod Int -- Int modulo n
-- a lot of type machinery omitted here
foo :: forall n . SomeConstraint n => Mod n -> Mod n -> Mod n
foo (Mod x) (Mod y) = Mod ((x+y) `mod` (value @n))
prevents things from going wrong. The compiler statically checks everything. The code is harder to use, yes, but in a sense making it harder to use is the whole point: we want to make it impossible for the user to try adding something of the wrong modulus.
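Here is a hedged, self-contained sketch of that last version using GHC.TypeLits naturals (the real library's types and constraints differ, but the idea is the same):
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables, TypeApplications #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal)

newtype Mod (n :: Nat) = Mod Int   -- Int modulo the type-level n
  deriving Show

-- Both arguments are statically forced to have the same modulus n.
addMod :: forall n . KnownNat n => Mod n -> Mod n -> Mod n
addMod (Mod x) (Mod y) = Mod ((x + y) `mod` fromInteger (natVal (Proxy @n)))

-- addMod (Mod 3 :: Mod 5) (Mod 4)            ==>  Mod 2
-- addMod (Mod 3 :: Mod 5) (Mod 4 :: Mod 7)   ==>  rejected at compile time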
Concluding: these are very advanced extensions. If you're a beginner, you will need to slowly progress towards these techniques. Don't be discouraged if you can't grasp them after only a short study, it does take some time. Make a small step at a time, solve some exercises for each feature to understand the point of it. And you'll always have StackOverflow when you are stuck :-)

why is this snippet valid with an explicit value, but invalid as a function?

I'm trying to work a problem where I need to calculate the "small" divisors of an integer. I'm just bruteforcing through all numbers up to the square root of the given number, so to get the divisors of 10 I'd write:
[k|k<-[1...floor(sqrt 10)],rem 10 k<1]
This seems to work well. But as soon as I plug this into a function
f n=[k|k<-[1...floor(sqrt n)],rem n k<1]
and actually call this function, I get an error
f 10
No instance for (Floating t0) arising from a use of `it'
The type variable `t0' is ambiguous
Note: there are several potential instances:
instance Floating Double -- Defined in `GHC.Float'
instance Floating Float -- Defined in `GHC.Float'
In the first argument of `print', namely `it'
In a stmt of an interactive GHCi command: print it
As far as I understand, the actual print function that prints the result to the console is causing trouble, but I cannot find out what is wrong. It says the type is ambiguous, but the function can clearly only return a list of integers. Then again I checked the type, and the (inferred) type of f is
f :: (Floating t, Integral t, RealFrac t) => t -> [t]
I can understand that f should be able to accept any real numerical value, but can anyone explain why the return type should be anything other than Integral or Int?
[k|k<-[1...floor(sqrt 10)],rem 10 k<1]
This works because the first 10 is not the same as the second one - to see this, we need the type signatures of the functions involved:
sqrt :: Floating a => a -> a
rem :: Integral a => a -> a -> a
So the first one works for things that have a floating-point representation - i.e. Float, Double, ... - and the second one works for Int, Integer (big integers), Word8 (unsigned 8-bit integers), ...
So for the 10 in sqrt 10 the compiler says "ahh, this is a floating-point number, null problemo", and for the 10 in rem 10 k, "ahh, this is an integer-like number, null problemo" as well.
But when you bundle them up in a function, you are saying that n has to be both a floating-point and an integral number; the compiler knows no such type and complains.
So what do we do to fix that? (And as a side note: ranges in Haskell are written with .. not ...!) Let us start by taking a concrete solution and generalizing it.
f :: Int -> [Int]
f n = [k | k <- [1..n'], rem n k < 1]
  where n' = floor $ sqrt $ fromIntegral n
The necessary part was converting the Int to a floating-point number before taking the square root. But if you put that in a library, all your users have to stick with Int, which is okay but far from ideal - so how do we generalize (as promised)? We let GHCi do that for us; using a lazy language, we tend to be lazy ourselves.
We start by commenting out the type-signature
-- f :: Int -> [Int]
f n = [k | k <- [1..n'], rem n k < 1]
  where n' = floor $ sqrt $ fromIntegral n
$> ghci MyLib.hs
....
MyLib > :type f
f :: Integral a => a -> [a]
Then we can take this signature, put it into the library, and if someone works with Word8 or Integer that will work as well.
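Putting it together, the library-ready version is just the code from above with the GHCi-inferred signature written back in (the :: Double annotation only pins the intermediate floating-point type and avoids a defaulting warning):
f :: Integral a => a -> [a]
f n = [k | k <- [1..n'], rem n k < 1]
  where n' = floor (sqrt (fromIntegral n :: Double))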
Another solution would be to use rem (floor n) k < 1 and have
f :: (Floating a, RealFrac a, Integral b) => a -> [b]
as the type, but that would be kind of awkward.

Error with divides, Haskell

My code, where n = 4:
f n = map (4/) [1..n]
main = do
  n <- getLine
  print(f (product(map read $ words n :: [Int])))
If I use map (4/) [1..n] in the terminal I get the right answer: [4.0,2.0,1.3333333333333333,1.0].
But in my program it doesn't work; the error message is: No instance for (Fractional Int) arising from a use of `f'
Where is my mistake?
Your n is an Int which isn't a Fractional type. Those are the ones that support division using the / operator. You can either replace / with div to get integer division (truncates to whole numbers) or you can add fromIntegral to convert the n to the correct type.
Your code should look something like
f n = map (4/) [1..fromIntegral n]
To clarify a bit more: Your function f ultimately does division on the parameter that's given to it. This leads the type inference engine to determine that the parameter should be a Fractional type. Then, when you use that function in your main, you explicitly give it an Int.
That's why the error says "You gave me an Int. There's no instance for Fractional Int (read as, 'Int isn't a Fractional type') and I need it to be because you're passing an Int into something that requires that instance."
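Applied to the original program, a minimal corrected sketch using the fromIntegral approach (and keeping the read :: [Int] parsing) might look like this:
f :: Int -> [Double]
f n = map (4 /) [1 .. fromIntegral n]

main :: IO ()
main = do
  line <- getLine
  print (f (product (map read (words line) :: [Int])))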

Haskell ASCII codes

I'm trying to make a function that takes an a (which can be any type: Int, Char, ...) and creates a list which has that input replicated the number of times that corresponds to its ASCII code.
I've created this:
toList n = replicate (fromEnum n) n
When trying to use the function in the cmd it says it couldn't match the expected type Int with Char; however, if I use my function directly in the cmd with an actual value it does what it's supposed to.
What I mean is: toList 'a' --> gives me an error
replicate (fromEnum 'a') 'a' --> gives a result without problem
I've loaded the module Data.Char (ord)
How can I fix this, and why does this happen?
Thanks in advance :)
What you're missing is a type declaration. You say that you want it to be able to take any type, but what you really want is for toList to take something that is an instance of Enum. When you play around with it in GHCi, it'll let you do let toList n = replicate (fromEnum n) n, because GHCi will automatically pick some defaults that seem to make sense, but when compiling a module with GHC, it won't work without the type declaration. You want
toList :: (Enum a) => a -> [a]
toList n = replicate (fromEnum n) n
The reason why you have to have the (Enum a) => in the type signature is because fromEnum has the type signature (Enum a) => a -> Int. So you see it doesn't just take any type, only those that have an instance for Enum.
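With that signature in place, a quick sketch of how it behaves (the lengths come straight from fromEnum, i.e. the code point of the value):
toList :: Enum a => a -> [a]
toList n = replicate (fromEnum n) n

-- toList 'a'    ==>  a list of 97 'a' characters, since fromEnum 'a' == 97
-- toList False  ==>  [], since fromEnum False == 0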

Resources