Generating a list of random values and printing them to standard output in Haskell

I am pretty new to Haskell and I am struggling to achieve something relatively simple: to generate a list of random numbers and print them to standard output.
Since randomness is pretty much contrary to function purity in the FP world (i.e. functions should always return the same result for the same input), I understand that in this case the System.Random module in Haskell returns IO actions instead.
My code so far looks like the following:
import System.Random

randomNumber :: (Random a) => (a, a) -> IO a
randomNumber (a,b) = randomRIO (a,b)

main :: IO ()
main = do
  points <- sequence (map (\n -> randomNumber ((-1.0), 1.0)) [1..10])
  print points
The idea is simple: generate a list of ten random elements (there are probably better ways to achieve that). My first approach was to create a function that returns a random number (randomNumber here, of type IO a) and to use it when mapping over a list of elements, producing a list of IO actions ([IO a]) that sequence then turns into a single IO [a].
From my understanding, sequence (map (\n -> randomNumber ((-1.0), 1.0)) [1..10]) has type IO [a], but I do not know how to use it. How can I actually use points as a value of type [a] instead of IO [a]?
EDIT: Adding the print call within the do block produces some errors I don't really know how to get rid of:
Main.hs:8:40:
No instance for (Random a0) arising from a use of ‘randomNumber’
The type variable ‘a0’ is ambiguous
Note: there are several potential instances:
instance Random Bool -- Defined in ‘System.Random’
instance Random Foreign.C.Types.CChar -- Defined in ‘System.Random’
instance Random Foreign.C.Types.CDouble
-- Defined in ‘System.Random’
...plus 33 others
In the expression: randomNumber ((- 1.0), 1.0)
In the first argument of ‘map’, namely
‘(\ n -> randomNumber ((- 1.0), 1.0))’
In the first argument of ‘sequence’, namely
‘(map (\ n -> randomNumber ((- 1.0), 1.0)) [1 .. 10])’
Main.hs:8:55:
No instance for (Num a0) arising from a use of syntactic negation
The type variable ‘a0’ is ambiguous
Note: there are several potential instances:
instance Num Double -- Defined in ‘GHC.Float’
instance Num Float -- Defined in ‘GHC.Float’
instance Integral a => Num (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus 37 others
In the expression: (- 1.0)
In the first argument of ‘randomNumber’, namely ‘((- 1.0), 1.0)’
In the expression: randomNumber ((- 1.0), 1.0)
Main.hs:8:56:
No instance for (Fractional a0) arising from the literal ‘1.0’
The type variable ‘a0’ is ambiguous
Note: there are several potential instances:
instance Fractional Double -- Defined in ‘GHC.Float’
instance Fractional Float -- Defined in ‘GHC.Float’
instance Integral a => Fractional (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus three others
In the expression: 1.0
In the expression: (- 1.0)
In the first argument of ‘randomNumber’, namely ‘((- 1.0), 1.0)’
Main.hs:9:9:
No instance for (Show a0) arising from a use of ‘print’
The type variable ‘a0’ is ambiguous
Relevant bindings include points :: [a0] (bound at Main.hs:8:9)
Note: there are several potential instances:
instance Show Double -- Defined in ‘GHC.Float’
instance Show Float -- Defined in ‘GHC.Float’
instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus 65 others
In a stmt of a 'do' block: print points
In the expression:
do { points <- sequence
(map (\ n -> randomNumber ((- 1.0), 1.0)) [1 .. 10]);
print points }
In an equation for ‘main’:
main
= do { points <- sequence
(map (\ n -> randomNumber ((- 1.0), 1.0)) [1 .. 10]);
print points }
Failed, modules loaded: none.

Why does this happen?
There is one message common to all your errors: The type variable ‘a0’ is ambiguous. Why is that? Well, randomNumber works for any instance of Random, and there are a bunch of instances. -1.0 adds a Num constraint, since you want to be able to negate a value, and the literal 1.0 further requires your type to be an instance of Fractional. That narrows down the set of usable types, but it's still not unique: Float, Double and four others are suitable.
At this point, the compiler gives up, and you need to tell it what instance you actually want to use.
How to fix this
There are many ways to fix this. For one, we could introduce a small helper function:
-- fix a to double
randomDouble :: (Double, Double) -> IO Double
randomDouble = randomNumber
Or we could annotate the type of the ambiguous 1.0:
points <- sequence (map (\n -> randomNumber ((-1.0), 1.0 :: Double)) [1..10])
-- ^^^ as a Double
Or we could annotate the type of the list:
print (points :: [Double])
-- ^^^^^^^^^^^^^^^^^^ points is a list of Doubles
Which one you choose is more or less a matter of style and personal preference. That said, sequence . map f $ xs can be written as mapM f xs, but since you never actually use the elements of [1..10], you're better off with replicateM 10 $ randomNumber (...). Both mapM and replicateM can be found in Control.Monad.
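Putting it together, one possible version of the full program, here using replicateM and the Double annotation (a sketch; any of the fixes above works equally well):

import Control.Monad (replicateM)
import System.Random (Random, randomRIO)

randomNumber :: Random a => (a, a) -> IO a
randomNumber = randomRIO

main :: IO ()
main = do
  points <- replicateM 10 (randomNumber (-1.0, 1.0 :: Double))
  print points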
TL;DR
When GHC yells at you for ambiguous types, annotate them.

A couple of points:
You called the function randomNumber, but allowed it to take any type that is an instance of the Random class (including Char etc.). If you only want it to take numbers, you should change the signature to match its purpose, e.g. randomNumber :: (Int, Int) -> IO Int, or more generically randomNumber :: (Num n, Random n) => (n, n) -> IO n.
sequence takes a list of actions ([IO a]) and returns a list in the IO monad (IO [a]). It basically just executes each action, stores the results, and re-wraps the list in IO. You could try something like replicateM 10 $ randomNumber (1,10). replicateM takes an Int and an action to carry out, runs the action that many times, and returns the results as a list inside the monad (as Zeta pointed out, sequence is used internally in a call to replicateM).
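A minimal sketch along those lines (fixing the type to Int here is just one concrete choice):

import Control.Monad (replicateM)
import System.Random (randomRIO)

-- ten random Ints between 1 and 10
randomNumber :: (Int, Int) -> IO Int
randomNumber = randomRIO

main :: IO ()
main = replicateM 10 (randomNumber (1, 10)) >>= print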

Related

Testing empty list [] with Eq type

Currently, I am writing a function in Haskell to check whether a relation, given as a list of pairs, is reflexive or not.
isReflexive :: Eq a => [(a, a)] -> Bool
isReflexive [] = True
isReflexive xs = and [elem (x, x) xs | x <- [fst u | u <- xs] ++ [snd u | u <- xs]]
test = do
  print (isReflexive [])
main = test
The function works fine on lists that are not empty. However, when I test the empty list with the function, it raises an error:
Ambiguous type variable ‘a2’ arising from a use of ‘isReflexive’ prevents the constraint ‘(Eq a2)’ from being solved.
Probable fix: use a type annotation to specify what ‘a2’ should be.
These potential instances exist:
instance Eq Ordering -- Defined in ‘GHC.Classes’
instance Eq Integer -- Defined in ‘integer-gmp-1.0.2.0:GHC.Integer.Type’
instance Eq a => Eq (Maybe a) -- Defined in ‘GHC.Maybe’
...plus 22 others
...plus 7 instances involving out-of-scope types
(use -fprint-potential-instances to see them all)
• In the first argument of ‘print’, namely ‘(isReflexive [])’
How to fix this error?
The problem is simply that, in order to apply isReflexive, GHC needs to know which type you are using it on.
The type signature of isReflexive, Eq a => [(a, a)] -> Bool, doesn't tell GHC a concrete type that the function works on. That's perfectly fine and usual, but most often the code that calls the function makes clear what exactly a is in that particular application. That's not so here, because [] itself has a polymorphic (and therefore ambiguous) type, [a] (for any a).
To fix it you simply have to provide a concrete type for your [] here, which is consistent with the signature of isReflexive. It really doesn't matter what, but an example from many that will work is:
test = do
  print (isReflexive ([] :: [(Int, Int)]))
(Note that this is exactly what GHC is telling you when it says Probable fix: use a type annotation to specify what 'a2' should be. The 'a2' in that message corresponds to the 'a' here; GHC tends to use 'a1', 'a2', etc. to refer to type variables.)
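For reference, a complete version that compiles (the Int annotation and the extra test cases are just illustrative):

isReflexive :: Eq a => [(a, a)] -> Bool
isReflexive [] = True
isReflexive xs = and [elem (x, x) xs | x <- [fst u | u <- xs] ++ [snd u | u <- xs]]

main :: IO ()
main = do
  print (isReflexive ([] :: [(Int, Int)]))     -- True
  print (isReflexive [(1, 1), (1, 2), (2, 2)]) -- True: 1 and 2 both appear as (x, x)
  print (isReflexive [(1, 2)])                 -- False: (1, 1) and (2, 2) are missing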

Why does multiplesOf num max = [num*k | k <- [1..floor (max/num)]] throw an error?

I am trying to create a set of all the multiples of a number num under an upper limit max. I have written the following function in Haskell:
multiplesOf num max = [num*k | k <- [1..floor (max/num)]]
Why does this function throw the following error during run-time and how can it be fixed?
<interactive>:26:1: error:
• Ambiguous type variable ‘a0’ arising from a use of ‘print’
prevents the constraint ‘(Show a0)’ from being solved.
Probable fix: use a type annotation to specify what ‘a0’ should be.
These potential instances exist:
instance Show Ordering -- Defined in ‘GHC.Show’
instance Show Integer -- Defined in ‘GHC.Show’
instance Show a => Show (Maybe a) -- Defined in ‘GHC.Show’
...plus 22 others
...plus 18 instances involving out-of-scope types
(use -fprint-potential-instances to see them all)
• In a stmt of an interactive GHCi command: print it
This error was thrown when, for example, entering multiplesOf 3 1000.
There is no error in defining the function; the error arises when you want to use it.
If we take a look at the type of the function you have constructed, we see:
multiplesOf :: (RealFrac t, Integral t) => t -> t -> [t]
So here the type of the input and output values has to be both Integral and RealFrac. That means the numbers should be integral, yet at the same time support real division. There are not many types that fit those requirements.
This problem arises from the fact that you use (/) and floor here: (/) hints that max and num are RealFracs, but the result of floor is an Integral, and you then multiply those integral numbers with num again.
You can, however, reduce the number of type constraints by making use of div :: Integral a => a -> a -> a. This is integer division, with the result truncated towards negative infinity, so we can implement the function like:
multiplesOf :: Integral i => i -> i -> [i]
multiplesOf num max = [num*k | k <- [1..div max num]]
or we can even save ourselves the trouble of dividing and multiplying altogether and use a range expression that does the work for us:
multiplesOf :: (Num n, Enum n) => n -> n -> [n]
multiplesOf num max = [num, (num+num) .. max]
The latter is even less constrained, since Integral i implies Real i and Enum i.
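A quick check in GHCi (illustrative; both versions give the same results here):

> multiplesOf 3 20 :: [Int]
[3,6,9,12,15,18]
> take 5 (multiplesOf 3 1000) :: [Int]   -- the call from the question now works
[3,6,9,12,15]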

Haskell - Signature and Type error

I'm new to Haskell and I'm having some trouble with function signature and types. Here's my problem:
I'm trying to make a list of every number between 1 and 999 that is divisible by the sum of its own digits. For example the number 280 can be in that list because 2+8+0=10 and 280/10 = 28. On the other hand 123 can't, because 1+2+3=6 and 123/6 = 20.5. Whenever the final division gives a number with a decimal part, the number will never be in that list.
Here's my code:
let inaHelper x = (floor(x)`mod`10)+ (floor(x/10)`mod`10)+(floor(x/100)`mod`10)
This first part just computes the sum of the digits of a number, and it works...
Here's the final part:
let ina = [x | x <- [1..999] , x `mod` (inaHelper x) == 0 ]
This final part should build the list, checking for each number whether it belongs there or not. But it gives this error:
No instance for (Integral t0) arising from a use of ‘it’
The type variable ‘t0’ is ambiguous
Note: there are several potential instances:
instance Integral Integer -- Defined in ‘GHC.Real’
instance Integral Int -- Defined in ‘GHC.Real’
instance Integral Word -- Defined in ‘GHC.Real’
In the first argument of ‘print’, namely ‘it’
In a stmt of an interactive GHCi command: print it
...
ina = [x | x <- [1..999] , x `mod` (inaHelper x) == 0 ]
What is the type of x? Integer? Int? Word? The code above is very generic, and will work on any integral type. If we try to print its type, we get something like this:
> :t ina
ina :: (Integral t, ...) => [t]
meaning that the result is a list of any type t we want, provided t is an integral type (and a few other constraints).
When we ask GHCi to print the result, GHCi needs to choose the type of x, but can not decide unambiguously. This is what the error message states.
Try specifying a type when you print the result. E.g.
> ina :: [Int]
This will make GHCi choose the type t to be Int, removing the ambiguity.

How and why is [1 .. 0] different from [1 .. -1] in Haskell?

I have defined the following function
let repl x n = [x | _ <- [1..n]]
which imitates the built-in replicate function.
While experimenting with it, I noticed a strange thing: repl 10 0 evaluates to [], while repl 10 -1 produces an error:
No instance for (Show (t10 -> [t0])) arising from a use of ‘print’
In a stmt of an interactive GHCi command: print it
On the other hand, both [1 .. 0] and [1 .. -1] evaluate to [] without producing any errors.
Moreover, both [42 | _ <- [1 .. 0]] and [42 | _ <- [1 .. -1]] evaluate to [] without errors.
So why does my function call result in an error where the explicit substitution doesn't? And more importantly, where does the apparent difference between [1 .. 0] and [1 .. -1] stem from?
And a final question: when I write:
repl 42 -1
the error is exactly the same as with repl 10 -1, i.e. it still has the (Show (t10 -> [t0])) bit in it. I was expecting it to have something like ((Show (t42 -> [t0]))). What's this 10?
Other answers have pointed out that you need to wrap -1 in parentheses. This is an odd corner of the Haskell 98 spec that jumps out to bite unexpectedly. It's not the case that you can never write a negative number without parentheses: -1 * 5 is fine. It's just that the unary prefix operator doesn't have higher precedence than the binary infix operator, so a - is frequently parsed as the latter. Whitespace around operators is not significant in Haskell.
And the incomprehensible typeclass error doesn't help. Incidentally, t10 and t0 are just placeholder type variables made up by the compiler; I don't think it has anything to do with the actual numeric literals you use. And informally, errors like Could not deduce (Num (a0 -> t)) usually indicate to me that a function is applied to too few arguments.
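Two quick GHCi checks of that parsing rule (illustrative only):

> -1 * 5      -- unary minus at the start of an expression is fine
-5
> id (-1)     -- as a function argument, a negative literal needs parentheses
-1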
Alternatively, the (undocumented?) NegativeLiterals language extension in GHC 7.8 changes the meaning of -1 to address this problem.
> :set -XNegativeLiterals
> :t repl 10 -1
repl 10 -1 :: Num t => [t]
You have not included the full error message in your question, and if you had, you'd see that repl 10 -1 is parsed as (repl 10) - (1) which is not what you intended.
You'd get the same error with repl 10 +1.
You can often find clues as to how your code is parsed by looking closely at the error message. And there's no harm in overusing parentheses while you learn, either.
The program:
repl x n = [x | _ <- [1..n]] -- line 1

main = print (repl 10 -1) -- line 3
The message:
prog.hs:3:8:
No instance for (Show (t1 -> [t0])) arising from a use of `print'
Possible fix: add an instance declaration for (Show (t1 -> [t0]))
In the expression: print (repl 10 - 1)
In an equation for `main': main = print (repl 10 - 1)
prog.hs:3:15:
No instance for (Num t1) arising from a use of `repl'
The type variable `t1' is ambiguous
Possible fix: add a type signature that fixes these type variable(s)
Note: there are several potential instances:
instance Num Double -- Defined in `GHC.Float'
instance Num Float -- Defined in `GHC.Float'
instance Integral a => Num (GHC.Real.Ratio a)
-- Defined in `GHC.Real'
...plus three others
In the first argument of `(-)', namely `repl 10' --------- NB!
In the first argument of `print', namely `(repl 10 - 1)'
In the expression: print (repl 10 - 1)
prog.hs:3:20:
No instance for (Num t0) arising from the literal `10'
The type variable `t0' is ambiguous
Possible fix: add a type signature that fixes these type variable(s)
Note: there are several potential instances:
instance Num Double -- Defined in `GHC.Float'
instance Num Float -- Defined in `GHC.Float'
instance Integral a => Num (GHC.Real.Ratio a)
-- Defined in `GHC.Real'
...plus three others
In the first argument of `repl', namely `10'
In the first argument of `(-)', namely `repl 10' --------- NB!
In the first argument of `print', namely `(repl 10 - 1)'
prog.hs:3:23:
No instance for (Num (t1 -> [t0])) arising from a use of `-'
Possible fix: add an instance declaration for (Num (t1 -> [t0]))
In the first argument of `print', namely `(repl 10 - 1)'
In the expression: print (repl 10 - 1)
In an equation for `main': main = print (repl 10 - 1)
Did you try [1..(-1)]? In Haskell, you can't pass a negative literal like -1 to a function directly; you need to put it in parentheses. The reason is that operators in Haskell are infix, so the - here is parsed as [operator (-)] [numeric 1] and not [numeric -1].
This is what causes the problem. To avoid it, always put negative numbers in parentheses when passing them as arguments; this ensures that (-1) is parsed as [numeric -1]. It's one of the few corner cases that give migraines to newcomers in Haskell.
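For example, with the parentheses in place everything behaves as expected (illustrative GHCi session):

> let repl x n = [x | _ <- [1..n]]
> repl 10 (-1)
[]
> repl 10 0
[]
> repl 'a' 3
"aaa"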

Typeclass instance with functional dependencies doesn't work

Playing around with type-classes I came up with the seemingly innocent
class Pair p a | p -> a where
  one :: p -> a
  two :: p -> a
This seems to work fine, e.g.
instance Pair [a] a where
  one [x,_] = x
  two [_,y] = y
However, I run into trouble with tuples. Even though the following definition compiles...
instance Pair (a,a) a where
  one p = fst p
  two p = snd p
... I can't use it as I expected:
main = print $ two (3, 4)
No instance for (Pair (t, t1) a)
arising from a use of `two' at src\Main.hs:593:15-23
Possible fix: add an instance declaration for (Pair (t, t1) a)
In the second argument of `($)', namely `two (3, 4)'
In the expression: print $ two (3, 4)
In the definition of `main': main = print $ two (3, 4)
Is there a way to define the instance correctly? Or do I have to resort to a newtype wrapper?
Your instance works just fine, actually. Observe:
main = print $ two (3 :: Int, 4 :: Int)
This works as expected. So why doesn't it work without the type annotation, then? Well, consider the tuple's type: (3, 4) :: (Num t, Num t1) => (t, t1). Because numeric literals are polymorphic, nothing requires them to be the same type. The instance is defined for (a, a), but the existence of that instance won't tell GHC to unify the types (for a variety of good reasons). Unless GHC can deduce by other means that the two types are the same, it won't choose the instance you want, even if the two types could be made equal.
To solve your problem, you could just add type annotations, as I did above. If the arguments are coming from elsewhere it's usually unnecessary because they'll already be known to be the same type, but it gets clumsy quickly if you want to use numeric literals.
An alternative solution is to note that, because of how instance selection works, having an instance for (a, a) means that you can't write an instance like (a, b) as well even if you wanted to. So we can cheat a bit, to force the unification using the type class, like this:
instance (a ~ b) => Pair (a,b) a where
That needs the TypeFamilies extension for the ~ context, I think. What this does is allow the instance to match on any tuple at first, because instance selection ignores the context. After choosing the instance, however, the a ~ b context asserts type equality, which will produce an error if they're different but--more importantly here--will unify the type variables if possible. Using this, your definition of main works as is, without annotations.
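Putting it all together, a self-contained sketch might look like this (the exact set of extensions is my assumption; GADTs would also enable the ~ syntax):

{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE TypeFamilies #-}

class Pair p a | p -> a where
  one :: p -> a
  two :: p -> a

-- The head matches any tuple; the (a ~ b) context then forces both
-- components to the same type, unifying the literals' types.
instance (a ~ b) => Pair (a, b) a where
  one = fst
  two = snd

main :: IO ()
main = print (two (3, 4))   -- prints 4, no annotations needed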
The problem is that a literal number has a polymorphic type. It is not obvious to the typechecker that both literals should have the same type (Int). If you use something that is not polymorphic for your tuples, your code should work. Consider these examples:
*Main> two (3,4)
<interactive>:1:1:
No instance for (Pair (t0, t1) a0)
arising from a use of `two'
Possible fix: add an instance declaration for (Pair (t0, t1) a0)
In the expression: two (3, 4)
In an equation for `it': it = two (3, 4)
*Main> let f = id :: Int -> Int -- Force a monomorphic type
*Main> two (f 3,f 4)
4
*Main> two ('a','b')
'b'
*Main> two ("foo","bar")
"bar"
*Main> two (('a':),('b':)) "cde"
"bcde"
*Main>
