Random unable to deduce correct type when comparing the value to a float - haskell

I'm not really sure how to phrase the title better. Anyways, say I do something like this
(a, _) = random (mkStdGen 0)
b = if a then 1 else 0
Haskell can deduce that the type of a is Bool.
But if I do
(a, _) = random (mkStdGen 0)
b = if a > 0.5 then 1 else 0
it doesn't figure out that I want a to be a Float.
I don't know much about type inference or the type system, but couldn't it just look for a type that is Random, Ord, and Fractional?
Float would fit in there.
Or why not just have a more generic type in that place since I obviously only care that it has those three typeclasses? Then later if I use the value for something else, it can get more info and maybe get a more concrete type.

The reason Haskell can't figure this out is that numeric literals are polymorphic: when the compiler sees 0.5, it interprets it more or less as 0.5 :: Fractional a => a. Since that is still polymorphic, even with the Random and Ord constraints added, the compiler does not "guess" which type you meant; it throws an error saying the type is ambiguous. It could be Float, but it could just as well be Double, or a custom defined data type, or any one of a potentially infinite number of types. It's also not very difficult to specify a type signature here or there: if a > (0.5 :: Float) then 1 else 0 isn't much extra work, and it makes your intention explicit rather than implicit.
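For instance, a minimal complete version of the snippet with just that one annotation added (the Int signature on b is my own choice; the question never fixes b's type):
import System.Random (mkStdGen, random)

b :: Int
b = if a > (0.5 :: Float) then 1 else 0
  where
    (a, _) = random (mkStdGen 0)

main :: IO ()
main = print b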
It also can't just use a "generic type in that place", even with those three typeclasses, because it doesn't know the behavior of those typeclasses without a concrete type. For example, I could write a newtype around Float as
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import System.Random

newtype MyFloat = MyFloat Float
  deriving (Eq, Show, Read, Num, Fractional, Floating, Random)

instance Ord MyFloat where
  compare (MyFloat x) (MyFloat y) = compare (sin x) (sin y)
This is perfectly valid code, and could be something that satisfies your constraints, but would have drastically different behavior at runtime than simply Float.
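For instance, with the instance above (an illustrative GHCi session; sin 1 ≈ 0.84 while sin 4 ≈ -0.76, so the ordering flips):
> compare (1 :: Float) 4
LT
> compare (MyFloat 1) (MyFloat 4)
GT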

You can always just do this to solve your problem:
(a, _) = random (mkStdGen 0) :: (Float, StdGen)
This forces a to be of type Float and the compiler is able to infer the rest of the types.
The problem is that Haskell has more than one type in the classes Random, Ord, and Fractional: it could be Float, but it could just as well be Double. Because Haskell does not know which one you mean, it throws an error.

I see the other answers haven't mentioned Haskell's defaulting rules. Under many circumstances, Haskell by default will choose either Integer or Double (not Float, though) if your type is ambiguous, needs to be made specific, and one of those fits.
Unfortunately, one of the requirements for defaulting is that all the type classes involved are "standard". And the class Random is not a standard class. (It used to be in the Haskell 98 Report, but was removed from Haskell 2010. I vaguely recall it did not work for defaulting even back then, though.)
However, you can relax that restriction with GHC's ExtendedDefaultRules extension, after which your code will cause Double to be chosen. You can even get Float chosen if you add a default declaration that prefers Float to Double (I would not generally recommend this, though, as Double has higher precision). The following works the way you expected your original to:
{-# LANGUAGE ExtendedDefaultRules #-}
import System.Random
default (Integer, Float)
(a, _) = random (mkStdGen 0)
b = if a > 0.5 then 1 else 0

Related

The type-specification operator is like down-casting in Object-Oriented languages?

I was going through the book Haskell Programming from First Principles and came across the following code snippet.
Prelude> fifteen = 15
Prelude> :t fifteen
fifteen :: Num a => a
Prelude> fifteenInt = fifteen :: Int
Prelude> fifteenDouble = fifteen :: Double
Prelude> :t fifteenInt
fifteenInt :: Int
Prelude> :t fifteenDouble
fifteenDouble :: Double
Here, Num is the type-class that is like the base class in OO languages. What I mean is when I write a polymorphic function, I take a type variable that is constrained by Num type class. However, as seen above, casting fifteen as Int or Double works. Isn't it equivalent to down-casting in OO languages?
Wouldn't some more information (a bunch of Double type specific functions in this case) be required for me to be able to do that?
Thanks for helping me out.
No, it's not equivalent. Downcasting in OO is a runtime operation: you have a value whose concrete type you don't know, and you assert that it belongs to some particular subtype – which is an error if it actually turns out to have a different concrete type.
In Haskell, :: isn't really an operator at all. It just gives extra information to the typechecker at compile time. I.e. if it compiles at all, you can always be sure that it will actually work at runtime.
The reason it works at all is that fifteen has no concrete type. It's like a template / generic in OO languages. So when you add the :: Double constraint, the compiler can then pick what type is instantiated for a. And Double is ok because it is a member of the Num typeclass, but don't confuse a typeclass with an OO class: an OO class specifies one concrete type, which may however have subtypes. In Haskell, subtypes don't exist, and a class is more like an interface in OO languages. You can also think of a typeclass as a set of types, and fifteen has potentially all of the types in the Num class; which one of these is actually used can be chosen with a signature.
Downcasting is not a good analogy. Rather, compare to generic functions.
Very roughly, you can pretend that your fifteen is a generic function
// pseudo code in OOP
A fifteen<A>() where A : Num
When you use fifteen :: Double in Haskell, you tell the compiler that the result of the above function is Double, and that enables the compiler to "call" the above OOP function as fifteen<Double>(), inferring the generic argument.
With the TypeApplications extension enabled, GHC Haskell has a more direct way to choose the generic parameter, namely the type application fifteen @Double.
There is a difference between the two ways in that ... :: Double specifies the return type, while @Double specifies the generic argument. In this fifteen case they are the same, but this is not always so. For instance:
> list = [(15, True)]
> :t list
list :: Num a => [(a, Bool)]
Here, to choose a = Double, we need to write either list :: [(Double, Bool)] or list @Double.
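For completeness, a minimal module sketch of both spellings (note that visible type application requires the type variable to come from an explicit signature; GHC will not let you apply a type to a binding whose type was only inferred):
{-# LANGUAGE TypeApplications #-}

fifteen :: Num a => a
fifteen = 15

fifteenDouble :: Double
fifteenDouble = fifteen @Double

listDouble :: [(Double, Bool)]
listDouble = list @Double
  where
    list :: Num a => [(a, Bool)]
    list = [(15, True)]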
In the type forall a. Num a => a†, the forall a and Num a are parameters specified by the “caller”, that is, the place where the definition (fifteen) is used. The type parameter is implicitly filled in with a type argument by GHC during type inference; the Num constraint becomes an extra parameter, a “dictionary” comprising a record of functions ((+), (-), abs, &c.) for a particular Num instance, and which Num dictionary to pass in is determined from the type. The type argument exists only at compile time, and the dictionary is then typically inlined to specialise the function and enable further optimisations, so neither of these parameters typically has any runtime representation.
So in fifteen :: Double, the compiler deduces that a must be equal to Double, giving (a ~ Double, Num a) => a, which is simplified first to Num Double => Double, then to simply Double, because the constraint Num Double is satisfied by the existence of an instance Num Double definition. There is no subtyping or runtime downcasting going on, only the solution of equality constraints, statically.
The type argument can also be specified explicitly with the TypeApplications syntax fifteen @Double, which is typically written fifteen<Double> in OO languages.
The inferred type of fifteen includes a Num constraint because the literal 15 is implicitly a call to something like fromInteger (15 :: Integer)‡. fromInteger has the type Num a => Integer -> a and is a method of the Num typeclass, so you can think of a literal as “partially applying” the Integer argument while leaving the Num a argument unspecified, then the caller decides which concrete type to supply for a, and the compiler inserts a call to the fromInteger function in the Num dictionary passed in for that type.
† forall quantifiers are typically implicit, but can be written explicitly with various extensions, such as ExplicitForAll, ScopedTypeVariables, and RankNTypes.
‡ I say “something like” because this abuses the notation 15 :: Integer to denote a literal Integer, not circularly defined in terms of fromInteger again. (Else it would loop: fromInteger 15 = fromInteger (fromInteger 15) = fromInteger (fromInteger (fromInteger 15))…) This desugaring can be “magic” because it’s a part of the language itself, not something defined within the language.
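To make the fromInteger mechanism concrete, here is a small sketch (the Modulo7 type is my own invention, not something from the question): defining fromInteger for a custom type decides what a bare literal such as 15 means at that type.
-- Integers modulo 7; the literal 15 becomes Modulo7 1 at this type.
newtype Modulo7 = Modulo7 Integer deriving Show

instance Num Modulo7 where
  fromInteger n         = Modulo7 (n `mod` 7)
  Modulo7 x + Modulo7 y = Modulo7 ((x + y) `mod` 7)
  Modulo7 x * Modulo7 y = Modulo7 ((x * y) `mod` 7)
  negate (Modulo7 x)    = Modulo7 (negate x `mod` 7)
  abs                   = id
  signum (Modulo7 0)    = Modulo7 0
  signum _              = Modulo7 1
With this in scope, 15 :: Modulo7 evaluates to Modulo7 1, because GHC elaborates the literal to fromInteger (15 :: Integer) at the caller's chosen type.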

Haskell: Filtering by type

For any particular type A:
data A = A Int
Is it possible to write this function?
filterByType :: a -> Maybe a
It should return Just . id if a value of type A is given, and Nothing for values of any other type.
Using any means (GHC exts, TH, introspection, etc.)
NB. Since my last question about Haskell's type system was criticized by the community as "terribly oversimplified", I feel the need to state that this is purely academic interest in the limitations of Haskell's type system, without any particular task behind it that needs to be solved.
You are looking for cast at Data.Typeable
cast :: forall a b. (Typeable a, Typeable b) => a -> Maybe b
Example
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Typeable
data A = A Int deriving (Show, Typeable)
data B = B String deriving (Show, Typeable)
showByType :: Typeable a => a -> String
showByType x = case (cast x, cast x) of
  (Just (A y), _) -> "Type A: " ++ show y
  (_, Just (B z)) -> "Type B: " ++ show z
  _               -> "unknown type"
then
> putStrLn $ showByType $ A 4
Type A: 4
> putStrLn $ showByType $ B "Peter"
Type B: "Peter"
>
Without Typeable derivation, no information exists about the underlying type; you could still attempt some cast transformation like
import Unsafe.Coerce (unsafeCoerce)
filterByType :: a -> Maybe a
filterByType x = if SOMECHECK then Just (unsafeCoerce x) else Nothing
but, where is that information?
So you cannot write your function (or at least I don't know how), though in some contexts (binary memory inspection, Template Haskell, ...) it may be possible.
No, you can't write this function. In Haskell, values without type class constraints are parametric in their type variables. This means we know that they have to behave exactly the same when instantiated at any particular type¹; in particular, and relevant to your question, this means they cannot inspect their type parameters.
This design means that that all types can be erased at run time, which GHC does in fact do. So even stepping outside of Haskell qua Haskell, unsafe tricks won't be able to help you, as the runtime representation is sort of parametric, too.
If you want something like this, josejuan's suggestion of using Typeable's cast operation is a good one.
¹ Modulo some details with seq.
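To see the parametricity point concretely: without any constraints, the only total functions with the question's type never inspect the value at all (a small sketch).
-- The only two total inhabitants of a -> Maybe a:
alwaysJust :: a -> Maybe a
alwaysJust = Just

alwaysNothing :: a -> Maybe a
alwaysNothing _ = Nothing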
A function of type a -> Maybe a is trivial. It's just Just. A function filterByType :: a -> Maybe b is impossible.
This is because once you've compiled your program, a and b are gone. There is no run time type information in Haskell, at all.
However, as mentioned in another answer you can write a function:
cast :: (Typeable a, Typeable b) => a -> Maybe b
The reason you can write this is because the constraint Typeable a tells the compiler to, wherever this function is called, pass along a run-time dictionary of values specified by Typeable. These are useful operations that can build up and tear down a great range of Haskell types. The compiler is incredibly smart about this and can pass in the right dictionary for virtually any type you use the function on.
Without this run-time dictionary, however, you cannot do anything. Without a constraint of Typeable, you simply do not get the run-time dictionary.
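For instance, here is what the Typeable version looks like when the target is pinned down to the A from the question (keepA is my own name for this sketch):
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Typeable

data A = A Int deriving (Show, Typeable)

-- Just (A n) when the argument really is an A, Nothing otherwise.
keepA :: Typeable a => a -> Maybe A
keepA = cast
Here keepA (A 3) is Just (A 3), while keepA True is Nothing. Note that the result type had to become Maybe A rather than Maybe a; as explained above, a fully polymorphic a -> Maybe a could only ever be Just.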
All that aside, if you don't mind my asking, what exactly do you want this function for? Filtering by a type is not actually useful in Haskell, so if you're trying to do that, you're probably trying to solve something the wrong way.

Haskell Types in some examples

I'm in the process of learning Haskell and I'm a beginner. I wish I could search this question on StackOverflow, but honestly I'm not quite sure what to search for. I already tried to get the answers without much success, so please bear with me. It seems this is still really low-level stuff.
So my ghci interactive session never seems to output "primitive types" like Int, for example. I don't know how else to put it. At the moment I'm trying to follow the tutorial on http://book.realworldhaskell.org/read/getting-started.html. Unfortunately, I can't seem to produce the same results.
For example:
Prelude> 5
5
Prelude> :type it
it :: Num a => a
I have to specifically say:
Prelude> let e = 5 :: Int
Prelude> e
5
Prelude> :type it
it :: Int
This is all very confusing to me, so I hope somebody can clear up this confusion a little bit.
EDIT:
On http://book.realworldhaskell.org/read/getting-started.html it says: "Haskell has several numeric types. For example, a literal number such as 1 could, depending on the context in which it appears, be an integer or a floating point value. When we force ghci to evaluate the expression 3 + 2, it has to choose a type so that it can print the value, and it defaults to Integer." I can't seem to force ghci to evaluate the type.
So for example:
Prelude> 3 + 2
5
Prelude> :t it
it :: Num a => a
Where I expected "Integer" to be the correct type.
There are a number of things going on here.
Numeric literals in Haskell are polymorphic; the type of the literal 5 really is Num a => a. It can belong to any type that adheres to the Num type class.
Addition is part of the Num type class, so an addition of two numeric literals is still Num a => a.
Interactive evaluation in ghci is very similar to evaluating actions in the IO monad. When you enter a bare expression, ghci acts as if you ran something like the following:
main = do
  let it = 5 + 5
  print it
It's not exactly like that, though, because in a program like that, inference would work over the entire do expression body to find a specific type. When you enter a single line, it has to infer a type and compile something with only the context available as of the end of the line you entered. So the print doesn't affect the type inferred for the let-binding, as it's not something you entered.
There's nothing in that program that would constrain it to a particular instance of Num or Show; this means that it is still a polymorphic value. Specifically, GHC compiles values with type class constraints to functions that accept a type class dictionary that provides the instance implementations required to meet the constraint. So, although it looks like a monomorphic value, it is actually represented by GHC as a function. This was surprising to enough people that the dreaded "Monomorphism Restriction" was invented to prevent this kind of surprise. It disallows pattern bindings (such as this one) where an identifier is bound to a polymorphic type.
The Monomorphism Restriction is still on by default in compiled modules, but it has been off by default in GHCi since version 7.8.
See the GHC manual for more info.
Haskell provides a special bit of magic for polymorphic numbers; each module can make a default declaration that provides type defaulting rules for polymorphic numbers. At your ghci prompt, the defaulting rules made ghci choose Integer when it was forced to provide instance dictionaries to show it in order to get to an IO action value.
Here's the relevant section in the Haskell 98 Report.
To sum it up: it was bound to the expression 5 + 5, which has type Num a => a because that's the more general inferred type based on the polymorphic numeric literals.
Polymorphic values are represented as functions waiting for a typeclass dictionary. So evaluating it at a particular instance doesn't force it to become monomorphic.
However, Haskell's type default rules allow it to pick a particular type when you implicitly print it as part of the ghci interaction. It picks Integer and so it chooses the Integer type class instance dictionaries for Show and Num when forced to by print it.
I hope that makes it somewhat less confusing!
By the way, here is an example of how you can get the same behavior outside of ghci by explicitly requesting the polymorphic let-binding. Without the type signature in this context, it will infer a monomorphic type for foo and give a type error.
main = do
  let foo :: Num a => a
      foo = 5 + 5
  let bar = 8 :: Double
  let baz = 9 :: Int
  print (foo + bar)
  print (foo + baz)
This will compile and run, printing the following:
18.0
19
UPDATE:
Looking at the Real World Haskell example and the comment thread, some people included different ghci logs along with their ghc versions. Using that information, I looked at ghc release notes and found that starting in version 7.8, the Monomorphism Restriction was disabled in ghci by default.
If you run the following command, you'll re-enable the Monomorphism Restriction and in order to be friendly, ghci will default the binding to Integer rather than giving you either an error or a polymorphic binding:
Prelude> :set -XMonomorphismRestriction
Prelude> 5 + 5
10
Prelude> :t it
it :: Integer
Prelude>
It appears that GHCi is performing some magic here. It correctly defaults the numbers to Integer so that they can be printed, but it binds it to the polymorphic type before the defaulting happens.
I guess you want to see the type after the defaulting takes place. For that, I would recommend using the Data.Typeable library as follows:
> import Data.Typeable
> let withType x = (x, typeOf x)
> withType 5
(5,Integer)
Above, GHCi has to default 5 to Integer in order to print it, and this causes typeOf x to report the representation of the type after the defaulting happened. Hence we get the type we wanted.
The following also works, precisely because typeOf is called after the defaulting happened:
> :type 5
5 :: Num a => a
> typeOf 5
Integer
Keep however in mind that typeOf only works for monomorphic types. In general, the polymorphic result of :type is more useful.
Numbers in Haskell are polymorphic: there are separate types for fixed and arbitrary precision integers, rationals, floating point numbers, and user-defined number types. All of them can be instantiated from simple literals, via the fromInteger method of the Num typeclass. The value you've given, (True, 1, "hello world", 3), has two integral literals, and they can be used to create two numbers, of possibly different types. The bit of the type before the fat arrow, (Num t, Num t1), says that in the inferred type, t and t1 can be anything, so long as they have a Num instance, i.e. they can be obtained with a fromInteger.
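For reference, here is how that value looks in a GHCi session (transcript from an older GHC; newer versions name the type variables slightly differently):
> :type (True, 1, "hello world", 3)
(True, 1, "hello world", 3) :: (Num t, Num t1) => (Bool, t, [Char], t1)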

Type signature of num to double?

I'm just starting Learn You a Haskell for Great Good, and I'm having a bit of trouble with type classes. I would like to create a function that takes any number type and forces it to be a double.
My first thought was to define
numToDouble :: Num -> Double
But I don't think that worked because Num isn't a type, it's a typeclass (which seems to me to be a set of types). So looking at read, its type shows (Read a) => String -> a. I'm reading that as "read takes a string, and returns a thing of type a which is specified by the user". So I wrote the following
numToDouble :: (Num n) => n -> Double
numToDouble i = ((i) :: Double)
Which looks to me like "take a thing of type n (which must be in the Num typeclass) and convert it to a Double". This seems reasonable because I can do 20 :: Double.
This produces the following output
Could not deduce (n ~ Double)
from the context (Num n)
bound by the type signature for numToDouble :: Num n => n -> Double
I have no idea what I'm reading. Based on what I can find, it seems like this has something to do with polymorphism?
Edit:
To be clear, my question is: Why isn't this working?
The reason you can say "20::Double" is that in Haskell an integer literal has type "Num a => a", meaning it can be any numeric type you like.
You are correct that a typeclass is a set of types. To be precise, it is the set of types that implement the functions in the "where" clause of the typeclass. Your type signature for your numToDouble correctly expresses what you want to do.
All you know about a value of type "n" in your function is that it implements the Num interface. This consists of +, -, *, negate, abs, signum and fromInteger. The last is the only one that does type conversion, but it's not any use for what you want.
Bear in mind that Complex is also an instance of Num. What should numToDouble do with that? The Right Thing is not obvious, which is part of the reason you are having problems.
However lower down the type hierarchy you have the Real typeclass, which has instances for all the more straightforward numerical types you probably want to work with, like floats, doubles and the various types of integers. That includes a function "toRational" which converts any real value into a ratio, from which you can convert it to a Double using "fromRational", which is a function of the "Fractional" typeclass.
So try:
toDouble :: (Real n) => n -> Double
toDouble = fromRational . toRational
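A couple of uses in GHCi, assuming the definition above is in scope:
> toDouble (5 :: Int)
5.0
> toDouble (1/2 :: Rational)
0.5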
But of course this is actually too specific. GHCI says:
Prelude> :type fromRational . toRational
fromRational . toRational :: (Fractional c, Real a) => a -> c
So it converts any Real type to any Fractional type (the latter covers anything that can do division, including things that are not instances of Real, like Complex). When messing around with numeric types, I keep finding myself using it as a kind of generic numerical coercion.
Edit: as leftaroundabout says,
realToFrac = fromRational . toRational
You can't "convert" anything per se in Haskell. Between specific types, there may be the possibility to convert – with dedicated functions.
In your particular example, it certainly shouldn't work. Num is the class¹ of all types that can be treated as numerical types, and that have numerical values in them (at least integer ones, so here's one such conversion function fromInteger).
But these types can apart from that have any other stuff in them, which oftentimes is not in the reals and can thus not be approximated by Double. The most obvious example is Complex.
The particular class that has only real numbers in it is, surprise, called Real. What is indeed a bit strange is that its method is a conversion toRational, since the rationals don't quite cover the reals... but they're dense within them, so it's kind of ok. At any rate, you can use that function to implement your desired conversion:
realToDouble :: Real n => n -> Double
realToDouble i = fromRational $ toRational i
Incidentally, that combination fromRational . toRational is already a standard function: realToFrac, a bit more general.
¹ Calling type classes "sets of types" is kind of ok, much like you can often get away without calling any kind of collection in maths a set – but it's not really correct. The most problematic thing is, you can't really say some type is not in a particular class: type classes are open, so at any place in a project you could declare an instance for some type to a given class.
Just to be 100% clear, the problem is
(i) :: Double
This does not convert i to a Double, it demands that i already is a Double. That isn't what you mean at all.
The type signature for your function is correct. (Or at least, it means exactly what you think it means.) But your function's implementation is wrong.
If you want to convert one type of data to another, you have to actually call a function of some sort.
Unfortunately, Num itself only allows you to convert an Integer to any Num instance. You're trying to convert something that isn't necessarily an Integer, so this doesn't help. As others have said, you probably want fromRational or similar...
There is no such thing as numeric casts in Haskell. When you write i :: Double, what that means isn't "cast i to Double"; it's just an assertion that i's type is Double. In your case, however, your function's signature also asserts that i's type is Num n => n, i.e., any type n (chosen by the caller) that implements Num; so for example, n could be Integer. Those two assertions cannot be simultaneously true, hence you get an error.
The confusing thing is that you can say 1 :: Double. But that's because in Haskell, a numeric literal like 1 has the same meaning as fromInteger one, where one :: Integer is the Integer whose value is one.
But that only works for numeric literals. This is one of the surprising things if you come to Haskell from almost any other language. In most languages you can use expressions of mixed numeric types rather freely and rely on implicit coercions to "do what I mean"; in Haskell, on the other hand you have to use functions like fromIntegral or fromRational all the time. And while most statically typed languages have a syntax for casting from one numeric type to another, in Haskell you just use a function.
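For example, a small sketch of the usual pattern (the function name mean is my own): mixing an Int length with a Double sum needs an explicit fromIntegral call.
-- Average of a list of Doubles; length returns an Int,
-- so it must be converted explicitly before the division.
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)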

Why doesn't GHC complain when number constant out of range

GHC silently ignores out-of-range bits in numerical constants.
This behavior led me to wrestle with a rather strange bug today:
[0..256]::[Word8] -- evaluates to [0]!
I know what caused this bug (256 == 0 in a rot256 world).... I am interested in why GHC/Haskell was designed not to complain about it at compile time.
(This behavior is true for Int also; for instance, 18446744073709551617 :: Int evaluates to 1 on a 64-bit machine.)
I've grown used to Haskell catching trivial compile time issues, and I was surprised when I had to track this down.
I suspect the honest answer is "because nobody implemented it yet". But I think there's another layer to that answer, which is that there are some subtle design issues.
For example: how should we know that 256 is out of range for Word8? Well, I suppose one answer might be that the compiler could notice that Word8 is an instance of all three of Integral, Ord, and Bounded. So it could generate a check like
(256 :: Integer) > fromIntegral (maxBound :: Word8)
and evaluate this check at compile time. The problem is that all of a sudden we are running potentially user-written code (e.g. maxBound, fromIntegral, and (>) presumably all come from instance declarations that can be written by a programmer) at compile time. That can be a bit dangerous -- since it's impossible to tell if we'll ever get an answer! So at the very least you would want this check to be off by default, and presumably at least as hard to turn on as Template Haskell is.
On the other hand, it might also be possible to just build in a handful of instances that we "trust" -- e.g. Word8 and Int, as you say. I would find that a bit disappointing, though perhaps such a patch would not be rejected.
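As a sketch of what a run-time (not compile-time) range check could look like using only Integral and Bounded (safeFromInteger is a hypothetical helper, not a standard function; base's Data.Bits.toIntegralSized is similar in spirit):
{-# LANGUAGE ScopedTypeVariables #-}
import Data.Word (Word8)

-- Hypothetical helper: reject out-of-range Integers instead of
-- silently wrapping them the way fromInteger does.
safeFromInteger :: forall a. (Integral a, Bounded a) => Integer -> Maybe a
safeFromInteger n
  | n < toInteger (minBound :: a) = Nothing
  | n > toInteger (maxBound :: a) = Nothing
  | otherwise                     = Just (fromInteger n)
With this, safeFromInteger 256 :: Maybe Word8 is Nothing, while safeFromInteger 255 :: Maybe Word8 is Just 255.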
It depends on the implementation of the individual Num instance. If you perform
> :type 1
1 :: Num a => a
So Haskell initially treats the literal as a generic Num, and only then do you specify the type Word8. If you try
> (maxBound :: Word8) + 1
0
> maxBound :: Word8
255
This is what is known as an overflow, and holds true in many languages, notably C. Haskell does not prevent you from doing this because there are legitimate cases where you might want to have overflow. Instead, it is up to you, the programmer, to ensure that your input data is valid. Also, as jozefg points out, it is impossible to know at compile time if every conversion is valid.
You could implement a Cyclic class that gives you the behavior you want, if you already have Eq, Bounded, and Enum:
import Data.Word (Word8)

class (Eq a, Bounded a, Enum a) => Cyclic a where
  next :: a -> a
  next a = if a == maxBound then minBound else succ a
  prev :: a -> a
  prev a = if a == minBound then maxBound else pred a

instance Cyclic Word8
> next 255 :: Word8
0
> prev 0 :: Word8
255
Luckily, for all Integral types, you already have Enum and Eq, and the only Integral I know of that doesn't have Bounded is Integer. It's just a matter of adding instance Cyclic <Int Type> for each that you want to use.
