In Haskell, is there a good way to write a num to num conversion function `toNum :: (Num a, Num b) => a -> b`?

For example, one bad way is to factor through a string:
toReadableNum :: (Num a, Num b, Read b) => a -> b
toReadableNum = read . show
If there are no good ways, are there other bad ways? Implementation-specific ones? Ones requiring a language extension?

You can't go (sanely) from Num to Num, as Num provides no mechanism for extracting information about the value held other than its spurious Eq and Show machinery, but if you are willing to assume a bit more on behalf of the number you are coming from, then you have recourse.
In particular
fromIntegral :: (Integral a, Num b) => a -> b
and the composition of
toRational :: Real a => a -> Rational
with
fromRational :: Fractional a => Rational -> a
are both good candidates for doing what you mean, if not exactly what you asked for.
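In fact, the standard Prelude already ships that composition as realToFrac :: (Real a, Fractional b) => a -> b; stripped of the rewrite rules GHC layers on top for performance, its definition is essentially
realToFrac = fromRational . toRational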
While read . show is well typed and has the signature you propose, the meaning is gobbledygook. There is nothing at all that says the text emitted by one Show instance will be compatible with a completely different Read instance, and there are plenty of counterexamples.
The (implied) contract on Read and Show only applies when you use them with the same type!
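A quick GHCi session makes the point:
> (read . show) (0.5 :: Double) :: Int
*** Exception: Prelude.read: no parse
Here show produces "0.5", which the Read instance for Int quite reasonably refuses to parse.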

There are no good ways. Some numbers carry more information than others, so how could you expect to convert between two arbitrary numbers in a good way? Some simple examples: how do you convert a Double to an Int? A Rational to an Int8? A Complex Double to a Float?
All of these involve information loss, and then there is no obvious right way.
And as @hammar says, the operations in Num simply don't allow you to construct such a function.

You cannot write any useful function of the type (Num a, Num b) => a -> b. Since a and b are type variables, the only useful operations we can use on them are the ones in the Num class. (Eq and Show won't help us much here).
class (Eq a, Show a) => Num a where
  (+), (-), (*) :: a -> a -> a
  negate :: a -> a
  abs :: a -> a
  signum :: a -> a
  fromInteger :: Integer -> a
The only function here that allows you to make a b if you didn't have one to start with is fromInteger, but you have no way of turning an a into an Integer, so the only functions you can write of this type return fromInteger of some constant, or bottom. Not very useful.
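To make this concrete, here is essentially the whole design space for that signature (up to the choice of constant); note that the argument is never inspected:
toNum :: (Num a, Num b) => a -> b
toNum _ = fromInteger 42 -- any constant will do; the input cannot be used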
As augustss pointed out, there is no obvious way of making this conversion anyway. Remember lots of types can be Num. Not only the various types of real numbers, but also complex numbers, matrices, polynomials, etc. There is no meaningful conversion that would work between all of them.

The good way is to write a specific kind of conversion, like rounding or clamping. Such a function does what it says it does.
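For example, a minimal sketch of a clamping conversion from Int to Word8 (the name toWord8Clamped is made up for illustration):
import Data.Word (Word8)

toWord8Clamped :: Int -> Word8
toWord8Clamped = fromIntegral . max 0 . min 255 -- clamp into [0, 255], then convert
Its behaviour at the boundaries is stated right there in the definition, which is exactly what a generic toNum could never promise.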

Related

How to check an instance of the Fractional class for a zero value in Haskell?

If I have a value which is restricted to be fractional, is there a way I can check it either for being a zero value or for some neutral value? I'm trying to implement safe division with a signature like this:
safe_div :: (Fractional q) => q -> q -> Maybe q
I've checked on Hoogle whether there is some method in the minimal definition which can help, but to no avail.
Thanks in advance
Update: Since the question caused some confusion: I want to avoid changing the constraints on q, and pattern matching still requires Eq (I guess implicitly).
For the following definition:
safe_div :: (Fractional q) => q -> q -> Result q
safe_div a 0 = Err ["division by zero"]
safe_div a b = Ok (a / b)
the following error is raised:
* Could not deduce (Eq q) arising from the literal `0'
  from the context: Fractional q
    bound by the type signature for:
               safe_div :: forall q. Fractional q => q -> q -> Result q
      at AST2.hs:11:1-48
  Possible fix:
    add (Eq q) to the context of
      the type signature for:
        safe_div :: forall q. Fractional q => q -> q -> Result q
* In the pattern: 0
  In an equation for `safe_div':
      safe_div a 0 = Err ["division by zero"]
With the signature safe_div :: (Fractional q) => q -> q -> Result q, you can only use methods from Fractional or its superclasses on your q values.
(Or other predefined functions that impose no more constraints than Fractional, but those will ultimately have to be implemented by the class methods. So they can't do anything you couldn't do directly with the methods yourself.)
From Fractional itself that gives us:
(/) :: Fractional a => a -> a -> a
recip :: Fractional a => a -> a
fromRational :: Fractional a => Rational -> a
Well none of those look terribly helpful. But Num is a superclass of Fractional, so we have those methods too:
(+) :: Num a => a -> a -> a
(-) :: Num a => a -> a -> a
(*) :: Num a => a -> a -> a
negate :: Num a => a -> a
abs :: Num a => a -> a
signum :: Num a => a -> a
fromInteger :: Num a => Integer -> a
These also aren't going to help us. abs and signum at first seem like they might, since their "purpose" is telling us certain properties about the number; the documentation for signum even says this:
For real numbers, the signum is either -1 (negative), 0 (zero) or 1 (positive).
Which sounds like exactly the kind of thing we want! The only trouble is that signum communicates the result of its inspection as a value of the same type. If we couldn't tell if our number x is equal to zero, how are we going to tell whether signum x is equal to zero? We're right back at the problem we started with.
The fact is every single method of both Fractional and Num only ever returns a value of the (unknown) type implementing the class. That basically means that if you don't know what the type actually is, it's impossible to get any information out of them; the only thing you can do with a value of an unknown Fractional type is pass it to another Fractional (or Num) method, which will also only give you back a value of the same unknown type. There's no way to compare it to anything (which would require something returning a Bool, Ordering, or at least a Maybe, Either, etc). There's no way to convert it to text so it can be shown to a user (which would require something returning a String, Text, etc). You can do further calculations, but the only thing we can ever learn is that the calculation didn't error out, and only by trying it and hoping (which is exactly what you're trying to avoid!).
The only way you can implement your desired function is to add more constraints. Eq is exactly the class of types which can be compared, and you want to compare values of your type, so it just makes sense that you will have to constrain your function to operate within this class of types.
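Concretely, assuming a Result type along these lines (the question doesn't show its definition), the fix is one extra constraint:
data Result a = Ok a | Err [String]

safe_div :: (Eq q, Fractional q) => q -> q -> Result q
safe_div _ 0 = Err ["division by zero"] -- the 0 pattern is what needs Eq
safe_div a b = Ok (a / b)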
However, anyone calling this function polymorphically is in the same boat. It's very useful for intermediate functions (like this one) to work with unknown Fractional types, so that they can be called with any fractional type. But at the outermost level where someone first decided to call one of these functions, it's only ever useful to call these with a concrete type they actually know something about. Nobody wants to do calculations on numbers where they can't inspect the result in any way! This means that even though your safe_div function (as it is currently written) cannot assume any details of any particular type (such as whether it can be compared for equality), it in fact will only ever realistically be called with specific types like Double, Float, etc, all of which do support Eq. So in practice adding the Eq constraint is hardly limiting who can call it.
I imagine the reason you don't want to change the constraint is that you've already coded up functions where you use this, and they only have Fractional constraints (meaning they can't call safe_div :: (Eq a, Fractional a) => a -> a -> Result a). Unfortunately they'll have to be updated to add the Eq constraint too. The fact is that the interface of just Fractional gives only the ability to do basic arithmetic. To do comparisons and branching calculations you need more. So all your functions that want to do more than basic arithmetic (and that includes any that call anything that does more than basic arithmetic, not just ones that do the comparison and branching themselves) need more constraints than just Fractional. Fortunately the same reasoning as above applies: it is extremely unlikely that you would ever need to call any of these functions with a type that doesn't support Eq, so there really is very little point in resisting the additional constraint.

What is the correct way to use fromIntegral to convert an Integer to real-fractional types?

I am trying to use fromIntegral to convert from Integer to a real-fractional type. The idea is to have a helper method to later be used for the comparison between two instances of a Fraction (whether they are the same).
I have read some of the documentation related to:
fromIntegral :: (Integral a, Num b) => a -> b
realToFrac :: (Real a, Fractional b) => a -> b
Where I am having trouble is taking the concept and making an implementation of the helper method that takes a Num data type with fractions (numerator and denominator) and returns what I think is a real-fractional type value. Here is what I have been able to do so far:
data Num = Fraction {numerator :: Integer, denominator :: Integer}
helper :: Num -> Fractional
helper (Fraction num denom) = realToFrac(num/denom)
You need to learn about the difference between types and type classes. In OO languages, both are kind of the same concept, but in Haskell they're not.
A type contains concrete values. E.g. the type Bool contains the value True.
A class contains types. E.g. the Ord class doesn't contain any values, but it does contain the types which contain values that can be compared.
In the case of numbers in Haskell, it's a bit confusing that you can't really tell from the name whether you're dealing with a type or a class. Fractional is a class, whereas Rational is a type (which is an instance of Fractional, but so is e.g. Float).
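A quick sketch of the distinction:
import Data.Ratio ((%))

half :: Rational -- Rational is a type: it has concrete values
half = 1 % 2

halfPoly :: Fractional a => a -- Fractional is a class: this works at any instance
halfPoly = 0.5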
In your example... first let's give that type a better name
data MyRational = Fraction {numerator :: Integer, denominator :: Integer}
...you have two possibilities for what helper could actually do: convert to a concrete Rational value
helper' :: MyRational -> Rational
or a generic Fractional-type one
helper'' :: Fractional r => MyRational -> r
The latter is strictly more general, because Rational is an instance of Fractional (i.e. you can in fact use helper'' as a MyRational -> Rational function, but also as a MyRational -> Double function).
In either case,
helper (Fraction num denom) = realToFrac(num/denom)
does not work because you're trying to carry out the division on integer values and only then converting the result. Instead, you need to convert the integers to something fractional and then carry out the division in that type.
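A sketch of that fix, converting each Integer first so the division happens in the target type:
helper'' :: Fractional r => MyRational -> r
helper'' (Fraction num denom) = fromInteger num / fromInteger denom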

What is '(Floating a, Num (a -> a))' in Haskell?

In Haskell, I just know that
:type ((+)(1))
((+)(1)) :: Num a => a -> a
((+)(1)) 2
3
But how about
:type abs(sqrt)
abs(sqrt) :: (Floating a, Num (a -> a)) => a -> a
Actually, I have tried many times but failed to use the function 'abs(sqrt)'. So I have a few questions. What is the type (class?) '(Floating a, Num (a -> a))'? Is it possible to use the function 'abs(sqrt)'? How?
A type class is a way to generalize functions so that they can be polymorphic and others can implement those functions for their own types. Take as an example the type class Show, which in a simplified form looks like
class Show a where
show :: a -> String
This says that any type that implements the Show typeclass can be converted to a String (there's some more complication for more realistic constraints, but the point of having Show is to be able to convert values to Strings).
In this case, the function show has the full type Show a => a -> String.
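For instance, a toy type (invented here) implementing this simplified Show:
data Color = Red | Green | Blue

instance Show Color where
  show Red = "Red"
  show Green = "Green"
  show Blue = "Blue"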
If we examine the function sqrt, its type is
> :type sqrt
sqrt :: Floating a => a -> a
And for abs:
> :type abs
abs :: Num b => b -> b
If you ask GHCi what the types are it will use the type variable a in both cases, but I've used b in the type signature for abs to make it clear that these are different type variables of the same name, and it will help avoid confusion in the next step.
These type signatures mean that sqrt takes a value whose type implements the Floating typeclass (use :info Floating to see all the members) and returns a value of that same type, and that the abs function takes a value whose type implements the Num typeclass and returns a value of that same type.
The expression abs(sqrt) is equivalently parsed as abs sqrt, meaning that sqrt is the first and only argument passed to abs. However, we just said that abs takes a value of a Num type, but sqrt is a function, not a number. Why does Haskell accept this instead of complaining? The reason can be seen a little more clearly when we perform substitution with the type signatures. Since sqrt's type is Floating a => a -> a, this must match the argument b in abs's type signature, so by substituting b with Floating a => a -> a we get that abs sqrt :: (Floating a, Num (a -> a)) => a -> a.
Haskell actually allows the function type to implement the Num typeclass; you could do it yourself, although the result would likely be nonsensical. Even when something seems like it couldn't possibly make sense, GHC will accept it so long as the types can be cleanly solved.
You can't really use this function; it just doesn't really make sense. There is no built-in instance of Num (a -> a) for any a, so you'd have to define your own (a sketch appears at the end of this answer). You can, however, compose the functions abs and sqrt using the composition operator .:
> :type abs . sqrt
abs . sqrt :: Floating c => c -> c
And this does make sense. This function is equivalent to
myfunc x = abs (sqrt x)
Note here that x is first applied to sqrt, and then the result of that computation is passed to abs, rather than passing the function sqrt to abs.
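As an aside, if you really wanted abs sqrt itself to typecheck and run, you would have to supply the missing instance yourself. Here is a sketch of what that could look like, lifting each operation pointwise (it needs FlexibleInstances, and as discussed it is essentially nonsensical):
{-# LANGUAGE FlexibleInstances #-}

instance Num a => Num (a -> a) where
  f + g = \x -> f x + g x
  f - g = \x -> f x - g x
  f * g = \x -> f x * g x
  negate f = negate . f
  abs f = abs . f -- with this instance, abs sqrt behaves like abs . sqrt
  signum f = signum . f
  fromInteger = const . fromInteger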
When you see Num (a -> a) it generally means you made a mistake somewhere.
Perhaps you really wanted: abs . sqrt which has type Floating c => c -> c - i.e. it's a function of a Floating type (e.g. Float, Double) to the same Floating type.
It is probably not possible to use this function.
What's likely happening here is that the type is saying that abs(sqrt) has the constraints that a must be an instance of the Floating class and (a -> a) must be an instance of the Num class. In other words, the sqrt function would need to be able to be treated as if it were a number.
Unfortunately, sqrt is not an instance of Num, so there won't be any input that will work here (not that it would make sense anyway). However, some versions of GHCi will still report a type for the expression as if satisfying the constraint were possible.
Have a look at Haskell type length + 1 for a similar type problem.
As ErikR has said, perhaps you meant to write abs . sqrt instead.

How can an arbitrary Num contain any other numeric type?

I'm just starting with Haskell, and I thought I'd start by making a random image generator. I looked around a bit and found JuicyPixels, which offers a neat function called generateImage. The example that they give doesn't seem to work out of the box.
Their example:
imageCreator :: String -> IO ()
imageCreator path = writePng path $ generateImage pixelRenderer 250 300
where pixelRenderer x y = PixelRGB8 x y 128
When I try this, I get that generateImage expects an Int -> Int -> PixelRGB8 whereas pixelRenderer is of type Pixel8 -> Pixel8 -> PixelRGB8. PixelRGB8 is of type Pixel8 -> Pixel8 -> Pixel8 -> PixelRGB8, so it makes sense that pixelRenderer is doing some type inference to determine that x and y are of type Pixel8. If I add a type signature asserting that they are of type Int (so the function gets accepted by generateImage), PixelRGB8 complains that it needs Pixel8s, not Ints.
Pixel8 is just a type alias for Word8. After some hair pulling, I discovered that the way to convert an Int to a Word8 is by using fromIntegral.
The type signature for fromIntegral is (Integral a, Num b) => a -> b. It seems to me that the function doesn't actually know what you want to convert it to, so it converts to the very generic Num class. So theoretically, the output of this is a variable of any type that fits the type class Num (correct me if I'm mistaken here--as I understand it, classes are kind of like "interfaces" where types are more like classes/primitives in OOP). If I assign a variable
let n = fromIntegral 5
:t n -- n :: Num b => b
So I'm wondering... what is 'b'? I can use this variable as anything, and it will implicitly cast to any numeric type, it seems. Not only will it implicitly cast to a Word8, it will implicitly cast to a Pixel8, meaning fromIntegral effectively gets turned from (as I understood it) (Integral a, Num b) => a -> b into (Integral a) => a -> Pixel8 depending on context.
Can someone please clarify exactly what's happening here? Why can I use a generic Num as any type that fits Num, both mechanically and "ethically"? I don't understand how the implicit conversion is implemented (if I were to create my own class, I feel like I would need to add explicit conversion functions). I also don't really know why this works; here I can use a pretty unsafe type and convert it implicitly to anything else. (for example, fromIntegral 50000 gets translated to 80 if I implicitly convert it to a Word8)
A common implementation of type classes such as Num is dictionary-passing. Roughly, when the compiler sees something like
f :: Num a => a -> a
f x = x + 2
it transforms it into something like
f :: (Integer -> a, a -> a -> a) -> a -> a
-- ^-- the "dictionary"
f (dictFromInteger, dictPlus) x = dictPlus x (dictFromInteger 2)
The latter basically says: "pass me an implementation for these methods of class Num for your type a, and I will use them to produce a function a -> a for you".
Values such as your n :: Num b => b are no different. They are compiled into things such as
n :: (Integer -> b) -> b
n dictFromInteger = dictFromInteger 5 -- roughly
As you can see, this turns innocent-looking integer literals into functions, which can (and does) impact performance. However, in many circumstances the compiler can realize that the full polymorphic version is not actually needed, and remove all the dictionaries.
For instance, if you write f 3 but f expects Int, the "polymorphic" 3 can be converted at compile time. So type inference can aid the optimization phase (and user-written type annotation can greatly help here). Further, some other optimizations can be triggered manually, e.g. using the GHC SPECIALIZE pragma. Finally, the dreaded monomorphism restriction tries hard to force non-functions to remain non-functions after translation, at the cost of some loss of polymorphism. However, the MR is now being regarded as harmful, since it can cause puzzling type errors in some contexts.
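For example, a sketch of the SPECIALIZE pragma mentioned above; it asks GHC to also compile a dictionary-free copy of f at Int:
f :: Num a => a -> a
f x = x + 2
{-# SPECIALIZE f :: Int -> Int #-}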

Relationship between Haskell's 'forall' and '=>'

I'm having trouble wrapping my mind around the relationship (and interactions) between Haskell's forall and => (and for that matter the . that often connects them).
For example
λ> :t (+)
λ> :t id
give
(+) :: forall a. Num a => a -> a -> a
id :: forall a. a -> a
and while I understand how these work in these specific cases, I'm not comfortable parsing the expressions (signatures?) forall a. Num a => or forall a. themselves into something meaningful, or that I can generally understand in more complex contexts.
What do forall a. Num a => and forall a. mean? Specifically, what are the roles played in each by forall, => and a?
(As another perspective, without invoking the "implicit dictionary passing" implementation of type classes):
forall a. in Haskell means "for every type a".1 It's introducing a type variable, and declaring that the rest of the type expression has to be valid whatever choice is made for a.
You usually don't see it in basic Haskell (without turning on any extensions in GHC), because it's not necessary; you just use type variables in your type signature, and GHC automatically assumes there are foralls introducing those variables at the start of the expression.
For example:
zip :: forall a. ( forall b. ( [a] -> [b] -> [(a, b)] ))
zip :: forall a. forall b. [a] -> [b] -> [(a, b)]
zip :: forall a b. [a] -> [b] -> [(a, b)]
zip :: [a] -> [b] -> [(a, b)]
The above are all the same; they just tell us that zip can be a way of zipping a list of a together with a list of b to make a list of (a, b) pairs, whatever choice we feel like making for a and b.
forall mainly comes into play with extensions, because then you can introduce type variables with scopes other than the default ones assumed by GHC if you don't explicitly write them.
Now, the constraints => type syntax can be read roughly as "these constraints imply this type", or "provided these constraints hold, you can use this type". It's used all the time, even in vanilla Haskell with no extensions, so it's important to understand what it means and how it works and not just copy and paste and hope.
The => arrow allows us to state a set of constraints on the variables in the rest of the type expression; it lets us put limitations on what choices can be made to introduce the type variable. You should read it first by ignoring everything left of the => arrow, and reading the right part on its own. This gives you the "shape" of the type. The stuff to the left of the => arrow tells you what kind of types you can use the rest of the type with.
An example:
(+) :: Num a => a -> a -> a
This means that (+) is exactly the same kind of thing as anything with a simpler type like a -> a -> a, except the Num a => is telling us that we're not free to choose just any type a. We can only choose a type for a when we know that it is a member of the Num type class (another slightly more precise way of saying "a is a member of Num" is "the constraint Num a holds").
Note that GHC is still assuming that there's an implicit forall a to introduce the type variable a here, so it really looks like:
(+) :: forall a. Num a => a -> a -> a
In which case you can read this off moderately easily as an English sentence once you know what forall a. and Num a => means: "For every type a, provided Num a holds, plus has the type a -> a -> a".
1 If you're familiar with formal logic at all, it's just an ASCII-friendly way of writing ∀a, a "universally quantified variable".
As the forall matter appears to be settled, I'll attempt to explain the => a bit. The things to the left of the => are arguments, much like ones to the left of a ->. But you don't apply these arguments manually, and they can only have specific types.
f :: Num a => a -> a
is a function that takes two arguments:
A Num a dictionary.
An a.
When you apply f, you just provide the a. GHC has to provide the Num a. If it's applied to a specific concrete type like Int, GHC knows Num Int and can supply it at the call site. Otherwise, it checks that Num a is provided by some outer context and uses that one. The great thing about Haskell's typeclass system is that it ensures that any two Num a dictionaries, however they are found, will be identical. So it doesn't matter where the dictionary comes from—it is sure to be the right one.
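A small sketch of both situations, reusing f from above:
atInt :: Int
atInt = f 3 -- GHC supplies the Num Int dictionary at the call site

viaContext :: Num a => a -> a
viaContext x = f (x + 1) -- f receives the Num a dictionary viaContext's own context provides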
Further discussion
A lot of these things we're talking about aren't exactly part of Haskell so much as they're part of the way GHC interprets Haskell by translation to GHC core, AKA System FC, an extension of the very well-studied System F, AKA the Girard-Reynolds calculus. System FC is an explicitly typed polymorphic lambda calculus with algebraic datatypes, etc., but no type inference, no instance resolution, etc. After GHC checks the types in your Haskell code, it translates that code to System FC by a thoroughly mechanical process. It can do this confidently because the type checker "decorates" the code with all the information the desugarer needs to plumb all the dictionaries around. If you have a Haskell function that looks like
foo :: forall a . Num a => a -> a -> a
foo x y = x + y
then that will translate to something that looks like
foo :: forall a . Num a -> a -> a -> a
foo = /\ (a :: *) -> \ (d :: Num a) -> \ (x :: a) -> \ (y :: a) -> (+) #a d x y
The /\ is a type lambda: it's just like a normal lambda, except it takes a type variable. The # represents application of a type to a function that takes one. The + is really just a record selector. It chooses the right field from the dictionary it's passed.
I suppose it helps if we add the implied parentheses:
(+) :: ∀ a . ( Num a => (a -> (a -> a)) )
id :: ∀ a . ( a -> a )
The ∀ always goes together with a `.`. It's basically special syntax meaning "anything between ∀ and . are type variables that I want to introduce into the following scope"†
=> denotes what Idris calls an implicit function: Num a is a dictionary for the instance Num a, and such a dictionary is implicitly needed whenever you're adding numbers. But whether a is a type variable here that was previously introduced by some ∀, or a fixed type, doesn't really matter. You could also have
(+) :: Num Int => Int -> Int -> Int
That's just superfluous, because the compiler knows that Int is a Num instance and hence automatically (implicitly!) chooses the right dictionary.
Really, there's no particular relationship between ∀ and =>, they just happen to be used often together.
†Actually this is a type-level lambda. The type expression ∀ a . b behaves analogously to the value level expression \a -> b.
