How to return an Integral in Haskell?

I'm trying to figure out Haskell, but I'm a bit stuck with 'Integral'.
From what I gather, Int and Integer are both Integral.
However if I try to compile a function like this:
lastNums :: Integral a => a -> a
lastNums a = read ( tail ( show a ) ) :: Integer
I get
Could not deduce (a ~ Integer)
from the context (Integral a)
How do I return an Integral?
Also, let's say I have to stick to that function signature.

Let's read this function type signature in English.
lastNums :: Integral a => a -> a
This means: "Let the caller choose any integral type. The lastNums function can take a value of that type and produce another value of the same type."
However, your definition always returns Integer. According to the type signature, it's supposed to leave that decision up to the caller.
Easiest way to fix this:
lastNums :: Integer -> Integer
lastNums = read . tail . show
There's no shame in defining a monomorphic function. Don't feel it has to be polymorphic just because it can be polymorphic. Often the polymorphic version is more complicated.
Here's another way:
lastNums :: (Integral a, Num a) => a -> a
lastNums = fromInteger . read . tail . show . toInteger
And another way:
lastNums :: (Integral a, Read a, Show a) => a -> a
lastNums = read . tail . show
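With that last version, the caller picks the result type. An illustrative GHCi session (not from the original answer, but straightforward to reproduce):
*Main> lastNums (12345 :: Int)
2345
*Main> lastNums (12345 :: Integer)
2345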

While Int and Integer both implement Integral, Haskell doesn't quite work like that. Instead, if your function returns a value of type Integral a => a, then it must be able to return any value that implements the Integral typeclass. This is different from how most OOP languages use interfaces, in which you can return a specific instance of an interface by casting it to the interface type.
In this case, if you wanted a function lastNums to take an Integral value, convert it to a string, drop the first digits, then convert back to an Integral value, you would have to implement it as
lastNums :: (Integral a, Show a, Read a) => a -> a
lastNums a = read ( tail ( show a ) )

You need to be able to Read and Show also. And get rid of the Integer annotation. An Integer is a concrete type while Integral is a typeclass.
lastNums :: (Integral a, Show a, Integral b, Read b) => a -> b
lastNums = read . tail . show
*Main> lastNums (32 :: Int) :: Integer
2

The Integral class offers integer division, and it's a subclass of Ord, so it has comparison too. Thus we can skip the string and just do math. Warning: I haven't tested this yet.
{-# LANGUAGE BangPatterns #-}

lastNums :: Integral a => a -> a
lastNums x | x < 0     = -x
           | otherwise = dropBiggest x

dropBiggest x = db x 0 1

db x acc !val
  | x < 10    = acc
  | otherwise = case x `quotRem` 10 of
      (q, r) -> db q (acc + r * val) (val * 10)
Side notes: the bang pattern (which needs the BangPatterns extension, enabled above) serves to make db unconditionally strict in val. We could add one to acc as well, but GHC will almost certainly figure that out on its own. Last I checked, GHC's native code generator (the default back-end) is not so great at optimizing division by known divisors; the LLVM back-end is much better at that.
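In the spirit of that warning, here is the kind of GHCi check one would run, with expected results worked out by hand from the algorithm above:
*Main> dropBiggest 54321
4321
*Main> lastNums (-987)
987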

Related

can't use pattern matching with datatypes defined using GADT in haskell

I'm trying to define a Complex datatype, and I want the constructors to be able to take any number type, so I would like to use a generic type as long as it implements a Num instance.
I'm using GADTs in order to do so, since to my understanding the DatatypeContexts language extension is considered a "misfeature", even if I think it would have been useful in this case...
In any case this is my code:
data Complex where
  Real :: ( Num num, Show num ) => num -> Complex
  Imaginary :: ( Num num, Show num ) => num -> Complex
  Complex :: ( Num num, Show num ) => num -> num -> Complex

real :: ( Num num, Show num ) => Complex -> num
real (Real r) = r
real (Imaginary _i) = 0
real (Complex r _i) = r
here the real implementation gives the following error:
Couldn't match expected type ‘num’ with actual type ‘num1’
‘num1’ is a rigid type variable bound by
a pattern with constructor:
Real :: forall num. (Num num, Show num) => num -> Complex,
in an equation for ‘real’
at <...>/Complex.hs:29:7-12
‘num’ is a rigid type variable bound by
the type signature for:
real :: forall num. (Num num, Show num) => Complex -> num
at <...>/Complex.hs:28:1-47
• In the expression: r
In an equation for ‘real’: real (Real r) = r
• Relevant bindings include
r :: num1
(bound at <...>/Complex.hs:29:12)
real :: Complex -> num
(bound at <...>/Complex.hs:29:1)
which to my understanding means the return type is being interpreted as a different type...
so I tried removing the type signature and letting GHC do its magic, but it turns out the inferred signature was the same...
can anyone please explain to me what is wrong here?
Problem is, these definitions allow you to choose different types when (1) constructing a Complex value and when (2) applying the real function. These two situations are not connected to each other in any way, so there is nothing to force the type to be the same between them. For example:
c :: Complex
c = Real (42 :: Int)
d :: Double
d = real c
The definition of d requires the real function to return a Double, but there is no Double wrapped inside of c, there is only Int.
As for solutions, there are two possible ones: (1) establish a connection between these two points, forcing the type to be the same, and (2) allow the type inside to be converted to any other numeric type.
To establish a type-level connection between two points, we need to use a type that is present at both points. What type would that be? Quite obviously, that's the type of c. So we need to make the type of c somehow convey what's wrapped inside it:
data Complex num = Real num | Imaginary num | Complex num num
real :: Complex num -> num
real = ...
-- Usage:
c :: Complex Int
c = Real 42
d :: Int
d = real c
Note that I don't actually need GADTs for this.
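For completeness, the question's real accessor ports over directly; a minimal sketch (it needs a Num constraint only because of the literal 0 in the Imaginary case):
real :: Num num => Complex num -> num
real (Real r) = r
real (Imaginary _i) = 0
real (Complex r _i) = r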
To allow type conversion, you'll need to require another type class for the num type. The class Num has a way to convert from any integral type, but there is no way to convert to any such type, because it doesn't make sense: 3.1415 can't be meaningfully converted to an integral type.
So you'll have to come up with your own way to convert, and implement it for all allowed types too:
class Convert a where
  toNum :: Num n => a -> n

data Complex where
  Real :: ( Num num, Show num, Convert num ) => num -> Complex
  ...

real :: Num num => Complex -> num
real (Real r) = toNum r
...
Just to be clear, I consider the second option quite insane. I only provided it for illustration. Don't do it. Go with option 1.
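To see just how restrictive Convert is, here is the one sort of instance that could exist; this is illustrative only, not part of the original answer:
instance Convert Integer where
  toNum = fromInteger

-- There is no honest instance for Double: toNum :: Num n => Double -> n
-- would have to produce, say, an Int from 3.1415, which is exactly the
-- conversion that doesn't make sense.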

Haskell type substitution

I am going through the Haskell Book (http://haskellbook.com/) and got stuck at the following exercise:
f :: Float
f = 1.0
-- Question: Can you replace `f`'s signature by `f :: Num a => a`
At first, I thought the answer would be Yes. Float provides an instance for Num, so substituting a Num a => a by a Float value should be fine (I am thinking co-variance here).
This does not compile however:
Could not deduce (Fractional a) arising from the literal ‘1.0’
from the context: Num a
bound by the type signature for:
f :: forall a. Num a => a
at ...
Possible fix:
add (Fractional a) to the context of
the type signature for:
f :: forall a. Num a => a
• In the expression: 1.0
In an equation for ‘f’: f = 1.0
If I do this however, no problem:
f :: Fractional a => a
f = 1.0
Why can't I use a less specific constraint here like Num a => a?
UPD:
Actually, you can sum this up to:
1.0::(Num a => a)
vs
1.0::(Fractional a => a)
Why is the second working but not the first one? I thought Fractional was a subset of Num (meaning Fractional is compatible with Num)
UPD 2:
Thanks for your comments, but I am still confused. Why this works:
f :: Num a => a -> a
f a = a
f 1.0
while this does not:
f :: Num a => a
f = 1.0
UPD 3:
I have just noticed something:
f :: Num a => a
f = (1::Int)
does not work either.
UPD 4
I have been reading all the answers/comments and as far as I understand:
f :: Num a => a
is the Scala equivalent of
def f[A: Num]: A
which would explain why many mentioned that a is defined by the caller. The only reason why we can write this:
f :: Num a => a
f = 1
is because 1 is typed as a Num a => a. Could someone please confirm this assumption? In any case, thank you all for your help.
If I have f :: Num a => a this means that I can use f wherever I need any numeric type. So, all of f :: Int, f :: Double must type check.
In your case, we can't have 1.0 :: Int for the same reason we can't have 543.24 :: Int, that is, Int does not represent fractional values. However, 1.0 does fit any fractional type (as 543.24 does).
Fractional indeed can be thought as a subset of Num. But if I have a value inside all the fractionals f :: forall a . Fractional a => a, I don't necessarily have a value in all the numeric types f :: forall a . Num a => a.
Note that, in a sense, the constraints are on the left side of =>, which makes them behave contravariantly. I.e. cars are a subset of vehicles, but I can't conclude that a wheel that can be used in any car will be able to be used with any vehicle. Rather, the opposite: a wheel that can be used in any vehicle will be able to be used with any car.
So, you can roughly regard f :: forall a . Num a => a (values fitting in any numeric type, like 3 and 5) as a subtype of f :: forall a . Fractional a => a (values fitting in any fractional type, like 3, 5, and 32.43).
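A small sketch (names invented here, standard GHC) makes the direction of that containment concrete:
n :: Num a => a
n = 3

fr :: Fractional a => a
fr = n      -- fine: every Fractional type is also Num

-- bad :: Num a => a
-- bad = 1.0   -- rejected with exactly the error from the question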
Let's start with the monomorphic case:
f :: Float
f = 1.0
Here, you've said that f is a Float; not an Int, not a Double, not any other type. 1.0, on the other hand, is a polymorphic constant; it has type Fractional a => a, and so can be used to provide a value of any type that has a Fractional instance. You can restrict it to Float, or Double, etc. Since f has to be a Float, that's what 1.0 is restricted to.
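You can ask GHCi about the literals directly (the type-variable name varies between GHC versions):
Prelude> :t 1.0
1.0 :: Fractional p => p
Prelude> :t 1
1 :: Num p => p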
If you try to change the signature
f :: Num a => a
f = 1.0
now you have a problem. You've now promised that f can be used to provide a value of any type that has a Num instance. That includes Int and Integer, but neither of those types has a Fractional instance. So, the compiler refuses to let you do this. 1.0 simply can't produce an Int should the need arise.
Likewise,
1.0::(Num a => a)
is a lie, because this isn't a restriction; it's an attempt at expanding the set of types that 1.0 can produce. Something with type Num a => a should be able to give me an Int, but 1.0 cannot do that.
1.0::(Fractional a => a)
works because you are just restating something that is already true. It's neither restricting 1.0 to a smaller set of types nor trying to expand it.
Now we get something a little more interesting, because you are specifying a function type, not just a value type.
f :: Num a => a -> a
f a = a
This just says that f can take as its argument any value that is no more polymorphic than Num a => a. a can be any type that implements Num, or it can be a polymorphic value that is a subset of the types that represent Num.
You chose
f 1.0
which means a gets unified with Fractional a => a. Type inference then decides that the return type is also Fractional a => a, and returning the same value you passed in is allowed.
We've already covered why this
f :: Num a => a
f = 1.0
isn't allowed above, and
f :: Num a => a
f = (1::Int)
fails for the same reason. Int is simply too specific; it is not the same as Num a => a.
For example, (+) :: Num a => a -> a -> a requires two arguments of the same type. So, I might try to write
(1 :: Integer) + f
or
(1 :: Float) + f
In both cases, I need f to be able to provide a value with the same type as the other argument, to satisfy the type of (+): if I want an Integer, I should be able to get an Integer, and if I want a Float, I should be able to get a Float. But if you could define f with something less general than Num a => a, you wouldn't be able to keep that promise. 1.0 :: Fractional a => a can't provide an Integer, and 1 :: Int can't provide anything except an Int, not even an Integer.
Think of a polymorphic type as a function from a concrete type to a value; you can literally do this if you enable the TypeApplications extension.
Prelude> :set -XTypeApplications
Prelude> :{
Prelude| f :: Num a => a
Prelude| f = 1
Prelude| :}
Prelude> :t f
f :: Num a => a
Prelude> :t f #Int
f #Int :: Int
Prelude> f #Int
1
Prelude> :t f #Float
f #Float :: Float
Prelude> f #Float
1.0
Prelude> :t f #Rational
f #Rational :: Rational
Prelude> f #Rational
1 % 1
The reason these all work is because you promised that you could pass any type with a Num instance to f, and it could return a value of that type. But if you had been allowed to say f = 1.0, there is no way that f #Int could, in fact, return an Int, because 1.0 simply is not capable of producing an Int.
When values with polymorphic types like Num a => a are involved, there are two sides. The source of the value, and the use of the value. One side gets the flexibility to choose any specific type compatible with the polymorphic type (like using Float for Num a => a). The other side is restricted to using code that will work regardless of which specific type is involved - it can only make use of features that will work for every type compatible with the polymorphic type.
There is no free lunch; both sides cannot have the same freedom to pick any compatible type they like.
This is just as true for object-oriented subclass polymorphism, however OO polymorphism rules give the flexibility to the source side, and put all the restrictions on the use side (except for generics, which works like parametric polymorphism), while Haskell's parametric polymorphism gives the flexibility to the use side, and puts all the restrictions on the source side.
For example in an OO language with a general number class Num, and Int and Double as subclasses of this, then a method returning something of type Num would work the way you're expecting. You can return 1.0 :: Double, but the caller can't use any methods on the value that are provided specifically by Double (like say one that splits off the fractional part) because the caller must be programmed to work the same whether you return an Int or a Double (or even any brand new subclass of Num that is private to your code, that the caller cannot possibly know about).
Polymorphism in Haskell is based on type parameters rather than subtyping, which switches things around. The place in the code where f :: Num a => a is used has the freedom to demand any particular choice for a (subject to the Num constraint), and the code for f that is the source of the value must be programmed to work regardless of the use-site's choice. The use-site is even free to demand values of a type that is private to the code using f, that the implementer of f cannot possibly know about. (I can literally open up a new file, make any bizarre new type I like and give it an instance for Num, and any of the standard library functions written years ago that are polymorphic in Num will work with my type)
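To make that last parenthetical concrete, here is a minimal sketch (the type and its name are invented for illustration) of a brand-new Num type that decades-old polymorphic code accepts:
newtype Weird = Weird Integer deriving Show

instance Num Weird where
  Weird a + Weird b = Weird (a + b)
  Weird a * Weird b = Weird (a * b)
  abs    (Weird a)  = Weird (abs a)
  signum (Weird a)  = Weird (signum a)
  negate (Weird a)  = Weird (negate a)
  fromInteger       = Weird

-- sum :: Num a => [a] -> a was written long before Weird existed, yet:
-- ghci> sum [Weird 1, Weird 2, Weird 3]
-- Weird 6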
So this works:
f :: Float
f = 1.0
Because there are no type variables, both source and use-site simply have to treat this as a Float. But this does not:
f :: Num a => a
f = 1.0
Because the place where f is used can demand any valid choice for a, and this code must be able to work for that choice. (It would not work when Int is chosen, for example, so the compiler must reject this definition of f). But this does work:
f :: Fractional a => a
f = 1.0
Because now the use site is only free to demand any type that is in Fractional, which excludes the ones (like Int) that floating point literals like 1.0 cannot support.
Note that this is exactly how "generics" work in object oriented languages, so if you're familiar with any language supporting generics, just treat Haskell types the way you do generic ones in OO languages.
One further thing that may be confusing you is that in this:
f :: Float
f = 1.0
The literal 1.0 isn't actually definitively a Float. Haskell literals are much more flexible than those of most other languages. Whereas e.g. Java says that 1.0 is definitely a value of type double (with some automatic conversion rules if you use a double where certain other types are expected), in Haskell that 1.0 is actually itself a thing with a polymorphic type. 1.0 has type Fractional a => a.
So the reason the f :: Fractional a => a definition worked is obvious, it's actually the f :: Float definition that needs some explanation. It's making use of exactly the rules I described above in the first section of my post. Your code f = 1.0 is a use-site of the value represented by 1.0, so it can demand any type it likes (subject to Fractional). In particular, it can demand that the literal 1.0 supply a value of type Float.
This again reinforces why the f :: Num a => a definition can't work. Here the type for f is promising to f's callers that they can demand any type they like (subject to Num). But it's going to fulfill that demand by just passing the demand down the chain to the literal 1.0, which has the most general type of Fractional a => a. So if the use-site of f demands a type that is in Num but outside Fractional, f would then try to demand that 1.0 supply that same non-Fractional type, which it can't.
Names do not have types. Values have types, and the type is an intrinsic part of the value. 1 :: Int and 1 :: Integer are two different values. Haskell does not implicitly convert values between types, though it can be trivial to define functions that take values of one type and return values of another. (For example, f :: Int -> Integer with f x = toInteger x will "convert" its Int argument to an Integer.)
A declaration like
f :: Num a => a
does not say that f has type Num a => a, it says that you can assign values of type Num a => a to f.
You can think of a polymorphic value like 1 :: Num a => a being all 1-like values in every type with a Num instance, including 1 :: Int, 1 :: Integer, 1 :: Rational, etc.
An assignment like f = 1 succeeds because the literal 1 has the expected type Num a => a.
An assignment like f = 1.0 fails because the literal 1.0 has a different type, Fractional a => a, and that type is too specific. It does not include all 1-like values that Num a => a may be called on to produce.
Suppose you declared g :: Fractional a => a. You can say g = 1.0, because the types match. You cannot say g = (1.0 :: Float), because the types do not match; Float has a Fractional instance, but it is just one of a possibly infinite set of types that could have Fractional instances.
You can say g = 1, because Fractional a => a is more specific than Num a => a, and Fractional has Num as its superclass. The assignment "selects" the subset of 1 :: Num a => a that overlaps with (and for all intents and purposes is) 1 :: Fractional a => a and assigns that to g. Put another way, just as 1 :: Num a => a can produce a value for any single type that has a Num instance, it can produce a value for any subset of types implied by a subclass of Num.
I am puzzled by this as well.
What gets stuck in my mind is something like:
// in pseudo-typescript
type Num = Int | Float;
let f: Num;
f = (1.0 as Float); // why this doesn't work
Fact is, Num a => a is just not a simple sum of numeric types.
It represents something that can morph into various kind of numeric types.
Thanks to chepner's explanation, now I can persuade myself like this:
if I have a Num a => a, then I can get an Int from it, I can also get a Float from it, as well as a Double....
If I were able to install 1.1 into a Num a => a, then there would be no way I could safely derive an Int from 1.1.
The expression 1 is able to bind to Num a => a because 1 itself is a polymorphic constant with type signature Num a => a.

Problems With Type Inference on (^)

So, I'm trying to write my own replacement for Prelude, and I have (^) implemented as such:
{-# LANGUAGE RebindableSyntax #-}

class Semigroup s where
  infixl 7 *
  (*) :: s -> s -> s

class (Semigroup m) => Monoid m where
  one :: m

class (Ring a) => Numeric a where
  fromIntegral :: (Integral i) => i -> a
  fromFloating :: (Floating f) => f -> a

class (EuclideanDomain i, Numeric i, Enum i, Ord i) => Integral i where
  toInteger :: i -> Integer
  quot :: i -> i -> i
  quot a b = let (q,r) = (quotRem a b) in q
  rem :: i -> i -> i
  rem a b = let (q,r) = (quotRem a b) in r
  quotRem :: i -> i -> (i, i)
  quotRem a b = let q = quot a b; r = rem a b in (q, r)

-- . . .

infixr 8 ^
(^) :: (Monoid m, Integral i) => m -> i -> m
(^) x i
  | i == 0 = one
  | True = let (d, m) = (divMod i 2)
               rec = (x*x) ^ d
           in if m == one then x*rec else rec
(Note that the Integral used here is one I defined, not the one in Prelude, although it is similar. Also, one is a polymorphic constant that's the identity under the monoidal operation.)
Numeric types are monoids, so I can try to do, say 2^3, but then the typechecker gives me:
*AlgebraicPrelude> 2^3
<interactive>:16:1: error:
* Could not deduce (Integral i0) arising from a use of `^'
from the context: Numeric m
bound by the inferred type of it :: Numeric m => m
at <interactive>:16:1-3
The type variable `i0' is ambiguous
These potential instances exist:
instance Integral Integer -- Defined at Numbers.hs:190:10
instance Integral Int -- Defined at Numbers.hs:207:10
* In the expression: 2 ^ 3
In an equation for `it': it = 2 ^ 3
<interactive>:16:3: error:
* Could not deduce (Numeric i0) arising from the literal `3'
from the context: Numeric m
bound by the inferred type of it :: Numeric m => m
at <interactive>:16:1-3
The type variable `i0' is ambiguous
These potential instances exist:
instance Numeric Integer -- Defined at Numbers.hs:294:10
instance Numeric Complex -- Defined at Numbers.hs:110:10
instance Numeric Rational -- Defined at Numbers.hs:306:10
...plus four others
(use -fprint-potential-instances to see them all)
* In the second argument of `(^)', namely `3'
In the expression: 2 ^ 3
In an equation for `it': it = 2 ^ 3
I get that this arises because Int and Integer are both Integral types, but then why is it that with the normal Prelude I can do this just fine?
Prelude> :t (2^)
(2^) :: (Num a, Integral b) => b -> a
Prelude> :t 3
3 :: Num p => p
Prelude> 2^3
8
Even though the signatures for partial application in mine look identical?
*AlgebraicPrelude> :t (2^)
(2^) :: (Numeric m, Integral i) => i -> m
*AlgebraicPrelude> :t 3
3 :: Numeric a => a
How would I make it so that 2^3 would in fact work, and thus give 8?
A Hindley-Milner type system doesn't really like having to default anything. In such a system, you want types to be either properly fixed (rigid, skolem) or properly polymorphic, but the concept of “this is, like, an integer... but if you prefer, I can also cast it to something else” as many other languages have doesn't really work out.
Consequently, Haskell sucks at defaulting. It doesn't have first-class support for that, only a pretty hacky ad-hoc, hard-coded mechanism which mainly deals with built-in number types, but fails at anything more involved.
You therefore should try to not rely on defaulting. My opinion is that the standard signature for ^ is unreasonable; a better signature would be
(^) :: Num a => a -> Int -> a
The Int is probably controversial – of course Integer would be safer in a sense; however, an exponent too big to fit in Int generally means the result will be totally off the scale anyway and couldn't feasibly be calculated by iterated multiplication, so this kind of expresses the intent pretty well. And it gives the best performance for the extremely common situation where you just write x^2 or similar, which is something where you very definitely don't want to have to put an extra signature in the exponent.
In the rather fewer cases where you have a concrete e.g. Integer number and want to use it in the exponent, you can always shove in an explicit fromIntegral. That's not nice, but rather less of an inconvenience.
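A minimal sketch of that proposal, under the hypothetical name pow to avoid clashing with the Prelude, and with the same square-and-multiply shape as the question's version (negative exponents are left unhandled, as in the original):
pow :: Num a => a -> Int -> a
pow _ 0 = 1
pow x n
  | even n    = r
  | otherwise = x * r
  where r = pow (x * x) (n `div` 2)
Now pow 2 3 evaluates to 8 with no ambiguity, because the exponent literal is simply an Int.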
As a general rule, I try to avoid† any function arguments that are more polymorphic than the result. Haskell's polymorphism works best “backwards”, i.e. the opposite way from dynamic languages: the caller requests what type the result should be, and the compiler figures out from this what the arguments should be. This works pretty much always, because as soon as the result is somehow used in the main program, the types in the whole computation have to be linked in a tree structure.
OTOH, inferring the type of the result is often problematic: arguments may be optional, may themselves be linked only to the result, or given as polymorphic constants like Haskell number literals. So, if i doesn't turn up in the result of ^, avoid letting it occur in the arguments either.
†“Avoid” doesn't mean I don't ever write them, I just don't do so unless there's a good reason.

why is this snippet valid with an explicit value, but invalid as a function?

I'm trying to work through a problem where I need to calculate the "small" divisors of an integer. I'm just brute-forcing through all numbers up to the square root of the given number, so to get the divisors of 10 I'd write:
[k|k<-[1...floor(sqrt 10)],rem 10 k<1]
This seems to work well. But as soon as I plug this into a function
f n=[k|k<-[1...floor(sqrt n)],rem n k<1]
and actually call this function, I get an error:
f 10
No instance for (Floating t0) arising from a use of `it'
The type variable `t0' is ambiguous
Note: there are several potential instances:
instance Floating Double -- Defined in `GHC.Float'
instance Floating Float -- Defined in `GHC.Float'
In the first argument of `print', namely `it'
In a stmt of an interactive GHCi command: print it
As far as I understand, the actual print function that prints the result to the console is causing trouble, but I cannot find out what is wrong. It says the type is ambiguous, but the function can clearly only return a list of integers. Then again I checked, and the (inferred) type of f is
f :: (Floating t, Integral t, RealFrac t) => t -> [t]
I can understand that f should be able to accept any real numerical value, but can anyone explain why the return type should be anything other than an Integral or Int?
[k|k<-[1...floor(sqrt 10)],rem 10 k<1]
this works because the first 10 is not the same as the latter one. To see this, we need the type signatures of the functions involved:
sqrt :: Floating a => a -> a
rem :: Integral a => a -> a -> a
so the first one means that it works for stuff that has a floating-point representation - a.k.a. Float, Double ..., and the second one works for Int, Integer (bigint), Word8 (unsigned 8-bit integers)...
so for the 10 in sqrt 10 the compiler says - ahh this is a floating point number, null problemo, and for the 10 in rem 10 k, ahh this is an integer like number, null problemo as well.
But when you bundle them up in a function, you are saying n has to be both a floating-point and an integral number; the compiler knows no such type and complains.
So what do we do to fix that? (And a side note: ranges in Haskell are written with .., not ...!) Let us start by taking a concrete solution and generalizing it.
f :: Int -> [Int]
f n = [k | k <- [1..n'], rem n k < 1]
  where n' = floor $ sqrt $ fromIntegral n
The necessary part was converting the Int to a floating-point number. But if you put that in a library, all your users need to stick with Int, which is okay but far from ideal. So how do we generalize (as promised)? We let GHCi do it for us - using a lazy language, we tend to be lazy ourselves.
We start by commenting out the type signature:

-- f :: Int -> [Int]
f n = [k | k <- [1..n'], rem n k < 1]
  where n' = floor $ sqrt $ fromIntegral n

$> ghci MyLib.hs
....
MyLib> :type f
f :: Integral a => a -> [a]
Then we can take this and put it into the library, and if someone works with Word8 or Integer, that will work as well.
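For instance, an illustrative session (results worked out by hand):
MyLib> f (10 :: Int)
[1,2]
MyLib> f (100 :: Integer)
[1,2,4,5,10]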
Another solution would be to use rem (floor n) k < 1 and have
f :: (Floating a, RealFrac a, Integral b) => a -> [b]
as the type, but that would be kind of awkward.

Determining the type of a function

I am trying to figure out the way Haskell determines type of a function. I wrote a sample code:
compareAndIncrease a b =
  if a > b then a+1:b:[]
  else a:b:[]
which constructs a list based on the a > b comparison. Then I checked its type with the :t command:
compareAndIncrease :: (Ord a, Num a) => a -> a -> [a]
OK, so I need a typeclass Ord for comparison, Num for numerical computations (like a+1). Then I take parameters a and b and get a list in return (a->a->[a]). Everything seems fine. But then I found somewhere a function to replicate the number:
replicate' a b
  | a == 0 = []
  | a > 0 = b : replicate (a-1) b
Note that the normal library replicate function is used inside, not replicate' itself. It should be similar to compareAndIncrease, because it uses comparison, numerical operations and returns a list, so I thought it would work like this:
replicate' :: (Ord a, Num a) => a -> a -> [a]
However, when I checked with :t, I got this result:
replicate' :: Int -> t -> [t]
I continued fiddling with this function and changed its name to repval.
Could anyone explain to me what is happening?
GHCi is a great tool to use here:
*Main> :type replicate
replicate :: Int -> a -> [a]
You define replicate' in terms of replicate (I rename your variables for clarity):
replicate' n e
  | -- blah blah blah
  | n > 0 = e : replicate (n - 1) e
Since you call replicate (n - 1), the type checker infers that n - 1 must have type Int, from which it infers that n must have type Int, from which it infers that replicate' has type Int -> a -> [a].
If you wrote your replicate' recursively, using replicate' inside instead of replicate, then you would get
*Main> :type replicate'
replicate' :: (Ord a, Num a) => a -> a1 -> [a1]
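That is, the fully recursive variant the paragraph above describes would look like this (reconstructed from the description; note there is no longer any call forcing Int):
replicate' n e
  | n == 0 = []
  | n > 0 = e : replicate' (n - 1) e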
Edit
As Ganesh Sittampalam points out, it's best to constrain the type to Integral as it doesn't really make sense to replicate a fractional number of times.
The key flaw in your reasoning is that replicate actually only takes Int for the replication count, rather than a more general numeric type.
If you instead used genericReplicate, then your argument would be roughly valid.
genericReplicate :: Integral i => i -> a -> [a]
However note that the constraint is Integral rather than Num because Num covers any kind of number including real numbers, whereas it only makes sense to repeat something an integer number of times.
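A quick usage sketch, assuming nothing beyond Data.List:
Prelude> import Data.List (genericReplicate)
Prelude Data.List> genericReplicate (3 :: Integer) 'x'
"xxx"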
