How does Haskell pick a type for an ambiguous expression

If an expression can be typed in several ways, how does Haskell pick which one to use?
Motivating example
Take this example:
$ ghci
GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help
Prelude> import Data.Ratio (Ratio)
Prelude Data.Ratio> f z s = show $ truncate $ z + (read s)
Prelude Data.Ratio> :type f
f :: (RealFrac a, Read a) => a -> String -> String
Prelude Data.Ratio> s = take 30 (cycle "12345")
Prelude Data.Ratio> s
"123451234512345123451234512345"
Prelude Data.Ratio> f 0 s
"123451234512345121227855101952"
Prelude Data.Ratio> f (0::Double) s
"123451234512345121227855101952"
Prelude Data.Ratio> f (0::Float) s
"123451235679745417161721511936"
Prelude Data.Ratio> f (0::Ratio Integer) (s ++ "%1")
"123451234512345123451234512345"
Prelude Data.Ratio> show $ truncate $ read s
"123451234512345121227855101952"
When I used 0 without any type, I got the same result as for (0::Double). So it seems to me that when I just invoke f 0 s, it uses a version of read that produces a Double, and a version of truncate that turns that Double into some integral type. I introduced the variable z so I could have some easy control over the type here. With that I could show that other interpretations are possible, e.g. using Float or exact ratios. So why Double? The last line, which omits the addition, shows that the behavior is independent of that zero constant.
I guess something tells Haskell that Double is a more canonical type than others, either in general or when used as a RealFrac, so if it can interpret an expression using Double as an intermediate result, but also some other types, then it will prefer the Double interpretation.
Core questions
Is my interpretation of the observed behavior correct, and is there an implicit type default here?
What is the name for this kind of preference?
Is there a way to disable such a choice of canonical type and enforce explicit type specifications for things that can be interpreted in multiple ways?
Own research
I've read that https://en.wikibooks.org/wiki/Haskell/Type_basics_II#Polymorphic_guesswork writes
With no other restrictions, 5.12 will assume the default Fractional type of Double, so (-7) will become a Double as well.
That appears to confirm my assumption that Double is somehow blessed as the default type for RealFrac (or some superclass of it). It still doesn't offer a name for that concept, nor a complete list of the rules around it.
Background
The situation I actually want to handle is more like this:
f :: Integer -> String -> Integer
f scale str = truncate $ (fromInteger scale) * (read str)
So I want a function that takes a string, reads it as a decimal fraction, multiplies it with a given number, then truncates the result back to an integer. I was very surprised to find that this compiles without me specifying the intermediate fractional type anywhere.

If there is an ambiguous type variable v with a Num v constraint, it gets defaulted to Integer or Double, tried in that order, whichever satisfies all other constraints on v.
Those defaulting rules are explained in the Haskell Report: https://www.haskell.org/onlinereport/haskell2010/haskellch4.html#x10-620004
The GHC manual also explains additional defaulting rules in GHCi (this means trying things in GHCi will not give you an accurate picture of what is going on when you compile a program): https://downloads.haskell.org/ghc/latest/docs/html/users_guide/ghci.html#type-defaulting-in-ghci
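To the third core question (is there a way to disable this silent choice): a module-level default declaration controls which types are tried, and an empty one disables numeric defaulting entirely. Below is a minimal sketch using the Background function; the Double annotation, the example values, and the module layout are mine, not from the question.

module Main where

default ()  -- an empty default list: no silent choice of Integer/Double in this module

f :: Integer -> String -> Integer
f scale str = truncate ((fromInteger scale :: Double) * read str)
-- With "default ()" in effect, dropping the ":: Double" annotation turns the
-- silent defaulting into an "ambiguous type variable" error, so every such
-- intermediate type has to be written out. GHC's -Wtype-defaults warning is a
-- gentler option that merely reports where defaulting happened.

main :: IO ()
main = print (f 3 "2.5")  -- prints 7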

Related

How to determine the type of literal in Haskell?

When I test the type of a literal in GHCi, I find
Prelude> :t 1
1 :: Num p => p
Prelude> :t 'c'
'c' :: Char
Prelude> :t "string"
"string" :: [Char]
Prelude> :t 1.0
1.0 :: Fractional p => p
The problem is: how does Haskell determine the type of such a literal? Where can I find information about that?
Furthermore, is there any way to change how GHC interprets the type of a literal?
For example:
-- do something
:t 1
1 :: Int -- interprets 1 as Int rather than Num p => p
:t 1.0
1.0 :: Double -- interprets 1.0 as Double rather than Fractional p => p
Thanks in advance.
You can ask ghci to default the type variables:
$ ghci
λ> let x = 3
λ> :type x
x :: Num p => p
λ> :type +d x
x :: Integer
λ> :type +d 1
1 :: Integer
λ> :type +d 1.0
1.0 :: Double
The :type +d command makes ghci choose the default types for the type variables. Also, these are the general Haskell defaulting rules:
default Num Integer
default Real Integer
default Enum Integer
default Integral Integer
default Fractional Double
default RealFrac Double
default Floating Double
default RealFloat Double
You can learn more about it here.
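For the second part of the question (changing how literals are interpreted), in a compiled module you can write your own default declaration. A minimal sketch; the wrap-around result assumes a 64-bit Int, and the module itself is mine:

module Main where

default (Int, Double)  -- replaces the usual default (Integer, Double)

main :: IO ()
main = print (2 ^ 64)
-- With the standard defaults this ambiguous expression is taken at Integer
-- and prints 18446744073709551616; with Int first it wraps around to 0.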
If you write 1, it has any possible number type. That's what Num p => p actually means.
If you use 1 in an expression, GHCi will attempt to figure out the correct type of number to use based on what functions you're calling on it, and then automatically give 1 the right type.
If GHCi cannot guess what the correct type is (because there's not enough context or because several types would fit), it defaults to Integer. (And for 1.0 it will default to Double. And for any other type constraint, it will try to default to () if possible.)
This is similar to how compiled code works. If you write a number in your source code, GHC (the compiler) will attempt to infer the correct type from how it's used. The difference is that the compiler only applies the standard, more limited defaulting rules; if those don't resolve the type, it won't guess further, it'll just give you a compile-time error and demand that you specify what you mean. That's desirable for making compiled code work the way you expected, but it's tedious for interactively trying stuff out, which is why GHCi has extended defaulting.
The type of a single character is always Char.
The type of a string is always String or [Char]. (One is an alias to the other.)
The type of True and False is always Bool. And so on.
So it's only really numbers that have the possibility of multiple types.
[Well, there's an option to make strings polymorphic too, but we won't worry about that now...]
If you want messy details, you can read the Haskell Language Report (which is the official specification document that defines the Haskell language) and the GHCi user manual (which describes what GHCi does).

Type inference interferes with referential transparency

What is the precise promise/guarantee the Haskell language provides with respect to referential transparency? At least the Haskell report does not mention this notion.
Consider the expression
(7^7^7`mod`5`mod`2)
And I want to know whether or not this expression is 1. To be safe, I will perform this twice:
( (7^7^7`mod`5`mod`2)==1, [False,True]!!(7^7^7`mod`5`mod`2) )
which now gives (True,False) with GHCi 7.4.1.
Evidently, this expression is now referentially opaque. How can I tell whether or not a program is subject to such behavior? I can inundate the program with :: all over but that does not make it very readable. Is there any other class of Haskell programs in between that I miss? That is between a fully annotated and an unannotated one?
(Apart from the only somewhat related question I found on SO there must be something else on this)
I do not think there's any guarantee that evaluating a polymorphically typed expression such as 5 at different types will produce "compatible" results, for any reasonable definition of "compatible".
GHCi session:
> class C a where num :: a
> instance C Int where num = 0
> instance C Double where num = 1
> num + length [] -- length returns an Int
0
> num + 0 -- GHCi defaults to Double for some reason
1.0
This looks as if it's breaking referential transparency, since length [] and 0 should be equal, but under the hood it's num that's being used at different types.
Also,
> "" == []
True
> [] == [1]
False
> "" == [1]
*** Type error
where one could have expected False in the last line.
So, I think referential transparency only holds when the exact types are specified to resolve polymorphism. An explicit type parameter application à la System F would make it possible to always substitute a variable with its definition without altering the semantics: as far as I understand, GHC internally does exactly this during optimization to ensure that semantics is unaffected. Indeed, GHC Core has explicit type arguments which are passed around.
The problem is overloading, which does indeed sort of violate referential transparency. You have no idea what something like (+) does in Haskell; it depends on the type.
When a numeric type is unconstrained in a Haskell program the compiler uses type defaulting to pick some suitable type. This is for convenience, and usually doesn't lead to any surprises. But in this case it did lead to a surprise. In ghc you can use -fwarn-type-defaults to see when the compiler has used defaulting to pick a type for you. You can also add the line default () to your module to stop all defaulting.
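For instance, compiling the question's expression with that warning enabled points out the silent choice. A minimal sketch (the exact warning wording differs between GHC versions):

{-# OPTIONS_GHC -fwarn-type-defaults #-}
module Main where

main :: IO ()
main = print (7^7^7 `mod` 5 `mod` 2)
-- GHC warns that the constraints on this expression were defaulted to
-- Integer; adding a "default ()" declaration instead turns this into an
-- ambiguous-type error, forcing an explicit annotation.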
I thought of something which might help clarify things...
The expression mod (7^7^7) 5 has type Integral a => a, so there are two common ways to turn it into an Int:
Perform all of the arithmetic using Integer operations and types and then convert the result to an Int.
Perform all of the arithmetic using Int operations.
If the expression is used in an Int context Haskell will perform method #2. If you want to force Haskell to use #1 you have to write:
fromInteger (mod (7^7^7) 5)
This will ensure that all of the arithmetic operations will be performed using Integer operations and types.
When you enter the expression at the ghci REPL, the defaulting rules type it as an Integer, so method #1 was used. When you use the expression with the !! operator, it is typed as an Int, so it is computed via method #2.
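A quick GHCi check of the two routes; the Int results assume a 64-bit GHC and agree with the table in the update below:

Prelude> mod (7^7^7) 5 :: Integer
3
Prelude> fromInteger (mod (7^7^7) 5) :: Int
3
Prelude> mod (7^7^7) 5 :: Int
2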
My original answer:
In Haskell the evaluation of an expression like
(7^7^7`mod`5`mod`2)
depends entirely on which Integral instance is being used, and this is something that every Haskell programmer learns to accept.
The second thing that every programmer (in any language) has to be aware of is that numeric operations are subject to overflow, underflow, loss of precision, etc., and so the laws of arithmetic may not always hold. For instance, x+1 > x is not always true; addition and multiplication of floating-point numbers are not always associative; the distributive law does not always hold; etc. When you create an overflowing expression you enter the realm of undefined behavior.
Also, in this particular case there are better ways to go about evaluating this expression which preserves more of our expectation of what the result should be. In particular, if you want to efficiently and accurately compute a^b mod c you should be using the "power mod" algorithm.
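For illustration, here is a sketch of the square-and-multiply idea; powMod is a hypothetical helper name, not a library function. Applied to the question's expression, powMod 7 (7^7) 5 gives 3, and 3 `mod` 2 is 1.

-- Computes b^e mod m without ever materialising the huge power b^e.
powMod :: Integer -> Integer -> Integer -> Integer
powMod _ 0 m = 1 `mod` m
powMod b e m
  | even e    = powMod (b * b `mod` m) (e `div` 2) m
  | otherwise = b `mod` m * powMod b (e - 1) m `mod` m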
Update: Run the following program to see how the choice of Integral instance affects what an expression evaluates to:
import Data.Int
import Data.Word
import Data.LargeWord -- cabal install largeword
expr :: Integral a => a
expr = (7^e `mod` 5)
  where e = 823543 :: Int
main :: IO ()
main = do
  putStrLn $ "as an Integer: " ++ show (expr :: Integer)
  putStrLn $ "as an Int64: " ++ show (expr :: Int64)
  putStrLn $ "as an Int: " ++ show (expr :: Int)
  putStrLn $ "as an Int32: " ++ show (expr :: Int32)
  putStrLn $ "as an Int16: " ++ show (expr :: Int16)
  putStrLn $ "as a Word8: " ++ show (expr :: Word8)
  putStrLn $ "as a Word16: " ++ show (expr :: Word16)
  putStrLn $ "as a Word32: " ++ show (expr :: Word32)
  putStrLn $ "as a Word128: " ++ show (expr :: Word128)
  putStrLn $ "as a Word192: " ++ show (expr :: Word192)
  putStrLn $ "as a Word224: " ++ show (expr :: Word224)
  putStrLn $ "as a Word256: " ++ show (expr :: Word256)
and the output (compiled with GHC 7.8.3, 64-bit):
as an Integer: 3
as an Int64: 2
as an Int: 2
as an Int32: 3
as an Int16: 3
as a Word8: 4
as a Word16: 3
as a Word32: 3
as a Word128: 4
as a Word192: 0
as a Word224: 2
as a Word256: 1
What is the precise promise/guarantee the Haskell language provides with respect to referential transparency? At least the Haskell report does not mention this notion.
Haskell does not provide a precise promise or guarantee. There exist many functions like unsafePerformIO or traceShow which are not referentially transparent. The extension called Safe Haskell however provides the following promise:
Referential transparency — Functions in the safe language are deterministic, evaluating them will not cause any side effects. Functions in the IO monad are still allowed and behave as usual. Any pure function though, as according to its type, is guaranteed to indeed be pure. This property allows a user of the safe language to trust the types. This means, for example, that the unsafePerformIO :: IO a -> a function is disallowed in the safe language.
Haskell provides an informal promise outside of this: the Prelude and base libraries tend to be free of side effects and Haskell programmers tend to label things with side effects as such.
Evidently, this expression is now referentially opaque. How can I tell whether or not a program is subject to such behavior? I can inundate the program with :: all over but that does not make it very readable. Is there any other class of Haskell programs in between that I miss? That is between a fully annotated and an unannotated one?
As others have said, the problem emerges from this behavior:
Prelude> ( (7^7^7`mod`5`mod`2)==1, [False,True]!!(7^7^7`mod`5`mod`2) )
(True,False)
Prelude> 7^7^7`mod`5`mod`2 :: Integer
1
Prelude> 7^7^7`mod`5`mod`2 :: Int
0
This happens because 7^7^7 is a huge number (about 700,000 decimal digits) which easily overflows a 64-bit Int type, but the problem will not be reproducible on 32-bit systems:
Prelude> :m + Data.Int
Prelude Data.Int> 7^7^7 :: Int64
-3568518334133427593
Prelude Data.Int> 7^7^7 :: Int32
1602364023
Prelude Data.Int> 7^7^7 :: Int16
8823
If you use rem (7^7^7) 5 instead, the remainder for Int64 is reported as -3; but since -3 is equivalent to +2 modulo 5, mod reports +2.
The Integer answer is used on the left due to the defaulting rules for Integral classes; the platform-specific Int type is used on the right due to the type of (!!) :: [a] -> Int -> a. If you use the appropriate indexing operator for Integral a you instead get something consistent:
Prelude> :m + Data.List
Prelude Data.List> ((7^7^7`mod`5`mod`2) == 1, genericIndex [False,True] (7^7^7`mod`5`mod`2))
(True,True)
The problem here is not referential transparency, because the function we're calling, ^, is actually two different functions (as they have different types). What has tripped you up is typeclasses, which are an implementation of constrained ambiguity in Haskell; you have discovered that this ambiguity (unlike unconstrained ambiguity, i.e. parametric types) can deliver counterintuitive results. This shouldn't be too surprising, but it's definitely a little strange at times.
Another type has been chosen, because !! requires an Int. The full computation now uses Int instead of Integer.
λ> ( (7^7^7`mod`5`mod`2 :: Int)==1, [False,True]!!(7^7^7`mod`5`mod`2) )
(False,False)
Why do you think this has anything to do with referential transparency? Your uses of 7, ^, mod, 5, 2, and == are applications of those variables to dictionaries, yes, but I don't see why you think that fact makes Haskell referentially opaque. Often applying the same function to different arguments produces different results, after all!
Referential transparency has to do with this expression:
let x :: Int = 7^7^7`mod`5`mod`2 in (x == 1, [False, True] !! x)
x is here a single value, and should always have that same single value.
By contrast, if you say:
let x :: forall a. Integral a => a; x = 7^7^7`mod`5`mod`2 in (x == 1, [False, True] !! x)
(or use the expression inline, which is equivalent), x is now a function, and can return different values depending on the Integral dictionary you supply to it. You might as well complain that let f = (+1) in map f [1, 2, 3] is [2, 3, 4], but let f = (+3) in map f [1, 2, 3] is [4, 5, 6], and then say "Haskell gives different values for map f [1, 2, 3] depending on the context so it's referentially opaque"!
Another thing probably related to type inference and referential transparency is the "dreaded" monomorphism restriction (its absence, to be exact). A direct quote:
An example, from "A History of Haskell":
Consider the genericLength function, from Data.List
genericLength :: Num a => [b] -> a
And consider the function:
f xs = (len, len)
  where
    len = genericLength xs
len has type Num a => a and, without the monomorphism restriction, it could be computed twice.
Notice that in this case types of both expressions are the same. Results are too, but the substitution isn't always possible.
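A small compilable sketch of that point; the pair function, its signature, and the string argument are mine, added purely for illustration:

{-# LANGUAGE NoMonomorphismRestriction #-}
import Data.List (genericLength)

-- With the restriction disabled, len keeps the overloaded type Num a => a,
-- so the two components of the pair can use it at two different types.
-- Without the pragma, this definition is rejected.
pair :: [b] -> (Int, Double)
pair xs = (len, len)
  where len = genericLength xs

main :: IO ()
main = print (pair "hello")  -- prints (5,5.0)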

Haskell Type Coercion

I'm trying to wrap my head around Haskell type coercion; that is, when can one pass a value into a function without casting, and how does that work? Here is a specific example, but I am looking for a more general explanation I can use going forward to try and understand what is going on:
Prelude> 3 * 20 / 4
15.0
Prelude> let c = 20
Prelude> :t c
c :: Integer
Prelude> 3 * c / 4
<interactive>:94:7:
No instance for (Fractional Integer)
arising from a use of `/'
Possible fix: add an instance declaration for (Fractional Integer)
In the expression: 3 * c / 4
In an equation for `it': it = 3 * c / 4
The type of (/) is Fractional a => a -> a -> a. So, I'm guessing that when I do "3 * 20" using literals, Haskell somehow assumes that the result of that expression is a Fractional. However, when a variable is used, its type is predefined to be Integer based on the assignment.
My first question is how to fix this. Do I need to cast the expression or convert it somehow?
My second question: it seems really weird to me that you can't do basic math without having to worry so much about int/float types. I mean, there's an obvious way to convert automatically between these, so why am I forced to think about this and deal with it? Am I doing something wrong to begin with?
I am basically looking for a way to easily write simple arithmetic expressions without having to worry about the nitty-gritty details and keeping the code nice and clean. In most high-level languages the compiler works for me -- not the other way around.
If you just want the solution, look at the end.
You nearly answered your own question already. Literals in Haskell are overloaded:
Prelude> :t 3
3 :: Num a => a
Since (*) also has a Num constraint
Prelude> :t (*)
(*) :: Num a => a -> a -> a
this extends to the product:
Prelude> :t 3 * 20
3 * 20 :: Num a => a
So, depending on context, this can be specialized to be of type Int, Integer, Float, Double, Rational and more, as needed. In particular, as Fractional is a subclass of Num, it can be used without problems in a division, but then the constraint will become stronger and be for class Fractional:
Prelude> :t 3 * 20 / 4
3 * 20 / 4 :: Fractional a => a
The big difference is that the identifier c is an Integer. The reason why a simple let binding at the GHCi prompt isn't assigned an overloaded type is the dreaded monomorphism restriction. In short: if you define a value that doesn't have any explicit arguments, then it cannot have an overloaded type unless you provide an explicit type signature. Numeric types are then defaulted to Integer.
Once c is an Integer, the result of the multiplication is Integer, too:
Prelude> :t 3 * c
3 * c :: Integer
And Integer is not in the Fractional class.
There are two solutions to this problem.
Make sure your identifiers have overloaded type, too. In this case, it would be as simple as saying
Prelude> let c :: Num a => a; c = 20
Prelude> :t c
c :: Num a => a
Use fromIntegral to cast an integral value to an arbitrary numeric value:
Prelude> :t fromIntegral
fromIntegral :: (Integral a, Num b) => a -> b
Prelude> let c = 20
Prelude> :t c
c :: Integer
Prelude> :t fromIntegral c
fromIntegral c :: Num b => b
Prelude> 3 * fromIntegral c / 4
15.0
Haskell will never automatically convert one type into another when you pass it to a function. Either it's compatible with the expected type already, in which case no coercion is necessary, or the program fails to compile.
If you write a whole program and compile it, things generally "just work" without you having to think too much about int/float types; so long as you're consistent (i.e. you don't try to treat something as an Int in one place and a Float in another) the constraints just flow through the program and figure out the types for you.
For example, if I put this in a source file and compile it:
main = do
  let c = 20
  let it = 3 * c / 4
  print it
Then everything's fine, and running the program prints 15.0. You can see from the .0 that GHC successfully figured out that c must be some kind of fractional number, and made everything work, without me having to give any explicit type signatures.
c can't be an integer because the / operator is for mathematical division, which isn't defined on integers. The operation of integer division is represented by the div function (usable in operator fashion as x `div` y). I think this might be what is tripping you up in your whole program? This is unfortunately just one of those things you have to learn by getting tripped up by it, if you're used to the situation in many other languages where / is sometimes mathematical division and sometimes integer division.
It's when you're playing around in the interpreter that things get messy, because there you tend to bind values with no context whatsoever. In the interpreter, GHCi has to execute let c = 20 on its own, because you haven't entered 3 * c / 4 yet. It has no way of knowing whether you intend that 20 to be an Int, Integer, Float, Double, Rational, etc.
Haskell will pick a default type for numeric values; otherwise if you never use any functions that only work on one particular type of number you'd always get an error about ambiguous type variables. This normally works fine, because these default rules are applied while reading the whole module and so take into account all the other constraints on the type (like whether you've ever used it with /). But here there are no other constraints it can see, so the type defaulting picks the first cab off the rank and makes c an Integer.
Then, when you ask GHCi to evaluate 3 * c / 4, it's too late. c is an Integer, so must 3 * c be, and Integers don't support /.
So in the interpreter, yes, sometimes if you don't give an explicit type to a let binding GHC will pick an incorrect type, especially with numeric types. After that, you're stuck with whatever operations are supported by the concrete type GHCi picked, but when you get this kind of error you can always rebind the variable; e.g. let c = 20.0.
However I suspect in your real program the problem is simply that the operation you wanted was actually div rather than /.
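If integer division was what the real program needed, here is a short GHCi session contrasting the two routes; the fractional result relies on the usual defaulting to Double:

Prelude> let c = 20 :: Integer
Prelude> 3 * c `div` 4
15
Prelude> 3 * fromIntegral c / 4
15.0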
Haskell is a bit unusual in this way. Yes, you can't divide two integers together, but it's rarely a problem.
The reason is that if you look at the Num typeclass, there's a function fromInteger; this is what allows integer literals to be converted into the appropriate type. This, together with type inference, alleviates 99% of the cases where it'd be a problem. Quick example:
newtype Foo = Foo Integer
  deriving (Show, Eq)

instance Num Foo where
  fromInteger _ = Foo 0
  negate = undefined
  abs    = undefined
  (+)    = undefined
  (-)    = undefined
  (*)    = undefined
  signum = undefined
Now if we load this into GHCi
*> 0 :: Foo
Foo 0
*> 1 :: Foo
Foo 0
So you see, we are able to do some pretty cool things with how GHC interprets a raw integer literal. This has a lot of practical uses in DSLs that we won't talk about here.
Next question was how to get from a Double to an Integer or vice versa. There's a function for that.
In the case of going from an Integer to a Double, we'd use fromInteger as well. Why?
Well the type signature for it is
(Num a) => Integer -> a
and since we can use (+) with Doubles we know they're a Num instance. And from there it's easy.
*> 0 :: Double
0.0
Last piece of the puzzle is Double -> Integer. Well a brief search on Hoogle shows
truncate
floor
round
-- etc ...
I'll leave that to you to search.
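For reference, the three behave differently on the same input; a quick GHCi check using Double inputs:

Prelude> truncate (-3.7 :: Double) :: Integer
-3
Prelude> floor (-3.7 :: Double) :: Integer
-4
Prelude> round (3.5 :: Double) :: Integer
4

Note that round uses round-half-to-even ("banker's rounding"), so round 2.5 is 2.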
Type coercion in Haskell isn't automatic (or rather, it doesn't actually exist). When you write the literal 20 it's inferred to be of type Num a => a (conceptually anyway; I don't think it works quite like that) and will, depending on the context in which it is used (i.e. what functions you pass it to), be instantiated with an appropriate type (I believe if no further constraints are applied, this will default to Integer when you need a concrete type at some point). If you need a different kind of Num, you need to convert the numbers, e.g. 3 * fromIntegral c / 4 in your example.
The type of (/) is Fractional a => a -> a -> a.
To divide Integers, use div instead of (/). Note that the type of div is
div :: Integral a => a -> a -> a
In most high-level languages the compiler works for me -- not the other way around.
I argue that the Haskell compiler works for you just as much, if not more so, than those of other languages you have used. Haskell is a very different language than the traditional imperative languages (such as C, C++, Java, etc.) you are probably used to. This means that the compiler works differently as well.
As others have stated, Haskell will never automatically coerce from one type to another. If you have an Integer which needs to be used as a Float, you need to do the conversion explicitly with fromInteger.

Specific type inference using uncurry function

I've been playing with the uncurry function in GHCi and I've found something I couldn't quite get at all. When I apply uncurry to the (+) function and bind that to some variable like in the code below, the compiler infers its type to be specific to Integer:
Prelude> let add = uncurry (+)
Prelude> :t add
add :: (Integer, Integer) -> Integer
However, when I ask for the type of the following expression I get (what I expect to be) the correct result:
Prelude> :t uncurry (+)
uncurry (+) :: (Num a) => (a, a) -> a
What would cause that? Is it particular to GHCi?
The same applies to let add' = (+).
NOTE: I could not reproduce that using a compiled file.
This has nothing to do with ghci. This is the monomorphism restriction being irritating. If you try to compile the following file:
add = uncurry (+)

main = do
  print $ add (1,2 :: Int)
  print $ add (1,2 :: Double)
You will get an error. If you expand:
main = do
  print $ uncurry (+) (1,2 :: Int)
  print $ uncurry (+) (1,2 :: Double)
Everything is fine, as expected. The monomorphism restriction refuses to make something that "looks like a value" (i.e. defined with no arguments on the left-hand side of the equals) typeclass polymorphic, because that would defeat the caching that would normally occur. Eg.
foo :: Integer
foo = expensive computation
bar :: (Num a) => a
bar = expensive computation
foo is guaranteed only to be computed once (well, in GHC at least), whereas bar will be computed every time it is mentioned. The monomorphism restriction seeks to save you from the latter case by defaulting to the former when it looks like that's what you wanted.
If you only use the function once (or always at the same type), type inference will take care of inferring the right type for you. In that case ghci is doing something slightly different by guessing sooner. But using it at two different types shows what is going on.
When in doubt, use a type signature (or turn off the wretched thing with {-# LANGUAGE NoMonomorphismRestriction #-}).
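Concretely, giving add a polymorphic signature lets the two-type file above compile; a minimal sketch:

add :: Num a => (a, a) -> a
add = uncurry (+)

main :: IO ()
main = do
  print $ add (1, 2 :: Int)     -- 3
  print $ add (1, 2 :: Double)  -- 3.0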
There's magic involved in ghci's extended defaulting rules. Basically, among other things, Num constraints get defaulted to Integer and Floating constraints to Double, when otherwise there would be an error (in this case, due to the evil monomorphism restriction).

Non-negative integers [duplicate]

Say I have a function prototype as follows:
func :: [Int] -> [Int]
How is it possible to enforce only a non-negative list of integers as input arguments? I would have to change the param type from [Int] to what.. ? At the moment it works with func [-1,-2]; I only want it to work with [1,2], i.e. with the interpreter spewing an error message otherwise.
newtype NonNegative a = NonNegative a

toNonNegative :: (Num a, Ord a) => a -> NonNegative a
toNonNegative x
  | x < 0     = error "Only non-negative values are allowed."
  | otherwise = NonNegative x

fromNonNegative :: NonNegative a -> a
fromNonNegative (NonNegative x) = x
Just be careful to never use the NonNegative constructor directly. This will be easier if you put this in a separate module and don't export it.
Also, now you can use (map toNonNegative) to lazily transform a list of numbers.
This will still require a runtime check wherever you inject raw numbers.
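As a usage sketch (assuming the NonNegative definitions above are in scope; the doubling body of func is made up purely for illustration):

-- Callers convert at the boundary with toNonNegative.
func :: [NonNegative Int] -> [Int]
func = map ((* 2) . fromNonNegative)

-- func (map toNonNegative [1, 2])   evaluates to [2, 4]
-- func (map toNonNegative [-1, 2])  hits the runtime error as soon as the
--                                   first element is forced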
Alternatively, you can use Data.Word.
Have you tried http://hackage.haskell.org/package/non-negative ?
You could use Peano numbers, changing your function's type to [Peano] -> .... But then you will have to add conversion functions from integers to peano numbers and back whenever you call your function.
Or you could add a runtime check:
func xs
  | any (< 0) xs = error "only non-negative integers allowed as input"
  | otherwise    = ...
Note that the latter solution makes your function strict.
The wiki page on smart constructors may give you some idea.
In the base package with version >= 4.8.0.0, which is included in GHC 7.10.1 and above, there is now a type Natural which does what you want - you can just change your code to:
import Numeric.Natural (Natural)
func :: [Natural] -> [Int]
It is, however, closer to Integer than to Int, because like Integer and unlike Int, it has no maximum value.
Because Natural, like Integer, is an instance of Num and Integral, all the same arithmetic operations and conversion functions are available as you get with Integer. Attempts to calculate a negative Natural will throw an Underflow at runtime, which is an ArithException. Also, conveniently, you can create a Natural using just an integer literal, without a conversion:
GHCi, version 8.0.2: http://www.haskell.org/ghc/ :? for help
Prelude> :m +Numeric.Natural
Prelude Numeric.Natural> 2 :: Natural
2
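Continuing the session, trying to go below zero throws at runtime instead of wrapping (the exact message can vary between GHC versions):

Prelude Numeric.Natural> it - 4
*** Exception: arithmetic underflow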
However, if you would prefer to stay in the domain of fixed-size integers, there is a solution for that, too - and it has been around for far longer - Word from the module Data.Word (which also contains e.g. Word8 for 8-bit non-negative integers). You would use Word in the same way as Natural. However, be warned - the Word types will silently underflow, without throwing an exception:
GHCi, version 8.0.2: http://www.haskell.org/ghc/ :? for help
Prelude> :m +Data.Word
Prelude Data.Word> 2 :: Word
2
Prelude Data.Word> it - 4
18446744073709551614
