Haskell type `forall a . (Num a, Integral a) => a` for an integer constant of many types

When writing FFI code in Haskell, I often have Int and CInt variables mixed together. I tried to define a type synonym Intlike to help with defining constants that can be represented as values of either type, as follows:
type Intlike = forall a . (Num a, Integral a) => a
floatSize :: Intlike = fromIntegral $ sizeOf (1 :: CFloat)
Then GHCi complains like so:
Fractal.hs:276:24-35: No instance for (Num Intlike) arising from a use of ‘fromIntegral’ …
    In the expression: fromIntegral
    In the expression: fromIntegral $ sizeOf (1 :: CFloat)
    In a pattern binding:
      floatSize :: Intlike = fromIntegral $ sizeOf (1 :: CFloat)
Compilation failed.
(This is with the Rank2Types language extension.)
The following, however, works:
floatSize :: (Num a, Integral a) => a
floatSize = fromIntegral $ sizeOf (1 :: CFloat)
Is there a good solution that doesn't have me write fromIntegral all the time? What is the difference between Intlike and the one that works? They look similar.

I can't tell you why, but if you write
floatSize :: Intlike
floatSize = fromIntegral $ sizeOf (1 :: CFloat)
then it works just fine. One possibility is that the type annotation on the pattern variable is doing something other than what you expected (I've never really understood what those do).

Note that your Num context is redundant, because Integral is a subclass of Num. As for fromIntegral, you will need that just about any time you need to switch between integral types.

Another option is to use "generic" functions that give you what you need. For example, you could define
import Foreign.Storable (sizeOf, Storable)
genericSizeOf :: (Storable a, Integral b) => a -> b
genericSizeOf = fromIntegral . sizeOf
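With that in hand, the original constant can be written without a visible fromIntegral (a small usage sketch, assuming Foreign.C.Types is imported for CFloat):

floatSize :: Integral b => b
floatSize = genericSizeOf (undefined :: CFloat)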
Side note: when using functions like sizeOf that take an argument solely for its type, I personally prefer to use undefined rather than an arbitrary value. So I'd write sizeOf (undefined :: CInt) rather than sizeOf (1 :: CInt). This makes it clear that it doesn't matter what value I'm passing in, and reduces the mental clutter of "What is this 1? What would happen if I changed it to 2?"
In the comments below, Boyd Stephen Smith, Jr., mentions another approach that is apparently to be preferred, although I have not yet read up on it.

Related

Haskell syb Data.Generics not working as expected

At a GHCi prompt, everywhere (mkT (\x -> 2 * x)) (8.7, 21, "word") evaluates to (8.7, 42, "word").
I expected the 8.7 to be doubled as well. Why am I wrong?
This is the result of mkT monomorphizing its argument in this particular case, but it turns out there's no broader way to address the issue. mkT isn't doing anything wrong.
It's worth looking first at why everywhere (* 2) doesn't type-check.
ghci> :t everywhere
everywhere
:: (forall a. Data a => a -> a) -> forall a. Data a => a -> a
ghci> :t (* 2)
(* 2) :: Num a => a -> a
ghci> :t everywhere (* 2)
<interactive>:1:13: error:
    • Could not deduce (Num a) arising from a use of ‘*’
      from the context: Data a
        bound by a type expected by the context:
                   forall a. Data a => a -> a
        at <interactive>:1:12-16
      Possible fix:
        add (Num a) to the context of
          a type expected by the context:
            forall a. Data a => a -> a
    • In the expression: (*)
      In the first argument of ‘everywhere’, namely ‘(* 2)’
      In the expression: everywhere (* 2)
everywhere has a higher-rank type - the first forall a. is inside the parentheses. I kind of dislike documenting the type that way - it uses a as a type variable in two completely separate ways. But there are two different scopes, and that matters. What it's saying is that any function passed to it must be polymorphic over all instances of Data.
But the type of (* 2) doesn't match up there. It won't work with any instance of Data. It requires more - it requires that it be provided an instance of Num. So the error message dutifully reports that it can't deduce (Num a) from the context Data a. So this isn't going to work. The pieces don't fit together.
This is where mkT comes into play:
ghci> :t mkT
mkT :: (Typeable a, Typeable b) => (b -> b) -> a -> a
Its type is a bit funny. It looks almost like it does nothing at all, but Typeable is a funny class. mkT actually compares a and b for type equality, using those Typeable constraints. If they're the same, it applies the function you provided. Otherwise, it just acts as the identity function.
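To make that concrete, here is a minimal sketch of how such a function can be written using cast from Data.Typeable (the real mkT in syb is defined essentially this way):

import Data.Maybe (fromMaybe)
import Data.Typeable (Typeable, cast)

-- Apply f when the two types line up; otherwise behave as the identity.
mkT' :: (Typeable a, Typeable b) => (b -> b) -> a -> a
mkT' f = fromMaybe id (cast f)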
What it does when it's applied to a function is where things are going wrong for you:
ghci> :t mkT (* 2)
mkT (* 2) :: Typeable a => a -> a
It's still polymorphic in a, but the b it used to have has vanished. It had to pick a specific type b to work against, and it did that by defaulting to Integer. (See ghc's extended defaulting rules for details on how that works in ghci.) So...
ghci> mkT (* 2) 3.5
3.5
ghci> mkT (* 2) 7
14
ghci> mkT (* 2) (7 :: Int)
7
At the type level, mkT has to monomorphize its argument. That's the only way it can make use of the Typeable constraint when used in a context where a relevant variable no longer appears in its type.
(To tie the loop back to everywhere, the reason mkT (* 2) works as an argument to everywhere is because Data is a subclass of Typeable. The Data constraint implies that the Typeable requirement will be satisfied.)
So what can you do about this? Well, it's impossible to write it truly generically because of Haskell's open world assumption. Anywhere in the program, any type might be declared an instance of Num with arbitrary implementations of (*) and fromInteger. In order to work with everywhere, there would need to be some mechanism to go from knowing something is an instance of Data to looking up its Num instance. This just isn't possible at run time. Types have been erased. There may be some residues like Typeable dictionaries being carried around, but they don't provide any means to look up other instance dictionaries.

And while you might be able to envision a language where that sort of lookup is possible, it actually would be very harmful to allow it in Haskell. It would invalidate the ability to reason about types parametrically, which would be a giant loss.
The best you can do is write transformation functions that work on multiple types:
ghci> let f = mkT (* (2 :: Int)) . mkT (* (2 :: Double)) . mkT (* (2 :: Integer))
ghci> f 5
10
ghci> f 2.7
5.4
ghci> f (9 :: Int)
18
ghci> f "hello"
"hello"
It's verbose and you can probably write something better by hand if you so desire. But it at least works, at least to some extent. And it doesn't require breaking foundational assumptions in the language design, which is always a bonus.
Here is a simplification of your case that doesn't use any Data stuff.
module MyModule where
dbl x = 2 * x
myId :: (a->a) -> a -> a
myId f = f
myDbl = myId dbl
Don't type this at the GHCi prompt; rather, create a .hs file and load it.
Now check what type myDbl has.
Prelude> :l MyModule
[1 of 1] Compiling MyModule ( MyModule.hs, interpreted )
Ok, one module loaded.
*MyModule> :t MyModule.myDbl
MyModule.myDbl :: Integer -> Integer
Surprise! Why does it compile at all? And why the weird type?
Because of the defaulting rules. (Basically, "if you don't know what to do with Num a, just use Integer"). Since myId cannot deal with dbl :: Num a => a -> a, Haskell allows it to take the Integer version.
Disable defaulting by adding default () at the top, and this module no longer compiles.
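Concretely, the modified module would look like this (a sketch; with the empty default declaration in place, the myDbl binding is rejected):

module MyModule where

default ()          -- an empty default declaration turns off numeric defaulting

dbl x = 2 * x

myId :: (a -> a) -> a -> a
myId f = f

myDbl = myId dbl    -- error: the ambiguous Num constraint can no longer default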
mkT is no different from myId in this respect.

Problems With Type Inference on (^)

So, I'm trying to write my own replacement for Prelude, and I have (^) implemented as such:
{-# LANGUAGE RebindableSyntax #-}

class Semigroup s where
    infixl 7 *
    (*) :: s -> s -> s

class (Semigroup m) => Monoid m where
    one :: m

class (Ring a) => Numeric a where
    fromIntegral :: (Integral i) => i -> a
    fromFloating :: (Floating f) => f -> a

class (EuclideanDomain i, Numeric i, Enum i, Ord i) => Integral i where
    toInteger :: i -> Integer
    quot :: i -> i -> i
    quot a b = let (q,r) = (quotRem a b) in q
    rem :: i -> i -> i
    rem a b = let (q,r) = (quotRem a b) in r
    quotRem :: i -> i -> (i, i)
    quotRem a b = let q = quot a b; r = rem a b in (q, r)

-- . . .

infixr 8 ^
(^) :: (Monoid m, Integral i) => m -> i -> m
(^) x i
    | i == 0    = one
    | True      = let (d, m) = (divMod i 2)
                      rec = (x*x) ^ d
                  in if m == one then x*rec else rec
(Note that the Integral used here is one I defined, not the one in Prelude, although it is similar. Also, one is a polymorphic constant that's the identity under the monoidal operation.)
Numeric types are monoids, so I can try to do, say 2^3, but then the typechecker gives me:
*AlgebraicPrelude> 2^3
<interactive>:16:1: error:
    * Could not deduce (Integral i0) arising from a use of `^'
      from the context: Numeric m
        bound by the inferred type of it :: Numeric m => m
        at <interactive>:16:1-3
      The type variable `i0' is ambiguous
      These potential instances exist:
        instance Integral Integer -- Defined at Numbers.hs:190:10
        instance Integral Int -- Defined at Numbers.hs:207:10
    * In the expression: 2 ^ 3
      In an equation for `it': it = 2 ^ 3

<interactive>:16:3: error:
    * Could not deduce (Numeric i0) arising from the literal `3'
      from the context: Numeric m
        bound by the inferred type of it :: Numeric m => m
        at <interactive>:16:1-3
      The type variable `i0' is ambiguous
      These potential instances exist:
        instance Numeric Integer -- Defined at Numbers.hs:294:10
        instance Numeric Complex -- Defined at Numbers.hs:110:10
        instance Numeric Rational -- Defined at Numbers.hs:306:10
        ...plus four others
        (use -fprint-potential-instances to see them all)
    * In the second argument of `(^)', namely `3'
      In the expression: 2 ^ 3
      In an equation for `it': it = 2 ^ 3
I get that this arises because Int and Integer are both Integral types, but then why is it that in the normal Prelude I can do this just fine:
Prelude> :t (2^)
(2^) :: (Num a, Integral b) => b -> a
Prelude> :t 3
3 :: Num p => p
Prelude> 2^3
8
Even though the signatures for partial application in mine look identical?
*AlgebraicPrelude> :t (2^)
(2^) :: (Numeric m, Integral i) => i -> m
*AlgebraicPrelude> :t 3
3 :: Numeric a => a
How would I make it so that 2^3 would in fact work, and thus give 8?
A Hindley-Milner type system doesn't really like having to default anything. In such a system, you want types to be either properly fixed (rigid, skolem) or properly polymorphic, but the concept of “this is, like, an integer... but if you prefer, I can also cast it to something else” as many other languages have doesn't really work out.
Consequently, Haskell sucks at defaulting. It doesn't have first-class support for that, only a pretty hacky ad-hoc, hard-coded mechanism which mainly deals with built-in number types, but fails at anything more involved.
You therefore should try to not rely on defaulting. My opinion is that the standard signature for ^ is unreasonable; a better signature would be
(^) :: Num a => a -> Int -> a
The Int is probably controversial – of course Integer would be safer in a sense; however, an exponent too big to fit in Int generally means the results will be totally off the scale anyway and couldn't feasibly be calculated by iterated multiplication, so this kind of expresses the intent pretty well. And it gives the best performance for the extremely common situation where you just write x^2 or similar, which is exactly where you very definitely don't want to have to put an extra signature on the exponent.
In the rather fewer cases where you have a concrete e.g. Integer number and want to use it in the exponent, you can always shove in an explicit fromIntegral. That's not nice, but rather less of an inconvenience.
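For example, under that proposed signature (a hypothetical sketch; the names square and powI are made up for illustration):

-- supposing (^) :: Num a => a -> Int -> a
square :: Double -> Double
square x = x ^ 2                        -- the common case needs no annotations

powI :: Double -> Integer -> Double
powI x e = x ^ (fromIntegral e :: Int)  -- the rarer case pays one explicit conversion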
As a general rule, I try to avoid† any function arguments that are more polymorphic than the results. Haskell's polymorphism works best "backwards", i.e. the opposite way from dynamic languages: the caller requests what type the result should be, and the compiler figures out from this what the arguments should be. This works pretty much always, because as soon as the result is somehow used in the main program, the types in the whole computation have to be linked in a tree structure.
OTOH, inferring the type of the result is often problematic: arguments may be optional, may themselves be linked only to the result, or be given as polymorphic constants like Haskell number literals. So, if i doesn't turn up in the result of ^, avoid letting it occur in the arguments either.
†“Avoid” doesn't mean I don't ever write them, I just don't do so unless there's a good reason.

When are type signatures necessary in Haskell?

Many introductory texts will tell you that in Haskell type signatures are "almost always" optional. Can anybody quantify the "almost" part?
As far as I can tell, the only time you need an explicit signature is to disambiguate type classes. (The canonical example being read . show.) Are there other cases I haven't thought of, or is this it?
(I'm aware that if you go beyond Haskell 2010 there are plenty of exceptions. For example, GHC will never infer rank-N types. But rank-N types are a language extension, not part of the official standard [yet].)
Polymorphic recursion needs type annotations, in general.
f :: (a -> a) -> (a -> b) -> Int -> a -> b
f f1 g n x =
    if n == (0 :: Int)
    then g x
    else f f1 (\z h -> g (h z)) (n-1) x f1
(Credit: Patrick Cousot)
Note how the recursive call looks badly typed (!): it calls itself with five arguments, despite f having only four! Then remember that b can be instantiated with c -> d, which causes an extra argument to appear.
The above contrived example computes
f f1 g n x = g (f1 (f1 (f1 ... (f1 x))))
where f1 is applied n times. Of course, there is a much simpler way to write an equivalent program.
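One such simpler equivalent (a sketch that avoids the polymorphic recursion entirely):

f' :: (a -> a) -> (a -> b) -> Int -> a -> b
f' f1 g n x = g (iterate f1 x !! n)   -- apply f1 to x n times, then apply g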
Monomorphism restriction
If you have MonomorphismRestriction enabled, then sometimes you will need to add a type signature to get the most general type:
{-# LANGUAGE MonomorphismRestriction #-}

-- myPrint :: Show a => a -> IO ()
myPrint = print

main = do
    myPrint ()
    myPrint "hello"
This will fail because myPrint is monomorphic. You would need to uncomment the type signature to make it work, or disable MonomorphismRestriction.
Phantom constraints
When you put a polymorphic value with a constraint into a tuple, the tuple itself becomes polymorphic and has the same constraint:
myValue :: Read a => a
myValue = read "0"
myTuple :: Read a => (a, String)
myTuple = (myValue, "hello")
We know that the constraint affects the first part of the tuple but does not affect the second part. The type system doesn't know that, unfortunately, and will complain if you try to do this:
myString = snd myTuple
Even though intuitively one would expect myString to be just a String, the type checker needs to specialize the type variable a and figure out whether the constraint is actually satisfied. In order to make this expression work, one would need to annotate the type of either snd or myTuple:
myString = snd (myTuple :: ((), String))
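The other option, annotating snd instead of myTuple, would look like this (an equivalent sketch):

myString = (snd :: ((), String) -> String) myTuple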
In Haskell, as I'm sure you know, types are inferred. In other words, the compiler works out what type you want.
However, in Haskell, there are also polymorphic typeclasses, with functions that act in different ways depending on the return type. Here's an example of the Monad class, though I haven't defined everything:
class Monad m where
    return :: a -> m a
    (>>=) :: m a -> (a -> m b) -> m b
    fail :: String -> m a
We're given a lot of functions with just type signatures. Our job is to make instance declarations for different types that can be treated as Monads, like Maybe t or [t].
Have a look at this code - it won't work in the way we might expect:
return 7
That's a function from the Monad class, but because there's more than one Monad, we have to specify what return type we want; otherwise (at the GHCi prompt, say) it defaults to the IO monad. So:
return 7 :: Maybe Int
-- Will return...
Just 7
return 6 :: [Int]
-- Will return...
[6]
This is because [t] and Maybe have both been made instances of the Monad type class.
Here's another example, this time with the Random class. This code throws an error:
random (mkStdGen 100)
Because random can return any type in the Random class, we have to specify what type we want returned, tupled with a StdGen object:
random (mkStdGen 100) :: (Int, StdGen)
-- Returns...
(-3650871090684229393,693699796 2103410263)
random (mkStdGen 100) :: (Bool, StdGen)
-- Returns...
(True,4041414 40692)
This can all be found in Learn You a Haskell online, though you'll have to do some long reading. This, I'm pretty much 100% certain, is the only time when type signatures are necessary.

Haskell get type of algebraic parameter

I have a type
class IntegerAsType a where
    value :: a -> Integer

data T5
instance IntegerAsType T5 where value _ = 5

newtype (IntegerAsType q) => Zq q = Zq Integer deriving (Eq)
newtype (Num a, IntegerAsType n) => PolyRing a n = PolyRing [a]
I'm trying to make a nice "show" for the PolyRing type. In particular, I want the "show" to print out the type 'a'. Is there a function that returns the type of an algebraic parameter (a 'show' for types)?
The other way I'm trying to do it is using pattern matching, but I'm running into problems with built-in types and the algebraic type.
I want a different result for each of Integer, Int and Zq q.
(toy example:)
test :: (Num a, IntegerAsType q) => a -> a
test (Int x) = x+1
test (Integer x) = x+2
test (Zq x) = x+3
There are at least two different problems here.
1) Int and Integer are not data constructors for the 'Int' and 'Integer' types. Are there data constructors for these types/how do I pattern match with them?
2) Although not shown in my code, Zq IS an instance of Num. The problem I'm getting is:
Ambiguous constraint `IntegerAsType q'
At least one of the forall'd type variables mentioned by the constraint
must be reachable from the type after the '=>'
In the type signature for `test':
test :: (Num a, IntegerAsType q) => a -> a
I kind of see why it is complaining, but I don't know how to get around that.
Thanks
EDIT:
A better example of what I'm trying to do with the test function:
test :: (Num a) => a -> a
test (Integer x) = x+2
test (Int x) = x+1
test (Zq x) = x
Even if we ignore the fact that I can't construct Integers and Ints this way (still want to know how!) this 'test' doesn't compile because:
Could not deduce (a ~ Zq t0) from the context (Num a)
My next try at this function was with the type signature:
test :: (Num a, IntegerAsType q) => a -> a
which leads to the new error
Ambiguous constraint `IntegerAsType q'
At least one of the forall'd type variables mentioned by the constraint
must be reachable from the type after the '=>'
I hope that makes my question a little clearer....
I'm not sure what you're driving at with that test function, but you can do something like this if you like:
{-# LANGUAGE ScopedTypeVariables #-}
class NamedType a where
    name :: a -> String

instance NamedType Int where
    name _ = "Int"

instance NamedType Integer where
    name _ = "Integer"

instance NamedType q => NamedType (Zq q) where
    name _ = "Zq (" ++ name (undefined :: q) ++ ")"
I would not be doing my Stack Overflow duty if I did not follow up this answer with a warning: what you are asking for is very, very strange. You are probably doing something in a very unidiomatic way, and will be fighting the language the whole way. I strongly recommend that your next question be a much broader design question, so that we can help guide you to a more idiomatic solution.
Edit
There is another half to your question, namely, how to write a test function that "pattern matches" on the input to check whether it's an Int, an Integer, a Zq type, etc. You provide this suggestive code snippet:
test :: (Num a) => a -> a
test (Integer x) = x+2
test (Int x) = x+1
test (Zq x) = x
There are a couple of things to clear up here.
Haskell has three levels of objects: the value level, the type level, and the kind level. Some examples of things at the value level include "Hello, world!", 42, the function \a -> a, or fix (\xs -> 0:1:zipWith (+) xs (tail xs)). Some examples of things at the type level include Bool, Int, Maybe, Maybe Int, and Monad m => m (). Some examples of things at the kind level include * and (* -> *) -> *.
The levels are in order; value level objects are classified by type level objects, and type level objects are classified by kind level objects. We write the classification relationship using ::, so for example, 32 :: Int or "Hello, world!" :: [Char]. (The kind level isn't too interesting for this discussion, but * classifies types, and arrow kinds classify type constructors. For example, Int :: * and [Int] :: *, but [] :: * -> *.)
Now, one of the most basic properties of Haskell is that each level is completely isolated. You will never see a string like "Hello, world!" in a type; similarly, value-level objects don't pass around or operate on types. Moreover, there are separate namespaces for values and types. Take the example of Maybe:
data Maybe a = Nothing | Just a
This declaration creates a new name Maybe :: * -> * at the type level, and two new names Nothing :: Maybe a and Just :: a -> Maybe a at the value level. One common pattern is to use the same name for a type constructor and for its value constructor, if there's only one; for example, you might see
newtype Wrapped a = Wrapped a
which declares a new name Wrapped :: * -> * at the type level, and simultaneously declares a distinct name Wrapped :: a -> Wrapped a at the value level. Some particularly common (and confusing) examples include (), which is both a value-level object (of type ()) and a type-level object (of kind *), and [], which is both a value-level object (of type [a]) and a type-level object (of kind * -> *). Note that the fact that the value-level and type-level objects happen to be spelled the same in your source is just a coincidence! If you wanted to confuse your readers, you could perfectly well write
newtype Huey a = Louie a
newtype Louie a = Dewey a
newtype Dewey a = Huey a
where none of these three declarations are related to each other at all!
Now, we can finally tackle what goes wrong with test above: Integer and Int are not value constructors, so they can't be used in patterns. Remember -- the value level and type level are isolated, so you can't put type names in value definitions! By now, you might wish you had written test' instead:
test' :: Num a => a -> a
test' (x :: Integer) = x + 2
test' (x :: Int) = x + 1
test' (Zq x :: Zq a) = x
...but alas, it doesn't quite work like that. Value-level things aren't allowed to depend on type-level things. What you can do is to write separate functions at each of the Int, Integer, and Zq a types:
testInteger :: Integer -> Integer
testInteger x = x + 2
testInt :: Int -> Int
testInt x = x + 1
testZq :: Num a => Zq a -> Zq a
testZq (Zq x) = Zq x
Then we can call the appropriate one of these functions when we want to do a test. Since we're in a statically-typed language, exactly one of these functions is going to be applicable to any particular variable.
Now, it's a bit onerous to remember to call the right function, so Haskell offers a slight convenience: you can let the compiler choose one of these functions for you at compile time. This mechanism is the big idea behind classes. It looks like this:
class Testable a where test :: a -> a
instance Testable Integer where test = testInteger
instance Testable Int where test = testInt
instance Num a => Testable (Zq a) where test = testZq
Now, it looks like there's a single function called test which can handle any of Int, Integer, or numeric Zq's -- but in fact there are three functions, and the compiler is transparently choosing one for you. And that's an important insight. The type of test:
test :: Testable a => a -> a
...looks at first blush like it is a function that takes a value that could be any Testable type. But in fact, it's a function that can be specialized to any Testable type -- and then only takes values of that type! This difference explains yet another reason the original test function didn't work. You can't have multiple patterns with variables at different types, because the function only ever works on a single type at a time.
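To see the compiler making that choice (a hypothetical session, with the definitions above loaded):

*Main> test (3 :: Int)
4
*Main> test (3 :: Integer)
5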
The ideas behind the classes NamedType and Testable above can be generalized a bit; if you do, you get the Typeable class suggested by hammar above.
I think now I've rambled more than enough, and likely confused more things than I've clarified, but leave me a comment saying which parts were unclear, and I'll do my best.
Is there a function that returns the type of an algebraic parameter (a 'show' for types)?
I think Data.Typeable may be what you're looking for.
Prelude> :m + Data.Typeable
Prelude Data.Typeable> typeOf (1 :: Int)
Int
Prelude Data.Typeable> typeOf (1 :: Integer)
Integer
Note that this will not work on any type, just those which have a Typeable instance.
Using the extension DeriveDataTypeable, you can have the compiler automatically derive these for your own types:
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Typeable
data Foo = Bar
    deriving Typeable
*Main> typeOf Bar
Main.Foo
I didn't quite get what you're trying to do in the second half of your question, but hopefully this should be of some help.

I don't understand number conversions in Haskell

Here is what I'm trying to do:
isPrime :: Int -> Bool
isPrime x = all (\y -> x `mod` y /= 0) [3, 5..floor(sqrt x)]
(I know I'm not checking for division by two--please ignore that.)
Here's what I get:
No instance for (Floating Int)
arising from a use of `sqrt'
Possible fix: add an instance declaration for (Floating Int)
In the first argument of `floor', namely `(sqrt x)'
In the expression: floor (sqrt x)
In the second argument of `all', namely `[3, 5 .. floor (sqrt x)]'
I've spent literally hours trying everything I can think of to make this list using some variant of sqrt, including nonsense like
intSqrt :: Int -> Int
intSqrt x = floor (sqrt (x + 0.0))
It seems that (sqrt 500) works fine but (sqrt x) insists on x being a Floating (why?), and there is no function I can find to convert an Int to a real (why?).
I don't want a method to test primality, I want to understand how to fix this. Why is this so hard?
Unlike most other languages, Haskell distinguishes strictly between integral and floating-point types, and will not convert one to the other implicitly. See here for how to do the conversion explicitly. There's even a sqrt example :-)
The underlying reason for this is that the combination of implicit conversions and Haskell's (rather complex but very cool) class system would make type reconstruction very difficult -- probably it would stretch it beyond the point where it can be done by machines at all. The language designers felt that getting type classes for arithmetic was worth the cost of having to specify conversions explicitly.
Your issue is that, although you've tried to fix it in a variety of ways, you haven't tried to do anything to x, which is exactly where your problem lies. Let's look at the type of sqrt:
Prelude> :t sqrt
sqrt :: (Floating a) => a -> a
On the other hand, x is an Int, and if we ask GHCi for information about Floating, it tells us:
Prelude> :info Floating
class (Fractional a) => Floating a where
    pi :: a
    <...snip...>
    acosh :: a -> a
        -- Defined in GHC.Float
instance Floating Float -- Defined in GHC.Float
instance Floating Double -- Defined in GHC.Float
So the only types which are Floating are Floats and Doubles. We need a way to convert an Int to a Double, much as floor :: (RealFrac a, Integral b) => a -> b goes the other direction. Whenever you have a type question like this, you can ask Hoogle, a Haskell search engine which searches types. Unfortunately, if you search for Int -> Double, you get lousy results. But what if we relax what we're looking for? If we search for Integer -> Double, we find that there's a function fromInteger :: Num a => Integer -> a, which is almost exactly what you want. And if we relax our type all the way to (Integral a, Num b) => a -> b, you find that there is a function fromIntegral :: (Integral a, Num b) => a -> b.
Thus, to compute the square root of an integer, use floor . sqrt $ fromIntegral x, or use
isqrt :: Integral i => i -> i
isqrt = floor . sqrt . fromIntegral
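For instance, with isqrt defined as above (a hypothetical session):

*Main> isqrt (500 :: Int)
22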
You were thinking about the problem in the right direction for the output of sqrt; it returned a floating-point number, but you wanted an integer. In Haskell, however, there's no notion of subtyping or implicit casts, so you need to alter the input to sqrt as well.
To address some of your other concerns:
intSqrt :: Int -> Int
intSqrt x = floor (sqrt (x + 0.0))
You call this "nonsense", so it's clear you don't expect it to work, but why doesn't it? Well, the problem is that (+) has type Num a => a -> a -> a—you can only add two things of the same type. This is generally good, since it means you can't add a complex number to a 5×5 real matrix; however, since 0.0 must be an instance of Fractional, you won't be able to add it to x :: Int.
It seems that (sqrt 500) works fine…
This works because the type of 500 isn't what you expect. Let's ask our trusty companion GHCi:
Prelude> :t 500
500 :: (Num t) => t
In fact, all integer literals have this type; they can be any sort of number, which works because the Num class contains the function fromInteger :: Integer -> a. So when you wrote sqrt 500, GHC realized that 500 needed to satisfy 500 :: (Num t, Floating t) => t (and it implicitly picks Double for numeric types like that thanks to the defaulting rules). Similarly, the 0.0 above has type Fractional t => t, thanks to Fractional's fromRational :: Rational -> a function.
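In other words, the literal elaborates to a fromInteger call (a sketch of the desugaring, with the defaulted type written out):

-- sqrt 500 is effectively
sqrtOf500 :: Double
sqrtOf500 = sqrt (fromInteger 500 :: Double)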
… but (sqrt x) insists on x being a Floating …
See above, where we look at the type of sqrt.
… and there is no function I can find to convert an Int to a real ….
Well, you have one now: fromIntegral. I don't know why you couldn't find it; apparently Hoogle gives much worse results than I was expecting, thanks to the generic type of the function.
Why is this so hard?
I hope it isn't anymore, now that you have fromIntegral.
