Disclaimer: I'm a total Isabelle beginner.
I'm trying to export the "sqrt" function or rather functions and definitions using "sqrt" to Haskell. My first try was just:
theory Scratch
imports Complex_Main
begin
definition val :: "real" where "val = sqrt 4"
export_code val in Haskell
end
Which resulted in the following error:
Wellsortedness error
(in code equation root ?n ?x ≡
if equal_nat_inst.equal_nat ?n zero_nat_inst.zero_nat then zero_real_inst.zero_real
else the_inv_into top_set_inst.top_set
(λy. times_real_inst.times_real (sgn_real_inst.sgn_real y)
(abs_real_inst.abs_real y ^ ?n))
?x,
with dependency "val" -> "sqrt" -> "root"):
Type real not of sort {enum,equal}
No type arity real :: enum
So I tried to replace "sqrt" with Haskell's "Prelude.sqrt":
code_printing
constant sqrt ⇀ (Haskell) "Prelude.sqrt _"
export_code val in Haskell
This still resulted in the same error, which seems rather odd to me, because replacing "plus" with some arbitrary function "f" seems to be fine:
definition val' :: "nat" where "val' = plus 49 1"
code_printing
constant plus ⇀ (Haskell) "_ `f` _"
export_code val' in Haskell
How do I resolve this issue?
I'm not sure about the code_printing issue, but what do you expect to happen here? A wellsortedness error during code generation usually means that what you're trying to export is simply not computable (or at least Isabelle doesn't know how to compute it).
What do you expect something like sqrt 2 to compile to in Haskell? What about sqrt pi? You cannot hope to generate executable code for all real numbers. Isabelle's default implementation restricts itself to rational numbers.
Using code_printing to replace Isabelle's sqrt with Haskell's sqrt is only going to give you a type error, since Haskell's sqrt works on floating point numbers and not on Isabelle's exported real type.
There is a file in ~~/src/HOL/Library/Code_Real_Approx_By_Float that maps Isabelle's operations on real numbers to floating point approximations in Standard ML and OCaml, but this is for experimentation only, since you lose all correctness guarantees if you do that sort of thing.
Lastly, there is an entry in the Archive of Formal Proofs that provides exact executable algebraic real numbers, so that you can do at least some operations with square root etc., but this is a big piece of work and the performance can be pretty bad in some cases.
There is also a sqrt operation on natural numbers in Isabelle (i.e. it rounds down) in ~~/src/HOL/Library/Discrete, and that can easily be exported to Haskell.
In the AFP there also is an entry Sqrt_Babylonian, which contains algorithms to compute sqrt up to a given precision epsilon > 0, without any floating point rounding errors.
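The underlying idea, expressed directly in Haskell over exact Rationals, looks roughly like this (a minimal sketch of the Babylonian/Newton iteration, not the AFP formalisation itself; sqrtApprox is my own name):
-- iterate y' = (y + x/y) / 2 on exact Rationals until two successive
-- approximations differ by less than eps; no floating point is involved
-- (assumes eps > 0 and x >= 0)
sqrtApprox :: Rational -> Rational -> Rational
sqrtApprox eps x = go (max 1 x)
  where
    go y
      | abs (y - y') < eps = y'
      | otherwise          = go y'
      where
        y' = (y + x / y) / 2

-- e.g. sqrtApprox (1 / 10^6) 2 approximates sqrt 2 to within 10^-6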
Regarding the complexity of algebraic numbers that Manuel mentioned, it really depends on your input. If you use nested square roots or combine different square roots (like sqrt 2 + ... + sqrt 50), then the performance will degrade quickly. However, if you rarely use square roots, or always use the same square root in multiple places, then algebraic numbers might be fast enough.
In many articles about Haskell they say it allows to make some checks during compile time instead of run time. So, I want to implement the simplest check possible - allow a function to be called only on integers greater than zero. How can I do it?
module Positive (toPositive, getPositive, Positive) where
newtype Positive = Positive { unPositive :: Int }
toPositive :: Int -> Maybe Positive
toPositive n = if (n <= 0) then Nothing else Just (Positive n)
-- We can't export unPositive, because unPositive can be used
-- to update the field. Trivially renaming it to getPositive
-- ensures that getPositive can only be used to access the field
getPositive :: Positive -> Int
getPositive = unPositive
The above module doesn't export the constructor, so the only way to build a value of type Positive is to supply toPositive with a positive integer, which you can then unwrap using getPositive to access the actual value.
You can then write a function that only accepts positive integers using:
positiveInputsOnly :: Positive -> ...
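A minimal usage sketch of that module (halveBudget is just an illustrative name) might look like:
import Positive (Positive, toPositive, getPositive)

-- any function taking a Positive can rely on the invariant statically
halveBudget :: Positive -> Int
halveBudget p = getPositive p `div` 2

main :: IO ()
main = case toPositive 10 of
  Nothing -> putStrLn "not a positive number"
  Just p  -> print (halveBudget p)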
Haskell can perform some checks at compile time that other languages perform at runtime. Your question seems to imply you are hoping for arbitrary checks to be lifted to compile time, which isn't possible without a large potential for proof obligations (which could mean you, the programmer, would need to prove the property is true for all uses).
In the below, I don't feel like I'm saying anything more than what pigworker touched on while mentioning the very cool sounding Inch tool. Hopefully the additional words on each topic will clarify some of the solution space for you.
What People Mean (when speaking of Haskell's static guarantees)
Typically when I hear people talk about the static guarantees provided by Haskell, they are talking about Hindley-Milner-style static type checking. This means one type cannot be confused for another; any such misuse is caught at compile time (ex: let x = "5" in x + 1 is invalid). Obviously, this only scratches the surface, and we can discuss some more aspects of static checking in Haskell.
Smart Constructors: Check once at runtime, ensure safety via types
Gabriel's solution is to have a type, Positive, that can only be positive. Building positive values still requires a check at runtime but once you have a positive there are no checks required by consuming functions - the static (compile time) type checking can be leveraged from here.
This is a good solution for many, many problems. I recommended the same thing when discussing golden numbers. Nevertheless, I don't think this is what you are fishing for.
Exact Representations
dflemstr commented that you can use a type, Word, which is unable to represent negative numbers (a slightly different issue than representing positives). In this manner you really don't need to use a guarded constructor (as above) because there is no inhabitant of the type that violates your invariant.
A more common example of using proper representations is non-empty lists. If you want a type that can never be empty then you could just make a non-empty list type:
data NonEmptyList a = Single a | Cons a (NonEmptyList a)
This is in contrast to the traditional list definition using Nil instead of Single a.
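For instance, a head function on this type is total, with no Maybe needed (a small sketch; neHead is my own name):
neHead :: NonEmptyList a -> a
neHead (Single x) = x
neHead (Cons x _) = x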
Going back to the positive example, you could use a form of Peano numbers (note that, despite the name below, a type starting at One contains only the positive values):
data NonNegative = One | S NonNegative
Or use GADTs to build unsigned binary numbers (and you can add Num and other instances, allowing functions like +):
{-# LANGUAGE GADTs #-}
data Zero
data NonZero
data Binary a where
    I :: Binary a -> Binary NonZero
    O :: Binary a -> Binary a
    Z :: Binary Zero
    N :: Binary NonZero
instance Show (Binary a) where
    show (I x) = "1" ++ show x
    show (O x) = "0" ++ show x
    show Z     = "0"
    show N     = "1"
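A hedged usage sketch with the definitions above in scope (value and recipOf are my names, not part of the original answer): we can interpret the digits, and write a consumer whose type rules out zero, so it needs no runtime check:
value :: Binary a -> Integer
value = go 0
  where
    go :: Integer -> Binary b -> Integer
    go acc (I x) = go (2 * acc + 1) x
    go acc (O x) = go (2 * acc) x
    go acc Z     = 2 * acc
    go acc N     = 2 * acc + 1

-- safe: a value of type Binary NonZero must contain an I or N digit,
-- so value b is at least 1 and the division cannot be by zero
recipOf :: Binary NonZero -> Rational
recipOf b = 1 / fromInteger (value b)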
External Proofs
While not part of the Haskell universe, it is possible to generate Haskell using alternate systems (such as Coq) that allow richer properties to be stated and proven. In this manner the Haskell code can simply omit checks like x > 0 but the fact that x will always be greater than 0 will be a static guarantee (again: the safety is not due to Haskell).
From what pigworker said, I would classify Inch in this category. Haskell has not grown sufficiently to perform your desired tasks, but tools to generate Haskell (in this case, very thin layers over Haskell) continue to make progress.
Research on More Descriptive Static Properties
The research community that works with Haskell is wonderful. While too immature for general use, people have developed tools to do things like statically check function partiality and contracts. If you look around you'll find it's a rich field.
I would be failing in my duty as his supervisor if I failed to plug Adam Gundry's Inch preprocessor, which manages integer constraints for Haskell.
Smart constructors and abstraction barriers are all very well, but they push too much testing to run time and don't allow for the possibility that you might actually know what you're doing in a way that checks out statically, with no need for Maybe padding. (A pedant writes. The author of another answer appears to suggest that 0 is positive, which some might consider contentious. Of course, the truth is that we have uses for a variety of lower bounds, 0 and 1 both occurring often. We also have some use for upper bounds.)
In the tradition of Xi's DML, Adam's preprocessor adds an extra layer of precision on top of what Haskell natively offers but the resulting code erases to Haskell as is. It would be great if what he's done could be better integrated with GHC, in coordination with the work on type level natural numbers that Iavor Diatchki has been doing. We're keen to figure out what's possible.
To return to the general point, Haskell is currently not sufficiently dependently typed to allow the construction of subtypes by comprehension (e.g., elements of Integer greater than 0), but you can often refactor the types to a more indexed version which admits static constraint. Currently, the singleton type construction is the cleanest of the available unpleasant ways to achieve this. You'd need a kind of "static" integers, then inhabitants of kind Integer -> * capture properties of particular integers such as "having a dynamic representation" (that's the singleton construction, giving each static thing a unique dynamic counterpart) but also more specific things like "being positive".
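To make the singleton idea concrete, here is a minimal sketch (over Peano naturals rather than Integer, purely for illustration): each type-level number n has exactly one value-level witness of type SNat n.
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Zero | Suc Nat

data SNat (n :: Nat) where
  SZero :: SNat 'Zero
  SSuc  :: SNat n -> SNat ('Suc n)

-- the "static" two and its unique dynamic counterpart
two :: SNat ('Suc ('Suc 'Zero))
two = SSuc (SSuc SZero)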
Inch represents an imagining of what it would be like if you didn't need to bother with the singleton construction in order to work with some reasonably well behaved subsets of the integers. Dependently typed programming is often possible in Haskell, but is currently more complicated than necessary. The appropriate sentiment toward this situation is embarrassment, and I for one feel it most keenly.
I know that this was answered a long time ago and I already provided an answer of my own, but I wanted to draw attention to a new solution that became available in the interim: Liquid Haskell, which you can read an introduction to here.
In this case, you can specify that a given value must be positive by writing:
{-@ myValue :: {v: Int | v > 0} @-}
myValue = 5
Similarly, you can specify that a function f requires only positive arguments like this:
{-@ f :: {v: Int | v > 0 } -> Int @-}
Liquid Haskell will verify at compile-time that the given constraints are satisfied.
This, or rather the similar desire for a type of natural numbers (including 0), is actually a common complaint about Haskell's numeric class hierarchy, which makes it impossible to provide a really clean solution to this.
Why? Look at the definition of Num:
class (Eq a, Show a) => Num a where
    (+) :: a -> a -> a
    (*) :: a -> a -> a
    (-) :: a -> a -> a
    negate :: a -> a
    abs :: a -> a
    signum :: a -> a
    fromInteger :: Integer -> a
Unless you resort to using error (which is bad practice), there is no way you can provide definitions for (-), negate and fromInteger.
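To make the problem concrete, here is a sketch of what such an instance is forced to look like (MyNat is an illustrative name; the error calls are exactly the objectionable part):
newtype MyNat = MyNat Integer deriving (Eq, Show)

instance Num MyNat where
  MyNat a + MyNat b = MyNat (a + b)
  MyNat a * MyNat b = MyNat (a * b)
  abs n             = n
  signum (MyNat a)  = MyNat (signum a)
  fromInteger n
    | n >= 0        = MyNat n
    | otherwise     = error "fromInteger: negative literal"
  MyNat a - MyNat b
    | a >= b        = MyNat (a - b)
    | otherwise     = error "(-): result would be negative"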
Type-level natural numbers are planned for GHC 7.6.1: https://ghc.haskell.org/trac/ghc/ticket/4385
Using this feature it's trivial to write a "natural number" type, and it gives performance you could never achieve with, e.g., a manually written Peano number type.
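For illustration, with today's GHC.TypeLits this looks roughly as follows (a sketch; the runtime value behind a type-level literal is an ordinary Integer, which is where the performance advantage over a Peano encoding comes from):
{-# LANGUAGE DataKinds #-}

import Data.Proxy (Proxy (..))
import GHC.TypeLits (natVal)

main :: IO ()
main = print (natVal (Proxy :: Proxy 12345678901234567890))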
I'm working with linear problems on rationals in Z3. To use Z3 I take SBV.
An example of a problem I pose is:
import Data.SBV
solution1 = do
    x <- sRational "x"
    w <- sRational "w"
    constrain $ x .< w
    constrain $ x + 2*w .>= 0 .|| x .== 1
My question is:
Are these kinds of problems decidable?
I couldn't find a list of decidable theories or a way to tell if a theory is decidable.
The closest I found is this. The theory about the real ones is decidable, but is it the same for rational numbers? Intuition tells me that it is, but I have not found the information that allows me to assure it.
Thanks in advance
SBV models rationals using the standard "two integers" idea; that is, it represents the numerator and the denominator separately as integers. This means that if you add two symbolic rationals, you'll have a non-linear term over the integers. So, in theory, the problem will be in the semi-decidable fragment. That is, even if you restrict your multiplications to concrete scalars, addition of symbolic rationals will give rise to non-linear terms over integers.
Having said that, I had good luck using rationals; where z3 was able to decide most problems of interest without much difficulty. If it proves to be an issue, you should switch to SReal type (i.e., algebraic reals), for which z3 has a decision procedure. But of course, the models you get can now include algebraic reals, such as square-root-of-2, etc. (i.e., the roots of any polynomial with integer coefficients.)
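For comparison, here is the same problem over SReal (a hedged sketch of the standard Data.SBV API; solution2 is my own name), which you could hand to sat:
import Data.SBV

solution2 :: Symbolic SBool
solution2 = do
  x <- sReal "x"
  w <- sReal "w"
  constrain $ x .< w
  constrain $ x + 2*w .>= 0 .|| x .== 1
  pure sTrue   -- the constraints carry the problem; sat solution2 asks z3 for a model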
Side note If your problem allows for delta-sat (i.e., satisfiability with perturbations), you should look into dReal (http://dreal.github.io), which SBV also supports as a backend solver. But perhaps that's not what you had in mind.
Theoretical note
Strictly speaking, linear arithmetic over rationals is decidable; see Section 3 of https://www.cs.ox.ac.uk/people/james.worrell/lecture15-2015.pdf for a proof. However, SMT solvers do not support rationals out-of-the-box; and SBV (as I mentioned above), uses two symbolic integers to represent rationals. So, adding two rationals will give rise to multiplication of two symbolic integers, taking you out of the decidable fragment. Of course, in practice, the solvers are quite adept at coming up with solutions even in the presence of non-linear terms; it's just that you're not always guaranteed. So, a more strict answer to your question is while linear arithmetic over rationals is decidable, the translation used by SBV puts the problem into the non-linear integer arithmetic domain, and hence decidability is not guaranteed. In any case, SMTLib does not come with a theory of rationals, so you're kind of out-of-luck when it comes to first class support for them.
I guess a rational solution will exist iff an integer solution exists to a suitably scaled collection of constraints. For example, x=1/2(=5/10), w=3/5(=6/10) is a solution to your example problem. Scaling your problem by 10, we have the equivalent constraint set:
10*x < 10*w
(10*x + 20*w >= 0) || (10*x == 10)
Writing x'=10*x and w'=10*w, this means that x'=5, w'=6 is an integer solution to:
x' < w'
(x' + 2*w' >= 0) || (x' == 10)
Presburger famously showed that first-order logic plus integers and addition is decidable. (Multiplication by a constant is also allowed, since it can be expanded to an addition -- e.g. 3*x is x+x+x.)
I guess the only trick left is to show that it's possible to choose what scaling to use without having solved the problem yet. Nothing obvious occurs to me off the top of my head, but it seems reasonable that this should be doable. For example, perhaps if you take the product of all the nonzero numerators and denominators in your constraint set, you can show that the set of rationals with that product as their denominator is indistinguishable from the full set of rationals. (If so, you could look through the proof to see if it still works with a smaller denominator.)
I'm not a z3 expert, so I can't talk about how this translates to whether that tool specifically is suitable, but it seems likely to me that it is possible to create a suitable tool.
2022 Update: This bug was filed as a GHC ticket and is now fixed: https://gitlab.haskell.org/ghc/ghc/issues/17231 so this is no longer an issue.
Using ghci 8.6.5
I want to calculate the square root of an Integer input, then round it to the bottom and return an Integer.
square :: Integer -> Integer
square m = floor $ sqrt $ fromInteger m
It works.
The problem is, for this specific big number as input:
4141414141414141*4141414141414141
I get a wrong result.
Putting my function aside, I test the case in ghci:
> sqrt $ fromInteger $ 4141414141414141*4141414141414141
4.1414141414141405e15
wrong... right?
BUT SIMPLY
> sqrt $ 4141414141414141*4141414141414141
4.141414141414141e15
which is more like what I expect from the calculation...
In my function I have to make some type conversion, and I reckon fromIntegral is the way to go. So, using that, my function gives a wrong result for the 4141...41 input.
I can't figure out what ghci does implicitly in terms of type conversion, right before running sqrt. Because ghci's conversion allows for a correct calculation.
Why I say this is an anomaly: the problem does not occur with other numbers like 5151515151515151 or 3131313131313131 or 4242424242424242 ...
Is this a Haskell bug?
TLDR
It comes down to how one converts an Integer value to a Double that is not exactly representable. Note that this can happen not just because Integer is too big (or too small), but also because Float and Double values by design "skip around" integral values as their magnitude gets larger, so not every integral value in the range is exactly representable either. In this case, an implementation has to pick a value based on the rounding mode. Unfortunately, there are multiple candidates; and what you are observing is that the candidate picked by Haskell gives you a worse numeric result.
Expected Result
Most languages, including Python, use what's known as "round-to-nearest-ties-to-even" rounding mechanism; which is the default IEEE754 rounding mode and is typically what you would get unless you explicitly set a rounding mode when issuing a floating-point related instruction in a compliant processor. Using Python as the "reference" here, we get:
>>> float(long(4141414141414141)*long(4141414141414141))
1.7151311090705027e+31
I haven't tried in other languages that support so called big-integers, but I'd expect most of them would give you this result.
How Haskell converts Integer to Double
Haskell, however, uses what's known as truncation, or round-towards-zero. So you get:
*Main> (fromIntegral $ 4141414141414141*4141414141414141) :: Double
1.7151311090705025e31
Turns out this is a "worse" approximation in this case (cf. to the Python produced value above), and you get the unexpected result in your original example.
The call to sqrt is really a red herring at this point.
Show me the code
It all originates from this code: (https://hackage.haskell.org/package/integer-gmp-1.0.2.0/docs/src/GHC.Integer.Type.html#doubleFromInteger)
doubleFromInteger :: Integer -> Double#
doubleFromInteger (S# m#) = int2Double# m#
doubleFromInteger (Jp# bn@(BN# bn#))
    = c_mpn_get_d bn# (sizeofBigNat# bn) 0#
doubleFromInteger (Jn# bn@(BN# bn#))
    = c_mpn_get_d bn# (negateInt# (sizeofBigNat# bn)) 0#
which in turn calls: (https://github.com/ghc/ghc/blob/master/libraries/integer-gmp/cbits/wrappers.c#L183-L190):
/* Convert bignum to a `double`, truncating if necessary
* (i.e. rounding towards zero).
*
* sign of mp_size_t argument controls sign of converted double
*/
HsDouble
integer_gmp_mpn_get_d (const mp_limb_t sp[], const mp_size_t sn,
const HsInt exponent)
{
...
which purposefully says the conversion is done rounding-toward zero.
So, that explains the behavior you get.
Why does Haskell do this?
None of this explains why Haskell uses round-towards-zero for integer-to-double conversion. I'd strongly argue that it should use the default rounding mode, i.e., round-nearest-ties-to-even. I can't find any mention whether this was a conscious choice, and it at least disagrees with what Python does. (Not that I'd consider Python the gold standard, but it does tend to get these things right.)
My best guess is it was just coded that way, without a conscious choice; but perhaps other people familiar with the history of numeric programming in Haskell can remember better.
What to do
Interestingly, I found the following discussion dating all the way back to 2008 as a Python bug: https://bugs.python.org/issue3166. Apparently, Python used to do the wrong thing here as well, but they fixed the behavior. It's hard to track the exact history, but it appears Haskell and Python both made the same mistake; Python recovered, but it went unnoticed in Haskell. If this was a conscious choice, I'd like to know why.
So, that's where it stands. I'd recommend opening a GHC ticket so it can be at least documented properly that this is the "chosen" behavior; or better, fix it so that it uses the default rounding mode instead.
Update:
GHC ticket opened: https://gitlab.haskell.org/ghc/ghc/issues/17231
2022 Update:
This is now fixed in GHC; at least as of GHC 9.2.2; but possibly earlier:
GHCi, version 9.2.2: https://www.haskell.org/ghc/ :? for help
Prelude> (fromIntegral $ 4141414141414141*4141414141414141) :: Double
1.7151311090705027e31
Not all Integers are exactly representable as Doubles. For those that aren't, fromInteger is in the bad position of needing to make a choice: which Double should it return? I can't find anything in the Report which discusses what to do here, wow!
One obvious solution would be to return a Double that has no fractional part and which represents the integer with the smallest absolute difference from the original of any Double that exists. Unfortunately this appears not to be the decision made by GHC's fromInteger.
Instead, GHC's choice is to return the Double with the largest magnitude that does not exceed the magnitude of the original number. So:
> 17151311090705026844052714160127 :: Double
1.7151311090705025e31
> 17151311090705026844052714160128 :: Double
1.7151311090705027e31
(Don't be fooled by how short the displayed number is in the second one: the Double there is the exact representation of the integer on the line above it; the digits stop there because there are enough to uniquely identify a single Double.)
Why does this matter for you? Well, the true answer to 4141414141414141*4141414141414141 is:
> 4141414141414141*4141414141414141
17151311090705026668707274767881
If fromInteger converted this to the nearest Double, as in plan (1) above, it would choose 1.7151311090705027e31. But since it returns the largest Double less than the input as in plan (2) above, and 17151311090705026844052714160128 is technically bigger, it returns the less accurate representation 1.7151311090705025e31.
Meanwhile, 4141414141414141 itself is exactly representable as a Double, so if you first convert to Double, then square, you get Double's semantics of choosing the representation that is closest to the correct answer, hence plan (1) instead of plan (2).
This explains the discrepancy in sqrt's output: doing your computations in Integer first and getting an exact answer, then converting to Double at the last second, paradoxically is less accurate than converting to Double immediately and doing your computations with rounding the whole way, because of how fromInteger does its conversion! Ouch.
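If you want to sidestep the conversion altogether for this particular task, one option is to compute the floor of the square root directly on Integer. A minimal sketch (isqrt is my own name, not from the answers above) using Newton's iteration:
isqrt :: Integer -> Integer
isqrt n
  | n < 0     = error "isqrt: negative argument"
  | n < 2     = n
  | otherwise = go n
  where
    -- Newton's iteration on Integers; stop once it no longer decreases
    go x = let x' = (x + n `div` x) `div` 2
           in if x' >= x then x else go x'

-- isqrt (4141414141414141 * 4141414141414141) == 4141414141414141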
I suspect a patch to modify fromInteger to do something better would be looked on favorably by GHCHQ; in any case I know I would look favorably on it!
It seems to me that the Num type class consists of a pretty arbitrary collection of functions. There are lots of types that naturally have + and * operations, but are problematic as instances of Num, due to the presence of abs, signum, and fromInteger. I can't find any discussion of the design philosophy behind this class, so it's not clear to me if there is a sensible rationale here, or if it's an unfortunate historical oddity.
I'll give an illustration of my question. Suppose I'm implementing a Matrix class, with components that are Doubles. I can obviously implement +, *, -, and negate. Maybe fromInteger x could give a 1x1 Matrix with component the Double value fromInteger x. It's less obvious what to do with abs and signum, but I could come up with something that satisfies the rule (from the class's documentation):
abs x * signum x == x
The objection to this idea is that my instance of Num is not fulfilling some implicit rules that people expect of Num. My * is a partial function (assuming the size of the Matrix is a runtime parameter), which is not true for the usual instances like Double and Int. And it doesn't commute. Whatever I come up with for abs and signum are not going to satisfy everyone's expectations.
The objection to this objection is that my Matrix multiplication is going to be a partial function anyway (and in this kind of type that seems to be accepted in the Haskell community), so why does it matter if it's * in particular that is the partial function? And if my abs and signum satisfy the rule from the documentation, then I've fulfilled my side of the bargain. Anyone relying on anything more from a Num instance is in the wrong.
Should a type like Matrix be an instance of Num?
Don't make Num instances for non-rings. It's just confusing.
Of course, you can often define instances which do something useful, but if it's not completely obvious what, then better just define a plain function with a descriptive name, or an instance of some weaker class with better-defined semantics. If somebody wants to use this with short operators or polymorphic Num functions, they can still define that locally in their own module (preferably with a simple newtype wrapper).
In particular, a Num-instance for general (dynamically-sized) matrices is problematic because it's not obvious what should happen when the dimensions don't match. What behaviour do you want to generalise?
What I would consider a good instance is matrices of fixed square size (i.e. linear endomorphisms on a given vector space). In this case, multiplication is evidently composition†, and number literals would be taken as constant-diagonal matrices, so 1 is actually the multiplicative identity, like what you write in maths contexts.
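A hedged sketch of that idea for a fixed 2×2 size (M2 is an illustrative type of my own, not from any library): fromInteger produces a constant-diagonal matrix, so the literal 1 really is the multiplicative identity, while abs and signum remain debatable.
data M2 = M2 Double Double Double Double   -- row-major: a b / c d
  deriving (Eq, Show)

instance Num M2 where
  M2 a b c d + M2 a' b' c' d' = M2 (a + a') (b + b') (c + c') (d + d')
  M2 a b c d * M2 a' b' c' d' =
    M2 (a*a' + b*c') (a*b' + b*d') (c*a' + d*c') (c*b' + d*d')
  negate (M2 a b c d) = M2 (negate a) (negate b) (negate c) (negate d)
  fromInteger n = M2 x 0 0 x where x = fromInteger n   -- constant diagonal
  abs    = id        -- debatable, but at least abs x * signum x == x holds
  signum = const 1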
But that's not compatible with your idea of arbitrarily choosing the size of number-literals as 1×1! People would expect 2 * m to work, but it crashes. Well, better crash than give unexpected results; unfortunately it's tempting to come up with some clever way of defining multiplication in a suitable way. For instance, we could block-diagonal-copy the smaller matrix until it's large enough, perhaps only do this in the 1×1 case... well, Matlab does this kind of ad-hoc stuff, but please! let's not take such a horrible language as a model for what's good ideas.
If you have something that's obviously an additive group, indeed vector space, then make it a VectorSpace! If you also have a multiplication, but it's partial, then better only define it as a plain function.
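For the partial-multiplication case, a plain, descriptively named function over a naive list-of-rows representation might look like this (a sketch; returning Maybe is one way to keep it total instead of partial):
import Data.List (transpose)

matMul :: Num a => [[a]] -> [[a]] -> Maybe [[a]]
matMul a b
  | null a || null b               = Nothing
  | any ((/= length b) . length) a = Nothing   -- inner dimensions don't match
  | otherwise                      =
      Just [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]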
If you fancy, you can define instances for the finely staggered numeric-prelude classes. Personally (though I like the idea of this project) I couldn't yet be bothered to use it anywhere, because it's rather an effort to understand the hierarchy.
†Or is it? Trouble already starts here, I think hmatrix actually implements * on matrices as element-wise multiplication. That's more horrible than Matlab!
In Eiffel one is allowed to use an expanded class, which doesn't allocate from the heap. From a developer's perspective one rarely has to think about conversion from Int to Float, as it is automatic. My question is this: why did Haskell not choose a similar approach to modelling Num? Specifically, let's consider the Int instance. Here is the rationale for my question:
[1..3] = [1,2,3]
[1..3.5] = [1.0,2.0,3.0,4.0] -- rounds up
The second list was something that I was not expecting, because there are by definition infinitely many floating point numbers between any two integers. Of course, once we test the sequence, it is clear that it is returning whole floating point numbers, with the upper limit rounded up. One of the reasons these conversions are needed is to allow us, for example, to compute the mean of a set of Integers.
In Eiffel the number type hierarchy is a bit more programmer friendly and the conversion happens as needed: for example creating a sequence can still be a set of Ints that result in a floating point mean. This has a readability advantage.
Is there a reason that expanded classes were not implemented in Haskell? Any references will greatly help.
@ony: on the point about parallel strategies: won't we face the same issue when using primitives? The manual does discourage using primitives, and that makes sense to me; in general, wherever we can use primitives we should probably still use the abstract type. The issue I faced when trying to take the mean of some numbers is the missing Fractional Int instance, and the question of why 5/3 does not promote to a floating point value, instead of my having to create a floating point array to achieve the same result. There must be a reason why Fractional instances for Int and Integer are not defined; that could help me understand the rationale better.
@leftroundabout: the question is not about expanded classes per se, but about the convenience that such a feature can offer, although that feature alone is not sufficient to handle the promotion from an Int to a Float, for example, as mentioned in my response to @ony. Let's take the classic example of a mean and try to define it as:
-- intended type: mean :: [Int] -> Double
let mean xs = sum xs / length xs                      -- not valid Haskell code
-- intended type: mean :: [Int] -> Double
let mean xs = sum xs / fromIntegral (length xs)
I would have liked it if I did not have to call fromIntegral to get the mean function to work, and that ties back to the missing Fractional Int instance. Although the explanation seems to make sense, what I don't understand is this: if I am clear that I expect a Double, and I state it in my type signature, is that not sufficient to trigger the appropriate conversion?
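(For reference, a version that does typecheck with that intent is the following sketch; both the sum and the length have to be converted explicitly, precisely because there is no Fractional Int instance.)
mean :: [Int] -> Double
mean xs = fromIntegral (sum xs) / fromIntegral (length xs)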
[a..b] is shorthand for enumFromTo a b, a method of the Enum typeclass. It begins at a and succs until the first time b is exceeded.
[a,b..c] is shorthand for enumFromThenTo a b c, which is similar to enumFromTo except that instead of succing it adds the difference b-a each time. By default this difference is computed by round-tripping through Int, so fractional differences may or may not be respected. That said, Double works as you'd expect:
Prelude> [0.0, 0.5.. 10]
[0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,4.5,5.0,5.5,6.0,6.5,7.0,7.5,8.0,8.5,9.0,9.5,10.0]
[a..] is shorthand for enumFrom a which just succs forever.
[a,b..] is shorthand for enumFromThen a b which just adds (b-a) forever.
As for the behaviour, @J.Abrahamson already replied; that's the definition of enumFromThenTo.
As for design...
Actually, GHC has Float#, which is an unboxed type (it can be allocated anywhere, but its value is strict).
Since Haskell is a lazy language, it assumes that most values are not required initially; they are only computed when they are actually demanded by a primitive with strict arguments.
Consider length [2..10]. In this case, without optimization, Haskell may even avoid generating the numbers and simply build up the list structure (without the values). A probably more useful example is takeWhile (<100) [x*(x-1) | x <- [2..]].
But you shouldn't assume there is overhead here just because you are writing in a language that abstracts all of this away (except for strictness annotations). The Haskell compiler has to take on that work itself: when the compiler can tell that all elements of a list will be demanded (reduced to normal form), and it decides to process them within one chain of returns, it can allocate them on the stack.
Also, with such an approach you can get more out of your code by using multiple CPU cores. Imagine that, using Strategies, your list is processed on different cores; they then need to share the common data on the heap (not on the stack).