I've just started playing with GHCi. I see that list generators basically solve an equation within a given set:
Prelude> [x | x <- [1..20], x^2 == 4]
[2]
(finds only one root, as expected)
Now, why can't I solve equations with results in ℝ, given that the solution is included in the specified range?
[x | x <- [0.1,0.2..2.0], x*4 == 2]
How can I solve such equations over the set of real numbers?
Edit: Sorry, I meant 0.1, of course.
List comprehension doesn't solve equations; it just generates a list of items that belong to certain sets. If your set is defined as any x in [1..20] such that x^2==4, that's what you get.
You cannot do that with a complete list of every real number from 0.01 to 2.0, because such a list cannot be represented in Haskell (or rather: it cannot be represented on any computer), since it contains infinitely many numbers, each with infinite precision.
[0.01,0.2..2.0] is a list made of the following numbers:
Prelude> [0.01,0.2..2.0]
[1.0e-2,0.2,0.39,0.5800000000000001,0.7700000000000001,0.9600000000000002,1.1500000000000004,1.3400000000000005,1.5300000000000007,1.7200000000000009,1.910000000000001]
And none of these numbers satisfies your condition.
Note that you probably meant [0.1,0.2..2.0] instead of [0.01,0.2..2.0]. Still:
Prelude> [0.1,0.2..2.0]
[0.1,0.2,0.30000000000000004,0.4000000000000001,0.5000000000000001,0.6000000000000001,0.7000000000000001,0.8,0.9,1.0,1.1,1.2000000000000002,1.3000000000000003,1.4000000000000004,1.5000000000000004,1.6000000000000005,1.7000000000000006,1.8000000000000007,1.9000000000000008,2.000000000000001]
As others have mentioned, this is not an efficient way to solve equations, but it can be done with ratios.
Prelude> :m +Data.Ratio
Prelude Data.Ratio> [x|x<-[1%10, 2%10..2], x*4 == 2]
[1 % 2]
Read x % y as x divided by y.
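If you later need the exact result back as a Double, fromRational converts it (a small addition of my own, not part of the answer above):
Prelude Data.Ratio> fromRational (1 % 2) :: Double
0.5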
The floating point issue can be solved in this way:
Prelude> [x | x <- [0.1, 0.2 .. 2.0], abs(2 - x*4) < 1e-9]
[0.5000000000000001]
For a reference on why floating-point numbers can cause problems, see this: Comparing floating point numbers
First of all [0.01,0.2..2.0] wouldn't include 0.5 even if floating point arithmetic were accurate. I assume you meant the first element to be 0.1.
The list [0.1,0.2..2.0] does not contain 0.5 because floating point arithmetic is imprecise and the 5th element of [0.1,0.2..2.0] is 0.5000000000000001, not 0.5.
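A quick check of my own in GHCi, indexing the list from the answer above:
Prelude> [0.1,0.2..2.0] !! 4
0.5000000000000001
Prelude> [0.1,0.2..2.0] !! 4 == 0.5
False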
Why does the math module return the wrong result?
First test
A = 12345678917
print 'A =',A
B = sqrt(A**2)
print 'B =',int(B)
Result
A = 12345678917
B = 12345678917
Here, the result is correct.
Second test
A = 123456758365483459347856
print 'A =',A
B = sqrt(A**2)
print 'B =',int(B)
Result
A = 123456758365483459347856
B = 123456758365483467538432
Here the result is incorrect.
Why is that the case?
Because math.sqrt(..) first converts the number to a floating-point value, and floating-point values have a limited mantissa: they can only represent part of the number correctly. So float(A**2) is not equal to A**2. It then calculates math.sqrt of that value, which is also only approximately correct.
Most functions working with floating-point values will never exactly match their integer counterparts. Floating-point calculations are inherently approximate.
If one calculates A**2 one gets:
>>> 12345678917**2
152415787921658292889L
Now if one converts it to a float(..), one gets:
>>> float(12345678917**2)
1.5241578792165828e+20
But if you now ask whether the two are equal:
>>> float(12345678917**2) == 12345678917**2
False
So information has been lost while converting it to a float.
You can read more about how floats work and why they are approximate in the Wikipedia article about IEEE-754, the formal definition of how floating-point numbers work.
The documentation for the math module states "It provides access to the mathematical functions defined by the C standard." It also states "Except when explicitly noted otherwise, all return values are floats."
Those together mean that the parameter to the square root function is a float value. In most systems that means a floating point value that fits into 8 bytes, which is called "double" in the C language. Your code converts your integer value into such a value before calculating the square root, then returns such a value.
However, the 8-byte floating point value can store at most 15 to 17 significant decimal digits. That is what you are getting in your results.
If you want better precision in your square roots, use a function that is guaranteed to give full precision for an integer argument. Just do a web search and you will find several. These usually use a variation of the Newton-Raphson method to iterate until they reach the correct answer. Be aware that this is significantly slower than the math module's sqrt function.
Here is a routine that I modified from the internet. I can't cite the source right now. This version also works for non-integer arguments but just returns the integer part of the square root.
def isqrt(x):
    """Return the integer part of the square root of x, even for very
    large values."""
    if x < 0:
        raise ValueError('square root not defined for negative numbers')
    n = int(x)
    if n == 0:
        return 0
    # Start from 2**ceil(bit_length/2) - 1, which is always >= isqrt(n),
    # then refine with integer Newton iterations until the estimate
    # stops decreasing.
    a, b = divmod(n.bit_length(), 2)
    x = (1 << (a + b)) - 1
    while True:
        y = (x + n // x) // 2
        if y >= x:
            return x
        x = y
If you want to calculate sqrt of really large numbers and you need exact results, you can use sympy:
import sympy
num = sympy.Integer(123456758365483459347856)
print(int(num) == int(sympy.sqrt(num**2)))
The way floating-point numbers are stored in memory makes calculations with them prone to slight errors that can nevertheless be significant when exact results are needed. As mentioned in one of the comments, the decimal library can help you here:
>>> from decimal import Decimal
>>> A = Decimal(123456758365483459347856)
>>> A
Decimal('123456758365483459347856')
>>> B = A.sqrt()**2
>>> B
Decimal('123456758365483459347856.0000')
>>> A == B
True
>>> int(B)
123456758365483459347856
I use Python 3.6, which has no hardcoded limit on the size of integers. I don't know whether, in 2.7, casting B to an int would cause overflow, but decimal is incredibly useful regardless.
I have a semi-voluntary Haskell homework here and need some help on how to solve it.
The task:
Write a Haskell function
evenmin a b c
that returns the smallest even number from the arguments, or the largest odd one if there is no even number among them.
I know that I can do that with many guards, but I am sure that there is a much nicer way! Please don't write out the solution, but nudge me in the right direction if you can. Thanks!
Hint: Instead of 3 arguments, suppose your input is a non-empty list of integers, i.e.
evenmin' :: [Int] -> Int
Suppose further you have a function phi that partitions the input like so:
phi [1, 2, 3, 4, 5, 6] == ([1,3,5],[2,4,6])
What would the definition of evenmin' be? Afterwards, define evenmin a b c = evenmin' [a, b, c].
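If you want to check your attempt afterwards, here is one possible sketch of where this hint leads, using Data.List.partition odd in the role of phi (skip it if you would rather work it out yourself):
import Data.List (partition)

-- Split the inputs into odds and evens (this plays the role of phi),
-- then take the smallest even if there is one, else the largest odd.
evenmin' :: [Int] -> Int
evenmin' xs = case partition odd xs of
  (odds, [])    -> maximum odds
  (_,    evens) -> minimum evens

evenmin :: Int -> Int -> Int -> Int
evenmin a b c = evenmin' [a, b, c]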
Order integers in this way:
even integers are ordered by <=
odd integers are ordered by >=
even integers are always smaller than odd ones.
Define myCompare :: Int -> Int -> Ordering.
Realize you want the minimum according to the above ordering.
How to compute the minimum of two objects w.r.t. a generic ordering?
How to extend that to three objects?
Bonus: how to extend that to lists?
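For comparison, a possible sketch of this ordering approach, using minimumBy from Data.List (again, look only after trying it yourself):
import Data.List (minimumBy)

-- Evens come before odds; evens ascending, odds descending.
myCompare :: Int -> Int -> Ordering
myCompare x y = case (even x, even y) of
  (True,  True)  -> compare x y
  (False, False) -> compare y x
  (True,  False) -> LT
  (False, True)  -> GT

evenmin :: Int -> Int -> Int -> Int
evenmin a b c = minimumBy myCompare [a, b, c]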
In the ghci terminal, I was computing some equations with Haskell using the sqrt function.
I noticed that I would sometimes lose precision in my sqrt results when they should have simplified exactly.
For example,
sqrt 4 * sqrt 4 = 4 -- This works well!
sqrt 2 * sqrt 2 = 2.0000000000000004 -- Not the exact result.
Normally, I would expect a result of 2.
Is there a way to get the right simplification result?
How does that work in Haskell?
There are usable precise number libraries in Haskell. Two that come to mind are cyclotomic and the CReal module in the numbers package. (Cyclotomic numbers don't support all the operations on complex numbers that you might like, but square roots of integers and rationals are in the domain.)
>>> import Data.Complex.Cyclotomic
>>> sqrtInteger 2
e(8) - e(8)^3
>>> toReal $ sqrtInteger 2
Just 1.414213562373095 -- Maybe Double
>>> sqrtInteger 2 * sqrtInteger 2
2
>>> toReal $ sqrtInteger 2 * sqrtInteger 2
Just 2.0
>>> rootsQuadEq 3 2 1
Just (-1/3 + 1/3*e(8) + 1/3*e(8)^3,-1/3 - 1/3*e(8) - 1/3*e(8)^3)
>>> let eq x = 3*x*x + 2*x + 1
>>> eq (-1/3 + 1/3*e(8) + 1/3*e(8)^3)
0
>>> import Data.Number.CReal
>>> sqrt 2 :: CReal
1.4142135623730950488016887242096980785697 -- Show instance cuts off at 40th place
>>> sqrt 2 * sqrt 2 :: CReal
2.0
>>> sin 3 :: CReal
0.1411200080598672221007448028081102798469
>>> sin 3*sin 3 + cos 3*cos 3 :: CReal
1.0
You do not lose precision. You have limited precision.
The square root of 2 is a real number but not a rational number, therefore its value cannot be represented exactly by any computer (except by representing it symbolically, of course).
Even if you define a very large precision type, it will not be able to represent the square root of 2 exactly. You may get more precision, but never enough to represent that value exactly (unless you have a computer with infinite memory, in which case please hire me).
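One way to see this concretely is to ask GHCi for the exact rational value that the Double approximation of √2 actually stores (an illustration of my own, not part of the answer above):
Prelude> toRational (sqrt 2 :: Double)
6369051672525773 % 4503599627370496
That dyadic rational is slightly larger than √2, which is why squaring it gives 2.0000000000000004 instead of 2.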
The explanation for these results lies in the type of the values returned by the sqrt function:
> :t sqrt
sqrt :: Floating a => a -> a
The Floating a means that the value returned belongs to the Floating type class.
The standard types in this class, such as Float and Double, are stored as floating-point numbers. These sacrifice precision for the sake of covering a larger range of numbers.
Double precision floating point numbers can cover very large ranges but they have limited precision and cannot encode all possible numbers. The square root of 2 (√2) is one such number:
> sqrt 2
1.4142135623730951
> sqrt 2 + 0.000000000000000001
1.4142135623730951
As you see above, it is impossible for double precision floating point numbers to be precise enough to represent √2 + 0.000000000000000001; it is simply rounded to the closest value that can be expressed in the floating-point encoding.
As mentioned by another poster, √2 is an irrational number, which simply put means that it requires infinitely many digits to represent exactly. As such it cannot be represented faithfully using floating-point numbers. This leads to errors such as the one you noticed when multiplying it by itself.
You can learn about floating points on their wikipedia page: http://en.wikipedia.org/wiki/Floating_point.
I especially recommend that you read the answer to this other Stack Overflow question: Floating Point Limitations, and follow the mentioned link; it will help you understand what's going on under the hood.
Note that this is a problem in every language, not just Haskell. One way to get rid of it entirely is to use symbolic computation libraries but they are much slower than the floating point numbers offered by CPUs. For many computations the loss of precision due to floating points is not a problem.
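When exact results are not required, the usual workaround is to compare within a small tolerance instead of using ==. A minimal sketch (the name approxEq and the tolerance 1e-9 are arbitrary choices of mine):
-- Approximate equality for Doubles: true when the values differ by
-- less than a fixed absolute tolerance.
approxEq :: Double -> Double -> Bool
approxEq a b = abs (a - b) < 1e-9

main :: IO ()
main = print (approxEq (sqrt 2 * sqrt 2) 2)  -- prints True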
I am doing question 12 of Project Euler, where I must find the first triangle number with 501 divisors. So I whipped up this with Haskell:
divS n = [ x | x <- [1..(n)], n `rem` x == 0 ]
tri n = (n* (n+1)) `div` 2
divL n = length (divS (tri n))
answer = [ x | x <- [100..] , 501 == (divL x)]
The first function finds the divisors of a number.
The second function calculates the nth triangle number
The 3rd function finds the length of the list of divisors of the triangle number.
The 4th function should return the value of the triangle number which has 501 divisors.
But so far this has run for a while without returning a result. Is the answer very large, or do I need some serious optimisation to make this work in a realistic amount of time?
You need to use the properties of the divisor function: http://en.wikipedia.org/wiki/Divisor_function
Notice that n and n + 1 are always coprime, so that you can get d(n * (n + 1) / 2) by multiplying previously computed values.
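A minimal sketch of that idea (my own code, not the answerer's): d is multiplicative, and n and n+1 are coprime, so the divisor count of the n-th triangle number splits into two divisor counts of much smaller numbers.
-- Count divisors by trial division up to sqrt m, counting each
-- divisor pair (d, m `div` d) at once.
numDivisors :: Int -> Int
numDivisors m = 2 * length [ d | d <- [1 .. r], m `rem` d == 0 ]
              - (if r * r == m then 1 else 0)
  where r = floor (sqrt (fromIntegral m :: Double))

-- d(n*(n+1)/2) via the coprime factors n/2 and n+1 (or n and (n+1)/2).
triDivisors :: Int -> Int
triDivisors n
  | even n    = numDivisors (n `div` 2) * numDivisors (n + 1)
  | otherwise = numDivisors n * numDivisors ((n + 1) `div` 2)

main :: IO ()
main = print (head [ n * (n + 1) `div` 2 | n <- [1 ..], triDivisors n > 500 ])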
It is probably faster to prime-factorise the number and then use the factorisation to find the divisors, than using trial division with all numbers <= sqrt(n).
The Sieve of Eratosthenes is a classical way of finding primes, which may be modified slightly to find the number of divisors of each natural number. Instead of just marking each non-prime as "not prime", you could make a list of all the primes dividing each number.
You can then use those primes to compute the complete set of divisors, or just the number of them, since that is all you need.
Another variation would be to mark not just multiples of primes, but multiples of all natural numbers. Then you could simply use a counter to keep track of the number of divisors for each number.
You also might want to check out The Genuine Sieve of Eratosthenes, which explains why trial division is way slower than the real sieve.
Lastly, you should look carefully at the different kinds of arrays in Haskell. I think it is probably easier to use the ST monad to implement the sieve, but it might be possible to achieve the correct complexity using accumArray, if you can make sure that your update function is strict. I have never managed to get this to work though, so you are on your own here.
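Here is a small sketch of the "mark multiples of every natural number" counter idea using accumArray over an unboxed array (my own choice; unboxed arrays are strict in their elements, which sidesteps the strictness concern mentioned above):
import Data.Array.Unboxed (UArray, accumArray, (!))

-- divisorCounts n ! k is the number of divisors of k, for 1 <= k <= n:
-- every d contributes one divisor to each of its multiples up to n.
divisorCounts :: Int -> UArray Int Int
divisorCounts n = accumArray (+) 0 (1, n)
                    [ (m, 1) | d <- [1 .. n], m <- [d, 2 * d .. n] ]

main :: IO ()
main = print (divisorCounts 100 ! 28)  -- 28 has 6 divisors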
If you were using C instead of Haskell, your function would still take a long time.
To make it faster you will need to improve the algorithm, using the suggestions from the above answers. I suggest changing the title and question description accordingly; after that I'll delete this comment.
If you wish, I can spoil the problem by sharing my solution.
For now I'll give you my top-level code:
main =
  print .
  head . filter ((> 500) . length . divisors) .
  map (figureNum 3) $ [1..]
The algorithmic improvement lies in the divisors function. You can further improve it using rawicki's suggestion, but already this takes less than 100ms.
Some optimization tips:
check for divisors only between 1 and sqrt(n); every divisor above that limit is n divided by a divisor below it, so you get those for free.
don't build a list of divisors and count the list, but count them directly.
The default
[1..5]
gives this
[1,2,3,4,5]
and can also be done with the range function. Is it possible to change the step size between the points, so that I could get something like the following instead?
[1,1.5,2,2.5,3,3.5,4,4.5,5]
[1,1.5..5]
You have to be careful with floating point arithmetic. It can't represent 0.1 precisely, so if you try
Prelude> [0,0.1 .. 1]
[0.0,0.1,0.2,0.30000000000000004,0.4,0.5,0.6,0.7,0.7999999999999999,0.8999999999999999,0.9999999999999999]
A better way is more like:
Prelude> map (/10) [0..10]
[0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0]
Actually, [1..5] is syntactic sugar for
enumFromTo 1 5
and [1,1.5..5] for
enumFromThenTo 1 1.5 5
For more information, see http://en.wikibooks.org/wiki/Haskell/Syntactic_sugar
I just want to elaborate on some of the answers above. As @mattiast correctly mentioned, [start, next .. end] is really just syntactic sugar for:
enumFromThenTo start next end
However, notice that the middle value (the 1.5 in your case) is not the step size, but the second element of the list; the step is the difference between it and the first element.
So if you want to decrement in steps of 0.2, you would write [2,1.8..1], since 2 - 1.8 == 0.2.
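If you need a fractional step but want tidy values, the scaling trick from the earlier answer combines nicely with this: enumerate over integers and divide afterwards, so each element comes from a single division instead of repeated additions.
Prelude> map (/10) [20,18..10]
[2.0,1.8,1.6,1.4,1.2,1.0]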