Why does flooring infinity not throw some error? - haskell

I found myself having a case where the equivalent of floor $ 1/0 was being executed.
λ> 1/0
Infinity
This is normal behavior as far as I understand, but when Infinity is floor'd or ceiling'd:
λ> floor $ 1/0
179769313486231590772930519078902473361797697894230657273430081157732675805500963132708477322407536021120113879871393357658789768814416622492847430639474124377767893424865485276302219601246094119453082952085005768838150682342462881473913110540827237163350510684586298239947245938479716304835356329624224137216
Instead of failing, this very big number is produced. Why?
Maybe more importantly, how can I distinguish this from a non-faulty result without using a filter before applying another function?

The first question is perhaps not so important, so I'll try to answer the second question first.
Once you have a number, if all you know is that it came from floor x, you can't tell whether x was a huge but valid Double or whether it was infinity. You can probably assume anything outside the range of Double is invalid and was produced from infinity, negative infinity, NaN or the like. It is quite simple to check whether your value is valid beforehand using one or more of the functions in RealFloat, like isNaN, isInfinite, etc.
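For example, a guarded floor along those lines (a minimal sketch; safeFloor is just an illustrative name):
safeFloor :: (RealFloat a, Integral b) => a -> Maybe b
safeFloor x
  | isNaN x || isInfinite x = Nothing        -- reject non-finite inputs
  | otherwise               = Just (floor x)
λ> safeFloor (1/0 :: Double)
Nothing
λ> safeFloor (2.5 :: Double)
Just 2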
You could also use something like data Number a = N a | PosInf | NegInf. Then you write:
instance RealFrac a => RealFrac (Number a) where
    ...
    floor (N n)  = floor n
    floor PosInf = error "Floor of positive infinity"
    floor NegInf = error "Floor of negative infinity"
    ...
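If you don't want to write out a full RealFrac instance, a standalone function expressing the same idea might look like this (just a sketch; floorN is a made-up name):
data Number a = N a | PosInf | NegInf

floorN :: (RealFrac a, Integral b) => Number a -> b
floorN (N n)  = floor n
floorN PosInf = error "Floor of positive infinity"
floorN NegInf = error "Floor of negative infinity"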
Which approach is best is based mostly on your use case.
Maybe it would be correct for floor (1/0) to be an error. But the value is garbage anyway. Is it better to deal with garbage or an error?
But why 2^1024? I took a look at the source for GHC.Float:
properFraction (F# x#)
  = case decodeFloat_Int# x# of
      (# m#, n# #) ->
          let m = I# m#
              n = I# n#
          in
          if n >= 0
          then (fromIntegral m * (2 ^ n), 0.0)
          else let i = if m >= 0 then m `shiftR` negate n
                                 else negate (negate m `shiftR` negate n)
                   f = m - (i `shiftL` negate n)
               in (fromIntegral i, encodeFloat (fromIntegral f) n)

floor x = case properFraction x of
            (n,r) -> if r < 0.0 then n - 1 else n
Note that decodeFloat_Int# returns the mantissa and exponent. According to Wikipedia:
Positive and negative infinity are represented thus: sign = 0 for
positive infinity, 1 for negative infinity. biased exponent = all 1
bits. fraction = all 0 bits.
For Float, this means a mantissa of 2^23, since there are 23 bits in the mantissa field, and an exponent of 105 (why 105? I actually have no idea. I would think it should be 255 - 127 = 128, but it seems to actually be 128 - 23). The value of floor is fromIntegral m * (2 ^ n), i.e. mantissa * 2^exponent == 2^23 * 2^105 == 2^128. For Double the same calculation gives 2^1024.
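A quick sanity check in GHCi (assuming a typical GHC; the Double result is exactly the big number shown in the question):
λ> floor (1/0 :: Double) == 2^1024
True
λ> floor (1/0 :: Float) == 2^128
True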

Related

Summing a finite prefix of an infinite series

The number π can be calculated with the following infinite series sum: π = Sum { k = 0 .. ∞ } 2^(k+1) (k!)^2 / (2k+1)!.
I want to define a Haskell function roughlyPI that, given a natural number k, calculates the sum of the series from 0 up to k.
Example: roughlyPi 1000 (or whatever) => 3.1415926535897922
What I did was this (in VS Code):
roughlyPI :: Double -> Double
roughlyPI 0 = 2
roughlyPI n = e1/e2 + (roughlyPI (n-1))
  where
    e1 = 2**(n+1)*(factorial n)**2
    e2 = factorial (2*n +1)
    factorial 0 = 1
    factorial n = n * factorial (n-1)
but it doesn't really work....
*Main> roughlyPI 100
NaN
I don't know what's wrong. I'm new to Haskell, by the way.
All I really want is to be able to type in a number that will give me PI at the end. It can't be that hard...
As mentioned in the comments, we need to avoid large divisions and instead intersperse smaller divisions within the factorials. We use Double for representing PI but even Double has its limits. For instance 1 / 0 == Infinity and (1 / 0) / (1 / 0) == Infinity / Infinity == NaN.
Luckily, we can use algebra to simplify the formula and hopefully delay the blowup of our Doubles. By dividing within our factorial the numbers don't grow too unwieldy too quickly.
This solution will calculate roughlyPI 1000, but it fails on 1023 with NaN because 2 ^ 1024 :: Double == Infinity. Note how each iteration of fac has a division as well as a multiplication to help keep the numbers from blowing up. If you are trying to approximate PI with a computer, I believe there are better algorithms, but I tried to keep it as conceptually close to your attempt as possible.
roughlyPI :: Integer -> Double
roughlyPI 0 = 2
roughlyPI k = e + roughlyPI (k - 1)
  where
    k' = fromIntegral k
    e = 2 ** (k' + 1) * fac k / (2 * k' + 1)
      where
        fac 1 = 1 / (k' + 1)
        fac p = (fromIntegral p / (k' + fromIntegral p)) * fac (p - 1)
We can do better than having a blowup of Double after 1000 by doing the computations with Rationals and then converting to Double with realToFrac (credit to @leftaroundabout):
import Data.Ratio ((%))

roughlyPI' :: Integer -> Double
roughlyPI' = realToFrac . go
  where
    go 0 = 2
    go k = e + go (k - 1)
      where
        e = 2 ^ (k + 1) * fac k / (2 * fromIntegral k + 1)
          where
            fac 1 = 1 % (k + 1)
            fac p = (p % (k + p)) * fac (p - 1)
For further reference see the Wikipedia page on approximations of PI.
P.S. Sorry for the bulky equations, Stack Overflow does not support LaTeX.
First note that your code actually works:
*Main> roughlyPI 91
3.1415926535897922
The problem, as was already said, is that when you try to make the approximation better, the factorial terms become too big to be representable in double-precision floats. The simplest – albeit somewhat brute-force – way to fix that is to do all the computation in rational arithmetic instead. Because numerical operations in Haskell are polymorphic, this works with almost the same code as you have, only the ** operator can't be used since that allows fractional exponents (which are in general irrational). Instead, you should use integer exponents, which is anyway the conceptually right thing. That requires a few fromIntegral:
roughlyPI :: Integer -> Rational
roughlyPI 0 = 2
roughlyPI n = e1/e2 + (roughlyPI (n-1))
  where
    e1 = 2^(n+1)*fromIntegral (factorial n^2)
    e2 = fromIntegral . factorial $ 2*n + 1
    factorial 0 = 1
    factorial n = n * factorial (n-1)
This now works also for much higher degrees of approximation, although it takes a long time to carry around the giant fractions involved:
*Main> realToFrac $ roughlyPI 1000
3.141592653589793
The way to go in such cases is to calculate the ratio of consecutive terms and calculate the terms by rolling multiplications of the ratios:
-- 1. -------------
pi1 n = Sum { k = 0 .. n } T(k)
  where
    T(k) = 2^(k+1)(k!)^2 / (2k+1)!

-- 2. -------------
ts2 = [ 2^(k+1)*(k!)^2 / (2k+1)! | k <- [0..] ]
pis2 = scanl1 (+) ts2
pi2 n = pis2 !! n

-- 3. -------------
T(k)   = 2^(k+1)(k!)^2 / (2k+1)!
T(k+1) = 2^(k+2)((k+1)!)^2 / (2(k+1)+1)!
       = T(k) 2 (k+1)^2 / (2k+2) (2k+3)
       = T(k) (k+1)^2 / ( k+1) (2k+3)
       = T(k) (k+1) / (k+1 + k+2)
       = T(k) / (1 + (k+2)/(k+1))
       = T(k) / (2 + 1 /(k+1))

-- 4. -------------
ts4  = scanl (/) 2 [ 2 + 1/(k+1) | k <- [0..]] :: [Double]
pis4 = scanl1 (+) ts4
pi4 n = pis4 !! n
This way we share and reuse the calculations as much as possible. This leads to the most efficient code and, hopefully, to the smallest cumulative numerical error. The formula also turned out to be exceptionally simple, and could even be simplified further as ts5 = scanl (/) 2 [ 2 + recip k | k <- [1..]].
Trying it out:
> pis2 = scanl1 (+) $ [ fromIntegral (2^(k+1))*fromIntegral (product[1..k])^2 /
          fromIntegral (product[1..(2*k+1)]) | k <- [0..] ] :: [Double]
> take 8 $ drop 30 pis2
[3.1415926533011587,3.141592653447635,3.141592653519746,3.1415926535552634,
3.141592653572765,3.1415926535813923,3.141592653585647,3.141592653587746]
> take 8 $ drop 90 pis2
[3.1415926535897922,3.1415926535897922,NaN,NaN,NaN,NaN,NaN,NaN]
> take 8 $ drop 30 pis4
[3.1415926533011587,3.141592653447635,3.141592653519746,3.1415926535552634,
3.141592653572765,3.1415926535813923,3.141592653585647,3.141592653587746]
> take 8 $ drop 90 pis4
[3.1415926535897922,3.1415926535897922,3.1415926535897922,3.1415926535897922,
3.1415926535897922,3.1415926535897922,3.1415926535897922,3.1415926535897922]
> pis4 !! 1000
3.1415926535897922
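For completeness, the further simplified formula mentioned above can be packaged the same way (pis5 and pi5 are just illustrative names; the terms are numerically identical to ts4):
ts5  = scanl (/) 2 [ 2 + recip k | k <- [1..]] :: [Double]
pis5 = scanl1 (+) ts5
pi5 n = pis5 !! n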

Use a rational number type instead of a fractional type in Haskell

I want to use a rational number type instead of a fractional type in Haskell (or the float/double types in C).
I get the result below:
8/(3-8/3) = 23.999...
8/(3-8/3) /= 24
I know about Data.Ratio; it supports the (+), (-), (*) and (/) operations on ratios:
1%3+3%3 == 4 % 3
8/(3-8%3) == 24 % 1
I had checked in Racket:
(= (/ 8 (- 3 (/ 8 3))) 24)
#t
What's the correct way to ensure 8/(3-8/3) == 24 in Haskell?
Use an explicit type somewhere in the chain. It will force the entire calculation to be performed with the correct type.
import Data.Ratio

main = do
  print $ 8/(3-8/3) == 24
  print $ 8/(3-8/3) == (24 :: Rational)
Prints
False
True
Data.Ratio.numerator and Data.Ratio.denominator return the numerator and denominator of the ratio in reduced form, so it is safe to compare the denominator to 1 to check whether the ratio is an integer.
import Data.Ratio

eq :: (Num a, Eq a) => Ratio a -> a -> Bool
eq r i = d == 1 && n == i
  where
    n = numerator r
    d = denominator r

main = print $ (8/(3-8%3)) `eq` 24

Haskell Decimal to Binary

I am trying to build a function that converts a decimal number (Int) into a binary number.
Unfortunately, other than in Java, it is not possible to divide an int by two in Haskell.
I am very new to functional programming, so the problem could be something trivial.
So far I could not find another solution to this problem, but here is my first try:
fromDecimal :: Int -> [Int]
fromDecimal 0 = [0]
fromDecimal n = if (mod n 2 == 0) then
                  do
                    0:fromDecimal(n/2)
                else
                  do
                    1:fromDecimal(n/2)
Here is a Java implementation which I did before:
public void fromDecimal(int decimal){
    for (int i = 0; i < values.length; i++){
        if (decimal % 2 == 0) {
            values[i] = true;
            decimal = decimal / 2;
        } else {
            values[i] = false;
        }
    }
}
Hopefully this is going to help to find a solution!
There are some problems with your solution. First of all, I advise against using do at all until you understand what do does; we do not need it here.
Unfortunately, other than in Java, it is not possible to divide an int by two in Haskell.
It actually is, but the / operator (which is in fact the (/) function), has type (/) :: Fractional a => a -> a -> a. An Int is not Fractional. You can perform integer division with div :: Integral a => a -> a -> a.
So then the code looks like:
fromDecimal :: Int -> [Int]
fromDecimal 0 = [0]
fromDecimal n = if (mod n 2 == 0) then 0:fromDecimal (div n 2) else 1:fromDecimal (div n 2)
But we can definitely make this more elegant. mod n 2 can only result in two outcomes: 0 and 1, and these are exactly the ones that we use at the left side of the (:) operator.
So we do not need to use an if-then-else at all:
fromDecimal :: Int -> [Int]
fromDecimal 0 = [0]
fromDecimal n = mod n 2 : fromDecimal (div n 2)
Likely this is still not exactly what you want: here we write the binary value such that the last element is the most significant one. This function will also add a trailing zero, which does not make a semantic difference (due to that order), but it is not elegant either.
We can define a function go that omits this zero if the given value is not zero, like:
fromDecimal :: Int -> [Int]
fromDecimal 0 = [0]
fromDecimal n = go n
  where go 0 = []
        go k = mod k 2 : go (div k 2)
If we however want to write the most significant bit first (so in the same order as we write decimal numbers), then we have to reverse the outcome. We can do this by making use of an accumulator:
fromDecimal :: Int -> [Int]
fromDecimal 0 = [0]
fromDecimal n = go n []
  where go 0 r = r
        go k rs = go (div k 2) (mod k 2:rs)
You cannot / integers in Haskell – division is not defined for integral numbers! For integral division use the div function, but in your case divMod, which comes with mod gratis, is more suitable.
Also, you are going to get reversed output, so you can reverse it manually afterwards, or use a more memory-efficient version with an accumulator:
decToBin :: Int -> [Int]
decToBin = go [] where
  go acc 0 = acc
  go acc n = let (d, m) = n `divMod` 2 in go (m : acc) d
go will give you an empty list for 0. You may add it manually if the list is empty:
decToBin = (\l -> if null l then [0] else l) . go [] where ...
Think through how your algorithm will work. It starts from 2⁰, so it will generate bits backward from how we ordinarily think of them, i.e., least-significant bit first. Your algorithm can represent non-negative binary integers only.
fromDecimal :: Int -> [Int]
fromDecimal d | d < 0     = error "Must be non-negative"
              | d == 0    = [0]
              | otherwise = reverse (go d)
  where go 0 = []
        go d = d `rem` 2 : go (d `div` 2)
In Haskell, when we need to generate a list in reverse order, the idiom is to go ahead and do so, then reverse the result at the end. The reason is that consing up a list (gluing new items at the head with :) has a constant cost per item and the single reverse at the end has a linear cost, whereas repeatedly appending with ++ would have a quadratic cost.
Common Haskell style is to have a private inner loop named go that the outer function applies when it’s happy with its arguments. The base case is to terminate with the empty list when d reaches zero. Otherwise, we take the current remainder modulo 2 and then proceed with d halved and truncated.
Without the special case for zero, fromDecimal 0 would be the empty list rather than [0].
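For example, in GHCi this definition behaves as follows:
*Main> fromDecimal 10
[1,0,1,0]
*Main> fromDecimal 0
[0]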
The binary numbers are usually strings and not really used in calculations.
Strings are also less complicated.
The pattern of binary numbers is like any other. It repeats but at a faster clip.
Only a small set is necessary to generate up to 256 (0-255) binary numbers.
The pattern can systematically be expanded for more.
The starting pattern is the 4 two-bit strings for 0-3:
bd = ["00","01","10","11"]
The function to combine them into larger numbers is
d2b n = head.drop n $ [ d++e++f++g | d <- bd, e <- bd, f <- bd, g <- bd]
d2b 125
"01111101"
If it's not obvious how to expand, then
bd = ["000","001","010","011","100","101","110","111"]
will give you up to 4096 binary numbers (0-4095). All else stays the same.
If it's not obvious, the d2b function combines 4 strings from the set, so (2^8) - 1 or (2^12) - 1 is the largest number you get.
By the way, list comprehensions are sugar-coated do structures.
Generate the above patterns with
[ a++b | a <- ["0","1"], b <- ["0","1"] ]
["00","01","10","11"]
and
[ a++b++c | a <- ["0","1"], b <- ["0","1"], c <- ["0","1"] ]
["000","001","010","011","100","101","110","111"]
More generally, one pattern and one function may serve the purpose
b2 = ["0","1"]
b4 = [ a++b++c++d | a <- b2, b <- b2, c <- b2, d <- b2]
b4
["0000","0001","0010","0011","0100","0101","0110","0111","1000","1001","1010","1011","1100","1101","1110","1111"]
bb n = head.drop n $ [ a++b++c++d | a <- b4, b <- b4, c <- b4, d <- b4]
bb 32768
"1000000000000000"
bb 65535
"1111111111111111"
To calculate binary from decimal directly in Haskell using subtraction:
cvtd n (x:xs) | x > n = 0 : cvtd n xs
              | n > x = 1 : cvtd (n-x) xs
              | True  = 1 : [0 | f <- xs]
Use any number of bits you want, for example 10 bits.
cvtd 639 [2^e|e<-[9,8..0]]
[1,0,0,1,1,1,1,1,1,1]
import Data.List

dec2bin x =
  reverse $ binstr $ unfoldr ndiv x
  where
    binstr = map (\x -> "01" !! x)
    exch (a,b) = (b,a)
    ndiv n =
      case n of
        0 -> Nothing
        _ -> Just $ exch $ divMod n 2
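For example, in GHCi (note that with this definition dec2bin 0 is the empty string):
*Main> dec2bin 10
"1010"
*Main> dec2bin 0
""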

Format Float as Int when printing in Haskell

This Haskell program prints "1.0" How can I get it to print "1"?
fact 0 = 1
fact x = x * fact (x-1)

place m n = (fact m) / (fact n) * (fact (m-n))

main = do
  print (place 0 0)
By using the / operation, you are asking Haskell to use a fractional data type. You probably don't want that in this case. It is preferable to use an integral type such as Int or Integer. So I suggest to do the following:
1. Add a type declaration for the fact function, something like fact :: Integer -> Integer
2. Use quot instead of /.
So your code should look like this:
fact :: Integer -> Integer
fact 0 = 1
fact x = x * fact (x-1)

place :: Integer -> Integer -> Integer
place m n = (fact m) `quot` (fact n) * (fact (m-n))

main = do
  print (place 0 0)
Also, as @leftaroundabout pointed out, you probably want to use a better algorithm for computing those binomial numbers.
You could just use round:
print (round $ place 0 0)
This changes the formatting to the one you want. redneb's answer is, however, the right approach.
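If you would rather keep place fractional and only drop the ".0" when the result happens to be a whole number, a small display helper is another option (just a sketch; showWhole is a made-up name and it assumes the original Double-returning place):
showWhole :: Double -> String
showWhole x = case properFraction x of
  (n, 0) -> show (n :: Integer)   -- whole number: print it without the ".0"
  _      -> show x

main = putStrLn (showWhole (place 0 0))   -- prints 1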

Haskell Int64 inconsistent?

I am trying to solve the problem 2's complement here (sorry, it requires login, but anyone can log in with a FB/Google account). The problem, in short, is to count the number of ones appearing in the 2's complement representation of all numbers in a given range [A, B], where A and B are within the 32-bit limits (at most 2^31 in absolute value). I know my algorithm is correct (it's logarithmic in the bigger absolute value, since I already solved the problem in another language).
I am testing the code below on my machine and it's giving perfectly correct results. When it runs on the Amazon server, it gives a few wrong answers (obviously overflows) and also some stack overflows. This is not a bug in the logic here, because I test the same code on my machine on the same test inputs and get different results. For example, for the range [-1548535525, 662630637] I get 35782216444 on my machine, while according to the tests, my result is some negative overflow value.
The only problem I can think of is that perhaps I am not using Int64 correctly, or I have a wrong assumption about its operation.
Any help is appreciated. Code is here.
The stack overflows are a bug in the logic.
countOnes !a !b | a == b = countOnes' a
countOnes' :: Int64 -> Integer
countOnes' !0 = 0
countOnes' !a = (fromIntegral (a .&. 1)) + (countOnes' (a `shiftR` 1))
Whenever you call countOnes' with a negative argument, you get a nonterminating computation, since the shiftR is an arithmetic shift and not a logical one, so you always shift in a 1-bit and never reach 0.
But even with a logical shift, for negative arguments, you'd get a result 32 too large, since the top 32 bits are all 1.
Solution: mask out the uninteresting bits before calling countOnes',
countOnes !a !b | a == b = countOnes' (a .&. 0xFFFFFFFF)
There are some superfluous guards in countOnes,
countOnes :: Int64 -> Int64 -> Integer
countOnes !a !b | a > b = 0
-- From here on we know a <= b
countOnes !a !b | a == b = countOnes' (a .&. 0xFFFFFFFF)
-- From here on, we know a < b
countOnes !0 !n = range + leading + (countOnes 0 (n - (1 `shiftL` m)))
  where
    range = fromIntegral $ m * (1 `shiftL` (m - 1))
    leading = fromIntegral $ (n - (1 `shiftL` m) + 1)
    m = (getLog n) - 1
-- From here on, we know a /= 0
countOnes !a !b | a > 0 = (countOnes 0 b) - (countOnes 0 (a - 1))
-- From here on, we know a < 0,
-- the guard in the next and the last equation are superfluous
countOnes !a !0 | a < 0 = countOnes (maxInt + a + 1) maxInt
countOnes !a !b | b < 0 = (countOnes a 0) - (countOnes (b + 1) 0)
countOnes !a !b | a < 0 = (countOnes a 0) + (countOnes 0 b)
The integer overflows on the server are caused by
getLog :: Int64 -> Int
--
countOnes !0 !n = range + leading + (countOnes 0 (n - (1 `shiftL` m)))
  where
    range = fromIntegral $ m * (1 `shiftL` (m - 1))
    leading = fromIntegral $ (n - (1 `shiftL` m) + 1)
    m = (getLog n) - 1
because the server has a 32-bit GHC, while you have a 64-bit one. The shift distance/bit width m is an Int (and because it's used as the shift distance, it has to be).
Therefore
m * (1 `shiftL` (m-1))
is an Int too. For m >= 28, that overflows a 32-bit Int.
Solution: remove a $
range = fromIntegral m * (1 `shiftL` (m - 1))
Then the 1 that is shifted is an Integer, hence no overflow.
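To see the difference concretely, here is a small standalone illustration (not the original code; it uses Int32 so the 32-bit overflow is visible even on a 64-bit GHC):
import Data.Bits (shiftL)
import Data.Int (Int32)

-- Whole product computed in Int32 and only then converted: can overflow.
bad :: Int -> Integer
bad m = fromIntegral $ (fromIntegral m :: Int32) * (1 `shiftL` (m - 1))

-- Only m is converted; the shift and the multiplication happen in Integer.
good :: Int -> Integer
good m = fromIntegral m * (1 `shiftL` (m - 1))

main = do
  print (bad 30)    -- wraps around in Int32
  print (good 30)   -- 30 * 2^29 = 16106127360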
