From the Haskell report:
The quot, rem, div, and mod class methods satisfy these laws if y is non-zero:
(x `quot` y)*y + (x `rem` y) == x
(x `div` y)*y + (x `mod` y) == x
quot is integer division truncated toward zero, while the result of div is truncated toward negative infinity.
For example:
Prelude> (-12) `quot` 5
-2
Prelude> (-12) `div` 5
-3
What are some examples of where the difference between how the result is truncated matters?
Many languages have a "mod" or "%" operator that gives the remainder after division with truncation towards zero; for example, C, C++, and Java (and probably C#) would say:
(-11)/5 = -2
(-11)%5 = -1
5*((-11)/5) + (-11)%5 = 5*(-2) + (-1) = -11.
Haskell's quot and rem are intended to imitate this behaviour. I can imagine compatibility with the output of some C program might be desirable in some contrived situation.
Haskell's div and mod, like Python's // and %, follow the convention of mathematicians (at least number-theorists) in always truncating division downward (not towards zero, but towards negative infinity), so that for a positive divisor the remainder is always nonnegative. Thus in Python,
(-11)//5 = -3
(-11)%5 = 4
5*((-11)//5) + (-11)%5 = 5*(-3) + 4 = -11.
Haskell's div and mod follow this behaviour.
This is not exactly an answer to your question, but in GHC on x86, quotRem on Int will compile down to a single machine instruction, whereas divMod does quite a bit more work. So if you are in a speed-critical section and working on positive numbers only, quotRem is the way to go.
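You can see the two pairs side by side in GHCi (any recent GHC prints the same):
Prelude> (-12) `quotRem` 5
(-2,-2)
Prelude> (-12) `divMod` 5
(-3,3)
Both pairs satisfy d*5 + m == -12; quotRem truncates toward zero, so its remainder takes the sign of the dividend, while divMod floors, keeping the remainder nonnegative.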
A simple example where it would matter is testing if an integer is even or odd.
let buggyOdd x = x `rem` 2 == 1
buggyOdd 1    -- True
buggyOdd (-1) -- False (wrong!)
let odd x = x `mod` 2 == 1
odd 1    -- True
odd (-1) -- True
Note, of course, you could avoid thinking about these issues by just defining odd in this way:
let odd x = x `rem` 2 /= 0
odd 1    -- True
odd (-1) -- True
In general, just remember that, for y > 0, x `mod` y always returns something >= 0, while x `rem` y returns 0 or something with the same sign as x.
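You can see the rule at a glance in GHCi:
Prelude> map (`mod` 3) [-2, -1, 0, 1, 2]
[1,2,0,1,2]
Prelude> map (`rem` 3) [-2, -1, 0, 1, 2]
[-2,-1,0,1,2]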
I solved the following exercise, but I'm not a fan of the solution:
Write the function isPerfectSquare using recursion, to tell if an Int is a perfect square
isPerfectSquare 1 -> Should return True
isPerfectSquare 3 -> Should return False
The num+1 part is there for the cases isPerfectSquare 0 and isPerfectSquare 1, and it's one of the parts I don't like one bit. This is my solution:
perfectSquare 0 1 = 0 : perfectSquare 1 3
perfectSquare current diff = current : perfectSquare (current + diff) (diff + 2)
isPerfectSquare num = any (== num) (take (num + 1) (perfectSquare 0 1))
What is a more elegant solution to this problem? Of course, we can't use sqrt or floating-point operations.
@luqui you mean like this?
pow n = n * n

perfectSquare pRoot pSquare
  | pow pRoot == pSquare = True
  | pow pRoot > pSquare  = perfectSquare (pRoot - 1) pSquare
  | otherwise            = False

isPerfectSquare number = perfectSquare number number
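For instance:
Prelude> isPerfectSquare 16
True
Prelude> isPerfectSquare 0
True
Prelude> isPerfectSquare 3
False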
I can't believe I didn't see it xD thanks a lot! I must be really tired
You can perform a sort of binary search over an implicit list of squares. There is, however, a problem: we first need an upper bound. We can use the number itself as the upper bound, since for every nonnegative integer n, the square root of n is at most n.
So it could look like:
isPerfectSquare n = search 0 n
  where
    search i k
      | i > k     = False
      | j2 > n    = search i (j - 1)
      | j2 < n    = search (j + 1) k
      | otherwise = True
      where
        j  = div (i + k) 2
        j2 = j * j
To verify that a number n is a perfect square, we thus have an algorithm that runs in O(log n), assuming the integer operations run in constant time (for example, if the number of bits is fixed).
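A few quick checks in GHCi:
Prelude> map isPerfectSquare [0, 1, 2, 3, 4, 15, 16, 17]
[True,True,False,False,True,False,True,False]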
Wikipedia suggests using Newton's method. Here's how that would look. We'll start with some boilerplate. ensure is a little combinator I've used fairly frequently. It's written to be very general, but I've included a short comment that should be pretty explanatory for how we'll plan to use it.
import Control.Applicative
import Control.Monad

ensure :: Alternative f => (a -> Bool) -> a -> f a
ensure p x = x <$ guard (p x)
-- ensure p x | p x       = Just x
--            | otherwise = Nothing
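Specialized to Maybe, which is how we'll use it here:
Prelude> ensure even 4 :: Maybe Int
Just 4
Prelude> ensure even 3 :: Maybe Int
Nothing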
Here's the implementation of the formula given by Wikipedia for taking one step in Newton's method. x is our current guess about the square root, and n is the number we're taking the square root of.
stepApprox :: Integer -> Integer -> Integer
stepApprox x n = (x + n `div` x) `div` 2
Now we can recursively call this stepping function until we get the floor of the square root. Since we're using integer division, the right termination condition is to watch for the next step of the approximation to be equal to, or one greater than, the current one. This is the only recursive function.
iterateStepApprox :: Integer -> Integer -> Integer
iterateStepApprox x n = case x' - x of
    0 -> x
    1 -> x
    _ -> iterateStepApprox x' n
  where
    x' = stepApprox x n
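For concreteness, here is the iteration for n = 100, starting from the guess (n+1) `div` 2 = 50 that isqrt below uses:
stepApprox 50 100 = (50 + 2) `div` 2 = 26
stepApprox 26 100 = (26 + 3) `div` 2 = 14
stepApprox 14 100 = (14 + 7) `div` 2 = 10
stepApprox 10 100 = (10 + 10) `div` 2 = 10   -- fixed point: the floor of sqrt 100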
To wrap the whole development up in a nice API: to check whether a number is a square, we can just check that the floor of its square root squares back to it. We also need to pick a starting approximation, but we don't have to be super smart; Newton's method converges very quickly for square roots. We'll pick half the number (rounded up) as our starting approximation. To avoid division by zero and other nonsense, we'll make zero and negative numbers special cases.
isqrt :: Integer -> Maybe Integer
isqrt n | n < 0 = Nothing
isqrt 0 = Just 0
isqrt n = ensure (\x -> x*x == n) (iterateStepApprox ((n+1)`div`2) n)
Now we're done! It's pretty fast even for large numbers:
> :set +s
> isqrt (10^10000) == Just (10^5000)
True
(0.58 secs, 182,610,408 bytes)
Yours would spend rather longer than the universe has got left computing that. It is also marginally faster than the binary search algorithm in my tests. (Of course, not hand-rolling it yourself is several orders of magnitude faster still, probably in part because it uses a better, but more complicated, algorithm based on Karatsuba multiplication.)
If the function is recursive, then it is most likely primitive recursive, as are 90% of all recursive functions. For those, folds are fast and effective. Considering the programmer's time, keeping things simple and correct is important.
Now, that said, it might be fruitful to consider the textual output of functions like sqrt. sqrt returns a floating-point number. If a number is a perfect square, then the shown result ends in the two characters ".0". Checking the end of a string is awkward, but if the string goes in reversed, then "0." sits at the front where take can grab it.
This function takes a number and returns a Bool:
fps n = (take 2 . reverse . show $ n / sqrt n) == "0."
fps 10000.00001
False
fps 10000
True
I'm learning Haskell and have been practising writing some functions myself, among them the calculation of sine using recursion, but I get strange results.
The formula I'm using to calculate the sine is the Taylor series:
sin x = sum over k >= 0 of (-1)^k * x^(2k+1) / (2k+1)!
And my code is this:
-- Returns n to the power p
pow :: Float -> Integer -> Float
pow n p =
  if p == 0 then
    1
  else if p == 1 then
    n
  else
    n * pow n (p - 1)

-- Finds a number's factorial
f :: Integer -> Integer
f n =
  if n == 1 then
    n
  else
    n * f (n - 1)
--TODO: Trigonometric functions ( :v I'll do diz 2)
sinus :: Float -> Char -> Float
sinus n deg =
  if deg == 'd' then
    sinusr 0 (normalize (torad n)) 0
  else
    sinusr 0 (normalize n) 0
-- Gets the radian value equivalent to the given degrees
torad :: Float -> Float
torad v = (v * pi) / 180
-- Recursively sums the series terms for the given radians
sinusr :: Integer -> Float -> Float -> Float
sinusr k x result =
  if k == 130 then
    result + (pow (-1) k * pow x (2*k + 1) / fromIntegral (f (2*k + 1)))
  else
    result + sinusr (k + 1) x (pow (-1) k * pow x (2*k + 1) / fromIntegral (f (2*k + 1)))

-- Subtracts 2*pi as many times as needed to leave a value smaller in magnitude than 2*pi :v
normalize :: Float -> Float
normalize a = a - fromIntegral (truncate (a / (pi*2))) * (pi*2)
For example, the output is this:
*Main> sinus 1 'd'
1.7452406e-2
*Main> sinus 1 's'
0.84147096
*Main> sinus 2 's'
NaN
*Main> sinus 2 'd'
3.4899496e-2
Can someone tell me why it is showing me that?
I have implemented the same logic in Lisp and it runs perfectly; I just had to figure out the Haskell syntax. But as you can see, it is not working as it should.
Thank you very much in advance.
Single-precision arithmetic isn't accurate enough to calculate a trigonometric function this way. The exponent doesn't have enough bits for the large intermediate numbers in sinusr. Or, to be blunt, the following number doesn't fit in a Float:
ghci> 2 ^ 130 :: Float
Infinity
As soon as you hit the boundaries of floating point numbers (-Infinity, Infinity) you usually end up with either those or NaN.
Use Double instead. Your Lisp implementation probably uses double-precision floating-point numbers too. Even better, don't recalculate the whole fraction at every step; instead, update the numerator and denominator incrementally, and then your values won't get too large even for Float.
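Here is a minimal sketch of that incremental idea (my rewrite, using Double; the names sine and go are mine, not from your code):
-- Each Taylor term follows from the previous one by
--   term(k+1) = term(k) * (-x^2) / ((2k+2) * (2k+3))
-- so no huge powers or factorials are ever materialized.
sine :: Double -> Double
sine a = go (0 :: Int) x 0
  where
    x = normalize a
    go k term acc
      | abs term < 1e-15 = acc   -- remaining terms are too small to matter
      | otherwise        = go (k + 1) term' (acc + term)
      where
        term' = term * negate (x * x)
                     / fromIntegral ((2 * k + 2) * (2 * k + 3))
    -- reduce the argument modulo 2*pi, as your normalize does
    normalize v = v - fromIntegral (truncate (v / (2 * pi)) :: Integer) * (2 * pi)
With this, sine 1 is approximately 0.8414709848078965, and sine 2 yields a number instead of NaN.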
I started learning Haskell recently, and in my class right now we have constructed a Peano number type and made it an instance of the Num typeclass.
During lecture, my professor claimed that depending on whether you viewed the successor function as S x = x + 1 or S x = 1 + x, the appropriate successor case for the multiplication definition would be different. Respectively:
x * S y = x * y + x
x * S y = x + x * y
Moreover, he claimed that using the first of these two choices is preferable because it is lazier, but I'm having trouble seeing how this is the case.
We looked at the example in which the addition definition of
x + S y = S (x + y)
is better than
x + S y = S x + y
because evaluating x + y == z occurs much faster, but I can't find an analogous case for multiplication.
The lecture notes are here: http://cmsc-16100.cs.uchicago.edu/2014/Lectures/lecture-02.php
Laziness is not about speed but about how soon information becomes available.
With x * S y = x * y + x, you can answer infinity * 2 > 5 very quickly, because it expands like so:
infinity * (S (S Z)) > 5
infinity * (S Z) + infinity > 5
infinity * Z + infinity + infinity > 5
infinity + infinity > 5
(from there the rest is trivial)
However, I don't think it is all as good as your professor claimed! Try to expand out 2 * infinity > 5 in this formalism and you'll be disappointed (or busy for a very long time :-P). On the other hand, with the other definition of multiplication, you do get an answer there.
Now, if we have the "good" definition of addition, it should be the case that you can get an answer with infinity in either position. And indeed, I checked the source of a few Haskell packages that define Nats, and they prefer x * S y = x + x * y rather than the way your professor claimed was better.
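If you want to play with this, here is a self-contained sketch (the names Nat, add, mul, gt, and fromInt are mine, not from the lecture notes):
data Nat = Z | S Nat

infinity :: Nat
infinity = S infinity

-- the "good" addition: x + S y = S (x + y)
add :: Nat -> Nat -> Nat
add x Z     = x
add x (S y) = S (add x y)

-- the professor's preferred multiplication: x * S y = x * y + x
mul :: Nat -> Nat -> Nat
mul _ Z     = Z
mul x (S y) = add (mul x y) x

-- structural (>); forces only as many constructors as needed
gt :: Nat -> Nat -> Bool
gt Z     _     = False
gt (S _) Z     = True
gt (S a) (S b) = gt a b

fromInt :: Int -> Nat
fromInt 0 = Z
fromInt n = S (fromInt (n - 1))
With the lazy addition above, both gt (mul infinity (fromInt 2)) (fromInt 5) and gt (mul (fromInt 2) infinity) (fromInt 5) evaluate to True in finite time.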
Say one wants to calculate the function:
f (x,y) = ((x `mod` 3)+(y `mod` 3)) `mod` 2
Then, if one expands f (-1,0) manually, one gets:
((-1 `mod` 3)+(0 `mod` 3)) `mod` 2
1
If one, however, defines the function and then applies it, the result is:
let f (x,y) = ((x `mod` 3)+(y `mod` 3)) `mod` 2 in f (-1,0)
0
Why does storing the function give a result different from expanding it by hand?
I assume this is because f uses Integral instead of Int?
Looks like it's a matter of parsing. -1 `mod` 3 gets parsed as -(1 `mod` 3) and not (-1) `mod` 3.
*Main> -(1 `mod` 3)
-1
*Main> (-1) `mod` 3
2
Honestly, the way unary - works in Haskell is a bit of a hack that I personally find confusing. If I really need a negative literal, I usually just add the extra parentheses to be sure.
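This also explains the two results above: in f (-1, 0), the -1 is parenthesized by the tuple, so mod really does see the value -1, whereas the manual expansion textually produced -1 `mod` 3, which parses as -(1 `mod` 3). You can check the bound-value behaviour directly:
Prelude> let x = -1 in x `mod` 3
2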
Another thing to consider is that Haskell has two modulo-like functions, mod and rem, that treat negative numbers differently. For more details, see the other questions here on quot/rem versus div/mod.
The type class Integral has two operations, quot and div, yet the Haskell 2010 Language Report does not specify what they're supposed to do. Assuming that div is integral division, what does quot do differently, or what is the purpose of quot? When do you use one, and when the other?
To quote section 6.4.2 from the Haskell report:
The quot, rem, div, and mod class methods satisfy these laws if y is non-zero:
(x `quot` y)*y + (x `rem` y) == x
(x `div` y)*y + (x `mod` y) == x
quot is integer division truncated toward zero, while the result of div is truncated toward negative infinity.
The div function is often the more natural one to use, whereas the quot function corresponds to the machine instruction on modern machines, so it's somewhat more efficient.
The two behave differently when dealing with negative numbers. Consider:
Hugs> (-20) `divMod` 3
(-7,1)
Hugs> (-20) `quotRem` 3
(-6,-2)
Here, -7 * 3 + 1 = -20 and -6 * 3 + (-2) = -20, but the two ways give you different answers.
Also, see here: http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html
The definition for quot is "integer division truncated toward zero", whereas the definition for div is "integer division truncated toward negative infinity".
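If you want to convince yourself of the two laws quoted above, a property test is a quick way (a sketch assuming the QuickCheck package is available):
import Test.QuickCheck

-- (x `quot` y)*y + (x `rem` y) == x, whenever y is non-zero
prop_quotRem :: Integer -> Integer -> Property
prop_quotRem x y = y /= 0 ==> (x `quot` y) * y + (x `rem` y) == x

-- (x `div` y)*y + (x `mod` y) == x, whenever y is non-zero
prop_divMod :: Integer -> Integer -> Property
prop_divMod x y = y /= 0 ==> (x `div` y) * y + (x `mod` y) == x

main :: IO ()
main = quickCheck prop_quotRem >> quickCheck prop_divMod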