Haskell - how can I check if a number is Double/Float?

I would like to do something like:
x `mod` 1.0 == 0 -- => int
but it seems mod only works on integral types... help!
EDIT:
I am trying to check if a given number is triangular (http://en.wikipedia.org/wiki/Triangle_number), so my idea was to solve n(n+1)/2 = s for n, giving n1 = (-1 + sqrt(1 + 8s)) / 2, and then check whether n1 is an integer.

To determine whether a certain Float or Double is indistinguishable from an Integer in Haskell, use floor and ceiling together. Something like:
if floor n == ceiling n
then "It was some integer."
else "It's between integers."
There might also be some fancy stuff you can do with the float's representation in binary, exposed by the RealFloat typeclass:
http://hackage.haskell.org/packages/archive/base/latest/doc/html/Prelude.html#t%3ARealFloat
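A minimal sketch of that check, applied to the root formula from the question (the names here are illustrative, not from the original code):

```haskell
-- Check whether a Double is (numerically) a whole number.
isWhole :: Double -> Bool
isWhole x = (floor x :: Integer) == ceiling x

-- Candidate root from n(n+1)/2 = s  =>  n = (-1 + sqrt(1 + 8s)) / 2
isTriangular :: Double -> Bool
isTriangular s = isWhole ((-1 + sqrt (1 + 8 * s)) / 2)

main :: IO ()
main = print (map isTriangular [10, 11, 15])  -- [True,False,True]
```

Note that this inherits all the usual floating-point caveats: for large s, sqrt may round and the comparison can give the wrong answer.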

A better way to check if a number is triangular is to generate a list of triangular numbers and then see if your candidate is in it. Since this is a learning problem I'm going to give hints rather than the answer.
Use a list comprehension to generate the triangular numbers.
Since they will be in order you can find out if you have gone past them.
An alternative approach if you are working with big numbers would be to use a binary search to narrow down the number of rows that might give rise to your candidate.
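If you do want to peek, the hints above might be realized roughly like this (a sketch, not necessarily the intended solution):

```haskell
-- Infinite ascending list of triangular numbers: 1, 3, 6, 10, ...
triangulars :: [Integer]
triangulars = [n * (n + 1) `div` 2 | n <- [1 ..]]

-- Because the list is ascending, we can stop once we pass the candidate.
isTriangular :: Integer -> Bool
isTriangular s = s `elem` takeWhile (<= s) triangulars
```

This stays entirely in Integer, so there are no floating-point rounding worries, at the cost of O(sqrt s) work per query.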

Total edit:
Okay, I'm still not sure what you're trying to accomplish here.
First, anything modulo 1 is going to be zero, because the modulo function only makes sense on integers. If you want to take the modulo of a fractional type you can convert to an integer first. Edit: Although for what it's worth, Data.Fixed does have a mod' function for non-integral values.
I also don't know what you mean by "check if n1 is Int". Either it is or it isn't; you don't need to check at run time. Edit: Okay, I see now that you're just checking to see if a value has a fractional component. Paul Johnson correctly points out above that it's wise to be careful doing such things with floating point values.
If you want to mix mod and sqrt operations in the same calculation, you'll have to manually convert between appropriate types. fromIntegral will convert any integer type into any number type, floor, ceiling, and round will convert fractional types to integral types.

Related

How to Store / Use Decimal and money values in firestore / nodejs

In MongoDB, there is a data type "Decimal128" which holds the value of a decimal correctly (see the "why" here).
What is the recommended way to store / use decimal and money types in Firebase? Convert to and from BigDecimal? Or is the decimal type in Firestore sufficient for overcoming rounding issues?
According to the documentation, Firestore's floating-point type is a 64-bit double precision, IEEE 754 value. This format has imprecision due to rounding. There is no "decimal" format in Firestore as you find in other databases, and there is no formally recommended type for monetary values, so you will have to represent them in some other way. A web search may help you with that.
As @Doug notes, floating-point imprecision makes double unsuitable for storing currency (or any other decimal values that require exactness), particularly if you want to do any math on these stored values.
While your solution of using the String type to store decimals will work, it might cause issues if you perform math on those values later.
One alternative is to store currency as int 'cents', then divide by 100 when displaying to the user – and thus multiply by 100 before storing user input.
For example, with double floats:
print(0.2 + 0.1);
= 0.30000000000000004
Instead as int x 100:
int x = 20;
int y = 10;
print((x+y)/100);
= 0.3
This might get unwieldy if your project makes use of many different currency fields, but for most things there's a certain simplicity and transparency to using int x100 for base-100 currencies that I think keeps code predictable.
I'd go with a similar approach to the one @djoll mentioned: a common thing we do at Google is to store amount_micros instead of amount, which means for $1 you'd store 1,000,000 as an int. It's much easier to perform math this way.
If you want your int (say the number 350) to have decimals, you could add '.00' to the end of it on the client-side. For example, in Angular 2+ you can use the decimal pipe, like so:
{{350 | number:'3.2-5'}}
<!--output: '350.00'-->

Function to find the least prime factor

Does PARI/GP have a function for finding the smallest prime factor of a t_INT or otherwise perform a partial factorization of an integer?
For example, if I have the number:
a=261432792226751124747858820445742044652814631500046047326053169701039080900441047539208779404889565067
it takes a long time to do factor(a) because a contains two huge prime factors. However, it is quite easy to find that 17 is a divisor of a.
Of course in this case I could have used just forprime(p=2,,a % p == 0 && return(p)) or a similar trial division to find the factor. But if the least factor had had 20 decimal figures, say, that would be impractical, and I might have wanted to use the sophisticated methods of factor in that case.
So it would be ideal if I could call factor with some kind of flag saying I will be happy with any partial factorization, or saying that all I care about is the smallest non-trivial divisor, etc.
A very simple partial answer to my question is that factor has an optional argument lim, so you can just say:
factor(a, 10^5)
for example, and only factors below 10^5 will appear in the result (the cofactor greater than 10^5 can be composite!).
The optional argument to factorint is entirely different, a bit-wise "flag", and it does not allow you to specify a limit. That was probably what confused me. As an example:
factorint(a, 1+8)
selects flags 1 ("avoid MPQS") and 8 ("don't run final ECM").

Function giving slightly different answer than expected

I'm doing some monad stuff in Haskell and I wrote a function that calculates the probability of winning a gambling game given the game's decision tree. It works like a charm, except for the fact that it sometimes returns SLIGHTLY different answers than expected. For example, I'm uploading my code to DOMjudge and it returns an error, saying that the correct answer should be 1 % 6 instead of 6004799503160661 % 36028797018963968, which is what my function is returning. If you actually do the division they're both nearly the same, but I don't understand why my answer is still slightly different. I've been messing around with different types (using Real instead of Int for example), but so far no luck. I'm kind of new to this stuff and I can't seem to figure this out. Can anyone point me in the right direction?
-code deleted-
You're losing precision due to the division in probabilityOfWinning. You have the right idea in using type Rational = Ratio Integer, but you're applying it too late: by the time you call toRational, the division has already been done in floating point and the precision is gone.
Try something like this
import Data.Ratio
probabilityOfWinning tree = countWins tree % countGames tree
And then remove the Real type restrictions from countWins and countGames so that they return whole integers instead of floating point numbers. These together will make sure you always use infinite precision math instead of floating point.
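To see the difference concretely, compare converting to Rational after a Double division with doing the division in Rational from the start (the first line reproduces the exact fraction from the question):

```haskell
import Data.Ratio

main :: IO ()
main = do
  -- the division happens in Double first, then the lossy result is converted
  print (toRational (1 / 6 :: Double))  -- 6004799503160661 % 36028797018963968
  -- the division happens in Rational, so no precision is ever lost
  print (1 % 6 :: Rational)             -- 1 % 6
```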

High precision floating point numbers in Haskell?

I know Haskell has native data types which allow you to have really big integers so things like
>> let x = 131242358045284502395482305
>> x
131242358045284502395482305
work as expected. I was wondering if there was a similar "large precision float" native structure I could be using, so things like
>> let x = 5.0000000000000000000000001
>> x
5.0000000000000000000000001
could be possible. If I enter this in Haskell, it truncates down to 5 if I go beyond 15 decimal places (double precision).
Depending on exactly what you are looking for:
Float and Double - pretty much what you know and "love" from Floats and Doubles in all other languages.
Rational which is a Ratio of Integers
FixedPoint - This package provides arbitrary sized fixed point values. For example, if you want a number that is represented by 64 integral bits and 64 fractional bits you can use FixedPoint6464. If you want a number that is 1024 integral bits and 8 fractional bits then use $(mkFixedPoint 1024 8) to generate type FixedPoint1024_8.
EDIT: And yes, I just learned about the numbers package mentioned above - very cool.
Haskell does not have high-precision floating-point numbers natively.
For a package/module/library for this purpose, I'd refer to this answer to another post. There's also an example which shows how to use this package, called numbers.
If you need high-precision and fast floating-point calculations, you may need to use the FFI and long doubles, as the native Haskell type is not implemented yet (see https://ghc.haskell.org/trac/ghc/ticket/3353).
I believe the standard package for arbitrary precision floating point numbers is now https://hackage.haskell.org/package/scientific
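If exactness (rather than a floating radix point) is what you need, the Rational type mentioned above already handles the literal from the question, though it won't give you fast hardware floats:

```haskell
import Data.Ratio

-- exactly 5.0000000000000000000000001, no truncation
x :: Rational
x = 5 + 1 % 10 ^ (25 :: Integer)

main :: IO ()
main = do
  print x                           -- exact numerator % denominator
  print (fromRational x :: Double)  -- collapses back to 5.0
```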

Why do most programming languages only give one answer to square root of 4?

Most programming languages give 2 as the answer to square root of 4. However, there are two answers: 2 and -2. Is there any particular reason, historical or otherwise, why only one answer is usually given?
Because:
In mathematics, √x commonly, unless otherwise specified, refers to the principal (i.e. positive) root of x [http://mathworld.wolfram.com/SquareRoot.html].
Some languages don't have the ability to return more than one value.
Since you can just apply negation, returning both would be redundant.
If the square root method returned two values, then one of those two would practically always be discarded. In addition to wasting memory and complexity on the extra return value, it would be little used. Everyone knows that you can multiply the answer returned by -1 and get the other root.
I expect that only mathematical languages would return multiple values here, perhaps as an array or matrix. But for most general-purpose programming languages, there is negligible gain and non-negligible cost to doing as you suggest.
Some thoughts:
Historically, functions were defined as procedures which returned a single value.
It would have been fiddly (using primitive programming constructs) to define a clean function which returned multiple values like this.
There are always exceptions to the rule:
0 for example only has a single root (0).
You cannot take the square root of a negative number (unless the language supports complex numbers). This could be treated as an exception (like "divide by 0") in languages which don't support imaginary numbers or the complex number system.
It is usually simple to deduce the 2 square roots (simply negate the value returned by the function). This was probably left as an exercise by the caller of the sqrt() function, if their domain depended on dealing with both the positive (+) and negative (-) roots.
It's easier to return one number than to return two. Most engineering decisions are made in this manner.
There are many functions which return only one answer out of two or more possibilities. Arc tangent, for example: the arc tangent of 1 is returned as 45 degrees, but it could also be 225 or even 405. As with many things in life and programming, there is a convention we know and can rely on; that square root functions return the positive root is one of them. It is up to us, the programmers, to keep in mind that there are other solutions and to act on them if needed in code.
By the way this is a common issue in robotics when dealing with kinematics and inverse kinematics equations where there are multiple solutions of links positions corresponding to Cartesian positions.
In mathematics, by convention it's always assumed that you want the positive square root of something unless you explicitly say otherwise. The square root of four really is two. If you want the negative answer, put a negative sign in front. If you want both, put the plus-or-minus sign. Without this convention it would be impossible to write equations; you would never know what the person intended even if they did put a sign in front (because it could be the negative of the negative square root, for example). Also, how exactly would you write any kind of computer code involving mathematics if operators started returning two values? It would break everything.
The unfortunate exception to this convention is when solving for variables. In the following equation:
x^2 = 4
You have no choice but to consider both possible values for x. If you take the square root of both sides, you get x = 2, but now you must put in the plus-or-minus sign to make sure you aren't missing any possible solutions. Also, remember that in this case it's technically x that can be either positive or negative, not the square root of four.
Because multiple return values are annoying to implement. If you really need the other result, isn't it easy enough to just multiply the result by -1?
Because most programmers only want one answer.
It's easy enough to generate the negative value from the positive value if the caller wants it. For most code the caller only uses the positive value.
However, nowadays it's easy to return two values in many languages. In JavaScript:
var sqrts = function(x) {
  if (x < 0) {
    return [];  // no real square roots
  }
  var s = Math.sqrt(x);
  if (s > 0) {
    return [s, -s];
  } else {
    return [0];
  }
};
As long as the caller knows to iterate through the array that comes back, you're gold.
>sqrts(2)
[1.4142135623730951, -1.4142135623730951]
I think because the function is called "sqrt", and if you wanted multiple roots, you would have to call the function "sqrts", which doesn't exist, so you can't do it.
The more serious answer is that you're suggesting a specific instance of a larger issue. Many equations, and commonly inverse functions (including sqrt) have multiple possible solutions, such as arcsin, etc, and these are, in general, an issue. With arcsin, for example, should one return an infinite number of answers? See, for example, discussions about branch cuts.
Because it was historically defined{{citation needed}} as the function which gives the side length of a square of known surface. And length is positive in that context.
You can always derive the other root from the one returned, so maybe it's not necessary to return both of them.
It's likely because when people use a calculator to figure out a square root, they only want the positive value.
Go one step further and ask why your calculator won't let you take the square root of a negative number. It's possible, using imaginary numbers, but the average user has absolutely zero use for this.
