Format float with given precision ignoring trailing zeros - rust

I'm looking for a way to format! a var: f64 with a given precision prec.
I know I can use format!("{:.1$}", var, prec). The problem is that, given var = 3.1 and prec = 3, I get "3.100" as output. I'm looking for a way to omit the trailing zeros, so that var = 3.1 gives "3.1", 3.0 gives "3", and 3.14159 gives "3.142".
Is there a reasonably easy way to achieve this?

You can use my library float_pretty_print. It produces output similar to what you want by running the standard float formatter multiple times and choosing the most suitable result. You can set a minimum and a maximum width.
$ cargo run --example=repl
Type number to be formatted using float_pretty_print
Also type `width=<number>` or `prec=<number>` to set width or precision.
3.1
3.1
3.12345
3.12345
prec=5
3.1
3.1
3.12345
3.123
width=5
3.1
3.100
Note that instead of a precision, you set the maximum width in characters of the whole output (not just the fractional part).
Also, do not expect high performance: the library may allocate multiple times while processing a single number.
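
If pulling in a crate is more than you want, the usual alternative is to format at the requested precision and then strip trailing zeros (and a dangling decimal point). Here is a minimal sketch of that idea; it is written in Python only to keep the illustration compact, and the two steps map directly onto Rust's format!("{:.prec$}", var) followed by trim_end_matches('0') and trim_end_matches('.'):

def fmt(var, prec):
    # Format at the given precision, then strip trailing zeros and,
    # if the fraction vanishes entirely, the decimal point as well.
    s = f"{var:.{prec}f}"
    return s.rstrip("0").rstrip(".") if "." in s else s

print(fmt(3.1, 3))      # 3.1
print(fmt(3.0, 3))      # 3
print(fmt(3.14159, 3))  # 3.142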

Related

Why are float numbers rounded up in Cassandra?

I created a Cassandra table with a column of type DataType.FLOAT.
I execute my CQL using CqlSession:
CqlSessionBuilder builder = CqlSession.builder();
builder.addContactPoint(new InetSocketAddress(properties.getHost(), properties.getPort()));
builder.withLocalDatacenter(properties.getDatacenter());
builder.withAuthCredentials(properties.getUsername(), properties.getPassword());
builder.build();
But when I insert float numbers, they get rounded:
12334.9999 -> 12335.0
0.999999 -> 0.999999
12345.9999 -> 12346.0
It seems like Cassandra rounds the float based on its total number of significant digits, not just the digits after the decimal point.
What are the options to solve this problem? I know I can use the Decimal datatype, but maybe you have another solution?
I actually covered this issue with Apache Cassandra and DataStax Astra DB in an article I wrote last month:
The Guerilla Guide to Building E-commerce Product Services with DataStax Astra DB
So the problem here is that FLOAT is a fixed-precision floating point type. When a numeric value is converted from base 10 (decimal) to base 2 (binary), it has to fit into 32 binary digits, each of which must hold a zero or a one (obviously). It is during this conversion between base 10 and base 2 that rounding errors occur, and the likelihood of a rounding error grows with the number of significant digits (on either side of the decimal point).
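
To watch that conversion happen outside Cassandra, you can round-trip a value through an IEEE 754 single-precision float; this sketch uses Python's struct module purely as an illustration:

import struct

def as_float32(value):
    # Pack to 4-byte IEEE 754 single precision and unpack again,
    # mimicking what storing into a FLOAT column does to the value.
    return struct.unpack("f", struct.pack("f", value))[0]

print(as_float32(12334.9999))  # 12335.0 -- 9 significant digits don't fit a 24-bit significand
print(as_float32(0.999999))    # 0.9999989867210388 -- displays as 0.999999 at ~7 digits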
What are the options to solve this problem? I know I can use the Decimal datatype, but maybe you have another solution?
Well, you mentioned the best solution (IMO), which is to use a DECIMAL to store the value. This works because DECIMAL is an arbitrary-precision type. The values in a DECIMAL are stored in base 10, so there's no conversion necessary and only the required precision is used.
Before arbitrary-precision types came along, we used to use INTEGERs for things that had to be exact. The first e-commerce team I worked on stored product prices in the DB as pennies, to prevent the rounding issue.
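
As an illustration of that pennies trick (the prices here are made up), exact integer arithmetic sidesteps binary rounding entirely:

# Store prices as integer cents; all arithmetic stays exact.
price_cents = 1999                # $19.99
total_cents = 3 * price_cents     # 5997, no rounding possible
print(f"${total_cents // 100}.{total_cents % 100:02d}")  # $59.97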
Yes, both INT and FLOAT are fixed-precision types, but an INT stores whole numbers, and all of its bits can be used for that. The usage patterns of the bits are therefore quite different: while both INT and FLOAT allocate a bit for the sign (+/-), a FLOAT's remaining 31 bits are pre-allocated between the significand (the numeric value) and its exponent.
So your example of 12334.9999 is essentially stored in Cassandra like this:
123349999 x 10^-4
And of course, that's stored in binary, which I won't include here for brevity.
tl;dr
Basically FLOATs use fixed precision to store values as a formula (significand and exponent) in base-2, and the conversion back to base-10 makes rounding errors likely.
You're right, use a DECIMAL type. When you need to be exact, that's the only real solution.
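
To show that route concretely, here is a tiny sketch using Python's decimal module as a stand-in for Cassandra's DECIMAL (the values are the ones from the question):

from decimal import Decimal

price = Decimal("12334.9999")
print(price)          # 12334.9999 -- kept in base 10, no conversion, no rounding
print(price * 10000)  # 123349999.0000 -- arithmetic stays exact too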
If you're interested, here are two additional SO answers which provide more detail on this topic:
Double vs. BigDecimal?
What is the difference between the float and integer data type when the size is the same?

Does the default memory allocated for a datatype play a role in rounding? In what manner is a float rounded if it exceeds the allocated memory?

Having a file test2.py with the following contents:
print(2.0000000000000003)
print(2.0000000000000002)
I get this output:
$ python3 test2.py
2.0000000000000004
2.0
I thought a lack of memory allocated for the float might be causing this, but 2.0000000000000003 and 2.0000000000000002 need the same amount of memory.
IEEE 754 64-bit binary floating point always uses 64 bits to store a number. It can exactly represent only a finite subset of the binary fractions. Looking only at the normal numbers: if N is a power of two in its range, it can represent numbers of the form (in binary) 1.s × N, where s is a string of 52 zeros and ones.
All the 32-bit binary integers, including 2, are exactly representable.
The smallest exactly representable number greater than 2 is 2.000000000000000444089209850062616169452667236328125. It is twice the binary fraction 1.0000000000000000000000000000000000000000000000000001.
2.0000000000000003 is closer to 2.000000000000000444089209850062616169452667236328125 than to 2, so it rounds up and prints as 2.0000000000000004.
2.0000000000000002 is closer to 2.0, so it rounds down to 2.0.
To store numbers between 2.0 and 2.000000000000000444089209850062616169452667236328125 would require a different floating point format likely to take more than 64 bits for each number.
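
You can inspect the exact values involved with Python's decimal module, since Decimal(float) converts the float's exact binary value to base 10:

from decimal import Decimal

print(Decimal(2.0000000000000003))
# 2.000000000000000444089209850062616169452667236328125
print(Decimal(2.0000000000000002))
# 2
print(2.0000000000000003)  # 2.0000000000000004, the shortest string that round-trips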
Floats are not stored as integers are, with each bit signaling a yes/no term of 1, 2, 4, 8, 16, 32, ... that you add up to get the complete number. They are stored as sign + mantissa + exponent in base 2. Several bit combinations have special meanings (NaN, ±inf, -0, ...). Positive and negative numbers are identical in mantissa and exponent; only the sign bit differs.
At any given time they occupy a specific, fixed bit-length they are "put into"; the storage itself cannot overflow.
They do, however, have limited accuracy: if you try to fit numbers into them that would need more precision, you get rounding errors. That's what you see in your example.
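
To make the sign + mantissa + exponent layout concrete, this sketch reinterprets a double's bytes as an integer (struct is just a convenient way to get at the bits):

import struct

# Reinterpret a double's 8 bytes as a 64-bit integer to see the fields:
# 1 sign bit, 11 exponent bits (biased by 1023), 52 mantissa bits.
bits = struct.unpack(">Q", struct.pack(">d", 2.0000000000000003))[0]
sign = bits >> 63
exponent = ((bits >> 52) & 0x7FF) - 1023
mantissa = bits & ((1 << 52) - 1)
print(f"{bits:064b}")
print(f"sign={sign} exponent={exponent} mantissa={mantissa:013x}")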
More on floats and storage (with example):
http://stupidpythonideas.blogspot.de/2015/01/ieee-floats-and-python.html
(which links to a more technical https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)
More on accuracy of floats:
- Floating Point Arithmetic: Issues and Limitations

Python division result not exact and different results

I am trying to solve the fractional knapsack problem.
I have to find the items with the maximum calories per weight. I will fill my bag up to a defined/limited weight with the maximum calories.
Though the algorithm is correct, I can't get the correct result because of Python division weirdness.
When I try to find the items with max calories per weight (python3):
print((calories_list[i]/weight_list[i])*10)
# calories_list[i] is 500 and weight_list[i] is 30 (they're integers)
166.66666666666669
On the other hand, I opened a terminal and typed python3:
>>> 500/30
16.666666666666668
# when multiplied by 10, it should be 166.66666666666668, not
# 166.66666666666669
As you can see, it gives different results.
Most of all, the important thing is that the real answer is
500/30 = 16.6666666667
I've been stuck here for two days, please help me.
Thank you.
As explained in the Python FAQ:
The float type in CPython uses a C double for storage. A float object’s value is stored in binary floating-point with a fixed precision (typically 53 bits) and Python uses C operations, which in turn rely on the hardware implementation in the processor, to perform floating-point operations. This means that as far as floating-point operations are concerned, Python behaves like many popular languages including C and Java.
You could use the decimal module as an alternative:
>>> from decimal import Decimal
>>> Decimal(500)/Decimal(30)
Decimal('16.66666666666666666666666667')
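
If what you actually need is exactness rather than more digits, the standard library's fractions module is another option; a small sketch:

from fractions import Fraction

# Rational arithmetic is exact at every step; rounding happens only
# if/when you convert to float at the end.
ratio = Fraction(500, 30) * 10
print(ratio)         # 500/3
print(float(ratio))  # 166.66666666666666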

How does COBOL handle division rounding by default?

In COBOL, what is the result of
COMPUTE RESULT1 = 97 / 100
COMPUTE RESULT2 = 50 / 100
COMPUTE RESULT3 = 32 / 100
COMPUTE RESULT4 = -97 / 100
COMPUTE RESULT5 = -50 / 100
COMPUTE RESULT6 = -32 / 100
When RESULT1 through RESULT6 are declared as:
PIC S9(4)
PIC S9(4)V9
PIC S9(4)V99
Or, in other words, what is the default rounding mode for COBOL divisions?
Edit: What happens with negative values?
Even "discard" is sort of a rounding mode, is it equivalent to rounding towards negative infinity or towards zero?
COBOL does no rounding, unless you tell it to.
What it does do, if you don't tell it to do rounding, is low-order truncation. Some people may prefer to term that something else, it doesn't really matter, the effect is the same. Truncation.
Negative values are dealt with in the same way as positive values: retain one significant digit beyond what is required for the final result, and if that extra digit is five or more, add one at the last retained position (also see the later explanation). So -0.009 would, to two decimal places, round to -0.01; -0.004 would round to -0.00.
If you specify no decimal places for a field, any fractional part will be simply discarded.
So, when all the targets of your COMPUTEs are 9(4), they will all contain zero, including the negative values.
When all the targets of your COMPUTEs are 9(4)V9, without rounding, they will contain 0.9, 0.5 and 0.3 (and -0.9, -0.5 and -0.3), with the low-order decimal part (from the second decimal digit) truncated.
And when all the targets of your COMPUTEs are 9(4)V99, they will contain 0.97, 0.50 and 0.32 (and their negatives), with any low-order decimal part beyond that truncated (here there is none to discard).
You do rounding in the language by using the ROUNDED phrase for the result of any arithmetic verb (ADD, SUBTRACT, MULTIPLY, DIVIDE, COMPUTE).
ADD some-name some-other-name GIVING some-result ROUNDED
COMPUTE some-result ROUNDED = some-name + some-other-name
The above are equivalent to each other.
To the 1985 Standard, ROUNDED takes the final result with one extra decimal place and adjusts the actual field, with its defined decimal places, by adding one at the smallest possible unit (for V99 it adds one hundredth, for V999 one thousandth, with no decimal places it adds one, and with any scaling amount (see the PICture character P) it adds one scaled unit).
You can consider the addition of one to be made to an absolute value, with the result retaining the original sign. Or you can consider it as done in any other way which achieves the same result. The Standard describes the rounding, the implementation meets the Standard in whatever way it likes. My description is a description for human understanding. No compiler need implement it in the way I have described, but logically the results will be the same.
Don't get hung up on how it is implemented.
The 2014 Standard, superseding the 2002 Standard, has further options for rounding, which for the 85 Standard you'd have to code yourself (a very easy use of REDEFINES).
`ROUNDED MODE IS` followed by one of: `AWAY-FROM-ZERO`, `NEAREST-AWAY-FROM-ZERO`, `NEAREST-EVEN`, `NEAREST-TOWARD-ZERO`, `PROHIBITED`, `TOWARD-GREATER`, `TOWARD-LESSER`, `TRUNCATION`.
Only one mode may be specified at a time, and the default if MODE IS is not specified is TRUNCATION, establishing backward compatibility (and satisfying those who feel that everything is rounding of some type).
The PROHIBITED option is interesting. If the result field has, for instance, two decimal places, then PROHIBITED requires that the calculated result have exactly those two decimal places and that all lower-order digits be zero.
It is important to note with a COMPUTE that only the final result is rounded, intermediate results are not. If you need intermediate rounding, you need one COMPUTE (or other arithmetic verb) per rounded result.
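
To see the difference between default truncation and ROUNDED without a COBOL compiler at hand, here is a small simulation using Python's decimal module, where ROUND_DOWN matches COBOL's truncation toward zero and ROUND_HALF_UP matches the classic ROUNDED behaviour (nearest, ties away from zero):

from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

# Simulate COMPUTE RESULT = n / 100 into a PIC S9(4) target (no decimals).
for n in (97, 50, 32, -97, -50, -32):
    q = Decimal(n) / Decimal(100)
    truncated = q.quantize(Decimal("1"), rounding=ROUND_DOWN)   # default
    rounded = q.quantize(Decimal("1"), rounding=ROUND_HALF_UP)  # ROUNDED
    print(f"{n:4d}/100 -> default {int(truncated)}, ROUNDED {int(rounded)}")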

Suppress scientific notation without knowing the length of the number?

In Python, how could I go about suppressing scientific notation, with complete precision, without knowing the length of the number?
I need Python to be able to return the number in normal form with exact precision no matter how large it is, and to do it without any trailing zeros. The numbers will always be integers, but they will be getting very large, and I need them to be completely accurate. Even a single digit being rounded or changed would break my program.
Any ideas?
Use the decimal class.
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem.
From https://docs.python.org/library/decimal.html
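
For the integer case described in the question, plain int is already enough, since Python integers are arbitrary-precision and never print in scientific notation; Decimal earns its keep when a value arrives as an E-notation string. A short sketch:

from decimal import Decimal

print(2**200)    # exact, full form, never scientific notation

# A value that arrives in E notation can be expanded with the 'f' format:
d = Decimal("1.23E+25")
print(f"{d:f}")  # 12300000000000000000000000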
