Translation of Excel INT to Delphi

I am trying to translate an Excel sheet to Delphi and stumbled over the Excel INT.
Excel help says INT rounds down (toward negative infinity).
The Excel cell is:
C13-(24*INT(C13/24))
Where C13 is initialized at -465.9862462 and calculates as 14.01375378
The best Delphi I can come up with is:
C13 := -465.9862462;
RoundMode := GetRoundMode; // Save the current rounding mode
SetRoundMode(rmDown);      // Round toward negative infinity
C13 := C13 - (24 * Round(C13 / 24));
SetRoundMode(RoundMode);   // Restore the previous mode
This produces the correct answer but is there a better way to do this?

Delphi has an Int() function too. It is even documented:
http://docwiki.embarcadero.com/Libraries/Tokyo/en/System.Int
That is roughly the equivalent of the Excel function. It also rounds down, but not toward negative infinity: it truncates toward 0. So for negative non-integer values, you have to subtract 1.0:
function ExcelInt(N: Extended): Extended;
begin
  Result := System.Int(N);
  if (Result <> N) and (N < 0.0) then
    Result := Result - 1.0;
end;
Note:
One might be inclined to use System.Math.Floor, as that rounds down towards negative infinity already, but that returns an Integer, so for large values, you might get an integer overflow. Note that in Excel, every number is a Double, so its INT() returns a Double too.
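As a quick cross-check of the two behaviours (an illustration in Python rather than Delphi; math.trunc is assumed here to play the role of System.Int, i.e. truncation toward zero):
import math

x = -465.9862462 / 24              # about -19.416...

print(math.trunc(x))               # -19  (truncation toward zero, like System.Int)
print(math.floor(x))               # -20  (rounding toward negative infinity, like Excel INT)

# The "subtract 1.0 for negative non-integers" trick turns truncation into floor:
adjusted = math.trunc(x) - 1 if (x < 0 and x != math.trunc(x)) else math.trunc(x)
print(adjusted == math.floor(x))   # True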

The Delphi equivalent is Floor. From its documentation:
"Rounds variables toward negative infinity."

Short answer: use the System.Math.Floor() function; it is functionally the same as the Excel INT() function.
From the Excel documentation for the INT() function we can read that
INT(number) rounds the number argument (a real number) down to the nearest integer.
Ex. INT(8.9) returns 8, INT(-8.9) returns -9
From the Delphi documentation for the System.Math.Floor() function we can read that Floor() rounds the float-type argument toward negative infinity and returns an integer.
Ex. Floor(2.8) returns 2, Floor(-2.8) returns -3
Testing your formula in Excel ( C13-(24*INT(C13/24)) ) and in Delphi ( C13 - (24 * Floor(C13 / 24)) ) yields the same result, 14.0137538, in both cases.
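For what it's worth, the same arithmetic is easy to check outside Excel and Delphi. A small Python sketch of the C13 formula, using math.floor for INT:
import math

C13 = -465.9862462

# Excel: C13 - (24 * INT(C13 / 24)); INT rounds toward negative infinity,
# which is exactly what math.floor does.
result = C13 - (24 * math.floor(C13 / 24))
print(round(result, 7))   # 14.0137538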

Related

How to write a function that turns a float into an int and rounds it up instead of down (which is the usual response Python gives)

Write a function that takes a float as input and rounds it away from zero, i.e., returns the integer obtained by rounding up for positive floats and by rounding down for negative floats. The return value should always be an int object. Consider the example below.
>>> round_away_from_zero(7.2)
8
>>> round_away_from_zero(-3.6)
-4
>>> round_away_from_zero(5.0)
5
Is there a simple way I can solve this using iteration?
I understand it has to do with floor division (i.e. x // 2) but I am not sure how to implement it.
You can use an if statement: if the number is positive, use math.ceil(); otherwise, use math.floor().
import math

def round_away_from_zero(n):
    if n > 0:
        return math.ceil(n)
    else:
        return math.floor(n)
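A quick check against the examples from the question (just a usage illustration of the function above):
print(round_away_from_zero(7.2))        # 8
print(round_away_from_zero(-3.6))       # -4
print(round_away_from_zero(5.0))        # 5
print(type(round_away_from_zero(5.0)))  # <class 'int'> -- math.ceil/math.floor return ints in Python 3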

why is np.exp(x) not equal to np.exp(1)**x

Why is np.exp(x) not equal to np.exp(1)**x?
For example:
np.exp(400)
>>>5.221469689764144e+173
np.exp(1)**400
>>>5.221469689764033e+173
np.exp(400)-np.exp(1)**400
>>>1.1093513018771065e+160
This difference comes from an optimisation in how numpy computes the result.
Recall how Euler's number is defined in math:
e = (1 + 1/n)**n as n goes to infinity.
I think numpy stops at a certain order:
the numpy exp documentation here is not very clear about how Euler's number is calculated.
Because that order is not infinity, you get this small difference between the two calculations.
Indeed, the value np.exp(400) would then be calculated using something like (1 + 400/n)**n:
>>> (1 + 400/n)**n
5.221642085428121e+173
>>> numpy.exp(400)
5.221469689764144e+173
Here you have n = 1000000000000, which is still very small compared to infinity and produces a difference on the order of 1e-5.
Indeed, there is no exact value of Euler's number. Like pi, you can only work with an approximate value.
It looks like a rounding issue. In the first case it's internally using a very precise value of e, while in the second you get a less precise value of e, and when that value is raised to the 400th power the precision issues become much more apparent.
The actual result when using the Windows calculator is 5.2214696897641439505887630066496e+173, so you can see your first outcome is fine, while the second is not.
5.2214696897641439505887630066496e+173 // calculator
5.221469689764144e+173 // exp(400)
5.221469689764033e+173 // exp(1)**400
Starting from your result, it looks like it's using a value with about 15 digits of precision.
2.7182818284590452353602874713527 // e
2.7182818284590450909589085441968 // 400th root of the 2nd result
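One way to see this numerically (a small Python/NumPy sketch, not a statement about numpy internals): np.exp(1) is e rounded to double precision, and raising a value that carries a tiny relative error eps to the 400th power amplifies that error by roughly a factor of 400, since (e*(1+eps))**400 is approximately e**400 * (1 + 400*eps).
import numpy as np

e_double = np.exp(1)          # e rounded to about 16 significant digits
print(e_double)               # 2.718281828459045

a = np.exp(400)               # exponential computed directly
b = e_double ** 400           # rounded e raised to the 400th power

# Tiny relative difference, on the order of 1e-14
# (the exact value depends on the platform's math library).
print((b - a) / a)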

Strange result from Summation of numbers in Excel and Matlab [duplicate]

I am writing a program where I need to delete duplicate points stored in a matrix. The problem is that when it comes to check whether those points are in the matrix, MATLAB can't recognize them in the matrix although they exist.
In the following code, intersections function gets the intersection points:
[points(:,1), points(:,2)] = intersections(...
obj.modifiedVGVertices(1,:), obj.modifiedVGVertices(2,:), ...
[vertex1(1) vertex2(1)], [vertex1(2) vertex2(2)]);
The result:
>> points
points =
12.0000 15.0000
33.0000 24.0000
33.0000 24.0000
>> vertex1
vertex1 =
12
15
>> vertex2
vertex2 =
33
24
Two points (vertex1 and vertex2) should be eliminated from the result. It should be done by the below commands:
points = points((points(:,1) ~= vertex1(1)) | (points(:,2) ~= vertex1(2)), :);
points = points((points(:,1) ~= vertex2(1)) | (points(:,2) ~= vertex2(2)), :);
After doing that, we have this unexpected outcome:
>> points
points =
33.0000 24.0000
The outcome should be an empty matrix. As you can see, the first (or second?) pair of [33.0000 24.0000] has been eliminated, but not the second one.
Then I checked these two expressions:
>> points(1) ~= vertex2(1)
ans =
0
>> points(2) ~= vertex2(2)
ans =
1 % <-- It means 24.0000 is not equal to 24.0000?
What is the problem?
More surprisingly, I made a new script that has only these commands:
points = [12.0000 15.0000
33.0000 24.0000
33.0000 24.0000];
vertex1 = [12 ; 15];
vertex2 = [33 ; 24];
points = points((points(:,1) ~= vertex1(1)) | (points(:,2) ~= vertex1(2)), :);
points = points((points(:,1) ~= vertex2(1)) | (points(:,2) ~= vertex2(2)), :);
The result as expected:
>> points
points =
Empty matrix: 0-by-2
The problem you're having relates to how floating-point numbers are represented on a computer. A more detailed discussion of floating-point representations appears towards the end of my answer (The "Floating-point representation" section). The TL;DR version: because computers have finite amounts of memory, numbers can only be represented with finite precision. Thus, the accuracy of floating-point numbers is limited to a certain number of decimal places (about 16 significant digits for double-precision values, the default used in MATLAB).
Actual vs. displayed precision
Now to address the specific example in the question... while 24.0000 and 24.0000 are displayed in the same manner, it turns out that they actually differ by very small decimal amounts in this case. You don't see it because MATLAB only displays 4 significant digits by default, keeping the overall display neat and tidy. If you want to see the full precision, you should either issue the format long command or view a hexadecimal representation of the number:
>> pi
ans =
3.1416
>> format long
>> pi
ans =
3.141592653589793
>> num2hex(pi)
ans =
400921fb54442d18
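The same distinction between the displayed value and the stored value exists in other environments; in Python, for example (illustration only):
x = 0.1
print(x)             # 0.1 -- the short, tidy display
print(f"{x:.20f}")   # 0.10000000000000000555 -- closer to the value actually stored
print(x.hex())       # 0x1.999999999999ap-4 -- the exact bit pattern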
Initialized values vs. computed values
Since there are only a finite number of values that can be represented for a floating-point number, it's possible for a computation to result in a value that falls between two of these representations. In such a case, the result has to be rounded off to one of them. This introduces a small machine-precision error. This also means that initializing a value directly or by some computation can give slightly different results. For example, the value 0.1 doesn't have an exact floating-point representation (i.e. it gets slightly rounded off), and so you end up with counter-intuitive results like this due to the way round-off errors accumulate:
>> a=sum([0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1]); % Sum 10 0.1s
>> b=1; % Initialize to 1
>> a == b
ans =
logical
0 % They are unequal!
>> num2hex(a) % Let's check their hex representation to confirm
ans =
3fefffffffffffff
>> num2hex(b)
ans =
3ff0000000000000
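The same accumulation experiment can be reproduced in Python for comparison (Python's float is the same IEEE 754 double type; illustration only):
a = sum([0.1] * 10)    # add up ten copies of 0.1
b = 1.0

print(a == b)          # False: accumulated round-off leaves a slightly below 1
print(a.hex())         # 0x1.fffffffffffffp-1 (the same bits as 3fefffffffffffff above)
print(b.hex())         # 0x1.0000000000000p+0 (the same bits as 3ff0000000000000 above)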
How to correctly handle floating-point comparisons
Since floating-point values can differ by very small amounts, any comparisons should be done by checking that the values are within some range (i.e. tolerance) of one another, as opposed to exactly equal to each other. For example:
a = 24;
b = 24.000001;
tolerance = 0.001;
if abs(a-b) < tolerance, disp('Equal!'); end
will display "Equal!".
You could then change your code to something like:
points = points((abs(points(:,1)-vertex1(1)) > tolerance) | ...
(abs(points(:,2)-vertex1(2)) > tolerance),:)
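For readers coming from other languages, the same tolerance-based filtering can be sketched in Python/NumPy (illustration only, using the example values from the question):
import numpy as np

points = np.array([[12.0, 15.0],
                   [33.0, 24.0],
                   [33.0, 24.0]])
vertex1 = np.array([12.0, 15.0])
vertex2 = np.array([33.0, 24.0])
tolerance = 1e-3

# Keep only the rows that differ from each vertex by more than the
# tolerance in at least one coordinate.
points = points[np.any(np.abs(points - vertex1) > tolerance, axis=1)]
points = points[np.any(np.abs(points - vertex2) > tolerance, axis=1)]
print(points.shape)   # (0, 2): all duplicate rows were recognised and removed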
Floating-point representation
A good overview of floating-point numbers (and specifically the IEEE 754 standard for floating-point arithmetic) is What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg.
A binary floating-point number is actually represented by three integers: a sign bit s, a significand (or coefficient/fraction) f, and an exponent e. In the double-precision floating-point format, each number is represented by 64 bits laid out in memory as 1 sign bit, 11 exponent bits, and 52 significand (fraction) bits.
For normalized values, the real number represented is then (-1)^s * (1 + f/2^52) * 2^(e-1023).
This format allows for number representations in the range 10^-308 to 10^308. For MATLAB you can get these limits from realmin and realmax:
>> realmin
ans =
2.225073858507201e-308
>> realmax
ans =
1.797693134862316e+308
Since there are a finite number of bits used to represent a floating-point number, there are only so many finite numbers that can be represented within the above given range. Computations will often result in a value that doesn't exactly match one of these finite representations, so the values must be rounded off. These machine-precision errors make themselves evident in different ways, as discussed in the above examples.
In order to better understand these round-off errors it's useful to look at the relative floating-point accuracy provided by the function eps, which quantifies the distance from a given number to the next largest floating-point representation:
>> eps(1)
ans =
2.220446049250313e-16
>> eps(1000)
ans =
1.136868377216160e-13
Notice that the precision is relative to the size of a given number being represented; larger numbers will have larger distances between floating-point representations, and will thus have fewer digits of precision following the decimal point. This can be an important consideration with some calculations. Consider the following example:
>> format long % Display full precision
>> x = rand(1, 10); % Get 10 random values between 0 and 1
>> a = mean(x) % Take the mean
a =
0.587307428244141
>> b = mean(x+10000)-10000 % Take the mean at a different scale, then shift back
b =
0.587307428244458
Note that when we shift the values of x from the range [0 1] to the range [10000 10001], compute a mean, then subtract the mean offset for comparison, we get a value that differs for the last 3 significant digits. This illustrates how an offset or scaling of data can change the accuracy of calculations performed on it, which is something that has to be accounted for with certain problems.
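The same scale dependence can be checked in Python (illustration only; math.ulp, available in Python 3.9+, gives the spacing to the next representable double, much like eps in MATLAB):
import math

# The spacing between adjacent doubles grows with the magnitude of the value.
print(math.ulp(1.0))       # 2.220446049250313e-16
print(math.ulp(1000.0))    # 1.1368683772161603e-13
print(math.ulp(10000.0))   # 1.8189894035458565e-12

# Values near 10000 therefore carry fewer digits after the decimal point
# than values near 1, which is why mean(x + 10000) - 10000 loses accuracy.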
Look at this article: The Perils of Floating Point. Though its examples are in FORTRAN, it applies to virtually any modern programming language, including MATLAB. Your problem (and the solution to it) is described in the "Safe Comparisons" section.
Type
format long g
This command will show the full value of the number. It's likely to be something like 24.00000021321 != 24.00000123124
Try writing
0.1 + 0.1 + 0.1 == 0.3
Warning: You might be surprised by the result!
Maybe the two numbers are really 24.0 and 24.000000001 but you're not seeing all the decimal places.
Check out the Matlab EPS function.
Matlab uses floating point math up to 16 digits of precision (only 5 are displayed).

Why is a whole number not the same when rounded in a custom function?

I have the following custom function that rounds a number to a user-specified accuracy.
It is based on the general formula:
ROUND(Value / Accuracy, 0) * Accuracy
There are times when Number/Accuracy is exactly a multiple of 0.5, and Excel does not apply the common rounding rule (odd number: round up, even number: round down), so I made a custom function.
Function CheckTemp(val As Range, NumAccuracy As Range) As Double
    Dim Temp As Double
    Temp = Abs(val) / NumAccuracy
    CheckTemp = (Temp / 0.5) - WorksheetFunction.RoundDown(Temp / 0.5, 0)
End Function
If CheckTemp = 0, then 'val' falls under this case where, depending on the number, I want to specifically round down or up. If it is non-zero, the general Round() command is used.
I do have a weird case when Accuracy = 0.1 and any 'val' that meets the requirement:
#.X5000000...,
where: 'X' is an ODD number, or zero (i.e. 0,1,3,5,7,9).
Depending on the whole number, the function does not work.
Example:
val = - 5 361 202.55
NumAccuracy = 0.1
Temp = 53 612 025.5
Temp / 0.5 = 107 224 051.
WorksheetFunction.RoundDown(Temp / 0.5,0) = 107 224 051.
CheckTemp = -1.49012E-08
If I break this check into two separate functions, one to output (Temp/0.5) and WF.RoundDown(Temp / 0.5) to the Excel worksheet, and then subtract the two in the worksheet I get EXACTLY 0.
However, with the VBA code an error comes into play and results in a non-zero answer (and, even more worrisome, a NEGATIVE value, which should be impossible when Temp is always positive and RoundDown('x','y') never returns a number larger than 'x').
'val' can be a very large number with many decimal places, so I am trying to keep the 'Double' parameter if possible.
I tried 'Single' variable type and it seems to remove the error with CheckTemp(), but I am worried an end-user may use a number that exceeds the 'Single' variable limit.
You are not wrong, but native rounding in VBA is severely limited.
So, use a proper rounding function like RoundMid as found in my project VBA.Round. It uses Decimal if possible to avoid such errors.
Example:
Value = 5361202.55
NumAccuracy = 0.1
RoundedValue = RoundMid(Value / NumAccuracy, 0) * NumAccuracy
RoundedValue -> 5361202.6
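The idea behind that Decimal-based approach can be sketched in Python as well (this is only an illustration of rounding half away from zero to a given accuracy; round_to_accuracy is a hypothetical helper, not the VBA.Round implementation):
from decimal import Decimal, ROUND_HALF_UP

def round_to_accuracy(value, accuracy):
    # Convert via strings so 5361202.55 / 0.1 is computed exactly in Decimal,
    # then round half away from zero and scale back.
    v = Decimal(str(value))
    a = Decimal(str(accuracy))
    return float((v / a).quantize(Decimal("1"), rounding=ROUND_HALF_UP) * a)

print(round_to_accuracy(5361202.55, 0.1))   # 5361202.6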

Python floating point precision sum

I have the following array in python
n = [565387674.45, 321772103.48,321772103.48, 214514735.66,214514735.65,
357524559.41]
if I sum all these elements, I get this:
sum(n)
1995485912.1300004
But, this sum should be:
1995485912.13
I know about floating point "error". I have already used the isclose() function from numpy to check the corrected value, but
how large can this error be? Is there any way to reduce this "error"?
The main issue here is that the error propagates to other operations, for example, the below assertion must be true:
assert (sum(n) - 1995485911) ** 100 - (1995485912.13 - 1995485911) ** 100 == 0.
This is a problem with floating point numbers. One solution is to represent them in string form and use the decimal module:
n = ['565387674.45', '321772103.48', '321772103.48', '214514735.66', '214514735.65',
'357524559.41']
from decimal import Decimal
s = sum(Decimal(i) for i in n)
print(s)
Prints:
1995485912.13
You could use the round(num, n) function, which rounds the number to the desired number of decimal places. So in your example you would use round(sum(n), 2).
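As an aside (not part of the answers above), for plain floats the standard library also offers math.fsum, which tracks the lost low-order bits during summation; a quick comparison, for illustration:
import math
from decimal import Decimal

n = [565387674.45, 321772103.48, 321772103.48, 214514735.66,
     214514735.65, 357524559.41]

print(sum(n))                           # 1995485912.1300004 -- naive left-to-right summation
print(math.fsum(n))                     # 1995485912.13      -- correctly rounded float sum
print(sum(Decimal(str(x)) for x in n))  # 1995485912.13      -- exact decimal arithmetic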
