I have a 128-bit number stored as 2 64-bit numbers ("Hi" and "Lo"). I need only to divide it by a 32-bit number. How could I do it, using the native 64-bit operations from CPU?
(Please, note that I DO NOT need an arbitrary precision library. Just need to know how to make this simple division using native operations. Thank you).
If you are storing the value (128 bits) using the largest native representation your architecture can handle (64 bits), you will have problems handling the intermediate results of the division (as you already found :) ).
But you can always use a SMALLER representation. What about FOUR 32-bit numbers? This way you could use the native 64-bit operations without overflow problems.
A simple implementation (in Delphi) can be found here.
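If it helps, here is a minimal C sketch of that idea (my own illustration, not taken from the linked Delphi code): split the 128-bit value into four 32-bit limbs so that every intermediate dividend fits in a native 64-bit operation.
#include <stdint.h>

/* Sketch: divide a 128-bit value (given as hi/lo uint64_t halves) by a
   32-bit divisor in place, using only native 64-bit operations.
   Returns the remainder. Assumes divisor != 0. */
uint32_t div128_by_32(uint64_t *hi, uint64_t *lo, uint32_t divisor)
{
    uint32_t limb[4] = {
        (uint32_t)*lo, (uint32_t)(*lo >> 32),
        (uint32_t)*hi, (uint32_t)(*hi >> 32)
    };
    uint64_t rest = 0;
    for (int i = 3; i >= 0; i--) {              /* highest limb first */
        uint64_t cur = (rest << 32) | limb[i];  /* always fits in 64 bits */
        limb[i] = (uint32_t)(cur / divisor);
        rest = cur % divisor;
    }
    *lo = ((uint64_t)limb[1] << 32) | limb[0];
    *hi = ((uint64_t)limb[3] << 32) | limb[2];
    return (uint32_t)rest;
}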
I have a DECIMAL structure which consists of three 32-bit values: Lo32, Mid32 and Hi32, 96 bits in total.
You can easily extend my C code to 128-bit, 256-bit, 512-bit or even 1024-bit division.
// in-place divide Dividend / Divisor including previous rest and returning new rest
static void Divide32(DWORD* pu32_Dividend, DWORD u32_Divisor, DWORD* pu32_Rest)
{
    ULONGLONG u64_Dividend = *pu32_Rest;
    u64_Dividend <<= 32;
    u64_Dividend |= *pu32_Dividend;

    *pu32_Dividend = (DWORD)(u64_Dividend / u32_Divisor);
    *pu32_Rest     = (DWORD)(u64_Dividend % u32_Divisor);
}
// in-place divide 96 bit DECIMAL structure
static bool DivideByDword(DECIMAL* pk_Decimal, DWORD u32_Divisor)
{
    if (u32_Divisor == 0)
        return false;

    if (u32_Divisor > 1)
    {
        DWORD u32_Rest = 0;
        Divide32(&pk_Decimal->Hi32,  u32_Divisor, &u32_Rest); // Hi FIRST!
        Divide32(&pk_Decimal->Mid32, u32_Divisor, &u32_Rest);
        Divide32(&pk_Decimal->Lo32,  u32_Divisor, &u32_Rest);
    }
    return true;
}
The subtitle to volume two of The Art of Computer Programming is Seminumerical Algorithms. It's appropriate, as the solution is fairly straightforward when you think of the number as an equation instead of as a number.
Think of the number as Hx + L, where x is 2^64. If we are dividing by, call it, Y, it is then trivially true that Hx = (N + M)x, where N is divisible by Y and M is less than Y. Why would I do this? (Hx + L) / Y can now be expressed as (N / Y)x + (Mx + L) / Y. The values N, N / Y, and M are integers: N / Y is just H / Y and M is H % Y. However, as x is 2^64, this still brings us to a 128-by-something divide, which will raise a hardware fault (as people have noted) should Y be 1.
So, what you can do is reformulate the problem as (Ax^3 + Bx^2 + Cx + D) / Y, with x being 2^32. You can now work down: (A / Y)x^3 + (((A % Y)x + B) / Y)x^2 + (((((A % Y)x + B) % Y)x + C) / Y)x + ((((((A % Y)x + B) % Y)x + C) % Y)x + D) / Y. If you only have 64-bit divides, you do four divisions; in the first three, you take the remainder, shift it up 32 bits, and OR in the next coefficient before the next division.
This is the math behind the solution that has already been given twice.
How could I do it, using the native 64-bit operations from CPU?
Since you want native operations, you'll have to use some built-in types or intrinsic functions. All the above answers will only give you general C solutions which won't be compiled to the division instruction.
Most modern 64-bit compilers have some way to do a 128-by-64 division. In MSVC, use _div128() or _udiv128(), so you'll just need to call _udiv128(hi, lo, divisor, &remainder).
The _div128 intrinsic divides a 128-bit integer by a 64-bit integer. The return value holds the quotient, and the intrinsic returns the remainder through a pointer parameter. _div128 is Microsoft specific.
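A minimal sketch of the MSVC route (x64 only; note that, like the underlying DIV instruction, the intrinsic raises a hardware fault if the quotient does not fit in 64 bits, i.e. if hi >= divisor):
#include <intrin.h>
#include <stdint.h>

/* Sketch: 128-by-64 unsigned division via the MSVC intrinsic. */
uint64_t quotient128(uint64_t hi, uint64_t lo, uint64_t divisor, uint64_t *remainder)
{
    return _udiv128(hi, lo, divisor, remainder);
}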
In Clang, GCC and ICC there's an __int128 type, and you can use that directly:
#include <stdint.h>

unsigned __int128 div128by32(unsigned __int128 x, uint64_t y)
{
    return x / y;
}
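For the question's Hi/Lo representation, the halves can be assembled into an __int128 first. A hedged sketch (my own illustration; __int128 is a compiler extension, not standard C):
#include <stdint.h>

/* Sketch: divide (hi:lo) by a 32-bit divisor using __int128.
   Note that the quotient of a 128-by-32 divide can itself exceed 64
   bits; this assumes it fits (i.e. hi < divisor), otherwise the
   returned value is truncated. */
uint64_t div_hi_lo(uint64_t hi, uint64_t lo, uint32_t divisor)
{
    unsigned __int128 x = ((unsigned __int128)hi << 64) | lo;
    return (uint64_t)(x / divisor);
}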
I am working within the constraints of hardware that has a 64-bit integer limit and does not support floating point. I am dealing with very large integers that I need to multiply and divide. When multiplying, I encounter an overflow of the 64 bits. I am prototyping a solution in Python. This is what I have in my function:
upper = x >> 32  # x is cast as int64 before being passed to this function
lower = x & 0x00000000FFFFFFFF
temp_upper = upper * y // z  # Dividing first is not an option, as this is not the actual equation I am working with. This is just to make sure in my testing I overflow unless I do the splitting.
temp_lower = lower * y // z
return temp_upper << 32 | temp_lower
This works, somewhat, but I end up losing a lot of precision (my result is off by sometimes a few million). From looking at it, it appears that this is happening because of the division: if the divisor is large enough, it shifts the upper part to the right, and when I shift it back into place I have a gap of zeroes.
Unfortunately this topic is very hard to google, since anything with upper/lower brings up results about rounding up/down, anything about splitting ints returns results about splitting them into a char array, and anything about int arithmetic brings up basic algebra with integer math. Maybe I am just not good at googling. But can you give me some pointers on how to do this?
Splitting like this is just a thing I am trying; it doesn't have to be the solution. All I need to be able to do is temporarily go over the 64-bit integer limit. The final result will be under 64 bits (after the division part). I remember learning in college about splitting numbers up like this, doing the math, and re-combining. But unfortunately, as I said, I am having trouble finding anything online about how to do the actual math on it.
Lastly, my numbers are sometimes small, so I can't just chop off the right bits. I need the results to basically be equivalent to if I used something like int128.
I suppose a different way to look at this problem is this: since I have no problem with splitting the int64, we can forget about that part. So then we can pretend that two int64s are being fed to me, one upper and one lower. I can't combine them, because they won't fit into a single int64. So I need to divide them first by Z. The combining step is easy. How do I do the division?
Thanks.
As I understand it, you want to perform (x*y)//z.
Your numbers x, y, z all fit in 64 bits, except that you need 128 bits for the intermediate x*y.
The problem you have is indeed related to division: you have
h * y = qh * z + rh
l * y = ql * z + rl
h * y << 32 + l*y = (qh<<32 + ql) * z + (rh<<32 + rl)
but nothing says that (rh<<32 + rl) < z, and in your case high bits of l*y overlap low bits of h * y, so you get the wrong quotient, off by potentially many units.
What you should do as the second operation is rather:
rh<<32 + l * y = ql' * z + rl'
Then the total quotient is qh<<32 + ql'.
But of course, you must take care to avoid overflow when evaluating the left operand...
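To make the remainder-carrying idea concrete, here is a small C sketch under simplifying assumptions that are mine, not the question's: y and z fit in 32 bits, so x*y fits in 96 bits and z is a single 32-bit limb.
#include <stdint.h>

/* Sketch: computes (x * y) / z by long division over three 32-bit
   limbs, assuming y and z fit in 32 bits (so x*y fits in 96 bits)
   and the final quotient fits in 64 bits. */
uint64_t muldiv64(uint64_t x, uint32_t y, uint32_t z)
{
    /* 96-bit product of x and y as two overlapping 64-bit partials */
    uint64_t lo  = (x & 0xFFFFFFFFu) * y;       /* weight 2^0  */
    uint64_t hi  = (x >> 32) * y;               /* weight 2^32 */
    uint64_t mid = (hi & 0xFFFFFFFFu) + (lo >> 32);

    /* the three limbs a2:a1:a0 of the product */
    uint32_t a0 = (uint32_t)lo;
    uint32_t a1 = (uint32_t)mid;
    uint32_t a2 = (uint32_t)(hi >> 32) + (uint32_t)(mid >> 32);

    /* schoolbook long division by z, highest limb first,
       carrying the remainder into the next step */
    uint64_t rest = a2 % z;   /* a2 / z must be 0 if the quotient fits in 64 bits */
    uint64_t cur1 = (rest << 32) | a1;
    uint32_t q1   = (uint32_t)(cur1 / z);
    rest          = cur1 % z;
    uint64_t cur0 = (rest << 32) | a0;
    uint32_t q0   = (uint32_t)(cur0 / z);

    return ((uint64_t)q1 << 32) | q0;
}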
Since you are splitting only one of the operands of x*y, I'll assume that the intermediate result always fits in 96 bits.
If that is correct, then your problem is to divide a 3-limb x*y (32-bit limbs) by a 2-limb z.
It is thus like the Burnikel–Ziegler divide-and-conquer algorithm for division.
The algorithm can be decomposed like this:
obtain the 3 limbs a2,a1,a0 of the multiplication x*y, using Karatsuba for example
split z into 2 limbs z1,z0
perform the div32( (a2,a1,a0) , (z1,z0) )
here is some pseudocode, dealing only with positive operands and with no guarantee of correctness, but it gives an idea of the implementation:
p = 1 << 32;

function (a1,a0) = split(a)
    a1 = a >> 32;
    a0 = a - (a1 * p);

function (a2,a1,a0) = mul22(x,y)
    (x1,x0) = split(x);
    (y1,y0) = split(y);
    (h1,h0) = split(x1 * y1);
    assert(h1 == 0); -- assume that the result fits in 96 bits
    (l1,l0) = split(x0 * y0);
    (m1,m0) = split((x1 - x0) * (y0 - y1)); -- Karatsuba trick
    a0 = l0;
    (carry,a1) = split( l1 + l0 + h0 + m0 );
    a2 = l1 + m1 + h0 + carry;

function (q,r) = quorem(a,b)
    q = a // b;
    r = a - (b * q);

function (q1,q0,r0) = div21(a1,a0,b0)
    (q1,r1) = quorem(a1,b0);
    (q0,r0) = quorem( r1 * p + a0 , b0 );
    (q1,q0) = split( q1 * p + q0 );

function q = div32(a2,a1,a0,b1,b0)
    (q,r) = quorem(a2*p + a1, b1*p + b0);
    q = q * p;
    (a2,a1) = split(r);
    if a2 < b1
        (q1,q0,r) = div21(a2,a1,b1);
        assert(q1 == 0); -- since a2 < b1...
    else
        q0 = p - 1;
        r = (a2 - b1)*p + a1 + b1;
    (d1,d0) = split(q0 * b0);
    r = (r - d1)*p + a0 - d0;
    q = q + q0; -- accumulate the low limb of the quotient
    while r < 0
        q = q - 1;
        r = r + b1*p + b0;

function t = muldiv(x,y,z)
    (a2,a1,a0) = mul22(x,y);
    (z1,z0) = split(z);
    if z1 == 0
        (q2,q1,r1) = div21(a2,a1,z0);
        assert(q2 == 0); -- otherwise the result will not fit in 64 bits
        t = q1*p + ( (r1*p + a0) // z0 );
    else
        t = div32(a2,a1,a0,z1,z0);
Given two segment endpoints A and B (in two dimensions), I would like to perform linear interpolation based on a value t, i.e.:
C = A + t(B-A)
In the ideal world, A, B and C should be collinear. However, we are operating with limited floating-point here, so there will be small deviations. To work around numerical issues with other operations I am using robust adaptive routines originally created by Jonathan Shewchuk. In particular, Shewchuk implements an orientation function orient2d that uses adaptive precision to exactly test the orientation of three points.
Here is my question: is there a known procedure by which the interpolation can be computed using floating-point math, so that it lies exactly on the line between A and B? Here, I care less about the accuracy of the interpolation itself and more about the resulting collinearity. In other terms, it's OK if C is shifted around a bit as long as collinearity is satisfied.
The bad news
The request can't be satisfied. There are values of A and B for which there is NO value of t other than 0 and 1 for which lerp(A, B, t) is a float.
A trivial example in single precision is x1 = 12345678.f and x2 = 12345679.f. Regardless of the values of y1 and y2, the required result must have an x component between 12345678.f and 12345679.f, and there's no single-precision float between these two.
The (sorta) good news
The exact interpolated value, however, can be represented as the sum of 5 floating-point values (vectors in the case of 2D): one for the formula's result, one for the error in each operation [1] and one for multiplying the error by t. I'm not sure if that will be useful to you. Here's a 1D C version of the algorithm in single precision that uses fused multiply-add to calculate the product error, for simplicity:
#include <math.h>

float exact_sum(float a, float b, float *err)
{
    float sum = a + b;
    float z = sum - a;
    *err = a - (sum - z) + (b - z);
    return sum;
}

float exact_mul(float a, float b, float *err)
{
    float prod = a * b;
    *err = fmaf(a, b, -prod);
    return prod;
}

float exact_lerp(float A, float B, float t,
                 float *err1, float *err2, float *err3, float *err4)
{
    float diff = exact_sum(B, -A, err1);
    float prod = exact_mul(diff, t, err2);
    *err1 = exact_mul(*err1, t, err4);
    return exact_sum(A, prod, err3);
}
In order for this algorithm to work, operations need to conform to IEEE-754 semantics in round-to-nearest mode. That's not guaranteed by the C standard, but the GNU gcc compiler can be instructed to do so, at least in processors supporting SSE2 [2][3].
It is guaranteed that the arithmetic addition of (result + err1 + err2 + err3 + err4) will be equal to the desired result; however, there is no guarantee that the floating-point addition of these quantities will be exact.
To use the above example, exact_lerp(12345678.f, 12345679.f, 0.300000011920928955078125f, &err1, &err2, &err3, &err4) returns a result of 12345678.f and err1, err2, err3 and err4 are 0.0f, 0.0f, 0.300000011920928955078125f and 0.0f respectively. Indeed, the correct result is 12345678.300000011920928955078125 which can't be represented as a single-precision float.
A more convoluted example: exact_lerp(0.23456789553165435791015625f, 7.345678806304931640625f, 0.300000011920928955078125f, &err1, &err2, &err3, &err4) returns 2.3679010868072509765625f and the errors are 6.7055225372314453125e-08f, 8.4771045294473879039287567138671875e-08f, 1.490116119384765625e-08f and 2.66453525910037569701671600341796875e-15f. These numbers add up to the exact result, which is 2.36790125353468550173374751466326415538787841796875 and can't be exactly stored in a single-precision float.
All numbers in the examples above are written using their exact values, rather than a number that approximates to them. For example, 0.3 can't be represented exactly as a single-precision float; the closest one has an exact value of 0.300000011920928955078125 which is the one I've used.
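If it helps, here is a tiny driver (my sketch) that reproduces the first example, assuming the three functions above are in scope:
#include <stdio.h>

/* 0.3f is the float closest to 0.3, i.e. exactly
   0.300000011920928955078125, as used in the examples above */
int main(void)
{
    float e1, e2, e3, e4;
    float r = exact_lerp(12345678.f, 12345679.f, 0.3f, &e1, &e2, &e3, &e4);
    printf("result = %.9g, errors = %g %g %g %g\n", r, e1, e2, e3, e4);
    /* expected: result = 12345678, errors = 0 0 0.3 0 */
    return 0;
}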
It might be possible that if you calculate err1 + err2 + err3 + err4 + result (in that order), you get an approximation that is considered collinear in your use case. Perhaps worth a try.
References
[1] Graillat, Stef (2007). Accurate Floating Point Product and Exponentiation.
[2] Enabling strict floating point mode in GCC
[3] Semantics of Floating Point Math in GCC
I'm implementing a hardware double-precision adder in Verilog. During the verification phase, when I compare my hardware output to MATLAB (or C) double-precision addition outputs, I found some weird cases where the LSB does not match, taking into account that I'm using the same rounding mode (round to nearest even). My question is about the accuracy of the C calculation: is it truly accurate in doing the rounding, or is it limited by the CPU architecture (32 or 64 bits)?
Here's an example,
A = 0x62a5a1c59bd10037 = 1.5944933396238637e+167
B = 0x62724bc40659bf0c = 1.685748657333889e+166 = 0.1685748657333889e+167
The correct output (just by doing the addition of the above real numbers manually)
= 1.7630682053572526e+167 = 0x62a7eb3e1c9c3819 (this matches my hardware)
When I try doing A+B in C, the result is equal to
= 1.7630682053572525e+167 = 0x62a7eb3e1c9c3818
When I try this application to check the intermediate operations
http://www.ecs.umass.edu/ece/koren/arith/simulator/FPAdd/
I can see from mantissa addition that C is not doing the rounding correctly (round to nearest even). In this case the mantissa should be rounded by adding one. Any idea why this is happening?
The operation of http://www.ecs.umass.edu/ece/koren/arith/simulator/FPAdd/ is correct. The last round-to-nearest-even performs a downward rounding:
A+B = 1.0111111010110011111000011100100111000011100000011000|10 * 2^555
                                                            ^
                                                            |
To drop the |10 part (exactly in the middle), the result chooses 0 (even) instead of 1.
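If you want to double-check what the compiler and CPU actually produce, here is a small sketch that reinterprets the question's exact bit patterns:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* add the question's two bit patterns as doubles and print the result
   pattern; per the analysis above, this should print 0x62a7eb3e1c9c3818 */
int main(void)
{
    uint64_t ua = 0x62a5a1c59bd10037ULL;
    uint64_t ub = 0x62724bc40659bf0cULL;
    double a, b, sum;
    uint64_t usum;

    memcpy(&a, &ua, sizeof a);
    memcpy(&b, &ub, sizeof b);
    sum = a + b;
    memcpy(&usum, &sum, sizeof usum);
    printf("0x%016llx\n", (unsigned long long)usum);
    return 0;
}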
This Groovy:
float a = 1;
float b = 2;
def r = a + b;
Creates this Java code when decompiled from .class with IntelliJ:
float a = (float)1;
float b = (float)2;
Object r = null;
double var7 = (double)a + (double)b;
r = Double.valueOf(var7);
So r contains a Double.
If I do this:
float a = 1;
float b = 2;
float r = a + b;
It generates code that performs the addition with doubles and converts back to float:
float a = (float)1;
float b = (float)2;
float r = 0.0F;
double var7 = (double)a + (double)b;
r = (float)var7;
So should one abandon floats with Groovy, as it seems not to want to use them anyway?
Groovy decided to fall back to five standard result types for numeric operations: int, long, BigInteger, double and BigDecimal. Thus adding or multiplying two floats returns a double. Division and power are special.
From http://www.groovy-lang.org/syntax.html
Division and power binary operations aside,
binary operations between byte, char, short and int result in int
binary operations involving long with byte, char, short and int result in long
binary operations involving BigInteger and any other integral type result in BigInteger
binary operations between float, double and BigDecimal result in double
binary operations between two BigDecimal result in BigDecimal
As for whether you should abandon float: normally it is good enough to convert the double to float, especially since Groovy is doing that automatically for you.
.NET (C#) does something similar with 16-bit integers: addition of Bytes or Int16s yields an Int32. Possibly to prevent overflows.
Operations with "smaller" data types may result in the "bigger" data types. And with bigger, I mean more bits.
As illustrated in this example (more digits also means more bits)
15 (2 digits) x 15 (2 digits) = 225 (3 digits)
1.5 (2 digits) x 1.5 (2 digits) = 2.25 (3 digits)
However, adding two 32-bit integers returns just a 32-bit integer, and adding two doubles just returns a double. This is because the (virtual) machine is optimized for working with these sizes, which is because physical processors used to be optimized for working with these sizes. Some of them still are: 32-bit operations are often still faster than 64-bit operations, even on 64-bit processors. However, 16-bit operations are not, or barely.
Your compiler attempts to protect you against overflows, and allows you to check for them explicitly. So unless you have a good reason not to, I'd default to using these types, and optionally truncate to a more compact type when storing the data.
Good reasons not to include scenarios where you process large amounts (thousands) of numbers, e.g. for graphics processing.
I need to port quite a few formulas from C to Python and vice versa. What is the best way to make sure that nothing breaks in the process?
I am primarily worried about automatic int/int = float conversions.
You could use the // operator. It performs an integer division, but it's not quite what you'd expect from C:
A quote from here:
The // operator performs a quirky kind of integer division. When the result is positive, you can think of it as truncating (not rounding) to 0 decimal places, but be careful with that.
When integer-dividing negative numbers, the // operator rounds “up” to the nearest integer. Mathematically speaking, it’s rounding “down” since −6 is less than −5, but it could trip you up if you were expecting it to truncate to −5.
For example, -11 // 2 in Python returns -6, where -11 / 2 in C returns -5.
I'd suggest writing and thoroughly unit-testing a custom integer division function that "emulates" C behaviour.
The page I linked above also has a link to PEP 238 which has some interesting background information about division and the changes from Python 2 to 3. There are some suggestions about what to use for integer division, like divmod(x, y)[0] and int(x/y) for positive numbers, perhaps you'll find more useful things there.
In C:
-11/2 = -5
In Python:
-11/2 = -5.5
And also in Python:
-11//2 = -6
To achieve C-like behaviour, write int(-11/2) in Python. This will evaluate to -5.
Some ways to compute integer division with C semantics are as follows:
def div_c0(a, b):
    if (a >= 0) != (b >= 0) and a % b:
        return a // b + 1
    else:
        return a // b

def div_c1(a, b):
    q, r = a // b, a % b
    if (a >= 0) != (b >= 0) and r:
        return q + 1
    else:
        return q

def div_c2(a, b):
    q, r = divmod(a, b)
    if (a >= 0) != (b >= 0) and r:
        return q + 1
    else:
        return q

def mod_c(a, b):
    return (a % b if b >= 0 else a % -b) if a >= 0 else (-(-a % b) if b >= 0 else a % b)

def div_c3(a, b):
    r = mod_c(a, b)
    return (a - r) // b
With timings:
import itertools

n = 100
l = [x for x in range(-n, n + 1)]
ll = [(a, b) for a, b in itertools.product(l, repeat=2) if b]

funcs = div_c0, div_c1, div_c2, div_c3
for func in funcs:
    correct = all(func(a, b) == funcs[0](a, b) for a, b in ll)
    print(f"{func.__name__} correct:{correct} ", end="")
    %timeit [func(a, b) for a, b in ll]
# div_c0 correct:True 100 loops, best of 5: 10.3 ms per loop
# div_c1 correct:True 100 loops, best of 5: 11.5 ms per loop
# div_c2 correct:True 100 loops, best of 5: 13.2 ms per loop
# div_c3 correct:True 100 loops, best of 5: 15.4 ms per loop
This indicates that the first approach is the fastest.
For implementing C's % using Python, see here.
In the opposite direction:
Since Python 3's divmod (or //) integer division requires the remainder to have the same sign as the divisor when the remainder is non-zero, it is inconsistent with many other languages (quote from 1.4. Integer Arithmetic).
To make your "C-like" result match Python's, you should compare the remainder with the divisor (suggestion: check whether the XOR of the sign bits equals 1, or whether their product is negative), and if the signs differ, add the divisor to the remainder and subtract 1 from the quotient.
// Python divmod requires a remainder with the same sign as the divisor for
// a non-zero remainder.
// Assuming isPyCompatible is a flag to distinguish C/Python mode:
isPyCompatible *= (int)remainder;   // only adjust when the remainder is non-zero
if (isPyCompatible)
{
    int32_t xorRes = remainder ^ divisor;
    int32_t andRes = xorRes & ((int32_t)((uint32_t)1 << 31));
    if (andRes)                     // sign bits differ
    {
        remainder += divisor;
        quotient -= 1;
    }
}
(Credit to Gawarkiewicz M. for pointing this out.)
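As a self-contained illustration of that adjustment (my sketch, reusing -11 / 2 from the earlier answers):
#include <stdio.h>
#include <stdint.h>

/* convert C's truncating division result to Python's floor semantics */
int main(void)
{
    int32_t dividend = -11, divisor = 2;
    int32_t quotient  = dividend / divisor;   /* -5 (C truncates toward zero) */
    int32_t remainder = dividend % divisor;   /* -1 (sign of the dividend)    */
    if (remainder != 0 && ((remainder ^ divisor) < 0))  /* signs differ */
    {
        remainder += divisor;
        quotient -= 1;
    }
    printf("%d %d\n", quotient, remainder);   /* prints -6 1, matching Python */
    return 0;
}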
You will need to know what the formula does, and understand both the C implementation and how to implement it in Python. But unless you are doing integer maths, it should be quite similar; and if you are doing integer maths, the question is why. :)
Integer maths is done either for some specific purpose, often related to computers, or because it is faster than floats when doing massive computations, like Fractint does for fractals, and in that case Python is usually not the right choice. ;)