Implementing a 64-bit integer using 32-bit integers

I am implementing an OPC UA server from the specification. OPC UA encodes DateTime as a 64-bit signed integer. The server will run on a 32-bit embedded system that doesn't support standard 64-bit integers.
I've tried looking online, but I can't find useful articles on the subject. I also know that floating-point numbers can be implemented from an open IEEE standard, but I can't seem to find a standardized representation for 64-bit integers.
I am using ANSI C for the project.
Where can I find some material or content to get me started on this task?

You can implement a 64-bit integer using two 32-bit integers in exactly the same way that you can implement a 2-digit integer from two 1-digit integers. In other words, what you have is nothing but a 2-"digit" number in base 2^32.
If you know how to count beyond 9 using only the digits 0-9, then you know how to count beyond 2^32 - 1 using only "digits" 0 to 2^32 - 1, because it is the exact same thing.
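To make the analogy concrete, here is a minimal ANSI C sketch (the type and function names are made up for illustration, and unsigned long is assumed to be 32 bits on the target): the 64-bit value is stored as two 32-bit "digits", and addition works exactly like schoolbook addition in base 2^32, adding the low digits first and carrying into the high digits if the low sum wrapped.

#include <stdio.h>

typedef unsigned long u32;      /* assumed to be 32 bits on the target */

typedef struct {
    u32 hi;   /* most significant 32-bit "digit"  */
    u32 lo;   /* least significant 32-bit "digit" */
} u64_t;

u64_t u64_add(u64_t a, u64_t b)
{
    u64_t r;
    r.lo = a.lo + b.lo;                 /* wraps modulo 2^32 */
    r.hi = a.hi + b.hi + (r.lo < a.lo); /* carry in if the low sum wrapped */
    return r;
}

int main(void)
{
    u64_t a = { 0x00000000UL, 0xFFFFFFFFUL };   /* 2^32 - 1 */
    u64_t b = { 0x00000000UL, 0x00000001UL };   /* 1 */
    u64_t s = u64_add(a, b);                    /* expect 0x00000001 00000000 */
    printf("%08lX%08lX\n", s.hi, s.lo);
    return 0;
}

Subtraction, comparison, shifts and multiplication follow the same digit-by-digit pattern, moving carries and borrows between the two words.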

Related

How to use leading_zeros/trailing_zeros in a platform-independent way?

I want to find the first non-zero bit in the binary representation of a u32. leading_zeros/trailing_zeros looks like what I want:
let x: u32 = 0b01000;
println!("{}", x.trailing_zeros());
This prints 3 as expected and described in the docs. What will happen on big-endian machines: will it be 3 or some other number?
The documentation says
Returns the number of trailing zeros in the binary representation
Is it related to the machine's binary representation (so the result of trailing_zeros depends on the architecture) or to the base-2 numeral system (so the result will always be 3)?
The type u32 represents binary numbers with 32 bits as an abstract concept. You can imagine them as abstract, mathematical numbers in the range from 0 to 2^32 - 1. The binary representation of these numbers is written in the usual convention of starting with the most significant bit (MSB) and ending with the least significant bit (LSB), and the trailing_zeros() method returns the number of trailing zeros in that representation.
Endianness only comes into play when serializing such an integer to bytes, e.g. for writing it to a bytes buffer, a file or the network. You are not doing any of this in your code, so it doesn't matter here.
As mentioned above, writing a number starting with the MSB is also just a convention, but this convention is pretty much universal today for numbers written in positional notation. For programming, this convention is only relevant when formatting a number for display, parsing a number from a string, and maybe for naming methods like trailing_zeros(). When storing a u32 in a register, the bits don't have any defined order.
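The question is about Rust, but the same point can be sketched in C (a hypothetical helper, assuming an unsigned long of at least 32 bits): counting trailing zeros works on the numeric value and gives the same answer on every machine, while byte order only becomes visible once the value is copied into memory as individual bytes.

#include <stdio.h>
#include <string.h>

static unsigned trailing_zeros(unsigned long x)
{
    unsigned n = 0;
    if (x == 0) return (unsigned)(8 * sizeof x);  /* all bits are zero */
    while ((x & 1UL) == 0) { x >>= 1; n++; }
    return n;
}

int main(void)
{
    unsigned long x = 0x08UL;            /* 0b01000, as in the question */
    unsigned char bytes[sizeof x];

    printf("%u\n", trailing_zeros(x));   /* always 3, on any architecture */

    memcpy(bytes, &x, sizeof x);         /* only here does endianness matter: */
    printf("first byte in memory: %02X\n", (unsigned)bytes[0]);
                                         /* 08 on little-endian, 00 on big-endian */
    return 0;
}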

Is it possible to represent a 64-bit number as a string when the hardware doesn't support 64-bit numbers?

I want to show a 64-bit number as a string. The problem is that my hardware doesn't support 64-bit numbers, just 32-bit ones.
So I have the 64-bit number split into two 32-bit numbers (high and low part).
Example: 64-bit number : 12345678987654321 (002B DC54 6291 F4B1h)
32-bit low part: 1653732529 (6291 F4B1h)
32-bit high part: 2874452 (002B DC54h)
I think the solution to my problem would be showing this number as a string.
Is it possible?
Thanks.
Yes, you can use an array of 32-bit uints, or even a lower bit-width ...
For printing you can use this:
hex to dec
So first print a hex string, which is easy on any bit-width (as you just stack the prints of the lower bit-width words together from MSW to LSW), and then convert the hex text to dec text ...
With this chained array of uints you can do the math operations like this:
Cant make value propagate through carry
Doing operations on an array of uints is much, much faster than on strings ...
but if you insist, yes, you can use a string representation too ...
There are also hybrid representations like BCD that are suitable for this, but your MCU would need to have support for them ...
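As a small illustration of the hex-printing step (a C sketch using the values from the question, not code from the linked answers): the two 32-bit halves are simply printed back to back, MSW first.

#include <stdio.h>

int main(void)
{
    unsigned long hi = 0x002BDC54UL;   /* high 32-bit part */
    unsigned long lo = 0x6291F4B1UL;   /* low 32-bit part  */
    char text[17];

    sprintf(text, "%08lX%08lX", hi, lo);   /* "002BDC546291F4B1" */
    printf("%s\n", text);
    return 0;
}

Converting that hex text to decimal text is then a separate, bit-width-independent step, as described in the linked answer.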
Depending on your language of choice, the language may allow you to use greater-than-32-bit integers even on 32-bit architectures (like Python).
If that is the case, the problem becomes trivial: compute the value, then compute the corresponding hex string.

Why doesn't this comparison work as I expect? PL/I

This comparison prints '0'b. I don't understand why... As far as I know, strings are automatically converted to float in PL/I if needed.
put skip list('-2.34e-1'=-2.34e-1);
I have tested this in our environment (Enterprise PL/I V4.5 on z/OS) and found the same behaviour - under certain compile options.
Using the option FLOAT(NODFP) (i.e. do not use native support for decimal floating point; I think the option was introduced with Enterprise PL/I V4.4), the following happens:
the literal -2.34e-1 is converted to its internal representation as bin float(6), i.e. short binary floating point
the literal '-2.34e-1' is compared with a bin float(6) value, so it has to be converted to a bin float as well
since -0.234 does not have an exact representation as a binary fraction, it seems the compiler converts it to a bin float(54), i.e. an extended binary floating-point value, to get maximum precision.
So, since -0.234 has an infinite number of digits after the point in its binary representation, but the two converted values preserve a different number of those digits, the values do not compare equal.
Under FLOAT(DFP) (i.e. when using the machine's DFP support):
the internal representation of the literal -2.34e-1 is an actual decimal floating point and thus exact
as is the representation of '-2.34e-1'
so under this compile option both compare equal and the output of your program is '1'b.
So your problem is a combination of the compiler's different choices of data representation and the resulting rounding errors from using binary floating point of different precisions.
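The same effect can be reproduced outside PL/I. Here is a small C sketch (only an analogue, not the compiler's actual behaviour): the one constant, rounded once to a shorter binary floating-point format and once to a longer one, no longer compares equal.

#include <stdio.h>

int main(void)
{
    float  shorter = -2.34e-1f;  /* rounded to a 24-bit binary mantissa */
    double longer  = -2.34e-1;   /* rounded to a 53-bit binary mantissa */

    /* shorter is widened to double for the comparison, but it still carries
       only the coarser rounding, so the two values are not equal */
    printf("%d\n", shorter == longer);   /* prints 0 */
    return 0;
}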

Strategies for parallel implementation of Lua numbers and a 64-bit integer

Lua by default uses a double-precision floating-point (double) type as its only numeric type. That's nice and useful. However, I'm working on software that expects to see 64-bit integers, and I can't get around using actual 64-bit integers one way or another.
The place where the integer type becomes relevant is file sizes. Although I don't truly expect to see file sizes beyond what Lua can represent with full "integer" precision using a double, I want to be prepared.
What strategies can you recommend for using a 64-bit integer type in parallel with the default numeric type of Lua? I don't really want to throw the default implementation overboard (and I'm not worried about its performance compared to integer arithmetic), but I need some way of representing 64-bit integers up to their full precision without too much of a performance penalty.
My problem is that I'm unsure where to modify the behavior. Should I modify the syntax and extend the parser (numbers with an appended LL or ULL come to mind, which to my knowledge don't exist in default Lua), or should I instead write my own C module and define a userdata type that represents the 64-bit integer, along with library functions able to manipulate the values? ...
Note: yes, I am embedding Lua, so I am free to extend it whichever way I please.
As part of LuaJIT's port to ARM CPUs (which often have poor floating-point support), LuaJIT implemented a "dual-number VM", which allows it to switch between integers and floats dynamically as needed. You could use this approach yourself, just switching between 64-bit integers and doubles instead of 32-bit integers and floats.
It's currently live in builds, so you may want to consider using LuaJIT as your Lua "interpreter." Or you could use it as a way to learn how to do this sort of thing.
However, I do agree with Marcelo; the 53-bit mantissa should be plenty. You shouldn't really need this for a good 10 years or so.
I'd suggest storing your data outside of Lua and using some type of reference to retrieve it when calling your other libraries. You can then push various results onto the Lua stack for the user to see; you can even retrieve the value as a string to be precise, but I would avoid modifying them in Lua and relying on the Lua values when calling your external library.
If you're not going to need floating-point precision at any point in the program, you can just redefine LUA_NUMBER to __int64 (or whatever 64-bit int may be in your environment) in luaconf.h.
Otherwise, you can just bring in another library to handle your integers- for infinite precision, you can use a bignum library such as lhf's lbn.
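For the userdata route mentioned in the question, a rough C sketch of such a module is below. All names here (luaopen_int64, int64.new, the metatable string) are invented for illustration, only __add and __tostring are shown, and it assumes a host compiler with int64_t. The value lives in a userdata, so it never loses precision to Lua's double.

#include <stdint.h>
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>

#define INT64_MT "example.int64"   /* hypothetical metatable name */

static int64_t *check_int64(lua_State *L, int idx) {
    return (int64_t *)luaL_checkudata(L, idx, INT64_MT);
}

static void push_int64(lua_State *L, int64_t v) {
    int64_t *p = (int64_t *)lua_newuserdata(L, sizeof(int64_t));
    *p = v;
    luaL_getmetatable(L, INT64_MT);
    lua_setmetatable(L, -2);
}

/* int64.new(n) -- build a boxed value from an ordinary Lua number */
static int l_int64_new(lua_State *L) {
    push_int64(L, (int64_t)luaL_checknumber(L, 1));
    return 1;
}

/* __add metamethod for two boxed values */
static int l_int64_add(lua_State *L) {
    push_int64(L, *check_int64(L, 1) + *check_int64(L, 2));
    return 1;
}

/* __tostring -- render the full 64-bit precision as text */
static int l_int64_tostring(lua_State *L) {
    char buf[32];
    sprintf(buf, "%lld", (long long)*check_int64(L, 1));
    lua_pushstring(L, buf);
    return 1;
}

int luaopen_int64(lua_State *L) {
    luaL_newmetatable(L, INT64_MT);
    lua_pushcfunction(L, l_int64_add);
    lua_setfield(L, -2, "__add");
    lua_pushcfunction(L, l_int64_tostring);
    lua_setfield(L, -2, "__tostring");

    lua_newtable(L);                    /* the module table */
    lua_pushcfunction(L, l_int64_new);
    lua_setfield(L, -2, "new");
    return 1;
}

On the Lua side (assuming the module is loaded as int64), usage would look like print(int64.new(40) + int64.new(2)), with every further operator added as another metamethod.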

Multiplying 23-bit datatypes on a system with no long long

I am trying to implement floating-point operations on a microcontroller, and so far I have had ample success.
The problem lies in the multiplication. The following is how I do it on my computer, where it works fine:
unsigned long long gig,mm1,mm2;
unsigned long m,m1,m2;
mm1 = f1.float_parts.mantissa;
mm2 = f2.float_parts.mantissa;
m1 = f1.float_parts.mantissa;
m2 = f2.float_parts.mantissa;
gig = mm1*mm2; // this works fine, I get all the bits I need since they are all long long, but it won't work on the MCU
gig = m1*m2;   // this does not work: to be precise, it gives only the 32 least significant bits, but it works on the MCU
So you can see that my problem is that the microcontroller will throw an undefined reference to __muldi3 if I try gig = mm1*mm2 there.
And if I try with the smaller data types, it only keeps the least significant bits, which I don't want. I need the 23 most significant bits of the product.
Does anyone have any ideas as to how I can do this?
Apologies for the short answer (I hope that someone else will take the time to write a fuller explanation), but basically you do exactly what you do when you multiply two big numbers by hand on paper! It's just that instead of working in base 10, you work in base 256. That is, treat your numbers as byte vectors, and do with each byte what you do with a digit when you "hand multiply".
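For the concrete case in the question, the "by hand" method can also be done in base 2^16, which needs nothing wider than the 32-bit multiplies the MCU already has. Here is a sketch (assuming unsigned long is at least 32 bits; the names are mine, not from the post): the 64-bit product of two 32-bit operands is returned as two 32-bit words.

#include <stdio.h>

typedef unsigned long u32;   /* assumed to be at least 32 bits */

/* 32x32 -> 64-bit multiply built from four 16x16 -> 32-bit partial products */
void mul32x32(u32 a, u32 b, u32 *hi, u32 *lo)
{
    u32 a_lo = a & 0xFFFFUL, a_hi = a >> 16;
    u32 b_lo = b & 0xFFFFUL, b_hi = b >> 16;

    u32 p0 = a_lo * b_lo;    /* each partial product fits in 32 bits */
    u32 p1 = a_lo * b_hi;
    u32 p2 = a_hi * b_lo;
    u32 p3 = a_hi * b_hi;

    /* column of "digits" at position 16, including the carry out of p0 */
    u32 mid = (p0 >> 16) + (p1 & 0xFFFFUL) + (p2 & 0xFFFFUL);

    *lo = ((mid << 16) | (p0 & 0xFFFFUL)) & 0xFFFFFFFFUL;
    *hi = p3 + (p1 >> 16) + (p2 >> 16) + (mid >> 16);
}

int main(void)
{
    u32 hi, lo;
    mul32x32(0x00700000UL, 0x00500000UL, &hi, &lo);  /* two 23-bit mantissas */
    printf("%08lX%08lX\n", hi, lo);
    return 0;
}

The most significant bits of the mantissa product can then be extracted from hi and the top of lo with shifts.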
The comments in the FreeBSD implementation of __muldi3() have a good explanation of the required procedure, see muldi3.c. If you want to go straight to the source (always a good idea!), according to the comments this code was based on an algorithm described in Knuth's The Art of Computer Programming vol. 2 (2nd ed), section 4.3.3, p. 278. (N.B. the link is for the 3rd edition.)
Back on the Intel 8088 (the original PC CPU and the last CPU I wrote assembly code for), when you multiplied two 16-bit numbers (32 bits? wow), the CPU would return two 16-bit numbers in two different registers - one with the 16 MSBs and one with the LSBs.
You should check the hardware capabilities of your microcontroller; maybe it has a similar setup (obviously you'll need to code this in assembly if it does).
Otherwise you'll have to implement multiplication on your own.

Resources