How to write 64-bit BigInt to Buffer? - node.js

Is it possible to write 64-bit BigInts into a Buffer in Node.js (10.7+) yet?
Or do I still have to do it in two operations?
let buf = Buffer.allocUnsafe(16);
buf.writeUInt32BE(Number(time >> 32n), 0);        // high 32 bits
buf.writeUInt32BE(Number(time & 0xFFFFFFFFn), 4); // low 32 bits
I can't find anything promising in the docs, but there are other barely documented methods such as BigInt.asUintN, so I thought I'd ask.

I was just faced with a similar problem (needing to build and write 64-bit IDs consisting of a 41-bit timestamp, a 13-bit node ID, and a 10-bit counter). The largest single value I was able to write to a buffer was 48 bits, using buf.writeIntLE(). So I ended up building and writing the high 48 bits and the low 16 bits independently. If there's a better way to do it, I'm not aware of it.
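For illustration, a minimal sketch of that two-write split, shown big-endian and using the BigInt type from the question (the 41/13/10-bit layout and the nodeId/counter values below are made up):
// Hypothetical 64-bit ID: 41-bit timestamp | 13-bit node ID | 10-bit counter
const nodeId = 42n;
const counter = 7n;
const id = (BigInt(Date.now()) << 23n) | (nodeId << 10n) | counter;
const buf = Buffer.allocUnsafe(8);
buf.writeUIntBE(Number(id >> 16n), 0, 6);    // high 48 bits (still fits in a JS number)
buf.writeUIntBE(Number(id & 0xFFFFn), 6, 2); // low 16 bits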

Did you already try this package?
https://github.com/substack/node-bigint#tobufferopts

Related

Node.js readUIntBE arbitrary size restriction?

Background
I am reading buffers using the Node.js native Buffer API. This API has two functions called readUIntBE and readUIntLE for big-endian and little-endian reads, respectively.
https://nodejs.org/api/buffer.html#buffer_buf_readuintbe_offset_bytelength_noassert
Problem
By reading the docs, I stumbled upon the following lines:
byteLength Number of bytes to read. Must satisfy: 0 < byteLength <= 6.
If I understand correctly, this means that I can only read 6 bytes at a time using this function, which makes it useless for my use case, as I need to read a timestamp comprised of 8 bytes.
Questions
Is this a documentation typo?
If not, what is the reason for such an arbitrary limitation?
How do I read 8 bytes in a row (or how do I read sequences greater than 6 bytes)?
Answer
After asking in the official Node.js repo, I got the following response from one of the members:
No it is not a typo
The byteLength corresponds to e.g. 8bit, 16bit, 24bit, 32bit, 40bit and 48bit. More is not possible since JS numbers are only safe up to Number.MAX_SAFE_INTEGER.
If you want to read 8 bytes, you can read multiple entries by adding the offset.
Source: https://github.com/nodejs/node/issues/20249#issuecomment-383899009
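A sketch of that suggestion, combining two 4-byte reads into one 8-byte value; BigInt is used here so the result stays exact beyond Number.MAX_SAFE_INTEGER (on Node 12+, buf.readBigUInt64BE(0) does this in a single call):
const buf = Buffer.from('0102030405060708', 'hex'); // example 8-byte big-endian timestamp
const hi = buf.readUInt32BE(0);                 // first 4 bytes
const lo = buf.readUInt32BE(4);                 // remaining 4 bytes
const value = (BigInt(hi) << 32n) | BigInt(lo); // 72623859790382856n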

How do I do bcalls in hex?

So I have a TI-84 Plus C Silver Edition. I just started working on writing assembly programs on it using the opcodes. I found a good reference chart here, but was wondering how to do bcalls, specifically how to print a character to the screen. It seems as if the hex code for the call is 3 bytes long, but a call only takes a 2-byte address. So how do I call it?
Also, does anyone know the memory location programs are loaded into when they are run for my calculator? I haven't been able to find it yet.
Based on the definitions here: http://wikiti.brandonw.net/index.php?title=84PCSE:OS:Include_File , a "bcall" is an RST 28h instruction followed by the 2-byte number of the specific bcall. So to print a character you would do (given that _PutC is 44FBh):
rst 28h
dw 44FBh
Presumably the character to print is in the A register.
TI uses rst 28h for its bcall, which assembles to the hex byte EF. The bcall address is 2 bytes, but keep in mind that the Z80 and eZ80 are little-endian processors. So, as mentioned earlier, _PutC is 44FBh, which means the FB byte comes first, then the 44, making bcall(_PutC) equivalent to EF FB 44.
I think the calc you are using has an eZ80. While the eZ80 is backwards compatible with the Z80 instruction set, the table to which you linked is not complete for the eZ80. If you are looking to get really wild, you can use the docs provided by Zilog here though I must warn you that if you are not fairly comfortable with Z80 Assembly, the reading material will be too dense.

nodejs buffer uint 32 from ruby?

I am trying to convert a Ruby program to NodeJS, but I seem to be getting stuck with buffers.
I have
rounds = header_bytes[120..-1].unpack('L*').first
in Ruby, which takes the buffer (header_bytes), slices from byte 120 to the end (the -1 means everything remaining), unpacks the slice as unsigned 32-bit integers in native byte order, and takes the first one.
I am trying to do the same thing in JS, but I can't seem to get it to work. I have
rounds = header.slice(120,124).toString('ucs2');
I've tried all the different formats in toString and nothing returns the same result as Ruby.
Assuming that header is an instance of Node's Buffer, you have a variety of functions for reading from a buffer as various sizes of integer, including
buf.readUInt32LE
buf.readUInt32BE
Both of these take the offset from which to read the bytes. The Ruby L specifier means native byte order, so which function you need depends on whether the code is running on a big- or little-endian platform. For example, on an x86 machine you'd do
header.readUInt32LE(120)
Protocols normally specify big- or little-endian (traditionally, network byte order is big-endian).
You can check the platform endianness with os.endianness().
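A sketch of a native-byte-order read that mirrors Ruby's 'L' specifier (the header buffer and the 120 offset are taken from the question; the Buffer.alloc line is only a placeholder):
const os = require('os');
const header = Buffer.alloc(124); // placeholder for the real header buffer
const rounds = os.endianness() === 'LE'
  ? header.readUInt32LE(120)
  : header.readUInt32BE(120);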

linux socket programming with the consideration of real size of char

I'm writing a client and server program with Linux socket programming, and I'm confused about something. Although sizeof(char) is guaranteed to be 1, I know the real size of a char may differ between computers: it may be 8 bits, 16 bits, or some other size. The problem is: what if the client and the server have different char sizes? For example, the client's char is 8 bits and the server's char is 16 bits. The client calls write(socket_fd, c, sizeof(char)) and the server calls read(socket_fd, c, sizeof(char)). Does the client send 8 bits while the server expects to receive 16 bits? If so, what will happen?
Another question: is it a good idea to pass text between client and server, so that I don't need to consider the big-endian/little-endian problem?
Thanks in advance.
What system are you communicating with that has 16 bits in a byte? In any case, if you want to know exactly how many bits you have, use a fixed-width type such as int8_t instead.
@Basile is right. A char is always eight bits on Linux. I found this in the book Linux Kernel Development, which also states some other rules:
Although there is no rule that the int type be 32 bits, it is in Linux on all currently supported architectures.
The same goes for the short type, which is 16 bits on all current architectures, although no rule explicitly decrees that.
Never assume the size of a pointer or a long, which can be either 32 or 64 bits on the currently supported machines in Linux.
Because the size of a long varies on different architectures, never assume that sizeof(int) is equal to sizeof(long).
Likewise, do not assume that a pointer and an int are the same size.
As for the choice between passing binary data or text data over the network, the book UNIX Network Programming, Volume 1 gives two solutions:
Pass all numeric data as text strings.
Explicitly define the binary formats of the supported datatypes (number of bits, big- or little-endian) and pass all data between the client and server in this format. RPC packages normally use this technique. RFC 1832 [Srinivasan 1995] describes the External Data Representation (XDR) standard that is used with the Sun RPC package.
The C definition of char, as the size of a memory cell, is different from the definition used in Unicode.
A Unicode code-point can, depending on the encoding used, require up to 6 bytes of storage.
This is a slightly different problem than byte order and word size differences between different architectures, etc.
If you wish to exchange complex structures (containing Unicode text), it's probably a good idea to implement a message protocol that encodes messages to a byte array, which can be sent over any communication channel.
A simple client/server mechanism is to send a fixed-size header containing the length of the message that follows. It's a nice exercise to build something like this in C... :-)
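As a rough illustration of that length-prefix idea (sketched in Node.js here, to match the Buffer examples elsewhere on this page, rather than in C):
// Encode: 4-byte big-endian length header followed by the UTF-8 payload
function frame(text) {
  const payload = Buffer.from(text, 'utf8');
  const header = Buffer.allocUnsafe(4);
  header.writeUInt32BE(payload.length, 0);
  return Buffer.concat([header, payload]);
}
// Decode one complete message from the front of a buffer
function unframe(buf) {
  const length = buf.readUInt32BE(0);
  return buf.slice(4, 4 + length).toString('utf8');
}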
Depending on what you are trying to do, it may be worthwhile to look at existing technologies for the message interface; look at Etch, Thrift, SWIG, *-rpc, ASN.1, SOAP, XML, JSON, CORBA, etc.

Doing file operations with 64-bit addresses in C + MinGW32

I'm trying to read in a 24 GB XML file in C, but it won't work. I'm printing out the current position using ftell() as I read it in, but once it gets to a big enough number, it goes back to a small number and starts over, never even getting 20% through the file. I assume this is a problem with the range of the variable that's used to store the position (a long), which can go up to about 4,000,000,000 according to http://msdn.microsoft.com/en-us/library/s3f49ktz(VS.80).aspx, while my file is 25,000,000,000 bytes in size. A long long should work, but how would I change what my compiler (Cygwin/MinGW32) uses, or get it to have fopen64?
The ftell() function typically returns an unsigned long, which only goes up to 2^32 bytes (4 GB) on 32-bit systems. So you can't get the file offset for a 24 GB file to fit into a 32-bit long.
You may have the ftell64() function available, or the standard fgetpos() function may return a larger offset to you.
You might try using the OS-provided file functions CreateFile and ReadFile. According to the File Pointers topic, the position is stored as a 64-bit value.
Unless you can use a 64-bit method as suggested by Loadmaster, I think you will have to break the file up.
This resource seems to suggest it is possible using _telli64(). I can't test this though, as I don't use mingw.
I don't know of any way to do this in one file. It's a bit of a hack, but if splitting the file up properly isn't a real option, you could write a few functions that temporarily split the file: one that uses ftell() to move through the file and switches to a new file when it reaches the split point, and another that stitches the files back together before exiting. An absolutely botched-up approach, but if no better solution comes to light, it could be a way to get the job done.
I found the answer. Instead of using fopen, fseek, fread, fwrite..., I'm using _open, _lseeki64, read, write, and I am able to write and seek in > 4 GB files.
Edit: It seems the latter functions are about 6x slower than the former ones. I'll give the bounty to anyone who can explain that.
Edit: Oh, I learned here that read() and friends are unbuffered. What is the difference between read() and fread()?
Even if the ftell() in the Microsoft C library returns a 32-bit value and thus obviously will return bogus values once you reach 2 GB, just reading the file should still work fine. Or do you need to seek around in the file, too? For that you need _ftelli64() and _fseeki64().
Note that unlike some Unix systems, you don't need any special flag when opening the file to indicate that it is in some "64-bit mode". The underlying Win32 API handles large files just fine.
