base64 representation of UUID using slugId from node.js

I am using slugid, a node.js module for converting a UUID to base64 URL-friendly text and vice versa (see: https://github.com/taskcluster/slugid). While executing tests, one of our QAs found the following behaviour, which I am unable to explain:
The slugs aOSL2RT_Rhy-xNuoe3j7ag and aOSL2RT_Rhy-xNuoe3j7ah generate the same UUID: 68e48bd9-14ff-461c-bec4-dba87b78fb6a.
This is also applicable to more slugs. Example:
0jafbB7qRRimQTPWwtwEkw, 0jafbB7qRRimQTPWwtwEkx. (Both of them translate to UUID: d2369f6c-1eea-4518-a641-33d6c2dc0493)
The encode and decode functions of slugid look sound, but I am unable to explain the behaviour above.

A "slugId" is 22 characters. Each character is base64, i.e. representing 6 bits, which means they have a total of 22×6=132 bits. However, UUIDs have only 128 bits; the last 4 bits of the slugId are discarded in the conversion, so there are 16 slugId values that map to each UUID value.
This means you need to sanitize all slugId values on input, e.g. by rejecting any value with one (or more) of those last 4 bits set. Presumably you are already validating them in other ways (e.g. too long, too short, invalid chars, etc.) so this is just one more minor test to be added to the list.
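For illustration, here is a minimal sketch in plain Node.js (not using slugid itself) that demonstrates the collision and one way to implement the suggested check; the helper names are mine:

// Decode a slug to its 16 UUID bytes; base64 decoding simply drops the
// 4 surplus bits contributed by the 22nd character.
const toUuidBytes = (slug) =>
  Buffer.from(slug.replace(/-/g, '+').replace(/_/g, '/') + '==', 'base64');

console.log(toUuidBytes('0jafbB7qRRimQTPWwtwEkw')
  .equals(toUuidBytes('0jafbB7qRRimQTPWwtwEkx'))); // true

// A canonical slug must end in A, Q, g or w: the only base64
// characters whose low 4 bits are all zero.
const isCanonicalSlug = (slug) => /^[A-Za-z0-9_-]{21}[AQgw]$/.test(slug);
console.log(isCanonicalSlug('0jafbB7qRRimQTPWwtwEkw')); // true
console.log(isCanonicalSlug('0jafbB7qRRimQTPWwtwEkx')); // false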


How can I shorten a hexadecimal string further?

I am using MongoDB's built-in id fields to label products, and for ease of usage/typability I would like to compress the _id field down from a hexadecimal string that looks like 5b69c35ac2cc78c8979a8a9b to something shorter involving all letters of the alphabet (both uppercase and lowercase) and numbers. Preferably it would involve no more than 10 or 12 characters. Are there any common methods of accomplishing this in Node.JS/MongoDB?
You could convert them to base64, which would make them 16 characters long.
Example:
Buffer.from('5b69c35ac2cc78c8979a8a9b', 'hex').toString('base64') // W2nDWsLMeMiXmoqb
It's better if you can directly access the Buffer - converting many ObjectIds from string could be costly.
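For example, a quick round trip (the 'base64url' encoding shown here needs Node.js 15.7+; it avoids the '+' and '/' characters that are awkward in URLs):

const id = Buffer.from('5b69c35ac2cc78c8979a8a9b', 'hex');
console.log(id.toString('base64'));    // W2nDWsLMeMiXmoqb
console.log(id.toString('base64url')); // same here, and never needs escaping
console.log(Buffer.from('W2nDWsLMeMiXmoqb', 'base64').toString('hex'));
// 5b69c35ac2cc78c8979a8a9b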
The string 5b69c35ac2cc78c8979a8a9b is 24 hex characters long, i.e. 12 bytes, which is the absolute minimum number of bytes needed to represent this value without losing information; but raw bytes (each 0-255) are not typable, which is not what we want.
If we take a look at the ObjectId we could (maybe) eliminate some bytes:
a 4-byte value representing the seconds since the Unix epoch,
a 3-byte machine identifier,
a 2-byte process id, and
a 3-byte counter, starting with a random value.
Removing the machine identifier and process id (if all ids are generated by the same process) would leave us with 7 bytes, which is still not ideal to encode in base64 or even base32.
So it would probably be better to just use a 32-bit unsigned integer for the product codes and display it as hex using 8 characters (the leading zeros could be removed).
Encoding those 4 bytes in base64 wouldn't help much (every 3 bytes become 4 characters), and personally I would prefer case-insensitive ids for use in URLs, which leaves only base32.
For better ease of usage/typability than hexadecimal, those 4 bytes could be encoded in z-base-32 and would fit in 7 characters without padding (7 × 5 bits = 35 bits).
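As an illustration, here is a minimal sketch of such an encoding. It uses the z-base-32 alphabet with a simple fixed-width layout (the official z-base-32 spec defines the bit ordering somewhat differently), and the function names are made up:

// z-base-32 alphabet, chosen to be easy for humans to read and type.
const ZBASE32 = 'ybndrfg8ejkmcpqxot1uwisza345h769';

// Encode an unsigned 32-bit integer as 7 characters (7 * 5 = 35 bits,
// so the top 3 bits are always zero).
function encodeId(n) {
  let out = '';
  for (let i = 6; i >= 0; i--) {
    out += ZBASE32[Math.floor(n / 2 ** (i * 5)) % 32];
  }
  return out;
}

function decodeId(s) {
  let n = 0;
  for (const ch of s) n = n * 32 + ZBASE32.indexOf(ch);
  return n;
}

console.log(encodeId(0xffffffff));           // 'd999999'
console.log(decodeId(encodeId(0xffffffff))); // 4294967295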

Node.js readUIntBE arbitrary size restriction?

Background
I am reading buffers using the Node.js buffer native API. This API has two functions called readUIntBE and readUIntLE for Big Endian and Little Endian respectively.
https://nodejs.org/api/buffer.html#buffer_buf_readuintbe_offset_bytelength_noassert
Problem
By reading the docs, I stumbled upon the following lines:
byteLength Number of bytes to read. Must satisfy: 0 < byteLength <= 6.
If I understand correctly, this means that I can only read 6 bytes at a time using this function, which makes it useless for my use case, as I need to read a timestamp comprised of 8 bytes.
Questions
Is this a documentation typo?
If not, what is the reason for such an arbitrary limitation?
How do I read 8 bytes in a row (or, more generally, sequences longer than 6 bytes)?
Answer
After asking in the official Node.js repo, I got the following response from one of the members:
No it is not a typo
The byteLength corresponds to e.g. 8bit, 16bit, 24bit, 32bit, 40bit and 48bit. More is not possible since JS numbers are only safe up to Number.MAX_SAFE_INTEGER.
If you want to read 8 bytes, you can read multiple entries by adding the offset.
Source: https://github.com/nodejs/node/issues/20249#issuecomment-383899009
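For example (a sketch; readBigUInt64BE needs Node.js 12+, and the buffer contents here are made up):

// An 8-byte big-endian value, e.g. a millisecond timestamp.
const buf = Buffer.from('0000016c8f7a3b2a', 'hex');

// Option 1: read it as a BigInt, exact for the full 64-bit range.
const ts = buf.readBigUInt64BE(0);

// Option 2: combine two 32-bit reads into a Number; only safe while
// the value stays below Number.MAX_SAFE_INTEGER (2**53 - 1).
const hi = buf.readUInt32BE(0);
const lo = buf.readUInt32BE(4);
const tsNumber = hi * 2 ** 32 + lo;

console.log(ts === BigInt(tsNumber)); // true for values below 2**53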

Reduce length of decimal variable (algorithm)

I have a string of decimal digits like:
965854242113548732659745896523654789653244879653245794444524
length: 60 characters
I want to send it to a function, but first I want to reduce its length as much as possible. How can I do that?
I thought about converting it to base-34, which gives 1RG7EEWTN7NW60EWIWMASEWWMEOSWC2SS8482WQE - 40 characters in length. Can I reduce it further somehow?
Your number fits into 70 bits (at least the 21-digit example you gave initially does) - for such a small payload, compression seems nonsensical. Assuming that the server API supports arbitrary binary data, I would simply encode the value in binary and prefix it with the number of bytes needed:
1 byte length information - for 854657986453156789675, the example you gave initially, this would be 9
9 bytes of binary payload
→ 10 bytes of data transferred for your example.
Your example in hex:
09 2e 54 c3 1e 81 cf 05 fd ab
With the length given in bytes, this of course supports only decimals of up to 255 bytes in length, but I suppose that is sufficient. If your transport protocol has a built-in concept of packet length, you could even skip the initial length byte.
Important: ensure that all sides use the same endianness. As you are transmitting your data over the network, network byte order (big endian) would be natural.
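A sketch of this scheme in Node.js using BigInt (10.4+); encodeDecimal is an illustrative name:

function encodeDecimal(dec) {
  let hex = BigInt(dec).toString(16);
  if (hex.length % 2) hex = '0' + hex;      // pad to whole bytes
  const payload = Buffer.from(hex, 'hex');  // big endian by construction
  return Buffer.concat([Buffer.from([payload.length]), payload]);
}

console.log(encodeDecimal('854657986453156789675').toString('hex'));
// 092e54c31e81cf05fdab (1 length byte plus 9 payload bytes)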
If you want to transmit very large numbers, keep in mind that you can apply any compression algorithm you like to the binary representation of your data. However, your payload must be significantly larger for compression to pay off - for example, zLib-compressing the 9-byte payload above yields an 18-byte payload because of the overhead of the zLib data structures.
If (and only if) you cannot use arbitrary bytes for your payload, you can encode your data (possibly after compression). Most modern libraries have built-in support for Base64, so this would be a natural way of representing the data.
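For example, using encodeDecimal from the sketch above:

console.log(encodeDecimal('854657986453156789675').toString('base64'));
// CS5Uwx6BzwX9qw== (16 characters for the 10-byte packet)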

How do I pre-determine the length of the resultant cipher text produced in an encryption operation?

I have an application which stores some information in an encrypted state, both on file and in a database. How can I calculate what the length of the resultant cipher text will be based on the plain text input?
The encryption operation consists of using the .NET RijndaelManaged class/algorithm and then a conversion to a Base64 string prior to storage.
What I want to be able to do is to know beforehand how long the encrypted string will be for a given input so that I can limit the length of the input accordingly in relation to the storage space available for its encrypted form (if that makes sense!).
Thanks
Rijndael's output is the same size as the input, rounded up to the next multiple of the block size (usually 128 bits, i.e. 16 bytes); with the default PKCS#7 padding, an input that is already an exact multiple grows by one whole block. Base64 then expands its input by a factor of 4/3 - it takes 4 characters of output to represent each 3 bytes of input.
So if you have, for example, an input of 70 bytes, the encryption step will produce 80 bytes of output (the closest multiple of 16 that is greater than 70), and Base64 will turn that into 108 characters (80 rounded up to 81, divided by 3, times 4).
The encrypted text will be the first multiple of the cipher's block size bigger than your text; check your algorithm's BlockSize property. Pure Base64 encoding increases the output by a third, but this can grow further if you also need to URL-escape (percent-encode) certain Base64 symbols (like '+' and '/').
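As a sketch, here is the calculation in code, assuming a 16-byte block cipher with PKCS#7 padding (the RijndaelManaged default) followed by Base64; if you also store an IV or other metadata, add its Base64-expanded size on top:

// Predicted storage length for a plaintext of n bytes.
function encryptedBase64Length(n, blockSize = 16) {
  // PKCS#7 always appends at least one padding byte.
  const padded = (Math.floor(n / blockSize) + 1) * blockSize;
  // Base64 emits 4 characters per 3 bytes (rounded up).
  return Math.ceil(padded / 3) * 4;
}

console.log(encryptedBase64Length(70)); // 108
console.log(encryptedBase64Length(64)); // 108 as well: exact multiples gain a full block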

To pad or not to pad - creating a communication protocol

I am creating a protocol to have two applications talk over a TCP/IP stream and am figuring out how to design a header for my messages. Using the TCP header as an initial guide, I am wondering if I will need padding. I understand that when we're dealing with a cache, we want to make sure that data being stored fits in a row of cache so that when it is retrieved it is done so efficiently. However, I do not understand how it makes sense to pad a header considering that an application will parse a stream of bytes and store it how it sees fit.
For example: I want to send over a message header consisting of a 3 byte field followed by a 1 byte padding field for 32 bit alignment. Then I will send over the message data.
In this case, the receiver will just take 3 bytes from the stream, throw away the padding byte, and then start reading the message data. As I see it, he will then store the 3 bytes and the message data however he wants. The whole point of byte alignment is that data can be retrieved efficiently, but if the receiver doesn't care about the padding, how is it retrieved efficiently?
Without the padding, the retriever just takes the 3 header bytes from the stream and then takes the data bytes. Since the retriever stores these bytes however he wants, how does it matter whether or not the padding is done?
Maybe I'm missing the point of padding.
It's slightly hard to extract a question from this post, but with what I've said you guys can probably point out my misconceptions.
Please let me know what you guys think.
Thanks,
jbu
If word alignment of the message body is of some use, then by all means, pad the message to avoid other contortions. The padding will be of benefit if most of the message is processed as machine words with decent intensity.
If the message is a stream of bytes, for instance XML, then padding won't do you a whole heck of a lot of good.
As far as actually designing a wire protocol, you should probably consider using a plain text protocol with compression (including the header), which will probably use less bandwidth than any hand-designed binary protocol you could possibly invent.
I do not understand how it makes sense to pad a header considering that an application will parse a stream of bytes and store it how it sees fit.
If I'm a receiver, I might pass a buffer (i.e. an array of bytes) to the protocol driver (i.e. the TCP stack) and say, "give this back to me when there's data in it".
What I (the application) get back, then, is an array of bytes which contains the data. Using C-style tricks like "casting" and so on I can treat portions of this array as if it were words and double-words (not just bytes) ... provided that they're suitably aligned (which is where padding may be required).
Here's an example of a statement which reads a DWORD from an offset in a byte buffer:
#include <stdint.h>

typedef uint32_t DWORD; // Windows-style names, defined so this compiles standalone
typedef uint8_t  byte;

DWORD getDword(const byte* buffer)
{
    //we want the DWORD which starts at byte-offset 8
    buffer += 8;
    //dereference as if it were pointing to a DWORD
    //(this would fail on some machines if the pointer
    //weren't pointing to a DWORD-aligned boundary)
    return *((const DWORD*)buffer);
}
Here's the corresponding function in Intel assembly; note that it's a single opcode, i.e. quite an efficient way to access the data, more efficient than reading and accumulating separate bytes:
mov eax,DWORD PTR [esi+8]
One reason to consider padding is if you plan to extend your protocol over time. Some of the padding can be intentionally set aside for future assignment.
Another reason to consider padding is to save a couple of bits in length fields: if the length is always a multiple of 4 or 8, the bottom 2 or 3 bits are implicit and can be dropped from the length field.
One other good reason that TCP has padding (which probably does not apply to you) is it allows dedicated network processing hardware to easily separate the data from the header. As the data always starts on a 32 bit boundary, it's easier to separate the header from the data when the packet gets routed.
If you have a 3 byte header and align it to 4 bytes, then designate the unused byte as 'reserved for future use' and require the bits to be zero (rejecting messages where they are not as malformed). That leaves you some extensibility. Or you might decide to use the byte as a version number - initially zero, and then incrementing it if (when) you make incompatible changes to the protocol. Don't let the value be 'undefined' and "don't care"; you'll never be able to use it if you start out that way.
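As a sketch in Node.js (the transport is language-agnostic; the field layout here is illustrative): a 4-byte header holding a 3-byte body length plus one reserved byte that must be zero:

// Build a 4-byte header: 3 bytes of big-endian length, 1 reserved byte.
function buildHeader(bodyLength) {
  const header = Buffer.alloc(4);       // zero-filled, so the reserved
  header.writeUIntBE(bodyLength, 0, 3); // byte is already 0
  return header;
}

// Parse it back, rejecting messages with the reserved bits set.
function parseHeader(header) {
  if (header[3] !== 0) throw new Error('malformed header: reserved byte set');
  return { bodyLength: header.readUIntBE(0, 3) };
}

console.log(parseHeader(buildHeader(42)).bodyLength); // 42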
