Why is the size of a string bigger than the size of its encoding in Python 3?

In Python 3, the size of a string such as 'test' is larger than I expect: 'test'.__sizeof__() returns 73. However, if I encode it as UTF-8, 'test'.encode().__sizeof__() returns 37.
Why is the size of the string significantly larger than the size of its UTF-8 encoding?

In CPython, up to and including 3.2, the characters of unicode strings (which became str in 3.x) were stored as either 16- or 32-bit unsigned ints, depending on whether one had a 'narrow' or 'wide' build (always narrow on Windows; both were used on Linux). In 3.3 and later, CPython switched to the flexible string representation (FSR), using 1, 2, or 4 bytes (8, 16, or 32 bits) per char, depending on the width needed for the 'widest' char in the string. See PEP 393.
For 64-bit 3.4.3, 'test'.__sizeof__() == 53, while b'test'.__sizeof__() == 37 still. Since both use 1 byte per char, the extra 16 bytes are overhead in the string object. Part of that is the hidden specification of whether the string uses 1, 2, or 4 bytes per char. For comparison, 'tes\u1111'.__sizeof__() == 82 and 'tes\U00011111'.__sizeof__() == 92.
(No, I do not know why the jump to 82. One would probably have to check the code to be sure.)
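To see the per-character width in action, here is a minimal sketch (the exact numbers vary by CPython version and build; the jumps between the three strings are what illustrate the 1-, 2-, and 4-byte storage):
# Object sizes for strings whose widest character needs 1, 2, or 4 bytes,
# plus the equivalent bytes object; exact values are build-dependent.
for s in ['test', 'tes\u1111', 'tes\U00011111']:
    print(repr(s), s.__sizeof__())
print(repr(b'test'), b'test'.__sizeof__())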

Before 3.3, str in Python 3 was typically stored as 16-bit (or 32-bit) code units instead of bytes, unlike the encoded bytes object, which made the string two (or four) times as large per character. Since 3.3 an ASCII-only string uses one byte per character, but the str object also carries extra metadata, inflating it beyond the equivalent bytes object.

Related

How can I shorten a hexadecimal string further?

I am using MongoDB's built-in id fields to label products, and for ease of use/typability I would like to compress the _id field down from a hexadecimal string that looks like 5b69c35ac2cc78c8979a8a9b to something shorter that involves all letters of the alphabet (both uppercase and lowercase) and numbers. Preferably it would involve no more than 10 or 12 characters. Are there any common methods of accomplishing this in Node.js/MongoDB?
You could convert them to base64; that would make them 16 characters long.
Example:
Buffer.from('5b69c35ac2cc78c8979a8a9b', 'hex').toString('base64') // W2nDWsLMeMiXmoqb
It's better if you can directly access the Buffer - converting many ObjectIds from string could be costly.
The string 5b69c35ac2cc78c8979a8a9b is 24 hex characters long, i.e. 12 bytes of data, which means the absolute minimum number of bytes needed to represent this value without losing information is 12 - and those are raw bytes ranging from 0-255, which is not what we want.
If we take a look at the ObjectId we could (maybe) eliminate some bytes:
a 4-byte value representing the seconds since the Unix epoch,
a 3-byte machine identifier,
a 2-byte process id, and
a 3-byte counter, starting with a random value.
Removing the machine identifier and process id (if all ids are generated by the same process) would leave us with 7 bytes (values 0-255), which is still not ideal to encode in base64 or even base32.
So it would probably be better to just use a 32-bit unsigned integer for the product codes and display it as hex using 8 characters (the leading zeros could be removed).
Encoding those 4 bytes in base64 wouldn't help much (every 3 bytes become 4 characters), and personally I would prefer case-insensitive ids for use in URLs, which would leave us with base32.
For better ease of use/typability than hexadecimal, those 4 bytes could be encoded in z-base-32 and would fit in 7 characters without padding (7 * 5 bits = 35 bits), as sketched below.
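As a rough illustration of that last idea (in Python rather than Node.js, purely to show the bit manipulation; the alphabet below is the z-base-32 one as far as I recall, and any 32-character alphabet works the same way):
ALPHABET = "ybndrfg8ejkmcpqxot1uwisza345h769"  # z-base-32 alphabet (assumed)

def encode_id(n):
    # 7 characters * 5 bits = 35 bits, enough for any 32-bit unsigned value
    chars = []
    for _ in range(7):
        chars.append(ALPHABET[n & 0x1F])
        n >>= 5
    return ''.join(reversed(chars))

def decode_id(s):
    n = 0
    for c in s:
        n = (n << 5) | ALPHABET.index(c)
    return n

code = encode_id(0x5b69c35a)       # hypothetical 32-bit product id
print(code, hex(decode_id(code)))  # 7-character code that round-trips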

Problems with SHA-2 Hashing and Java

I am working through the SHA-2 cryptographic hash functions as described in https://en.wikipedia.org/wiki/SHA-2.
I am examining the lines that say:
begin with the original message of length L bits
append a single '1' bit
append K '0' bits, where K is the minimum number >= 0 such that L + 1 + K + 64 is a multiple of 512
append L as a 64-bit big-endian integer, making the total post-processed length a multiple of 512 bits
I do not understand the last two lines. If my string is short, can its length after adding the K '0' bits be 512? How should I implement this in Java code?
First of all, it should be made clear that the "string" that is talked about is not a Java String but a bit string. These algorithms are binary/bit based. The implementation will generally not handle bits but bytes. So there is a translation phase where you should see bytes instead of bits.
SHA-2 operates on blocks of 512 bits (SHA-224/256) or 1024 bits (SHA-384/512). So basically you have a 64- or 128-byte buffer that you fill before operating on it. You could also directly cache the data in 32-bit int fields (SHA-224/256) or 64-bit long fields (SHA-384/512), as that is the word size that is operated on.
Now the padding is a relatively simple procedure, called bit padding. As SHA-2 operates in big-endian mode (fortunately, instead of the braindead little-endian mode in SHA-3), the padding consists of a single set bit at the highest-order position of a byte, with the rest filled with zeros. That makes for a value of (byte) 0x80, which must be put in the buffer.
If you cannot add this padding byte because the buffer is full, then you will have to process the previous block first and then set the first byte of the now-available buffer to (byte) 0x80. In newer Java you can also write (byte) 0b1_0000000, by the way, which is more explicit.
Now you simply add zeros until you have 8 or 16 bytes left, again depending on the hash output size used. If there isn't enough room, fill to the end of the buffer, process the block, and restart filling with zero bytes until you have 8 or 16 bytes left again.
Finally you have to encode the number of bits into those 8 or 16 remaining bytes. So multiply your input length in bytes by eight, and encode the result big-endian, just as you'd expect in Java, with the least significant bits as far to the right as possible. You might want to use https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html#putLong-long- for this if you don't want to program it yourself. You can probably forget about anything over 2^56 bytes anyway, so if you have SHA-384/SHA-512 then simply set the first eight of those sixteen bytes to zero.
And that's it, except that you still need to process that last block and then use as many bytes from the left as required for your particular output size.
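To make the byte layout concrete, here is a minimal sketch of the padding for the 512-bit-block variants (SHA-224/256), written in Python rather than Java since it only shows the layout; a real implementation would pad in place inside the block buffer as described above instead of building a new byte string:
def pad_sha256(message):
    # message is the raw input as bytes; the block size is 64 bytes and the
    # length field 8 bytes (use 128 and 16 for SHA-384/512).
    bit_length = len(message) * 8                  # L, the original length in bits
    padded = message + b'\x80'                     # the single '1' bit, as 0x80
    padded += b'\x00' * ((56 - len(padded)) % 64)  # zeros up to 8 bytes short of a block
    padded += bit_length.to_bytes(8, 'big')        # L as a 64-bit big-endian integer
    return padded

assert len(pad_sha256(b'abc')) % 64 == 0      # total length is a multiple of 512 bits
assert len(pad_sha256(b'a' * 56)) % 64 == 0   # the buffer-full case spills into a second block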

How to determine the byte length of a long

Is there a fast way to determine the number of bytes used in a long? I'm looking for something like this:
len((1000**1000).to_bytes())
(The problem of course is that to_bytes wants the number of bytes as input.)
(x.bit_length() + 7) // 8 will do what you want. Number of bits, converted to bytes and rounded up.
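For example, a quick cross-check of the formula against int.to_bytes:
n = 1000 ** 1000
nbytes = (n.bit_length() + 7) // 8
# nbytes is enough: the round-trip succeeds
assert int.from_bytes(n.to_bytes(nbytes, 'big'), 'big') == n
# and it is the minimum: one byte fewer raises OverflowError
try:
    n.to_bytes(nbytes - 1, 'big')
except OverflowError:
    print('minimum byte length:', nbytes)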

Perl: string length limitations in real life

While, for example, perldata documents that scalar strings in Perl are limited only by available memory, I'm strongly suspecting in real life there would be some other limits.
I'm considering the following ideas:
I'm not sure how strings are implemented in Perl — is there some sort of byte/character counter? If there is, then probably it's implemented as a platform-dependent integer (i.e. 32-bit or 64-bit), so effectively it would limit strings to something like 2 ** 31, 2 ** 32, 2 ** 63 or 2 ** 64 bytes.
If Perl doesn't use a counter and instead uses some byte to terminate the string (which would be strange, as it's perfectly ok to have a string like "foo\0bar" in Perl), then all operations would inevitably get much slower as string length increases.
Most string functions in Perl, such as length, return a normal scalar integer, and I strongly suspect that it would be a platform-limited integer too.
So, what would be the other factors that limit Perl string length in real life? What should be considered an okay string length for practical purposes?
Perl keeps track of the size of the string's buffer and the number of bytes used in it.
$ perl -MDevel::Peek -e'$x="abcdefghij"; Dump($x);'
SV = PV(0x9222b00) at 0x9222678
REFCNT = 1
FLAGS = (POK,pPOK)
PV = 0x9238220 "abcdefghij"\0
CUR = 10 <-- 10 bytes used
LEN = 12 <-- 12 bytes allocated
On a 32-bit build of Perl, it uses 32-bit unsigned integers for these values. This is (exactly) large enough to create a string that uses up your process's entire 4 GiB address space.
On a 64-bit build of Perl, it uses 64-bit unsigned integers for those values. This is (exactly) large enough to create a string that uses up your process's entire 16 EiB address space.
The docs are correct. The size of the string is limited only by available memory.

Efficient binary-to-string formatting (like base64, but for UTF8/UTF16)?

I have many bunches of binary data, ranging from 16 to 4096 bytes, which need to be stored in a database and which should be easily comparable as a unit (e.g. two bunches of data match only if the lengths match and all bytes match). Strings are nice for that, but converting binary data blindly to a string is apt to cause problems due to character-encoding/reinterpretation issues.
Base64 encoding was a common method for storing binary data in strings in an era when 7-bit ASCII was the norm; its 33% space penalty was a little annoying, but not horrible. Unfortunately, if one is using UTF-16, the space penalty is 166% (8 bytes to store 3), which seems pretty icky.
Is there any common storage method for storing binary data in a valid Unicode string which will allow better efficiency in UTF-16 (and hopefully not be too horrible in UTF-8)? A base-32768 coding would store 240 bits in sixteen characters, which would take 32 bytes of UTF-16 or 48 bytes of UTF-8. By comparison, base64 coding would use 40 characters, which would take 80 bytes of UTF-16 or 40 bytes of UTF-8. An approach which was designed to take the same space in UTF-8 or UTF-16 might store 48 bits in three characters that would take eight bytes in either UTF-8 or UTF-16, thus storing 240 bits in 40 bytes of either UTF-8 or UTF-16.
Are there any standards for anything like that?
Base32768 does exactly what you wanted. Sorry it took five years to exist.
Usage (this is JavaScript, although porting the base32768 module to another programming language is eminently practical):
var base32768 = require("base32768");
var buf = Buffer.from("d41d8cd98f00b204e9800998ecf842", "hex"); // 15 bytes
var str = base32768.encode(buf);
console.log(str); // "迎裶垠⢀䳬Ɇ垙鸂", 8 code points
var buf2 = base32768.decode(str);
console.log(buf.equals(buf2)); // true
Base32768 selects 32,768 characters from the Basic Multilingual Plane. Each character takes 2 bytes when represented as UTF-16 or 3 bytes when represented as UTF-8, giving exactly the efficiency characteristics you describe: 240 bits can be stored in 16 characters i.e. 32 bytes of UTF-16 or 48 bytes of UTF-8. (Except for the occasional padding character, analogous to the = padding seen in Base64.)
This is done by dicing the input bytes (i.e. 8-bit unsigned numbers) into 15-bit unsigned numbers and assigning each resulting 15-bit number to one of the 32,768 characters.
Note that the characters chosen are also "safe" - no whitespace, control characters, combining diacritics or susceptibility to normalization corruption.
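For intuition about the dicing step, here is a minimal sketch in Python of just the regrouping from 8-bit bytes into 15-bit numbers; as I understand it the real library handles leftover bits with a smaller, separate character repertoire rather than the plain zero-padding used here:
def dice_15(data):
    # Regroup 8-bit bytes into 15-bit unsigned numbers, most significant bit first.
    bits = ''.join(format(b, '08b') for b in data)
    bits += '0' * (-len(bits) % 15)               # pad to a multiple of 15 bits
    return [int(bits[i:i + 15], 2) for i in range(0, len(bits), 15)]

print(dice_15(bytes.fromhex('d41d8cd98f00b204e9800998ecf842')))  # 8 numbers for 15 bytes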
