UTF-16 Encoding - Why use complex surrogate pairs?

I have been working on string encoding schemes, and while examining how UTF-16 works, I have a question. Why use complex surrogate pairs to represent a 21-bit code point? Why not simply store some of the bits in the first code unit and the remaining bits in the second code unit? Am I missing something? Is there a problem with storing the bits directly, like we do in UTF-8?
Example of what I am thinking of:
The character '🙃'
Corresponding code point: 128579 (Decimal)
The binary form: 1 1111 0110 0100 0011 (17 bits)
It's a 17-bit code point.
Following the UTF-8 scheme, it would be represented as:
240 : 11110 000
159 : 10 011111
153 : 10 011001
131 : 10 000011
In UTF-16, why not do something like the following rather than using surrogate pairs:
49159 : 110 0 0000 0000 0111
30275 : 01 11 0110 0100 0011

Proposed alternative to UTF-16
I think you're proposing an alternative format using 16-bit code units analogous to the UTF-8 code scheme — let's designate it UTF-EMF-16.
In your UTF-EMF-16 scheme, code points from U+0000 to U+7FFF would be encoded as a single 16-bit unit with the MSB (most significant bit) always zero. Then, you'd reserve 16-bit units with the 2 most significant bits set to 10 as 'continuation units', with 14 bits of payload data. And then you'd encode code points from U+8000 to U+10FFFF (the current maximum Unicode code point) in 16-bit units with the three most significant bits set to 110 and up to 13 bits of payload data. With Unicode as currently defined (U+0000 .. U+10FFFF), you'd never need more than 7 of the 13 bits set.
U+0000 .. U+7FFF — One 16-bit unit: values 0x0000 .. 0x7FFF
U+8000 .. U+10FFFF — Two 16-bit units:
1. First unit 0xC000 .. 0xC043
2. Second unit 0x8000 .. 0xBFFF
For your example code point, U+1F643 (binary: 1 1111 0110 0100 0011):
First unit: 1100 0000 0000 0111 = 0xC007
Second unit: 1011 0110 0100 0011 = 0xB643
The second unit differs from your example in reversing the two most significant bits, from 01 in your example to 10 in mine.
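To make the layout concrete, here is a minimal sketch of such an encoder in JavaScript (the function name encodeEmf16 is made up for this illustration; the scheme itself is the hypothetical UTF-EMF-16 described above, not any real standard):
function encodeEmf16(cp) {
  if (cp <= 0x7FFF) return [cp];           // one unit, most significant bit 0
  return [
    0xC000 | (cp >> 14),                   // 110x xxxx xxxx xxxx : high bits of the code point
    0x8000 | (cp & 0x3FFF)                 // 10xx xxxx xxxx xxxx : low 14 bits
  ];
}
console.log(encodeEmf16(0x1F643).map(u => u.toString(16)));  // ["c007", "b643"]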
Why wasn't such a scheme used in UTF-16
Such a scheme could be made to work. It is unambiguous. It could accommodate many more characters than Unicode currently allows. UTF-8 could be modified to become UTF-EMF-8 so that it could handle the same extended range, with some characters needing 5 bytes instead of the current maximum of 4 bytes. UTF-EMF-8 with 5 bytes would encode up to 26 bits; UTF-EMF-16 could encode 27 bits, but should be limited to 26 bits (roughly 64 million code points, instead of just over 1 million). So, why wasn't it, or something very similar, adopted?
The answer is the very common one – history (plus backwards compatibility).
When Unicode was first defined, it was hoped or believed that a 16-bit code set would be sufficient. The UCS2 encoding was developed using 16-bit values, and many values in the range 0x8000 .. 0xFFFF were given meanings. For example, U+FEFF is the byte order mark.
When the Unicode scheme had to be extended to make Unicode into a bigger code set, there were many defined characters with the 10 and 110 bit patterns in the most significant bits, so backwards compatibility meant that the UTF-EMF-16 scheme outlined above could not be used for UTF-16 without breaking compatibility with UCS2, which would have been a serious problem.
Consequently, the standardizers chose an alternative scheme, where there are high surrogates and low surrogates.
0xD800 .. 0xDBFF High surrogates (most significant bits of 21-bit value)
0xDC00 .. 0xDFFF Low surrogates (least significant bits of 21-bit value)
The low surrogates range provides storage for 10 bits of data — the prefix 1101 11 uses 6 of 16 bits. The high surrogates range also provides storage for 10 bits of data — the prefix 1101 10 also uses 6 of 16 bits. But because the BMP (Basic Multilingual Plane — U+0000 .. U+FFFF) doesn't need to be encoded with two 16-bit units, the UTF-16 encoding subtracts 1 from the high order data, and can therefore be used to encode U+10000 .. U+10FFFF. (Note that although Unicode is a 21-bit encoding, not all 21-bit (unsigned) numbers are valid Unicode code points. Values from 0x110000 .. 0x1FFFFF are 21-bit numbers but are not a part of Unicode.)
From the Unicode FAQ — UTF-8, UTF-16, UTF-32 & BOM:
Q: What’s the algorithm to convert from UTF-16 to character codes?
A: The Unicode Standard used to contain a short algorithm, now there is just a bit distribution table. Here are three short code snippets that translate the information from the bit distribution table into C code that will convert to and from UTF-16.
Using the following type definitions
typedef unsigned int16 UTF16;
typedef unsigned int32 UTF32;
the first snippet calculates the high (or leading) surrogate from a character code C.
const UTF16 HI_SURROGATE_START = 0xD800
UTF16 X = (UTF16) C;
UTF32 U = (C >> 16) & ((1 << 5) - 1);
UTF16 W = (UTF16) U - 1;
UTF16 HiSurrogate = HI_SURROGATE_START | (W << 6) | X >> 10;
where X, U and W correspond to the labels used in Table 3-5 UTF-16 Bit Distribution. The next snippet does the same for the low surrogate.
const UTF16 LO_SURROGATE_START = 0xDC00
UTF16 X = (UTF16) C;
UTF16 LoSurrogate = (UTF16) (LO_SURROGATE_START | X & ((1 << 10) - 1));
Finally, the reverse, where hi and lo are the high and low surrogate, and C the resulting character
UTF32 X = (hi & ((1 << 6) -1)) << 10 | lo & ((1 << 10) -1);
UTF32 W = (hi >> 6) & ((1 << 5) - 1);
UTF32 U = W + 1;
UTF32 C = U << 16 | X;
A caller would need to ensure that C, hi, and lo are in the appropriate ranges.
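For comparison, here is a sketch of the same bit distribution in JavaScript, using the equivalent "subtract 0x10000" formulation instead of the FAQ's W = U - 1 trick (the function names are made up; JavaScript strings are themselves sequences of UTF-16 code units, so the result can be cross-checked against the language's own string handling):
function toSurrogatePair(cp) {                 // cp must be in U+10000 .. U+10FFFF
  const v = cp - 0x10000;                      // 20-bit offset
  return [0xD800 | (v >> 10), 0xDC00 | (v & 0x3FF)];
}
function fromSurrogatePair(hi, lo) {
  return (((hi & 0x3FF) << 10) | (lo & 0x3FF)) + 0x10000;
}
const [hi, lo] = toSurrogatePair(0x1F643);
console.log(hi.toString(16), lo.toString(16));        // d83d de43
console.log(fromSurrogatePair(hi, lo).toString(16));  // 1f643
console.log('🙃'.charCodeAt(0).toString(16));          // d83d (the high surrogate)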

Related

Node only outputs even numbers when the number goes beyond certain integer length [duplicate]

Is this defined by the language? Is there a defined maximum? Is it different in different browsers?
JavaScript has two number types: Number and BigInt.
The most frequently-used number type, Number, is a 64-bit floating point IEEE 754 number.
The largest exact integral value of this type is Number.MAX_SAFE_INTEGER, which is:
2^53 - 1, or
+/- 9,007,199,254,740,991, or
nine quadrillion seven trillion one hundred ninety-nine billion two hundred fifty-four million seven hundred forty thousand nine hundred ninety-one
To put this in perspective: one quadrillion bytes is a petabyte (or one thousand terabytes).
"Safe" in this context refers to the ability to represent integers exactly and to correctly compare them.
From the spec:
Note that all the positive and negative integers whose magnitude is no
greater than 2^53 are representable in the Number type (indeed, the
integer 0 has two representations, +0 and -0).
To safely use integers larger than this, you need to use BigInt, which has no upper bound.
Note that the bitwise operators and shift operators operate on 32-bit integers, so in that case, the max safe integer is 2^31 - 1, or 2,147,483,647.
const log = console.log
var x = 9007199254740992
var y = -x
log(x == x + 1) // true !
log(y == y - 1) // also true !
// Arithmetic operators work, but bitwise/shifts only operate on int32:
log(x / 2) // 4503599627370496
log(x >> 1) // 0
log(x | 1) // 1
Technical note on the subject of the number 9,007,199,254,740,992: There is an exact IEEE-754 representation of this value, and you can assign and read this value from a variable, so for very carefully chosen applications in the domain of integers less than or equal to this value, you could treat this as a maximum value.
In the general case, you must treat this IEEE-754 value as inexact, because it is ambiguous whether it is encoding the logical value 9,007,199,254,740,992 or 9,007,199,254,740,993.
>= ES6:
Number.MIN_SAFE_INTEGER;
Number.MAX_SAFE_INTEGER;
<= ES5
From the reference:
Number.MAX_VALUE;
Number.MIN_VALUE;
console.log('MIN_VALUE', Number.MIN_VALUE);
console.log('MAX_VALUE', Number.MAX_VALUE);
console.log('MIN_SAFE_INTEGER', Number.MIN_SAFE_INTEGER); //ES6
console.log('MAX_SAFE_INTEGER', Number.MAX_SAFE_INTEGER); //ES6
It is 2^53 == 9 007 199 254 740 992. This is because Numbers are stored as floating point with a 52-bit mantissa.
The min value is -2^53.
This makes some fun things happen:
Math.pow(2, 53) == Math.pow(2, 53) + 1
>> true
And can also be dangerous :)
var MAX_INT = Math.pow(2, 53); // 9 007 199 254 740 992
for (var i = MAX_INT; i < MAX_INT + 2; ++i) {
// infinite loop
}
Further reading: http://blog.vjeux.com/2010/javascript/javascript-max_int-number-limits.html
In JavaScript, there is a number called Infinity.
Examples:
(Infinity>100)
=> true
// Also worth noting
Infinity - 1 == Infinity
=> true
Math.pow(2,1024) === Infinity
=> true
This may be sufficient for some questions regarding this topic.
Jimmy's answer correctly represents the continuous JavaScript integer spectrum as -9007199254740992 to 9007199254740992 inclusive (sorry 9007199254740993, you might think you are 9007199254740993, but you are wrong!
Demonstration below).
console.log(9007199254740993);
However, no answer finds/proves this programmatically (other than the one CoolAJ86 alluded to in his answer, which would finish in 28.56 years ;), so here's a slightly more efficient way to do it (to be precise, more efficient by about 28.559999999968312 years):
/**
* Checks if adding/subtracting one to/from a number yields the correct result.
*
* @param number The number to test
* @return true if you can add/subtract 1, false otherwise.
*/
var canAddSubtractOneFromNumber = function(number) {
var numMinusOne = number - 1;
var numPlusOne = number + 1;
return ((number - numMinusOne) === 1) && ((number - numPlusOne) === -1);
}
//Find the highest number
var highestNumber = 3; //Start with an integer 1 or higher
//Get a number higher than the valid integer range
while (canAddSubtractOneFromNumber(highestNumber)) {
highestNumber *= 2;
}
//Find the lowest number you can't add/subtract 1 from
var numToSubtract = highestNumber / 4;
while (numToSubtract >= 1) {
while (!canAddSubtractOneFromNumber(highestNumber - numToSubtract)) {
highestNumber = highestNumber - numToSubtract;
}
numToSubtract /= 2;
}
//And there was much rejoicing. Yay.
console.log('HighestNumber = ' + highestNumber);
Many earlier answers have shown that 9007199254740992 === 9007199254740992 + 1 is true, to verify that 9,007,199,254,740,991 is the maximum safe integer.
But what if we keep accumulating:
input: 9007199254740992 + 1 output: 9007199254740992 // expected: 9007199254740993
input: 9007199254740992 + 2 output: 9007199254740994 // expected: 9007199254740994
input: 9007199254740992 + 3 output: 9007199254740996 // expected: 9007199254740995
input: 9007199254740992 + 4 output: 9007199254740996 // expected: 9007199254740996
We can see that among numbers greater than 9,007,199,254,740,992, only even numbers are representable.
This entry explains how the double-precision 64-bit binary format works. Let's see how 9,007,199,254,740,992 is held (represented) in this binary format.
Using an abbreviated form to demonstrate it, starting from 4,503,599,627,370,496:
1 . 0000 ---- 0000 * 2^52 => 1 0000 ---- 0000.
|-- 52 bits --| |exponent part| |-- 52 bits --|
On the left side of the arrow, we have bit value 1, and an adjacent radix point. By consuming the exponent part on the left, the radix point is moved 52 steps to the right. The radix point ends up at the end, and we get 4503599627370496 in pure binary.
Now let's keep incrementing the fraction part by 1 until all the bits are set to 1, which equals 9,007,199,254,740,991 in decimal.
1 . 0000 ---- 0000 * 2^52 => 1 0000 ---- 0000.
(+1)
1 . 0000 ---- 0001 * 2^52 => 1 0000 ---- 0001.
(+1)
1 . 0000 ---- 0010 * 2^52 => 1 0000 ---- 0010.
(+1)
.
.
.
1 . 1111 ---- 1111 * 2^52 => 1 1111 ---- 1111.
Because the 64-bit double-precision format strictly allots 52 bits for the fraction part, no more bits are available if we add another 1, so what we can do is set all the bits back to 0 and manipulate the exponent part:
┏━━▶ This bit is implicit and persistent.
┃
1 . 1111 ---- 1111 * 2^52 => 1 1111 ---- 1111.
|-- 52 bits --| |-- 52 bits --|
(+1)
1 . 0000 ---- 0000 * 2^52 * 2 => 1 0000 ---- 0000. * 2
|-- 52 bits --| |-- 52 bits --|
(By consuming the 2^52, radix
point has no way to go, but
there is still one 2 left in
exponent part)
=> 1 . 0000 ---- 0000 * 2^53
|-- 52 bits --|
Now we get 9,007,199,254,740,992, and for numbers greater than it the format can only handle increments of 2, because every increment of 1 in the fraction part ends up multiplied by the extra 2 in the exponent part. That's why the double-precision 64-bit binary format cannot hold odd numbers when the number is greater than 9,007,199,254,740,992:
(consume 2^52 to move radix point to the end)
1 . 0000 ---- 0001 * 2^53 => 1 0000 ---- 0001. * 2
|-- 52 bits --| |-- 52 bits --|
Following this pattern, when the number gets greater than 9,007,199,254,740,992 * 2 = 18,014,398,509,481,984, only multiples of 4 can be held:
input: 18014398509481984 + 1 output: 18014398509481984 // expected: 18014398509481985
input: 18014398509481984 + 2 output: 18014398509481984 // expected: 18014398509481986
input: 18014398509481984 + 3 output: 18014398509481984 // expected: 18014398509481987
input: 18014398509481984 + 4 output: 18014398509481988 // expected: 18014398509481988
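The input/output listings above can be reproduced directly in a browser or Node.js console, for example:
console.log(9007199254740992 + 1);   // 9007199254740992  (step size 2 above 2^53)
console.log(9007199254740992 + 2);   // 9007199254740994
console.log(18014398509481984 + 2);  // 18014398509481984 (step size 4 above 2^54)
console.log(18014398509481984 + 4);  // 18014398509481988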
How about numbers between [ 2 251 799 813 685 248, 4 503 599 627 370 496 )?
1 . 0000 ---- 0001 * 2^51 => 1 0000 ---- 000.1
|-- 52 bits --| |-- 52 bits --|
The value 0.1 in binary is exactly 2^-1 (=1/2) (=0.5)
So when the number is less than 4,503,599,627,370,496 (2^52), there is one bit available to represent halves of the integer:
input: 4503599627370495.5 output: 4503599627370495.5
input: 4503599627370495.75 output: 4503599627370495.5
Less than 2,251,799,813,685,248 (2^51)
input: 2251799813685246.75 output: 2251799813685246.8 // expected: 2251799813685246.75
input: 2251799813685246.25 output: 2251799813685246.2 // expected: 2251799813685246.25
input: 2251799813685246.5 output: 2251799813685246.5
/**
Please note that if you try this yourself and, say, log
these numbers to the console, they will get rounded. JavaScript
rounds if the number of digits exceeds 17. The value
is internally held correctly:
*/
input: 2251799813685246.25.toString(2)
output: "111111111111111111111111111111111111111111111111110.01"
input: 2251799813685246.75.toString(2)
output: "111111111111111111111111111111111111111111111111110.11"
input: 2251799813685246.78.toString(2)
output: "111111111111111111111111111111111111111111111111110.11"
And what is the available range of the exponent part? The format allots 11 bits for it (see the IEEE 754 double-precision layout on Wikipedia for the details).
The stored exponent is biased by 1023, so to make the effective exponent be 2^52, we need to set e = 1023 + 52 = 1075.
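If you want to see that exponent field directly, the raw IEEE-754 bits can be inspected from JavaScript with a DataView; a small sketch (variable names are mine):
const buf = new ArrayBuffer(8);
const view = new DataView(buf);
view.setFloat64(0, Math.pow(2, 52));                      // stored big-endian by default
const exponentField = (view.getUint32(0) >>> 20) & 0x7FF; // 11-bit biased exponent field
console.log(exponentField);                               // 1075 (= 1023 bias + 52)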
To be safe
var MAX_INT = 4294967295;
Reasoning
I thought I'd be clever and find the value at which x + 1 === x with a more pragmatic approach.
My machine can only count 10 million per second or so... so I'll post back with the definitive answer in 28.56 years.
If you can't wait that long, I'm willing to bet that
Most of your loops don't run for 28.56 years
9007199254740992 === Math.pow(2, 53) + 1 is proof enough
You should stick to 4294967295, which is Math.pow(2, 32) - 1, to avoid the expected issues with bit-shifting
Finding x + 1 === x:
(function () {
"use strict";
var x = 0
, start = new Date().valueOf()
;
while (x + 1 != x) {
if (!(x % 10000000)) {
console.log(x);
}
x += 1
}
console.log(x, new Date().valueOf() - start);
}());
The short answer is “it depends.”
If you’re using bitwise operators anywhere (or if you’re referring to the length of an Array), the ranges are:
Unsigned: 0…(-1>>>0)
Signed: (-(-1>>>1)-1)…(-1>>>1)
(It so happens that the bitwise operators and the maximum length of an array are restricted to 32-bit integers.)
If you’re not using bitwise operators or working with array lengths:
Signed: (-Math.pow(2,53))…(+Math.pow(2,53))
These limitations are imposed by the internal representation of the “Number” type, which generally corresponds to IEEE 754 double-precision floating-point representation. (Note that unlike typical signed integers, the magnitude of the negative limit is the same as the magnitude of the positive limit, due to characteristics of the internal representation, which actually includes a negative 0!)
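Those expressions can simply be evaluated in a console to see the concrete limits:
console.log(-1 >>> 0);          // 4294967295   (2^32 - 1)
console.log(-1 >>> 1);          // 2147483647   (2^31 - 1)
console.log(-(-1 >>> 1) - 1);   // -2147483648  (-2^31)
console.log(Math.pow(2, 53));   // 9007199254740992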
ECMAScript 6:
Number.MAX_SAFE_INTEGER = Math.pow(2, 53)-1;
Number.MIN_SAFE_INTEGER = -Number.MAX_SAFE_INTEGER;
Others may have already given the generic answer, but I thought it would be a good idea to give a fast way of determining it:
for (var x = 2; x + 1 !== x; x *= 2);
console.log(x);
Which gives me 9007199254740992 within less than a millisecond in Chrome 30.
It tests powers of 2 to find the first one that, when 1 is 'added', equals itself.
Anything you want to use for bitwise operations must be between 0x80000000 (-2147483648 or -2^31) and 0x7fffffff (2147483647 or 2^31 - 1).
The console will tell you that 0x80000000 equals +2147483648, but 0x80000000 & 0x80000000 equals -2147483648.
JavaScript has received a new data type in ECMAScript 2020: BigInt. It introduced numerical literals having an "n" suffix and allows for arbitrary precision:
var a = 123456789012345678901012345678901n;
Precision will still be lost, of course, when such big integer is (maybe unintentionally) coerced to a number data type.
And, obviously, there will always be precision limitations due to finite memory, and a cost in terms of time in order to allocate the necessary memory and to perform arithmetic on such large numbers.
For instance, generating a number with a hundred thousand decimal digits will cause a noticeable delay before completion:
console.log(BigInt("1".padEnd(100000,"0")) + 1n)
...but it works.
Try:
maxInt = -1 >>> 1
In Firefox 3.6 it's 2^31 - 1.
I did a simple test with a formula, X-(X+1)=-1, and the largest value of X I can get to work on Safari, Opera and Firefox (tested on OS X) is 9e15. Here is the code I used for testing:
javascript: alert(9e15-(9e15+1));
I write it like this:
var max_int = 0x20000000000000;
var min_int = -0x20000000000000;
(max_int + 1) === 0x20000000000000; //true
(max_int - 1) < 0x20000000000000; //true
Same for int32
var max_int32 = 0x80000000;
var min_int32 = -0x80000000;
Let's get to the sources
Description
The MAX_SAFE_INTEGER constant has a value of 9007199254740991 (9,007,199,254,740,991 or ~9 quadrillion). The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect. See Number.isSafeInteger() for more information.
Because MAX_SAFE_INTEGER is a static property of Number, you always use it as Number.MAX_SAFE_INTEGER, rather than as a property of a Number object you created.
In JavaScript, the maximum safely representable integer is 2^53 - 1.
However, bitwise operations are calculated on 32 bits (4 bytes), meaning that if you exceed 32-bit shifts you will start losing bits.
In Google Chrome's built-in JavaScript, you can go up to approximately 2^1024 before the number is called Infinity.
Scato wrote:
anything you want to use for bitwise operations must be between
0x80000000 (-2147483648 or -2^31) and 0x7fffffff (2147483647 or 2^31 -
1).
the console will tell you that 0x80000000 equals +2147483648, but
0x80000000 & 0x80000000 equals -2147483648
Hexadecimal literals are unsigned positive values, so 0x80000000 = 2147483648, which is mathematically correct. If you want to make it a signed value you have to right shift: 0x80000000 >> 0 = -2147483648. You can write 1 << 31 instead, too.
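A quick console check of those conversions:
console.log(0x80000000);        // 2147483648  (the hex literal is an unsigned value)
console.log(0x80000000 >> 0);   // -2147483648 (forced through a 32-bit signed operation)
console.log(1 << 31);           // -2147483648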
Firefox 3 doesn't seem to have a problem with huge numbers.
1e+200 * 1e+100 will calculate fine to 1e+300.
Safari seems to have no problem with it either. (For the record, this is on a Mac if anyone else decides to test this.)
Unless I lost my brain at this time of day, this is way bigger than a 64-bit integer.
Node.js and Google Chrome both use 64-bit floating point values, whose exponent range runs up to nearly 2^1024, so:
Number.MAX_VALUE = 1.7976931348623157e+308


Verilog operation unexpected result

I am studying the Verilog language and ran into a problem.
integer intA;
...
intA = - 4'd12 / 3; // expression result is 1431655761.
// -4'd12 is effectively a 32-bit reg data type
This snippet is from the standard, and it blew our minds. The standard says that 4'd12 is the 4-bit number 1100.
Then -4'd12 = 0100. That is okay so far.
To perform the division, we need to extend the number to the same size, from 4 to 32 bits. Since -4'd12 is unsigned, it should be equal to 32'b0000...0100, but it is equal to 32'b1111...10100. Not OK, but on to the next step.
My version of division: -4d12 / 3 = 32'b0000...0100 / 32'b0000...0011 = 1
Standard version: - 4'd12 / 3 = 1431655761
Can anyone tell me why? Why does the 4-bit number keep the extra bits?
You need to read section 11.8.2 Steps for evaluating an expression of the 1800-2012 LRM. The key piece you are missing is that the operand is 4'd12 and that it is sized to 32 bits as an unsigned value before the unary - operator is applied.
If you want the 4-bit value treated as a signed -4, then you need to write
intA = - 4'sd12 / 3; // result is 1
Here the parser interprets 'd12 as a 32-bit unsigned number, and the unary minus then takes its two's complement (invert the bits and add 1). So the operand becomes
-(32'd12) = -(28 zeros + 1100) = 28 ones + 0100 =
11111111111111111111111111110100, which is 4294967284 as an unsigned value. Dividing this number (4294967284) by 3 with truncating integer division gives 1,431,655,761.
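A quick way to sanity-check the arithmetic above outside a simulator is plain integer math; here is a sketch using JavaScript BigInt (the choice of language is just for illustration, not Verilog semantics):
const negated = (1n << 32n) - 12n;   // two's complement of 12 in 32 bits, read as unsigned
console.log(negated);                // 4294967284n (0xFFFFFFF4)
console.log(negated / 3n);           // 1431655761n (BigInt division truncates, like Verilog's integer division)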
keep smiling :)

ASM question, two's complement

so this book "assembly language step by step" is really awesome, but it was sort of cryptic about how two's complement works when working on actual memory and register data. along with that, i'm not sure how signed values are represented in memory either, which i feel might be what's keeping me confused. anywho...
it says: "-1 = $FF, -2 = $FE and so on". now i understand that the two's complement of a number is itself multiplied by -1 and when added to the original will give you 0. so, FF is the hex equivalent of 11111111 in binary, and 255 in decimal. so my question is: what's the book saying when it says "-1 = $FF"? does it mean that -255 + -1 will give you 0 but also, which it didn't explicitly, set the OF flag?
so in practice... let's say we have 11h, which is 17 in decimal, and 00010001 in binary. and this value is in AL.
so then we NEG AL, and this will set the CF and SF, and change the value in AL to... 239 in decimal, 11101111 in binary, or EFh? i just don't see how that would be 17 * -1? or is that just a poorly worded explanation by the book, where it really means that it gives you the value you would need to cause an overflow?
thanks!
In two's complement, for bytes, (-x) == (256 - x) == (~x + 1). (~ is C'ish for the NOT operator, which flips all the bits in its operand.)
Let's say we have 11h.
100h - 11h == EFh
(256 - 17 == 239)
Note, the 256 works with bytes, cause they're 8 bits in size. For 16-bit words you'd use 2^16 (65536), for dwords 2^32. Also note that all math is mod 256 for bytes, 65536 for shorts, etc.
Or, using not/+1,
~11h = EEh
+1... EFh
This method works for words of all sizes.
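A small sketch of the two equivalent negation recipes above, in JavaScript (masking with 0xFF mimics byte arithmetic; the helper names are made up for illustration):
const negByMod = x => (256 - x) & 0xFF;   // (-x) == (256 - x), mod 256
const negByNot = x => (~x + 1) & 0xFF;    // (-x) == (~x + 1)
console.log(negByMod(0x11).toString(16)); // "ef" (256 - 17 == 239)
console.log(negByNot(0x11).toString(16)); // "ef"
console.log(negByMod(1).toString(16));    // "ff" (-1 == $FF)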
what's the book saying when it says "-1 = $FF"?
If considering a byte only, the two's complement of 1 is 0xff (or $FF if using that format for hex numbers).
To break it down, the complement (or one's complement) of 1 is 0xfe, then you add 1 to get the two's complement: 0xff
Similarly for 2: the complement is 0xfd, add 1 to get the two's complement: 0xfe
Now let's look at 17 decimal. As you say, that's 0x11. The complement is 0xee, and the two's complement is 0xef - all that agrees with what you stated in your question.
Now, experiment with what happens when you add the numbers together. First in decimal:
17 + (-17) == 0
Now in hex:
0x11 + 0xef == 0x100
Since we're dealing with numeric objects that are only a byte in size, the 1 in 0x100 is discarded (some hand waving here...), and we result in:
0x11 + 0xef == 0x00
To deal with the 'hand waving' (I probably won't do this in an understandable manner, unfortunately): in two's complement arithmetic the carry out of the top bit (the carry flag, C) can simply be discarded; it is not by itself a sign that the signed result is wrong. Detecting that is the job of the overflow flag (OF, sometimes called V). One way to think of it that's probably not very precise, but that I find useful, is that leading ones in a negative two's complement number are 'the same as' leading zeros in a non-negative two's complement number.
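The byte-sized wrap-around above can be checked with a couple of lines of JavaScript (masking with 0xFF stands in for the 8-bit register):
const sum = 0x11 + 0xef;
console.log(sum.toString(16));           // "100"
console.log((sum & 0xff).toString(16));  // "0" - the 1 in 0x100 falls off the end of the byte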

Base64: What is the worst possible increase in space usage?

If a server received a base64 string and wanted to check its length before converting, say it wanted to always permit the final byte array to be at most 16KB: how big could a 16KB byte array possibly become when converted to a Base64 string (assuming one byte per character)?
Base64 encodes each set of three bytes into four bytes. In addition, the output is padded so that it is always a multiple of four bytes long.
This means that the size of the base-64 representation of a string of size n is:
ceil(n / 3) * 4
So, for a 16kB array, the base-64 representation will be ceil(16*1024/3)*4 = 21848 bytes long ~= 21.8kB.
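A sketch of that computation, cross-checked against Node's Buffer (this assumes a Node.js environment; Buffer produces unwrapped, padded base64):
const base64Length = n => Math.ceil(n / 3) * 4;
console.log(base64Length(16 * 1024));                            // 21848
console.log(Buffer.alloc(16 * 1024).toString('base64').length);  // 21848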
A rough approximation would be that the size of the data is increased to 4/3 of the original.
From Wikipedia
Note that given an input of n bytes,
the output will be (n + 2 - ((n + 2) %
3)) / 3 * 4 bytes long, so that the
number of output bytes per input byte
converges to 4 / 3 or 1.33333 for
large n.
So 16 kB * 4 / 3 gives a little over 21.3 kB, or 21848 bytes, to be exact.
Hope this helps
16 kB is 131,072 bits. Base64 packs 24-bit groups into four 6-bit characters apiece, so you would have 5,462 * 4 = 21,848 bytes.
Since the question was about the worst possible increase, I must add that there are usually line breaks after around every 80 characters. This means that if you are saving base64-encoded data into a text file, each line break adds 2 bytes on Windows and 1 byte on Linux.
The increase from the actual encoding has been described above.
This is a future reference for myself. Since the question is about the worst case, we should take line breaks into account. While RFC 1421 defines the maximum line length to be 64 characters, RFC 2045 (MIME) states there can be at most 76 characters per line.
The latter is what the C# library implements. So in a Windows environment, where a line break is 2 characters (\r\n), we get: Length = Floor(Ceiling(N/3) * 4 * 78 / 76)
Note: the flooring is because, during my test with C#, if the last line ends at exactly 76 characters, no line break follows.
I can prove it by running the following code:
byte[] bytes = new byte[16 * 1024];
Console.WriteLine(Convert.ToBase64String(bytes, Base64FormattingOptions.InsertLineBreaks).Length);
The answer for 16 kBytes encoded to base64 with 76-char lines: 22422 chars
I assume on Linux it would be Length = Floor(Ceiling(N/3) * 4 * 77 / 76), but I haven't gotten around to testing it on my .NET Core yet.
It would also depend on the actual character encoding, i.e. if we encode to a UTF-32 string, each base64 character would consume 3 additional bytes (4 bytes per character).
