Is this defined by the language? Is there a defined maximum? Is it different in different browsers?
JavaScript has two number types: Number and BigInt.
The most frequently-used number type, Number, is a 64-bit floating point IEEE 754 number.
The largest exact integral value of this type is Number.MAX_SAFE_INTEGER, which is:
2^53 - 1, or
+/- 9,007,199,254,740,991, or
nine quadrillion seven trillion one hundred ninety-nine billion two hundred fifty-four million seven hundred forty thousand nine hundred ninety-one
To put this in perspective: one quadrillion bytes is a petabyte (or one thousand terabytes).
"Safe" in this context refers to the ability to represent integers exactly and to correctly compare them.
From the spec:
Note that all the positive and negative integers whose magnitude is no
greater than 2^53 are representable in the Number type (indeed, the
integer 0 has two representations, +0 and -0).
To safely use integers larger than this, you need to use BigInt, which has no upper bound.
Note that the bitwise operators and shift operators operate on 32-bit integers, so in that case, the max safe integer is 2^31 - 1, or 2,147,483,647.
const log = console.log
var x = 9007199254740992
var y = -x
log(x == x + 1) // true !
log(y == y - 1) // also true !
// Arithmetic operators work, but bitwise/shifts only operate on int32:
log(x / 2) // 4503599627370496
log(x >> 1) // 0
log(x | 1) // 1
Technical note on the subject of the number 9,007,199,254,740,992: There is an exact IEEE-754 representation of this value, and you can assign and read this value from a variable, so for very carefully chosen applications in the domain of integers less than or equal to this value, you could treat this as a maximum value.
In the general case, you must treat this IEEE-754 value as inexact, because it is ambiguous whether it is encoding the logical value 9,007,199,254,740,992 or 9,007,199,254,740,993.
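A short demonstration of both points, using only the built-in Number.isSafeInteger:
console.log(Number.isSafeInteger(9007199254740991)); // true: 2^53 - 1 is exact and unambiguous
console.log(Number.isSafeInteger(9007199254740992)); // false: 2^53 is exactly representable, but ambiguous
console.log(9007199254740993 === 9007199254740992);  // true: the literal ...993 has already rounded to ...992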
>= ES6:
Number.MIN_SAFE_INTEGER;
Number.MAX_SAFE_INTEGER;
<= ES5
From the reference:
Number.MAX_VALUE;
Number.MIN_VALUE;
console.log('MIN_VALUE', Number.MIN_VALUE);
console.log('MAX_VALUE', Number.MAX_VALUE);
console.log('MIN_SAFE_INTEGER', Number.MIN_SAFE_INTEGER); //ES6
console.log('MAX_SAFE_INTEGER', Number.MAX_SAFE_INTEGER); //ES6
It is 2^53 == 9 007 199 254 740 992. This is because Numbers are stored as floating point with a 52-bit mantissa.
The min value is -2^53.
This leads to some fun behavior:
Math.pow(2, 53) == Math.pow(2, 53) + 1
>> true
And can also be dangerous :)
var MAX_INT = Math.pow(2, 53); // 9 007 199 254 740 992
for (var i = MAX_INT; i < MAX_INT + 2; ++i) {
// infinite loop
}
Further reading: http://blog.vjeux.com/2010/javascript/javascript-max_int-number-limits.html
In JavaScript, there is a number called Infinity.
Examples:
(Infinity>100)
=> true
// Also worth noting
Infinity - 1 == Infinity
=> true
Math.pow(2,1024) === Infinity
=> true
This may be sufficient for some questions regarding this topic.
Jimmy's answer correctly represents the continuous JavaScript integer spectrum as -9007199254740992 to 9007199254740992 inclusive (sorry 9007199254740993, you might think you are 9007199254740993, but you are wrong!
Demonstration below or in jsfiddle).
console.log(9007199254740993);
However, there is no answer that finds/proves this programmatically (other than the one CoolAJ86 alluded to in his answer that would finish in 28.56 years ;), so here's a slightly more efficient way to do that (to be precise, it's more efficient by about 28.559999999968312 years :), along with a test fiddle:
/**
* Checks if adding/subtracting one to/from a number yields the correct result.
*
* @param number The number to test
* @return true if you can add/subtract 1, false otherwise.
*/
var canAddSubtractOneFromNumber = function(number) {
    var numMinusOne = number - 1;
    var numPlusOne = number + 1;
    return ((number - numMinusOne) === 1) && ((number - numPlusOne) === -1);
};

//Find the highest number
var highestNumber = 3; //Start with an integer 1 or higher

//Get a number higher than the valid integer range
while (canAddSubtractOneFromNumber(highestNumber)) {
    highestNumber *= 2;
}

//Find the lowest number you can't add/subtract 1 from
var numToSubtract = highestNumber / 4;
while (numToSubtract >= 1) {
    while (!canAddSubtractOneFromNumber(highestNumber - numToSubtract)) {
        highestNumber = highestNumber - numToSubtract;
    }
    numToSubtract /= 2;
}

//And there was much rejoicing. Yay.
console.log('HighestNumber = ' + highestNumber);
Many earlier answers have shown that 9007199254740992 === 9007199254740992 + 1 is true, to verify that 9,007,199,254,740,991 is the maximum safe integer.
But what if we keep adding beyond it:
input: 9007199254740992 + 1 output: 9007199254740992 // expected: 9007199254740993
input: 9007199254740992 + 2 output: 9007199254740994 // expected: 9007199254740994
input: 9007199254740992 + 3 output: 9007199254740996 // expected: 9007199254740995
input: 9007199254740992 + 4 output: 9007199254740996 // expected: 9007199254740996
We can see that among numbers greater than 9,007,199,254,740,992, only even numbers are representable.
This entry explains how the double-precision 64-bit binary format works. Let's see how 9,007,199,254,740,992 is held (represented) in this binary format.
Using an abbreviated form to demonstrate it, starting from 4,503,599,627,370,496:
1 . 0000 ---- 0000 * 2^52 => 1 0000 ---- 0000.
|-- 52 bits --| |exponent part| |-- 52 bits --|
On the left side of the arrow, we have bit value 1, and an adjacent radix point. By consuming the exponent part on the left, the radix point is moved 52 steps to the right. The radix point ends up at the end, and we get 4503599627370496 in pure binary.
Now let's keep incrementing the fraction part by 1 until all the bits are set to 1, which equals 9,007,199,254,740,991 in decimal.
1 . 0000 ---- 0000 * 2^52 => 1 0000 ---- 0000.
(+1)
1 . 0000 ---- 0001 * 2^52 => 1 0000 ---- 0001.
(+1)
1 . 0000 ---- 0010 * 2^52 => 1 0000 ---- 0010.
(+1)
.
.
.
1 . 1111 ---- 1111 * 2^52 => 1 1111 ---- 1111.
Because the 64-bit double-precision format strictly allots 52 bits for the fraction part, no more bits are available if we add another 1, so all we can do is set every bit back to 0 and adjust the exponent part:
┏━━▶ This bit is implicit and persistent.
┃
1 . 1111 ---- 1111 * 2^52 => 1 1111 ---- 1111.
|-- 52 bits --| |-- 52 bits --|
(+1)
1 . 0000 ---- 0000 * 2^52 * 2 => 1 0000 ---- 0000. * 2
|-- 52 bits --| |-- 52 bits --|
(By consuming the 2^52, radix
point has no way to go, but
there is still one 2 left in
exponent part)
=> 1 . 0000 ---- 0000 * 2^53
|-- 52 bits --|
Now we get 9,007,199,254,740,992, and for numbers greater than it the format can only handle increments of 2, because every increment of 1 in the fraction part ends up being multiplied by the remaining 2 in the exponent part. That's why the double-precision 64-bit binary format cannot hold odd numbers greater than 9,007,199,254,740,992:
(consume 2^52 to move radix point to the end)
1 . 0000 ---- 0001 * 2^53 => 1 0000 ---- 0001. * 2
|-- 52 bits --| |-- 52 bits --|
Following this pattern, once the number gets greater than 9,007,199,254,740,992 * 2 = 18,014,398,509,481,984, only multiples of 4 can be held:
input: 18014398509481984 + 1 output: 18014398509481984 // expected: 18014398509481985
input: 18014398509481984 + 2 output: 18014398509481984 // expected: 18014398509481986
input: 18014398509481984 + 3 output: 18014398509481984 // expected: 18014398509481987
input: 18014398509481984 + 4 output: 18014398509481988 // expected: 18014398509481988
How about numbers between [ 2 251 799 813 685 248, 4 503 599 627 370 496 )?
1 . 0000 ---- 0001 * 2^51 => 1 0000 ---- 000.1
|-- 52 bits --| |-- 52 bits --|
The value 0.1 in binary is exactly 2^-1 (= 1/2 = 0.5).
So when the number is less than 4,503,599,627,370,496 (2^52), there is one bit available to represent halves (multiples of 1/2):
input: 4503599627370495.5 output: 4503599627370495.5
input: 4503599627370495.75 output: 4503599627370495.5
For numbers less than 2,251,799,813,685,248 (2^51), two fraction bits are available after the radix point, so quarters (multiples of 1/4) can be represented:
input: 2251799813685246.75 output: 2251799813685246.8 // expected: 2251799813685246.75
input: 2251799813685246.25 output: 2251799813685246.2 // expected: 2251799813685246.25
input: 2251799813685246.5 output: 2251799813685246.5
/**
Please note that if you try this yourself and, say, log
these numbers to the console, they will get rounded. JavaScript
rounds if the number of digits exceeds 17. The value
is internally held correctly:
*/
input: 2251799813685246.25.toString(2)
output: "111111111111111111111111111111111111111111111111110.01"
input: 2251799813685246.75.toString(2)
output: "111111111111111111111111111111111111111111111111110.11"
input: 2251799813685246.78.toString(2)
output: "111111111111111111111111111111111111111111111111110.11"
And what is the available range of the exponent part? The format allots 11 bits for it.
From Wikipedia (for more details, go there): the value is (-1)^sign * 1.fraction * 2^(e - 1023), where e is the stored 11-bit exponent.
So to make the exponent part equal 2^52, we need to set e = 1075 exactly (1075 - 1023 = 52).
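To see this from inside JavaScript, here is a small sketch (assuming a little-endian platform, which is the case for practically all JavaScript engines) that reads the raw bits of a double through typed arrays and extracts the 11-bit exponent field:
var buffer = new ArrayBuffer(8);
var f64 = new Float64Array(buffer);
var u32 = new Uint32Array(buffer);
f64[0] = 9007199254740991;                     // 2^53 - 1, all 52 fraction bits set
var highWord = u32[1];                         // high 32 bits of the double (little-endian)
var exponentField = (highWord >>> 20) & 0x7FF; // the 11-bit exponent field
console.log(exponentField);                    // 1075
console.log(exponentField - 1023);             // 52, the unbiased exponent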
To be safe
var MAX_INT = 4294967295;
Reasoning
I thought I'd be clever and find the value at which x + 1 === x with a more pragmatic approach.
My machine can only count 10 million per second or so... so I'll post back with the definitive answer in 28.56 years.
If you can't wait that long, I'm willing to bet that
Most of your loops don't run for 28.56 years
9007199254740992 === Math.pow(2, 53) + 1 is proof enough
You should stick to 4294967295, which is Math.pow(2,32) - 1, so as to avoid expected issues with bit-shifting
Finding x + 1 === x:
(function () {
    "use strict";
    var x = 0,
        start = new Date().valueOf();

    while (x + 1 != x) {
        if (!(x % 10000000)) {
            console.log(x);
        }
        x += 1;
    }

    console.log(x, new Date().valueOf() - start);
}());
The short answer is “it depends.”
If you’re using bitwise operators anywhere (or if you’re referring to the length of an Array), the ranges are:
Unsigned: 0…(-1>>>0)
Signed: (-(-1>>>1)-1)…(-1>>>1)
(It so happens that the bitwise operators and the maximum length of an array are restricted to 32-bit integers.)
If you’re not using bitwise operators or working with array lengths:
Signed: (-Math.pow(2,53))…(+Math.pow(2,53))
These limitations are imposed by the internal representation of the “Number” type, which generally corresponds to IEEE 754 double-precision floating-point representation. (Note that unlike typical signed integers, the magnitude of the negative limit is the same as the magnitude of the positive limit, due to characteristics of the internal representation, which actually includes a negative 0!)
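Evaluating those expressions directly shows the concrete limits:
console.log(-1 >>> 0);          // 4294967295, the maximum unsigned 32-bit value (also the maximum array length)
console.log(-1 >>> 1);          // 2147483647, the maximum signed 32-bit value
console.log(-(-1 >>> 1) - 1);   // -2147483648, the minimum signed 32-bit value
console.log(Math.pow(2, 53));   // 9007199254740992, the limit when bitwise operators are not involved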
ECMAScript 6:
Number.MAX_SAFE_INTEGER = Math.pow(2, 53)-1;
Number.MIN_SAFE_INTEGER = -Number.MAX_SAFE_INTEGER;
Others may have already given the generic answer, but I thought it would be a good idea to give a fast way of determining it:
for (var x = 2; x + 1 !== x; x *= 2);
console.log(x);
Which gives me 9007199254740992 within less than a millisecond in Chrome 30.
It tests powers of 2 to find the first one that, when 1 is 'added' to it, equals itself.
Anything you want to use for bitwise operations must be between 0x80000000 (-2147483648 or -2^31) and 0x7fffffff (2147483647 or 2^31 - 1).
The console will tell you that 0x80000000 equals +2147483648, but 0x80000000 & 0x80000000 equals -2147483648.
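A quick console sketch of that behavior:
console.log(0x80000000);              // 2147483648: the hex literal is just a positive Number
console.log(0x80000000 & 0x80000000); // -2147483648: bitwise operators reinterpret it as a signed 32-bit integer
console.log(0x7fffffff | 0);          // 2147483647: still inside the signed 32-bit range, so it is unchanged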
JavaScript received a new data type in ECMAScript 2020: BigInt. It introduces numeric literals with an "n" suffix and allows for arbitrary precision:
var a = 123456789012345678901012345678901n;
Precision will still be lost, of course, when such a big integer is (perhaps unintentionally) coerced to the Number data type.
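A small sketch of that coercion losing precision:
var big = 9007199254740993n;  // exactly representable as a BigInt
console.log(big + 1n);        // 9007199254740994n: BigInt arithmetic stays exact
console.log(Number(big));     // 9007199254740992: the odd value cannot survive conversion to Number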
And, obviously, there will always be precision limitations due to finite memory, and a cost in terms of time in order to allocate the necessary memory and to perform arithmetic on such large numbers.
For instance, generating a number with a hundred thousand decimal digits will cause a noticeable delay before completion:
console.log(BigInt("1".padEnd(100000,"0")) + 1n)
...but it works.
Try:
maxInt = -1 >>> 1
In Firefox 3.6 it's 2^31 - 1.
I did a simple test with a formula, X-(X+1)=-1, and the largest value of X I can get to work on Safari, Opera and Firefox (tested on OS X) is 9e15. Here is the code I used for testing:
javascript: alert(9e15-(9e15+1));
I write it like this:
var max_int = 0x20000000000000;
var min_int = -0x20000000000000;
(max_int + 1) === 0x20000000000000; //true
(max_int - 1) < 0x20000000000000; //true
Same for int32
var max_int32 = 0x80000000;
var min_int32 = -0x80000000;
Let's get to the sources
Description
The MAX_SAFE_INTEGER constant has a value of 9007199254740991 (9,007,199,254,740,991 or ~9 quadrillion). The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect. See Number.isSafeInteger() for more information.
Because MAX_SAFE_INTEGER is a static property of Number, you always use it as Number.MAX_SAFE_INTEGER, rather than as a property of a Number object you created.
In JavaScript, the largest safely representable integer is 2^53 - 1.
However, bitwise operations are calculated on 32 bits (4 bytes), meaning that if you exceed 32-bit shifts you will start losing bits.
In Google Chrome's built-in JavaScript, you can go to approximately 2^1024 before the number becomes Infinity.
Scato writes:
anything you want to use for bitwise operations must be between
0x80000000 (-2147483648 or -2^31) and 0x7fffffff (2147483647 or 2^31 -
1).
the console will tell you that 0x80000000 equals +2147483648, but
0x80000000 & 0x80000000 equals -2147483648
Hexadecimal literals are unsigned positive values, so 0x80000000 = 2147483648 - that's mathematically correct. If you want to make it a signed value you have to right shift: 0x80000000 >> 0 = -2147483648. You can write 1 << 31 instead, too.
Firefox 3 doesn't seem to have a problem with huge numbers.
1e+200 * 1e+100 will calculate fine to 1e+300.
Safari seems to have no problem with it either. (For the record, this is on a Mac if anyone else decides to test this.)
Unless I lost my brain at this time of day, this is way bigger than a 64-bit integer.
Node.js and Google Chrome both seem to use 64-bit floating point values, whose maximum is just below 2^1024, so:
Number.MAX_VALUE = 1.7976931348623157e+308
I am writing a program where I need to delete duplicate points stored in a matrix. The problem is that when it comes to checking whether those points are in the matrix, MATLAB can't recognize them even though they exist.
In the following code, the intersections function gets the intersection points:
[points(:,1), points(:,2)] = intersections(...
obj.modifiedVGVertices(1,:), obj.modifiedVGVertices(2,:), ...
[vertex1(1) vertex2(1)], [vertex1(2) vertex2(2)]);
The result:
>> points
points =
12.0000 15.0000
33.0000 24.0000
33.0000 24.0000
>> vertex1
vertex1 =
12
15
>> vertex2
vertex2 =
33
24
Two points (vertex1 and vertex2) should be eliminated from the result. This should be done by the commands below:
points = points((points(:,1) ~= vertex1(1)) | (points(:,2) ~= vertex1(2)), :);
points = points((points(:,1) ~= vertex2(1)) | (points(:,2) ~= vertex2(2)), :);
After doing that, we have this unexpected outcome:
>> points
points =
33.0000 24.0000
The outcome should be an empty matrix. As you can see, the first (or second?) pair of [33.0000 24.0000] has been eliminated, but not the second one.
Then I checked these two expressions:
>> points(1) ~= vertex2(1)
ans =
0
>> points(2) ~= vertex2(2)
ans =
1 % <-- It means 24.0000 is not equal to 24.0000?
What is the problem?
More surprisingly, I made a new script that has only these commands:
points = [12.0000 15.0000
33.0000 24.0000
33.0000 24.0000];
vertex1 = [12 ; 15];
vertex2 = [33 ; 24];
points = points((points(:,1) ~= vertex1(1)) | (points(:,2) ~= vertex1(2)), :);
points = points((points(:,1) ~= vertex2(1)) | (points(:,2) ~= vertex2(2)), :);
The result as expected:
>> points
points =
Empty matrix: 0-by-2
The problem you're having relates to how floating-point numbers are represented on a computer. A more detailed discussion of floating-point representations appears towards the end of my answer (The "Floating-point representation" section). The TL;DR version: because computers have finite amounts of memory, numbers can only be represented with finite precision. Thus, the accuracy of floating-point numbers is limited to a certain number of decimal places (about 16 significant digits for double-precision values, the default used in MATLAB).
Actual vs. displayed precision
Now to address the specific example in the question... while 24.0000 and 24.0000 are displayed in the same manner, it turns out that they actually differ by very small decimal amounts in this case. You don't see it because MATLAB only displays 4 significant digits by default, keeping the overall display neat and tidy. If you want to see the full precision, you should either issue the format long command or view a hexadecimal representation of the number:
>> pi
ans =
3.1416
>> format long
>> pi
ans =
3.141592653589793
>> num2hex(pi)
ans =
400921fb54442d18
Initialized values vs. computed values
Since there are only a finite number of values that can be represented for a floating-point number, it's possible for a computation to result in a value that falls between two of these representations. In such a case, the result has to be rounded off to one of them. This introduces a small machine-precision error. This also means that initializing a value directly or by some computation can give slightly different results. For example, the value 0.1 doesn't have an exact floating-point representation (i.e. it gets slightly rounded off), and so you end up with counter-intuitive results like this due to the way round-off errors accumulate:
>> a=sum([0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1]); % Sum 10 0.1s
>> b=1; % Initialize to 1
>> a == b
ans =
logical
0 % They are unequal!
>> num2hex(a) % Let's check their hex representation to confirm
ans =
3fefffffffffffff
>> num2hex(b)
ans =
3ff0000000000000
How to correctly handle floating-point comparisons
Since floating-point values can differ by very small amounts, any comparisons should be done by checking that the values are within some range (i.e. tolerance) of one another, as opposed to exactly equal to each other. For example:
a = 24;
b = 24.000001;
tolerance = 0.001;
if abs(a-b) < tolerance, disp('Equal!'); end
will display "Equal!".
You could then change your code to something like:
points = points((abs(points(:,1)-vertex1(1)) > tolerance) | ...
(abs(points(:,2)-vertex1(2)) > tolerance),:)
Floating-point representation
A good overview of floating-point numbers (and specifically the IEEE 754 standard for floating-point arithmetic) is What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg.
A binary floating-point number is actually represented by three integers: a sign bit s, a significand (or coefficient/fraction) b, and an exponent e. For double-precision floating-point format, each number is represented by 64 bits laid out in memory as follows: 1 sign bit, then 11 exponent bits, then 52 fraction bits.
The real value can then be found with the following formula: value = (-1)^s * (1 + b/2^52) * 2^(e - 1023), where b is the 52-bit fraction field and e is the 11-bit biased exponent.
This format allows for number representations in the range 10^-308 to 10^308. For MATLAB you can get these limits from realmin and realmax:
>> realmin
ans =
2.225073858507201e-308
>> realmax
ans =
1.797693134862316e+308
Since there are a finite number of bits used to represent a floating-point number, there are only so many finite numbers that can be represented within the above given range. Computations will often result in a value that doesn't exactly match one of these finite representations, so the values must be rounded off. These machine-precision errors make themselves evident in different ways, as discussed in the above examples.
In order to better understand these round-off errors it's useful to look at the relative floating-point accuracy provided by the function eps, which quantifies the distance from a given number to the next largest floating-point representation:
>> eps(1)
ans =
2.220446049250313e-16
>> eps(1000)
ans =
1.136868377216160e-13
Notice that the precision is relative to the size of a given number being represented; larger numbers will have larger distances between floating-point representations, and will thus have fewer digits of precision following the decimal point. This can be an important consideration with some calculations. Consider the following example:
>> format long % Display full precision
>> x = rand(1, 10); % Get 10 random values between 0 and 1
>> a = mean(x) % Take the mean
a =
0.587307428244141
>> b = mean(x+10000)-10000 % Take the mean at a different scale, then shift back
b =
0.587307428244458
Note that when we shift the values of x from the range [0 1] to the range [10000 10001], compute a mean, then subtract the mean offset for comparison, we get a value that differs for the last 3 significant digits. This illustrates how an offset or scaling of data can change the accuracy of calculations performed on it, which is something that has to be accounted for with certain problems.
Look at this article: The Perils of Floating Point. Though its examples are in FORTRAN, it applies to virtually any modern programming language, including MATLAB. Your problem (and its solution) is described in the "Safe Comparisons" section.
type
format long g
This command will show the FULL value of the number. It's likely to be something like 24.00000021321 != 24.00000123124
Try writing
0.1 + 0.1 + 0.1 == 0.3.
Warning: You might be surprised about the result!
Maybe the two numbers are really 24.0 and 24.000000001 but you're not seeing all the decimal places.
Check out the Matlab EPS function.
Matlab uses floating point math up to 16 digits of precision (only 5 are displayed).
I am trying to implement the algorithm in the picture above. The Berlekamp-Massey algorithm solves the following problem in an RS(n,k) system: given a syndrome polynomial
S(z) = {S(n-k-1), ..., S(2), S(1), S(0)}
, find the smallest-degree error polynomial. This algorithm works fine for all syndromes, but when S(0) becomes 0 the error polynomial is incorrect. Is there anything missing from the algorithm as described?
At what point does it fail? Is it failing to produce an error locator polynomial, or does it create one that doesn't have the proper roots, or is it failing later, such as in the Forney algorithm?
I'm not sure if you really mean S(0) in your question. Most articles define S(j) as the sum of elements times successive powers (locations) of (alpha^j). It's important that the syndromes used by a decoder are based on the roots of the generator polynomial: if the first consecutive root is FCR = alpha^0 = 1 (g(x) = (x-1)(x-alpha)(x-alpha^2)...), use S(0) through S(n-k-1); if FCR = alpha (g(x) = (x-alpha)(x-alpha^2)...), use S(1) through S(n-k).
Any RS decoder should work, despite zero syndromes, as long as there aren't too many zero syndromes (which would indicate an uncorrectable error).
The wiki article has a simpler description of the algorithm with example code. Note it uses lambda (Λ) instead of sigma (σ) to represent the error locator polynomial. The description is for the example code, and in this case S[0] could mean S(0) or S(1), depending on the FCR (which is not mentioned).
https://en.wikipedia.org/wiki/Berlekamp%E2%80%93Massey_algorithm
I also wrote interactive RS programs for 4 bit and 8 bit fields, which include Berlekamp Massey as one of the three decoders implemented in the programs. The programs allow the user to specify FCR as 1 or 2, unless a self-reciprocal polynomial is chosen (a non-popular option used to simplify hardware encoders). In these example programs, the polynomials are generally stored most significant term first (legacy issue), so the code shifts arrays to deal with this.
http://rcgldr.net/misc/eccdemo4.zip
http://rcgldr.net/misc/eccdemo8.zip
I modified one of my ECC demo programs to work with GF(2^10). I ran your test case:
S[...] = {0000 0596 0302 0897} (S[0] = 0)
These are the iterations and polynomials that I get for BM decoder:
k d σ
0 0000 0001
1 0596 0596 x^2 + 0000 x + 0001
2 0302 0596 x^2 + 1006 x + 0001
3 0147 0585 x^2 + 1006 x + 0001
roots are:
383 = 1/(2^526) and 699 = 1/(2^527)
Trivia: by reversing the coefficients, you get the locators instead of their inverses:
0001 x^2 + 1006 x + 0585 : roots are 346 = 2^526 and 692 = 2^527
Note that Forney needs the unreversed polynomial if using Forney to calculate the error values.
I was recently reading an article on string hashing. We can hash a string by converting it into a polynomial.
H(s_1 s_2 s_3 ... s_n) = (s_1 + s_2*p + s_3*p^2 + ··· + s_n*p^(n-1)) mod M.
What are the constraints on p and M so that the probability of collision decreases?
A good requirement for a hash function on strings is that it should be difficult to find a
pair of different strings, preferably of the same length n, that have equal fingerprints. This
excludes the choice of M < n. Indeed, in this case at some point the powers of p corresponding
to respective symbols of the string start to repeat.
Similarly, if gcd(M, p) > 1 then powers of p modulo M may repeat for
exponents smaller than n. The safest choice is to set p as one of
the generators of the group U(Z_M) – the group of all integers
relatively prime to M under multiplication modulo M.
I am not able to understand the above constraints. How does selecting M < n or gcd(M, p) > 1 increase collisions? Can somebody explain these two with some examples? I just need a basic understanding of these.
In addition, if anyone can focus on upper and lower bounds of M, it will be more than enough.
The above facts have been taken from the following article: string hashing (MIT).
The "correct" answers to these questions involve some amount of number theory, but it can often be instructive to look at some extreme cases to see why the constraints might be useful.
For example, let's look at why we want M ≥ n. As an extreme case, let's pick M = 2 and n = 4. Then look at the numbers p^0 mod 2, p^1 mod 2, p^2 mod 2, and p^3 mod 2. Because there are four numbers here and only two possible remainders, by the pigeonhole principle we know that at least two of these numbers must be equal. Let's assume, for simplicity, that p^0 and p^1 are the same. This means that the hash function will return the same hash code for any two strings whose first two characters have been swapped, since those characters are multiplied by the same amount, which isn't a desirable property of a hash function. More generally, the reason why we want M ≥ n is so that the values p^0, p^1, ..., p^(n-1) at least have the possibility of being distinct. If M < n, there will just be too many powers of p for them to all be unique.
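To make this concrete, here is a small sketch (the polyHash helper below is hypothetical, just a direct implementation of the formula from the question) showing that with M = 2 swapping the first two characters can never change the hash:
function polyHash(s, p, M) {
  var hash = 0, power = 1;
  for (var i = 0; i < s.length; i++) {
    hash = (hash + s.charCodeAt(i) * power) % M;
    power = (power * p) % M; // powers of p are taken mod M, so they repeat quickly when M is tiny
  }
  return hash;
}
console.log(polyHash("ab", 3, 2)); // 1
console.log(polyHash("ba", 3, 2)); // 1, a guaranteed collision because p^0 and p^1 are both 1 mod 2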
Now, let's think about why we want gcd(M, p) = 1. As an extreme case, suppose we pick p such that gcd(M, p) = M (that is, we pick p = M). Then
s_0*p^0 + s_1*p^1 + s_2*p^2 + ... + s_(n-1)*p^(n-1) (mod M)
= s_0*M^0 + s_1*M^1 + s_2*M^2 + ... + s_(n-1)*M^(n-1) (mod M)
= s_0
Oops, that's no good - that makes our hash code exactly equal to the first character of the string. This means that if p isn't coprime with M (that is, if gcd(M, p) ≠ 1), you run the risk of certain characters being "modded out" of the hash code, increasing the collision probability.
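Using the same hypothetical polyHash sketch from above with p = M, every character after the first is multiplied by a power of M and vanishes modulo M:
console.log(polyHash("apple", 256, 256));   // 97: only 'a' contributes
console.log(polyHash("avocado", 256, 256)); // 97: collides with every other string starting with 'a'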
How does selecting M < n or gcd(M, p) > 1 increase collisions?
In your hash function formula, M might reasonably be used to restrict the hash result to a specific bit-width: e.g. M=2^16 for a 16-bit hash, M=2^32 for a 32-bit hash, M=2^64 for a 64-bit hash. Usually, a mod/% operation is not actually needed in an implementation, as using the desired size of unsigned integer for the hash calculation inherently performs that function.
I don't recommend it, but sometimes you do see people describing hash functions that are so exclusively coupled to the size of a specific hash table that they mod the results directly to the table size.
The text you quote from says:
A good requirement for a hash function on strings is that it should be difficult to find a pair of different strings, preferably of the same length n, that have equal fingerprints. This excludes the choice of M < n.
This seems a little silly in three separate regards. Firstly, it implies that hashing a long passage of text requires a massively long hash value, when practically it's the number of distinct passages of text you need to hash that's best considered when selecting M.
More specifically, if you have V distinct values to hash with a good general purpose hash function, you'll get dramatically fewer collisions of the hash values if your hash function produces at least V^2 distinct hash values. For example, if you are hashing 1000 values (~2^10), you want M to be at least 1 million (i.e. at least 2*10 = 20-bit hash values, which is fine to round up to 32-bit but ideally don't settle for 16-bit). Read up on the Birthday Problem for related insights.
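As a rough numeric sketch of that birthday bound (the collisionProbability helper below is hypothetical; it uses the standard 1 - e^(-V(V-1)/(2M)) approximation and assumes the hash behaves like a uniform random function):
function collisionProbability(V, M) {
  return 1 - Math.exp(-V * (V - 1) / (2 * M));
}
console.log(collisionProbability(1000, Math.pow(2, 16))); // ~0.9995: a 16-bit hash will almost certainly collide
console.log(collisionProbability(1000, Math.pow(2, 20))); // ~0.38: a 20-bit hash still collides fairly often
console.log(collisionProbability(1000, Math.pow(2, 32))); // ~0.00012: a 32-bit hash is comfortable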
Secondly, given n is the number of characters, the number of potential values (i.e. distinct inputs) is the number of distinct values any specific character can take, raised to the power n. The former is likely somewhere from 26 to 256 values, depending on whether the hash supports only letters, or say alphanumeric input, or standard- vs. extended-ASCII and control characters etc., or even more for Unicode. The way "excludes the choice of M < n" implies any relevant linear relationship between M and n is bogus; if anything, it's as M drops below the number of distinct potential input values that it increasingly promotes collisions, but again it's the actual number of distinct inputs that tends to matter much, much more.
Thirdly, "preferably of the same length n" - why's that important? As far as I can see, it's not.
I've nothing to add to templatetypedef's discussion on gcd.