Related
I see that BigInt is supported in node 10. However, there's no ReadBigInt() functionality in the Buffer class.
Is it possible to somehow work around this? Perhaps read 2 ints, cast them to BigInt, shift the upper one, and add them to reconstruct the BigInt?
A little late to the party here, but as the BigInt ctor accepts a hex string we can just convert the Buffer to a hex string and pass that in to the BigInt ctor. This also works for numbers > 2 ** 64 and doesn't require any dependencies.
function bufferToBigInt(buffer, start = 0, end = buffer.length) {
    const bufferAsHexString = buffer.slice(start, end).toString("hex");
    return BigInt(`0x${bufferAsHexString}`);
}
I recently encountered the need to do this as well and managed to find this npm library: https://github.com/no2chem/bigint-buffer ( https://www.npmjs.org/package/bigint-buffer ), which can read a buffer as a BigInt.
Example usage (reading; there are more examples on the linked GitHub/npm):
const BigIntBuffer = require('bigint-buffer');
let testBuffer = Buffer.alloc(16);
testBuffer[0] = 0xff; // 255
console.log(BigIntBuffer.toBigIntBE(testBuffer));
// -> 338953138925153547590470800371487866880n
That will read the 16-byte (128-bit) number from the buffer.
If you wish to read only part of it as a BigInt, then slicing the buffer should work.
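For example, a minimal sketch using the testBuffer from above (slice shares memory with the original buffer, so nothing is copied):

const first8 = BigIntBuffer.toBigIntBE(testBuffer.slice(0, 8));
console.log(first8); // -> 18374686479671623680n (0xff followed by seven zero bytes)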
With Node v12, functions for reading BigInts from buffers were added, so if possible you should use Node v12 or later.
But these functions are just pure math based on reading integers from the buffer, so you can pretty much copy them into your Node 10-11 code.
https://github.com/nodejs/node/blob/v12.6.0/lib/internal/buffer.js#L78-L152
So modifying these methods to not be class methods could look something like this:
function readBigUInt64LE(buffer, offset = 0) {
    const first = buffer[offset];
    const last = buffer[offset + 7];
    if (first === undefined || last === undefined) {
        throw new Error('Out of bounds');
    }
    // Low 32 bits, assembled byte by byte (little-endian)
    const lo = first +
        buffer[++offset] * 2 ** 8 +
        buffer[++offset] * 2 ** 16 +
        buffer[++offset] * 2 ** 24;
    // High 32 bits
    const hi = buffer[++offset] +
        buffer[++offset] * 2 ** 8 +
        buffer[++offset] * 2 ** 16 +
        last * 2 ** 24;
    // Combine the two halves as BigInts to keep full 64-bit precision
    return BigInt(lo) + (BigInt(hi) << 32n);
}
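A quick usage sketch (the buffer contents are just an illustration):

// Little-endian bytes, so this buffer encodes 0x0807060504030201
const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);
console.log(readBigUInt64LE(buf)); // -> 578437695752307201n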
EDIT: For anyone else having the same issue, I created a package for this.
https://www.npmjs.com/package/read-bigint
One liner: BigInt('0x'+buffer.toString('hex'))
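One caveat: BigInt('0x') throws a SyntaxError, so an empty buffer needs a guard. A minimal sketch (the function name is just illustrative):

function bufferToBigIntSafe(buf) {
    // BigInt('0x') throws, so treat an empty buffer as 0n
    return buf.length === 0 ? 0n : BigInt('0x' + buf.toString('hex'));
}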
How do I convert a string to an integer in JavaScript?
The simplest way would be to use the native Number function:
var x = Number("1000")
If that doesn't work for you, then there are the parseInt, unary plus, parseFloat with floor, and Math.round methods.
parseInt()
var x = parseInt("1000", 10); // You want to use radix 10
// so that you get a decimal number even with a leading 0 in an old browser (IE8, Firefox 20, Chrome 22 and older)
Unary plus
If your string is already in the form of an integer:
var x = +"1000";
floor()
If your string is or might be a float and you want an integer:
var x = Math.floor("1000.01"); // floor() automatically converts string to number
Or, if you're going to be using Math.floor several times:
var floor = Math.floor;
var x = floor("1000.01");
parseFloat()
If you're the type who forgets to put the radix in when you call parseInt, you can use parseFloat and round it however you like. Here I use floor.
var floor = Math.floor;
var x = floor(parseFloat("1000.01"));
round()
Interestingly, Math.round (like Math.floor) will do a string to number conversion, so if you want the number rounded (or if you have an integer in the string), this is a great way, maybe my favorite:
var round = Math.round;
var x = round("1000"); // Equivalent to round("1000", 0)
Try parseInt function:
var number = parseInt("10");
But there is a problem. If you try to convert "010" using the parseInt function, older engines detect it as an octal number and will return the number 8. So, you need to specify a radix (from 2 to 36). In this case, base 10.
parseInt(string, radix)
Example:
var result = parseInt("010", 10) == 10; // Returns true
var result = parseInt("010") == 10; // Returns false
Note that parseInt ignores bad data after parsing anything valid.
This GUID will parse as 51:
var result = parseInt('51e3daf6-b521-446a-9f5b-a1bb4d8bac36', 10) == 51; // Returns true
There are two main ways to convert a string to a number in JavaScript. One way is to parse it and the other way is to change its type to a Number. All of the tricks in the other answers (e.g., unary plus) involve implicitly coercing the type of the string to a number. You can also do the same thing explicitly with the Number function.
Parsing
var parsed = parseInt("97", 10);
parseInt and parseFloat are the two functions used for parsing strings to numbers. Parsing stops silently if it hits a character it doesn't recognise, which can be useful for parsing strings like "92px", but it's also somewhat dangerous: it won't give you any kind of error on bad input; instead you'll get back NaN unless the string starts with a number. Whitespace at the beginning of the string is ignored. Here's an example of it doing something different to what you want, and giving no indication that anything went wrong:
var widgetsSold = parseInt("97,800", 10); // widgetsSold is now 97
It's good practice to always specify the radix as the second argument. In older browsers, if the string started with a 0, it would be interpreted as octal if the radix wasn't specified which took a lot of people by surprise. The behaviour for hexadecimal is triggered by having the string start with 0x if no radix is specified, e.g., 0xff. The standard actually changed with ECMAScript 5, so modern browsers no longer trigger octal when there's a leading 0 if no radix has been specified. parseInt understands radixes up to base 36, in which case both upper and lower case letters are treated as equivalent.
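A few examples of the radix behavior described above (results as in modern engines; the plain "08" case differed in pre-ES5 browsers):

parseInt("0x1f");   // 31 (hex auto-detected from the 0x prefix)
parseInt("08");     // 8 in ES5+; 0 in old engines that treated the leading 0 as octal
parseInt("08", 10); // 8, safe everywhere
parseInt("z", 36);  // 35 (radix up to 36; letters are case-insensitive)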
Changing the Type of a String to a Number
All of the other tricks mentioned above that don't use parseInt involve implicitly coercing the string into a number. I prefer to do this explicitly:
var cast = Number("97");
This has different behavior to the parse methods (although it still ignores whitespace). It's more strict: if it doesn't understand the whole of the string then it returns NaN, so you can't use it for strings like 97px. Since you want a primitive number rather than a Number wrapper object, make sure you don't put new in front of the Number function.
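To illustrate the stricter behavior, a small sketch:

Number("97px");   // NaN - the whole string must parse
Number("  97  "); // 97  - surrounding whitespace is still allowed
Number("");       // 0   - careful: the empty string coerces to 0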
Obviously, converting to a Number gives you a value that might be a float rather than an integer, so if you want an integer, you need to modify it. There are a few ways of doing this:
var rounded = Math.floor(Number("97.654")); // other options are Math.ceil, Math.round
var fixed = Number("97.654").toFixed(0); // rounded rather than truncated (note: toFixed returns a string)
var bitwised = Number("97.654")|0; // do not use for large numbers
Any bitwise operator (here I've done a bitwise or, but you could also do double negation as in an earlier answer or a bit shift) will convert the value to a 32-bit integer, and most of them will convert to a signed integer. Note that this will not do what you want for large integers. If the integer cannot be represented in 32 bits, it will wrap.
~~"3000000000.654" === -1294967296
// This is the same as
Number("3000000000.654")|0
"3000000000.654" >>> 0 === 3000000000 // unsigned right shift gives you an extra bit
"300000000000.654" >>> 0 === 3647256576 // but still fails with larger numbers
To work correctly with larger numbers, you should use the rounding methods:
Math.floor("3000000000.654") === 3000000000
// This is the same as
Math.floor(Number("3000000000.654"))
Bear in mind that coercion understands exponential notation and Infinity, so 2e2 is 200 rather than NaN, while the parse methods don't.
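For example:

Number("2e2");            // 200
parseInt("2e2", 10);      // 2 (parsing stops at the 'e')
Number("Infinity");       // Infinity
parseInt("Infinity", 10); // NaN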
Custom
It's unlikely that either of these methods do exactly what you want. For example, usually I would want an error thrown if parsing fails, and I don't need support for Infinity, exponentials or leading whitespace. Depending on your use case, sometimes it makes sense to write a custom conversion function.
Always check that the output of Number or one of the parse methods is the sort of number you expect. You will almost certainly want to use isNaN to make sure the number is not NaN (usually the only way you find out that the parse failed).
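A minimal sketch of such a custom function (the name and the exact rules are just one possible choice), strict about its input and throwing instead of silently returning NaN:

function toInteger(str) {
    // Accept only an optional sign followed by digits; reject everything else
    if (!/^[+-]?\d+$/.test(str)) {
        throw new Error('Cannot convert "' + str + '" to an integer');
    }
    const n = Number(str);
    if (!Number.isSafeInteger(n)) {
        throw new Error('"' + str + '" is outside the safe integer range');
    }
    return n;
}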
parseInt() and + are different
parseInt("10.3456") // returns 10
+"10.3456" // returns 10.3456
Fastest
var x = "1000"*1;
Test
Here is a little speed comparison (macOS only)... :)
For Chrome, 'plus' and 'mul' are fastest (>700,000,000 op/sec); 'Math.floor' is slowest. For Firefox, 'plus' is slowest (!) and 'mul' is fastest (>900,000,000 op/sec). In Safari, 'parseInt' is fastest and 'number' is slowest (but the results are quite similar, >13,000,000 and <31,000,000). So Safari is more than 10x slower than the other browsers at casting a string to an int. The winner is 'mul' :)
You can run it in your browser via this link:
https://jsperf.com/js-cast-str-to-number/1
I also tested var x = ~~"1000";. On Chrome and Safari, it is a little bit slower than var x = "1000"*1 (<1%), and on Firefox it is a little bit faster (<1%).
I use this way of converting string to number:
var str = "25"; // String
var number = str*1; // Number
So, when multiplying by 1, the value does not change, but JavaScript automatically returns a number.
But this should only be used if you are sure that str is a number (or can be represented as a number); otherwise it will return NaN (not a number).
You can create a simple function for this, e.g.:

function toNumber(str) {
    return str * 1;
}
Try parseInt.
var number = parseInt("10", 10); //number will have value of 10.
I love this trick:
~~"2.123"; //2
~~"5"; //5
The double bitwise NOT drops anything after the decimal point AND converts the string to a number. I've been told it's slightly faster than calling functions and whatnot, but I'm not entirely convinced.
Another method I just saw here (a question about the JavaScript >>> operator, which is a zero-fill right shift) shows that shifting a number by 0 with this operator converts it to a uint32, which is nice if you also want it unsigned. Note that this converts to an unsigned integer, which can lead to strange behavior if you use a signed number.
"-2.123" >>> 0; // 4294967294
"2.123" >>> 0; // 2
"-5" >>> 0; // 4294967291
"5" >>> 0; // 5
In JavaScript, you can do the following:
parseInt
parseInt("10.5") // Returns 10
Multiplying with 1
var s = "10";
s = s*1; // Returns 10
Using the unary operator (+)
var s = "10";
s = +s; // Returns 10
Using a bitwise operator
(Note: it starts to break above 2^31 - 1 = 2147483647, the signed 32-bit limit. Example: ~~"2150000000" = -2144967296)
var s = "10.5";
s = ~~s; // Returns 10
Using Math.floor() or Math.ceil()
var s = "10";
s = Math.floor(s) || Math.ceil(s); // Returns 10
Please see the below example. It will help answer your question.
Example                Result

parseInt("4")          4
parseInt("5aaa")       5
parseInt("4.33333")    4
parseInt("aaa")        NaN (means "Not a Number")

The parseInt function returns only the integer part it can parse from the start of the string, not the rest of the string.
Beware if you use parseInt to convert a float in scientific notation!
For example:
parseInt("5.6e-14")
will result in
5
instead of
0
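If your string may be in scientific notation, going through Number first avoids this (a sketch):

Math.round(Number("5.6e-14")); // 0, as expected
parseInt("5.6e-14", 10);       // 5 (parsing stops at the '.')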
Also, as a side note: MooTools has the function toInt(), which can be used on any native string (or float or integer).
"2".toInt() // 2
"2px".toInt() // 2
(2).toInt() // 2
We can use +(stringOfNumber) instead of using parseInt(stringOfNumber).
Example: +("21") returns the number 21, like parseInt("21") does.
We can use this unary + operator for parsing floats too...
To convert a String into Integer, I recommend using parseFloat and not parseInt. Here's why:
Using parseFloat:
parseFloat('2.34cms') //Output: 2.34
parseFloat('12.5') //Output: 12.5
parseFloat('012.3') //Output: 12.3
Using parseInt:
parseInt('2.34cms') //Output: 2
parseInt('12.5') //Output: 12
parseInt('012.3') //Output: 12
So, as you may have noticed, parseInt discards the values after the decimal point, whereas parseFloat lets you work with floating-point numbers and is hence more suitable if you want to retain the values after the decimal point. Use parseInt if and only if you are sure that you want the integer value.
There are many ways in JavaScript to convert a string to a number value... All are simple and handy. Choose the one that works for you:
var num = Number("999.5"); //999.5
var num = parseInt("999.5", 10); //999
var num = parseFloat("999.5"); //999.5
var num = +"999.5"; //999.5
Also, any math operation converts the string to a number, for example...
var num = "999.5" / 1; //999.5
var num = "999.5" * 1; //999.5
var num = "999.5" - 1 + 1; //999.5
var num = "999.5" - 0; //999.5
var num = Math.floor("999.5"); //999
var num = ~~"999.5"; //999
My preferred way is the + sign, which is an elegant way to convert a string to a number in JavaScript.
Try str - 0 to convert string to number.
> str = '0'
> str - 0
0
> str = '123'
> str - 0
123
> str = '-12'
> str - 0
-12
> str = 'asdf'
> str - 0
NaN
> str = '12.34'
> str - 0
12.34
Here are two links comparing the performance of several ways to convert a string to an int:
https://jsperf.com/number-vs-parseint-vs-plus
http://phrogz.net/js/string_to_number.html
Here is the easiest solution:
let myNumber = "123" | 0;
An even easier solution:
let myNumber = +"123";
In my opinion, no answer covers all edge cases; parsing a float should result in an error.
function parseInteger(value) {
    if (value === '') return NaN;
    const number = Number(value);
    return Number.isInteger(number) ? number : NaN;
}
parseInteger("4") // 4
parseInteger("5aaa") // NaN
parseInteger("4.33333") // NaN
parseInteger("aaa"); // NaN
The easiest way would be to use + like this:
const strTen = "10"
const numTen = +strTen // string to number conversion
console.log(typeof strTen) // string
console.log(typeof numTen) // number
I actually needed to "save" a string as an integer, for a binding between C and JavaScript, so I convert the string into an integer value:
/*
Examples:
int2str( str2int("test") ) == "test" // true
int2str( str2int("t€st") ) // "t¬st", because "€".charCodeAt(0) is 8364, will be AND'ed with 0xff
Limitations:
maximum 4 characters, so it fits into an integer
*/
function str2int(the_str) {
    var ret = 0;
    var len = the_str.length;
    if (len >= 1) ret += (the_str.charCodeAt(0) & 0xff) << 0;
    if (len >= 2) ret += (the_str.charCodeAt(1) & 0xff) << 8;
    if (len >= 3) ret += (the_str.charCodeAt(2) & 0xff) << 16;
    if (len >= 4) ret += (the_str.charCodeAt(3) & 0xff) << 24;
    return ret;
}

function int2str(the_int) {
    var tmp = [
        (the_int & 0x000000ff) >> 0,
        (the_int & 0x0000ff00) >> 8,
        (the_int & 0x00ff0000) >> 16,
        (the_int & 0xff000000) >>> 24 // unsigned shift, or a char code >= 0x80 comes back negative
    ];
    var ret = "";
    for (var i = 0; i < 4; i++) {
        if (tmp[i] == 0)
            break;
        ret += String.fromCharCode(tmp[i]);
    }
    return ret;
}
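A quick usage sketch of the pair above:

console.log(str2int("test"));     // 1953719668
console.log(int2str(1953719668)); // "test"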
String to Number in JavaScript:
Unary + (most recommended)
+numStr is easy to use and has better performance compared with others
Supports both integers and decimals
console.log(+'123.45') // => 123.45
Some other options:
Parsing Strings:
parseInt(numStr) for integers
parseFloat(numStr) for both integers and decimals
console.log(parseInt('123.456')) // => 123
console.log(parseFloat('123')) // => 123
JavaScript Functions
Math functions like round(numStr), floor(numStr), ceil(numStr) for integers
Number(numStr) for both integers and decimals
console.log(Math.floor('123')) // => 123
console.log(Math.round('123.456')) // => 123
console.log(Math.ceil('123.454')) // => 124
console.log(Number('123.123')) // => 123.123
Unary Operators
All basic unary operators, +numStr, numStr-0, 1*numStr, numStr*1, and numStr/1
All support both integers and decimals
Be cautious about numStr+0. It returns a string.
console.log(+'123') // => 123
console.log('002'-0) // => 2
console.log(1*'5') // => 5
console.log('7.7'*1) // => 7.7
console.log(3.3/1) // =>3.3
console.log('123.123'+0, typeof ('123.123' + 0)) // => 123.1230 string
Bitwise Operators
Double tilde (~~numStr) or left shift by 0 (numStr<<0)
Supports only integers, but not decimals
console.log(~~'123') // => 123
console.log('0123'<<0) // => 123
console.log(~~'123.123') // => 123
console.log('123.123'<<0) // => 123
function parseIntSmarter(str) {
    // parseInt is bad because it returns 22 for "22thisendsintext"
    // Number() returns NaN if the string ends in non-numbers, but it returns 0 for empty or whitespace strings
    return isNaN(Number(str)) ? NaN : parseInt(str, 10);
}
You can use plus.
For example:
var personAge = '24';
var personAge1 = (+personAge)
Then you can see the new variable's type with typeof personAge1, which is number.
Summing each digit multiplied by its respective power of ten:
i.e.: 123 = 100 + 20 + 3 = 1*100 + 2*10 + 3*1 = 1*(10^2) + 2*(10^1) + 3*(10^0)
function atoi(array) {
    // Use exp as (length - i); the other option would be
    // to reverse the array.
    // Multiply a[i] * 10^exp and sum
    let sum = 0;
    for (let i = 0; i < array.length; i++) {
        let exp = array.length - (i + 1);
        let value = array[i] * Math.pow(10, exp);
        sum += value;
    }
    return sum;
}
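For example:

console.log(atoi([1, 2, 3])); // 123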
The safest way to ensure you get a valid integer:
let integer = (parseInt(value, 10) || 0);
Examples:
// Example 1 - Invalid value:
let value = null;
let integer = (parseInt(value, 10) || 0);
// => integer = 0
// Example 2 - Valid value:
let value = "1230.42";
let integer = (parseInt(value, 10) || 0);
// => integer = 1230
// Example 3 - Invalid value:
let value = () => { return 412 };
let integer = (parseInt(value, 10) || 0);
// => integer = 0
Another option is to double XOR the value with itself:
var i = 12.34;
console.log('i = ' + i);
console.log('i ⊕ i ⊕ i = ' + (i ^ i ^ i));
This will output:
i = 12.34
i ⊕ i ⊕ i = 12
I just added one plus (+) before the string, and that was the solution!
+"052254" // 52254
Number()
Number(" 200.12 ") // Returns 200.12
Number("200.12") // Returns 200.12
Number("200") // Returns 200
parseInt()

parseInt(" 200.12 ") // Returns 200
parseInt("200.12") // Returns 200
parseInt("200") // Returns 200
parseInt("Text information") // Returns NaN

parseFloat()

It returns the first number found

parseFloat("200 400") // Returns 200
parseFloat("200") // Returns 200
parseFloat("Text information") // Returns NaN
parseFloat("200.10") // Returns 200.1

Math.floor()

Rounds down to the largest integer less than or equal to the given number

Math.floor(" 200.12 ") // Returns 200
Math.floor("200.12") // Returns 200
Math.floor("200") // Returns 200
function doSth(){
    var a = document.getElementById('input').value;
    document.getElementById('number').innerHTML = toNumber(a) + 1;
}

function toNumber(str){
    return +str;
}
<input id="input" type="text">
<input onclick="doSth()" type="submit">
<span id="number"></span>
This (probably) isn't the best solution for parsing an integer, but if you need to "extract" one, for example:
"1a2b3c" === 123
"198some text2hello world!30" === 198230
// ...
this would work (only for integers):
var str = '3a9b0c3d2e9f8g'

function extractInteger(str) {
    var result = 0;
    var factor = 1;
    for (var i = str.length; i > 0; i--) {
        // parseInt returns NaN for non-digits (including spaces), so test it directly
        if (!isNaN(parseInt(str[i - 1], 10))) {
            result += parseInt(str[i - 1], 10) * factor;
            factor *= 10;
        }
    }
    return result;
}

console.log(extractInteger(str))
Of course, this would also work for parsing an integer, but would be slower than other methods.
You could also parse integers with this method and return NaN if the string isn't a number, but I don't see why you'd want to, since this relies on parseInt internally and parseInt is probably faster.
var str = '3a9b0c3d2e9f8g'

function extractInteger(str) {
    var result = 0;
    var factor = 1;
    for (var i = str.length; i > 0; i--) {
        if (isNaN(str[i - 1])) return NaN;
        result += parseInt(str[i - 1], 10) * factor;
        factor *= 10;
    }
    return result;
}

console.log(extractInteger(str))
How can I generate random numbers in a specific range using crypto.randomBytes?
I want to be able to generate a random number like this:
console.log(random(55, 956)); // where 55 is minimum and 956 is maximum
and I'm limited to using only crypto.randomBytes inside the random function to generate the random number for this range.
I know how to convert the generated bytes from randomBytes to hex or decimal, but I can't figure out how to get a random number in a specific range from random bytes mathematically.
To generate a random number in a certain range you can use the following equation:
Math.random() * (high - low) + low
But you want to use crypto.randomBytes instead of Math.random().
This function returns a buffer with randomly generated bytes. In turn, you need to convert the result of this function from bytes to decimal. This can be done using the biguint-format package. To install this package, simply use the following command:
npm install biguint-format --save
Now you need to convert the result of crypto.randomBytes to decimal; you can do that as follows:
var x= crypto.randomBytes(1);
return format(x, 'dec');
Now you can create your random function, which will be as follows:
var crypto = require('crypto'),
    format = require('biguint-format');

function randomC(qty) {
    var x = crypto.randomBytes(qty);
    return format(x, 'dec');
}

function random(low, high) {
    // 4 random bytes form an integer in [0 .. 2^32 - 1]; dividing by 2^32 - 1 normalizes it to [0 .. 1]
    return randomC(4) / (Math.pow(2, 4 * 8) - 1) * (high - low) + low;
}

console.log(random(50, 1000));
Thanks to the answer from @Mustafamg and huge help from @CodesInChaos I managed to resolve this issue. I made some tweaks and increased the range to a maximum of 256^6-1, or 281,474,976,710,655. The range can be increased further, but you need an additional library for big integers, because 256^7-1 is beyond the Number.MAX_SAFE_INTEGER limit.
If anyone has the same problem, feel free to use it.
var crypto = require('crypto');
/*
Generating random numbers in specific range using crypto.randomBytes from crypto library
Maximum available range is 281474976710655 or 256^6-1
Maximum number for range must be equal or less than Number.MAX_SAFE_INTEGER (usually 9007199254740991)
Usage examples:
cryptoRandomNumber(0, 350);
cryptoRandomNumber(556, 1250425);
cryptoRandomNumber(0, 281474976710655);
cryptoRandomNumber((Number.MAX_SAFE_INTEGER-281474976710655), Number.MAX_SAFE_INTEGER);
Tested and working on 64-bit Windows and Unix operating systems.
*/
function cryptoRandomNumber(minimum, maximum){
    var distance = maximum - minimum;

    if(minimum >= maximum){
        console.log('Minimum number should be less than maximum');
        return false;
    } else if(distance > 281474976710655){
        console.log('You can not get all possible random numbers if range is greater than 256^6-1');
        return false;
    } else if(maximum > Number.MAX_SAFE_INTEGER){
        console.log('Maximum number should be below the safe integer limit');
        return false;
    } else {
        var maxBytes = 6;
        var maxDec = 281474976710656;

        // To avoid huge mathematical operations and increase function performance
        // for small ranges, you can uncomment the following block
        /*
        if(distance < 256){
            maxBytes = 1;
            maxDec = 256;
        } else if(distance < 65536){
            maxBytes = 2;
            maxDec = 65536;
        } else if(distance < 16777216){
            maxBytes = 3;
            maxDec = 16777216;
        } else if(distance < 4294967296){
            maxBytes = 4;
            maxDec = 4294967296;
        } else if(distance < 1099511627776){
            maxBytes = 5;
            maxDec = 1099511627776;
        }
        */

        var randbytes = parseInt(crypto.randomBytes(maxBytes).toString('hex'), 16);
        var result = Math.floor(randbytes / maxDec * (maximum - minimum + 1) + minimum);

        if(result > maximum){
            result = maximum;
        }
        return result;
    }
}
So far it works fine, and you can use it as a really good random number generator, but I strictly do not recommend using this function for any cryptographic services. If you do, use it at your own risk.
All comments, recommendations and criticism are welcome!
To generate numbers in the range [55 .. 956], you first generate a random number in the range [0 .. 901] where 901 = 956 - 55. Then add 55 to the number you just generated.
To generate a number in the range [0 .. 901], pick off two random bytes and mask off 6 bits. That will give you a 10-bit random number in the range [0 .. 1023]. If that number is <= 901 then you are finished. If it is bigger than 901, discard it and get two more random bytes. Do not attempt to use MOD to get the number into the right range; that will distort the output, making it non-random.
ETA: To reduce the chance of having to discard a generated number.
Since we are taking two bytes from the RNG, we get a number in the range [0 .. 65535]. Now 65536 MOD 902 is 592. Hence, if our two-byte random number is less than 65536 - 592 = 64944 (which is 72 * 902), we can safely use the MOD operator, since each number in the range [0 .. 901] is then equally likely. Any two-byte number >= 64944 still has to be thrown away, as using it would distort the output away from random. Before, the chance of having to reject a number was (1024 - 902) / 1024, about 12%. Now the chance of a rejection is 592 / 65536, about 1%. We are far less likely to have to reject the randomly generated number.
running <- true
while running
    num <- two byte random
    if (num < 64944)
        result <- num MOD 902
        running <- false
    endif
endwhile
return result + 55
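A sketch of this scheme in Node.js, using the MOD variant with the 64944 threshold derived above (the function name is just illustrative):

const crypto = require('crypto');

function random55to956() {
    while (true) {
        // Two random bytes give a value in [0 .. 65535]
        const num = crypto.randomBytes(2).readUInt16BE(0);
        // 64944 = 72 * 902, so below this threshold num MOD 902 is unbiased
        if (num < 64944) return (num % 902) + 55;
        // Otherwise discard the draw and try again
    }
}

console.log(random55to956()); // e.g. 312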
The crypto package now has a randomInt() function. It was added in v14.10.0 and v12.19.0.
console.log(crypto.randomInt(55, 957)); // where 55 is minimum and 956 is maximum
The upper bound is exclusive.
Here is the (abridged) implementation:
// Largest integer we can read from a buffer.
// e.g.: Buffer.from("ff".repeat(6), "hex").readUIntBE(0, 6);
const RAND_MAX = 0xFFFF_FFFF_FFFF;

const range = max - min;
const excess = RAND_MAX % range;
const randLimit = RAND_MAX - excess;

while (true) {
    const x = randomBytes(6).readUIntBE(0, 6);
    // If x > (maxVal - (maxVal % range)), we would get "modulo bias"
    if (x > randLimit) {
        // Try again
        continue;
    }
    const n = (x % range) + min;
    return n;
}
See the full source and the official docs for more information.
The issue with most other solutions is that they distort the distribution (which you probably want to be uniform).
The pseudocode from @rossum lacks generalization (but he proposed the right solution in the text):
const crypto = require('crypto');

// Generates a random integer in range [min, max]
function randomRange(min, max) {
    const diff = max - min + 1;
    if (diff <= 1) return min;
    // The minimum number of bits required to represent diff
    const numberBit = Math.ceil(Math.log2(diff));
    // We can only draw whole bytes (8 bits each)
    const numberBytes = Math.ceil(numberBit / 8);
    let randomNumber;
    do {
        randomNumber = crypto.randomBytes(numberBytes).readUIntBE(0, numberBytes);
        // Keep only the low numberBit bits; taking the number modulo a power
        // of two is equivalent to masking, and also works beyond 31 bits
        randomNumber = randomNumber % (2 ** numberBit);
        // numberBit bits can represent numbers bigger than diff; in that case try again
    } while (randomNumber >= diff);
    return randomNumber + min;
}
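Usage sketch:

console.log(randomRange(55, 956)); // an unbiased integer in [55, 956]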
Regarding performance concerns: the number is in the right range between 50% and 100% of the time (depending on the parameters). That is, in the worst-case scenario the loop is executed more than 7 times with less than 1% probability; in practice, the loop mostly executes once or twice.
The random-js library acknowledges that most solutions out there don't provide random numbers with a uniform distribution, and provides a more complete solution.
I have 14-bit data fed from an FPGA written in VHDL. A Nios II processor reads the 14-bit data from the FPGA and does some processing tasks; the Nios II system is programmed in C code.
The 14-bit data can be positive, zero, or negative. In the Altera compiler, I can only define the data to be 8, 16, or 32 bits, so I define it as 16-bit data.
First, I need to check whether the data is negative; if it is, I need to set the two padding MSBs to '1' so the system detects it as a negative value instead of a positive one.
Second, I need to compute the real value of this binary representation as a decimal value with BOTH an integer and a fractional part.
I learned from this link (Correct algorithm to convert binary floating point "1101.11" into decimal (13.75)?) that I can convert a binary number (consisting of both an integer and a fractional part) to a decimal value. Specifically, I am able to use the code quoted from that link, reproduced below:
#include <stdio.h>
#include <math.h>

double convert(const char binary[]){
    int bi, i;
    int len = 0;
    int dot = -1;
    double result = 0;

    for(bi = 0; binary[bi] != '\0'; bi++){
        if(binary[bi] == '.'){
            dot = bi;
        }
        len++;
    }

    if(dot == -1)
        dot = len;

    for(i = dot; i >= 0; i--){
        if (binary[i] == '1'){
            result += (double) pow(2, (dot - i - 1));
        }
    }

    for(i = dot; binary[i] != '\0'; i++){
        if (binary[i] == '1'){
            result += 1.0 / (double) pow(2.0, (double)(i - dot));
        }
    }

    return result;
}

int main()
{
    char bin[] = "1101.11";
    char bin1[] = "1101";
    char bin2[] = "1101.";
    char bin3[] = ".11";
    printf("%s -> %f\n", bin, convert(bin));
    printf("%s -> %f\n", bin1, convert(bin1));
    printf("%s -> %f\n", bin2, convert(bin2));
    printf("%s -> %f\n", bin3, convert(bin3));
    return 0;
}
I am wondering if this code can be used to check for a negative value. I tried it with the binary string 11111101.11 and it gives an output of 253.75...
I have two questions:
What modifications do I need to make in order to read a negative value?
I know that I can do a bit test (as below) to check whether the MSB is 1; if it is, it is a negative value...

if (data_14bit & 0x2000) // if true, it is a negative value

The issue is, since it involves a fractional part (not only an integer), it confuses me a bit whether the method still works...
If the binary number is originally not in string format, is there any way I could convert it to a string? The binary number is originally fed from an FPGA block written in VHDL: say, 14 bits, with the MSB as the sign bit, the following 6 bits as the magnitude of the integer part, and the last 6 bits as the magnitude of the fractional part. I need the decimal value in C code for the Altera Nios II processor.
OK, so I'm focusing on the fact that you want to reuse the algorithm you mention at the beginning of your question, and I assume that the binary representation you have for your signed number is two's complement. But, based on your comments, I'm not really sure the input you have is the same as the one used by that algorithm.
First, pad the 2 MSBs to get a 16-bit representation:

data_16bit = (data_14bit & 0x2000) ? (data_14bit | 0xC000) : data_14bit;

If the value is positive it remains unchanged, and if it is negative this gives the correct two's complement representation on 16 bits.
For the fractional part, everything is the same as in the algorithm you mentioned in your question.
For the integer part, everything is the same except the treatment of the MSB.
For an unsigned number, the MSB (i.e. bit[15]) represents pow(2, 15-6) (6 is the width of the fractional part), whereas for a signed number in two's complement representation it represents -pow(2, 15-6), meaning that the algorithm becomes:
/* integer part operation */
while(p >= 1)
{
    rem = (int)fmod(p, 10);
    p = (int)(p / 10);
    dec = dec + rem * pow(2, t) * (9 != t ? 1 : -1);
    ++t;
}
Or, said differently, if you don't want the * operator:
/* integer part operation */
while(p >= 1)
{
    rem = (int)fmod(p, 10);
    p = (int)(p / 10);
    if( 9 != t)
    {
        dec = dec + rem * pow(2, t);
    }
    else
    {
        dec = dec - rem * pow(2, t);
    }
    ++t;
}
For the second algorithm that you mention, considering your format, if dot == 11 and i == 0 we are at the MSB (10 integer bits followed by the dot), so the code becomes:
for(i = dot - 1; i >= 0; i--)
{
    if (binary[i] == '1')
    {
        if(11 != dot || i)
        {
            result += (double) pow(2, (dot - i - 1));
        }
        else
        {
            // result -= (double) pow(2, (dot - i - 1));
            // Due to your number format, i == 0 and dot == 11, so:
            result -= 512;
        }
    }
}
WARNING: in brice's algorithm the input is a character string like "11011.101", whereas according to your description you have an integer input, so I'm not sure that this algorithm is suited to your case.
I think this should work:
float convert14BitsToFloat(int16_t in)
{
    /* Sign-extend in, since it is 14 bits */
    if (in & 0x2000) in |= 0xC000;

    /* Convert to float with 6 fractional bits (64 = 2^6) */
    return (float)in / 64.0f;
}
To convert any number to a string, I would use sprintf. Be aware that it may significantly increase the size of your application. If you don't need the float and want to keep the application small, you should write your own conversion function.
Say you have an array of ints (in any language with fixed-size ints). How would you calculate the int closest to their mean?
Edit: to be clear, the result does not have to be present in the array. That is, for the input array [3, 6, 7] the expected result is 5. Also I guess we need to specify a particular rounding direction, so say round down if you are equally close to two numbers.
Edit: This is not homework. I haven't had homework in five years. And this is my first time on stackoverflow, so please be nice!
Edit: The obvious approach of summing up and dividing may overflow, so I'm trying to think of an approach that is overflow safe, for both large arrays and large ints. I think handling overflow correctly (without cheating and using a different type) is by far the hardest part of this problem.
Here's a way that's fast, reasonably overflow-safe and can work when the number of elements isn't known in advance.
// The length of someListOfNumbers doesn't need to be known in advance.
int mean(SomeType someListOfNumbers) {
    double mean = 0, count = 0;
    foreach(element; someListOfNumbers) {
        count++;
        // Incremental (running) mean: never sums the whole list, so the sum can't overflow
        mean += (element - mean) / count;
    }
    if(count == 0) {
        throw new UserIsAnIdiotException(
            "Problem exists between keyboard and chair.");
    }
    return cast(int) floor(mean);
}
Calculate the mean by adding the numbers up and dividing by the count of them, with rounding:
mean = (int)((sum + length/2) / length);
If you are worried about overflow, you can do something like:
int mean = 0, remainder = 0
foreach n in number
    mean += n / length
    remainder += n % length
    if remainder >= length
        mean += 1
        remainder -= length
if remainder > length/2
    mean += 1
print "mean is: " mean

Note that this isn't very fast.
Um... how about just calculating the mean and then rounding to an integer? round(mean(thearray)). Most languages have facilities that allow you to specify the rounding method.
EDIT: So it turns out that this question is really about avoiding overflow, not about rounding. Let me be clear that I agree with those who have said (in the comments) that it's not something to worry about in practice, since it so rarely happens, and when it does you can always get away with using a larger data type.
I see that several other people have given answers that basically consist of dividing each number in the array by the length of the array, then adding them up. That is also a good approach. But just for kicks, here's an alternative (in C-ish pseudocode):
int sum_offset = 0;
for (int i = 1; i < length(array); i++)
    sum_offset += array[i] - array[0]; // offset of each element from the first

// round by your method of choice
int mean_offset = round((float)sum_offset / length(array));
int mean = mean_offset + array[0];
Or another way to do the same thing:
int min = INT_MAX, max = INT_MIN;
for (int i = 0; i < length(array); i++) {
    if (array[i] < min) min = array[i];
    if (array[i] > max) max = array[i];
}
int sum_offset = 0;
for (int i = 0; i < length(array); i++)
    sum_offset += array[i] - min; // offset of each element from the minimum

// round by your method of choice
int mean_offset = round((float)sum_offset / length(array));
int mean = mean_offset + min;
Of course, you need to make sure sum_offset does not overflow, which can happen if length(array) * (max - min) is larger than INT_MAX. In that case, replace the summation with per-element float division, something like this:
// round by your method of choice
float mean_offset_f = 0;
for (int i = 0; i < length(array); i++)
    mean_offset_f += (float)(array[i] - min) / length(array);
int mean = round(mean_offset_f) + min;
Trivia: this method, or something like it, also works quite well for mentally computing the mean of an array whose elements are clustered close together.
Guaranteed not to overflow:
length ← length of list
average ← 0
for each result in the list do:
average ← average + ( result / length )
end for
This has significant problems with accuracy if you're using ints, due to truncation (the average of six 4's comes out as 0).
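To see the truncation problem concretely, here is a quick sketch (in JavaScript, emulating integer division with Math.trunc):

const xs = [4, 4, 4, 4, 4, 4];
let avg = 0;
for (const x of xs) {
    avg += Math.trunc(x / xs.length); // each 4/6 truncates to 0
}
console.log(avg); // 0, not 4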
Welcome, fish. Hope your stay is a pleasant one.
The following pseudocode shows how to do this in the case where the sum fits within the integer type and round rounds to the nearest integer.
In your sample, the numbers sum to 16; dividing by 3 gives 5 1/3, which rounds to 5.
sum = 0
for i = 1 to array.size
    sum = sum + array[i]
sum = sum / array.size
sum = round(sum)
This pseudocode finds the average and covers the problem of overflow:
double avg = 0
int count = 0
for x in array:
    count += 1
    avg = avg * (count - 1) / count // readjust the old average
    avg += x / count                // add in the new number
After that, you can apply your rounding code. If there is no easy way to round in your language, then something like this works (it rounds up when the fraction is over 0.5):
double temp = avg - int(avg) // find the decimal portion
if temp <= 0.5
    avg = int(avg)     // round down
else
    avg = int(avg) + 1 // round up
Pseudocode for getting the average:
double mean = 0
int count = 0
foreach int number in numbers
    count++
    mean += (number - mean) / count

round(mean)       // rounds half up
floor(mean + 0.5) // rounds half up
ceil(mean - 0.5)  // rounds half down
Rounding generally involves adding 0.5 and then truncating (floor), which is why 3.5 rounds up to 4. If you want 3.5 to round down to 3, do the rounding code yourself, but in reverse: subtract 0.5, then take the ceiling.
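In JavaScript terms, the two conventions look like this (a sketch):

const roundHalfUp = x => Math.floor(x + 0.5);  // 3.5 -> 4
const roundHalfDown = x => Math.ceil(x - 0.5); // 3.5 -> 3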
Edit: Updated requirements (no overflow)
ARM assembly. =] Untested. Won't overflow. Ever. (I hope.)
Can probably be optimized a bit. (Maybe use FP/LR?) =S Maybe THUMB will work better here.
.arm
; r0 = pointer to array of integers
; r1 = number of integers in array
; returns mean in r0
mean:
    stmfd   sp!, {r4, r5}
    mov     r5, r1          ; save the count for the divide
    mov     r2, #0          ; sum_lo
    mov     r3, #0          ; sum_hi
    cmp     r1, #0          ; check for empty array
    beq     .end
.loop:
    ldr     r4, [r0], #4
    adds    r2, r2, r4      ; add, setting the carry flag...
    adc     r3, r3, #0      ; ...and fold the carry into the high word
    subs    r1, r1, #1      ; next (subs sets the flags for bne)
    bne     .loop
.end:
    div     r0, r2, r3, r5  ; your own 64-bit/32-bit divide: r0 = (r3:r2) / r5
    ldmfd   sp!, {r4, r5}   ; restore the callee-saved registers
    bx      lr