How can I generate random numbers in a specific range using crypto.randomBytes?
I want to be able to generate a random number like this:
console.log(random(55, 956)); // where 55 is minimum and 956 is maximum
and I'm limited to using crypto.randomBytes inside the random function to generate the random number for this range.
I know how to convert the bytes generated by randomBytes to hex or decimal, but I can't figure out how to mathematically get a random number in a specific range from random bytes.
To generate a random number in a certain range you can use the following equation:
Math.random() * (high - low) + low
But you want to use crypto.randomBytes instead of Math.random().
That function returns a buffer of randomly generated bytes, so you need to convert its result from bytes to decimal. This can be done using the biguint-format package. To install it, simply run:
npm install biguint-format --save
Now you need to convert the result of crypto.randomBytes to decimal; you can do that as follows:
var x = crypto.randomBytes(1);
return format(x, 'dec');
Now you can create your random function, which will be as follows:
var crypto = require('crypto'),
    format = require('biguint-format');

function randomC(qty) {
    var x = crypto.randomBytes(qty);
    return format(x, 'dec');
}

function random(low, high) {
    // randomC(4) is in [0, 2^32 - 1], so divide by 2^(4*8) to map it to [0, 1)
    return randomC(4) / Math.pow(2, 4 * 8) * (high - low) + low;
}
console.log(random(50,1000));
Thanks to the answer from @Mustafamg and huge help from @CodesInChaos I managed to resolve this issue. I made some tweaks and increased the range to a maximum of 256^6-1, or 281,474,976,710,655. The range can be increased further, but you need an additional library for big integers, because 256^7-1 is beyond the Number.MAX_SAFE_INTEGER limit.
If anyone has the same problem, feel free to use it.
var crypto = require('crypto');

/*
Generating random numbers in a specific range using crypto.randomBytes from the crypto library
Maximum available range is 281474976710655 or 256^6-1
Maximum number for the range must be equal to or less than Number.MAX_SAFE_INTEGER (usually 9007199254740991)
Usage examples:
cryptoRandomNumber(0, 350);
cryptoRandomNumber(556, 1250425);
cryptoRandomNumber(0, 281474976710655);
cryptoRandomNumber((Number.MAX_SAFE_INTEGER-281474976710655), Number.MAX_SAFE_INTEGER);
Tested and working on 64bit Windows and Unix operating systems.
*/
function cryptoRandomNumber(minimum, maximum) {
    var distance = maximum - minimum;

    if (minimum >= maximum) {
        console.log('Minimum number should be less than maximum');
        return false;
    } else if (distance > 281474976710655) {
        console.log('You can not get all possible random numbers if range is greater than 256^6-1');
        return false;
    } else if (maximum > Number.MAX_SAFE_INTEGER) {
        console.log('Maximum number should be within the safe integer limit');
        return false;
    } else {
        var maxBytes = 6;
        var maxDec = 281474976710656;

        // To avoid huge mathematical operations and increase function performance for small ranges, you can uncomment the following block
        /*
        if (distance < 256) {
            maxBytes = 1;
            maxDec = 256;
        } else if (distance < 65536) {
            maxBytes = 2;
            maxDec = 65536;
        } else if (distance < 16777216) {
            maxBytes = 3;
            maxDec = 16777216;
        } else if (distance < 4294967296) {
            maxBytes = 4;
            maxDec = 4294967296;
        } else if (distance < 1099511627776) {
            maxBytes = 5;
            maxDec = 1099511627776;
        }
        */

        var randbytes = parseInt(crypto.randomBytes(maxBytes).toString('hex'), 16);
        var result = Math.floor(randbytes / maxDec * (maximum - minimum + 1) + minimum);

        if (result > maximum) {
            result = maximum;
        }
        return result;
    }
}
So far it works fine and you can use it as a really good random number generator, but I strictly do not recommend using this function for any cryptographic services. If you do, use it at your own risk.
All comments, recommendations and criticism are welcome!
To generate numbers in the range [55 .. 956], you first generate a random number in the range [0 .. 901] where 901 = 956 - 55. Then add 55 to the number you just generated.
To generate a number in the range [0 .. 901], pick off two random bytes and mask off 6 bits. That will give you a 10-bit random number in the range [0 .. 1023]. If that number is <= 901 then you are finished. If it is bigger than 901, discard it and get two more random bytes. Do not attempt to use MOD to get the number into the right range; that will distort the output, making it non-random.
ETA: To reduce the chance of having to discard a generated number.
Since we are taking two bytes from the RNG, we get a number in the range [0 .. 65535]. Now 65535 MOD 902 is 591. Hence, if our two-byte random number is less than (65535 - 591), that is, less than 64944, we can safely use the MOD operator, since each number in the range [0 .. 901] is now equally likely. Any two-byte number >= 64944 will still have to be thrown away, as using it would distort the output away from random. Before, the chances of having to reject a number were (1024 - 901) / 1024 = 12%. Now the chances of a rejection are (65535 - 64944) / 65535 = 1%. We are far less likely to have to reject the randomly generated number.
running <- true
while running
    num <- two byte random
    if (num < 64944)
        result <- num MOD 902
        running <- false
    endif
endwhile
return result + 55
The crypto package now has a randomInt() function. It was added in v14.10.0 and v12.19.0.
console.log(crypto.randomInt(55, 957)); // where 55 is minimum and 956 is maximum
The upper bound is exclusive.
Here is the (abridged) implementation:
// Largest integer we can read from a buffer.
// e.g.: Buffer.from("ff".repeat(6), "hex").readUIntBE(0, 6);
const RAND_MAX = 0xFFFF_FFFF_FFFF;

const range = max - min;
const excess = RAND_MAX % range;
const randLimit = RAND_MAX - excess;

while (true) {
    const x = randomBytes(6).readUIntBE(0, 6);
    // If x > (maxVal - (maxVal % range)), we will get "modulo bias"
    if (x > randLimit) {
        // Try again
        continue;
    }
    const n = (x % range) + min;
    return n;
}
See the full source and the official docs for more information.
So the issue with most other solutions is that they distort the distribution (which you probably would like to be uniform).
The pseudocode from @rossum lacks generalization (but he proposed the right solution in the text):
// Generates a random integer in range [min, max]
function randomRange(min, max) {
    const diff = max - min + 1;
    // find the minimum number of bits required to represent diff
    const numberBit = Math.ceil(Math.log2(diff));
    // as we are limited to drawing whole bytes, this is the minimum number of bytes
    const numberBytes = Math.ceil(numberBit / 8);
    // as we might draw more bits than required, we look only at what we need (discard the rest)
    const mask = (1 << numberBit) - 1;
    let randomNumber;
    do {
        randomNumber = crypto.randomBytes(numberBytes).readUIntBE(0, numberBytes);
        randomNumber = randomNumber & mask;
        // the masked bits might represent a number bigger than diff, in that case try again
    } while (randomNumber >= diff);
    return randomNumber + min;
}
About performance concerns: the drawn number is in the right range between 50% and 100% of the time (depending on the parameters). In the worst-case scenario the chance that the loop runs more than 7 times is less than 1%, and in practice the loop usually runs only once or twice.
The random-js library acknowledges that most solutions out there don't provide random numbers with a uniform distribution, and provides a more complete solution.
I have 14-bit data that is fed from an FPGA written in VHDL. The Nios II processor reads the 14-bit data from the FPGA and does some processing tasks; the Nios II system is programmed in C code.
The 14-bit data can be positive, zero or negative. In the Altera compiler, I can only define the data as 8, 16 or 32 bits wide, so I define it as 16-bit data.
First, I need to check whether the data is negative; if it is negative, I need to pad the two MSBs with '1' so the system treats it as a negative value instead of a positive one.
Second, I need to compute the real value of this binary representation as a decimal value with BOTH integer and fractional parts.
I learned from this link (Correct algorithm to convert binary floating point "1101.11" into decimal (13.75)?) that I can convert a binary number (consisting of both integer and fractional parts) to a decimal value.
To be specific, I am able to use the code quoted from that link, reproduced below:
#include <stdio.h>
#include <math.h>

double convert(const char binary[]){
    int bi, i;
    int len = 0;
    int dot = -1;
    double result = 0;

    for(bi = 0; binary[bi] != '\0'; bi++){
        if(binary[bi] == '.'){
            dot = bi;
        }
        len++;
    }

    if(dot == -1)
        dot = len;

    for(i = dot; i >= 0; i--){
        if (binary[i] == '1'){
            result += (double) pow(2, (dot-i-1));
        }
    }

    for(i = dot; binary[i] != '\0'; i++){
        if (binary[i] == '1'){
            result += 1.0 / (double) pow(2.0, (double)(i-dot));
        }
    }

    return result;
}

int main()
{
    char bin[] = "1101.11";
    char bin1[] = "1101";
    char bin2[] = "1101.";
    char bin3[] = ".11";

    printf("%s -> %f\n", bin, convert(bin));
    printf("%s -> %f\n", bin1, convert(bin1));
    printf("%s -> %f\n", bin2, convert(bin2));
    printf("%s -> %f\n", bin3, convert(bin3));
    return 0;
}
I am wondering if this code can be used to check for negative value? I did try with a binary string of 11111101.11 and it gives the output of 253.75...
I have two questions:
What modifications do I need to make in order to read a negative value?
I know that I can use a bitwise AND (as below) to check whether the MSB is 1; if it is 1, I know it is a negative value...
if (data_14bit & 0x2000) // if true, it is a negative value
The issue is that since it involves a fractional part (not just an integer part), I am not sure whether the method still works...
If the binary number is originally not in string format, is there any way I could convert it to a string? The binary number is originally fed from an FPGA block written in VHDL: say 14 bits, with the MSB as the sign bit, the following 6 bits being the integer magnitude and the last 6 bits being the fractional magnitude. I need the decimal value in C code for the Altera Nios II processor.
OK, so I am focusing on the fact that you want to reuse the algorithm you mention at the beginning of your question, and I assume that the binary representation you have for your signed number is two's complement; I'm not completely sure from your comments that your input matches the one used by that algorithm.
First, pad the 2 MSBs to get a 16-bit representation:
data_16bit = (data_14bit & 0x2000) ? (data_14bit | 0xC000) : data_14bit;
If the value is positive it remains unchanged; if it is negative, this is the correct two's complement representation on 16 bits.
For the fractional part, everything is the same as in the algorithm you mentioned in your question.
For the integer part, everything is the same except the treatment of the MSB.
For an unsigned number the MSB (i.e. bit[15]) represents pow(2, 15-6) (6 being the width of the fractional part), whereas for a signed number in two's complement representation it represents -pow(2, 15-6), meaning that the algorithm becomes:
/* integer part operation */
while(p >= 1)
{
    rem = (int)fmod(p, 10);
    p = (int)(p / 10);
    dec = dec + rem * pow(2, t) * (9 != t ? 1 : -1);
    ++t;
}
Or, said differently, if you don't want the extra * operator:
/* integer part operation */
while(p >= 1)
{
    rem = (int)fmod(p, 10);
    p = (int)(p / 10);
    if( 9 != t)
    {
        dec = dec + rem * pow(2, t);
    }
    else
    {
        dec = dec - rem * pow(2, t);
    }
    ++t;
}
For the second algorithm that you mention: considering your format, if dot == 11 and i == 0 we are at the MSB (10 integer bits followed by the dot), so the code becomes:
for(i = dot - 1; i >= 0; i--)
{
    if (binary[i] == '1')
    {
        if(11 != dot || i)
        {
            result += (double) pow(2, (dot-i-1));
        }
        else
        {
            // result -= (double) pow(2,(dot-i-1));
            // Due to your number format i == 0 and dot == 11, so
            result -= 1024;
        }
    }
}
WARNING: in Brice's algorithm the input is a character string like "11011.101", whereas according to your description you have an integer input, so I'm not sure that this algorithm is suited to your case.
I think this should work:
float convert14BitsToFloat(int16_t in)
{
    /* Sign-extend in, since it is 14 bits */
    if (in & 0x2000) in |= 0xC000;

    /* Convert to float with 6 fractional bits (64 = 2^6) */
    return (float)in / 64.0f;
}
To convert any number to a string, I would use sprintf. Be aware it may significantly increase the size of your application. If you don't need the float and want to keep the application small, you should write your own conversion function.
Smaz is able to compress a short string (< 100 bytes) where other compressing tools fail.
But there is a problem with it: it doesn't optimize repeating characters by itself.
For example the string "this is a short string" compresses fine:
\x9b8\xac>\xbb\xf2>\xc3F
It is 9 bytes long. But if you have a short string with repeating characters you have a problem. For example, the string "this is a string with many aaaaaaaaaaaaaaaaaaaaaa's" compresses into this:
\x9b8\xac>\xc3F\xf3\xe3\xad\tG\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\xfe'\n
It is still smaller, but the many "\x04"'s look like a waste of space.
I've been thinking about counting character occurrences and replacing a run with a sort of "bookmark"; for example "aaaaaaaaaa" with ten "a" occurrences becomes "a//10".
This is a test Python snippet I've written off the top of my head; it is very, very ugly as of now:
a = set("this is a string with many aaaaaaaaaaaaaaaaaaaaaa's")
b = "this is a string with many aaaaaaaaaaaaaaaaaaaaaa's"
for i in a:
    if i+i in b:  # if char occ. > 2
        o = b.count(i) - 2
        s = i*o
        c = b.replace(s, i+'//'+str(o))

print c
It then becomes:
this is a string with many a//22's
Smaz compressed
\x9b8\xac>\xc3F\xf3\xe3\xad\tG\x04\xc5\xc5\xff\x0222'\n
My worry is: what if the string contains a URL? Is it safe to escape it like "//"? But then you have regex strings; how could it be escaped in that case?
Finally, my clear and concise question is: How do you safely shorten repeating characters that Smaz doesn't compress by itself?
Here's an example of safe compression of repeating bytes. My result for your data example
"this is a string with many aaaaaaaaaaaaaaaaaaaaaa's"
is:
"this is a string with many \x16a's"
It's 31 bytes long, a 39% reduction. "\x16" represents the one byte hexadecimal (22 decimal) count of repeating "a"'s.
What result do you get if you "Smaz" my result?
My result for your Smaz output example
"\x9b8\xac>\xc3F\xf3\xe3\xad\tG\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\xfe"
is:
"\x9b8\xac>\xc3F\xf3\xe3\xad\x01\tG\x16\x04\xfe"
It's 15 bytes long, a 56% reduction. "\x16" represents the one byte hexadecimal (22 decimal) count of repeating compressed "\x04"'s ("a"'s).
Here's my code in Go.
package main

import (
    "fmt"
)

func Compress(src []byte) (dst []byte) {
    for len(src) > 0 {
        c := src[0]
        n := 1
        for ; n < len(src) && src[n] == c; n++ {
        }
        src = src[n:]
        for n > 0 {
            m := (n-1)%31 + 1
            n -= m
            if m == 1 && !(1 <= c && c <= 31) {
                dst = append(dst, c)
            } else {
                dst = append(dst, byte(m), c)
            }
        }
    }
    return dst
}

func Decompress(src []byte) (dst []byte) {
    for i := 0; i < len(src); i++ {
        n, c := byte(1), src[i]
        if i+1 < len(src) && (1 <= c && c <= 31) {
            n, c = c, src[i+1]
            i++
        }
        for j := byte(0); j < n; j++ {
            dst = append(dst, c)
        }
    }
    return dst
}

func test(data string) {
    src := []byte(data)
    fmt.Printf("%d %q\n", len(src), src)
    compress := Compress(src)
    fmt.Printf("%d %q\n", len(compress), compress)
    decompress := Decompress(compress)
    fmt.Printf("%d %q\n", len(decompress), decompress)
    fmt.Println(string(Decompress(Compress(src))) == string(src))
}

func main() {
    data := "this is a string with many aaaaaaaaaaaaaaaaaaaaaa's"
    test(data)
    fmt.Println()
    smaz := "\x9b8\xac>\xc3F\xf3\xe3\xad\tG\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\xfe"
    test(smaz)
}
Output:
51 "this is a string with many aaaaaaaaaaaaaaaaaaaaaa's"
31 "this is a string with many \x16a's"
51 "this is a string with many aaaaaaaaaaaaaaaaaaaaaa's"
true
34 "\x9b8\xac>\xc3F\xf3\xe3\xad\tG\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\xfe"
15 "\x9b8\xac>\xc3F\xf3\xe3\xad\x01\tG\x16\x04\xfe"
34 "\x9b8\xac>\xc3F\xf3\xe3\xad\tG\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\xfe"
true
How do you safely shorten repeating characters that Smaz doesn't compress by itself?
You can't without changing the Smaz algorithm and being incompatible with Smaz.
Smaz is purpose-built to be effective on small strings because its dictionary is universal and pre-computed. Other compression schemes need to build up a dictionary that is data-set dependent, and it typically takes a few hundred bytes of input before you see positive returns. Repeating sequences are rare in short strings.
For your proposed Smaz variant with a run-length encoding scheme to work, you would have to take up one of the 256 precious byte slots Smaz reserves for its codes. You could change one of the byte slots to mean "a byte indicating length follows, followed by the byte to be repeated", i.e., 3 bytes to communicate [REPEAT BYTE] [BYTE indicating 2 - 257 times] [BYTE CODE TO REPEAT]. You could reassign the Smaz byte code 253 from its present meaning of ".com" for the purpose of run-length encoding; a sketch of such an escape is shown below. But be aware that your compression will then be slightly less effective for general data containing ".com".
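For illustration only, here is a rough sketch in Go of how such an escape could be emitted. The repurposed code 253 and the 3-byte layout are the hypothetical scheme described above, not part of Smaz itself; a real implementation would have to hook into the Smaz encoder.

// Hypothetical run-length escape: [253][count-2][byte code], assuming Smaz
// code 253 has been freed up for this purpose.
const repeatMarker = 253

func encodeRun(code byte, count int) []byte {
    if count < 2 {
        // a single occurrence is cheaper as a plain code byte
        return []byte{code}
    }
    if count > 257 {
        count = 257 // longer runs would have to be split into several escapes
    }
    return []byte{repeatMarker, byte(count - 2), code}
}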
Also be aware that searching for repeating sequences in a hypothetical Smaz variant with run-length encoding would necessarily take more CPU compute time for the backtracking compression.
I want a random string of characters only (uppercase or lowercase), no numbers, in Go. What is the fastest and simplest way to do this?
Paul's solution provides a simple, general solution.
The question asks for "the fastest and simplest way". Let's address the fastest part too. We'll arrive at our final, fastest code in an iterative manner. Benchmarks of each iteration can be found at the end of the answer.
All the solutions and the benchmarking code can be found on the Go Playground. The code on the Playground is a test file, not an executable. You have to save it into a file named XX_test.go and run it with
go test -bench . -benchmem
Foreword:
The fastest solution is not a go-to solution if you just need a random string. For that, Paul's solution is perfect. This is if performance does matter. Although the first 2 steps (Bytes and Remainder) might be an acceptable compromise: they do improve performance by like 50% (see exact numbers in the II. Benchmark section), and they don't increase complexity significantly.
Having said that, even if you don't need the fastest solution, reading through this answer might be adventurous and educational.
I. Improvements
1. Genesis (Runes)
As a reminder, the original, general solution we're improving is this:
func init() {
    rand.Seed(time.Now().UnixNano())
}

var letterRunes = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")

func RandStringRunes(n int) string {
    b := make([]rune, n)
    for i := range b {
        b[i] = letterRunes[rand.Intn(len(letterRunes))]
    }
    return string(b)
}
2. Bytes
If the characters to choose from and assemble the random string contains only the uppercase and lowercase letters of the English alphabet, we can work with bytes only because the English alphabet letters map to bytes 1-to-1 in the UTF-8 encoding (which is how Go stores strings).
So instead of:
var letters = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
we can use:
var letters = []byte("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
Or even better:
const letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
Now this is already a big improvement: we managed to make it a const (there are string constants, but there are no slice constants). As an extra gain, the expression len(letters) will also be a const! (The expression len(s) is constant if s is a string constant.)
And at what cost? Nothing at all. Strings can be indexed, and indexing a string indexes its bytes: perfect, exactly what we want.
Our next destination looks like this:
const letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
func RandStringBytes(n int) string {
    b := make([]byte, n)
    for i := range b {
        b[i] = letterBytes[rand.Intn(len(letterBytes))]
    }
    return string(b)
}
3. Remainder
Previous solutions get a random number to designate a random letter by calling rand.Intn() which delegates to Rand.Intn() which delegates to Rand.Int31n().
This is much slower compared to rand.Int63() which produces a random number with 63 random bits.
So we could simply call rand.Int63() and use the remainder after dividing by len(letterBytes):
func RandStringBytesRmndr(n int) string {
    b := make([]byte, n)
    for i := range b {
        b[i] = letterBytes[rand.Int63()%int64(len(letterBytes))]
    }
    return string(b)
}
This works and is significantly faster; the disadvantage is that the probabilities of the letters will not be exactly equal (assuming rand.Int63() produces all 63-bit numbers with equal probability). The distortion is extremely small though, as the number of letters, 52, is much, much smaller than 1<<63 - 1, so in practice this is perfectly fine.
To make this easier to understand: let's say you want a random number in the range 0..5. Using 3 random bits, this would produce the numbers 0..1 with double the probability of the range 2..5. Using 5 random bits, numbers in the range 0..1 would occur with 6/32 probability and numbers in the range 2..5 with 5/32 probability, which is now closer to the desired distribution. Increasing the number of bits makes this less significant; at 63 bits, it is negligible.
4. Masking
Building on the previous solution, we can maintain an equal distribution of letters by using only as many of the lowest bits of the random number as are required to represent the number of letters. For example, if we have 52 letters, it requires 6 bits to represent them: 52 = 110100b. So we will only use the lowest 6 bits of the number returned by rand.Int63(). And to maintain an equal distribution of letters, we only "accept" the number if it falls in the range 0..len(letterBytes)-1. If the lowest bits form a greater number, we discard it and query a new random number.
Note that the chance of the lowest bits being greater than or equal to len(letterBytes) is less than 0.5 in general (0.25 on average), which means that even when this happens, repeating this "rare" case quickly decreases the chance of not finding a good number. After n repetitions, the chance that we still don't have a good index is much less than pow(0.5, n), and that's just an upper estimate. In the case of 52 letters, the chance that the 6 lowest bits are not good is only (64-52)/64 = 0.19, which means, for example, that the chance of not having a good number after 10 repetitions is around 1e-8.
So here is the solution:
const letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
const (
    letterIdxBits = 6                    // 6 bits to represent a letter index
    letterIdxMask = 1<<letterIdxBits - 1 // All 1-bits, as many as letterIdxBits
)

func RandStringBytesMask(n int) string {
    b := make([]byte, n)
    for i := 0; i < n; {
        if idx := int(rand.Int63() & letterIdxMask); idx < len(letterBytes) {
            b[i] = letterBytes[idx]
            i++
        }
    }
    return string(b)
}
5. Masking Improved
The previous solution only uses the lowest 6 bits of the 63 random bits returned by rand.Int63(). This is a waste as getting the random bits is the slowest part of our algorithm.
If we have 52 letters, that means 6 bits code a letter index. So 63 random bits can designate 63/6 = 10 different letter indices. Let's use all those 10:
const letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
const (
    letterIdxBits = 6                    // 6 bits to represent a letter index
    letterIdxMask = 1<<letterIdxBits - 1 // All 1-bits, as many as letterIdxBits
    letterIdxMax  = 63 / letterIdxBits   // # of letter indices fitting in 63 bits
)

func RandStringBytesMaskImpr(n int) string {
    b := make([]byte, n)
    // A rand.Int63() generates 63 random bits, enough for letterIdxMax letters!
    for i, cache, remain := n-1, rand.Int63(), letterIdxMax; i >= 0; {
        if remain == 0 {
            cache, remain = rand.Int63(), letterIdxMax
        }
        if idx := int(cache & letterIdxMask); idx < len(letterBytes) {
            b[i] = letterBytes[idx]
            i--
        }
        cache >>= letterIdxBits
        remain--
    }
    return string(b)
}
6. Source
The improved masking solution is pretty good; there's not much we can improve on it. We could, but it's not worth the added complexity.
Now let's find something else to improve: the source of random numbers.
There is a crypto/rand package which provides a Read(b []byte) function, so we could use that to get as many bytes as we need with a single call. This wouldn't help in terms of performance, as crypto/rand implements a cryptographically secure pseudorandom number generator, so it's much slower.
So let's stick to the math/rand package. The rand.Rand uses a rand.Source as the source of random bits. rand.Source is an interface which specifies an Int63() int64 method: exactly, and only, what we needed and used in our latest solution.
So we don't really need a rand.Rand (either explicit or the global, shared one of the rand package), a rand.Source is perfectly enough for us:
var src = rand.NewSource(time.Now().UnixNano())

func RandStringBytesMaskImprSrc(n int) string {
    b := make([]byte, n)
    // A src.Int63() generates 63 random bits, enough for letterIdxMax characters!
    for i, cache, remain := n-1, src.Int63(), letterIdxMax; i >= 0; {
        if remain == 0 {
            cache, remain = src.Int63(), letterIdxMax
        }
        if idx := int(cache & letterIdxMask); idx < len(letterBytes) {
            b[i] = letterBytes[idx]
            i--
        }
        cache >>= letterIdxBits
        remain--
    }
    return string(b)
}
Also note that this last solution doesn't require you to initialize (seed) the global Rand of the math/rand package as that is not used (and our rand.Source is properly initialized / seeded).
One more thing to note here: package doc of math/rand states:
The default Source is safe for concurrent use by multiple goroutines.
So the default source is slower than a Source that may be obtained by rand.NewSource(), because the default source has to provide safety under concurrent access / use, while rand.NewSource() does not offer this (and thus the Source returned by it is more likely to be faster).
7. Utilizing strings.Builder
All previous solutions return a string whose content is first built in a slice ([]rune in Genesis, and []byte in subsequent solutions), and then converted to string. This final conversion has to make a copy of the slice's content, because string values are immutable, and if the conversion would not make a copy, it could not be guaranteed that the string's content is not modified via its original slice. For details, see How to convert utf8 string to []byte? and golang: []byte(string) vs []byte(*string).
Go 1.10 introduced strings.Builder. strings.Builder is a new type we can use to build contents of a string similar to bytes.Buffer. Internally it uses a []byte to build the content, and when we're done, we can obtain the final string value using its Builder.String() method. But what's cool in it is that it does this without performing the copy we just talked about above. It dares to do so because the byte slice used to build the string's content is not exposed, so it is guaranteed that no one can modify it unintentionally or maliciously to alter the produced "immutable" string.
So our next idea is to not build the random string in a slice, but with the help of a strings.Builder, so once we're done, we can obtain and return the result without having to make a copy of it. This may help in terms of speed, and it will definitely help in terms of memory usage and allocations.
func RandStringBytesMaskImprSrcSB(n int) string {
    sb := strings.Builder{}
    sb.Grow(n)
    // A src.Int63() generates 63 random bits, enough for letterIdxMax characters!
    for i, cache, remain := n-1, src.Int63(), letterIdxMax; i >= 0; {
        if remain == 0 {
            cache, remain = src.Int63(), letterIdxMax
        }
        if idx := int(cache & letterIdxMask); idx < len(letterBytes) {
            sb.WriteByte(letterBytes[idx])
            i--
        }
        cache >>= letterIdxBits
        remain--
    }
    return sb.String()
}
Do note that after creating a new strings.Builder, we called its Builder.Grow() method, making sure it allocates a big enough internal slice (to avoid reallocations as we add the random letters).
8. "Mimicing" strings.Builder with package unsafe
strings.Builder builds the string in an internal []byte, the same as we did ourselves. So basically doing it via a strings.Builder has some overhead, the only thing we switched to strings.Builder for is to avoid the final copying of the slice.
strings.Builder avoids the final copy by using package unsafe:
// String returns the accumulated string.
func (b *Builder) String() string {
    return *(*string)(unsafe.Pointer(&b.buf))
}
The thing is, we can also do this ourselves, too. So the idea here is to switch back to building the random string in a []byte, but when we're done, don't convert it to string to return, but do an unsafe conversion: obtain a string which points to our byte slice as the string data.
This is how it can be done:
func RandStringBytesMaskImprSrcUnsafe(n int) string {
    b := make([]byte, n)
    // A src.Int63() generates 63 random bits, enough for letterIdxMax characters!
    for i, cache, remain := n-1, src.Int63(), letterIdxMax; i >= 0; {
        if remain == 0 {
            cache, remain = src.Int63(), letterIdxMax
        }
        if idx := int(cache & letterIdxMask); idx < len(letterBytes) {
            b[i] = letterBytes[idx]
            i--
        }
        cache >>= letterIdxBits
        remain--
    }
    return *(*string)(unsafe.Pointer(&b))
}
(9. Using rand.Read())
Go 1.7 added a rand.Read() function and a Rand.Read() method. We might be tempted to use these to read as many bytes as we need in one step, in order to achieve better performance.
There is one small "problem" with this: how many bytes do we need? We could say: as many as the number of output letters. That would be an upper estimate, as a letter index uses fewer than 8 bits (1 byte). But at this point we are already doing worse (as getting the random bits is the "hard part"), and we're getting more than needed.
Also note that to maintain an equal distribution of all letter indices, there may be some "garbage" random data that we won't be able to use, so we would end up skipping some data and thus come up short by the time we've gone through the whole byte slice. We would then need to get more random bytes, "recursively". And now we're even losing the "single call to the rand package" advantage...
We could "somewhat" optimize the usage of the random data we acquire from math/rand. We may estimate how many bytes (bits) we'll need: one letter requires letterIdxBits bits, and we need n letters, so we need n * letterIdxBits / 8.0 bytes, rounding up. We can calculate the probability of a random index not being usable (see above), so we could request more, which will "more likely" be enough (and if it turns out it's not, we repeat the process). We can process the byte slice as a "bit stream", for example, for which there is a nice 3rd-party lib: github.com/icza/bitio (disclosure: I'm the author).
But benchmark code still shows we're not winning. Why is that?
The answer to the last question is that rand.Read() uses a loop and keeps calling Source.Int63() until it fills the passed slice. This is exactly what the RandStringBytesMaskImprSrc() solution does, without the intermediate buffer and without the added complexity. That's why RandStringBytesMaskImprSrc() remains on the throne. Yes, RandStringBytesMaskImprSrc() uses an unsynchronized rand.Source unlike rand.Read(), but the reasoning still applies; and this is proven if we use Rand.Read() instead of rand.Read() (the former is also unsynchronized).
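For illustration, here is a rough sketch (not part of the benchmarked solutions above) of what a rand.Read()-based variant could look like: it requests one random byte per output letter up front and refills the buffer when rejections exhaust it.

func RandStringBytesRead(n int) string {
    b := make([]byte, n)
    buf := make([]byte, n+n/2+1) // rough over-allocation to leave room for rejected indices
    rand.Read(buf)
    for i, j := 0, 0; i < n; j++ {
        if j == len(buf) {
            // ran out of random bytes because of rejections, get more
            rand.Read(buf)
            j = 0
        }
        if idx := buf[j] & letterIdxMask; idx < byte(len(letterBytes)) {
            b[i] = letterBytes[idx]
            i++
        }
    }
    return string(b)
}

It keeps the rejection step to preserve the uniform letter distribution, and that extra bookkeeping is exactly what eats up the advantage of the single rand.Read() call.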
II. Benchmark
All right, it's time for benchmarking the different solutions.
Moment of truth:
BenchmarkRunes-4 2000000 723 ns/op 96 B/op 2 allocs/op
BenchmarkBytes-4 3000000 550 ns/op 32 B/op 2 allocs/op
BenchmarkBytesRmndr-4 3000000 438 ns/op 32 B/op 2 allocs/op
BenchmarkBytesMask-4 3000000 534 ns/op 32 B/op 2 allocs/op
BenchmarkBytesMaskImpr-4 10000000 176 ns/op 32 B/op 2 allocs/op
BenchmarkBytesMaskImprSrc-4 10000000 139 ns/op 32 B/op 2 allocs/op
BenchmarkBytesMaskImprSrcSB-4 10000000 134 ns/op 16 B/op 1 allocs/op
BenchmarkBytesMaskImprSrcUnsafe-4 10000000 115 ns/op 16 B/op 1 allocs/op
Just by switching from runes to bytes, we immediately have 24% performance gain, and memory requirement drops to one third.
Getting rid of rand.Intn() and using rand.Int63() instead gives another 20% boost.
Masking (and repeating in case of big indices) slows things down a little (due to the repeated calls when an index is rejected): -22%...
But when we make use of all (or most) of the 63 random bits (10 indices from one rand.Int63() call): that speeds up big time: 3 times.
If we settle with a (non-default, new) rand.Source instead of rand.Rand, we again gain 21%.
If we utilize strings.Builder, we gain a tiny 3.5% in speed, but we also achieved 50% reduction in memory usage and allocations! That's nice!
Finally if we dare to use package unsafe instead of strings.Builder, we again gain a nice 14%.
Comparing the final solution to the initial one: RandStringBytesMaskImprSrcUnsafe() is 6.3 times faster than RandStringRunes(), uses one-sixth of the memory and half as many allocations. Mission accomplished.
You can just write code for it. This code can be a little simpler if you want to rely on the letters all being single bytes when encoded in UTF-8.
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func init() {
    rand.Seed(time.Now().UnixNano())
}

var letters = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")

func randSeq(n int) string {
    b := make([]rune, n)
    for i := range b {
        b[i] = letters[rand.Intn(len(letters))]
    }
    return string(b)
}

func main() {
    fmt.Println(randSeq(10))
}
Use package uniuri, which generates cryptographically secure uniform (unbiased) strings.
Disclaimer: I'm the author of the package
A simple solution with few duplicate results:
import (
    "fmt"
    "math/rand"
    "time"
)

func randomString(length int) string {
    rand.Seed(time.Now().UnixNano())
    b := make([]byte, length+2)
    rand.Read(b)
    return fmt.Sprintf("%x", b)[2 : length+2]
}
Check it out in the PlayGround
Two possible options (there might be more of course):
You can use the crypto/rand package, which supports reading random byte arrays (from /dev/urandom) and is geared towards cryptographic random generation. See http://golang.org/pkg/crypto/rand/#example_Read. It might be slower than normal pseudo-random number generation, though.
Take a random number and hash it using md5 or something like that.
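For illustration, a minimal sketch of the second suggestion (the function name is just an example), hashing random bytes with MD5; note the output is hex, so it contains digits as well as the letters a-f:

import (
    "crypto/md5"
    "crypto/rand"
    "encoding/hex"
)

func randomMD5String() string {
    buf := make([]byte, 16)
    rand.Read(buf)                    // fill with random bytes (crypto/rand)
    sum := md5.Sum(buf)               // hash them
    return hex.EncodeToString(sum[:]) // 32 hex characters
}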
If you want cryptographically secure random numbers, and the exact charset is flexible (say, base64 is fine), you can calculate exactly how many random bytes you need for the desired output size.
Base 64 text is 1/3 longer than base 256. (2^8 vs 2^6; 8bits/6bits = 1.333 ratio)
import (
    "crypto/rand"
    "encoding/base64"
    "math"
)

func randomBase64String(l int) string {
    buff := make([]byte, int(math.Ceil(float64(l)/float64(1.33333333333))))
    rand.Read(buff)
    str := base64.RawURLEncoding.EncodeToString(buff)
    return str[:l] // strip 1 extra character we get from odd length results
}
Note: you can also use RawStdEncoding if you prefer + and / characters to - and _
If you want hex, base 16 is 2x longer than base 256. (2^8 vs 2^4; 8bits/4bits = 2x ratio)
import (
    "crypto/rand"
    "encoding/hex"
    "math"
)

func randomBase16String(l int) string {
    buff := make([]byte, int(math.Ceil(float64(l)/2)))
    rand.Read(buff)
    str := hex.EncodeToString(buff)
    return str[:l] // strip 1 extra character we get from odd length results
}
However, you could extend this to any arbitrary character set if you have a base256-to-baseN encoder for your character set. You can do the same size calculation based on how many bits are needed to represent your character set. The ratio calculation for an arbitrary charset is: ratio = 8 / log2(len(charset)).
Both of these solutions are secure, simple, should be fast, and don't waste your crypto entropy pool.
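For illustration, a small hypothetical helper applying the ratio above, computing how many random bytes are needed for l output characters from a charset of a given size:

import "math"

// bytesNeeded returns the number of random bytes required to produce
// l characters from a charset of charsetSize symbols.
func bytesNeeded(l, charsetSize int) int {
    ratio := 8 / math.Log2(float64(charsetSize))
    return int(math.Ceil(float64(l) / ratio))
}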
Here's the playground showing it works for any size. https://play.golang.org/p/_yF_xxXer0Z
Another version, inspired by generating a password with JavaScript crypto:
package main

import (
    "crypto/rand"
    "fmt"
)

var chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-"

func shortID(length int) string {
    ll := len(chars)
    b := make([]byte, length)
    rand.Read(b) // generates len(b) random bytes
    for i := 0; i < length; i++ {
        b[i] = chars[int(b[i])%ll]
    }
    return string(b)
}

func main() {
    fmt.Println(shortID(18))
    fmt.Println(shortID(18))
    fmt.Println(shortID(18))
}
Following icza's wonderfully explained solution, here is a modification of it that uses crypto/rand instead of math/rand.
const (
    letterBytes   = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ" // 52 possibilities
    letterIdxBits = 6                                                      // 6 bits to represent 64 possibilities / indexes
    letterIdxMask = 1<<letterIdxBits - 1                                   // All 1-bits, as many as letterIdxBits
)

func SecureRandomAlphaString(length int) string {
    result := make([]byte, length)
    bufferSize := int(float64(length) * 1.3)
    for i, j, randomBytes := 0, 0, []byte{}; i < length; j++ {
        if j%bufferSize == 0 {
            randomBytes = SecureRandomBytes(bufferSize)
        }
        if idx := int(randomBytes[j%bufferSize] & letterIdxMask); idx < len(letterBytes) {
            result[i] = letterBytes[idx]
            i++
        }
    }
    return string(result)
}

// SecureRandomBytes returns the requested number of bytes using crypto/rand
func SecureRandomBytes(length int) []byte {
    var randomBytes = make([]byte, length)
    _, err := rand.Read(randomBytes)
    if err != nil {
        log.Fatal("Unable to generate random bytes")
    }
    return randomBytes
}
If you want a more generic solution, that allows you to pass in the slice of character bytes to create the string out of, you can try using this:
// SecureRandomString returns a string of the requested length,
// made from the byte characters provided (only ASCII allowed).
// Uses crypto/rand for security. Will panic if len(availableCharBytes) > 256.
func SecureRandomString(availableCharBytes string, length int) string {

    // Compute bitMask
    availableCharLength := len(availableCharBytes)
    if availableCharLength == 0 || availableCharLength > 256 {
        panic("availableCharBytes length must be greater than 0 and less than or equal to 256")
    }
    var bitLength byte
    var bitMask byte
    for bits := availableCharLength - 1; bits != 0; {
        bits = bits >> 1
        bitLength++
    }
    bitMask = 1<<bitLength - 1

    // Compute bufferSize
    bufferSize := length + length/3

    // Create random string
    result := make([]byte, length)
    for i, j, randomBytes := 0, 0, []byte{}; i < length; j++ {
        if j%bufferSize == 0 {
            // Random byte buffer is empty, get a new one
            randomBytes = SecureRandomBytes(bufferSize)
        }
        // Mask bytes to get an index into the character slice
        if idx := int(randomBytes[j%bufferSize] & bitMask); idx < availableCharLength {
            result[i] = availableCharBytes[idx]
            i++
        }
    }
    return string(result)
}
If you want to pass in your own source of randomness, it would be trivial to modify the above to accept an io.Reader instead of using crypto/rand.
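A sketch of that modification (the function name here is just an example): take the randomness source as an io.Reader parameter instead of calling crypto/rand directly. Passing crypto/rand's rand.Reader reproduces the original behaviour.

// SecureRandomBytesFrom is like SecureRandomBytes, but reads from the
// provided io.Reader (requires the "io" import).
func SecureRandomBytesFrom(src io.Reader, length int) ([]byte, error) {
    randomBytes := make([]byte, length)
    if _, err := io.ReadFull(src, randomBytes); err != nil {
        return nil, err
    }
    return randomBytes, nil
}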
Here is my way. Use math/rand or crypto/rand as you wish.
func randStr(len int) string {
    buff := make([]byte, len)
    rand.Read(buff)
    str := base64.StdEncoding.EncodeToString(buff)
    // Base 64 can be longer than len
    return str[:len]
}
Here is a simple and performant solution for a cryptographically secure random string.
package main

import (
    "crypto/rand"
    "fmt"
    "unsafe"
)

var alphabet = []byte("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")

func main() {
    fmt.Println(generate(16))
}

func generate(size int) string {
    b := make([]byte, size)
    rand.Read(b)
    for i := 0; i < size; i++ {
        b[i] = alphabet[b[i]%byte(len(alphabet))]
    }
    return *(*string)(unsafe.Pointer(&b))
}
Benchmark
Benchmark 95.2 ns/op 16 B/op 1 allocs/op
func Rand(n int) (str string) {
    b := make([]byte, n)
    rand.Read(b)
    str = fmt.Sprintf("%x", b)
    return
}
I usually do it like this if it takes an option to capitalize or not
func randomString(length int, upperCase bool) string {
    rand.Seed(time.Now().UnixNano())
    var alphabet string
    if upperCase {
        alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    } else {
        alphabet = "abcdefghijklmnopqrstuvwxyz"
    }
    var sb strings.Builder
    l := len(alphabet)
    for i := 0; i < length; i++ {
        c := alphabet[rand.Intn(l)]
        sb.WriteByte(c)
    }
    return sb.String()
}
and like this if you don't need capital letters
func randomString(length int) string {
    rand.Seed(time.Now().UnixNano())
    var alphabet string = "abcdefghijklmnopqrstuvwxyz"
    var sb strings.Builder
    l := len(alphabet)
    for i := 0; i < length; i++ {
        c := alphabet[rand.Intn(l)]
        sb.WriteByte(c)
    }
    return sb.String()
}
If you are willing to add a few characters to your pool of allowed characters, you can make the code work with anything that provides random bytes through an io.Reader. Here we are using crypto/rand.
// len(encodeURL) == 64. This allows (x <= 255) x % 64 to have an even
// distribution.
const encodeURL = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"

// A helper function to create and fill a slice of length n with characters from
// a-zA-Z0-9_-. It panics if there are any problems getting random bytes.
func RandAsciiBytes(n int) []byte {
    output := make([]byte, n)

    // We will take n bytes, one byte for each character of output.
    randomness := make([]byte, n)

    // read all random
    _, err := rand.Read(randomness)
    if err != nil {
        panic(err)
    }

    // fill output
    for pos := range output {
        // get random item
        random := uint8(randomness[pos])

        // random % 64
        randomPos := random % uint8(len(encodeURL))

        // put into output
        output[pos] = encodeURL[randomPos]
    }

    return output
}
This is a sample code which I used to generate certificate number in my app.
func GenerateCertificateNumber() string {
    CertificateLength := 7
    t := time.Now().String()
    CertificateHash, err := bcrypt.GenerateFromPassword([]byte(t), bcrypt.DefaultCost)
    if err != nil {
        fmt.Println(err)
    }
    // Make a Regex: we only want letters and numbers
    reg, err := regexp.Compile("[^a-zA-Z0-9]+")
    if err != nil {
        log.Fatal(err)
    }
    processedString := reg.ReplaceAllString(string(CertificateHash), "")
    fmt.Println(string(processedString))
    CertificateNumber := strings.ToUpper(string(processedString[len(processedString)-CertificateLength:]))
    fmt.Println(CertificateNumber)
    return CertificateNumber
}
/*
korzhao
*/
package rand

import (
    "math/rand"
    "sync"
    "time"
    "unsafe"
)

// Rand doesn't share the rand library's source globally, reducing lock contention
type Rand struct {
    Seed int64
    Pool *sync.Pool
}

var (
    MRand    = NewRand()
    randlist = []byte("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890")
)

// NewRand initializes a random number generator
func NewRand() *Rand {
    p := &sync.Pool{
        New: func() interface{} {
            return rand.New(rand.NewSource(getSeed()))
        },
    }
    mrand := &Rand{
        Pool: p,
    }
    return mrand
}

// getSeed gets the seed
func getSeed() int64 {
    return time.Now().UnixNano()
}

func (s *Rand) getrand() *rand.Rand {
    return s.Pool.Get().(*rand.Rand)
}

func (s *Rand) putrand(r *rand.Rand) {
    s.Pool.Put(r)
}

// Intn gets a random number
func (s *Rand) Intn(n int) int {
    r := s.getrand()
    defer s.putrand(r)
    return r.Intn(n)
}

// Read gets random numbers in bulk
func (s *Rand) Read(p []byte) (int, error) {
    r := s.getrand()
    defer s.putrand(r)
    return r.Read(p)
}

func CreateRandomString(len int) string {
    b := make([]byte, len)
    _, err := MRand.Read(b)
    if err != nil {
        return ""
    }
    for i := 0; i < len; i++ {
        b[i] = randlist[b[i]%62]
    }
    return *(*string)(unsafe.Pointer(&b))
}
24.0 ns/op 16 B/op 1 allocs/op
As a follow-up to icza's brilliant solution, below I am using rand.Reader
func RandStringBytesMaskImprRandReaderUnsafe(length uint) (string, error) {
    const (
        charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

        charIdxBits = 6                  // 6 bits to represent a letter index
        charIdxMask = 1<<charIdxBits - 1 // All 1-bits, as many as charIdxBits
        charIdxMax  = 63 / charIdxBits   // # of letter indices fitting in 63 bits
    )

    buffer := make([]byte, length)
    charsetLength := len(charset)
    max := big.NewInt(int64(1 << uint64(charsetLength)))

    limit, err := rand.Int(rand.Reader, max)
    if err != nil {
        return "", err
    }

    for index, cache, remain := int(length-1), limit.Int64(), charIdxMax; index >= 0; {
        if remain == 0 {
            limit, err = rand.Int(rand.Reader, max)
            if err != nil {
                return "", err
            }
            cache, remain = limit.Int64(), charIdxMax
        }

        if idx := int(cache & charIdxMask); idx < charsetLength {
            buffer[index] = charset[idx]
            index--
        }

        cache >>= charIdxBits
        remain--
    }

    return *(*string)(unsafe.Pointer(&buffer)), nil
}

func BenchmarkBytesMaskImprRandReaderUnsafe(b *testing.B) {
    b.ReportAllocs()
    b.ResetTimer()
    const length = 16

    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            RandStringBytesMaskImprRandReaderUnsafe(length)
        }
    })
}
package main

import (
    "encoding/base64"
    "fmt"
    "math/rand"
    "time"
)

// customEncodeURL is like `base64.encodeURL`,
// except it's made up entirely of uppercase characters:
const customEncodeURL = "ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKL"

// Random generates a random string.
// It is not cryptographically secure.
func Random(n int) string {
    b := make([]byte, n)
    rand.Seed(time.Now().UnixNano())
    _, _ = rand.Read(b) // docs say that it always returns a nil error.

    customEncoding := base64.NewEncoding(customEncodeURL).WithPadding(base64.NoPadding)
    return customEncoding.EncodeToString(b)
}

func main() {
    fmt.Println(Random(16))
}
const (
    chars    = "0123456789_abcdefghijkl-mnopqrstuvwxyz" // ABCDEFGHIJKLMNOPQRSTUVWXYZ
    charsLen = len(chars)
    mask     = 1<<6 - 1
)

var rng = rand.NewSource(time.Now().UnixNano())

// RandStr returns a random string of the specified length
func RandStr(ln int) string {
    /* chars has 38 characters
     * rng.Int63() produces 63 random bits per call; we use 6 bits (2^6 = 64) at a time, so one call can be used 10 times
     */
    buf := make([]byte, ln)
    for idx, cache, remain := ln-1, rng.Int63(), 10; idx >= 0; {
        if remain == 0 {
            cache, remain = rng.Int63(), 10
        }
        buf[idx] = chars[int(cache&mask)%charsLen]
        cache >>= 6
        remain--
        idx--
    }
    return *(*string)(unsafe.Pointer(&buf))
}
BenchmarkRandStr16-8 20000000 68.1 ns/op 16 B/op 1 allocs/op
When observing compressed data, I expect an almost uniformly distributed byte stream. When using the chi-square test to measure the distribution, I get this result for e.g. ZIP files and other compressed data, but not for JPG files. I have spent the last few days looking for reasons for this, but I cannot find any.
When calculating the entropy of JPGs, I get a high result (e.g. 7.95 bits/byte). I thought there must be a connection between the entropy and the distribution: the entropy is high when every byte appears with almost the same probability. But when using chi-square, I get a p-value of about 4.5e-5...
I just want to understand how different distributions influence the test results... I thought I could measure the same property with both tests, but obviously I cannot.
Thank you very much for any hint!
tom
Distribution in JPEG files
Ignoring the meta information and the JPEG header data, the payload of a JPEG consists of blocks describing Huffman tables or encoded MCUs (Minimum Coded Units, square blocks of size 16x16). There may be others, but these are the most frequent ones.
Those blocks are delimited by 0xFF 0xSS, where 0xSS is a specific start code. Here is the first problem: 0xFF is a bit more frequent, as twalberg mentioned in the comments.
It may happen that 0xFF occurs in an encoded MCU. To distinguish this normal payload from the start of a new block, 0xFF 0x00 is inserted. If the distribution of the unstuffed payload is perfectly uniform, 0x00 will appear twice as often in the stuffed data. To make things worse, every MCU is padded with binary ones to get byte alignment (a slight bias towards larger values), and we might need stuffing again.
There may also be other factors I'm not aware of. If you need more information, you would have to provide the JPEG file.
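As a small illustration of the byte stuffing described above (a sketch in Go, not taken from any JPEG library): every 0xFF in the entropy-coded data is followed by an inserted 0x00, which is why 0x00 ends up roughly twice as frequent as other byte values in the payload.

func stuff(payload []byte) []byte {
    out := make([]byte, 0, len(payload))
    for _, b := range payload {
        out = append(out, b)
        if b == 0xFF {
            out = append(out, 0x00) // stuffing byte, so 0xFF never looks like a marker start
        }
    }
    return out
}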
And about your basic assumption:
for rand_data:
dd if=/dev/urandom of=rand_data count=4096 bs=256
for rand_pseudo (python):
s = "".join(chr(i) for i in range(256))
with file("rand_pseudo", "wb") as f:
for i in range(4096):
f.write(s)
Both should be uniform regarding byte-values, shouldn't they? ;)
$ ll rand_*
-rw-r--r-- 1 apuch apuch 1048576 2012-12-04 20:11 rand_data
-rw-r--r-- 1 apuch apuch 1048967 2012-12-04 20:13 rand_data.tar.gz
-rw-r--r-- 1 apuch apuch 1048576 2012-12-04 20:14 rand_pseudo
-rw-r--r-- 1 apuch apuch 4538 2012-12-04 20:15 rand_pseudo.tar.gz
A uniform distribution might indicate high entropy, but it's not a guarantee. Also, rand_data might consist of 1 MB of 0x00. It's extremely unlikely, but possible.
Here you can find two files: the first one is random data, generated with /dev/urandom (about 46MB); the second one is a normal JPG file (about 9MB). It is obvious that the symbols of the JPG file are not as evenly distributed as in /dev/urandom.
If I compare both files:
Entropy:
JPG: 7.969247 bits/byte
RND: 7.999996 bits/byte
P-value of the chi-square test:
JPG: 0
RND: 0.3621
How can the entropy lead to such a high result?!?
Here is my java code
public static double getShannonEntropy_Image(BufferedImage actualImage) {
    List<String> values = new ArrayList<String>();
    int n = 0;
    Map<Integer, Integer> occ = new HashMap<>();
    for (int i = 0; i < actualImage.getHeight(); i++) {
        for (int j = 0; j < actualImage.getWidth(); j++) {
            int pixel = actualImage.getRGB(j, i);
            int alpha = (pixel >> 24) & 0xff;
            int red = (pixel >> 16) & 0xff;
            int green = (pixel >> 8) & 0xff;
            int blue = (pixel) & 0xff;
            // 0.2989 * R + 0.5870 * G + 0.1140 * B greyscale conversion
            //System.out.println("i="+i+" j="+j+" argb: " + alpha + ", " + red + ", " + green + ", " + blue);
            int d = (int) Math.round(0.2989 * red + 0.5870 * green + 0.1140 * blue);
            if (!values.contains(String.valueOf(d)))
                values.add(String.valueOf(d));
            if (occ.containsKey(d)) {
                occ.put(d, occ.get(d) + 1);
            } else {
                occ.put(d, 1);
            }
            ++n;
        }
    }
    double e = 0.0;
    for (Map.Entry<Integer, Integer> entry : occ.entrySet()) {
        int cx = entry.getKey();
        double p = (double) entry.getValue() / n;
        // log2(p) is a base-2 logarithm helper, i.e. Math.log(p) / Math.log(2) (not shown here)
        e += p * log2(p);
    }
    return -e;
}