How to make crypto.randomBytes() replace Math.random() - node.js

I need a cryptographically secure random number generator to replace Math.random().
I came across crypto.randomBytes(); however, it returns a byte array (Buffer). What is a way to turn the byte array into a number between 0 and 1, so that it is compatible with Math.random()?

This should do the trick:
crypto.randomBytes(4).readUInt32LE() / 0xffffffff;
randomBytes generates 4 random bytes, which are then read as a 32-bit unsigned integer in little endian. The maximum value of a 32-bit unsigned int is 0xffffffff (or 4,294,967,295 in decimal). By dividing the randomly generated 32-bit int by its maximum value you get a value between 0 and 1, inclusive of both ends (note that Math.random() itself never returns exactly 1).
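If you want a drop-in stand-in for Math.random(), a minimal sketch might look like this (the function name is mine); dividing by 2**32 instead of 0xffffffff keeps the result in the half-open range [0, 1), exactly like Math.random():
const crypto = require('crypto');

function secureRandom() {
  // 4 random bytes -> unsigned 32-bit integer -> scale into [0, 1)
  return crypto.randomBytes(4).readUInt32LE(0) / 0x100000000; // 0x100000000 = 2**32
}

console.log(secureRandom()); // e.g. 0.7130981264729053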

You could do something like:
var randomVal = (crypto.randomBytes(1)[0] / 255);

Related

How do I properly truncate a 64 bit float to a 32 bit truncated float and back (dropping precision) with Node.js?

So obviously there's no 32-bit float type in JavaScript, but if we're trying to store big data efficiently and many of our values are floats no greater than 100,000 with exactly 2 decimal places, it makes sense to store the 64-bit value in 32 bits by dropping the bits representing precision that we don't need.
I tried doing this by simply writing to a 64 bit BE float buffer like so and slicing the first 4 bytes:
// float32 = Number between 0.00 and 100000.00
const setFloat32 = (float32) => {
  b64.writeDoubleBE(float32, 0); // b64 = 64 bit buffer
  b32 = b64.slice(0, 4);
  return b32;
};
And reading it by adding on 4 empty bytes:
// b32 = the 32 bit buffer from the previous func
const readFloat32 = (b32) => {
  // b32Empty = empty 32 bit buffer
  return Buffer.concat([b32, b32Empty]).readDoubleBE(0);
};
But this mangled simple decimals like:
1.85 => 1.8499994277954102
2.05 => 2.049999237060547
How can I fix my approach to do this correctly, and do so in the most efficient manner for read speed?
If you only want to keep two decimals of precision, you can convert your value to a shifted integer and store that:
function shiftToInteger(val, places) {
  // multiply by a constant to shift the decimals you want to keep into
  // integer positions, then use Math.round() or Math.floor()
  // to truncate the rest of the decimals - depending upon which behavior you want,
  // then return the shifted integer that will fit into a U32 for storage
  return Math.round(val * (10 ** places));
}
This creates a shifted integer that can then be stored in a 32-bit value (with the value limits you described), such as a Uint32Array or Int32Array. When you retrieve it from storage, you would then divide it by 10 ** places (100 for two decimal places) to convert it back to a standard JavaScript float for usage.
The key is to convert whatever decimal precision you want to keep into an integer so you can store it in a non-float type of value that is just large enough for your max anticipated value. You gain storage efficiency because you're using all the storage bits for the desired precision rather than wasting a lot of unnecessary storage bits on decimal precision that you don't need to keep.
Here's an example:
function shiftToInteger(val, places) {
  return Math.round(val * (10 ** places));
}

function shiftToFloat(integer, places) {
  return integer / (10 ** places);
}

let x = new Uint32Array(10);
x[0] = shiftToInteger(1.85, 2);
console.log(x[0]); // output the shifted integer value
console.log(shiftToFloat(x[0], 2)); // convert back to a decimal value
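If you specifically want to keep the Buffer read/write shape from the question, a minimal sketch of the same shifted-integer idea (assuming two decimal places and values up to about 42.9 million, which comfortably covers 100,000) might be:
const b32 = Buffer.alloc(4);

const setFloat32 = (val) => {
  b32.writeUInt32BE(Math.round(val * 100), 0); // store the shifted integer in 4 bytes
  return b32;
};

const readFloat32 = (buf) => buf.readUInt32BE(0) / 100; // shift back down to a float

console.log(readFloat32(setFloat32(1.85))); // 1.85 (exact)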

android AudioTrack playback short array (16bit)

I have an application that plays back audio. It takes encoded audio data over RTP and decodes it into a 16-bit array. The decoded 16-bit array is converted to an 8-bit array (byte array), as this is required for some other functionality.
Even though audio playback is working, it breaks up continuously and the audio output is very hard to recognise. If I listen carefully I can tell it is playing the correct audio.
I suspect this is due to the fact that I convert the 16-bit data stream into a byte array and use write(byte[], int, int, AudioTrack.WRITE_NON_BLOCKING) of the AudioTrack class for audio playback.
Therefore I converted the byte array back to a short array and used write(short[], int, int, AudioTrack.WRITE_NON_BLOCKING) to see if it could resolve the problem.
However, now there is no audio at all. In the debug output I can see the short array has data.
What could be the reason?
Here is the AudioTrack initialization:
sampleRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
minimumBufferSize = AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_16BIT,
        minimumBufferSize,
        AudioTrack.MODE_STREAM);
Here is the code that converts the short array to a byte array:
for (int i = 0; i < internalBuffer.length; i++) {
    bufferIndex = i * 2;
    buffer[bufferIndex] = shortToByte(internalBuffer[i])[0];
    buffer[bufferIndex + 1] = shortToByte(internalBuffer[i])[1];
}
Here is the method that converts the byte array back to a short array:
public short[] getShortAudioBuffer(byte[] b) {
    short audioBuffer[] = null;
    int index = 0;
    int audioSize = 0;
    ByteBuffer byteBuffer = ByteBuffer.allocate(2);
    if ((b == null) && (b.length < 2)) {
        return null;
    } else {
        audioSize = (b.length - (b.length % 2));
        audioBuffer = new short[audioSize / 2];
    }
    if ((audioSize / 2) < 2)
        return null;
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN);
    for (int i = 0; i < audioSize / 2; i++) {
        index = i * 2;
        byteBuffer.put(b[index]);
        byteBuffer.put(b[index + 1]);
        audioBuffer[i] = byteBuffer.getShort(0);
        byteBuffer.clear();
        System.out.print(Integer.toHexString(audioBuffer[i]) + " ");
    }
    System.out.println();
    return audioBuffer;
}
Audio is decoded using the Opus library and the configuration is as follows:
opus_decoder_ctl(dec,OPUS_SET_APPLICATION(OPUS_APPLICATION_AUDIO));
opus_decoder_ctl(dec,OPUS_SET_SIGNAL(OPUS_SIGNAL_MUSIC));
opus_decoder_ctl(dec,OPUS_SET_FORCE_CHANNELS(OPUS_AUTO));
opus_decoder_ctl(dec,OPUS_SET_MAX_BANDWIDTH(OPUS_BANDWIDTH_FULLBAND));
opus_decoder_ctl(dec,OPUS_SET_PACKET_LOSS_PERC(0));
opus_decoder_ctl(dec,OPUS_SET_COMPLEXITY(10)); // highest complexity
opus_decoder_ctl(dec,OPUS_SET_LSB_DEPTH(16)); // 16bit = two byte samples
opus_decoder_ctl(dec,OPUS_SET_DTX(0)); // default - not using discontinuous transmission
opus_decoder_ctl(dec,OPUS_SET_VBR(1)); // use variable bit rate
opus_decoder_ctl(dec,OPUS_SET_VBR_CONSTRAINT(0)); // unconstrained
opus_decoder_ctl(dec,OPUS_SET_INBAND_FEC(0)); // no forward error correction
Let's assume you have a short[] array which contains the 16-bit, one-channel data to be played.
Then each sample is a value between -32768 and 32767 which represents the signal amplitude at that exact moment, and the value 0 represents the middle point (no signal). This array can be passed to the audio track with the ENCODING_PCM_16BIT format encoding.
But things get weird when ENCODING_PCM_8BIT is used for playback (see AudioFormat).
In this case each sample is encoded by one byte. But each byte is unsigned. That means its value is between 0 and 255, while 128 represents the middle point.
Java has no unsigned byte format; the byte type is signed. I.e. the values -128...-1 represent the actual values 128...255. So you have to be careful when converting to the byte array, otherwise it will be noise with a barely recognizable source sound.
short[] input16 = ... // the source 16-bit audio data;
byte[] output8 = new byte[input16.length];
for (int i = 0; i < input16.length; i++) {
    // To convert a 16 bit signed sample to 8 bit unsigned,
    // we add 128 (for rounding), then shift it right 8 positions,
    // then add 128 to be in the range 0..255
    int sample = ((input16[i] + 128) >> 8) + 128;
    if (sample > 255) sample = 255; // strip out overload
    output8[i] = (byte)(sample); // cast to signed byte type
}
To perform the backward conversion the principle is the same: each single sample is converted to exactly one sample of the output signal:
byte[] input8 = ... // source 8-bit unsigned audio data;
short[] output16 = new short[input8.length];
for (int i = 0; i < input8.length; i++) {
    // to convert a signed byte back to its unsigned value, just use bitwise AND with 0xFF,
    // then subtract the 128 offset,
    // then scale the value up by 256 to fit the 16-bit range
    output16[i] = (short)(((input8[i] & 0xFF) - 128) * 256);
}
The issue of not being able to convert data from the byte array to a short array was resolved by using bitwise operators instead of ByteBuffer. It could be due to not setting the correct parameters on the ByteBuffer, or it may simply not be suitable for such a conversion.
Nevertheless, implementing the conversion using bitwise operators resolved the problem. Since the original question has been resolved by this approach, please consider this as the final answer.
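For reference, a minimal sketch of the kind of bitwise little-endian conversion described above (reusing the variable names from the question; this is illustrative, not the exact code used):
short[] audioBuffer = new short[b.length / 2];
for (int i = 0; i < audioBuffer.length; i++) {
    int lo = b[2 * i] & 0xFF;      // low byte first (little endian), masked to make it unsigned
    int hi = b[2 * i + 1] & 0xFF;  // high byte
    audioBuffer[i] = (short) ((hi << 8) | lo);
}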
I will raise a separate topic for the playback issue.
Thank you for all your support.

How to interpret the field 'data' of an XImage

I am trying to understand how the data obtained from XGetImage is laid out in memory:
XImage *img = XGetImage(display, root, 0, 0, width, height, AllPlanes, ZPixmap);
Now suppose I want to decompose each pixel value into the red, blue and green channels. How can I do this in a portable way? The following is an example, but it depends on a particular configuration of the X server and does not work in every case:
for (int x = 0; x < width; x++)
    for (int y = 0; y < height; y++) {
        unsigned long pixel = XGetPixel(img, x, y);
        unsigned char blue = pixel & blue_mask;
        unsigned char green = (pixel & green_mask) >> 8;
        unsigned char red = (pixel & red_mask) >> 16;
        //...
    }
In the above example I am assuming a particular order of the R, G, B channels in pixel and also that pixels are 24-bit deep: in fact, I have img->depth=24 and img->bits_per_pixel=32 (the screen also has a 24-bit depth). But this is not the generic case.
As a second step I want to get rid of XGetPixel and use or describe img->data directly. The first thing I need to know is whether there is anything in Xlib which gives me exactly the information I need to interpret how the image is built starting from the img->data field, namely:
the order of the R, G, B channels in each pixel;
the number of bits for each pixel;
the number of bits for each channel;
if possible, a corresponding FOURCC
The shift is a simple function of the mask:
int get_shift(int mask) {
    int shift = 0;
    while (mask) {
        if (mask & 1) break;
        shift++;
        mask >>= 1;
    }
    return shift;
}
Number of bits in each channel is just the number of 1 bits in its mask (count them). The channel order is determined by the shifts (if the red shift is 0, then the first channel is R, etc).
I think the valid values for bits_per_pixel are 1, 2, 4, 8, 15, 16, 24 and 32 (15 and 16 bits are the same 2 bytes per pixel format, but the former has 1 bit unused). I don't think it's worth anyone's time to support anything but 24 and 32 bpp.
X11 is not concerned with media files, so no 4CC code.
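Putting those pieces together, here is a minimal sketch (the helper names are mine, not Xlib's) of mask-driven channel extraction that only assumes the XImage fields mentioned in the question:
#include <X11/Xlib.h>

/* number of trailing zero bits in the mask (same idea as get_shift above) */
static int mask_shift(unsigned long mask) {
    int shift = 0;
    while (mask && !(mask & 1UL)) { mask >>= 1; shift++; }
    return shift;
}

/* number of 1 bits in the mask = bits per channel */
static int mask_bits(unsigned long mask) {
    int bits = 0;
    for (; mask; mask >>= 1) bits += mask & 1UL;
    return bits;
}

/* extract one channel from a pixel value returned by XGetPixel() */
static unsigned long channel(unsigned long pixel, unsigned long mask) {
    return (pixel & mask) >> mask_shift(mask);
}

/* usage, given an XImage *img and coordinates x, y:
 *   unsigned long pixel = XGetPixel(img, x, y);
 *   unsigned long red   = channel(pixel, img->red_mask);
 *   unsigned long green = channel(pixel, img->green_mask);
 *   unsigned long blue  = channel(pixel, img->blue_mask);
 *   int red_bits        = mask_bits(img->red_mask);  // bits per channel
 */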
This can be read from the XImage structure itself.
the order of R,G,B channels in each pixel;
This is contained in this field of the XImage structure:
int byte_order; /* data byte order, LSBFirst, MSBFirst */
which tells you whether it's RGB or BGR (because it only depends on the endianness of the machine).
the number of bits for each pixels;
can be obtained from this field:
int bits_per_pixel; /* bits per pixel (ZPixmap) */
which usually corresponds to the total number of bits set in the channel masks, plus any padding bits:
unsigned long red_mask; /* bits in z arrangement */
unsigned long green_mask;
unsigned long blue_mask;
the number of bits for each channel;
See above, or you can use the code from #n.m.'s answer to count the bits yourself.
Yeah, it would be great if they put the bit shift constants in that structure too, but apparently they decided not to, since the pixels are aligned to bytes anyway, in "standard order" (RGB). Xlib makes sure to convert it to that order for you when it retrieves the data from the X server, even if they are stored internally in a different format server-side. So it's always in RGB format, byte-aligned, but depending on the endianness of the machine, the bytes inside an unsigned long can appear in a reverse order, hence the byte_order field to tell you about that.
So in order to extract these channels, just use the 0, 8 and 16 shifts after masking with red_mask, green_mask and blue_mask, just make sure you shift the right bytes depending on the byte_order and it should work fine.

What NSNumber (Integer 16, 32, 64) in Core Data should I use to keep NSUInteger

I want to store an NSUInteger in my Core Data store and I don't know which type I should use (Integer 16, 32 or 64) to suit the space needed.
From my understanding:
Integer 16 can hold values from -32,768 to 32,767
Integer 32 can hold values from -2,147,483,648 to 2,147,483,647
Integer 64 can hold values from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
and NSUInteger is a typedef of unsigned long, which is equal to unsigned int (Types in objective-c on iPhone),
so if I convert my NSUInteger to an NSNumber with numberWithUnsignedInteger: and save it as an Integer 32, can I retrieve my data back safely?
Do you really need the entire range of an NSUInteger? On iOS that's an unsigned 32-bit value, which can get very large. It will fit into a signed 64-bit integer.
But you probably don't need that much precision anyway. The maximum for a uint32_t is UINT32_MAX which is 4,294,967,295 (4 billion). If you increment once a second, it'll take you more than 136 years to reach that value. Your user's iPhone won't be around by then... :)
If at all possible, when writing data to disk or across a network, it's best to be explicit about the size of the value. Instead of using NSUInteger as the datatype, use uint16_t, uint32_t, or uint64_t depending on the range you need. This then naturally translates to Integer 16, 32, and 64 in Core Data.
To understand why, consider this scenario:
1. You opt to use the Integer 64 type to store your value.
2. On a 64-bit iOS device (e.g. an iPhone 6) it stores the value 5,000,000,000.
3. On a 32-bit iOS device this value is fetched from the store into an NSUInteger (using NSNumber's unsignedIntegerValue).
4. Now, because NSUInteger is only 32 bits on the 32-bit device, the number is no longer 5,000,000,000, because there aren't enough bits to represent 5 billion. If you had swapped the NSUInteger in step 3 for uint64_t, the value would still be 5 billion (see the sketch below).
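For illustration, a minimal sketch of that uint64_t swap (the valueNumber property here is hypothetical, standing in for your own Integer 64 attribute):
// Read the stored NSNumber back into an explicit 64-bit type, so the
// value survives intact even when fetched on a 32-bit device.
uint64_t value = object.valueNumber.unsignedLongLongValue; // still 5,000,000,000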
If you absolutely must use NSUInteger, then you'll just need to be wary about the issues described above and code defensively for it.
As far as storing unsigned values into the seemingly signed Core Data types, you can safely store them and retrieve them:
NSManagedObject *object = // create object
object.valueNumber = @(4000000000); // Store 4 billion in an Integer 32 Core Data type
[managedObjectContext save:NULL]; // Save value to store
// Later on
NSManagedObject *object = // fetch object from store
uint32_t value = object.valueNumber.unsignedIntegerValue; // value will be 4 billion

Is it possible to convert UTF32 text to UTF16 using only Windows API?

I'm trying to find out whether converting UTF-32 text to/from any code page is possible using the Windows API alone. I cannot use the CLR for this task.
The Code page identifiers page at Microsoft, http://msdn.microsoft.com/en-us/library/dd317756(VS.85).aspx, lists UTF-32 as being available only to managed applications.
ConvertStringTo/FromUnicode fails when UTF-32 is used.
You can use this function, which takes the UTF-32 code point to be converted to its equivalent UTF-16 code point (a single unit or a surrogate pair, as the case may be) as the first argument, and the high and low surrogates as the second and third arguments.
The high and low surrogate values are returned by reference.
If the code point is below 0x10000, then we simply return that code point in the low surrogate by reference, while the high surrogate is 0.
If the code point is 0x10000 or greater, then we calculate the high and low surrogate pair using the rules given on this Wikipedia page:
https://en.wikipedia.org/wiki/UTF-16#Example_UTF-16_encoding_procedure
Here's the code:
unsigned int convertUTF32ToUTF16(unsigned int cUTF32, unsigned int &h, unsigned int &l)
{
    if (cUTF32 < 0x10000)
    {
        h = 0;
        l = cUTF32;
        return cUTF32;
    }
    unsigned int t = cUTF32 - 0x10000;
    h = ((t << 12) >> 22) + 0xD800;
    l = ((t << 22) >> 22) + 0xDC00;
    unsigned int ret = (h << 16) | (l & 0x0000FFFF);
    return ret;
}
With a bit of knowledge of Unicode you should be able to create a UTF32 to UTF16 converter without using any APIs.
All characters in the range U+0000 to U+FFFF can simply have the upper 16 bits removed.
Values in the range U+10000 to U+10FFFF can be converted into two 16-bit words, called surrogate pairs:
http://en.wikipedia.org/wiki/UTF-16#Encoding_of_characters_outside_the_BMP
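For illustration, a minimal sketch of that rule (the function name is mine, and no Windows APIs are involved):
/* Convert one UTF-32 code point to one or two UTF-16 code units. */
void utf32_to_utf16(unsigned int cp, unsigned short out[2], int *units)
{
    if (cp < 0x10000) {                                  /* BMP: drop the upper 16 bits */
        out[0] = (unsigned short)cp;
        *units = 1;
    } else {                                             /* supplementary plane: surrogate pair */
        unsigned int t = cp - 0x10000;
        out[0] = (unsigned short)(0xD800 + (t >> 10));   /* high surrogate */
        out[1] = (unsigned short)(0xDC00 + (t & 0x3FF)); /* low surrogate */
        *units = 2;
    }
}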
You can use the iconv library in Windows. It fully supports UTF-32 (big and little endian).
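For example, a hedged sketch of what that might look like with libiconv (assuming the library is installed and linked; error handling is omitted for brevity):
#include <iconv.h>

/* Convert a buffer of UTF-32LE bytes to UTF-16LE bytes; returns bytes written. */
size_t utf32le_to_utf16le(char *in, size_t in_bytes, char *out, size_t out_bytes)
{
    iconv_t cd = iconv_open("UTF-16LE", "UTF-32LE");
    if (cd == (iconv_t)-1)
        return 0;

    char *inp = in, *outp = out;
    size_t inleft = in_bytes, outleft = out_bytes;
    /* note: some iconv builds declare the input buffer as const char **; cast if needed */
    iconv(cd, &inp, &inleft, &outp, &outleft); /* returns (size_t)-1 on error */
    iconv_close(cd);

    return out_bytes - outleft;
}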
