c# - byte array improper conversion to MB

The file is about 24 MB, and it's held in a database, so I convert it to a byte array. Then, after multiple suggestions, I use BitConverter.ToSingle(), and this is giving me bad results. Here's my code:
byte[] imgData = prod.ImageData;
float myFloat = BitConverter.ToSingle(imgData, 0);
float mb = (myFloat / 1024f) / 1024f;
When I debug, I get these results:
byte[24786273]
myFloat = 12564.0361
mb = 0.0119819986
What is weird is that the size of the array is exactly what the file's size should be. How do I correctly convert this to a float so that it shows the size in MB?
EDIT: I tried setting myFloat to imgData.Length, and then the size is correct. However, is this a correct way to do it, and can it cause a problem in the future with bigger values?

You are taking the first four bytes of the image and converting them to an IEEE 754 floating-point value. I'm not an expert on image files, so I'm not sure whether the first four bytes are always the length, but even if that were the case it would still not be correct (see the specification). However, the length of the file is already known through the length of the array, so an easier way to get the size is:
byte[] imgData = prod.ImageData;
float mb = (imgData.Length / 1024f) / 1024f;
To address the concern in your edit: this will still work for large files. Consider a 24 TB example:
var bytes = 24L * 1024 * 1024 * 1024 * 1024;
var inMb = (bytes / 1024.0F / 1024.0F);
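For display, a minimal sketch along the same lines (assuming prod.ImageData holds the raw file bytes) that formats the size as a string:
byte[] imgData = prod.ImageData;              // raw file bytes from the database
double mb = imgData.Length / 1024.0 / 1024.0; // bytes -> MB
Console.WriteLine($"{mb:F2} MB");             // prints "23.64 MB" for 24,786,273 bytes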

Related

How do I properly truncate a 64 bit float to a 32 bit truncated float and back (dropping precision) with Node.js?

So obviously there's no native 32 bit float type in JavaScript, but if we're trying to store big data efficiently and many of our values are floats no greater than 100,000 with exactly 2 decimal places, it makes sense to store the 64 bit value in 32 bits by dropping the bits representing precision that we don't need.
I tried doing this by simply writing to a 64 bit BE float buffer like so and slicing the first 4 bytes:
// float32 = Number between 0.00 and 100000.00
const setFloat32 = (float32) => {
  b64.writeDoubleBE(float32, 0) // b64 = 64 bit buffer
  b32 = b64.slice(0, 4)
  return b32;
}
And reading it by adding on 4 empty bytes:
// b32 = the 32 bit buffer from the previous func
const readFloat32 = (b32) => {
  // b32Empty = empty 32 bit buffer
  return Buffer.concat([b32, b32Empty]).readDoubleBE(0);
}
But this changed flat decimal values like:
1.85 => 1.8499994277954102
2.05 => 2.049999237060547
How can I fix my approach to do this correctly, and do so in the most efficient manner for read speed?
If you only want to keep two decimals of precision, you can convert your value to a shifted integer and store that:
function shiftToInteger(val, places) {
  // multiply by a constant to shift the decimals you want to keep into
  // integer positions, then use Math.round() or Math.floor()
  // to truncate the rest of the decimals - depending upon which behavior you want
  // then return the shifted integer that will fit into a U32 for storage
  return Math.round(val * (10 ** places));
}
This creates a shifted integer that can then be stored in a 32 bit value (within the value limits you described), such as a Uint32Array or Int32Array. When you retrieve it from storage, you divide it by 100 to convert it back to a standard Javascript float for use.
The key is to convert whatever decimal precision you want to keep to an integer so you can store it in a non-float type that is just large enough for your maximum anticipated value. You gain storage efficiency because you're using all the storage bits for the desired precision rather than wasting a lot of unnecessary bits on decimal precision that you don't need to keep.
Here's an example:
function shiftToInteger(val, places) {
  return Math.round(val * (10 ** places));
}

function shiftToFloat(integer, places) {
  return integer / (10 ** places);
}
let x = new Uint32Array(10);
x[0] = shiftToInteger(1.85, 2);
console.log(x[0]); // output shifted integer value
console.log(shiftToFloat(x[0], 2)); // convert back to decimal value
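If you want the value in a 4-byte Buffer rather than a typed array (closer to the Buffer-based setup in the question), a minimal sketch reusing the same helpers:
const buf = Buffer.alloc(4);
buf.writeUInt32BE(shiftToInteger(1.85, 2), 0);     // stores the integer 185
console.log(shiftToFloat(buf.readUInt32BE(0), 2)); // 1.85, exactly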

Why are secp256k1 privateKeys not always 32 bytes in nodejs?

I was generating a lot of secp256k1 keys using node's crypto module when I ran into a problem: some generated private keys were not 32 bytes in length. I wrote a test script, and it clearly shows that this happens quite often.
What is the reason for that, and is there a fix, or do I have to check the length and regenerate until I get 32 bytes?
This is the test script for reproducing the issue:
const { createECDH, ECDH } = require("crypto");
const privateLens = {};
const publicLens = {};
for (let i = 0; i < 10000; i++) {
  const ecdh = createECDH("secp256k1");
  ecdh.generateKeys();
  const privateKey = ecdh.getPrivateKey("hex");
  const publicKey = ecdh.getPublicKey("hex");
  privateLens[privateKey.length+""] = (privateLens[privateKey.length+""] || 0) + 1;
  publicLens[publicKey.length+""] = (publicLens[publicKey.length+""] || 0) + 1;
}
console.log(privateLens);
console.log(publicLens);
The output (of multiple runs) looks like this:
% node test.js
{ '62': 32, '64': 9968 }
{ '130': 10000 }
% node test.js
{ '62': 40, '64': 9960 }
{ '130': 10000 }
% node test.js
{ '62': 39, '64': 9961 }
{ '130': 10000 }
I just don't get it... if I encode it in base64 it's always the same length, but decoding that back to a buffer shows 31 bytes for some keys again.
Thanks, any insights are highly appreciated!
For EC cryptography the key is not fully random over the bytes; it's a random number in the range [1, N), where N is the order of the curve. Generally the number generated will be in the same ballpark as the 256 bit order. This is especially true since N has been (deliberately) chosen to be very close to 2^256, i.e. the high order bits are all set to 1 for secp256k1.
However, about once in 256 times, the top eight bits of the chosen private key s are all zero. That means it takes 31 or fewer bytes instead of 32. Once in 65536 times it will even be 30 bytes instead of 32, once in about 16 million times 29 bytes, and so on.
Base64 uses one character per 6 bits, excluding overhead. However, it generally encodes blocks of 3 bytes into 4 characters at a time (possibly padding with = characters). That means 32 bytes will take ceil(32 / 3) * 4 = 44 characters. Since ceil(31 / 3) * 4 = 44 as well, you won't notice anything. However, once in 65536 times you'll get ceil(30 / 3) * 4 = 40. After that, dropping to 36 characters becomes extremely unlikely (although not negligibly small cryptographically speaking, "just" once in 2^40 times - there are lotteries that do worse, I suppose)...
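A quick sanity check of that arithmetic in Node.js:
console.log(Buffer.alloc(32).toString("base64").length); // 44
console.log(Buffer.alloc(31).toString("base64").length); // 44
console.log(Buffer.alloc(30).toString("base64").length); // 40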
So no, you don't have to regenerate the keys - they are perfectly valid for the algorithm after all. Private keys don't generally carry many compatibility requirements, but you would usually encode such keys at a static size (32 bytes, padding with 00-valued bytes at the left). Re-encoding them as statically sized keys might be a good idea...
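For example, a minimal sketch of that static-size re-encoding, left-padding the hex string from the test script above to 32 bytes (64 hex characters):
const privateKey = ecdh.getPrivateKey("hex").padStart(64, "0");
console.log(privateKey.length); // always 64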

android AudioTrack playback short array (16bit)

I have an application that plays back audio. It takes encoded audio data over RTP and decodes it to a 16 bit array. The decoded 16 bit array is converted to an 8 bit array (byte array), as this is required for some other functionality.
Even though audio playback is working, it breaks up continuously and the audio output is very hard to recognise. If I listen carefully I can tell it is playing the correct audio.
I suspect this is because I convert the 16 bit data stream into a byte array and use write(byte[], int, int, AudioTrack.WRITE_NON_BLOCKING) of the AudioTrack class for audio playback.
Therefore I converted the byte array back to a short array and used write(short[], int, int, AudioTrack.WRITE_NON_BLOCKING) to see if that would resolve the problem.
However, now there is no audio at all. In the debug output I can see the short array has data.
What could be the reason?
Here is the AudioTrack initialization:
sampleRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
minimumBufferSize = AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_16BIT,
        minimumBufferSize,
        AudioTrack.MODE_STREAM);
Here is the code that converts the short array to a byte array:
for (int i = 0; i < internalBuffer.length; i++){
    bufferIndex = i * 2;
    buffer[bufferIndex] = shortToByte(internalBuffer[i])[0];
    buffer[bufferIndex + 1] = shortToByte(internalBuffer[i])[1];
}
Here is the method that converts the byte array to a short array:
public short[] getShortAudioBuffer(byte[] b){
    short audioBuffer[] = null;
    int index = 0;
    int audioSize = 0;
    ByteBuffer byteBuffer = ByteBuffer.allocate(2);
    if ((b == null) || (b.length < 2)){ // note: || rather than &&, or a null b would throw
        return null;
    } else {
        audioSize = (b.length - (b.length % 2));
        audioBuffer = new short[audioSize / 2];
    }
    if ((audioSize / 2) < 2)
        return null;
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN);
    for (int i = 0; i < audioSize / 2; i++){
        index = i * 2;
        byteBuffer.put(b[index]);
        byteBuffer.put(b[index + 1]);
        audioBuffer[i] = byteBuffer.getShort(0);
        byteBuffer.clear();
        System.out.print(Integer.toHexString(audioBuffer[i]) + " ");
    }
    System.out.println();
    return audioBuffer;
}
Audio is decoded using the Opus library, and the configuration is as follows:
opus_decoder_ctl(dec,OPUS_SET_APPLICATION(OPUS_APPLICATION_AUDIO));
opus_decoder_ctl(dec,OPUS_SET_SIGNAL(OPUS_SIGNAL_MUSIC));
opus_decoder_ctl(dec,OPUS_SET_FORCE_CHANNELS(OPUS_AUTO));
opus_decoder_ctl(dec,OPUS_SET_MAX_BANDWIDTH(OPUS_BANDWIDTH_FULLBAND));
opus_decoder_ctl(dec,OPUS_SET_PACKET_LOSS_PERC(0));
opus_decoder_ctl(dec,OPUS_SET_COMPLEXITY(10)); // highest complexity
opus_decoder_ctl(dec,OPUS_SET_LSB_DEPTH(16)); // 16bit = two byte samples
opus_decoder_ctl(dec,OPUS_SET_DTX(0)); // default - not using discontinuous transmission
opus_decoder_ctl(dec,OPUS_SET_VBR(1)); // use variable bit rate
opus_decoder_ctl(dec,OPUS_SET_VBR_CONSTRAINT(0)); // unconstrained
opus_decoder_ctl(dec,OPUS_SET_INBAND_FEC(0)); // no forward error correction
Let's assume you have a short[] array containing the 16-bit, one-channel data to be played.
Then each sample is a value between -32768 and 32767 which represents the signal amplitude at that exact moment, with the value 0 representing the middle point (no signal). This array can be passed to the audio track with the ENCODING_PCM_16BIT format encoding.
But things get weird when ENCODING_PCM_8BIT is used (see AudioFormat).
In this case each sample is encoded by one byte, but each byte is unsigned. That means its value is between 0 and 255, with 128 representing the middle point.
Java has no unsigned byte type; byte is signed, i.e. values -128...-1 represent actual values 128...255. So you have to be careful when converting to the byte array, otherwise it will be noise with a barely recognizable source sound.
short[] input16 = ... // the source 16-bit audio data;
byte[] output8 = new byte[input16.length];
for (int i = 0; i < input16.length; i++) {
    // To convert a 16 bit signed sample to 8 bit unsigned,
    // we add 128 (for rounding), then shift it right 8 positions,
    // then add 128 to be in the range 0..255
    int sample = ((input16[i] + 128) >> 8) + 128;
    if (sample > 255) sample = 255; // strip out overload
    output8[i] = (byte)(sample); // cast to signed byte type
}
The backward conversion follows the same principle: each single sample is converted to exactly one sample of the output signal:
byte[] input8 = ... // source 8-bit unsigned audio data;
short[] output16 = new short[input8.length];
for (int i = 0; i < input8.length; i++) {
    // to convert a signed byte back to an unsigned value, just bitwise AND with 0xFF,
    // then subtract the 128 offset,
    // then scale up by 256 to fit the 16-bit range
    output16[i] = (short)(((input8[i] & 0xFF) - 128) * 256);
}
The issue of not being able to convert data from the byte array to a short array was resolved by using bitwise operators instead of ByteBuffer. It could be due to not setting the correct parameters in the ByteBuffer, or the ByteBuffer may simply not be suitable for such a conversion.
Nevertheless, implementing the conversion using bitwise operators resolved the problem. Since the original question has been resolved by this approach, please consider this the final answer.
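For reference, a minimal sketch of such a bitwise little-endian conversion (assuming an even-length buffer b, in the spirit of the fix described above):
short[] audioBuffer = new short[b.length / 2];
for (int i = 0; i < audioBuffer.length; i++) {
    // low byte first (little-endian); mask with 0xFF to prevent sign extension
    audioBuffer[i] = (short)((b[2 * i] & 0xFF) | (b[2 * i + 1] << 8));
}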
I will raise a separate topic for playback issue.
Thank you for all your support.

Is there any limit in setting zram device disksize on Linux?

I'm trying to create a zram device on my target device. The target cannot allocate memory if the zram disksize is above 100 GB, but it's fine with a disksize of 50 GB or less.
Is there any limit to the zram device disksize on Linux? My target device only has 2 GB of RAM.
I guess you can give a number up to UINT64_MAX - 4095 = 18446744073709547520 on a 64-bit platform.
https://github.com/torvalds/linux/blob/master/drivers/block/zram/zram_drv.h#L101
https://github.com/torvalds/linux/blob/master/drivers/block/zram/zram_drv.c#L1506
https://github.com/torvalds/linux/blob/master/drivers/block/zram/zram_drv.c#L901
So what we have:
... disksize_store(...) {
    u64 disksize;
    ...
    // ok, we can pass in up to UINT64_MAX here
    disksize = memparse(...); // memparse returns unsigned long long
    // PAGE_ALIGN, with PAGE_SIZE = 1<<12
    disksize = PAGE_ALIGN(disksize)
             = (((disksize)+((PAGE_SIZE)-1))&(~((typeof(disksize))(PAGE_SIZE)-1)))
             = (disksize + ((1<<12)-1)) & (~((1<<12)-1))
             = (disksize + 4095) & 0xfffffffffffff000
    //          ^^^^^^^^^^^^^^^ this addition can overflow,
    // so the max number is UINT64_MAX - 4095; anything larger wraps
    // and the macro returns 0
    ...
    if (!zram_meta_alloc(..., disksize)) {
        ...
        return ...;
    }
    ...
    zram->disksize = disksize;
    ...
}
So let's look into zram_meta_alloc:
... zram_meta_alloc(..., disksize) {
    ...
    num_pages = disksize >> PAGE_SHIFT;
    // max num_pages = 0xfffffffffffff = UINT64_MAX >> PAGE_SHIFT
    ... = vzalloc(num_pages * sizeof(*zram->table));
    //           ^^^^^^^^^^^^^^^ this multiplication can overflow
    ...
}
vzalloc takes an unsigned long argument, and ULONG_MAX should equal UINT64_MAX on a 64-bit platform. sizeof(*zram->table) is equal to sizeof(unsigned long) + sizeof(unsigned long) [optionally + sizeof(ktime_t)] + padding (see here). Without padding, assuming a 64-bit platform with sizeof(unsigned long) = 8, that should equal 8+8[+8] = 16 or 24. Either way, the maximum num_pages is UINT64_MAX >> 12, so to overflow the 64-bit multiplication we would need sizeof(*zram->table) = 2^PAGE_SHIFT = 4096, which shouldn't happen (unless the compiler decides to add over 4000 bytes of padding to the zram->table struct). So we are left with UINT64_MAX - 4095.
So the maximum disksize is UINT64_MAX - 4095. If you give a disksize equal to UINT64_MAX - x, where 0 <= x < 4095, then because of the PAGE_ALIGN macro the disksize will effectively be set to 0. This should probably be brought up with the kernel developers so they can modify the PAGE_ALIGN macro to support such numbers.
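A minimal userspace C sketch (PAGE_SIZE hard-coded to 4096, mirroring the kernel macro) that demonstrates the wrap-around:
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void) {
    uint64_t ok  = UINT64_MAX - 4095; // largest disksize that survives alignment
    uint64_t bad = UINT64_MAX - 4094; // one more and the addition wraps
    printf("%llu\n", (unsigned long long)PAGE_ALIGN(ok));  // 18446744073709547520
    printf("%llu\n", (unsigned long long)PAGE_ALIGN(bad)); // 0
    return 0;
}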
Update: a call to array_size() was recently added to the vzalloc call to protect against that multiplication overflow, in this commit.
There is no limit but there is an overhead.
"Note that zram uses about 0.1% of the size of the disk when not in use so a huge zram is wasteful."
https://www.kernel.org/doc/Documentation/blockdev/zram.txt
Also, disksize is a virtual size that depends purely on the input and on the compression ratio achieved by the chosen algorithm; it defines the maximum uncompressed size and the general disk parameters.
The only 'actual' control is mem_limit, which is the compressed size plus disk & zram overheads.
The compression ratio is completely dependent on the compression algorithm chosen from /proc/crypto: zlib & zstd are far more effective but far slower. It is also very dependent on the input; with text, zlib & zstd can achieve over double what lzo & lz4 will.
If the input is already compressed, any algorithm might achieve little to zero compression, and without a mem_limit zram could grab much precious memory from the system.
mem_limit is the maximum you are prepared to let zram grab from the system; a disksize any larger than the expected compression ratio applied to mem_limit is likely a waste.
It will never get used, but it will still be part of the 0.1% empty-creation overhead.
Maybe try https://github.com/StuartIanNaylor/zram-config

How do I attenuate a WAV file by a given decibel value?

If I wanted to reduce a WAV file's amplitude by 25%, I would write something like this:
for (int i = 0; i < data.Length; i++)
{
    data[i] *= 0.75;
}
A lot of the articles I read on audio techniques, however, discuss amplitude in terms of decibels. I understand the logarithmic nature of decibel units in principle, but not so much in terms of actual code.
My question is: if I wanted to attenuate the volume of a WAV file by, say, 20 decibels, how would I do this in code like my above example?
Update: formula (based on Nils Pipenbrinck's answer) for attenuating by a given number of decibels (entered as a positive number e.g. 10, 20 etc.):
public void AttenuateAudio(float[] data, int decibels)
{
    float gain = (float)Math.Pow(10, (double)-decibels / 20.0);
    for (int i = 0; i < data.Length; i++)
    {
        data[i] *= gain;
    }
}
So, if I want to attenuate by 20 decibels, the gain factor is 0.1.
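For example, a quick usage sketch of the method above (ReadSamples is a hypothetical source of float samples in the -1..1 range):
float[] samples = ReadSamples(); // hypothetical: load your WAVE_FORMAT_FLOAT samples
AttenuateAudio(samples, 20);     // every sample is now multiplied by 0.1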
I think you want to convert from decibel to gain.
The equations for audio are:
decibel to gain:
gain = 10 ^ (attenuation in db / 20)
or in C:
gain = powf(10, attenuation / 20.0f);
The equation to convert from gain to dB is:
attenuation_in_db = 20 * log10 (gain)
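For example, a quick round-trip check of those two equations in C:
#include <math.h>
#include <stdio.h>

int main(void) {
    float gain = powf(10.0f, -20.0f / 20.0f); // -20 dB of attenuation
    printf("%f\n", gain);                     // 0.1
    printf("%f\n", 20.0f * log10f(gain));     // -20 (up to float rounding)
    return 0;
}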
If you just want to adjust some audio, I've had good results with the normalize package from nongnu.org. If you want to study how it's done, the source code is freely available. I've also used wavnorm, whose home page seems to be down at the moment.
One thing to consider: .WAV files have MANY different formats. The code above only works for WAVE_FORMAT_FLOAT. If you're dealing with PCM files, then your samples are going to be 8, 16, 24 or 32 bit integers (8 bit PCM uses unsigned integers from 0..255; 24 bit PCM can be packed or unpacked - packed == 3 byte values packed next to each other, unpacked == 3 byte values in a 4 byte package).
And then there's the issue of alternate encodings - For instance in Win7, all the windows sounds are actually MP3 files in a WAV container.
It's unfortunately not as simple as it sounds :(.
Oops, I misunderstood the question... You can see my Python implementations of converting from dB to a float (which you can use as a multiplier on the amplitude, like you show above) and vice versa:
https://github.com/jiaaro/pydub/blob/master/pydub/utils.py
In a nutshell, for an amplitude multiplier it's:
10 ^ (db_change / 20)
so to reduce the volume by 6 dB you would multiply the amplitude of each sample by:
10 ^ (-6 / 20) == 10 ^ (-0.3) == 0.501
(the divide-by-10 form applies to power ratios; for sample amplitudes, as here, divide by 20.)
