Audio data (unsigned char) that has been manipulated cannot be played

I'm having trouble playing audio data after it has been manipulated. The only API I use is the ALSA library API on Linux (Ubuntu), in C.

I read the data from a 16-bit integer WAV file into an unsigned char array (called buffer1) using read(), and buffer1 plays properly. I want to pass the data to another unsigned char array of the same size (called buffer2). If I just copy it in a loop with buffer2[i] = buffer1[i], it works: buffer2 plays properly. But in order to manipulate the data, I convert it to a float array and then back to unsigned char. (So far I do not actually manipulate the audio data; I just convert to float and back to test how it works.) Now buffer2 makes no sound, even though all of its values are strictly identical to the values of buffer1 (I printf'd many values of buffer1 and buffer2; they are all identical). All I did was cast from unsigned char to float and back...
Please, any idea of what's wrong?
Victor

The values in buffer1 and buffer2 cannot be identical, or it would work. Perhaps the format specifier you use in your printf call is masking the differences (%i, %f, etc.). Rather than using printf, try setting a breakpoint and inspecting the values in your debugger. This might reveal what is actually going wrong.
EDIT:
Given your comments about how you perform the cast, I think I can now help. The raw data coming in is of type unsigned char. On most platforms this is an integer value between 0 and 255. You want to convert this value to a float to do your manipulation. To make the data meaningful as a floating-point type for any manipulation, you want to scale this range to +/- 1.0. This is what the "scale" variable is for in the following code.
#include <iostream>
#include <math.h>

int main()
{
    const int BUFFER_LEN = 6;
    const unsigned char channelDataIN[] = {0, 255, 1, 254, 2, 253};
    unsigned char channelDataOUT[BUFFER_LEN];
    float channelDataF[BUFFER_LEN];

    std::cout.precision(5);

    // Largest value an unsigned char can hold (255 on 8-bit-char platforms).
    float scale = powf(2.f, 8.f * sizeof(unsigned char)) - 1.f;

    for (int mm = 0; mm < BUFFER_LEN; ++mm)
    {
        std::cout << "Original = " << (int)channelDataIN[mm] << std::endl;

        // Map [0, 255] onto [-1, +1] for floating-point processing.
        channelDataF[mm] = (float)(channelDataIN[mm]) * 2.f / scale - 1.f;
        std::cout << "Float conversion = " << channelDataF[mm] << std::endl;

        // Map [-1, +1] back onto [0, 255].
        channelDataOUT[mm] = (unsigned char)ceil((1.f + channelDataF[mm]) * scale / 2.f);
        std::cout << "Recovered = " << (int)channelDataOUT[mm] << std::endl;

        if (channelDataIN[mm] == channelDataOUT[mm])
            std::cout << "The output precisely equals the input" << std::endl << std::endl;
        else
            std::cout << "The output != input" << std::endl << std::endl;
    }
    return 0;
}
The output array of unsigned chars, after converting the values back, is identical to the input array. This is the output from the code:
Original = 0
Float conversion = -1
Recovered = 0
The output precisely equals the input
Original = 255
Float conversion = 1
Recovered = 255
The output precisely equals the input
Original = 1
Float conversion = -0.99216
Recovered = 1
The output precisely equals the input
Original = 254
Float conversion = 0.99216
Recovered = 254
The output precisely equals the input
Original = 2
Float conversion = -0.98431
Recovered = 2
The output precisely equals the input
Original = 253
Float conversion = 0.98431
Recovered = 253
The output precisely equals the input
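
One caveat, since the question mentions a 16-bit WAV file: in that format each sample spans two bytes, so if you process the samples themselves (rather than just round-tripping individual bytes), each pair of bytes must first be reassembled into a signed 16-bit value. Here is a minimal sketch of that round trip, assuming standard little-endian signed PCM; the function name roundTrip16 is mine, just for illustration:

#include <cstdint>
#include <cstddef>

// Reassemble little-endian 16-bit PCM samples from raw bytes, convert to
// float in [-1, 1), and split back into bytes (an identity round trip).
void roundTrip16(const unsigned char* buffer1, unsigned char* buffer2,
                 std::size_t numBytes)
{
    for (std::size_t i = 0; i + 1 < numBytes; i += 2)
    {
        // Two bytes -> one signed 16-bit sample (little-endian).
        int16_t sample = (int16_t)(buffer1[i] | (buffer1[i + 1] << 8));

        // Scale to [-1, 1) for floating-point manipulation.
        float f = sample / 32768.f;

        // ...manipulate f here (clamp to [-1, 1) if it may overshoot)...

        // Scale back and split into bytes again.
        int16_t out = (int16_t)(f * 32768.f);
        buffer2[i]     = (unsigned char)(out & 0xFF);
        buffer2[i + 1] = (unsigned char)((out >> 8) & 0xFF);
    }
}

With no manipulation, the division and multiplication by 32768 are exact in float, so buffer2 comes out bit-identical to buffer1.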

C++/CLI: How to convert a String containing bytes as characters to double

I have a problem converting a String^ containing 8 bytes as characters (as ASCII) to a double.
I want to take those 8 characters and convert them binary-wise to a double.
What would you recommend for doing this conversion in C++/CLI?
I have been trying Marshal::Copy, Double::TryParse, etc.
Maybe I'm using the wrong parameters, but I have really lost hope.
There must be an easy way to do this conversion.
Thanks.
Well, the bad news is that the System.String class uses only Unicode encoding internally.
So if you give it bytes, it will map them to its internal encoding, hiding the original values.
The good news is that you can use the System.Text.Encoding class to retrieve the 8-bit values corresponding to the Unicode characters.
Here is a sample:
#include <iostream>

using namespace System;
using namespace System::Text;

int main()
{
    int n = 123456;
    double d = 123.456;
    std::cout << n << std::endl;
    std::cout << d << std::endl;

    // View the raw bytes of each value as chars.
    char* n_as_bytes = (char*)&n;
    char* d_as_bytes = (char*)&d;

    // Build managed strings from those bytes; String maps them
    // to its internal Unicode representation.
    String^ n_as_string = gcnew String(n_as_bytes, 0, sizeof(n));
    String^ d_as_string = gcnew String(d_as_bytes, 0, sizeof(d));

    // An 8-bit encoding recovers one byte per character.
    Encoding^ ascii = Encoding::GetEncoding("iso-8859-1");
    array<Byte>^ n_as_array = ascii->GetBytes(n_as_string);
    array<Byte>^ d_as_array = ascii->GetBytes(d_as_string);

    // Pin the managed arrays so the native pointers stay valid.
    cli::pin_ptr<unsigned char> pin_ptr_n = &n_as_array[0];
    cli::pin_ptr<unsigned char> pin_ptr_d = &d_as_array[0];
    unsigned char* ptr_n = pin_ptr_n;
    unsigned char* ptr_d = pin_ptr_d;

    // Reinterpret the recovered bytes as the original types.
    int n_out = *(int*)ptr_n;
    double d_out = *(double*)ptr_d;
    std::cout << n_out << std::endl;
    std::cout << d_out << std::endl;
    return 0;
}
This should give you:
123456
123.456
123456
123.456
Not sure it is completely safe, but trying it in your context should be a good start to ensure it is viable. :)
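
As a possibly simpler alternative (a sketch under the same assumptions, not tested in your exact context): once the bytes are back in an array<Byte>^, BitConverter can reassemble the double directly, avoiding the pinning and pointer casts:

using namespace System;
using namespace System::Text;

int main()
{
    double d = 123.456;

    // Round-trip the raw bytes through a String^, as above.
    char* d_as_bytes = (char*)&d;
    String^ d_as_string = gcnew String(d_as_bytes, 0, sizeof(d));
    array<Byte>^ d_as_array =
        Encoding::GetEncoding("iso-8859-1")->GetBytes(d_as_string);

    // BitConverter reassembles a double from 8 bytes at a given offset.
    double d_out = BitConverter::ToDouble(d_as_array, 0);
    Console::WriteLine(d_out); // 123.456
    return 0;
}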

Read/Write to textfile, weird problem driving me nuts

OK, here is the problem: if I write and read something to a text file like this, it works, no problem:
fstream ff, ff2;
ff.open("simtestagain.txt", ios::out);
CString mycstring = _T("Name with spaces");
char mycharbuffer[255];     // destination buffer
size_t convertedChars = 0;  // number of characters converted
wcstombs_s(&convertedChars, mycharbuffer, mycstring.GetLength() + 1,
           mycstring.GetBuffer(), _TRUNCATE);
ff << mycharbuffer;
ff.close();

ff2.open("simtestagain.txt", ios::in);
ff2.getline(mycharbuffer, 255);
mycstring = mycharbuffer;
ff2.close();
AfxMessageBox(mycstring);
Now I need to also write numbers in this file, so I do:
fstream ff, ff2;
int a, b;
ff.open("simtestagain.txt", ios::out);
CString mycstring = _T("Name with spaces");
char mycharbuffer[255];     // destination buffer
size_t convertedChars = 0;  // number of characters converted
wcstombs_s(&convertedChars, mycharbuffer, mycstring.GetLength() + 1,
           mycstring.GetBuffer(), _TRUNCATE);
ff << 1 << endl;
ff << mycharbuffer << endl;
ff << 2 << endl;
ff.close();

ff2.open("simtestagain.txt", ios::in);
//EDIT: copy/paste error, not in code //ff2 >> mycharbuffer;
ff2 >> a;
ff2.getline(mycharbuffer, 255);
mycstring = mycharbuffer;
ff >> b;
ff2.close();
AfxMessageBox(mycstring);
Now the CString does not work, and I can't figure out why... :(
Get rid of ff2 >> mycharbuffer.
If you use ff2 >> mycharbuffer before you retrieve the first number, you move the position pointer past that line; when you then try to read the number, the stream sees a long string instead of an integer and fails. Given your edit that this line is not actually in the code, the remaining problem is that ff2 >> a stops before the newline that follows the number, so the subsequent getline reads an empty line into mycharbuffer. (Also, ff >> b reads from the already-closed ff; that should presumably be ff2 >> b.)
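A minimal sketch of the reading side with those two things fixed (standard iostreams, assuming the same file layout your writing code produces):

#include <fstream>
#include <iostream>
#include <limits>

using namespace std;

int main()
{
    int a = 0, b = 0;
    char mycharbuffer[255];

    fstream ff2;
    ff2.open("simtestagain.txt", ios::in);
    ff2 >> a;
    // operator>> stops before the trailing newline; discard it so
    // getline reads the name line instead of an empty string.
    ff2.ignore(numeric_limits<streamsize>::max(), '\n');
    ff2.getline(mycharbuffer, 255);
    ff2 >> b;  // note: ff2, not the already-closed ff
    ff2.close();

    cout << a << " / " << mycharbuffer << " / " << b << endl;
    return 0;
}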

How to do a bitwise AND (&) on CString values in MFC (VC++)?

Hi,
How do I do a bitwise AND (&) on CString values in MFC (VC++)?
Example:
CString NASServerIP = "172.24.15.25";
CString SystemIP = " 142.25.24.85";
CString strSubnetMask = "255.255.255.0";

int result1 = NASServerIP & strSubnetMask;
int result2 = SystemIP & strSubnetMask;

if (result1 == result2)
{
    cout << "Both in same network";
}
else
{
    cout << "Not in same network";
}
How can I do a bitwise AND on CString values?
It gives the error: "'CString' does not define this operator or a conversion to a type acceptable to the predefined operator".
You don't. Performing a bitwise AND on two strings doesn't make a lot of sense. You need to obtain binary representations of the IP address strings; then you can perform whatever bitwise operations you like on them. This is easily done by first obtaining a const char* from each CString and passing it to the inet_addr() function.
A (simple) example based on your code snippet:
CString NASServerIP = "172.24.15.25";
CString SystemIP = " 142.25.24.85";
CString strSubnetMask = "255.255.255.0";

// CStrings can be cast to LPCSTR (assuming the CStrings are not Unicode).
unsigned long NASServerIPBin = inet_addr((LPCSTR)NASServerIP);
unsigned long SystemIPBin = inet_addr((LPCSTR)SystemIP);
unsigned long strSubnetMaskBin = inet_addr((LPCSTR)strSubnetMask);

// Now, do whatever is needed on the unsigned longs.
int result1 = NASServerIPBin & strSubnetMaskBin;
int result2 = SystemIPBin & strSubnetMaskBin;

if (result1 == result2)
{
    cout << "Both in same network";
}
else
{
    cout << "Not in same network";
}
The bytes in the unsigned longs are "reversed" relative to the string representation, because inet_addr returns the address in network byte order (big-endian) while x86 is little-endian. For example, if your IP address string is 192.168.1.1, the resulting binary value from inet_addr would read as 0x0101a8c0, where:
0x01 = 1
0x01 = 1
0xa8 = 168
0xc0 = 192
This shouldn't affect your bitwise operations, however.
You will of course need to include the WinSock header (#include <windows.h> is usually sufficient, since it includes winsock.h) and link against the WinSock library (wsock32.lib if you're including winsock.h).
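One refinement worth adding (a sketch, not part of the original answer): inet_addr() returns INADDR_NONE when a string does not parse as a dotted-decimal address, so it is worth checking before masking. The helper name SameSubnet below is hypothetical:

#include <afx.h>      // CString (MFC)
#include <winsock.h>  // inet_addr, INADDR_NONE

// Hypothetical helper; assumes a non-Unicode MFC build as above.
// Returns true only if both addresses parse and fall in the same subnet.
bool SameSubnet(const CString& ip1, const CString& ip2, const CString& mask)
{
    unsigned long a = inet_addr((LPCSTR)ip1);
    unsigned long b = inet_addr((LPCSTR)ip2);
    unsigned long m = inet_addr((LPCSTR)mask);

    // Reject strings that did not parse. (Caveat: INADDR_NONE is also
    // the value of the valid address 255.255.255.255.)
    if (a == INADDR_NONE || b == INADDR_NONE || m == INADDR_NONE)
        return false;

    return (a & m) == (b & m);
}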

Why doesn't unsigned char* work with ifstream::read?

I am a beginner with C++. I have a new project at work where I have to learn it, so I'm trying some things just to test my understanding. For this problem, I'm trying to read a file and then print it on screen. Super simple; I'm just trying to get good at it and understand the functions I'm using.

I copied some text from an MS Word document into a Notepad (*.txt) file, and I'm trying to read that *.txt file. All of the text in the Word document is bolded, but other than that there are no 'unusual' characters. Everything prints to the screen as it appears in the document except the bolded " - " symbol, which is printed as the "u with a hat" character ("so-called extended ASCII" code 150). When I try to print the integer value of this character in my array (which should be 150), I get -106. I realize this signed integer has the same bits as the unsigned integer 150. My question is how to get the output to say 150. Here's my code:
#include <iostream>
#include <fstream>

using namespace std;

int main() {
    unsigned char* input1;
    int input1size = 57;

    ifstream file("hello_world2.txt", ios::binary | ios::ate);
    if (file.is_open()) {
        int size;
        size = (int)file.tellg();
        cout << "This file is " << size << " bytes." << endl;
        file.seekg(0, ios::beg);

        input1 = new unsigned char[input1size];
        // read() takes a char*, hence the cast -- without it, this line
        // (the one the question title asks about) does not compile.
        file.read((char*)input1, input1size);

        cout << "The first " << input1size << " characters of this file are:" << endl << endl;
        for (int i = 0; i < input1size; i++) {
            cout << input1[i];
        }
        cout << endl;
    }
    else {
        cout << "Unable to open file" << endl;
        int paus;
        cin >> paus;
        return 0;
    }
    file.close();

    int charcheck = 25;
    int a = 0;
    int a1 = 0;
    int a2 = 0;
    unsigned int a3 = 0;
    unsigned short int a4 = 0;
    short int a5 = 0;

    a = input1[charcheck];
    a1 = input1[charcheck - 1];
    a2 = input1[charcheck + 1];
    a3 = input1[charcheck];
    a4 = input1[charcheck];
    a5 = input1[charcheck];

    cout << endl << "ASCII code for char in input1[" << charcheck - 1 << "] is: " << a1 << endl;
    cout << endl << "ASCII code for char in input1[" << charcheck << "] is: " << a << endl;
    cout << endl << "ASCII code for char in input1[" << charcheck + 1 << "] is: " << a2 << endl;
    cout << endl << "ASCII code for char in input1[" << charcheck << "] as unsigned int: " << a3 << endl;
    cout << endl << "ASCII code for char in input1[" << charcheck << "] as unsigned short int: " << a4 << endl;
    cout << endl << "ASCII code for char in input1[" << charcheck << "] as short int: " << a5 << endl;

    int paus;
    cin >> paus;
    return 0;
}
Output for all this looks like:
This file is 80 bytes.
The first 57 characters of this file are:
STATUS REPORT
PERIOD 01 u 31 JUL 09
TASK 310: APPLIC
ASCII code for char in input1[24] is: 32
ASCII code for char in input1[25] is: -106
ASCII code for char in input1[26] is: 32
ASCII code for char in input1[25] as unsigned int: 4294967190
ASCII code for char in input1[25] as unsigned short int: 65430
ASCII code for char in input1[25] as short int: -106
So it appears "int a" is always read as signed. When I try to make "a" unsigned, it turns all the bits to the left of the eight bits of the char into 1's. Why is this? Sorry for the length of the question; I'm just trying to be detailed. Thanks!
What you're dealing with is the sign extension that takes place when the char is promoted to int as you assign it to one of your a? variables.
All the higher-order bits must be set to 1 to keep the same negative value that was in the smaller storage of the char. Note that this only happens for a signed char: since read() requires a char* and plain char is signed on your platform, the byte evidently went through a signed char at some point. If you keep the value in an unsigned char, or mask the promoted value with & 0xFF, you get 150.
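A minimal standalone sketch of the difference (standard C++, independent of the file-reading code):

#include <iostream>

int main()
{
    char c = (char)0x96;     // byte 150; negative where plain char is signed
    unsigned char u = 0x96;  // same bit pattern, but unsigned

    int a = c;        // sign-extended to -106
    int b = u;        // zero-extended to 150
    int m = c & 0xFF; // mask off the extended bits: 150

    std::cout << a << " " << b << " " << m << std::endl; // -106 150 150
    return 0;
}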

Why am I getting an assertion error?

#include <iostream>

using namespace std;

int main()
{
    int size = 0;
    int* myArray = new int[size + 1];

    cout << "Enter the exponent of the first term: ";
    cin >> size;
    cout << endl;

    for (int i = size; i >= 0; --i)
    {
        cout << "Enter the coefficient of the term with exponent "
             << i << ": ";
        cin >> myArray[i];
    }

    for (int i = size; i >= 0; --i)
    {
        cout << i << endl;
    }
    return 0;
}
Why am I getting an assertion error on input greater than 2? This is the precursor to a polynomial program where the subscript of the array is the power of each term and the element at array[subscript] is the coefficient.
Your array is allocated as an int[1]. It needs to be allocated after you read in the size value.
You are initializing your array while size = 0, giving an array of size 1.
You get your assertion error when you go outside the array bounds (1).
myArray always has size 0 + 1 = 1. i starts out at whatever the user entered, and the first array access you make is myArray[i]. So if the user inputs 5, your array has size 1 and you access myArray[5]. It will fail!
I would allocate the array AFTER you input size.
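A minimal sketch with the order fixed; std::vector stands in for the raw new[] (my choice, not from the original answers, but it also handles the delete[] the original code omits):

#include <iostream>
#include <vector>

using namespace std;

int main()
{
    int size = 0;
    cout << "Enter the exponent of the first term: ";
    cin >> size;
    cout << endl;

    // Allocate only after size is known.
    vector<int> myArray(size + 1);

    for (int i = size; i >= 0; --i)
    {
        cout << "Enter the coefficient of the term with exponent "
             << i << ": ";
        cin >> myArray[i];
    }

    for (int i = size; i >= 0; --i)
        cout << i << endl;

    return 0;
}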
