Read/write to text file, weird problem driving me nuts

OK, here is the problem. If I write and read something to a text file like this, it works, no problem:
fstream ff,ff2;
ff.open("simtestagain.txt",ios::out);
CString mycstring = _T("Name with spaces");
char mycharbuffer[255]; //destination buffer
size_t convertedChars = 0; //number of characters converted
wcstombs_s( &convertedChars, mycharbuffer, mycstring.GetLength()+1, mycstring.GetBuffer(), _TRUNCATE);
ff << mycharbuffer;
ff.close();
ff2.open("simtestagain.txt",ios::in);
ff2.getline(mycharbuffer,255);
mycstring = mycharbuffer;
ff2.close();
AfxMessageBox(mycstring);
Now I need to also write numbers in this file, so I do:
fstream ff,ff2;
int a,b;
ff.open("simtestagain.txt",ios::out);
CString mycstring = _T("Name with spaces");
char mycharbuffer[255]; //destination buffer
size_t convertedChars = 0; //number of characters converted
wcstombs_s( &convertedChars, mycharbuffer, mycstring.GetLength()+1, mycstring.GetBuffer(), _TRUNCATE);
ff << 1 << endl;
ff << mycharbuffer << endl;
ff << 2 << endl;
ff.close();
ff2.open("simtestagain.txt",ios::in);
//EDIT: copy/paste error, not in code //ff2 >> mycharbuffer;
ff2 >> a;
ff2.getline(mycharbuffer,255);
mycstring = mycharbuffer;
ff >> b;
ff2.close();
AfxMessageBox(mycstring);
Now the CString does not work and I can't figure out why... :(

Get Rid Of ff2 >> mycharbuffer
You use ff2 >> mycharbuffer before you retrieve the first number. That moves the stream's read position past the first line, so when you try to extract the number it sees a long string instead of a number, and the extraction fails.
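Given the OP's edit that ff2 >> mycharbuffer was only a copy/paste error, the remaining symptom has a second likely cause: formatted extraction stops before the newline, so after ff2 >> a the '\n' is still in the stream and the following getline returns an empty line. A minimal sketch of the read-back, assuming the three-line layout written above (note also that b should be read from ff2, not ff):

ff2.open("simtestagain.txt", ios::in);
ff2 >> a;
ff2.ignore(255, '\n');          // skip the newline left behind by operator>>
ff2.getline(mycharbuffer, 255); // now reads the whole string line
mycstring = mycharbuffer;
ff2 >> b;                       // ff2, not ff
ff2.close();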

Related

60 point CMPT Sci

Write a program that first generates 100 random numbers in the range [11, 25] and stores them in a file named “random.txt”, writing just one number in each row. Then open and read the data from the file “random.txt”. Calculate the average of all 100 numbers and count how many times the number 16 occurs, using only one loop in this step. That is, update the running total and the occurrence count of 16 during each repetition of the loop. The sample file and output are given below.
Hint: To generate a random integer in a range [min, max], use the following statement:
rand()%(max-min+1)+min;
>>Write a program that first generates 100 random numbers in the range [11, 25] and stores them in a file named “random.txt”.
C++ provides the following classes to perform output and input of characters to/from files:
ofstream: Stream class to write to files
ifstream: Stream class to read from files
fstream: Stream class to both read and write from/to files.
I suggest you could try to use ofstream to write the random numbers to “random.txt” file.
Here is the code:
cout << "Now writing data to the file." << endl;
ofstream OutFile("random.txt");
for (int i = 0; i < 100; i++)
    OutFile << rand() % (25 - 11 + 1) + 11 << '\n'; // one random number in [11, 25] per row
OutFile.close();
cout << "Done with the writing." << endl;
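One caveat: rand() produces the same sequence on every run unless it is seeded. If the file should differ between runs, seed the generator once at program start; a small sketch, assuming <cstdlib> and <ctime> are included:

#include <cstdlib> // rand, srand
#include <ctime>   // time

srand((unsigned)time(NULL)); // seed once, before the first call to rand()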
>>Then open and read the data from the file “random.txt”. Calculate the average of all 100 numbers
We could use fopen_s to open the file, and then fscanf_s to read formatted data from the stream.
Here is the code:
cout << "Now reading data from the file." << endl;
int n, r;
double d, sum;
FILE *f;
fopen_s(&f, "random.txt", "r");
n = 0;
sum = 0.0;
while (1) {
    r = fscanf_s(f, "%lf", &d);
    if (1 == r)
    {
        sum += d;   // a number was read: add it to the running total
        n++;
    }
    else if (0 == r) {
        fscanf_s(f, "%*c"); // no match: skip one character and retry
    }
    else break;             // EOF or error: stop reading
}
fclose(f);
cout << "average value=" << sum / n << endl;
>> count how many times the number 16 occurs
We could read the numbers one by one; if a number equals 16, add 1 to sum1.
Here is my code:
int sum1 = 0;
ifstream infile("random.txt");
int number;
while (infile >> number) // extraction fails at end of file
{
    if (number == 16)
    {
        sum1++;
    }
}
cout << "The number 16 appears " << sum1 << " times in the file" << endl;
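Since the assignment asks for the running total and the count of 16 to be updated in one single loop, the two reading passes above can also be merged; a sketch in the same stream style, assuming the file was written as above:

ifstream infile("random.txt");
int number, n = 0, count16 = 0;
double sum = 0.0;
while (infile >> number) // one loop: running total and occurrence count together
{
    sum += number;
    n++;
    if (number == 16)
        count16++;
}
cout << "average value=" << sum / n << endl;
cout << "The number 16 appears " << count16 << " times in the file" << endl;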

C++/CLI: How to convert a String containing bytes as characters to double

I have a problem converting a String^ containing 8 bytes as characters (as ASCII) to a double.
I want to take those 8 characters and reinterpret their bits as a double.
What would you recommend for this conversion in C++/CLI?
I tried Marshal::Copy, Double::TryParse, etc.
Maybe I used the wrong parameters, but I have really lost my last hopes.
There must be an easy way to do this conversion.
Thanks.
Well, the bad news is that the System.String class uses only Unicode encoding internally.
So if you give it bytes, it will map them to its internal encoding, hiding the original values.
The good news is that you can use the System.Text.Encoding class to retrieve the 8-bit values corresponding to the Unicode characters.
Here is a sample:
#include <iostream>

using namespace System;
using namespace System::Text;

int main()
{
    int n = 123456;
    double d = 123.456;
    std::cout << n << std::endl;
    std::cout << d << std::endl;

    // View the raw bytes of each value.
    char* n_as_bytes = (char*)&n;
    char* d_as_bytes = (char*)&d;

    // Build strings whose characters carry those byte values.
    String^ n_as_string = gcnew String(n_as_bytes, 0, sizeof(n));
    String^ d_as_string = gcnew String(d_as_bytes, 0, sizeof(d));

    // iso-8859-1 maps code points 0-255 straight to bytes, so the values survive.
    Encoding^ ascii = Encoding::GetEncoding("iso-8859-1");
    array<Byte>^ n_as_array = ascii->GetBytes(n_as_string);
    array<Byte>^ d_as_array = ascii->GetBytes(d_as_string);

    // Pin the managed arrays and reinterpret the bytes as the original types.
    cli::pin_ptr<unsigned char> pin_ptr_n = &n_as_array[0];
    cli::pin_ptr<unsigned char> pin_ptr_d = &d_as_array[0];
    unsigned char* ptr_n = pin_ptr_n;
    unsigned char* ptr_d = pin_ptr_d;
    int n_out = *(int*)ptr_n;
    double d_out = *(double*)ptr_d;

    std::cout << n_out << std::endl;
    std::cout << d_out << std::endl;
    return 0;
}
This should give you:
123456
123.456
123456
123.456
Not sure it is completely safe, but trying it in your context should be a good start to ensure it is viable. :)
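For a shorter route once the characters have been mapped back to bytes, BitConverter can do the reinterpretation without pinning; a hedged sketch under the same iso-8859-1 assumption (BitConverter uses the platform's byte order, just like the pointer casts above):

array<Byte>^ bytes = Encoding::GetEncoding("iso-8859-1")->GetBytes(d_as_string);
double d_out = BitConverter::ToDouble(bytes, 0); // reinterpret the 8 bytes as a double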

Zlib uncompress returns Z_DATA_ERROR

I am working on a client-server application where the client compresses 2MB of data and sends it to the server, and the server receives the data, uncompresses it, and writes it to a file.
For some packets the uncompression was failing, so I added an MD5 sum to both the client-side and the server-side code, and I also debugged by uncompressing on the client side right after compressing the data. The very same parameters that succeed in the client-side uncompress call fail with Z_DATA_ERROR on the server side, yet the data's MD5 sums appear to be the same. I am totally clueless about what to try next.
Server-side code looks like this:
int ret = uncompress((Bytef*)unCompressedBuffer, &dwUncompressedBytes,
                     (const Bytef*)receivedBuffer + 525, dwBlockLength);
if (ret == Z_OK)
{
}
else
{
    std::cout << " Uncompression failed for Block: " << iBlock << std::endl;
    std::cout << " PacketType: 4" << " Block Number:" << iBlock << " Length:" << dwBlockLength << "Error:" << ret << std::endl;
    PrintMD5SumResult((PBYTE)receivedBuffer + 525, compressedSize - 525);
    std::cout << " Uncompressed MD5 Checksum:0";
    PrintMD5SumResult((PBYTE)unCompressedBuffer, dwUncompressedBytes);
}
}
Client code looks like this:
int ret = compress2(l_pCompressData + 4, &destLen,
                    (const Bytef*)pBlockData, dwBlockSize, 6);
memcpy(m_pWriteBuffer + 525, l_pCompressData, destLen);
m_dwWriteBytes = destLen + 525;
std::cout << " \n Compressed MD5 Sum:0";
PrintMD5SumResult(m_pWriteBuffer, m_dwWriteBytes);
PrintMD5SumResult(m_pWriteBuffer + 525, m_dwWriteBytes - 525);
ret = uncompress(m_pUnCompressData, &uncomLen, (const Bytef*)m_pWriteBuffer + 525, destLen);
if (ret != Z_OK)
{
    std::cout << " Uncompression has failed." << std::endl;
}
else
{
    //std::cout << " UnCompressed MD5 Sum:0";
    //PrintMD5SumResult((PBYTE)m_pUnCompressData, md5Output, dwBlockSize);
}
// Write the 2MB to the network
WriteDataOverNetwork(m_NetworkStream, m_pWriteBuffer, m_dwWriteBytes, &dwNumBytes, TRUE);
I narrowed the problem down to the following piece of code in zlib, but I have a hard time understanding it. In the inflate() function, the comparison (ZSWAP32(hold)) != state->check is the one that fails. Can someone help me out here? The MD5 sum used here is from the Botan C++ library.
case CHECK:
    if (state->wrap) {
        NEEDBITS(32);
        out -= left;
        strm->total_out += out;
        state->total += out;
        if (out)
            strm->adler = state->check =
                UPDATE(state->check, put - out, out);
        out = left;
        if ((
#ifdef GUNZIP
             state->flags ? hold :
#endif
             ZSWAP32(hold)) != state->check) {
            strm->msg = (char *)"incorrect data check";
            state->mode = BAD;
            break;
        }
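For what it's worth, that branch is inflate verifying the Adler-32 checksum stored in the zlib trailer against a checksum of the bytes it just decompressed (the state->flags path is the CRC-32 variant for gzip streams), so "incorrect data check" means the decompressed output differs from what the compressor saw. One classic way to trigger it with otherwise intact data is the in/out length parameter: uncompress reads *destLen as the destination capacity and overwrites it with the actual output size, so it has to be re-initialized before every call. A sketch, where UNCOMPRESSED_BUFFER_SIZE is a placeholder for the real capacity of unCompressedBuffer:

uLongf dwUncompressedBytes = UNCOMPRESSED_BUFFER_SIZE; // capacity in, actual size out
int ret = uncompress((Bytef*)unCompressedBuffer, &dwUncompressedBytes,
                     (const Bytef*)receivedBuffer + 525, dwBlockLength);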
I also met this issue recently when I used zlib to do in-memory compression/decompression. The code is as follows:
size_t size = 1048576;
void *data;
void *comp_data;
uLong comp_data_len;
void *uncomp_data;
uLong uncomp_data_len;
void *temp;
int ret;

data = calloc(1, size); // data is filled with all zeros
comp_data_len = size * 1.01 + 12;
comp_data = calloc(1, size);
ret = compress(comp_data, &comp_data_len, data, size); // here ret is Z_OK
uncomp_data_len = size;
uncomp_data = calloc(1, uncomp_data_len);
ret = uncompress(uncomp_data, &uncomp_data_len, comp_data, comp_data_len); // here ret is Z_OK
temp = calloc(1, 496);
for (i = 0; i < 100; i++)
{
    // here fill some random data into temp
    memcpy((char*)data + i * 100, temp, 496);
    ret = compress(comp_data, &comp_data_len, data, size); // here ret is Z_OK
    ret = uncompress(uncomp_data, &uncomp_data_len, comp_data, comp_data_len); // here ret is sometimes Z_OK, sometimes Z_DATA_ERROR!!!
}
I also traced the code and found that it fails at the same statement in inflate(): (ZSWAP32(hold)) != state->check. So I can hardly believe that the uncompress function depends on the data pattern. Am I wrong?
I also noticed that compress calls deflate to do the compression, and deflate processes data in 64K chunks. Do I need to split the input into 64K blocks and compress each block one by one so that uncompress can work well?
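A note on the loop above: compress and uncompress overwrite their length arguments with the sizes actually produced, so from the second iteration onward comp_data_len and uncomp_data_len no longer describe the buffer capacities, which can surface exactly as an intermittent Z_DATA_ERROR. Resetting both before each call should make the loop stable; a sketch, which also assumes comp_data was allocated with at least compressBound(size) bytes:

for (i = 0; i < 100; i++)
{
    memcpy((char*)data + i * 100, temp, 496);
    comp_data_len = compressBound(size); // reset to capacity, not last output size
    ret = compress(comp_data, &comp_data_len, data, size);
    uncomp_data_len = size;              // reset to capacity of uncomp_data
    ret = uncompress(uncomp_data, &uncomp_data_len, comp_data, comp_data_len);
}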
I don't know whether this is the right answer; maybe it helps (my English is poor, I hope you can understand). Perhaps the conversion of the parameters to other types has bugs, and information is lost in the conversion. I met the same problem, and after I switched to the types used in the zlib source (Bytef, uLongf, uLong, etc.) the problem was solved. The page below is in Chinese; you can use Google to translate it.
http://www.360doc.com/content/13/0927/18/11217914_317498849.shtml
This is my test. The array arry[] can be made larger; at the same time sour[]/dest[]/destLen/Len must be changed to match. Using the source code's types solved the problem. I hope this is helpful.
My code is as follows:
#include <stdio.h>
#include "zlib.h"

int main(){
    // the buffer can be larger
    Bytef arry[] = "中文测试 yesaaaaa bbbbb ccccc ddddd 中文测试 yesaaaaa bbbbb ccccc ddddd 中文测试yesaaaaa bbbbb ccccc ddddd 中文测试 yes 我是一名军人!";
    // buffer length
    int size = sizeof(arry);
    // holds the uncompressed data
    Bytef sour[2500];
    // holds the compressed data
    Bytef dest[2500];
    // the compressed data may be larger than the source data
    unsigned long destLen = 2500;
    // when decompressing, the original size is unknown, so make the
    // length as large as possible to avoid errors
    unsigned long Len = 2500;
    int ret = -1;
    ret = compress(dest, &destLen, arry, size);
    //dest[destLen] = '\0';
    printf("ret = %d\ndest = %s\n", ret, dest);
    ret = uncompress(sour, &Len, dest, destLen);
    //sour[size-1] = '\0';
    printf("ret = %d\nsour = %s\n", ret, sour);
    return 0;
}

Audio data (unsigned char) that have been manipulated cannot be played

I have trouble playing audio data after it has been manipulated.
The only API I use is the alsa lib API on Linux (Ubuntu) in C.
I get the data from a 16-bit integer WAV file into an unsigned char array (called buffer1) using read(), and buffer1 can be played properly. I want the data to be passed to another unsigned char array (called buffer2) of the same size. If I just make a loop with buffer2[i] = buffer1[i], it works: buffer2 plays properly.

But in order to manipulate the data, I convert it to a float array and then back to unsigned char. (For now I do not actually manipulate the audio data; I just convert it to float and back to test how it works.) Now buffer2 makes no sound, although all of its values appear strictly identical to the values of buffer1 (I printed many values of buffer1 and buffer2; they were all identical). All I did was cast from unsigned char to float and vice versa.

Please, any idea of what's wrong?
Victor
The values in buffer1 and buffer2 cannot be identical, or it would work. Perhaps the format specifier you use in your printf call is masking the differences (%i, %f, etc.). Rather than printf, try setting a breakpoint and inspecting the values in your debugger; this might reveal what is actually going wrong.
EDIT:
Given your comments about how you perform the cast, I think that I can now help. The raw data coming in is of type unsigned char. On most platforms this is an integer value between 0 and 255. You want to convert this value to a float for your manipulation. To make the data meaningful as a floating-point type, you want to scale this range to +/- 1.0; that is what the scale variable is for in the following code.
#include <iostream>
#include <math.h>

int main()
{
    const int BUFFER_LEN = 6;
    const unsigned char channelDataIN[] = {0, 255, 1, 254, 2, 253};
    unsigned char channelDataOUT[BUFFER_LEN];
    float channelDataF[BUFFER_LEN];

    std::cout.precision(5);
    float scale = powf(2.f, 8.f*sizeof(unsigned char)) - 1.f; // 255 for one byte

    for (int mm = 0; mm < BUFFER_LEN; ++mm)
    {
        std::cout << "Original = " << (int)channelDataIN[mm] << std::endl;
        channelDataF[mm] = (float)(channelDataIN[mm]) * 2.f/scale - 1.f; // float cast
        std::cout << "Float conversion = " << channelDataF[mm] << std::endl;
        channelDataOUT[mm] = (unsigned char) ceil( (1.f + channelDataF[mm]) * scale/2.f );
        std::cout << "Recovered = " << (int)channelDataOUT[mm] << std::endl;
        if (channelDataIN[mm] == channelDataOUT[mm])
            std::cout << "The output precisely equals the input" << std::endl << std::endl;
        else
            std::cout << "The output != input" << std::endl << std::endl;
    }
    return 0;
}
The output array of unsigned chars, after converting the values back, is identical to the input array. This is the output from the code:
Original = 0
Float conversion = -1
Recovered = 0
The output precisely equals the input
Original = 255
Float conversion = 1
Recovered = 255
The output precisely equals the input
Original = 1
Float conversion = -0.99216
Recovered = 1
The output precisely equals the input
Original = 254
Float conversion = 0.99216
Recovered = 254
The output precisely equals the input
Original = 2
Float conversion = -0.98431
Recovered = 2
The output precisely equals the input
Original = 253
Float conversion = 0.98431
Recovered = 253
The output precisely equals the input
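One more pitfall worth hedging against, since the file in the question is 16-bit: scaling byte by byte splits each sample in two, so any real manipulation should work per sample, not per byte. A sketch of a per-sample round trip, assuming little-endian signed 16-bit PCM in the byte buffer:

#include <stdint.h>

// Reinterpret each little-endian byte pair as one int16_t sample,
// scale it to [-1.0, 1.0), and convert it back. Assumes 'bytes' is even.
void roundTrip16(unsigned char *buf, int bytes)
{
    for (int i = 0; i + 1 < bytes; i += 2)
    {
        int16_t s = (int16_t)(buf[i] | (buf[i + 1] << 8)); // assemble the sample
        float f = s / 32768.0f;                 // ...manipulate f here...
        int16_t back = (int16_t)(f * 32768.0f); // back to a 16-bit sample
        buf[i]     = (unsigned char)(back & 0xFF);
        buf[i + 1] = (unsigned char)((back >> 8) & 0xFF);
    }
}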

Why doesn't unsigned char* work with ifstream::read?

I am a beginner with C++. I have a new project at work where I have to learn it, so I'm trying some things just to test my understanding. For this problem, I'm trying to read a file and then print it on screen. Super simple; just trying to get good at it and understand the functions I'm using.

I copied some text from an MS Word document into a Notepad (*.txt) file, and I'm trying to read this *.txt file. All of the text in the Word document is bolded, but other than that there are no 'unusual' characters. Everything prints to the screen as it appears in the document except the bolded " - " symbol. This character is printed as the "u with a hat" character ("so-called extended ASCII" code 150). When I try to print the integer value of this character in my array (which should be 150), I get -106. I realize this signed integer has the same bits as the unsigned integer 150. My question is how to get the output to say 150. Here's my code:
#include <iostream>
#include <fstream>
using namespace std;

int main() {
    unsigned char* input1;
    int input1size = 57;

    ifstream file("hello_world2.txt", ios::binary | ios::ate);
    if (file.is_open()){
        int size;
        size = (int) file.tellg();
        cout << "This file is " << size << " bytes." << endl;
        file.seekg(0, ios::beg);
        input1 = new unsigned char[input1size];
        file.read(input1, input1size);
        cout << "The first " << input1size << " characters of this file are:" << endl << endl;
        for (int i = 0; i < input1size; i++) {
            cout << input1[i];
        }
        cout << endl;
    }
    else {
        cout << "Unable to open file" << endl;
        int paus;
        cin >> paus;
        return 0;
    }
    file.close();

    int charcheck = 25;
    int a = 0;
    int a1 = 0;
    int a2 = 0;
    unsigned int a3 = 0;
    unsigned short int a4 = 0;
    short int a5 = 0;
    a = input1[charcheck];
    a1 = input1[charcheck-1];
    a2 = input1[charcheck+1];
    a3 = input1[charcheck];
    a4 = input1[charcheck];
    a5 = input1[charcheck];
    cout << endl << "ASCII code for char in input1[" << charcheck-1 << "] is: " << a1 << endl;
    cout << endl << "ASCII code for char in input1[" << charcheck << "] is: " << a << endl;
    cout << endl << "ASCII code for char in input1[" << charcheck+1 << "] is: " << a2 << endl;
    cout << endl << "ASCII code for char in input1[" << charcheck << "] as unsigned int: " << a3 << endl;
    cout << endl << "ASCII code for char in input1[" << charcheck << "] as unsigned short int: " << a4 << endl;
    cout << endl << "ASCII code for char in input1[" << charcheck << "] as short int: " << a5 << endl;
    int paus;
    cin >> paus;
    return 0;
}
Output for all this looks like:
This file is 80 bytes.
The first 57 characters of this file are:
STATUS REPORT
PERIOD 01 u 31 JUL 09
TASK 310: APPLIC
ASCII code for char in input1[24] is: 32
ASCII code for char in input1[25] is: -106
ASCII code for char in input1[26] is: 32
ASCII code for char in input1[25] as unsigned int: 4294967190
ASCII code for char in input1[25] as unsigned short int: 65430
ASCII code for char in input1[25] as short int: -106
So it appears int a is always read as signed. When I try to make a unsigned, it turns all the bits to the left of the char's eight bits into 1's. Why is this? Sorry for the length of the question; just trying to be detailed. Thanks!
What you're dealing with is the sign extension that takes place when the char is promoted to int as you assign it to one of your a? variables.
All the higher-order bits must be set to 1 to keep the same negative value as was in the smaller storage of the char.
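If the goal is simply to see 150, force the byte through unsigned char before it widens to int, so no sign extension happens; a small sketch (assuming the element was read into a plain char, as the -106 suggests):

char c = '\x96';               // byte 150, stored as -106 where char is signed
int a = (int)(unsigned char)c; // 150: widen from the unsigned byte value
int b = c & 0xFF;              // 150: masking achieves the same thing
cout << a << " " << b << endl;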
