Debug and release modes giving different outputs - visual-c++

I have a function in my program that outputs a data structure that consists of three doubles in two formats, one text and one binary.
When I run the program in debug and release modes, I end up with different binary outputs but identical text outputs. What is going on?
Here is the binary output code:
void outputPoints(xyz* points, string description, int length, param parameters)
{
    stringstream index;
    index.str("");
    index << setw( 3 ) << setfill( '0' ) << parameters.stage;
    string outputName = parameters.baseFileName + " " + index.str() + " " + description + ".bin"; // create file name
    ofstream output; // create output object
    cout << "Output " << outputName.c_str() << "...";
    output.open(outputName.c_str(), std::ios::binary | std::ios::out); // open or create file for output
    output.write(reinterpret_cast<char*>(points), (sizeof(xyz) * length));
    output.close(); // close output object
    cout << "done" << endl;
}

The debug build usually fills memory with known patterns: freshly allocated heap memory gets the byte pattern CDCD, and freed memory is overwritten with FEEE. The CDCD pattern is overwritten when you initialize your variables. The release build doesn't initialize memory with these patterns.
It's worth checking your program for uninitialized variables. You can define a Dump function that just prints the first few bytes of the suspected variables.
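A minimal sketch of such a Dump helper could look like the following (the xyz struct layout is an assumption here, since the question doesn't show it):

#include <cstdio>
#include <cstddef>

// Assumed layout of the xyz struct from the question.
struct xyz { double x, y, z; };

// Print the first `count` bytes of an object so fill patterns
// such as CD CD ... or FE EE ... are easy to spot.
void Dump(const void* p, std::size_t count)
{
    const unsigned char* bytes = static_cast<const unsigned char*>(p);
    for (std::size_t i = 0; i < count; ++i)
        std::printf("%02X ", bytes[i]);
    std::printf("\n");
}

// Example: call Dump(&points[0], sizeof(xyz)); in both Debug and Release and compare.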

I don't know whether you already found a solution to your issue, and I did not look at your code.
I had the same issue because I was adding an unsigned char and an unsigned short and storing the result in an unsigned short. I changed all the variables to unsigned short and the issue was solved.

Related

How do I convert strings of numbers, coming from a .txt file, into an int array using std::stoi("string")

I've been trying to take a .txt document with three number entries, read those entries as strings, convert them to ints, and put them into an int array, but I had no success in doing so and I have no clue as to why. Note that the entries as well as some variable names are predetermined by the assignment; additionally we have to use the std::stoi("string") function, which I am not familiar with, nor has any syntax been provided to us (which is especially strange since we are usually not allowed to stray too far from the lecture material).
What I expected to happen is that the numbers from the .txt file would be converted into an array. What actually happened is that an "unhandled exception" occurred (my apologies if that term does not make sense, we have to program in our native language) and the string library opened itself, marking the error on line 107. The problematic line in my code seems to be "auftraegearray[i++] = std::stoi(MengeanAuftraegen);"
int main()
{
    std::fstream Auftraege;
    Auftraege.open("Auftraege37.txt", std::ios::out);
    Auftraege << "10" << std::endl;
    Auftraege << "1" << std::endl;
    Auftraege << "20" << std::endl;
    Auftraege.close();

    int i = 0;
    int auftraegearray[4];
    std::string MengeanAuftraegen;
    Auftraege.open("Auftraege37.txt", std::ios::in);
    while (!Auftraege.eof())
    {
        getline(Auftraege, MengeanAuftraegen);
        std::cout << MengeanAuftraegen << std::endl;
        auftraegearray[i++] = std::stoi(MengeanAuftraegen);
    }
    Auftraege.close();
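The "unhandled exception" here is most likely std::stoi throwing std::invalid_argument on the empty string that getline returns after the last number, because the loop tests eof() before reading. A minimal sketch of a read loop that uses the result of getline as the loop condition (reusing the file and variable names from the question) might look like this:

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    int auftraegearray[4];
    int i = 0;
    std::string MengeanAuftraegen;
    std::fstream Auftraege("Auftraege37.txt", std::ios::in);
    // Loop on the result of getline instead of testing eof() first,
    // so a trailing empty line is never handed to std::stoi.
    while (i < 4 && std::getline(Auftraege, MengeanAuftraegen))
    {
        if (MengeanAuftraegen.empty())
            continue; // skip blank lines rather than passing them to stoi
        std::cout << MengeanAuftraegen << std::endl;
        auftraegearray[i++] = std::stoi(MengeanAuftraegen);
    }
    return 0;
}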

C++ - Corrupted String

I'm quite new to C++, but I'm used to some coding with the R language. A few weeks ago I started to put together a small application that should copy and rename file pairs (.seq/.ab1) resulting from a DNA sequencer analysis (renaming hundreds of them manually would be a real waste of time, especially because we have lists with their new names).
Everything seemed to be fine, but the new files (those copied) appear with a "special character" in their names, right before the file type. It seems like a space, but it's not (I've replaced it with a space, and the file opened correctly). After deleting it, the file can be opened by its associated application, but with it, the application claims the file is corrupted.
The issue seems to come from the code related to the ostringstream::str member function, but I honestly don't know how to fix it. I wonder if it's not inserting a null character there, before I append the file type...
Here is the part of the code responsible. It gets the old and new names from a two-column csv file, with data separated by ";". Original data and new (renamed) data are kept in different directories; that's the reason I need to create a string with each file path inside a for loop. I intend to check the old and new files' contents later, probably with memcmp. But first I need them to be correctly renamed.
I'm on an Ubuntu 14.04 (64 bit) machine with gcc 4.8.4 as the compiler. I apologize in advance for the probably poor coding and bad English, I'm not a native speaker (writer, actually).
fNew.open(filename);
std::ostringstream oldSeqName (std::ostringstream::ate);
std::ostringstream newSeqName (std::ostringstream::ate);
std::ostringstream oldAb1Name (std::ostringstream::ate);
std::ostringstream newAb1Name (std::ostringstream::ate);
std::fstream log;
time_t now = time(0);
for (std::string nOld, nNew; getline(fNew, nOld, ';') && getline(fNew, nNew); )
{
    std::cout << "Old Name: " << nOld << " -> New Name: " << nNew << std::endl;
    // Keep a log of the name changes
    log.open("NameChangesLog.txt", std::fstream::out | std::fstream::app);
    log << ctime(&now) << " - " << "Old Name: " << nOld << " -> New Name: " << nNew << std::endl;
    log.close();
    // Create old seq file path string
    oldSeqName.str(nOld);
    oldSeqName << ".seq";
    std::string osn = "./Seq/" + oldSeqName.str();
    // Create new seq file path string
    newSeqName.str(nNew);
    newSeqName << ".seq";
    std::string nsn = "./renamed/" + newSeqName.str();
    std::ifstream ifseq(osn, std::ios::binary);
    std::ofstream ofseq(nsn, std::ios::binary);
    ofseq << ifseq.rdbuf();
    ifseq.close();
    ofseq.close();
    // Create old ab1 file path string
    oldAb1Name.str(nOld);
    oldAb1Name << ".ab1";
    std::string oan = "./Seq/" + oldAb1Name.str();
    // Create new ab1 file path string
    newAb1Name.str(nNew);
    newAb1Name << ".ab1";
    std::string nan = "./renamed/" + newAb1Name.str();
    std::ifstream ifab1(oan, std::ios::binary);
    std::ofstream ofab1(nan, std::ios::binary);
    ofab1 << ifab1.rdbuf();
    ifab1.close();
    ofab1.close();
}
fNew.close();
fNew.close();
Is the list file prepared on a Windows machine? In that case it would have DOS line endings (\r\n), which are not well suited for getline on Unix. The character you see is likely \r. Make sure you run the list file through the dos2unix utility before feeding it to your program.
You probably forgot to trim the values returned from getline, so they may still contain whitespace. Stray whitespace in a file name is easy to miss.
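For instance, a minimal sketch of trimming a trailing carriage return from the strings returned by getline (using the nOld/nNew names from the question's loop) could be:

#include <string>

// Remove a trailing '\r' left over from DOS (\r\n) line endings,
// so it doesn't end up embedded in the new file name.
void stripCarriageReturn(std::string& s)
{
    if (!s.empty() && s.back() == '\r')
        s.pop_back();
}

// Usage inside the loop, right after the getline calls:
//   stripCarriageReturn(nOld);
//   stripCarriageReturn(nNew);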

Different values depending on floating point exception flags set

Short question:
How can setting the _EM_INVALID exception flag on the FPU result in different values?
Long question:
In our project we have turned off floating point exceptions in our Release build, but turned on ZERODIVIDE, INVALID and OVERFLOW using _controlfp_s() in our Debug build. This is in order to catch errors if they are there.
However, we would also like results of numerical calculations (involving optimisation algorithms, matrix inversion, Monte Carlo and all sorts of things) to be consistent between Debug and Release build to make debugging easier.
I would expect that the setting of the exception flags on the FPU should not affect the calculated values, only whether exceptions are thrown or not. But after working backwards through our calculations I can isolate the code example below, which shows that there is a difference in the last bit when calling the log() function.
This propagates to a 0.5% difference in the resulting value.
The code below gives the shown program output when added to a new solution in Visual Studio 2005 on Windows XP and compiled in the Debug configuration. (Release gives a different output, but that's because the optimiser reuses the result from the first call to log().)
I hope that someone can shed a bit of light on this. Thanks.
/*
Program output:
Xi, 3893f76f, 7.4555176582633598
K, c0a682c7, 7.44466687218
Untouched
x, da8caea1, 0.0014564635732296288
Invalid exception on
x, da8caea2, 0.001456463573229629
Invalid exception off
x, da8caea1, 0.0014564635732296288
*/
#include <float.h>
#include <math.h>
#include <limits>
#include <iostream>
#include <iomanip>

using namespace std;

int main()
{
    unsigned uMaskOld = 0;
    errno_t err;

    cout << std::setprecision (numeric_limits<double>::digits10 + 2);

    double Xi = 7.4555176582633598;
    double K = 7.44466687218;
    double x;

    cout << "Xi, " << hex << setw(8) << setfill('0') << *(unsigned*)(&Xi) << ", " << dec << Xi << endl;
    cout << "K, " << hex << setw(8) << setfill('0') << *(unsigned*)(&K) << ", " << dec << K << endl;
    cout << endl;

    cout << "Untouched" << endl;
    x = log(Xi/K);
    cout << "x, " << hex << setw(8) << setfill('0') << *(unsigned*)(&x) << ", " << dec << x << endl;
    cout << endl;

    cout << "Invalid exception on" << endl;
    ::_clearfp();
    err = ::_controlfp_s(&uMaskOld, 0, _EM_INVALID);
    x = log(Xi/K);
    cout << "x, " << hex << setw(8) << setfill('0') << *(unsigned*)(&x) << ", " << dec << x << endl;
    cout << endl;

    cout << "Invalid exception off" << endl;
    ::_clearfp();
    err = ::_controlfp_s(&uMaskOld, _EM_INVALID, _EM_INVALID);
    x = log(Xi/K);
    cout << "x, " << hex << setw(8) << setfill('0') << *(unsigned*)(&x) << ", " << dec << x << endl;
    cout << endl;

    return 0;
}
This is not a complete answer, but it is too long for a comment.
I suggest you isolate the code that does the questionable calculations and put it in a subroutine, preferably in a source module that is compiled separately. Something like:
void foo(void)
{
    double Xi = 7.4555176582633598;
    double K = 7.44466687218;
    double x;
    x = log(Xi/K);
    …Insert output statements here…
}
Then you would call the routine with different settings:
cout << "Untouched:\n";
foo();
cout << "Invalid exception on:\n";
…Change FP state…
foo();
This guarantees that the same instructions are executed in each case, eliminating the possibility that the compiler has for some reason generated separate code for each sequence. The way you have compiled the code, I suspect it is possible the compiler may have used 80-bit arithmetic in one case and 64-bit arithmetic in another, or may have used 80-bit arithmetic generally but converted some result to 64-bit in one case but not another.
Once that is done, you can partition and isolate the code further. E.g., try evaluating Xi/K once before any of the tests, storing that in a double, and passing it to foo as a parameter. This tests whether the log call differs depending on the floating-point state. I suspect that is the case, as it is unlikely the division operation would differ.
Another advantage of isolating the code this way is that you could step through it in the debugger to see exactly where behavior diverges. You could step through it, one instruction at a time, with different floating-point states simultaneously in two windows and examine the results at each step to see exactly where the divergence is. If there is no divergence by the time you reach the log call, you should step through that, too.
Incidental notes:
If you know Xi and K are close to each other, it is better to compute log(Xi/K) as log1p((Xi-K)/K). When Xi and K are close to each other, the subtraction Xi-K is exact (has no error), and the quotient has more useful bits (the 1 that we already knew about and some zero bits following it are gone).
The fact that slight changes in your floating-point environment cause a .5% change in your result implies your calculations are very sensitive to error. This suggests that, even if you make your results reproducible, the errors that necessarily exist in floating-point arithmetic cause your result to be inaccurate. That is, the final error will still exist, it just will not be called to your attention by the difference between two different ways of calculating.
It appears that in your C++ implementation unsigned is four bytes but double is eight bytes. So printing the encoding of a double by aliasing it to an unsigned omits half of the bits. Instead, you should convert a pointer to the double to a pointer to const char and print sizeof(double) bytes.
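A rough illustration of those two notes (only a sketch; the variable names follow the question):

#include <cmath>
#include <cstdio>

int main()
{
    double Xi = 7.4555176582633598;
    double K  = 7.44466687218;

    // log1p((Xi-K)/K) loses less precision than log(Xi/K) when Xi and K are close.
    double x = std::log1p((Xi - K) / K);

    // Print the full 8-byte encoding of the double, not just the low 4 bytes.
    const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&x);
    for (unsigned i = 0; i < sizeof(double); ++i)
        std::printf("%02x", bytes[i]);
    std::printf(", %.17g\n", x);
    return 0;
}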

Creating pointers to the sub-arrays of a mass-allocated one-dimensional array and release VC++ build

This is my first post, I hope I am not making any mistakes.
I have the following code. I am trying to allocate and access a two-dimensional array in one shot, and more importantly in one contiguous byte array. I also need to be able to access each sub-array individually, as shown in the code. It works fine in debug mode. However, in the release build in VS 2012 it causes problems at runtime when the compiler optimizations are applied. If I disable the release compiler optimizations then it works. Do I need to do some kind of special cast to inform the compiler?
My priorities in this code are fast allocation and network communication of the complete array, while at the same time being able to work with its sub-arrays.
I prefer not to use boost.
Thanks a lot :)
void PrintBytes(char* x, byte* data, int length)
{
    using namespace std;
    cout << x << endl;
    for( int i = 0; i < length; i++ )
    {
        std::cout << "0x" << std::setbase(16) << std::setw(2) << std::setfill('0');
        std::cout << static_cast<unsigned int>( data[ i ] ) << " ";
    }
    std::cout << std::dec;
    cout << endl;
}

byte* set = new byte[SET_SIZE*input_size];
for (int i = 0; i < SET_SIZE; i++)
{
    sprintf((char*)&set[i*input_size], "M%06d", i+1);
}
PrintByte((byte*)&set[i*input_size]);
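For context, a common way to get per-row access into a single flat allocation is to keep a separate table of pointers into it. A minimal sketch (SET_SIZE and input_size are placeholder values here, as in the question):

#include <cstdio>
#include <vector>

typedef unsigned char byte;

int main()
{
    const int SET_SIZE = 4;      // placeholder values
    const int input_size = 8;

    // One contiguous allocation that can be sent over the network in one shot.
    byte* set = new byte[SET_SIZE * input_size];

    // Separate table of pointers to the start of each sub-array.
    std::vector<byte*> rows(SET_SIZE);
    for (int i = 0; i < SET_SIZE; i++)
        rows[i] = &set[i * input_size];

    for (int i = 0; i < SET_SIZE; i++)
        std::sprintf(reinterpret_cast<char*>(rows[i]), "M%06d", i + 1);

    for (int i = 0; i < SET_SIZE; i++)
        std::printf("%s\n", reinterpret_cast<char*>(rows[i]));

    delete[] set;
    return 0;
}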

Why is the protocol buffer C++ library not reading binary objects properly

I created a binary file with a C++ program using protocol buffers. I had issues reading the binary file in my C# program, so I decided to write a small C++ program to test the reading.
My proto file is as follows
message TradeMessage {
required double timestamp = 1;
required string ric_code = 2;
required double price = 3;
required int64 size = 4;
required int64 AccumulatedVolume = 5;
}
When writing to the protocol buffer stream, I first write the object type, then the object length, and then the object itself.
coded_output->WriteLittleEndian32((int) ObjectType_Trade);
coded_output->WriteLittleEndian32(trade.ByteSize());
trade.SerializeToCodedStream(coded_output);
Now, when I try to read the same file in my C++ program, I see strange behavior.
My reading code is as follows:
coded_input->ReadLittleEndian32(&objtype);
coded_input->ReadLittleEndian32(&objlen);
tMsg.ParseFromCodedStream(coded_input);
cout << "Expected Size = " << objlen << endl;
cout<<" Trade message received for: "<< tMsg.ric_code() << endl;
cout << "TradeMessage Size = " << tMsg.ByteSize() << endl;
In this case, I get the following output:
Expected Size = 33
Trade message received for: .CSAP0104
TradeMessage Size = 42
When I write to the file, I write trade.ByteSize() as 33 bytes, but when I read the same object back, ByteSize() is 42 bytes, which affects the rest of the data. I am not sure what is wrong. Please advise.
Regards,
Alok
This is guesswork, based on the above: when you use ParseFromCodedStream, you aren't actually limiting that to the objlen that you previously found; thus, if the stream contains any more data than this (i.e. that isn't the end of the file), the engine will try to keep reading to the EOF. You must cap the length to your expectation. I am not a C++ expert, so I can't offer direct guidance, but if this was C# (using protobuf-net):
objType = ProtoReader.DirectReadLittleEndianInt32(file);
len = ProtoReader.DirectReadLittleEndianInt32(file);
// assume GetObjectType returns typeof(TradeMessage) for our objType
Type type = GetObjectType(objType);
msg = RuntimeTypeModel.Default.Deserialize(file, null, type, len, null);
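In the protobuf C++ API, a rough equivalent (a sketch only, not tested) would cap the parse with CodedInputStream's PushLimit/PopLimit, using the objlen and tMsg from the question's reading code:

// Read the framing first, as in the question.
coded_input->ReadLittleEndian32(&objtype);
coded_input->ReadLittleEndian32(&objlen);

// Limit the parser to objlen bytes so it doesn't keep reading to EOF.
google::protobuf::io::CodedInputStream::Limit limit = coded_input->PushLimit(objlen);
tMsg.ParseFromCodedStream(coded_input);
coded_input->PopLimit(limit);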
So apparently, I was making a very silly mistake while creating the binary files. I did not open the file in binary mode when I wrote the protobuf data to it, causing weird ASCII characters to be added in the middle. This caused an issue while reading the data using the protobuf-net library. The issue is resolved now. It shouldn't have taken so long to resolve this.
