Wrong condition evaluation using GCC on Linux

I have code that depends on many external libraries; it works fine in debug mode but crashes in release mode.
While checking the root cause I found a comparison that is evaluated inconsistently: stored in a variable it is false, but the same comparison in an if statement takes the true branch.
My code looks like this:
#include <iostream>
enum fruits
{
a, b, c
};
...
int main()
{
auto condition (fruits::a == 99);
std::cout << condition;
if (fruits::a == 99) std::cout << " FATAL ERROR ";
...
}
The program outputs:
0 FATAL ERROR
The program is compiled as C++20 with the -O2 optimisation flag.
The issue is not present if I run the same code in a separate program.

Related

How to use try-catch to catch floating point errors?

#include <iostream>
#include <float.h>
#include <cstdlib> // for std::atof
#pragma fenv_access (on)
int main(int, char**argv)
{
unsigned int fp_control_word;
_controlfp_s(&fp_control_word, 0, 0);
const unsigned int new_fp_control_word = fp_control_word | _EM_INVALID | _EM_DENORMAL
| _EM_ZERODIVIDE | _EM_OVERFLOW | _EM_UNDERFLOW | _EM_INEXACT;
_controlfp_s(&fp_control_word, new_fp_control_word, _MCW_EM);
try
{ std::cout << std::atof(argv[1]) / std::atof(argv[2]) << std::endl;
} catch (...)
{ std::cout << "caught exception" << std::endl;
}
}
I remember that it is possible to catch memory access errors on Windows using a try-catch block.
There is already a question regarding this subject, but it is 10 years old and the code provided there does not result in an exception; it just prints NaN.
I was always curious about using this feature to abort a piece of numerical code in a nice way. The motivation is to abort some VERY COMPLEX piece of code immediately if a floating point exception occurs anywhere in it, rather than keep evaluating the rest of the code with NaN results -- which is rather slow and does not make sense anyway.
Please: I don't care that this is not supported by the C++ standard!
The question is how to get this code to run into the catch block -- e.g. by using the command line parameters 0.0 0.0.
For me it always prints NaN.
What compiler options need to be used?
Or does the code need to be changed?
If one provokes a nullptr dereference in the try block, one ends up in the catch block. But not for division by zero.
You need to use the compiler option /EHa so that structured (SEH) exceptions can be caught by C++ catch blocks.
Thanks to https://stackoverflow.com/users/17034/hans-passant for the solution.
Here is the working code:
#include <iostream>
#include <float.h>
#include <cstdlib> // for std::atof
#pragma fenv_access (on)
int main(int, char**argv)
{
unsigned int fp_control_word;
_controlfp_s(&fp_control_word, 0, _MCW_EM);
const unsigned int new_fp_control_word = fp_control_word & ~(_EM_INVALID
| _EM_DENORMAL | _EM_ZERODIVIDE | _EM_OVERFLOW | _EM_UNDERFLOW | _EM_INEXACT);
_controlfp_s(&fp_control_word, new_fp_control_word, _MCW_EM);
try
{ std::cout << std::atof(argv[1]) / std::atof(argv[2]) << std::endl;
} catch (...)
{ std::cout << "caught exception" << std::endl;
}
}
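As an aside (not part of the original answer): if detecting the error is enough and you do not need an actual exception, the standard <cfenv> status flags can be polled after the computation. A minimal portable sketch:
#include <cfenv>     // std::feclearexcept, std::fetestexcept
#include <cstdlib>   // std::atof
#include <iostream>
// Strictly speaking the FP environment should be accessed with
// FENV_ACCESS enabled (MSVC: #pragma fenv_access (on)).
int main(int, char** argv)
{
std::feclearexcept(FE_ALL_EXCEPT);
const double result = std::atof(argv[1]) / std::atof(argv[2]);
if (std::fetestexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW))
{
std::cout << "floating point error detected" << std::endl;
return 1;
}
std::cout << result << std::endl;
return 0;
}
This does not jump into a catch block, but it lets complex numerical code bail out as soon as a bad operation has happened instead of grinding on with NaN values.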

Using unsetenv in linux

I'm trying to erase a Linux environment variable with the unsetenv function in a compiled C program. I run the C program and the unsetenv call succeeds, but when I run the env command in the shell, TURN_ON_TESTING is still there. Why does it not get erased?
My C program is:
#include <stdio.h>  /* printf */
#include <stdlib.h> /* unsetenv */
int main(void)
{
char *name = "TURN_ON_TESTING";
if (unsetenv(name) == -1)
printf("Error");
return 0;
}
thx
Oh, but it does erase it. Unfortunately, not where you want it erased.
When you run your binary, bash creates a subprocess for it and copies all environment variables into it. Let us consider the following code:
// ununsetter.cpp
#include <stdlib.h>
#include <iostream>
int main()
{
const char *name = "TURN_ON_TESTING";
const char *val = "NEW_VALUE";
std::cout << "OLD VALUE: " << getenv(name)<<std::endl;
if(setenv(name, val, 10) == -1)
return -1;
std::cout << "NEW VALUE: " << getenv(name)<<std::endl;
return 0;
}
Now let's do the testing:
export TURN_ON_TESTING=OLD_VALUE;
./ununsetter
echo $TURN_ON_TESTING;
As you'll see, the state of TURN_ON_TESTING evolves like this:
OLD_VALUE ---> before running app
OLD_VALUE ---> while running app, but before setting it to NEW_VALUE
NEW_VALUE ---> while running app, after setting it to NEW_VALUE
OLD_VALUE ---> after app is finished
The point is that the value inside the program is not the same 'object' as the one in the shell: the program only modifies its own copy of the environment.
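To see that unsetenv really does work within the process itself, here is a minimal sketch (my own example, not from the answer); the parent shell's copy of the variable is simply never touched:
// unset_demo.cpp -- unsetenv() only affects this process's environment
#include <stdlib.h>   // getenv, unsetenv (POSIX)
#include <iostream>
int main()
{
const char *name = "TURN_ON_TESTING";
const char *before = getenv(name);
std::cout << "before unsetenv: " << (before ? before : "(not set)") << std::endl;
if (unsetenv(name) == -1)
return 1;
const char *after = getenv(name);
std::cout << "after unsetenv:  " << (after ? after : "(not set)") << std::endl;
return 0;   // the shell that started us still has its own copy
}
Running export TURN_ON_TESTING=OLD_VALUE; ./unset_demo; echo $TURN_ON_TESTING prints "(not set)" inside the program but still echoes OLD_VALUE afterwards; to remove the variable from the shell itself you need the shell builtin unset TURN_ON_TESTING.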

Why am I experiencing unexpected behavior with Linux signal handling?

I live in an environment with Win7/MSVC 2010sp1, two different Linux boxes (Red Hat) with g++ versions (4.4.7, 4.1.2), and AIX with xlc++ (08.00.0000.0025).
Not so long ago it was requested that we move some code from AIX over to Linux. It didn't take too long to see that Linux was a bit different. Normally when a signal is raised, we handle it and throw a C++ exception. That was not working as expected.
Long story short, throwing C++ exceptions from a signal handler isn't going to work.
Sometime later, I put together a fix that uses setjmp/longjmp to move the exception out of the handler. After some testing, the dang thing works on all platforms. After an obligatory round of cubicle happy dance I moved on to setting up some unit tests. Oops.
Some of my tests were failing on Linux. What I observed was that the raise function only worked once. With two tests using SIGILL, the first one passed, and the second one failed. I broke out an axe, and started chopping away at the code to remove as much cruft as possible. That yielded this smaller example.
#include <csetjmp>
#include <iostream>
#include <signal.h>
jmp_buf mJmpBuf;
jmp_buf *mpJmpBuf = &mJmpBuf;
int status = 0;
int testCount = 3;
void handler(int signalNumber)
{
signal(signalNumber, handler);
longjmp(*mpJmpBuf, signalNumber);
}
int main(void)
{
if (signal(SIGILL, handler) != SIG_ERR)
{
for (int test = 1; test <= testCount; test++)
{
try
{
std::cerr << "Test " << test << "\n";
if ((status = setjmp(*mpJmpBuf)) == 0)
{
std::cerr << " About to raise SIGILL" << "\n";
int returnStatus = raise(SIGILL);
std::cerr << " Raise returned value " << returnStatus
<< "\n";
}
else
{
std::cerr << " Caught signal. Converting signal "
<< status << " to exception" << "\n";
std::exception e;
throw e;
}
std::cerr << " SIGILL should have been thrown **********\n";
}
catch (std::exception &)
{ std::cerr << " Caught exception as expected\n"; }
}
}
else
{ std::cerr << "The signal handler wasn't registered\n"; }
return 0;
}
For the Windows and the AIX boxes I get the expected output.
Test 1
About to raise SIGILL
Caught signal. Converting signal 4 to exception
Caught exception as expected
Test 2
About to raise SIGILL
Caught signal. Converting signal 4 to exception
Caught exception as expected
Test 3
About to raise SIGILL
Caught signal. Converting signal 4 to exception
Caught exception as expected
For both Linux boxes it looks like this.
Test 1
About to raise SIGILL
Caught signal. Converting signal 4 to exception
Caught exception as expected
Test 2
About to raise SIGILL
Raise returned value 0
SIGILL should have been thrown **********
Test 3
About to raise SIGILL
Raise returned value 0
SIGILL should have been thrown **********
So, my real question is "What is going on here?"
My rhetorical questions are:
Is anyone else observing this behavior?
What should I do to try to troubleshoot this issue?
What other things should I be aware of?
You must use sigsetjmp/siglongjmp to get correct behavior when mixing signals and jumps: on Linux the signal being handled stays blocked when you longjmp out of the handler, so only the first raise is delivered. If you change your code accordingly it will work correctly under Linux.
You are also using the old signal API, which is not recommended. I encourage you to use the much more reliable sigaction interface; one immediate benefit is that you no longer need to re-install the handler inside the handler...
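For illustration, here is a minimal sketch of that fix (my own condensed version of the test program, not the answerer's code), using sigsetjmp/siglongjmp so the saved signal mask is restored, and sigaction instead of signal:
#include <setjmp.h>
#include <signal.h>
#include <iostream>
static sigjmp_buf jmpBuf;
extern "C" void handler(int signalNumber)
{
// siglongjmp also restores the signal mask saved by sigsetjmp,
// so SIGILL does not stay blocked after the first test.
siglongjmp(jmpBuf, signalNumber);
}
int main()
{
struct sigaction sa = {};
sa.sa_handler = handler;
sigemptyset(&sa.sa_mask);
sa.sa_flags = 0;                    // handler stays installed between tests
if (sigaction(SIGILL, &sa, nullptr) != 0)
return 1;
for (int test = 1; test <= 3; ++test)
{
if (sigsetjmp(jmpBuf, 1) == 0)  // 1 = save the current signal mask
{
std::cerr << "Test " << test << ": about to raise SIGILL\n";
raise(SIGILL);
std::cerr << "  raise returned -- signal was not delivered\n";
}
else
{
std::cerr << "  caught signal, would convert to an exception here\n";
}
}
return 0;
}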

(Optimization?) Bug regarding GCC std::thread

While testing some functionality with std::thread, a friend encountered a problem with GCC and we thought it was worth asking whether this is a GCC bug or whether there is something wrong with this code. The code prints (for example) "7 8 9 10 1 2 3", but we expect every integer in [1, 10] to be printed:
#include <algorithm>
#include <iostream>
#include <iterator>
#include <numeric>   // std::iota
#include <thread>
int main() {
int arr[10];
std::iota(std::begin(arr), std::end(arr), 1);
using itr_t = decltype(std::begin(arr));
// the function that will display each element
auto f = [] (itr_t first, itr_t last) {
while (first != last) std::cout<<*(first++)<<' ';};
// we have 3 threads so we need to figure out the ranges for each thread to show
int increment = std::distance(std::begin(arr), std::end(arr)) / 3;
auto first = std::begin(arr);
auto to = first + increment;
auto last = std::end(arr);
std::thread threads[3] = {
std::thread{f, first, to},
std::thread{f, (first = to), (to += increment)},
std::thread{f, (first = to), last} // go to last here to account for odd array sizes
};
for (auto&& t : threads) t.join();
}
The following alternate code works:
#include <algorithm>
#include <array>
#include <functional>
#include <iostream>
#include <iterator>
#include <numeric>
#include <thread>
int main()
{
std::array<int, 10> a;
std::iota(a.begin(), a.end(), 1);
using iter_t = std::array<int, 10>::iterator;
auto dist = std::distance( a.begin(), a.end() )/3;
auto first = a.begin(), to = first + dist, last = a.end();
std::function<void(iter_t, iter_t)> f =
[]( iter_t first, iter_t last ) {
while ( first != last ) { std::cout << *(first++) << ' '; }
};
std::thread threads[] {
std::thread { f, first, to },
std::thread { f, to, to + dist },
std::thread { f, to + dist, last }
};
std::for_each(
std::begin(threads),std::end(threads),
std::mem_fn(&std::thread::join));
return 0;
}
We thought it might have something to do with unsequenced evaluation of the constructor arguments, or that this is just how std::thread is supposed to work when copying arguments not wrapped in std::ref. We then tested the first code with Clang and it works (and so we started to suspect a GCC bug).
Compiler used: GCC 4.7, Clang 3.2.1
EDIT: GCC gives the wrong output with the first version of the code, but the correct output with the second version.
From this modified program:
#include <algorithm>
#include <iostream>
#include <iterator>
#include <numeric>   // std::iota
#include <thread>
#include <sstream>
int main()
{
int arr[10];
std::iota(std::begin(arr), std::end(arr), 1);
using itr_t = decltype(std::begin(arr));
// the function that will display each element
auto f = [] (itr_t first, itr_t last) {
std::stringstream ss;
ss << "**Pointer:" << first << " | " << last << std::endl;
std::cout << ss.str();
while (first != last) std::cout<<*(first++)<<' ';};
// we have 3 threads so we need to figure out the ranges for each thread to show
int increment = std::distance(std::begin(arr), std::end(arr)) / 3;
auto first = std::begin(arr);
auto to = first + increment;
auto last = std::end(arr);
std::thread threads[3] = {
std::thread{f, first, to},
#ifndef FIX
std::thread{f, (first = to), (to += increment)},
std::thread{f, (first = to), last} // go to last here to account for odd array sizes
#else
std::thread{f, to, to+increment},
std::thread{f, to+increment, last} // go to last here to account for odd array sizes
#endif
};
for (auto&& t : threads) {
t.join();
}
}
I added printing of the first and last pointers inside the lambda f, and got these interesting results (when FIX is undefined):
**Pointer:0x28abd8 | 0x28abe4
1 2 3 **Pointer:0x28abf0 | 0x28abf0
**Pointer:0x28abf0 | 0x28ac00
7 8 9 10
Then I added the alternative code in the #else branch of the #ifndef FIX, and it works well.
Update: this conclusion (the original post below) is wrong. My fault. See Josh's comment below.
I believed that the second line of threads[], std::thread{f, (first = to), (to += increment)}, contains a bug: that the assignments inside the two pairs of parentheses could be evaluated in any order, whereas the 1st, 2nd and 3rd constructor arguments need to be evaluated in the order given.
Update: corrected. The elements of a braced initializer list are in fact evaluated left to right, so the code is well-formed.
Thus the debug printing results above suggest that GCC 4.8.2 (my version) is still buggy (not to mention GCC 4.7), but GCC 4.9.2 fixes this bug, as reported by Maxim Yegorushkin (see comment above).
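To make the corrected point concrete, here is a tiny stand-alone example (mine, not from the thread): inside a braced initializer list the element initializers are sequenced left to right, so a conforming compiler must print 1 2 3.
#include <iostream>
int main()
{
int i = 0;
// The three ++i side effects are sequenced in list order (C++11),
// unlike ordinary function-call arguments, which are unsequenced.
int arr[3] = { ++i, ++i, ++i };
std::cout << arr[0] << ' ' << arr[1] << ' ' << arr[2] << '\n';   // 1 2 3
}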

C++ Delete Error -- _unlock_fhandle throwing exception?

I have a straightforward problem but I don't understand why I have it.
I would greatly appreciate any insight.
I wrote this code to test that I was correctly creating and using DLLs in Visual Studio 2010 under Win 7 64-bit that could execute on Windows XP. The code executes correctly, and because it is a small test program freeing the allocated memory is not critical, but it certainly will be in the future.
I am implicitly linking the DLL and, as I say, it appears to work just fine. When I add the line "delete dllMsg;" to toyUseDLL.cpp it crashes, and the debugger shows _unlock_fhandle in osfinfo.c.
If it's relevant, I am compiling the program with /MT to embed the runtime library (for a small handful of not-important reasons).
It seems pretty obvious that I'm deallocating something that wasn't allocated, but the program output is correct, so the pointers being passed do reference valid memory. The only thing I can think of is that my pointer isn't valid and it's only working by pure chance because the memory hasn't been overwritten yet.
Thanks for any help. I'm pretty new to C++ and have already found a lot of great help on this site, so thanks to everyone who has posted in the past!! :-)
msgDLL.h
#include <string>
using namespace std;
namespace toyMsgs {
class myToyMsgs {
public:
static __declspec(dllexport) string* helloMsg(void);
static __declspec(dllexport) string* goodbyeMsg(void);
};
}
msgDLL.cpp
#include <iostream>
#include <string>
#include "msgDLL.h"
using namespace std;
namespace toyMsgs {
string* myToyMsgs::helloMsg(void) {
string *dllMsg = new string;
dllMsg->assign("Hello from the DLL");
cout << "Here in helloMsg, dllMsg is: \"" << *(dllMsg) << "\"" << endl;
return (dllMsg);
}
string* myToyMsgs::goodbyeMsg(void) {
string *dllMsg = new string;
dllMsg->assign("Good bye from the DLL");
cout << "Here in goodbyeMsg, dllMsg is: \"" << *(dllMsg) << "\"" << endl;
return (dllMsg);
}
}
toyUseDLL.cpp
#include <iostream>
#include <string>
#include "stdafx.h"
#include "msgDLL.h"
using namespace std;
int _tmain(int argc, _TCHAR* argv[]) {
string myMsg;
string *dllMsg;
myMsg.assign ("This is a hello from the toy program");
cout << myMsg << endl;
dllMsg = toyMsgs::myToyMsgs::helloMsg();
cout << "Saying Hello? " << *(dllMsg) << endl;
delete dllMsg;
myMsg.assign ("This is the middle of the toy program");
cout << myMsg << endl;
dllMsg = toyMsgs::myToyMsgs::goodbyeMsg();
cout << "Saying goodbye? " << *(dllMsg) << endl;
myMsg.assign ("This is a goodbye from the toy program");
cout << myMsg << endl;
return 0;
}
Program Output:
This is a hello from the toy program
Here in helloMsg, dllMsg is: "Hello from the DLL"
Saying Hello? Hello from the DLL
This is the middle of the toy program
Here in goodbyeMsg, dllMsg is: "Good bye from the DLL"
Saying goodbye? Good bye from the DLL
This is a goodbye from the toy program
The problem is that you are using /MT to compile your EXE and DLL. With /MT, each module (EXE or DLL) gets its own copy of the C runtime library, which is a separate and independent context. CRT and Standard C++ Library types can't safely be passed across the DLL boundary when the modules are compiled /MT. In your case the string is allocated by the DLL's CRT (on its private heap) and freed by the EXE's CRT (which uses a different heap), causing the crash in question.
To make the program work, simply compile with /MD.
General advice: /MT is almost never the right thing to do, for a large handful of relatively important reasons including memory cost, performance, servicing, and security.
Martyn
There is some good analysis here Why does this program crash: passing of std::string between DLLs
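If /MT cannot be avoided, one common workaround (my own sketch, not from the answer) is to keep allocation and deallocation inside the DLL by exporting a matching free function, so the string is always deleted by the same CRT heap that created it:
// Hypothetical reworked interface for msgDLL -- illustration only.
#include <string>
namespace toyMsgs {
class myToyMsgs {
public:
static __declspec(dllexport) std::string* helloMsg();
// New: the DLL owns deallocation as well as allocation.
static __declspec(dllexport) void freeMsg(std::string* msg);
};
std::string* myToyMsgs::helloMsg() {
return new std::string("Hello from the DLL");   // allocated on the DLL's CRT heap
}
void myToyMsgs::freeMsg(std::string* msg) {
delete msg;                                     // freed on the same heap
}
} // namespace toyMsgs
// In toyUseDLL.cpp the call site then becomes:
//     std::string* dllMsg = toyMsgs::myToyMsgs::helloMsg();
//     ...
//     toyMsgs::myToyMsgs::freeMsg(dllMsg);   // instead of: delete dllMsg;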

Resources