Visual C++ / STL exception not caught

The following method (in a Visual Studio 2008 ref class) contains a simple error that I thought would be caught, but instead it causes the process to abort with a "Debug Assertion Failed!" message box (the message includes the offending STL vector source line number). This occurs whether compiled in Debug or Release mode. The process in this case is Excel.exe and the method is accessed via COM interop.
Can someone tell me why this error doesn't get trapped?
String^ FOO()
{
    try {
        std::vector<int> vfoo;
        vfoo.push_back(999);
        return vfoo[1].ToString(); //!!!! error: index 1 not valid
    }
    catch(std::exception& stdE) { // not catching
        return "Unhandled STL exception";
    }
    catch(System::Exception^ E) { // not catching
        return "Unhandled .NET exception: " + E->Message;
    }
    catch(...) { // not even this is catching
        return "Unhandled exception";
    }
}

In the Debug configuration you'll get an assert that's enabled by the iterator debugging feature, which is designed to help you find mistakes in your use of the standard C++ library. You can use the Call Stack window to trace back to the statement in your code that triggered the assert. The feature is controlled by the _HAS_ITERATOR_DEBUGGING macro; there are very few reasons to ever turn it off in the Debug build. Well, none.
In the Release configuration, you'll run into the Checked Iterators feature, part of the Secure CRT Library initiative introduced in VS2005 and controlled by the _SECURE_SCL macro. It has a hook built in to get the debugger to stop, much like the above, to show you why it bombed. But not without a debugger: if none is attached, it immediately terminates your program with SEH exception code 0xc0000417. That's kinda where the buck stops. The DLL version of the CRT was built with _SECURE_SCL in effect, and you have no option to avoid that DLL when you write managed code. Building with /MT is required to turn it off completely, and that's not possible in C++/CLI.
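The reason none of the handlers run is that operator[] does not throw at all; it trips the CRT's argument validation instead. For contrast, the checked accessor std::vector::at() throws an ordinary std::out_of_range (derived from std::exception), which the first handler in the question would catch. A minimal sketch:
String^ FOO()
{
    try {
        std::vector<int> vfoo;
        vfoo.push_back(999);
        return vfoo.at(1).ToString(); // throws std::out_of_range
    }
    catch(std::exception& stdE) { // now catches it
        return "STL exception: " + gcnew String(stdE.what());
    }
}
But for the original operator[] code, the process still dies before any handler runs.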
This tends to drive C++ programmers pretty nutty; catch (...) {} is a blessed language feature, even though the odds of restoring program state are very close to zero. There is a back-door however (there's always a back-door): the argument validation code emits the error condition through a function pointer. The default handler immediately aborts the program with no way to catch it, not even with SetUnhandledExceptionFilter(). You can replace the handler with the _set_invalid_parameter_handler() function. That needs to be done by your Main() method, something like this:
#include "stdafx.h"
#include <stdlib.h>
using namespace System;
#pragma managed(push, off)
void no_invalid_parameter_exit(const wchar_t * expression, const wchar_t * function,
const wchar_t * file, unsigned int line, uintptr_t pReserved) {
throw new std::invalid_argument("invalid argument");
}
#pragma managed(pop)
int main(array<System::String ^> ^args)
{
_set_invalid_parameter_handler(no_invalid_parameter_exit);
// etc...
}
Which will run one of your catch handlers. The managed one (the thrown pointer doesn't match catch(std::exception&), so it arrives wrapped in an SEHException), leaving no decent breadcrumbs to show what happened, but that's normal for native C++ exceptions.

"Debug Assertion Failed!" sounds like, well, an assert()-like check. These are NOT exceptions.
I actually use assert()-style checking for everything that constitutes a programming error, and exceptions for runtime errors. Maybe Microsoft follows a similar policy; an "index out of bounds" is clearly a programming error, not something caused by, say, your disk getting full.

Related

What is the best alternative to using ENSURE to avoid code analysis warnings?

Example:
ENSURE(strTitle.LoadString(AFX_IDS_APP_TITLE));
ENSURE(strMainInstruction.LoadString(IDS_STR_SUBMIT_STATS_MAIN_TEXT));
ENSURE(strContent.LoadString(IDS_STR_SUBMIT_STATS_CONTENT_TEXT));
ENSURE(strAdditional.LoadString(IDS_STR_SUBMIT_STATS_ADDITIONAL_TEXT));
ENSURE(strFooter.LoadString(IDS_STR_TASK_DIALOG_FOOTER));
ENSURE(strVerification.LoadString(IDS_STR_SUBMIT_STATS_VERIFICATION_TEXT));
ENSURE(strExpand.LoadString(IDS_STR_FIND_OUT_MORE));
ENSURE(strCollapse.LoadString(IDS_STR_COLLAPSE));
Definition:
#define ENSURE(cond) ENSURE_THROW(cond, ::AfxThrowInvalidArgException() )
It is a Microsoft macro, although I can't see it documented. I started using it when I noticed it being used in Microsoft SDK code. Annoyingly, it triggers a code analysis warning:
Warning C26496: The variable '__afx_condVal' does not change after construction, mark it as const (con.4).
I did raise it with Microsoft. The underlying macro is ENSURE_THROW:
#define ENSURE_THROW(cond, exception) \
do { int __afx_condVal=!!(cond); ASSERT(__afx_condVal); if (!(__afx_condVal)){exception;} } __pragma(warning(suppress:4127)) while (false)
... it only needs the word const to resolve it.
Is there an alternative call I can make, given that (as I understand it) ASSERT only works in DEBUG builds?
You could redefine ENSURE_THROW and put this into your stdafx.h:
#undef ENSURE_THROW
#define ENSURE_THROW(cond, exception) \
do { const int __afx_condVal=!!(cond); ASSERT(__afx_condVal); \
if (!(__afx_condVal)){exception;} } while (false)
It's identical to the original MS definition in afx.h, but with the const added.
This could break in future versions of MFC, although it is unlikely that MS will ever change this.
A cleaner way would be excluding certain header files from the analysis, but this depends on your tool.
Be aware that ENSURE and ASSERT are very different things:
ASSERT(x): in debug builds it will stop the program execution with a message if x is false; in release builds it's a NOP, and x won't even be evaluated.
ENSURE(x): x will always be evaluated, and if it's false an exception will be thrown. In debug builds a diagnostic dialog is shown before the exception is thrown.
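To see why the distinction matters, consider this minimal sketch of the release-build difference (SomeCall is a hypothetical function with a side effect):
// In a release build:
ASSERT(SomeCall());  // expands to nothing; SomeCall() never runs
ENSURE(SomeCall());  // SomeCall() runs; AfxThrowInvalidArgException() fires if it returns false
Replacing ENSURE with ASSERT just to silence the analysis warning would therefore silently change release-build behaviour.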

Catching and recovering from error in C++ function called from Duktape

I have created a plugin for the OpenCPN marine navigation program that incorporates Duktape to provide a scripting capability. OpenCPN uses wxWidgets.
Basically, the plugin presents the user with a console comprising a script window, an output window and various buttons. The user enters their script (or loads it from a .js file) and clicks on Run. The script is run using duk_peval. On return I display the result, destroy the context and wait for the user to run again, perhaps after modifying the script. All this works well. However, consider the following test script:
add(2, 3);
function add(a, b){
    if (a == b) throw("args match");
    return(a + b);
}
If the two arguments in the call to add are equal, the script throws an error and the user can try again. This all works.
Now I can implement add as a C++ function thus:
static duk_ret_t add(duk_context *ctx){
    int a, b;
    a = duk_get_int(ctx, 0);
    b = duk_get_int(ctx, 1);
    if (a == b){
        duk_error(ctx, DUK_ERR_TYPE_ERROR, "args match");
    }
    duk_pop_2(ctx);
    duk_push_int(ctx, a+b);
    return (1);
}
As written, this passes the error to the fatal error handler. I know I must not try to use Duktape further, but I can display the error OK. However, I have no way back to the plugin.
The prescribed action is to exit or abort, but these both terminate the hosting application, which is absolutely unacceptable. Ideally, I need to be able to return from the duk_peval call with the error.
I have tried running the add function using duk_pcall from an outer C++ function. This catches the error and I can display it from that outer function. But when I return from that outer function, the script carries on when it should not, and the eventual return from the duk_peval call has no knowledge of the error.
I know I could use try/catch in the script, but with maybe dozens of calls to the OpenCPN APIs this is unrealistic. Percolating an error return code all the way back, maybe through several C++ functions and then to the top-level script, would also be very cumbersome, as the scripts and functions can be quite complex.
Can anyone please suggest a way of passing control back to my invoking plugin, preferably by returning from the duk_peval call?
I have cracked this at last.
Firstly, I use the following in error situations:
if (a == b){
    duk_push_error_object(ctx, DUK_ERR_ERROR, "args match");
    duk_throw(ctx);
}
If an error has been thrown, the returned values from duk_peval and duk_pcall are non-zero and the error object is on the stack, as documented.
It is all working nicely for me now.
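For reference, here is a minimal sketch of what the host side then looks like (ShowOutput is a hypothetical stand-in for however the console displays text, and script is the user's source):
if (duk_peval_string(ctx, script) != 0) {
    // Non-zero return: an error was thrown; the error object is at the stack top
    ShowOutput(duk_safe_to_string(ctx, -1)); // e.g. "Error: args match"
} else {
    ShowOutput(duk_safe_to_string(ctx, -1)); // the evaluation result
}
duk_pop(ctx);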

Visual Studio 2015 doesn't honour _Check_return_ or _Must_inspect_result_

I have a cross-platform build. On a *nix platform using GCC, I use __attribute__((warn_unused_result)) to notify the consumer of my API if a return value is not checked. I assumed that _Check_return_ does the same thing on MSVC, but it doesn't appear to be working the way I expect.
The following code does not produce a warning as I expect. Warnings are set to /Wall.
_Check_return_ _Must_inspect_result_ int foo()
{
    return 100;
}

int main()
{
    foo();
    return 0;
}
Code compiles without warnings. What am I doing wrong (or what should I be using to generate warnings for unchecked return codes)?
SAL annotations like _Check_return_ and _Must_inspect_result_ are only checked during code analysis builds (either by starting a code analysis build in the IDE or by building with the /analyze flag on the command line).
See "Understanding SAL" on MSDN for more information.

C++/CLI and GetLastError

I have created a C++ Test Project for my C++ library in Visual Studio 2010. The test project uses C++/CLI (/clr set) and I am having problems retrieving the last error set by my library functions; GetLastError always returns zero.
In the example below I want to test that the correct return value and last error is set by my Write function:
[TestMethod]
void Write_InvalidHandle_Error()
{
    char buffer[] = "Hello";
    DWORD actual = -1;
    DWORD expected = ERROR_INVALID_HANDLE;
    int actualRetVal = 0;
    int expectedRetVal = -1;
    HANDLE handle = INVALID_HANDLE_VALUE;
    actualRetVal = Write(handle, buffer);
    actual = GetLastError();
    Assert::AreEqual(expectedRetVal, actualRetVal);
    Assert::AreEqual(expected, actual);
}
I have checked my Write function and it does set the correct return value and last error, but the latter is not retrieved in my test method. The problem occurs even when I change the Write function to just set the error and return (and I call no other function before calling GetLastError in my test method):
int Write(HANDLE h, const char* buf)
{
    SetLastError(ERROR_INVALID_HANDLE);
    return -1;
}
Any idea how I can fix this? I assume there is a problem with C++/CLI because when I use my library outside of this testing scenario (pure C++) GetLastError works.
Relying on GetLastError()/SetLastError() across the managed/unmanaged boundary is problematic.
When using P/Invoke and the DllImport attribute you can (must) set the SetLastError property to get access to the native error code on the managed side.
When using C++/CLI, however, the compiler handles all marshalling for you, and explicitly does not set that flag.
You can read some more details about it in this blog post. The gist of it is:
If you use DllImport explicitly in C++, the same rules apply as with C#. But when you call unmanaged APIs directly from managed C++ code, neither GetLastError nor Marshal.GetLastWin32Error will work reliably.
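So if the test really does need the native error code, one option is to declare the import explicitly. A sketch (the DLL name and signature are assumptions; adapt them to the real export):
using namespace System;
using namespace System::Runtime::InteropServices;

// Explicit P/Invoke: SetLastError = true tells the marshaller to
// capture the native error code immediately after the call.
[DllImport("MyNativeLib.dll", SetLastError = true)]
int Write(IntPtr handle, String^ buffer);

// In the test, read the captured code through the marshaller:
// actual = Marshal::GetLastWin32Error();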
This is also covered at length in Chapter 9 of "Expert Visual C++/CLI" by Marcus Heege which is available on Google Books:
As mentioned before, for these native local functions, C++/CLI automatically generates P/Invoke metadata without the lasterror flag, because it is very uncommon to use the GetLastError value to communicate error codes within a project. However, the MSDN documentation on GetLastError allows you to use SetLastError and GetLastError for your own functions. Therefore, this optimization can theoretically cause wrong GetLastError values.
Basically, don't do it!
I would recommend using (native) C++ exceptions to communicate errors between managed and unmanaged code; C++/CLI supports these very nicely. If you can't modify your Write() function directly, you could create a wrapper function on the unmanaged side which uses GetLastError() and then throws an exception if necessary.
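A minimal sketch of such a wrapper (WriteChecked is a hypothetical name, and the message formatting is only illustrative):
#include <sstream>
#include <stdexcept>

int WriteChecked(HANDLE h, const char* buf)
{
    int ret = Write(h, buf);
    if (ret == -1) {
        // Capture the error code on the native side, before any managed
        // transition can disturb it, and surface it as a C++ exception.
        std::ostringstream msg;
        msg << "Write failed, GetLastError() = " << GetLastError();
        throw std::runtime_error(msg.str());
    }
    return ret;
}
The C++/CLI test can then catch std::exception& (or let the test framework report the failure) instead of calling GetLastError() across the managed boundary.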

Make a Simple wxWidgets Program Without Memory Leaks

I'm trying to make a basic wxWidgets program that doesn't leak any memory (I'm developing on Windows 7 and am using Visual Studio 2010 and trying to use CRT to check for leaks).
I started from the OpenGL sample and gradually worked it down. After adding CRT calls to the OnExit method of my wxApp object (the only place I ever even saw it mentioned), I realized that memory was being leaked everywhere.
I gradually worked it down more until I created this sample code, which makes CRT spit out a huge load of leaks:
#include <wx/glcanvas.h>
#include <wx/wxprec.h>
#ifndef WX_PRECOMP
    #include <wx/wx.h>
#endif
#ifdef __WXMSW__
    #include <wx/msw/msvcrt.h>
#endif
#if !defined(_INC_CRTDBG)// || !defined(_CRTDBG_MAP_ALLOC)
    #error "Debug CRT functions have not been included!"
#endif

class App : public wxApp {
public:
    bool OnInit(void);
    int OnExit(void);
};

bool App::OnInit(void) {
    if (!wxApp::OnInit()) return false;
    return true;
}
int App::OnExit(void) {
    return wxApp::OnExit();
}

int WINAPI WinMain(HINSTANCE h_instance, HINSTANCE h_prev_instance, wxCmdLineArgType cmd_line, int cmd_show) {
    int leaks = _CrtDumpMemoryLeaks();
    if (leaks) {
        int i=0, j=6/i; //Put a breakpoint here or throw an exception
    }
    return EXIT_SUCCESS;
}

#pragma comment(lib,"wxbase29ud.lib")
#pragma comment(lib,"wxmsw29ud_gl.lib")
#pragma comment(lib,"wxmsw29ud_core.lib")
#pragma comment(lib,"wxpngd.lib")
#pragma comment(lib,"wxzlibd.lib")
#pragma comment(lib,"comctl32.lib")
#pragma comment(lib,"rpcrt4.lib")
Notice that the class App is not used anywhere. The function definitions outside the class are necessary to prevent it from being optimized away. If the class App is not present, then no errors occur.
The questions are: why isn't this working? How can I make a leak-free wxWidgets program? How should I use _CrtDumpMemoryLeaks()? Why aren't there resources about this, and if there are, where are they? The best I could find was this, which only suggested using CRT but didn't actually say how. Help?
It is possible that these are not real memory leaks. When you call _CrtDumpMemoryLeaks() it goes through the heap looking for objects that have not been freed and displays them as leaks. Since you are calling it before your application has ended, anything that has been allocated on the heap will show up as a leak.
I'm pretty sure that wxWidgets creates some global objects (for example, I know there are wxEmptyString, wxDefaultPosition and so forth, and I daresay there are others that do actually perform some allocations) that will not be destroyed until after the end of your application. _CrtDumpMemoryLeaks() would need to be called after that point in order not to show false positives.
You can try to get the CRT to call _CrtDumpMemoryLeaks() automatically on program exit as explained on MSDN.
There is also a related question here that might help you.
Edit: I've tried this myself by adding the following code to the top of my App::OnInit() method, and the only leak reported is a 64-byte one, which matches my forced leak. So it doesn't look like all wx applications are leaky. However, I also tried it with your code and I do get leaks reported.
_CrtSetReportMode( _CRT_ERROR, _CRTDBG_MODE_FILE );
_CrtSetReportFile( _CRT_ERROR, _CRTDBG_FILE_STDERR );
int tmpDbgFlag = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);
tmpDbgFlag |= _CRTDBG_LEAK_CHECK_DF;
_CrtSetDbgFlag(tmpDbgFlag);
// Force a leak
malloc(64);
Edit 2: You need to include the following line after your App class definition so that wxWidgets uses your App class as the application object (and provides its own WinMain). I'm guessing that whatever it does in wxApp requires this line in order to clean itself up properly:
IMPLEMENT_APP(App)
Edit 3: I also found, in the wxWidgets page you linked to, that the startup code will automatically call _CrtSetDbgFlag() for you in debug mode. So you get leak detection without having to add the code yourself. You can test this by allocating some memory and not freeing it.
