Visual Studio 2012 - vshost32-clr2.exe has stopped working

I'm creating a WinForms application in C# using Visual Studio 2012, and I'm getting an error when I debug it:
vshost32-clr2.exe has stopped working
I have already searched, but most results are for Visual Studio 2010 and lower, and the similar solutions I found don't seem applicable to Visual Studio 2012:
Properties -> Debug -> Enable unmanaged code debugging
Source: vshost32.exe crash when calling unmanaged DLL
Additional details:
My project doesn't use any DLLs.
As far as I can tell from my progress so far, the crash only occurs when the width is 17.
I use the following code:
Bitmap tmp_bitmap = new Bitmap(Width, Height, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
Rectangle rect = new Rectangle(0, 0, 16, tmp_bitmap.Height);
System.Drawing.Imaging.BitmapData bmpData =
    tmp_bitmap.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadWrite,
                        tmp_bitmap.PixelFormat);
unsafe
{
    // Get the address of the first pixel of the bitmap.
    byte* ptr = (byte*)bmpData.Scan0;
    int bytes = Width * Height * 3; // 124830 [total length of a 190x219 24-bit bitmap]
    int b; // individual byte
    for (int i = 0; i < bytes; i++)
    {
        _ms.Position = EndOffset - i; // move the stream's position
        b = _ms.ReadByte();           // read one byte at that position
        *ptr = Convert.ToByte(b);
        ptr++;
        // fix "width is odd" bug
        if (Width % 4 != 0)
            if ((i + 1) % (Width * 3) == 0 && (i + 1) * 3 % Width < Width - 1)
            {
                ptr += 2;
            }
    }
    // Unlock the bits.
    tmp_bitmap.UnlockBits(bmpData);
}
I think posting my code is necessary, since the crash only occurs when that particular value is passed to my method.
I hope you can help me fix this problem.
Thank you very much in advance!

Not sure if this is the same issue, but I had a very similar one which was resolved (it vanished) when I unchecked "Enable the Visual Studio hosting process" under the Debug section of the project's properties. I also enabled native code debugging.

This issue can be related to debugging the application as "Any CPU" under a 64-bit OS; try setting the target CPU to x86.

Adding my 2 cents since I ran into this today.
In my case, a call to a printer was passing some invalid value, and it seems it sent the debugger to sleep with the fishes.
If you run into this, see if you can pinpoint the line, and make sure there is no funny business around a call-out (like a printing service).

The solution below worked for me:
Go to the Project -> Properties -> Debug tab.
Uncheck the 'Enable the Visual Studio hosting process' checkbox.
Check the 'Enable native code debugging' option.
Hope this helps.
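A side note on the width-17 clue from the question above: GDI+ pads each row of a 24bpp bitmap out to a 4-byte boundary, so writing Width * Height * 3 bytes sequentially from Scan0 overruns the locked buffer whenever Width * 3 is not a multiple of 4. A minimal sketch of that arithmetic (C++, with a hypothetical helper name):

#include <cstdio>

// Bytes per row of a 24bpp bitmap: 3 bytes per pixel, rounded up to a 4-byte boundary.
static int PaddedStride24bpp(int width)
{
    return (width * 3 + 3) & ~3;
}

int main()
{
    std::printf("width 16 -> %d bytes per row\n", PaddedStride24bpp(16)); // 48: no padding
    std::printf("width 17 -> %d bytes per row\n", PaddedStride24bpp(17)); // 52: 51 data bytes + 1 padding byte
    return 0;
}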

Related

Different size_t default value when running under the VS2013 debugger vs. the command line

Running the following code via the Visual Studio debugger executes successfully; the "count" variable is default-initialized to 0.
If I run via the command line, I get random behaviour and my EXPECT_EQ( ... ) fails.
size_t expectedCount = actual.length() - expected.length();
position += 12;
size_t count;
for (size_t i = position; i < actual.length(); ++i) {
    if (actual.at(i) == 'a')
        ++count;
}
EXPECT_EQ(expectedCount, count);
I'm assuming this is because Visual Studio gives me a clean stack (everything is 0), whereas the command line has lingering garbage?
In function scope, the declaration size_t count; does not initialize the variable. Use size_t count{}; instead.
For more info on initialization, see
Variable initialization in C++.
Your debug build may happen to set count to 0 due to the nature of that build configuration, but a release build will not. You need to initialize count to zero. Always initialize variables.
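To make the two declarations concrete, here is a minimal sketch (not tied to the original test):

#include <cstddef>

void Demo()
{
    std::size_t a;     // function scope, default-initialized: holds an indeterminate value
    std::size_t b{};   // value-initialized: guaranteed to be zero
    std::size_t c = 0; // equivalent explicit form
    (void)a; (void)b; (void)c; // silence unused-variable warnings
}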

Viewing call stack for all threads when debugging a multithreaded Windows CE application

I'm working with Visual Studio 2008, developing native C++ code for a Windows CE 6.0 platform. Consider the following multithreaded application:
#include "stdafx.h"
void IncrementCounter(int& counter)
{
if (++counter >= 1000)
{
counter = 0;
}
}
unsigned long ThreadFunction(void* arguments)
{
int threadCounter = 0;
while (true)
{
Sleep(20);
IncrementCounter(threadCounter);
}
return 0;
}
int _tmain(int argc, _TCHAR* argv[])
{
CreateThread(
NULL,
0,
(LPTHREAD_START_ROUTINE)ThreadFunction,
NULL,
0,
NULL
);
int mainCounter = 0;
while (true)
{
Sleep(20);
IncrementCounter(mainCounter);
}
return 0;
}
When I build this to run on my Windows 7 dev machine and run a debug session from Visual Studio with a breakpoint on the counter = 0; statement, execution will eventually break and two threads will be displayed in the "Threads" debug window. I can switch back and forth between the two threads using either a double-click or right-click -> "Switch to Thread", and see a call stack, browse source, and inspect symbols (for the call stack frames within my application code) for both threads. However, when I do the same on Windows CE, connecting via ActiveSync/WMDC (I have tried both our custom CE 6.0 hardware with an in-house OS and SDK, and an old Windows Mobile 5.0 PDA with the stock MS SDK), I can see a call stack and browse source for the thread in which the break has taken place (where the current execution point is within my application code), but I don't get anything useful for the other thread, which is currently blocked in kernel space waiting for its sleep timeout.
Does anyone know whether it's possible to get this working better on Windows CE? I'm guessing it might be something to do with the debugger not knowing where to find the .pdb symbol files for the WinCE kernel elements, or perhaps I need to be running a debug OS?
"Windows CE 6 remote debugging. No call stack when pause program" describes the same issue, but doesn't really provide a solution.
Thanks,
Richard
Probably it's because of a missing .pdb file for coredll.dll. If you are creating the image for your device you will have access to this file; otherwise, I'm afraid, it's platform dependent.
You can find here that this issue is considered to be by design in VS2005, so maybe it's the same for VS2008:
http://connect.microsoft.com/VisualStudio/feedback/details/190785/unable-to-debug-windows-mobile-application-that-is-in-a-system-call
At the following link you can find some instructions for finding the call stack of a "Thread That Is Not Running" using Platform Builder:
https://distrinet.cs.kuleuven.be/projects/SEESCOA/internal/workpackages/workpackage6/Task6dot2/ESCE/classes/331.pdf
Since I'm only using VS 2005, I cannot confirm whether it's of any help.
If logging is not sufficient (as was suggested in the SO link you provided), then to find the call stack of a thread like the one in your example, I suggest using the GetThreadCallStack function. Here is a step-by-step procedure:
1 - Add the following code to your project:
typedef struct _CallSnapshotEx {
    DWORD dwReturnAddr;
    DWORD dwFramePtr;
    DWORD dwCurProc;
    DWORD dwParams[4];
} CallSnapshotEx;

#define STACKSNAP_EXTENDED_INFO 2

DWORD dwGUIThread;

void DumpGUIThreadCallStack() {
    HINSTANCE hCore = LoadLibrary(_T("coredll.dll"));
    typedef ULONG (*GETTHREADCALLSTACK)(HANDLE hThrd, ULONG dwMaxFrames, LPVOID lpFrames[], DWORD dwFlags, DWORD dwSkip);
    GETTHREADCALLSTACK pGetThreadCallStack = (GETTHREADCALLSTACK)GetProcAddress(hCore, _T("GetThreadCallStack"));
    if (!pGetThreadCallStack)
        return;

#define MAX_FRAMES 40
    CallSnapshotEx lpFrames[MAX_FRAMES];
    DWORD dwCnt = pGetThreadCallStack((HANDLE)dwGUIThread, MAX_FRAMES, (void**)lpFrames, STACKSNAP_EXTENDED_INFO, 0);
    TCHAR szBuff[64];
    for (DWORD i = 0; i < dwCnt; ++i) {
        wsprintf(szBuff, L"[%d] %p\n", i, lpFrames[i].dwReturnAddr);
        OutputDebugString(szBuff);
    }
}
It will dump the call frames' return addresses to the Output window (sample output is in point 3).
2 - Initialize dwGUIThread in WinMain:
dwGUIThread = GetCurrentThreadId();
3 - Execute DumpGUIThreadCallStack(); before the actual breakpoint inside ThreadFunction. It will write text similar to this to the Output window:
[0] 8C04D2C4
[1] 8C04D34C
[2] 40026D48
[3] 000111F4 <--- 1
[4] 00011BAC <--- 2
[5] 4003C2DC
1 and 2 are return addresses that you are interested in, and you want to find symbols nearest to them.
4 - While inside the debugger, switch to disassembly mode (right-click on the source file and choose 'Go To Disassembly'). In this mode, at the top of the window, you will see an Address: line. Copy-paste the addresses from the Output window into it; in my case, 000111F4 directs me to the following lines:
while (true)
{
Sleep(20);
000111F0 mov r0, #0x14
000111F4 bl 0001193C // <--- 1
IncrementCounter(mainCounter);
which gives you what your GUI thread is actually doing.
The Visual Studio debugger allows you to execute functions from the Immediate window, but I was unable to call DumpGUIThreadCallStack; I always get 'Error: function evaluation not supported'.
To find the nearest symbols for frame return addresses you can also use .map files together with .cod files (/FAcs compiled sources); there are some good tutorials on that on Google.
The above example was tested with VS 2005 and the Standard SDK 5.0 on a WCE 6.0 (end-user) device.
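For context, here is roughly how the dump from step 1 could be wired into the question's sample (reusing DumpGUIThreadCallStack and IncrementCounter as defined above; the exact placement, per step 3, is my assumption):

unsigned long ThreadFunction(void* arguments)
{
    int threadCounter = 0;
    while (true)
    {
        Sleep(20);
        DumpGUIThreadCallStack();        // dump the GUI thread's frames to the Output window
        IncrementCounter(threadCounter); // the breakpoint on "counter = 0;" fires in here
    }
    return 0;
}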

What is the difference between VC++ 2010 Express and Borland C++ 3.1 when compiling a simple C++ code file?

I no longer know what to think or what to do. The following code compiles fine in both IDEs, but in the VC++ case it causes weird heap-corruption messages like:
"Windows has triggered a breakpoint in Lab4.exe.
This may be due to a corruption of the heap, which indicates a bug in Lab4.exe or any of the DLLs it has loaded.
This may also be due to the user pressing F12 while Lab4.exe has focus.
The output window may have more diagnostic information."
It happens when executing the Task1_DeleteMaxElement function; I have left comments there.
Nothing like that happens if the code is compiled with Borland C++ 3.1, and everything works as expected.
So... what's wrong with my code or VC++?
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <memory.h>
void PrintArray(int *arr, int arr_length);
int Task1_DeleteMaxElement(int *arr, int arr_length);
int main()
{
int *arr = NULL;
int arr_length = 0;
printf("Input the array size: ");
scanf("%i", &arr_length);
arr = (int*)realloc(NULL, arr_length * sizeof(int));
srand(time(NULL));
for (int i = 0; i < arr_length; i++)
arr[i] = rand() % 100 - 50;
PrintArray(arr, arr_length);
arr_length = Task1_DeleteMaxElement(arr, arr_length);
PrintArray(arr, arr_length);
getch();
return 0;
}
void PrintArray(int *arr, int arr_length)
{
printf("Printing array elements\n");
for (int i = 0; i < arr_length; i++)
printf("%i\t", arr[i]);
printf("\n");
}
int Task1_DeleteMaxElement(int *arr, int arr_length)
{
printf("Looking for max element for deletion...");
int current_max = arr[0];
for (int i = 0; i < arr_length; i++)
if (arr[i] > current_max)
current_max = arr[i];
int *temp_arr = NULL;
int temp_arr_length = 0;
for (int j = 0; j < arr_length; j++)
if (arr[j] < current_max)
{
temp_arr = (int*)realloc(temp_arr, temp_arr_length + 1 * sizeof(int)); //if initial array size more then 4, breakpoint activates here
temp_arr[temp_arr_length] = arr[j];
temp_arr_length++;
}
arr = (int*)realloc(arr, temp_arr_length * sizeof(int));
memcpy(arr, temp_arr, temp_arr_length);
realloc(temp_arr, 0); //if initial array size is less or 4, breakpoint activates at this line execution
return temp_arr_length;
}
My guess is VC++2010 is rightly detecting memory corruption, which is ignored by Borland C++ 3.1.
How does it work?
For example, when allocating memory for you, VC++2010's realloc could well "mark" the memory around it with some special value. If you write over those values, realloc detects the corruption, and then crashes.
The fact that it works with Borland C++ 3.1 is pure luck. It is a very, very old compiler (20 years!), and is thus more tolerant/ignorant of this kind of memory corruption (until some random, apparently unrelated crash occurs in your app).
What's the problem with your code?
The source of your error:
temp_arr = (int*)realloc(temp_arr, temp_arr_length + 1 * sizeof(int))
For the following temp_arr_length values, on a 32-bit build, the allocation will be:
0 : 4 bytes = 1 int when you expect 1 (OK)
1 : 5 bytes = 1.25 ints when you expect 2 (Error!)
2 : 6 bytes = 1.5 ints when you expect 3 (Error!)
You got your operator precedence wrong. As you can see:
temp_arr_length + 1 * sizeof(int)
should be instead
(temp_arr_length + 1) * sizeof(int)
You allocated too little memory, and thus wrote well beyond what was allocated for you.
Edit (2012-05-18)
Hans Passant commented on allocator diagnostics. I took the liberty of copying his comments here until he writes his own answer (I've already seen comments disappear on SO):
It is Windows that reminds you that you have heap corruption bugs, not VS. BC3 uses its own heap allocator so Windows can't see your code mis-behaving. Not noticing these bugs before is pretty remarkable but not entirely impossible.
[...] The feature is not available on XP and earlier. And sure, one of the reasons everybody bitched about Vista. Blaming the OS for what actually were bugs in the program. Win7 is perceived as a 'better' OS in no small part because Vista forced programmers to fix their bugs. And no, the Microsoft CRT has implemented malloc/new with HeapAlloc for a long time. Borland had a history of writing their own, beating Microsoft for a while until Windows caught up.
[...] the CRT uses a debug allocator like you describe, but it generates different diagnostics. Roughly, the debug allocator catches small mistakes, Windows catches gross ones.
I found the following links explaining what is done to memory by Windows/CRT allocators before and after allocation/deallocation:
http://www.codeguru.com/cpp/w-p/win32/tutorials/article.php/c9535/Inside-CRT-Debug-Heap-Management.htm
https://stackoverflow.com/a/127404/14089
http://www.nobugs.org/developer/win32/debug_crt_heap.html#table
The last link contains a table that I printed out and always keep near me at work (it was this table I was searching for when I found the first two links).
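For quick reference, these are the fill bytes commonly cited for the MSVC debug heap and runtime checks (as described in the links above; worth double-checking against your CRT version):

// 0xCD  "clean land"    : freshly allocated memory from the debug heap
// 0xFD  "no man's land" : guard bytes placed before and after each allocation
// 0xDD  "dead land"     : memory freed through the debug heap
// 0xCC                  : uninitialized stack memory under /RTC1 (or the old /GZ)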
If it is crashing in realloc, then you are overstepping the bookkeeping memory of malloc & free.
The incorrect code is as below:
temp_arr = (int*)realloc(temp_arr, temp_arr_length + 1 * sizeof(int));
should be
temp_arr = (int*)realloc(temp_arr, (temp_arr_length + 1) * sizeof(int));
Due to the precedence of * over +, on the next run of the loop, when you expect realloc to be passed 8 bytes, it is passed only 5. So in your second iteration you will be writing 3 bytes into someone else's memory, which leads to memory corruption and an eventual crash.
Also
memcpy(arr, temp_arr, temp_arr_length);
should be
memcpy(arr, temp_arr, temp_arr_length * sizeof(int) );
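Putting both fixes together, here is a sketch of the corrected tail of Task1_DeleteMaxElement. I have also swapped realloc(temp_arr, 0) for free(temp_arr), since realloc with size zero is implementation-defined and its return value was being discarded anyway:

for (int j = 0; j < arr_length; j++)
    if (arr[j] < current_max)
    {
        // (temp_arr_length + 1) ints, not temp_arr_length + 4 bytes
        temp_arr = (int*)realloc(temp_arr, (temp_arr_length + 1) * sizeof(int));
        temp_arr[temp_arr_length] = arr[j];
        temp_arr_length++;
    }
arr = (int*)realloc(arr, temp_arr_length * sizeof(int)); // caveat: the caller's pointer is not updated if realloc moves the block
memcpy(arr, temp_arr, temp_arr_length * sizeof(int));    // copy ints, not bytes
free(temp_arr);
return temp_arr_length;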

boost::posix_time fails in release build

I want to open a new log file each time the program runs, so I create a filename from the current time.
FILE * fplog;

void OpenLog()
{
    boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
    char buf[256];
    sprintf(buf, "ecrew%d%02d%02d_%02d%02d%02d.log",
        now.date().year(), now.date().month(), now.date().day(),
        now.time_of_day().hours(), now.time_of_day().minutes(), now.time_of_day().seconds());
    fplog = fopen(buf, "w");
}
This works perfectly in a debug build, producing files with names such as
ecrew20110309_141506.log
However, the same code fails strangely in a release build:
ecrew198619589827196617_141338.log
BTW, this also fails in the same way:
boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
char buf[256];
boost::gregorian::date day(boost::gregorian::day_clock::local_day());
sprintf(buf, "ecrew%d%02d%02d_%02d%02d%02d.log",
    day.year(), day.month(), day.day(),
    now.time_of_day().hours(), now.time_of_day().minutes(), now.time_of_day().seconds());
fplog = fopen(buf, "w");
This works:
boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
char buf[256];
sprintf(buf, "ecrew%s_%02d%02d%02d.log",
    to_iso_string(boost::gregorian::day_clock::local_day()).c_str(),
    now.time_of_day().hours(), now.time_of_day().minutes(), now.time_of_day().seconds());
fplog = fopen(buf, "w");
I'd still be curious why the previous two versions fail in a release build but work in debug.
Okay, I'm a bit late, but since I stumbled onto your question while looking for the answer myself (day_clock::local_day() gives weird results when compiled as release, here on Win XP + Boost 1.46), I thought I should come back with what worked for me.
The data seems to be stored in a 16-bit manner (I only use year, month and day), but when you read it you get a 32-bit integer, and whatever the bug is, it either writes garbage into the top bits or doesn't clear them out before writing the lower bytes.
So my workaround is just to zero out the topmost 16 bits:
date todaysdate(day_clock::local_day());
int year = todaysdate.year() & 0xFFFF;
instead of say:
date todaysdate(day_clock::local_day());
int year = todaysdate.year();
and it works well for me anyway.
Valmond
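A plausible root cause, offered as an assumption rather than something confirmed above: year(), month() and day() return Boost wrapper types rather than plain int, and passing non-int types through a printf-style varargs call is undefined behaviour, which can easily differ between debug and release builds. Casting explicitly sidesteps the issue:

boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
char buf[256];
// force plain ints through the varargs call
sprintf(buf, "ecrew%04d%02d%02d_%02d%02d%02d.log",
    static_cast<int>(now.date().year()),
    static_cast<int>(now.date().month()),
    static_cast<int>(now.date().day()),
    static_cast<int>(now.time_of_day().hours()),
    static_cast<int>(now.time_of_day().minutes()),
    static_cast<int>(now.time_of_day().seconds()));
fplog = fopen(buf, "w");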

Freetype2 failing under WoW64

I built a TTF-to-D3D-texture function using FreeType2 (2.3.9) to generate grayscale maps from the fonts. It works great under native Win32; however, on WoW64 it just explodes (well, FT_Done and FT_Load_Glyph do). From some debugging, it seems to be a problem with HeapFree as called by free from FT_Free.
I know it should work, as games like WCIII, which to the best of my knowledge use FreeType2, run fine. This is my code, stripped of the D3D code (which causes no problems on its own):
FT_Face pFace = NULL;
FT_Error nError = 0;
FT_Byte* pFont = static_cast<FT_Byte*>(ARCHIVE_LoadFile(pBuffer, &nSize));
if((nError = FT_New_Memory_Face(pLibrary, pFont, nSize, 0, &pFace)) == 0)
{
    FT_Set_Char_Size(pFace, nSize << 6, nSize << 6, 96, 96);
    for(unsigned char c = 0; c < 95; c++)
    {
        if(!FT_Load_Glyph(pFace, FT_Get_Char_Index(pFace, c + 32), FT_LOAD_RENDER))
        {
            FT_Glyph pGlyph;
            if(!FT_Get_Glyph(pFace->glyph, &pGlyph))
            {
                LOG("GET: %c", c + 32);
                FT_Glyph_To_Bitmap(&pGlyph, FT_RENDER_MODE_NORMAL, 0, 1);
                FT_BitmapGlyph pGlyphMap = reinterpret_cast<FT_BitmapGlyph>(pGlyph);
                FT_Bitmap* pBitmap = &pGlyphMap->bitmap;
                const size_t nWidth = pBitmap->width;
                const size_t nHeight = pBitmap->rows;
                // add to texture atlas
            }
        }
    }
}
else
{
    FT_Done_Face(pFace);
    delete pFont;
    return FALSE;
}
FT_Done_Face(pFace);
delete pFont;
return TRUE;
}
ARCHIVE_LoadFile returns blocks allocated with new.
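One thing worth ruling out, offered as an assumption since ARCHIVE_LoadFile isn't shown: if the block is allocated with new[], releasing it with plain delete is undefined behaviour and can itself corrupt the heap. The matching pair would be:

// hypothetical allocation inside ARCHIVE_LoadFile:
//     return new FT_Byte[nSize];
// matching release at the call site:
delete[] pFont; // not: delete pFont;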
As a secondary question: I would like to render a font using pixel sizes. I came across FT_Set_Pixel_Sizes, but I'm unsure whether this stretches the font to fit the size or bounds it to a size. What I would like to do is render all the glyphs at, say, 24px (MS Word size here), then turn them into a signed distance field in a 32px area.
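For reference, the call itself is small; passing 0 for one dimension tells FreeType to reuse the other, so a 24px request like the one described might look like this:

// request 24-pixel glyphs; a width of 0 means "same as height"
FT_Set_Pixel_Sizes(pFace, 0, 24);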
Update
After much fiddling, I got a test app to work, which leads me to think the problems arise from threading, as my code runs in a secondary thread. I have compiled FreeType into a static lib using the multithreaded DLL runtime, while my app uses the multithreaded static libs; I'm going to see if I can set up a multithreaded test.
I also updated to 2.4.4 to see if the problem was a known but already-fixed bug; it didn't help, however.
Update 2
After some more fiddling, it turns out I wasn't using the correct lib for 2.4.4. After fixing that, the test app works 100%, but the main app still crashes when FT_Done_Face is called; it still seems to be a crash in Windows' heap management. Is it possible that there is a bug in FreeType2 that makes it blow up under user threads?
