MFC multithreading with delete[], dbgheap.c

I've got a strange problem and really don't understand what's going on.
I made my application multi-threaded using the MFC multithreading classes.
Everything works well so far, but now:
Somewhere in the beginning of the code I create the threads:
m_bucketCreator = new BucketCreator(128,128,32);
CEvent* updateEvent = new CEvent(FALSE, FALSE);
CWinThread** threads = new CWinThread*[numThreads];
for(int i=0; i<8; i++){
threads[i]=AfxBeginThread(&MyClass::threadfunction, updateEvent);
m_activeRenderThreads++;
}
this creates 8 threads working on this function:
UINT MyClass::threadfunction( LPVOID params ) //executed in new Thread
{
Bucket* bucket=m_bucketCreator.getNextBucket();
...do something with bucket...
delete bucket;
}
m_bucketCreator is a static member. Now I get some thread error in the destructor of Bucket on the attempt to delete a buffer (however, the way I understand it, this buffer should be in the memory of this thread, so I don't get why there is an error). On the attempt to delete[] buffer, the error happens in _CrtIsValidHeapPointer() in dbgheap.c.
Visual Studio reports that it has triggered a breakpoint, which can be due either to heap corruption or to the user pressing F12 (I didn't ;) ).
class BucketCreator {
public:
    BucketCreator();
    ~BucketCreator(void);

    void init(int resX, int resY, int bucketSize);

    Bucket* getNextBucket(){
        Bucket* bucket=NULL;
        //enter critical section
        CSingleLock singleLock(&m_criticalSection);
        singleLock.Lock();

        int height = min(m_resolutionY-m_nextY,m_bucketSize);
        int width = min(m_resolutionX-m_nextX,m_bucketSize);
        bucket = new Bucket(width, height);

        //leave critical section
        singleLock.Unlock();
        return bucket;
    }

private:
    int m_resolutionX;
    int m_resolutionY;
    int m_bucketSize;
    int m_nextX;
    int m_nextY;

    //multithreading:
    CCriticalSection m_criticalSection;
};
and class Bucket:
class Bucket : public CObject{
    DECLARE_DYNAMIC(RenderBucket)
public:
    Bucket(int a_resX, int a_resY){
        resX = a_resX;
        resY = a_resY;
        buffer = new float[3 * resX * resY];
        int buffersize = 3*resX * resY;
        for (int i=0; i<buffersize; i++){
            buffer[i] = 0;
        }
    }

    ~Bucket(void){
        delete[] buffer;
        buffer=NULL;
    }

    int getResX(){return resX;}
    int getResY(){return resY;}
    float* getBuffer(){return buffer;}

private:
    int resX;
    int resY;
    float* buffer;

    Bucket& operator = (const Bucket& other) { /*..*/}
    Bucket(const Bucket& other) {/*..*/}
};
Can anyone tell me what could be the problem here?
edit: this is the other static function I'm calling from the threads. Is this safe to do?
static std::vector<Vector3> generate_poisson(double width, double height, double min_dist, int k, std::vector<std::vector<Vector3> > existingPoints)
{
CSingleLock singleLock(&m_criticalSection);
singleLock.Lock();
std::vector<Vector3> samplePoints = std::vector<Vector3>();
...fill the vector...
singleLock.Unlock();
return samplePoints;
}

All the previous replies are sound. For the copy constructor, make sure that it doesn't just copy the buffer pointer; otherwise that will cause exactly this kind of problem. It needs to allocate a new buffer and copy the contents, because copying just the pointer value will cause an error in 'delete'. But I don't get the impression that the copy constructor will get called in your code.
I've looked at the code and I am not seeing any error in it as is. Note that the thread synchronization isn't even necessary in this getNextBucket code, since it's returning a local variable and those are per-thread.
Errors in ValidateHeapPointer occur because something has corrupted the heap, which happens when a pointer writes past a block of memory. Often it's a for() loop that goes too far, a buffer that wasn't allocated large enough, etc.
The error is reported during a call to 'delete' because that's when the heap is validated in debug mode. However, the error occurred before that point; the heap just happens to be checked only in 'new' and 'delete'. Also, it isn't necessarily related to the 'Bucket' class.
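For example, a classic off-by-one overrun like this (a made-up snippet, not taken from your code) corrupts the heap bookkeeping immediately but is only reported later, at delete[]:
int resX = 32, resY = 32;
float* buffer = new float[3 * resX * resY];
for (int i = 0; i <= 3 * resX * resY; i++)   // note the <=, which writes one float past the end of the block
    buffer[i] = 0.0f;
delete[] buffer;   // the debug heap check fires here, far away from the real bug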
What you need to do to find this bug, short of using tools like BoundsChecker or HeapValidator, is to comment out sections of your code until the error goes away; then you'll have found the offending code.
There is another method to narrow down the problem. In debug mode, include <crtdbg.h> in your code and sprinkle calls to _CrtCheckMemory() at various points of interest. That will trigger the error as soon as the heap is corrupted. Simply move the calls around in your code to narrow down the point at which the corruption begins to occur.
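For example, a rough sketch of that technique, adapted to the threadfunction from the question (debug builds only; in release builds _CrtCheckMemory simply returns TRUE and _ASSERTE compiles to nothing):
#include <crtdbg.h>

UINT MyClass::threadfunction( LPVOID params ) // executed in new thread
{
    _ASSERTE(_CrtCheckMemory());   // is the heap intact before we start?
    Bucket* bucket = m_bucketCreator.getNextBucket();
    _ASSERTE(_CrtCheckMemory());   // ...and after the allocation?
    // ...do something with bucket...
    _ASSERTE(_CrtCheckMemory());   // if this fires, the work above corrupted the heap
    delete bucket;
    _ASSERTE(_CrtCheckMemory());   // if this fires, the corruption happened during ~Bucket
    return 0;
}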
I don't know which version of Visual C++ you are using. If you're using an earlier one like VC++ 6.0, make sure that you are using the Multithreaded DLL version of the C Run-Time Library in the compiler options.

You're constructing a RenderBucket. Are you sure you're calling the 'Bucket' class's constructor from there? It should look like this:
class RenderBucket : public Bucket {
public:
    RenderBucket( int a_resX, int a_resY )
        : Bucket( a_resX, a_resY )
    {
    }
};
Initializers in the Bucket class to set the buffer to NULL are a good idea. Also, making the default constructor and copy constructor private will help make doubly sure those aren't being used. Remember: the compiler will create these automatically if you don't:
Bucket(); <-- default constructor
Bucket( int a_resx = 0, int a_resy = 0 ) <-- another way to end up with a default constructor
Bucket(const class Bucket &B) <-- copy constructor

You haven't made a private copy constructor, or any default constructor. If class Bucket is constructed via one of these implicitly-defined methods, buffer will either be uninitialized, or it will be a copied pointer made by a copy constructor.
The copy constructor for class Bucket is Bucket(const Bucket &B) -- if you do not explicitly declare a copy constructor, the compiler will generate a "naive" copy constructor for you.
In particular, if this object is assigned, returned, or otherwise copied, the copy constructor will copy the pointer to a new object. Eventually, both objects' destructors will attempt to delete[] the same pointer and the second attempt will be a double deletion, a type of heap corruption.
I recommend you make class Bucket's copy constructor private, which will cause attempted copy construction to generate a compile error. As an alternative, you could implement a copy constructor which allocates new space for the copied buffer.
Exactly the same applies to the assignment operator, operator=.
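For example, a deep-copying pair might look roughly like this (a sketch only, reusing the member names from the Bucket class above):
Bucket(const Bucket& other)
    : resX(other.resX), resY(other.resY)
{
    buffer = new float[3 * resX * resY];
    for (int i = 0; i < 3 * resX * resY; i++)
        buffer[i] = other.buffer[i];          // copy the contents, don't share the pointer
}

Bucket& operator=(const Bucket& other)
{
    if (this != &other) {
        float* newBuffer = new float[3 * other.resX * other.resY];
        for (int i = 0; i < 3 * other.resX * other.resY; i++)
            newBuffer[i] = other.buffer[i];
        delete[] buffer;                      // release the old buffer only after the new copy exists
        buffer = newBuffer;
        resX = other.resX;
        resY = other.resY;
    }
    return *this;
}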
The need for a copy constructor is one of the 55 tips in Scott Meyers' excellent book, Effective C++: 55 Specific Ways to Improve Your Programs and Designs:
This book should be required reading for all C++ programmers.
If you add:
class Bucket {
    /* Existing code as-is ... */
private:
    Bucket();                              // No default construction
    Bucket(const Bucket &B);               // No copy construction
    Bucket& operator= (const Bucket &B);   // No assignment
};
and re-compile, you are likely to find your problem.
There is also another possibility: If your code contains other uses of new and delete, then it is possible these other uses of allocated memory are corrupting the linked-list structure which defines the heap memory. It is common to detect this corruption during a call to delete, because delete must utilize these data structures.

Related

Code analysis C26408 — Replacing the m_pszHelpFilePath variable in InitInstance

In my application's InitInstance function, I have the following code to rewrite the location of the CHM Help Documentation:
CString strHelp = GetProgramPath();
strHelp += _T("MeetSchedAssist.CHM");
free((void*)m_pszHelpFilePath);
m_pszHelpFilePath = _tcsdup(strHelp);
It is all functional but it gives me a code analysis warning:
C26408 Avoid malloc() and free(), prefer the nothrow version of new with delete (r.10).
When you look at the official documentation for m_pszHelpFilePath it does state:
If you assign a value to m_pszHelpFilePath, it must be dynamically allocated on the heap. The CWinApp destructor calls free( ) with this pointer. You may want to use the _tcsdup( ) run-time library function to do the allocating. Also, free the memory associated with the current pointer before assigning a new value.
Is it possible to rewrite this code to avoid the code analysis warning, or must I add a __pragma?
You could (should?) use a smart pointer to wrap your reallocated m_pszHelpFilePath buffer. Although this is not entirely trivial, it can be accomplished without too much trouble.
First, declare an appropriate std::unique_ptr member in your derived application class:
class MyApp : public CWinApp // Presumably
{
// Add this member...
public:
std::unique_ptr<TCHAR[]> spHelpPath;
// ...
};
Then, you will need to modify the code that constructs and assigns the help path as follows (I've changed your C-style cast to an arguably better C++ cast):
// First three (almost) lines as before ...
CString strHelp = GetProgramPath();
strHelp += _T("MeetSchedAssist.CHM");
free(const_cast<TCHAR *>(m_pszHelpFilePath));
// Next, allocate the unique pointer's data and copy the string...
size_t strSize = static_cast<size_t>(strHelp.GetLength() + 1);
spHelpPath = std::make_unique<TCHAR[]>(strSize);
_tcscpy_s(spHelpPath.get(), strSize, strHelp.GetString()); // Use the "_s" 'safe' version!
// Now, we can use the embedded raw pointer for m_pszHelpFilePath ...
m_pszHelpFilePath = spHelpPath.get();
So far, so good. The data allocated in the smart pointer will be automatically freed when your application object is destroyed, and the code analysis warnings should disappear. However, there is one last modification we need to make, to prevent the MFC framework from attempting to free our assigned m_pszHelpFilePath pointer. This can be done by setting that to nullptr in the MyApp class override of ExitInstance:
int MyApp::ExitInstance()
{
// <your other exit-time code>
m_pszHelpFilePath = nullptr;
return CWinApp::ExitInstance(); // Call base class
}
However, this may seem like much ado about nothing and, as others have said, you may be justified in simply suppressing the warning.
Technically, you can take advantage of the fact that new / delete map to the usual malloc / free by default in Visual C++, and just go ahead and replace them. Portability won't suffer much, as MFC is not portable anyway. Sure, you can use unique_ptr<TCHAR[]> instead of direct new / delete, like this:
CString strHelp = GetProgramPath();
strHelp += _T("MeetSchedAssist.CHM");
std::unique_ptr<TCHAR[]> str_old(const_cast<TCHAR*>(m_pszHelpFilePath));
auto str_new = std::make_unique<TCHAR[]>(strHelp.GetLength() + 1);
_tcscpy_s(str_new.get(), strHelp.GetLength() + 1, strHelp.GetString());
m_pszHelpFilePath = str_new.release();
str_old.reset();
For robustness in case the new operator is replaced, and for the principle of least surprise, you should keep free / _tcsdup.
If you replace several of those CWinApp strings, I suggest writing a helper function for them, so that there's a single place that calls free / _tcsdup with the warnings suppressed.
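For instance, a rough sketch of such a helper (the name ReplaceCWinAppString is made up here; it assumes the member is declared LPCTSTR, as m_pszHelpFilePath is in CWinApp):
#include <tchar.h>

// Hypothetical helper: frees the old CWinApp-owned string and installs a copy of the new value.
void ReplaceCWinAppString(LPCTSTR& member, const CString& newValue)
{
#pragma warning(push)
#pragma warning(disable : 26408) // CWinApp's destructor calls free(), so the free()/_tcsdup() pair must stay
    free(const_cast<TCHAR*>(member));
    member = _tcsdup(newValue.GetString());
#pragma warning(pop)
}

// Usage in InitInstance: ReplaceCWinAppString(m_pszHelpFilePath, strHelp);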

Problems using malloc in D language: why writeln call the destructor twice in this example

I am trying to write a D wrapper for a C library (libmpdec) that stores its data using the standard C malloc function. But there are some nasty bugs in my program that I don't know how to solve.
So I have written the following test example, trying to understand this. The idea is to create a structure holding a pointer to a memory area that is allocated with malloc in the constructor, contains a zero-terminated C string, and is freed in the destructor. I can also print the string using printf. The problem arises when I try to implement a toString() method so that I can use the standard D function writeln. For some reason that I don't understand, the destructor seems to be called twice (once after writeln), and so a segmentation fault occurs.
import std.stdio;
import core.stdc.stdlib;
import std.string;
import core.stdc.string;
struct Prueba {
char* pointer;
string name;
this(string given_name)
{
writeln("calling the constructor");
pointer= cast (char*) malloc(char.sizeof*10);
name=given_name;
char* p= pointer;
*p= 'a';
p++;
*p= 'b';
p++;
*p= '\n';
p++;
*p= '\0';
}
~this()
{
writeln("\n calling the destructor");
free(pointer);
}
void print()
{
printf("Using printf %s \n",pointer);
}
string toString()
{
ulong len=strlen(pointer);
return cast(string) pointer[0..len];
}
}
void main()
{
writeln("version 1");
Prueba p=Prueba("a");
writeln("using writeln ",p);
p.print();
}
But if I store the result in a string variable like
string s=p.toString();
writeln("using writeln ",s);
The program just works! I cannot figure out why!
You can see both versions of my test program at
https://github.com/pdenapo/example_programs_in_D/tree/master/using_malloc
Many thanks for any help!
Update: It seems that writeln plays no role here. And I can get the
same result with something like
void probando(Prueba q)
{
q.print();
}
probando(p);
The problem seems to be that a copy of p is created when calling a function.
In cases like this, it's often a good idea to see if it's the same instance being destroyed. Adding &this to the writeln calls, I get this output:
version 1
calling the constructor at 6FBB70F960
Instance on stack: 6FBB70F960
using writeln ab
calling the destructor at 6FBB70F820
calling the destructor at 6FBB70F7F0
As we can see, the pointers are different, so there's two instances.
D structs are value types, and so are copied and moved. When you call a function taking a class parameter, a pointer is what's actually being passed, and it basically says 'the class instance you're looking for is over there'. With structs a copy is created, and suddenly you have two independent objects living their separate lives.
Of course, that's not what you want - Prueba isn't actually a copyable type, since having two copies will result in two calls to the destructor, and thus double freeing. To mark it as non-copyable, simply add @disable this(this); to disable the postblit, and the compiler will helpfully throw error messages at you when a copy would be created.
This will cause a compiler error on the writeln line, and you will have to manually call toString, e.g.: writeln("using writeln ", p.toString());
Note that a non-copyable struct may be passed to functions as ref, since that doesn't create a new copy. We can't really modify writeln to do that, but it's worth knowing for your own functions.

Another weird issue with Garbage Collection?

OK, so here's the culprit method:
class FunctionDecl
{
// More code...
override void execute()
{
//...
writeln("Before setting... " ~ name);
Glob.functions.set(name,this);
writeln("After setting." ~ name);
//...
}
}
And here's what happens:
If I omit the writeln("After setting." ~ name); line, the program crashes, right at this point.
If I keep it in (using the name attribute is the key, not the writeln itself), it works just fine.
So, I suppose this is automatically garbage collected? Why is that? (A pointer to some readable reference related to GC and D would be awesome)
How can I solve that?
UPDATE :
Just tried a GC.disable() at the very beginning of my code. And... automagically, everything works again! So, that was the culprit as I had suspected. The thing is : how is this solvable without totally eliminating Garbage Collection?
UPDATE II :
Here's the full code of functionDecl.d - "unnecessary" code omitted :
//================================================
// Imports
//================================================
// ...
//================================================
// C Interface for Bison
//================================================
extern (C)
{
void* FunctionDecl_new(char* n, Expressions i, Statements s) { return cast(void*)(new FunctionDecl(to!string(n),i,s)); }
void* FunctionDecl_newFromReference(char* n, Expressions i, Expression r) { return cast(void*)(new FunctionDecl(to!string(n),i,r)); }
}
//================================================
// Functions
//================================================
class FunctionDecl : Statement
{
// .. class variables ..
this(string n, Expressions i, Statements s)
{
this(n, new Identifiers(i), s);
}
this(string n, Expressions i, Expression r)
{
this(n, new Identifiers(i), r);
}
this(string n, Identifiers i, Statements s)
{
// .. implementation ..
}
this(string n, Identifiers i, Expression r)
{
// .. implementation ..
}
// .. other unrelated methods ..
override void execute()
{
if (Glob.currentModule !is null) parentModule = Glob.currentModule.name;
Glob.functions.set(name,this);
}
}
Now as for what Glob.functions.set(name,this); does:
Glob is an instance holding global definitions
functions is the class instance dealing with defined functions (it comes with a FunctionDecl[] list)
set simply does this: list ~= func;
P.S. I'm 99% sure it has something to do with this one : Super-weird issue triggering "Segmentation Fault", though I'm still not sure what went wrong this time...
I think the problem is that the C function is allocating the object, but D doesn't keep a reference. If FunctionDecl_new is called back-to-back in a tight memory environment, here's what would happen:
the first one calls, creating a new object. That pointer goes into the land of C, where the D GC can't see it.
The second one goes, allocating another new object. Since memory is tight (as far as the GC pool is concerned), it tries to run a collection cycle. It finds the object from (1), but cannot find any live pointers to it, so it frees it.
The C function uses that freed object, causing the segfault.
The segfault won't always run because if there's memory to spare, the GC won't free the object when you allocate the second one, it will just use its free memory instead of collecting. That's why omitting the writeln can get rid of the crash: the ~ operator allocates, which might just put you over the edge of that memory line, triggering a collection (and, of course, running the ~ gives the gc a chance to run in the first place. If you never GC allocate, you never GC collect either - the function looks kinda like gc_allocate() { if(memory_low) gc_collect(); return GC_malloc(...); })
There are three solutions:
Immediately store a reference in the FunctionDecl_new function in a D structure, before returning:
FunctionDecl[] fdReferences;
void* FunctionDecl_new(...) {
auto n = new FunctionDecl(...);
fdReferences ~= n; // keep the reference for later so the GC can see it
return cast(void*) n;
}
Call GC.addRoot on the pointer right before you return it to C. (I don't like this solution, I think the array is better, a lot simpler.)
Use malloc to create the object to give to C:
void* FunctionDecl_new(...) {
import std.conv : emplace;
import core.stdc.stdlib : malloc;
enum size = __traits(classInstanceSize, FunctionDecl);
auto memory = malloc(size)[0 .. size]; // need to slice so we know the size
auto obj = emplace!FunctionDecl(memory, /* args to ctor */); // create the object in the malloc'd block
return memory.ptr; // give the pointer to C
}
Then, of course, you ought to free the pointer when you know it is no longer going to be used, though if you don't, it isn't really wrong.
The general rule I follow, by the way, is that any memory that crosses a language barrier for storage (usage is different) ought to be allocated the way that language expects: so if you pass data to C or C++ for it to keep, allocate it in a C fashion, e.g. with malloc. This will lead to the least friction and the fewest surprises as it gets stored.
If the object is just being temporarily used, it is fine to pass a plain pointer to it, since a temp usage isn't stored or freed by the receiving function so there's less danger there. Your reference will still exist too, if nothing else, on the call stack.

vtk memory management when importing data

I am using vtkImageImport to convert from an opencv matrix to vtkImageData.
vtkImageData* convertImage(const cv::Mat& image)
{
int width = image.cols;
int height = image.rows;
vtkSmartPointer<vtkImageImport> importer = vtkSmartPointer<vtkImageImport>::New();
importer->SetDataScalarTypeToUnsignedChar();
importer->SetImportVoidPointer(image.data);
importer->SetWholeExtent(0,width-1,0, height-1, 0, 0);
importer->SetDataExtentToWholeExtent();
importer->Update();
return importer->GetOutput();
}
I have 2 questions on how pointers and memory are managed when doing this kind of import.
Does the importer create and allocate a new data pointer for the vtkImageData it creates, so that the VTK image is not affected when the OpenCV matrix is destroyed?
Will the returned vtkImageData* still be valid once the importer is destroyed, or should I return a smart pointer to keep the reference counting > 0? My reasoning is that if the importer uses smart pointers for the vtkImageData internally, then the image will be destroyed as the importer is destroyed. VTK examples always use the importer in the same scope as where the data is then used.
Thank you
The importer copies the data only when you ask it to.
void vtkImageImport::SetImportVoidPointer (void* ptr, int save); stores your void* pointer without copying the data. The save argument is used in the destructor of vtkImageImport: if save is 1, the stored pointer will not be deleted.
void SetImportVoidPointer (void *ptr) calls the previous method with save=1, so in your code the data from the cv::Mat will not be destroyed. Be careful: both objects use the same pointer, so modifying your cv::Mat will also change the importer's result.
void CopyImportVoidPointer (void *ptr, vtkIdType size); makes a copy of your data: in this case, internally, vtkImageImport will copy the data, store the new pointer, and delete it in its destructor. You can delete your cv::Mat; it will not affect the importer.
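For example, a rough sketch of the copying variant (the name convertImageCopy is made up; it assumes a single-channel CV_8UC1 image, matching the unsigned-char scalar type set on the importer):
vtkSmartPointer<vtkImageData> convertImageCopy(const cv::Mat& image)
{
    int width = image.cols;
    int height = image.rows;
    vtkSmartPointer<vtkImageImport> importer = vtkSmartPointer<vtkImageImport>::New();
    importer->SetDataScalarTypeToUnsignedChar();
    // Copy the pixel buffer instead of sharing it, so the cv::Mat can be released afterwards.
    importer->CopyImportVoidPointer(image.data,
        static_cast<vtkIdType>(image.total() * image.elemSize()));
    importer->SetWholeExtent(0, width - 1, 0, height - 1, 0, 0);
    importer->SetDataExtentToWholeExtent();
    importer->Update();
    return importer->GetOutput();
}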
For the second question, I'm not sure of the internal behavior for retrieving the output, but storing the output from a vtkAlgorithm in a vtkSmartPointer allows you to delete the algorithm without corrupting the data.
vtkSmartPointer<vtkImageData> convertImage(const cv::Mat& image)
{
int width = image.cols;
int height = image.rows;
vtkSmartPointer<vtkImageImport> importer = vtkSmartPointer<vtkImageImport>::New();
importer->SetDataScalarTypeToUnsignedChar();
importer->SetImportVoidPointer(image.data);
importer->SetWholeExtent(0,width-1,0, height-1, 0, 0);
importer->SetDataExtentToWholeExtent();
importer->Update();
vtkSmartPointer<vtkImageData> outp = importer->GetOutput();
importer->Delete(); // this line causes the destruction of the output, unless it's stored in a smartPointer.
return outp;
}
Importer->Delete() is just there as an example. I tried with a basic example, and I was able to use the output from a vtkAlgorithm without storing it in a smartPointer.
But reading the VTK official tutorials (section "Getting an Object with a Smart Pointer"): if you don't store the output in a smart pointer, the importer may be deleted at the end of the scope, and the data along with it.
That's why I'm not sure: I can still use the "direct" pointer returned by the convertImage method, but you should use smart pointers as described in the VTK tutorials. It's the best way to be sure your data stays valid.
Hope it helps.

JNA - Use structure array as byref argument

I know parts of this issue are covered by some posts here, and I have looked at them and tested some, but with no luck.
I have this native method signature which should populate the provided CBadgeData structure array with results:
int elc_GetBadges(int nHandle, char* cErr, int* nRecCount, CBadgeData** arr)
The CBadgeData structure is implemented as follows:
package test.elcprog;
import java.util.Arrays;
import java.util.List;
import com.sun.jna.Pointer;
import com.sun.jna.Structure;
public class CBadgeData extends Structure{
public static class ByReference extends CBadgeData implements Structure.ByReference { }
public int nBadgeID, nTrigger, nExtraData;
public String cName;
public CBadgeData(Pointer pointer){
super(pointer);
}
public CBadgeData(){ }
public String ToString() {
return nBadgeID + "," + nTrigger + "," + nExtraData + "," + cName;
}
@Override
protected List getFieldOrder() {
String[] s = new String[]{"nBadgeID","nTrigger","nExtraData","cName"};
return Arrays.asList(s);
}
}
My last try to craft this argument and call the method looked like this:
CBadgeData.ByReference[] badges = new CBadgeData.ByReference[max_items];
new CBadgeData.ByReference().toArray(badges);
int ret = inst.elc_GetBadges(handle, err, recCount, badges);
It fails with segmentation error.
My Question is what Java type should be provided here as an argument for the native CBadgeData** in the call to elc_GetBadges?
EDIT -1-
Populating the array myself (with or without terminating null pointer) didn't work and caused further Seg crashes. I then used Pointer[] arg as technomage suggested:
Pointer[] pointers = new Pointer[max_items];
for(int i=0; i<max_items; i++){
pointers[i] = new CBadgeData.ByReference().getPointer();
}
int ret = inst.elc_GetBadges(handle, err, recCount, pointers);
This caused no error but seems not to make any changes to the returned structs, which should contain 4 items in this case:
int bid = new CBadgeData(pointers[i]).nBadgeID; // this returns null for all items
Using explicit read() / write() on the struct led to Seg crashes again (on the read).
Any idea what I am still missing here?
EDIT -2-
Interestingly enough - using the Memory.get directly, after calling the native method, gets the correct results:
Memory m= (Memory)pointers[0];
System.out.println("1st int: "+m.getInt(0)); // this gets 24289 which is 5ee1
System.out.println("2nd int: "+m.getInt(4)); // this gets 3
System.out.println("3rd int: "+m.getInt(8)); // this gets 255
System.out.println("String: "+m.getString(12)); // this gets "Badge[5EE1]" as supposed
But the read() still crashes. Any thoughts?
I'm inferring that the CBadgeData** input is intended to be an array of pointers to CBadgeData.
As such, the Structure.ByReference tagging is correct.
Structure.toArray() is probably not appropriate here, or at least not necessary (it allocates a contiguous block of structs in memory). You can just populate your array with CBadgeData.ByReference instances.
Perhaps your callee is expecting a NULL pointer at the end of the array? I don't see another indicator of the array length to the callee.
CBadgeData.ByReference[] badges = new CBadgeData.ByReference[max_items+1];
for (int i=0;i < badges.length-1;i++) {
badges[i] = new CBadgeData.ByReference();
}
badges[badges.length-1] = null;
Pretty sure that works. If for whatever reason there's a bug handling Structure.ByReference[], I know that Pointer[] is reliable and will do the same thing.
EDIT
If you use Pointer[] instead of Structure.ByReference[] (please post a bug to the project site if Structure.ByReference[] does not work), you will have to manually call Structure.write/read before/after your native function call, since JNA will not know that the pointers reference structures that need to be synched with native memory. I'd bet, however, that the cause of your crashes when using Structure.ByReference[] was simply that JNA was automatically calling Structure.read() after the call and triggered the same error that you see when calling it explicitly.
If you get a segfault on read, it likely means that your structure fields aren't properly aligned or defined, or (less likely) that you have corrupt data that can't be read properly. To diagnose this, set jna.dump_memory=true and print out your struct after calling Structure.write() to see if the contents of the structure appear as you'd expect. It'd also help to post the native and JNA forms of your structure here, if possible.
