Managed C++ memory leak

I am getting a memory leak whenever a new RPC thread in a DCOM server (C++ DCOM server) invokes the following managed C++ method:
void CToolDataClient::SetLotManagerActive(bool bLotManagerActive)
{
    if (m_toolDataManager != nullptr)
    {
        m_toolDataManager->LotActive = bLotManagerActive;
    }
}
I get the managed C++ object pointer using the following code:
typedef bool (*FPTR_CREATEINTERFACE)(CToolDataInterface** ppInterface);
FPTR_CREATEINTERFACE fnptr = (FPTR_CREATEINTERFACE)GetProcAddress(hModule, "CreateInstance");
if (NULL != fnptr)
{
    CICELogger::Instance()->LogMessage("CToolDataManager::CToolDataManager", Information,
        "Created instance of DataManagerBridge");
    fnptr(&m_pToolDataInterface);
}
This is how I invoke the managed call in the DCOM server's C++ portion:
void CToolDataManager::SetLotManagerActive(bool bLotManagerActive)
{
    if (m_pToolDataInterface != NULL)
    {
        m_pToolDataInterface->SetLotManagerActive(bLotManagerActive);
    }
}
The call stack given below indicates the location of the memory leak. Is there any way to solve this memory leak? Please help me.
ntdll!RtlDebugAllocateHeap+000000E1
ntdll!RtlAllocateHeapSlowly+00000044
ntdll!RtlAllocateHeap+00000E64
mscorwks!EEHeapAlloc+00000142
mscorwks!EEHeapAllocInProcessHeap+00000052
**mscorwks!operator new[]+00000025
mscorwks!SetupThread+00000238
mscorwks!IJWNOADThunk::FindThunkTarget+00000019
mscorwks!IJWNOADThunkJumpTargetHelper+0000000B
mscorwks!IJWNOADThunkJumpTarget+00000048
ICEScheduler!CToolDataManager::SetLotManagerActive+00000025** (e:\projects\ice\ice_dev\trunk\source\application source\iceschedulersystem\icescheduler\tooldatamanager.cpp, 250)
ICEScheduler!SetLotManagerActive+00000014 (e:\projects\ice\ice_dev\trunk\source\application source\iceschedulersystem\icescheduler\schddllapi.cpp, 589)
ICELotControl!CLotDetailsHandler::SetLotManagerStatus+0000006C (e:\projects\ice\ice_dev\source\application source\icelotsystem\icelotcontrol\lotdetailshandler.cpp, 1823)
ICELotControl!CLotManager::StartJob+00000266 (e:\projects\ice\ice_dev\source\application source\icelotsystem\icelotcontrol\lotmanager.cpp, 205)
RPCRT4!Invoke+00000030
RPCRT4!NdrStubCall2+00000297
RPCRT4!CStdStubBuffer_Invoke+0000003F
OLEAUT32!CUnivStubWrapper::Invoke+000000C5
ole32!SyncStubInvoke+00000033
ole32!StubInvoke+000000A7
ole32!CCtxComChnl::ContextInvoke+000000E3
ole32!MTAInvoke+0000001A
ole32!AppInvoke+0000009C
ole32!ComInvokeWithLockAndIPID+000002E0
ole32!ThreadInvoke+000001CD
RPCRT4!DispatchToStubInC+00000038
RPCRT4!RPC_INTERFACE::DispatchToStubWorker+00000113
RPCRT4!RPC_INTERFACE::DispatchToStub+00000084
RPCRT4!RPC_INTERFACE::DispatchToStubWithObject+000000C0
RPCRT4!LRPC_SCALL::DealWithRequestMessage+000002CD
RPCRT4!LRPC_ADDRESS::DealWithLRPCRequest+0000016D
RPCRT4!LRPC_ADDRESS::ReceiveLotsaCalls+0000028F

First, is LotActive a member variable/field (pure data) or a property?
I think that it is a property, and before it can be set, the JIT has to compile the code for the setter. In desktop .NET, native code produced by JIT compilation is not garbage collected; it exists for the lifetime of the AppDomain, so it can look like a leak.
Can you check whether each call to this function leaks another object, or whether the leak occurs just once?

Related

Caching JNI objects and thread-safety (in Android)

I'm writing a C++ app with native threads (pthreads) and I need to call some Java methods, etc. I'm not sure which JNI objects can be safely cached, i.e. stored in my C++ object for use later, possibly/probably by a different thread. I do know that if my class's methods can be called by different threads I mustn't cache the JNIEnv, but instead cache the JavaVM and get a JNIEnv by attaching the current thread. But does that also mean I can't cache anything obtained from a JNIEnv? I need to use the objects obtained by the following JNIEnv methods:
FindClass, GetMethodID, NewObject, NewGlobalRef
Do those stay valid across threads, or do I have to get new ones every time? If the latter, is there a way to create an object in one native thread and be able to access the same object in a different thread?
JNI methods like FindClass, GetMethodID, and GetFieldID are expensive operations that are guaranteed to return the same result over the life of the JVM. Since these operations are time consuming, it is wise to store their results somewhere for later reuse on the native side (this is caching).
JNI caching concerns only these JNI function calls; caching any other C++ or Java object is a different topic. (Just to be clear.)
The cached classes, methods and fields do not depend on the thread they are retrieved from, so they are valid across different threads. At most you have to perform thread safe operations when getting or setting some object's field with Set<type>Field or Get<type>Field.
Since FindClass returns a local reference to the class object, you have to turn it into a global reference to guarantee its reuse after the function that retrieves it ends. You can achieve this by using NewGlobalRef:
jclass tmp_double_Class = env->FindClass( "java/lang/Double" ); // Check for exceptions!
double_Class = static_cast<jclass>( env->NewGlobalRef( tmp_double_Class ) );
if( double_Class == NULL )
    return;
env->DeleteLocalRef( tmp_double_Class );
Here is a complete example of the whole JNI caching topic:
MyJni.cpp:
// Just a shortcut for checking for exceptions
#define CHECK_JNI_EXCEPTION( JNIenv ) \
    if( JNIenv->ExceptionCheck() )\
    {\
        JNIenv->ExceptionClear();\
        return JNI_FALSE;\
    }\

// Global variables holding the cached class, constructor and fields
jclass point_Class;
jmethodID point_ctor_Method;
jfieldID point_x_Field;
jfieldID point_y_Field;

JNIEXPORT jboolean JNICALL Java_com_company_package_MyClass_nativeInit( JNIEnv * env,
                                                                        jclass clazz )
{
    // Cache the android.graphics.Point class, constructor and fields
    jclass tmp_point_Class = env->FindClass( "android/graphics/Point" );
    CHECK_JNI_EXCEPTION( env )
    point_Class = static_cast<jclass>( env->NewGlobalRef( tmp_point_Class ) );
    if( point_Class == NULL )
        return JNI_FALSE;
    env->DeleteLocalRef( tmp_point_Class );

    point_ctor_Method = env->GetMethodID( point_Class, "<init>", "(II)V" );
    CHECK_JNI_EXCEPTION( env )
    point_x_Field = env->GetFieldID( point_Class, "x", "I" );
    CHECK_JNI_EXCEPTION( env )
    point_y_Field = env->GetFieldID( point_Class, "y", "I" );
    CHECK_JNI_EXCEPTION( env )
    return JNI_TRUE;
}
MyJni.java:
package com.company.package;

class MyClass {
    // ... All java code here ...

    // Trigger JNI caching (could also be done in JNI_OnLoad...)
    // Returns boolean to match the jboolean returned by the native side.
    private static native boolean nativeInit();

    static {
        System.loadLibrary( "mylib" );
        nativeInit(); // should check the result
    }
}
Have fun ;)
Objects are not thread-specific. They are initially "local" references, and if you want to keep a copy you have to tell the VM that you're doing so by creating (and, eventually, deleting) a "global" reference.
See http://developer.android.com/training/articles/perf-jni.html, especially the "Local and Global References" section.

Why was bad_alloc(const char*) made private in Visual C++ 2012?

I am trying to compile a somewhat larger C++ project using the Visual Studio 2012 Release Candidate. The project currently compiles under VS2010. (I am just eager to get the C++11 features, so I tried. :)
Apart from things that I can explain by myself, the project uses code like this:
ostringstream ostr;
ostr << "The " __FUNCTION__ "() failed to malloc(" << i << ").";
throw bad_alloc(ostr.str().c_str());
The compiler now complains
error C2248: 'std::bad_alloc::bad_alloc' : cannot access private member declared
in class 'std::bad_alloc'
... which is true. That version of the constructor is now private.
What was the reason for making that version of the constructor private? Does the C++11 standard recommend against using the constructor with an argument?
(I can imagine that if allocation failed, it may cause more problems to try to construct anything new. However, it is only my guess.)
Thanks,
Petr
The C++11 Standard defines bad_alloc as such (18.6.2.1):
class bad_alloc : public exception {
public:
    bad_alloc() noexcept;
    bad_alloc(const bad_alloc&) noexcept;
    bad_alloc& operator=(const bad_alloc&) noexcept;
    virtual const char* what() const noexcept;
};
There is no constructor that takes a string. A vendor providing such a constructor would make code that uses it non-portable, since other vendors are not obliged to provide it.
The C++03 standard defines a similar set of constructors, so VS didn't follow this part of the standard even before C++11. MS does try to make VS as standard compliant as possible, so they've probably just used the occasion (new VS, new standard) to fix an incompatibility.
Edit: Now that I've seen VS2012's code, it is also clear why the mentioned constructor is left private, instead of being completely removed: there seems to be only one use of that constructor, in the bad_array_new_length class. So bad_array_new_length is declared a friend in bad_alloc, and can therefore use that private constructor. This dependency could have been avoided if bad_array_new_length just stored the message in the pointer used by what(), but it's not a lot of code anyway.
If you are accustomed to passing a message when you throw a std::bad_alloc, a suitable technique is to define an internal class that derives from std::bad_alloc and overrides what() to supply the appropriate message.
You can make the class public and call its constructor directly, or write a helper function, such as throw_bad_alloc, which takes the parameters (and additional scalar information) and stores them in the internal class.
The message is not formatted until what() is called. By that point, stack unwinding may have freed some memory, so the message can be formatted with the actual reason (memory exhaustion, bad request size, heap corruption, etc.) at the catch site. If formatting fails, simply return a static default message.
Trimmed example:
(Tip: the copy constructor can just set _Message to nullptr rather than copy the message, since the message is formatted on demand. The move constructor, of course, can simply confiscate it. :-)
class internal_bad_alloc : public std::bad_alloc
{
public:
    // Default, copy and move constructors....

    // Converting constructor (note: 'errno' is a macro, so the
    // parameter needs a different name):
    explicit internal_bad_alloc(int error, size_t size, etc...) noexcept :
        std::bad_alloc()
    {
        // Assign data members...
    }

    virtual ~internal_bad_alloc(void) noexcept
    {
        // Free _Message data member (if allocated).
    }

    // Override to format and return the reason:
    virtual const char* what(void) const noexcept
    {
        if (_Message == nullptr)
        {
            // Format and assign _Message. Assign the default if the
            // format fails...
        }
        return _Message;
    }

private:
    // Additional scalar data (error code, size, etc.) passed into the
    // constructor and used when the message is formatted by 'what'...
    mutable char* _Message;
    static char _Default[];
};

//
// Throw helper(s)...
//
extern void throw_bad_alloc(int error, size_t size, etc...)
{
    throw internal_bad_alloc(error, size, etc...);
}

Memory leak with dynamic code in COM Interop

For compatibility between our legacy ASP code and new ASP.NET code, we have a bunch of .NET COM objects that expose some of our .NET utilities to the ASP. In some cases we need to work with other COM objects inside our .NET COM wrappers. To allow high flexibility and avoid PIA dependencies, we decided to use dynamic code to work with those COM objects.
Simplified C# COM object:
using System;
using System.Text;
using System.Runtime.InteropServices;

namespace TestCom
{
    [ComVisible(true)]
    [Guid("6DC92920-8C3C-4C81-A615-BD0E3A332024")]
    [InterfaceType(ComInterfaceType.InterfaceIsDual)]
    public interface ITestComObject
    {
        [DispId(1)]
        string MyMethod(dynamic dictionary);
    }

    [ComVisible(true)]
    [Guid("F52A463E-F03B-4703-860C-E86CDD6D04E3")]
    [ClassInterface(ClassInterfaceType.None)]
    [ProgId("TestCom.TestComObject")]
    public class TestComObject : ITestComObject
    {
        string ITestComObject.MyMethod(dynamic dictionary)
        {
            StringBuilder sb = new StringBuilder();
            if (dictionary != null)
            {
                foreach (object key in dictionary)
                {
                    object p = dictionary[key];
                    if (p != null)
                    {
                        sb.AppendFormat("{0}={1}{2}", key, p, Environment.NewLine);
                    }
                }
            }
            return sb.ToString();
        }
    }
}
Test ASP Page:
<%@ Language=VBScript %>
<%
Dim testObj, params
Set testObj = Server.CreateObject("TestCom.TestComObject")
Set params = Server.CreateObject("Scripting.Dictionary")
params("lang") = "en-US"
params("num") = 42
Response.Write testObj.MyMethod(params)
Set testObj = Nothing
Set params = Nothing
%>
In the normal case, the dynamic code would be compiled just once and subsequent calls would reuse it. However, in our case it seems that the dynamic code is compiled on every single call. When I attach a memory profiler to the IIS process, I can clearly see additional objects from the Microsoft.CSharp.RuntimeBinder.Semantics namespace appearing in gen2. This effectively causes a memory leak in our IIS process.
Any ideas how to fix this dynamic code compilation issue? Please note that rewriting all the code to use PIAs and COM interfaces is not always an option in our case.
I suggest you isolate any code that is susceptible to memory leaks in a separate process; the parent process can communicate with such leaky processes via a socket, for instance. Then either start a fresh worker process on every call, or restart the workers at some quiet time, such as overnight.

Garbage collection and cgo

Is it possible to make the garbage collector in Go handle and release memory allocated through C code? I apologize, I haven't used C and cgo before so my examples may need some clarification.
Let's say you've got some C library that you'd like to use, and this library allocates some memory that needs to be freed manually. What I'd like to do is something like this:
package stuff

/*
#include <stuff.h>
*/
import "C"

type Stuff C.Stuff

func NewStuff() *Stuff {
    stuff := (*Stuff)(C.NewStuff()) // Allocate memory
    // Define the release function for the runtime to call
    // when this object has no references to it (to release memory).
    // In this case it's stuff.Free().
    return stuff
}

func (s *Stuff) Free() {
    C.Free((*C.Stuff)(s)) // Release memory
}
Is there any way for the garbage collector to call Stuff.Free() when there are no references to *Stuff in the Go runtime?
Am I making sense here?
Perhaps a more direct question is: Is it possible to make the runtime automatically handle the cleanup of C allocated memory by writing a function that the runtime calls when there are zero references to that object?
There exists the runtime.SetFinalizer function, but it cannot be used on any object allocated by C code.
However, you can create a Go object for each C object that needs to be freed automatically:
type Stuff struct {
    cStuff *C.Stuff
}

func NewStuff() *Stuff {
    s := &Stuff{C.NewStuff()}
    runtime.SetFinalizer(s, (*Stuff).Free)
    return s
}

func (s *Stuff) Free() {
    C.Free(s.cStuff)
}

inserting "this" into an STL map from the constructor

VERSION 1
class Doh {
private:
    static std::map<const std::string, const Doh*> someMap;
    std::string stringValue_;
public:
    Doh(std::string str) : stringValue_(str) {
        Doh::someMap.insert(
            std::make_pair<const std::string, const Doh*>
                (this->stringValue_, this)
        );
    }
};
The above was ok with MSVC 2010 but with MSVC 2008 it fails – and I guess it is because the object is not constructed yet when it is inserted in the map (I got a memory access violation).
So, I tried a delayed insertion, which worked:
VERSION 2
Doh(std::string str) : stringValue_(str) {
    boost::thread(&Doh::insertIntoTheStaticMap, this);
}

void insertIntoTheStaticMap() {
    boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    Doh::someMap.insert(
        std::make_pair<const std::string, const Doh*>
            (this->stringValue_, this)
    );
}
But as you might be able to guess, my intention is to have the static Doh::someMap as a common lookup dictionary.
VERSION 1 didn’t need any thread-safety because I would create all Doh instances in the same thread – in initialization blocks - which would be called by dynamic initializers before I enter main().
But with VERSION 2, the naïve sleep() is neither graceful nor reliable (not to mention, I might need to lock the map before insertion).
What would be a nice KISS approach?
The only potential issue I see is the initialization order of the static member when there are multiple source files. Try guarding it with a function:
class Doh {
private:
    static std::map< std::string, Doh * > &get_map() {
        static std::map< std::string, Doh * > someMap;
        return someMap; // initialize upon first use
    }
    std::string stringValue_;
public:
    Doh(std::string str) : stringValue_(str) {
        get_map().insert(
            std::make_pair
                (this->stringValue_, this)
        );
    }
};
In neither version is there any sign of init for stringValue_ - what does the debugger show you about this key when you hit the map insert in version 1 of the code? How is this field set up, and what is its type?
Running this in the debugger for VS2008 should allow you to narrow down the point of failure into the <map> source, I would have thought.
