Memory leaks with function

The program below should crash when MyClass::processing is called after "delete p" and "p = NULL", but it actually crashes when "MyClass::a" is accessed. Why so?
#include <iostream>
using namespace std;

class MyClass
{
public:
    int a;
    void processing()
    {
        cout << "Processing" << endl;
    }
};

int main(void)
{
    MyClass* p(new MyClass);
    MyClass* q = p;
    p->a = 10;
    cout << "p:: " << p << " q:: " << q << endl;
    cout << "p->a " << p->a << " q->a " << q->a << endl;
    delete p;
    p->processing(); // Watch out! p is now dangling!
    cout << "\n\nAfter Deletion::" << endl;
    cout << "p:: " << p << " q:: " << q << endl;
    cout << "p->a " << p->a << " q->a " << q->a << endl;
    p = NULL; // p is no longer dangling
    cout << "\n\nAfter Assigning null" << endl;
    p->processing(); // Watch out! p is now null -- calling through it is still undefined
    q->processing(); // Ouch! q is still dangling!
    cout << "p:: " << p << " q:: " << q << endl;
}

Calling a method after you've deleted the instance is undefined behaviour. Anything could happen.
A sample of possible behaviours:
- You get "lucky": free just released the memory but didn't invalidate it. The instance is partially intact, and the call succeeds by accident.
- free overwrites the memory with garbage (or sentinel values), causing you to jump to some crazy address and segfault.
- free deallocates the whole block and you segfault on trying to access *p.
- Some other thread puts the address of system in the now-unused block of memory. Your stack happens to contain a pointer to "rm -rf /". Your hard drive is erased, hilarity ensues.
Never, ever, ever rely on undefined behaviour.
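If you need shared access without dangling pointers, the usual cure is to give ownership to a smart pointer instead of juggling raw pointers and delete. A minimal sketch using std::shared_ptr (not part of the original question, just an illustration):

#include <iostream>
#include <memory>

class MyClass
{
public:
    int a = 0;
    void processing() { std::cout << "Processing" << std::endl; }
};

int main()
{
    auto p = std::make_shared<MyClass>();
    auto q = p;        // q shares ownership instead of aliasing a raw pointer
    p->a = 10;
    p.reset();         // drops p's reference; the object survives through q
    q->processing();   // safe: q still owns a live object
}                      // last owner (q) destroyed here; memory released exactly once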

"this" argument in boost bind

I am writing a multi-threaded server that handles async reads from many TCP sockets. Here is the section of code that bothers me:
void data_recv (void) {
    socket.async_read_some (
        boost::asio::buffer(rawDataW, size_t(648*2)),
        boost::bind ( &RPC::on_data_recv, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
} // RPC::data_recv

void on_data_recv (boost::system::error_code ec, std::size_t bytesRx) {
    if ( rawDataW[bytesRx-1] == ENDMARKER ) { // <-- this code is fine
        process_and_write_rawdata_to_file
    }
    else {
        read_socket_until_endmarker // <-- HELP REQUIRED!!
        process_and_write_rawdata_to_file
    }
}
Nearly always, async_read_some reads in data including the end marker, so it works fine. Rarely, the end marker's arrival is delayed in the stream, and that's when my program fails. I think it fails because I have not understood how boost::bind works.
My first question:
I am confused by this boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
my second question:
In the on_data_recv() method, how do I read data from the same socket that was read in the data_recv() method? In other words, how do I pass the socket as an argument from the calling method to the handler, when the handler is executed in another thread? Any help in the form of a few lines of code that can fit into my "read_socket_until_endmarker" will be appreciated.
My first question: I am confused by this boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
In the example (and I'm assuming this holds for your functions as well) start_accept() is a member function. bind is designed so that when its first argument is a pointer to a member function (which is what the & in front of &RPC::on_data_recv produces), it treats its second argument as the object on which to invoke that member.
So while code like this:
void foo(int x) { ... }
bind(foo, 3)();
is equivalent to just calling foo(3),
code like this:
struct Bar { void foo(int x); };
Bar bar;
bind(&Bar::foo, &bar, 3)(); // <--- notice the &Bar:: before foo
would be equivalent to calling bar.foo(3).
And thus as per your example
boost::bind ( &RPC::on_data_recv, this, // <--- notice & again
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred)
When this object is invoked inside Asio, it will be equivalent to calling this->on_data_recv(error, size). Check out this link for more info.
For the second part, it is not clear to me how you're working with multiple threads. Do you run io_service.run() from more than one thread (possible, but I think beyond your experience level)? It might be the case that you're confusing async I/O with multithreading. I'm going to assume that is the case, and if you correct me I'll change my answer.
The usual and preferred starting point is to have just one thread running the io_service.run() function. Don't worry, this will allow you to handle many sockets asynchronously.
If that is the case, your two functions could easily be modified as such:
void data_recv (size_t startPos = 0) {
    socket.async_read_some (
        boost::asio::buffer(rawDataW, size_t(648*2)) + startPos,
        boost::bind ( &RPC::on_data_recv, this,
            startPos,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
} // RPC::data_recv

void on_data_recv (size_t startPos,
                   boost::system::error_code ec,
                   std::size_t bytesRx) {
    // TODO: Check ec
    if (rawDataW[startPos + bytesRx - 1] == ENDMARKER) {
        process_and_write_rawdata_to_file
    }
    else {
        // TODO: Error if startPos + bytesRx == 648*2
        data_recv(startPos + bytesRx);
    }
}
Note, though, that the above code still has problems. The main one is that if the other side sends two messages in quick succession, one async_read_some call could receive the full first message plus part of the second, and we would miss the ENDMARKER of the first one. It is therefore not enough to test only whether the last received byte equals ENDMARKER.
I could go on and modify this function further (I think you get the idea how), but you'd be better off using async_read_until, which is meant exactly for this purpose.
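For reference, a sketch of what the async_read_until version could look like, assuming ENDMARKER is a single char and that a boost::asio::streambuf member (here called buf_, my invention) replaces the raw array:

void data_recv () {
    boost::asio::async_read_until (
        socket, buf_, ENDMARKER,           // completes only once the marker has arrived
        boost::bind ( &RPC::on_data_recv, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}

void on_data_recv (boost::system::error_code ec, std::size_t bytesRx) {
    if (ec) return;                        // TODO: real error handling
    std::istream is (&buf_);               // the first bytesRx bytes end exactly at the marker
    std::string message;
    std::getline (is, message, ENDMARKER); // extracts one message, discards the marker
    // process_and_write_rawdata_to_file(message);
    data_recv ();                          // queue the next read
}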

Dynamically-Allocated Implementation-Class std::async-ing its Member

Consider an operation with a standard asynchronous interface:
std::future<void> op();
Internally, op needs to perform a (variable) number of asynchronous operations to complete; the number of these operations is finite but unbounded, and depends on the results of the previous asynchronous operations.
Here's a (bad) attempt:
/* An object of this class will store the shared execution state in its members;
 * the asynchronous op is its member. */
class shared
{
private:
    // shared state
private:
    // Actually does some operation (asynchronously).
    void do_op()
    {
        ...
        // Might need to launch more ops.
        if(...)
            launch_next_ops();
    }
public:
    // Launches next ops.
    void launch_next_ops()
    {
        ...
        std::async(&shared::do_op, this);
    }
};
std::future<void> op()
{
    shared s;
    s.launch_next_ops();
    // Return some future of s used for the entire operation.
    ...
    // s destructed - delayed BOOM!
}
The problem, of course, is that s goes out of scope, so the operations launched later will run against a destroyed object.
To amend this, here are the changes:
class shared : public std::enable_shared_from_this<shared>
{
private:
    /* The member now takes a shared pointer to itself; hopefully
     * this will keep it alive. */
    void do_op(std::shared_ptr<shared> p); // [*]

    void launch_next_ops()
    {
        ...
        std::async(&shared::do_op, this, shared_from_this());
    }
};
std::future<void> op()
{
    std::shared_ptr<shared> s{new shared{}};
    s->launch_next_ops();
    ...
}
(Aside from the weirdness of an object calling its own method with a shared pointer to itself,) the problem is with the line marked [*]. The compiler (correctly) warns that it's an unused parameter.
Of course, it's possible to fool it somehow, but is this an indication of a fundamental problem? Is there any chance the compiler will optimize away the argument and leave the method with a dead object? Is there a better alternative to this entire scheme? I don't find the resulting code the most intuitive.
No, the compiler will not optimize away the argument. Indeed, that's irrelevant as the lifetime extension comes from shared_from_this() being bound by decay-copy ([thread.decaycopy]) into the result of the call to std::async ([futures.async]/3).
If you want to avoid the warning of an unused argument, just leave it unnamed; compilers that warn on unused arguments will not warn on unused unnamed arguments.
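For example, the declaration marked [*] would then read:

void do_op(std::shared_ptr<shared> /* unnamed: only keeps *this alive */);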
An alternative is to make do_op static, meaning that you have to use its shared_ptr argument; this also addresses the duplication between this and shared_from_this. Since this is fairly cumbersome, you might want to use a lambda to convert shared_from_this to a this pointer:
std::async([](std::shared_ptr<shared> const& self){ self->do_op(); }, shared_from_this());
If you can use C++14 init-captures this becomes even simpler:
std::async([self = shared_from_this()]{ self->do_op(); });
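To make the recommendation concrete, here is a minimal, self-contained sketch of the init-capture form; the body of do_op is invented for illustration, and launch_next_ops returns the future so the caller can wait on it, which differs slightly from the original void signature:

#include <future>
#include <iostream>
#include <memory>

class shared : public std::enable_shared_from_this<shared>
{
public:
    std::future<void> launch_next_ops()
    {
        // The init-capture copies the shared_ptr into the task, so *this
        // stays alive until do_op returns, even after the caller drops its owner.
        return std::async(std::launch::async,
                          [self = shared_from_this()] { self->do_op(); });
    }
private:
    void do_op() { std::cout << "op ran\n"; }
};

int main()
{
    std::future<void> f;
    {
        auto s = std::make_shared<shared>();
        f = s->launch_next_ops();
    }           // s is gone, but the task's shared_ptr keeps the object alive
    f.wait();   // the async op still runs against a live object
}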

Garbage collection and cgo

Is it possible to make the garbage collector in Go handle and release memory allocated through C code? I apologize, I haven't used C and cgo before so my examples may need some clarification.
Let's say you've got some C library that you'd like to use, and this library allocates some memory that needs to be freed manually. What I'd like to do is something like this:
package stuff

/*
#include <stuff.h>
*/
import "C"

type Stuff C.Stuff

func NewStuff() *Stuff {
    stuff := Stuff(C.NewStuff()) // Allocate memory.
    // Define the release function for the runtime to call
    // when this object has no references to it (to release memory).
    // In this case it's stuff.Free().
    return &stuff
}

func (s Stuff) Free() {
    C.Free(C.Stuff(s)) // Release memory.
}
Is there any way for the garbage collector to call Stuff.Free() when there are no references to *Stuff in the Go runtime?
Am I making sense here?
Perhaps a more direct question is: Is it possible to make the runtime automatically handle the cleanup of C allocated memory by writing a function that the runtime calls when there are zero references to that object?
There exists the runtime.SetFinalizer function, but it cannot be used on any object allocated by C code.
However, you can create a Go object for each C object that needs to be freed automatically:
type Stuff struct {
    cStuff *C.Stuff
}

func NewStuff() *Stuff {
    s := &Stuff{C.NewStuff()}
    runtime.SetFinalizer(s, (*Stuff).Free)
    return s
}

func (s *Stuff) Free() {
    C.Free(s.cStuff)
}

Gdiplus::Image Object and boost::shared_ptr

I have a simple Image cache class in my MFC application, to keep track of images loaded from the file system:
typedef boost::shared_ptr<Gdiplus::Image> ImagePtr;
typedef std::map<std::string, ImagePtr> ImageMap;
Whenever an image is requested by file name, a lookup is done; if it is already loaded, the appropriate ImagePtr is returned.
The problem occurs when I exit my application, and the shared pointer gets destructed. I get an access violation here, in checked_delete.hpp:
// verify that types are complete for increased safety
template<class T> inline void checked_delete(T * x)
{
    // intentionally complex - simplification causes regressions
    typedef char type_must_be_complete[ sizeof(T)? 1: -1 ];
    (void) sizeof(type_must_be_complete);
    delete x; // <-------- violation here!!
}
Is GDI+ managing these objects for me? If so, what do I need to do to my shared_ptr so that it doesn't call delete? Or is something else awry?
Thanks in advance!
That might be a symptom of calling GdiplusShutdown before the pointers are destroyed. GDI+ objects must be destroyed while GDI+ is still initialized; once GdiplusShutdown has run, deleting a Gdiplus::Image operates on torn-down state and can access-violate.
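A minimal sketch of the fix, assuming the cache is reachable at shutdown (g_imageCache and the function name are illustrative):

ImageMap g_imageCache;          // the cache from the question (illustrative global)

void ShutdownImaging(ULONG_PTR gdiplusToken)
{
    g_imageCache.clear();       // destroys every shared_ptr, deleting each
                                // Gdiplus::Image while GDI+ is still alive
    Gdiplus::GdiplusShutdown(gdiplusToken);   // only now tear GDI+ down
}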

Managed C++ Memory leak

I am getting a memory leak whenever a new RPC thread in a DCOM server (C++ DCOM server) invokes the following managed C++ method:
void CToolDataClient::SetLotManagerActive(bool bLotManagerActive)
{
    if (m_toolDataManager != nullptr)
    {
        m_toolDataManager->LotActive = bLotManagerActive;
    }
}
I get the managed C++ object pointer using the following code:
typedef bool (*FPTR_CREATEINTERFACE)(CToolDataInterface** ppInterface);
FPTR_CREATEINTERFACE fnptr = (FPTR_CREATEINTERFACE)GetProcAddress(hModule, (LPTSTR)"CreateInstance");
if (NULL != fnptr)
{
    CICELogger::Instance()->LogMessage("CToolDataManager::CToolDataManager", Information,
                                       "Created instance of DataManagerBridge");
    fnptr(&m_pToolDataInterface);
}
This is how I invoke the managed call in the C++ portion of the DCOM server:
void CToolDataManager::SetLotManagerActive(bool bLotManagerActive)
{
    if (m_pToolDataInterface != NULL)
    {
        m_pToolDataInterface->SetLotManagerActive(bLotManagerActive);
    }
}
The call stack given below indicates the location of the memory leak. Is there any way to solve this memory leak? Please help me.
ntdll!RtlDebugAllocateHeap+000000E1
ntdll!RtlAllocateHeapSlowly+00000044
ntdll!RtlAllocateHeap+00000E64
mscorwks!EEHeapAlloc+00000142
mscorwks!EEHeapAllocInProcessHeap+00000052
**mscorwks!operator new[]+00000025
mscorwks!SetupThread+00000238
mscorwks!IJWNOADThunk::FindThunkTarget+00000019
mscorwks!IJWNOADThunkJumpTargetHelper+0000000B
mscorwks!IJWNOADThunkJumpTarget+00000048
ICEScheduler!CToolDataManager::SetLotManagerActive+00000025** (e:\projects\ice\ice_dev\trunk\source\application source\iceschedulersystem\icescheduler\tooldatamanager.cpp, 250)
ICEScheduler!SetLotManagerActive+00000014 (e:\projects\ice\ice_dev\trunk\source\application source\iceschedulersystem\icescheduler\schddllapi.cpp, 589)
ICELotControl!CLotDetailsHandler::SetLotManagerStatus+0000006C (e:\projects\ice\ice_dev\source\application source\icelotsystem\icelotcontrol\lotdetailshandler.cpp, 1823)
ICELotControl!CLotManager::StartJob+00000266 (e:\projects\ice\ice_dev\source\application source\icelotsystem\icelotcontrol\lotmanager.cpp, 205)
RPCRT4!Invoke+00000030
RPCRT4!NdrStubCall2+00000297
RPCRT4!CStdStubBuffer_Invoke+0000003F
OLEAUT32!CUnivStubWrapper::Invoke+000000C5
ole32!SyncStubInvoke+00000033
ole32!StubInvoke+000000A7
ole32!CCtxComChnl::ContextInvoke+000000E3
ole32!MTAInvoke+0000001A
ole32!AppInvoke+0000009C
ole32!ComInvokeWithLockAndIPID+000002E0
ole32!ThreadInvoke+000001CD
RPCRT4!DispatchToStubInC+00000038
RPCRT4!RPC_INTERFACE::DispatchToStubWorker+00000113
RPCRT4!RPC_INTERFACE::DispatchToStub+00000084
RPCRT4!RPC_INTERFACE::DispatchToStubWithObject+000000C0
RPCRT4!LRPC_SCALL::DealWithRequestMessage+000002CD
RPCRT4!LRPC_ADDRESS::DealWithLRPCRequest+0000016D
RPCRT4!LRPC_ADDRESS::ReceiveLotsaCalls+0000028F
First, is LotActive a member variable/field (pure data) or a property?
I think that it is a property, and before it can be set, the JIT has to compile the code for the setter. In desktop .NET, native code produced by JIT compilation is not garbage collected; instead, it exists for the lifetime of the AppDomain, so it can look like a leak.
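For context, a C++/CLI property whose setter the JIT would compile on first use looks roughly like this (a sketch; LotActive's real declaration lives in the asker's codebase):

public ref class ToolDataManager
{
public:
    // Trivial property: the compiler emits get_LotActive/set_LotActive,
    // and the JIT compiles the setter the first time it is called.
    property bool LotActive;
};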
Can you check whether each call to this function leaks another object, or whether the leak occurs just once?
