Is it possible to make the garbage collector in Go handle and release memory allocated through C code? I apologize, I haven't used C and cgo before, so my examples may need some clarification.
Let's say you've got some C library that you'd like to use, and this library allocates some memory that needs to be freed manually. What I'd like to do is something like this:
package stuff

/*
#include <stuff.h>
*/
import "C"

type Stuff C.Stuff

func NewStuff() *Stuff {
    stuff := (*Stuff)(C.NewStuff()) // allocate memory on the C side
    // Here I'd like to register a release function for the runtime to
    // call when there are no references left to this object, so that
    // the memory gets released. In this case that would be stuff.Free().
    return stuff
}

func (s *Stuff) Free() {
    C.Free((*C.Stuff)(s)) // release the memory
}
Is there any way for the garbage collector to call Stuff.Free() when there are no references to *Stuff in the Go runtime?
Am I making sense here?
Perhaps a more direct question is: Is it possible to make the runtime automatically handle the cleanup of C allocated memory by writing a function that the runtime calls when there are zero references to that object?
There is the runtime.SetFinalizer function, but it cannot be used on objects allocated by C code.
However, you can create a Go object for each C object that needs to be freed automatically:
type Stuff struct {
    cStuff *C.Stuff
}

func NewStuff() *Stuff {
    s := &Stuff{C.NewStuff()}
    // Ask the runtime to call s.Free when the GC finds s unreachable.
    runtime.SetFinalizer(s, (*Stuff).Free)
    return s
}

func (s *Stuff) Free() {
    C.Free(s.cStuff)
}
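One caveat: a finalizer is only a safety net. The runtime does not guarantee when it will run, or that it will run at all before the program exits, so an explicit Free remains useful. If callers may free explicitly, here is a minimal sketch of an idempotent Free that also clears the finalizer (the nil check and the SetFinalizer(s, nil) call are my additions, not part of the answer above):

func (s *Stuff) Free() {
    if s.cStuff != nil {
        C.Free(s.cStuff)
        s.cStuff = nil
        // The memory is gone; the finalizer no longer needs to run.
        runtime.SetFinalizer(s, nil)
    }
}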
I have a piece of data:
type data struct {
    // all good data here
    ...
}
This data is owned by a manager and used by other threads for reading only. The manager needs to periodically update the data. How do I design the threading model for this? I can think of two options:
1.
type manager struct {
    // Other threads acquire the read lock to read the data;
    // the manager acquires the write lock to update it.
    lock sync.RWMutex
    // pointer to the data
    p *data
}
2.
type manager struct {
    // Other threads copy the pointer when they want to use the data.
    // When the manager updates, it just points p at the new data.
    p *data
}
Does the second approach work? It seems I don't need any lock. If other threads get a pointer to the old data, it would be fine for the manager to update the original pointer. Since Go is garbage collected, the old data will be released automatically once all the other threads have finished reading it. Am I correct?
Your first option is fine and perhaps the simplest to implement. However, with many readers it could perform poorly, because the writer may struggle to obtain the write lock.
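For reference, a minimal sketch of option 1 (the method names are illustrative, not from the question):

type manager struct {
    lock sync.RWMutex
    p    *data
}

// Readers hold the read lock for as long as they use the data.
func (m *manager) read(f func(*data)) {
    m.lock.RLock()
    defer m.lock.RUnlock()
    f(m.p)
}

// The manager takes the write lock to swap in new data.
func (m *manager) update(d *data) {
    m.lock.Lock()
    m.p = d
    m.lock.Unlock()
}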
As the comments on your question have stated, your second option (as-is) can cause a race condition and lead to unpredictable behaviour.
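Concretely, option 2 has one goroutine writing m.p while others read it with no synchronization, which is a data race under the Go memory model; the race detector will flag it. A minimal illustration (assuming the manager type from option 2):

m := &manager{p: &data{}}

go func() { m.p = &data{} }() // writer: replaces the pointer
go func() { _ = m.p }()       // reader: copies the pointer

// Running this under `go run -race` reports a data race on m.p.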
You could implement your second option by using atomic.Value. This would allow you to store the pointer to some data struct and atomically update this for the next readers to use. For example:
// Data shared with readers
type data struct {
    // all the fields
}

// Manager
type manager struct {
    v atomic.Value
}

// Method used by readers to obtain a fresh copy of data to
// work with, e.g. inside a loop
func (m *manager) Data() *data {
    return m.v.Load().(*data)
}

// Internal method called to set new data for readers
func (m *manager) update() {
    d := &data{
        // ... set values here
    }
    m.v.Store(d)
}
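Two usage notes: atomic.Value panics if the values stored are not all of the same concrete type, and Load returns nil until the first Store, so the manager must call update once before any reader calls Data (otherwise the type assertion panics). A sketch of how a reader might use it (the loop is illustrative):

m := &manager{}
m.update() // must Store once before any reader calls Data

go func() {
    for {
        d := m.Data() // fresh snapshot each iteration
        _ = d         // read fields of d here; never mutate them
    }
}()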
I have a bunch of functions in a Go program that work on a struct which uses a mutex to manage concurrent access.
Some of these functions, which operate on specific data, need locks and therefore call mutex.Lock() to acquire the mutex that manages access to that data. Today I encountered an issue where two of these locking methods call each other: as soon as mutex.Lock() is called a second time, it blocks, of course.
The problem I am facing is very similar to this code: http://play.golang.org/p/rPARZsordI
Is there any best practice in Go for how to solve this issue? As far as I know, recursive locks are not available in Go.
It seems like a design flaw in your system. You should factor out the part that you need to call both with and without the lock held. E.g. if what you do is
func (t *Thing) A() { t.Lock(); defer t.Unlock(); t.foo(); t.B() }
func (t *Thing) B() { t.Lock(); defer t.Unlock(); t.bar() }
then what you should do instead is
func (t *Thing) A() { t.Lock(); defer t.Unlock(); t.foo(); t.b() }
func (t *Thing) B() { t.Lock(); defer t.Unlock(); t.b() }
func (t *Thing) b() { t.bar() }
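Spelled out, the convention is that exported methods acquire the lock and unexported ones assume it is already held. A self-contained sketch (Thing, foo, and bar are illustrative):

type Thing struct {
    mu sync.Mutex
    // ...
}

func (t *Thing) A() {
    t.mu.Lock()
    defer t.mu.Unlock()
    t.foo() // private helpers run under the lock
    t.b()
}

func (t *Thing) B() {
    t.mu.Lock()
    defer t.mu.Unlock()
    t.b()
}

// b assumes t.mu is held by the caller.
func (t *Thing) b() { t.bar() }

func (t *Thing) foo() { /* ... */ }
func (t *Thing) bar() { /* ... */ }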
Consider an operation with a standard asynchronous interface:
std::future<void> op();
Internally, op needs to perform a (variable) number of asynchronous operations to complete; the number of these operations is finite but unbounded, and depends on the results of the previous asynchronous operations.
Here's a (bad) attempt:
/* An object of this class will store the shared execution state in its
 * members; the asynchronous op is a member function. */
class shared
{
private:
    // shared state

    // Actually does some operation (asynchronously).
    void do_op()
    {
        ...
        // Might need to launch more ops.
        if (...)
            launch_next_ops();
    }

public:
    // Launches next ops.
    void launch_next_ops()
    {
        ...
        std::async(&shared::do_op, this);
    }
};
std::future<void> op()
{
    shared s;
    s.launch_next_ops();
    // Return some future of s used for the entire operation.
    ...
    // s destructed - delayed BOOM!
}
The problem, of course, is that s goes out of scope, so later methods will not work.
To amend this, here are the changes:
class shared : public std::enable_shared_from_this<shared>
{
private:
    /* The member now takes a shared pointer to itself; hopefully
     * this will keep it alive. */
    void do_op(std::shared_ptr<shared> p); // [*]

public:
    void launch_next_ops()
    {
        ...
        std::async(&shared::do_op, this, shared_from_this());
    }
};
std::future<void> op()
{
    std::shared_ptr<shared> s{new shared{}};
    s->launch_next_ops();
    ...
}
(Aside from the weirdness of an object calling its own method with a shared pointer to itself,) the problem is with the line marked [*]. The compiler (correctly) warns that it's an unused parameter.
Of course, it's possible to fool it somehow, but is this an indication of a fundamental problem? Is there any chance the compiler will optimize away the argument and leave the method with a dead object? Is there a better alternative to this entire scheme? I don't find the resulting code the most intuitive.
No, the compiler will not optimize away the argument. Indeed, that's irrelevant as the lifetime extension comes from shared_from_this() being bound by decay-copy ([thread.decaycopy]) into the result of the call to std::async ([futures.async]/3).
If you want to avoid the warning about an unused parameter, just leave it unnamed; compilers that warn on unused parameters will not warn about unnamed ones.
An alternative is to make do_op static, meaning that you have to use its shared_ptr argument; this also addresses the duplication between this and shared_from_this. Since this is fairly cumbersome, you might want to use a lambda to convert shared_from_this to a this pointer:
std::async([](std::shared_ptr<shared> const& self){ self->do_op(); }, shared_from_this());
If you can use C++14 init-captures this becomes even simpler:
std::async([self = shared_from_this()]{ self->do_op(); });
Why can't I send local instances of structs to another thread in D through Tid.send?
I would like to handle thread communication simply, like this:
void main()
{
    ...
    tid.send(Command.LOGIN, [ Variant("user"), Variant("hello1234") ]);
    ...
}

void thread()
{
    ...
    receive(
        (Command cmd, Variant[] args) { ... }
    );
    ...
}
If I understand it correctly, D should create the array of Variants on the stack and then copy the contents of the array into the send function, right? So there should not be any issues with synchronization and concurrency. I'm quite confused; this concurrency model is weird to me. I'm used to coding with threads in C# and C.
I'm also confused about the shared keyword and about creating shared classes. Usually when I try to call a method of a shared class instance from a non-shared object, the compiler throws an error. Why?
You should idup the array and it will go through; normal arrays are not sendable by default, as they have unshared mutable indirection.
As written, the compiler can rewrite the send as
Variant[] variants = [ Variant("user"), Variant("hello1234") ];
tid.send(Command.LOGIN, variants);
and Variant[] fails the hasUnsharedAliasing test.
You can fix this by making the array shared or immutable (and receiving the appropriate type on the other side).
I am getting a memory leak whenever a new RPC thread in a DCOM server (C++ DCOM server) invokes the following managed C++ method:
void CToolDataClient::SetLotManagerActive(bool bLotManagerActive)
{
    if (m_toolDataManager != nullptr)
    {
        m_toolDataManager->LotActive = bLotManagerActive;
    }
}
I get the managed C++ object pointer using the following code:
typedef bool (*FPTR_CREATEINTERFACE)(CToolDataInterface** ppInterface);
FPTR_CREATEINTERFACE fnptr = (FPTR_CREATEINTERFACE)GetProcAddress(hModule, (LPTSTR)"CreateInstance");
if (NULL != fnptr)
{
    CICELogger::Instance()->LogMessage("CToolDataManager::CToolDataManager", Information,
                                       "Created instance of DataManagerBridge");
    fnptr(&m_pToolDataInterface);
}
This is how I invoke the managed call in the C++ portion of the DCOM server:
void CToolDataManager::SetLotManagerActive(bool bLotManagerActive)
{
    if (m_pToolDataInterface != NULL)
    {
        m_pToolDataInterface->SetLotManagerActive(bLotManagerActive);
    }
}
The call stack given below indicates the location of the memory leak. Is there any way to solve this memory leak? Please help me.
ntdll!RtlDebugAllocateHeap+000000E1
ntdll!RtlAllocateHeapSlowly+00000044
ntdll!RtlAllocateHeap+00000E64
mscorwks!EEHeapAlloc+00000142
mscorwks!EEHeapAllocInProcessHeap+00000052
mscorwks!operator new[]+00000025
mscorwks!SetupThread+00000238
mscorwks!IJWNOADThunk::FindThunkTarget+00000019
mscorwks!IJWNOADThunkJumpTargetHelper+0000000B
mscorwks!IJWNOADThunkJumpTarget+00000048
ICEScheduler!CToolDataManager::SetLotManagerActive+00000025 (e:\projects\ice\ice_dev\trunk\source\application source\iceschedulersystem\icescheduler\tooldatamanager.cpp, 250)
ICEScheduler!SetLotManagerActive+00000014 (e:\projects\ice\ice_dev\trunk\source\application source\iceschedulersystem\icescheduler\schddllapi.cpp, 589)
ICELotControl!CLotDetailsHandler::SetLotManagerStatus+0000006C (e:\projects\ice\ice_dev\source\application source\icelotsystem\icelotcontrol\lotdetailshandler.cpp, 1823)
ICELotControl!CLotManager::StartJob+00000266 (e:\projects\ice\ice_dev\source\application source\icelotsystem\icelotcontrol\lotmanager.cpp, 205)
RPCRT4!Invoke+00000030
RPCRT4!NdrStubCall2+00000297
RPCRT4!CStdStubBuffer_Invoke+0000003F
OLEAUT32!CUnivStubWrapper::Invoke+000000C5
ole32!SyncStubInvoke+00000033
ole32!StubInvoke+000000A7
ole32!CCtxComChnl::ContextInvoke+000000E3
ole32!MTAInvoke+0000001A
ole32!AppInvoke+0000009C
ole32!ComInvokeWithLockAndIPID+000002E0
ole32!ThreadInvoke+000001CD
RPCRT4!DispatchToStubInC+00000038
RPCRT4!RPC_INTERFACE::DispatchToStubWorker+00000113
RPCRT4!RPC_INTERFACE::DispatchToStub+00000084
RPCRT4!RPC_INTERFACE::DispatchToStubWithObject+000000C0
RPCRT4!LRPC_SCALL::DealWithRequestMessage+000002CD
RPCRT4!LRPC_ADDRESS::DealWithLRPCRequest+0000016D
RPCRT4!LRPC_ADDRESS::ReceiveLotsaCalls+0000028F
First, is LotActive a member variable/field (pure data) or a property?
I think that it is a property, and before it can be set, the JIT has to compile the code for the setter. In desktop .NET, native code produced by JIT compilation is not garbage collected; instead, it exists for the lifetime of the AppDomain, so it could look like a leak.
Can you check whether each call to this function leaks another object, or whether the leak occurs just once?