Meaning of ref class and gcnew - visual-c++

I am not clear about the usage of the keywords gcnew and ref class. Usually in C++, memory for an object is allocated on the heap only when we create it with the keyword new. From what I have read about C++/CLI, when a class is declared with the ref class token, its memory is allocated on the heap even before the object is created. Is that true? If it is, then what is the purpose of gcnew? Does gcnew do anything with reference counting?
What is the difference between the following handle creation statements?
1)<class Name> ^<handler> = gcnew <class Name>;
2)<class Name> <handler>;
I know the first one allocates memory on the heap for that class. What about the second one? Usually in C++, if I create a handle like this, the memory for the object is allocated on the stack. But from what I have read about C++/CLI, when I declare a class with the ref class token, its memory is allocated on the heap. I want to know whether the second statement also allocates its memory on the heap.
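For concreteness, a minimal sketch of the two forms, using a hypothetical ref class Widget:
ref class Widget
{
public:
    Widget() {}
};

void Demo()
{
    // 1) Handle + gcnew: the object clearly lives on the managed (GC) heap.
    Widget^ h = gcnew Widget;

    // 2) Stack semantics: is this on the stack, as it would be for a
    //    native C++ type, or does it also go to the managed heap?
    Widget s;
}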

Related

Why does finalize() execute only after a new object is created, but not after gc() is invoked?

Shouldn't finalize() execute immediately when gc() is called? The order of the output is a little unconvincing.
class Test
{
    int x = 100;
    int y = 115;

    protected void finalize()
    { System.out.println("Resource Deallocation is completed"); }
}

class DelObj
{
    public static void main(String arg[])
    {
        Test t1 = new Test();
        System.out.println("Values are "+t1.x+", "+t1.y+"\nObject referred by t1 is at location: "+t1);
        t1 = null; // dropping the reference
        System.gc(); // explicitly requesting collection
        Test t2 = new Test();
        System.out.println("Values are "+t2.x+", "+t2.y+"\nObject referred by t2 is at location: "+t2);
    }
}
The output shows finalize() executing only after the new object, referred to by t2, has been created:
D:\JavaEx>java DelObj
Values are 100, 115
Object referred by t1 is at location: Test@6bbc4459
Values are 100, 115
Object referred by t2 is at location: Test@2a9931f5
Resource Deallocation is completed
Calling System.gc() only provides a hint to the JVM; it does not guarantee that an actual garbage collection will happen.
However, the bigger problem with your expectation is that garbage collection is not the same as finalization.
The Java 6 documentation of System.gc() states:
Runs the garbage collector.
Calling the gc method suggests that the Java Virtual Machine expend effort toward recycling unused objects in order to make the memory they currently occupy available for quick reuse. …
Compare to System.runFinalization():
Runs the finalization methods of any objects pending finalization.
Calling this method suggests that the Java Virtual Machine expend effort toward running the finalize methods of objects that have been found to be discarded but whose finalize methods have not yet been run. …
So there can be “pending finalization”, that is, “objects that have been found to be discarded but whose finalize methods have not yet been run”.
Unfortunately, Java 6’s documentation of finalize() starts with the misleading sentence:
Called by the garbage collector on an object when garbage collection determines that there are no more references to the object.
whereas garbage collection and finalization are two different things; hence, the finalize() method is not called by the garbage collector itself. But note that the subsequent documentation says:
The Java programming language does not guarantee which thread will invoke the finalize method for any given object.
So when you say “The order of output result is a little unconvincing”, recall that we’re talking about multi-threading here, so in absence of additional synchronization, the order is outside your control.
The Java Language Specification even says:
The Java programming language does not specify how soon a finalizer will be invoked, except to say that it will happen before the storage for the object is reused.
and later on
The Java programming language imposes no ordering on finalize method calls. Finalizers may be called in any order, or even concurrently.
In practice, the garbage collector will only enqueue objects needing finalization, while one or more finalizer threads poll the queue and execute the finalize() methods. When all finalizer threads are busy executing particular finalize() methods, the queue of objects needing finalization may grow arbitrarily long.
Note that modern JVMs contain an optimization for those classes not having a dedicated finalize() method, i.e. inherit the method from Object or just have an empty method. Instances of these classes, the majority of all objects, skip this finalization step and their space gets reclaimed immediately.
So if you added a finalize() method just to find out when the object gets garbage collected, it’s the very presence of that finalize() method which slows down the process of the memory reclamation.
So it is better to refer to the JDK 11 documentation of finalize():
Deprecated.
The finalization mechanism is inherently problematic. Finalization can lead to performance issues, deadlocks, and hangs. Errors in finalizers can lead to resource leaks; there is no way to cancel finalization if it is no longer necessary; and no ordering is specified among calls to finalize methods of different objects. Furthermore, there are no guarantees regarding the timing of finalization. The finalize method might be called on a finalizable object only after an indefinite delay, if at all. Classes whose instances hold non-heap resources should provide a method to enable explicit release of those resources, and they should also implement AutoCloseable if appropriate. The Cleaner and PhantomReference provide more flexible and efficient ways to release resources when an object becomes unreachable.
So when your object does not contain a non-memory resource and hence doesn't actually need finalization, you can use
import java.lang.ref.WeakReference;

class Test
{
    int x = 100;
    int y = 115;
}

class DelObj
{
    public static void main(String[] arg)
    {
        Test t1 = new Test();
        System.out.println("Values are "+t1.x+", "+t1.y+"\nObject referred by t1 is at location: "+t1);
        WeakReference<Test> ref = new WeakReference<Test>(t1);
        t1 = null; // dropping the strong reference
        System.gc(); // explicitly requesting collection
        if(ref.get() == null) System.out.println("Object deallocation is completed");
        else System.out.println("Not collected");
        Test t2 = new Test();
        System.out.println("Values are "+t2.x+", "+t2.y+"\nObject referred by t2 is at location: "+t2);
    }
}
The System.gc() call is still only a hint, but in most practical cases you will find your object being collected afterwards. Note that the hash code printed for the objects, as in Test@67f1fba0, has nothing to do with memory locations; that's a tenacious myth. The bit patterns of object memory addresses are often unsuitable for hashing; furthermore, most modern JVMs can move objects to different memory locations during their lifetime, whereas the identity hash code is guaranteed to stay the same.

Windows UMDF CComPtr IWDFMemory does not get freed

In my UMDF driver I have an IWDFMemory wrapped in a CComPtr:
CComPtr<IWDFMemory> memory;
The documentation of CComPtr says that if a CComPtr object goes out of scope, it gets released automatically. That means this code should not create any memory leaks:
void main()
{
    CComPtr<IWDFDriver> driver = /*driver*/;
    /*
    driver initialisation
    */
    {
        // new scope starts here
        CComPtr<IWDFMemory> memory = NULL;
        driver->CreateWdfMemory(0x1000000, NULL, NULL, &memory);
        // At this point 16 MB of memory has been allocated.
        // I can verify this in the task manager.
        // scope ends here
    }
    // If I understand correctly, the memory allocated in the previous scope
    // should already be freed at this point. But in the task manager I can
    // still see the 16 MB used by the process.
}
Also, if I manually assign NULL to memory or call memory.Release() before the scope ends, the memory does not get freed. I am wondering what is happening here.
According to MSDN:
If NULL is specified in the pParentObject parameter, the driver object becomes the default parent object for the newly created memory object. If a UMDF driver creates a memory object that the driver uses with a specific device object, request object, or other framework object, the driver should set the memory object's parent object appropriately. When the parent object is deleted, the memory object and its buffer are deleted.
Since you do indeed pass NULL, the memory won't be released until the CComPtr<IWDFDriver> object is released.
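A sketch of the two usual remedies, assuming UMDF 1.x interfaces and a device pointer obtained during initialization (the device variable below is illustrative): either parent the memory object to a shorter-lived framework object, or delete the framework object explicitly when you are done:
CComPtr<IWDFDevice> device = /* obtained during driver initialization */;

// Option 1: parent the allocation to the device object, so the memory
// object is deleted together with the device rather than with the driver.
CComPtr<IWDFMemory> memory;
driver->CreateWdfMemory(0x1000000, NULL, device, &memory);

// Option 2: keep the driver as parent, but delete the framework object
// explicitly once the buffer is no longer needed. Releasing the CComPtr
// alone only drops a COM reference; it does not delete the framework object.
memory->DeleteWdfObject();
memory.Release();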

How do two or more threads share memory on the heap that they have allocated?

As the title says, how do two or more threads share memory on the heap that they have allocated? I've been thinking about it and I can't figure out how they can do it. Here is my understanding of the process; presumably I am wrong somewhere.
Any thread can add or remove a given number of bytes on the heap by making a system call which returns a pointer to this data, presumably by writing to a register which the thread can then copy to the stack.
So two threads A and B can allocate as much memory as they want. But I don't see how thread A could know where the memory that thread B has allocated is located. Nor do I know how either thread could know where the other thread's stack is located. Multi-threaded programs share the heap and, I believe, can access one another's stack but I can't figure out how.
I tried searching for this question but only found language specific versions that abstract away the details.
Edit:
I am trying not to be language or OS specific but I am using Linux and am looking at it from a low level perspective, assembly I guess.
My interpretation of your question: How can thread A get to know a pointer to the memory B is using? How can they exchange data?
Answer: They usually start with a common pointer to a common memory area. That allows them to exchange other data including pointers to other data with each other.
Example:
Main thread allocates some shared memory and stores its location in p
Main thread starts two worker threads, passing the pointer p to them
The workers can now use p and work on the data pointed to by p
And in a real language (C#) it looks like this:
//start function ThreadProc and pass someData to it
new Thread(ThreadProc).Start(someData);
Threads usually do not access each other's stacks. Everything starts from one pointer passed to the thread procedure.
Creating a thread is an OS function. It works like this:
The application calls the OS using the standard ABI/API
The OS allocates stack memory and internal data structures
The OS "forges" the first stack frame: It sets the instruction pointer to ThreadProc and "pushes" someData onto the stack. I say "forge" because this first stack frame does not arise naturally but is created by the OS artificially.
The OS schedules the thread. ThreadProc does not know it has been set up on a fresh stack. All it knows is that someData is at the usual stack position where it would expect it.
And that is how someData arrives in ThreadProc. This is the way the first, initial data item is shared. Steps 1-3 are executed synchronously by the parent thread; step 4 happens on the child thread.
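The same idea in C++ terms, as a minimal sketch using std::thread (names are illustrative): the parent allocates data on the heap and hands the worker a single pointer, which is all the worker needs to find it:
#include <iostream>
#include <thread>
#include <vector>

struct SharedData {
    std::vector<int> values;
};

// The "ThreadProc": all the worker ever receives is one pointer.
void worker(SharedData* data)
{
    for (int v : data->values)
        std::cout << v << '\n';
}

int main()
{
    // The parent allocates on the heap and passes the pointer on.
    SharedData* data = new SharedData{{1, 2, 3}};
    std::thread t(worker, data); // the pointer is copied to the new thread
    t.join();                    // wait before freeing the shared data
    delete data;
}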
A really short answer from a bird's view (1000 miles above):
Threads are execution paths of the same process, and the heap actually belongs to the process (and as a result is shared by the threads). Each thread just needs its own stack to function as a separate unit of work.
Threads can share memory on a heap if they both use the same heap. By default most languages/frameworks have a single default heap that code can use to allocate memory from. In unmanaged languages you generally make explicit calls to allocate heap memory; in C, for example, that might be malloc. In managed languages heap allocation is usually automatic, and how allocation is done depends on the language, usually through the new operator, though that depends slightly on context. If you provide the OS or language context you're asking about, I might be able to provide more detail.
A thread shares with other threads belonging to the same process its code section, data section, and other operating-system resources such as open files and signals.
The part you are missing is static memory, which contains static variables.
This memory is allocated when the program starts and is assigned known addresses (determined at link time). All threads can access this memory without exchanging any data at runtime, because the addresses are effectively hardcoded.
A simple example might look like this:
#include <atomic>

// Assumed helpers, defined elsewhere.
int compute_some_value();
void do_something();
void do_more();

// Global variable in static memory.
std::atomic<int> common_var;

void thread1() {
    common_var = compute_some_value();
}

void thread2() {
    do_something();
    int current_value = common_var;
    do_more();
}
And of course the global value may be a pointer, that can be used to exchange heap memory. The producer allocates some objects, the consumer takes and uses them.
// Global variables in static memory.
std::atomic<bool> produced;
SomeData* data_pointer;

void producer_thread() {
    while (true) {
        if (!produced) {
            SomeData* new_data = new SomeData();
            data_pointer = new_data;
            // Let the other thread know there is something to read.
            produced = true;
        }
    }
}

void consumer_thread() {
    while (true) {
        if (produced) {
            SomeData* my_data = data_pointer;
            data_pointer = nullptr;
            // Let the other thread know we took the data.
            produced = false;
            do_something_with(my_data);
            delete my_data;
        }
    }
}
Please note: these are not examples of good concurrent code, but they show the general idea without too much clutter.
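For comparison, a sketch of the same hand-off using the usual blocking primitives (std::mutex plus std::condition_variable) instead of spinning on an atomic flag:
#include <condition_variable>
#include <memory>
#include <mutex>

struct SomeData { /* ... */ };

void do_something_with(SomeData*); // assumed consumer logic, as above

std::mutex m;
std::condition_variable cv;
std::unique_ptr<SomeData> slot; // guarded by m

void producer_thread() {
    std::unique_ptr<SomeData> new_data(new SomeData());
    {
        std::lock_guard<std::mutex> lock(m);
        slot = std::move(new_data);
    }
    cv.notify_one(); // wake the consumer instead of letting it spin
}

void consumer_thread() {
    std::unique_ptr<SomeData> my_data;
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return slot != nullptr; });
        my_data = std::move(slot);
    }
    do_something_with(my_data.get());
    // my_data frees the object automatically when it goes out of scope
}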

C++0x allocators

I observed that my copy of MSVC10 came with containers that appeared to allow stateful allocators, and I wrote a simple pool allocator that allocates pools for a specific type.
However, I discovered that if _ITERATOR_DEBUG_LEVEL != 0, the MSVC vector creates a proxy allocator from the passed allocator (for iterator tracking?), uses the proxy, then lets the proxy fall out of scope, expecting the allocated memory to remain. This causes problems because my allocator attempts to release its pool upon destruction. Is this allowed by the C++0x standard?
The code is roughly like this:
class _Container_proxy {};

template<class T, class _Alloc>
class vector {
    _Alloc _Alval;
    _Container_proxy* _Myproxy;
public:
    vector() {
        // construct an _Alloc<_Container_proxy> _Alproxy
        typename _Alloc::template rebind<_Container_proxy>::other
            _Alproxy(_Alval);
        // allocate
        this->_Myproxy = _Alproxy.allocate(1);
        /* other stuff, but no deallocation */
    } // _Alproxy goes out of scope

    ~vector() { // destroy the proxy
        // construct an _Alloc<_Container_proxy> _Alproxy
        typename _Alloc::template rebind<_Container_proxy>::other
            _Alproxy(_Alval);
        /* stuff, but no allocation */
        _Alproxy.deallocate(this->_Myproxy, 1);
    } // _Alproxy goes out of scope again
};
According to the giant table of allocator requirements in section 17.6.3.5, an allocator must be copyable. Containers are allowed to copy them freely. So you need to store the pool in a std::shared_ptr or something similar in order to prevent deletion while one of the allocators is in existence.
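A minimal sketch of that arrangement, with a hypothetical Pool type: every copy of the allocator, including the rebound _Container_proxy copy, shares ownership of the pool, so a temporary copy going out of scope cannot destroy it:
#include <cstddef>
#include <memory>

class Pool; // the actual pool implementation, assumed to exist elsewhere

template<class T>
class pool_allocator {
    std::shared_ptr<Pool> pool_; // shared, so every copy keeps the pool alive
public:
    typedef T value_type;
    template<class U> struct rebind { typedef pool_allocator<U> other; };

    explicit pool_allocator(std::shared_ptr<Pool> p) : pool_(p) {}

    // Rebound copies share the same pool; only the destruction of the
    // last copy releases it.
    template<class U>
    pool_allocator(const pool_allocator<U>& other) : pool_(other.pool_) {}

    T* allocate(std::size_t n) {
        // Placeholder: a real implementation would carve n*sizeof(T)
        // bytes out of *pool_.
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }

    template<class U> friend class pool_allocator;
};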

Visual C++ unmanaged and managed

What are the differences between creating an instance of a .NET object in C++ that is managed vs. unmanaged? That is to say, what are the differences between these two statements:
StreamWriter ^stream = gcnew StreamWriter(fileName);
versus
StreamWriter *stream = new StreamWriter(fileName);
My assumption is that if I use gcnew, the memory allocated for StreamWriter will be managed by the garbage collector. Alternatively if I use the pointer(*) and new keyword, I will have to call delete to deallocate the memory.
My real question is: will the garbage collector manage the memory that is allocated inside the .NET objects? For instance, if a .NET object instantiates another object and then goes out of scope, will the garbage collector manage that memory even if I use the pointer (*) and the new keyword and NOT gcnew and a handle (^)?
In C++/CLI, you can't new a .NET object, you'll get something similar to the following error:
error C2750: 'System::Object' : cannot use 'new' on the reference type; use 'gcnew' instead
Usage of new for .NET objects was allowed in the older Managed Extensions for C++ (the /clr:oldsyntax compiler flag). "Managed C++" is now deprecated because it was horrible. It has been superseded by C++/CLI, which introduced ^ and gcnew.
In C++/CLI, you must use gcnew (and ^ handles) for managed types and you must use new (and * pointers) for native types. If you do create objects on the native heap using new, it is your responsibility to destroy them when you are done with them.
Ideally you should use a smart pointer (like std::shared_ptr or std::unique_ptr) to manage the object on the native heap. However, since you can't have a native smart pointer as a field of a ref class, this isn't entirely straightforward. The simplest and most generic approach would probably be to write your own smart pointer wrapper ref class that correctly implements IDisposable.
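A minimal sketch of such a wrapper, with hypothetical names: a ref class template that owns a native pointer; its destructor maps to IDisposable::Dispose and its finalizer serves as a safety net if Dispose is never called:
#include <string>

template<typename T>
ref class native_owner {
    T* p_;
public:
    native_owner(T* p) : p_(p) {}

    // Destructor: surfaced to .NET as IDisposable::Dispose; runs
    // deterministically for stack-semantics members.
    ~native_owner() { this->!native_owner(); }

    // Finalizer: safety net if Dispose was never called.
    !native_owner() { delete p_; p_ = nullptr; }

    T* operator->() { return p_; }
    T* get() { return p_; }
};

// Usage as a field of a ref class, where a native smart pointer
// is not allowed:
ref class Holder {
    native_owner<std::string> name_;
public:
    Holder() : name_(new std::string("example")) {}
};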
When you create an object with gcnew, it is bound to the garbage collector, and the garbage collector will be responsible for destroying it.
If you use new, it won't be bound to the garbage collector, and it will be your responsibility to delete the object.
Just to clarify:
If you have a managed C# object that contains an unmanaged object, the garbage collector will not delete the unmanaged object. It will just call the managed object's finalizer (written like a destructor in C#), if one exists, before the managed object is reclaimed. You should write your own code in that finalizer to delete the unmanaged objects you created.
