What are managed and unmanaged objects in C++/CLI?
Managed objects are a feature of the .NET Framework and its implementation of a C++-like language, and have their memory managed for you by the .NET garbage collector. C++ itself has no such concept; instead it has a generally better way of managing all resources (not just memory), called RAII.
The managed/unmanaged distinction is not really a C++ concept; it is Microsoft .NET terminology.
In normal, plain C++ applications, the application itself is responsible for deleting all the memory it has allocated. This requires the developer to be very careful about when to delete memory. If memory is deleted too soon, the application may crash when it later follows a pointer that still refers to it. If memory is deleted too late, or not deleted at all, the application has a memory leak.
Environments like Java and .NET solve this problem with garbage collectors. The developer no longer deletes memory; the garbage collector does it for them.
In the 'native' .NET languages (like C#), the whole language is built around the garbage collector. To make the transition from normal, plain C++ applications to .NET easier, Microsoft added some extensions to its C++ compiler, so that C++ developers could also benefit from the advantages of .NET.
Whenever you use normal, plain C++, Microsoft talks about unmanaged, or native C++. If you use the .Net extensions in C++, Microsoft talks about managed C++. If your application contains both, you have a mixed-mode application.
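To make the distinction concrete in .NET terms, here is a minimal C# sketch (the class name is made up for illustration); the same split exists in C++/CLI, where gcnew allocates managed objects on the GC heap and plain new allocates native objects that you must delete yourself.

```csharp
using System;
using System.Runtime.InteropServices;

class ManagedVsUnmanagedDemo
{
    static void Main()
    {
        // Managed: allocated on the GC heap. There is no way to delete it
        // explicitly; the garbage collector reclaims it once it is unreachable.
        var managedBuffer = new byte[1024];
        Console.WriteLine("Managed buffer length: {0}", managedBuffer.Length);

        // Unmanaged: raw memory from the native heap. The GC knows nothing
        // about it, so the programmer must free it, just like in plain C++.
        IntPtr unmanagedBuffer = Marshal.AllocHGlobal(1024);
        try
        {
            Marshal.WriteByte(unmanagedBuffer, 0, 42);
        }
        finally
        {
            Marshal.FreeHGlobal(unmanagedBuffer); // forget this and you leak
        }
    }
}
```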
Managed objects do not exist in C++.
They exist in Microsoft's .NET extensions to C++, and a complete explanation would be a bit long, sorry.
Does Boo have a garbage collector?
What type?
Yes, as provided by the CLR.
Depends on the implementation of the runtime.
For Microsoft .NET, it uses a three-generation GC; see the .NET Development (General) Technical Articles: Garbage Collector Basics and Performance Hints.
For Mono, it currently uses the Boehm conservative GC, but is migrating to a new generational, precise GC.
Since it's a .NET/CLR language, it relies on the garbage collector provided by that infrastructure, although the garbage collector itself is an implementation detail of the runtime.
The two main CLR implementations are the Microsoft .NET Framework and the mono project.
If you're interested, you can read about the implementation of the MS.NET GC or the Mono GC.
I'm writing a .NET 4.0 library that should be efficient and simple to use.
The library is used by referencing it and using its different classes.
Should I use .NET 4.0 Tasks to make things more efficient internally? I fear that it might make using the library more complex and limited, since users might want to decide for themselves when and where to use tasks and threads.
If your answer depends on the kind of library, here is more information:
The library is Pcap.Net, which is a wrapper for WinPcap and includes a packet interpretation framework.
It is only an issue when the user can 'see' the threading, i.e. when you hand out access to data that you might also be accessing on another thread. That is probably not a good idea.
But when the parallel processing stays completely inside your library, there is very little chance your users will object.
Should you? Dunno. How about giving people the option by providing extension methods that use tasks against the library, and pushing those out in a separate DLL? If you want to use tasks, reference the extension library and go crazy. Otherwise, stick with the core DLL.
I believe there are many projects that follow this pattern with LINQ: they provide their core library and a separate .Linq.DLL which has the extension methods...
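As a rough illustration of that layout, here is a sketch in which PacketReader is a made-up stand-in for one of your classes (not actual Pcap.Net API) and the extension assembly simply wraps a synchronous call in a .NET 4.0 Task:

```csharp
using System.Threading.Tasks;

// Hypothetical core-library type standing in for one of your classes;
// not real Pcap.Net API.
public class PacketReader
{
    public byte[] ReadNextPacket()
    {
        // ... synchronous work happens here ...
        return new byte[0];
    }
}

// Shipped in a separate MyLibrary.Tasks.dll so only users who want
// Task-based helpers take the dependency.
public static class PacketReaderTaskExtensions
{
    public static Task<byte[]> ReadNextPacketAsync(this PacketReader reader)
    {
        // .NET 4.0 style: wrap the synchronous call in a Task.
        return Task.Factory.StartNew(() => reader.ReadNextPacket());
    }
}
```

Users who reference only the core DLL keep full control of their own threading; users who also reference the extension DLL can opt into the Task-based overloads.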
We're building an MMO server, highly optimized for latency.
So, with CLR 4.0 and its newly introduced background workstation GC, is it now possible to use background garbage collection on a Windows server?
Apparently not. See this article, which specifically states that Microsoft is not offering background GC for server GC in V4.0 (though it looks like this is under consideration).
You might also find this essay (PDF) interesting, given what you're trying to do.
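As a side note, you can at least check at runtime which GC flavour your process ended up with; a minimal C# sketch using GCSettings (the output text is just illustrative):

```csharp
using System;
using System.Runtime;

class GcModeCheck
{
    static void Main()
    {
        // True when server GC was selected (e.g. via <gcServer enabled="true"/>);
        // in CLR 4.0, background GC is only available with workstation GC.
        Console.WriteLine("Server GC: {0}", GCSettings.IsServerGC);

        // Interactive = concurrent/background workstation GC,
        // Batch = non-concurrent collection.
        Console.WriteLine("Latency mode: {0}", GCSettings.LatencyMode);
    }
}
```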
I'm new to C#/Java and plan to prototype it for a soft real-time system.
If I wrote a C#/Java app just as I would in C++ in terms of memory management, that is, if I explicitly "delete" the objects that I no longer use, would the app still be affected by the garbage collector? If so, how does it affect my app?
Sorry if this sounds like an obvious answer, but being new, I want to be thorough.
Take a look at IBM's Metronome, their garbage collector for hard real-time systems.
Your premise is wrong: you cannot explicitly “delete” objects in either Java or C#, so your application will always be affected by the GC.
You may try to trigger a collection by calling GC.Collect (C#) with an appropriate parameter (e.g. GC.MaxGeneration) but this still doesn’t guarantee that the GC won’t be working at other moments during execution.
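A minimal sketch of that call, purely to show the mechanism (whether forcing collections helps at all is another matter):

```csharp
using System;

class ForcedCollectionDemo
{
    static void Main()
    {
        // Request a full collection of all generations. This is only a
        // request; the GC still runs on its own schedule the rest of the time.
        GC.Collect(GC.MaxGeneration);

        // Optionally wait for any finalizers queued by that collection.
        GC.WaitForPendingFinalizers();

        Console.WriteLine("Collection requested for generations 0-{0}",
                          GC.MaxGeneration);
    }
}
```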
By explicitly "delete" if you mean releasing the reference to the object then you are reliant on the garbage collector in C# managed code - see the System.GC class for ways of controlling it.
If you choose to write unmanaged C# code then you will have more control over memory, akin to C++, and will be responsible for deleting your instantiated objects, able to use pointers, etc. For more info see MSDN doc - Unsafe Code and Pointers (C# Programming Guide).
In unmanaged code you will not be at the mercy of the the Garbage Collector and its indeterminate cleanup algorithms.
I don't know if Java has an equivalent unmanaged mode, but this Microsoft info might help provide some direction on C#/.NET to use its available features for your requirement of dealing with the garbage collector.
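To give a flavour of what that looks like, here is a small sketch of unsafe C# (it must be compiled with the /unsafe switch); it is only an illustration, not a replacement for the MSDN article above:

```csharp
using System;

class UnsafeDemo
{
    // Requires compiling with the /unsafe compiler switch.
    static unsafe void Main()
    {
        int[] numbers = { 1, 2, 3, 4 };

        // 'fixed' pins the managed array so the GC cannot move it while
        // we hold a raw pointer into it.
        fixed (int* p = numbers)
        {
            for (int i = 0; i < numbers.Length; i++)
            {
                Console.WriteLine(*(p + i));
            }
        }

        // stackalloc allocates on the stack; it is reclaimed automatically
        // when the method returns and never involves the garbage collector.
        int* scratch = stackalloc int[8];
        scratch[0] = 42;
        Console.WriteLine(scratch[0]);
    }
}
```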
In C# or Java you can't delete objects; you can only make them eligible for deletion by dropping your references to them. The actual freeing of memory is done by the garbage collector.
It might be the case that the garbage collector never runs during the lifetime of your application, but it is likely to run. When your system is running short of resources, that is the most likely time for the runtime to run its GC routines, and when resources are low the GC becomes the highest-priority thread. So your application does get affected.
You can minimize the effect by calculating the load and resources your application will need over its lifetime and making sure you buy hardware that is good enough for that, but even then you can't simply benchmark your way to a guarantee.
Besides the GC, a managed application does incur a slight overhead compared with a traditional C++ application, due to the extra layer of indirection involved, plus a slight first-time performance penalty, since the runtime needs to be up and running before your application gets started.
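If you want to see how much collection work a given load actually causes, one simple probe is to sample GC.CollectionCount around the workload; a minimal sketch (the allocation loop is just a stand-in for real work):

```csharp
using System;

class GcImpactProbe
{
    static void Main()
    {
        int gen0Before = GC.CollectionCount(0);
        int gen2Before = GC.CollectionCount(2);

        // Simulated allocation-heavy workload (placeholder for your own code).
        for (int i = 0; i < 1000000; i++)
        {
            byte[] temp = new byte[128];
            temp[0] = 1;
        }

        Console.WriteLine("Gen 0 collections during workload: {0}",
                          GC.CollectionCount(0) - gen0Before);
        Console.WriteLine("Gen 2 collections during workload: {0}",
                          GC.CollectionCount(2) - gen2Before);
    }
}
```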
Here are some references for developing real-time systems with the .NET Compact Framework:
IEEE - C# and the .NET Framework: Ready for Real Time?
MSDN - Real-Time Behavior of the .NET Compact Framework
They both talk about the memory requirements of using the .NET Framework.
C# and Java are not intended for hard real-time development. Soft real-time is attainable, however, as you note.
For C#, the best you can do is implement the finalize/dispose pattern:
http://msdn.microsoft.com/en-us/library/b1yfkh5e(VS.71).aspx
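For reference, a minimal sketch of that pattern, with a Marshal-allocated buffer standing in for whatever unmanaged resource you actually hold:

```csharp
using System;
using System.Runtime.InteropServices;

public class NativeResourceHolder : IDisposable
{
    // Placeholder for some unmanaged resource, e.g. native memory or a handle.
    private IntPtr _unmanagedBuffer = Marshal.AllocHGlobal(256);
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // no need to finalize once disposed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        // Release unmanaged state in both paths; release other managed
        // IDisposable fields only when 'disposing' is true.
        Marshal.FreeHGlobal(_unmanagedBuffer);
        _unmanagedBuffer = IntPtr.Zero;
        _disposed = true;
    }

    ~NativeResourceHolder()
    {
        Dispose(false); // safety net if Dispose was never called
    }
}
```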
You can request a collection, but the GC is typically much better at determining when and how to do this itself.
http://msdn.microsoft.com/en-us/library/system.gc(VS.71).aspx
For Java, there are many options to optimize it:
http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
There are also third-party solutions like IBM's Metronome, as noted above.
This is a real science within CS itself.
I have an application that is built in C# .NET. It uses Excel as a presentation layer and unmanaged C++ as a processing engine. Is there a tool I can use to check for memory leaks in each component?
AQTime will instrument both managed and unmanaged code. I have used it successfully to find memory leaks in a mixed managed/unmanaged project.