How and when does ngen.exe work?

I want to know the benefit of pre-JIT compilation (ngen.exe). What is the role of the Native Image Generator (NGen) process and why is it required?
Please provide an example.

For code to execute on the .NET platform, the Common Intermediate Language (CIL) representation needs to be translated into machine code. If this happens immediately before execution, it is referred to as JIT (Just-In-Time) compilation. The output of the JIT compiler is not persisted, so your managed application has to go through JIT compilation on every launch.
Alternatively, you can use pre-compilation to reduce the startup overhead associated with JIT compilation. NGen performs this pre-compilation and keeps the native images in a native image cache. Applications can then run with the native images and may experience faster startup due to the reduced JIT compilation overhead. Initially, NGen was an install-time technology: developers made application installers issue NGen commands to trigger pre-compilation during installation. For more details, check out NGen Revs Up Your Performance with Powerful New Features. That article provides an example application that leverages NGen.
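For example, an installer's post-install step might run something like the following from an elevated command prompt (the application path here is made up, and the exact ngen.exe location depends on the framework version and bitness):

"%WINDIR%\Microsoft.NET\Framework64\v4.0.30319\ngen.exe" install "C:\Program Files\MyApp\MyApp.exe"

This compiles MyApp.exe and its dependencies into native images and places them in the native image cache, so subsequent launches skip most of the JIT work.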
With Windows 8 (.NET 4.5), a new NGen mode, "Auto NGen", was introduced. Basically, the .NET runtime generates usage logs for managed applications. When the system is idle, an automatic maintenance task runs in the background and generates native images. This way, developers no longer have to deal with NGen explicitly. Note that this feature is only enabled for .NET 4.5+ applications that target the Windows Store or use the GAC. Here's an MSDN page that may be helpful:
Creating Native Images
And here is a high-level overview of NGen and related technologies:
Got a need for speed? .NET applications start faster
Lastly, the .NET Framework libraries themselves use NGen for better performance. When the .NET Framework is serviced, some of the native images become invalid, and NGen needs to run again to regenerate them. This is done automatically via the .NET Runtime Optimization Service, which runs during idle time.
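If you don't want to wait for the idle-time service, you can trigger the same work manually from an elevated command prompt; both commands are part of ngen.exe itself:

ngen update
ngen executeQueuedItems

ngen update regenerates any native images that have become invalid, and executeQueuedItems immediately drains the queue of pending compilation jobs that the service would otherwise process in the background.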

When a .NET compiler compiles C# or VB.NET code, it only compiles it halfway and produces CIL code. When you run this half-compiled .NET EXE, the JIT compiler runs in the background and compiles the CIL the rest of the way into machine code. This mode is termed normal JIT.
You can also go the other way around and avoid runtime compilation by running a fully compiled EXE. That ahead-of-time compilation is done using ngen.exe. In this scenario the JIT does not participate at runtime. This is termed pre-JIT mode.
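A quick way to see the difference on your own machine (the assembly name below is hypothetical) is to pre-compile an assembly, confirm that a native image now exists, and then remove it again to fall back to normal JIT:

ngen.exe install MyApp.exe
ngen.exe display MyApp.exe
ngen.exe uninstall MyApp.exe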
If you want to see how they affect performance, you can watch this YouTube video, which demonstrates the normal-JIT and pre-JIT modes of compilation:
Explain JIT, Ngen.exe, Pre-jit, Normal-Jit and Econo-Jit.? (.NET interview questions)

Per MSDN:
The Native Image Generator (Ngen.exe) is a tool that improves the performance of managed applications. Ngen.exe creates native images, which are files containing compiled processor-specific machine code, and installs them into the native image cache on the local computer. The runtime can use native images from the cache instead of using the just-in-time (JIT) compiler to compile the original assembly.
I have used NGEN in the past during installation so that the software would start up faster.

NGen (Native Image Generator) basically compiles .NET byte code (CIL) into native code for the computer it's running on. The benefit is that, since you're not compiling the code to native every time you run it or need it, but just once, the application starts and runs faster. If you want more information, there are plenty of resources out there about the trade-offs between JIT and Ahead-of-Time compilation (which is what NGen does).

Related

Can I use mtouch to build a library?

I want to build a DLL from all my package dependencies using mtouch. I have tried different options and failed.
Giving the root-assembly as my DLL plus all the packages gives me MT0052: No command specified.
I think mtouch cannot do that. From the doc Using mtouch to Bundle Xamarin.iOS Apps, you can see:
The process of turning a .NET executable into an application is mostly driven by the mtouch command, a tool that integrates many of the steps required to turn the application into a bundle. This tool is also used to launch your application in the simulator and to deploy the software to an actual iPhone or iPod Touch device.
It just turns an existing .NET executable into an application; it cannot help you bundle a library into an application.
You can also look at the COMPILATION MODE section of the mtouch doc, which covers these options (an example invocation follows the list):
--abi=ABI
Comma-separated list of ABIs to target. Currently supported: armv6, armv6+llvm, armv7, armv7+llvm, armv7+llvm+thumb2, armv7s, armv7s+llvm, armv7s+llvm+thumb2. Fat binaries are automatically created if more than one ABI is targetted.
To use the LLVM optimizing compiler code generation backend instead of Mono's default code generation backend target one of the llvm ABIs. Build times take considerably longer for native code, but the generated code is shorter and performs better.
You may also instruct the LLVM code generator to produce ARM Thumb instructions by targetting one of the llvm+thumb2 targets. Thumb instructions produce more compact executables.
--cxx
Enables C++ support. This is required if you are linking with some third party libraries that use the C++ runtime. With this flag, mtouch uses the C++ compiler to drive the compilation process instead of the C compiler.
-sim=DIRECTORY
This compiles the program and assemblies passed on the command line into the specified directory for use with the iOS simulator. This generates a standalone program that is entirely driven by the C# or ECMA CIL code.
-dev=DIRECTORY
This compiles the program and assemblies passed on the command line into the specified directory for use on an iPod Touch, iPhone or iPad device. The target directory can be used as the contents of a .app directory This generates a standalone program that is entirely driven by the C# or ECMA CIL code.
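Putting those options together, a device build might look roughly like this (the output directory, ABIs and assembly name are made up for illustration; only the flags quoted above are used):

mtouch -dev=./MyApp.app --abi=armv7,armv7s MyApp.exe

A simulator build would swap -dev for -sim:

mtouch -sim=./MyApp.sim MyApp.exe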
mtouch does not support building a binding library; it just compiles an existing executable that already references its libraries.
By the way, if you want to bind a third-party library, the official doc recommends that you use Binding iOS Libraries.

Why does the Windows SDK now include strmbase.lib?

Windows SDK 7.1 was the last version that included the BaseClasses DirectShow sample. But later Windows SDKs still ship strmbase.lib, the compiled library. What use is the library without the headers?
It might be included without good reason, simply waiting for its cleanup time, or there may be a non-obvious reason, such as other legacy libraries referencing this static library when linked.
Either way, you are correct that the DirectShow base classes are no longer in the Windows SDK. Those interested in DirectShow development would typically get the DirectShow base classes and samples from Windows-classic-samples/Win7Samples and build the code, including strmbase.lib, themselves.

What is the difference between Release|AnyCPU and Release|ARM?

I'm working on the WinRT version of my class library DLL. Finally, after the huge "code cleanup", my project is at the build step and I have two options: build the solution with Release|AnyCPU as usual, or build it with Release|ARM (which is unclear to me). Which dependencies will my DLL pick up or avoid during the build, what will be different, and will there be specific IL optimizations in the second case?
If you're only using managed code, there's no reason not to use Release|AnyCPU. This way the same package will be used for all three platforms (ARM, x86 and x64).
On the other hand, if your project references a natively compiled library, you'll need to set a specific platform, like Release|ARM, that your native library is compiled for. If the native library is installed as an extension (e.g. SQLite for Windows Runtime), you'll be able to compile your app for all three target platforms, each one referencing the appropriate native library, though they will need to be individual packages instead of a single universal one.
You'll still be able to submit your app to the store as a single app even if it has 3 separate packages, one for each platform.
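For reference, this is roughly what the two configurations look like inside a C# project file; these are standard MSBuild conditional property groups and only a sketch, not settings taken from your project:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <PlatformTarget>AnyCPU</PlatformTarget>
  <Optimize>true</Optimize>
  <OutputPath>bin\Release\</OutputPath>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|ARM' ">
  <PlatformTarget>ARM</PlatformTarget>
  <Optimize>true</Optimize>
  <OutputPath>bin\ARM\Release\</OutputPath>
</PropertyGroup>

The managed code is still compiled to IL in both cases; the ARM configuration mainly pins the target architecture so that any native, architecture-specific references resolve correctly.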

Native sqlite library crashes with C++/CLI

The short story
When compiling a C++ application for the x64 platform with Common Language Runtime support and using the native sqlite library inside it, the application crashes inside sqlite3MemRealloc, attempting to allocate a huge amount of memory (around 5 GB).
When the same application is compiled without CLR support, the required functionality works and NO attempt to allocate this amount of memory is made. I put a breakpoint with a condition to verify this fact.
The database itself is a small 800KB file, and we are attempting to run a simple "select * from XYZ" query. Tried it both with the existing sqlite 3.7.11 that we have in our code base and the latest sqlite 3.7.14.
This problem is consistent. No matter how many times I rebuild the application or play with some settings - with CLR support it crashes, without CLR support it works.
The longer story
I was trying to develop an application that leverages code from an existing code-base written in C++ but also leverages the power of the .NET framework.
I created a C++/CLI application that linked against the existing code (which uses sqlite inside it). My code does not use sqlite directly. The existing code that does use sqlite is a native C++ library that the rest of the code base depends on, so I cannot touch it easily and therefore cannot simply use System.Data.SQLite.
I isolated this problem by removing all dependencies on the .NET framework and creating a simple application that only utilizes the existing native code without using any .NET framework code, and compiling it twice - with and without CLR support.
Eventually, the problem was solved by compiling sqlite with the memsys5 memory allocator and configuring sqlite to use it at startup. Once it no longer depended on MSVCRT memory allocation but on its own internal allocation strategy, everything worked.
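For reference, this is roughly what that looks like (the heap size is arbitrary and the function name is ours; treat it as a sketch of the approach, not our exact code). sqlite3.c is built with SQLITE_ENABLE_MEMSYS5 defined, and the private heap is handed to sqlite before any other sqlite call:

// Build sqlite3.c with: /D SQLITE_ENABLE_MEMSYS5
#include <stdlib.h>
#include "sqlite3.h"

int init_sqlite_private_heap(void)
{
    // Give sqlite a private 32 MB heap managed by memsys5 instead of the
    // MSVCRT allocator. Must run before sqlite3_initialize() or any other
    // sqlite API call; the last argument is the minimum allocation size.
    static const int kHeapSize = 32 * 1024 * 1024;
    void *heap = malloc(kHeapSize);
    if (heap == NULL)
        return SQLITE_NOMEM;

    int rc = sqlite3_config(SQLITE_CONFIG_HEAP, heap, kHeapSize, 64);
    if (rc != SQLITE_OK)
        return rc;

    return sqlite3_initialize();
}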
It seems that, by some mysterious force, the existence of the CLR interferes with MSVCRT's memory allocation functions.
More info can be found here.

System.Security.VerificationException when running ANTS profiler in .net 4.0

I've been using RedGate's ANTS Performance Profiler for a while now. We recently updated our 3rd party dlls (Telerik) to their .net 4.0 version. When we did this, I no longer can profile our code because as soon as I hit a Telerik control I get:
System.Security.VerificationException: Operation could destabilize the runtime.
I spoke with RedGate and they told me, "Basically it's all down to Microsoft and their changes to CASPOL. ANTS has more features and these features require high privileges so that ANTS can read metadata out of assemblies in the running environment..."
Their suggestion was to run the process in full trust mode. How do I do that?
I've tried making adjustments to our Assembly.cs file, but since the problem doesn't seem to be generated from our code, there's not much I can do in terms of adjusting code.
P.S. Our app is a WPF/Winforms desktop application. I've found solutions for web apps by making changes to the web.config, but I can't really seem to find an equivalent solution (or understand it if it exists).

Resources