dll size (debug and release) - visual-c++

I have read in other discussions that Release dlls have reduced size compared to Debug dlls. But why is it that the size of the dll I have made is the other way around: the Release dll is bigger than the Debug dll. Will it cause problems?

It won't cause problems; it's probably that the compiler is inlining more functions in the Release build and creating larger code. It all depends on the code itself.
Nothing to worry about.
EDIT:
If you are really worried about the size and not worried about speed, you can turn on 'optimize for size'. Or turn off automatic inlining and see what difference it makes.
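For reference, the MSVC switches behind those options (exact names vary slightly between VC++ versions) are roughly:
/O1 or /Os    favour smaller code over faster code
/Ob0          disable automatic inline expansion
They map to the Optimization and Inline Function Expansion settings under C/C++ in the project properties.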
EDIT:
More info: you can use dumpbin /headers to see where the DLL gets larger.
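For example (the paths are illustrative):
dumpbin /headers Release\MyLib.dll > rel.txt
dumpbin /headers Debug\MyLib.dll > dbg.txt
Comparing the section sizes (.text, .rdata, .data and the debug directories) in the two listings shows which parts of the image account for the growth.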

How much bigger is your Release DLL than your Debug DLL?
Your Debug DLLs might seem small if you are generating PDB symbol files (so the debug symbols are not actually in the DLL file), or you might be inadvertently compiling debug symbols into your Release DLL.

This can be caused by performance optimizations like loop unrolling - if it's significantly different, check your Release linker settings to make sure that you're not statically compiling anything in.

Performance can be affected if your application performs performance-critical tasks. A Release version can even be larger than a Debug version if the options to include debug information in the generated code are enabled. But this also depends on the compiler you are using.

Related

Dynamically loaded library remains loaded despite dlclose

Today I'm looking for some enlightenment on the deep magic inside the dynamic loader. I'm debugging/troubleshooting a plugin system for a C++ application running on Linux. It loads plugins via dlopen (RTLD_NOW | RTLD_LOCAL) and releases them using dlclose. Nothing extraordinary - one would think.
However, I noticed that some plugins remain loaded even after dlclose is successfully called*. I concluded this after looking at the running process' memory map using pmap. Some libs get immediately removed from process memory and others keep lingering around apparently indefinitely.
Continuing on, the dlopen man page states:
The function dlclose() decrements the reference count on the dynamic
library handle handle. If the reference count drops to zero and no
other loaded libraries use symbols in it, then the dynamic library is
unloaded.
That means the problem boils down to these two possibilities: either the reference counts aren't zero, or other loaded libs are using symbols from some - but not all - of the plugins.
I'm pretty sure (although not 100%) that the reference counts are zero. The application's plugin manager handles all plugins exactly the same. It also makes sure that a plugin doesn't get loaded multiple times. IMO loading & unloading should therefore behave the same for all plugins.
That leaves the second possibility: other loaded libs are using symbols from the plugins. Another typical case of 'That should never happen'. Although it's certainly possible. We're using gcc and default visibility and as far as I've seen nothing is stripped, so tons of symbols are being exported. Actually that worries me much more as these plugins should be independent.
Here then are my open questions at this point:
Are my conclusions so far correct?
Do you know a way to verify dlopen's reference count?
If there are internal symbols of my plugins (accidentally) used by other libs, is there a way to track down who is using which symbols?
My machine is:
Linux 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:44 UTC 2014 i686 i686 i686 GNU/Linux
* I should mention that all of the loading and unloading happens in the main thread, so there should be no multi-threading issue here.
other loaded libs are using symbols from the plugins
If the other libs were not linked against that shared library at link time, merely referring to its symbols does not prevent the shared library from being unloaded.
To debug the run-time linker, set the environment variable LD_DEBUG to all, e.g. LD_DEBUG=all ./my_app. See man ld.so for details.
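The loader's internal reference count isn't exposed directly, but a quick way to verify whether a plugin is still mapped after dlclose is dlopen with the glibc extension RTLD_NOLOAD - a minimal sketch, with the helper name and usage purely illustrative:
#include <dlfcn.h>

// Returns true if the library at 'path' is still mapped into the process.
// RTLD_NOLOAD never loads anything; it only returns a handle if the
// library is already resident.
bool is_still_loaded(const char* path) {
    if (void* h = dlopen(path, RTLD_NOLOAD | RTLD_LAZY)) {
        dlclose(h);  // balance the reference we just took
        return true;
    }
    return false;
}
Combined with the LD_DEBUG output this makes it easy to see which plugins really disappear after dlclose.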

Slow linking in release with optimization disabled

I have a project (VC2005) which takes an unreasonable time (over 40 min) to link in Release, while it links in less than 5 seconds in Debug.
Both builds have incremental linking disabled and all files are located on the same drive.
Disabling Linker optimization in Release does not help.
Task manager never shows more than 150,000 K memory used by linker, which for a computer with 3GB of RAM is nothing.
I build much bigger projects and have never noticed such a difference in build time.
Any ideas why this happens?
As remarked, the most probable reason is /LTCG (whole program optimization).
Other factors might be individual files compiled with /Gy (you should see some warnings in the output), or /OPT:REF, /OPT:ICF (check project properties/linker/optimization), or - very unlikely - you're unknowingly running some phase of PGO instrumentation.

How to increase probability of Linux core dumps matching symbols?

I have a very complex cross-platform application. Recently my team and I have been running stress tests and have encountered several crashes (and core dumps accompanying them). Some of these core dumps are very precise, and show me the exact location where the crash occurred with around 10 or more stack frames. Others sometimes have just one stack frame with ?? being the only symbol!
What I'd like to know is:
Is there a way to increase the probability of core dumps pointing in the right direction?
Why isn't the number of stack frames reported consistent?
Any best-practice advice for managing core dumps?
Here's how I compile the binaries (in release mode):
Compiler and platform: g++ with glibc-2.3.2-95.50 on CentOS 3.6 x86_64 -- This helps me maintain compatibility with older versions of Linux.
All files are compiled with the -g flag.
Debug symbols are stripped from the final binary and saved in a separate file.
When I have a core dump, I use GDB with the executable which created the core, and the symbols file. GDB never complains that there's a mismatch between the core/binary/symbols.
Yet I sometimes get core dumps with no symbols at all! It's understandable since I'm linking against non-debug versions of libstdc++ and libgcc, but it would be nice if at least the stack trace showed me where in my own code the faulty call originated (although it may ultimately end in ??).
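For reference, the split-debug-info workflow described above (strip the symbols into a side file, then point GDB at both) is roughly the following; the binary name is illustrative:
objcopy --only-keep-debug myapp myapp.debug
strip --strip-debug --strip-unneeded myapp
objcopy --add-gnu-debuglink=myapp.debug myapp
gdb myapp core
(gdb) bt
With the debuglink in place GDB locates myapp.debug automatically; otherwise symbol-file myapp.debug loads it by hand.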
Others sometimes have just one stack frame with "??" being the only symbol!
There can be many reasons for that, among others:
the stack frame was trashed (overwritten)
EBP/RBP (on x86/x64) is currently not holding any meaningful value — this can happen e.g. in units compiled with -fomit-frame-pointer or asm units that do so
Note that the second point may occur simply by, for example, glibc being compiled in such a way. Having the debug info for such system libraries installed could mitigate this (something like what the glibc-debug{info,source} packages are on openSUSE).
gdb has more control over the program than glibc, so glibc's backtrace call would naturally be unable to print a backtrace if gdb cannot do so either.
But shipping the source would be much easier :-)
As an alternative, on a glibc system, you could use the backtrace function call (or backtrace_symbols or backtrace_symbols_fd) and filter out the results yourself, so only the symbols belonging to your own code are displayed. It's a bit more work, but then, you can really tailor it to your needs.
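A minimal sketch of that approach (glibc-specific; symbol names only appear for functions the binary actually exports, e.g. when linking with -rdynamic):
#include <execinfo.h>
#include <unistd.h>

// Dump the current call stack to stderr using glibc's backtrace facilities.
// backtrace_symbols_fd writes straight to the file descriptor and does not
// call malloc, unlike backtrace_symbols.
void dump_stack() {
    void* frames[64];
    int n = backtrace(frames, 64);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
}
A common use is to install something like this in a SIGSEGV handler so a trace gets printed even when the core file itself turns out to be unusable.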
Have you tried installing debugging symbols of the various libraries that you are using? For example, my distribution (Ubuntu) provides libc6-dbg, libstdc++6-4.5-dbg, libgcc1-dbg etc.
If you're building with optimisation enabled (e.g. -O2), then the compiler can blur the boundaries between stack frames, for example by inlining. I'm not sure that this would cause backtraces with just one stack frame, but in general the rule is to expect great debugging difficulty, since the code you are looking at in the core dump has been modified and so does not necessarily correspond to your source.

Release LIB is huge compared to debug

I have a static library project with standard Debug/Release build options. I was intrigued to spot that while the debug .lib is a fairly large 22 MB, the release one is a whopping 100 MB. And this is not a massive code base either: about 75 classes, none of them very big.
My questions are whether this is normal, and whether I should care?
I would check to see if you're statically linking libraries in release mode and dynamically linking them in debug mode. You might be statically linking the C++ runtime for instance.
I had the same problem. The fix is very simple: under Project Properties / Configuration Properties / General / Whole Program Optimization, select 'No Whole Program Optimization' instead of 'Use Link Time Code Generation'. The size of my static library decreased from 5 MB to 1.3 MB.
No, this is not normal. It should be the other way around. Yes, you should care.
I'd start by looking at the sizes again, to make sure I didn't transpose the release and debug sizes somehow.
Then look at the libraries you're linking in for release and debug. Did you accidentally link a debug library into the release build, and a release library into the debug build?
Take a close look at your settings for release and debug. Something very fishy is going on.
Is it possible that a massive amount of this code is inline, and the debug version isn't "inlining"?
Ideally the release lib should be smaller than the debug one.
I guess you may be statically linking other libs such as MFC, ATL, etc.
Check your release and debug build settings.
Use #pragma once to avoid including the same file multiple times.
I would typically expect the reverse...
Is it possible that there are big swaths of code inside preprocessor-conditional blocks that only get included in release builds?
Template code is especially suspect in this case.
Update
I think that the issue is most likely caused by linking to static libs in release mode, and shared libs in debug mode...
+1 karoberts
There is one thing that can explain such a size: debug symbols embedded in the release build (as opposed to being emitted into a PDB). Are you sure you don't have debug symbols being generated for your release build? (Which Visual C++ are you using?)

Why is the startup of an App on linux slower when using shared libs?

On the embedded device I'm working on, the startup time is an important issue. The whole application consists of several executables that use a set of libraries. Because space in FLASH memory is limited we'd like to use shared libraries.
The application works as usual when compiled and linked with shared libraries, and the amount of FLASH memory used is reduced as expected.
The difference from the version linked with static libs is that the startup time of the application is about 20 s longer, and I have no idea why.
The application runs on an ARM9 CPU at 180 MHz with Linux 2.6.17 OS,
16 MB FLASH (JFFS File System) and 32 MB RAM.
Because shared libraries have to be linked at runtime, usually by dlopen() or something similar. There's no such step for static libraries.
Edit: some more detail. dlopen has to perform the following tasks.
Find the shared library
Load it into memory
Recursively load all dependencies (and their dependencies....)
Resolve all symbols
This requires quite a lot of IO operations to accomplish.
In a statically linked program all of the above is done at compile time, not runtime. Therefore it's much faster to load a statically linked program.
In your case, the difference is exaggerated by the relatively slow hardware your code has to run on.
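One way to see where that time goes on the device (assuming a glibc-based system; the program name is illustrative) is the run-time linker's own statistics output:
LD_DEBUG=statistics ./my_app
which prints how much time the loader spent on relocations and loading objects at startup. Plain ldd ./my_app lists the libraries that have to be found and mapped before main() runs.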
This is a fine example of the classic tradeoff of speed and space.
You can statically link all your executables so that they are faster but then they will take more space
OR
You can have shared libraries that take less space but also more time to load.
So decide what you want to sacrifice.
There are many factors for this difference (OS, compiler, etc.), but a good list of reasons can be found here. Basically, shared libraries were created for space reasons, and much of the "magic" involved in making them work comes with a performance hit.
(As a historical note the original Netscape navigator on Linux/Unix was a statically linked big fat executable).
This may help others with similar problems:
The reason startup took so long in my case was that GCC's default is to export all symbols of a library.
A big improvement is to build with the compiler flag -fvisibility=hidden.
All symbols that the lib has to export then have to be marked with
__attribute__ ((visibility("default")))
See the GCC wiki on visibility and the very fine article "How to Write Shared Libraries".
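A minimal sketch of the usual pattern (the macro and function names are illustrative):
// Build the library with -fvisibility=hidden, then mark only the real API:
#define PLUGIN_API __attribute__((visibility("default")))

PLUGIN_API void plugin_init();   // exported from the .so
void internal_helper();          // hidden: not placed in the dynamic symbol table
Fewer exported symbols means fewer relocations and symbol lookups for the dynamic linker at load time, which is exactly where the startup time was going here.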
OK, I have now learned that the use of shared libraries has its disadvantages regarding speed. I found this article about dynamic linking and loading enlightening. The loading process seems to be much lengthier than I had expected.
Interesting... typically the loading time for a shared library is unnoticeable compared to a fat app that is statically linked. So I can only surmise that the system is either very slow to load a library from flash memory, or the loaded library is being checked in some way (e.g. .NET apps run a checksum over all loaded DLLs, which can increase startup time considerably). It could also be that the shared libraries are being loaded as needed and unloaded afterwards, which could indicate a configuration problem.
So, sorry I can't say why, but I think it's an issue with your ARM device/OS. Have you tried instrumenting the startup code, or statically linking one of the most commonly used libraries to see if that makes a large difference? Also, put the shared libs in the same directory as the app to reduce the time it takes to search the filesystem for them.
One option which seems obvious to me, is to statically link the several programs all into a single binary. That way you continue to share as much code as possible (probably more than before), but you will also avoid the overhead of the dynamic linker AND save the space of having the dynamic linker on the system at all.
It's pretty easy to combine several executables into the same one; you normally just examine argv[0] and decide which routine to call based on that.
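A minimal sketch of such a multi-call binary, BusyBox-style (the applet names are made up):
#include <cstdio>
#include <cstring>

static int run_updater(int, char**) { std::puts("updater"); return 0; }
static int run_monitor(int, char**) { std::puts("monitor"); return 0; }

int main(int argc, char** argv) {
    // Dispatch on the name the binary was invoked as (via symlinks or hard links).
    const char* name = std::strrchr(argv[0], '/');
    name = name ? name + 1 : argv[0];
    if (std::strcmp(name, "updater") == 0) return run_updater(argc, argv);
    if (std::strcmp(name, "monitor") == 0) return run_monitor(argc, argv);
    std::fprintf(stderr, "unknown applet: %s\n", name);
    return 1;
}
Each former executable then becomes a symlink to the single binary, so the code is shared without any run-time linking at all.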
