We're using AQTime's coverage profiler to check coverage results for unit tests. It seems to work okay in general, but it has a nasty habit of overestimating coverage because some functions don't appear at all. I think this is because the linker has stripped them since they're never called, but it's not ideal: I'd like them to show up as "not covered".
Does anyone know if there's a way to configure either Visual C++ or AQTime such that these functions will be correctly tagged as "not covered"?
AQtime gets the list of routines from a module's debug information. Since the linker has stripped some routines, there is no debug information for them and AQtime does not "see" them.
As a rule, all linkers have an option to enable/disable this feature. For example, in a Visual C++ project this option is named References and located under the Linker | Optimization properties group. Remove the value of this option or set it to No (/OPT:NOREF) in your application's Debug configuration and the linker will not remove unused functions. The option is documented here.
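For reference, the equivalent on a plain command line would look something like this (a sketch only; the file name is a placeholder, and /DEBUG is shown because a coverage build needs debug information anyway):

cl /Zi /c example.cpp
link /DEBUG /OPT:NOREF example.obj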
Odd. Whether the compiler/linker strips them or not, the fact is that at the end of test execution there is no record of them being executed. So if the tool can list all the functions, it will find no evidence of execution for the stripped ones, and it should therefore report them as "not executed".
Our SD C++ Test Coverage tool does not have this bizarre behavior, at least not if you actually include the containing compilation unit or header file among those that should be instrumented. (If you fail to list a header file that contains a function body, that function won't get reported.) It will even report that a function which has been inlined has been executed, regardless of the number of places it has been inlined.
Let's say I want to use a specific Linux / POSIX feature that is provided conditionally based on feature test macros. For example, the type cpu_set_t, the macro CPU_SET_ZERO, and the function sched_setaffinity.
Ideally I would just like to tell CMake that I need those, and it should figure out what extra feature test macros to set or fail with a nice error message if it can't be provided on the current system. Is that possible?
I am aware that I can look it up in the man pages and manually use add_definitions(-D_GNU_SOURCE), but that becomes tedious once multiple features that were introduced and deprecated in different versions of the POSIX standard are combined. In my experience, it can become difficult to maintain portability across different versions of the glibc implementation.
There are the CMake platform checks, but they only seem to help in checking. So I get the error during cmake rather than make, but I still have to figure out the right feature test macros manually.
cmake-compile-features seems to offer only features directly related to the compiler, not the library.
If you are just looking to determine whether a function or variable exists, you can use the CheckSymbolExists module. Likewise, there is CheckStructHasMember for structs (assuming there is a standard member you can check for). So:
include (CheckSymbolExists)
include (CheckStructHasMember)
CHECK_SYMBOL_EXISTS(CPU_SET_ZERO sched.h CPU_SET_ZERO_exists)
CHECK_SYMBOL_EXISTS(sched_setaffinity sched.h sched_setaffinity_exists)
CHECK_STRUCT_HAS_MEMBER(cpu_set_t <member?> sched.h cpu_set_t_exists)
It appears that cpu_set_t is an opaque type, so instead you can use the CheckCXXSourceCompiles module, which is a frontend for the try_compile command. It is a generic way to determine whether any particular code compiles. try_compile is used extensively by 'base' CMake to determine features (try a search in the Modules directory!). Essentially, you pass it a minimal source file, which should fail to compile if your feature is not present, and it reports the result back to your CMake script.
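For example, here is a minimal sketch of such a check, under the assumption that _GNU_SOURCE is the macro needed on glibc; the result variable HAVE_SCHED_SETAFFINITY is my own name, not from the original answer, and the test body uses CPU_ZERO, the zeroing macro that sched.h actually provides:

include (CheckCXXSourceCompiles)

# Compile the test with _GNU_SOURCE defined, since sched_setaffinity is a GNU extension.
set (CMAKE_REQUIRED_DEFINITIONS -D_GNU_SOURCE)
CHECK_CXX_SOURCE_COMPILES("
#include <sched.h>
int main() {
    cpu_set_t set;
    CPU_ZERO(&set);
    return sched_setaffinity(0, sizeof(set), &set);
}
" HAVE_SCHED_SETAFFINITY)
unset (CMAKE_REQUIRED_DEFINITIONS)

if (NOT HAVE_SCHED_SETAFFINITY)
    message (FATAL_ERROR "sched_setaffinity is not available on this system")
endif ()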
This question has some answers on SO but mine is slightly different. Before marking as duplicate, please give it a shot.
MSVC has always provided the /Gy compiler option, which packages individual functions into COMDAT sections. At the same time, the linker also provides the /OPT:ICF option. Is my understanding right that these two options must be used in conjunction? That is, while the former packages functions into COMDATs, the latter eliminates redundant COMDATs. Is that correct?
If yes, does that mean we should either use both or turn off both?
An answer from someone who communicated with me off-line; it helped me understand these options a lot better.
===================================
That is essentially true. Suppose we talk just C, or C++ but with no member functions. Without /Gy, the compiler creates object files that are in some sense irreducible. If the linker wants just one function from the object, it gets them all. This is especially a consideration in programming for libraries: if you mean to be kind to the library's users, you should write your library as lots of small object files, typically one non-static function per object, so that a user of the library doesn't bloat from having to carry code that never actually executes.
With /Gy, the compiler creates object files that have COMDATs. Each function is in its own COMDAT, which is to some extent a mini-object. If the linker wants just one function from the object, it can pick out just that one. The linker's /OPT switch gives you some control over what the linker does with this selectivity - but without /Gy there's nothing to select.
Or very little. It's at least conceivable that the linker could, for instance, fold functions that are each the whole of the code in an object file and happen to have identical code. It's certainly conceivable that the linker could eliminate a whole object file that contains nothing that's referenced. After all, it does this with object files in libraries. The rule in practice, however, used to be that if you add a non-COMDAT object file to the linker's command line, then you're saying you want that in the binary even if unreferenced. The difference between what's conceivable and what's done is typically huge.
Best, then, to stick with the quick answer. The linker options benefit from being able to separate functions (and variables) from inside each object file, but the separation depends on the code and data to have been organised into COMDATs, which is the compiler's work.
===================================
As answered by Raymond Chen in Jan 2013
As explained in the documentation for /Gy, function-level linking allows functions to be discardable during the "unused function" pass, if you ask for it via /OPT:REF. It does not alter the actual classical model for linking. The flag name is misleading. It's not "perform function-level linking". It merely enables it by telling the linker where functions begin and end. And it's not so much function-level linking as it is function-level unlinking. -Raymond
(This snippet might make more sense with some further context: here are the posts about the classical linking model: 1, 2.)
So in a nutshell - yes. If you activate one switch without the other, there would be no observable impact.
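For reference, a typical combination of the two looks something like this on the command line (a sketch only; the file names are placeholders, and the same settings exist in the project properties):

cl /Gy /O2 /c a.cpp b.cpp
link /OPT:REF /OPT:ICF a.obj b.obj /OUT:app.exe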
I am unable to identify a compiler flag that would turn off all of the calls that seem pointless in production code and are presumably mainly there for tracing.
--no-traces
does not accomplish this.
Calls like:
HX_STACK_LINE
HX_STACK_PUSH
Perhaps it should be possible to turn these off, and to disable the APIs that rely on them if necessary, for production code.
I was worried about this at first as well. However, it turns out that as long as you don't define certain variables, all of those inserted lines are removed when the C++ code is compiled.
(For reference, the variables are HXCPP_DEBUGGER, HXCPP_DEBUG, HXCPP_STACK_VARS, HXCPP_STACK_LINE, and HXCPP_STACK_TRACE, and none of them are defined by default)
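To illustrate the mechanism, here is a hedged sketch of the usual pattern (this is not the actual hxcpp header, and hx_record_line is a hypothetical name): when the stack-tracking define is absent, the macro expands to nothing and the call disappears from the compiled code.

#ifdef HXCPP_STACK_LINE
    #define HX_STACK_LINE(number) hx_record_line(number); // hypothetical tracking hook, debug builds only
#else
    #define HX_STACK_LINE(number)                         // expands to nothing in release builds
#endif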
My context is MSVC 6.
Starting with a successfully compiled program, with browse information built, I can go into an existing function and hover over a variable, and the IDE will show me the data type and variable name. One could well imagine that the information is coming from the browse file.
In practice, if I create a new variable,
int z;
and hover over the z, the IDE will show me the data type and variable name. I have not compiled the program yet, so the browse file has not been updated. This seems to say that there is a portion of the IDE that watches as you type and stays aware of the data types and functions as you enter them. For all I know, it may compile them internally as well.
I have also noticed that syntax errors can effectively disable this functionality.
I haven't seen this discussed anywhere. Is there a term for this sort of functionality?
It's probably the lexical analysis and syntactic analysis at work, building up its own symbol table. It's part of the parsing phase of most compilers. That would explain why the functionality breaks when you see syntax errors: the parsing needs to occur successfully to produce a reliable symbol table.
In compilers, it's usually called a symbol table.
I'm not sure that there's a term common to all integrated development environments.
I have inherited a very large and complex project (actually, a 'solution' consisting of 119 'projects', most of which are DLLs) that was built and tested under VC8 (VS2005), and I have the task of porting it to VC9 (VS2008).
The porting process I used was:
1. Copy the VC8 .sln file and rename it to a VC9 .sln file.
2. Copy all of the VC8 project files, and rename them to VC9 project files.
3. Edit all of the VC9 project files, s/vc8/vc9/.
4. Edit the VC9 .sln, s/vc8/vc9/.
5. Load the VC9 .sln with VS2008, and let the IDE 'convert' all of the project files.
6. Fix compiler and linker errors until I got a good build.
So far, I have run into the following issues in that last step.
1) A change in the way decorated names are calculated, causing truncation of the names.
This is more than just a warning (http://msdn.microsoft.com/en-us/library/074af4b6.aspx). Libraries built with this warning will not link with other modules. Applying the solution given in MSDN was non-trivial, but doable. I addressed this problem separately in How do I increase the allowed decorated name length in VC9 (MSVC 2008)?
2) A change that does not allow the assignment of zero to an iterator. This is per the spec, and it was fairly easy to find and fix these previously-allowed coding errors. Instead of assigning zero to an iterator, use the value end() (see the sketch after this list).
3) for-loop scope is now per the ANSI standard. Another easy-to-fix problem.
4) More space required for pre-compiled headers. In some cases a LOT more space was required. I ended up using /Zm999 to provide the maximum PCH space. If PCH memory usage gets bumped up again, I assume that I will have to forgo PCH altogether, and just endure the increase in what is already a very long build time.
5) A change in requirements for copy ctors and default dtors. It appears that in template classes, under certain conditions that I haven't quite figured out yet, the compiler no longer generates a default ctor or a default dtor. I suspect this is a bug in VC9, but there may be something else that I'm doing wrong. If so, I'd sure like to know what it is.
6) The GUIDs in the sln and vcproj files were not changed. This does not appear to impact the build in any way that I can detect, but it is worrisome nevertheless.
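For illustration, here is a minimal sketch of the iterator change mentioned in item 2 (the container type is just an example):

#include <vector>

void example()
{
    std::vector<int> v;
    // std::vector<int>::iterator it = 0;    // accepted by VC8, rejected by VC9
    std::vector<int>::iterator it = v.end(); // portable replacement
    (void)it; // silence the unused-variable warning
}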
Note that despite all of these issues, the project built, ran, and passed extensive QA testing under VC8. I have also back-ported all of the changes to the VC8 projects, where they still build and run just as happily as they did before (using VS2005/VC8). So, all of my changes required for a VC9 build at least appear to be backward-compatible, although the regression testing is still underway.
Now for the really hard problem: I have run into a difference in the startup sequence between VC8 and VC9 projects. The program uses a small-object allocator modeled after Loki, in Andrei Alexandrescu's book Modern C++ Design. This allocator is initialized using a global variable defined in the main program module.
Under VC8, this global variable is constructed at the very beginning of the program startup, from code in a module crtexe.c. Under VC9, the first module that executes is crtdll.c, which indicates that the startup sequence has been changed. The DLLs that are starting up appear to be confusing the small-object allocator by allocating and deallocating memory before the global object can initialize the statistics, which leads to some spurious diagnostics. The operation of the program does not appear to be materially affected, but the QA folks will not allow the spurious diagnostics to get past them.
Is there some way to force the construction of a global object prior to loading DLLs?
What other porting issues am I likely to encounter?
Is there some way to force the construction of a global object prior to loading DLLs?
How about the DELAYLOAD option? So that DLLs aren't loaded until their first call?
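For reference, delay-loading is requested on the linker command line and needs the delay-load helper library; a sketch (the DLL name is a placeholder):

link app.obj /DELAYLOAD:helper.dll delayimp.lib /OUT:app.exe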
That is a tough problem, mostly because you've inherited a design that's inherently dangerous: you're not supposed to rely on the initialization order of global variables.
It sounds like something you could try to work around by replacing the global variable with a singleton that other functions retrieve by calling a global function or method that returns a pointer to the singleton object. If the object exists at the time of the call, the function returns a pointer to it. Otherwise, it allocates a new one and returns a pointer to the newly allocated object.
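A minimal sketch of that approach, with SmallObjectAllocator standing in as a hypothetical name for the allocator's bookkeeping object:

// Returns the singleton, constructing it on first use so that the first
// caller (even code running during DLL startup) triggers initialization.
SmallObjectAllocator* GetAllocator()
{
    static SmallObjectAllocator* instance = 0;
    if (instance == 0)
        instance = new SmallObjectAllocator(); // allocated on first call, never freed
    return instance;
}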
The problem, of course, is that I can't think of a singleton implementation that would avoid the problem you're describing. Maybe this discussion would be useful: http://www.oneunified.net/blog/Personal/SoftwareDevelopment/CPP/Singleton.article
That's certainly an interesting problem. I don't have a solution other than perhaps to change the design so that there is no dependence on the undefined order of link/DLL startup. Have you considered linking with the older linker? (or whatever the VS.NET term is)
Because the behavior of your variable and allocator relied on some (unknown at the time) arbitrary order of startup, I would probably fix that so that it is not an issue in the future. I guess you are really asking if anyone knows how to do some voodoo in VC9 to make the problem disappear. I am interested in hearing it as well.
How about this:
Make your main program a DLL too, call it main.dll, linked to all the other ones, and export the main function as, say, mainEntry(). Remove the global variable.
Create a new main exe which has the global variable and its initialization, but doesn't link statically to any of the other application DLLs (except for the allocator stuff).
This new main.exe then dynamically loads the main.dll using LoadLibrary(), then uses GetProcAddress to call mainEntry().
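A hedged sketch of what that bootstrap executable might look like, using the hypothetical main.dll and mainEntry names from above:

#include <windows.h>

typedef int (*MainEntryFn)(int, char**);

int main(int argc, char** argv)
{
    // Globals in this small EXE (including the allocator's bookkeeping) are
    // constructed before any application DLL is loaded.
    HMODULE lib = LoadLibraryA("main.dll");
    if (lib == NULL)
        return 1;

    MainEntryFn entry = (MainEntryFn)GetProcAddress(lib, "mainEntry");
    int result = (entry != NULL) ? entry(argc, argv) : 1;

    FreeLibrary(lib);
    return result;
}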
The solution to the problem turned out to be more straightforward than I originally thought. The initialization order problem was caused by the existence of several global variables of types derived from std container types (a basic design flaw that predated my position with that company). The solution was to replace all such globals with singletons. There were about 100 of them.
Once this was done, the initialization (and destruction) order was under programmer control.