Is there an option in the Visual Studio 2012 C++ compiler to make it warn if you use uninitialized class members?
The RTC-Checks are not compatible with managed C++ (/clr)
What kind of data member? A pointer member variable or one that gets its constructor automatically called?
It is really up to the author to be experienced enough to be paranoid about pointers and to watch their initialization, assignment, and dereferencing like a hawk to make sure they are safe. No compiler or static analyzer can take the place of a competent programmer in making sure pointers are used safely.
You basically want to find these issues at compile time if possible, and at run-time only as a last resort.
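For illustration, here is a minimal sketch of the kind of member this is about (class and member names are made up): a constructor that forgets a member, and the usual fix of initializing everything in the initializer list.

    // Sketch only: 'Widget' and its members are made-up names.
    class Widget
    {
        const char* m_name;   // never initialized below - using it is undefined behavior
        int m_count;          // also left uninitialized
    public:
        Widget() {}           // this is the kind of constructor you want a warning about
    };

    class SafeWidget
    {
        const char* m_name;
        int m_count;
    public:
        SafeWidget() : m_name(""), m_count(0) {}   // every member initialized in the ctor
    };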
For compile-time tools, you have some options that might help you:
The static analyzer that comes with Visual Studio can warn if a pointer is being used without being checked first, but it does not give the same emphasis to pointer class members. I've seen a 3rd party static analyzer called CppCheck that does perform that check.
Coverity (another static analyzer) would probably do that too. Ah, but wait, Coverity doesn't work for managed code (last I checked). And it's so expensive you would probably have to sell your house, and your neighbor's house, to pay for it, then have a Coverity engineer come to your office and take 3 days to get it installed, and then it will take 24 hours to run the analysis.
For runtime checking, I have no idea what alternative you might have for RTC with managed code. But it would be very Very VERY wise to minimize the amount of pure native code you expose to the /clr switch. Some programmer years ago turned that on for our product's largest project (it had hundreds of files). Even though only 4 or 5 of those hundreds of files used managed code, he still turned on the switch for all the other pure native files.
As a result, there were thousands of crashes over the years until we reversed that stupidity.
So put your code in clear, manageable layers. Separate the managed C++ code from the pure native C++ code, and in Visual Studio turn on the /clr switch only for the managed files.
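A rough sketch of that layering (all file and class names here are made up): the native class lives in files compiled without /clr, and a single thin C++/CLI wrapper is the only file compiled with /clr.

    // NativeEngine.h / NativeEngine.cpp -- pure native C++, compiled WITHOUT /clr
    class NativeEngine
    {
    public:
        int Compute(int x) const { return x * 2; }    // placeholder for the real native work
    };

    // ManagedBridge.cpp -- the only file in the project compiled WITH /clr
    // #include "NativeEngine.h"
    public ref class ManagedBridge
    {
        NativeEngine* m_native;                        // native object held by pointer
    public:
        ManagedBridge() : m_native(new NativeEngine()) {}
        ~ManagedBridge() { this->!ManagedBridge(); }   // destructor delegates to the finalizer
        !ManagedBridge() { delete m_native; m_native = nullptr; }
        int Compute(int x) { return m_native->Compute(x); }
    };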
And by all means, use static analysis tools as much as possible.
I'm a little late with this question, but better late than never. I've been using Visual Studio 6.0 since it came out, but recently switched to VS 2013 on a new PC.
I've gotten my projects to build under 2013, but the resulting executables it produces are consistently bigger than VS6.0 produced. I've seen a similar thread on here about that happening in the transition from VS2008 to VS2010, and the comments and suggestions there all seem to attribute the change to changes in the MFC libraries that are statically linked in. But my projects are straight C code. No C++, let alone MFC. And the 'Use of MFC' option on my project is set to "Use Standard Windows Libraries" (presumably set by the import tool that generated the 2013-compatible project). The only non-standard library it uses is wsock32.lib.
The extra size isn't a killer, but it's significant relative to the size of the whole app. My biggest .exe goes from 980Kb to 1.3Mb - about a 35% increase in size for an app whose small size was a selling point (i.e. install this tiny app and you have access to all of our goodies). That's without debugging info - the increase on the debug version is even bigger - but I don't really care about that.
Any ideas how to strip out the new cruft - or even to know what it is?
This is a good manual on how to make your binaries smaller.
The basic ideas are the following (a minimal example putting them together follows the list):
Don't forget about Release mode
#define WIN32_LEAN_AND_MEAN before including <windows.h>
Dynamically link to the C++ runtime
Compile the executable without debugging information
Compile with /O1, an 'optimize for size' flag
Remove iostream and fstream headers; use low-level I/O instead if possible
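Putting those points together, here is a minimal sketch (the file name and the exact flag set are just an example; /O1 and /MD are the standard MSVC switches mentioned above):

    // tiny.cpp -- build from a Visual Studio command prompt, for example:
    //   cl /O1 /MD tiny.cpp /link /SUBSYSTEM:WINDOWS user32.lib
    // /O1 = optimize for size, /MD = link the C runtime dynamically,
    // and leaving out /Zi and /DEBUG keeps debugging information out of the binary.

    #define WIN32_LEAN_AND_MEAN   // trims rarely used APIs out of <windows.h>
    #include <windows.h>

    int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
    {
        // No iostream/fstream: plain Win32 calls keep the CRT footprint small.
        MessageBoxA(NULL, "small binary", "demo", MB_OK);
        return 0;
    }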
Typically you generate a MAP file on both systems (the /MAP linker option) and figure out which sections make the largest contributions to the size.
Anton's answer reminds me: first check if they are both linked the same way (both static or both dynamic, otherwise it is apples and oranges)
I have here a C++/CLI solution which isn't mixed with native C++ (although we have that type too). It consists of three projects, of which two are relevant to my question.
The first one is a static library (.lib) and deals with Active Directory matters.
The second one is the executable main project (.exe) which depends on the other projects.
I'm new to Visual Studio 2012 and want to use the advantages of tools like the code analysis. Running the code analysis over the solution reveals several CA2122 warnings:
CA2122 Do not indirectly expose methods with link demands
I understand the security concerns related to this warning and I think I understood how to deal with it, although I'm also new to this security stuff. These warnings are related to the Active Directory code when the whole solution is examined; when only the lib project is examined they do not appear and everything seems to be fine.
Now to the core of the problem:
I tried to mark all methods I'm warned about with the SecuritySafeCritical attribute
--> no changes, same warnings
I've solved this warning in another project by marking the whole assembly as SecurityCritical and adding SecuritySafeCritical to the problematic method. That does not work here, since adding an AssemblyInfo.cpp that marks the assembly as SecurityCritical does not affect the problem. (I know that *.cpp files seem to be obsolete in managed static libraries, since the code apparently has to live entirely in the header files, which makes this kind of project somewhat obsolete... but we don't want a .dll for every small part, and we also want this stuff encapsulated in its own project instead of having some loose header files or mixing it with other regions.)
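For reference, the attribute placement I tried looks roughly like this in C++/CLI (the class, method, and AssemblyInfo contents are made-up examples; only the attribute names come from System::Security):

    // AssemblyInfo.cpp (or any one .cpp in the assembly)
    using namespace System::Security;
    [assembly:SecurityCritical];                   // assembly-level default: everything is critical

    // DirectoryLookup.h -- made-up example class
    public ref class DirectoryLookup
    {
    public:
        [System::Security::SecuritySafeCritical]   // safe entry point callable from transparent code
        static System::String^ FindUser(System::String^ accountName)
        {
            // ... the Active Directory calls that carry the LinkDemand would go here ...
            return accountName;
        }
    };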
After that I tried to mark the whole assembly of the main project as SecurityTransparent, because as far as I understand it, SecuritySafeCritical code can be called by both SecurityTransparent and SecurityCritical code (which for me covers every kind of security). --> My SecuritySafeCritical methods are now flagged with CA2141 warnings, and many other methods produce new warnings (most of them related to exception handling):
CA2141: Transparent methods must not satisfy LinkDemands
CA2140: Transparent code must not reference security critical items
So I decided to try marking this assembly as SecurityCritical too.
--> My SecuritySafeCritical methods finally produce no warnings, but all those other warnings from methods with exception handling are still there.
So I don't know how to solve this problem. I assume that having a managed static library is the problem, and with just a DLL project maybe I could solve it as mentioned in 2., but I want to avoid shipping yet another *.dll project with our programs.
I searched for a solution but found nothing that would help in this case. Information on this topic is also rare, out of date (because it relates to .NET Framework 2.0, while the whole security model seems to have changed massively with .NET Framework 4.0), or hard for me to understand. So I hope someone has an idea what I could try or what I should do.
In a way I am looking for best-practice here.
I have a common project that is shared by many of my apps. This project has the FlurryAnalytics and ATMHud DLLs as references.
If I do not also reference these DLLs in the main project, the apps will often, but not always, fail in the debug-to-device test. In the debug-to-simulator I don't need to add these DLLs to the main project.
So, the question is: Do I have to include references to DLLs in the main project that I have in sub projects all the time?
Whenever possible I use references to project files (csproj files) over references to assemblies (.dll). It makes a lot of things easier, like:
code navigation (IDE);
automatic build dependency (the source code you're reading is the one you're building, not something potentially out-of-sync);
source-level debugging (even if you can have it without it, you're sure to be in-sync);
(easier) switch between Debug|Release|... configurations;
changing defines (or any project-level option);
E.g.
Solution1.sln
Project1a.csproj
MonoTouch.Dialog.csproj (link to ../Common/MonoTouch.Dialog.csproj)
Solution2.sln
Project2a.csproj
MonoTouch.Dialog.csproj (link to ../Common/MonoTouch.Dialog.csproj)
Common.sln
MonoTouch.Dialog.csproj
Large solutions might suffer a bit from doing this (build performance, searching across files...). The larger they get, the less likely it is that everyone has to know about every part of them. So there's a diminishing return on the advantages, while the inconvenience grows with each project being added.
E.g. I would not want to have project references to every framework assembly inside Mono (but personally I could live with all the SDK assemblies of MonoTouch ;-)
Note: Working with assembly references should not cause you random errors while debugging on device. If you can create such a test case, please file a bug report :-)
I've never been a big fan of MFC, but that's not really the point. I read that Microsoft is due to release a new version of MFC in 2010 and it really struck me as odd - I thought MFC was dead (no ill intention, I really did).
Is MFC used for new development? If so, what's the benefit? I couldn't imagine it having any benefit over something such as C# (or even just C++ using the Win32 API, for that matter).
There is a ton of code out there using MFC. I see these questions all the time: is this still used, is that still used? The answer is yes. I work in a very large organization which still employs hundreds of people who write in COBOL. If it has ever been used in the enterprise, it will continue to be used until there is no more hardware to support it; then some company will pay someone to write an emulator so that the old code will still work.
The Navy still uses ships with magnetic-core memory in their computers, and I'm sure they have people to work on them. Technology, once created, can never not be supported. It's a bit of a deus ex machina situation where large organizations aren't completely sure what their systems do and have such an overriding fear of bringing the enterprise to its knees that they have no desire to try out your newfangled technologies (BTW, we pay IBM for best-effort support on OS/2).
Also, MFC is a perfectly acceptable solution for Windows development, given that it is an object model which wraps the system API - which is pretty much all that most people get out of .NET.
As an addendum, and since this question is up for a bounty, here is a quote from MS regarding MFC in VS 11:
In every release we need to balance our investment across the various areas of the product. However, we still believe that MFC is the most fully-featured library for building native desktop applications. We are fully committed to supporting and maintaining MFC at a high level of quality. Here’s a short list of some of the issues that we fixed in MFC for Visual Studio 11:
Here is the link if you want to read the full post
Coolness is not a factor in choosing the technology for a new system. Yes, if you are a student or want to play around, you choose whatever you want.
But in the real world each technology has advantages and drawbacks. A year ago one of the teams started a new project; it was decided that it would be done in MFC.
The reason is very simple: they have to use the Windows API a lot for low-level operations with the printer, Internet Explorer, and God knows what else.
C# was not even in the game; the decision was made between MFC and Qt. Both had the needed functionality and both could easily integrate the low-level functionality; the only difference was that some team members already had MFC experience, so they didn't have to waste time and money on training.
Let's suppose they choose C# and WPF:
-1 You have to wrap all native C++ and ASM code in a DLL (ouch, this can be painful; instead of coding you write wrappers).
-1 You probably need two teams now, one for the UI and one for the WinAPI stuff. It is very unlikely that you'll find a lot of people able to write both C# and WinAPI code. Agreed, either way you need someone to make the interface pretty (programmers usually suck at this, and such people cost more), but at least with C++-only code there is no wait time between two teams: need a UI modification? No problem, I don't have to wait for the UI designer; he will make it pretty later.
+1 You can write the UI code in C# and WPF, let's say the UI development is faster, but the UI is only 1/4 of the project, so the total gain is probably very small.
-1 Performance degradation: for every small operation you can't do in C# you call an external DLL (this is a minor issue since the program runs on quad cores with 8 GB of RAM).
So in conclusion: MFC is still used for new development because the requirements and the costs decide the technology for a project and it just so happens that MFC is the best in some cases.
MFC is still used for some new development, and a lot of maintenance development (including inside of Microsoft).
While it can be minutely slower than using the Win32 API directly, the performance loss really is tiny -- rarely as much as a whole percent. Using .NET, the performance loss is considerably greater (in my testing, rarely less than 10%, with 20-30% being typical, and higher still for heavy computation). Just for example, I have a program that does Eigenvector/Eigenvalue computation on fairly large arrays. My original version using C++ and MFC runs one test case in just under a minute on our standard test machine. Some of my coworkers decided it would be cool to re-implement it in C#. Their version takes almost three minutes on the same machine (quad core, 16 gigs of RAM, so no, not "legacy" hardware). I'll admit I haven't looked at their code too closely, so maybe it could be improved, but they're decent coders, so a 3:1 improvement strikes me as unlikely.
With MFC, it's also easy to bypass the framework and use the Win32 API directly when/if you want to. With .NET, you can use P/Invoke for that, but it's quite painful by comparison.
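As a trivial sketch of what bypassing the framework looks like (the helper name is made up): from any MFC window you can grab the underlying HWND and make raw Win32 calls with it.

    #include <afxwin.h>   // MFC core

    // Made-up helper: talks to an MFC window through plain Win32 calls.
    void RetitleDirectly(CWnd& wnd, LPCWSTR title)
    {
        HWND hwnd = wnd.GetSafeHwnd();   // the raw handle MFC is wrapping
        ::SetWindowTextW(hwnd, title);   // straight Win32, no MFC wrapper involved
        ::FlashWindow(hwnd, TRUE);
    }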
MFC has been updated with every release of Visual Studio. It just isn't the headline feature item.
As for new development, yes. It is still used and will continue to be so (even though I, like you, prefer not to). Many organizations made the technology decision years ago and have no reason to change.
I do think you are talking about well-established shops though, folks with more interest in maintaining / enhancing what has been written rather than staying on the cutting edge.
The release of the MFC Feature Pack (one or two years ago, iirc) was the biggest extension of MFC in around 10 years, and it gave quite a new boost to MFC development. I guess a lot of companies decided to maintain their legacy applications, push them forward, and develop new applications on that basis.
For me (as someone who has to maintain a large MFC application), the bigger problem is the decreasing development and support of (Microsoft and third-party) components rather than MFC itself. For instance, porting to 64-bit is not easy if a lot of old, unsupported, pure 32-bit ActiveX components are assembled in the application.
I did a project last year based on MFC. I'm not sure why MFC was chosen, but it was adequate for making a virtual 3D graphical user interface - a building management security system - with a 10 frames per second refresh rate that ran efficiently on win32-based PCs dating back to the mid-1990s. The executable (which requires only core win32 system DLLs) is less than 400K - not an easy accomplishment with modern tools.
There are advantages to staying away from managed code (maybe you're writing a driver UI, or doing COM).
That and there's tons of MFC code out there. Maybe you work for Company X, and need to use one of the zillion DLLs they've been writing over the last dozen years.
I can think of one commercial software title that benefits from using MFC over C#: Wwise[1]. Wwise is both an authoring tool and a sound engine. C++ is an obvious choice for the sound engine, so it makes sense to write the authoring tool in C++ as well. They could have built the authoring tool in C# and the sound engine in C++, but if they're debugging a problem with the sound engine that's reproducible through the Wwise authoring tool, it's easier for them to see the whole call stack just like that.
I think there are some ways of debugging a mixed call stack nowadays, but maybe that wasn't available when they first made Wwise? In any case, using MFC ensured that they wouldn't need a solution to the mixed call stack problem. The call stack just works.
[1]Wwise is built on MFC: https://www.audiokinetic.com/fr/library/edge/?source=SDK&id=plugin_frontend_windows.html
I have inherited a very large and complex project (actually, a 'solution' consisting of 119 'projects', most of which are DLLs) that was built and tested under VC8 (VS2005), and I have the task of porting it to VC9 (VS2008).
The porting process I used was:
Copy the VC8 .sln file and rename it to a VC9 .sln file.
Copy all of the VC8 project files, and rename them to VC9 project files.
Edit all of the VC9 project files, s/vc8/vc9.
Edit the VC9 .sln, s/vc8/vc9/
Load the VC9 .sln with VS2008, and let the IDE 'convert' all of the project files.
Fix compiler and linker errors until I got a good build.
So far, I have run into the following issues in that last step.
1) A change in the way decorated names are calculated, causing truncation of the names.
This is more than just a warning (http://msdn.microsoft.com/en-us/library/074af4b6.aspx). Libraries built with this warning will not link with other modules. Applying the solution given in MSDN was non-trivial, but doable. I addressed this problem separately in How do I increase the allowed decorated name length in VC9 (MSVC 2008)?
2) A change that does not allow the assignment of zero to an iterator. This is per the spec, and it was fairly easy to find and fix these previously-allowed coding errors. Instead of assigning zero to an iterator, use the value end(), as shown in the short example after this list.
3) for-loop scope is now per the ANSI standard. Another easy-to-fix problem.
4) More space required for pre-compiled headers. In some cases a LOT more space was required. I ended up using /Zm999 to provide the maximum PCH space. If PCH memory usage gets bumped up again, I assume that I will have to forgo PCH altogether, and just endure the increase in what is already a very long build time.
5) A change in requirements for copy ctors and default dtors. It appears that in template classes, under certain conditions that I haven't quite figured out yet, the compiler no longer generates a default ctor or a default dtor. I suspect this is a bug in VC9, but there may be something else that I'm doing wrong. If so, I'd sure like to know what it is.
6) The GUIDs in the sln and vcproj files were not changed. This does not appear to impact the build in any way that I can detect, but it is worrisome nevertheless.
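The short example promised under item 2 (the container and function names are made up):

    #include <vector>

    void Example(std::vector<int>& v)
    {
        // std::vector<int>::iterator it = 0;      // accepted by VC8, rejected by VC9
        std::vector<int>::iterator it = v.end();   // the portable way to say "no element"
        if (it == v.end())
        {
            // nothing found yet
        }
    }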
Note that despite all of these issues, the project built, ran, and passed extensive QA testing under VC8. I have also back-ported all of the changes to the VC8 projects, where they still build and run just as happily as they did before (using VS2005/VC8). So, all of my changes required for a VC9 build at least appear to be backward-compatible, although the regression testing is still underway.
Now for the really hard problem: I have run into a difference in the startup sequence between VC8 and VC9 projects. The program uses a small-object allocator modeled after Loki, from Andrei Alexandrescu's book Modern C++ Design. This allocator is initialized using a global variable defined in the main program module.
Under VC8, this global variable is constructed at the very beginning of the program startup, from code in a module crtexe.c. Under VC9, the first module that executes is crtdll.c, which indicates that the startup sequence has been changed. The DLLs that are starting up appear to be confusing the small-object allocator by allocating and deallocating memory before the global object can initialize the statistics, which leads to some spurious diagnostics. The operation of the program does not appear to be materially affected, but the QA folks will not allow the spurious diagnostics to get past them.
Is there some way to force the construction of a global object prior to loading DLLs?
What other porting issues am I likely to encounter?
Is there some way to force the construction of a global object prior to loading DLLs?
How about the DELAYLOAD option? So that DLLs aren't loaded until their first call?
That is a tough problem, mostly because you've inherited a design that's inherently dangerous: you're not supposed to rely on the initialization order of global variables.
It sounds like something you could try to work around by replacing the global variable with a singleton that other functions retrieve by calling a global function or method that returns a pointer to the singleton object. If the object exists at the time of the call, the function returns a pointer to it. Otherwise, it allocates a new one and returns a pointer to the newly allocated object.
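A minimal sketch of that idea (the allocator name is a made-up stand-in): a function-local static is constructed the first time any caller asks for it, regardless of which module happens to start up first.

    // SmallObjectAllocator stands in for the real Loki-style allocator type.
    class SmallObjectAllocator { /* ... */ };

    SmallObjectAllocator& GetAllocator()
    {
        // Constructed on first use instead of at some arbitrary point during startup.
        // Caveat: before VS2015, initialization of a local static is not thread-safe.
        static SmallObjectAllocator instance;
        return instance;
    }

    // Callers go through GetAllocator() instead of touching a global object directly.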
The problem, of course, is that I can't think of a singleton implementation that would avoid the problem you're describing. Maybe this discussion would be useful: http://www.oneunified.net/blog/Personal/SoftwareDevelopment/CPP/Singleton.article
That's certainly an interesting problem. I don't have a solution other than perhaps to change the design so that there is no dependence on the undefined order of global/DLL startup. Have you considered linking with the older linker? (or whatever the VS.NET term is)
Because the behavior of your variable and allocator relied on some (unknown at the time) arbitrary order of startup I would probably fix that so that it is not an issue in the future. I guess you are really asking if anyone knows how to do some voodoo in VC9 to make the problem disappear. I am interested in hearing it as well.
How about this,
Make your main program a DLL too, call it main.dll, linked to all the other ones, and export the main function as say, mainEntry(). Remove the global variable.
Create a new main exe which has the global variable and its initialization, but doesn't link statically to any of the other application DLLs (except for the allocator stuff).
This new main.exe then dynamically loads the main.dll using LoadLibrary(), then uses GetProcAddress to call mainEntry().
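A rough sketch of that arrangement (the entry-point signature and the bootstrap type are assumptions; error handling is minimal; main.dll must export mainEntry with extern "C" linkage so GetProcAddress can find it by that name):

    // main.exe -- owns the global allocator state and links to none of the app DLLs.
    #include <windows.h>

    // Stand-in for the global object that must exist before any application DLL runs.
    struct AllocatorBootstrap { AllocatorBootstrap() { /* initialize allocator stats */ } };
    AllocatorBootstrap g_allocatorInit;

    typedef int (*MainEntryFn)(int, char**);

    int main(int argc, char** argv)
    {
        HMODULE mainDll = ::LoadLibraryW(L"main.dll");   // application DLLs load only now
        if (mainDll == NULL)
            return 1;

        MainEntryFn mainEntry =
            reinterpret_cast<MainEntryFn>(::GetProcAddress(mainDll, "mainEntry"));
        if (mainEntry == NULL)
            return 2;

        int result = mainEntry(argc, argv);
        ::FreeLibrary(mainDll);
        return result;
    }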
The solution to the problem turned out to be more straightforward than I originally thought. The initialization order problem was caused by the existence of several global variables of types derived from std container types (a basic design flaw that predated my position with that company). The solution was to replace all such globals with singletons. There were about 100 of them.
Once this was done, the initialization (and destruction) order was under programmer control.