I'm a little late with this question, but better late than never. I've been using Visual Studio 6.0 since it came out, but recently switched to VS 2013 on a new PC.
I've gotten my projects to build under 2013, but the resulting executables it produces are consistently bigger than the ones VS 6.0 produced. I've seen a similar thread on here about that happening in the transition from VS2008 to VS2010, and the comments and suggestions there all seem to attribute the change to changes in the MFC libraries that are statically linked in. But my projects are straight C code. No C++, let alone MFC. And the 'Use of MFC' option on my project is set to "Use Standard Windows Libraries" (presumably set by the import tool that generated the 2013-compatible project). The only non-standard library it uses is wsock32.lib.
The extra size isn't a killer, but it's significant relative to the size of the whole app. My biggest .exe goes from 980 KB to 1.3 MB - about a 35% increase for an app whose small size was a selling point (i.e. install this tiny app and you have access to all of our goodies). That's without debugging info - the increase on the debug version is even bigger - but I don't really care about that.
Any ideas how to strip out the new cruft - or even to know what it is?
This is a good manual on how to make your binaries smaller.
The basic ideas are the following (a small command-line sketch follows the list):
Don't forget about Release mode
Define WIN32_LEAN_AND_MEAN before including the Windows headers
Dynamically link to the C++ runtime
Compile the executable without debugging information
Compile with /O1, an 'optimize for size' flag
Remove iostream and fstream headers, and use lower-level I/O instead if possible
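A hedged sketch of what those settings can look like on the command line for a plain C program like the one in the question (file names and the exact flag set are illustrative, not prescriptive):

    /* tiny.c - placeholder source; WIN32_LEAN_AND_MEAN trims what <windows.h> pulls in */
    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>
    #include <stdio.h>   /* plain C I/O instead of <iostream>/<fstream> */

    int main(void)
    {
        printf("hello\n");
        return 0;
    }

    rem Release-style build: /O1 optimizes for size, /MD links the CRT dynamically,
    rem no /DEBUG means no debugging information, /OPT:REF and /OPT:ICF strip and fold unused code
    cl /nologo /O1 /MD tiny.c /link /OPT:REF /OPT:ICF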
Typically you generate a MAP file on both systems and figure out which sections make the largest contributions to the size.
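If you build from the command line, the linker can produce the map directly (illustrative invocation; myapp is a placeholder name); in the IDE the equivalent setting is Linker > Debugging > Generate Map File:

    rem /MAP writes myapp.map; compare the section and contribution lists from both toolchains
    link /MAP:myapp.map /OUT:myapp.exe main.obj wsock32.lib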
Anton's answer reminds me: first check if they are both linked the same way (both static or both dynamic, otherwise it is apples and oranges)
Sigh. In my third try at introducing Linux into my everyday life, I'm still bouncing hard off programming.
I have a solution for my major thesis program created in Visual Studio (five projects in one solution: four static libs and one executable), and I've managed it there since the beginning. It's hosted on Git, so I thought I'd try changing it and compiling it on my laptop running Ubuntu 18.04. While changing it was a no-effort task (Linux text editors are neat, as opposed to the almost non-existent IDEs), compiling, on the other hand, was the point where I started doubting all my skills.
The first problem I've encountered is that I have no idea how to manage multiple projects outside VS. I mean, there are .vcxproj files with project data, but from what I saw they are not very useful.
The second problem is references: I have no idea how to point #include directives at specific directories without writing ../MainFolder/subfolder/file.h, which is extremely unaesthetic.
I expect those are just the tips of the iceberg and that I will encounter a massive number of problems in the future, but for now: can anybody give me an idea of how to manage such a project on Linux?
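One common way to handle both problems outside VS (offered here only as an illustrative sketch; every target and directory name below is an assumption, not taken from the actual solution) is a CMake build, with one CMakeLists.txt per project:

    # Top-level CMakeLists.txt - plays the role of the solution
    cmake_minimum_required(VERSION 3.10)
    project(thesis)
    add_subdirectory(libA)      # one add_subdirectory per static lib
    add_subdirectory(app)

    # libA/CMakeLists.txt - one of the four static libs
    add_library(libA STATIC src/a.cpp)
    # consumers can then write #include "a.h" instead of ../MainFolder/libA/include/a.h
    target_include_directories(libA PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)

    # app/CMakeLists.txt - the executable
    add_executable(thesis_app main.cpp)
    target_link_libraries(thesis_app PRIVATE libA)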
Is there an option in the Visual Studio 2012 C++ compiler to make it warn if you use uninitialized class members?
The RTC checks are not compatible with managed C++ (/clr).
What kind of data member? A pointer member variable or one that gets its constructor automatically called?
It is really up to the author to be experienced enough to be paranoid about pointers and watch their initialization, assignment and dereferencing like a hawk to make sure it's safe. No compiler or static analyzer can take the place of a competent programmer in making sure pointers are used safely.
You basically want to find these issues at compile time if possible, and at run-time only as a last resort.
For compile-time tools, you do have some options that might help you:
The static analyzer that comes with Visual Studio can warn if a pointer is being used without being checked first, but it does not give the same emphasis to pointer class members. I've seen a third-party static analyzer called Cppcheck that does perform that check.
Coverity (another static analyzer) would probably do that too. Ah, but wait, Coverity doesn't work for managed code (last I checked). And it's so expensive you'd probably have to sell your house, and your neighbor's house, to pay for it, have a Coverity engineer come to your office and take three days to get it installed, and then the analysis will take 24 hours to run.
For runtime checking, I have no idea what alternative you might have to RTC with managed code. But it would be very, very, VERY wise to minimize the amount of pure native code you expose to the /clr switch. Some programmer years ago turned that switch on for our largest project (it had hundreds of files). Even though only 4 or 5 of those hundreds of files used managed code, he still turned the switch on for all the other pure native files anyway.
As a result, there were thousands of crashes over the years until we reversed that stupidity.
So put your code in clear, manageable layers. Separate the managed C++ code from the pure native C++ code, and in Visual Studio turn on the /clr switch only for the managed files.
And by all means use static analysis tools as much as possible.
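For illustration, this is the kind of defect those checks are after; the class and member names are invented for the example:

    // widget.cpp - hypothetical uninitialized pointer member
    class Widget {
    public:
        Widget() {}                     // bug: m_buffer is never initialized
        void Use() { *m_buffer = 0; }   // dereferences an indeterminate pointer
    private:
        char* m_buffer;                 // fix: e.g. Widget() : m_buffer(nullptr) {}
    };

Running, say, cppcheck --enable=warning widget.cpp over this should report that the member variable is not initialized in the constructor (exact wording varies by version), while the Visual Studio analyzer is more focused on the unchecked dereference than on the missing initialization.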
In a way I am looking for best-practice here.
I have a common project that is shared by many of my apps. This project has the FlurryAnalytics and ATMHud DLLs as references.
If I do not also reference these DLLs in the main project, the apps will often, but not always, fail in the debug-to-device test. When debugging to the simulator, I don't need to add these DLLs to the main project.
So, the question is: Do I have to include references to DLLs in the main project that I have in sub projects all the time?
Whenever possible I use references to project files (csproj files) over references to assemblies (.dll). It makes a lot of things easier, like:
code navigation (IDE);
automatic build dependency (the source code you're reading is the one you're building, not something potentially out-of-sync);
source-level debugging (even if you can get it without project references, this way you're sure to be in sync);
(easier) switch between Debug|Release|... configurations;
changing defines (or any project-level option);
E.g.

    Solution1.sln
        Project1a.csproj
        MonoTouch.Dialog.csproj (link to ../Common/MonoTouch.Dialog.csproj)

    Solution2.sln
        Project2a.csproj
        MonoTouch.Dialog.csproj (link to ../Common/MonoTouch.Dialog.csproj)

    Common.sln
        MonoTouch.Dialog.csproj
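Concretely, the two kinds of reference look roughly like this inside the .csproj (paths are placeholders):

    <!-- project reference: built from source, always in sync -->
    <ItemGroup>
      <ProjectReference Include="..\Common\MonoTouch.Dialog\MonoTouch.Dialog.csproj" />
    </ItemGroup>

    <!-- assembly reference: points at an already-built DLL -->
    <ItemGroup>
      <Reference Include="MonoTouch.Dialog">
        <HintPath>..\Common\bin\MonoTouch.Dialog.dll</HintPath>
      </Reference>
    </ItemGroup>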
Large solutions might suffer a bit from doing this (build performance, searching across files...). The larger they get, the less likely it is that everyone needs to know about every part of them, so there are diminishing returns on the advantages while the inconvenience grows with each project added.
E.g. I would not want references to every framework assembly inside Mono (but personally I could live with all the SDK assemblies of MonoTouch ;-)
Note: working with assembly references should not cause you random errors while debugging on device. If you can create such a test case, please file a bug report :-)
It seems that all my adult life I've been tormented by the VC++ linker complaining or balking because various libraries do not agree on which version of the Runtime library to use. I'm never in the mood to master that dismal subject. So I just try to mess with it until it works. The error messages are never useful. Neither is the Microsoft documentation on the subject - not to me at least.
Sometimes it does not find functions - because the name-mangling is not what was expected? Sometimes it refuses to mix-and-match. Other times it just says, "LINK : warning LNK4098: defaultlib 'LIBCMTD' conflicts with use of other libs; use /NODEFAULTLIB:library" Using /NODEFAULTLIB does not work, but the warning seems to be benign. What the heck is "DEFAULTLIB" anyway? How does the linker decide? I've never seen a way to specify to the linker which runtime library to use, only how to tell the compiler which library to create function calls for.
There are "dependency walker" programs that can inspect object files to see what DLL's they depend on. I just ran one on a project I'm trying to build, and it's a real mess. There are system .libs and .dll's that want conflicting runtime versions. For example, COMCTL32.DLL wants MSVCRT.DLL, but I am linking with MSVCRTD.DLL. I am searching to see if there's a COMCTL32D.DLL, even as I type.
So I guess what I'm asking for is a tutorial on how to sort those things out. What do you do, and how do you do it?
Here's what I think I know. Please correct me if any of this is wrong.
The parameters are Debug/Release, Multi-threaded/Single-threaded, and static/DLL. Only six of the eight possible combinations are covered. There is no single-threaded DLL, either Debug or Release.
The settings only affect which runtime library gets linked in (and the calling convention to link with it). You do not, for example, have to use a DLL-based runtime if you are building a DLL, nor do you have to use a Debug version of runtime when building the Debug version of a program, although it seems to help when single-stepping past system calls.
Bonus question: How could anyone or any company create such a mess?
Your points (1) and (2) look correct to me. Another thing to note with (2) is that linking in the debug CRT also gives you access to things like enhanced heap checking, checked iterators, and other assorted sanity checks. You cannot redistribute the debug CRT with your application, however -- you must ship using the release build only. Not only is it required by the VC license, but you probably don't want to be shipping debug binaries anyway.
There is no such thing as COMCTL32D.DLL. DLLs that are part of Windows must load the CRT that they were linked against when Windows was built -- this is included with the OS as MSVCRT.DLL. This Windows CRT is completely independent from the Visual C++ CRT that is loaded by the modules that comprise your program (MSVCRT.DLL is the one that ships with Windows; the VC CRT includes a version number, for example MSVCR80.DLL). Only the EXE and DLL files that make up your program are affected by the debug/release and multithreaded/single-threaded settings.
The best practice here IMO is to pick a setting for your CRT and standardize upon it for every binary that you ship. I'd personally use the multithreaded DLL runtime. This is because Microsoft can (and does) issue security updates and bug fixes to the CRT that can be pushed out via Windows Update.
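To tie this back to the original warning: each of these switches makes the compiler embed a default-library directive in every .obj it produces, and LNK4098 is what you get when different objects or libraries carry conflicting directives. Roughly (the DLL version number varies by compiler release; VS2008 shown):

    /MT   -> libcmt.lib     (static CRT, release)
    /MTd  -> libcmtd.lib    (static CRT, debug)
    /MD   -> msvcrt.lib     (import library for MSVCR90.DLL)
    /MDd  -> msvcrtd.lib    (import library for MSVCR90D.DLL)
    (older compilers also offered /ML and /MLd for the single-threaded static CRT)

You can see which directives an object file carries with dumpbin /directives foo.obj; the /DEFAULTLIB entries listed there are exactly what the linker is complaining about.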
I have automatically generated code (around 18,000 lines, basically a wrapper around data) and about 2,000 other lines of code in a C++ project. The project has link-time optimization turned on, along with /O2 and favor-fast-code optimization. To compile the code, VC++ 2008 Express takes an incredibly long time (around 1.5 hours). After all, it's only 18,000 lines, so why does the compiler take so much time?
A little explanation of the 18,000 lines of code: it is plain C, not even C++, consisting of many unrolled for-loops. A sample would be:
a[0].a1 = 0.1284;
a[0].a2 = 0.32186;
a[0].a3 = 0.48305;
a[1].a1 = 0.543;
..................
It basically fills a complex struct, but nothing that should be too complex for the compiler, I guess.
Debug mode is fast; only Release mode has this issue. Before I had the 18,000 lines of code, everything was fine (at that time the data was in an external location). However, the release build does a lot of work that reduces the size of the exe from 1,800 KB to 700 KB.
This issue happens in the link stage, because all the .obj files get generated. I suspect link-time code generation too, but cannot figure out what is wrong.
Several factors influence link time, including but not limited to:
Computer speed, especially available memory
Libraries included in the build.
Programming paradigm - are you using boost by any chance?
If those 18,000 lines were template metaprogramming, 1.5 hours of compiling and linking wouldn't completely surprise me, even on a new quad-core.
Historically, a common cause of slow C++ compilation is excessive header file inclusion, usually a result of poor modularization. You can get a lot of redundant compilation by including the same big headers in lots of small source files. The usual reference in these cases is Lakos.
You don't state whether you are using a precompiled header, which is the quick-and-dirty substitute for a header-file refactoring.
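In case the precompiled header is not set up, a hedged sketch of the conventional arrangement (the stdafx names are the Visual Studio defaults and myclass.h is a placeholder):

    // stdafx.h - the heavy, rarely-changing includes go here
    #pragma once
    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>
    #include <vector>
    #include <string>

    // stdafx.cpp - compiled once with /Yc"stdafx.h" to create the .pch file
    #include "stdafx.h"

    // every other .cpp - compiled with /Yu"stdafx.h", and must include it first
    #include "stdafx.h"
    #include "myclass.h"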
That's why we generate lots of DLLs for our debug builds, but generally link them in for our release builds. It's easier (for our particular purposes) to deal with more monolithic executables, but it takes a long time to link.
As said in one of the comments, you probably have Link Time Code Generation (/LTCG) enabled, which moves the majority of code generation and optimization to the link stage.
This enables some amazing optimizations, but also increases link times significantly.
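One way to confirm that LTCG is the culprit is to compare the release link with and without it; on the command line the pair of switches involved looks like this (file names are placeholders):

    rem with LTCG: /GL defers code generation to the link step, /LTCG performs it there
    cl /c /O2 /GL generated_data.c
    link /LTCG /OUT:app.exe generated_data.obj other.obj

    rem without LTCG: drop /GL and /LTCG and the link step becomes much cheaper
    cl /c /O2 generated_data.c
    link /OUT:app.exe generated_data.obj other.obj

In the IDE the corresponding settings are C/C++ > Optimization > Whole Program Optimization and Linker > Optimization > Link Time Code Generation.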
The C++ team says they've significantly optimized the linker for VS 2010.