I have a project (VC2005) which takes an unreasonably long time (over 40 minutes) to link in Release, while it links in less than 5 seconds in Debug.
Both builds have incremental linking disabled and all files are located on the same drive.
Disabling Linker optimization in Release does not help.
Task Manager never shows more than 150,000 K of memory used by the linker, which is next to nothing on a machine with 3 GB of RAM.
I build much bigger projects and have never noticed such a difference in build time.
Any ideas why this happens?
As remarked, the most probable reason is /LTCG (whole program optimization).
Other factors might be individual files compiled with /Gy (you should see some warnings in the output), /OPT:REF or /OPT:ICF (check Project Properties > Linker > Optimization), or - very unlikely - you are unknowingly running some phase of PGO instrumentation.
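For quick reference, these are the switches to look for on the compile and link command lines (Project Properties > C/C++ > Command Line and Project Properties > Linker > Command Line); the file names are just placeholders:

cl /c /Gy /GL foo.cpp                   (/Gy = function-level linking, /GL = whole program optimization)
link /LTCG /OPT:REF /OPT:ICF foo.obj    (/LTCG = link-time code generation, /OPT:REF and /OPT:ICF = eliminate unreferenced and fold identical COMDATs)

Since /LTCG is the most likely culprit, removing /GL and /LTCG together (they come as a pair) is the first thing to try.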
Does anyone know what the deal is with this IDE?
I have been running it for a while, lately it has become very slow and unresponsive at times.
Gobbles up CPU even when just editing a bunch of js files.
Possibilities:
1. My code base is getting bigger...
2. I have several listeners which compile coffeescript and sass files in the background when these change.
In any case, I am unpleasantly surprised that it is this slow. I would expect better from the developer of an IDE.
Anyone had this kind of problem before?
Thanks!
There are a couple of performance tweaks you can apply to WebStorm to see if they improve your situation. When my colleagues and I found that WebStorm was slowing down, these tweaks solved all our problems.
First things first, ensure your project is configured to use WebStorm's resources efficiently by excluding particular directories from the project. This ensures the files they contain are not indexed in memory and do not degrade performance when you search for files or for text within files. Good candidates to exclude are the node_modules directory and compiled-code directories.
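For illustration, marking a directory as excluded (right-click it in the Project tool window > Mark Directory as > Excluded) ends up in the project's .iml file roughly like this - node_modules and dist are just example directory names:

<content url="file://$MODULE_DIR$">
  <excludeFolder url="file://$MODULE_DIR$/node_modules" />
  <excludeFolder url="file://$MODULE_DIR$/dist" />
</content>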
If there are still performance issues, try the following:
If you are on Windows, by default you will be running the 32-bit version. Navigate to the WebStorm directory (within Program Files) and you'll see webstorm64.exe, which runs WebStorm in 64-bit mode. (You might then need to install a proper 64-bit JDK yourself.)
The default VM options for IntelliJ IDEA may not be optimal when your project contains more than 10,000 classes, and developers often change the defaults to minimize IntelliJ IDEA hang time.
You can try bumping up the JVM memory limits for WebStorm. Open the VM options from IDE_HOME\bin\<product>[bits][.exe].vmoptions. Initially, try doubling the Xms and Xmx memory values.
Please note that very large Xmx and Xms values are not necessarily good: the garbage collector then has to work over a large region of memory at a time, which can cause considerable pauses.
For more info on configuring JVM memory options you can refer to:
Configuring IntelliJ IDEA VM options - http://blog.jetbrains.com/idea/2006/04/configuring-intellij-idea-vm-options/
Configuring JVM options and platform properties - https://intellij-support.jetbrains.com/entries/23395793-Configuring-JVM-options-and-platform-properties
You can now do it from the UI.
These are my before and after values. No problems with the garbage collector; I just multiplied all values by 4. Machine: 20 GB RAM, 4 GHz i7 CPU and an SSD. With the defaults it had started to lag; now there is no lag again.
Pasting as text for quick copy:
# custom WebStorm VM options
# Default:
# -Xms128m
# -Xmx750m
# -XX:ReservedCodeCacheSize=240m
# -XX:+UseCompressedOops
-Xms512m
-Xmx3000m
-XX:ReservedCodeCacheSize=960m
-XX:+UseCompressedOops
I was dealing with a similar situation: the CPU used to spike like crazy and the IDE used to lag. Go to the WebStorm preferences and try disabling plugins that you do not need.
For instance, if your project uses SASS, what's the point of having the LESS plugin running? Likewise, if your project uses Git, you don't need CVS or Perforce integration.
CPU still spikes when WebStorm is indexing my project files, but I usually just wait it out.
Stopping my TypeScript file watching significantly helped (both in the IDE settings menu and in tsconfig.json). I assume that once the project gets big enough, any changes force a large recompile. It's not ideal but it's something that worked for me and may work for others as well.
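As an illustration, the tsconfig.json side of it could be turning off compile-on-save - whether this is the exact flag the poster means, and whether your WebStorm version and build honour it, depends on your setup:

{
  "compileOnSave": false,
  "compilerOptions": {
    "target": "es5"
  }
}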
I am working to reduce the build time of a large Visual C++ 2008 application. One of the worst bottlenecks appears to be the generation of the PDB file: during the linking stage, mspdbsrv.exe quickly consumes available RAM, and the build machine begins to page constantly.
My current theory is that our PDB files are simply too large. However, I've been unable to find any information on what a "normal" size for a PDB file is. I've taken some rough measurements of one of the DLLs in our application, as follows:
CPP files: 34.5 MB, 900k lines
Header files: 21 MB, 400k lines
Compiled DLL: 33 MB (compiled for debug, not release)
PDB: 187 MB
So, the PDB file is roughly 570% the size of the DLL. Can someone with experience with large Visual C++ applications tell me whether these ratios and sizes make sense? Or is there a sign here that we are doing something wrong?
(The largest PDB file in our application is currently 271 MB, for a 47.5 MB DLL. Source code size is harder to measure for that one, though.)
Thanks!
Yes, .pdb files can be very large - even of the sizes you mention. Since a .pdb file contains the data to map source lines to machine code and you compile a lot of code, there is simply a lot of data in the .pdb file, and you likely can't do much about that directly.
One thing you could try is to split your program into smaller parts - DLLs. Each DLL will have its own independent .pdb. However I seriously doubt it will decrease the build time.
Do you really need full debug information at all times? You could create a configuration with less debug info in it.
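As a sketch, these are the two settings involved (standard VC++ project properties, VC2008 names; check your own project):

C/C++ > General > Debug Information Format     /Zi (separate PDB), /Z7 (embedded in the .obj), or none
Linker > Debugging > Generate Debug Info       /DEBUG on or off

A reduced-debug configuration could, for example, keep /Zi only on the modules you actually step through and drop /DEBUG for binaries you never debug.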
But as sharptooth already said, it is time to refactor and split your program into smaller, more maintainable parts. That will do more than just reduce build time.
I have a static library project with standard Debug/Release build options. I was intrigued to spot that while the Debug .lib is a fairly large 22 MB, the Release one is a whopping 100 MB. And this is not a massive code base either: about 75 classes, none of them very large.
My questions are whether this is normal, and whether I should care?
I would check to see if you're statically linking libraries in release mode and dynamically linking them in debug mode. You might be statically linking the C++ runtime for instance.
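For reference, the runtime choice lives under Project Properties > C/C++ > Code Generation > Runtime Library:

/MT, /MTd    static CRT (Release / Debug)
/MD, /MDd    CRT in a DLL (Release / Debug)

Mixing /MT in one configuration with /MD in the other is worth ruling out first.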
I had the same problem. The fix is very simple: under Project Properties > Configuration Properties > General > Whole Program Optimization, select No Whole Program Optimization instead of Use Link Time Code Generation. The size of my static library decreased from 5 MB to 1.3 MB.
No, this is not normal. It should be the other way around. Yes, you should care.
I'd start by looking at the sizes again, to make sure I didn't transpose the release and debug sizes somehow.
Then look at the libraries you're linking in for Release and Debug. Did you accidentally link a Debug library into the Release build, and a Release library into the Debug build?
Take a close look at your settings for release and debug. Something very fishy is going on.
Is it possible that a massive amount of this code is inline, and the debug version isn't "inlining"?
Ideally the Release lib should be smaller than the Debug one.
I guess you may be statically linking other libs such as MFC, ATL, etc.
Check your Release and Debug build settings.
Use #pragma once to avoid headers being included multiple times.
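For completeness, a minimal sketch of the #pragma once idiom (Widget is just a placeholder name; a classic include guard is equivalent):

// widget.h
#pragma once
// or, equivalently:
// #ifndef WIDGET_H
// #define WIDGET_H
//   ...
// #endif

class Widget {
public:
    Widget() : m_value(0) {}
    int value() const { return m_value; }
private:
    int m_value;
};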
I would typically expect the reverse...
Is it possible that there are big swaths of code inside preprocessor included blocks that only get included in release builds?
Template code is especially suspect in this case.
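A contrived sketch of the kind of thing to look for - code that is only compiled into the Release configuration (NDEBUG is defined in Release builds by default; your project may use its own macros):

// example.cpp
#include <cstdio>

#ifdef NDEBUG
// Only compiled in Release; imagine a large body or heavy template
// instantiations here that would inflate only the Release lib.
void release_only_report()
{
    std::printf("built for release\n");
}
#endif

int main()
{
#ifdef NDEBUG
    release_only_report();
#endif
    return 0;
}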
Update
I think that the issue is most likely caused by linking to static libs in release mode, and shared libs in debug mode...
+1 karoberts
There is one thing that can explain such a size: debug symbols embedded in the Release build (as opposed to being emitted into a separate .pdb). Are you sure you aren't generating debug symbols for your Release build? (Which Visual C++ are you using?)
On the embedded device I'm working on, the startup time is an important issue. The whole application consists of several executables that use a set of libraries. Because space in FLASH memory is limited we'd like to use shared libraries.
The application works as usual when compiled and linked against the shared libraries, and the amount of FLASH memory used is reduced as expected.
The difference from the version linked against static libs is that the application's startup time is about 20 s longer, and I have no idea why.
The application runs on an ARM9 CPU at 180 MHz with Linux 2.6.17, 16 MB of FLASH (JFFS file system) and 32 MB of RAM.
Because shared libraries have to be linked at runtime, usually by dlopen() or something similar. There's no such step for static libraries.
Edit: some more detail. dlopen() has to perform the following tasks:
Find the shared library
Load it into memory
Recursively load all dependencies (and their dependencies....)
Resolve all symbols
This requires quite a lot of I/O to accomplish.
In a statically linked program, all of the above is done at link time, not runtime. Therefore it's much faster to load a statically linked program.
In your case, the difference is exaggerated by the relatively slow hardware your code has to run on.
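If you want to measure where the time goes, glibc's dynamic loader (and uClibc's, when built with LD_DEBUG support) can report on its own work; ./myapp is a placeholder for your executable:

LD_DEBUG=libs ./myapp           shows every directory searched and every library loaded
LD_DEBUG=statistics ./myapp     prints totals for relocation processing and load time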
This is a fine example of the classic tradeoff of speed and space.
You can statically link all your executables so that they are faster but then they will take more space
OR
You can have shared libraries that take less space but also more time to load.
So decide what you want to sacrifice.
There are many factors behind this difference (OS, compiler, etc.), but a good list of reasons can be found here. Basically, shared libraries were created to save space, and much of the "magic" needed to make them work comes with a performance hit.
(As a historical note the original Netscape navigator on Linux/Unix was a statically linked big fat executable).
This may help others with similar problems:
The reason startup took so long in my case was that GCC's default is to export all symbols in a library.
A big improvement is to build with the compiler flag -fvisibility=hidden.
All symbols that the lib has to export then have to be marked with
__attribute__ ((visibility("default")))
See the GCC wiki and the very fine article "How To Write Shared Libraries".
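A minimal sketch of what this looks like in practice (the library and function names are made up; the macro pattern is the usual one from the GCC wiki):

// mylib.h
#if defined(__GNUC__)
  #define MYLIB_API __attribute__ ((visibility ("default")))
#else
  #define MYLIB_API
#endif

MYLIB_API int mylib_do_work(int input);  // stays visible to users of the .so
int internal_helper(int input);          // hidden when built with -fvisibility=hidden

// Build the shared library with symbols hidden by default:
//   g++ -fvisibility=hidden -fPIC -shared mylib.cpp -o libmylib.so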
OK, I have now learned that using shared libraries has its disadvantages when it comes to speed. I found this article about dynamic linking and loading enlightening. The loading process is much lengthier than I expected.
Interesting... typically the load time of a shared library is indistinguishable from that of a fat, statically linked app. So I can only surmise that the system is either very slow to load a library from flash memory, or the loaded library is being checked in some way (e.g. .NET apps run a checksum over all loaded DLLs, which increases startup time considerably in some cases). It could also be that the shared libraries are being loaded as needed and unloaded afterwards, which could indicate a configuration problem.
So, sorry I can't say why, but I think it's an issue with your ARM device/OS. Have you tried instrumenting the startup code, or statically linking against just one of the most commonly used libraries, to see if that makes a large difference? Also, put the shared libs in the same directory as the app to reduce the time it takes to search the file system for them.
One option which seems obvious to me is to statically link the several programs into a single binary. That way you continue to share as much code as possible (probably more than before), but you also avoid the overhead of the dynamic linker AND save the space the dynamic linker itself takes on the system.
It's pretty easy to combine several executables into one: you normally just examine argv and decide which routine to call based on that.
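A minimal sketch of the idea, busybox-style: dispatch on the name the binary was invoked as (the tool names and functions here are made up; dispatching on a first argument works just as well):

#include <cstdio>
#include <string>

// each former executable becomes a plain function
int tool_a_main(int, char**) { std::puts("running tool_a"); return 0; }
int tool_b_main(int, char**) { std::puts("running tool_b"); return 0; }

int main(int argc, char* argv[])
{
    // "tool_a" and "tool_b" would be symlinks to this combined binary
    std::string name = argv[0];
    std::string base = name.substr(name.find_last_of('/') + 1);

    if (base == "tool_a") return tool_a_main(argc, argv);
    if (base == "tool_b") return tool_b_main(argc, argv);

    std::fprintf(stderr, "unknown applet: %s\n", base.c_str());
    return 1;
}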
I have read in other discussions that Release DLLs are smaller than Debug DLLs. But why is it that the DLL I have built is the other way around: the Release DLL is bigger than the Debug DLL? Will it cause problems?
It won't cause problems; it's probably that the compiler is inlining more in the Release build and creating larger code. It all depends on the code itself.
Nothing to worry about.
EDIT:
If you are really worried about size and not about speed, you can turn on optimize-for-size, or turn off automatic inlining, and see what difference you get.
EDIT:
More info: you can use dumpbin /headers to see where the DLL gets larger.
How much bigger is your Release DLL than your Debug DLL?
Your Debug DLLs might seem small if you are generating PDB symbol files (so the debug symbols are not actually in the DLL file), or if you are inadvertently compiling debug symbols into your Release DLL.
This can be caused by performance optimizations like loop unrolling - if the size difference is significant, check your Release linker settings to make sure you're not statically linking anything in.
Performance can be affected if your application does performance-critical work. A release version can even be larger than a debug version if options to include debug information in the generated code are enabled. But this also depends on the compiler you are using.