compiler optimizations during debugging - visual-c++

I'm using Visual Studio 2008 Pro, programming in C++. When I press the Run button in debugging mode, are any compiler optimizations applied to the program by default?

The debugger will by default be running a debug build, which won't have optimizations turned on.
If optimizations are enabled, you may notice that "Step" and "Next" sometimes appear to make the program flow jump around. This is because the compiler sometimes re-orders instructions and the debugger is doing its best.

I suppose it depends on what you'd classify as optimizations, but mostly no. Just for example, recent versions of VS do apply the (anonymous) return value optimization, at least in some cases, even with optimization disabled (/Od) as is normal for a debug build.
If you want to debug optimized code, it's usually easiest to switch to a release build, and then tell it to generate debug info. In theory you can turn on optimization in a debug build, but you have to change more switches to do it.
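As a sketch, the project settings for a release build with debug info roughly map to these command-line flags (assuming the Microsoft cl/link toolchain; exact defaults vary per project):

```shell
# Release build with debug info (hedged sketch of the VS equivalents):
cl /O2 /Zi /MD main.cpp /link /DEBUG /OPT:REF /OPT:ICF
# /O2                 full optimization (release default)
# /Zi                 emit debug info into a .pdb
# /DEBUG              tell the linker to produce/keep the .pdb
# /OPT:REF /OPT:ICF   restore linker optimizations that /DEBUG turns off
```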

Related

Is there an "Optimize debugging experience" compiler flag in Rust?

In C++ you have compiler flags to enable "Optimize debugging experience" using "-Og" or "/Og" (and possibly other flags on other compilers).
This flag enables very basic optimisations that don't interfere with the debugging experience (as far as I understand it). But it does mean that trivial or "free" optimisations made by the compiler are enabled for the program, which you then don't have to worry about.
From the GCC optimisation options documentation, "Optimize debugging experience" (-Og) is:
Optimize debugging experience. -Og should be the optimization level of
choice for the standard edit-compile-debug cycle, offering a
reasonable level of optimization while maintaining fast compilation
and a good debugging experience. It is a better choice than -O0 for
producing debuggable code because some compiler passes that collect
debug information are disabled at -O0.
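For reference, enabling this on the C++ side with GCC is a single flag pair (a minimal sketch; file names are illustrative):

```shell
# GCC: optimize lightly but keep the debugging experience intact
g++ -Og -g -o app main.cpp
# -Og  light optimizations that do not hinder debugging
# -g   emit DWARF debug info
```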
I was wondering if there was an option somewhere to enable the same kind of benefits, or if any such options are planned. Ideally which can be enabled through cargo and in as cross-platform a manner as possible.
Note that I'm not asking about "opt-levels", which are the equivalent of "-O1, -O2, etc".
In the Rust Cargo book on Profiles, you will see that the default compilation profile, named dev, is specified like this:
[profile.dev]
opt-level = 0
debug = true
debug-assertions = true
overflow-checks = true
lto = false
panic = 'unwind'
incremental = true
codegen-units = 256
rpath = false
As debug = true means that full debug information is stored, the compiled objects of the project will already be prepared for debugging, albeit with no optimisations. At the moment there is no flag in Cargo nor rustc to "optimize the debugging experience". Debugging symbols are retained at opt-level 0 (unlike GCC, which states that some passes that collect debug information are disabled at -O0), but applying optimisations while still keeping a good debugging experience is a bit of a trade-off game: LLVM provides some guarantees, but the ability to navigate and inspect program state while in debug mode may become compromised (relevant LLVM documentation page).
Taking the broader meaning of "improving the debugging experience", this is something that can be done on a case-by-case basis by tweaking the compilation profile. For example, it is a common requirement in real-time program development, such as video game development, to apply a few code optimisations so that run-time performance is bearable. See the Rustc book on codegen options for what can be done on this front. Each opt-level contributes to that experience in its own way.
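As a concrete sketch of such a tweak (assuming a reasonably recent Cargo that supports per-package profile overrides), a common game-development pattern is to optimise only the dependencies while keeping your own crate fully debuggable:

```toml
# Cargo.toml: dev profile overrides (illustrative values)
[profile.dev]
opt-level = 0          # keep your own code easy to step through

[profile.dev.package."*"]
opt-level = 3          # but optimize all dependencies for bearable speed
```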
See also:
Does Cargo support custom profiles?
How to get a release build with debugging information when using cargo?
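For the related case of a release build that still carries debug information, a minimal Cargo.toml override looks like:

```toml
[profile.release]
debug = true   # keep full debug info in the optimized build
```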

How to allow dead_code and unused_imports for dev builds only?

The unused-imports and dead-code warnings are the most common that I've found while learning Rust, and they get annoying after a while (a very short while, like a few seconds), especially when they are mixed with compiler errors, because they make the console difficult to read.
I was able to turn off these warnings:
#![allow(unused_imports)]
#![allow(dead_code)]
This will disable the warnings for all builds, but I want the warnings enabled for release builds.
I tried disabling them like this:
#![cfg(dev)]
#![allow(unused_imports)]
#![allow(dead_code)]
But, this removed the entire Rust file from release builds (not what I want).
I tried to configure this using cfg_attr, but it had no effect for either build:
#![cfg_attr(dev, allow(unused_imports))]
#![cfg_attr(dev, allow(dead_code))]
I have Googled and read all the related questions on StackOverflow but can't figure this out.
dev isn't a supported predicate for conditional compilation, so your examples will never apply. As far as I know, the best way to detect debug mode is instead with #[cfg(debug_assertions)]. In my testing, #![cfg_attr(debug_assertions, allow(dead_code, unused_imports))] disables the lints for debug builds while keeping them enabled in release builds.
You can see a list of supported predicates in the Rust reference.

What does "enable optimizations" do?

For both Xamarin.Android and Xamarin.iOS projects, there is a checkbox under "Compiler" titled "Enable Optimizations". The meaning is clear enough, but exactly which optimizations are those? For iOS, for example, there is already a separate option for enabling the optimizing LLVM compiler.
The C# compiler (either Mono's mcs on the Mac or Microsoft's csc on Windows) can emit somewhat better IL when this option is selected.
YMMV but, in general, this means some extra time to compile your source code, and the IL might be harder to read (if you decompile it) and sometimes harder to debug. In most cases the generated code will be identical.
Because of this, the default is normally to use Enable Optimizations only for release builds (and not for debug builds).
OTOH this has nothing to do with the JIT (or AOT/LLVM) optimizations that will be done later, at runtime (for Xamarin.Android) or at native compilation time (for Xamarin.iOS).
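The checkbox maps to the C# compiler's optimize flag; a sketch of the equivalent command lines:

```shell
# What the "Enable Optimizations" checkbox toggles:
csc /optimize+ /debug- Program.cs   # release: optimized IL
csc /optimize- /debug+ Program.cs   # debug: straightforward IL plus a PDB
# (Mono's mcs uses -optimize+ / -debug+ with the same meaning)
```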

Can the compilation of the compiler affect the compiled programs?

My question probably sounds weird, but here is my point: I have to compile a program using GCC. If I compile GCC itself from source, will I get a slight performance edge in software compiled with the freshly built GCC? What should I expect?
You won't get any faster programs out of a compiler built with optimizing flags. Since a program is the compiler's output, and optimizations don't change the output of a correct program, the compiled programs stay the same; at most, the compiler itself runs faster.
You might, however, profit from new available options if your distributor ships an incomplete compiler. Look through the GCC manual for any options you want to enable (like certain target architecture variants), and if you can't enable them in your current compiler build, there might be potential in a custom-built compiler. However, it is unlikely that it's worth it.
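Before bothering with a custom build, GCC can report what your current build supports (a quick sketch using standard introspection options):

```shell
gcc -v                        # shows the configure flags this build used
gcc -Q --help=target          # target-specific options and their defaults
gcc -Q --help=optimizers -O2  # which optimization passes -O2 enables here
```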
Not unless you're building a newer version of gcc, or enabling cloog, graphite, etc.
The performance difference is usually nonexistent or negligible.
In very rare cases you can see a noticeable difference, but it isn't always an improvement; degradation is possible too.

Are debugging symbols any good when compiling with LLVM?

I'm trying to hook up a real-time crash reporting service like airbrake, bugsense or TestFlight's SDK but I'm wondering if the crash reports that are generated from crashes are any good when compiling your MonoTouch project using the LLVM compiler.
When you're configuring an iPhone build, if you go to project settings > iPhone Build > Advanced tab, it says "Experimental, not compatible with debug mode". This is why I'm questioning the stack traces from the crash reports.
There are several points to consider here:
a) enabling debug on your builds:
tells the compilers to emit debugging symbols (e.g. the .mdb files), which include a lot of information (variable names, scopes, line numbers...);
adds extra debugging code to your application (e.g. to connect the application on the device to the debugger on your Mac);
tells the compiler (e.g. AOT) to disable some optimizations (that would make debugging harder);
This results in larger, slower applications that contain a lot of data you don't want people to access (e.g. if you fear reverse engineering). For releases it's a no-win situation for everyone.
b) using the LLVM compiler won't work with debug mode. It's generally not an issue since, when debugging, you'll likely want the build process to be as fast as possible (and LLVM is slower to build). A problematic case is if your bug shows up only on LLVM builds.
c) The availability of managed stack traces does not require debug symbols; they are built from the metadata available in your .dll and .exe files. But when debugging symbols are available, the stack trace will include the line numbers and filenames for each stack frame.
d) I never used the tools you mentioned, but I do believe them to be useful :-) You might wish to ask specific questions about them (wrt MonoTouch). Otherwise I think it's worth testing to see if the level of details differ (and if the extra details are of any help to you). IMO I doubt it will bring you more than the actual 'cost' of shipping 'debug' builds.
first create a "crash me" feature in your application;
then compare reported results from non-LLVM "release" and "debug" builds;
next compare the non-LLVM "release" and LLVM "release" builds;
It would be nice to post your experience of the above: here, on the monotouch mailing-list, and/or in a blog entry :-)