I'm trying to hook up a real-time crash-reporting service like Airbrake, BugSense, or TestFlight's SDK, but I'm wondering whether the crash reports generated from crashes are any good when you compile your MonoTouch project with the LLVM compiler.
When you're configuring an iPhone build, the Advanced tab under project settings > iPhone Build says "Experimental, not compatible with debug mode" next to the LLVM option. This is why I'm questioning the stack traces in the crash reports.
There are several points to consider here:
a) enabling debug on your builds:
tells the compilers to emit debugging symbols (e.g. the .mdb files), which include a lot of information (variable names, scopes, line numbers...);
adds extra debugging code to your application (e.g. to connect the application, on the device, to the debugger on your Mac);
tells the compiler (e.g. AOT) to disable some optimizations (which would otherwise make debugging harder).
This results in larger, slower applications that contain a lot of data you don't want people to access (e.g. if you fear reverse engineering). For releases, it's a no-win situation for everyone.
b) Using the LLVM compiler won't work in debug mode. That's generally not an issue since, when debugging, you'll likely want the build process to be as fast as possible (and LLVM is slower to build). The problematic case is when a bug shows up only in LLVM builds.
c) The availability of managed stack traces does not require debug symbols. They are built from the metadata available in your .dll and .exe files. But when debugging symbols are available, the stack trace will include the line number and filename for each stack frame.
d) I have never used the tools you mention, but I do believe them to be useful :-) You might wish to ask specific questions about them (wrt MonoTouch). Otherwise I think it's worth testing to see whether the level of detail differs (and whether the extra details are of any help to you). IMO I doubt it will bring you more than the actual 'cost' of shipping 'debug' builds.
first create a "crash me" feature in your application;
then compare the reported results from non-LLVM "release" and "debug" builds;
next compare the non-LLVM "release" and LLVM "release" builds;
It would be nice if you posted your experience of the above: here, on the MonoTouch mailing list, and/or in a blog entry :-)
Related
For both Xamarin.Android and Xamarin.iOS projects, there is a checkbox under "Compiler" titled "Enable Optimizations". The meaning is clear enough, but exactly what optimizations are those? For iOS, for example, there is already a separate option for enabling the optimizing LLVM compiler.
The C# compiler (either Mono's mcs on the Mac or Microsoft's csc on Windows) can emit somewhat better IL when this option is selected.
YMMV but, in general, this means some extra time to compile your source code, and the IL might be harder to read (if you decompile it) and sometimes harder to debug. In most cases the generated code will be identical.
Because of this the default option is, normally, to use Enable Optimizations only for release builds (and not for debug builds).
OTOH this has nothing to do with the JIT (or AOT or LLVM) optimizations that will be done later at runtime (for Xamarin.Android) or at native compilation (for Xamarin.iOS).
I know questions like this have been asked before, but I haven't found the exact answer I'm searching for.
Today I was taking part in an ACM-ICPC contest with my team. We usually use the GNU C++ 4.8.1 compiler (which was available in the contest). We had written code that exceeded the time limit on test case 10. At the end of the contest, with less than 2 minutes remaining, I sent exactly the same submission with Visual C++ 2013 selected (same source file, different language setting), and it was accepted. There were more than 60 test cases and our code passed them all.
Once more: there were no differences between the two source files.
Now I'm just interested in why this happened.
Does anyone know what the reason is?
Without knowing the exact compiler options you used, this question is a bit difficult to answer. Usually, compilers come with many options and provide some default values which are used as long as the user does not override them. This is also true for code optimization options. Both of the mentioned compilers are capable of significantly improving the speed of the generated binary when told to do so. A wild guess would be that in your case, the optimization settings used by the GNU compiler did not improve the executable's performance much, but the VC++ settings did - for example, because no optimization flags were passed in one case. Another wild guess would be that one compiler was generating a debug binary and the other was not (check for the option -g with GCC, which switches debug symbol generation on).
On the other hand, depending on the program you created, it could of course be that VC++ was simply better at performing the optimization than g++.
If you are interested in an easy performance increase, have a look at the high-level optimization flags at https://gcc.gnu.org/onlinedocs/gnat_ugn/Optimization-Levels.html or, for the full story, at the complete list at https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html.
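To see the effect for yourself, here is a minimal sketch (the file name and the loop body are made up for illustration): a tight arithmetic loop like this can easily run several times faster at -O2 than at -O0, because the optimizer can unroll it, apply strength reduction, and vectorize it.

    // opt_demo.cpp -- hypothetical demo; compare:
    //   g++ -O0 opt_demo.cpp && time ./a.out
    //   g++ -O2 opt_demo.cpp && time ./a.out
    // (with Visual C++: cl /Od opt_demo.cpp  vs.  cl /O2 opt_demo.cpp)
    #include <cstdint>
    #include <cstdio>

    int main() {
        std::uint64_t sum = 0;
        // A tight arithmetic loop: prime material for the optimizer.
        for (std::uint64_t i = 0; i < 500000000ULL; ++i)
            sum += i % 7;
        std::printf("%llu\n", static_cast<unsigned long long>(sum));
        return 0;
    }

The exact speedup depends on your machine and compiler version, but the gap between unoptimized and optimized builds of a loop like this is usually large enough to turn a time-limit-exceeded verdict into an accepted one.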
More input on comparing compilers:
http://willus.com/ccomp_benchmark2.shtml?p1
http://news.dice.com/2013/11/26/speed-test-2-comparing-c-compilers-on-windows
I'm a little late with this question, but better late than never. I've been using Visual Studio 6.0 since it came out, but recently switched to VS 2013 on a new PC.
I've gotten my projects to build under 2013, but the resulting executables are consistently bigger than the ones VS 6.0 produced. I've seen a similar thread on here about that happening in the transition from VS2008 to VS2010, and the comments and suggestions there all seem to attribute the change to changes in MFC libraries that are statically linked in. But my projects are straight C code. No C++, let alone MFC. And the 'Use of MFC' option on my project is set to "Use Standard Windows Libraries" (presumably set by the import tool that generated the 2013-compatible project). The only non-standard library it uses is wsock32.lib.
The extra size isn't a killer, but it's significant relative to the size of the whole app. My biggest .exe goes from 980 KB to 1.3 MB - about a 35% increase in the size of an app whose small size was a selling point (i.e. install this tiny app and you have access to all of our goodies). That's without debugging info - the increase on the debug version is even bigger - but I don't really care about that.
Any ideas how to strip out the new cruft - or even to know what it is?
This is a good manual on how to make your binaries smaller.
The basic ideas are the following (a sketch tying several of them together follows the list):
Don't forget about Release mode
Define WIN32_LEAN_AND_MEAN before including <windows.h>
Dynamically link to the C++ runtime
Compile the executable without debugging information
Compile with /O1, an 'optimize for size' flag
Remove iostream and fstream headers; use low-level I/O instead if possible
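A minimal sketch combining several of those points (the file name is made up; any tiny Win32 program will do):

    // tiny.cpp -- hypothetical example tying the list together.
    // Build small:  cl /O1 /MD tiny.cpp user32.lib
    //   /O1 -> optimize for size
    //   /MD -> dynamically link the C++ runtime
    //   (a Release configuration also omits debugging information)
    #define WIN32_LEAN_AND_MEAN   // trims rarely-used APIs from <windows.h>
    #include <windows.h>

    int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int) {
        // No <iostream>/<fstream>: low-level Win32 calls keep the amount
        // of statically pulled-in library code to a minimum.
        MessageBoxA(nullptr, "small binary", "tiny", MB_OK);
        return 0;
    }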
Typically you generate a MAP file on both systems (with the Visual C++ linker, pass /MAP) and figure out which sections contribute the most to the size difference.
Anton's answer reminds me: first check whether they are both linked the same way (both static or both dynamic; otherwise it is apples and oranges).
I'm using Visual Studio 2008 Pro, programming in C++. When I press the run button in debugging mode, are any compiler optimizations applied to the program by default?
The debugger will by default be running a debug build, which won't have optimizations turned on.
If optimizations are enabled, you may notice that "Step" and "Next" sometimes appear to cause the program flow to jump around. This is because the compiler sometimes re-orders instructions and the debugger is doing its best.
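A tiny illustration (hypothetical, but typical): build the snippet below with optimizations on (/O2) and step through it; the debugger will appear to skip or reorder the marked lines.

    #include <cstdio>

    int main() {
        int a = 2;               // may be constant-folded away entirely
        int b = 3;               // ditto: stepping may never stop here
        int c = a * b;           // likely folded to the constant 6
        std::printf("%d\n", c);  // in an optimized build, often the first
                                 // line the debugger actually lands on
        return 0;
    }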
I suppose it depends on what you'd classify as optimizations, but mostly no. Just for example, recent versions of VS do apply the (anonymous) return value optimization, at least in some cases, even with optimization disabled (/Od), as is normal for a debug build.
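A quick way to observe that particular optimization (the class and function names are made up): if the copy constructor below never prints, the return value optimization kicked in even though the build was unoptimized.

    #include <cstdio>

    struct Tracer {
        Tracer() { std::puts("constructed"); }
        Tracer(const Tracer&) { std::puts("copied"); }
    };

    Tracer make() {
        return Tracer();  // anonymous temporary: an RVO candidate
    }

    int main() {
        Tracer t = make();  // with RVO applied, "copied" is never printed,
                            // even in an unoptimized debug build
        return 0;
    }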
If you want to debug optimized code, it's usually easiest to switch to a release build and then tell it to generate debug info (/Zi for the compiler, /DEBUG for the linker). In theory you can turn on optimization in a debug build instead, but you have to change more switches to do it.
It seems that all my adult life I've been tormented by the VC++ linker complaining or balking because various libraries do not agree on which version of the Runtime library to use. I'm never in the mood to master that dismal subject. So I just try to mess with it until it works. The error messages are never useful. Neither is the Microsoft documentation on the subject - not to me at least.
Sometimes it does not find functions - because the name-mangling is not what was expected? Sometimes it refuses to mix-and-match. Other times it just says, "LINK : warning LNK4098: defaultlib 'LIBCMTD' conflicts with use of other libs; use /NODEFAULTLIB:library" Using /NODEFAULTLIB does not work, but the warning seems to be benign. What the heck is "DEFAULTLIB" anyway? How does the linker decide? I've never seen a way to specify to the linker which runtime library to use, only how to tell the compiler which library to create function calls for.
There are "dependency walker" programs that can inspect object files to see what DLLs they depend on. I just ran one on a project I'm trying to build, and it's a real mess. There are system .libs and .dlls that want conflicting runtime versions. For example, COMCTL32.DLL wants MSVCRT.DLL, but I am linking with MSVCRTD.DLL. I am searching to see if there's a COMCTL32D.DLL, even as I type.
So I guess what I'm asking for is a tutorial on how to sort those things out. What do you do, and how do you do it?
Here's what I think I know. Please correct me if any of this is wrong.
The parameters are Debug/Release, Multi-threaded/Single-threaded, and static/DLL. Only six of the eight possible combinations are covered. There is no single-threaded DLL, either Debug or Release.
The settings only affect which runtime library gets linked in (and the calling convention to link with it). You do not, for example, have to use a DLL-based runtime if you are building a DLL, nor do you have to use a Debug version of runtime when building the Debug version of a program, although it seems to help when single-stepping past system calls.
Bonus question: How could anyone or any company create such a mess?
Your points (1) and (2) look correct to me. Another thing to note with (2) is that linking in the debug CRT also gives you access to things like enhanced heap checking, checked iterators, and other assorted sanity checks. You cannot redistribute the debug CRT with your application, however -- you must ship using the release build only. Not only is it required by the VC license, but you probably don't want to be shipping debug binaries anyway.
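A small example of those sanity checks (hypothetical, standard-library-only): built against the debug CRT (/MDd or /MTd), the out-of-bounds subscript below trips a "vector subscript out of range" assertion instead of silently corrupting memory; a release build performs no such check.

    #include <vector>

    int main() {
        std::vector<int> v(3);
        // Debug CRT + checked iterators: this raises an assertion dialog.
        // Release CRT: undefined behavior, likely silent corruption.
        v[3] = 42;  // one past the end
        return 0;
    }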
There is no such thing as COMCTL32D.DLL. DLLs that are part of Windows must load the CRT that they were linked against when Windows was built -- this is included with the OS as MSVCRT.DLL. This Windows CRT is completely independent of the Visual C++ CRT that is loaded by the modules that comprise your program (MSVCRT.DLL is the one that ships with Windows; the VC CRT includes a version number, for example MSVCR80.DLL). Only the EXE and DLL files that make up your program are affected by the debug/release and multithreaded/single-threaded settings.
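To make the DEFAULTLIB part less mysterious: the compiler embeds a /DEFAULTLIB directive in every object file naming the CRT library that matches the /M switch it was compiled with, and LNK4098 fires when objects in one link disagree. As a rough reference (VC6-era names; the single-threaded options were dropped in VS2005), the switches map to libraries like this:

    /ML    single-threaded, static   Release   LIBC.LIB
    /MLd   single-threaded, static   Debug     LIBCD.LIB
    /MT    multi-threaded,  static   Release   LIBCMT.LIB
    /MTd   multi-threaded,  static   Debug     LIBCMTD.LIB
    /MD    multi-threaded,  DLL      Release   MSVCRT.LIB (import library)
    /MDd   multi-threaded,  DLL      Debug     MSVCRTD.LIB (import library)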
The best practice here IMO is to pick a setting for your CRT and standardize upon it for every binary that you ship. I'd personally use the multithreaded DLL runtime. This is because Microsoft can (and does) issue security updates and bug fixes to the CRT that can be pushed out via Windows Update.