GCC vs. Visual Studio run-time differences

I have written C++ code for a vehicle routing project. On my Dell laptop I have both Ubuntu and Windows 7 installed. When I run my code compiled with GCC on the Linux side, it runs at least 10x faster than the exact same code built with Visual C++ 2010 on Windows (both on the same machine). This is not just for one particular program; it happens for almost every C++ program I have been using.
I am assuming there is an explanation for such large differences in run times and why GCC outperforms Visual C++ here. Could anyone enlighten me on this?
Thanks.

In my experience, both compilers are fairly equal, but you have to watch out for a few things:
1. Visual Studio defaults to stack checking on, which means every function starts by filling its stack frame with a known pattern (roughly a small memset) and ends by verifying that the pattern is intact (roughly a small memcmp). Turn that off if you want performance - it's great for catching when you write to the 11th element of a ten-element array.
2. Visual Studio does buffer overflow checking. Again, this can add a significant amount of time to the execution.
See: Visual Studio Runtime Checks
I believe these are normally enabled in debug mode, but not in release builds, so you should get similar results from release builds and -O2 or -O3 optimized builds on gcc.
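For a fair comparison, build both sides with optimization on and the checks off. The exact flags depend on your project; as a minimal sketch (the file name is illustrative):

cl /O2 /GS- routing.cpp
g++ -O2 -o routing routing.cpp

Here /O2 enables MSVC's optimizer (which is incompatible with the /RTC runtime checks anyway) and /GS- disables the buffer security checks; -O2 is the roughly comparable GCC optimization level.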
If this doesn't help, then perhaps you can give us a small (compilable) example, and the respective timings.

Related

SIGILL in Android NDK code

I have developed a library which I have been testing on an x86-64 machine, where it works and passes its tests successfully. When I put it in my Android application, the code stops in a constructor that just initializes all its variables to their default values (pointers get assigned to null, booleans to false...). I have set the target to x86-64, so I am sure it's not a problem of deploying to a different architecture. How can I find the root of the problem? If I comment out the initialization in the constructor, it executes a good amount of code before giving a SIGILL error again. I am using an Android 8 x86-64 Intel image in the emulator. Also, logcat doesn't show anything; the only error is the SIGILL.
It seems that most of the time, doing some pointer manipulation causes the problem. Simply initializing pointers with null or new causes the app to crash.
It turned out that instead of enabling SSE, I had enabled AVX, which is not supported by Android; clang therefore optimized some parts using AVX instructions, which resulted in the SIGILL.
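For reference, the difference is just the target feature flag passed to the compiler: the Android x86_64 ABI guarantees SSE4.2 but not AVX. A minimal sketch with the NDK's clang (the file name is illustrative):

# the mistake: AVX instructions are not supported by the Android x86_64 image -> SIGILL
clang++ -mavx -c mylib.cpp
# the fix: stay within what the ABI guarantees (SSE4.2 and below)
clang++ -msse4.2 -c mylib.cpp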

Android Studio uses too much CPU

I'm running AS 1.2.2 on OS X 10.10.3. The CPU usage swings wildly up and down. Trying to edit anything is a real pain - deleting characters, typing, type-checking - all are slow because Studio is consuming a huge amount of resources. I can press a key and sometimes must wait 5 seconds before it updates on the screen.
Has anyone else had this problem and figured out how to make Android Studio usable again?

Why would Eclipse C NDK JNI execution speed be 6 times slower than GCC in the shell?

I have a program which uses C NDK subroutines via JNI to process a 3204x2406 image file; it takes 6+ seconds when run (without debug) by Eclipse with NDK_DEBUG=0.
I've got the same code compiled with GCC on the Android device, running in the shell in under 1 second.
The code consists of loops and integer math. Both the Eclipse NDK program and the GCC shell program access the exact same file from the exact same location. There are no trace statements within the 6 seconds. The only external calls are 2406 read statements.
The Eclipse is the Google integrated download 21.0.0, which uses Juno 4.2.1 and C/C++ 8.1.1. And yes, I've restarted Eclipse and cleaned the project.
I'm now thinking about trying to call or link to the GCC code, but keep feeling like I must be missing something silly.
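One thing worth ruling out (an assumption, since the build flags aren't shown in the question): ndk-build compiles at -O0 whenever the application is marked debuggable, while the shell binary was presumably built with optimization. Forcing an optimized NDK build would look like:

ndk-build NDK_DEBUG=0 APP_OPTIM=release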

Visual C++ Express 10 using too much memory

I use Process Explorer (which is a Microsoft tool) on Windows XP, and often the physical memory fills to its maximum (3 GB) while I use Visual C++. At that point, all my programs become slow and unresponsive, and when things return to normal, available memory comes back by nearly half! What is wrong?
I'm working on a project with Ogre3D; maybe I can deactivate some options in Visual C++. What exactly is it caching that eats that much memory?
Apparently MSVC is designed to work on big machines; there are many settings in Text Editor -> C/C++ to remove some weight, but my guess is that Windows XP and recent Microsoft apps don't play nicely together.

How to reduce compilation cost in GCC and make?

I am trying to build some big libraries, like Boost and OpenCV, from source via make and GCC under Ubuntu 8.10 on my laptop. Unfortunately, compiling those big libraries seems to be a big burden for my laptop (an Acer Aspire 5000). Its fan gets louder and louder until, all of a sudden, the laptop shuts itself down without the OS turning off gracefully.
So I wonder: how can I reduce the compilation cost with make and GCC?
I wouldn't mind if the compilation took much longer or used more space, as long as it could finish without my laptop shutting itself down.
Is building the debug version of a library always less costly than building the release version, since there is no optimization?
Generally speaking, is it possible to install just part of a library instead of the whole thing? Can the rest be built and added in later if it turns out to be needed?
Is it correct that if I restart my laptop, I can resume compilation from around where it was when the laptop shut itself down? For example, this seems to be true for OpenCV: the progress percentage shown during its compilation does not restart from 0%. But I am not sure about Boost, since there is no obvious way for me to tell, and its compilation seems to take much longer.
UPDATE:
Thanks, brianegge and Levy Chen! How do I use the wrapper script for GCC and/or g++? Is it like defining an alias for gcc or g++? And how would I call a script that checks the sensors and waits until the CPU temperature drops before continuing?
I'd suggest creating a wrapper script for gcc and/or g++
#!/bin/bash
# pause before each compile so the CPU gets a chance to cool down
sleep 10
exec gcc "$@"
Save the above as "gccslow" or something, and then:
export CC="gccslow"
Alternatively, you can call the script gcc and put it at the front of your path. If you do that, be sure to include the full path in the script, otherwise, the script will call itself recursively.
A better implementation could call a script to check sensors and wait until the CPU temperature drops before continuing.
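As a sketch of that idea, assuming the lm-sensors tools are installed and "sensors" prints per-core lines like "Core 0: +72.0°C" (the 65-degree threshold is arbitrary):

#!/bin/bash
# wait until the CPU cools down, then hand off to the real compiler
while true; do
    temp=$(sensors | awk '/^Core 0:/ { gsub(/[^0-9.]/, "", $3); print int($3) }')
    [ "${temp:-0}" -lt 65 ] && break
    sleep 10
done
exec /usr/bin/gcc "$@"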
For your latter question: a well-written Makefile defines dependencies as a directed acyclic graph (DAG) and tries to satisfy those dependencies by compiling them in the order the DAG dictates. Thus, once a file is compiled, its dependency is satisfied and it need not be compiled again.
It can, however, be tricky to write good Makefiles, so sometimes an author will resort to a brute-force approach and recompile everything from scratch.
For your question: for such well-known libraries, I would assume the Makefile is written properly, and the build should resume from the last operation (with the caveat that make needs to rescan the DAG and recalculate the compilation order, which should be relatively cheap).
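Concretely, resuming is usually just a matter of re-running the same command; make compares timestamps and skips objects that are already up to date (the path below is illustrative):

cd opencv/build     # illustrative build directory
make                # make rescans the DAG; already-built objects are skipped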
Instead of compiling the whole thing, you can also compile each target separately. You have to examine the Makefile to identify them.
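For example (target names vary by library; "make help" works for CMake-generated Makefiles such as OpenCV's, and "opencv_core" is one of the targets it lists):

make help           # list available targets, if the Makefile provides this rule
make opencv_core    # build a single component instead of everything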
Tongue-in-cheek: What about putting the laptop into the fridge while compiling?
