I'm compiling my code on a server that has OpenMPI, but I need to know which version I'm on so I can read the proper documentation. Is there a constant in <mpi.h> that I can print to display my current version?
As explained in this tutorial, you may also check the MPI version by running the command:
mpiexec --version
or
mpirun --version
in your terminal.
With OpenMPI, the easiest thing to do is to run ompi_info; the first few lines will give you the information you want. In your own code, if you don't mind something OpenMPI specific, you can use OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, and OMPI_RELEASE_VERSION from mpi.h. That obviously won't work with MPICH2 or other MPI implementations.
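For example, a minimal sketch (OpenMPI specific; the macros below only exist in OpenMPI's mpi.h, so compile it with mpicc):

/* Prints the OpenMPI version macros; no MPI_Init needed, they are
 * plain preprocessor constants. */
#include <mpi.h>
#include <stdio.h>

int main(void)
{
#ifdef OMPI_MAJOR_VERSION
    printf("Open MPI %d.%d.%d\n",
           OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
#else
    printf("not Open MPI\n");
#endif
    return 0;
}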
More standardly, as part of MPI-3 there is a standard routine called MPI_Get_library_version which gives you detailed library information at run time. It is small and useful enough that newer versions of MPI implementations will pick it up very quickly - for instance, it's already in the OpenMPI 1.7 development trunk - but it doesn't really help you today.
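For illustration, a minimal sketch of how it can be called once your implementation supports it (this is the standard MPI-3 signature, nothing OpenMPI specific):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char version[MPI_MAX_LIBRARY_VERSION_STRING];
    int len;

    MPI_Init(&argc, &argv);
    /* Fills `version` with an implementation-specific description string. */
    MPI_Get_library_version(version, &len);
    printf("%s\n", version);
    MPI_Finalize();
    return 0;
}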
I am not familiar with OpenMPI, but MPI has a function MPI_Get_version; please check your mpi.h for similar functions.
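For reference, MPI_Get_version reports which version of the MPI standard the library implements (not the library's own release number); a minimal sketch:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int major, minor;

    MPI_Init(&argc, &argv);
    MPI_Get_version(&major, &minor);  /* e.g. 2 and 1 for MPI-2.1 */
    printf("MPI standard version %d.%d\n", major, minor);
    MPI_Finalize();
    return 0;
}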
You can also get the version of OpenMPI that the compiler wrapper (e.g. mpicxx/mpic++/mpicc/mpifort) comes from:
mpicxx --showme:version
This can be useful if (for any reason) your MPI compiler wrapper and launcher come from different versions.
(Just mpicxx --showme will additionally show you where MPI is installed and which compiler flags it uses; see the manual for more.)
I'm currently trying to profile a preloaded shared library by using the LD_PROFILE environment variable.
I compile the library with the "-g" flag and export LD_PROFILE_OUTPUT as well as LD_PROFILE before running an application (ncat in my case) with the preloaded library. More precisely, what I do is the following:
Compile the shared library libexample.so with the "-g" flag.
export LD_PROFILE_OUTPUT=`pwd`
export LD_PROFILE=libexample.so
run LD_PRELOAD=`pwd`/libexample.so ncat ...
The preloading itself does work and my library is used, but no file libexample.so.profile gets created. If I use export LD_PROFILE=libc.so.6 instead, there is a file libc.so.6.profile as expected.
Is this a problem of combining LD_PRELOAD and LD_PROFILE or is there anything I might have done wrong?
I'm using glibc v2.12 on CentOS 6.4 if that is of any relevance.
Thanks a lot!
Sorry, I don't know why LD_PROFILE does not work with LD_PRELOAD.
However, for profiling binaries compiled with -g I really like the tool valgrind together with the graphical tool kcachegrind.
valgrind --tool=callgrind /path/to/some/binary with options
will create a file called something like callgrind.out.1234, where 1234 is the pid of the program when it ran. That file can be analyzed with:
kcachegrind callgrind.out.1234
In kcachegrind you will easily see in which functions most CPU time is spent; the callee map also shows this in a nice graphical way. The call graph can help you understand how the program works, and you can even look at the source code to see how much CPU time is spent on each line.
I hope you find valgrind useful even though this is not an answer to your LD_PROFILE question. The drawback of valgrind is that it slows the program down considerably, whether it is used for profiling or for memory checking.
I downloaded the TightVNC source code from its website. Now I am trying to use gdb on its executable. The debugger successfully adds breakpoints on functions, but when I try to step through a function it says:
Single Stepping until exit from function func, which has no line number information
I think this is because the compilation wasn't done with the correct flags. I am trying to search the configuration files to understand how to enable it, but haven't been able to so far. I am not acquainted with Imakefiles etc. Maybe someone who has done this previously can help?
Using GNU GCC and GDB on an Ubuntu machine.
You should compile with the -g flag.
If you are trying to learn the code, I would recommend "-g -O0". That turns off the optimizer - gcc optimizations can make it confusing to step through code.
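As a quick illustration (a hypothetical demo file, not from the TightVNC sources), compile the same program twice and compare how gdb's step behaves:

/* demo.c - hypothetical example. Build it twice:
 *   gcc -g -O0 -o demo_debug demo.c   (stepping follows the source line by line)
 *   gcc -g -O2 -o demo_opt   demo.c   (inlining and reordering confuse "step") */
#include <stdio.h>

static int square(int x)
{
    return x * x;  /* likely inlined away at -O2 */
}

int main(void)
{
    int total = 0;
    int i;
    for (i = 0; i < 5; i++)
        total += square(i);
    printf("%d\n", total);
    return 0;
}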
My initial task was to install mod_perl 2.0.6 + Apache 2.2.22.
The process stopped with a lot of errors related to off64_t when compiling mod_perl, so I started to dig deeper. First, I installed two new instances of Perl 5.8.9 (because I'll have to use this version): a threaded version and a non-threaded one (they are identical; only usethreads differs). Reproducing the same steps with the threaded Perl finished successfully, with no off64_t errors at all.
The conclusion is obvious: threaded Perl provides the necessary off64_t, while the non-threaded one doesn't.
Searching further, I compared the config.h (from core/<arch>/CORE) of both Perls, and at line 3671 I see this in the non-threaded Perl:
/* HAS_OFF64_T:
* This symbol will be defined if the C compiler supports off64_t.
*/
/*#define HAS_OFF64_T / **/
and in the threads-enabled Perl:
#define HAS_OFF64_T /**/
perl -V for both Perl instances reports ccflags ='... -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 ...' as the compiler flags used.
As I understand it, off64_t is used for large files and isn't related to threads. I found this information about off_t and off64_t:
If the source is compiled with _FILE_OFFSET_BITS = 64 this type (i.e. off_t) is transparently replaced by off64_t.
In short: there are two identical Perl builds with a single difference, the usethreads Configure parameter. The threaded Perl enables off64_t, the non-threaded one doesn't.
My question is: why does this happen, and how are threads connected to this off64_t data type, which should relate to large files, not threads?
Info: Arch Linux OS 32-bit (kernel 2.6.33), gcc 4.5.0, libc 2.11.1, standard Perl 5.8.9
Notes: off64_t is probed by Configure at line 15526; a simple try.c is generated and Configure attempts to compile it. The question is why the non-threaded Perl cannot compile it while the threaded Perl can.
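To make the probe concrete, here is a rough sketch of the kind of test program Configure generates (my reconstruction, not the actual try.c):

/* Compiles only if the libc headers expose off64_t. On glibc, off64_t
 * is hidden unless _LARGEFILE64_SOURCE is defined, and _GNU_SOURCE implies it:
 *   gcc -D_GNU_SOURCE try.c   -> compiles
 *   gcc try.c                 -> fails: off64_t is not declared */
#include <sys/types.h>

int main(void)
{
    off64_t x = 0;
    return (int)x;
}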
I'm not sure if answering my own question is accepted behaviour, but since I kept searching for the solution instead of just waiting for someone else to do my homework, I think the result will be useful for other people reading this.
In short, the answer to my question is the -D_GNU_SOURCE gcc compiler flag; threads as such have nothing to do with the off64_t type.
It appears that when -Dusethreads is used for Configure, hints/linux.sh is used and the following code is executed:
case "$usethreads" in
$define|true|[yY]*)
ccflags="-D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS $ccflags"
then the code is compiled with _GNU_SOURCE defined, which makes a lot of additional declarations visible (as answered in this thread: What does "#define _GNU_SOURCE" imply?).
When Perl is built without threads support, these flags are skipped and a lot of bits in the header files remain commented out.
It seems Perl itself is not affected by this. Even older versions of Apache were not, but Apache 2.2+ started to use code that is only enabled under _GNU_SOURCE, so building mod_perl is no longer as straightforward as before.
I don't know who should take notice of this. Maybe core Perl maintainers, maybe Apache maintainers, maybe no one, and it's just my particular case or a compiler issue.
Conclusion: when building a non-threaded Perl, _GNU_SOURCE is not defined; as a result, Perl's .h files leave a lot of #defines commented out, and building mod_perl against Apache 2.2+ sources fails. An additional -Accflags='-D_GNU_SOURCE' should be passed when building Perl.
Other answers are welcome too. Maybe I'm wrong, or I'm just seeing the tip of the iceberg.
I've downloaded "openjdk-6-src-b23-05_jul_2011" to have a look at the native implementations of the methods in sun.misc.Unsafe, e.g. compareAndSwapInt(...), but I am not able to find anything in the downloaded OpenJDK sources. I want to get an idea of what these methods look like (I was interested in the atomic operations the JDK provides).
Could anybody point me to the right location(s)?
$ ls jdk/src/
linux share solaris windows
$ ls hotspot/src/os/
linux posix solaris windows
Any help appreciated,
Marcel
The implementation of the Unsafe methods is itself not OS-specific, so it can be found in hotspot/src/share/vm/prims/unsafe.cpp. It delegates to hotspot/src/share/vm/runtime/atomic.cpp, which includes OS- and CPU-specific files such as hotspot/src/os_cpu/windows_x86/atomic_windows_x86.inline.hpp.
GCC provides atomic builtins similar to those Java offers:
http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html
But the problem is that there is no standard; if you move to Solaris, you will need something else. So you have to use different primitives as you change platform.
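For example, a minimal sketch of a compare-and-swap using the GCC builtins (available since GCC 4.1), analogous to compareAndSwapInt:

#include <stdio.h>

int main(void)
{
    int value = 5;
    /* Atomically: if value == 5, replace it with 7; returns the old value. */
    int old = __sync_val_compare_and_swap(&value, 5, 7);
    printf("old=%d new=%d\n", old, value);  /* prints: old=5 new=7 */
    return 0;
}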
I am trying to build some big libraries, like Boost and OpenCV, from their source code via make and GCC under Ubuntu 8.10 on my laptop. Unfortunately, compiling those big libraries seems to be a big burden for my laptop (Acer Aspire 5000). Its fan gets louder and louder until, all of a sudden, the laptop shuts itself down without the OS gracefully turning off.
So I wonder: how can I reduce the compilation cost with make and GCC?
I wouldn't mind if the compilation takes much longer or uses more space, as long as it can finish without my laptop shutting itself down.
Is building the debug version of libraries always less costly than building release version because there is no optimization?
Generally speaking, is it possible to specify just some part of a library to build and install instead of the full library? Can the rest be built and added later if needed?
Is it correct that if I restart my laptop, compilation can resume from around where it was when the laptop shut itself down? For example, I noticed this is true for OpenCV, because the progress percentage shown during its compilation does not restart from 0%. But I am not sure about Boost, since there is no obvious indicator and the compilation seems to take much longer.
UPDATE:
Thanks, brianegge and Levy Chen! How do I use the wrapper script for GCC and/or g++? Is it like defining an alias for GCC or g++? And how do I call a script to check sensors and wait until the CPU temperature drops before continuing?
I'd suggest creating a wrapper script for gcc and/or g++
#!/bin/bash
# Pause before each compile to give the CPU a chance to cool down.
sleep 10
exec gcc "$@"
Save the above as "gccslow" or something, and then:
export CC="gccslow"
Alternatively, you can call the script gcc and put it at the front of your path. If you do that, be sure to include the full path to the real gcc in the script; otherwise the script will call itself recursively.
A better implementation could call a script to check sensors and wait until the CPU temperature drops before continuing.
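A minimal sketch of such a check in C, assuming the common Linux sysfs thermal interface (the sensor path and the 70 C threshold are assumptions; adjust them for your machine). The wrapper script could run this instead of the fixed sleep:

/* waitcool.c - block until the CPU temperature drops below a threshold. */
#include <stdio.h>
#include <unistd.h>

#define SENSOR "/sys/class/thermal/thermal_zone0/temp"  /* assumed path; reports millidegrees C */
#define MAX_MILLI_C 70000                               /* 70 C, an assumed threshold */

int main(void)
{
    for (;;) {
        FILE *f = fopen(SENSOR, "r");
        long temp;
        if (!f || fscanf(f, "%ld", &temp) != 1) {
            if (f) fclose(f);
            return 1;  /* no readable sensor: give up rather than loop forever */
        }
        fclose(f);
        if (temp < MAX_MILLI_C)
            return 0;  /* cool enough: let the compile proceed */
        sleep(5);      /* still hot: wait and re-check */
    }
}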
For your latter question: a well-written Makefile defines dependencies as a directed acyclic graph (DAG), and make tries to satisfy those dependencies by compiling targets in an order consistent with the DAG. Thus, once a file is compiled, its dependency is satisfied and it need not be compiled again.
It can, however, be tricky to write good Makefiles, so sometimes the author resorts to a brute-force approach and recompiles everything from scratch.
For such well-known libraries, I will assume the Makefiles are written properly, so the build should resume from the last operation (with the caveat that make needs to rescan the DAG and recalculate the compilation order, which should be relatively cheap).
Instead of compiling the whole thing, you can compile each target separately; you have to examine the Makefile to identify them.
Tongue-in-cheek: What about putting the laptop into the fridge while compiling?