How to make OpenMP work with MinGW-64 under Cygwin?

The Scenario
I am developing an application in C99 ANSI C that uses OpenMP and GMP. Its natural habitat will be a Linux machine with plenty of cores, so there's basically no big trouble there, but for reasons I do not want to debate here, I have to develop under Cygwin on a 64-bit Windows machine.
When I use the 32-bit version of gcc, something, somewhere, goes horribly wrong and the application is about 60 times slower than a very crude single-threaded version, when it should in fact be faster by a factor roughly equal to the number of CPUs. That makes it impossible to work with, and I really don't know what is causing it. Anyway, I have decided to use the 64-bit version of MinGW instead; that'd be x86_64-w64-mingw32-gcc-4.5.3 and its friends, to be precise.
A side note: I am sure that the slowdown is not a flaw in my multithreading; the multithreaded application works correctly and faster on the Linux machine.
The actual Problem
Setting up GMP was easy, it can be compiled from source without any trouble and then works like a charm. Compiling the following easy example with -fopenmp also works like a charm:
#include <gmp.h>
#include <omp.h>

int main() {
    #pragma omp parallel
    {
        mpz_t t;
        mpz_init(t);
        mpz_set_si(t, omp_get_thread_num());
        #pragma omp critical
        {
            gmp_printf("Hello From GMP'd Thread %Zd!\n", t);
            fflush(stdout);
        }
        mpz_clear(t);
    }
    return 0;
}
However, executing it gives me
$ ./test
test.exe: error while loading shared libraries: ?:
cannot open shared object file: No such file or directory
I am aware of this question, but I would like to make this work without downloading any binaries other than those from an official Cygwin repository. Since my example compiled with the -fopenmp switch, I am convinced that this should also be very much possible.
Can someone help me with that? Thanks a bunch in advance.

I think that "error while loading shared libraries: ?:" means that Cygwin does not know where to find libgmp-10.dll and/or libgomp-1.dll.
Both DLLs are required according to Dependency Walker.
Your program worked after I added the directory that contains both DLLs to my PATH:
#$ x86_64-w64-mingw32-g++ -fopenmp -o w64test gmp_hello.c -lgmp
#$ file ./w64test.exe
./w64test.exe: PE32+ executable (console) x86-64, for MS Windows
#$ ./w64test.exe
/home/david/SO/hello_openmp/w64test.exe: error while loading shared
libraries: ?: cannot open shared object file: No such file or
directory
#$ ls /cygdrive/c/dev/cygwin/usr/x86_64-w64-mingw32/sys-root/mingw/bin/*mp*dll
/cygdrive/c/dev/cygwin/usr/x86_64-w64-mingw32/sys-root/mingw/bin/libgmp-10.dll
/cygdrive/c/dev/cygwin/usr/x86_64-w64-mingw32/sys-root/mingw/bin/libgomp-1.dll
#$ export PATH=$PATH:/cygdrive/c/dev/cygwin/usr/x86_64-w64-mingw32/sys-root/mingw/bin/
#$ ./w64test.exe
Hello From GMP'd Thread 1!
Hello From GMP'd Thread 0!
note
I compiled and installed gmp-5.0.5 with the following commands:
./configure --build=i686-pc-cygwin --host=x86_64-w64-mingw32 --prefix=/usr/x86_64-w64-mingw32/sys-root/mingw --enable-shared --disable-static
make -j 2
make check
make install
update
Your program also works with the Cygwin "GCC Release series 4 compiler".
#$ g++ -fopenmp -o cygtest gmp_hello.c -lgmp
#$ ./cygtest.exe
Hello From GMP'd Thread 1!
Hello From GMP'd Thread 0!
#$ g++ -v
Target: i686-pc-cygwin
Thread model: posix
gcc version 4.5.3 (GCC)
You might need to install the following packages:
libgmp-devel (Development library for GMP)
libgmp3 (Runtime library for GMP)
libgomp1 (GOMP shared runtime)

Related

Remove all previous MPI versions and reinstall correctly

First of all: I'm on linux mint 17.3 x64
What I've done so far:
Guide to install Open MPI 1.8
Guide to install MPI
Attempt to remove MPI by executing: sudo apt-get install libcr-dev mpich2 mpich2-doc (actually these should not be installed)
What I can see from terminal:
output of: echo $PATH
/path/to/mpj//bin:/home/timmy/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/timmy/.openmpi/bin
(I imagine that I have to remove /path/to/mpj/ (which does not exist) and /home/timmy/.openmpi/bin (I want to remove the previous version of Open MPI))
output of: echo $LD_LIBRARY_PATH
(nothing)
Really, nothing appears at all!
output of mpirun
--------------------------------------------------------------------------
mpirun could not find anything to do.
It is possible that you forgot to specify how many processes to run
via the "-np" argument.
--------------------------------------------------------------------------
Why I want to remove Open MPI and reinstall it
I have a project to do using both MPI and OpenMP, and with the current installation of MPI I cannot compile using the following command: mpicc -openmp "test_omp.c" -o "test_omp". It gives me the following error: undefined function omp_get_thread_num(); moreover, it ignores my #pragma directives.
Your problem is that you are giving the compiler the wrong option to enable OpenMP support. -openmp is only understood by the (commercial) Intel compiler, which is probably the toolset installed on the site you've referred to in your other question. Most Linux distributions come with GCC, and one can assume that mpicc will use GCC (check with mpicc -showme).
The option to enable OpenMP support in GCC is -fopenmp (notice the f).
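As a sanity check, here is a minimal hybrid MPI/OpenMP program (a sketch; the file name hybrid_hello.c and the build/run commands in the comments are only assumptions) that should build with mpicc -fopenmp but reproduce the omp_get_thread_num() link error if the OpenMP option is missing or wrong:

/* hybrid_hello.c - hypothetical file name for a minimal MPI + OpenMP check.
 * Build (assuming a GCC-backed mpicc):  mpicc -fopenmp hybrid_hello.c -o hybrid_hello
 * Run:                                  mpirun -np 2 ./hybrid_hello
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    #pragma omp parallel
    {
        /* Without -fopenmp the pragma is ignored and omp_get_thread_num()
         * stays unresolved at link time - the same symptom as above. */
        printf("MPI rank %d, OpenMP thread %d\n", rank, omp_get_thread_num());
    }
    MPI_Finalize();
    return 0;
}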

GCC: Difference between buildroot gcc and precompiled gcc (installed with APT)?

I'm trying to make custom binaries for an initrd for an x86 system. I took the generic precompiled Debian 7 gcc (version 4.7.2-5) and compiled the kernel with it. The next step was to make a helloworld program to use instead of the init script in the initrd, to check my development progress. The helloworld program was also compiled with that gcc. When I tried to start my custom system, the kernel started with no problem, but the helloworld program encountered some errors:
kernel: init[24879] general protection ip:7fd7271585e0 sp:7fff1ef55070 error:0 in init[7fd727142000+20000]
(the numbers are not mine; I took a similar string from Google). Helloworld program:
#include <stdio.h>
#include <unistd.h> /* for sleep() */

int main() {
    printf("Helloworld\r\n");
    sleep(9999999);
    return 0;
}
Compilation:
gcc -static -o init test.c
Earlier I also got stuck with the same problem on an ARM system (took a generic compiler, compiled the kernel and some binaries with it, and tried to run; the kernel runs, but the binary does not). I solved it with a complete buildroot system, and used the buildroot compiler in later projects.
So my question is: what is the difference between a gcc built as part of buildroot and a generic precompiled gcc?
I know that the buildroot compiler is made in several steps, with different libs and so on; is this the main difference, platform independence?
I don't need a solution, I can take buildroot anytime. I want to know source of my problem, to avoid such problems in future. Thanks.
UPD: Replaced sleep with while(1); and got the same situation. My kernel output:
init[1]: general protection ip: 8053682 sp: bf978294 error: 0 in init[8048000+81000]
printk: 14300820 message suppressed.
and repeating every second.
UPD2: I added vdso32-int80.so (original name, as in the kernel tree), tested - no luck.
I added ld-linux.so (2 files: ld-2.13.so with a symbolic link), tested - same error.
The Busybox way allows running binaries without any of these libraries, tested by me on an ARM platform.
Thanks for trying to help me, any other ideas?

"Compiler threading support is not turned on."

Normally I can google my way around and find solutions, but not this time.
I'm using 64-bit Ubuntu 11.04 to compile a 32-bit Windows application. I'm using i586-mingw32msvc-gcc to compile my C++ files.
test.cpp:
#include <boost/asio.hpp>
makefile:
i586-mingw32msvc-gcc -c -m32 -mthreads -o test.o test.cpp
Error:
boost/asio/detail/socket_types.hpp:
# include <sys/ioctl.h>
doesn't exist.
Added to makefile: -DBOOST_WINDOWS
Error:
# warning Please define _WIN32_WINNT or _WIN32_WINDOWS appropriately
Ok, added to makefile: -D_WIN32_WINNT=0x0501
Error:
# error "Compiler threading support is not turned on. Please set the correct command line options for threading: -pthread (Linux), -pthreads (Solaris) or -mthreads (Mingw32)"
Yet I did specify -mthreads.
Adding -DBOOST_HAS_THREADS might be sufficient (see # elif defined __GNUC__ from the offending header). But it's likely/possible that your boost installation has been crafted to support your build environment and not your target. Try building it yourself with your cross-compiling toolchain.
It turned out that I had a set of #undef and #defines to force the GLIBC version to something that allowed me to compile for Linux (not cross-compile) on RHEL5, which otherwise would give me all kinds of other errors. It turns out that when cross-compiling for Windows using MinGW, force-feeding the GLIBC version causes Boost to take a strange path, leaving various aspects undefined, including the behavior or availability of threading. I surrounded it with an #ifndef _WIN32, which made the problem go away.
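For illustration, here is a sketch of the kind of guard described above; the macro name below is a placeholder, since the actual GLIBC force-defines are not shown in the answer:

/* Sketch only: FORCE_OLD_GLIBC_MACRO is a placeholder name, not the real macro.
 * The point is that the native-Linux-only overrides are hidden from the
 * MinGW cross-compile by the _WIN32 guard. */
#ifndef _WIN32
#undef  FORCE_OLD_GLIBC_MACRO
#define FORCE_OLD_GLIBC_MACRO 1
#endif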
Maybe that -mthreads argument needs to come last.

Setting up G++ or ICC for mpi.h on Ubuntu

I have never done any major programming outside of VS08.
I am trying to compile a program called LAMMPS with either of the two relevant make files. One calls g++ and the other calls icc (Intel's compiler).
icc produces this error:
icc -O -DLAMMPS_GZIP -DMPICH_SKIP_MPICXX -DFFT_FFTW -M write_restart.cpp > write_restart.d
write_restart.cpp(15): catastrophic error: cannot open source file "mpi.h"
#include "mpi.h"
and g++ throws this error
g++ -g -O -DLAMMPS_GZIP -DMPICH_SKIP_MPICXX -DFFT_FFTW -M verlet.cpp > verlet.d
pointers.h:25: fatal error: mpi.h: No such file or directory
compilation terminated.
The mpi.h file is located in /usr/lib/openmpi/include
It is my understanding that I need to set the $PATH variable, which reads
bash: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/X11R6/bin:/opt/intel/bin:/usr/lib/openmpi/include:
and $LD_LIBRARY_PATH which currently reads
/usr/lib/openmpi/lib:
So, how does one include the mpi.h file, so that either icc or g++ finds it?
mpi.h is a header for the MPI library. It would be found automatically if you used the mpic++ MPI compiler wrapper instead of g++ in your makefile; mpic++ will call the appropriate compiler with the right include and library paths. From what you describe, you have the openmpi package installed on your Ubuntu machine.
For more info, you need to consult the manual, e.g.
http://lammps.sandia.gov/doc/Section_start.html#2_2 (for LAMMPS)
and perhaps you need to see the openmpi manual on how to set up an additional compiler. Not sure if this can be done after openmpi itself has been built. By default, I think the openmpi compiler wrappers in Ubuntu would only call g++. CMIIW.
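If the wrapper is available, a quick way to confirm that mpi.h is picked up is to compile a minimal program with it instead of calling g++ or icc directly (a sketch; the file name mpi_hello.c is only an example, and mpicc -showme is the Open MPI way to print the underlying compiler command):

/* mpi_hello.c - hypothetical file name; checks that mpi.h is found.
 * Build with the wrapper instead of the bare compiler:
 *   mpicc mpi_hello.c -o mpi_hello     (use mpic++ for C++ sources)
 * Run:
 *   mpirun -np 2 ./mpi_hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}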
Okay, so I got it to work with g++ when setting up cc as "mpic++.mpich2" instead of "mpic++"
You can try compiling using the openmpi makefile in /src/MAKE:
make openmpi
In my case, this option was successful.

How can I link to a specific glibc version?

When I compile something on my Ubuntu Lucid 10.04 PC it gets linked against glibc. Lucid uses glibc 2.11. When I run this binary on another PC with an older glibc, the command fails saying there's no glibc 2.11...
As far as I know glibc uses symbol versioning. Can I force gcc to link against a specific symbol version?
In my concrete use I try to compile a gcc cross toolchain for ARM.
You are correct in that glibc uses symbol versioning. If you are curious, the symbol versioning implementation introduced in glibc 2.1 is described here and is an extension of Sun's symbol versioning scheme described here.
One option is to statically link your binary. This is probably the easiest option.
You could also build your binary in a chroot build environment, or using a glibc-new => glibc-old cross-compiler.
According to the http://www.trevorpounds.com blog post Linking to Older Versioned Symbols (glibc), it is possible to force any symbol to be linked against an older one, so long as it is valid, by using the same .symver pseudo-op that is used for defining versioned symbols in the first place. The following example is excerpted from the blog post.
The following example makes use of glibc’s realpath, but makes sure it is linked against an older 2.2.5 version.
#include <limits.h>
#include <stdlib.h>
#include <stdio.h>

__asm__(".symver realpath,realpath@GLIBC_2.2.5");

int main()
{
    const char* unresolved = "/lib64";
    char resolved[PATH_MAX+1];
    if(!realpath(unresolved, resolved))
        { return 1; }
    printf("%s\n", resolved);
    return 0;
}
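The same trick is often needed for memcpy, which gained a new GLIBC_2.14 version on x86-64; here is a sketch along the same lines (the version tags assume an x86-64 glibc):

/* Sketch: bind memcpy to the older GLIBC_2.2.5 version so the resulting
 * binary does not require memcpy@GLIBC_2.14 from glibc >= 2.14. */
#include <string.h>
#include <stdio.h>

__asm__(".symver memcpy,memcpy@GLIBC_2.2.5");

int main(void)
{
    char dst[16];
    memcpy(dst, "hello", 6);
    puts(dst);
    return 0;
}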
Setup 1: compile your own glibc without dedicated GCC and use it
Since it seems impossible to do just with symbol versioning hacks, let's go one step further and compile glibc ourselves.
This setup might work and is quick as it does not recompile the whole GCC toolchain, just glibc.
But it is not reliable, as it uses host C runtime objects such as crt1.o, crti.o, and crtn.o provided by glibc. This is mentioned at: https://sourceware.org/glibc/wiki/Testing/Builds?action=recall&rev=21#Compile_against_glibc_in_an_installed_location. Those objects do early setup that glibc relies on, so I wouldn't be surprised if things crashed in wonderful and awesomely subtle ways.
For a more reliable setup, see Setup 2 below.
Build glibc and install locally:
export glibc_install="$(pwd)/glibc/build/install"
git clone git://sourceware.org/git/glibc.git
cd glibc
git checkout glibc-2.28
mkdir build
cd build
../configure --prefix "$glibc_install"
make -j `nproc`
make install -j `nproc`
Setup 1: verify the build
test_glibc.c
#define _GNU_SOURCE
#include <assert.h>
#include <gnu/libc-version.h>
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

atomic_int acnt;
int cnt;

int f(void* thr_data) {
    for(int n = 0; n < 1000; ++n) {
        ++cnt;
        ++acnt;
    }
    return 0;
}

int main(int argc, char **argv) {
    /* Basic library version check. */
    printf("gnu_get_libc_version() = %s\n", gnu_get_libc_version());
    /* Exercise thrd_create from -pthread,
     * which is not present in glibc 2.27 in Ubuntu 18.04.
     * https://stackoverflow.com/questions/56810/how-do-i-start-threads-in-plain-c/52453291#52453291 */
    thrd_t thr[10];
    for(int n = 0; n < 10; ++n)
        thrd_create(&thr[n], f, NULL);
    for(int n = 0; n < 10; ++n)
        thrd_join(thr[n], NULL);
    printf("The atomic counter is %u\n", acnt);
    printf("The non-atomic counter is %u\n", cnt);
}
Compile and run with test_glibc.sh:
#!/usr/bin/env bash
set -eux
gcc \
-L "${glibc_install}/lib" \
-I "${glibc_install}/include" \
-Wl,--rpath="${glibc_install}/lib" \
-Wl,--dynamic-linker="${glibc_install}/lib/ld-linux-x86-64.so.2" \
-std=c11 \
-o test_glibc.out \
-v \
test_glibc.c \
-pthread \
;
ldd ./test_glibc.out
./test_glibc.out
The program outputs the expected:
gnu_get_libc_version() = 2.28
The atomic counter is 10000
The non-atomic counter is 8674
Command adapted from https://sourceware.org/glibc/wiki/Testing/Builds?action=recall&rev=21#Compile_against_glibc_in_an_installed_location but --sysroot made it fail with:
cannot find /home/ciro/glibc/build/install/lib/libc.so.6 inside /home/ciro/glibc/build/install
so I removed it.
The ldd output confirms that the loader and libraries that we've just built are actually being used as expected:
+ ldd test_glibc.out
linux-vdso.so.1 (0x00007ffe4bfd3000)
libpthread.so.0 => /home/ciro/glibc/build/install/lib/libpthread.so.0 (0x00007fc12ed92000)
libc.so.6 => /home/ciro/glibc/build/install/lib/libc.so.6 (0x00007fc12e9dc000)
/home/ciro/glibc/build/install/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007fc12f1b3000)
The gcc compilation debug output shows that my host runtime objects were used, which is bad as mentioned previously, but I don't know how to work around it, e.g. it contains:
COLLECT_GCC_OPTIONS=/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crt1.o
Setup 1: modify glibc
Now let's modify glibc with:
diff --git a/nptl/thrd_create.c b/nptl/thrd_create.c
index 113ba0d93e..b00f088abb 100644
--- a/nptl/thrd_create.c
+++ b/nptl/thrd_create.c
@@ -16,11 +16,14 @@
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
+#include <stdio.h>
+
#include "thrd_priv.h"
int
thrd_create (thrd_t *thr, thrd_start_t func, void *arg)
{
+ puts("hacked");
_Static_assert (sizeof (thr) == sizeof (pthread_t),
"sizeof (thr) != sizeof (pthread_t)");
Then recompile and re-install glibc, and recompile and re-run our program:
cd glibc/build
make -j `nproc`
make -j `nproc` install
./test_glibc.sh
and we see hacked printed a few times as expected.
This further confirms that we actually used the glibc that we compiled and not the host one.
Tested on Ubuntu 18.04.
Setup 2: crosstool-NG pristine setup
This is an alternative to Setup 1, and it is the most correct setup I've achieved so far: everything is correct as far as I can observe, including the C runtime objects such as crt1.o, crti.o, and crtn.o.
In this setup, we will compile a full dedicated GCC toolchain that uses the glibc that we want.
The only downside to this method is that the build will take longer. But I wouldn't risk a production setup with anything less.
crosstool-NG is a set of scripts that downloads and compiles everything from source for us, including GCC, glibc and binutils.
Yes, the GCC build system is so bad that we need a separate project for that.
The only way this setup is not perfect is that crosstool-NG does not support building the executables without extra -Wl flags, which feels weird since we've built GCC itself. But everything seems to work, so this is only an inconvenience.
Get crosstool-NG and configure it:
git clone https://github.com/crosstool-ng/crosstool-ng
cd crosstool-ng
git checkout a6580b8e8b55345a5a342b5bd96e42c83e640ac5
export CT_PREFIX="$(pwd)/.build/install"
export PATH="/usr/lib/ccache:${PATH}"
./bootstrap
./configure --enable-local
make -j `nproc`
./ct-ng x86_64-unknown-linux-gnu
./ct-ng menuconfig
The only mandatory option that I can see is making it match your host kernel version, in order to use the correct kernel headers. Find your host kernel version with:
uname -a
which shows me:
4.15.0-34-generic
so in menuconfig I do:
Operating System
Version of linux
so I select:
4.14.71
which is the first version equal to or older than my host kernel. It has to be equal or older since the kernel is backwards compatible.
Now you can build with:
env -u LD_LIBRARY_PATH time ./ct-ng build CT_JOBS=`nproc`
and now wait for about thirty minutes to two hours for compilation.
Setup 2: optional configurations
The .config that we generated with ./ct-ng x86_64-unknown-linux-gnu has:
CT_GLIBC_V_2_27=y
To change that, in menuconfig do:
C-library
Version of glibc
save the .config, and continue with the build.
Or, if you want to use your own glibc source, e.g. to use glibc from the latest git, proceed like this:
Paths and misc options
Try features marked as EXPERIMENTAL: set to true
C-library
Source of glibc
Custom location: say yes
Custom location
Custom source location: point to a directory containing your glibc source
where glibc was cloned as:
git clone git://sourceware.org/git/glibc.git
cd glibc
git checkout glibc-2.28
Setup 2: test it out
Once you have built the toolchain that you want, test it out with:
#!/usr/bin/env bash
set -eux
install_dir="${CT_PREFIX}/x86_64-unknown-linux-gnu"
PATH="${PATH}:${install_dir}/bin" \
x86_64-unknown-linux-gnu-gcc \
-Wl,--dynamic-linker="${install_dir}/x86_64-unknown-linux-gnu/sysroot/lib/ld-linux-x86-64.so.2" \
-Wl,--rpath="${install_dir}/x86_64-unknown-linux-gnu/sysroot/lib" \
-v \
-o test_glibc.out \
test_glibc.c \
-pthread \
;
ldd test_glibc.out
./test_glibc.out
Everything seems to work as in Setup 1, except that now the correct runtime objects were used:
COLLECT_GCC_OPTIONS=/home/ciro/crosstool-ng/.build/install/x86_64-unknown-linux-gnu/bin/../x86_64-unknown-linux-gnu/sysroot/usr/lib/../lib64/crt1.o
Setup 2: failed efficient glibc recompilation attempt
It does not seem possible with crosstool-NG, as explained below.
If you just re-build:
env -u LD_LIBRARY_PATH time ./ct-ng build CT_JOBS=`nproc`
then your changes to the custom glibc source location are taken into account, but it builds everything from scratch, making it unusable for iterative development.
If we do:
./ct-ng list-steps
it gives a nice overview of the build steps:
Available build steps, in order:
- companion_tools_for_build
- companion_libs_for_build
- binutils_for_build
- companion_tools_for_host
- companion_libs_for_host
- binutils_for_host
- cc_core_pass_1
- kernel_headers
- libc_start_files
- cc_core_pass_2
- libc
- cc_for_build
- cc_for_host
- libc_post_cc
- companion_libs_for_target
- binutils_for_target
- debug
- test_suite
- finish
Use "<step>" as action to execute only that step.
Use "+<step>" as action to execute up to that step.
Use "<step>+" as action to execute from that step onward.
Therefore, we see that there are glibc steps intertwined with several GCC steps, most notably libc_start_files comes before cc_core_pass_2, which is likely the most expensive step together with cc_core_pass_1.
In order to build just one step, you must first set the "Save intermediate steps" .config option for the initial build:
Paths and misc options
Debug crosstool-NG
Save intermediate steps
and then you can try:
env -u LD_LIBRARY_PATH time ./ct-ng libc+ -j`nproc`
but unfortunately, the + is required, as mentioned at: https://github.com/crosstool-ng/crosstool-ng/issues/1033#issuecomment-424877536
Note however that restarting at an intermediate step resets the installation directory to the state it had during that step. I.e., you will have a rebuilt libc - but no final compiler built with this libc (and hence, no compiler libraries like libstdc++ either).
and basically still makes the rebuild too slow to be feasible for development, and I don't see how to overcome this without patching crosstool-NG.
Furthermore, starting from the libc step didn't seem to copy over the source again from Custom source location, further making this method unusable.
Bonus: libstdc++
A bonus if you're also interested in the C++ standard library: How to edit and re-build the GCC libstdc++ C++ standard library source?
Link with -static. When you link with -static, the linker embeds the library inside the executable, so the executable will be bigger, but it can be executed on a system with an older version of glibc because the program will use its own copy of the library instead of that of the system.
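A quick way to see the effect (a sketch; the file name hello.c is only an example): build with -static and check with ldd that nothing is resolved at run time.

/* hello.c - hypothetical file name.
 * Build statically:  gcc -static -o hello hello.c
 * Verify:            ldd ./hello   (should report "not a dynamic executable")
 * The binary is larger, but it carries its own copy of the C library code. */
#include <stdio.h>

int main(void)
{
    printf("no shared glibc needed at run time\n");
    return 0;
}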
In my opinion, the laziest solution (especially if you don't rely on latest bleeding edge C/C++ features, or latest compiler features) wasn't mentioned yet, so here it is:
Just build on the system with the oldest GLIBC you still want to support.
This is actually pretty easy to do nowadays with technologies like chroot, KVM/Virtualbox, or Docker, even if you don't really want to use such an old distro directly on any PC. In detail, to make a maximally portable binary of your software I recommend following these steps:
Just pick your poison of sandbox/virtualization/... whatever, and use it to get yourself a virtual older Ubuntu LTS and compile with the gcc/g++ it has in there by default. That automatically limits your GLIBC to the one available in that environment.
Avoid depending on external libs outside of foundational ones: you should dynamically link ground-level system stuff like glibc, libGL, libxcb/X11/wayland things, libasound/libpulseaudio, and possibly GTK+ if you use it, but otherwise preferably link external libs statically or ship them along if you can. Especially mostly self-contained libs like image loaders, multimedia decoders, etc. cause less breakage on other distros (breakage can be caused e.g. if a lib is only present somewhere in a different major version) if you ship them statically.
With that approach you get an old-GLIBC-compatible binary without any manual symbol tweaks, without doing a fully static binary (that may break for more complex programs because glibc hates that, and which may cause licensing issues for you), and without setting up any custom toolchain, any custom glibc copy, or whatever.
An alternative is to just discard the version info and let the linker default to whatever version it has.
To do this, you may want to check out PatchELF1:
$ nm --dynamic --undefined-only --with-symbol-versions MyLib.so \
  | grep GLIBC | sed -e 's#.\+@##' | sort --unique
GLIBC_2.17
GLIBC_2.29
$ nm --dynamic --undefined-only --with-symbol-versions MyLib.so | grep GLIBC_2.29
U exp@GLIBC_2.29
U log@GLIBC_2.29
U log2@GLIBC_2.29
U pow@GLIBC_2.29
$ patchelf --clear-symbol-version exp \
--clear-symbol-version log \
--clear-symbol-version log2 \
--clear-symbol-version pow MyLib.so
This is extremely useful if you don't have the source at hand (or have a hard time understanding the code2).
1 Although patchelf is available on many distributions, they are very likely to be outdated.
The --clear-symbol-version flag was added in version 0.12, which, unfortunately, does not fully remove version requirements.
You will need to compile manually from this merge request before it gets merged.
2 FYI, I was cross-compiling LuaJIT.
This repo:
https://github.com/wheybags/glibc_version_header
provides a header file that takes care of the details described in the accepted answer.
Basically:
Download the header corresponding to the glibc version you want to link against
Add -include /path/to/header.h to your compiler flags
You may also need to add -D_REENTRANT if you're linking pthread
I also add the linker flags:
-static-libgcc -static-libstdc++ -pthread
But those are dependent on your app's requirements.
