I'm using Rust, bindgen, and a build script to work on some FFI bindings to a library.
This library is built using OpenMP, so when linking against it, I'd normally pass the -fopenmp flag to the compiler.
How can I get this flag to be set by build.rs when the library is built by Cargo?
Currently, building using Cargo fails, with the failing command being something like:
cc -Wl,--as-needed -Wl,-z,noexecstack -m64 -l gomp -l stdc++
...skipping dozens of paths/files...
-Wl,-Bdynamic -l dl -l rt -l pthread -l gcc_s -l c -l m -l rt -l pthread -l util
which fails with hundreds of undefined reference to 'GOMP_parallel_end' errors.
Rerunning the generated command above with the -fopenmp flag manually added succeeds.
I can specify the flag using RUSTFLAGS='-C link-args=-fopenmp' before compiling, but is there a way of specifying it from within build.rs?
This feature has been added to Cargo and was stabilized in Cargo 1.56. The accepted answer is now out-of-date.
Linker arguments can be specified in build.rs like so:
// Pass `-fopenmp` to the linker.
println!("cargo:rustc-link-arg=-fopenmp");
You could not, at the time this answer was written. See the sibling answer from ecstaticm0rse for an updated answer.
Before Cargo 1.56, you can use a Cargo configuration file instead.
.cargo/config
[build]
rustflags = ["-C", "link-args=-fsome-artisanal-option"]
Execution
$ cargo build --verbose
Compiling example v0.1.0 (file:///private/tmp/example)
Running `rustc ...blah blah blah... -C link-args=-fsome-artisanal-option`
error: linking with `cc` failed: exit code: 1
|
= note: "cc" "-m64" ...blah blah blah... "-fsome-artisanal-option"
= note: clang: error: unknown argument: '-fsome-artisanal-option'
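The deliberately bogus option above just demonstrates that the flag reaches the linker; for the question's actual case, the same entry would carry the real flag:
[build]
rustflags = ["-C", "link-args=-fopenmp"]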
See also:
How to get the linker to produce a map file using Cargo
How can I globally configure a Cargo profile option?
Is it possible to specify `panic = "abort"` for a specific target?
I'm using a library that wraps the LLVM-C API (inkwell), so I need to link my Rust binary against the LLVM library. If I export the following rustflags:
export RUSTFLAGS="-lLLVM-12 -lm -ldl -lc -lpthread -lutil -lgcc_s -C link-args=-L/usr/lib/llvm/12/lib64"
Then compilation runs fine.
However, if I instead insert these lines into my project's Cargo.toml file:
[build]
rustflags = ["-lLLVM-12", "-lm", "-ldl", "-lc", "-lpthread", "-lutil", "-lgcc_s", "-C", "link-args=-L/usr/lib/llvm/12/lib64"]
Then I get linking errors against LLVM-C functions.
Why does this work with an environment variable but not in my cargo config file? Am I misconfiguring cargo in some way?
According to the Cargo documentation, the rustflags property belongs in .cargo/config.toml, not Cargo.toml.
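In other words, moving the same table (flags copied verbatim from the question) into a .cargo/config.toml file in the project directory should make Cargo pick it up:
# .cargo/config.toml — not Cargo.toml
[build]
rustflags = ["-lLLVM-12", "-lm", "-ldl", "-lc", "-lpthread", "-lutil", "-lgcc_s", "-C", "link-args=-L/usr/lib/llvm/12/lib64"]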
I am compiling my binary on a Raspberry Pi, but when I move it to another one I get the following error:
./iot-relay
./iot-relay: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
I've come to the conclusion that compiling a fully statically linked binary may help, so I started reading about this here.
Adding the target that I guess is appropriate:
rustup target add armv7-unknown-linux-musleabihf
And attempting to compile:
cargo build --release --target armv7-unknown-linux-musleabihf
It compiles most of the stuff but then:
error: failed to run custom build command for `paho-mqtt-sys v0.3.0`
Caused by:
process didn't exit successfully: `/home/pi/rust/iot-relay/target/release/build/paho-mqtt-sys-9d34dbf9179b933d/build-script-build` (exit code: 101)
--- stdout
debug:Running the bundled build for Paho C
cargo:rerun-if-changed=build.rs
running: "cmake" "/home/pi/.cargo/registry/src/github.com-1285ae84e5963aae/paho-mqtt-sys-0.3.0/paho.mqtt.c/" "-DPAHO_BUILD_SHARED=off" "-DPAHO_BUILD_STATIC=on" "-DPAHO_ENABLE_TESTING=off" "-DPAHO_WITH_SSL=on" "-DCMAKE_INSTALL_PREFIX=/home/pi/rust/iot-relay/target/armv7-unknown-linux-musleabihf/release/build/paho-mqtt-sys-964264b133c84ace/out" "-DCMAKE_C_FLAGS= -ffunction-sections -fdata-sections -fPIC -march=armv7-a" "-DCMAKE_C_COMPILER=arm-linux-musleabihf-gcc" "-DCMAKE_CXX_FLAGS= -ffunction-sections -fdata-sections -fPIC -march=armv7-a" "-DCMAKE_CXX_COMPILER=arm-linux-musleabihf-g++" "-DCMAKE_ASM_FLAGS= -ffunction-sections -fdata-sections -fPIC -march=armv7-a" "-DCMAKE_ASM_COMPILER=arm-linux-musleabihf-gcc" "-DCMAKE_BUILD_TYPE=Release"
-- The C compiler identification is unknown
-- Configuring incomplete, errors occurred!
See also "/home/pi/rust/iot-relay/target/armv7-unknown-linux-musleabihf/release/build/paho-mqtt-sys-964264b133c84ace/out/build/CMakeFiles/CMakeOutput.log".
See also "/home/pi/rust/iot-relay/target/armv7-unknown-linux-musleabihf/release/build/paho-mqtt-sys-964264b133c84ace/out/build/CMakeFiles/CMakeError.log".
--- stderr
fatal: not a git repository (or any of the parent directories): .git
CMake Error at CMakeLists.txt:21 (PROJECT):
The CMAKE_C_COMPILER:
arm-linux-musleabihf-gcc
is not a full path and was not found in the PATH.
Tell CMake where to find the compiler by setting either the environment
variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
the compiler, or to the compiler name if it is in the PATH.
thread 'main' panicked at '
command did not execute successfully, got: exit code: 1
build script failed, must exit now', /home/pi/.cargo/registry/src/github.com-1285ae84e5963aae/cmake-0.1.44/src/lib.rs:885:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: build failed
How can I fix this and get a statically linked binary that will work on my other Raspberry Pi? Unfortunately, the one that can't run the binary is a custom device with software that prevents me from doing system updates, so I am hoping to pack everything required into the binary on the one over which I do have control.
This is Cargo following the build instructions for a C dependency and not finding a gcc that supports the specified target. You may need to find the correct location and name of the gcc targeting your environment (CC=). If you don't have a cross-compiler for that target and musl libc combination, you may have to build one. See the link below for info on how to do that.
https://wiki.musl-libc.org/getting-started.html
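Once a musl cross toolchain is installed, something along these lines should let the build script find it (the install path below is an assumption; substitute wherever your toolchain actually lives):
# Hypothetical toolchain location; adjust to your install
export PATH=/opt/cross/arm-linux-musleabihf/bin:$PATH
# The CMake error above says it honors CC; CXX is its C++ counterpart
export CC=arm-linux-musleabihf-gcc
export CXX=arm-linux-musleabihf-g++
cargo build --release --target armv7-unknown-linux-musleabihf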
I'm trying to use MinGW-W64 instead of MinGW in CodeLite. When I compile a simple "hello, world" project, everything is fine. But when I try to link some libraries, I get a strange linker error. A project with exactly the same settings compiles with MinGW without any problems. Here is the build output for both variants:
MinGW GCC 4.8.1
C:\Windows\system32\cmd.exe /C D:/apps/mingw/bin/mingw32-make.exe -j8 SHELL=cmd.exe -e -f Makefile
"----------Building project:[ code - Debug ]----------"
mingw32-make.exe[1]: Entering directory 'D:/Projects/codelite/code'
codelite-cc D:/apps/mingw/bin/g++.exe -c "D:/Projects/codelite/code/src/main.cpp" -Wfatal-errors -g -O0 -pedantic -W -std=c++11 -Wall -o ./Debug/src_main.cpp.o -I./inc/
D:/apps/mingw/bin/g++.exe -o bin/code #"code.txt" -L./lib/ -lopengl32
mingw32-make.exe[1]: Leaving directory 'D:/Projects/codelite/code'
====0 errors, 0 warnings====
MinGW-W64 GCC 5.2.0
C:\Windows\system32\cmd.exe /C D:/apps/mingw-w64/mingw32/bin/mingw32-make.exe -j8 SHELL=cmd.exe -e -f Makefile
"----------Building project:[ code - Debug ]----------"
mingw32-make.exe[1]: Entering directory 'D:/Projects/codelite/code'
codelite-cc D:/apps/mingw-w64/mingw32/bin/g++.exe -c "D:/Projects/codelite/code/src/main.cpp" -Wfatal-errors -g -O0 -pedantic -W -std=c++11 -Wall -o ./Debug/src_main.cpp.o -I./inc/
D:/apps/mingw-w64/mingw32/bin/g++.exe -o bin/code #"code.txt" -L./lib/ -lopengl32
g++.exe: error: #code.txt -L./lib/: No such file or directory
mingw32-make.exe[1]: *** [bin/code] Error 1
code.mk:78: recipe for target 'bin/code' failed
mingw32-make.exe[1]: Leaving directory 'D:/Projects/codelite/code'
mingw32-make.exe: *** [All] Error 2
Makefile:4: recipe for target 'All' failed
====1 errors, 0 warnings====
This looks like a bug in your toolchain and not in CodeLite.
There is a space between #"code.txt" and -L./lib/, and for some reason g++ does not see it...
I put my money on the mingw32-make tool. You can tell CodeLite to use the mingw32-make.exe from the 4.8.1 version (which worked): Settings->Build Settings->Compilers->[YOUR COMPILER NAME]->Make
Another option is to disable the setting that tells CodeLite to generate a Makefile that passes the object list to the linker via a file:
Settings->Build Settings->Compilers->[YOUR COMPILER NAME]->Advanced tab, and uncheck the option "pass object list to the linker via file".
Lately I too ran into a similar problem, and I was later able to figure out the issue. Go to the project settings and, under "change makefile generator", switch from the default to the CodeLite makefile generator; I think that will work.
I have some trouble cross-compiling a C++ program that uses the OpenMP library. I am using Ubuntu 12.04 LTS. I want to obtain an executable file runnable on Windows.
I have no problem compiling my program with OpenMP using a regular g++ command:
g++ a.cpp b.cpp -o OMPres -pg -O3 -I./CBLAS/include -L./ -lcblas
Also, when I try cross-compilation without OpenMP, everything runs perfectly fine:
x86_64-w64-mingw32-g++ a.cpp b.cpp -O3 -I./CBLAS/include ./CBLAS/cblas_WIN64.a ./BLAS/blas_WIN64.a -o res.exe -l gfortran -static
But when I try to cross-compile it with OpenMP using the following command:
x86_64-w64-mingw32-g++ a.cpp b.cpp -O3 -I./CBLAS/include ./CBLAS/cblas_WIN64.a ./BLAS/blas_WIN64.a -o OMPres.exe -l gfortran -static -fopenmp
I get this error:
a.cpp:41:17: fatal error: omp.h: No such file or directory
compilation terminated.
I found where omp.h is located on my disk and added its path to the command. After executing it:
x86_64-w64-mingw32-g++ a.cpp b.cpp -O3 -I./CBLAS/include -I/usr/lib/gcc/x86_64-linux-gnu/4.6/include ./CBLAS/cblas_WIN64.a ./BLAS/blas_WIN64.a -o OMPres.exe -l gfortran -static -fopenmp
I got another error: x86_64-w64-mingw32-g++: error: libgomp.spec: No such file or directory
As I also have this file on my disk, I tried copying it to various places, and it finally worked when I copied it directly into the directory where compilation takes place. Then it produced another error:
/usr/bin/x86_64-w64-mingw32-ld: cannot find -lgomp
/usr/bin/x86_64-w64-mingw32-ld: cannot find -lrt
collect2: ld returned 1 exit status
I don't have a good understanding of how exactly compilers work. I tried updating all the mingw-w64 compilers I could find with apt-cache search, but nothing helped. I have no idea what more I can do :(.
First, @nmaier is completely correct in that the Ubuntu x86_64-w64-mingw32 toolchain is crippled, and that you can rebuild the toolchain yourself.
I, however, suggest that you use MXE, which saves you the time of manually compiling gcc and every dependency of it. The steps below should be enough for your purpose:
# Get MXE
git clone https://github.com/mxe/mxe.git && cd mxe
# Settings
cat <<EOF > settings.mk
MXE_TARGETS := x86_64-w64-mingw32.static
JOBS := 4
EOF
# Build gcc, libgomp, blas, and cblas. It will take a while
make -j2 libgomp cblas
# Add toolchain to PATH
# See http://htmlpreview.github.io/?https://github.com/mxe/mxe/blob/master/index.html#tutorial step 4
export PATH=`pwd`/usr/bin:$PATH
# You don't need -I./CBLAS/include ./CBLAS/cblas_WIN64.a ./BLAS/blas_WIN64.a
# because headers and libraries are installed to a standard location, and
# `-lcblas -lblas` below pulls them in.
x86_64-w64-mingw32-g++ a.cpp b.cpp -fopenmp -O3 -o res.exe -lcblas -lblas -lgfortran -lquadmath
Your x86_64-w64-mingw32 toolchain appears to have been built without libgomp.
You could check with your supplier/distribution whether there are additional or variant packages that include libgomp.
Or switch to a different supplier/distribution.
Or you could rebuild (or build in the first place) a cross gcc with --enable-libgomp. This is kinda the hard way.
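As a rough sketch (the version and prefix are placeholders, and a real build also needs binutils plus the mingw-w64 headers and CRT installed first), the gcc configure step would look something like:
# Hypothetical paths; gcc sources unpacked in ../gcc-x.y.z
mkdir build-gcc && cd build-gcc
../gcc-x.y.z/configure --target=x86_64-w64-mingw32 \
    --prefix=/opt/mingw-w64 --enable-languages=c,c++ \
    --enable-libgomp
make && make install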
PS:
Adding paths that do not correspond with your platform, like -I/usr/lib/gcc/x86_64-linux-gnu/4.6/include, is a bad idea in general, and will most certainly fail... This kinda creates a Franken-compiler.
I have an autotools project that compiles just fine on the Mac, but under Linux (Ubuntu 12.04.1 LTS) the command lines passed to gcc have the libraries out of order relative to the object files. For example, autotools generates the following command to compile my code, a single file named test.c, into a binary named test:
gcc -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -g -O2 -lglib-2.0 -o test test-test.o
This command line fails with:
/home/user/glib-test/test.c:4: undefined reference to `g_malloc`
/home/user/glib-test/test.c:5: undefined reference to `g_free`
However, if I compile from the command line and switch it up so the library reference is after the object files it works just fine:
gcc -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -g -O2 -o test test-test.o -lglib-2.0
The challenge is that I can't figure out how to force Autotools to generate the command line in the right order. For the sake of clarity, I've reproduced the simple test case here. First up is configure.ac:
dnl Process this file with autoconf to produce a configure script.
AC_PREREQ(2.59)
AC_INIT(glib-test, 1.0)
AC_CANONICAL_SYSTEM
AM_INIT_AUTOMAKE()
AC_PROG_CC
AM_PROG_CC_C_O
PKG_CHECK_MODULES(GLIB, glib-2.0 > 2.0)
AC_CONFIG_FILES(Makefile)
AC_OUTPUT
Next is the simple Makefile.am:
CFLAGS=-Wall
bin_PROGRAMS=test
test_CFLAGS=$(GLIB_CFLAGS)
test_LDFLAGS=$(GLIB_LIBS)
test_SOURCES=test.c
Finally, the source code to this minimal test case, test.c:
#include <glib.h>
int main(int argc, char **argv) {
gchar *foo = g_malloc(100);
g_free(foo);
return 0;
}
Compilation is then achieved using the following series of commands:
touch NEWS README AUTHORS ChangeLog
aclocal
autoconf
automake --add-missing
./configure
make
I should be clear: I understand why my code won't link; I'm just wondering how to get automake to put the libraries at the end of the command line so gcc will link properly. It should be noted that gcc on Mac OS X Lion doesn't seem to have this problem.
The solution turned out to be the difference between LDFLAGS and LDADD. In short, LDFLAGS is added before the object files on the command line and LDADD is added after them. Thus, changing Makefile.am to the following solved the problem:
CFLAGS=-Wall
bin_PROGRAMS=test
test_CFLAGS=$(GLIB_CFLAGS)
test_LDADD=$(GLIB_LIBS)
test_SOURCES=test.c
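With that change, the generated link line should place the library after the object files, matching the command that worked when run by hand:
gcc -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -g -O2 -o test test-test.o -lglib-2.0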
It only took tracking down a GCC developer at work to solve it. Also, the example I provided is rather poor, because test has a defined meaning in some contexts of autotools.
I solved a similar problem. The ./configure script in question was unable to complete a check for the presence of a function due to a missing symbol. If I added the correct library to $LDFLAGS or suchlike, it was placed before the .c file on the command line and the library was ignored.
The names of the functions to check are first added to ac_func_list; then a loop in the body of ./configure calls ac_fn_c_check_func () for each of them, which in turn calls ac_fn_c_try_link ().
The checking function ac_fn_c_try_link () uses a command of this pattern:
ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS \
$LDFLAGS conftest.$ac_ext $LIBS >&5'
$LDADD is completely ignored here. Therefore the only solution is to add the -l flags to the $LIBS variable.
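For example, in configure.ac (a sketch reusing glib from the earlier question; AC_SEARCH_LIBS adds the library to $LIBS as a side effect of the probe):
dnl Either append to LIBS directly...
LIBS="$LIBS -lglib-2.0"
dnl ...or let autoconf do it while checking for the symbol:
AC_SEARCH_LIBS([g_malloc], [glib-2.0])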