I have a working cross-compiler toolchain, thanks to crosstool-ng :) -- however, crosstool-ng is very sparsely documented, and I am brand new to cross-compiling. The specific host and target are not, I think, important in this context.
I have some basic questions about the directory structure. The toolchain was installed into a directory named after the target. Inside that are a set of directories:
arm-unknown-linux-gnueabi
bin
include
lib
libexec
share
I presume this is for the actual cross-compiler bits, since the compilers in bin/ do work for this purpose. Notice that there is an inner arm-unknown-linux-gnueabi/ directory, i.e. the path in there is ../arm-unknown-linux-gnueabi/arm-unknown-linux-gnueabi. Inside that there is another tree:
bin
debug-root
include
lib
lib32
lib64
sysroot
The lib* directories are symlinks into sysroot/. The stuff in bin/ seems to be the same set of cross-compile tools as in the parent directory's bin/:
> bin/gcc -v
Using built-in specs.
COLLECT_GCC=./gcc
Target: arm-unknown-linux-gnueabi
Configured with: /usr/x-tool/.build/src/gcc-4.7.2/configure
--build=x86_64-build_unknown-linux-gnu
--host=x86_64-build_unknown-linux-gnu
--target=arm-unknown-linux-gnueabi
So my first question is: what are these for? And what is this directory for?
My second question then is: how should sysroot/ be used? It's apparently for support libraries native to the target platform, so I presume that if I were building such a library I should use it as the --prefix, although that would amount to the same thing as using the parent directory, since lib* is symlinked. This "directory in the middle", with a bin and symlinks down into sysroot, is confusing. I believe (some) autotools-style packages can be configured with --with-sysroot. What is the significance of that option, if I see it, and how should it be used in relation to other options such as --prefix?
For your first question: in the toolchain install directory, look at these two paths:
bin/arm-unknown-linux-gnueabi-gcc
arm-unknown-linux-gnueabi/bin/gcc
They are the same file; in fact, they are hard links to each other.
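You can confirm this yourself (a quick sketch; /opt/x-tools/arm-unknown-linux-gnueabi stands in for wherever your toolchain was installed):
cd /opt/x-tools/arm-unknown-linux-gnueabi
# identical inode numbers and a link count greater than 1 mean the two
# paths are hard links to the same file
ls -li bin/arm-unknown-linux-gnueabi-gcc arm-unknown-linux-gnueabi/bin/gcc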
You can use arm-unknown-linux-gnueabi-gcc by setting CC=arm-unknown-linux-gnueabi-gcc, e.g.
export PATH=<toolchain installed dir>/bin:$PATH
CC=arm-unknown-linux-gnueabi-gcc ./configure
make
Or
export PATH=<toolchain installed dir>/arm-unknown-linux-gnueabi/bin:$PATH
./configure
make
I have always used the first form, and I am not sure whether the second form works.
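If you want to verify whether the second form actually picks up the cross-compiler, a quick check along these lines should tell you (a sketch, using the same placeholder path as above):
export PATH=<toolchain installed dir>/arm-unknown-linux-gnueabi/bin:$PATH
type gcc            # shows which gcc is first on $PATH
gcc -dumpmachine    # should print arm-unknown-linux-gnueabi if the cross gcc is being used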
For your second question: in my experience, you don't need to worry about the sysroot. The cross-compiler will find the correct C header files in sysroot/usr/include automatically.
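If you want to double-check which sysroot the compiler will use, gcc can print it (assuming the toolchain's bin/ directory is already on your PATH):
arm-unknown-linux-gnueabi-gcc -print-sysroot   # prints the sysroot path baked into the cross-compiler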
The exception is when you want to cross-compile some libraries and install them into the sysroot; you can do that with:
export PATH=<toolchain installed dir>/bin:$PATH
CC=arm-unknown-linux-gnueabi-gcc ./configure --prefix=<toolchain installed dir>/arm-unknown-linux-gnueabi/arm-unknown-linux-gnueabi/sysroot
make
make install
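An alternative sketch, in case the library's build system hard-codes paths derived from --prefix: configure with a normal prefix and redirect the installation into the sysroot with DESTDIR (same placeholder paths as above; --host is added so autotools knows it is cross-compiling):
export PATH=<toolchain installed dir>/bin:$PATH
CC=arm-unknown-linux-gnueabi-gcc ./configure --host=arm-unknown-linux-gnueabi --prefix=/usr
make
# DESTDIR prepends the sysroot to every install path (/usr/lib, /usr/include, ...)
make DESTDIR=<toolchain installed dir>/arm-unknown-linux-gnueabi/arm-unknown-linux-gnueabi/sysroot install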
Starting at 38:39 of the talk Anatomy of Cross-Compilation Toolchains by Thomas Petazzoni, the speaker gives an in-depth walk-through of the output directory structure.
Related
I am using Buildroot (2017.02.5) to build a custom cross-compilation toolchain. I have two Buildroot configurations: one to build the RFS and one purely to build a toolchain. I have things configured this way because I don't want the toolchain to be rebuilt unless I intentionally rebuild it; the configuration which builds the RFS references this toolchain as an external toolchain.
Generally, the built toolchain works fine, but I have some existing applications (Linux userspace) which #include's <openssl/md5.h>. When I try to compile this, I get a "<openssl/md5.h>: No such file or directory" error, which is expected because the sysroot dir of the generated toolchain does not contain an openssl directory.
How can I make buildroot include openssl in the toolchain? All searches I have done seem to point to cross compiling openssl for my embedded target, which is not an issue. The issue is that I need to include it in the toolchain.
I have Target packages --> Libraries --> Crypto --> openssl set to y, but I don't think this makes any difference in this scenario since I believe it relates only to the RFS (and the defconfig in question does not build an RFS, only a toolchain).
I could compile OpenSSL outside of the buildroot tree and install it to the sysroot dir, but this doesn't seem correct as it would pollute sysroot.
I'm sure I'm missing something simple here; any help would be appreciated.
After some further reading of the Buildroot documentation (which is very good), I figured out that packages selected under Target packages do in fact get pushed into the sysroot of the toolchain (or are supposed to, at least), which makes sense. The reason this didn't appear to be working was that I was running make toolchain instead of make all (or just a plain make). The packages don't get built by the former, so they weren't in the toolchain's sysroot.
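In other words, with the toolchain-only configuration the sequence looks roughly like this (a sketch; my_toolchain_defconfig is a placeholder for the actual defconfig name):
make my_toolchain_defconfig
make toolchain   # builds only the toolchain; selected target packages such as openssl are not built
make             # builds the toolchain plus the selected target packages, so openssl ends up in the sysroot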
What I'd like to do is configure my CMakeLists file so that, while building my project, the linker links the executable against a copy of a shared library (.so) that resides in my build tree, but does not set the rpath in the linked executable, so that the system must provide the library when the loader requests it.
Specifically, I want to link against libOpenCL.so during build time on a build farm that doesn't have libOpenCL.so installed as a system library. To do this, libOpenCL.so is in the project build tree and referenced using an absolute path in the CMakeLists file. This absolute path is to ensure that if the system does happen to have libOpenCL.so installed then it is not used.
However, when running the final executable, CMake has added the absolute path to the rpath, which stops the system version of libOpenCL.so from being picked up and used by the library loader.
Seems simple but I can't quite figure it out.
Thanks!
I know this answer is super late. I faced the same requirement as yours.
Either we need a whitelist approach, where we set CMAKE_BUILD_RPATH explicitly to what we need, or a blacklist approach, where we tell CMake which RPATHs we don't want in the executable. The way to remove an RPATH from the build tree is not documented yet: https://gitlab.kitware.com/cmake/cmake/issues/16825
The solution I took is:
Set RUNPATH instead of RPATH. You can achieve this with the statement:
SET(CMAKE_EXE_LINKER_FLAGS "-Wl,--enable-new-dtags")
When RUNPATH is present, RPATH is ignored.
RUNPATH - same as RPATH, but searched after LD_LIBRARY_PATH, supported only on most recent UNIX
Then I can override the library using the environment variable LD_LIBRARY_PATH.
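To check that the flag had the intended effect, and that the override works, something like this can be used (a sketch; myexe and the library directory are placeholders):
# with --enable-new-dtags the dynamic section should contain RUNPATH rather than RPATH
readelf -d myexe | grep -E 'RPATH|RUNPATH'
# a libOpenCL.so found via LD_LIBRARY_PATH now takes precedence over the build-tree copy
LD_LIBRARY_PATH=/path/to/system/opencl ./myexe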
According to the CMake Wiki this should not be a problem:
By default if you don't change any RPATH related settings, CMake will link the executables and shared libraries with full RPATH to all used libraries in the build tree. When installing, it will clear the RPATH of these targets so they are installed with an empty RPATH.
So you might try to simply install it?
Can anybody explain to me what the whole line means?
I know this sets the macro BLAS_LIBS to another string.
But I'm not sure what the "-lblas" means, and I don't know how to use it.
It is similar to the following code with "-llapack":
export LAPACK_LIBS="-L$LAPACKHOME/lib -llapack"
How can the program find the BLAS and LAPACK libraries just from "-lblas" and "-llapack"?
Thanks in advance.
I'm not sure why you say "just by -llapack" because that's not what is happening here. Specifically, the -L option just before it specifies a directory path to add to the library resolution path. This works roughly like PATH in the shell.
For example, with the command-line fragment gcc -Lfoodir -Lbardir -lfoo -lbar, you basically instruct the linker to search the directories foodir and bardir for the library files libfoo.so or libfoo.a and libbar.so or libbar.a.
The -l option is described in GCC: Options for Linking and -L and friends in the following section GCC: Options for Directory Search.
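To make the BLAS case concrete, here is roughly what happens with those flags (a sketch; it assumes $BLASHOME points at your BLAS installation and that the package's configure script honors BLAS_LIBS, as the one in the question appears to):
export BLAS_LIBS="-L$BLASHOME/lib -lblas"
./configure
# at link time, -lblas makes the linker search for libblas.so (or libblas.a),
# first in $BLASHOME/lib and then in the default system library directories
make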
This build arrangement -- configure the build to show where the required files are before compiling -- is common for libraries, where if a user has already downloaded and compiled a required library for some other project, they don't need to rebuild it; they can just point the compiler to wherever they already have the stuff needed for this project.
Building your own libraries is becoming increasingly unnecessary anyway, as prepackaged binaries of most common libraries are available for most systems these days. But of course, if you are on an unusual platform, or have specialized needs which dictate recompilation with different options than any available prebuilt binary, you will still need to understand how to do this.
Is there any way to link against RedHat static libraries while building on Ubuntu and using GCC?
Copy the RedHat library and header files over to a directory, preserving the directory structure, and give GCC the --sysroot option to tell it to use that directory as the prefix when searching for libraries and headers.
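A sketch of what that can look like (the paths and library name are assumptions, not from the original question):
# Red Hat headers and libraries copied into /opt/rhel-root, preserving the
# usr/include and usr/lib layout underneath it
gcc --sysroot=/opt/rhel-root -o myprog main.c -lfoo   # -lfoo stands in for the Red Hat static library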
I see two obvious solutions:
Copy /usr/lib, /lib and /usr/include from a Red Hat system into a subtree and point -I and -L to this subtree.
Install a minimal RedHat into a chroot and compile there.
The first solution is the easiest, but you might run into libc version issues. The second solution is guaranteed to work, but it is not far from running a complete RedHat system just for compilation.
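For the first solution, the compile line would look something like this (a sketch; /opt/rhel-root is an assumed location for the copied subtree and -lfoo a placeholder library):
gcc -I/opt/rhel-root/usr/include \
    -L/opt/rhel-root/usr/lib -L/opt/rhel-root/lib \
    -o myprog main.c -lfoo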
I have a certain shared object library in a special directory, and I:
make sure the special directory is in $LD_LIBRARY_PATH
make sure this directory has read and execute permissions for all
make sure the appropriate library directory is in ld.so.conf and that root has run ldconfig
(verified by checking for the library with ldconfig -p as a normal user)
make sure it has no soname problems (i.e. create a few symlinks if necessary)
Now, say I compile a program that needs that special library (a program packaged in the typical open-source manner, with ./configure && make, etc.), and it says -lspeciallibrary cannot be found, an error that a failure of any of the above checks would probably also produce.
A workaround I have used is to symlink the library into /usr/local/lib64, and suddenly the library is found. Also, when compiling a relatively simple package, I have manually added -L/path/to/spec/lib and that has also worked. But I regard those two methods as hacks, so I was looking for any clues as to why my list of checks isn't good enough to find the library.
(I find $LD_LIBRARY_PATH to be of particularly shallow use. In fact, I can exclude certain libraries from it and they will still be found during compilation.)
$LD_LIBRARY_PATH and ldconfig are only used to locate libraries when running programs that need them, i.e. they are used by the loader, not the compiler. Your program depends on libspeciallibrary.so. When running your program, $LD_LIBRARY_PATH and ldconfig are consulted to find libspeciallibrary.so.
These methods are not used by your compiler to find libraries. For your compiler, the -L option is the right way to go. Since your package uses the autotools, you should set the $LDFLAGS environment variable:
LDFLAGS=-L/path/to/lib ./configure && make
This is also documented in the configure help:
./configure --help
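If the package also needs headers from the same non-standard location, CPPFLAGS can be passed alongside LDFLAGS in the same way (a sketch with placeholder paths):
CPPFLAGS=-I/path/to/include LDFLAGS=-L/path/to/lib ./configure
make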