Buildroot: install and build the toolchain only

I want to install and build just the toolchain for my Buildroot project. make help suggests that the command make <options> toolchain should work; however, running that command simply prints "Nothing to be done for 'toolchain'." and output/host is never created.

You first have to configure Buildroot to tell it what toolchain you want it to produce. See Buildroot quick start in the Buildroot user manual.
If you just downloaded Buildroot, the steps to produce a toolchain are:
run make menuconfig
In Target options select your hardware platform and ABI
In Toolchain configure the kind of toolchain you want
exit, saving your configuration
run make toolchain
The toolchain is in output/host/.
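As a concrete sketch of the whole sequence (the cross-compiler name below is hypothetical; it depends on the Target options you selected):
make menuconfig       # select Target options and Toolchain, exit saving
make toolchain
output/host/bin/arm-linux-gcc --version   # name varies with your configuration; older releases place it in output/host/usr/bin/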

A more recent way to build just the toolchain, which can then be used both within and outside of Buildroot, is documented in the Buildroot manual.
Though make toolchain in Luca's answer does build the toolchain, it also places other host dependencies into output/host/, making it slightly more difficult to get a clean toolchain as compared to make sdk below, which produces a toolchain tarball in output/images/:
6.1.3. Build an external toolchain with Buildroot
The Buildroot internal toolchain option can be used to create an external toolchain. Here is a series of steps to build an internal toolchain and package it up for reuse by Buildroot itself (or other projects).
Create a new Buildroot configuration, with the following details:
Select the appropriate Target options for your target CPU architecture
In the Toolchain menu, keep the default of Buildroot toolchain for Toolchain type, and configure your toolchain as desired
In the System configuration menu, select None as the Init system and none as /bin/sh
In the Target packages menu, disable BusyBox
In the Filesystem images menu, disable the tar the root filesystem option
Then, we can trigger the build, and also ask Buildroot to generate an SDK. This will conveniently generate for us a tarball which contains our toolchain:
make sdk
This produces the SDK tarball in $(O)/images, with a name similar to arm-buildroot-linux-uclibcgnueabi_sdk-buildroot.tar.gz. Save this tarball, as it is now the toolchain that you can re-use as an external toolchain in other Buildroot projects.
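As a sketch of how the tarball can then be reused (the relocate-sdk.sh step assumes a recent Buildroot, which ships the script inside the SDK; the paths are illustrative):
tar xf output/images/arm-buildroot-linux-uclibcgnueabi_sdk-buildroot.tar.gz -C /opt
cd /opt/arm-buildroot-linux-uclibcgnueabi_sdk-buildroot
./relocate-sdk.sh    # fixes the paths hard-coded inside the SDK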

Related

Build and bind against older libc version

I have dependencies in my code that require libc. When building (cargo build --release) on Ubuntu 20.04 (glibc 2.31), the resulting executable doesn't run on CentOS 7 (glibc 2.17); it throws an error saying it requires GLIBC 2.18.
When building the same code on CentOS 7, the resulting executable runs on both CentOS 7 and Ubuntu 20.04.
Is there a way to control which GLIBC version the executable requires, so that a build made on Ubuntu 20.04 also runs on CentOS 7?
If your project does not depend on any native libraries, then probably the easiest way would be to use the x86_64-unknown-linux-musl target.
This target statically links against musl libc rather than dynamically linking against the system's libc. As a result, it produces completely static binaries, which should run on a wide range of systems.
To install this target:
rustup target add x86_64-unknown-linux-musl
To build your project using this target:
cargo build --target x86_64-unknown-linux-musl
See the edition guide for more details.
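To sanity-check that the result really is static (myapp is a placeholder for your binary name):
file target/x86_64-unknown-linux-musl/release/myapp   # should report a statically linked executable
ldd target/x86_64-unknown-linux-musl/release/myapp    # should report "not a dynamic executable"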
If you are using any non-Rust libraries, it becomes more difficult, because they may be dynamically linked and may in turn depend on the system libc. In that case, you would either need to statically link the external libraries (assuming that is even possible, and that the libraries you are using work with musl libc), or make different builds for each platform you want to target.
If you end up having to make different builds for each platform, a Docker container would be the easiest way to achieve that.
Try cross.
Install it globally:
cargo install cross
Then build your project with it:
cross build --target x86_64-unknown-linux-gnu --release
cross takes the same arguments as cargo, but you have to specify a target explicitly. Also, the build directory is always target/{TARGET}/(debug|release), not target/(debug|release).
cross uses Docker images prebuilt for different target architectures, but nothing stops you from "cross-compiling" against the host architecture. The glibc version in these images should be conservative enough; if it isn't, you can always configure cross to use a custom image, as shown below.
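A minimal sketch of that last option, assuming a Cross.toml in the project root (the image name is a placeholder):
cat > Cross.toml <<'EOF'
[target.x86_64-unknown-linux-gnu]
image = "my-registry/my-older-glibc-image:latest"
EOF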
In general, you need to build binaries for a given OS on that OS, or at the very least build on the oldest OS you intend to support.
glibc uses symbol versioning to preserve the behavior of older programs while adding support for new functionality. For example, a newer version of pthread_mutex_lock may support lock elision, while the old one would not. You're seeing this error because when you link against libc, you link against the default version of the symbol if a version isn't explicitly specified, and in at least one case, the version you linked against is from glibc 2.18. Changing this would require recompiling libstd (and the libc crate, if you're using it) with custom changes to pick the old versioned symbols, which is a lot of work for little gain.
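You can inspect which versioned symbols a binary actually pulls in with objdump (myapp is a placeholder for your binary name):
objdump -T target/release/myapp | grep GLIBC_2.18   # lists the symbols forcing the glibc 2.18 requirement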
If your only dependency is glibc, then it might be sufficient to just compile on CentOS 7. However, if you depend on other libraries, like OpenSSL, then those just aren't compatible across OS versions because their SONAMEs differ, and there's no way around that. So that's why generally you want to build different binaries per OS.

Buildroot toolchain with openssl

I am using Buildroot (2017.02.5) to build a custom cross-compilation toolchain. I have two Buildroot configurations: one to build the RFS and one purely to build a toolchain. I have things configured this way because I don't want the toolchain to be rebuilt unless I intentionally rebuild it; the configuration which builds the RFS references this toolchain as an external toolchain.
Generally, the built toolchain works fine, but I have some existing applications (Linux userspace) which #include <openssl/md5.h>. When I try to compile these, I get a "<openssl/md5.h>: No such file or directory" error, which is expected because the sysroot dir of the generated toolchain does not contain an openssl directory.
How can I make buildroot include openssl in the toolchain? All searches I have done seem to point to cross compiling openssl for my embedded target, which is not an issue. The issue is that I need to include it in the toolchain.
I have Target packages --> Libraries --> Crypto --> openssl set to y, but I don't think this makes any difference in this scenario since I believe it relates only to the RFS (and the defconfig in question does not build an RFS, only a toolchain).
I could compile OpenSSL outside of the buildroot tree and install it to the sysroot dir, but this doesn't seem correct as it would pollute sysroot.
I'm sure I'm missing something simple here- any help would be appreciated.
After some further reading of the Buildroot documentation (which is very good), I found that packages selected under Target packages do in fact get pushed into the sysroot of the toolchain (or are supposed to, at least), which makes sense. The reason this didn't appear to be working was that I was doing a make toolchain as opposed to make all (or just a simple make). The packages don't get built with the former, so they weren't in the sysroot of the toolchain.
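A quick way to verify this after a full build (output/staging is a symlink Buildroot creates to the toolchain's sysroot):
make    # full build; 'make toolchain' alone does not build target packages
ls output/staging/usr/include/openssl/md5.h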

Android NDK - Where to put the .so file which specified in LOCAL_SHARED_LIBRARIES?

I have successfully built libcurl for Android as a shared library, for both armeabi-v7a and x86, and one of my projects depends on it. I have set "LOCAL_SHARED_LIBRARIES := libcurl"; the problem is, where should I put those libcurl.so files?
I tried putting them under (project)/jni/lib/(platform)/libcurl.so, and ndk-build gives me a whole load of linking errors. (project)/lib/(platform)/libcurl.so will not work either, because ndk-build clears this directory before building.
So I tried again, building one platform at a time, but I still have no idea where to put it; jni/libcurl.so does not work.
Simple, follow this. Download the curl source from http://curl.haxx.se/ and then:
- prepare the toolchain of the Android NDK for standalone use; this can be done by invoking the script:
./build/tools/make-standalone-toolchain.sh
which creates a usual cross-compile toolchain. Let's assume that you put this toolchain below /opt; then invoke configure with something like:
export PATH=/opt/arm-linux-androideabi-4.4.3/bin:$PATH
./configure --host=arm-linux-androideabi [more configure options]
make
Done.
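As for where to put the resulting libcurl.so files: one common approach, not covered above and with assumed paths, is to declare the library as a prebuilt module in Android.mk so that LOCAL_SHARED_LIBRARIES := libcurl resolves to it:
# hypothetical Android.mk fragment; adjust the paths to your project layout
include $(CLEAR_VARS)
LOCAL_MODULE := libcurl
LOCAL_SRC_FILES := prebuilt/$(TARGET_ARCH_ABI)/libcurl.so
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/curl/include
include $(PREBUILT_SHARED_LIBRARY)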

How to generate kernel headers of a toolchain for ARM Integrator Target Machine

I'm trying to build a toolchain from scratch for an ARM Integrator target machine. I started by building binutils, and that went fine.
Now I have to generate kernel headers and I don't know how to do this in the right way.
Any help will be useful.
I searched a lot for this while working out how to cross-compile gcc.
This example involves the source of linux-3.9.
#cd to the top directory of the kernel source
cd linux-3.9
make mrproper
make ARCH=arm integrator_defconfig
make ARCH=arm headers_check
make ARCH=arm INSTALL_HDR_PATH=$SOMEWHERE headers_install
variable $SOMEWHERE is where you want it extracted.
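For example (the destination directory is arbitrary):
make ARCH=arm INSTALL_HDR_PATH=$HOME/integrator-headers headers_install
ls $HOME/integrator-headers/include    # linux/, asm/, asm-generic/, ...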
What if you want something other than integrator? How do you find out? Assuming you are still at the top directory of the kernel's source tree, you can list the other _defconfig files you could use:
ls arch/arm/configs/
Idem for other architectures.
Note: If you build a cross toolchain with newlib instead of glibc, you do not need kernel headers. Which library should you use? It depends on your needs; newlib is aimed at embedded solutions.
Sources:
http://pmc.polytechnique.fr/pagesperso/dc/arm-en.html
http://www.ifp.illinois.edu/~nakazato/tips/xgcc.html
http://www.gentoo.org/proj/en/base/embedded/handbook/?part=1&chap=2

How do I configure Qt for cross-compilation from Linux to Windows target?

I want to cross compile the Qt libraries (and eventually my application) for a Windows x86_64 target using a Linux x86_64 host machine. I feel like I am close, but I may have a fundamental misunderstanding of some parts of this process.
I began by installing all the mingw packages on my Fedora machine and then modifying the win32-g++ qmake.conf file to fit my environment. However, I seem to be getting stuck with some seemingly obvious configure options for Qt: -platform and -xplatform. Qt documentation says that -platform should be the host machine architecture (where you are compiling) and -xplatform should be the target platform for which you wish to deploy. In my case, I set -platform linux-g++-64 and -xplatform linux-win32-g++ where linux-win32-g++ is my modified win32-g++ configuration.
My problem is that, after executing configure with these options, I see that it invokes my system's compiler instead of the cross compiler (x86_64-w64-mingw32-gcc). If I omit the -xplatform option and set -platform to my target spec (linux-win32-g++), it invokes the cross compiler but then errors when it finds some Unix related functions aren't defined.
Here is some output from my latest attempt: http://pastebin.com/QCpKSNev.
Questions:
When cross-compiling something like Qt for Windows from a Linux host, should the native compiler ever be invoked? That is, during a cross compilation process, shouldn't we use only the cross compiler? I don't see why Qt's configure script tries to invoke my system's native compiler when I specify the -xplatform option.
If I'm using a mingw cross-compiler, when will I have to deal with a specs file? Spec files for GCC are still sort of a mystery to me, so I am wondering if some background here will help me.
In general, beyond specifying a cross compiler in my qmake.conf, what else might I need to consider?
Just use M cross environment (MXE). It takes the pain out of the whole process:
Get it:
$ git clone https://github.com/mxe/mxe.git
Install build dependencies
Build Qt for Windows, its dependencies, and the cross-build tools; this will take about an hour on a fast machine with decent internet access; the download is about 500MB:
$ cd mxe && make qt
Go to the directory of your app and add the cross-build tools to the PATH environment variable:
$ export PATH=<mxe root>/usr/bin:$PATH
Run the Qt Makefile generator tool then build:
$ <mxe root>/usr/i686-pc-mingw32/qt/bin/qmake && make
You should find the binary in the ./release directory:
$ wine release/foo.exe
Some notes:
Use the master branch of the MXE repository; it appears to get a lot more love from the development team.
The output is a 32-bit static binary, which will work well on 64-bit Windows.
(This is an update of #Tshepang's answer, as MXE has evolved since it was written.)
Building Qt
Rather than using make qt to build Qt, you can use MXE_TARGETS to control your target machine and toolchain (32- or 64-bit). MXE started using .static and .shared as a part of the target name to show which type of lib you want to build.
# The following is the same as `make qt`, see explanation on default settings after the code block.
make qt MXE_TARGETS=i686-w64-mingw32.static # MinGW-w64, 32-bit, static libs
# Other targets you can use:
make qt MXE_TARGETS=x86_64-w64-mingw32.static # MinGW-w64, 64-bit, static libs
make qt MXE_TARGETS=i686-w64-mingw32.shared # MinGW-w64, 32-bit, shared libs
# You can even specify two targets, and they are built in one run:
# (And that's why it is MXE_TARGET**S**, not MXE_TARGET ;)
# MinGW-w64, both 32- and 64-bit, static libs
make qt MXE_TARGETS='i686-w64-mingw32.static x86_64-w64-mingw32.static'
In #Tshepang's original answer, he did not specify an MXE_TARGETS, and the default is used. At the time he wrote his answer, the default was i686-pc-mingw32, now it's i686-w64-mingw32.static. If you explicitly set MXE_TARGETS to i686-w64-mingw32, omitting .static, a warning is printed because this syntax is now deprecated. If you try to set the target to i686-pc-mingw32, it will show an error as MXE has removed support for MinGW.org (i.e. i686-pc-mingw32).
Running qmake
As we changed the MXE_TARGETS, the <mxe root>/usr/i686-pc-mingw32/qt/bin/qmake command will no longer work. Now, what you need to do is:
<mxe root>/usr/<TARGET>/qt/bin/qmake
If you didn't specify MXE_TARGETS, do this:
<mxe root>/usr/i686-w64-mingw32.static/qt/bin/qmake
Update: The new default is now i686-w64-mingw32.static
Another way to cross-compile software for Windows on Linux is to use the MinGW-w64 toolchain on Arch Linux. It is easy to use and maintain, and it provides recent versions of the compiler and many libraries. I personally find it easier than MXE, and it seems to adopt newer versions of libraries faster.
First, you will need an Arch-based machine (a virtual machine or Docker container will suffice). It does not have to be Arch Linux; derivatives will do as well. I used Manjaro Linux.
Most of the MinGW-w64 packages are not available in the official Arch repositories, but there are plenty in the AUR. The default package manager for Arch (pacman) does not support installing directly from the AUR, so you will need to install and use an AUR helper like yay or yaourt. Then installing the MinGW-w64 versions of Qt5 and the Boost libraries is as easy as:
yay -Sy mingw-w64-qt5-base mingw-w64-boost
#yaourt -Sy mingw-w64-qt5-base mingw-w64-boost #if you use yaourt
This will also install the MinGW-w64 toolchain (mingw-w64-gcc) and other dependencies.
Cross-compiling a Qt project for Windows (x64) is then as simple as:
x86_64-w64-mingw32-qmake-qt5
make
To deploy your program, you will need to copy the corresponding DLLs from /usr/x86_64-w64-mingw32/bin/. For example, you will typically need to copy /usr/x86_64-w64-mingw32/lib/qt/plugins/platforms/qwindows.dll to program.exe_dir/platforms/qwindows.dll.
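A minimal deployment sketch using the paths above (program.exe and the exact set of DLLs you need will vary):
mkdir -p deploy/platforms
cp /usr/x86_64-w64-mingw32/bin/*.dll deploy/
cp /usr/x86_64-w64-mingw32/lib/qt/plugins/platforms/qwindows.dll deploy/platforms/
cp program.exe deploy/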
To get a 32-bit version, you simply need to use i686-w64-mingw32-qmake-qt5 instead. CMake-based projects work just as easily with x86_64-w64-mingw32-cmake.
This approach worked extremely well for me and was the easiest to set up, maintain, and extend.
It also goes well with continuous integration services. There are Docker images available too.
For example, let's say I want to build the QNapi subtitle downloader GUI. I could do it in two steps:
Start the Docker container:
sudo docker run -it burningdaylight/mingw-arch:qt /bin/bash
Clone and compile QNapi
git clone --recursive 'https://github.com/QNapi/qnapi.git'
cd qnapi/
x86_64-w64-mingw32-qmake-qt5
make
That's it! In many cases, it will be that easy. Adding your own libraries to the package repository (AUR) is also straightforward: you would need to write a PKGBUILD file, which is as intuitive as it can get; see mingw-w64-rapidjson, for example.
Ok I think I've got it figured out.
Based in part on https://github.com/mxe/mxe/blob/master/src/qt.mk and https://www.videolan.org/developers/vlc/contrib/src/qt4/rules.mak
It appears that, initially, when you run configure (with -xplatform, etc.), it configures itself and then runs your host's gcc to build the local binary ./bin/qmake
./configure -xplatform win32-g++ -device-option CROSS_COMPILE=$cross_prefix_here -nomake examples ...
then you run a normal make and it builds it for MinGW
make
make install
So, to answer your questions:
Yes, the native compiler is invoked (it builds the host-side qmake).
Only if you need to use something other than msvcrt.dll (its default). Though I have never used anything else, so I don't know for certain.
https://stackoverflow.com/a/18792925/32453 lists some configure params.
In order to compile Qt, one must run its configure script, specifying the host platform with -platform (e.g. -platform linux-g++-64 if you're building on 64-bit Linux with the g++ compiler) and the target platform with -xplatform (e.g. -xplatform win32-g++ if you're cross-compiling to Windows).
I've also added this flag:
-device-option CROSS_COMPILE=/usr/bin/x86_64-w64-mingw32-
which specifies the prefix of the toolchain I'm using; it will get prepended to 'gcc' or 'g++' in all the makefiles that build binaries for Windows.
Finally, you might get problems while building idc, which apparently is used to add ActiveX support to Qt. You can avoid that by passing the flag -skip qtactiveqt to the configure script. I got this from this bug report: https://bugreports.qt.io/browse/QTBUG-38223
Here's the whole configure command I've used:
cd qt_source_directory
mkdir my_build
cd my_build
../configure \
-release \
-opensource \
-no-compile-examples \
-platform linux-g++-64 \
-xplatform win32-g++ \
-device-option CROSS_COMPILE=/usr/bin/x86_64-w64-mingw32- \
-skip qtactiveqt \
-v
As for your questions:
1 - Yes. The native compiler will be called in order to build some tools that are needed in the build process. Maybe things like qconfig or qmake, but I'm not entirely sure which tools, exactly.
2 - Sorry. I have no idea what specs files are in the context of compilers =/ . But as far as I know, you wouldn't have to deal with that.
3 - You can specify the cross compiler prefix in the configure command line instead of doing it in the qmake.conf file, as mentioned above. And there's also that problem with idc, whose workaround I've mentioned as well.
