Combining existing rootfs with custom toolchain - linux

I've got a Raspberry Pi with Emdebian installed on it, and want to cross-compile projects for it.
There is plenty of documentation on how to obtain a toolchain and build a simple project with it. I myself managed to build a toolchain with crosstool-ng and wrote a hello world program which works fine.
What I don't get is how to handle cross-compiling more complex projects like Qt, which have dependencies on other libraries. Let's use libdbus as an example, as that is one of Qt's dependencies.
The installed Emdebian already contains libdbus.so, so naturally I'd prefer to use that, instead of cross-compiling my own libdbus.so, as compiling all of Qt's dependencies would take a lot of time.
For cross-compiling, there are two important directories, as far as I understand:
The "staging" directory, where all the installed libraries and applications live. This initially is a copy of the toolchain's sysroot directory, and gets populated with more libraries as they are cross-compiled.
The "rootfs" directory, which is equivalent to what is on the device - essentially a copy of the staging directory without unneeded stuff like documentation and header files. As far as I understand it, the best approach is to copy required files from the staging directory into the rootfs.
Getting the rootfs directory is easy, as that can be an NFS mount from the device. But how do I get a staging directory for the existing Emdebian installation on the Pi? The staging directory needs to include things like the dbus headers, which are not installed on the rootfs.
Some people simply install the dbus headers on the device, with apt-get install libdbus-dev, and then use the rootfs as the staging directory. With this setup, there is no distinction between rootfs and staging anymore, with the disadvantage that the rootfs is polluted with headers, documentation and so on. The advantage of course is that it is easy.
What is the best way to get the dbus headers into my staging directory on my host machine? What is the usual approach people use in this situation?
As a side question, why does the approach of obtaining a toolchain, compiling a program and then copying it to the target work at all? The toolchain ships its own versions of libc, libstdc++ etc.; are they not incompatible with the versions installed on the target? Especially when using a custom toolchain built with crosstool-ng?
(Note that I am not asking how to compile Qt, I can figure that out myself. My question is more general, about the approach to take when combining a custom toolchain with an existing installation/rootfs)

In my experience, you don't need to cross-compile dbus yourself. You can do it as follows:
1. Create a Debian cross rootfs with debootstrap, as described at https://wiki.debian.org/EmDebian/CrossDebootstrap (a sketch of this step follows below the list).
2. Create your cross-compile toolchain with crosstool-ng, and make sure its kernel and eglibc versions match those of the rootfs created in step 1.
3. Build Qt with
CPPFLAGS="-I<rootfs>/usr/include" \
LDFLAGS="-L<rootfs>/lib -L<rootfs>/usr/lib -Wl,-rpath-link,<rootfs>/lib -Wl,-rpath-link,<rootfs>/usr/lib" \
./configure <your options>
make
4. Install Qt into the staging directory with
make install DESTDIR=<stage directory>
5. Copy the libraries Qt depends on from the rootfs into your staging directory.
This way the staging directory is kept minimal, without any pollution.
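For step 1, a minimal sketch of the cross-debootstrap procedure from the linked wiki might look like this (the suite name, mirror and target directory are assumptions; pick the release that matches the Emdebian installation on your Pi):
# on the host: the first stage only downloads and unpacks the armhf packages
sudo debootstrap --arch=armhf --foreign wheezy rootfs-armhf http://deb.debian.org/debian
# copy in the QEMU user-mode emulator so ARM binaries can run in the chroot
sudo cp /usr/bin/qemu-arm-static rootfs-armhf/usr/bin/
# run the second stage inside the new rootfs to finish the installation
sudo chroot rootfs-armhf /debootstrap/debootstrap --second-stage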

Related

Basic set of libraries in chroot (fedora linux)

I'm running an old version of my linux distro (fedora, but this is not very relevant) and for reasons which are completely irrelevant I'm not in a position to update it. However I do need a newer version of gcc and some other libraries than those supplied by my old distro.
I could compile a newer gcc and all the other libraries of course but I thought the simplest way would be to install a minimal set of packages from the latest distro version to a directory and then just chroot there. This way I'd take advantage of the binary packages present in the newest distro and all the infrastructure around it (like dependency installation, etc.) and I wouldn't need to compile everything from source.
My question is this: if I only would like to be able to compile with the most recent gcc and run those programs, what is the minimal set of packages I need? Since we are talking about fedora, what is the minimal set of rpms (beyond glibc and gcc)? Note that I don't need any X environment, networking, or anything like that, only the most basic terminal tools.
The minimal set varies depending on your needs and what you're linking against. What I do when making a chroot environment is have a look at the distro I want to chroot into and see if it has a base rpm/deb package that kickstarts everything. Then I install that in the chroot. From there I add libraries and applications as needed.
For an example where I create a chroot for RHEL on Arch see http://www.zenskg.net/wordpress/?p=267
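As a rough sketch of that approach on Fedora (the release number, target directory and package list are assumptions; older releases use yum instead of dnf):
# install the base group plus a toolchain into a fresh directory
sudo dnf --installroot=/srv/chroot-f39 --releasever=39 install @core gcc gcc-c++ glibc-devel make
# then work inside it
sudo chroot /srv/chroot-f39 /bin/bash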

Patching and compiling kernel, in which directory

I'm trying to apply a patch to my kernel source with limited success. The target machine is really some ARM device, but I haven't compiled a kernel before so I thought I'd start with an x86_64 kernel. This has been only marginally easier :)
Now, according to some tutorials, it seemed like we should use the source in /usr/src/linux-something. But when I tried to patch there I got
File Documentation/sysrq.txt is read-only; trying to patch anyway
patch: **** Can't create temporary file Documentation/sysrq.txt.oG1oiZW : Permission denied
even under sudo. So I tried just copying the patch and the Linux source folder to my home directory and patching it from there. This worked. Why is this, and will it have any weird side effects when compiling?
It seems you don't have write permission for /usr/src/linux-something. Download the kernel source, put it anywhere you like, then patch and compile it there.
Building an x86_64 kernel from source downloaded from kernel.org works fine; if you want to build an ARM kernel for a specific board, using Buildroot or OpenWrt is the better option.
The package manager for some distributions installs the kernel source in /usr/src and distribution-specific build scripts may assume that the source is in that directory.
However, if you download vanilla kernel source from kernel.org, you should be able to build it anywhere.
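A minimal sketch of that workflow, with the kernel version and patch file name as placeholders (and assuming your distribution keeps its kernel config in /boot):
# unpack the vanilla source somewhere writable, e.g. your home directory
cd ~
tar xf linux-4.9.tar.xz
cd linux-4.9
# apply the patch relative to the source tree root
patch -p1 < ../my-fix.patch
# start from the running kernel's configuration, then build
cp /boot/config-$(uname -r) .config
make olddefconfig
make -j$(nproc)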

Clone compiled cmake build to identical hardware

I have successfully compiled OpenCV 3.1 on a Raspberry Pi. Developing with the library works perfectly fine. Now I want to set up another, identical Raspberry Pi with OpenCV, and to save compilation time my idea was to copy the binaries to the second one.
So I copied the OpenCV directories, including the build folder, and tried to run sudo make install. Instead of using the already compiled files, compilation using cmake starts over again.
How can I convince the second Raspberry Pi's build environment that there is no need to recompile everything? On my original Raspberry Pi, I can run sudo make install on exactly the same files without recompilation. The installed dev libraries are the same on both systems. Is this a cmake, make or OpenCV specific problem?
I also tried to copy all .so and .h files from /usr/... directories, but then I run into further problems when other cmake projects try to locate the opencv package.
A build directory is not intended to be copied to another place or to another machine.
To deliver a program to another machine you should use the installed files or, more generally, a package.
CMake ships with CPack, which can build the program from source and create a package containing all of its deliverables.
You can create a .deb package on the first Raspberry Pi:
cd <build directory>
cpack -G DEB
and install it on the second machine using dpkg.
There are also "archive" generators like TGZ and ZIP. The full list of CPack generators is described in the wiki.
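A rough sketch of the whole round trip, assuming the OpenCV build tree already has CPack configured; the package file name and host name are assumptions:
# on the first Pi, inside the existing build directory
cd ~/opencv/build
cpack -G DEB
# copy the resulting package to the second Pi and install it there
scp OpenCV-*.deb pi@second-pi:
ssh pi@second-pi "sudo dpkg -i OpenCV-*.deb"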

How can I obtain a newer GCC? I don't have root, and can't compile it (memory error)

I have a shared account on a machine that is running an older version of GCC. I do not have root. When I try to compile GCC, my build process gets killed due to memory usage from the following command:
build/genattrtab ../../../work/gcc-6.1.0/gcc/common.md ../../../work/gcc-6.1.0/gcc/config/i386/i386.md insn-conditions.md \
-Atmp-attrtab.c -Dtmp-dfatab.c -Ltmp-latencytab.c
I'd really like to be able to compile some software on this machine that requires a newer GCC. Any suggestions are appreciated.
You can manually unpack one of the GCC packages from any major distribution; try to use the package that most closely matches your distribution. These installable packages are just archives with some metadata and an install script, so you can unpack them and extract the binaries you need. Just keep in mind that you might need more than just the gcc package: some distributions split their devtools into lots of small packages (gcc, g++, binutils, gdb).
Another good source is the pre-built GCC toolchains shipped by embedded vendors; sometimes these include a host version of GCC alongside the cross-compiler. The Android NDK is one such distribution.
Finally, you can compile GCC on another, less restrictive machine and copy the resulting binaries to the restricted machine. As with the first approach of unpacking a package, try to find a machine that resembles the restricted one as closely as possible. You can use tools like Vagrant and Docker to set up a close replica of your target machine; both offer plenty of pre-built templates you can use as a starting point for the machine you need.
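A minimal sketch of the unpacking approach; the package file names are placeholders, and the exact paths inside the packages depend on the distribution:
# RPM-based packages: unpack into a private prefix, no root needed
mkdir -p ~/gcc-local && cd ~/gcc-local
rpm2cpio /path/to/gcc-6.1.0.x86_64.rpm | cpio -idmv
# Debian packages: same idea with dpkg
dpkg -x /path/to/gcc-6_6.1.0_amd64.deb ~/gcc-local
# point your shell at the extracted toolchain
export PATH=$HOME/gcc-local/usr/bin:$PATH
export LD_LIBRARY_PATH=$HOME/gcc-local/usr/lib64:$LD_LIBRARY_PATH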

Creating custom Linux image with Yocto using TI sitara am335x devkit compiler

I want to use Yocto to build a linux dist from my own sources (not Arago sources).
I have installed the Yocto Eclipse plugin, but I can't configure the compiler toolchain.
I have the ti-sdk-am335x-evm-07.00.00.00 SDK installed, and would like to use it
to compile my own dist.
In the Yocto Project ADT preferences in eclipse, what do I specify for:
Toolchain Root Location
And
Sysroot Location?
It won't show a target architecture when I try to configure it. What folders should I set?
First, make sure that you built the toolchain, or otherwise made it available. Try this:
bitbake meta-ide-support
That will build a script that you can source in another directory to have access to the toolchain.
Did you check out the Yocto Manual? Specifically, look at section:
4.2.2.1.4.1. Configuring the Cross-Compiler Options
What I've gotten to work is this:
Toolchain Root Location: the manual says the top of the build directory, but for me it won't work unless I have it at build/tmp. In other words, the Toolchain Root Directory is the directory right above where the environment setup script got built.
Sysroot Directory: build/tmp/sysroots/
Also, try with "Standalone pre-built toolchain" selected instead of "Build system derived toolchain," as discussed here.
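For reference, a rough sketch of that flow on the command line (the build directory and the environment script's exact name are assumptions; the script name depends on your MACHINE and toolchain tuning):
# from the top of the Yocto build directory
bitbake meta-ide-support
# the generated script lands under tmp/ and exports CC, CXX, sysroot paths, etc.
source tmp/environment-setup-cortexa8hf-vfp-neon-poky-linux-gnueabi
# the Eclipse fields then correspond roughly to:
#   Toolchain Root Location: <build>/tmp
#   Sysroot Location:        <build>/tmp/sysroots/<machine>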
