chroot into another arch's environment - linux

Following the Linux From Scratch book I have managed to build a toolchain for ARM, on an ARM board. That takes me up to chapter 6 of the book, and on the ARM board itself I could carry on further with no problems.
My question is: can I use the prepared environment to continue building the chapter 6 software on my x86_64 Fedora 16 laptop?
I thought that, since I have all the binaries set up, I could simply copy them to the laptop, chroot inside, and work as if I were on the ARM board, but the command from the book gives no result:
# chroot "$LFS" /tools/bin/env -i HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
    PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin /tools/bin/bash --login +h
chroot: failed to run command `/tools/bin/env': No such file or directory
The binary is there, but it was not built for this system:
# ldd /tools/bin/env
    not a dynamic executable
The binary is compiled as per the book:
# readelf -l /tools/bin/env | grep interpreter
[Requesting program interpreter: /tools/lib/ld-linux.so.3]
So I wonder if there is a way, for example by setting the proper environment variables for CC, LD and READELF, to continue building for ARM using these tools on the x86_64 host.
Thank you.

Yes, you certainly can chroot into an ARM rootfs on an x86 box.
Basically, like this:
$ sudo chroot /path/to/arm/rootfs /bin/sh
sh-4.3# ls --version 2>&1 | head
/bin/ls: unrecognized option '--version'
BusyBox v1.22.1 (2017-03-02 15:41:43 CST) multi-call binary.
Usage: ls [-1AaCxdLHRFplinsehrSXvctu] [-w WIDTH] [FILE]...
List directory contents
-1 One column output
-a Include entries which start with .
-A Like -a, but exclude . and ..
sh-4.3# ls
bin css dev home media proc sbin usr wav
boot data etc lib mnt qemu-arm sys var
My rootfs is for a small embedded device, so everything is BusyBox-based.
How does this work? Firstly, I have binfmt_misc support running in the kernel. I didn't have to do anything; it came with Ubuntu 18. When the kernel sees an ARM binary, it hands it off to the registered interpreter /usr/bin/qemu-arm-static.
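If you want to check whether your host kernel already has such a handler registered, you can look at the binfmt_misc entries under /proc (the entry name qemu-arm below is what the qemu-user-static package registers for me; it may differ on other setups):
$ ls /proc/sys/fs/binfmt_misc/
$ cat /proc/sys/fs/binfmt_misc/qemu-arm
The second command prints, among other things, the interpreter path that the kernel hands ARM binaries to.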
A static executable by that name is found inside my rootfs:
sh-4.3# ls /usr/bin/q*
/usr/bin/qemu-arm-static
I got it from an Ubuntu package. I installed:
$ apt-get install qemu-user-static
and then copied /usr/bin/qemu-arm-static into the usr/bin subdirectory of the rootfs tree.
That's it; now I can chroot into that rootfs without even mentioning QEMU on the chroot command line.
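To recap the whole setup as commands (a sketch; /path/to/arm/rootfs is a placeholder for your own rootfs tree):
$ sudo apt-get install qemu-user-static
$ sudo cp /usr/bin/qemu-arm-static /path/to/arm/rootfs/usr/bin/
$ sudo chroot /path/to/arm/rootfs /bin/sh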

Nope. You can't run ARM binaries on x86, so you can't enter its chroot. No amount of environment variables will change that.
You might be able to continue the process by creating a filesystem image for the target and running it under an emulator (e.g., qemu-system-arm), but that's quite a different thing.

No, you cannot, at least not using chroot. What you have in your hands is a toolchain with an ARM target for an ARM host. Binaries are directly executable only on architectures compatible with their host architecture - and x86_64 is not ARM-compatible.
That said, you might be able to use an emulated environment. qemu, for example, offers two emulation modes for ARM: qemu-system-arm that emulates a whole ARM-based system and qemu-arm that uses ARM-native libraries to provide a thinner emulation layer for running ARM Linux executables on non-ARM hosts.
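For example (a sketch, assuming qemu-user is installed on the laptop and $LFS points at the copied ARM root from the book), user-mode emulation can run a single ARM binary directly; the -L prefix tells QEMU where to look for the program interpreter /tools/lib/ld-linux.so.3 recorded in the binary:
$ qemu-arm -L "$LFS" "$LFS"/tools/bin/env --version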

Related

Run a native X86 binary from inside an ARM chroot

I have set up a chroot for an aarch64 rootfs. I am using qemu-aarch64-static as an emulator. This works: I can log in to the chroot and execute aarch64 binaries.
Now I would like to run a native (x86_64) cross compiler from within this environment. (I have a large application which does not build using a cross compiler, and using a qemu-emulated gcc is too slow.) I cannot find a way to run x86 executables from the chroot.
1. First I mount the native filesystem into the chroot:
mount -o bind / /mnt/rpi_rootfs/mnt/native
2. Prepare the chroot:
cd /mnt/rpi_rootfs
sudo mount -t proc /proc proc/
sudo mount --rbind /sys sys/
sudo mount --rbind /dev dev/
3. Log in to the chroot:
sudo chroot /mnt/rpi_rootfs/
4. Create a link to the x86 dynamic linker/loader:
ln -s /mnt/native/lib/ld-linux.so.2 /lib/ld-linux.so.2
5. Try to run any x86 native binary:
LD_LIBRARY_PATH=/mnt/native/lib:/mnt/native/usr/lib /mnt/native/bin/pwd
Error:
/mnt/native/bin/pwd: No such file or directory
I was inspired by this approach: https://gitlab.com/postmarketOS/pmbootstrap/-/issues/1731
Notes:
On the native system: ls /proc/sys/fs/binfmt_misc/ shows the various registered emulators, such as qemu-aarch64.
In the chroot ls /proc/sys/fs/binfmt_misc/ is empty.
I use the 'pwd' app as an example.
Execute
file /bin/pwd
/bin/pwd: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2
This shows that /lib64/ld-linux-x86-64.so.2 is actually required to run the application. Thus step 4 above needs to be changed.
Note: /lib64/ld-linux-x86-64.so.2 is a symlink.
Enter the chroot and then, inside the chroot environment, create a symlink from the expected location of the dynamic linker to the actual file on the host:
ln -s /mnt/native/lib/x86_64-linux-gnu/ld-2.31.so /lib64/ld-linux-x86-64.so.2
When this is done it is finally possible to run native x86 applications in the aarch64 chroot. This allows one to run high-performance cross compilers from within the chroot.
LD_LIBRARY_PATH=/mnt/native/lib:/mnt/native/usr/lib:/mnt/native/lib/x86_64-linux-gnu /mnt/native/bin/pwd
/
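As a usage sketch, a native x86_64 cross compiler can then be invoked the same way (the compiler path below is an assumption; substitute whatever aarch64 cross toolchain is actually installed on your host):
LD_LIBRARY_PATH=/mnt/native/lib:/mnt/native/usr/lib:/mnt/native/lib/x86_64-linux-gnu /mnt/native/usr/bin/aarch64-linux-gnu-gcc -O2 -o hello hello.c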

How to build openssl for arm linux

I am working with Ubuntu x86 and have a gcc cross compiler for ARM Linux.
I want to build the OpenSSL GitHub project for ARM Linux.
I read the documentation but couldn't understand how to build it.
Assuming you are building for a 64-bit ARM Linux system, the self-contained procedure below should work - it is working for me on Ubuntu 19.10 x86_64:
# openssl
wget https://www.openssl.org/source/openssl-1.1.1e.tar.gz
tar zxf openssl-1.1.1e.tar.gz
# a toolchain I know is working
wget "https://developer.arm.com/-/media/Files/downloads/gnu-a/9.2-2019.12/binrel/gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu.tar.xz?revision=61c3be5d-5175-4db6-9030-b565aae9f766&la=en&hash=0A37024B42028A9616F56A51C2D20755C5EBBCD7" -O gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu.tar.xz
mkdir -p /opt/arm/9
tar Jxf gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu.tar.xz -C /opt/arm/9
# building
cd openssl-1.1.1e
./Configure linux-aarch64 --cross-compile-prefix=/opt/arm/9/gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu/bin/aarch64-none-linux-gnu- --prefix=/opt/openssl-1.1.1e --openssldir=/opt/openssl-1.1.1e -static
make install
ls -gG /opt/openssl-1.1.1e/bin/
total 10828
-rwxr-xr-x 1 6214 Mar 23 23:27 c_rehash
-rwxr-xr-x 1 11077448 Mar 23 23:27 openssl
file /opt/openssl-1.1.1e/bin/openssl
/opt/openssl-1.1.1e/bin/openssl: ELF 64-bit LSB executable, ARM aarch64, version 1 (GNU/Linux), statically linked, for GNU/Linux 3.7.0, with debug_info, not stripped
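If qemu-user is installed on the build host, you can also sanity-check the resulting statically linked binary without the target board (an optional sketch; the qemu package name varies by distribution):
qemu-aarch64 /opt/openssl-1.1.1e/bin/openssl version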
In case you want to build for a 32-bit ARM system with hardware floating-point support, you only need to adapt three of the commands:
wget "https://developer.arm.com/-/media/Files/downloads/gnu-a/9.2-2019.12/binrel/gcc-arm-9.2-2019.12-x86_64-arm-none-linux-gnueabihf.tar.xz?revision=fed31ee5-2ed7-40c8-9e0e-474299a3c4ac&la=en&hash=76DAF56606E7CB66CC5B5B33D8FB90D9F24C9D20" -O gcc-arm-9.2-2019.12-x86_64-arm-none-linux-gnueabihf.tar.xz
tar Jxf gcc-arm-9.2-2019.12-x86_64-arm-none-linux-gnueabihf.tar.xz -C /opt/arm/9
./Configure linux-generic32 --cross-compile-prefix=/opt/arm/9/gcc-arm-9.2-2019.12-x86_64-arm-none-linux-gnueabihf/bin/arm-none-linux-gnueabihf- --prefix=/opt/openssl-1.1.1e --openssldir=/opt/openssl-1.1.1e -static
Update: providing more information upon reading the comment.
1) linux-generic32 is, as implied by its name, a generic 32-bit Linux target that should work on any 32-bit system. The drawback is that the executable may not be optimized for your target. If you read the Configure script, you will see a list of environment variables you can set to direct the compilation. For example, if your SoC is a Cortex-A9, you may pass the option -mtune=cortex-a9 by setting CFLAGS. You will find a lot of information on the Internet, but I would suggest looking at Configure itself; it contains a lot of useful comments.
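For instance, a sketch assuming a Cortex-A9 target (Configure passes unrecognized options starting with a dash on to the compiler, so the flag can simply be appended to the invocation from above; check your SoC documentation for the flags that actually apply):
./Configure linux-generic32 -mtune=cortex-a9 --cross-compile-prefix=/opt/arm/9/gcc-arm-9.2-2019.12-x86_64-arm-none-linux-gnueabihf/bin/arm-none-linux-gnueabihf- --prefix=/opt/openssl-1.1.1e --openssldir=/opt/openssl-1.1.1e -static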
By the way, if you execute Configure with a non-existent target, you get the list of all possible ones:
./Configure does-not-exist
2) hf stands for hardware floating point. Some 32-bit ARM SoCs have hardware support for floating-point operations, some do not. Since you did not specify the exact brand/model of SoC you are targeting, I took a guess and used a toolchain capable of generating code for the ARM floating-point hardware when it is present.

Is there a need to recompile my linux kernel?

I am a beginner learning Linux kernel module development. I am following a tutorial that says to recompile my kernel so as to enable various debugging features, like forced module unloading, etc. Is it okay if I do that? Does it affect my pre-built kernel? In what cases would I try to insert a module into a running kernel and the kernel would not allow me to do so?
It is perfectly okay to compile and install a kernel to do kernel module development. If you are on Ubuntu, you can follow these steps to make sure that you are using the same kernel sources as your booted machine.
Step 1. Find out the kernel used for booting from the /boot/grub/grub.cfg file. Look for the entry against 'linux ' in the boot option entries that you select while booting up.
Example excerpt : linux /boot/vmlinuz-3.13.0-24-generic root=UUID=e377a464-92db-4c07-86a9-b151800630c0 ro quiet splash $vt_handoff
Step 2. Look for the name of the package with the same version using the following command.
dpkg -l | grep linux | grep 3.13.0-24-generic
Example output:
$ dpkg -l | grep linux | grep 3.13.0-24-generic
ii linux-headers-3.13.0-24-generic 3.13.0-24.46 amd64 Linux kernel headers for version 3.13.0 on 64 bit x86 SMP
ii linux-image-3.13.0-24-generic 3.13.0-24.46 amd64 Linux kernel image for version 3.13.0 on 64 bit x86 SMP
ii linux-image-extra-3.13.0-24-generic 3.13.0-24.46 amd64 Linux kernel extra modules for version 3.13.0 on 64 bit x86 SMP
Step 3. Download the sources of the package "linux-headers-3.13.0-24-generic" to get the same kernel that is used on your PC.
$ apt-get source linux-headers-3.13.0-24-generic
Step 4. Use the config file that is available in the /boot/ folder as the config file to compile this kernel source.
Example:
$ ls /boot/config-3.13.0-24-generic (notice the same version used in this file name)
Step 5. Turn on the debugging options you need in this config to do your testing.
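A minimal sketch of steps 4 and 5 (the option names are examples; enable whatever your tutorial asks for, e.g. forced module unloading):
cp /boot/config-3.13.0-24-generic .config
make olddefconfig
make menuconfig    # e.g. enable CONFIG_MODULE_FORCE_UNLOAD under "Enable loadable module support"
make -j$(nproc)
sudo make modules_install install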
Recompiling the kernel helps us learn how the kernel works.
The latest kernel patches can be applied through a kernel compile and install.
We can enable debug flags through compilation.
We can remove code that is not needed.
It helps you add your own kernel code and test it.
It is easy to recompile and install the Linux kernel, but it takes more time if we compile on a slow computer or VM.

The Jungo WinDriver needs a linux symbolic link, what does it mean?

Its manual says:
To run GUI WinDriver applications (e.g., DriverWizard [5]; Debug Monitor [7.2]) you must also
have version 5.0 of the libstdc++ library — libstdc++.so.5. If you do not have this file, install it from the relevant RPM in your Linux distribution (e.g., compat-libstdc++).
Before proceeding with the installation, you must also make sure that you have a linux symbolic link. If you do not, create one by typing: /usr/src$ ln -s <target kernel> linux
For example, for the Linux 2.4 kernel type :
/usr/src$ ln -s linux-2.4/ linux
What does this symbolic link mean? What do <target kernel> and linux represent?
If I install WinDriver on Ubuntu 13.10, how should I specify these two parameters?
When installing WinDriver on a Linux machine, you must make sure that you are compiling WinDriver with the same header files that were used to build your kernel. # uname -a will tell you your kernel version number.
You should verify that the directory /usr/src/linux (normally a symbolic link) points to the correct kernel header sources and that the header files use exactly the same version numbers as your running kernel.
Here, <target kernel> refers to the location of the kernel headers (the directory named after the Linux kernel name and version number), and linux is the name of the symbolic link to create.
To fix this:
Become super user: $ su ;
Change directory to /usr/src/: # cd /usr/src/ ;
Delete the previous link you created (if any): # rm linux ;
And create a new symbolic link: # ln -s linux-2.4/ linux (an Ubuntu 13.10 example follows below).
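On Ubuntu 13.10 the kernel headers live under /usr/src/linux-headers-<version> rather than /usr/src/linux-2.4, so an equivalent sketch (assuming the headers package matching your running kernel is installed) would be:
sudo apt-get install linux-headers-$(uname -r)
cd /usr/src
sudo ln -s linux-headers-$(uname -r) linux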
I recommend following the Linux installation procedure from the Windriver manual at:
http://www.jungo.com/st/support/documentation/windriver/11.5.0/wdpci_manual.mhtml/wd_install_process.html#wd_install_linux
Regards,
Nadav, Jungo support manager

How to debug my cross-compiled Linux kernel?

I've cross-compiled a Linux kernel (for ARM on i686, using Cross-LFS).
Now I'm trying to boot this kernel using QEMU:
$ qemu-system-arm -m 128 -kernel /mnt/clfs-dec4/boot/clfskernel-2.6.38.2 --nographic -M versatilepb
Then it shows this line and waits indefinitely:
Uncompressing Linux... done, booting the kernel.
So I want to debug the kernel, so that I can study what exactly is happening.
I'm new to kernel builds. Can someone please help me debug my custom-built kernel, as it is not showing anything after that statement? Is there any possibility of the kernel being broken? (I don't think so, because it did not give any errors while compiling.)
My aim is to generate a custom-built, very minimal Linux OS. Any suggestions regarding toolchains etc. that would be easy and flexible for my requirements (drivers etc.) are welcome.
Thank you.
You can use GDB to debug your kernel with QEMU, using the -s -S options. If you want a simple and reliable toolchain, you can use ELDK from DENX (http://www.denx.de/wiki/DULG/ELDK).
You can install it like this (it's not the latest version, but you get the idea):
wget http://ftp.denx.de/pub/eldk/4.2/arm-linux-x86/iso/arm-2008-11-24.iso
sudo mkdir -p /mnt/cdrom (if necessary)
sudo mount -o loop arm-2008-11-24.iso /mnt/cdrom
/mnt/cdrom/install -d $HOME/EMBEDDED_TOOLS/ELDK/
The command above should install the toolchain under $HOME/EMBEDDED_TOOLS/ELDK (modify it if you need to).
echo "export PATH=$PATH:$HOME/EMBEDDED_TOOLS/ELDK/ELDK42/usr/bin" >> $HOME/.bashrc
You can then see the version of your ARM toolchain like this:
arm-linux-gcc -v
You can test a hello_world.c program like this:
arm-linux-gcc hello_world.c -o hello_world
And you type file hello_world to see the target architecture of the binary; it should be something like this:
hello_world: ELF 32-bit LSB executable, ARM, version 1 (SYSV)
Now, if you want to compile a production kernel, you need to optimize it (I suggest using BusyBox), and if you just want one for testing for now, try these steps:
Create a script, set_toolchain.sh, to set up your toolchain:
#! /usr/bin/sh
PATH=$PATH:$HOME/EMBEDDED_TOOLS/ELDK/ELDK42/usr/bin
ARCH=arm
CROSS_COMPILE=arm-linux-gnueabi-
export PATH ARCH CROSS_COMPILE
And run your script (source ./set_toolchain.sh)
Download a Linux kernel and extract it (let's assume 2.6.x; it's an old kernel, but there is a good chance that it builds without compilation errors).
Inside your extracted kernel tree:
cd ~/linux-2.6.29
make versatile_defconfig
Here we use the Versatile board (the default configs live under arch/arm/configs); you may need to use make menuconfig to change the OABI option to ARM EABI; that option is under the Kernel Features menu.
After all these steps, you can compile your kernel:
make
If you want verbose compilation, use make V=1.
After this you will find your kernel under arch/arm/boot/zImage.
Hope this helps.
Regards.
I would suggest building your kernel with the options in the Kernel hacking section of your configuration file activated.
Then you may use kdb or kgdb, which is easier to use but requires another machine running gdb.
You can also connect QEMU and GDB. QEMU has the -s and -S options, which run a GDB server and let you connect to it via TCP on localhost:1234. Then you can load your kernel image (the uncompressed one) in GDB and see how far your kernel boots.
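A sketch of that workflow, using the kernel from the question (the gdb binary name depends on your toolchain; arm-linux-gdb here is an assumption):
$ qemu-system-arm -M versatilepb -m 128 -kernel /mnt/clfs-dec4/boot/clfskernel-2.6.38.2 --nographic -s -S
In a second terminal, point GDB at the uncompressed vmlinux from your build tree:
$ arm-linux-gdb vmlinux
(gdb) target remote localhost:1234
(gdb) break start_kernel
(gdb) continue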
