Failed to execute /init - linux

I am trying to build a basic root filesystem using Buildroot, for an embedded system (the Banana PI D1).
I am using a kernel from an SDK supplied by the SoC vendor. From this repo I am using only the kernel, found in src/kernel.
There's nothing remarkable about the Buildroot configuration. It builds with no errors and the resulting root filesystem looks like it contains everything I would expect.
I have configured it to build the filesystem as an initramfs embedded within the zImage.
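For reference, the relevant part of the Buildroot configuration is roughly the following (symbol names from memory; the full configuration is in the gist linked below):
BR2_TARGET_ROOTFS_CPIO=y
BR2_TARGET_ROOTFS_CPIO_GZIP=y
BR2_TARGET_ROOTFS_INITRAMFS=y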
The kernel appears to start up correctly, but cannot load init and then panics:
Booting Linux on physical CPU 0
Linux version 3.4.35 (harmic@penski.harmic.moo.org) (gcc version 4.8.4 (Buildroot 2015.02) ) #7 Sat Mar 21 22:59:18 AEDT 2015
CPU: ARM926EJ-S [41069265] revision 5 (ARMv5TEJ), cr=00053177
...
Kernel command line: root=/dev/mtdblock1 ro init=/sbin/init mem=64M console=ttySAK0,115200
...
Freeing init memory: 4632K
Failed to execute /init
Failed to execute /sbin/init. Attempting defaults...
mmc0: host does not support reading read-only switch. assuming write-enable.
mmc0: new SDHC card at address 0007
mmcblk0: mmc0:0007 SD08G 7.42 GiB
mmcblk0: p1
Kernel panic - not syncing: No init found. Try passing init= option to kernel. See Linux Documentation/init.txt for guidance.
I have tried a number of troubleshooting steps:
I've built a root filesystem using this miniroot project (took some doing, as it is quite out of date). It booted OK, using the same kernel as I am trying to use with the buildroot root fs.
I've tried using both uClibc and eglibc
I've tried using Buildroot's own cross-tools as well as the cross tools supplied by the SoC vendor
I've confirmed that the built rootfs does include an /init (it does!)
There is a gist here containing the buildroot configuration, a copy of the kernel boot log, and a listing of the contents of the generated filesystem.
What steps can I take to troubleshoot this further?
Update:
The generated rootfs.cpio.gz weighs in at 2139200 bytes. I have read that there is a maximum size of initramfs you can use, but I have not been able to find where the hard limit is documented.
I have attached a listing of the generated root filesystem to the gist linked above.
I have unpacked the rootfs on the host and inspected it. /init contains this:
#!/bin/sh
# devtmpfs does not get automounted for initramfs
/bin/mount -t devtmpfs devtmpfs /dev
exec 0</dev/console
exec 1>/dev/console
exec 2>/dev/console
exec /sbin/init $*
/sbin/init is a symlink to /bin/busybox.
/bin/busybox is dynamically linked:
$ file busybox
busybox: setuid ELF 32-bit MSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.16, stripped
$ ../../../host/usr/bin/armeb-buildroot-linux-gnueabi-readelf -a busybox | grep "Shared library:"
0x00000001 (NEEDED) Shared library: [libc.so.6]
libc.so.6 is present in /lib. /lib32 is a symlink to /lib for good measure.
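A related check (a sketch, using the same readelf as above) is to confirm which dynamic loader busybox requests, and that the path it reports (e.g. /lib/ld-linux.so.3) actually exists in the unpacked rootfs:
$ ../../../host/usr/bin/armeb-buildroot-linux-gnueabi-readelf -l busybox | grep "program interpreter"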
The device has 64M RAM.
Both the vendor's cross tools and the buildroot cross tools are set up for eabi

As suggested by @sawdust, the problem was that the CPU was not supposed to be run in big endian mode.
After changing the target to 'ARM (Little Endian)', cleaning and re-building, it now boots correctly.
In retrospect this should have been obvious - inspecting the vendor's kernel image:
$ file zImage
zImage: Linux kernel ARM boot executable zImage (little-endian)
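For anyone hitting the same thing: in Buildroot terms the fix amounts to roughly this change in the .config (symbol names from memory), followed by a full clean and rebuild:
# Target architecture: ARM (Little Endian) instead of armeb
BR2_arm=y
# BR2_armeb is not set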

Related

single partition & no initramfs linux boot gets kernel panic

Compiled a kernel using Gentoo specifications for a ThinkPad T430
Mounted an empty ext4 hard drive, created a boot/ directory on it, and moved bzImage and System.map inside
Installed extlinux to it with "extlinux --install [mounted directory]/boot"
Placed syslinux.cfg inside with the following config:
DEFAULT linux
SAY Now booting the kernel from EXTLINUX...
LABEL linux
KERNEL /boot/bzImage
APPEND root=/dev/sdb rw init=/bin/bash
Created a bin/ folder on the mounted hard drive and placed the bash binary inside
At this point I'm able to boot the kernel to the point where it has to run init, but it panics:
---[ Kernel Panic - not syncing: Requested init /bin/bash failed (error -2). ]---
4chan solved my question in 10 minutes: I didn't have the libc.so libraries in the root filesystem.
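For anyone else hitting error -2 (ENOENT) here: a quick way to see which libraries and which dynamic loader bash expects, so they can be copied into the target filesystem alongside it (library paths vary by distribution), is roughly:
$ ldd /bin/bash
$ readelf -l /bin/bash | grep "program interpreter"
Copy each listed library, plus the reported loader, into the matching lib directory on the mounted drive (or use a statically linked bash instead).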

What is the use of vmlinux file generated when we compile linux kernel

I am compiling the Linux kernel for my ARM board. I have seen a file called vmlinux generated in the kernel root folder. Can someone give a good explanation about this file and its use?
vmlinux is an ELF file: the uncompressed version of the kernel image, which can be used for debugging. zImage or bzImage are compressed versions of the kernel image that are normally used for booting.
vmlinux as such cannot be used directly by U-Boot. However, by adding metadata to it in the process of creating a uImage, it is possible to boot it via U-Boot.
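As a sketch of that uImage route (load/entry addresses are placeholders and board-specific; mkimage comes from the u-boot-tools package, and CROSS_COMPILE is your ARM toolchain prefix):
${CROSS_COMPILE}objcopy -O binary -S vmlinux linux.bin
gzip -9 linux.bin
mkimage -A arm -O linux -T kernel -C gzip \
        -a 0x00008000 -e 0x00008000 -n "Linux" \
        -d linux.bin.gz uImage
In an ARM kernel tree the usual shortcut is make uImage, which wraps the kernel for U-Boot to much the same end.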
vmlinux is the boot file in ELF format, and the initrd file (RAM disk) is loaded from the same directory (/boot).
The vmlinux file is practically the kernel itself.

Upgraded Redhat Linux kernel gives kernel panic on boot

I'm trying to use a newer kernel (2.6.32) on 32-bit RHEL 5.10 (which ships a 2.6.18 kernel). The 2.6.32 kernel was downloaded from kernel.org and is not patched by Red Hat. I know this is silly, but upgrading to RHEL 6 is not an option for us.
I did make menuconfig; make; make modules; make modules_install; make install; reboot. Then I got a kernel panic. I also built the 2.6.18 kernel from source, both with and without the Red Hat patches; both worked fine.
My question is whether it's possible to use a 2.6.32 kernel with the filesystem and libraries from a RHEL 5.10 installation (2.6.18). If it is possible, what's wrong with my process?
========
Mounting root filesystem.
mount: could not find filesystem '/dev/root'
Setting up other filesystems.
Setting up new root fs
setuproot: moving /dev failed: No such file or directory
no fstab.sys, mounting internal defaults
setuproot: error mounting /proc: No such file or directory
Switching to new root and running init.
unmounting old /dev
unmounting old /proc
unmounting old /sys
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!
Pid: 1, comm: init Not tainted 2.6.32.63 #1
Call Trace:
[<c0xxxxxx>] ? panic
[<c0xxxxxx>] ? do_exit
[<c0xxxxxx>] ? do_group_exit
[<c0xxxxxx>] ? sys_exit_group
[<c0xxxxxx>] ? syscall_call
My /boot/grub/grub.conf has the following.
root (hd0,0)
kernel /boot/vmlinuz-2.6.32-63 ro root=LABEL=/ rhgb
initrd /boot/initrd-2.6.32-63.img
Thanks to all the help and comments I'm able to answer it myself.
This is what I tried, but it failed: I made a diff between the old and new initrd (gunzip | cpio). There are different modules installed, but they don't matter. I disabled loadable modules (building everything in), and the problem remained. I compiled busybox, put it in the initrd (editing /init), and got a shell. From there I could manually mknod and mount the filesystem, but I still got the kernel panic when switching root.
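For reference, the unpack-and-compare step looks roughly like this (the old image name is globbed rather than spelled out; both RHEL initrds are gzipped cpio archives):
mkdir old new
( cd old && gunzip -c /boot/initrd-2.6.18-*.img | cpio -idm )
( cd new && gunzip -c /boot/initrd-2.6.32-63.img | cpio -idm )
diff -r old new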
Finally I found this. It has a better description and solution to the problem. Enable "deprecated sysfs" and it's all fixed.
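In .config terms, on a 2.6.32 tree this corresponds (as far as I can tell) to enabling, under "General setup":
CONFIG_SYSFS_DEPRECATED=y
CONFIG_SYSFS_DEPRECATED_V2=y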

chroot into other arch's environment

Following the Linux From Scratch book I have managed to build a toolchain for ARM, on an ARM. This takes me to chapter 6 of the book, and on the ARM board itself I could carry on further with no problems.
My question is whether I can use the prepared environment to continue building the software from chapter 6 on my x86_64 Fedora 16 laptop.
I thought that, since I have all the binaries set up, I could just copy them to the laptop, chroot inside, and feel as though I were on the ARM board, but using the command from the book gives no result:
# chroot "$LFS" /tools/bin/env -i HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
    PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin /tools/bin/bash --login +h
chroot: failed to run command `/tools/bin/env': No such file or directory
The binary is there, but it doesn't belong to this system:
# ldd /tools/bin/env
not a dynamic executable
The binary is compiled as per the book:
# readelf -l /tools/bin/env | grep interpreter
[Requesting program interpreter: /tools/lib/ld-linux.so.3]
So I wonder if there is a way, such as setting the proper environment variables for CC, LD and READELF, to continue building for ARM using these tools on an x86_64 host.
Thank you.
Yes, you certainly can chroot into an ARM rootfs on an x86 box.
Basically, like this:
$ sudo chroot /path/to/arm/rootfs /bin/sh
sh-4.3# ls --version 2>&1 | head
/bin/ls: unrecognized option '--version'
BusyBox v1.22.1 (2017-03-02 15:41:43 CST) multi-call binary.
Usage: ls [-1AaCxdLHRFplinsehrSXvctu] [-w WIDTH] [FILE]...
List directory contents
-1 One column output
-a Include entries which start with .
-A Like -a, but exclude . and ..
sh-4.3# ls
bin css dev home media proc sbin usr wav
boot data etc lib mnt qemu-arm sys var
My rootfs is for a small embedded device, so everything is BusyBox-based.
How is this working? Firstly, I have binfmt_misc support running in the kernel. I didn't have to do anything; it came with Ubuntu 18. When the kernel sees an ARM binary, it hands it off to the registered interpreter, /usr/bin/qemu-arm-static.
A static executable by that name is found inside my rootfs:
sh-4.3# ls /usr/bin/q*
/usr/bin/qemu-arm-static
I got it from an Ubuntu package. I installed:
$ apt-get install qemu-user-static
and then copied /usr/bin/qemu-arm-static into the usr/bin subdirectory of the rootfs tree.
That's it; now I can chroot into that rootfs without even mentioning QEMU on the chroot command line.
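If you want to confirm the handler is registered on the host side, the binfmt_misc entry can be inspected (entry name assumed from the qemu-user-static package):
$ cat /proc/sys/fs/binfmt_misc/qemu-arm
The output shows whether it is enabled, the interpreter path, and the ELF magic/mask it matches on.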
Nope. You can't run ARM binaries on x86, so you can't enter its chroot. No amount of environment variables will change that.
You might be able to continue the process by creating a filesystem image for the target and running it under an emulator (e.g., qemu-system-arm), but that's quite a different thing.
No, you cannot, at least not using chroot. What you have in your hands is a toolchain with an ARM target for an ARM host. Binaries are directly executable only on architectures compatible with the one they were built for - and x86_64 is not ARM-compatible.
That said, you might be able to use an emulated environment. QEMU, for example, offers two emulation modes for ARM: qemu-system-arm, which emulates a whole ARM-based system, and qemu-arm, which uses the ARM-native libraries to provide a thinner emulation layer for running ARM Linux executables on non-ARM hosts.
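As a concrete sketch of the qemu-arm route (paths assumed): the -L option points QEMU at the ARM rootfs so the ARM dynamic loader and libraries are resolved there:
$ qemu-arm -L "$LFS" "$LFS"/tools/bin/env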

Compiling a kernel - no bzImage/vmlinuz produced

I'm trying to compile a kernel (an altered version of 2.6.32.9, found here https://github.com/rabeeh/linux-2.6.32.9). I am doing the compilation on an emulated ARM system (QEMU) (yes, I should probably cross-compile, but that's a different topic) running Ubuntu Core (https://wiki.ubuntu.com/Core) and the kernel (vmlinuz) from Ubuntu 11.04 (downloaded from http://ports.ubuntu.com/ubuntu-ports/dists/natty/main/installer-armel/current/images/versatile/netboot/vmlinuz).
After running make bzImage, I look in the arch/arm/boot folder and find only a file called zImage. I tried using this zImage in QEMU instead of the vmlinuz I downloaded from ubuntu.com, but that doesn't work; it just shows a black screen. I guess zImage is not the same as bzImage, which (judging from different articles on the internet) is what I think vmlinuz is.
So, a few questions:
Why doesn't make bzImage produce a bzImage/vmlinuz?
Can I convert a vmlinux to a vmlinuz using for example mkimage (there are lots of guides on the opposite...)?
Thanks
The bzImage filename and make target were originally x86-specific ("big zImage"). Many bootloaders on architectures other than bare-metal x86 (SPARC, PPC, IA64, etc., and also Xen on any architecture) directly take vmlinux (or one of its compressed forms, for example vmlinux.gz, aka zImage). I guess some maintainers just added bzImage as a make target name because they wanted to have the x86 madness on their arch as well.
I get the result you describe by asking QEMU to emulate a CPU other than arm926ej-s, but booting versatilepb with the default CPU works. I've cross-compiled my kernel, and I compiled all the drivers into it (so I don't use an initrd).
Just download the ~100 MB ARM EABI toolchain from http://www.mentor.com/embedded-software/sourcery-tools/sourcery-codebench/editions/lite-edition/ (it's free, but they want your email, like the x86 Intel compiler). It has an installer; just say "next" until it's done, like on Windows. Then add the bin directory to your path:
export PATH=~/CodeSourcery/Sourcery_CodeBench_Lite_for_ARM_EABI/bin/:$PATH
Then go back to your kernel source dir and do
make ARCH=arm CROSS_COMPILE=arm-none-eabi- menuconfig
make ARCH=arm CROSS_COMPILE=arm-none-eabi- zImage modules
You can do
sudo make ARCH=arm CROSS_COMPILE=arm-none-eabi- INSTALL_MOD_PATH=path_to_arm_root modules_install
if you can reach your ARM filesystem from the host. If you're using NFS root it's trivial, but if you're using a disk image you need to either:
use a raw disk image and kpartx (which depends on your host kernel having dm-multipath), or
use qemu-nbd, which supports qcow (and depends on the host kernel having network block device support).
To boot in QEMU with a disk you need the right drivers (SYM53C8XX SCSI); the versatile defconfig doesn't select those.
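For completeness, here is a sketch of the sort of QEMU invocation this implies (image name and partition are assumptions; with the SYM53C8XX driver built in, the disk appears as /dev/sda):
qemu-system-arm -M versatilepb -m 256 \
    -kernel arch/arm/boot/zImage \
    -hda rootfs.img \
    -append "root=/dev/sda1 console=ttyAMA0" \
    -nographic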
