Booting kernel from SD in qemu (ARM) with u-boot - linux

I'm quite new to embedded systems and I'm playing around with ARM on QEMU. I've run into a problem booting a Linux kernel image from an emulated SD card on Versatile Express with a Cortex-A9 CPU.
I prepared everything in the following order: first, I built the kernel with vexpress_defconfig using an appropriate ARM toolchain. Then I built U-Boot with vexpress_ca9x4_defconfig. Everything went just fine. The Linux kernel source is version 4.13 (the latest stable from git). The U-Boot version is 2017.09-00100. Then I prepared an SD image:
dd if=/dev/zero of=sd.img bs=4096 count=4096
mkfs.vfat sd.img
mount sd.img /mnt/tmp -o loop,rw
cp kernel/arch/arm/boot/zImage /mnt/tmp
umount /mnt/tmp
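(As a sanity check, 4096 blocks of 4096 bytes is 16 MiB, which matches the capacity mmcinfo reports below:)
ls -l sd.img    # should report 16777216 bytes = 16 MiB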
Next, I run QEMU as follows:
qemu-system-arm -machine vexpress-a9 -cpu cortex-a9 -m 128M -dtb kernel/arch/arm/boot/dts/vexpress-v2p-ca9.dtb -kernel uboot/u-boot -sd sd.img -nographic
U-Boot loads successfully and gives me a command prompt, and the SD card is really there and attached:
=> mmcinfo
Device: MMC
Manufacturer ID: aa
OEM: 5859
Name: QEMU!
Tran Speed: 25000000
Rd Block Len: 512
SD version 1.0
High Capacity: No
Capacity: 16 MiB
Bus Width: 1-bit
Erase Group Size: 512 Bytes
I attempt to load the compressed image from SD into memory and boot it:
fatload mmc 0:0 0x4000000 zImage
and everything looks ok
=> fatload mmc 0:0 0x4000000 zImage
reading zImage
3378968 bytes read in 1004 ms (3.2 MiB/s)
but when I then try to boot the kernel, I get an error:
=> bootz 0x4000000
Bad Linux ARM zImage magic!
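(For reference, bootz checks for the ARM zImage magic number 0x016f2818 at offset 0x24 of the loaded image; on the host this can be verified with something like the following, where a valid zImage shows the bytes 18 28 6f 01:)
xxd -s 0x24 -l 4 kernel/arch/arm/boot/zImage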
I also tried U-Boot images created with U-Boot's mkimage, as in this example:
uboot/tools/mkimage -A arm -C none -O linux -T kernel -d kernel/arch/arm/boot/Image -a 0x00010000 -e 0x00010000 uImage
also trying -C gzip on zImage and different load/entry addresses, to no avail. The images were copied to sd.img. Whichever options I try, when I fatload the image and check it with iminfo I consistently get an error:
=> iminfo 0x4000000
## Checking Image at 04000000 ...
Unknown image format!
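(The header that mkimage writes can also be inspected on the host before copying the file to the SD image, for example:)
uboot/tools/mkimage -l uImage    # prints the image type, load address and entry point recorded in the header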
I'm totally confused and this problem is driving me nuts, while information on this subject on the Internet is rather scarce. Please give me a hint about what I'm doing wrong and point me in the right direction.
The QEMU in use is version 2.9.0.

Related

Where does QEMU load the DTB?

I am writing my own bootloader for AArch64 that must boot Linux, and in order to execute it properly I need to follow the Linux boot protocol.
Here are some memory mappings, located in my linker file:
FLASH_START = 0x000100000;
RAM_START = 0x40000000;
TEXT_START = 0x40080000;
Here is the command I am using to launch my virt machine, giving it 4 cores and 2 GB of RAM:
qemu-system-aarch64 -nographic -machine virt -cpu cortex-a72 -kernel pflash.bin -initrd initramfs.cpio.gz -serial mon:stdio -m 2G -smp 4
The pflash.bin has the following layout:
dd if=/dev/zero of=pflash.bin bs=1M count=512
dd if=my_bootloader.img of=pflash.bin conv=notrunc bs=1M count=20
dd if=Kernel of=pflash.bin conv=notrunc bs=1M seek=50
where Kernel is the Linux kernel image file, and my_bootloader.img is simply the objcopy output of the ELF file:
aarch64-linux-gnu-objcopy -O binary my_bootloader.elf my_bootloader.img
The ELF file itself is created in the following manner:
aarch64-linux-gnu-ld -nostdlib -T link.ld my_bootloader.o -o my_bootloader.elf
Here is my_bootloader.S
.section ".text.startup"
.global _start
_start:
ldr x30, =STACK_TOP
mov sp, x30
ldr x0, =RAM_START
ldr x1, =0x43280000
br x1
ret
As you can see, all I have done so far is:
Set up the stack
Presumably loaded the DTB address into x0 (just as the Linux boot protocol demands)
Branched to the location of the Linux kernel
So I have not yet loaded the initramfs.cpio.gz which contains the file system, but I should normally already get at least some output from the kernel since the DTB was loaded.
My question is: have I loaded it correctly? I guess the simple answer is no, but basically I have no clue where QEMU puts the DTB in my RAM, and after looking everywhere in the documentation I cannot seem to find this information.
I would much appreciate it if someone could tell me where QEMU loads the DTB, so I can put its address into x0 and the kernel can happily read it!
Thanks in advance!
Where the DTB (if any) is placed is board specific. The QEMU documentation for the Arm 'virt' board tells you where the DTB is:
https://www.qemu.org/docs/master/system/arm/virt.html#hardware-configuration-information-for-bare-metal-programming
However, your command line is incorrect. "-kernel pflash.bin" says "this file is a Linux kernel, boot it in the way that the Linux kernel should be booted". What you want is "load this file into the flash, and start up in the way that the CPU would start out of reset on real hardware". For that you want one of the other ways of loading a guest binary (-bios is probably simplest). And you probably don't want to pass QEMU a -initrd option, either, since that is intended for either (a) QEMU's builtin bootloader or (b) QEMU-aware bootloaders that know how to extract a kernel and initrd from the fw-cfg device.
PS: If you tell QEMU to provide more than one guest CPU then your bootloader will need to deal with the secondary CPUs. That either means using PSCI to start them up, or else handling the fact that all the CPUs start executing the same code out of reset (which one depends on how you choose to start QEMU). You're better off sticking to '-smp 1' to start off with, and come back and deal with SMP later.
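Putting that advice together, a minimal invocation might look roughly like this (a sketch only; it assumes the bootloader in pflash.bin is linked to run from the start of the flash):
qemu-system-aarch64 -nographic -machine virt -cpu cortex-a72 -smp 1 -m 2G -bios pflash.bin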

How to boot the ubuntu cloud image with a customized kernel

I'm trying to use my own Linux kernel, built from source, with an Ubuntu cloud image and boot it using QEMU. My customized kernel is outside of the Ubuntu image:
$ ls
kernel ubuntu-20.04-amd64.img ...
Here is the command line I used:
sudo qemu-system-x86_64 \
-enable-kvm -cpu host -smp 2 -m 4096 -nographic \
-drive id=root,media=disk,file=ubuntu-20.04-amd64.img \
-kernel ./kernel/arch/x86/boot/bzImage \
-append "root=/dev/sda console=ttyS0" \
-device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp:127.0.0.1:5555-:22
When I boot it, I can see the following log:
[ 0.875446] List of all partitions:
[ 0.875736] 0800 4194304 sda
[ 0.875736] driver: sd
[ 0.876259] 0801 4194303 sda1 00000000-01
[ 0.876259]
[ 0.876893] No filesystem could mount root, tried:
[ 0.876893] ext3
[ 0.877435] ext2
[ 0.877610] ext4
[ 0.877834]
[ 0.878149] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,0)
Any suggestions?
The log you quote says that the disk image is partitioned: that is, sda is the entire (virtual) disk, and it has a partition table with one partition named sda1. Your "append" command line asks to use '/dev/sda' as if the disk image had only a single filesystem on it. Try '/dev/sda1' instead.
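For example, keeping everything else in the original command the same, only the -append argument changes:
sudo qemu-system-x86_64 \
-enable-kvm -cpu host -smp 2 -m 4096 -nographic \
-drive id=root,media=disk,file=ubuntu-20.04-amd64.img \
-kernel ./kernel/arch/x86/boot/bzImage \
-append "root=/dev/sda1 console=ttyS0" \
-device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp:127.0.0.1:5555-:22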

Kernel Panic with ramfs on embedded device: No filesystem could mount root

I'm working on an embedded ARM device running Linux (kernel 3.10), with NAND memory for storage. I'm trying to build a minimal Linux system which will reside on its own partition and carry out updates of the main firmware.
The kernel uses a very minimal root fs which is stored in a ramfs. However, I can't get it to boot. I get the following error:
[ 0.794113] List of all partitions:
[ 0.797600] 1f00 128 mtdblock0 (driver?)
[ 0.802669] 1f01 1280 mtdblock1 (driver?)
[ 0.807697] 1f02 1280 mtdblock2 (driver?)
[ 0.812735] 1f03 8192 mtdblock3 (driver?)
[ 0.817761] 1f04 8192 mtdblock4 (driver?)
[ 0.822794] 1f05 8192 mtdblock5 (driver?)
[ 0.827820] 1f06 82944 mtdblock6 (driver?)
[ 0.832850] 1f07 82944 mtdblock7 (driver?)
[ 0.837876] 1f08 12288 mtdblock8 (driver?)
[ 0.842906] 1f09 49152 mtdblock9 (driver?)
[ 0.847928] No filesystem could mount root, tried: squashfs
[ 0.853569] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
[ 0.861806] CPU: 0 PID: 1 Comm: swapper Not tainted 3.10.73 #11
[ 0.867732] [<800133ec>] (unwind_backtrace+0x0/0x12c) from [<80011a50>] (show_stack+0x10/0x14)
(...etc)
The root fs is built by the build process, using the following (simplified for clarity):
# [Copy some things to $(ROOTFS_OUT_DIR)/mini_rootfs]
cd $(ROOTFS_OUT_DIR)/mini_rootfs && find . | cpio --quiet -o -H newc > $(ROOTFS_OUT_DIR)/backup.cpio
gzip -f -9 $(ROOTFS_OUT_DIR)/backup.cpio
This creates $(ROOTFS_OUT_DIR)/backup.cpio.gz
The kernel is then built like this:
#$(MAKE) -C $(LINUX_SRC_DIR) O=$(LINUX_OUT_DIR) \
CONFIG_INITRAMFS_SOURCE="$(ROOTFS_OUT_DIR)/backup.cpio.gz" \
CONFIG_INITRAMFS_ROOT_UID=0 CONFIG_INITRAMFS_ROOT_GID=0
I think this means it uses the same config as the main firmware (built elsewhere), but supplies the minimal ramfs image using CONFIG_INITRAMFS_SOURCE.
According to kernel.org, an initramfs is always built in anyway, and CONFIG_INITRAMFS_SOURCE is all that is needed to specify a pre-made root fs to use. There are no build errors to indicate that there is a problem creating the ramfs, and the size of the resulting kernel looks about right: backup.cpio.gz is about 3.6 MB, the final zImage is 6.1 MB, and the image is written to a partition which is 8 MB in size.
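(One way to sanity-check the pre-made archive itself on the host is to list its contents, for example:)
zcat backup.cpio.gz | cpio -it | head    # entries should include ./linuxrc and ./bin/busybox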
To use this image, I set some flags used by the (custom) boot loader which tell it to boot from the minimal partition, and also set a different command line for the kernel. Here is the command line used to boot:
console=ttyS0 rootfs=ramfs root=/dev/ram rw rdinit=/linuxrc mem=220M
Note that the minimal root fs contains /linuxrc, which is actually a link to /bin/busybox:
lrwxrwxrwx 1 root root 11 Nov 5 2015 linuxrc -> bin/busybox
Why doesn't this boot? Why is it trying the "squashfs" filesystem, and is that wrong?
SOLVED! It turned out that a file name used by the (custom) build system had changed as part of an update, and so it was not putting the correct kernel image into the firmware package. I was actually trying to boot the wrong kernel with the "rootfs=ramfs" parameter, one which didn't have a ramfs.
So, for future reference, this error occurs if you specify "rootfs=ramfs" but your kernel wasn't built with any root fs built in (i.e. CONFIG_INITRAMFS_SOURCE=... was not specified).
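In .config terms, a kernel that boots with this command line needs something along these lines (an illustrative fragment; the path is a placeholder for wherever backup.cpio.gz actually lives):
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="/path/to/backup.cpio.gz"
CONFIG_INITRAMFS_ROOT_UID=0
CONFIG_INITRAMFS_ROOT_GID=0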

Kernel Panic - Not Syncing. Segfault at Init

I recently tried to install the php5-gd package on my Debian VMware server and it failed at libc6-i386.
Afterwards, every command other than cd caused a segmentation fault, and the server would no longer boot, showing the following error:
[ 4.808086] init[1]: segfault at 0 ip (null) sp bff4645c error 14 in init[8048000+8000]
[ 4.808372] Kernel panic - not syncing: Attempted to kill init!
[ 4.808442] Pid: 1, comm: init Not tainted 3.2.0-4-686-pae #1 Debian 3.2.65-1
[ 4.808512] Call Trace:
(The call trace continued in an attached image.)
I am at a complete loss as to what to do at this point. Any help or direction would be appreciated.
Edit: I've since uploaded debian-live-8.3.0-i386-standard to the VMware store and booted the broken VM with the live CD.
Now I am in the live CD terminal but not sure what to do next. I did an lsblk and noted that the broken VM's boot partition is sda > sda2, and that's all I have done so far. Do I need to mount this somewhere now?
Edit 2: I've now mounted the broken partition in the live CD; however, when I try to chroot, I get a segmentation fault:
# mkdir -p /mnt/tcs1/boot
# mount /dev/tcs1/root /mnt/tcs1
# mount /dev/sda1 /mnt/tcs1/boot
# mount -t proc none /mnt/tcs1/proc
# mount -o bind /dev /mnt/tcs1/dev
# mount -o bind /run /mnt/tcs1/run
# mount -o bind /sys /mnt/tcs1/sys
# chroot /mnt/tcs1 /bin/bash
# Segmentation fault
Solved:
I relinked ld-linux.so.2 to /lib/i386-linux-gnu/ld-2.19.so from a rescue CD and managed to chroot in.
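A sketch of that fix, using the mount point from the commands above (the exact libc version and paths depend on the installation):
ln -sf /lib/i386-linux-gnu/ld-2.19.so /mnt/tcs1/lib/ld-linux.so.2
chroot /mnt/tcs1 /bin/bash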

Using qemu to boot OpenSUSE (or any other OS) with custom kernel?

This was flagged as a duplicate, but I could not find an answer there, so I'm posting here.
I want to run OpenSUSE as a guest with a custom kernel image which is on my host machine. I'm trying:
$ qemu-system-x86_64 -hda opensuse.img -m 512 \
-kernel ~/kernel/linux-git/arch/x86_64/boot/bzImage \
-initrd ~/kernel/linux-git/arch/x86_64/boot/initrd.img \
-boot c
But it boots into BusyBox instead, and uname -a shows Linux (none). Also, using -append "root=/dev/sda" (as suggested in the linked question) does not seem to work. How do I tell the kernel image to boot into OpenSUSE?
I have OpenSUSE installed into opensuse.img, and:
$ qemu-system-x86_64 -hda opensuse.img -m 512 -boot c
boots it with the stock kernel.
Most virtual machines are booted from a disk image or an ISO file, but KVM can directly load a Linux kernel into memory skipping the bootloader. This means you don't need an image file containing the kernel and boot files. Instead, you can run a kernel directly like this:
qemu-kvm -kernel arch/x86/boot/bzImage -initrd initramfs.gz -append "console=ttyS0" -nographic
These flags directly load a kernel and initramfs from the host filesystem without the need to generate a disk image or configure a bootloader.
The optional -initrd flag loads an initramfs for the kernel to use as the root filesystem.
The -append flag adds kernel parameters and can be used to enable the serial console.
The -nographic option restricts the virtual machine to just a serial console and therefore keeps all test kernel output in your terminal rather than in a graphical window.
Take a look at the link below; it has a lot more info (thanks to the person who wrote all of that):
http://blog.vmsplice.net/2011/02/near-instant-kernel-development-cycle.html
For an ARM board such as the Raspberry Pi, booting with a custom kernel usually looks like this:
qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1" -hda 2013-05-25-wheezy-raspbian.img
Here -hda is your SUSE image. You have to find which partition your root fs is on; you can check with:
fdisk -l <your image>
If there is only one partition, pass /dev/sda; if the root fs is on the second partition, pass /dev/sda2.
I don't think an initrd image is required here; the kernel will usually mount the main root fs directly when you boot.
So try this:
qemu-system-x86_64 -hda opensuse.img -m 512 -kernel ~/kernel/linux-git/arch/x86_64/boot/bzImage -append "root=/dev/sda" -boot c
Note: check exactly which partition your root fs is on and pass the matching /dev/sda*.
I'm not sure, so just try the one above. Also, you mention that uname -a gives Linux (none); this is because you have to set a name while configuring your kernel, otherwise it defaults to (none).
