How to boot the Ubuntu cloud image with a customized kernel

I am trying to boot an Ubuntu cloud image under QEMU using my own Linux kernel built from source. The customized kernel lives outside the Ubuntu image:
$ ls
kernel ubuntu-20.04-amd64.img ...
Here is the command line I used:
sudo qemu-system-x86_64 \
-enable-kvm -cpu host -smp 2 -m 4096 -nographic \
-drive id=root,media=disk,file=ubuntu-20.04-amd64.img \
-kernel ./kernel/arch/x86/boot/bzImage \
-append "root=/dev/sda console=ttyS0" \
-device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp:127.0.0.1:5555-:22
When I boot it, I can see the following log:
[ 0.875446] List of all partitions:
[ 0.875736] 0800 4194304 sda
[ 0.875736] driver: sd
[ 0.876259] 0801 4194303 sda1 00000000-01
[ 0.876259]
[ 0.876893] No filesystem could mount root, tried:
[ 0.876893] ext3
[ 0.877435] ext2
[ 0.877610] ext4
[ 0.877834]
[ 0.878149] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,0)
Any suggestions?

The log you quote shows that the disk image is partitioned: sda is the entire (virtual) disk, and it has a partition table with one partition, sda1. Your -append command line passes root=/dev/sda, as if the image held a single bare filesystem. Try root=/dev/sda1 instead.
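Concretely, that is the same invocation from the question with only the root= parameter changed:

```shell
# Identical to the original command, except root= now points at the
# first partition (sda1) rather than the whole disk (sda).
sudo qemu-system-x86_64 \
  -enable-kvm -cpu host -smp 2 -m 4096 -nographic \
  -drive id=root,media=disk,file=ubuntu-20.04-amd64.img \
  -kernel ./kernel/arch/x86/boot/bzImage \
  -append "root=/dev/sda1 console=ttyS0" \
  -device e1000,netdev=net0 \
  -netdev user,id=net0,hostfwd=tcp:127.0.0.1:5555-:22
```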

Related

Booting kernel from SD in qemu (ARM) with u-boot

I'm quite new to embedded systems and I'm playing around with ARM on QEMU. I've run into a problem booting a Linux kernel image from an emulated SD card on Versatile Express with a Cortex-A9 CPU.
I prepared everything in the following order: first, I built the kernel with vexpress_defconfig using the appropriate ARM toolchain. Then I built U-Boot with vexpress_ca9x4_defconfig. Everything went just fine. The Linux kernel source is version 4.13 (latest stable from git); the U-Boot version is 2017.09-00100. Then I prepared an SD image:
dd if=/dev/zero of=sd.img bs=4096 count=4096
mkfs.vfat sd.img
mount sd.img /mnt/tmp -o loop,rw
cp kernel/arch/arm/boot/zImage /mnt/tmp
umount /mnt/tmp
Next I try to run qemu as follows:
qemu-system-arm -machine vexpress-a9 -cpu cortex-a9 -m 128M -dtb kernel/arch/arm/boot/dts/vexpress-v2p-ca9.dtb -kernel uboot/u-boot -sd sd.img -nographic
U-Boot loads successfully and gives me a command prompt, and the SD card really is there and attached:
=> mmcinfo
Device: MMC
Manufacturer ID: aa
OEM: 5859
Name: QEMU!
Tran Speed: 25000000
Rd Block Len: 512
SD version 1.0
High Capacity: No
Capacity: 16 MiB
Bus Width: 1-bit
Erase Group Size: 512 Bytes
I attempt to load the compressed image from SD into memory and boot it:
fatload mmc 0:0 0x4000000 zImage
and everything looks ok
=> fatload mmc 0:0 0x4000000 zImage
reading zImage
3378968 bytes read in 1004 ms (3.2 MiB/s)
but then I want to boot the kernel and get an error:
=> bootz 0x4000000
Bad Linux ARM zImage magic!
I also tried U-boot images, crafted with u-boot's mkimage, like in this example:
uboot/tools/mkimage -A arm -C none -O linux -T kernel -d kernel/arch/arm/boot/Image -a 0x00010000 -e 0x00010000 uImage
also trying out -C gzip on zImage and different load/entry addresses, to no avail. The images were copied to sd.img. When I fatload an image and check it with iminfo, whichever options I try, I constantly get an error:
=> iminfo 0x4000000
## Checking Image at 04000000 ...
Unknown image format!
I'm totally confused, and information on this subject on the Internet is rather scarce. Please hint at what I'm doing wrong and point me in the right direction.
The QEMU in use is version 2.9.0.
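One way to narrow this down is to check the file itself on the host before blaming the boot flow: the ARM zImage format carries the magic word 0x016f2818 at byte offset 0x24, which is exactly what bootz verifies. A small host-side sketch (the function name is illustrative; it assumes a POSIX shell with od):

```shell
# Verify the ARM zImage magic (0x016f2818, stored little-endian) at
# byte offset 0x24 (36 decimal) - the same check bootz performs.
check_zimage_magic() {
    magic=$(od -An -tx1 -j 36 -N 4 "$1" | tr -d ' \n')
    [ "$magic" = "18286f01" ]
}
```

If this fails on the host too, the zImage itself is damaged; if it passes on the host but bootz still complains, the copy on the SD image or the chosen load address is the suspect.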

Kernel Panic with ramfs on embedded device: No filesystem could mount root

I'm working on an embedded ARM device running Linux (kernel 3.10), with NAND memory for storage. I'm trying to build a minimal Linux that will reside on its own partition and carry out updates of the main firmware.
The kernel uses a very minimal root fs which is stored in a ramfs. However, I can't get it to boot. I get the following error:
[ 0.794113] List of all partitions:
[ 0.797600] 1f00 128 mtdblock0 (driver?)
[ 0.802669] 1f01 1280 mtdblock1 (driver?)
[ 0.807697] 1f02 1280 mtdblock2 (driver?)
[ 0.812735] 1f03 8192 mtdblock3 (driver?)
[ 0.817761] 1f04 8192 mtdblock4 (driver?)
[ 0.822794] 1f05 8192 mtdblock5 (driver?)
[ 0.827820] 1f06 82944 mtdblock6 (driver?)
[ 0.832850] 1f07 82944 mtdblock7 (driver?)
[ 0.837876] 1f08 12288 mtdblock8 (driver?)
[ 0.842906] 1f09 49152 mtdblock9 (driver?)
[ 0.847928] No filesystem could mount root, tried: squashfs
[ 0.853569] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
[ 0.861806] CPU: 0 PID: 1 Comm: swapper Not tainted 3.10.73 #11
[ 0.867732] [<800133ec>] (unwind_backtrace+0x0/0x12c) from [<80011a50>] (show_stack+0x10/0x14)
(...etc)
The root fs is built by the build process, using the following (simplified for clarity):
# [Copy some things to $(ROOTFS_OUT_DIR)/mini_rootfs]
cd $(ROOTFS_OUT_DIR)/mini_rootfs && find . | cpio --quiet -o -H newc > $(ROOTFS_OUT_DIR)/backup.cpio
gzip -f -9 $(ROOTFS_OUT_DIR)/backup.cpio
This creates $(ROOTFS_OUT_DIR)/backup.cpio.gz
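It is cheap to confirm that the generated archive really is a gzipped newc-format cpio before baking it into the kernel; the newc header begins with the ASCII magic 070701. A sketch (the helper name is illustrative):

```shell
# Sanity-check a gzipped newc cpio archive: gzip integrity first, then
# the ASCII "070701" magic that a newc initramfs archive must start with.
check_initramfs_archive() {
    gzip -t "$1" 2>/dev/null &&
    [ "$(gzip -dc "$1" 2>/dev/null | head -c 6)" = "070701" ]
}
```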
The kernel is then built like this:
#$(MAKE) -C $(LINUX_SRC_DIR) O=$(LINUX_OUT_DIR) \
CONFIG_INITRAMFS_SOURCE="$(ROOTFS_OUT_DIR)/backup.cpio.gz" \
CONFIG_INITRAMFS_ROOT_UID=0 CONFIG_INITRAMFS_ROOT_GID=0
I think this means it uses the same config as the main firmware (built elsewhere), but supplies the minimal ramfs image using CONFIG_INITRAMFS_SOURCE.
According to the kernel.org documentation, the ramfs is always built anyway, and CONFIG_INITRAMFS_SOURCE is all that is needed to specify a pre-made root fs to use. There are no build errors to indicate a problem creating the ramfs, and the size of the resulting kernel looks about right: backup.cpio.gz is about 3.6 MB, the final zImage is 6.1 MB, and the image is written to a partition that is 8 MB in size.
To use this image, I set some flags used by the (custom) boot loader which tell it to boot from the minimal partition, and also set a different command line for the kernel. Here is the command line used to boot:
console=ttyS0 rootfs=ramfs root=/dev/ram rw rdinit=/linuxrc mem=220M
Note that the minimal root fs contains "/linuxrc", which is actually a link to /bin/busybox:
lrwxrwxrwx 1 root root 11 Nov 5 2015 linuxrc -> bin/busybox
Why doesn't this boot? Why is it trying "squashfs" filesystem, and is this wrong?
SOLVED! It turned out that a file name used by the (custom) build system had changed as part of an update, so it was not putting the correct kernel image into the firmware package. I was actually booting the wrong kernel with the "rootfs=ramfs" parameter, one that didn't have a ramfs.
So, for future reference: this error occurs if you specify "rootfs=ramfs" but your kernel wasn't built with any root fs embedded (CONFIG_INITRAMFS_SOURCE not specified).
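A cheap way to catch this earlier is to check the generated .config before packaging the kernel: if CONFIG_INITRAMFS_SOURCE is unset or empty, the image has no embedded root fs. A sketch (the helper name is hypothetical; the argument is the kernel output directory, as in the build snippet above):

```shell
# Fail unless the kernel was configured with a non-empty embedded
# initramfs source; $1 is the kernel build output directory.
check_initramfs_config() {
    grep -q '^CONFIG_INITRAMFS_SOURCE="..*"' "$1/.config"
}
```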

Kernel Panic - Not Syncing. Segfault at Init

I recently tried to install the php5-gd package on my Debian VMware server, and it failed at libc6-i386.
Afterwards, every command other than cd caused a segmentation fault, and the server would no longer boot, showing the following error:
[ 4.808086] init[1]: segfault at 0 ip (null) sp bff4645c error 14 in init[8048000+8000]
[ 4.808372] Kernel panic - not syncing: Attempted to kill init!
[ 4.808442] Pid: 1, comm: init Not tainted 3.2.0-4-686-pae #1 Debian 3.2.65-1
[ 4.808512] Call Trace:
(The rest of the trace was posted as an image.)
I am at a complete loss as to what to do at this moment. Any help or direction would be appreciated.
Edit: I've since uploaded debian-live-8.3.0-i386-standard to the VMware store and booted the broken VM with the live CD.
Now I am in the live CD terminal but not sure what to do next. I ran lsblk and noted that the broken VM's boot partition is sda > sda2, and that's all I have done so far. Do I need to mount this somewhere now?
Edit 2: I've now mounted the broken partition in the live CD; however, when I try to chroot, I get a segmentation fault:
# mkdir -p /mnt/tcs1/boot
# mount /dev/tcs1/root /mnt/tcs1
# mount /dev/sda1 /mnt/tcs1/boot
# mount -t proc none /mnt/tcs1/proc
# mount -o bind /dev /mnt/tcs1/dev
# mount -o bind /run /mnt/tcs1/run
# mount -o bind /sys /mnt/tcs1/sys
# chroot /mnt/tcs1 /bin/bash
# Segmentation fault
Solved:
I relinked ld-linux.so.2 to /lib/i386-linux-gnu/ld-2.19.so from a rescue CD and managed to chroot in.
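For future reference, that repair boils down to recreating the loader symlink inside the mounted root. A sketch using the paths from this question (the helper name is hypothetical, and the exact ld-*.so version depends on the installed glibc):

```shell
# Repoint ld-linux.so.2 inside a mounted (broken) root filesystem.
# $1 = mount point of the broken root, $2 = loader path inside that root.
fix_ld_symlink() {
    ln -sf "$2" "$1/lib/ld-linux.so.2"
}
# e.g. fix_ld_symlink /mnt/tcs1 /lib/i386-linux-gnu/ld-2.19.so
```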

QEMU fails to initialize NVMe device

I want to learn the NVMe driver in Linux, but I don't have a physical NVMe drive, so QEMU is currently my only choice. I set up the system with these steps, logged in as "root":
built QEMU 2.2.1 from source code cloned from the stable branch:
git clone -b stable-2.2 git://git.qemu-project.org/qemu
./configure --enable-linux-aio --target-list=x86_64-softmmu
make clean
make -j8
make install
create an image and install CentOS 6.6 into it:
qemu-img create -f raw ./vdisk/16GB.img 16G
qemu-system-x86_64 -m 1024 -cdrom ./vdisk/CentOS-6.6-x86_64-minimal.iso -hda ./vdisk/16GB.img
run CentOS 6.6 in QEMU with an NVMe device:
qemu-system-x86_64 -m 1024 -hda ./vdisk/16GB.img -device nvme
But it shows the error message below:
qemu-system-x86_64: -device nvme: Device initialization failed.
qemu-system-x86_64: -device nvme: Device 'nvme' could not be initialized
I also ran CentOS 6.6 in QEMU without the NVMe device, and it runs just fine:
qemu-system-x86_64 -m 1024 -hda ./vdisk/16GB.img
So, how can I get more debug information on this error? Or how can I solve this issue, if you have had a similar experience?
Thanks,
-Crane
Found the solution: create an image for the NVMe device, and start QEMU like this:
qemu-system-x86_64 -m 2048 -hda ./vdisk/16GB.img -drive file=./vdisk/nvme_dut.img,if=none,id=drv0 -device nvme,drive=drv0,serial=foo --enable-kvm -smp 2
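The backing file referenced by -drive has to exist before QEMU starts; any raw image of a reasonable size will do for driver experiments (the 1G size here is arbitrary):

```shell
# Create the raw backing file for the emulated NVMe drive.
qemu-img create -f raw ./vdisk/nvme_dut.img 1G
```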

Using qemu to boot OpenSUSE (or any other OS) with custom kernel?

This was flagged as a duplicate, but I could not find an answer there, so I'm posting here.
I want to run OpenSUSE as guest with a custom kernel image which is on my host machine. I'm trying:
$ qemu-system-x86_64 -hda opensuse.img -m 512 -kernel
~/kernel/linux-git/arch/x86_64/boot/bzImage -initrd
~/kernel/linux-git/arch/x86_64/boot/initrd.img -boot c
But it boots into BusyBox instead, and uname -a shows Linux (none). Also, using -append "root=/dev/sda" (as suggested at the link above) does not seem to work. How do I tell the kernel image to boot into OpenSUSE?
I have OpenSUSE installed into opensuse.img, and:
$ qemu-system-x86_64 -hda opensuse.img -m 512 -boot c
boots it with the stock kernel.
Most virtual machines are booted from a disk image or an ISO file, but KVM can directly load a Linux kernel into memory skipping the bootloader. This means you don't need an image file containing the kernel and boot files. Instead, you can run a kernel directly like this:
qemu-kvm -kernel arch/x86/boot/bzImage -initrd initramfs.gz -append "console=ttyS0" -nographic
These flags directly load a kernel and initramfs from the host filesystem without the need to generate a disk image or configure a bootloader.
The optional -initrd flag loads an initramfs for the kernel to use as the root filesystem.
The -append flag adds kernel parameters and can be used to enable the serial console.
The -nographic option restricts the virtual machine to just a serial console and therefore keeps all test kernel output in your terminal rather than in a graphical window.
Take a look at the link below; it has a lot more info (thanks to the author who wrote it all up):
http://blog.vmsplice.net/2011/02/near-instant-kernel-development-cycle.html
This approach is commonly used for ARM boards such as the Raspberry Pi. To boot with your custom kernel:
qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1" -hda 2013-05-25-wheezy-raspbian.img
Here -hda would be your opensuse.img. You have to find which partition of the image holds the root filesystem, which you can check with:
fdisk -l <your image>
If there is only one partition, pass /dev/sda; if the root filesystem is on the second partition, pass /dev/sda2.
I don't think an initrd image is required here; the kernel will mount the main root filesystem directly when you boot it.
So try this:
qemu-system-x86_64 -hda opensuse.img -m 512 -kernel ~/kernel/linux-git/arch/x86_64/boot/bzImage -append "root=/dev/sda" -boot c
Note: check exactly which partition your root filesystem is on, and pass the matching /dev/sda*.
I'm not sure; just try the command above. You also mention that uname -a shows Linux (none): that is because you have to set a name while configuring your kernel; otherwise it defaults to (none).