How to find the root device name of a debootstrap image - linux

I'm creating an Ubuntu 20.04 QEMU image with debootstrap (debootstrap --arch amd64 focal .). However, when I tried to boot it with a kernel I compiled myself, it failed to boot:
[ 0.678611] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[ 0.681639] Call Trace:
...
[ 0.685135] ret_from_fork+0x35/0x40
[ 0.685712] Kernel Offset: disabled
[ 0.686182] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---
I'm using the following command:
sudo qemu-system-x86_64 \
-enable-kvm -cpu host -smp 2 -m 4096 -no-reboot -nographic \
-drive id=root,media=disk,file=ubuntu2004.img \
-net nic,macaddr=00:da:bc:de:00:13 -net tap,ifname=tap0,script=no \
-kernel kernel/arch/x86/boot/bzImage \
-append "root=/dev/sda1 console=ttyS0"
So I'm guessing the error comes from the wrong root device name (/dev/sda1 in my case). Is there any way to find the correct root device name?
Update from @Peter Maydell's comment:
[ 0.302200] VFS: Cannot open root device "sda1" or unknown-block(0,0): error -6
[ 0.302413] Please append a correct "root=" boot option; here are the available partitions:
[ 0.302824] fe00 4194304 vda
[ 0.302856] driver: virtio_blk
[ 0.303152] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
Where vda should be the root device name.

This kind of "unable to mount root fs on unknown-block" error has several possible causes:
You asked the kernel to use the wrong device as the rootfs (or you didn't specify, and the built-in default is the wrong one)
You asked the kernel to use the right device as the rootfs, but the kernel doesn't have a device driver for it compiled in. (This might include more complicated cases like "the device is a PCI device and the kernel doesn't have the PCI controller driver compiled in.")
You asked the kernel to use the right device as the rootfs, and the kernel does have a driver for it, but it couldn't find the hardware, perhaps because the QEMU command line is incorrect
An important clue in figuring out which is the problem is to look at the part of the kernel log just before the "Kernel panic" part. The kernel should print a list of "available partitions", which are the devices it has a driver for and which are present. If that list contains a plausible-looking device name, as in your case (where "vda" is listed as provided by the "virtio_blk" driver), then you now know what the root device name should be, and all you need to do is fix the kernel command line, e.g. "root=/dev/vda". Note that this is a list of available partitions, so if your disk image has multiple partitions they will show up in the list as "vda1", "vda2", etc. (In this case it looks like your image is a single filesystem, not a disk image with multiple partitions, so only "vda" is in the list.)
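For example, keeping the rest of the QEMU command line from the question unchanged, only the root= argument needs to change. A minimal sketch, assuming the disk really is a single unpartitioned filesystem as the log suggests:
sudo qemu-system-x86_64 \
-enable-kvm -cpu host -smp 2 -m 4096 -no-reboot -nographic \
-drive id=root,media=disk,file=ubuntu2004.img \
-net nic,macaddr=00:da:bc:de:00:13 -net tap,ifname=tap0,script=no \
-kernel kernel/arch/x86/boot/bzImage \
-append "root=/dev/vda console=ttyS0"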
If the kernel's list of available partitions doesn't include anything that looks like the disk you were expecting, then either the kernel is missing the driver, or the QEMU command line doesn't have the option to provide the device. This is a little harder to debug, but there may be useful information earlier in the kernel bootup log where the kernel probes for hardware -- for instance there should be logging when the PCI controller is probed. You can also of course double-check the config file for your kernel to see if the right CONFIG options are set.
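For a virtio disk like the one in this log, a quick way to double-check the config is to grep the build's .config for the relevant options (a sketch; the option names are the standard upstream ones and may vary slightly between kernel versions):
grep -E 'CONFIG_VIRTIO(_PCI|_BLK)?=' .config
# expect CONFIG_VIRTIO=y, CONFIG_VIRTIO_PCI=y and CONFIG_VIRTIO_BLK=y for a virtio-blk root disk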
If you're using a standard distro kernel then these usually have all the usual devices built-in, and your first check should be your QEMU command line. If you built your own kernel from source, check your config, especially if you were trying to achieve a "minimal" kernel with only the desired drivers present.

Related

How to add VFIO-IOMMU in KVM virtual machine (Aarch64)?

I am using aarch64 Linux to test the VFIO-IOMMU feature in a KVM VM.
The host is a Cortex-A78 running Linux 5.10.104 (with VFIO_IOMMU enabled). The guest OS is Ubuntu 22.04 (Linux 5.15, also with VFIO_IOMMU enabled).
The VM is created with virt-manager and uses virtio devices (NIC, SCSI, etc.).
But I could not find any information on the internet about how to add a VFIO-IOMMU device to the VM.
I tried adding the following line to the vm.xml:
<iommu model='smmuv3'/>
But after the guest OS booted, I found the following logs about the iommu, but nothing about SMMUv3:
t@t:~$ dmesg | grep -i mmu
[ 0.320696] iommu: Default domain type: Translated
[ 0.321218] iommu: DMA domain TLB invalidation policy: strict mode
So how can VFIO-IOMMU be supported/added to the VM in this case?
My qemu-system-aarch64 is version 4.2.1; I am not sure whether it supports SMMUv3 for ARMv8.
I confirmed that QEMU 6.2.0 supports SMMUv3. The guest OS log then shows the following:
[ 0.578157] arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0
[ 0.578841] arm-smmu-v3 arm-smmu-v3.0.auto: ias 44-bit, oas 44-bit (features 0x00008305)
[ 0.580289] arm-smmu-v3 arm-smmu-v3.0.auto: allocated 65536 entries for cmdq
[ 0.581060] arm-smmu-v3 arm-smmu-v3.0.auto: allocated 128 entries for evtq
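For reference, when launching QEMU directly rather than through libvirt, the SMMUv3 is enabled as a property of the virt machine. A rough sketch (all other options omitted):
qemu-system-aarch64 -M virt,iommu=smmuv3 ...   # plus the usual -cpu/-m/-drive/... options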

Using qemu+kvm is slower than using qemu in rv6 (xv6 Rust port)

We are currently working on the rv6 project, which is porting MIT's educational operating system xv6 to Rust. Our code is located here.
We use QEMU's virt platform to execute rv6, and it works well under plain QEMU.
The command we execute on the arm machine is:
RUST_MODE=release TARGET=arm KVM=yes GIC_VERSION=3
qemu-system-aarch64 -machine virt -kernel kernel/kernel -m 128M -smp 80 -nographic -drive file=fs.img,if=none,format=raw,id=x0,copy-on-read=off -device virtio-blk-device,drive=x0,bus=virtio-mmio-bus.0 -cpu cortex-a53 -machine gic-version=3 -net none
To experiment with a KVM speed boost, we made rv6 support the arm architecture on an arm machine. The arm architecture's driver code is located here.
The problem is that when we use QEMU with KVM, performance is significantly reduced.
The command we execute on the arm machine with KVM is:
qemu-system-aarch64 -machine virt -kernel kernel/kernel -m 128M -smp 80 -nographic -drive file=fs.img,if=none,format=raw,id=x0,copy-on-read=off -device virtio-blk-device,drive=x0,bus=virtio-mmio-bus.0 -cpu host -enable-kvm -machine gic-version=3 -net none
We ran the following microbenchmarks and measured:
Write 500 bytes syscall, 10,000 times: kvm disabled: 4,500,000 us; kvm enabled: 29,000,000 us (> 5 times slower)
Open/Close syscall, 10,000 times: kvm disabled: 12,000,000 us; kvm enabled: 29,000,000 us (> 2 times slower)
Getppid syscall, 10,000 times: kvm disabled: 735,000 us; kvm enabled: 825,000 us (almost the same)
Simple calculation (a = a * 1664525 + 1013904223), 100 million times: kvm disabled: 2,800,000 us; kvm enabled: 65,000,000 us (> 20 times slower)
The elapsed time was measured with rv6's uptime_as_micro syscall.
These results were hard to understand, so we first tried to find the bottleneck in rv6's boot process, because finding the bottleneck while a user program is running was too difficult.
We found that the first noticeable bottleneck in rv6's boot process was here:
run.as_mut().init();
self.runs().push_front(run.as_ref());
As far as we know, this part just initializes a list and pushes an element onto it. So we thought that, for some reason, KVM is not actually being used, which makes the result worse. Also, this part runs before interrupts are even enabled, so we thought arm's GIC and other interrupt-related machinery were unrelated to the problem.
So, how can we get better performance when using KVM with QEMU?
To solve this problem, we have already tried the following:
Changed the QEMU version (4.2, 6.2) and the virt machine version, and tried various qemu-kvm options: cpu, drive cache, copy-on-read, kernel_irqchip, number of CPU cores, etc.
Looked for a KVM hypercall to use, but none exists on arm64.
Ran lmbench on Ubuntu under QEMU with KVM to check that KVM itself is okay. We found that Ubuntu with KVM is much faster than plain QEMU.
Checked whether the 16550A UART print code becomes very slow when KVM is enabled, which would skew the benchmark results. With the bottleneck code removed, rv6's boot time was almost the same with and without KVM.
Checked for other people in the same situation, but this superuser page did not help. Our clocksource is arch_sys_counter (checked as in the sketch after this list).
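For completeness, this is roughly how the current clocksource can be read from sysfs on the Linux host (a sketch; the path is the standard clocksource sysfs location):
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
# prints the active clocksource, e.g. arch_sys_counter on arm64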
Our Environment
qemu-system-aarch64 version: 4.2.1 (Debian 1:4.2-3ubuntu6.19)
CPU model: Neoverse-N1
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
CPU(s): 80
Ubuntu 20.04.3 LTS (Focal Fossa)
/dev/kvm exists
The main reason is explained here: https://forum.osdev.org/viewtopic.php?f=1&t=56120
rv6's memory attribute setting was incorrect.
Changing the mapping from Device memory to cacheable Normal memory via the MAIR_EL1 ARM register solved the problem.

Issue with binding platform driver to the platform device

I want to get the MT7622 SoC's ethernet controller working and I am facing this issue. I've compiled the mtk_soc_eth driver as the mtk_eth module, and I have an entry in the mt7622-bananapi-bpi-r64.dts device tree for a device compatible with the driver.
During boot this module is loaded into the system automatically (I think after the rootfs is mounted):
[root@nixos:~]# lsmod | grep mtk_eth
mtk_eth 69632 0
dsa_core 98304 1 mtk_eth
And it seems to be registered as a platform driver:
[root@nixos:~]# ls /sys/bus/platform/drivers/mtk_soc_eth
bind module uevent unbind
Also, after boot I have a platform device:
[root@nixos:~]# ls /sys/bus/platform/devices/1b100000.ethernet
driver_override
modalias
of_node
power
subsystem
supplier:platform:10006000.power-controller
supplier:platform:10209000.apmixedsys
supplier:platform:10210000.topckgen
supplier:platform:10211000.pinctrl
supplier:platform:1b000000.syscon
supplier:platform:1b128000.sgmiisys
uevent
waiting_for_supplier
However, they are not bound to each other for some reason. Moreover, when I try to bind them manually, I get an error:
[root@nixos:~]# echo '1b100000.ethernet' > /sys/bus/platform/drivers/mtk_soc_eth/bind
-bash: echo: write error: Resource temporarily unavailable
How can I find out why the ethernet device doesn't bind to the driver?
Well, it seems I figured out where the problem is. The Linux kernel has rich debug options: I enabled dynamic debug to trace what happens inside the __driver_probe_device function (https://github.com/torvalds/linux/blob/master/drivers/base/dd.c#L730):
[root@nixos:~]# echo 'file dd.c +p' > /sys/kernel/debug/dynamic_debug/control
[root@nixos:~]# echo 'file core.c +p' > /sys/kernel/debug/dynamic_debug/control
Then I tried again to bind the driver and the device:
[root@nixos:~]# echo '1b100000.ethernet' > /sys/bus/platform/drivers/mtk_soc_eth/bind
-bash: echo: write error: Resource temporarily unavailable
[root@nixos:~]# dmesg -T | tail
...
[Sat Jan 1 00:03:27 2000] bus: 'platform': __driver_probe_device: matched device 1b100000.ethernet with driver mtk_soc_eth
[Sat Jan 1 00:03:27 2000] platform 1b100000.ethernet: error -EPROBE_DEFER: supplier 1b000000.syscon not ready
It seems that one of the supplier devices (1b000000.syscon) is not ready (although, at the same time, /sys/bus/platform/devices/1b100000.ethernet/waiting_for_supplier was still 0 for some reason). I need to load the clk-mt7622-eth driver as well.
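A rough sketch of the fix, assuming clk-mt7622-eth was built as a module (the module and device names are the ones from the logs above):
modprobe clk-mt7622-eth
# once the clock supplier has probed, deferred probing should retry the ethernet device
# automatically; if not, bind it by hand as before:
echo '1b100000.ethernet' > /sys/bus/platform/drivers/mtk_soc_eth/bind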

Error running qemu-system-riscv using root.bin and vmlinux

I am following the riscv.org guides for toolchain building. When emulating with QEMU, running a locally built root filesystem (with BusyBox) and Linux kernel, I encounter the error below:
Running QEMU using the locally built root.bin and kernel image
danny@danny:~/test/riscv/work$ qemu-system-riscv -hda root-local.bin -kernel vmlinux-local -nographic
unassigned address was called?
with addr: 102000735F80006E
not implemented for riscv
Running QEMU using the riscv.org stock root.bin and kernel image
danny@danny:~/test/riscv/work$ qemu-system-riscv -hda root.bin -kernel vmlinux -nographic
[ 0.150000] io scheduler cfq registered (default)
[ 0.160000] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[ 0.160000] serial8250: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[ 0.160000] TCP: cubic registered
[ 0.160000] htifbd: detected disk with ID 1
[ 0.160000] htifbd: adding htifbd0
[ 0.160000] VFS: Mounted root (ext2 filesystem) readonly on device 254:0.
[ 0.160000] devtmpfs: mounted
[ 0.160000] Freeing unused kernel memory: 64K (ffffffff80002000 - ffffffff80012000)
[ 0.200000] EXT2-fs (htifbd0): warning: mounting unchecked fs, running e2fsck is recommended
#uname -a
Linux ucbvax 3.14.15-g4073e84-dirty #4 Sun Jan 11 07:17:06 PST 2015 riscv GNU/Linux
When testing QEMU with the root.bin and vmlinux downloaded from riscv.org, things seem OK, but I can't see the BusyBox startup messages and the terminal can't halt.
I have tested QEMU with various combinations, with the results below:
root.bin       vmlinux        RESULT
local-built    local-built    Unassigned address was called ....
Downloaded     Downloaded     Seems OK but no BusyBox startup messages
local-built    Downloaded     Kernel panic - not syncing: No working init found
Downloaded     local-built    Unassigned address was called ....
We are starting a project to build and fabricate a RISC-V silicon chip for makers around the world, and we are testing the toolchain now in order to port Ubuntu Core and Android to RISC-V. Any idea what might have gone wrong?
Thanks.
QEMU hasn't been fully updated to support the new RISC-V privileged spec (github issue). The update is currently underway.
For an ISA simulator, spike is a good alternative. It may not have all of the platform features of QEMU, but it could serve as a starting point while the QEMU update completes.
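If you just need to boot the locally built kernel in the meantime, the riscv-tools flow of that era ran it under spike roughly like this (a sketch based on the old riscv.org instructions; treat the exact flags and the bbl bootloader step as assumptions and check the README of your checkout):
spike +disk=root-local.bin bbl vmlinux-local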

linux: running self-compiled kernel in qemu: VFS: Unable to mount root fs on unknown-block(0,0)

I'm trying to get this running and don't know what I'm doing wrong. I created a Debian.img (a disk in raw format, made with Virtual Machine Manager - the GUI for libvirt, I guess) and installed Debian with no trouble. Now I want to get this running with a self-compiled kernel. I copied the .config file from my working (virtual) Debian and made no further changes at all. This is what I do:
qemu-system-x86_64 -m 1024M -kernel /path/to/bzImage -hda /var/lib/libvirt/images/Debian.img -append "root=/dev/sda1 console=ttyS0" -enable-kvm -nographic
But during boot I always get this error message:
[ 0.195285] Initializing network drop monitor service
[ 0.196177] List of all partitions:
[ 0.196641] No filesystem could mount root, tried:
[ 0.197292] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[ 0.198355] Pid: 1, comm: swapper/0 Not tainted 3.2.46 #7
[ 0.199055] Call Trace:
[ 0.199386] [<ffffffff81318c30>] ? panic+0x95/0x19e
[ 0.200049] [<ffffffff81680f7d>] ? mount_block_root+0x245/0x271
[ 0.200834] [<ffffffff8168112f>] ? prepare_namespace+0x133/0x169
[ 0.201590] [<ffffffff81680c94>] ? kernel_init+0x14c/0x151
[ 0.202273] [<ffffffff81325a34>] ? kernel_thread_helper+0x4/0x10
[ 0.203022] [<ffffffff81680b48>] ? start_kernel+0x3c1/0x3c1
[ 0.203716] [<ffffffff81325a30>] ? gs_change+0x13/0x13
What am I doing wrong? Can someone please help? Do I need to pass the -initrd option? I tried that already but had no luck.
I figured it out by myself. Some time has passed, but as I recall, the solution was to provide an initial ramdisk. This is how I got it working with hardware acceleration.
Compiling
make defconfig
CONFIG_EXT4_FS=y
CONFIG_IA32_EMULATION=y
CONFIG_VIRTIO_PCI=y (Virtualization -> PCI driver for virtio devices)
CONFIG_VIRTIO_BALLOON=y (Virtualization -> Virtio balloon driver)
CONFIG_VIRTIO_BLK=y (Device Drivers -> Block -> Virtio block driver)
CONFIG_VIRTIO_NET=y (Device Drivers -> Network device support -> Virtio network driver)
CONFIG_VIRTIO=y (automatically selected)
CONFIG_VIRTIO_RING=y (automatically selected)
---> see http://www.linux-kvm.org/page/Virtio
Enable paravirt in config
Disable the NMI watchdog on the HOST in order to use performance counters in the GUEST. You may ignore this; the command to disable it is sketched after the link below.
cat /proc/sys/kernel/nmi_watchdog
---> see http://kvm.et.redhat.com/page/Guest_PMU
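For reference, the watchdog can be turned off at runtime through the same procfs knob shown above (a sketch; the setting resets at reboot unless made persistent via sysctl):
echo 0 | sudo tee /proc/sys/kernel/nmi_watchdog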
Start in Qemu
sudo qemu-system-x86_64 -m 1024M -hda /var/lib/libvirt/images/Debian.img -enable-kvm -initrd /home/username/compiled_kernel/initrd.img-3.2.46 -kernel /home/username/compiled_kernel/bzImage -append "root=/dev/sda1 console=ttyS0" -nographic -redir tcp:2222::22 -cpu host -smp cores=2
Start in KVM
Kernel path: /home/username/compiled_kernel/bzImage
Initrd path: /home/username/compiled_kernel/initrd.img-3.2.46
Kernel arguments: root=/dev/sda1
Hope this helps if someone has the same issues.
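If you still need to create a matching initrd for a self-compiled kernel, one option on a Debian-based system is mkinitramfs (a sketch; it assumes the kernel's modules are already installed under /lib/modules/3.2.46, and the output path is just the one used above):
sudo mkinitramfs -o /home/username/compiled_kernel/initrd.img-3.2.46 3.2.46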
This is for the AArch64 (arm64) on QEMU case.
I was following this good tutorial: https://ibug.io/blog/2019/04/os-lab-1/
In my case I was met with this error message:
---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0) ]---
I did mknod dev/ram b 1 0 in the initrd.
Later I noticed there was an error message above that line implying the kernel didn't support the RAM disk. So I edited .config and set these items:
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=1
CONFIG_BLK_DEV_RAM_SIZE=131072 (= 128 MB; the number is in units of 1024 bytes)
And then the problem was gone! The initrd was mounted on /dev/ram and the first init process ran well.
It turns out that running make defconfig didn't set these values by default for me.
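For context, an old-style ramdisk image of the kind that gets mounted on /dev/ram can be assembled roughly like this (a sketch, assuming a BusyBox root tree already exists in ./rootfs; the size and paths are illustrative):
dd if=/dev/zero of=initrd.img bs=1M count=32   # empty 32 MB image
mkfs.ext2 -F initrd.img                        # format it as ext2
mkdir -p mnt && sudo mount -o loop initrd.img mnt
sudo cp -a rootfs/. mnt/                       # copy the BusyBox root tree in
sudo mknod mnt/dev/ram b 1 0                   # the device node mentioned above
sudo umount mnt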
Maybe your system image file is bad and cannot be mounted.
You may try these commands to mount the image file and check whether it is a valid root filesystem for Linux:
losetup /dev/loop0 /var/lib/libvirt/images/Debian.img
kpartx -av /dev/loop0
mkdir -p /mnt/tmp
mount /dev/mapper/loop0p1 /mnt/tmp
The most likely thing is that the kernel doesn't know the correct device to boot from.
You can supply this explicitly from the qemu command line. So if the root is on partition 2, you can say:
qemu -kernel /path/to/bzImage \
-append root=/dev/sda2 \
-hda /path/to/hda.img \
.
.
.
Notice I use /dev/sda2 even though the disk is attached as IDE: modern kernels drive IDE disks through the same SCSI/ATA layer as SATA, so they show up as sdX.
The other possibilities are that, as @Houcheng says, your root FS is corrupted, or else that the kernel does not have that particular FS type built in. But I think you would get a different error if that were the case.
QEMU version
QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.11), Copyright (c) 2003-2008 Fabrice Bellard
Running a Buildroot-built 4.9.6 kernel with the following arguments passed:
qemu-system-x86_64 -kernel output/images/bzImage -hda output/images/rootfs.qcow2 -boot c -m 128 -append root=/dev/sda -localtime -no-reboot -name rtlinux -net nic -net user -redir tcp:2222::22 -redir tcp:3333::3333
it only accepted /dev/sda as an option for the root fs to mount (the kernel shows a little hint about the available root fs options once it boots and hangs with the following error):
VFS: Cannot open root device "hda" or unknown-block(0,0): error -6
Please append a correct "root=" boot option; here are the available partitions:
0800 61440 sda driver: sd

Resources