Error running qemu-system-riscv using root.bin and vmlinux - riscv

I am following the riscv.org guides for toolchain building. When emulating with QEMU using a locally built root filesystem (with BusyBox) and Linux kernel, I encounter the error below:
Running Qemu using local-built root.bin and kernel image
danny@danny:~/test/riscv/work$ qemu-system-riscv -hda root-local.bin -kernel vmlinux-local -nographic
unassigned address was called?
with addr: 102000735F80006E
not implemented for riscv
Running Qemu using riscv.org stock root.bin and kernel image
danny@danny:~/test/riscv/work$ qemu-system-riscv -hda root.bin -kernel vmlinux -nographic
[ 0.150000] io scheduler cfq registered (default)
[ 0.160000] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[ 0.160000] serial8250: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[ 0.160000] TCP: cubic registered
[ 0.160000] htifbd: detected disk with ID 1
[ 0.160000] htifbd: adding htifbd0
[ 0.160000] VFS: Mounted root (ext2 filesystem) readonly on device 254:0.
[ 0.160000] devtmpfs: mounted
[ 0.160000] Freeing unused kernel memory: 64K (ffffffff80002000 - ffffffff80012000)
[ 0.200000] EXT2-fs (htifbd0): warning: mounting unchecked fs, running e2fsck is recommended
#uname -a
Linux ucbvax 3.14.15-g4073e84-dirty #4 Sun Jan 11 07:17:06 PST 2015 riscv GNU/Linux
When testing QEMU with the root.bin and vmlinux downloaded from riscv.org, it seems OK, but I can't see the BusyBox startup message and the terminal can't halt:
Have tested QEMU using various combinations, with the results below:
**root.bin      vmlinux        RESULT**
local-built    local-built    Unassigned address was called ...
Downloaded     Downloaded     Seems OK but without BusyBox startup bar
local-built    Downloaded     Kernel panic - not syncing: No working init found
Downloaded     local-built    Unassigned address was called ...
We are starting a project to build and fabricate a RISC-V silicon chip for makers around the world, and we are testing the toolchain now in order to port Ubuntu Core and Android to RISC-V. Any idea what might have gone wrong?
Thanks.

QEMU hasn't been fully updated to support the new RISC-V privileged spec (github issue). The update is currently underway.
For an ISA simulator, spike is a good alternative. It may not have all of the platform features of QEMU, but it could serve as a starting point while the QEMU update completes.
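A minimal way to try spike with the same kernel might look like the sketch below. This follows the usual riscv-tools flow of the time (wrap the kernel in the Berkeley Boot Loader, then run the result on spike); the directory layout and vmlinux path are placeholders, not taken from the question.

```shell
# Sketch of the usual riscv-tools flow (paths are placeholders):
# build bbl with the kernel embedded as its payload, then hand it to spike.
cd riscv-pk
mkdir -p build && cd build
../configure --host=riscv64-unknown-elf --with-payload=/path/to/vmlinux
make

# Run the resulting bbl under the spike ISA simulator with 2048 MiB of RAM.
spike -m2048 bbl
```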

Related

How to add VFIO-IOMMU in KVM virtual machine (Aarch64)?

I am using aarch64 Linux to test VFIO-IOMMU feature in KVM VM.
The host is cortex-A78 running Linux-5.10.104 (with VFIO_IOMMU enabled). The guest OS is Ubuntu-22.04 (Linux-5.15, also with VFIO_IOMMU enabled).
The VM is created with virt-manager with virtio devices, like NIC, SCSI, etc.
But I could not find anywhere on the internet how to add a VFIO-IOMMU device to the VM.
I tried adding the following line to the vm.xml:
<iommu model='smmuv3'/>
But after the guest OS boots, I see the following logs about the IOMMU, but nothing about SMMUv3:
t@t:~$ dmesg | grep -i mmu
[ 0.320696] iommu: Default domain type: Translated
[ 0.321218] iommu: DMA domain TLB invalidation policy: strict mode
So how can VFIO-IOMMU be supported/added to the VM in this case?
The qemu-system-aarch64 here is version 4.2.1; I am not sure whether it supports SMMUv3 for ARMv8.
I confirmed that QEMU 6.2.0 supports SMMUv3. The guest OS log shows the following:
[ 0.578157] arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0
[ 0.578841] arm-smmu-v3 arm-smmu-v3.0.auto: ias 44-bit, oas 44-bit (features 0x00008305)
[ 0.580289] arm-smmu-v3 arm-smmu-v3.0.auto: allocated 65536 entries for cmdq
[ 0.581060] arm-smmu-v3 arm-smmu-v3.0.auto: allocated 128 entries for evtq
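With plain QEMU (outside libvirt), the virt machine can instantiate an SMMUv3 via the iommu machine option. Here is a sketch of such an invocation, assuming QEMU 6.2+ as confirmed above; the disk image path and the memory/CPU sizing are placeholders, not values from the question.

```shell
# Sketch: aarch64 virt machine with an emulated SMMUv3 in front of the
# PCIe bus; virtio devices use iommu_platform=on to sit behind the IOMMU.
# (disk.img is a placeholder.)
qemu-system-aarch64 \
    -machine virt,accel=kvm,gic-version=3,iommu=smmuv3 \
    -cpu host -smp 2 -m 2048 -nographic \
    -drive if=none,id=hd0,file=disk.img,format=raw \
    -device virtio-blk-pci,drive=hd0,iommu_platform=on
```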

How to find the root device name of a debootstrap image

I'm creating an Ubuntu 20.04 QEMU image with debootstrap (debootstrap --arch amd64 focal .). However, when I tried to boot it with a self-compiled Linux kernel, it failed to boot:
[ 0.678611] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[ 0.681639] Call Trace:
...
[ 0.685135] ret_from_fork+0x35/0x40
[ 0.685712] Kernel Offset: disabled
[ 0.686182] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---
I'm using the following command:
sudo qemu-system-x86_64 \
-enable-kvm -cpu host -smp 2 -m 4096 -no-reboot -nographic \
-drive id=root,media=disk,file=ubuntu2004.img \
-net nic,macaddr=00:da:bc:de:00:13 -net tap,ifname=tap0,script=no \
-kernel kernel/arch/x86/boot/bzImage \
-append "root=/dev/sda1 console=ttyS0"
So I'm guessing the error comes from the wrong root device name (/dev/sda1 in my case). Is there any way to find the correct root device name?
Update from @Peter Maydell's comment:
[ 0.302200] VFS: Cannot open root device "sda1" or unknown-block(0,0): error -6
[ 0.302413] Please append a correct "root=" boot option; here are the available partitions:
[ 0.302824] fe00 4194304 vda
[ 0.302856] driver: virtio_blk
[ 0.303152] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
Where vda should be the root device name.
This kind of "unable to mount root fs on unknown-block" error has several possible causes:
You asked the kernel to use the wrong device as the rootfs (or you didn't specify, and the built-in default is the wrong one)
You asked the kernel to use the right device as the rootfs, but the kernel doesn't have a device driver for it compiled in. (This might include more complicated cases like "the device is a PCI device and the kernel doesn't have the PCI controller driver compiled in.")
You asked the kernel to use the right device as the rootfs, and the kernel does have a driver for it, but it couldn't find the hardware, perhaps because the QEMU command line is incorrect
An important clue in figuring out which is the problem is to look at the part of the kernel log just before the "Kernel panic" part of the log. The kernel should print a list of "available partitions", which are the devices that it has a driver for and which are present. If that list contains a plausible looking device name, as in your case (where "vda" is listed as provided by the "virtio_blk" driver) then you now know what the root device name should be, and all you need to do is fix the kernel command line, eg "root=vda". Note that this list is a list of available partitions, so if your disk image has multiple partitions they should show up in the list as "vda1", "vda2", etc. (In this case it looks like your image is a single filesystem, not a disk image with multiple partitions, so only "vda" is in the list.)
If the kernel's list of available partitions doesn't include anything that looks like the disk you were expecting, then either the kernel is missing the driver, or the QEMU command line doesn't have the option to provide the device. This is a little harder to debug, but there may be useful information earlier in the kernel bootup log where the kernel probes for hardware -- for instance there should be logging when the PCI controller is probed. You can also of course double-check the config file for your kernel to see if the right CONFIG options are set.
If you're using a standard distro kernel then these usually have all the usual devices built-in, and your first check should be your QEMU command line. If you built your own kernel from source, check your config, especially if you were trying to achieve a "minimal" kernel with only the desired drivers present.
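As an illustration of reading that "available partitions" list, the candidate device names can even be pulled out of a captured serial log. In this sketch the log text is taken from the question above, but the boot.log file name and the awk parsing pattern are assumptions for illustration:

```shell
# Hypothetical sketch: fish the usable root device name out of a captured
# boot log instead of eyeballing it. Assumes the serial console output was
# saved to boot.log (e.g. with `qemu-system-x86_64 ... -serial file:boot.log`).
cat > boot.log <<'EOF'
[    0.302413] Please append a correct "root=" boot option; here are the available partitions:
[    0.302824] fe00         4194304 vda
[    0.302856]  driver: virtio_blk
[    0.303152] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
EOF

# After the "available partitions" header, device names are the last field
# of each partition line (major/minor, size in blocks, name).
awk '/available partitions/ {f=1; next}
     f && $NF ~ /^[vsh]d[a-z][0-9]*$/ {print "/dev/" $NF}' boot.log
```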

Glitch Booting Linux with Spike?

I used Spike to boot Linux using the riscv-tools, but the Linux boot sequence seems to stop at "bootconsole [early0] disabled".
I tried adding the kernel command line root=/dev/vda ro console=ttyS0, but it didn't work. The same console settings work in QEMU. I also checked the .config file for the line CONFIG_HVC_RISCV_SBI=y; it was there. Still couldn't get past it.
I tried Linux kernel versions 4.19 to 5.2. No luck. Am I doing something wrong here?
Steps I followed:
Compiled Linux with the RISC-V toolchain
compiled riscv-pk with ../configure --host=riscv64-unknown-elf --with-payload= [path to vmlinux]
ran "spike bbl" to start the Spike image.
Please let me know if any more info is required.
Sorry, noob here.
Attaching terminal output
bbl loader
OF: fdt: Ignoring memory range 0x80000000 - 0x80200000
Linux version 4.19.59 (root@AsusFX504) (gcc version 8.2.0 (GCC)) #2 SMP Sat Jul 20 05:11:32 IST 2019
bootconsole [early0] enabled
initrd not found or empty - disabling initrd
Zone ranges:
DMA32 [mem 0x0000000080200000-0x00000000ffffffff]
Normal empty
Movable zone start for each node
Early memory node ranges
node 0: [mem 0x0000000080200000-0x00000000ffffffff]
Initmem setup node 0 [mem 0x0000000080200000-0x00000000ffffffff]
software IO TLB: mapped [mem 0xfa3fe000-0xfe3fe000] (64MB)
elf_hwcap is 0x112d
percpu: Embedded 17 pages/cpu s29912 r8192 d31528 u69632
Built 1 zonelists, mobility grouping on. Total pages: 516615
Kernel command line:
Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
Sorting __ex_table...
Memory: 1988760K/2095104K available (5468K kernel code, 329K rwdata, 1751K rodata, 193K init, 806K bss, 106344K reserved, 0K cma-reserved)
SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
rcu: Hierarchical RCU implementation.
rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=1.
rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
NR_IRQS: 0, nr_irqs: 0, preallocated irqs: 0
clocksource: riscv_clocksource: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Console: colour dummy device 80x25
console [tty0] enabled
bootconsole [early0] disabled
This might be because the Virtual Terminal is enabled in your Linux config. Disabling it might solve your issue.
In make menuconfig, go to:
Location:
  -> Device Drivers
    -> Character devices
and disable "Virtual terminal":
Symbol: VT [=y]
Type  : bool
Prompt: Virtual terminal
As mentioned previously, the reason is that VT is enabled, so the kernel has a dummy VT framebuffer that simply goes nowhere; but you don't have to disable it. console=ttyS0 will also not work, since Spike doesn't emulate it. Note that this won't work on the HiFive Unleashed either, since the serial terminals there are ttySIF0 and ttySIF1. What you want is console=hvc0, and Spike will be able to continue from there, e.g. in your kernel .config:
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="earlyprintk console=hvc0"
CONFIG_CMDLINE_FORCE=y

how to enable PMU in KVM guest

I am running KVM/QEMU on my Lenovo X1 laptop.
The guest OS is Ubuntu 15.04 x86_64.
Now, I want to run the 'perf' command in the guest OS, but I found the following in the guest OS dmesg:
...
[ 0.055442] smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge) (fam: 06, model: 3a, stepping: 09)
[ 0.056000] Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
[ 0.057602] x86: Booting SMP configuration:
[ 0.058686] .... node #0, CPUs: #1
[ 0.008000] kvm-clock: cpu 1, msr 0:1ffd6041, secondary cpu clock
...
So, the perf command could not use hardware PMU events in the guest OS.
How can I enable the hardware PMU from my host for the Ubuntu guest?
Thanks,
-Tao
The page https://github.com/mozilla/rr/wiki/Building-And-Installing gives some hints on how to enable the guest PMU:
Qemu: On QEMU command line use
-cpu host
Libvirt/KVM: Specify CPU passthrough in domain XML definition:
<cpu mode='host-passthrough'/>
The same advice is given in https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-monitoring_tools-vpmu
I edited the <cpu mode='host-passthrough'/> line into the /etc/libvirt/qemu/my_vm_name.xml file, in place of the existing <cpu>...</cpu> block.
(In virt-manager use "host-passthrough" as CPU "Model:" field - http://blog.wikichoon.com/2016/01/using-cpu-host-passthrough-with-virt.html)
Now the PMU works; tested with perf stat echo inside the VM: "arch_perfmon" appears in /proc/cpuinfo, and the PMUs show as enabled in dmesg | grep PMU.
The -cpu host option of QEMU was indeed used, according to /var/log/libvirt/qemu/vm_name.log:
/usr/bin/kvm-spice ... -machine ...,accel=kvm,... -cpu host ...
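The "arch_perfmon in /proc/cpuinfo" check above can be scripted; here is a small sketch where the check_vpmu helper, the file names, and the sample cpuinfo contents are all hypothetical, purely for illustration:

```shell
# Hypothetical helper: check a saved /proc/cpuinfo dump for the
# architectural-perfmon flag instead of reading the live file.
check_vpmu() {
    # arch_perfmon in the CPU flags means the architectural PMU is visible
    grep -qw arch_perfmon "$1" && echo "vPMU exposed" || echo "no vPMU"
}

# Example with a saved guest cpuinfo (contents are illustrative):
cat > guest-cpuinfo.txt <<'EOF'
flags : fpu vme de pse tsc msr pae arch_perfmon pebs bts
EOF
check_vpmu guest-cpuinfo.txt
```

Inside a real guest you would run the same grep directly against /proc/cpuinfo.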

linux: running self-compiled kernel in qemu: VFS: Unable to mount root fs on unknown-block(0,0)

I'm trying to get this running and don't know what I'm doing wrong. I created a Debian.img (a disk in raw format, made with the virtual device manager - a GUI to libvirt, I guess) and installed Debian with no trouble. Now I want to get it running with a self-compiled kernel. I copied the .config file from my working (virtual) Debian and made no further changes at all. This is what I do:
qemu-system-x86_64 -m 1024M -kernel /path/to/bzImage -hda /var/lib/libvirt/images/Debian.img -append "root=/dev/sda1 console=ttyS0" -enable-kvm -nographic
But during boot I always get this error message.
[ 0.195285] Initializing network drop monitor service
[ 0.196177] List of all partitions:
[ 0.196641] No filesystem could mount root, tried:
[ 0.197292] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[ 0.198355] Pid: 1, comm: swapper/0 Not tainted 3.2.46 #7
[ 0.199055] Call Trace:
[ 0.199386] [<ffffffff81318c30>] ? panic+0x95/0x19e
[ 0.200049] [<ffffffff81680f7d>] ? mount_block_root+0x245/0x271
[ 0.200834] [<ffffffff8168112f>] ? prepare_namespace+0x133/0x169
[ 0.201590] [<ffffffff81680c94>] ? kernel_init+0x14c/0x151
[ 0.202273] [<ffffffff81325a34>] ? kernel_thread_helper+0x4/0x10
[ 0.203022] [<ffffffff81680b48>] ? start_kernel+0x3c1/0x3c1
[ 0.203716] [<ffffffff81325a30>] ? gs_change+0x13/0x13
What am I doing wrong? Please, someone help. Do I need to pass the -initrd option? I tried that already but had no luck yet.
I figured it out by myself. Some time has passed, but as I recall the solution was to provide an initial ramdisk. This is how I got it working with hardware acceleration.
Compiling
make defconfig
CONFIG_EXT4_FS=y
CONFIG_IA32_EMULATION=y
CONFIG_VIRTIO_PCI=y (Virtualization -> PCI driver for virtio devices)
CONFIG_VIRTIO_BALLOON=y (Virtualization -> Virtio balloon driver)
CONFIG_VIRTIO_BLK=y (Device Drivers -> Block -> Virtio block driver)
CONFIG_VIRTIO_NET=y (Device Drivers -> Network device support -> Virtio network driver)
CONFIG_VIRTIO=y (automatically selected)
CONFIG_VIRTIO_RING=y (automatically selected)
---> see http://www.linux-kvm.org/page/Virtio
Enable paravirt in config
Disable NMI watchdog on HOST for using performance counters on GUEST. You may ignore this.
cat /proc/sys/kernel/nmi_watchdog
---> see http://kvm.et.redhat.com/page/Guest_PMU
Start in Qemu
sudo qemu-system-x86_64 -m 1024M -hda /var/lib/libvirt/images/DEbian.img -enable-kvm -initrd /home/username/compiled_kernel/initrd.img-3.2.46 -kernel /home/username/compiled_kernel/bzImage -append "root=/dev/sda1 console=ttyS0" -nographic -redir tcp:2222::22 -cpu host -smp cores=2
Start in KVM
Kernel path: /home/username/compiled_kernel/bzImage
Initrd path: /home/username/compiled_kernel/initrd.img-3.2.46
Kernel arguments: root=/dev/sda1
Hope this helps if someone has the same issues.
This is for the AArch64 (arm64) on QEMU case.
I was following this good tutorial: https://ibug.io/blog/2019/04/os-lab-1/
In my case I was met with this error message:
---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0) ]---
I did mknod dev/ram b 1 0 in the initrd.
Later I noticed there was an error message above that line implying the kernel didn't support the ram disk. So I edited .config and set these items:
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=1
CONFIG_BLK_DEV_RAM_SIZE=131072 (= 128 MB; the number is in units of 1024 B)
And then the problem was gone! The initrd was mounted on /dev/ram and the first init process ran well.
It turns out running make defconfig didn't set these values by default for me.
Maybe your system image file is bad and cannot be mounted.
You may try these commands to mount the image file and check whether it is a valid root file system for Linux:
losetup /dev/loop0 /var/lib/libvirt/images/Debian.img
kpartx -av /dev/loop0
mount /dev/mapper/loop0p1 /mnt/tmp
The most likely thing is that the kernel doesn't know the correct device to boot from.
You can supply this explicitly from the qemu command line. So if the root is on partition 2, you can say:
qemu -kernel /path/to/bzImage \
    -append root=/dev/sda2 \
    -hda /path/to/hda.img \
    ...
Notice I use /dev/sda2 even though the disk is IDE. Even virtual machines seem to use SATA nowadays.
The other possibilities are that, as @Houcheng says, your root FS is corrupted, or else that the kernel does not have that particular FS type built in. But I think you would get a different error if that were the case.
QEMU version
QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.11), Copyright (c) 2003-2008 Fabrice Bellard
Running a Buildroot image (kernel 4.9.6) with the following arguments passed:
qemu-system-x86_64 -kernel output/images/bzImage -hda output/images/rootfs.qcow2 -boot c -m 128 -append root=/dev/sda -localtime -no-reboot -name rtlinux -net nic -net user -redir tcp:2222::22 -redir tcp:3333::3333
It was accepting only /dev/sda as an option for the root fs to mount (it will show you a little hint for the root fs option once it boots and hangs with the following error):
VFS: Cannot open root device "hda" or unknown-block(0,0): error -6
Please append a correct "root=" boot option; here are the available partitions:
0800 61440 sda driver: sd
