Where does QEMU load the DTB? - linux

I am writing my own bootloader for aarch64 that must boot Linux, and in order to execute it properly I need to follow the Linux boot protocol.
Here are some relevant memory mappings from my linker script:
FLASH_START = 0x000100000;
RAM_START = 0x40000000;
TEXT_START = 0x40080000;
Here is the command I am using to launch my virt machine, giving it 4 cores and 2 GB of RAM:
qemu-system-aarch64 -nographic -machine virt -cpu cortex-a72 -kernel pflash.bin -initrd initramfs.cpio.gz -serial mon:stdio -m 2G -smp 4
The pflash.bin has the following layout:
dd if=/dev/zero of=pflash.bin bs=1M count=512
dd if=my_bootloader.img of=pflash.bin conv=notrunc bs=1M count=20
dd if=Kernel of=pflash.bin conv=notrunc bs=1M seek=50
where Kernel is the Linux kernel image file, and my_bootloader.img is simply the objcopy output of the ELF file:
aarch64-linux-gnu-objcopy -O binary my_bootloader.elf my_bootloader.img
The ELF file itself is linked as follows:
aarch64-linux-gnu-ld -nostdlib -T link.ld my_bootloader.o -o my_bootloader.elf
Here is my_bootloader.S
.section ".text.startup"
.global _start
_start:
    ldr x30, =STACK_TOP
    mov sp, x30
    ldr x0, =RAM_START
    ldr x1, =0x43280000
    br  x1
    ret
As you can see, all I have done so far is:
Set up the stack
Presumably load the DTB address into x0 (just like the Linux boot protocol demands)
Branch to the location of the Linux kernel
So I have not yet loaded the initramfs.cpio.gz which contains the file system, but I should normally already get at least some output from the kernel, since the DTB was (presumably) loaded.
My question is: have I loaded it correctly? I guess the simple answer is no. Basically I have no clue where QEMU puts the DTB in my RAM, and after searching the documentation everywhere I cannot seem to find this information.
I would much appreciate it if someone could tell me where QEMU loads the DTB, so I can put its address into x0 and the kernel can gladly read it!
Thanks in advance!

Where the DTB (if any) is placed is board-specific. The QEMU documentation for the Arm 'virt' board tells you where the DTB is:
https://www.qemu.org/docs/master/system/arm/virt.html#hardware-configuration-information-for-bare-metal-programming
However, your command line is incorrect. "-kernel pflash.bin" says "this file is a Linux kernel, boot it in the way that the Linux kernel should be booted". What you want is "load this file into the flash, and start up in the way that the CPU would start out of reset on real hardware". For that you want one of the other ways of loading a guest binary (-bios is probably simplest). And you probably don't want to pass QEMU a -initrd option, either, since that is intended for either (a) QEMU's builtin bootloader or (b) QEMU-aware bootloaders that know how to extract a kernel and initrd from the fw-cfg device.
PS: If you tell QEMU to provide more than one guest CPU then your bootloader will need to deal with the secondary CPUs. That either means using PSCI to start them up, or else handling the fact that all the CPUs start executing the same code out of reset (which one depends on how you choose to start QEMU). You're better off sticking to '-smp 1' to start off with, and come back and deal with SMP later.
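Putting that advice together, a corrected invocation might look like the sketch below. The switch from -kernel to -bios and the removal of -initrd follow directly from the answer, -smp 1 follows the PS, and the file name is the asker's own:

```shell
# Sketch only: -bios loads pflash.bin into the flash and starts the CPU
# at its reset vector, instead of treating the file as a Linux kernel.
qemu-system-aarch64 -nographic -machine virt -cpu cortex-a72 \
    -bios pflash.bin -serial mon:stdio -m 2G -smp 1
```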

How to re-enable CPU Cores after isolcpus

I'm running some processes on a Jetson NX and I was trying to isolate 3 of the cores so I could use taskset and dedicate them to my Python script, which uses multiprocessing. To do this, I followed a few tutorials and modified my /boot/extlinux/extlinux.conf file to be:
TIMEOUT 30
DEFAULT primary
MENU TITLE L4T boot options
LABEL primary
MENU LABEL primary kernel
LINUX /boot/Image
INITRD /boot/initrd
APPEND ${cbootargs} quiet root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4 console=ttyTCU0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=3-5
This worked fine for my needs, and when I ran cat /sys/devices/system/cpu/isolated
it output 3-5. Now I'm trying to bring back cores 3 and 4, so I modified the extlinux.conf file to say:
TIMEOUT 30
DEFAULT primary
MENU TITLE L4T boot options
LABEL primary
MENU LABEL primary kernel
LINUX /boot/Image
INITRD /boot/initrd
APPEND ${cbootargs} quiet root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4 console=ttyTCU0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0 isolcpus=5
and I rebooted my Jetson. However, it still says cores 3-5 are isolated. Are there some other steps I need to take to re-enable these cores?
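Since isolcpus is a boot-time kernel parameter, a quick sanity check (a sketch using the standard procfs path) is to confirm which command line the kernel actually booted with; if it still contains isolcpus=3-5, then the extlinux entry that was edited is not the one the bootloader actually used:

```shell
# The command line the running kernel was actually booted with;
# if isolcpus=3-5 still appears here, the edited entry was not used.
cat /proc/cmdline
```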

Why does strace report my x64 FASM program runs in 32-bit mode? [duplicate]

This question already has answers here:
What is better "int 0x80" or "syscall" in 32-bit code on Linux?
(3 answers)
Closed 3 years ago.
I have "format ELF64 executable 3" at the top of my source file.
I compiled my program using fasm main.asm
Output:
flat assembler version 1.73.13 (16384 kilobytes memory, x64)
3 passes, 319 bytes.
Then I tried to run it using strace ./main, because it didn't work as expected, and in the output there is strace: [ Process PID=3012310 runs in 32 bit mode. ].
file main: main: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, no section header
uname -m: x86_64
Use syscall instead of int 0x80
strace is wrong, your process isn't actually running in 32-bit mode, just using the 32-bit int 0x80 system call ABI.
You can check with gdb ./main and use starti. info regs will show that the register state is 64-bit, including 16x 64-bit registers, not 8x 32-bit registers. Or more simply, layout reg.
I see the same strace bug(?) when building a program with NASM that uses the 32-bit int 0x80 ABI in 64-bit mode to make an exit system call.
I added a delay loop before the first system call and I see strace doesn't print out the bitness of the target process until it makes a system call. So apparently strace infers that from whether it uses the 64-bit syscall ABI or the 32-bit int 0x80 / sysenter ABI!
What happens if you use the 32-bit int 0x80 Linux ABI in 64-bit code?
Perhaps this is related to strace trying to figure out how to decode system calls: The Linux ptrace API that strace uses doesn't have a simple reliable mechanism to tell which system call ABI a process invoked. https://superuser.com/questions/834122/how-to-distinguish-syscall-from-int-80h-when-using-ptrace
A 64-bit process that uses 32-bit system calls used to just get decoded according to 64-bit call numbers. But now it seems modern strace checks:
I used eax=1 / syscall to invoke write, and eax=1 / int 0x80 to invoke exit, and strace decoded them both correctly
execve("./nasm-test", ["./nasm-test"], 0x7ffdb8da5890 /* 52 vars */) = 0
write(0, NULL, 0) = 0
strace: [ Process PID=5219 runs in 32 bit mode. ]
exit(0) = ?
+++ exited with 0 +++
This is with strace 5.3 on Linux 5.3.1-arch1-1-ARCH.

How to check if /init starts /etc/inittab

I have an embedded ARM system with processor AT91SAM9G45.
System consists of two components:
Linux kernel (4.14.79)
Busybox 1.29.3 as initramfs image.
I connect to the device using PuTTY over a serial port.
When kernel starts, everything goes fine. Kernel unpacks initramfs image, all files are found and listed (I see it by debug messages). But when it starts /init, log messages are:
Freeing unused kernel memory: 384K
This architecture does not have kernel memory protection.
run_init_process BEFORE /init
run_init_process AFTER /init, result = 0
Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000004
/init is a symlink to /bin/busybox. I tried replacing /init with /sbin/init, /bin/busybox, /linuxrc, but the results are the same.
/etc/inittab file:
# Begin /etc/inittab
id::initdefault:
si::sysinit:/etc/init.d/rc S
#l0::wait:/etc/rc.d/init.d/rc 0
#l1::wait:/etc/rc.d/init.d/rc 1
#l2::wait:/etc/rc.d/init.d/rc 2
#l3::wait:/etc/rc.d/init.d/rc 3
#l4::wait:/etc/rc.d/init.d/rc 4
#l5::wait:/etc/rc.d/init.d/rc 5
#l6::wait:/etc/rc.d/init.d/rc 6
ca::ctrlaltdel:/sbin/shutdown -t1 -a -r now
su::once:/sbin/sulogin
1::respawn:/sbin/getty ttyS1 115200
2::respawn:/sbin/getty ttyS2 115200
3::respawn:/sbin/getty ttyS3 115200
4::respawn:/sbin/getty ttyS4 115200
5::respawn:/sbin/getty ttyS5 115200
6::respawn:/sbin/getty ttyS6 115200
# End /etc/inittab
/etc/init.d/rcS file (this file has execute permission):
#!/bin/busybox sh
echo "Hello world!"
I don't know whether the /init process even starts parsing /etc/inittab, or whether it fails before reaching /etc/inittab for some reason I cannot find out. Maybe there are some mistakes in my /etc/inittab and /etc/init.d/rcS files. Maybe there is a problem with the terminal (/etc/init.d/rcS cannot write to stdout because it is blocked, suspended, in use by another process, and so on).
How can I make definitely sure that /etc/inittab is being processed?
I see there is a space between rc and S:
si::sysinit:/etc/init.d/rc S
Change it to:
si::sysinit:/etc/init.d/rcS
and see if that fixes it.
/init is a symlink to /bin/busybox.
The typical /init file in an initramfs built by Buildroot that incorporates Busybox is a script of seven lines:
#!/bin/sh
# devtmpfs does not get automounted for initramfs
/bin/mount -t devtmpfs devtmpfs /dev
exec 0</dev/console
exec 1>/dev/console
exec 2>/dev/console
exec /sbin/init $*
Note the comment ("devtmpfs does not get automounted for initramfs") and the mount command for /dev.
It's /sbin/init (rather than /init) that is linked to /bin/busybox.
In other words, without the proper setup of the /dev directory, userland has no I/O capability.
Only after devtmpfs has been mounted should the init program in Busybox be executed, which will then access /etc/inittab.
See Is there a way to get Linux to treat an initramfs as the final root filesystem?
and
Make CONFIG_DEVTMPFS_MOUNT apply to initramfs/initmpfs
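If you want a direct signal that Busybox init reaches /etc/inittab at all, one option (a sketch relying on standard Busybox init behaviour; the message text is arbitrary) is to add a trivial sysinit entry near the top of the file:

```
# Hypothetical debugging entry; remove once confirmed.
# Busybox inittab line format: <id>::<action>:<process>
::sysinit:/bin/echo "inittab reached"
```

If the message never appears on the console, init is failing before the file is read, for example because /dev is not mounted yet and /dev/console is unavailable.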

Booting kernel from SD in qemu (ARM) with u-boot

I'm quite new to embedded systems and I'm playing around with ARM on QEMU. I've run into a problem booting a Linux kernel image from an emulated SD card on versatile express with a Cortex-A9 CPU.
I prepared everything in the following order: first, I built the kernel with vexpress_defconfig using the appropriate ARM toolchain. Then I built U-Boot with vexpress_ca9x4_defconfig. Everything went just fine. The Linux kernel source is version 4.13 (latest stable from git); the U-Boot version is 2017.09-00100. Then I prepared an SD image:
dd if=/dev/zero of=sd.img bs=4096 count=4096
mkfs.vfat sd.img
mount sd.img /mnt/tmp -o loop,rw
cp kernel/arch/arm/boot/zImage /mnt/tmp
umount /mnt/tmp
Next I try to run qemu as follows:
qemu-system-arm -machine vexpress-a9 -cpu cortex-a9 -m 128M -dtb kernel/arch/arm/boot/dts/vexpress-v2p-ca9.dtb -kernel uboot/u-boot -sd sd.img -nographic
U-boot loads successfully and gives me command prompt. And SD is really there and attached:
=> mmcinfo
Device: MMC
Manufacturer ID: aa
OEM: 5859
Name: QEMU!
Tran Speed: 25000000
Rd Block Len: 512
SD version 1.0
High Capacity: No
Capacity: 16 MiB
Bus Width: 1-bit
Erase Group Size: 512 Bytes
I attempt to load compressed image from SD into memory and boot it:
fatload mmc 0:0 0x4000000 zImage
and everything looks ok
=> fatload mmc 0:0 0x4000000 zImage
reading zImage
3378968 bytes read in 1004 ms (3.2 MiB/s)
but then I want to boot the kernel and get an error:
=> bootz 0x4000000
Bad Linux ARM zImage magic!
I also tried U-boot images, crafted with u-boot's mkimage, like in this example:
uboot/tools/mkimage -A arm -C none -O linux -T kernel -d kernel/arch/arm/boot/Image -a 0x00010000 -e 0x00010000 uImage
also trying out -C gzip on zImage and different load/entry addresses, to no avail. The images were copied to sd.img. When I fatload the image and check it with iminfo, whichever options I try, I constantly get error:
=> iminfo 0x4000000
## Checking Image at 04000000 ...
Unknown image format!
I'm totally confused and this problem drives me nuts, while information on this subject on the Internet is rather scarce. Please hint at what I'm doing wrong and point me in the right direction.
The QEMU in use is QEMU emulator version 2.9.0.
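One low-level check worth doing on the host is to verify that the file copied to the card really is a zImage: the ARM zImage format carries the little-endian magic word 0x016f2818 at byte offset 0x24. A small sketch (the zImage path is an example; point it at whatever you copied into sd.img):

```shell
# Print the 4 bytes at offset 0x24 (36) and compare against the ARM
# zImage magic (little-endian 0x016f2818, i.e. bytes 18 28 6f 01).
check_zimage() {
    magic=$(dd if="$1" bs=1 skip=36 count=4 2>/dev/null | od -An -tx1 | tr -d ' \n')
    if [ "$magic" = "18286f01" ]; then
        echo "$1: valid ARM zImage magic"
    else
        echo "$1: bad magic ($magic)"
    fi
}
check_zimage zImage   # example path
```

If the magic checks out on the host, the file itself is fine and the problem lies elsewhere.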

Using qemu to boot OpenSUSE (or any other OS) with custom kernel?

This was flagged as a duplicate, but I could not find an answer there, so I am posting here.
I want to run OpenSUSE as guest with a custom kernel image which is on my host machine. I'm trying:
$ qemu-system-x86_64 -hda opensuse.img -m 512 -kernel ~/kernel/linux-git/arch/x86_64/boot/bzImage -initrd ~/kernel/linux-git/arch/x86_64/boot/initrd.img -boot c
But it boots into BusyBox instead, and uname -a shows Linux (none). Also, using -append "root=/dev/sda" (as suggested in the link above) does not seem to work. How do I tell the kernel image to boot into OpenSUSE?
I have OpenSUSE installed into opensuse.img, and:
$ qemu-system-x86_64 -hda opensuse.img -m 512 -boot c
boots it with the stock kernel.
Most virtual machines are booted from a disk image or an ISO file, but KVM can directly load a Linux kernel into memory skipping the bootloader. This means you don't need an image file containing the kernel and boot files. Instead, you can run a kernel directly like this:
qemu-kvm -kernel arch/x86/boot/bzImage -initrd initramfs.gz -append "console=ttyS0" -nographic
These flags directly load a kernel and initramfs from the host filesystem without the need to generate a disk image or configure a bootloader.
The optional -initrd flag loads an initramfs for the kernel to use as the root filesystem.
The -append flag adds kernel parameters and can be used to enable the serial console.
The -nographic option restricts the virtual machine to just a serial console and therefore keeps all test kernel output in your terminal rather than in a graphical window.
Take a look at the link below; it has a lot more info (thanks to the guy who wrote all that):
http://blog.vmsplice.net/2011/02/near-instant-kernel-development-cycle.html
For ARM boards such as the Raspberry Pi, booting with a custom kernel usually looks like this:
qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1" -hda 2013-05-25-wheezy-raspbian.img
where -hda is your image. You have to find out which partition the rootfs is on, which you can check with:
fdisk -l <your image>
If there is only one partition, pass root=/dev/sda; if it is on the second, pass root=/dev/sda2.
I don't think an initrd image is required here; the kernel will mount the main rootfs directly.
So try this:
qemu-system-x86_64 -hda opensuse.img -m 512 -kernel ~/kernel/linux-git/arch/x86_64/boot/bzImage -append "root=/dev/sda" -boot c
Note: check exactly which partition your rootfs is on and pass the matching /dev/sda*.
You also mention that uname -a reports Linux (none). That is because the hostname was not set when configuring the kernel; it defaults to (none).