Trace instructions in QEMU Full System Emulation (RISC-V) - linux

As a college homework I have to run a benchmark on a system that uses RISC-V architecture.
Note: I don't have much knowledge of Linux and I know almost nothing about QEMU.
About the virtual machine with RISC-V architecture:
To access a system with the RISC-V architecture I used QEMU, and since I'm using WSL2 to access a Linux kernel, I followed the tutorial below to install QEMU inside WSL2 and build the RISC-V system with Debian:
Emulating RISC-V Debian on WSL2 | David Burela’s blog
I was able to install it correctly and run the QEMU virtual machine with RISC-V architecture.
About the benchmark:
As a benchmark I'm using JPEG2000 from MediaBench II which you can access at this link:
MediaBench II (slu.edu)
The benchmark consists of running the JPEG2000 encoder using the provided application called “Jasper” with the benchmark image included in the files. Thus, the application takes the image file “input_base_4CIF.ppm” which is in the ppm format and encodes it to the jp2 format (JPEG2000) generating the output file “output_base_4CIF_96bps.jp2”.
To run the benchmark:
I accessed the RISC-V virtual machine through QEMU (which runs Debian)
Inside the virtual machine, I downloaded all the source code, data, & scripts for JPEG-2000 using wget
I compiled the JPEG2000 algorithm sources as well as the Jasper application following the included guides (since I compiled inside the virtual machine, the files are guaranteed to be built for the RISC-V architecture)
I entered the jpg2000enc folder with the command:
$ cd jpg2000enc
I deleted the jpg2000 image already included in the output_base folder with the commands:
$ cd output_base
$ rm output_base_4CIF_96bps.jp2
I went back to the jpg2000enc folder with the command
$ cd ..
Finally, I ran Jasper on the provided image (input_base_4CIF.ppm), which is in the input_base folder, with the command:
$ jasper -f ./input_base/input_base_4CIF.ppm -F ./output_base/output_base_4CIF_96bps.jp2 -T jp2 -O rate=0.010416667
I accessed the output_base folder to check if the jpg2000 file was generated successfully
$ cd output_base
$ ls
I saw that the file was generated successfully!
Note that everything ran perfectly!
Now the problem I'm failing to solve:
For the next steps of my homework, I need to log all the RISC-V instructions executed when I run the command:
$ jasper -f ./input_base/input_base_4CIF.ppm -F ./output_base/output_base_4CIF_96bps.jp2 -T jp2 -O rate=0.010416667
How can I trace all RISC-V instructions as they are translated by QEMU during execution of the presented command?
PS: It's important to mention that I don't want to trace the x86 instructions that my processor actually executes when QEMU translates the RISC-V instructions of the jasper application; I really need the RISC-V instructions.
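(A hedged pointer rather than a verified recipe: QEMU's built-in TCG logging can dump the guest RISC-V instructions it translates. Something along these lines, added to the qemu-system-riscv64 command from the tutorial, is the usual starting point; the log path is only an example:)
$ qemu-system-riscv64 <your existing machine/kernel/drive options> -d in_asm,nochain -D /tmp/riscv-trace.log
(Here -d in_asm logs each block of guest RISC-V assembly when QEMU translates it; adding exec as well, i.e. -d in_asm,exec,nochain, also logs blocks each time they are executed, with nochain keeping that per-block logging complete; -D sends the log to a file instead of stderr. Note that full-system logging captures the kernel and every other process, not just jasper. Newer QEMU builds also ship a TCG plugin, execlog, that records instructions as they are executed, if translation-time logging is not enough.)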

Related

WSL2 distro shell can't launch a file copied from outside

The situation in short
I can't launch an executable (binary or a script) in a WSL2 distro if it wasn't created inside this distro
I can launch scripts and binaries that were created inside the distro shell (not using /mnt/c or /mnt/d in any way)
But I can't launch anything that was created outside and copied inside from Windows (using /mnt/c or /mnt/d)
I can see the copied files in the file system, can "cat" them, can look them up with "which", but I cannot launch them by entering the path into the command line
The questions I have in regards to all this
How come the shell can't see the files while utilities you run from the shell can?
How do I make the shell see files that were copied from outside?
If I can't make the shell launch the files, then how do I launch them?
The Situation in detail
I have Windows 10 with WSL2 and two distros
Ubuntu-20.04
Alpine
In Ubuntu I have a "Hello, World!" project written in C
It compiles and runs in Ubuntu just fine
But, when I copy it from Ubuntu to Windows
cp hello /mnt/d/
and then go to Alpine and copy it inside from Windows
cp /mnt/d/hello .
I then have trouble launching it inside Alpine
Here is the output of the file hello command in Ubuntu, with some extra formatting (just in case)
$ file hello
hello:
ELF 64-bit LSB shared object,
x86-64,
version 1 (SYSV),
dynamically linked,
interpreter /lib64/ld-linux-x86-64.so.2,
BuildID[sha1]=021352ab7bf244e340c3c42ce34225b74baa6618,
for GNU/Linux 3.2.0,
not stripped
Here's what I have in Alpine
$ cp /mnt/d/hello .
$ ls -l
-rwxr-xr-x 1 pavel pavel 16760 Apr 19 19:07 hello
$ ./hello
-ash: ./hello: not found
Now same with a script copied from Windows
Copy the script inside Alpine from Windows
$ cp /mnt/d/hello.sh .
Checking the contents
$ cat hello.sh
#!/bin/ash
echo Hello!
Setting the execute permission just in case
$ chmod agu+x hello.sh
Trying to run it
$ ./hello.sh
-ash: ./hello.sh: not found
But, I can launch the hello.sh by explicitly calling the ash tool and passing the script path as the argument
$ ash ./hello.sh
Hello!
At the same time, a script created inside Alpine runs just by entering its path on the command line
$ cat << EOF > hello-local.sh
> #!/bin/ash
> echo Local hello!
> EOF
$ chmod agu+x hello-local.sh
$ ./hello-local.sh
Local hello!
Also, I couldn't turn a file that wouldn't run into one that would, either by copying it with cp
cp hello.sh hello2.sh
or by copying it with cat
cat hello.sh > hello3.sh
chmod agu+x hello3.sh
Why do I need to copy things from outside
It all started when I wanted to explore how Docker for Windows uses Linux namespaces to separate containers
The distro that Docker for Windows uses is called docker-desktop
The docker-desktop distro neither has utilities that I need for my experiments, nor a package manager to get those utilities
So I tried to copy them from outside
But now studying Docker for Windows is not my only concern
I want to understand this magic that is happening just as badly
To be fair, there really are three separate questions here, but not necessarily the questions you listed in your post:
Secondary question -- Why does your script that you copied to Alpine fail?
As @MarkPlotnick covered in the comments (and you confirmed), it was due to the script having DOS/Windows line endings (CRLF). In general, try to avoid creating or editing Linux text files using Windows tools unless you are sure that they are using Linux line endings.
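(A quick way to check and fix this, assuming the usual tools are available in the distro: file hello.sh will report "with CRLF line terminators" for an affected script, and either of the following strips the carriage returns in place:)
$ dos2unix hello.sh
$ sed -i 's/\r$//' hello.sh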
Secondary question -- Why does your C program fail when you compile on Ubuntu and copy the binary to Alpine?
Also as @MarkPlotnick mentioned in the comments, this is because Ubuntu uses glibc as the standard library implementation by default, but Alpine uses musl. See a number of questions here for more information. The first one in the list sorted by "relevance" is actually a pretty good one to start with.
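(One quick workaround, sketched here under the assumption that your hello program only uses the standard C library: link it statically on Ubuntu, so the resulting binary no longer depends on glibc at run time and should start on Alpine as well:)
$ gcc -static -o hello hello.c
file hello will then report "statically linked" instead of naming the /lib64/ld-linux-x86-64.so.2 interpreter.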
Main question -- How to explore the docker-desktop distro
Really, your main goal seems to be how to gain access to certain tools inside the docker-desktop distro in order to learn more about it.
I was going to say, "don't" (with more explanation), but the reality is that I think it's a potentially good learning experience. I've done it, to some degree, so who am I to say it's "too dangerous" or recommend against it? ;-)
I will give fair warning, though -- The docker-desktop distro isn't intended to be run by users. Docker Desktop "injects" links and sockets into your other WSL2 distros (which you can enable/disable per-distro in Docker Desktop) so that its tools, processes, etc., are available to all your WSL2 (and PowerShell/CMD) instances.
I'd personally try to avoid making any changes to the docker-desktop distro itself. They'll likely be overwritten anyway by Docker Desktop when it extracts a new rootfs.
However, we can still gain access to the tools we need by accessing them from another distribution, but without copying them into docker-desktop.
First, a note -- As I think you have probably already figured out, docker-desktop is also musl-based. So you'll want to use tools from another musl-based distro like Alpine.
This can be easily accomplished by running the following line once in your Alpine instance (as root):
echo "/ /mnt/wsl/instances/Alpine none defaults,bind,X-mount.mkdir 0 0" >> /etc/fstab
That will bind-mount the Alpine instance's root into the tmpfs /mnt/wsl mount. You can see my Super User answer here for more details on that.
Once you wsl --terminate Alpine and restart it, you'll have access to the Alpine files from any other WSL2 distribution.
As a useful (for your intent) example, install the util-linux package in Alpine to get access to the lsns command.
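(In Alpine, as root, that should just be:)
apk add util-linux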
Then, in the docker-desktop distro (which I assume you already know to access with wsl -u root -d docker-desktop, but I'll include that command here for other future readers), to list the namespaces:
/mnt/host/wsl/instances/Alpine/usr/bin/lsns
The docker-desktop instance automounts at a slightly different directory than default (see cat /etc/wsl.conf), so you need to adjust the path to /mnt/host/wsl instead of /mnt/wsl.
But with that in place, you can run all (most?) of your Alpine binaries directly in docker-desktop without having to modify it directly. If you have a script in your home directory that you want to run in docker-desktop, for instance:
/mnt/host/wsl/instances/Alpine/home/users/<yourusername>/hello.sh
Note that if you have a binary that requires a dynamically-linked library on Alpine, I'm assuming you'll need to adjust your LD_LIBRARY_PATH accordingly, although I haven't tested. For instance:
LD_LIBRARY_PATH=/mnt/host/wsl/instances/Alpine/usr/lib /mnt/host/wsl/instances/Alpine/usr/bin/<whatever>

Install Alpine in diskless mode on NON VNC dedicated server

Hello, I'm trying to figure out how to install Alpine Linux, in diskless mode, on my remote dedicated server, without VNC access.
My hosting provider only offers a few images and a rescue system without a VNC option.
I already tried to boot the ISO via a GRUB image boot, but the Alpine Linux installation image doesn't have openssh installed, so I couldn't connect to the server to do the alpine-setup.
So I thought I could maybe edit the squashfs image.
On a Debian Live CD it's easy to unsquash the image, enable "PermitRootLogin yes" and squash it again, but with Alpine Linux I have absolutely no clue.
After this I tried to build a custom Alpine ISO with mkimage, but I just don't know how to build it properly; I get "unable to load key file" and "$apks: unable to select package (or its dependencies)" errors after the build.
(https://wiki.alpinelinux.org/wiki/How_to_make_a_custom_ISO_image_with_mkimage)
I used this code for the mkimage profile:
profile_nas() {
    profile_standard
    kernel_cmdline="unionfs_size=512M console=tty0 console=ttyS0,115200"
    syslinux_serial="0 115200"
    kernel_addons="zfs"
    apks="\$apks openssh"
    local _k _a
    for _k in \$kernel_flavors; do
        apks="\$apks linux-\$_k"
        for _a in \$kernel_addons; do
            apks="\$apks \$_a-\$_k"
        done
    done
    apks="\$apks linux-firmware"
}
and this one to build
sh mkimage.sh --tag edge \
--outdir ~/iso \
--arch x86_64 \
--repository https://dl-cdn.alpinelinux.org/alpine/edge/main/ \
--profile nas
Even if I'm able to generate the custom Alpine Linux ISO, I don't understand this part of the guide (and even if I understood this part, I still wouldn't know how to enable remote root access, i.e. "PermitRootLogin yes", in sshd_config):
Making packages available on boot
A package may be made available in the live system by defining the generation of an apkovl which contains a corresponding /etc/apk/world file, and adding that overlay definition to the mkimg-profile, e.g. with `apkovl="genapkovl-mkimgoverlay.sh"`
The definition may be done as in the genapkovl-dhcp.sh example. Copy the relevant parts (including the rc_add lines) into a `genapkovl-mkimgoverlay.sh` file and add the package(s) that should be installed in the live system on separate lines in the file contents for /etc/apk/world.
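(For what it's worth, a minimal, untested sketch of such an overlay script, modelled on the genapkovl-dhcp.sh example from aports; the file name genapkovl-mkimgoverlay.sh and the exact contents are assumptions, not a verified recipe:)
#!/bin/sh -e
# Untested sketch based on the genapkovl-dhcp.sh pattern
HOSTNAME="$1"
if [ -z "$HOSTNAME" ]; then
    echo "usage: $0 hostname"
    exit 1
fi

tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT

makefile() {
    OWNER="$1"
    PERMS="$2"
    FILENAME="$3"
    cat > "$FILENAME"
    chown "$OWNER" "$FILENAME"
    chmod "$PERMS" "$FILENAME"
}

rc_add() {
    mkdir -p "$tmp"/etc/runlevels/"$2"
    ln -sf /etc/init.d/"$1" "$tmp"/etc/runlevels/"$2"/"$1"
}

# packages to have installed in the live system
mkdir -p "$tmp"/etc/apk
makefile root:root 0644 "$tmp"/etc/apk/world <<EOF
alpine-base
openssh
EOF

# allow root login over ssh (adjust to taste; key-based auth is safer)
mkdir -p "$tmp"/etc/ssh
makefile root:root 0644 "$tmp"/etc/ssh/sshd_config <<EOF
PermitRootLogin yes
EOF

# start sshd in the default runlevel
rc_add sshd default

tar -c -C "$tmp" etc | gzip -9n > $HOSTNAME.apkovl.tar.gz
(The profile would then reference it with apkovl="genapkovl-mkimgoverlay.sh", as the guide describes.)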
After this I tried to do SSH in the initramfs with dropbear-initramfs.
But that also doesn't work. With encrypted filesystems it has always worked, but for this task I can't get a connection.
Does someone have a different idea of how I can accomplish this task?

Is there any short A to Z description of how to debug the Linux kernel that has been tested and contains ALL necessary steps ? Esp. for Yocto?

Debugging the Linux kernel with kgdb over RS-232 needs several preparation steps. I found awesome documentation, but no single source that is fully self-contained, summarizes all the steps needed, does not explain at length, has been tested, and also covers Yocto.
Is there any source that covers all that is needed in one single and short description ?
I.e.:
What files are needed in the directory GDB is started from (e.g. kernel awareness, source, vmlinux), how to get these, and where to put them?
When and where to get a cross-gdb?
ALL kernel config options needed, also the not-obvious ones (like CONFIG_RANDOMIZE_BASE)
How to configure the serial ports
Explaining a working back and forth of breaking into debugee and debugger to get started.
Explaining one rock-solid option of stopping the kernel that runs everywhere.
Explaining how to get this done not only for PC-PC debugging, but also for Yocto targets.
Debugging the Linux Kernel via a Nullmodem-Cable:
It took me a while to get a kgdb connection with Linux kernel awareness fully running. I share my way of doing this with Ubuntu Eoan (optional: Yocto Warrior) in 2020 here:
Tested with:
Debugging a Linux-based Intel PC from an Intel MacBook running macOS Catalina, using the gdb from the Homebrew package "i386-elf-gdb" (without the "-tui" option in GDB).
Debugging a Linux-based ARM target (i.MX6, Yocto) from a Linux-based Intel PC.
Prerequisites:
You need two computers and a serial null-modem cable. Check the cable by firing up a serial terminal (e.g. screen or PuTTY) on both hosts, connecting to your serial port (e.g. /dev/ttyS0 or /dev/ttyUSB0), and printing characters from each station to the other. Remember the /dev/tty ports you confirmed.
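(For example, assuming 115200 baud and the device names above; adjust to whatever your hardware actually uses:)
On the first host: screen /dev/ttyS0 115200
On the second host: screen /dev/ttyUSB0 115200
Characters typed in one session should appear in the other; quit screen with Ctrl-a k.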
Preparation:
On the first computer, the debuggee, which we call the "target", you need:
A special kernel installed that contains symbols, kgdb support, etc.
Learn how to compile and install a kernel and use the configuration below in make menuconfig. You can search for symbols with F8 or the / key in menuconfig.
(E.g. wiki.ubuntu.com. There, take care in the first paragraph to set up deb-src before apt-get.)
# CONFIG_SERIAL_KGDB_NMI is not set
CONFIG_CONSOLE_POLL=y
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
# CONFIG_KGDB_TESTS is not set
# CONFIG_KGDB_KDB is not set
CONFIG_FRAME_POINTER=y
CONFIG_DEBUG_INFO=y
# CONFIG_DEBUG_INFO_REDUCED is not set
# CONFIG_DEBUG_INFO_SPLIT is not set
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_GDB_SCRIPTS=y
CONFIG_STRIP_ASM_SYMS=y
# CONFIG_RANDOMIZE_BASE is not set
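(For a plain PC debuggee, the compile-and-install step itself is the standard one; roughly, and untested here, from the configured kernel source tree:)
make -j$(nproc)
sudo make modules_install
sudo make install
(Then reboot the target into the new kernel. On Ubuntu, make install normally takes care of the initramfs and the GRUB entry.)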
(Note for advanced Yocto use, skip if you're debugging a PC:
In Yocto I created in my layer a file recipes-kernel/linux/linux-mainline_%.bbappend with the content:
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://kgdb.cfg"
And in files/kgdb.cfg I added the config fragment shown above (without CONFIG_RANDOMIZE_BASE and CONFIG_FRAME_POINTER, which are not available on ARM)
)
On the second computer, the debugger, which we call the "debugger PC", you need:
The full kernel source code, the same code you used to compile the kernel above. (If you compiled the .o and .ko objects in place and not in a separate build folder, it's better not to copy the directory from the other PC where you ran make; grab fresh sources instead.)
The vmlinux file containing the symbols (it lies at the top level of the kernel source root, or of the build folder, after the kernel make).
The vmlinux-gdb.py file that was generated during the kernel build (it also lies at the same top-level position).
All scripts in the folder scripts/gdb (the scripts folder is in the same top-level position; if you use a dedicated build folder, use the scripts folder from there, not from the source folder).
(Advanced: if the two computers don't have the same CPU architecture, like Intel and ARM, you also need a cross-gdb build. Ignore this if you're on Intel/AMD.)
Note for advanced Yocto use: I did something like the following (ignore if you debug a PC):
bitbake -c patch virtual/kernel #(apply the changed kernel config from above)
bitbake -f -c compile virtual/kernel #(unpack is not sufficient because of vmlinux-gdb.py)
mkdir ~/gdbenv
cp -a tmp/work-shared/phyboard-mira-imx6-14/kernel-source/. ~/gdbenv
cp tmp/work/phyboard_mira_imx6_14-phytec-linux-gnueabi/linux-mainline/4.19.100-phy1-r0.0/build/vmlinux ~/gdbenv
cp tmp/work/phyboard_mira_imx6_14-phytec-linux-gnueabi/linux-mainline/4.19.100-phy1-r0.0/build/vmlinux-gdb.py ~/gdbenv
mkdir ~/gdbenv/scripts
cp -r tmp/work/phyboard_mira_imx6_14-phytec-linux-gnueabi/linux-mainline/4.19.100-phy1-r0.0/build/scripts/gdb ~/gdbenv/scripts
Then (ignore if you're on a PC)
bitbake -c populate_sdk [my-image]
Then (still ignore on a PC) install the SDK .sh installation file from your deploy directory on the debugger PC and start the environment as guided by the output of the install script (remember that command); then use "$GDB" to start the cross-gdb instead of "gdb".
Debug execution
Launch on the debugger two console screens:
Console 1, ssh: +++++++++++++++++++++++++++++++++++++++
ssh user@192.168.x.y
sudo -s
echo ttyS0,9600n8 > /sys/module/kgdboc/parameters/kgdboc
echo 1 > /proc/sys/kernel/sysrq
echo g > /proc/sysrq-trigger
Console 2, local: ++++++++++++++++++++++++++++++++++++++++
cd ~/gdbenv
gdb -tui ./vmlinux
add-auto-load-safe-path ~/gdbenv
source ~/gdbenv/vmlinux-gdb.py
set serial baud 9600
target remote /dev/ttyS0 (use the tty port you confirmed in the beginning)
b [name of the C function you want to debug]
cont
Back to console 1, ssh: +++++++++++++++++++++++++++++++++++++++
[Now trigger the function, e.g. sudo modprobe yourFancyKernelModule]
Back to console 2, local: ++++++++++++++++++++++++++++++++++++++++
Now use gdb functions, like bt, step, next, finish ...
You can also use Linux-aware commands. Call "apropos lx" in gdb for a list of commands.

How to reduce the build time while cross-compiling atlas for armv7?

I am trying to cross-compile the ATLAS library for an ARMv7 Cortex-A9 processor.
When I run make build it takes more than five hours to build the library from source. I think the problem is that it runs all the sanity tests. Is there a way to skip this?
Host system: Ubuntu 16.04 in a VirtualBox VM with 4 GB RAM allocated and 2 cores.
Target system: Cortex-A9, little-endian ARMv7 architecture
Build process:
export PATH=$PATH:<path to the ARM toolchain from Buildroot>
export CC=arm-linux-gcc
export ARCH=arm
export RANLIB=arm-linux-ranlib
export STRIP=arm-linux-strip
export LD=arm-linux-ld
export CPP=arm-linux-cpp
export AR=arm-linux-ar
export AS=arm-linux-as
export FC=arm-linux-gfortran
downloaded the ATLAS library
tar -xf atlas.3.10.3.tar.gz
cd ATLAS
mkdir test
cd test
../configure -Si archdef 0
make build
It would be helpful to know whether I am missing some steps in between, or whether there are any options I can pass to make so that the sanity tests don't run and I get the output sooner.
Though it doesn't answer your question, just FYI: the modern approach is to use Docker for building, CI tests and so on. A VM (such as VirtualBox) will eat more resources.
For ARM cross-compilation you may consider https://github.com/dockcross/dockcross; it has an image for Cortex-A9 as well.
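(Typical dockcross usage, sketched from its README; the exact image name for your target is an assumption, so check which armv7/Cortex-A9 image fits:)
docker run --rm dockcross/linux-armv7 > ./dockcross-linux-armv7
chmod +x ./dockcross-linux-armv7
./dockcross-linux-armv7 bash -c '$CC -o hello hello.c'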
If your makefile runs long tests, then there may indeed be an option to skip them. Check the makefile to see whether the author implemented something for that purpose.

preparing data for haar training OpenCV

I want to make my own Haar classifier for hand detection, so I was following the tutorial given at
Naotoshi Seo. In this tutorial various Linux commands are used and I don't have Linux.
Some commands are:
$ find [image dir] -name '*.[image ext]' > [description file]
$ createsamples -info samples.dat -vec samples.vec -w 20 -h 20
So how can I use these commands on my Windows machine?
The "find" command creates a text file with the names of your image files. You can do that by hand if you want.
The second one is not a standard Linux command, but, I assume, one of the tools you need to create the classifier. If you want to use Windows you will have to download the Windows version of the tools in that tutorial. He uses Cygwin so you will need to install that as well.
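(Once Cygwin is installed, the find command from the tutorial should work unchanged from the Cygwin terminal; for example, assuming your positive images sit in a positive_images folder:)
$ find ./positive_images -name '*.jpg' > positives.dat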
Another option is to download the Ubuntu Live distribution and install it on a USB memory stick. You can then boot from the USB stick, create your classifier in Linux, then go back to Windows when you're done.
