Builtin Platform Driver __initcall Not Called on Linux Kernel Init - linux

Background
I am bringing up a Linux kernel via Yocto for some vendor-provided embedded hardware. I have configured the image to boot via fitImage with an initramfs and no rootfs (there is persistent storage but this is entirely for userspace application use). Think PXE live image and you won't be far off.
Things have been going well until my initramfs image crossed the ~128MB mark. Below this mark, everything boots as expected and all drivers bind without issue. Above it, the kernel still boots but many drivers, though not all, are never bound. This is quite perplexing as all drivers are statically built into the kernel (no modules are used on this platform). Unfortunately, one of these drivers runs the platform watchdog, which causes entirely predictable reboots.
Thus far I have verified that all of the symbols are present in the vmlinux image:
$ objdump -x vmlinux | grep mtk_wdt
0000000000000000 l df *ABS* 0000000000000000 mtk_wdt.c
ffffff800880ac40 l F .text 000000000000004c mtk_wdt_stop
ffffff800880ac90 l F .text 0000000000000040 mtk_wdt_shutdown
ffffff80091de778 l F .init.text 0000000000000020 mtk_wdt_driver_init
ffffff800880acd0 l F .text 000000000000004c mtk_wdt_ping
ffffff800880ad20 l F .text 0000000000000070 mtk_wdt_set_timeout
ffffff800880ad90 l F .text 0000000000000074 mtk_wdt_start
ffffff800880ae08 l F .text 0000000000000144 mtk_wdt_resume
ffffff800880af50 l F .text 0000000000000120 mtk_wdt_suspend
ffffff800880b070 l F .text 0000000000000080 mtk_wdt_remove
ffffff80088977a8 l F .text 0000000000000210 mtk_wdt_isr
ffffff80091fe0a0 l F .exit.text 000000000000001c mtk_wdt_driver_exit
ffffff800880b4f0 l F .text 0000000000000310 mtk_wdt_probe
ffffff8008c2acd8 l O .rodata 0000000000000028 mtk_wdt_info
ffffff8008c2ad00 l O .rodata 0000000000000050 mtk_wdt_ops
ffffff8008c2ad98 l O .rodata 00000000000000b8 mtk_wdt_pm_ops
ffffff8008c2ae50 l O .rodata 0000000000000190 mtk_wdt_dt_ids
ffffff80093a3cb8 l O .data 00000000000000b0 mtk_wdt_driver
ffffff800a199368 l O .bss 0000000000000008 mtk_wdt1
ffffff8009285598 l O .init.data 0000000000000008 __initcall_mtk_wdt_driver_init6
Additionally, I have sha256 checksums for the binary kernel (e.g. linux.bin), the initramfs and the device tree taken before fitImage assembly, after fitImage assembly, and after unpacking into system memory (via the bootloader); all match. As near as I can tell, what gets built is what gets unpacked and booted.
Furthermore, I have enabled initcall_debug and, while I see other __initcall()s, the non-bound drivers are, unsurprisingly, missing.
I know the devices are present in the device tree and correctly configured. After boot, I get about 5 seconds of console access before the watchdog kicks; just enough time to get a command or two off. On working images (initramfs < ~128MB) and on failing images (initramfs > ~128MB) the contents of /sys/bus/platform/devices are identical and I can see (among others) my watchdog:
$ ls -lha /sys/bus/platform/devices
...
lrwxrwxrwx 1 root root 0 Jan 1 00:00 10007000.watchdog -> ../../../devices/platform/10007000.watchdog
Performing the same test but comparing /sys/bus/platform/drivers shows the drivers which weren't __initcall()ed as missing.
Some other things I have checked:
Device Tree. The same DTB is used in both the working and broken images. I have also verified the device tree in-memory as mentioned above. Device tree overlays are not used.
Real memory load offsets. Everything is where it should be and there is plenty of space in each region. I can move the kernel around in memory and the issue persists regardless of location.
Bad memory. This failure happens identically across multiple units.
Bad compression. The issue manifests regardless of kernel / initramfs compression. Currently I am testing with everything uncompressed to minimize breakage points.
Bad signing. I have disabled signature verification (applied to the fitImage partition image after all else); no dice there either.
Attempting to bundle the initramfs directly into the kernel. No change. Right now the initramfs is built into the fitImage but otherwise loaded and verified independently.
Bad kernel command line arguments. I am using root=/dev/ram initrd=0x48000000,384M and have traced this all the way into init/initramfs.c where unpacking is done. I was able to verify the offsets passed are, indeed, in the correct virtual memory space and sum to 384M.
Updating the kernel linker script per this forum post. I am able to see a .initramfs section generated in vmlinux via objdump but yet the issue persists.
Given all of the above, the only thing I don't know how to verify is the jump from vmlinux to linux.bin. This is done in Yocto via objcopy as follows:
[ -n "${vmlinux_path}" ] && ${OBJCOPY} -O binary -R .note -R .comment -S "${vmlinux_path}" linux.bin
Questions
How can I verify a given symbol is included in the final linux.bin?
What mechanisms would affect inclusion or exclusion of a given symbol at build time?
Which pieces of the kernel build and runtime are affected by initramfs size?
Are there any other tools / techniques / tribal wisdom which can help debug this situation?
EDIT 1:
Below is the basic memory map of where everything lands and space utilization. As mentioned above and in the comments, I can relocate the kernel, DTB and initramfs to (almost) arbitrary locations but the issue still persists.
0x40000000 - 0x40001000 = Bootloader arg area (Fixed usage)
0x40080000 - 0x41EDFFFF = Kernel (~12MB of 29.5MB used)
0x41E00000 - 0x42FF5FFF = Trampoline (96 bytes of ~6MB used)
0x42FF6000 - 0x42FFFFFF = ATF BL3-1 (Fixed usage)
0x43000000 - 0x43FFFFFF = Trusted OS (~476K of 16M used)
0x44000000 - 0x44FFFFFF = DTB (~77.3K of 16M used)
0x45000000 - 0x47FFFFFF = Trusted OS memory (dynamic)
0x48000000 - 0x5FFFFFFF = Initramfs (~129MB of 384MB used)
0x60000000 - MEM END = Free

So, like most kernel issues, the real problem was not where I thought it was. As it turns out, it was caused by one of the other drivers earlier in the init list hanging the core, preventing the watchdog driver from being registered. How this is affected by the initramfs size is beyond me and is its own question.
For anyone who comes across this in the future, the answers to my specific questions above are listed below:
How can I verify a given symbol is included in the final linux.bin?
I was not able to figure out how to do this statically. That said, I was able to print the addresses of the init functions at runtime by adding printk()s to do_initcall_level in init/main.c. The addresses printed can then be compared to the output of objdump on vmlinux (see my question for the incantation).
A really useful and in-depth description of the initcall process can be found here.
Note that you can also turn on initcall_debug, which will print each function name. In my case I wanted raw addresses which is why I chose the printk() method.
What mechanisms would affect inclusion or exclusion of a given symbol at build time?
Most of this boils down to your .config. The vast majority of inclusion / exclusion is done via the preprocessor. Other useful items are the common linker script header at include/asm-generic/vmlinux.lds.h and the platform linker script for your device at arch/<arch>/*/*.lds.
Which pieces of the kernel build and runtime are affected by initramfs size?
No idea on this one, still.
Are there any other tools / techniques / tribal wisdom which can help debug this situation?
Don't panic

Related

MIPSEL GLIBC sem_init() not shared

I'm currently developing an application that is meant to run on Linux for MIPSEL platform.
The application uses POSIX semaphores inside one of its DSO, and the attention in my case is for the sem_init() function used inside one DSO on which the app depends.
The application currently uses GLIBC 2.31, but I observed the same phenomenon with GLIBC 2.26.
Now I'm going to explain the anomaly and the current status of the troubleshooting.
All happened when I noticed a process waiting on a semaphore wouldn't wake when the semaphore incremented its value.
Just for the record, I'm aware that for multiprocessing, the semaphore must sit in a memory area that is shared between the processes that use it.
Now, investigating further using strace, I also noticed that the semaphore I initialized with sem_init(&semaphore, 1, 0) turned out to be private, even though I initialized it to be shared:
futex(0x774ce0bc, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, {tv_sec=1615204364, tv_nsec=515578961}, FUTEX_BITSET_MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
The next step I performed on this issue was to debug the semaphore initialization with gdb. There I discovered that in this particular combination (MIPSEL, Linux, GLIBC) sem_init() exists in two versions:
$ mipsel-linux-objdump -T libpthread.so.0 | grep sem_init
00012e30 g DF .text 00000040 (GLIBC_2.0) sem_init
00012dd0 g DF .text 00000060 GLIBC_2.2 sem_init
For some reason, the linker decided to link my code against the older (not supporting shared semaphores) instead of the newer.
Looking at the GLIBC code, particularly where sem_init() is implemented (nptl/sem_init.c), I realized that sem_init() is an alias and that two distinct symbols uniquely identify the two implementations.
My next move was then to call the function I needed, __new_sem_init(), directly, just to be sure no linker alchemy I'm not aware of would interfere with my intentions.
Unfortunately, that symbol is not exported, so it can't be used.
Looking at my library liba.so, which uses sem_wait(), sem_init(), and sem_post(), I noticed that these symbols are all exported by libpthread.so.0.
$ mipsel-linux-nm -D libpthread.so.0 | grep sem_
00014a10 T sem_clockwait
00013784 T sem_close
00012e70 T sem_destroy
00012e70 T sem_destroy
00013b14 T sem_getvalue
00013b00 T sem_getvalue
00012e30 T sem_init
00012dd0 T sem_init
000131b8 T sem_open
00014ab0 T sem_post
00014b90 T sem_post
00014508 T sem_timedwait
00013f40 T sem_trywait
00013fac T sem_trywait
00013920 T sem_unlink
00013eb0 T sem_wait
00014020 T sem_wait
But looking at the liba.so dependencies, I was surprised by the fact that the library depends only on the main libc library (libc.so.6) and has no dependency on libpthread.so.0, where the actual sem_* symbols are implemented.
$ /lib/ld-2.31.so --list /usr/lib/liba.so
linux-vdso.so.1 (0x7ffaa000)
libc.so.6 => /lib/libc.so.6 (0x77ddc000)
/lib/ld-2.31.so (0x77f8a000)
I guess this might be strongly related to my problem, but I have no clue how to link these two facts.
How could liba.so be compiled and linked using symbols its dependencies do not provide?
$ mipsel-linux-objdump -T libc.so.6 | grep sem
000fab80 g DF .text 00000070 GLIBC_2.0 semget
000fabf0 g DF .text 000000b0 GLIBC_2.2 semctl
000faca0 w DF .text 00000074 GLIBC_2.3.3 semtimedop
000359d0 g DF .text 00000074 GLIBC_2.0 sigisemptyset
00149854 g DF .text 000000b0 (GLIBC_2.0) semctl
000fab60 g DF .text 00000018 GLIBC_2.0 semop
$
How to force the GNU linker to link against sem_init@@GLIBC_2.2 instead of sem_init@GLIBC_2.0?
It seems like my problem was indeed an underlinking issue.
After proper research in the literature and a considerable amount of testing, I've finally been able to resolve my original problem: making my application use the correct version of sem_init().
My suspicion about my library not carrying the libpthread dependency turned out to be the right path to follow.
Checking my Makefile, I saw the -lpthread switch was missing from the command that creates the shared library.
That was the root of the problem, and once I added the switch, everything started working as expected.
One of the things that delayed the solution was that building the same library with the same Makefile for a different target did include the libpthread dependency in the library.
But I do not have any explanation of why the dynamic linker picked the oldest symbol, sem_init@GLIBC_2.0, in place of sem_init@@GLIBC_2.2.
I understand the linker was able to link my sem_init() because the original executable has libpthread as a dependency, so at link time the library was there and the symbol could be found.
What I cannot explain is why it picked sem_init@GLIBC_2.0.
According to these sources
http://peeterjoot.com/2019/09/20/an-example-of-linux-glibc-symbol-versioning/ The @@ one means that it applies to new code, whereas @MYSTUFF_1.1 marks a load-only version: no new code can link against that symbol.
https://developers.redhat.com/blog/2019/08/01/how-the-gnu-c-library-handles-backward-compatibility/ The @@ tells the dynamic linker that this version is the default version.
https://web.archive.org/web/20100430151127/http://www.trevorpounds.com/blog/?33 The double @@ can only be defined once for a given symbol, since it denotes the default version to use.
If a symbol reference is unversioned, the definition that carries the double @@ should be picked by the dynamic linker.
Is this statement true only if the symbol is unversioned and the object where the symbol is required properly contains a dependency on the DSO containing that symbol?

Cannot Make Code Segment Execute-Only (Not Readable)

I'm trying to make the Code Segment Execute-Only (Not Readable).
But I FAILED after I tried everything the Manual told me to. Here is what I did to make the code segment unreadable.
>uname -a
Linux Emmet-VM 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:18:00 UTC 2015 i686 i686 i686 GNU/Linux
>lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty
First, I've found this in "Intel(R)64 and IA-32 Architectures Software Developer's Manual(Combined Volumes 1,2A,2B,2C,2D,3A,3B,3C and 3D)":
Set read-enable bit to enable read, and Segment Types. (Sorry, I'm still not allowed to embed pictures in my posts, so links instead.)
So, I guess if I change %CS, and let it point to a Segment Descriptor which has read-enable bit set as 0, I should make the Code Segment not readable.
Then, I use the code below to insert a new Segment into LDT.entry[2], and I do set the code segment type to 8, aka 1000B, which means "Execute-Only" according to "Segment Types" link posted above:
#include <asm/ldt.h>       /* struct user_desc */
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

typedef struct user_desc UserDesc;
UserDesc *seg = (UserDesc*)malloc(sizeof(UserDesc));
seg->entry_number = 0x2;     /* LDT entry 2 */
seg->base_addr = 0x00000000;
seg->limit = 0xffffffff;
seg->seg_32bit = 0x1;
seg->contents = 0x02;        /* code segment */
seg->read_exec_only = 0x1;   /* type 8 (1000B): execute-only */
seg->limit_in_pages = 0x1;
seg->seg_not_present = 0x0;
seg->useable = 0x0;
/* modify_ldt has no glibc wrapper, so invoke it via syscall(2) */
int ret = syscall(SYS_modify_ldt, 1, (void*)seg, sizeof(UserDesc));
After that, I change %CS to 0x17 (00010111B: index 2 in the LDT, with TI=1 and RPL=3) with ljmp.
asm("ljmp $0x17, $reload_cs\n"
"reload_cs:");
But, even with this, I still can read the byte code in code segment:
void foo() {printf("foo\n");}
void test(){
char* a = (char*)foo;
printf("0x%x\n", (unsigned int)a[0]);// This prints 0x55
}
If the code segment were unreadable, the code above should throw a segmentation fault. But it prints 0x55 successfully.
So, I wonder, is there any mistake I've made during my test?
Or is this just a mistake in Intel's Manual?
You are still accessing the code through DS when doing (unsigned int)a[0].
Write only segments don't exist (and if they did, it would be a bad idea to set DS write only).
If you did everything correctly mov eax, [cs:...] (NASM syntax) will fail (but mov eax, [ds:...] won't).
After a quick glance at the Intel manual, execute-only pages do not exist (at least not directly), so using mprotect with PROT_EXEC may be of limited use (the code would still be readable).
Worth a shot, though.
There are three ways around this.
None of which can be implemented without the aid of the OS though, so they are more theoretical than practical.
Protection keys
If the CPU supports them (See section 4.6.2 of the Intel manual 3), they introduce an asymmetry in how code and data are read.
Reading data is subject to the key protection.
Fetching however is not:
How a linear address's protection key controls access to the address depends on the mode of the access:
A linear address's protection key controls only data accesses to the address. It does not in any way affect instruction fetches from the address.
So it's possible to set, for the code pages, a protection key that your application doesn't have enabled in its PKRU register.
You would still be allowed to execute the code but not to read it.
Desync the TLBs
If your application has never touched the code pages for reading, they will occupy some entries in the ITLB but not in the DTLB.
If then the OS maps them as supervisor-only without flushing the TLBs, accessing them as data is prevented (since no DTLB entries for those pages are present, forcing a walk of the page tables), but thanks to the ITLB entries the code can still be fetched.
This is more involved in practice, as code spans multiple pages and is actually read as data by the OS.
EPT
The Extended Page Tables are used during virtualization to translate guest physical addresses to host physical addresses.
Though they seem like just another level of indirection, they have separate Read, Write and Execute control bits.
A paper has been written about preventing the leakage of the kernel code (to counteract dynamic Return Oriented Programming).

How to get linux ebpf assembly?

I want to learn about the Linux eBPF VM. If I write an eBPF program test.c and compile it with LLVM:
clang -O2 -target bpf -c test.c -o test.o
how do I get the eBPF assembly, the way tcpdump -d shows it for classic BPF? Thanks.
This depends on what you mean exactly by “learn[ing] linux ebpf vm”.
The language itself
If you mean learning about the instructions of eBPF, the assembly-like language itself, you can have a look at the documentation from the kernel (quite dense) or at this summarized version of the syntax from bcc project.
The virtual machine
If you prefer to see how the internals of the eBPF virtual machine work, you can either have a look at various presentations (I recommend those from D. Borkmann), I have a list here in this blog post; or you can directly read at the kernel sources, under linux/kernel/bpf (in particular file core.c). Alternatively, there is a simpler userspace implementation available.
Dump eBPF instructions
Now if you want to see the code that has been compiled from C to eBPF, here are a couple of solutions.
Read the object file
For my part I compile with the command presented in the tc-bpf man page:
__bcc() {
clang -O2 -emit-llvm -c $1 -o - | \
llc -march=bpf -filetype=obj -o "`basename $1 .c`.o"
}
alias bcc=__bcc
The code is translated into eBPF and stored in one of the sections of the ELF file produced. Then I can examine my program with tools such as objdump or readelf. For example, if my program is in the classifier section:
$ bcc return_zero.c
$ readelf -x classifier return_zero.o
Hex dump of section 'classifier':
0x00000000 b7000000 02000000 95000000 00000000 ................
In the above output, two instructions are displayed (little endian — the first field starting with 0x is the offset inside the section). We could parse this to put in shape the instructions and to obtain:
b7 0 0 0000 00000002 // Load 0x02 in register r0
95 0 0 0000 00000000 // Exit and return value in r0
[April 2019 edit] Dump an eBPF program loaded in the kernel
It is possible to dump the instructions of programs loaded (and then possibly attached to one of the available BPF hooks) in the kernel, either as eBPF assembly instructions, or as machine instructions if the program has been JIT-compiled. bpftool, relying on libbpf, is the go-to utility for doing such things. For example, one can see what programs are currently loaded, and note their ids, with:
# bpftool prog show
Then dumping the instructions for a program of a given id is as simple as:
# bpftool prog dump xlated id <id>
# bpftool prog dump jited id <id>
for eBPF or JITed (if available) instructions respectively. Output can also be formatted as JSON if necessary.
Advanced tools
Depending on the tools you use to inject BPF into the kernel, you can generally dump the output of the in-kernel verifier, that contains most of the instructions formatted in a human-friendly way.
With the bcc set of tools (not directly related to the previous command, and not at all related to the old 16-bit compiler), you can get this by using the relevant flags on the BPF object instance; with tc, as in tc filter add dev eth0 bpf obj … verbose, this is done with the verbose keyword.
Disassemblers
The aforementioned userspace implementation (uBPF) has its own assembler and disassembler that might be of interest to you: it takes the “human-friendly” (add32 r0, r1 and the likes) instructions as input and converts into object files, or the other way round, respectively.
But probably more interesting, there is the support for debug info, coming along with a BPF disassembler, in LLVM itself: as of today it has recently been merged, and its author (A. Starovoitov) has sent an email about it on the netdev mailing list. This means that with clang/LLVM 4.0+, you should be able to use llvm-objdump -S -no-show-raw-insn my_file.o to obtain a nicely formatted output.

How can I compile C code to get a bare-metal skeleton of a minimal RISC-V assembly program?

I have the following simple C code:
void main(){
int A = 333;
int B=244;
int sum;
sum = A + B;
}
When I compile this with
$ riscv64-unknown-elf-gcc code.c -o code.o
and then want to see the assembly code, I use
$ riscv64-unknown-elf-objdump -d code.o
But when I explore the assembly code, I see that this generates a lot of code which I assume is for proxy kernel support (I am a newbie to RISC-V). However, I do not want this code to have proxy kernel support, because the idea is to implement only this simple C code on an FPGA.
I read that RISC-V offers three types of compilation: bare-metal mode, newlib with proxy kernel, and RISC-V Linux. Based on my research, bare-metal mode is the kind of compilation I need, since I want minimal assembly with no operating system or proxy kernel support; library functions that turn into system calls are not required.
However, I have not yet been able to find out how to compile C code to get a skeleton of a minimal RISC-V assembly program. How can I compile the C code above in bare-metal mode to get such a skeleton?
Warning: this answer is somewhat out-of-date as of the latest RISC-V Privileged Spec v1.9, which includes the removal of the tohost Control/Status Register (CSR), which was a part of the non-standard Host-Target Interface (HTIF) which has since been removed. The current (as of 2016 Sep) riscv-tests instead perform a memory-mapped store to a tohost memory location, which in a tethered environment is monitored by the front-end server.
If you really and truly need/want to run RISC-V code bare-metal, then here are the instructions to do so. You lose a bunch of useful stuff, like printf or FP-trap software emulation, which the riscv-pk (proxy kernel) provides.
First things first - Spike boots up at 0x200. As Spike is the golden ISA simulator model, your core should also boot up at 0x200.
(cough, as of 2015 Jul 13, the "master" branch of riscv-tools (https://github.com/riscv/riscv-tools) is using an older pre-v1.7 Privileged ISA, and thus starts at 0x2000. This post will assume you are using v1.7+, which may require using the "new_privileged_isa" branch of riscv-tools).
So when you disassemble your bare-metal program, it better
start at 0x200!!! If you want to run it on top of the proxy-kernel, it
better start at 0x10000 (and if Linux, it’s something even larger…).
Now, if you want to run bare metal, you’re forcing yourself to write up the
processor boot code. Yuck. But let’s punt on that and pretend that’s not
necessary.
(You can also look into riscv-tests/env/p, for the “virtual machine”
description for a physically addressed machine. You’ll find the linker script
you need and some macros.h to describe some initial setup code. Or better
yet, in riscv-tests/benchmarks/common/crt.S).
Anyways, armed with the above (confusing) knowledge, let’s throw that all
away and start from scratch ourselves...
hello.s:
.align 6
.globl _start
_start:
# screw boot code, we're going minimalist
# mtohost is the CSR in machine mode
csrw mtohost, 1;
1:
j 1b
and link.ld:
OUTPUT_ARCH( "riscv" )
ENTRY( _start )
SECTIONS
{
/* text: test code section */
. = 0x200;
.text :
{
*(.text)
}
/* data: Initialized data segment */
.data :
{
*(.data)
}
/* End of uninitialized data segment */
_end = .;
}
Now to compile this…
riscv64-unknown-elf-gcc -nostdlib -nostartfiles -Tlink.ld -o hello hello.s
This compiles to (riscv64-unknown-elf-objdump -d hello):
hello: file format elf64-littleriscv
Disassembly of section .text:
0000000000000200 <_start>:
200: 7810d073 csrwi tohost,1
204: 0000006f j 204 <_start+0x4>
And to run it:
spike hello
It’s a thing of beauty.
The link script places our code at 0x200. Spike will start at
0x200, and then write a #1 to the control/status register
“tohost”, which tells Spike “stop running”. And then we spin on an address
(1: j 1b) until the front-end server has gotten the message and kills us.
It may be possible to ditch the linker script if you can figure out how to
tell the compiler to move <_start> to 0x200 on its own.
For other examples, you can peruse the following repositories:
The riscv-tests repository holds the RISC-V ISA tests that are very minimal
(https://github.com/riscv/riscv-tests).
This Makefile has the compiler options:
https://github.com/riscv/riscv-tests/blob/master/isa/Makefile
And many of the “virtual machine” description macros and linker scripts can
be found in riscv-tests/env (https://github.com/riscv/riscv-test-env).
You can take a look at the “simplest” test at (riscv-tests/isa/rv64ui-p-simple.dump).
And you can check out riscv-tests/benchmarks/common for start-up and support code for running bare-metal.
The "extra" code is put there by gcc and is the sort of stuff required for any program. The proxy kernel is designed to be the bare minimum amount of support required to run such things. Once your processor is working, I would recommend running things on top of pk rather than bare-metal.
In the meantime, if you want to look at simple assembly, I would recommend skipping the linking phase with '-c':
riscv64-unknown-elf-gcc code.c -c -o code.o
riscv64-unknown-elf-objdump -d code.o
For examples of running code without pk or linux, I would look at riscv-tests.
I'm surprised no one mentioned gcc -S, which skips assembling and linking altogether and outputs assembly code. It comes with a bunch of boilerplate, but it may be convenient just to poke around.

How to boot this code?

I am a newbie to assembly and I program in C (using GCC on Linux).
Can anyone here tell me how to compile C code into assembly and boot it from a pen drive?
I use the following command (in a Linux terminal):
gcc -S bootcode.c
This gives me a bootcode.s file.
What do I do with that?
I just want to compile the following code and run it directly from a USB stick:
#include<stdio.h>
void main()
{
printf ("hi");
}
Any help here?
First of all, you should be aware that when you write bootloader code you are CREATING YOUR OWN ENVIRONMENT. That means there is no ready-made C library available to you or anything similar; there are ONLY BIOS SERVICES (interrupt routines).
If you got that, you will probably figure out that the above code won't boot: in that environment there is no C library behind the stdio.h header, so nothing provides an implementation of printf for your compiled code to call.
So if you want to print a string, you need to write that function YOURSELF, either in a separate file (declared in a header and its object file linked in when creating the final binary) or in the same file; it is up to you. There could be other ways; I'm not well familiar with them, just do some research.
Another thing you should know: it is the BIOS that is responsible for loading this boot code (your above code, in your case) into memory location 0x7C00 (0x0000:0x7C00 in segment:offset representation), so you HAVE to account for this load address in your code, either by:
1- using the ORG instruction,
2- or by loading the appropriate segment registers for that (cs, ds, es).
Also, you should get familiar with the segment:offset memory representation scheme; just google it or read the Intel manuals.
Finally, for the BIOS to load your code from 0x7C00, the boot code must fit in the FIRST SECTOR OF THE BOOTABLE MEDIA (a sector is 512 bytes, so the code must not exceed that), and the BIOS must find the boot signature 0x55AA in the last two bytes of that sector (bytes 510 and 511); otherwise it won't consider this code BOOTABLE.
Usually this is coded as :
ORG 0x7C00
...
your boot code, plus code to load more sectors, since 512 bytes won't be sufficient
...
times 510 - ($ - $$) db 0x00 ; Zerofill up to 510 bytes
dw 0xAA55 ; Boot sector signature, written in reverse order since it is stored in little-endian notation
Just to let you know, I'm not covering everything here, because if so, I'll be writing pages about it, you need to look for more resources on the net, and here is a link to start with(coding in assembly):
http://www.brokenthorn.com/Resources/OSDevIndex.html
That's all, hopefully this was helpful to you...^_^
Khilo - ALGERIA
Booting a computer is not that easy. A bootloader needs to be written. The bootloader must obey certain rules and work with the hardware, such as the ROM. You also need to disable interrupts, reserve some memory, etc. Look up MikeOS; it's a great project that can help you better understand the process.
Cheers
