Does the executable file of a C++ program also contain the object code of system calls? - Linux

We use Linux system calls like fork(), pthread(), signal() and so on in C or C++ programs and compile the program to generate an executable file (a.out). Now my doubt is whether a.out contains the object code of all the Linux system calls used, or whether the executable contains only the calls to the system functions, with the system call functions being linked at runtime. Suppose I move my a.out file to some other Linux operating system that implements system calls differently and try to run it there: will it work?
In short: are the system call function definitions part of the a.out file?

User space binaries don't contain implementations of system calls. If they did, any user could inject arbitrary code into the kernel and take over the system.
Instead they need to switch to kernel mode, using a processor interrupt or a special instruction. The processor then executes the system call implementation inside the kernel.
A user space library such as libc is usually used; it provides stubs that convert the arguments of a syscall into the proper protocol and trigger the jump to kernel mode. The library is usually linked dynamically, so these stubs don't appear in the executable file either.
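For illustration, here is a minimal sketch of roughly what such a stub does on x86-64 Linux: load the syscall number and arguments into the registers the kernel expects, then execute the syscall instruction. The raw_write name is made up for this sketch, and a real libc stub additionally deals with errno, thread cancellation and so on.

    #include <stddef.h>

    /* Roughly what a libc stub does: put the syscall number and arguments
     * into the registers the kernel expects, then enter kernel mode with
     * the syscall instruction. */
    static long raw_write(int fd, const void *buf, size_t len)
    {
        long ret;
        asm volatile("syscall"
                     : "=a"(ret)                           /* return value in rax */
                     : "a"(1),                             /* __NR_write is 1 on x86-64 */
                       "D"((long)fd), "S"(buf), "d"(len)   /* rdi, rsi, rdx */
                     : "rcx", "r11", "memory");            /* clobbered by syscall */
        return ret;
    }

    int main(void)
    {
        raw_write(1, "hi from a raw syscall\n", 22);
        return 0;
    }

In normal code you would simply call write() from libc, which wraps exactly this sequence and is resolved from the dynamically linked library at run time.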

Related

Can eBPF call dynamic libraries?

Is it possible to write an eBPF program that can dynamically call an external library? I.e. assume that this specific library is present on the host that runs the eBPF code.
Right now I don't care whether the program passes verification, only whether it is possible to express this in the bytecode. It should be assumed that the external function is not embedded in the ELF binary.
No, this is not possible at the moment.
Once loaded and attached, eBPF programs can call:
eBPF functions from the same program (eBPF-to-eBPF function calls)
other eBPF programs, under certain conditions, through tail calls
other eBPF programs of type BPF_PROG_TYPE_EXT
kernel function helpers (library of functions defined in the kernel)
arbitrary kernel functions, if they are explicitly marked as callable (expected in Linux 5.13)
It cannot call functions from user space libraries.
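For a concrete feel of what is and is not callable, here is a minimal sketch of an eBPF program in C. It assumes clang with -target bpf and libbpf's bpf_helpers.h; the tracepoint, function name and message are purely illustrative.

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("tracepoint/syscalls/sys_enter_execve")
    int trace_execve(void *ctx)
    {
        /* Calling kernel helpers is allowed... */
        __u64 ts = bpf_ktime_get_ns();
        bpf_printk("execve at %llu ns", ts);
        /* ...but there is no mechanism here to call a function from a
         * user space shared library (libc, libssl, ...). */
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";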

Where are sys_fork, sys_execve and sys_exit functions in linux kernel 4.10

I needed to analyze the sys_fork(), sys_execve() and sys_exit() kernel functions. I wrote a simple program that calls fork() and watched what system calls it uses. There was no sys_fork(). I found out that in a modern kernel fork() results in a call to clone(). And it's basically the same story with all three functions I am interested in.
I tried to look at the sources of the Linux kernel and didn't find any definitions of sys_fork(), sys_execve() or sys_exit(). They are declared in headers, but there are no definitions for any architecture.
So my question is: are these functions still used in the modern Linux kernel, or were they removed and replaced in Linux 3.x (I only found these functions in kernel 2.x)?
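For reference, the experiment described above can be reproduced with something like the following sketch (the file name and the exact trace output are assumptions, not part of the original question):

    /* fork_test.c: call fork() once and exit. */
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();        /* glibc wrapper; typically issued to the kernel as clone() */
        if (pid == 0)
            _exit(0);              /* child exits immediately */
        waitpid(pid, NULL, 0);     /* parent reaps the child */
        return 0;
    }

Building it with gcc fork_test.c -o fork_test and running strace -f ./fork_test shows a clone() call (clone3() with newer glibc) where one might expect fork().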

How to invoke a newly added system call by its function name without using syscall(__NR_mysyscall)

I am working with the Linux 3.9.3 kernel on Ubuntu 10.04. I have added a basic system call in the kernel directory of the linux-3.9.3 source tree. I am able to use it with syscall() by passing my new system call number as an argument. But I want to invoke it directly by its name, as with the getpid() or open() system calls. Can anyone help me add it to the GNU C library? I went through a few documents but did not get a clear idea of how to accomplish it.
Thanks!!!
Assuming you are on 64-bit Linux x86-64, the relevant ABI is the x86-64 ABI. Read also the x86 calling conventions wikipage, the Linux Assembly HOWTO, and syscalls(2).
So syscalls use a different convention than ordinary function calls (e.g. all arguments are passed in registers, and the error condition is reported through the raw return value rather than through errno). Hence, you need a C wrapper to make your syscall available to C applications.
You could look into the source code of existing C libraries, like GNU libc or musl libc (so you'll need to make your own library for that syscall).
The MUSL libc source code is very readable, see e.g. its src/unistd/fsync.c as an example.
I would suggest wrapping your new syscall in your own library without patching libc. Notice that some uncommon syscalls get their wrappers from a different library, e.g. request_key(2) has its C wrapper in libkeyutils.
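As a sketch of what such a wrapper might look like: the name mysyscall, its syscall number and its single argument are placeholders for whatever was added to the kernel, following the same pattern as musl's fsync.c.

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef __NR_mysyscall
    #define __NR_mysyscall 314   /* placeholder; use the number assigned in your patched kernel */
    #endif

    /* C wrapper: lets applications call mysyscall(arg) by name, just like
     * getpid() or open(). syscall(2) puts the arguments in the right
     * registers and sets errno (returning -1) on failure. */
    long mysyscall(int arg)
    {
        return syscall(__NR_mysyscall, arg);
    }

Compile this into a small static or shared library of your own, as suggested above, and link it into the programs that need the call; glibc itself stays unpatched.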

Linux kernel assembly and logic

My question is somewhat weird but I will do my best to explain.
Looking at the languages the Linux kernel uses, I see C and assembly, even though I read a text that said [quote] the second iteration of Unix was written completely in C [/quote]
I thought that was misleading, but when I said that the kernel has assembly code I immediately got two questions:
What assembly files are in the kernel and what is their use?
Assembly is architecture dependent, so how can Linux be installed on more than one CPU architecture?
And if the Linux kernel is truly written completely in C, then how can it get the GCC it needs for compiling?
I did a complete find / -name "*.s"
and got just one assembly file (asm-offset.s) somewhere under /usr/src/linux-headers-`uname -r`/
Somehow I don't think that file is what makes GCC work. So how can Linux work without assembly? Or, if it does use assembly, where is it, and how can it be stable when it depends on the architecture?
Thanks in advance
1. Why is assembly used?
Because there are certain things that can be done only in assembly, and because assembly results in faster code. For example, "you can get access to unusual programming modes of your processor (e.g. 16 bit mode to interface startup, firmware, or legacy code on Intel PCs)".
Read here for more reasons.
2. What assembly files are used?
From: https://www.kernel.org/doc/Documentation/arm/README
"The initial entry into the kernel is via head.S, which uses machine
independent code. The machine is selected by the value of 'r1' on
entry, which must be kept unique."
From https://www.ibm.com/developerworks/library/l-linuxboot/
"When the bzImage (for an i386 image) is invoked, you begin at ./arch/i386/boot/head.S in the start assembly routine (see Figure 3 for the major flow). This routine does some basic hardware setup and invokes the startup_32 routine in ./arch/i386/boot/compressed/head.S. This routine sets up a basic environment (stack, etc.) and clears the Block Started by Symbol (BSS). The kernel is then decompressed through a call to a C function called decompress_kernel (located in ./arch/i386/boot/compressed/misc.c). When the kernel is decompressed into memory, it is called. This is yet another startup_32 function, but this function is in ./arch/i386/kernel/head.S."
Apart from these assembly files, a lot of Linux kernel code uses inline assembly.
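For a flavor of that inline assembly, here is a simplified, hedged sketch in the spirit of the x86 helpers under arch/x86/include/; these are illustrations only, not the kernel's actual definitions.

    /* Reading a control register or hinting a spin loop cannot be expressed
     * in plain C, so the kernel wraps tiny asm fragments in static inline
     * functions like these. */
    static inline unsigned long read_cr3_sketch(void)
    {
        unsigned long val;
        asm volatile("mov %%cr3, %0" : "=r" (val));   /* page-table base register */
        return val;
    }

    static inline void cpu_relax_sketch(void)
    {
        asm volatile("pause" ::: "memory");           /* spin-wait hint on x86 */
    }

The real helpers live under arch/<arch>/include/asm/ and are considerably more careful about ordering and CPU quirks.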
3. Architecture dependence?
And you are right about it being architecture dependent; that's why the Linux kernel code is ported to each architecture it supports.
Linux porting guide
List of supported arch
Things written mainly in assembly in Linux:
Boot code: boots up the machine and sets it up in a state in which it can start executing C code (e.g: on some processors you may need to manually initialize caches and TLBs, on x86 you have to switch to protected mode, ...)
Interrupts/Exceptions/Traps entry points/returns: there you need to do very processor-specific things, e.g: saving registers and reenabling interrupts, and eventually restoring registers and properly returning to user mode. Some exceptions may be handled entirely in assembly.
Instruction emulation: some CPU models may not support certain instructions, may not support unaligned data access, or may not have an FPU. An option is using emulation when getting the corresponding exception.
VDSO: the VDSO is a virtual library that the kernel maps into userspace. It allows e.g: selecting the optimal syscall sequence for the current CPU (on x86 use sysenter/syscall instead of int 0x80 if available), and implementing certain system calls without requiring a context switch (e.g: gettimeofday()).
Atomic operations and locks: maybe in the future some of these could be written using C11 support for atomic operations (see the sketch after this list).
Copying memory from/to user mode: Besides using an optimized copy, these check for out-of-bounds access.
Optimized routines: the kernel has optimized versions of some routines, e.g: crypto routines, memset, clear_page, csum_copy (checksum IP data and copy it to another place in one pass), ...
Support for suspend/resume and other ACPI/EFI/firmware thingies
BPF JIT: newer kernels include a JIT compiler for BPF expressions (used for example by tcpdump, seccomp mode 2, ...)
...
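To illustrate the C11 alternative mentioned in the atomics bullet above, here is a small user space sketch; the kernel itself still relies on its own arch-specific atomic_t and cmpxchg primitives.

    #include <stdatomic.h>
    #include <stdio.h>

    int main(void)
    {
        atomic_int counter = 0;

        /* Atomic read-modify-write without hand-written assembly: the
         * compiler emits the right lock-prefixed or LL/SC sequence for
         * the target architecture. */
        atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);

        /* Compare-and-swap, the usual building block for locks. */
        int expected = 1;
        if (atomic_compare_exchange_strong(&counter, &expected, 2))
            printf("counter is now %d\n", atomic_load(&counter));
        return 0;
    }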
To support different architectures, Linux has assembly code (re-)written for each architecture it supports (and sometimes, there are several implementations of some code for different platforms using the same CPU architecture). Just look at all the subdirectories under arch/
Assembly is needed for a couple of reasons.
There are many instructions that are needed for the operation of an operating system that have no C equivalent, at least on most processors. A good example on Intel x86/64 processors is the iret instruction, which returns from hardware/software interrupts. These interrupts are key to handling hardware events (like a keyboard press) and, on older processors, system calls from programs.
A computer does not start up in a state that is immediately ready for execution of C code. For an Intel example, when execution gets to the startup routine the processor may not be in 32-bit mode (or 64-bit mode), and the stack required by C also may not be ready. There are some other features present in some processors (like paging) which need to be turned on from assembly as well.
However, most of the Linux kernel is written in C, which interfaces with some platform specific C/assembly code through standardized interfaces. By separating the parts in this way, most of the logic of the Linux kernel can be shared between platforms. The build system simply compiles the platform independent and dependent parts together for specific platforms, which results in different executable kernel files for different platforms (and kernel configurations for that matter).
Assembly code in the kernel is generally used for low-level hardware interaction that can't be done directly from C. It's like a platform-specific foundation used by the higher-level parts of the kernel that are written in C.
The kernel source tree contains assembly code for a variety of systems. When you compile a kernel for a particular type of system (such as an x86 PC), only the appropriate assembly code for that platform is included in the build process.
Linux is not the second version of Unix (or Unix in general). It is Unix compatible, but Unix and Linux have separate histories and, in terms of code base (of their kernels), are completely separate. Linus Torvalds' idea was to write an open source Unix.
Some of the lower-level things, like parts of the architecture-dependent memory management code, are done in assembly. The old (but still available) Linux system call entry point for x86, int 0x80, is also implemented in assembly. There are probably other places in the kernel implemented in assembly, but I don't know any others.
When you compile the kernel, you select an architecture to target. Depending on the target, the right assembly files for that architecture are included in the build.
The reason you don't find anything is because you're searching the headers, not the sources. Download a tar ball from kernel.org and search that.

Learning x86 assembly on Mac/BSD: Kernel built-in functions? How to know arguments / order?

I have been playing around with yasm in an attempt to gain a basic understanding of x86 assembly. From my tests, it seems you call functions in the kernel by setting the EAX register to the number of the function you want, pushing the function arguments onto the stack, and issuing the system call interrupt (int 0x80) to execute the call. This is Mac OS X / BSD style; I know Linux uses registers to hold arguments instead of the stack. Does this sound right? Is this the basic idea?
I am a little confused, because where are these functions documented? How would I know what arguments to push onto the stack, and in what order? Should I look in syscall.h for the answers? It seems there should be a specific reference for supported kernel calls other than C headers.
Also, do standard C functions like printf() rely on the kernel's built-in functions for, say, writing to stdout? In other words, does the C compiler know what the kernel functions are, and does it try to "figure out" how to take C code and translate it into kernel functions (which the assembler then translates to machine code)?
C code -> C compiler -> kernel calls / asm -> assembler -> machine binary
I'm sure these are really basic questions, but my understanding of everything that happens after the C compiler is rather muddy.
System Call Documentation
Make sure you have the Xcode Developer Tools installed to get the UNIX manpages for Mac OS X, then run man 2 intro on the command line. For a list of system calls, you can use syscall.h (which is useful for the system call numbers) or you can run man 2 syscalls. Then, to look up each specific system call, you can run man 2 syscall_name, e.g. for read you can run man 2 read.
UNIX manpages are a historically significant documentation reference for UNIX systems. Pretty much any low-level POSIX function or system call is documented in them, as are most commands. Section 2 covers just system calls, so when you run man 2 pagename you're asking for the manpage in the system calls section. Section 3 deals with library functions, so you can run man 3 sprintf the next time you want to read about sprintf.
How C Libraries relate to System Calls
As for how C libraries implement their functionality, they usually build everything on top of system calls, especially on UNIX-like operating systems. malloc internally uses mmap() or brk() on a lot of platforms to get hold of the actual memory for your process, and I/O functions will often use buffers together with read and write calls. If some other mechanism or library provides the needed functionality, they may choose to use that instead (e.g. some C libraries for DOS may use direct BIOS interrupts instead of calling only DOS interrupts, whereas C libraries for Windows might use Win32 API calls).
Often only a subset of the library functions will need system calls or underlying mechanisms to be implemented though, since the remainder can be written in terms of that subset.
To actually know what's going on with your specific implementation, you should investigate what's happening in a debugger (just keep stepping into all the function calls) or browse the source code of the C library you're using.
How your C code using C libraries relates to machine code
In your question you also suggested:
C code -> C compiler -> kernel calls / asm -> assembler -> machine binary
This is combining two very different concepts. Functions and function calls are supported at the machine code and assembly level, so your C code has a very direct mapping to machine code:
C code -> C compiler -> Assembler -> Linker -> Machine Binary
That is, the compiler translates your function calls in C to function calls in Assembly and system calls in C to system calls in Assembly.
However on most platforms, that machine code contains references to shared libraries and functions in those libraries, so your machine code might have a function that calls other functions from a shared library. The OS then loads that shared library's machine code (if it hasn't been loaded yet for something else) and then runs the machine code for the library function. Then if that library function calls system calls via interrupts, the kernel receives the system call request and does low-level operations directly with the hardware or the BIOS.
So in a protected mode OS, your machine code can be seen as doing the following:
Function calls to -> other function calls (which loop back into more
                     function calls or system calls)
                  -> system calls to -> direct hardware access (inside kernel)
                                     -> BIOS calls (inside kernel)
You can, of course, call system calls directly in your program as well, skipping the need for any libraries, but unless you're writing your own library, there's usually very little need to do this. If you want even lower-level access, you have to write kernel-level code such as drivers or kernel subsystems.
The recommended way is not to issue INT 0x80 yourself, but to use the wrapper functions from the standard library. These are, of course, available from assembly as well.
Concerning printf, it works this way:
printf internally calls fprintf(stdout, ...), which in turn uses the FILE * stdout to write to file descriptor 1, i.e. it ends up doing write(1, ...). That calls a small wrapper function which loads the arguments into the proper registers and performs the kernel call.
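A minimal way to see those layers from C (Linux is shown for concreteness; on Mac OS X / BSD the wrapper ends in a different trap, but the layering is the same):

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "hello via the libc wrapper\n";

        /* The libc wrapper: loads the syscall number and arguments into
         * the right registers and performs the kernel call for us. */
        write(STDOUT_FILENO, msg, strlen(msg));

        /* The same call, naming the syscall number explicitly
         * (SYS_write comes from sys/syscall.h and is platform-specific). */
        syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }

A syscall tracer such as strace (Linux) or dtruss (Mac OS X) shows both lines arriving in the kernel as the same write system call.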
