How do I create a simple function that returns a string on an ARM platform?
procedure Main is
   function tst_func return String is
   begin
      return "string";
   end tst_func;
   str : String := tst_func; -- <-- Doesn't work, runtime error.
   -- AdaCore GPL compiler, cross-development, arm-elf hosted on Windows.
   -- Hardware is an STM32F407 Discovery board.
begin
...
The problem is a bug in the runtime system: if your program doesn’t involve any tasking, the environment task’s secondary stack isn’t set up properly, so when your function tries to return a string it thinks the secondary stack has been exhausted and raises Storage_Error.
I have reported this to AdaCore: their recommendation was to include
delay until Ada.Real_Time.Clock;
in your main program.
The bug will likely be resolved in the next GNAT GPL release.
The issue here seems to be that using Ada on small embedded CPUs like the STM32 (ARM Cortex) or the Atmel AVR or TI MSP430 often involves compromises, because the platform may not be capable of running a full Ada RTS (Runtime System) including things like tasking.
Instead, a minimal RTS may be supplied, with restrictions specified by pragmas, that doesn't support tasking or, in this case, features requiring the secondary stack. Funnily enough, the RTS for the AVR does include the files s-secsta.ads/.adb, which implement package System.Secondary_Stack, so the much more powerful STM32 ought to be capable of it. You could look at the RTS sources supplied with the AdaCore GPL package to see whether these files are present or not.
So - options.
1) Work around it, either using fixed-length strings, or a table of string constants, or returning an access String (i.e. a pointer) to a string allocated on the heap (don't forget to free it!), though heap use is not normally recommended in embedded programming.
2) Find a better RTS. You can compile and link against a different RTS by supplying --RTS=... arguments to the compiler. Here is a thread discussing alternative RTS strategies for this CPU.
Related
I have some code that depends on CPU and OS support for various CPU features.
In particular I need to check for various SIMD instruction set support.
Namely sse2, avx, avx2, fma4, and neon.
(NEON being the ARM SIMD feature; I'm less interested in that, given fewer ARM end-users.)
What I am doing right now is:
function cpu_flags()
    if is_linux()
        cpuinfo = readstring(`cat /proc/cpuinfo`)
        cpu_flag_string = match(r"flags\t\t: (.*)", cpuinfo).captures[1]
    elseif is_apple()
        sysinfo = readstring(`sysctl -a`)
        cpu_flag_string = match(r"machdep.cpu.features: (.*)", sysinfo).captures[1]
    else
        # assert is_windows()
        warn("CPU feature detection does not work on Windows.")
        cpu_flag_string = ""
    end
    split(lowercase(cpu_flag_string))
end
This has two downsides:
It doesn't work on Windows.
I'm just not sure it is correct; is it? Or does it screw up if, for example, the OS has a feature disabled, but physically the CPU supports it?
So my questions are:
How can I make this work on Windows?
Is this correct, or even an OK way to go about getting this information?
This is part of a build script (with BinDeps.jl), so I need a solution that doesn't involve opening a GUI.
And ideally one that doesn't add a 3rd party dependency.
Extracting the information from GCC somehow would work, since I already require GCC to compile some shared libraries. (Choosing which libraries to build is what this instruction-set detection code is for.)
I'm just not sure it is correct; is it? Or does it screw up if, for example, the OS has a feature disabled, but physically the CPU supports it?
I don't think that the OS has any say in disabling vector instructions; I've seen the BIOS being able to disable stuff (in particular, the virtualization extensions), but in that case you won't find them even in /proc/cpuinfo - that's kind of its point :-) .
Extracting the information from GCC somehow would work, since I already require GCC to compile some shared libraries
If you always have gcc (MinGW on Windows) you can use __builtin_cpu_supports:
#include <stdio.h>

int main()
{
    if (__builtin_cpu_supports("mmx")) {
        printf("\nI got MMX !\n");
    } else
        printf("\nWhat ? MMX ? What is that ?\n");
    return (0);
}
and apparently these built-in functions work under mingw-w64 too.
AFAIK it uses the CPUID instruction to extract the relevant information (so it should reflect quite well the environment your code will run in).
(from https://stackoverflow.com/a/17759098/214671)
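Building on the quoted example, here is a hedged sketch that checks the specific flags asked about (sse2, avx, avx2, fma4); I am assuming these feature names are accepted by a reasonably recent gcc's __builtin_cpu_supports, and note that the builtin is x86-only, so neon cannot be probed this way:
#include <stdio.h>

int main(void)
{
    /* __builtin_cpu_supports returns a nonzero value when the running CPU
       (as reported by CPUID) supports the named feature. */
    printf("sse2: %d\n", __builtin_cpu_supports("sse2") != 0);
    printf("avx:  %d\n", __builtin_cpu_supports("avx") != 0);
    printf("avx2: %d\n", __builtin_cpu_supports("avx2") != 0);
    printf("fma4: %d\n", __builtin_cpu_supports("fma4") != 0);
    return 0;
}
A build script could compile this small program with the same gcc/MinGW it already requires and parse its output to decide which shared libraries to build.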
I have an Ada program that was written for a specific (embedded, multi-processor, 32-bit) architecture. I'm attempting to use this same code in a simulation on 64-bit RHEL as a shared object (since there are multiple versions and I have a requirement to choose a version at runtime).
The problem I'm having is that there are several places in the code where the people who wrote it (not me...) have used Unchecked_Conversions to convert System.Addresses to 32-bit integers. Not only that, but there are multiple routines with hard-coded memory addresses. I can make minor changes to this code, but completely porting it to x86_64 isn't really an option. There are routines that handle interrupts, CPU task scheduling, etc.
This code has run fine in the past when it was statically-linked into a previous version of the simulation (consisting of Fortran/C/C++). Now, however, the main executable starts, then loads a shared object based on some inputs. This shared object then checks some other inputs and loads the appropriate Ada shared object.
Looking through the code, it's apparent that it should work fine if I can keep the logical memory addresses between 0 and 2,147,483,647 (32-bit signed int). Is there a way to either force the shared object loader to leave space in the lower ranges for the Ada code or perhaps make the Ada code "think" that its addresses are between 0 and 2,147,483,647?
Is there a way to either force the shared object loader to leave space in the lower ranges for the Ada code
The good news is that the loader will leave the lower ranges untouched.
The bad news is that it will not load any shared object there. There is no interface you could use to influence placement of shared objects.
That said, dlopen from memory (which we implemented in our private fork of glibc) would allow you to do that. But that's not available publicly.
Your other possible options are:
if you can fit the entire process into 32-bit address space, then your solution is trivial: just build everything with -m32.
use prelink to relocate the library to desired address. Since that address should almost always be available, the loader is very likely to load the library exactly there.
link the loader with a custom mmap implementation, which detects the library of interest through some kind of side channel, and issues the mmap syscall with MAP_32BIT set (see the sketch after this list), or
run the program in a ptrace sandbox. Such a sandbox can again intercept the mmap syscall and OR in MAP_32BIT when desirable.
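For the custom-mmap option, here is a hedged, minimal sketch of what MAP_32BIT does (a Linux-specific flag on x86-64); an interposed mmap would simply OR this flag into the request for the library of interest:
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* MAP_32BIT asks the kernel to place the mapping in the low 2 GiB
       of the address space. */
    void *p = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("mapped at %p\n", p);   /* expected to be below 0x80000000 */
    munmap(p, 1 << 20);
    return 0;
}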
or perhaps make the Ada code "think" that its addresses are between 0 and 2,147,483,647?
I don't see how that's possible. If the library stores an address of a function or a global in a 32-bit memory location, then loads that address and dereferences it ... it's going to get a 32-bit truncated address and a SIGSEGV on dereference.
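As a rough C analogue of the failure mode described above (the Ada code does the equivalent through Unchecked_Conversion), here is a hedged illustration of what happens when a 64-bit address is squeezed into a 32-bit integer:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int value = 42;

    /* Storing a 64-bit pointer in a 32-bit integer silently drops the
       high bits; reconstructing and dereferencing it is undefined
       (typically SIGSEGV) if the object lives above 4 GiB. */
    uint32_t truncated = (uint32_t)(uintptr_t)&value;
    int *bad = (int *)(uintptr_t)truncated;

    printf("original %p, reconstructed %p\n", (void *)&value, (void *)bad);
    /* *bad would fault whenever &value does not fit in 32 bits. */
    return 0;
}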
My question is somewhat weird but I will do my best to explain.
Looking at the languages the Linux kernel uses, I found C and assembly, even though I read a text that said [quote] the second iteration of Unix was written completely in C [/quote].
I thought that was misleading, but when I said that the kernel has assembly code I got two questions right off the start:
What assembly files are in the kernel and what's their use?
Assembly is architecture dependent, so how can Linux be installed on more than one CPU architecture?
And if the Linux kernel is truly written completely in C, then how does it get the GCC it needs for compiling?
I did a complete find / -name '*.s'
and just got one assembly file (asm-offset.s) somewhere under /usr/src/linux-headers-$(uname -r)/
Somehow I don't think that one file is doing the job, so how can Linux work without assembly? Or, if it does use assembly, where is it, and how can it be stable when it depends on the architecture?
Thanks in advance
1. Why is assembly used?
Because there are certain things that can be done only in assembly, and because assembly can result in faster code. For example, "you can get access to unusual programming modes of your processor (e.g. 16 bit mode to interface startup, firmware, or legacy code on Intel PCs)".
Read here for more reasons.
2. What assembly files are used?
From: https://www.kernel.org/doc/Documentation/arm/README
"The initial entry into the kernel is via head.S, which uses machine
independent code. The machine is selected by the value of 'r1' on
entry, which must be kept unique."
From https://www.ibm.com/developerworks/library/l-linuxboot/
"When the bzImage (for an i386 image) is invoked, you begin at ./arch/i386/boot/head.S in the start assembly routine (see Figure 3 for the major flow). This routine does some basic hardware setup and invokes the startup_32 routine in ./arch/i386/boot/compressed/head.S. This routine sets up a basic environment (stack, etc.) and clears the Block Started by Symbol (BSS). The kernel is then decompressed through a call to a C function called decompress_kernel (located in ./arch/i386/boot/compressed/misc.c). When the kernel is decompressed into memory, it is called. This is yet another startup_32 function, but this function is in ./arch/i386/kernel/head.S."
Apart from these assembly files, a lot of Linux kernel code makes use of inline assembly.
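For instance, here is a hedged, simplified example in the style of the inline assembly found throughout the kernel sources (the real kernel helpers are wrapped in more macros and error handling); it reads the x86 time-stamp counter:
#include <stdint.h>

static inline uint64_t read_tsc(void)
{
    uint32_t lo, hi;

    /* rdtsc places the 64-bit counter in EDX:EAX. */
    asm volatile ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}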
3. Architecture dependence?
And you are right about it being architecture dependent; that's why the Linux kernel code is ported to different architectures.
Linux porting guide
List of supported arch
Things written mainly in assembly in Linux:
Boot code: boots up the machine and sets it up in a state in which it can start executing C code (e.g: on some processors you may need to manually initialize caches and TLBs, on x86 you have to switch to protected mode, ...)
Interrupts/Exceptions/Traps entry points/returns: there you need to do very processor-specific things, e.g: saving registers and reenabling interrupts, and eventually restoring registers and properly returning to user mode. Some exceptions may be handled entirely in assembly.
Instruction emulation: some CPU models may not support certain instructions, may not support unaligned data access, or may not have an FPU. An option is using emulation when getting the corresponding exception.
VDSO: the VDSO is a virtual library that the kernel maps into userspace. It allows e.g: selecting the optimal syscall sequence for the current CPU (on x86 use sysenter/syscall instead of int 0x80 if available), and implementing certain system calls without requiring a context switch (e.g: gettimeofday()).
Atomic operations and locks: maybe in the future some of these could be written using C11 support for atomic operations (see the sketch after this list).
Copying memory from/to user mode: Besides using an optimized copy, these check for out-of-bounds access.
Optimized routines: the kernel has optimized version of some routines, e.g: crypto routines, memset, clear_page, csum_copy (checksum and copy to another place IP data in one pass), ...
Support for suspend/resume and other ACPI/EFI/firmware thingies
BPF JIT: newer kernels include a JIT compiler for BPF expressions (used for example by tcpdump, seccomp mode 2, ...)
...
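As a hedged illustration of the C11 alternative mentioned in the "atomic operations and locks" item, here is a userspace-style spinlock built on <stdatomic.h> (a sketch, not actual kernel code):
#include <stdatomic.h>

struct toy_spinlock {
    atomic_flag locked;
};
#define TOY_SPINLOCK_INITIALIZER { ATOMIC_FLAG_INIT }

static void toy_spin_lock(struct toy_spinlock *l)
{
    /* test_and_set returns the previous value, so loop until we are the
       ones who flipped the flag from clear to set. */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;   /* busy-wait */
}

static void toy_spin_unlock(struct toy_spinlock *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}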
To support different architectures, Linux has assembly code (re-)written for each architecture it supports (and sometimes, there are several implementations of some code for different platforms using the same CPU architecture). Just look at all the subdirectories under arch/
Assembly is needed for a couple of reasons.
There are many instructions that are needed for the operation of an operating system that have no C equivalent, at least on most processors. A good example on Intel x86/64 processors is the iret instruction, which returns from hardware/software interrupts. These interrupts are key to handling hardware events (like a keyboard press) and system calls from programs on older processors.
A computer does not start up in a state that is immediately ready for execution of C code. For an Intel example, when execution gets to the startup routine the processor may not be in 32-bit mode (or 64-bit mode), and the stack required by C also may not be ready. There are some other features present in some processors (like paging) which need to be turned on from assembly as well.
However, most of the Linux kernel is written in C, which interfaces with some platform specific C/assembly code through standardized interfaces. By separating the parts in this way, most of the logic of the Linux kernel can be shared between platforms. The build system simply compiles the platform independent and dependent parts together for specific platforms, which results in different executable kernel files for different platforms (and kernel configurations for that matter).
Assembly code in the kernel is generally used for low-level hardware interaction that can't be done directly from C. It's like a platform-specific foundation that's used by higher-level parts of the kernel that are written in C.
The kernel source tree contains assembly code for a variety of systems. When you compile a kernel for a particular type of system (such as an x86 PC), only the appropriate assembly code for that platform is included in the build process.
Linux is not the second version of Unix (or Unix in general). It is Unix compatible, but Unix and Linux have separate histories and, in terms of code base (of their kernels), are completely separate. Linus Torvalds' idea was to write an open source Unix.
Some of the lower level things like some of the architecture dependent parts of memory management are done in assembly. The old (but still available) Linux kernel API for x86, int 0x80, is implemented in assembly. There are probably other places in the kernel that are implemented in assembly, but I don't know any others.
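As a hedged illustration of that legacy interface from the user side (not the kernel's own implementation), a 32-bit program can reach the int 0x80 entry point through gcc inline assembly; this assumes a 32-bit x86 Linux build (gcc -m32):
#include <stddef.h>

static long sys_write_int80(int fd, const void *buf, size_t len)
{
    long ret;

    /* 32-bit Linux syscall convention: syscall number in eax
       (4 = write), arguments in ebx, ecx, edx; result in eax. */
    asm volatile ("int $0x80"
                  : "=a" (ret)
                  : "0" (4), "b" (fd), "c" (buf), "d" (len)
                  : "memory");
    return ret;
}

int main(void)
{
    sys_write_int80(1, "hello from int 0x80\n", 20);
    return 0;
}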
When you compile the kernel, you select an architecture to target. Depending on the target, the right assembly files for that architecture are included in the build.
The reason you don't find anything is because you're searching the headers, not the sources. Download a tar ball from kernel.org and search that.
Basically, what I wonder is how an x86-64 OS can run code compiled for an x86 machine. I know that when the first x64 systems were introduced, this wasn't a feature of any of them. After that, they somehow managed to do it.
Note that I know that x86 assembly language is a subset of x86-64 assembly language and that the ISAs are designed in such a way that they can support backward compatibility. But what confuses me here is the stack calling conventions. These conventions differ a lot depending on the architecture. For example, in x86, in order to back up the frame pointer, a process pushes it onto the stack (RAM) and pops it back once it is done. On the other hand, in x86-64, processes don't need to update the frame pointer at all, since all references are given via the stack pointer. And secondly, while in the x86 architecture arguments to functions are passed on the stack, in x86-64 registers are used for that purpose.
Maybe these differences between the calling conventions of x86 and x86-64 don't affect the way the program stack grows, as long as the two conventions are not used at the same time, and that is mostly the case because 32-bit functions are called by other 32-bit functions and likewise for 64-bit ones. But at some point a function (probably a system function) will call a function whose code is compiled for an x86-64 machine with some arguments, and at that point I am curious how the OS (or some other control unit) manages to make this call work.
Thanks in advance.
Part of the way that the i386/x86-64 architecture is designed is that the CS and other segment registers refer to entries in the GDT. The GDT entries have a few special bits besides the base and limit that describe the operating mode and privilege level of the current running task.
If the CS register refers to a 32-bit code segment, the processor will run in what is essentially i386 compatibility mode. Likewise 64-bit code requires a 64-bit code segment.
So, putting this all together.
When the OS wants to run a 32-bit task, during the task switch into it, it loads a value into CS which refers to a 32-bit code segment. Interrupt handlers also have segment registers associated with them, so when a system call occurs or an interrupt occurs, the handler will switch back to the OS's 64-bit code segment, (allowing the 64-bit OS code to run correctly) and the OS then can do its work and continue scheduling new tasks.
As a follow-up with regard to calling conventions: neither i386 nor x86-64 requires the use of frame pointers. The code is free to do as it pleases. In fact, many compilers (gcc, clang, VS) offer the ability to compile 32-bit code without frame pointers. What is important is that the calling convention is implemented consistently. If all the code expects arguments to be passed on the stack, that's fine, but the called code had better agree with that. Likewise, passing via registers is fine too; everyone just has to agree (at least at the library interface level; internal functions can generally do as they please).
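As a hedged sketch of that point, __attribute__((regparm(3))) is a gcc extension for 32-bit x86 that passes the first three integer arguments in registers instead of on the stack; it only works because both the caller and the callee see the same declaration (this assumes an i386 target, e.g. gcc -m32):
int __attribute__((regparm(3))) add3(int a, int b, int c)
{
    return a + b + c;
}

int caller(void)
{
    /* The attribute is part of the declaration the compiler sees, so it
       emits a register-based call here; if caller and callee disagreed
       about the convention, the arguments would be read from the wrong
       locations. */
    return add3(1, 2, 3);
}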
Beyond that, just keep in mind that the difference between the two isn't really an issue because every process gets its own private view of memory. A side consequence though is that 32-bit apps can't load 64-bit dlls, and 64-bit apps can't load 32-bit dlls, because a process either has a 32-bit code segment or a 64-bit code segment. It can't be both.
The processor is put into legacy mode, but that requires everything executing at that time to be 32-bit code. This switching is handled by the OS.
Windows: it uses WoW64. WoW64 is responsible for changing the processor mode; it also provides the compatible DLL and registry functions.
Linux: until recently Linux used to (like Windows) shift the processor into legacy mode whenever it started executing 32-bit code; you needed all the 32-bit glibc libraries installed, and it would break if it tried to work together with 64-bit code. Now they are implementing the X32 ABI, which should make everything run more smoothly and allow 32-bit applications to access x86-64 features like the increased number of registers. See this article on the x32 ABI.
PS: I am not very certain about the details, but it should give you a start.
Also, this answer combined with Evan Teran's answer probably gives a rough picture of everything that is happening.
The Linux kernel is written to be compiled with gcc and uses a lot of small and ugly gcc hacks.
Which compilers other than gcc can compile the Linux kernel?
One that can is the Intel Compiler. What is the minimal version of it needed to compile the kernel?
There was also the Tiny C Compiler, but it was only able to compile a reduced and specially edited version of the kernel.
Are there other compilers capable of building the kernel?
Some outdated information: you need to patch the kernel in order to compile it using the Intel compiler.
Download Linux kernel patch for Intel® Compiler
See also Is it possible to compile Linux kernel with something other than gcc for further links and information
One of the most recent sources: http://forums.fedoraforum.org/showthread.php?p=1328718
There is an ongoing process of committing LLVMLinux patches into the vanilla kernel (2013-2014).
LLVMLinux is a project by The Linux Foundation (http://llvm.linuxfoundation.org/) to enable the vanilla kernel to be built with LLVM. A lot of the patches are prepared by Behan Webster, who is the LLVMLinux project lead.
There is an LWN article about the project from May 2013:
https://lwn.net/Articles/549203/ "LFCS: The LLVMLinux project"
The current status of the LLVMLinux project is tracked at http://llvm.linuxfoundation.org/index.php/Bugs#Linux_Kernel_Issues
Things (basically gcc-isms) already eliminated from the kernel:
* Explicit register variables (non-C99)
* VLAIS (a non-C99-compliant, undocumented GCC feature: "variable length arrays in structs") like struct S { int array[N]; } or even struct S { int array[N]; int array_usb_gadget[M]; } where N and M are non-constant function arguments (see the sketch after this list)
* Nested functions (an Ada feature ported into C by the GCC/GNAT developers; not allowed in C99)
* Some gcc/gas magic like special segments or macros
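As referenced in the VLAIS item above, here is a hedged illustration of the construct and a C99-clean replacement (the names are made up for the example):
#include <stdlib.h>

/* GCC-only: a variable-length array inside a struct (VLAIS). */
void gcc_only(int n)
{
    struct vlais_example {
        int header;
        unsigned char data[n];   /* not allowed by C99/C11 */
    } s;

    s.header = n;
    (void)s;
}

/* Portable rewrite: a C99 flexible array member plus explicit allocation. */
struct portable_example {
    int header;
    unsigned char data[];
};

struct portable_example *make_example(int n)
{
    struct portable_example *p = malloc(sizeof *p + (size_t)n);

    if (p)
        p->header = n;
    return p;
}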
Things to be done:
* Usage of the __builtin_constant_p builtin to implement scary magic like BUILD_BUG_ON(!__builtin_constant_p(offset)); (sketched below)
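Here is a hedged sketch of that gcc-ism: __builtin_constant_p folds to 1 when gcc can prove its argument is a compile-time constant, and the kernel combines it with a negative-array-size trick so that a non-constant argument breaks the build (the macro names below are made up; the real kernel macros differ between versions):
/* Statement expressions ({ ... }) are themselves a gcc extension. */
#define MY_BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))
#define CONSTANT_SHIFT(x, s)                          \
    ({                                                \
        MY_BUILD_BUG_ON(!__builtin_constant_p(s));    \
        (x) << (s);                                   \
    })

unsigned int ok(unsigned int x)
{
    return CONSTANT_SHIFT(x, 3);    /* builds: 3 is a compile-time constant */
}

/*
 * unsigned int bad(unsigned int x, unsigned int s)
 * {
 *     return CONSTANT_SHIFT(x, s); // breaks the build: s is not constant
 * }
 */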
The good news about LLVMLinux is that after its patches the kernel not only becomes buildable with LLVM+clang, but also easier to build with other non-GCC compilers, because the project removes much of the non-C99 code, like the VLAIS constructs introduced by the USB gadget author, by netfilter hackers, and by crypto subsystem hackers; nested functions are removed as well.
In short, you cannot, because the kernel code was written to take advantage of gcc's compiler semantics... and between the kernel and the compiled code the relationship is a very strong one, i.e. it must be compiled with gcc... Since gcc uses 'ELF' (Executable and Linkable Format) object files, the kernel must be built using that object code format. Unless you can hack it up to work with another compiler - it may well compile but may not work, since the compilers under Windows produce PE code, and there could be unexpected results, meaning the kernel may not boot at all!