Difference between arm-none-eabi and arm-linux-gnueabi? - linux

What is the difference between arm-none-eabi and arm-linux-gnueabi? I know the difference in how to use them (one is for bare-metal software, the other for software meant to run on Linux). But what is the technical background?
I see there is a difference in the ABI, which is, as far as I understand, something like an API but at the binary level. It ensures interoperability of different applications.
But I don't really understand in which way having or not having an operating system affects my toolchain. The only thing that came to my mind is that libraries may have to be statically linked (do they?) when compiling bare-metal software, because there is no OS to provide them dynamically.
Most pages I found on this topic just explained how to use the toolchains, not the technical background. I'm a mechatronics student and new to embedded systems, so my experience in this field is somewhat limited.

Maybe this link to a description will help.
Probably the biggest difference:
"The bare-metal ABI will assume a different C library (newlib for example, or even no C library) to the Linux ABI (which assumes glibc). Therefore, the compiler may make different function calls depending on what it believes is available above and beyond the Standard C library."

Related

C++ .a: what affects portability across distros?

I'm building a .a from C++ code. It only depends on the standard library (libc++/libstdc++). From general reading, it seems that the portability of binaries depends on:
1. the compiler version (because it can affect the ABI). For gcc, the ABI is tied to the major version number.
2. the libc++/libstdc++ version (because a caller could pass a vector<T> into the .a and its representation could change; see the sketch below).
I.e. someone using the .a needs to use the same (major version of the) compiler and the same standard library.
As far as I can see, if compiler and standard library match, a .a should work across multiple distros. Is this right? Or is there gubbins relating to system calls, etc., meaning a .a for Ubuntu should be built on Ubuntu, .a for CentOS should be built on CentOS, and so on?
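For concreteness, the kind of interface I mean is sketched below; the header and function names are made up, but anything like this bakes the compiler's C++ ABI and the standard library's layout of std::vector into both sides of the boundary:

    // stats.h -- hypothetical interface exposed by the .a
    #include <vector>

    // The code inside the .a and every caller must agree on the in-memory layout
    // of std::vector<double> (and on name mangling, exception handling, etc.),
    // which is why the compiler major version and the standard library must match.
    double mean(const std::vector<double> &samples);

If only C types (pointers, built-in integers) crossed the boundary, I assume the coupling would be weaker, though the dependence on the C runtime would remain.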
Edit: see If clang++ and g++ are ABI incompatible, what is used for shared libraries in binary? (though it doesn't answer this question).
Edit 2: I am not accessing any OS features explicitly (e.g. via system calls). My only interaction with the system is to open files and read from them.
It only depends on the standard library
It could also depend implicitly upon other things (think of resources like fonts, configuration files under /etc/, header files under /usr/include/, availability of /proc/, of /sys/, external programs run by system(3) or execvp(3), specific file systems or devices, particular ioctl-s, available or required plugins, etc...)
These are the kinds of details that might make porting difficult. For example, look into nsswitch.conf(5).
The devil is in the details.
(In other words, without a lot more detail, your question doesn't make much sense.)
Linux is perceived as a free software ecosystem. The usual way of porting something is to recompile it on -or at least for- the target Linux distribution. When you have done that several times (for many different Linux distros), you'll understand which details are significant for your particular software (and distributions).
Most of the time, recompiling and porting a library on a different distribution is really easy. Sometimes, it might be hard.
For shared libraries, reading Program Library HowTo, C++ dlopen miniHowTo, elf(5), your ABI specification (see here for some incomplete list), Drepper's How To Write Shared Libraries could be useful.
My recommendation is to prepare binary packages for various common Linux distributions. For example, a .deb for Debian & Ubuntu (some particular versions of them).
Of course a .deb for Debian might not work on Ubuntu (sometimes it does).
Look also into things like autoconf (or cmake). You may at least want some externally provided #define-d preprocessor strings (often passed with -D to gcc or g++) which vary from one distribution to the next (e.g. on some distributions you print by popen-ing lp, on others by popen-ing lpr, on others by interacting with some CUPS server, etc.). Details matter.
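As a sketch of that kind of switch (the macro name and the print commands here are only illustrative), the build system picks the value per distribution and the code stays the same:

    // print.cc -- PRINT_CMD is assumed to be supplied by the build system,
    // e.g. -DPRINT_CMD="\"lp\"" on one distribution and "\"lpr\"" on another.
    #include <stdio.h>

    #ifndef PRINT_CMD
    #define PRINT_CMD "lpr"                // fallback if the build system sets nothing
    #endif

    bool print_text(const char *text)
    {
        FILE *p = popen(PRINT_CMD, "w");   // POSIX popen(3): pipe the text into the print command
        if (!p)
            return false;
        fputs(text, p);
        return pclose(p) == 0;             // a non-zero exit status means the command failed
    }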
My only interaction with the system is to open files
But even these vary a lot from one distribution to another one.
It is probable that you won't be able to provide a single, identical lib*.a for several distributions.
NB: you probably need to budget more work than you expect.

Choosing a compact c/c++ compiler for ARM based Embedded Linux System

I am working on an ARM Cortex-A7 based embedded system that runs Linux. I am looking for a C/C++ compiler that is compact in size and reliable (GCC is around 100 MB). I have shortlisted SDCC, TCC, OTCC, Digital Mars, NWCC, LCC, Small C and the Portable C Compiler.
I want to know whether compilers depend on the operating system or the hardware, and how I should go about narrowing down the list. I am not an expert and am still learning about Linux systems and embedded environments. If you think I am asking the wrong question or going in the wrong direction, kindly let me know.
Thank you.
Note
I already have a cross compiler on my Linux (laptop) system, and I currently compile programs to be loaded using that. But the embedded system is supposed to be able to load programs written in a particular language designed by us, so I am hoping to convert that language into equivalent C code and run it. I tried writing my own interpreter in C that accepts code in the other language, parses it and executes it, but it's a little slow; the same instructions written directly in C ran with satisfactory results.
Edit:
I ended up using g++ on my system to compile the code, as the main function of the system was to use generated code.
Generally, when dealing with embedded systems you are better off cross-compiling and copying the binaries over than compiling directly on the device. Even though setting up the toolchain may take some time in the beginning, it definitely pays you back in build time.
There are several pre-built Linaro GCC toolchains, which are cross-compilers with (generally) x86 Linux as the host and ARM Linux as the target platform. This way, you do not need to worry about compiler size.

Does developing applications for SPARC, IBM power CPU require separate compilers, other than x86, x86-64 targets?

Does developing applications for SPARC and IBM PowerPC require separate compilers, other than those for x86 and x86-64 targets?
If true, how easily could x86, x64 binaries in Linux be ported to SPARC and PowerPC? Is there a way to simulate these environments using virtualization?
The first answer is: yes, to develop compiled code for Power Architecture or SPARC you need compilers that generate code for those processors. A compiler that generates x86 or x86_64 code will not generate code that runs on Power Architecture or SPARC. You might find cross compilers running on x86 (32- or 64-bit) that generate Power or SPARC code, though. But the other thing to be aware of is the object file format (ELF, XCOFF, and so on); the instruction set is just part of the picture. You might get clearer answers if you provide more details of your particular starting point and goals.
Second, one normally doesn't talk of porting binaries. We port source code, which may include assembly language as well as C or other languages. The process for doing this includes compiler selection, after which you can begin an iterative process of compiling, porting, compiling, and linking the code for the new hardware. I'm omitting many details. Again, if you provide more specifics in your question, you might get more specific answers.
Third, as others have said, no, you can't use virtualization in the scenarios you allude to. You might find acceptable emulation solutions. Again, please provide more specifics if you can.
No, virtualization is not the answer. Virtualization takes your hardware platform and creates an independent "virtual" machine of the same hardware. So when running on x86, you use virtualization to create a second x86 machine.
To simulate a completely different hardware architecture, you would want to look into emulation.
How easy / hard it is to port software from one architecture to another architecture depends completely on how the software was written. If it uses something particular to one architecture but not the other (for example, x86 can handle non-aligned memory accesses while SPARC does not) you are going to need to fix things like that. Another example that could make it difficult to port would be if the software has assumed a specific endian-ess of the hardware.
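A small sketch of the kind of code that ports badly, illustrating both points (the function itself is made up):

    #include <stdint.h>

    // Reads a 32-bit length field stored big-endian at the start of a packet buffer.
    uint32_t packet_length(const unsigned char *packet)
    {
        // Non-portable version: *reinterpret_cast<const uint32_t *>(packet);
        //  - it assumes the pointer is suitably aligned (x86 forgives misaligned
        //    loads, SPARC traps), and
        //  - it assumes the host byte order matches the byte order in the buffer.
        //
        // Portable version: assemble the value byte by byte.
        return (uint32_t(packet[0]) << 24) | (uint32_t(packet[1]) << 16) |
               (uint32_t(packet[2]) << 8)  |  uint32_t(packet[3]);
    }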
SPARC, IBM PowerPC require separate compilers, other than x86 and x86-64 targets?
I hate to be really snippy, but given that IBM PowerPC and SPARC do not support the x86 or x86-64 instruction sets (i.e. they speak totally different machine languages), where did you even get the idea they would be compatible?
Is there a way to simulate these environments using virtualization?
Possibly yes, but it would be REALLY slow, because you would have to either translate the machine code, or - well - interpret it. Hardware virtualization would not work, given that the CPU architectures are different. SPARC and PowerPC are not just "different labels for the same thing"; they are really different internally.
Use Java or LLVM, or try QEMU to test other CPUs.
It's easy if your code was written to be portable, it's not if it wasn't. Varying sizes of data types per platform and code that depends on it, inline assembly, etc. will make it harder.
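For the data-type-size point in particular, the usual remedy is to use the fixed-width types from <stdint.h>/<cstdint> wherever the code depends on exact sizes; a trivial sketch:

    #include <stdint.h>

    // 'long' is 32 bits on 32-bit Linux but 64 bits on 64-bit Linux, so a file or
    // wire format described in terms of 'long' silently changes between platforms.
    struct FileHeader {
        uint32_t magic;     // exactly 32 bits on every target
        uint64_t payload;   // exactly 64 bits on every target
    };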
Home page for LLVM and QEMU:
http://llvm.org/
http://wiki.qemu.org/Main_Page

Bare metal cross compilers input

What are the input limitations of a bare-metal cross compiler? For example, does it not compile programs with pointers or mallocs, or anything else that would require more than the underlying hardware? Also, how can one find these limitations?
I also wanted to ask: I built a cross compiler targeting MIPS, and I need to create a MIPS executable using this cross compiler, but I am not able to find where the executable is. There is one executable I found, mipsel-linux-cpp, which is supposed to compile, assemble and link and then produce a.out, but it is not doing so.
However, ./cc1 does produce MIPS assembly.
There is an install folder which contains a gcc executable that emits i386 assembly and produces an executable. I don't understand how the gcc executable can emit i386 rather than MIPS assembly when I have specified MIPS as the target.
Please help; I'm really not able to understand what is happening.
I followed these steps:
1. Installed binutils 2.19
2. Configured gcc for MIPS (g++, core)
I would suggest that you should have asked two separate questions.
The GNU toolchain does not have any OS dependencies, but the GNU library does. Most bare-metal cross builds of GCC use the Newlib C library, which provides a set of syscall stubs that you must map to your target yourself. These stubs include the low-level calls necessary to implement stream I/O and heap management. They can be very simple or very complex, depending on your needs. If the only I/O support is a UART for stdin/stdout/stderr, then it is simple. You don't have to implement everything, but if you do not implement the I/O stubs, you won't be able to use printf() for example. You must implement the sbrk()/sbrk_r() syscall if you want malloc() to work.
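As an illustration of the malloc() point, a minimal sketch of an _sbrk() stub is shown below. The heap-boundary symbols are assumed to come from your linker script, and the names used here are conventional rather than universal:

    #include <stddef.h>

    extern "C" char __heap_start__;   // assumed linker-script symbols marking the
    extern "C" char __heap_end__;     // region reserved for the heap

    // Newlib's malloc() ultimately obtains memory through this stub.
    extern "C" void *_sbrk(ptrdiff_t increment)
    {
        static char *brk = &__heap_start__;
        if (brk + increment > &__heap_end__) {
            return reinterpret_cast<void *>(-1);   // out of heap: malloc() returns NULL
        }
        char *previous = brk;
        brk += increment;
        return previous;
    }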
The GNU C++ library will work correctly with Newlib as its underlying library. If you use C++, the C runtime start-up (usually crt0.s) must include the static initialiser loop to invoke the constructors of any static objects that your code may include. The run-time start-up must also of course initialise the processor, clocks, SDRAM controller, timers, MMU etc; that is your responsibility, not the compiler's.
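The static initialiser loop itself is small; here is a sketch written in C++ rather than assembly, assuming the __init_array_start/__init_array_end symbols that GNU linker scripts conventionally define around the .init_array section:

    extern "C" void (*__init_array_start[])(void);   // provided by the linker script
    extern "C" void (*__init_array_end[])(void);

    // Called from the reset handler / crt0 before main(): run the constructor of
    // every static C++ object in the image.
    extern "C" void run_static_constructors(void)
    {
        for (void (**ctor)(void) = __init_array_start; ctor != __init_array_end; ++ctor) {
            (*ctor)();
        }
    }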
I have no experience of MIPS targets, but the principles are the same for all processors. There is a very useful article called "Building Bare Metal ARM with GNU" which you may find helpful; much of it will be relevant, especially the parts about implementing the Newlib stubs.
Regarding your other question, if your compiler is called mipsel-linux-cpp, then it is not a 'bare-metal' build but rather a Linux build. Also, this executable does not really "compile, assemble and link"; it is rather a driver that separately calls the pre-processor, compiler, assembler and linker. It has to be configured correctly to invoke the cross-tools rather than the host tools. I generally invoke the linker separately in order to enforce decisions about which standard library to link (-nostdlib), and also because it makes more sense when an application comprises multiple execution units. I cannot offer much help other than that here, since I have always used GNU-ARM tools built by people with obviously more patience than me, and moreover hosted on Windows, where there is less possibility of the host toolchain being invoked instead (one reason why I have also avoided those toolchains that rely on Cygwin).
EDIT
With more time available, I have rewritten my original answer in an attempt to provide something more useful.
I cannot provide a specific answer for your question. I have never tried to get code running on a MIPS machine. What I do have is plenty of experience getting a variety of "bare metal" boards up and running. All kinds of CPUs and all kinds of compilers and cross compilers. So I have an understanding of the principles that apply in all such situations. I will point out the kind of knowledge you will need to absorb before you can hope to succeed with a job like this, and hopefully I can list some links to resources to get you started on learning that knowledge.
I am worried that you don't know that pointers are exactly the kind of thing a bare-metal compiler can handle; they are a basic machine primitive. This tells me you are probably not an expert embedded developer who is just stuck in this particular scenario. Never mind. There isn't anything magic about programming an embedded system, and you can learn what you need to know.
The first step is getting to understand the relationship between C and the machine you wish to run code on. Basically C is a portable assembly language. This means that C is good for manipulating the basic operations of the machine. In this sense the basic operations of the machine are reading and writing memory locations, performing arithmetic and boolean operations on the data read from memory, and making branching and looping decisions based on that data. In particular the C concept of pointers allows you to manipulate data at locations in memory that you specify.
So far so good, but just doing raw computations in memory is not usually enough - you need a way to input and output data from memory. To do that you need to manipulate the hardware peripherals on your board. If the hardware peripherals are memory mapped then the machine registers used to control the peripherals look exactly like memory locations and C can manipulate them directly. Even in that case though, it is much more likely that doing useful I/O is best handled by extending the C core language with a library of routines provided just for that purpose. These library routines handle all the nasty details (timers, interrupts, non-memory mapped I/O) involved in manipulating the peripheral hardware on the board, and wrap them up with a convenient C function call interface. The idea is that you can simply write printf("hello world"); and the library call takes care of the details of displaying the string.
An appropriately skilled developer knows how to adapt an existing I/O library to a new board, or how to develop new library routines to provide access to non-standard custom hardware. The classic way to develop these skills is to start with something simple, usually a LED for an output device and a switch for an input device. Write a program that pulses a LED in a predictable way, or reads a switch and reflects it on a LED. The first time you get this working will be hugely satisfying.
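A sketch of that first LED program, to show how little is involved. The register addresses and the pin number are hypothetical; the real ones come from the board schematic and the chip's reference manual, and clock/pin-mux setup is omitted:

    // Hypothetical memory-mapped GPIO registers.
    #define GPIO_DIR (*reinterpret_cast<volatile unsigned int *>(0x40020000u))
    #define GPIO_OUT (*reinterpret_cast<volatile unsigned int *>(0x40020004u))
    #define LED_PIN  (1u << 5)

    static void crude_delay(volatile unsigned long count)
    {
        while (count--) { }            // burn cycles; a hardware timer is the better tool
    }

    int main()
    {
        GPIO_DIR |= LED_PIN;           // configure the LED pin as an output
        for (;;) {
            GPIO_OUT ^= LED_PIN;       // toggle the LED
            crude_delay(500000);
        }
    }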
Okay, I have rambled enough. It is time to provide some more resources for you to study. The good news is that there's never been a better time to learn how things work at the interface between hardware and software. There is a wealth of freely available code and docs. Stack Overflow is a great resource, as you know. Good luck! Links follow:
Embedded systems overview
Knowing the C language well is fundamental
Why not get your code working on a simulator before you try real hardware
Another emulated environment
Linux device drivers - an overlapping subject
Another book about bare metal programming

Is assembler portable between Linux distros?

Is a program shipped in assembler format portable between Linux distributions (modulo CPU architecture differences)?
Here's the background to my question: I'm working on a new programming language (named Aklo), whose modus operandi will be the classic compiling to .s and feeding the result to the GNU assembler.
Obviously it would be nice ultimately to have the implementation written in itself, but I had resigned myself to maintaining it in C++ to solve the chicken and egg problem: suppose you download the compiler for the first time and it is itself written in Aklo, how do you compile it? As I understand it, different Linux distributions and other UNIX-like systems have different conventions for binary formats.
But it's just occurred to me, a solution might be to ship the .s file (well, one per CPU architecture): it's fair to assume you have or can install the GNU assembler. Of course I'd still need a bootstrap compiler, but that doesn't need to be fast; I can write it in Python.
Is assembler portable in the way that binaries are not? Are there any other stumbling blocks I haven't thought of?
Added in response to one answer:
I had looked wistfully at LLVM; there is certainly a lot of good stuff there and it would make my life easier -- except that it would incur a dependency on the correct version of LLVM being installed. It wouldn't be so bad having that dependency on development machines, but in a world where it's common to ship programs as source, the same dependency would be incurred for every user of every program ever written in Aklo, and I decided that was too high a price to pay.
But if the solution of shipping compiled programs as assembler works... then that solves that problem, and I can use LLVM after all, which would be a big win.
So the question about portability of assembler is even considerably more important than I had first realized.
Conclusion: from answers here and on the LLVM mailing list http://lists.cs.uiuc.edu/pipermail/llvmdev/2010-January/028991.html it seems the bad news is that the problem is unsolvable, but the good news is that this means using LLVM makes things no worse, so I'm free to use it and obtain all the advantages thereof.
You might want to check out LLVM before going down this particular path. It might make your life a lot easier, as it provides a low level virtual machine that makes a lot of hard stuff just work and has been very popular.
At a very high level, the ABI consists of { instruction set, system calls, binary format, libraries }.
Distribution as .s may free you from the binary format. This is still rather pointless, because you are fixed to a particular ISA and still need to use libraries and/or make system calls. Libraries vary from distribution to distribution (although this isn't really that bad, especially if you just use libc) and syscalls vary from OS to OS.
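To make the syscall point concrete: even a trivial program ultimately reaches the kernel through an interface that is specific to the OS and the architecture. A Linux-only sketch using the documented syscall(2) wrapper rather than raw assembly:

    #include <sys/syscall.h>
    #include <unistd.h>

    int main()
    {
        const char msg[] = "hello\n";
        // SYS_write is a Linux syscall number, and it differs between architectures
        // (x86-64, ARM, ...); other UNIX-like systems number their syscalls
        // differently again, which is exactly why a .s file is not portable across OSes.
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }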
It's basically 20 years since I last bootstrapped a C compiler. At the level of compilers, the differences between Linux distributions are minimal.
The much more important reason for going LLVM is cross-platform; if you're not writing some intermediate language, your compiler will be extremely difficult to retarget for different processors. And seeing as, on my laptop, I have compilers for x86, x86_64, two kinds of MIPS, PowerPC, ARM and AVR... you see where I'm going? I can compile multiple languages for most of those targets too (only C for AVR).
