Is all MIPS code on Linux supposed to be PIC?

In Linux on MIPS CPUs (MIPSEL32, to be precise), is it true that all userland SOs are supposed to be position-independent (PIC)? A citation from an authoritative source would be best.
How about Android?
My interest stems from this.

The situation with PIC code on Linux appears to be somewhat interesting. In the past (pre-EGLIBC 2.9), all binaries on MIPS were supposed to be PIC (both applications and shared libraries). However, to reduce the size of applications, an ABI extension was developed to allow for non-PIC executables (shared objects stay PIC, as before):
At this time we do not propose any change to the position-independent
addressing conventions used by shared objects. Similarly,
position-independent executables compiled with '-fpie' -- as required
for address space randomisation in "hardened" Linux distributions --
shall continue to use the existing psABI addressing and calling
mechanisms.
http://gcc.gnu.org/ml/gcc/2008-07/txt00000.txt
The wiki page on linux-mips.org stating that all binaries on MIPS must be PIC appears to be somewhat out of date, as both recent GCC and EGLIBC on Linux support non-PIC executables: http://www.linux-mips.org/wiki/PIC_code
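One way to see which convention a given binary actually uses is to inspect the MIPS-specific flags in the ELF header, where the toolchain records EF_MIPS_PIC and EF_MIPS_CPIC. Below is a minimal sketch using the constants from glibc's elf.h; it assumes a 32-bit little-endian input (i.e. a MIPSEL32 binary read on a little-endian host) and skips most error handling:

```c
/* mips-pic-check.c: report whether a 32-bit MIPS ELF file is flagged as PIC.
   Minimal sketch; assumes an ELF32 little-endian input (MIPSEL32). */
#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf32_Ehdr eh;
    if (fread(&eh, sizeof eh, 1, f) != 1) { perror("fread"); return 1; }
    fclose(f);

    /* e_flags carries the MIPS-specific ABI flags from the psABI. */
    printf("e_type: %s\n", eh.e_type == ET_DYN ? "ET_DYN (DSO or PIE)"
                                               : "ET_EXEC (fixed-address)");
    printf("EF_MIPS_PIC:  %s\n", (eh.e_flags & EF_MIPS_PIC)  ? "yes" : "no");
    printf("EF_MIPS_CPIC: %s\n", (eh.e_flags & EF_MIPS_CPIC) ? "yes" : "no");
    return 0;
}
```

readelf -h reports the same information: the Flags line of a PIC binary includes "pic" and "cpic", and shared objects show the ET_DYN type.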

Related

C++ .a: what affects portability across distros?

I'm building a .a from C++ code. It only depends on the standard library (libc++/libstdc++). From general reading, it seems that portability of binaries depends on:
- compiler version (because it can affect the ABI; for gcc, the ABI is tied to the major version number);
- libc++/libstdc++ version (because callers could pass a vector<T> into the .a and its representation could change).
I.e. someone using the .a needs to use the same (major version of the) compiler and the same standard library.
As far as I can see, if the compiler and standard library match, a .a should work across multiple distros. Is this right? Or is there gubbins relating to system calls, etc., meaning a .a for Ubuntu should be built on Ubuntu, a .a for CentOS on CentOS, and so on?
Edit: see If clang++ and g++ are ABI incompatible, what is used for shared libraries in binary? (though it doesn't answer this question).
Edit 2: I am not accessing any OS features explicitly (e.g. via system calls). My only interaction with the system is to open files and read from them.
It only depends on the standard library
It could also depend implicitly upon other things (think of resources like fonts, configuration files under /etc/, header files under /usr/include/, availability of /proc/, of /sys/, external programs run by system(3) or execvp(3), specific file systems or devices, particular ioctl-s, available or required plugins, etc...)
These are the kind of details which might make porting difficult. For example, look into nsswitch.conf(5).
The evil is in the details.
(In other words, without a lot more details, your question doesn't make much sense.)
Linux is perceived as a free software ecosystem. The usual way of porting something is to recompile it on -or at least for- the target Linux distribution. When you do that several times (for many different Linux distros), you'll understand which details are significant in your particular software (and distributions).
Most of the time, recompiling and porting a library on a different distribution is really easy. Sometimes, it might be hard.
For shared libraries, reading Program Library HowTo, C++ dlopen miniHowTo, elf(5), your ABI specification (see here for some incomplete list), Drepper's How To Write Shared Libraries could be useful.
My recommendation is to prepare binary packages for various common Linux distributions. For example, a .deb for Debian & Ubuntu (some particular versions of them).
Of course a .deb for Debian might not work on Ubuntu (sometimes it does).
Look also into things like autoconf (or cmake). You may want at least to have some externally provided #define-d preprocessor strings (often passed by -D to gcc or g++) which would vary from one distribution to the next (e.g. on some distributions, you print by popen-ing lp, on others, by popen-ing lpr, on others by interacting with some CUPS server etc...). Details matter.
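As a sketch of that last point: assuming a hypothetical PRINT_COMMAND macro that each distribution's build passes on the command line (e.g. -DPRINT_COMMAND='"lpr"' on one distro and -DPRINT_COMMAND='"lp"' on another), the source itself stays identical:

```c
#include <stdio.h>

/* PRINT_COMMAND is a hypothetical, build-time configuration point;
   each distribution's package passes its own value with -D. */
#ifndef PRINT_COMMAND
#define PRINT_COMMAND "lpr"   /* fallback default */
#endif

/* Pipe the named file into whatever print command this build uses. */
int print_file(const char *path)
{
    FILE *p = popen(PRINT_COMMAND, "w");
    if (!p)
        return -1;
    FILE *f = fopen(path, "r");
    if (!f) { pclose(p); return -1; }
    int c;
    while ((c = fgetc(f)) != EOF)
        fputc(c, p);
    fclose(f);
    return pclose(p);
}

int main(int argc, char **argv)
{
    return (argc > 1 && print_file(argv[1]) == 0) ? 0 : 1;
}
```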
My only interaction with the system is to open files
But even these vary a lot from one distribution to another.
It is probable that you won't be able to provide a single -and the same one- lib*.a for several distributions.
NB: you should probably budget more work than you expect.
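As one concrete illustration of such a hidden dependency: even a binary that only opens and reads files is linked against one particular C library, and a quick (glibc-specific, hence itself non-portable) way to see which one you got at run time is:

```c
#include <stdio.h>
#include <gnu/libc-version.h>  /* glibc-specific header */

int main(void)
{
    /* Reports the glibc the process is running against; a .a built on one
       distribution may still reference symbols from a newer glibc than this. */
    printf("glibc %s\n", gnu_get_libc_version());
    return 0;
}
```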

Choosing a compact C/C++ compiler for an ARM-based embedded Linux system

I am working on an ARM Cortex-A7 based embedded system that runs Linux. I am looking for a C/C++ compiler that is compact in size (GCC is around 100 MB) and reliable. I have shortlisted SDCC, TCC, OTCC, Digital Mars, NWCC, LCC, Small C, and the Portable C Compiler.
I want to know whether compilers depend on the operating system or the hardware, and how I should go about narrowing down the list. I am not an expert, and I am still learning about Linux systems and embedded environments. If you think I am asking the wrong question or going in the wrong direction, kindly let me know.
Thank you.
Note
I already have a cross compiler on my Linux laptop, and that is what I currently compile and load programs with. But the embedded system is supposed to accept programs in a particular language designed by us, and I am hoping to convert that language into equivalent C code and run it. I tried writing my own interpreter in C that accepts code in the other language, parses it, and executes it, but it is a little slow; the same instructions written directly in C ran with satisfactory results.
Edit:
I ended up using g++ on my host system to compile the code, since the main function of the system was to run generated code.
Generally, when dealing with embedded systems you are better off cross-compiling and sending the binaries to the target than compiling directly on the device. Even though setting up the toolchain may take some time at the beginning, it definitely pays off in build time.
There are several pre-built Linaro GCC toolchains, which are cross-compilers with (generally) x86 Linux as the host and ARM Linux as the target platform. This way, you need not worry about compiler size.
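For instance, with one of those toolchains installed, cross-building for the board is just a matter of invoking the cross driver instead of the native gcc. The arm-linux-gnueabihf-gcc name below is the usual triplet for hard-float ARM Linux, but your toolchain's prefix may differ:

```c
/* hello-arm.c: sanity check for a cross-toolchain.
 *
 * Build on the x86 host (adjust the driver name to whatever your
 * toolchain actually installs):
 *   arm-linux-gnueabihf-gcc -static -O2 -o hello-arm hello-arm.c
 * then copy hello-arm to the Cortex-A7 board and run it there.
 * -static avoids needing the toolchain's shared libc on the target.
 */
#include <stdio.h>

int main(void)
{
    printf("cross-compiled hello from ARM\n");
    return 0;
}
```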

Using x86 materials to learn assembly on a 64 bit OS?

I am teaching myself/reading up on assembly. Most of the books on assembly refer to x86: all the register names in the code begin with "e" and not "r" (as they would in x86-64). However, I use 64-bit Linux, and I was wondering whether these books have any value given that they do not cover x86-64.
So, in short: is it really worth using these resources to learn x86-64? Put differently: besides the difference in register naming, are there any other differences between the two that could make learning from x86 materials difficult?
64-bit Linux allows running 32-bit applications, so you can still create 32-bit applications on your computer. This way, the books and example 32-bit code are fully useful.
The only problem you might have is if the assembly application dynamically links to some 32-bit shared library. To fix this, install the 32-bit compatibility layer.
Assembly programs that use only Linux system calls work fine without this layer, which is actually a set of shared libraries compiled for 32-bit.
BTW, in my opinion, writing 32-bit code is still better if you want your programs to be useful to more people. There are still many 32-bit computers around, and they will not disappear soon.
It's indeed a bit easier to learn assembly on 32-bit, since the calling conventions and stack management are simpler.
On 64-bit you need to worry about the ABI. Not only that, but the conventions are not the same for every OS. For instance, the ABI rules on Mac OS X are different from those on Windows (the registers used are not the same, and Windows passes only four arguments in registers).
You can compile your assembly code using -arch i386 with the assembler (as). With clang or gcc you can use -m32 (at least on Mac OS X; I haven't used it on Linux proper). You won't be able to link modules of different bitness (32-bit vs 64-bit).
Once you're ready to switch or compile your program for 64-bit, you will have to make sure that when you handle the stack you push 64-bit words instead of 32-bit ones, but that kind of goes without saying.
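A quick way to see the difference between the two modes is to compile one small C file both ways; -m32 requires the 32-bit compatibility packages mentioned above:

```c
/* sizes.c: compile twice and compare the output.
 *   gcc -m32 sizes.c && ./a.out   -> both sizes print 4
 *   gcc -m64 sizes.c && ./a.out   -> both sizes print 8
 */
#include <stdio.h>

int main(void)
{
    /* Under the i386 ABI, long and pointers are 4 bytes; under the
       x86-64 ABI they are 8 bytes, and arguments travel in registers
       rather than on the stack. */
    printf("sizeof(long)   = %zu\n", sizeof(long));
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    return 0;
}
```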

Does developing applications for SPARC or IBM POWER CPUs require separate compilers, other than for x86/x86-64 targets?

Does developing applications for SPARC or IBM PowerPC require separate compilers, other than for x86 and x86-64 targets?
If true, how easily could x86 or x64 binaries on Linux be ported to SPARC and PowerPC? Is there a way to simulate these environments using virtualization?
The first answer is: yes, to develop compiled code for Power Architecture or SPARC you need compilers that will generate code for those processors. A compiler that generates x86 or x86_64 code will not generate code that runs on Power Architecture or SPARC. You might find cross compilers running on x86 (32 or 64) that will generate Power or SPARC code, though. The other thing to be aware of is the object file format (ELF, XCOFF, and so on); the instruction set is just part of the picture. You might get clearer answers if you provide more details of your particular starting point and goals.
Second, one normally doesn't talk of porting binaries. We port source code, which may include assembly language as well as C or other languages. The process for doing this includes compiler selection, after which you can begin an iterative process of compiling, porting, compiling, and linking the code for the new hardware. I'm omitting many details. Again, if you provide more specifics in your question, you might get more specific answers.
Third, as others have said, no, you can't use virtualization in the scenarios you allude to. You might find acceptable emulation solutions. Again, please provide more specifics if you can.
No, virtualization is not the answer. Virtualization takes your hardware platform and creates an independent "virtual" machine of the same hardware. So when running on x86, you use virtualization to create a second x86 machine.
To simulate a completely different hardware architecture, you would want to look into emulation.
How easy or hard it is to port software from one architecture to another depends completely on how the software was written. If it uses something particular to one architecture but not the other (for example, x86 can handle unaligned memory accesses while SPARC cannot), you are going to need to fix things like that. Another example that could make porting difficult is software that assumes a specific endianness of the hardware; a sketch of both pitfalls follows below.
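To make the endianness and alignment pitfalls concrete, here is a small hedged sketch: reinterpreting a byte buffer in place gives different answers on little-endian x86 and big-endian SPARC/PowerPC, while assembling the value byte by byte is portable:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const uint8_t wire[4] = { 0x12, 0x34, 0x56, 0x78 };  /* big-endian on the wire */

    /* Non-portable: the result depends on the host's byte order
       (0x78563412 on x86, 0x12345678 on SPARC). A direct pointer cast
       instead of memcpy would additionally risk an unaligned trap on SPARC. */
    uint32_t careless;
    memcpy(&careless, wire, 4);

    /* Portable: assembles the value explicitly, independent of host order. */
    uint32_t portable = ((uint32_t)wire[0] << 24) | ((uint32_t)wire[1] << 16)
                      | ((uint32_t)wire[2] << 8)  |  (uint32_t)wire[3];

    printf("in-place view: 0x%08" PRIx32 ", portable: 0x%08" PRIx32 "\n",
           careless, portable);
    return 0;
}
```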
SPARC, IBM PowerPC require separate compilers, other than x86 and x86-64 targets?
I hate to be really snippy, but given that IBM PowerPC and SPARC do not support the x86 or x86-64 instruction sets (i.e. they speak totally different machine languages), where did you even get the idea they would be compatible?
Is there a way to simulate these environments using virtualization?
Possibly yes, but it would be REALLY slow, because you would have to either translate the machine code or interpret it. Hardware virtualization would not work, given that the CPU architectures are different. SPARC and PowerPC are not just "different labels for the same thing"; they really are different internally.
Use Java or LLVM, or try QEMU to test other CPUs.
It's easy if your code was written to be portable, and not if it wasn't. Varying sizes of data types per platform (and code that depends on them), inline assembly, etc. will make it harder.
Home pages for LLVM and QEMU:
http://llvm.org/
http://wiki.qemu.org/Main_Page

Why can a shared library created from non-PIC objects work?

I'm confused. I'm trying this on Linux on x86.
PIC just makes life simpler for the loader, since it only has to modify a few global addresses in the code. Non-PIC code contains a lot more of these addresses, so the table of addresses that need relocation is bigger. But the loader must be able to relocate the code in either case (for example, to resolve the addresses of static/global variables and of all function pointers).
The x86 ABI kind of supports non-PIC code in shared libraries. As pointed out before, this means pages that would normally be shared will not be shared (because ld.so needs to patch references in the code itself rather than in a special place, the GOT).
But libraries built that way may be a bit faster, because PIC code is generally slower.
The amd64 ABI does not support that.
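A small experiment makes this concrete on 32-bit x86 (on amd64 the link step simply refuses). Building the same file with and without -fPIC and comparing the readelf -d output shows the TEXTREL entry that marks a non-PIC shared object:

```c
/* reloc-demo.c: build into a shared object two ways on a 32-bit x86
 * toolchain (add -m32 on an x86-64 host with multilib installed):
 *
 *   gcc -fPIC   -shared -o libpic.so    reloc-demo.c
 *   gcc -fno-pic -shared -o libnonpic.so reloc-demo.c
 *
 * 'readelf -d libnonpic.so' shows a TEXTREL entry: ld.so must patch the
 * code pages themselves, so they get written to and are no longer shared.
 * The -fPIC build keeps all fixups in the GOT and has no TEXTREL.
 */
int counter;           /* global data: its address must be relocated */

int bump(void)
{
    return ++counter;  /* PIC reaches 'counter' through the GOT;
                          non-PIC embeds an absolute address here */
}
```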
