As all Linux distributions use the same kernel, is there any difference between their executable binary files?
If yes, what are the main differences? Or does that mean we can build a universal Linux executable file?
All Linux distributions use the same binary format, ELF, but there are still some differences:
Different CPU architectures use different instruction sets.
The same CPU architecture may use different ABIs. An ABI defines how the register file is used and how routines are called and return from calls; binaries built for different ABIs cannot work together.
Even on the same architecture with the same ABI, you still cannot simply copy a binary from one distribution to another. Most binaries are not statically linked, so they depend on the libraries shipped by the distribution, and different distributions may use different versions or different build configurations of those libraries.
So if you want your program to run on every distribution, you may have to statically link a version that depends only on the kernel's syscalls; even then, the binary runs on a single architecture only.
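For example (hello.c stands in for your program; the file check just confirms the result):

gcc -static hello.c -o hello
file hello   # should report "statically linked"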
If you really want to run a program on any architecture, you have to compile binaries for all architectures and use a shell script to start the right one, as sketched below.
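A minimal sketch of such a launcher (the ./bin/<arch>/myapp layout is made up for illustration):

#!/bin/sh
# Start the binary built for the current architecture.
arch=$(uname -m)
exec "$(dirname "$0")/bin/$arch/myapp" "$@"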
All Linux ports (that is, the Linux kernel on different processors) use ELF as the file format for executables and libraries. A specific ELF binary is labeled with a single architecture/OS on which it can run (although some OSes have compatibility to run ELF binaries from other OSes).
Most ports have support for the older a.out format. (Some processors are new enough that there have never existed any a.out executables for them.)
Some ports support other executable file formats as well; for example, the PA-RISC port has support for HP-UX's old SOM executables, and the µClinux (no-MMU) ports support their own FLAT format.
Linux also has binfmt_misc, which allows userspace to register handlers for arbitrary binary formats. Some distributions take advantage of this to be able to execute Windows, .NET, or Java applications -- it's really still launching an interpreter, but it's completely transparent to the user.
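For instance, the kernel documentation's classic example registers Wine as the handler for Windows executables via their "MZ" magic bytes (run as root, with binfmt_misc mounted; the wine path may differ on your system):

echo ':DOSWin:M::MZ::/usr/bin/wine:' > /proc/sys/fs/binfmt_misc/register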
Linux on Alpha has support for loading Intel binaries, which are run via the em86 emulator.
It's possible to register binfmt_misc handlers for executables of other architectures, to be run with qemu-user.
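As an illustration, the registration line for 32-bit ARM ELF binaries (adapted from qemu's binfmt configuration; the qemu-arm path is system-dependent) looks like:

echo ':qemu-arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm:' > /proc/sys/fs/binfmt_misc/register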
In theory, one could create a new format -- perhaps register a new "architecture" in ELF -- for fat binaries. Then the kernel binfmt loader would have to be taught about this new format, and the ld-linux.so dynamic linker and the whole build toolchain would need to support it too. There's been little interest in such a feature, and as far as I know, nobody is working on anything like it.
Almost all Linux program files use the ELF standard.
Old Unixes also used the COFF format; you may still find executables from times of yore in that format. Linux still has support for it (I don't know whether it's compiled into current distros, though).
If you want to create a program that runs on all Linux distributions, you can consider using scripting languages (like Python and Perl) or a platform-independent programming language like Java.
Programs written in scripting languages are compiled at execution time, which means they are always compiled to match the platform they are executed on and, hence, should always work (given that the libraries are set up properly).
Programs written in Java, on the other hand, are compiled before distributing them, but can be executed on any Linux distribution as long as it has a Java VM installed.
Furthermore, programs written in Java can be run on other operating systems like MS Windows and Mac OS.
The same is true for many programs written in Python and Perl; however, whether a Python or Perl program will work on another operating system depends on what libraries are used by that program and whether these libraries are available on the other operating systems.
Related
Is it possible to write a program (make executable) that runs on windows and linux without any interpreters?
Will it be able to take input and print output to console?
A program that runs directly on hardware, as pure machine code - this should be possible in theory.
edit:
Ok, file formats are different, system calls are different
But how hard would it be - or is it even possible - for kernel developers to introduce another executable format, called "raw", just for fun and science? Maybe a raw program won't be able to report back, but it should be able to put a heavy load on the CPU and raise its temperature as evidence that it is running, for example.
Is it possible to write a program (make executable) that runs on windows and linux without any interpreters?
In practice, no!
Levine's book Linkers and Loaders explains why it is not possible in practice.
On recent Linux, an executable has the elf(5) format.
On Windows, it has the PE format.
The very first bytes of executables are different. And these two OSes have different system calls. The Linux ones are listed in syscalls(2).
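You can see the difference with a quick look at the first bytes (using /bin/ls here merely as a sample ELF file):

head -c 4 /bin/ls | od -An -c
#  177   E   L   F     (a Windows PE executable starts with the bytes "MZ" instead)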
And even on Linux, in practice, an executable is usually dynamically linked and depends on shared objects (and these differ from one distribution to the next, so it is likely that an executable built for Debian/Testing won't run on Red Hat). You could use the objdump(1), readelf(1), ldd(1) commands to inspect it, and strace(1) with gdb(1) to observe its runtime behavior.
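For instance (./myprog is a hypothetical dynamically linked binary):

ldd ./myprog                        # lists the shared objects it needs
readelf -d ./myprog | grep NEEDED   # the same information, read from the ELF headers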
Portability of software is often achieved by publishing it (in source form) with some open source license. The burden of recompilation is then on the shoulders of users.
In practice, real software (in particular those with a graphical user interface) depends on lots of OS specific and computer specific resources (e.g. fonts, screen size, colors) and user preferences.
A possible approach could be to have a small OS-specific software base which generates machine code at runtime, as e.g. SBCL or LuaJIT do. You could also consider using asmjit. Another approach is to generate opaque or obfuscated C or C++ code at runtime, compile it (with the system compiler), and load it - at runtime - as a plugin. On Linux, use dlopen(3) with dlsym(3).
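A minimal sketch of the plugin route on Linux (file and symbol names are made up for illustration; in the real scenario the C source would be generated at runtime rather than typed in):

cat > plugin.c <<'EOF'
#include <stdio.h>
void plugin_entry(void) { printf("hello from the plugin\n"); }
EOF
cat > loader.c <<'EOF'
#include <dlfcn.h>
#include <stdio.h>
int main(void) {
    void *h = dlopen("./plugin.so", RTLD_NOW);   /* load the plugin */
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }
    void (*f)(void) = (void (*)(void))dlsym(h, "plugin_entry");
    if (f) f();                                  /* call into it */
    dlclose(h);
    return 0;
}
EOF
gcc -shared -fPIC plugin.c -o plugin.so
gcc loader.c -o loader -ldl
./loader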
Pitrat's book Artificial Beings: the Conscience of a Conscious Machine describes a software system (an artificial mathematician) which generates all of its C source code (half a million lines). Contact me by email at basile#starynkevitch.net for more.
The Wine emulator allows you to run some (but not all) simple Windows executables on Linux. The WSL layer is rumored to enable you to run some Linux executables on Windows.
PS. Even open source projects like RefPerSys or GCC or Qt may be (and often are) difficult to build.
No, mainly because executable formats are different, but...
With some care, you can use mostly the same code to create different executables, one for Linux and another for Windows. Depending on what you consider an interpreter, Java also runs on both Windows and Linux (in a Java Virtual Machine, though).
Also, it is possible to create scripts that can be interpreted both by PowerShell and by the Bash shell, so that running one of these scripts could launch the proper application compiled for the user's OS.
You might require the Windows user to run your program under WSL, which is maybe an ugly workaround but allows you to ship the same executable for both Windows and Linux users.
Going through the ARMv8 manual, I have the following questions to help me understand the big picture.
Can legacy 32-bit applications (ARMv7 or earlier) run as-is on an ARMv8 OS?
If the legacy applications need to be rebuilt for ARMv8 and I rebuild them as 32-bit (AArch32), does this need underlying 32-bit OS support? (It would be interesting to know how the addressing mechanism works here.)
Please provide references wherever possible.
PS: I am targeting Linux with AArch64 support (kernel 3.7 and later).
An AArch64 platform may run 32-bit ARM binaries, but this compatibility is optional.
To run AArch32 binaries you need 32-bit versions of all the libraries the application uses, the same as with i686 binaries on x86-64 systems.
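On a Debian-style x86-64 system, for example, pulling in the 32-bit libraries looks like this (package names are illustrative; your application may need more):

dpkg --add-architecture i386
apt-get update
apt-get install libc6:i386 libstdc++6:i386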
There is also a Linux arm64 CONFIG_COMPAT at: https://github.com/torvalds/linux/blob/v4.17/arch/arm64/Kconfig#L1274 which says:
This option enables support for a 32-bit EL0 running under a 64-bit
kernel at EL1. AArch32-specific components such as system calls,
the user helper functions, VFP support and the ptrace interface are
handled appropriately by the kernel.
which will likely be required, and an ARM employee mentioned on this thread: https://community.arm.com/processors/f/discussions/5535/running-armv7-binaries-on-armv8 that userland instructions are basically the same with some exceptions:
For something like a Linux application, then yes. ARMv8-A includes AArch32, which provides backwards compatibility with ARMv7-A. There are some limitations, such as the SWP instruction no longer being supported. But these are types of things that applications are unlikely to be using (and were deprecated in ARMv7).
For baremetal, you have all the usual problems of using a binary from one platform on another. So you are going to need to do some degree of porting in most cases.
I then tried it for myself with this QEMU full-system setup, but my attempt failed: I compiled a C hello world with the ARMv7 compiler as:
arm-linux-gcc -static hello_world.c
and put the built file on the aarch64 target, but when I tried to run it, it failed with:
a.out: line 1: syntax error: unexpected word (expecting ")")
even though /proc/config.gz says that CONFIG_COMPAT is set.
It seems that the Linux kernel is not identifying it as an ELF file but rather falling back to /bin/sh; I get the same error if I do:
sh /mnt/9p/a.out
i.e., the kernel is trying to use the shell binfmt handler instead of the ELF one.
In particular, I know that the Linux kernel can choose between architectures based on the binfmt signature, because qemu-user does so: https://unix.stackexchange.com/questions/41889/how-can-i-chroot-into-a-filesystem-with-a-different-architechture
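One thing worth checking on the target is which handlers the kernel actually has registered:

ls /proc/sys/fs/binfmt_misc/          # one file per registered handler
cat /proc/sys/fs/binfmt_misc/status   # "enabled" when binfmt_misc is active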
I apologize if my question is not precise because I don't have a lot
of Linux related experience. I'm currently building a Linux from
scratch (mostly following the guide at linuxfromscratch.org version
7.3). I ran into the following problem: when I build an executable it
gets a hardcoded path to something called ELF interpreter.
readelf -l program
shows something like
[Requesting program interpreter: /lib/ld-linux.so.2]
I traced this library ld-linux.so.2 to be part of glibc. I am not very
happy with this behaviour because it makes the binary very unportable
- if I change the location of /lib/ld-linux.so.2 the executable no
longer works and the only "fix" I found is to use the patchelf utility
from NixOS to change the hardcoded path to another hardcoded path. For
this reason I would like to link against a static version of the ld
library but such is not produced. And so this is my question, could
you please explain how I could build glibc so that it will produce a
static version of ld-linux.so.2 which I could later link to my
executables. I don't fully understand what this ld library does, but I
assume this is the part that loads other dynamic libraries (or at
least glibc.so). I would like to link my executables dynamically, but
I would like the dynamic linker itself to be statically built into
them, so they would not depend on hardcoded paths. Or alternatively I
would like to be able to set the path to the interpreter with
environment variable similar to LD_LIBRARY_PATH, maybe
LD_INTERPRETER_PATH. The goal is to be able to produce portable
binaries, that would run on any platform with the same ABI no matter
what the directory structure is.
Some background that may be relevant: I'm using Slackware 14 x86 to
build i686 compiler toolchain, so overall it is all x86 host and
target. I am using glibc 2.17 and gcc 4.7.x.
I would like to be able to set the path to the interpreter with environment variable similar to LD_LIBRARY_PATH, maybe LD_INTERPRETER_PATH.
This is simply not possible. Read carefully (and several times) the execve(2), elf(5) & ld.so(8) man pages and the Linux ABI & ELF specifications. And also the kernel code doing execve.
The ELF interpreter is responsible for dynamic linking. It has to be a file (technically a statically linked ELF shared library) at some fixed location in the file hierarchy (often /lib/ld.so.2, /lib/ld-linux.so.2, or /lib64/ld-linux-x86-64.so.2).
The old a.out format from the 1990s had a built-in dynamic linker, partly implemented in the old Linux 1.x kernels. It was much less flexible and much less powerful.
The kernel's acceptance of an (in principle) arbitrary dynamic linker path makes it possible to have several dynamic linkers, although most systems have only one. This is a good way to parameterize the dynamic linker: if you want to try another one, install it in the file system and generate ELF executables mentioning that path.
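For example, assuming a hypothetical alternative linker installed at /opt/myld/ld.so:

gcc main.c -o prog -Wl,--dynamic-linker=/opt/myld/ld.so
readelf -l prog | grep interpreter    # shows the requested interpreter path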
With great pain and effort, you might make your own ld.so-like dynamic linker implementing your LD_INTERPRETER_PATH wish, but that linker still has to be an ELF shared library sitting at some fixed location in the file tree.
If you want a system that needs no files (at predefined, wired-in locations, like /lib/ld.so, /dev/null, /sbin/init ...), you'll need to build all its executable binaries statically. You may want (but current Linux distributions usually don't do that) to have a few statically linked executables (like /sbin/init, /bin/sash ...) that will enable you to repair a system broken to the point of not having any dynamic linker.
BTW, the /sbin/init - or /bin/sh - path is wired inside the kernel itself. You may pass an argument to the kernel at boot time - e.g. init=/bin/sh with GRUB - to override the default. So even the kernel wants some files to be there!
As I commented, you might look into musl libc for an alternative libc implementation (providing its own dynamic linker). Read also about the VDSO, ASLR, and initrd.
In practice, accept the fact that modern Linuxes and Unixes are expecting some non-empty file system ... Notice that dynamic linking and shared libraries are a huge progress (it was much more painful in the 1990s Linux kernels and distributions).
Alternatively, define your own binary format, then make a kernel module or a binfmt_misc entry to handle it.
BTW, most (or all) of Linux is free software, so you can improve it (but this will take months -or many years- of work to you). Please share your improvements by publishing them.
Read also Drepper's How to Write Shared Libraries paper, and this question.
I ran into the same issue. In my case I want to bundle my application with a different GLIBC than the one installed on the system. Since ld-linux.so must match the GLIBC version, I can't simply deploy my application with the matching GLIBC. The problem is that I can't run my application on older installations that don't have the required GLIBC version.
The path to the loader/interpreter can be modified with --dynamic-linker=/path/to/interp. However, this needs to be set at link time and therefore would require my application to be installed in that location (or at least I would need to deploy the ld-linux.so that goes with my GLIBC to that location), which goes against a simple xcopy deployment.
So what's needed is an $ORIGIN option equivalent to what the -rpath option can handle. That would allow for a fully dynamic deployment.
The lack of a dynamically configurable interpreter path (at runtime) leaves two options, sketched below:
a) Use patchelf to modify the path before the executable gets launched.
b) Invoke ld-linux.so directly with the executable as an argument.
Both options are not as 'integrated' as a compiled $ORIGIN path in the executable itself.
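In shell terms the two options look like this (all paths are hypothetical):

patchelf --set-interpreter /opt/myapp/lib/ld-linux-x86-64.so.2 ./myapp      # option a)
/opt/myapp/lib/ld-linux-x86-64.so.2 --library-path /opt/myapp/lib ./myapp   # option b)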
I downloaded a binary file that was compiled (a C program) using GCC 4.4.4 for x86-64 Red Hat Linux.
Is it normal that when I try to run it on Mac OS X (running Lion, so also x86-64) with GCC 4.2.1 that it says: cannot execute binary file? It doesn't even recognize it as an executable.
Why would it do that? I believe the GCC version has nothing to do with it, since the file has already been compiled. It has been compiled for x86-64, which both machines run. Can someone please explain?
There are different binary formats. Linux systems use ELF for executables and libraries, but Mac OS X uses the Mach-O format. Windows uses yet another: the PE format.
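The file command shows the difference immediately (output abridged; ./program is whatever binary you downloaded):

file ./program
# on Linux:     ELF 64-bit LSB executable, x86-64 ...
# on Mac OS X:  Mach-O 64-bit executable x86_64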
It's highly unlikely that a binary compiled for a particular OS will run on another. Either get a binary for Mac, or get the source and compile it yourself.
There are many things that will cause issues - the versions of libc and libstdc++, differing versions of .so libraries, a different API to the OS itself, or even a different binary format (e.g. VMS binaries do not run on AIX).
Although Red Hat Linux and Mac OS X are both 'Unix-based', they cannot run each other's binaries, just like you can't run Windows binaries on Macs and vice versa.
Binaries in this sense are compiled to make operating system calls: when your program has a printf(), that boils down to operating system calls. If the operating system it was compiled for is, say, 64-bit Red Hat Linux, then the binary is going to look for Red Hat Linux names for shared libraries in Red Hat Linux paths - which has absolutely nothing, in any way, shape, or form, to do with a completely different operating system, Mac OS X, and its system calls and shared or static libraries. It's like taking a wheel off a Mini Cooper and trying to bolt it onto a bicycle: yes, at one point it was all raw metal and rubber and could have been formed into a bike tire, but once you make the binary - the car tire or the bike tire - that is what it is. Sometimes you find an emulator like Wine that emulates Windows on top of a POSIX system, or a virtual machine like VMware that lets you run a whole different operating system on top of another by virtualizing a whole computer.
It is also true that you cannot generally expect to take any old C program and have it compile and run on any operating system that, say, has a GCC compiler. Yes, you can learn to write C programs that are portable, but you have to carefully stick to libraries that are supported on all the target platforms. So even taking the source code of your program to the Mac and compiling it there is not necessarily going to just work.
NanoBSD is a script that makes a light, small, in-memory copy of FreeBSD. It is useful in embedded systems. Is there something similar to NanoBSD for Linux? Especially a feature like "Everything is read-only at run-time", as mentioned here.
A lot of toolchain / system build systems build Linux root filesystems which are designed to run completely out of a ramdisk (rootfs / tmpfs). This means that everything is read/write at runtime, but changes do not persist across reboots (a persistent FS can, of course, be mounted as a non-root FS).
The most well known of these is BusyBox (with or without uClibc), which ships with various scripts to build very small-footprint Linux-based embedded systems (the root FS is typically only a few MB; just add a kernel). BusyBox/Linux is not the same as GNU/Linux, but it is fairly similar - most things are simpler or have fewer options; some features are entirely absent or can be disabled at compile time.
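A rough sketch of a BusyBox build (run from the BusyBox source tree with a host toolchain; enable CONFIG_STATIC in menuconfig if you want a fully static binary):

make defconfig                            # start from the default configuration
make -j"$(nproc)"
make CONFIG_PREFIX=/tmp/rootfs install    # populates bin/, sbin/ etc. with symlinks to busybox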
Linux is NOT an operating system like FreeBSD; rather, it is a kernel. You can choose to layer on top of it either the GNU C library and tools (which I think all major general-purpose distributions do) or something else - mostly used for smaller systems - such as uClibc, Android, etc.
There are literally hundreds of toolchains, build environments and embedded distros of Linux, some only a couple of megabytes in size. Many also support some or many of the different processors Linux runs on (i386 and friends, ARM, Power, ...).
To get you started, a couple of projects I find interesting: OpenWrt and OpenEmbedded, and lpclinux (Linux for NXP LPC3xxx ARM processors) - but there are really hundreds of them.
Some other resources
A very good source that (also) touches on a number of issues specific to embedded systems is Linux From Scratch. And this PDF gives some insight into the different filesystems available for an embedded Linux system.
I would take a look at TinyCore Linux. It isn't really read-only, but it is nearly the same concept, and I think there is also a way to make the OS/binary part read-only while the config part stays writable.