Binary compatibility between ARM Cortex-A7 and Cortex-A9 - Linux

I tried to Google this, but I could only find information about full A15/A7 compatibility, forward and backward.
Currently I have a task to build binaries for the Cortex-A9, but I have no hardware to test them on, so I was thinking of testing them on a Cortex-A7 dev board. Hence the question:
Can I build code with -mcpu=cortex-a9 and run the resulting binary on a system where libc, libstdc++ and other binaries were built with -mcpu=cortex-a7? Is this compatible in both directions?
Let's assume that all binaries (mine and target runtime) are built with -mfpu=neon -mfloat-abi=hard.
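For concreteness, a minimal sketch of the build in question (the cross-compiler prefix and file names here are only examples, not part of the original setup):
arm-linux-gnueabihf-g++ -mcpu=cortex-a9 -mfpu=neon -mfloat-abi=hard -O2 -o app app.cpp
The runtime on the Cortex-A7 board is assumed to have been built the same way, except with -mcpu=cortex-a7.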

Related

Build and bind against older libc version

I have dependencies in my code that require libc. When building (cargo build --release) on Ubuntu 20.04 (glibc 2.31), the resulting executable doesn't run on CentOS 7 (glibc 2.17); it throws an error saying it requires GLIBC 2.18.
When building the same code on CentOS 7, the resulting executable runs on both CentOS 7 and Ubuntu 20.04.
Is there a way to control which GLIBC version is required, so that the executable can be built on Ubuntu 20.04 too?
If your project does not depend on any native libraries, then probably the easiest way would be to use the x86_64-unknown-linux-musl target.
This target statically links against musl libc rather than dynamically linking against the system's libc. As a result it produces completely static binaries, which should run on a wide range of systems.
To install this target:
rustup target add x86_64-unknown-linux-musl
To build your project using this target:
cargo build --target x86_64-unknown-linux-musl
See the edition guide for more details.
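To confirm the result really is fully static, a quick check (myapp is a placeholder for your crate's binary name):
file target/x86_64-unknown-linux-musl/release/myapp
It should be reported as statically linked (the exact wording varies between toolchain versions).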
If you are using any non-Rust libraries it becomes more difficult, because they may be dynamically linked and may in turn depend on the system libc. In that case you would either need to statically link the external libraries (assuming that is even possible, and that the libraries you are using will work with musl libc), or make different builds for each platform you want to target.
If you end up having to make different builds for each platform, a docker container would be the easiest way to achieve that.
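A minimal sketch of that, assuming you have prepared an image (here called my-centos7-rust, a placeholder) with the Rust toolchain on a CentOS 7 base:
docker run --rm -v "$PWD":/src -w /src my-centos7-rust cargo build --release
The binary then links against the old glibc inside that image, so it also runs on newer distributions.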
Try cross.
Install it globally:
cargo install cross
Then build your project with it:
cross build --target x86_64-unknown-linux-gnu --release
cross takes the same arguments as cargo, but you have to specify a target explicitly. Also, the build directory is always target/{TARGET}/(debug|release), not target/(debug|release).
cross uses Docker images prebuilt for different target architectures, but nothing stops you from "cross-compiling" against the host architecture. The glibc version in these Docker images should be conservative enough. If it isn't, you can always configure cross to use a custom image.
In general, you need to build binaries for a given OS on that OS, or at the very least build on the oldest OS you intend to support.
glibc uses symbol versioning to preserve the behavior of older programs while adding support for new functionality. For example, a newer version of pthread_mutex_lock may support lock elision, while the old one would not. You're seeing this error because when you link against libc, you link against the default version of the symbol if a version isn't explicitly specified, and in at least one case, the version you linked against is from glibc 2.18. Changing this would require recompiling libstd (and the libc crate, if you're using it) with custom changes to pick the old versioned symbols, which is a lot of work for little gain.
If your only dependency is glibc, then it might be sufficient to just compile on CentOS 7. However, if you depend on other libraries, like OpenSSL, then those just aren't compatible across OS versions because their SONAMEs differ, and there's no way around that. So that's why generally you want to build different binaries per OS.
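To see which glibc symbol versions a binary actually requires (and therefore the oldest glibc it can run against), you can list its dynamic symbols; a sketch, with target/release/myapp as a placeholder path:
objdump -T target/release/myapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu
The highest version printed is the one an older system's loader will complain about.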

Can't find gmp or mpfr library after installation of gcc

I installed gcc-9.3.0 on Linux following the simple steps in https://gcc.gnu.org/wiki/InstallingGCC
The installation was successful. I thought this process would install GMP and MPFR along the way, but I cannot find the GMP or MPFR libraries.
Should I install them when I do ./configure?
There are two different ways you can provide GCC with prerequisite external host libraries like GMP and MPFR:
Before configuring, run ./contrib/download_prerequisites from the top-level GCC source directory. It will download the sources from the respective project repositories. That's all you have to do; GCC will configure and build these libraries before running configure for itself or its target libraries.
Configure with --with-gmp etc. to point to the place where the respective host library has been installed.
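As a rough sketch (version numbers and install prefixes are placeholders), the first approach is:
cd gcc-9.3.0
./contrib/download_prerequisites
mkdir ../gcc-build && cd ../gcc-build
../gcc-9.3.0/configure --prefix=$HOME/opt/gcc-9.3
followed by make and make install as usual. The second approach uses the same steps, except that instead of downloading the prerequisites you point configure at existing installations:
../gcc-9.3.0/configure --prefix=$HOME/opt/gcc-9.3 --with-gmp=/opt/gmp --with-mpfr=/opt/mpfr --with-mpc=/opt/mpc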
Either approach has its ups and downs:
The first approach needs an internet connection, and build times go up. However, the increase in build time is negligible compared to the time needed for the rest of the compiler and its target libraries. On the other hand, you get the right version and build options of each host library as preferred by GCC, irrespective of which library version might already be installed on the host. This configuration is also much more convenient and self-contained when you are cross-compiling GCC (because these are host libraries, so you would otherwise have to install the prerequisite libraries on the host; installing them on the build machine is pointless). And it's more robust / self-contained if you are distributing the built compiler: if the intended host does not have GMP etc. installed, you would otherwise have additional work to do on that host.
The second approach is more complicated because you have to build / install the prerequisites on the host yourself: correct versions, correct configure flags, etc.
On the other hand, you only have to build the prerequisites once for each host. And when you are configuring / rebuilding the compiler more than once, for example when you are doing GCC development, the cycle is a bit faster.
In the first case, the library versions are independent of the host versions of the libraries; GCC will actually not care whether the libs are present on the host because it is using its own "copy". The libraries are linked statically into the executables, hence you won't find them installed anywhere (maybe GCC has an option to do shared in-tree builds of the libs, I don't know).
I very much prefer the first approach because it is self-contained and easier, in particular because I am frequently building GCC as a Canadian cross, i.e. build ≠ host ≠ target ≠ build.

glibc version for aarch64

I'm cross-compiling an application for aarch64 on my x86 Ubuntu Bionic system, and I have a glibc version mismatch problem. My cross-compile toolchain was using v2.27, while the system that is to run the application has v2.24. I thought the problem might be that my toolchain's version is too high, so I decided to downgrade.
After removing all previous cross-compilation installs, I installed gcc-4.8-aarch64-linux-gnu (as I had successfully cross-compiled the application with this version on a different host system), thinking that it would install an older aarch64 version of glibc to /usr/aarch64-linux-gnu/lib/. However, again, v2.27 was installed (I verified that this directory didn't exist before installing the new cross-compilation toolchain).
So my question is twofold:
What determines which aarch64 version of glibc is installed on my system when installing gcc-4.8-aarch64-linux-gnu? Is it directly tied to my own system's x86 version of glibc?
Is there a correct way to install the aarch64 version of glibc v2.24 (or lower) on my system?
I concur with your hypothesis. After battling similar symptoms for 40 hours straight, I've discovered this confirmation:
https://packages.ubuntu.com/impish/gcc-10-aarch64-linux-gnu
https://packages.debian.org/bullseye/gcc-aarch64-linux-gnu
Note that Ubuntu 21.10 (Impish) and Debian 11 (Bullseye) have packages for a gcc 10 cross compiler. Be wary of the very confusing fact that Ubuntu's default package is actually gcc 11, while Debian 11's default is gcc 10. The similar version numbers of Debian and gcc are a coincidence. Also ignore for now the fact that Ubuntu's package is gcc 10.3.0 and Debian's is gcc 10.2.1.
Focus instead on the recommendations and dependencies of each package. Ultimately, the Ubuntu package pulls in libc >= 2.34, while the Debian package pulls in libc >= 2.28.
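If you want to check this locally rather than on the package pages, apt can show the same information (a sketch; the package name is the one from the Ubuntu link above, run on the distribution whose package you are checking):
apt-cache depends gcc-10-aarch64-linux-gnu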
Sure enough, when I cross-compile from Impish on x86 for Bullseye on aarch64 (despite having a complete SYSROOT for the target), I get this at runtime:
/lib/aarch64-linux-gnu/libc.so.6: version 'GLIBC_2.34' not found
But your question remains, is there any tie between the host libc and that used by the cross-compiler? The answer is a definite maybe.
See this excellent answer and links for an overview of a cross-compiler. The take-away:
You don't just cross-compile glibc, you need to cross-compile an entire toolchain. Toolchain components are ALWAYS: ld + gcc + libc + gdb.
So the C library is an integral part of the cross-compiler.
What shenanigans, then, are going on when you install gcc-aarch64-linux-gnu? It's just a compiler - only one of the four parts of a toolchain.
Well apparently there's some flexibility. Technically, a cross-compiler can be naked. That's typically only useful when you're compiling an operating system, rather than an executable that runs on an operating system. So you can construct special toolchains for special purposes.
But for the standard purpose (cross compiling for Linux on another architecture) you want a typical toolchain. Which is where the package's dependencies and recommendations come in. A gcc is always in want of an ld, which is always in want of a libc, and the ménage à trois is intimate. In fact, gcc is built with libc using ld in a complex do-si-do; see the example in the great guide by Preshing on Programming.
It's possible to force separation and link to other libraries, but it's not easy.
For example, the linker you use has a set of default search directories that are baked in. From the fine manual:
The default set of paths searched (without being specified with -L) depends on which emulation mode ld is using, and in some cases also on how it was configured.
And it gets more intertwined. By default, gcc will call on a dynamic linker whose location is hard-coded. For a cross-compiler, it might be something like /lib/ld-linux-aarch64.so.1. Not only that, the executable may also end up with that hardcoded path as its program interpreter.
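You can see this hardcoded interpreter path in any dynamically linked executable the toolchain produces; a quick check (./app is a placeholder for your cross-compiled binary):
readelf -l ./app | grep -i interpreter
For an aarch64 binary this typically prints something like /lib/ld-linux-aarch64.so.1.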
Again, if you're careful you can tear apart the toolchain and override things. But not only is it tricky to enforce, particularly if you have a complex build, the multitude of combinations of options and paths means there are also often bugs. So your host environment can easily leak into your cross-compiling toolchain.
So in summary, cross-compiling requires a toolchain. While pulling a cross-compiler from a package manager seems like an easy and legitimate thing to do, it comes with a lot of implicit baggage. You can either carefully follow the package dependencies to check what version you're getting, or use one of the many dedicated toolchain environments, such as crosstool-NG.

Can gcc produce binary for Arm without cross compilation

Can we configure gcc running on an Intel x64 machine to produce a binary for an ARM chip just by passing some flags to gcc, without using a cross compiler?
Short: Nope
Compiler:
gcc is not a native cross-compiler: the target architecture has to be specified at the time gcc itself is compiled. (Some exceptions apply; for example, x86 and x86_64 can be supported at the same time.)
clang is a native cross-compiler, and you can generate code for ARM by passing --target=arm-linux-gnu, but you still can't produce binaries, because you need a linker and a C library too. That means you can run clang --target=arm-linux-gnu -c <your file> and compile C/C++ code (you will likely need to point it to your C/C++ include paths) - but you can't build binaries.
Rest of the toolchain:
You need a fitting linker and C library too; both are specific to the architecture and OS you want to run on.
Possible solutions:
Get a fitting toolchain, or compile your own. For ARM Linux you have, for example, CrossToolchains if you are on Debian; for bare-metal targets you can get a cross compiler from CodeSourcery.
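On Debian/Ubuntu, that route looks roughly like this (a sketch; package and file names are examples):
sudo apt-get install gcc-arm-linux-gnueabihf
arm-linux-gnueabihf-gcc -o hello hello.c
file hello
The prefixed gcc here comes with a matching linker and C library, i.e. a complete toolchain, and file should report an ARM ELF executable.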
Since you were very vague, it's not possible to give you a clearer answer.

How do I compile and link a 32-bit Windows executable using mingw-w64

I am using Ubuntu 13.04 and installed mingw-w64 using apt-get install mingw-w64. I can compile and link a working 64-bit version of my program with the following command:
x86_64-w64-mingw32-g++ code.cpp -o app.exe
This generates a 64-bit app.exe file.
What binary or command line flags do I use to generate a 32-bit version of app.exe?
That depends on which variant of the toolchain you're currently using. Both DWARF and SEH variants (which appear starting with GCC 4.8.0) are single-target only. You can see this yourself by inspecting the directory structure of their distributions: they contain only the libraries with either 64- or 32-bit addressing, but not both. On the other hand, plain old SJLJ distributions are indeed dual-target, and to build a 32-bit target you just supply the -m32 flag. If that doesn't work, then just build with i686-w64-mingw32-g++.
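For example (file names are placeholders; the first form only works on a dual-target SJLJ distribution, the second whenever the i686-w64-mingw32 toolchain is installed):
x86_64-w64-mingw32-g++ -m32 code.cpp -o app32.exe
i686-w64-mingw32-g++ code.cpp -o app32.exe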
BONUS
By the way, the three corresponding dynamic-link libraries (DLLs) implementing each GCC exception model are
libgcc_s_dw2-1.dll (DWARF);
libgcc_s_seh-1.dll (SEH);
libgcc_s_sjlj-1.dll (SJLJ).
Hence, to find out which exception model your current MinGW-w64 distribution provides, you can either
inspect the directory and file structure of the MinGW-w64 installation in the hope of locating one of those DLLs (typically in bin); or
build some real or test C++ code involving exception handling to force linkage against one of those DLLs, and then see which one of them the built target depends on (for example, with Dependency Walker on Windows); or
take the brute-force approach and compile some test code to assembly (instead of machine code) and look for references like ___gxx_personality_v* (DWARF), ___gxx_personality_seh* (SEH), or ___gxx_personality_sj* (SJLJ), as sketched below; see Obtaining current GCC exception model.
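A sketch of that brute-force check (test.cpp is assumed to contain a try/catch block so that exception-handling code is emitted):
i686-w64-mingw32-g++ -S -o test.s test.cpp
grep -i personality test.s
The personality routine named in the output identifies the exception model.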

Resources