Compiling for amd64 under i386 Debian - linux

Cheers,
I want to avoid problems with compiling my code on amd64, yet I don't have a 64-bit CPU available and have no hope of getting an upgrade to my machine any time soon. I have no dreams of testing the code (although that should theoretically be possible using qemu-system), but I'd like to at least compile the code using gcc -m64.
The basic idea works:
CFLAGS=-m64 CXXFLAGS=-m64 ./configure --host x86_64-debian-linux
However, the code depends on some libraries which I typically install from Debian packages, such as libsdl1.2-dev and libgmp3-dev. Obviously, getting 64-bit versions of these packages installed alongside the 32-bit versions is not a one-liner.
What would be your practices for installing the 64-bit packages? Where would you put them, how would you get them there and how would you use them?
To repeat: I don't have a 64-bit CPU and cannot afford a new machine.
I have already set up amd64-libs-dev to give gcc's -m64 some basic libraries to work with.
Attempted so far:
Setting up a 64-bit chroot jail with debootstrap in order to simplify installation of 64-bit development packages for libraries. This failed, since finishing the setup (and installing anything afterwards!) requires a 64-bit CPU.
Installing gcc-multilib and g++-multilib. This appears to do nothing besides depending on libc6-dev-amd64, which I had already installed through amd64-libs-dev.

If you're using Debian, before you can use gcc -m64 you need to install gcc-multilib and g++-multilib. This also installs all the files needed to link and create a 64-bit binary.
You don't need a 64-bit-capable CPU for this either.
Then you can call GCC as follows:
$ gcc -m64 source.c -o source
As for external libraries, Debian takes care of that if you have multilib installed. I have a 32-bit machine that compiles 64-bit code for another machine and links against a handful of libraries (libpng and libz, for example). It works great and the executables run (Debian to Debian).
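For example, linking a 64-bit binary against such libraries looks like this (file names are illustrative, and the 64-bit development packages for libpng/libz must of course be present):
gcc -m64 main.c -lpng -lz -o main64
file main64   # should report "ELF 64-bit LSB executable, x86-64"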

You want to look into the dchroot package to set up a simple chroot(8) environment -- that way you can compile real amd64 binaries in a real 64-bit setting with proper libraries and dependencies. This certainly works in the opposite direction (i.e. I am using i386 chroots on amd64 hosts), and I don't see why it shouldn't work this way round if your CPU supports amd64.
Edit: Now that you stress that you do not have an amd64-capable CPU, it gets a little trickier. "In theory" you could just rebuild gcc from source as a cross-compiler. In practice, that may be too much work. Maybe you can just get another headless box for a few dollars and install amd64 on that?
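For reference, the rough shape of such a rebuild would be something like the following (paths are illustrative; a hosted cross-compiler additionally needs the target's C library and headers, e.g. from libc6-dev-amd64):
../binutils-src/configure --target=x86_64-linux-gnu --prefix=$HOME/cross && make && make install
../gcc-src/configure --target=x86_64-linux-gnu --prefix=$HOME/cross --enable-languages=c,c++
make all-gcc && make install-gcc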

Check out this fine article that describes how to easily create a 32-bit chroot, where you can install all the 32-bit tools (gcc and libraries).

Doesn't Debian distinguish between lib32 and lib64 directories? If so, you can just grab the packages and force them to install, regardless of architecture (see the example below).
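The force route would look something like this (the package filename is illustrative, and this really can break a 32-bit system, so prefer a chroot or plain extraction if in doubt):
dpkg -i --force-architecture libgmp3-dev_*_amd64.deb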
If that does not work (or would hose your system!), I would set up a chroot environment and apt-get the 64-bit libraries in there.

Check out pbuilder. It can create build environments for many architectures; there are some instructions here.

Try cross-compiling SDL, GMP and the other libraries yourself, or manually extract the files you need from the Debian packages.
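A sketch of the manual-extraction route, assuming you have already fetched the amd64 .deb files (e.g. from packages.debian.org; file names and paths are illustrative):
dpkg -x libgmp3-dev_*_amd64.deb $HOME/amd64-root
dpkg -x libsdl1.2-dev_*_amd64.deb $HOME/amd64-root
gcc -m64 -I$HOME/amd64-root/usr/include -L$HOME/amd64-root/usr/lib main.c -lgmp -o main64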

Related

glibc version for aarch64

I'm cross-compiling an application for aarch64 on my x86 Ubuntu Bionic system, and I'm running into a glibc version mismatch. My cross-compile toolchain was using v2.27, while the system that is to run the application has v2.24. I thought this might be because my toolchain's glibc version was too new, so I decided to downgrade.
After removing all previous cross-compilation installs, I installed gcc-4.8-aarch64-linux-gnu (as I had successfully cross-compiled the application with this version on a different host system), thinking that it would install an older aarch64 version of glibc to /usr/aarch64-linux-gnu/lib/. However, again, v2.27 was installed (I verified that this directory didn't exist before installing the new cross-compilation toolchain).
So my question is twofold:
What determines which aarch64 version of glibc is installed on my system when installing gcc-4.8-aarch64-linux-gnu? Is it directly tied to my own system's x86 version of glibc?
Is there a correct way to install the aarch64 version of glibc v2.24 (or lower) on my system?
I concur with your hypothesis. After battling similar symptoms for 40 hours straight, I've discovered this confirmation:
https://packages.ubuntu.com/impish/gcc-10-aarch64-linux-gnu
https://packages.debian.org/bullseye/gcc-aarch64-linux-gnu
Note that Ubuntu 21.10 (Impish) and Debian 11 (Bullseye) have packages for a gcc 10 cross compiler. Be wary of the very confusing fact that Ubuntu's default package is actually gcc 11, while Debian 11's default is gcc 10. The similar version numbers of Debian and gcc are a coincidence. Also ignore for now the fact that Ubuntu's package is gcc 10.3.0 and Debian's is gcc 10.2.1.
Focus instead on the recommendations and dependencies of each package. Ultimately the Ubuntu package calls up libc >= 2.34, while the Debian package calls up libc >= 2.28.
Sure enough, when I cross-compile from Impish on x86 for Bullseye on aarch64 (despite having a complete SYSROOT for the target), I get this at runtime:
/lib/aarch64-linux-gnu/libc.so.6: version 'GLIBC_2.34' not found
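You can check in advance which glibc symbol versions a cross-built binary requires (the binary name is illustrative):
aarch64-linux-gnu-objdump -T myapp | grep GLIBC_
Anything newer than the target's glibc will fail at runtime exactly like the message above.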
But your question remains: is there any tie between the host libc and the one used by the cross-compiler? The answer is a definite maybe.
See this excellent answer and links for an overview of a cross-compiler. The take-away:
You don't just cross-compile glibc, you need to cross-compile an entire toolchain. Toolchain components are ALWAYS: ld + gcc + libc + gdb.
So the C library is an integral part of the cross-compiler.
What shenanigans, then, are going on when you install gcc-aarch64-linux-gnu? It's just a compiler - only one of the four parts of a toolchain.
Well, apparently there's some flexibility. Technically, a cross-compiler can be naked. That's typically only useful when you're compiling an operating system, rather than an executable that runs on an operating system. So you can construct special toolchains for special purposes.
But for the standard purpose (cross-compiling for Linux on another architecture) you want a typical toolchain. Which is where the package's dependencies and recommendations come in. A gcc is always in want of an ld, which is always in want of a libc, and the ménage à trois is intimate. In fact, gcc is built with libc using ld in a complex do-si-do. See the example in the great guide by Preshing on Programming.
It's possible to force separation and link to other libraries, but it's not easy.
For example, the linker you use has a set of default search directories that are baked in. From the fine manual:
The default set of paths searched (without being specified with -L) depends on which emulation mode ld is using, and in some cases also on how it was configured.
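You can inspect those baked-in defaults by dumping the linker's internal script:
aarch64-linux-gnu-ld --verbose | grep SEARCH_DIR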
And it gets even more intertwined. By default, gcc will call on a dynamic linker whose location is hard-coded. For a cross-compiler, it might be something like /lib/ld-linux-aarch64.so.1. Not only that, the executable may also end up with that hard-coded path as its program interpreter.
Again, if you're careful you can tear the toolchain apart and override things. But not only is that tricky to enforce (particularly if you have a complex build); the multitude of combinations of options and paths also means there are often bugs. So your host environment can easily leak into your cross-compiling toolchain.
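A sketch of what such overriding looks like (the sysroot path is illustrative, and it only helps if the link is also done against the sysroot's libc):
aarch64-linux-gnu-gcc --sysroot=$HOME/bullseye-sysroot main.c -o app -Wl,--dynamic-linker=/lib/ld-linux-aarch64.so.1
readelf -l app | grep interpreter   # verify which program interpreter got baked in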
So in summary, cross-compiling requires a toolchain. While pulling a cross-compiler from a package manager seems like an easy and legitimate thing to do, it comes with a lot of implicit baggage. You can either carefully follow the package dependencies to check what version you're getting, or use one of the many dedicated toolchain environments, such as crosstool-NG.

How to cross-compile an autotools project for ARM?

I am looking to cross-compile an existing library which uses the GNU autotools build system. I have a Linaro arm-gcc toolchain installed on my host machine and I am able to compile small programs directly using arm-gcc.
Host machine: Ubuntu 12.04 Intel x64
Target machine: Ubuntu 14.04 ARM 32-bit (a board similar to Raspberry-Pi)
I have the library's source code, which has configure.ac and Makefile.am files.
What is the canonical way to do this?
For specifics, I am looking for something that would work for a "Hello World" application/library in C cross-compiled using arm-linux-gnueabi-gcc and autotools.
./configure --build=`./config.guess` --host=arm-linux-gnueabi
might be sufficient, as configure will then look for a corresponding ARM toolchain. Otherwise, try adding: CC="arm-linux-gnueabi-gcc"
You can also add: CFLAGS="-pipe -Wall -O2 ... <other arm-gcc flags>"
for better code optimization.
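Putting this together for a "Hello World" style package, the whole dance is roughly the following (the config.guess location and the binary name are illustrative; some projects keep config.guess under build-aux/):
./configure --build=$(./config.guess) --host=arm-linux-gnueabi CC=arm-linux-gnueabi-gcc
make
file hello   # should report something like "ELF 32-bit LSB executable, ARM, EABI5"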
The right way to do this on Ubuntu is to use the distro-supplied cross-compiler, not a third-party one like Linaro. You only need an out-of-distro toolchain when the distro one is not good enough for some reason (for example, you need a cutting-edge feature which is only in the Linaro toolchain and not yet in the distro). Hardly anyone needs to do that.
Install the gcc and g++ cross-toolchains, a cross libc and some configuration tools with:
apt install crossbuild-essential-armhf
If the software you want to build needs nothing more than the C runtime library, then you can build it as is. If it needs anything more, you need to install cross build-dependencies.
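For the simple case this is literally one command (the source file name is illustrative):
arm-linux-gnueabihf-gcc hello.c -o hello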
If the software you want to build is packaged (and called $packagename), you should be able to:
dpkg --add-architecture armhf
apt update
apt build-dep $packagename
then build it with
dpkg-buildpackage -aarmhf
If it's not packaged, you'll need to install the build-dependencies yourself: libraries for the armhf architecture and tools for the native architecture (usually amd64 or arm64). For example:
apt-get install sgmltools ghostscript libpng-dev:armhf libssl-dev:armhf
would install native ghostscript and sgmltools (for doc-building) and headers/libraries for libpng and libssl for armhf.
More details on the Debian wiki.

Fixing libc.so.6 unexpected reloc type 0x25

I'm trying to install gcc 4.9 on a SUSE system without an internet connection. I compiled gcc on an Ubuntu machine and installed it into a prefix, then copied the prefix folder to the SUSE machine. When I tried to run it, gcc complained about not finding GLIBC_2.14, so I downloaded an rpm for libc6 online and added its contents to the prefix folders. My LD_LIBRARY_PATH includes prefix/lib and prefix/lib64. When I try to run any program now (ls, cp, cat, etc.) I get the error: error while loading shared libraries: /home/***/prefix/lib64/libc.so.6: unexpected reloc type 0x25.
Is there any way I can fix this so that I can get gcc4.9 up and running on this system?
As an alternative, is it possible to build gcc statically so that I don't have to worry about linking at all when I transfer it between computers?
my LD_LIBRARY_PATH includes prefix/lib and prefix/lib64
See this answer for an explanation of why this can't work.
Is there any way I can fix this so that I can get gcc4.9 up and running on this system?
Your best bet is to install whatever GCC package comes with the SuSE system, then use that GCC to configure and install gcc-4.9 on it.
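A rough sketch of that approach, assuming the gcc 4.9 sources (with the GMP, MPFR and MPC sources dropped into the tree as gmp/, mpfr/ and mpc/, since there is no network) have already been copied to the machine:
tar xf gcc-4.9.4.tar.bz2
mkdir build && cd build
../gcc-4.9.4/configure --prefix=$HOME/gcc-4.9 --disable-multilib --enable-languages=c,c++
make && make install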
If for some reason you can't do that, this answer has some of the ways in which you can build gcc-4.9 on a newer system and have it still work on an older one.
is it possible to build gcc staticaly so that I don't have to worry about linking at all when I transfer it between computers?
Contrary to popular belief, fully-static binaries are generally less portable than dynamic ones on Linux.

Installing gcc on Linux without a C compiler

How can I install gcc on a system that does not have any C compiler?
This system is a Linux-based firewall and does not have a C compiler.
I guess you have an appliance running Linux and shell access, but neither a package manager nor a compiler is installed.
So you need to cross-compile gcc and the whole toolchain (at least binutils) - this is quite simple, because the ./configure scripts of gcc, binutils, gdb etc. support cross-compiling with the --target= option. All you have to do is find out the target architecture (uname helps), then download and unpack the gcc sources on a Linux host and run ./configure --target=$YOUR_TARGET.
With this, you can now build a cross gcc - it still runs on your host, but produces binaries for your target (the firewall appliance).
This may already be sufficient for you: a typical desktop PC is much faster than a typical appliance, so it may make sense to compile everything you need on the desktop PC with the cross-compiler and cross-binutils.
But if you really wish to do so, you can now also use your cross-compiler to compile a gcc that runs on your target (set this with the --host= option) and compiles for your target (set this with the --target= option).
You can find details about allowed host/targets and examples in the gcc documentation: http://gcc.gnu.org/install/specific.html.
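In command form, the compiler-only part looks roughly like this (the target triplet and source paths are illustrative; a full hosted compiler additionally needs the target's C library and headers):
TARGET=arm-linux-gnueabi
../binutils-src/configure --target=$TARGET --prefix=$HOME/cross && make && make install
../gcc-src/configure --target=$TARGET --prefix=$HOME/cross --enable-languages=c
make all-gcc && make install-gcc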
It depends on the distribution. If it's based on Debian or one of the other big ones, you can install gcc through apt-get or a similar tool.
If it's a more basic system you need to compile gcc yourself on another computer and copy it over. It will be easiest if you have another computer with the same architecture (i386, arm or x86_64 for example).
I think that you might want to compile it statically also, so that you don't have dependencies on external libraries.
How do you plan to get all the source code needed for GCC loaded onto your machine? Could you mount the ISO image onto this machine and install from there?
Since you are using Endian Firewall, see "Building a development box" at the following link:
http://alumnus.caltech.edu/~igormt/endian/tips.html
If it's a Debian-based distribution, you can use
sudo apt-get install gcc
Note: you may need to replace "gcc" with a specific version of the Debian package.

Which cross compiler?

What is the difference between the MinGW cross compiler and the GCC cross compiler? Which one is used on which operating system?
I need to create an EXE file on Linux using Qt, so which cross compiler should I use?
MinGW is a GCC cross compiler for Windows environments. (There are multiple GCC cross compilers for various different targets.)
To compile Windows executables on your Linux box, you want a MinGW install for your distribution of Linux.
If you're running
Debian, you want http://packages.debian.org/lenny/mingw32 (apt-get install mingw32)
Ubuntu, you want http://packages.ubuntu.com/jaunty/mingw32 (apt-get install mingw32)
Red Hat Linux or CentOS, you want several of the MinGW packages from http://download.fedora.redhat.com/pub/epel/5/i386/repoview/M.group.html (see EPEL how-to then yum install mingw32-binutils and mingw32-gcc-g++ at minimum)
Gentoo, see http://www.gentoo-wiki.info/MinGW
openSUSE, then you can find builds at http://download.opensuse.org/repositories/CrossToolchain:/mingw/
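Once one of these is installed, a quick sanity check looks like the following (the compiler name varies by package and era - the old Debian/Ubuntu mingw32 package shipped i586-mingw32msvc-gcc, newer mingw-w64 packages ship i686-w64-mingw32-gcc; hello.c is illustrative):
i586-mingw32msvc-gcc hello.c -o hello.exe
file hello.exe   # should report "PE32 executable ... for MS Windows"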
MinGW32 is a port of GCC with a win32 target.
There are two architectures involved in a cross-compiler: host and target. The host is the platform the compiler runs on; the target is the platform the resulting code will run on.
Assuming you are using Ubuntu, you can see the package here.
MinGW is basically a port of GCC and related tools, allowing them to run natively on Windows machines.
Cross compiling is the act of using a compiler on one operating system/architecture to generate a binary/EXE/DLL/object that is compatible with another operating system/architecture. Basically, you ask the compiler to generate assembly and startup routines for something other than the host OS's default.
If you were on a Linux machine, you'd use GCC to compile it for the Linux machine... If you were on a Windows machine, you'd use MinGW, but with flags to tell it to compile for the Linux machine's specifications.
GCC is usually used on Linux. MinGW is just a Windows port of GCC that compiles source to EXE files.
