What does make install do when compiling GCC from source code?

I am trying to compile GCC from source. After make finished, I could not find the GCC executable. Here is the configure command I used:
../gcc-svn/configure --prefix=/home/user/Documents/mygcc
Here are my questions specifically:
What should I expect make install to do?
Does make install do more compilation, or does it just move some files to ~/Documents/mygcc? If it is the latter, where does the GCC executable currently reside?
Do any other directories on my system get affected by make install?
Thank you in advance.
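For orientation, a minimal sketch of what to expect, assuming the configure line above (before installing, the freshly built driver sits in the build tree as gcc/xgcc):
make install                      # essentially copies the build results into the prefix; no real recompilation
ls ~/Documents/mygcc/bin          # the installed gcc driver (plus g++, cpp, ...) should land here
ls ~/Documents/mygcc/libexec/gcc  # internal programs such as cc1 and cc1plus end up under here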

Related

How to link Mach-O format object files on Linux?

I have been attempting to link a Mach-O formatted object file on Linux, but I have failed miserably. So far, I have created the object file by running:
nasm -fmacho64 -o machoh.o hello.asm
I have tried linking using:
clang --target=x86_64-apple-darwin machoh.o
but that failed. I have attempted using GCC, LD, and other linkers but I have still failed miserably. Are there any ideas on how I could solve my problem?
Thank you very much.
The most accessible solution would be lld, the LLVM linker.
lld does not ship with clang, but is a separate package.
sudo apt install lld
If you installed a version of clang that isn't the default (e.g. clang-12 explicitly), then you should use the same version for lld (i.e. lld-12).
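For example, pairing the versions explicitly (the 12 is illustrative):
sudo apt install clang-12 lld-12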
Get a macOS SDK from somewhere; there are GitHub repos that archive them.
If you're uncomfortable using the above, the "legitimate" way of obtaining it without a Mac would be (a consolidated sketch follows the list):
1. Create an Apple ID.
2. Go to https://developer.apple.com/download/all/
3. Download the "Command Line Tools for Xcode <version>" package.
4. Mount or extract the dmg.
5. Extract the XAR package.
6. For each ".pkg" folder inside, run pbzx < Payload | cpio -i
7. Find the Library/Developer/CommandLineTools/SDKs/MacOSX.sdk inside.
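A consolidated sketch of those steps on a Linux host (tooling and exact file names vary by Xcode version; the dmg and pkg names below are placeholders):
7z x Command_Line_Tools_for_Xcode.dmg       # or mount the dmg some other way
xar -xf "Command Line Tools.pkg"            # unpack the XAR package
cd CLTools_Executables.pkg                  # one of the ".pkg" folders inside
pbzx < Payload | cpio -i                    # extract its Payload
ls Library/Developer/CommandLineTools/SDKs  # MacOSX.sdk should be in here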
Feed both of the above to clang:
clang --target=x86_64-apple-darwin -fuse-ld=lld --sysroot=path/to/MacOSX.sdk machoh.o
I have tried linking using: clang --target=x86_64-apple-darwin machoh.o
but that failed.
Failed how? Details matter.
Anyway, there are 3 commonly used linkers on Linux: BFD-ld, Gold, and (newest) LLD.
Of these, Gold is an ELF-only linker, and will not work for Mach-O.
In my distribution, BFD-ld is configured to support only a few emulations (use ld --help to see which ones). BFD itself does appear to support Mach-O, so it is probably possible to build a Linux BFD-ld cross-linker with such support.
LLD should support Mach-O out of the box, but you are probably not using LLD.
So your first step should be to figure out which linker clang --target=x86_64-apple-darwin ... uses, and then make it use the one which does support Mach-O.
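One way to check: clang's -### flag prints the subcommands (including the exact linker invocation) without running them, and -fuse-ld=lld then forces the Mach-O capable linker, as in the other answer:
clang --target=x86_64-apple-darwin -### machoh.o
clang --target=x86_64-apple-darwin -fuse-ld=lld machoh.o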

Installing OpenMPI: compilation fails when linking mpi_cxx

Hi, all
I am currently installing openmpi-4.1.1 on Ubuntu 18.04 from the tar.gz file.
However, when I use the nvcc (CUDA 11.2.2) compiler with -lmpi_cxx, the linker reports that it cannot find that library.
Is there anything wrong with how I am building and installing OpenMPI?
I use the following commands when building openmpi with CUDA-aware capability.
./configure --with-cuda
make -j8 install
If I remove -lmpi_cxx and keep only -lmpi, the compiler reports errors like
undefined reference to `MPI::Comm::Comm()'
Thanks a lot!
I just figured this out myself.
I needed to enable the C++ bindings of MPI when building OpenMPI.
Here are the commands:
./configure --enable-mpi-cxx --with-cuda
make all install
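As a quick check that the C++ bindings really were built and installed (the paths assume the default /usr/local prefix of a tar.gz build):
ls /usr/local/lib/libmpi_cxx*                           # should exist now
nm -DC /usr/local/lib/libmpi_cxx.so | grep 'MPI::Comm'  # the symbols the linker was missing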

arm-none-eabi-objdump: error while loading shared libraries: libdebuginfod.so.1: cannot open shared object file

If you have an answer for this, or further information, I'd welcome it. I'm following the site's advice to offer some unsolicited help by posting this question and then an answer I've already found for it.
I have a bare-metal ARM board for which I'm building a cross-toolchain, from sources for GNU binutils, gcc and gdb, and for Sourceware's Newlib. I got those four working and cross-built a DoNothing.c into an ELF file, but I couldn't disassemble it with this:
$ arm-none-eabi-objdump -S DoNothing.elf
The error was:
arm-none-eabi-objdump: error while loading shared libraries: libdebuginfod.so.1: cannot open shared object file: No such file or directory
I'll follow up with a solution.
The error was correct - my system didn't have libdebuginfod.so.1 installed - but I have another cross-binutils, installed from binary for a different target, and its objdump -S works fine on the same host. Why would one build of objdump complain about missing that shared library, when clearly not all builds of objdump need it?
First I tried rebuilding cross binutils, specifying --without-debuginfod as a configure option. No change, which seems odd: surely that should build tools that not only don't use debuginfod but which don't depend on it in any way. (If someone can answer that, or point out what I've misunderstood, it may help people.)
Next I figured debuginfod was inescapable (for my cross-tools built from source at least), so I'd install it to get rid of the error. It's a component of the elfutils package, but installing the latest elfutils available for my Ubuntu 20.04 system didn't bring libdebuginfod.so.1 with it.
I found a later one, for Arch Linux, whose package contents suggested it would, but its package format doesn't match Ubuntu's and installing it was going to involve a lot of work. Instead I opted to build it from the Arch Linux source package. However, running ./configure on that gave a couple of infuriatingly similar errors:
configure: checking libdebuginfod dependencies, --disable-libdebuginfod or --enable-libdebuginfo=dummy to skip
...
configure: error: dependencies not found, use --disable-libdebuginfod to disable or --enable-libdebuginfod=dummy to build a (bootstrap) dummy library.
No combination of those suggestions would allow configure for elfutils-0.182 to run to completion.
The problem of course was my own lack of understanding. The solution came from the Linux From Scratch project: what worked was to issue configure with both of the suggested options, like this:
$ ./configure --prefix=/usr \
--disable-debuginfod \
--enable-libdebuginfod=dummy \
--libdir=/lib
That gave a clean configure; make worked first time, as did make check and then sudo make install which of course installed libdebuginfod.so.1 as required. I then had an arm-none-eabi-objdump which disassembles cross-compiled ELF files without complaining.
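As a final sanity check, ldd lists the shared objects a binary needs; after the install, the debuginfod line should resolve instead of reading "not found":
ldd $(which arm-none-eabi-objdump) | grep debuginfod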

Cross-compile a library: curlpp

I have written a program using curlpp and ran it successfully on an Intel machine.
Now I want to compile it using an ARM cross-compiler called arm-linux-g++.
What I need to do is recompile the curlpp library using the ARM compiler. But it is weird that there are .a and .la files in the output, while the .so file is missing!
Here are my steps:
1. Recompile curl:
./configure --host=arm-linux --prefix=/root/curl/build/target/
make
make install
2. Recompile curlpp:
env CPPFLAGS="-I/root/curl/build/target/include" \
    LDFLAGS="-L/root/curl/build/target/lib" \
    ./configure --host=arm-linux --prefix=/root/curlpp/build/target --build=i586
make
make install
3. Move /root/curlpp/build/target/ and /root/curl/build/target/ to /root/usr/local/.
4. Compile my program:
arm-linux-g++ -I/root/usr/local/include -L/root/usr/local/lib abc.cpp -lcurlpp -o abc
And the compiler complains that it cannot find -lcurlpp (since the .so file is missing).
Please teach me how to compile this using the cross-compiler.
Thank you very much.
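One hedged guess, for illustration: libtool-based packages sometimes decide not to build shared libraries for a cross target, and --enable-shared is the standard libtool switch for insisting on a .so. A sketch, reusing the paths from the steps above:
env CPPFLAGS="-I/root/curl/build/target/include" \
    LDFLAGS="-L/root/curl/build/target/lib" \
    ./configure --host=arm-linux --prefix=/root/curlpp/build/target --enable-shared
ls /root/curlpp/build/target/lib/libcurlpp*   # check whether a .so appears now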

Force gcc to compile 32-bit programs on a 64-bit platform

I've got a proprietary program that I'm trying to use on a 64-bit system.
When I launch the setup it works OK, but afterwards it tries to update itself and compile some modules, and then it fails to load them.
I suspect this is because it uses gcc, which compiles the modules for a 64-bit system, so the program cannot use them.
Is there any way (an environment variable or something like that) to force gcc to do everything for a 32-bit platform? Would a 32-bit chroot work?
You need to make GCC use the -m32 flag.
You could try putting a simple shell script named gcc in your $PATH (make sure you don't overwrite the original gcc, that the new script comes earlier in $PATH, and that it invokes the real GCC by its full path).
The body just needs to forward all arguments to the real compiler with -m32 added, as sketched below; getting the argument forwarding right is very important.
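A minimal sketch of such a wrapper (assuming the real compiler lives at /usr/bin/gcc; adjust the path for your system):
#!/bin/sh
# Fake "gcc" placed earlier in $PATH than the real one: it forwards every
# argument to the real compiler with -m32 prepended. "$@" keeps quoted
# arguments intact, which a bare $* would not.
exec /usr/bin/gcc -m32 "$@"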
You may get a 32-bit binary by applying Alan Pearce's method, but you may also get errors as follows:
fatal error: bits/predefs.h: No such file or directory
If this is the case and if you have apt-get, just install gcc-multilib
sudo apt-get install gcc-multilib
For any code that you compile directly using gcc/g++, you will need to add the -m32 option to the compilation command line; simply edit your CFLAGS, CXXFLAGS and LDFLAGS variables in your Makefile.
For any 3rd-party code you might be using, you must make sure to configure it for cross-compilation when you build it. Run ./configure --help and see which options are available. In most cases you can provide your CFLAGS, CXXFLAGS and LDFLAGS variables to the configure script. You might also need to add --build and --host, so you end up with something like
./configure CFLAGS=-m32 CXXFLAGS=-m32 LDFLAGS=-m32 --build=x86_64-pc-linux-gnu --host=i686-pc-linux-gnu
If compilation fails, this probably means that you need to install some 32-bit development packages on your 64-bit machine.
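To confirm the result, file(1) reports the ELF class of whatever you built (hello.c stands in for any small test program):
gcc -m32 -o hello32 hello.c
file hello32   # should report "ELF 32-bit LSB executable"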
