Handling autoconf with Android after NDK16 - android-ndk

I'm trying to update an existing configuration we have for cross compiling to a number of targets - the question here is specifically about Android. More specifically, we are building code using cmake and the hunter package manager; however, we are building ICU through a build step that uses autoconf/configure, called from cmake. I'm not sure that is particularly important, except that it means we have less control over how configure is invoked than is generally the case.
We have a version that builds against an old NDK, but I am updating and have hit a problem identified by https://android.googlesource.com/platform/ndk/+/master/docs/UnifiedHeaders.md: with NDK r16 and later, the value of the sysroot parameter needs to differ between compilation and linking. As it stands, the configure script tries to build a small program, conftest.c, and that program fails to link. Manually I can build the code in two stages, compiling with -c and then linking the resulting .o, but that is not what configure is trying to do.
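Concretely, the manual two-step build looks roughly like this (the paths, triple and API level are illustrative; under the r16 unified-headers scheme the headers come from $NDK/sysroot while the libraries still come from the per-API platform directory):

# compile: headers from the unified sysroot
$CC --sysroot=$NDK/sysroot -isystem $NDK/sysroot/usr/include/arm-linux-androideabi \
    -D__ANDROID_API__=21 -c conftest.c -o conftest.o
# link: libraries from the per-API platform tree
$CC --sysroot=$NDK/platforms/android-21/arch-arm -o conftest conftest.o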
Now, the reality is that when I build this code I don't actually need to link it - I am generating a library which is used elsewhere. However, that is not currently the way configure sees it.
I may look at redoing the configuration script so that it only checks that the code compiles when cross compiling. However, I am curious to know whether anybody has managed to handle this sort of thing while keeping the existing config files and just changing the parameters with which the scripts are called.

When r19 releases to stable this problem will go away on its own (https://github.com/android-ndk/ndk/issues/780), but since that's still in beta it's not a good solution just yet.
Prior to r19 (this isn't really unique to r16+, this has always been the case and it was just asymptomatic previously), autoconf builds should be done using a standalone toolchain.
However, you should not use a standalone toolchain for CMake, so odds are something about your configuration will need to change until r19 is released. Depending on the effort involved, it may make sense to stay on r15 until r19 is available.
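For reference, the standalone-toolchain setup for the autoconf part might look roughly like this (arch, API level and install path are illustrative):

# create a self-contained toolchain that bakes the sysroot handling into the compiler
$NDK/build/tools/make_standalone_toolchain.py --arch arm --api 21 --install-dir /tmp/android-arm-toolchain
# point the autoconf build at it
export CC=/tmp/android-arm-toolchain/bin/clang
export CXX=/tmp/android-arm-toolchain/bin/clang++
./configure --host=arm-linux-androideabi

The CMake side can keep using the NDK's own CMake toolchain file as before.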

Related

Makefile explanation. Understanding someone else's Makefile

I am relatively new to programming on Linux.
I understand that Makefiles are used to ease the compiling process when compiling several files.
Rather than writing "g++ main.cpp x.cpp y.cpp -o executable" every time you need to compile and run your program, you can throw it into a Makefile and run make in that directory.
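For example, the one-liner above could become a minimal Makefile along these lines (the file names are just the ones from my example):

# minimal Makefile: rebuild 'executable' whenever one of the sources changes
# (the indented command line must start with a tab)
executable: main.cpp x.cpp y.cpp
	g++ main.cpp x.cpp y.cpp -o executable

clean:
	rm -f executable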
I am trying to get an RPi and an Arduino to communicate with each other over nRF24L01 radios using tmrh20's library here. I have been successful using tmrh20's Makefile to build the executable needed (on the RPi). I would like, however, to use tmrh20's library to build my own executables.
I have watched several tutorial videos on Makefiles but still cannot seem to piece together what is happening in tmrh20's.
The Makefile (1) in question is here. I believe it is somehow referencing a second Makefile (2) (for filenames?) here. (Why is this necessary?)
In case it helps anyone understand (it took me a while): I had to build using SPIDEV (the instructions are here), and the Makefile (3) in the RF24 directory produced several object files which I think are relevant to Makefiles (1) and (2).
How do I work out, from tmrh20's Makefile, which files I need in order to write my own Makefile (if that makes sense)? He seems to use variables in his Makefile that are not defined - or are they perhaps defined elsewhere?
Apologies for my poor explanation.
The canonical sequence is not just make and make install. There is an initial ./configure step (such a file is here) that sets everything up and generates several files used in the make steps.
You only need to run this configure script successfully once, unless you want to change build parameters. I say "successfully" because the first execution will usually complain that you are missing libraries or header files. But once ./configure runs without errors, make and make install should run without errors too.
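So the usual sequence, run from the top of the source tree, is roughly:

./configure        # inspects your system and generates the Makefiles
make               # builds the library and executables
sudo make install  # copies the results into system directories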
PS: I didn't try to compile it, but since the project has a rather comprehensive configure script, it is likely complete and you shouldn't need to tweak the makefiles if you follow the usual procedure.
The reason for splitting the Makefiles in the way you've mentioned and linked to here is to separate the definition of the variables from the implementation. This way you could have multiple base Makefiles that define their PROGRAM variable differently, but all do the same thing based on the value of that variable.
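As a rough illustration of that split (the file names and the PROGRAM value here are made up, not taken from tmrh20's tree):

# Makefile for one particular program: only defines the variables...
PROGRAM = myradio
SRCS    = myradio.cpp
include ../Makefile.common

# ...while Makefile.common holds the shared implementation
$(PROGRAM): $(SRCS)
	$(CXX) $(CXXFLAGS) $(SRCS) -o $(PROGRAM) $(LDLIBS)

Any number of such small Makefiles can reuse the same rules simply by defining PROGRAM and SRCS differently.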
In my own personal opinion there is some value here - but there are very many ways to skin this proverbial cat.
Having learned GNU Make the hard way, I can only recommend you do the same. The learning curve is a bit steep at the beginning, but once you get the main concepts down, following other people's Makefiles gets pretty easy.
Good luck: https://www.gnu.org/software/make/manual/html_node/index.html

why multiple passes for building Linux From Scratch (LFS)?

I am trying to understand the concept of Linux From Scratch and would like to know why there are multiple passes for building binutils, gcc etc.
Why do we need pass 1 and pass 2 separately? Why can't we build the tools in pass 1 and then use them to build gcc, glibc, libstdc++, etc.?
The goal is to ensure that your build is consistent, no matter which compiler you're using to compile your compiler (and thus which bugs that compiler has).
Let's say you're building gcc 4.1 with gcc 3.2 (I'm going to call that gcc 3.2 "stage-0"). The folks who did QA for gcc 4.1 didn't test it to work correctly when built with any compiler other than gcc 4.1 -- hence, the need to first build a stage-1 gcc, and then use that stage-1 to compile a stage-2 compiler, to prevent any bugs in the stage-0 compiler from impacting the final result.
Then, the default compile process for gcc uses the stage-2 compiler to build a stage-3 compiler, and compares the two binaries: Any difference between them can be used as proof of presence of a bug.
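In the stock gcc build this is roughly what the process looks like (the directory names are illustrative; the stage comparison is performed as part of the bootstrap target):

mkdir build && cd build
../gcc-4.1.0/configure --prefix=/opt/gcc-4.1
make bootstrap   # stage1 built with the host compiler, stage2 with stage1, stage3 with stage2,
                 # then the stage2 and stage3 object files are compared
make install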
(Of course, this is only an effective mechanism to avoid unintended bugs; see the classic Ken Thompson paper Reflections on Trusting Trust for a discussion of how intended bugs can survive this kind of measure).
This goes beyond gcc into the entire toolchain because the same principles apply throughout: if there are any differences in the result between building glibc-x.y on a system running glibc-x.y and on a system running glibc-x.(y-1), and you don't do an extra pass to ensure that you're building in a match for your target environment, then reproducing those bugs (and testing proposed fixes) is made far more difficult than would otherwise be the case: nobody who doesn't have your (typically undisclosed) build environment can necessarily recreate the bug!
I know this query is a bit old, but I have something to add to the answers: a clarification of the meaning of 'bootstrap'.
The primary reason for the multi-stage build is to eliminate every vestige of the build host's programs/config/libs from the resultant software. It's not enough to have fresh software compiled. You also have to avoid any and all references to the host's libraries, the host's kernel interfaces (kernel headers), the host's pkg versions, and all other such dependencies on the host system.
Suppose you happened to be a masochist and wanted to build Debian 4 on Fedora 27 (it should be possible). Simply building the software would pull in references to 27's libraries and other things. And your resultant system would not run because those things are not available when the final system is installed.
LFS eases the process somewhat by building simple x86-to-x86 binutils and gcc cross tools in Stage 1, then installing the headers for the kernel to be used in the final system, then glibc. Stage 2 (binutils and gcc) is built using the cross tools, which guarantees that the host's programs/libs/config are not used at all. The rest of the toolchain (I call it Stage 3) is built using the tools from Stage 2. Now the final stage can be built (with a few small adjustments) with the assurance that no part of the build host will be referenced or used, and that no part of the toolchain will be referenced or used. The final stage is built using a path much like PATH=/bin:/usr/bin:/tools/bin; thus as the final tools are built, they will be used instead of those in the toolchain.
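To make the Stage 1 idea concrete, a pass 1 build in the LFS book looks roughly like this (the exact options and version numbers vary between LFS releases; $LFS and $LFS_TGT are the book's environment variables):

# binutils pass 1: installed into /tools, aimed at the future root via --with-sysroot
../binutils-x.y/configure --prefix=/tools --with-sysroot=$LFS \
    --target=$LFS_TGT --disable-nls --disable-werror
make && make install

gcc pass 1 is configured in the same spirit (with many more --disable/--without switches), and the target's kernel headers and glibc are then built with those cross tools.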
Building a toolchain is not for the impatient. It took me months to update Smoothwall Express' build system and the pkgs used, because building a toolchain is fraught with peril. I battled many dragons, balrogs, and dwarfs. I referenced LFS often to figure out how they did it. The result is an automated re-entrant build system that builds the entire distro with no references to the host system. I primarily build it on Debian 8, but it's been known to build on Gentoo, and it's supposed to be able to build on itself.

Building libharu from scratch

Recently I've been trying to build and use the libharu library in order to create PDFs from bitmaps.
I've done some research through its site: http://libharu.org/.
There are instructions showing how to build it, but it doesn't build for me because it depends on two other libraries (zlib and libpng), and I don't understand how to integrate them into the build process.
I can't clearly understand the entire process, so my last hope is that someone who has built it from scratch could explain it to me or provide some details about the build process.
LibHaru was forked after 2.0.8. The later versions use a build system that seems to have changed; the first of the new variants was 2.10.0. The old version is on SourceForge.
I couldn't get the later version to compile, but 2.0.8 (dated 2006) worked. In the past I have seen comments suggesting I am not alone. You are correct that there are no instructions about the dependencies. If you can, you should use the pre-built version, which is mentioned there.
From your message I assume you have little software-building experience. Outlining it in a few words is not feasible, but here is a little. The dependent libraries have to be available, either as source for compiling or, occasionally, as pre-built libraries for the specific compiler/OS you are using; you have to go and get them. Then the compiler system you are using to build libharu has to be able to "see" the dependent libraries, in this case their *.h files. After compiling, the whole lot has to be linked together. None of this is rocket science, but it is a major source of frustration: everything has to be just right, usually with nothing to tell you what is wrong.
And that is why some people favor using a third party "build" tool. If it works.
libharu has two major dependencies: zlib and libpng, both widely used libraries which usually compile easily. They handle the import of bitmaps, and I think there are ways to omit them at the cost of that functionality.
So you have three sets of sources and essentially three libraries, where as a final step the first two are linked to from the code built from the libharu sources.
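In outline, and assuming the autoconf-based 2.0.x sources, the chain can be handled by installing everything into one prefix and pointing each configure at it (the prefix, flags and version directories below are illustrative; an older libpng contemporary with libharu 2.0.8 is the safer pairing):

PREFIX=$HOME/haru
# 1. zlib
(cd zlib-1.x && ./configure --prefix=$PREFIX && make && make install)
# 2. libpng, which needs zlib's headers and library
(cd libpng-1.x && ./configure --prefix=$PREFIX \
    CPPFLAGS=-I$PREFIX/include LDFLAGS=-L$PREFIX/lib && make && make install)
# 3. libharu itself, told where to find both
(cd libharu-2.0.8 && ./configure --prefix=$PREFIX \
    CPPFLAGS=-I$PREFIX/include LDFLAGS=-L$PREFIX/lib && make && make install)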
Alternatively you could find a pre-built version.

best practice for building with/out included libraries

A project ships with a copy of the library foo, in a filesystem layout like:
myproject/
myproject/src/ # sources of my project
myproject/libfoo/ # import of "foo" library
The standard (autotools-based) build system builds libfoo, then builds myproject, which dynamically links against libfoo.
libfoo is basically unmodified (apart from some minor amendments to fit it properly into the build system). libfoo uses autotools itself, so I usually call its configure recursively using AC_CONFIG_SUBDIRS.
However, libfoo is already packaged for various distributions, so on those systems I would like to avoid building against the imported copy and instead use the system-wide installation - this way I get the benefits of a better maintained version of libfoo (fewer bugs, security issues, ...).
On the other hand, I want to keep libfoo in my source tree, so that I have a fallback for building on systems that do not ship that library (without requiring the user to separately fetch the sources and build the lib themselves).
I can think of a number of configure flags I could introduce so the user can select whether they want to build the project with the system-installed copy, with the local copy, or without the library at all (it's an optional dependency).
Disabling the "local foo" should completely disable building of libfoo (and probably also configuring it).
e.g. something like:
./configure --enable-foo=no # aka "--disable-foo": build without foo
./configure --enable-foo # use system-wide foo
./configure --enable-foo=local # use local copy of foo
alternatively:
./configure --disable-foo
./configure --enable-foo --disable-local-foo
./configure --enable-foo --enable-local-foo
But I'd like to do this in a standard-conformant way.
What's the best practice for selecting, via autoconf, whether to use a local copy of a library, a system-wide copy, or not to use the library at all?
Pointers to projects that use such a mechanism are most welcome.
I have a similar setup in my project, where I use the included version of the BuDDy library when (1) the library isn't already installed, or (2) it is installed but does not have the interface I expect, or (3) configure was run with --with-included-buddy.
You can see the configure macro here. After that I just use $(BUDDY_CPPFLAGS) and $(BUDDY_LDFLAGS) in the Makefile.ams, and the top-level Makefile.am only includes the buddy directory in SUBDIRS conditionally.
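If it helps, here is a rough configure.ac sketch of the same idea using a single three-way flag; the names foo and USE_LOCAL_FOO, and the pkg-config check, are illustrative - substitute whatever detection suits libfoo:

AC_ARG_WITH([foo],
  [AS_HELP_STRING([--with-foo@<:@=system|local|no@:>@],
    [use libfoo from the system, the bundled copy, or not at all (default: system if found)])],
  [], [with_foo=check])

have_system_foo=no
AS_IF([test "x$with_foo" = xsystem || test "x$with_foo" = xyes || test "x$with_foo" = xcheck],
  [PKG_CHECK_MODULES([FOO], [foo], [have_system_foo=yes], [have_system_foo=no])])

use_local_foo=no
AS_IF([test "x$with_foo" = xlocal], [use_local_foo=yes])
AS_IF([test "x$with_foo" = xcheck && test "x$have_system_foo" = xno], [use_local_foo=yes])

AM_CONDITIONAL([USE_FOO], [test "x$with_foo" != xno])
AM_CONDITIONAL([USE_LOCAL_FOO], [test "x$use_local_foo" = xyes])

dnl For simplicity the bundled copy is configured unconditionally here;
dnl skipping its configure step when it is not used takes a little more care.
AC_CONFIG_SUBDIRS([libfoo])

The top-level Makefile.am then only descends into the bundled copy when needed:

if USE_LOCAL_FOO
SUBDIRS = libfoo src
else
SUBDIRS = src
endif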
I prefer --with-foo when dealing with external software, but it's just a preference. The examples and documentation at the link might help you decide how you want to do it. I'd go with your first example, which uses only one flag, rather than the second one with two flags, for easier documentation/maintenance.
I really don't think you want to do this
You're going to make the build machinery a lot more complicated, and autotools are already considered black magic by most. It'll make things a lot more complicated for the developer, a little more complicated for a potential distro packager and ever-so-slightly easier for the end user.
If you're conditionally configuring then you make the process of building distribution tarballs (make dist/make distcheck) more brittle.
This is the sort of trouble you can cause.
But if you must...
You may be able to adapt the code in my recent answer.

Can autotools create multi-platform makefiles

I have a plugin project I've been developing for a few years where the plugin works with numerous combinations of [primary application version, 3rd party library version, 32-bit vs. 64-bit]. Is there a (clean) way to use autotools to create a single makefile that builds all versions of the plugin?
As far as I can tell from skimming through the autotools documentation, the closest approximation to what I'd like is to have N independent copies of the project, each with its own makefile. This seems a little suboptimal for testing and development as (a) I'd need to continually propagate code changes across all the different copies and (b) there is a lot of wasted space in duplicating the project so many times. Is there a better way?
EDIT:
I've been rolling my own solution for a while where I have a fancy makefile and some perl scripts to hunt down various 3rd party library versions, etc. As such, I'm open to other non-autotools solutions. For other build tools, I'd want them to be very easy for end users to install. The tools also need to be smart enough to hunt down various 3rd party libraries and headers without a huge amount of trouble. I'm mostly looking for a linux solution, but one that also works for Windows and/or the Mac would be a bonus.
If your question is:
Can I use the autotools on some machine A to create a single universal makefile that will work on all other machines?
then the answer is "No". The autotools do not even make a pretense at trying to do that. They are designed to contain portable code that will determine how to create a workable makefile on the target machine.
If your question is:
Can I use the autotools to configure software that needs to run on different machines, with different versions of the primary software which my plugin works with, plus various 3rd party libraries, not to mention 32-bit vs 64-bit issues?
then the answer is "Yes". The autotools are designed to be able to do that. Further, they work on Unix, Linux, MacOS X, BSD.
I have a program, SQLCMD (which pre-dates the Microsoft program of the same name by a decade and more), which works with the IBM Informix databases. It detects which version of the client software (called IBM Informix ESQL/C, part of the IBM Informix ClientSDK or CSDK) is installed, and whether it is 32-bit or 64-bit, and adapts its functionality to what is available in the supporting product. It supports versions that have been released over a period of about 17 years. It is autoconfigured -- I had to write some autoconf macros for the Informix functionality, and for a couple of other gizmos (high resolution timing, presence of /dev/stdin etc). But it is doable.
On the other hand, I don't try and release a single makefile that fits all customer machines and environments; there are just too many possibilities for that to be sensible. But autotools takes care of the details for me (and my users). All they do is:
./configure
That's easier than working out how to edit the makefile. (Oh, for the first 10 years, the program was configured by hand. It was hard for people to do, even though I had pretty good defaults set up. That was why I moved to auto-configuration: it makes it much easier for people to install.)
Mr Fooz commented:
I want something in between. Customers will use multiple versions and bitnesses of the same base application on the same machine in my case. I'm not worried about cross-compilation such as building Windows binaries on Linux.
Do you need a separate build of your plugin for the 32-bit and 64-bit versions? (I'd assume yes - but you could surprise me.) So you need to provide a mechanism for the user to say
./configure --use-tppkg=/opt/tp/pkg32-1.0.3
(where tppkg is a code for your third-party package, and the location is specifiable by the user.) However, keep in mind usability: the fewer such options the user has to provide, the better; against that, do not hard code things that should be optional, such as install locations. By all means look in default locations - that's good. And default to the bitness of the stuff you find. Maybe if you find both 32-bit and 64-bit versions, then you should build both -- that would require careful construction, though. You can always echo "Checking for TP-Package ..." and indicate what you found and where you found it. Then the installer can change the options. Make sure you document in './configure --help' what the options are; this is standard autotools practice.
Do not do anything interactive though; the configure script should run, reporting what it does. The Perl Configure script (note the capital letter - it is a wholly separate automatic configuration system) is one of the few intensively interactive configuration systems left (and that is probably mainly because of its heritage; if starting anew, it would most likely be non-interactive). Such systems are more of a nuisance to configure than the non-interactive ones.
Cross-compilation is tough. I've never needed to do it, thank goodness.
Mr Fooz also commented:
Thanks for the extra comments. I'm looking for something like:
./configure --use-tppkg=/opt/tp/pkg32-1.0.3 --use-tppkg=/opt/tp/pkg64-1.1.2
where it would create both the 32-bit and 64-bit targets in one makefile for the current platform.
Well, I'm sure it could be done; I'm not so sure that it is worth doing by comparison with two separate configuration runs with a complete rebuild in between. You'd probably want to use:
./configure --use-tppkg32=/opt/tp/pkg32-1.0.3 --use-tppkg64=/opt/tp/pkg64-1.1.2
This indicates the two separate directories. You'd have to decide how you're going to do the build, but presumably you'd have two sub-directories, such as 'obj-32' and 'obj-64' for storing the separate sets of object files. You'd also arrange your makefile along the lines of:
FLAGS_32 = ...32-bit compiler options...
FLAGS_64 = ...64-bit compiler options...
TPPKG32DIR = #TPPKG32DIR#
TPPKG64DIR = #TPPKG64DIR#
OBJ32DIR = obj-32
OBJ64DIR = obj-64
BUILD_32 = #BUILD_32#
BUILD_64 = #BUILD_64#
TPPKGDIR =
OBJDIR =
FLAGS =
all: ${BUILD_32} ${BUILD_64}
build_32:
	${MAKE} TPPKGDIR=${TPPKG32DIR} OBJDIR=${OBJ32DIR} FLAGS=${FLAGS_32} build
build_64:
	${MAKE} TPPKGDIR=${TPPKG64DIR} OBJDIR=${OBJ64DIR} FLAGS=${FLAGS_64} build
build: ${OBJDIR}/plugin.so
This assumes that the plugin would be a shared object. The idea here is that the autotool would detect the 32-bit or 64-bit installs for the Third Party Package, and then make substitutions. The BUILD_32 macro would be set to build_32 if the 32-bit package was required and left empty otherwise; the BUILD_64 macro would be handled similarly.
When the user runs 'make all', it will build the build_32 target first and the build_64 target next. To build the build_32 target, it will re-run make and configure the flags for a 32-bit build. Similarly, to build the build_64 target, it will re-run make and configure the flags for a 64-bit build. It is important that all the flags affected by 32-bit vs 64-bit builds are set on the recursive invocation of make, and that the rules for building objects and libraries are written carefully - for example, the rule for compiling source to object must be careful to place the object file in the correct object directory - using GCC, for example, you would specify (in a .c.o rule):
${CC} ${CFLAGS} -o ${OBJDIR}/$*.o -c $*.c
The macro CFLAGS would include the ${FLAGS} value which deals with the bits (for example, FLAGS_32 = -m32 and FLAGS_64 = -m64), and so when building the 32-bit version, FLAGS = -m32 would be included in the CFLAGS macro.
The residual issue in the autotools is working out how to determine the 32-bit and 64-bit flags. If the worst comes to the worst, you'll have to write macros for that yourself. However, I'd expect (without having researched it) that you can do it using standard facilities from the autotools suite.
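One hedged way to determine those flags is a compile test per flag, for example with AX_CHECK_COMPILE_FLAG from the autoconf archive (the wiring into BUILD_32/BUILD_64/FLAGS_32/FLAGS_64 mirrors the sketch above; with AC_SUBST the placeholders in the makefile would be written @BUILD_32@ and so on):

# requires ax_check_compile_flag.m4 from the autoconf archive
AX_CHECK_COMPILE_FLAG([-m32], [BUILD_32=build_32; FLAGS_32=-m32], [BUILD_32=])
AX_CHECK_COMPILE_FLAG([-m64], [BUILD_64=build_64; FLAGS_64=-m64], [BUILD_64=])
AC_SUBST([BUILD_32])
AC_SUBST([BUILD_64])
AC_SUBST([FLAGS_32])
AC_SUBST([FLAGS_64])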
Unless you create yourself a carefully (even ruthlessly) symmetric makefile, it won't work reliably.
As far as I know, you can't do that. However, are you stuck with autotools? Are neither CMake nor SCons an option?
We tried it and it doesn't work! So now we use SCons.
Some articles on this topic: 1 and 2
Edit:
A small example of why I love SCons:
env.ParseConfig('pkg-config --cflags --libs glib-2.0')
With this line of code you add GLib to the compile environment (env). And don't forget the User Guide, which is just great for learning SCons (you really don't have to know Python!). For the end user you could try SCons together with PyInstaller or something like that.
And in comparison to make, you use Python - a complete programming language! With that in mind you can do just about everything (more or less).
Have you ever considered using a single project with multiple build directories?
If your automake project is implemented in a proper way (i.e. NOT like gcc), the following is possible:
mkdir build1 build2 build3
cd build1
../configure $(YOUR_OPTIONS)
cd ../build2
../configure $(YOUR_OPTIONS2)
[...]
You are able to pass different configuration parameters like include directories and compilers (e.g. cross compilers).
You can then build all of them from one command line by running
make -C build1 && make -C build2 && make -C build3
