"make install" - Changing output destination for all builds - linux

I am doing Linux development on a few machines, mainly Slackware 13.37 and Ubuntu 12.04. I am testing and validating the results of a few simple makefiles, and want to confirm the output of make install. However, before I go ahead testing this, I want to know if there is a portable means of changing the default output destination for make install for any makefile.
I would prefer if I could somehow stage my output, so all output goes to, for example:
/test/bin
/test/lib
/test/usr/bin
instead of:
/bin
/lib
/usr/bin
I know that in QNX development environments, for example, I can set environment variables like QCONF_OVERRIDE and INSTALL_ROOT_nto, and guarantee that no makefile is able to install anywhere other than a subdirectory of /test for example. Is there a similar mechanism for GCC on Ubuntu that just requires setting some environment variables in my ~/.bashrc file? I do all my work via command-line and VIM anyways, so I'm not worried about the case where a pretty IDE doesn't understand these environment variables due to them not being in a .kderc, .gnomerc, or equivalent.
Thank you.

Short answer: no.
Long answer:
There isn't a way to set the output destination for any Makefile; the Makefile or some other part of the build system has to be designed to make it possible. make is a very simple tool because it's intended to function identically across a wide variety of platforms. Consequently, it doesn't really use environment variables that aren't present in the Makefile itself. This is good in terms of environment pollution and for keeping things less magic, but bad for achieving your desired goal.
A bit of context: things are a bit more complicated in part because, unlike the QNX development environment (a largely homogeneous cross-compilation environment), a large portion of software that uses make (I'm assuming GNU make but this applies to other versions as well) to build is written for a heterogeneous build and run environment—it may be designed to be able to build for different distributions, operating systems (Linux, MS Windows, Mac OS X, FreeBSD, etc.), and even hardware architecture (x86, arm, mips, power, sparc, sh, etc.). Sometimes you're building for the same system and sometimes for a different one. Unsurprisingly, there isn't really a standard way to define an install path across such a variety of systems.
Basile mentioned in his answer that for programs that use the GNU build system, you can use ./configure --prefix=/test. This will typically work for simple programs built with Autotools. For more complicated Autotools-based programs, there are usually more options to consider, depending on your situation:
cross-compiling? You might want your prefix to be set to where it's going to be installed on the target system (maybe /usr/local), and to use make install DESTDIR=/test.
Does your build system expect dependencies in the prefix directory, but you want to find them elsewhere? Better run ./configure --help to see what other options there are for providing alternate paths (often --with-foo=/prefix/of/foo)
I'm sure there's more that I'm forgetting right now, but I think you get the picture.
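For a typical Autotools-based project, the staged-install workflow sketched above usually looks something like this (treat it as a rough sketch; the prefix and staging path are just examples):
./configure --prefix=/usr/local
make
make install DESTDIR=/test
Everything then lands under /test/usr/local/..., while the binaries still believe their real home is /usr/local.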
Keep in mind that only applies to projects that use Autotools. Other projects may have other systems (perhaps naming a variable or editing a configuration file), so ultimately your best bet is to read the project documentation, and failing that, the Makefile. Fun, eh?
P.S. Having variables defined in the environment is different than passing them as a command argument to make, i.e. SPAM="alot" make target is different from make target SPAM="alot"—the latter will override makefile variables. See the GNU make docs on Variables from the Environment.
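A tiny made-up example of that difference (SPAM and the show target are purely illustrative), assuming GNU make:
# Makefile
SPAM = default
show:
	echo $(SPAM)
SPAM="alot" make show    # prints "default": the makefile's own assignment beats the environment
make show SPAM="alot"    # prints "alot": a command-line assignment overrides the makefile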

Just change the prefix variable in the makefile
prefix=/test
then run
make install
You can also run the following command to install the binaries:
make prefix=/test install
Refer to http://www.gnu.org/prep/standards/html_node/Directory-Variables.html
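If you want your own hand-written makefile to honour such overrides, a minimal sketch following those GNU directory-variable conventions might look like this (myprog and the install commands are only illustrative; recipe lines must start with a tab):
prefix      = /usr/local
exec_prefix = $(prefix)
bindir      = $(exec_prefix)/bin

myprog: myprog.o
	$(CC) $(CFLAGS) -o $@ myprog.o

install: myprog
	install -d $(DESTDIR)$(bindir)
	install -m 0755 myprog $(DESTDIR)$(bindir)/myprog
With a makefile like that, both make prefix=/test install and make DESTDIR=/test install redirect the installation - but only because the makefile was written to respect those variables.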

At least for GNU software (using autotools) you should configure your software with
./configure --prefix=/test
just using (without a specific --prefix to configure before)
make prefix=/test install
usually won't work correctly, because some file paths are compiled into the program as constants.
You could also use make install DESTDIR=/tmp/dest and then copy /tmp/dest to /test, but it won't work correctly either.
For example, my /usr/local/bin/emacs binary contains the string /usr/local/share/emacs/24.3.50/lisp (checked with the strings /usr/local/bin/emacs command), and the /usr/local/ part of that path is the configure-d prefix.
BTW, you could have a chroot-ed environment to test your applications for various distributions.

Related

How to install a single Perl Crypt::OpenSSL::AES for use by different linux environments

I have a sticky problem that I am not quite sure how to solve. The situation is as follows:
We have a common 32bit perl 5.10.0
It is used by both 32bit and 64bit linux machines
Now the problem is that I need to install Crypt::OpenSSL::AES module for the Perl, however since it builds a shared library a lot of problems appear:
If built on 64bit machines the module is not usable with "wrong ELF class: ELFCLASS64" error for the generated AES.so
If built on a 32bit machine the module is not usable on the 64bit with undefined symbol: AES_encrypt
The problem I'm guessing is that the different machines have different versions of OpenSSL installed and they are not compatible with each other.
My question is given that I cannot change any of the machine configurations, what should I do to get the AES module working on all the machines?
Thanks!
I solved the problem with a combination of staticperl and building statically linked Crypt::OpenSSL::AES so that I have a single perl executable that is fully statically linked.
Given that I am not able to modify the environment, this is the best I can come up with.
Perl's default configuration very intentionally puts platform-specific things in a separate directory; you appear to have broken that model. Consider restoring it.
I assume you built your perl on a 32-bit machine, so during the build process, Configure didn't include any of the 'make this 32 bit' compiler switches. If you build on a 64-bit machine now, the build process will use exactly the same switches, so you get a 64-bit binary that can't be loaded from 32-bit perl - not even on the 64-bit machines, because the 32-bit perl binary you're running there can't load a 64-bit shared library either.
You might try building your shared perl on a 64 bit machine, explicitly stating you want a 32 bit perl. There should be some configure parameters for this. That way, you have a perl that sets the "use 32 bit" compiler flag when building modules. Then, you can use that version of perl on each of the machines to build the module. The modules won't be identical, but each of them will run on its bit size, and your software distribution process could pull the correct module when distributing to a specific machine.
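As a rough, untested sketch (the exact switches depend on your toolchain, and you will need the 32-bit development libraries installed), forcing a 32-bit perl build on a 64-bit Linux machine usually means passing -m32 through to Configure, something like:
sh Configure -des -Dprefix=/opt/perl32 -Dcc='gcc -m32' -Accflags='-m32' -Aldflags='-m32'
make && make test && make install
Modules built with that perl afterwards inherit the same flags, so the resulting AES.so should come out 32-bit as well.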
However, the real problem lies somewhat deeper. I assume someone in your company at some point said "We don't want to be dependent on what the distributions provide, let's build our own perl that we can copy everywhere". This sounds like a good idea, but it is NOT. Different Linux versions use different versions of shared libraries, different default directories for configuration files, different default path variables, etc. The configure process takes care of that and creates a perl binary for exactly your machine. If you copy this to a different machine, it might not find symbols in other versions of shared libraries. It might try to read libraries from directories that don't exist there. It might not include a workaround for some bug that was corrected on the machine where you built it, but need the workaround on the older system you copied it to. Or, it might provide a workaround for something that has long been fixed on the newer system, thus wasting CPU time.
So, essentially, creating one perl to copy everywhere will ONLY work well if you build a static perl that includes everything and doesn't need any shared libraries. The standard, shared-library-using perl you compile on one machine does NOT meet the "behaves the same everywhere I copy it to" requirement you probably had, because it depends far too much on the stuff "around" it.

Automake and files with the same name

I've a C++ autoconf managed project that I'm adapting to compile on FreeBSD hosts.
The original system was Linux so I made one AM_CONDITIONAL to distinguish the host I'm building and separate the code into system specific files.
configure.ac
AC_CANONICAL_HOST

AM_CONDITIONAL([IS_FREEBSD], false)
case $host in
  *free*)
    AC_DEFINE([IS_FREEBSD], [1], [FreeBSD Host])
    AM_CONDITIONAL([IS_FREEBSD], true)
    BP_ADD_LDFLAG([-L/usr/local/lib])
    ;;
esac
Makefile.am
lib_LTLIBRARIES = mylib.la
mylib_la_SOURCES = a.cpp \
                   b.cpp

if IS_FREEBSD
mylib_la_SOURCES += freebsd/c.cpp
else
mylib_la_SOURCES += linux/c.cpp
endif
When I run automake it fails with this kind of message:
Makefile.am: object `c.lo' created by `linux/c.cpp' and `freebsd/c.cpp'
Any ideas on how to configure automake to respect this conditional, even in the Makefile.in build process?
I know this works if the files have different names, but it's C++ code and I'm trying to keep the filenames the same as the class names.
Thanks in advance!
You could request for the objects to be built in their respective subdirectories with
AUTOMAKE_OPTIONS = subdir-objects
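With that option, the Makefile.am from the question could stay essentially as it is (a sketch, assuming the IS_FREEBSD conditional from your configure.ac); the objects are then built as freebsd/c.lo and linux/c.lo and no longer collide:
AUTOMAKE_OPTIONS = subdir-objects

lib_LTLIBRARIES = mylib.la
mylib_la_SOURCES = a.cpp b.cpp
if IS_FREEBSD
mylib_la_SOURCES += freebsd/c.cpp
else
mylib_la_SOURCES += linux/c.cpp
endif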
Another option, besides subdir-objects, is to give each sub-project some custom per-project build flags. When you do this, automake changes its *.o naming rules to prepend the target name onto the module name. For example, this:
mylib_la_CXXFLAGS=$(AM_CXXFLAGS)
mylib_la_SOURCES=a.cpp b.cpp
will result in the output files mylib_la-a.o and mylib_la-b.o, rather than a.o and b.o. Thus you can have two different projects with the same output directory that each have, say, a b.cpp file, and not have their outputs conflict.
Notice that I did this by setting the project-specific CXXFLAGS to the values automake was already going to use, AM_CXXFLAGS. Automake isn't smart enough to detect this trick and use the shorter *.o names. If it happens that you do need per-project build options, you can of course do that instead of this hack.
There's a whole list of automake variables that, when set on a per-executable basis, give this same effect. So for instance, maybe one sub-project needs special link flags already, so you give it something like:
mylib_la_LDFLAGS=-lfoo
This will give you the prefixed *.o files just as the AM_CXXFLAGS trick did, only now you are "legitimately" using this feature, instead of tricking automake into doing it.
By the way, it's bad autoconf style to change how your program builds based solely on the OS it's being built for. Good autoconf style is to check only for specific platform features, not whole platforms, because platforms change. FreeBSD might be a certain way today, but maybe in the next release it will copy a feature from Linux that would erase the need for you to build your program two different ways. Or, maybe the feature you're using today is deprecated, and will be dropped in the next version.
There's forty years of portable Unix programming wisdom in the autotools, grasshopper. The "maybes" I've given above have happened in the past, and will certainly do so again. Testing individual features is the nimblest way to cope with constantly changing platforms.
You can get unexpected bonuses from this approach, too. For instance, maybe your program needs two nonportable features to do its work. Say that on FreeBSD, these are the A and B features, and on Linux, they're the X and Y features; A and X are similar mechanisms but with different interfaces, and the same for B and Y. It could be that feature A comes from the original BSDs, and is in Solaris because it has BSD roots from SunOS in the 80's, and Solaris also has feature Y from its System V-based redesign in the early 90's. By testing for these features, your program could run on Solaris, too, because it has the features your program needs, just not in the same combination as on FreeBSD and Linux.

Why use build tools like Autotools when we can just write our own makefiles?

Recently, I switched my development environment from Windows to Linux. So far, I have only used Visual Studio for C++ development, so many concepts, like make and Autotools, are new to me. I have read the GNU makefile documentation and got almost an idea about it. But I am kind of confused about Autotools.
As far as I know, makefiles are used to make the build process easier.
Why do we need tools like Autotools just for creating the makefiles? Since everyone knows how to create a makefile, I am not getting the real use of Autotools.
What is the standard? Do we need to use tools like this or would just handwritten makefiles do?
You are talking about two separate but intertwined things here:
Autotools
GNU coding standards
Within Autotools, you have several projects:
Autoconf
Automake
Libtool
Let's look at each one individually.
Autoconf
Autoconf easily scans an existing tree to find its dependencies and create a configure script that will run under almost any kind of shell. The configure script allows the user to control the build behavior (i.e. --with-foo, --without-foo, --prefix, --sysconfdir, etc..) as well as doing checks to ensure that the system can compile the program.
Configure generates a config.h file (from a template) which programs can include to work around portability issues. For example, if HAVE_LIBPTHREAD is not defined, use forks instead.
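As a rough illustration (the project and library names are made up), a configure.ac along these lines produces exactly that kind of config.h:
AC_INIT([myapp], [0.1])
AC_PROG_CC
AC_CONFIG_HEADERS([config.h])
AC_CHECK_LIB([pthread], [pthread_create])  dnl defines HAVE_LIBPTHREAD in config.h when found
AC_CONFIG_FILES([Makefile])
AC_OUTPUT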
I personally use Autoconf on many projects. It usually takes people some time to get used to m4. However, it does save time.
You can have makefiles inherit some of the values that configure finds without using automake.
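For example, given AC_CONFIG_FILES([Makefile]) in configure.ac, a hand-written Makefile.in can pick up configure's results through @...@ substitutions; a minimal sketch (the program name is illustrative):
# Makefile.in - config.status rewrites the @...@ placeholders into Makefile
CC     = @CC@
CFLAGS = @CFLAGS@
LIBS   = @LIBS@
prefix = @prefix@
bindir = @bindir@

myapp: myapp.o
	$(CC) $(CFLAGS) -o $@ myapp.o $(LIBS)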
Automake
By providing a short template that describes what programs will be built and what objects need to be linked to build them, Makefiles that adhere to GNU coding standards can automatically be created. This includes dependency handling and all of the required GNU targets.
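For a small program, the whole template can be as short as this sketch (names are illustrative); automake expands it into a full GNU-standard Makefile.in:
bin_PROGRAMS = myapp
myapp_SOURCES = main.c util.c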
Some people find this easier. I prefer to write my own makefiles.
Libtool
Libtool is a very cool tool for simplifying the building and installation of shared libraries on any Unix-like system. Sometimes I use it; other times (especially when just building static link objects) I do it by hand.
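Used through automake, a shared library typically needs no more than this in Makefile.am (the library name and version-info triplet are illustrative):
lib_LTLIBRARIES = libfoo.la
libfoo_la_SOURCES = foo.c
libfoo_la_LDFLAGS = -version-info 1:0:0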
There are other options too, see StackOverflow question Alternatives to Autoconf and Autotools?.
Build automation & GNU coding standards
In short, you really should use some kind of portable build configuration system if you release your code to the masses. What you use is up to you. GNU software is known to build and run on almost anything. However, you might not need to adhere to such (and sometimes extremely pedantic) standards.
If anything, I'd recommend giving Autoconf a try if you're writing software for POSIX systems. Just because Autotools produce part of a build environment that's compatible with GNU standards doesn't mean you have to follow those standards (many don't!) :) There are plenty of other options, too.
Edit
Don't fear m4 :) There is always the Autoconf macro archive. Plenty of examples, or drop in checks. Write your own or use what's tested. Autoconf is far too often confused with Automake. They are two separate things.
First of all, the Autotools are not an opaque build system but a loosely coupled tool-chain, as tinkertim already pointed out. Let me just add some thoughts on Autoconf and Automake:
Autoconf is the configuration system that creates the configure script based on feature checks that are supposed to work on all kinds of platforms. A lot of system knowledge has gone into its m4 macro database during the 15 years of its existence. On the one hand, I think the latter is the main reason Autotools have not been replaced by something else yet. On the other hand, Autoconf used to be far more important when the target platforms were more heterogeneous and Linux, AIX, HP-UX, SunOS, ..., and a large variety of different processor architectures had to be supported. I don't really see its point if you only want to support recent Linux distributions and Intel-compatible processors.
Automake is an abstraction layer for GNU Make and acts as a Makefile generator from simpler templates. A number of projects eventually got rid of the Automake abstraction and reverted to writing Makefiles manually because you lose control over your Makefiles and you might not need all the canned build targets that obfuscate your Makefile.
Now to the alternatives (and I strongly suggest an alternative to Autotools based on your requirements):
CMake's most notable achievement is replacing AutoTools in KDE. It's probably the closest you can get if you want to have Autoconf-like functionality without m4 idiosyncrasies. It brings Windows support to the table and has proven to be applicable in large projects. My beef with CMake is that it is still a Makefile-generator (at least on Linux) with all its inherent problems (e.g. Makefile debugging, timestamp signatures, implicit dependency order).
SCons is a Make replacement written in Python. It uses Python scripts as build control files allowing very sophisticated techniques. Unfortunately, its configuration system is not on par with Autoconf. SCons is often used for in-house development when adaptation to specific requirements is more important than following conventions.
If you really want to stick with Autotools, I strongly suggest to read Recursive Make Considered Harmful (archived) and write your own GNU Makefile configured through Autoconf.
The answers already provided here are good, but I'd strongly recommend not taking the advice to write your own makefile if you have anything resembling a standard C/C++ project. We need the autotools instead of handwritten makefiles because a standard-compliant makefile generated by automake offers a lot of useful targets under well-known names, and providing all these targets by hand is tedious and error-prone.
Firstly, writing a Makefile by hand seems a great idea at first, but most people will not bother to write more than the rules for all, install and maybe clean. automake generates dist, distcheck, clean, distclean, uninstall and all these little helpers. These additional targets are a great boon to the sysadmin that will eventually install your software.
Secondly, providing all these targets in a portable and flexible way is quite error-prone. I've done a lot of cross-compilation to Windows targets recently, and the autotools performed just great. In contrast to most hand-written files, which were mostly a pain in the ass to compile. Mind you, it is possible to create a good Makefile by hand. But don't overestimate yourself, it takes a lot of experience and knowledge about a bunch of different systems, and automake creates great Makefiles for you right out of the box.
Edit: And don't be tempted to use the "alternatives". CMake and friends are a horror to the deployer because they aren't interface-compatible with configure and friends. Every half-way competent sysadmin or developer can do great things like cross-compilation, or simple things like setting a prefix, off the top of their head or with a simple --help with a configure script. But you are damned to spend an hour or three when you have to do such things with BJam. Don't get me wrong, BJam is probably a great system under the hood, but it's a pain in the ass to use because there are almost no projects using it and very little, incomplete documentation. autoconf and automake have a huge lead here in terms of established knowledge.
So, even though I'm a bit late with this advice for this question: Do yourself a favor and use the autotools and automake. The syntax might be a bit strange, but they do a way better job than 99% of the developers do on their own.
For small projects or even for large projects that only run on one platform, handwritten makefiles are the way to go.
Where autotools really shine is when you are compiling for different platforms that require different options. Autotools is frequently the brains behind the typical
./configure
make
make install
compilation and install steps for Linux libraries and applications.
That said, I find autotools to be a pain and I've been looking for a better system. Lately I've been using bjam, but that also has its drawbacks. Good luck finding what works for you.
Autotools are needed because Makefiles are not guaranteed to work the same across different platforms. If you handwrite a Makefile, and it works on your machine, there is a good chance that it won't on mine.
Do you know what unix your users will be using? Or even which distribution of Linux? Do you know where they want software installed? Do you know what tools they have, what architecture they want to compile on, how many CPUs they have, how much RAM and disk might be available to them?
The *nix world is a cross-platform landscape, and your build and install tools need to deal with that.
Mind you, the auto* tools date from an earlier epoch, and there are many valid complaints about them, but the several projects to replace them with more modern alternatives are having trouble developing a lot of momentum.
Lots of things are like that in the *nix world.
Autotools is a disaster.
The generated ./configure script checks for features that have not been present on any Unix system for the last 20 years or so. To do this, it spends a huge amount of time.
Running ./configure takes ages. Although modern server CPUs can have dozens of cores, and there may be several such CPUs per server, ./configure is single-threaded. We still have enough years of Moore's law left that the number of CPU cores will go way up as a function of time, so the time ./configure takes will stay approximately constant whereas parallel build times reduce by a factor of 2 every 2 years due to Moore's law. Or actually, I would say the time ./configure takes might even increase due to increasing software complexity taking advantage of improved hardware.
The mere act of adding just one file to your project requires you to run automake, autoconf and ./configure which will take ages, and then you'll probably find that since some important files have changed, everything will be recompiled. So add just one file, and make -j${CPUCOUNT} recompiles everything.
And about make -j${CPUCOUNT}: the generated build system is a recursive one. Recursive make has long been considered harmful.
Then when you install the software that has been compiled, you'll find that it doesn't work. (Want proof? Clone protobuf repository from Github, check out commit 9f80df026933901883da1d556b38292e14836612, install it to a Debian or Ubuntu system, and hey presto: protoc: error while loading shared libraries: libprotoc.so.15: cannot open shared object file: No such file or directory -- since it's in /usr/local/lib and not /usr/lib; workaround is to do export LD_RUN_PATH=/usr/local/lib before typing make).
The theory is that by using autotools, you could create a software package that can be compiled on Linux, FreeBSD, NetBSD, OpenBSD, DragonflyBSD and other operating systems. The fact? Every non-Linux system that builds packages from source has numerous patch files in its repository to work around autotools bugs. Just take a look at e.g. FreeBSD /usr/ports: it's full of patches. So, it would have been just as easy to create a small patch for a non-autotools build system on a per-project basis as to create a small patch for an autotools build system on a per-project basis. Or perhaps even easier, as standard make is much easier to use than autotools.
The fact is, if you create your own build system based on standard make (and make it inclusive rather than recursive, following the recommendations of the "Recursive make considered harmful" paper), things work in a much better manner. Also, your build time goes down by an order of magnitude, perhaps even two orders of magnitude if your project is a very small project of 10-100 C language files and you have dozens of cores per CPU and multiple CPUs. It's also much easier to interface custom automatic code generation tools with a custom build system based on standard make than to deal with the m4 mess of autotools. With standard make, you can at least type a shell command into the Makefile.
So, to answer your question: why use autotools? Answer: there is no reason to do so. Autotools has been obsolete since when commercial Unix has become obsolete. And the advent of multi-core CPUs has made autotools even more obsolete. Why programmers haven't realized that yet, is a mystery. I'll happily use standard make on my build systems, thank you. Yes, it takes some amount of work to generate the dependency files for C language header inclusion, but the amount of work is saved by not having to fight with autotools.
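To illustrate that last point, here is a hedged sketch of a small, non-recursive build that lets the compiler generate the header dependencies (GNU make plus gcc/clang assumed; all names are made up):
SRCS := $(wildcard src/*.c)
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

CFLAGS += -MMD -MP    # write a .d dependency file as a side effect of each compile

myprog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

clean:
	rm -f $(OBJS) $(DEPS) myprog

-include $(DEPS)      # header dependencies kick in from the second run onwards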
I don't feel I am an expert to answer this, but I'll still give you a bit of an analogy from my experience.
To some extent it is similar to why we should write embedded code in C (a high-level language) rather than in assembly language.
Both serve the same purpose, but the latter is lengthier, more tedious, more time consuming and more error prone (unless you know the ISA of the processor very well).
The same is the case with the Automake tool and writing your own makefile.
Writing Makefile.am and configure.ac is pretty simple compared to writing an individual project Makefile.

Are there good reasons not to exploit '#!/bin/make -f' at the top of a makefile to give an executable makefile?

Mostly for my amusement, I created a makefile in my $HOME/bin directory called rebuild.mk, and made it executable, and the first lines of the file read:
#!/bin/make -f
#
# Comments on what the makefile is for
...
all: ${SCRIPTS} ${LINKS} ...
...
I can now type:
rebuild.mk
and this causes make to execute.
What are the reasons for not exploiting this on a permanent basis, other than this:
The makefile is tied to a single directory, so it really isn't appropriate in my main bin directory.
Has anyone ever seen the trick exploited before?
Collecting some comments, and providing a bit more background information.
Norman Ramsey reports that this technique is used in Debian; that is interesting to know. Thank you.
I agree that typing 'make' is more idiomatic.
However, the scenario (previously unstated) is that my $HOME/bin directory already has a cross-platform main makefile in it that is the primary maintenance tool for the 500+ commands in the directory.
However, on one particular machine (only), I wanted to add a makefile for building a special set of tools. So, those tools get a special makefile, which I called rebuild.mk for this question (it has another name on my machine).
I do get to save typing 'make -f rebuild.mk' by using 'rebuild.mk' instead.
Fixing the position of the make utility is problematic across platforms.
The #!/usr/bin/env make -f technique is likely to work, though I believe the official rules of engagement are that the line must be less than 32 characters and may only have one argument to the command.
#dF comments that the technique might prevent you passing arguments to make. That is not a problem on my Solaris machine, at any rate. The three different versions of 'make' I tested (Sun, GNU, mine) all got the extra command line arguments that I type, including options ('-u' on my home-brew version) and targets 'someprogram' and macros CC='cc' WFLAGS=-v (to use a different compiler and cancel the GCC warning flags which the Sun compiler does not understand).
I would not advocate this as a general technique.
As stated, it was mostly for my amusement. I may keep it for this particular job; it is most unlikely that I'd use it in distributed work. And if I did, I'd supply and apply a 'fixin' script to fix the pathname of the interpreter; indeed, I did that already on my machine. That script is a relic from the first edition of the Camel book ('Programming Perl' by Larry Wall).
One problem with this for generally distributable Makefiles is that the location of make is not always consistent across platforms. Also, some systems might require an alternate name like gmake.
Of course one can always run the appropriate command manually, but this sort of defeats the whole purpose of making the Makefile executable.
I've seen this trick used before in the debian/rules file that is part of every Debian package.
To address the problem of make not always being in the same place (on my system for example it's in /usr/bin), you could use
#!/usr/bin/env make -f
if you're on a UNIX-like system.
Another problem is that by using the Makefile this way you cannot override variables by doing, for example, make CFLAGS=....
"make" is shorter than "./Makefile", so I don't think you're buying anything.
The reason I would not do this is that typing "make" is more idiomatic to building Makefile based projects. Imagine if every project you built you had to search for the differently named makefile someone created instead of just typing "make && make install".
You could use a shell alias for this too.
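For example, something along these lines in ~/.bashrc gives a similar shortcut without the shebang trick (the path is illustrative):
alias rebuild='make -f ~/bin/rebuild.mk'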
We can look at this another way: is it a good idea to design a language whose interpreter looks for a fixed filename if you don't give it one? What if python looked for Pythonfile in the absence of a script name? ;)
You don't need such a mechanism in order to have a convention based around a known name. Example: Autoconf's ./configure script.

Can autotools create multi-platform makefiles

I have a plugin project I've been developing for a few years where the plugin works with numerous combinations of [primary application version, 3rd party library version, 32-bit vs. 64-bit]. Is there a (clean) way to use autotools to create a single makefile that builds all versions of the plugin.
As far as I can tell from skimming through the autotools documentation, the closest approximation to what I'd like is to have N independent copies of the project, each with its own makefile. This seems a little suboptimal for testing and development as (a) I'd need to continually propagate code changes across all the different copies and (b) there is a lot of wasted space in duplicating the project so many times. Is there a better way?
EDIT:
I've been rolling my own solution for a while where I have a fancy makefile and some perl scripts to hunt down various 3rd party library versions, etc. As such, I'm open to other non-autotools solutions. For other build tools, I'd want them to be very easy for end users to install. The tools also need to be smart enough to hunt down various 3rd party libraries and headers without a huge amount of trouble. I'm mostly looking for a linux solution, but one that also works for Windows and/or the Mac would be a bonus.
If your question is:
Can I use the autotools on some machine A to create a single universal makefile that will work on all other machines?
then the answer is "No". The autotools do not even make a pretense at trying to do that. They are designed to contain portable code that will determine how to create a workable makefile on the target machine.
If your question is:
Can I use the autotools to configure software that needs to run on different machines, with different versions of the primary software which my plugin works with, plus various 3rd party libraries, not to mention 32-bit vs 64-bit issues?
then the answer is "Yes". The autotools are designed to be able to do that. Further, they work on Unix, Linux, MacOS X, BSD.
I have a program, SQLCMD (which pre-dates the Microsoft program of the same name by a decade and more), which works with the IBM Informix databases. It detects which version of the client software (called IBM Informix ESQL/C, part of the IBM Informix ClientSDK or CSDK) is installed, and whether it is 32-bit or 64-bit, and it adapts its functionality to what is available in the supporting product. It supports versions that have been released over a period of about 17 years. It is autoconfigured -- I had to write some autoconf macros for the Informix functionality, and for a couple of other gizmos (high resolution timing, presence of /dev/stdin etc). But it is doable.
On the other hand, I don't try and release a single makefile that fits all customer machines and environments; there are just too many possibilities for that to be sensible. But autotools takes care of the details for me (and my users). All they do is:
./configure
That's easier than working out how to edit the makefile. (Oh, for the first 10 years, the program was configured by hand. It was hard for people to do, even though I had pretty good defaults set up. That was why I moved to auto-configuration: it makes it much easier for people to install.)
Mr Fooz commented:
I want something in between. Customers will use multiple versions and bitnesses of the same base application on the same machine in my case. I'm not worried about cross-compilation such as building Windows binaries on Linux.
Do you need a separate build of your plugin for the 32-bit and 64-bit versions? (I'd assume yes - but you could surprise me.) So you need to provide a mechanism for the user to say
./configure --use-tppkg=/opt/tp/pkg32-1.0.3
(where tppkg is a code for your third-party package, and the location is specifiable by the user.) However, keep in mind usability: the fewer such options the user has to provide, the better; against that, do not hard code things that should be optional, such as install locations. By all means look in default locations - that's good. And default to the bittiness of the stuff you find. Maybe if you find both 32-bit and 64-bit versions, then you should build both -- that would require careful construction, though. You can always echo "Checking for TP-Package ..." and indicate what you found and where you found it. Then the installer can change the options. Make sure you document in './configure --help' what the options are; this is standard autotools practice.
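A hedged sketch of how such an option is typically wired up in configure.ac (the standard macro is AC_ARG_WITH, so the flag comes out as --with-tppkg rather than the hypothetical --use-tppkg; the default directory is made up):
AC_ARG_WITH([tppkg],
  [AS_HELP_STRING([--with-tppkg=DIR], [location of the third-party package])],
  [TPPKGDIR=$withval],
  [TPPKGDIR=/opt/tp/default])
AC_SUBST([TPPKGDIR])
configure then substitutes the value into the generated makefile, much like the #TPPKG32DIR# placeholders in the sketch further down.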
Do not do anything interactive though; the configure script should run, reporting what it does. The Perl Configure script (note the capital letter - it is a wholly separate automatic configuration system) is one of the few intensively interactive configuration systems left (and that is probably mainly because of its heritage; if starting anew, it would most likely be non-interactive). Such systems are more of a nuisance to configure than the non-interactive ones.
Cross-compilation is tough. I've never needed to do it, thank goodness.
Mr Fooz also commented:
Thanks for the extra comments. I'm looking for something like:
./configure --use-tppkg=/opt/tp/pkg32-1.0.3 --use-tppkg=/opt/tp/pkg64-1.1.2
where it would create both the 32-bit and 64-bit targets in one makefile for the current platform.
Well, I'm sure it could be done; I'm not so sure that it is worth doing by comparison with two separate configuration runs with a complete rebuild in between. You'd probably want to use:
./configure --use-tppkg32=/opt/tp/pkg32-1.0.3 --use-tppkg64=/opt/tp/pkg64-1.1.2
This indicates the two separate directories. You'd have to decide how you're going to do the build, but presumably you'd have two sub-directories, such as 'obj-32' and 'obj-64' for storing the separate sets of object files. You'd also arrange your makefile along the lines of:
FLAGS_32 = ...32-bit compiler options...
FLAGS_64 = ...64-bit compiler options...

TPPKG32DIR = #TPPKG32DIR#
TPPKG64DIR = #TPPKG64DIR#

OBJ32DIR = obj-32
OBJ64DIR = obj-64

BUILD_32 = #BUILD_32#
BUILD_64 = #BUILD_64#

TPPKGDIR =
OBJDIR =
FLAGS =

all: ${BUILD_32} ${BUILD_64}

build_32:
	${MAKE} TPPKGDIR=${TPPKG32DIR} OBJDIR=${OBJ32DIR} FLAGS=${FLAGS_32} build

build_64:
	${MAKE} TPPKGDIR=${TPPKG64DIR} OBJDIR=${OBJ64DIR} FLAGS=${FLAGS_64} build

build: ${OBJDIR}/plugin.so
This assumes that the plugin would be a shared object. The idea here is that the autotool would detect the 32-bit or 64-bit installs for the Third Party Package, and then make substitutions. The BUILD_32 macro would be set to build_32 if the 32-bit package was required and left empty otherwise; the BUILD_64 macro would be handled similarly.
When the user runs 'make all', it will build the build_32 target first and the build_64 target next. To build the build_32 target, it will re-run make and configure the flags for a 32-bit build. Similarly, to build the build_64 target, it will re-run make and configure the flags for a 64-bit build. It is important that all the flags affected by 32-bit vs 64-bit builds are set on the recursive invocation of make, and that the rules for building objects and libraries are written carefully - for example, the rule for compiling source to object must be careful to place the object file in the correct object directory - using GCC, for example, you would specify (in a .c.o rule):
${CC} ${CFLAGS} -o ${OBJDIR}/$*.o -c $*.c
The macro CFLAGS would include the ${FLAGS} value which deals with the bits (for example, FLAGS_32 = -m32 and FLAGS_64 = -m64), and so when building the 32-bit version, FLAGS = -m32 would be included in the CFLAGS macro.
The residual issue in the autotools is working out how to determine the 32-bit and 64-bit flags. If the worst comes to the worst, you'll have to write macros for that yourself. However, I'd expect (without having researched it) that you can do it using standard facilities from the autotools suite.
Unless you create yourself a carefully (even ruthlessly) symmetric makefile, it won't work reliably.
As far as I know, you can't do that. However, are you stuck with autotools? Are neither CMake nor SCons an option?
We tried it and it doesn't work! So we now use SCons.
Some articles on this topic: 1 and 2
Edit:
A small example of why I love SCons:
env.ParseConfig('pkg-config --cflags --libs glib-2.0')
With this line of code you add GLib to the compile environment (env). And don't forget the User Guide, which is just great for learning SCons (you really don't have to know Python!). For the end user you could try SCons with PyInstaller or something like that.
And in comparison to make, you use Python, so a complete programming language! With this in mind you can do just about everything (more or less).
Have you ever considered using a single project with multiple build directories?
If your automake project is implemented in a proper way (i.e. NOT like gcc), the following is possible:
mkdir build1 build2 build3
cd build1
../configure $YOUR_OPTIONS
cd ../build2
../configure $YOUR_OPTIONS2
[...]
You are able to pass different configuration parameters, such as include directories and compilers (cross compilers, for instance).
You can then build all of them from the top-level directory by running
make -C build1 && make -C build2 && make -C build3
