Where did the first make binary come from? - linux

I'm having to build gnu make from source for reasons too complicated to explain here.
I noticed that to build it I require the make command itself, in the traditional fashion:
./configure
make install
So what if I didn't have the make binary already? Where did the first ever make binary come from?

From the same place the first gcc binary came from.
The first make was probably built using a shell script to do the build. After that, make would "make" itself.
It's a notable achievement in systems development when a platform becomes "self-hosting", that is, when the platform can build itself.
Things like "make make" and "gcc gcc.c".
Many language writers will create their language in another language (say, C), and when they have moved it far enough along, they will use that bootstrap compiler to write a new compiler in the language itself. Finally, they discard the original bootstrap.
Back in the day, a friend was working on a debugger for OS/2, notable for being a multi-tasking operating system at the time. He would regale us with stories of the times when they would be debugging the debugger and find a bug. So, they would debug the debugger debugging the debugger. It's a novel concept that goes to the heart of computing and abstraction.
Inevitably, it all goes back to when someone keyed in an initial program through a hardwired keypad or some other switches to get it loaded. Then they leveraged that program to do other work, and it all just grew from there.

Stuart Feldman, then at AT&T, wrote the source code for make around the time of 7th Edition UNIX™, and used manual compilation (or maybe a shell script) until make was working well enough to be used to build itself. You can find the UNIX Programmer's Manual for 7th Edition online, and in particular, the original paper describing the original version of make, dated August 1978.

make is just one convenience tool. It is still possible to invoke cc, ld, etc. manually or via other scripting tools.
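For illustration, here is a minimal sketch of what doing make's job by hand looks like; the source file names are hypothetical and the exact flags depend on your toolchain:
# compile each translation unit by hand (hypothetical sources)
cc -c main.c -o main.o
cc -c util.c -o util.o
# link the objects into an executable; cc drives ld here - invoking ld
# directly would also require spelling out the C runtime and libc
cc main.o util.o -o myprog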

If you're building GNU make, have a look at build.sh in the source tree after running configure:
# Shell script to build GNU Make in the absence of any `make' program.
# build.sh. Generated from build.sh.in by configure.

Compiling C programs is not the only way to produce an executable file. The first make executable (or, more notably, the C compiler itself) could for example have been written in assembly, or hand-coded directly in machine code. It could also have been cross-compiled on a completely different system.

The essence of make is that it is a simplified way of running some commands.
To make the first make, the author had to manually act as make, and run gcc or whatever toolset was available, rather than having it run automatically.

Related

Will writing C in both Windows and Linux cause compiling problems?

I work from 2 different machines. One is Windows and the other is Linux. If I alternately work on the same project but switch between both OSes, will I eventually run into compiling errors? I ask because maybe there are standards supported by one but not by the other.
That question is a pretty broad one and it depends, strictly speaking, on your toolchain. If you use the same toolchain (e.g. GCC/MinGW or Clang), you'll minimize the chance of this class of errors. If you use Visual Studio on Windows and GCC or Clang on the Linux side, you'll run into more issues, if only because some of the headers differ. Once your program leaves the realm of strict ANSI C (C89), you'll be on your own.
However, if you aren't careful you may run into a lot of other, more mundane errors, such as the compiler on Linux choking on CRLF line endings if you didn't tell your editor on the Windows side to use Unix (LF) line endings.
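A small sketch of how you might deal with that (assuming Git and the dos2unix tool are available; neither is mentioned in the question):
# convert an already-broken file to Unix (LF) line endings
dos2unix main.c
# on the Linux side, tell Git to normalize CRLF to LF when committing
git config --global core.autocrlf input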
Ah, and also keep in mind that if you want to actually cross-compile, GCC may be the best choice and therefore the first part I mentioned in my answer becomes a moot point. GCC is a proven choice on both ends. And given your question it's unlikely that you are trying to write something like a kernel mode driver - which would be fundamentally different.
That may only be a problem if your application uses some platform-specific API.
It is entirely possible to write code that compiles without issues on both platforms. It is, however, not without some difficulties. Compilers let you use non-standard features, and it's often hard to do fancier user interfaces (even if they're still just text), because as soon as you want to do more than "read a line of text as it is entered in a shell", you're in "non-standard" land.
If you do find yourself needing to do more than what the standard C library can do, make sure you isolate those parts of the code into a separate file (or a couple of files, one for Linux/Unix style systems and one for Windows systems).
Using the same compiler (gcc) would help avoid problems of the "compiler B doesn't compile code that works fine in compiler A" kind.
But it's far from an absolute necessity - just make sure you compile the code on both platforms, and with all of your "supported" compilers, often enough that you haven't dug a very deep hole before you discover that "it's not working on the other system". It certainly helps to have (at least) a virtual machine running the other OS, so you can easily try both variants.
Ideally, you want to set up an automated system, such that when you change the code [and feel that the changes are "complete"], it automatically gets built on both platforms and all compilers you want to use. And if possible, also automatically tested!
I would also seriously consider using version control - that way, when something breaks on one side or the other, you can go back and look at what the code looked like before it stopped working, and (hopefully) find the reason it broke much more quickly than "Hmm, I think it's the change I made to foo.c, let's take that out... No, not that one, OK, how about the change here...". At least with version control you can say "OK, so version 1234 doesn't work, let's try version 1220 - OK, that works. Now try 1228, still works - so the change is between 1229 and 1234 - try 1232, ah, it's broken...". No editing files, and you can still go to any other version you like with very little difficulty. I have used Mercurial quite a bit, Git a little, some Subversion, and worked on a project in Perforce for a few years. All of these are good - personally, I think I prefer Mercurial.
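If you use Git, that binary search over versions is even automated for you; a minimal sketch (the tag name is made up):
git bisect start
git bisect bad                # the currently checked-out version is broken
git bisect good v1.2          # a commit or tag known to work
# git checks out a revision roughly half-way in between; build and test it, then:
git bisect good               # or: git bisect bad
# repeat until git reports the first bad commit, then clean up:
git bisect reset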
As a side effect: most version control systems also deal with filenames and line endings in a saner way than handling this manually.
If you combine your version control system with an automated build-and-test system such as Jenkins, you can get everything very automated. Jenkins is free and runs on both Windows and Linux, and you can use it to automatically build and test your code as and when you submit it to the version control system.
It will not create a problem as long as you recompile the source code on the respective OS. If you want to run a compiled file generated on Windows (.exe or .obj) on Linux, or vice versa, then it will definitely create a problem and won't be possible. But you can move your source code (files with extension .c/.cpp) to either OS. Sometimes different header files also create problems, so take care of that as well. Best practice is to use a single OS for your entire project and avoid multiple OSes unless it is really necessary.

How to get started developing on *nix

Is there a good 'how-to' or 'getting started' guide for getting started using g++ and gdb?
Some background. Decent programmer, but so far I have done everything on Windows in Visual Studio.
I have a little experience using terminal to compile files (not much beyond a .h and 1 or 2 .cpp). But nothing beyond that.
Anyone know of a good primer on how to get started coding on Linux?
Read some good books, notably Advanced Linux Programming and Advanced Unix Programming. Read also the Advanced Bash-Scripting Guide and other documentation from the Linux Documentation Project.
Obviously, install some Linux distribution on your laptop (not in some VM, but on real disk partitions). If you have a Debian-like distribution, run aptitude build-dep gcc-4.6 gedit on it to get a lot of interesting developer packages.
Learn some command line skills. Learn to use the man command; after installing the manpages and manpages-dev packages, type man man (use the space bar to "scroll text", the q key to quit). Read also the intro(2) man page. When you forget how to use a command like cp, try cp --help.
Use a version control system like git, even for one person tiny projects.
Backup your files.
Read several relevant Wikipedia pages on Linux, kernels, syscalls, free software, X11, Posix, Unix
Try hard to use the command line. For instance, try to do everything on the command line for a week or more. Avoid using your desktop, and possibly your mouse. Learn to use emacs.
Read about builder programs like GNU make
Retrieve the source code of several free software projects (e.g. from SourceForge, Freecode or GitHub) and practice building and compiling them. Study their source code.
Basic tips to start on the command line (in a terminal); if a command is not found, you need to install the package providing it:
run emacs ; there is a tutorial menu; practice it for half an hour.
edit a helloworld.c program (with a main calling some hello function)
compile it with gcc -g -Wall helloworld.c -o helloworld; improve your code till no warnings are given. Always pass -Wall to gcc or g++ to get almost all warnings.
run it with ./helloworld
debug it with gdb ./helloworld, then
use the help command
use the b main command to add a breakpoint in main and likewise for your hello function.
run it under gdb using r
use bt to get a backtrace
use p to print some variable
use c to continue the execution of the debugged program.
write a tiny Makefile to be able to build your helloworld program using make (a sketch is given at the end of this answer)
learn how to call make (with M-x compile) and gdb (with M-x gdb) from inside Emacs
Learn more about valgrind (to detect most memory leaks). Perhaps consider using Boehm's GC in some of your applications.
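For the "tiny Makefile" step above, here is a minimal sketch, written as a shell heredoc so it can be pasted straight into a terminal (note that the recipe lines under each target must be indented with a real tab character):
cat > Makefile <<'EOF'
CC = gcc
CFLAGS = -g -Wall

helloworld: helloworld.c
	$(CC) $(CFLAGS) helloworld.c -o helloworld

clean:
	rm -f helloworld
EOF
make          # builds ./helloworld
make clean    # removes it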
You've got a lot of things to learn. I won't give you details, but as someone who's done Unix and C/C++ development for a couple of decades now, I'll try to give you some topics to start with.
My main advice is to start experimenting. Write the most trivial program you can in C or C++ (something that prints "Hello there, world!" is traditional) and figure out how to compile and run it from the command line. Then, once you've got a compiled version, start it up under the debugger and play around with breakpoints, printing expressions, etc, etc. Once you've got this simplest program up and running and you sort of understand what the debugger is telling you, add a class, function, struct, or whatever else feels like a good small step and go through the cycle again. You'll proceed much faster this way than if you start with a very large program.
Still at a very high level, here are a handful of topics you'll need to figure out at least a bit about. Note that the "learn by starting small" approach works well for any of the topics below.
Running g++: it has pretty good online documentation for the command line syntax, and although you're bound to find it intimidating at first, try to look for the simplest starting point.
Find a text editor to use. Vim and emacs are traditional (and very very powerful) but both have a relatively steep learning curve. If you have someone around to help you, that's so much the better. There are other alternatives, but as an emacs user myself, I'm afraid I'm not that familiar with them.
Get familiar with gdb. It's an incredibly powerful tool for understanding your program. Again, it has extensive online documentation that will repay close reading.
Some familiarity with standard unix commands will be useful: ls, cd, and the basics of navigating unix directories; grep for quickly searching source files.
You'll have to get used to the command line approach versus the ide approach. The former is the traditional unix developer model, where you put together the operations you want from a collection of other tools, rather than having the ide hide most of this knowledge from you.
If your project is multi-file, and especially if it's a full-semester project, you might also consider learning something about the following topics.
Make is a tool for describing how to compile and link a multi-file project so that you don't have to remember how to do it by hand each time. Make, unfortunately, has a well-deserved reputation for being tricky to use, but this is mostly true in very large projects spanning multiple directories, and there are probably good simple examples online.
I would strongly consider making use of a source code control system such as git or hg, even for a relatively small project. It's so much safer to have an archived version of what you've done so that you can go back quickly. Both git and hg are overkill for a small one-shot project, but they are worth learning on their own. Conventional wisdom as I understand it today is that they're very similar in philosophy and core functionality, but that hg is a bit more consistent at the command line level, and therefore easier to start with.
I suspect this is rather intimidating, especially if you've got effectively no exposure to a unix command environment before. I re-emphasize my first piece of advice above: learn by starting simple and experimenting. This minimizes the amount of new stuff you're having to wrap your head around at any given point in time.

C++ hello world on Linux

Under Windows I installed MinGW and Eclipse, and created a new C++ project with the inspiring name of foo, using the MinGW GCC toolchain, and this compiles, runs and even debugs. Wonderful.
Still under Windows, I installed Cygwin, an epic undertaking that stressed my internet connection. Eventually I specified Cygwin GCC toolchain and a projectname of bar. This compiles and runs but can't do step through debugging (claims it can't find source).
Under Linux, mint13 specifically, I installed the all-singing all-dancing C++ edition of Eclipse with all the trimmings and created a new C++ project, with the even more inspiring name of baz and the Linux GCC toolchain selected. Eclipse complains that it cannot find iostream.
I am rather confused by this. If I launch a terminal window and run g++ it is found, so clearly I have at least some of the GNU C++ stuff. I don't know what is missing. Linux is a new world for me. Can anyone offer guidance?
For the record, the generated code is in a file called foo.cpp (or bar.cpp according to project name) and looks like this:
#include <iostream>
using namespace std;

int main() {
    cout << "Hello World" << endl; // prints Hello World
    return 0;
}
#bmargulies - I know your comment was tongue in cheek but I wouldn't use emacs in a pink fit. I'd set up SAMBA and use Textpad on a Windows workstation because I have enough to learn without unnecessarily learning to use a new text editor. The reason I chose Eclipse was a vain hope that it might provide a working baseline with an integrated debugger from which to explore the brave new world of C++ on Linux. Combined with MinGW it did provide that on Windows.
I know the big problem here is not the tools, it is my ignorance and a set of expectations from a different world. This is compounded by a lack of experience with C++ - my sole experience with C++ was using TurboC fifteen years ago.
A source of great confusion is the mechanism used to resolve library references.
A lot of projects seem to use make, which as far as I can tell is a sort of script file for compiling and linking a project or set of projects. Make seems to come in a variety of flavours and there also seem to be alternatives that use makefile as well as alternatives that don't.
[expletive] what a mess.
#Basile - I am not wedded to the use of Eclipse, and I am well aware of the benefits of scripts over point and click use of IDE configuration (not least among which is that you can source-control the build process). I thank you for your reading list. Perhaps this is a silly (or premature) question but I have to ask: without an IDE like Eclipse that integrates an editor with a build tool, is it possible to do step-through debugging?
#bmargulies - I agree with you that there's probably something wrong with the toolchain definition but I lack the background and experience to conduct a meaningful investigation of that. As mentioned above, I had varying levels of success with different toolchains under Windows, so it is reasonable to conclude that the toolchain is a significant factor in the problem. Alas, I can't choose a MinGW toolchain under Linux.
Following Norm's advice I was able to compile foo.cpp from a command line. The hello world program executes with the desired behaviour, but I still don't know how g++ knew how to resolve iostream when the fancy IDE tool didn't.
Added a few more lines of code to foo and compiled it, to try out gdb. It works! Whoever would have imagined you could do step-through debugging in a terminal window! It's a bit clumsy though.
While Basile is clearly correct that a fancy IDE is not necessary, that's a bit like saying that I don't need my motorcycle because I can walk. I'll have a look at the other IDEs mentioned, but I suspect they will all make use of the same toolchains and therefore all be similarly afflicted by whatever I have misconfigured.
Basile, forgive me for moving the goalposts. My original goal was indeed "compile and run hello.cpp" but gdb was inevitably the next step. It works, and if this were the early 80s using a teletype at uni I would probably be pretty happy right now. But it's not the early eighties and I've spent the last decade with syntax-colouring, autocompletion, variable sniffing edit-and-continue debugging so (ungrateful sod that I am) now I want, well, everything!
I've used the Eclipse C/C++ version before and had a lot of the same problems. For me, Eclipse was very difficult to work with. I would recommend using the command line to compile C/C++ programs to start with. It's easier, and it's important to understand how executables are created, in my opinion.
g++ -Wall -g Hello.cpp -o Hello
will produce an executable Hello. -Wall is an option that gives you more warnings when you are compiling your programs. Some of the things it warns about will crash your program if you don't fix them, so it's nice to see them up front. -g gives you the debugging symbols so that gdb will be able to step through the program step by step.
When you do get into gdb by using gdb Hello you can check out this gdb cheatsheet.
Once you start writing programs with more than one source file, you're going to need to understand the two main steps in compilation. The first step turns each individual source file into an object file. The next step links all the object files together to make an executable. This link might explain compiling and linking; obviously Wikipedia is also a good source for that info.
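A short sketch of those two steps, with a hypothetical second source file:
# step 1: compile each source file into an object file (-c = compile only)
g++ -Wall -g -c Hello.cpp -o Hello.o
g++ -Wall -g -c Greeter.cpp -o Greeter.o
# step 2: link the object files together into a single executable
g++ Hello.o Greeter.o -o Hello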
You don't need a fancy IDE like Eclipse. The usual way of developing under Linux is to use several tools.
Use an editor like emacs or gedit to edit your helloworld.cpp file. Type emacs or gedit in your terminal to start the editor (possibly followed by helloworld.cpp i.e. the name of the edited file[s]).
Then, compile with the following command
g++ -Wall -g helloworld.cpp -o helloworld
that you type in your terminal. Improve your code till no warning is given. You might add -O after -g above if you want GCC to optimize (and -O2 or -O3 to optimize even more). You should ask GCC to optimize if you want to benchmark or release your binary executable program. Notice that g++ knows how to link the standard C++ library (libstdc++.so), and the headers are located in a standard location known to g++ (you could add a -v argument to g++ to make it show what is happening). You'll need more arguments if you want to use additional libraries.
The order of arguments to g++ is important, particularly for -I options (include directories) and -l options (libraries).
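For example (a sketch with a made-up library name): -I adds a header search directory, -L adds a library search directory, and -lfoo must come after the sources or objects that reference it:
# headers under ./include, libfoo under ./lib; note that -lfoo comes last
g++ -Wall -g -I./include helloworld.cpp -L./lib -lfoo -o helloworld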
If you want to debug your program (avoid asking optimizations to GCC), type
gdb helloworld
which will run the debugger. Learn more about gdb (of course you can step through your program with gdb; you'll use the next, step, break and backtrace commands of gdb at first, and others - e.g. watch is very useful sometimes).
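If you like, the first few gdb commands can also be scripted from the shell with -ex (a small sketch, assuming the helloworld binary built above):
# -q suppresses the banner; each -ex runs one gdb command at startup
gdb -q -ex 'break main' -ex run ./helloworld
# once stopped in main, type: next, step, backtrace, print <variable>, continue, quit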
If you want to run your program without a debugger, just type
./helloworld
Later, you'll want to develop a program in several compilation units. Learn to use a builder like make for that. There are other builder programs, like omake and many others.
Read more about GCC (providing g++), GNU Make, GDB (the GNU debugger), Emacs, and Git version control.
I have configured my emacs to run make inside it by pressing the F12 key. You can compile under emacs if you want to.
For graphical user interface applications coded in C++, learn to use Qt.
PS. Linux is much more command-line oriented than other systems. Believe us, this has incredible strengths. But it is a different way of working than on other systems, such as those sold by Microsoft.
PS. If you are fond of IDEs, you could consider geany, anjuta, kate. However, little of the free software coded in C or C++ in a typical Linux distribution (Debian, Ubuntu, Fedora, ...) is built using these (or with Eclipse, which on Linux is often associated with Java development). IMHO, this is significant.

Why use build tools like Autotools when we can just write our own makefiles?

Recently, I switched my development environment from Windows to Linux. So far, I have only used Visual Studio for C++ development, so many concepts, like make and Autotools, are new to me. I have read the GNU make documentation and got a rough idea of it. But I am kind of confused about Autotools.
As far as I know, makefiles are used to make the build process easier.
Why do we need tools like Autotools just for creating makefiles? Since everyone knows how to create a makefile, I am not getting the real use of Autotools.
What is the standard? Do we need to use tools like this or would just handwritten makefiles do?
You are talking about two separate but intertwined things here:
Autotools
GNU coding standards
Within Autotools, you have several projects:
Autoconf
Automake
Libtool
Let's look at each one individually.
Autoconf
Autoconf easily scans an existing tree to find its dependencies and creates a configure script that will run under almost any kind of shell. The configure script allows the user to control the build behavior (i.e. --with-foo, --without-foo, --prefix, --sysconfdir, etc.) as well as doing checks to ensure that the system can compile the program.
Configure generates a config.h file (from a template) which programs can include to work around portability issues. For example, if HAVE_LIBPTHREAD is not defined, use forks instead.
I personally use Autoconf on many projects. It usually takes people some time to get used to m4. However, it does save time.
You can have makefiles inherit some of the values that configure finds without using automake.
Automake
You provide a short template that describes what programs will be built and what objects need to be linked to build them, and Automake creates Makefiles that adhere to the GNU coding standards. This includes dependency handling and all of the required GNU targets.
Some people find this easier. I prefer to write my own makefiles.
Libtool
Libtool is a very cool tool for simplifying the building and installation of shared libraries on any Unix-like system. Sometimes I use it; other times (especially when just building static link objects) I do it by hand.
There are other options too, see StackOverflow question Alternatives to Autoconf and Autotools?.
Build automation & GNU coding standards
In short, you really should use some kind of portable build configuration system if you release your code to the masses. What you use is up to you. GNU software is known to build and run on almost anything. However, you might not need to adhere to such (and sometimes extremely pedantic) standards.
If anything, I'd recommend giving Autoconf a try if you're writing software for POSIX systems. Just because Autotools produce part of a build environment that's compatible with GNU standards doesn't mean you have to follow those standards (many don't!) :) There are plenty of other options, too.
Edit
Don't fear m4 :) There is always the Autoconf macro archive. Plenty of examples, or drop in checks. Write your own or use what's tested. Autoconf is far too often confused with Automake. They are two separate things.
First of all, the Autotools are not an opaque build system but a loosely coupled tool-chain, as tinkertim already pointed out. Let me just add some thoughts on Autoconf and Automake:
Autoconf is the configuration system that creates the configure script based on feature checks that are supposed to work on all kinds of platforms. A lot of system knowledge has gone into its m4 macro database during the 15 years of its existence. On the one hand, I think the latter is the main reason Autotools have not been replaced by something else yet. On the other hand, Autoconf used to be far more important when the target platforms were more heterogeneous and Linux, AIX, HP-UX, SunOS, ..., and a large variety of different processor architectures had to be supported. I don't really see its point if you only want to support recent Linux distributions and Intel-compatible processors.
Automake is an abstraction layer for GNU Make and acts as a Makefile generator from simpler templates. A number of projects eventually got rid of the Automake abstraction and reverted to writing Makefiles manually because you lose control over your Makefiles and you might not need all the canned build targets that obfuscate your Makefile.
Now to the alternatives (and I strongly suggest an alternative to Autotools based on your requirements):
CMake's most notable achievement is replacing AutoTools in KDE. It's probably the closest you can get if you want to have Autoconf-like functionality without m4 idiosyncrasies. It brings Windows support to the table and has proven to be applicable in large projects. My beef with CMake is that it is still a Makefile-generator (at least on Linux) with all its immanent problems (e.g. Makefile debugging, timestamp signatures, implicit dependency order).
SCons is a Make replacement written in Python. It uses Python scripts as build control files allowing very sophisticated techniques. Unfortunately, its configuration system is not on par with Autoconf. SCons is often used for in-house development when adaptation to specific requirements is more important than following conventions.
If you really want to stick with Autotools, I strongly suggest to read Recursive Make Considered Harmful (archived) and write your own GNU Makefile configured through Autoconf.
The answers already provided here are good, but I'd strongly recommend not taking the advice to write your own makefile if you have anything resembling a standard C/C++ project. We need the autotools instead of handwritten makefiles because a standard-compliant makefile generated by automake offers a lot of useful targets under well-known names, and providing all these targets by hand is tedious and error-prone.
Firstly, writing a Makefile by hand seems a great idea at first, but most people will not bother to write more than the rules for all, install and maybe clean. automake generates dist, distcheck, clean, distclean, uninstall and all these little helpers. These additional targets are a great boon to the sysadmin that will eventually install your software.
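For an automake-generated Makefile, the sysadmin's side of that looks roughly like this (a sketch of the conventional targets, not specific to any particular package):
./configure --prefix=/usr/local   # adapt the build to this machine
make                              # build everything
make check                        # run the test suite, if the package has one
sudo make install                 # install under the chosen prefix
sudo make uninstall               # remove exactly what was installed
make dist                         # roll a release tarball
make distcheck                    # build and test that tarball in isolation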
Secondly, providing all these targets in a portable and flexible way is quite error-prone. I've done a lot of cross-compilation to Windows targets recently, and the autotools performed just great, in contrast to most hand-written makefiles, which were mostly a pain in the ass to compile. Mind you, it is possible to create a good Makefile by hand. But don't overestimate yourself; it takes a lot of experience and knowledge about a bunch of different systems, and automake creates great Makefiles for you right out of the box.
Edit: And don't be tempted to use the "alternatives". CMake and friends are a horror to the deployer because they aren't interface-compatible to configure and friends. Every half-way competent sysadmin or developer can do great things like cross-compilation or simple things like setting a prefix out of his head or with a simple --help with a configure script. But you are damned to spend an hour or three when you have to do such things with BJam. Don't get me wrong, BJam is probably a great system under the hood, but it's a pain in the ass to use because there are almost no projects using it and very little and incomplete documentation. autoconf and automake have a huge lead here in terms of established knowledge.
So, even though I'm a bit late with this advice for this question: Do yourself a favor and use the autotools and automake. The syntax might be a bit strange, but they do a way better job than 99% of the developers do on their own.
For small projects or even for large projects that only run on one platform, handwritten makefiles are the way to go.
Where autotools really shine is when you are compiling for different platforms that require different options. Autotools is frequently the brains behind the typical
./configure
make
make install
compilation and install steps for Linux libraries and applications.
That said, I find autotools to be a pain and I've been looking for a better system. Lately I've been using bjam, but that also has its drawbacks. Good luck finding what works for you.
Autotools are needed because Makefiles are not guaranteed to work the same across different platforms. If you handwrite a Makefile, and it works on your machine, there is a good chance that it won't on mine.
Do you know what unix your users will be using? Or even which distribution of Linux? Do you know where they want software installed? Do you know what tools they have, what architecture they want to compile on, how many CPUs they have, how much RAM and disk might be available to them?
The *nix world is a cross-platform landscape, and your build and install tools need to deal with that.
Mind you, the auto* tools date from an earlier epoch, and there are many valid complaints about them, but the several projects to replace them with more modern alternatives are having trouble developing a lot of momentum.
Lots of things are like that in the *nix world.
Autotools is a disaster.
The generated ./configure script checks for features that have not been present on any Unix system for last 20 years or so. To do this, it spends a huge amount of time.
Running ./configure takes ages. Although modern server CPUs can have dozens of cores, and there may be several such CPUs per server, ./configure is single-threaded. We still have enough years of Moore's law left that the number of CPU cores will go way up as a function of time. So, the time ./configure takes will stay approximately constant, whereas parallel build times reduce by a factor of 2 every 2 years thanks to Moore's law. Or actually, I would say the time ./configure takes might even increase due to increasing software complexity taking advantage of improved hardware.
The mere act of adding just one file to your project requires you to run automake, autoconf and ./configure which will take ages, and then you'll probably find that since some important files have changed, everything will be recompiled. So add just one file, and make -j${CPUCOUNT} recompiles everything.
And about make -j${CPUCOUNT}: the generated build system is a recursive one. Recursive make has long been considered harmful.
Then when you install the software that has been compiled, you'll find that it doesn't work. (Want proof? Clone protobuf repository from Github, check out commit 9f80df026933901883da1d556b38292e14836612, install it to a Debian or Ubuntu system, and hey presto: protoc: error while loading shared libraries: libprotoc.so.15: cannot open shared object file: No such file or directory -- since it's in /usr/local/lib and not /usr/lib; workaround is to do export LD_RUN_PATH=/usr/local/lib before typing make).
The theory is that by using autotools, you could create a software package that can be compiled on Linux, FreeBSD, NetBSD, OpenBSD, DragonflyBSD and other operating systems. The fact? Every non-Linux system to build packages from source has numerous patch files in their repository to work around autotools bugs. Just take a look at e.g. FreeBSD /usr/ports: it's full of patches. So, it would have been as easy to create a small patch for a non-autotools build system on a per project basis than to create a small patch for an autotools build system on a per project basis. Or perhaps even easier, as standard make is much easier to use than autotools.
The fact is, if you create your own build system based on standard make (and make it inclusive and not recursive, following the recommendations of the "Recursive make considered harmful" paper), things work in a much better manner. Also, your build time goes down by an order of magnitude, perhaps even two orders of magnitude if your project is a very small one of 10-100 C language files and you have dozens of cores per CPU and multiple CPUs. It's also much easier to interface custom automatic code generation tools with a custom build system based on standard make instead of dealing with the m4 mess of autotools. With standard make, you can at least type a shell command into the Makefile.
So, to answer your question: why use autotools? Answer: there is no reason to do so. Autotools has been obsolete since when commercial Unix has become obsolete. And the advent of multi-core CPUs has made autotools even more obsolete. Why programmers haven't realized that yet, is a mystery. I'll happily use standard make on my build systems, thank you. Yes, it takes some amount of work to generate the dependency files for C language header inclusion, but the amount of work is saved by not having to fight with autotools.
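For that header-dependency chore, GCC and Clang can generate the dependency files as a side effect of compilation; a minimal sketch (file names are hypothetical):
# -MMD writes foo.d, a makefile fragment listing the headers foo.c includes;
# -MP adds phony targets so that deleting a header does not break the build
gcc -MMD -MP -c foo.c -o foo.o
cat foo.d
# a hand-written, non-recursive Makefile can then pull the fragments in with:
#   -include $(wildcard *.d)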
I don't feel I'm enough of an expert to answer this, but I'll still give you a bit of an analogy from my experience.
To some extent it is similar to why we write embedded code in C (a high-level language) rather than in assembly language.
Both serve the same purpose, but the latter is lengthier, more tedious, more time-consuming and more error-prone (unless you know the ISA of the processor very well).
The same is the case with the Automake tool versus writing your own makefile.
Writing Makefile.am and configure.ac is pretty simple compared to writing an individual project Makefile.
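To illustrate how little that is, here is a minimal sketch for a single-binary project (the project name and file names are made up; written as shell so it can be pasted into an empty directory that already contains hello.c):
cat > configure.ac <<'EOF'
AC_INIT([hello], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
EOF

cat > Makefile.am <<'EOF'
bin_PROGRAMS = hello
hello_SOURCES = hello.c
EOF

autoreconf --install    # generates configure, Makefile.in and helper files
./configure && make     # the usual dance from here on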

Can autotools create multi-platform makefiles

I have a plugin project I've been developing for a few years where the plugin works with numerous combinations of [primary application version, 3rd party library version, 32-bit vs. 64-bit]. Is there a (clean) way to use autotools to create a single makefile that builds all versions of the plugin?
As far as I can tell from skimming through the autotools documentation, the closest approximation to what I'd like is to have N independent copies of the project, each with its own makefile. This seems a little suboptimal for testing and development as (a) I'd need to continually propagate code changes across all the different copies and (b) there is a lot of wasted space in duplicating the project so many times. Is there a better way?
EDIT:
I've been rolling my own solution for a while where I have a fancy makefile and some perl scripts to hunt down various 3rd party library versions, etc. As such, I'm open to other non-autotools solutions. For other build tools, I'd want them to be very easy for end users to install. The tools also need to be smart enough to hunt down various 3rd party libraries and headers without a huge amount of trouble. I'm mostly looking for a linux solution, but one that also works for Windows and/or the Mac would be a bonus.
If your question is:
Can I use the autotools on some machine A to create a single universal makefile that will work on all other machines?
then the answer is "No". The autotools do not even make a pretense at trying to do that. They are designed to contain portable code that will determine how to create a workable makefile on the target machine.
If your question is:
Can I use the autotools to configure software that needs to run on different machines, with different versions of the primary software which my plugin works with, plus various 3rd party libraries, not to mention 32-bit vs 64-bit issues?
then the answer is "Yes". The autotools are designed to be able to do that. Further, they work on Unix, Linux, MacOS X, BSD.
I have a program, SQLCMD (which pre-dates the Microsoft program of the same name by a decade and more), which works with IBM Informix databases. It detects whether the client software (called IBM Informix ESQL/C, part of the IBM Informix ClientSDK or CSDK) is installed, and whether it is 32-bit or 64-bit. It also detects which version of the software is installed, and adapts its functionality to what is available in the supporting product. It supports versions that have been released over a period of about 17 years. It is autoconfigured -- I had to write some autoconf macros for the Informix functionality, and for a couple of other gizmos (high resolution timing, presence of /dev/stdin etc). But it is doable.
On the other hand, I don't try and release a single makefile that fits all customer machines and environments; there are just too many possibilities for that to be sensible. But autotools takes care of the details for me (and my users). All they do is:
./configure
That's easier than working out how to edit the makefile. (Oh, for the first 10 years, the program was configured by hand. It was hard for people to do, even though I had pretty good defaults set up. That was why I moved to auto-configuration: it makes it much easier for people to install.)
Mr Fooz commented:
I want something in between. Customers will use multiple versions and bitnesses of the same base application on the same machine in my case. I'm not worried about cross-compilation such as building Windows binaries on Linux.
Do you need a separate build of your plugin for the 32-bit and 64-bit versions? (I'd assume yes - but you could surprise me.) So you need to provide a mechanism for the user to say
./configure --use-tppkg=/opt/tp/pkg32-1.0.3
(where tppkg is a code for your third-party package, and the location is specifiable by the user.) However, keep in mind usability: the fewer such options the user has to provide, the better; against that, do not hard code things that should be optional, such as install locations. By all means look in default locations - that's good. And default to the bittiness of the stuff you find. Maybe if you find both 32-bit and 64-bit versions, then you should build both -- that would require careful construction, though. You can always echo "Checking for TP-Package ..." and indicate what you found and where you found it. Then the installer can change the options. Make sure you document in './configure --help' what the options are; this is standard autotools practice.
Do not do anything interactive though; the configure script should run, reporting what it does. The Perl Configure script (note the capital letter - it is a wholly separate automatic configuration system) is one of the few intensively interactive configuration systems left (and that is probably mainly because of its heritage; if starting anew, it would most likely be non-interactive). Such systems are more of a nuisance to configure than the non-interactive ones.
Cross-compilation is tough. I've never needed to do it, thank goodness.
Mr Fooz also commented:
Thanks for the extra comments. I'm looking for something like:
./configure --use-tppkg=/opt/tp/pkg32-1.0.3 --use-tppkg=/opt/tp/pkg64-1.1.2
where it would create both the 32-bit and 64-bit targets in one makefile for the current platform.
Well, I'm sure it could be done; I'm not so sure that it is worth doing by comparison with two separate configuration runs with a complete rebuild in between. You'd probably want to use:
./configure --use-tppkg32=/opt/tp/pkg32-1.0.3 --use-tppkg64=/opt/tp/pkg64-1.1.2
This indicates the two separate directories. You'd have to decide how you're going to do the build, but presumably you'd have two sub-directories, such as 'obj-32' and 'obj-64' for storing the separate sets of object files. You'd also arrange your makefile along the lines of:
FLAGS_32 = ...32-bit compiler options...
FLAGS_64 = ...64-bit compiler options...
TPPKG32DIR = #TPPKG32DIR#
TPPKG64DIR = #TPPKG64DIR#
OBJ32DIR = obj-32
OBJ64DIR = obj-64
BUILD_32 = #BUILD_32#
BUILD_64 = #BUILD_64#
TPPKGDIR =
OBJDIR =
FLAGS =
all: ${BUILD_32} ${BUILD_64}
build_32:
	${MAKE} TPPKGDIR=${TPPKG32DIR} OBJDIR=${OBJ32DIR} FLAGS=${FLAGS_32} build
build_64:
	${MAKE} TPPKGDIR=${TPPKG64DIR} OBJDIR=${OBJ64DIR} FLAGS=${FLAGS_64} build
build: ${OBJDIR}/plugin.so
This assumes that the plugin would be a shared object. The idea here is that the autotool would detect the 32-bit or 64-bit installs for the Third Party Package, and then make substitutions. The BUILD_32 macro would be set to build_32 if the 32-bit package was required and left empty otherwise; the BUILD_64 macro would be handled similarly.
When the user runs 'make all', it will build the build_32 target first and the build_64 target next. To build the build_32 target, it will re-run make and configure the flags for a 32-bit build. Similarly, to build the build_64 target, it will re-run make and configure the flags for a 64-bit build. It is important that all the flags affected by 32-bit vs 64-bit builds are set on the recursive invocation of make, and that the rules for building objects and libraries are written carefully - for example, the rule for compiling source to object must be careful to place the object file in the correct object directory - using GCC, for example, you would specify (in a .c.o rule):
${CC} ${CFLAGS} -o ${OBJDIR}/$*.o -c $*.c
The macro CFLAGS would include the ${FLAGS} value which deals with the bits (for example, FLAGS_32 = -m32 and FLAGS_64 = -m64), so when building the 32-bit version, FLAGS = -m32 would be included in the CFLAGS macro.
The residual issue in the autotools is working out how to determine the 32-bit and 64-bit flags. If the worst comes to the worst, you'll have to write macros for that yourself. However, I'd expect (without having researched it) that you can do it using standard facilities from the autotools suite.
Unless you create yourself a carefully (even ruthlessly) symmetric makefile, it won't work reliably.
As far as I know, you can't do that. However, are you stuck with autotools? Are neither CMake nor SCons an option?
We tried it and it doesn't work! So now we use SCons.
Some articles on this topic: 1 and 2
Edit:
Some small example why I love SCons:
env.ParseConfig('pkg-config --cflags --libs glib-2.0')
With this line of code you add GLib to the compile environment (env). And don't forget the User Guide, which is just great for learning SCons (you really don't have to know Python!). For the end user you could try SCons with PyInstaller or something like that.
And in comparison to make, you use Python, so a complete programming language! With this in mind you can do just about everything (more or less).
Have you ever considered using a single project with multiple build directories?
If your automake project is implemented in a proper way (i.e. NOT like gcc), the following is possible:
mkdir build1 build2 build3
cd build1
../configure $(YOUR_OPTIONS)
cd ../build2
../configure $(YOUR_OPTIONS2)
[...]
You are able to pass different configuration parameters like include directories and compilers (e.g. cross compilers).
You can then even build all of them from a single command line by running
make -C build1 && make -C build2 && make -C build3
