pkgsrc, ports, portage, macports etc - linux

I wonder if we can reduce, even just a little, the effort around packaging
and software installation in Linux/Unix OS environments.
It is my stance that there is too much redundant effort around $subject.
I have been pondering ways to connect the build systems of $subject
with some next "stage build tools", like easybuild (1) & openbuildservice (2);
read below for more details.
To be more specific, last week I was able to take pkgsrc's repository,
process the Makefiles via a tiny "pkg2eb" script and produce *.eb files
for easybuild, then feed many parallel gcc compilations with them.
That "blindly-driven process" ended up with >600 successful builds,
i.e. packages that simply needed 'wget/configure/make/make install'.
Not bad for a first run; I just wonder if it can be done any better.
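For reference, the rough shape of that pipeline is sketched below. The pkg2eb invocation is purely illustrative (it is just my tiny throwaway converter, so its interface here is made up), as are the paths; only the final eb call is EasyBuild's actual command-line tool.

mkdir -p easyconfigs
for mk in /usr/pkgsrc/*/*/Makefile; do
    # derive a name for the generated easyconfig from DISTNAME in the pkgsrc Makefile
    name=$(awk -F'=' '/^DISTNAME/ { gsub(/[[:space:]]/, "", $2); print $2 }' "$mk")
    [ -n "$name" ] || continue
    pkg2eb "$mk" > "easyconfigs/${name}.eb"     # hypothetical conversion step
done
# feed the generated *.eb files to EasyBuild, several builds in parallel
ls easyconfigs/*.eb | xargs -n1 -P"$(nproc)" eb --robot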
So:
According to your experience, which OS has the cleanest/leanest
pkgsrc/port structure to be sourced & fed to other external tools?
This is NOT the same as which has the most available packages!
Have you heard of any similar efforts trying to massively produce
packages from eg. a common source list in a structured manner?
(I mean, in a transferable way across different build systems)

So, much relevant information is visible here:
http://www.mancoosi.org/edos/packages/ # lengthy description of various packaging formats
This one shows the higher-level picture:
http://www.mancoosi.org/edos/suggestions/ (esp. 2.1.1 Expressivity shortcomings)
Anyway, to answer the original question, the best bets as of now are:
RPM's .spec files
DEB control files
pkgsrc; possible but some hackery is still needed
portage; quite clean, distinguishes between DEPEND and RDEPEND
macports; easy to parse; very detailed dependency information
ports; like pkgsrc; multiple dependency types defined

Related

How to proceed with Linux source code customization?

I am a non-CS/IT student, but I know C, Java, data structures and algorithms. These days I am focusing on operating systems and have picked up some of the concepts, but I want some practical knowledge; merely writing algorithm code in Java/C is not much fun. I have read many articles which mention that you can customize the source code of the Linux kernel.
I want to start customizing the kernel as I learn OS concepts, and apply what I learn. That would achieve two goals: 1. I will gain a practical understanding of the operating system; 2. I will have a project.
Problems I face:
1. Where do I get the source code, and which source should I download (plus the documentation, if possible)?
I went to https://www.kernel.org/ but there are so many versions; which one is best?
2. How will I customize the code once I have it?
Please give me detailed suggestions on how I should start this journey of customizing the Linux source code.
Moreover, I am using Windows 8.
I recommend first reading several books on OSes and on programming. You need a broad CS culture (if possible, get a CS degree).
I am a non-CS/IT student,
You had better become one, or else spend years of work learning all the things a CS graduate has learnt.
First, you need to be very familiar with Linux programming on the user side (application programs). So read at least Advanced Linux Programming and study the source code of several programs, including shells (and some kind of server). Also read syscalls(2) carefully. Explore the state of your kernel (e.g. through proc(5)...). Look into https://kernelnewbies.org/
I also recommend learning several programming languages. You should in particular read SICP, an excellent introduction to programming. Read also some book like Programming Language Pragmatics. Read something about continuations and continuation-passing style. Read the Dragon book. Read some Introduction to Algorithms. Read something about computer architecture and instruction set architectures.
Merely writing algorithm code in Java/C is not much fun.
But the kernel is also written in C (mostly) and is full of algorithmic code. What makes you think you'll have more fun with it?
I want to start customizing the kernel as I learn OS concepts, and apply what I learn.
But why? Why don't you also consider studying and contributing to some user-level code?
I would recommend first reading a good book on OSes in general, notably Operating Systems: Three Easy Pieces. Look also at OSDev.
Lastly, the general advice about kernel programming is: don't. A common mistake is to try adding code inside the kernel to solve some issue that can and should be solved in user-land.
How will I customize the code once I have it?
You probably should not customize the kernel, but if you do, you'll use familiar tools (a good source-code editor like Emacs or Vim, a compiler and linker on the command line, a build-automation tool like make). Patching the kernel is similar to patching other free software, but testing your kernel is harder (because you'll often have to reboot).
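In case you do end up building a patched kernel, the usual build-and-boot cycle inside the source tree is roughly the sketch below (these are the standard kernel make targets; installation details vary a bit per distribution):

make menuconfig                # or: make defconfig, or reuse your distribution's .config
make -j"$(nproc)"              # build the kernel image and modules in parallel
sudo make modules_install
sudo make install              # installs the image; most distributions also update the bootloader
# reboot into the new kernel to test it, keeping the old entry in the boot menu as a fallback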
You'll also find several books explaining the Linux kernel.
If you still want to customize the kernel, you should first try to code some kernel module.
Moreover, I am using Windows 8.
This is a huge mistake. You first need to be an advanced Linux user. So wipe Windows from your computer, install some Linux distribution (I recommend Debian), and use only Linux, no more Windows. Become familiar with the command line.
I seriously recommend avoiding working on the kernel as your first project.
I strongly recommend looking at some existing user-land free software project first (there are thousands of them, notably on GitHub; e.g. choose some package in your distribution, study its source code, work on it, and propose the patch to the community). Learn to build a lot of things from source code.
A wise man once said you "must act your way into right thinking, as you cannot think your way into right acting". In your case, you'll need to act as an experienced programmer would act, which means before we write any code, we need to answer some questions.
What do we want to change?
Why do we want to change it?
What are the repercussions of this change (i.e. what other functions, out of all the tens of millions of lines of source code, call this function)?
After we've made the change, how are we going to compile it? In other words, there is a defined process for this. What is it?
After we compile our new kernel/module, how are we going to test it?
A good start, in addition to the answer that was just posted, would be to run LFS (Linux from Scratch). Get a successful install of that and use it as a starting point.
Now, since we're experienced programmers, we know that tinkering with a 10M+ line codebase is a recipe for trouble; we need a bit more direction than that. Here's a list of bugs that need to be fixed: https://bugzilla.kernel.org/buglist.cgi?chfield=%5BBug%20creation%5D&chfieldfrom=7d
I, for one, would be glad to see the one called "AUFS hangs on fanotify" go away, as I use AUFS with Docker on a daily basis.
If, down the line, you decide you'd rather hack on something besides the kernel, there are plenty of other options.
From your question it follows that you've already gained some concepts of an operating system. However, if you feel that it's still insufficient, it is OK to spend more time on learning. An operating system (mainly, a kernel) has certain tasks to perform, like memory management (and memory protection), multiprogramming, hardware abstraction and so on. None of these topics may be neglected; they are all equally important. So, if you have some time, you may refer to such useful books as "Modern Operating Systems" by Andrew Tanenbaum. Specialized books like that will shed much light on all important aspects of a modern OS. Suffice it to say, the Linux kernel itself was started by Linus Torvalds under the strong inspiration of MINIX, an educational project by A. Tanenbaum.
A project as cumbersome as an OS kernel (BSD, Linux, etc.) contains lots of code. Many people collaborate to write or enhance various parts of the kernel, so there is a common and inevitable need to use a version control system. Therefore, if you intend to submit your code to the kernel in the future, you also have to get hands-on experience with version control. In particular, Linux relies on Git SCM (software configuration management, a synonym for version control).
So, once you have some knowledge of Git, you can install it on your computer and download Linux source code: git clone https://github.com/torvalds/linux.git
Determine your goals for modifying the Linux kernel. What do you want to achieve? Perhaps you have a network card whose Linux driver you suspect is missing some features? Take a look at other vendors' drivers and attempt to extend the driver of interest to include those features. Of course, this will require some knowledge of the hardware, and if the features are hardware-dependent, you are unlikely to succeed without special knowledge. But in general, making an enhancement assumes that you are an experienced Linux user yourself; otherwise, how will you know that some fix or enhancement is required? So I can't help but agree with the proposal to set Windows 8 aside for a while and start using some Linux distribution (e.g. Debian).
If you succeed in determining your goals (e.g. if you find a paper describing some desired changes to the Linux kernel, or if you decide to enhance some device drivers / write your own), you will be able to try it hands-on. However, you might still need some helpful books, in this case Linux-specific ones. Also, writing C code for the kernel itself requires one important detail: you will need to comply with the so-called coding style, otherwise the Linux kernel maintainers will not be able to accept your patches.
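As an illustration only, preparing such a patch typically looks like the following sketch; the branch name is made up, but checkpatch.pl and get_maintainer.pl are real helper scripts shipped in the kernel tree:

git checkout -b my-driver-tweak
# ... edit, rebuild and test your change ...
git commit -as                              # -s adds the required Signed-off-by line
git format-patch -1                         # writes 0001-<subject>.patch
./scripts/checkpatch.pl 0001-*.patch        # flags coding-style violations
./scripts/get_maintainer.pl 0001-*.patch    # lists the maintainers and lists to send it to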
So, I have made an attempt to outline some tips based on your current question. Of course, the job of kernel development has far broader prerequisites, but these are the obvious ones.

One multimode Haskell executable vs separate executables sharing a library

I'm working on a project now in which I configure the cabal file to build several executables which share the library built by the same cabal file. The cabal project is structured much like this one, with one library section followed by several executable sections that include this library in their build-depends sections.
I'm using this approach so I can make common functions available to any number of executables, and create more executables easily as needed.
Yet in his Monad Reader article on Hoogle (p. 33), Neil Mitchell advocates bundling up Haskell projects into a single executable with multiple modes (e.g. by using his CmdArgs library). So there might be one mode to start a web server, another mode to query the database from the command line, etc. Quote:
Provide one executable
Version 3 had four executable programs – one to generate ranking
information, one to do command line searching, one to do web
searching, and one to do regression testing. Version 4 has one
executable, which does all the above and more, controlled by flags.
There are many advantages to providing only one end program – it
reduces the chance of code breaking without noticing it, it makes the
total file size smaller by not duplicating the Haskell run-time system,
it decreases the number of commands users need to learn. The move to
one multipurpose executable seems to be a common theme, with tools
such as darcs and hpc both being based on one command with multiple
modes.
Is a single multimode executable really the better way to go? Are there countervailing reasons to stick with separate executables sharing the same library?
Personally, I'm more of a fan of the Unix philosophy "write programs that do one thing and do it well". However there are reasons for doing either way, so the only reasonable answer here is: it depends.
One example where it makes sense to bundle everything into the same executable is when you're targeting a platform that is very limited in resources (e.g., an embedded system). This is the approach taken by BusyBox.
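As a small illustration of that model (assuming a busybox binary is installed), one binary provides many tools and dispatches on the name it is invoked as:

busybox ls /tmp                      # call an applet explicitly
ln -s "$(command -v busybox)" ./ls
./ls /tmp                            # the same binary, selected via argv[0]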
On the other hand if you divide into multiple executables, you give your clients the option of just using those that matter to them. With a single executable, even if your client really just wanted one functionality, he'll have no way to get rid of the extra baggage.
I'm sure there are a lot more reasons for going either way, but this just goes to show that there's no definitive answer. It depends on the use case.

Contributing to a Linux distribution

I'm interested in contributing to a Linux distro, but looking at the various distros' developer communities, I'm having a bit of trouble figuring out which one I'd most like to join.
What languages I know: C, C++, Lua, Python, and fairly familiar with Perl (though I wouldn't say I "know" it). In particular, I have very little experience with x86 assembly besides hacking stuff together for performance tweaks, though that will be partially rectified soon.
What I'm looking for: a community that provides plenty of opportunities for developers to work on various aspects of the distribution. To be honest, I'm most interested in reading and working on the kernel source (in which case the distro doesn't matter), but it's pretty daunting, and I figure getting into the Linux community and working with experienced Linux developers might give me a better idea of how to jump into the guts (let me know if this is bogus, or if you have any advice regarding that).
So...
Which distro has the "best" developer community in terms of organization, people who are fun to work with, and opportunities to contribute?
I've read various "Contributing to XXX" pages and mailing lists for distros like Ubuntu, OpenSuse, Fedora, etc. but I'd rather get a more personal testament from an actual developer.
Unless you have a specific desire to learn the ins and outs of various packaging formats, you would probably be better off contributing directly upstream to applications/libraries that you find interesting. While individual distributions often have a few management applications that are unique(ish) to them, most core applications and libraries are shared between them.
As you have expressed an interest in guts, it would make sense to stick to one of the main community distros (Fedora and Ubuntu/Debian), as the rest tend to be variations on a base distro. The other option is to choose a source-based distribution, which has a number of advantages for developers, although you may find yourself spending a bit of time keeping your machine trim.
As I'm a developer I personally use Gentoo which gives me a number of things:
Rolling release: New versions of applications are generally available soon after release
Stable/Unstable mix: I can run stable core with bleeding edge on upstream packages I care about
Development ready: Any installed package is by default a "dev" package; the distinction between build-time/runtime dependencies is blurred
Packaging is easy: If it's as simple as "configure/make/make install", writing an ebuild is very easy (see the sketch below)
Contribution is easy: Contributing new ebuilds is fairly painless; from there you can get as involved as you like
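For illustration, an ebuild for a plain configure/make/make install package can be little more than metadata; the package name, version and dependency below are made up:

# app-misc/foo/foo-1.2.3.ebuild (sketch)
EAPI=8

DESCRIPTION="Example configure/make/make install package"
HOMEPAGE="https://example.org/foo"
SRC_URI="https://example.org/releases/${P}.tar.gz"

LICENSE="MIT"
SLOT="0"
KEYWORDS="~amd64"

DEPEND="dev-libs/libbar"      # needed at build time
RDEPEND="${DEPEND}"           # needed at runtime
# no src_* phases required: the defaults run econf, emake and emake install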
Of course there are downsides, not least of all that your machine spends a considerable amount of time building things, and if you run a large selection of "unstable" packages you may find you occasionally need to fix up your machine. However, I find these disadvantages minor compared to having an up-to-date platform from which to contribute upstream.
If you want to work with the kernel then you shouldn't be picking a distribution, but rather working upstream.
Somebody correct me if I'm wrong, but I think that contributing to Ubuntu can be very easy and fun if you use Launchpad. I haven't tried contributing code, but I contribute translations and file bugs on some projects.

How to build a Linux system from the kernel to the UI layer

I have been looking into the MeeGo, Maemo, and Android architectures.
They all have the Linux kernel, build some libraries on top of it, then build middle-layer libraries [e.g. telephony, media, etc.].
Suppose I want to build my own system: the Linux kernel with some binaries like glibc and D-Bus, a UI toolkit like GTK+, and its binaries.
I want to compile every project from source to customize my own Linux system for desktop, netbook, and handheld devices [starting with a netbook first :)].
How can I build my own customized system from the kernel to the UI?
I apologize in advance for a very long-winded answer to what you thought would be a very simple question. Unfortunately, piecing together an entire operating system from many different bits in a coherent and unified manner is not exactly a trivial task. I'm currently working on my own Xen-based distribution; I'll share my experience thus far (beyond Linux From Scratch):
1 - Decide on a scope and stick to it
If you have any hope of actually completing this project, you need to write a single-paragraph explanation of what your new OS will be and do once it's completed. Print that out and tape it to your wall, directly in front of you. Read it, chant it, practice saying it backwards, and do whatever else may help you keep it directly in front of any urge to succumb to feature creep.
2 - Decide on a package manager
This may be the single most important decision that you will make. You need to decide how you will maintain your operating system with regard to updates and new releases, even if you are the only subscriber. Anyone, including you, who uses the new OS will surely find a need to install something that was not included in the base distribution. Even if you are pushing out an OS to power a kiosk, it's critical for all deployments to keep themselves up to date in a sane and consistent manner.
I ended up going with apt-rpm because it offered the flexibility of the popular .rpm package format while leveraging apt's known sanity when it comes to dependencies. You may prefer using yum, apt with .deb packages, slackware style .tgz packages or your own format.
Decide on this quickly, because it's going to dictate how you structure your build. Keep track of dependencies in each component so that it's easy to roll packages later.
3 - Re-read your scope then configure your kernel
Avoid the kitchen sink syndrome when making a kernel. Look at what you want to accomplish and then decide what the kernel has to support. You will probably want full gadget support, compatibility with file systems from other popular operating systems, security hooks appropriate for people who do a lot of browsing, etc. You don't need to support crazy RAID configurations, advanced netfilter targets and minixfs, but wifi better work. You don't need 10GBE or infiniband support. Go through the kernel configuration carefully. If you can't justify including a module by its potential use, don't check it.
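One way to keep the configuration lean is to grow it from a small baseline instead of pruning a huge one; these are all standard kernel make targets:

make defconfig           # sane per-architecture starting point
make localmodconfig      # optional: keep only the modules currently loaded on this machine
make menuconfig          # then walk the options and justify each one against your scope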
Avoid pulling in out of tree patches unless you absolutely need them. From time to time, people come up with new scheduling algorithms, experimental file systems, etc. It is very, very difficult to maintain a kernel that consumes from anything else but mainline.
There are exceptions, of course, if going out of tree is the only way to meet one of the goals stated in your scope. Just remain conscious of how much additional work you'll be making for yourself in the future.
4 - Re-read your scope then select your base userland
At the very minimum, you'll need a shell, the core utilities, and an editor that works without a window manager. Paying attention to dependencies will tell you that you also need a C library and whatever else is needed to make the base commands work. As Eli answered, Linux From Scratch is a good resource to check. I also strongly suggest looking at the LSB (Linux Standard Base); this is a specification that lists common packages and components that are 'expected' to be included with any distribution. Don't follow the LSB as a standard; compare its suggestions against your scope. If the purpose of your OS does not necessitate the inclusion of something and nothing you install will depend on it, don't include it.
5 - Re-read your scope and decide on a window system
Again referring to the kitchen-sink syndrome, try to resist the urge to just slap a stock install of KDE or GNOME on top of your base OS and call it done. Another common pitfall is to install a full-blown version of either and work backwards by removing things that aren't needed. For the sake of sane dependencies, it's really better to work on this from the bottom up rather than the top down.
Decide quickly on the UI toolkit that your distribution is going to favor and get it (with supporting libraries) in place. Define UI consistency quickly and stick to it. Nothing is more annoying than having 10 windows open that behave completely differently as far as controls go. When I see this, I diagnose the OS with multiple personality disorder and want to medicate its developer. There was just an uproar regarding Ubuntu moving window controls around, and they were doing it consistently; the inconsistency was the behavior changing between versions. People get very upset if they can't immediately find a button or have to increase their mouse mileage.
6 - Re-read your scope and pick your applications
Avoid kitchen-sink syndrome here as well. Choose your applications not only based on your scope and their popularity, but on how easy they will be for you to maintain. It's very likely that you will be applying your own patches to them (even simple ones, like messengers updating a blinking light on the toolbar).
It's important to keep every architecture that you want to support in mind as you select what you want to include. For instance, if Valgrind is your best friend, be aware that you won't be able to use it to debug issues on certain ARM platforms.
Pretend you are a company and that you will be an employee there. Does your company pass the Joel test? Consider a continuous integration system like Hudson as well. It will save you lots of hair-pulling as you progress.
As you begin unifying all of these components, you'll naturally be establishing your own SDK. Document it as you go, and avoid breaking it on a whim (refer to your scope, always). It's perfectly acceptable to just let Linux be Linux, which turns your SDK more into formal guidelines than anything else.
In my case, I'm rather fortunate to be working on something that is designed strictly as a server OS. I don't have to deal with desktop caveats and I don't envy anyone who does.
7 - Additional suggestions
These are in random order, but noting them might save you some time:
Maintain patch sets for every line of upstream code that you modify, in numbered sequence. An example might be 00-make-bash-clairvoyant.patch; this allows you to maintain patches instead of entire forked repositories of upstream code (see the sketch after these notes). You'll thank yourself for this later.
If a component has a test suite, make sure you add tests for anything that you introduce. It's easy to just say "great, it works!" and leave it at that; keep in mind that you'll likely be adding even more later, which may break what you added previously.
Use whatever version control system is in use by the authors when pulling in upstream code. This makes merging of new code much, much simpler and shaves hours off of re-basing your patches.
Even if you think upstream authors won't be interested in your changes, at least alert them to the fact that they exist. Coordination is essential, even if you simply learn that a feature you just put in is already in planning and will be implemented differently in the future.
You may be convinced that you will be the only person to ever use your OS. Design it as though millions will use it, you never know. This kind of thinking helps avoid kludges.
Don't pull upstream alpha code, no matter what the temptation may be. Red Hat tried that, it did not work out well. Stick to stable releases unless you are pulling in bug fixes. Major bug fixes usually result in upstream releases, so make sure you watch and coordinate.
Remember that it's supposed to be fun.
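To illustrate the numbered-patch suggestion above (the package and patch names are only examples):

tar xf bash-5.2.tar.gz
cp -r bash-5.2 bash-5.2.orig
# ... modify files under bash-5.2/ ...
mkdir -p patches
diff -ruN bash-5.2.orig bash-5.2 > patches/00-make-bash-clairvoyant.patch
# rebuilding later from a clean tree means unpacking and replaying the series in order:
rm -rf bash-5.2 && tar xf bash-5.2.tar.gz
for p in patches/[0-9][0-9]-*.patch; do patch -d bash-5.2 -p1 < "$p"; done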
Finally, realize that rolling an entire from-scratch distribution is exponentially more complex than forking an existing distribution and simply adding whatever you feel it lacks. You need to reward yourself often by booting your OS and actually using it productively. If you get too frustrated, consistently confused, or find yourself putting off work on it, consider making a lightweight fork of Debian or Ubuntu. You can then go back and duplicate it entirely from scratch. It's no different than prototyping an application in a simpler / rapid language first before writing it for real in something more difficult. If you want to go this route (first), gNewSense offers utilities to fork your own OS directly from Ubuntu. Note that, by default, their utilities will strip any non-free bits (including binary kernel blobs) from the resulting distro.
I strongly suggest going the completely-from-scratch route (first) because the experience you will gain is far greater than from making yet another fork. However, it's also important that you actually complete your project. "Best" is subjective; do what works for you.
Good luck on your project, see you on distrowatch.
Check out Linux From Scratch:
Linux From Scratch (LFS) is a project
that provides you with step-by-step
instructions for building your own
customized Linux system entirely from
source.
Use Gentoo Linux. It is a compile-from-source distribution and very customizable. I like it a lot.

Why use build tools like Autotools when we can just write our own makefiles?

Recently, I switched my development environment from Windows to Linux. So far, I have only used Visual Studio for C++ development, so many concepts, like make and Autotools, are new to me. I have read the GNU make documentation and have a rough idea of how it works. But I am kind of confused about Autotools.
As far as I know, makefiles are used to make the build process easier.
Why do we need tools like Autotools just for creating makefiles? Since everyone knows how to create a makefile, I don't see the real use of Autotools.
What is the standard? Do we need to use tools like this or would just handwritten makefiles do?
You are talking about two separate but intertwined things here:
Autotools
GNU coding standards
Within Autotools, you have several projects:
Autoconf
Automake
Libtool
Let's look at each one individually.
Autoconf
Autoconf easily scans an existing tree to find its dependencies and creates a configure script that will run under almost any kind of shell. The configure script allows the user to control the build behavior (i.e. --with-foo, --without-foo, --prefix, --sysconfdir, etc.) as well as doing checks to ensure that the system can compile the program.
Configure generates a config.h file (from a template) which programs can include to work around portability issues. For example, if HAVE_LIBPTHREAD is not defined, use forks instead.
I personally use Autoconf on many projects. It usually takes people some time to get used to m4. However, it does save time.
You can have makefiles inherit some of the values that configure finds without using automake.
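As a minimal, Autoconf-only sketch (no Automake involved; the project name is made up), the library check below ends up defining HAVE_LIBPTHREAD in config.h exactly as described above:

cat > configure.ac <<'EOF'
AC_INIT([hello], [1.0])
AC_CONFIG_HEADERS([config.h])
AC_PROG_CC
AC_CHECK_LIB([pthread], [pthread_create])
AC_OUTPUT
EOF
autoreconf --install              # generates the portable ./configure script
./configure --prefix="$HOME/.local"
grep HAVE_LIBPTHREAD config.h     # your C code can now #ifdef on this macro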
Automake
By providing a short template that describes what programs will be built and what objects need to be linked to build them, Makefiles that adhere to GNU coding standards can automatically be created. This includes dependency handling and all of the required GNU targets.
Some people find this easier. I prefer to write my own makefiles.
Libtool
Libtool is a very cool tool for simplifying the building and installation of shared libraries on any Unix-like system. Sometimes I use it; other times (especially when just building static link objects) I do it by hand.
There are other options too, see StackOverflow question Alternatives to Autoconf and Autotools?.
Build automation & GNU coding standards
In short, you really should use some kind of portable build configuration system if you release your code to the masses. What you use is up to you. GNU software is known to build and run on almost anything. However, you might not need to adhere to such (and sometimes extremely pedantic) standards.
If anything, I'd recommend giving Autoconf a try if you're writing software for POSIX systems. Just because Autotools produce part of a build environment that's compatible with GNU standards doesn't mean you have to follow those standards (many don't!) :) There are plenty of other options, too.
Edit
Don't fear m4 :) There is always the Autoconf macro archive. Plenty of examples and drop-in checks. Write your own or use what's tested. Autoconf is far too often confused with Automake. They are two separate things.
First of all, the Autotools are not an opaque build system but a loosely coupled tool-chain, as tinkertim already pointed out. Let me just add some thoughts on Autoconf and Automake:
Autoconf is the configuration system that creates the configure script based on feature checks that are supposed to work on all kinds of platforms. A lot of system knowledge has gone into its m4 macro database during the 15 years of its existence. On the one hand, I think the latter is the main reason Autotools have not been replaced by something else yet. On the other hand, Autoconf used to be far more important when the target platforms were more heterogeneous and Linux, AIX, HP-UX, SunOS, ..., and a large variety of different processor architectures had to be supported. I don't really see its point if you only want to support recent Linux distributions and Intel-compatible processors.
Automake is an abstraction layer for GNU Make and acts as a Makefile generator from simpler templates. A number of projects eventually got rid of the Automake abstraction and reverted to writing Makefiles manually because you lose control over your Makefiles and you might not need all the canned build targets that obfuscate your Makefile.
Now to the alternatives (and I strongly suggest an alternative to Autotools based on your requirements):
CMake's most notable achievement is replacing Autotools in KDE. It's probably the closest you can get if you want Autoconf-like functionality without the m4 idiosyncrasies. It brings Windows support to the table and has proven to be applicable in large projects. My beef with CMake is that it is still a Makefile generator (at least on Linux) with all the latter's inherent problems (e.g. Makefile debugging, timestamp signatures, implicit dependency order).
SCons is a Make replacement written in Python. It uses Python scripts as build control files allowing very sophisticated techniques. Unfortunately, its configuration system is not on par with Autoconf. SCons is often used for in-house development when adaptation to specific requirements is more important than following conventions.
If you really want to stick with Autotools, I strongly suggest reading Recursive Make Considered Harmful (archived) and writing your own GNU Makefile configured through Autoconf.
The answers already provided here are good, but I'd strongly recommend not taking the advice to write your own makefile if you have anything resembling a standard C/C++ project. We need the autotools instead of handwritten makefiles because a standard-compliant makefile generated by automake offers a lot of useful targets under well-known names, and providing all these targets by hand is tedious and error-prone.
Firstly, writing a Makefile by hand seems a great idea at first, but most people will not bother to write more than the rules for all, install, and maybe clean. automake generates dist, distcheck, clean, distclean, uninstall, and all these little helpers. These additional targets are a great boon to the sysadmin who will eventually install your software.
Secondly, providing all these targets in a portable and flexible way is quite error-prone. I've done a lot of cross-compilation to Windows targets recently, and the autotools performed just great, in contrast to most hand-written makefiles, which were mostly a pain in the ass to compile. Mind you, it is possible to create a good Makefile by hand. But don't overestimate yourself; it takes a lot of experience and knowledge about a bunch of different systems, and automake creates great Makefiles for you right out of the box.
Edit: And don't be tempted to use the "alternatives". CMake and friends are a horror to the deployer because they aren't interface-compatible with configure and friends. Every halfway competent sysadmin or developer can do great things like cross-compilation, or simple things like setting a prefix, off the top of their head or with a simple --help on a configure script. But you are damned to spend an hour or three when you have to do such things with BJam. Don't get me wrong, BJam is probably a great system under the hood, but it's a pain in the ass to use because there are almost no projects using it and very little, incomplete documentation. autoconf and automake have a huge lead here in terms of established knowledge.
So, even though I'm a bit late with this advice for this question: Do yourself a favor and use the autotools and automake. The syntax might be a bit strange, but they do a way better job than 99% of the developers do on their own.
For small projects or even for large projects that only run on one platform, handwritten makefiles are the way to go.
Where autotools really shine is when you are compiling for different platforms that require different options. Autotools is frequently the brains behind the typical
./configure
make
make install
compilation and install steps for Linux libraries and applications.
That said, I find autotools to be a pain and I've been looking for a better system. Lately I've been using bjam, but that also has its drawbacks. Good luck finding what works for you.
Autotools are needed because Makefiles are not guaranteed to work the same across different platforms. If you handwrite a Makefile, and it works on your machine, there is a good chance that it won't on mine.
Do you know what unix your users will be using? Or even which distribution of Linux? Do you know where they want software installed? Do you know what tools they have, what architecture they want to compile on, how many CPUs they have, how much RAM and disk might be available to them?
The *nix world is a cross-platform landscape, and your build and install tools need to deal with that.
Mind you, the auto* tools date from an earlier epoch, and there are many valid complaints about them, but the several projects to replace them with more modern alternatives are having trouble developing a lot of momentum.
Lots of things are like that in the *nix world.
Autotools is a disaster.
The generated ./configure script checks for features that have not been present on any Unix system for the last 20 years or so. To do this, it spends a huge amount of time.
Running ./configure takes ages. Although modern server CPUs can have dozens of cores, and there may be several such CPUs per server, ./configure is single-threaded. We still have enough years of Moore's law left that the number of CPU cores will keep going up as a function of time, so the time ./configure takes will stay approximately constant, whereas parallel build times shrink by a factor of 2 every 2 years thanks to Moore's law. Or actually, I would say the time ./configure takes might even increase, due to increasing software complexity taking advantage of improved hardware.
The mere act of adding just one file to your project requires you to run automake, autoconf and ./configure which will take ages, and then you'll probably find that since some important files have changed, everything will be recompiled. So add just one file, and make -j${CPUCOUNT} recompiles everything.
And about make -j${CPUCOUNT}: the generated build system is a recursive one. Recursive make has long been considered harmful.
Then when you install the software that has been compiled, you'll find that it doesn't work. (Want proof? Clone protobuf repository from Github, check out commit 9f80df026933901883da1d556b38292e14836612, install it to a Debian or Ubuntu system, and hey presto: protoc: error while loading shared libraries: libprotoc.so.15: cannot open shared object file: No such file or directory -- since it's in /usr/local/lib and not /usr/lib; workaround is to do export LD_RUN_PATH=/usr/local/lib before typing make).
The theory is that by using autotools, you could create a software package that can be compiled on Linux, FreeBSD, NetBSD, OpenBSD, DragonflyBSD, and other operating systems. The reality? Every non-Linux system that builds packages from source has numerous patch files in its repository to work around autotools bugs. Just take a look at, e.g., the FreeBSD /usr/ports: it's full of patches. So, it would have been just as easy to create a small patch for a non-autotools build system on a per-project basis as to create a small patch for an autotools build system on a per-project basis. Or perhaps even easier, as standard make is much easier to use than autotools.
The fact is, if you create your own build system based on standard make (and make it inclusive, not recursive, following the recommendations of the "Recursive make considered harmful" paper), things work in a much better manner. Also, your build time goes down by an order of magnitude, perhaps even two orders of magnitude if your project is a very small one of 10-100 C-language files and you have dozens of cores per CPU and multiple CPUs. It's also much easier to interface custom automatic code-generation tools with a custom build system based on standard make than to deal with the m4 mess of autotools. With standard make, you can at least type a shell command into the Makefile.
So, to answer your question: why use autotools? Answer: there is no reason to do so. Autotools has been obsolete since commercial Unix became obsolete, and the advent of multi-core CPUs has made autotools even more obsolete. Why programmers haven't realized that yet is a mystery. I'll happily use standard make in my build systems, thank you. Yes, it takes some amount of work to generate the dependency files for C-language header inclusion, but that work is more than saved by not having to fight with autotools.
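A rough sketch of such a plain, non-recursive setup with compiler-generated header dependencies follows; the file layout is hypothetical, and remember that the recipe line inside the Makefile must start with a tab:

cat > Makefile <<'EOF'
# non-recursive: one Makefile sees the whole dependency graph
SRCS   := $(wildcard src/*.c)
OBJS   := $(SRCS:.c=.o)
CFLAGS += -MMD -MP                # have the compiler emit .d header-dependency files

prog: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $(OBJS)

-include $(OBJS:.o=.d)            # picked up on later runs; keeps incremental builds correct
EOF
make -j"$(nproc)"                 # parallel builds work because nothing recurses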
I don't feel I am enough of an expert to answer this, but I'll still give you a bit of an analogy from my experience.
To some extent it is similar to why we write embedded code in C (a high-level language) rather than in assembly language.
Both serve the same purpose, but the latter is lengthier, more tedious, more time-consuming, and more error-prone (unless you know the ISA of the processor very well).
The same goes for the Automake tool versus writing your own makefile.
Writing Makefile.am and configure.ac is much simpler than writing an individual project's Makefile, as the sketch below shows.
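Here is roughly what that pair of files looks like for a one-binary project (assuming a single main.c sits next to them; the names are illustrative):

cat > configure.ac <<'EOF'
AC_INIT([hello], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
EOF
cat > Makefile.am <<'EOF'
bin_PROGRAMS  = hello
hello_SOURCES = main.c
EOF
autoreconf --install        # generates configure plus a standards-compliant Makefile.in
./configure && make         # make dist, make distcheck, make install all come for free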
