A constant problem I run into with autoconf is init scripts. The whole point of autoconf is of course portability, and the various init systems (sysvinit, systemd, upstart) - not to mention various differences between the same init system on different distros (like /etc/init.d vs /etc/rc.d/ vs /etc/rc.d/init.d) - are crying out for a portable start/stop script solution.
I thought that someone would have created a nice m4 macro by now that handles this, but I can't find one.
This is strange, because it seems like a common problem, and there are already m4 macros for much newer concerns such as C++11 compatibility.
So, before I roll up my sleeves and spend the time writing a (hopefully) decent m4 macro that tries to determine which init system to use, I'd like to know whether someone else has already done it better. I can't find anything on Google, and I've even read posts where people actively discourage using autoconf to install init scripts. Personally, I think that sentiment is simply impractical: users these days are used to package managers installing init scripts automatically, so there's no reason you shouldn't get the same when compiling from source.
So, is there already a good m4 macro that handles this, or should I spend the time to write my own?
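For concreteness, this is roughly the kind of check I have in mind. It's a minimal sketch with made-up macro and variable names, and it only sniffs the build host, so cross-compilation would need more thought:

AC_DEFUN([MY_CHECK_INIT_SYSTEM], [
  AC_MSG_CHECKING([which init system to target])
  dnl /run/systemd/system exists only while systemd is actually running
  AS_IF([test -d /run/systemd/system], [my_init_system=systemd],
        [test -d /etc/init.d],         [my_init_system=sysvinit],
        [test -d /etc/rc.d],           [my_init_system=bsd-rc],
        [my_init_system=unknown])
  AC_MSG_RESULT([$my_init_system])
  AC_SUBST([initsystem], [$my_init_system])
])

(For systemd specifically, I gather the blessed route is to ask pkg-config for the systemdsystemunitdir variable rather than guessing, so a proper macro should probably prefer that.)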
I'm currently building LFS and looking for a package management solution;
specifically, a program that keeps track of which files got installed when you compiled
something from source, and that also has a method for removing those files in case make uninstall isn't present.
I have looked into programs like install-log and checkinstall, but couldn't get either to compile.
Any help is appreciated, thanks!
Personally, I have always used the User Based Management (aka "Package Users"), as described in section 8.2. Package Management. It doesn't get much love, even from the LFS community, despite it (still) being in the book and being the only method "unique to LFS".
It's not great for the first timer as it will really force you to dig deep at times to solve issues and make important decisions. I suggest you complete your first LFS system build, then consider Package Users your next time around.
But once you get used to it, it works great.
Another, simpler, method is the Timestamp Based technique (described in the same link).
For example, when it comes time to copy a package's files to your system, you can do something like this:
touch timestamp_start
make install
# do other stuff as instructed
touch timestamp_stop
find / -newer timestamp_start -not -newer timestamp_stop > list_of_files_affected
And I do use this approach when installing the Nvidia proprietary drivers, because successfully installing them with a non-root account is a real pain.
Is there a good 'how-to' or 'getting started' guide for using g++ and gdb?
Some background: I'm a decent programmer, but so far I have done everything on Windows in Visual Studio.
I have a little experience using the terminal to compile files (nothing much beyond a .h and one or two .cpp files), but that's it.
Does anyone know of a good primer on how to get started coding on Linux?
Read some good books, notably Advanced Linux Programming and Advanced Unix Programming. Also read the Advanced Bash-Scripting Guide and other documentation from the Linux Documentation Project.
Obviously, install some Linux distribution on your laptop (not in a VM, but on real disk partitions). If you have a Debian-like distribution, run aptitude build-dep gcc-4.6 gedit on it to pull in a lot of interesting developer packages.
Learn some command-line skills. Learn to use the man command; after installing the manpages and manpages-dev packages, type man man (use the space bar to scroll, the q key to quit). Also read the intro(2) man page. When you forget how to use a command like cp, try cp --help.
Use a version control system like git, even for one person tiny projects.
Backup your files.
Read several relevant Wikipedia pages on Linux, kernels, syscalls, free software, X11, Posix, Unix
Try hard to use the command line. For instance, try to do everything on the command line for a week or more. Avoid using your desktop, and possibly your mouse. Learn to use emacs.
Read about build tools like GNU make
Retrieve the source code of several free software projects (e.g. from SourceForge, Freecode, or GitHub) and practice configuring and building them. Study their source code
Basic tips to start on the command line (in a terminal); if a command is not found, you need to install the package providing it:
run emacs; there is a tutorial in the menu; practice it for half an hour.
edit a helloworld.c program (with a main calling some hello function); a minimal version appears at the end of this answer
compile it with gcc -g -Wall helloworld.c -o helloworld; improve your code till no warnings are given. Always pass -Wall to gcc or g++ to get almost all warnings.
run it with ./helloworld
debug it with gdb ./helloworld, then
use the help command
use the b main command to add a breakpoint in main and likewise for your hello function.
run it under gdb using r
use bt to get a backtrace
use p to print some variable
use c to continue the execution of the debugged program.
write a tiny Makefile to be able to build your helloworld program using make
learn how to call make (with M-x compile) and gdb (with M-x gdb) from inside Emacs
Learn more about valgrind (to detect most memory leaks). Perhaps consider using Boehm's GC in some of your applications.
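To make those steps concrete, here is a minimal sketch of a helloworld.c of the shape described above (a main calling a hello function), and the kind of tiny Makefile meant in the later steps. Remember that recipe lines in a Makefile must start with a tab character.

helloworld.c:

#include <stdio.h>

/* the hello function; try "b hello" on it in gdb */
void hello(const char *who)
{
    printf("hello, %s\n", who);
}

int main(void)
{
    hello("world");
    return 0;
}

Makefile:

CC = gcc
CFLAGS = -g -Wall

helloworld: helloworld.c
	$(CC) $(CFLAGS) helloworld.c -o helloworld

clean:
	rm -f helloworld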
You've got a lot of things to learn. I won't give you details, but as someone who's done Unix and C/C++ development for a couple of decades now, I'll try to give you some topics to start with.
My main advice is to start experimenting. Write the most trivial program you can in C or C++ (something that prints "Hello there, world!" is traditional) and figure out how to compile and run it from the command line. Then, once you've got a compiled version, start it up under the debugger and play around with breakpoints, printing expressions, etc, etc. Once you've got this simplest program up and running and you sort of understand what the debugger is telling you, add a class, function, struct, or whatever else feels like a good small step and go through the cycle again. You'll proceed much faster this way than if you start with a very large program.
Still at a very high level, here are a handful of topics you'll need to figure out at least a bit about. Note that the "learn by starting small" approach works well for any of the topics below.
Running g++: it has pretty good online documentation for the command line syntax, and although you're bound to find it intimidating at first, try to look for the simplest starting point.
Find a text editor to use. Vim and emacs are traditional (and very, very powerful), but both have a relatively steep learning curve. If you have someone around to help you, so much the better. There are other alternatives, but as an emacs user myself, I'm afraid I'm not that familiar with them.
Get familiar with gdb. It's an incredibly powerful tool for understanding your program. Again, it has extensive online documentation that will repay close reading.
Some familiarity with standard Unix commands will be useful: ls and cd for the basics of navigating Unix directories; grep for quickly searching source files.
You'll have to get used to the command-line approach versus the IDE approach. The former is the traditional Unix developer model, where you put together the operations you want from a collection of tools, rather than having the IDE hide most of this knowledge from you.
If your project is multi-file, and especially if it's a full-semester project, you might also consider learning something about the following topics.
Make is a tool for describing how to compile and link a multi-file project so that you don't have to remember how to do it by hand each time. Make, unfortunately, has a well-deserved reputation for being tricky to use, but this is mostly true in very large projects spanning multiple directories, and there are probably good simple examples online.
I would strongly consider making use of a source code control system such as git or hg, even for a relatively small project. It's so much safer to have an archived version of what you've done so that you can back out quickly. Both git and hg are overkill for a small one-shot project, but they are worth learning on their own. Conventional wisdom as I understand it today is that they're very similar in philosophy and core functionality, but that hg is a bit more consistent at the command-line level, and therefore easier to start with.
I suspect this is all rather intimidating, especially if you've had effectively no exposure to a Unix command environment before. I'll re-emphasize my first piece of advice above: learn by starting simple and experimenting. This minimizes the amount of new stuff you have to wrap your head around at any given point in time.
I wonder if we can reduce, just a little, the effort around packaging
and software installation under Linux/Unix OS environments.
It is my stance that there is too much redundant effort about $subject.
I have been pondering ways to connect the build systems of $subject
with some "next-stage build tools" like easybuild (1) and openbuildservice (2);
read below for more details.
To be more specific, last week I was able to take pkgsrc's repository,
process the Makefiles via a tiny "pkg2eb" script, and produce *.eb files
for easybuild, then feed many parallel gcc compilations with them.
That "blindly-driven" process ended up in >600 successful builds,
i.e. packages that simply needed 'wget/configure/make/make install'.
Not bad for a first run; I just wonder if it can be done any better.
So:
In your experience, which OS has the cleanest/leanest
pkgsrc/ports structure to be sourced and fed to other external tools?
(This is NOT the same as asking which has the most available packages!)
Have you heard of any similar efforts trying to mass-produce
packages from, e.g., a common source list in a structured manner?
(I mean, in a way transferable across different build systems.)
So, much relevant information is visible here:
http://www.mancoosi.org/edos/packages/ # lengthy description of various packaging formats
this one shows the higher level picture:
http://www.mancoosi.org/edos/suggestions/ (esp. 2.1.1 Expressivity shortcomings)
Anyway, to answer the original question, the best bets as of now are:
RPM's .spec files
DEB control files
pkgsrc; possible but some hackery is still needed
portage; quite clean, distinguishes between DEPEND and RDEPEND
macports; easy to parse; very detailed dependencies aspects
ports; like pkgsrc; multiple dependencies defined
I'm interested in contributing to a Linux distro, but when it comes to the various distros' developer communities, I'm having a bit of trouble figuring out which one I'd most like to join.
What languages I know: C, C++, Lua, Python, and fairly familiar with Perl (though I wouldn't say I "know" it). In particular, I have very little experience with x86 assembly besides hacking stuff together for performance tweaks, though that will be partially rectified soon.
What I'm looking for: a community that provides plenty of opportunities for developers to work on various aspects of the distribution. To be honest, I'm most interested in reading and working on the kernel source (in which case the distro doesn't matter), but it's pretty daunting, and I figure getting into the Linux community and working with experienced Linux developers might give me a better idea of how to jump into the guts (let me know if this is bogus, or if you have any advice regarding that).
So...
Which distro has the "best" developer community in terms of organization, people who are fun to work with, and opportunities to contribute?
I've read various "Contributing to XXX" pages and mailing lists for distros like Ubuntu, OpenSuse, and Fedora, but I'd rather get a more personal account from an actual developer.
Unless you have a specific desire to learn the ins and outs of various packaging formats, you would probably be better off contributing directly upstream to applications and libraries that you find interesting. While individual distributions often have a few management applications that are unique(ish) to them, most core applications and libraries are shared between them.
As you have expressed an interest in the guts, it would make sense to stick to one of the main community distros (Fedora and Ubuntu/Debian), as the rest tend to be variations on a base distro. The other option is to choose a source-based distribution, which has a number of advantages for developers, although you may find yourself spending a bit of time keeping your machine trim.
As I'm a developer I personally use Gentoo which gives me a number of things:
Rolling release: New versions of applications are generally available soon after release
Stable/Unstable mix: I can run stable core with bleeding edge on upstream packages I care about
Development ready: Any installed package is by default a "dev" package; the distinction between build-time and runtime dependencies is blurred
Packaging is easy: If it's as simple as "configure/make/make install", writing an ebuild is very easy (a hypothetical sketch follows this list)
Contribution is easy: Contributing new ebuilds is fairly painless, from there you can get as involved as you like
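For illustration, here is a hypothetical minimal ebuild for a plain configure/make/make install package (every name and URL below is made up). The default phase functions already run econf, emake and emake install, so the file is little more than metadata:

# myprog-1.0.ebuild (hypothetical)
EAPI=8

DESCRIPTION="Example autotools-based program"
HOMEPAGE="https://example.org/myprog"
SRC_URI="https://example.org/releases/${P}.tar.gz"

LICENSE="GPL-2"
SLOT="0"
KEYWORDS="~amd64"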
Of course there are downsides, not least that your machine spends a considerable amount of time building things, and if you run a large selection of "unstable" packages you may find you occasionally need to fix up your machine. However, I find these disadvantages minor compared to having an up-to-date platform from which to contribute upstream.
If you want to work with the kernel then you shouldn't be picking a distribution, but rather working upstream.
Somebody correct me if I'm wrong, but I think that contributing to Ubuntu can be very easy and fun if you use Launchpad. I haven't tried contributing code, but I contribute translations and file bugs on some projects.
Recently, I switched my development environment from Windows to Linux. So far, I have only used Visual Studio for C++ development, so many concepts, like make and Autotools, are new to me. I have read the GNU make documentation and have a rough idea of how it works, but I am kind of confused about Autotools.
As far as I know, makefiles are used to make the build process easier.
Why do we need tools like Autotools just for creating the makefiles? Since everyone knows how to create a makefile, I don't see the real use of Autotools.
What is the standard? Do we need to use tools like this or would just handwritten makefiles do?
You are talking about two separate but intertwined things here:
Autotools
GNU coding standards
Within Autotools, you have several projects:
Autoconf
Automake
Libtool
Let's look at each one individually.
Autoconf
Autoconf easily scans an existing tree to find its dependencies and creates a configure script that will run under almost any kind of shell. The configure script allows the user to control the build behavior (i.e. --with-foo, --without-foo, --prefix, --sysconfdir, etc.) as well as doing checks to ensure that the system can compile the program.
Configure generates a config.h file (from a template) which programs can include to work around portability issues. For example, if HAVE_LIBPTHREAD is not defined, use forks instead.
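For instance, a sketch of that pattern: HAVE_LIBPTHREAD is the kind of symbol an AC_CHECK_LIB([pthread], ...) check would define, and the fallback here is purely illustrative:

#include "config.h"

#ifdef HAVE_LIBPTHREAD
#include <pthread.h>
/* configure found pthreads: start workers with pthread_create() */
#else
#include <unistd.h>
/* no pthreads at configure time: fall back to fork() */
#endif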
I personally use Autoconf on many projects. It usually takes people some time to get used to m4. However, it does save time.
You can have makefiles inherit some of the values that configure finds without using automake.
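A sketch of what that looks like, assuming a made-up program name: you write a Makefile.in, list it in AC_CONFIG_FILES([Makefile]), and configure fills in the @...@ placeholders when it generates Makefile:

# Makefile.in -- configure rewrites the @...@ values
CC     = @CC@
CFLAGS = @CFLAGS@
prefix = @prefix@

myprog: myprog.c
	$(CC) $(CFLAGS) -o myprog myprog.c

install: myprog
	mkdir -p $(prefix)/bin
	install -m 755 myprog $(prefix)/bin/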
Automake
From a short template that describes which programs will be built and which objects need to be linked to build them, Automake creates Makefiles that adhere to the GNU coding standards. This includes dependency handling and all of the required GNU targets.
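The template really is that short; a sketch for a hypothetical program:

# Makefile.am -- automake expands this into a GNU-standard Makefile.in
bin_PROGRAMS   = myprog
myprog_SOURCES = main.c util.c util.h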
Some people find this easier. I prefer to write my own makefiles.
Libtool
Libtool is a very cool tool for simplifying the building and installation of shared libraries on any Unix-like system. Sometimes I use it; other times (especially when just building static link objects) I do it by hand.
There are other options too, see StackOverflow question Alternatives to Autoconf and Autotools?.
Build automation & GNU coding standards
In short, you really should use some kind of portable build configuration system if you release your code to the masses. What you use is up to you. GNU software is known to build and run on almost anything. However, you might not need to adhere to such (sometimes extremely pedantic) standards.
If anything, I'd recommend giving Autoconf a try if you're writing software for POSIX systems. Just because Autotools produce part of a build environment that's compatible with GNU standards doesn't mean you have to follow those standards (many don't!) :) There are plenty of other options, too.
Edit
Don't fear m4 :) There is always the Autoconf macro archive. Plenty of examples, or drop in checks. Write your own or use what's tested. Autoconf is far too often confused with Automake. They are two separate things.
First of all, the Autotools are not an opaque build system but a loosely coupled tool-chain, as tinkertim already pointed out. Let me just add some thoughts on Autoconf and Automake:
Autoconf is the configuration system that creates the configure script based on feature checks that are supposed to work on all kinds of platforms. A lot of system knowledge has gone into its m4 macro database during the 15 years of its existence. On the one hand, I think the latter is the main reason Autotools have not been replaced by something else yet. On the other hand, Autoconf used to be far more important when the target platforms were more heterogeneous and Linux, AIX, HP-UX, SunOS, ..., and a large variety of different processor architectures had to be supported. I don't really see its point if you only want to support recent Linux distributions and Intel-compatible processors.
Automake is an abstraction layer for GNU Make and acts as a Makefile generator from simpler templates. A number of projects eventually got rid of the Automake abstraction and reverted to writing Makefiles manually because you lose control over your Makefiles and you might not need all the canned build targets that obfuscate your Makefile.
Now to the alternatives (and I strongly suggest an alternative to Autotools based on your requirements):
CMake's most notable achievement is replacing Autotools in KDE. It's probably the closest you can get if you want Autoconf-like functionality without the m4 idiosyncrasies. It brings Windows support to the table and has proven to be applicable in large projects. My beef with CMake is that it is still a Makefile generator (at least on Linux), with all the inherent problems that entails (e.g. Makefile debugging, timestamp signatures, implicit dependency order).
SCons is a Make replacement written in Python. It uses Python scripts as build control files allowing very sophisticated techniques. Unfortunately, its configuration system is not on par with Autoconf. SCons is often used for in-house development when adaptation to specific requirements is more important than following conventions.
If you really want to stick with Autotools, I strongly suggest reading Recursive Make Considered Harmful (archived) and writing your own GNU Makefile configured through Autoconf.
The answers already provided here are good, but I'd strongly recommend not taking the advice to write your own makefile if you have anything resembling a standard C/C++ project. We need the autotools instead of handwritten makefiles because a standard-compliant makefile generated by automake offers a lot of useful targets under well-known names, and providing all these targets by hand is tedious and error-prone.
Firstly, writing a Makefile by hand seems like a great idea, but most people will not bother to write more than the rules for all, install, and maybe clean. automake generates dist, distcheck, clean, distclean, uninstall and all these little helpers. These additional targets are a great boon to the sysadmin who will eventually install your software.
Secondly, providing all these targets in a portable and flexible way is quite error-prone. I've done a lot of cross-compilation to Windows targets recently, and the autotools performed just great, in contrast to most hand-written makefiles, which were mostly a pain in the ass to compile. Mind you, it is possible to create a good Makefile by hand. But don't overestimate yourself; it takes a lot of experience and knowledge about a bunch of different systems, while automake creates great Makefiles for you right out of the box.
Edit: And don't be tempted to use the "alternatives". CMake and friends are a horror to the deployer because they aren't interface-compatible with configure and friends. Every halfway competent sysadmin or developer can do great things like cross-compilation, or simple things like setting a prefix, off the top of his head or with a simple --help with a configure script. But you are damned to spend an hour or three when you have to do such things with BJam. Don't get me wrong, BJam is probably a great system under the hood, but it's a pain in the ass to use because there are almost no projects using it and very little, incomplete documentation. autoconf and automake have a huge lead here in terms of established knowledge.
So, even though I'm a bit late with this advice for this question: Do yourself a favor and use the autotools and automake. The syntax might be a bit strange, but they do a way better job than 99% of the developers do on their own.
For small projects or even for large projects that only run on one platform, handwritten makefiles are the way to go.
Where autotools really shine is when you are compiling for different platforms that require different options. Autotools is frequently the brains behind the typical
./configure
make
make install
compilation and install steps for Linux libraries and applications.
That said, I find autotools to be a pain and I've been looking for a better system. Lately I've been using bjam, but that also has its drawbacks. Good luck finding what works for you.
Autotools are needed because Makefiles are not guaranteed to work the same across different platforms. If you handwrite a Makefile, and it works on your machine, there is a good chance that it won't on mine.
Do you know what unix your users will be using? Or even which distribution of Linux? Do you know where they want software installed? Do you know what tools they have, what architecture they want to compile on, how many CPUs they have, how much RAM and disk might be available to them?
The *nix world is a cross-platform landscape, and your build and install tools need to deal with that.
Mind you, the auto* tools date from an earlier epoch, and there are many valid complaints about them, but the several projects to replace them with more modern alternatives are having trouble developing a lot of momentum.
Lots of things are like that in the *nix world.
Autotools is a disaster.
The generated ./configure script checks for features that have not been present on any Unix system for the last 20 years or so. To do this, it spends a huge amount of time.
Running ./configure takes ages. Although modern server CPUs can have dozens of cores, and there may be several such CPUs per server, ./configure is single-threaded. We still have enough years of Moore's law left that the number of CPU cores will keep going up, so the time ./configure takes will stay approximately constant while parallel build times shrink by a factor of 2 every 2 years. Actually, the time ./configure takes might even increase, as software complexity grows to take advantage of improved hardware.
The mere act of adding just one file to your project requires you to run automake, autoconf and ./configure, which will take ages, and then you'll probably find that since some important files have changed, everything will be recompiled. So add just one file, and make -j${CPUCOUNT} recompiles everything.
And about make -j${CPUCOUNT}: the generated build system is a recursive one, and recursive make has long been considered harmful.
Then when you install the software that has been compiled, you'll find that it doesn't work. (Want proof? Clone protobuf repository from Github, check out commit 9f80df026933901883da1d556b38292e14836612, install it to a Debian or Ubuntu system, and hey presto: protoc: error while loading shared libraries: libprotoc.so.15: cannot open shared object file: No such file or directory -- since it's in /usr/local/lib and not /usr/lib; workaround is to do export LD_RUN_PATH=/usr/local/lib before typing make).
The theory is that by using autotools, you could create a software package that can be compiled on Linux, FreeBSD, NetBSD, OpenBSD, DragonflyBSD and other operating systems. The reality? Every non-Linux system that builds packages from source carries numerous patch files in its repository to work around autotools bugs. Just take a look at e.g. FreeBSD's /usr/ports: it's full of patches. So it would have been just as easy to create a small per-project patch for a non-autotools build system as to create a small per-project patch for an autotools build system. Or perhaps even easier, as standard make is much easier to use than autotools.
The fact is, if you create your own build system based on standard make (and make it inclusive rather than recursive, following the recommendations of the "Recursive make considered harmful" paper), things work much better. Your build time also goes down by an order of magnitude, perhaps even two if your project is a very small one of 10-100 C files and you have dozens of cores per CPU and multiple CPUs. It's also much easier to interface custom automatic code generation tools with a custom build system based on standard make than to deal with the m4 mess of autotools. With standard make, you can at least type a shell command into the Makefile.
So, to answer your question: why use autotools? Answer: there is no reason to do so. Autotools has been obsolete since when commercial Unix has become obsolete. And the advent of multi-core CPUs has made autotools even more obsolete. Why programmers haven't realized that yet, is a mystery. I'll happily use standard make on my build systems, thank you. Yes, it takes some amount of work to generate the dependency files for C language header inclusion, but the amount of work is saved by not having to fight with autotools.
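To illustrate the dependency-file point, here is a sketch of such a non-recursive GNU Makefile where gcc generates the header dependencies itself as a side effect of compiling (file names are placeholders):

SRCS := src/main.c src/util.c
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

CC := gcc
# -MMD/-MP make gcc write a .d dependency file next to each .o
CFLAGS := -g -Wall -MMD -MP

myprog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

# pull in the generated dependency files; silently ignored on first build
-include $(DEPS)

clean:
	rm -f myprog $(OBJS) $(DEPS)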
I don't feel I'm an expert on this, but I'll still give you a bit of an analogy from my experience.
To some extent it is similar to why we write embedded code in C (a high-level language) rather than in assembly language.
Both serve the same purpose, but the latter is more lengthy, tedious, time-consuming, and more error-prone (unless you know the ISA of the processor very well).
The same is the case with the Automake tool versus writing your own makefiles.
Writing Makefile.am and configure.ac is much simpler than writing each project's Makefile individually.
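For example, a minimal pair for a single C program might look like this (a sketch; the package name is made up):

configure.ac:

AC_INIT([myprog], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

Makefile.am:

bin_PROGRAMS   = myprog
myprog_SOURCES = main.c

After running autoreconf --install, the usual ./configure && make && make install should just work.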