Should a "configure" script be distributed if configure.ac is available? - autoconf

Currently, our installation instructions are:
autoreconf -fi
./configure
...
The autoreconf step generates configure from configure.ac and Makefile.in from Makefile.am. If one of the dependencies (say pkg-config) is not installed, both configure and autoreconf fail, with the latter printing a cryptic error message.
When releasing source tarballs, should the configure script be supplied in the package or not? What other files need to be included if it is to be distributed? The directories build-aux and autom4te.cache and the file aclocal.m4 were also created.

In an SCM repository, nothing autogenerated should be present (including configure, though developer opinions diverge here). A tarball should contain the state after autoreconf -fi and/or autogen.sh (or whichever name you chose for it). Alternatively, you can use make dist, though it requires that every file that should appear in the tarball is also listed in the Makefiles.
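For example, make dist only picks up files automake already knows about; anything else must be listed in EXTRA_DIST in Makefile.am (a minimal sketch; the file names are illustrative):
EXTRA_DIST = autogen.sh README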

Your installation instructions are horribly broken. The user should not need to have the autotools chain installed to build your software. You must distribute the configure script in your tarball. Note that you should not check the configure script into your version control system. (You should not use your version control system as a distribution system.)

The configure script should be built by the maintainer and distributed in the tarball. End users should never have to touch it, and it is a good idea to ensure this via AM_MAINTAINER_MODE if you are using automake. If not, make sure your Makefile.in doesn't re-generate configure when running for end users.
Let automake generate a distribution for you if you want to know what else belongs there. The auxiliary directory build-aux and aclocal.m4 do; autom4te.cache doesn't.
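As a sketch of the AM_MAINTAINER_MODE approach, a configure.ac along these lines distributes a configure script whose rebuild rules stay off unless the user passes --enable-maintainer-mode (project name and version are illustrative):
AC_INIT([myprogram], [0.1])
AM_INIT_AUTOMAKE([foreign])
dnl Disable the automatic autotools re-run rules for end users.
AM_MAINTAINER_MODE([disable])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT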


Making os independent configure file which checks for curl dependency

I am writing a configure.ac file that checks for library dependencies.
The complete code is:
AC_INIT([myprogram], [0.1], [])
AC_CONFIG_AUX_DIR([build-aux])
AM_INIT_AUTOMAKE
AC_PROG_CC
AC_CHECK_LIB([curl], [curl_easy_setopt], [echo "libcurl library is present" > /dev/tty], [echo "libcurl library is not present" > /dev/tty] )
AC_CHECK_LIB([sqlite3], [sqlite3_open], [echo "sqlite3 library is present" > /dev/tty], [echo "sqlite library is not present" > /dev/tty] )
AC_CHECK_LIB([pthread], [pthread_create], [echo "pthread library is present" > /dev/tty], [echo "pthread library is not present" > /dev/tty] )
AC_CHECK_LIB([crypto], [SHA256], [echo "crypto library is present" > /dev/tty], [echo "crypto library is not present" > /dev/tty] )
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
"myprogram" is a program which needs to be installed in numerous user pcs.So, dependency check needs to be done in the begining, to find whether those four libraries are installed.
On systems where /usr/lib/i386-linux-gnu/libcurl.so exists, running the configure script gives the message "libcurl library is present". But on systems where only /usr/lib/i386-linux-gnu/libcurl.so.1.0 or something similar is present, it reports that libcurl is not present. If I create a soft link to libcurl.so, it correctly reports that libcurl is present:
ln -s /usr/lib/i386-linux-gnu/libcurl.so.1.0.0 /usr/lib/i386-linux-gnu/libcurl.so
The same holds for the other libraries as well.
Actually, I want to automate this process. Is there a way to do this without manually making a soft link, i.e. by making changes in the configure.ac file itself, so that configure will run on any machine without the need for the soft link?
While installing a library, the installer program will typically create a symbolic link from the library's real name (libcurl.so.1.0.0) to its linker name (libcurl.so) to allow the linker to find the actual library file. But this is not always true; sometimes the linker name is not created. That is why these complications happen: a check for the linker name concludes that the library is not installed.
On systems where /usr/lib/i386-linux-gnu/libcurl.so exists, running the configure script gives the message "libcurl library is present". But on systems where only /usr/lib/i386-linux-gnu/libcurl.so.1.0 or something similar is present, it reports that libcurl is not present.
Right, this is the behavior I would expect. What's going on here is that AC_CHECK_LIB emits a small program that references the symbol you gave it (in this case curl_easy_setopt), then runs a compile step and a link step to make sure the linker can resolve it. On a typical Linux distro you'll want to make sure that a package called libcurl-dev (or something like that) is installed, so that you have the header files and the libcurl.so symlink.
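Roughly, the test program that AC_CHECK_LIB([curl], [curl_easy_setopt]) emits looks like this sketch; only the link step matters, so a fake prototype is enough:
/* conftest.c (sketch): declare the symbol and reference it. */
char curl_easy_setopt ();
int main (void) { return curl_easy_setopt (); }
autoconf compiles this and links it with -lcurl added to LIBS; if the link succeeds, the library is considered present.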
But I want to automate this process. Is there a way to do this without manually making a soft link?
Installation of the libcurl-dev package can be easily automated. It can be accomplished in several ways, depending on how you want to do it. Linux packaging systems (e.g. rpmbuild, debhelper, etc.) have ways of pulling in build dependencies before building if they aren't installed. Configuration management tools that you use to set up the build machine (e.g. Ansible, SaltStack, etc.) could install it. At a minimum, the dependency should be listed in the release documentation, so that someone who has no access to these tools (or doesn't care to use them) can figure it out and build.
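As a sketch of the packaging-system route, a Debian-style source package lists its build dependencies in debian/control, and apt-get build-dep can then install them in one step (the exact package names vary by distribution and are illustrative here):
Source: myprogram
Build-Depends: debhelper (>= 9), libcurl4-openssl-dev, libsqlite3-dev, libssl-dev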
I wouldn't create a symlink in configure.ac -- it would likely break any future install of libcurl-dev. Furthermore, you would have to run configure with elevated privileges (e.g. sudo) to create the link.
While installing a library, the installer program will typically create a symbolic link from the library's real name (libcurl.so.1.0.0) to its linker name (libcurl.so) to allow the linker to find the actual library file. But this is not always true.
Actually, I don't ever remember seeing anything like this. Typically, when a DSO gets installed to the ldconfig "trusted directories" (e.g. /usr/lib, etc.), ldconfig gets run, so the real library (e.g. libcurl.so.1.0.0) gets a symlink (libcurl.so.1) in the same directory -- but not the development symlink (libcurl.so).
EDIT: Adding responses to comments
But why does ./configure also expect the development symlinks (libcurl.so, libcrypto.so, etc.)?
Because configure can be told to run the linker, as you discovered with AC_CHECK_LIB, and if those symlinks aren't there, the link will fail.
configure checks whether the binary can run on the system, and not whether a program which uses these libraries can be built.
configure also has runtime tests as well as compile-time and link-time tests, so it can do some limited testing of whether the output of compilation can run. configure's primary role is to ensure that prerequisites are installed/configured so that make will work, so testing that tools, headers, and libraries are installed and work in some fashion is what configure mostly does. The runtime tests will not work in some environments (cross-compilation), so lots of packages don't use them.
If I am not wrong, ./configure cannot be used to check whether a binary can run on a system, as it is only used when building a program.
configure can do some runtime testing of things configure has built as mentioned in the link above (e.g. AC_RUN_IFELSE).
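A minimal runtime test might look like this sketch; the fourth argument is what keeps it from breaking under cross-compilation:
AC_RUN_IFELSE(
  [AC_LANG_PROGRAM([[#include <stdlib.h>]], [[exit(0);]])],
  [AC_MSG_RESULT([test binary runs])],
  [AC_MSG_ERROR([test binary failed to run])],
  [AC_MSG_WARN([cross-compiling, skipping run test])])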
If ./configure succeeds, then the binary can run on the machine.
But the reverse is not true. That is, even if ./configure fails, the binary may run, as it does not depend on the development symlink (e.g. libcurl.so). Am I right?
Which binary are you referring to? The test created as part of AC_RUN_IFELSE, or the output of make? If configure succeeds, the output of make still might not work. That's what make check is for. If configure fails, it's likely make won't work, and you won't get to the part where you can test the output of make.
If the scenario is a missing libcurl.so and configure fails to link the AC_TRY_LINK test, how is that same link step going to work for your executable? Your executable does depend on that file (just for the link step), because you may have multiple libcurl.so.x libraries installed and the linker needs the development symlink to pick one.
By binary... I mean the program that has been successfully built on some other system that has all the dependencies installed. What I was saying is that the binary will run on a machine even if the development symlink (libcurl.so) is not there.
Sure, it's already gone past the link step and is linked to, say, libcurl.so.x and whatever other dependencies it may have.

Why is "autoreconf" not used often?

I am a newbie to Autotools. From my understanding, one would use the following basic steps to build software using Autotools:
autoreconf --install
./configure
make
However, I noticed that most open source software packages (on Linux) do not need the 1st step. Most of the time they just need steps 2 and 3 to build. It seems that they are already packaged with a Makefile.in. I am wondering why? Do they manually code the Makefile.in, or does the software developer use autoreconf to generate the Makefile.in before creating the software package?
The software developer who creates the tarball (or who checks out the sources from a version control system) will usually invoke autoreconf from a script called bootstrap.sh or autogen.sh, which may do other stuff. autoreconf might be invoked by the Makefile as well (for example when configure.ac has changed).
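Such a script is often just a thin wrapper; a minimal sketch of an autogen.sh:
#!/bin/sh
# Regenerate configure, Makefile.in, aclocal.m4, etc. from their sources.
autoreconf --force --install --verbose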
Most users will never need to run autoreconf, even those who are making some modifications to source (e.g. patches). Only those who need to make modifications to the package itself (making changes to configure.ac and/or Makefile.am) will need autoreconf.
Running autoreconf requires having the correct version of autotools installed already. This leads to a chicken-and-egg problem -- how do you get autotools installed in the first place? It also adds an extra dependency that most end-users don't really need.
As a result, most packagers run autoreconf before producing the source tarballs that they distribute. This means that if you download such a tarball, you can configure and build it without needing to install autotools first.

How to build src from a CygPort?

I have a question about the structure of the source code from a cygport package.
Here is the contents of a Cygports source file:
the actual source bundle for the project (tar.gz, tar.bz2, etc.)
any number of *.patch files.
a .cygport file
I am trying to build gedit-3.4.2 from cygports repository.
How does the .cygport file help me run the proper options in ./configure?
For instance, in gedit, if I don't specify --disable-spell it won't proceed due to an error. How do I get the list of ./configure options that were used to build the project when the cygport was built?
Is there some way we can use the cygport executable to build the cygport and change the prefix too?
Here is the contents of gedit-3.4.2-1.cygport:
inherit python gnome2
DESCRIPTION="GNOME text editor"
PATCH_URI="3.4.2-cygwin.patch"
DEPEND="gnome-common gtk-doc
girepository(Gtk-3.0)
pkgconfig(enchant)
pkgconfig(gtksourceview-3.0)
pkgconfig(libpeas-gtk-1.0)"
PKG_NAMES="${PN} ${PN}-devel"
PKG_HINTS="setup devel"
gedit_CONTENTS="--exclude=gtk-doc --exclude=libgedit* etc/ usr/bin/ usr/lib/gedit/ ${PYTHON_SITELIB#/} usr/share/"
gedit_devel_CONTENTS="usr/include/ usr/lib/gedit/libgedit* usr/lib/pkgconfig/ usr/share/gtk-doc/"
DIFF_EXCLUDES="*.desktop.in *.schemas.in *-marshal.h"
CYGCONF_ARGS="--libexecdir=/usr/lib --enable-python"
KEEP_LA_FILES="none"
EDIT: Someone from the Cygwin Ports mailing list said:
"The configure options are
--libexecdir=/usr/lib --enable-python
Which is from CYGCONF_ARGS."
Here is the contents of a Cygports source file:
You'd do better to think of it as a Cygwin package source file.
cygport is simply a tool for automating the creation of Cygwin binary and source packages. It is the primary tool available, but unlike with some other packaging systems, there's really nothing forcing you to use it. It is quite possible to build a Cygwin package entirely by hand, since it is really nothing more than a tarball that Cygwin's setup.exe can blindly unpack into the Cygwin root directory (typically c:\cygwin) with the expectation that this will put the package's files in sensible locations.
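As a sketch, hand-rolling a binary package amounts to little more than tarring up the installed file tree from a staging directory (the layout and archive name here are illustrative):
cd /tmp/staging        # contains usr/bin/..., usr/share/..., etc.
tar -cjf gedit-3.4.2-1.tar.bz2 usr/ etc/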
Before cygport existed, people did build their own ad hoc packaging systems. Many Cygwin package maintainers still use these tools they created. (Yours truly included; two of my three packages use cygport, but the third still uses a custom build system.)
Ultimately, you want to read the cygport manual, in /usr/share/doc/cygport/manual.html.
(Yes, I know, "RTFM" answers are frowned on here. But, as one who currently maintains two cygport based packages in the official Cygwin package repository, please believe me when I tell you that the manual is still the single best resource available on this topic.)
How does the .cygport file help me run the proper options in ./configure?
As you found out through other resources, you'd first need to edit the CYGCONF_ARGS value in the .cygport file.
The simplest possible step after that is cygport gedit-3.4.2-1.cygport all. That attempts to rebuild all the binary packages in a single step. It also builds a new source package containing updated .cygport and patch files.
If something breaks in the all build process, it is usually faster to switch to the sub-commands that all comprises instead of completely restarting the process. The all step just runs prep, compile, install, package, and finish for you, in that order. For instance, if all fails during the compilation step, there's probably no need to repeat the prep step; see the sketch below.
(It is exceptionally uncommon for cygport or a sane build system to wreck the build tree, forcing you to re-run prep. Far more commonly, you end up needing to re-do prep when you manually wreck the build tree while trying to get a new package to build for the first time and need to start over.)
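For instance, if all failed while compiling and you have fixed the cause, a sketch of resuming from that point (cygport accepts the sub-commands in sequence):
cygport gedit-3.4.2-1.cygport compile install package finish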
For instance, in gedit, if I don't specify --disable-spell it won't proceed due to an error.
You can probably fix that by installing the libaspell-devel package from the official Cygwin package repository with setup.exe.
Personally, I wouldn't disable any feature unless it meant installing unofficial packages, such as those from the Cygwin Ports project.[*] It is nice to have the Cygwin Ports repository, but because it contains so many packages, installing one can end up creating an "install the world" situation: package A depends on packages B, C and D, and C depends on E, F, G, H, and G depends on I, J, K, and... Dependency hierarchies within the Cygwin package repo tend to be flatter and narrower than those in the Cygports repo.
Is there some way we can use the cygport executable to build the cygport and change the prefix too?
You have guessed that you just add --prefix=/my/private/program/tree to CYGCONF_ARGS, I trust.
[*] If you are feeling confused about "Cygwin Ports" and cygport, the naming similarity is no coincidence. cygport is a tool created by Yaakov Selkowitz for himself when creating the Cygwin Ports package repository. Later, it became popular enough among other Cygwin package maintainers that it pushed out most of the competing build systems.

Linux configure/make, --prefix?

Bear with me, this one's not very easy to explain...
I'm trying to configure, make and make install Xfce into my buildroot build directory. When configuring I'm using
--prefix=/home/me/somefolder/mybuild/output/target
so that it builds to the right folder; however, when it's compressed and run I get errors from various config files where it's looking for files in
/home/me/somefolder/mybuild/output/target
(which of course doesn't exist.)
How do I set what folder to build into, yet set a different root directory for the config files to use?
Run ./configure --help and see what other options are available.
It is very common to provide different options to override different locations. By the standard, --prefix provides the default for all of them, so you need to override the config location after specifying the prefix. This course of action usually works for every automake-based project.
The worst-case scenario is when you need to modify the configure script, or even worse, the generated makefiles and config.h headers. But yeah, for Xfce you can try something like this:
./configure --prefix=/home/me/somefolder/mybuild/output/target --sysconfdir=/etc
I believe that should do it.
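If the intent is to stage the installation into a build directory while the configured paths still point at the final system locations, the conventional mechanism in automake-generated makefiles is the DESTDIR variable (a sketch):
./configure --prefix=/usr --sysconfdir=/etc
make
make install DESTDIR=/home/me/somefolder/mybuild/output/target
The files land under the DESTDIR tree, but the paths baked into the binaries and config files still refer to /usr and /etc.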
In my situation, --prefix= failed to update the path correctly under some warnings or failures. Please see the link below for the answer.
https://stackoverflow.com/a/50208379/1283198

What should Linux/Unix 'make install' consist of?

I've written a C++ program (command line, portable code) and I'm trying to release a Linux version at the same time as the Windows version. I've written a makefile as follows:
ayane: *.cpp *.h
	g++ -Wno-write-strings -o ayane *.cpp
Straightforward enough so far; but I'm given to understand it's customary to have a second step, make install. So when I put the install: target in the makefile... what command should be associated with it? (If possible I'd prefer it to work on all Unix systems as well as Linux.)
Installation
A less trivial installer will copy several things into place, first ensuring that the appropriate paths exist (using mkdir -p or similar). Typically something like this:
the executable goes in $INSTALL_PATH/bin
any libraries built for external consumption go in $INSTALL_PATH/lib or $INSTALL_PATH/lib/yourappname
man pages go in $INSTALL_PATH/share/man/man1 and possibly other sections if appropriate
other docs go in $INSTALL_PATH/share/yourappname
default configuration files go in $INSTALL_PATH/etc/yourappname
headers for others to link against go in $INSTALL_PATH/include/yourappname
Installation path
The INSTALL_PATH is an input to the build system, and usually defaults to /usr/local. This gives your users the flexibility to install under their $HOME without needing elevated permissions.
In the simplest case just use
INSTALL_PATH?=/usr/local
at the top of the makefile. Then the user can override it by setting an environment variable in their shell.
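Putting the pieces together, a minimal install target might look like this sketch (the man page ayane.1 is hypothetical; drop that line if you don't ship one):
install: ayane
	install -d $(INSTALL_PATH)/bin $(INSTALL_PATH)/share/man/man1
	install -m 755 ayane $(INSTALL_PATH)/bin/
	install -m 644 ayane.1 $(INSTALL_PATH)/share/man/man1/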
Deinstallation
You also occasionally see make installs that build a manifest to help with de-installation. The manifest can even be written as a script to do the work.
Another approach is just to have a make uninstall that looks for the things make install places, and removes them if they exist.
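A matching uninstall target, under the same assumptions as the install sketch above:
uninstall:
	rm -f $(INSTALL_PATH)/bin/ayane
	rm -f $(INSTALL_PATH)/share/man/man1/ayane.1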
In the simplest case you just copy the newly created executable into the /usr/local/bin path. Of course, it's usually more complicated than that.
Notice that most of these operations require special rights, which is why make install is usually invoked using sudo.
make install is usually the step that "installs" the binary into the correct place.
For example, when compiling Vim, make install may place it in /usr/local/bin.
Not all makefiles have a make install target.
