a project ships with a copy of library foo, in a filesystem layout like:
myproject/
myproject/src/ # sources of my project
myproject/libfoo/ # import of "foo" library
the standard (autotools-based) build-system builds libfoo, then builds myproject which dynamically links against libfoo.
libfoo is basically unmodified (with some minor amendments to properly fit into the build-system). libfoo uses autotools itself, so i'm usually calling configure recursively using AC_CONFIG_SUBDIRS.
however, libfoo is already packaged for various distributions, so i would like to avoid building against the imported library on these systems and instead use the system-wide installation - this way i get the benefits of a better-maintained version of libfoo (fewer bugs, security issues, ...).
otoh, i want to keep libfoo in my source-tree, so that i have a fallback for building on systems that do not ship that library (without requiring the user to separately fetch the sources and build the lib themselves).
i can think of a number of configure-flags i could introduce, so the user can select whether they want to build the project with the system-installed library, with the local copy, or without the library at all (it's an optional dependency).
disabling the "local foo" should completely disable building libfoo (and probably also skip configuring it)
e.g. something like:
./configure --enable-foo=no # aka "--disable-foo": build without foo
./configure --enable-foo # use system-wide foo
./configure --enable-foo=local # use local copy of foo
alternatively:
./configure --disable-foo
./configure --enable-foo --disable-local-foo
./configure --enable-foo --enable-local-foo
but i'd like to do this in a standard-conformant way.
what's the best practice for selecting, via autoconf, whether to use a local copy of a library, a system-wide copy, or no library at all?
pointers to projects that use such a mechanism are most welcome.
I have a similar setup in my project, where I use the included version of the BuDDy library when (1) the library isn't already installed, or (2) it is installed but does not have the interface I expect, or (3) configure was run with --with-included-buddy.
You can see the configure macro here. After that I just use $(BUDDY_CPPFLAGS) and $(BUDDY_LDFLAGS) in the Makefile.am files, and the top-level Makefile.am only includes the buddy directory conditionally in SUBDIRS.
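For reference, a rough sketch of that kind of check in configure.ac -- this is only an illustration, not the actual macro from my project, and it assumes BuDDy's bdd.h header and its bdd_init entry point:

AC_ARG_WITH([included-buddy],
  [AS_HELP_STRING([--with-included-buddy], [build against the bundled BuDDy library])],
  [], [with_included_buddy=check])
have_system_buddy=no
AS_IF([test "x$with_included_buddy" != xyes],
  [AC_CHECK_HEADER([bdd.h],
     [AC_CHECK_LIB([bdd], [bdd_init], [have_system_buddy=yes])])])
AS_IF([test "x$have_system_buddy" = xyes],
  [BUDDY_CPPFLAGS=""; BUDDY_LDFLAGS="-lbdd"],
  [BUDDY_CPPFLAGS='-I$(top_srcdir)/buddy/src'
   BUDDY_LDFLAGS='$(top_builddir)/buddy/src/libbdd.la'])
AC_SUBST([BUDDY_CPPFLAGS])
AC_SUBST([BUDDY_LDFLAGS])
AM_CONDITIONAL([USE_INCLUDED_BUDDY], [test "x$have_system_buddy" != xyes])

and in the top-level Makefile.am the usual conditional-SUBDIRS idiom:

if USE_INCLUDED_BUDDY
MAYBE_BUDDY = buddy
endif
SUBDIRS = $(MAYBE_BUDDY) src
DIST_SUBDIRS = buddy src

Listing the directory in DIST_SUBDIRS keeps make dist working even when the bundled copy is not being built.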
I prefer --with-foo when dealing with external software, but it's just a preference. The examples and documentation at the link might help you decide how you want to do it. I'd go with your first example, which uses only one flag, rather than the second one, which uses two, for easier documentation and maintenance.
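Along those lines, here is an untested sketch of how a single tri-state --with-foo flag might be parsed; it assumes foo ships a pkg-config file and that the bundled copy lives in libfoo/, so adjust to taste:

AC_ARG_WITH([foo],
  [AS_HELP_STRING([--with-foo@<:@=yes|no|local@:>@],
     [use the system foo, no foo at all, or the bundled copy @<:@default=yes@:>@])],
  [], [with_foo=yes])
AS_CASE([$with_foo],
  [no],    [],
  [local], [AC_CONFIG_SUBDIRS([libfoo])],
  [yes],   [PKG_CHECK_MODULES([FOO], [foo])],
  [AC_MSG_ERROR([invalid value '$with_foo' for --with-foo])])
AM_CONDITIONAL([ENABLE_FOO], [test "x$with_foo" != xno])
AM_CONDITIONAL([USE_LOCAL_FOO], [test "x$with_foo" = xlocal])

The USE_LOCAL_FOO conditional can then gate the libfoo entry in SUBDIRS, exactly as in the BuDDy answer above.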
I really don't think you want to do this
You're going to make the build machinery a lot more complicated, and autotools are already considered black magic by most. It'll make things a lot more complicated for the developer, a little more complicated for a potential distro packager and ever-so-slightly easier for the end user.
If you're conditionally configuring then you make the process of building distribution tarballs (make dist/make distcheck) more brittle.
This is the sort of trouble you can cause.
But if you must...
You may be able to adapt the code in my recent answer.
Related
I need to install the same C++/Fortran library compiled with different compilers on the system with CMake. Is there a standard location to install the different compiler-specific versions of the same library on the system? For example, assuming that lib.so and lib.a have already been installed by the system package manager under /usr/, is it good practice to install each of the additional compiler-specific versions in a different folder under, let's say, /usr/local? Or is there a better way of doing this that you can advise?
It depends on how many compilers/libraries/versions you have. If you have just a few of them, I think (almost) any choice of location is fine, but I personally prefer /opt/ paths for manually installed code. But if you start having several combinations of them you easily get into trouble. Besides, I think the question of the "best" location is related to the question of the "best" way to switch from using one library to another, possibly avoiding having to manually set LD_LIBRARY_PATH, the libraries to link, or similar things.
Here are some personal recommendations, based on my experience, for systems where you want to support many libraries/applications with many compilers/versions and also provide them to many users:
Do not use the root user to install compiled software: use an "installer account" instead, and grant read and execute permissions to other users where needed
Select a path for compiled software, e.g. /opt, and define two subfolders, /opt/build and /opt/install, the first for your sources and the place where you compile them, the second as the installation target
Create some subfolders based on categories, e.g. /compilers, /libraries, /applications, ... under both /opt/build and /opt/install
Start by preparing compilers under /compilers, e.g. /compilers/gnu/6.3 or /compilers/intel/2017. When possible compile them, e.g. from /opt/build/compilers/gnu/6.3 into /opt/install/compilers/gnu/6.3, or just unpack prebuilt ones into the install tree, e.g. /opt/install/compilers/intel/2017
Prepare the tree for libraries (or applications) by adding subfolders that specify the library version, the compiler and the compiler version, e.g. compile in /opt/build/libraries/boost/1.64.0/gnu/6.3 and install to /opt/install/libraries/boost/1.64.0/gnu/6.3 (see the sketch after this list)
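As a concrete illustration of the last two points -- the compiler and library paths below are just the example ones from the list, and "foo" stands in for any autoconf-based library (boost itself uses its own build system):

# working as the unprivileged installer account
cd /opt/build/libraries/foo/1.2.3/gnu/6.3/foo-1.2.3   # unpacked source tarball
./configure --prefix=/opt/install/libraries/foo/1.2.3/gnu/6.3 \
            CC=/opt/install/compilers/gnu/6.3/bin/gcc \
            CXX=/opt/install/compilers/gnu/6.3/bin/g++
make && make install

Every combination of library version and compiler then lands in its own self-contained prefix under /opt/install.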
At this stage, you have well organized things. But:
It is difficult to select which library you want to use: you have to set LD_LIBRARY_PATH or manually link the right one, and the situation gets worse when you also deal with applications
You are not considering dependencies between libraries: how can I force using g++ 6.3 when linking against boost/1.64.0/gnu/6.3?
To address these and many other issues, a good approach is to use a tool that can help you, e.g. Environment Modules (http://modules.sourceforge.net/), so that you can easily switch from one library to another, enforce dependencies, get help, and in general have something less error-prone in daily usage.
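For example, once you have written modulefiles for the installs (the module names below simply mirror the directory layout sketched above), daily usage looks something like:

module avail                                  # list everything that has a modulefile
module load compilers/gnu/6.3                 # puts that compiler on PATH
module load libraries/boost/1.64.0/gnu/6.3    # adjusts LD_LIBRARY_PATH etc. as the modulefile dictates
module list                                   # show what is currently loaded
module unload libraries/boost/1.64.0/gnu/6.3  # cleanly undo the environment changes

Each modulefile can also declare prerequisites and conflicts, which is one way to enforce the g++ 6.3 / boost dependency mentioned above.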
I think a major design flaw in Linux is the shared object hell when it comes to distributing programs in binary instead of source code form.
Here is my specific problem: I want to publish a Linux program in ELF binary form that should run on as many distributions as possible so my mandatory dependencies are as low as it gets: The only libraries required under any circumstances are libpthread, libX11, librt and libm (and glibc of course). I'm linking dynamically against these libraries when I build my program using gcc.
Optionally, however, my program should also support ALSA (sound interface), the Xcursor, Xfixes, and Xxf86vm extensions as well as GTK. But these should only be used if they are available on the user's system, otherwise my program should still run but with limited functionality. For example, if GTK isn't there, my program will fall back to terminal mode. Because my program should still be able to run without ALSA, Xcursor, Xfixes, etc. I cannot link dynamically against these libraries because then the program won't start at all if one of the libraries isn't there.
So I need to manually check if the libraries are present and then open them one by one using dlopen() and import the necessary function symbols using dlsym(). This, however, leads to all kinds of problems:
1) Library naming conventions:
Shared objects often aren't simply called "libXcursor.so" but have some kind of version extension like "libXcursor.so.1" or even really funny things like "libXcursor.so.0.2000". These extensions seem to differ from system to system. So which one should I choose when calling dlopen()? Using a hardcoded name here seems like a very bad idea because the names differ from system to system. So the only workaround that comes to my mind is to scan the whole library path and look for filenames starting with a "libXcursor.so" prefix and then do some custom version matching. But how do I know that they are really compatible?
2) Library search paths: Where should I look for the *.so files after all? This is also different from system to system. There are some default paths like /usr/lib and /lib but *.so files could also be in lots of other paths. So I'd have to open /etc/ld.so.conf and parse this to find out all library search paths. That's not a trivial thing to do because /etc/ld.so.conf files can also use some kind of include directive which means that I have to parse even more .conf files, do some checks against possible infinite loops caused by circular include directives etc. Is there really no easier way to find out the search paths for *.so?
So, my actual question is this: Isn't there a more convenient, less hackish way of achieving what I want to do? Is it really so complicated to create a Linux program that has some optional dependencies like ALSA, GTK, libXcursor... but should also work without them? Is there some kind of standard for doing what I want to do? Or am I doomed to do it the hackish way?
Thanks for your comments/solutions!
I think a major design flaw in Linux is the shared object hell when it comes to distributing programs in binary instead of source code form.
This isn't a design flaw as far as creators of the system are concerned; it's an advantage -- it encourages you to distribute programs in source form. Oh, you wanted to sell your software? Sorry, that's not the use case Linux is optimized for.
Library naming conventions: Shared objects often aren't simply called "libXcursor.so" but have some kind of version extension like "libXcursor.so.1" or even really funny things like "libXcursor.so.0.2000".
Yes, this is called external library versioning. Read about it here. As should be clear from that description, if you compiled your binaries using headers on a system that would normally give you libXcursor.so.1 as a runtime reference, then the only shared library you are compatible with is libXcursor.so.1, and trying to dlopen libXcursor.so.0.2000 will lead to unpredictable crashes.
Any system that provides libXcursor.so but not libXcursor.so.1 is either a broken installation, or is also incompatible with your binaries.
Library search paths: Where should I look for the *.so files after all?
You shouldn't be trying to dlopen any of these libraries using their full path. Just call dlopen("libXcursor.so.1", RTLD_GLOBAL);, and the runtime loader will search for the library in system-appropriate locations.
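If you want to see, for debugging purposes, which sonames the runtime loader already knows about on a given machine (rather than scanning library paths yourself), you can simply list the loader's cache from the shell:

ldconfig -p | grep -i xcursor   # each line shows a soname and the path it resolves to

But for the program itself, passing the bare soname to dlopen() as shown above is all you need.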
I'm trying to write a Haskell program which requires the output of external programs (such as lame, the mp3 encoder). While declaring dependency on a library is easy in cabal, how can one declare dependency on an executable?
You can't currently add a dependency in the .cabal file for external executables, other than a list of known build tools (see build-tools: alex for example).
You can however specify build-type: Configure, and then use a separate configure script to search for any additional binaries (for example, an autoconf-based configure script is perfectly fine, and can be used to set constants in your source).
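A minimal sketch of that approach, assuming the package is called myprog and the binary being looked for is lame (with build-type: Configure, Cabal runs ./configure for you, and by convention the script generates myprog.buildinfo from myprog.buildinfo.in):

AC_INIT([myprog], [0.1])
AC_PATH_PROG([LAME], [lame])
AS_IF([test "x$LAME" = x], [AC_MSG_ERROR([the lame encoder was not found in PATH])])
AC_SUBST([LAME])
AC_CONFIG_FILES([myprog.buildinfo])
AC_OUTPUT

The generated myprog.buildinfo can then set fields such as cpp-options to bake the discovered path into the build.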
Note that searching for a runtime dependency -- such as a lame encoder -- at compile time may be a bad idea, as the build and run environments are different on many package systems. It might be a better idea to dynamically search for required binaries at program startup.
For example, hmp3 hunts for mpg321 with
mmpg <- findExecutable (MPG321 :: String)
where MPG321 is the name of the program determined via a ./configure option. For more information, see the haddocks:
http://hackage.haskell.org/packages/archive/directory/latest/doc/html/System-Directory.html#v:findExecutable
I want to create a setup for my project so that it can be installed on any PC without installing the header files.
How can I do that?
There are two general ways to distribute programs:
Source Distribution (source code to be built). The most common way is to use GNU autotools to generate a configure script so that your project can be installed by doing ./configure && make install (see the sketch after this list)
Binary Distribution (prebuilt). Instead of shipping source, you ship binaries. There are a couple of competing standards, although the two main ones are the RPM and DEB formats.
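For the source-distribution route, a minimal autotools setup might look like this (project and file names are placeholders):

# configure.ac
AC_INIT([myprog], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am
bin_PROGRAMS = myprog
myprog_SOURCES = main.c

Running autoreconf --install once gives you the configure script; users then build with ./configure && make && make install, and make dist produces the tarball you publish. No headers are installed unless you explicitly list them (e.g. with include_HEADERS).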
You just changed your question (appreciated, it was kind of vague), so my answer no longer applies ...
make sure you have a C compiler
I'd be surprised if you didn't, Linux normally has one
find an editor you are comfortable with
vi and emacs are the classics
write your first program and compile it
learn about makefiles
learn about subprojects and libraries
In many respects, your question is too vague to be answerable. You will need to describe more what you have in mind. All else apart, if you are using an integrated development environment (IDE), then what you do should be coloured strongly by what the IDE encourages you to do. (Fighting your IDE is counter-productive; I've just never found an IDE that doesn't make me want to fight it.)
However, for a typical project on Linux, you will create a directory to hold the materials. For a small project (up to a few thousand lines of code in a few - say 5-20 - files), you might not need any more structure than a single directory. For bigger projects, you will segregate sub-sections of the project into separate sub-directories under the main project directory.
Depending on your build mechanisms, you may have a single makefile at the top of the project hierarchy (or the only directory in the 'hierarchy'). This is in line with the 'Recursive Make Considered Harmful' paper (P. Miller). Alternatively, you can create a separate makefile for each sub-directory and have the top-level makefile simply coordinate builds across directories.
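To make the two options concrete (directory names are just examples): if you happen to use automake, the recursive layout is simply a top-level

SUBDIRS = lib src tests

in Makefile.am, with one Makefile.am per sub-directory, while the non-recursive layout keeps a single top-level Makefile.am along the lines of

bin_PROGRAMS = myapp
myapp_SOURCES = src/main.c src/util.c lib/helper.c
AM_CPPFLAGS = -I$(top_srcdir)/lib

Plain hand-written makefiles can follow the same two shapes.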
You should also consider which version control system (VCS) you will use.
I have a plugin project I've been developing for a few years where the plugin works with numerous combinations of [primary application version, 3rd party library version, 32-bit vs. 64-bit]. Is there a (clean) way to use autotools to create a single makefile that builds all versions of the plugin?
As far as I can tell from skimming through the autotools documentation, the closest approximation to what I'd like is to have N independent copies of the project, each with its own makefile. This seems a little suboptimal for testing and development as (a) I'd need to continually propagate code changes across all the different copies and (b) there is a lot of wasted space in duplicating the project so many times. Is there a better way?
EDIT:
I've been rolling my own solution for a while where I have a fancy makefile and some perl scripts to hunt down various 3rd party library versions, etc. As such, I'm open to other non-autotools solutions. For other build tools, I'd want them to be very easy for end users to install. The tools also need to be smart enough to hunt down various 3rd party libraries and headers without a huge amount of trouble. I'm mostly looking for a linux solution, but one that also works for Windows and/or the Mac would be a bonus.
If your question is:
Can I use the autotools on some machine A to create a single universal makefile that will work on all other machines?
then the answer is "No". The autotools do not even make a pretense at trying to do that. They are designed to contain portable code that will determine how to create a workable makefile on the target machine.
If your question is:
Can I use the autotools to configure software that needs to run on different machines, with different versions of the primary software which my plugin works with, plus various 3rd party libraries, not to mention 32-bit vs 64-bit issues?
then the answer is "Yes". The autotools are designed to be able to do that. Further, they work on Unix, Linux, MacOS X, BSD.
I have a program, SQLCMD (which pre-dates the Microsoft program of the same name by a decade and more), which works with the IBM Informix databases. It detects which version of the client software (called IBM Informix ESQL/C, part of the IBM Informix ClientSDK or CSDK) is installed, and whether it is 32-bit or 64-bit, and adapts its functionality to what is available in the supporting product. It supports versions that have been released over a period of about 17 years. It is autoconfigured -- I had to write some autoconf macros for the Informix functionality, and for a couple of other gizmos (high-resolution timing, presence of /dev/stdin, etc.). But it is doable.
On the other hand, I don't try and release a single makefile that fits all customer machines and environments; there are just too many possibilities for that to be sensible. But autotools takes care of the details for me (and my users). All they do is:
./configure
That's easier than working out how to edit the makefile. (Oh, for the first 10 years, the program was configured by hand. It was hard for people to do, even though I had pretty good defaults set up. That was why I moved to auto-configuration: it makes it much easier for people to install.)
Mr Fooz commented:
I want something in between. Customers will use multiple versions and bitnesses of the same base application on the same machine in my case. I'm not worried about cross-compilation such as building Windows binaries on Linux.
Do you need a separate build of your plugin for the 32-bit and 64-bit versions? (I'd assume yes - but you could surprise me.) So you need to provide a mechanism for the user to say
./configure --use-tppkg=/opt/tp/pkg32-1.0.3
(where tppkg is a code for your third-party package, and the location is specifiable by the user.) However, keep in mind usability: the fewer such options the user has to provide, the better; against that, do not hard code things that should be optional, such as install locations. By all means look in default locations - that's good. And default to the bittiness of the stuff you find. Maybe if you find both 32-bit and 64-bit versions, then you should build both -- that would require careful construction, though. You can always echo "Checking for TP-Package ..." and indicate what you found and where you found it. Then the installer can change the options. Make sure you document in './configure --help' what the options are; this is standard autotools practice.
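A rough, untested sketch of how such an option might be declared -- tppkg, tp.h and the default search locations are purely illustrative:

AC_ARG_WITH([tppkg],
  [AS_HELP_STRING([--with-tppkg=DIR], [location of the third-party package])],
  [tppkg_dir=$withval], [tppkg_dir=check])
AS_IF([test "x$tppkg_dir" = xcheck],
  [for d in /opt/tp/pkg64 /opt/tp/pkg32 /usr/local/tp; do
     test -f "$d/include/tp.h" && { tppkg_dir=$d; break; }
   done])
AC_MSG_CHECKING([for TP-Package])
AS_IF([test -f "$tppkg_dir/include/tp.h"],
  [AC_MSG_RESULT([$tppkg_dir])],
  [AC_MSG_RESULT([not found])
   AC_MSG_ERROR([TP-Package not found; use --with-tppkg=DIR])])
AC_SUBST([TPPKGDIR], [$tppkg_dir])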
Do not do anything interactive though; the configure script should run, reporting what it does. The Perl Configure script (note the capital letter - it is a wholly separate automatic configuration system) is one of the few intensively interactive configuration systems left (and that is probably mainly because of its heritage; if starting anew, it would most likely be non-interactive). Such systems are more of a nuisance to configure than the non-interactive ones.
Cross-compilation is tough. I've never needed to do it, thank goodness.
Mr Fooz also commented:
Thanks for the extra comments. I'm looking for something like:
./configure --use-tppkg=/opt/tp/pkg32-1.0.3 --use-tppkg=/opt/tp/pkg64-1.1.2
where it would create both the 32-bit and 64-bit targets in one makefile for the current platform.
Well, I'm sure it could be done; I'm not so sure that it is worth doing by comparison with two separate configuration runs with a complete rebuild in between. You'd probably want to use:
./configure --use-tppkg32=/opt/tp/pkg32-1.0.3 --use-tppkg64=/opt/tp/pkg64-1.1.2
This indicates the two separate directories. You'd have to decide how you're going to do the build, but presumably you'd have two sub-directories, such as 'obj-32' and 'obj-64' for storing the separate sets of object files. You'd also arrange your makefile along the lines of:
FLAGS_32 = ...32-bit compiler options...
FLAGS_64 = ...64-bit compiler options...
TPPKG32DIR = #TPPKG32DIR#
TPPKG64DIR = #TPPKG64DIR#
OBJ32DIR = obj-32
OBJ64DIR = obj-64
BUILD_32 = #BUILD_32#
BUILD_64 = #BUILD_64#
TPPKGDIR =
OBJDIR =
FLAGS =
all: ${BUILD_32} ${BUILD_64}
build_32:
	${MAKE} TPPKGDIR=${TPPKG32DIR} OBJDIR=${OBJ32DIR} FLAGS=${FLAGS_32} build
build_64:
	${MAKE} TPPKGDIR=${TPPKG64DIR} OBJDIR=${OBJ64DIR} FLAGS=${FLAGS_64} build
build: ${OBJDIR}/plugin.so
This assumes that the plugin would be a shared object. The idea here is that the autotool would detect the 32-bit or 64-bit installs for the Third Party Package, and then make substitutions. The BUILD_32 macro would be set to build_32 if the 32-bit package was required and left empty otherwise; the BUILD_64 macro would be handled similarly.
When the user runs 'make all', it will build the build_32 target first and the build_64 target next. To build the build_32 target, it will re-run make and configure the flags for a 32-bit build. Similarly, to build the build_64 target, it will re-run make and configure the flags for a 64-bit build. It is important that all the flags affected by 32-bit vs 64-bit builds are set on the recursive invocation of make, and that the rules for building objects and libraries are written carefully - for example, the rule for compiling source to object must be careful to place the object file in the correct object directory - using GCC, for example, you would specify (in a .c.o rule):
${CC} ${CFLAGS} -o ${OBJDIR}/$*.o -c $*.c
The macro CFLAGS would include the ${FLAGS} value which deals with the bits (for example, FLAGS_32 = -m32 and FLAGS_64 = -m64), and so when building the 32-bit version, FLAGS = -m32 would be included in the CFLAGS macro.
The residual issue in the autotools is working out how to determine the 32-bit and 64-bit flags. If the worst comes to the worst, you'll have to write macros for that yourself. However, I'd expect (without having researched it) that you can do it using standard facilities from the autotools suite.
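On the configure side, the substitutions that the makefile sketch relies on might be produced along these lines, assuming tppkg32_dir and tppkg64_dir were set by the detection described earlier (note that AC_CONFIG_FILES substitutes @VAR@ placeholders, so the #TPPKG32DIR#-style markers above would become @TPPKG32DIR@; only the -m32 probe is shown, the -m64 one being analogous):

AC_SUBST([TPPKG32DIR], [$tppkg32_dir])
AC_SUBST([TPPKG64DIR], [$tppkg64_dir])
AS_IF([test -n "$tppkg32_dir"], [BUILD_32=build_32], [BUILD_32=])
AS_IF([test -n "$tppkg64_dir"], [BUILD_64=build_64], [BUILD_64=])
AC_SUBST([BUILD_32])
AC_SUBST([BUILD_64])
# check that the compiler actually accepts -m32 before relying on it
save_CFLAGS=$CFLAGS
CFLAGS="$CFLAGS -m32"
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([], [])], [FLAGS_32=-m32], [FLAGS_32=])
CFLAGS=$save_CFLAGS
AC_SUBST([FLAGS_32])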
Unless you create yourself a carefully (even ruthlessly) symmetric makefile, it won't work reliably.
As far as I know, you can't do that. However, are you stuck with autotools? Are neither CMake nor SCons an option?
We tried it and it doesn't work! So now we use SCons.
Some articles on this topic: 1 and 2
Edit:
A small example of why I love SCons:
env.ParseConfig('pkg-config --cflags --libs glib-2.0')
With this line of code you add GLib to the compile environment (env). And don't forget the User Guide, which is just great for learning SCons (you really don't have to know Python!). For the end user you could try SCons with PyInstaller or something like that.
And in comparison to make you use Python, so a complete programming language! With this in mind you can do just about everything (more or less).
Have you ever considered using a single project with multiple build directories?
If your automake project is implemented in a proper way (i.e.: NOT like gcc),
the following is possible:
mkdir build1 build2 build3
cd build1
../configure $(YOUR_OPTIONS)
cd ../build2
../configure $(YOUR_OPTIONS2)
[...]
you are able to pass different configuration parameters like include directories and compilers (e.g. cross compilers).
you can then even build all of them from a single command line:
make -C build1 && make -C build2 && make -C build3