Library resolution with autoconf?

I'm building my first autoconf managed package.
However I can't find any simple examples anywhere of how to specify a required library, and find that library where it might be in various different places.
I've currently got:
AC_CHECK_LIB(['event'], ['event_init'])
but:
It doesn't find the version installed in /opt/local/lib
It doesn't complain if the library isn't actually found
I need to set the include path to /opt/local/include too
any help, or links to decent tutorials much appreciated...

An autoconf script cannot guess the "optional" library locations, which may vary from one platform to another. So you can say
CPPFLAGS="-I/opt/local/include" LDFLAGS="-L/opt/local/lib" ./configure
For AC_CHECK_LIB() you need to specify the fail condition explicitly in "action-if-false" argument:
dnl This simply prints "no" and continues:
AC_CHECK_LIB([m], [sqrt123])
dnl This will stop:
AC_CHECK_LIB([m], [sqrt123], [], [AC_MSG_ERROR([sqrt123 was not found in libm])])
Output:
checking for sqrt123 in -lm... no
checking for sqrt123 in -lm... no
configure: error: sqrt123 was not found in libm
AC_CHECK_LIB() does not fail by default for obvious reasons: one may check for several different libraries that provide similar functionality and choose one of them :)
Also have a look at this post on a similar topic.
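For the question's libevent case, a minimal configure.ac sketch combining a header check and a hard-failing library check (assuming the header is event.h and the symbol is event_init) could be:
AC_CHECK_HEADERS([event.h], [],
    [AC_MSG_ERROR([event.h not found; try CPPFLAGS="-I/opt/local/include"])])
AC_CHECK_LIB([event], [event_init], [],
    [AC_MSG_ERROR([libevent not found; try LDFLAGS="-L/opt/local/lib"])])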

You need to manually set CFLAGS, CXXFLAGS and LDFLAGS if you want gcc/g++ to look in non-standard locations.
So, before calling AC_CHECK_LIB(), do something like
CFLAGS="$CFLAGS -I/opt/local/include"
CXXFLAGS="$CXXFLAGS -I/opt/local/include"
LDFLAGS="$LDFLAGS -L/opt/local/lib"
You don't need CXXFLAGS if you're only using gcc throughout your configure script.

If the library ships a .pc file, consider using the PKG_CHECK_MODULES() macro which does the things you want. If it's your own library, just ship a .pc file into /usr/lib/pkgconfig, it'll make it much easier for other developers to depend/use it.
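A minimal sketch for the libevent case, assuming the library installs a pkg-config module named libevent (verify with pkg-config --list-all):
PKG_CHECK_MODULES([EVENT], [libevent], [],
    [AC_MSG_ERROR([libevent not found via pkg-config])])
dnl The macro sets EVENT_CFLAGS and EVENT_LIBS for use in Makefile.am:
dnl     myprog_CPPFLAGS = $(EVENT_CFLAGS)
dnl     myprog_LDADD = $(EVENT_LIBS)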

I know this is an old thread now, but I guess this may help some people out. This is how I find some stuff.
hdff="no"
hdffprefix="ERROR"
AC_ARG_WITH(hdf,[ --with-hdf Compile with hdf library, for output.],[hdffprefix=$withval hdff="yes"],[])
# if there is no value given, it appears tha hdffprefix is set to "yes"
if test $hdffprefix = "yes" -a $hdff = "yes"
then
echo "HDF: Attempting to find HDF"
hdffprefix="ERROR"
# check if hdffprefix is set, if it is not, it sets it to "ERROR" and the
# 'if' comparison evaluates to true
if [[ "$hdffprefix" == "ERROR" ]]
then
echo "HDF: hdffprefix not set, searching PATH"
for i in `echo $PATH | tr ':' '\n'`
do
if [[ $i == *hdf* ]]
then
if [[ $i == *bin/* ]]
then
hdffprefix=${i%bin/}
# if it doesn't exist, re-set to ERROR
if [[ ! -f ${hdffprefix}include/hdf.h ]]
then
hdffprefix="ERROR"
fi
elif [[ $i == *bin* ]]
then
hdffprefix=${i%bin}
# if it doesn't exist, re-set to ERROR
if [[ ! -f ${hdffprefix}include/hdf.h ]]
then
hdffprefix="ERROR"
fi
fi
fi
done
if [[ "$hdffprefix" == "ERROR" ]]
then
echo "HDF: hdffprefix not found in PATH, trying 'which'"
WHICH_TEST_HDF=`which hdf2gif`
if [[ WHICH_TEST_HDF != "" ]]
then
hdffprefix=${WHICH_TEST_HDF%bin/hdf2gif}
else
echo "HDF: Warning - hdf not found"
fi
fi
fi
if [[ "$hdffprefix" != "ERROR" ]]
then
hdff="yes"
echo "HDF found: $hdffprefix"
fi
fi
if test $hdff = 'yes'; then
hdfincs=" -DUSE_HDF -I"${hdffprefix}"include"
scriptotherlibsinc=${scriptotherlibsinc}" -L"${hdffprefix}"/lib"
scriptotherlibs=${scriptotherlibs}" -lmfhdf -ldf -ljpeg -lz"
AC_CHECK_HEADERS([${hdffprefix}/include/hdf.h],,[AC_MSG_ERROR([Cannot find hdf.h])])
AC_CHECK_HEADERS([${hdffprefix}/include/mfhdf.h],,[AC_MSG_ERROR([Cannot find mfhdf.h])])
fi
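With this in configure.ac, the package can then be configured with either ./configure --with-hdf (which triggers the PATH search above) or ./configure --with-hdf=/opt/hdf4/ (the path here is just an illustration; note that some of the concatenations, such as ${hdffprefix}include, assume the prefix ends in a slash).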

Here's how to do it:
# We need the math library for some tests.
AC_CHECK_LIB([m], [floor], [],
    [AC_MSG_ERROR([Can't find or link to the math library.])])
Note that it does not automatically error out when the library is not found; you must call AC_MSG_ERROR() as in the code above.

So you want to set up autoconf to find these directories automatically, and codelogic gives the answer; but suppose you don't want to search there on every system, only on a Mac. You can add the following:
AC_CANONICAL_HOST
case $host_os in
    darwin* )
        CFLAGS="$CFLAGS -I/opt/local/include"
        CXXFLAGS="$CXXFLAGS -I/opt/local/include"
        LDFLAGS="$LDFLAGS -L/opt/local/lib"
        ;;
esac
Note that I wrote it as a case statement so that you can add branches for other operating systems later (such as linux* and BSD), as sketched below.
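For example, a later version of the same block with a Linux branch might look like this (a sketch; the Linux paths are placeholders for wherever your libraries actually live):
case $host_os in
    darwin* )
        CFLAGS="$CFLAGS -I/opt/local/include"
        CXXFLAGS="$CXXFLAGS -I/opt/local/include"
        LDFLAGS="$LDFLAGS -L/opt/local/lib"
        ;;
    linux* )
        CFLAGS="$CFLAGS -I/usr/local/include"
        CXXFLAGS="$CXXFLAGS -I/usr/local/include"
        LDFLAGS="$LDFLAGS -L/usr/local/lib"
        ;;
esac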

If you happen to be using GCC or Clang, the standard way is to set the environment variable CPLUS_INCLUDE_PATH to the path of the non-official include files, and LIBRARY_PATH for the libraries. Note that you do not have to change anything in configure.ac; you can just call configure this way:
$ export CPLUS_INCLUDE_PATH=/opt/local/include
$ export LIBRARY_PATH=/opt/local/lib
$ ./configure
De facto standard variables
Variable           | Lang  | Usage
-------------------|-------|---------
C_INCLUDE_PATH     | C     | colon-separated list of include directory paths
CPLUS_INCLUDE_PATH | C++   | colon-separated list of include directory paths
LIBRARY_PATH       | C/C++ | colon-separated compile-time static linking dirs
LD_RUN_PATH        | C/C++ | colon-separated compile-time dynamic linking dirs
LD_LIBRARY_PATH    | C/C++ | colon-separated run-time dynamic linking dirs
CPPFLAGS           | C/C++ | preprocessor flags
CFLAGS             | C     | compiling flags
CXXFLAGS           | C++   | compiling flags
LDFLAGS            | C/C++ | linking flags
NOTE: You can use CPPFLAGS or LDFLAGS; however, CPLUS_INCLUDE_PATH/LIBRARY_PATH exactly fits your requirement. CPPFLAGS/LDFLAGS are for flags, which can be many things, but the *_PATH variables are specifically for paths.
Portability Note: While this will work on many modern compilers, not all compilers will respect these variables. Some cross-compilers will outright ignore or overwrite them, which forces one to resort to CFLAGS and LDFLAGS modifications as mentioned in other answers.
SOURCE: the GCC documentation for CPLUS_INCLUDE_PATH: https://gcc.gnu.org/onlinedocs/cpp/Environment-Variables.html

Related

Is there a way to define custom implicit GNU Make rules?

I'm often creating png files out of dot (graphviz format) files. The command to do so is the following:
$ dot my_graph.dot -o my_graph.png -Tpng
However, I would like a shorter command, like $ make my_graph.png, that automatically generates my png file.
For the moment, I'm using a Makefile in which I've defined the following rule, but the recipe is only available in the directory containing the Makefile:
%.eps: %.dot
	dot $< -o $@ -Teps
Is it possible to define custom implicit GNU Make recipes? That would allow the above recipe to be available system-wide.
If not, what solution do you use to solve this kind of problem?
Setup:
Fedora Linux with ZSH/Bash
You could define shell functions in your shell's startup files, e.g.
dotpng()
{
    dot "${1%.dot}.dot" -o "${1%.dot}.png" -Tpng
}
This function can be called like
dotpng my_graph.dot
or
dotpng my_graph
The code ${1%.dot}.dot strips .dot from the file name if present and appends it (again) to allow both my_graph.dot and my_graph as function argument.
Is it possible to define custom implicit GNU Make recipes ?
Not without modifying the source code of GNU Make.
If not, what solution do you use to solve those kind of problem ?
I wouldn't be a fan of modifying the system globally, but you could do the following:
Create a file /usr/local/lib/make/myimplicitrules.make with the content
%.eps: %.dot
	dot $< -o $@ -Teps
Use include /usr/local/lib/make/myimplicitrules.make in your Makefile.
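A project Makefile then shrinks to something like this sketch (the target name is just an example):
include /usr/local/lib/make/myimplicitrules.make

all: my_graph.eps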
I would rather use a git submodule or similar to share common configuration between projects, rather than depending on global configuration. Depending on global environment will make your program hard to test and non-portable.
I would rather go with a shell function, something along the lines of:
mymake() {
    make -f <(cat <<'EOF'
%.eps: %.dot
	dot $< -o $@ -Teps
EOF
) "$@"
}
mymake my_graph.dot
GNU Make lets you specify extra makefiles to read using the MAKEFILES
environment variable. Quoting from info '(make)MAKEFILES Variable':
- the default goal is never taken from one of these makefiles (or any makefile included by them), and it is not an error if the files listed in 'MAKEFILES' are not found
- if you are running 'make' without a specific makefile, a makefile in 'MAKEFILES' can do useful things to help the built-in implicit rules work better
As an example, with no makefile in the current directory and the
following .mk files in make's include path (e.g. via
MAKEFLAGS=--include-dir="$HOME"/.local/lib/make/) you can create
subdir gen/ and convert my_graph.dot or dot/my_graph.dot by
running:
MAKEFILES=dot.mk make gen/my_graph.png
To further save some typing it's tempting to add MAKEFILES=dot.mk
to a session environment but defining MAKEFILES in startup files
can make things completely nontransparent. For that reason I prefer
seeing MAKEFILES=… on the command line.
File: dot.mk
include common.mk
genDir ?= gen/
dotDir ?= dot/
dotFlags ?= $(if $(DEBUG),-v)
Tvariant ?= :cairo:cairo
vpath %.dot $(dotDir)
$(genDir)%.png $(genDir)%.svg $(genDir)%.eps : %.dot | $(genDir).
	dot $(dotFlags) $< -o $@ -T'$(patsubst .%,%,$(suffix $@))$(Tvariant)'
The included common.mk is where you'd store general definitions to
manage directory creation, diagnostics etc., e.g.
.PRECIOUS: %/.  ## preempt 'unlink: ...: Is a directory'
%/. : ; $(if $(wildcard $@),,mkdir -p -- $(@D))
References:
?= = := … - info '(make)Reading Makefiles'
vpath - info '(make)Selective Search'
order-only prerequisites (e.g. | $(genDir).) - info '(make)Prerequisite Types'
.PRECIOUS - info '(make)Chained Rules'

Meaning of UTS in UTS_RELEASE

UTS_RELEASE defines the kernel version in Linux. It's defined in generated/utsrelease.h, which is created by the main Makefile like so:
# KERNELRELEASE can change from a few different places, meaning version.h
# needs to be updated, so this check is forced on all builds
uts_len := 64
define filechk_utsrelease.h
	if [ `echo -n "$(KERNELRELEASE)" | wc -c ` -gt $(uts_len) ]; then \
	  echo '"$(KERNELRELEASE)" exceeds $(uts_len) characters' >&2; \
	  exit 1; \
	fi; \
	(echo \#define UTS_RELEASE \"$(KERNELRELEASE)\";)
endef
I was wondering what UTS stands for, here?
I'll make a bet: it comes from Unix's early history.
Unix Time Sharing
http://en.wikipedia.org/wiki/Time-sharing
(with another link to give more weight to my guess: http://www.linuxmisc.com/9-unix-programmer/515225795f89ebf5.htm)
Additionally if you search for UTS on Wikipedia you'll find this as evidence too:
UTS is a three-letter abbreviation which may describe:
Time-sharing, known as Unix Time-sharing System (UTS) when abbreviated in the source code of many Unix-like operating systems
Maybe https://lwn.net/Articles/531114/ and https://lwn.net/Articles/179345/ are the right(tm) answer :-)
For example, if the KERNELRELEASE value is:
3.18.31-g18e453b
then the generated utsrelease.h will contain:
#define UTS_RELEASE "3.18.31-g18e453b"
In Android, it goes here:
Settings > About Phone > Kernel Version

Compiling static library for Google Native Client using SCons

I'm working on a few multi platform projects that all depend on common framework.
I want to add support for Google Native Client (NaCl). The way I approached the problem is to first compile the framework as a static library (this is how I've been doing it on all other platforms).
I have to say that I have never used SCons before, though I'm starting to grasp it. Starting from a build.scons from a tutorial, I can get some code compiling and linking. Now I want to skip the linking step, but it seems the nacl_env was never intended to compile static libraries.
Reading the SCons help didn't help me much since the Library node is missing from the nacl_env.
I don't think I understand SCons well enough to write the whole build process from scratch, so I was hoping not to have to do so.
1. Am I approaching the problem correctly?
2. Any tips or sample NaCl static libs built using SCons?
OK, what I did is way more trickery than you probably need.
I wanted my static library to handle the initialization steps of the NaCl module, and then call some project-specific function.
I ended up turning my whole framework and the contents of the built-in libppapi_cpp.a into a single .o file, and then that into a single .a file, a static library.
I needed a single .o file because otherwise I would run into dependency problems related to initialization that I could not solve.
build_lib.sh (framework):
#!/bin/bash -e
SDK="/home/kalmi/ik/nacl_sdk/pepper_15"
function create_allIn_a {
    TMPDIR="`mktemp -d`"
    echo $TMPDIR
    cp $O_FILES $TMPDIR
    pushd $TMPDIR &> /dev/null
    $AR x $LIBPPAPI_CPP_A
    $LD -Ur * -o ALL.o
    $AR rvs $OUTPUT_NAME ALL.o
    $RANLIB $OUTPUT_NAME
    popd &> /dev/null
}
./scons
BIN_BASE="$SDK/toolchain/linux_x86/bin"
LD="$BIN_BASE/i686-nacl-ld"
AR="$BIN_BASE/i686-nacl-ar"
RANLIB="$BIN_BASE/i686-nacl-ranlib"
LIBPPAPI_CPP_A="$SDK/toolchain/linux_x86_newlib/x86_64-nacl/lib32/libppapi_cpp.a"
O_FILES="`find $(pwd)/opt_x86_32 | grep .o$ | grep --invert-match my_main.o | tr "\n" " "`"
LIBDIR="../../../bin/lib/lib32"
mkdir -p $LIBDIR
if [ -f $LIBDIR/libweb2grid_framework.a ]; then
    rm $LIBDIR/libweb2grid_framework.a
fi
OUTPUT_NAME="`readlink -m $LIBDIR/libweb2grid_framework.a`"
create_allIn_a
BIN_BASE="$SDK/toolchain/linux_x86/bin"
LD="$BIN_BASE/x86_64-nacl-ld"
AR="$BIN_BASE/x86_64-nacl-ar"
RANLIB="$BIN_BASE/x86_64-nacl-ranlib"
LIBPPAPI_CPP_A="$SDK/toolchain/linux_x86_newlib/x86_64-nacl/lib64/libppapi_cpp.a"
O_FILES="`find $(pwd)/opt_x86_64 | grep .o$ | grep --invert-match my_main.o | tr "\n" " "`"
LIBDIR="../../../bin/lib/lib64"
mkdir -p $LIBDIR
if [ -f $LIBDIR/libweb2grid_framework.a ]; then
    rm $LIBDIR/libweb2grid_framework.a
fi
OUTPUT_NAME="`readlink -m $LIBDIR/libweb2grid_framework.a`"
create_allIn_a
./scons -c
The my_main.o file is excluded from the static library, because that file contains the function that is to be provided by the project that uses this framework.
The build.scons file for the framework is truly ordinary.
build.scons (for some project that uses this framework):
#! -*- python -*-
#What to compile:
sources = [ 'src/something.cpp', 'src/something_helper.cpp' ]
###############################################################
import make_nacl_env
import nacl_utils
import os
nacl_env = make_nacl_env.NaClEnvironment(
    use_c_plus_plus_libs=False,
    nacl_platform=os.getenv('NACL_TARGET_PLATFORM'))
nacl_env.Append(
    # Add a CPPPATH that enables the full-path #include directives, such as
    # #include "examples/sine_synth/sine_synth.h"
    CPPPATH=[os.path.dirname(os.path.dirname(os.path.dirname(os.getcwd())))],
    LIBS=['web2grid_framework', 'srpc'],
    LIBPATH=['../../../bin/lib/lib32', '../../../bin/lib/lib64'],
    LINKFLAGS=['-pthread']
)
nacl_env.AllNaClModules(sources, 'client')
Some lines worth highlighting:
use_c_plus_plus_libs=False,
LIBS=['web2grid_framework','srpc'],
LIBPATH=['../../../bin/lib/lib32','../../../bin/lib/lib64'],
LINKFLAGS=['-pthread']
I am not saying that this is a clean method, but it gets the job done.
So, there are two questions here:
1. Using SCons:
NaCl uses SCons for its examples, simply to make compiling the examples easier. In reality, SCons simply directs to the GCC/G++ compilers in the SDK build directories. (SCons takes the input scripts and creates the final parameter string to send to GCC.)
GCC is a common compiler, and is well documented on the net: http://gcc.gnu.org/
How you integrate NaCl compilation into your workflow is up to you (i.e. you're not forced to use SCons).
For instance, if you'd like to go to GCC directly, you can simply call:
<path to bin>/x86_64-nacl-gcc -m64 -o test.nexe main.c
For a more detailed look into how to compile NaCl modules, please read the documentation at gonacl.com on compiling, which details how to compile with and without SCons.
2. Compiling static libs with GCC
Here is an example : http://www.adp-gmbh.ch/cpp/gcc/create_lib.html
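In short, that example boils down to compiling objects and archiving them, here with the NaCl toolchain binaries mentioned above (a sketch; the file names are placeholders):
<path to bin>/x86_64-nacl-gcc -c foo.c -o foo.o
<path to bin>/x86_64-nacl-ar rcs libfoo.a foo.o
<path to bin>/x86_64-nacl-ranlib libfoo.a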
~Main

How to use an older version of gcc in Linux

In Linux I am trying to compile something that uses the -fwritable-strings option. Apparently this gcc option no longer works in newer versions of gcc. I installed gcc-3.4 on my system, but I think the newer version is still being used, because I still get the error saying the command-line option -fwritable-strings is not recognized. How can I get make to use the older version of gcc?
You say nothing about the build system in use, but usually old versions of gcc can be invoked explicitly, by something like (this is for an autotools-based build):
./configure CXX=g++-3.4 CC=gcc-3.4
For a make-based build system, sometimes this will work:
make CXX=g++-3.4 CC=gcc-3.4
Most makefiles ought to recognise overriding CC and CXX in this way.
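This works because variables assigned on the make command line override assignments inside the makefile. For instance, given this sketch of a Makefile:
CC = gcc

prog: main.o
	$(CC) -o $@ main.o
running make CC=gcc-3.4 uses gcc-3.4 despite the CC = gcc line.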
If editing the configuration/Makefile is not an option, Linux includes a utility called update-alternatives for such situations. However, it's a pain to use (links to various tutorials included below).
This is a little simpler - here's a script (from here) to easily switch your default gcc/g++ version:
#!/bin/bash

usage() {
    echo
    echo Sets the default version of gcc, g++, etc
    echo Usage:
    echo
    echo "    gcc-set-default-version <VERSION>"
    echo
    exit
}

cd /usr/bin

if [ -z "$1" ] ; then
    usage;
fi

set_default() {
    if [ -e "$1-$2" ] ; then
        echo "$1-$2 is now the default"
        ln -sf $1-$2 $1
    else
        echo "$1-$2 is not installed"
    fi
}

for i in gcc cpp g++ gcov gccbug ; do
    set_default $i $1
done
If you 1) name this script switch-gcc, 2) put it in your path, and 3) make it executable (chmod +x switch-gcc), you can then switch compiler versions just by running
sudo switch-gcc 3.2
Further reading on update-alternatives:
https://lektiondestages.blogspot.com/2013/05/installing-and-switching-gccg-versions.html
https://codeyarns.com/2015/02/26/how-to-switch-gcc-version-using-update-alternatives/
https://askubuntu.com/questions/26498/choose-gcc-and-g-version
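For completeness, the update-alternatives route looks roughly like this (a sketch; the installed versions and priorities are placeholders):
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-3.4 50
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.4 100
sudo update-alternatives --config gcc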
Maybe you could just give the whole path of the gcc-3.4 install while compiling your program:
/path_to_gcc_3.4/gcc your_program
If you can find where the writeable strings are actually being used, another possibility would be to use strdup and free on the subset of literal strings that the code is actually editing. This might be more complicated than downgrading versions of GCC, but will make the code much more portable.
Edit
In response to the clarification question / comment below, if you saw something like:
char* str = "XXX";
str[1] = 'Y';
str[2] = 'Z';
// ... use of str ...
You would replace the above with something like:
char* str = strdup("XXX");
str[1] = 'Y';
str[2] = 'Z';
// ... use of str ...
free(str);
And where you previously had:
char* str = "Some string that isn't modified";
You would replace the above with:
const char* str = "Some string that isn't modified";
Assuming you made these fixes, "-fwritable-strings" would no longer be necessary.

What is a reliable way to determine which shared library will be loaded across linux platforms?

I need to find out which library will be loaded, given the information returned by /sbin/ldconfig. I came up with the following:
#!/bin/bash
echo $(dirname $(/sbin/ldconfig -p | awk "/$1/ {print \$4}" | head -n 1))
Running this results with:
$ whichlib libGL.so
/usr/X11R6/lib
This a two part question:
Will this produce a reliable result across platform?
Is there a slicker way to parse the output of ldconfig?
Thanks,
Paul
There are several ways a library can be found by an executable:
1. Using $LD_LIBRARY_PATH
2. Using the ld cache
3. A library with its full path compiled into the binary (the -rpath gcc flag)
You're using option 2, while options 1 and 3 are not considered.
Depending on what exactly you're doing you may want to run ldd directly on the executable you're planning to run rather than the general case ldconfig.
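For example (a sketch; the binary, library names, and paths will differ per system):
$ ldd ./myprogram
    libGL.so.1 => /usr/X11R6/lib/libGL.so.1 (0x...)
ldd resolves each dependency the same way the dynamic loader would, so it accounts for LD_LIBRARY_PATH and rpath as well as the cache.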
Since you asked, you could write your script like this:
dirname "$(/sbin/ldconfig -p | awk "\$1 == "$1" {print \$4; exit}")"
It's a little more precise and has one less pipe. Also echo $(cmd) is redundant; you can just write cmd.
