configure keeps finding tcmalloc. How?

I'm building NWChem on Cray. libtcmalloc_minimal is already added to an archive file by the cc wrapper in my Cray environment. My configure run then explicitly appends a second -ltcmalloc_minimal, resulting in multiple-definition errors and a configure failure. But none of the configure.* files or makefiles (or any files included with NWChem) contain any reference to tcmalloc_minimal.
How is tcmalloc_minimal getting in there?
How can I keep it out?

The autoconf _AC_FC_LIBRARY_LDFLAGS macro (called as part of AC_PROG_FC) and other macros that query library flags and objects retrieve this value from the verbose compiler output, which contains this library on Cray systems. For this reason Cray's patched autoconf modifies the above macro to get rid of the flag. I'm currently in the process of figuring out an override of the macro so that configure scripts produced by unpatched versions of autoconf also work on Cray systems. I'll post an update once I've figured out something that works reliably.
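Until such an override exists, one workaround (a sketch only, not verified on a Cray) is to filter the offending flag out of the variable those macros populate, right after the Fortran checks in configure.ac; FCLIBS is the standard output variable of AC_FC_LIBRARY_LDFLAGS (use FLIBS instead if you rely on the F77 variant):
AC_PROG_FC
AC_FC_LIBRARY_LDFLAGS
# drop the flag the Cray compiler wrapper already links for us
FCLIBS=`echo " $FCLIBS " | sed 's/ -ltcmalloc_minimal / /g'`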

Related

How do I ignore a system file picked up by `configure' generated from AC_CHECK_HEADERS

We are using an automated build system which downloads and compiles source. The only interface I have to control the behaviour of the compilation is by setting environment variables and the arguments given to `./configure'.
The issue is that the `configure' script (of the particular source I'm compiling) checks for a system header file which, if found, adversely affects the compilation process. (When that header is found, the build skips compiling libraries which it believes are already installed on the local system.)
Since this is an automated process, I cannot modify the `configure' script in any way, and as mentioned can only specify the environment variables and arguments passed to `configure'. The configure script uses the AC_CHECK_HEADERS macro to generate the code that checks for the system file. Is there any way to avoid the check of a specific system file from the configure arguments?
The troublesome header file is in the path /usr/include/pcap/.
Thanks
Well, there are a few things you could try:
remove foo.h from AC_CHECK_HEADERS and always build the library
use AC_CHECK_HEADER for foo.h and check for /usr/include/pcap/foo.h and don't AC_DEFINE(HAVE_FOO_H) if /usr/include/pcap/foo.h is there.
you could use AC_ARG_ENABLE or AC_ARG_WITH to turn off the offending test on a host-by-host basis via arguments to configure (a sketch follows below). So the answer to that question is yes.
All of these assume you can modify configure.ac and regenerate configure. If you can't do that you might have to modify configure (in an automated fashion, of course).
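For the AC_ARG_ENABLE route, a minimal sketch (the header name pcap/foo.h and the option name are stand-ins for the real ones):
AC_ARG_ENABLE([pcap-check],
  [AS_HELP_STRING([--disable-pcap-check], [do not probe for the system pcap header])],
  [], [enable_pcap_check=yes])
AS_IF([test "x$enable_pcap_check" = xyes],
      [AC_CHECK_HEADERS([pcap/foo.h])])
Hosts where the header causes trouble can then run configure with --disable-pcap-check.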

Can an RPM spec file "include" other files?

Is there a kind of "include" directive in RPM spec? I couldn't find an answer by googling.
Motivation: I have an RPM spec template which the build process modifies with the version, revision and other build-specific data; currently this is done with sed. I think it would be cleaner if the spec could #include a build-specific definitions file generated by the build process, so I don't need to search and replace in the spec.
If there is no include, is there an idiomatic way to do this (quite common, I believe) task?
Sufficiently recent versions of rpmbuild certainly do support %include:
%include common.inc
Unfortunately, they aren't very smart about it: there is no known set of directories in which it will look for the requested files, for example. But the directive is there, and variables are expanded, for example:
%include %{_topdir}/Common/common.inc
RPM does not support includes.
I have solved similar problems with either the m4 macro processor or by just concatenating parts of the spec (when the "include" was at the beginning).
If you only need to pass a few variables at build time, and not include several lines from another file, you can run
rpmbuild --define 'myvar SOMEVALUE' -bb myspec.spec
and you can use %myvar in the spec.
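For instance, if the macro carries a version string, a line in the spec that consumes it might be (myvar here is whatever name you chose):
Version: %{myvar}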
I faced this same issue recently. I wanted to define multiple sub-packages that were similar but each varied just slightly (they were language-specific RPMs), and I didn't want to repeat the same boilerplate for each sub-package.
Here's a generic version of what I did:
%define foo_spec() %{expand:%(cat '%{myloc}/main-foo.spec')}
%{foo_spec bar}
%{foo_spec baz}
%{foo_spec qux}
The use of %{expand} ensures that %(cat) is only executed a single time, when the macro is defined. The content of the main-foo.spec file is then expanded three times, and each time %1 in the main-foo.spec file expands to bar, baz and qux in turn, allowing me to treat it as a template. You could easily extend this to more than one parameter if you have the need (I did not).
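For illustration, main-foo.spec might look roughly like this; everything here is invented except the role of %1, which is the sub-package name passed in above:
%package %1
Summary: %{name} language support for %1
Requires: %{name} = %{version}-%{release}
%description %1
Files needed to run %{name} with the %1 language.
%files %1
%{_datadir}/%{name}/%1/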
For the underlying issue, there may be two additional solutions that are present in all rpm versions that I am aware of:
Subpackages
macro and rpmrc files.
Subpackages
Another alternative (and perhaps the "RPM way") is to use sub-packages. Maximum RPM also has information and examples of subpackages.
I think the question is trying to structure something like this:
two spec files, say rpm_debug.spec and rpm_production.spec
both use %include common.spec
Debug and production could also be client and server, etc. As in the examples of redefining a variable, each subpackage can have its own list of variables.
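For instance, rpm_debug.spec could be little more than the following (file names follow the example above; the variant macro is invented), with rpm_production.spec defining its own values before the same %include:
%define variant debug
%include common.spec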
Limitations
The main advantage of subpackages is that only one build takes place; this may also be a disadvantage. The debug and production example highlights this: it can be worked around by using strip to create variants, or by compiling twice into different output directories (perhaps using VPATH with GNU Make). Having to compile large packages and then only produce simple variations, like with/without developer info (headers, static libraries, etc.), can make you appreciate this approach.
Macros and Rpmrc
Subpackages don't solve the problem of structural defines that you want applied across an entire rootfs hierarchy, or a larger collection of RPMs. For that we have rpmbuild --showrc. You can have a large number of variables and macros defined by altering the rpmrc and macros files that rpm and rpmbuild read. From the man page:
rpmrc Configuration
/usr/lib/rpm/rpmrc
/usr/lib/rpm/redhat/rpmrc
/etc/rpmrc
~/.rpmrc
Macro Configuration
/usr/lib/rpm/macros
/usr/lib/rpm/redhat/macros
/etc/rpm/macros
~/.rpmmacros
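For example, a few user-level definitions in ~/.rpmmacros (the names and values below, apart from %_topdir, are made up):
%_topdir        %(echo $HOME)/rpmbuild
%my_vendor      Example Corp
%my_build_id    12345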
I think these two features can solve all the problems that %include can. However, %include is a familiar concept and was probably added to make rpm more full-featured and developer friendly.
Which version are you talking about? I currently have %include filename.txt in my spec file and it seems to work just like the C #include directive.
> rpmbuild --version
RPM version 4.8.1
You can include the *.inc files from the SOURCES directory (%_sourcedir):
Source1: common.inc
%include %{SOURCE1}
In this way they will go automatically into SRPMS.
I've used scripts (name your favorite) to take a template and create the spec file from it. Also, the %files section can read its list from a file created by another process (%files -f), which is what Python's bdist_rpm does, for example.

Using library with different names within autoconf

I am trying to build an application with an OpenSync 0.4 (0.3.9, actually) dependency.
In the project's configure.ac the opensync library is written as libopensync1. However, this doesn't build on my Gentoo system. Changing libopensync1 to libopensync does fix the issue for me.
I searched with Google and found that libopensync1 is used in some distributions, while libopensync is used in others. So how do I resolve this issue in configure.ac?
Thanks.
The macro AC_SEARCH_LIBS does what you need. (There is much heated debate about whether or not pkg-config should ever be used. If you choose to rely on it, ptomato gives a reasonable approach.) Simply add this to your configure.ac:
AC_SEARCH_LIBS([osync_mapping_new],[opensync1 opensync],[],
[AC_MSG_ERROR([can't find opensync])])
This will first look for a library named opensync1; if it doesn't find that, it will look for opensync.
The primary drawback of using pkg-config is that most projects that rely on it do not actually check if the data provided by the .pc file is reliable, so configure may succeed but a subsequent build will fail. It is always possible for a user to set PKG_CONFIG=true when running configure and completely eliminate all of the data provided by any associated .pc files, setting LIBS, CFLAGS, etc. by hand the 'old-fashioned' way.
The primary drawback of not using pkg-config is that the user has to set LIBS, CFLAGS, etc. the old-fashioned way. In practice, this is pretty trivial, and all pkg-config has done is move the data from a single CONFIG_SITE file to separately maintained .pc files for each package.
If you do use PKG_CHECK_MODULES, follow it up with a call to AC_CHECK_LIB or AC_SEARCH_LIBS to validate the data in whatever .pc file was located by PKG_CHECK_MODULES.
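A rough sketch of that belt-and-braces approach, reusing the symbol from the AC_SEARCH_LIBS example above (adapt the names to your project):
PKG_CHECK_MODULES([OPENSYNC], [libopensync1])
# double-check that the flags pkg-config reported actually link
save_LIBS=$LIBS
LIBS="$OPENSYNC_LIBS $LIBS"
AC_SEARCH_LIBS([osync_mapping_new], [opensync1 opensync], [],
               [AC_MSG_ERROR([libopensync1 reported by pkg-config but not linkable])])
LIBS=$save_LIBS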
I'm assuming that the place where this occurs in your configure.ac is a PKG_CHECK_MODULES call.
Looking at the libopensync sources, it seems that libopensync1 is the newer name, and libopensync is the old name. So, we'll use pkg-config macros to look for the newer name unless it doesn't exist.
Put this in your configure.ac:
# Check if libopensync1 is known to pkg-config, and if not, look for libopensync instead
PKG_CHECK_EXISTS([libopensync1], [OPENSYNC=libopensync1], [OPENSYNC=libopensync])
Then later in your PKG_CHECK_MODULES call, replace libopensync1 with $OPENSYNC.
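Putting the two pieces together, the relevant part of configure.ac might look something like this (the OPENSYNC prefix is just a conventional name):
PKG_CHECK_EXISTS([libopensync1], [OPENSYNC=libopensync1], [OPENSYNC=libopensync])
PKG_CHECK_MODULES([OPENSYNC], [$OPENSYNC])
PKG_CHECK_MODULES then substitutes OPENSYNC_CFLAGS and OPENSYNC_LIBS for use in your Makefile.am.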

Linking with a different .so file in Linux

I'm trying to compile a piece of software which has the standard build process e.g.
configure
make
make install
The software requires a library, e.g. libreq.so, which is installed in /usr/local/lib. However, I'd like to build the software and link it against a different version of the same library (I have the source for it as well) that I've installed in /home/user/mylibs.
My question is: how do I compile and link the software against the library in /home/user/mylibs rather than the one in /usr/local/lib?
I tried setting "LD_LIBRARY_PATH" to include /home/user/mylibs but that didn't work.
Thanks!
When you have an autoconf configure script, use:
CPPFLAGS=-I/home/user/include LDFLAGS=-L/home/user/mylibs ./configure ...
This adds the nominated directory to the list of directories searched for headers (usually necessary when you're using a library), and adds the other nominated directory to the list searched for actual libraries.
I use this all the time. On my work machine, /usr/local is 'maintained' by MIS and contains obsolete code 99.9% of the time (and is NFS-mounted, read-only), so I struggle to avoid using it at all and maintain my own, more nearly current, versions of the software under /usr/gnu. It works for me.
Try using LD_PRELOAD set to your actual library file.
LD_LIBRARY_PATH is for finding the dynamic libraries at runtime. At compile time you should pass -L options to gcc/g++ to specify the directory containing the .so files, and name the library with -l<NAME> (where the library file is libNAME.so).
Note that for dynamic linking the linker only needs libNAME.so; a static libNAME.a is only required if you want to link statically.
When you run the application, don't forget to add the directory to LD_LIBRARY_PATH.
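As a concrete sketch for the libreq.so from the question (main.c and myprog are made-up names):
gcc main.c -L/home/user/mylibs -lreq -o myprog
LD_LIBRARY_PATH=/home/user/mylibs ./myprog
The -L directory is searched before the default directories at link time, and LD_LIBRARY_PATH points the dynamic loader at the same copy at run time.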
When you added /home/user/mylibs to LD_LIBRARY_PATH, did you add it to the front or the end of the existing paths? The entries are searched in order, so you will want yours to appear first in the list.
Also, many standard build environments that use configure will allow you to specify an exact library for each required piece. You will have to run ./configure --help, but you should see something like --with-BLAH=/path/to/your/library or similar.

On GNU/Linux systems, where should I load application data from?

In this instance I'm using C with autoconf, but the question applies elsewhere.
I have a glade xml file that is needed at runtime, and I have to tell the application where it is. I'm using autoconf to define a variable in my code that points to the "specified prefix directory"/app-name/glade. But that only begins to work once the application is installed. What if I want to run the program before that point? Is there a standard way to determine what paths should be checked for application data?
Thanks
Thanks for the responses. To clarify, I don't need to know where the app data is installed (e.g. by searching /usr, /usr/local, etc.); the configure script handles that. The problem was more about determining whether the app has been installed yet. I guess I'll just check the install location first, and if the file isn't there, fall back to "./src/foo.glade".
I don't think there's any standard way to locate such data.
Personally, I'd keep a list of paths and check whether the file can be found in any of them; the list should contain DATADIR+APPNAME as defined by autoconf, plus CURRENTDIRECTORY+POSSIBLE_PREFIX, where the prefix might be some folder from your build root.
But in any case, don't forget to use those autoconf defines for your data files; they make your software easier to package (e.g. as deb/rpm).
There is no general prescription for how this should be done, but Debian packagers usually install application data somewhere under /usr/share, /usr/lib, et cetera. They may also patch the software to make it read from the appropriate locations. See the Debian policy for more information.
I can, however, say a few words about how I do it. First, I don't expect to find the file in a single directory; I create a list of directories that I iterate through in my wrapper around fopen(). This is the order in which I believe the file lookup should be done:
current directory (obviously)
~/.program-name
$(datadir)/program-name
$(datadir) is a variable you can use in Makefile.am. Example:
AM_CPPFLAGS = $(ASSERT_FLAGS) $(DEBUG_FLAGS) $(SDLGFX_FLAGS) $(OPENGL_FLAGS) -DDESTDIRS=\"$(prefix):$(datadir)/:$(datadir)/program-name/\"
This of course depends on your output from configure and on what your configure.ac looks like.
So, just make a wrapper that will iterate through the locations and get the data from those dirs. Something like a PATH variable, except you implement the iteration.
After writing this post, I noticed I need to clean up our implementation in this project, but it can serve as a nice start. Take a look at our Makefile.am for using $(datadir) and our util.cpp and util.h for a simple wrapper (yatc_fopen()). We also have yatc_find_file() in case some third-party library is doing the fopen()ing, such as SDL_image or libxml2.
If the program is installed globally:
/usr/share/app-name/glade.xml
If you want the program to work without being installed (i.e. just extract a tarball), put it in the program's directory.
I don't think there is a standard way of placing files. I build it into the program, and I don't limit it to one location.
It depends on how much customising of the config file is going to be required.
I start by constructing a list of default directories and work through them until I find an instance of glade.xml and stop looking, or fail to find it and exit with an error. Good candidates for the default list are /etc, /usr/share/app-name and /usr/local/etc.
If the file is designed to be customizable, then before looking through the default directories I work through a list of user files and paths; if none of the user versions is found, I fall back to the list of default directories. Good candidates for the user config files are ~/.glade.xml, ~/.app-name/glade.xml or ~/.app-name/.glade.xml.
