Automake conditional compilation from C or Objective-C sources - autoconf

I'm using the following to do conditional compilation in automake of the amhello example program [1]:
In configure.ac:
AC_INIT([amhello], [1.0], [bug-automake@gnu.org])
AM_INIT_AUTOMAKE([-Wall -Werror foreign])
AC_PROG_CC
AC_PROG_OBJC
build_linux=no
build_windows=no
build_mac=no
AC_CANONICAL_HOST
case "${host_os}" in
cygwin*|mingw*)
build_windows=yes;;
darwin*)
build_mac=yes;;
*)
build_linux=yes;;
esac
AM_CONDITIONAL([LINUX], [test "$build_linux" = "yes"])
AM_CONDITIONAL([WINDOWS], [test "$build_windows" = "yes"])
AM_CONDITIONAL([MACOS], [test "$build_mac" = "yes"])
AC_CONFIG_HEADERS([config.h])
AC_CONFIG_FILES([
Makefile
src/Makefile
])
AC_OUTPUT
In src/Makefile.am:
bin_PROGRAMS = hello
hello_SOURCES = main.c
if MACOS
hello_SOURCES += hello-mac.m
endif
if LINUX
hello_SOURCES += hello-linux.c
endif
It works as expected except for one issue - even when compiling on Linux, it tries to use the Objective-C build suite instead of the C one. A side effect of this is that OBJCFLAGS gets used instead of CFLAGS, which is counter-intuitive given that no Objective-C source code is being compiled when built for Linux. A demonstration:
$ OBJCFLAGS="-DOBJCFLAGS" CFLAGS="-DCFLAGS" ./configure
...
$ make
make all-recursive
make[1]: Entering directory '/'
Making all in src
make[2]: Entering directory '/src'
gcc -DHAVE_CONFIG_H -I. -I.. -DCFLAGS -MT main.o -MD -MP -MF .deps/main.Tpo -c -o main.o main.c
mv -f .deps/main.Tpo .deps/main.Po
gcc -DHAVE_CONFIG_H -I. -I.. -DCFLAGS -MT hello-linux.o -MD -MP -MF .deps/hello-linux.Tpo -c -o hello-linux.o hello-linux.c
mv -f .deps/hello-linux.Tpo .deps/hello-linux.Po
gcc -DOBJCFLAGS -o hello main.o hello-linux.o
...
From the generated src/Makefile:
...
hello$(EXEEXT): $(hello_OBJECTS) $(hello_DEPENDENCIES) $(EXTRA_hello_DEPENDENCIES)
@rm -f hello$(EXEEXT)
$(AM_V_OBJCLD)$(OBJCLINK) $(hello_OBJECTS) $(hello_LDADD) $(LIBS)
...
Is there a good way to have the C compiler / CFLAGS be used when building for Linux and have the Objective-C compiler / OBJCFLAGS only be used when building for MacOS (when the Objective-C source file is to actually be built)? I tried using both approaches to conditional compilation described in [2] but both exhibit the same behavior.
[1] https://www.gnu.org/software/automake/manual/html_node/Creating-amhello.html#Creating-amhello
[2] https://www.gnu.org/software/automake/manual/html_node/Conditional-Sources.html#Conditional-Sources

I expect that if you tell automake to build different executables for Linux, Windows and MacOS instead of building the same executable with some OS-specific variants, automake should not try to use an Objective-C compiler for the Linux and Windows versions, as long as the Linux and Windows versions do not use any Objective-C sources.
NOTE: This is speculative, verification is needed.
Now... you will need to give these three executables three distinct names if you are going to define them in the same Makefile.am, or else define them in multiple Makefile.am files. I can see a few possibilities for doing that, and you may need to add subdir-objects to AM_INIT_AUTOMAKE([...]) and re-run autoreconf.
Note that I have not tested any of this, as I have no idea how to write Objective-C code. If you happen to have your project available somewhere to look at (with maybe a proof-of-concept hello-windows.c using winapi, and definitely hello-macos.m using MacOS APIs), I can try to figure out which of the following proposals works best.
Use recursive make and src/linux/hello, src/windows/hello, src/macos/hello with one Makefile.am each, and move the OS-specific hello-$OS.* into the appropriate subdirectory:
# src/linux/Makefile.am
if LINUX
bin_PROGRAMS = hello
hello_SOURCES = hello-linux.c ../main.c
endif
# src/macos/Makefile.am
if MACOS
bin_PROGRAMS = hello
hello_SOURCES = hello-macos.m ../main.c
endif
# src/windows/Makefile.am
if WINDOWS
bin_PROGRAMS = hello
hello_SOURCES = hello-windows.c ../main.c
# special rules to build object from resource file and adding it
endif
I do not like source files beginning with ../, though. And the less recursive make we use, the better on multicore machines.
Use non-recursive make for the three executables built as src/linux/hello, src/windows/hello, src/macos/hello with one Makefile-files file each, all included from src/Makefile.am, moving the OS specific sources to the OS specific subdirectory:
# src/linux/Makefile-files -*- makefile-automake -*-
if LINUX
bin_PROGRAMS += linux/hello
linux_hello_SOURCES = linux/hello-linux.c main.c
endif
# src/macos/Makefile-files -*- makefile-automake -*-
if MACOS
bin_PROGRAMS += macos/hello
macos_hello_SOURCES = macos/hello-macos.m main.c
endif
# src/windows/Makefile-files -*- makefile-automake -*-
if WINDOWS
bin_PROGRAMS += windows/hello
windows_hello_SOURCES = windows/hello-windows.c main.c
# special rules to build object from resource file and adding it
endif
# src/Makefile.am
bin_PROGRAMS =
include linux/Makefile-files
include macos/Makefile-files
include windows/Makefile-files
I would write the Makefile-files files using %reldir% and %canon_reldir% (or %D% and %C%).
This allows OS specific files and build rules (e.g. Windows resource files and the rules to compile them and link them to the Windows executable) to be all neatly put into the OS specific subdirectory.
Probably my preferred option for the longer term.
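For illustration, the Linux fragment rewritten with those substitutions might look roughly like this (untested sketch; %reldir% expands to the fragment's directory relative to the including Makefile.am, and %canon_reldir% to its canonicalized form usable in variable names):
# src/linux/Makefile-files -*- makefile-automake -*-
if LINUX
bin_PROGRAMS += %reldir%/hello
%canon_reldir%_hello_SOURCES = %reldir%/hello-linux.c main.c
endif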
Just call the executables linux/hello, windows/hello, and macos/hello from src/Makefile.am without moving the sources or the build rules away from src/:
# src/Makefile.am
bin_PROGRAMS =
if LINUX
bin_PROGRAMS += linux/hello
linux_hello_SOURCES = hello-linux.c main.c
endif
if MACOS
bin_PROGRAMS += macos/hello
macos_hello_SOURCES = hello-macos.m main.c
endif
if WINDOWS
bin_PROGRAMS += windows/hello
windows_hello_SOURCES = hello-windows.c main.c
# special rules to build object from resource file and adding it
endif
If there are a lot of OS-specific source files and rules all in the single directory src/ and its src/Makefile.am, this might become difficult to read.
My preferred option for a quick minimum working example.
Call the executables hello-linux, hello-windows, hello-macos from src/Makefile.am and then deal with installing the different executables as hello or hello.exe in install-hooks and the like.
I would avoid this as those hooks and related stuff are non-trivial to get right.
It still needs to be checked whether configure will actually succeed when building for Linux and Windows when no Objective-C compiler can be found.

The OBJCFLAGS are used only at link time, because automake selects the Objective-C linker if it sees any Objective-C source files; see the "How the Linker is Chosen" section of the automake manual.
You can use a per-target _LINK variable to override the default linker selection. In Makefile.am, you can write this, to force the use of the C linker:
hello_LINK = $(LINK)
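Applied to the Makefile.am from the question, that might look roughly like this (untested sketch; $(OBJCLINK) is the variable visible in the generated Makefile quoted above, and $(LINK) is automake's ordinary C link command):
bin_PROGRAMS = hello
hello_SOURCES = main.c
if MACOS
hello_SOURCES += hello-mac.m
hello_LINK = $(OBJCLINK)
else
hello_LINK = $(LINK)
endif
if LINUX
hello_SOURCES += hello-linux.c
endif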


How do I correctly link a shared object (.so file) in a makefile when cross-compiling?

I have some C++ code on an openSuse platform that I need to compile to be executed on a different linux-based target. Part of the code is a dynamic library libfoo.so. I compile everything with make and then copy the compiled executable prog together with the libfoo.so to the target. When I then run the executable, I get some errors indicating the libfoo could not be initialized. I've tried everything I could find to tell the executable where it can find the libfoo.so but I still get the error.
Could anybody tell me what I am doing wrong here? I feel like it could be an error in the Makefile.
I am very new to C++ and using Makefiles in general, and on top of it all, the target runs kind of a proprietary linux version, so I cannot provide much information about it. I do have the appropriate compiler for it though.
My directory structure on the openSuse platform:
|src
|--Foolib
|----foolib.h
|----libfoo.so
|--Otherlib
|----otherlib.h
|----otherlib.hpp
|---+OtherlibSrcDirectory
|--bar.cpp
|--bar.h
|--Makefile
Directory structure on the target:
|program
|--libfoo.so
|--prog
My Makefile:
LIBS = -LFoolib -lfoo
INC = -I OtherLib -I Foolib
CXXFLAGS += -lpthread -std=c++11 -D_GLIBCXX_USE_NANOSLEEP $(INC)
LDFLAGS = '-Wl,-rpath,$$ORIGIN'
SRC_FILES = bar.cpp
OBJ = $(SRC_FILES:%.cpp=%.o)
prog: $(OBJ)
$(CXX) $(CXXFLAGS) $(LDFLAGS) $(LIBS) -o $@ $^
%.o: %.cpp
$(CXX) $(CXXFLAGS) $(LIBS) -c $<
Basically, bar.h includes Foolib/foolib.h as well as Otherlib/OtherlibSrcDirectory and bar.cpp includes bar.h. Then some functions from foolib.h are called in bar.cpp and they return error values. If necessary I can provide some more insights into the code but I'll leave it out for now to keep it a bit shorter.
Any help would be highly appreciated!
Found my mistake.
libfoo.so was already on the target machine and it was located in the correct folder (/lib). My program had been able to find it without problems.
My mistake: I executed my program on the target machine without root permissions.
Without root permissions, I am not allowed to initialize Foolib.
sudo ./prog fixed everything.
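For anyone whose problem really is the library lookup rather than permissions, two standard checks (plain binutils/glibc tools, nothing specific to this project) are:
# show which libraries the binary asks for and the run-time search path baked into it
readelf -d prog | grep -E 'NEEDED|RPATH|RUNPATH'
# on the target itself, show how the dynamic linker actually resolves each of them
ldd ./prog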

How to specify a directory when using "gcc -c" to generate *.o files? [duplicate]

I am wondering why gcc/g++ doesn't have an option to place the generated object files into a specified directory.
For example:
mkdir builddir
mkdir builddir/objdir
cd srcdir
gcc -c file1.c file2.c file3.c --outdir=../builddir/objdir
I know that it's possible to achieve this with separate -o options given to the compiler, e.g.:
gcc -c file1.c -o ../builddir/objdir/file1.o
gcc -c file2.c -o ../builddir/objdir/file2.o
gcc -c file3.c -o ../builddir/objdir/file3.o
... and I know that I can write Makefiles via VPATH and vpath directives to simplify this.
But that's a lot of work in a complex build environment.
I could also use
gcc -c file1.c file2.c file3.c
But when I use this approach my srcdir is full of .o garbage afterwards.
So I think that an option with the semantics of --outdir would be very useful.
What is your opinion?
EDIT: our Makefiles are written in such a way that .o files actually placed into builddir/obj. But I am simply wondering if there might be a better approach.
EDIT: There are several approaches which place the burden to achieve the desired behavior to the build system (aka Make, CMake etc.). But I consider them all as being workarounds for a weakness of gcc (and other compilers too).
This is the chopped down makefile for one of my projects, which compiles the sources in 'src' and places the .o files in the directory "obj". The key bit is the use of the patsubst() function - see the GNU make manual (which is actually a pretty good read) for details:
OUT = lib/alib.a
CC = g++
ODIR = obj
SDIR = src
INC = -Iinc
_OBJS = a_chsrc.o a_csv.o a_enc.o a_env.o a_except.o \
a_date.o a_range.o a_opsys.o
OBJS = $(patsubst %,$(ODIR)/%,$(_OBJS))
$(ODIR)/%.o: $(SDIR)/%.cpp
$(CC) -c $(INC) -o $@ $< $(CFLAGS)
$(OUT): $(OBJS)
ar rvs $(OUT) $^
.PHONY: clean
clean:
rm -f $(ODIR)/*.o $(OUT)
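One practical note: as written, this makefile assumes the obj/ and lib/ directories already exist (neither rule creates them), so a first build would be along the lines of:
mkdir -p obj lib
make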
How about changing to the directory and running the compile from there:
cd builddir/objdir
gcc ../../srcdir/file1.c ../../srcdir/file2.c ../../srcdir/file3.c
That's it. gcc will interpret includes of the form #include "path/to/header.h" as relative to the directory where the including file lives, so you don't need to modify anything.
A trivial but effective workaround is to add the following right after the gcc call in your Makefile:
mv *.o ../builddir/objdir
or even a soft-clean (possibly recursive) after the compilation is done, like
rm -f *.o
or
find . -name \*.o -exec rm {} \;
You can use a simple wrapper around gcc that will generate the necessary -o options and call gcc:
$ ./gcc-wrap -c file1.c file2.c file3.c --outdir=obj
gcc -o obj/file1.o -c file1.c
gcc -o obj/file2.o -c file2.c
gcc -o obj/file3.o -c file3.c
Here is such a gcc_wrap script in its simplest form:
#!/usr/bin/perl -w
use File::Spec;
use File::Basename;
use Getopt::Long;
Getopt::Long::Configure(pass_through);
my $GCC = "gcc";
my $outdir = ".";
GetOptions("outdir=s" => \$outdir)
or die("Options error");
my @c_files;
while(-f $ARGV[-1]){
push @c_files, pop @ARGV;
}
die("No input files") if(scalar @c_files == 0);
foreach my $c_file (reverse @c_files){
my($filename, $c_path, $suffix) = fileparse($c_file, ".c");
my $o_file = File::Spec->catfile($outdir, "$filename.o");
my $cmd = "$GCC -o $o_file @ARGV $c_file";
print STDERR "$cmd\n";
system($cmd) == 0 or die("Could not execute $cmd: $!");
}
Of course, the standard way is to solve the problem with Makefiles, or simpler, with CMake or bakefile, but you specifically asked for a solution that adds the functionality to gcc, and I think the only way is to write such a wrapper. Of course, you could also patch the gcc sources to include the new option, but that might be hard.
I believe you got the concept backwards...?!
The idea behind Makefiles is that they only process the files that have been updated since the last build, to cut down on (re-)compilation times. If you bunch multiple files together in one compiler run, you basically defeat that purpose.
Your example:
gcc -c file1.c file2.c file3.c --outdir=../builddir/objdir
You didn't give the 'make' rule that goes with this command line; but if any of the three files has been updated, you have to run this line and recompile all three files, which might not be necessary at all. It also keeps 'make' from spawning a separate compilation process for each source file, as it would do for separate compilation (when using the '-j' option, as I would strongly suggest).
I wrote a Makefile tutorial elsewhere, which goes into some extra detail (such as auto-detecting your source files instead of having them hard-coded in the Makefile, auto-determining include dependencies, and inline testing).
All you would have to do to get your separate object directory would be to add the appropriate directory information to the OBJFILES := line and the %.o: %.c Makefile rule from that tutorial. Neil Butterworth's answer has a nice example of how to add the directory information.
(If you want to use DEPFILES or TESTFILES as described in the tutorial, you'd have to adapt the DEPFILES := and TSTFILES := lines plus the %.t: %.c Makefile rule, too.)
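As a rough illustration of the general shape (not copied from that tutorial; the obj directory name is just an example), redirecting the objects to a separate directory usually amounts to:
OBJDIR   := obj
OBJFILES := $(patsubst %.c,$(OBJDIR)/%.o,$(wildcard *.c))

$(OBJDIR)/%.o: %.c
	@mkdir -p $(dir $@)
	$(CC) $(CFLAGS) -c $< -o $@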
Meanwhile I found a "half-way" solution by using the -combine option.
Example:
mkdir builddir
mkdir builddir/objdir
cd srcdir
gcc -combine -c file1.c file2.c file3.c -o ../builddir/objdir/all-in-one.o
this "combines" all source files into one single object file.
However, this is still "half-way" because it needs to recompile everything when only one source file changes.
I don't think gcc needs a separate option to say where to put the object file, since it already has one: -o names the output file, together with its directory.
Adding a directory-only flag would conflict with it.
For example:
gcc -c file.c -o /a/b/c/file.o --put-object-in-dir-non-existing-option /a1/a2/a3
You cannot put /a/b/c/file.o under /a1/a2/a3, since both paths are absolute, so such a flag could only apply when -o names the object file alone.
I advise you to consider a replacement for make, such as CMake, SCons and others.
This will let you implement a build system for simple projects as well as for bigger ones.
See, for example, how easy it is to compile your example using CMake.
Just create a file CMakeLists.txt in srcdir/:
cmake_minimum_required(VERSION 2.6)
project(test)
add_library(test file1.c file2.c file3.c)
And now type:
mkdir -p builddir/objdir
cd builddir/objdir
cmake ../../srcdir
make
That's all, object files will reside somewhere under builddir/objdir.
I personally use cmake and find it very convenient. It automatically generates dependencies and has other goodies.
I am trying to figure out the same thing. For me this worked
CC = g++
CFLAGS = -g -Wall -Iinclude
CV4LIBS = `pkg-config --libs opencv4`
CV4FLAGS = `pkg-config --cflags opencv4`
default: track
track: main.o
$(CC) -o track $(CV4LIBS) ./obj/main.o
ALLFLAGS = $(CFLAGS) $(CV4FLAGS)
main.o: ./src/main.cpp ./include/main.hpp
$(CC) $(ALLFLAGS) -c ./src/main.cpp $(CV4LIBS) -o ./obj/main.o
This is among the problems autoconf solves.
If you've ever done ./configure && make you know what autoconf is: it's the tool that generates those nice configure scripts. What not everyone knows is that you can instead do mkdir mybuild && cd mybuild && ../configure && make and that will magically work, because autoconf is awesome that way.
The configure script generates Makefiles in the build directory. Then the entire build process happens there. So all the build files naturally appear there, not in the source tree.
If you have source files doing #include "../banana/peel.h" and you can't change them, then it's a pain to make this work right (you have to copy or symlink all the header files into the build directory). If you can change the source files to say #include "libfood/comedy/banana/peel.h" instead, then you're all set.
autoconf is not exactly easy, especially for a large existing project. But it has its advantages.
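Spelled out, the out-of-tree build mentioned above is just this (assuming a package with a standard generated configure script):
mkdir mybuild
cd mybuild
../configure      # Makefiles are generated here, not in the source tree
make              # objects and binaries land under mybuild/ as well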
Personally for single files I do this,
rm -rf temps; mkdir temps; cd temps/ ; gcc -Wall -v --save-temps ../thisfile.c ; cd ../ ; geany thisfile.c temps/thisfile.s temps/thisfile.i
The temps folder will keep all the object, preprocessed and assembly files.
This is a crude way of doing things, and I would prefer the answers above that use Makefiles.

Converting a Visual Studio makefile to a Linux makefile

I am new to makefiles and have just recently created a makefile that works for a C++ project. It has two .cpp files and one .h file. I am trying to convert my file to work on Linux but can't seem to figure out how. Any ideas?
EXE = NumberGuessingGame.exe
CC = cl
LD = cl
OBJ = game.obj userInterface.obj
STD_HEADERS = header.h
CFLAGS = /c
LDFLAGS = /Fe
$(EXE): $(OBJ)
$(LD) $(OBJ) $(LDFLAGS)$(EXE)
game.obj: game.cpp $(STD_HEADERS)
$(CC) game.cpp $(CFLAGS)
userInterface.obj: userInterface.cpp $(STD_HEADERS)
$(CC) userInterface.cpp $(CFLAGS)
#prepare for complete rebuild
clean:
del /q *.obj
del /q *.exe
For in depth treatment of make on Linux, see GNU make.
There are a few differences. Binaries have no extension
EXE = NumberGuessingGame
The compiler is gcc, but need not be named, because CC is built in, same goes for LD. But since your files are named .cpp, the appropriate compiler is g++, which is CXX in make.
Object files have extension .o
OBJ = game.o userInterface.o
STD_HEADERS = header.h
Compiler flags
CXXFLAGS = -c
The equivalent for /Fe is just -o, which is not specified as LDFLAGS, but spelled out on the linker command line.
Usually, you use the compiler for linking
$(EXE): $(OBJ)
$(CXX) $(LDFLAGS) $(OBJ) -o $(EXE)
You don't need to specify the rules for object creation, they are built in. Just specify the dependencies
game.o: $(STD_HEADERS)
userInterface.o: $(STD_HEADERS)
del is called rm
clean:
rm -f $(OBJ)
rm -f $(EXE)
One important point is, indentation is one tab character, no spaces. If you have spaces instead, make will complain about
*** missing separator. Stop.
or some other strange error.
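Putting the pieces together, a complete converted makefile might look like this (untested sketch; one deviation from the fragments above is that -c is left out of CXXFLAGS, because make's built-in %.o: %.cpp rule already passes -c, so CXXFLAGS can hold ordinary options such as -Wall -g instead):
EXE = NumberGuessingGame
CXX = g++
CXXFLAGS = -Wall -g
OBJ = game.o userInterface.o
STD_HEADERS = header.h

$(EXE): $(OBJ)
	$(CXX) $(LDFLAGS) $(OBJ) -o $(EXE)

# the built-in rule compiles each .cpp; only the header dependency is declared
game.o: $(STD_HEADERS)
userInterface.o: $(STD_HEADERS)

clean:
	rm -f $(OBJ) $(EXE)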
You can also use CMake to accomplish your task:
Put following into CMakeLists.txt file in the root directory of your project (<project-dir>):
cmake_minimum_required (VERSION 2.6)
project (NumberGuessingGame)
add_executable(NumberGuessingGame game.cpp userInterface.cpp)
Then on the console do
"in-source" build
$ cd <project-dir>
$ cmake .
$ make
or "out-source" build
$ mkdir <build-dir>
$ cd <build-dir>
$ cmake <project-dir>
$ make
You can adjust build setting using nice GUI tool. Just go to the build directory and run cmake-gui.
You don't need to include headers in the dependency list. The compiler will fail on its own, stopping make from continuing. However, if you're including them in the dependency list to force make to rebuild files in case the header changes, nobody will stop you.
CFLAGS never needs to contain -c, nor does LDFLAGS need -o. Below is a revamped makefile. Note that you can always override a macro explicitly defined in a makefile or implicitly defined using something like make CFLAGS=-Wall for example. I used the de facto standard CXX macro name in the event that you have C source files, which must be compiled using a C compiler (the value of the CC macro) instead of a C++ compiler.
.POSIX:
#CC is already implicitly defined.
CXX = g++
OBJ = game.o userInterface.o
STD_HEADERS = header.h
.SUFFIXES:
.SUFFIXES: .o .cpp .c
NumberGuessingGame: $(OBJ) $(STD_HEADERS)
$(CXX) $(CFLAGS) -o $@ $(OBJ) $(LDFLAGS)
.cpp.o: $(STD_HEADERS)
$(CXX) $(CFLAGS) -c $<
#There is already an implicit .c.o rule, thus there is no need for it here.
#prepare for complete rebuild
clean:
-rm -f NumberGuessingGame *.o
As yegorich answered, you can use a build system like CMake. It is much more flexible, cross-platform, and can generate Unix Makefiles as well as NMake Makefiles and Visual Studio solutions on Windows.

scons: changing compilation flags for a single source file

I have a fairly complex scons system with several subdirectories, with many libraries and executables.
Currently, every SConscript gets its own cloned environment, so I can easily change CFLAGS (or any other wariable) on a per-SConscript basis, but I'd like to change it per-target, and even per-object-file within a target.
I created a simple example SConscript and SConstruct to explain the problem, as follows.
SConstruct:
env = Environment()
env['CFLAGS'] = '-O2'
env.SConscript('SConscript', 'env')
SConscript:
Import('env')
env=env.Clone()
env.Program('foo', ['foo.c', 'bar.c'])
If I run scons, both foo.c and bar.c compile with -O2 flags. I could easily change flags SConscript-wide by just adding env['CFLAGS'] = '...' within the SConscript, but let's say that I want to compile foo.c with -O2, but bar.c with full debugging, -O0 -g. How do I do that (in the simplest possible way)?
The example uses gcc, but I'd like something that can be used with any compiler.
This happens frequently with performance-sensitive projects where compiling everything without optimization would result in unacceptable performance, but there is a need to debug one single file (or a subset of them).
The simplest one-liner answer is probably just to replace your Program line with this:
env.Program('foo', ['foo.c', env.Object('bar.c', CFLAGS='-g')])
because Program can take Object nodes as well as source files, and you can override any construction variable(s) in any builder (here, we override CFLAGS in the Object builder call). If you want to break out the Object into its own line for clarity:
debug_objs = env.Object('bar.c', CFLAGS='-g')
env.Program('foo', ['foo.c', debug_objs])
and of course taking that to the limit you get a system like Avatar33 showed above.
I suppose this is a bit harder in scons than it would be in make, where you could just clean the required target and rebuild it with debug flags, which would then rebuild just that specific object.
The solution for your particular project depends on its size and how much effort the developer is prepared to put in.
So here's a rough solution where you specify source files on the command line that you want to be compiled with debug and no optimization, the rest will be compiled with -O2.
In your SConsctruct one additional line to get source files that we want to compile with debug from a command line option:
env = Environment()
env['CFLAGS'] = '-O2'
AddOption('--debug-targets', dest='debug-targets', type='string')
env.SConscript('SConscript', 'env')
And now in the SConscript file:
Import('env')
env=env.Clone()
debug_env = env.Clone()
debug_env['CFLAGS'] = '-g -O0'
normal_src = ['foo.c', 'bar.c']
debug_src = []
#Add src specified via the command line to the debug build
if GetOption('debug-targets'):
for x in GetOption('debug-targets').split(','):
if x in normal_src:
normal_src.remove(x)
debug_src.append(x)
normal_obj = env.Object(normal_src)
debug_obj = debug_env.Object(debug_src)
all_obj = normal_obj + debug_obj
env.Program('foo', all_obj)
Running scons without our debug-targets flag:
scons -Q
gcc -o bar.o -c -O2 bar.c
gcc -o foo.o -c -O2 foo.c
gcc -o foo foo.o bar.o
But now we want to compile bar.c with debug info:
scons -Q --debug-targets=bar.c
gcc -o bar.o -c -g -O0 bar.c
gcc -o foo foo.o bar.o
So that adds a bit of complexity to your build system, but if you don't need to specify debug targets from the command line like that, the developer can obviously just cut and paste sources from the normal_src list to debug_src.
There are probably many ways to improve and fine-tune this for your specific environment.

How to build *.so module in Automake and a libtool-using project?

I have the same problem as others have:
I have a *.la file generated by libtool in an Automake project (e.g. module.la),
but I need the *.so of it to use with dlopen() (e.g. module.so).
But: the project is configured and built with --disable-shared to make sure the created main binary is one big statically linked program, e.g. main.x (easier for deployment and debugging). Thus *.so files are not created.
The program main.x is a huge framework-like application which is capable of loading extensions (modules) via dlopen() -- despite it being linked statically.
This works fine when I build module.so by hand. But putting this to work in Makefile.am seems impossible to me. Yes, I can write lib_LTLIBRARIES, but with my standard --disable-shared I do not get a *.so file.
lib_LTLIBRARIES = module.la
module_la_SOURCES = module.cpp
The file module.la is created, which dlopen() refuses to load (of course).
I tried to put rules into Makefile.am building it manually and that works:
# Makefile.am (yes, .am)
all: mm_cpp_logger.so
SUFFIXES = .so
%.so: %.cpp
$(CXX) $(CXXFLAGS) -fPIC -fpic -c -I $(top_srcdir)/include -o $@ $<
%.so: %.o
$(CXX) $(LDFLAGS) -shared -fPIC -fpic -o $@ $<
But this can only be a workaround. I do not get all the nice auto-features like dependency-checking and installation.
How can I build module.so with still building the main program with --disable-shared (or with the same effect) in the Makefile.am-way?
can I postprocess *.la files to *.so files with a special automake rule?
can I tweak the lib_LTLIBRARIES process to create *.so files in any case?
What you are looking for is called a module. You can tell Autotools to create a static binary (executable) by adding -all-static to the LDFLAGS of the application. I think this is the preferred way over using --disable-shared configure flag (which really is aimed at the libraries rather than the executable)
Something like this should do the trick:
AM_CPPFLAGS=-I$(top_srcdir)/include
lib_LTLIBRARIES = module.la
module_la_LDFLAGS = -module -avoid-version -shared
module_la_SOURCES = mm_cpp_logger.cpp
bin_PROGRAMS = application
application_LDFLAGS = -all-static
application_SOURCES = main.cpp
The .so file will (as usual) end up in the .libs/ subdirectory (unless you install it, of course).
And you can build both your application and plugins in one go (even with a single Makefile.am), so there is no need to call configure multiple times.
The use of -fPIC (and friends) should be auto-detected by Autotools.
Update: here's a little trick to make the shared libraries available where you expect them. Since all shlibs end up in .libs/, it is sometimes nice to have them in a non-hidden directory.
The following makefile snippet creates convenience links (on platforms that support symlinks; otherwise the files are copied). Simply adding the snippet to your makefile (I usually use an -include convenience-link.mk) should be enough (you might need an AC_PROG_LN_S in your configure.ac).
.PHONY: convenience-link clean-convenience-link
convenience-link: $(lib_LTLIBRARIES)
@for soname in `echo | $(EGREP) "^dlname=" $^ | $(SED) -e "s|^dlname='\(.*\)'|\1|"`; do \
echo "$$soname: creating convenience link from $(abs_builddir)/.libs to $(top_builddir)"; \
rm -f $(top_builddir)/$$soname ; \
test -e $(abs_builddir)/.libs/$$soname && \
cd $(top_builddir) && \
$(LN_S) $(abs_builddir)/.libs/$$soname $$soname || true;\
done
clean-convenience-link:
@for soname in `echo | $(EGREP) "^dlname=" $(lib_LTLIBRARIES) | $(SED) -e "s|^dlname='\(.*\)'|\1|"`; do \
echo "$$soname: cleaning convenience links"; \
test -L $(top_builddir)/$$soname && rm -f $(top_builddir)/$$soname || true; \
done
all-local:: convenience-link
clean-local:: clean-convenience-link
I've solved a similar problem using the noinst_LTLIBRARIES macro.
The noinst_LTLIBRARIES macro creates static, non-installable libraries that are only used internally. All noinst_LTLIBRARIES static libraries are created even if you specify the --disable-static configure option.
lib_LTLIBRARIES = libtokenclient.la
noinst_LTLIBRARIES = libtokenclient_static.la
libtokenclient_la_SOURCES = $(TOKEN_SERVER_CLIENT_SOURCES) cDynlib.c cDynlib.h token_mod.h
libtokenclient_la_CFLAGS = @BASE_CFLAGS@
libtokenclient_la_CXXFLAGS = $(libtokenclient_la_CFLAGS)
libtokenclient_la_LIBADD = @B_BASE_OS_LIBS@
libtokenclient_la_LDFLAGS = @LT_PLUGIN_LIBS_FLAGS@ @LIBS_FLAGS@ $(TOKEN_SERVER_CLIENT_EXPORT_SYMBOLS)
libtokenclient_static_la_SOURCES = $(libtokenclient_la_SOURCES)
libtokenclient_static_la_CFLAGS = $(libtokenclient_la_CFLAGS)
libtokenclient_static_la_CXXFLAGS = $(libtokenclient_static_la_CFLAGS)
token_test_SOURCES = $(TEST_SOURCES)
token_test_LDADD = @B_BASE_OS_LIBS@ libtokenclient_static.la
token_test_CFLAGS = @BASE_CFLAGS@
token_test_CXXFLAGS = $(token_test_CFLAGS)
I use noinst_LTLIBRARIES static libraries for 2 reasons:
to speed up compile time: I create static libraries to be used as intermediate containers for code that will be linked against more than once; the code is compiled just once, whereas otherwise automake would compile the same source files once for each target
to statically link the code to some executable
One thing that could work according to the libtool documentation for LT_INIT is to partition your build into two packages: the main application and the plugins. That way you could (in theory) invoke:
./configure --enable-shared=plugins
and things would work the way you would expect.
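A minimal sketch of that layout (untested; the sub-package and directory names here are invented for illustration, and if I read the libtool documentation correctly the package list is matched against each sub-package's own name from AC_INIT):
# top-level configure.ac
AC_INIT([bigproject], [1.0])
AC_CONFIG_SUBDIRS([app plugins])
AC_OUTPUT
# app/configure.ac uses AC_INIT([app], ...) with plain LT_INIT,
# plugins/configure.ac uses AC_INIT([plugins], ...) with LT_INIT([dlopen]).
Then, from the top level, ./configure --enable-shared=plugins should leave the application's libraries static while the plugin modules are built shared.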
