I'm currently working on a Linux project using autotools. The code is kept in SCM (Perforce) and we have the configure script, Makefile.am, Makefile.in - the usual autotools boilerplate. Recently somebody changed Makefile.am but forgot to regenerate Makefile.in; when I tried to build, I got this error:
WARNING: `automake-1.11' is missing on your system. You should only need it if
you modified `Makefile.am', `acinclude.m4' or `configure.ac'.
You might want to install the `Automake' and `Perl' packages.
Grab them from any GNU archive site.
cd . && /bin/bash ./config.status Makefile depfiles
I see the automake version is hardcoded in the configure script (and seems to come from aclocal.m4):
am__api_version='1.11'
So I guess I need automake-1.11 (not 1.10, not anything newer) to regenerate the Makefile.in file.
Why is that? Why should we be tied to a specific automake version? We're mostly building on Ubuntu 14.04, where 1.14 is the default version installed. Is there a way to tell the build system to simply use whatever version of automake is installed? Or is it safe to maybe remove the am__api_version definition from aclocal.m4?
The problem is that you are trying to recreate Makefile.in with a different version of the autotools. That leads to a version mismatch, because aclocal.m4 was built with another version and it is used to generate the remaining files.
Instead of recreating only Makefile.in, recreate aclocal.m4 and all the other autotools-generated files as well:
autoreconf --force --install
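After that, re-run the usual cycle so that config.status and the Makefiles are rebuilt from the fresh templates (a minimal sketch; nothing project-specific is assumed):
./configure
make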
The important question is why someone would pin am__api_version in the first place.
The most probable answer: because automake tends to change macro arguments, or even remove macros from previous releases entirely. Each automake release announcement has a section called
WARNING: Future backward-incompatibilities!
and another one called
Obsolete features removed
You can refer to the announcements for releases 1.12, 1.13 and 1.14.
So configure.ac or Makefile.am might contain macros which have become obsolete in later releases. When you hit this problem you have two options: either find out which feature replaced the obsolete one, or stick to one version of automake. Most developers do not consider the autotools files part of the project's source code; they just want to keep the working build working and therefore stick to the current automake version.
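For example (a common case, though whether it applies here depends on what your configure.ac actually uses), automake 1.13 removed the long-deprecated AM_CONFIG_HEADER macro, so a tree using it stops regenerating with newer automake until it is updated:
dnl old configure.ac fragment, rejected by automake >= 1.13
AM_CONFIG_HEADER([config.h])
dnl modern replacement, accepted by old and new automake alike
AC_CONFIG_HEADERS([config.h])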
Note that most distributions still package older versions of automake. On Ubuntu you can find:
$ apt-cache search automake | grep automake
automake - Tool for generating GNU Standards-compliant Makefiles
automake1.4 - A tool for generating GNU Standards-compliant Makefiles
automake1.9 - A tool for generating GNU Standards-compliant Makefiles
automake1.10 - Tool for generating GNU Standards-compliant Makefiles
automake1.11 - Tool for generating GNU Standards-compliant Makefiles
That means you can simply install the exact version the project asks for.
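On Ubuntu 14.04, for instance, regenerating the tree with the pinned version would look roughly like this (autoreconf honours the ACLOCAL and AUTOMAKE environment variables; the package name is taken from the list above):
sudo apt-get install automake1.11
ACLOCAL=aclocal-1.11 AUTOMAKE=automake-1.11 autoreconf --force --install
./configure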
So you could drop the pinned am__api_version='1.11' (by regenerating with a newer automake rather than hand-editing aclocal.m4, which is itself a generated file), find out which macros have become obsolete, and then decide which of the two solutions above to follow.
I found a package on GitHub (https://github.com/okbob/ncurses-st-menu) and am having trouble compiling it for BSD platforms like NetBSD or OpenBSD. The instructions say to run ./autogen.sh, ./configure, and then make. So I install autoconf, autotools, libtool, gettext and any other necessary packages and run ./autogen.sh. It works without spitting out any errors. But ./configure says it doesn't support the OS, e.g. "OS x86_64-unknown-netbsd9.0 is not supported" on NetBSD. Can someone else try to compile this program? Because if this was done with autotools, it certainly should support any of the four major BSD operating systems.
I created a port for FreeBSD here; maybe it will help you get it running on NetBSD. The most important part is removing the AC_MSG_ERROR(["OS $host_os is not supported"]) line from tools/ax_pdcurses.m4, then touching config.make and calling autogen.sh to regenerate the configure script. It's also important to set CFLAGS properly and to have the appropriate dependencies installed. I also used gmake rather than patching the Makefile, since I didn't feel motivated to fix it completely.
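Roughly, the sequence was something like this (a sketch, not the exact port patch; the sed expression assumes the error line looks exactly as quoted above):
# drop the hard failure for unknown OSes from the bundled m4 macro
sed -i.bak '/OS \$host_os is not supported/d' tools/ax_pdcurses.m4
touch config.make
./autogen.sh    # regenerate configure from the patched macros
./configure
gmake           # use GNU make instead of patching the Makefile for BSD make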
I don't know the autogen/configure tools, but if you look at the configure file:
https://github.com/okbob/ncurses-st-menu/blob/master/configure
lines 4245-4269 only check for linux, cygwin and mingw.
For any other OS it gives the error: OS $host_os is not supported
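I haven't dug into the sources, but that check presumably boils down to a host_os case statement in the configure machinery along these lines (illustrative only, not the project's exact code):
case $host_os in
  linux*|cygwin*|mingw*)
    # supported platforms
    ;;
  *)
    AC_MSG_ERROR(["OS $host_os is not supported"])
    ;;
esac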
I have been trying to install the llvm-general-3.5.1.0 package for about a week. Basically I am getting this error: link. My situation is identical: Windows 10, GHC 7.10.2, cabal 1.22.4.0. I installed LLVM 3.5.2 from sources with CMake and everything went fine. In the llvm/lib directory I have *.lib files (e.g. LLVMAnalysis.lib).
But somehow cabal can't see those libraries and gives this frustrating error:
Configuring llvm-general-3.5.1.0...
setup.exe: Missing dependencies on foreign libraries:
* Missing C libraries: LLVMLTO, LLVMObjCARCOpts, LLVMLinker, LLVMipo,
LLVMVectorize, LLVMBitWriter, LLVMCppBackendCodeGen, LLVMCppBackendInfo,
LLVMTableGen, LLVMDebugInfo, LLVMOption, LLVMX86Disassembler,
LLVMX86AsmParser, LLVMX86CodeGen, LLVMSelectionDAG, LLVMAsmPrinter,
LLVMX86Desc, LLVMX86Info, LLVMX86AsmPrinter, LLVMX86Utils, LLVMJIT,
LLVMIRReader, LLVMAsmParser, LLVMLineEditor, LLVMMCAnalysis,
LLVMMCDisassembler, LLVMInstrumentation, LLVMInterpreter, LLVMCodeGen,
LLVMScalarOpts, LLVMInstCombine, LLVMTransformUtils, LLVMipa, LLVMAnalysis,
LLVMProfileData, LLVMMCJIT, LLVMTarget, LLVMRuntimeDyld, LLVMObject,
LLVMMCParser, LLVMBitReader, LLVMExecutionEngine, LLVMMC, LLVMCore,
LLVMSupport
This problem can usually be solved by installing the system packages that
provide these libraries (you may need the "-dev" versions). If the libraries
are already installed but in a non-standard location then you can use the
flags --extra-include-dirs= and --extra-lib-dirs= to specify where they are.
I really want to use this package on Windows, but nothing seems to work (I tried everything, including --extra-lib-dirs, and compiled with both MinGW and VS; same problem).
I can't accept that it won't install. There must be some way to fix the Setup.hs of this cabal package, or something. Does anyone have an idea what can be wrong with cabal in this case and how I can work around it? I don't know exactly how cabal works; maybe someone with that knowledge will have an idea? Or maybe there is a way to do this without cabal?
OK, I've managed to build it and, I think, found the root of the issue.
First, steps to build:
Get MinGW. My installation of MinGW has gcc 4.8.
Get 32-bit MinGHC.
Compile LLVM 3.5 with MinGW's gcc and install it somewhere.
Copy the contents of the MinGW installation directory into MinGHC Install Dir\ghc-7.10.2\mingw, replacing conflicting files.
On the command line, set your PATH so it contains the Haskell toolset from MinGHC (I recommend using the switch .bat scripts) and llvm-config.exe.
Get the llvm-general package source, either using cabal fetch or by downloading it from Hackage in a browser.
Replace the cc-options: -std=c++11 line of llvm-general.cabal with cc-options: -std=gnu++11.
Finally, cabal configure and cabal build should work.
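In terms of concrete commands, the tail end of the procedure looks roughly like this (a sketch of my setup; it assumes PATH already contains MinGHC's tools and llvm-config.exe from the steps above):
cabal unpack llvm-general-3.5.1.0
cd llvm-general-3.5.1.0
rem edit llvm-general.cabal here: cc-options: -std=c++11  ->  cc-options: -std=gnu++11
cabal configure
cabal build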
I have been changing my build environment many times, so if this doesn't work for you, let me know; I probably forgot something.
Now let's go into details.
What we thought was a bug in cabal is not one, actually. The problem is that both stack and MinGHC (and the Haskell Platform, I guess) ship a rather old gcc, 4.6. This gcc has two defects:
It doesn't support -std=c++11, so LLVM 3.5 can't be built with it. As a consequence, this gcc also can't be used by ghc when compiling llvm-general, because it can't parse the LLVM headers properly.
Even if it could, its linker can't link against LLVM libs compiled by MinGW's gcc 4.8. This is why cabal was telling you it couldn't find the LLVM libs. I hacked Setup.hs so that it wouldn't look for these libs but instead passed -lLLVMSomething to the linker via the -pgml ghc option. That led to a clear error message:
ld.exe: ignoring libLLVMSupport.a ...
ld.exe: can't find -lLLVMSupport
So cabal was actually finding these libs, but was dropping them because they couldn't be linked against.
Ideally, the solution would be to update the MinGW distribution used by stack/MinGHC. But as a workaround you can just replace the old gcc with a newer one.
Finally, -std=gnu++11 is used because the current MinGW release is affected by this bug, which prevents compilation of the C++ bits of the package. Whew, that was a long road.
I mean those C or C++ projects that you build from source on Linux and UNIX systems, usually by issuing these commands:
./configure
make
sudo make install
And they also have files like ./configure, ./configure.ac, ./configure.in in the top-level directory.
I've heard them variously called autotools projects, or autoconf projects, or automake projects, but I'm not sure which name is the correct one. Is there even a consensus on what they should be called?
Autoconf and automake are collectively called the GNU autotools (and libtool may be included in that category as well), so autotools is the most general name.
Note that not every program with a configure script that generates a makefile is using the autotools, or it may be using only some of them.
According to GNU it's officially called the GNU Build System, which is the name under which you'll find it on Wikipedia. But most people (er, almost everyone?) call it "the GNU autotools" or just "autotools". As stated by @larsmans and others in his answer, none of these tools are strictly required. To quote the last paragraph of the link:
The Autotools are tools that will create a GNU Build System for your package. Autoconf mostly focuses on configure and Automake on Makefiles. It is entirely possible to create a GNU Build System without the help of these tools....
The first page of the autoconf info manual refers to such packages as "autoconfiscated".
I am trying to build Guile 1.8.8 from source. I am stuck at the point where the build system is looking for libtool. I have installed it in a non-standard location.
I have already built Guile 2.0.11. In the 2.0.11 build system there is an explicit configure flag, --with-libltdl-prefix, which I think tells the build system where libtool is installed.
For Guile 1.8.8, I have Libtool installed in a non-standard location. How do I tell the build system where it is installed?
I am specifically getting error messages like:
libguile/Makefile.am:40: Libtool library used but `LIBTOOL' is undefined
libguile/Makefile.am:40: The usual way to define `LIBTOOL' is to add `LT_INIT'
I think in general this is a question regarding one or more of the autotools and how the build system finds programs / headers / libraries in non-standard locations.
This link is informative: How to point autoconf/automake to non-standard packages
Find the directory containing the *.m4 files for libtool (or for whatever package is installed in a non-standard location), then point aclocal at it and regenerate:
export ACLOCAL_PATH=/path/to/dir/with/m4/files
cd /path/to/source/tree    # the directory containing configure.ac (or configure.in)
autoreconf -if
./configure
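For example, if Libtool had been installed under a hypothetical prefix such as /opt/libtool, that might look like:
# /opt/libtool is a made-up example prefix; substitute your actual install location
export ACLOCAL_PATH=/opt/libtool/share/aclocal    # where libtool.m4, ltoptions.m4, ... were installed
export PATH=/opt/libtool/bin:$PATH                # so autoreconf can find libtoolize
cd ~/src/guile-1.8.8
autoreconf -if
./configure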
I experienced a (for me) strange behaviour today. Using QMake with the PkgConfig options etc. I was able to link the OpenCV libraries, but then I switched to CMake with PkgConfig. When I tried to build my software, the linker complained that it could not find the library libcvaux, which is among the libraries pkg-config reports for OpenCV (pkg-config --libs opencv).
In /usr/lib I found a libcvaux.so.{version}, but no "plain" entry libcvaux.so. So what I did was to create a symlink, and now it works.
Now I wonder why it worked before. Is there some option you can pass to ld that says "use the newest version, as determined by the numbers after the .so suffix"? Or is it rather a packaging bug, i.e. the maintainers of the opencv package forgot to add this symlink? Because e.g. libcv and libhighgui do have such symbolic links.
Thank you!
From the ldconfig manpage:
ldconfig checks the header and file names of the libraries it encounters when determining which versions should have their links updated.
Maybe an earlier ldconfig run deleted the link.
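If you want to compare what is actually on disk with what the loader cache knows about, something like this helps (assuming the library lives in /usr/lib):
ls -l /usr/lib/libcvaux*    # the bare .so symlink is what ld needs at link time
ldconfig -p | grep cvaux    # what the runtime loader cache currently contains
sudo ldconfig               # refresh the cache and the .so.MAJOR links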