error: am__fastdepCXX does not appear in AM_CONDITIONAL - linux

Following this tutorial, I have written my own "Hello World" in C++.
This is the code in prueba.cpp:
#include <iostream>

int main()
{
    std::cout << "Hola Mundo" << std::endl;
    return 0;
}
Then, I have created configure.ac file with this information:
AC_INIT([holamundo], [0.1], [address#address.com])
AM_INIT_AUTOMAKE
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
and Makefile.am
AUTOMAKE_OPTIONS = foreign
bin_PROGRAMS = holamundo
holamundo_SOURCES = ./prueba.cpp
Both files are in the same folder as prueba.cpp.
Finally, in a console, in the same folder as prueba.cpp, I run the commands:
aclocal (no errors)
autoconf (no errors)
automake --add-missing
Then I get the following errors:
Makefile.am:3: warning: source file './prueba.cpp' is in a subdirectory,
Makefile.am:3: but option 'subdir-objects' is disabled
automake: warning: possible forward-incompatibility.
automake: At least one source file is in a subdirectory, but the 'subdir-objects'
automake: automake option hasn't been enabled. For now, the corresponding output
automake: object file(s) will be placed in the top-level directory. However, this
automake: behavior may change in a future Automake major version, with object
automake: files being placed in the same subdirectory as the corresponding sources.
automake: You are advised to start using 'subdir-objects' option throughout your
automake: project, to avoid future incompatibilities.
/usr/share/automake-1.16/am/depend2.am: error: am__fastdepCXX does not appear in AM_CONDITIONAL
/usr/share/automake-1.16/am/depend2.am: The usual way to define 'am__fastdepCXX' is to add 'AC_PROG_CXX'
/usr/share/automake-1.16/am/depend2.am: to 'configure.ac' and run 'aclocal' and 'autoconf' again
Makefile.am: error: C++ source seen but 'CXX' is undefined
Makefile.am: The usual way to define 'CXX' is to add 'AC_PROG_CXX'
Makefile.am: to 'configure.ac' and run 'autoconf' again.

Issue 1
Makefile.am:3: warning: source file './prueba.cpp' is in a subdirectory,
Makefile.am:3: but option 'subdir-objects' is disabled
automake: warning: possible forward-incompatibility.
[...]
Do not prefix source names with ./ (or ../) in Makefile.am.
Automake can handle sources and targets in bona fide subdirectories, with or without recursive make, but you do need to set up your project for that, and I would not go there until you have a better handle on Autotools basics.
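For this project that just means dropping the ./ prefix in Makefile.am:

AUTOMAKE_OPTIONS = foreign
bin_PROGRAMS = holamundo
holamundo_SOURCES = prueba.cpp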
Issue 2
Makefile.am: error: C++ source seen but 'CXX' is undefined
Makefile.am: The usual way to define 'CXX' is to add 'AC_PROG_CXX'
Makefile.am: to 'configure.ac' and run 'autoconf' again.
The diagnostic already explains the problem and the solution, but see also below.
Issue 3
/usr/share/automake-1.16/am/depend2.am: error: am__fastdepCXX does not appear in AM_CONDITIONAL
/usr/share/automake-1.16/am/depend2.am: The usual way to define 'am__fastdepCXX' is to add 'AC_PROG_CXX'
/usr/share/automake-1.16/am/depend2.am: to 'configure.ac' and run 'aclocal' and 'autoconf' again
Again, the diagnostic already describes a solution. Since it is the same solution another diagnostic suggests, and it is plausible and appropriate here, it is a good bet. Specifically:
configure.ac
AC_INIT([holamundo], [0.1], [address#address.com])
AM_INIT_AUTOMAKE
AC_PROG_CC
# Configure the C++ compiler:
AC_PROG_CXX
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
Issue 4
Finally, in a console, in the same folder as prueba.cpp, I run the commands:
Generally speaking, you should not run the individual autotools (autoconf, automake, etc.) by hand. Instead, use autoreconf, which identifies which of the (other) autotools need to be run and runs them in the correct order. Among the command-line options it supports are -i / --install and -f / --force, which take care of installing the local autotool support files into the source tree. You should probably run autoreconf --install --force once in your source tree. After that, plain autoreconf should be enough, unless you change to a different version of the autotools or modify one of the local autotool components.
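With both files fixed, a typical sequence from the project directory would be roughly:

autoreconf --install --force   # runs aclocal, autoconf, automake in the right order
./configure                    # generates Makefile
make                           # builds holamundo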

Related

Target dependent ParseConfig

I'm trying to build only a library, without having all of the test dependencies available.
My tests directory has its own SConscript file that runs env.ParseConfig('pkg-config --libs --cflags libfuzzertestdependonthis').
If I build the library by specifying only the lib target, the ParseConfig command still fails because the library is not available in my build environment.
The only solutions I have found are really bad:
enclosing env.ParseConfig in a try/except block
checking the command-line build targets to exclude parts of the SConstruct file
I wonder if there is a smarter way to do this; it would be great if ParseConfig could be handled as a source node for a specific target instead of being run immediately.
Edit: my question doesn't seem to be clear enough, so I will try a better example.
When I'm building in release mode, I don't have (and don't want) the libcunit library required to build the tests. The issue I'm facing is that the ParseConfig command is always executed regardless of the target; in this example ParseConfig will execute pkg-config --libs libcunit, which fails because the library is not installed.
OK, from your update: if you're building in release mode, don't call ParseConfig().
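A minimal SConstruct sketch of that idea (the mode command-line argument and the cunit pkg-config name are assumptions; adjust them to your project):

# SConstruct sketch: only consult pkg-config for test dependencies in test builds.
env = Environment()
mode = ARGUMENTS.get('mode', 'release')        # e.g. scons mode=test

SConscript('lib/SConscript', exports='env')    # the library is built in every mode

if mode == 'test':
    test_env = env.Clone()
    # pkg-config runs only when the test dependencies are expected to be installed
    test_env.ParseConfig('pkg-config --cflags --libs cunit')
    SConscript('tests/SConscript', exports={'env': test_env})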

How to compile an extension into sqlite?

I would like to compile an extension into sqlite for loading at runtime.
The file I am using is extension-functions.c from https://www.sqlite.org/contrib
I have been able to compile it into a loadable module, but I need it statically linked so it is available at run time (using shell.c to create an interface at run time).
I have read the manual on linking but, to be honest, it's a little bit beyond my scope of comprehension!
Could someone let me know what I need to do to compile it, please?
I found a way to compile sqlite3 from source code with additional functions provided by extension_functions.c.
Note:
For now I am showing a quick and dirty way to compile SQLite with additional features, because I haven't succeeded in doing it the right way.
But please remember that it would probably be much better to prepare a brand new part of the amalgamation for adding custom features, as #ngreen says above.
That is the way SQLite itself is designed to be extended.
1. Download the sqlite source code
https://www.sqlite.org/download.html
Choose the amalgamation, preferably the autoconf version.
For example, here is the download link of version 3.33.0.
https://www.sqlite.org/2020/sqlite-autoconf-3330000.tar.gz
curl -O https://www.sqlite.org/2020/sqlite-autoconf-3330000.tar.gz
tar -xzvf sqlite-autoconf-3330000.tar.gz
cd sqlite-autoconf-3330000
2. Download extension_functions.c
Listed at this url.
https://sqlite.org/contrib
Actual url:
https://sqlite.org/contrib/download/extension-functions.c?get=25
curl -o extension_functions.c https://sqlite.org/contrib/download/extension-functions.c?get=25
3. Configure compilation
We can specify the --prefix option to determine the destination of the built files.
./configure --prefix=/usr/local/sqlite/3.33.0
Other compile-time options can be specified as environment variables at this point.
Check https://www.sqlite.org/draft/compile.html for more details.
Here is an example to enable JSON and RTree Index features.
CPPFLAGS="-DSQLITE_ENABLE_JSON1=1 -DSQLITE_ENABLE_RTREE=1" ./configure --prefix=/usr/local/sqlite/3.33.0
And autoconf options can also be specified.
CPPFLAGS="-DSQLITE_ENABLE_JSON1=1 -DSQLITE_ENABLE_RTREE=1" ./configure --prefix=/usr/local/sqlite/3.33.0 --enable-dynamic-extensions
I couldn't find any documentation about these options on the official website, but found the list in the configure script itself:
Optional Features:
--disable-option-checking ignore unrecognized --enable/--with options
--disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no)
--enable-FEATURE[=ARG] include FEATURE [ARG=yes]
--enable-silent-rules less verbose build output (undo: "make V=1")
--disable-silent-rules verbose build output (undo: "make V=0")
--disable-largefile omit support for large files
--enable-dependency-tracking
do not reject slow dependency extractors
--disable-dependency-tracking
speeds up one-time build
--enable-shared[=PKGS] build shared libraries [default=yes]
--enable-static[=PKGS] build static libraries [default=yes]
--enable-fast-install[=PKGS]
optimize for fast installation [default=yes]
--disable-libtool-lock avoid locking (might break parallel builds)
--enable-editline use BSD libedit
--enable-readline use readline
--enable-threadsafe build a thread-safe library [default=yes]
--enable-dynamic-extensions
support loadable extensions [default=yes]
--enable-fts4 include fts4 support [default=yes]
--enable-fts3 include fts3 support [default=no]
--enable-fts5 include fts5 support [default=yes]
--enable-json1 include json1 support [default=yes]
--enable-rtree include rtree support [default=yes]
--enable-session enable the session extension [default=no]
--enable-debug build with debugging features enabled [default=no]
--enable-static-shell statically link libsqlite3 into shell tool
[default=yes]
FYI, here is the default install script used in Homebrew. It may be useful for deciding which options to specify.
def install
  ENV.append "CPPFLAGS", "-DSQLITE_ENABLE_COLUMN_METADATA=1"
  # Default value of MAX_VARIABLE_NUMBER is 999 which is too low for many
  # applications. Set to 250000 (Same value used in Debian and Ubuntu).
  ENV.append "CPPFLAGS", "-DSQLITE_MAX_VARIABLE_NUMBER=250000"
  ENV.append "CPPFLAGS", "-DSQLITE_ENABLE_RTREE=1"
  ENV.append "CPPFLAGS", "-DSQLITE_ENABLE_FTS3=1 -DSQLITE_ENABLE_FTS3_PARENTHESIS=1"
  ENV.append "CPPFLAGS", "-DSQLITE_ENABLE_JSON1=1"
  args = %W[
    --prefix=#{prefix}
    --disable-dependency-tracking
    --enable-dynamic-extensions
    --enable-readline
    --disable-editline
    --enable-session
  ]
  system "./configure", *args
  system "make", "install"
end
4. Remove the conflict
Now we have to modify extension_functions.c so that it doesn't conflict with the SQLite source code when they are compiled together.
Open extension_functions.c and replace lines 123-128 with the single line SQLITE_EXTENSION_INIT1.
#ifdef COMPILE_SQLITE_EXTENSIONS_AS_LOADABLE_MODULE
#include "sqlite3ext.h"
SQLITE_EXTENSION_INIT1
#else
#include "sqlite3.h"
#endif
↓
SQLITE_EXTENSION_INIT1
5. Enable the extension functions
We need to insert a few lines into shell.c to import and enable the extension functions.
Open shell.c, search for static void open_db, and insert #include "extension_functions.c" on the line above it.
#include "extension_functions.c"
static void open_db(ShellState *p, int openFlags){
Then search for sqlite3_shathree_init(p->db, 0, 0); and insert sqlite3_extension_init(p->db, 0, 0); at the bottom of the init calls.
#endif
sqlite3_fileio_init(p->db, 0, 0);
sqlite3_shathree_init(p->db, 0, 0);
sqlite3_completion_init(p->db, 0, 0);
sqlite3_uint_init(p->db, 0, 0);
sqlite3_decimal_init(p->db, 0, 0);
sqlite3_ieee_init(p->db, 0, 0);
sqlite3_extension_init(p->db, 0, 0);
6. Compile
Finally, we are ready to compile SQLite with the extension functions included.
make install
It takes a while; once done, the distribution files will be generated at the destination specified at configure time through the --prefix option.
# Now we can use extension_functions without loading it manually.
$ /usr/local/sqlite/3.33.0/bin/sqlite3
sqlite> select cos(10);
-0.839071529076452
Q: "How to compile an extension into sqlite?"
A: That depends on the extension. To compile extension-functions.c referenced in the OP:
gcc -fPIC -shared extension-functions.c -o libsqlitefunctions.so -lm
(to remove the compilation warning see here)
Usage:
$ sqlite3
sqlite> select cos(radians(45));
0.707106781186548
sqlite> .exit
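If the functions are not available right away, a module built this way normally has to be loaded into the shell first; a minimal example, assuming the .so was built in the current directory:

$ sqlite3
sqlite> .load ./libsqlitefunctions
sqlite> select cos(radians(45));
0.707106781186548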
I'm not sure if this is a complete answer yet, but from the how to compile document, it looks like you might want to make an amalgamation first. In src/shell.c.in you can search for ext/misc and you'll see lines such as this:
INCLUDE ../ext/misc/completion.c
These lines are used by the tool/mkshellc.tcl script to build the combined source file that will end up being compiled into the command line shell. Once the make process for sqlite3.c is complete, you should see the code you want in the combined source file.
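Presumably the same pattern applies to other extensions under ext/misc; for example, the series extension (assuming it sits at ext/misc/series.c, as it does in recent source trees) would be pulled in with a line like:

INCLUDE ../ext/misc/series.c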
Then, I found a function that contained this code:
sqlite3_shathree_init(p->db, 0, 0);
All I had to do was add this in the same place:
sqlite3_series_init(p->db, 0, 0);
And now I'm able to use the generate_series function. I can't find the functions.c file you were talking about, but the process should be something similar.

How to properly create autoconf setup of netcdf 4.x?

I am not sure exactly what my question is as I get seriously turned around by autoconf/automake/libtoolize etc. Several of us are trying to autoconferize mbsystem. I've thrown a repo up of the work to date here:
https://bitbucket.org/schwehr/mbsystem
I'm trying to improve the netcdf setup to use nc-config, but am uncertain how to do this correctly. I am working on configure.in. It seems unable to find a header with AC_CHECK_HEADER("netcdfcpp.h") after INCLUDES="$INCLUDES `$nc_config --cflags`", as taken from the gdl netcdf check. What is the correct way to update the path from nc-config --cflags?
http://gnudatalanguage.cvs.sourceforge.net/viewvc/gnudatalanguage/gdl/configure.in?revision=1.121
I then tried to use AX_PATH_GENERIC and got stuck on this error with m4_include([m4/ax_path_generic.m4]):
Running autoconf ...
configure.in:29: error: possibly undefined macro: AC_SUBST
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
configure:12992: error: possibly undefined macro: AC_MSG_RESULT
Any help in creating a netcdf check that actually works with funky non-standard install locations via nc-config, and in figuring out how to properly put a macro in the m4 directory, would be a huge help.
A pointer to a package doing this really cleanly would be a big help as well. I've been looking at the netcdf, gdal, geos and gdl sources for examples, and things like the octopus netcdf check do not use nc-config... http://www.tddft.org/trac/octopus/browser/trunk/m4/netcdf.m4
The current setup with fink for netcdf 4.x:
nc-config --cflags --libs
-I/sw/opt/netcdf7/include -I/sw/include
-L/sw/opt/netcdf7/lib -lnetcdf
Thanks!
See Makefile.am: How to use curl-config and xml2-config in configure.ac? and substitute netcdf for xml2/curl.
Just use
PKG_CHECK_MODULES([libnetcdf], [netcdf])
in configure.ac, and then, in Makefile.am:
AM_CPPFLAGS = ${libnetcdf_CFLAGS}
bin_PROGRAMS = foo
foo_SOURCES = ...
foo_LDADD = ${libnetcdf_LIBS}
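As a sanity check that the netcdf pkg-config metadata is visible (newer netcdf releases install a netcdf.pc file; for non-standard prefixes point PKG_CONFIG_PATH at it), you can query pkg-config directly:

pkg-config --cflags --libs netcdf
# e.g. for the fink layout from the question:
# PKG_CONFIG_PATH=/sw/opt/netcdf7/lib/pkgconfig pkg-config --cflags --libs netcdf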
The "correct" way to use a third party m4 macro is to use aclocal (usually via automake) to generate aclocal.m4. If you are using automake, just add
ACLOCAL_AMFLAGS = -I m4
to Makefile.am and put
AC_CONFIG_MACRO_DIR([m4])
in configure.ac (after renaming configure.in).
If you are not using automake, add '-I m4' when you invoke aclocal. If you are not using aclocal, then you'll have to append the definition of the macro to the end of aclocal.m4 (and be careful to never run aclocal, as that will overwrite the file.)
There is no good example of a clean way to use conf scripts to do a build because using such scripts is an inherently flawed approach. A slightly cleaner approach is to stop using custom scripts and use pkg-config via PKG_CHECK_MODULES, but the cleanest way to do this is to educate your users. If the user wants to install the library in funky non-standard locations then they need to be educated enough to set LDFLAGS and CPPFLAGS appropriately.
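In practice that education boils down to passing the paths at configure time, for example (the paths are illustrative, borrowed from the fink layout in the question):

./configure CPPFLAGS="-I/sw/opt/netcdf7/include -I/sw/include" LDFLAGS="-L/sw/opt/netcdf7/lib"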

Creating binary with CMake removes runtime path

I am using CMake to build a program on linux. The program compiles successfully and runs from the project build directory. The program is linked with a custom library in the directory ${HOME}/build/lib
I have an install stage with:
install(TARGETS ProgName RUNTIME DESTINATION bin)
When I run make install the program gets put in the correct place, but the cmake installer removes the runtime path from the binary.
-- Install configuration: "Debug"
-- Installing: *binary name*
-- Removed runtime path from "*binary name*"
I have read articles on the internet discussing the misuse of the LD_LIBRARY_PATH variable, so I like to keep mine limited to system library locations if possible. I am not a sysadmin, so I cannot add the location to the default linker search path either.
Does anyone know how I can keep the development-time linking paths when installing or at least customising which paths are added to the runtime?
Cheers
Note: if you don't want to modify the cmake scripts themselves by setting properties here and there, you can launch cmake with a directive asking it not to remove the runtime path:
See "Variables that Control the Build", with variable: "CMAKE_SKIP_RPATH"
If true, do not add run time path information.
If this is set to TRUE, then the rpath information is not added to compiled executables.
The default is to add rpath information if the platform supports it. This allows for easy running from the build tree.
To omit RPATH in the install step, but not the build step, use CMAKE_SKIP_INSTALL_RPATH instead.
If the installed files already contain the right runtime path, that directive will keep cmake from modifying the runtime path embedded in them.
cmake -DCMAKE_SKIP_RPATH=ON xxx.cmake
You should look at the set_target_properties command and the BUILD_WITH_INSTALL_RPATH property:
http://www.cmake.org/cmake/help/cmake-2-8-docs.html#command:set_target_properties
This works for CMake 2.8
set_target_properties(foo PROPERTIES INSTALL_RPATH_USE_LINK_PATH TRUE)
where foo is the target you defined earlier:
project(foo)
add_executable(foo ...)
...
install(TARGETS foo DESTINATION bin)
...
Before
% sudo make install
Install the project...
-- Install configuration: ""
-- Installing: /opt/mystuff/bin/foo
-- Removed runtime path from "/opt/mystuff/bin/foo"
After
% sudo make install
Install the project...
-- Install configuration: ""
-- Installing: /opt/mystuff/bin/foo
-- Set runtime path of "/opt/mystuff/bin/foo" to "/opt/zzyzx/lib:/opt/bar/lib/x86_64"
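If the installed binary should keep pointing at the custom library directory from the question (${HOME}/build/lib), you can also set the install-tree RPATH explicitly; a sketch, where ProgName is the target from the question and the exact path is an assumption:

set_target_properties(ProgName PROPERTIES
    INSTALL_RPATH "$ENV{HOME}/build/lib"     # RPATH baked into the installed binary
    INSTALL_RPATH_USE_LINK_PATH TRUE)        # also keep link-time directories
install(TARGETS ProgName RUNTIME DESTINATION bin)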

Installing and Linking PhysX Libraries in Debian Linux

I am trying to get PhysX working using Ubuntu.
First, I downloaded the SDK here:
http://developer.download.nvidia.com/PhysX/2.8.1/PhysX_2.8.1_SDK_CoreLinux_deb.tar.gz
Next, I extracted the files and installed each package with:
dpkg -i filename.deb
This gives me the following files located in /usr/lib/PhysX/v2.8.1:
libNxCharacter.so
libNxCooking.so
libPhysXCore.so
libNxCharacter.so.1
libNxCooking.so.1
libPhysXCore.so.1
Next, I created symbolic links to /usr/lib:
sudo ln -s /usr/lib/PhysX/v2.8.1/libNxCharacter.so.1 /usr/lib/libNxCharacter.so.1
sudo ln -s /usr/lib/PhysX/v2.8.1/libNxCooking.so.1 /usr/lib/libNxCooking.so.1
sudo ln -s /usr/lib/PhysX/v2.8.1/libPhysXCore.so.1 /usr/lib/libPhysXCore.so.1
Now, using Eclipse, I have specified the following libraries (-l):
libNxCharacter.so.1
libNxCooking.so.1
libPhysXCore.so.1
And the following search paths just in case (-L):
/usr/lib/PhysX/v2.8.1
/usr/lib
Also, as Gerald Kaszuba suggested, I added the following include paths (-I):
/usr/lib/PhysX/v2.8.1
/usr/lib
Then, I attempted to compile the following code:
#include "NxPhysics.h"
NxPhysicsSDK* gPhysicsSDK = NULL;
NxScene* gScene = NULL;
NxVec3 gDefaultGravity(0,-9.8,0);
void InitNx()
{
gPhysicsSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
if (!gPhysicsSDK)
{
std::cout<<"Error"<<std::endl;
return;
}
NxSceneDesc sceneDesc;
sceneDesc.gravity = gDefaultGravity;
gScene = gPhysicsSDK->createScene(sceneDesc);
}
int main(int arc, char** argv)
{
InitNx();
return 0;
}
The first error I get is:
NxPhysics.h: No such file or directory
Which tells me that the project is obviously not linking properly. Can anyone tell me what I have done wrong, or what else I need to do to get my project to compile? I am using the GCC C++ Compiler. Thanks in advance!
It looks like you're confusing header files with library files. NxPhysics.h is a source code header file. Header files are needed when compiling source code (not when linking). It's probably located in a place like /usr/include or /usr/include/PhysX/v2.8.1, or similar. Find the real location of this file and make sure you use the -I option to tell the compiler where it is, as Gerald Kaszuba suggests.
The libraries are needed when linking the compiled object files (and not when compiling). You'll need to deal with this later with the -L and -l options.
Note: depending on how you invoke gcc, you can have it do compiling and then linking with a single invocation, but behind the scenes it still does a compile step then a link step.
EDIT: Extra explanation added...
When building a binary using a C/C++ compiler, the compiler reads the source code (.c or .cpp files). While reading it, there are frequently #include statements that are used to read .h files. The #include statements give the names of files that must be loaded. Those exact files must exist in the include path. In your case, a file with the exact name "NxPhysics.h" must be found somewhere in the include path. Typically, /usr/include is in the path by default, and so is the current directory. If the headers are somewhere else such as a subdirectory of /usr/include, then you always need to explicitly tell the compiler where to look using the -I command-line switches (or sometimes with environment variables or other system configuration methods).
A .h header file typically includes data structure declarations, inline function definitions, function and class declarations, and #define macros. When the compilation is done, a .o object file is created. The compiler does not know about .so or .a libraries and cannot use them in any way, other than to embed a little bit of helper information for the linker. Note that the compiler also embeds some "header" information in the object files. I put "header" in quotes because the information only roughly corresponds to what may or may not be found in the .h files. It includes a binary representation of all exported declarations. No macros are found there. I believe that inline functions are omitted as well (though I could be wrong there).
Once all of the .o files exist, it is time for another program to take over: the linker. The linker knows nothing of source code files or .h header files. It only cares about binary libraries and object files. You give it a collection of libraries and object files. In their "headers" they list what things (data types, functions, etc.) they define and what things they need someone else to define. The linker then matches up requests for definitions from one module with actual definitions for other modules. It checks to make sure there aren't multiple conflicting definitions, and if building an executable, it makes sure that all requests for definitions are fulfilled.
There are some notable caveats to the above description. First, it is possible to call gcc once and get it to do both compiling and linking, e.g.
gcc hello.c -o hello
will first compile hello.c to memory or to a temporary file, then it will link against the standard libraries and write out the hello executable. Even though it's only one call to gcc, both steps are still being performed sequentially, as a convenience to you. I'll skip describing some of the details of dynamic libraries for now.
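For comparison, the same build written as two explicit steps (a sketch using the same hello.c):

gcc -c hello.c -o hello.o   # compile only: source file -> object file
gcc hello.o -o hello        # link: object file + standard libraries -> executable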
If you're a Java programmer, then some of the above might be a little confusing. I believe that .net works like Java, so the following discussion should apply to C# and the other .net languages. Java is syntactically a much simpler language than C and C++. It lacks macros and it lacks true templates (generics are a very weak form of templates). Because of this, Java skips the need for separate declaration (.h) and definition (.c) files. It is also able to embed all the relevant information in the object file (.class for Java). This makes it so that both the compiler and the linker can use the .class files directly.
The problem was indeed with my include paths. Here is the relevant command:
g++ -I/usr/include/PhysX/v2.8.1/SDKs/PhysXLoader/include -I/usr/include -I/usr/include/PhysX/v2.8.1/LowLevel/API/include -I/usr/include/PhysX/v2.8.1/LowLevel/hlcommon/include -I/usr/include/PhysX/v2.8.1/SDKs/Foundation/include -I/usr/include/PhysX/v2.8.1/SDKs/Cooking/include -I/usr/include/PhysX/v2.8.1/SDKs/NxCharacter/include -I/usr/include/PhysX/v2.8.1/SDKs/Physics/include -O0 -g3 -DNX_DISABLE_FLUIDS -DLINUX -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp"
Also, for the linker, only "PhysXLoader" was needed (same as Windows). Thus, I have:
g++ -o"PhysXSetupTest" ./main.o -lglut -lPhysXLoader
While installing I got the following error:
dpkg: dependency problems prevent configuration of libphysx-dev-2.8.1:
libphysx-dev-2.8.1 depends on libphysx-2.8.1 (= 2.8.1-4); however:
Package libphysx-2.8.1 is not configured yet.
dpkg: error processing libphysx-dev-2.8.1 (--install):
dependency problems - leaving unconfigured
Errors were encountered while processing:
So I reinstalled libphysx-2.8.1_4_i386.deb:
sudo dpkg -i libphysx-2.8.1_4_i386.deb

Resources