How to compile an extension into sqlite? - linux

I would like to compile an extension into sqlite for loading at runtime.
The file I am using is extension-functions.c from https://www.sqlite.org/contrib
I have been able to compile it into a loadable module, but I need to statically link it so the functions are available at runtime (I am using shell.c to create the interface at run time).
I have read the manual on linking, but to be honest, it's a little bit beyond my scope of comprehension!
Could someone let me know what I need to do to compile please?

I found a way to compile sqlite3 from source code with additional functions provided by extension_functions.c.
Note:
At this time I am showing a quick-and-dirty way to compile sqlite with additional features, because I haven't yet succeeded in doing it the proper way.
But please remember that it would probably be better to prepare a brand new part of the amalgamation for adding custom features, as @ngreen says above.
That is the way sqlite itself is designed to be extended.
1. Download the sqlite source code
https://www.sqlite.org/download.html
Choose the amalgamation, preferably the autoconf version.
For example, here is the download link for version 3.33.0.
https://www.sqlite.org/2020/sqlite-autoconf-3330000.tar.gz
curl -O https://www.sqlite.org/2020/sqlite-autoconf-3330000.tar.gz
tar -xzvf sqlite-autoconf-3330000.tar.gz
cd sqlite-autoconf-3330000
2. Download extension_functions.c
It is listed at this URL:
https://sqlite.org/contrib
Actual download URL:
https://sqlite.org/contrib/download/extension-functions.c?get=25
curl -o extension_functions.c https://sqlite.org/contrib/download/extension-functions.c?get=25
3. Configure compilation
We can specify the --prefix option to set the install destination for the built files.
./configure --prefix=/usr/local/sqlite/3.33.0
Other compile-time options can be specified as environment variables at configure time.
Check https://www.sqlite.org/draft/compile.html for more details.
Here is an example that enables the JSON and R*Tree index features.
CPPFLAGS="-DSQLITE_ENABLE_JSON1=1 -DSQLITE_ENABLE_RTREE=1" ./configure --prefix=/usr/local/sqlite/3.33.0
Autoconf options can also be specified at the same time.
CPPFLAGS="-DSQLITE_ENABLE_JSON1=1 -DSQLITE_ENABLE_RTREE=1" ./configure --prefix=/usr/local/sqlite/3.33.0 --enable-dynamic-extensions
I couldn't find documentation about these options on the official website, but the configure script itself lists them:
Optional Features:
  --disable-option-checking  ignore unrecognized --enable/--with options
  --disable-FEATURE       do not include FEATURE (same as --enable-FEATURE=no)
  --enable-FEATURE[=ARG]  include FEATURE [ARG=yes]
  --enable-silent-rules   less verbose build output (undo: "make V=1")
  --disable-silent-rules  verbose build output (undo: "make V=0")
  --disable-largefile     omit support for large files
  --enable-dependency-tracking
                          do not reject slow dependency extractors
  --disable-dependency-tracking
                          speeds up one-time build
  --enable-shared[=PKGS]  build shared libraries [default=yes]
  --enable-static[=PKGS]  build static libraries [default=yes]
  --enable-fast-install[=PKGS]
                          optimize for fast installation [default=yes]
  --disable-libtool-lock  avoid locking (might break parallel builds)
  --enable-editline       use BSD libedit
  --enable-readline       use readline
  --enable-threadsafe     build a thread-safe library [default=yes]
  --enable-dynamic-extensions
                          support loadable extensions [default=yes]
  --enable-fts4           include fts4 support [default=yes]
  --enable-fts3           include fts3 support [default=no]
  --enable-fts5           include fts5 support [default=yes]
  --enable-json1          include json1 support [default=yes]
  --enable-rtree          include rtree support [default=yes]
  --enable-session        enable the session extension [default=no]
  --enable-debug          build with debugging features enabled [default=no]
  --enable-static-shell   statically link libsqlite3 into shell tool
                          [default=yes]
FYI, here is the install recipe used by the Homebrew formula. It may be useful for deciding which options to specify.
def install
  ENV.append "CPPFLAGS", "-DSQLITE_ENABLE_COLUMN_METADATA=1"
  # Default value of MAX_VARIABLE_NUMBER is 999 which is too low for many
  # applications. Set to 250000 (Same value used in Debian and Ubuntu).
  ENV.append "CPPFLAGS", "-DSQLITE_MAX_VARIABLE_NUMBER=250000"
  ENV.append "CPPFLAGS", "-DSQLITE_ENABLE_RTREE=1"
  ENV.append "CPPFLAGS", "-DSQLITE_ENABLE_FTS3=1 -DSQLITE_ENABLE_FTS3_PARENTHESIS=1"
  ENV.append "CPPFLAGS", "-DSQLITE_ENABLE_JSON1=1"
  args = %W[
    --prefix=#{prefix}
    --disable-dependency-tracking
    --enable-dynamic-extensions
    --enable-readline
    --disable-editline
    --enable-session
  ]
  system "./configure", *args
  system "make", "install"
end
4. Remove conflicts
Now we have to modify extension_functions.c so that it does not conflict with the sqlite source code when they are compiled together.
Open extension_functions.c and replace lines 123-128 with the single line SQLITE_EXTENSION_INIT1.
#ifdef COMPILE_SQLITE_EXTENSIONS_AS_LOADABLE_MODULE
#include "sqlite3ext.h"
SQLITE_EXTENSION_INIT1
#else
#include "sqlite3.h"
#endif
↓
SQLITE_EXTENSION_INIT1
5. Enable extension functions
We need to insert a few lines into shell.c to include and enable the extension functions.
Open shell.c, search for static void open_db, and insert #include "extension_functions.c" on the line above it.
#include "extension_functions.c"
static void open_db(ShellState *p, int openFlags){
Then search for sqlite3_shathree_init(p->db, 0, 0); and insert sqlite3_extension_init(p->db, 0, 0); after the last of the init calls.
#endif
sqlite3_fileio_init(p->db, 0, 0);
sqlite3_shathree_init(p->db, 0, 0);
sqlite3_completion_init(p->db, 0, 0);
sqlite3_uint_init(p->db, 0, 0);
sqlite3_decimal_init(p->db, 0, 0);
sqlite3_ieee_init(p->db, 0, 0);
sqlite3_extension_init(p->db, 0, 0);
6. Compile
Finally, we are ready to compile sqlite with the extension functions included.
make install
It takes a while; once done, the built files are installed to the destination specified at configure time via the --prefix option.
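Assuming the --prefix used above, the installed tree should look roughly like this (a typical autoconf install layout, shown for illustration):
$ ls /usr/local/sqlite/3.33.0
bin  include  lib  share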
# Now we can use extension_functions without loading it manually.
$ /usr/local/sqlite/3.33.0/bin/sqlite3
sqlite> select cos(10);
-0.839071529076452

Q: "How to compile an extension into sqlite?"
A: That depends on the extension. To compile extension-functions.c referenced in the OP:
gcc -fPIC -shared extension-functions.c -o libsqlitefunctions.so -lm
(to remove the compilation warning see here)
Usage:
$ sqlite3
sqlite> select cos(radians(45));
0.707106781186548
sqlite> .exit
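Note that this builds a run-time loadable module rather than linking the functions in statically, so in a stock sqlite3 shell they normally have to be loaded first, for example:
$ sqlite3
sqlite> .load ./libsqlitefunctions
sqlite> select cos(radians(45));
0.707106781186548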

I'm not sure this is a complete answer yet, but from the How To Compile document it looks like you might want to build an amalgamation first. In src/shell.c.in you can search for ext/misc and you'll see lines such as this:
INCLUDE ../ext/misc/completion.c
These lines are used by the tool/mkshellc.tcl script to build the combined source file that will end up being compiled into the command line shell. Once the make process for sqlite3.c is complete, you should see the code you want in the combined source file.
Then, I found a function that contained this code:
sqlite3_shathree_init(p->db, 0, 0);
All I had to do was add this in the same place:
sqlite3_series_init(p->db, 0, 0);
And now I'm able to use the generate_series function. I can't find the functions.c file you were talking about, but the process should be something similar.
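Applied to extension-functions.c from the question, the same approach would presumably mean copying the file into ext/misc, adding an INCLUDE line for it in src/shell.c.in, and calling its entry point next to the existing init calls. The path and the entry-point name below are assumptions based on the rest of this thread, not something I have verified:
INCLUDE ../ext/misc/extension-functions.c
...
sqlite3_extension_init(p->db, 0, 0);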

Related

error: am__fastdepCXX does not appear in AM_CONDITIONAL

Following this tutorial, I have written my own "Hello World" in C++.
This is the code, prueba.cpp:
#include <iostream>

int main()
{
    std::cout << "Hola Mundo" << std::endl;
    return 0;
}
Then, I have created configure.ac file with this information:
AC_INIT([holamundo], [0.1], [address#address.com])
AM_INIT_AUTOMAKE
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
and Makefile.am
AUTOMAKE_OPTIONS = foreign
bin_PROGRAMS = holamundo
holamundo_SOURCES = ./prueba.cpp
Those files are in the same folder as prueba.cpp.
Finally, in a console and in the same folder as prueba.cpp, I run the commands:
aclocal (no errors)
autoconf (no errors)
automake --add-missing
Then I get the following errors:
Makefile.am:3: warning: source file './prueba.cpp' is in a subdirectory,
Makefile.am:3: but option 'subdir-objects' is disabled
automake: warning: possible forward-incompatibility.
automake: At least one source file is in a subdirectory, but the 'subdir-objects'
automake: automake option hasn't been enabled. For now, the corresponding output
automake: object file(s) will be placed in the top-level directory. However, this
automake: behavior may change in a future Automake major version, with object
automake: files being placed in the same subdirectory as the corresponding sources.
automake: You are advised to start using 'subdir-objects' option throughout your
automake: project, to avoid future incompatibilities.
/usr/share/automake-1.16/am/depend2.am: error: am__fastdepCXX does not appear in AM_CONDITIONAL
/usr/share/automake-1.16/am/depend2.am: The usual way to define 'am__fastdepCXX' is to add 'AC_PROG_CXX'
/usr/share/automake-1.16/am/depend2.am: to 'configure.ac' and run 'aclocal' and 'autoconf' again
Makefile.am: error: C++ source seen but 'CXX' is undefined
Makefile.am: The usual way to define 'CXX' is to add 'AC_PROG_CXX'
Makefile.am: to 'configure.ac' and run 'autoconf' again.
Issue 1
Makefile.am:3: warning: source file './prueba.cpp' is in a subdirectory,
Makefile.am:3: but option 'subdir-objects' is disabled
automake: warning: possible forward-incompatibility.
[...]
Do not prefix source names with ./ (or ../) in Makefile.am.
Automake can handle sources and targets in bona fide subdirectories, with or without recursive make, but you do need to set up your project for that, and I would not go there until you have a better handle on Autotools basics.
Issue 2
Makefile.am: error: C++ source seen but 'CXX' is undefined
Makefile.am: The usual way to define 'CXX' is to add 'AC_PROG_CXX'
Makefile.am: to 'configure.ac' and run 'autoconf' again.
The diagnostic already explains the problem and the solution, but see also below.
Issue 3
/usr/share/automake-1.16/am/depend2.am: error: am__fastdepCXX does not appear in AM_CONDITIONAL
/usr/share/automake-1.16/am/depend2.am: The usual way to define 'am__fastdepCXX' is to add 'AC_PROG_CXX'
/usr/share/automake-1.16/am/depend2.am: to 'configure.ac' and run 'aclocal' and 'autoconf' again
Again, the diagnostic already describes a solution. Since it is the same solution that another diagnostic suggests, and it is plausible and appropriate, it is a pretty good bet. Specifically:
configure.ac
AC_INIT([holamundo], [0.1], [address#address.com])
AM_INIT_AUTOMAKE
AC_PROG_CC
# Configure the C++ compiler:
AC_PROG_CXX
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
Issue 4
Finally, in a console and in the same folder as prueba.cpp, I run the commands:
Generally speaking, you should not manually run the individual autotools (autoconf, automake, etc.). Instead, use autoreconf, which will identify which of the (other) autotools need to be run, and will run them in the correct order. Among the command-line options it supports are -i / --install and -f / --force, which will provide for installing the local autotool components in the source tree. You should probably run autoreconf --install --force once in your source tree. After that, you should need only plain autoreconf, unless you change to a different version of the autotools or modify one of the local autotool components.
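In other words, the whole sequence would look something like this (the first run is the one-time bootstrap; afterwards plain autoreconf is enough):
autoreconf --install --force   # bootstrap: runs aclocal, autoconf, automake, etc. in the right order
./configure
make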

How to build and use libicu in webassembly

I am interested in the word iterator of the ICU63 library for a JavaScript project (in a browser). After reading the docs, I believe that ICU uses UTF-16 by default, which is the same as JS, so it would save me from encoding JS strings into something else.
The first step was to build a wrapper with the only function I need (I don't know yet if it works):
#include "emscripten.h"
#include <string.h>
#include <unicode/brkiter.h>
#include <unicode/unistr.h>
#include <unicode/errorcode.h>
using namespace icu_63;
EMSCRIPTEN_KEEPALIVE
int splitWords(const char *locale, const uint16_t *text, uint16_t *splitted) {
//Note that Javascript is working in UTF-16
//icu::
UnicodeString result = UnicodeString();
UnicodeString visibleSpace = UnicodeString(" ");
int32_t previousIdx = 0;
int32_t idx = -1;
//Create a Unicode String from input
UnicodeString uTextArg = UnicodeString(text);
if (uTextArg.isBogus()) {
return -1; // input string is bogus
}
//Create and init the iterator
UErrorCode err = U_ZERO_ERROR;
BreakIterator *iter = BreakIterator::createWordInstance(locale, err);
if (U_FAILURE(err)) {
return -2; // cannot build iterator
}
iter->setText(uTextArg);
//Iterate and store results
while ((idx = iter->next()) != -1) {
UnicodeString word = UnicodeString(uTextArg, idx, idx - previousIdx);
result += word;
result += visibleSpace;
previousIdx = idx;
}
result.trim();
//The buffer contains UTF-16 characters, so it takes 2 bytes per point
memcpy(splitted, result.getBuffer(), result.getCapacity() * 2);
return 0;
}
It compiles and looks fine, except that symbols are missing when linking, because I have no clue how to proceed.
LibICU seems to need a lot of built-in data; in my case, the frequency tables are mandatory for the word iterator.
Should I copy my wrapper into the ICU source folder and figure out how to use emconfigure, or is it possible to link against libicu when I compile my wrapper? The second option looks like a waste of data, since I am not interested in most of the library.
In my experience, the easiest way to deal with libraries is to build the libraries using emconfigure/emmake first then link them statically with your own code. Like the following:
$ emcc your_wrapper.cpp \
    your_compiled_libICU_static_lib.a \
    -o result.js
Compiling libraries using emconfigure/emmake is sometimes quite hard, because you may need to modify the source code to make it work in WebAssembly.
But... good news! Emscripten provides ports of some popular and complicated libraries, and ICU is one of them.
You can compile your code without compiling ICU yourself by using the -s USE_ICU=1 flag:
$ emcc your_wrapper.cpp \
    -s USE_ICU=1 \
    -s ERROR_ON_UNDEFINED_SYMBOLS=0 \
    -std=c++11
The caveat is that the Emscripten ICU port is ICU 62, so you need to change using namespace icu_63; to using namespace icu_62;
While -s USE_ICU=1 is convenient when you can easily modify your build flags, I've found it more convenient to install ICU from source, because I also had to build other libraries whose configure/make/build processes do not play nicely with -s USE_ICU=1 (at least not without plenty of modification) and instead expect a more traditional way to find and link to the icu libs.
Unfortunately, building libicu does not seem to work with the usual configure && make install without some tweaking. To do that, first you have to do a "regular" native build (./configure && make) to create the necessary local files.
Then, if you do not need PTHREADS, you can build in a fairly straightforward manner as follows, assuming /opt/wasm is your PREFIX.
PKG_CONFIG_LIBDIR=/opt/wasm/lib/pkgconfig emconfigure ./configure --prefix=/opt/wasm --with-cross-build=`pwd` --enable-static=yes --enable-shared=no --target=wasm32-unknown-emscripten --with-data-packaging=static --enable-icu-config --enable-extras=no --enable-tools=no --enable-samples=no --enable-tests=no
emmake make clean install
If you do need PTHREADS for some downstream consumer of the lib, you might have to rebuild the lib with that enabled from the get-go. This is trickier because configure scripts will break when they do their tests that require building and running C snippets, due to warnings about requiring additional node flags (see https://github.com/emscripten-core/emscripten/issues/15736), which to the configure scripts mean an error. The easiest solution I found was to temporarily modify make_js_executable in emcc.py:
...
with open(script, 'w') as f:
    # f.write('#!%s\n' % cmd)  ## replaced with the below line
    f.write('#!%s --experimental-wasm-threads --experimental-wasm-bulk-memory\n' % cmd)
    f.write(src)
...
With that hack done, you can proceed to something like the below (though possibly, not all of those thread-related flags are absolutely needed)
CXXFLAGS='-s PTHREAD_POOL_SIZE=8 -s USE_PTHREADS=1 -O3 -pthread' CFLAGS='-s PTHREAD_POOL_SIZE=8 -s USE_PTHREADS=1 -O3 -pthread' FORCE_LIBS='-s PTHREAD_POOL_SIZE=8 -s USE_PTHREADS=1 -pthread -lm' PKG_CONFIG_LIBDIR=/opt/wasm/lib/pkgconfig emconfigure ./configure --prefix=/opt/wasm --with-cross-build=`pwd` --enable-static=yes --enable-shared=no --target=wasm32-unknown-emscripten --with-data-packaging=static --enable-icu-config --enable-extras=no --enable-tools=no --enable-samples=no --enable-tests=no
emmake make clean install
After that, set your emcc.py back to its original state. Note that if you try to build the tools, they will fail-- I haven't yet found a solution to that-- but the lib does successfully install with the above.

Adding path and linking libraries

I am trying to complete the installation of some software. According to the instructions I have to add a path, and I am not sure how to do that. Please guide me through the following steps.
Add the path to Rpa/Tk headers:
-I/usr/include/rpatk
To link to the Rpa/Tk libraries on Linux add the following link options:
-lrpa -lrvm -lrex -lrlib -lm
The RVM library uses some math functions from the system math library; that is why you must add '-lm' to link the math library into your project, in addition to the Rpa/Tk libraries:
librpa librex librvm librlib
http://www.rpasearch.com/rpatk/doc/doxygen/rpadoc/html/rpatk_build.html
This will be within your configure script -- you will need to hack it and add the paths where gcc/cc gets called.
Give the link to the actual tar.gz.
Looking at the example code: http://www.rpasearch.com/rpatk/doc/doxygen/rpadoc/html/js-tokenizer_8c-example.html
gcc -I/usr/include/rpatk -o js-tokenizer js-tokenizer.c -lrex -lrlib
So in short:
export LD_LIBRARY_PATH=/usr/include/rpatk:/usr/local/lib:/lib:/usr/lib
then running configure may fix the issue, where the first path in that list would be the path to the includes for this project.
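Putting together the flags from the build instructions, a full compile-and-link command for a single-file program would look roughly like this (the source and program names are placeholders, not from the rpatk docs):
gcc -I/usr/include/rpatk -o myprogram myprogram.c -lrpa -lrvm -lrex -lrlib -lm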

Reusing custom makefile for static library with cmake

I guess this would be a generic question on including libraries with existing makefiles within cmake; but here's my context -
I'm trying to include scintilla in another CMake project, and I have the following problem:
On Linux, scintilla has a makefile in (say) the ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk directory; if you run make in that directory (as usual), you get a ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/bin/scintilla.a file - which (I guess) is the static library.
Now, if I'd try to use cmake's ADD_LIBRARY, I'd have to manually specify the sources of scintilla within cmake - and I'd rather not mess with that, given I already have a makefile. So, I'd rather call the usual scintilla make - and then instruct CMAKE to somehow refer to the resulting scintilla.a. (I guess that this then would not ensure cross-platform compatibility - but note that currently cross-platform is not an issue for me; I'd just like to build scintilla as part of this project that already uses cmake, only within Linux)
So, I've tried a bit with this:
ADD_CUSTOM_COMMAND(
  OUTPUT scintilla.a
  COMMAND ${CMAKE_MAKE_PROGRAM}
  WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk
  COMMENT "Original scintilla makefile target" )
... but then, add_custom_command adds a "target with no output"; so I'm trying several approaches to build upon that, all of which fail (errors given as comments):
ADD_CUSTOM_TARGET(scintilla STATIC DEPENDS scintilla.a) # Target "scintilla" of type UTILITY may not be linked into another target.
ADD_LIBRARY(scintilla STATIC DEPENDS scintilla.a) # Cannot find source file "DEPENDS".
ADD_LIBRARY(scintilla STATIC) # You have called ADD_LIBRARY for library scintilla without any source files.
ADD_DEPENDENCIES(scintilla scintilla.a)
I'm obviously quite a noob with cmake - so, is it possible at all to have cmake run a pre-existing makefile, and "capture" its output library file, such that other components of the cmake project can link against it?
Many thanks for any answers,
Cheers!
EDIT: possible duplicate: CMake: how do i depend on output from a custom target? - Stack Overflow - however, here the breakage seems to be due to the need to specifically have a library that the rest of the cmake project would recognize...
Another related: cmake - adding a custom command with the file name as a target - Stack Overflow; however, it specifically builds an executable from source files (which I wanted to avoid)..
You could also use imported targets and a custom target like this:
# set the output destination
set(SCINTILLA_LIBRARY ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk/scintilla.a)
# create a custom target called build_scintilla that is part of ALL
# and will run each time you type make
add_custom_target(build_scintilla ALL
  COMMAND ${CMAKE_MAKE_PROGRAM}
  WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk
  COMMENT "Original scintilla makefile target")
# now create an imported static target
add_library(scintilla STATIC IMPORTED)
# Import target "scintilla" for configuration ""
set_property(TARGET scintilla APPEND PROPERTY IMPORTED_CONFIGURATIONS NOCONFIG)
set_target_properties(scintilla PROPERTIES
  IMPORTED_LOCATION_NOCONFIG "${SCINTILLA_LIBRARY}")
# now you can use scintilla as if it were a regular cmake built target in your project
add_dependencies(scintilla build_scintilla)
add_executable(foo foo.c)
target_link_libraries(foo scintilla)
# note, this will only work on linux/unix platforms, also it does building
# in the source tree which is also sort of bad style and keeps out of source
# builds from working.
OK, I think I have it somewhat; basically, in the CMakeLists.txt that builds scintilla, I used this only:
ADD_CUSTOM_TARGET(
  scintilla.a ALL
  COMMAND ${CMAKE_MAKE_PROGRAM}
  WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk
  COMMENT "Original scintilla makefile target" )
... and then, the slightly more complicated part was to find the correct cmake file elsewhere in the project, where ${PROJECT_NAME} is defined, so as to add a dependency:
ADD_DEPENDENCIES(${PROJECT_NAME} scintilla.a)
... and finally, the library needs to be linked.
Note that in the commands above, scintilla.a is merely a name/label/identifier/string (it could be anything else, like scintilla--a); for linking, however, the full path to the actual scintilla.a file is needed (which in this project ends up in the variable ${SCINTILLA_LIBRARY}). In this project, the linking basically happens through something like:
list(APPEND PROJ_LIBRARIES ${SCINTILLA_LIBRARY} )
... and I don't really know how cmake handles the actual linking afterwards (but it seems to work)
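Presumably that list is eventually handed to target_link_libraries somewhere in the project's cmake files, along these lines (PROJ_LIBRARIES and ${PROJECT_NAME} are the names used above; the exact call is an assumption about this particular project):
target_link_libraries(${PROJECT_NAME} ${PROJ_LIBRARIES})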
For consistency, I tried to use ${SCINTILLA_LIBRARY} instead of scintilla.a as the identifier in ADD_CUSTOM_TARGET, but got the error: "Target names may not contain a slash. Use ADD_CUSTOM_COMMAND to generate files". So this could probably be solved more correctly with ADD_CUSTOM_COMMAND; however, its documentation says it "defines a new command that can be executed during the build process. The outputs named should be listed as source files in the target for which they are to be generated."... and by now I'm totally confused about what is a file, what is a label, and what is a target, so I think I'll leave it at this (and not fix it if it ain't broken :) )
Well, it'd still be nice to know a more correct way to do this eventually,
Cheers!

Installing and Linking PhysX Libraries in Debian Linux

I am trying to get PhysX working using Ubuntu.
First, I downloaded the SDK here:
http://developer.download.nvidia.com/PhysX/2.8.1/PhysX_2.8.1_SDK_CoreLinux_deb.tar.gz
Next, I extracted the files and installed each package with:
dpkg -i filename.deb
This gives me the following files located in /usr/lib/PhysX/v2.8.1:
libNxCharacter.so
libNxCooking.so
libPhysXCore.so
libNxCharacter.so.1
libNxCooking.so.1
libPhysXCore.so.1
Next, I created symbolic links to /usr/lib:
sudo ln -s /usr/lib/PhysX/v2.8.1/libNxCharacter.so.1 /usr/lib/libNxCharacter.so.1
sudo ln -s /usr/lib/PhysX/v2.8.1/libNxCooking.so.1 /usr/lib/libNxCooking.so.1
sudo ln -s /usr/lib/PhysX/v2.8.1/libPhysXCore.so.1 /usr/lib/libPhysXCore.so.1
Now, using Eclipse, I have specified the following libraries (-l):
libNxCharacter.so.1
libNxCooking.so.1
libPhysXCore.so.1
And the following search paths just in case (-L):
/usr/lib/PhysX/v2.8.1
/usr/lib
Also, as Gerald Kaszuba suggested, I added the following include paths (-I):
/usr/lib/PhysX/v2.8.1
/usr/lib
Then, I attempted to compile the following code:
#include "NxPhysics.h"
NxPhysicsSDK* gPhysicsSDK = NULL;
NxScene* gScene = NULL;
NxVec3 gDefaultGravity(0,-9.8,0);
void InitNx()
{
gPhysicsSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
if (!gPhysicsSDK)
{
std::cout<<"Error"<<std::endl;
return;
}
NxSceneDesc sceneDesc;
sceneDesc.gravity = gDefaultGravity;
gScene = gPhysicsSDK->createScene(sceneDesc);
}
int main(int arc, char** argv)
{
InitNx();
return 0;
}
The first error I get is:
NxPhysics.h: No such file or directory
Which tells me that the project is obviously not linking properly. Can anyone tell me what I have done wrong, or what else I need to do to get my project to compile? I am using the GCC C++ Compiler. Thanks in advance!
It looks like you're confusing header files with library files. NxPhysics.h is a source code header file. Header files are needed when compiling source code (not when linking). It's probably located in a place like /usr/include or /usr/include/PhysX/v2.8.1, or similar. Find the real location of this file and make sure you use the -I option to tell the compiler where it is, as Gerald Kaszuba suggests.
The libraries are needed when linking the compiled object files (and not when compiling). You'll need to deal with this later with the -L and -l options.
Note: depending on how you invoke gcc, you can have it do compiling and then linking with a single invocation, but behind the scenes it still does a compile step then a link step.
EDIT: Extra explanation added...
When building a binary using a C/C++ compiler, the compiler reads the source code (.c or .cpp files). While reading it, there are frequently #include statements that are used to read .h files. The #include statements give the names of files that must be loaded. Those exact files must exist in the include path. In your case, a file with the exact name "NxPhysics.h" must be found somewhere in the include path. Typically, /usr/include is in the path by default, and so is the current directory. If the headers are somewhere else such as a subdirectory of /usr/include, then you always need to explicitly tell the compiler where to look using the -I command-line switches (or sometimes with environment variables or other system configuration methods).
A .h header file typically includes data structure declarations, inline function definitions, function and class declarations, and #define macros. When the compilation is done, a .o object file is created. The compiler does not know about .so or .a libraries and cannot use them in any way, other than to embed a little bit of helper information for the linker. Note that the compiler also embeds some "header" information in the object files. I put "header" in quotes because the information only roughly corresponds to what may or may not be found in the .h files. It includes a binary representation of all exported declarations. No macros are found there. I believe that inline functions are omitted as well (though I could be wrong there).
Once all of the .o files exist, it is time for another program to take over: the linker. The linker knows nothing of source code files or .h header files. It only cares about binary libraries and object files. You give it a collection of libraries and object files. In their "headers" they list what things (data types, functions, etc.) they define and what things they need someone else to define. The linker then matches up requests for definitions from one module with actual definitions for other modules. It checks to make sure there aren't multiple conflicting definitions, and if building an executable, it makes sure that all requests for definitions are fulfilled.
There are some notable caveats to the above description. First, it is possible to call gcc once and get it to do both compiling and linking, e.g.
gcc hello.c -o hello
will first compile hello.c to memory or to a temporary file, then it will link against the standard libraries and write out the hello executable. Even though it's only one call to gcc, both steps are still being performed sequentially, as a convenience to you. I'll skip describing some of the details of dynamic libraries for now.
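For illustration, the same build written as two explicit steps might look like this (using the same example file names):
gcc -c hello.c -o hello.o   # compile only: source -> object file
gcc hello.o -o hello        # link: object file + standard libraries -> executable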
If you're a Java programmer, then some of the above might be a little confusing. I believe that .net works like Java, so the following discussion should apply to C# and the other .net languages. Java is syntactically a much simpler language than C and C++. It lacks macros and it lacks true templates (generics are a very weak form of templates). Because of this, Java skips the need for separate declaration (.h) and definition (.c) files. It is also able to embed all the relevant information in the object file (.class for Java). This makes it so that both the compiler and the linker can use the .class files directly.
The problem was indeed with my include paths. Here is the relevant command:
g++ -I/usr/include/PhysX/v2.8.1/SDKs/PhysXLoader/include -I/usr/include -I/usr/include/PhysX/v2.8.1/LowLevel/API/include -I/usr/include/PhysX/v2.8.1/LowLevel/hlcommon/include -I/usr/include/PhysX/v2.8.1/SDKs/Foundation/include -I/usr/include/PhysX/v2.8.1/SDKs/Cooking/include -I/usr/include/PhysX/v2.8.1/SDKs/NxCharacter/include -I/usr/include/PhysX/v2.8.1/SDKs/Physics/include -O0 -g3 -DNX_DISABLE_FLUIDS -DLINUX -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp"
Also, for the linker, only "PhysXLoader" was needed (same as Windows). Thus, I have:
g++ -o"PhysXSetupTest" ./main.o -lglut -lPhysXLoader
While installing I got the following error:
dpkg: dependency problems prevent configuration of libphysx-dev-2.8.1:
 libphysx-dev-2.8.1 depends on libphysx-2.8.1 (= 2.8.1-4); however:
  Package libphysx-2.8.1 is not configured yet.
dpkg: error processing libphysx-dev-2.8.1 (--install):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 libphysx-dev-2.8.1
So I reinstalled libphysx-2.8.1_4_i386.deb:
sudo dpkg -i libphysx-2.8.1_4_i386.deb
