I compiled libdispatch.
This code is working:
import Dispatch
var lockQueue = dispatch_queue_create("com.test.async", nil);
But if I put this code at the end of the file:
dispatch_async(lockQueue) {
print("test1");
}
I got an error:
use of unresolved identifier 'dispatch_async'
As I commented above, this appears to be a current limitation of the Swift Package Manager: it doesn't yet support passing the necessary compile-time options, such as the one needed to support blocks as inputs to GCD functions (-Xcc -fblocks).
In the meantime, you can avoid the Swift Package Manager and compile your files directly using swiftc, with the appropriate options. An example is provided by sheffler in their test repository:
swiftc -v -o gcd4 Sources/main.swift -I .build/debug -j8 -Onone -g -Xcc -fblocks -Xcc -fmodule-map-file=Packages/CDispatch-1.0.0/module.modulemap -I Packages/CDispatch-1.0.0 -I /usr/local/include
The -I options will pull in your module maps for libdispatch, so adjust those to match where you've actually placed these system module directories.
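For reference, here is a trimmed-down, annotated sketch of the same invocation; the CDispatch package path and version are placeholders copied from the example above, so substitute whatever the package manager actually fetched on your machine:
# Compile directly with swiftc, bypassing the SwiftPM build step.
# -Xcc -fblocks enables blocks, so trailing closures can be passed to GCD functions.
swiftc -o gcd-test Sources/main.swift \
  -Xcc -fblocks \
  -Xcc -fmodule-map-file=Packages/CDispatch-1.0.0/module.modulemap \
  -I Packages/CDispatch-1.0.0 \
  -I /usr/local/include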
I want to build https://github.com/wallix/redemption, and this is the first time I've ever seen bjam used as a build tool. This project has a tools/bjam/user-config.jam file.
The problem is, I'm trying to build this with a "custom" (that is, not the system version of) g++, which I have here:
$ which arm-linux-gnueabihf-g++-10
/home/pi/opt/gcc-10.1.0/bin/arm-linux-gnueabihf-g++-10
$ arm-linux-gnueabihf-g++-10 --version | head -n1
arm-linux-gnueabihf-g++-10 (pi-raspberry) 10.1.0
$ /home/pi/opt/gcc-10.1.0/bin/arm-linux-gnueabihf-g++-10 --version | head -n1
arm-linux-gnueabihf-g++-10 (pi-raspberry) 10.1.0
I guess this at least qualifies as the compiler existing, right?
Anyway, not knowing any better, I first tried:
$ bjam --version
Boost.Build 2015.07-git
$ bjam toolset=arm-linux-gnueabihf-gcc-10 linkflags=-static-libstdc++ exe libs
arm.jam: No such file or directory
/usr/share/boost-build/src/build/toolset.jam:43: in toolset.using
ERROR: rule "arm.init" unknown in module "toolset".
/usr/share/boost-build/src/build-system.jam:461: in process-explicit-toolset-requests
/usr/share/boost-build/src/build-system.jam:527: in load
/usr/share/boost-build/src/kernel/modules.jam:295: in import
/usr/share/boost-build/src/kernel/bootstrap.jam:139: in boost-build
/usr/share/boost-build/boost-build.jam:8: in module scope
Then I found "Building boost with different gcc version", which mentions:
I cross built Boost for an ARM toolchain using something like this:
echo "using gcc : arm-unknown-linux-gnueabi : /usr/local/arm/bin/g++ ; " >> tools/build/v2/user-config.jam
Ok, so by that logic, I try:
echo "using gcc : arm-unknown-linux-gnueabi : /home/pi/opt/gcc-10.1.0/bin/arm-linux-gnueabihf-g++-10 ; " >> tools/bjam/user-config.jam
... and then:
$ bjam toolset=gcc-arm-unknown-linux-gnueabi linkflags=-static-libstdc++ exe libs
/usr/share/boost-build/src/tools/gcc.jam:123: in gcc.init from module gcc
error: toolset gcc initialization:
error: version 'arm-unknown-linux-gnueabi' requested but 'g++-arm-unknown-linux-gnueabi' not found and version '6.3.0' of default 'g++' does not match
error: initialized from
/usr/share/boost-build/src/build/toolset.jam:43: in toolset.using from module toolset
/usr/share/boost-build/src/build-system.jam:461: in process-explicit-toolset-requests from module build-system
/usr/share/boost-build/src/build-system.jam:527: in load from module build-system
/usr/share/boost-build/src/kernel/modules.jam:295: in import from module modules
/usr/share/boost-build/src/kernel/bootstrap.jam:139: in boost-build from module
/usr/share/boost-build/boost-build.jam:8: in module scope from module
Well, I agree that "version '6.3.0' of default 'g++' does not match" -> but how on earth is "'g++-arm-unknown-linux-gnueabi' not found"? What is that absolute path /home/pi/opt/gcc-10.1.0/bin/arm-linux-gnueabihf-g++-10 doing in that entry in user-config.jam otherwise?
So - can I get a more verbose printout of what actually bjam does in finding my compiler? Or even better, how can I format my "custom gcc" entry in user-config.jam, so I can get bjam to compile whatever it has to, and I can happily forget that bjam exists?
EDIT: even the official documentation for the successor to bjam states:
When using gcc, you first need to specify your cross compiler in user-config.jam (see the section called “Configuration”), for example:
using gcc : arm : arm-none-linux-gnueabi-g++ ;
After that, if the host and target os are the same, for example Linux, you can just request that this compiler version be used:
b2 toolset=gcc-arm
Isn't that exactly what I'm doing? Why doesn't it work then?
Well, I found a bit of documentation in /usr/share/boost-build/src/tools/gcc.jam:
# Initializes the gcc toolset for the given version. If necessary, command may
# be used to specify where the compiler is located. The parameter 'options' is a
# space-delimited list of options, each one specified as
# <option-name>option-value. Valid option names are: cxxflags, linkflags and
# linker-type. Accepted linker-type values are aix, darwin, gnu, hpux, osf or
# sun and the default value will be selected based on the current OS.
# Example:
# using gcc : 3.4 : : <cxxflags>foo <linkflags>bar <linker-type>sun ;
Ok, so here I have a colon-delimited string; the second field says "3.4" and the third field is empty. So WHERE does the "command may be used to specify where the compiler is located" go: in the second or the third field?
Well, I managed to get it running, in quite a hacky way: I added these statements to /usr/share/boost-build/src/tools/gcc.jam:
...
rule init ( version ? : command * : options * )
{
#1): use user-provided command
local tool-command = ;
ECHO notice: 1) user-provided command '$(command)' version '$(version)' options '$(options)' ;
if $(version) = "arm"
{
command = arm-linux-gnueabihf-g++-10 ;
}
if $(command)
{
tool-command = [ common.get-invocation-command-nodefault gcc : g++ :
$(command) ] ;
ECHO notice: tool-command 1) user-provided '$(command)' '$(tool-command)' ;
...
The printouts were like:
notice: 1) user-provided command version 'arm' options
notice: tool-command 1) user-provided 'arm-linux-gnueabihf-g++-10' 'arm-linux-gnueabihf-g++-10'
...
... both 'command' and 'options' there are empty, as if the line I added to user-config.jam does not get parsed beyond the first two fields.
So, since the second field ("arm") does get parsed, I simply added a conditional on it, and forced the use of the command - and now that passes.
Well, I wish bjam just worked, and I did not have to go through this ...
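In hindsight, the symptoms (the command field arriving empty in gcc.init) suggest that the using line in tools/bjam/user-config.jam was not being picked up at all; Boost.Build normally reads user-config.jam from the home directory or from a directory on BOOST_BUILD_PATH. An untested sketch of what I would try instead:
# Put the entry where Boost.Build actually reads user-config.jam, e.g. the home directory.
# "arm" is the version/alias field; the full compiler path goes in the command (third) field.
echo 'using gcc : arm : /home/pi/opt/gcc-10.1.0/bin/arm-linux-gnueabihf-g++-10 ;' >> ~/user-config.jam
# --debug-configuration makes bjam/b2 report which configuration files and toolsets it picks up.
bjam --debug-configuration toolset=gcc-arm linkflags=-static-libstdc++ exe libs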
Has anyone been able to compile fluoride bluetooth stack separately for an embedded linux device?
There is a guide at https://android.googlesource.com/platform/system/bt/+/181144a50114c824cfe3cdfd695c11a074673a5e/README.md, but following those instructions, gn gen fails unless you also fetch the common-mk folder and modify some build files so that there are no missing variables, folders, etc.
I have been able to generate Ninja files, but when building, there are missing gtest and modp_b64 headers. After getting them from Google's source search, Ninja seems to be able to run a bit without errors, but ultimately fails with:
In file included from ../../third_party/libchrome/base/message_loop/message_loop.h:18:
../../third_party/libchrome/base/message_loop/message_loop_current.h:209:3: error: static_assert failed due to requirement 'std::is_same<MessagePumpForUI, MessagePumpLibevent>::value' "MessageLoopCurrentForUI::WatchFileDescriptor is not supported when MessagePumpForUI is not a MessagePumpLibevent."
static_assert(std::is_same<MessagePumpForUI, MessagePumpLibevent>::value,
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../third_party/libchrome/base/message_loop/message_loop_current.h:214:28: error: no type named 'Mode' in 'base::MessagePumpGlib'; did you mean 'MessagePumpLibevent::Mode'?
MessagePumpForUI::Mode mode,
^~~~~~~~~~~~~~~~~~~~~~
MessagePumpLibevent::Mode
../../third_party/libchrome/base/message_loop/watchable_io_message_pump_posix.h:55:8: note: 'MessagePumpLibevent::Mode' declared here
enum Mode {
^
In file included from ../../third_party/libchrome/base/run_loop.cc:10:
In file included from ../../third_party/libchrome/base/message_loop/message_loop.h:18:
../../third_party/libchrome/base/message_loop/message_loop_current.h:215:28: error: no type named 'FdWatchController' in 'base::MessagePumpGlib'; did you mean 'MessagePumpLibevent::FdWatchController'?
MessagePumpForUI::FdWatchController* controller,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MessagePumpLibevent::FdWatchController
../../third_party/libchrome/base/message_loop/message_pump_libevent.h:28:9: note: 'MessagePumpLibevent::FdWatchController' declared here
class FdWatchController : public FdWatchControllerInterface {
^
In file included from ../../third_party/libchrome/base/run_loop.cc:10:
In file included from ../../third_party/libchrome/base/message_loop/message_loop.h:18:
../../third_party/libchrome/base/message_loop/message_loop_current.h:216:28: error: no type named 'FdWatcher' in 'base::MessagePumpGlib'; did you mean 'MessagePumpLibevent::FdWatcher'?
MessagePumpForUI::FdWatcher* delegate);
^~~~~~~~~~~~~~~~~~~~~~~~~~~
MessagePumpLibevent::FdWatcher
../../third_party/libchrome/base/message_loop/watchable_io_message_pump_posix.h:17:9: note: 'MessagePumpLibevent::FdWatcher' declared here
class FdWatcher {
^
4 errors generated.
All the errors and missing files are coming from third_party/libchrome
Any help would be much appreciated.
I followed instructions at https://cs.android.com/android/platform/superproject/+/master:system/bt/README.md
In addition to the instructions there, I had to resolve some issues in the build scripts manually. I was able to compile the bluetooth stack successfully on Ubuntu 21.04.
Replaced //bt with /home/udara/fluoride/bt in the build files.
I used the sed command below; change the path to your own fluoride directory as appropriate.
for file in $(grep -r -l "//bt"); do sed -i 's/\/\/bt/\/home\/udara\/fluoride\/bt/g' $file; done
Copied common-mk symlink generated by bootstrap.py to the fluoride directory.
And replaced //common-mk with /home/udara/fluoride/common-mk.
for file in $(grep -r -l "//common-mk"); do sed -i 's/\/\/common-mk/\/home\/udara\/fluoride\/common-mk/g' $file; done
Installed a few missing dependencies:
sudo apt install llvm
sudo apt install libc++abi-dev
Copied /home/udara/fluoride/bt/output/out/Default/gen/ABS_PATH/home/udara/fluoride/bt/gd/dumpsys/bundler/bundler_generated.h to /home/udara/fluoride/bt/gd/dumpsys/
cp /home/udara/fluoride/bt/output/out/Default/gen/ABS_PATH/home/udara/fluoride/bt/gd/dumpsys/bundler/bundler_generated.h /home/udara/fluoride/bt/gd/dumpsys/
Created a directory named output in the bt directory.
Set the environment variables:
# this is set by bootstrap.py
export STAGING_DIR=/home/udara/fluoride/staging
# you have to manually set this
export OUTPUT_DIR=/home/udara/fluoride/bt/output
Then compile the bluetooth stack with
./build.py --output ${OUTPUT_DIR} --platform-dir ${STAGING_DIR} --clang
Add the following lines to /etc/dbus-1/system.d/bluetooth.conf
<policy>
...
<allow own="org.chromium.bluetooth"/>
<allow own="org.chromium.bluetooth.Manager"/>
</policy>
Finally run fluoride with
cd /home/udara/fluoride/bt/output/debug
sudo ./btadapterd --hci=0 INIT_gd_hci=true
The reason the common-mk issues come up is that, between the time the instructions were written and now, libchrome has added a BUILD.gn file, so that file is used instead of the substitute one in build/secondary/third_party/libchrome as intended. To fix this part of the build, just delete third_party/libchrome/BUILD.gn; that should prevent the need for a lot of build fiddling.
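As a concrete sketch, run from the root of the bt checkout (the path below is taken from the error output above):
# Remove the upstream BUILD.gn so the substitute in build/secondary is used instead.
rm third_party/libchrome/BUILD.gn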
The second part is caused by the configuration of libchrome. Once you remove the file mentioned above, you then need to modify build/secondary/third_party/libchrome/BUILD.gn to add the following not just to the source_set (as is done upstream) but also to libchrome_config, for downstream users:
defines = [
"__ANDROID_HOST__=1",
]
This will affect the build config so it doesn't try to use glib.
My best attempt at getting this to build is at https://github.com/rpavlik/fluoride, though I haven't gotten it to work entirely. I did this mostly as an experiment; feel free to continue where I left off.
I would like to compile an extension into sqlite for loading at runtime.
The file I am using is extension-functions.c from https://www.sqlite.org/contrib
I have been able to compile it into a loadable module, but I need to statically link it so it is available at runtime (using shell.c to create an interface at run time).
I have read the manual on linking, but to be honest, it's a little bit beyond my scope of comprehension!
Could someone let me know what I need to do to compile please?
I found a way to compile sqlite3 from source code with additional functions provided by extension_functions.c.
Note:
For now I'm showing a quick-and-dirty way to compile sqlite with additional features, because I haven't yet succeeded in doing it the proper way.
But please remember that it would probably be much better to prepare a brand-new part of the amalgamation for adding custom features, as @ngreen says above.
That's the way sqlite itself is designed to be extended.
1. Download the sqlite source code
https://www.sqlite.org/download.html
Choose an amalgamation package, preferably the autoconf version.
For example, here is the download link of version 3.33.0.
https://www.sqlite.org/2020/sqlite-autoconf-3330000.tar.gz
curl -O https://www.sqlite.org/2020/sqlite-autoconf-3330000.tar.gz
tar -xzvf sqlite-autoconf-3330000.tar.gz
cd sqlite-autoconf-3330000
2. Download extension_functions.c
Listed at this url.
https://sqlite.org/contrib
Actual url:
https://sqlite.org/contrib/download/extension-functions.c?get=25
curl -o extension_functions.c https://sqlite.org/contrib/download/extension-functions.c?get=25
3. Configure compilation
We can specify the --prefix option to determine the destination of the built files.
./configure --prefix=/usr/local/sqlite/3.33.0
Other compile-time options can be specified as environment variables at this point.
Check https://www.sqlite.org/draft/compile.html for more details.
Here is an example to enable JSON and RTree Index features.
CPPFLAGS="-DSQLITE_ENABLE_JSON1=1 -DSQLITE_ENABLE_RTREE=1" ./configure --prefix=/usr/local/sqlite/3.33.0
And autoconf options can also be specified.
CPPFLAGS="-DSQLITE_ENABLE_JSON1=1 -DSQLITE_ENABLE_RTREE=1" ./configure --prefix=/usr/local/sqlite/3.33.0 --enable-dynamic-extensions
I couldn't find any documentation about these options on the official website, but found something in the configure script itself:
Optional Features:
--disable-option-checking ignore unrecognized --enable/--with options
--disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no)
--enable-FEATURE[=ARG] include FEATURE [ARG=yes]
--enable-silent-rules less verbose build output (undo: "make V=1")
--disable-silent-rules verbose build output (undo: "make V=0")
--disable-largefile omit support for large files
--enable-dependency-tracking
do not reject slow dependency extractors
--disable-dependency-tracking
speeds up one-time build
--enable-shared[=PKGS] build shared libraries [default=yes]
--enable-static[=PKGS] build static libraries [default=yes]
--enable-fast-install[=PKGS]
optimize for fast installation [default=yes]
--disable-libtool-lock avoid locking (might break parallel builds)
--enable-editline use BSD libedit
--enable-readline use readline
--enable-threadsafe build a thread-safe library [default=yes]
--enable-dynamic-extensions
support loadable extensions [default=yes]
--enable-fts4 include fts4 support [default=yes]
--enable-fts3 include fts3 support [default=no]
--enable-fts5 include fts5 support [default=yes]
--enable-json1 include json1 support [default=yes]
--enable-rtree include rtree support [default=yes]
--enable-session enable the session extension [default=no]
--enable-debug build with debugging features enabled [default=no]
--enable-static-shell statically link libsqlite3 into shell tool
[default=yes]
FYI, Here is the default install script which is used in Homebrew. Maybe it would be useful to determine which option should be specified.
def install
ENV.append "CPPFLAGS", "-DSQLITE_ENABLE_COLUMN_METADATA=1"
# Default value of MAX_VARIABLE_NUMBER is 999 which is too low for many
# applications. Set to 250000 (Same value used in Debian and Ubuntu).
ENV.append "CPPFLAGS", "-DSQLITE_MAX_VARIABLE_NUMBER=250000"
ENV.append "CPPFLAGS", "-DSQLITE_ENABLE_RTREE=1"
ENV.append "CPPFLAGS", "-DSQLITE_ENABLE_FTS3=1 -DSQLITE_ENABLE_FTS3_PARENTHESIS=1"
ENV.append "CPPFLAGS", "-DSQLITE_ENABLE_JSON1=1"
args = %W[
--prefix=#{prefix}
--disable-dependency-tracking
--enable-dynamic-extensions
--enable-readline
--disable-editline
--enable-session
]
system "./configure", *args
system "make", "install"
end
4. Remove the conflict
Now we have to modify extension_functions.c to avoid conflicts with the sqlite source code when compiling them together.
Open extension_functions.c and replace lines 123-128 with the single line SQLITE_EXTENSION_INIT1.
#ifdef COMPILE_SQLITE_EXTENSIONS_AS_LOADABLE_MODULE
#include "sqlite3ext.h"
SQLITE_EXTENSION_INIT1
#else
#include "sqlite3.h"
#endif
↓
SQLITE_EXTENSION_INIT1
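If you prefer to script that edit, something like this should work with GNU sed, assuming the conflicting block still spans lines 123-128 in your copy of the file:
# Replace lines 123-128 of extension_functions.c with the single line SQLITE_EXTENSION_INIT1.
sed -i '123,128c\
SQLITE_EXTENSION_INIT1' extension_functions.c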
5. Enable extension functions
We need to insert a few lines into shell.c to import and enable the extension functions.
Open shell.c, search for static void open_db, and insert #include "extension_functions.c" on the line above it.
#include "extension_functions.c"
static void open_db(ShellState *p, int openFlags){
Then search for sqlite3_shathree_init(p->db, 0, 0); and insert sqlite3_extension_init(p->db, 0, 0); at the bottom of the init calls.
#endif
sqlite3_fileio_init(p->db, 0, 0);
sqlite3_shathree_init(p->db, 0, 0);
sqlite3_completion_init(p->db, 0, 0);
sqlite3_uint_init(p->db, 0, 0);
sqlite3_decimal_init(p->db, 0, 0);
sqlite3_ieee_init(p->db, 0, 0);
sqlite3_extension_init(p->db, 0, 0);
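To locate the two insertion points before editing, a quick check (the exact line numbers will differ between sqlite versions):
grep -n 'static void open_db' shell.c
grep -n 'sqlite3_shathree_init' shell.c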
6. Compile
Finally, we're ready to compile sqlite including the extension functions.
make install
It takes a while, and once done, the distribution files will be generated at the destination specified at configure time through the --prefix option.
# Now we can use extension_functions without loading it manually.
$ /usr/local/sqlite/3.33.0/bin/sqlite3
sqlite> select cos(10);
-0.839071529076452
Q: "How to compile an extension into sqlite?"
A: That depends on the extension. To compile extension-functions.c referenced in the OP:
gcc -fPIC -shared extension-functions.c -o libsqlitefunctions.so -lm
(to remove the compilation warning see here)
Usage:
$ sqlite3
sqlite> select cos(radians(45));
0.707106781186548
sqlite> .exit
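If the functions are not available right away in your sqlite3 shell, load the module explicitly first (assuming the .so built above is in the current directory):
$ sqlite3
sqlite> .load ./libsqlitefunctions
sqlite> select cos(radians(45));
0.707106781186548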
I'm not sure if this is a complete answer yet, but from the how to compile document, it looks like you might want to make an amalgamation first. In src/shell.c.in you can search for ext/misc and you'll see lines such as this:
INCLUDE ../ext/misc/completion.c
These lines are used by the tool/mkshellc.tcl script to build the combined source file that will end up being compiled into the command line shell. Once the make process for sqlite3.c is complete, you should see the code you want in the combined source file.
Then, I found a function that contained this code:
sqlite3_shathree_init(p->db, 0, 0);
All I had to do was add this in the same place:
sqlite3_series_init(p->db, 0, 0);
And now I'm able to use the generate_series function. I can't find the functions.c file you were talking about, but the process should be something similar.
If Perl code uses fork or a variant such as Parallel::Loops or Parallel::ForkManager, a standalone exe generated by pp from Par::Packer will crash when run; see the example in https://groups.google.com/forum/?fromgroups#!topic/perl.par/U4HbbbcRRTQ. B::C and B::CC have also been tested and do not work in such cases: ActivePerl can't even install them, showing errors like:
C.obj : error LNK2001: unresolved external symbol Perl_Iwatchaddr_ptr
C.obj : error LNK2001: unresolved external symbol Perl_Iwatchok_ptr
Cygwin perl can install them and generate an exe as instructed in https://github.com/rurban/perl-compiler, but when we run it, it crashes with an error like END failed--call queue aborted. Couldn't print to pipe. Are there any tested, working Perl compilers or packers that handle fork?
Enclose all the code inside END {}, and change use to require, e.g. require Parallel::Loops. For modules that are implicit dependencies, explicitly include them with pp, for example:
pp -M Params::Validate::XS -M Params::Validate::PP -M Class/Load/XS.pm -M Class/Load/PP.pm -o some-code.exe some-code.pl
If the script does not need to share data between the parent and the children, we can also use MCE, which has a use-threads option that is compatible with pp, and then the END {} method is not needed.
I am trying to get PhysX working using Ubuntu.
First, I downloaded the SDK here:
http://developer.download.nvidia.com/PhysX/2.8.1/PhysX_2.8.1_SDK_CoreLinux_deb.tar.gz
Next, I extracted the files and installed each package with:
dpkg -i filename.deb
This gives me the following files located in /usr/lib/PhysX/v2.8.1:
libNxCharacter.so
libNxCooking.so
libPhysXCore.so
libNxCharacter.so.1
libNxCooking.so.1
libPhysXCore.so.1
Next, I created symbolic links to /usr/lib:
sudo ln -s /usr/lib/PhysX/v2.8.1/libNxCharacter.so.1 /usr/lib/libNxCharacter.so.1
sudo ln -s /usr/lib/PhysX/v2.8.1/libNxCooking.so.1 /usr/lib/libNxCooking.so.1
sudo ln -s /usr/lib/PhysX/v2.8.1/libPhysXCore.so.1 /usr/lib/libPhysXCore.so.1
Now, using Eclipse, I have specified the following libraries (-l):
libNxCharacter.so.1
libNxCooking.so.1
libPhysXCore.so.1
And the following search paths just in case (-L):
/usr/lib/PhysX/v2.8.1
/usr/lib
Also, as Gerald Kaszuba suggested, I added the following include paths (-I):
/usr/lib/PhysX/v2.8.1
/usr/lib
Then, I attempted to compile the following code:
#include "NxPhysics.h"
NxPhysicsSDK* gPhysicsSDK = NULL;
NxScene* gScene = NULL;
NxVec3 gDefaultGravity(0,-9.8,0);
void InitNx()
{
gPhysicsSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
if (!gPhysicsSDK)
{
std::cout<<"Error"<<std::endl;
return;
}
NxSceneDesc sceneDesc;
sceneDesc.gravity = gDefaultGravity;
gScene = gPhysicsSDK->createScene(sceneDesc);
}
int main(int arc, char** argv)
{
InitNx();
return 0;
}
The first error I get is:
NxPhysics.h: No such file or directory
Which tells me that the project is obviously not linking properly. Can anyone tell me what I have done wrong, or what else I need to do to get my project to compile? I am using the GCC C++ Compiler. Thanks in advance!
It looks like you're confusing header files with library files. NxPhysics.h is a source code header file. Header files are needed when compiling source code (not when linking). It's probably located in a place like /usr/include or /usr/include/PhysX/v2.8.1, or similar. Find the real location of this file and make sure you use the -I option to tell the compiler where it is, as Gerald Kaszuba suggests.
The libraries are needed when linking the compiled object files (and not when compiling). You'll need to deal with this later with the -L and -l options.
Note: depending on how you invoke gcc, you can have it do compiling and then linking with a single invocation, but behind the scenes it still does a compile step then a link step.
EDIT: Extra explanation added...
When building a binary using a C/C++ compiler, the compiler reads the source code (.c or .cpp files). While reading it, there are frequently #include statements that are used to read .h files. The #include statements give the names of files that must be loaded. Those exact files must exist in the include path. In your case, a file with the exact name "NxPhysics.h" must be found somewhere in the include path. Typically, /usr/include is in the path by default, and so is the current directory. If the headers are somewhere else such as a subdirectory of /usr/include, then you always need to explicitly tell the compiler where to look using the -I command-line switches (or sometimes with environment variables or other system configuration methods).
A .h header file typically includes data structure declarations, inline function definitions, function and class declarations, and #define macros. When the compilation is done, a .o object file is created. The compiler does not know about .so or .a libraries and cannot use them in any way, other than to embed a little bit of helper information for the linker. Note that the compiler also embeds some "header" information in the object files. I put "header" in quotes because the information only roughly corresponds to what may or may not be found in the .h files. It includes a binary representation of all exported declarations. No macros are found there. I believe that inline functions are omitted as well (though I could be wrong there).
Once all of the .o files exist, it is time for another program to take over: the linker. The linker knows nothing of source code files or .h header files. It only cares about binary libraries and object files. You give it a collection of libraries and object files. In their "headers" they list what things (data types, functions, etc.) they define and what things they need someone else to define. The linker then matches up requests for definitions from one module with actual definitions for other modules. It checks to make sure there aren't multiple conflicting definitions, and if building an executable, it makes sure that all requests for definitions are fulfilled.
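To make the two stages concrete, here is a generic sketch (paths and library names are placeholders, not specific to PhysX) of compiling and linking as separate steps:
# Step 1: compile only (-c). Headers are found via -I; no libraries are involved yet.
g++ -I/path/to/headers -c main.cpp -o main.o
# Step 2: link. Object files plus libraries, found via -L and -l.
g++ main.o -L/path/to/libs -lSomeLibrary -o myprogram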
There are some notable caveats to the above description. First, it is possible to call gcc once and get it to do both compiling and linking, e.g.
gcc hello.c -o hello
will first compile hello.c to memory or to a temporary file, then it will link against the standard libraries and write out the hello executable. Even though it's only one call to gcc, both steps are still being performed sequentially, as a convenience to you. I'll skip describing some of the details of dynamic libraries for now.
If you're a Java programmer, then some of the above might be a little confusing. I believe that .net works like Java, so the following discussion should apply to C# and the other .net languages. Java is syntactically a much simpler language than C and C++. It lacks macros and it lacks true templates (generics are a very weak form of templates). Because of this, Java skips the need for separate declaration (.h) and definition (.c) files. It is also able to embed all the relevant information in the object file (.class for Java). This makes it so that both the compiler and the linker can use the .class files directly.
The problem was indeed with my include paths. Here is the relevant command:
g++ -I/usr/include/PhysX/v2.8.1/SDKs/PhysXLoader/include -I/usr/include -I/usr/include/PhysX/v2.8.1/LowLevel/API/include -I/usr/include/PhysX/v2.8.1/LowLevel/hlcommon/include -I/usr/include/PhysX/v2.8.1/SDKs/Foundation/include -I/usr/include/PhysX/v2.8.1/SDKs/Cooking/include -I/usr/include/PhysX/v2.8.1/SDKs/NxCharacter/include -I/usr/include/PhysX/v2.8.1/SDKs/Physics/include -O0 -g3 -DNX_DISABLE_FLUIDS -DLINUX -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp"
Also, for the linker, only "PhysXLoader" was needed (same as Windows). Thus, I have:
g++ -o"PhysXSetupTest" ./main.o -lglut -lPhysXLoader
While installing, I got the following error:
dpkg: dependency problems prevent configuration of libphysx-dev-2.8.1:
libphysx-dev-2.8.1 depends on libphysx-2.8.1 (= 2.8.1-4); however:
Package libphysx-2.8.1 is not configured yet.
dpkg: error processing libphysx-dev-2.8.1 (--install):
dependency problems - leaving unconfigured
Errors were encountered while processing:
So I reinstalled libphysx-2.8.1_4_i386.deb:
sudo dpkg -i libphysx-2.8.1_4_i386.deb