Get package version to CPP - Haskell

When parsing source code with the C preprocessor enabled, the parser doesn't like undefined macros like MIN_VERSION_packagename(a,b,c). How can I get cabal/GHC to tell cpp the package info and add the macro definitions?

You can use the very idiomatic (/s) options:
ghc -optP-include -optPdist/build/autogen/cabal_macros.h
I happen to have just been writing a pull request to doctest about this; you may be interested in referencing it:
https://github.com/sol/doctest/pull/109/files#diff-438bc19bd41887f8cacb796eaa990b0aR81
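For context, cabal_macros.h is where Cabal writes the VERSION_<package> and MIN_VERSION_<package> macros for every dependency, so once it is included the parser no longer trips over them. A minimal sketch of using one of those macros (the module and binding names are just for illustration, and it assumes the package depends on base):

{-# LANGUAGE CPP #-}
module Compat (compatNote) where

-- With cabal_macros.h included (e.g. via the -optP flags above),
-- Cabal's version-test macros such as MIN_VERSION_base(a,b,c) are defined.
compatNote :: String
#if MIN_VERSION_base(4,8,0)
compatNote = "built against base >= 4.8"
#else
compatNote = "built against an older base"
#endif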

Related

Using an external version number in Haskell packages (Cabal)

I'm working on a project that's chosen CMake as its build tool. The project is made up of several executables, and for the last few months some of them have been written in Haskell. We strongly wish all executables to show the same version number when called as foo --version. Ideally that version should be recorded in one place, and ideally that place should be the top-level CMakeLists.txt (this is where the source for all the other executables gets it, via CMake's configure_file function).
Is there some nice way of achieving this?
Some extra information that might be useful:
The source for each executable lives in its own dir, with its own Cabal file.
We use stack to build, and there is a single stack.yaml file that points to all directories with Haskell code.
I thought I'd document the solution I've landed on.
the source changes
I added the CPP language extension and use a macro (VERSION) to choose between a version provided as a CPP macro and the version provided by Cabal:
{-# LANGUAGE CPP #-}
module Main where
import Data.Version
import Text.ParserCombinators.ReadP
.... -- other imports
#ifdef VERSION
#define xstr(s) str(s)
#define str(s) #s
version = fst $ last $ readP_to_S parseVersion xstr(VERSION)
#else
import Paths_<Cabal project name> (version)
#endif
The need for the double-expansion (xstr and str) is explained in the answer to another question.
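To illustrate the two-step expansion (assuming, hypothetically, that the package is built with -DVERSION=1.2.3):

-- Hypothetical expansion with -DVERSION=1.2.3:
--   str(VERSION)   expands to  "VERSION"          (# does not macro-expand its argument)
--   xstr(VERSION)  expands to  str(1.2.3), which expands to  "1.2.3"
-- so the binding above effectively becomes:
--   version = fst $ last $ readP_to_S parseVersion "1.2.3"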
building
The above code will unfortunately not build with a simple stack build command. This apparently has to do with the default CPP that GHC uses (/usr/bin/gcc) and the flags it passes to it (as I understand it, the culprit is -traditional). The solution is to tell GHC to use GNU cpp:
stack build --ghc-options "-pgmP=/usr/bin/cpp -DVERSION=1.2.3"
or as I put it in my CMakeLists.txt (I use ExternalProject to integrate stack into our CMake-based build):
ExternalProject_Add(haskell-bits
  ...
  BUILD_COMMAND cd <SOURCE_DIR>
    && ${HaskellStack_EXE} --local-bin-path <BINARY_DIR> --install-ghc
       install --ghc-options "-pgmP=/usr/bin/cpp -DVERSION=${<CMake proj name>_VERSION}"
  ...
)
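For completeness, the executable can then print that shared version when invoked as foo --version. A minimal, self-contained sketch (with a stand-in version binding instead of the CPP/Paths_-provided one selected above):

module Main (main) where

import Control.Monad (when)
import Data.Version (Version, makeVersion, showVersion)
import System.Environment (getArgs)

-- Stand-in for the version binding selected by the CPP block above.
version :: Version
version = makeVersion [1, 2, 3]

main :: IO ()
main = do
  args <- getArgs
  -- Print the shared version when asked for it.
  when ("--version" `elem` args) $
    putStrLn (showVersion version)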

Replace compiler when building Haskell project with Cabal

Is it possible to somehow configure a Cabal project to use a different compiler than GHC? Additionally, is it possible to control this with some flags?
I want to compile my project with GHC or Haste (to JavaScript) based on some compilation flags.
It would be ideal if I could set my cabal configuration or my custom script to do something like:
-- target JS
cabal configure --target=js
cabal build
-- target Native
cabal configure --target=native
cabal build
To build a Cabal project with either GHC or Haste, use the cabal binary for the former and haste-inst (which comes with Haste) for the latter.
To have conditional code in your modules, add {-# LANGUAGE CPP #-} and use #ifdef __HASTE__, which is defined only by Haste, not by GHC. Note that __GLASGOW_HASKELL__ is defined in both cases (which makes sense, as Haste builds on GHC for large parts of the compilation). So you would use it like this:
{-# LANGUAGE CPP #-}
module Module where
compiler :: String
#ifdef __HASTE__
compiler = "haste"
#else
compiler = "GHC"
#endif
Theoretically, for conditional settings in the Cabal file something like this should work:
library
  exposed-modules:
    Module
  if impl(ghc)
    exposed-modules:
      Module.GHC
  if impl(haste)
    exposed-modules:
      Module.Haste
  build-depends: base ==4.6.*
but it seems that even with haste-inst, impl(ghc) is true; a bug report has been filed.
While it's currently not possible to use impl(haste) in your cabal files, you can now check for flag(haste-inst) to see if your package is being built using haste-inst.

Haskell doctest and FFI

I have a module which binds to a C function using the FFI. How can I make this module work with doctest?
The error I get when running doctest Foo.hs is something like this:
ByteCodeLink: can't find label
During interactive linking, GHCi couldn't find the following symbol:
bar
This may be due to you not asking GHCi to load extra object files,
archives or DLLs needed by your current session. Restart GHCi, specifying
the missing library using the -L/path/to/object/dir and -lmissinglibname
flags, or simply by naming the relevant files on the GHCi command line.
Alternatively, this link failure might indicate a bug in GHCi.
If you suspect the latter, please send a bug report to:
glasgow-haskell-bugs@haskell.org
### Failure in Foo.hs:41: expression `foo'
expected: [42]
but got:
<interactive>:24:1: Not in scope: `bar'
Examples: 2 Tried: 2 Errors: 0 Failures: 1
Doctest accepts arbitrary GHC flags. If you want to run Doctest on FFI code, you need to pass exactly the same flags that you would need to run a GHCi session with that code. Have a look, for example, at the Doctest driver of unix-time.
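As a point of reference, a hypothetical module of the shape described in the question might look like the sketch below (the C symbol bar, its semantics, and the wrapper foo are made-up for illustration). The point is that doctest must be given the same extra inputs GHCi would need to resolve bar, i.e. the object file or the -L/-l flags for the library that defines it.

{-# LANGUAGE ForeignFunctionInterface #-}
module Foo (foo) where

import Foreign.C.Types (CInt (..))

-- Binding to a C function defined in an external object file or library;
-- GHCi (and therefore doctest) can only run the example below if that
-- object code is linked in.
foreign import ccall "bar" c_bar :: CInt -> CInt

-- | Assuming (hypothetically) that the C function adds one:
--
-- >>> foo 41
-- 42
foo :: Int -> Int
foo = fromIntegral . c_bar . fromIntegral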

How to properly create an autoconf setup for netcdf 4.x?

I am not sure exactly what my question is, as I get seriously turned around by autoconf/automake/libtoolize etc. Several of us are trying to autoconf-ize mbsystem. I've put up a repo of the work to date here:
https://bitbucket.org/schwehr/mbsystem
I'm trying to improve the netcdf setup to use nc-config, but am uncertain how to do this correctly. I am working on configure.in. It seems unable to find a header with AC_CHECK_HEADER("netcdfcpp.h") after INCLUDES="$INCLUDES `$nc_config --cflags`" (taken from the gdl netcdf check). What is the correct way to update the include path from nc-config --cflags?
http://gnudatalanguage.cvs.sourceforge.net/viewvc/gnudatalanguage/gdl/configure.in?revision=1.121
I then tried to use AX_PATH_GENERIC and got stuck on this error with m4_include([m4/ax_path_generic.m4]):
Running autoconf ...
configure.in:29: error: possibly undefined macro: AC_SUBST
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
configure:12992: error: possibly undefined macro: AC_MSG_RESULT
Any help creating a netcdf check that actually works with funky non-standard install locations via nc-config, and figuring out how to properly put a macro in the m4 directory, would be a huge help. A pointer to a package doing this really cleanly would also be a super help. I've been looking at the netcdf, gdal, geos and gdl sources for examples, and things like the octopus netcdf check do not use nc-config: http://www.tddft.org/trac/octopus/browser/trunk/m4/netcdf.m4
The current setup with fink for netcdf 4.x:
nc-config --cflags --libs
-I/sw/opt/netcdf7/include -I/sw/include
-L/sw/opt/netcdf7/lib -lnetcdf
Thanks!
See "Makefile.am: How to use curl-config and xml2-config in configure.ac?" and substitute netcdf for xml2/curl.
Just use
PKG_CHECK_MODULES([libnetcdf], [netcdf])
in configure.ac, and then, in Makefile.am:
AM_CPPFLAGS = ${libnetcdf_CFLAGS}
bin_PROGRAMS = foo
foo_SOURCES = ...
foo_LDADD = ${libnetcdf_LIBS}
The "correct" way to use a third party m4 macro is to use aclocal (usually via automake) to generate aclocal.m4. If you are using automake, just add
ACLOCAL_AMFLAGS = -I m4
to Makefile.am and put
AC_CONFIG_MACRO_DIR([m4])
in configure.ac (after renaming configure.in).
If you are not using automake, add '-I m4' when you invoke aclocal. If you are not using aclocal, then you'll have to append the definition of the macro to the end of aclocal.m4 (and be careful to never run aclocal, as that will overwrite the file.)
There is no good example of a clean way to use *-config scripts in a build, because using such scripts is an inherently flawed approach. A slightly cleaner approach is to stop using custom scripts and use pkg-config via PKG_CHECK_MODULES, but the cleanest way to handle this is to educate your users: if a user wants to install the library in a funky non-standard location, then they need to be educated enough to set LDFLAGS and CPPFLAGS appropriately.

Installing and Linking PhysX Libraries in Debian Linux

I am trying to get PhysX working on Ubuntu.
First, I downloaded the SDK here:
http://developer.download.nvidia.com/PhysX/2.8.1/PhysX_2.8.1_SDK_CoreLinux_deb.tar.gz
Next, I extracted the files and installed each package with:
dpkg -i filename.deb
This gives me the following files located in /usr/lib/PhysX/v2.8.1:
libNxCharacter.so
libNxCooking.so
libPhysXCore.so
libNxCharacter.so.1
libNxCooking.so.1
libPhysXCore.so.1
Next, I created symbolic links in /usr/lib:
sudo ln -s /usr/lib/PhysX/v2.8.1/libNxCharacter.so.1 /usr/lib/libNxCharacter.so.1
sudo ln -s /usr/lib/PhysX/v2.8.1/libNxCooking.so.1 /usr/lib/libNxCooking.so.1
sudo ln -s /usr/lib/PhysX/v2.8.1/libPhysXCore.so.1 /usr/lib/libPhysXCore.so.1
Now, using Eclipse, I have specified the following libraries (-l):
libNxCharacter.so.1
libNxCooking.so.1
libPhysXCore.so.1
And the following search paths just in case (-L):
/usr/lib/PhysX/v2.8.1
/usr/lib
Also, as Gerald Kaszuba suggested, I added the following include paths (-I):
/usr/lib/PhysX/v2.8.1
/usr/lib
Then, I attempted to compile the following code:
#include "NxPhysics.h"
NxPhysicsSDK* gPhysicsSDK = NULL;
NxScene* gScene = NULL;
NxVec3 gDefaultGravity(0,-9.8,0);
void InitNx()
{
gPhysicsSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
if (!gPhysicsSDK)
{
std::cout<<"Error"<<std::endl;
return;
}
NxSceneDesc sceneDesc;
sceneDesc.gravity = gDefaultGravity;
gScene = gPhysicsSDK->createScene(sceneDesc);
}
int main(int arc, char** argv)
{
InitNx();
return 0;
}
The first error I get is:
NxPhysics.h: No such file or directory
This tells me that the project is obviously not linking properly. Can anyone tell me what I have done wrong, or what else I need to do to get my project to compile? I am using the GCC C++ compiler. Thanks in advance!
It looks like you're confusing header files with library files. NxPhysics.h is a source code header file. Header files are needed when compiling source code (not when linking). It's probably located in a place like /usr/include or /usr/include/PhysX/v2.8.1, or similar. Find the real location of this file and make sure you use the -I option to tell the compiler where it is, as Gerald Kaszuba suggests.
The libraries are needed when linking the compiled object files (and not when compiling). You'll need to deal with this later with the -L and -l options.
Note: depending on how you invoke gcc, you can have it do compiling and then linking with a single invocation, but behind the scenes it still does a compile step then a link step.
EDIT: Extra explanation added...
When building a binary using a C/C++ compiler, the compiler reads the source code (.c or .cpp files). While reading it, there are frequently #include statements that are used to read .h files. The #include statements give the names of files that must be loaded. Those exact files must exist in the include path. In your case, a file with the exact name "NxPhysics.h" must be found somewhere in the include path. Typically, /usr/include is in the path by default, and so is the current directory. If the headers are somewhere else such as a subdirectory of /usr/include, then you always need to explicitly tell the compiler where to look using the -I command-line switches (or sometimes with environment variables or other system configuration methods).
A .h header file typically includes data structure declarations, inline function definitions, function and class declarations, and #define macros. When the compilation is done, a .o object file is created. The compiler does not know about .so or .a libraries and cannot use them in any way, other than to embed a little bit of helper information for the linker. Note that the compiler also embeds some "header" information in the object files. I put "header" in quotes because the information only roughly corresponds to what may or may not be found in the .h files. It includes a binary representation of all exported declarations. No macros are found there. I believe that inline functions are omitted as well (though I could be wrong there).
Once all of the .o files exist, it is time for another program to take over: the linker. The linker knows nothing of source code files or .h header files. It only cares about binary libraries and object files. You give it a collection of libraries and object files. In their "headers" they list what things (data types, functions, etc.) they define and what things they need someone else to define. The linker then matches up requests for definitions from one module with actual definitions for other modules. It checks to make sure there aren't multiple conflicting definitions, and if building an executable, it makes sure that all requests for definitions are fulfilled.
There are some notable caveats to the above description. First, it is possible to call gcc once and get it to do both compiling and linking, e.g.
gcc hello.c -o hello
will first compile hello.c to memory or to a temporary file, then it will link against the standard libraries and write out the hello executable. Even though it's only one call to gcc, both steps are still being performed sequentially, as a convenience to you. I'll skip describing some of the details of dynamic libraries for now.
If you're a Java programmer, then some of the above might be a little confusing. I believe that .net works like Java, so the following discussion should apply to C# and the other .net languages. Java is syntactically a much simpler language than C and C++. It lacks macros and it lacks true templates (generics are a very weak form of templates). Because of this, Java skips the need for separate declaration (.h) and definition (.c) files. It is also able to embed all the relevant information in the object file (.class for Java). This makes it so that both the compiler and the linker can use the .class files directly.
The problem was indeed with my include paths. Here is the relevant command:
g++ -I/usr/include/PhysX/v2.8.1/SDKs/PhysXLoader/include -I/usr/include -I/usr/include/PhysX/v2.8.1/LowLevel/API/include -I/usr/include/PhysX/v2.8.1/LowLevel/hlcommon/include -I/usr/include/PhysX/v2.8.1/SDKs/Foundation/include -I/usr/include/PhysX/v2.8.1/SDKs/Cooking/include -I/usr/include/PhysX/v2.8.1/SDKs/NxCharacter/include -I/usr/include/PhysX/v2.8.1/SDKs/Physics/include -O0 -g3 -DNX_DISABLE_FLUIDS -DLINUX -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp"
Also, for the linker, only "PhysXLoader" was needed (same as Windows). Thus, I have:
g++ -o"PhysXSetupTest" ./main.o -lglut -lPhysXLoader
While installing, I got the following error:
dpkg: dependency problems prevent configuration of libphysx-dev-2.8.1:
libphysx-dev-2.8.1 depends on libphysx-2.8.1 (= 2.8.1-4); however:
Package libphysx-2.8.1 is not configured yet.
dpkg: error processing libphysx-dev-2.8.1 (--install):
dependency problems - leaving unconfigured
Errors were encountered while processing:
So I reinstalled libphysx-2.8.1_4_i386.deb:
sudo dpkg -i libphysx-2.8.1_4_i386.deb
