Haskell Stack build including a makefile

I am developing a library that uses some C bindings via inline-c. As of now, the build process involves a makefile like the following, since we first need to produce C stubs from the inline-c quasiquotes, compile them into object code, and link the objects dynamically, in this case into GHCi.
step1:
	ghc ${SRCDIR}/Internal/InlineC.hs -isrc/

step2:
	cc -c ${SRCDIR}/Internal/InlineC.c -o ${LIBDIR}/InlineC_c.o -I${PETSC_DIR_ARCH}/include -I${PETSC_DIR}/include

step3:
	ghci ${SRCDIR}/Test.hs ${SRCDIR}/Internal/InlineC.hs ${LIBDIR}/InlineC_c.o ${LIBDIR}/Internal.o -isrc/ -L${PETSC_DIR_ARCH}/lib -lpetsc -lmpich
Question
Is there a way to package up the above build sequence in a stack build recipe?
Thank you in advance

Yes, there is. Read the Stack guide carefully and adhere to the correct .cabal file syntax.
More precisely, when in a similar situation (Haskell -> C -> Haskell), make sure the .cabal file:
brings the bound C libraries into scope, by specifying the include-dirs, extra-lib-dirs and extra-libraries fields in the library stanza, and
has the c-sources field pointing to a file with the same name as the Haskell file that contains the inline-c stuff (i.e. all the [C.exp| |] quasiquoters wrapping C functions) but with the .c extension.
This will trigger a warning, because the C file doesn't exist yet at the beginning of the build, but it will ALSO call gcc, which takes care of the C output produced by inline-c.
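As a sketch, a library stanza for the PETSc setup above might look roughly like this (module and path names are illustrative; .cabal files cannot expand shell variables such as ${PETSC_DIR}, so the directories are written out, or else supplied at build time via stack's --extra-include-dirs and --extra-lib-dirs flags):

library
  hs-source-dirs:      src
  exposed-modules:     Internal.InlineC
  build-depends:       base
                     , inline-c
  c-sources:           src/Internal/InlineC.c
  include-dirs:        /opt/petsc/arch-linux2-c-debug/include
                     , /opt/petsc/include
  extra-lib-dirs:      /opt/petsc/arch-linux2-c-debug/lib
  extra-libraries:     petsc
                     , mpich
  default-language:    Haskell2010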

Related

How to delete temporary files in scons build without causing them to be rebuilt

In the build I'm trying to do, there are two steps to generate the output:
a.bin -> a.c (convert any file into a C file with an array storing its contents)
a.c -> a.o
a.c can be massive and should be deleted after a.o is generated.
Here's what I've tried so far:
1. Use separate builders for each step. For the last step, use target.AddPostAction(Delete("a.c")).
This does delete the temp file, but when you build again, a.c gets regenerated, because unlike Make, SCons builds all targets by default.
Is there some command to tell SCons not to build a target unless it's needed by another target? It would be the opposite of AlwaysBuild().
2. Create a custom builder that does both steps (target=a.o, source=a.bin), so that SCons doesn't even know about a.c.
This required me to write my own command for compiling C, which works for GCC but not for the Microsoft compiler. Is there any way to get the C compile command SCons generates, or to have SCons execute that command immediately?
Did you try:
Ignore('a.o', 'a.c')
I haven't tried it, but it might work.
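A minimal SConstruct sketch of that suggestion; the bin2c.py converter script and the Bin2C builder name are assumptions, not part of the original answer:

env = Environment()

# Hypothetical step 1: turn the binary blob into a C array
bin2c = Builder(action='python bin2c.py $SOURCE $TARGET',
                suffix='.c', src_suffix='.bin')
env.Append(BUILDERS={'Bin2C': bin2c})

c_file = env.Bin2C('a.c', 'a.bin')

# Step 2: compile the generated C file with the platform's default compiler
obj = env.Object('a.o', c_file)

# Delete the intermediate once the object exists, and tell SCons to
# ignore the a.o -> a.c dependency so a.c isn't regenerated on the next run
env.AddPostAction(obj, Delete('a.c'))
env.Ignore(obj, c_file)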

What is the preferred way to write quick Haskell test programs that depend on Stack libraries in local directories?

I have a Haskell library that I am developing using Stack. As I am developing the library, I like to write small test/experimentation programs that use the library. I keep a collection of these test programs for myself in a directory locally. These test modules are very quick and informal, and not appropriate to include as unit tests in the committed library code. Typically, most of them aren't even maintained and won't compile against the latest version of the library, but I keep them around in case I want to update them later. When I'm working on a test program, I want it to build against my working copy of the library, with any changes that I've made to the library locally.
How should I set up my Stack build environment for this situation? Here are some options I've tried, and the problems with each one.
Two Cabal packages, one Stack configuration. The stack.yaml file lists both packages and defines the build environment for both at once.
Problem: The stack.yaml file needs to be included as part of the committed library source code, so that other developers can build the library from source reproducibly. I don't want the public stack.yaml file for my library to include build information for my local test projects.
Problem: As far as I know, to make this work I need to have a .cabal file that lists all the executables and modules for my test programs. This is annoying to update whenever I want to throw together a quick experimental script, and will fail to build any of the test programs if I have even a single module that doesn't compile. I can't have a .cabal file with no sections, because Cabal gives "No executables, libraries, tests, or benchmarks found. Nothing to do.", and because this offers nowhere to list build-depends.
Create a Cabal sandbox for the test programs. Use cabal sandbox add-source to add the local library as a package. See also this answer.
Problem: Using Cabal sandboxes instead of Stack reintroduces a lot of the dependency problems that Stack is supposed to fix, such as using the system-global GHC instead of the GHC defined by the resolver.
Have a separate stack.yaml for the test programs. Add the library under packages as location: 'C:\Path\To\Local\Library' and set extra-dep: true for that dependency. (See here for more info on this feature.) Don't put any other Cabal packages under packages in the stack.yaml for the test programs. Use stack runghc to invoke test programs within the scope of their stack.yaml.
Problem: I just can't get this one to work. Running stack build inside the test program directory gives "Error parsing targets: The project contains no local packages (packages not marked with 'extra-dep')". Running stack runghc acts as if no dependencies are present at all. I don't want to add a Cabal package for the test programs because this has the same problem as option 1 with needing to construct an explicit .cabal file describing the modules to build.
Problem: Stack build configuration info that I want to be identical between the library and the test programs has to be copied manually. For example, if I change the resolver in my library's stack.yaml, I also need to change it in the stack.yaml for my test programs separately.
Have a directory inside my working copy of the library that contains all of my test programs. Use stack runghc to invoke test programs in the context of the library.
Problem: I'd like the directory with my test programs to be outside of the directory containing my library source code and build configuration, so that I don't have to tell the version control for my library to ignore my test code, and can have my own local version control just for the test programs.
Problem: Only works with a single local library dependency. If my test programs need to depend on local working copies of two different libraries with their own stack.yaml files, I'm out of luck.
Add a symbolic link inside my working copy of the library to a separate directory that contains all of my test programs. Navigate through the symlink and use stack runghc to invoke test programs in the context of the library.
Problem: Super awkward to use, especially since I'm on Windows and Windows has terrible symlink support.
Problem: Still need to tell my version control system to ignore the symlink.
Problem: Still only works with a single library dependency.
If only one local library is involved, I use option 4. You can put your tests outside the directory of your library, and either invoke stack from the directory of your library or pass --stack-yaml path/to/library/stack.yaml.
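For example, from the scratch directory (path and file name are illustrative):
stack --stack-yaml ../mylib/stack.yaml runghc Test1.hs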
Otherwise, I use option 3, creating a separate stack project without setting extra-dep.
...
packages:
- 'path/to/package1'
- 'path/to/package2'
...
I can't think of a good workaround for the issue of configuration duplication. There would otherwise be conflicts if multiple packages specified different resolvers/package versions.
Edit: Actually a stub library works better, so edited to add.
I think the way to get #3 to work is -- under your scratch program directory -- (1) add . under packages in stack.yaml alongside the location/extra-dep: true package:
packages:
- '.'
- location: ../mylib
  extra-dep: true
(2) create an executable clause in scratch.cabal that points to a stub main program (i.e., a "Hello World" program that compiles but need not do anything) which depends on your library:
executable main
  hs-source-dirs:      src
  main-is:             Stub.hs
  build-depends:       base
                     , mylib
  default-language:    Haskell2010
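The stub program itself can be minimal, e.g. (a sketch of the "Hello World"-style placeholder described above):

-- src/Stub.hs: compiles but does nothing; it only gives the stanza something to build
main :: IO ()
main = return ()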
or a library clause with no exposed modules, which again depends on your mylib library:
library
  hs-source-dirs:      src
  build-depends:       base >= 4.7 && < 5
                     , mylib
  default-language:    Haskell2010
and (3) run stack build in the scratch directory. This should build and register mylib, and now stack runghc Prog1.hs should work fine for running programs that depend on mylib modules.
If you use the executable approach, the stub program is compiled as a side effect but otherwise ignored. If you use the library approach, it looks like the stub library isn't even built; you then also have the option of actually building a scratch library, adding some exposed modules of shared code for your test programs to use if that's convenient. So the stub library might be best.
None of this solves the problem of keeping stack.yaml info like the resolver version in sync, but it seems to address all the problems you list in 1, 2, 4, and 5. In particular, it should work fine for test programs that depend on multiple local libraries you're developing.

Cabal can't find foreign library when building on NixOS

I am trying to build an internal Haskell project on NixOS using cabal2nix. It wraps (and thus depends on) a foreign library, which on Ubuntu one would build by wgetting the source, then running make && make install && ldconfig. There, when cabal goes to build the program, it is apparently able to find the appropriate header files (which are in /usr/local/include/ta-lib or /usr/include/ta-lib).
On Nix, the process as I understand is to setup a .nix file to specify how to get and build the source, and then Nix sets up the isolated build environments. When I do this, the foreign library is fetched and built appropriately.
When Nix runs the configure step, it looks alright:
configureFlags: --verbose --prefix=/nix/store/fwpw03bd0c2m5yb7v2wc7g6f0qj912ra-talib-0.1.0.0 --libdir=$prefix/lib/$compiler --libsubdir=$pkgid --with-gcc=gcc --package-db=/tmp/nix-build-talib-0.1.0.0.drv-0/package.conf.d --ghc-option=-optl=-Wl,-rpath=/nix/store/fwpw03bd0c2m5yb7v2wc7g6f0qj912ra-talib-0.1.0.0/lib/ghc-7.10.2/talib-0.1.0.0 --enable-split-objs --disable-library-profiling --disable-executable-profiling --enable-shared --enable-library-vanilla --enable-executable-dynamic --enable-tests --extra-include-dirs=/nix/store/gvglncjgd5yif9bc03qalmp2mrjp524n-ta-lib-0.4.0/include --extra-lib-dirs=/nix/store/gvglncjgd5yif9bc03qalmp2mrjp524n-ta-lib-0.4.0/lib
With --extra-include-dirs and --extra-lib-dirs set to the correct paths in the Nix store. However, when it goes to build it complains with,
Setup: Missing dependency on a foreign library:
* Missing C library: ta_lib
Unfortunately I don't understand how cabal determines whether the foreign library is present. I read here (Haskell how to resolve cabal error: Missing dependencies on foreign libraries?) that cabal tries to build and link a small C test program for each foreign library it is told about. So, somehow it is not finding the correct library.
What is wrong? Does this have to do with the step in Ubuntu of running ldconfig?
The problem is that ta_lib depends on the system math library m, but that library isn't linked by default. You can check that by creating a stub C program
echo "int main() { return 0; }" >test.c
and trying to link that with ta_lib:
$ nix-shell -p ta_lib --run "gcc test.c -lta_lib"
/nix/store/ghinzmxfm2s41nz8y873jlywwmcbw38l-ta-lib-0.4.0/lib/libta_lib.so: undefined reference to `sinh'
/nix/store/ghinzmxfm2s41nz8y873jlywwmcbw38l-ta-lib-0.4.0/lib/libta_lib.so: undefined reference to `sincos'
[...]
collect2: error: ld returned 1 exit status
Now, when Cabal tries to determine whether the library is available, it will attempt to link it into a stub test program, but that attempt will fail because of all those undefined symbols. Hence, Cabal complains that the library cannot be linked (even though its paths are configured and set up correctly).
To remedy that issue, add the m library to the extra-libraries attribute in your project's Cabal file, like so:
extra-libraries: ta_lib, m
That should make the Cabal configure phase succeed.
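You can check the fix with the same stub program; with -lm added, the link should now succeed:
$ nix-shell -p ta_lib --run "gcc test.c -lta_lib -lm"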

alglib undefined reference compilation error

I'm trying to compile a program which uses the alglib function pearsoncorr2.
Unfortunately I always get compilation errors like the following:
undefined reference to `alglib::real_1d_array::real_1d_array()'
I know that I have to compile all the dependencies of the alglib unit which contains the function I want to use. In my case it's statistics.h.
I'm including all the necessary files (ap.h, statistics.h, alglibinternal.h, alglibmisc.h, linalg.h, specialfunctions.h) when compiling my program, but still I get these undefined reference errors.
I'm using g++ on linux.
What am I doing wrong?
Thanks in advance.
You also need to include the binary part -- i.e., either the *.o files or the *.so library file -- on your final link line. So, for example, you probably need to link with linalg.o.
ALGLIB needs all 13 of its .cpp files compiled before you can use it.
I have a CMakeLists.txt that takes care of all the dependencies for me.
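Without CMake, compiling your program together with the ALGLIB sources in one g++ invocation should also link cleanly. A sketch (myprog.cpp is a placeholder; the exact source list depends on your ALGLIB version):
g++ myprog.cpp ap.cpp statistics.cpp alglibinternal.cpp alglibmisc.cpp linalg.cpp specialfunctions.cpp -o myprog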

Installing and Linking PhysX Libraries in Debian Linux

I am trying to get PhysX working using Ubuntu.
First, I downloaded the SDK here:
http://developer.download.nvidia.com/PhysX/2.8.1/PhysX_2.8.1_SDK_CoreLinux_deb.tar.gz
Next, I extracted the files and installed each package with:
dpkg -i filename.deb
This gives me the following files located in /usr/lib/PhysX/v2.8.1:
libNxCharacter.so
libNxCooking.so
libPhysXCore.so
libNxCharacter.so.1
libNxCooking.so.1
libPhysXCore.so.1
Next, I created symbolic links to /usr/lib:
sudo ln -s /usr/lib/PhysX/v2.8.1/libNxCharacter.so.1 /usr/lib/libNxCharacter.so.1
sudo ln -s /usr/lib/PhysX/v2.8.1/libNxCooking.so.1 /usr/lib/libNxCooking.so.1
sudo ln -s /usr/lib/PhysX/v2.8.1/libPhysXCore.so.1 /usr/lib/libPhysXCore.so.1
Now, using Eclipse, I have specified the following libraries (-l):
libNxCharacter.so.1
libNxCooking.so.1
libPhysXCore.so.1
And the following search paths just in case (-L):
/usr/lib/PhysX/v2.8.1
/usr/lib
Also, as Gerald Kaszuba suggested, I added the following include paths (-I):
/usr/lib/PhysX/v2.8.1
/usr/lib
Then, I attempted to compile the following code:
#include "NxPhysics.h"
NxPhysicsSDK* gPhysicsSDK = NULL;
NxScene* gScene = NULL;
NxVec3 gDefaultGravity(0,-9.8,0);
void InitNx()
{
gPhysicsSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
if (!gPhysicsSDK)
{
std::cout<<"Error"<<std::endl;
return;
}
NxSceneDesc sceneDesc;
sceneDesc.gravity = gDefaultGravity;
gScene = gPhysicsSDK->createScene(sceneDesc);
}
int main(int arc, char** argv)
{
InitNx();
return 0;
}
The first error I get is:
NxPhysics.h: No such file or directory
Which tells me that the project is obviously not linking properly. Can anyone tell me what I have done wrong, or what else I need to do to get my project to compile? I am using the GCC C++ Compiler. Thanks in advance!
It looks like you're confusing header files with library files. NxPhysics.h is a source code header file. Header files are needed when compiling source code (not when linking). It's probably located in a place like /usr/include or /usr/include/PhysX/v2.8.1, or similar. Find the real location of this file and make sure you use the -I option to tell the compiler where it is, as Gerald Kaszuba suggests.
The libraries are needed when linking the compiled object files (and not when compiling). You'll need to deal with this later with the -L and -l options.
Note: depending on how you invoke gcc, you can have it do compiling and then linking with a single invocation, but behind the scenes it still does a compile step then a link step.
EDIT: Extra explanation added...
When building a binary using a C/C++ compiler, the compiler reads the source code (.c or .cpp files). While reading it, there are frequently #include statements that are used to read .h files. The #include statements give the names of files that must be loaded. Those exact files must exist in the include path. In your case, a file with the exact name "NxPhysics.h" must be found somewhere in the include path. Typically, /usr/include is in the path by default, and so is the current directory. If the headers are somewhere else such as a subdirectory of /usr/include, then you always need to explicitly tell the compiler where to look using the -I command-line switches (or sometimes with environment variables or other system configuration methods).
A .h header file typically includes data structure declarations, inline function definitions, function and class declarations, and #define macros. When the compilation is done, a .o object file is created. The compiler does not know about .so or .a libraries and cannot use them in any way, other than to embed a little bit of helper information for the linker. Note that the compiler also embeds some "header" information in the object files. I put "header" in quotes because the information only roughly corresponds to what may or may not be found in the .h files. It includes a binary representation of all exported declarations. No macros are found there. I believe that inline functions are omitted as well (though I could be wrong there).
Once all of the .o files exist, it is time for another program to take over: the linker. The linker knows nothing of source code files or .h header files. It only cares about binary libraries and object files. You give it a collection of libraries and object files. In their "headers" they list what things (data types, functions, etc.) they define and what things they need someone else to define. The linker then matches up requests for definitions from one module with actual definitions for other modules. It checks to make sure there aren't multiple conflicting definitions, and if building an executable, it makes sure that all requests for definitions are fulfilled.
There are some notable caveats to the above description. First, it is possible to call gcc once and get it to do both compiling and linking, e.g.
gcc hello.c -o hello
will first compile hello.c to memory or to a temporary file, then it will link against the standard libraries and write out the hello executable. Even though it's only one call to gcc, both steps are still being performed sequentially, as a convenience to you. I'll skip describing some of the details of dynamic libraries for now.
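Spelled out as two explicit steps, that is:
gcc -c hello.c -o hello.o   # compile only: hello.c -> hello.o
gcc hello.o -o hello        # link only: hello.o -> executable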
If you're a Java programmer, then some of the above might be a little confusing. I believe that .net works like Java, so the following discussion should apply to C# and the other .net languages. Java is syntactically a much simpler language than C and C++. It lacks macros and it lacks true templates (generics are a very weak form of templates). Because of this, Java skips the need for separate declaration (.h) and definition (.c) files. It is also able to embed all the relevant information in the object file (.class for Java). This makes it so that both the compiler and the linker can use the .class files directly.
The problem was indeed with my include paths. Here is the relevant command:
g++ -I/usr/include/PhysX/v2.8.1/SDKs/PhysXLoader/include -I/usr/include -I/usr/include/PhysX/v2.8.1/LowLevel/API/include -I/usr/include/PhysX/v2.8.1/LowLevel/hlcommon/include -I/usr/include/PhysX/v2.8.1/SDKs/Foundation/include -I/usr/include/PhysX/v2.8.1/SDKs/Cooking/include -I/usr/include/PhysX/v2.8.1/SDKs/NxCharacter/include -I/usr/include/PhysX/v2.8.1/SDKs/Physics/include -O0 -g3 -DNX_DISABLE_FLUIDS -DLINUX -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp"
Also, for the linker, only "PhysXLoader" was needed (same as Windows). Thus, I have:
g++ -o"PhysXSetupTest" ./main.o -lglut -lPhysXLoader
While installing, I got the following error:
dpkg: dependency problems prevent configuration of libphysx-dev-2.8.1:
 libphysx-dev-2.8.1 depends on libphysx-2.8.1 (= 2.8.1-4); however:
  Package libphysx-2.8.1 is not configured yet.
dpkg: error processing libphysx-dev-2.8.1 (--install):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 libphysx-dev-2.8.1
So I reinstalled libphysx-2.8.1_4_i386.deb:
sudo dpkg -i libphysx-2.8.1_4_i386.deb
