I've created a Haskell package that makes FFI calls to functions defined in CUDA code. I'd like to compile the .cu file to an object (.o) file during the package build and force the linker to link it in.
So far, I have tried to use a technique found in this question. I've customized the buildHook to do the following (the nvcc/ar steps are sketched below the list):
run nvcc
run default buildHook
create an ar library file from the nvcc-compiled code.
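In shell terms, the extra steps amount to roughly the following (the file names and paths here are illustrative, not the exact ones from my Setup.hs):
# compile the CUDA source to a position-independent object (illustrative paths)
nvcc -c --compiler-options -fPIC cbits/kernels.cu -o dist/build/kernels.o
# archive it so the default build/link step can pick it up
ar rcs dist/build/libcudakernels.a dist/build/kernels.o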
Setup.hs is available here.
This solution has a major disadvantage: it restricts the package to static linking. Although cabal produces a shared library, that library won't work because it has no way of resolving the symbols located in the object file.
Is there a simpler way to link custom code during the build?
I do a similar thing. I have a Haskell file which calls CUDA code.
Here's how I compile CUDA libraries and link with Haskell:
$(NVCC) -c -E $(NVCC_OPTS) -o build/file.i file.cu
$(NVCC) -c $(NVCC_OPTS) -o build/file.o file.cu
I then link everything into a C++ shared library called LibSO, together with the GHC runtime options:
$(CXX) -shared -Wl,-rpath=\$$$$ORIGIN $(CXX_LINK_LIBS) $(PACKAGE_RPATH) -Lbuild -rdynamic -L/usr/local/lib/ghc-7.6.3 -lHSrts-ghc7.6.3 -o build/LibSO.so build/file.o
where
CXX_LINK_LIBS = -Lbuild -lcudart -lcuda -lpthread -lcupti -lcurand -lnvidia-ml
NVCC_OPTS = --compiler-options -fPIC -maxrregcount=0 --machine 64 -DCUDA
I then take my Haskell files and compile them into .o and .hi files. (I compile twice because of TemplateHaskell.)
ghc -v0 -Wall -rtsopts -threaded -stubdir build -ibuild/ -no-hs-main -o build/iop.o -ohi build/iop.hi -c haskell/iop.lhs
ghc -v0 -Wall -rtsopts -threaded -stubdir build -ibuild/ -no-hs-main -fPIC -dynamic -osuf dyn_o -hisuf dyn_hi -o build/iop.dyn_o -ohi build/iop.dyn_hi -c haskell/iop.lhs
So now we have Haskell dynamic objects and a C++ shared library.
In the end, I link a main Haskell file with everything:
ghc -optl "-Wl,-rpath=\$$ORIGIN" $(CXX_LINK_LIBS) -Lbuild -rtsopts -threaded -lstdc++ -lLibSO -o build/Main build/iop.dyn_o
Does this sort of help?
I am trying to compile a Haskell package without using cabal.
Given a correct .conf file (sketched further below), this seems to work:
cd src; ghc --make -dynamic -shared -fPIC -package-name adventlib System/IO/Advent.hs System/IO/Test.hs -osuf dyn_o -hisuf dyn_hi -o libHSadventlib-ghc8.6.5.so
cd src; ghc -c --make -package-name adventlib System/IO/Advent.hs System/IO/Test.hs
ar cqs src/libHSadventlib.a src/System/IO/*.o
ghc --make src/MainTest.hs
ghc --make -dynamic src/MainTest.hs -o src/MainTest_dyn
The last two lines test that I can compile an executable binary and link the library both statically and dynamically.
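For context, the "correct .conf file" is just a package description registered with ghc-pkg. A minimal sketch, with illustrative field values, paths, and dependency ids (a real file needs the ids ghc-pkg reports for your dependencies):
name: adventlib
version: 0.1.0.0
id: adventlib-0.1.0.0
key: adventlib-0.1.0.0
exposed: True
exposed-modules: System.IO.Advent System.IO.Test
import-dirs: /path/to/src
library-dirs: /path/to/src
hs-libraries: HSadventlib
depends: base-4.12.0.0
and then:
ghc-pkg register --force adventlib.conf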
Reading the docs, it seems that it should be possible to use -dynamic-too to combine the first two lines into a single ghc run. However, I haven't managed to make that work.
The next line produces the static and dynamic object files, but doesn't create the .so file:
cd src; ghc -c --make -dynamic-too -fPIC -package-name adventlib System/IO/Advent.hs System/IO/Test.hs
I can link the .so file afterwards with ghc, but then I need to add all the package dependencies manually, which loses much of the benefit of using --make.
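That manual link looks roughly like this (a sketch; the -package list is illustrative and would need to name every dependency):
cd src; ghc -shared -dynamic -package base System/IO/*.dyn_o -o libHSadventlib-ghc8.6.5.so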
If I remove the -c flag, so that ghc runs the link stage like this:
cd src; ghc --make -shared -dynamic-too -fPIC -package-name adventlib System/IO/Advent.hs System/IO/Test.hs -o libHSadventlib.a -dyno libHSadventlib-ghc8.6.5.so
then it seems to fail when linking the static library (I guess it is trying to link it as a shared object but using the static objects):
cd src; ghc --make -shared -dynamic-too -fPIC -package-name adventlib System/IO/Advent.hs System/IO/Test.hs -o libHSadventlib.a -dyno libHSadventlib-ghc8.6.5.so
[1 of 3] Compiling System.IO.Advent ( System/IO/Advent.hs, System/IO/Advent.o )
[2 of 3] Compiling System.IO.TestInternal ( System/IO/TestInternal.hs, System/IO/TestInternal.o )
[3 of 3] Compiling System.IO.Test ( System/IO/Test.hs, System/IO/Test.o )
Linking libHSadventlib.a ...
/nix/store/0hr45a0pzlh51hhcgynmfjpzff9d3ddv-binutils-2.31.1/bin/ld: /nix/store/gdpi6mrz1wcgmvpnfm9i9la9lpsb8lag-unliftio-0.2.12/lib/ghc-8.6.5/x86_64-linux-ghc-8.6.5/unliftio-0.2.12-Au2Yw1nUjiS94bY0JG3imp/libHSunliftio-0.2.12-Au2Yw1nUjiS94bY0JG3imp.a(Environment.o): relocation R_X86_64_32S against undefined symbol `stg_ap_p_info' can not be used when making a shared object; recompile with -fPIC
/nix/store/0hr45a0pzlh51hhcgynmfjpzff9d3ddv-binutils-2.31.1/bin/ld: /nix/store/7dx9j6hiscwr1a2nq9bjj91p33s9nqgg-unliftio-core-0.1.2.0/lib/ghc-8.6.5/x86_64-linux-ghc-8.6.5/unliftio-core-0.1.2.0-DmlZdkLzX278vkyONsp8WQ/libHSunliftio-core-0.1.2.0-DmlZdkLzX278vkyONsp8WQ.a(Unlift.o): relocation R_X86_64_32S against `.text.unliftiozmcorezm0zi1zi2zi0zmDmlZZdkLzzX278vkyONsp8WQ_ControlziMonadziIOziUnlift_zdp1MonadUnliftIO_info' can not be used when making a shared object; recompile with -fPIC
... etc ...
Am I missing something, or does -dynamic-too not work for creating shared libraries?
I have the full working experiment here, for reference.
Is it possible to build a shared object file on Linux without using libc? I tried building the shared object using -nostdlib, and it complains that there is a conflicting type for the built-in function 'memset' (I have my own version of the function defined within the shared object I am trying to build).
I am not using any libc functions from within the shared object file. I am building the shared object as follows:
CC = gcc
CFLAGS = -Wall -Wextra -Werror -nostdlib
OUTPUTDIR = ./build
test: outputdir
	$(CC) $(CFLAGS) -c -fPIC test.c -o ${OUTPUTDIR}/test.o
	$(CC) $(CFLAGS) ${OUTPUTDIR}/test.o -shared -o ${OUTPUTDIR}/libtest.so
outputdir:
	mkdir -p ${OUTPUTDIR}
clean:
	rm -rf ${OUTPUTDIR}
If you link with -nostdlib, you should also compile with -ffreestanding and/or -fno-builtin.
You also have to be careful that you do not reference a libc.so.6 symbol without linking against glibc. Things may appear to work superficially, but it tends to introduce breakage in certain environments, especially once additional IFUNCs are added to glibc. (Intel did that with the ICC 16 compiler library.)
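Applied to the Makefile in the question, that advice would amount to a change along these lines:
CFLAGS = -Wall -Wextra -Werror -nostdlib -ffreestanding -fno-builtin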
I have a libSomelib.a that can be linked to an executable by the following command:
g++ -L. -lsomeLib -lpcrecpp -lpcre -lpthread main.cpp -o main
But how can I build a shared object from it that contains all of its dependencies?
I want to achieve the following with my new someLib.so:
g++ -L. -lsomeLib main.cpp -o main
I have tried the following:
g++ -shared -L. -lsomeLib -lpcrecpp -lpcre -lpthread -o libSomelib_static.so
This gives me an .so file with no symbols.
PS: I'm a complete beginner with compilers.
There are a few issues at play here (a sketch pulling them together follows the list):
1. Linkers only pull object files out of an archive if they resolve currently unresolved symbols. This is why the order of archives on the command line matters. The correct order is object files, followed by static libraries, followed by shared libraries, e.g. g++ -o main -pthread main.cpp -L. -lsomeLib -lpcrecpp -lpcre.
2. When you build the shared library, that archive does not resolve any unresolved symbols, so the linker ignores it completely; see point 1.
3. Object files for a shared library must be compiled as position-independent code (the -fPIC compiler flag). Archives are normally built without this flag.
4. Use the -pthread flag both when compiling and linking multi-threaded applications, not -lpthread.
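Putting points 3 and 4 together: if you have the sources of someLib, a sketch of building a self-contained shared library would be (file names are illustrative):
# compile the library's own sources as position-independent code
g++ -c -fPIC -pthread somelib_a.cpp somelib_b.cpp
# link them into a shared object together with the libraries it depends on
g++ -shared -o libsomeLib.so somelib_a.o somelib_b.o -lpcrecpp -lpcre -pthread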
I am learning to create shared libraries on Linux, with the eventual goal of developing parallelised scientific computing programs. I took the toy example for a shared library from here, and modified the Makefile from this question to suit it. My Makefile now is:
CC = mpicc
INCDIR = -I ./
CFLAGS = -Wall -rdynamic -g -fPIC $(INCDIR)
LIBADD = -L ./ -lcalc_mean
all: dyn_main.out
dyn_main.out: libcalc_mean.so
	$(CC) -o $@ main.c $(LIBADD)
libcalc_mean.so: calc_mean.o
	$(CC) -shared --export-dynamic -o $@ $<
calc_mean.o: calc_mean.c
	$(CC) $(CFLAGS) -c $<
clean:
	-rm *.o
	-rm *.out
	-rm *.so
.PHONY: clean
When I make with CC = gcc in the Makefile, things run fine. I could run the binary even with mpirun.
When I have CC = mpicc in the Makefile, I get the following error.
mpicc -Wall -rdynamic -g -fPIC -I ./ -c calc_mean.c
mpicc -shared --export-dynamic -o libcalc_mean.so calc_mean.o
mpicc -o dyn_main.out main.c -L ./ -lcalc_mean
/home/elan/localinstalls/lib/libmpi.so: undefined reference to `pthread_key_create'
/home/elan/localinstalls/lib/libmpi.so: undefined reference to `pthread_getspecific'
/home/elan/localinstalls/lib/libmpi.so: undefined reference to `pthread_create'
/home/elan/localinstalls/lib/libmpi.so: undefined reference to `pthread_atfork'
/home/elan/localinstalls/lib/libmpi.so: undefined reference to `pthread_setspecific'
/home/elan/localinstalls/lib/libmpi.so: undefined reference to `pthread_join'
collect2: ld returned 1 exit status
make: *** [dyn_main.out] Error 1
I added the path to libpthread.so/.a to LD_LIBRARY_PATH, but to no avail. I have a self-compiled openmpi-1.5.4. If this were an Open MPI dependency, shouldn't it have been resolved when I configured it?
Is this error familiar? I am using Ubuntu 11.04 with gcc 4.5.2. I have already built and run some MPI parallel programs successfully, but they are large packages configured with autotools. One of their config.logs displays the same error, yet even that one builds and runs fine.
References to / examples of creating static/shared libraries with MPI will also be appreciated (though Open MPI discourages fully static builds).
Thank you very much,
Elan.
You should be able to just add -lpthread.
Open MPI probably didn't add it because it found that adding -lpthread wasn't necessary (likely due to some other dependency implicitly pulling in the pthread library). But with the linker flags you're using, you might well have changed that implicit dependency, so the pthread library isn't being pulled in automatically anymore.
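Applied to the failing link line from the question, that would be, for example:
mpicc -o dyn_main.out main.c -L ./ -lcalc_mean -lpthread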
If adding -lpthread to the command line fixes the issue, then see this FAQ entry for how to update the wrapper compilers (E.g., add your own flags): http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0
You can see what options the Open MPI compiler wrapper supplies to the underlying compiler and linker using the -showme option or one of its specific variants:
-showme:compile to just show the compiler flags
-showme:link to just show the linker flags
For example:
$ mpicc -showme
icc -I/opt/MPI/openmpi-1.5.3/linux/intel/include -I/opt/MPI/openmpi-1.5.3/linux/intel/include/openmpi -fexceptions -pthread -I/opt/MPI/openmpi-1.5.3/linux/intel/lib -Wl,-rpath,/opt/MPI/openmpi-1.5.3/linux/intel/lib -I/opt/MPI/openmpi-1.5.3/linux/intel/lib -L/opt/MPI/openmpi-1.5.3/linux/intel/lib -lmpi -ldl -Wl,--export-dynamic -lnsl -lutil
I'm trying to install GHC with -fPIC support in Fedora.
I've grabbed a source tarball since it seems no binary one has this.
In build.mk I've changed the "quick" build flavour to:
ifeq "$(BuildFlavour)" "quick"
SRC_HC_OPTS = -H64m -O0 -fasm -fPIC
GhcStage1HcOpts = -O -fasm -fPIC
GhcStage2HcOpts = -O0 -fasm -fPIC
GhcLibHcOpts = -O -fasm -fPIC
SplitObjs = NO
HADDOCK_DOCS = NO
BUILD_DOCBOOK_HTML = NO
BUILD_DOCBOOK_PS = NO
BUILD_DOCBOOK_PDF = NO
endif
Unfortunately, when compiling I still get the ld error:
ghc -fglasgow-exts --make -shared -oHs2lib.a /tmp/Hs2lib924498/Hs2lib.hs dllmain.o -static -fno-warn-deprecated-flags -O2 -package ghc -package Hs2lib -i/home/phyx/Documents/Haskell/Hs2lib -optl-Wl,-s -funfolding-use-threshold=16 -optc-O3 -optc-ffast-math
Linking a.out ...
/usr/bin/ld: /tmp/Hs2lib924498/Hs2lib.o: relocation R_X86_64_32 against `ghczmprim_GHCziUnit_Z0T_closure' can not be used when making a shared object; recompile with -fPIC
/tmp/Hs2lib924498/Hs2lib.o: could not read symbols: Bad value
So it seems that ghc-prim still isn't compiled with -fPIC.
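A quick way to check whether an archive was built as position-independent code is to look for 32-bit relocations in it (the path below is a placeholder for wherever the ghc-prim library lives on your system):
# R_X86_64_32 relocations indicate objects that were not built with -fPIC
readelf -r /path/to/libHSghc-prim*.a | grep R_X86_64_32 | head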
I've also told cabal to build any packages with -fPIC and shared.
Anyone have any ideas?
EDIT:
Thanks to dcouts I've been able to make some progress, but now I'm at the point where I think libffi isn't compiled with -fPIC. I've edited the Makefile(.in) for it, but so far no luck.
The new command is:
ghc -fPIC -shared dllmain.o Hs2lib.o /usr/local/lib/ghc-7.0.3/libHSrts.a -o Hs2lib.so
where dllmain.c and Hs2lib.hs have both been compiled using -fPIC.
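For completeness, those two objects were produced roughly like this (a sketch; the real invocations may have carried extra flags):
gcc -c -fPIC dllmain.c -o dllmain.o
ghc -c -fPIC Hs2lib.hs -o Hs2lib.o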
The error I get is:
/usr/bin/ld: /usr/local/lib/ghc-7.0.3/libHSffi.a(closures.o): relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/ghc-7.0.3/libHSffi.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
After you see this error, do the following:
cd /tmp/Hs2lib924498/
ghc -fglasgow-exts --make -shared -oHs2lib.a /tmp/Hs2lib924498/Hs2lib.hs dllmain.o -static -fno-warn-deprecated-flags -fPIC -O2 -package ghc -package Hs2lib -i/home/phyx/Documents/Haskell/Hs2lib -optl-Wl,-s -funfolding-use-threshold=16 -optc-O3 -optc-ffast-math
Note I added -fPIC to the failed ghc command.
Once the command succeeds, continue the compilation from within the tmp directory without cleaning the already-compiled files. GHC should skip them and continue where it left off.
There's an FAQ entry on this topic on the Haskell Stack page.
It basically says the problem is environment-related and sometimes non-deterministic.
The issue may be related to the use of hardening flags in some cases, specifically those related to producing position-independent executables (PIE).
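One way to check whether your toolchain applies PIE hardening by default is to inspect how gcc was configured, for example:
# if the output mentions --enable-default-pie, the compiler produces PIE executables by default
gcc -v 2>&1 | grep -o -- '--enable-default-pie'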
There's also a workaround suggestion for Arch Linux:
On Arch Linux, installing the ncurses5-compat-libs package from AUR resolves this issue.
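For example, using an AUR helper (yay is shown here purely as an illustration; any AUR workflow works):
yay -S ncurses5-compat-libs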