My question comes in two parts. The first is that when I compile my project I get a long list of errors of the form
(.text+0x137f): undefined reference to `raytrzuAd6RComi0WmBiuT4685WWH_Types_zdfBinaryColor_closure'
The full list of errors can be found here
The code that produces this error can be found here.
I am using ghc 7.10.1 and cabal 1.22.4.0.
The second part of my question is that, despite following the same cabal structure as this question, cabal still recompiles the library 3 times on each cabal build, even though the executables and the library each have a unique hs-source-dirs and the executables depend on the library.
EDIT: as far as the triple compilation is concerned, the first pass builds .o files, e.g. [ 2 of 15] Compiling Types ( src/Types.hs, dist/build/Types.o ). The second pass builds .p_o files, e.g. [ 2 of 15] Compiling Types ( src/Types.hs, dist/build/Types.p_o ), which is caused by Template Haskell together with profiling.
You should include all the other non-exported modules in your Cabal file in the other-modules field, otherwise they won't be linked in properly when producing the final library or executable.
See the Cabal User's Guide for more information (although that pretty much sums up the situation with other-modules!).
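As a sketch, a library stanza along these lines illustrates the idea (Types is taken from the mangled symbol in your error; the other module and package details are placeholders, not the linked project's actual cabal file):

library
  hs-source-dirs:  src
  exposed-modules: Raytracer
  -- every internal module must be listed here; a module missing from
  -- other-modules may compile, but its symbols are never linked in
  other-modules:   Types
  build-depends:   base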
I am trying to build a Windows DLL from my Haskell code. The functions in this DLL are supposed to be called from managed C# code, and at least one function (defined in the C# code) is to be called from a function in this DLL.
At the risk of over-explaining, here's a small diagram depicting what I want:
+----------------------+ +------------------------+
| Managed C# code | | Haskell code (in DLL) |
| | (1) | |
| fn_calling_hs() -----------------> fn_called_from_cs() |
| | | |
| | | |
| fn_called_from_hs() <--------------- fn_calling_cs() |
| | (2) | |
+----------------------+ +------------------------+
I managed to make (1) work perfectly, i.e., a Haskell function in the DLL is called by the C# code, with correct marshalling of structures and arrays, and the results of the Haskell function are also correct. So far, so good.
The problem is with (2), i.e., a function in Haskell (in the DLL) calling a managed function defined in C#. The problem lies in the build itself; I have not yet gotten past that to actually check the results of (2).
Since fn_called_from_hs() is defined in the managed C# code, I only have the function symbol "imported" in the Haskell code (in the DLL):
foreign import ccall fn_called_from_hs :: IO CString
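For context, here is a minimal sketch of what the Haskell side of such a DLL can look like (the module name and function bodies are illustrative placeholders, not the actual project code): one function is exported to C#, and the C# callback is only declared, which is exactly the symbol the linker later reports as undefined:

{-# LANGUAGE ForeignFunctionInterface #-}
module HsLib where

import Foreign.C.String (CString, newCString, peekCString)

-- (1) defined in Haskell, exported from the DLL, called by C#
foreign export ccall fn_called_from_cs :: IO CString
fn_called_from_cs :: IO CString
fn_called_from_cs = newCString "hello from Haskell"

-- (2) defined in C#, only declared here
foreign import ccall fn_called_from_hs :: IO CString

-- a Haskell caller of the C# function
fn_calling_cs :: IO ()
fn_calling_cs = fn_called_from_hs >>= peekCString >>= putStrLn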
Now, when I build my Haskell project with stack, it builds the Haskell DLL without problems, but the build then also links "main.exe" - and this fails (obviously), because there is no function fn_called_from_hs() defined anywhere in the Haskell code (it is defined in C#).
Is there any way I can stop stack from building main.exe after it has built HsDLL.dll? I am OK with HsDLL.dll having an unresolved symbol (fn_called_from_hs()), because this symbol will be resolved by the runtime linker when the DLL is loaded by the managed C# code.
So far, I have tried these steps, but none of them helped:
Removed the "executables" and "test" from package.yaml
Added the GHC option -no-hs-main in the package.yaml. The package.yaml portion that builds HsDLL looks like this:
library:
  source-dirs:
  - src
  - src/csrc
  include-dirs: src/csrc
  ghc-options:
  - -shared
  - -fno-shared-implib
  - -no-hs-main
Completely removed the Main module (i.e., removed Main.hs that was automatically created by stack from the "app" folder)
I added the -dynamic flag to the ghc-options in the hope that GHC would assume the unresolved symbols are defined elsewhere, but this gave other problems: GHC now complains that it needs the "dyn" libraries of base, etc.
So, finally, I always end up with this:
PS C:\workspace\Haskell\hscs\src\csrc> stack build
hscs-0.1.0.0: configure (lib)
Configuring hscs-0.1.0.0...
hscs-0.1.0.0: build (lib)
Preprocessing library for hscs-0.1.0.0..
Building library for hscs-0.1.0.0..
Linking main.exe ...
.stack-work\dist\5c8418a7\build\HsLib.o:fake:(.text+0x541): undefined reference to `fn_called_from_hs'
collect2.exe: error: ld returned 1 exit status
`gcc.exe' failed in phase `Linker'. (Exit code: 1)
-- While building custom Setup.hs for package hscs-0.1.0.0 using:
C:\tools\HaskellStack\setup-exe-cache\x86_64-windows\Cabal-simple_Z6RU0evB_2.0.1.0_ghc-8.2.2.exe --builddir=.stack-work\dist\5c8418a7 build lib:hscs --ghc-options " -ddump-hi -ddump-to-file -fdiagnostics-color=always"
Process exited with code: ExitFailure 1
So, my questions are:
(1) I have absolutely no idea how to stop the linking of "main.exe"! I know that the function fn_called_from_hs() is not defined within the HsDLL, but, as I said, I am OK with that because it is defined in the managed C# code. I just want main.exe not to be built.
OR
(2) Should I go ahead with adding -dynamic flag to GHC (keeping all the other flags as above)? In this case, how do I get stack to install the "dyn" libraries that GHC is complaining about?
Can somebody help me? Thanks in advance for your patience in reading this (rather) long question!
And so finally, I managed to solve this myself! After a week of struggle, that is. Any helpful comments to add to this answer are welcome.
I did this as follows:
In C# class DLL:
I had to find a way to "export" my function fn_called_from_hs() to unmanaged native code. I found this is not really straightforward, and there are quite a few articles on the internet explaining how it is done. It all amounts to disassembling the .NET DLL via the tool ildasm, adding an ".export" prefix to the function we want to export in the generated intermediate IL file, and then assembling the IL file back into a DLL using ilasm.
I found that all these steps are automated by the NuGet package Unmanaged Exports, so the first step is to install this package as part of your .NET project and then add the DllExport attribute to the function to be exported. Make sure you have RGiesecke.DllExport in your list of imports:
using RGiesecke.DllExport;
using System.Runtime.InteropServices;   // for CallingConvention

[DllExport("fn_called_from_hs", CallingConvention = CallingConvention.Cdecl)]
public static string FnCalledFromHs()
{
    // Your function code here; it must return a string, e.g.:
    return "hello from C#";
}
As you can see, I have named the actual function as FnCalledFromHs() (in accordance with the naming convention in C#), but exported the same function as fn_called_from_hs (in accordance with the naming convention in Haskell). This way, when you look at the Haskell code, you will not see anything that looks out of place.
One of the most important steps for this to actually work is to make sure that the project exporting the function targets x64 or x86. By default, projects target "Any CPU", and RGiesecke.DllExport does not work if the project targets "Any CPU".
Now build the project to get the csharp.dll which contains your exported fn_called_from_hs.
Before linking Haskell code
MinGW GCC (which GHC on Windows uses internally) can actually link directly against DLLs, provided they were created with gcc in the first place. However, since we created our C# DLL using the .NET compiler csc, we need to explicitly create an import library that our Haskell build can see.
Two tools come to our aid: gendef and dlltool, both of which live in the "mingw\bin" folder of your GHC installation (so, of course, you need to have this folder in your PATH environment variable to access them).
Here's how I went about it:
Created a .def file which in turn can be used to create an import library:
gendef csharp.dll
Created an import library with dlltool:
dlltool -k -d csharp.def -l csharp.lib
Copied the above import lib to the same directory in which the DLL was present.
The last step (below) will use this import library to actually link against the csharp DLL.
Linking Haskell code with the above import library
This was a little trickier, and possibly made me hit a bug in stack / GHC (not sure), which I have already filed here.
I went about this as follows:
Added extra-lib-dirs in my stack.yaml, and added the directory in which the above import-lib was created:
extra-lib-dirs: ["<drive>:\\path\\to\\importlib"]
(Note that this could have also been added to your package.yaml under "libraries", but I chose to have it in my stack.yaml).
Added extra-libraries to my stack.yaml, under libraries.
extra-libraries: csharp
I also added the -l and -L options to my ghc-options for linking my library. I did this to circumvent the (possible) bug where stack somehow does not pass the extra-lib-dirs and extra-libraries to ghc and ld during linking. So, my final "library" section in package.yaml looks like this (compare it to how it was before in my question above):
library:
  source-dirs:
  - src
  - src/csrc
  include-dirs: src/csrc
  ghc-options:
  - -shared
  - -fno-shared-implib
  - -lcslib
  - -L<drive>:\\path\\to\\importlib
  extra-libraries: csharp
Conclusion
With all this done, my Haskell code now builds fine with the normal stack build command, without any "undefined reference" errors. On executing my Haskell code, I also checked that the C# function fn_called_from_hs was actually called and that the results were returned correctly.
Of course, there is more to this on the C# side: correct marshalling of parameters, etc., and I also had to work on those to get my results correct. The only place I can cover all of these nitty-gritty details is in a blog post :-)
Please feel free to cross-verify my solution, and also comment on any better way of doing this. This was the best way I could figure out after my struggles!
I'm using SCons for building. I encountered the following warning (when compiling some classes that are used in multiple build targets):
scons: warning: Two different environments were specified for target /home/stackuser/src/dsl/build/debug/common/LocalLog.o,
but they appear to have the same action: $CXX -o $TARGET -c $CXXFLAGS $CCFLAGS $_CCCOMCOM $SOURCES
So the accepted way around this warning is to use env.Object in the source list for common cpp files:
client_srcs = [
env.Object("../common/LocalLog.cpp"),
env.Object("../common/LogMsg.cpp"),
"LogWriter.cpp",
"QueueConsumer.cpp",
env.Object("../common/QueueStore.cpp"),
env.Object("../common/TimeFunctions.cpp")
]
However, when wrapping the common cpp files with env.Object like this, some targets don't build (linker errors when linking against Boost):
/usr/include/boost/system/error_code.hpp:208: undefined reference to `boost::system::get_system_category()'
/usr/include/boost/system/error_code.hpp:209: undefined reference to `boost::system::get_generic_category()'
/usr/include/boost/system/error_code.hpp:214: undefined reference to `boost::system::get_generic_category()'
/usr/include/boost/system/error_code.hpp:215: undefined reference to `boost::system::get_generic_category()'
This linker error is described here; to summarize the accepted answer:
When statically linking the linker expects that libraries will come
after the files containing references to them. You need to move your
.o files before your -l flags.
However, if I just remove the env.Object calls in the SConscript, I get these scons warnings, but compilation and linking are successful.
I'd just like to ignore these scons warnings; (how) can I turn them off?
If you take a quick look at the man page ( http://scons.org/doc/production/HTML/scons-man.html ), you'll find the "--warn=no-all" option...amongst a lot of other useful stuff.
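For example (a sketch; "no-duplicate-environment" is my reading of the warning-class naming in that man page, so double-check it against your SCons version):

scons --warn=no-duplicate-environment    # silence only this warning class
scons --warn=no-all                      # silence all SCons warnings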
Note, however, that switching off this warning is a bad idea in general, because it hints at flaws in your build description. You're telling SCons to build a file like "debug/common/LocalLog.o" from two (or more) different environments. This may work as long as the command lines used (and the environments, including all shell variable settings) are exactly the same, which is why SCons continues.
But usually you want to have one single way to build a specific target file.
There are three proper solutions to your dilemma (probably more, but these are the ones that came to my head immediately):
1.) Put the sources/objects that you want to use in multiple places into a separate lib and then link against that.
2.) Each time you compile the same *.cpp file, give the object file a different unique name (LocalLog_a.o, LocalLog_b.o, ...).
3.) Compile the source once (env.Object('LocalLog.cpp')), and then add the resulting object file to your list of sources for each program/library in question:
client_srcs = [
"../common/LocalLog.$OBJSUFFIX",
"../common/LogMsg.$OBJSUFFIX",
"LogWriter.cpp",
"QueueConsumer.cpp",
"../common/QueueStore.$OBJSUFFIX",
"../common/TimeFunctions.$OBJSUFFIX")
]
I've got the following situation:
Library X is a wrapper over some code in C.
Library A depends on library X.
Library B uses Template Haskell and depends on library A.
GHC bug #9010 makes it impossible to install library B using GHC 7.6. When TH is processed, GHCi fires up and tries to load library X, which fails with a message like
Loading package charsetdetect-ae-1.0 ... linking ... ghc:
~/.cabal/lib/x86_64-linux-ghc-7.6.3/charsetdetect-ae-1.0/
libHScharsetdetect-ae-1.0.a: unknown symbol `_ZTV15nsCharSetProber'
(the actual name of the “unknown symbol” differs from machine to machine).
Are there any workarounds for this problem (apart from “don't use Template Haskell”, of course)? Maybe library X has to be compiled differently, or there's some way to stop it from loading (as it shouldn't be called during code generation anyway)?
This is really one of the main reasons that 7.8 switched to dynamic GHCi by default. Rather than try to support every feature of every object file format, it builds dynamic libraries and lets the system dynamic loader handle them.
Try building with the g++ option -fno-weak. From the g++ man page:
-fno-weak
Do not use weak symbol support, even if it is provided by the linker. By default, G++ will use weak symbols if they are available. This option exists only for testing, and should not be used by end-users; it will result in inferior code and has no benefits. This option may be removed in a future release of G++.
There is another issue with __dso_handle. I found that you can at least get the library to load and apparently work by linking in a file which defines that symbol. I don't know whether this hack will cause anything to go wrong.
So in X.cabal add
if impl(ghc < 7.8)
  cc-options: -fno-weak
  c-sources: cbits/dso_handle.c
where cbits/dso_handle.c contains
void *__dso_handle;
When running Haskell programs that import several packages like this one:
import Text.Feed.Import
import Network.HTTP
main = do
page <- simpleHTTP (getRequest "http://stackoverflow.com")
print $ page
I get an error like this one (note: this question intends to address the general problem; this specific case is just an example):
GHCi runtime linker: fatal error: I found a duplicate definition for symbol get_current_timezone_seconds
whilst processing object file
/usr/lib/ghc/time-1.4.0.1/HStime-1.4.0.1.o
This could be caused by:
* Loading two different object files which export the same symbol
* Specifying the same object file twice on the GHCi command line
* An incorrect `package.conf' entry, causing some object to be
loaded twice.
GHCi cannot safely continue in this situation. Exiting now. Sorry
Reinstalling the packages (e.g. HTTP and feed in the above case) as described in this previous post doesn't help. How can I resolve this issue?
Why this error occurs
This issue is not specific to a single package (e.g. it was described here in relation to Yesod three years ago), but is caused by the different libraries you import (e.g. HTTP and feed) linking against different versions of a single library. (This only happens with libraries that export C-style symbols, whose symbol names are not unique; time is one such package.)
As denoted in the error message, the library that causes the issues in this specific case is time-1.4.0.1.
Diagnosing the exact problem
First, you need to identify which versions of the library are installed. You can do this by checking the packages using ghc-pkg describe <packagename>, or just take a look into your cabal installation directory (usually ~/.cabal/lib).
At the time of writing this, the issue was caused by both time-1.4.0.1 and time-1.4.1 being installed. By using ghc-pkg describe I figured out that feed (and only feed, in my case) linked to time-1.4.1, whereas about 100 libraries linked to time-1.4.0.1.
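As a concrete sketch (the package names are just the ones from this example, and the grep is merely a convenience assuming a Unix-like shell):

$ ghc-pkg list time                    # every installed version of time
$ ghc-pkg describe feed | grep time-   # which time version feed links against
$ ghc-pkg describe HTTP | grep time-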
How to resolve
As described above, identify the version of the offending library (the one named in the error message) that fewer packages depend on. You'll need to rebuild all packages that depend on it. In my case this is time-1.4.1.
Then, uninstall the package:
$ ghc-pkg unregister time-1.4.1 --force
unregistering time-1.4.1 would break the following packages: feed-0.3.9.2 (ignoring)
Note that the feed package is now broken and needs to be rebuilt and reinstalled. After rebuilding, however, it won't link against time-1.4.1 but against time-1.4.0.1 (in my specific case). This re-linking resolves the duplicate symbol problem.
$ cabal install feed
If the error still occurs after that, re-check all dependencies as described above. You need to make sure that every library you import shows the same version of the problematic library when analyzed with ghc-pkg describe <pkg>.
Update: In order to find out which packages depend on the problematic library, simply use ghc-pkg unregister without the --force flag (thanks to John J. Camilleri for pointing that out!). Note that if no packages depend on said problematic package, it will be removed.
An alternative cause of the same problem is the use of common symbols in an external library on Windows. I have this issue with a Fortran code base that uses common symbols. It is better explained here: https://gitlab.haskell.org/ghc/ghc/-/issues/6107
This only happens with dynamic linking, so ghc works, but ghci does not.
I'm trying to compile some code in Real World Haskell - Chapter 24. LineCount.hs.
I have not made any changes to the code.
However, when I do:
ghc -O2 --make -threaded LineCount.hs
(as instructed in the book), I get the message:
MapReduce.hs:6:7: Not in scope: `rnf'
What might I be doing wrong?
A quick search showed that there was some trouble with the packages parallel and strict-concurrency in the past, and that reinstalling them would fix the issue. However, I tried that and it didn't work. Moreover, it is noted there that the issue was fixed sometime in 2010:
https://groups.google.com/forum/?fromgroups=#!msg/happs/gOieP4xfpNc/nrasm842JlUJ
Note: I get various other errors when compiling other files in the same chapter. For example, on compiling Strat.hs I get: Module `Control.Parallel.Strategies' does not export `parZipWith'. On compiling LineChunks.hs I get: Module `Control.Parallel.Strategies' does not export `rnf'.
Honestly, as a novice Haskell programmer I expected to run into trouble once I started modifying code - but I didn't expect to have trouble with code from a book!
The function is no longer called rnf. It's called rdeepseq now. Just replace it. :)
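For example, a minimal sketch (not the book's code) showing rdeepseq used as a Strategy where rnf used to go:

import Control.Parallel.Strategies (parMap, rdeepseq)

-- rdeepseq is the drop-in replacement for the old rnf strategy:
-- both mean "evaluate to normal form"
sumLengths :: [String] -> Int
sumLengths xs = sum (parMap rdeepseq length xs)

main :: IO ()
main = print (sumLengths ["one", "two", "three"])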
You can find the contents of the parallel package online by googling "control parallel strategies hackage", or clicking here.