Yesod build error because of the duplicate definition for symbol "hsprimitive_memcpy" - haskell

I followed the Yesod quick start guide to install Yesod on Windows 10.
But when I ran the stack build command, it failed.
Environment:
Windows 10 (64-bit)
stack-0.1.5 (for Windows 10, 64-bit)
Haskell Platform 7.10.2-a (from HaskellPlatform-7.10.2-a-x86_64-setup.exe)
The error, from alex-3.1.4.log:
GHC runtime linker: fatal error: I found a duplicate definition for symbol
hsprimitive_memcpy
whilst processing object file
C:\Users\xxxxx\AppData\Roaming\stack\snapshots\x86_64-windows\lts-3.8\7.10.2\lib\x86_64-windows-ghc-7.10.2\primitive-0.6.1.0-5Jnw7oEuYtM9dmKXelGXVb\HSprimitive-0.6.1.0-5Jnw7oEuYtM9dmKXelGXVb.o
This could be caused by:
* Loading two different object files which export the same symbol
* Specifying the same object file twice on the GHCi command line
* An incorrect `package.conf' entry, causing some object to be
loaded twice.
ghc: panic! (the 'impossible' happened)
(GHC version 7.10.2 for x86_64-unknown-mingw32):
loadObj "C:\\Users\\xxxxx\\AppData\\Roaming\\stack\\snapshots\\x86_64-windows\\lts-3.8\\7.10.2\\lib\\x86_64-windows-ghc-7.10.2\\primitive-0.6.1.0-5Jnw7oEuYtM9dmKXelGXVb\\HSprimitive-0.6.1.0-5Jnw7oEuYtM9dmKXelGXVb.o": failed

It seems that the cause of the error was a duplicate GHC installation.
Thanks to Reid's comment, I realized that I had installed the Haskell Platform from the exe file earlier, and had also installed GHC through stack by following the guide.
I uninstalled the Haskell Platform's GHC and ran 'stack setup'.
Then I ran 'stack build', and it seemed to work.
I still have problems with the 'stack build' command, but this particular issue is solved.
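For reference, the sequence after uninstalling the Haskell Platform's GHC was roughly this (the ghc --version check is just my own way of confirming that stack's GHC, not a leftover installation, is the one being picked up):

$ stack setup
$ stack exec -- ghc --version
$ stack build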

Related

cabal install fails with "arithmetic overflow"

I saw a possible solution for a UTF-8 problem here: Read file with UTF-8 in Haskell as IO String. I wanted to try it out, but I'm running into a problem I can't resolve.
When I run the command cabal v2-install encoding --lib, almost everything works, but it fails at the end with these lines:
[8 of 8] Compiling Main ( /tmp/cabal-install.-169090/dist-newstyle/tmp/src-169090/encoding-0.8.5/dist/setup/setup.hs, /tmp/cabal-install.-169090/dist-newstyle/tmp/src-169090/encoding-0.8.5/dist/setup/Main.o )
Linking /tmp/cabal-install.-169090/dist-newstyle/tmp/src-169090/encoding-0.8.5/dist/setup/setup ...
Configuring encoding-0.8.5...
Preprocessing library for encoding-0.8.5..
arithmetic overflow
cabal: Failed to build encoding-0.8.5. See the build log above for details.
If I add --verbose=3 to the command line, the last few output lines are:
creating dist/build/Data
creating dist/build/Data/Encoding
Data/Encoding/ISO88592.hs generated from mapping
Data/Encoding/ISO88592.mapping
arithmetic overflow
CallStack (from HasCallStack):
   die', called at ./Distribution/Client/ProjectOrchestration.hs:1041:55 in main:Distribution.Client.ProjectOrchestration
cabal: Failed to build
encoding-0.8.5-aa69e7dd952ebb6bcbe7b0947ad7f87838ecbfac327d0aa020c7f7f0f19b3e18.
I'm using cabal 3.2 and GHC 8.10.2 under Linux Mint 20.
I've looked "all over the place" for a solution, and the only trace of something similar is that the error is confirmed in Gentoo's Bugzilla.
Any help is appreciated!
This is apparently a bug in the encoding library (I could reproduce it), and there's a fix available as a PR on the source repository:
https://github.com/dmwit/encoding/pull/11
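As an aside, if the underlying goal is just reading a UTF-8 file as an IO String, that usually doesn't need the encoding package at all. A minimal sketch using only base (the file name is made up):

import System.IO (IOMode (ReadMode), hGetContents, hSetEncoding, openFile, utf8)

-- Read a file as UTF-8 regardless of the system locale.
readUtf8File :: FilePath -> IO String
readUtf8File path = do
  h <- openFile path ReadMode
  hSetEncoding h utf8      -- force the handle's encoding to UTF-8
  hGetContents h           -- lazy; the handle is closed once fully read

main :: IO ()
main = readUtf8File "example.txt" >>= putStr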

ghci gives GHC panic when using foreign C calls

I'm trying to use ghci / stack repl on a project where one module has foreign calls linked to a C library, tdsodbc, but I keep getting:
ghc: panic! (the 'impossible' happened)
(GHC version 7.10.3 for x86_64-unknown-linux):
Loading temp shared object failed: /tmp/ghc4628_0/libghc_71.so: undefined symbol: SQLPrepareW
(where SQLPrepareW is defined in that C lib). Building with stack works fine. This happens even on other modules that just happen to import the foreign-calling module, even without actually calling the foreign functions. It doesn't happen on load, but as soon as I try to fully evaluate any function in the repl.
How can I tell ghci that some of the functions are defined in libs outside of ghc?
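For context, the foreign-calling module looks roughly like the sketch below; the module name and the exact ODBC types here are simplified for illustration, not my real code:

{-# LANGUAGE ForeignFunctionInterface #-}
module Database.TDS.FFI where

import Data.Word (Word16)
import Foreign.C.Types (CLong (..), CShort (..))
import Foreign.Ptr (Ptr)

-- SQLPrepareW(statementHandle, statementText, textLength)
foreign import ccall unsafe "SQLPrepareW"
  c_SQLPrepareW :: Ptr () -> Ptr Word16 -> CLong -> IO CShort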
I've tried the -l option (e.g. stack exec ghci -- -ltdsodbc), but the only difference then is that a different function from the same lib is in the error message:
ghc: panic! (the 'impossible' happened)
(GHC version 7.10.3 for x86_64-unknown-linux):
Loading temp shared object failed: /tmp/ghc24107_0/libghc_25.so: undefined symbol: SQLDriverConnectW
Note that it's obviously checking for the lib when using -l, since if I misspell it, it'll say it can't find it:
$ stack exec ghci -- -L/usr/lib/x86_64-linux-gnu/odbc -ltdsodbctypo
Warning (added by new or init): Specified resolver could not satisfy all dependencies. Some external packages have been added as dependencies.
You can suppress this message by removing it from stack.yaml
GHCi, version 7.10.3: http://www.haskell.org/ghc/ :? for help
<command line>: user specified .o/.so/.DLL could not be loaded (libtdsodbctypo.so: cannot open shared object file: No such file or directory)
Whilst trying to load: (dynamic) tdsodbctypo
Additional directories searched: /usr/lib/x86_64-linux-gnu/odbc
This is with
$ stack --version
Version 1.4.0, Git revision e714f1dd3fade19496d91bd6a017e435a96a6bcd (4640 commits) x86_64 hpack-0.17.0
I've also tried stack ghci --ghci-options '-ltdsodbc -fobject-code', but it also panics with undefined symbol: SQLPrepareW.
The nice folks in #haskell on freenode said maybe I should try passing -fobject-code to ghci. That didn't work. I tried :set and :seti to see if it was already set, but ghci didn't show anything about object code. (Doing :unset -fobject-code just gave Some flags have not been recognized: -fno-object-code.)
Then today I happened to look at my ~/.ghci for some other reason, and that did have :set -fobject-code, even though :set/:seti doesn't show that. Removing :set -fobject-code from my ~/.ghci took away the panic attacks, and I can now use functions from modules that import the module that defines foreign functions :)
Actually calling any of the foreign functions from ghci leads to a segfault (catchsegv log for the interested), but at least I can test the pure stuff now …
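One more idea, a guess on my part rather than something confirmed above: declaring the C library in the .cabal file should make both stack build and stack ghci link against it, e.g.

library
  -- ... existing fields (exposed-modules, build-depends, ...) ...
  extra-lib-dirs:  /usr/lib/x86_64-linux-gnu/odbc
  extra-libraries: tdsodbc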

Stack setup fails with package installation errors

I'm trying to get stack running for the first time, but running stack setup in an example project (from stack new helloworld new-template) fails with the following output (I skipped the beginning, which I think was normal):
Installing library in
/home/ajl/.stack/programs/x86_64-linux/ghc-7.10.2/lib/ghc7.10.2/ghc_JzwEp1oQ8kA7NFNTGk1ho5
"/home/ajl/.stack/programs/x86_64-linux/ghc-7.10.2/lib/ghc-7.10.2/bin/ghc-pkg" --force --global-package-db "/home/ajl/.stack/programs/x86_64-linux/ghc-7.10.2/lib/ghc-7.10.2/package.conf.d" update rts/dist/package.conf.install
Reading package info from "rts/dist/package.conf.install" ... done.
: Warning: Unrecognized field 420 on line 420
(I've skipped the same "Unrecognized field" warning repeated for every line from 419 down to 1)
: Warning: Unrecognized field 1 on line 1
: missing id field
: invalid package identifier:
: invalid package key:
make[1]: *** [install_packages] Error 1
make: *** [install] Error 2
Installing GHC ...%
I'm on Ubuntu 14.04, running stack 1.0.2. Not sure if it's relevant, but I already have GHC 7.10.1 with Cabal 1.23.0.0 installed on the system, and those work fine.
I have tried changing the resolver to older LTS versions with older GHC versions, and I also tried deleting ~/.stack. Not sure what else to try, given how unhelpful the errors are.
I figured it out. The GHC build uses grep to generate package.conf.install, and I have GREP_OPTIONS="--color=auto -n" set in my zsh config. The -n was putting line numbers in front of every line of that file, which was causing the errors.
The reason I couldn't find package.conf.install anywhere before is that it is generated on the fly during GHC's make, and stack does that in /tmp.
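So the fix is simply to keep -n out of grep's environment while stack builds GHC, roughly like this (adjust to wherever GREP_OPTIONS is set in your shell config):

$ export GREP_OPTIONS="--color=auto"   # drop the -n
$ stack setup

or, as a one-off without touching the shell config:

$ env -u GREP_OPTIONS stack setup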

ghc-modi and cabal version

(Haskell newbie here)
I'm trying to configure the HaskForce plugin in IntelliJ IDEA, with "ghc-mod" set up to use "legacy-interactive" in the "GHC Modi" flags. The root problem seems to be related to the cabal version, but when I try autocompleting on any Haskell symbol, I get this:
ghc-modi error
Unable to parse problems from ghc-modi: cabal-helper-wrapper.exe: ghc: readCreateProcess: does not exist (No such file or directory)
ghc-mod: readCreateProcess: C:\ACME\projects\htest\.cabal-sandbox\cabal-helper-0.5.3.0-553kah86RQN6BuDX6XLBiX\cabal-helper-wrapper.exe "C:\\ACME\\projects\\htest" "C:\\ACME\\projects\\htest\\dist" (exit 1): failed
When I run this last command (C:\ACME\projects\htest\.cabal-sandbox\cabal-helper-0.5.3.0-553kah86RQN6BuDX6XLBiX\cabal-helper-wrapper.exe "C:\\ACME\\projects\\htest" "C:\\ACME\\projects\\htest\\dist"), it tries to install cabal 1.18:
cabal-helper-wrapper.exe: Installing Cabal version 1.18.1.3 failed.
I already have cabal, version 1.22 (installed via Haskell Platform 7.10.2-a, released this August).
Is there any way to work around this issue (i.e. still use ghc-mod / ghc-modi)?
The updated HaskForce plugin (0.3-beta24) seems to work correctly with the following ghc-mod:
ghc-mod version 5.4.0.0 compiled by GHC 7.10.2
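For anyone checking their own setup: upgrading ghc-mod and confirming the version should, if I remember the commands right, look something like this (double-check against the ghc-mod docs):

$ cabal install ghc-mod
$ ghc-mod version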

Haskell Plugins and cabal sandbox

So, I'm trying to use the Plugins package to dynamically load a Haskell function from a source file. The source file depends on a package foo with a module Foo.Bar. I'm running my project in a Cabal sandbox, where I have foo installed. Both my main program and the module I'm loading with plugins depend on foo. I always get one of the following two errors:
When I have foo installed in ~/.cabal, I get the error:
GHCi runtime linker: fatal error: I found a duplicate definition for symbol
aizmvszmaizmlibzm0zi1_FooziBar_zdfTypeableBazzuds2_closure
whilst processing object file
/home/joey/.cabal/lib/foo-0.1/ghc-7.6.3/HSfoo-0.1.o
This could be caused by:
* Loading two different object files which export the same symbol
* Specifying the same object file twice on the GHCi command line
* An incorrect `package.conf' entry, causing some object to be
loaded twice.
GHCi cannot safely continue in this situation. Exiting now. Sorry.
When I don't have it installed in ~/.cabal, I get a standard "module not found" error. And when I don't have it installed in my sandbox, I get the same module not found error trying to compile my main program code.
The plugins documentation is scarce at best. Any thoughts on how to solve this?
I got this working by using System.Plugins.Make to actually do the compilation, instead of relying on pre-existing object files. It's not a complete solution and doesn't explain the problem, but it works for me for now.
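Roughly what that looks like, as a minimal sketch assuming the plugins package's make/load_ API; the plugin path, the symbol name "transform", and its String -> String type are all made up for illustration:

module Main (main) where

import System.Plugins.Load (LoadStatus (..), load_)
import System.Plugins.Make (MakeStatus (..), make)

main :: IO ()
main = do
  -- Compile the plugin source on the fly instead of loading a
  -- pre-built object file from the sandbox.
  status <- make "plugins/Transform.hs" []
  case status of
    MakeFailure errs      -> mapM_ putStrLn errs
    MakeSuccess _ objPath -> do
      -- Load the named symbol from the freshly built object file.
      result <- load_ objPath [] "transform"
      case result of
        LoadFailure errs        -> mapM_ putStrLn errs
        LoadSuccess _ transform -> putStrLn (transform "hello")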
