Typechecking multiple 'Main's - haskell

I have a Haskell library with several executables (tests, benchmarks, etc), in total about six. When I do some refactoring in the library, I usually need to make some small change to each of the executables.
In my current workflow, I separately compile each executable (say, with GHCi) and fix each one up. This is tedious because I have to type out the path to each executable, and moreover have to reload all of the (very large) library, which even with GHCi takes some time.
My first thought to solve this issue was to create a single dummy module that imports the executable "Main" modules. However, this (of course) requires that the "Main" modules have a module name like module Executable1 where .... But now cabal complains when compiling the executable that it can't find a module called "Main" (despite "main-is" being explicitly listed in the cabal file for each executable).
I also tried ghci Exec1.hs Exec2.hs ..., but it complains module ‘main#main:Main’ is defined in multiple files.
Is there an easy way to load multiple "Main" modules at once with GHCi so I can typecheck them simultaneously?

Cabal’s main-is option only tells Cabal what filename it should pass to GHC; Cabal does not care about its module name.
GHC itself has a flag, also called -main-is, documented here, which tells the compiler which module contains the main function.
So this works:
executable foo
  main-is: Foo.hs
  ghc-options: -main-is Foo
Of course Foo.hs should start with module Foo where… and export main. As usual, the module name and file name need to match.
This way, all executables can have different module names and you can load them all in GHCi.
If you also want to change the name of the main function, write ghc-options: -main-is Foo.fooMain. I would guess you could even have all executables use the same module but different main functions this way.
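As a concrete sketch (the names exec1/exec2, Exec1/Exec2, and your-library are placeholders, not taken from your project), the executable stanzas might look like this:

executable exec1
  main-is:       Exec1.hs
  ghc-options:   -main-is Exec1
  build-depends: base, your-library

executable exec2
  main-is:       Exec2.hs
  ghc-options:   -main-is Exec2
  build-depends: base, your-library

With Exec1.hs starting with module Exec1 where and Exec2.hs with module Exec2 where, the two files no longer both claim the module name Main, so ghci Exec1.hs Exec2.hs should load and typecheck them together.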

Related

Cabal update now can't load any modules from "hidden packages"

I've been working on a project and recently I did a cabal update.
I usually roll into ghci like:
$ ghci -package-db ~/.cabal/store/ghc-8.10.7/package.db
After the update, loading modules in my project fails: even basic Haskell modules like System.Random or MonadIO can no longer be found. Trying to load my own module, called ProcessIO, gives errors like the following:
ProcessIO.hs:50:1: error:
    Could not load module ‘Data.IORef.MonadIO’
    It is a member of the hidden package ‘monadIO-0.11.1.0’.
    You can run ‘:set -package monadIO’ to expose it.
    (Note: this unloads all the modules in the current scope.)
    Locations searched:
      Data/IORef/MonadIO.hs
      Data/IORef/MonadIO.lhs
      Data/IORef/MonadIO.hsig
      Data/IORef/MonadIO.lhsig
I checked whether the build-depends versions in the .cabal file might have been altered, but the cabal package.db directory contains all the right versions of the dependencies from the .cabal file. For example, the error above complains about monadIO-0.11.1.0 being hidden, yet in package.db/ we see that the right version exists:
monadIO-0.11.1.0-0aec75273f3fef94783e211a1933f8ac923485a963be3b6a61995d4a88dd1135.conf
I should say I haven't looked at the package.db files before, because everything simply worked, so there may be something telling about the .conf file name that signals that something is wrong.
Either way, I can't build anything and I need some help!
EDIT: posting my default environments file ~/.ghc/x86_64-linux-8.10.7/environments/default in case it matters:
clear-package-db
global-package-db
package-db /home/surya/.cabal/store/ghc-8.10.7/package.db
package-id ghc-8.10.7
package-id bytestring-0.10.12.0
...
(Let me know if I need to share more of it... or less)

What is the preferred way to write quick Haskell test programs that depend on Stack libraries in local directories?

I have a Haskell library that I am developing using Stack. As I am developing the library, I like to write small test/experimentation programs that use the library. I keep a collection of these test programs for myself in a directory locally. These test modules are very quick and informal, and not appropriate to include as unit tests in the committed library code. Typically, most of them aren't even maintained and won't compile against the latest version of the library, but I keep them around in case I want to update them later. When I'm working on a test program, I want it to build against my working copy of the library, with any changes that I've made to the library locally.
How should I set up my Stack build environment for this situation? Here are some options I've tried, and the problems with each option.
Two Cabal packages, one Stack configuration. The stack.yaml file lists both packages and defines the build environment for both at once.
Problem: The stack.yaml file needs to be included as part of the committed library source code, so that other developers can build the library from source reproducibly. I don't want the public stack.yaml file for my library to include build information for my local test projects.
Problem: As far as I know, to make this work I need a .cabal file that lists all the executables and modules for my test programs. This is annoying to update whenever I want to throw together a quick experimental script, and a single module that doesn't compile will prevent any of the test programs from building. I can't have a .cabal file with no sections, because Cabal gives "No executables, libraries, tests, or benchmarks found. Nothing to do.", and because that would offer nowhere to list build-depends.
Create a Cabal sandbox for the test programs. Use cabal sandbox add-source to add the local library as a package. See also this answer.
Problem: Using Cabal sandboxes instead of Stack reintroduces a lot of the dependency problems that Stack is supposed to fix, such as using the system-global GHC instead of the GHC defined by the resolver.
Have a separate stack.yaml for the test programs. Add the library under packages as location: 'C:\Path\To\Local\Library' and set extra-dep: true for that dependency. (See here for more info on this feature.) Don't put any other Cabal packages under packages in the stack.yaml for the test programs. Use stack runghc to invoke test programs within the scope of their stack.yaml.
Problem: I just can't get this one to work. Running stack build inside the test program directory gives "Error parsing targets: The project contains no local packages (packages not marked with 'extra-dep')". Running stack runghc acts as if no dependencies are present at all. I don't want to add a Cabal package for the test programs because this has the same problem as option 1 with needing to construct an explicit .cabal file describing the modules to build.
Problem: Stack build configuration info that I want to be identical between the library and the test programs has to be copied manually. For example, if I change the resolver in my library's stack.yaml, I also need to change it in the stack.yaml for my test programs separately.
Have a directory inside my working copy of the library that contains all of my test programs. Use stack runghc to invoke test programs in the context of the library.
Problem: I'd like the directory with my test programs to be outside of the directory containing my library source code and build configuration, so that I don't have to tell the version control for my library to ignore my test code, and can have my own local version control just for the test programs.
Problem: Only works with a single local library dependency. If my test programs need to depend on local working copies of two different libraries with their own stack.yaml files, I'm out of luck.
Add a symbolic link inside my working copy of the library to a separate directory that contains all of my test programs. Navigate through the symlink and use stack runghc to invoke test programs in the context of the library.
Problem: Super awkward to use, especially since I'm on Windows and Windows has terrible symlink support.
Problem: Still need to tell my version control system to ignore the symlink.
Problem: Still only works with a single library dependency.
If only one local library is involved, I use option 4. You can put your tests outside the directory of your library, and either invoke stack from the directory of your library or pass --stack-yaml path/to/library/stack.yaml.
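For example, running a quick test program against the library's build configuration from outside its directory might look like this (the path and file name are just placeholders):

stack --stack-yaml path/to/library/stack.yaml runghc Test1.hs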
Otherwise, I use option 3, creating a separate stack project without setting extra-dep.
...
packages:
- 'path/to/package1'
- 'path/to/package2'
...
I can't think of a good workaround for the issue of configuration duplication. There would otherwise be conflicts if multiple packages specified different resolvers/package versions.
Edit: Actually a stub library works better, so edited to add.
I think the way to get #3 to work is, under your scratch program directory, to (1) add . under packages in stack.yaml alongside the location/extra-dep: true package:
packages:
- '.'
- location: ../mylib
  extra-dep: true
(2) create an executable clause in scratch.cabal that points to a stub main program (i.e., a "Hello World" program that compiles but need not do anything) which depends on your library:
executable main
  hs-source-dirs:   src
  main-is:          Stub.hs
  build-depends:    base
                  , mylib
  default-language: Haskell2010
or a library clause with no exposed modules, again that depends on your mylib library:
library
  hs-source-dirs:   src
  build-depends:    base >= 4.7 && < 5
                  , mylib
  default-language: Haskell2010
and (3) run stack build in the scratch directory. This should build and register mylib, and now stack runghc Prog1.hs should work fine for running programs that depend on mylib modules.
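In case it helps, the Stub.hs referred to in step (2) only needs to compile; a minimal sketch would be:

-- src/Stub.hs: a placeholder Main so the executable clause has something to build
module Main where

main :: IO ()
main = putStrLn "stub"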
If you use the executable approach, the stub program is compiled as a side effect but otherwise ignored. If you use the library approach, it looks like the stub library isn't even built; you also then have the option of turning it into a real scratch library by adding some exposed modules of shared code for your test programs to use, if that's convenient, so the stub library might be best.
None of this solves the problem of keeping stack.yaml info like the resolver version in sync, but it seems to address all the problems you list in 1, 2, 4, and 5. In particular, it should work fine for test programs that depend on multiple local libraries you're developing.
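For the multi-library case, the packages section of the scratch project's stack.yaml would just list every working copy you want to build against (the paths below are placeholders):

packages:
- '.'
- location: ../mylib1
  extra-dep: true
- location: ../mylib2
  extra-dep: true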

Flexibility of the hierarchy of module sources allowed in cabal project

I have a project with source tree:
src/
src/A/
src/A/A.hs
src/B/
src/B/C/
src/B/C/C.hs
...
The two Haskell files divide the source code into modules:
-- File A/A.hs
module A where
...
and
-- File B/C/C.hs
module B.C where
...
The cabal file contains:
other-modules: A, B.C, ...
hs-source-dirs: src/, src/A/, src/B/, src/B/C/, ...
But while the module A can be easily found, cabal complains about B.C:
cabal: can't find source for B/C in ...
I see no rational explanation why placing a file defining module A under A/A.hs is OK but placing B.C under B/C/C.hs isn't. Is there a workaround other than placing C.hs directly under B (I would like to maintain some separation of sources)?
The reason for the error is that module B.C should be defined in the file B/C.hs, not B/C/C.hs (that would be module B.C.C). This error would have appeared even with a single source dir containing a single source file; it is not caused by the extra entries you have added.
Also, each dir listed in hs-source-dirs should be the root of a module tree, so it is doubtful that you need all of the entries you put in. For instance, listing src/B/C treats src/B/C as another root, meaning you could define top-level modules directly in that dir; if you are actually doing that, I would consider it a mistake.
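In other words, with a single root src/, the layout the compiler expects for your modules would be:

src/A.hs      -- module A
src/B/C.hs    -- module B.C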
What you probably want to do is define multiple top-level source dirs, like this:
A_src/A.hs
B_src/B/C.hs
hs-source-dirs: A_src, B_src
Even better, I would suggest you use Stack, which lets you split the project into separate packages, each with its own source dir (called src) and its own .cabal file, allowing for richer dependencies between the parts.

How to tell HSpec where to look for the source files to be tested

I'm new to Haskell and I wanted to add tests to my first project. I chose HSpec for this. My only spec file doesn't contain anything special so far: I just copied the example from the HSpec website and added import statements for my own modules to be tested. When I try to run it via runhaskell test/XSpec.hs, it complains that it "could not find module X". How do I tell it which load paths to look into before complaining?
Adding -isrc helped, so the call looks like this:
runhaskell -isrc test/Spec.hs
Additionally, it is important to note that a module's file name should match the module name, including case; i.e., the file for module Foo should be named Foo.hs.
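For a concrete picture, a spec that imports a module from src/ might look roughly like this (the module X and its function f are placeholders for whatever you're testing):

-- test/XSpec.hs
module Main where

import Test.Hspec
import X (f)  -- X lives in src/X.hs, hence the -isrc flag

main :: IO ()
main = hspec $
  describe "X.f" $
    it "returns its argument unchanged" $
      f (42 :: Int) `shouldBe` (42 :: Int)

which would then be run with runhaskell -isrc test/XSpec.hs.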

Different behavior of cabal repl for library vs. executable

Using cabal repl seems to do nothing at all when used on library projects, but works fine for executable projects. Is this expected behavior that I just don't understand?
If I have a file containing simply
go = putStrLn "test"
and use cabal init with all the defaults (but choose "library" as the type), then running cabal repl just prints some text about configuring and preprocessing the library and never enters a REPL environment. The exact same steps, but with "executable" selected as the type, put me right into GHCi as expected.
The code works fine when loaded directly into GHCi.
For cabal repl to load your modules, you have to first name them in code and then specify them in your project's .cabal file as exposed:
-- MyModule.hs
module MyModule where
go = putStrLn "test"
-- MyProject.cabal
name: MyProject
-- other info ...

library
  exposed-modules: MyModule
  -- other options ...
Then when you run cabal repl, it'll have access to everything in your sandbox (if present) and the exposed modules. It might also work if you specify them as other-modules instead of exposed-modules, but I haven't tried that one out.
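Once the module is exposed, a session might look roughly like this (configure output elided; the exact prompt depends on your GHC version):

$ cabal repl
...
*MyModule> go
test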
