Haskell Criterion: Could not load module, hidden package

When I do exactly what this Criterion tutorial says to do to get started, I get an error. What am I doing wrong? Is the tutorial wrong? If so, is there a place that I can learn the right way to use Criterion?
Specifically, as the tutorial says, I ran the following in command line:
cabal update
cabal install -j --disable-tests criterion
This ran without error. Then I copied exactly the example program in the tutorial:
import Criterion.Main

-- The function we're benchmarking.
fib m | m < 0     = error "negative!"
      | otherwise = go m
  where
    go 0 = 0
    go 1 = 1
    go n = go (n-1) + go (n-2)

-- Our benchmark harness.
main = defaultMain [
  bgroup "fib" [ bench "1"  $ whnf fib 1
               , bench "5"  $ whnf fib 5
               , bench "9"  $ whnf fib 9
               , bench "11" $ whnf fib 11
               ]
  ]
I put that into a file called benchTest.hs, and then I used the command line to compile the program exactly as the tutorial says, but with benchTest in place of Fibber, which is what they called it. Specifically, I ran the following in the command line:
ghc -O --make benchTest
This resulted in this error:
benchTest.hs:1:1: error:
Could not load module `Criterion.Main'
It is a member of the hidden package `criterion-1.5.13.0'.
You can run `:set -package criterion' to expose it.
(Note: this unloads all the modules in the current scope.)
Use -v (or `:set -v` in ghci) to see a list of the files searched for.
|
1 | import Criterion.Main
| ^^^^^^^^^^^^^^^^^^^^^

Short history of Cabal evolution
Over its history, Cabal has gone through a big transformation in how it
works, commonly referred to as v1- commands vs. v2- commands; e.g.
since Cabal 2 you can say cabal v1-install or cabal v2-install. What happens
when you say just cabal install depends on the Cabal version: Cabal 2 uses
v1-install by default, while Cabal 3 uses v2-install. The change in
defaults reflects the preferred mode of operation, so much so that v1 has become
basically unmaintained. I don't expect it to be removed soon, because there is a
group of staunch proponents of the old way. But personally I think that, first, the
new way (fun fact: you can use cabal new-install as a synonym) is technically
superior, and second, that newcomers should just use it because it's better
documented and you'll have more luck getting help with it (in many cases,
it's easier to help precisely because of the above-mentioned superiority).
Why v1 was subsumed by v2 (in a nutshell)
The main trouble you can run into with v1 is incompatible dependencies across
several projects. Imagine you work on project A, which depends on package X
at version 42, and, at the same time, you're starting on project B, which
also depends on X, but at version 43. Guess what: you can't v1-build the
two projects on the same machine without wiping out cabal's cache in between.
This was the way it worked in the dark ages (from the mid-2000s to the early
2010s).
After that, cabal sandboxes arrived. They allowed you to build our imaginary
projects A and B with less hassle, but the interface was not great and, more
importantly, every sandbox was independent and therefore held a big chunk of
duplicated binaries; e.g. A and B could both also depend on Y at the same
version 13, so there's theoretically no need to build and store Y twice,
but that's exactly what cabal sandboxes would do.
Cabal v2 arrived in the late 2010s and brought exactly that: isolation between
projects via a (then also recent) GHC feature called environment files, and sharing
of build artifacts via the Cabal store (so that you don't store many copies of the same thing).
Environments and v2-install
You can create a separate environment for
every project (A, B, etc.) by running
cabal v2-install --lib X-42 --package-env=.
in the directory of the respective project. A couple of notes on the syntax:
v2- can be omitted in Cabal 3 because it's the default;
The order of flags is not important as long as install goes right
after cabal;
--lib is important, because by default you get only executables (that's
what happens with criterion: the package also holds an executable);
--package-env=. means: create a GHC environment file in the current
directory (hence the .). If the command succeeds, you will notice a new
("hidden" on Linux) file in the current directory, named something like
.ghc.environment.x86_64-linux-9.0.2. This is the file that tells all
subsequent calls to GHC in this directory where to search for the libraries compiled by Cabal
and stored in… the Cabal store (on Linux, the ~/.cabal/store directory by
default). In principle, you can use values other than . for environments,
and if the value doesn't correspond to a path, it will be a named
environment. There are more details in the Cabal reference manual… In practice, I find
99.99% of cases perfectly served by --package-env=.;
X-42 means that package X at version 42 should be added to the newly
created environment. You can omit the version (you will get "some compatible
version"), and you can list more than one package.
What cabal v2-install --lib means if no environment is specified
It means the default environment. There is a single shared environment called
default. It has the very same problem that v1 had (see above). So, in
practice it could work, but it will be very fragile, especially if you get into
the "project A and project B" situation described above. Even if you only
work with one project now, I suggest using --package-env because it's
future-proof.
Why the initial error
As you say, you were using Cabal 2, and therefore v1-install, initially, and saw
the dreaded "hidden package" error. What's the reason for it? Honestly, I
have no idea, and I doubt it's easy to figure out without rolling back to that older
Cabal version and experimenting more. As I say above, v1 is not really maintained
anymore, and even if this is a bug in Cabal (which is perfectly possible,
especially with the earlier releases in the Cabal 2 line), probably no one will
bother fixing it.
Isn't it sad that old tutorials don't work anymore?
It is. Unfortunately, software technology has to evolve to make the world a
better place to live (see the reasons for v2 above again). Sometimes this
development has to break backward compatibility. Ideally, we'd go and update all
educational materials and manuals to reflect the change, but that's hardly
possible. Sigh. New users of Haskell have to be careful and creative with respect
to the v1-to-v2 shift: try to get a basic understanding of v2 early on
and apply it to the good but old tutorials that are still out there.
Are environments the best approach?
Some of the designers and proponents of v2 argue that environment files are
too subtle a feature. As a ("proper") alternative, they suggest creating a
full-fledged cabal package for every project you start. This amounts to calling
cabal init, which will create a <project-name>.cabal file in the current
directory, and maintaining the .cabal file, including the list of package
dependencies, there; you will also use cabal v2-build to build the project (instead of calling GHC directly).
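For concreteness, a minimal sketch of that workflow (the dependency list is just an example):
cabal init
# edit the generated <project-name>.cabal, adding dependencies to the
# executable's build-depends, e.g.:
#   build-depends: base, criterion
cabal v2-build
cabal v2-run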
While more robust, this idea unsurprisingly doesn't sit
well with many people who use Haskell to try a lot of small independent things:
it feels lame to create a whole "package" every time. Well, it's just one extra
file in practice, and it's not even extra if you compare it to the
environments-based approach I described above, which also maintains one extra
file; except in that case you don't ever need to edit it by hand (unlike the
.cabal file). All in all, in "trying one small thing" scenarios I find
the environments-based approach works better for me. But it does have its
limitations compared to the package-based approach: notably, it's hard to figure out
how to get profiling versions of dependencies into the environment. But that's a
story for another day…
You can find more discussion about how cabal v2-install --lib can be improved in this Cabal issue.
If you want to follow the officially blessed way of doing things (i.e. via a package), please take a minute to read the Getting Started section of the Cabal manual; it's very clear and walks through exactly this: an example of a simple application with a dependency on an external package.

Related

NixOS, Haskell LSP in Neovim, XMonad imports are not found

OS: NixOS, unstable channel.
Neovim: 0.7.2.
Haskell LSP: haskell-language-server.
Running xmonad --recompile in the terminal works.
Please help :-)
EDIT 1:
As asked by @ArtemPelenitsyn in the comments below, here is my init.lua: https://pastebin.com/70jMHm02.
The part that I think is relevant:
require'lspconfig'.hls.setup{}
I think it's something related more to NixOS than to Neovim.
EDIT 2:
As asked by @Ben in the comments below, here is the requested info:
λ ghci
GHCi, version 9.0.2: https://www.haskell.org/ghc/ :? for help
ghci> import XMonad
<no location info>: error:
Could not find module ‘XMonad’
It is not a module in the current program, or in any known package.
ghci>
Here is everything related to Haskell and XMonad in my configuration.nix:
# snip
services.xserver.windowManager.xmonad.enable = true;
services.xserver.displayManager.defaultSession = "none+xmonad";
services.xserver.windowManager.xmonad.enableConfiguredRecompile = true;
services.xserver.windowManager.xmonad.enableContribAndExtras = true;
# snip
environment.systemPackages = with pkgs; [
  ghc
  haskell-language-server
  haskellPackages.xmobar
  haskellPackages.xmonad
  haskellPackages.xmonad-contrib
];
# snip
In your configuration.nix, you have the following:
environment.systemPackages = with pkgs; [
  ghc
  haskell-language-server
  haskellPackages.xmobar
  haskellPackages.xmonad
  haskellPackages.xmonad-contrib
];
Here you actually haven't installed a GHC that can access the xmobar, xmonad, and xmonad-contrib packages. Instead, you've installed a GHC that doesn't know about any packages, and separately installed those Haskell packages. Any executables in those packages will be added to the PATH environment variable (which is how you can actually run xmonad), but PATH isn't how GHC finds installed packages. You need another step to connect the installation of GHC with the packages, so that you (and haskell-language-server) can import them.
The reason is that GHC expects installing packages to be a mutating operation on the file system. On a "normal" system you install GHC and it knows about the packages that were bundled with it; then you install another package like xmonad into a folder that GHC will look in for packages¹, and now the effect of running that same ghc program has changed.
Nix doesn't like that. Packages are supposed to be immutable in Nix. You can't change a GHC-without-xmonad into a GHC-with-xmonad after the fact.
So just installing pkgs.ghc isn't actually what you want. That package is already completely determined by the nix code it evaluates to, and the package it determines is a baseline GHC with no additional packages. Instead, you need to create an entirely new package that consists of GHC installed together with xmonad.
Fortunately, this is an extremely common need, so there is already a wrapper function to generate this package for you: haskellPackages.ghcWithPackages². This function takes a single argument, which must be a function you provide. That function will itself be called on a single argument, which is the collection of Haskell packages available, and it should return a list of the ones you want included in the GHC installation package you're building.
So that means what you actually want is something like this:
environment.systemPackages = with pkgs; [
  haskell-language-server
  # If you want you can use `with hpkgs; [` to avoid explicitly
  # selecting into the hpkgs set on every line
  (haskellPackages.ghcWithPackages (hpkgs: [
    hpkgs.xmobar
    hpkgs.xmonad
    hpkgs.xmonad-contrib
  ]))
];
Under the hood, what ghcWithPackages actually does is install ghc and those packages, just as you did, but then it also creates a very small "wrapper package" around ghc that sets environment variables telling it where to find the specific set of packages you installed. The thing that gets added to PATH to provide commands like ghc, ghci, etc. is not the underlying GHC, but the wrapped one.³
You don't really need to know any of this "under the hood" stuff, just that every time you need GHC to have a specific set of packages you need to create a new nix package with ghcWithPackages. Knowing that it's based on wrapper scripts can help you not stress about space being wasted though; if you have 100 Haskell projects they can all share any GHC versions and Haskell package versions that are common; it's only the tiny wrappers that you have 100 copies of.
This is also the basic model used by most programming languages that have direct support in nixpkgs (and even some other things that aren't strictly programming languages but can be extended by installing plugins after the fact). It doesn't work precisely the same way for every language, as it depends on what code had to be written around the packaging tools each language has. But the basic conceptual model is frequently something like this.
This is all documented in the nixpkgs manual; as opposed to the manual for Nix itself, or for NixOS. It has a section on Languages and Frameworks where you can find the documentation for how a number of programming language ecosystems are supported in nixpkgs. Although the Haskell section under that has been turned into a small paragraph telling you to go to a separate site for the nixpkgs Haskell docs.
One final note: I'm not 100% sure whether haskell-language-server will just automatically pick up the ghc in your PATH and run with those packages, or whether you need further configuration. Since I am a Haskell developer, I have a number of projects that each need different sets of available packages (or even GHC versions in some cases), so I don't have any GHC (or HLS) installed at the environment.systemPackages level; each of my projects has its own shell environment, ultimately generated from the project's .cabal file. This means I've never actually used haskell-language-server on "loose" Haskell files living outside of a project, and I'm not sure whether you need to do anything more to get it to work. But this is definitely what you need to get ghci> import XMonad to work (without dramatically changing how you do things).
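For reference, a minimal sketch of such a per-project shell environment (this assumes the developPackage helper available in recent nixpkgs and a .cabal file in the project directory):
# shell.nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.haskellPackages.developPackage {
  root = ./.;            # directory containing the .cabal file
  returnShellEnv = true; # produce a nix-shell environment rather than a build
}
Running nix-shell in that directory then gives you a ghc (and tooling environment) that knows about the dependencies listed in the .cabal file.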
¹ And I believe it also updates some registry files, but I'm not 100% across all of the details. They're not important for this level of explanation.
² And if you don't like the version of GHC (and everything else) contained in haskellPackages, all the other Haskell package sets also contain this ghcWithPackages function, such as haskell.packages.ghc924, haskell.packages.ghc8102, etc. (the top-level haskellPackages is one of these sets; whichever is determined to be a good default in the revision of nixpkgs you happen to be using).
³ The environment variables I can see in the wrapper script all have NIX_ in the name, so I suspect the base GHC packages in nixpkgs are patched to support this behaviour.

How does the workflow with Haskell Stack work?

I don't get the point of Stack.
I used to write my Haskell code in my favourite environment, run or compile it using GHC(i), and, if necessary, install packages using Cabal. Now, apparently, that is not the way to go any more, but I don't understand how to work with Stack. So far, I have only understood that I need to write stack exec ghci instead of ghci to start a REPL.
Apart from that, the docs always talk about 'projects' for which I have to write some yaml files. But I probably don't have any project; I just want to launch a GHCi REPL and experiment a bit with my ideas. At the moment, this fails with the inability to get the packages that I want to work with installed.
How is working with Stack meant to be done? Is there any explanation of its use cases? Where do I find my use case in there?
Edit. My confusion comes from the fact that I want to work with some software (IHaskell) whose installation guide explains installation via stack. Assuming I already have GHC(i) installed, with a package base I maintain e.g. using Cabal: how would I have to set up stack.yaml to make stack use my global GHC(i) for that project?
First, notice that stack uses its own package base, independent of cabal's. AFAIK they can't be shared... hence, if you run stack build, it'll download packages (including the compiler) into its own package database.
Nevertheless, stack allows you to use a system compiler (but not other libraries). To do so, your stack.yaml must contain the following two lines:
resolver: lts-XX.XX   # keep reading below
system-ghc: true
The available Stackage snapshots are listed at https://www.stackage.org/. Each snapshot is built with a specific version of the compiler, so be sure to use a snapshot with the same compiler version you have on your system. If your system ghc happens to be newer than any LTS, you can set allow-newer: true in stack.yaml.
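For example, a minimal stack.yaml for a system GHC 9.0.2 might look like this (lts-19.33 is just an illustrative snapshot built with GHC 9.0.2; check stackage.org for the snapshot matching your compiler):
resolver: lts-19.33
system-ghc: true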
Now, if getting a separate database from stack feels wrong to you, notice that you can build the project with cabal too, since a stack project is, at the end of the day, described by a cabal file. It probably won't work out of the box, but you can modify the cabal file to match exactly the package versions of the snapshot you are using.
In summary:
You can use your system-wide ghc;
you cannot share libraries installed with cabal;
you can use cabal to build the project, probably after modifying the ihaskell.cabal file to match the versions in the Stackage snapshot.

What does cabal mean when it says "The following packages are likely to be broken by the reinstalls"

I've seen this message pop up a couple times when running cabal v1-install with a suggestion to use --force-reinstalls to install anyway. As I don't know that much about cabal, I'm not sure why a package would break due to a reinstall. Could someone please fill me in on the backstory behind this message?
Note for future readers: this discussion is about historical matters. For practical purposes, you can safely ignore all of that if you are using Cabal 3.
The problem had to do with transitive dependencies. For instance, suppose we had the following three packages installed at specific versions:
A-1.0;
B-1.0, which depends on A; and
C-1.0, which depends on B, but not explicitly on A.
Then, we would install A-1.1, which seemingly would work fine:
A-1.1 would be installed, but the older A-1.0 version would be kept around, solely for the sake of other packages built using it;
B-1.0 would keep using A-1.0; and
C-1.0 would keep using B-1.0.
However, there would be trouble if we, for whatever reason, attempted to reinstall B-1.0 (as opposed to, say, update to B-1.1):
A-1.1 and A-1.0 would still be available for other packages needing them;
B-1.0, however, would be rebuilt against A-1.1, there being no way of keeping around a second installation of the same version of B; and
C-1.0, which was built against the replaced B-1.0 (which depended on A-1.0), would now be broken.
v1-install provided a safeguard against this kind of dangerous reinstall. Using --force-reinstalls would disable that safeguard.
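As an aside, you can ask the GHC package database which installed packages are broken in this sense with ghc-pkg check, which validates the database. A sketch of what it might report in the scenario above (package names and output are illustrative):
$ ghc-pkg check
There are problems in package C-1.0:
  dependency "B-1.0-<hash>" doesn't exist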
For a detailed explanation of the surrounding issues, see Albert Y. C. Lai's Storage and Identification of Cabalized Packages (in particular, the example I used here is essentially a summary of its Corollary: The Pigeon Drop Con section).
While Cabal 1, in its later versions, was able to detect, in the scenario above, that the reinstall changed B even though the version number remained the same (which is what made the safeguard possible), it couldn't keep the two variants of B-1.0 around simultaneously. Cabal 3, on the other hand, is able to do that, which eliminates the problem.

What is Cabal Hell?

I am a little bit confused when reading about Cabal Hell, as the term is overloaded. I guess originally Cabal Hell referred to the diamond dependency problem, which was solved by restricting each build plan to a single version of any package (two different versions of a package can't exist in a single build plan), as explained in this answer.
However, the term is also used in various other contexts, such as destructive re-installations, incorrect package dependency boundaries (lower/upper version bounds), inconsistent environments... (or any other error reported by Cabal).
In particular, I am confused about 1) destructive re-installations and 2) inconsistent environments. What do they mean, and how does cabal new-build solve these problems (is it just sandboxing like cabal sandbox)? And what role does ghc-pkg play here?
Any references or a simple example where these problems could be reproduced would be very appreciated.
Regarding "destructive re-installations": If I am not wrong, GHC has a package manager of itself (ghc-pkg), and the packages are installed as dynamically linkable libraries i.e: base depends on ghc-prim, so if ghc-prim is removed it will break base, am I right? And since GHC only allows one instance of a package with the same version, cabal install might register a newer build of the same (package, version) such that it breaks the dependents of the unregistered package. If the above understanding regarding "destructive re-installations" are correct; how does cabal new-build help here?
The only meaningful use of the term is the one given in the linked answer. Related are the follow-on problems of having lots of different packages in the global database, which can make encountering diamond dependencies more common, require destructive reinstalls to resolve, and so on.
The other usages of the term are not helpful and just mean "problems somehow involving cabal."
That said, let me answer your other questions.
1) ghc-pkg is not a package manager, but rather a tool for managing GHC package databases. It is used by cabal to register packages into databases, and can be used by end users to inspect the contents of the databases. Think of it as part of the underlying substrate provided by GHC, not a competing tool.
2) new-build eliminates and replaces the standard notion of a package db entirely. Instead of a db consisting of packages and versions, with at most one entry per (package, version) pair, a db consists of potentially many copies of a package at any given version, each built against potentially different versions of its dependencies, all of which are managed in part by hash-addressing and so marked with a unique "fingerprint". This is called the store. When you new-build, cabal calculates a build plan from scratch, irrespective of any previously installed dependencies. If a particular fingerprint (consisting of a package, its version, the versions of all its dependencies, certain flags, etc.) already exists in the store, it is reused. If it does not, it is built.
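To make the hash-addressing concrete, here is roughly what the store looks like on disk (paths and hashes are illustrative):
$ ls ~/.cabal/store/ghc-9.0.2
criterion-1.5.13.0-a1b2c3…   # one entry per fingerprint: package, version, deps, flags
criterion-1.5.13.0-d4e5f6…   # same version, built against different dependency versions
package.db                   # the database indexing the entries above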
As such, the only "diamond dependencies" that can occur are the truly insoluble ones, and not the ones occasioned by having fixed too-early (due to already-installed deps) some portion of the dependency tree.
tl;dr: you write "since GHC only allows one instance of a package with the same version", but new-build partially lifts this restriction in the store, which allows the solver to produce better, more reproducible plans more often.

How do I disable version parsing in cabal or stack?

I am using an alternative version-numbering approach for my projects. I have encountered strange behaviour from cabal and stack that does not allow me to fully enjoy the benefits of this approach. Both cabal and stack enforce the version to be of the format Int.Int.Int, which does not cover the other version formats I use for branches (0.x.x, 1.x.x, 1.0.x, etc.).
If I have the line version: 0.x.x in my .cabal file, I get a Parse of field 'version' failed. error when running cabal build, or Unable to parse cabal file {PROJECT_NAME}.cabal: NoParse "version" 5 when running stack init.
Is there a way to disable version parsing in cabal and stack commands? Is there a flag for it? Or do I have to request this kind of change (adding flags, disabling version parsing) from the developers of cabal and stack?
Why is there any parsing at all? How does it help with building a package? Do cabal or stack automatically increment build numbers on some event? If yes, where could I read more about this? How could I influence the way version-number incrementation gets implemented in cabal and stack? I want developers of Haskell packages to take into account the possibility of alternative version-numbering approaches.
PS. For all interested folks, I want to quickly summarize the idea behind "weird" version numbers such as 0.x.x, 1.x.x, 1.0.x. I use the version numbers with x's to describe streamlines of development that allow code changes, while version numbers such as 1.0.0, 1.1.0, 2.35.46 are used to describe frozen states of development (to be precise, they are used for released versions of software). Note that version numbers such as 0.x.0, 1.x.15, 2.x.23 are also possible (used for snapshots/builds of software); they mean that the codebase has been inherited from the branches with version numbers 0.x.x, 1.x.x, and 2.x.x correspondingly.
Why do I need version numbers such as 0.x.x, 1.x.x, and 2.x.x at all? In brief, a different number of x's means a branch of a different type. For example, the version-number pattern N.x.x is used for support branches, while the pattern N.M.x is used for release branches. The idea behind support branches is that they get created due to incompatibility of the corresponding codebases. Release branches get created due to a feature freeze in the corresponding codebase. For example, branches 1.0.x, 1.1.x, 1.2.x, ... get created as a result of feature freezes (or releases) in branch 1.x.x.
I know this is all confusing, but I worked hard to establish this version-numbering approach, and I continue working on awareness of the inconsistencies of version numbering through my presentations and other projects. This all makes sense once you think more about the pitfalls of the semver approach (you can find a detailed slideshare presentation on the matter following the link). But I do not want to defend it for now. For the time being, I just want cabal and stack to stop enforcing their, as I perceive them, unjustified rules on my project. Hope you can help me with that.
You can't. The version will be parsed to Version, which is:
data Version = PV0 {-# UNPACK #-} !Word64
             | PV1 !Int [Int]
Stack uses Cabal as a library but has its own Version type:
newtype Version =
  Version { unVersion :: Vector Word }
  deriving (Eq, Ord, Typeable, Data, Generic, Store, NFData)
Neither cabal nor stack has a way to customize the parsing. You would have to write your own variant of those programs if you want to use another version type. But then again, you wouldn't win anything at that point: neither Hackage nor Stackage would recognize your package's version.
So 1.x.x isn't possible at the moment. You could exchange x for 99999999 or something similar to mitigate the problem. That being said, it's not clear what cabal install should then install: the 99999999 version, or the latest stable variant?
If you can express the semantics, a discussion on the mailing list as well as a feature request might change the behaviour in the (far-away) future, but for now you either have to patch the programs yourself or use another numbering scheme.
Is there a way to disable version parsing on cabal and stack commands? Is there a flag for it?
No.
Or do I have to request this kind of change (adding flags, disabling version parsing) from the developers of cabal and stack?
You can of course ask, but there are so many outstanding issues that you are unlikely to get any traction. You will have to be very convincing -- convincing enough to overturn more than 20 years of experience that says the current versioning scheme is basically workable. Realistically, if you want this to happen you'll probably have to maintain a fork of these tools yourself, and provide an alternative place to host packages using this scheme.
Why is there any parsing at all? How does it help with building a package?
Packages specify dependencies, and for each dependency, specify what version ranges they work with. The build tools then use a constraint solver to choose a coherent set of package/version pairs to satisfy all the (transitive) dependencies. To do this, they must at a minimum be able to check whether a given version is in a given range -- which requires parsing the version number at least a little bit.
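To make that minimum concrete, here is a small sketch using Cabal as a library (the same library both tools build on; names as in the Cabal 3 API):
import Distribution.Version
  (earlierVersion, intersectVersionRanges, mkVersion, orLaterVersion, withinRange)

main :: IO ()
main = do
  -- the range ">=0.10 && <0.11" from the bytestring example below
  let range = intersectVersionRanges (orLaterVersion (mkVersion [0,10]))
                                     (earlierVersion (mkVersion [0,11]))
  print (withinRange (mkVersion [0,10,0,1]) range)  -- True
  print (withinRange (mkVersion [0,11,1])   range)  -- False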
Does cabal or stack automatically increment build numbers on some event? If yes, where could I read more about this?
There is nothing automatic. But you should take a look at the Package Versioning Policy, which serves as a social contract between package maintainers. It lets one package maintainer say, "I am using bytestring version 0.10.0.1 and it seems to work. I'm being careful about qualifying all my bytestring imports; therefore I can specify a range like >=0.10 && <0.11 and be sure that things will just work, while giving the bytestring maintainer the ability to push security and efficiency updates to my users." without having to pore through the full documentation of bytestring and hope its maintainer had written about what its version numbers mean.
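In a .cabal file, that contract is expressed through version ranges in the build-depends field, e.g. (bounds shown for illustration):
build-depends: base       >=4.14 && <5
             , bytestring >=0.10 && <0.11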
How could I influence the way version numbering incrementation gets implemented in cabal and stack?
As with your previous question about changing the way the community does things, I think modifications to the Package Versioning Policy are going to be quite difficult, especially changes as radical as you seem to be proposing here. The more radical the change, the more carefully motivated it will have to be to gain traction.
I honestly don't know what a reasonable place to take such motivation and discussion would be; perhaps the haskell-cafe mailing list or similar.
