This question is similar to Compile stack script instead of running it but I don't think it has been answered completely.
I have a script which I want to run in two ways:
Using ./Script.hs when I am developing it locally, and need a fast develop-run-test cycle
Compiling it to a standalone binary (static, if possible), so that I can ship it to my production servers without any dependency on haskell/stack.
I've tried putting the following at the top of my script, but that results in the error Did not find executable at specified path: <the path of my script>:
#!/usr/bin/env stack
{- stack
--resolver=lts-15.6
script
--compile
--copy-bins
--package shake
--package bytestring
--package text
--package hashable
--package binary
--package deepseq
--package string-conv
--package http-client
--package http-types
--package safe
--ghc-options=-threaded
--ghc-options=-with-rtsopts=-N
-}
I've been able to use the Data.Tuple.Extra module after installing the extra package with stack build extra. But the same doesn't work for the Data.List.Unique module (https://hackage.haskell.org/package/Unique-0.4.7.8/docs/Data-List-Unique.html). According to the website, it's included in the Unique package, so I installed it with
$ stack build Unique
The installation seems to have been successful because I didn't see any error messages. But
$ cat try.hs
import Data.List.Unique
main = do putStrLn "hello"
$ stack runghc try.hs
try.hs:1:1: error:
Could not find module ‘Data.List.Unique’
Use -v (or `:set -v` in ghci) to see a list of the files searched for.
|
1 | import Data.List.Unique
| ^^^^^^^^^^^^^^^^^^^^^^^
$
The output from stack runghc -v try.hs is too big for me to analyze...
Note: I've modified the question to focus on the installation and use of modules. Following the answers below, I'm able to run the program by passing a package option. Still, it would be nice if one didn't need a package option at all.
stack build doesn't "install" the package globally for use with every compilation, and runghc doesn't pick up packages built that way; it's meant to be a more or less direct invocation of GHC.
If you run your program with stack try.hs, as described in the docs, you can specify the packages to use with a specially formatted comment, like this:
-- stack script --package Unique
import Data.List.Unique
main = do putStrLn "hello"
The Unique dependency can be specified by passing the --package option to runghc:
stack runghc --package Unique try.hs
Alternatively, you can make try.hs a Stack script, as illustrated by Fyodor Soikin's answer.
If you plan to develop try.hs into more than a standalone file, consider setting up a Stack project. See the Hello World example in the Stack User Guide for initial guidance on that.
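In the meantime, if you only want to experiment with the behaviour before sorting out the package install, a function like Unique's sortUniq (a sorted list with duplicates removed) can be approximated with base alone. This is a sketch, not the library's actual implementation:

```haskell
import Data.List (group, sort)

-- Base-only stand-in for Data.List.Unique's sortUniq:
-- sort the list, group equal elements, keep one of each.
sortUniq :: Ord a => [a] -> [a]
sortUniq = map head . group . sort

main :: IO ()
main = print (sortUniq "mississippi")  -- prints "imps"
```

Once the real package is available, deleting this definition and importing Data.List.Unique should be a drop-in change.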
I'm relatively new to Haskell and I realize I might be swimming against the stream here, but nonetheless, I'll ask:
Say I have a short Haskell script:
import Data.List.Split (splitOn)
main :: IO ()
main = do
let orders = splitOn "x" "axbxc"
putStrLn $ head orders
If I used only standard functions, I could compile this with ghc <script.hs>. But because I depend on the split package for splitOn, the compilation fails.
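For reference, splitOn "x" "axbxc" evaluates to ["a","b","c"]. A base-only sketch of that behaviour for a single-character separator (splitOnChar is a hypothetical stand-in, not the library function, which is more general):

```haskell
-- Base-only sketch of splitting on a single-character delimiter,
-- approximating Data.List.Split's splitOn for this special case.
splitOnChar :: Char -> String -> [String]
splitOnChar c s = case break (== c) s of
  (chunk, [])       -> [chunk]
  (chunk, _ : rest) -> chunk : splitOnChar c rest

main :: IO ()
main = print (splitOnChar 'x' "axbxc")  -- prints ["a","b","c"]
```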
Now, I have no difficulties setting up a cabal project with a project.cabal and a Setup.hs file in order to get this to actually compile. However, this feels like a lot of extra boilerplate for a standalone script.
So, is there a way to compile a single .hs file against an external package? Something similar to what pip install something does in Python, "installing the package into the interpreter": is there a way to install extra packages "into GHC", so that I only need to pass some extra flag to ghc?
The Cabal equivalent of the Stack script in bradrn's answer would be:
#!/usr/bin/env cabal
{- cabal:
build-depends: base
, split
-}
import Data.List.Split (splitOn)
main :: IO ()
main = do
let orders = splitOn "x" "axbxc"
putStrLn $ head orders
The script can be run with cabal run, or directly by giving it execute permission. If need be, version bounds can be added as usual to the build-depends at the top of the script.
(Note this isn't literally a solution without Cabal, as doing this with GHC alone, even if possible, wouldn't be worth the trouble. In any case, it certainly avoids the boilerplate of needing multiple files.)
If you use Stack, the simplest way to do this is to write a ‘Stack script’, which is a Haskell file with a description of the required packages in the first line (really an invocation of stack specifying the appropriate command line arguments). An example (slightly modified from the docs):
$ cat turtle-example.hs
-- stack --resolver lts-6.25 script --package turtle
{-# LANGUAGE OverloadedStrings #-}
import Turtle
main = echo "Hello World!"
$ stack ./turtle-example.hs
Completed 5 action(s).
Hello World!
$ stack ./turtle-example.hs
Hello World!
This script uses the turtle package; when run, Stack downloads and builds this dependency, after which it is available in the script. (Note that the second time it is run, turtle has already been built so does not need to be rebuilt again.)
As it happens, the --package option in Stack is not limited to scripts; it can be used with other Stack commands as well. For instance, to compile your program, you should be able to run stack ghc --resolver lts-16.27 --package split -- your-program-name.hs. And stack ghci --package split will give you a GHCi prompt where you can import Data.List.Split.
(Note: This answer focuses on Stack rather than Cabal, simply because I don’t know Cabal very well. However, I believe all this can be done using Cabal as well. For instance, I do know that Cabal has something very similar to the Stack scripts I mentioned above, though I can’t remember the syntax just at the moment.)
EDIT: See #duplode’s answer for how to do this with Cabal.
You can install into the default environment for the current user by doing cabal install --lib split. The package should then be available to ghc and ghci without needing any special options.
More information is at the bottom of this section in the Cabal manual. The v2 commands that it uses are the default now so if you have a fairly new cabal you can just use install rather than v2-install.
I think this is the quintessential entry point to the package-management battle in Haskell. It's not that there's not enough advice; there's so much, each piece with its own caveats and assumptions. Climbing that mountain for the sake of splitOn feels, to the newbie, like they're Doing It Wrong.
After spending far too much time trying each permutation, I've collated the fine answers here, and many, many others from elsewhere, put them to the test, and summarised the results. The full write up is here.
The pertinent summary of solutions is:
Install globally
You can still do global installs with cabal install --lib the-package.
Use Stack as a run command
You can use stack directly, eg: stack exec --package containers --package optparse-generic [...and so on] -- runghc hello.hs
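As an illustration, a hello.hs of the sort that command would run might use Data.Map from the containers package (the file name and contents here are a hypothetical example):

```haskell
-- Uses Data.Map from the containers package, which the
-- --package containers flag makes available to runghc.
import qualified Data.Map as Map

main :: IO ()
main = do
  -- Count word occurrences with a Map built via fromListWith.
  let counts = Map.fromListWith (+) [(w, 1 :: Int) | w <- words "a b a c b a"]
  print (Map.toList counts)  -- prints [("a",3),("b",2),("c",1)]
```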
Create a Stack project
The real deal. Run stack new my-project hraftery/minimal, put your code in Main.hs and your dependencies in my-project.cabal (and maybe stack.yaml - check the article), and then run stack build to automagically pull and build all dependencies.
Use a Stack script
Make your Haskell file itself an executable Stack script. Add -- stack --resolver lts-6.25 script --package the-package to the top of your .hs file and set its executable bit.
For my Edit/Test/Run workflow (e.g. using VS Code, GHCi and GHC respectively), I found it pretty clear what works in practice. In summary:
Amongst a chorus of discouragement, Global Installs suit what I know of your use case just fine.
Where Global Installs don't make sense (eg. for managing dependency versions or being portable) a Stack project starting from my minimal template is a smooth transition to a more sophisticated and popular method.
I've previously used stack ghci app:exe:executable to get a list of any errors in my Haskell project managed via cabal.
However now that I'm not using stack, how would I achieve the above (essentially load all the modules from the executable defined in the cabal project file)?
https://cabal.readthedocs.io/en/latest/nix-local-build.html#cabal-v2-repl
cabal v2-repl executableNameGoesHere
I'm trying to use Haskell for scripting in the pattern described in the "Script interpreter" section here:
https://haskell-lang.org/tutorial/stack-script
When I have comments like this in my script after the shebang:
#!/usr/bin/env stack
{- stack
--resolver nightly-2016-11-26
--install-ghc
runghc
--package http-conduit
-}
{-# LANGUAGE OverloadedStrings #-}
I know I can run:
$ stack myfile.hs
And it correctly picks up the resolver version I specified I need.
However, if I also try to build my script, to have the best of both worlds in scripting for development while I figure things out, and compilation once I'm more sure...
username#myhost:~/$ stack exec -- ghc myfile.hs && ./myfile
Run from outside a project, using implicit global project config
Using resolver: lts-7.10 from implicit global project's config file: /home/username/.stack/global-project/stack.yaml
That errors out because it's using resolver 'lts-7.10' from my global config instead of 'nightly-2016-11-26', even though the comments after the shebang specify a specific, non-global resolver.
While I know I could run ghc as above and pass --resolver explicitly on the command line, I would prefer that it be picked up from the file itself. Is this possible?