Should I use stack to build and upload to Hackage? - haskell

Over time I've developed a messy system-level Haskell installation that I'm not sure how to completely clean up. For the most part this isn't much of a concern, since I simply use stack to manage per-project Haskell configurations. However, as my project requirements diverge from my system Haskell setup, I wonder what the best way is to build and upload packages for Hackage.
Specifically (1) should I be using
stack exec -- cabal sdist
stack exec -- cabal upload
instead of simply
cabal sdist
cabal upload
and (2) is there any reason to install a project-local version of cabal (with stack build cabal)?
Or is there some better stack-based approach to building and distributing to Hackage that doesn't involve invoking cabal directly?

Adding an answer based on my earlier comment.
stack offers equivalent functionality via its
stack sdist
stack upload
commands, which don't require interfacing with cabal directly in stack-based projects.
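For example, a typical release from the root of a stack project would look roughly like this (a sketch; stack upload will prompt for Hackage credentials unless they are already saved):
$ stack sdist      # create the source distribution tarball
$ stack upload .   # upload the package in the current directory to Hackage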
A full list of commands supported by stack can be obtained via:
$ stack --help
and the official documentation.
Individual commands also support --help to see what command line flags they support.

Related

How to prevent stack (haskell) from running hpack?

My local machine and CI have different versions of stack (and hpack). Because of this, there are minor differences (mostly whitespace) in the cabal files auto-generated on my local machine vs. the CI machine.
Because hpack is run again in CI (even though the generated cabal files are checked into the repo), the cabal files change and end up triggering a rebuild in CI.
Is there any way to invoke stack install or stack build so that they DON'T run hpack again (and use the given cabal file instead)?

Install a Haskell package directly from a git repository?

stack allows one to define git repositories as packages using the stack.yaml file. Is it possible to do something like the following directly via the command line:
stack install --resolver=lts-12.1 git#github.com:saurabhnanda/some-repo.git
Use-case: Install a command-line tool that I have written during a docker build process. I want to avoid cloning the repo and then building it. Is there a short-hand for this?
EDIT
New solution
Right after submitting the answer I thought of a separate solution.
You can create a custom snapshot file, say custom-snapshot.yaml, in your repository that extends an existing snapshot, such as lts-15.3. Add your package to it in much the same way you would add it to the stack.yaml.
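For illustration, such a snapshot file might look roughly like the sketch below. I have not verified the exact schema, which also differs between stack versions, and the commit value is a placeholder:
# custom-snapshot.yaml
resolver: lts-15.3
name: my-cool-tool-snapshot
packages:
  - github: saurabhnanda/my-cool-tool
    commit: <commit-sha>    # placeholder; pin a real commit here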
And then point to it when installing the tool:
$ stack install --resolver https://raw.githubusercontent.com/saurabhnanda/my-cool-tool/master/custom-snapshot.yaml my-cool-tool
or even shorter:
$ stack install --resolver github:saurabhnanda/my-cool-tool:custom-snapshot.yaml my-cool-tool
Disclaimer - I have not tried it, but in theory it should work.
Old solution
I don't think you can do it from the CLI without a stack.yaml.
So the two options are:
either create a temporary project with stack new and add your repository to its stack.yaml,
or add the same information to the global stack.yaml, whose location can be found programmatically:
$ stack path --config-location
/home/saurabhnanda/.stack/global-project/stack.yaml
and add this to extra-deps:
- github: saurabhnanda/some-repo
  commit: master
  subdirs:
    - my-cool-tool
After that running stack install my-cool-tool should work as normal.
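Put together, the relevant part of the global stack.yaml would then look something like this (a sketch; the resolver is whatever your global project already uses, and commit would normally be a specific SHA rather than a branch name):
resolver: lts-15.3    # example resolver; keep whatever the global project has
packages: []
extra-deps:
  - github: saurabhnanda/some-repo
    commit: master
    subdirs:
      - my-cool-tool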
I don't think it would be too hard to write up a Haskell script that does one of those two solutions for you and host it as a gist that can be curled and executed on demand with stack.

How to download packages without compiling/installing them?

Is there any command-line switch to stack that tells it to download all relevant packages without compiling/installing anything?
I think you probably want a combination of the --prefetch and --dry-run flags. For example, the following command:
stack build --prefetch --dry-run acme-missiles
downloads the acme-missiles-0.3.tar.gz source file without building it. If you later run stack build acme-missiles, it should configure and build it from the previously downloaded source.
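If the goal is to prefetch every dependency of the current project rather than a single named package, the same flag combination should also work without a package argument (a sketch I have not tested against every stack version):
$ cd my-project                       # placeholder for your own project directory
$ stack build --prefetch --dry-run    # download all dependency sources, build nothing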
If you want to download the sources of a package locally, you can use the stack unpack command:
stack unpack typerep-map-0.3.0
The same can be done with cabal-install as well, using the cabal get command:
cabal get typerep-map-0.3.0

What is the difference between `stack clean` and removing the `.stack-work` directory?

1 Context
I am working on a Haskell project that involves lots of C-bits and FFI. So I find myself frequently running and re-running commands like
$ stack build
$ stack build --force-dirty
$ stack clean
$ rm -rf ./.stack-work
over and over in order for the C-bits to be linked properly to the Haskell bits. Put differently, sometimes things just work when running stack build, and sometimes they don't (in which case I'm forced to cycle through the above commands over and over until my project builds properly).
This means I don't have a proper understanding of how stack (through ghc) assembles the C-bits before assembling the Haskell bits. So here is one question to help me start clearing up my confusion:
2 Question
Are there any noteworthy differences between running stack clean and deleting the contents of the .stack-work directory? Are there cases where deleting the .stack-work directory is needed as a precaution to ensure that you are actually running a clean build?
As you can see by reading the source here:
https://github.com/commercialhaskell/stack/blob/master/src/Stack/Clean.hs
There are two levels, full and shallow, and shallow seems to be the default. It can clean specific packages, or, if you don't provide any options at all, it will clean everything but extra-deps in the local packages.
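In practical terms the two levels map onto invocations roughly like the following (a hedged sketch; flag names can differ between stack versions, so check stack clean --help):
$ stack clean                # shallow: remove build artifacts of the project packages
$ stack clean some-package   # clean only the named local package (some-package is a placeholder)
$ stack clean --full         # full: delete the .stack-work directories entirely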

Are there any Haskell specific tools that can show source code from imported modules?

How can I browse Haskell source code, preferably without an internet connection? Right now I click through Hackage search results, click the Source link and search the source page. There are two problems:
I'm using the current version as a proxy for what I have locally
This does not work well recursively (more clicks and searches for the next definition)
Usually IDEs let you download the sources for any library and open a new editor tab at the definition. I prefer reading code to documentation: fewer surprises along the way, and I can learn something from it.
So, how can I set up recursive source searches using Haskell tools, or standard GNU tools if necessary? All I know right now is that I can generate ctags for vim, but where does cabal store sources?
This is the opinionated workflow I follow to render the documentation with the source link enabled.
$ cd <package-name>
$ cabal sandbox init
$ cabal install --only-dependencies --enable-documentation --haddock-hyperlink-source
$ cabal configure --enable-documentation --haddock-hyperlink-source
$ cabal haddock --hyperlink-source
$ firefox dist/doc/html/<package-name>/index.html
The Source link should be enabled for all packages, including the dependencies, as long as they are installed in the sandbox.
In the particular case of Arch Linux, the distro I use, I try to avoid installing Haskell system packages through pacman because, by default, the documentation is not built with the source link enabled. In Arch Linux you can use ABS and modify the PKGBUILD with the parameters described above. I'm pretty sure something similar could be done in other distros, but have no idea about Windows or Mac OS X.
It's also worth mentioning that you don't need to type those parameters every time you run cabal. You can enable them by default in your ~/.cabal/config.
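For example, something like the following in ~/.cabal/config should have a similar effect (a sketch; I have not checked the exact option names against every cabal-install version):
-- ~/.cabal/config
documentation: True
haddock
  hyperlink-source: True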
This should work without a sandbox, but if you are dealing with more than one Haskell project I strongly recommend using sandboxes.
