How to prevent stack (Haskell) from running hpack?

My local machine and CI have different versions of stack (and hpack). Because of this, there are minor differences (mostly whitespace) between the cabal files auto-generated on my local machine and those generated on the CI machine.
Because hpack is run again in CI (even though the generated cabal files are checked into the repo), the cabal files change and end up triggering a rebuild in CI.
Is there any way to invoke stack install or stack build so that they DON'T run hpack again (and use the given cabal file)?
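One possible workaround (a sketch, not an official stack flag): stack invokes hpack only when it sees a package.yaml, so temporarily hiding that file in CI should make stack fall back to the committed cabal file. The file names below are demo placeholders.

```shell
# Demo setup: pretend this is a project root containing a package.yaml.
touch package.yaml

# Hide package.yaml so stack cannot regenerate the .cabal file from it;
# with no package.yaml present, stack uses the checked-in .cabal file.
if [ -f package.yaml ]; then
  mv package.yaml package.yaml.disabled
fi

# stack build   # would now build from the committed .cabal file
```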

Related

Cabal - Rebuild on file change

Is there a cabal option to rebuild (or even run the tests and do other stuff) on every source file change? With stack there is the --file-watch option; is there anything similar for cabal? Or do people just use ghcid for that?

Install a Haskell package directly from a git repository?

stack allows one to define git repositories as packages using the stack.yaml file. Is it possible to do something like the following, directly via command-line:
stack install --resolver=lts-12.1 git#github.com:saurabhnanda/some-repo.git
Use-case: install a command-line tool that I have written, during a Docker build process. I want to avoid cloning the repo and then building it. Is there a shorthand for this?
EDIT
New solution
Right after submitting the answer I thought of a separate solution.
You can create a custom snapshot file, say custom-snapshot.yaml, in your repository that extends an existing snapshot, such as lts-15.3. Add your package to it in much the same way you would add it to stack.yaml.
Then point to it when installing the tool:
$ stack install --resolver https://raw.githubusercontent.com/saurabhnanda/my-cool-tool/master/custom-snapshot.yaml my-cool-tool
or even shorter:
$ stack install --resolver github:saurabhnanda/my-cool-tool:custom-snapshot.yaml my-cool-tool
Disclaimer - I have not tried it, but in theory it should work.
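Untested, as noted above, but such a custom-snapshot.yaml might look roughly like this (the snapshot name and the pinned commit are placeholders, not taken from the question):

```yaml
# custom-snapshot.yaml -- a sketch; replace the commit placeholder
name: my-cool-tool-snapshot
resolver: lts-15.3
packages:
- git: https://github.com/saurabhnanda/my-cool-tool
  commit: <full-commit-sha>   # pin the revision to install
```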
Old solution
I don't think you can do this from the command line without a stack.yaml.
So two options are:
either create a temporary project with stack new and add your repository to its stack.yaml,
or add the same information to the global stack.yaml, whose location can be found programmatically:
$ stack path --config-location
/home/saurabhnanda/.stack/global-project/stack.yaml
and add this to extra-deps:
- github: saurabhnanda/some-repo
  commit: master
  subdirs:
  - my-cool-tool
After that running stack install my-cool-tool should work as normal.
I don't think it would be too hard to write a Haskell script that performs one of those two solutions for you, hosted as a gist that can be curled and executed on demand with stack.

What is the difference between `stack clean` and removing the `.stack-work` directory?

1 Context
I am involved in a Haskell project that involves lots of C-bits and FFI. So I find myself frequently running and re-running commands like
$ stack build
$ stack build --force-dirty
$ stack clean
$ rm -r ./.stack-work
over and over in order for the C-bits to be linked properly to the Haskell bits. Put differently, sometimes things just work when running stack build, and sometimes they don't (in which case I'm forced to cycle through the above commands over and over until my project builds properly).
This means I don't have a proper understanding of how stack (through ghc) assembles the C-bits before assembling the Haskell bits. So here is one question to help me start clearing up my confusion:
2 Question
Are there any noteworthy differences between running stack clean and deleting the contents of the .stack-work directory? Are there cases where deleting the .stack-work directory is needed as a precaution to ensure that you are actually running a clean build?
As you can see by reading the source here:
https://github.com/commercialhaskell/stack/blob/master/src/Stack/Clean.hs
There are two levels, full and shallow, and shallow is the default. A shallow clean can target specific packages, or, if you provide no options at all, it cleans everything except extra-deps in local packages.
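A rough sketch of the distinction, based on the Clean.hs source above (the --full flag exists on recent stack versions; behavior summarized here, not verified against every release):

```shell
# Demo setup: fake a project's .stack-work directory.
mkdir -p .stack-work/dist

# stack clean          # shallow (default): removes build artifacts of
#                      # local packages, keeps extra-deps output
# stack clean --full   # full: deletes .stack-work entirely, roughly
#                      # equivalent to the manual removal below
rm -rf .stack-work
```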

Should I use stack to build and upload to Hackage?

Over time I've developed a messy system-level Haskell installation that I'm not sure how to completely clean up. But for the most part this isn't of much concern, as I simply use stack to manage per-project Haskell configurations. However, as my project requirements diverge from my system Haskell setup, I wonder what the best way is to build and upload packages to Hackage.
Specifically (1) should I be using
stack exec -- cabal sdist
stack exec -- cabal upload
instead of simply
cabal sdist
cabal upload
and (2) is there any reason to install a project-local version of cabal (with stack build cabal)?
Or is there some better stack-based approach to building and distributing to Hackage that doesn't involve invoking cabal directly?
Adding an answer based on my earlier comment.
stack offers equivalent functionality via its
stack sdist
stack upload
commands, which don't require interfacing with cabal directly in stack-based projects.
A full list of commands supported by stack can be obtained via:
$ stack --help
and the official documentation.
Individual commands also support --help to see what command line flags they support.

stack init does not finish. How to ignore all bounds in existing cabal files?

I am converting a project (consisting of several cabalized packages) to stack. "stack init" does not seem to be able to "calculate a build plan" (it takes ages).
Perhaps this would get easier if all version bounds were ignored. But how can I do this, other than actually removing them from the cabal files manually?
EDIT: there is "allow-newer" in http://docs.haskellstack.org/en/stable/yaml_configuration/ but this only helps once an initial stack.yaml file already exists.
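For reference, once an initial stack.yaml exists, ignoring the bounds is a one-line addition (the resolver and package directories below are placeholders):

```yaml
# stack.yaml sketch -- package subdirectories are placeholders
resolver: lts-5.18
packages:
- pkg-a
- pkg-b
allow-newer: true   # ignore version bounds declared in the cabal files
```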
Anyway, I could work around my particular problem by manually removing some packages (that is, subdirs) from the build.
My actual command line was
stack init --verbose --resolver=lts-5 $(cat DIRS) --solver
