I have a package A which is built with stack, producing a library. The library is used in package B (i.e. A is a dependency listed in B's cabal file). B is built with stack as well. Consider the four cases of changes in A and use of the solver on B:
1 - When A changes, B continues to use the old state of A. This follows stack's guarantee that a compilation always works the same way and is not influenced by changes in other programs.
2 - If package A gets a new version number, then stack build on A followed by stack build on B silently uses the new version. I think this is wrong, as it violates the guarantee; it should continue to use the old version.
3 - If package A changes without a new version number and the solver is run on B, B continues using the old state. I think this is wrong; after running the solver, the guarantee does not apply and the new state should be used.
4 - If package A changes with a new version number and the solver is run on B, then B uses the new version. This is correct.
I cannot understand this behavior and how version numbers and the solver interact. How can I control the use of a new state of A without bumping the version number each time? Changing version numbers all the time is inconvenient when two packages are worked on in parallel; it should be sufficient to run the solver to bring the changes from A into B, and without running the solver, the package should always recompile, independent of changes in other packages.
For development, I wish there were an (additional) flag I could use in case 2 to make stack always build silently against the newest state of the dependency packages (as if there were a new version, without bumping the version number).
Do I misunderstand the guarantee of stack build, or do I misunderstand the behavior of stack? The code I used to test is simplistic and is on GitHub: git@github.com:andrewufrank/test-depProj.git.
The question is related to previous questions I asked regarding Atom's or Leksah's behavior in multi-project development. I found that the issue is essentially a question of the behavior of stack build, and it must be clarified for stack build first.
For clarification, the stack.yaml of A:
flags: {}
extra-package-dbs: []
packages:
- .
extra-deps: []
resolver: lts-8.13
and for B:
flags: {}
extra-package-dbs: []
packages:
- .
- ../a
extra-deps: []
resolver: lts-8.13
It seems to be a bug and I reported it.
The workaround, as described by #duplode, is to do stack build only in one project, or to use the --force-dirty flag when running stack build.
The problem is apparently caused by the behavior of cabal, which reuses an ID even when the content of a package has changed. A fix was included in stack 1.4.0 (see stack issues #2904 and #3047), but it appears this was not effective in all cases.
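In shell terms, the workaround looks like this (a sketch; run from package B's directory, with A listed as a local dependency as in the stack.yaml files above):

```shell
# Force stack to treat local packages as dirty, so B picks up
# A's changed contents even without a version bump.
stack build --force-dirty
```

The alternative is to avoid building A separately at all, and instead run a single stack build from B's project (whose packages list includes ../a), letting stack track A's file changes directly.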
Related
I don't get the point about Stack.
I used to write my Haskell code in my favourite environment, run or compile it using GHC(i), and, if necessary, install packages using Cabal. Now that apparently is no longer the way to go, but I don't understand how to work with Stack. So far, I have only understood that I need to write stack exec ghci instead of ghci to start a REPL.
Apart from that, the docs always talk about 'projects' for which I have to write some YAML files. But I probably don't have any project -- I just want to launch a GHCi REPL and experiment a bit with my ideas. At the moment, this fails with my inability to get the packages that I want to work with installed.
How is working with Stack meant to be done? Is there any explanation of its use cases? Where do I find my use case among them?
Edit: My confusion comes from the fact that I want to work with some software (IHaskell) whose installation guide explains the installation via Stack. Assume I already have GHCi installed, whose package base I maintain, e.g., using Cabal. How would I have to set up stack.yaml to make Stack use my global GHCi for that project?
First, notice that stack uses its own package database, independent from cabal's. AFAIK they can't be shared; hence, if you run stack build, it will download packages (including the compiler) into its own package database.
Nevertheless, stack allows you to use a system compiler (but not other libraries). To do so, your stack.yaml must have the following two lines:
resolver: lts-XX.XX  # keep reading below
system-ghc: True
The available Stackage snapshots can be found at https://www.stackage.org/. Each snapshot works with a specific version of the compiler. Be sure to use a snapshot with the same compiler version you have on your system. If your system GHC happens to be newer than any LTS, you can set allow-newer: true in stack.yaml.
Now, if getting a separate database from stack feels wrong to you, notice that you can build the project with cabal too, since stack works from an ordinary cabal file at the end of the day. It probably won't work out of the box, though; you may have to modify the cabal file to match exactly the versions of the packages in the snapshot you are using.
In summary:
You can use your system-wide GHC.
You cannot share libraries installed with cabal.
You can use cabal to build the project, probably after modifying the ihaskell.cabal file to match the versions in the Stackage snapshot.
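Putting those pieces together, a minimal stack.yaml for this use case might look as follows (a sketch: the snapshot name is a placeholder you must fill in to match your system compiler):

```yaml
# Sketch of a stack.yaml that reuses the system GHC.
resolver: lts-XX.XX   # pick a snapshot whose GHC matches your installed one

# Do not download a stack-managed compiler; use the one on $PATH.
system-ghc: true

# Only needed if your system GHC is newer than every snapshot:
# allow-newer: true

packages:
- .
```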
I have two Haskell libraries, lib-a and lib-b, both hosted in private git repos. lib-b depends on lib-a; both build with no problems.
Now I want to import lib-b into another project and thus add it to the stack configuration with the git directive, like this:
- git: git@github.com:dataO1/lib-b.git
commit: deadbeef102958393127912734
Stack still seems to need a specific version for lib-a:
In the dependencies for application-0.1.0.0:
lib-a needed, but the stack configuration has no specified version (no package with that name found,
perhaps there is a typo in a package's build-depends or an omission from the stack.yaml packages
list?)
needed due to application-0.1.0.0 -> lib-b-0.1.0.0
The question now is: can stack somehow figure out specific versions for nested git dependencies without explicitly specifying them? If the project grows, I don't want to manually adjust this every time I update lib-a.
Side note: I'm using NixOS and the nix directive for all three stack projects.
Stack follows the snapshot-based model of package management, meaning that it has some "globally" specified set of packages (of fixed versions) that you can use. In Stack's case this set of packages is called Stackage. The core idea is to have a clearly specified set of packages that you're working with.
So the short answer is no, it cannot figure it out by itself; you have to add them by hand.
But! You only need to specify the packages that are not in the snapshot. E.g., lib-a is likely to depend mostly on packages that are commonly used in Haskell (e.g. base, aeson, ...), and those will already be in Stackage. So even if the project grows, you will only be adding "a few" git refs.
So this doesn't generally tend to be a problem.
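Concretely, spelling out both git dependencies in the project's stack.yaml might look like this (a sketch: the lib-a commit hash is a placeholder, since stack needs every non-Stackage package pinned explicitly):

```yaml
# Sketch: every git dependency, including nested ones, is pinned by hand.
extra-deps:
- git: git@github.com:dataO1/lib-b.git
  commit: deadbeef102958393127912734
- git: git@github.com:dataO1/lib-a.git
  commit: 0000000000000000000000000000000000000000  # placeholder; pin your lib-a commit
```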
Problem: I'm working on a Haskell project that uses stack (+ nix). We have a dependency that takes 10+ minutes to compile. Every time we clean our .stack-work, we have to wait for this huge package to compile, and it's really hurting our project's efficiency. The package name is godot-haskell, and here is how the package is depended upon in our stack.yaml:
extra-deps:
- godot-haskell-0.1.0.0#sha256:9d92ff27c7b6c6d2155286f04ba2c432f96460f448fd976654ef26a84f0e35a6,26290
Question: Is there a way for us to somehow cache this package (in stack, or even in nix) so that it locally never has to get compiled (or has to get compiled at most once, even if the .stack-work directory is deleted)?
For the currently released Stack, the best way to make this happen is to put the extra-dep into a custom snapshot file instead of the extra-deps in the stack.yaml file. (The upcoming Stack release has a feature referred to as "implicit snapshots" which sidesteps this.) You can see an example of this in the Stack repo itself:
https://github.com/commercialhaskell/stack/blob/master/stack.yaml#L1
https://github.com/commercialhaskell/stack/blob/master/snapshot.yaml
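As a sketch, the custom-snapshot approach moves the slow dependency out of stack.yaml into a snapshot file, which stack builds once into its global cache (~/.stack) instead of the project-local .stack-work. The fields below follow the linked examples; the resolver and snapshot name are placeholders:

```yaml
# snapshot.yaml (sketch): packages listed here are cached globally,
# so deleting .stack-work does not force a recompile of godot-haskell.
resolver: lts-XX.XX   # placeholder: your project's current resolver
name: godot-haskell-snapshot
packages:
- godot-haskell-0.1.0.0#sha256:9d92ff27c7b6c6d2155286f04ba2c432f96460f448fd976654ef26a84f0e35a6,26290
```

The project's stack.yaml then points at it with resolver: ./snapshot.yaml and drops the corresponding extra-deps entry.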
I am relatively new to Haskell, stack, GHC, etc.
I have been trying a few projects with GHCJS and haven't been able to build any of them, including reflex-dom-stack-demo. I am getting the following error:
In the dependencies for semigroupoids-5.0.0.4:
tagged-0.8.1 from stack configuration does not match >=0.8.5 && <1 (latest matching version is 0.8.5)
needed due to ghcjs-0.2.0 -> semigroupoids-5.0.0.4
Now I cannot understand whether I misconfigured something or there is truly a broken dependency. I have deleted ~/.stack multiple times throughout my experiments.
I found this bug in Stackage but am unsure whether it is what affects me, and whether my problem would be fixed once the fix moves through.
I am using Ubuntu 17.10.
Any insight is welcome.
The recommended way to create a development environment for reflex-dom is to use try-reflex.
It is tricky to build reflex-dom with stack, because some needed changes have not yet been added to the upstream libraries.
If you really want to build a reflex-dom environment with stack, consider these hints:
Do not use a GHC compiler with a version higher than 8.0.2.
Do not use the reflex/reflex-dom versions from Hackage; they are outdated.
Use the versions of reflex/reflex-dom from GitHub.
This repo contains a stack.yaml file that used to work.
You may also try the stack.yaml file from the answer to this SO question.
I am using an alternative version numbering approach for my projects, and I have encountered strange behavior from cabal and stack that does not let me fully enjoy the benefits of this approach. Both cabal and stack enforce the version to be of the format Int.Int.Int, which does not cover the other version format I use for branches (0.x.x, 1.x.x, 1.0.x, etc.).
If I have the line version: 0.x.x in my .cabal file, I get a Parse of field 'version' failed. error when running cabal build, or Unable to parse cabal file {PROJECT_NAME}.cabal: NoParse "version" 5 when running stack init.
Is there a way to disable version parsing on cabal and stack commands? Is there a flag for it? Or do I have to request this kind of change (adding flags, disabling version parsing) from the developers of cabal and stack?
Why is there any parsing at all? How does it help with building a package? Does cabal or stack automatically increment build numbers on some event? If yes, where could I read more about this? How could I influence the way version numbering incrementation gets implemented in cabal and stack? I want developers of Haskell packages to take into account the possibility of alternative version numbering approaches.
PS. For all interested folks, I want to quickly summarize the idea behind "weird" version numbers such as 0.x.x, 1.x.x, 1.0.x. I use the version numbers with x's to describe streamlines of development that allow code changes, while version numbers such as 1.0.0, 1.1.0, 2.35.46 are used to describe frozen states of development (to be precise, they are used for released versions of software). Note that version numbers such as 0.x.0, 1.x.15, 2.x.23 are also possible (used for snapshots/builds of software); they mean that the codebase has been inherited from branches with version numbers 0.x.x, 1.x.x and 2.x.x respectively.
Why do I need version numbers such as 0.x.x, 1.x.x and 2.x.x at all? In brief, different numbers of x's mean branches of different types. For example, the version number pattern N.x.x is used for support branches, while the pattern N.M.x is used for release branches. The idea behind support branches is that they get created due to incompatibility of the corresponding codebases. Release branches get created due to a feature freeze in the corresponding codebase. For example, branches 1.0.x, 1.1.x, 1.2.x, ... get created as a result of feature freezes (or releases) in branch 1.x.x.
I know this is all confusing, but I worked hard to establish this version numbering approach, and I continue raising awareness of the inconsistencies of version numbering through my presentations and other projects. It all makes sense once you think more about the pitfalls of the semver approach (you can find a detailed SlideShare presentation on the matter by following the link). But I do not want to defend it for now. For the time being, I just want cabal and stack to stop enforcing their, as I perceive them, unjustified rules on my project. Hope you can help me with that.
You can't. The version will be parsed into Cabal's Version type, which is:
data Version = PV0 {-# UNPACK #-} !Word64
             | PV1 !Int [Int]
Stack uses Cabal as a library but has its own Version type:
newtype Version =
Version {unVersion :: Vector Word}
deriving (Eq,Ord,Typeable,Data,Generic,Store,NFData)
Neither cabal nor stack has a way to customize the parsing. You would have to write your own variant of those programs if you wanted to use another version type. But then again, you wouldn't win anything at that point: neither Hackage nor Stackage would recognize your package's version.
So 1.x.x isn't possible at the moment. You could replace x with 99999999 or something similar to mitigate the problem. That being said, it's not clear what cabal install should then install: the 99999999 version, or the latest stable variant?
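The restriction is easy to demonstrate with base's Data.Version, whose grammar (dot-separated runs of decimal digits) matches what cabal's version field accepts; fullVersions here is a hypothetical helper that keeps only complete parses:

```haskell
-- Demonstrates why "0.x.x" is rejected: version components must be
-- decimal digits, so 'x' cannot appear in any position.
import Data.Version (parseVersion, versionBranch)
import Text.ParserCombinators.ReadP (readP_to_S)

-- Keep only parses that consumed the entire input string.
fullVersions :: String -> [[Int]]
fullVersions s = [versionBranch v | (v, "") <- readP_to_S parseVersion s]

main :: IO ()
main = do
  print (fullVersions "0.1.2")  -- parses: [[0,1,2]]
  print (fullVersions "0.x.x")  -- no full parse: []
```

Any tool consuming such versions (cabal, stack, Hackage) relies on this all-numeric shape to compare and order them, which is exactly why a non-numeric component cannot slip through.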
If you can express the semantics, a discussion on the mailing list as well as a feature request might change the behaviour in the (far away) future, but for now, you either have to patch the programs yourself or use another numbering scheme.
Is there a way to disable version parsing on cabal and stack commands? Is there a flag for it?
No.
Or do I have to request this kind of change (adding flags, disabling version parsing) from the developers of cabal and stack?
You can of course ask, but there are so many outstanding issues that you are unlikely to get any traction. You will have to be very convincing -- convincing enough to overturn more than 20 years of experience that says the current versioning scheme is basically workable. Realistically, if you want this to happen you'll probably have to maintain a fork of these tools yourself, and provide an alternative place to host packages using this scheme.
Why is there any parsing at all? How does it help with building a package?
Packages specify dependencies, and for each dependency, specify what version ranges they work with. The build tools then use a constraint solver to choose a coherent set of package/version pairs to satisfy all the (transitive) dependencies. To do this, they must at a minimum be able to check whether a given version is in a given range -- which requires parsing the version number at least a little bit.
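For instance, the dependency bounds the solver works from look like this in a package's .cabal file (illustrative names and ranges):

```cabal
-- Illustrative build-depends section: each entry names a package
-- and the version range this package claims to work with.
build-depends:
    base       >=4.9  && <5
  , bytestring >=0.10 && <0.11
  , aeson      >=1.2  && <1.5
```

The solver must pick one concrete version inside each range, which is only possible if versions parse into comparable numeric components.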
Does cabal or stack automatically increment build numbers on some event? If yes, where could I read more about this?
There is nothing automatic. But you should take a look at the Package Version Policy, which serves as a social contract between package maintainers. It lets one package maintainer say, "I am using bytestring version 0.10.0.1 and it seems to work. I'm being careful about qualifying all my bytestring imports; therefore I can specify a range like >=0.10 && <0.11 and be sure that things will just work, while giving the bytestring maintainer the ability to push security and efficiency updates to my users." without having to pore through the full documentation of bytestring and hope its maintainer had written about what his version numbers mean.
How could I influence the way version numbering incrementation gets implemented in cabal and stack?
As with your previous question about changing the way the community does things, I think modifications to the Package Versioning Policy are going to be quite difficult, especially changes as radical as you seem to be proposing here. The more radical the change, the more carefully motivated it will have to be to gain traction.
I honestly don't know what a reasonable place to take such motivation and discussion would be; perhaps the haskell-cafe mailing list or similar.