I am trying to move a project from cabal to stack. It was painless up until I tried to make alex use a custom wrapper. Now I'm stuck.
I used to use cabal build --alex-options=".." to pass the options through, but that option is not there with stack (stack version 1.1.2 at least). Similarly, I used to pass debug flags to happy, but that no longer seems possible either. Is there some other way to achieve this and still use stack?
The point of moving this project is to help collaborators be able to reproduce my build effortlessly (it was already some work to convince them to use Haskell).
I don't get the point of Stack.
I used to write my Haskell code in my favourite environment, run or compile it with GHC(i), and, if necessary, install packages using Cabal. Now that is apparently no longer the way to go, but I don't understand how to work with Stack. So far, I have only understood that I need to write stack exec ghci instead of ghci to start a REPL.
Apart from that, the docs always talk about 'projects' for which I have to write some yaml files. But I probably don't have any project -- I just want to launch a GHCi REPL and experiment a bit with my ideas. At the moment, this fails because I am unable to get the packages I want to work with installed.
How is Stack meant to be used? Is there any explanation of its use cases? Where do I find my use case among them?
Edit: My confusion comes from the fact that I want to work with some software (IHaskell) whose installation guide explains installation via stack. Assume I already have GHC(i) installed and maintain its package base myself, e.g. using Cabal. How would I have to set up stack.yaml to make stack use my globally installed GHC(i) for that project?
First, notice that stack uses its own package database, independent of cabal's. AFAIK they can't be shared... hence, if you run stack build it will download packages (and, by default, the compiler) into its own package database.
Nevertheless, stack does allow you to use a system-wide compiler (though not system-installed libraries). To do so, your stack.yaml must contain the following two lines:
resolver: lts-XX.XX   # keep reading below
system-ghc: True
The available Stackage snapshots are listed at https://www.stackage.org/. Each snapshot is built with a specific compiler version, so be sure to pick a snapshot whose compiler matches the GHC you have on your system. If your system GHC happens to be newer than any LTS snapshot, you can set allow-newer: true in stack.yaml.
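Putting it together, a minimal stack.yaml might look roughly like this (the snapshot name is only a placeholder; pick one that matches your GHC):

resolver: lts-9.21    # placeholder: a snapshot built with your system's GHC version
system-ghc: true      # use the GHC already on your PATH instead of downloading one
allow-newer: true     # as mentioned above, only if your GHC is newer than any published snapshot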
Now, if having a separate package database managed by stack feels wrong to you, note that you can build the project with cabal too, since a stack project is, at the end of the day, still described by a cabal file. It probably won't work out of the box when built with cabal, but you can modify the cabal file to pin exactly the package versions of the snapshot you are using.
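For instance (package names and versions below are made up), that means replacing loose bounds with exact pins in the build-depends section of the cabal file:

build-depends: base ==4.9.1.0
             , text ==1.2.2.2
             , aeson ==1.1.2.0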
In summary:
You can use your system-wide GHC.
You cannot share libraries installed with cabal.
You can use cabal to build the project, probably after modifying the ihaskell.cabal file to match the versions in the Stackage snapshot.
Long story short, I'd like some guidance on the (best) way to get Haskell working on Arch Linux.
By working I mean everything: the ghci command-line tool, installing packages I don't have (such as vector-space, which this answer to a question of mine refers to), and anything else an obstinate Haskell learner could need.
The Arch Linux wiki page on Haskell lists three (alternative?) packages for making Haskell work on the system, namely ghc, cabal-install, and stack. I have the first and the third installed on my system, but I think I must have installed the latter later (unless it's a dependency of ghc) while tinkering around (probably in relation to Vim as a Haskell IDE). Furthermore, I have a huge number of haskell-* packages installed (why? Who knows? As a learner I must have come multiple times to the point of saying uh, let's try this!).
Are there any pros and cons ("cons", ahah) of each of those packages?
Can they all be used with/without conflicts?
Does any of them make any other superfluous?
Is there anything else I should be aware of that, based on what I've written, I appear to be ignorant of?
Arch Linux's choice of providing dynamically linked libraries in their packages tends to get in the way if you are looking to develop Haskell code. As an Arch user myself, my default advice would be to not use Arch's Haskell packages at all, and instead to install whatever you need through ghcup or Stack, starting from the guidance in their respective project pages.
You are basically there. Try the following:
ghci: If you get the Haskell REPL then it works.
stack ghci: Again you should get the Haskell REPL. There are a lot of versions of GHC, and stack manages these along with the libraries. Whenever you use a new version of GHC stack will download it and create a local installation for you.
stack is independent of your Linux package manager. The trouble is that your distro will only have the Haskell libraries it actually needs for any applications it has integrated, and once you step outside of those you are in dependency hell with no support. So I recommend that you avoid your distro Haskell packages. stack does everything you need.
If you installed stack from your Linux package manager then you might want to uninstall it and use a personal copy (e.g. in ~/.local/bin) instead. Then you can always run stack upgrade to make sure you have the latest version.
Once you have stack going, create a project by saying stack new my-project simple. Then go into the project folder and start editing. You can work with just a .hs file and GHC if you really want, but it's painful; you will do much better with stack, even if you are just messing around.
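A first session, as a rough sketch (the project name is just an example), might look like:

stack new my-project simple    # scaffold a minimal project
cd my-project
stack build                    # fetches GHC if needed, then compiles
stack ghci                     # REPL with the project's modules loaded
stack exec my-project          # run the built executable (the simple template names it after the project)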
You'll also need an editor. Basic functionality like syntax highlighting is available in pretty much everything, but once you get past Towers of Hanoi you are going to want something better. I use Atom with ide-haskell-ghcide. This uses the Haskell Language Server under the hood, so you will need to install that too. I know a bunch of other editors have HLS support, but I don't have experience with them.
I am using an alternative version-numbering approach for my projects. I have encountered strange behaviour from cabal and stack that does not allow me to fully enjoy the benefits of this approach. Both cabal and stack enforce the version to be of the format Int.Int.Int, which does not cover the other version format I use for branches (0.x.x, 1.x.x, 1.0.x, etc.).
If I have the line version: 0.x.x in my .cabal file, I get a Parse of field 'version' failed. error when running cabal build, or Unable to parse cabal file {PROJECT_NAME}.cabal: NoParse "version" 5 when running stack init.
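For reference, the offending part of the file looks roughly like this (the package name is a placeholder):

name:    my-project
version: 0.x.x   -- the line that neither tool will parse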
Is there a way to disable version parsing on cabal and stack commands? Is there a flag for it? Or do I have to request this kind of change (adding flags, disabling version parsing) from the developers of cabal and stack?
Why is there any parsing at all? How does it help with building a package? Does cabal or stack automatically increment build numbers on some event? If yes, where could I read more about this? How could I influence the way version-number incrementation gets implemented in cabal and stack? I want developers of Haskell packages to take into account the possibility of alternative version-numbering approaches.
PS. For anyone interested, I want to quickly summarize the idea behind "weird" version numbers such as 0.x.x, 1.x.x, 1.0.x. I use version numbers with x's to describe streamlines of development that still allow code changes, while version numbers such as 1.0.0, 1.1.0, 2.35.46 describe frozen states of development (to be precise, they are used for released versions of software). Note that version numbers such as 0.x.0, 1.x.15, 2.x.23 are also possible (used for snapshots/builds of software); they mean that the codebase has been inherited from the branches with version numbers 0.x.x, 1.x.x and 2.x.x respectively.
Why do I need version numbers such as 0.x.x, 1.x.x and 2.x.x at all? In brief, different numbers of x's denote branches of different types. For example, the version-number pattern N.x.x is used for support branches, while the pattern N.M.x is used for release branches. The idea behind support branches is that they are created because the corresponding codebases are incompatible. Release branches are created when features are frozen in the corresponding codebase. For example, branches 1.0.x, 1.1.x, 1.2.x, ... are created as a result of feature freezes (or releases) in branch 1.x.x.
I know this is all confusing, but I worked hard to establish this version-numbering approach, and I continue to raise awareness of the inconsistencies of version numbering through my presentations and other projects. It all makes sense once you think more about the pitfalls of the semver approach (you can find a detailed SlideShare presentation on the matter by following the link). But I do not want to defend it for now. For the time being, I just want cabal and stack to stop enforcing what I perceive as unjustified rules on my project. Hope you can help me with that.
You can't. The version will be parsed into Cabal's Version type, which is:
data Version = PV0 {-# UNPACK #-} !Word64
             | PV1 !Int [Int]
Stack uses Cabal as a library but has its own Version type:
newtype Version =
Version {unVersion :: Vector Word}
deriving (Eq,Ord,Typeable,Data,Generic,Store,NFData)
Neither cabal nor stack has a way to customize the parsing. You would have to write your own variant of those programs if you wanted to use another version type. But then again, you wouldn't gain anything at that point: neither Hackage nor Stackage would recognize your package's version.
So 1.x.x isn't possible at the moment. You could replace x with 99999999 or something similar to mitigate the problem. That being said, it's not clear what cabal install should then install: the 99999999 version, or the latest stable variant?
If you can express the semantics, a discussion on the mailing list as well as a feature request might change the behaviour in the (far away) future, but for now, you either have to patch the programs yourself or use another numbering scheme.
Is there a way to disable version parsing on cabal and stack commands? Is there a flag for it?
No.
Or do I have to request this kind of change (adding flags, disabling version parsing) from the developers of cabal and stack?
You can of course ask, but there are so many outstanding issues that you are unlikely to get any traction. You will have to be very convincing -- convincing enough to overturn more than 20 years of experience that says the current versioning scheme is basically workable. Realistically, if you want this to happen you'll probably have to maintain a fork of these tools yourself, and provide an alternative place to host packages using this scheme.
Why is there any parsing at all? How does it help with building a package?
Packages specify dependencies, and for each dependency, specify what version ranges they work with. The build tools then use a constraint solver to choose a coherent set of package/version pairs to satisfy all the (transitive) dependencies. To do this, they must at a minimum be able to check whether a given version is in a given range -- which requires parsing the version number at least a little bit.
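As a rough sketch of the kind of check involved, using the Cabal library's Distribution.Version module (module layout and function names here reflect recent Cabal releases, so treat that as an assumption for older ones):

import Distribution.Version
       (mkVersion, withinRange, orLaterVersion, earlierVersion, intersectVersionRanges)

-- Is version 0.10.0.1 inside the range >=0.10 && <0.11 ?
main :: IO ()
main = print (withinRange (mkVersion [0,10,0,1]) range)   -- prints True
  where
    range = intersectVersionRanges (orLaterVersion (mkVersion [0,10]))
                                   (earlierVersion (mkVersion [0,11]))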
Does cabal or stack automatically increment build numbers on some event? If yes, where could I read more about this?
There is nothing automatic. But you should take a look at the Package Versioning Policy, which serves as a social contract between package maintainers. It lets one package maintainer say, "I am using bytestring version 0.10.0.1 and it seems to work. I'm being careful about qualifying all my bytestring imports; therefore I can specify a range like >=0.10 && <0.11 and be sure that things will just work, while giving the bytestring maintainer the ability to push security and efficiency updates to my users", without having to pore through the full documentation of bytestring and hope its maintainer has written about what the version numbers mean.
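In a .cabal file that contract shows up as ordinary range constraints on the dependencies, for example:

build-depends: base       >=4.7  && <5
             , bytestring >=0.10 && <0.11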
How could I influence the way version numbering incrementation gets implemented in cabal and stack?
As with your previous question about changing the way the community does things, I think modifications to the Package Versioning Policy are going to be quite difficult, especially changes as radical as you seem to be proposing here. The more radical the change, the more carefully motivated it will have to be to gain traction.
I honestly don't know what a reasonable place to take such motivation and discussion would be; perhaps the haskell-cafe mailing list or similar.
I recently finished the book Learn You a Haskell and now I would like to apply my newly acquired knowledge by building a yesod application.
However I am not sure about how to get started.
There seem to be two options for setting up a yesod project: one is with Stack, and the other is with cabal sandboxes.
But what are the differences (If any?) and the similarities between them? Does one count as best practice whereas the other doesn't?
The yesod quickstart suggests using stack; is this fine, or should I use a cabal sandbox?
There are actually three different packages that are being talked about here.
cabal-install is the current stable binary to build your applications.
stack was just released to the public recently. I believe it is trying to replace cabal-install as a better, more convenient tool. At the very least, it is showing the Haskell community a different way of doing things.
Cabal is the library that both cabal-install and stack are built on.
As for the differences between the first two tools:
cabal-install is a mature application used nearly everywhere within the Haskell community (at least in open source, I have no idea what people are doing behind closed doors).
stack is still a new (at least to the public) application used in some newer projects. Some more information can be found here. But some of the highlights are:
running stack build in a project's directory will install GHC (the Haskell compiler) as well as the dependencies needed for the project.
stack, by default, runs off of Stackage, which is a curated subset of Hackage, meaning you can expect the different packages to play nicely with each other and leading to reproducible builds.
You can still fall back on Hackage should you choose to (see the sketch just after this list).
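For example, a package that is on Hackage but missing from your snapshot can be pulled in through extra-deps in stack.yaml (package and version here are only illustrative):

resolver: lts-9.21
extra-deps:
  - acme-missiles-0.3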
The great thing about these two applications is that they can be used by different people on the same project. If you decide you want to use cabal-install with sandboxes, and someone comes along who wants to help with your project, they can just add the files that stack needs and use stack while you continue to use cabal-install. Or vice versa.
Here is one person's experience after using stack for the first time. They claim that it is a little bit easier to get started with because there are a couple fewer steps required. If nothing else, the post highlights the pros and cons of each tool.
Note: I'm still fairly new to Haskell, and have never actually used stack. I've actually been told to stay away from it unless building something in yesod.
Edit: As stated in a comment under this answer, I believe I have misrepresented what people have told me about stack. The comments people gave me when I asked whether I should switch over to stack were more along the lines of: if you are comfortable enough using cabal sandboxes, there is no reason to switch over to stack unless you are having issues.
The lead developer of Yesod (Michael Snoyman) is actively involved with the Stack tool, so I would recommend setting up Yesod with Stack. Also, Yesod has quite a complex set of dependencies, and using Stackage as the default curated source (which Stack does by default) helps a great deal with the installation process.
Also read this post to understand how Stack differs from cabal.
What is the best practice for adding a compilation flag when building the Linux kernel? I'm interested to know this both generally because I encounter the same issue from time to time and specifically for enabling kenter, kdebug and kleave traces in http://lxr.free-electrons.com/source/security/keys/internal.h.
When you compile a module you can use CFLAGS_MODULE, but that doesn't seem to be injected into the subtree builds when you compile the whole kernel.
Currently I just add a define directly to the source file, which is not a very lean solution.
It turned out to be quite easy; I had actually used this before but had simply forgotten it. You can just use make CFLAGS_KERNEL="<flags>".
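For the keys code specifically, the kenter/kdebug/kleave macros in security/keys/internal.h are guarded by a preprocessor define (in the trees I've checked it is __KDEBUG, but verify against your own source), so something like the following should enable them for a full kernel build:

# assumes the guard macro is __KDEBUG; check security/keys/internal.h first
make CFLAGS_KERNEL="-D__KDEBUG"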