Conventions for the Stability field of Cabal packages

Cabal allows for a freeform Stability field:
stability: freeform
The stability level of the package, e.g. alpha, experimental, provisional, stable.
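In a .cabal file this is just a one-line freeform entry; for example (package name and field values here are illustrative):

name:       mylib
version:    0.1.0.0
stability:  experimental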
What are the community conventions about these stability values? What is considered experimental and what is provisional? I see that only a few packages are declared stable. What kind of stability does it refer to: stability of the exposed API, or the ultimate bug-free state of the software?

The field is mostly defunct now, and shouldn't be used. As Max said, it will probably be replaced by something meaningful in the future.
If you're interested in the history, the field originated in a design proposal for the first set of Hierarchical Haskell Libraries. That document describes the original intended meanings for the values.

Currently this field is a very poor guide to the stability of the library, so is mostly ignored. Duncan Coutts (one of the main Cabal and Hackage developers) has said that he eventually plans to replace this field entirely, with something like a social voting system on Hackage.
Personally (and I'm not alone) I just always omit the stability field. Given that it's going to go away, it's probably not worth losing any sleep over what to put into it.

The original intended meanings were:
experimental: the API is unstable. It may change at any time, i.e. with any version number change;
provisional: the API is moving towards stability. It may change at every minor revision, but deprecated versions of changed features should be provided;
stable: the API is stable. Minor releases should only make additions. After a change to the API, deprecated features should be kept for at least one major release.
As the other answers pointed out, the community seems not to be following these guidelines anymore.
As Simon Marlow points out, this is described in a design proposal for the first set of Hierarchical Haskell Libraries. The original link is dead, but you can find a copy on the Wayback Machine.
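In practice, downstream packages expressed their trust in these levels through PVP version bounds rather than through the field itself. As a sketch (package names and bounds are illustrative): a stable dependency can be allowed its whole major series, a provisional one pinned to a single minor revision, and an experimental one pinned exactly.

build-depends:
    stable-lib        >= 1.2 && < 1.3,
    provisional-lib   >= 1.2 && < 1.2.1,
    experimental-lib  == 1.2.0.0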

Related

How do I indicate that a Haskell package is in an alpha/beta/release-candidate stage?

Let us say that I have worked on a Haskell library and am now ready to release a beta version of the software to Hackage, make the repo public on GitHub, etc.
Possible Solutions and why they do not work for me
Use packagename-0.0.0.1-alpha or similar.
The problem here is quite simple: the Haskell PVP Specification does not allow it (emphasis mine; a sketch illustrating the numeric-ordering point follows this list):
The components of the version number MUST be numbers! Historically Cabal supported version numbers with string tags at the end, e.g. 1.0-beta. This proved not to work well because the ordering for tags was not well defined. Version tags are no longer supported and mostly ignored; however, some tools will fail in some circumstances if they encounter them.
Just use packagename-0.* until it is out of alpha/beta (and then use packagename-1.*).
The problem here is twofold:
This method would not work for describing release candidates that come after version 1.
Programmers from other ecosystems, such as Rust's, where it is quite common for a stable library to sit at 0.*, might wrongly assume that this library is stable. (This could be mitigated somewhat with a warning in the README, but I would still prefer a better solution.)
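To see why purely numeric components matter, here is a minimal runnable sketch using Data.Version from base: components compare as numbers, so 0.10 correctly sorts after 0.9, an ordering that string tags like -beta could not guarantee.

import Data.Version (makeVersion)

main :: IO ()
main = do
  -- Ordering is lexicographic on the numeric components,
  -- so it is always well defined:
  print (makeVersion [0,9,0] < makeVersion [0,10,0])  -- True
  print (makeVersion [1,0]   < makeVersion [1,0,1])   -- True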
So, what is the best (and most conventional in Haskell) way to indicate that a library version is in the alpha/beta stage of development, or is a release candidate?
As far as I know, there is no package-wide way to say this. However, you can add a module description giving the stability of the module to each of your modules' documentation:
{-|
Stability: experimental
-}
module PackageName.ModuleName where
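Equivalently, many packages use the line-comment form of the Haddock module header, which Haddock renders into the generated documentation (field values here are illustrative):

-- |
-- Module      : PackageName.ModuleName
-- Stability   : experimental
-- Portability : portable
module PackageName.ModuleName where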

What about Terraform's versioning scheme?

I've been using Terraform for some time now, and a question always comes to mind: which versioning scheme is terraform-core using?
Is it Semantic Versioning, a.k.a. SemVer? If it is, why does an upgrade in the minor version, as when upgrading a project from 0.11.x to 0.12.y, rewrite the Terraform state in the 0.12.x format and make it impossible to downgrade back to 0.11.x?
Another related thing: why did they opt to start their version numbers at 0.x.x rather than 1.x.x? Does it mean anything?
They do use semantic versioning, but their interpretation is a little different from most.
Here's an answer on a GitHub issue from a Hashicorp employee regarding their versioning methodology:
At HashiCorp we take the idea of a v1.0 very seriously, and once Terraform gets there it will represent a strong promise of compatibility because we believe that the configuration language, internal architecture, CLI, and other product features are right for the long haul.
The current state of Terraform is a little more subtle. We do still consider backward-compatibility very important since we know there is a lot of production infrastructure depending on Terraform today. We must therefore make compromises so we can keep making progress towards something we could make v1.0 promises about. While we keep these disruptions to a minimum, they cannot always be avoided, and so we try to be very explicit about them in the changelog and, where applicable, in upgrade guides.
With this in mind, at this time we suggest always referring to the changelog before upgrading since this is our primary means to note any special considerations that apply during an upgrade. We try to reserve significant breaking changes for increases to the second (traditionally "minor") position in the version number, which at this time represent our "major" development milestones as we work towards an eventual v1.0.
Since Terraform is an application rather than a library we do not intend to follow the Semantic Versioning conventions to the letter, but since they do indeed represent common versioning idiom we are likely to follow them in spirit, since of course we wish to be as clear as possible. As #kshep noted, v0 releases are special in the semver conventions, but the meaning of v1.0 in semver is broadly consistent with how we intend to interpret it.
I'm sorry that our version numbering practices caused confusion here; based on this feedback, we will attempt to be clearer about the significance and risk of each release when we announce it and will work on writing some more explicit documentation on what I wrote above.
Ref: https://github.com/hashicorp/terraform/issues/15839#issuecomment-323106524
Better late than never: https://www.hashicorp.com/blog/announcing-hashicorp-terraform-1-0-general-availability
From here on you can expect regular semantic versioning support.

When using SemVer, when should you bump to 1.0.0 (stable)?

The Semantic Versioning Specification (SemVer) defines:
Major version zero (0.y.z) is for initial development. Anything may change at any time. The public API should not be considered stable.
So anything from 1.0.0 onward is considered stable.
When starting a project, version 0.1.0 is normally used and gradually increased, but there comes a point where the project can sit at something like 0.20.3 for months or years while being, in practice, "stable".
I would therefore like to know what could be considered good practice, beyond the spec's criteria, and what arguments to weigh before bumping a project to SemVer 1.0.0.
How are you dealing with this?
If there are no issues or code activity for three months, should the version be bumped?
The development team decides when they have version 1.0.0. It is possible for a project to remain in experimental/prototype mode for very long periods of time. The only thing that matters here is whether the interface and implementation can be considered complete or not. What are your goals? Do you have all the planned v1 features in place? Do you have doubts w.r.t. implementation conformance to the documented interface?
Some teams have workflows that don't map onto the full SemVer spec, but use packaging or release tooling that requires a SemVer version string. To stay within the letter of the spec, they never release version 1.0.0, so their version bumps literally don't carry full SemVer semantics. See item 4 of the spec (major version zero).
When I see SomeLib.0.20.3.pkg I assume it is not stable. A breaking change can occur on the very next minor-field bump, whether or not there have ever been any such changes in the past. SemVer is a contract that, in this particular case, allows the SomeLib developers to break things without notice.
There is nothing in the spec that precludes a team from issuing a 1.0.0 and then returning to a long sequence of 0.x.x releases if they desire to operate that way. This is similar to, but not exactly the same as, using prerelease tags. When you issue 1.0.1-prerelease you are at least implying the intent to release a bug-fix derived from 1.0.0, but the prerelease tag is a warning label that you are not yet certain of the safety of your changes. Following 1.0.0 with a sequence of 0.x.x releases says you might not even be deriving from 1.0.0 anymore; basically, all bets are off again.
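As a hand-rolled sketch (not any library's API) of the precedence rules discussed above, the following compares two versions the way the spec describes: numeric fields first, with a prerelease sorting before the corresponding release.

data SemVer = SemVer
  { major, minor, patch :: Int
  , prerelease          :: Maybe String  -- Nothing for a full release
  } deriving (Show, Eq)

-- Simplified SemVer precedence: numeric fields compare first;
-- a prerelease version sorts before the corresponding release.
cmpSemVer :: SemVer -> SemVer -> Ordering
cmpSemVer a b =
  compare (major a, minor a, patch a) (major b, minor b, patch b)
    <> cmpPre (prerelease a) (prerelease b)
  where
    cmpPre Nothing  Nothing  = EQ
    cmpPre Nothing  (Just _) = GT  -- release outranks its prerelease
    cmpPre (Just _) Nothing  = LT
    cmpPre (Just x) (Just y) = compare x y  -- the spec actually splits identifiers on '.'

For example, cmpSemVer (SemVer 1 0 1 (Just "prerelease")) (SemVer 1 0 1 Nothing) is LT, matching the 1.0.1-prerelease example above.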
If you require any further elucidation on this matter, please ask, I am happy to try to answer any questions regarding SemVer.

Non-maintainer uploads to Hackage

I have a package on Hackage which depends on a third-party package that doesn't build on newer versions of GHC (>= 7.2). The problem with the other package can be solved with just a one-line patch (a LANGUAGE pragma). I sent the patch upstream twice, but didn't receive any feedback. The problem is that my package is not installable either until the dependency is fixed.
I could upload a fixed version of the dependency package myself (with a minor version bump), but I'd like to hear the community's attitude towards such non-maintainer uploads. Again, I don't want to change the library's interface; I only want to add a pragma to make it buildable again.
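For concreteness, the kind of one-line fix described might look like this; the module, class, and specific extension are invented for illustration, but the pattern (a missing LANGUAGE pragma) is the same:

{-# LANGUAGE FlexibleInstances #-}  -- the one-line fix
module Data.ThirdPartyLib where

class Pretty a where
  pretty :: a -> String

-- An instance head like [Char] (rather than [a]) is rejected unless
-- FlexibleInstances is enabled, so the pragma above is load-bearing.
instance Pretty [Char] where
  pretty = id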
Are non-maintainer uploads to Hackage allowed and tolerated?
When is a fork of the package on Hackage a better approach?
Package uploads by non-maintainers are allowed (there may be license issues, but most packages on Hackage, if not all, have licenses permitting this), but of course they are not usually done. They are tolerated if done in good faith and with reasonable procedure.
If you contact the maintainer and don't get any response within n weeks (where I'm not sure what the appropriate value of n is, but not less than 3, I'd say), uploading a new version yourself becomes an option; discussing it on the mailing lists first seems more prudent, however. If the package looks abandoned, even taking over maintainership may be the appropriate action, after again contacting the maintainer and giving her/him time to respond, but that should definitely be discussed with the community (on haskell-cafe, for example).
Whether to prefer a non-maintainer upload or a fork must be left to your judgment; personally I tend to believe forks step on fewer people's toes.
But a better founded reply would be possible if we knew which package is concerned and could look at the concrete situation.
Forking is intrusive for a package that you suspect is still maintained but whose author is temporarily missing. By intrusive, I mean that other programmers might pick up your fork and then not go back to the mainline once the original author has resumed work on it.
For packages where the original author has left the Haskell community, my personal opinion is that it's better to fork the package if you are going to develop it further. Forking prevents succession problems, such as those that happened with Parsec, where for some time many developers didn't want to update because the successor was slower and less well documented than the original.
In all cases asking on the Cafe is best; even though some people have chosen not to follow it, it is still the centre of the Haskell community.
For the particular case in the question: while it is nice if things on Hackage compile, there is no rule that says they have to. A package that depends on a broken package could simply put change instructions for the broken dependency on its front page, e.g. "This package depends on LambdaThing-0.2.0, which is broken; to fix LambdaThing, add ... to the file Lambda.hs".
I would say it's a very good idea to consult the mailing lists regarding the specific package and the specific person who is missing. I took over the haskell-src-meta package from its original owner, but only after consulting the lists and IRC, where people assured me that Matt Morrow had been missing for months and no one knew why.
In my opinion, package ownership should only be changed where there is a consensus to do so, or at the very least after efforts have been made to find one. My understanding is that the development version of the Hackage software has access controls so that only administrators can make this kind of intervention.

Technical considerations in dropping support for old compiler versions?

I work on a project that's distributed for free in both source and binary form, since many of our users need to compile it specifically for their system. This necessitates a degree of consideration in maintaining backwards compatibility with older host systems, and primarily with their compilers.
Some of the cruftiest of these, such as GCC 3.2 (2003!), ICC 9, MSVC (almost abandonware, not C++!) and Sun's compiler (in some old version that we still care about), lack support for language features that would make development much easier. There are definitely also cases where enabling users to stick with these compilers costs them a lot of performance, which runs counter to the goals of what we're providing.
So, at what point do we say enough is enough? I can see several arguments for ceasing to support a particular compiler:
Poor performance of generated code (relative to newer compiler versions)
Lack of support of language features
Poor availability on development systems (more for the proprietary than GCC, but there are sysadmin issues with getting old GCC, too)
Possibility of unfixed bugs (we've isolated ICEs in ICC and xlC, what else might be lurking?)
I'm sure I've missed some others, and I'm not sure how to weight them. So, what arguments have I missed? What other technical considerations come into play?
Note: This question was previously more broadly phrased, leading many respondents to point out that the decision-making is fundamentally a business process, not an engineering process. I'm aware of the 'business' considerations, but that's not what I'm looking for more of here. I want to hear experiences from people who've had to support older compilers, or made the choice to drop them, and how that's affected their development.
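The question concerns C and C++ compilers, but the cost shows up the same way in Haskell projects like the ones discussed above: supporting an old compiler means carrying compatibility shims. A minimal sketch of such a shim (MIN_VERSION_base is a Cabal-supplied macro; the cutoff shown matches when base gained modifyIORef'):

{-# LANGUAGE CPP #-}
module Compat (modifyIORef') where

import Data.IORef

#if !MIN_VERSION_base(4,6,0)
-- modifyIORef' only reached base with GHC 7.6; older compilers
-- need this strict replacement carried in the tree.
modifyIORef' :: IORef a -> (a -> a) -> IO ()
modifyIORef' ref f = do
  x <- readIORef ref
  let x' = f x
  x' `seq` writeIORef ref x'
#endif

Every such gate is a small, permanent tax; counting them is one concrete way to weigh the "time and work saved" side of the decision.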
Your question is conceptually the same as web developers who want to know when they should stop supporting Internet Explorer 6. The answer is that you have to do research.
1. How many people use the older compilers?
2. How many use the newer ones?
3. How many will be willing to upgrade?
4. How many users will you lose? (This can be calculated from the answers to 1, 2, and 3.)
5. How much time and work would it save you to drop support for the older compilers?
Basically, your decision comes down to comparing the answers to 4 and 5. From your description this seems to be an open-source project, but if it were a business you could compare them numerically: if the money lost is less than the money saved, drop support. If it's not a business, it's a bit more complicated, as you have to estimate the human cost, which can be tricky.
Well, the usual way to go about this is first to ask. I assume you have a mailing list or a webpage or something to facilitate that. So ask: who will be affected, and how hard would it be to upgrade, if we drop support for any of these compilers? After doing so, you'll have an idea of whether it is worth the hassle to keep supporting them.
It might also be kind to tag the last working version for each compiler version you decide to drop support for, so that anyone who really cares can keep using that old version.
I don't think it has much to do with the efficacy of old compiler tech. It's a business decision, and it really boils down to whether you want to keep your customers or lose them. Customers don't deal in tech; they deal in business and business decisions.
Ideally you want to define some kind of metric based on how many customers you have, which compiler versions they use, and the cost of maintaining support for each compiler.
Fundamentally, you need to be careful about when and how you tell your customer base that you're going to retire part of your product set, and about how you tell them. Don't just drop it in their lap; plan it.
You need an internally approved, controlled policy, and then start rolling it out, perhaps announcing it at user group meetings. Ensure a decent length of time (two years is good: one year to let customers complete current implementations, plus some slack) before you drop support, and have a support framework in place to help customers migrate in time.
How you plan this will define how your customers react. A few years ago, I was working in a software house which sold a really complex, high-end product for controlling electricity networks. The product sold for £2m for the complete package, and each customer signed up for a 25-year support contract. We were offering it on AIX, Solaris, Tru64 and HP-UX, but at some point we decided to rationalise the hardware we supported down to AIX, with which I think we had a deal. One of the customers, a Solaris shop, got really upset about this, and for the next 4 years we never heard a word from them. No phone calls, patches, on-site audits. Nothing.
The reason we decided to change was a Six Sigma project which indicated we would save about £19m a year by rationalising the infrastructure to AIX and NT. But in the end we ended up fxxking off one of our primary customers and virtually destroying our user group community.
The decision was made hastily, and it backfired. So I think your best idea is to plan it.
