GHC warns when a package depends on different instances of the same package via its dependencies, e.g.:
Configuring tasty-hspec-1.1.5.1...
Warning:
This package indirectly depends on multiple versions of the same package.
This is very likely to cause a compile failure.
package hspec-core (hspec-core-2.5.5-H06vLnMfEeIEsZFdji6h0O) requires
clock-0.7.2-9qwmBbNbGzEOSffjlyarp
package tasty (tasty-1.1.0.3-I8Vu9v0lHj8Jlg3jpKXavp) requires
clock-0.7.2-Cf9UTsaN2AjEpBnoMpmgkU
Two things are unclear to me with respect to this warning:
If GHC warns, and the compile doesn't fail, is everything fine? That is, could subtly conflicting instances of the same package still cause bad behaviour? (I'm imagining something like a type (Int, Int) in the public interface, with the two instances of the package disagreeing about the order of the fields.)
Is there a way to make GHC fail for this warning?
It's not GHC that's warning you about multiple package versions. GHC just compiles the packages that were specified... which hardly anybody ever does by hand; people let Stack or Cabal do it for them. And in this case it's Cabal giving the warning message.
If different versions cause a problem, you will in practice almost always see it at compile time. It's most often a missing-instance error, because you're e.g. trying to use the class Foo from pkg-1.0 with the type Bar from pkg-2.0. A direct version mismatch of a data type in a public interface can also happen.
Theoretically, I think it would also be possible to have an error like (Int, Int) meaning two different things, which the compiler would not catch. However, this kind of change is asking for trouble anyway. Whenever the order of some data fields is not completely obvious and might change in the future, a record should be used, so that the compiler can catch any mismatch, as in the sketch below. (This is largely orthogonal to the different-versions-of-same-package issue.)
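As a minimal sketch of that point (the package and all names here are hypothetical): a tuple in a public interface carries no field names, so two builds of a package can disagree about the order silently, whereas a record makes any mismatch visible to the compiler.

module BoundsSketch where

-- Version 1 of a hypothetical package exports a bare tuple:
boundsV1 :: (Int, Int)   -- intended as (lower, upper)
boundsV1 = (0, 10)

-- If version 2 silently swaps the meaning to (upper, lower), the
-- type (Int, Int) still unifies and the compiler sees nothing wrong.
-- A record removes the ambiguity; reordering or renaming its fields
-- changes the interface in a way the compiler can catch:
data Bounds = Bounds { lower :: Int, upper :: Int }

boundsV2 :: Bounds
boundsV2 = Bounds { lower = 0, upper = 10 }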
If you want to be safe from any version-mismatch issues, you can use Stack instead of Cabal. I think this is a good part of the reason why many Haskellers prefer Stack.
I often use typed holes to define functions whose interface I know but which I intend to implement later. When I run cabal build, it stops at the first module it encounters with typed holes, hiding type errors that may exist in other modules.
Is there any way of typechecking a project, and only failing for typed holes after the entire project has been built and typechecked?
#chi very politely told me to Read The Docs. There seem to be two ways of configuring typed holes:
The default behaviour, which makes typed holes a compile-time error. This causes compilation to stop after the first module encountered with typed holes, hiding both type errors and typed holes in other modules.
-fdefer-typed-holes, which issues a warning upon encountering a typed hole and goes on to compile the rest of the project. If no other errors are encountered, a binary is built, with the typed holes demoted to runtime errors. On the one hand, all holes show up in the compiler output; on the other, it is less than ideal for holes to allow the build to succeed.
There is, however, a slightly hacky combination of flags that gets (almost) the desired behaviour:
-fdefer-typed-holes -Werror=typed-holes
This typechecks each module in the project, stopping for any (non-hole) type errors. If there are none in a given module, the build prints out that module's typed holes and goes on to typecheck the rest of the project. The build succeeds only if it encounters no type errors or typed holes.
It would be even nicer if we could get type errors and typed holes in the same output, but you can't have everything.
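For concreteness, here is a minimal module sketch (the function and hole names are made up) showing how the flag combination behaves; the same flags can go in the ghc-options field of a .cabal file instead of a pragma:

{-# OPTIONS_GHC -fdefer-typed-holes -Werror=typed-holes #-}
module Example where

-- _greeting is a typed hole: with the pragma above it is deferred
-- to a warning and then promoted back to an error, so the whole
-- module (and the rest of the project) still gets typechecked.
greet :: String -> String
greet name = _greeting ++ name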
This excerpt from the docs says it all.
A “Found hole” error usually terminates compilation, like any other type error. After all, you have omitted some code from your program.
Nevertheless, you can run and test a piece of code containing holes by using the -fdefer-typed-holes flag. This flag defers errors produced by typed holes until runtime, and converts them into compile-time warnings. These warnings can in turn be suppressed entirely by -Wno-typed-holes.
The Rust community has a fairly detailed description of their interpretation of Semantic Versioning.
The PureScript community has this, which includes:
We should write a semver tutorial for beginners, specifically its use in PureScript and the way we rely on ~-versions.
The odd thing is that, looking at an assortment of 65 random-ish PureScript libraries, they all use ^-versions rather than ~-versions, but I have been unable to find any newer documentation, and we recently had our build broken due to a mismatch in expectations.
Does the PureScript community have a reasonably consistent interpretation of semver, specifically regarding what is or isn't considered a breaking change? If so, what is it?
We don't have an exhaustive list anywhere, no. Now's as good a time as any to start one!
Taking advantage of features that require a newer compiler than when the current version was released.
Adding a dependency.
Removing a dependency.
Bumping a dependency's major version.
Deleting or renaming a module.
Removing a member (that means anything - type, value, class, kind, operator) from a module (either by hiding the export or deleting it).
Changing a type signature of an existing function or value in a way that means it won't unify with the previous version (so it is allowable to make types more general, but not less so; see the sketch below this list).
Adding, removing, or altering the kind of type variables for a type.
Adding, removing, or altering data constructors for a type (unless the type does not export its constructors).
Adding or removing members of a type class declaration.
Changing the expected type parameters of a class.
Adding or altering functional dependencies of a class.
Changing the laws of a class.
Removing instances of a class.
Pretty much anything other than adding new members (or re-exports) to a module is considered a breaking change!
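Here is a small Haskell sketch of the unification rule from the list above (the same reasoning carries over to PureScript; the function names are made up):

module SemverSketch where

-- Non-breaking (minor bump): the new signature is strictly more
-- general than the old one, so every old use site still unifies.
total :: Num a => [a] -> a      -- was: total :: [Int] -> Int
total = sum

-- Breaking (major bump): the new signature is less general; any
-- caller that used the old one at, say, Double no longer compiles.
average :: [Double] -> Double   -- was: Fractional a => [a] -> a
average xs = sum xs / fromIntegral (length xs)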
Occasionally we've made changes that are technically breaking (due to type signature changes), but have done so to fix something that was completely unusable without the fix. In those cases they've gone out as patch bumps, but such cases are very rare. They tend only to occur when the FFI is involved.
Re: ~ vs ^... I think at the time the linked page was made there wasn't the option to use ^ in Bower (or it didn't default to that at least). ^ is the preferred/recommended range to use for libraries now.
I see in this answer and this one that "everything will break horribly" and Stack won't let me replace base, but it will let me replace bytestring. What's the problem with this? Is there a way to do this safely without recompiling GHC? I'm debugging a problem with the base libraries and it'd be very convenient.
N.B. when I say I want to replace base I mean with a modified version of base from the same GHC version. I'm debugging the library, not testing a program against different GHC releases.
Most libraries are collections of Haskell modules containing Haskell code. The meaning of those libraries is determined by the code in the modules.
The base package, though, is a bit different. Many of the functions and data types it offers are not implemented in standard Haskell; their meaning is not given by the code contained in the package, but by the compiler itself. If you look at the source of the base package (and the other boot libraries), you will see many operations whose complete definition is simply undefined. Special code in the compiler's runtime system implements these operations and exposes them.
For example, if the compiler didn't offer seq as a primitive operation, there would be no way to implement seq after the fact: no Haskell term that you can write down will have the same type and semantics as seq unless it uses seq (or one of the Haskell extensions defined in terms of seq). Likewise many of the pointer operations, ST operations, concurrency primitives, and so forth are implemented in the compiler itself.
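A quick sketch of why: seq's type is easy to write down, but without seq itself (or bang patterns, which desugar to it) no definition can reproduce its forcing behaviour.

module SeqSketch where

mySeq :: a -> b -> b
mySeq _ b = b
-- Without seq or bang patterns, any definition must leave the first
-- argument unevaluated. So mySeq undefined () evaluates to (),
-- whereas seq undefined () throws: the strictness of seq cannot be
-- recovered after the fact from ordinary Haskell code.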
Not only are these operations typically unimplementable, they also are typically very strongly tied to the compiler's internal data structures, which change from one release to the next. So even if you managed to convince GHC to use the base package from a different (version of the) compiler, the most likely outcome would simply be corrupted internal data structures with unpredictable (and potentially disastrous) results -- race conditions, trashing memory, space leaks, segfaults, that kind of thing.
If you need several versions of base, just install several versions of GHC. It's been carefully architected so that multiple versions can peacefully coexist on a single machine. (And in particular installing multiple versions definitely does not require recompiling GHC or even compiling GHC a first time, which seems to be your main concern.)
I maintain the augeas FFI library at http://hackage.haskell.org/package/augeas
Recently augeas added an aug_to_xml function that includes a parameter of type xmlNode from libxml2. It looks like libxml is the Haskell FFI binding for libxml2, but it hasn't been updated in a while, and it doesn't look to have Debian packaging, so I'm hesitant to add it as a dependency of the augeas FFI library.
So my question is: when I add FFI support for this function, would it be better to add a dependency on libxml, which might lead to packaging problems later on, or to use something like an opaque type as per the FFI cookbook, so that there's no inter-library dependency?
If I go with the opaque type approach, and users would like to use libxml on their own, could they cast my type to a Text.XML.LibXML.Node?
An opaque type is probably the best route here if you want to include the function, but I'd be sceptical of including a function that is only usable with an unsafe coercion to another library's type (which would indeed be possible, yes, but would rely on the internal representation of the libxml binding's Node not changing, which is risky).
I would suggest simply not adding the function; if someone wants to use it, they can easily import it themselves, and if your binding is appropriately direct, it'll probably be easy for them to use it with your binding's types. Of course, if it's likely to be commonly used, you could easily bundle this up into a package by itself, though I highly doubt that a package last updated in 2008, which doesn't even build on GHC 6.12 onwards, is going to get much use.
So, I'd just omit the function from your binding, or use an opaque type if you really do want to include it.
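A minimal sketch of the opaque-type route (the module name is hypothetical, and you should check the exact aug_to_xml prototype against your augeas headers before relying on this):

{-# LANGUAGE ForeignFunctionInterface #-}
{-# LANGUAGE EmptyDataDecls #-}
module System.Augeas.ToXml where

import Foreign.C.String (CString)
import Foreign.C.Types (CInt (..), CUInt (..))
import Foreign.Ptr (Ptr)

-- Opaque stand-ins: Haskell code can pass these pointers around
-- but never inspects what they point to, so no libxml binding
-- is needed as a dependency.
data Augeas
data XmlNode

foreign import ccall unsafe "aug_to_xml"
  c_aug_to_xml :: Ptr Augeas          -- augeas handle
               -> CString             -- path expression
               -> Ptr (Ptr XmlNode)   -- out-parameter for the tree
               -> CUInt               -- flags
               -> IO CInt             -- status code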
GHC 7.0.3 (from the Ubuntu repos) produces warnings of this kind during compilation:
SpecConstr
Function `$j_se6a{v} [lid]'
has one call pattern, but the limit is 0
Use -fspec-constr-count=n to set the bound
Use -dppr-debug to see specialisations
I've made my own datatype; when I make it strict there are these warnings, and when it is lazy there are none. I've tested that both versions run equally fast, so strictness is probably excessive here. Anyway, are these warnings serious?
These messages (technically not even warnings) indicate that GHC could do further optimisations (which might or might not improve performance), but doesn't, due to the limit placed on constructor specialisation. You can get rid of them by passing -fspec-constr-count=n with a sufficiently large n (the default is 3), or -fno-spec-constr-count, to the compiler. The result would be larger code (more specialisations), which could be faster, equally fast, or in unfortunate cases slower. If performance is critical, you should try both and compare.
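For example, one way to raise the limit for a single module is an OPTIONS_GHC pragma (the count of 6 here is an arbitrary choice; the same flags can go in a .cabal file's ghc-options field instead):

{-# OPTIONS_GHC -O2 -fspec-constr-count=6 #-}
-- Raises the bound on call-pattern specialisations for this module;
-- -fno-spec-constr-count would remove the limit entirely.
module Main where

main :: IO ()
main = return ()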
These warnings can be safely ignored; they are always produced in GHC 7.0 because of internal details — basically, they're not real warnings, just debug output.
However, you can turn them off using -dno-debug-output, according to this GHC bug report.
You should no longer see them if you upgrade to GHC 7.2.