GHC 7.0.3 (from the Ubuntu repositories) produces warnings of this kind during compilation:
SpecConstr
Function `$j_se6a{v} [lid]'
has one call pattern, but the limit is 0
Use -fspec-constr-count=n to set the bound
Use -dppr-debug to see specialisations
I've made my own datatype; when I make it strict, these warnings appear, and when it is lazy, they don't. I've tested both versions and they run equally fast, so the strictness is probably unnecessary here. In any case, are these warnings serious?
These messages (technically not even warnings) indicate that GHC could do further optimisations (which might or might not improve performance) but, due to the limit placed on constructor specialisation, doesn't. You can also get rid of them by passing -fspec-constr-count=n with a sufficiently large n (the default is 3), or -fno-spec-constr-count, to the compiler. The result would be larger code (more specialisations), which could be faster, equally fast, or in unfortunate cases slower. If performance is critical, you should try both and compare.
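For intuition, here is a minimal sketch (invented here, not the asker's code) of the kind of code that makes SpecConstr fire on a strict datatype:

module SpecConstrDemo where

data P = P !Int !Int   -- a strict pair, standing in for the asker's strict datatype

-- The recursive call rebuilds the accumulator with the P constructor,
-- so every call site has the call pattern (P a b). With -O2, SpecConstr
-- can generate a version of `go` that takes the two Ints directly,
-- avoiding the repeated boxing; -fspec-constr-count limits how many
-- call patterns get their own specialised copy.
sumCount :: [Int] -> P
sumCount = go (P 0 0)
  where
    go acc []         = acc
    go (P s c) (x:xs) = go (P (s + x) (c + 1)) xs

Compiling with, e.g., ghc -O2 -fspec-constr-count=6 SpecConstrDemo.hs raises the limit so more call patterns can be specialised.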
These warnings can be safely ignored; they are always produced in GHC 7.0 because of internal details. Basically, they're not real warnings, just debug output.
However, you can turn them off using -dno-debug-output, according to this GHC bug report.
You should no longer see them if you upgrade to GHC 7.2.
I often use typed holes to define functions for which I know the interface and intend to implement later. When I run cabal build, it stops after the first module it encounters with typed holes, hiding type errors that may exist in other modules.
Is there any way of typechecking a project, and only failing for typed holes after the entire project has been built and typechecked?
@chi very politely told me to Read The Docs. There seem to be two ways of configuring typed holes:
The default behaviour, which makes typed holes a compile-time error. This causes compilation to stop after the first module encountered with typed holes, hiding both type errors and typed holes in other modules.
-fdefer-typed-holes, which issues a warning upon encountering a typed hole and goes on to compile the rest of the project. If no other errors are encountered, a binary is built, with the typed holes demoted to runtime errors. On the one hand, all holes show up in the compiler output; on the other, it is less than ideal that holes allow the build to succeed.
There is, however, a slightly hacky combination of flags that gets (almost) the desired behaviour:
-fdefer-typed-holes -Werror=typed-holes
This typechecks each module in the project, stopping for any (non-hole) type errors. If there are none in a given module, the build prints out all the typed holes in that module and goes on to typecheck the rest of the project. The build only succeeds if it encounters no type errors and no typed holes.
It would be even nicer if we could get type errors and typed holes in the same output, but you can't have everything.
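For concreteness, a minimal sketch (module and names invented for illustration):

module Stub where

-- `_toImpl` is a typed hole standing in for code to be written later.
-- By default it is a hard error; with the flag combination above it is
-- reported only after the module otherwise typechecks, and it still
-- fails the build.
render :: Int -> String
render = _toImpl

With Cabal, the flags can go in the relevant library or executable stanza, e.g. ghc-options: -fdefer-typed-holes -Werror=typed-holes.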
This excerpt from the docs says it all.
A “Found hole” error usually terminates compilation, like any other type error. After all, you have omitted some code from your program.
Nevertheless, you can run and test a piece of code containing holes, by using the -fdefer-typed-holes flag. This flag defers errors produced by typed holes until runtime, and converts them into compile-time warnings. These warnings can in turn be suppressed entirely by -Wno-typed-holes.
GHC warns when a package depends on different instances of the same package via dependencies, e.g.:
Configuring tasty-hspec-1.1.5.1...
Warning:
This package indirectly depends on multiple versions of the same package.
This is very likely to cause a compile failure.
package hspec-core (hspec-core-2.5.5-H06vLnMfEeIEsZFdji6h0O) requires
clock-0.7.2-9qwmBbNbGzEOSffjlyarp
package tasty (tasty-1.1.0.3-I8Vu9v0lHj8Jlg3jpKXavp) requires
clock-0.7.2-Cf9UTsaN2AjEpBnoMpmgkU
Two things are unclear to me with respect to this warning:
If GHC warns, and the compile doesn't fail, is everything fine? That is, could subtly conflicting instances of the same package still cause bad behaviour? (I'm imagining something like a type (Int, Int) in the public interface, with both instances of the package switching the order of the fields.)
Is there a way to make GHC fail for this warning?
That's not GHC warning you about multiple package versions. GHC just compiles the packages it is told to... which hardly anybody specifies by hand; almost everyone lets Stack or Cabal do it for them. In this case it's Cabal giving the warning message.
If different versions cause a problem, you will in practice almost always see it at compile time. Most often it's a missing-instance error, because you're, e.g., trying to use the class Foo from pkg-1.0 with the type Bar from pkg-2.0. A direct version mismatch of a data type in a public interface can also happen.
Theoretically, I think it would also be possible to have an error like (Int,Int) meaning two different things, which the compiler would not catch. However, this kind of change is asking for trouble anyway. Whenever the order of some data fields is not completely obvious and might change in the future, a data record should be used to make sure the compiler can catch such changes. (This is largely orthogonal to the different-versions-of-same-package issue.)
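To make that last point concrete, a small sketch (types invented here):

module RangeDemo where

-- With a bare pair, swapping the field order between library versions
-- changes the meaning silently: both versions still have type (Int, Int).
type RangeV1 = (Int, Int)          -- (low, high)

-- With a record, code written against the field names keeps working
-- (or fails to compile) no matter how the declaration is reordered.
data Range = Range { low :: Int, high :: Int }

width :: Range -> Int
width r = high r - low r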
If you want to be safe from any version-mismatch issues, you can use Stack instead of Cabal. I think this is a good part of the reason why many Haskellers prefer Stack.
Executing rustc -C help shows (among other things):
-C opt-level=val -- optimize with possible levels 0-3, s, or z
The levels 0 to 3 are fairly intuitive, I think: the higher the level, the more aggressive optimizations will be performed. However, I have no clue what the s and z options are doing and I couldn't find Rust-related information about them.
It seems you are not the only one confused, as described in a Rust issue. The options follow the same pattern as Clang's:
Os: for optimising the size when compiling.
Oz: for even more size optimisation.
Looking at these and these lines in Rust's source code, I can say that s means optimize for size, and z means optimize for size some more.
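Usage follows the same syntax as the numeric levels (the file name here is invented):

rustc -C opt-level=s main.rs   # optimise for binary size
rustc -C opt-level=z main.rs   # optimise for size even more aggressively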
All optimizations seem to be performed by the LLVM code-generation engine.
These two sequences, Os and Oz, are pretty similar within LLVM. Oz invokes 260 passes (I am using LLVM 12.0), whereas Os invokes 264. Oz's sequence of analyses and optimizations is almost a strict subsequence of Os's, except for one pass (opt -loops), which appears in a different place within Os. That said, notice that the effects of the optimizations can still differ, because they use different cost models, e.g., constants that determine the behavior of optimizations. Thus, optimizations that have an impact on size, like loop unrolling and inlining, can behave differently in the two sequences.
I see in this answer and this one that "everything will break horribly" and Stack won't let me replace base, but it will let me replace bytestring. What's the problem with this? Is there a way to do this safely without recompiling GHC? I'm debugging a problem with the base libraries and it'd be very convenient.
N.B. when I say I want to replace base I mean with a modified version of base from the same GHC version. I'm debugging the library, not testing a program against different GHC releases.
Most libraries are collections of Haskell modules containing Haskell code. The meaning of those libraries is determined by the code in the modules.
The base package, though, is a bit different. Many of the functions and data types it offers are not implemented in standard Haskell; their meaning is not given by the code contained in the package, but by the compiler itself. If you look at the source of the base package (and the other boot libraries), you will see many operations whose complete definition is simply undefined. Special code in the compiler's runtime system implements these operations and exposes them.
For example, if the compiler didn't offer seq as a primitive operation, there would be no way to implement seq after the fact: no Haskell term that you can write down would have the same type and semantics as seq unless it used seq (or one of the Haskell extensions defined in terms of seq). Likewise, many of the pointer operations, ST operations, concurrency primitives, and so forth are themselves implemented in the compiler.
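For instance, the closest you can get in plain Haskell has the right type but the wrong semantics (a sketch; the name is invented):

module FakeSeq where

-- Has seq's type, but forces nothing: fakeSeq undefined () returns (),
-- whereas seq undefined () throws. No pure Haskell definition can force
-- an arbitrary polymorphic argument without using seq itself (or a
-- feature, such as bang patterns, that desugars to it).
fakeSeq :: a -> b -> b
fakeSeq _ y = y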
Not only are these operations typically unimplementable, they also are typically very strongly tied to the compiler's internal data structures, which change from one release to the next. So even if you managed to convince GHC to use the base package from a different (version of the) compiler, the most likely outcome would simply be corrupted internal data structures with unpredictable (and potentially disastrous) results -- race conditions, trashing memory, space leaks, segfaults, that kind of thing.
If you need several versions of base, just install several versions of GHC. It's been carefully architected so that multiple versions can peacefully coexist on a single machine. (And in particular installing multiple versions definitely does not require recompiling GHC or even compiling GHC a first time, which seems to be your main concern.)
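For example, with the ghcup installer (one common way to do this today; not part of the original answer, and the version numbers are illustrative):

ghcup install ghc 8.10.7
ghcup install ghc 9.6.4
ghcup set ghc 9.6.4    # selects the active version; the others stay installed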
Why is the /Wp64 flag in Visual C++ deprecated?
cl : Command line warning D9035 :
option 'Wp64' has been deprecated and will be removed in a future release
I think that /Wp64 is deprecated mainly because compiling for a 64-bit target will catch the kinds of errors it was designed to catch (/Wp64 is only valid in 32-bit compiles). The option was added back when 64-bit targets were emerging, to help people migrate their programs to 64-bit and to detect code that wasn't '64-bit clean'.
Here's an example of the kinds of problems with /Wp64 that Microsoft just isn't interested in fixing - probably rightly so (from http://connect.microsoft.com/VisualStudio/feedback/details/502281/std-vector-incompatible-with-wp64-compiler-option):
Actually, the STL isn't intentionally incompatible with /Wp64, nor is it completely and unconditionally incompatible with /Wp64. The underlying problem is that /Wp64 interacts extremely badly with templates, because __w64 isn't fully integrated into the type system. Therefore, if vector<unsigned int> is instantiated before vector<__w64 unsigned int>, then both of them will behave like vector<unsigned int>, and vice versa. On x86, SOCKET is a typedef for __w64 unsigned int. It's not obvious, but vector<unsigned int> is being instantiated before your vector<SOCKET>, since vector<bool> is backed (in our implementation) by vector<unsigned int>.
Previously (in VC9 and earlier), this bad interaction between /Wp64 and templates caused spurious warnings. In VC10, however, changes to the STL have made this worse. Now, when vector::push_back() is given an element of the vector itself, it figures out the element's index before doing other work. That index is obtained by subtracting the element's address from the beginning of the vector. In your repro, this involves subtracting const SOCKET * - unsigned int *. (The latter is unsigned int * and not SOCKET * due to the previously described bug.) This /should/ trigger a spurious warning, saying "I'm subtracting pointers that point to the same type on x86, but to different types on x64". However, there is a SECOND bug here, where /Wp64 gets really confused and thinks this is a hard error (while adding constness to the unsigned int *).
We agree that this bogus error message is confusing. However, since it's preceded by an un-silenceable command line deprecation warning D9035, we believe that that should be sufficient. D9035 already says that /Wp64 shouldn't be used (although it doesn't go on to say "this option is super duper buggy, and completely unnecessary now").
In the STL, we could #error when /Wp64 is used. However, that would break customers who are still compiling with /Wp64 (despite the deprecation warning) and aren't triggering this bogus error. The STL could also emit a warning, but the compiler is already emitting D9035.
/Wp64 on 32-bit builds is a waste of time. It is deprecated, and this deprecation makes sense. The way /Wp64 worked on 32-bit builds is that it would look for a __w64 annotation on a type. This __w64 annotation would tell the compiler that, even though the type is 32 bits wide in 32-bit mode, it is 64 bits wide in 64-bit mode. This turned out to be really flaky, especially where templates are involved.
/Wp64 on 64-bit builds is extremely useful. The documentation (http://msdn.microsoft.com/en-us/library/vstudio/yt4xw8fh.aspx) claims that it is on by default in 64-bit builds, but this is not true. Compiler warnings C4311 and C4312 are only emitted if /Wp64 is explicitly set. Those two warnings indicate when a 32-bit value is put into a pointer, or vice versa. They are very important for code correctness, and claim to be at warning level 1. I have found bugs in very widespread code that would have been stopped if the developers had turned on /Wp64 for 64-bit builds. Unfortunately, you also get the command-line warning that you have observed. I know of no way to squelch this warning, and I have learned to live with it. On the bright side, if you build with warnings as errors, this command-line warning doesn't turn into an error.
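Concretely, per this answer's advice, a 64-bit build might be invoked like this (the file name is invented):

cl /c /Wp64 widget.cpp

This emits the D9035 command-line deprecation warning, but also enables C4311/C4312, which flag truncating conversions between pointers and 32-bit integers.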
Because when using the 64-bit compiler from VS2010, the compiler detects 64-bit problems automatically... this switch is from back in the day when you could try to detect 64-bit problems while running the 32-bit compiler...
See http://msdn.microsoft.com/en-us/library/yt4xw8fh%28v=VS.100%29.aspx
You could link to the deprecation warning, but couldn't go to the /Wp64 documentation?
By default, the /Wp64 compiler option is off in the Visual C++ 32-bit compiler and on in the Visual C++ 64-bit compiler.
If you regularly compile your application by using a 64-bit compiler, you can just disable /Wp64 in your 32-bit compilations because the 64-bit compiler will detect all issues.
Emphasis added