Are there any options, other than -O0, that can speed up compilation time?
It doesn't matter if the resulting programs are unoptimised; really I just want to type-check a large Haskell package often, and fast.
The -fno-code flag speeds up compilation dramatically, but I can't use it because this program uses TemplateHaskell.
Looks like a task for hdevtools! Hdevtools is used as the backend for the Vim plugin of the same name, and it provides speedy syntax and type checking directly from the editor. It is about as fast as ghci when reloading modules. I assume it can also be used from the command line.
Another alternative would be to keep a ghci instance running and use that to type check your modules.
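A minimal sketch of that workflow, assuming your entry module is src/Main.hs and ghci can find the rest of the package from there:

```
$ ghci src/Main.hs
ghci> :reload    -- run after each edit; only changed modules are re-checked
```

Because :reload only re-checks modules whose sources (or dependencies) changed, the turnaround is usually much faster than a fresh batch compile.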
I've found splitting up large files can speed up compilation.
Typically in a Haskell project, I either work interactively with ghci or compile the entire project with cabal build.
However, in some use cases, I may have a computationally intensive routine along with some higher level scripting functionality, say for picking inputs to an analysis algorithm.
Is it possible to use GHCi + GHC such that I compile the computationally intensive module, load the compiled code to re-run with different inputs from within GHCi?
Yes, you can load compiled modules in ghci; if there is an appropriately named .hi and .o file, ghci will use those instead of interpreting the code in the corresponding .hs file. You will then only have access to the operations that are exported from that module.
In case you find yourself using a compiled module when you wanted the interpreted one, you can use :load *foo.hs to instruct ghci to ignore the compiled version and interpret foo.hs instead.
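For example, assuming a module Foo in Foo.hs (a sketch of a typical session; the exact ghci banner output is omitted):

```
$ ghc -c Foo.hs        # produces Foo.hi and Foo.o
$ ghci
ghci> :load Foo        -- picks up the compiled Foo.o: fast, but only exports are in scope
ghci> :load *Foo.hs    -- forces interpretation: slower, but the full top level is in scope
```

Note that if you edit Foo.hs without recompiling, ghci will notice the object file is stale and fall back to interpreting the source.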
I'm working on a fairly simple text-editor for Haskell, and I'd like to be able to highlight static errors in code when the user hits "check."
Is there a way to use the GHC API to do a "dry run" of compiling a Haskell file without actually compiling it? I'd like to be able to take a string and do all the checks of normal compilation, but without the output. The GHC API would be ideal because then I wouldn't have to parse command-line output from GHC to highlight errors and such.
In addition, is it possible to do this check on a string, instead of on a file? (If not, I can just write it to a temp file, which isn't terribly efficient, but would work).
If this is possible, could you provide or point me to an example of how to do this?
This question asks the same thing, but it is from three years ago, at which time the answer was "GHC-API is new and there isn't good documentation yet." So my hope is that the status has changed.
EDIT: the "dry-run" restriction is because I'm doing this in a web-based setting where compilation happens server side, so I'd like to avoid unnecessary disk reads/write every time the user hits "check". The executable would just get thrown away anyways, until they had a version ready to run.
Just to move this to an answer, this already exists as ghc-mod, here's the homepage. This already has frontends for Emacs, Sublime, and Vim so if you need examples of how to use it, there are plenty. In essence ghc-mod is just what you want, a wrapper around the GHC API designed for editors.
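ghc-mod also works from the command line (e.g. `ghc-mod check Foo.hs` prints the errors and warnings for a file). If you'd rather drive the GHC API yourself, the core idea is to set the target to HscNothing so the whole pipeline runs up to type checking but emits no code. Treat the following as a sketch against an older GHC API: it needs the ghc and ghc-paths packages, and module and field names have shifted between GHC versions.

```haskell
import GHC
import GHC.Paths (libdir)  -- from the ghc-paths package

-- Type-check a file without generating or linking any code.
typecheckOnly :: FilePath -> IO SuccessFlag
typecheckOnly file = runGhc (Just libdir) $ do
  dflags <- getSessionDynFlags
  -- HscNothing: parse, rename, and type-check, but produce no object code
  _ <- setSessionDynFlags dflags { hscTarget = HscNothing, ghcLink = NoLink }
  target <- guessTarget file Nothing
  setTargets [target]
  load LoadAllTargets
```

Errors are reported through the session's log action, which you can replace to collect source spans and messages for highlighting. As for checking a string instead of a file: a Target has a targetContents field that can carry an in-memory buffer, which should let you skip the temp file, though I haven't tried it.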
If GHC takes a long time to compile something, is there a way to find out what it's doing?
Firstly, it would be nice to know if I've actually crashed the compiler (i.e., put it into some sort of infinite loop somehow), or whether it's actually making progress, but just very slowly.
Secondly, it would be nice to know exactly what part of the compilation process GHC is having trouble with. Is it the parsing, or desugaring, or type-checking, or Core optimisation, or code generation, or...?
Is there some way to monitor what's going on? (Bearing in mind that if GHC is taking a long time, that probably means it's doing a lot of work, so if you ask for too much output it's going to be huge!)
GHC already tells you which modules it's trying to (re)compile. In my case, the problem is a single self-contained module. I'd like to know where GHC is getting stuck.
Following Daniel Fischer's comment, I tried running GHC with different verbosity options.
-v1: Produced a bit more output, but nothing during the main compilation step.
-v2: Tells you what step GHC is currently doing (parser, desugar, type check, simplifier, etc). This is pretty much what I actually wanted.
-v3: Appears to make the simplifier actually dump what it's doing to the console - bad idea while compiling 8MB of source code!
So it seems that -v2 is the place to start.
(In the specific case of the program that prompted this question, it seems GHC is spending forever in the type checking phase.)
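A sketch of what this looks like in practice (the per-phase lines below are approximate; exact wording varies by GHC version):

```
$ ghc -v2 -O0 Foo.hs
...
*** Parser:
*** Renamer/typechecker:
*** Desugar:
*** Simplifier:
*** CodeGen:
...
```

If the output stalls after one of these headers, that's the phase GHC is stuck in; if no new header ever appears, it's more likely looping than merely slow.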
It's quite nice to have ghci integrated with Emacs through inferior-haskell-mode: this adds a wonderful possibility to quickly navigate to compile error locations, interactively inspect types, definitions, etc. Nevertheless, the major feature missing in this setup is ghci's tab completion, which is quite helpful for completing functions available from imported modules, language extensions, and ghci commands.
I assume that this functionality could be implemented rather trivially by passing a raw TAB character to the ghci process, reading its output back, and pasting the result into the Emacs buffer. Note that I haven't worked with comint-mode in Emacs, so I may be totally wrong.
Finally, we come to my question: why is this feature missing from haskell-mode? Are there obvious problems I'm unaware of, is it hard to implement, or is it just for historical reasons (like "no one bothered to write it")? Do you have any workarounds for the problem, other than running ghci outside Emacs?
Check out ghc-mode that builds on top of haskell-mode and adds autocompletion and some other features.
There's also a haskell-emacs mode, which is different from haskell-mode. It also has autocompletion, although it was quirky and didn't always work when I tried it.
I remember a professor once saying that interpreted code was about 10 times slower than compiled. What's the speed difference between interpreted and bytecode? (assuming that the bytecode isn't JIT compiled)
I ask because some folks have been kicking around the idea of compiling vim script into bytecode and I just wonder what kind of performance boost that will get.
When you compile things down to bytecode, you have the opportunity to first perform a bunch of expensive high-level optimizations. You design the byte-code to be very easily compiled to machine code and run all the optimizations and flow analysis ahead of time.
The speed-increase is thus potentially quite substantial - not only do you skip the whole lexing/parsing stages at runtime, but you also have more opportunity to apply optimizations and generate better machine code.
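To make the idea concrete, here is a toy illustration (my own, not from any real interpreter): an expression tree is compiled once into a flat bytecode list, which a simple stack machine then executes. The lexing/parsing work that produced the tree never has to be repeated, and the flat form is trivial to dispatch on.

```haskell
-- A toy expression language...
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

-- ...and a toy "bytecode" for a stack machine.
data Op = Push Int | OpAdd | OpMul

-- Direct tree-walking interpretation: re-traverses the AST on every run.
interp :: Expr -> Int
interp (Lit n)   = n
interp (Add a b) = interp a + interp b
interp (Mul a b) = interp a * interp b

-- One-time compilation to a flat op list (post-order, operands first).
compile :: Expr -> [Op]
compile (Lit n)   = [Push n]
compile (Add a b) = compile a ++ compile b ++ [OpAdd]
compile (Mul a b) = compile a ++ compile b ++ [OpMul]

-- Execution is a tight loop over the op list with an explicit stack.
exec :: [Op] -> [Int] -> Int
exec []              (x : _)      = x
exec (Push n : ops)  st           = exec ops (n : st)
exec (OpAdd  : ops)  (y : x : st) = exec ops (x + y : st)
exec (OpMul  : ops)  (y : x : st) = exec ops (x * y : st)
exec _               _            = error "ill-formed bytecode"
```

For example, `exec (compile (Add (Lit 2) (Mul (Lit 3) (Lit 4)))) []` yields 14, the same as `interp` on the same tree. In a real system the compile step is also where constant folding, dead-code elimination, and similar optimizations would run, so the executed bytecode can be smaller than the source tree.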
You could see a pretty good boost. However, there are a lot of factors. You can't just say that compiled code is always about 10 times faster than interpreted code, or that bytecode is n times faster than interpreted code.
Factors include the complexity and verbosity of the language for example. If a keyword in the language is several characters, and the bytecode is one, it should be quite a bit faster to load the bytecode, and jump to the routine that handles that bytecode, than it is to read the keyword string, then figure out where to go. But, if you're interpreting one of the exotic languages that has a one-byte keyword, the difference might be less noticeable.
I've seen this performance boost in practice, so it might be worth it for you. Besides, it's fun to write such a thing; it gives you a feel for how language interpreters and compilers work, and that will make you a better coder.
Are there actually any mainstream "interpreters" these days that don't actually compile their code? (Either to bytecode or something similar.)
For instance, when you use a Perl program directly from its source code, the first thing it does is compile the source into a syntax tree, which it then optimizes and uses to execute the program. In normal situations the time spent compiling is tiny compared to the time actually running the program.
Sticking to this example, obviously Perl cannot be faster than well-optimized C code, as it is written in C. In practice, for most things you would normally do with Perl (like text processing), it will be as fast as you could reasonably code it in C, and orders of magnitude easier to write. On the other hand, I certainly wouldn't try to write a high performance math routine directly in Perl.
Also, a lot of "classic" interpreters also include the lex/parse phase along with execution.
For example, consider executing a Python script. When you do that, you have all the costs associated with converting the program text in to the internal interpreter data structures, which are then executed.
Now contrast that with executing a compiled Python script, a .pyc file. Here, the lex and parse phase is done, and you have just the runtime of the inner interpreter.
But if you consider, say, a classic BASIC interpreter, these typically never store the raw text, rather they store a tokenized form and recreate the program text when you do "LIST". Here the byte code is much cruder (you don't really have a virtual machine here), but your execution gets to skip some of the text processing. That's all done when you enter the line and hit ENTER.
It depends on your virtual machine. Some of the faster virtual machines (like the JVM) approach the speed of C code. So how fast is your interpreted code running compared to C?
Don't expect that converting your interpreted code into bytecode will make it run as fast as Java (near C speeds); years of performance tuning have gone into that. But you should see a significant speed boost.
Emacs byte-compiles its Lisp code for increased performance. It might be worth a look for you.
I've never noticed a Vim script that was slow enough to notice. Assuming a script primarily calls built-in, native-code, operations (regexes, block operations, etc) that are implemented in the editor's core, even a 10x speed-up of the 'glue logic' in scripting would be insignificant.
Still, profiling is the only way to be really sure.