Haskell measuring function performance

In Haskell, how can I 'simply' measure a function's performance - for example, how long it takes to run, or how much memory it takes? I am aware of profiling; however, is there a simpler way that will not require me to change my code too much?

Measuring how long it takes to run and how much memory it takes are two separate problems, namely benchmarking and profiling. Haskell has a well-defined set of tools for both, and neither problem requires you to make any changes to the actual application's code.
Benchmarking
This is done using libraries, and there is an ultimate winner in that area, suggested by Niklas in the comments: Criterion. The library is very well designed, isn't hard to use, and produces very detailed data.
The workflow is the following: you create a separate module containing the setup of your benchmark, compile it, and run it with options. To see the available options, run it with the --help flag.
You can find examples of setup modules here.
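A minimal setup module might look like the following sketch, where a naive fib stands in for whatever function you want to measure (whnf applies the function to its argument on every iteration and evaluates the result to weak head normal form, so the result isn't computed once and shared):

import Criterion.Main

-- Hypothetical function under test: the naive exponential Fibonacci.
fib :: Int -> Integer
fib n = if n < 2 then fromIntegral n else fib (n - 1) + fib (n - 2)

main :: IO ()
main = defaultMain
  [ bgroup "fib"
      [ bench "fib 10" $ whnf fib 10
      , bench "fib 20" $ whnf fib 20
      ]
  ]

Compile it with ghc -O2 and run the resulting binary; as noted above, --help lists the available reporting options.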
Profiling
There are plenty of good materials on that already, so I'll just refer to them:
General reference on profiling
A tutorial in Real World Haskell
A tutorial on profiling with Cabal
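In practice, the basic workflow uses GHC's standard profiling flags (Main.hs below is a stand-in for your program):

$ ghc -O2 -prof -fprof-auto -rtsopts Main.hs
$ ./Main +RTS -p -RTS    # time and allocation profile, written to Main.prof
$ ./Main +RTS -hc -RTS   # heap profile by cost centre, written to Main.hp
$ hp2ps -c Main.hp       # render the heap profile as Main.ps

-fprof-auto adds cost centres to all top-level bindings automatically, so no changes to the code itself are required.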

For extremely crude information on how individual functions perform compared to each other, you can use GHCi:
Prelude> :set +s
Prelude> last [1..100000000]
100000000
(1.65 secs, 4000685276 bytes)
You need to be aware that GHCi doesn't compile code, so it runs much slower than GHC-compiled binaries; the timing and memory-usage figures are approximate; and absolutely no optimisation has been performed.
This means that it gives you only a very rough idea of how (in)efficient your code is, and is no substitute for proper benchmarking and profiling of compiled and optimised code, as detailed in Nikita Volkov's answer.

Related

How to get more performance out of automatic differentiation?

I am having a hard time optimizing a program that relies on ad's conjugateGradientDescent function for most of its work.
Basically my code is a translation of an old paper's code, written in Matlab and C. I have not measured it, but that code runs at several iterations per second; mine is on the order of minutes per iteration ...
The code is available in these repositories:
https://github.com/fhaust/aer
https://github.com/fhaust/aer-utils
The code in question can be run with the following commands:
$ cd aer-utils
$ cabal sandbox init
$ cabal sandbox add-source ../aer
$ cabal run learngabors
Using GHC's profiling facilities I have confirmed that the descent is in fact the part that takes most of the time:
(interactive version here: https://dl.dropboxusercontent.com/u/2359191/learngabors.svg)
The +RTS -s statistics are telling me that productivity is quite low:
Productivity 33.6% of total user, 33.6% of total elapsed
From what I have gathered there are two things that might lead to higher performance:
Unboxing: currently I use a custom matrix implementation (in src/Data/SimpleMat.hs). This was the only way I could get ad to work with matrices (see: How to do automatic differentiation on hmatrix?). My guess is that a matrix type like newtype Mat w h a = Mat (Unboxed.Vector a) would achieve better performance through unboxing and fusion; a minimal sketch of such a type follows after the next point. I found some code that has ad instances for unboxed vectors, but so far I haven't been able to use it with the conjugateGradientDescent function.
Matrix derivatives: in an email I can't find at the moment, Edward mentions that it would be better to use Forward instances for matrix types instead of having matrices filled with Forward instances. I have a faint idea how to achieve that, but have yet to figure out how I'd implement it in terms of ad's type classes.
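For reference, a minimal sketch of the unboxed matrix type mentioned in the first point (the module name and comments are my own assumptions):

module Data.UnboxedMat where

import qualified Data.Vector.Unboxed as Unboxed

-- w and h are phantom type parameters recording the dimensions;
-- the payload is a flat, row-major unboxed vector of length w * h.
newtype Mat w h a = Mat (Unboxed.Vector a)

Whether ad's number-like types can inhabit the element type a here is exactly the difficulty discussed in the answer below.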
This is probably too broad a question to be answered on SO, so if you are willing to help me out here, feel free to contact me on Github.
You are running into pretty much the worst-case scenario for the current ad library here.
FWIW, you won't be able to use the existing ad classes/types with "matrix/vector ad". It'd be a fairly large engineering effort; see https://github.com/ekmett/ad/issues/2
As for why you can't unbox: conjugateGradient requires the ability to use Kahn mode or two levels of forward mode on your functions. The former precludes it from working with unboxed vectors, as the data types carry syntax trees and can't be unboxed. For various technical reasons I haven't figured out how to make it work with a fixed-size 'tape' like the standard Reverse mode does.
I think the "right" answer here is for us to sit down and figure out how to get matrix/vector AD right and integrated into the package, but I confess I'm timesliced a bit too thinly right now to give it the attention it deserves.
If you get a chance to swing by #haskell-lens on irc.freenode.net I'd be happy to talk about designs in this space and offer advice. Alex Lang has also been working on ad a lot, is often present there, and may have ideas.

GHC Partial Evaluation and Separate Compilation

Whole-program compilers like MLton create optimized binaries in part due to their ability to use the total source of the binary to perform partial evaluation: aggressively inlining constants and evaluating them until stuck, all during compilation!
This has been explored publicly a bit in the Haskell space by Gabriel Gonzalez's Morte.
Now my understanding is that Haskell does not do very much of this, if any at all. The reason I have seen cited is that it is antithetical to separate compilation. That makes sense as a reason to prohibit partial evaluation across source-file boundaries, but it seems like in-file partial evaluation would still be an option.
As far as I know, in-file partial evaluation is still not performed, though.
My question is: is this true? If so, what are the tradeoffs for performing in-file partial evaluation? If not, what is an example file where one can improve compiled performance by putting more functionality into the same file?
(Edit: to clarify the above, I know there are a lot of questions as to which set of reductions is best to perform, and that many of them are undecidable! I'd like to know the tradeoffs made in an "industrial strength" compiler with separate compilation that live at a level above choosing the right equational theory, if there are any interesting things to talk about there. Things like compilation speed or file bloat are more toward the scope I'm interested in. Another question in the same space might be: "Why can't MLton get separate compilation just by compiling each module separately, leaving the API exposed, and then linking them all together?")
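As a hypothetical illustration of the kind of in-file reduction in question, consider a single module where the optimiser can see every definition it needs:

module Example where

scale :: Int -> Int
scale x = x * 10

-- With both definitions in one file and -O2, GHC can inline scale and
-- constant-fold this binding down to the literal 60 at compile time;
-- move scale to another module without an exposed unfolding and that
-- opportunity can disappear.
answer :: Int
answer = scale 4 + scale 2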
This is definitely an optimization that a small set of people are interested in and are pursuing. The Google search term to find information on it is "supercompilation". I believe there are at least two approaches floating about at the moment.
It seems one of the big tradeoffs is compilation-time resources (both time and memory), and at the moment the performance wins of paying those costs appear to be somewhat unpredictable. There's quite a bit of work left. A few links:
A page on the GHC wiki
Neil Mitchell's Supero
Max Bolingbroke's Supercompilation by evaluation

Profiling the cost of polymorphism?

I have a codebase that is "needlessly" polymorphic, in that almost every function is polymorphic in some way (why not, when you can?), but the end program uses functions at only a handful of concrete types. I've started spending some time throwing in SPECIALIZE and INLINABLE pragmas to try to bring down the performance cost of all this polymorphism, but with the size of my code it's pretty hit and miss. Is there a way to tell from profiling how much time is spent "doing the things polymorphism needs" at runtime, for each function?
(Note: I've asked this question without knowing if such a thing is even technically possible or if "the things polymorphism needs" is well-defined enough).
The process of determining costs is:
Construct a benchmark - with criterion, or some other measurement tool
Profile - with ghc's profiling support
Read the Core - with ghc-core, if the performance causes are not obvious (example commands below)
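Concretely (with Main.hs standing in for your module), steps 2 and 3 might look like:

$ ghc -O2 -prof -fprof-auto -rtsopts Main.hs && ./Main +RTS -p -RTS   # writes Main.prof
$ ghc -O2 -ddump-simpl -dsuppress-all Main.hs > Main.core             # dump the optimised Core

-ddump-simpl prints the Core after GHC's simplifier has run; -dsuppress-all hides most annotations to keep the output readable.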
Typically you will identify some operation that is too slow; compile with profiling and determine precisely which components are costly, and then inspect the code to optimize it (e.g. by specializing data structures or functions, changing algorithms, or making other changes).
For performance-critical work you will then go and inspect the Core to see whether micro-tuning the code for the compiler can help.
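For reference, the pragma placement the question describes might look like this, with sumSquares as a hypothetical stand-in for one of your polymorphic functions:

-- Polymorphic over any Num; the pragmas ask GHC to emit a dedicated
-- copy at Double and to expose the unfolding so that other modules
-- can specialize the function at their own call sites.
sumSquares :: Num a => [a] -> a
sumSquares xs = sum [x * x | x <- xs]
{-# SPECIALIZE sumSquares :: [Double] -> Double #-}
{-# INLINABLE sumSquares #-}

The profile (step 2) then shows whether the dictionary-passing version still dominates the cost centres, and the Core (step 3) shows whether the specialized copy is actually being used.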

Expression trees vs IL.Emit for runtime code specialization

I recently learned that it is possible to generate C# code at runtime and I would like to put this feature to use. I have code that does some very basic geometric calculations, like computing line-plane intersections, and many of the calculations are performed for the same plane or the same line over and over again. By generating code specialized for a particular plane or line at runtime, I think I should be able to gain some performance.
The problem is that I'm not sure where to begin. From reading a few blog posts and browsing MSDN documentation, I've come across two possible strategies for generating code at runtime: expression trees and IL.Emit. Using expression trees seems much easier, because there is no need to learn anything about OpCodes and various other MSIL-related intricacies, but I'm not sure if expression trees are as fast as manually generated MSIL. Are there any suggestions on which method I should go with?
The performance of the two is generally the same: expression trees are internally traversed and emitted as IL using the same underlying system functions that you would be using yourself. It is theoretically possible to emit more efficient IL by hand with the low-level functions, but I doubt there would be any practically important performance gain. That would depend on the task, but I have not come across any practical case of hand-emitted IL beating IL emitted from expression trees.
I highly suggest getting the tool called ILSpy, which decompiles CLR assemblies. With it you can look at the code that actually traverses expression trees and emits the IL.
Finally, a caveat. I have used expression trees in a language parser, where function calls are bound to grammar rules that are compiled from a file at runtime. Compiled is the key word here. For many problems I came across, when what you want to achieve is known at compile time, you would not gain much performance from runtime code generation. Some CLR JIT optimizations might also be unavailable to dynamic code. This is only an opinion from my own practice, and your domain may be different, but if performance is critical I would rather look at native code and highly optimized libraries. Some of the work I have done would be snail-slow if not for LAPACK/MKL. But that is advice you didn't ask for, so take it with a grain of salt.
If I were in your situation, I would try the alternatives from high level to low level, in order of increasing time and effort and decreasing reusability, and I would stop as soon as the performance is good enough for the time being, i.e.:
first, I'd check to see if Math.NET, LAPACK or some similar numeric library already has similar functionality, or I can adapt/extend the code to my needs;
second, I'd try Expression Trees;
third, I'd check Roslyn Project (even though it is in prerelease version);
fourth, I'd think about writing common routines with unsafe C code;
[fifth, I'd think about quitting and starting a new career in a different profession :) ],
and only if none of these work out, would I be so hopeless to try emitting IL at run time.
But perhaps I'm biased against low level approaches; your expertise, experience and point of view might be different.

Haskell for mission-critical systems [duplicate]

I've been curious to understand whether it is possible to apply the power of Haskell to the embedded realtime world, and in googling I found the Atom package. I'd assume that in the complex case the code might have all the classical C bugs - crashes, memory corruption, etc. - which would then need to be traced back to the original Haskell code that caused them. So, this is the first part of the question: "If you have experience with Atom, how did you deal with the task of debugging low-level bugs in the compiled C code and fixing them in the original Haskell code?"
I searched for some more examples of Atom; this blog post mentions resulting C code of 22 KLOC (and obviously no code :), and the included example is a toy. This and this reference have a bit more practical code, but that is where it ends. And the reason I put "sizable" in the subject is that I'm most interested in your experiences of working with generated C code in the range of 300 KLOC+.
As I am a Haskell newbie, there may obviously be other ways that I did not find due to my unknown unknowns, so any other pointers for self-education in this area would be greatly appreciated - and this is the second part of the question: "What would be some other practical methods (if any) of doing real-time development in Haskell?" If multicore is also in the picture, that's an extra plus :-)
(About the usage of Haskell itself for this purpose: from what I read in this blog post, garbage collection and laziness in Haskell make it rather nondeterministic scheduling-wise, but maybe in two years something has changed. The Real World Haskell programming question on SO was the closest I could find to this topic.)
Note: "real-time" above is would be closer to "hard realtime" - I'm curious if it is possible to ensure that the pause time when the main task is not executing is under 0.5ms.
At Galois we use Haskell for two things:
Soft real time (OS device layers, networking), where 1-5 ms response times are plausible. GHC generates fast code, and has plenty of support for tuning the garbage collector and scheduler to get the right timings.
For true real-time systems, EDSLs are used to generate code for other languages that provide stronger timing guarantees, e.g. Cryptol, Atom and Copilot.
So be careful to distinguish the EDSL (Copilot or Atom) from the host language (Haskell).
Some examples of critical systems (and in some cases real-time systems) either written in or generated from Haskell, produced by Galois:
EDSLs
Copilot: A Hard Real-Time Runtime Monitor -- a DSL for real-time avionics monitoring
Equivalence and Safety Checking in Cryptol -- a DSL for cryptographic components of critical systems
Systems
HaLVM -- a lightweight microkernel for embedded and mobile applications
TSE -- a cross-domain (security level) network appliance
It will be a long time before there is a Haskell system that fits in small memory and can guarantee sub-millisecond pause times. The community of Haskell implementors just doesn't seem to be interested in this kind of target.
There is healthy interest in using Haskell or something Haskell-like to compile down to something very efficient; for example, Bluespec compiles to hardware.
I don't think it will meet your needs, but if you're interested in functional programming and embedded systems you should learn about Erlang.
Andrew,
Yes, it can be tricky to debug problems through the generated code back to the original source. One thing Atom provides is a means to probe internal expressions, and it then leaves it up to the user how to handle these probes. For vehicle testing, we build a transmitter (in Atom) and stream the probes out over a CAN bus. We can then capture this data, format it, and view it with tools like GTKWave, either in post-processing or in realtime. For software simulation, probes are handled differently: instead of getting probe data from a CAN protocol, hooks are made into the C code to lift the probe values directly. The probe values are then used in the unit-testing framework (distributed with Atom) to determine if a test passes or fails, and to calculate simulation coverage.
I don't think Haskell, or other garbage-collected languages, are very well suited to hard-realtime systems, as GCs tend to amortize their runtimes into short pauses.
Writing in Atom is not exactly programming in Haskell, as Haskell here can be seen as purely a preprocessor for the actual program you are writing.
I think Haskell is an awesome preprocessor, and using DSELs like Atom is probably a great way to create sizable hard-realtime systems, but I don't know if Atom fits the bill or not. If it doesn't, I'm pretty sure it is possible (and I encourage anyone who tries!) to implement a DSEL that does.
Having a very strong pre-processor like Haskell for a low-level language opens up a huge window of opportunity to implement abstractions through code-generation that are much more clumsy when implemented as C code text generators.
I've been fooling around with Atom. It is pretty cool, but I think it is best for small systems. Yes, it runs in trucks and buses and implements real-world, critical applications, but that doesn't mean those applications are necessarily large or complex. It really is for hard-realtime apps and goes to great lengths to make every operation take the exact same amount of time. For example, instead of an if/else statement that conditionally executes one of two code branches that might differ in running time, it has a "mux" statement that always evaluates both branches before conditionally selecting one of the two computed values, so the total execution time is the same whichever value is selected (a sketch follows below).

It doesn't have any significant type system other than built-in types (comparable to C's) that are enforced through GADT values passed through the Atom monad. The author is working on a static verification tool that analyzes the output C code, which is pretty cool (it uses an SMT solver), but I think Atom would benefit from more source-level features and checks. Even in my toy-sized app (an LED flashlight controller), I made a number of newbie errors that someone more experienced with the package might avoid, errors that resulted in buggy output code I'd rather have had caught by the compiler instead of through testing. On the other hand, it's still at version 0.1.something, so improvements are undoubtedly coming.
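For a flavor of that mux style, here is a sketch written against the atom package's combinators as I recall them (bool, period, atom, value, mux); treat the details as approximate rather than authoritative:

import Language.Atom

-- Both candidate values exist on every tick; mux only selects between
-- them, so execution time does not depend on which one is chosen.
blink :: Atom ()
blink = do
  led <- bool "led" False
  period 1000 $ atom "flip" $
    led <== mux (value led) false true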
