How to reliably influence generated code at near machine level using GHC?

While this may sound like a theoretical question, suppose I decide to invest in and build a mission-critical application written in Haskell. A year later I find that I absolutely need to improve the performance of some very thin bottleneck, and this will require optimizing memory access close to raw machine capabilities.
Some assumptions:
It isn't a realtime system - occasional latency spikes are tolerable (from interrupts, thread-scheduling irregularities, occasional GC, etc.)
It isn't a numeric problem - data layout and cache-friendly access patterns are most important (avoiding pointer chasing, reducing conditional jumps etc.)
Code may be tied to specific GHC release (but no forking)
The performance goal requires in-place modification of pre-allocated off-heap arrays, taking alignment into account (C strings, bit-packed fields, etc.)
Data is statically bounded in arrays and allocations are rarely if ever needed
What mechanisms does GHC offer to perform this kind of optimization? By "reliably" I mean that if a source change causes the code to no longer perform, it can be corrected in source code without rewriting it in assembly.
Is it already possible using GHC-specific extensions and libraries?
Would custom FFI help avoid C calling convention overhead?
Could a special purpose compiler plugin do it through a restricted source DSL?
Could source code generator from a "high-level" assembly (LLVM?) be solution?

It sounds like you're looking for unboxed arrays. "Unboxed" in haskell-land means "has no runtime heap representation". You can usually learn whether some part of your code is compiled to an unboxed loop (a loop that performs no allocation) by looking at its Core representation (Core is a very Haskell-like intermediate language, the first stage of compilation). So e.g. you might see Int# in the Core output, which means an integer that has no heap representation (it's gonna be in a register).
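For instance, a minimal sketch (my own illustrative example): compiled with -O2, the inner loop below should show up in Core as a loop over Int# with no allocation, which you can confirm with ghc -O2 -ddump-simpl -dsuppress-all.

    {-# LANGUAGE BangPatterns #-}

    -- Sum 1..n with a strict accumulator; with -O2 the worker loop should
    -- operate on unboxed Int# values (check the Core dump to confirm).
    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        go !acc !i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)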
When optimizing Haskell code we regularly look at Core and expect to be able to correct performance regressions by changing the source code (e.g. adding a strictness annotation, or fiddling with a function so that it can be inlined). This isn't always fun, but it will be fairly stable, especially if you are pinning your compiler version.
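As a small sketch of the kind of source-level fix meant here (names are purely illustrative): UNPACK'd strict fields keep a small record flat in memory, and an INLINE pragma lets a tiny helper disappear at its call sites.

    -- Strict, unpacked fields: the two Ints live directly in the constructor.
    data Point = Point {-# UNPACK #-} !Int {-# UNPACK #-} !Int

    -- Small helper marked INLINE so it vanishes at call sites under -O.
    {-# INLINE addPoint #-}
    addPoint :: Point -> Point -> Point
    addPoint (Point a b) (Point c d) = Point (a + c) (b + d)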
Back to unboxed arrays: GHC exposes a lot of low-level primops in GHC.Prim, in particular it sounds like you want mutable unboxed arrays (MutableByteArray). The primitive package exposes these primops behind a slightly safer, friendlier API and is what you should use (and depend on if writing your own library).
There are many other libraries that implement unboxed arrays, such as vector, which are built on MutableByteArray; the point is that operations on that structure generate no garbage and likely compile down to pretty predictable machine instructions.
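A rough sketch of what that looks like with the primitive package (function names taken from Data.Primitive.ByteArray; double-check them against the version you install): a pinned, alignment-controlled buffer written and read in place, with no per-element heap allocation.

    {-# LANGUAGE BangPatterns #-}

    import Control.Monad (forM_)
    import Control.Monad.ST (runST)
    import Data.Primitive.ByteArray
      (newAlignedPinnedByteArray, readByteArray, writeByteArray)
    import Data.Word (Word8)

    -- Fill an n-byte, 64-byte-aligned pinned buffer and sum it in place.
    sumBuffer :: Int -> Word8
    sumBuffer n = runST $ do
      buf <- newAlignedPinnedByteArray n 64            -- size, alignment
      forM_ [0 .. n - 1] $ \i ->
        writeByteArray buf i (fromIntegral i :: Word8)
      let go !acc i
            | i >= n    = pure acc
            | otherwise = do
                w <- readByteArray buf i
                go (acc + (w :: Word8)) (i + 1)
      go 0 0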
You might also like to check out this technique if you're doing numeric work and want to use a particular instruction or implement some loop directly in assembly.
GHC also has a very powerful FFI, and you can research how to write portions of your program in C and interoperate with them; Haskell supports pinned arrays among other structures for this purpose.
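A sketch of that route (process_block is a hypothetical C routine, and allocaBytes is one way to get pinned memory that C can safely touch): an unsafe import skips the extra bookkeeping of a safe foreign call, so it is close in cost to a plain C call, at the price of blocking the capability and not being allowed to call back into Haskell.

    {-# LANGUAGE ForeignFunctionInterface #-}

    import Data.Word (Word8)
    import Foreign.C.Types (CSize (..))
    import Foreign.Marshal.Alloc (allocaBytes)
    import Foreign.Ptr (Ptr)

    -- Hypothetical C routine: void process_block(uint8_t *buf, size_t len);
    foreign import ccall unsafe "process_block"
      c_processBlock :: Ptr Word8 -> CSize -> IO ()

    withScratch :: Int -> IO ()
    withScratch n =
      allocaBytes n $ \buf ->          -- pinned memory, safe to hand to C
        c_processBlock buf (fromIntegral n)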
If you need more control than those give you then haskell is likely the wrong language. It's impossible to tell from your description if this is the case for your problem (Your requirements seem contradictory: you need to be able to write a carefully cache-tuned algorithm, but arbitrary GC pauses are okay?).
One last note: you can't rely on GHC's native code generator to perform any of the low-level strength reduction optimizations that e.g. GCC performs (GHC's NCG will probably never ever know about bit-twiddling hacks, autovectorization, etc. etc.). Instead you can try the LLVM backend, but whether you see a speedup in your program is by no means guaranteed.

Related

What does 'Zero Cost Abstraction' mean?

I came across this term when exploring Rust.
I saw different kinds of explanations regarding this and still don't quite get the idea.
In The Embedded Rust Book, it says:
"Type states are also an excellent example of Zero Cost Abstractions - the ability to move certain behaviors to compile time execution or analysis. These type states contain no actual data, and are instead used as markers. Since they contain no data, they have no actual representation in memory at runtime."
Does it mean the program is faster at runtime because these markers take up no memory at runtime?
I'd appreciate it if anyone could explain this in an easy-to-understand way.
Zero-cost abstraction means that adding higher-level programming concepts, like generics, collections and so on, does not come with a run-time cost, only a compile-time cost (the code will be slower to compile). Any operation on zero-cost abstractions is as fast as if you had written out the matching functionality by hand using lower-level programming concepts like for loops, counters, ifs and raw pointers.
Or, another way to view this: using zero-cost abstraction tools such as functions, templates and classes comes with "zero cost" for the performance of your code.
Zero cost abstractions are ones that bear no runtime costs in execution speed or memory usage.
By contrast, virtual methods are a good example of a costly abstraction: in many OO languages the runtime type of the object a method is called on must be determined at runtime, which requires maintaining a lookup table (runtime memory usage) and then actually performing the lookup (runtime overhead per method call, likely at least an extra pointer dereference) to determine which version of the method to call. Another good example is garbage collection: in return for not having to worry about the details of memory allocation, you pay with GC pauses.
Rust, though, mostly tries to have zero-cost abstractions: ones that let you have your cake and eat it too, ones that the compiler can safely and correctly convert to forms that bear no extra indirection or memory usage. In fact, the only thing I'm aware of (somebody more knowledgeable correct me if I'm wrong) that you really pay for at runtime in Rust is bounds checking.
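A tiny illustration in Haskell (my own example, in the language of the question at the top of this page): a newtype is erased at compile time, so the extra type safety costs nothing at runtime.

    newtype UserId  = UserId  Int   -- no wrapper exists at runtime
    newtype OrderId = OrderId Int   -- yet UserId and OrderId cannot be mixed up

    lookupUser :: UserId -> String
    lookupUser (UserId n) = "user #" ++ show n   -- same generated code as taking a raw Int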
The concept of zero-cost abstractions originally came from the functional world. However, the terminology comes from C++. According to Bjarne Stroustrup:
"In general, C++ implementations obey the zero-overhead principle: What you don’t use, you don’t pay for. And further: What you do use, you couldn’t hand code any better."
This quote, along with most answers, fails to deliver the idea in its entirety because the context in which it was said isn't explicitly stated.
If there were only one programming language in the world, be it Rust or C++, zero-cost abstractions would be indistinguishable from most other compiler optimizations. The implication here is that there are countless other languages that let you do the same things as Rust or C++, but at a nonzero and often run-time cost.

Why do pre-defined functions always perform faster than user-defined functions with the same functionality? Apologies if it's too basic

In every programming language, why do pre-defined functions always perform faster than a user-defined function with the same functionality, even when we code with minimum time complexity?
Is it related to any difference in resource utilization between the two?
This has been a long-term question of mine; I'd appreciate at least one good answer.
What you stated is simply not true. The claim that in any language user-defined functions perform worse than builtins is a very bold one, and it should be backed by hard data acquired through profiling.
Being more down-to-earth though, we indeed might observe this behaviour from time to time with various platforms/languages/runtimes/libraries etc.
Mostly this is due to these two reasons:
Builtin functions are sometimes allowed to not be functions at all: the compiler is free to remove a function call completely (and replace it with something else, like a handful of machine instructions) if the behaviour of the code does not change.
In C, functions in the memcpy() family are a good example of this: in "release" builds, a good compiler for popular platforms usually replaces these calls with direct use of CPU instructions that copy ranges of memory.
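Since the rest of this page is about GHC, here is a sketch of the same idea in Haskell terms (illustrative only): rewrite RULES let a library tell the compiler to replace matching calls at compile time, much like a builtin.

    module ReverseRule where

    -- Illustrative rule: tell GHC that reversing a list twice is the list
    -- itself, so optimized builds can drop the work entirely.  (Not valid
    -- for infinite lists; a real rule must preserve semantics.)
    {-# RULES
    "reverse/reverse"  forall xs.  reverse (reverse xs) = xs
      #-}

    doubleReverse :: [Int] -> [Int]
    doubleReverse xs = reverse (reverse xs)   -- with -O this should reduce to xs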
Builtin functions are often coded in platform-dependent assembly, so that where applicable the function's body is finely tuned, highly optimized low-level code written for the target CPU's extended instruction set (such as SSE and its later iterations on Intel platforms).
If you meant not low-level languages like C but rather higher-level ones targeting a virtual machine or an interpreter, then the answer should be rather more obvious: the builtins are implemented natively by the runtime, while user-defined code has to go through the VM or interpreter.

Efficient GC write barrier in generated C

I'm designing a system that compiles CIL bytecode ahead of time. In order to keep it relatively simple and also make it very portable, the system will emit C source code (with all higher-level constructs like OOP factored out) instead of machine code. The intention is that a standard C compiler for the target platform will then be used on that code to get the end product.
Initially I intend to use a very simple GC approach such as stop-the-world. However, while the application doesn't require stellar performance, it does need decent performance, so eventually the GC may need to be changed.
I'm thinking ahead to eventually a more sophisticated GC requiring some sort of write barrier. I've looked at SATB and card marking approaches but I'm not ready to actually plan out a good GC yet. I just don't want to shoot myself in the foot by having the thing emit C source code only to later find that an efficient GC write barrier will demand inline assembly, largely defeating the purpose of emitting C.
So, my question is, can typical write barriers be efficiently implemented in C code? We can assume that the C compiler has a decent optimizer. It's also already a given that the resultant "source code" will be utterly illegible, so clarity doesn't matter.
I'm guessing that, at the expense of bloating the source files even more, it probably can be done reasonably, but I'd appreciate word from people more experienced in GC design and/or compiler internals.
I am assuming you want a precise generational moving or copying GC.
You could have a write barrier in C; as an example, both the OCaml and MELT runtimes have a generational GC with a write barrier. And Qish is a generational copying GC with a write barrier that works with C.
(BTW, MELT is a domain specific language to extend GCC, and it is compiled to C, exactly like you want to do)
A more significant issue is how you keep track of local pointers (and how the GC knows about them), that is, the precise aspect of your GC. You may want to pack them into some local structure, but then the C compiler (e.g. GCC) may optimize a bit less.
You might look into the source code of recent versions of Mono, which has a generational copying GC. Look also inside Chicken Scheme (which also generates C code).
Notice that your C code generator will have to be adapted to fit a particular GC implementation (someone else's, or your own), because each GC has slightly different invariants and expectations. Also take care with tail recursion (some C compilers, notably recent GCC, may optimize it in limited cases).
In Qish, MELT or OCaml, the write barrier (on the C side) is implemented by a macro (or inline function) invoked for each touched pointer. The details are implementation specific, and your C code generator will have to take care of them.
Be aware that multi-threaded GCs are difficult to design, and that debugging a GC, even a simple one, takes a lot of time and is hard.

Haskell for mission-critical systems [duplicate]

I've been curious to understand whether it is possible to apply the power of Haskell to the embedded realtime world, and in googling found the Atom package. I'd assume that in the complex case the code might have all the classical C bugs - crashes, memory corruption, etc. - which would then need to be traced back to the original Haskell code that caused them. So, this is the first part of the question: "If you have experience with Atom, how did you deal with the task of debugging low-level bugs in the compiled C code and fixing them in the original Haskell code?"
I searched for some more examples for Atom; this blog post mentions the resulting C code being 22 KLOC (and obviously includes no code :); the included example is a toy. This and this reference have a bit more practical code, but that is where it ends. And the reason I put "sizable" in the subject is, I'm most interested in whether you might share your experiences of working with generated C code in the range of 300 KLOC+.
As I am a Haskell newbie, there may obviously be other ways that I did not find due to my unknown unknowns, so any other pointers for self-education in this area would be greatly appreciated - and this is the second part of the question: "what would be some other practical methods (if any) of doing real-time development in Haskell?". If multicore is also in the picture, that's an extra plus :-)
(About the usage of Haskell itself for this purpose: from what I read in this blog post, garbage collection and laziness in Haskell make it rather nondeterministic scheduling-wise, but maybe in two years something has changed. The Real World Haskell programming question on SO was the closest that I could find to this topic.)
Note: "real-time" above is would be closer to "hard realtime" - I'm curious if it is possible to ensure that the pause time when the main task is not executing is under 0.5ms.
At Galois we use Haskell for two things:
Soft real time (OS device layers, networking), where 1-5 ms response times are plausible. GHC generates fast code, and has plenty of support for tuning the garbage collector and scheduler to get the right timings (a small sketch of one such trick follows this list).
For true real-time systems, EDSLs are used to generate code for other languages that provide stronger timing guarantees, e.g. Cryptol, Atom and Copilot.
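For the soft real-time case, one common trick (my own sketch, not Galois code) is to disable the idle-time collector with the RTS flag -I0 and trigger collections yourself at points where a pause is known to be acceptable:

    import System.Mem (performGC)

    -- Run the latency-sensitive step, then pay for a collection at a point
    -- we chose, rather than wherever the RTS would have scheduled one.
    mainLoop :: (s -> IO s) -> s -> IO ()
    mainLoop step = go
      where
        go st = do
          st' <- step st     -- latency-sensitive work
          performGC          -- explicit, predictable collection point
          go st'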
So be careful to distinguish the EDSL (Copilot or Atom) from the host language (Haskell).
Here are some examples of critical systems, and in some cases real-time systems, either written in or generated from Haskell, produced by Galois:
EDSLs
Copilot: A Hard Real-Time Runtime Monitor -- a DSL for real-time avionics monitoring
Equivalence and Safety Checking in Cryptol -- a DSL for cryptographic components of critical systems
Systems
HaLVM -- a lightweight microkernel for embedded and mobile applications
TSE -- a cross-domain (security level) network appliance
It will be a long time before there is a Haskell system that fits in small memory and can guarantee sub-millisecond pause times. The community of Haskell implementors just doesn't seem to be interested in this kind of target.
There is healthy interest in using Haskell or something Haskell-like to compile down to something very efficient; for example, Bluespec compiles to hardware.
I don't think it will meet your needs, but if you're interested in functional programming and embedded systems you should learn about Erlang.
Andrew,
Yes, it can be tricky to debug problems through the generated code back to the original source. One thing Atom provides is a means to probe internal expressions, then leaves it up to the user how to handle these probes. For vehicle testing, we build a transmitter (in Atom) and stream the probes out over a CAN bus. We can then capture this data, format it, and view it with tools like GTKWave, either in post-processing or in real time. For software simulation, probes are handled differently: instead of getting probe data from a CAN protocol, hooks are made to the C code to lift the probe values directly. The probe values are then used in the unit testing framework (distributed with Atom) to determine whether a test passes or fails and to calculate simulation coverage.
I don't think Haskell, or other garbage-collected languages, are very well suited to hard-realtime systems, as GCs tend to amortize their runtime cost into short pauses.
Writing in Atom is not exactly programming in Haskell, as Haskell here can be seen as purely a preprocessor for the actual program you are writing.
I think Haskell is an awesome preprocessor, and using DSELs like Atom is probably a great way to create sizable hard-realtime systems, but I don't know whether Atom fits the bill or not. If it doesn't, I'm pretty sure it is possible (and I encourage anyone who does!) to implement a DSEL that does.
Having a very strong pre-processor like Haskell for a low-level language opens up a huge window of opportunity to implement abstractions through code-generation that are much more clumsy when implemented as C code text generators.
I've been fooling around with Atom. It is pretty cool, but I think it is best for small systems. Yes, it runs in trucks and buses and implements real-world, critical applications, but that doesn't mean those applications are necessarily large or complex. It really is for hard-real-time apps and goes to great lengths to make every operation take the exact same amount of time. For example, instead of an if/else statement that conditionally executes one of two code branches that might differ in running time, it has a "mux" statement that always executes both branches before conditionally selecting one of the two computed values (so the total execution time is the same whichever value is selected). It doesn't have any significant type system other than built-in types (comparable to C's) that are enforced through GADT values passed through the Atom monad. The author is working on a static verification tool that analyzes the output C code, which is pretty cool (it uses an SMT solver), but I think Atom would benefit from more source-level features and checks. Even in my toy-sized app (an LED flashlight controller), I made a number of newbie errors that someone more experienced with the package might have avoided, but that resulted in buggy output code which I'd rather have had caught by the compiler than through testing. On the other hand, it's still at version 0.1.something, so improvements are undoubtedly coming.
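For readers who haven't seen Atom, here is a plain-Haskell sketch of the "mux" idea (this is not Atom's actual API): both arms are forced before one is selected, so the work done does not depend on the condition; in Atom the generated C evaluates both expressions unconditionally.

    {-# LANGUAGE BangPatterns #-}

    -- Branch-free selection: evaluate both arguments up front, then pick one.
    mux :: Bool -> a -> a -> a
    mux c !onTrue !onFalse =
      if c then onTrue else onFalse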

Trivial mathematical problems as language benchmarks

Why do people insist on using trivial mathematical problems like finding numbers in the Fibonacci sequence for language benchmarks? Don't these usually get optimized to relativistic speeds? Isn't the brunt of the bottlenecks usually in I/O, system API calls, operations on strings and structures, processing large quantities of data, abstract object-oriented stuff, etc?
It is a throwback to the old days, when compiler technology for what we would now call basic math was still evolving rapidly.
Now, compiler evolution is more focused on exploiting new instructions for niche operations, 64-bit math, and so on.
Micro-benchmarks such as the ones you mention were useful, though, when evaluating the efficiency of the hotspot compiler when Java was first launched, and in evaluating the efficiency of .NET versus C/C++.
Your suggestion that I/O and system calls are the likely bottlenecks is correct, at least for some space of problems. But I notice you suggested string operations. One person's irrelevant micro-benchmark is another person's critical performance metric.
EDIT: P.S. I also remember using Linpack and other micro-benchmarks to compare versions of the JVM, and to compare JVM vendors. From v4 to v5 there was a big jump in performance; I guess the JIT compiler got more effective. Also, IBM's JVM was ahead of Sun's at that time on Windows x86.
Because if you want to benchmark the language/compiler, these "math problems" are good indicators of the "bare speed" of the generated code. Either they use the iterative solution, which is a tight loop and indicates how well the compiler can push instructions to the processor, or they use the recursive solution, which indicates how it handles recursive calls to short functions (inlining, tail recursion, etc.) (although the Ackermann function is usually used for that too).
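For concreteness, here are the two flavours in Haskell (a small sketch, since this page's main question is about GHC): a tail-recursive tight loop and the naive recursion that stresses call overhead.

    -- Iterative: a tight loop the compiler should turn into straight-line code.
    fibIter :: Int -> Integer
    fibIter n = go n 0 1
      where
        go 0 a _ = a
        go k a b = go (k - 1) b (a + b)

    -- Recursive: exponential calls to a short function, exercising call
    -- overhead, inlining and stack behaviour.
    fibRec :: Int -> Integer
    fibRec n
      | n < 2     = fromIntegral n
      | otherwise = fibRec (n - 1) + fibRec (n - 2)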
Usually, the benchmark suite for a language contains tests benchmarking other parts as well - e.g. gzip compression, text searching, object creation, virtual function calls, exception throw/catch benchmarks.
The other things you've noticed, syscalls and IO, are usually not included because:
syscalls are in fact not that slow - applications don't spend a significant portion of their time in the kernel, except in tests specifically targeted at them or when something is seriously wrong with the program
syscall and IO performance does not depend on the language, but rather on the OS & hardware
I'd think a simple, well-established algorithm removes the possibility that the benchmark is biased (whether through ignorance or malice) to favor one language. It is very difficult to write a complex program in two different languages in exactly the same way. Testing something like the efficiency of a multithreaded application in C# vs Java, for example, would require developers skilled in multithreaded development in both languages, and there would still be questions as to whether the benchmark app properly represents the general case, or whether it misrepresents a special case that only one language handles well.
Back when the Sieve of Eratosthenes was a popular benchmark for C compilers, I thought it would be funny if one of the compiler authors recognized the sieve code and replaced it with a pre-computed lookup.
