Perhaps it's just better to describe my problem.
I'm developing a Haskell library, but part of the library is written in C, and another part in raw LLVM. To get GHC to spit out the code I want, I have to follow this process:
Run ghc -emit-llvm on both the code that uses the Haskell module and the "Main" module.
Run clang -emit-llvm on the C file
Now I've got three .ll files from above. I add the part of the library I've handwritten in raw LLVM and llvm-link these into one .ll file.
I then run LLVM's opt on the linked file.
Lastly, I feed the LLVM bitcode file back into GHC (which pleasantly accepts it) to produce an executable.
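For reference, here is a rough sketch of that pipeline as shell commands. File names are placeholders and the exact flags are assumptions based on the description above; -keep-llvm-files is one way to get GHC to leave the .ll files behind.
ghc -c -O2 -fllvm -keep-llvm-files MyLib.hs Main.hs   # emit .ll for the Haskell modules
clang -O2 -emit-llvm -S cbits.c -o cbits.ll           # emit .ll for the C parts
llvm-link MyLib.ll Main.ll cbits.ll handwritten.ll -S -o linked.ll
opt -O3 -S linked.ll -o linked-opt.ll
ghc linked-opt.ll -o myprog    # GHC accepts the optimised IR, as noted above; extra -package flags may be needed when linking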
This process (with appropriate optimisation settings of course) seems to be the only way I can inline code from C, removing the function call overhead. Since many of these C functions are very small this is significant.
Anyway, I want to be able to distribute the library, and for users to be able to use it as painlessly as possible whilst still gaining the optimisations from the process above. I understand it's going to be a bit more of a pain than an ordinary library (for example, you're forced to compile via LLVM), but "as painlessly as possible" is what I'm looking for advice on.
Any guidance will be appreciated; I don't expect a step-by-step answer because I think it will be complex, but some ideas would be helpful.
Related
Say I have a package with several executables, which I initially build using cabal build. Now I change one file that impacts just one executable; cabal seems to take about a second or two to examine each executable to see whether it's impacted or not. On the other hand, make, given an equivalent number of executables and source files, will determine in a fraction of a second what needs to be recompiled. Why the huge difference? Is there a reason cabal can't just build its own version of a makefile and go from there?
Disclaimer: I'm not familiar enough with Haskell or make internals to give technical specifics, but some web searching does offer some insight that lines up with my proposal (trying to avoid eliciting opinions by providing references). Also, I'm assuming your makefile is calling ghc, as cabal apparently would.
Proposal: I believe there could be several key reasons, but the main one is that make is written in C, whereas cabal is written in Haskell. This would be coupled with superior dependency checking in make (although I'm not sure how to prove this without looking at the source code). Other supporting reasons, as found on the web:
cabal tries to do a lot more than simply compiling, e.g. appears to take steps with regard to packaging (https://www.haskell.org/cabal/)
cabal is written in Haskell, although the runtime is written in C (https://en.wikipedia.org/wiki/Glasgow_Haskell_Compiler)
Again, not being overly familiar with make internals, make may simply have a faster dependency-checking mechanism and thereby track these changes better. I point this out because, from the OP, it sounds like the difference is significant enough that cabal may be doing a blanket check against all dependencies. I suspect this would be the primary reason for the speed difference, if true.
At any rate, these are open source and can be downloaded from their respective sites (haskell.org/cabal/ and savannah.gnu.org/projects/make/) allowing anyone to examine specifics of the implementations.
It is also likely one could see a lot of variance in speed based upon the switches passed to the compilers in use.
HTH, or at least points you in the right direction.
I am trying to figure out a bug (a serious performance downgrade). Unfortunately, I wasn't able to figure out why by going back many different versions of my code.
I suspect it could be some modifications to libraries that I've updated, not to mention that in the meanwhile I've updated to GHC 7.6 from 7.4 (and if anybody knows whether some laziness behaviour has changed, I would greatly appreciate it!).
I have an older executable of this code that does not have this bug, so I wonder whether there are any tools that can tell me which library versions it was linked against, e.g. by examining the symbols.
GHC creates executables, which are notoriously hard to understand... On my Linux box I can view the assembly code by typing in
objdump -d <executable filename>
but I get back over 100K lines of code from just a simple "Hello, World!" program written in Haskell.
If you happen to have the GHC .hi files, you can get some information about the executable by typing in
ghc --show-iface <hi filename>
This won't give you the assembly code, but you can get some extra information that may prove useful.
As I mentioned in the comment above, on Linux you can use "ldd" to see which C system libraries were linked in, but that is also probably less than useful.
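For example, on my Linux box:
ldd <executable filename>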
You can try to use a disassembler, but those are generally written to disassemble to C, not anything higher level and certainly not Haskell. That being said, GHC compiles to C as an intermediary (at least it used to; has that changed?), so you might be able to learn something.
Personally, I often find viewing system calls in action much more interesting than viewing pure assembly. On my Linux box, I can view all system calls by running the program under strace (use Wireshark for the network-traffic equivalent):
strace <program executable>
This will also generate a lot of data, so it might only be useful if you know of some specific place where direct real-world communication (e.g., changes to a file on the hard disk drive) goes wrong.
In all honesty, you are probably better off just debugging the problem from source, although, depending on the actual problem, some of these techniques may help you pinpoint something.
Most of these tools have Mac and Windows equivalents.
Since much has changed in the last 9 years, and apparently this is still the first result a search engine gives for this question (as it just did for me, again), an updated answer is in order:
First of all, yes: while Haskell does not specify a bytecode format, bytecode is just a kind of machine code for a virtual machine, so for the rest of the answer I will treat them as the same thing. The “Core” language, as well as the LLVM intermediate language, or even WASM, could be considered equivalent too.
Secondly, if your old binary is statically linked, then no matter the format your program is in, no symbols will be available to check out, because that is what linking does. This is true even with bytecode, and even with classic static #include in simple languages. So your old binary will be no good, no matter what. And given the optimisations compilers do, a classic decompiler will very likely never be able to figure out which optimised bits used to belong, even partially, to which libraries, especially with stream fusion and similar “magic”.
Third, you can do the things you asked with a modern Haskell program, but you need to have your binaries compiled with -dynamic and -rdynamic, so that not only the C-calling-convention libraries (e.g. .so files) and the Haskell libraries, but also the runtime itself is dynamically linked. That way you end up with a very small binary, consisting of only your actual code, dynamic linking instructions, and the exact data about which libraries and runtime were used to build it. And since the runtime is compiler-dependent, you will know the compiler too. So it would give you everything you need, but only if you compiled it right. (I recommend using such dynamic linking by default in any case, as it saves memory.)
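As a minimal sketch on Linux (the program name is a placeholder, and exact flags can vary by GHC version and platform), building and then inspecting such a binary might look like:
ghc -dynamic -rdynamic Main.hs -o myprog
ldd ./myprog    # lists the exact Haskell libraries and RTS the binary depends on, with versions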
The last factor one might forget is that even the exact same compiler version might behave vastly differently depending on what IT was compiled with. (E.g. if somebody put a backdoor in the very first version of GHC, and all GHCs after that were compiled with that first GHC, and nobody ever checked, then that backdoor could still be in the code today, with no trace in any source or libraries whatsoever. Or, for a less extreme case, the version of GHC your old binary was built with might have been compiled with different architecture options, leading to it putting more optimised instructions into the binaries it compiles, unless told to cross-compile.)
Finally, of course, you can profile even compiled binaries, by profiling their system calls. This will give you clues about which part of the code acted differently and how. (E.g. if you notice that your new binary floods the system with some slow system calls where the old one just used a single fast one. A classic OpenGL example would be using fast display lists versus slow direct calls to draw triangles. Or using a different sorting algorithm, or having switched to a different kind of data structure that fits your work load badly and thrashes a lot of memory.)
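For example, a quick way to compare the two binaries is to count and time their system calls with strace's summary mode (binary names are placeholders):
strace -c ./old-binary
strace -c ./new-binary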
In C, one can split code into a "header file" and implementation, compile the implementation, and then just distribute the compiled version and the header only (not the full source).
Is this possible in Haskell?
GHC allows for that, but of course your code will be tied to a specific binary platform.
Check here:
http://www.haskell.org/ghc/docs/2.10/users_guide/user_174.html
or for a more updated explanation:
http://www.haskell.org/ghc/docs/7.0.3/html/users_guide/separate-compilation.html
In particular, look for .hi files.
It is quite possible to do this. When GHC compiles a Haskell module (i.e., a *.hs file), it generates executable code in a *.o object file, and also a *.hi "interface file". You only need the object file and interface file to use the compiled code.
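As a minimal sketch (module names are made up), a library author could ship only MyLib.hi and MyLib.o, and a user could build against them like this:
ghc -c MyLib.hs              # author: produces MyLib.o and MyLib.hi
ghc -c Main.hs               # user: compiles against MyLib.hi; MyLib.hs is not needed
ghc Main.o MyLib.o -o main   # link the objects into an executable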
However, unlike C, the run-time details of Haskell are not officially standardised. Consequently, you can't take code compiled with different Haskell compilers and link it together; the result won't work. In fact, often you can't even link together code compiled with different versions of GHC. It's not that there's anything "impossible" about doing this, it's just that nobody has standardised this stuff yet, so currently it doesn't work.
More recently, it is also possible to compile Haskell code into "dynamic libraries" (DLLs on Windows, *.so files on Unix). Again, you still need the *.hi files to compile against these, but at run-time you just need the library file itself.
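Roughly, on Linux, building such a shared library might look like the following (an approximation, not an exact recipe; flags and naming conventions vary by GHC version and platform):
ghc -dynamic -shared -fPIC MyLib.hs -o libMyLib.so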
Note that GHC tends to do a lot of cross-module optimisation, which somewhat reduces the usefulness of dynamic linking. (It's a bit like trying to "compile" a C++ template library...)
None of this matters of course if you're just interested in people not seeing your source code, or not having to supply a Haskell compiler to end-users.
I've spent a couple of days developing a program in Haskell, while learning the language. Now I realize that I'll need to call Arpack (a Fortran library) or Arpack++ (a C++ wrapper to Arpack) -- I can't find a good implementation of the Lanczos method with Haskell bindings. Do any more experienced Haskell programmers have an opinion on how difficult this would be?
I've been able to get ".so" ("shared object") versions of libarpack and libarpack++ installed through Ubuntu's repository, but I'm not sure that will suffice. I suspect I'm going to ultimately need to build Arpack++ from source code, which is possible, but I'm getting a lot of build errors, so it will take time. Is there any way to use just the ".so" files, without knowing exactly which version of the header files were used to generate them?
I'm considering using GreenCard, because it looks like the most well maintained Haskell/C bridge. I can't find much documentation though, so I'm wondering whether it will support C++ too.
I'm also starting to wonder whether I should rewrite my program in Python, and use scipy to call Arpack, but I've already sunk a couple of days into writing Haskell. I really like Haskell too, so I'm hoping I can make this work. I guess my overall question is this: What would be involved in making this work with Haskell?
Thanks much.
ELF is the standard format for executables and shared libraries, so accessing the code in these compiled modules is only a matter of knowing the function names. If I understand correctly, Fortran is interoperable with C. As a consequence, Fortran should be interoperable with any language that can use C bindings, including Haskell. FYI, you can find all names exported by a module (executable, shared object, or simple object archive) using the nm tool (it is usually available in all Linux distros by default). This of course only works if the binary file has not been "stripped", but AFAIK that is not common practice.
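For example, to list the dynamic symbols exported by the Ubuntu-packaged library (the path is an assumption; adjust it to wherever the .so actually lives):
nm -D /usr/lib/libarpack.so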
However, Haskell cannot use C++ bindings in a sane way, since C++'s polymorphic features require name mangling, and the method of this name transformation is highly compiler-dependent. This is a well-known problem and is not specific to Haskell. Of course, you could try to get a list of exported symbols from the C++ shared object and then bind them using the FFI, but... it isn't worth it.
As dsign said, you can use GHC's Foreign Function Interface feature to create bindings to foreign code. All you need are the library headers (and the library itself, of course). In the case of C, those would be header files (*.h), but since your library is written in Fortran, you have to find the header files' analogue in the library sources, refer to this page to match Fortran and C types, and then use this information to write the FFI bindings. It would be helpful to first write C bindings, i.e. write a C header. Then you can even use automatic FFI binding generators like c2hs.
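As a minimal sketch of what such a binding can look like, here is an FFI import of a simple Fortran routine. I use the BLAS routine ddot purely for illustration; real ARPACK routines such as dsaupd follow the same pattern but take many more arguments. The trailing-underscore symbol name and pass-by-reference arguments are the usual (though not universal) Fortran compiler convention.
{-# LANGUAGE ForeignFunctionInterface #-}
module FortranSketch where

import Foreign.C.Types (CDouble, CInt)
import Foreign.Marshal.Array (withArray)
import Foreign.Marshal.Utils (with)
import Foreign.Ptr (Ptr)

-- Fortran routines are typically exported with a trailing underscore
-- and take all of their arguments by reference.
foreign import ccall unsafe "ddot_"
  c_ddot :: Ptr CInt -> Ptr CDouble -> Ptr CInt
         -> Ptr CDouble -> Ptr CInt -> IO CDouble

-- Dot product of two vectors, marshalled to the Fortran calling convention.
dot :: [Double] -> [Double] -> IO Double
dot xs ys =
  with (fromIntegral (length xs) :: CInt)    $ \nPtr ->
  withArray (map realToFrac xs :: [CDouble]) $ \xPtr ->
  with (1 :: CInt)                           $ \incxPtr ->
  withArray (map realToFrac ys :: [CDouble]) $ \yPtr ->
  with (1 :: CInt)                           $ \incyPtr ->
    fmap realToFrac (c_ddot nPtr xPtr incxPtr yPtr incyPtr)
Compile and link against the Fortran library, e.g. ghc FortranSketch.hs -lblas; linking against -larpack would work the same way for the ARPACK routines.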
It may also be helpful to look through the C++ bindings. It is possible that they contain the kind of header file I've described above. If so, then writing FFI bindings will be no more difficult than writing them for any other library.
So, it is not entirely impossible, but it may require some thorough work. Writing bindings to scientific/purely computational libraries is way easier than writing them for some system library which does a lot of IO and keeps its own internal state, but since this library is not written in C... well, it may be advisable to invest your time in easier alternatives. I cannot say anything about scipy, I've never used it, but since Python as a language is much simpler than Haskell, it may be a good alternative.
I can tell you that using a C/Fortran library from Haskell, with the help of the Foreign Function Interface, would certainly be possible and not terribly complicated. Here is an introduction. In my understanding, you should be able to call anything with a C calling convention, and perhaps even Fortran, without needing to recompile the code. The only exception is things that look like function calls but are actually macros, in which case you will have to figure out what the macros do and reproduce them in Haskell.
As for GreenCard, I have never used it, so I cannot vouch for it.
Your second idea of using Python could potentially save you more than a couple of days. Sad as it is, I have never managed to get Haskell code to adapt easily to my changing requirements, while I find that trivial in Python. Of course, that could be a limitation of my skills with Haskell or my thinking process, rather than something to blame on the language.
Say I have a Haskell program or library that I'd like to make accessible to non-Haskellers, potentially C programmers. Can I compile it to C using GHC and then distribute this as a C source?
If this is possible, can someone provide a minimal example? (e.g., a Makefile)
Is it possible to use GHC to automatically determine what compiler flags and headers are needed, and then perhaps bundle this into a single folder?
Basically I'm interested in being able to write portions of programs in C and Haskell, and then distributing it as a tarball, but without requiring the target to have GHC and Cabal installed.
I'm interested in being able to write portions of programs in C and Haskell, and then distributing it as a tarball, but without requiring the target to have GHC and Cabal installed.
You're asking for an awful lot of infrastructure that you're unlikely to find. Remember that any Haskell program, even if it is going to be compiled to C, is almost certain to depend on a large, complex run-time system for its correct operation. At a bare minimum, that run-time system has to support garbage collection and lazy evaluation. So you have more than just a translation problem.
I suggest you tackle this problem as a software-distribution problem. Rather than a tarball, provide a package for your favored distribution platform (Debian, Red Hat, InstallShield, whatever). Personally, in order to reuse other people's efforts, I would aim for something that checks for Cabal, installs Cabal if needed, then uses Cabal to install the rest of what your users will need.
You can do this with jhc. It's a whole-program optimising compiler that compiles down to C. It doesn't have all the fancy extensions that GHC supports, though.
Even if you could, I wouldn't call it "C source". GHC can use C as part of its compilation system, but the generated C code is not even slightly readable. Even if it could be read and understood, it would make no sense to modify it because there is no way (apart from back-porting the changes into Haskell) to incorporate any modifications made by C hackers into future versions of your program.
The term "source" means the code that is written by a human and used to generate the program. In this case that is the Haskell. C generated by a compiler is not "source code", it is an intermediate representation.
You can't get there with GHC. Even when it compiles via C, GHC relies on manipulating the resulting assembly to shuffle segments around, on a huge runtime system, and on a lot of baggage.
On the other hand, you might have better luck if what you want is supported by the somewhat more limited feature set of John Meacham's JHC compiler, which generates fairly compact C output.
I know this is an old post, but I still wanted to also mention ajhc. Ajhc forked jhc with the plan of adding new features and later pushing the updates back to jhc.
Say I have a Haskell program or library that I'd like to make accessible to non-Haskellers, potentially C programmers. Can I compile it to C using GHC and then distribute this as a C source
You can compile to C, but the resulting C is not human-readable. You're better off writing header files and using the excellent C FFI alongside it. In any case, distributing the generated C seems like a fool's errand.
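To illustrate the FFI route, here is a minimal sketch of exporting a Haskell function to C callers (module and function names are made up):
{-# LANGUAGE ForeignFunctionInterface #-}
module Fib where

import Foreign.C.Types (CInt)

fib :: Int -> Int
fib n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)

-- Exported with the C calling convention; callable from C as "hs_fib".
foreign export ccall hs_fib :: CInt -> CInt
hs_fib :: CInt -> CInt
hs_fib = fromIntegral . fib . fromIntegral
Compiling with ghc -c Fib.hs generates a Fib_stub.h header that a C program can include; the C side must still initialise the Haskell runtime (hs_init/hs_exit) and link against it, which is why you cannot avoid shipping the RTS, as the next point notes.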
Basically I'm interested in being able to write portions of programs in C and Haskell, and then distributing it as a tarball, but without requiring the target to have GHC and Cabal installed.
I do not know of any solutions that do not involve GHC. You'd have to distribute at the very least the Haskell RTS.
Can I compile it to C using GHC and then distribute this as a C source?
No, it is not possible, but you can easily create an interface between Haskell and C by using Haskell's Foreign Function Interface (FFI).
You can find more examples here.