As we all know, when profiling Haskell applications, all dependencies have to be installed with profiling information. This is fine, but a problem arises with Haskell packages that have -auto-all in their .cabal files: I will always see their profiling information, even when it is irrelevant to me.
Allow me to present an example where this is problematic. I am building a little game engine, and before my game loop I do a bunch of work loading textures and such with JuicyPixels. This isn't code that's interesting to profile - I'm interested in profiling the game loop itself. However, because JuicyPixels built itself with -auto-all, there doesn't seem to be a way to exclude this information from profiling. As a result, I end up with hundreds of profiling lines that are simply noise.
Is it possible to strip out all of JuicyPixels' profiling information (or that of any library, in the general case)?
The comments suggest that this is a problem with the .cabal file for JuicyPixels (and if this keeps happening in other libraries, then it is their fault as well). I started a discussion on the Haskell Cafe (http://haskell.1045720.n5.nabble.com/ghc-prof-options-and-libraries-on-Hackage-td5756706.html), and will try to follow up on that.
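For context, the kind of field that causes this looks roughly like the following in a library's .cabal file (a sketch, not necessarily JuicyPixels' actual file):
library
  ...
  ghc-prof-options: -auto-all
When such a library is installed with profiling enabled, every top-level binding gets its own cost centre, which is why its entries flood downstream profiles.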
I am writing some simulation software for my own research and made a visualization tool as part of the project. This works perfectly fine on my workstation, and I can use it, for example, to monitor a simulation as it is running, or to visualize the log data later. However, I have now come to a point where I need to run simulations on a cluster, and OpenGL is neither available there nor needed for the actual simulations. But since the project depends on OpenGL, it will not build.
Now obviously I could create a separate branch without the OpenGL parts, which will probably be my short-term solution, but it seems like a bit of a pain to maintain.
I am not sure what the best long term solution would be. Ideally I'd like to have a setup that optionally builds the visualization part if OpenGL is available, and skips it if not. Does stack (or cabal) support this type of thing?
Another option would be to make the visualization part a different project, but this would make monitoring the simulation as it is running significantly more difficult.
What is the best way to solve this?
There are a couple ways you can do this.
The first and simplest would be to split up your code into two different packages. One would be the code without the OpenGL dependency, and the other would be the visualization tool using OpenGL. If there's no reason you can't do it this way, this is by far the best option.
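For instance (directory and package names hypothetical), a cabal.project that builds the two packages side by side might look like the following; stack.yaml has an equivalent packages list:
-- cabal.project
packages: ./simulation
          ./simulation-viz
The simulation package then stays free of the OpenGL dependency, while simulation-viz depends on it plus OpenGL.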
If you cannot do this, then you can use Cabal flags, as someone mentioned in the comments. An example of a .cabal file with flags:
name: mylibname
description: some description
...

flag opengl
  description: build opengl support
  default: False

library
  ...
  build-depends: base, containers, ...

  if flag(opengl)
    build-depends: OpenGL
    cpp-options: -DWITH_OPENGL
  ...
Now in your source files, you can do this:
{-# LANGUAGE CPP #-}
...
#ifdef WITH_OPENGL
someOpenGLCode
#endif
By default, your builds won't include the OpenGL parts. You can then ask either cabal or stack to build the package with OpenGL. With cabal, you can do so on the command line with the -f or --flags option, or in the cabal.project file with the flags: field. Someone else already linked to what looks like the equivalent stack documentation (though I don't know stack well enough to comment on its correctness).
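For example, using the package name mylibname from the sketch above, enabling the flag would look roughly like this (the stack line comes with the caveat above):
cabal build --flags="opengl"
stack build --flag mylibname:opengl
In a cabal.project file, the equivalent is a "package mylibname" stanza containing "flags: +opengl".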
Perhaps it's just better to describe my problem.
I'm developing a Haskell library, but part of the library is written in C, and another part actually in raw LLVM. To get GHC to spit out the code I want, I have to follow this process:
Run ghc -emit-llvm on both the code that uses the Haskell module and the "Main" module.
Run clang -emit-llvm on the C file
Now I've got three .ll files from above. I add the part of the library I've handwritten in raw LLVM and llvm-link these into one .ll file.
I then run LLVM's opt on the linked file.
Lastly, I feed the LLVM bitcode file back into GHC (which pleasantly accepts it), and it produces an executable.
This process (with appropriate optimisation settings, of course) seems to be the only way I can inline code from C, removing the function call overhead. Since many of these C functions are very small, this is significant.
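As a shell sketch, the pipeline described above might look roughly like this (file names are placeholders, and the exact way of getting .ll files out of GHC varies by version; -fllvm with -keep-llvm-files is one option):
# 1. Compile the Haskell modules via the LLVM backend, keeping the intermediate .ll files
ghc -O2 -fllvm -keep-llvm-files -c MyLib.hs Main.hs
# 2. Emit LLVM IR from the C file
clang -O2 -emit-llvm -S cbits.c -o cbits.ll
# 3. Link the generated IR together with the hand-written IR
llvm-link Main.ll MyLib.ll cbits.ll handwritten.ll -o combined.bc
# 4. Run LLVM's optimiser over the combined module so the small C functions can be inlined
opt -O3 combined.bc -o combined-opt.bc
# 5. Feed the optimised bitcode back to GHC to produce an executable, as described above
ghc combined-opt.bc -o myprogram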
Anyway, I want to be able to distribute the library, and for users to be able to use it as painlessly as possible whilst still gaining the optimisations from the process above. I understand it's going to be a bit more of a pain than an ordinary library (for example, you're forced to compile via LLVM), but making it as painless as possible is what I'm looking for advice on.
Any guidance would be appreciated. I don't expect a step-by-step answer, because I think it will be complex; just some ideas would be helpful.
Suppose I have a package with several executables, which I initially build using cabal build. Now I change one file that impacts just one executable; cabal seems to take about a second or two to examine each executable to see whether it's impacted or not. On the other hand, make, given an equivalent number of executables and source files, will determine in a fraction of a second what needs to be recompiled. Why the huge difference? Is there a reason cabal can't just build its own version of a makefile and go from there?
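(As an aside on that last point: GHC itself can emit make-style dependency rules, which is roughly what a hand-rolled Makefile build relies on; a sketch, with Main.hs as a placeholder:)
# generate Makefile-format dependency rules for Main.hs and everything it imports
ghc -M -dep-makefile .depend Main.hs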
Disclaimer: I'm not familiar enough with Haskell or make internals to give technical specifics, but some web searching does offer some insight that lines up with my proposal (I'm trying to avoid eliciting opinions by providing references). Also, I'm assuming your makefile is calling ghc, as cabal apparently would.
Proposal: I believe there could be several key reasons, but the main one is that make is written in C, whereas cabal is written in Haskell. This would be coupled with superior dependency checking in make (although I'm not sure how to prove this without looking at the source code). Other supporting reasons, as found on the web:
cabal tries to do a lot more than simply compile; for example, it appears to take steps with regard to packaging (https://www.haskell.org/cabal/)
cabal is written in Haskell, although the (GHC) runtime is written in C (https://en.wikipedia.org/wiki/Glasgow_Haskell_Compiler)
Again, not being overly familiar with make internals, make may simply have a faster dependency-checking mechanism and thereby track these changes better. I point this out because from the OP it sounds like the difference is significant enough that cabal may be doing a blanket check against all dependencies. I suspect this would be the primary reason for the speed difference, if true.
At any rate, these are open source and can be downloaded from their respective sites (haskell.org/cabal/ and savannah.gnu.org/projects/make/) allowing anyone to examine specifics of the implementations.
It is also likely one could see a lot of variance in speed based upon the switches passed to the compilers in use.
Hope this helps, or at least points you in the right direction.
I am trying to figure out a bug (a serious performance regression). Unfortunately, I wasn't able to pin down the cause by going back through many different versions of my code.
I suspect it could be some modifications to libraries that I've updated, not to mention that in the meantime I've updated to GHC 7.6 from 7.4 (and if anybody knows whether any laziness behaviour has changed, I would greatly appreciate it!).
I have an older executable of this code that does not have the bug, and so I wonder if there are any tools that can tell me which library versions I was linking against before - for example, by recovering the symbols.
GHC creates executables, which are notoriously hard to understand... On my Linux box I can view the assembly code by typing in
objdump -d <executable filename>
but I get back over 100K lines of code from just a simple "Hello, World!" program written in Haskell.
If you happen to have the GHC .hi files, you can get some information about the executable by typing in
ghc --show-iface <hi filename>
This won't give you the assembly code, but you can get some extra information that may prove useful.
As I mentioned in the comment above, on Linux you can use "ldd" to see what C-system libraries you used in the compile, but that is also probably less than useful.
You can try to use a disassembler, but those are generally written to disassemble to C, not anything higher level and certainly not Haskell. That being said, GHC compiles to C as an intermediary (at least it used to; has that changed?), so you might be able to learn something.
Personally, I often find viewing system calls in action much more interesting than viewing pure assembly. On my Linux box, I can view all system calls by running the program under strace (use Wireshark for the network-traffic equivalent):
strace <program executable>
This will also generate a lot of data, so it might only be useful if you know of some specific place where direct real-world communication (e.g., changes to a file on the hard disk drive) goes wrong.
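For instance, to narrow the output to file-related system calls only (a rough example; the available trace classes depend on your strace version):
strace -e trace=file <program executable>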
In all honesty, you are probably better off just debugging the problem from source, although, depending on the actual problem, some of these techniques may help you pinpoint something.
Most of these tools have Mac and Windows equivalents.
Since much has changed in the last 9 years, and apparently this is still the first result a search engine gives for this question (as it was for me, again), an updated answer is in order:
First of all, yes: while Haskell does not specify a bytecode format, bytecode is also just a kind of machine code, for a virtual machine, so for the rest of this answer I will treat the two as the same thing. The "Core" language, as well as the LLVM intermediate language, or even WASM, could be considered equivalent too.
Secondly, if your old binary is statically linked, then no matter what format your program is in, no symbols will be available to check out, because that is what linking does. This holds even with bytecode, and even with a classic static #include in simple languages. So your old binary will be no good, no matter what. And given the optimisations compilers perform, a classic decompiler will very likely never be able to figure out which optimised bits originally came from which libraries, especially with stream fusion and similar "magic".
Third, you can do what you asked with a modern Haskell program, but you need to have your binary compiled with -dynamic and -rdynamic, so that not only the C-calling-convention libraries (e.g. .so files) and the Haskell libraries, but also the runtime itself, are dynamically loaded. That way you end up with a very small binary, consisting only of your actual code, dynamic-linking instructions, and exact data about which libraries and runtime were used to build it. And since the runtime is compiler-dependent, you will know the compiler too. So this gives you everything you need, but only if the binary was compiled that way in the first place. (I recommend using such dynamic linking by default in any case, as it saves memory.)
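As a rough illustration (program and module names hypothetical), with such a dynamically linked build the library and runtime versions are visible directly in the binary's dependency list:
ghc -dynamic -o myprog Main.hs
ldd myprog
# the libHS*.so entries name the exact Haskell packages, their versions, and the GHC runtime used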
The last factor that one might forget, is that even the exact same compiler version might behave vastly differently, depending on what IT was compiled with. (E.g. if somebody put a backdoor in the very first version of GHC, and all GHCs after that were compiled with that first GHC, and nobody ever checked, then that backdoor could still be in the code today, with no traces in any source or libraries whatsoever. … Or for a less extreme case, that version of GHC your old binary was built with might have been compiled with different architecture options, leading to it putting more optimised instructions into the binaries it compiles for unless told to cross-compile.)
Finally, of course, you can profile even compiled binaries by profiling their system calls. This will give you clues about which part of the code acted differently and how. (For example, you might notice that the new binary floods the system with slow system calls where the old one used a single fast one. A classic OpenGL example would be using fast display lists versus slow direct calls to draw triangles. Or the program might use a different sorting algorithm, or have switched to a kind of data structure that fits your workload badly and thrashes a lot of memory.)
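For example, a quick way to compare two builds at the system-call level (binary names hypothetical):
strace -c ./old-binary > /dev/null
strace -c ./new-binary > /dev/null
# -c prints a summary table of call counts and time spent per system call, which makes this kind of regression easy to spot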
Is it possible to time profile a Haskell program without installing the profiling libraries?
When I pass the -prof option to ghc, I always get errors like this one:
src/MyPKG/FooBlah.lhs:7:7:
    Could not find module `Data.Time.Calendar':
      Perhaps you haven't installed the profiling libraries for package `time-1.1.4'?
      Use -v to see a list of the files searched for.
I know that the solution is to install profiling versions of the libraries with cabal, but sometimes this is a pain in the ass (sorry for the bad language).
I think it should be possible to profile my program anyway, with calls that have no symbols appearing as ???? or something like that in the output.
No, it's not possible. Building for profiling changes the runtime representation, and function calls carry extra parameters to keep track of the profiling data.
You have to install the profiling libraries to use GHC's profiler, even if it's a pain in the rear.
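For reference, this is roughly what enabling the profiling libraries looks like; the exact incantation depends on your cabal or stack version:
cabal build --enable-profiling
# or put "profiling: True" in cabal.project.local so dependencies are rebuilt with profiling too
stack build --profile
# older cabal-install: cabal install --enable-library-profiling <package>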