Can I actually build and run an executable from the same package as part of a test suite? - haskell

It struck me that I do not really know of a way to black-box test an executable packaged with Cabal.
With npm, for instance, I can run arbitrary shell commands, so I can certainly wire things up so that the necessary sources are transpiled and executed, and their side effects inspected.
Stack (as described here) builds the executables and publishes them on $PATH for the test suite, so I can easily run them.
But with Cabal, a test suite apparently cannot even depend on an executable, so there is no way to force the latter to be built. (Am I wrong about this?) And even then, I would have to know the path to the compiled binary.
How do I approach this problem?
The particulars of my situation are that an executable must extensively analyze the state of the system and branch accordingly, and I want to integration test that it does not forget to do so.
Note also that I am not at peace with running the relevant IO functions directly, because I do not find that integrative enough. Or, rather, I would like it to be possible to run the individual IO functions and also to run the program as a whole. In my case, there are testing shell scripts in place already, but I would really like to "bake them in".

It turns out that there is a (slightly hacky) way to do this, at least for now, using the new(ish) build-tool-depends Cabal field.

There has been some discussion (https://github.com/haskell/cabal/issues/5411, https://github.com/haskell/cabal/pull/4104#issuecomment-266838873) of build-tool-depends only being available at build time, with a separate field for executables that should be available when running a component. However, that separate run-time tool-depends field doesn't exist yet. Luckily, it seems that Cabal (at least 2.1 and 2.2) simply doesn't draw this distinction: executables listed in build-tool-depends are also available when cabal new-test runs a test suite. This means that you can use a pkg.cabal file that looks like this:
name: pkg

executable exe
  ...

test-suite test
  ...
  build-tool-depends: pkg:exe
And when you run the test suite, the executable will be built and available on the PATH.
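The test suite can then simply invoke the executable and inspect its output, e.g. via System.Process. A minimal sketch, assuming exe accepts a hypothetical --greet flag and prints hello:

import System.Exit (exitFailure)
import System.Process (readProcess)  -- from the process package

main :: IO ()
main = do
  -- "exe" is found on the PATH thanks to build-tool-depends
  out <- readProcess "exe" ["--greet"] ""
  if out == "hello\n"
    then putStrLn "exe behaved as expected"
    else exitFailure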

Related

Test for GHC compile time errors

I'm working on proto-lens#400, tweaking a Haskell code generator. In one of the tests I'd like to verify that a certain API has not been built. Specifically, I want to ensure that a certain type of program will not type-check successfully. I'd also have a similar program with one identifier changed which should compile, to guard against a typo breaking the test. Reading Extending and using GHC as a Library, I have managed to have my test write a small file and compile it using GHC as a library.
But I need the code emitted by the test to load some other modules, specifically the output of that project's code generator and its runtime environment with transitive dependencies. I have at best a very rough understanding of stack and hpack, which provide the build system. I know I can add dependencies to some package.yaml file to make them available to individual tests, but I have no clue how to access such dependencies from the GHC session set up as part of running the test. I imagine I might find some usable data in some environment variables, but I also believe such an approach might be undocumented and prone to break without warning.
How can I have a test case use GHC as a library and have it access dependencies expressed in package.yaml? Or alternatively, can I use some construct other than a regular test case to express a file with dependencies but check that the file won't compile?
I don't know if this applies to you, because there are too many details going way over my head, but one way to test for type errors is to build your test suite with -fdefer-type-errors and to catch the resulting exception (of type TypeError) at run time.
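A minimal sketch of that approach, with a deliberately ill-typed expression standing in for the generated code:

{-# OPTIONS_GHC -fdefer-type-errors #-}
import Control.Exception (TypeError, try, evaluate)

-- Deliberately ill-typed: with -fdefer-type-errors this still
-- compiles, and evaluating it throws a TypeError at run time.
ill :: Int
ill = "this is not an Int"

main :: IO ()
main = do
  r <- try (evaluate ill) :: IO (Either TypeError Int)
  case r of
    Left _  -> putStrLn "got the expected type error"
    Right _ -> fail "the program unexpectedly type checked"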

Makefile explanation. Understanding someone else's Makefile

I am relatively new to programming on Linux.
I understand that Makefiles are used to ease the compiling process when compiling several files.
Rather than writing "g++ main.cpp x.cpp y.cpp -o executable" every time you need to compile and run your program, you can put that into a Makefile and run make in that directory.
I am trying to get an RPi and an Arduino to communicate with each other over nRF24L01 radios, using tmrh20's library here. I have been successful using tmrh20's Makefile to build the executable needed (on the RPi). I would, however, like to use tmrh20's library to build my own executables.
I have watched several tutorial videos on Makefiles but still cannot seem to piece together what is happening in tmrh20's.
The Makefile (1) in question is here. I believe it is somehow referencing a second Makefile (2) (for filenames?) here. (Why is this necessary?)
If it helps anyone understand (it took me a while): I had to build with SPIDEV (per the instructions here) using the Makefile (3) in the RF24 directory, which produced several object files that I think are relevant to Makefiles (1) and (2).
How do I work out, from tmrh20's Makefile, what files I need in order to write my own (if that makes sense)? He seems to use variables in his Makefile that are not defined. Or are they perhaps defined elsewhere?
Apologies for my poor explanation.
The canonical sequence is not just make and make install. There is an initial ./configure step (such a file is here) that sets up everything and generates several files used in the make steps.
You only need to run this configure script successfully once, unless you want to change build parameters. I say "successfully" because the first execution will usually complain that you are missing libraries or header files. But once ./configure runs without errors, make and make install should run without errors.
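In other words, the canonical sequence from a fresh checkout is (sudo only if you install system-wide):

./configure
make
sudo make install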
PS: I didn't try to compile it, but since the project has a rather comprehensive configure script, it is likely complete, and you shouldn't need to tweak the makefiles if you follow the usual procedure.
The reason for splitting the Makefiles in the way you've mentioned and linked to here is to separate the definition of the variables from the implementation. This way you could have multiple base Makefiles that define their PROGRAM variable differently, but all do the same thing based on the value of that variable.
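A minimal sketch of that pattern (the file names, variables, and sources are made up for illustration; recipe lines must start with a tab):

# Makefile -- defines the variables, then pulls in the shared rules
PROGRAM = my_radio_app
SOURCES = main.cpp radio.cpp
include common.mk

# common.mk -- generic rules driven by the variables defined above
OBJECTS = $(SOURCES:.cpp=.o)

$(PROGRAM): $(OBJECTS)
	g++ $(OBJECTS) -o $(PROGRAM)

%.o: %.cpp
	g++ -c $< -o $@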
In my own personal opinion, I see some value here, but there are very many ways to skin this proverbial cat.
Having learned GNU Make the hard way, I can only recommend you do the same. There's a slightly steep curve at the beginning, but once you get the main concepts down, following other people's Makefiles gets pretty easy.
Good luck: https://www.gnu.org/software/make/manual/html_node/index.html

How to use the binary output of a Cargo project as the input of another one?

In order to reduce the executable size of a Rust program (called runtime in my code), I am trying to compress it and then include it in a second program (called szl) that decompresses it and executes it.
I have done that by using a Cargo build script in szl that opens the output binary from runtime, compresses it, and then generates a file that is ready for use by include_bytes!.
The issue with this approach is that the dependencies are not handled properly. For example, Cargo may try to build szl before runtime (and fail), and when the source code of runtime is modified, szl is not rebuilt.
Is there a way to tell Cargo that szl depends on the binary from runtime (and transitively on the source code of runtime), or should I use another approach such as an external Makefile?
While not exactly your use case, you might get it to work with the links manifest key. It would allow you to express a dependency between the two programs and you can pass more information with DEP_FOO_KEY variables.
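A sketch of how that could look, assuming runtime's build script emits the path of its compressed binary (the key name here is illustrative):

# runtime/Cargo.toml
[package]
name = "runtime"
links = "runtime"   # requires a build script; at most one package may claim this value
build = "build.rs"

# runtime/build.rs can then print, for example:
#     println!("cargo:binary_path={}", path_to_compressed_binary);
# and a crate that depends on runtime will see that value in its own
# build script as the environment variable DEP_RUNTIME_BINARY_PATH.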
Before you go to such drastic measures, it might be worth trying the other known strategies for reducing Rust binary size first, such as calling strip to remove debug symbols, enabling LTO, or building with panic=abort.
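Those settings can be collected in Cargo.toml; a typical size-oriented release profile might look like this (exact gains vary per project):

[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # enable link-time optimization
codegen-units = 1   # better optimization at the cost of build time
panic = "abort"     # drop the unwinding machinery
strip = true        # strip symbols (supported natively since Cargo 1.59)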

Arranging auxiliary tasks for a Haskell project

There are some repetitive auxiliary tasks that I usually have to run when developing or testing a project. For example: downloading some data, setting up the database, cleaning the logs, etc. In Ruby land, they are handled by rake while other languages prefer make or something else (tasks occasionally depend on other tasks, so one task may need to run the subtasks it depends on).
So, is there some conventional way to organize those tasks in a Haskell project?
I would assume that cabal could be used for that, but not all of those auxiliary tasks are about running Haskell code: sometimes it's just a case of performing rm -r logs/*.log or downloading some data with wget or curl. Would it make sense to make cabal's test target depend on other cabal targets that, ugh, run shell scripts/commands from Haskell code? (If it's possible to have dependent targets in cabal at all?)
Alternatively, I could use make, but would "an average haskeller" (an "outside" project contributor, for example) find that intuitive? I believe one would first try cabal test before discovering that it requires setting up the database for testing first, then running a whole chain of other tasks. Would one notice a Makefile in the first place?
I couldn't find any recipes for handling such auxiliary tasks in Haskell projects.
As far as I know, there is no de facto standard tool for this in Haskell projects.
But recently I heard of Shake, a monadic build system written in Haskell.
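A minimal sketch of what such tasks could look like in Shake (the task names, files, and commands are made up for illustration):

import Development.Shake

main :: IO ()
main = shakeArgs shakeOptions $ do
  -- invoke as e.g.: runghc Shakefile.hs clean-logs
  phony "clean-logs" $
    removeFilesAfter "logs" ["//*.log"]
  -- tasks can `need` files and shell out freely
  phony "setup-db" $ do
    need ["data/seed.sql"]            -- hypothetical data file
    cmd_ "createdb" "myproject_test"  -- hypothetical shell command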

How can I make Cabal search for external programs?

I'm trying to write a Haskell program which requires the output of external programs (such as lame, the mp3 encoder). While declaring a dependency on a library is easy in cabal, how can one declare a dependency on an executable?
You can't currently add a dependency on external executables in the .cabal file, other than on the list of known build tools (see build-tools: alex, for example).
You can, however, specify build-type: Configure, and then use a separate configure script to search for any additional binaries (for example, an autoconf-based configure script is perfectly fine, and can be used to set constants in your source).
Note that searching for a runtime dependency -- such as a lame encoder -- at compile time may be a bad idea, as the build and run environments are different on many package systems. It might be a better idea to dynamically search for required binaries at program startup.
For example, hmp3 hunts for mpg321 with
mmpg <- findExecutable (MPG321 :: String)
where MPG321 is the name of the program determined via a ./configure option. For more information, see the haddocks:
http://hackage.haskell.org/packages/archive/directory/latest/doc/html/System-Directory.html#v:findExecutable
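A minimal sketch of checking for an external binary at startup (lame here is just the example from the question):

import System.Directory (findExecutable)
import System.Exit (die)

main :: IO ()
main = do
  -- look up the binary on the PATH at program startup
  mLame <- findExecutable "lame"
  case mLame of
    Nothing   -> die "lame not found on PATH; please install it first"
    Just path -> putStrLn ("using encoder at " ++ path)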
