Is it possible to compile "only a file" in a cabal project? - haskell

In JVM-based languages, you can compile a single file to a .class file and run the program again without necessarily recompiling all the other files.
Is it possible to do this in Haskell? Is it imperative to compile and link all the files in the project? If so, why?
What if there is no binary, you are only installing a library?

For GHC, you can change and recompile a single module without having to recompile the modules that depend on it, provided the exposed interface doesn't change. GHC's --make mode (the default as of ghc-7.*) checks whether recompilation is necessary and recompiles only those modules for which it can't determine that it isn't.
If you have a cabal package and run cabal build after changing one module, you can see from the compiler output that it generally doesn't recompile all modules in the package, only the changed module and (possibly) the ones depending on it.
If you build an executable, that of course has to be relinked, but many of the old object files can be reused.
If you build a library, the library archive of course has to be rebuilt, but many of the old object files can be reused.
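
A minimal sketch of that recompilation check with plain ghc --make, assuming a hypothetical two-module program where Main.hs imports Utils.hs:

ghc --make Main.hs    # first run: compiles Utils.hs and Main.hs, then links
# ... edit Utils.hs without changing anything it exports ...
ghc --make Main.hs    # recompiles only Utils.hs; Main.o is reused because
                      # Utils.hi (its interface file) is unchanged, then relinks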

Related

How to compile with make but also include all dependencies

I'm compiling a C++ program on Linux, and I can run make and it all compiles, but when I need to downgrade or change one of its dependencies for another program, it breaks. I was wondering if it is possible to create a standalone executable with the dependencies bundled inside. There aren't many dependencies, so size isn't an issue.
So, what you're asking is: can you link with static versions of libraries (which are included in the program directly) instead of dynamic versions of libraries (shared libraries), which are kept external to your program?
The answer is "yes", but it's not always straightforward. First you have to ensure you actually have the static versions of the libraries installed in your system: the static and dynamic libraries are different files and often the "standard" installation provides only the dynamic library.
If you're already compiling code against those libraries you probably already have the static libraries installed because, at least on GNU/Linux systems, the static libraries are often included in the "dev" packages along with the header files etc. needed to compile code.
To make this work you need to modify your linker command line. If you have a sufficiently new version of the binutils package (which provides the linker), you can change your link line to replace arguments like -lssl -lcrypto with arguments like -l:libssl.a -l:libcrypto.a (don't forget the colon after the -l) and that should do it.
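
For example, a hypothetical link line (the program and object names are invented; note that statically linked OpenSSL typically also needs -ldl and -lpthread on GNU/Linux):

gcc -o myprog main.o util.o -l:libssl.a -l:libcrypto.a -ldl -lpthread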

Which libraries should go into a pkg-config file as dependencies?

I'm writing a shared library that itself depends on boost and pcl libraries.
When generating the .pc file for my library, should I also add all these libraries to it as dependencies?
It's been a long time since I last studied these things and I'm a bit confused about how this works on Linux. When my test app links against my lib, I have to add all these pcl and boost libs to the build again, even though the lib has already been linked against them.
But when I look at the deps of libQtGui.so, for example, it has tens of all kinds of libs it links to, but I don't need to make my app link to those libs...only -lQtGui is enough.
I have just used CMake and link_libraries to add boost and pcl libs.
When generating the .pc file for my library, should I also add all these libraries to it as dependencies?
It depends on the API of your library:
if the public (i.e. installable) headers of your lib use boost/pcl (i.e. have #include <boost/...>) -- in other words, you used the PUBLIC (or INTERFACE) keywords when linking your library against boost/pcl with CMake's target_link_libraries -- then yes, you need to add them;
otherwise, it depends on what exactly you have at the end -- i.e. does your DSO have DT_NEEDED entries for the boost/pcl libs (most likely) or not (you can check with ldd <your-lib>.so)? In the latter case, you also need to add those dependencies to the *.pc file.
Also, in the case of a binary dependency on boost/pcl (I don't know whether the latter ships any DSOs or not), please make sure you specify the exact location of the linked libs -- because a user may have multiple (co-existing, potentially incompatible) boost installations, or may later upgrade to another (binary-incompatible) version, and you can't really do anything about that. It is important to be linked against the same (or at least a binary-compatible, which is rather hard to guarantee for boost) library.
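
To illustrate, a hypothetical mylib.pc (the names, versions, and paths are invented; Boost ships no .pc files of its own, so its flags would have to be spelled out directly in Cflags/Libs):

prefix=/usr/local
libdir=${prefix}/lib
includedir=${prefix}/include

Name: mylib
Description: Example wrapper around pcl
Version: 1.0.0
# public dependency: mylib's installed headers include pcl headers
Requires: pcl_common-1.8
# private dependency: only needed when linking mylib statically
Requires.private: pcl_io-1.8
Libs: -L${libdir} -lmylib
Cflags: -I${includedir}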
I have just used CMake and link_libraries to add boost and pcl libs.
Please read something about "Modern CMake" and stop using link_libraries :-) -- use target_link_libraries instead…
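
A minimal sketch of the modern style (the target and component names are assumptions; the visibility keywords express exactly the public/private distinction discussed above):

find_package(Boost REQUIRED)
find_package(PCL REQUIRED COMPONENTS common)

add_library(mylib SHARED src/mylib.cpp)
target_link_libraries(mylib
    PUBLIC  Boost::headers     # Boost types appear in mylib's public headers
    PRIVATE ${PCL_LIBRARIES})  # pcl is an implementation detail of mylib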

Will ghc-options of an executable override ghc-options of linked libraries?

I have a main Haskell executable program with a cabal file. There, I specify the ghc-options.
This executable links against other libraries out in the wilderness. Will the ghc-options of the cabal files of these libraries be ignored?
I'm basically wondering whether the executable's ghc-options will be used for the entire enchilada (main executable + libraries).
Additional bounty notes: Please also expand on chi's comment below, namely, what exactly is the difference between ghc-options for compiling vs. linking. Which are which, and which are never needed in libraries? Maybe you can talk about some of the most important ones, such as the -threaded mentioned below.
Under the normal cabal-install workflow (and the stack workflow built atop it), flags specified in your Cabal file are local to your package: they are not applied to your dependencies, which are built with the ghc-options from their own Cabal files, so changing yours should not trigger rebuilds of theirs. Similarly, options specified with --ghc-options on the command line are local to your package.
To your specific questions about -threaded, this flag has no effect on library code (as cabal-install will tell you), only on executables.
A brief listing of GHC flags is available here. In particular, note that -threaded is listed under Linking options, with a further link to Options affecting linking. From this information, we conclude that -threaded is only meaningful for executables because it signals to GHC that we wish to use the threaded runtime. If your package doesn't provide an executable, it has no need for any runtime, threaded or otherwise.
For a high-level explanation of compiling vs. linking: they are two of the steps between source code and executable. Compilation is the process of producing an object file from source code. Linking is the process of connecting the numerous object files that compose your executable. When a module is compiled, it has no idea whether a function it calls, say map, actually exists unless you defined it there; it is simply compiled under the assumption that the name will be resolved later. The linking step is where all those names are made available and meaningful. In the case of -threaded, we are making the linking process aware of the threaded runtime, which all code calling on the runtime will use.
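
A sketch in .cabal terms (package and module names are invented): pure compilation flags make sense on any stanza, while -threaded belongs on the executable, because that is where a runtime gets linked in:

library
  exposed-modules: MyLib
  build-depends:   base
  ghc-options:     -Wall -O2        -- compile-time options

executable my-app
  main-is:         Main.hs
  build-depends:   base, mypackage
  ghc-options:     -Wall -threaded  -- link-time flag: picks the threaded runtime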
Since I don't know if you're using the standard cabal workflow, stack, or the new cabal.project workflow, here's a digression to discuss this behavior in the cabal.project case.
This is actually an open bug, right now.
The bug is tracked as issue 3883 on the Cabal GitHub (and somewhat in the related issue 4247).
Relevant to your question: under the current behavior, specifying flags in a ghc-options stanza in a cabal.project file causes your dependencies to be compiled (or recompiled, as the case may be) with those flags.
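
For concreteness, a hypothetical cabal.project with such a stanza (exact syntax varies across cabal versions; the package * form applies the options to every package in the build plan, dependencies included):

packages: .

package *
  ghc-options: -O2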

How to use two different compilers for different targets in a .cabal file?

When I run cabal build it uses some Haskell compiler to build the executables and/or test-suites in my .cabal file.
Can I control which compiler is used for the different targets? Ideally, I would like to have separate build targets that use ghc and ghcjs in the same .cabal file. It seems to me that someone might want to use ghc and hugs, or two versions of ghc, in the same project. Is this currently possible?
Also, how does cabal decide which compiler to use when running cabal build? I saw there is a compiler option in my ~/.cabal/config file, but uncommenting it and changing it from ghc to ghcjs did not seem to change what cabal build does.
The compiler to use is determined during the configure step (or during an install step's implicit configure step, which does not share configuration options with any previous explicit configure step). It is also determined by the entity building the package and cannot be influenced by the person writing the package. Probably what happened to you is that a previous cabal build implicitly invoked the configure step and chose a compiler; subsequent builds keep that previous choice of compiler over the one set in your global configuration file. You can countermand that by simply running cabal configure again by hand.
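
For example, to pick the compiler explicitly (assuming ghcjs is on your PATH):

cabal configure --with-compiler=ghcjs   # or the short form: -w ghcjs
cabal build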
It is possible to cause a build to fail with the wrong implementation, e.g.
library
  if impl(ghc)
    buildable: False
will prevent cabal from trying to build the package using GHC. However, this isn't really useful for building separate parts of a package with separate compilers, as cabal will refuse to install a package unless it can build the whole thing with a single compiler.
Probably the best way forward is to make separate packages for things that should be built by separate compilers.
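
A plausible layout under that approach, assuming two hypothetical sibling packages:

(cd server-pkg && cabal configure --with-compiler=ghc   && cabal build)
(cd client-pkg && cabal configure --with-compiler=ghcjs && cabal build)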

How do I statically compile a C library into a Haskell module that I can later load with the GHC API?

Here is my desired use case:
I have a package with a single module that reads HDF5 files and writes some of their data to Haskell records. To do the work, the library uses the bindings-hdf5 package. Here is the build-depends from my cabal file. reader-types is a package I wrote that defines the types of the Haskell records that contain the read-in data.
build-depends: base >=4.7 && <4.8
, text
, vector
, containers
, bindings-hdf5
, reader-types
Note that my cabal file does not currently use extra-libraries or ghc-options. I can load my module, src/Mabel.hs, in ghci as long as I specify the required hdf5_hl library:
ghci src/Mabel.hs -lhdf5_hl -L/long/nixos/path/lib
and within ghci, I can run my function perfectly fine.
Now, what I want to do is compile this library/module into a single, compiled file that I can later load with the GHC API in a different Haskell program. By single file, I mean that it needs to run even if the hdf5_hl library does not exist on the system. Preferably, it would also run even if text, vector, and/or containers are missing, but this is not essential because reader-types requires those types anyway. When loading the module with the GHC API, I want it to load in already compiled form, and not run interpreted.
My purpose for doing this is that I want the self-contained file to act as a single, pre-compiled plugin file that is later loaded and executed by a different Haskell executable. Other plugins might not use hdf5 at all, and the only package they are guaranteed to use is reader-types, which essentially defines the plugin interface types.
The hdf5 library on my system contains the following files: libhdf5_la.la, libhdf5_hl.so, libhdf5.la, libhdf5.so, and similar files that have the version number in the file name.
I have done a lot of googling, but am getting confused by all the edge cases I am finding. Here are some examples that I'm either sure don't fit my case, or I can't tell.
I do not want to compile a Haskell library to use from C or Python, only a Haskell program using GHC API.
I do not want to compile C wrappers for a C++ library into a Haskell module because the bindings already exist and the library is already a C library.
I do not want to compile a library that is entirely self-contained because, since I am loading it with the GHC API, I don't need the GHC runtime included in the library. (My understanding is that plugins must be compiled with the same ghc version they will be loaded with by the GHC API.)
I do not want to compile C bindings and the C library at the same time because the C library is already compiled and the bindings are specified in separate package (bindings-hdf5).
The closest resource for what I want to do is this exchange on the mailing list from 2009. However, I added extra-libraries: hdf5_hl or extra-libraries: hdf5 to my cabal file, and in both cases the resulting .a, .so, .dyn_hi, .dyn_o, .hi, and .o files in dist/build are all the exact same size as without using extra-libraries, so I'm confident it is not working correctly.
What changes to my cabal file do I need to make to create a self-contained, standalone file that I can later load with the GHC API? If this is not possible, what are the alternatives?
Instead of using the GHC API, I am also open to using the plugins library to load the plugin, but the self-contained requirements are still the same.
EDIT: I do not care what form the compiled "plugin" takes (I assume an object file is the right way), but I want to load it dynamically from a separate executable at run time and execute functions it defines with known names and known types. The reason I want a single file is that there will eventually be other, different plugins, and I want them all to behave the same way without having to worry about lib paths and dependencies for each one. A compiled, single file is a simpler interface for this than zipping/unzipping archives that include Haskell object code and its dependencies.
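
For reference, the loading step on the host side might look like this with the plugins package mentioned above (the object path, search path, and symbol name are all assumptions, and the plugin object must have been produced by the same GHC version as the host; this sketch does not by itself solve the bundling of hdf5_hl):

import System.Plugins (load, LoadStatus(..))

main :: IO ()
main = do
  -- resolve a symbol from a pre-compiled plugin object file
  status <- load "plugins/Mabel.o"  -- compiled plugin (hypothetical path)
                 ["plugins"]        -- directories searched for .hi files
                 []                 -- extra package.conf files, if any
                 "runPlugin"        -- exported name to look up (assumed)
  case (status :: LoadStatus (IO ())) of
    LoadSuccess _m act -> act                 -- run the loaded action
    LoadFailure errs   -> mapM_ putStrLn errs -- report loader errors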