I have three data constructors, say A, B and C, that are defined in the files A.hs, B.hs and C.hs, and the files are in the directory project-utils.
Now I want to use these data constructors in parts of other projects. These projects reside in totally different directories.
How do I import the data and type constructors of A, B and C in such project files?
Thanks to the first answer given below, I realized that I am looking for a skeleton to organize such a project in a better way. I searched but could not find any such project skeleton.
The link provided there contains many things that are described in a vague manner. For example, on line 5 there it is simply written as "..."
What I am looking for is:
The skeleton project should not be a very simple "single-file" project as is given on the Haskell site, but it should NOT be overly complex with tons of dependencies etc. that we see in many projects on Hackage.
Edit: I changed the title to reflect my problem in a better way. Sorry for the inconvenience.
Make a cabal pkg out of them and install that package locally.
Follow a directory structure like the one described here and use those constructors within a single project rather than across projects. The structure mentioned is basically the structure of a cabal package.
Manually add the input source when compiling with ghc or loading into ghci.
Example
ghci -i project-utils/A.hs Foo.hs
where Foo.hs uses elements exported by A.hs
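For concreteness, a hypothetical pair of files might look like this (the Widget type and its constructors are invented for illustration); the (..) in the export and import lists is what brings the data constructors, not just the type, into scope:

-- project-utils/A.hs (hypothetical)
module A (Widget(..)) where    -- (..) exports the data constructors along with the type

data Widget = Widget String | EmptyWidget

-- Foo.hs (hypothetical)
module Main where

import A (Widget(..))          -- brings Widget and EmptyWidget into scope

main :: IO ()
main = case Widget "demo" of
         Widget s    -> putStrLn ("got a widget: " ++ s)
         EmptyWidget -> putStrLn "got an empty widget"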
May not be exactly what you're looking for, but for future readers of this question, a Haskell skeleton/template project was just released here:
https://github.com/tfausak/haskeleton
It does add some dependencies like hlint and hspec. Here is the blog post, which goes through each of the individual decisions:
http://taylor.fausak.me/2014/03/04/haskeleton-a-haskell-project-skeleton/
I found this "how to write a Haskell program" link to be a handy reference.
@Tem Pora: you need to install yesod and yesod-bin. This link talks more about the scaffolding.
cabal install yesod
cabal install yesod-bin
cd <path-to-project-dir>
yesod init
Hope this helps.
I am trying to install a C++ library called FastAD (https://github.com/JamesYang007/FastAD#user-guide) in Rcpp, but the installation instructions are generic (not specifically for Rcpp).
It would be greatly appreciated if someone could give me some guidance on how to install it and be able to #include the files.
FastAD is a header-only library which only depends on Eigen3. That makes for a pretty straightforward application for Rcpp and friends.
First, rely on the RcppEigen.package.skeleton() function to create the barebones RcppEigen-using package.
Second, we copy the FastAD library into inst/include. We add a PKG_CPPFLAGS variable in src/Makevars to let R know to source it from there; that is all the compiler needs with a header-only library. (Edit: We also set CXX_STD=CXX17, unless one has a new enough compiler or R (currently: r-devel) which already defaults to C++17.) A sketch of this Makevars is shown below, after the last step.
Third, we create a simple example in src/ based on an example in FastAD. We picked the Black-Scholes example.
Fourth, minor cleanups like removing the hello* stanza files.
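For reference, the src/Makevars mentioned in the second step can be as small as the following sketch; the include path assumes the FastAD headers were copied to inst/include as described above:

PKG_CPPFLAGS = -I../inst/include
CXX_STD = CXX17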
That is pretty much it. In its embryonic form the package is now here on GitHub. An example is
> library(RcppFastAD)
> blackScholesExamples()
56.5136
0.773818
51.4109
-0.226182
>
I have a project with a library in a few directories. The cabal file is produced with hpack and looks OK. The project builds with cabal build and the main can be run with cabal run xx.
Using the REPL in VS Code, I occasionally get
Could not load module ‘GIS.Subdivisions’
It is a member of the hidden package ‘CatCoreConcepts-0.2’.
Perhaps you need to add ‘CatCoreConcepts’ to the build-depends in your .cabal file.
The package is, of course, listed in the dependencies. The error does not always occur, and I assume it is a problem with some data cached by the VS Code Haskell (HLS) plugin. Is there a simple way to clean the plugin's cache? Restart HLS and Developer: Reload Window in VS Code have no effect.
I have had the problem where VS Code thinks modules are hidden. There are many answers on SO, such as this one. A possible solution, which hopefully helps:
ghc-pkg expose CatCoreConcepts
This is not an answer to the main question, but to some OP comments.
I'd say the cabal documentation is good (but lengthy).
A brief summary as I understand it (not official documentation); a minimal .cabal sketch is given after the list:
A Package is a set of components
A component is a set of modules (haskell files)
The module name and the file name must be equal, except for entrypoints (see below)
Components can be classified in two groups:
runnables: tests, benchmarks and executables
runnables must have a unique main function, called the entrypoint
In the cabal file, you can use the keywords executable, test-suite and benchmark to define runnables
Within each runnable section in the cabal file there must be a field main-is pointing to the file with the entrypoint
The file name of the entrypoint can be anything, but the module name must be Main.
Notice that runnable components can have multiple modules (Haskell files). For example, you can have a test suite consisting of a file with auxiliary functions and another file with the main function. Together, both files form one component (a test-suite in this case)
non-runnables: libraries
In the cabal file, you can use the keyword library to define a non-runnable component
For libraries you must specify the modules that are exposed, using the exposed-modules keyword.
Runnable components can import non-runnable ones, even when they are defined in the same package, but not vice versa.
As a good practice, each component should live in its own folder. Some libraries expect a naming convention, for example tests in a folder named test with files following a test_XXXX.hs pattern (this is common across programming languages, not only Haskell)
When components are kept in separate folders, the hs-source-dirs field must be specified to point to the folder with the Haskell files
(the term runnable is not part of the official docs, but it is the way I understand it)
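To make the above concrete, here is a minimal sketch of a cabal file with one library and one test-suite; the package, module and folder names are made up for illustration:

name:               example-pkg
version:            0.1.0.0
build-type:         Simple
cabal-version:      >=1.10

library
  hs-source-dirs:   src
  exposed-modules:  Example.Core
  build-depends:    base
  default-language: Haskell2010

test-suite example-test
  type:             exitcode-stdio-1.0
  hs-source-dirs:   test
  -- the entrypoint file name is free, but its module must be Main
  main-is:          Spec.hs
  other-modules:    Example.TestHelpers
  build-depends:    base, example-pkg
  default-language: Haskell2010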
stack works pretty much the same as cabal, since the former is just a different front end for the latter. So, aside from different keywords and the YAML format, all of the above can be applied to stack (notice that cabal updates more often than stack, so some features supported by cabal can be missing in stack).
If you find this useful, I might open an issue against the cabal docs to include it.
The Haskell project I am working on generates code (and tests for it) that is intended to be used as an independent Haskell library. I want to wrap it in a cabal project so it can be included as a dependency.
I searched for a library interface to cabal, so that I can create a cabal project in a given directory by calling some functions, but found none.
I could, of course, just run bash commands from Haskell, but it looks ugly to me.
Is there any tool that will solve my problem in a nice way?
You want the Cabal package. You can parse an existing cabal file, change stuff in the data structures, and regenerate the text representation.
Edit in answer to comment:
I don't know of any tutorials. The links I gave are for the Haddock docs, and the mapping between data types and Cabal file text is pretty straightforward. So you should probably start by writing the code to produce a PackageDescription value and then call writePackageDescription on it.
Note the existence of emptyPackageDescription, which lets you just specify the fields you want.
(Removed link to pretty printer class because PackageDescription isn't a member.)
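As a hedged sketch of that approach (module and field names here follow older Cabal releases, where writePackageDescription lives in Distribution.PackageDescription.Parse; newer Cabal versions have moved or renamed parts of this API, so adjust to the version you build against):

-- Sketch only: emits a minimal .cabal file from a PackageDescription value.
import Distribution.Package (PackageIdentifier(..), PackageName(..))
import Distribution.PackageDescription (PackageDescription(..), emptyPackageDescription)
import Distribution.PackageDescription.Parse (writePackageDescription)
import Data.Version (Version(..))

main :: IO ()
main = do
  let pd = emptyPackageDescription
        { package  = PackageIdentifier (PackageName "generated-lib") (Version [0, 1] [])
        , synopsis = "Library emitted by our code generator"
        }
  -- Pretty-prints the record back into cabal-file syntax on disk.
  writePackageDescription "generated-lib.cabal" pd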
Suppose there are three projects A, B and C. The modules in C will be shared by A and B. Because there is no equivalent of Java's CLASSPATH, do we need to use absolute paths when importing modules from C?
Any suggestion is appreciated!
I urge you to cabalise your projects.
But if for some reason you don't want to do this, then the nearest thing to the Java classpath is the -i switch to ghc. Note that the current directory needs to appear explicitly in the list.
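For example, with a hypothetical layout where project A's Main.hs sits in ~/code/A and the shared modules of C sit in ~/code/C/src, building A from its own directory might look like this (paths are made up; note the explicit . for the current directory):

ghc --make -i. -i../C/src Main.hs

The same -i flags work when loading the code into ghci.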
The canonical way is to choose carefully where they should sit in the module hierarchy and make each project into a fully featured Cabal package, then install them locally so that they're part of your compiler's local namespace.
This way the modules of each are available to any source code you're writing.
If (for example) you use the Leksah IDE, it'll do a lot of the work for you.
If by "projects" you mean Cabal packages, the standard thing to do would be to export whatever modules are necessary from C and then have A and B depend on the C and import them. It's not a good idea to have source files in a package directly depend on files that are aren't part of that package.
I am the author of the operational package, which includes example code. I would like this example code to be hscolored and installed together with the API documentation, which is generated by Haddock.
I probably have to use a custom Cabal build type and create a user hook for the Haddock phase. However, I never managed to make this work. Hence, my question is:
How to include full modules as example code in Haddock?
Could you give an example of a Cabal user hook that applies hscolour to an additional source code file example.hs and joins the result with the generated Haddock documentation?
I'm a total Haskell newbie and this is a shot in the dark, but couldn't you use hscolour to output the code as HTML and then do something along the lines of cabal haddock --executables --hyperlink-source to include the colorized HTML?
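Building on that idea, here is a hedged sketch of a custom Setup.hs (it requires build-type: Custom in the cabal file) that runs HsColour after the Haddock step; the example file location and the output path under dist/doc are assumptions and will likely need adjusting:

-- Setup.hs: sketch of a post-Haddock hook, not a tested drop-in.
import Distribution.Simple (UserHooks(postHaddock), defaultMainWithHooks, simpleUserHooks)
import System.Process (readProcess)

main :: IO ()
main = defaultMainWithHooks simpleUserHooks
  { postHaddock = \_args _flags _pkgDescr _localBuildInfo -> do
      -- HsColour prints HTML to stdout; capture it and drop it next to
      -- the generated Haddock pages (output directory assumed here).
      html <- readProcess "HsColour" ["-html", "doc/example.hs"] ""
      writeFile "dist/doc/html/operational/example.html" html
  }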