I just started looking into haskelldb as a more powerful companion to persistent, since I need a more powerful tool for querying the database. Almost immediately I ran into difficulties with datatypes; in particular, I use Data.Text quite extensively, along with UTCTime and some custom datatypes. Unfortunately, although HDBC supports these datatypes quite well, haskelldb hides all of this and you have to write your own conversions starting from String input.
I don't want to duplicate the work already done for HDBC; what do you suggest to do in this case?
I think I will probably add a method getHdbcValue to the GetInstances class, so that I can write simple GetValue instances that leverage the HDBC infrastructure; are there any better ideas? Am I missing something obvious?
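For reference, HDBC's Convertible-based conversions already handle these types directly, which is exactly the machinery I'd like to reuse; a minimal sketch (assuming the standard HDBC instances for Text and UTCTime):

    import Database.HDBC (SqlValue, fromSql)
    import Data.Text (Text)
    import Data.Time (UTCTime)

    -- HDBC converts a SqlValue to these types via Convertible,
    -- with no String round-trip in user code:
    asText :: SqlValue -> Text
    asText = fromSql

    asTime :: SqlValue -> UTCTime
    asTime = fromSql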
(BTW: it seems to me that this library is, maybe for historical reasons, a little over-generalized; couldn't it just support HDBC?)
I really love PostgreSQL and its rich type collection, particularly arrays. Of the extra PG types, the one I use most across projects outside Haskell is [int4], the typical array of ints. Bringing support for it to HaskellDB became one of the most exciting challenges I've had on my road to understanding Haskell, especially type-level programming (and TH/QQ too). Adding support for a new type looks fairly easy as long as it is supported by HDBC.
I hope this little patch shows how to add support for a new type. Here's the pull request for it; almost all the changes needed are covered there (all that is left is enabling FlexibleInstances):
Pull Request
Main Changeset
There are many different libraries on Hackage dealing with interpolated strings. Some are of poor quality, while others vary in the number of features they support.
Which ones are worth using?
Examples of libraries (in no particular order): shakespeare, interpolatedstring-qq, Interpolation
I took a look at all the interpolation quasiquoter libraries I could find on Hackage.
Interpolation libraries worth using:
interpolatedstring-perl6: Supports interpolating arbitrary Haskell code with reasonable syntax, but requires haskell-src-exts. If you just want a general string interpolation syntax, I'd use this (see the sketch after this list).
shakespeare-text: Based on the Shakespeare family of templates, and has minimal dependencies; most other interpolated string packages depend on haskell-src-exts, which is quite a heavy package and can take a lot of time and resources to compile. If you use any other Shakespeare templates, I'd suggest going with this one.
However, it doesn't support interpolating arbitrary Haskell code, although it supports more than simple variable expansion; it also handles function application, operators, etc. I think it also uses Text rather than String; looking at the source code, I'm not sure whether it can be used with Strings, though there is support code suggesting it can.
Interpolation: Supports arbitrary expressions (again with haskell-src-exts), and also has built-in looping facilities. If you want more "template"-like features than just plain interpolation, it's worth considering, although I personally find the syntax quite ugly.
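For concreteness, here's a minimal sketch of the two recommended quasiquoters side by side (module names are as published; treat the exact conversion rules for interpolated values as assumptions to check against each package's docs):

    {-# LANGUAGE QuasiQuotes #-}

    import Text.InterpolatedString.Perl6 (qq) -- package interpolatedstring-perl6
    import Text.Shakespeare.Text (st)         -- package shakespeare-text
    import qualified Data.Text.IO as T

    main :: IO ()
    main = do
      let name = "world"
      -- interpolatedstring-perl6: arbitrary expressions inside { ... }
      putStrLn [qq|Hello, {name}! 1 + 1 = {1 + 1}|]
      -- shakespeare-text: #{ ... } interpolation, producing Text
      T.putStrLn [st|Hello, #{name}!|]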
Interpolation libraries probably not worth using:
interpolatedstring-qq: Seems to be based on interpolatedstring-perl6; it hasn't been updated for over a year, and seems to have less functionality than interpolatedstring-perl6. Unless you're really attached to the #{expr} syntax, I wouldn't consider this one.
interpol: Implemented as a preprocessor, gives {foo} special meaning in strings; IMO too heavyweight a solution, and with such lightweight syntax, likely to break existing code.
In summary, I'd suggest interpolatedstring-perl6 if you don't mind the haskell-src-exts dependency, and shakespeare-text if you do (or are already using Shakespeare templates).
Another option might be to use the string-qq package with a more general template engine; it supports String, Text and ByteString, which should cover every use case. However, this obviously doesn't support embedding Haskell code, and you'll need to specify the variables separately, so it's probably only useful in certain situations.
I have a cabal package that exports a type NBT which might be useful for other developers. I've gone through the trouble of defining an Arbitrary instance for my type, and it would be a shame to not offer it to other developers for testing their code that integrates my work.
However, I want to avoid situations where my instance might get in the way. Perhaps the other developer has a different idea for what the Arbitrary instance should be. Perhaps my package's dependency on a particular version of QuickCheck might interfere with or be unwanted in the dependencies of the client project.
My ideas, in no particular order, are:
Leave the Arbitrary instance next to the definition of the type, and let clients deal with shadowing the instance or overriding the QuickCheck version number.
Make the Arbitrary instance an orphan instance in a separate module within the same package, say Data.NBT.Arbitrary (sketched just after this list). The dependency on QuickCheck for the overall package remains.
Offer the Arbitrary instance in a totally separate package, so that it can be listed as a separate test dependency for client projects.
Conditionally include both the Arbitrary instance and the QuickCheck dependency in the main package, but only if a flag like -ftest is set.
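For reference, option 2 might look something like the following; the constructors are hypothetical stand-ins for whatever NBT actually provides:

    -- Data/NBT/Arbitrary.hs
    {-# OPTIONS_GHC -fno-warn-orphans #-}
    module Data.NBT.Arbitrary () where

    import Test.QuickCheck (Arbitrary (..), oneof)
    import Data.NBT (NBT (..)) -- ByteTag/StringTag below are invented

    instance Arbitrary NBT where
      arbitrary = oneof
        [ ByteTag   <$> arbitrary
        , StringTag <$> arbitrary
        ]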
I've seen combinations of all of these used in other libraries, but haven't found any consensus on which works best. I want to try and get it right before uploading to Hackage.
On the basis of not much specific experience, but a general desire for robustness, the guiding principle for package dependencies should perhaps be
From each according to their ability; to each according to their need.
It's good to keep the dependencies of a package to the minimum needed for its essential functionality. That suggests option 3 or option 4 to me. Of course, it's a pain to chop the package up so much. If Cabal flags are capable of expressing the conditionality involved, then option 4 sounds like a sensible approach, based on using the language effectively to say what you mean.
It would be really good if a consensus emerged about which single switch we need to flick to get the testing kit as well as the basic functionality.
It's also clear that there's room for refinement here. It's amazing that Cabal works as well as it does, but it could allow for more sophisticated notions of "package", perhaps after the manner of the SML module system. Translating dependencies into function types, we basically get to write
simplePackage :: (Dependency1, ..., DependencyN) -> Deliverable
but one could imagine more elaborate combinations of products and functions, like
fancyPackage :: BasicDependency -> (BasicDeliverable, HelpfulExtras -> Gravy)
Until then, pick the option that most accurately reflects the actual deal. And tell us about it, so we can build that consensus.
The problem comes down to: how likely is it that someone using your library will be wanting to run QuickCheck tests using your NBT type?
If it is likely, and the Arbitrary instance is detailed (and thus not likely to differ between users), it would probably be best to ship it with your package, especially if you're going to keep the package updated (whether or not to use a flag comes down to personal preference). If the instance is relatively simple, however (and thus more likely that people would want to customise it), then it might be better to just provide a sample instance in the documentation.
If the type is primarily internal in nature and not likely to be used by others wanting to run tests, then using a flag to conditionally bring in QuickCheck is probably the best way to avoid unnecessary dependencies (i.e. the test suite is there just so you can test the package).
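As a rough sketch, the flag approach might look like this in the .cabal file (the flag name and version bounds are illustrative only):

    flag test-exports
      description: Also build the Arbitrary instance for NBT
      default:     False

    library
      exposed-modules: Data.NBT
      build-depends:   base >= 4 && < 5
      if flag(test-exports)
        exposed-modules: Data.NBT.Arbitrary
        build-depends:   QuickCheck >= 2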
I'm not a fan of having QuickCheck-only packages in general, though it might be useful in some situations.
What is a good way to design/structure large functional programs, especially in Haskell?
I've been through a bunch of the tutorials (Write Yourself a Scheme being my favorite, with Real World Haskell a close second) - but most of the programs are relatively small, and single-purpose. Additionally, I don't consider some of them to be particularly elegant (for example, the vast lookup tables in WYAS).
I'm now wanting to write larger programs, with more moving parts - acquiring data from a variety of different sources, cleaning it, processing it in various ways, displaying it in user interfaces, persisting it, communicating over networks, etc. How could one best structure such code to be legible, maintainable, and adaptable to changing requirements?
There is quite a large literature addressing these questions for large object-oriented imperative programs. Ideas like MVC, design patterns, etc. are decent prescriptions for realizing broad goals like separation of concerns and reusability in an OO style. Additionally, newer imperative languages lend themselves to a 'design as you grow' style of refactoring to which, in my novice opinion, Haskell appears less well-suited.
Is there an equivalent literature for Haskell? How is the zoo of exotic control structures available in functional programming (monads, arrows, applicative, etc.) best employed for this purpose? What best practices could you recommend?
Thanks!
EDIT (this is a follow-up to Don Stewart's answer):
#dons mentioned: "Monads capture key architectural designs in types."
I guess my question is: how should one think about key architectural designs in a pure functional language?
Consider the example of several data streams, and several processing steps. I can write modular parsers for the data streams to a set of data structures, and I can implement each processing step as a pure function. The processing steps required for one piece of data will depend on its value and others'. Some of the steps should be followed by side-effects like GUI updates or database queries.
What's the 'Right' way to tie the data and the parsing steps in a nice way? One could write a big function which does the right thing for the various data types. Or one could use a monad to keep track of what's been processed so far and have each processing step get whatever it needs next from the monad state. Or one could write largely separate programs and send messages around (I don't much like this option).
The slides he linked have a "Things we Need" bullet: "Idioms for mapping design onto types/functions/classes/monads". What are the idioms? :)
I talk a bit about this in Engineering Large Projects in Haskell and in the Design and Implementation of XMonad. Engineering in the large is about managing complexity. The primary code structuring mechanisms in Haskell for managing complexity are:
The type system
Use the type system to enforce abstractions, simplifying interactions.
Enforce key invariants via types
(e.g. that certain values cannot escape some scope)
That certain code does no IO, does not touch the disk
Enforce safety: checked exceptions (Maybe/Either), avoid mixing concepts (Word, Int, Address)
Good data structures (like zippers) can make some classes of testing needless, as they rule out e.g. out of bounds errors statically.
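As a tiny illustration of invariants-by-construction (the Email type and its check are invented for the example), hiding the constructor means every value of the type is known to have passed the check:

    module Email (Email, mkEmail, emailText) where -- constructor not exported

    import Data.Text (Text)
    import qualified Data.Text as T

    newtype Email = Email Text deriving (Eq, Show)

    -- the only way to build an Email, so the (toy) invariant
    -- holds for every Email in the program
    mkEmail :: Text -> Maybe Email
    mkEmail t
      | T.any (== '@') t = Just (Email t)
      | otherwise        = Nothing

    emailText :: Email -> Text
    emailText (Email t) = t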
The profiler
Provide objective evidence of your program's heap and time profiles.
Heap profiling, in particular, is the best way to ensure no unnecessary memory use.
Purity
Reduce complexity dramatically by removing state. Purely functional code scales, because it is compositional. All you need is the type to determine how to use some code -- it won't mysteriously break when you change some other part of the program.
Use lots of "model/view/controller" style programming: parse external data as soon as possible into purely functional data structures, operate on those structures, then once all work is done, render/flush/serialize out. This keeps most of your code pure.
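A toy end-to-end example of that shape (the record format here is invented): parse at the edge, keep the middle pure, render at the other edge.

    -- input lines look like "alice,72"; this format is made up
    data Record = Record { name :: String, score :: Int } deriving Show

    parseRecords :: String -> [Record]   -- edge: untyped text in
    parseRecords = map toRecord . lines
      where
        toRecord l = let (n, s) = break (== ',') l
                     in Record n (read (drop 1 s))

    process :: [Record] -> [Record]      -- pure core: trivial to test
    process = filter ((>= 50) . score)

    render :: [Record] -> String         -- edge: text out
    render = unlines . map (\r -> name r ++ ": " ++ show (score r))

    main :: IO ()
    main = interact (render . process . parseRecords)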
Testing
QuickCheck + Haskell Code Coverage, to ensure you are testing the things you can't check with types.
GHC + RTS is great for seeing if you're spending too much time doing GC.
QuickCheck can also help you identify clean, orthogonal APIs for your modules. If the properties of your code are difficult to state, they're probably too complex. Keep refactoring until you have a clean set of properties that can test your code, that compose well. Then the code is probably well designed too.
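For instance, a property of this sort is both a test and a statement about the API (a deliberately trivial stand-in here):

    import Test.QuickCheck

    -- reversing twice is the identity: short to state, easy to compose
    prop_reverseInvolutive :: [Int] -> Bool
    prop_reverseInvolutive xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseInvolutive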
Monads for Structuring
Monads capture key architectural designs in types (this code accesses hardware, this code is a single-user session, etc.)
E.g. the X monad in xmonad, captures precisely the design for what state is visible to what components of the system.
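The pattern, in miniature (Conf and AppState are placeholders, not xmonad's actual types): a single newtype pins down exactly which environment and state a component can see.

    {-# LANGUAGE GeneralizedNewtypeDeriving, MultiParamTypeClasses #-}

    import Control.Monad.Reader
    import Control.Monad.State

    data Conf     = Conf     { verbose :: Bool }
    data AppState = AppState { counter :: Int }

    newtype App a = App (ReaderT Conf (StateT AppState IO) a)
      deriving (Functor, Applicative, Monad, MonadIO,
                MonadReader Conf, MonadState AppState)

    runApp :: Conf -> AppState -> App a -> IO (a, AppState)
    runApp c s (App m) = runStateT (runReaderT m c) s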
Type classes and existential types
Use type classes to provide abstraction: hide implementations behind polymorphic interfaces.
Concurrency and parallelism
Sneak par into your program to beat the competition with easy, composable parallelism.
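A minimal par example using Control.Parallel from the parallel package (compile with -threaded and run with +RTS -N to actually see the speedup):

    import Control.Parallel (par, pseq)

    main :: IO ()
    main = a `par` (b `pseq` print (a + b))
      where
        a = sum [1 .. 10000000] :: Int  -- sparked off in parallel
        b = sum [1 .. 20000000] :: Int  -- evaluated on the main thread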
Refactor
You can refactor in Haskell a lot. The types ensure your large scale changes will be safe, if you're using types wisely. This will help your codebase scale. Make sure that your refactorings will cause type errors until complete.
Use the FFI wisely
The FFI makes it easier to play with foreign code, but that foreign code can be dangerous.
Be very careful in assumptions about the shape of data returned.
Meta programming
A bit of Template Haskell or generics can remove boilerplate.
Packaging and distribution
Use Cabal. Don't roll your own build system. (EDIT: Actually, you probably want to use Stack now for getting started.)
Use Haddock for good API docs
Tools like graphmod can show your module structures.
Rely on the Haskell Platform versions of libraries and tools, if at all possible. It is a stable base. (EDIT: Again, these days you likely want to use Stack for getting a stable base up and running.)
Warnings
Use -Wall to keep your code clean of smells. You might also look at Agda, Isabelle or Catch for more assurance. For lint-like checking, see the great hlint, which will suggest improvements.
With all these tools you can keep a handle on complexity, removing as many interactions between components as possible. Ideally, you have a very large base of pure code, which is really easy to maintain, since it is compositional. That's not always possible, but it is worth aiming for.
In general: decompose the logical units of your system into the smallest referentially transparent components possible, then implement them in modules. Global or local environments for sets of components (or inside components) might be mapped to monads. Use algebraic data types to describe core data structures. Share those definitions widely.
Don gave you most of the details above, but here's my two cents from doing really nitty-gritty stateful programs like system daemons in Haskell.
In the end, you live in a monad transformer stack. At the bottom is IO. Above that, every major module (in the abstract sense, not the module-in-a-file sense) maps its necessary state into a layer in that stack. So if you have your database connection code hidden in a module, you write it all to be over a type MonadReader Connection m => ... -> m ... and then your database functions can always get their connection without functions from other modules having to be aware of its existence. You might end up with one layer carrying your database connection, another your configuration, a third your various semaphores and mvars for the resolution of parallelism and synchronization, another your log file handles, etc.
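A minimal sketch of that idea (Connection and runQuery are stand-ins, not a real database library's API):

    {-# LANGUAGE FlexibleContexts #-}

    import Control.Monad.IO.Class (MonadIO, liftIO)
    import Control.Monad.Reader (MonadReader, ask, runReaderT)

    data Connection = Connection -- stand-in for a real connection type

    -- callers never pass the connection explicitly;
    -- it comes from the environment layer of the stack
    query :: (MonadReader Connection m, MonadIO m) => String -> m [String]
    query sql = do
      conn <- ask
      liftIO (runQuery conn sql)

    runQuery :: Connection -> String -> IO [String]
    runQuery _ sql = pure ["result of " ++ sql] -- hypothetical stub

    main :: IO ()
    main = runReaderT (query "SELECT 1" >>= liftIO . print) Connection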
Figure out your error handling first. The greatest weakness at the moment for Haskell in larger systems is the plethora of error handling methods, including lousy ones like Maybe (which is wrong because you can't return any information on what went wrong; always use Either instead of Maybe unless you really just mean missing values). Figure out how you're going to do it first, and set up adapters from the various error handling mechanisms your libraries and other code uses into your final one. This will save you a world of grief later.
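The adapters can be tiny; for example, a minimal sketch for funnelling Maybe-based APIs into an Either-based scheme (the same helper exists as note in the errors package):

    -- attach the error information that Maybe discards
    note :: e -> Maybe a -> Either e a
    note e = maybe (Left e) Right

    -- e.g.: note "user not found" (lookup name users)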
Addendum (extracted from comments; thanks to Lii & liminalisht) —
more discussion about different ways to slice a large program into monads in a stack:
Ben Kolera gives a great practical intro to this topic, and Brian Hurt discusses solutions to the problem of lifting monadic actions into your custom monad. George Wilson shows how to use mtl to write code that works with any monad that implements the required typeclasses, rather than your custom monad kind. Carlo Hamalainen has written some short, useful notes summarizing George's talk.
Designing large programs in Haskell is not that different from doing it in other languages.
Programming in the large is about breaking your problem into manageable pieces, and how to fit those together; the implementation language is less important.
That said, in a large design it's nice to try and leverage the type system to make sure you can only fit your pieces together in ways that are correct. This might involve newtype or phantom types to distinguish things that would otherwise share the same type.
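For example, a phantom-type sketch (the units are invented): Meters and Feet share a representation but can no longer be mixed by accident.

    {-# LANGUAGE EmptyDataDecls #-}

    data Meters
    data Feet

    newtype Distance unit = Distance Double deriving Show

    add :: Distance u -> Distance u -> Distance u
    add (Distance a) (Distance b) = Distance (a + b)

    -- add (Distance 1 :: Distance Meters) (Distance 2 :: Distance Feet)
    --   is a type error, as intended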
When it comes to refactoring the code as you go along, purity is a great boon, so try to keep as much of the code as possible pure. Pure code is easy to refactor, because it has no hidden interaction with other parts of your program.
I first learned structured functional programming from this book.
It may not be exactly what you are looking for, but for beginners in functional programming it may be one of the best first steps towards learning to structure functional programs, independent of scale. At every abstraction level, the design should have clearly arranged structures.
The Craft of Functional Programming
http://www.cs.kent.ac.uk/people/staff/sjt/craft2e/
I'm currently writing a book with the title "Functional Design and Architecture". It provides a complete set of techniques for building a big application using a purely functional approach. It describes many functional patterns and ideas while building a SCADA-like application, 'Andromeda', for controlling spaceships from scratch. My primary language is Haskell. The book covers:
Approaches to architecture modelling using diagrams;
Requirements analysis;
Embedded DSL domain modelling;
External DSL design and implementation;
Monads as subsystems with effects;
Free monads as functional interfaces;
Arrowised eDSLs;
Inversion of Control using Free monadic eDSLs;
Software Transactional Memory;
Lenses;
State, Reader, Writer, RWS, ST monads;
Impure state: IORef, MVar, STM;
Multithreading and concurrent domain modelling;
GUI;
Applicability of mainstream techniques and approaches such as UML, SOLID, GRASP;
Interaction with impure subsystems.
You may get familiar with the code for the book here, and the 'Andromeda' project code.
I expect to finish this book at the end of 2017. Until then, you may read my article "Design and Architecture in Functional Programming" (in Russian) here.
UPDATE
I shared my book online (first 5 chapters). See post on Reddit
Gabriel's blog post Scalable program architectures might be worth a mention.
Haskell design patterns differ from mainstream design patterns in one important way:
Conventional architecture: Combine several components together of type A to generate a "network" or "topology" of type B.
Haskell architecture: Combine several components together of type A to generate a new component of the same type A, indistinguishable in character from its substituent parts.
It often strikes me that an apparently elegant architecture tends to fall out of libraries that exhibit this nice sense of homogeneity, in a bottom-up sort of way. In Haskell this is especially apparent: patterns that would traditionally be considered "top-down architecture" tend to be captured in libraries like mvc, Netwire and Cloud Haskell. That is to say, I hope this answer will not be interpreted as an attempt to replace any of the others in this thread, just as a suggestion that structural choices can and should ideally be abstracted away in libraries by domain experts. The real difficulty in building large systems, in my opinion, is evaluating these libraries on their architectural "goodness" versus all of your pragmatic concerns.
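A minimal concrete instance of that shape (the Validator type is invented for illustration): combining components of type A yields just another A.

    newtype Validator a = Validator (a -> [String]) -- [] means "no errors"

    instance Semigroup (Validator a) where
      Validator f <> Validator g = Validator (\x -> f x ++ g x)

    instance Monoid (Validator a) where
      mempty = Validator (const [])

    nonEmpty, noSpaces :: Validator String
    nonEmpty = Validator (\s -> ["empty input" | null s])
    noSpaces = Validator (\s -> ["contains spaces" | ' ' `elem` s])

    -- the combination is indistinguishable in kind from its parts
    check :: Validator String
    check = nonEmpty <> noSpaces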
As liminalisht mentions in the comments, The category design pattern is another post by Gabriel on the topic, in a similar vein.
I have found the paper "Teaching Software Architecture Using Haskell" (pdf) by Alejandro Serrano useful for thinking about large-scale structure in Haskell.
Perhaps you have to take a step back and think about how to translate the description of the problem into a design in the first place. Since Haskell is so high level, it can capture the description of the problem in the form of data structures, the actions as procedures, and the pure transformations as functions. Then you have a design. Development starts when you compile this code and find concrete errors about missing fields, missing instances and missing monad transformers, because, for example, you perform a database access from a library that needs a certain state monad within an IO procedure. And voilà, there is the program. The compiler gives feedback on your mental sketches and lends coherence to the design and the development.
In this way you benefit from Haskell's help from the very beginning, and the coding feels natural. I would not worry about making something "functional" or "pure" or sufficiently general if what you have in mind is a concrete, ordinary problem. I think that over-engineering is the most dangerous thing in IT. Things are different when the problem is to create a library that abstracts over a set of related problems.
As I am learning Haskell, I see that there are a lot of language extensions used in real-life code. As a beginner, should I learn to use them, or should I avoid them at all costs? I see that they break compatibility with Haskell 98 and limit code to pretty much GHC only. However, if I browse packages on Hackage, I see that most of them are GHC-only anyway.
So, what is the community's attitude towards using language extensions?
And if the use of extensions is OK, how can I distinguish extensions which I can use "safely" (those which are likely to become part of the next Haskell standard) from those which are mostly "experimental"? For example, I suppose that -XDisambiguateRecordFields is nice and useful, but is it likely to be supported in the future?
There are some GHC extensions that are too good to live without. Among my favorites are
Multiparameter type classes
Scoped type variables
Higher-rank types
Generalized algebraic data types (GADTs)
Of these the really essential one is multiparameter type classes.
Some GHC extensions are very speculative and experimental, and you may want to use them with caution. A good way to identify a stable and trusted extension is to see if it is slated for inclusion in Haskell Prime, which is hoped to be the successor to Haskell 98.
I second Don Stewart's suggestion that every extension should be marked using the LANGUAGE pragma in the source file. Don't enable extensions using command-line options.
Yes, use extensions as appropriate.
But be sure to enable them intentionally -- only when you decide you need them.
Do this on a per-module basis via {-# LANGUAGE Rank2Types #-} (for example).
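For example, a module that opts into a single extension, so that the pragma itself documents why it's needed (a minimal sketch):

    {-# LANGUAGE ScopedTypeVariables #-}
    module Evens (splitEvens) where

    -- the explicit forall brings `a` into scope for the where-clause
    -- signatures below; without the extension they would not refer
    -- to the same `a`
    splitEvens :: forall a. [a] -> ([a], [a])
    splitEvens xs = (evens, odds)
      where
        evens, odds :: [a]
        evens = [x | (x, i) <- zip xs [0 :: Int ..], even i]
        odds  = [x | (x, i) <- zip xs [0 :: Int ..], odd  i]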
Generally speaking, people do use GHC extensions quite heavily, because they're so useful and Haskell 98 is quite old. Once there's a more up-to-date standard, people may make more of an effort to stick to it.
You can find the status of proposals for the next standard here.
The other answers here are good ones. I would add that GHC extensions are not as future-vulnerable (*) as they might be, because GHC seems to be far and away the most popular Haskell compiler, and I don't see that changing soon.
(*) as in the opposite of "future-proof"