Trouble writing readable code using Haskell records - haskell

I'm fairly new to Haskell, and one thing I've been struggling with is writing readable code using records.
My specific problems are:
I haven't found an effective strategy for dealing with name conflicts between fields in different record types. I'm finding I want the same field in multiple different record types, and the name conflict issue is really annoying. I end up choosing some prefix to put on all of my fields, which adds to verbosity and hinders readability.
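For example, these two declarations clash, because both would define a top-level selector called name (GHC rejects this with "Multiple declarations of 'name'" unless DuplicateRecordFields is enabled):
data Person = Person { name :: String }
data Pet    = Pet    { name :: String }
so I end up writing personName and petName instead.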
Using nested records results in really verbose code. I find
someFunction(foo.bar, 2 * foo.bar.baz)
in a language like Java or C++ to be pretty readable. In Haskell I find myself writing this to accomplish the same thing
someFunction (fooBar foo) (2 * barBaz (fooBar foo))
which is a lot harder to visually parse, and calls to functions with multiple arguments quickly become unreadable. To work around this, I find myself defining intermediate values that extract fields from records, which reads better at the call site but adds more lines of code, so it hurts readability in a different way.
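Concretely, the workaround looks something like this (same hypothetical names as above):
someResult = someFunction bar (2 * barBaz bar)
  where
    bar = fooBar foo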
Is there a better way to use records that is more readable, or is there something I should be doing instead? Just using tuples? Writing functions with tons of parameters instead of grouping related values into records? Something else?

One solution (as suggested in the comments) to the problem is to use lenses. Using the microlens and microlens-th packages (these might be simpler when you're getting started):
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances #-}
import Data.List (nub)
import Lens.Micro ((^.), (^..))
import Lens.Micro.TH (makeFields)
newtype Name = Name String
  deriving Eq
data Person = Person { _personName :: Name }
makeFields ''Person
data Species = Dog | Cat
  deriving Eq
data Pet = Pet { _petName :: Name, _petSpecies :: Species }
makeFields ''Pet
-- ^. is an infix operator for view
uniquePersonNames :: [Person] -> [Name]
uniquePersonNames ps = nub (map (\p -> p ^. name) ps)
dogs :: [Pet] -> [Pet]
dogs ps = filter (\p -> p ^. species == Dog) ps
data Concert = Concert
  { _concertPerformers :: [Person]
  , _concertAttendees :: [Person]
  }
makeFields ''Concert
-- ^.. is an infix operator for toListOf
performerNames :: Concert -> [Name]
performerNames c = c ^.. performers . traverse . name
data House = House { _housePeople :: [Person], _housePet :: Pet}
makeFields ''House
houseSound :: House -> String
houseSound h = case h ^. pet . species of
  Dog -> "Woof!"
  Cat -> "Meow!"
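The generated lenses work for updates as well. For example (a small sketch, not part of the code above; it additionally needs (&) and (.~) from Lens.Micro):
import Lens.Micro ((&), (.~))

-- give the house's pet a new name
renamePet :: Name -> House -> House
renamePet n h = h & pet . name .~ n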
There are several resources out there to learn more about lenses and other kinds of optics. One particularly beginner-friendly resource is Control.Lens.Tutorial.
Be warned that this approach can lead to type errors that are hard to understand (I believe the generic-lens library has better error messages, but I have not used it), especially if you start using things blindly. I suggest sticking to the basics (as presented in the linked tutorial) -- this will cover a large portion of your use cases.
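For reference, generic-lens derives its optics from a Generic instance instead of Template Haskell. A rough sketch of what that looks like (untested by me; the Point type below is made up):
{-# LANGUAGE DataKinds, DeriveGeneric, TypeApplications #-}
import GHC.Generics (Generic)
import Data.Generics.Product (field)
import Lens.Micro ((^.))

data Point = Point { x :: Int, y :: Int } deriving (Generic, Show)

xCoord :: Point -> Int
xCoord p = p ^. field @"x"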

Related

How can I turn a [TExp a] into a TExp [a], or otherwise apply refineTH to multiple values programmatically?

I've been using refined for refinement types in Haskell recently, and have encountered a major usability problem. I can't figure out how to refine an entire list of values at compile time.
For example I can write:
{-# LANGUAGE TemplateHaskell #-}
import Refined
oneToThree :: [Refined Positive Int]
oneToThree = [$$(refineTH 1), $$(refineTH 2), $$(refineTH 3)]
But this precludes using range syntax, because Refined doesn't (for good reason) have an Enum instance.
I would like to be able to do something like
oneToThree :: [Refined Positive Int]
oneToThree = $$(traverse refineTH [1..3])
but I can't get this to compile because I can't lift [TExp (Refined Positive Int)] into TExp [Refined Positive Int].
Is there some Template Haskell magic that I'm missing that will let me do this?
Would also be open to suggestions for better lightweight refinement type libraries if someone has a suggestion.
One option is to write the sequencing helper by hand:
sequenceQTExpList :: [Q (TExp a)] -> Q (TExp [a])
sequenceQTExpList [] = [|| [] ||]
sequenceQTExpList (x:xs) = [|| $$(x) : $$(sequenceQTExpList xs) ||]
Then use it as
$$(sequenceQTExpList $ map refineTH [1..3])
You're right that it feels like a traverse. The type is a bit off, though, with the extra Qs floating around. I don't see anything offhand that lets you combine those layers usefully.
Unfortunately, a lot of the mechanism used there is TH syntax rather than functions. There just isn't an obvious way to do both the lifting and the splicing as functions, so you're stuck writing bespoke helpers for each container type instead of getting to use Traversable. It's an interesting problem; if there's a clean solution, it'd have a good chance of making it into a future version of Template Haskell if brought up to the maintainers. But I just don't see it right now.
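To illustrate the point, a pair needs its own bespoke helper, separate from the list one above (just an illustration, not from the question; it needs TemplateHaskell and the Q and TExp types in scope):
sequenceQTExpPair :: (Q (TExp a), Q (TExp b)) -> Q (TExp (a, b))
sequenceQTExpPair (x, y) = [|| ($$(x), $$(y)) ||]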
This works (it needs to be in a different file than you use it in because of the stage restriction, though):
import Language.Haskell.TH.Syntax (Exp(ListE), TExp(TExp))
makeTypedTHList :: [TExp a] -> TExp [a]
makeTypedTHList xs = TExp $ ListE [x | TExp x <- xs]
You'd then use it like this:
{-# LANGUAGE TemplateHaskell #-}
import Refined
import AboveCodeInSeparateModuleBecauseOfStageRestriction (makeTypedTHList)
oneToThree :: [Refined Positive Int]
oneToThree = $$(makeTypedTHList <$> traverse refineTH [1..3])
However, calling the TExp constructor yourself subverts some of the safety of typed Template Haskell (although I think this particular case is safe). Ideally, I'd prefer an approach that didn't require doing that, but I can't think of one.

Is there a canonical way of comparing/changing one/two records in haskell?

I want to compare two records in Haskell without defining a new record type for every kind of change, and without writing, over and over, a function that takes two values and handles every field of the record.
I read about lens, but I could not find an example for this,
and I do not know where to begin reading in the documentation.
Example, not working:
data TheState = TheState { number :: Int,
                           truth :: Bool
                         }
initState = TheState 77 True
-- not working, example:
stateMaybe = fmap Just initState
-- result should be:
-- ANewStateType{ number = Just 77, truth = Just True}
In the same way, I want to compare the two states:
state2 = TheState 78 True
-- not working, example
stateMaybe2 = someNewCompare initState state2
-- result should be:
-- ANewStateType{ number = Just 78, truth = Nothing}
As others have mentioned in the comments, it's most likely easier to create a different record to hold the Maybe version of the fields and do the manual conversion. However, there is a way to get this functor-like mapping over your fields in a more automated way.
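For completeness, a minimal sketch of that manual approach (the Maybe record and the diffing function below are made-up names, reusing TheState from the question):
data TheStateMaybe = TheStateMaybe
  { numberM :: Maybe Int
  , truthM  :: Maybe Bool
  }

-- keep the second state's value where the fields differ, Nothing where they match
diffState :: TheState -> TheState -> TheStateMaybe
diffState a b = TheStateMaybe
  { numberM = if number a /= number b then Just (number b) else Nothing
  , truthM  = if truth a  /= truth b  then Just (truth b)  else Nothing
  }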
It's probably more involved than what you would want but it's possible to achieve using a pattern called Higher Kinded Data (HKD) and a library called barbies.
Here is an amazing blog post on the subject: https://chrispenner.ca/posts/hkd-options
And here is my attempt at using HKD on your specific example:
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleContexts #-}
-- base
import Data.Functor.Identity
import GHC.Generics (Generic)
-- barbie
import Data.Barbie
type TheState = TheState_ Identity
data TheState_ f = TheState
  { number :: f Int
  , truth :: f Bool
  } deriving (Generic, FunctorB)
initState :: TheState
initState = TheState (pure 77) (pure True)
stateMaybe :: TheState_ Maybe
stateMaybe = bmap (Just . runIdentity) initState
What is happening here is that we are wrapping every field of the record in a custom f. We now get to choose what to parameterise TheState_ with in order to wrap every field. A normal record now has all of its fields wrapped in Identity, but you can easily have other versions of the record available as well. The bmap function lets you map your transformation from one type of TheState_ to another.
Honestly, the blog post will do a much better job at explaining this than I would. I find the subject very interesting, but I am still very new to it myself.
Hope this helped! :-)
How to make a Functor out of a record? For that I have an answer: apply the function to all of the items of the record.
I want to use the record as a heterogeneous container / hashmap, where the names determine the value types.
While there's no "easy", direct way of doing this, it can be accomplished with several existing libraries.
This answer uses the red-black-record library, which is itself built over the anonymous products of sop-core. "sop-core" allows each field in a product to be wrapped in a functor like Maybe and provides functions to manipulate fields uniformly. "red-black-record" inherits this, adding named fields and conversions from normal records.
To make TheState compatible with "red-black-record", we need to do the following:
{-# LANGUAGE DataKinds, FlexibleContexts, ScopedTypeVariables,
DeriveGeneric, DeriveAnyClass,
TypeApplications #-}
import GHC.Generics
import Data.SOP
import Data.SOP.NP (NP,cliftA2_NP) -- anonymous n-ary products
import Data.RBR (Record,       -- generalized record type with fields wrapped in functors
                 I(..),        -- an identity functor for "simple" cases
                 Productlike,  -- relates a map of types to its flattened list of types
                 ToRecord, toRecord, -- convert a normal record to its generalized form
                 RecordCode,   -- returns the map of types corresponding to a normal record
                 toNP, fromNP, -- convert generalized record to and from n-ary product
                 getField)     -- access field from generalized record using TypeApplications
data TheState = TheState { number :: Int,
                           truth :: Bool
                         } deriving (Generic, ToRecord)
We auto-derive the Generic instance that allows other code to introspect the structure of the datatype. This is needed by ToRecord, which allows conversion of normal records into their "generalized forms".
Now consider the following function:
compareRecords :: forall r flat. (ToRecord r,
                                  Productlike '[] (RecordCode r) flat,
                                  All Eq flat)
               => r
               -> r
               -> Record Maybe (RecordCode r)
compareRecords state1 state2 =
    let mapIIM :: forall a. Eq a => I a -> I a -> Maybe a
        mapIIM (I val1) (I val2) = if val1 /= val2 then Just val2
                                                   else Nothing
        resultNP :: NP Maybe flat
        resultNP = cliftA2_NP (Proxy @Eq)
                              mapIIM
                              (toNP (toRecord state1))
                              (toNP (toRecord state2))
     in fromNP resultNP
It compares any two records that have a ToRecord instance and whose corresponding flattened list of types all have Eq instances (the Productlike '[] (RecordCode r) flat and All Eq flat constraints).
First it converts the initial record arguments to their generalized forms with toRecord. These generalized forms are parameterized with an identity functor I because they come from "pure" values and there aren't any effects at play yet.
The generalized record forms are in turn converted to n-ary products with toNP.
Then we can use the cliftA2_NP function from "sop-core" to compare across all fields using their respective Eq instances. The function requires specifying the Eq constraint using a Proxy.
The only thing left to do is reconstructing a generalized record (this one parameterized by Maybe) using fromNP.
An example of use:
main :: IO ()
main = do
  let comparison = compareRecords (TheState 0 False) (TheState 0 True)
  print (getField @"number" comparison)
  print (getField @"truth" comparison)
getField is used to extract values from generalized records. The field name is given as a Symbol by way of -XTypeApplications.

(Generically) Build Parsers from custom data types?

I'm working on a network streaming client that needs to talk to the server. The server encodes the responses in bytestrings, for example "1\NULJohn\NULTeddy\NUL501\NUL", where '\NUL' is the separator. The above response translates to: this is a message of type 1 (hard-coded by the server), which tells the client what the ID of a user is (here, the user ID of "John Teddy" is "501").
So naively I define a custom data type
data User = User
  { firstName :: String
  , lastName :: String
  , id :: Int
  }
and a parser for this data type
parseID :: Parser User
parseID = ...
Then one just writes a handler to do some job (e.g., write to a database) after the parser successfully matches a response like this. This is very straightforward.
However, the server has almost 100 different types of responses like this that the client needs to parse. I suspect that there must be a much more elegant way to do the job rather than writing 100 almost identical parsers, because, after all, all Haskell coders are lazy. I am a total newbie to generic programming, so can someone tell me if there is a package that can do this job?
For these kinds of problems I turn to generics-sop instead of using generics directly. generics-sop is built on top of Generics and provides functions for manipulating all the fields in a record in a uniform way.
In this answer I use the ReadP parser which comes with base, but any other Applicative parser would do. Some preliminary imports:
{-# language DeriveGeneric #-}
{-# language FlexibleContexts #-}
{-# language FlexibleInstances #-}
{-# language TypeFamilies #-}
{-# language DataKinds #-}
{-# language TypeApplications #-} -- for the Proxy
import Text.ParserCombinators.ReadP (ReadP,readP_to_S)
import Text.ParserCombinators.ReadPrec (readPrec_to_P)
import Text.Read (readPrec)
import Data.Proxy
import qualified GHC.Generics as GHC
import Generics.SOP
We define a typeclass that can produce an Applicative parser for each of its instances. Here we define only the instances for Int and Bool:
class HasSimpleParser c where
  getSimpleParser :: ReadP c

instance HasSimpleParser Int where
  getSimpleParser = readPrec_to_P readPrec 0

instance HasSimpleParser Bool where
  getSimpleParser = readPrec_to_P readPrec 0
Now we define a generic parser for records in which every field has a HasSimpleParser instance:
recParser :: (Generic r, Code r ~ '[xs], All HasSimpleParser xs) => ReadP r
recParser = to . SOP . Z <$> hsequence (hcpure (Proxy @HasSimpleParser) getSimpleParser)
The Code r ~ '[xs], All HasSimpleParser xs constraint means "this type has only one constructor, the list of field types is xs, and all the field types have HasSimpleParser instances".
hcpure constructs an n-ary product (NP) where each component is a parser for the corresponding field of r. (NP products wrap each component in a type constructor, which in our case is the parser type ReadP).
Then we use hsequence to turn a n-ary product of parsers into the parser of an n-ary product.
Finally, we fmap into the resulting parser and turn the n-ary product back into the original r record using to. The Z and SOP constructors are required for turning the n-ary product into the sum-of-products the to function expects.
Ok, let's define an example record and make it an instance of Generics.SOP.Generic:
data Foo = Foo { x :: Int, y :: Bool } deriving (Show, GHC.Generic)
instance Generic Foo -- Generic from generics-sop
Let's check if we can parse Foo with recParser:
main :: IO ()
main = do
  print $ readP_to_S (recParser @Foo) "55False"
The result is
[(Foo {x = 55, y = False},"")]
You can write your own parser, but there is already a package that can do the parsing for you: cassava. And while SO is usually not a place to search for library recommendations, I want to include this answer for people looking for a solution who don't have the time to implement it themselves and want something that works out of the box.
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE OverloadedStrings #-}
import Data.Csv
import Data.Vector
import Data.ByteString.Lazy as B
import GHC.Generics
data Person = P { personId :: Int
                , firstName :: String
                , lastName :: String
                } deriving (Eq, Generic, Show)
-- the following are provided by friendly neighborhood Generic
instance FromRecord Person
instance ToRecord Person
main :: IO ()
main = do B.writeFile "test" "1\NULThomas\NULof Aquin"
          Right thomas <- decodeWith (DecodeOptions 0) NoHeader <$>
                            B.readFile "test"
          print (thomas :: Vector Person)
Basically cassava allows you to parse all X-separated structures into a Vector, provided you can write down a FromRecord instance (which needs a parseRecord :: Parser … function to work).
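For comparison, a hand-written instance for the Person type above would look roughly like this (a sketch; the Generic-derived instance already does the equivalent, and field order must match the constructor):
import Control.Monad (mzero)
import Data.Csv (FromRecord (..), (.!))
import qualified Data.Vector as V

instance FromRecord Person where
  parseRecord v
    | V.length v == 3 = P <$> v .! 0 <*> v .! 1 <*> v .! 2
    | otherwise       = mzero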
Side note on Generic: until recently I thought EVERYTHING in Haskell has a Generic instance, or can derive one. Well, this is not the case. I wanted to serialize some ThreadId to CSV/JSON and happened to find out that unboxed types are not so easily "genericked"!
And before I forget: since you speak of streaming and servers and so on, there is also cassava-conduit, which might be of help.

Does Haskell have pointers/references to record members?

I can create and reference relative pointers to struct members in C++ using the ::*, .*, and ->* syntax, like:
char* fstab_t::*field = &fstab_t::fs_vfstype;
my_fstab.*field = ...
In Haskell, I can easily create temporary labels for record getters like:
(idxF_s,idxL_s) = swap_by_sign sgn (idxF,idxL) ;
AFAIK, I cannot, however, then update records using these getters as labels, like:
a { idxF_s = idxL_s b }
Is there an easy way to do this without coding for each record setter?
A getter and setter bundled together in a first-class value is referred to as a lens. There are quite a few packages for doing this; the most popular are data-lens and fclabels. This previous SO question is a good introduction.
Both of those libraries support deriving lenses from record definitions using Template Haskell (with data-lens, it's provided as an additional package for portability). Your example would be expressed as (using data-lens syntax):
setL idxF_s (b ^. idxL_s) a
(or equivalently: idxF_s ^= (b ^. idxL_s) $ a)
You can, of course, transform lenses in a generic way by transforming their getter and setter together:
-- I don't know what swap_by_sign is supposed to do.
negateLens :: (Num b) => Lens a b -> Lens a b
negateLens l = lens get set
  where
    get = negate . getL l
    set = setL l . negate
(or equivalently: negateLens l = iso negate negate . l)¹
In general, I would recommend using lenses whenever you have to deal with any kind of non-trivial record handling; not only do they vastly simplify pure transformation of records, but both packages contain convenience functions for accessing and modifying a state monad's state using lenses, which is incredibly useful. (For data-lens, you'll want to use the data-lens-fd package to use these convenience functions in any MonadState; again, they're in a separate package for portability.)
1 When using either package, you should start your modules with:
import Prelude hiding (id, (.))
import Control.Category
This is because they use generalised forms of the Prelude's id and (.) functions — id can be used as the lens from any value to itself (not all that useful, admittedly), and (.) is used to compose lenses (e.g. getL (fieldA . fieldB) a is the same as getL fieldA . getL fieldB $ a). The shorter negateLens definition uses this.
What you want here is first-class record labels, and while this does not exist in the language, there are several packages on Hackage which implement this pattern. One of these is fclabels, which can use Template Haskell to generate the required boilerplate for you. Here's an example:
{-# LANGUAGE TemplateHaskell #-}
import Control.Category
import Data.Label
import Prelude hiding ((.))
data Foo = Foo { _fieldA :: Int, _fieldB :: Int }
  deriving (Show)
$(mkLabels [''Foo])
main = do
  let foo = Foo 2 3
  putStrLn "Pick a field, A or B"
  line <- getLine
  let field = (if line == "A" then fieldA else fieldB)
  print $ modify field (*10) foo

haskell load module in list

Hey haskellers and haskellettes,
Is it possible to load a module's functions into a list?
In my concrete case I have a list of functions, all combined with or:
checkRules :: [Nucleotide] -> Bool
checkRules nucs = or $ map ($ nucs) [checkRule1, checkRule2]
I import checkRule1 and checkRule2 from a separate module - I don't know if I will need more of them in the future.
I'd like to have the same functionality look something like
-- import all functions from Rules as rules where
-- :t rules ~~> [([Nucleotide] -> Bool)]
checkRules :: [Nucleotide] -> Bool
checkRules nucs = or $ map ($ nucs) rules
The program sorts pseudo-nucleotide sequences into viable and nonviable sequences according to the given rules.
Thanks in advance, ε/2
Addendum:
So, am I thinking about this right - I need:
genList :: File -> TypeSignature -> [TypeSignature]
chckfun :: (a->b) -> TypeSignature -> Bool
at compile time.
But I can't generate a list of all functions in the module, as they most probably will not have the same type signature and hence will not all fit in one list. So I cannot filter the given list with chckfun.
In order to do this I would either want to check the written type signatures in the source file (?) or the inferred types given by the compiler (?).
Another problem that comes to mind: not every function written in the source file might get exported?
Is this a problem a Haskell beginner should try to solve after 5 months of learning? My brain is shaped like a Klein bottle after all this "compile time thinking".
There is a nice package on Hackage just for this: language-haskell-extract. In particular, the Template Haskell function functionExtractor takes a regular expression and returns a list of the matching top level bindings as (name, value) pairs. As long as they all have matching types, you're good to go.
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.Extract
myFoo = "Hello"
myBar = "World"
allMyStuff = $(functionExtractor "^my")
main = print allMyStuff
Output:
[("myFoo", "Hello"), ("myBar", "World")]
