What's the difference between these two in GHC? They seem similar in intended purpose, but deriving (Data) has been around for a while, whereas deriving (Generic) was only added to GHC recently.
Is deriving (Generic) basically an "upgrade" to deriving (Data) or do the two classes have different purposes?
I almost hesitate to answer this question, because I only marginally understand it myself, but I did spend a couple of days looking into it about a year ago, and this is my current understanding...
Both classes are used for introspection.... Using them, you can get access to the Haskell code parsetrees (of the Haskell program itself).
Philosophically, the way that they do this differs, however.
deriving (Data) creates data objects, which represent the parsed tree of objects that can be manipulated at runtime.
deriving (Generic) creates new Types corresponding to each parsetree, which often can be manipulated at compile time (leaving less work to be done at runtime).
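For a concrete (if trivial) feel for the two, here is a minimal sketch; the Color type and the main function are my own example, not anything from the question or from GHC itself:

{-# LANGUAGE DeriveDataTypeable, DeriveGeneric #-}

import Data.Data    (Data, dataTypeOf, toConstr)
import GHC.Generics (Generic)

-- A throwaway example type deriving both classes.
data Color = Red | Green | Blue
  deriving (Show, Data, Generic)

main :: IO ()
main = do
  -- Data: introspection happens at runtime, on values.
  print (toConstr Red)    -- the constructor used to build the value
  print (dataTypeOf Red)  -- a runtime description of the whole type
  -- Generic: the structure of Color is exposed as a type (Rep Color),
  -- which generic code traverses via type-class instances, largely at
  -- compile time.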
From my limited usage, "deriving (Data)" was much more straightforward to use, but of course wasn't as sleek at runtime (...for me this was negligible).
Although "deriving (Generic)" was theoretically faster, it was trickier to program, and involved creating classes that could manipulate groups of Types (ie- related parsetrees). It also may push you into using cutting edge GHC extensions.
My opinion- "deriving (Generic)" is the "right" way to do things, but would take longer to master.
Template Haskell is another way to access Haskell parse trees, although it works 100% at compile time, and has tools to let you create and insert parse tree data into the code (i.e., code that generates code, like a Lisp macro).
Again, let me stress that this is all based on a couple of days of research, so if I have mangled this too badly, someone let me know (I myself would like to firm up my understanding).
I am trying to work with GHC core data types.
I am able to compile my Haskell source to core representation with type Bind CoreBndr.
As we know there is no default Show instance for this data type.
There is a way to pretty print this representation but it has way too much noise associated with it.
I want to treat GHC core as any other algebraic data type and write functions with it.
It would be much easier if we had a Show instance of GHC core.
Has anybody already written a show instance which I can reuse?
As an aside, how does the community write and verify programs that deal with GHC Core?
A naive Show implementation for GHC Core is probably not what you want. This is because, internally, GHC has recursion among many of its data types. For instance, between TyCon, AlgTyConRhs, and DataCon we have:
TyCon has AlgTyCon, which contains AlgTyConRhs.
AlgTyConRhs contains data_cons :: [DataCon] as one of its record fields.
DataCon contains dcRepTyCon :: TyCon as one of its fields.
And thus we come full circle. Because of how Show works, recursion like this will create infinite output if you ever attempt to print it.
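As a minimal illustration of the problem (using simplified stand-in types of my own, not GHC's real ones):

data TyCon'   = TyCon'   { tyConName :: String, dataCons :: [DataCon'] } deriving Show
data DataCon' = DataCon' { dcName :: String, dcRepTyCon' :: TyCon' }     deriving Show

-- A tiny knot tied the same way GHC ties its own:
boolTyCon :: TyCon'
boolTyCon = TyCon' "Bool" [DataCon' "False" boolTyCon, DataCon' "True" boolTyCon]

-- show boolTyCon never terminates: printing the TyCon' prints its DataCon's,
-- each of which prints the TyCon' again, forever.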
In order to get a "nice" custom representation with data constructors and everything showing, you would have to write it yourself. This is actually somewhat challenging, since you have to consider and debug cases of recursion like this that default pretty printers have solved.
In the Haskell community, we are slowly adding features of dependent types. Dependent types are an advanced typing feature by which types can depend on values. Some languages like Agda and Idris already have them. It appears to be a very advanced feature requiring an advanced type system, until you realize that Python has had the dynamic-typing version of dependent types, which may or may not count as actual dependent types, from the beginning.
For almost any program in a functional programming language, there is a way to represent it as an untyped lambda calculus term, no matter how advanced the typing. That's because typing only eliminates programs; it doesn't enable new ones.
Strong typing wins us safety. Whole classes of errors that used to happen at run time can no longer happen at run time. This safety is rather nice. Besides this safety though, what does strong typing give you?
Are there any additional benefits of a strong type system besides safety?
(Note that I'm not saying that strong typing is worthless. Safety is a huge benefit in and of itself. I'm just wondering if there are additional benefits.)
First, we need to talk a bit about the history of the simply typed lambda calculus.
There are two historical developments of the simply typed lambda calculus.
When Alonzo Church described the lambda calculus, the types were baked in as part of the meaning / operational behavior of the terms.
When Haskell Curry described the lambda calculus, the types were annotations put on the terms.
So we have the lambda calculus a la Church and the lambda calculus a la Curry. See https://en.wikipedia.org/wiki/Simply_typed_lambda_calculus#Intrinsic_vs._extrinsic_interpretations for more.
Ironically, the language Haskell, which is named after Curry, is based on a lambda calculus a la Church!
What this means is the types aren't simply annotations that rule out bad programs for you. They can "do stuff" too. Such types don't erase without leaving residue.
This shows up in Haskell's notion of type classes, which are really why Haskell is a language a la Church.
In Haskell, when I make a function
sort :: Ord a => [a] -> [a]
we're passing an object or dictionary for Ord a as the first argument.
But you aren't forced to plumb that argument around yourself in the code; it is the job of the compiler to build it up and use it.
instance Ord Char
instance Ord Int
instance Ord a => Ord [a]
So if you use sort on a list of strings, which are themselves lists of Chars, the compiler builds up the dictionary by passing the Ord Char instance through the Ord a => Ord [a] instance to get Ord [Char], which is the same as Ord String, and then it can sort a list of strings.
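As a rough sketch of what the compiler is elaborating on your behalf (OrdDict, ordChar, ordList and sortD are illustrative names of mine, not GHC's internal machinery):

import Data.List (sortBy)

-- A hand-written "dictionary" for Ord, as an explicit record.
newtype OrdDict a = OrdDict { cmp :: a -> a -> Ordering }

ordChar :: OrdDict Char
ordChar = OrdDict compare

-- Corresponds to the instance Ord a => Ord [a]: a dictionary transformer.
ordList :: OrdDict a -> OrdDict [a]
ordList (OrdDict c) = OrdDict go
  where
    go []     []     = EQ
    go []     _      = LT
    go _      []     = GT
    go (x:xs) (y:ys) = c x y <> go xs ys

-- A sort that takes its dictionary explicitly.
sortD :: OrdDict a -> [a] -> [a]
sortD d = sortBy (cmp d)

-- Roughly what `sort ["bb", "a"]` elaborates to:
example :: [String]
example = sortD (ordList ordChar) ["bb", "a"]

The point is that the compiler writes the `ordList ordChar` part for you, guided only by the types.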
If I compare the complexity of calling such a sort function in Haskell to calling it in C# or Java: calling sort above is a lot less verbose than manually building a LexicographicComparator&lt;List&lt;Char&gt;&gt; by passing an IComparator&lt;Char&gt; to its constructor and calling the function with an extra second argument.
This shows us that programming with types can be significantly less verbose, because mechanisms like implicits and typeclasses can infer a large part of the code for your program during type checking.
On a simpler level, even the sizes of arguments can depend on types, unless you want to pay fairly massive costs for boxing everything in your language so that it has a homogeneous representation.
This shows us that programming with types can be significantly more efficient, because it can use dedicated representations, rather than paying for boxed structures everywhere in your code. An int can't just be a machine integer, because it has to somehow look like everything else in the system. If you're willing to give up an order of magnitude or more worth of performance at runtime, then this may not matter to you.
Finally, once we have types "doing stuff" for us, it is often beneficial to consider the refactoring benefits that mere safety provides.
If I refactor the smaller set of code that remains, the compiler will rewrite all that type-class plumbing for me. It'll figure out the new ways it can rewrite the code to unbox more arguments. I'm not stuck elaborating all of this stuff by hand; I can leave these mundane tasks to the type-checker.
But even when I do change the types, I can move arguments around fairly willy-nilly, comfortable that the compiler will very likely catch my errors. Types give you "free theorems" which are like unit tests for whole classes of such errors.
On the other hand, once I lock down an API in a language like Python I'm deathly afraid of changing it, because it'll silently break at runtime for all my downstream dependencies! This leads to baroque APIs that lean heavily on easily bit-rotted keyword-arguments, and the API of something that evolves over time rarely resembles what you'd build out of the box if you had it to do over again. Consequently, even the mere safety concern has long-term impact in API design once you ever want people to build on top of your work, rather than simply replace it when it gets too unwieldy.
That's because typing only eliminates programs; it doesn't enable new ones.
This is not a correct statement. Type-classes make it possible to generate parts of your program from type-level information.
Consider two expressions:
readMaybe "15" :: Maybe Integer
readMaybe "15" :: Maybe Bool
Here I'm using the readMaybe function from the Text.Read module. At term level those expressions are identical, only their type annotations are different. However, the results they produce at runtime differ (Just 15 in the first case, Nothing in the second case).
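For reference, the example is runnable as-is once the import is added:

import Text.Read (readMaybe)

main :: IO ()
main = do
  print (readMaybe "15" :: Maybe Integer)  -- Just 15
  print (readMaybe "15" :: Maybe Bool)     -- Nothing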
This is because the compiler generates code for you from the static type information you have. To be more precise, it selects a suitable type class instance and passes its dictionary to the polymorphic function (readMaybe in our case).
This example is simple, but there are way more complex use cases. Using the mtl library you can write computations that run in different computational contexts (aka Monads). The compiler will automatically insert a lot of code that manages the computational contexts. In a dynamically typed language, you would have no static information to make this possible.
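As a hedged sketch of that idea, assuming the mtl package (the function names below are mine, chosen only for illustration): one polymorphic computation, run in two different monad stacks picked purely by the types.

import Control.Monad.State
import Control.Monad.Writer

-- One definition, usable in any monad that can hold an Int of state.
tick :: MonadState Int m => m Int
tick = do
  n <- get
  put (n + 1)
  return n

-- The same tick, run in two different contexts; the compiler inserts the
-- appropriate instance dictionaries in each case.
justCounting :: (Int, Int)
justCounting = runState tick 0

countingWithLog :: ((Int, [String]), Int)
countingWithLog = runState (runWriterT go) 0
  where
    go :: WriterT [String] (State Int) Int
    go = do
      n <- tick
      tell ["ticked at " ++ show n]
      return n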
As you can see, static typing not only cuts off incorrect programs but also writes correct ones for you.
You need "safety" when you already know what and how you want to write. It's a very small part of what types are useful for. The most important thing about types is that they make your reasoning structured. When someone writes in Python a + b he doesn't see a and b as some abstract variables — he sees them as some numbers. Types are already there in the internal language of humans, Python just doesn't have a type system to talk about them. The actual question in the "typed vs untyped (unityped) programming" dispute is "do we want to reflect our internal structured concepts in a safe and explicit or unsafe and implicit way?". Types don't introduce new concepts — it's untyped reasoning forgets the existing ones.
When someone looks at a tree (I mean a real green one) he doesn't see every single leaf on it, but he doesn't treat it as an abstract nameless object either. "A tree" is an approximation that is good enough for most cases, and that's why we have Hindley-Milner type systems, but sometimes you want to talk about a specific tree and you do want to look at its leaves. And that's what dependent types give you: the ability to zoom. "A tree without leaves", "a tree in the forest", "a tree of a particular form"... Dependently typed programming is just another step towards how humans think.
On a less abstract note, I have a type checker for a toy dependently typed language, where all typing rules are expressed as constructors of a data type. You don't need to dive into the type checking procedure to understand the rules of the system. That's the power of "zooming": you can introduce invariants as complex as you want, thus distinguishing the essential parts from the unimportant ones.
Another example of the power dependent types give you is various forms of reflection. Look, e.g., at Pierre-Évariste Dagand's thesis, which proves that
generic programming is just programming
And of course types are hints: many functions and abstractions I defined I would have defined in a far clumsier way in a weakly typed language, but types suggested better alternatives.
There is just no question "What to choose: simple types or dependent types?". Dependent types are always better and they of course subsume simple types. The question is "What to choose: no types or dependent types?", but that question doesn't stand for me.
Refactoring. By having a strong type system you can safely refactor code and have the compiler tell you whether what you are doing now even makes sense. The stronger the type system, the more refactoring errors are avoided. This of course means your code is a lot more maintainable.
When trying to learn Haskell, one of the difficulties that arises is being able to tell when something requires special magic from the compiler. One example that comes to mind is the seq function, which can't be defined in ordinary Haskell, i.e., you can't write a seq2 function that behaves exactly like the built-in seq. Consequently, when teaching someone about seq, you need to mention that seq is special because it's a special symbol for the compiler.
Another example would be the do-notation which only works with instances of the Monad class.
It's not always obvious, though. For instance, continuations: does the compiler know about Control.Monad.Cont, or is it plain old Haskell that you could have invented yourself? In this case, I think nothing special is required from the compiler, even though continuations are a very strange kind of beast.
Language extensions aside, what other compiler magic should Haskell learners be aware of?
Nearly all the GHC primitives that cannot be implemented in userland are in the ghc-prim package (it even has a module called GHC.Magic there!).
So browsing it will give a good sense.
Note that you should not use this module in userland code unless you know exactly what you are doing. Most of the usable stuff from it is exported in downstream modules in base, sometimes in modified form. Those downstream locations and APIs are considered more stable, while ghc-prim makes no guarantees as to how it will act from version to version.
The GHC-specific stuff is reexported in GHC.Exts, but plenty of other things go into the Prelude (such as basic data types, as well as seq) or the concurrency libraries, etc.
Polymorphic seq is definitely magic. You can implement seq for any specific type, but only the compiler can implement one function for all possible types [and avoid optimising it away even though it looks like a no-op].
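For instance, here is a sketch of a seq that works only for lists (seqList is my own name); a single seq :: a -> b -> b covering every type, including functions and unboxed primitives, genuinely needs compiler support:

-- Forces the list argument to weak head normal form, then returns b.
seqList :: [a] -> b -> b
seqList xs b =
  case xs of
    []      -> b
    (_ : _) -> b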
Obviously the entire IO monad is deeply magic, as is everything to do with concurrency and parallelism (par, forkIO, MVar), mutable storage, exception throwing and catching, querying the garbage collector and run-time stats, etc.
The IO monad can be considered a special case of the ST monad, which is also magic. (It allows truly mutable storage, which requires low-level stuff.)
The State monad, on the other hand, is completely ordinary user-level code that anybody can write. So is the Cont monad. So are the various exception / error monads.
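To back that up, here is a sketch of a hand-rolled State monad, essentially the textbook definition (the real libraries add a transformer layer on top of this):

newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in  (h a, s'')

instance Monad (State s) where
  State g >>= k = State $ \s -> let (a, s') = g s in runState (k a) s'

get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

-- No compiler magic anywhere: it is just functions from s to (a, s).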
Anything to do with syntax (do-blocks, list comprehensions) is hard-wired into the language definition. (Note, though, that some of these respond to LANGUAGE RebindableSyntax, which lets you change what functions it binds to.) Also the deriving stuff; the compiler "knows about" a handful of special classes and how to auto-generate instances for them. Deriving for newtype works for any class though. (It's just copying an instance from one type to another identical copy of that type.)
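A small sketch of the RebindableSyntax point: with the extension on, do-notation desugars to whatever (>>=), (>>) and return are in scope, here hand-written ones for Maybe (this is just an illustration, not something you'd normally do):

{-# LANGUAGE RebindableSyntax #-}

import Prelude hiding (return, (>>), (>>=))

(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Nothing >>= _ = Nothing
Just a  >>= f = f a

(>>) :: Maybe a -> Maybe b -> Maybe b
m >> n = m >>= \_ -> n

return :: a -> Maybe a
return = Just

-- This do-block uses the functions above, not the Monad class.
example :: Maybe Int
example = do
  x <- Just 1
  y <- Just 2
  return (x + y)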
Arrays are hard-wired. Much like every other programming language.
All of the foreign function interface is clearly hard-wired.
STM can be implemented in user code (I've done it), but it's currently hard-wired. (I imagine this gives a significant performance benefit. I haven't tried actually measuring it.) But, conceptually, that's just an optimisation; you can implement it using the existing lower-level concurrency primitives.
In Haskell, you can have infinite lists, because Haskell doesn't compute them completely; it uses thunks. I am wondering if there is a way to serialize, or otherwise save to a file, a piece of data's thunk. For example, let us say you have the list [0..]. Then you do some processing on it (I am mostly interested in tail and (:), but it should support filter and map as well). Here is an example of the sort of thing I am looking for.
serial :: SerialThunk a => a -> SerThunk
serialized = serial ([0..] :: [Int])
main = writeToFile "foo.txt" serialized
And
deserial :: SerialThunk a => SerThunk -> a
main = do
  deserialized <- readFromFile "foo.txt" :: IO [Int]
  print $ take 10 deserialized
No. There is no way to serialize a thunk in Haskell. Once code is compiled it is typically represented as assembly (for example, this is what GHC does), and there is no way to recover a serializable description of the function, let alone the function plus the environment that together make up the thunk you'd like to serialize.
Yes. You could build custom solutions, such as describing and serializing a Haskell expression. Deserialization and execution could happen by way of interpretation (e.g., using the hint package).
Maybe. Someone (you?) could make a compiler or modify an existing compiler to maintain more information in a platform-agnostic manner such that things could be serialized without the user manually leveraging hint. I imagine this is an area under exploration by the Cloud Haskell (aka distributed-haskell) developers.
Why? I have also wanted an ability to serialize functions so that I could pass closures around in a flexible manner. Most of the time, though, that flexibility isn't actually needed and instead people want to pass certain types of computations that can be easily expressed as a custom data type and interpretation function.
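A sketch of that last approach, with made-up names, for the list examples from the question: instead of serializing the thunk, serialize a small description of the computation plus an interpreter for it.

-- A tiny, first-order description of the computations we care about.
data ListExpr
  = FromTo Integer Integer     -- [a..b] style ranges (finite here)
  | MapPlus Integer ListExpr   -- map (+ n) e
  | TakeN Int ListExpr         -- take n e
  deriving (Show, Read)

evalList :: ListExpr -> [Integer]
evalList (FromTo a b)  = [a .. b]
evalList (MapPlus n e) = map (+ n) (evalList e)
evalList (TakeN n e)   = take n (evalList e)

-- "Serialize the thunk" by serializing its description instead.
saveExpr :: FilePath -> ListExpr -> IO ()
saveExpr path = writeFile path . show

loadAndRun :: FilePath -> IO [Integer]
loadAndRun path = evalList . read <$> readFile path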
packman: "Evaluation-orthogonal serialisation of Haskell data, as a library" (thanks to a reddit link) -- is exactly what we have been looking for!
...this serialisation is orthogonal to evaluation: the argument is serialised in its current state of evaluation, it might be entirely unevaluated (a thunk) or only partially evaluated (containing thunks).

...The library enables sending and receiving data between different nodes of a distributed Haskell system. This is where the code originated: the Eden runtime system.

...Apart from this obvious application, the functionality can be used to optimise programs by memoisation (across different program runs), and to checkpoint program execution in selected places. Both uses are exemplified in the slide set linked above.

...Another limitation is that serialised data can only be used by the very same binary. This is however common for many approaches to distributed programming using functional languages.

...
Cloud Haskell supports serialization of function closures. http://www.haskell.org/haskellwiki/Cloud_Haskell
Apart from the work in Cloud Haskell and HdpH on "closures", and apart from the answers stating that thunks are not analyzable at runtime, I've found that:
:sprint in GHCi seems to have access to the internal thunk representation. Perhaps GHCi works with some special, non-optimized code. So in principle one could use this representation and the implementation of :sprint if one wanted to serialize thunks, isn't that true?
http://hackage.haskell.org/package/ghc-heap-view-0.5.3/docs/GHC-HeapView.html -- "With this module, you can investigate the heap representation of Haskell values, i.e. to investigate sharing and lazy evaluation."
I'd be very curious to know what kind of working solutions for serializing closures can be made out of this stuff...
God I hate the term "code smell", but I can't think of anything more accurate.
I'm designing a high-level language & compiler to Whitespace in my spare time to learn about compiler construction, language design, and functional programming (compiler is being written in Haskell).
During the code generation phase of the compiler, I have to maintain "state"-ish data as I traverse the syntax tree. For example, when compiling flow-control statements I need to generate unique names for the labels to jump to (labels generated from a counter that's passed in, updated, & returned, and the old value of the counter must never be used again). Another example is when I come across in-line string literals in the syntax tree, they need to be permanently converted into heap variables (in Whitespace, strings are best stored on the heap). I'm currently wrapping the entire code generation module in the state monad to handle this.
I've been told that writing a compiler is a problem well suited to the functional paradigm, but I find that I'm designing this in much the same way I would design it in C (you really can write C in any language - even Haskell w/ state monads).
I want to learn how to think in Haskell (rather, in the functional paradigm) - not in C with Haskell syntax. Should I really try to eliminate/minimize use of the state monad, or is it a legitimate functional "design pattern"?
I've written multiple compilers in Haskell, and a state monad is a reasonable solution to many compiler problems. But you want to keep it abstract---don't make it obvious you're using a monad.
Here's an example from the Glasgow Haskell Compiler (which I did not write; I just work around a few edges), where we build control-flow graphs. Here are the basic ways to make graphs:
emptyGraph :: Graph
mkLabel :: Label -> Graph
mkAssignment :: Assignment -> Graph -- modify a register or memory
mkTransfer :: ControlTransfer -> Graph -- any control transfer
(<*>) :: Graph -> Graph -> Graph
But as you've discovered, maintaining a supply of unique labels is tedious at best, so we provide these functions as well:
withFreshLabel :: (Label -> Graph) -> Graph
mkIfThenElse :: (Label -> Label -> Graph) -- branch condition
             -> Graph                     -- code in the 'then' branch
             -> Graph                     -- code in the 'else' branch
             -> Graph                     -- resulting if-then-else construct
The whole Graph thing is an abstract type, and the translator just merrily constructs graphs in purely functional fashion, without being aware that anything monadic is going on. Then, when the graph is finally constructed, in order to turn it into an algebraic datatype we can generate code from, we give it a supply of unique labels, run the state monad, and pull out the data structure.
The state monad is hidden underneath; although it's not exposed to the client, the definition of Graph is something like this:
type Graph = RealGraph -> [Label] -> (RealGraph, [Label])
or a bit more accurately
type Graph = RealGraph -> State [Label] RealGraph
-- a Graph is a monadic function from a successor RealGraph to a new RealGraph
With the state monad hidden behind a layer of abstraction, it's not smelly at all!
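To make the hiding concrete, here is a rough sketch of how withFreshLabel might be written over that hidden state monad, using Control.Monad.State from mtl/transformers; RealGraph and Label are placeholders of mine, not GHC's actual definitions:

import Control.Monad.State

data RealGraph = RealGraph        -- placeholder for the real graph type
type Label     = Int              -- placeholder label supply element

-- A Graph is a monadic function from a successor RealGraph to a new
-- RealGraph, threading a supply of unused labels behind the scenes.
type Graph = RealGraph -> State [Label] RealGraph

withFreshLabel :: (Label -> Graph) -> Graph
withFreshLabel k successor = do
  supply <- get
  case supply of
    (l : ls) -> put ls >> k l successor   -- consume one label, never reuse it
    []       -> error "label supply exhausted"

-- Running the state monad at the very end, once the graph is complete:
runGraph :: Graph -> [Label] -> RealGraph
runGraph g supply = evalState (g RealGraph) supply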
I'd say that state in general is not a code smell, so long as it's kept small and well controlled.
This means that using monads such as State, ST or custom-built ones, or just having a data structure containing state data that you pass around to a few places, is not a bad thing. (Actually, monads are just assistance in doing exactly this!) However, having state that goes all over the place (yes, this means you, IO monad!) is a bad smell.
A fairly clear example of this was when my team was working on our entry for the ICFP Programming Contest 2009 (the code is available at git://git.cynic.net/haskell/icfp-contest-2009). We ended up with several different modular parts to this:
VM: the virtual machine that ran the simulation program
Controllers: several different sets of routines that read the output of the simulator and generated new control inputs
Solution: generation of the solution file based on the output of the controllers
Visualizers: several different sets of routines that read both the input and output ports and generated some sort of visualization or log of what was going on as the simulation progressed
Each of these has its own state, and they all interact in various ways through the input and output values of the VM. We had several different controllers and visualizers, each of which had its own different kind of state.
The key point here was that the internals of any particular state were limited to their own particular modules, and each module knew nothing about even the existence of state for other modules. Any particular set of stateful code and data was generally only a few dozen lines long, with a handful of data items in the state.
All this was glued together in one small function of about a dozen lines which had no access to the internals of any of the states, and which merely called the right things in the proper order as it looped through the simulation, and passed a very limited amount of outside information to each module (along with the module's previous state, of course).
When state is used in such a limited way, and the type system is preventing you from inadvertently modifying it, it's quite easy to handle. It's one of the beauties of Haskell that it lets you do this.
One answer says, "Don't use monads." From my point of view, this is exactly backwards. Monads are a control structure that, among other things, can help you minimize the amount of code that touches state. If you look at monadic parsers as an example, the state of the parse (i.e., the text being parsed, how far one has gotten in to it, any warnings that have accumulated, etc.) must run through every combinator used in the parser. Yet there will only be a few combinators that actually manipulate the state directly; anything else uses one of these few functions. This allows you to see clearly and in one place all of a small amount of code that can change the state, and more easily reason about how it can be changed, again making it easier to deal with.
Have you looked at attribute grammars (AG)? (More info on Wikipedia and in an article in the Monad Reader.)
With AG you can add attributes to a syntax tree. These attributes are separated in synthesized and inherited attributes.
Synthesized attributes are things you generate (or synthesize) from your syntax tree; this could be the generated code, or all the comments, or whatever else you're interested in.
Inherited attributes are input to your syntax tree; this could be the environment, or a list of labels to use during code generation.
At Utrecht University we use the Attribute Grammar System (UUAGC) to write compilers. This is a pre-processor which generates Haskell code (.hs files) from the provided .ag files.
Although, if you're still learning Haskell, then maybe this is not the time to start learning yet another layer of abstraction over that.
In that case, you could manually write the sort of code that attribute grammars generate for you, for example:
data AbstractSyntax = Literal Int
                    | Block AbstractSyntax
                    | Comment String AbstractSyntax

compile :: AbstractSyntax -> [Label] -> (Code, Comments)
compile (Literal x)     _      = (generateCode x, [])
compile (Block ast)     (l:ls) = let (code', comments) = compile ast ls
                                 in (labelCode l code', comments)
compile (Comment s ast) ls     = let (code, comments') = compile ast ls
                                 in (code, s : comments')

generateCode :: Int -> Code
labelCode :: Label -> Code -> Code
It's possible that you may want an applicative functor instead of a monad:
http://www.haskell.org/haskellwiki/Applicative_functor
I think the original paper explains it better than the wiki, however:
http://www.soi.city.ac.uk/~ross/papers/Applicative.html
I don't think using the State monad is a code smell when it is used to model state.

If you need to thread state through your functions, you can do this explicitly, taking the state as an argument and returning it from each function. The State monad offers a good abstraction: it passes the state along for you and provides lots of useful functions for combining functions that require state. In this case, using the State monad (or Applicatives) is not a code smell.

However, if you use the State monad to emulate an imperative style of programming while a functional solution would suffice, you are just making things complicated.
In general you should try to avoid state wherever possible, but that's not always practical. Applicative makes effectful code look nicer and more functional, especially tree traversal code can benefit from this style. For the problem of name generation there is now a rather nice package available: value-supply.
Well, don't use monads. The power of functional programming is function purity and reuse. There's this paper a professor of mine once wrote; he's one of the guys who helped build Haskell.
The paper is called "Why functional programming matters", I suggest you read through it. It's a good read.
Let's be careful about the terminology here. State is not per se bad; functional languages have state. What is a "code smell" is when you find yourself wanting to assign values to variables and change them.
Of course, the Haskell state monad is there for just that reason -- as with I/O, it's letting you do unsafe and un-functional things in a constrained context.
So, yes, it's probably a code smell.