Which Haskell (GHC) extensions should users use/avoid? [closed] - haskell

Closed. This question is opinion-based. It is not currently accepting answers. Closed 3 months ago.
I have had the experience a few times now of GHC telling me to use an extension, only to discover that using that extension made my code far more complex, when a simple refactor would have let me stick with Haskell 98 (now 2010) and have a more straightforward solution.
On the other hand, there are also times when GADTs or Rank2Types (rarely RankNTypes) make for much less work and much cleaner code.
Which extensions generally tend to obscure the possibility of a better design, and which generally improve it? If some do both, what should a user check about their intended solution before deciding to use that extension?
(See also Should I use GHC Haskell extensions or not?)

An ad hoc list of morally "good" extensions, and morally "bad" ones - this is an aesthetic judgement!
The Good
GADTs
Parallel list comprehensions
Pattern guards
Monad comprehensions
Tuple sections
Record wild cards
Empty data decls
Existential types
Generalized newtype deriving
MPTCs + FDs
Type families
Explicit quantification
Higher rank polymorphism
Lexically scoped tyvars
Bang Patterns
The Bad
SQL-like comprehensions
Implicit parameters
The Ugly (but necessary)
Template Haskell
Unboxed types and tuples
Undecidable, overlapping and incoherent instances -- usually means you have a misdesign.
Not sure
Arrow notation
View patterns
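As an illustration of why GADTs sit in the "good" column (my own sketch, not from the original answer): the type index lets the compiler reject ill-typed expressions such as `Add (BoolE True) (IntE 1)` at compile time, and `eval` needs no runtime tag checks.

```haskell
{-# LANGUAGE GADTs #-}

-- A classic GADT use: expressions indexed by the type they evaluate to.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

-- Pattern matching refines the type variable, so each branch
-- is checked at the right type.
eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE b)  = b
eval (Add x y)  = eval x + eval y
eval (If c t e) = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolE True) (Add (IntE 1) (IntE 2)) (IntE 0)))
```

Without the GADT index, `eval` would have to return a sum type and handle impossible cases; with it, those cases cannot even be constructed.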

Related

Parametricity in Haskell [closed]

In my Haskell learning journey, I can't help but notice that parametricity is of the utmost importance in the language. Given how the type system and the compiler's inference capability work, I think it is safe to say that parametricity, or parametric polymorphism, is natural, encouraged, and at the core of the philosophy of the language.
While the question I am going to ask is not specific to Haskell and could be asked of almost any programming language community, I'm quite intrigued by the point of view of Haskellers, given the nature of the language as suggested above.
Why is parametricity so important to Haskellers? The language really does encourage you to code to generic types and let the compiler figure out the right type at the most appropriate time (when it is forced to). Granted, one does not have to stick to that, and it is probably good practice to declare types anyway.
But somehow I have the feeling that the whole thing encourages you to be generic rather than to focus on concrete types at first: you add the capabilities you need to the signature through type classes, focus on composition, and commit to concrete types as late as possible, or leave them to the compiler.
I'm not completely sure of what I am saying, but it feels that way.
I'm probably biased because I read a Scala book that also encourages this, although there it is a much more manual activity than in Haskell.
Any philosophical response to that, maybe? I have some ideas about it, but from your point of view, how does parametricity help with programming faster, and maybe safer too?
Note: I'm a Scala programmer learning Haskell
Edit
I illustrate my point as I am studying with "Haskell Programming from First Principles". To cite the author:
"There are some caveats to keep in mind here when it comes to using concrete types. One of the nice things about parametricity and type classes is that you are being explicit about what you mean to do with your data, which means you are less likely to make a mistake. Int is a big datatype with many inhabitants and many type classes and operations defined for it—it would be easy to make a function that does something unintended. Whereas if we were to write a function, even if we have Int values in mind for it, that uses a polymorphic type constrained by the type class instances we want, we could ensure we only use the operations we intend. This isn't a panacea, but sometimes it can be worth avoiding concrete types for these (and other) reasons." (Page 208)
I'd like to know what the other reasons are.... I mean, compared to Scala, where it is a much more manual affair, parametricity is so baked into Haskell that I can't help thinking it is part of the productivity philosophy of the language.
Parametricity is important because it restricts the implementation space. It's often the case that a properly parametric type restricts the implementation space down to a single implementation that lacks bottoms. Consider fst :: (a, b) -> a, for instance. With that type, there is only one possible return value from the function that doesn't have bottoms in it.
There are a lot of ways to write it that have bottoms - undefined, error, infinite loops, all of which varying in terms of eta expansion of the definition and whether the pair's constructor is matched. Many of these differences can be observed externally by careful means, but the thing they all have in common is that they don't produce a usable (non-bottom) value of type a.
This is a strong tool for implementing a definition. Given the guarantees parametricity makes, it's actually sufficient to test only that fst ((), ()) == (). If that expression evaluates to True, the implementation is correct. (OK, it's not quite that simple in GHC, given the ability to break all sorts of rules with unsafe functions. You also need to validate that the implementation doesn't use anything unsafe that breaks parametricity.)
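The point above can be sketched directly (using a hypothetical `fst'` so as not to shadow the Prelude's `fst`): the type forces the implementation to hand back the first component, and the single test pins it down.

```haskell
-- The type (a, b) -> a admits only one total, parametric implementation:
-- the function cannot inspect or invent values of type a, so it must
-- return the pair's first component.
fst' :: (a, b) -> a
fst' (x, _) = x

main :: IO ()
main = print (fst' ((), ()) == ())
```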
But guiding the implementation is only the first benefit. A consequence of the implementation being so limited is that parametricity also turns the type into concise, precise, and machine-checked documentation. You know that no matter what the implementation is, the only non-bottom value it can return is the first element of the pair.
And yes, usually things aren't quite so constrained as in the type of fst. But in every case where parametric polymorphism is present in a type, it restricts the implementation space. And every time the implementation space is restricted, the type serves as machine-checked documentation of the implementation.
Parametricity is a clear win for both the implementor and user of code. It reduces the space for incorrect implementations and it improves precision and accuracy of documentation. This should be as close to an objectively good thing as there is in programming.

Why don't we write haskell in LISP syntax? (we can!) [closed]

It... kinda works you guys (This absolutely compiles, adapted from https://hackage.haskell.org/package/scotty):
{-# LANGUAGE OverloadedStrings #-}
import Web.Scotty
-- allUsers and matchesId are assumed to be defined elsewhere,
-- as in the scotty example this was adapted from.

main :: IO ()
main = (do
  (putStrLn "Starting Server....")
  (scotty 3000 (do
    (get "/hello/:name"
      (text ("hello " <> (param "name") <> "!")))
    (get "/users"
      (json allUsers))
    (get "/users/:id"
      (json (filter (matchesId (param "id")) allUsers))))))
(I don't know enough Haskell to convert <> to simple parens, but a clever person could easily.)
Why would we do this? We could preprocess Haskell with any Lisp macro engine. Trivially!
Imagine it. HASKELL AND LISP TOGETHER. WE COULD RULE THE GALAXY!
(I know what you're thinking, but I've actually thought this through: in this example, Vader is Lisp, Luke is Haskell, and Yoda is Alonzo Church.)
(edit "Thanks everyone who answered and commented, I'm now much wiser.
The biggest problem with this technique, which I don't think has been mentioned yet, was pointed out by a friend IRL: if you write some lispy preprocessor, you lose type checking, syntax highlighting, and comprehension in your IDE and tools. That sounds like a hard pass from me."
"I'm now following the https://github.com/finkel-lang/finkel project, which is the lisp-flavoured haskell project that I want!")
The syntax of Haskell is historically derived from that of ISWIM, a language which appeared not much later than LISP and which is described in Peter J. Landin's 1966 article The Next 700 Programming Languages.
Section 6 is devoted to the relationship with LISP:
ISWIM can be looked on as an attempt to deliver LISP from its
eponymous commitment to lists, its reputation for hand-to-mouth
storage allocation, the hardware dependent flavor of its pedagogy,
its heavy bracketing, and its compromises with tradition.
Later in the same section:
The textual appearance of ISWIM is not like LISP's S-expressions. It
is nearer to LISP's M-expressions (which constitute an informal
language used as an intermediate result in hand-preparing LISP
programs). ISWIM has the following additional features: [...]
So there was the explicit intention of diverging from LISP syntax, or from S-expressions at least.
Structurally, a Haskell program consists of a set of modules. Each module consists of a set of declarations. Modules and declarations are inert - they cause nothing to happen by their existence alone. They just form entries in a static namespace that the compiler uses to resolve names while generating code.
As an aside, you might quibble about Main.main here. As the entry point, it is run merely for being defined. That's fair. But every other declaration is only used in code generation if it is required by Main.main, rather than just because it exists.
In contrast, Lisps are much more dynamic systems. A Lisp program consists of a sequence of s-expressions that are executed in order. Each one causes code execution with arbitrary side effects, including modification of the global namespace.
Here's where things get a lot more opinion-based. I'd argue that Lisp's dynamic structure is closely tied to the regularity of its syntax. A syntactic examination can't distinguish between s-expressions intended to add values to the global namespace and ones intended to be run for their side effects. Without a syntactic differentiation, it seems very awkward to add a semantic differentiation. So I'm arguing that there is a sense in which Lisp syntax is too regular to be used for a language with the strict semantic separations between different types of code in Haskell. Haskell's syntax, by contrast, provides syntactic distinctions to match the semantic distinctions.
Haskell does not have s-expressions, so parentheses are only used for marking the precedence of reduction and for constructing tuples. This also means it's not easy to make Lisp-like macros work in Haskell, since they make heavy use of s-expressions and dynamic typing.
Haskell has a right-associative, low-precedence function application operator (namely ($)) which covers most use cases of parentheses.
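For instance (a small sketch of my own), the two lines below are the same expression; ($) lets the trailing parentheses disappear:

```haskell
-- ($) is just function application at the lowest precedence,
-- associating to the right, so f $ g $ x means f (g x).
main :: IO ()
main = do
  print (sum (map (* 2) [1, 2, 3]))  -- with parentheses
  print $ sum $ map (* 2) [1, 2, 3]  -- same thing with ($)
```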
Whitespace has semantic meaning in Haskell, which is why most of us write
do
  p1
  p2
  p3

instead of

do { p1
   ; p2
   ; p3
   }

Monad and Structure and interpretation of computer programs [closed]

I couldn't find the word "monad" when I searched the SICP 2nd edition book. Which concepts (or chapters) of SICP relate to monads?
Nothing in SICP addresses monads explicitly: the book was written long before anyone had formalized the concept of a monad as it relates to computer programming (ignoring here the mathematical idea of a monad, which is a different thing). But, some stuff in the book is monadic anyway: lists, for example, are a monad whether you know it or not.
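To make the "lists are a monad whether you know it or not" point concrete (a sketch of my own, not from the answer): the familiar nested-loop pattern over lists is exactly the list monad's bind, where (>>=) behaves like concatMap.

```haskell
-- The list monad: each <- draws from a list, and the do-block
-- enumerates every combination, like nested loops.
pairs :: [(Int, Int)]
pairs = do
  x <- [1, 2]
  y <- [10, 20]
  return (x, y)

main :: IO ()
main = print pairs
```

Scheme code that maps and appends over lists is doing the same thing without naming it.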
SICP uses Scheme. Scheme allows arbitrary actions to be chained together; nothing stops you from doing so. In other words, you are basically working in a do-anything monad. Also, monads tend not to be that useful or idiomatic in a multi-paradigm language like Lisp (by that I mean Scheme doesn't take sides; it kind of eschews mutation by marking mutating procedures as taboo with the "!" suffix).
In Haskell, you write programs where types limit the kind of action that can occur within said function. Making an instance monadic lets you compose functions with some restrictions (on the type, as well as the monad laws that the programmer has to take care of). And you can stack up effects using transformers.
So, monads are not that useful in a language setting like Scheme. Nor, as Amalloy rightly said, were they invented back then.
EDIT 1: A clarification on the first paragraph. You can have monads in Lisp (an impure language); it's just that you don't have the type system making sure you are not mixing effects. I have used IO inside a list monad in Racket (with the functional/better-monads package). That said, the monad design pattern can be quite useful, as in how Maybe and List are used in Clojure/Racket, as Alexis King pointed out.
EDIT 2: For things like State and ST (which are probably what you see in most use cases, as many (most?) algorithms take advantage of mutability), monads don't really make much sense in Scheme. Also, as I've already pointed out, you do not get the guarantees you expect from Haskell in most Lisps.

Relation between object [closed]

For a few weeks I’ve been thinking about relations between objects – not especially OOP objects. For instance, in C++ we’re used to representing such a relation by layering pointers or containers of pointers in the structure that needs access to the other object. If an object A needs access to B, it’s not uncommon to find a B *pB in A.
But I’m not a C++ programmer anymore, I write programs using functional languages, and more especially in Haskell, which is a pure functional language. It’s possible to use pointers, references or that kind of stuff, but I feel strange with that, like “doing it the non-Haskell way”.
Then I thought a bit deeper about all that relation stuff and came to the point:
“Why do we even represent such relations by layering?”
I read that some folks have already thought about that (here). In my view, representing relations through explicit graphs is way better, since it enables us to focus on the core of our types and express relations later through combinators (a bit like SQL does).
By core I mean that when we define A, we expect to define what A is made of, not what it depends on. For instance, in a video game, if we have a type Character, it’s legit to talk about Trait, Skill or that kind of stuff, but is it still legit if we talk about Weapon or Item? I’m not so sure anymore. Then:
data Character = Character {
    chSkills :: [Skill]
  , chTraits :: [Trait]
  , chName   :: String
  , chWeapon :: IORef Weapon -- or STRef, or whatever
  , chItems  :: IORef [Item] -- ditto
  }
sounds really wrong to me in terms of design. I’d much prefer something like:
data Character = Character {
    chSkills :: [Skill]
  , chTraits :: [Trait]
  , chName   :: String
  }
-- link our character to a Weapon using a Graph Character Weapon
-- link our character to Items using a Graph Character [Item] or that kind of stuff
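One minimal way to realize that comment (a hypothetical sketch; the `Wields` type and names are mine, not a real library): keep the relation as its own value, outside the Character type, and query it with ordinary combinators.

```haskell
-- A hypothetical "graph" of a relation: just edges from character
-- names to weapons, kept entirely outside the Character type.
data Weapon = Sword | Bow deriving (Show, Eq)

type Name   = String
type Wields = [(Name, Weapon)]

-- Querying the relation is a plain combinator, here just lookup.
weaponOf :: Name -> Wields -> Maybe Weapon
weaponOf = lookup

main :: IO ()
main = print (weaponOf "Arya" [("Arya", Bow), ("Conan", Sword)])
```

Adding a new relation later (say, character-to-items) means adding a new graph value, with no change to Character itself.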
Furthermore, when the day comes to add new features, we can just create new types and new graphs and link them. In the first design, we’d have to break the Character type, or use some kind of workaround to extend it.
What do you think about that idea? What do you think is best to deal with that kind of issues in Haskell, a pure functional language?

Libraries for strict data structures in Haskell [closed]

What libraries exist that implement strict data structures? Specifically, I am looking for strict lists and strict sets.
Disclaimers:
I am aware of deepseq. It's very useful, but it adds the overhead of traversing the whole data structure every time you use deepseq (which might be more than once).
I am aware that a strict container-like data structure does not ensure everything it contains will be fully evaluated, but the structure itself should be strict, e.g.:
data StrictList a = !a :$ !(StrictList a) | Empty
(Here, the contained elements are in WHNF, and possibly not fully evaluated, but the structure of the list is. For example, infinite lists will be non-terminating values.)
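The definition above already works with the deriving extensions mentioned below; here is a small self-contained version (the `fromList` helper is mine, added for illustration):

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

-- A spine-strict list: both fields are strict, so building the list
-- forces its whole spine (elements only to WHNF).
data StrictList a = !a :$ !(StrictList a) | Empty
  deriving (Show, Functor, Foldable, Traversable)
infixr 5 :$

-- Converting from a lazy list; on an infinite list this diverges,
-- which is exactly the "non-terminating values" point above.
fromList :: [a] -> StrictList a
fromList = foldr (:$) Empty

main :: IO ()
main = print (sum (fromList [1, 2, 3 :: Int]))  -- via the derived Foldable
```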
I know about the strict package on Hackage, but it has a very limited set of strict data structures. It contains neither strict lists nor sets.
Writing strict lists myself seems amazingly easy (I love GHC's extensions for deriving Functor, Traversable and Foldable, btw.), but it still seems like it would be better done in a separate library. And efficient implementations of sets don't seem that trivial to me.
The containers package (shipped with GHC) will soon have strict Set and Map variants (I'm not sure they will be included with ghc-7.4, but there's reason to hope). So an efficient implementation of strict Sets and Maps is on the way. Strict lists are, as you say, easy; still, a package on Hackage providing them would be nice, so not everybody has to do it themselves. What else do you need?
For your second point, the term I've seen most often is spine-strict.
For a spine-strict list, you could probably use Data.Sequence (from containers) or Data.Vector (from vector). Neither one is a list; however, depending on what you're doing, one (or both) are likely to be better. Sequence provides O(1) cons and snoc, with very fast access to either end of the structure. Vector's performance is similar to an array. If you want a more list-like interface to Sequence, you might consider the ListLike package (there's a ListLike interface for vectors too, but it's less useful because Vector provides a fairly comprehensive interface on its own). Both are spine-strict.
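A quick sketch of Sequence's cheap access to both ends (containers package; the parentheses are needed because <| and |> have opposite associativity at the same precedence):

```haskell
import qualified Data.Sequence as Seq

main :: IO ()
main = do
  -- O(1) cons (<|) on the left and snoc (|>) on the right.
  let s = (0 Seq.<| Seq.fromList [1, 2, 3]) Seq.|> 4
  print (Seq.length s)
  print (Seq.index s 0, Seq.index s 4)
```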
For strict sets, you might try unordered-containers, which also provides a strict map.
