I just realized that I can define my own Prelude module and carefully control its exports.
Is this considered bad practice?
Advantages:
No need to repeatedly import a "Common" module in large projects.
No need to write "import Prelude hiding (catch)".
In general it's a bad idea, as you end up with code written in your own idioms that isn't going to be easy for others to maintain.
To communicate with others you need a shared language of symbols. The Prelude is our core language, so if you redefine it, expect confusion.
The exception to this rule would be when developing an embedded domain-specific language. There, making a custom Prelude is entirely a good idea, and is indeed why it is possible to redefine the Prelude (and inbuilt syntax) in the first place.
By all means have your own additional modules, but don't override the Prelude.
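For the EDSL case, a minimal sketch of the mechanism (the module contents here are illustrative; PackageImports is the GHC extension that lets the new Prelude reach the original one):

{-# LANGUAGE PackageImports #-}
-- A project-local Prelude: because the module is literally named
-- Prelude, every other module in the project picks it up in place of
-- the one from base.
module Prelude
  ( module BasePrelude  -- re-export the standard names we keep
  , catch               -- but export Control.Exception's catch instead
  ) where

-- "base" Prelude is the real one; hiding catch matches the era when
-- the Prelude still exported its own catch.
import "base" Prelude as BasePrelude hiding (catch)
import Control.Exception (catch)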
Looking at some code on Hackage I stumbled upon the Safe and Trustworthy extensions.
What do they mean, broadly (or, for that matter, exactly)? Is there a good rule of thumb for when to use them and when not to?
Safe Haskell is in essence a subset of the Haskell language. It aims to disable certain "tricks" that are often used, such as unsafePerformIO :: IO a -> a, and furthermore it aims to guarantee that you cannot somehow gain access to private functions, data constructors, etc. of a module that intends to prevent such access. In short, Safe Haskell guarantees three things:
Referential transparency;
Module boundary control; and
Semantic consistency.
A safe module has to follow these limitations, and furthermore may only depend on safe modules. A safe module of course does not mean that its code is correct, or that an IO action, for example, cannot be malicious. But it does give some guarantees: if a function's type contains no IO, then normally the module should not be able to perform IO in an unsafe way.
Certain extensions, like TemplateHaskell, and {-# RULES … #-} pragmas are not allowed in Safe Haskell. Other extensions, like DeriveDataTypeable, are allowed, but only if one makes use of the deriving clause to generate an instance, rather than writing a custom one.
Some modules, however, need to make use of such extensions in order to work properly. In that case, the author can mark the module as Trustworthy. That means that the author claims that the module exposes a safe API, but that internally it needs to work with some unsafe extensions, pragmas, etc. The compiler thus cannot guarantee safety.
These extensions are described in the GHC documentation:
The Safe Haskell extension introduces the following three language flags:

-XSafe — Enables the safe language dialect, asking GHC to guarantee trust. The safe language dialect requires that all imports be trusted or a compilation error will occur.

-XTrustworthy — Means that while this module may invoke unsafe functions internally, the module's author claims that it exports an API that can't be used in an unsafe way. This doesn't enable the safe language or place any restrictions on the allowed Haskell code. The trust guarantee is provided by the module author, not GHC. An import statement with the safe keyword results in a compilation error if the imported module is not trusted. An import statement without the keyword behaves as usual and can import any module whether trusted or not.

-XUnsafe — Marks the module being compiled as unsafe so that modules compiled using -XSafe can't import it.
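To make this concrete, here is a minimal sketch (module names are made up). The first module compiles in the safe dialect:

{-# LANGUAGE Safe #-}
-- In the safe dialect, importing System.IO.Unsafe (or any other
-- unsafe module) would be a compile-time error.
module Pure (sumDoubled) where

sumDoubled :: Num a => [a] -> a
sumDoubled = sum . map (2 *)

The second uses an unsafe primitive internally, so the author has to vouch for it:

{-# LANGUAGE Trustworthy #-}
-- GHC does not check this claim: the author asserts that the exported
-- API is safe even though unsafePerformIO is used behind it.
module Config (appVersion) where

import System.Environment (lookupEnv)
import System.IO.Unsafe (unsafePerformIO)

-- Read once and treated as a constant for the life of the process.
appVersion :: Maybe String
appVersion = unsafePerformIO (lookupEnv "APP_VERSION")
{-# NOINLINE appVersion #-}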
GHCi on Acid defines in its .ghci:
:set -XNoImplicitPrelude
What is the potential benefit of, or reason for, doing so?
There is no other way to completely avoid importing the Prelude. Even the seemingly-effective
import Prelude ()
which is an explicit import (and hence overrides the implicit one) and defines no names, nevertheless puts a bunch of class instances in scope that may not be desired.
Avoiding the standard prelude completely is useful when you want to play around with alternate preludes; or when you want to overload syntax using other GHC extensions; or in other niche situations. Avoiding the prelude can also be useful if you plan on using many functions that happen to be named the same as the ones in the prelude, and would like to avoid qualifying them everywhere (though the lesser import Prelude () would suffice in many such situations).
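For illustration, a minimal sketch of a module that starts from a truly empty scope (the module name is made up):

{-# LANGUAGE NoImplicitPrelude #-}
-- No implicit import of the Prelude: every name below is brought
-- into scope explicitly.
module Scratch where

import Data.Int  (Int)
import Data.List (map, length)

lengths :: [[a]] -> [Int]
lengths = map length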
I'm going to use and learn a Lens package on my next Haskell project. I had almost decided on the Data.Lens package when I found this post which mentions van Laarhoven Lenses in the Control.Lens package.
I don't really understand the differences enough yet to decide which one to use. Which package would you suggest I learn/use on a real world project?
Thanks.
Control.Lens is almost certainly what you want. Data.Lens came first, and is simpler, but Control.Lens has many advantages, and is being actively developed.
Other than lenses, Control.Lens has many related types, like traversals (a traversal is like a lens that can refer to n values instead of just one), folds, read/modify-only lenses, indexed lenses, isomorphisms... It also comes with a much larger library of useful functions and predefined lenses for standard library types, Template Haskell to derive lenses, and a bunch of code for other things like generic zippers and uniplate-style generic traversal.
It's a big library -- you don't have to use all of it, but it's nice to have the thing you want already written.
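For a taste, a small sketch using a few of its predefined optics (view reads through a lens; over modifies through any setter-like optic):

import Control.Lens
import Data.Char (toUpper)

p :: (Int, String)
p = (1, "hello")

firstElem :: Int
firstElem = view _1 p                     -- 1

shouted :: (Int, String)
shouted = over (_2 . mapped) toUpper p    -- (1,"HELLO")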
The main advantage of Data.Lens is that it's simpler, and as such doesn't require extensions beyond Haskell 98. But note that if you just want to export a Control.Lens-style lens from a library, you can do it without leaving Haskell 98 -- in fact, without depending on the package at all.
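For example, a minimal sketch with a made-up record type:

-- No lens import and no extensions: an ordinary polymorphic function
-- that happens to have the van Laarhoven shape, so Control.Lens's
-- view/over/set accept it directly as a Lens' Point Int.
data Point = Point { px, py :: Int }

pxLens :: Functor f => (Int -> f Int) -> Point -> f Point
pxLens f (Point x y) = fmap (\x' -> Point x' y) (f x)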
If you're dealing with a Real World Project (tm), I'd highly recommend Control.Lens. Edward Kmett has put a lot of recent effort into it, and I'm sure he'd love to hear about your use case. In my opinion, this is going to become the canonical lens library. I believe it's safe to say that everything you can do with Data.Lens, you can do with Control.Lens.
Data.Lens is much simpler and easier to work with. Control.Lens has a very large number of modules and uses language extensions to get the job done.
I have been using Haskell for quite a while. The more I use it, the more I fall in love with the language. I simply cannot believe I have spent almost 15 years of my life using other languages.
However, I am slowly but steadily growing fed up with Haskell's standard libraries. My main pet peeve is the "not polymorphic enough" definitions (Prelude.map, Control.Monad.forM_, etc.). I have a lot of Haskell source code files whose first lines look like
{-# LANGUAGE NoMonomorphismRestriction #-}
module Whatever where
-- hide the monad-specific versions re-exported by the mtl modules...
import Control.Monad.Error hiding (forM_, mapM_)
import Control.Monad.State hiding (forM_, mapM_)
-- ...and take the Foldable generalizations instead
import Data.Foldable (forM_, mapM_)
{- ... -}
In order to avoid constantly hoogling which definitions I should hide, I would like to have a single source code file, or a small number of them, that wrap this import boilerplate into manageable units.
So...
Has anyone else tried doing this before?
If the answer to the previous question is "Yes", have they posted the resulting boilerplate-wrapping source code files?
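Something like the following sketch is what I have in mind, building on the imports above (the wrapper's name is made up):

module MyPrelude
  ( module Control.Monad.Error
  , module Control.Monad.State
  , module Data.Foldable
  ) where

-- the hidden names are simply never re-exported, so importers of
-- MyPrelude get the Foldable versions without any hiding of their own
import Control.Monad.Error hiding (forM_, mapM_)
import Control.Monad.State hiding (forM_, mapM_)
import Data.Foldable (forM_, mapM_)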
It is not as clear-cut as you imagine it to be. I will list all the disadvantages I can think of off the top of my head:
First, there is no limit to how general these functions can get. For example, right now I am writing a library for indexed types that subsumes ordinary types. Every function you mentioned has a more general indexed equivalent. Do I expect everybody to switch to my library for everything? No.
Here's another example. The mapM function defines a higher-order functor that satisfies the functor laws in the Kleisli category:
mapM return = return
mapM (f >=> g) = mapM f >=> mapM g
So I could argue that your Traversable generalization is the wrong one, and instead we should generalize it as just being an instance of a higher-order functor class.
Also, check out the category-extras package for some examples of these higher-order classes and functions, which subsume all your examples.
There is also the issue of performance. Many of these more specialized functions have really finely tuned implementations that dramatically help performance. Sometimes classes expose ways to admit more performant versions, but sometimes they don't.
There is also the issue of typeclass overload. I actually prefer to minimize use of typeclasses unless they have sound laws derived from theory rather than convenience. Also, typeclasses generally play poorly with the monomorphism restriction and I enjoy writing functions without signatures for my application code.
There is also the issue of taste. A lot of people simply don't agree on what the best Haskell style is. We still can't even agree on the Prelude. Speaking of which, there have been many attempts to write new Preludes, but nobody can agree on what is best, so we all default back to the Haskell 98 one anyway.
However, I think the overall spirit of improving things is good and the worst enemy of progress is satisfaction, but don't assume there will be a clear-cut right way to do everything.
I'm in the process of learning Haskell, and type classes seem like a powerful way to make type-safe polymorphic functions. But a lot of the Haskell Prelude functions don't use them. More specifically:
Most of the list functions don't work with other data structures (for instance, foldr and length are only implemented for lists and can't be used on arrays).
Modules like Data.ByteString are unusable unless you use import qualified since they include functions that have the same names as Prelude functions.
It seems like both of these problems would go away if the standard library used generic functions with type classes (please let me know if I'm totally off base with this).
I have two questions:
1. Are there technical or design reasons that the Prelude is like this, or is it just for historical reasons?
2. Looking around, it looks like there are a couple of libraries (like Data.Foldable and, if I'm not mistaken, Scrap Your Boilerplate) that replace the standard Prelude functions with generic alternatives. Are there any plans to incorporate these ideas into future versions of Haskell?
There is a very good pragmatic reason that "standard" Haskell (Prelude + base + maybe some more) doesn't use more polymorphism:
Designing general-use type classes is hard. Good designs for classes that abstract over container types like lists, arrays and "bytestrings" (personally I don't really consider ByteString a container) aren't floating around waiting to be included in Haskell 2012. There are some designs, e.g. ListLike and the Edison classes, and a number of people have chipped away at the problem, but excepting Foldable and Traversable no one has produced any compelling designs.
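Foldable and Traversable do already pay off in practice, though. As a small sketch of what the generic versions buy you:

import qualified Data.Foldable as F
import Data.Map (Map)

-- Data.Foldable's foldr works on any Foldable container, for example
-- the elements of a Map, where the Prelude's foldr (historically)
-- accepted only lists.
total :: Map String Int -> Int
total = F.foldr (+) 0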
The Haskell base library used to be more polymorphic - list comprehensions used to work for any monad, map and ++ weren't limited to List, and perhaps other things.
But folks at the time thought that it led to confusing error messages for beginners and that folks who aren't beginners can use the specifically polymorphic versions.
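For what it's worth, GHC can still recover the old generalized comprehensions behind a flag; a minimal sketch:

{-# LANGUAGE MonadComprehensions #-}
-- With this extension the comprehension desugars to (>>=) and return
-- in any monad, not just lists.
pairUp :: Maybe (Int, Int)
pairUp = [ (x, y) | x <- Just 1, y <- Just 2 ]   -- Just (1,2)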
While there are many things in base, and specifically the Prelude, that are historic, I think any generalization would see plenty of technical push-back. The main issue is speed: if your function has a type class constraint then you're going to be passing around a dictionary for the type class functions, and maybe eating more space for specialization.
Some of the libraries, such as SYB, use extensions that aren't part of Haskell. The first task would be to formalize and build support for these features. Look at the Haskell' documents to see where Haskell is going and how you might be able to influence that path.
Real World Haskell has some insights about this in the Monad Transformers chapter:
In an ideal world, would we make a break from the past, and switch over Prelude to use Traversable and Foldable types? Probably not. Learning Haskell is already a stimulating enough adventure for newcomers. The Foldable and Traversable abstractions are easy to pick up when we already understand functors and monads, but they would put early learners on too pure a diet of abstraction. For teaching the language, it's good that map operates on lists, not on functors.