Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
For a few weeks I’ve been thinking about relations between objects – not just OOP objects. In C++, for instance, we’re used to representing such a relation by embedding a pointer, or a container of pointers, in the structure that needs access to the other object. If an object A needs access to B, it’s not uncommon to find a B *pB member in A.
But I’m not a C++ programmer anymore; I write programs in functional languages, especially Haskell, a pure functional language. It’s possible to use pointers, references, or that kind of thing, but it feels strange to me, like “doing it the non-Haskell way”.
Then I thought a bit more deeply about all this relation business and came to the question: why do we even represent such relations by embedding?
I read that some folks have already thought about this (here). In my view, representing relations through explicit graphs is much better, since it lets us focus on the core of our type and express relations later through combinators (a bit like SQL does).
By core I mean that when we define A, we expect to define what A is made of, not what it depends on. For instance, in a video game, if we have a type Character, it’s legitimate to talk about Trait, Skill, and that kind of thing, but is it if we talk about Weapon or Item? I’m not so sure anymore. Then:
data Character = Character {
    chSkills :: [Skill]
  , chTraits :: [Trait]
  , chName   :: String
  , chWeapon :: IORef Weapon -- or STRef, or whatever
  , chItems  :: IORef [Item] -- ditto
  }
sounds really wrong to me in terms of design. I’d prefer something like:
data Character = Character {
    chSkills :: [Skill]
  , chTraits :: [Trait]
  , chName   :: String
  }
-- link our character to a Weapon using a Graph Character Weapon
-- link our character to Items using a Graph Character [Item] or that kind of stuff
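To make the idea concrete, here is a minimal sketch of what such a relation layer could look like, assuming a naive association-list representation (the names `Rel`, `link`, and `linked` are hypothetical; a real design would use `Data.Map` or a proper graph library):

```haskell
-- Hypothetical minimal relation type: an association list of pairs.
newtype Rel a b = Rel [(a, b)]

-- Combinators to build and query relations.
link :: a -> b -> Rel a b -> Rel a b
link x y (Rel xs) = Rel ((x, y) : xs)

linked :: Eq a => a -> Rel a b -> [b]
linked x (Rel xs) = [y | (x', y) <- xs, x' == x]

-- The Character type stays free of any Weapon reference;
-- here the relation is keyed by character name for simplicity.
data Weapon = Sword | Bow deriving (Eq, Show)

arms :: Rel String Weapon
arms = link "Alice" Sword (Rel [])
```

`linked "Alice" arms` then yields `[Sword]`, and relating characters to items later is a new `Rel` value rather than a change to `Character`.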
Furthermore, when the day comes to add new features, we can just create new types and new graphs and link them up. With the first design, we’d have to break the Character type or use some kind of workaround to extend it.
What do you think about that idea? What do you think is the best way to deal with this kind of issue in Haskell, a pure functional language?
Closed. This question is opinion-based. It is not currently accepting answers. Closed 2 years ago.
In my Haskell learning journey, I can't help but notice that parametricity is of the utmost importance in the language. Given how the type system and the compiler's inference capability work, I think it is safe to say that parametricity, or parametric polymorphism, is natural, encouraged, and at the core of the language's philosophy.
While the question I am going to ask is not specific to Haskell and could be asked of almost any programming language community, I'm quite intrigued by the point of view of Haskellers, given the nature of the language as suggested above.
Why is parametricity so important to Haskellers? The language really does encourage you to code against generic types and let the compiler figure out the concrete type at the most appropriate time (when it is forced to). Granted, one does not have to stick to that; we can declare concrete types, and it is probably good practice to do so.
But somehow I have the feeling the whole thing encourages you to be generic and not focus on concrete types at first: add the capabilities you need to the signature through type classes, focus on composition, and delay the choice of concrete type to the very end, or leave it to the compiler.
I'm not completely sure of what I am saying, but it feels that way.
I'm probably biased because I read a book on Scala that also encourages this, although there it is much more of a manual activity than in Haskell.
Any philosophical responses, maybe? I have some ideas about it, but from your point of view, how does parametricity help you program faster, and maybe more safely too?
Note: I'm a Scala programmer learning Haskell
Edit
Let me illustrate my point, as I am studying with "Haskell Programming from First Principles". To cite the author:
"There are some caveats to keep in mind here when it comes to using concrete types. One of the nice things about parametricity and type classes is that you are being explicit about what you mean to do with your data, which means you are less likely to make a mistake. Int is a big datatype with many inhabitants and many type classes and operations defined for it—it would be easy to make a function that does something unintended. Whereas if we were to write a function, even if we have Int values in mind for it, that uses a polymorphic type constrained by the type class instances we want, we could ensure we only use the operations we intend. This isn’t a panacea, but sometimes it can be worth avoiding concrete types for these (and other) reasons." (Page 208)
I'd like to know what the other reasons are. I mean, this parametricity, compared to Scala where it is much more manual, is so baked into the language that I can't help thinking it is part of the language's productivity philosophy.
Parametricity is important because it restricts the implementation space. It's often the case that a properly parametric type restricts the implementation space down to a single implementation that lacks bottoms. Consider fst :: (a, b) -> a, for instance. With that type, there is only one possible return value from the function that doesn't have bottoms in it.
There are a lot of ways to write it that have bottoms - undefined, error, infinite loops - all of which vary in terms of eta expansion of the definition and whether the pair's constructor is matched. Many of these differences can be observed externally by careful means, but the thing they all have in common is that they don't produce a usable (non-bottom) value of type a.
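A sketch of the contrast, with the single non-bottom implementation next to a couple of the bottomed variants (the names `fst'`, `fstUndefined`, `fstLoop` are made up for illustration):

```haskell
-- The one implementation that returns a usable value of type a.
fst' :: (a, b) -> a
fst' (x, _) = x

-- Bottomed alternatives: they typecheck, but never produce a usable 'a'.
fstUndefined :: (a, b) -> a
fstUndefined _ = undefined

fstLoop :: (a, b) -> a
fstLoop p = fstLoop p

-- Parametricity rules out everything else: knowing nothing about the
-- type 'a', the function cannot invent an 'a' out of thin air, and it
-- has no way to use the 'b' to make one.
```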
This is a strong tool for implementing a definition. Given the guarantees parametricity makes, it's actually sufficient to test only that fst ((), ()) == (). If that expression evaluates to True, the implementation is correct. (Ok, it's not quite that simple in ghc, given the ability to break all sorts of rules with unsafe functions. You also need to validate that the implementation doesn't use anything unsafe that breaks parametricity.)
But guiding the implementation is only the first benefit. A consequence of the implementation being so limited is that parametricity also turns the type into concise, precise, and machine-checked documentation. You know that no matter what the implementation is, the only non-bottom value it can return is the first element of the pair.
And yes - usually things aren't quite so constrained as in the type of fst. But in every case where parametric polymorphism is present in a type, it restricts the implementation space. And every time the implementation space is restricted, that knowledge serves as machine-checked documentation of the implementation.
Parametricity is a clear win for both the implementor and user of code. It reduces the space for incorrect implementations and it improves precision and accuracy of documentation. This should be as close to an objectively good thing as there is in programming.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 2 years ago.
In 1998 John Hughes proposed the Arrow type class for Haskell in this paper. This type class came with a number of non-alphanumeric operators, like *** and &&&. He does not, however, give pronounceable names for these operators.
Haskell's Monad type class has a similar thing going on with >>=, which is pronounced as bind.
My question is how to pronounce the Arrow operators *** and &&&. Or do they even have pronounceable names? How do Haskellers refer to these operators in conversations?
Control.Arrow calls them "split" and "fanout". That's the closest you'll get for official names.
However, in the particular case of arrows, I tend to think of them in terms of factory machines connected with conveyor belts. This gives you a very rich vocabulary if you start by defining the phonemes (not necessarily the actual functions):
belt = id
pipe-into = (.)
dupe = belt &&& belt
next-to = (***)
process-with = arr
In this vocabulary you pronounce first a as "an a next-to a belt" and second a as "a belt next-to an a", while a &&& b becomes "a dupe piped-into (an a next-to a b)".
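Concretely, with plain functions as the arrows (this is standard Control.Arrow; the names `sideBySide`, `fanOut`, `onlyLeft` are just for illustration):

```haskell
import Control.Arrow ((***), (&&&), first)

-- "split" / next-to: run two machines side by side on a pair.
sideBySide :: (Int, Int) -> (Int, Int)
sideBySide = (+ 1) *** (* 2)

-- "fanout": dupe the input onto two belts, then process each.
fanOut :: Int -> (Int, Int)
fanOut = (+ 1) &&& (* 2)

-- 'first f' is "f next-to a belt"; 'second f' is "a belt next-to f".
onlyLeft :: (Int, String) -> (Int, String)
onlyLeft = first (+ 1)
```

Applied to values, sideBySide (3, 4) yields (4, 8), and fanOut 3 yields (4, 6).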
It also gives a nice visualization of ArrowApply; the factory machines can ArrowApply when there is some machine which takes in two conveyor belts: one for other machines, and one for objects that fit into the first machine. This machine stuffs the incoming object into the incoming machine, emits whatever the first machine emits, then throws the machine away.
It also gives a less-nice visualization of ArrowLoop in terms of giving the factory a magic box, then incrementally asking the factory to commit to some of the structure of what's inside the magic box (possibly providing more magic boxes for it to use), then making the committed structure magically available when the box is opened.
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 7 years ago.
Can anyone explain what invariants in programming languages are and why they matter?
What are Invariants
Invariants, in any field, are values (usually numbers) that allow you to distinguish "objects": if their invariants differ, the objects differ.
For example, suppose you have a mathematical term, say
(x+3)²+1
and you want to transform it. One invariant is its value at a randomly substituted x; my RNG chose x=0, so the invariant would be
(0+3)²+1 = 9+1 = 10
Then if I transform the term incorrectly,
x²+6x+3 + 1 = x²+6x+4
testing again with x=0 gives 0²+0+4 = 4, which is different from 10, so I know there must have been a mistake.
But if, on the other hand, I had transformed the term to
x²+3x+9 +1 = x²+3x+10
the invariant for x=0 would again be 10, even though this transformation is also wrong (the correct expansion is x²+6x+10). So we see:
Why are they useful?
different invariants => different objects
same invariants => maybe same objects
Example: equational reasoning
Why has this become interesting in (functional) programming? One expression you will hear in this context is equational reasoning, and it means just the procedure I performed above: transforming an algorithm/function/term into another one without losing equality. This is often possible in languages like Haskell, thanks to immutability, the absence of side effects, etc., whereas in OO languages it often is not. Equational reasoning lets you narrow down the area where an error can hide quite effectively, so debugging and bug finding are comparatively easier.
Example: property based testing
Another field where invariants are common is property-based testing. The standard example here is reverse :: [a] -> [a], the reverse function on (linked) lists, which has the property reverse . reverse == id, i.e. reversing twice is the same as doing nothing.
If you put this in a QuickCheck test, the test generator generates arbitrary lists and checks this property - if one of these (potentially) thousands of tests fails, you know where to improve your code.
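A minimal sketch of that property as a QuickCheck test (assuming the QuickCheck package is available):

```haskell
import Test.QuickCheck (quickCheck)

-- The invariant: reversing twice is the identity.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice  -- generates 100 random lists by default
```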
Example: Compiler optimizations
Some properties can also be used to optimize your code. For example, if for all functions fmap f . fmap g == fmap (f . g), and the left-hand side traverses a data structure twice where the right-hand side does only one traversal, the compiler can substitute one for the other and make your code twice as fast.
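Checking the fusion law by hand on a small list makes the equivalence visible:

```haskell
-- Two traversals on the left, one on the right; the results agree.
twoPasses, onePass :: [Int]
twoPasses = (fmap (+ 1) . fmap (* 2)) [1, 2, 3]
onePass   = fmap ((+ 1) . (* 2)) [1, 2, 3]
-- both evaluate to [3, 5, 7]
```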
An invariant is a property of your data that you expect to always hold. Invariants are important because they allow you to separate business logic from validation—your functions can safely assume that they’re not receiving invalid data.
For example, in a chess game, you have the invariant that only one piece at a time may occupy a given square on the board. Invariants can be enforced at compile-time, usually by using a static type system to make objects correct by construction, e.g., by representing the board as a matrix of optional pieces. They can also be enforced at runtime, e.g., by raising an exception when trying to make an invalid move.
The approach in functional programming languages is typically to make objects immutable, and prevent the construction of invalid states. In OOP languages, where objects are commonly mutable, methods are expected to prevent invalid state transitions.
Either way, enforcing invariants ensures that your program is always in a predictable state, which makes it easier to reason about your code and safely make changes without introducing regressions.
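A common Haskell idiom for the "correct by construction" approach is a smart constructor. This is a hypothetical sketch (in a real module you would export NonEmptyName abstractly, hiding its raw constructor, so mkName is the only way to build one):

```haskell
-- Invariant: a NonEmptyName never wraps the empty string.
newtype NonEmptyName = NonEmptyName String deriving (Eq, Show)

-- The only sanctioned way to build one; validation happens once, here.
mkName :: String -> Maybe NonEmptyName
mkName "" = Nothing
mkName s  = Just (NonEmptyName s)

-- Downstream code can assume the invariant and skip re-validation.
greet :: NonEmptyName -> String
greet (NonEmptyName s) = "Hello, " ++ s
```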
Closed. This question is opinion-based. It is not currently accepting answers. Closed 6 years ago.
So, I understand algebraic data types and type classes very well, but I'm interested in the software-engineering/best-practices side of things.
What is the modern consensus, if any, on typeclasses? Are they evil? Are they handy? Should they be used, and when?
Here's my case-study. I'm writing an RTS-style game, and I have different kinds of "units" (tank, scout, etc.). Say I want to get the max health of each unit. My two thoughts on how to define their types are as follows:
Different constructors of an ADT:
data Unit = Scout ... | Tank ...

maxHealth :: Unit -> Int
maxHealth (Scout {}) = 10
maxHealth (Tank {})  = 20
Typeclass for Unit, each kind is an instance
class Unit a where
    maxHealth :: a -> Int

instance Unit Scout where
    maxHealth scout = 10

instance Unit Tank where
    maxHealth tank = 20
Obviously, there are going to be many more fields and functions in the final product. (For example, each unit will have a different position, etc., so not all of the functions will be constant.)
The trick is, there might be some functions that make sense for some units, but not others. For example, every unit will have a getPosition function, but a tank might have a getArmour function, which doesn't make sense for a scout without armour.
Which is the "generally accepted" way to write this if I want other Haskellers to be able to understand and follow my code?
Most Haskell programmers frown on needless typeclasses. These hurt type inference; you can't even make a list of Units without tricks; in GHC, there's all the secret dictionary passing; they somehow make the Haddocks harder to read; they can lead to brittle hierarchies ... maybe others can give you further reasons. I guess a good rule would be to use them when it's much more painful to avoid them. For instance, without Eq, you'd have to manually pass around the functions to compare, say, two [[[Int]]]s (or use some ad-hoc runtime tests), which is one of the pain points of ML programming.
Take a look at this blog post. Your first method of using a sum type is OK, but if you want to allow users to mod the game with new units or whatever, I'd suggest something like
data Unit = Unit { name :: String, maxHealth :: Int }
scout, tank :: Unit
scout = Unit { name = "scout", maxHealth = 10 }
tank = Unit { name = "tank", maxHealth = 20 }
allUnits = [ scout
           , tank
           , Unit { name = "another unit", maxHealth = 5 }
           ]
In your example, you need to encode somewhere that a tank has armor but a scout doesn't. The obvious possibility is to augment the Unit type with extra information like a Maybe Armor field or a list of special powers ... there's not necessarily a definitive way.
One heavyweight solution, probably overkill, is to use a library like Vinyl that provides extensible records, giving you a form of subtyping.
I tend to use typeclasses only when generating and passing around the instances manually becomes a big mess. In the code I write this is almost never.
I'm not going to weigh in with a definitive answer on when to use typeclasses, but I am currently writing a library that uses both of the methods you described for your Unit class. I lean on the sum type generally, but the typeclass method has one large advantage: it gives you type-level distinctions between your Units.
This forces you to write to your interface slightly more, as any function that needs to be polymorphic over Units must use only functions ultimately defined on your abstract typeclass. In my case, it also, importantly, lets me use Unit-like types as type parameters in phantom types.
For instance, I'm writing a Haskell binding to Nanomsg (a ZMQ-alike project from the original author of ZMQ). In Nanomsg you have Socket types which share representations and semantics. Each Socket has exactly one Protocol, and some functions can only be called on Sockets of a particular Protocol. I could have these functions throw errors or return Maybes, but instead I defined my Protocols as separate types all sharing a class:
class Protocol p where ...

data Protocol1 = Protocol1
data Protocol2 = Protocol2

instance Protocol Protocol1 where ...
instance Protocol Protocol2 where ...
and have Sockets have a phantom type parameter
newtype Socket p = Socket ...
And now I can make it a type error to call functions on the wrong protocols.
funOnProto1 :: Socket Protocol1 -> ...
whereas if Socket were just a sum type this would be impossible to check at compile time.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 3 months ago.
I have had the experience a few times now of having GHC tell me to use an extension, only to discover that in using that extension I have made my code far more complex, when a simple refactor would have let me stick with Haskell 98 (now 2010) and have a more straightforward solution.
On the other hand, there are also times when GADTs or Rank2Types (rarely RankNTypes) make for much less work and much cleaner code.
Which extensions tend generally to obscure the possibility of a better design, and which generally improve it? If there are some that do both, what should a user check about their intended solution before deciding to use that extension?
(See also Should I use GHC Haskell extensions or not?)
An ad hoc list of morally "good" extensions, and morally "bad" ones - this is an aesthetic judgement!
The Good
GADTs
Parallel list comprehensions
Pattern guards
Monad comprehensions
Tuple sections
Record wild cards
Empty data decls
Existential types
Generalized newtype deriving
MPTCs + FDs
Type families
Explicit quantification
Higher rank polymorphism
Lexically scoped tyvars
Bang Patterns
The Bad
SQL comprehensions
Implicit parameters
The Ugly (but necessary)
Template Haskell
Unboxed types and tuples
Undecidable, overlapping and incoherent instances -- usually means you have a misdesign.
Not sure
Arrow notation
View patterns