What are the 'real' names of Haskell's Arrow operators? [closed] - haskell

In 1998 John Hughes proposed the Arrow type class for Haskell in this paper. The type class came with a number of non-alphanumeric operators, like *** and &&&, but the paper gives them no pronounceable names.
Haskell's Monad type class has a similar thing going on with >>=, which is pronounced "bind".
My question is how to pronounce the Arrow operators *** and &&&. Or do they even have pronounceable names? How do Haskellers refer to these operators in conversations?

Control.Arrow calls them "split" and "fanout". That's the closest you'll get for official names.
However, in the particular case of arrows, I tend to think of them as factory machines connected by conveyor belts. This gives you a very rich vocabulary if you start by defining the basic terms (not necessarily the actual functions):
belt = id
pipe-into = (.)
dupe = belt &&& belt
next-to = (***)
process-with = arr
In this vocabulary you pronounce first a as "an a next-to a belt" and second a as "a belt next-to an a", while a &&& b becomes "a dupe piped into (an a next-to a b)".
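To make that concrete, here is a minimal sketch specialized to ordinary functions (the names firstLike, secondLike, and fanoutLike are mine, purely for illustration):
import Control.Arrow ((&&&), (***))

-- "an a next-to a belt": behaves like first a
firstLike :: (b -> c) -> (b, d) -> (c, d)
firstLike a = a *** id

-- "a belt next-to an a": behaves like second a
secondLike :: (b -> c) -> (d, b) -> (d, c)
secondLike a = id *** a

-- "a dupe piped into (an a next-to a b)": behaves like a &&& b
fanoutLike :: (x -> c) -> (x -> c') -> x -> (c, c')
fanoutLike a b = (a *** b) . (id &&& id)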
It also gives a nice visualization of ArrowApply; the factory machines can ArrowApply when there is some machine which takes in two conveyor belts: one for other machines, and one for objects that fit into the first machine. This machine stuffs the incoming object into the incoming machine, emits whatever the first machine emits, then throws the machine away.
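For reference, the class method behind that metaphor is app; a small sketch for the ordinary-function instance (runMachine is a made-up name):
import Control.Arrow (app)

-- app :: ArrowApply a => a (a b c, b) c
-- For functions, app is just uncurried application: app (f, x) = f x
runMachine :: (Int -> Int, Int) -> Int
runMachine = app
-- runMachine (succ, 41) == 42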
It also gives a less-nice visualization of ArrowLoop in terms of giving the factory a magic box, then incrementally asking the factory to commit to some of the structure of what's inside the magic box (possibly providing more magic boxes for it to use), then making the committed structure magically available when the box is opened.
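The method behind the magic box is loop; a tiny, contrived sketch of the function instance (example is a hypothetical name):
import Control.Arrow (loop)

-- loop :: ArrowLoop a => a (b, d) (c, d) -> a b c
-- For functions: loop f b = let (c, d) = f (b, d) in c
-- The d output is fed back in as the d input, which works thanks to laziness.
example :: [Int] -> [Int]
example = loop (\(xs, n) -> (map (+ n) xs, 1))
-- example [1,2,3] == [2,3,4]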

Related

Why don't we write haskell in LISP syntax? (we can!) [closed]

It... kinda works, you guys (this absolutely compiles, adapted from https://hackage.haskell.org/package/scotty):
{-# LANGUAGE OverloadedStrings #-}
import Web.Scotty
-- allUsers and matchesId are assumed to be defined elsewhere
main :: IO ()
main = (do
  (putStrLn "Starting Server....")
  (scotty 3000 (do
    (get "/hello/:name"
      (text ("hello " <> (param "name") <> "!")))
    (get "/users"
      (json allUsers))
    (get "/users/:id"
      (json (filter (matchesId (param "id")) allUsers))))))
(I don't know enough haskell to convert <> to simple parens, but a clever person could easily.)
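For what it's worth, any Haskell operator becomes a plain prefix function when wrapped in parentheses, so (assuming the snippet otherwise compiles as claimed) the <> line could be rewritten without infix operators as:
(text ((<>) "hello " ((<>) (param "name") "!")))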
Why would we do this? We could preprocess Haskell with any Lisp macro engine. Trivially!
Imagine it. HASKELL AND LISP TOGETHER. WE COULD RULE THE GALAXY!
(I know what you're thinking, but I've actually thought this through: in this example, Vader is Lisp, Luke is Haskell, and Yoda is Alonzo Church.)
(edit "Thanks everyone who answered and commented, I'm now much wiser.
The biggest problem with this technique I don't think has been yet mentioned, and was pointed out by a friend IRL: If you write some lispy preprocessor, you lose type checking and syntax highlighting and comprehension in your IDE and tools. That sound like a hard pass from me."
"I'm now following the https://github.com/finkel-lang/finkel project, which is the lisp-flavoured haskell project that I want!")
The syntax of Haskell is historically derived from that of ISWIM, a language which appeared not much later than LISP and which is described in Peter J. Landin's 1966 article The Next 700 Programming Languages.
Section 6 is devoted to the relationship with LISP:
ISWIM can be looked on as an attempt to deliver LISP from its
eponymous commitment to lists, its reputation for hand-to-mouth
storage allocation, the hardware dependent flavor of its pedagogy,
its heavy bracketing, and its compromises with tradition.
Later in the same section:
The textual appearance of ISWIM is not like LISP's S-expressions. It
is nearer to LISP's M-expressions (which constitute an informal
language used as an intermediate result in hand-preparing LISP
programs). ISWIM has the following additional features: [...]
So there was the explicit intention of diverging from LISP syntax, or from S-expressions at least.
Structurally, a Haskell program consists of a set of modules. Each module consists of a set of declarations. Modules and declarations are inert - they cause nothing to happen by their existence alone. They just form entries in a static namespace that the compiler uses to resolve names while generating code.
As an aside, you might quibble about Main.main here. As the entry point, it is run merely for being defined. That's fair. But every other declaration is only used in code generation if it is required by Main.main, rather than just because it exists.
In contrast, Lisps are much more dynamic systems. A Lisp program consists of a sequence of s-expressions that are executed in order. Each one causes code execution with arbitrary side effects, including modification of the global namespace.
Here's where things get a lot more opinion-based. I'd argue that Lisp's dynamic structure is closely tied to the regularity of its syntax. A syntactic examination can't distinguish between s-expressions intended to add values to the global namespace and ones intended to be run for their side effects. Without a syntactic differentiation, it seems very awkward to add a semantic differentiation. So I'm arguing that there is a sense in which Lisp syntax is too regular to be used for a language with the strict semantic separations between different types of code in Haskell. Haskell's syntax, by contrast, provides syntactic distinctions to match the semantic distinctions.
Haskell does not have s-expressions, so parentheses are only used for marking the precedence of reduction and for constructing tuples. This also means it's not easy to make Lisp-like macros work in Haskell, since they make heavy use of s-expressions and dynamic typing.
Haskell has a right-associative, low-precedence function application operator (namely ($)) which covers most use cases of parentheses.
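A small illustration:
main :: IO ()
main = putStrLn $ show $ 1 + 2
-- identical to: main = putStrLn (show (1 + 2))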
Whitespace has semantic meaning in Haskell; that's why most of us write
do
  p1
  p2
  p3
instead of
do { p1
   ; p2
   ; p3
   }

What are invariants in programming languages and why are they important? [closed]

Can anyone explain what invariants in programming languages are and why they matter?
What are Invariants
Invariants, in any field, are values (usually numbers) that allow you to distinguish "objects": if their invariants are not the same, the objects cannot be the same.
For example, take the mathematical term
(x+3)²+1
If you want to transform that term, one invariant is its value at a randomly chosen x; my RNG chose x=0, so the invariant is
(0+3)²+1 = 9+1 = 10
Then if I transform the term incorrectly to
x²+6x+3+1 = x²+6x+4
testing again with x=0 gives 0²+0+4 = 4, which differs from 10, so I know there must have been a mistake.
But if, on the other hand, I had (still incorrectly) transformed the term to
x²+3x+9+1 = x²+3x+10
the invariant for x=0 would be 10 again, even though the correct expansion is x²+6x+10. So we see:
Why are they useful?
different invariants => different objects
same invariants => maybe same objects
Example: equational reasoning
Why has this become interesting in (functional) programming? One expression you will hear in this context is equational reasoning, and it means just the procedure I did above: transforming an algorithm/function/term into another one without losing equality. This is often possible in languages like Haskell thanks to immutability, the absence of side effects, etc., whereas in OO languages it often is not. Equational reasoning lets you narrow down the area where an error can hide, so debugging/bug finding is comparatively easy.
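A minimal sketch of such a transformation, safe in Haskell precisely because no side effects can disturb it (the function names are mine):
-- Rewriting with equations, as on paper:
--   sum (xs ++ ys)  =  sum xs + sum ys    (provable from the definitions)
sumAppend :: [Int] -> [Int] -> Int
sumAppend xs ys = sum (xs ++ ys)

sumSplit :: [Int] -> [Int] -> Int
sumSplit xs ys = sum xs + sum ys
-- For finite lists the two are interchangeable, so replacing one with
-- the other can never change a program's result.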
Example: property based testing
Another field where invariants are common is property-based testing: the standard example is reverse :: [a] -> [a], the reverse function on (linked) lists, which has the property reverse . reverse == id, i.e. reversing a list twice is the same as doing nothing.
If you put this in a QuickCheck test, the test generator produces arbitrary lists and checks this property; if one of these (potentially) thousands of tests fails, you know where to improve your code.
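A minimal QuickCheck sketch of exactly that property (assuming the QuickCheck package is available):
import Test.QuickCheck (quickCheck)

prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice
-- typically prints: +++ OK, passed 100 tests.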
Example: Compiler optimizations
Some properties can also be used to optimize your code. For example, if fmap f . fmap g == fmap (f . g) holds for all functions, and the left-hand side traverses a data structure twice where the right-hand side does only one traversal, the compiler can substitute one for the other and eliminate a whole traversal.
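GHC exposes this kind of substitution to users as rewrite rules; the classic "map/map" rule from the GHC documentation is a sketch of the idea:
module MapFusion where

-- With optimization enabled (-O), GHC may rewrite the left-hand side
-- into the right-hand side, eliminating one list traversal.
{-# RULES
"map/map" forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}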
An invariant is a property of your data that you expect to always hold. Invariants are important because they allow you to separate business logic from validation—your functions can safely assume that they’re not receiving invalid data.
For example, in a chess game, you have the invariant that only one piece at a time may occupy a given square on the board. Invariants can be enforced at compile-time, usually by using a static type system to make objects correct by construction, e.g., by representing the board as a matrix of optional pieces. They can also be enforced at runtime, e.g., by raising an exception when trying to make an invalid move.
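A minimal sketch of the compile-time flavor in Haskell (all names hypothetical): a board built as a matrix of optional pieces makes "at most one piece per square" hold by construction:
data Color = White | Black
data Kind  = Pawn | Knight | Bishop | Rook | Queen | King
data Piece = Piece Color Kind

-- Each square holds at most one piece, by the shape of the type alone.
type Board = [[Maybe Piece]]   -- 8 rows of 8 squares

emptyBoard :: Board
emptyBoard = replicate 8 (replicate 8 Nothing)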
The approach in functional programming languages is typically to make objects immutable, and prevent the construction of invalid states. In OOP languages, where objects are commonly mutable, methods are expected to prevent invalid state transitions.
Either way, enforcing invariants ensures that your program is always in a predictable state, which makes it easier to reason about your code and safely make changes without introducing regressions.

Monad and Structure and interpretation of computer programs [closed]

I couldn't find the word "monad" when I searched the second edition of SICP. Which concepts (or chapters) of SICP relate to monads?
Nothing in SICP addresses monads explicitly: the book was written long before anyone had formalized the concept of a monad as it relates to computer programming (ignoring here the mathematical idea of a monad, which is a different thing). But, some stuff in the book is monadic anyway: lists, for example, are a monad whether you know it or not.
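For instance, the kind of Scheme code SICP builds from flatmap over nested lists is what Haskell's list monad captures explicitly; a small sketch:
-- The list monad: (>>=) is concatMap, return builds a singleton.
pairs :: [(Int, Char)]
pairs = do
  x <- [1, 2]
  y <- "ab"
  return (x, y)
-- pairs == [(1,'a'),(1,'b'),(2,'a'),(2,'b')]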
SICP uses Scheme. Scheme allows arbitrary actions to be chained together; nothing stops you from doing so. In other words, you are basically working in a do-anything monad. Monads also tend not to be that useful or idiomatic in a multi-paradigm language like Lisp (by that I mean Scheme doesn't take sides; it merely discourages mutation by marking mutating procedures with the suffix "!").
In Haskell, you write programs where types limit the kinds of actions that can occur within a function. Making a type an instance of Monad lets you compose such functions, with some restrictions (on the types, as well as the monad laws, which the programmer has to take care of). And you can stack up effects using monad transformers.
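A minimal sketch of such a stack (assuming the mtl package), with State layered over IO:
import Control.Monad.State (StateT, evalStateT, get, lift, put)

-- The type says exactly which effects tick may perform.
tick :: StateT Int IO ()
tick = do
  n <- get
  lift (putStrLn ("tick " ++ show n))
  put (n + 1)

main :: IO ()
main = evalStateT (tick >> tick) 0
-- prints "tick 0" then "tick 1"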
So, monads are not that useful in a language setting like Scheme. Nor, as Amalloy rightly said, were they invented back then.
EDIT 1: A clarification of the first paragraph. You can have monads in Lisp (an impure language); it's just that you don't have the type system making sure you are not mixing effects. I used IO in a List monad (Racket + functional/better-monads). That said, the monad design pattern can be quite useful, as in how Maybe and List are used in Clojure/Racket, as Alexis King pointed out.
EDIT 2: For things like State and ST (which are probably what you see in most use cases, as many (most?) algorithms take advantage of mutability), monads don't really make much sense in Scheme. Also, as I've already pointed out, you do not get the guarantees in most Lisps that you expect from Haskell.

A theorem prover / proof assistant supporting (multiple) subtyping / subclassing [closed]

In short, I am looking for a theorem prover whose underlying logic supports a multiple subtyping / subclassing mechanism. (I tried Isabelle, but it does not seem to provide first-class support for subtyping; see this.)
I would like to define a couple of types among which some are subclasses / subtypes of others. Furthermore, each type might be a subtype of more than one type. For example:
Type A
Type B
Type C
Type E
Type F
C is subtype of A
C is also subtype of B
E and F are subtypes of B
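(For comparison: in Haskell, the structure above could be sketched with type classes playing the role of the supertypes; this is subclassing by interface rather than true subtyping, so it is only an approximation:)
-- Empty classes standing in for the "supertypes" A and B.
class A t
class B t

data C = C
data E = E
data F = F

instance A C   -- C is a subtype of A
instance B C   -- C is also a subtype of B
instance B E
instance B F

-- A function that accepts any "subtype" of B:
useB :: B t => t -> ()
useB _ = ()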
PS:
I am editing this question again to be more specific (because of complaints about it being off-topic!): I am looking for a theorem prover / proof assistant in which I can define the above structure in a straightforward manner (not with workarounds, as kindly described in some respectable answers here). If I take the types as classes, the above subtypings could easily be formulated in C++! So I am looking for a formal system / tool in which I can define such a subtyping structure and reason about it.
Many thanks
PVS has traditionally emphasized "predicate subtyping" a lot, but the system is a bit old-fashioned these days and has fallen behind the other big players that are more active: Coq, Isabelle/HOL, Agda, other HOLs, ACL2.
You did not make your application clear. I reckon that any of the big systems could be applied to the problem, one way or the other. Formalization is a matter of phrasing your problem in a suitable way within the given logical environment. Logics are not programming languages, but they have the real power of mathematics. Thus, with some experience in a particular logic, you will be able to do great and amazing things that you did not expect at first sight.
When choosing your system, lists of particular low-level features are not so relevant. It is more important that you like the general style and culture of the system before you make a commitment. You can compare it to learning a foreign language: before you spend months or years studying one, do you collect features of its grammar? I don't think so.
You included the 'isabelle' tag, and, according to the Wikipedia article on subtyping, Isabelle provides one form of subtyping, "coercive subtyping", though as explained by Andreas Lochbihler, Isabelle doesn't really have subtyping in the sense that you (and others) want.
However, you're talking in vague generalities, so I can easily provide a contrived example that meets the requirements of your 5 types. And though it is contrived, it's not meaningless, as I explain below.
(*The 5 types.*)
datatype tA = tA_con int rat real
type_synonym tB = "(nat * int)"
type_synonym tC = int
type_synonym tE = rat
type_synonym tF = real
(*The small amount of code required to automatically coerce from tC to tB.*)
fun coerce_C_to_B :: "tC => tB" where "coerce_C_to_B i = (0, i)"
declare [[coercion coerce_C_to_B]]
(*I can use type tC anywhere I can use type tB.*)
term "(2::tC)::tB"
value "(2::tC)::tB"
In the above example, it can be seen that types tC, tE, and tF lend themselves naturally, and easily, to being coerced to types tA or tB.
This coercion of types is done quite a bit in Isabelle. For example, the type nat is used to define int, and int is used to define rat. Consequently, nat is automatically coerced to int, though int isn't to rat.
Wrap I (you haven't been using canonical HOL):
In your previous question examples, you've been using typedecl to introduce new types, and that doesn't generally reflect how people define new types.
Types defined with typedecl are nearly always foundational and axiomatized, such as with ind, in Nat.thy.
See here: isabelle.in.tum.de/repos/isabelle/file/8f4a332500e4/src/HOL/Nat.thy#l21
The keyword datatype_new is one of the primary, automagical ways to define new types in Isabelle/HOL.
Part of the power of datatype_new (and datatype) is its use to define recursive types, and its use with fun, for example with pattern matching.
In comparison to other proof assistants, I assume the new abilities of datatype_new are not trivial. For example, a distinguishing feature between types and ZFC sets has been that ZFC sets can be nested arbitrarily deep. Now, with datatype_new, a type of countable or finite sets can be defined that can be nested arbitrarily deep.
You can use standard types, such as tuples, lists, and records to define new types, which can then be used with coercions, as shown in my example above.
Wrap II (but, yes, that would be nice):
I could have continued with the list above, but I separate out two other keywords for defining new types: typedef and quotient_type.
I separate these two because now we enter the realm of your complaint: the logic of Isabelle/HOL often doesn't make it easy to define a type/subtype relationship.
Knowing nothing much, I do know now that I should only use typedef as a last resort. It's actually used quite a bit in the HOL sources, but the developers then have to do a lot of work to make a type defined with it easy to use, such as with fset:
http://isabelle.in.tum.de/repos/isabelle/file/8f4a332500e4/src/HOL/Library/FSet.thy
Wrap III (however, none are perfect in this imperfect world):
You listed the 3 proof assistants that probably have the largest market share, Coq, Isabelle, and Agda.
With proof assistants, we define our priorities, do our research, and then pick one; but it's like with programming languages: we're not going to get everything with any of them.
For myself, mathematical syntax and structured proofs are very important. Isabelle seems to be sufficiently powerful, so I choose it. It's not a perfect world, for sure.
Wrap IV (Haskell, Isabelle, and type classes):
Isabelle, in fact, does have a very powerful form of subclassing, "type classes".
Well, it is powerful, but it is also limited in that you can only use one type variable when defining a type class.
If you look at Groups.thy, you'll see the introduction of class after class after class, to create a hierarchy of classes.
isabelle.in.tum.de/repos/isabelle/file/8f4a332500e4/src/HOL/Groups.thy
You also included the 'haskell' tag. The functional programming attributes of Isabelle/HOL, with its datatype and type classes, help tie the use of Isabelle/HOL to the use of Haskell, as demonstrated by the ability of the Isabelle code generator to produce Haskell code.
There are ways to achieve that in Agda.
Group the functions related to one "type" into the fields of a record
Construct instances of such a record for the types you want
Pass that record along into proofs that require them
For example:
record Monoid (A : Set) : Set where
  constructor monoid
  field
    z     : A
    m+    : A -> A -> A
    xz    : (x : A) -> m+ x z == x
    zx    : (x : A) -> m+ z x == x
    assoc : (x : A) -> (y : A) -> (z : A) -> m+ (m+ x y) z == m+ x (m+ y z)
open Monoid public
Now list-is-monoid = monoid Nil (++) lemma-append-nil lemma-nil-append lemma-append-assoc instantiates (proves) that a List is a Monoid (given the proofs that Nil is a neutral element and a proof of associativity).

Relations between objects [closed]

For a few weeks I've been thinking about relations between objects, not especially OOP objects. For instance, in C++ we're used to representing such a relation by placing pointers, or containers of pointers, in the structure that needs access to the other object. If an object A needs access to B, it's not uncommon to find a B *pB in A.
But I'm not a C++ programmer anymore; I write programs in functional languages, and more especially in Haskell, which is a pure functional language. It's possible to use pointers, references, or that kind of stuff, but it feels strange to me, like "doing it the non-Haskell way".
Then I thought a bit more deeply about all that relation stuff and came to the point: why do we even represent such relations by layering?
I read that some folks have already thought about that (here). From my point of view, representing relations through explicit graphs is way better, since it enables us to focus on the core of our type and express relations later through combinators (a bit like SQL does).
By core I mean that when we define A, we expect to define what A is made of, not what it depends on. For instance, in a video game, if we have a type Character, it's legit to talk about Trait, Skill, or that kind of stuff, but is it if we talk about Weapon or Item? I'm not so sure anymore. Then:
data Character = Character
  { chSkills :: [Skill]
  , chTraits :: [Trait]
  , chName   :: String
  , chWeapon :: IORef Weapon -- or STRef, or whatever
  , chItems  :: IORef [Item] -- ditto
  }
sounds really wrong in terms of design to me. I'd much prefer something like:
data Character = Character
  { chSkills :: [Skill]
  , chTraits :: [Trait]
  , chName   :: String
  }
-- link our character to a Weapon using a Graph Character Weapon
-- link our character to Items using a Graph Character [Item] or that kind of stuff
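A minimal sketch of that external-relation idea (all names hypothetical), using a plain Data.Map keyed by character name as a stand-in for a real graph type:
import qualified Data.Map as Map

data Weapon = Sword | Bow deriving Show

type WeaponGraph = Map.Map String Weapon  -- character name -> weapon

arm :: String -> Weapon -> WeaponGraph -> WeaponGraph
arm = Map.insert

weaponOf :: String -> WeaponGraph -> Maybe Weapon
weaponOf = Map.lookup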
Furthermore, when a day comes to add new features, we can just create new types and new graphs and link them. In the first design, we'd have to break the Character type, or use some kind of workaround to extend it.
What do you think about that idea? What do you think is best for dealing with that kind of issue in Haskell, a pure functional language?
