When I'm developing an understanding of some concept, I find it very unsatisfactory not to be able to see how the apparent etymology of the concept name relates to what I think I'm understanding about the concept. If I can't see the connection, I'm left with the feeling that there's some significant insight the name is trying to convey that I haven't yet discovered.
Monad: From Greek for unity. Mon = one; ad = a group or unit comprising a certain number. This composes to "A group or unit composed of one thing".
http://www.haskell.org/haskellwiki/All_About_Monads says:
"A monad is a way to structure computations in terms of values and sequences of computations using those values. Monads allow the programmer to build up computations using sequential building blocks, which can themselves be sequences of computations." ... "Other monads exist for building computations that perform I/O, have state, may return multiple results, etc"
Nothing much there about one-ness.
http://www.haskell.org/haskellwiki/Monad claims that the one-ness in the term monad refers to the one output that a monad will produce. But given that any function produces one output (and the above reference says "may return multiple results", not to mention out-of-band/error results), and given that nothing accounts for the "group or unit", that explanation seems unconvincing.
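(For reference, the "sequential building blocks" the wiki mentions look like this in practice. This is a minimal sketch using the standard Maybe monad, with illustrative names of my own; note that each step does yield exactly one monadic value:)

import Text.Read (readMaybe)

-- Each step may fail; (>>=) sequences the steps and short-circuits on Nothing.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

pipeline :: String -> Maybe Int
pipeline s = readMaybe s >>= halve >>= halve

-- pipeline "12" == Just 3
-- pipeline "10" == Nothing   (halving 5 fails)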
Is there some better explanation?
[Edit: Responding to the "off topic" flag. My question is not about the etymology of the word "monad" per se. It is about the Haskell concept of monad, and how the roots of the word monad do or do not inform us about that concept, or perhaps actually misdirect us from understanding the topic. Given that monad is a famously hard-to-communicate concept in Haskell, this is certainly a question about programming.
That this is a salient issue is reinforced by the variation in respondent suggestions regarding how the roots in "monad" might relate to the topic at hand, including the observation that the explanation in Haskell's own documentation is highly suspect.
That said, I'm pretty satisfied with the answers given (thanks all!), so no need to reopen the topic. But I'd advocate not moving it elsewhere, so that others with the same confusion about an important Haskell concept can find it here.]
Is there some better explanation?
Short answer: No, there really isn't.
Slightly less short answer: It's almost certainly related to "monoid", and not related to any other use of "monad" (there are at least two), and the term was coined at a gathering of mathematicians so there's likely not even a written source that's the first use of the term.
Longer answer with quotes and citations: The one I wrote here.
That claim on the wiki about the alleged meaning seems very dubious to me, incidentally.
Why does Haskell use -- as its comment syntax? I just want to know if there are any interesting stories behind the decision on this comment syntax in the design of Haskell. (That's all. If this kind of question is not intended for Stack Overflow, I'll delete this.)
For historical Haskell design questions, the best reference is Hudak, Hughes, Peyton Jones, and Wadler's "A History of Haskell: Being Lazy With Class" paper. Here's an electronic copy: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/history.pdf
Section 4.6 talks about comments, and has the following interesting note:
Comments provoked much discussion among the committee, and Wadler later formulated a law to describe how effort was allotted to various topics: semantics is discussed half as much as syntax, syntax is discussed half as much as lexical syntax, and lexical syntax is discussed half as much as the syntax of comments. This was an exaggeration: a review of the mail archives shows that well over half of the discussion concerned semantics, and infix operators and layout provoked more discussion than comments. Still, it accurately reflected that committee members held strong views on low-level details.
It goes on to describe the comment syntax, though I don't see any specific reason given for why -- was chosen. My personal thought is that it lets you separate two parts of the program with a complete dashed line while staying syntactically valid, looking like a regular document that uses a full-width line as a separator to similar effect.
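For example, this compiles (a small sketch; a run of dashes counts as an ordinary line comment as long as it doesn't form a legal operator symbol such as -->):

x :: Int
x = 1

--------------------------------------------------------------------------------
-- A full line of dashes is still just a line comment,
-- so it can double as a visual section separator.

y :: Int
y = 2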
There are further comments regarding bird-tracks, which have fallen out of fashion as far as I know. In the end, I think it was more or less an arbitrary choice. But as the quote above indicates, there was apparently considerable discussion around it.
While trying to learn the functional programming approach, I came across both lambda calculus and category theory terms.
Could you please explain, in layman's terms, the differences between them in the scope of FP?
What do they mean for FP?
Thank you!
It's hard to fully answer such a broad question. Below I've tried to provide some insights, but I cannot describe these vast topics in only a few paragraphs.
The lambda calculus is the mathematical core of any functional programming language. It can be seen as a very minimalistic programming language, where the key properties of FP can be studied without being distracted by heavy syntax.
Any programmer, especially an FP programmer, should be able to learn the basics of the lambda calculus within a few hours of study. Note that, even if the basics are rather simple, the underlying theory is incredibly vast: there's a huge amount of scientific literature dedicated to the lambda calculus. One does not need to know all of it for everyday FP, but reading a few results here and there once in a while can provide some insights on FP. For instance, reading about Church encodings can make one realize how powerful a lambda abstraction can be.
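As a taste of that, here is a minimal sketch of Church numerals transcribed into Haskell (the names below are mine, not standard library functions):

{-# LANGUAGE RankNTypes #-}

-- A Church numeral encodes the number n as "apply a function n times".
type Church = forall a. (a -> a) -> a -> a

zero, one, two :: Church
zero = \_ x -> x
one  = \f x -> f x
two  = \f x -> f (f x)

-- Addition: apply f "m more times" after applying it "n times".
plus :: Church -> Church -> Church
plus m n = \f x -> m f (n f x)

-- Convert back to Int to inspect: toInt (plus two one) == 3
toInt :: Church -> Int
toInt n = n (+ 1) 0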
Category theory is one of the most abstract parts of mathematics. Compared with the lambda calculus, it is much harder to study. It can be very challenging.
CT is connected with the lambda calculus mostly because it provides a nice way to understand types. Types in FP have an underlying algebraic structure which is best understood by categorical means. For instance, in Haskell the types (A, Either B C) and Either (A,B) (A,C) are isomorphic (ignoring bottoms, at least, since a*(b+c) = (a*b)+(a*c)), roughly meaning that they carry the same amount of information. As another example, currying and uncurrying form the fundamental idea of cartesian closed categories, the "standard" way to interpret simple types.
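The (A, Either B C) isomorphism above can even be written down as a pair of ordinary Haskell functions (a quick sketch; the names distribute and factor are just illustrative):

-- (a, Either b c) is isomorphic to Either (a, b) (a, c),
-- mirroring the arithmetic identity a*(b+c) = a*b + a*c.
distribute :: (a, Either b c) -> Either (a, b) (a, c)
distribute (a, Left  b) = Left  (a, b)
distribute (a, Right c) = Right (a, c)

factor :: Either (a, b) (a, c) -> (a, Either b c)
factor (Left  (a, b)) = (a, Left  b)
factor (Right (a, c)) = (a, Right c)

-- distribute . factor and factor . distribute are both the identity
-- (ignoring bottoms, as above).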
If you have a programmer's background, you can try Bartosz Milewski's online book Category Theory for Programmers. Still, I would recommend starting with some FP and the lambda calculus first.
Haskell is a purely functional language, breaking from traditional object-oriented languages. However, consider the following quote from Alan Kay on the "true" meaning of OOP:
OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I’m not aware of them. -- Alan Kay
and later on:
I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning -- it took a while to see how to do messaging in a programming language efficiently enough to be useful).
I'm curious to what extent this style of programming can be achieved in Haskell. In particular, is it possible to structure a Haskell program as a sequence of (something resembling) encapsulated objects passing messages back and forth to each other?
NOTE: I'm looking for examples specific to Haskell, not functional languages at large (when in conflict).
only messaging, local retention and protection and hiding of state-process
like biological cells and/or individual computers on a network, only able to communicate with messages
I believe certain Haskell programming patterns do resemble Kay's description, to a degree.
In streaming libraries like conduit, pipes or streaming, it is common to build a computation as a pipeline composed of different stages. Each part of the pipeline is quite independent of the others and can maintain its own private state (you can have "shared" state in the pipeline as well).
The topologies tend to be linear and unidirectional. That said, abstractions like conduit's ZipSink (and the Applicative instance of Fold in the foldl package) let you build "tree-like" topologies that branch out. And pipes can be bidirectional, although I haven't seen many examples that make use of it.
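For a flavor of that pipeline style, here is a minimal sketch using names from the conduit package (treat the details as approximate rather than authoritative):

import Conduit

-- Each stage is independent of the others; takeC even keeps a bit of
-- private state (a countdown of how many elements remain to pass).
firstFiveDoubled :: [Int]
firstFiveDoubled = runConduitPure $
     yieldMany [1 ..]  -- source: an infinite stream of Ints
  .| mapC (* 2)        -- a pure transformation stage
  .| takeC 5           -- a stateful stage: only the first five go through
  .| sinkList          -- sink: collect the results into a list

-- firstFiveDoubled == [2,4,6,8,10]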
Then there's arrowized functional reactive programming. It lets you build "circuits" of automaton arrows that can even include loops. Each part of the circuit can maintain its own state. As the description of the netwire FRP library states:
This library provides interfaces for and implements wire arrows useful both for functional reactive programming (FRP) and locally stateful programming (LSP).
And from the docs of the auto library:
auto works by providing a type that encapsulates value stream transformers, or locally stateful functions; by specifying your program as a (potentially cyclic) graph of relationships between value streams, you create a way of declaring a system based simply on static relationships between quantities.
Instead of a state monad type solution, where all functions have access to a rigid global state, auto works by specifying relationships which each exist independently and on their own, without any global state.
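A hand-rolled version of that idea might look like this (a sketch only; this is not the auto library's actual API):

-- A "locally stateful function": a stream transformer whose state
-- is sealed inside the closure, invisible to the rest of the program.
newtype Auto a b = Auto { stepAuto :: a -> (b, Auto a b) }

-- A running total; the accumulator is private to this stage.
total :: Int -> Auto Int Int
total acc = Auto $ \x -> let acc' = acc + x in (acc', total acc')

-- Feed a list of inputs through an Auto, collecting the outputs.
runAuto :: Auto a b -> [a] -> [b]
runAuto _ []       = []
runAuto a (x : xs) = let (y, a') = stepAuto a x in y : runAuto a' xs

-- runAuto (total 0) [1,2,3,4] == [1,3,6,10]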
I use the term all the time... but I was just thinking that I don't really have a solid denotational sense behind it (or at least behind the term in the sense I want to discuss here). I'm interested in the sense of the word related to code, not the anthropomorphic idea. I'm also not interested here in the sense of the word related to intentional malicious computing (i.e. a hack to unlock secret powers in a game). What I want to explore is what it means to 'hack' in terms of writing software to solve a problem.
Wikipedia's definition of 'hack' is, to me, a bit vague, but a decent starting point. It considers that a hack:
can refer to a solution or method which functions correctly but which is "ugly" in its conception
works outside the accepted structures and norms of the environment
is not easily extendable or maintainable
can be slang for "copy", "imitation" or "rip-off."
These traits of a hack conform to my usage of the word--when applied to code it is always a term of derision. To my mind, a hack
Is likely to be difficult to maintain & hard to understand in the context of the rest of the code.
Is likely to cause failure of the app.
Tends to indicate a poor understanding by the coder either of the problem space, usage of the language, or both.
Tends to be the byproduct of aggressive schedules.
Suggests potential changes in requirements that have not been fully incorporated into the architecture of the solution (requiring an 'inorganic' workaround).
Smells.
all bad, bad, bad. To me, a hack in this sense is always negative, indicating either lack of time, incompetence, or sloth on the part of the developer, though a decent percentage of hacks must be written to compensate for ill-conceived designs or systems that have gained requirements which their original design cannot handle 'organically'.
I don't think I've really captured it totally though--it's like pornography a bit: I can't really define it, but I know it when I see it. So I ask you: what does it mean to 'hack' when you are trying to solve a problem in software?
I've always preferred Paul Graham's definition:
To add to the confusion, the noun "hack" also has two senses. It can be either a compliment or an insult. It's called a hack when you do something in an ugly way. But when you do something so clever that you somehow beat the system, that's also called a hack. The word is used more often in the former than the latter sense, probably because ugly solutions are more common than brilliant ones.
From the Jargon File, the glossary of hacker slang:
The Meaning of ‘Hack’
“The word hack doesn't really have 69 different meanings”, according to MIT hacker Phil Agre. “In fact, hack has only one meaning, an extremely subtle and profound one which defies articulation. Which connotation is implied by a given use of the word depends in similarly profound ways on the context. Similar remarks apply to a couple of other hacker words, most notably random.”
Hacking might be characterized as ‘an appropriate application of ingenuity’. Whether the result is a quick-and-dirty patchwork job or a carefully crafted work of art, you have to admire the cleverness that went into it.
An important secondary meaning of hack is ‘a creative practical joke’. This kind of hack is easier to explain to non-hackers than the programming kind.
When I think of "hack", I think of it as being a non-expected workaround to solve a problem, not necessarily a bad thing. Creative, innovative, and well-placed. "Hack" can apply to more than just computers, though I seldom hear it used that way.
Too often "hack" simply means: "Not the way I would do it."
This topic will turn into something like a question about love. Everyone's going to have their own definition. The best way to learn the proper (default) definition is to look in the dictionary.
It's when you've stepped out of the idiomatic, natural, sensible and (sometimes) supported ways of doing something in a given language/framework/etc.
Sometimes that's a stroke of genius, usually it's an act of idiocy, occasionally it's one disguised as the other, and on rare occasions it's both.
(Incidentally, the judge who coined the statement about pornography that you quote later retracted it in making another ruling.)
When I use the term 'hack' it usually refers to a solution to a problem that was done usually in response to a pressing issue, and so not a lot of thought went into it in regards to the overall design of the application. Sometimes it works out, sometimes not so much, and sometimes it turns out to be a work of genius. But mainly, it's an admitted temporary solution that (hopefully) gets refactored and refined when possible.
Here's a great sentence I saw about the difference between hacking and scamming: "Hacking attacks are successful when the criminal knows how a particular computer system works. Scams are successful when the perpetrator knows how the human brain works." It brings out the idea that to hack into something, you need a deep understanding of how it works.
OK -- a bit of an undefined question (is the pattern of plugs in an ENIAC plugboard a language?), but contenders include:
Konrad Zuse's Plankalkül (1940s) - never implemented (generally accepted as the first).
Whatever Ada Lovelace (1840s) programmed in (not Ada) - if she is the first programmer, as everyone says, she must have used the first programming language, no? Again, probably never implemented - but did Babbage have anything that could be called a language?
Turing's description of his Turing machine (1936 paper) - in the paper he actually writes programs and simulates their execution mathematically; that makes it as good as (and earlier than) Plankalkül in my book.
Autocode for the Manchester Mark 1 computer (1952) - compiled, high level, beats Fortran to the punch(?). Mr Turing again(!).
Fortran (early 1950s) - beats out Lisp by a couple of years and undoubtedly passes the sniff test. But was it earlier than Mark 1 Autocode?
The PBS series Connections made the argument that the holes punched in tiles to control the patterns created on looms (circa 1700s??) were the first programming "language".
These were followed by player piano scrolls: Codes, on paper, which are read by, and control the operation of a machine. That's a programming language, isn't it?
DNA -- or does it have to involve silicon computers? ;-)
Since Ada Lovelace is widely regarded as the first programmer, I'd investigate what she called the set of symbols she was using.
Update: You can read the notation that Lovelace used in her Notes on Sketch of The Analytical Engine Invented by Charles Babbage By L. F. MENABREA. Lovelace was the translator, but her notes describing the programming of the Analytical Engine ended up being about four times longer than the original publication.
I think we need to agree on a definition of "programming language" to answer this question in any useful way. Is directly manipulating machine code a programming language?
Konrad Zuse's Plankalkül (1940s) - never implemented
There was actually an implementation of the language published by Rojas et al. somewhere around the year 2000.
DNA -- or does it have to involve silicon computers? ;-)
Well, if you go down that road then the correct answer has to be RNA which existed before DNA. But then, do we have a Blind Programmer? ;-)
In the beginning there was Ada Lovelace. Then Bill said 'Let there be C#', and there was light!
Assuming a definition of "programming language" as "a textual notation used to describe/control the intended behavior of a digital computer", I think there's only one possible answer: raw (numerical) machine code.
Many of the other answers (e.g. recipes for cooking) are clever, but aren't about programming per se, but about description/control in a different context or more general sense.
I would say that the first programming language actually used was the machine language of the first stored program computer, which I believe was Baby: http://www.computer50.org/
The language the Analytical Engine would have used was its own machine code, entered via punch cards indicating the operation to be performed and the columns (effectively registers) to perform it on. See these notes for some details.
Programming, at least in the declarative sense, comes down to combinations of sequence, alternation, and repetition. One might consider recipe authors as programmers, and therefore very early ones. Think about a recipe: it contains sequence (slice this, then chop that, then heat so and so...), alternation (if you want it moist then bake for 40 minutes, else if you want it "cakey" bake for 55 minutes), and repetition (while the dough is not stiff, keep kneading; repeat stirring until the batter is smooth). Recipes go back thousands of years.