The most minimal LISP? [duplicate]

This question already has answers here (closed 12 years ago):
Possible duplicate: How many primitives does it take to build a LISP machine? Ten, seven or five?
I am curious. What is the most minimal LISP, upon which all further features could be built? Ignore efficiency -- the question comes simply from a place of elegance.
If you awoke on an alien planet and were instructed to build the most minimal LISP that you could later build upon to implement any feature you liked, what would you include?
Edit: Clarification. My intent here is not to start a debate; rather, I am considering implementing a minimal LISP, and I want to understand how minimal I can go while still allowing the language I implement to be Turing-complete, etc. If this turns out to be controversial, I am sure I will learn what I wanted to learn from observing the controversy. :) Thanks!

Courtesy of Paul Graham, here's a Common Lisp implementation of John McCarthy's original LISP:
It assumes quote, atom, eq, cons, car, cdr, and cond, and defines null, and, not, append, list, pair, assoc, eval, evcon and evlis.
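To see how quickly derived operators fall out of those seven primitives, here is a rough Scheme rendering of the first two definitions from Graham's paper (a sketch from memory, not his exact code; the trailing dots keep the names from clashing with built-ins, and '() merely plays the role of McCarthy's false value -- in real Scheme, '() is not false):

; null. tests for the empty list using only eq?
(define (null. x)
  (eq? x '()))

; and. is built from cond alone; 't and '() stand for true and false
(define (and. x y)
  (cond (x (cond (y 't) ('t '())))
        ('t '())))

The rest of the paper bootstraps everything else, up to and including eval, in the same way.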

Peter Norvig implemented a LISP interpreter in 90 lines of Python, and his subsequent article discusses in detail what you're asking. Well worth the read, IMHO.
http://norvig.com/lispy.html
I've got a feeling that his "sequencing" special form could be dropped if only function calls could take an arbitrary number of parameters.
;; last returns the final element of a list
(defun last (lst)
  (if (cdr lst)
      (last (cdr lst))
      (car lst)))

;; begin evaluates all its arguments and returns the last one
(defun begin (&rest y) (last y))
But this would complicate function application.

Related

Why don't we write haskell in LISP syntax? (we can!) [closed]

Closed as opinion-based 2 years ago.
It... kinda works you guys (This absolutely compiles, adapted from https://hackage.haskell.org/package/scotty):
main :: IO ()
main = (do
  (putStrLn "Starting Server....")
  (scotty 3000 (do
    (get "/hello/:name"
      (text ("hello " <> (param "name") <> "!")))
    (get "/users"
      (json allUsers))
    (get "/users/:id"
      (json (filter (matchesId (param "id")) allUsers))))))
(I don't know enough Haskell to convert <> to simple parens, but a clever person could easily.)
Why would we do this? We could preprocess Haskell with any Lisp macro engine! Trivially!
Imagine it. HASKELL AND LISP TOGETHER. WE COULD RULE THE GALAXY!
(I know what you're thinking, but I've actually thought this through: in this example, Vader is Lisp, Luke is Haskell, and Yoda is Alonzo Church)
(edit "Thanks everyone who answered and commented, I'm now much wiser.
The biggest problem with this technique I don't think has been yet mentioned, and was pointed out by a friend IRL: If you write some lispy preprocessor, you lose type checking and syntax highlighting and comprehension in your IDE and tools. That sound like a hard pass from me."
"I'm now following the https://github.com/finkel-lang/finkel project, which is the lisp-flavoured haskell project that I want!")
The syntax of Haskell is historically derived from that of ISWIM, a language which appeared not much later than LISP and which is described in Peter J. Landin's 1966 article The Next 700 Programming Languages.
Section 6 is devoted to the relationship with LISP:
ISWIM can be looked on as an attempt to deliver LISP from its
eponymous commitment to lists, its reputation for hand-to-mouth
storage allocation, the hardware dependent flavor of its pedagogy,
its heavy bracketing, and its compromises with tradition.
Later in the same section:
The textual appearance of ISWIM is not like LISP's S-expressions. It
is nearer to LISP's M-expressions (which constitute an informal
language used as an intermediate result in hand-preparing LISP
programs). ISWIM has the following additional features: [...]
So there was the explicit intention of diverging from LISP syntax, or from S-expressions at least.
Structurally, a Haskell program consists of a set of modules. Each module consists of a set of declarations. Modules and declarations are inert - they cause nothing to happen by their existence alone. They just form entries in a static namespace that the compiler uses to resolve names while generating code.
As an aside, you might quibble about Main.main here. As the entry point, it is run merely for being defined. That's fair. But every other declaration is only used in code generation if it is required by Main.main, rather than just because it exists.
In contrast, Lisps are much more dynamic systems. A Lisp program consists of a sequence of s-expressions that are executed in order. Each one causes code execution with arbitrary side effects, including modification of the global namespace.
Here's where things get a lot more opinion-based. I'd argue that Lisp's dynamic structure is closely tied to the regularity of its syntax. A syntactic examination can't distinguish between s-expressions intended to add values to the global namespace and ones intended to be run for their side effects. Without a syntactic differentiation, it seems very awkward to add a semantic differentiation. So I'm arguing that there is a sense in which Lisp syntax is too regular to be used for a language with the strict semantic separations between different types of code in Haskell. Haskell's syntax, by contrast, provides syntactic distinctions to match the semantic distinctions.
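A small Scheme illustration of that point (just a sketch): nothing in the syntax distinguishes the first form, which merely extends the namespace, from the later ones, which execute for effect the moment the file is loaded:

(define greeting "hello")   ; extends the global namespace
(display greeting)          ; runs immediately, purely for effect
(set! greeting "goodbye")   ; mutates the binding defined above

All three are just s-expressions evaluated in order.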
Haskell does not have s-expressions, so parentheses are only used for grouping (marking the precedence of reduction) and for constructing tuples. This also means that it's not easy to make Lisp-style macros work in Haskell, since they rely heavily on s-expressions and dynamic typing.
Haskell also has a right-associative function application operator, ($), which covers most use cases of parentheses; for example, f $ g $ h x means f (g (h x)).
Whitespace has semantic meaning in Haskell, which is why most of us write
do
  p1
  p2
  p3

instead of

do { p1
   ; p2
   ; p3
   }

Concrete example of functional knowledge allowing you to write better imperative/OO code [closed]

Closed as opinion-based 8 years ago.
I've been working with functional programming for a while now and I think it's great so I would like to teach some of my friends Haskell.
Unfortunately, I can't think of any particular piece of code to show them and say "See, this is how it would look imperatively; see how much better the functional version is."
So, could someone who's more of an expert than I am (and that's a very low requirement) help me out?
This doesn't seem too opinion-based to me, but in case it is, please tell me how to fix it.
Probably the best notions to carry back are so-called "value semantics" and "purity".
These two play off one another so much that it's hard to separate them in practice. In principle, however, value semantics means that each "thing" should act like a value instead of an object. It leads to simpler passing, less "spooky action at a distance" from statefulness, and it provides some background for performing equational reasoning on code. Purity means that side effects do not occur wherever you have code, but instead only at carefully demarcated points. This means that most of your code ends up independent and reusable, while only the core "application" bits entangle themselves deeply with state and effect.
You might say that purity is having value semantics everywhere or that values are pure computations—so perhaps it's worth saying that "values" refer to the nouns (statics) of your system and "purity" the verbs (dynamics).
These techniques are well known to be useful in other languages. It's a common idea in OO languages these days to happily sacrifice some speed for value semantics due to the organizational and correctness benefits. If you become comfortable with Haskell then you will understand how value semantics and purity work if they are applied to every single aspect of an entire program without compromise. That means you've been exposed to some powerful patterns for reasoning about and building pure programs.
One place I've been thinking about making a comparison is between free monads and the Command pattern. Both are solving very similar problems—"how do I make explicit a structure containing instructions to be performed by a program and execute it at a later time, perhaps in various ways?"—but the Command pattern tends to dance around a lot of mutability in, at the very least, the interpreter, if not the commands themselves.
Can we write Command patterns which behave more like Free monads? What would be the benefits? These are the kinds of questions you can ask with much more acuity if you've got a strong Haskell background.
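To make the comparison concrete, here is a minimal Scheme sketch (with hypothetical names) of the idea the two share: a program reified as pure data, executed later by an interpreter that decides how to perform the effects:

; the "commands" are plain data -- building this list has no effects
(define program
  '((say "hello")
    (say "world")))

; the interpreter is the only place effects happen
(define (interpret cmds)
  (for-each
   (lambda (cmd)
     (case (car cmd)
       ((say) (display (cadr cmd)) (newline))
       (else (error "unknown command" cmd))))
   cmds))

(interpret program)   ; effects happen here, not where program was built

Roughly speaking, a free monad adds types and composition on top of this shape, while the Command pattern wraps each entry in an object.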
It's an interesting, and tricky question. There has been a trend of concepts from functional languages making their way into imperative languages for some time now, and the line between functional/imperative languages is quite blurred. For example, say you want to square every element of a list xs and store the result in a new list, ys.
>> xs = [1, 2, 3] # Python
>> xs = [1, 2, 3] -- Haskell
If you didn't know about functional idioms, you might do this:
>> ys = [0] * len(xs) # Python
>> for i in range(len(xs)):
       ys[i] = xs[i] * xs[i]
In Haskell you would just write
>> ys = map (\x -> x * x) xs -- Haskell
This idiom also exists in Python, of course
>> ys = map(lambda x: x * x, xs) # Python
Arguably, an even nicer way to write it is using a list comprehension
>> ys = [x * x | x <- xs] -- Haskell
which also exists in Python
>> ys = [x * x for x in xs] # Python
Certainly, the functional way is much nicer (and more composable, more reusable) than the imperative way. But in this case, you don't need to use a functional language to get the benefit - you just have to be ready to "think in a functional way."
Monads and continuations.
I have come across code like this:
synchronized (q) {
    Object o = q.poll();
    if (o == null) {
        ... // do something
    }
}
This folded itself into a much nicer API after refactoring:
Object o = q.poll(doSomethingAsLambda);
The q implementation was rewritten, of course, but now synchronization is finer-grained, because the implementation can execute custom code "inside" a branch that only the q implementation needs to be aware of.
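The same shape of refactoring, sketched in Scheme (a hypothetical API, and without real locking, which plain Scheme doesn't give you): the empty-queue behaviour is passed in as a procedure, so the branch runs "inside" the queue operation:

(define (poll-or-else queue on-empty)
  (if (null? queue)
      (on-empty)        ; the caller's code runs inside the branch
      (car queue)))

> (poll-or-else '() (lambda () 'was-empty))
=> was-empty
> (poll-or-else '(a b) (lambda () 'was-empty))
=> a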
Modern functional languages (like Haskell and ML) are concise - you can say a lot in not much code.
A real benefit of concision is that you can iterate a design quickly, and in other domains (like graphic or fashion design) rapid iteration seems to be considered one of the "tools" for becoming a good or expert designer; therefore it's fair to believe that learning how to rapidly iterate software designs will help make you a better programmer.
Of course, modern scripting languages like Python and "design" languages like Alloy (developed by Daniel Jackson at MIT) are also concise and allow you to rapidly iterate designs and prototypes. So the greater theme seems to be that "lightweight"/concise languages help you iterate and improve your design skills, rather than just "functional programming will make you a better 'mainstream' programmer".

What are examples of Symbolic Programming?

I have to do a term project in my symbolic programming class. But I'm not really sure what a good/legitimate project would be. Can anyone give me examples of symbolic programming? Just any generic ideas because right now I'm leaning towards a turn based fight game (jrpg style basically), but I really don't want to do a game.
The book Paradigms of Artificial Intelligence Programming, Case Studies in Common Lisp by Peter Norvig is useful in this context.
The book describes in detail symbolic AI programming with Common Lisp.
The examples are in the domain of Computer Algebra, solving mathematical tasks, game playing, compiler implementation and more.
Soon there will be another fun book: The Land of Lisp by Conrad Barski.
Generally there are a lot of possible applications of Symbolic Programming:
natural language question answering
natural language story generation
planning in logistics
computer algebra
fault diagnosis of technical systems
description of catalogs of things and matching
game playing
scene understanding
configuration of technical things
A simple arithmetic interpreter in Scheme that takes a list of symbols as input:
(define (symbolic-arith lst)
  (case (car lst)
    ((add)  (+ (cadr lst) (caddr lst)))
    ((sub)  (- (cadr lst) (caddr lst)))
    ((mult) (* (cadr lst) (caddr lst)))
    ((div)  (/ (cadr lst) (caddr lst)))
    (else (error "unknown operator"))))
Test run:
> (symbolic-arith '(add 2 43))
=> 45
> (symbolic-arith '(sub 10 43))
=> -33
> (symbolic-arith '(mu 50 43))
unknown operator
> (symbolic-arith '(mult 50 43))
=> 2150
That shows the basic idea behind meta-circular interpreters. Lisp in Small Pieces is the best place to learn about such interpreters.
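One small step from here toward a full evaluator is to recurse into the operands, so that expressions can nest (a sketch, with a hypothetical symbolic-eval):

(define (symbolic-eval expr)
  (if (number? expr)
      expr
      (let ((a (symbolic-eval (cadr expr)))    ; evaluate operands first
            (b (symbolic-eval (caddr expr))))
        (case (car expr)
          ((add)  (+ a b))
          ((sub)  (- a b))
          ((mult) (* a b))
          ((div)  (/ a b))
          (else (error "unknown operator"))))))

> (symbolic-eval '(add 2 (mult 3 4)))
=> 14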
Another simple example where lists of symbols are used to implement a key-value data store:
> (define person (list (cons 'name 'mat) (cons 'age 20)))
> person
=> ((name . mat) (age . 20))
> (assoc 'name person)
=> (name . mat)
> (assoc 'age person)
=> (age . 20)
If you are new to Lisp, Common Lisp: A Gentle Introduction to Symbolic Computation is a good place to start.
This isn't (just) a naked attempt to steal @Ken's rep. I don't recall what McCarthy's original motivation for creating Lisp might have been. But it is certainly suitable for computer algebra, including differentiation and integration.
The reason that I am posting, though, is that "automatic differentiation" is used to mean something other than differentiating symbolic expressions. It's used to mean writing a function which calculates the derivative of another function. For example, given a Fortran program which calculates f(x), an automatic differentiation tool would write a Fortran function which calculates f'(x). One technique, of course, is to try to transform the program into a symbolic expression, then use symbolic differentiation, then transform the resulting expression into a program again.
The first of these is a nice exercise in symbolic computation, though it is so well trodden that it might not be a good choice for a term project. The second is an active research front, and OP would have to be careful not to bite off more than (s)he can chew. However, even a limited implementation would be interesting and challenging.
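For a taste of the first exercise, here is a bare-bones symbolic differentiator in Scheme, in the style of the classic SICP deriv (a sketch handling sums and products only, with no simplification):

(define (deriv expr var)
  (cond ((number? expr) 0)
        ((symbol? expr) (if (eq? expr var) 1 0))
        ((eq? (car expr) '+)                      ; d(u+v) = du + dv
         (list '+ (deriv (cadr expr) var)
                  (deriv (caddr expr) var)))
        ((eq? (car expr) '*)                      ; d(u*v) = u*dv + du*v
         (list '+
               (list '* (cadr expr) (deriv (caddr expr) var))
               (list '* (deriv (cadr expr) var) (caddr expr))))
        (else (error "unknown expression" expr))))

> (deriv '(* x x) 'x)
=> (+ (* x 1) (* 1 x))   ; i.e. 2x, unsimplified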

Experiences teaching or learning map/reduce/etc before recursion? [closed]

Closed as opinion-based 9 years ago.
As far as I can see, the usual (and, in my opinion, best) order for teaching iteration constructs in functional programming with Scheme is to first teach recursion and only later get into things like map, reduce, and all the SRFI-1 procedures. This is probably, I guess, because with recursion the student has everything that's necessary for iterating (and could even re-write all of SRFI-1 if he/she wants to do so).
Now I was wondering if the opposite approach has ever been tried: use several procedures from SRFI-1 and only when they are not enough (for example, to approximate a function) use recursion. My guess is that the result would not be good, but I'd like to know about any past experiences with this approach.
Of course, this is not specific to Scheme; the question is also valid for any functional language.
One book that teaches "applicative programming" (the use of combinators) before recursion is David Touretzky's COMMON LISP: A Gentle Introduction to Symbolic Computation -- but then, it's a Common Lisp book, and he can teach imperative looping before that.
IMO starting with the basic blocks of knowledge first is better, then deriving the results. This is what they do in mathematics, i.e. they don't introduce exponentiation before multiplication, or multiplication before addition, because the former in each case is derived from the latter. I have seen some instructors go the other way around, and I believe it is not as successful as when you go from the basics to the results. In addition, by delaying the more advanced topics, you give students a mental challenge to derive these results themselves using the knowledge they already have.
There is something fundamentally flawed in saying "with recursion the student has everything that's necessary for iterating". It's true that when you know how to write (recursive) functions you can do anything, but how is that better for a student? When you think about it, you also have everything you need if you know machine language, or, to make a more extreme point, if I give you a CPU and a pile of wires.
Yes, that's an over-exaggeration, but it can relate to the way people teach. Take any language and remove any inessential constructs -- at the extreme of doing so in a functional language like Scheme, you'll be left with the lambda calculus (or something similar), which -- again -- is everything that you need. But obviously you shouldn't throw beginners into that pot before you cover more territory.
To put this in more concrete perspective, consider the difference between a list and a pair, as they are implemented in Scheme. You can do a lot with just lists even if you know nothing about how they're implemented as pairs. In fact, if you give students a limited form of cons that requires a proper list as its second argument, then you'll be doing them a favor by introducing a consistent, easier-to-grok concept before you proceed to the details of how lists are implemented. If you have experience teaching this stuff, then I'm sure that you've encountered many students who get hopelessly confused over using cons where they want append and vice versa. Problems that require both can be extremely difficult for newbies, and all the box+pointer diagrams in the world are not going to help them.
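For anyone who hasn't watched students hit this wall, the confusion looks like this at the REPL:

> (cons 1 '(2 3))
=> (1 2 3)         ; prepends one element
> (cons '(1 2) '(3 4))
=> ((1 2) 3 4)     ; probably not what was wanted
> (append '(1 2) '(3 4))
=> (1 2 3 4)       ; joins two proper lists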
So, to give an actual suggestion, I recommend you have a look at HtDP: one of the things that this book is doing very carefully is to expose students gradually to programming, making sure that the mental picture at every step is consistent with what the student knows at that point.
I have never seen this order used in teaching, and I find it as backwards as you. There are quite a few questions on StackOverflow that show that at least some programmers think "functional programming" is exclusively the application of "magic" combinators and are at a loss when the combinator they need doesn't exist, even if what they would need is as simple as map3.
Considering this bias, I would make sure that students are able to write each combinator themselves before introducing it.
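In Scheme the exercise is short enough to assign before the library versions are ever mentioned; a sketch (my-map and my-reduce are made-up names, and my-reduce is a left fold):

(define (my-map f lst)
  (if (null? lst)
      '()
      (cons (f (car lst)) (my-map f (cdr lst)))))

(define (my-reduce f init lst)
  (if (null? lst)
      init
      (my-reduce f (f init (car lst)) (cdr lst))))

> (my-map (lambda (x) (* x x)) '(1 2 3))
=> (1 4 9)
> (my-reduce + 0 '(1 2 3))
=> 6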
I also think introducing map/reduce before recursion is a good idea. (However, the classic SICP introduces recursion first, and implements map/reduce on top of lists and recursion. This is a bottom-up approach to building abstraction. Its emphasis is still abstraction.)
Here's the sum-of-squares example I can share with you using F#/ML:
let sumOfSqrs1 lst =
    let rec sum lst acc =
        match lst with
        | x::xs -> sum xs (acc + x * x)
        | [] -> acc
    sum lst 0

let sumOfSqr2 lst =
    let sqr x = x * x
    lst |> List.map sqr |> List.sum
The second method is a more abstract way to solve this sum-of-squares problem, while the first one exposes too many details. The strength of functional programming is better abstraction: the second program, using the List library, expresses the idea that the for loop can be abstracted out.
Once students can play with List.*, they will be eager to know how these functions are implemented. At that point, you can go back to recursion. This is a kind of top-down teaching approach.
I think this is a bad idea.
Recursion is one of the hardest basic subjects in programming to understand, and even harder to use. The only way to learn this is to do it, and a lot of it.
If students are handed the nicely abstracted higher-order functions, they will use these instead of recursion, and will just keep using the higher-order functions. Then, when they need to write a higher-order function themselves, they will be clueless and will need you, the teacher, to practically write the code for them.
As someone mentioned, you've got to learn a subject bottom-up if you want people to really understand it and be able to customize it to their needs.
I find that a lot of people, when programming functionally, often 'imitate' an imperative style and try to imitate loops with recursion when they don't need anything resembling a loop, and need map or fold/reduce instead.
Most functional programmers would agree that you shouldn't try to imitate an imperative style. I think recursion shouldn't be 'taught' at all; it should develop naturally and self-evidently. In declarative programming, it is often evident at various points that a function is defined in terms of itself. Recursion shouldn't be seen as 'looping' but as 'defining a function in terms of itself'.
Map, however, is a repetition of the same thing all over again; often, when people use (tail) recursion to simulate loops, they should be using map in functional style.
The thing is that the "recursion" you really want to teach/learn is, ultimately, tail recursion, which is technically not recursion but a loop.
So I say go ahead and teach/learn real recursion first (the kind nobody uses in real life because it's impractical), then teach why it is impractical, then teach tail recursion, then teach why it is not really recursion.
That seems to me to be the best way. If you're learning, do all this before using higher-order functions too much. If you're teaching, show them how higher-order functions replace loops (and then, when you teach tail recursion, they'll understand how the looping is really just hidden but still there).
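In Scheme terms the distinction looks like this (a sketch): the first version builds up pending work on the stack, while the second is a loop in disguise because the recursive call is in tail position:

(define (length-rec lst)          ; non-tail: (+ 1 ...) waits on the call
  (if (null? lst)
      0
      (+ 1 (length-rec (cdr lst)))))

(define (length-iter lst acc)     ; tail call: nothing left to do after it
  (if (null? lst)
      acc
      (length-iter (cdr lst) (+ acc 1))))

> (length-rec '(a b c))
=> 3
> (length-iter '(a b c) 0)
=> 3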

Are there any purely functional Schemes or Lisps?

I've played around with a few functional programming languages and really enjoy the s-expr syntax used by Lisps (Scheme in particular).
I also see the advantages of working in a purely functional language. Therefore:
Are there any purely functional Schemes (or Lisps in general)?
The new Racket language (formerly PLT Scheme) allows you to implement any semantics you like with s-expressions (really, any syntax). The base language is an eagerly evaluated, dynamically typed Scheme variant, but some notable languages built on top are a lazy Scheme and a functional reactive system called FrTime ("Father Time").
An easy way to make a purely functional language in Racket is to take the base language and not provide any procedures that mutate state. For example:
#lang racket/base
(provide (except-out (all-from-out racket/base) set! ...more here...))
makes up a language that has no set!.
I don't believe there are any purely functional Lisps, but Clojure is probably the closest.
Rich Hickey, the creator of Clojure:
Why did I write yet another programming language? Basically because I wanted a Lisp for Functional Programming designed for Concurrency and couldn't find one.
http://clojure.org/rationale
Clojure is functional, with immutable data types and variables, but you can get mutable behavior in some special cases or by dropping down to Java (Clojure runs on the JVM).
This is by design - another quote by Rich is
A purely functional programming language is only good for heating your computer.
See the presentation of Clojure for Lisp programmers.
Are there any purely functional Schemes (or Lisps in general)?
The ACL2 theorem prover is a pure Lisp. It is, however, intended for theorem proving rather than programming, and in particular it is limited to first-order programs. It has, however, been extremely successful in its niche.
Among other things, it won the 2005 ACM Software System Award.
Probably not, at least not as anything other than toys/proofs of concept. Note that even Haskell isn't 100% purely functional--it has secret escape hatches, and anything in IO is only "pure" in some torturous, hand-waving sense of the word.
So, that said, do you really need a purely functional language? You can write purely functional code in almost any language, with varying degrees of inconvenience and inefficiency.
Of course, languages that assume universal state-modification make it painful to keep things pure, so perhaps what you really want is a language that encourages immutability? In that case, you might find it worthwhile to take a look at Clojure's philosophy. And it's a Lisp, to boot!
As a final note, do realize that most of Haskell's "syntax" is thick layers of sugar. The underlying language is not much more than a typed lambda calculus, and nothing stops you from writing all your code that way. You may get funny looks from other Haskell programmers, though. There's also Liskell but I'm not sure what state it's in these days.
On a final, practical note: If you want to actually write code you intend to use, not just tinker with stuff for fun, you'll really want a clever compiler that knows how to work with pure code/immutable data structures.
inconsistent and non-extendable syntax
What is "inconsistency" here?
It is odd to base a language choice solely on syntax. After all, learning syntax will take a few hours -- it is a tiny fraction of the investment required.
In comparison, important considerations like speed, typing discipline, portability, breadth of libraries, documentation, and community have a far greater impact on whether you can be productive.
Ignoring all the flame bait, a quick google for immutable Scheme yields some results:
http://blog.plt-scheme.org/2007/11/getting-rid-of-set-car-and-set-cdr.html
Thirty years ago there was Lispkit Lisp.
Not sure how accessible it is today.
[That's one of the places where I learnt functional programming]
There is Owl Lisp, a dialect of Scheme R5RS with all data structures made immutable and some additional pure data structures. It is not a large project, but it seems to be actively developed and used by a small group of people (from what I can see on the website & git repository). There are also plans to include R7RS support and some sort of type inference. So while probably not ready for production use, this might be a fun thing to play with.
If you like Lisp's syntax, then you can actually do similar things in Haskell:
let fibs = ((++) [1, 1] (zipWith (+) fibs (tail fibs)))
The let fibs = part aside, you can always use s-expr syntax in Haskell expressions, because you can always add parentheses on the outside and it won't matter. This is the same code without redundant parentheses:
let fibs = (++) [1, 1] (zipWith (+) fibs (tail fibs))
And here it is in "typical" Haskell style:
let fibs = [1, 1] ++ zipWith (+) fibs (tail fibs)
There are a couple of projects that aim to use haskell underneath a lispy syntax. The older, deader, and more ponderous one is "Liskell". The newer, more alive, and lighter weight one is hasp. I think you might find it worth a look.
