What are examples of Symbolic Programming? - programming-languages

I have to do a term project in my symbolic programming class, but I'm not really sure what a good/legitimate project would be. Can anyone give me examples of symbolic programming? Any generic ideas would help; right now I'm leaning towards a turn-based fight game (JRPG style, basically), but I really don't want to do a game.

The book Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp by Peter Norvig is useful in this context.
The book describes in detail symbolic AI programming with Common Lisp.
The examples are in the domains of computer algebra, solving mathematical tasks, game playing, compiler implementation, and more.
Soon there will be another fun book: The Land of Lisp by Conrad Barski.
Generally there are a lot of possible applications of Symbolic Programming:
natural language question answering
natural language story generation
planning in logistics
computer algebra
fault diagnosis of technical systems
description of catalogs of things and matching
game playing
scene understanding
configuration of technical things

A simple arithmetic interpreter in Scheme that takes a list of symbols as input:
(define (symbolic-arith lst)
  (case (car lst)
    ((add)  (+ (cadr lst) (caddr lst)))
    ((sub)  (- (cadr lst) (caddr lst)))
    ((mult) (* (cadr lst) (caddr lst)))
    ((div)  (/ (cadr lst) (caddr lst)))
    (else (error "unknown operator"))))
Test run:
> (symbolic-arith '(add 2 43))
=> 45
> (symbolic-arith '(sub 10 43))
=> -33
> (symbolic-arith '(mu 50 43))
unknown operator
> (symbolic-arith '(mult 50 43))
=> 2150
That shows the basic idea behind meta-circular interpreters. Lisp in Small Pieces is the best place to learn about such interpreters.
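To push that idea one step further (a sketch of mine, not from any of the books above), the interpreter can recurse on its operands, so that expressions nest:
(define (symbolic-eval expr)
  ;; Like symbolic-arith, but operands may themselves be expressions.
  (if (number? expr)
      expr
      (let ((a (symbolic-eval (cadr expr)))
            (b (symbolic-eval (caddr expr))))
        (case (car expr)
          ((add)  (+ a b))
          ((sub)  (- a b))
          ((mult) (* a b))
          ((div)  (/ a b))
          (else (error "unknown operator"))))))
> (symbolic-eval '(add 2 (mult 3 4)))
=> 14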
Another simple example where lists of symbols are used to implement a key-value data store:
> (define person (list (cons 'name 'mat) (cons 'age 20)))
> person
=> ((name . mat) (age . 20))
> (assoc 'name person)
=> (name . mat)
> (assoc 'age person)
=> (age . 20)
If you are new to Lisp, Common Lisp: A Gentle Introduction to Symbolic Computation is a good place to start.

This isn't (just) a naked attempt to steal @Ken's rep. I don't recall what McCarthy's original motivation for creating Lisp might have been. But it is certainly suitable for computer algebra, including differentiation and integration.
The reason that I am posting, though, is that automatic differentiation is used to mean something other than differentiating symbolic expressions. It's used to mean writing a function which calculates the derivative of another function. For example, given a Fortran program which calculates f(x), an automatic differentiation tool would write a Fortran function which calculates f'(x). One technique, of course, is to try to transform the program into a symbolic expression, then use symbolic differentiation, then transform the resulting expression into a program again.
The first of these is a nice exercise in symbolic computation, though it is so well trodden that it might not be a good choice for a term paper. The second is an active research front, and OP would have to be careful not to bite off more than (s)he can chew. However, even a limited implementation would be interesting and challenging.
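For a taste of the symbolic route, here is a minimal sketch (mine, covering only the textbook sum and product rules) of a differentiator over s-expressions:
(define (deriv expr var)
  ;; Differentiate EXPR with respect to VAR; handles + and * only.
  (cond ((number? expr) 0)
        ((symbol? expr) (if (eq? expr var) 1 0))
        ((eq? (car expr) '+)
         (list '+ (deriv (cadr expr) var)
                  (deriv (caddr expr) var)))
        ((eq? (car expr) '*)   ; product rule: (uv)' = u'v + uv'
         (list '+ (list '* (deriv (cadr expr) var) (caddr expr))
                  (list '* (cadr expr) (deriv (caddr expr) var))))
        (else (error "unknown expression"))))
> (deriv '(* x x) 'x)
=> (+ (* 1 x) (* x 1))
Simplifying the resulting expressions (e.g. collapsing (* 1 x) to x) is itself a nice follow-on exercise in symbolic computation.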

Related

Why was function application chosen as default Haskell operator, not composition?

Haskell syntax requires the relatively noisy f . g $ 3 compared to 3 g f in stack-oriented languages. What were the main design arguments for this choice?
That could also be written f (g 3).
Why is Haskell not a concatenative language?
Based on A History of Haskell, it was influenced by a variety of functional-programming and lazy-language experiments, including ML. As section 4 (Syntax) describes:
Currying
Following a tradition going back to Frege, a function of two arguments may be represented as a function of one argument that itself returns a function of one argument. This tradition was honed by Moses Schönfinkel and Haskell Curry and came to be called currying. Function application is denoted by juxtaposition and associates to the left. Thus, f x y is parsed (f x) y. This leads to concise and powerful code. For example, to square each number in a list we write map square [1,2,3], while to square each number in a list of lists we write map (map square) [[1,2],[3]]. Haskell, like many other languages based on lambda calculus, supports both curried and uncurried definitions […]
The concept of currying is so central to Haskell's semantics and the lambda calculus at its core that any other method of arrangement would interact poorly with the language.
The stack-oriented style doesn't so much compose as sequence functions; 3 g f in such a language is rather f $ g $ 3 in Haskell. Of course, that's equivalent to f . g $ 3, but it only works as long as you immediately apply the composition to some value. In Haskell, you very often compose functions just to hand them to some higher-order combinator, or to make a point-free definition. In a stack-oriented language that requires some sort of explicit block; in Haskell it requires just the . operator.
Usually, you don't just chain "atomic" functions. Certainly you don't deal with globally-named single-letter functions, so the tiny . or $ doesn't really make a dramatic difference verbosity-wise. And very often, as rmmh said, you chain partially applied functions, e.g.
main = interact $ unlines . take 10 . filter ((>20) . length) . lines
That's much more cumbersome without cheap tight-binding application. Also, it's very natural to have the separating . to mark what's not immediately applied but just composed.
If you're interested in the history of Haskell, Hudak, Hughes, Peyton Jones & Wadler's "A History of Haskell: Being Lazy with Class" is the best-known paper on this topic, and well worth reading.
It doesn't address your question directly, but it does point out one very relevant fact: Haskell was created as a unifying compromise between a bunch of existing languages from small teams. Quoting section 2.2 ("A tower of Babel"):
As a result of all this activity, by the mid-1980s there were a number of researchers, including the authors, who were keenly interested in both design and implementation techniques for pure, lazy languages. In fact, many of us had independently designed our own lazy languages and were busily building our own implementations for them. We were each writing papers about our efforts, in which we first had to describe our languages before we could describe our implementation techniques. Languages that contributed to this lazy Tower of Babel include:
Miranda […]
Lazy ML (LML) […]
Orwell […]
Alfl […]
Id […]
Clean […]
Ponder […]
Daisy […]
So the answer may simply be that Haskell copied this from its predecessor languages. And since a bunch of these languages were in turn based on or inspired by Lisp and ML, they may analogously have copied it from them. So, to return to your question:
What were the main design arguments for this choice?
Chances are that there was never a sustained argument for the choice. Very few high-level languages have gone for the stack-based design, in any case, and few people know them.
My guess would be lambda calculus and usefulness (in real-world scenarios).
In lambda calculus, a space denotes application, and thus this notation feels familiar to people who know it.
In most commonly used languages, the usual thing to do with a function is to apply it. Haskell is not a stack-based language, so that's the choice that was made.

Can Haskell programs be represented as Lisp S-expressions?

This would be useful for genetic programming, which usually uses a Lisp subset as the representation for programs.
I've found something called Liskell (Lisp syntax, Haskell inside) on the web, but the links are broken and I can't find the paper on it...
Check out Lisk, which was designed to fix the author's gripes with Liskell.
In my spare time I’m working on a project called Lisk. Using the -pgmF option for GHC, you can provide GHC a program name that is called to preprocess the file before GHC compiles it. It also works in GHCi and imports. You use it like this:
{-# OPTIONS -F -pgmF lisk #-}
(module fibs
  (import system.environment)
  (:: main (io ()))
  (= main (>>= get-args (. print fib read head)))
  (:: test (-> :string (, :int :string)))
  (= test (, 1))
  (:: fib (-> :int :int))
  (= fib 0 0)
  (= fib 1 1)
  (= fib n (+ (fib (- n 1))
              (fib (- n 2)))))
The source is here.
Also, if you don't actually care about Haskell and just want some of its features, you might want to check out Qi (or its successor, Shen), which has s-expression syntax with many modern functional-programming features similar to Haskell.
You might be interested in a project I have been working on, husk scheme.
Basically it will let you call into Scheme code (S-expressions) from Haskell, and vice-versa. So you can intermingle that code within your program, and then process the s-expressions as native Haskell data types when you want to do something on the Haskell side.
Anyway, it may be useful to you, or not - have a look and decide for yourself.
In most genetic programming software, programs are represented as abstract syntax trees (AST) which are evaluated directly in that form. The Lisp S-expression syntax is only apparent when the programs are output as source code. Have you considered just modifying the output module in your chosen software to produce Haskell source code from the ASTs instead?
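If so, the output module can be quite small. Here's a rough sketch (mine; the three-node arithmetic AST and the layout are assumptions for illustration, not any particular GP package's format) of rendering an s-expression AST as Haskell-style infix source:
(define (ast->haskell e)
  ;; Numbers and variables print directly; binary operator nodes print infix.
  (cond ((number? e) (number->string e))
        ((symbol? e) (symbol->string e))
        (else (string-append "(" (ast->haskell (cadr e))
                             " " (symbol->string (car e))
                             " " (ast->haskell (caddr e)) ")"))))
> (ast->haskell '(+ x (* 2 y)))
=> "(x + (2 * y))"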
The obvious answer is "yes" -- unsurprising given that S-expressions were intended as a simple & uniform representation of parsed code. The thing is that languages like Haskell or ML tend to have some problems with that. I once did something similar for OCaml (abused CamlP4 and wrote some function that translates the P4 AST into some sexpr-like representation), and the fun begins when you run into similar kinds of AST nodes that have different types because they're not really the same... For example, there's function application, and there's a similar form that is used in patterns, and yet another form that is used in type expressions.
My guess is that trying to do genetic programming this way is likely to suffer from too many junk programs that don't have any meaning. But that's unsurprising too for any statically typed language -- a dynamically typed language will let more junk in. Comparing the two worlds WRT genetic programming might be interesting for reasons beyond the AI...
The Liskell paper is at http://clemens.endorphin.org/ILC07-Liskell-draft.pdf and the liskell.org site seems to still be up in general.

The most minimal LISP? [duplicate]

Possible Duplicate:
How many primitives does it take to build a LISP machine? Ten, seven or five?
I am curious. What is the most minimal LISP, upon which all further features could be built? Ignore efficiency -- the question comes simply from a place of elegance.
If you awoke on an alien planet and were instructed to build the most minimal LISP that you could later build upon to implement any feature you liked, what would you include?
Edit: Clarification. My intent here is not to start a debate, rather I am considering implementing a minimal LISP and I want to understand how minimal I can go while still allowing the language I implement to be Turing complete, etc. If this turns out to be controversial I am sure I will learn what I wanted to learn from observing the controversy. :). Thanks!
Courtesy of Paul Graham, there's a Common Lisp implementation of John McCarthy's original LISP. It assumes quote, atom, eq, cons, car, cdr, and cond, and defines null, and, not, append, list, pair, assoc, eval, evcon and evlis.
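To give the flavor (a minimal sketch in Scheme, not Graham's actual code), an evaluator built from roughly that primitive set might look like this:
(define (atom? x) (not (pair? x)))

(define (tiny-eval e env)
  ;; ENV is an association list mapping symbols to values.
  (cond ((atom? e) (cdr (assq e env)))            ; variable lookup
        ((eq? (car e) 'quote) (cadr e))
        ((eq? (car e) 'atom)  (atom? (tiny-eval (cadr e) env)))
        ((eq? (car e) 'eq)    (eq? (tiny-eval (cadr e) env)
                                   (tiny-eval (caddr e) env)))
        ((eq? (car e) 'car)   (car (tiny-eval (cadr e) env)))
        ((eq? (car e) 'cdr)   (cdr (tiny-eval (cadr e) env)))
        ((eq? (car e) 'cons)  (cons (tiny-eval (cadr e) env)
                                    (tiny-eval (caddr e) env)))
        ((eq? (car e) 'cond)  (evcon (cdr e) env))))

(define (evcon clauses env)
  ;; Evaluate the first clause whose test is true.
  (if (tiny-eval (caar clauses) env)
      (tiny-eval (cadar clauses) env)
      (evcon (cdr clauses) env)))
> (tiny-eval '(cons x (quote (b))) '((x . a)))
=> (a b)
The full exercise adds lambda and label (or a defun equivalent), which is where eval, evcon and evlis in Graham's version come in.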
Peter Norvig implemented a LISP interpreter in 90 lines of Python, and his subsequent article discusses in detail what you're asking. Well worth the read, IMHO.
http://norvig.com/lispy.html
I've got a feeling that his "sequencing" special form could be dropped if only function calls could take an arbitrary number of parameters.
(defun last-elem (lst)
  ;; Return the last element of LST (CL's built-in LAST returns the last cons).
  (if (cdr lst)
      (last-elem (cdr lst))
      (car lst)))

(defun begin (&rest y)
  (last-elem y))
But this would complicate function application.

Good information about type systems based on contracts/constraints?

Problem:
I am looking for a good introduction to type systems
based on contracts/constraints
(sorry, I don't remember which term is appropriate for such a type system).
I need that information to be able to implement an experimental type system of this kind.
As far as I know, such a type system is used in XSD (XML Schema Definition).
Instead of defining a data type, one defines constraints on the set of possible values.
Example:
Suppose I define a method with a parameter that is either "nothing" or matches the integral range [0..100].
Such a method would accept the following values:
"nothing"
0
1
...
100
I hope I could make myself clear.
You can have a look at languages like Haskell, or even Agda. Also, Oleg has lots of great resources.
Common Lisp offers such type testing at run time. It has an elaborate type system, but it's not used as you might be accustomed to in a statically-typed language. The macro check-type accepts a typespec, which can be a built-in specification or one defined by the macro deftype. The constraints expressible with typespecs are those of a predicate function written in the host language, which is to say that anything you can inspect at run time can be the criterion for what constitutes your new type.
Consider this example:
(defun is-nothing (val)
  (when (stringp val)
    (string= val "nothing")))

(deftype strange-range ()
  "A number between 0 and 100 inclusive, or the string \"nothing\"."
  '(or (integer 0 100)
       (satisfies is-nothing)))
That defines a type called "strange-range". Now test a few values against it:
CL-USER> (let ((n 0))
           (check-type n strange-range))
NIL
CL-USER> (let ((n 100))
           (check-type n strange-range))
NIL
CL-USER> (let ((n "nothing"))
           (check-type n strange-range))
NIL
CL-USER> (let ((n 101))
           (check-type n strange-range))
The last one triggers the debugger with the following message:
The value of N should be of type STRANGE-RANGE.
The value is: 101
[Condition of type SIMPLE-TYPE-ERROR]
This one provokes the same outcome:
CL-USER> (let ((n "something"))
(check-type n strange-range))
The constraints one can impose this way are expressive, but they don't serve the same purpose that the elaborate type systems of languages like Haskell or Scala do. While type definitions can coax the Common Lisp compiler to emit code more tailored to and efficient for the types of the operands, the examples above are more of a concise way to write run-time type checks.
This is not my area of expertise, so it might be off topic, but Microsoft Research has a project called Code Contracts, which "provide a language-agnostic way to express coding assumptions in .NET programs. The contracts take the form of preconditions, postconditions, and object invariants".
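Code Contracts is .NET-specific, but the precondition/postcondition idea itself is easy to sketch in any language. Here is a minimal Scheme illustration (with-contract and safe-sqrt are names I made up for this sketch, not part of Code Contracts):
(define (with-contract pre post f)
  ;; Wrap F so its argument must satisfy PRE and its result must satisfy POST.
  (lambda (x)
    (if (not (pre x)) (error "precondition failed" x))
    (let ((result (f x)))
      (if (not (post result)) (error "postcondition failed" result))
      result)))

(define safe-sqrt
  (with-contract (lambda (x) (and (real? x) (>= x 0)))  ; precondition
                 (lambda (r) (>= r 0))                  ; postcondition
                 sqrt))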

Can all iterative algorithms be expressed recursively?

Can all iterative algorithms be expressed recursively?
If not, is there a good counter example that shows an iterative algorithm for which there exists no recursive counterpart?
If it is the case that all iterative algorithms can be expressed recursively, are there cases in which this is more difficult to do?
Also, what role does the programming language play in all this? I can imagine that Scheme programmers have a different take on iteration (= tail-recursion) and stack usage than Java-only programmers.
There's a simple ad hoc proof of this: since you can build a Turing-complete language using strictly iterative structures, and another Turing-complete language using only recursive structures, the two are equivalent.
Can all iterative algorithms be expressed recursively?
Yes, but the proof is not interesting:
Transform the program with all its control flow into a single loop containing a single case statement in which each branch is straight-line control flow possibly including break, return, exit, raise, and so on. Introduce a new variable (call it the "program counter") which the case statement uses to decide which block to execute next.
This construction was discovered during the great "structured-programming wars" of the 1960s when people were arguing the relative expressive power of various control-flow constructs.
Replace the loop with a recursive function, and replace every mutable local variable with a parameter to that function. Voilà! Iteration replaced by recursion.
This procedure amounts to writing an interpreter for the original function. As you may imagine, it results in unreadable code, and it is not an interesting thing to do. However, some of the techniques can be useful for a person with a background in imperative programming who is learning to program in a functional language for the first time.
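For concreteness, here is a toy instance of the construction just described (my own sketch, in Scheme): a loop that sums the integers below n, rewritten as a single recursive function whose case statement dispatches on an explicit program counter, with the former mutable locals passed as parameters.
(define (sum-below n)
  ;; pc is the "program counter"; i and s were the loop's mutable locals.
  (define (step pc i s)
    (case pc
      ((test) (if (< i n) (step 'body i s) s))  ; while i < n ...
      ((body) (step 'test (+ i 1) (+ s i)))))   ; s := s + i; i := i + 1
  (step 'test 0 0))
> (sum-below 5)
=> 10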
Like you say, every iterative approach can be turned into a "recursive" one, and with tail calls, the stack will not explode either. :-) In fact, that's actually how Scheme implements all common forms of looping. Example in Scheme:
(define (fib n)
  (do ((x 0 y)
       (y 1 (+ x y))
       (i 1 (+ i 1)))
      ((> i n) x)))
Here, although the function looks iterative, it actually recurses on an internal lambda that takes three parameters, x, y, and i, and calls itself with new values at each iteration.
Here's one way that function could be macro-expanded:
(define (fib n)
  (letrec ((inner (lambda (x y i)
                    (if (> i n)
                        x
                        (inner y (+ x y) (+ i 1))))))
    (inner 0 1 1)))
This way, the recursive nature becomes more visually apparent.
Defining iterative as:
function q(vars):
    while X:
        do Y
Can be translated as:
function q(vars):
    if X:
        do Y
        call q(vars)
Y in most cases would include incrementing a counter that is tested by X. This variable will have to be passed along in 'vars' in some way when going the recursive route.
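As a concrete instance of that translation (an illustrative sketch of mine, in Scheme), here is a countdown loop with its counter threaded through the recursive call:
(define (countdown i)
  ;; X: the loop test; Y: the loop body; the counter travels as a parameter.
  (if (> i 0)
      (begin
        (display i)
        (newline)
        (countdown (- i 1)))))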
As noted by plinth in their answer, we can construct proofs showing that recursion and iteration are equivalent and can both be used to solve the same problems; however, even though we know the two are equivalent, there are drawbacks to using one over the other.
In languages that are not optimized for recursion you may find that an algorithm using iteration performs faster than the recursive one; likewise, even in optimized languages, an algorithm using iteration written in a different language may run faster than the recursive one. Furthermore, there may not be an obvious way of writing a given algorithm using recursion rather than iteration, or vice versa. This can lead to code that is difficult to read, which causes maintainability issues.
Prolog is a recursion-only language, and you can do pretty much everything in it (I don't suggest you do, but you can :))
Recursive solutions are usually relatively inefficient when compared to iterative solutions.
However, some problems are far more natural to express recursively. The classic example is the Ackermann function: it is not primitive recursive, so it cannot be written with simple bounded loops, though it can still be computed iteratively with an explicit stack. Recursive solutions are elegant and easy to write and understand.
