Haskell analog of Lisp backquoting and splicing

In some Lisps (e.g. Elisp, Common Lisp) there is a feature called backquoting.
It allows you to construct a list while evaluating or splicing some elements into it. For example:
`(1 2 (3 (+ 4 5)))
⇒ (1 2 (3 (+ 4 5))) ; just quoted unevaluated list
`(1 2 (3 ,(+ 4 5)))
⇒ (1 2 (3 9)) ; (+ 4 5) has been evaluated
`(1 2 ,@(list 3 (+ 4 5)))
⇒ (1 2 3 9) ; (3 9) has been spliced into the list
I guess in Haskell some subset of backquoting could look like this:
[backquote| 1, 2, #$(replicate 2 3), 2 + 2 |]
⇒ [1, 2, 3, 3, 4]
I wonder if splicing into a list like this is possible and whether it has been implemented.

It seems like the discussion in the comments kind of went off the rails. Anyway, I have a different take on this, so let me offer an answer.
I would say that Haskell already has a feature analogous to backquoting, and you've probably used it extensively in your own Haskell programming without realizing it.
You've drawn a parallel between Lisp lists and Haskell lists, but in Lisp, S-expressions (i.e., "pairs with atoms", especially with atomic symbols) are a flexible and ubiquitous data structure, used not only for representing Lisp code but also as the go-to representation for any complex, structured data. As such, most Lisp programs spend a lot of time generating and manipulating these structures, and so S-expression "literals" are common in Lisp code. And S-expression "almost literals", where a few sub-expressions need to be calculated, are more conveniently written with the backquoting mechanism than by building up the expression from smaller literal and evaluated pieces using functions like cons, list, append, etc.
Contrast that with Haskell -- Haskell lists are certainly popular in Haskell code, and are a go-to structure for representing homogeneous sequences, but they provide only a small fraction of the flexibility of S-expressions. Instead, the corresponding ubiquitous data structure in Haskell is the Algebraic Data Type (ADT).
Well, just like Lisp with its S-expressions, Haskell programs spend a lot of time generating and manipulating ADTs, and Haskell also has a convenient syntax for ADT literals and "almost literals". They are unified into a single "function application" syntax with literal and evaluated parts differentiated by the use of constructors (identifiers with an initial uppercase letter or infix operators starting with a colon) versus non-constructors (identifiers with an initial lowercase letter or infix operators without an initial colon). There is, of course, some additional syntax for certain constructors (lists and tuples).
For example, compare the following backquoted expressions in Lisp and Haskell:
;; Lisp
(setq baz `(node ,id
                 (node ,(+ id 1) ,left-tree leaf)
                 (node ,(+ id 2) leaf ,right-tree)))
-- Haskell
baz = Node id (Node (id + 1) left_tree Leaf) (Node (id + 2) Leaf right_tree)
In the Haskell version of this "almost literal", the Node and Leaf constructors represent the quoted parts; left_tree, right_tree, and the + infix expressions represent the evaluated parts, and they are syntactically distinguishable by the usual rules for constructors and non-constructors.
Of course, completely separate from this, there's a Template Haskell mechanism that directly manipulates snippets of Haskell code at compile time. While the code is represented as an ADT that could, in principle, be written using the same "almost literal" syntax used for other ADTs, the ADT in question is quite cumbersome and looks nothing like the underlying Haskell code. So, Template Haskell provides a more classic sort of backquoting syntax.
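To make that concrete, here is a minimal sketch of the classic compile-time power function (the module and function names are made up for illustration): [| ... |] quotes an expression, much like backquote, and $( ... ) splices a computed expression into it, much like comma/unquote.
{-# LANGUAGE TemplateHaskell #-}
module Power where

import Language.Haskell.TH (Q, Exp)

-- Build the expression \x -> x * x * ... * 1 (n multiplications) at compile time.
power :: Int -> Q Exp
power 0 = [| const 1 |]
power n = [| \x -> x * $(power (n - 1)) x |]

-- In another module (because of Template Haskell's stage restriction):
--   cube = $(power 3)
--   cube 2  ==>  8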

Related

Precedence of function application

In order to illustrate that function application has the highest precedence in Haskell, the following example was provided (by School of Haskell):
sq b = b * b
main = print $
    -- show
    sq 3+1
    -- /show
The result here is 10.
What puzzles me is that the argument constitutes a function application too. Consider the operator + as shorthand for a function. So when the argument is taken, I would expect its function application to take precedence over the original one.
Written that way it delivers the expected result:
sq b = b * b
main = print $
    sq ((+) 3 1)
Any explanation?
What puzzles me is that the argument constitutes a function application too. Consider the operator + as shorthand for a function.
I think this is the heart of the misunderstanding (actually, a very good understanding!) involved here. It is true that 3 + 1 is an expression denoting the application of the (+) function to 3 and 1; you have understood that correctly. However, Haskell has two kinds of function application syntax, prefix and infix. So the more precise version of "function application has the highest precedence" would be something like "syntactically prefix function application has higher precedence than any syntactically infix function application".
You can also convert back and forth between the two forms. Each function has a "natural" position: names with only symbols are naturally syntactically infix and names with letters and numbers and the like are naturally syntactically prefix. You can turn a naturally prefix function to infix with backticks, and a naturally infix function to prefix with parentheses. So, if we define
plus = (+)
then all of the following really mean the same thing:
3 + 1
3 `plus` 1
(+) 3 1
plus 3 1
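A quick check that the four forms really are interchangeable (a small sketch, restating the plus definition with a concrete type):
plus :: Int -> Int -> Int
plus = (+)

-- All four spellings of the same application evaluate to 4.
allEqual :: Bool
allEqual = all (== 4) [3 + 1, 3 `plus` 1, (+) 3 1, plus 3 1]   -- True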
Returning to your example:
sq 3+1
Because sq is naturally prefix, and + is naturally infix, the sq application takes precedence.
So when the argument is taken, I would expect its function application to take precedence over the original one.
The Haskell grammar [Haskell report] specifies:
exp10 → …
      | …
      | fexp
fexp  → [fexp] aexp    (function application)
This means that function application syntax has precedence level 10 (the 10 in exp10, which appears as a superscript in the report).
This means that, when your expression is parsed, the empty space in sq 3 takes precedence over the + in 3+1, and thus sq 3+1 is interpreted as (sq 3) + 1 which semantically means that it squares 3 first, and then adds 1 to the result, and will thus produce 10.
If you write it as sq (3 + 1) or in canonical form as sq ((+) 3 1) it will first sum up 3 and 1 and then determine the square which will produce 16.
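A runnable version of that comparison (restating sq from the question):
sq :: Int -> Int
sq b = b * b

main :: IO ()
main = do
  print (sq 3 + 1)    -- parsed as (sq 3) + 1, prints 10
  print (sq (3 + 1))  -- parentheses force the other grouping, prints 16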
The addition operator is syntactically different from a function application, and that is what determines its operator precedence.
If you rewrite your addition (3 + 1) as a function application ((+) 3 1), the prefix form (+) follows its own special rules inside its parentheses (the same mechanism as operator sections), but outside those parentheses it's just another parenthesized expression.
Note that your "expected result" is not really parallel to your original example:
sq 3 + 1 -- original example, parsed `(sq 3) + 1`
sq ((+) 3 1) -- "expected result", has added parentheses to force your parse
sq (3 + 1) -- this is the operator version of "expected result"
In Haskell, the parentheses are not part of function application -- they are used solely for grouping!
That is to say: just as (+) is just another parenthesized expression, so is (3 + 1).
I think your confusion is just the result of slightly imprecise language (on the part of both the OP and the School of Haskell page linked).
When we say things like "function application has higher precedence than any operator", the term "function application" there is not actually a phrase meaning "applying a function". It's a name for the specific syntactic form func arg (where you just write two terms next to each other in order to apply the first one to the second). We are trying to draw a distinction between "normal" prefix syntax for applying a function and the infix operator syntax for applying a function. So with this specific usage, sq 3 is "function application" and 3 + 1 is not. But this isn't claiming that sq is a function and + is not!
Whereas in many other contexts "function application" does just mean "applying a function"; there it isn't a single term, but just the ordinary meaning of the words "function" and "application" that happen to be used together. In that sense, both sq 3 and 3 + 1 are examples of "function application".
These two senses of the term arise because there are two different contexts we use when thinking about the code¹: logical and syntactic. Consider if we define:
add = (+)
In the "logical" view where we think about the idealised mathematical objects represented by our code, both add and (+) are simply functions (the same function in fact). They are even exactly the same object (we defined one by saying it was equal to the other). This underlying mathematical function exists independently of any name, and has exactly the same properties no matter how we choose to refer to it. In particular, the function can be applied (since that is basically the sole defining feature of a function).
But at the syntactic level, the language simply has different rules about how you can use the names add and + (regardless of what underlying objects those names refer to). One of these names is an operator, and the other is not. We have special syntactic rules for how you need to write the application of an operator, which differs from how you need to write the application of any non-operator term (including but not limited to non-operator names like sq²). So when talking about syntax we need to be able to draw a distinction between these two cases. But it's important to remember that this is a distinction about names and syntax, and has nothing to do with the underlying functions being referred to (proved by the fact that the exact same function can have many names in different parts of the program, some of them operators and some of them not).
There isn't really an agreed-upon common term for "any term that isn't an operator"; if there were, we would probably say "non-operator application has higher precedence than any operator", since that would be clearer. But for better or worse the "application by simple adjacency" syntactic feature is frequently referred to as "function application", even though that term could also mean other things in other contexts³.
So (un?)fortunately there isn't really anything deeper going on here than the phrase "function application" meaning different things in different contexts.
¹ Okay, there are way more than two contexts we might use to think about our code. There are two relevant to the point I'm making.
² For an example of other non-operator terms that can be applied, we can also have arbitrary expressions in parentheses. For example, (compare `on` fst) (1, ()) is the non-operator application of (compare `on` fst) to (1, ()); the expression being applied is itself the operator application of on to compare and fst.
³ For yet another usage, $ is often deemed to be the "function application operator"; this is perhaps ironic when considered alongside usages that are trying to use the phrase "function application" specifically to exclude operators!

Haskell: Get a prefix operator that works without parentheses

One big reason prefix operators are nice is that they can avoid the need for parentheses, so that + - 10 1 2 unambiguously means (10 - 1) + 2. The infix expression becomes ambiguous if parens are dropped; you can get around that with certain precedence rules, but that's messy, blah, blah, blah.
I'd like to make Haskell use prefix operations but the only way I've seen to do that sort of trades away the gains made by getting rid of parentheses.
Prelude> (-) 10 1
uses two parens.
It gets even worse when you try to compose functions because
Prelude> (+) (-) 10 1 2
yields an error, I believe because it's trying to feed the minus operation into the plus operation rather than first evaluating the minus and then feeding--so now you need even more parens!
Is there a way to make Haskell intelligently evaluate prefix notation? I think if I made functions like
Prelude> let p x y = x+y
Prelude> let m x y = x-y
I would recover the initial gains on fewer parens but function composition would still be a problem. If there's a clever way to join this with $ notation to make it behave at least close to how I want, I'm not seeing it. If there's a totally different strategy available I'd appreciate hearing it.
I tried reproducing what the accepted answer did here:
Haskell: get rid of parentheses in liftM2
but in both a Prelude console and a Haskell script, the import command didn't work. And besides, this is more advanced Haskell than I'm able to understand, so I was hoping there might be some other simpler solution anyway before I do the heavy lifting to investigate whatever this is doing.
It gets even worse when you try to compose functions because
Prelude> (+) (-) 10 1 2
yields an error, I believe because it's trying to feed the minus operation into the plus operation rather than first evaluating the minus and then feeding--so now you need even more parens!
Here you raise exactly the key issue that's a blocker for getting what you want in Haskell.
The prefix notation you're talking about is unambiguous for basic arithmetic operations (more generally, for any set of functions of statically known arity). But you have to know that + and - each accept 2 arguments for + - 10 1 2 to be unambiguously resolved as +(-(10, 1), 2) (where I've used explicit argument lists to denote every call).
But ignoring the specific meaning of + and -, the first function taking the second function as an argument is a perfectly reasonable interpretation! In Haskell, unlike plain arithmetic, we need to support higher-order functions like map. You would want not not x to turn into not(not(x)), but map not x to turn into map(not, x).
And what if I had f g x? How is that supposed to work? Do I need to know what f and g are bound to so that I know whether it's a case like not not x or a case like map not x, just to know how to parse the call structure? Even assuming I have all the code available to inspect, how am I supposed to figure out what things are bound to if I can't know what the call structure of any expression is?
You'd end up needing to invent disambiguation syntax like map (not) x, wrapping not in parentheses to disable its ability to act like an arity-1 function (much like Haskell's actual syntax lets you wrap operators in parentheses to disable their ability to act like an infix operator). Or use the fact that all Haskell functions are arity-1, but then you have to write (map not) x and your arithmetic example has to look like (+ ((- 10) 1)) 2. Back to the parentheses!
The truth is that the prefix notation you're proposing isn't unambiguous. Haskell's normal function syntax (without operators) is; the rule is you always interpret a sequence of terms like foo bar baz qux etc as ((((foo) bar) baz) qux) etc (where each of foo, bar, etc can be an identifier or a sub-term in parentheses). You use parentheses not to disambiguate that rule, but to group terms to impose a different call structure than that hard rule would give you.
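A tiny sketch making that left-associative grouping explicit for the arithmetic example from the question:
-- (+) ((-) 10 1) 2 is shorthand for the fully grouped form below:
-- each juxtaposition applies one function to exactly one argument.
result :: Int
result = ((+) (((-) 10) 1)) 2   -- (10 - 1) + 2 == 11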
Infix operators do complicate that rule, and they are ambiguous without knowing something about the operators involved (their precedence and associativity, which unlike arity is associated with the name not the actual value referred to). Those complications were added to help make the code easier to understand; particularly for the arithmetic conventions most programmers are already familiar with (that + is lower precedence than *, etc).
If you don't like the additional burden of having to memorise the precedence and associativity of operators (not an unreasonable position), you are free to use a notation that is unambiguous without needing precedence rules, but it has to be Haskell's prefix notation, not Polish prefix notation. And whatever syntactic convention you're using, in any language, you'll always have to use something like parentheses to indicate grouping where the call structure you need is different from what the standard convention would indicate. So:
(+) ((-) 10 1) 2
Or:
plus (minus 10 1) 2
if you define non-operator function names.
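For instance (plus and minus here are ad-hoc names defined for the example, not Prelude functions):
plus, minus :: Int -> Int -> Int
plus  = (+)
minus = (-)

example :: Int
example = plus (minus 10 1) 2   -- (10 - 1) + 2 == 11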

Why was function application chosen as default Haskell operator, not composition?

Haskell syntax requires relatively noisy f . g $ 3 compared to 3 g f as in stack-oriented languages. What were main design arguments for this choice?
That could also be written f (g 3).
Why is Haskell not a concatenative language?
Based on A History of Haskell, it was influenced by a variety of functional programming and lazy language experiments, including ML. As section 4, Syntax describes:
Currying
Following a tradition going back to Frege, a function of two arguments may be represented as a function of one argument that itself returns a function of one argument. This tradition was honed by Moses Schönfinkel and Haskell Curry and came to be called currying. Function application is denoted by juxtaposition and associates to the left. Thus, f x y is parsed (f x) y. This leads to concise and powerful code. For example, to square each number in a list we write map square [1,2,3], while to square each number in a list of lists we write map (map square) [[1,2],[3]]. Haskell, like many other languages based on lambda calculus, supports both curried and uncurried definitions…
The concept of currying is so central to Haskell's semantics and the lambda calculus at its core that any other method of arrangement would interact poorly with the language.
The stack-oriented style doesn't so much compose as sequence functions; 3 g f in such a language is rather f $ g $ 3 in Haskell. Of course, that's equivalent to f . g $ 3, but it only works as long as you immediately apply the composition to some value. In Haskell, you very often compose functions just to hand them to some higher-order combinator, or to make a point-free definition. In a stack-oriented language that requires some sort of explicit block; in Haskell it requires just the . operator.
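For example, a small sketch of both situations, with made-up function names:
import Data.Char (toUpper)

-- Composing just to make a point-free definition, with no argument in sight:
countLongWords :: String -> Int
countLongWords = length . filter ((> 6) . length) . words

-- Handing a composition straight to a higher-order combinator:
shout :: [String] -> [String]
shout = map ((++ "!") . map toUpper)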
Usually, you don't just chain "atomic" functions. Certainly you don't deal with globally-named single-letter functions, so the tiny . or $ doesn't really make a dramatic difference verbosity-wise. And very often, as rmmh said, you chain partially applied functions, e.g.
main = interact $ unlines . take 10 . filter ((>20) . length) . lines
That's much more cumbersome without cheap tight-binding application. Also, it's very natural to have the separating . to mark what's not immediately applied but just composed.
If you're interested in the history of Haskell, Hudak, Hughes, Peyton Jones & Wadler's "A History of Haskell: Being Lazy with Class" is the best-known paper on this topic, and well-worth reading.
It doesn't address your question directly, but it does point out one very relevant fact: Haskell was created as a unifying compromise between a bunch of existing languages from small teams. Quoting section 2.2 ("A tower of Babel"):
As a result of all this activity, by the mid-1980s there were a number of researchers, including the authors, who were keenly interested in both design and implementation techniques for pure, lazy languages. In fact, many of us had independently designed our own lazy languages and were busily building our own implementations for them. We were each writing papers about our efforts, in which we first had to describe our languages before we could describe our implementation techniques. Languages that contributed to this lazy Tower of Babel include:
Miranda […]
Lazy ML (LML) […]
Orwell […]
Alfl […]
Id […]
Clean […]
Ponder […]
Daisy […]
So the answer may simply be that Haskell copied this from its predecessor languages. And since a bunch of these languages were in turn based on or inspired by Lisp and ML, they may analogously have copied it from them. So, to return to your question:
What were main design arguments for this choice?
Chances are that there was never a sustained argument for the choice. Very few high-level languages have gone for the stack-based design, in any case, and few people know them.
My guess would be lambda calculus and usefulness (in real world scenarios).
In the lambda calculus, juxtaposition (a space) is application, so this choice feels familiar to people who know it.
In most commonly used languages, the usual thing to do with a function is to apply it; Haskell is not a stack-based language, so application rather than composition was the natural default.

Difference between logic programming and functional programming

I have been reading many articles trying to understand the difference between functional and logic programming, but the only deduction I have been able to make so far is that logic programming defines programs through mathematical expressions. But such a thing is not associated with logic programming.
I would really appreciate some light being shed on the difference between functional and logic programming.
I wouldn't say that logic programming defines programs through mathematical expressions; that sounds more like functional programming. Logic programming uses logic expressions (well, eventually logic is math).
In my opinion, the major difference between functional and logic programming is the "building blocks": functional programming uses functions while logic programming uses predicates. A predicate is not a function; it does not have a return value. Depending on the value of its arguments it may be true or false; if some values are undefined, it will try to find the values that would make the predicate true.
Prolog in particular uses a special form of logic clause named Horn clauses, which belong to first-order logic; HiLog uses clauses of higher-order logic.
When you write a Prolog predicate you are defining a Horn clause:
foo :- bar1, bar2, bar3. means that foo is true if bar1, bar2, and bar3 are all true.
Note that I did not say if and only if; you can have multiple clauses for one predicate:
foo :-
    bar1.
foo :-
    bar2.
means that foo is true if bar1 is true or if bar2 is true.
Some say that logic programming is a superset of functional programming since each function could be expressed as a predicate:
foo(x,y) -> x+y.
could be written as
foo(X, Y, ReturnValue) :-
    ReturnValue is X+Y.
but I think that such statements are a bit misleading.
Another difference between logic and functional programming is backtracking. In functional programming, once you enter the body of the function you cannot fail and move on to the next definition. For example, you can write
abs(x) ->
    if x>0 x else -x
or even use guards:
abs(x) x>0 -> x;
abs(x) x=<0 -> -x.
but you cannot write
abs(x) ->
    x>0,
    x;
abs(x) ->
    -x.
On the other hand, in Prolog you could write
abs(X, R) :-
    X > 0,
    R is X.
abs(X, R) :-
    R is -X.
If you then call abs(-3, R), Prolog will try the first clause and fail when execution reaches the -3 > 0 test, but you won't get an error; Prolog will then try the second clause and return R = 3.
I do not think that it is impossible for a functional language to implement something similar (but I haven't used such a language).
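For what it's worth, here is a rough Haskell sketch of that clause-by-clause fallback, emulated with Maybe and the Alternative operator <|> (the name absLP is made up); it is an emulation rather than built-in backtracking:
import Control.Applicative ((<|>))
import Control.Monad (guard)

-- The first "clause" requires x > 0; if its guard fails we fall through
-- to the second "clause", mimicking Prolog's try-the-next-clause behaviour.
absLP :: Int -> Maybe Int
absLP x = clause1 <|> clause2
  where
    clause1 = do guard (x > 0); pure x
    clause2 = pure (negate x)

-- absLP (-3) == Just 3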
All in all, although both paradigms are considered declarative, they are quite different; so different that comparing them feels like comparing functional and imperative styles. I would suggest trying a bit of logic programming; it should be a mind-boggling experience. However, you should try to understand the philosophy and not simply write programs; Prolog allows you to write in a functional or even imperative style (with monstrous results).
In a nutshell:
In functional programming, your program is a set of function definitions. The return value for each function is evaluated as a mathematical expression, possibly making use of passed arguments and other defined functions. For example, you can define a factorial function, which returns a factorial of a given number:
factorial 0 = 1                      -- the factorial of 0 is 1
factorial n = n * factorial (n - 1)  -- the factorial of n is n times the factorial of n - 1
In logic programming, your program is a set of predicates. Predicates are usually defined as sets of clauses, where each clause can be defined using mathematical expressions, other defined predicates, and propositional calculus. For example, you can define a factorial predicate, which holds whenever the second argument is the factorial of the first:
factorial(0, 1).       % it is true that the factorial of 0 is 1
factorial(X, Y) :-     % it is true that the factorial of X is Y, when all of the following are true:
    X1 is X - 1,       %   there is an X1 equal to X - 1,
    factorial(X1, Z),  %   and it is true that the factorial of X1 is Z,
    Y is Z * X.        %   and Y is Z * X.
Both styles allow using mathematical expressions in the programs.
First, there are a lot of commonalities between functional and logic programming. That is, a lot of notions developed in one community can also be used in the other. Both paradigms started with rather crude implementations and strive towards purity.
But you want to know the differences.
So I will take Haskell on the one side and Prolog with constraints on the other. Practically all current Prolog systems offer constraints of some sort, like B, Ciao, ECLiPSe, GNU, IF, Scryer, SICStus, SWI, YAP, XSB. For the sake of the argument, I will use a very simple constraint dif/2 meaning inequality, which was present even in the very first Prolog implementation - so I will not use anything more advanced than that.
What functional programming is lacking
The most fundamental difference revolves around the notion of a variable. In functional programming, a variable denotes a concrete value. This value need not be entirely defined, but only those parts that are defined can be used in computations. Consider in Haskell:
> let v = iterate (tail) [1..3]
> v
[[1,2,3],[2,3],[3],[],*** Exception: Prelude.tail: empty list
After the 4th element, the value is undefined. Nevertheless, you can use the first 4 elements safely:
> take 4 v
[[1,2,3],[2,3],[3],[]]
Note that the syntax of functional programs is cleverly restricted so that a variable is never left undefined.
In logic programming, a variable does not need to refer to a concrete value. So, if we want a list of 3 elements, we might say:
?- length(Xs,3).
Xs = [_A,_B,_C].
In this answer, the elements of the list are variables. All possible instances of these variables are valid solutions, like Xs = [1,2,3]. Now, let's say that the first element should be different from the remaining elements:
?- length(Xs,3), Xs = [X|Ys], maplist(dif(X), Ys).
Xs = [X,_A,_B], Ys = [_A,_B], dif(X,_B), dif(X,_A).
Later on, we might demand that the elements in Xs are all equal. Before I write it out, I will try it alone:
?- maplist(=(_),Xs).
Xs = []
; Xs = [_A]
; Xs = [_A,_A]
; Xs = [_A,_A,_A]
; Xs = [_A,_A,_A,_A]
; ... .
See how the answers always contain the same variable? Now I can combine both queries:
?- length(Xs,3), Xs = [X|Ys], maplist(dif(X), Ys), maplist(=(_),Xs).
false.
So what we have shown here is that there is no 3-element list whose first element is different from the other elements and whose elements are all equal.
This generality has permitted the development of several constraint languages, which are offered as libraries for Prolog systems; the most prominent are CLP(FD) and CHR.
There is no straightforward way to get similar functionality in functional programming. You can emulate things, but the emulation isn't quite the same.
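To illustrate the gap, here is a sketch of the combined query above using Haskell's list monad; it only works by brute-forcing a finite candidate domain (0..2 here, an arbitrary assumption), whereas the Prolog version needs no such restriction:
import Control.Monad (guard)

-- Three "variables", each drawn from a finite domain; the head must differ
-- from every other element and all elements must be equal.
-- As in the Prolog query, there are no solutions.
solutions :: [[Int]]
solutions = do
  xs@(x:ys) <- mapM (const [0 .. 2]) [1 .. 3 :: Int]
  guard (all (/= x) ys)   -- maplist(dif(X), Ys)
  guard (all (== x) xs)   -- maplist(=(_), Xs)
  pure xs

-- solutions == []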
What logic programming is lacking
But there are many things that are lacking in logic programming that make functional programming so interesting. In particular:
Higher-order programming: Functional programming has a very long tradition here and has developed a rich set of idioms. For Prolog, the first proposals date back to the early 1980s, but it is still not very common. At least ISO Prolog now has the homologue of apply, called call/2, call/3, ...
Lambdas: Again, it is possible to extend logic programming in that direction; the most prominent system is Lambda Prolog. More recently, lambdas have also been developed for ISO Prolog.
Type systems: There have been attempts, like Mercury, but it has not caught on that much. And there is no system with functionality comparable to type classes.
Purity: Haskell is entirely pure; a function Integer -> Integer is a function, with no fine print lurking around. And still you can perform side effects. Comparable approaches on the logic-programming side are evolving very slowly.
There are many areas where functional and logic programming more or less overlap: for example, backtracking and laziness; list comprehensions; lazy evaluation and freeze/2, when/2, block; DCGs and monads. I will leave discussing these issues to others...
Logic programming and functional programming use different "metaphors" for computation. This often affects how you think about producing a solution, and sometimes means that different algorithms come naturally to a functional programmer than a logic programmer.
Both are based on mathematical foundations that provide more benefits for "pure" code; code that doesn't operate with side effects. There are languages for both paradigms that enforce purity, as well as languages that allow unconstrained side effects, but culturally the programmers for such languages tend to still value purity.
I'm going to consider append, a fairly basic operation in both logical and functional programming, for appending a list on to the end of another list.
In functional programming, we might consider append to be something like this:
append [] ys = ys
append (x:xs) ys = x : append xs ys
While in logic programming, we might consider append to be something like this:
append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).
These implement the same algorithm, and even work basically the same way, but they "mean" something very different.
The functional append defines the list that results from appending ys onto the end of xs. We think of append as a function from two lists to another list, and the runtime system is designed to calculate the result of the function when we invoke it on two lists.
The logical append defines a relationship between three lists, which is true if the third list is the elements of the first list followed by the elements of the second list. We think of append as a predicate that is either true or false for any 3 given lists, and the runtime system is designed to find values that will make this predicate true when we invoke it with some arguments bound to specific lists and some left unbound.
The thing that makes the logical append different is that you can use it to compute the list that results from appending one list onto another, but you can also use it to compute the list you'd need to append onto the end of another to get a third list (or whether no such list exists), or to compute the list to which you need to append another to get a third list, or to give you two possible lists that can be appended together to get a given third (and to explore all possible ways of doing this).
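In Haskell you would typically recover that "run it backwards" behaviour by writing a separate enumeration function; for example, a sketch (splits is a made-up name):
-- All pairs of lists that append back to the given list, i.e. the
-- logical append read with the third argument known and the first two unknown.
splits :: [a] -> [([a], [a])]
splits []       = [([], [])]
splits (x : xs) = ([], x : xs) : [ (x : as, bs) | (as, bs) <- splits xs ]

-- splits [1,2,3]
--   == [([],[1,2,3]),([1],[2,3]),([1,2],[3]),([1,2,3],[])]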
While equivalent in that you can do anything you can do in one in the other, they lead to different ways of thinking about your programming task. To implement something in functional programming, you think about how to produce your result from the results of other function calls (which you may also have to implement). To implement something in logic programming, you think about what relationships between your arguments (some of which are input and some of which are output, and not necessarily the same ones from call to call) will imply the desired relationship.
Prolog, being a logic language, gives you backtracking for free, and it's pretty noticeable.
To elaborate (and I should note that I'm in no way an expert in either paradigm), it looks to me like logic programming is way better when it comes to solving things, because that's precisely what the language does (which shows clearly when backtracking is needed, for example).
I think the difference is this:
imperative programming = modelling actions
functional programming = modelling reasoning
logic programming      = modelling knowledge
choose what fits your mind best
functional programming:
when 6PM, light on.
logic programming:
when dark, light on.

Can Haskell programs be represented as Lisp S-expressions?

This would be useful for genetic programming, which usually uses a Lisp subset as the representation for programs.
I've found something called Liskell (Lisp syntax, Haskell inside) on the web, but the links are broken and I can't find the paper on it...
Check out Lisk, which was designed to fix the author's gripes with Liskell.
In my spare time I’m working on a project called Lisk. Using the -pgmF option for GHC, you can provide GHC with a program name that is called to preprocess the file before GHC compiles it. It also works in GHCi and with imports. You use it like this:
{-# OPTIONS -F -pgmF lisk #-}
(module fibs
  (import system.environment)

  (:: main (io ()))
  (= main (>>= get-args (. print fib read head)))

  (:: test (-> :string (, :int :string)))
  (= test (, 1))

  (:: fib (-> :int :int))
  (= fib 0 0)
  (= fib 1 1)
  (= fib n (+ (fib (- n 1))
              (fib (- n 2)))))
The source is here.
Also, if you don't actually care about Haskell and just want some of its features, you might want to check out Qi (or its successor, Shen), which has s-expression syntax with many modern functional-programming features similar to Haskell.
You might be interested in a project I have been working on, husk scheme.
Basically it will let you call into Scheme code (S-expressions) from Haskell, and vice-versa. So you can intermingle that code within your program, and then process the s-expressions as native Haskell data types when you want to do something on the Haskell side.
Anyway, it may be useful to you, or not - have a look and decide for yourself.
In most genetic programming software, programs are represented as abstract syntax trees (ASTs), which are evaluated directly in that form. The Lisp S-expression syntax is only apparent when the programs are output as source code. Have you considered just modifying the output module in your chosen software to produce Haskell source code from the ASTs instead?
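If you go that route, the change can be quite small. Here is a minimal sketch assuming a toy expression AST; the type and names are hypothetical, not taken from any particular GP package:
-- A toy GP expression tree and a printer that emits Haskell source,
-- using prefix application throughout so operator precedence never matters.
data Expr
  = Lit Double
  | Var String
  | App String [Expr]   -- a function or operator applied to arguments

toHaskell :: Expr -> String
toHaskell (Lit n)      = show n
toHaskell (Var v)      = v
toHaskell (App f args) = "(" ++ unwords (prefix f : map toHaskell args) ++ ")"
  where
    prefix name
      | all (`elem` "+-*/<>=") name = "(" ++ name ++ ")"  -- write operators in prefix form
      | otherwise                   = name

-- toHaskell (App "+" [Lit 1, App "sin" [Var "x"]])
--   == "((+) 1.0 (sin x))"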
The obvious answer is "yes" -- unsurprising given that S-expressions were intended as a simple and uniform representation of parsed code. The thing is that languages like Haskell or ML tend to have some problems with that. I once did something similar for OCaml (abused Camlp4 and wrote a function that translates the P4 AST into a sexpr-like representation), and the fun begins when you run into similar kinds of AST nodes that have different types because they're not really the same... For example, there's function application, there's a similar form that is used in patterns, and yet another form that is used in type expressions.
My guess is that trying to do genetic programming this way is likely to suffer from too many junk programs that don't have any meaning. But that's unsurprising too for any statically typed language -- a dynamically typed language will let more junk in. Comparing the two worlds with respect to genetic programming might be interesting for reasons beyond the AI...
The Liskell paper is at http://clemens.endorphin.org/ILC07-Liskell-draft.pdf and the liskell.org site seems to still be up in general.

Resources