A wisp of Lisp

Lisp is a funny-looking language, with a scary deluge of parentheses. Some of you might know it as Scheme (they're different, but not very). I loved it in my college days: the Lisp questions were all like puzzles, and all you had to do to solve them was keep recursion in mind (efficiency can go to hell). Most importantly, no one else in my class loved it, or so I think - many could barely stand it. I remember a particular class test because it was probably the only one I actually enjoyed writing: it had ten questions, each of which required you to write a small Lisp program that did arcane stuff like computing x to the power y or inserting an element into a list.

Now that's a surprise, since I abhorred (and still abhor) the idea of "writing" a program on paper, however trivial the problem. If I had to write a program to do the same arcane stuff in C (or insert your favourite language here), I wouldn't have been comfortable vouching for something I "wrote" straight on paper. But with Lisp, all you had to do was write the equivalent of a mathematical expression that describes the functionality, and you're done. Talk about succinctness. In fact, unless it's a lab test, I think "Write a program to ..." questions are unfair in any other computer language - too tedious for the examinee to solve perfectly, and too tedious for the examiner to verify without doubt.
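For instance, the x-to-the-power-y question from that test comes out to a handful of lines. Here's a from-memory sketch in present-day Common Lisp - not the exact thing we scribbled on paper, the name power is just mine, and there's no claim to efficiency:

    ;; x raised to the power y, for a non-negative integer y.
    ;; The definition reads like the recurrence it came from:
    ;; x^0 = 1, and x^y = x * x^(y-1).
    (defun power (x y)
      (if (zerop y)
          1
          (* x (power x (- y 1)))))

    ;; (power 2 10) => 1024

That's the whole program - the mathematical definition and the code are practically the same thing.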

I read up on the background behind why Lisp is so like math only last weekend, prompted by Penrose's The Emperor's New Mind. The history starts with Hilbert's decision problem, put forth in its final form in 1928. Hilbert asked for a general algorithm that could take in an algebraic conjecture (like, say: it's impossible to find integer values x, y, a, b, all greater than 1, such that (x+3)^a - y^b = 1) and say whether the conjecture is true or false. While solutions exist for many individual problems of this nature, there are many conjectures that remain unsolved. So asking for a general algorithm is a tall order, and as it turned out, it was proved impossible to find one, using completely different methods, by three different people: Gödel, Turing and Church. Lisp borrows heavily from the methods that the last two invented for their proofs of impossibility.

Turing's approach (published in 1936) was to imagine a machine to work on each conjecture - much like how one would write a program that plugs all sorts of values into x, y, a and b to see if (x+3)^a - y^b comes to 1. So you could have one Turing machine (say TC) that chips away at this (x+3)^a - y^b business of Catalan's conjecture, another (say TG) that works on Goldbach's conjecture, and so on. If TC comes to a stop sometime, we will have a solution to (x+3)^a - y^b = 1, and will have proved Catalan wrong. If it's gonna run forever, well, then we'll know Catalan was right after all.
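If TC feels too abstract, here's roughly what such a machine would be doing, written as a toy Lisp search. This is a hypothetical sketch with a made-up name (catalan-hunt), and I've put a bound n on it so it finishes - the real TC has no bound and keeps going:

    ;; A toy stand-in for TC: hunt for a counterexample to
    ;; (x+3)^a - y^b = 1 with x, y, a, b all greater than 1.
    ;; The real TC searches without a bound -- it halts only if a solution exists.
    (defun catalan-hunt (n)
      (loop for x from 2 to n do
        (loop for y from 2 to n do
          (loop for a from 2 to n do
            (loop for b from 2 to n do
              (when (= 1 (- (expt (+ x 3) a) (expt y b)))
                (return-from catalan-hunt (list x y a b))))))))

    ;; (catalan-hunt 20) => NIL, just as Catalan would have predicted.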

Now, if a machine can take on conjectures like this using a standard set of instructions, those instructions themselves can surely be passed as input to a Turing machine. We can imagine Turing machines that take in the "software" of particular Turing machines as input and make use of it. The most straightforward of these would be the Universal Turing machine (TU). If you pass TU the "software" of TC, TU would then act like TC until it comes to an end (if it ever does). This is not unlike the Perl interpreter, which can run your TG program, your TC program, or whatever program you feed it.

Turing used this idea - machines that take other machines' instructions as input - to reinterpret Hilbert's original problem. He reduced the decision problem to finding a Turing machine that could take in the "software" of any particular Turing machine and say (in finite time) whether that Turing machine will complete in finite time or not. (If you're imagining a code analyzer for Perl programs, note that it's not enough to check for obvious traps like infinite loops - TC might never complete even if it's programmed perfectly.)
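Here's a rough sketch of why no such machine can exist, transplanted into hypothetical Lisp. The names halts-p and contrary are ones I'm making up, and nothing like halts-p exists in any real Lisp - that's the whole point:

    ;; Pretend somebody ships halts-p: hand it a program (as a list) and an
    ;; input, and it answers T or NIL in finite time. Nobody can write it;
    ;; this placeholder exists only so the sketch below loads.
    (defun halts-p (program input)
      (declare (ignore program input))
      (error "left as an impossibility for the reader"))

    ;; Then this spoiler does the opposite of whatever halts-p predicts:
    (defun contrary (program)
      (if (halts-p program program)
          (loop)   ; predicted to halt? loop forever instead
          t))      ; predicted to loop? halt immediately

    ;; Now feed contrary to itself: (contrary 'contrary). Whichever answer
    ;; halts-p gives, contrary does the opposite, so halts-p was wrong --
    ;; which means no correct halts-p can ever be written.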

Now, Lisp (short for LISt Processor) is a mechanism for dealing with lists. A list is a simple and scalable representation of data, especially if it can accommodate other lists as members. If you use parentheses to (enclose lists and (lists of lists) like this), no doubt you end up with a hell of a lot of parentheses in Lisp programs. But this also offers a way of thinking of programs as lists - the first element in the list is an operator, like + or append, that acts on the other elements of the list, as in (+ 41 1).
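A few throwaway examples of what that looks like, typed at any Common Lisp prompt:

    (+ 41 1)                    ; a program: evaluates to 42
    '(+ 41 1)                   ; the same text, quoted: just a three-element list
    (append '(1 2) '(3 (4 5)))  ; lists holding lists: (1 2 3 (4 5))
    (+ (* 6 7) (- 10 10))       ; the first element is always the operator: 42

The only difference between the first two lines is the quote - a program is just a list that you choose to evaluate.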

A list processor should have some basic list operators (like get_first_element and get_length and so on). We can map these operators to individual Turing machines. And just as there are Turing machines that take other machines as input, we can very well envisage a Universal-Turing-machine-like Lisp operator that takes in a program (represented as a list) and produces the result of that program. Given the basic list processing infrastructure that was Lisp in 1958, it was surprisingly easy to formulate an eval function, written using that same infrastructure, that could act as the Lisp equivalent of the Universal Turing machine. Well, if you actually think about it, this Lisped TU is actually a Lisp interpreter, since it can take in a Lisp program and execute it. And, lo!, you have a new working programming language.
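To give a flavour of how small that eval can be, here is a much-shrunk sketch in present-day Common Lisp. It's my own toy, with made-up names (tiny-eval, tiny-apply), nowhere near McCarthy's actual 1960 definition, and it only knows quote, if, lambda and a handful of built-in operators:

    ;; Environments are plain association lists of (symbol . value).
    (defun tiny-eval (expr env)
      (cond
        ;; symbols look themselves up; unbound ones stand for built-in operators
        ((symbolp expr) (let ((binding (assoc expr env)))
                          (if binding (cdr binding) expr)))
        ;; numbers and other atoms evaluate to themselves
        ((atom expr) expr)
        ;; (quote x) => x
        ((eq (first expr) 'quote) (second expr))
        ;; (if test then else)
        ((eq (first expr) 'if)
         (if (tiny-eval (second expr) env)
             (tiny-eval (third expr) env)
             (tiny-eval (fourth expr) env)))
        ;; a lambda expression evaluates to itself, carried around as a list
        ((eq (first expr) 'lambda) expr)
        ;; anything else: evaluate the pieces and apply the first to the rest
        (t (tiny-apply (tiny-eval (first expr) env)
                       (mapcar (lambda (a) (tiny-eval a env)) (rest expr))
                       env))))

    (defun tiny-apply (fn args env)
      (cond
        ;; a handful of built-in operators
        ((eq fn 'car)  (car (first args)))
        ((eq fn 'cdr)  (cdr (first args)))
        ((eq fn 'cons) (cons (first args) (second args)))
        ((eq fn 'atom) (atom (first args)))
        ((eq fn 'eq)   (eq (first args) (second args)))
        ((eq fn '+)    (apply #'+ args))
        ;; a lambda: bind its parameters to the arguments and evaluate its body
        ((and (consp fn) (eq (first fn) 'lambda))
         (tiny-eval (third fn)
                    (append (pairlis (second fn) args) env)))))

    ;; (tiny-eval '(+ 41 1) nil)                          => 42
    ;; (tiny-eval '((lambda (x) (cons x '(b c))) 'a) nil) => (A B C)

That's more or less the whole trick: eval is itself just another list-chewing function, but hand it a program-as-a-list and it behaves like TU behaving like that program.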

And that's why it's so much like math. And it's probably because of its likeness to abstract mathematics that it's still the language whose programs can be simultaneously readable and succinct. Let's see if Paul Graham, its all-time fanatic, can actually give it a new lease of life.

4 comments:

  1. Interesting! But I am not sure that Lisp was inspired by Turing Machines as much as it was by Lambda calculus. And that's why it's so similar to maths, with its "functions" etc.

  2. Hey da. Actually Penrose is of your opinion - he says it was Church's Lambda Calculus that inspired Lisp. But in my digging around, I didn't find a case for that (except for the lambda feature in LISP of creating unnamed functions, which is obviously a direct lift-off). I did find this, which implies that McCarthy, when he was thinking about Lisp's eval, was trying to create a simpler formulation of Turing Machines and to show "that it is shorter and more understandable than the description using a universal Turing-machine". Maybe I don't find the link with Church's proof because I didn't really see where he was going with his number system of functions, definitely not as much as I did with Turing's proof (though it's called the Turing-Church Theorem). Yes, functions are the thing in Lisp, but they're as old as f(x) = y, na?

  3. I was only commenting on the "why Lisp is so like Math" part of your essay: I think comparing Lisp to Lambda calculus makes it easier to understand why it is like math. Both have restricted constructs: variables, functions and evaluations. Since these constructs are mathematical entities, you can theoretically reduce a Lisp program to mathematical symbols and arrive at a proof.

    Limited constructs are always an advantage in programming languages. For example, Smalltalk has only objects - no functors, no constants, etc. Though it is a handy programming language, it is far away from proofs and maths because its constructs cannot be translated to mathematical symbols. On the other hand, a Prolog program will lend itself to proof because all its constructs belong to a logical calculus.

    But your idea of Lisp and Turing machines is interesting. Douglas Hofstadter has a chapter imagining the Lisp interpreter as a machine. I didn't get past that one. Maybe I should try Penrose again :-)

  4. You're right. I think practically any defun can be rewritten in Lambda Calculus with very little change. Lisp's a lot closer to Church's calculus than to Turing's.

    But what I really intended to say was: I think Lisp's mathliness is really because its interpreter was first written down as a concept for academic publication (as a simpler model than the Turing machine), rather than for programming. Though the language was planned as an AI language, the design of its interpreter (and how that shaped the language) has its roots in math, really.
