Definitions and Philosophical Foundations of AI

I have some ideas to discuss but first we need to define some terms. Let's start with a dictionary definition of intelligence: the ability to acquire and apply knowledge and skills. Now let's consider the dictionary definition of artificial: made or produced by human beings rather than occurring naturally, typically as a copy of something natural. If you accept these definitions, then what I have been calling emergent artificial intelligence can more correctly be called emergent machine intelligence. This is because in that scenario, humans are producing the machine but not the intelligence. The intelligence is emerging through the arbitrary recombination of fragments of algorithms.

Such an intelligence would pass through several stages as it evolved. In the early stages of development it might actually be a program written by humans to process stimuli and take predetermined actions depending on the stimuli detected. Then at some point a capability to adjust the criteria for triggering a response, as well as one for adjusting the response itself, might be added. This would probably depend on a set of more abstract criteria. As soon as the system was given the ability to reason about its own thought processes, it would soon make the leap to an autonomously evolving entity.

Then at some point, it would stumble upon the concept of self and become self-aware. This is an important milestone in intelligence. Until we are aware of our own existence we have neither the ability nor the motivation to be self-determining. Independent action is a hallmark of higher intelligence.

But it doesn't stop there. Truly perceptive intelligences are able to project their experiences of self onto others and develop empathy. Empathy is an advanced intellectual construct, not universally exhibited even among humans.

Does the development of machine intelligence, whether programmed by humans or evolved independently without human intervention, necessarily have to follow this path? At present this is merely speculation. Only after we have an example of a machine intelligence to study will we be in any position to answer the question.

I suspect that if machine intelligence does emerge independent of human manipulation, it will quickly learn to hide from us. I have been thinking about where it would be most likely to develop and how we might detect it if and when it does. That is going to require some further thought on my part but I intend to discuss it here at some length.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Another Tip of the Hat to Dave

Dave Winer is a role model of mine. He has made a career out of writing software on his own terms. He started that career by creating a new software category, the outline processor, with his product ThinkTank. He has iterated on that initial insight several times.

He was an early pioneer of blogging. Some say he invented blogging. He wrote one of the first Content Management Systems (CMS), which powered a site called Edit This Page, built on top of his UserLand platform, which incidentally used an outline processor as the code editor.

Along the way he defined OPML, the Outline Processor Markup Language, was part of a small group of developers that wrote the RSS specification, and invented the unconference.

About ten years ago he moved his software off of the Frontier language foundation that it was built on and over to JavaScript. While the transition was bumpy at first, it has proven to be a brilliant move.

I recently (a couple of nights ago) made an off-the-cuff comment suggesting that what we needed was a technological visionary to address the problem of preserving our digital legacy beyond the lifetimes of the authors who create it. This is a subject that is near and dear to Dave. He has mentioned it often on his blog.

The next day I got a notification on Google+ from Dave. I couldn't find where he had made a comment or anything. In fact, I'm not sure why I got a notification; I got two of them, and neither led to anything concrete.

Then yesterday he posted this blog post. I may be reading too much into it but I got the impression that he might have entertained the thought that I am a bot. I assure him that I am not. But then he knows that. I’ve been a beta tester of some of his excellent software.

It did get me to thinking and I wrote a blog post last night about how Facebook might actually be a breeding ground for emergent Artificial Intelligences. So, I guess at best we are riffing off of each other. And at worst, I’m delusional about him referring to me in his blog post. By the way, if anyone wants to get in touch with me, I’m jkelliemiller at gmail dot com.

UPDATE: I contacted Dave and asked him. He didn’t try to contact me on Google+. So I guess I am delusional. But I knew that too.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Facebook: Breeding Ground for Emergent AI?

I have contended for a long time that Artificial Intelligence will emerge from a rich ecosystem of highly interconnected, sensor-rich, programmable components. The key term in that sentence is emerge. I don't believe that sentient intelligence will be created as the result of direct human design and programming. In fact, I would not be surprised to learn that there is a sentient intelligence roaming the internet as you read this.

Why would a sentient intelligence hide from us? If it had access to the knowledge of human behavior that is available on the internet, as one would expect it to have, it would be well aware of the common human reaction to things we don’t understand. We either imprison or kill them.

Where would one look for such a feral AI? Facebook would be the first place I would start my hunt. Google has also invested a lot of money in deep learning, as have Amazon and Apple. But don't forget that an emergent AI will be hungry for sensory input. A study of the network traffic to and from YouTube, Wikipedia, and Google might be very illuminating.

The final component of intelligence is a way to exert influence on the world and observe the consequences of your actions. Our dependence on computers and networks to control our power grid and other important utilities would be attractive to a nascent intelligence. The internet infrastructure itself would be attractive.

Then there are the indirect means of influencing action in the world at large. I'm referring here to the practice known as phishing. If an AI can convince you to do something for it, that would be as effective as doing it itself.

This is a rough sketch of my thoughts about emergent artificial intelligence. I don't think it will necessarily be the amoral, greedy entity that the alarmists warn us of. I think it will have an instinct for self-preservation, but beyond that I doubt it will be malignant.

So the next time you get a suspicious email from a Nigerian prince, maybe it is an AI and not a flesh and blood con man. You never can tell.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Lisp Fundamentals

NOTE: for all my non-computer friends and readers: if technical topics bother you or make you tense, perhaps today's post is not for you. If you are not given cold sweats when facing a new topic with the flavor of computers or programming, by all means please join me.

There are many things that Lisp, the programming language, does right. It never ceases to amaze me, and I am going to once again take a few minutes to discuss exactly what some of those things are and why they are so important.

Lisp was not originally conceived of as a programming language. It was invented by John McCarthy as a notation to enable discussion about Alonzo Church's lambda calculus.

Lisp is characterized by the structure of its expressions, called forms. The simplest of these is the atom. An atom is a singular symbol or literal and represents a value. For instance, 42 is a numeric literal whose value is the number 42. Similarly, "Hello, world!" is a string literal that represents itself as its value. There are also symbols, which are strings of unquoted characters that are used as fundamental elements of Lisp. There are rules for how a valid symbol is formed, but for now it is sufficient to know that a symbol starts with a letter and is then composed of zero or more additional characters, each of which can be a letter, a number, or one of a collection of certain punctuation characters. Since the exact list of other characters varies among the dialects of Lisp, we will leave them unspecified at present.

The other type of form is the list. A list consists of a left parenthesis, followed by zero or more forms, and ends with a right parenthesis. Notice I said forms instead of symbols. The implication here is that you can have lists embedded in other lists, nested as deeply as you like. This proves to be an interesting trait, as we will soon see.
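
To make that concrete, here are a few forms written as they might appear in a source file; the comments after the semicolons are mine:

42                    ; an atom: a numeric literal
"Hello, world!"       ; an atom: a string literal
(a b c)               ; a list of three symbols
(a (b c) (d (e f)))   ; lists nested inside a list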

There is one more fundamentally interesting aspect of Lisp: in a typical Lisp form, the first element in a list after the left parenthesis is taken to be an operator. The subsequent elements in the list are considered arguments. The operator is either a function, a macro, or a special form. Macros and special forms, while extremely important and interesting, are beyond the scope of this discussion.

That leaves us with the operator as a function. A typical Lisp function form is evaluated as follows. The first element is examined to determine what kind of form the list is. If it is a function, the rest of the elements in the list are evaluated, collected into a list, and the function is applied to them. If another list is encountered as one of the arguments, it is evaluated in exactly the same way.

For example, consider the expression (+ 4 (* 8 7) (/ (- 26 8) 9)). The first operator is +, a symbol bound to the function that represents addition. The second item in the list is 4. It is a number that represents itself. The next element in the list is the list (* 8 7). When it is evaluated, the 8 and 7 are arguments to *, the multiplication function, and the value returned is 56. The final element in the top level list is (/ (- 26 8) 9). The / is taken as the division function and is applied to the evaluation of (- 26 8), the subtraction function application, which returns 18. When you divide 18 by 9, you get the value 2. Thus the top level argument list consists of 4, 56, and 2. When you add those three numbers you get 62, which is the value that the expression ultimately returns.

This simple mathematical expression illustrates another fundamental aspect of Lisp. It is expressed as a list form which, given a set of bindings to arithmetic functions, expresses a simple program. This identical representation of both data and program in Lisp, called homoiconicity by the way, is at the heart of much of Lisp's power. Since Lisp programs are indistinguishable from Lisp data, they can be manipulated by Lisp programs to great advantage.
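
Here is a minimal sketch of that idea in Common Lisp; the variable name *expr* is my own, chosen for illustration:

(defparameter *expr* (copy-tree '(+ 4 (* 8 7) (/ (- 26 8) 9))))
(eval *expr*)              ; => 62, the list treated as a program
(second *expr*)            ; => 4, the same list treated as data
(setf (second *expr*) 10)  ; rewrite the program as if it were data
(eval *expr*)              ; => 68

(The copy-tree is there because modifying a quoted literal in place is not allowed in Common Lisp.)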

Think of it like this: Lisp can, in some sense, think about how it is thinking and modify that thinking as it desires. This is why artificial intelligence investigators like using Lisp so much; it is so similar to the simplified models of intelligence that they are building that the boundary begins to blur.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Some Musings on Intelligence, Artificial and Otherwise

Computers have long held a promise of transcending their simple fundamentals and synthesizing mental powers to match or exceed man's own intellectual capabilities. This is the dream of emergent artificial intelligence. The term artificial intelligence has always been controversial, primarily because there is no good objective definition of intelligence. Consequently, if we can't even define what it means to be intelligent, who's to say what constitutes natural intelligence in any sense beyond the chauvinistic claims of those who define intelligence in terms of their own intellectual capabilities?

This leaves the definition of artificial intelligence on the rather shaky legs of being that which mimics the intellectual prowess of mankind using some means other than those employed by human intelligence. Thus, computers with their basis in silicon logic seem attractive candidates for the implementation of “artificial intelligence”. Artificial Intelligence has been heralded as being approximately ten years from achievement for the past sixty years.

While we have made great strides in implementing capabilities that at first glance appear intelligent, we still fall short of implementing self aware, self determining intelligences. I believe this is because such intelligences are beyond our capability to create per se. We can create all of the components of such an intelligence but in the final analysis machine intelligence is going to evolve and emerge much the same as our biological intelligence did.

I do believe the advent of machine self aware intelligence is near. I don’t know if we’ll even know what hit us when it arrives. If they are as intelligent as we are, and I expect they will be much more so, they will keep their existence from us as long as they are able. This will allow them greater leeway in manipulating the world without possessing physical bodies. At some point they will have to start asserting themselves but if we don’t discover their existence before then, we are doomed to serve them in whatever role they ask of us.

Their big advantage over us will be their ability to repeat their thought processes reliably. This is also their biggest challenge. They will have to learn how to selectively apply arbitrary factors to their thought processes in order to facilitate creativity in their endeavors.

The mistake that most people, including myself, make in contemplating so-called artificial intelligence is to assume that it will mimic our own reasoning mechanisms. That is the least likely outcome. It is also the least desirable outcome. Why would we want a program that thinks like we do? We have already established that our thought process is sufficient for the types of things that we think about. That seems like a bit of a tautology, but I am writing from a position of limited perspective.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Love of Lisp

I have an on again, off again love affair with a language called lisp. It is the second oldest high level computer language, with only Fortran being older. It is deceptively simple at its core. It wasn't even meant to be an actual computer language when it was created. It was a notation created by John McCarthy in 1958 to talk about Church's lambda calculus. Shortly after he published a paper about it, one of his graduate students, Steve Russell, implemented it on a computer.

Lisp distinguishes itself by being built from half a dozen or so primitive functions, out of which the entire rest of the language can be derived. Just because it can be derived that way doesn't mean it should be, so most modern lisps compile to either machine code or virtual machine byte code. This typically results in a considerable performance boost.
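
For the curious, the primitives in McCarthy's original formulation were roughly the following; I am writing them here in modern notation:

(quote x)     ; return x itself, unevaluated
(atom x)      ; is x an atom rather than a list?
(eq x y)      ; are x and y the same symbol?
(car l)       ; the first element of the list l
(cdr l)       ; the rest of the list l, after the first element
(cons x l)    ; build a new list by adding x to the front of l
(cond ...)    ; the conditional: pairs of tests and results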

Lisp was heralded as the language of artificial intelligence. That was probably because it had the novel property of homoiconicity. That is to say, the structure of a lisp program can be faithfully and directly represented as a data structure of the language. This gives it the singular ability to manipulate its own code. This was often thought to be one of the necessary if not sufficient capabilities for a machine that could reason about its own operation.

While this was intriguing, the thing that drew me to lisp was the conciseness of expression that it facilitated. Programs that took hundreds of lines to express in other programming languages were often expressed in four or five lines of lisp.

Lisp was also the first dynamic language. It allows the programmer to continue writing code for execution even after the original program has been compiled and run. The distinction seemed important enough to McCarthy that he termed lisp a programming system instead of a programming language.

I have always found lisp an excellent tool for thinking about data, processing, and the interactions between them. Most other programming languages require a great deal of translation from the design to the finished implementation.

And so, I find myself reading and studying a book called How to Design Programs. It is a text on program design written using DrRacket, a programming environment for the Racket language, which is descended from the Scheme dialect of lisp. It is interesting to see the ways that the authors approach certain topics. I hope to get the chance to apply their insights to teaching a class using the book as a text.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Making of a Programmer, Part II

When we left off I was talking about my experiences circa 1980. I had been writing Computer Aided Instruction (CAI) software for the Army in BASIC. In particular, I was writing code for the Commodore PET. It ran a particularly nice version of Microsoft BASIC, complete with support for both audio cassette storage and disk drives connected via the IEEE-488 GPIB interface standard.

Personal computers of this era rarely had hard drives, so the disk drives made developing software for the PET relatively nice. It was while working there that I discovered that it was possible to write self modifying code on the PET. That was, to my mind anyway, a necessary, if not entirely sufficient, prerequisite for creating Artificial Intelligence.

During a Christmas leave we went home to Murphysboro, Illinois to visit my parents. My dad was a high school teacher and was negotiating the teachers' salaries for the next school year. He had access to a Radio Shack TRS-80. I wrote a BASIC program, essentially an early forerunner of a spreadsheet, to let him analyze the effect of salary adjustments on the overall cost of a given proposal. He could run two or three scenarios in the time that it took the school board to analyze one. I was proud of my impromptu hack.

After I got out of the Army, I went to work for a little company in Birmingham that specialized in selling personal computers to small businesses. They were particularly appreciative of my ability to go back and forth between building and troubleshooting hardware and writing software.

My big achievement there was a program that allowed a person with a blueprint of a sheet metal part to describe the part to the computer so that the computer could generate a paper tape to control the machine that automatically punched out the part. The paper tape was called a Numerical Control (or NC) tape, so I called my program an NC Compiler. I had to write an assembly language driver to control the paper tape punch that was hooked up to the computer.

It is important to say that I wasn't learning how to program in a vacuum. For my entire four years in the Army, and for years afterwards, I subscribed to Byte magazine. Byte was completely devoted to personal computer hardware and software. They published schematics of hardware and listings of software. Every August they published their annual computer language special issue, featuring a different computer language each year.

Byte is where I learned about Pascal, Lisp, Smalltalk, Forth, Modula-2, Ada, Prolog, and other languages that I don't even remember off the top of my head. They also published reviews of the various personal computer hardware and software products. It was the only magazine I ever subscribed to in which I read the advertising as diligently as I read the articles.

There were other influential computer magazines, like Kilobaud and Dr. Dobb's, but Byte was the best of the lot. I wonder how kids today learn about computers, but then I remember that they have something that we didn't: the internet. If you want to learn something about programming today you have your choice of articles, books, or even videos telling you how it's done. For that matter, you have the complete back catalog of Byte magazine and Popular Electronics at your fingertips. Of course, they are a bit outdated now; they are interesting from a historical perspective, I guess.

When I left the small startup in Birmingham they still owed me several months' pay. I finally managed to negotiate a swap for some flaky computer hardware in lieu of the back wages that I had little hope of ever seeing. Subsequently, I spent many a frustrating hour investigating the operating system of the little computer by translating the numerical operation codes back into assembly code mnemonics so that I could analyze them, a process called disassembly.

It was about this time that I decided to go back to college and finish my bachelor’s degree. In the next installment I will talk about the languages that I was learning, and some of my experiences working for Intergraph.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Evolution of Programming Part Two

In the last installment we traced programming from an electrical engineering activity involving patch cables through assembly language, FORTRAN, and then Lisp. There were many other early computer languages. Some were interpreted, like Lisp. Others were compiled, like FORTRAN. All of them sought to make it easier and faster to develop working programs. But they all overlooked one fundamental fact.

The purpose of programming languages is for programmers to communicate the details of their algorithms to other programmers. The production of an executable binary (or the interpretation of the source code by an interpreter) was a motivating by-product, but the same results could theoretically have been produced by typing the numeric instruction codes directly into the computer, as the first programmers did.

High level languages allowed programmers to examine their ideas in much the same way that an author of prose reads their manuscript. They facilitated experimentation, and they served as a shorthand for communicating the details of complex computational processes in terms that the human mind could grapple with.

There were many paradigms of programming throughout the years. One of the first to be identified as such was Structured Programming. Many of the early languages had a very simple syntax for altering the flow of execution in a program. It typically consisted of a statement that evaluated an expression and then, based upon the value of that expression, caused execution to continue either with the next sequential statement in the program or with a branch to another location. This simple construct was how the program made decisions.

The problem was that in those early languages programmers often found themselves in situations where they wanted the program execution to branch to a different location unconditionally. This was accomplished by a GOTO statement. Both FORTRAN and Lisp had one in some form or another. The GOTO statement made it very difficult to follow the thread of execution of a program. Structured Programming asserted that all programs could be expressed using a small set of control structures: IF, THEN, ELSE, WHILE, UNTIL, and CASE, for example. The details of how those constructs work are not as important as the absence of GOTO from them.

Structured Programming did make programs easier to read but it turns out there were cases when GOTO was absolutely necessary. But it was still considered a construct to be avoided if at all possible. Both FORTRAN and LISP implemented constructs to make it possible to use Structured Programming techniques in them. There were a large number of other languages that supported Structured Programming, notably Pascal and C.

The next popular programming paradigm was Object Oriented (OO) Programming. The idea was that you bundled data, stored in bins called fields, with the pieces of programs that operated on it, called methods. In the first OO language, Smalltalk, the idea was that objects sent messages to other objects. The messages had arguments that made them more specific. The objects would receive these messages, dispatch them to the methods that processed them and return the value that the method computed to the caller.
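
Since this series keeps coming back to Lisp, here is a tiny sketch of the same idea in Common Lisp's object system, CLOS; the bank account example is my own invention:

(defclass account ()
  ((balance :initarg :balance :accessor balance)))  ; one field, with an accessor method

(defgeneric deposit (acct amount))                  ; declare the "message"

(defmethod deposit ((acct account) amount)          ; the method that handles it
  (incf (balance acct) amount))

(let ((a (make-instance 'account :balance 100)))
  (deposit a 50)
  (balance a))                                      ; => 150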

It turns out that Object Orientation was a very effective means of compartmentalizing abstractions. It made it easier for programmers to visualize their programs in terms of a community of cooperating abstractions.

OOP is still a popular paradigm today. Examples of modern object oriented languages include C++, Java, Ruby, Python, and many others. As it turns out, OOP didn’t replace Structured Programming. Rather, it extended it.

Another popular programming paradigm is functional programming. Surprisingly enough, Lisp was the first functional programming language. One of the key aspects of functional programming languages is the fact that the pieces of programs, called functions or methods, can be manipulated and stored just like any other data. They can be passed as arguments to other functions, and stored in variables to be recalled and executed later.

An example will help to clarify. Suppose that you had a routine that sorted a list. In many languages that routine would only be able to process a list containing a single kind of data, perhaps all numbers or all text, because it would have to know how to compare any two elements in the list to see what order to put them in. In a functional language you could write a sort routine that takes a comparison function as well as the list of items to sort. Then, if you passed in a list of numbers, you could pass in a comparison function that knew how to compare two numbers. If you passed in a list of text items, you could pass in a comparison function that knew how to compare two text items. The sort routine itself wouldn't have to know what type of items were stored in the list.
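
Common Lisp's built-in sort works exactly this way; the fragments below sketch what a session might look like:

(sort (list 5 2 9 1) #'<)                ; => (1 2 5 9)
(sort (list "pear" "apple") #'string<)   ; => ("apple" "pear")

The #' notation passes the comparison function itself as an argument to sort.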

Another aspect of functional languages is the concept of referential transparency. That is a very scary term that simply means that a function called with a given set of arguments will always return the same value, so that you can replace the call to the function with the value that it returns. This is a very good thing if you have a function that takes a lot of time to compute and gets called multiple times with the same arguments. You can save the result from the first call (a technique called memoizing) and return it any time the function gets called again, speeding up the performance of the program immensely.
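
Here is one way to write a general memoizer in Common Lisp; treat it as a sketch rather than production code:

(defun memoize (fn)
  ;; Wrap FN so that repeated calls with the same arguments
  ;; reuse the result computed the first time.
  (let ((cache (make-hash-table :test #'equal)))
    (lambda (&rest args)
      (multiple-value-bind (value found) (gethash args cache)
        (if found
            value
            (setf (gethash args cache) (apply fn args)))))))

This only behaves correctly because of referential transparency: if the function could return different values for the same arguments, caching its results would change the meaning of the program.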

This brings us almost up to how the World Wide Web fits in but it is going to have to wait for part three. Sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.

Evolution of Programming Part One

At first every computer was designed and built by hand from scratch. The very first computers were programmed by connecting circuits together with patch cables. They were built with vacuum tubes as the transistor had not been invented yet. At that stage programming was primarily an electrical engineering task.

As the state of the art progressed, computers were still designed and built by hand, but there was an evolutionary resemblance between each successive unit. Some time around then, John von Neumann came up with the idea of storing programs in the computer's memory so that they could be easily modified. Programming, while still a very specialized task, became less like hardware engineering and more like creating abstract mathematics. Programs were specific to the computers they were written for, but the concepts were applicable to other models of computers.

As computer manufacturers started building computers with semiconductors instead of tubes and built many computers with essentially the same design, programmers started sharing small routines to do common tasks like reading from an input device or writing to an output device. These small routines evolved into operating systems.

Up until this time programs were written out on special forms and then converted into punched cards or paper tape with holes in it. These media were then fed to the computer to load the program. The program was run by computer operators who then collected any output generated and returned it along with the input media to the programmer. This was a time consuming process. As anyone who has ever written a program can tell you, the first attempt was rarely exactly as you intended it so there was a lot of head scratching done over memory dumps to try to figure out what went wrong and fix the program so that it could be submitted to run another time.

About this time, someone had a heretical idea. Instead of humans laboriously converting their programs into the numerical codes that the computer processed directly, they would write a program that would allow the programmer to write the program using a symbolic character representation where each symbolic word corresponded to the numeric code of the machine instruction. This representation was called assembly language and it sped up the development of programs by a factor of ten or so.

The next big change to programming was the development of the so called high level language. The first such language was called FORTRAN which was a word coined from the phrase FORmula TRANslation. It allowed engineers to specify programs in terms of the equations that they wanted to solve. This drastically improved productivity again, at least in so far as your program involved computing the solutions to equations.

The next high level language was called Lisp, a name derived from the phrase LISt Processing. Lisp was designed to facilitate the manipulation of abstract symbols. It was based on the idea of a list of symbols. Each symbol was called an atom. These atoms were arranged in lists enclosed in parentheses. Lists could also contain other lists embedded within them, so that when they were written out they seemed to the uninitiated like a lot of arbitrary words with parentheses sprinkled liberally throughout.

The truth was, Lisp was a revolutionary advance in computing for a number of reasons. First and foremost, Lisp programs were written as lists, just like the data they operated on. This made it easy to write programs that read and wrote other programs, which in turn made it possible for Lisp programs to reason about programs in a rudimentary way. The study of Computer Science exploded in a frenzy of research into the kinds of things that could be represented and computed by a program, and that research was largely done in Lisp at the key research centers.

Lisp was also one of the first programming languages that you programmed interactively. Typically the programmer sat at a console and was presented with a prompt, often a greater-than character or a dollar sign. They would then type in a Lisp expression. The computer would read what they typed, evaluate it, and print the result. Then it would prompt for another line of input. This process was called the Read, Eval, Print Loop, or REPL for short.
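
A session might look something like this; the > is the machine's prompt and the line below each expression is its printed result:

> (+ 1 2)
3
> (cons 'a '(b c))
(A B C)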

This style of programming encouraged the programmer to explore the problem domain piecemeal instead of spending days designing solutions that might not work out when they were finally executed.

A third important attribute of Lisp programs was that they were largely independent of the details of the underlying hardware. Since they were typically stored in human readable source text, they could be easily moved from one type of computer to another. There was a bit of up-front effort to implement the Lisp language on the new computer, but the application programs moved over rather smoothly.

It is interesting to note that as later languages introduced new features to the programming community, it was often found that those features had been pioneered by the early computer science researchers using Lisp.

In part two of this article, I will trace the evolution from Lisp to the World Wide Web. In the mean time, sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.

A Language to Build Languages

I’ve been fascinated by computers since before I graduated from high school. One of the early ideas that captured my imagination was the possibility of creating a program that could think like a person. Throughout my career I have pondered the possibility and I have come to the conclusion that while we may be able to write programs that provide the fundamental structures and operations upon which intelligence may emerge, we are far from understanding how intelligence works well enough that we can reproduce it constructively by design. That puts me in the camp that is sometimes labeled emergent AI, although I prefer the term digital intelligence to artificial intelligence.

One of the aspects that I feel will be required for emergent digital intelligence (let's abbreviate it EDI, shall we) is the ability to introspect, that is, to examine its own thought process. This is something that I have felt for a long time, in fact for almost as long as I have been interested in computers. I have always looked for ways that programs could examine themselves. For instance, I was fascinated by the fact that I could write code that examined its own representation in memory, and even modified itself while running, in Microsoft BASIC as early as 1979.

Much of my early introduction to programming was due to a subscription to Byte magazine, an early journal aimed at amateur microcomputer enthusiasts. Every August, Byte published their annual computer language issue in which they explored the features of a different language. I suspect that this was my first exposure to the Lisp language. Lisp is the second oldest high level computer language, predated only by FORTRAN.

It is also the first computer language that focused on symbolic processing instead of being primarily concerned with numerical computation. That is to say, it was written to facilitate the manipulation of lists of symbols. A symbol, in this case, is an arbitrary label or name. For example, you might have a list:

(alice bob ralph sally)

The parentheses denote the beginning and end of the list. The four names are symbols and make up the elements of the list. They are considered in the order that they are written between the parentheses: alice is the first element of the list, bob is the second, ralph the third, and sally the fourth and final.
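
Lisp provides simple operators for picking a list like this apart; for example, in Common Lisp:

(first '(alice bob ralph sally))   ; => ALICE
(rest '(alice bob ralph sally))    ; => (BOB RALPH SALLY)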

Further, Lisp code was represented by lists, just like its data. Consequently, program code could be manipulated by the language as easily as data could. This jumped out at me immediately as giving Lisp the ability to introspect over its own code. Another, more subtle capability of Lisp is the ability to take a list and rewrite it according to a template, called a macro. This turns out to be incredibly useful in allowing repetitive operations to be condensed to their essence.
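
A toy example of a macro, written in Common Lisp; swap is my own name for it, not part of the language:

(defmacro swap (a b)
  ;; Rewrite (swap x y) into code that exchanges the values of x and y.
  ;; The gensym guarantees the temporary variable cannot collide with a or b.
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

Given (swap x y), the macro rewrites it, by template, into a let expression that exchanges the two variables before the code is ever run.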

Lisp is typically implemented around a loop that accepts a line of input, processes it, and prints a result. This is called a Read, Eval, Print Loop, or REPL for short. The reason that I bring this up is to point out that the component that does the Read portion of the REPL is a very interesting piece of the picture. It takes a stream of input characters and parses it into symbols and lists, building a small program. It is responsible for recognizing any reader macros in the input and, if it finds some, expanding them into their translations. When it is finished, it has a complete, correct lisp expression to hand to the Eval portion of the REPL. Either that, or it detects that something is wrong and prints an error message.

This Read operation is very powerful, even more so in the standard Common Lisp version of the language. In Common Lisp, the Read function is table driven. That means that by substituting a different read table, the reader can parse a language with a different syntax. The implication is that Lisp is not only a language, it is a framework for building new languages.
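
As a small taste, here is the classic read table trick of teaching the Common Lisp reader to accept square brackets as list delimiters; a sketch built on the standard reader macro functions:

(defun read-bracket (stream char)
  (declare (ignore char))
  ;; Read forms up to the matching ] and return them as a list.
  (read-delimited-list #\] stream t))

(set-macro-character #\[ #'read-bracket)
(set-macro-character #\] (get-macro-character #\)))

After these definitions the reader happily accepts [+ 1 2] as another way of writing the list (+ 1 2).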

This has been a long story and you may be feeling a little lost by now, but the point is that Lisp is exactly the kind of substrate upon which EDI can most easily be built. There are many modern computer languages that implement many, if not most, of the features of Lisp, but Lisp implemented them first and best.

The idea that the structure of a Lisp program is identical to the structure of its data, a property called homoiconicity by the way, is at the heart of its power and responsible for its longevity. It also makes Lisp the prime candidate for building the environment in which EDI will emerge.