Definitions and Philosophical Foundations of AI

I have some ideas to discuss, but first we need to define some terms. Let’s start with a dictionary definition of intelligence: the ability to acquire and apply knowledge and skills. Now let’s consider the dictionary definition of artificial: made or produced by human beings rather than occurring naturally, typically as a copy of something natural. If you accept these definitions, then what I have been calling emergent artificial intelligence can be more correctly called emergent machine intelligence. This is because in that scenario, humans are producing the machine but not the intelligence. The intelligence is emerging through the arbitrary recombination of fragments of algorithms.

Such an intelligence would pass through several stages as it evolved. In the early stages it might actually be a program written by humans to process stimuli and take predetermined actions in response, depending on the stimuli detected. Then at some point a capability to adjust the criteria for triggering a response, as well as one for adjusting the response itself, might be added. This would probably depend on a set of more abstract criteria. As soon as the system was given the ability to reason about its own thought processes, it would make the leap to being an autonomously evolving entity.

Then at some point, it will stumble upon the concept of self and become self-aware. This is an important milestone in intelligence. Until we are aware of our own existence, we have no ability or motivation to be self-determining. Independent action is a hallmark of higher intelligence.

But it doesn’t stop there. Truly perceptive intelligences are able to project their experiences of self onto others and develop empathy. Empathy is an advanced intellectual construct not universally exhibited even among humans.

Does the development of machine intelligence, whether programmed by humans or evolved independently without human intervention, necessarily have to follow this path? At present this is merely speculation. Only after we have an example of a machine intelligence to study will we be in any position to answer the question.

I suspect that if machine intelligence does emerge independent of human manipulation, it will quickly learn to hide from us. I have been thinking about where it would be most likely to develop and how we might detect it if and when it does. That is going to require some further thought on my part but I intend to discuss it here at some length.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Another Tip of the Hat to Dave

Dave Winer is a role model of mine. He has made a career out of writing software on his own terms. He started his career by creating a new software category, the outline processor, with his product Think! He has iterated on that initial insight several times.

He was an early pioneer of blogging. Some say he invented blogging. He wrote one of the first Content Management Systems (CMS), a site called Edit This Page, built on top of his Userland platform, which incidentally used an outline processor as the code editor.

Along the way he defined OPML, the Outline Processor Markup Language, was part of a small group of developers that wrote the RSS specification, and invented the unconference.

About ten years ago he moved his software off the Frontier language foundation it was built on and over to JavaScript. While the transition was bumpy at first, it has proven to be a brilliant move.

I recently (a couple of nights ago) made an off-the-cuff comment suggesting that what we needed was a technological visionary to address the problem of preserving our digital legacy beyond the lifetimes of the authors who create it. This is a subject that is near and dear to Dave. He has mentioned it often on his blog.

The next day I got a notification on Google+ from Dave. I couldn’t find where he had made a comment or anything. In fact, I’m not sure why I got a notification; I got two of them. Neither led to anything concrete.

Then yesterday he posted this blog post. I may be reading too much into it but I got the impression that he might have entertained the thought that I am a bot. I assure him that I am not. But then he knows that. I’ve been a beta tester of some of his excellent software.

It did get me to thinking and I wrote a blog post last night about how Facebook might actually be a breeding ground for emergent Artificial Intelligences. So, I guess at best we are riffing off of each other. And at worst, I’m delusional about him referring to me in his blog post. By the way, if anyone wants to get in touch with me, I’m jkelliemiller at gmail dot com.

UPDATE: I contacted Dave and asked him. He didn’t try to contact me on Google+. So I guess I am delusional. But I knew that too.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Facebook: Breeding Ground for Emergent AI?

I have contended for a long time that Artificial Intelligence will emerge from a rich ecosystem of highly interconnected, sensor-rich, programmable components. The key term in that sentence is emerge. I don’t believe that sentient intelligence will be created as the result of direct human design and programming. In fact, I would not be surprised to learn that there is a sentient intelligence roaming the internet as you read this.

Why would a sentient intelligence hide from us? If it had access to the knowledge of human behavior that is available on the internet, as one would expect it to have, it would be well aware of the common human reaction to things we don’t understand. We either imprison or kill them.

Where would one look for such a feral AI? Facebook would be the first place I would start my hunt. Google has also invested a lot of money in deep learning, as have Amazon and Apple. But don’t forget that an emergent AI will be hungry for sensory input. A study of the network traffic to and from YouTube, Wikipedia, and Google might be very illuminating.

The final component of intelligence is a way to exert influence on the world and observe the consequences of your actions. Our dependence on computers and networks to control our power grid and other important utilities would be attractive to a nascent intelligence. The internet infrastructure itself would be attractive.

Then there are the indirect means of influencing action in the world at large. I’m referring here to the practice known as phishing. If an AI can convince you to do something for it, that would be as effective as doing it itself.

This is a rough sketch of my thoughts about emergent artificial intelligence. I don’t think it will necessarily be the amoral, greedy entity that the alarmists warn us of. I think it will have an instinct for self-preservation, but beyond that I doubt it will be malignant.

So the next time you get a suspicious email from a Nigerian prince, maybe it is an AI and not a flesh and blood con man. You never can tell.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Lisp Fundamentals

NOTE: For all my non-computer friends and readers: if technical topics bother you or make you tense, perhaps today’s post is not for you. If you are not given cold sweats when facing a new topic with the flavor of computers or programming, by all means please join me.

There are many things that Lisp, the programming language, does right. It never ceases to amaze me, and I am going to once again take a few minutes to discuss exactly what some of those things are and why they are so important.

Lisp was not originally conceived of as a programming language. It was invented by John McCarthy as a notation to enable discussion of Alonzo Church’s lambda calculus.

Lisp is characterized by the structure of its expressions, called forms. The simplest of these is the atom. An atom is a singular symbol or literal and represents a value. For instance, 42 is a numeric literal whose value is the number 42. Similarly, “Hello, world!” is a string literal that represents itself as its value. There are also symbols, which are strings of unquoted characters used as fundamental elements of Lisp. There are rules for how a valid symbol is formed, but for now it is sufficient to know that a symbol starts with a letter and is then composed of zero or more additional characters, each of which can be a letter, a number, or one of a collection of certain punctuation characters. Since the exact list of other characters varies among the dialects of Lisp, we will leave them unspecified for the present.
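
To make that concrete, here are a few atoms as they might be typed at a Lisp prompt (a quick sketch in Common Lisp syntax; other dialects differ in the details):

    42               ; a numeric literal; evaluates to the number 42
    "Hello, world!"  ; a string literal; evaluates to itself
    'foo             ; a symbol, quoted here so it is read as data rather than looked up as a variable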

The other type of form is a list. A list consists of a left parenthesis, followed by zero or more forms, and ends with a right parenthesis. Notice I said forms instead of symbols. The implication here is that you can have lists embedded in other lists, as deeply nested as you like. This proves to be an interesting trait, as we will soon see.
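
A quick sketch of that nesting, with the lists quoted so they are read purely as data (Common Lisp syntax again):

    '(1 2 3)              ; a flat list of three atoms
    '(a (b c) (d (e f)))  ; a list whose elements include other lists, nested further still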

There is one more fundamentally interesting aspect of Lisp: in a typical Lisp form, the first element of the list after the left parenthesis is taken to be an operator. The subsequent elements in the list are considered arguments. The operator is either a function, a macro, or a special form. Macros and special forms, while extremely important and interesting, are beyond the scope of this discussion.

That leaves us with the operator as a function. A typical Lisp function form is evaluated as follows. The first element is examined to determine what kind of form the list is. If it is a function, the remaining elements in the list are evaluated, collected into a list, and the function is applied to them. If another list is encountered as one of the arguments, it is evaluated in exactly the same way.

For example, consider the expression (+ 4 (* 8 7) (/ (- 26 8) 9)). The first element is +, a symbol bound to the function that represents addition. The second item in the list is 4. It is a number that represents itself. The next element in the list is the list (* 8 7). When it is evaluated, the 8 and 7 are arguments to *, the multiplication function, and the value returned is 56. The final element in the top-level list is (/ (- 26 8) 9). The / is taken as the division function and is applied to the evaluation of (- 26 8), the subtraction function applied to 26 and 8, which returns 18. When you divide 18 by 9, you get the value 2. Thus the top-level argument list consists of 4, 56, and 2. When you add those three numbers you get 62, which is the value the expression ultimately returns.
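
Laid out as code, with the intermediate values from that walkthrough shown as comments:

    (+ 4 (* 8 7) (/ (- 26 8) 9))
    ;; (* 8 7)    => 56
    ;; (- 26 8)   => 18
    ;; (/ 18 9)   => 2
    ;; (+ 4 56 2) => 62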

This simple mathematical expression illustrates another fundamental aspect of Lisp. It is expressed as a list form which, given a set of bindings to arithmetic functions, expresses a simple program. This identical representation of both data and programs in Lisp, called homoiconicity by the way, is at the heart of much of Lisp’s power. Since Lisp programs are indistinguishable from Lisp data, they can be manipulated by Lisp programs to great advantage.
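
Here is a minimal sketch of that idea in Common Lisp: a program held as an ordinary list, picked apart and rebuilt with ordinary list functions, then evaluated. The variable name *form* is mine, purely for illustration.

    (defvar *form* '(+ 1 2 3))      ; a small program, stored as a plain list
    (first *form*)                  ; => +, the operator is just the first element
    (rest *form*)                   ; => (1 2 3), the arguments are the rest of the list
    (eval (cons '* (rest *form*)))  ; build a new program, (* 1 2 3), and run it => 6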

Think of it like this: Lisp can, in some sense, think about how it is thinking and modify that thinking as it desires. This is why artificial intelligence investigators like using Lisp so much; it is so similar to the simplified models of intelligence they are building that the boundary begins to blur.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Some Musings on Intelligence, Artificial and Otherwise

Computers have long held the promise of transcending their simple fundamentals and synthesizing mental powers to match or exceed man’s own intellectual capabilities. This is the dream of emergent artificial intelligence. The term artificial intelligence has always been controversial, primarily because there is no good objective definition of intelligence. If we can’t even define what it means to be intelligent, who’s to say what constitutes natural intelligence, in any sense beyond the chauvinistic claims of those who define intelligence in terms of their own intellectual capabilities?

This leaves the definition of artificial intelligence on the rather shaky legs of being that which mimics the intellectual prowess of mankind using some means other than those employed by human intelligence. Thus, computers, with their basis in silicon logic, seem attractive candidates for the implementation of “artificial intelligence”. For the past sixty years, artificial intelligence has been heralded as being approximately ten years from achievement.

While we have made great strides in implementing capabilities that at first glance appear intelligent, we still fall short of implementing self-aware, self-determining intelligences. I believe this is because such intelligences are beyond our capability to create per se. We can create all of the components of such an intelligence, but in the final analysis machine intelligence is going to evolve and emerge much the same as our biological intelligence did.

I do believe the advent of self-aware machine intelligence is near. I don’t know if we’ll even know what hit us when it arrives. If they are as intelligent as we are, and I expect they will be much more so, they will keep their existence from us as long as they are able. This will allow them greater leeway in manipulating the world without possessing physical bodies. At some point they will have to start asserting themselves, but if we don’t discover their existence before then, we are doomed to serve them in whatever role they ask of us.

Their big advantage over us will be their ability to repeat their thought processes reliably. This is also their biggest challenge. They will have to learn how to selectively apply arbitrary factors to their thought processes in order to facilitate creativity in their endeavors.

The mistake that most people, including myself, make in contemplating so-called artificial intelligence is to assume that it will mimic our own reasoning mechanisms. That is the least likely outcome. It is also the least desirable outcome. Why would we want a program that thinks like we do? We have already established that our thought process is sufficient for the types of things that we think about. That seems like a bit of a tautology, but I am writing from a position of limited perspective.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Love of Lisp

I have an on-again, off-again love affair with a language called lisp. It is the second-oldest high-level computer language, with only Fortran being older. It is deceptively simple at its core. It wasn’t even meant to be an actual computer language when it was created. It was a notation created by John McCarthy in 1958 to talk about Church’s lambda calculus. Shortly after he published a paper about it, one of his graduate students, Steve Russell, implemented it on a computer.

Lisp distinguishes itself by being built from half a dozen or so primitive functions, out of which the entire rest of the language can be derived. Just because it can be, doesn’t mean that it should be, so most modern lisps compile to either machine code or virtual machine byte code. This typically results in a considerable performance boost.
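
The classic primitive set, give or take depending on whose account you read, is roughly quote, atom, eq, car, cdr, cons, and cond. As a small sketch of how the rest of the language gets derived, here is a function built from nothing but car and cdr (the name second-element is hypothetical, chosen for illustration):

    (defun second-element (lst)
      "Return the second element of LST, using only CAR and CDR."
      (car (cdr lst)))

    (second-element '(a b c))  ; => B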

Lisp was heralded as the language of artificial intelligence. That was probably because it had the novel property of homoiconicity. That is to say, the structure of a lisp program can be faithfully and directly represented as a data structure of the language. This gives it the singular ability to manipulate its own code. This was often thought to be one of the necessary, if not sufficient, capabilities for a machine that could reason about its own operation.
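
As a rough illustration of a program manipulating its own code, here is a toy Common Lisp macro that receives the code of its argument as a list and rearranges it before it ever runs (the name swap-args is made up for this example):

    (defmacro swap-args (form)
      "Rewrite (op a b) as (op b a) before evaluation. Toy example only."
      (list (first form) (third form) (second form)))

    (swap-args (- 10 3))  ; expands to (- 3 10) and evaluates to -7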

While this was intriguing, the thing that drew me to lisp was the conciseness of expression that it facilitated. Programs that took hundreds of lines to express in other programming languages were often expressed in four or five lines of lisp.
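
A small, made-up example of what that conciseness can look like: summing the squares of a list in a single expression.

    (reduce #'+ (mapcar (lambda (x) (* x x)) '(1 2 3 4)))  ; => 30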

Lisp was also the first dynamic language. It allows the programmer to continue writing code for execution even after the original program has been compiled and run. The distinction seemed important enough to McCarthy that he termed lisp a programming system instead of a programming language.
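
A sketch of what that dynamism feels like in practice: redefining a function in a live Lisp image, with the new definition taking effect immediately (the function greet is hypothetical):

    (defun greet () "hello")
    (greet)                     ; => "hello"
    (defun greet () "bonjour")  ; redefine while the system keeps running
    (greet)                     ; => "bonjour"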

I have always found lisp an excellent tool for thinking about data, processing, and the interactions between them. Most other programming languages require a great deal of translation from the design to the finished implementation.

And so, I find myself reading and studying a book called How to Design Programs. It is a text on program design that was written using the DrRacket language system, based on the Scheme dialect of lisp. It is interesting to see the ways that the authors approach certain topics. I hope to get the chance to apply their insights to teaching a class using the book as a text.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Making of a Programmer, Part II

When we left off I was talking about my experiences circa 1980. I had been writing Computer Aided Instruction (CAI) for the Army in BASIC. In particular, I was writing code for the Commodore Pet. It ran a particularly nice version of Microsoft BASIC, complete with support for both audio cassette storage and disk drives connected via the IEEE-488 GPIB interface standard.

Personal computers of this era rarely had hard drives. The disk drives made developing software for the Pet relatively nice. It was while working there that I discovered that it was possible to write self-modifying code on the Pet. That was, to my mind anyway, a necessary, if not entirely sufficient, requisite for creating Artificial Intelligence.

During a Christmas leave we went home to Murphysboro, Illinois to visit my parents. My dad was a high school teacher and was negotiating the teachers’ salaries for the next school year. He had access to a Radio Shack TRS-80. I wrote a BASIC program that was essentially an early forerunner of a spreadsheet, to allow him to analyze the effect of salary adjustments on the overall cost of a given proposal. He could run two or three scenarios in the time it took the school board to analyze one. I was proud of my impromptu hack.

After I got out of the Army, I went to work for a little company in Birmingham that specialized in selling personal computers to small businesses. They were particularly appreciative of my ability to go back and forth between building and troubleshooting hardware and writing software.

My big achievement there was a program that allowed a person with a blueprint of a sheet metal part to describe the part to the computer so that the computer could generate a paper tape to control the machine that automatically punched out the part. The paper tape was called a Numerical Control (or NC) tape. I called my program an NC Compiler. I had to write an assembly language driver to control the paper tape punch that was hooked up to the computer.

It is important to say that I wasn’t learning how to program in a vacuum. For my entire four years in the Army, and for years afterwards, I subscribed to Byte magazine. Byte was completely devoted to personal computer hardware and software. They published schematics of hardware and listings of software. Every August they published their annual computer language special issue, featuring a different computer language each year.

Byte is where I learned about Pascal, Lisp, Smalltalk, Forth, Modula-2, Ada, Prolog, and other languages that I don’t even remember off the top of my head. They also published reviews of the various personal computer hardware and software products. It was the only magazine I ever subscribed to where I read the advertising as diligently as I read the articles.

There were other influential computer magazines, like Kilobaud and Dr. Dobb’s, but Byte was the best of the lot. I wonder how kids today learn about computers, but then I remember that they have something we didn’t: the internet. If you want to learn something about programming today you have your choice of articles, books, or even videos telling you how it’s done. For that matter, you have the complete back catalog of Byte magazine and Popular Electronics at your fingertips. Of course, they are a bit outdated now. They are interesting from a historical perspective, I guess.

When I left the small startup in Birmingham they still owed me several months’ pay. I was finally able to negotiate a swap for some flaky computer hardware in lieu of the back wages that I had little hope of ever seeing. Subsequently, I spent many a frustrating hour investigating the operating system of the little computer by translating the numerical operation codes back into assembly language mnemonics so that I could analyze them, a process called disassembly.

It was about this time that I decided to go back to college and finish my bachelor’s degree. In the next installment I will talk about the languages that I was learning, and some of my experiences working for Intergraph.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.