Love the One You’re With

Twenty-five years ago the computer industry was fraught with religious wars. There was the OS war, purportedly about which operating system was best for a personal computer. Similarly, there was the programming language war. And, last but most certainly not least, there was the text editor war over which was the best editor for entering text into a computer.

There are many more options in each of these categories now, and people still have strong opinions about which option they prefer. The religious zealotry has largely subsided, though. People are finally more focused on getting the job done than on finding the absolute best way to do something on a computer.

I have my opinions, particularly when it comes to text editors. Text editing is my bread and butter. I write programs, documents, and stories with a text editor. For years I was an emacs snob. Actually, it was more a matter of muscle memory. I had used emacs for so long that I no longer had to consciously think of a command; my fingers just typed it when I thought about executing it. I have joked about muscle memory, but it is true: I am immediately more productive when I sit down at a machine with emacs installed on it.

But part of the job of a senior developer is to help more junior developers figure out ways to be more productive. I could have taken the time to get emacs installed on the lab machines. There is a process, and I’ve been told to feel free to do it. But on further consideration I have decided to push past my comfort zone and learn more about the vi editor, or more specifically the vim editor.

Vi has been around since soon after the rise of the Unix operating system. It was written by Bill Joy, who later co-founded Sun Microsystems. It is notable if only for the fact that it is delivered with practically every Unix distribution in the world. It has a reasonably rich command set. I wanted to get past the point of having to look each command up in order to use it, so I started using vim to develop the various test cases that I am responsible for verifying.

I have only been using it in this capacity for a couple of days, but I can report that it is much more capable than I would have thought. Over the years all the major programming text editors have added features like intelligent code formatting and keyword highlighting. Code highlighting can be very useful for calling attention to inadvertent typographical errors in your program.

There are other editors in the running today. For instance, gedit is the graphical text editor bundled with the GNOME desktop on many Linux systems. Atom is a saucy little editor written in JavaScript, styled with CSS, and capable of syntax highlighting with the best of them. It has recently been upgraded to ease integration with GitHub.

These days the choice boils down to the answers to these questions: What editor does everyone else use? What are most of the developers on the project used to? What features are you most familiar with, and which ones fit the kind of use you intend?


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Evolution of Programming Paradigms

My career as a programmer has spanned more than forty years. In that time I have seen a number of programming paradigms promise to make programming easier, more robust, and less error prone. Let’s review these paradigms in roughly chronological order.

Early in my career the latest advance in programming paradigms was Structured Programming. The essential insight of Structured Programming was that programs should be written using a small set of control structures and that the primitive goto instruction should be shunned. The benefit was more readable code that could be more easily analyzed. Goto-ridden code was often referred to as spaghetti code.
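
To make that concrete, here is a minimal sketch in Python (my own illustration, not part of the original discussion) that uses only the canonical structured forms: sequence, selection, and iteration, each with a single entry and a single exit, and no jumps anywhere.

    # Counting even numbers using only structured control flow:
    # sequence, selection (if), and iteration (while). No goto needed.
    def count_evens(numbers):
        count = 0                     # sequence
        i = 0
        while i < len(numbers):       # iteration
            if numbers[i] % 2 == 0:   # selection
                count += 1
            i += 1
        return count

    print(count_evens([3, 4, 7, 8, 10]))   # prints 3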

Next to come into vogue was the paradigm called Object Oriented Programming. Objects were intended to provide several benefits. First, they grouped functions with the data those functions operated on. Second, they introduced the concept of encapsulation, or data hiding: data, and functions for that matter, that were not declared public could not be accessed or modified by code outside the object.

Most popular Object Oriented languages were based on the idea of Classes, which specified the data and functional template from which individual object instances were created. These Classes were often organized in a hierarchy such that the more general Classes specified the common data and functional characteristics of a group of objects. The more specific members of the hierarchy would then add or override data or functional characteristics to refine the representation and behavior of their instances.
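
A short Python sketch of both ideas, using hypothetical account classes of my own invention: the leading underscore marks data as private by convention, and a more specific subclass overrides behavior inherited from a more general class.

    class Account:
        # General class: common data and behavior for all accounts.
        def __init__(self, balance):
            self._balance = balance   # underscore: not part of the public interface

        def withdraw(self, amount):
            self._balance -= amount
            return self._balance

    class SavingsAccount(Account):
        # More specific subclass: overrides withdraw to refine behavior.
        def withdraw(self, amount):
            if amount > self._balance:
                raise ValueError("savings accounts may not be overdrawn")
            return super().withdraw(amount)

    s = SavingsAccount(100)
    print(s.withdraw(40))   # prints 60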

As it turns out, in most cases class hierarchies just added unnecessary complexity to programs. They also introduced problems, such as what happens when a class inherits from two different parent classes. For example, suppose that you had a BankAccount class that represented the ledger of a bank account: the deposits, the withdrawals, and the current balance. Suppose there was another class that represented a PrintableReport. Suppose that you wanted to create a class BankAccountReport that inherited attributes from both the BankAccount class and the PrintableReport class. Now here’s the quandary. Both superclasses have an operation called addItem. Which operation should the child class inherit, the one from BankAccount or the one from PrintableReport? This created so many problems that many Object Oriented languages only allowed a class to have a single superclass.
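
Here is a rough Python rendering of the quandary, using the class names from the example above (Python spells the operation add_item and, unlike the single-superclass languages just mentioned, resolves the conflict with a method resolution order that simply prefers the first parent listed):

    class BankAccount:
        def add_item(self, item):
            print("recording ledger entry:", item)

    class PrintableReport:
        def add_item(self, item):
            print("adding report line:", item)

    class BankAccountReport(BankAccount, PrintableReport):
        pass   # which add_item does this class inherit?

    r = BankAccountReport()
    r.add_item(100)   # prints "recording ledger entry: 100"
    # Python's answer: follow the method resolution order, which
    # prefers BankAccount because it is listed first.
    print([c.__name__ for c in BankAccountReport.__mro__])
    # ['BankAccountReport', 'BankAccount', 'PrintableReport', 'object']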

Next on the scene was Aspect Oriented Programming. Its claim to fame was a solution to the problem of multiple inheritance or, as it referred to it, cross-cutting concerns. Aspects were a mechanism that allowed the programmer to conditionally alter the behavior of an object by modifying the behavior of its class without modifying that class’s implementation. It did this by capturing calls to the class’s methods and allowing the programmer to intervene before or after the call to the aspected operation of the underlying class.
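
A toy Python sketch of the mechanism (the helper name and classes are my own, not any particular AOP framework): a wrapper intercepts calls to a method so that advice runs before and after each call, while the class’s own implementation is left untouched.

    import functools

    def add_aspect(cls, method_name, before=None, after=None):
        # Replace cls.method_name with a wrapper that runs the
        # before/after advice around the original implementation.
        original = getattr(cls, method_name)

        @functools.wraps(original)
        def wrapper(self, *args, **kwargs):
            if before:
                before(self, *args, **kwargs)
            result = original(self, *args, **kwargs)
            if after:
                after(self, result)
            return result

        setattr(cls, method_name, wrapper)

    class Account:
        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            self.balance += amount

    # The aspect: log every deposit without editing Account itself.
    add_aspect(Account, "deposit",
               before=lambda self, amount: print("about to deposit", amount),
               after=lambda self, result: print("balance is now", self.balance))

    a = Account()
    a.deposit(50)   # advice fires before and after the real call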

The latest paradigm is not really new. Functional Programming goes back to the early days of Lisp. It says that functions, in the mathematical sense, should map inputs to outputs without causing any side effects. Functions should further be first-class entities in the language. This means that they can be stored in variables and passed as arguments to other functions.
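
Both properties are easy to show in Python, which is my stand-in here rather than a proper functional language: square is pure (its result depends only on its argument and it changes nothing outside itself), and it can be stored in a variable and handed to another function.

    # A pure function: output depends only on input, no side effects.
    def square(x):
        return x * x

    # First-class: a function can be stored in a variable...
    f = square

    # ...and passed as an argument to another function.
    def apply_twice(fn, value):
        return fn(fn(value))

    print(f(3))                           # prints 9
    print(apply_twice(square, 3))         # prints 81
    print(list(map(square, [1, 2, 3])))   # prints [1, 4, 9]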

Strict functional programming is difficult if not impossible to achieve practical results with. Most programs take input from somewhere and output results to somewhere else. Taking input and outputting results are both violations of the constraint that the output of a function should depend solely on its input. Most functional languages therefore have well-defined accommodations for operations that aren’t strictly functional.

This was a whirlwind tour. I hope it gave an overview of the evolution of programming paradigms over the last forty years. Look these paradigms up on Wikipedia if you want to know more about them, or if I have managed to confuse you totally.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Purposes of Computer Programming Languages

Computer programming can be viewed on many levels. Superficially it is a means for specifying how a computer should perform a given task. If that were all it was, we could just enter a long sequence of numbers that correspond to the machine instructions that accomplish the task.

Modern computer languages go much further. They provide platforms for communicating not only the specific tasks to accomplish, but also our assumptions about them, the structure of the data they will manipulate, and the algorithms that will accomplish them.

But even beyond that, they provide a common language for people to talk to each other about programming tasks. As such, they evolve with the growth of our understanding of the activity of programming, its attributes, and the algorithms that are the final product of the activity.

To summarize, we write programs to enable the automated solution of computational problems by computers, but also to communicate with each other about these computational processes. In the interest of clarity of communication, programming languages have trended toward higher and higher levels of abstraction, with an ever-increasing investment in compiler technologies that translate these abstractions into efficient executables that perform the tasks we set out to specify in the first place. It is ultimately this communication that offers their greatest benefit.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Lisp Fundamentals

NOTE: for all my non-computer friends and readers: if technical topics bother you or make you tense, perhaps today’s post is not for you. If you are not given cold sweats when facing a new topic with the flavor of computers or programming, by all means please join me.

There are many things that Lisp, the programming language, does right. It never ceases to amaze me, and I am going to once again take a few minutes to discuss exactly what some of those things are and why they are so important.

Lisp was not originally conceived of as a programming language. It was invented by John McCarthy as a notation to enable discussion about Alonzo Church’s lambda calculus.

Lisp is characterized by the structure of its expressions, called forms. The simplest of these is the atom. An atom is a single symbol or literal and represents a value. For instance, 42 is a numeric literal whose value is the number 42. Similarly, “Hello, world!” is a string literal that represents itself as its value. There are also symbols, which are strings of unquoted characters that are used as fundamental elements of Lisp. There are rules for how a valid symbol is formed, but for now it is sufficient to know that a symbol starts with a letter and is composed of zero or more additional characters, each of which can be a letter, a number, or one of a collection of certain punctuation characters. Since the exact list of other characters varies among the dialects of Lisp, we will leave them unspecified at present.

The other type of form is the list. A list consists of a left parenthesis, followed by zero or more forms, ending with a right parenthesis. Notice I said forms instead of symbols. The implication is that you can have lists embedded in other lists, as deeply nested as you like. This proves to be an interesting trait, as we will soon see.

There is one more fundamentally interesting aspect of Lisp: in a typical Lisp form, the first element in the list after the left parenthesis is taken to be an operator. The subsequent elements in the list are considered its arguments. The operator is either a function, a macro, or a special form. Macros and special forms, while extremely important and interesting, are beyond the scope of this discussion.

That leaves us with the operator as a function. A typical Lisp function form is evaluated as follows. The first element is examined to determine what kind of form the list is. If it is a function, the rest of the elements in the list are evaluated and collected in a list, and the function is applied to them. If another list is encountered as one of the arguments, it is evaluated in exactly the same way.

For example, consider the expression (+ 4 (* 8 7) (/ (- 26 8) 9)). The first element is +, a symbol bound to the function that represents addition. The second item in the list is 4. It is a number that represents itself. The next element in the list is the list (* 8 7). When it is evaluated, the 8 and 7 are arguments to *, the multiplication function, and the value returned is 56. The final element in the top level list is (/ (- 26 8) 9). The / is taken as the division function and is applied to the evaluation of (- 26 8), the subtraction function applied to 26 and 8, which returns 18. When you divide 18 by 9, you get the value 2. Thus the top level argument list consists of 4, 56, and 2. When you add those three numbers you get 62, which is the value that the expression ultimately returns.
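
To make the evaluation rule concrete, here is a toy evaluator sketched in Python (my own illustration; real Lisp readers and evaluators do far more). The form is written as nested lists, and the evaluator applies the operator’s function to its recursively evaluated arguments, exactly as just described.

    # A toy evaluator for arithmetic Lisp forms written as nested lists.
    # Bindings from operator symbols to functions:
    FUNCTIONS = {
        "+": lambda *args: sum(args),
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }

    def evaluate(form):
        if not isinstance(form, list):
            return form   # an atom (here, a number) represents itself
        # A list form: the first element names the function; the
        # remaining forms are evaluated the same way, then applied.
        fn = FUNCTIONS[form[0]]
        args = [evaluate(arg) for arg in form[1:]]
        return fn(*args)

    # (+ 4 (* 8 7) (/ (- 26 8) 9)) as nested lists:
    expr = ["+", 4, ["*", 8, 7], ["/", ["-", 26, 8], 9]]
    print(evaluate(expr))   # prints 62.0

Note that expr is ordinary list data: a program could inspect or rewrite it before evaluating it, which foreshadows the point about homoiconicity below.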

This simple mathematical expression illustrates another fundamental aspect of Lisp. It is expressed as a list form which, given a set of bindings to arithmetic functions, expresses a simple program. This identical representation of both data and program in Lisp, called homoiconicity by the way, is at the heart of much of Lisp’s power. Since Lisp programs are indistinguishable from Lisp data, they can be manipulated by Lisp programs to great advantage.

Think of it like this. Lisp can, in some sense, think about how it is thinking and modify that thinking as it desires. This is why artificial intelligence investigators like using Lisp so much: it is so similar to the simplified models of intelligence that they are building that the boundary begins to blur.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Evolution of Computer Languages

I’ve got a thing about computer languages. I consider myself to be somewhat of a connoisseur. I have a soft spot in my heart for Lisp, but I am also a fan of other languages, depending on the context. I spent ten years more or less as an evangelist for Java. At the time I was fluent in Java, C, BASIC, and Pascal; I was conversant with Lisp, Scheme, Smalltalk, and Ada; and I could read most other languages, in particular COBOL, SNOBOL, Fortran, and Prolog.

While I personally preferred Lisp, I felt that the bulk of the programmers at the time were C or C++ programmers, and to them Lisp looked and behaved weirdly. Java represented a huge movement in the right direction while remaining a language accessible to C programmers.

At the time, everybody was impressed by the elegance of Smalltalk and its object oriented, message passing paradigm. Smalltalk was too esoteric for most C programmers, but there was a guy named Brad Cox who came up with a language called Objective-C that captured some of the object oriented flavor of Smalltalk in a syntax that appealed to the C crowd. This was about the same time that Bjarne Stroustrup was experimenting with C++.

Both Objective-C and C++ proved to be overly complicated, especially when it came to managing the dynamic allocation of memory. Consequently, they both gained a reputation for being difficult if powerful. This was the state of affairs when James Gosling was faced with developing a language for a set-top box. The requirements were that it be fast, easy to write bug-free code in, and well integrated with the network. And, of course, it would be object oriented and have automatic memory management in the guise of garbage collection. In short, Java was no Lisp, but it was about as close to it as the programmers of the day could get their minds around.

As it turns out, Java did raise the bar to the point that now, some twenty years later, it has itself passed into the conservative end of the spectrum, and new languages fill the spot it once held. In fact, Lisp has had a resurgence in popularity in recent years.

This renewed popularity can probably best be explained by the fact that Lisp has always been a research language. It was conceived as a notation for the discussion of Church’s lambda calculus, but its simple, homoiconic syntax quickly became a powerful tool for creating derivative languages to explore new programming paradigms.

Consequently, concepts such as structured programming, functional programming, and object oriented programming had their first experimental implementations in Lisp. It has been said that every new feature in every programming language introduced since Lisp was first created was done first in Lisp, and often better.

Which brings me around to a point of sorts. Since all of these languages have been gravitating toward Lisp for all these years, why hasn’t Lisp just taken over as the language of choice? There are a number of answers to that question, some of them contradictory.

For years Lisp had a reputation for being terrible for problems involving a lot of mathematical computation. The truth of the matter was that the implementation of arithmetic in most Lisps of the time was merely good enough for researchers who were primarily interested in investigating aspects of computing other than numerical performance. When later generations of Lisp implementors took the time to optimize the numerical performance of Lisp, it came to rival C and Fortran in both speed and accuracy.

This illustrates the important observation that Lisp has seldom been considered a language for the development of production software. A couple of notable exceptions have been the use of Lisp in software to predict the performance of stocks on Wall Street and software to predict the most likely places to explore for oil. These domains were willing to accept some rough edges in order to solve such particularly hard problems at all.

At one point it was argued that Lisp’s automatic garbage collection would kick in at the most inopportune time and embarrass the developer mid-demo. Advances in the technology of garbage collection have since rendered this argument moot.

Another often cited argument against Lisp is the claim that other, more popular languages have a larger selection of third party libraries available than Lisp does. This does remain a challenge to some degree; however, many Lisp implementations have Foreign Function Interface mechanisms that allow them to call library routines written in other languages.

Another spin on the question is that Lisp has regained popularity especially in revised dialects like Clojure, which has taken the opportunity to refactor the architecture of its collection types so that the operations on them have similar names when they do similar things. This makes the language easier to learn. Clojure also runs on top of the Java Virtual Machine, making interoperation with the vast body of third party Java libraries one of its attractive features.

The sad conclusion that I come to is that Lisp is a good source of inspiration and even a moderately good platform for investigating architectural approaches to difficult, complex software systems, but the benefits of languages such as Racket, Swift, Ruby, Groovy, and even JavaScript usually far outweigh any advantages that Lisp may once have had when it comes to implementing software for production use.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

8 Bit Fantasies

I watched a video interview with the Oliver twins. They are video game legends from England. They started developing video games as teenagers in 1983. They went on to start their own game studio. In the interview, they talked about the process of developing games. They observed that the constraints of creating games for eight bit processors with limited display hardware often made it easier to create games than the relatively unconstrained environment of modern hardware. The reason this is so is that when the hardware has severely limited capabilities, it forces you to think backwards from the constraints to the design of a game.

The counterintuitive fact of game design is that games with simple rules and clear goals are more fun. For example, chess has only six unique types of pieces and is played on a board of 64 squares, and yet the number of valid games is astronomical.

Another thing they commented on was the importance of thinking about the program with pencil and paper before they started writing code. They discovered this because when they started developing games they only had one computer between the two of them. Consequently, while one of them was entering code into the computer, the other was figuring out what they were going to tackle next when they got their turn on the computer.

Listening to them talk about their game development experiences reminded me of a friend that I knew in the same era. Stan and I worked for Intergraph as computer technicians. We tested and repaired a specialized processor that allowed high speed searches for graphical elements in CAD files. In short, we both understood in great detail how computers worked. Stan owned an Atari 800 computer. We spent many hours talking about game design for the Atari.

As I think back on these conversations, I realize that the hard part was never implementing game ideas in code. It was coming up with simple yet engaging ideas for how the game would work. We didn’t spend enough time with pencil and paper. We both wanted to sit down and start coding immediately. This is an important point that needs to be taught when we teach people to code. A little bit of design up front can save a lot of trial and error programming later. And adding artificial constraints to the design process can have the surprising effect of making it easier to invent an interesting game.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Making of a Programmer Part III

I worked at Intergraph for six years. When I started there it was a startup in the exponential growth phase. It was exhilarating to come to work every day. I was working on cutting edge technology. My first job at Intergraph was as a technician. I tested and repaired specialized computers that were optimized to search graphics files for patterns. My Army training had prepared me for just such a job.

I enjoyed my work for six months or so. Then I got tired of working with broken computers all the time. One of the engineers discovered that I knew how to program and gave me some tasks to do. Before I knew it, I was working for him and he was the head of the newly formed networking department.

I was given the task of writing a program to copy files from one computer to another over the network. Originally the plan was to write the program in Pascal, a high level, structured programming language. I was uncomfortable with that. I kept finding problems with the implementation of Pascal that would have to be worked around. I finally suggested that we implement the program in assembly language.

Programs on the PDP-11 were often given three letter names. I called my file transfer program IFM and told my boss it stood for Intergraph File Management. I told my friends the truth. It stood for It’s F%#$ing Magic.

After making IFM work between two PDP-11 computers, my next challenge was to add VAX-11 computers to the mix; that is, I had to transfer files from PDP-11s to VAXes, VAXes to PDP-11s, and VAXes to VAXes. Luckily, VAX-11 assembly language was a superset of PDP-11 assembly, so the translation went smoothly.

The next challenge came when Intergraph decided to build an intelligent desktop workstation that ran Unix. I was provided an early prototype of the workstation, serial number 8, to get file transfer working. This time the underlying file system was different from the one on the DEC machines, so I had to start from scratch. I decided to use C, the system programming language made famous by Unix.

My new file transfer program was called FMU for File Management Utility. I leave it to the reader’s imagination what that actually stood for. C proved to be a powerful language and I learned to employ all kinds of heuristics to determine how to preserve the semantics of the files while transferring them from one type of file system to another.

It was during this time that I went back to college. I had over two years of credit under my belt, and the college that I was attending didn’t have a Computer Science degree program at the time. So I took Physics, Calculus, and all the computer science classes that they offered. I ended up getting a Bachelor of Science degree in General Studies.

I worked full time while getting that degree. The last term before I graduated, the college announced that they were going to start offering a Computer Science degree. I asked what it would take to get a second degree in Computer Science. They looked at my transcript and said that all I would have to do would be to take forty more hours to establish residency and I would have the degree.

By this time I was friends with most of the professors in the Computer Science department. I arranged to teach one class and take two every term until I finished my second degree. I taught Operating Systems a number of times, as well as 8086 Assembly Language and Artificial Intelligence. It was a great time, but all good things must come to an end.

One of the other colleges in the area had a number of Computer Science professors with PhDs who didn’t get tenure. The college that I was attending snapped them up and didn’t renew the contracts of any of the adjunct professors without a PhD. I took my second Bachelor’s, in Computer Science, and called it a day.

Next installment I’ll talk about my experiences working on the Star Wars program, working for NASA, and landing my dream job at an AI lab.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.