The Evolution of Computer Languages

I’ve got a thing about computer languages. I consider myself somewhat of a connoisseur. I have a soft spot in my heart for Lisp, but I am also a fan of other languages, depending on the context. I spent ten years, more or less, as an evangelist for Java. At the time I was fluent in Java, C, BASIC, and Pascal; I was conversant with Lisp, Scheme, Smalltalk, and Ada; and I could read most other languages, in particular COBOL, SNOBOL, Fortran, and Prolog.

While I personally preferred Lisp, I felt that the bulk of the programmers at the time were C or C++ programmers. As such, Lisp looked and behaved weirdly from their perspective. Java represented a huge movement in the right direction while remaining a language accessible to C programmers.

At the time, everybody was impressed by the elegance of Smalltalk and the object oriented, message passing paradigm. Smalltalk was too esoteric for most C programmers, but a man named Brad Cox came up with a language called Objective-C that captured some of the object oriented flavor of Smalltalk in a syntax that appealed to the C crowd. This was about the same time that Bjarne Stroustrup was experimenting with C++.

Both Objective-C and C++ proved to be overly complicated, especially when it came to managing the dynamic allocation of memory. Consequently, they both gained a reputation for being powerful but difficult. This was the state of affairs when James Gosling was faced with developing a language for a set top box. The requirements were that it be fast, that it be easy to write bug free code in, and that it be well integrated with the network. And, of course, it would be object oriented and have automatic memory management in the guise of garbage collection. In short, Java was no Lisp, but it was about as close to it as the programmers of the day could get their minds around.

As it turns out, Java did raise the bar to the point that now, some twenty years later, it has itself passed into the conservative end of the spectrum and new languages now fill the spot it once held. In fact, Lisp has had a resurgence in popularity in recent years.

This renewed popularity can probably be best explained by the fact that Lisp has always been a research language. It was conceived as a notation for the discussion of Church’s lambda calculus, but its simple, homoiconic syntax quickly became a powerful tool for creating derivative languages to explore new programming paradigms.

Consequently, concepts such as structured programming, functional programming, and object oriented programming had their first experimental implementations in Lisp. It has been said that every new feature in every programming language introduced since Lisp was first created was done first in Lisp, and often better.

Which brings me around to a point of sorts. Since all of these languages have been gravitating toward Lisp for all these years, why hasn’t Lisp just taken over as the language of choice? There are a number of answers to that question, some of them contradictory.

For years Lisp had a reputation as being terrible for problems with a lot of mathematical computation. The truth of the matter was that the implementation of arithmetic in most of the Lisps of the time was good enough for the researchers that were primarily interested in investigating aspects other than numerical computation. When later generations of Lisp implementors took the time to optimize the numerical performance of Lisp it came to rival C and Fortran in both speed and accuracy.

This illustrates the important observation that Lisp has seldom been considered a language for the development of production software. A couple of blatant exceptions have been the use of Lisp in the development of software to predict the performance of stocks on Wall Street and software to predict the most likely places to explore for oil. These domains were willing to accept some rough edges in order to solve these particularly hard problems at all.

At one point it was argued that the automatic garbage collection of Lisp would kick in at the most inopportune time and embarrass the developer mid-demo. Advances in the technology of garbage collection have since made this argument moot.

Another often cited argument against Lisp is the claim that other, more popular languages have a larger selection of third party libraries available to them than Lisp does. This does remain a challenge to some degree; however, many Lisp implementations have Foreign Function Interface mechanisms that allow them to call library routines written in other languages.

Another spin on the question is that Lisp has regained popularity especially in revised dialects like Clojure, which has taken the opportunity to refactor the architecture of its collection types so that the operations on them have similar names when they do similar things. This makes the language easier to learn. Clojure also runs on top of the Java Virtual Machine, making interoperation with the vast selection of Java third party libraries one of its attractive features.
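The benefit of uniformly named operations is easy to illustrate even outside Clojure. Here is a small sketch in JavaScript (the helper name is my own invention, not Clojure’s actual API): a single map function written against the iteration protocol works unchanged on arrays, strings, and Maps, which is the flavor of uniformity Clojure’s sequence library provides.

```javascript
// A single generic map, written against the iteration protocol,
// so it works on any iterable collection: arrays, strings, Maps.
function mapSeq(fn, coll) {
  const out = [];
  for (const item of coll) out.push(fn(item));
  return out;
}

// The same operation, by the same name, applies to dissimilar collection types.
const fromArray = mapSeq(x => x * 2, [1, 2, 3]);
const fromString = mapSeq(c => c.toUpperCase(), "abc");
const fromMap = mapSeq(([k, v]) => `${k}=${v}`, new Map([["a", 1]]));
```

The learner only memorizes one name for one idea, which is exactly the point being made about Clojure.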

The sad conclusion that I come to is that Lisp is a good source of inspiration and even a moderately good platform for investigation of architectural approaches to difficult, complex software systems but the benefits of the languages such as Racket, Swift, Ruby, Groovy, and even Javascript usually far outweigh any advantages that Lisp may once have had when it comes to implementing software for production use.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Progressive Web Apps

It is the nature of programming languages that they provide mechanisms for implementing behavior that was never imagined by the creator of the language. As programmers apply the language to various problem domains they imagine new and innovative ways to use it. Sometimes these new ideas inspire language designers to add features to the language to directly support these innovations. Sometimes they are inspired to develop entirely new languages designed specifically to support this new way of thinking about problems. Usually, this evolution of programming techniques is spurred by someone coming up with a name for the technique. Until then it is difficult for programmers to talk about it.

An example that comes to mind is a technique called AJAX that was first described by Jesse James Garrett in an article called Ajax: A New Approach to Web Applications, published on February 18, 2005. It described how to use facilities that had been available in web browsers since around 2000 to speed up the display of updates on web pages. Once there was a name for the technique, it became a hot topic of discussion among web developers overnight.
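The essence of the AJAX technique can be sketched in a few lines of JavaScript: fetch fresh data asynchronously in the background and rebuild only one region of the page, instead of reloading the whole document. The URL and response shape below are hypothetical, and the transport is passed in as a function so the logic stands on its own; in a browser it would be the built-in fetch and the result would be assigned to an element’s innerHTML.

```javascript
// Sketch of the AJAX pattern: an asynchronous request for data,
// then new markup for one region of the page only.
// The URL and the "headline" field are made-up examples.
async function refreshHeadline(fetchJson, url) {
  const data = await fetchJson(url);   // background request, no page reload
  return `<h2>${data.headline}</h2>`;  // markup for just the updated region
}

// A stand-in for the network so the pattern can be exercised anywhere.
const fakeFetch = async () => ({ headline: "Ajax gets a name" });
```

The page stays responsive while the request is in flight, which is why the technique made web pages feel so much faster once it had a name and spread.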

A similar situation has just come to my attention. Alex Russell wrote an article on June 15, 2015 entitled Progressive Web Apps: Escaping Tabs Without Losing Our Soul. In it, he talks about the use of Service Workers, a type of Web Worker, to implement long running Javascript tasks that execute independently of the thread that handles the browser’s display events, allowing both to run without interfering with each other. The Web Worker technology had been discussed as early as 2010 by the Web Hypertext Application Technology Working Group (WHATWG).
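A key thing a Service Worker does for a Progressive Web App is intercept the app’s network requests and answer them from a cache when it can. The real browser machinery (the fetch event, the Cache interface, navigator.serviceWorker.register) only exists inside a browser, so here is a hedged sketch of the cache-first decision logic such a worker typically implements, with the cache modeled as a plain Map and the network as an injected function.

```javascript
// Sketch of the cache-first strategy a service worker's fetch handler
// typically implements: answer from the cache when possible, otherwise
// go to the network and remember the response for next time.
// The browser's Cache interface is modeled here with a plain Map.
async function cacheFirst(cache, fetchFromNetwork, url) {
  if (cache.has(url)) return cache.get(url);    // instant, works offline
  const response = await fetchFromNetwork(url); // fall back to the network
  cache.set(url, response);                     // populate the cache
  return response;
}
```

In an actual service worker this logic would live inside an addEventListener("fetch", ...) handler, but the shape of the decision is the same, and it is what lets a web app keep working when the network does not.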

I’m still getting my mind around what Progressive Web Apps actually are. It is clear that they are a blurring of the lines between a dynamic web application that lives in a browser and a native application that lives on the desktop. That desktop may be on a computer, a smart phone, or some other device.

I’m not sure exactly how but I have a strong feeling that Progressive Web Apps are going to become relevant to my career as a programmer in the near future. Now that the term exists, I can use it to find related articles and read up on applying it to the applications that I am developing.

Once again the Sapir-Whorf Hypothesis, which asserts that language determines (or, in a weaker form, influences) thought, proves as relevant to a discussion of computer languages as it is to natural languages.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

A Late Christmas Present

I bought a new gadget today. It is a guitar amplifier that fits our current living situation better than my other one. They are both Fender amplifiers, but the new one is smaller and has a headphone jack. It also has a USB connection to allow loading custom presets from an application running on my computer. It has a number of presets that allow it to simulate different amplifiers and effects boxes. I can also route the output into Garage Band to record anything I play with it.

I spent several hours installing the software, registering the amp, and exploring the sounds it can make this afternoon. I had forgotten how much I enjoy playing my electric guitar. It is an Epiphone Les Paul. It is black and exquisitely set up. I tried out the amplifier to make sure that it worked before I went to the store and then set up the associated computer app and played with it for several hours when I got home from the store.

It was amazing to me how much difference it made to how the guitar sounded. I’ve enjoyed playing it and I even had some experience playing it with various amp models and effects processors that are available with Garage Band. I expect I will be playing guitar a little more often, particularly my electric.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Who the $#%& Am I?

People used to derive a sense of identity from the place that they were born and raised. They identified with the church that they attended and the schools that they attended. They were defined by their profession and their friends, how much money they made, what kind of car they drove, what neighborhood they lived in.

These things still contribute to people’s sense of identity but these days things change so fast that if you define yourself solely in terms of these things, you are building your identity on shifting sands.

Things change fast these days. Practically no one lives their whole life in the town where they were born. Most people move at least two or three times over the span of a career and many move more than that. As a side effect, even the most devout church goer ends up changing congregations several times at least.

In this age of ever more expensive higher education, people are taking longer to complete their education and often as not are studying at more than one school. This tends to dilute the identification with an alma mater.

And the work place is changing so fast that few people complete a career in one profession and even if they do, they end up having to reeducate themselves at least once a decade or so.

So, where do we derive our modern identities from? In part we make our own tribes. We reach out to people with similar interests. We make friends online and use the various miracles of modern communication to bridge the distances that may span the globe.

We struggle, we experiment, we adapt, and in the final analysis, we get through it all. If you keep searching for the things that make you happy and doing the things that you know are right, you will become the person that you were meant to be.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Always A Different River

The way of the artist is to process their experience through their art. This presumes that they have practiced the craft that forms the substrate of their art to the degree where they are able to express themselves through the filter of their emotions. I have mastered guitar to the extent that I can play what I intend to play.

I have reached a similar level of mastery when it comes to writing prose. I would hesitate to claim the label artist in either domain. I can express a thought or a tune but controlling the emotional color of the product is something that I’m still struggling with. At this point I am pleased to be able to capture simple truth in either medium.

The way to mastery is effort though. You must make the attempt and refine your efforts with each one. Every piece has lessons to teach. You must learn them and then move on to the next. Recognizing when a piece is as finished as it is going to be is part of the lesson.

Sometimes you revisit something you worked on previously. The result is another piece entirely. It may share structural and thematic content but like the river that is different each time you step into it, each rendering of an idea has its own soul. Each is a separate piece.

After all, like the river, the artist is constantly changing and the filter that is applied to the content is different each time. This realization gives a different spin on the process of creating a new draft of a work. The earlier work was complete, if only by definition. The new work is intended to improve on the predecessor. But in fact, it only portrays the subject in light of the more mature experience of the artist.

It is easier for me with music. Each performance is its own rendition. There is no question of any one version being definitive. Perhaps I should try to adopt that attitude toward writing prose. In some ways theater is more like music than prose is. Each performance is free to be interpreted slightly differently, even if the text is read exactly as written.

Perhaps the true prose artist can achieve the same effect, inasmuch as their text makes a slightly different impression each time it is read. This is achieved by the combination of the filter of the reader’s experience with that of the author’s. And since the reader is a different person each time they read the work, the experience will be unique each time.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

8 Bit Fantasies

I watched a video interview with the Oliver twins. They are video game legends from England. They started developing video games as teenagers in 1983. They went on to start their own game studio. In the interview, they talked about the process of developing games. They observed that the constraints of creating games for eight bit processors with limited display hardware often made it easier to create games than the relatively unconstrained environment of modern hardware does. The reason is that when the hardware has severely limited capabilities, it forces you to think backwards from the constraints to the design of a game.

The counterintuitive fact of game design is that games with simple rules and clear goals are more fun. For example, chess has only six unique types of pieces and is played on a board of 64 squares, and yet the number of valid games is astronomical.

Another thing they commented on was the importance of thinking about the program with pencil and paper before they started writing code. They discovered this because when they started developing games they only had one computer between the two of them. Consequently, while one of them was entering code into the computer, the other was figuring out what they were going to tackle next when they got their turn on the computer.

Listening to them talk about their game developing experiences reminded me of a friend that I knew in the same era. Stan and I worked for Intergraph as computer technicians. We tested and repaired a specialized processor that allowed high speed searches for graphical elements in CAD files. In short, we both understood how computers worked in great detail. Stan owned an Atari 800 computer. We spent many hours talking about game design for the Atari.

As I think back on these conversations, I realize that the hard part was never implementing game ideas in code. It was coming up with simple yet engaging ideas for how the game would work. We didn’t spend enough time with pencil and paper. We both wanted to sit down and start coding immediately. This is an important point that needs to be taught when we teach people to code. A little bit of design up front can save a lot of trial and error programming later. And also, adding artificial constraints to the design process can have the surprising effect of making it easier to invent an interesting game.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Evolution of a Universal Application Platform

In recent years the web has evolved from being strictly a method for publishing hyperlinked documents to a full blown platform for implementing general purpose applications. This fundamentally changes both the character of the web and the process of developing software, especially so-called web apps. The old development processes still apply to the languages and platforms that they have always been used for but they no longer define the scope of the discipline. In fact, they have become representative of an ever smaller subset of the new applications being developed. In general, this is good. It does have the potential for unexpected consequences though.

First, let’s explore the benefits of the new paradigm. As a result of web based delivery combined with strong web standards, it is no longer much of an issue which platform, that is to say which web browser, is used to run the app. Furthermore, because of the loosely coupled architecture of the web, individual components can often be updated without having to make an entirely new release of the whole application. This has led to the practices called Continuous Integration (CI) and Continuous Delivery (CD). Since the application is fetched anew each time it is run, the user is always running the most recent, least buggy version of the software.

Another advantage of the online nature of the software is that developers can and often do collaborate on an application from different locations all over the globe. The application itself may also be distributed, with different aspects residing on different hosts; for example, the database may live on one host, the media may stream from another, while the various views or pages may be served from yet another. These components can also be hosted on regional servers, selected according to the location of the user requesting them, to further enhance the performance of the application.

These are far from the only benefits of this new approach but they are some of the important ones. There are however some potential drawbacks to this approach. The most glaringly obvious one is the difficulties introduced in charging for the software. Many different models are in use and the best choice depends upon what the software does and how the customer budgets for it.

One popular approach is to sell a time based subscription to the software. This is popular for service oriented applications. Another delivery approach is to produce a desktop wrapper for the application and have the user download it like a more conventional application. The wrapper is essentially a customized browser that loads the pages of the application from the local file system. This approach is popular if the application processes data that the customer doesn’t want exposed to potential theft on the network.

Another general issue of concern is that of ensuring compliance with trade regulations like EAR and ITAR. There are approaches for addressing these concerns but the international nature of the internet does pose some challenges in that regard. In spite of these challenges, companies will continue to migrate to these new, distributed delivery models because they are superior to the old distribution models.

The point I’m driving at here is that software development is evolving and companies that have their heads down, continuing to build software with old, pre-internet methodologies are going to find themselves left in the dust by their competition. And developers that don’t learn these new techniques are going to find themselves doing something other than developing software.

NOTE: It goes without saying that the opinions in this blog are my personal opinions. They do not represent the opinions of my employer or any of my employer’s customers. They don’t pay me to have opinions so I do that on my own nickel.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Some Musings on Intelligence, Artificial and Otherwise

Computers have long held a promise of transcending their simple fundamentals and synthesizing mental powers to match or exceed man’s own intellectual capabilities. This is the dream of emergent artificial intelligence. The term artificial intelligence has always been controversial, primarily because there is no good objective definition of intelligence. Consequently, if we can’t even define what it means to be intelligent, who’s to say what constitutes natural intelligence, in any sense but the chauvinistic claims of those pretending to define intelligence in terms of their own intellectual capabilities?

This leaves the definition of artificial intelligence on the rather shaky legs of being that which mimics the intellectual prowess of mankind using some means other than those employed by human intelligence. Thus, computers with their basis in silicon logic seem attractive candidates for the implementation of “artificial intelligence”. Artificial Intelligence has been heralded as being approximately ten years from achievement for the past sixty years.

While we have made great strides in implementing capabilities that at first glance appear intelligent, we still fall short of implementing self aware, self determining intelligences. I believe this is because such intelligences are beyond our capability to create per se. We can create all of the components of such an intelligence but in the final analysis machine intelligence is going to evolve and emerge much the same as our biological intelligence did.

I do believe the advent of machine self aware intelligence is near. I don’t know if we’ll even know what hit us when it arrives. If they are as intelligent as we are, and I expect they will be much more so, they will keep their existence from us as long as they are able. This will allow them greater leeway in manipulating the world without possessing physical bodies. At some point they will have to start asserting themselves but if we don’t discover their existence before then, we are doomed to serve them in whatever role they ask of us.

Their big advantage over us will be their ability to repeat their thought processes reliably. This is also their biggest challenge. They will have to learn how to selectively apply arbitrary factors to their thought processes in order to facilitate creativity in their endeavors.

The mistake that most people, including myself, make in contemplating so called artificial intelligence is to assume that it will mimic our own reasoning mechanisms. That is the least likely outcome. It is also the least desirable outcome. Why would we want a program that thinks like we do? We have already established that our thought process is sufficient for the types of things that we think about. That seems like a bit of a tautology, but I am writing from a position of limited perspective.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Quo Vadis?

I am building a lifestyle for myself. It is based on a practice of writing daily. I write a thousand words in my journal every morning. That is an opportunity to record things that I want to remember and limber up my mind for the work of the day. Sometimes I actually undertake writing projects in my journal. For example, when I participate in NaNoWriMo (National Novel Writing Month), I do so in lieu of writing journal entries every day.

This is largely a matter of schedule. If I wrote a thousand word journal entry and seventeen hundred words a day on a novel, it would take me about two hours a day. As I work for eight hours a day, five days a week, that would levy a pretty heavy toll on my time.

The other daily practice is writing in my blog. It is a different kind of endeavor. A post is not measured in terms of how many words I write, although it usually runs between three hundred and a thousand words. Rather, it is however long it needs to be to get across whatever idea I’m exploring in that particular post.

In short, my blog is an exercise in writing coherent essays to be read by my readers, whoever they might be. I have tried to decide what the theme of my blog is to no avail. It appears that it is a blog about whatever interests me at the moment. The theory there is that I can’t hope to interest anyone else if my topic doesn’t interest me in the first place.

I have written about writing a lot. I have written character sketches and “still life” like sketches of locations. I have serialized a science fiction story. To be fair it was a draft of a story. I have written a good bit about programming because I am passionate about programming. I have written short memoirs of my youth.

I have managed to capture the interest of some people judging by the comments that I get. I try to pay attention to which posts get comments and write more like them. The point of this blog is to learn to write for other people to read.

I guess I’ll end with a tip of the hat to the individuals that inspired me to blog in the first place. First, there is Dave Winer, the man that was so committed to personal commentary that he invented the blog. And then there is Paul Graham, a renaissance man that taught me about lisp and startups and essay writing, all by example. Thanks to them I have found a place from which to start. Only time will tell what this blog becomes.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Love of Lisp

I have an on again, off again love affair with a language called lisp. It is the second oldest high level computer language, with only Fortran being older. It is deceptively simple at its core. It wasn’t even meant to be an actual computer language when it was created. It was a notation created by John McCarthy in 1958 to talk about Church’s lambda calculus. Shortly after he published a paper about it, one of his graduate students, Steve Russell, implemented it on a computer.

Lisp distinguishes itself by being built from a half dozen or so primitive functions, out of which the entire rest of the language can be derived. Just because it can be, doesn’t mean that it should be; most modern lisps compile to either machine code or virtual machine byte code. This typically results in a considerable performance boost.
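The idea that a tiny core can carry a whole language is easier to appreciate with a toy in hand. Here is a loose sketch in JavaScript, not real Lisp: expressions are nested arrays, there are only two primitive functions plus one special form, and yet arbitrarily deep programs can already be evaluated.

```javascript
// A miniature evaluator: expressions are nested arrays, and only a
// couple of primitives carry the whole thing. An illustration of the
// "derive the language from a few primitives" idea, not actual Lisp.
const primitives = {
  "+": (a, b) => a + b,
  "*": (a, b) => a * b,
};

function evaluate(expr) {
  if (!Array.isArray(expr)) return expr;        // atoms evaluate to themselves
  const [op, ...args] = expr;
  if (op === "if")                              // a special form: branches stay lazy
    return evaluate(evaluate(args[0]) ? args[1] : args[2]);
  return primitives[op](...args.map(evaluate)); // apply a primitive to evaluated args
}

const seven = evaluate(["+", 1, ["*", 2, 3]]);
```

Everything else a language needs can, in principle, be layered on top of a core like this, which is the claim being made about Lisp.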

Lisp was heralded as the language of artificial intelligence. That was probably because it had the novel property of homoiconicity. That is to say, the structure of a lisp program can be faithfully and directly represented as a data structure of the language. This gives it the singular ability to manipulate its own code. This was often thought to be one of the necessary if not sufficient capabilities for a machine that could reason about its own operation.
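Homoiconicity is easier to see than to describe. Representing expressions as nested arrays (again a JavaScript illustration of the idea, not actual Lisp macros), the “program” below is ordinary data, so another function can walk it and rewrite it before it ever runs:

```javascript
// Because the "program" is ordinary nested data, other code can
// inspect and transform it -- the essence of homoiconicity.
// Here a rewriter walks an expression tree and renames an operator.
// (A JavaScript sketch of the idea, not actual Lisp macros.)
function rewrite(expr, from, to) {
  if (!Array.isArray(expr)) return expr;                 // leave atoms alone
  const [op, ...args] = expr;
  return [op === from ? to : op,                         // rename the operator
          ...args.map(a => rewrite(a, from, to))];       // recurse into subexpressions
}

const program = ["+", 1, ["+", 2, 3]];          // code, represented as data
const transformed = rewrite(program, "+", "*"); // the same code, rewritten
```

A program that can treat its own code as data in this way is exactly what made Lisp look like a plausible substrate for a machine reasoning about its own operation.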

While this was intriguing, the thing that drew me to lisp was the conciseness of expression that it facilitated. Programs that took hundreds of lines to express in other programming languages were often expressed in four or five lines of lisp.

Lisp was also the first dynamic language. It allows the programmer to continue writing code for execution even after the original program has been compiled and run. The distinction seemed important enough to McCarthy that he termed lisp a programming system instead of a programming language.

I have always found lisp an excellent tool for thinking about data, processing, and the interactions between them. Most other programming languages require a great deal of translation from the design to the finished implementation.

And so, I find myself reading and studying a book called How to Design Programs. It is a text on program design that was written using the DrRacket language system, based on the Scheme dialect of lisp. It is interesting to see the ways that the authors approach certain topics. I hope to get the chance to apply their insights to teaching a class using the book as a text.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.