The Forest for the Trees

It is slightly unnerving to discover that, in spite of no particular planning on your part, you have ended up living in one of the best places to live. Of course there are all sorts of caveats on that statement. If you mess with the criteria enough, you can say that about any place. But when national magazines write articles about the ten best places to live and work in the United States and your city keeps showing up on the list, it’s hard to deny that there is something to it.

There are several ways that this might have happened and then there is the way that it actually happened. The latter is as good a story as any. It was 1975 and I was a college student. My wife was pregnant and I needed a job. I took the civil service exam for postal worker and made top marks on it. The problem was, veterans were given a 10 point lead over non-veterans. So, someone who made a 95 on the test ended up with an adjusted score of 105 and was given preference for the job.

I looked for work for weeks, but I didn’t know how to look for a job. The only jobs I’d ever had were the result of knowing someone who knew me or my parents. My job experience was somewhat limited. I had been a guitar player and gunfighter in a western theme park, and I had been a probationary supply clerk for the Illinois Central Gulf Railroad thanks to my father-in-law. The gig with the railroad was messed up when I had a minor wreck and was out of work for a week.

It occurred to me that perhaps if I joined the Army I could expand my job skills, and at the very least I would be a veteran and thus eligible for the preferential hiring policy at the post office. I talked with the recruiter and told him that I wanted to enlist for the longest school that had training in fixing digital computer hardware. He suggested Pershing Missile Repairman, and I embarked on an adventure that would lead me to Huntsville, Alabama.

I spent nine months in Pershing school where I learned to repair two different computers and related peripheral hardware. Part of that peripheral hardware was the guidance system of the Pershing missile. It was an exciting time. After I graduated from the school, I was sent to Neu-Ulm, Germany to practice my newly learned trade. After an adventurous two years there, I got sent back to Huntsville to be an instructor in the Pershing school.

After I got out of the Army, I knew I wanted a career in computers. After an abortive start with a small startup in Birmingham, I returned to Huntsville once again. I didn’t plan to live in Huntsville. There were just a lot of good jobs that required my skills with computers. I started out at Intergraph, a rapidly growing computer-aided drafting startup. I had several jobs in the aerospace industry, including a twenty-five-year run with one of the leading airplane manufacturers.

Life has been good. But now I find myself looking around for something new. I want to use my experience with computers but I also want to explore my newly developed writing skills. I also want to change my work hours some. I’m tired of getting up before dawn to get my writing done and get to work by eight o’clock. I’ve always been more of an afternoon person anyway.

This certainly didn’t go the way I expected it to but four thirty comes early tomorrow and I’m still committed to my current job. Consequently, I don’t plan on scrapping this post, or rewriting it. I will tuck it in bed, tag it, write a title for it, and head for bed myself.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Blog Struggles Continue

Something interesting happened to me on the way to work this morning. I thought of a good idea for a blog post. I intended to jot down a reminder to myself when I got to work but I forgot. This evening when I sat down to write my blog, I couldn’t remember my idea. That is the very epitome of frustration.

I thought perhaps if I wrote about the event, it would jog my memory. So far, that hasn’t worked. I think that by the time I do any necessary errands after work, pick up dinner, and come home, I’ve run out of stamina to do anything else. My mind is a total blank.

I spent a little bit of time reading Wikipedia. I have occasionally found that I am inspired to write something by reading arbitrary articles that I find interesting there. That works better when I’m not at the end of a hard day. I’d actually prefer to write my blog post first thing in the morning. The problem is, I don’t have time to do that before I go to work.

I have started giving some serious thought to looking for another job and retiring from my current job. I have considered trying to write for a living. I’m not confident in my ability to do that yet. Jumping in the deep end on faith is a young man’s game. I may not consider myself that old but I’m definitely not a young man any longer.

I am interested in writing about the history of computers, the history of computer science, and the history of computer languages. As I browse Wikipedia and search the internet with Google, I discover there is a lot of studying to be done before I know enough about it to tell a coherent story. And after all, the most important part of history is story.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Purposes of Computer Programming Languages

Computer programming can be viewed on many levels. Superficially, it is a means for specifying how a computer should perform a given task. If that were all it was, we could just enter a long sequence of numbers that correspond to the machine instructions that accomplish the task.
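To make that concrete, here is a small illustration in Python (my own example, not anything from the original post): the interpreter ultimately runs a function as a sequence of numeric instruction codes for its virtual machine, which is roughly the level we would all be working at if there were no language sitting on top.

    import dis

    # The function at the level we actually think about the problem.
    add = lambda x, y: x + y

    # The same function as the raw sequence of numbers the interpreter executes.
    print(list(add.__code__.co_code))

    # dis decodes those numbers back into readable opcode names.
    dis.dis(add)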

Modern computer languages go much further. They provide platforms for communicating not only the specific tasks to accomplish, but also our assumptions about those tasks, the structure of the data they will manipulate, and the algorithms that will accomplish them.

But even beyond that, they provide a common language for people to talk to each other about programming tasks. As such, they evolve with the growth of our understanding of the activity of programming, its attributes, and the algorithms that are the final product of the activity.

To summarize, we write programs to enable the automated solution of computational problems by computers, but also to communicate with each other about these computational processes. In the interest of clear communication, programming languages have trended toward higher and higher levels of abstraction, with an ever-increasing investment in compiler technology to translate those abstractions into efficient executables that carry out the tasks we set out to specify in the first place. It is ultimately this communication that offers their greatest benefit.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Programming Principle to Ponder

In my years as a programmer I have discovered a number of simple facts about computers that aren’t obvious at first. I thought that I’d share a few of them with you.

The first is what I call the fundamental theorem of Computer Science. It is this: in any system, you can trade processing for storage and vice versa. An example may serve to illustrate what I mean. Say, for instance, you need a function that returns the sine of each integer number of degrees between 0 and 89. You can either write an algorithm that computes the sine on demand, or you can have an array of 90 floats preloaded with the sines of the first 90 integers.

The first will be more expensive in terms of the time that it takes to return a result. The second will be more expensive in terms of the memory that it takes to store the table. The correct choice will depend on whether you need a fast answer or a memory-efficient one.
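Here is a minimal sketch of the two choices in Python; the function and table names are mine, invented for the illustration:

    import math

    # Trade storage for processing: compute the sine every time it is asked for.
    def sine_computed(degrees):
        return math.sin(math.radians(degrees))

    # Trade processing for storage: pay for 90 floats up front, then each call
    # is just an index into the table.
    SINE_TABLE = [math.sin(math.radians(d)) for d in range(90)]

    def sine_looked_up(degrees):
        return SINE_TABLE[degrees]

    # Both answers agree; they differ only in where the cost is paid.
    assert abs(sine_computed(45) - sine_looked_up(45)) < 1e-12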

Another fundamental principle of programming I learned from a book entitled The Pragmatic Programmer by Andy Hunt and Dave Thomas. They call it the DRY principle, which stands for Don’t Repeat Yourself. The same idea was espoused earlier by database gurus in the form of normalization.

The idea is that if you store the same value in more than one place in your program, you run the risk of changing it in one of those places and forgetting to change it in the others. Honoring the principle is a simple thing to do, yet it helps avoid hard-to-find bugs.
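A minimal Python sketch of the idea; the tax rate and the function names are hypothetical, invented just for the example:

    # The rate is defined in exactly one place, so a change cannot be
    # forgotten in some other corner of the program.
    SALES_TAX_RATE = 0.09

    def price_with_tax(price):
        return price * (1 + SALES_TAX_RATE)

    def tax_owed(price):
        return price * SALES_TAX_RATE

    # The repeated-yourself version would scatter the literal 0.09 through
    # both functions; updating the rate would then mean finding every copy.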

One more and I’ll call it a night. It was first brought to my attention by David Heinemeier Hansson (or DHH, as he is commonly referred to by the community), the original architect and author of Ruby on Rails. He calls it convention over configuration. To explain, I need to describe how people handled configuration of their programs before.

There were two popular approaches. One was to specify the configuration of your program with so-called command-line options. These usually consisted of symbols, either single letters or entire words, each associated with a value for the option.

This soon got rather cumbersome if there were a lot of options. The first attempt to simplify the specification of options was to create a file with a special syntax that made it easy for a program to associate each option specifier with the value to be assigned to it. The problem was that the configuration file syntax was often complex and error-prone. XML, for example, was a popular syntax for configuration files.

And when people started using configuration files, they proliferated, such that every new library you adopted in your program would have its own configuration file.

DHH observed that a large percentage of the things that are configured by configuration files can be established by having conventional configurations. For example, a common configuration parameter was the specification of the directory where particular files of interest to the application could be found. Instead, DHH established a default directory layout that the application used and documented it.

He asserted that software should be opinionated. It should expect things to be done a particular way and reap the benefits of simplification that these assumptions enabled.
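Here is a rough Python sketch of the difference; the directory names and the resolver function are hypothetical, not anything from Rails. The application assumes a documented, conventional layout and only consults configuration for the rare case that needs to deviate from it.

    from pathlib import Path

    # The convention: a documented default layout the application simply assumes.
    DEFAULT_LAYOUT = {
        "templates": "app/views",
        "static": "app/public",
    }

    def resolve_dir(kind, overrides=None):
        """Prefer an explicit override, but fall back to the convention."""
        overrides = overrides or {}
        return Path(overrides.get(kind, DEFAULT_LAYOUT[kind]))

    print(resolve_dir("templates"))                           # app/views
    print(resolve_dir("templates", {"templates": "legacy"}))  # legacy, only when needed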

I think the thread that runs through these principles is that the most important thing a programmer needs to do is think about the problem they are trying to solve and the ways they might solve it, instead of doing what most of us do, which is to try to reuse techniques that were successful on previous projects.

This is only bad if it is done without careful thought about the project at hand. Are you trying to drive a nail with a monkey wrench? Programmers are often too quick to start coding instead of taking the time to think about the problem.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Kellie’s Professional Origin Story

When I was a teenager, I was interested in electronics. I liked listening to AM radio broadcasts from all over the country. I read Popular Electronics magazine. I built small electronic kits and took apart old inoperable television sets for their parts but mostly to learn how to solder and desolder.

When I was in my first year of college, Popular Electronics published a construction article on how to build your own personal computer. I had always been intrigued by computers and avidly read science fiction books and watched science fiction TV shows and movies. I wanted a computer. But the $600 price tag was way beyond my meager student finances.

I found a PLATO terminal in the library and obtained an account on it. PLATO was a time-sharing system that provided computer-based instruction in everything from psychology and physics to literature and computer programming. In particular, there were lessons on the TUTOR language, the computer language in which all of the instructional material on PLATO was implemented. I pursued it with great relish and wrote short animated presentations with it.

Time passed. I got married. We were extremely broke students. Inevitably my wife got pregnant and I had to look for a job. It was during a recession and I had no marketable skills. I decided to remedy that situation and spoke with an Army recruiter. I told him I wanted to enlist for the longest school that taught computers. I figured that they wouldn’t spend any more time than necessary on training and consequently the longest school would have the most content. I was right.

For the next year I learned every circuit in the commercial minicomputer that served as the ground control computer of the Pershing missile system. Along the way, I learned a little bit about programming from my course work and a lot about programming from magazines like Byte, Kilobaud, and Compute! to name just a few.

I tell this story to explain that my perspective on computers has always been two-pronged. That is, I have an appreciation for both the hardware that comprises the computer and the software that runs on it. Most people in the computer business specialize in one or the other. I decided early in my career that I liked to write software, but I also enjoy understanding how the hardware works so that I can make the computer do things that other people might not imagine it is capable of.

Another of my long term interests has been in Artificial Intelligence. But that is a topic best left for another post. Dinner and the weekend beckon and I have managed to fulfill my daily writing goals early today.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

More Rant

As my colleague Danny Cutts pointed out in a comment on my post yesterday, I criticized the status quo in software development without making any constructive suggestions for how language selection ought to be done.

The short answer to that question is that it is a topic that sorely needs research. There are, in fact, people in academia all over the world who are investigating these issues. There are many interesting approaches. I have been impressed by the results obtained with languages that enforce constraints against mutable data and encourage programmers to avoid side effects.

I am an admitted fan of Lisp, and I think that Clojure has done the best job of all the modern Lisps of advancing the state of the art of Lisp-like languages. Not only has it made data immutable by default, it has also unified the operations on collections of all sorts. It has also baked in thread safety to the point that it’s hard to get concurrency wrong.

And the final aspect boosting Clojure over the top in the comparison of modern Lisp implementations is the fact that it is built on top of the JVM and provides direct access to all those incredible libraries available for the Java platform. It is truly the best of both worlds.

Another language that is oft maligned, but far better than it is widely thought to be, is JavaScript. It has long suffered from lack of respect, due largely to being forced out the door prematurely for marketing reasons and then forced to live with its unfortunate early choices because of its widespread adoption as the universal web scripting language.

Modern implementations, Node.js on the server, and the evangelism of Douglas Crockford have all gone a long way toward improving JavaScript’s reputation, not to mention its attractiveness as a generic platform for application development.

Languages should be chosen to best address the needs of the problem domain. That is much easier said than done. We are suffering from too many options. People try to overcome that problem by artificially constraining the list of choices. Perhaps they would do better to use the prescription that the Unix community suggests (sometimes attributed to Kent Beck):

  1. Make it work.
  2. Make it right.
  3. Make it fast.

What that means is: first, hack out a naive solution, sometimes referred to as the simplest thing that might work. Then refine the solution to cover all the edge cases that you overlooked in the prototype. Finally, instrument the code to find out where it is spending most of its time and concentrate on optimizing that part of the code.
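Here is a hedged Python sketch of those three steps, using a toy word-count task of my own invention; it only illustrates the rhythm, not any particular project:

    import cProfile

    # 1. Make it work: the simplest thing that might work.
    def word_count_naive(text):
        return len(text.split())

    # 2. Make it right: cover the edge cases the prototype ignored,
    #    such as empty input and punctuation-only tokens.
    def word_count(text):
        if not text:
            return 0
        return sum(1 for token in text.split() if any(ch.isalnum() for ch in token))

    # 3. Make it fast: measure where the time actually goes before
    #    optimizing anything.
    cProfile.run("word_count('the quick brown fox ' * 100000)")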


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

In Which I Discover A History of the Personal Computer

Lately I have been thinking about the early days of personal computers. I was curious about the timeline of when the various computers were introduced. I had a fairly good idea about most of the early makes, but there was one that I didn’t know much about: a line of computers made by a company called Ohio Scientific, originally Ohio Scientific Instruments. The reason I was interested was that it was the computer sold by the company I went to work for when I got out of the Army.

I looked Ohio Scientific up on Wikipedia and one of the references at the end of the article led me to a book called A History of the Personal Computer: The People and the Technology. Someone, hopefully with permission of the copyright holder, had converted each chapter to PDF and made it available on the web.

It has proven to be a gold mine of details about the early days of personal computing. I will be commenting on it as I read it, adding personal experiences that occurred contemporaneously with events described in the book. I recommend the book to anyone who is interested in the history of computers through 2001, when the book was published.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Evolution of Computer Languages

I’ve got a thing about computer languages. I consider myself to be somewhat of a connoisseur. I have a soft spot in my heart for Lisp, but I am also a fan of other languages, depending on the context. I spent ten years, more or less, as an evangelist for Java. At the time I was fluent in Java, C, BASIC, and Pascal; I was conversant with Lisp, Scheme, Smalltalk, and Ada; and I could read most other languages, in particular COBOL, SNOBOL, Fortran, and Prolog.

While I personally preferred Lisp, I felt that the bulk of the programmers at the time were C or C++ programmers. As such, Lisp looked and behaved weirdly from their perspective. Java represented a huge movement in the right direction while remaining a language accessible to C programmers.

At the time, everybody was impressed by the elegance of Smalltalk and the object-oriented, message-passing paradigm. Smalltalk was too esoteric for most C programmers, but there was a guy named Brad Cox who came up with a language called Objective-C that captured some of the object-oriented flavor of Smalltalk in a syntax that appealed to the C crowd. This was about the same time that Bjarne Stroustrup was experimenting with C++.

Both Objective-C and C++ proved to be overly complicated, especially when it came to managing the dynamic allocation of memory. Consequently, they both gained a reputation for being difficult, if powerful. This was the state of affairs when James Gosling was faced with developing a language for a set-top box. The requirements were that it be fast, that it be easy to write bug-free code in, and that it be well integrated with the network. And, of course, it had to be object-oriented and have automatic memory management in the guise of garbage collection. In short, Java was no Lisp, but it was about as close to Lisp as the programmers of the day could get their minds around.

As it turns out, Java did raise the bar to the point that now, some twenty years later, it has itself passed into the conservative end of the spectrum and new languages now fill the spot it once held. In fact, Lisp has had a resurgence in popularity in recent years.

This renewed popularity can probably be best explained by the fact that Lisp has always been a research language. It was conceived as a notation for the discussion of Church’s lambda calculus, but its simple, homoiconic syntax quickly became a powerful tool for creating derivative languages to explore new programming paradigms.

Consequently, concepts such as structured programming, functional programming, and object-oriented programming had their first experimental implementations in Lisp. It has been said that every new feature in every programming language introduced since Lisp was first created has been done first in Lisp, and often better.

Which brings me around to a point of sorts. Since all of these languages have been gravitating toward Lisp for all these years, why hasn’t Lisp just taken over as the language of choice? There are a number of answers to that question, some of them contradictory.

For years Lisp had a reputation for being terrible at problems involving a lot of mathematical computation. The truth of the matter was that the implementation of arithmetic in most of the Lisps of the time was merely good enough for researchers who were primarily interested in investigating aspects other than numerical computation. When later generations of Lisp implementors took the time to optimize the numerical performance of Lisp, it came to rival C and Fortran in both speed and accuracy.

This illustrates the important observation that Lisp has seldom been considered a language for the development of production software. A couple of notable exceptions have been the use of Lisp in software to predict the performance of stocks on Wall Street and in software to predict the most likely places to explore for oil. These domains were willing to accept some rough edges in order to solve such particularly hard problems at all.

At one point it was argued that the automatic garbage collection of Lisp would kick in at the most inopportune time and embarrass the developer mid-demo. Advances in the technology of garbage collection have since made this argument moot.

Another often-cited argument against Lisp is the claim that other, more popular languages have a larger selection of third-party libraries available to them than Lisp does. This does remain a challenge to some degree; however, many Lisp implementations have foreign function interface mechanisms that allow them to call library routines written in other languages.

Another spin on the question is that Lisp has regained popularity especially through revised dialects like Clojure, which has taken the opportunity to refactor its collection types so that the operations on them have the same names when they do similar things. This makes the language easier to learn. Clojure also runs on top of the Java Virtual Machine, making interoperation with the vast collection of third-party Java libraries one of its attractive features.

The sad conclusion that I come to is that Lisp is a good source of inspiration, and even a moderately good platform for investigating architectural approaches to difficult, complex software systems, but the benefits of languages such as Racket, Swift, Ruby, Groovy, and even JavaScript usually far outweigh any advantages that Lisp may once have had when it comes to implementing software for production use.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Progressive Web Apps

It is the nature of programming languages that they provide mechanisms for implementing behavior that was never imagined by the creator of the language. As programmers apply the language to various problem domains they imagine new and innovative ways to use it. Sometimes these new ideas inspire language designers to add features to the language to directly support these innovations. Sometimes they are inspired to develop entirely new languages designed specifically to support this new way of thinking about problems. Usually, this evolution of programming techniques is spurred by someone coming up with a name for the technique. Until then it is difficult for programmers to talk about it.

An example that comes to mind is a technique called AJAX, first described by Jesse James Garrett in an article called Ajax: A New Approach to Web Applications on February 18, 2005. It described how to use facilities that had been available in web browsers since around 2000 to speed up the display of updates on web pages. Once there was a name for the technique, it became a hot topic of discussion among web developers overnight.

A similar situation has just come to my attention. Alex Russell wrote an article on June 15, 2015 entitled Progressive Web Apps: Escaping Tabs Without Losing Our Soul. In it, he talks about the use of Service Workers, a more recently coined type of Web Worker, to implement long-running JavaScript tasks that run independently of the thread that handles the browser’s display events, allowing both to run without interfering with each other. The Web Worker technology had been discussed as early as 2010 by the Web Hypertext Application Technology Working Group (WHATWG).

I’m still getting my mind around what Progressive Web Apps actually are. It is clear that they are a blurring of the lines between a dynamic web application that lives in a browser and a native application that lives on the desktop. That desktop may be on a computer, a smart phone, or some other device.

I’m not sure exactly how but I have a strong feeling that Progressive Web Apps are going to become relevant to my career as a programmer in the near future. Now that the term exists, I can use it to find related articles and read up on applying it to the applications that I am developing.

Once again the Sapir-Whorf hypothesis, which asserts that language determines (or, in a weaker form, influences) thought, proves as relevant to a discussion of computer languages as it is to natural languages.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

8 Bit Fantasies

I watched a video interview with the Oliver twins. They are video game legends from England. They started developing video games as teenagers in 1983 and went on to start their own game studio. In the interview, they talked about the process of developing games. They observed that the constraints of creating games for eight-bit processors with limited display hardware often made it easier to create games than the relatively unconstrained environment of modern hardware does. The reason is that when the hardware has severely limited capabilities, it forces you to think backwards from the constraints to the design of a game.

The counterintuitive fact of game design is that games with simple rules and clear goals are more fun. For example, chess has only six unique types of pieces and is played on a board of 64 squares, and yet the number of valid games is astronomical.

Another thing they commented on was the importance of thinking about the program with pencil and paper before they started writing code. They discovered this because when they started developing games they only had one computer between the two of them. Consequently, while one of them was entering code into the computer, the other was figuring out what they were going to tackle next when they got their turn on the computer.

Listening to them talk about their game developing experiences reminded me of a friend that I knew in the same era. Stan and I worked for Intergraph as computer technicians. We tested and repaired a specialized processor that allowed high-speed searches for graphical elements in CAD files. In short, we both understood how computers worked in great detail. Stan owned an Atari 800 computer. We spent many hours talking about game design for the Atari.

As I think back on these conversations, I realize that the hard part was never implementing game ideas in code. It was coming up with simple yet engaging ideas for how the game would work. We didn’t spend enough time with pencil and paper. We both wanted to sit down and start coding immediately. This is an important point that needs to be taught when we teach people to code. A little bit of design up front can save a lot of trial-and-error programming later. Also, adding artificial constraints to the design process can have the surprising effect of making it easier to invent an interesting game.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.