It’s Not Rocket Science… No Wait…

I was complaining to my wife that I didn’t have enough time in the day for all the things I wanted to do. In the process I said, “I have to spend eight hours a day working with rockets.” She laughed and asked what my sixteen-year-old self would have to say about that. It was a good question.

When I was sixteen years old, we were in the middle of the golden age of moon missions at NASA. My best friend and I followed the space program with avid interest. When something went wrong, we pored over the books we had accumulated on the spacecraft, trying to figure out what the engineers were talking about. It was as if, by focusing our attention on figuring out what was wrong and how to fix it, we were doing something to help the teams in Florida and Houston solve the problem.

Looking back, it was pretty miraculous. We sent those intrepid souls up in little tin cans strapped on top of massive ordnance. The Apollo flight computer was huge and expensive, both in price and in weight, which related directly to the cost of launching it out of Earth’s gravity well. And yet it had less computing power than a programmable calculator that an eighth grader might use in math class today.

I’m dating myself there. Eighth graders don’t use calculators any more, except maybe to take tests where the concern is to prevent them from cheating by “asking a friend”. They use their cell phones the rest of the time.

And now that I’ve mentioned it, as a result of the advances in miniaturization that were largely driven by the requirements of the space program, cell phones have become the ubiquitous, universal appliances that science fiction writers postulated in my youth. The Dick Tracy wrist television is reality. The Jetsons’ flying car is still struggling along in the development laboratory. We’ll get there eventually. Whatever we can imagine, we can usually figure out how to build, given enough time and money.

Yes, my sixteen-year-old self would be flabbergasted by how much I take my job, and the technology that I own and use daily, for granted. When I was sixteen I was already interested in computers. They were still big cabinets that lived in special air-conditioned rooms, with access restricted to elite operators. That would soon change as computers got smaller and cheaper. Before long, science and mathematics departments in state institutions of higher education could afford desktop computers that cost a fraction of what their mainframe big brothers did.

By the time I had spent a couple of years in college, the age of the personal computer was dawning. The last trip to the moon, Apollo 17, was several years in the rear view mirror. The first Space Shuttle flight was almost a decade in the future. NASA had lost some of its luster, if only temporarily. The hard-core space fanatics of my generation were still following every development with relish, but the typical American of the time had their mind on other issues.

I joined the Army because I needed a job to support my growing family. I knew one thing when I talked to the recruiter: I wanted as much computer training as I could get. I asked for the longest school that involved computers. I was steered toward a job repairing the computer systems in the Pershing missile system. It was a turning point in my life. Instead of becoming a filmmaker or a musician, I became a computer programmer.

Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

A Pause to Plan

I have come to the conclusion that the major reason I am finding it so difficult to write a blog post most days is the time of day I’m setting aside to do it. I have been waiting to blog until right before I go to bed. By then I am tired and not thinking as clearly as I do earlier in the day. I struggle to remember the things that occurred to me earlier in the day that would indubitably make exciting topics for a blog post.

Life is about making choices. I made an important choice seven years ago when I started writing. In those seven years I have become a much better writer. I have completed NaNoWriMo twice and attempted it two other times. I have written several short stories and hundreds of blog posts.

Almost a year ago I decided to step up my game and commit to writing a blog post daily. I felt it would have several beneficial effects on me. It would force me to write things to be read by other people. My journal was private. No one but me would ever read it, so it didn’t matter what I wrote. When you write for someone else to read, you shoulder a certain amount of responsibility: for the veracity of what you assert, for example. You also accept a certain responsibility to entertain, or inform, or both. You must give your reader some reason to read what you’ve written.

At about the same time, I raised the quota on my journal entries to a thousand words. Writing longer journal entries helped me learn to sustain longer threads of thought. It has been a productive year.

Now, I find myself feeling a need for a shift in my focus. I want to do some writing to share with a critique group. The experience of reading other people’s writing and giving constructive criticism of it while at the same time having them critique something you’ve written seems like the next step in my development as a writer.

This is going to require me to rethink my schedule. I can’t continue to write approximately fifteen hundred words a day, a thousand-word journal entry and an approximately five-hundred-word blog post, and still have enough time left over to write things for the critique group.

The choice that I face now is what to keep and what to put aside, either for a while or permanently. My blog is something that I want to keep writing but I need to move that writing to a time of day when I have more clarity of thought. My journal entry may need to be repurposed and perhaps made shorter. Perhaps I should use it as a venue for writing a first draft of my blog post. Or perhaps I can use it to write pieces to be critiqued. During NaNoWriMo I used it as the time and place that I set aside to work on my novel so using it for other purposes than journaling is certainly not without precedent.

These are all good thoughts. I need to consider them for a while before I make a decision. I felt it was the kind of thing that might be of interest to those of you who bother to read my blog. Although it was a bit of navel gazing, it had a clear motivation, and it does affect the future direction of this blog (which is in no danger of ceasing publication any time soon).

As always, let me know what you think. You can post comments on Facebook or Twitter, email me at jkelliemiller at gmail dot com, or talk to me in person if we happen to know each other IRL (In Real Life). I have tried repeatedly to set up comments in WordPress but I haven’t quite figured it out yet. I’ll give that a try again in the near future.

Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Evolution of Programming Paradigms

My career as a programmer has spanned more than forty years. In that time I have seen a number of programming paradigms promise to make programming easier, more robust, and less error prone. Let’s review these paradigms in roughly chronological order.

Early in my career the latest advance in programming paradigms was Structured Programming. The essential insight of Structured Programming was that programs should be written using a small set of control structures and that the primitive goto instruction should be shunned. The benefit was more readable code that could be more easily analyzed. Goto-ridden code was often referred to as spaghetti code.
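The idea can be sketched in a few lines of Python. Everything here is built from the three canonical structures (sequence, selection, and iteration), with no jumps; the function itself is a made-up example, not from any particular codebase.

```python
# Structured Programming in miniature: sequence, selection (if),
# and iteration (for) are the only control structures used.

def sum_of_evens(numbers):
    total = 0                  # sequence: one statement after another
    for n in numbers:          # iteration
        if n % 2 == 0:         # selection
            total += n
    return total

print(sum_of_evens([1, 2, 3, 4, 5, 6]))  # prints 12
```

The same logic written with gotos and flags, as was common in older languages, would force the reader to trace jumps to reconstruct the flow that these three structures make explicit.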

Next to come into vogue was the paradigm called Object Oriented Programming. Objects were intended to provide several benefits. First, they grouped functions with the data those functions operated on. Object Oriented Programming also introduced the concept of encapsulation, or data hiding: data, and functions for that matter, that were not declared public could not be accessed or modified by code outside the object.
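A minimal sketch of both ideas, using a hypothetical Counter class. Note that Python enforces privacy only by convention (a leading underscore), unlike languages with a strict private keyword, but the principle is the same: outside code goes through the public methods instead of touching the data directly.

```python
# Encapsulation sketch: the data and the functions that operate on
# it live together in one object; the data is hidden by convention.

class Counter:
    def __init__(self):
        self._count = 0      # "hidden" data (underscore convention)

    def increment(self):     # public function grouped with its data
        self._count += 1

    def value(self):         # public read access
        return self._count

c = Counter()
c.increment()
c.increment()
print(c.value())  # prints 2
```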

Most popular Object Oriented languages were based on the idea of Classes, which specified the data and functional template from which individual object instances were created. These Classes were often organized in a hierarchy, such that the more general Classes specified the common data and functional characteristics of a group of objects. The more specific members of the hierarchy would then add or override data or functional characteristics to refine the representation and behavior of the instances of the subclass.
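Here is a tiny hierarchy in Python illustrating that add-or-override pattern; the Animal and Dog classes are invented for illustration.

```python
# A general class supplies common behavior; a subclass overrides
# one piece of it to refine the behavior of its instances.

class Animal:
    def __init__(self, name):
        self.name = name          # common data, inherited by subclasses

    def speak(self):              # general, default behavior
        return f"{self.name} makes a sound"

class Dog(Animal):
    def speak(self):              # override refines the behavior
        return f"{self.name} barks"

print(Animal("Generic").speak())  # prints: Generic makes a sound
print(Dog("Rex").speak())         # prints: Rex barks
```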

As it turns out, in most cases class hierarchies just added unnecessary complexity to programs. They also introduced problems, such as what happens when a class inherits from two different parent classes. For example, suppose you had a BankAccount class that represented the ledger of a bank account: the deposits, the withdrawals, and the current balance. Suppose there was another class that represented a PrintableReport. Suppose you wanted to create a class BankAccountReport that inherited attributes from both the BankAccount class and the PrintableReport class. Now here’s the quandary. Both superclasses have an operation called addItem. Which one should the child class inherit, the one from BankAccount or the one from PrintableReport? This ambiguity created so many problems that many Object Oriented languages only allow a class to have a single superclass.
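The quandary can be made concrete in Python, which takes a third approach: it allows multiple inheritance but resolves the ambiguity with a fixed method resolution order (MRO), so the parent listed first wins. The class bodies below are stubs invented to show the mechanics.

```python
# The "which addItem?" quandary: both parents define the same
# operation. Python resolves it by method resolution order (MRO)
# rather than forbidding multiple inheritance.

class BankAccount:
    def addItem(self, item):
        return f"ledger entry: {item}"

class PrintableReport:
    def addItem(self, item):
        return f"report line: {item}"

class BankAccountReport(BankAccount, PrintableReport):
    pass  # inherits addItem from BankAccount, the first parent listed

r = BankAccountReport()
print(r.addItem("deposit $100"))  # prints: ledger entry: deposit $100
print([c.__name__ for c in BankAccountReport.__mro__])  # search order
```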

Next on the scene was Aspect Oriented Programming. Its claim to fame was a solution to the problem of multiple inheritance or, as it referred to them, cross-cutting concerns. Aspects were a mechanism that let the programmer conditionally alter the behavior of a class without modifying its implementation. It did this by capturing calls to the class’s methods and allowing the programmer to intervene before or after the call to the aspected operation of the underlying class.
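A rough sketch of that capture-and-intervene mechanism using a Python decorator. Real AOP frameworks such as AspectJ do this declaratively with pointcuts; this is only an approximation, and the Account class and logged wrapper are hypothetical names.

```python
# Aspect-style interception: wrap a method so extra behavior runs
# before and after each call, without editing the class itself.
import functools

def logged(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        print(f"before {method.__name__}")   # advice before the call
        result = method(*args, **kwargs)     # the underlying operation
        print(f"after {method.__name__}")    # advice after the call
        return result
    return wrapper

class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount
        return self.balance

# Apply the "aspect" from outside, leaving Account's source untouched.
Account.deposit = logged(Account.deposit)

a = Account()
print(a.deposit(50))  # logs before/after, then prints 50
```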

The latest paradigm is not really new. Functional Programming goes back to the early days of Lisp. It says that functions, in the mathematical sense, should map inputs to outputs without causing any side effects. Functions should further be first-class entities in the language, meaning they can be stored in variables and passed as arguments to other functions.
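Both properties can be shown in a few lines of Python; the function names are illustrative.

```python
# Functional style: pure functions that are themselves values.

def square(x):
    return x * x               # pure: output depends only on the input

def apply_twice(f, x):         # a function taken as an argument
    return f(f(x))

op = square                    # a function stored in a variable
print(apply_twice(op, 3))      # prints 81
print(list(map(square, [1, 2, 3])))  # prints [1, 4, 9]
```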

Strict functional programming is difficult, if not impossible, to achieve practical results with. Most programs take input from somewhere and output results to somewhere. Taking input and outputting results both violate the constraint that the output of a function should depend solely on its input. Most functional languages have well-defined accommodations for operations that aren’t strictly functional.
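One common accommodation, sketched below, is to keep the computation pure and push the input/output to a thin outer layer; Haskell’s IO type is a more formal version of the same separation. The function names here are invented for the example.

```python
# Pure core, impure shell: the computation is a pure function,
# and all I/O is confined to one small outer function.

def summarize(lines):          # pure core: list in, string out
    total = sum(int(x) for x in lines)
    return f"sum = {total}"

def main():                    # impure shell: does the I/O
    lines = ["1", "2", "3"]    # in real use, read from a file or stdin
    print(summarize(lines))    # prints: sum = 6

main()
```

Because the core is pure, it can be tested without any files, terminals, or mocks.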

This was a whirlwind tour. I hope it gave an overview of the evolution of programming paradigms over the last forty years. Look these paradigms up on Wikipedia if you want to know more about them, or if I have managed to confuse you totally.

Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.