Musings on the Evolution of Programming

When I started my career as a programmer, things were a lot different than they are now. Computers were just becoming less expensive, but they were still being designed and operated as if they were the expensive behemoths that required huge, air-conditioned data centers and a small army of operators. The era of the temple of the computer was waning.

For those of you too young to remember, the temple of the computer had an outer vestibule where the unwashed masses were allowed and an inner sanctum separated from it by a counter. There was a kind of priesthood of operators who interceded for the mere mortals who brought their offerings of decks of punched cards to be submitted to the great and powerful machine.

Then, after some passage of time, you would learn whether the gods had smiled upon you and run your program or whether you had a mistake in your deck. In either case you received a printout that contained the results, be they the output from a successful run or the core dump from an unsuccessful one. You also typically received a report of how many seconds your run had taken, which determined how much your account would be charged for the run.

All this changed with the advent of the microcomputer. The problem was, most of the people programming these modern marvels had learned their craft at the temple of the computer. Consequently, things were oriented to be more convenient for the computer than for the user. This was particularly true when it came to how you entered your program into the computer.

Early microcomputers had two types of languages: compiled languages and interpreted languages. Compiled languages were rare at first because most early microcomputers did not have much, if any, secondary storage. They typically loaded programs from paper tape readers or cassette tapes. Consequently, interpreted languages like Basic were the norm.

Basic programming had two modes of operation. In immediate mode you would type in a command, hit return, and the computer would run the command immediately. In programming mode you would type a line number followed by a command, hit return and the computer would store that line for execution later.

The line numbers sequenced the program: the lowest number was the beginning of the program and execution proceeded upward until it reached the highest line number. When you were through entering the entire program, you would type the immediate command “run” and the computer would start running the program you had entered, starting at the lowest line number.

This was what happened in the best of cases. What typically happened the first time you ran a program was that the computer would print an error message like “Syntax error in line 2215”. Then you would type “list” to get the computer to display the program on the screen so that you could try to figure out what was wrong with line 2215.

To fix the problem you would have to retype the entire line in error, including the line number, with your corrected instruction. This was labor-intensive, but I loved it. I was making the computer do my bidding like a genie in a bottle. Somehow, though, I knew there had to be an even better way to enter the program.

There was a better way. It was a program called a text editor. It allowed you to create a program that you then saved, either to tape, a floppy disk, or, if you were extremely lucky, a hard drive. The first text editors were line-oriented and were little better than the Basic interpreter. They had commands that allowed you to make corrections to a line without having to retype the whole line. There were even commands that would change every occurrence of a particular word or phrase to another word or phrase throughout the entire file.

The next innovation in programming was the screen editor. It was similar in operation to modern word processors except that it didn’t allow boldfacing, underlining, or any other kind of styling, none of which is needed for programming. Screen editors allowed you to use the cursor keys to move around in your program and correct it by deleting the errors and inserting the corrections directly.

Keep in mind that these innovations took years to come to pass. Life was getting better all the time for programmers. At this point we even had character-based games that drew maps of dungeons on our screens using dashes and vertical bars and represented monsters with single letters that would move around the screen and attack the single letter that represented you.

But life got better still with the introduction of the programmable editor. This allowed the user to create their own custom sequences of commands to reduce tedious, repetitive corrections to a single command. For instance, suppose you had a file of a hundred lines, each consisting of some arbitrary text with the unique string “$$” followed by a number, and you needed to change each number so that it had a decimal point and two zeroes appended to it. A programmable editor would allow you to create a custom command that looked for the string “$$”, found the end of the number after that point, and appended “.00” to it. This saved countless tedious hours.
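
Purely as a modern illustration of that kind of editing macro (this is not how it was done back then), the same transformation can be sketched in a few lines of Groovy; the file names here are hypothetical:

def fixLine = { String line ->
    int at = line.indexOf('$$')                      // locate the "$$" marker
    if (at < 0) return line                          // no marker on this line
    int end = at + 2                                 // scan past the digits that follow it
    while (end < line.length() && Character.isDigit(line.charAt(end))) end++
    line.substring(0, end) + '.00' + line.substring(end)   // append ".00" after the number
}

// Hypothetical input and output files, just for the sketch
def lines = new File('prices.txt').readLines()
new File('prices-fixed.txt').text = lines.collect(fixLine).join('\n')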

That was not the end of improvements in the programmer’s lot by a long shot. It is, however, the end of this blog post. I may revisit the topic and bring you up to date with the innovations that came after this point if there is any interest.

A Language to Build Languages

I’ve been fascinated by computers since before I graduated from high school. One of the early ideas that captured my imagination was the possibility of creating a program that could think like a person. Throughout my career I have pondered the possibility, and I have come to the conclusion that while we may be able to write programs that provide the fundamental structures and operations upon which intelligence may emerge, we are far from understanding how intelligence works well enough to reproduce it constructively by design. That puts me in the camp that is sometimes labeled emergent AI, although I prefer the term digital intelligence to artificial intelligence.

One of the aspects that I feel will be required for emergent digital intelligence (let’s abbreviate it EDI, shall we) is the ability to introspect, that is, to examine its own thought process. This is something I have felt for a long time, in fact, for almost as long as I have been interested in computers. I have always looked for ways that programs could examine themselves. For instance, I was fascinated by the fact that I could write code in Microsoft Basic, as early as 1979, that examined its own representation in memory and even modified itself while running.

Much of my early introduction to programming came from a subscription to Byte magazine, an early journal aimed at amateur microcomputer enthusiasts. Every August, Byte published its annual computer language issue, in which it explored the features of a different language. I suspect that this was my first exposure to the Lisp language. Lisp is the second-oldest high-level computer language, predated only by FORTRAN.

It was also the first computer language that focused on symbolic processing instead of being primarily concerned with numerical computation. That is to say, it was written to facilitate the manipulation of lists of symbols. A symbol, in this case, is an arbitrary label or name. For example, you might have a list:

(alice bob ralph sally)

The parentheses denote the beginning and end of the list. The four names are symbols and make up the elements of the list, and they are considered in the order that they are written between the parentheses: alice is the first element of the list, bob is the second, ralph the third, and sally the fourth and final.

Further, Lisp code was represented by lists, just like its data. Consequently, program code could be manipulated by the language as easily as data could. This jumped out at me immediately as giving Lisp the ability to introspect over its own code. Another, more subtle capability of Lisp is the ability to take a list and rewrite it according to a template called a macro. This turns out to be incredibly useful in allowing repetitive operations to be condensed to their essence.

Lisp is typically implemented as an interpreter. It accepts a line of input, processes it, and prints a result. This is called a Read-Eval-Print Loop, or REPL for short. I bring this up because the component that does the Read portion of the REPL is a very interesting piece of the picture. It takes the stream of input characters, parses it into symbols, and builds them into a small program. It is responsible for recognizing whether there are any macros in the list and, if so, expanding them into their translations. When it is finished, it has a complete, correct Lisp expression to hand to the Eval portion of the REPL. Either that or it detects that something is wrong and prints an error message.
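
The read-eval-print shape itself is easy to see in miniature in almost any dynamic language. Purely as an illustration of the loop (not of Lisp’s reader), here is a rough sketch in Groovy using GroovyShell; the prompt and error handling are just placeholders:

def shell = new GroovyShell()                         // evaluates Groovy source strings
def input = new BufferedReader(new InputStreamReader(System.in))

print '> '
String line
while ((line = input.readLine()) != null) {           // Read a line of input
    try {
        println shell.evaluate(line)                  // Eval it, then Print the result
    } catch (Exception e) {
        println "Error: ${e.message}"                 // report problems instead of crashing
    }
    print '> '                                        // and Loop back to read again
}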

This Read operation is very powerful, even more so in the standard Common Lisp version of the language. In Common Lisp, the Read function is table driven. That means that by substituting a different read table, the reader can parse a language with a different syntax. The implication of this is that Lisp is not only a language, it is a framework for building new languages.

This has been a long story and you may be feeling a little lost by now, but the point is that Lisp is exactly the kind of substrate upon which EDI can most easily be built. There are many modern computer languages that implement many, if not most, of the features of Lisp. Lisp simply implemented them first and best.

The fact that a Lisp program is represented in the same structures as Lisp data, a property called homoiconicity by the way, is at the heart of its power and responsible for its longevity. It also makes it the prime candidate for building the environment in which EDI will emerge.

News (and a Few Comments)

I haven’t been keeping up my blog lately. Lots of other things have been happening. I thought I’d take a moment to update everyone on a few of them.

I started off the new year with a plan. I had been camping out on Pam’s MacBook Pro for over a year and I decided that it was time that I got my own machine again. I looked at the bank account and like the responsible adult that I am (no snickers from the peanut gallery), I decided that there wasn’t an Apple computer in my near future.

I have my iPhone and my iPad. I have the use of Pam’s MacBook Pro if I really need a Mac. But for my everyday workhorse I needed something less expensive.

Around that time the Raspberry Pi 3 was announced. I was impressed. Here was a computer with the horsepower of a high-end cell phone. I mean that as a compliment. High-end cell phones are more powerful in many ways than a lot of low-end desktops.

Then Pi Day came (3/14/16, get it? Pi rounds to 3.1416.) and Western Digital, the disk drive manufacturer, came out with a special product. It was a 314 GB hard drive for the Raspberry Pi that they were selling for $31.41. When I looked into it I discovered that they had a 1 TB model for around $80, and it came with cables, a power adapter, and a case.

Needless to say, I bought the Raspberry Pi 3, a new monitor for about $100, and the WD 1 TB Pi Drive. The Raspberry Pi and monitor came in rather quickly. WD backordered the disk drive and I ended up waiting a month for it. But for about three hundred dollars, I had a new computer.

When it finally came in, I installed the multi-boot software that they suggested for it and chose Fedora 23 ARM for the distribution that I wanted to run. To make a long story short, for technical reasons that I won’t go into right now, the Pi was not up to being used as a developer’s computer. It is fine for a hobby computer or for a student that is just learning to program. It is great to build Internet of Things projects around. It just doesn’t have the guts to be a developer’s main computer.

I was depressed. I thought I had discovered a cheap way to get my own computer up and running. I moped around for a few days and then I started looking around to see what I could find to solve my problem. I found a local store that had refurbished computers but they weren’t exactly what I needed. Besides, I didn’t have the money to buy what they had in stock.

Then, in a flash of inspiration, I remembered that I had a $100 Amazon gift card that I had forgotten about. I started shopping on Amazon and soon found a refurbished Dell with a 3.0 GHz Core 2 Duo processor, a 500 GB hard drive, and 8 GB of RAM. It wasn’t perfect but it was adequate for a developer’s machine.

I had to put $40 with the gift card to get it but a week later it arrived. It came with Windows 7 Professional. I am not a Windows fan. I have worked with Windows every day at work for the past twenty years or so, so it isn’t that I don’t know how to use it. I just don’t like it. But that’s a topic for another blog post.

I do know that Windows is useful on occasion, so I went ahead and installed it. Then I installed Fedora 23 x86_64 on it. I was impressed. The Fedora install program did a wonderful job of shrinking my Windows partition and creating the Linux partitions. Fedora booted up and has run like a dream ever since. It is not a Mac, but it is a good, solid developer’s machine. I have been very happy with it.

I’ll post more on what I am developing with it in another post. This post has gotten longer than I intended and I need to get to bed. I will hint that not only am I developing software with it but I am also writing a book on programming with it.

On Writing, Programming, and Composing

I used to be daunted by a blank page. Now I am beckoned by it. It is an invitation to pour out whatever I am thinking about. I grew up in a time when you either laboriously wrote out your thoughts in longhand with pen and paper or you typed them on a typewriter. In either case erasing was complicated enough that it wasn’t really a good option. I think many writers just marked through their mistakes and kept writing.

I remember sitting in the spare bedroom of the trailer in Carbondale surrounded by wadded-up sheets of yellow paper containing discarded starts of the screenplay I was attempting to write. I had no concept of how to write a draft. I was a perfectionist. If it wasn’t exactly what I meant to say, I ripped the page out of the typewriter, wadded it up, and loaded a clean page.

Even when I finally got a computer, I didn’t know how to write with it. I spent hours typing a few words and deleting them and then typing a few more. I had similar problems with programs. I have started many programs that never got much further than a skeleton and a few simple primitives. The important thing in both cases was that I didn’t quit trying.

For a while, I kept just starting over again doing the same thing each time. Then, I started varying my approach. I had some successes with programming at work. I eventually found The Artist’s Way and learned how to bootstrap my writing by sitting down every day at the same time and writing a minimum of 750 words. I eventually became confident enough in myself that I was able to write 50,000 words in a month.

I have learned that I must keep raising the bar, demanding more of myself. I recently increased my daily minimum to 1000 words. I decided that I would spend at least part of my words writing something more focused than a journal entry. Some days I find that I still spend the whole entry rambling. Other days I dive into a topical post as soon as I start writing. I can feel the quality of my writing improving with practice, and I notice my mindset changing while I write. It has become an exercise in organizing my thoughts instead of struggling with the mechanics of writing.

I have struggled off and on with integrating what I know to be good grammar with the conversational voice that often ignores such faux pas as dangling participles. I have also had problems with sounding pedantic when I write. I am still struggling, but I seem to be doing better with less struggle lately.

On the programming front, I have had similar experiences. I have learned that good tools are very important to being productive. They free up more time to think about the code you are writing, which means you don’t have to take the first thing that works as the final product. I am feeling the urge to rewrite more often. Too often in the commercial world the people who are paying for the software don’t appreciate the value of iterating a couple of times to improve the design of a piece of software.

I am at a point now where I am facing a common challenge in all of my endeavors: finishing. I suppose there is a corollary that has just occurred to me. Each time you iterate over a piece you should strive to finish that iteration. Each iteration should have as its primary goal improving over the previous iteration. If you make a practice of finishing each iteration, it doesn’t matter if one iteration is a regression. You can fall back to the last iteration and try again.

This is true of writing, programming, and my other artistic endeavor that I haven’t even discussed yet, composing music. In fact, it is even more applicable to composing music. So much so that there is a special name for it. It is called improvisation. There is improvisation in writing and programming but it is not as exposed to public scrutiny as musical improvisation is. It occurs to me that essay writing is literary improvisation though. And live coding is programming improvisation. So the paradigm does translate across all three fields.

A good essay would draw some conclusions at this point. I’m not claiming this is a good essay. It is certainly not a bad start though. So I’ll leave it at that.

Experiments with Gradle in the IntelliJ IDEA IDE

I’ve recently started using JetBrains’ IntelliJ IDEA software to develop software written in Groovy at work. It is an excellent package, well worth its modest price. I have been investigating the differences between the Ultimate edition that my employer purchased for me at work and the Community edition that JetBrains offers for free. While the extra features available in the Ultimate edition are nice, I have found my experience with the Community edition as good as or better than my previous experiences with Eclipse and NetBeans.

Lately, I have been experimenting with using the Gradle build tool within IDEA. I have figured a lot of things out about it, but I am having trouble getting it to build an executable jar file. I’m sure it is just a matter of configuring something correctly within the jar task, but I haven’t figured out how to do it yet.

I have learned a lot about IDEA and Gradle but I am just going to have to keep studying until I figure out how to get things to work the way that I want them to. I’ll write a post when I figure out how it’s done.

UPDATE: I figured it out. You just need to add the following lines to your build.gradle file. This includes all of your dependent jars in your executable jar. By the way, substitute the name of your own main class for ‘org.me.package.Main’.

// Builds a single executable jar that bundles the project's classes
// along with everything on the compile classpath.
task fatJar(type: Jar) {
    manifest {
        // Tells java -jar which class holds the main() method
        attributes 'Main-Class': 'org.me.package.Main'
    }
    // Name the output <project>-all.jar to distinguish it from the normal jar
    baseName = project.name + '-all'
    // Unpack each dependency jar (or include it directly if it is a directory)
    from { configurations.compile.collect { it.isDirectory() ? it : zipTree(it) } }
    // Include everything the standard jar task would include
    with jar
}
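
With that in place, running gradle fatJar produces a jar named after the project with an -all suffix (under build/libs by default), and you can launch it with java -jar.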

Of Gradle, Groovy, and How I’ve Come to Love Build Automation

I finally got my project at work to build using Gradle. Gradle is a build tool, something like make or Ant, except that it is implemented as a Domain Specific Language (DSL) built on top of Groovy. Groovy is a remarkable language in its own right. It is a dynamic language that compiles to Java bytecode, so it runs on the Java Virtual Machine (JVM). It can freely call code written in Java, and Java code can call code written in Groovy. This gives Groovy an enormous head start in terms of the variety of libraries that it can take advantage of right out of the box.

What is so great about Groovy, anyway? Well, it is a lot less verbose than Java, for one thing. You rarely need to use semicolons in Groovy; usually it knows where the end of a statement is without you having to tell it explicitly. Groovy is also good at figuring out the types of variables without being told, so you can define a variable with the def keyword and let Groovy infer its type from what you assign to it. Groovy is touted as a scripting language, and it serves in that capacity very well, but it can also be used to write very succinct and flexible object-oriented code, like Java. Another place where Groovy saves typing is with imports: all of the more commonly used library packages are imported by default.
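
To make that concrete, here is a tiny, made-up snippet (the variable names are arbitrary) that exercises just those points: optional semicolons, def with inferred types, and default imports:

// No semicolons needed; types are inferred from what is assigned
def greeting = 'Hello, Groovy'            // a String
def numbers = [3, 1, 4, 1, 5]             // a List

// java.util is imported by default, so Date needs no import statement
def today = new Date()

println "${greeting} on ${today}: the numbers sum to ${numbers.sum()}"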

Groovy also adds a literal syntax for cleanly writing maps, which makes creating keyword/value data structures much easier. These are very useful for collecting information such as configuration parameters.
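
Here is a small, made-up illustration of that map syntax; the keys (host, port, debug) are just placeholders:

// A made-up configuration map
def config = [host: 'localhost', port: 8080, debug: true]

println config.host       // property-style access  -> localhost
println config['port']    // subscript-style access -> 8080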

There are lots of other neat features that Groovy brings to the table, but to get back to Gradle: it is an application, written in Groovy, specifically for managing the build process. Gradle makes the build process a lot more expressive. It is more concise while at the same time being more flexible. It is easily extended, both in an ad-hoc fashion by writing code specific to the build at hand and in a more general fashion through plug-ins that can be shared among many different projects.
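
As a rough sketch of what I mean by ad-hoc extension, here is a hypothetical task dropped straight into a build.gradle file; the task name and message are invented for illustration:

// A hypothetical ad-hoc task added directly to build.gradle
task printVersion {
    description = 'Prints the project name and version'
    doLast {
        println "Building ${project.name} version ${project.version}"
    }
}

Running gradle printVersion then executes just that task.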

Using Gradle to automate my build process has turned a tedious job into one that is as exciting for me as writing the rest of the code in my application is. If you are developing in Java or Groovy or any other language for that matter, I suggest that you give Gradle a look.

Electron is Awesome

I finally got the current version of Netlog, my program to help me create logs of the ARES Training Net, moved from being a web app to being a desktop app in the Electron framework. I had to require jQuery and schedule the init function to run 100 milliseconds in the future instead of depending on the apparently non-existent onReady event of the document. Figuring this out took me several minutes, but it really wasn’t that difficult at all. I suspect that getting it to run as an app on Windows and Linux will be even easier. I wouldn’t be surprised if getting it to run on Android and iOS turned out to be fairly easy as well.

I suspect there will be a bunch of applications that work this way in the near future. I might even get them to let me write an app in CoffeeScript at work. I doubt it. It’s a little bit too freewheeling for the corporate environment. I guess that’s my main problem: I’m too much of a rebel to excel in the corporate environment.

I spent all of my time yesterday learning about Photon and Electron and forgot about writing my blog post. Well, in the spirit of moving on, here is my blog post for today. Tomorrow is another day. I hope I can get my momentum back and post again tomorrow.

I’ll Take a Cup of Cocoa® Please

I found a great book this weekend. It’s Cocoa® Programming Developer’s Handbook, Second Edition, by David Chisnall, published by Addison-Wesley Professional. It provides very complete coverage of this broad subject but, unlike many of the other books I’ve read on the topic, it assumes that the reader is already a competent programmer. The author tells how Cocoa started life as NeXTStep on the NeXT computer and follows its evolution through a collaboration with Sun Microsystems, which resulted in OpenStep, until Apple bought NeXT and adopted OpenStep as the heart of its development of OS X.

The book is wide, deep, and fast-paced. Don’t be frustrated if you find yourself having to read some sections more than once. It includes an historical overview, a survey of the languages that have interfaces to Cocoa and why you might want to consider using each of them, an overview of the Developer Tools that Apple supplies for writing applications with Cocoa, and of course, in-depth discussions of how to use all of the various frameworks that comprise Cocoa (e.g. Core Foundation, Core Graphics, Core Data, Core Audio, etc.). It also discusses the philosophy of Document-Driven Applications that was pioneered by Apple on the Mac. It frames these discussions with plenty of code examples that help place them in a practical context.

The Beginning of a Series of Opinionated Posts

One of the philosophical principles underlying Ruby on Rails is that software should be opinionated. I have been thinking about what that means a lot lately and have decided that being opinionated is a good trait in general. I have decided that I will be opinionated and share my opinions with anyone who will listen. In particular, I will share my opinions here.

I have concluded that software engineering is at best a misnomer and at worst a detriment to the development of quality software. Engineering is a discipline for creating physical artifacts that has been developed empirically over the last two or three centuries. Software is not a physical artifact.

When I have a physical artifact and I give it to you I no longer have the artifact. When I have a piece of software and I give it to you, I still have it. Your having it doesn’t reduce the utility of my having it. When I design a physical artifact, I want to get all the details right before I build it because materials are expensive. When I design software, the easiest way to figure out the details is to create a prototype and then iteratively improve it until it is right.

The point is that building multiple versions doesn’t incur large material costs. These are only two of many reasons that software development is very different from the process we know as engineering. Calling Software Development Software Engineering raises inappropriate expectations in those who don’t understand Software Development.

I’ll rant on this topic more later but I’m going to call it a night right now.

Parrot Speaks A Number of Languages

After watching several of Allison Randal’s videos yesterday (see Dynamism Clarified), I started investigating Parrot. I was so impressed that I downloaded the latest version (2.0.0) and built it on my MacBook. I haven’t had time to do much more than start reading the documentation, but I like what I see so far. I will probably play with Cardinal, an implementation of Ruby 1.9 in Parrot. I may see what kind of benchmarks I can come up with.

I realized that my first several languages were all dynamic languages, namely Microsoft Basic (long before Visual Basic) and Forth. I always preferred dynamic languages because, regardless of whatever project I was working on for my employer, I was always intrigued by the prospect of artificial intelligence. My first static language was Pascal, quickly followed by C. I was going to say that I learned Lisp around this time, but it took me a long time to really learn Lisp. I was able to write Lisp expressions in pretty short order, but the whole process of building expressions up into programs that leveraged the unique strengths of Lisp took quite a while.

When I look back over my career it seems that I was always avidly studying dynamic languages. In fact, one of the reasons I was so enamored with Java was that it was more dynamic than C. When I discovered Java (the first day that Sun released the first public beta as a matter of fact) I immediately recognized it as a tool for convincing the static programming masses of the value of dynamism. Or as I put it at the time, it was a step in the right direction toward Lisp.

My current favorite language is Ruby, primarily because I can interface with more mainstream software more easily with Ruby than with just about any other platform. It is also sufficiently mature that I don’t worry much about it changing too drastically. I also share a lot of “opinions” about code with Ruby.