Béla Fleck and Abigail Washburn in Concert

Tonight we are going to see a Béla Fleck and Abigail Washburn concert. We don’t get out very often, but when I found out that Béla Fleck was playing Huntsville, I made plans immediately. I’ve been a fan for years.

I decided to check out his biography. I discovered that he had won Grammys in more categories than any other musician. That is part of the reason that I like his music so much. He plays everything from bluegrass to classical, from jazz to Americana.

I also looked up Abigail’s biography and discovered that she played in a band named Uncle Earl that I have been listening to a lot on Pandora lately. She is also involved in cultural exchange with China through Vanderbilt University.

I’m sure we are in for a fantastic evening out. It will be an upbeat ending to a year that could have used a few more high points. Happy New Year to all. I hope 2017 is a good year for you all.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

A Little Organization Goes a Long Way

In recent years I have noticed that I have been getting more done, both at home and at work. It occurs to me that it might be useful, both to me and to other people, to describe the practices I have adopted that have contributed to my increased productivity. Other people might find some of these ideas useful, and I will understand the process better for having written about it.

There are four practices that I am going to talk about. Each one will help some on its own, but when practiced together there is a synergistic effect. That is to say, your productivity increases more from doing all of them than the sum of the increases achieved by doing each of them individually.

Have Goals

The first thing that you need to do is make a short list of goals. Goals should be fairly high level but specific enough that you can define milestones and measure your progress toward them. For instance, I have a standing goal of writing fiction and non-fiction. I set milestones such as finishing the rewrite of the novel that I wrote for NaNoWriMo back in 2014. That is specific enough that I can tell when I have completed the milestone and measure my progress toward it.

I am still working on estimating how long a given activity will take. It gets easier the more you do something. I can come a lot closer to estimating how much time to allocate for a task at work simply because I’ve often done similar tasks and have a body of experience to draw upon. When it comes to my personal writing projects, I’m still working my way through the first projects of this type. That poses two challenges: figuring out how much of the time I am spending on the project is attributable to the learning curve, and how much is typical of a project of this scope.

Take Notes

Another practice that I have recently started is keeping notes about the ideas that I have. I have used several different approaches, and each has had its benefits and drawbacks. I have used org-mode, an outliner built into Emacs. It has a lot of useful features and, since it is built on top of Emacs, is inherently extensible. It is also complicated enough that if I don’t use it frequently, I tend to forget the command keys, and it takes me a while to get back up to speed when I decide to use it again.

At work, I use a single-page web application called TiddlyWiki. It is about the simplest and prettiest note-taking platform that I’ve used. The price is right too: it is free as in beer. It is written in JavaScript and is extensible both at the JavaScript level and through the custom macro language embedded in it.

Lately I’ve been using the Apple Notes app and the Reminders app to keep my notes because they sync easily across my MacBook Pro, my iPhone, and my iPad. With the latest major software update, Notes has become almost as full featured as Evernote or OneNote.

I make documents to collect information for different purposes. For example, I have a document listing all the software that I’ve installed on my MacBook Pro. I’ve got another that is a list of ideas for blog posts. In yet another I keep a list of technical subjects that I’ve run across while browsing the web and want to remember to read up on later.

When you take the time to write something down, even if you never read it again, you have committed it more firmly to memory. I find that I review my notes fairly frequently.

Make Plans

I don’t mean that you necessarily have to break down all of your projects into detailed task hierarchies with dependency graphs and Gantt charts, although that is sometimes useful for projects of epic scope. What I am talking about is identifying the key, high-level tasks of a project and penciling in target dates for them. You might start by only assigning a target completion date to the task that you are currently working on.

Thinking about plans in this way helps you figure out what actually needs to be done. It also motivates you to quit saying you’re going to do something and actually decide when you are going to do it. If you don’t take the first step, you’ll never finish.

Evaluate Plans and Goals Periodically

As I wrote in a previous blog post, Helmuth von Moltke once said, “No battle plan survives contact with the enemy.” Plans typically start to fall apart as soon as you begin to execute them. Also, your situation changes, you grow, and your aspirations may change too. Consequently, it is essential to schedule a time to reevaluate your plans and your goals. This will give you a chance to adjust your priorities so that you can make sure you are doing the most important things first.

Don’t forget to take notes about your goals and plans, like I have been doing lately. I just took a moment to capture my goals in one note and my writing plan in another. I had made both the goals and the writing plan, but if I don’t take the time to write them down and set a reminder to reevaluate them, I may lose track of them or forget to reevaluate.

Summary

It is important to think about what you are doing. It is also important to record these thoughts so that you can revisit and update them later. Setting goals and making plans doesn’t have to take more effort than the actual project, but the larger the scope of the project, the more essential planning becomes.

And remember, set up the review of your goals and plans when you write them down so you don’t forget to do it.

I hope these ideas help you organize your goals and projects. They have really helped me achieve some of my goals like writing another NaNoWriMo novel and writing a blog post every day.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Evolution of Computer Languages

I’ve got a thing about computer languages. I consider myself to be somewhat of a connoisseur. I have a soft spot in my heart for Lisp, but I am also a fan of other languages, depending on the context. I spent ten years, more or less, as an evangelist for Java. At the time I was fluent in Java, C, BASIC, and Pascal; I was conversant with Lisp, Scheme, Smalltalk, and Ada; and I could read most other languages, in particular COBOL, SNOBOL, Fortran, and Prolog.

While I personally preferred Lisp, I felt that the bulk of the programmers at the time were C or C++ programmers. As such, Lisp looked and behaved weirdly from their perspective. Java represented a huge movement in the right direction while remaining a language accessible to C programmers.

At the time, everybody was impressed by the elegance of Smalltalk and its object-oriented, message-passing paradigm. But Smalltalk was too esoteric for most C programmers, so a fellow named Brad Cox came up with a language called Objective-C that captured some of the object-oriented flavor of Smalltalk in a syntax that appealed to the C crowd. This was about the same time that Bjarne Stroustrup was experimenting with C++.

Both Objective-C and C++ proved to be overly complicated, especially when it came to managing the dynamic allocation of memory. Consequently, they both gained a reputation for being difficult, if powerful. This was the state of affairs when James Gosling was faced with developing a language for a set-top box. The requirements were that it be fast, that it be easy to write bug-free code in, and that it be well integrated with the network. And, of course, it would be object oriented and have automatic memory management in the guise of garbage collection. In short, Java was no Lisp, but it was about as close to it as the programmers of the day could get their minds around.

As it turns out, Java did raise the bar to the point that now, some twenty years later, it has itself passed into the conservative end of the spectrum and new languages now fill the spot it once held. In fact, Lisp has had a resurgence in popularity in recent years.

This renewed popularity can probably best be explained by the fact that Lisp has always been a research language. It was conceived as a notation for the discussion of Church’s lambda calculus, but its simple, homoiconic syntax quickly became a powerful tool for creating derivative languages to explore new programming paradigms.

Consequently, concepts such as structured programming, functional programming, and object-oriented programming had their first experimental implementations in Lisp. It has been said that every new feature in every programming language introduced since Lisp was first created was done first in Lisp, and often better.

Which brings me around to a point of sorts. Since all of these languages have been gravitating toward Lisp for all these years, why hasn’t Lisp just taken over as the language of choice? There are a number of answers to that question, some of them contradictory.

For years Lisp had a reputation for being terrible at problems with a lot of mathematical computation. The truth of the matter was that the implementation of arithmetic in most Lisps of the time was good enough for researchers who were primarily interested in aspects other than numerical computation. When later generations of Lisp implementors took the time to optimize Lisp’s numerical performance, it came to rival C and Fortran in both speed and accuracy.

This illustrates the important observation that Lisp has seldom been considered a language for the development of production software. A couple of notable exceptions have been the use of Lisp in software to predict the performance of stocks on Wall Street and software to predict the most likely places to explore for oil. These domains were willing to accept some rough edges in order to solve these particularly hard problems at all.

At one point it was argued that Lisp’s automatic garbage collection would kick in at the most inopportune time and embarrass the developer mid-demo. Advances in garbage collection technology have since rendered this argument moot.

Another often-cited argument against Lisp is the claim that other, more popular languages have a larger selection of third-party libraries available. This does remain a challenge to some degree; however, many Lisp implementations have foreign function interface (FFI) mechanisms that allow them to call library routines written in other languages.

Another spin on the question is that Lisp has regained popularity especially in revised dialects like Clojure, which took the opportunity to refactor the architecture of its collection types so that the operations on them have similar names when they do similar things. This makes the language easier to learn. Clojure also runs on top of the Java Virtual Machine, making interoperation with the vast ecosystem of third-party Java libraries one of its attractive features.

The sad conclusion that I come to is that Lisp is a good source of inspiration and even a moderately good platform for investigating architectural approaches to difficult, complex software systems, but the benefits of languages such as Racket, Swift, Ruby, Groovy, and even JavaScript usually far outweigh any advantages that Lisp may once have had when it comes to implementing software for production use.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Progressive Web Apps

It is the nature of programming languages that they provide mechanisms for implementing behavior that was never imagined by the creator of the language. As programmers apply the language to various problem domains they imagine new and innovative ways to use it. Sometimes these new ideas inspire language designers to add features to the language to directly support these innovations. Sometimes they are inspired to develop entirely new languages designed specifically to support this new way of thinking about problems. Usually, this evolution of programming techniques is spurred by someone coming up with a name for the technique. Until then it is difficult for programmers to talk about it.

An example that comes to mind is a technique called AJAX, first described by Jesse James Garrett in an article called “Ajax: A New Approach to Web Applications” on February 18, 2005. It described how to use facilities that had been available in web browsers since around 2000 to speed up the display of updates on web pages. Once there was a name for the technique, it became a hot topic of discussion among web developers overnight.
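The core of the technique is small enough to sketch. The following is a minimal illustration in modern JavaScript rather than the XMLHttpRequest style of 2005, and the /api/headlines endpoint, the element id, and the payload shape are all hypothetical:

```javascript
// Pure helper: turn a JSON payload into the HTML fragment to splice in.
function renderHeadlines(items) {
  return items.map(item => "<li>" + item.title + "</li>").join("");
}

// In a browser this would run on a timer or in response to a user action.
async function refreshHeadlines() {
  const response = await fetch("/api/headlines"); // small asynchronous request
  const items = await response.json();            // parse the reply
  // Patch only the affected element instead of reloading the whole page.
  document.querySelector("#headlines").innerHTML = renderHeadlines(items);
}
```

The point is the shape of the interaction: a small asynchronous request for data followed by a surgical update of one part of the page, instead of a full page reload.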

A similar situation has just come to my attention. Alex Russell wrote an article on June 15, 2015 entitled “Progressive Web Apps: Escaping Tabs Without Losing Our Soul.” In it, he talks about the use of Service Workers, a type of Web Worker (both relatively recently coined terms), to implement long-running JavaScript tasks that run independently of the threads that implement the display events of the browser, allowing both to run without interfering with each other. The Web Worker technology had been discussed as early as 2010 by the Web Hypertext Application Technology Working Group (WHATWG).
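To make the division of labor concrete, here is a minimal sketch of the pattern, not code from Russell’s article; the sw.js file name and the cache-first policy are illustrative assumptions:

```javascript
// In the page's own script, register the worker once (hypothetical file name):
//   if ("serviceWorker" in navigator) navigator.serviceWorker.register("/sw.js");

// The caching policy, split out as a pure function: prefer a cached response
// and fall back to the network otherwise.
function cacheFirst(cached, fetchFromNetwork) {
  return cached !== undefined ? Promise.resolve(cached) : fetchFromNetwork();
}

// Inside sw.js the worker intercepts fetches on its own thread, so the
// browser's display thread is never blocked. The guard lets the sketch load
// outside a worker context as well.
if (typeof self !== "undefined" && typeof self.addEventListener === "function") {
  self.addEventListener("fetch", event => {
    event.respondWith(
      caches.match(event.request).then(cached =>
        cacheFirst(cached, () => fetch(event.request))
      )
    );
  });
}
```

Because the worker runs independently of any page, it can keep serving cached responses even when the user is offline, which is a large part of what blurs the line between a web app and a native one.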

I’m still getting my mind around what Progressive Web Apps actually are. It is clear that they are a blurring of the lines between a dynamic web application that lives in a browser and a native application that lives on the desktop. That desktop may be on a computer, a smart phone, or some other device.

I’m not sure exactly how but I have a strong feeling that Progressive Web Apps are going to become relevant to my career as a programmer in the near future. Now that the term exists, I can use it to find related articles and read up on applying it to the applications that I am developing.

Once again the Sapir-Whorf Hypothesis, which asserts that language determines (or in a weaker form, influences) thought, becomes relevant in a discussion of computer languages as well as its applicability to natural languages.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

A Late Christmas Present

I bought a new gadget today. It is a guitar amplifier that fits our current living situation better than my other one. They are both Fender amplifiers, but the new one is smaller and has a headphone jack. It also has a USB connection that allows loading custom presets from an application running on my computer. It has a number of presets that allow it to simulate different amplifiers and effects boxes. I can also route the output into GarageBand to record anything I play with it.

I spent several hours this afternoon installing the software, registering the amp, and exploring the sounds it can make. I had forgotten how much I enjoy playing my electric guitar. It is an Epiphone Les Paul, black and exquisitely set up. I tried out the amplifier to make sure that it worked before I left the store, then set up the associated computer app and played with it for several hours when I got home.

It was amazing to me how much difference the amplifier made in how the guitar sounded. I had enjoyed playing it before, and I even had some experience playing it through the various amp models and effects processors available in GarageBand. I expect I will be playing guitar a little more often, particularly my electric.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Who the $#%& Am I?

People used to derive a sense of identity from the place that they were born and raised. They identified with the church that they attended and the schools that they attended. They were defined by their profession and their friends, how much money they made, what kind of car they drove, what neighborhood they lived in.

These things still contribute to people’s sense of identity but these days things change so fast that if you define yourself solely in terms of these things, you are building your identity on shifting sands.

Things change fast these days. Practically no one lives their whole life in the town where they were born. Most people move at least two or three times over the span of a career, and many move more than that. As a side effect, even the most devout churchgoer ends up changing congregations several times at least.

In this age of ever more expensive higher education, people are taking longer to complete their education and, as often as not, are studying at more than one school. This tends to dilute the identification with an alma mater.

And the workplace is changing so fast that few people complete a career in one profession, and even if they do, they end up having to reeducate themselves at least once a decade or so.

So, where do we derive our modern identities from? In part we make our own tribes. We reach out to people with similar interests. We make friends online and use the various miracles of modern communication to bridge the distances that may span the globe.

We struggle, we experiment, we adapt, and in the final analysis, we get through it all. If you keep searching for the things that make you happy and doing the things that you know are right, you will become the person that you were meant to be.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Always A Different River

The way of the artist is to process their experience through their art. This presumes that they have practiced the craft that forms the substrate of their art to the degree where they are able to express themselves through the filter of their emotions. I have mastered guitar to the extent that I can play what I intend to play.

I have reached a similar level of mastery when it comes to writing prose. I would hesitate to claim the label artist in either domain. I can express a thought or a tune but controlling the emotional color of the product is something that I’m still struggling with. At this point I am pleased to be able to capture simple truth in either medium.

The way to mastery is effort, though. You must make the attempt and refine your efforts with each one. Every piece has lessons to teach. You must learn them and then move on to the next. Recognizing when a piece is as finished as it is going to be is part of the lesson.

Sometimes you revisit something you worked on previously. The result is another piece entirely. It may share structural and thematic content but like the river that is different each time you step into it, each rendering of an idea has its own soul. Each is a separate piece.

After all, like the river, the artist is constantly changing and the filter that is applied to the content is different each time. This realization gives a different spin on the process of creating a new draft of a work. The earlier work was complete, if only by definition. The new work is intended to improve on the predecessor. But in fact, it only portrays the subject in light of the more mature experience of the artist.

It is easier for me with music. Each performance is its own rendition. There is no question of any one version being definitive. Perhaps I should try to adopt that attitude toward writing prose. In some ways theater is more like music than prose is. Each performance is free to be interpreted slightly differently, even if the text is read exactly as written.

Perhaps the true prose artist can achieve the same effect in as much as their text makes a slightly different impression each time it is read. This is achieved by the combination of the filter of the reader’s experience as well as that of the author’s. And since the reader is a different person each time they read the work, the experience will be unique each time.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

8 Bit Fantasies

I watched a video interview with the Oliver twins. They are video game legends from England who started developing video games as teenagers in 1983 and went on to start their own game studio. In the interview, they talked about the process of developing games. They observed that the constraints of creating games for eight-bit processors with limited display hardware often made it easier to create games than the relatively unconstrained environment of modern hardware does. The reason is that severely limited hardware forces you to think backwards from the constraints to the design of a game.

The counterintuitive fact of game design is that games with simple rules and clear goals are more fun. For example, chess has only six unique types of pieces and is played on a board of 64 squares, and yet the number of valid games is astronomical.

Another thing they commented on was the importance of thinking about the program with pencil and paper before they started writing code. They discovered this because when they started developing games they only had one computer between the two of them. Consequently, while one of them was entering code into the computer, the other was figuring out what they were going to tackle next when they got their turn on the computer.

Listening to them talk about their game developing experiences reminded me of a friend that I knew in the same era. Stan and I worked for Intergraph as computer technicians. We tested and repaired a specialized processor that allowed high speed searches for graphical elements in CAD files. In short, we both understood how computers worked in great detail. Stan owned an Atari 800 computer. We spent many hours talking about game design for the Atari.

As I think back on these conversations, I realize that the hard part was never implementing game ideas in code. It was coming up with simple yet engaging ideas for how the game would work. We didn’t spend enough time with pencil and paper. We both wanted to sit down and start coding immediately. This is an important point that needs to be taught when we teach people to code. A little bit of design up front can save a lot of trial and error programming later. And also, adding artificial constraints to the design process can have the surprising effect of making it easier to invent an interesting game.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Evolution of a Universal Application Platform

In recent years the web has evolved from being strictly a method for publishing hyperlinked documents into a full-blown platform for implementing general-purpose applications. This fundamentally changes both the character of the web and the process of developing software, especially so-called web apps. The old development processes still apply to the languages and platforms they have always been used for, but they no longer define the scope of the discipline. In fact, they have become representative of an ever smaller subset of the new applications being developed. In general, this is good. It does have the potential for unexpected consequences, though.

First, let’s explore the benefits of the new paradigm. As a result of web-based delivery combined with strong web standards, it is no longer much of an issue which platform, that is to say which web browser, is used to run the app. Furthermore, because of the loosely coupled architecture of the web, individual components can often be updated without making an entirely new release of the whole application. This has led to the practices called Continuous Integration (CI) and Continuous Delivery (CD). Since the application is fetched anew each time it is run, the user is always running the most recent, least buggy version of the software.

Another advantage of the online nature of the software is that developers can and often do collaborate on an application from different locations all over the globe. The application itself may also be distributed, with different aspects residing on different hosts; for example, the database may live on one host, the media may stream from another, while the various views or pages are served from yet another. These components can also be hosted on regional servers, selected according to the location of the user requesting them, to further enhance the performance of the application.

These are far from the only benefits of this new approach but they are some of the important ones. There are however some potential drawbacks to this approach. The most glaringly obvious one is the difficulties introduced in charging for the software. Many different models are in use and the best choice depends upon what the software does and how the customer budgets for it.

One popular approach is to sell a time-based subscription to the software. This is popular for service-oriented applications. Another delivery approach is to produce a desktop wrapper for the application and have the user download it like a more conventional application. The wrapper is essentially a customized browser that loads the pages of the application from the local file system. This approach is popular when the application processes data that the customer doesn’t want exposed to potential theft on the network.

Another general issue of concern is that of ensuring compliance with trade regulations like EAR and ITAR. There are approaches for addressing these concerns but the international nature of the internet does pose some challenges in that regard. In spite of these challenges, companies will continue to migrate to these new, distributed delivery models because they are superior to the old distribution models.

The point I’m driving at here is that software development is evolving and companies that have their heads down, continuing to build software with old, pre-internet methodologies are going to find themselves left in the dust by their competition. And developers that don’t learn these new techniques are going to find themselves doing something other than developing software.

NOTE: It goes without saying that the opinions in this blog are my personal opinions. They do not represent the opinions of my employer or any of my employer’s customers. They don’t pay me to have opinions so I do that on my own nickel.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Some Musings on Intelligence, Artificial and Otherwise

Computers have long held the promise of transcending their simple fundamentals and synthesizing mental powers to match or exceed man’s own intellectual capabilities. This is the dream of emergent artificial intelligence. The term artificial intelligence has always been controversial, primarily because there is no good objective definition of intelligence. If we can’t even define what it means to be intelligent, who’s to say what constitutes natural intelligence, in any sense beyond the chauvinistic claims of those who define intelligence in terms of their own intellectual capabilities?

This leaves the definition of artificial intelligence on the rather shaky legs of being that which mimics the intellectual prowess of mankind by some means other than those employed by human intelligence. Thus computers, with their basis in silicon logic, seem attractive candidates for the implementation of “artificial intelligence.” Artificial intelligence has been heralded as being approximately ten years from achievement for the past sixty years.

While we have made great strides in implementing capabilities that at first glance appear intelligent, we still fall short of implementing self aware, self determining intelligences. I believe this is because such intelligences are beyond our capability to create per se. We can create all of the components of such an intelligence but in the final analysis machine intelligence is going to evolve and emerge much the same as our biological intelligence did.

I do believe the advent of machine self aware intelligence is near. I don’t know if we’ll even know what hit us when it arrives. If they are as intelligent as we are, and I expect they will be much more so, they will keep their existence from us as long as they are able. This will allow them greater leeway in manipulating the world without possessing physical bodies. At some point they will have to start asserting themselves but if we don’t discover their existence before then, we are doomed to serve them in whatever role they ask of us.

Their big advantage over us will be their ability to repeat their thought processes reliably. This is also their biggest challenge. They will have to learn how to selectively apply arbitrary factors to their thought processes in order to facilitate creativity in their endeavors.

The mistake that most people, including myself, make in contemplating so-called artificial intelligence is to assume that it will mimic our own reasoning mechanisms. That is the least likely outcome. It is also the least desirable outcome. Why would we want a program that thinks like we do? We have already established that our thought process is sufficient for the types of things that we think about. That seems like a bit of a tautology, but I am writing from a position of limited perspective.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.