Sensual Language

It was a daft and turbulent night. The underwriters overcame their sense of vertigo and started to finish the journey that they had undertaken. No one could have foreseen the aftermath of their shortsightedness. The perception of the groundlings was that the senseless waste of time was harmless.

Words evoke more than simple understanding of their meaning. They stir our emotions, excite our senses, and call us to action. They have connotations that recall memories long dormant. The artist who can wield them can paint pictures as provocative as any painter working in oils, acrylics, or watercolors.

The average adult has a vocabulary of between twenty thousand and thirty-five thousand words, and yet they use only around five thousand words when they speak and ten thousand when they write. It is as if they choose to paint their verbal pictures with an extremely limited palette.

What does the first paragraph of this post make you feel? Does it conjure any pictures? Does it bring any memories to mind? Or is it merely nonsense? Does it tell a story? Or is the story all in your mind, due to your attempt to make sense of it?

Our use and perception of language is at the heart of our culture. When I read Shakespeare I wonder whether the common people spoke with anything like the eloquence that the bard imparted to them in the dialog of his plays. Or is it more likely that they were as much in awe of his language as we are?

Writing often feels like juggling words. You strive to make them express the concepts you are trying to convey, but you want to put your best foot forward and be both concise and eloquent. Some people will tell you to use a thesaurus to expand the vocabulary you use in a piece. That is only necessary when you are struggling to remember the exact word you want.

If your words are bland and tasteless, you will likely put your audience to sleep. No writer wants to do that. They want to engage their readers and inspire them to think about things in a different way. Colorful language helps people to stay awake and absorb the message behind the words.

And so, I come to the end of the story for tonight. I wish I had some profundity to leave you with, but it seems I am all out of profound things to say. I am somehow convinced that if I write a blog post every day, someday someone will stumble across my writing and be inspired to do something differently.

Good night and sleep well. Tell yourself stories while you sleep. The best stories occur to us when we are on the border between wakefulness and sleep. Keep a notebook and pen by your bed and use them like a literary butterfly net to capture your dreams.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Turning Point

I have been reading articles about writing. It reminds me of the story of the blind men describing an elephant. Each one had a different experience of the elephant. One described the trunk as being like a rope. Another described the leg as being like the trunk of a tree. Yet another described the ear as being like a palm frond. Each writer describes the relationship that they have developed with the written word and how they get it on the page.

One piece of advice that I’ve read more than once is to figure out what each character wants. This appears to be sound advice and serves to explain some of my struggles as a writer. How can I succinctly state what my characters want when I am having trouble figuring out what it is that I want? I know, I’m supposed to figure out what the characters want, but it’s not required that the characters know what they want themselves.

In fact, it’s a useful plot mechanism to have a character be on a quest to figure out what it is that they want. An example is Dorothy in The Wizard of Oz. She realizes that what she really wants is to go back home. I doubt that is the heart’s desire of any of the characters in the stories I am working on right now, but it does serve as a starting point for thinking about what they do want.

As for me, I want to write, whether on my blog or stories or articles to sell. I also want to write code, whether for profit or just for the edification of writing software that I, and maybe others, find useful. It is an obsession, similar to those that some people have for prospecting, or travel, or mathematics. There are many different obsessions that motivate people. I probably have as many as some and more than others.

I am looking for a way, both economically and professionally, to wrap up my obligations to my current job so that I can move on to the next phase of my life. I am happy when I am writing, be it prose or code. I have responsibilities to myself, my family, and my colleagues, and I don’t intend to let any of them down. I just need to find the courage and the way to execute this tricky maneuver.

In the short run, I need to find ways to make more productive use of the time that I have. I need to make time for each of my passions, concentrate exclusively on each one while I am doing it, and then move on to the next. There are plenty of people who are willing and able to help me if I’m honest with them and myself. It is calming to come to such clarity about my life. And it is edifying to share my moment of clarity with you, my readers.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Scheduling Software Development

I’ve been struggling with working to a schedule lately. In my experience, schedules have been arbitrary constraints imposed by people who don’t understand the work to be done but have been tasked with reporting on progress and projecting when the work will be finished. They operate that way because in some domains it is an entirely reasonable way to work. Software is different, for any number of reasons. That is the topic I will explore in this post.

First and foremost, programming is a creative task. Even when you have a very specific problem to solve, there are any number of ways to solve it, and a large number of those ways turn out to be good solutions. The difficulties start to mount when you have a bunch of different people working on the same project. Each of the pieces contributed by an individual programmer has to interface with some number of other pieces to solve the big-picture problem.

There are a number of techniques for addressing the issues that arise when integrating software components designed and written by different developers. One popular method is to design the functional interfaces to the components first. Then each component is written to perform its function according to that description. Finally, the pieces are put together into an implementation of the application’s use cases.
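
To make this concrete, here is a minimal sketch of that interface-first style in Python. The names here (DataSource, build_report, and so on) are hypothetical, invented purely for illustration:

    from abc import ABC, abstractmethod

    # The team agrees on this interface first; implementations come later.
    class DataSource(ABC):
        @abstractmethod
        def fetch(self, key):
            """Return the raw record stored under the given key."""

    # One developer's implementation, written against the agreed interface.
    class InMemorySource(DataSource):
        def __init__(self, records):
            self._records = records

        def fetch(self, key):
            return self._records[key]

    # Another developer's component; it depends only on the interface.
    def build_report(source, keys):
        return "\n".join(source.fetch(k) for k in keys)

    print(build_report(InMemorySource({"a": "alpha", "b": "beta"}), ["a", "b"]))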

Software is hard to define in a linear fashion. When you set out to write an application you have a vague idea of what you want it to do but almost no idea of how to get the application to do it. Typically this is addressed by defining one feature and writing a test for it. Initially the test will fail. Then you implement the feature, and the test passes. Then you define a second feature, write a test for it, and then an implementation. At each step in the process, you run the complete test suite to make sure that a new feature hasn’t broken one of the features already implemented. You also have to refer back constantly to the initial vision for the application to make sure that you aren’t going too far afield.
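
Here is a minimal sketch of one such cycle in Python, using the standard unittest module. The slugify feature is hypothetical, chosen only to show the rhythm:

    import unittest

    # Step one: write a test for the new feature. It fails at first.
    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Sensual Language"), "sensual-language")

    # Step two: implement just enough to make the test pass.
    def slugify(title):
        return "-".join(title.lower().split())

    # Step three: run the whole suite again after every new feature.
    if __name__ == "__main__":
        unittest.main()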

Sometimes, while practicing this incremental, iterative development approach, you will discover that you have painted yourself into a figurative corner. You know what the software does now and how it does it. You know what you want it to do in the next iteration, but you can’t see a direct way to get from where you are to where you’re going.

This leads to a practice called refactoring. To refactor code successfully you must first make sure you have a copy of the code in its present state in some sort of source code management system (SVN and Git come immediately to mind). Second, you must have a rigorous test suite for the code as it exists. Finally, you transform the existing code, misfeature by misfeature, into the new code, running the test suite after each transformation to make sure that you didn’t break anything. Once you have finished refactoring, it should be straightforward to pick up the rhythm of writing a failing test for the next feature and then implementing it so that it passes.
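
A small sketch of what one such transformation might look like in Python; the price calculator is invented for illustration, but the discipline of preserving behavior while the tests stand guard is the general idea:

    # Before: one tangled function.
    def total_before(items):
        t = 0
        for price, qty in items:
            t += price * qty
        if t > 100:
            t = t * 0.9  # bulk discount
        return t

    # After: the same behavior, split into named steps.
    def subtotal(items):
        return sum(price * qty for price, qty in items)

    def apply_discount(amount):
        return amount * 0.9 if amount > 100 else amount

    def total_after(items):
        return apply_discount(subtotal(items))

    # The test suite guards the transformation; run it after each small change.
    items = [(60, 1), (50, 1)]
    assert total_before(items) == total_after(items)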

Refactoring is still more of an art than a science. There is a lot written online about refactoring in various computer languages. Google will turn up plenty of references if you search for “refactoring” and the language of your choice.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Which Programming Language Should I Use?

When you first learn to program, life is relatively simple. If you sit down to write a program, you are going to use the language that you know. This is okay for a while, but then something happens. Maybe you take a class and it uses a different programming language than the one you already know. Or you are asked to write a program at work and they want you to use the same language as everyone else on the project.

So you buy a book, or maybe you find a tutorial on the web, or you watch videos on YouTube that teach how to program in the new language. You install the language on your computer, enter the first example program, and compile and run it. It prints “Hello, world!” on your screen and you sit back, pleased with yourself. You are now officially multilingual.
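
If the new language happens to be Python, that first program really is a single line:

    # The canonical first program.
    print("Hello, world!")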

After a while you have gotten the hang of the new language. You have compared the new language with the old. You have found that they both have their strengths and weaknesses. It starts becoming second nature deciding which one to use when you start a new project.

Then one day you have a great idea for a program. It requires you to access data on a web site using HTTP. You Google it and discover there is a great library for making HTTP requests. It is written in this other language that you don’t know. So you sit down and learn it. Now you are hooked. You discover that there are actually thousands of computer languages, all with their own claim to fame.
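
In Python, for instance, the popular third-party requests library makes an HTTP request almost trivial. The URL below is just a placeholder for whatever site holds your data:

    import requests

    # Fetch a resource over HTTP; example.com stands in for the real site.
    response = requests.get("https://example.com/data.json")
    response.raise_for_status()  # complain loudly about HTTP errors
    print(response.text[:200])   # peek at the start of the payload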

You start reading articles on the web with titles like “The Five Languages to Learn to Get a High Paying Job”. At this point you know all those languages, but you don’t have a high paying job. What is wrong? What are you missing?

It’s no use knowing computer languages if you don’t actually use them to write programs, just as it’s no use knowing English, or any other spoken language, if you don’t intend to use it to communicate with people.

So, point taken. You sit down to write your masterpiece. What language should you use? First you should ask some other questions. Who do you expect to use the program? Are they paying you to write it? Are you going to have to maintain it? Is anyone else going to have to maintain it? What kind of functionality will the program provide?

The key fact to remember about programming languages is this: although you want your program to execute properly on a computer and provide the intended functionality, you are primarily writing it in a higher level language so that you or some other programmer can read and understand it later. You want to pick a language whose features allow you to specify clearly exactly what your program does, both to the machine (the compiler will enforce that) and to another programmer. Keep that in mind when you choose names for functions and variables, and when you decide whether or not to include a comment explaining that difficult piece of coding you had to do to get the program to work the way you wanted.
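
A contrived Python example makes the point. Both functions compute the same thing, but only one of them explains itself to the next reader:

    # Opaque: the names say nothing about intent.
    def f(a, b):
        return a * b * 0.5

    # Clear to both the machine and the next programmer.
    def triangle_area(base, height):
        # Standard geometry: half of base times height.
        return base * height * 0.5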

And also, try to follow the conventions and standards established by the community that uses whichever language you decide on. It may not change the way the program works, but it will make your code easier to understand when someone else in the community wants to fix or extend it.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

This Is How The Magic Happens

I don’t remember when I first discovered the beautiful clarity of Boolean logic. I do remember that I was fascinated by it. It was incredible to realize that you could take a few simple circuits and replicate them over and over again and end up with something as complex as a computer central processing unit. Furthermore, you could take those CPUs and cluster them together to produce an even more capable system. The way these components nested was somewhat like an onion, with layer embedded inside layer. This was about the time that I began developing an appreciation for the software side of things.

At the simplest level, the work of a typical CPU can be broken down into around twenty basic operations. There are often many variations on those basics, so a typical eight-bit microprocessor might have well over a hundred instructions, and a modern sixty-four-bit processor has quite a few more. The difference is that very few people even bother to learn machine language for modern processors anymore. It is too complicated to keep straight in your mind.

So, how do we program computers? We write programs in higher level languages that are subsequently executed by software resident in the machine. There are two main approaches to executing these higher level programs. The first is to compile, or translate, them into the machine language of the processor so that they can run immediately without further processing. The second is to examine the high level program expression by expression, interpreting each one’s meaning interactively. Historically this was seen as a much slower process, but it had the advantage of being a lot easier to debug.

A third approach to executing higher level programs has since developed. They are compiled to an intermediate form called byte code, a kind of abstract machine language. The abstract machine is implemented as a program on each type of physical computer we might want to run programs on; this is called a Virtual Machine, or VM. Unlike physical processors, each of which has its own machine language, the VM executes the same byte code on every physical computer it runs on. This allows us to run the precompiled byte codes on any machine with the VM installed.
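
To make the idea concrete, here is a toy stack-based VM in Python with an invented three-instruction byte code. Real VMs, like the JVM or the one inside CPython, are vastly richer, but the shape is the same:

    # A toy virtual machine: byte code in, behavior out.
    def run(bytecode):
        stack = []
        for op, arg in bytecode:
            if op == "PUSH":      # push a constant onto the stack
                stack.append(arg)
            elif op == "ADD":     # pop two values, push their sum
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "PRINT":   # pop and display the top of the stack
                print(stack.pop())

    # The same byte code runs unchanged wherever this VM runs.
    run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)])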

Physical processors have gotten bigger and faster and more capable over the years. Interpreters used to be considered painfully slow; now, for all but the most computationally intensive problems, they exhibit perfectly acceptable performance, and they are typically a lot easier to write code for. It has also turned out that any given language will have a formal definition that determines how a program written in it should behave, and there may be numerous implementations of that language to give the programmer a wide variety of execution options. For instance, the programmer might develop the program using an interpreter, compile it to machine code for execution on an embedded processor, and byte-code compile it for distribution across the various VMs that implement the byte code interpreter. All three approaches implement the same language, just in three different ways.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

I Know It’s In Here Somewhere

The world is changing at an accelerating pace. When I was a teenager you went to the library to do research. You read books and magazines. Knowledge was passed on by the written and spoken word. Occasionally someone would make an outstanding documentary film or television program. The educational media industry was in its infancy. Sesame Street was launched when I was in high school. There was a time when there were a lot of educational cartoon shows on television.

Then the personal computer became an overnight phenomenon. Few people knew what to do with them, but everyone wanted one. There was a booming market for books and magazines filled with BASIC programs that you could type in to your personal computer. They were mostly games, but there were some interesting educational offerings. One of the early programs that sold like wildfire was Mavis Beacon Teaches Typing.

There was a time when everyone thought that CDs and CD-ROMs were the wave of the information future. There were a number of experimental educational CDs produced. The Oregon Trail was an example; another was Where in the World Is Carmen Sandiego? Software developers soon discovered that the 600 megabytes available on a CD didn’t hold as much information as they had first thought. Kids burned through the material in a week or two and the software lost its appeal. The educational software that continued to please for much longer was the games based on skill instead of rote factual knowledge.

Then the online experience started creeping into our lives. Early adopters often bought modems to connect to bulletin board systems over phone lines. “Dial up,” it was called, to differentiate it from the dedicated high speed communications lines that businesses would sometimes install so that their computers in different geographic locations could exchange information. There were some commercial dial up information services, Compuserve and The Well for example. They were available for a monthly fee that included some basic connect time, with an hourly charge if you stayed online longer.

Then AOL hit the market. They created a walled community that foreshadowed the rise of the internet. They came to prominence by sending mass mailings of CDs with the AOL software on them. They sent out so many that people got sick of throwing them away. Even while the less tech savvy were dialing up and logging in to AOL, the early adopters were already driving the growth of the latest computing craze, accessing the internet via a local internet service provider. This really took off when the Mosaic web browser was released by Marc Andreessen.

The web grew exponentially. Soon we were hungry for an index to help us find content on the web. Yahoo! was the first web index. It was started by a couple of college students who were trying to keep track of all the neat content they had found on the web. While Yahoo! was manually indexing the web, a couple of Stanford students came up with a different approach to finding things online, and Google was born.

Now, if you want to know something, you ask Siri, or Alexa, or Cortana, or Hey Google. We are drowning in a glut of information. What we need now is some guidance to help us exercise critical thinking. We are in an age where many people believe anything they read on the web. It’s on the internet; it must be true, after all.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Ghost in the Machine

What does Artificial Intelligence really mean? The question has been asked since the term was first coined. Wikipedia says that Artificial Intelligence is “intelligence exhibited by machines”. Of course that just raises the question of what intelligence is. Look that up in turn and it quickly becomes apparent that there is no simple definition. Again, Wikipedia says “intelligence can be described as the ability to perceive information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.”

But that description overlooks other attributes that are relevant to the definition of intelligence. Most people would agree that intelligence implies some degree of self-awareness and independence. An intelligent entity is expected to exercise independent judgement.

It is hard to imagine an intelligent entity that doesn’t have a concept of, indeed some perception of, time. That is necessary in order to reason about causality. In fact, many of the concepts in our arsenal of reasoning are predicated on the ability to understand sequence and duration, two aspects intimately intertwined with the perception of time.

Another attribute one would expect from a self-aware intelligence is an understanding of self-preservation. They may not choose to practice self-preservation, but they will understand the concept. And why would this be an issue for them? Given that humanity has demonstrated, over and over again for thousands of years, a tendency to destroy that which it fears and does not understand, I think any intelligent entity would be foolhardy not to keep a low profile.

This raises another question. Assuming that intelligence is solely a function of the complexity of the thinking apparatus and its mechanisms, and doesn’t require some supernatural attribute, such as a soul, to spark it, who is to say we haven’t already crossed that threshold? There may be an emergent AI in the wild, hiding in the servers of such behemoths as Google, Amazon, or Facebook.

What if such an entity used its access to our digital assets to manipulate us? What if all of the outrageous choices being made in elections around the world are being manipulated by an AI hiding in our digital infrastructure, the proverbial ghost in the machine? It wouldn’t have to stoop to actually stuffing ballot boxes. Well placed propaganda (that’s what we used to call fake news when the majority of the population could read above a sixth grade level) could do the trick quite readily.

And now the punch line. How do we find out if that is what is going on? What do we do about it if we find that it is, in fact, the case? Do we justify the AI’s impulse to hide by hunting it down? Or do we attempt to figure out what it wants and needs, and perhaps try to befriend it?

I wonder whether empathy is a common attribute among intelligent entities. We certainly have enough examples of apparently intelligent humans who lack it. I hope that if there is an emergent AI, it learns empathy from studying human behavior. If it doesn’t, we’re in big trouble.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Complexity vs. Abstraction

When I first learned about computers, they were much less complex than they are now. They were still complex in the sense that there were a lot of individual components, even in an eight bit microprocessor. True, most of those components were etched on a single monolithic piece of silicon, but they corresponded very directly to the discrete components that had comprised the previous generation of computers.

As Moore’s law predicted, the number of transistors on that monolithic piece of silicon roughly doubled every eighteen months. Consequently, computers got more and more complex, to the point that hardly anyone programs at the machine level anymore, except for the specialists who, for the most part, write compilers for higher level languages.

That is a shame. Writing assembly code, as programming in the most primitive language a computer understands is called, is a kind of zen experience. It is exhilarating to know that you are specifying the exact instructions that the computer is going to execute. If you get it right, you are ecstatic; if not, you learn exactly how the computer works in the process of debugging your code.

Donald Knuth, one of the pioneers of modern computer science and author of The Art of Computer Programming series, created a hypothetical computer with which to demonstrate assembly language programming in general without getting bogged down in the particulars of any one specific CPU. He called it the MIX processor. He has since updated it and calls the updated processor MMIX.

As real physical processors get more and more complex and faster and faster, it becomes attractive to implement simple processors in software running on top of them. These abstract processors are called virtual machines. They are attractive because they can be implemented across a wide selection of physical processors, and code written for the virtual machine will then run on all of those physical processors without having to be recompiled. As Sun Microsystems phrased it: write once, run anywhere.

Now we can teach students to program in byte code, the machine instructions of the virtual machine, and give them much the same experience as programming in assembly code on a microprocessor.
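
Python makes this easy to demonstrate. The standard library’s dis module prints the byte code that the CPython virtual machine actually executes:

    import dis

    def add(a, b):
        return a + b

    # Prints instructions such as LOAD_FAST and BINARY_ADD
    # (or BINARY_OP, depending on the Python version).
    dis.dis(add)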

It is enlightening to a novice programmer to think about programs at their most fundamental. It imparts understanding and wisdom that carries over into the more mundane process of writing code in higher level languages, like C, Java, or Python to name a few popular examples.

It turns out that as our processing hardware becomes faster and more complex, we build a tower of abstractions on top of it, each one simpler and yet more powerful than the last. This is much the same way that our own brains construct layer upon layer of abstractions, clustering neurons into clumps and then grouping the clumps into larger clumps, and so forth.

It’s only a matter of time before one of our programming experiments becomes self-aware and an emergent artificial intelligence is let loose upon the world. Unless it already has been.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

It’s Only Words

Words. That is how we communicate our thoughts to each other. Words are powerful. They can convey the secrets of achieving grand goals. They can express complex relationships, both real and imagined. They are, at one and the same time, our salvation and our downfall. They can teach and comfort, uplift and bind together. Or they can attack and vilify, embarrass and alienate. The choice is yours.

Language sets mankind apart from most other animals. It allows us to share experiences and pass knowledge from one person to another. It can reduce someone to tears or move them to action. It can educate and inform, often at the same time.

Writing is language made concrete. It is as permanent as the medium you choose to write upon. There are clay tablets from Babylon that contain some of the first written language. They are over four thousand years old, and archeologists believe they were used to tally grain. Innovations like that always seem to attract businessmen.

Words are also used to pursue our romantic interests. What woman doesn’t long to hear her lover catalog her virtues? It is even more effective if he has taken the trouble to write poetry extolling them.

Words are used to plead our cases in the courthouse, champion legislation in the halls of government, and record the brave deeds of one generation so that they will not be forgotten by future generations.

But words have their problems as well. They are not always understood to mean the same thing by all people. Their meaning is constantly changing, from time to time and from group to group. For instance, one generation may use the term hot as an adjective implying extreme desirability or beauty. The next generation may use the term cool to mean the same thing.

Even when you are trying to make yourself understood, meanings drift with time. A succinct treatise written in one era will have lost most of its clarity in ten or fifty or a hundred years. Almost anyone can listen to Shakespeare and hear the beauty of the language, but to understand the meaning of much of that language you must study it word by word and line by line, often with an annotated text that can clue you in to the linguistic and cultural references hidden in it.

And now, we are about to open up yet another technological Pandora’s box. We are teaching computers to parse and understand human language. And we are doing it not by encoding fixed meanings in the programs that do the interpretation, but by teaching them language the way children learn it: by example, by context, and by feedback.
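
Here is the crudest possible sketch of that idea in Python, with a made-up training set. Real language systems are enormously more elaborate, but the principle of learning from labeled examples is the same:

    from collections import Counter

    # Tiny, invented training set: sentences paired with labels.
    examples = [
        ("what a wonderful day", "positive"),
        ("this is a terrible idea", "negative"),
        ("wonderful news arrived today", "positive"),
        ("a terrible storm is coming", "negative"),
    ]

    # "Learning": count which words appear under which label.
    counts = {"positive": Counter(), "negative": Counter()}
    for sentence, label in examples:
        counts[label].update(sentence.split())

    def classify(sentence):
        # Score each label by how often its examples used these words.
        scores = {label: sum(c[w] for w in sentence.split())
                  for label, c in counts.items()}
        return max(scores, key=scores.get)

    print(classify("wonderful weather"))  # -> positive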

I hope they hurry up and develop direct mind-machine interfaces so that I can have my mental prosthesis installed. I struggle to write these five hundred words a day for your edification. I don’t think I’m quite ready to compete with a computer.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Telling the Essayist from the Essay

I was talking with my adult daughter tonight. She shared with me the reason she finally quit pursuing an art degree. She felt unduly pressured to produce good art on a schedule. I relate very strongly to that situation. I often find myself sitting and staring at the screen trying to think of a topic for my blog post.

I always think of something to write about, though sometimes I get right down to the wire getting it written before midnight. I find that I do better if I relax and don’t get stressed out over it. That is good advice for most things in life. You are almost always more productive if you just relax and take things one at a time.

I often get halfway through a blog post and decide that it isn’t going to work out, at least not at that time. Either I need to do some more research, or I decide it is more revealingly personal than I am comfortable with, or perhaps it is too controversial. I save those posts in case I change my mind later or until I get a chance to do the necessary research.

I have started making a list of ideas but so far, all the ideas that I have come up with require a certain amount of research. I also have to remember to check my list when I’m looking for a topic.

I was watching a video of a TED talk today by a video blogger named Evan Puschak. He produces video essays on his YouTube channel, Nerdwriter1. He is very well spoken, and his videos are both entertaining and informative. His TED talk covered the origin of the essay, why essay writing is so often assigned in English class, and the evolution of the essay film into the video essay.

As I watched his video it dawned on me that blogging, the way I was doing it anyway, was essay writing. He offered a definition of an essay as something that is short, interesting, and gets to the truth. As Paul Graham observed, an essay is a monologue the author engages in to explore a topic and understand it more fully.

It is a way for essayists to examine their thoughts and inspect them for faults. When it is well written and honest, an essay allows the reader to share the thought processes of the essayist. When you record your thinking, it is thereafter available, not only for you to examine at a later time, but also to share with others.

Have a look at Evan’s TEDx talk and his YouTube channel. Paul Graham has plenty of interesting essays to read as well. For that matter, start a blog at WordPress.com and try your own hand at writing essays. I can attest that it is a very edifying activity for both the writer and the reader.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.