Evolution of Programming Part Three

In the last installment we discussed several of the popular paradigms of programming languages. We talked about Structured Programming, Object Oriented Programming, and Functional Programming. In this installment we are going to look at programs from a different perspective.

Early computers were operated as free-standing machines. They could receive input from tape drives, disk drives, or keyboards. They could send output to printers, tape drives, disk drives, or video displays. They could send data to other computers over serial lines, but the transfers were typically initiated manually on both the sending and receiving computers.

Then various computer manufacturers started coming up with schemes for connecting multiple computers together and programming them to talk among themselves in more autonomous ways. These early networks were restricted in that they only operated between computers made by the same manufacturer running the same operating software.

Then the Defense Department’s R&D branch, DARPA, started funding research to build a computer network that would allow heterogeneous computers to talk to one another and would survive a nuclear attack. The idea was to build a set of network protocols that would detect the most efficient way to route data through the network and would adapt to the failure of any given network path by finding alternative paths.

The researchers who built the internet would hold workshops where they would get together, connect their computers, and attempt to get them to talk to each other. There was an agreement among them that the first ones to get their machines to talk would, by doing so, establish the definition of how that particular protocol worked. There was a lot of healthy competition to be the first to get each layer of the network to talk to the others.

I mentioned network layers above, and that deserves a little elaboration. Networks were built in layers, starting from the lowest level, which interfaced directly with the hardware and only transmitted and received data on behalf of the layer above it. Each successive layer added more sophisticated features, such as guaranteed delivery of data in the same order that it was sent, or guarantees that the data arrived intact. These layers were available for use by programmers in the form of libraries.
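The layering idea is easy to sketch in a few lines of Python. This is a toy two-layer stack, not any real protocol; the layer names and header formats here are invented purely for illustration:

```python
# Each layer wraps the payload with its own header on the way down and
# strips it on the way up. Real network stacks work the same way, just
# with binary headers and many more responsibilities per layer.

def transport_send(data):
    # The transport layer adds a sequence number so the receiver could
    # detect missing or reordered data.
    return link_send("SEQ=1|" + data)

def link_send(segment):
    # The link layer adds a length field and hands the frame to the "wire".
    return f"LEN={len(segment)}|" + segment

def link_receive(frame):
    # Strip the link header and pass the segment up to the next layer.
    header, segment = frame.split("|", 1)
    assert header == f"LEN={len(segment)}"
    return segment

def transport_receive(segment):
    # Strip the transport header and deliver the original data.
    header, data = segment.split("|", 1)
    assert header == "SEQ=1"
    return data

frame = transport_send("hello")
assert transport_receive(link_receive(frame)) == "hello"
```

Each layer only talks to the layer directly above and below it, which is exactly what let programmers use the higher layers as libraries without caring about the hardware underneath.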

The highest-level interface was known as the application layer. One of the first application protocols was the email protocol. It allowed someone on one computer to send email to someone on another computer in much the same manner as we do today.

Another early application protocol was the File Transfer Protocol, or FTP. The people who wrote these protocols soon learned that it was easier to debug them if the components of the protocol were composed of human-readable text fields. Thus an email consisted of the now familiar fields such as “TO: username@hostname.domain” and “SUBJECT: some descriptive text”. This convention was carried over to other protocols.
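To see why text fields made debugging so easy, here is a sketch in Python of assembling such a message. The field names follow the style quoted above; a real mail message follows the standard Internet message header format, which this deliberately simplifies:

```python
def make_message(to, subject, body):
    # Human-readable header fields, one per line, followed by a blank
    # line and then the body. Anyone watching the bytes on the wire
    # can read the message with no special tools.
    return (
        f"TO: {to}\n"
        f"SUBJECT: {subject}\n"
        "\n"
        f"{body}\n"
    )

msg = make_message("username@hostname.domain", "some descriptive text", "Hello!")
print(msg)
```

A protocol trace of this message is self-explanatory, which is precisely the property the early protocol authors were after.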

After the internet protocols were widely established and in use in computer centers around the world, the inevitable thing happened. A researcher at CERN named Tim Berners-Lee was trying to cobble together a system for scientists to share their papers with one another. Thanks to the computer typesetting software that was readily available at the time, the scientists were used to good-looking electronic documents with various typefaces and embedded graphics, photographs, and even mathematical equations. Tim Berners-Lee came up with a protocol that he called the HyperText Transfer Protocol (HTTP) that allowed the data in the papers to be exchanged along with all the supporting information, such as which fonts to use and where to find the images. While he was at it, he implemented a language called HyperText Markup Language (HTML) that had facilities for specifying the structure of the document content. One of the more clever components of HTML was the mechanism for making certain elements in the document act as links to other documents: if you clicked on one in the browser, as the document display program was called, the linked document was retrieved and replaced the first document in the browser.

This hypertext capability was incredibly powerful and caught on like wildfire. In fact, some people would say it was the beginning of another paradigm of programming, the hypertext document. The problem with the original hypertext specification was that it didn’t have any mechanism for the document author to extend HTML.

The browser manufacturers soon remedied that situation. Microsoft embedded their Visual Basic in their Internet Explorer. Netscape came up with a scripting language for their browser, initially called Mocha, then LiveScript, and finally JavaScript in an attempt to capitalize on the newfound popularity of Sun’s Java programming language. JavaScript never had any similarity to Java other than its name and a cursory resemblance in the look of the syntax.

JavaScript quickly gained a reputation for being a toy language. In fact it was a very powerful, if slightly buggy, language. It took several years before Google used JavaScript to implement Gmail and established that it was a powerful language to be reckoned with.

The main thing that JavaScript represented was a powerful language that was universally available across all operating systems and all computers. It also had a standard way of producing high-quality graphical output by way of HTML and Cascading Style Sheets (CSS). CSS was a technology that was added to HTML to allow the document author to specify how a document was to be displayed orthogonally to the structure of the document. Together these comprised a programming platform that ran on all computers and all operating systems without modification. The universal programming language was apparently born.

Sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.

Evolution of Programming Part Two

In the last installment we traced programming from an electrical engineering activity involving patch cables through assembly language and FORTRAN and then Lisp. There were many other early computer languages. Some were interpreted, like Lisp. Others were compiled, like FORTRAN. All of them sought to make it easier and faster to develop working programs. But they all overlooked one fundamental fact.

The purpose of programming languages was for programmers to communicate the details of their algorithms to other programmers. The production of an executable binary (or the interpretation of the source code by an interpreter) was a motivating by-product, but the same results could theoretically have been produced by typing the numeric instruction codes directly into the computer like the first programmers did.

High-level languages allowed programmers to examine their ideas in much the same way that an author of prose reads their manuscript. They facilitated experimentation, and they served as a shorthand for communicating the details of complex computational processes in terms that the human mind could grapple with.

There have been many paradigms of programming throughout the years. One of the first that was identified as such was Structured Programming. Many of the early languages had a very simple syntax for altering the flow of execution in a program. It typically consisted of a statement that evaluated an expression and then, based upon the value of the expression, caused execution either to continue with the next sequential statement in the program or to branch to another location in the program. This simple construct was how the program made decisions.

The problem was that in those early languages programmers often found themselves in situations where they wanted the program execution to branch to a different location unconditionally. This was accomplished by a GOTO statement. Both FORTRAN and Lisp had them in one form or another. The GOTO statement made it very difficult to follow the thread of execution of a program. Structured Programming asserted that all programs could be expressed using a small set of control structures: IF, THEN, ELSE, WHILE, UNTIL, and CASE, for example. The details of how those constructs work are not as important as the absence of the GOTO from them.
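The difference is easy to see in a modern language. Here is a short Python sketch of decision and loop logic written with only structured constructs; the function names are invented for illustration, and note that every block has one entry and one exit, with no jump targets anywhere:

```python
def classify(n):
    # IF / ELSE IF / ELSE: the decision reads top to bottom.
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

def first_even(numbers):
    # WHILE: the loop condition is stated once, up front, instead of
    # a conditional GOTO branching back to a labeled line.
    i = 0
    while i < len(numbers):
        if numbers[i] % 2 == 0:
            return numbers[i]
        i += 1
    return None  # reached only when the loop runs to completion
```

In a GOTO-era language the same logic would be a tangle of numbered lines and conditional jumps; here the shape of the text mirrors the shape of the control flow.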

Structured Programming did make programs easier to read, but it turned out there were still cases where GOTO was useful. Even so, it was considered a construct to be avoided if at all possible. Both FORTRAN and Lisp implemented constructs to make it possible to use Structured Programming techniques in them. A large number of other languages supported Structured Programming, notably Pascal and C.

The next popular programming paradigm was Object Oriented (OO) Programming. The idea was that you bundled data, stored in bins called fields, with the pieces of programs that operated on it, called methods. In the first OO language, Smalltalk, the idea was that objects sent messages to other objects. The messages had arguments that made them more specific. The objects would receive these messages, dispatch them to the methods that processed them and return the value that the method computed to the caller.

It turns out that Object Orientation was a very effective means of compartmentalizing abstractions. It made it easier for programmers to visualize their programs in terms of a community of cooperating abstractions.

OOP is still a popular paradigm today. Examples of modern object oriented languages include C++, Java, Ruby, Python, and many others. As it turns out, OOP didn’t replace Structured Programming. Rather, it extended it.

Another popular programming paradigm is functional programming. Surprisingly enough, Lisp was the first functional programming language. One of the key aspects of functional programming languages is the fact that the pieces of programs, called functions or methods, can be manipulated and stored just like any other data. They can be passed as arguments to other functions, and stored in variables to be recalled and executed later.

An example will help to clarify. Suppose that you had a program routine that sorted a list. In many languages that routine would only be able to process a list that contained all the same kind of data, perhaps all numbers or all text, because it would have to know how to compare any two elements in the list to see what order to put them in. In a functional language you could write a sort routine that took a comparison function as well as the list of items to sort. Then, if you passed in a list of numbers, you could pass in a comparison function that knew how to compare two numbers. If you passed in a list of text items, you could pass in a comparison function that knew how to compare two text items. The actual sort routine wouldn’t have to know what type of items were stored in the list.
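Here is that sort routine sketched in Python, which supports this style directly: `sorted` accepts a comparison function by way of `functools.cmp_to_key`. The helper names are invented for illustration:

```python
from functools import cmp_to_key

def sort_with(items, compare):
    # The sort routine never inspects the items itself; the ordering
    # decision is entirely delegated to the function passed in.
    return sorted(items, key=cmp_to_key(compare))

def compare_numbers(a, b):
    # Negative if a sorts first, positive if b sorts first.
    return a - b

def compare_text(a, b):
    # Compare case-insensitively; the sort routine neither knows nor cares.
    return (a.lower() > b.lower()) - (a.lower() < b.lower())

print(sort_with([3, 1, 2], compare_numbers))
print(sort_with(["b", "A", "c"], compare_text))
```

The same `sort_with` handles both calls; only the comparison function, passed around like any other piece of data, changes.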

Another aspect of functional languages is the concept of referential transparency. That is a very scary term that simply means that a function called with any given set of arguments will always return the same value, so that you can replace the call to the function with the value that it returns. This is a very good thing if you have a function that takes a lot of time to compute and gets called multiple times with the same arguments. You can save the result from the first call (a technique called memoization) and return it any time the function gets called again with those arguments, speeding up the performance of the program immensely.
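A minimal memoizing wrapper looks like this in Python; the Fibonacci function is just a stand-in for any expensive, referentially transparent computation:

```python
def memoize(fn):
    cache = {}
    def wrapper(*args):
        # Referential transparency makes this safe: the same arguments
        # always produce the same value, so a cached result is exactly
        # as good as recomputing it.
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040 -- instant with the cache, very slow without it
```

Without the cache this recursion recomputes the same values an exponential number of times; with it, each value is computed exactly once. (Python ships this as `functools.lru_cache`.)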

This brings us almost up to how the World Wide Web fits in but it is going to have to wait for part three. Sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.

Evolution of Programming Part One

At first every computer was designed and built by hand from scratch. The very first computers were programmed by connecting circuits together with patch cables. They were built with vacuum tubes as the transistor had not been invented yet. At that stage programming was primarily an electrical engineering task.

As the state of the art progressed, computers were still designed and built by hand, but there was an evolutionary resemblance between each successive unit. Around this time John von Neumann came up with the idea of storing programs in the computer’s memory so that they could be easily modified. Programming, while still a very specialized task, became less like hardware engineering and more like creating abstract mathematics. Programs were specific to the computers they were written for, but the concepts were applicable to other models of computers.

As computer manufacturers started building computers with semiconductors instead of tubes and built many computers with essentially the same design, programmers started sharing small routines to do common tasks like reading from an input device or writing to an output device. These small routines evolved into operating systems.

Up until this time programs were written out on special forms and then converted into punched cards or paper tape with holes in it. These media were then fed to the computer to load the program. The program was run by computer operators, who then collected any output generated and returned it along with the input media to the programmer. This was a time-consuming process. As anyone who has ever written a program can tell you, the first attempt rarely works exactly as you intended, so there was a lot of head scratching done over memory dumps to try to figure out what went wrong and fix the program so that it could be submitted to run another time.

About this time, someone had a heretical idea. Instead of humans laboriously converting their programs into the numerical codes that the computer processed directly, they would write a program that would allow the programmer to write the program using a symbolic character representation where each symbolic word corresponded to the numeric code of the machine instruction. This representation was called assembly language and it sped up the development of programs by a factor of ten or so.

The next big change to programming was the development of the so-called high-level language. The first such language was called FORTRAN, a word coined from the phrase FORmula TRANslation. It allowed engineers to specify programs in terms of the equations that they wanted to solve. This drastically improved productivity again, at least insofar as your program involved computing the solutions to equations.

The next high-level language was called Lisp, its name derived from the phrase LISt Processing. Lisp was designed to facilitate the manipulation of abstract symbols. It was based on the idea of a list of symbols. Each symbol was called an atom. These atoms were arranged in lists enclosed in parentheses. Lists could also contain other lists embedded within them, so that when they were written out they seemed to the uninitiated like a lot of arbitrary words with parentheses sprinkled liberally throughout.

The truth was, Lisp was a revolutionary advance in computing for a number of reasons. First and foremost, Lisp programs were written as lists, just like the data they operated on. This made it easy to write programs that read and wrote other programs, which made it possible for Lisp programs to reason about programs in a rudimentary way. The study of Computer Science exploded in a frenzy of research about the kinds of things that could be represented and computed by a program, and the research was largely done in Lisp at the key research centers.
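The code-as-data idea can be imitated in Python by representing expressions as nested lists and writing a tiny evaluator for them. This is a rough sketch, nothing like a full Lisp, but the shape of the data mirrors Lisp’s `(+ 1 (* 2 3))`:

```python
def eval_sexp(expr):
    # Numbers evaluate to themselves; a list is an operator followed
    # by its arguments, evaluated recursively.
    if not isinstance(expr, list):
        return expr
    op, *args = expr
    values = [eval_sexp(a) for a in args]
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

# Because the program IS a list, another program can inspect,
# transform, or construct it before evaluating it.
program = ["+", 1, ["*", 2, 3]]
print(eval_sexp(program))  # 7
```

That last point is the revolutionary one: `program` is ordinary data that other code can build and rewrite, which is what let Lisp programs manipulate programs.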

Lisp was also one of the first programming languages that you programmed interactively. Typically the programmer sat at a console and was presented with a prompt, often a greater-than character or a dollar sign. They would then type in a Lisp expression. The computer would read what they typed, evaluate it, and print the result. Then it would prompt for another line of input. This process was called the Read-Eval-Print Loop, or REPL for short.
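One turn of that loop can be sketched in Python, using Python’s own `eval` as a stand-in for a Lisp evaluator (an illustrative shortcut, not how a real Lisp works):

```python
def repl_once(line, env):
    # Read: take one line of source text.
    # Eval: compute its value in the given environment.
    # Print: return the value's printed representation.
    value = eval(line, env)
    return repr(value)

# A full REPL simply wraps this in a loop: prompt, read a line,
# evaluate it, print the result, and prompt again.
env = {}
print("> 2 + 3")
print(repl_once("2 + 3", env))
```

The immediate feedback of this cycle is what made the exploratory, piecemeal style of programming described below possible.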

This style of programming encouraged the programmer to explore the problem domain piecemeal instead of spending days designing solutions that might not work out when they were finally executed.

A third important attribute of Lisp programs was that they were largely independent of the details of the underlying hardware. Since they were typically stored in human-readable text source form, they could be easily moved from one type of computer to another. There was a bit of up-front effort to implement the Lisp language on the new computer, but the application programs moved over rather smoothly.

It is interesting to note that as later languages introduced new features to the programming community, it was often found that these features had been pioneered by the early computer science researchers using Lisp.

In part two of this article, I will trace the evolution from Lisp to the World Wide Web. In the meantime, sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.

In Which Pixie Finds a New Home

Belle got a new little sister today. Belle is our eight-year-old Maltipoo. She is also Pam’s Emotional Support Animal. We got Pixie, our new Maltipoo puppy, so that Belle could help train her as Pam’s next Emotional Support Animal. Belle is a very smart dog. She can tell when Pam is about to have a migraine and let her know so that she can take her medicine to alleviate it. We are hoping that Pixie can learn to do the same thing.

Right now, Pixie is a cute little bright-eyed bundle of hair. Maltipoos have hair instead of fur. Belle is mostly ignoring her. Cory sniffed her and hissed. It is going to take a while before things settle down to some semblance of normal around here.

We looked at seven dogs altogether today. The choice was clear. Pixie had the intelligence and temperament that we were looking for. She slept most of the way home and then curled up on my belly while Pam and I talked.

It’s always difficult integrating a new member of the family but I think Pixie is well on her way to being accepted by her brother and sister. Sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.

He’s Safe!

It’s late Friday night. Soon it will be early Saturday morning. The quiet in the house is deafening. The fan rattles, the air conditioner whirs. Everyone with any sense is in bed. You sit in front of the keyboard writing one false start after another. Only you don’t rip them from the typewriter and wad them up and throw them over your shoulder. Instead, you save them in a file and mark them as a draft. You may come back to them later and make something of them.

This is what it’s like to be a new writer that is pushing themselves to grow beyond their comfort zone. You keep telling yourself to make a list of potential blog topics but you still find yourself racing the deadline of midnight. You are determined to keep your pledge to write a blog a day.

What would be the consequences if you missed a day? Disappointment in yourself for not keeping your promise? Would you give up on the project all together? No, you’re made of sterner stuff than that. You would sit back down at that keyboard and write two blogs the next day to make up for the one that you missed the day before.

Let’s hope that this writer doesn’t have to find out what he would do. Let’s hope that he keeps sliding in under the wire. Let’s hope that these posts are interesting enough that he doesn’t lose both of his readers to boredom. Sweet dreams, don’t forget to tell the ones you love that you love them, and most important, be kind.

Keeping It Short

When you get up at five thirty in the morning, eleven o’clock at night is late. At both times, you are likely to feel drowsy and have difficulty completing sentences. Intelligent people would take this as a cue to write their blog posts at times when they were not thus affected.

It is not the assertion of this blog that its author is not intelligent. It is the assertion of this blog that its author is nodding off at fairly regular intervals. Consequently, it remains only to say: have sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.

Diagnosis: Impostor Syndrome

There is a malady that often afflicts creative types. It is called Impostor Syndrome. It is the feeling that one gets when they find themselves being recognized for skills that they are not sure they have. For example, artists early in their career often doubt their bona fides as artists. They have spent their youth in awe of the masters that actually make a living doing the things that they love. When they start to have some success they feel like someone is going to knock on the door and tell them, “Okay. You’ve had your fun. Now it’s time to get a real job.”

Artists aren’t the only ones afflicted with Impostor Syndrome, though. The software developer works in a field that is constantly changing. New languages and tools are developed so fast that there are few, if any, experts in any of them. You see ads on job forums looking for candidates with five years of experience in a technology that has only existed for two years at most. Often the only way to get these jobs is to step up and say you know something that you don’t. Then, if you get the job, you hustle like mad to learn the skills that you claimed you already have.

Needless to say, this causes a good deal of anxiety among software developers working on the bleeding edge of technology. It is a strange feeling that is unlike most other types of anxiety. Most anxiety abates when the fears you are anxious about turn out to be unfounded. In the case of Impostor Syndrome, the fears are founded until such time as you demonstrate that they aren’t by actually learning the skills that you have claimed.

When you finally reach the point where you can contribute to a project that you are working on under the shadow of Impostor Syndrome, the relief is palpable. It is an emotional roller coaster ride that takes a kind of adrenaline junkie personality type to enjoy. The best advice if you find yourself in this position is to take a deep breath and dive in. After all, you were looking for a job when you found this one.

Sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.

Prescription for a Program

Here is one way to solve a difficult problem. It is described in the context of developing a software solution but the process can be similar for a broad selection of problem domains.

First, ask questions. Ask lots of questions. Ask every question that you can think of. Questions are more important than answers, especially at this stage. Do not be tempted to try to answer these questions at this point. If you look for answers too early, you may stop asking questions before you’ve thought of the important ones.

Write them down as you ask them. You’ll be surprised at how quickly you will forget them if you don’t write them down. Also, if you write them down you can read them later and evaluate them from a fresh perspective. Not only can you read them later, you should look over what you’ve written. See if you have forgotten anything. See if there are any patterns to be discerned among them.

At this point, you can start looking for answers. That doesn’t mean that you shouldn’t capture any good questions that occur to you while you do. Consult with people that are familiar with the problem. In the case of a software project that would include the intended users of the program.

Write a concise description of the problem as you understand it. Review the questions and any answers that you’ve found to see if you have overlooked any details in your problem description.

Next, imagine potential solutions. Write them down as you think of them. Frame them in the form of stories from the perspective of the user of the program. Try to think of several different approaches. Read what you have written and see if any of these stories can be broken down into smaller stories. Keep breaking big stories into collections of smaller stories until you feel like you could write a program that implements one of the small stories.

At some point, pick one of the small stories. You might pick an easy one. That will let you see results quickly and build your confidence. You might pick a hard story. You may have to struggle more to implement it but you will have a sense of accomplishment when you are done with it. After implementing each story you should write a test framework that demonstrates that it works.

This description has been written as a linear sequence but often in practice it unfolds iteratively. You start out asking questions. You think you are ready to look for answers to them but you think of more questions. The more you learn, the more questions that you have.

As you start imagining solutions your understanding of the problem may be clarified so you can revise the problem description. You may start to implement a story and decide that it should be broken into smaller stories. You may think of more questions at any stage. This is as it should be.

Don’t be afraid to start trying to implement a solution. There is such a thing as analysis paralysis. Software is cheap. The raw material for it is ideas. The principal cost is labor, and that is relatively cheap in the broad scope of things. Do experiments along the way to help you understand the problem better. Experiments can also inspire story development.

Finally, understand that you will rarely find a problem that you will be able to completely solve. Usually the best you will be able to do is create a solution that is good enough. It remains for you to decide when you’ve achieved that stage.

This sounds simple, but it is hard work. Just remember that you haven’t failed until you quit trying. Sometimes a good night’s sleep can inspire new perspectives on the problem. Sweet dreams, don’t forget to tell the people you love that you love them, and most important, be kind.

Job vs. Profession

The difference between a job and a profession is a matter of attitude. A professional owes their motivation to a passion for the work at hand. On the other hand, having a job is usually just a matter of trading a certain amount of your time and skills for money and other considerations, for instance health care benefits and vacation. The motivation is often primarily monetary.

A professional practices their profession. A laborer does the work that is at hand. A professional is not happy when required to do too much outside of their profession. Most professions require special training and experience. Professionals will sometimes take low paid or even unpaid internships to acquire experience in their chosen profession.

A sad situation sometimes arises where an employer does not recognize an employee as a professional and gives them assignments that fall outside of their domain of professional expertise. The result is made even more poignant when said employee is well paid. The colloquial term is that they are wearing golden handcuffs.

The solution to this situation would seem to be for the employee to seek employment somewhere they can practice their profession. Unfortunately there are often complicating circumstances that make that difficult, if not impossible.

It is a sad situation to observe. It is a sad situation to be in. The important thing to remember is that all things are possible. All you have to do is imagine them clearly enough and watch for the opportunity for change to present itself. You must manifest your dreams.

Sweet dreams, remember to tell the people that you love that you love them, and most important of all, be kind.

The Ayes Have It

Writing without the use of the personal pronoun is challenging. It requires a confidence that is hard to muster. The perspective is implied and yet the resulting prose is stronger when it is written in that fashion. Points are asserted and it is left to the reader to evaluate their veracity.

It requires the author to think about the arguments they will make and the facts they will assert. When a statement is made in this way, there are no apologies to soften it. The reader knows who is making the assertions and will hold the author responsible for them.

This style results in simpler, clearer prose. There are no words wasted on personal appeals. The prose has been trimmed to the bone. It may not suit all purposes but it is the best way to present factual narratives.

Our educational system has become lax in teaching its students concise thinking and clear writing. It is left for other avenues of tuition to hone the skills of modern writers. One such mechanism is the blog. It provides a platform upon which the aspiring writer can practice their craft. It is then a matter of Darwinian selection to see which blogs attract a readership and which languish in obscurity.

Another platform that champions the spoken word is the podcast. It offers a similar low barrier to entry while potentially providing greater exposure for the author that captures the interest of their audience. There is a wide range of styles of podcasts ranging from very informal to carefully scripted. It is left to the consumers to determine which styles flourish and which do not.

The ubiquity of the smartphone has made it possible for many people to produce short videos. YouTube was one of the first to provide a platform for video distribution and remains a major source of engaging amateur video content to this day. This provides yet another way for the aspiring writer to distribute their work.

It is clear that there are plenty of avenues for authors and artists to deliver their creations to an audience in the modern world. Although this discussion has focused on the online platforms for expression there are also other venues that aspiring authors can employ to publish their work. These include local paper publications, commercial broadcast media, and even open mike nights at local restaurants and other entertainment establishments.