The Ever-Expanding Standards of Literacy

Once upon a time the definition of literacy involved both reading and writing, more specifically writing with a quill. To write with a quill, you had to know how to cut its tip into a nib. This required some skill with what is still known as a penknife. There were also pencils, but writing in pencil was not as permanent as writing in ink.

Then typewriters were invented, and writers could write faster and more legibly using this remarkable machine. The definition of literacy didn’t change so much as readers’ expectations rose: you were now expected to submit typewritten manuscripts. Thus, the definition of literacy expanded a little bit.

Next came the computer. With a computer you could have assistance with spelling and grammar. You could reach more people, thanks to the web. You could edit text without having to retype it entirely. You could easily make multiple copies. It was important to keep multiple backups of your files in multiple places. The definition of literacy expanded to include reading and writing with computers.

We come to the most recent addition to the attributes of literacy: you must be able to create web sites. You can do that in several different ways. You can do it the old-fashioned way, using HTML and CSS. Or you can use one of the many web frameworks, like Ruby on Rails, Django, or Grails. You might try one of the numerous implementations of wiki software, or a content management system like WordPress or Drupal. This has further expanded the expectations placed on the literate person.

I enjoy writing. I am thankful that I have a computer instead of having to write everything out longhand. I am relatively sure that I wouldn’t have gotten this far in my quest to master the craft of writing without it. I still have much to learn, but I have much better tools with which to work.

Sweet dreams, don’t forget to tell the people you love that you love them, and most important, be kind.

Evolution of Programming Part Three

In the last installment we discussed several of the popular paradigms of programming languages. We talked about Structured Programming, Object Oriented Programming, and Functional Programming. In this installment we are going to look at programs from a different perspective.

Early computers were operated as free-standing machines. They could receive input from tape drives, disk drives, or keyboards. They could send output to printers, tape drives, disk drives, or video displays. They could send data to other computers over serial lines, but the transfers typically had to be manually initiated on both the sending and the receiving computer.

Then various computer manufacturers started coming up with schemes for connecting multiple computers together and programming them to talk among themselves in more autonomous ways. These early networks were restricted in that they only operated between computers made by the same manufacturer running the same operating software.

Then the Defense Department’s R&D branch, DARPA, started funding research to build a computer network that would connect heterogeneous computers and would survive a nuclear attack. The idea was to build a set of network protocols that would detect the most efficient way to route data through the network and would adapt to the failure of any given network path by finding alternatives.

The researchers who built the internet would hold workshops where they would connect their computers and attempt to get them to talk to each other. There was an agreement among them that the first ones to get their machines to talk would, by doing so, establish the definition of how that particular protocol worked. There was a lot of healthy competition to be the first to get each layer of the network talking.

I mentioned network layers above, and that deserves a little elaboration. Networks were built in layers, starting with the lowest level, which interfaced directly with the hardware and simply transmitted and received data on behalf of the layer above it. Each successive layer added more sophisticated features: guaranteed delivery of data in the order it was sent, for example, or guarantees that the data arrived intact. These layers were made available to programmers in the form of libraries.
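The layering idea can be sketched in a few lines of code. This is a toy illustration, not any real protocol stack; the header names are invented, but real stacks work on the same wrap-and-unwrap principle: each layer prepends its own header on the way down, and the receiving side peels the headers off in reverse order.

```python
# Toy illustration of protocol layering. Each layer wraps the data from
# the layer above with its own small header; the receiver strips them in
# reverse order. All header names here are made up for illustration.

def send(payload: bytes) -> bytes:
    transport = b"SEQ=1|" + payload        # transport layer: ordering info
    network = b"DST=hostB|" + transport    # network layer: addressing
    link = b"FRAME|" + network             # link layer: framing for the wire
    return link

def receive(frame: bytes) -> bytes:
    network = frame.removeprefix(b"FRAME|")           # link strips framing
    transport = network.removeprefix(b"DST=hostB|")   # network strips address
    payload = transport.removeprefix(b"SEQ=1|")       # transport strips sequence
    return payload

message = b"hello"
print(receive(send(message)))  # b'hello'
```

The payload comes back out unchanged, which is the whole point: each layer only ever looks at its own header and hands the rest up or down.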

The highest-level interface was known as the application layer. One of the first application protocols was the email protocol. It allowed someone on one computer to send email to someone on another computer in much the same manner as we do today.

Another early application protocol was the File Transfer Protocol, or FTP. The people who wrote these protocols soon learned that they were easier to debug if the components of the protocol consisted of human-readable text fields. Thus an email consisted of the now familiar fields such as “TO: username@hostname.domain” and “SUBJECT: some descriptive text”. This convention was carried over to other protocols.
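Those same human-readable fields are still what email is made of today; Python’s standard library will show you the raw form in a couple of lines (the addresses here are placeholders):

```python
# Compose a message with the standard library and print its wire form.
# The header names (To, Subject, ...) are the human-readable text fields
# the early protocol designers settled on; the address is a placeholder.
from email.message import EmailMessage

msg = EmailMessage()
msg["To"] = "username@example.com"
msg["Subject"] = "some descriptive text"
msg.set_content("Hello over the network.")

raw = str(msg)
print(raw)  # headers, a blank line, then the body -- all plain text
```

Because the whole thing is plain text, you can read it (and debug it) with nothing more than your eyes, which is exactly why the early protocol authors chose the format.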

After the internet protocols were widely established and in use in computer centers around the world, the inevitable thing happened. A researcher at CERN named Tim Berners-Lee was trying to cobble together a system for scientists to share their papers with one another. Thanks to the computer typesetting software that was readily available at the time, scientists were used to good-looking electronic documents with various typefaces and embedded graphics, photographs, and even mathematical equations. Berners-Lee came up with a protocol that he called the HyperText Transfer Protocol (HTTP) that allowed the data in the papers to be exchanged along with all the supporting information, such as which fonts to use and where to find the images. While he was at it, he implemented a language called HyperText Markup Language (HTML) that had facilities for specifying the structure of the document content. One of the cleverer components of HTML was the mechanism for making certain elements in a document act as links to other documents: if you clicked on one in the browser, as the document display program was called, the linked document was retrieved and replaced the first document in the browser.
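The link mechanism is simple enough to demonstrate with Python’s standard library: a browser scans the HTML for anchor (a) elements and notes each href, the address of the document to fetch when the link is clicked. A minimal sketch, with a made-up document:

```python
# Minimal sketch of how a browser finds links: scan HTML for
# <a href="..."> elements and collect the target addresses.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

document = '<p>See <a href="paper2.html">the next paper</a>.</p>'
collector = LinkCollector()
collector.feed(document)
print(collector.links)  # ['paper2.html']
```

A real browser does vastly more, of course, but following a link really is just “fetch the document named in the href and display it in place of this one.”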

This hypertext capability was incredibly powerful and caught on like wildfire. In fact, some people would say it was the beginning of another paradigm of programming, the hypertext document. The problem with the original hypertext specification was that it didn’t give the document author any mechanism for extending HTML.

The browser manufacturers soon remedied that situation. Microsoft embedded its Visual Basic in Internet Explorer. Netscape came up with a scripting language for its browser, initially called Mocha, then LiveScript, and finally JavaScript, in an attempt to capitalize on the newfound popularity of Sun’s Java programming language. JavaScript never had any similarity to Java other than its name and a cursory resemblance in syntax.

JavaScript quickly gained a reputation for being a toy language. In fact it was a very powerful, if slightly buggy, language. It took several years before Google used JavaScript to implement Gmail and established that it was a language to be reckoned with.

The main thing that JavaScript represented was a powerful language that was universally available across all operating systems and all computers. It also had a standard way of producing high-quality graphical output by way of HTML and Cascading Style Sheets (CSS). CSS was a technology added to HTML to allow the document author to specify how a document was to be displayed, orthogonally to the structure of the document. Together these comprised a programming platform that ran on all computers and all operating systems without modification. The universal programming language was apparently born.

Sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.

Internet Miscommunication Part 2

I watched a video the other day. It described a phenomenon called a Filter Bubble. The phenomenon is, put simply, that your view of the world is slanted by the fact that the posts you see are filtered by what a web site knows about your preferences. For instance, Facebook selects items to show you based on the people you have selected as your friends. As such, they probably have tastes and opinions similar to yours. If you support a particular point of view, either your friends do too or you get annoyed by their rants and mute them. Consequently, over time you hear only one side of the story. Then, when something like an election or a referendum happens, you are surprised that it goes the way it does. You are blindsided by the fact that you have self-selected just the parts of the story that you want to hear.

What can be done to remedy this problem? One thing that comes to mind is to actively seek information from diverse sources. Another suggestion is to seek information from diverse types of media: newspapers, magazines, radio, and television, for instance.

The sad thing is that we have been so indoctrinated by the convenience of the internet that we have become lazy. It takes too much effort to read magazines and newspapers. It is work to sort through them and decide what we are interested in. There is no Google for the physical world. There isn’t even an easy way to search media like film and television. YouTube is a start, I suppose. Note that YouTube is owned by Google.

The important thing is, no matter where you look for information, look for as many different sources as you can. No one channel is going to give you the entire range of ideas on a given issue. And for goodness sake, don’t depend on Facebook as your primary source of news.

Internet Miscommunication Part 1

I was lucky enough to be working in the networking department of an up-and-coming software company at the beginning of my career. As a consequence, I had access to email before most people in the world. Even then, it was apparent that email was a volatile communications medium. At first we attributed it solely to the fact that it lacked the subtle back channels that facial expression, body language, tone of voice, and inflection offer our face-to-face communication.

Upon further reflection, though, it occurs to me that we have been communicating with the written word for centuries. There was something else at play here. For instance, it was easy to send email, and once you hit send, it was gone. It was easy to dash off a quick note. Email was more like an informal conversation, so it was approached with less thought than a physical letter might be. It took more discipline than most of us could muster to read what we had written carefully enough to make sure it couldn’t be misinterpreted.

And then there was the fact that the person, or persons, with which you were communicating weren’t present to give you immediate feedback, either positive or negative. This delayed any corrections that might be made to the message until the recipient had had time to stew about it for a while. And when they had built up a head of steam, it was, again, too easy to snap back a confrontational reply without due consideration. Thus were flame wars invented.

At the time, we thought there might be something to the idea that email was only used by technical types who didn’t have the best reputation for social skills. Only time and the rise of the public internet would disprove that theory. It seems that anyone, socially adept or not, was equally capable of miscommunicating via email. And the situation just got worse when the discussion forum was invented.

I have more to say about the evolution of online interpersonal communication. So much more that I am going to post this as part one. In part two I will explore some of the unexpected social impacts that arose from the vast social networks like Facebook.

Can We Build a Better News Infrastructure?

Dave Winer said that we can build a better news network (please read his post for exactly what he said).

I commented:

The problem is that most people are just listening to figure out when it’s their turn to talk. They aren’t paying any attention to the substance of what the other person is saying. I think the abbreviation tl;dr is indicative of how that same principle translates into the print medium. I have lost all confidence in the news organizations in this (or any other) country. I feel less informed about the world than I ever was before the digital revolution. I can talk to someone on the other side of the world individual to individual, but when the media is involved it all boils down to who stands to gain financially and who has paid whom the most to get their spin broadcast. I would like to see the internet give rise to a better news system as you advocate. What can we (users and developers alike) do to help bring this to life?

After struggling with getting links to this blog posted to Facebook and Twitter (manually, I am having trouble getting my process down to use Radio3), I discovered that Dave had replied to my comment:

Right now, the answer is simply to post using a tool like Radio3, which can post to the corporate networks as well as to the open Internet. So we get a chance to use your links to bootstrap a new open network. You sacrifice nothing, your posts still go to your current subscribers. That’s the outline of the plan.

I haven’t got Radio3 set up to post to Facebook through my corporate firewall, so I am still figuring out the process to get this to work while I’m at work. Perhaps I should just refrain from posting while I’m at work. Anyway, thanks for the response, Dave.

Meta Essay

I’ve been a longtime fan of Dave Winer. I often agree with his insights on software, the internet, technology, and so on, and I always appreciate his succinct, well-reasoned writing, whether I agree with him or not. His recent article on the iPad announcement is a case in point.

While I am still in the thrall of Steve Jobs’s reality distortion field, Dave’s article helped me to stop and think. I realized that the iPad was version one of a new category of product. As such, it is far from the ideal product that the category will eventually produce. After all, the first iPod was a shadow of the product that the modern iPod has become.

Nonetheless, I will buy an iPad, because I have been waiting for this product category to hit the market for at least twenty years. I want to write apps for it. I am not thrilled with Apple’s app approval process, but I have gold fever and the rush is on.

I guess my point is that Dave’s writing provokes thought. The more thought-provoking writing you read, the more likely you are to write thought-provoking essays yourself. At least that’s my theory. I guess we’ll see how well it works out.

Arduino Mania Strikes Elkton

It all started innocently enough. I had $50 worth of Amazon gift certificates so I bought an Arduino Duemilanove from Hacktronics with part of the money. It came and I was thrilled to start blinking LEDs with it right out of the box. I wrote a little program that flashed “SOS” in Morse code. My wife said, “That’s kind of depressing.” So I changed it so that it sent “LOVE” in Morse code instead.
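The actual program was an Arduino sketch, which I won’t reproduce here, but the heart of it was just a lookup table and a loop. Here is the same logic sketched in Python, with a partial Morse table covering only the letters needed for illustration; on the Arduino, each dot became a short LED blink and each dash a long one.

```python
# Sketch of the Morse-flashing logic: look up each letter's dot-dash
# pattern. On an Arduino, '.' would be a short blink (digitalWrite HIGH,
# short delay) and '-' a long one. Partial table, for illustration only.
MORSE = {
    "S": "...", "O": "---", "L": ".-..", "V": "...-", "E": ".",
}

def to_morse(word: str) -> str:
    """Translate a word into space-separated dot-dash patterns."""
    return " ".join(MORSE[letter] for letter in word.upper())

print(to_morse("SOS"))   # ... --- ...
print(to_morse("LOVE"))  # .-.. --- ...- .
```

Swapping the message really was that easy: change one string, re-upload, and the LED blinks something a little less depressing.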

I don’t know why I am so surprised when things work the way they are supposed to. I think it probably goes back to all the times I built electronics kits and had to troubleshoot them for days to get them to work (if they ever worked at all). In any case, the bug had bitten me. I started scouring the Internet for Arduino-based projects.

One of the reasons that I was drawn to the Arduino in the first place was the concept of shields. Understand that this was not a new concept to me. The robots at work had been expanded through the addition of daughter cards that plugged into the motherboard. But the Arduino had dozens of shields that interfaced to all kinds of interesting hardware. And the best thing of all was that they were affordable on my next to non-existent budget.

I decided that I was going to build a robot from scratch. I had built a BOEbot and I still love to tinker with it, but I had the urge to create a unique robot that was my design from the ground up. Oh, all right: I intended to assemble it from parts, but I intended to build many of the boards as kits and assemble all the various pieces into a unique final product. And what is really exciting is that it wasn’t just possible, it was downright easy.

I decided to build my robot around a chassis consisting of a Clementine tangerine crate that I had saved. I decided to use Google SketchUp to build a scale 3D drawing of the crate so that I could better visualize how I planned to transform it into a robotic vehicle. I managed to draw the crate itself fairly quickly but I’m still working on drawing the rest of the parts of the robot.

I drew up a prioritized list of parts that I thought I would need for the robot. At the top of the list was a Proto Shield. A Proto Shield is a board that has many uses but is often used as a place to mount a mini breadboard for experimenting with various hardware interfaces. The other major item on the list was a Motor Shield. The Motor Shield that I bought has connectors for 2 PWM servos and can control up to 4 bi-directional DC motors.
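For what it’s worth, the servo side of a shield like that works by pulse width: a hobby servo expects a pulse roughly every 20 ms, and the width of that pulse (around 1 to 2 ms) sets the angle. The mapping is simple arithmetic; the exact endpoints vary by servo, so the values below are typical rather than definitive.

```python
# Map a servo angle (0-180 degrees) to a pulse width in microseconds.
# 1000 us and 2000 us are typical endpoints; check the servo datasheet,
# as real servos vary.
MIN_US, MAX_US = 1000, 2000

def angle_to_pulse_us(angle: float) -> float:
    """Linear interpolation from angle to pulse width."""
    return MIN_US + (MAX_US - MIN_US) * angle / 180.0

print(angle_to_pulse_us(0))    # 1000.0
print(angle_to_pulse_us(90))   # 1500.0
print(angle_to_pulse_us(180))  # 2000.0
```

The shield’s library hides this behind a call that takes an angle, but underneath it is generating exactly these pulse widths.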

While I waited for my new hardware to come in, I decided to play with the hardware that I already had. I took one of the infrared receivers that came with my BOEbot and an old Sony CD player remote that I found lying around (the CD player had gone to hardware heaven years ago) and decided to see if I could get them to work together, using the Arduino as the controller for the IR receiver. I got the circuit hooked up pretty quickly. Note: when building a circuit on the breadboard of one battery-operated robot for control by another battery-operated device, make sure they share a common ground. I eventually decided to just use the USB power from the Arduino.

Now I was ready for software. I Googled Arduino and IR and found RTFA’s video on YouTube. I followed the link to his site and downloaded his code as a starting point. I hacked it to work with the particular remote that I was using, and before my Proto Shield had even arrived I had created my first Arduino-based hardware hack.

Then the hardware arrived. As I was soldering the power plug onto the ends of the wires coming out of the 9-volt battery holder (with switch) that I had bought, I decided that I was going to need a better soldering iron than the little pencil-style iron that I had used for 30+ years.

I had two criteria: it had to have a switch, so that I didn’t have to bend over to plug it in and unplug it every time I used it, and it had to have a shielded stand, so that I could safely set it down while it was hot. The next day, I went to my friendly neighborhood Radio Shack and decided that the difference in price between the iron that met my minimal requirements and one with digital variable temperature control was small enough that I couldn’t justify not buying the fancy one.

It took me two evenings working about an hour an evening to assemble and test the Proto Shield. It took about 5 minutes to move the IR receiver circuit over to the Proto Shield and get it working.

Stay tuned. More mania is on the way.