The Show Bible

Writers of all sorts of fiction, from novels to screenplays and even television series, share a single concern: maintaining consistency throughout a given milieu. In the television industry this is often accomplished by what is called the Show Bible, the document where all the relevant details from each episode are kept so that they can be looked up when they become important in future episodes. The movie industry has a department devoted to this function; it’s called continuity in that domain. And novelists, especially authors of multivolume series, often have many notebooks filled with the lore of the worlds they have created.

I have had a programming project on the back burner for some time that amounts to a computerized Show Bible. I may still finish it eventually; I have some ideas for features that I haven’t found in any other product yet. But in the meantime, I think I’ve found a tool that will solve about 80% of the problem. It is the single-page web application called TiddlyWiki that I wrote about here a while ago. Here is a brief list of its virtues:

  • It is small enough to fit on a thumb drive.
  • It works with any modern web browser.
  • It is easy to create hyperlinks between various entries in the document.
  • It is easily searchable.
  • It is easy to extend.
  • It is easy to format.
  • It is easy to add photographs, drawings, video clips, and all kinds of other multimedia to it. In fact, it can display anything that any other web page can.

I have decided that I like the world that Against the Cold of Deepest Space is set in. I intend to develop a Show Bible for it so that I can write multiple stories and perhaps even novels in that world. I am going to use TiddlyWiki to compile that document.

I am, however, going to go ahead and write the short story that I started in the blog post that I labeled (Part 1).


Sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.

Evolution of Programming Part Three

In the last installment we discussed several of the popular paradigms of programming languages. We talked about Structured Programming, Object Oriented Programming, and Functional Programming. In this installment we are going to look at programs from a different perspective.

Early computers were operated as free-standing machines. They could receive input from tape drives, disk drives, or keyboards. They could send output to printers, tape drives, disk drives, or video displays. They could send data to other computers over serial lines, but the transfers were typically initiated manually on both the sending and receiving computers.

Then various computer manufacturers started coming up with schemes for connecting multiple computers together and programming them to talk among themselves in more autonomous ways. These early networks were restricted in that they only operated between computers made by the same manufacturer running the same operating software.

Then the Defense Department’s R&D branch, DARPA, started funding research to build a computer network that could connect heterogeneous computers and survive a nuclear attack. The idea was to build a set of network protocols that would detect the most efficient way to route data through the network and would adapt to the failure of any given path by finding alternative paths.

The researchers who built the internet would hold workshops where they would get together, connect their computers, and attempt to get them to talk to each other. There was an agreement among them that the first ones to get their machines to talk would, by doing so, establish the definition of how that particular protocol worked. There was a lot of healthy competition to be the first to get each layer of the network to talk to the others.

I mentioned network layers above, and that deserves a little elaboration. Networks were built in layers, starting from the lowest level, which interfaced directly with the hardware and simply transmitted and received data on behalf of the layer above it. Each successive layer added more sophisticated features, such as guaranteed delivery of data in the same order it was sent and guarantees that the data arrived intact. These layers were made available to programmers in the form of libraries.
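
To make the layering concrete, here is a minimal sketch in TypeScript, using Node.js’s built-in net module. The host name and port are invented for illustration. Notice that everything below the write and read calls, routing, retransmission, ordering, is handled by the layers underneath:

    import * as net from "net";

    // The connection callback fires once the layer below (TCP) has
    // established a reliable, ordered byte stream for us.
    const socket = net.createConnection({ host: "example.com", port: 7 }, () => {
      socket.write("hello, network\n");
    });

    // Data arrives intact and in the order it was sent, courtesy of
    // the transport layer; this code never sees packets or routes.
    socket.on("data", (chunk) => {
      console.log("received:", chunk.toString());
      socket.end();
    });

The application never has to know whether those bytes traveled over Ethernet, a serial line, or anything else. That is the whole point of the layering.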

The highest level interface was known as the application layer. One of the first application protocols was the email protocol. It allowed someone on one computer to send email to someone on another computer in much the same manner as we do today.

Another early application protocol was the file transfer protocol, or FTP. The people who wrote these protocols soon learned that they were easier to debug if their components consisted of human-readable text fields. Thus an email consisted of the now familiar fields such as “TO: username@hostname.domain” and “SUBJECT: some descriptive text”. This convention was carried over to other protocols.
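
As a rough illustration of why those text protocols were so easy to debug, here is a sketch (the addresses and subject are invented examples) that assembles a message in that header-plus-body style. What goes over the wire is exactly what a developer can read:

    // Each header is a "NAME: value" line; a blank line separates
    // the headers from the body, just as in real mail traffic.
    const headers: Record<string, string> = {
      TO: "username@hostname.domain",
      FROM: "author@example.org",
      SUBJECT: "some descriptive text",
    };

    const body = "The message itself, also plain readable text.";

    const message =
      Object.entries(headers)
        .map(([name, value]) => `${name}: ${value}`)
        .join("\n") + "\n\n" + body;

    console.log(message); // exactly what would appear on the wire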

After the internet protocols were widely established and in use in computer centers around the world, the inevitable thing happened. A researcher at CERN named Tim Berners-Lee was trying to cobble together a system for scientists to share their papers with one another. Thanks to the computer typesetting software readily available at the time, scientists were used to good looking electronic documents with various typefaces and embedded graphics, photographs, and even mathematical equations.

Berners-Lee came up with a protocol he called the HyperText Transfer Protocol (HTTP) that allowed the data in the papers to be exchanged along with all the supporting information, such as which fonts to use and where to find the images. While he was at it, he implemented a language called HyperText Markup Language (HTML) that had facilities for specifying the structure of the document content. One of the more clever components of HTML was the mechanism for making certain elements in a document act as links to other documents: if you clicked on one of them in the browser, as the document display program was called, the linked document was retrieved and replaced the first document in the browser.
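
HTTP kept the same human-readable convention as the earlier protocols. Here is a sketch, with the host and path invented for illustration, of roughly what a browser sends when it follows a link; the request is just more of the familiar line-oriented text:

    import * as net from "net";

    // An HTTP request is plain text: a request line, header lines,
    // and a blank line to finish.
    const request =
      "GET /papers/index.html HTTP/1.0\r\n" +
      "Host: example.org\r\n" +
      "\r\n";

    const socket = net.createConnection({ host: "example.org", port: 80 }, () => {
      socket.write(request);
    });

    // The response comes back as text too: a status line, headers,
    // then the HTML of the document itself.
    socket.on("data", (chunk) => process.stdout.write(chunk.toString()));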

This hypertext capability was incredibly powerful and caught on like wildfire. In fact, some people would say it was the beginning of another paradigm of programming: the hypertext document. The problem with the original hypertext specification was that it didn’t give the document author any mechanism for extending HTML.

The browser manufacturers soon remedied that situation. Microsoft embedded a dialect of their Visual Basic, called VBScript, in Internet Explorer. Netscape came up with a scripting language for their browser, initially called Mocha, then LiveScript, and finally JavaScript in an attempt to capitalize on the newfound popularity of Sun’s Java programming language. JavaScript never had any real similarity to Java other than its name and a cursory resemblance in syntax.
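
To give a flavor of what browser scripting made possible, here is a minimal sketch, written in TypeScript, which compiles to JavaScript; the element id is invented for illustration. A script can extend a page’s behavior after it has loaded, something plain HTML had no way to express:

    // Assumes the page contains something like:
    //   <p id="greeting">Hello</p>
    const greeting = document.querySelector<HTMLParagraphElement>("#greeting");

    if (greeting) {
      greeting.addEventListener("click", () => {
        // The script can rewrite the document in response to the
        // user, long after the server finished sending the HTML.
        greeting.textContent =
          "Hello, clicked at " + new Date().toLocaleTimeString();
      });
    }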

JavaScript quickly gained a reputation for being a toy language. In fact it was a very powerful, if slightly buggy, language. It took several years before Google used JavaScript to implement Gmail and established that it was a powerful language to be reckoned with.

The main thing JavaScript represented was a powerful language that was universally available across all operating systems and all computers. It also had a standard way of producing high quality graphical output by way of HTML and Cascading Style Sheets (CSS). CSS was a technology added to HTML that allowed the document author to specify how a document was to be displayed, orthogonally to the structure of the document. Together these comprised a programming platform that ran on all computers and all operating systems without modification. The universal programming language was apparently born.
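
A quick sketch of that orthogonality, with class names and sample text invented for illustration: the structure below stays fixed while the stylesheet alone decides how it looks, so swapping only the CSS restyles the page without touching the HTML.

    // Structure: what the document *is*, expressed in HTML.
    document.body.innerHTML = `
      <h1 class="title">A Sample Paper</h1>
      <p class="abstract">Structure lives in HTML; appearance lives in CSS.</p>
    `;

    // Presentation: how it *looks*, expressed separately in CSS.
    // Change only this text and the page is restyled, while the
    // document structure above is left alone.
    const style = document.createElement("style");
    style.textContent = `
      .title    { font-family: serif; color: navy; }
      .abstract { font-style: italic; }
    `;
    document.head.appendChild(style);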

Sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.