A Web Site for Developing Web Sites

Back in January of 1993, Marc Andreessen and his team released the Mosaic web browser. It captured my imagination for several reasons. First, it brought the promise of a platform-independent means for sharing information across the internet. It was not only a hypertext system but a hypermedia system.

At that point, the network barely had enough bandwidth to support the transmission of photographs, much less video. But Mosaic didn’t restrict the type or size of content. It was built to allow the browser to be extended to support new media types and protocols as they were developed. That was the inspiration for the name of the program: it was a mosaic of protocol engines and renderers.

The second feature that captured my imagination was the description of the input mechanisms provided by that early version of HTML. I reasoned that if this browser could be evolved to accept arbitrary input as well as render new kinds of output as they were developed, then it was, for all intents and purposes, a platform-independent Graphical User Interface (GUI).

This came at a time when users were arguing over which operating system would dominate the world of desktop workstations. There were three major contenders: first the Macintosh, then the PC running Windows, and, bringing up the rear, Unix and Linux, both running the X Window System.

Here, presenting itself in the guise of a humble hypertext reader, was a potential answer to the Tower of Babel situation we found ourselves in. Realizing that vision has taken the better part of ten or fifteen years.

The technologies that made this possible are Cascading Style Sheets, JavaScript, HTML5, the Apache web server, and Node.js. These are not the only technologies that contributed to this web application platform, but they are the most significant ones.

At this point we have the means to make web development easy and platform independent, but we lack the resolve to implement a web development tool that runs in the cloud and is simple enough that mere mortals (and managers) can use it to maintain their information on the web.

There are actually several packages that come close to providing the cloud-centric development that I am talking about. The one that has captured my imagination is called XWiki. It allows you to create content interactively using the same kind of tools that you use to browse a web site.

The place where XWiki falls short at present is in its lack of an obvious way to import a complex brand identity framework and use it as a template upon which to implement the actual content of the site. It should be possible to import the content from other programs or files, as well as to create it dynamically in the framework.

I’m sure such frameworks exist. They’re just not open source or as simple to use as I would want them to be. I’m still intrigued by XWiki, but it has fallen back in my estimation of its ability to be easily extended to support the kind of web site development that I’m trying to foster.

I haven’t really talked much about my vision for this tool. That may be because I am still fleshing it out in my mind. I will give it some thought, take some notes, and take another stab tomorrow at specifying the tool that I’ve been dancing around.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Agile is Dead! Long Live Agile!

I keep seeing headlines on various programming news sites saying something like “Agile is Dead”. When I do, I know immediately that the author has missed the point. Agile is not a methodology to be slavishly followed. It is a philosophy of software development. It is a collection of best practices that may help the developer solve his customer’s problem. It is a starting point from which to open a dialog between the customer and the developer.

As such, it makes statements about tradeoffs that can be made in order to achieve goals. For instance, agile doesn’t prescribe two-week sprints. Rather, it suggests that bounding the time between milestones where you have demonstrable working code will help keep your customer’s confidence in the team high.

Observing that users rarely know exactly what they want at the beginning of a project, even when they think they do, agile suggests focusing on some small number of features at a time that can be demonstrated to the user at the end of each sprint. This has the added advantage of constraining the time spent developing a given feature, so that if the customer ends up not liking it as well as they thought they would once they actually see it implemented, you have wasted no more than the minimum amount of time discovering that fact.

In short, agile is a tool bag of techniques for improving communication between the developer and the customer and ensuring that the developer spends the maximum amount of time possible working on the things that the customer values most.

The practices of many agile teams have repeatedly demonstrated their effectiveness in the field, but they are not some kind of magic formula. If the customer refuses to communicate with the development team as they develop, agile is not going to work. Not because it is dead, but because it is not accomplishing the underlying goal of facilitating communication.

Communication is difficult in the best of situations. When one or both parties fail to communicate, agile practices become less and less effective. There is no such thing as a silver bullet. Software development is hard. It is always going to be hard. And the hardest thing about it is for the customer to clearly communicate to the developer how he wants the software to work. This is made even harder by the fact that he often doesn’t know.

So the next time you see the headline “Agile is Dead”, remember that agile is just a collection of suggestions that may (or may not) be helpful in communicating about the customer’s desires and expectations regarding the software that you are building for them. Anyone who tells you otherwise is selling you a bill of goods.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Things I Learned (TIL)

I’ve learned a lot about web sites and Content Management Systems (CMS) today. Not that I didn’t already know a good bit about them. I just learned that there is an almost infinite number of variations on the whole CMS theme.

For instance, WordPress, the software that I use to publish this blog, is a CMS. It is an extremely flexible CMS. Another popular CMS is called Drupal. I played with Drupal several years ago, which means that I have very little idea how it works in its present incarnation. Both of these packages are primarily intended for use by bloggers.

Another CMS is the XWiki package that I’ve been talking about a good bit of late. It has the added feature of acting like a wiki. If I’m not mistaken, WordPress has a wiki plugin, but I haven’t investigated it yet. Drupal may have one as well. The difference, I suppose, is that XWiki comes with the wiki functionality built in as a primary feature.

Dave Winer has written a number of CMSs starting with Frontier and ending up with his latest, 1999.io. They are good products. They are based on his wonderful outliner software. They are fairly minimalistic in their formatting capability out of the box. To get flashy pages you have to be a web developer and understand what he is doing to render the content. That is in no way a criticism. It is how the product works and it is a great product.

What I’m looking for is a flexible framework for building a web site that is maintainable by any number of people, most of whom have little or no experience with HTML, CSS, or any other web development technology. So far, I have to say, out of the box XWiki is the most capable. It is also the easiest for an honest-to-goodness web developer to extend in such a way that the content is still maintainable by mere mortals.

Now for the hard part. I have to put together a plan for migrating to XWiki. That includes a schedule. I am not good with schedules. Especially considering that I am dealing with a technology that I’ve only known existed for about a week. The good news is that there is not much to the web site that I am “porting” to XWiki. I’m not sure what the bad news might be. That’s the thing about the unknown. It sneaks up on you and says “Boo!”


Sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.

It’s All in There Somewhere

I want a place to stash stuff on the web. I want to be able to find it later without remembering where I put it. I want it to be relatively secure from other people’s prying eyes, but it would be nice if I could make certain things accessible to anyone, or at least to people to whom I had given explicit permission.

I want to be able to stash anything from a short text note to a complete document, from a simple URL to an entire web site, from a few random values to an entire database. I want it to be accessible from my desktop, my laptop, my tablet, my phone, or any of the numerous internet-connected gadgets that are cropping up all over the place.

I’d like to be able to get a readable representation of the items in the stash using a web browser. This may (or may not) require a web application to massage the items into a readable form.
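To make this concrete for myself, here is a back-of-the-napkin sketch in Ruby of the kind of stash item and lookup I have in mind. Everything in it is hypothetical; it is a thinking aid, not a design.

```ruby
# A made-up stash item: title, free-form body, tags for finding it
# later, and a flag for whether other people may see it.
StashItem = Struct.new(:title, :body, :tags, :shared, :created_at) do
  def matches?(query)
    q = query.downcase
    title.downcase.include?(q) ||
      body.downcase.include?(q) ||
      tags.any? { |tag| tag.downcase.include?(q) }
  end
end

stash = []
stash << StashItem.new(
  "XWiki notes",
  "Brand identity import is the sticking point.",
  ["cms", "xwiki"],
  false,
  Time.now
)

# Finding it later without remembering where I put it:
stash.select { |item| item.matches?("xwiki") }
     .each   { |item| puts item.title }
```

The real thing would live behind a web server with proper search and access control, but the shape of the data is roughly this.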

This post began when I sat trying to figure out where to stash a reference to a web site that I was interested in along with some brief notes about it. This comes up more often than I would have expected and I have tried many different solutions for it.

The first, most obvious solution was bookmarks. The problem with bookmarks is that they are browser-dependent and require that you either use a browser that maintains a central registry of your bookmarks or copy your bookmarks manually from platform to platform. The central registry approach requires that you trust the operator of the registry, which is usually not a problem for me but definitely a problem for some of my more paranoid acquaintances.

Another problem with bookmarks is finding things you bookmarked later. None of the bookmark schemes has a particularly good search mechanism. Perhaps I gave up on them before they implemented something useful, but I have this huge ball-of-mud collection of bookmarks that I have been accumulating for ages. I have all but stopped adding to it because I can’t find anything when I look for it, and I can’t trust that the link will still be active if I do find it. Bookmarks also ignore my desire to store documents and other data in the repository.

An approach that addresses that last objection is to store notes on Dropbox or one of the other network file systems. That has (at least) two problems. First, you have to be able to access the service from everywhere. My employer views these stores as potential avenues for corporate espionage and blocks them at the firewall, and that would probably be true of any service that provided the features I am looking for. Second, storage is less than half the problem. Finding the data is the harder part, and rendering it in a readable fashion can be challenging as well.

Then there are the online notebook applications like Evernote. They are pretty close to what I’m wanting, but they are also kind of pricey. I suppose a business model that meets my requirements without costing an arm and a leg is another requirement. I should look at Evernote more closely and see where it falls short.

Perhaps I just need to go start hacking away and see what I can come up with. If it is useful for me, it will be useful for other people. And I’ll learn a lot about myself and the way I use the computer along the way.


Another Facet of the Blogger Emerges

The theme for today seems to be Data Analysis/Data Science. Three separate times today the topic of Data Analysis has come up. As I sat down to write my blog post tonight, it struck me as interesting that the topic had woven itself through my day so thoroughly. I took it as an opportunity to introduce this facet of my interests to the readers of my blog. My interests are many and varied, and I intend to write on all of them. So if this post isn’t your cup of tea, check back for the next one. It’s bound to be different.

As may already have become obvious, I’ve recently embarked on a journey of exploration of Data Analysis and Data Science. If you read the Wikipedia articles that I linked to in the first paragraph above, you will see that the field of Data Analysis is very broad and the field of Data Science is somewhat controversial. While initially they seem to be different but related fields, the more I try to characterize the difference between them, the more I realize that they have a lot in common.

I think the problem with trying to differentiate between them is that they both appeal to the naive interpretation of their names, which in both cases is incredibly broad. I am reminded of the problem that the field of Artificial Intelligence has struggled with for its entire existence, namely that there isn’t an unambiguous definition of Artificial or Intelligence that rigorously captures what the practitioners of the field intuitively understand it to mean.

Getting back to the inspiration for this post, the first time the subject came up today, I was getting a demonstration of the work that some of my colleagues were doing with the Oculus Rift and the Unity development environment. We ended up discussing the fact that the customers for whom they were developing applications had started by capturing their working data using Microsoft Office applications like Excel, Access, and PowerPoint. Over time, their data had grown so large that these applications became unwieldy. My colleagues had taken the data captured with these legacy processes and imported it into a new application, and had thus been able to provide a more unified way to manage the data.

One of the things learned along the way was that the customer had come to love their existing processes. Consequently, the application being developed to supersede those older tools had to retain much of their look and feel in order to gain acceptance from the customer. This was a very important realization. Earlier in my career I had personal experiences where customer acceptance was never achieved because of an aversion to the perceived difficulty of learning a new interface. Thus, the first observation I gleaned about large collections of data is that the owners of the data have to be accommodated when you are trying to evolve their use of their ever-growing collection of relevant data.

A little bit later I had a conversation with a colleague about my understanding, naive as it is at this stage, of what Data Analytics is and how it is relevant to the large aerospace company for which we both work. Strangely enough, the conversation soon turned to the fact that the first thing that we, as would-be practitioners of Data Analysis, would have to do is educate the engineering and business staff about the benefits that could accrue from using the data that is already being collected, while at the same time being careful to respect their perspective on the data and the ways they are currently using it.

Then, when I got home and was reading my personal email, I came across a link to Big Smart Data, the blog of Max Goff. I met Max a number of years ago in Seattle while he was working for Sun Microsystems. He was a Java Evangelist, and at the time I was struggling to persuade the software developers where I worked of the benefits of “write once, run anywhere”, the battle cry of Java at the time. I followed his career as he left Sun and started a consultancy in Tennessee. Somewhere along the line, I lost track of where he was blogging. I was thrilled to find his latest blog and also excited to discover that he was exploring the same aspects of big data that form the core inspiration of my interest in Data Analysis.

A former boss of mine once said something to the effect that you could tell when an AI application was going mainstream when it had a database behind it. I think there is a lot of wisdom in that observation. Access to a large data store is necessary but not sufficient for emergent AI. I believe we are on the cusp of the emergence of Artificial Intelligence, ambiguous as the definition of it may be. I believe that Big Data, Data Analysis, and Data Science are going to be instrumental in this emergence.

When I first came to work at the aforementioned big aerospace company, it was because I was given the opportunity to work in their AI laboratory. AI winter soon set in and I spent the intervening years doing what I could to help the company solve hard problems with computers. Along the way, I have worked on some amazing projects but I have always longed to pursue the goal of emergent artificial intelligence. I am beginning to see the potential for pursuing that goal arising again. Things just naturally come around in cycles. And so that was my day from the perspective of Data Analysis and Data Science.

An Agile Team of One

I am developing some software at work by myself. I have worked on several different styles of Agile team in the past, e.g. Scrum and XP, and I decided to think a little about what Agile practices are appropriate for a team of one.

First up, the daily tag-up, otherwise known in some circles as “the Scrum”, doesn’t serve the same purpose that it does on a larger team. You probably should still set aside a moment, perhaps first thing in the morning, to review your progress from the day before, identify any obstacles you need to address in order to proceed, and make note of what you intend to do today. That should take very little time, since you don’t have to explain what you mean to anyone else. Communication is both the benefit and the major time sink of the Scrum.

Next, a backlog is useful. I consider it another name for my todo list, but it is a little more formal than some todo lists. I keep it sorted with the highest priority first, and I mark each major item with a status, e.g. ready, in-work, waiting on <resource>, etc. I also use an outliner to keep track of my backlog so that I can easily represent subtasks, as in the sketch below.
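Here is a minimal Ruby sketch of that structure, with made-up tasks and statuses. The point is just the shape of the thing: a priority-sorted list whose items carry a status and nested subtasks, exactly like the top levels of my outline.

```ruby
# Hypothetical backlog items; priority 1 is the most urgent.
BacklogItem = Struct.new(:priority, :status, :title, :subtasks)

backlog = [
  BacklogItem.new(2, "waiting on test data", "Load the sample set", []),
  BacklogItem.new(1, "in-work", "Parse the input feed",
                  ["handle malformed rows", "log skipped records"]),
  BacklogItem.new(3, "ready", "Write the report generator", [])
]

# Morning review pass: highest priority first, subtasks indented beneath.
backlog.sort_by(&:priority).each do |item|
  puts "[#{item.status}] #{item.title}"
  item.subtasks.each { |sub| puts "  - #{sub}" }
end
```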

I have added a practice that I learned from Dave Winer, called Narrate Your Work. It is particularly useful for me since I don’t have the benefit of a colleague to discuss my project with. By narrating my work, I get down the essence of what I’m doing and why so that I can remember what I’m trying to accomplish and the decisions that I have made along the way.

I haven’t had to do any estimating yet, so I haven’t done anything like the planning game. I have a suspicion that you need three or more team members for the planning game to work very well. I also haven’t divided the work up into sprints. That seems like overkill for a one-person team.

I will be doing periodic evaluations that will correspond somewhat to the end of sprint retrospectives. I think the key here again is that since I don’t have anyone to discuss it with, it is just a matter of taking a moment to think about what I’ve learned to date in the project.

The Beginning of a Series of Opinionated Posts

One of the philosophical principles underlying Ruby on Rails is that software should be opinionated. I have been thinking a lot lately about what that means and have decided that being opinionated is a good trait in general. I have decided that I will be opinionated and share my opinions with anyone who will listen. In particular, I will share my opinions here.

I have concluded that software engineering is at best a misnomer and at worst a detriment to the development of quality software. Engineering is a philosophy of creating physical artifacts that has been developed empirically for the last two or three centuries. Software is not a physical artifact.

When I have a physical artifact and I give it to you I no longer have the artifact. When I have a piece of software and I give it to you, I still have it. Your having it doesn’t reduce the utility of my having it. When I design a physical artifact, I want to get all the details right before I build it because materials are expensive. When I design software, the easiest way to figure out the details is to create a prototype and then iteratively improve it until it is right.

The point being that building multiple versions doesn’t incur large material costs. These are only two of many reasons that software development is very different from the process we know as engineering. Calling Software Development “Software Engineering” raises inappropriate expectations in those who don’t understand Software Development.

I’ll rant on this topic more later but I’m going to call it a night right now.

Ode to a Ruby Gem

This morning I was thinking about a project that I am doing in Ruby. I found myself thinking, “I sure am looking forward to getting more intimately familiar with ActiveRecord.” ActiveRecord is the object-relational mapping (ORM) component of Ruby on Rails.

I love a package that makes you eager to learn more about it. Not that you have to be intimately familiar with ActiveRecord in order to use it. Rails is just so well thought out that studying the API is actually fun. And RDoc, the Ruby documentation package, makes writing extensive documentation of your code so easy that programmers usually do a pretty good job of documenting their code.

I have been using ActiveRecord in my Rails apps for several years now. The reason that I need to delve deeper into it at this point is that I am getting data from an external source (I’m scraping it from a web page), parsing it with Nokogiri, another fine Ruby package, and then caching it in a local database. Consequently, I am having to do some thinking about how to structure the data that I cache.
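The flow looks something like the sketch below. This is not my actual code; the URL, table, and CSS selector are hypothetical stand-ins, and it assumes the nokogiri, sqlite3, and activerecord gems are installed.

```ruby
require "open-uri"
require "nokogiri"
require "active_record"

# Cache in a local SQLite database (made-up file name).
ActiveRecord::Base.establish_connection(
  adapter: "sqlite3", database: "cache.db"
)

# Create the cache table on the first run.
unless ActiveRecord::Base.connection.table_exists?(:articles)
  ActiveRecord::Schema.define do
    create_table :articles do |t|
      t.string :title
      t.string :url
      t.timestamps
    end
  end
end

class Article < ActiveRecord::Base; end

# Scrape the page and pick out the interesting links with Nokogiri.
doc = Nokogiri::HTML(URI.open("https://example.com/articles"))
doc.css("a.article").each do |link|
  # find_or_create_by keeps re-runs from caching duplicate rows.
  Article.find_or_create_by(title: link.text.strip, url: link["href"])
end
```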

Let’s face it, I’m not really all that experienced at database architecture. I can hack a little SQL when I need to, but I haven’t had to do much data normalization since I studied databases in college. Rails makes it easy to play around with your schema until you get it just right. I don’t mean to gush or anything, but Rails makes these things so easy that it feels like playing instead of work. In my case, I guess it is playing, at least to the extent that I am not being paid to do it. But that’s another story.
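By “play around with your schema” I mean migrations like this hypothetical one: a small, reversible change you apply with rake db:migrate (rails db:migrate in newer versions) and undo with db:rollback while you feel out the right structure. The versioned ActiveRecord::Migration[6.0] superclass assumes a recent Rails; older versions subclass ActiveRecord::Migration directly.

```ruby
# A hypothetical tweak to the cache table from the sketch above:
# record when each article was fetched, and never cache a URL twice.
class AddFetchedAtToArticles < ActiveRecord::Migration[6.0]
  def change
    add_column :articles, :fetched_at, :datetime
    add_index  :articles, :url, unique: true
  end
end
```

Getting the schema wrong costs one rollback instead of a rebuild, which is exactly what makes it feel like play.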

Arduino Mania Strikes Elkton

It all started innocently enough. I had $50 worth of Amazon gift certificates so I bought an Arduino Duemilanove from Hacktronics with part of the money. It came and I was thrilled to start blinking LEDs with it right out of the box. I wrote a little program that flashed “SOS” in Morse code. My wife said, “That’s kind of depressing.” So I changed it so that it sent “LOVE” in Morse code instead.

I don’t know why I am so surprised when things work the way they are supposed to. I think it probably goes back to all the times I built electronics kits and had to troubleshoot them for days to get them to work (if they ever worked at all). In any case, the bug had bitten me. I started scouring the Internet for Arduino-based projects.

One of the reasons that I was drawn to the Arduino in the first place was the concept of shields. Understand that this was not a new concept to me. The robots at work had been expanded through the addition of daughter cards that plugged into the motherboard. But the Arduino had dozens of shields that interfaced to all kinds of interesting hardware. And the best thing of all was that they were affordable on my next to non-existent budget.

I decided that I was going to build a robot from scratch. I had built a BOEbot and I still love to tinker with it, but I had the urge to create a unique robot that was my design from the ground up. Oh, alright, I intended to assemble it from parts, but I intended to build many of the boards as kits and assemble all the various pieces to make a unique final product. And what is really exciting is that it wasn’t just possible, it was downright easy.

I decided to build my robot around a chassis consisting of a Clementine tangerine crate that I had saved. I decided to use Google SketchUp to build a scale 3D drawing of the crate so that I could better visualize how I planned to transform it into a robotic vehicle. I managed to draw the crate itself fairly quickly but I’m still working on drawing the rest of the parts of the robot.

I drew up a prioritized list of parts that I thought I would need for the robot. At the top of the list was a Proto Shield. A Proto Shield is a board that has many uses but is often used as a place to mount a mini breadboard for experimenting with various hardware interfaces. The other major item on the list was a Motor Shield. The Motor Shield that I bought has connectors for 2 PWM servos and can control up to 4 bi-directional DC motors.

While I waited for my new hardware to come in, I decided to play with the hardware that I already had. I took one of the infrared receivers that came with my BOEbot and an old Sony CD player remote that I found lying around (the CD player had gone to hardware heaven years ago) and decided to see if I could get them to work together, using the Arduino as the controller for the IR receiver. I got the circuit hooked up pretty quickly. Note: when a breadboard circuit on one battery-powered device is controlled by another battery-powered device, make sure the two share a common ground. I eventually decided to just use the USB power from the Arduino.

Now I was ready for software. I Googled Arduino and IR and found RTFA’s video on YouTube. I followed the link to his site and downloaded his code as a starting point. I hacked it to work with the particular remote that I was using, and before my Proto Shield had even arrived I had created my first Arduino-based hardware hack.

Then the hardware arrived. As I was soldering the power plug onto the wires coming out of the switched 9-volt battery holder that I had bought, I decided that I was going to need a better soldering iron than the little pencil-style iron that I had used for 30+ years.

I had two criteria: it had to have a power switch, so that I didn’t have to bend over to plug it in and unplug it every time I used it, and it had to have a shielded stand, so that I could safely set it down while it was hot. The next day, I went to my friendly neighborhood Radio Shack and decided that the difference in price between the iron that met my minimal requirements and one with digital temperature control was small enough that I couldn’t justify not buying the fancy one.

It took me two evenings working about an hour an evening to assemble and test the Proto Shield. It took about 5 minutes to move the IR receiver circuit over to the Proto Shield and get it working.

Stay tuned. More mania is on the way.

Get Er Done!

I recently read a book called Getting Real by the folks from 37signals, creators of Ruby on Rails, Ta-da List, Writeboard, Backpack, and Basecamp, among other Ajaxian web application goodness. While it is superficially a book about how to start a successful business selling services based on web applications, a topic they have plenty of credibility with, the advice in this book is applicable to a much broader realm of endeavors.

I was so inspired by it that I have dusted off several projects that were lying dormant and started actively working on them again. Of course, this is also aided by the insights that I have been gleaning from the Getting Things Done book. I have also bought a Backpack Basic account so that I can use their wonderful calendar. Enough raving for now. Got to get some things done :-).