The Elm Programming Language

Elm is an impressive programming language. It has been around for five years and is one of a group of recent languages that compile to Javascript. As such, it has been used primarily to develop web applications, but that is selling it short. It is more innovative than most of the myriad other new languages introduced in the last five years and deserves far wider attention than it has thus far garnered.

First of all, Elm is a statically typed, purely functional language. It treats all values as immutable and has a module system that enforces semantic versioning. Function invocation is accomplished by writing the function name followed by its arguments, separated by spaces. There are no parentheses around the arguments to a function and no commas between them; parentheses are reserved for forcing the grouping of elements.

Elm looks clean on the page. It is designed for creating reactive web pages and consequently has the Model-View-Controller paradigm built into the basic structure of its code, though it calls the Controller function ‘update’, which actually makes more sense. It has incredibly informative and helpful error messages. It enforces good program design at compile time; consequently, there are virtually no run-time exceptions to deal with. Any exceptions that do arise come from the fact that it tightly integrates with Javascript, HTML5, and CSS.

I’ve looked for books on Elm but have only found one or two. The best one is the Elm Tutorial, which is available for free online and is a compendium of most of the other online documentation. The language is small and concise, and there are a number of examples available to help the neophyte get up and writing their web applications in short order.

Elm leverages the Node.js ecosystem to do a lot of its heavy lifting. Consequently, it is easy to use the Electron package to develop desktop applications in Elm. What is left facing the developer is a clean, easy-to-maintain syntax that encourages expressiveness while rejecting unnecessary boilerplate. Although the language is statically typed, type declarations are optional; explicit type declarations are useful, though, for improving performance and sometimes for allowing more expressive interface definitions.

As for expressing web pages in Elm, it is far easier to read than HTML. I plan to do some development in Elm and will report back when I have more experience under my belt. In the meantime, you can check it out interactively on the web at http://elm-lang.org/.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Lisp Fundamentals

NOTE: for all my non-computer friends and readers. If technical topics bother you or make you tense, perhaps today’s post is not for you. If you are not given cold sweats when facing a new topic with the flavor of computers or programming, by all means please join me.

There are many things that Lisp, the programming language, does right. It never ceases to amaze me, and I am going to once again take a few minutes to discuss exactly what some of these things are and why they are so important.

Lisp was not originally conceived of as a programming language. It was invented by John McCarthy as a notation to enable discussion about Alonzo Church’s lambda calculus.

Lisp is characterized by the structure of its expressions, called forms. The simplest of these is the atom. An atom is a singular symbol or literal and represents a value. For instance, 42 is a numeric literal whose value is the number 42. Similarly, “Hello, world!” is a string literal that represents itself as its value. There are also symbols, which are strings of unquoted characters that are used as fundamental elements of Lisp. There are rules for how a valid symbol is formed, but for now it is sufficient to know that a symbol starts with a letter and is then composed of zero or more additional characters, each of which can be a letter, a number, or one of a collection of certain punctuation characters. Since the exact list of other characters varies among the dialects of Lisp, we will leave them unspecified at present.

The other type of form is the list. A list consists of a left parenthesis, followed by zero or more forms, ending in a right parenthesis. Notice I said forms instead of symbols. The implication is that you can have lists embedded in other lists, nested as deeply as you like. This proves to be an interesting trait, as we will soon see.
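To make that concrete, here is how the two kinds of forms might look typed at a Common Lisp prompt (a sketch; the quote marks simply tell the system to treat the symbols and lists as plain data):

    42                   ; an atom: a numeric literal whose value is 42
    "Hello, world!"      ; an atom: a string literal that represents itself
    'foo                 ; an atom: a symbol, quoted here to treat it as data
    '(a (b c) (d (e)))   ; a list of forms, with lists nested inside it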

There is one more fundamentally interesting aspect of Lisp: in a typical Lisp form, the first element in the list after the left parenthesis is taken to be an operator. The subsequent elements in the list are considered arguments. The operator is either a function, a macro, or a special form. Macros and special forms, while extremely important and interesting, are beyond the scope of this discussion.

That leaves us with the operator as a function. A typical Lisp function form is evaluated as follows. The first element is examined to determine what kind of form the list is. If it is a function, the rest of the elements in the list are evaluated, collected in a list, and the function is applied to them. If another list is encountered as one of the arguments, it is evaluated in exactly the same way.

For example, consider the expression (+ 4 (* 8 7) (/ (- 26 8) 9)). The first operator is +, a symbol bound to the function that represents addition. The second item in the list is 4, a number that represents itself. The next element in the list is the list (* 8 7). When it is evaluated, the 8 and 7 are arguments to *, the multiplication function, and the value returned by that function is 56. The final element in the top-level list is (/ (- 26 8) 9). The / is taken as the division function and is applied to the evaluation of (- 26 8), the subtraction function application, which returns 18. When you divide 18 by 9, you get the value 2. Thus the top-level argument list consists of 4, 56, and 2. When you add all three of those numbers you get 62, which is the value that the expression ultimately returns.
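If you want to check that piecemeal, here is the same evaluation broken into steps as it might be typed at a REPL (Common Lisp syntax assumed; the value each form returns is shown in a comment):

    (* 8 7)                         ; => 56
    (- 26 8)                        ; => 18
    (/ 18 9)                        ; => 2
    (+ 4 56 2)                      ; => 62
    (+ 4 (* 8 7) (/ (- 26 8) 9))    ; => 62, all in one expression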

This simple mathematical expression illustrates another fundamental aspect of Lisp. It is expressed as a list form which, given a set of bindings to arithmetic functions, expresses a simple program. This identical representation of both data and program in Lisp, called homoiconicity by the way, is at the heart of much of Lisp’s power. Since Lisp programs are indistinguishable from Lisp data, they can be manipulated by Lisp programs to great advantage.
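A tiny sketch makes the point. Quoting the expression above turns it into an ordinary list that a program can take apart, and handing that list to eval runs it as code (Common Lisp syntax assumed):

    (defvar *form* '(+ 4 (* 8 7) (/ (- 26 8) 9)))  ; the program, held as data
    (first *form*)   ; => +, just the first symbol in a list
    (eval *form*)    ; => 62, the same list evaluated as a program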

Think of it like this: Lisp can, in some sense, think about how it is thinking and modify it as it desires. This is why artificial intelligence investigators like using Lisp so much; it is so similar to the simplified models of intelligence that they are building that the boundary begins to blur.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Evolution of Computer Languages

I’ve got a thing about computer languages. I consider myself to be somewhat of a connoisseur. I have a soft spot in my heart for Lisp, but I am also a fan of other languages depending on the context. I spent ten years more or less as an evangelist for Java. At the time I was fluent in Java, C, BASIC, and Pascal; I was conversant with Lisp, Scheme, Smalltalk, and Ada; and I could read most other languages, in particular COBOL, SNOBOL, Fortran, and Prolog.

While I personally preferred Lisp, I recognized that the bulk of the programmers at the time were C or C++ programmers, and from their perspective Lisp looked and behaved weirdly. Java represented a huge movement in the right direction while remaining a language accessible to C programmers.

At the time, everybody was impressed by the elegance of Smalltalk and its object oriented, message passing paradigm. Smalltalk was also too esoteric for most C programmers, but a guy named Brad Cox came up with a language called Objective-C that captured some of the object oriented flavor of Smalltalk in a syntax that appealed to the C crowd. This was about the same time that Bjarne Stroustrup was experimenting with C++.

Both Objective-C and C++ proved to be overly complicated, especially when it came to managing the dynamic allocation of memory. Consequently, they both gained a reputation for being difficult if powerful. This was the state of affairs when James Gosling was faced with developing a language for a set-top box. The requirements were that it be fast, easy to write bug-free code in, and well integrated with the network. And, of course, it would be object oriented and have automatic memory management in the guise of garbage collection. In short, Java was no Lisp, but it was about as close to it as the programmers of the day could get their minds around.

As it turns out, Java did raise the bar to the point that now, some twenty years later, it has itself passed into the conservative end of the spectrum and new languages now fill the spot it once held. In fact, Lisp has had a resurgence in popularity in recent years.

This renewed popularity can probably be best explained by the fact that Lisp has always been a research language. It was conceived as a notation for the discussion of Church’s lambda calculus, but its simple, homoiconic syntax quickly became a powerful tool for creating derivative languages to explore new programming paradigms.

Consequently, concepts such as structured programming, functional programming, and object oriented programming had their first experimental implementations in Lisp. It has been said that every new feature in every programming language introduced since Lisp was first created has been done first in Lisp, and often better.

Which brings me around to a point of sorts. Since all of these languages have been gravitating toward Lisp for all these years, why hasn’t Lisp just taken over as the language of choice? There are a number of answers to that question, some of them contradictory.

For years Lisp had a reputation for being terrible at problems involving a lot of mathematical computation. The truth of the matter was that the implementation of arithmetic in most Lisps of the time was merely good enough for researchers who were primarily interested in investigating aspects of computing other than numerical computation. When later generations of Lisp implementors took the time to optimize the numerical performance of Lisp, it came to rival C and Fortran in both speed and accuracy.

This illustrates the important observation that Lisp has seldom been considered a language for the development of production software. A couple of notable exceptions have been the use of Lisp in software to predict the performance of stocks on Wall Street and software to predict the most likely places to explore for oil. These domains were willing to accept some rough edges in order to solve these particularly hard problems at all.

At one point it was argued that the automatic garbage collection of Lisp would kick in at the most inopportune time and embarrass the developer mid-demo. Advances in the technology of garbage collection have since made this argument moot.

Another often cited argument against Lisp is the claim that other, more popular languages have a larger selection of third party libraries available to them than Lisp does. This does remain a challenge to some degree; however, many Lisp implementations have Foreign Function Interface mechanisms that allow them to call library routines written in other languages.

Another spin on the question is that Lisp has regained popularity especially in revised dialects like Clojure, which has taken the opportunity to refactor the architecture of its collection types so that the operations on them have similar names when they do similar things. This makes the language easier to learn. Clojure also runs on top of the Java Virtual Machine, making interoperation with the vast collection of Java third party libraries one of its attractive features.

The sad conclusion that I come to is that Lisp is a good source of inspiration and even a moderately good platform for investigating architectural approaches to difficult, complex software systems, but the benefits of languages such as Racket, Swift, Ruby, Groovy, and even Javascript usually far outweigh any advantages that Lisp may once have had when it comes to implementing software for production use.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

Love of Lisp

I have an on again, off again love affair with a language called lisp. It is the second oldest high level computer language, with only Fortran being older. It is deceptively simple at its core. It wasn’t even meant to be an actual computer language when it was created. It was a notation created by John McCarthy in 1958 to talk about Church’s lambda calculus. Shortly after he published a paper about it, one of his graduate students, Steve Russell, implemented it on a computer.

Lisp distinguishes itself by being composed of half a dozen or so primitive functions, out of which the entire rest of the language can be derived. Just because it can be derived that way doesn’t mean it should be, though, so most modern lisps compile to either machine code or virtual machine byte code. This typically results in a considerable performance boost.
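For the curious, here is a sketch of that primitive core, roughly McCarthy’s original set, with the value each form returns shown in a comment:

    (quote a)          ; => A        return the form itself, unevaluated
    (atom 'a)          ; => T        true when the value is an atom
    (eq 'a 'a)         ; => T        true when two symbols are identical
    (car '(a b c))     ; => A        the first element of a list
    (cdr '(a b c))     ; => (B C)    the rest of the list
    (cons 'a '(b c))   ; => (A B C)  build a list from a head and a tail
    (cond ((atom 'a) 'yes)
          (t 'no))     ; => YES      conditional evaluation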

Lisp was heralded as the language of artificial intelligence. That was probably because it had the novel property of homoiconicity. That is to say, the structure of a lisp program can be faithfully and directly represented as a data structure of the language. This gives it the singular ability to manipulate its own code. This was often thought to be one of the necessary if not sufficient capabilities for a machine that could reason about its own operation.

While this was intriguing, the thing that drew me to lisp was the conciseness of expression that it facilitated. Programs that took hundreds of lines to express in other programming languages were often expressed in four or five lines of lisp.

Lisp was also the first dynamic language. It allows the programmer to continue writing code for execution even after the original program has been compiled and run. The distinction seemed important enough to McCarthy that he termed lisp a programming system instead of a programming language.

I have always found lisp an excellent tool for thinking about data, processing, and the interactions between them. Most other programming languages require a great deal of translation from the design to the finished implementation.

And so, I find myself reading and studying a book called How to Design Programs. It is a text on program design that was written using the DrRacket language system, based on the Scheme dialect of lisp. It is interesting to see the ways that the authors approach certain topics. I hope to get the chance to apply their insights to teaching a class using the book as a text.


Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.

The Ever Expanding Standards of Literacy

Once upon a time the definition of literacy involved both reading and writing, more specifically writing with a quill. To write with a quill, you had to know how to cut its end into a usable point, which required some skill with what is still known as a pen knife. There were also pencils, but writing in pencil was not as permanent as writing in ink.

Then typewriters were invented. Now writers could write faster and more legibly using this remarkable machine. The definition of literacy didn’t change so much as the expectations of your readers were raised: you were expected to use a typewriter to submit your manuscripts. Thus, the definition of literacy expanded a little bit.

Next came the computer. With a computer you could have assistance with spelling and grammar. You could reach more people, thanks to the web. You could edit text without having to retype it entirely. You could easily make multiple copies. It became important to make sure that you kept multiple backups of the files on your computer, in multiple places. The definition of literacy expanded to include the use of computers to read and write.

We come to the most recent addition to the attributes of literacy: you must be able to create web sites. You can do that in several different ways. You can do it the old-fashioned way, using HTML and CSS. Or, you can pick one of the many web frameworks like Ruby on Rails, Django, or Grails. You might try one of the numerous implementations of Wiki. Or, you might try a content management system like WordPress or Drupal. This has further expanded the expectations placed on the literate person.

I enjoy writing. I am thankful that I have a computer instead of having to write everything out longhand. I am relatively sure that I wouldn’t have gotten this far in my quest to master the craft of writing without it. I still have much to learn, but I have much better tools with which to work.

Sweet dreams, don’t forget to tell the people you love that you love them, and most important, be kind.

Evolution of Programming Part Two

In the last installment we traced programming from an electrical engineering activity involving patch cables through assembly language and FORTRAN and then Lisp. There were many other early computer languages. Some were interpreted, like Lisp. Others were compiled, like FORTRAN. All of them sought to make it easier and faster to develop working programs. But they all overlooked one fundamental fact.

The purpose of programming languages was for programmers to communicate the details of their algorithms to other programmers. The production of an executable binary (or the interpretation of the source code by an interpreter) was a motivating by-product, but the same results could in theory have been produced by typing the numeric instruction codes directly into the computer the way the first programmers did.

High level languages allowed programmers to examine their ideas in much the same way that an author of prose reads their manuscript. They facilitated experimentation, and they served as a shorthand for communicating the details of complex computational processes in terms that the human mind could grapple with.

There have been many programming paradigms over the years. One of the first to be identified as such was Structured Programming. Many of the early languages had a very simple mechanism for altering the flow of execution in a program: a statement that evaluated an expression and then, based upon its value, caused execution to continue either with the next sequential statement in the program or with a branch to another location. This simple construct was how the program made decisions.

The problem was that in those early languages programmers often found themselves wanting program execution to branch to a different location unconditionally. This was accomplished by a GOTO statement. Both FORTRAN and Lisp had one in some form or another. The GOTO statement made it very difficult to follow the thread of execution of a program. Structured Programming asserted that all programs could be expressed using a small set of control structures: IF, THEN, ELSE, WHILE, UNTIL, and CASE, for example. The details of how those constructs work are not as important as the absence of GOTO from them.

Structured Programming did make programs easier to read, but it turned out there were still cases where GOTO seemed all but necessary. Even so, it was considered a construct to be avoided if at all possible. Both FORTRAN and Lisp implemented constructs that made it possible to use Structured Programming techniques in them, and a large number of other languages supported Structured Programming as well, notably Pascal and C.
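Common Lisp, as it happens, still carries both styles, which makes for a compact illustration. This sketch counts to three twice: first with GO, Lisp’s flavor of GOTO, and then with a structured loop:

    ;; unstructured: an explicit label and a GO statement to chase
    (let ((i 0))
      (tagbody
       top
         (when (< i 3)
           (print i)
           (incf i)
           (go top))))

    ;; structured: the same loop with no labels at all
    (dotimes (i 3)
      (print i))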

The next popular programming paradigm was Object Oriented (OO) Programming. The idea was that you bundled data, stored in bins called fields, with the pieces of programs that operated on it, called methods. In the first OO language, Smalltalk, the model was that objects sent messages to other objects. The messages had arguments that made them more specific. The objects would receive these messages, dispatch them to the methods that processed them, and return the value that the method computed to the caller.
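The flavor of the idea can be sketched in Common Lisp’s own object system, CLOS, keeping to the one notation used in these posts; the account class and its deposit method are invented for illustration, with a field and the method that operates on it bundled together:

    (defclass account ()
      ((balance :initarg :balance :accessor balance)))  ; a field

    (defgeneric deposit (acct amount))                  ; the message

    (defmethod deposit ((acct account) amount)          ; the method that answers it
      (incf (balance acct) amount))

    ;; usage: create an object, then send it a message with one argument
    (defvar *acct* (make-instance 'account :balance 100))
    (deposit *acct* 25)                                 ; => 125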

It turns out that Object Orientation was a very effective means of compartmentalizing abstractions. It made it easier for programmers to visualize their programs in terms of a community of cooperating abstractions.

OOP is still a popular paradigm today. Examples of modern object oriented languages include C++, Java, Ruby, Python, and many others. As it turns out, OOP didn’t replace Structured Programming. Rather, it extended it.

Another popular programming paradigm is functional programming. Surprisingly enough, Lisp was the first functional programming language. One of the key aspects of functional programming languages is the fact that the pieces of programs, called functions or methods, can be manipulated and stored just like any other data. They can be passed as arguments to other functions, and stored in variables to be recalled and executed later.

An example will help to clarify. Suppose that you had a routine that sorted a list. In many languages that routine would only be able to process a list whose items were all the same kind of data, perhaps all numbers or all text, because it would have to know how to compare any two elements to see what order to put them in. In a functional language you could write a sort routine that took a comparison function as well as the list of items to sort. Then, if you passed in a list of numbers, you could pass in a comparison function that knew how to compare two numbers. If you passed in a list of text items, you could pass in a comparison function that knew how to compare two text items. The actual sort routine wouldn’t have to know what type of items were stored in the list.
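Common Lisp’s built-in sort works exactly this way, taking the comparison function as an argument, so one routine covers both cases:

    (sort (list 3 1 2) #'<)                        ; => (1 2 3)
    (sort (list "pear" "apple" "fig") #'string<)   ; => ("apple" "fig" "pear")
    ;; the sort never needs to know what kind of items it is ordering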

Another aspect of functional languages is the concept of referential transparency. That is a very scary term that simply means that a function called with a given set of arguments will always return the same value, such that you can replace the call to the function with the value that it returns. This is a very good thing if you have a function that takes a lot of time to compute and gets called multiple times with the same arguments. You can save the result from the first call (a technique called memoizing) and return it any time the function is called with those arguments again, speeding up the performance of the program immensely.
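A minimal memoizing wrapper might look like this sketch in Common Lisp; it caches results keyed by the argument list, and it is only safe on functions that really are referentially transparent (slow-fn below is a stand-in for whatever expensive function you have):

    (defun memoize (fn)
      "Return a function that caches FN's results by argument list."
      (let ((cache (make-hash-table :test #'equal)))
        (lambda (&rest args)
          (multiple-value-bind (value found) (gethash args cache)
            (if found
                value                                     ; reuse the saved result
                (setf (gethash args cache) (apply fn args)))))))

    ;; usage: wrap the slow, pure function once, then call it as before
    (setf (symbol-function 'slow-fn) (memoize #'slow-fn))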

This brings us almost up to how the World Wide Web fits in but it is going to have to wait for part three. Sweet dreams, don’t forget to tell the people you love that you love them, and most important of all, be kind.

Of Puns and Monkey Patching

The English language is plagued by a plethora of words that have many different meanings. Often, words get twisted over time until they mean something entirely different than they originally did. I suspect this is true in most natural languages. It can be the source of many misunderstandings, quarrels, and even wars. It is no surprise then, that computer languages are also subject to these kinds of problems.

One of the first recorded instances was in early FORTRAN implementations, which passed all arguments, including literal constants, by reference. A subroutine that assigned a new value to one of its parameters could thereby effectively redefine a literal: pass in the constant 0, assign 100 to the parameter, and from then on every use of 0 in the program actually meant 100. This rendered all of the computation in the rest of the program suspect if not totally wrong.

Being able to redefine the meaning of words in computer languages isn’t always a source of consternation, though. In some modern languages, like Ruby for instance, it is used as a mechanism for fixing software. In Ruby, functions are called methods. A method can either be anonymous, in which case it is only useful in an immediate context, for instance as an argument to another method, or it can be assigned as the value of a name, in which case it can be executed whenever it is required by “calling” the method using its name.

Because of the way that Ruby works, you can also do something colloquially called “monkey patching”. This works by first assigning the value of a named method to another, auxiliary name. At this point the original name and the auxiliary name both refer to the original method. Then, you redefine the original name, inserting expressions before and/or after a call to the auxiliary method. This becomes the new definition of the original method, and any code that calls the original name will now get the newly defined method.

To give an example of how you might use this feature, suppose you wanted to print a message “Now entering myMethod” every time you called the method named myMethod. You could easily define the new method to print that message before calling the original method through the auxiliary name, as sketched below. Later, when you were finished using this monkey-patched method to analyze the behavior of the program, you could restore the method to its original state by assigning the value of the auxiliary method back to the original name.
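Ruby does this with its own method-aliasing facilities; to keep a single notation across these posts, here is the same maneuver sketched in Common Lisp, where a function object can likewise be saved under an auxiliary name and the original name redefined (my-method is an invented stand-in):

    (defun my-method (x) (* x 2))                ; the original method

    ;; save the original function object under an auxiliary name
    (setf (symbol-function 'my-method-original)
          (symbol-function 'my-method))

    ;; redefine the original name to wrap the saved function
    (defun my-method (x)
      (format t "Now entering my-method~%")      ; the inserted expression
      (my-method-original x))                    ; call through the auxiliary name

    ;; later, restore the original definition
    (setf (symbol-function 'my-method)
          (symbol-function 'my-method-original))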

This is a feature that can easily mess up your program. But it can also let it do things that would require much more effort if done any other way. As Spider-Man’s Uncle Ben is known for saying, “With great power comes great responsibility.”

There are other interesting analogies between natural languages and computer languages. We’ll discuss them more in a future blog post.

A Brief Survey of Dynamic Programming Languages

A lot has been written about static languages lately, so I am going to focus on dynamic languages. Dynamic languages allow the programmer to develop software by interacting with a “live” language system. The system accepts input and gives feedback about that input immediately as it is entered and evaluated. This encourages an experimental approach to programming. What happens when I do this? Just do it and find out immediately.

I talked at some length about the oldest dynamic language, Lisp, last time. It provides an example of many of the features of subsequent dynamic languages. It is classified as a multi-paradigm language, which means that it supports many different styles of programming. You have probably heard of object-oriented programming; Lisp has a very flexible object system called CLOS (the Common Lisp Object System). You may have heard of functional programming; without going into great detail about what functional programming is, Lisp was one of the first languages to treat functions as directly manipulable elements. Lisp also supports rule-based programming, aspect oriented programming, and many other programming paradigms. It has been said that most new languages are just adding features that were pioneered by Lisp.

What other dynamic languages are there? Another venerable and influential dynamic language is Smalltalk. It was developed by researchers at Xerox’s Palo Alto Research Center (PARC). It was the first language to treat every element in the language as an object. It was also a complete system, including editors, code browsers, and a host of other tools, all written in Smalltalk. The system is programmed by interactively defining new objects that are immediately available for use as soon as they are defined. The entire system is graphically oriented. Many of the concepts of modern graphical systems were first introduced by Smalltalk. Unsurprisingly, there are a number of implementations of Smalltalk in use today.

Another dynamic language that has been around for a long time is Forth. Forth is a language that inspires strong opinions; programmers either love it or hate it. It has been described as write-only. It is based on the idea of using a data structure called the stack to temporarily store operands to be used in subsequent computations. The stack is a mechanism where the programmer can “push” values onto the stack and later “pop” the most recently pushed item off; all operations are done on the latest items pushed. This allows an arbitrary number of temporary values to be readily accessible. It also requires the programmer to keep track of exactly what has been pushed onto the stack at any point in the execution of their program. Hence, the “write-only” label.
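To keep to the one notation used elsewhere in these posts, here is the Forth expression 3 4 + 5 * simulated with an explicit stack in Common Lisp; each comment names the Forth word being imitated and shows the stack afterward:

    (let ((stack '()))
      (push 3 stack)                             ; 3    stack: (3)
      (push 4 stack)                             ; 4    stack: (4 3)
      (push (+ (pop stack) (pop stack)) stack)   ; +    stack: (7)
      (push 5 stack)                             ; 5    stack: (5 7)
      (push (* (pop stack) (pop stack)) stack)   ; *    stack: (35)
      (pop stack))                               ; => 35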

Forth allows the programmer to define new functions interactively through the use of a REPL (Read, Eval, Print, Loop). This is a key feature of dynamic languages and goes a long way toward making up for the cognitive load of keeping track of the contents of the stack.

One of the more recent dynamic languages to gain popularity is Ruby. Ruby was inspired by Lisp but adopts the Smalltalk stance that everything is an object. It first gained widespread notice because of a popular web framework written in Ruby called Ruby on Rails. Many of the best practices of modern programming have originated in the Ruby community.

The last dynamic language that I’m going to talk about in this post has had a rather profound impact on computing in the last couple of years. I’m talking, of course, about Python. Python is an interesting language. Noted for its use of indentation to mark the start and end of code blocks and for the ease of calling code libraries written in other, static languages, it has captured the hearts of scientists and educators alike. It is the lingua franca of the popular, inexpensive Raspberry Pi computer. It has carved out a place for itself as probably the most popular modern dynamic language.

In future posts, I’ll talk more about each of these languages and even show some examples of what programs written in them look like.

Static Vs. Dynamic Computer Languages

I’ve talked about the history of computer languages in previous blog posts. In particular, I’ve talked about the difference between compiled languages and interpreted languages. Now I’d like to make a further distinction: some languages are static and others are dynamic. More specifically, static languages are reduced to executable instructions ahead of time, during a compilation phase, whereas dynamic languages are processed interactively as the programmer explores the solution space defined by her program.

Static languages can more easily be made to run fast and can more readily be analyzed for correctness. Dynamic languages support organic exploration of the problem and its solution. Dynamic languages can be, and have been, made to run fast and to analyze themselves for correctness; it’s just typically more difficult to do so.

The surprising thing, to me anyway, is that we have had examples of both kinds of languages pretty much since the first higher level languages were written. FORTRAN was a static language written for engineers to automate the solution of numeric problems. Lisp was a dynamic language to explore the potential for computers to do things other than just numeric computation.

Engineers and managers liked FORTRAN and its descendants. The applications written in them were immediately practical, and it was easy to measure how much work the programmers had done. Initially this was measured in Source Lines Of Code, or SLOCs. This was all well and good until programmers realized that a more complicated solution requiring more source lines made them look better to the managers measuring their performance.

Over time, we learned a lot about how to write programs in static languages until finally, in recent years, most “serious” programming is done in static languages like C++ and Java. This is a sad situation. Static languages are good for many things but there are a lot of problems that are better served by dynamic languages.

Lisp has a bad reputation among many engineers, including many computer scientists. This is often because they were forced to write a project in Lisp in a Computer Languages course by an instructor who didn’t understand Lisp themselves. But, to be fair, Lisp does require a somewhat different mind set than most static languages.

There are also a bunch of myths about Lisp that may have had some basis in fact at one point, but over time Lisp has improved, the hardware that all languages run on has improved, and the myths are, for the most part, no longer valid. For instance, it was initially said that Lisp wasn’t any good for numerical computing. That was initially true, largely because the authors of Lisp language systems were interested in exploring the non-numerical aspects of programming and consequently made little effort to make the language handle numbers efficiently. This was remedied in the early seventies, but Lisp retains a reputation for being bad with numbers in some circles.

The main thing that Lisp is better at than its static counterparts is allowing the programmer to interactively explore a problem space. In a static language, one typically has to write a substantial amount of code before getting to the point where it can be run to see how it behaves. If it doesn’t do what you intend, which is typical for the first couple of attempts, you have to go back to the source and change it, recompile the program, and rerun it to see if you’ve fixed the problem.

With Lisp, one types expressions directly to the interpreter using what is called a Read Eval Print Loop, or REPL. The REPL reads an expression, evaluates it, and prints the resulting value. Then it repeats the process (hence the Loop). This allows the programmer to explore the problem and potential solutions interactively, getting feedback from the computer after each expression. It is amazing how much faster correct programs are developed using this approach.
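The whole mechanism is small enough to sketch in a couple of lines of Common Lisp; this toy version omits the error handling and conveniences a real REPL provides:

    (loop
      (format t "~&> ")         ; Print a prompt
      (print (eval (read))))    ; Read a form, Eval it, Print its value, Loop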

I’ll talk more about dynamic languages in another post soon. There is much that I want to say about them. And Lisp may have been the first dynamic language but it certainly wasn’t the last. I’ll have more to say about other members of the dynamic language family. I’ll also have something to say about ways that we can realize the best aspects of both of these styles of language by combining them into hybrid languages.

A Manner of Speaking

Language is important. It is how we share what we think with others, but even more importantly, it has become an integral part of how we think. If we have words to express something, it is easier to contemplate hypothetical variations on that thing. My dog is very smart, but she has a very limited imagination. That is because she lives only in the present and does not think about things in language. Language is a puzzle that she decodes to gain approval, not a tool that she uses to master her environment.

This insight was inspired by a YouTube video I watched in which Dave Thomas, a popular computer consultant, writer, and publisher, told how a bunch of clever people transformed an adjective, “agile”, into a proper noun, “Agile”, and turned it into a gold mine. By taking a philosophical position statement, the Manifesto for Agile Software Development, and turning it into the prescription for solving all the problems inherent in software development, they created an industry that sells books, training, and consulting, among other things, using fear and all the other modern marketing tricks so prevalent in our online society.

They did this by taking the adjective agile and turning it into a noun. I stopped and thought about the fact that this kind of abuse of language goes on all around us. My mother first pointed it out decades ago; she called it “verbing nouns”, with her tongue firmly in cheek. It is not going to stop just because we catch people doing it. But we can become more aware of what is going on and think about the intentions of people who are quick to coin new usages for words. All we have to do is think.