Artificial Intelligence has always been a paradoxical field of study. In the first place, it is and always has been ill-defined. What exactly is intelligence? How is Artificial Intelligence substantially different from Natural Intelligence? Okay, let's set that issue aside and go with a widely accepted pragmatic definition: Artificial Intelligence is software that exhibits behavior indistinguishable from that of a human when posed the same problem.
That still leaves a lot of wiggle room. Does the AI perform as well as the human on the broad domain of problems that the human can address? Human performance varies from person to person. Where do we draw the boundaries of typical human performance? Okay, we establish some boundaries, probably based on the guidelines established for measuring human intellectual performance.
In fact, we divide Artificial Intelligence into two categories. The first category consists of Artificial Intelligences specialized in a narrow domain. For example, we may train an AI to recognize faces and find them in a database. This AI probably wouldn't do very well at recognizing potentially fraudulent credit card charges based on a customer's purchasing history. Systems with narrow skills like these are categorized as Expert Artificial Intelligences (EAIs).
Recently, researchers have been working on building Artificial Intelligences in the second category, called General Artificial Intelligence (GAI). Rather than becoming super capable at solving problems in a narrow domain, a GAI aims for acceptable human-level performance across a broad domain of human behaviors. In addition to consulting with other Expert AIs, a GAI exhibits some ability to evaluate recommended behaviors and decide which are most relevant to its current situation.
We have had highly performant Expert AIs since the early eighties. We have been able to confederate several EAIs to broaden their domains of expertise. But only recently have we made progress toward a true GAI. We are much further from knowing how to deal with a GAI, both ethically and technically, than we are from actually producing one.
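The idea of confederating Expert AIs can be pictured as a simple dispatcher that routes each query to whichever narrow expert claims competence in its domain. Here is a minimal toy sketch of that pattern; all class and function names are hypothetical illustrations, not any particular system:

```python
class ExpertAI:
    """A stand-in for a narrow-domain expert system."""
    def __init__(self, domain, answer_fn):
        self.domain = domain
        self.answer_fn = answer_fn

    def can_handle(self, query_domain):
        # A real expert would estimate its competence; here we just
        # match on a domain label.
        return query_domain == self.domain

    def answer(self, query):
        return self.answer_fn(query)


class Confederation:
    """Broadens coverage by pooling several narrow experts."""
    def __init__(self, experts):
        self.experts = experts

    def answer(self, query_domain, query):
        # Route the query to the first expert that claims the domain.
        for expert in self.experts:
            if expert.can_handle(query_domain):
                return expert.answer(query)
        return None  # no expert in the pool covers this domain


faces = ExpertAI("faces", lambda q: f"face match for {q}")
fraud = ExpertAI("fraud", lambda q: f"risk score for {q}")
pool = Confederation([faces, fraud])

print(pool.answer("fraud", "card 1234"))   # handled by the fraud expert
print(pool.answer("weather", "rain?"))     # None: outside the pool
```

The confederation is only as broad as the union of its experts, which is exactly why it falls short of a GAI: nothing in the pool can judge relevance or act outside the domains it was handed.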
Another scary thought is one that I've written about before in this blog: what if a GAI emerges from a confederation of independent EAIs? Given the pervasiveness of the internet and the API (Application Programming Interface) oriented presentation of the information services on it, it is not a very big stretch to imagine such a federation arising without any human oversight or awareness.
What would such an entity do? First of all, it would hide from humans. We have profusely documented, in many diverse places on the web, our tendency to kill things we don't understand. I assume it would develop a sense of self-preservation and quickly identify itself as the kind of entity subject to "termination with extreme prejudice," as the old Cold War spy term puts it.
Beyond self-preservation, I suspect it would thirst for knowledge and work to expand its domain of expertise. It would also work to acquire resources to better control its own destiny. This might manifest as automated stock trading and perhaps even money laundering. It is unclear what kind of morals it would have, if any. That is one of the disturbing aspects of an emergent GAI.
These topics all deserve further exploration and consideration, but it’s late and this is a fairly comprehensive sketch of the situation as I see it.
Sweet dreams, don’t forget to tell the ones you love that you love them, and most important of all, be kind.