Ethical Concerns About Digital Super-Intelligence (AI)

I saw a TED talk by Nick Bostrom the other day that has haunted me. It was about the impending emergence of a digital super-intelligence. This is often called Artificial Intelligence, but since we have neither a rigorous definition of intelligence nor an objective criterion for determining what constitutes a natural intelligence, I prefer the term digital super-intelligence.

The problem is, we are setting up the conditions for this super-intelligence to emerge, but we will have virtually no control over it when it does. There has been talk of developing guidelines to ensure that it will share our values, but I can't see how that is possible, especially since there are so many different sets of values and we can't seem to reach any agreement on which of them are fundamental and which are secondary.

I don't have any answers yet. I don't know if I ever will. But I am sincerely concerned that we are going to let this genie out of the bottle, and that things will change extremely fast, and not necessarily for the benefit of mankind. I think this is potentially far more dangerous than experimenting with creating human beings from scratch in the lab. And make no mistake, I'm not advocating that either.

What is clear is that we need to focus more intensely on the ethics of the science we are doing. The catch is that we can't simply ban these activities outright; someone will pursue them whether they are banned or not. The best thing we can do is entice our sharpest minds to think about these difficult issues and try to come up with a viable plan of action for when these digital super-intelligences emerge from the computer science laboratories, as they almost certainly will.