One of my tech heroes, Ray Kurzweil, has long been predicting the Singularity, that is, the point at which digital intelligence surpasses human intelligence. In an interview at the Exponential Finance conference he discussed his views on preventing what he called existential threats. His position seems to be that since our generation has dealt with the nuclear existential threat, we at least have an example showing that such threats can be managed.
While I agree to an extent, and applaud his optimism, I still think that the digital super-intelligence existential threat is different in magnitude if not in kind. As I said in my prior post, we need to raise awareness of the danger and actively pursue contingency plans. We should assume that it is likely to happen, work to reduce the probability that it will, and at the same time think of ways to mitigate the danger when it does. It will do no good to ban AI research and think that we have dealt with the problem.