I’ve been thinking about this and failing to see the fundamental intellectual issue with creating an artificial intelligence more intelligent than any human.
I can see risks with AIs that have faults, but I don’t see any issue with an AI’s level of intelligence as such.
Let’s put it like this. Bill Gates may well be more intelligent than Barack Obama. Which one has more power in the world? Intelligence does not grant any right to power or authority. Most dictators in history have not been the most intelligent people in their country, and unfortunately many elected politicians have not been too bright either.
Greater intelligence does not automatically grant power or influence over people. Many intelligent people have no real interest in running society or the world — so the idea that a massively intelligent AI would implicitly want to take over the world is really just fear of the unknown.
Of course we don’t want to allow an AI to launch nuclear missiles or control a Skynet-style army of killer robots. But personally I’d far rather have a hyper-intelligent AI control them than a really dumb human.
Nearly all humans have, at one time or another, dealt with people more intelligent than themselves. If most of us are used to dealing with people smarter than us, we can certainly deal with a machine smarter than us.
I do wonder whether people like Bill Gates have an issue with this precisely because they are not used to dealing with anyone smarter than themselves.