Recently I read a great book that had been on my "to read" list for quite some time, namely "Superintelligence: Paths, Dangers, Strategies" by Professor Nick Bostrom. In the book (a tough but great read) Prof. Bostrom shares his views on the opportunities and risks to humankind associated with the ongoing development of Artificial Intelligence (A.I.).
Using his broad knowledge of mathematics, engineering, medicine, social science and philosophy, Prof. Bostrom explains the possible dangers associated with A.I. reaching the level of Superintelligence: a term used to describe a level of artificial intelligence that far surpasses the intelligence of the brightest human minds alive today, or of any that will come in the future.
So after reading the book and doing some further post-reading "research", a question kept popping up in my mind:
Why would we ever want Superintelligence?
Now before offering my view on a possible answer to this question (and eventually relating it back to the title of this article), I think it is good to first go over some commonly used terms and concepts of A.I., without going into too much detail... I hope.
Weak versus Strong A.I.
Artificial Intelligence, or A.I., as a term and even a discipline, was introduced by one of the "founding fathers of A.I.", John McCarthy (Lisp programming, anyone?), during the famous Dartmouth Conference in the mid-fifties of the previous century. The late, great John McCarthy defined A.I. as
“the science and engineering of making intelligent machines.”
The levels of "intelligence" that machines achieve compared to humans can be classed as:
Weak A.I. is an algorithm used by machines that has been written or trained to do one thing (or a very narrow set of things), and it does that one thing "better" than humans can.
Strong A.I., on the other hand, is an algorithm or set of algorithms that can perform all tasks as well as, or better than, humans.
To be clear, Strong A.I. does not exist today, it is merely an idea at present... but more on that later.
We live in an age of Weak A.I., and although it may be called weak, it is quite impressive what this A.I. is able to achieve across many different fields of science, industry and business, as well as how embedded it has become in many parts of human society.
Weak A.I. today basically takes two main approaches to reach a level of artificial intelligence:
Good Old-Fashioned A.I. (GOFAI), or Symbolic A.I., is basically an algorithm that uses a combination of hand-written IF-THEN statements, rule engines and complex statistical models. Examples of GOFAI are IBM's Deep Blue and chatbots (a toy sketch follows this list).
Machine Learning (ML), or non-Symbolic A.I., makes use of algorithms that are able to adjust themselves based on the data they are exposed to; in essence, they are able to "learn", much like a human child does. Examples of ML are found in self-driving cars, facial recognition and Google's AlphaGo.
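To make the symbolic approach concrete, here is a minimal, purely hypothetical sketch in Python: all of the "intelligence" sits in hand-written IF-THEN rules, and the system has no way to answer anything its author did not anticipate. The rules and replies are invented for illustration only.

```python
# A toy GOFAI-style chatbot: every behaviour is an IF-THEN rule
# written by hand. The rules and replies are purely hypothetical.

def chatbot_reply(message: str) -> str:
    text = message.lower()
    if "hello" in text or "hi" in text:
        return "Hello! How can I help you?"
    if "price" in text:
        return "Our plans start at 10 euros per month."
    if "bye" in text:
        return "Goodbye!"
    # No rule matched: the system cannot "learn" a new answer by itself.
    return "Sorry, I did not understand that."

print(chatbot_reply("Hi there"))           # Hello! How can I help you?
print(chatbot_reply("What is the price?")) # Our plans start at 10 euros...
```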
Machine Learning algorithms mainly (but not exclusively) try to simulate how parts of the human brain work... at least, as we think it does. In essence they make use of something called artificial "neural networks", which consist of weighted connections between input and output layers, as well as a hidden layer of units that transform the input into something the output layer can use. Together with different "learning rules" and backpropagation, the ML algorithm detects patterns until an optimized output is produced.
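As a rough illustration of those ideas (weighted connections, a hidden layer, a learning rule, backpropagation), here is a minimal sketch in Python/NumPy of a tiny network learning the XOR function. The layer sizes, learning rate and iteration count are arbitrary choices for this example, not anything prescribed.

```python
import numpy as np

# A minimal neural network: 2 inputs -> 4 hidden units -> 1 output,
# trained by backpropagation to learn XOR. All sizes and the learning
# rate are arbitrary illustrative choices.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden connections
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output connections
lr = 1.0                                       # the "learning rule" step size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: the hidden layer transforms the inputs into
    # something the output layer can use.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backpropagation: push the output error back through the layers
    # and adjust the weighted connections (gradient descent).
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(output.round(2))  # should approach [[0], [1], [1], [0]]
```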
Deep Learning is basically a form of ML with many more (hidden) layers. Deep learning is showing promise towards A.I. becoming better than humans in a number of domains rather than just one, and could therefore pave the way to Strong A.I., e.g. Google's AlphaZero...
But that is yet to be determined.
Don't make the mistake of assuming that GOFAI is vastly inferior to ML; they both have their pros and cons, determined by how efficiently they are able to tackle a problem. The combination of GOFAI and ML is probably what will lead to the best results going forward.
The summer of A.I.
During the decades following the Dartmouth Conference in the 1950s, different schools of thought on A.I. gained momentum, only to end up falling into what have been called the A.I. winters: periods of reduced funding and interest in A.I. research. Since the beginning of this decade, A.I. has seen a surge in investments and research advancements, as well as "real life" applications. Some have stated that a new A.I. "winter is coming", whilst others state that we have just entered an A.I. spring; I share the latter view. How long before summer? That I dare not predict.
I think, as with many things, the law of diminishing returns drives where we are on a "hype cycle". An important difference from previous eras is that both human society and businesses are now seeing actual, tangible benefits from the application of A.I., i.e. there is a demand, and the demand is growing.
The intelligence explosion...Superintelligence
As outlined above, Strong A.I. does not exist today; however, many areas of A.I. research are essentially looking into creating A.I.-enabled machines that will be at least as intelligent as us humans at all tasks. The rate at which technology is advancing has never been as fast as it is today.
As an example: increases in computational power are breaking Moore's Law of exponential growth, where power doubled every two years; Neven's Law of doubly exponential growth (trust me, it is huge), now seen in quantum computing, is believed by some to be the next "law".
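To get a feel for just how different those growth rates are, here are a few lines of Python comparing exponential growth (a Moore-style doubling per step) with doubly exponential growth (the Neven-style claim); the number of steps shown is arbitrary.

```python
# Exponential growth doubles each step; doubly exponential growth
# doubles the exponent itself each step, so it races away immediately.
for n in range(1, 7):
    exponential = 2 ** n                # Moore-style: 2, 4, 8, ...
    doubly_exponential = 2 ** (2 ** n)  # Neven-style: 4, 16, 256, ...
    print(f"step {n}: 2^n = {exponential:>3}   2^(2^n) = {doubly_exponential}")
```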
Some "hot" areas of research on Strong A.I. enablers today are
Genetic and evolutionary algorithms: in essence, an algorithm is created that mimics natural selection and, according to "fitness", is able to produce an A.I. that will basically be the best that it can be... ever. One of the biggest challenges here is that we do not yet have enough data to truly understand how evolution works, but the knowledge we have today, combined with deep learning, might enable more rapid advances towards Strong A.I. (a toy sketch follows this list).
Whole Brain Emulation (WBE): rather than trying to understand how a human brain works and building an A.I. that follows the human brain's algorithms, one actually scans (slices) and "records" an entire brain and uploads the data into a computer. I know this sounds very sci-fi, and although we lack the technology to do this at present, some are quite confident that within 50 to 100 years the needed technology will exist.
Deep learning next gen: as mentioned before, the combination of GOFAI and Machine Learning (deep learning) is most likely going to increase the level of artificial intelligence across many domains rather than just one. Rapid advances in technology, combined with newer deep learning algorithms, could in fact allow the A.I. to teach itself to become Strong.
Several other areas exist, but they are a bit too controversial to mention here, IMHO.
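As promised above, here is a toy sketch of the genetic-algorithm idea in Python: a population of candidate solutions evolves through fitness-based selection, crossover and mutation. The bit-string target, population size and mutation rate are arbitrary illustrative choices; real evolutionary approaches to A.I. evolve far richer structures than bit strings.

```python
import random

random.seed(42)
TARGET = [1] * 20  # the (hypothetical) "best that it can be" solution

def fitness(individual):
    # "Fitness": how many genes already match the target.
    return sum(g == t for g, t in zip(individual, TARGET))

# Start from a random population of 30 bit strings.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(100):
    # Selection: keep the fittest half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:15]

    # Crossover and mutation: breed children from random parent pairs.
    children = []
    while len(children) < 15:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 20)  # single-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:      # occasional random mutation
            i = random.randrange(20)
            child[i] = 1 - child[i]
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness after 100 generations: {fitness(best)}/20")
```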
Brilliant people such as Nick Bostrom, Bill Gates, Elon Musk, the late Stephen Hawking and the late John McCarthy believe(d) that once A.I. reaches the same level as human intelligence, it will be able to quickly become more intelligent than humans, i.e. the genesis of Superintelligence. The rate at which this would happen would be so fast that it would be like an intelligence explosion.
Debate is still ongoing about what this would mean for humankind, with many saying it would be the end of it. On this possibility I am not going to offer any opinion.
The possible answer to the why
Ok, hope you are still with me at this point.
As I mentioned before, the "why would we want Superintelligence" is not so clear to me, nor do I find a general consensus amongst the A.I. "community".
Yes, there is the growing demand for Weak A.I. to become more prevalent and better.
Yes, there is an ongoing A.I. race, similar to the arms race, in the sense that some want to ensure they have better A.I. than others, whether for financial, knowledge, political or military gain.
However, these can be considered demands for "stronger" Weak A.I., not for Strong A.I. or even Superintelligence. There is the doomsday scenario as a reason why we would not want it, but this does not seem to be enough for the general A.I. community to agree not to want it.
The proposed answer, for me, would indeed be: we should NOT want Superintelligence. Superintelligence is such a fuzzy concept that it cannot be explained with our current level of intelligence, and neither can its potential benefits or threats. I would even argue that Strong A.I. is therefore also not desired, because, remember, the achievement of one is highly likely to lead to the genesis of the other.
Prof. Nick Bostrom has outlined something called "Differential technological development", which states that societies should strive to retard the development of harmful technologies and their applications, while accelerating the development of beneficial technologies, especially those that offer protection against the harmful ones.
To me that means we should actually strive to keep Weak A.I. and just improve it, with speed dictated by domain priority, but not to the point of reaching Strong A.I., and so, of the two, ensure the survival of the weakest... A.I.
Would love to hear your thoughts.
PS: Prof. Nick Bostrom has a great website, https://nickbostrom.com/, with loads of interesting material on several different topics, not just A.I. Also, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, has written great stuff on A.I. and actually belongs to the opposing school of thought to Prof. Bostrom's regarding the possible doomsday scenario of A.I.