An outline of what happened in the last 60 years in AI

This article has been awarded the Silver badge by KDnuggets as one of the most read and shared of April 2017.

I. The origins

In spite of all the current hype, AI is not a new field of study: its roots lie in the fifties. If we exclude the purely philosophical reasoning path that goes from the Ancient Greeks to Hobbes, Leibniz, and Pascal, AI as we know it officially started in 1956 at Dartmouth College, where the most eminent experts gathered to brainstorm on intelligence simulation.

This happened only a few years after Asimov set out his three laws of robotics, but more relevantly after the famous paper published by Turing (1950), in which he proposed for the first time the idea of a thinking machine, together with the more popular Turing test to assess whether such a machine shows, in fact, any intelligence.

As soon as the research group at Dartmouth publicly released the contents and ideas that arose from that summer meeting, a flow of government funding was reserved for the study of creating a nonbiological intelligence.

II. The phantom menace

At that time, AI seemed easily reachable, but it turned out that was not the case. At the end of the sixties, researchers realized that AI was indeed a tough field to manage, and the initial spark that brought the funding started dissipating. This phenomenon, which has characterized AI throughout its whole history, is commonly known as the "AI effect", and it consists of two parts:

i) The constant promise of a real AI coming in the following decade;
ii) The discounting of AI behavior after it has mastered a certain problem, continuously redefining what intelligence means.

In the United States, DARPA's main reason for funding AI research was the idea of creating a perfect machine translator, but two consecutive events wrecked that proposal, beginning what would later be called the first AI winter.

In fact, the Automatic Language Processing Advisory Committee (ALPAC) report in the US in 1966, followed by the "Lighthill report" (1973), assessed the feasibility of AI given the developments of the time and concluded negatively about the possibility of creating a machine that could learn or be considered intelligent. These two reports, jointly with the limited data available to feed the algorithms and the scarce computational power of the machines of that period, made the field collapse, and AI fell into disgrace for an entire decade.

III. Attack of the (expert) clones

In the eighties, though, a new wave of funding in the UK and Japan was motivated by the introduction of "expert systems", which were basically examples of narrow AI as defined in previous articles. These programs were, in fact, able to simulate the skills of human experts in specific domains, but this was enough to stimulate a new funding trend. The most active player during those years was the Japanese government, and its rush to create the fifth-generation computer indirectly forced the US and UK to reinstate funding for research on AI.

This golden age did not last long, though, and when the funding goals were not met, a new crisis began. In 1987, personal computers became more powerful than the Lisp machine, which was the product of years of research in AI. This marked the start of the second AI winter, with DARPA taking a clear position against AI and further funding.

IV. The return of the Jed(AI)

Luckily enough, in 1993 this period ended with the MIT Cog project to build a humanoid robot, and with the Dynamic Analysis and Replanning Tool (DART), which paid back the US government the entire funding since 1950; and when in 1997 Deep Blue defeated Kasparov at chess, it was clear that AI was back on top.

In the last two decades, much has been done in academic research, but AI has only recently been recognized as a paradigm shift.
There are of course several causes that might help us understand why we are investing so much in AI nowadays, but there is one specific event we think is responsible for the trend of the last five years. If we look at the following figure, we notice that, regardless of all the developments achieved, AI was not widely recognized until the end of 2012. The figure was created using CBInsights Trends, which plots the trends for specific words or themes (in this case, Artificial Intelligence and Machine Learning).

[Figure: Artificial intelligence trend for the period 2012–2016.]

More in detail, I drew a line on a specific date I believe to be the real trigger of this new optimistic AI wave: Dec. 4th, 2012. That Tuesday, a group of researchers presented at the Neural Information Processing Systems (NIPS) conference detailed information about the convolutional neural networks that had granted them first place in the ImageNet competition a few weeks before (Krizhevsky et al., 2012). Their work improved the classification accuracy from 72% to 85% and established neural networks as fundamental to artificial intelligence. In less than two years, advancements in the field brought classification in the ImageNet contest to an accuracy of 96%, slightly higher than the human one (about 95%).

The figure also shows three major growth trends in AI development (the broken dotted line), outlined by three major events:

i) The three-year-old DeepMind being acquired by Google in Jan. 2014;
ii) The open letter of the Future of Life Institute, signed by more than 8,000 people, and the study on reinforcement learning released by DeepMind (Mnih et al., 2015) in Feb. 2015;
iii) The paper on deep neural networks published in Nature in Jan. 2016 by DeepMind scientists (Silver et al., 2016), followed by the impressive victory of AlphaGo over Lee Sedol in March 2016 (itself followed by a list of other impressive achievements; check out the article by Ed Newton-Rex).

V. A new hope

AI is intrinsically highly dependent on funding because it is a long-term research field that requires an immeasurable amount of effort and resources to be fully explored. There are therefore rising concerns that we might currently be living through the next peak phase (Dhar, 2016), and that the thrill is destined to stop soon. However, like many others, I believe that this new era is different, for three main reasons:

i) (Big) data, because we finally have the bulk of data needed to feed the algorithms;
ii) Technological progress, because storage capacity, computational power, algorithmic understanding, greater bandwidth, and lower technology costs have allowed us to actually let the models digest the information they need;
iii) Resource democratization and efficient allocation, introduced by the Uber and Airbnb business models and reflected in cloud services (e.g., Amazon Web Services) and parallel computing operated by GPUs.

References

Dhar, V. (2016). "The Future of Artificial Intelligence". Big Data, 4(1): 5–9.
Krizhevsky, A., Sutskever, I., Hinton, G. E. (2012). "ImageNet classification with deep convolutional neural networks". Advances in Neural Information Processing Systems: 1097–1105.
Lighthill, J. (1973). "Artificial Intelligence: A General Survey". In Artificial Intelligence: A Paper Symposium, Science Research Council.
Mnih, V., et al. (2015). "Human-level control through deep reinforcement learning". Nature, 518: 529–533.
Silver, D., et al. (2016). "Mastering the game of Go with deep neural networks and tree search". Nature, 529: 484–489.
Turing, A. M. (1950). "Computing Machinery and Intelligence". Mind, 59: 433–460.

Disclosure: this article was originally part of the longer piece "Artificial Intelligence Explained", which I am now breaking down based on some good readers' feedback about readability. I hope this helps.

Follow me on Medium, or look at my other articles on AI and Machine Learning:
Open Source in Artificial Intelligence
Big Data and Risk Management in Financial Markets (Part II)
Big Data Strategy (Part III): is your company data-driven?