History, Waves and Winters in AI

by dudeperf3ct, July 30th, 2018

“I don’t see that human intelligence is something that humans can never understand.”

~ John McCarthy, March 1989

Is it? Credits: DM Community

This post is highly motivated by Kai-Fu Lee's talk on “Where Will Artificial Intelligence Take Us?”

Here is the link for all the listeners. In case you prefer to read, I have (lightly) edited the talk below. All credit to Kai-Fu Lee, all blame to me, etc.



Readers can jump to the next sections if their minds echo “C’mon, I know this!”. I will try to explain everything succinctly. Every link offers a different insight into the topic (except the usual wiki), so give them a try!

Introduction

Buzzwords



**Artificial Super Intelligence (ASI):** Nick Bostrom, one of AI’s leading figures, has defined superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” A machine capable of constantly learning and improving itself could be unstoppable. Artificial Super Intelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter, across the board. ASI is the reason the topic of AI is such a spicy meatball, and why the words “immortality” and “extinction” will both appear in these posts multiple times. Think about HAL 9000!


**Artificial General Intelligence (AGI):** Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board: a machine that can perform any intellectual task that a human being can. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.


**Artificial Intelligence (AI):** AI is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.


**Intelligent Augmentation (IA):** Computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate).


**Machine Learning (ML):** Machine learning is the science of getting computers to act without being explicitly programmed. For instance, instead of coding the rules and strategies of chess into a computer, the computer can watch a number of chess games and learn by example. Machine learning encompasses a wide variety of algorithms.
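To make “learn by example” concrete, here is a tiny sketch (my toy illustration, not from the talk), assuming scikit-learn is available. No rule is ever written down; the model infers one from four labeled examples:

```python
# A toy "learn by example" sketch: we never code the rule (logical AND);
# the model recovers it from four labeled examples. Data are illustrative.
from sklearn.tree import DecisionTreeClassifier

X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]  # feature vectors
y_train = [0, 0, 0, 1]                      # labels following a hidden rule

model = DecisionTreeClassifier()
model.fit(X_train, y_train)                 # learn the rule from examples

print(model.predict([[1, 1], [0, 1]]))      # -> [1 0]
```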


**Deep Learning (DL):** Deep learning refers to many-layered neural networks, one specific class of machine learning algorithms. Deep learning has achieved unprecedented state-of-the-art results, sometimes by an order of magnitude, in nearly all fields to which it has been applied so far, including image recognition, voice recognition, and language translation.
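And here is roughly what “many-layered” means, as a minimal untrained forward pass in NumPy (the layer sizes and random weights are illustrative assumptions, not a real trained model):

```python
# A minimal forward pass through a three-layer network in NumPy.
# Sizes and weights are illustrative; a real network would be trained.
import numpy as np

def relu(x):
    return np.maximum(0, x)           # the nonlinearity between layers

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))          # layer 1: 4 inputs  -> 8 units
W2 = rng.normal(size=(8, 8))          # layer 2: 8 units   -> 8 units
W3 = rng.normal(size=(8, 2))          # layer 3: 8 units   -> 2 outputs

def forward(x):
    return relu(relu(x @ W1) @ W2) @ W3

print(forward(rng.normal(size=(1, 4))))   # one forward pass on random input
```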


**Big Data:** Big data is a term that describes the large volume of data, both structured and unstructured, that inundates a business on a day-to-day basis. It was an empty marketing term that falsely convinced many people that the size of your data is what matters. It also cost companies huge sums of money on Hadoop clusters they didn’t actually need.

Only some are mentioned! Credits: Nvidia

History

Let me start with a story.

Michael Jordan, in his talk at SysML 18, tells the story of how the term “AI” was coined, and how it is a little different than often told. It goes like this: “It wasn’t Minsky, Papert, Newell all sitting at a conference. It was McCarthy who arrives at MIT. He says, ‘I’m going to work on intelligence in computing,’ and they say, ‘Well, isn’t that cybernetics? We already have Norbert Wiener who does that.’ He says, ‘No, no, it’s different.’” So how was it different? Well, he couldn’t really convince people it was based on logic rather than control theory, signal processing, or optimization. So he had to give it a new buzzword, and he invented “Artificial Intelligence.” AI is a general term that refers to hardware or software that exhibits behavior which appears intelligent. AI is designed around how people think; it’s an emulation of human intelligence.

The field of AI has gone through phases of rapid progress and hype in the past, quickly followed by a cooling in investment and interest, often referred to as “AI winters”.

Waves and Winters

First Wave (1956–1974)

The programs developed during this time were simply astonishing: computers were solving algebra word problems (Daniel Bobrow’s STUDENT), proving theorems in geometry (Herbert Gelernter’s Geometry Theorem Prover), doing symbolic integration (SAINT, written by Minsky’s student James Slagle), and conversing in English (Terry Winograd’s SHRDLU). The perceptron, a form of neural network introduced in 1958 by Frank Rosenblatt, led him to predict that the perceptron “may eventually be able to learn, make decisions, and translate languages” (spoiler alert: it did).
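For the curious, here is a minimal sketch of Rosenblatt’s perceptron learning rule in Python (the AND task, learning rate, and epoch count are my illustrative choices):

```python
# A minimal sketch of Rosenblatt's perceptron learning rule in plain NumPy.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # target: logical AND

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for _ in range(10):                        # a few passes over the data
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0  # threshold activation
        error = target - pred
        w += lr * error * xi               # Rosenblatt's update rule
        b += lr * error

print([(1 if xi @ w + b > 0 else 0) for xi in X])  # -> [0, 0, 0, 1]
```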

First Winter (1974–1980)

In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared. In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, “toys”.

Second Wave (1980–1987)

The belief at one point was that we could take human intelligence and implement it as rules, so that machines would act the way people do. We would tell them the steps we go through in our own thinking. For example: if I’m hungry, I go out and eat; if I have spent a lot of money this month, I go to a cheaper place; a cheaper place implies McDonald’s, and at McDonald’s I avoid fried foods, so I just get a hamburger. That “if-then-else” chain is how we think we reason, and that is how the first generation of so-called expert systems, or symbolic AI, proceeded (see the toy sketch after this paragraph). That was what got people excited, thinking we could simply write the rules down. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart.
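Here is the lunch decision above written as literal “if-then-else” code; a toy sketch (the rules are the talk’s example, the code is mine) that shows why this approach looks plausible at small scale:

```python
# A toy "expert system" rule chain, mirroring the lunch example above.
def choose_lunch(hungry: bool, overspent_this_month: bool) -> str:
    if not hungry:
        return "skip lunch"
    if overspent_this_month:
        # cheaper place implies McDonald's; there, avoid fried foods
        return "hamburger at McDonald's"
    return "nicer restaurant"

print(choose_lunch(hungry=True, overspent_this_month=True))
# -> hamburger at McDonald's
```

Every new situation needs another hand-written branch, which is exactly the scaling problem the next section describes.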

Second Winter (1987–1993)

The expert systems, or symbolic AI, with handwritten “if-then-else” rules were limiting, because when we tried to write the rules down there were just too many. Douglas Lenat at MCC hired hundreds of people in a project called Cyc to write down all the rules they could think of, believing that one day they would be done and the result would be a brain. But there was too much knowledge in the world, and the interactions were too complex. We really didn’t know how to build rule-based systems at that scale; the effort failed almost completely, resulting in only a handful of somewhat useful applications, and that led everybody to believe that AI was doomed and not worth pursuing. Expert systems could not scale, and in fact could never scale, and our brains probably didn’t work the way we thought they did. To simplify the articulation of our decision process we use “if-then-else” as a language people understand, but our brains are actually much more complex than that.

Third Wave (1993–present)

The field of AI, now more than half a century old, finally achieved some of its oldest goals. In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws. In February 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings.

Starting in the early 2010s, huge amounts of training data together with massive computational power (at some of the big players) prompted a re-evaluation of some particular 30-year-old neural network algorithms. To the surprise of many researchers, this combination, aided by new innovations, rapidly catapulted these ‘Deep Learning’ systems way past the performance of traditional approaches in several domains, particularly speech and image recognition, as well as most categorization tasks.

In DL/ML the idea is to provide the system with training data, enabling it to ‘program’ itself: no human programming required! In laboratories all around the world, little (narrow) AIs are springing to life. Some play chess better than any human ever has. Some are learning to drive a million cars a billion miles while saving more lives than most doctors or EMTs will over their entire careers. Some will make sure your dishes are dry and spot-free, or that your laundry is properly fluffed and without a wrinkle. Countless of these bits of intelligence are being built and programmed; they are only going to get smarter and more pervasive; they are going to be better than us, but they will never be just like us.

Deep learning is responsible for today’s explosion of AI. The field gave birth to many buzzwords (CNN, RNN, LSTM, GRU, GAN, ___net, deep___, ___GAN, etc.), spilled over into fields like RL and NLP, and produced very interesting achievements such as AlphaGo, AlphaZero, self-driving cars, and chatbots; covering them alone may require another post. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing, and beyond.

We don’t (and can’t) understand how machine learning instances operate in any symbolic (as opposed to reductive) sense. Equally, we don’t know what structures and processes in our brains enable us to process symbols in intelligent ways: to abstract, communicate and reason through symbols, whether they be words or mathematical variables, and to do so across domains and problems. Moreover, we have no convincing path for progress from the first type of system, machine learning, to the second, the human brain. It seems, in other words — notwithstanding genuine progress in machine learning — that it is another dead end with respect to intelligence: the third AI winter will soon be upon us. There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. There’s too much money behind machine learning for the third winter to occur in 2018, but it won’t be long before the limited nature of AI advances sinks in.

In short, this is how it happened. Credits: matter2media

What’s Next?

Our lives are bathed in data: from recommendations about whom to “follow” or “friend” to data-driven autonomous vehicles.

We are living in the age of big data, and with every link we click, every message we send, and every movement we make, we generate torrents of information.

In the past two years, the world has produced more than 90 percent of all the digital data that has ever been created. New technologies churn out an estimated 2.5 quintillion bytes per day. Data pours in from social media and cell phones, weather satellites and space telescopes, digital cameras and video feeds, medical records and library collections. Technologies monitor the number of steps we walk each day, the structural integrity of dams and bridges, and the barely perceptible tremors that indicate a person is developing Parkinson’s disease.

Data in the age of AI has been described in any number of ways: the new gold, the new oil, the new currency and even the new bacon. By now, everyone gets it: Data is worth a lot to businesses, from auditing to e-commerce. But it helps to understand what it can and cannot do, a distinction many in the business world still must come to grips with.

“All of machine learning is about error correction.”

~ Yann LeCun, Chief AI Scientist, Facebook
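As an entirely illustrative reading of that quote (my sketch, not LeCun’s), here is a one-parameter model trained by nothing but repeated error correction:

```python
# A minimal sketch of "learning as error correction": a one-parameter
# linear model fit by gradient descent. Data and learning rate are
# illustrative choices.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # generated by the hidden rule y = 2x

w = 0.0      # initial guess
lr = 0.01    # learning rate

for _ in range(200):
    for x, y in zip(xs, ys):
        error = w * x - y    # how wrong are we on this example?
        w -= lr * error * x  # correct the weight in proportion to the error

print(round(w, 3))  # -> approximately 2.0
```

Every weight update is a small correction of the error on one example; scale this up to millions of parameters and you have deep learning training.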

Today’s AI, which we call Weak AI, is really an optimizer: based on data in one domain, it learns to do one thing extremely well. It is very vertical and single-task; you cannot teach it many things at once, or common sense, or emotion, and it has no self-awareness, and therefore no desire, or even an understanding of how to love or dominate. It is great as a tool for adding and creating value, and it will also take over many of the mundane, repetitive tasks in human jobs.

If we look at the history of AI, a breakthrough on the scale of deep learning has happened just once in sixty years. We cannot predict that we are going to have another breakthrough next year, or the month after that. Exponential adoption of applications is now happening, which is great, but exponential invention is a ridiculous concept.

We are now seeing speech-to-speech translation as good as an amateur translator, though not yet at a professional level, as clearly explained by Douglas Hofstadter in this article in the Atlantic. Eventually, perhaps, we won’t have to learn foreign languages; we will have an earpiece that translates what other people say. That is a wonderful addition in convenience, productivity, value creation, and saved time, but at the same time we have to be cognizant that translators will be out of jobs.

Looking back at the Industrial Revolution, we see it as having done a lot of good and created a lot of jobs, but the process was painful and some of the tactics were questionable, and we are going to see all those issues come up again, and worse, in the AI revolution. In the Industrial Revolution, many people were in fact replaced and displaced; their jobs were gone and they were left destitute, even though overall employment and wealth grew, and that wealth was captured by a small number of people. Fortunately, the Industrial Revolution lasted a long time and was gradual, so governments could deal with one displaced group at a time. It also perpetuated a certain work ethic: the capitalists wanted the rest of the world to believe that if I work hard, even at a routine, repetitive job, I will be compensated, and I will have a certain degree of wealth that gives me dignity and self-actualization: he works hard, he has a house, he is a good citizen of society.

That is surely not how we want to be remembered as mankind, but it is how most people on earth understand their current existence, and that is extremely dangerous now, because AI is going to take most of these boring, routine, mundane, repetitive jobs, and people will lose them. The people losing their jobs built their sense of existence on that work ethic: working hard, getting the house, providing for the family.

Understanding that these AI tools do repetitive tasks tells us that doing repetitive tasks cannot be what makes us human, and that AI’s arrival will at least strip away what cannot be our reason for existence on this earth. A more plausible reason for our existence is that we create: we invent things, we celebrate creation, we are creative in the scientific process, in curing diseases, in writing books and telling stories. That creativity is what we should celebrate, and it is perhaps what makes us human.

We need AI. It is the ultimate accelerator of a human’s capacity to fulfill their own potential. Evolution is not finished. It is often said (though neuroscientists dispute the figure) that we utilize only about 10 percent of our total brain function; think about the additional brain-functioning potential we will have as AI continues to develop, improve, and advance.

As computer scientist Donald Knuth puts it, “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’”

To put things into perspective, AI can and will expand our neocortex and act as an extension to our 300 million brain modules. According to Ray Kurzweil, American author, computer scientist, inventor and futurist, “The future human will be a biological and non-biological hybrid.”

If you liked my article, please smash the 👏 below as many times as you liked it (spoiler alert: 50 is the limit, I tried!) so other people will see this here on Medium.

If you have any thoughts, comments, questions, feel free to comment below.

Further “Very very very Interesting” Reads

Geoffrey Hinton [https://torontolife.com/tech/ai-superstars-google-facebook-apple-studied-guy/]

Yann LeCun [https://www.forbes.com/sites/insights-intelai/2018/07/17/yann-lecun-an-ai-groundbreaker-takes-stock/]

Yoshua Bengio [https://www.cifar.ca/news/news/2018/08/01/q-a-with-yoshua-bengio]

Ian Goodfellow GANfather [https://www.technologyreview.com/s/610253/the-ganfather-the-man-whos-given-machines-the-gift-of-imagination/]

AI Conspiracy: The ‘Canadian Mafia’ [https://www.recode.net/2015/7/15/11614684/ai-conspiracy-the-scientists-behind-deep-learning]

Douglas Hofstadter [https://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/]

Marvin Minsky [https://www.space.com/32153-god-artificial-intelligence-and-the-passing-of-marvin-minsky.html]

Judea Pearl [https://www.theatlantic.com/technology/archive/2018/05/machine-learning-is-stuck-on-asking-why/560675/]

John McCarthy [http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html]

Prof. Nick Bostrom — Artificial Intelligence Will be The Greatest Revolution in History [https://www.youtube.com/watch?v=qWPU5eOJ7SQ]

François Chollet [https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec]

Andrej Karpathy [https://medium.com/@karpathy/software-2-0-a64152b37c35]

Walter Pitts [http://nautil.us/issue/21/information/the-man-who-tried-to-redeem-the-world-with-logic]

Machine Learning [https://techcrunch.com/2016/10/23/wtf-is-machine-learning/]

Neural Networks [https://physicsworld.com/a/neural-networks-explained/]

Intelligent Machines [https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515/]

Self-Conscious AI [https://www.wired.com/story/how-to-build-a-self-conscious-ai-machine/]

The Quartz guide to artificial intelligence: What is it, why is it important, and should we be afraid? [https://qz.com/1046350/the-quartz-guide-to-artificial-intelligence-what-is-it-why-is-it-important-and-should-we-be-afraid/]

The Great A.I. Awakening [https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html]

China’s AI Awakening [https://www.technologyreview.com/s/609038/chinas-ai-awakening]

AI Revolution [https://getpocket.com/explore/item/the-ai-revolution-the-road-to-superintelligence-823279599]

Artificial Intelligence — The Revolution Hasn’t Happened Yet [https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7]

AI’s Language Problem [https://www.technologyreview.com/s/602094/ais-language-problem/]

AI’s Next Great Challenge: Understanding the Nuances of Language [https://hbr.org/2018/07/ais-next-great-challenge-understanding-the-nuances-of-language]

Dark secret at the heart of AI [https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/]

How Frightened Should We Be of A.I.? [https://www.newyorker.com/magazine/2018/05/14/how-frightened-should-we-be-of-ai]

The Real Threat of Artificial Intelligence [https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html]

Artificial Intelligence’s ‘Black Box’ Is Nothing to Fear [https://www.nytimes.com/2018/01/25/opinion/artificial-intelligence-black-box.html]

Tipping point for Artificial Intelligence [https://www.datanami.com/2018/07/20/the-tipping-point-for-artificial-intelligence/]

AI Winter isn’t coming [https://www.technologyreview.com/s/603062/ai-winter-isnt-coming/]

AI winter is well on its way [https://blog.piekniewski.info/2018/05/28/ai-winter-is-well-on-its-way/]

AI is in bubble [https://www.theglobeandmail.com/business/commentary/article-artificial-intelligence-is-in-a-bubble-heres-why-we-should-build-it/]