
What educators must learn from IBM’s ‘betrayal of science’

by Junaid Mubeen, July 23rd, 2017


The price of performance over cognition

There are two versions of mathematics. The first is the mathematics of schooling, a collection of closed problems and hard truths to be learned for exams. We’ve all experienced it; few of us have enjoyed or excelled at it. The second version is a more open, beautiful and empowering representation of the subject, often the focus of my writing.

Why the disconnect? What leads us to privilege narrow outcomes at the expense of deep and meaningful learning experiences?

Garry Kasparov has a few thoughts.

In his latest book Deep Thinking, Kasparov recounts the story of the so-called ultimate battle of man vs machine. He was the man to IBM’s Deep Blue machine, contesting a 6-game chess match with the ultimate prize of intelligence bragging rights. We all know how that went: Deep Blue prevailed 3 ½–2 ½. The watershed moment had arrived for Artificial Intelligence.

Except, that’s not quite how Kasparov sees it. While Chess was long considered the holy grail for AI, the nature of Deep Blue’s programming was hardly the quantum leap forward many imagined it would be.

Deep Blue was built on brute force computation. By 1997, the combination of computing power and IBM’s search algorithms was more than sufficient to best the world’s dominant chess player. Deep Blue’s victory may have revealed less about the nature of AI and more about the relatively closed nature of chess.

Deep Blue adopted a playing style that scarcely resembled the strategic gameplay of Kasparov. Rather than reasoning through end game strategies, for example, Deep Blue could just look up the optimal next move from a table. IBM had no concern for why their machine played a particular move; the result was all that mattered.
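To make the contrast concrete, here is a minimal sketch of brute-force game-tree search in Python. It is not Deep Blue’s code (which also relied on specialised hardware, hand-tuned evaluation and precomputed opening and endgame tables), just the bare idea: explore every legal move to a fixed depth and pick whichever scores best. The `legal_moves`, `apply_move` and `evaluate` functions are hypothetical placeholders you would supply for a particular game.

```python
# A minimal sketch of brute-force game-tree search (illustrative only,
# not IBM's actual implementation). The machine "plays" by exhaustively
# scoring every continuation -- no strategy or understanding involved.

def minimax(state, depth, maximising, legal_moves, apply_move, evaluate):
    """Return the best achievable score from `state`, searching `depth` plies."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # a hand-tuned scoring function, not "insight"
    scores = (
        minimax(apply_move(state, m), depth - 1, not maximising,
                legal_moves, apply_move, evaluate)
        for m in moves
    )
    return max(scores) if maximising else min(scores)

def best_move(state, depth, legal_moves, apply_move, evaluate):
    """Pick the move whose subtree scores highest: pure search, result over method."""
    return max(
        legal_moves(state),
        key=lambda m: minimax(apply_move(state, m), depth - 1, False,
                              legal_moves, apply_move, evaluate),
    )
```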

The match was marked as a triumph for IBM (both its reputation and stock prices rocketed), but for Kasparov the brute-force computational approach was a betrayal of science. IBM had shunned the ambition of developing human-like chess machines. Instead they assumed a win-at-all-costs mindset that put performance over method. It was a victory for engineering and a missed opportunity for science.

Fast-forward twenty years, and the machines have risen further. Google DeepMind’s AlphaGo sent shockwaves through the scientific community by defeating world Go champion Lee Sedol. The game of Go is characterised by its openness; it cannot be tamed by the brute-force search algorithms of Deep Blue. Instead, AlphaGo called on advanced Machine Learning techniques such as Deep Learning and Neural Networks. The names alone presume that these machines think in more human-like ways. Such presumptions are far from justified. In a brilliant critique of Deep Learning, Francois Chollet writes:

The only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data. Doing this well is a game-changer for essentially every industry, but it is still a very long way from human-level AI.

So what would ‘human-level AI’ look like? He goes on:

The ability to handle hypotheticals, to expand our mental model space far beyond what we can experience directly, in a word, to perform abstraction and reasoning, is arguably the defining characteristic of human cognition.

This stands in sharp contrast with what deep nets do…the mapping from inputs to outputs performed by deep nets quickly stops making sense if new inputs differ even slightly from what they saw at training time.
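Chollet’s point can be made concrete with a toy regression (my own illustrative sketch, not an example from Chollet or this article): a small network trained to map x to sin(x) on a narrow interval does well on inputs like those it has seen, but its predictions outside that interval typically bear no relation to the underlying function.

```python
# A toy illustration of a learned input-to-output mapping breaking down
# outside its training distribution. Assumes NumPy and scikit-learn.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(500, 1))   # inputs seen at training time
y_train = np.sin(X_train).ravel()             # the "true" relationship

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

X_inside = np.array([[1.5]])                  # similar to the training data
X_outside = np.array([[8.0]])                 # outside the training range
print(net.predict(X_inside), np.sin(1.5))     # close to the truth
print(net.predict(X_outside), np.sin(8.0))    # typically far from the truth
```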

There is a tendency to confuse performance with methods. Just because a machine can win at Go or chess, it does not mean it plays the game as humans do. We have too much reverence for black-box algorithms.

The parallels with education are striking. Those fundamental skills of abstraction and reasoning are often marginalised in the curriculum, whereas the brute-force memorisation of facts and procedures reigns supreme. Students are trained to perform mathematical operations, to swallow the black-box and to act as paltry oracles who can churn out answers without a second thought. This is not the same as doing maths, because to do maths is to question, to explore, to reason. Maths is the most open of domains and can never be reduced to algorithmic form.

The quest for Chess machines was well intended, but failed in ambition due to the pressures of performance. So it goes with much of school mathematics, which is underpinned by performance-based assessment. The goals of assessment — to measure students’ thinking and provide feedback — are laudable. The commoditisation of high-stakes testing has put paid to those goals, reducing most of education to a performance act. Just as IBM betrayed science for the sake of winning, education has betrayed learning for the sake of measurement.

I was once called an ‘exam machine’ by my undergraduate tutor. He meant it as a compliment and I certainly took it as one. But to pass exams does not a mathematician make. Exams are bounded by time, changing the fundamental nature of problem solving. Just as Bullet Chess deprives humans of their intuitions, exams are not an appropriate format for capturing our inner mathematicians.

Educators must rise above the machine-like tendency to focus exclusively on outcomes. At a time when IBM’s latest showpiece Watson and other digital innovations are sinking their claws into Education, we must demand emphasis on deeper thinking.

Our track record with artificial intelligence exposes what we value as humans: performance over cognition. The same is true of how we have approached education.

If education is to develop students into productive and engaged citizens, then educators must flip the narrative. Method is everything.

I am a research mathematician turned educator working at the nexus of mathematics, education and innovation.

Come say hello on Twitter or LinkedIn.

If you liked this article you might want to check out the following pieces:


Maths students are trapped in Searle’s Chinese room: As the quest for strong AI unfolds, are we losing our grip on human intelligence? (hackernoon.com)


Thinking in the age of cyborgs: An educator’s warning to Elon Musk (hackernoon.com)


When maths screws you over: Reasoning is the language of mathematics (mystudentvoices.com)