If you think Facebook is scary, shield your eyes from AI. How are we gonna handle it?
“this is no cave”
Surprise! Facebook knows a lot about us. But as it turns out, we don’t know a lot about them. We agree to terms of service we don’t read and sign up for apps through Facebook without knowing exactly what’s happening. So Congress brought in Mark Zuckerberg to testify about Facebook’s use of data in the wake of the Cambridge Analytica scandal. And if we don’t know a lot about Facebook, Congress really doesn’t know a lot about Facebook. Zuckerberg went largely unchallenged due to the at-times hilarious tech illiteracy of the Senate (as an example, this guy chairs the Senate Republican high tech task force):
Facebook is a very powerful beast, but it’s not overly complicated if you have any sort of technical knowledge. Yet the people in charge of regulating this beast have absolutely no clue what kind of power it has or why it has it. That’s pretty scary.
he didn’t really ask this, but man…I wish he did
So now imagine something that knew everything Facebook knows, plus everything every other software company knows, combined with the power to learn exponentially in pursuit of goals that aren’t well designed. That’s very scary. It would be a monster, and it’s an actual possibility. It’s why you see headlines like Elon Musk warns A.I. could create an ‘immortal dictator from which we can never escape.’
Not so hilarious.
As technology titan Marc Andreessen famously proclaimed, ‘Software is eating the world.’ Call me old fashioned, but when something is eating me I typically like to know what it is. There is very little doubt he’s right…he wrote this in 2011 and software has not since left the buffet.
Uber, Airbnb, and Amazon are all software companies taking down traditionally brick-and-mortar industries. Since his proclamation in 2011, Exxon, PetroChina, Shell, and ICBC have been replaced by Alphabet, Microsoft, Amazon, and Facebook among the top five publicly traded companies (Tencent has replaced Facebook since their PR nightmare, but Tencent is also primarily a software company).
But there is still a binary (😉) gap between those who consider themselves techies and those who don’t, much like shortly after the invention of the printing press there was a gap between a literate elite and everyone else. But this revolution is far more important, transformative, and (potentially) dangerous than books spreading across Europe in the 1500s. It has the power to actually eat us. In the book Our Final Invention: Artificial Intelligence and the End of the Human Era (which didn’t eat me after I read it), James Barrat states:
I’ve written this book to warn you that artificial intelligence could drive mankind into extinction, and to explain how that catastrophic outcome is not just possible, but likely if we do not begin preparing very carefully now.
Notice that word, now.
As Alexander Shulgin wrote — “It wasn’t raining when Noah built the Ark.”
Yet, as Facebook’s data disaster has taught us all, very few people have even the slightest clue how technology works.
Everyone uses technology, but very few actually understand it. Everyone stores passwords and transacts business over the internet; everyone hears about net neutrality and self-driving cars. Every company has a website, a file storage system, and IT infrastructure.
Does this mean everyone should learn to code? I don’t think so. But should everyone have the ‘structural knowledge’ of code? I do think so.
As Pedro Domingos states in The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World:
Only engineers and mechanics need to know how a car’s engine works, but every driver needs to know that turning the steering wheel changes the car’s direction and stepping on the brake brings it to a stop. Few people today know what the corresponding elements of a learner even are, let alone how to use them. The psychologist Don Norman coined the term conceptual model to refer to the rough knowledge of a technology we need to have in order to use it effectively.
Structural knowledge is essentially this: you don’t need to be able to kill a lion, you just need to know that a lion can kill you. So you don’t go in the pit.
what we don’t want to be saying after unleashing AI
Right now, we’re mindlessly jumping into the pit one by one, like lemmings off the side of a cliff. We’re too lazy to read legalese and give up more privacy than we realize. And now we’re surprised the lion is eating us.
In technical terms, I believe this means everyone should know what a hash is, even if they don’t care to ever write or use one in a program. Everyone should know what a cookie is, why it’s there, etc.
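That structural knowledge doesn’t require much. Here’s a minimal sketch (using Python’s standard hashlib; the password string is purely illustrative) of the single most useful thing to know about a hash: it’s a one-way fingerprint, which is why sites should store a hash of your password rather than the password itself.

```python
import hashlib

# A hash turns any input into a fixed-length fingerprint.
# The same input always produces the same digest; even a tiny
# change in the input produces a completely different one.
print(hashlib.sha256(b"hello").hexdigest())
print(hashlib.sha256(b"hello!").hexdigest())

# A site can store the hash of your password instead of the
# password itself: it can still verify your login by hashing
# what you type, but the stored value can't be simply reversed
# to recover the original password.
stored = hashlib.sha256(b"my-password").hexdigest()   # hypothetical password
attempt = hashlib.sha256(b"my-password").hexdigest()
print(stored == attempt)  # True
```

You don’t need to write code like this yourself — you just need to know that when a breached company says “only hashed passwords were leaked,” this is what they mean, and why it matters.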
When a new technology is as pervasive and game changing as machine learning, it’s not wise to let it remain a black box. Opacity opens the door to error and misuse. Amazon’s algorithm, more than any one person, determines what books are read in the world today. The NSA’s algorithms decide whether you’re a potential terrorist. Climate models decide what’s a safe level of carbon dioxide in the atmosphere. Stock-picking models drive the economy more than most of us do. You can’t control what you don’t understand, and that’s why you need to understand machine learning — as a citizen, a professional, and a human being engaged in the pursuit of happiness.
Most people don’t know this difference…and that’s not a good thing:
Yet we are STILL not teaching most kids digital literacy skills in school. Pew Research Center reports only 17% of people are digitally ready.
The monster eating us is growing teeth, more and more every day. This is the compounding power of technology, much like in finance. If you have one penny today and double it every day, you will have over a million dollars 28 days later. Coincidence?:
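The penny math checks out, and it takes three lines to verify:

```python
# Start with one penny on day 1 and double it every day.
cents = 1
for day in range(2, 29):  # days 2 through 28
    cents *= 2

# 1 cent doubled 27 times is 2**27 cents.
print(f"Day 28: ${cents / 100:,.2f}")  # Day 28: $1,342,177.28
```

That’s the shape of exponential growth: invisible for the first three weeks, then suddenly overwhelming — which is exactly the worry about AI capability curves.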
This is why you see technology leaders like Bill Gates and Elon Musk leery about where AI is headed.
We currently have narrow intelligence — the ability of computers to beat us at chess. The abilities of AI to beat humans at tasks of narrow intelligence are increasing quickly — AI has learned facial recognition, voice recognition and beaten the greatest humans at the game Go far quicker than most experts predicted.
Next we have general artificial intelligence — beating us at all sorts of things it’s been trained to do. This is similar to the difference between a calculator and a computer: the first is designed to do one thing very well, the second to do many things very well.
Once a general intelligence is achieved, it will likely very quickly lead to an Artificial Superintelligence, which would render humans so far out of the intelligence league that we might as well be ants trying to compete in the Boston Marathon.
Yuval Noah Harari called the Agricultural Revolution humanity’s greatest fraud because it enslaved us to the land, scaling our populations quickly enough to ensure there was no return to a hunter-gatherer lifestyle. This has long been a theme of technology, and it can now be seen with the internet, smartphones, and email. We are such slaves to them that we cannot go back. Thus, the digital revolution may turn out to be an even greater fraud. But there’s no turning back. #deletefacebook may be cute, but I don’t see a large wave of people deleting their accounts. It’s ingrained in too many of our lives. We’ve allowed technology to overtake our existence, and the digitization of everything could be our downfall. AI didn’t create Facebook, but Facebook’s role in our potential demise (all the digital data a superintelligent AI would have about our intentions) could be very real.
So if we can’t go back, the main question is…what motivations and goals do we give Artificial Intelligence?
As Tim Urban states in his FANTASTIC two-part series on AI on his site Wait But Why, “the median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 2040” — during most of our lifetimes.
And how we deal with this issue will be one of the most defining decisions in the history of the world. Specifically: what motivation/goal do we give AI? As Oxford’s Nick Bostrom states in Superintelligence:
This is quite possibly the most important and most daunting challenge humanity has ever faced. And — whether we succeed or fail — it is probably the last challenge we will ever face.
The most important and most daunting challenge humanity has ever faced?!?
Yet our government can barely even get a fucking budget together. We can’t talk to each other or work together constructively. We are completely unaware of our own hypocrisy and biases. And the technical literacy of the people in charge of making these decisions is likely going to be horrific. Obama said:
“My Successor Will Govern a Country Being Transformed by AI”
We used to be able to keep up with technological innovation — things moved slowly enough that we could understand the changes. Radio and television transformed the world, but at least legislators at the time could more or less grasp what that meant. You could explain color television to Eisenhower and he could understand it. A textile worker could be retrained as a weave machine operator, but a truck driver isn’t going to have time to learn to build machine learning apps.
As AI takes jobs away, angry crowds are going to be massive. They are going to be loud. And they are going to be extremely uneducated on wtf is going on.
Many legal decisions will be made by people who know very little about technology — blockchain, net neutrality, etc. — and the ethics, values, and goals associated with each. As Yuval Noah Harari states in ‘Homo Deus: A Brief History of Tomorrow’:
Precisely because technology is now moving so fast, and parliaments and dictators alike are overwhelmed by data they cannot process quickly enough, present-day politicians are thinking on a far smaller scale than their predecessors a century ago. Consequently, in the early twenty-first century politics is bereft of grand visions. Government has become mere administration. It manages the country, but it no longer leads it. Government ensures that teachers are paid on time and sewage systems don’t overflow, but it has no idea where the country will be in twenty years.
Perhaps we can thank FB for unintentionally giving us the kick in the ass we needed to realize we probably all need to learn more about technology.
Armies we have, armies we need.
The dinosaurs were annihilated by an asteroid they didn’t see coming. I’m afraid the tech-illiterate dinosaurs of our government are failing to see what may annihilate us as they bicker about bathroom bills. This is why Tim Urban compares AI to Game of Thrones — we are squabbling amongst ourselves while a great unknown marches to perhaps destroy us.
has anyone seen my dragon glass laying around?
Patience is a virtue. As Max Tegmark quotes Isaac Asimov in Life 3.0:
the saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom — Isaac Asimov
We are designing a world that is unimaginable to us, with imperfect minds, and incentives that make us unlikely to truly examine the consequences of what we build.
We need to step back, learn what we’re dealing with, and not just charge ahead like a junkie who just stumbled upon a warehouse of LSD and old Pink Floyd vinyl. What is important to us? Happiness? Immortality? Peace? Equality? To truly understand reality? If it’s happiness, does that just mean we put our brains in vats of goo and hook our pleasure receptors up to dopamine IVs? If it’s eradicating cancer, does AI simply realize the easiest way to do that is to destroy all biological life? Each path has its own consequences.
Perhaps it’s time to look back, refactor our goals for humanity, and examine how we’ve done so far. Humans, individually, are pretty terrible at predicting what’s going to make us happy. Are we any better as a society? Has technology made us happier? It’s certainly saved lives, but if we don’t just want our brains to be put in mason jars, then isn’t pain an inevitable part of the human experience? Has technology decreased working hours? Doesn’t feel like it. Has it made us more connected to those around us, or do we just churn on, checking email and staring at glowing rectangles? Have cars and planes brought us closer together, or do we just now live further apart as a result of this new convenience?
I believe, like most others, that we can’t go back. It’s too late for that. And if that’s the case, we have to merge with technology or risk being irrelevant. That’s why Musk created Neuralink and many others are working to ensure a safe path forward. We should listen. We need to proceed very slowly. I will end with the same Bostrom quote from earlier:
And — whether we succeed or fail — it is probably the last challenge we will ever face.
We are like Thomas Jefferson writing the Declaration of Independence, only this time we’re writing the rules for the fate of our entire species. And we won’t get any amendments.