The way AI is talked about today mostly refers to what is actually machine learning. Cassie Kozyrkov, Google's Chief Decision Scientist, calls machine learning a fancy labeling machine, and that's a very grounding and comforting way to think about it. We teach the machine to label things like apples and pears by showing it many examples of fruit to train on. That's very graspable. And if you only show it green apples and red pears, that's all it will be able to label, so it's important to have good examples.
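As a toy illustration of the labeling machine (my own sketch, not Kozyrkov's code), here is a minimal nearest-centroid classifier over two made-up fruit features, hue (0 = green, 1 = red) and roundness (0 = elongated, 1 = round). Trained only on green apples and red pears, it confidently mislabels a red apple:

```python
def centroid(points):
    """Average of a list of (hue, roundness) feature pairs."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(examples):
    """examples: list of ((hue, roundness), label). Returns label -> centroid."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Return the label whose centroid is closest to the given features."""
    def dist(c):
        return (c[0] - features[0]) ** 2 + (c[1] - features[1]) ** 2
    return min(model, key=lambda label: dist(model[label]))

# Biased training set: only green, round apples and red, elongated pears.
model = train([
    ((0.1, 0.9), "apple"), ((0.2, 0.8), "apple"),
    ((0.9, 0.3), "pear"),  ((0.8, 0.2), "pear"),
])

print(predict(model, (0.9, 0.9)))  # a red, round apple -> labeled "pear"
```

The machine isn't wrong so much as its examples were: it learned that red means pear, because nobody ever showed it a red apple.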
Spreadsheets replaced endless rows of people at typewriters doing manual data entry (and putting physical paper into binders and cabinets). Machine learning will replace labeling work in the same way: no more need to manually and repeatedly identify, categorize and sort things. We can expect uptake of this technology to happen rapidly, because most humans don't enjoy repetitive work. A Japanese farmer took 7,000 pictures of cucumbers that his mother had manually sorted, and used them to build and train a machine that sorts cucumbers automatically with this technology.
Because we rarely understand the underlying mechanics of how the fancy labeling machine works, we have to be good at testing that its outcomes are what we expect. Cassie Kozyrkov recently spoke about this at SthlmTechFest and offered another great analogy for how to think about it.
You have a friend who owns an island with numerous drunk inhabitants, all with connected laptops. They have all the free time in the world and are very willing to do things, but they don't respond to detailed written instructions, so you have to teach them by example. This means it's a pain to spend time teaching them a one-off task, so you focus instead on the repetitive drudgery you'd like to cut out of your life.
But wait! Before you offload all your work to this island, consider: How drunk are these people? Can they even do your task? Don't just blindly trust them with important work. Force them to earn your trust by checking that they actually perform your task well enough. According to Cassie, you're not ready to dive into a serious machine learning project until you have a document that outlines what performing the task well enough actually means.
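Checking that the drunk islanders earn your trust can be as simple as scoring them on examples you already know the answers to. A minimal sketch (my own, assuming a labeler is just a function from features to a label):

```python
def accuracy(labeler, test_set):
    """Fraction of held-out examples the labeler gets right."""
    correct = sum(1 for features, expected in test_set
                  if labeler(features) == expected)
    return correct / len(test_set)

# A hypothetical lazy islander who calls everything an "apple".
always_apple = lambda features: "apple"

# Held-out examples with known answers: (hue, roundness) -> true label.
held_out = [((0.1, 0.9), "apple"), ((0.9, 0.3), "pear"), ((0.8, 0.2), "pear")]

print(accuracy(always_apple, held_out))  # 0.333... -> not good enough to trust
```

The point is that "well enough" becomes a number you agreed on in advance, not a feeling you have after the fact.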
Matt Jones is working to bring human-centered, on-device AI to Google Hardware and recently spoke at Frontiers in Stockholm about AI as tiny minds in octopuses, hawks and spiders. We are obsessed with the idea that AI will take some human form of interaction, like the voice in the movie Her, but we are already seeing big change because the technology is small, cheap and runs on decentralized processors. So Matt suggests we think of AI as companion species with an intelligence different from ours. This distributed teamwork suggests other sorts of relationships. Sentient AI spiders could offload their cognition to a web that would be intuitively graspable for us in our surroundings; the web and its color could indicate things like concentrations of air pollution in urban environments.
Ana Arriola is a design director at Microsoft AI research and spoke at the me-Convention about bringing crafted humanity into tech. She, too, compared early artificial general intelligence (AGI) to a companion, such as an innate little animal, and showed a picture of a robot dog.
Elon Musk says that the percentage of intelligence that is non-human is increasing, and that eventually we will represent a very small percentage of intelligence. In the movie More Human Than Human, Swedish philosopher Nick Bostrom stretches his arms out as far as he can and says that we perceive the intelligence gap between the village idiot and Einstein as the widest we can imagine: it's the entire spectrum in which we judge intelligence. He then holds his fingers close together and says, it's really this, a very, very small distance on the axis of intelligence. That is why we won't see AI coming, and will be really surprised when artificial general intelligence just swooshes by.
Max Tegmark is a co-founder of the Future of Life Institute and gave a great summer talk on Swedish radio where he compared the increasing capabilities of AI to a rising sea. In this abstract landscape, different tasks have different elevations. The sea level represents the current level of AI capability, so you might want to avoid careers at the waterfront that will soon be claimed by the rising water. How high will the water rise? When will it cover every mountaintop of human knowledge, so that we have an artificial general intelligence? Most AI researchers believe this will happen within decades, meaning during the lifetime of most people reading this text. How do I prepare my kids to coexist with a superintelligence?
The only way I’m ever gonna trust AI is if I can understand it on some level
When I think of AI, I think about the machine looking back at you, seeing you much like a distorted mirror does. AI-first companies like Facebook and Google look back at you with years of self-images, questions and thousands of data points in the form of likes and other interactions. It's a strange mirror with a kind of memory that gives you an X-ray of your self-image. When the AI in this mirror makes me feel seen, it's doing me a service in the maintenance of being human, which is to tell my story. The more limbic resonance in us, the more engagement. We really like being mirrored! There is a loopiness to this, because it's self-reinforcing and therefore gives rise to phenomena such as filter bubbles and saturated biases in our egos.
In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference.
Douglas Hofstadter, I Am a Strange Loop, p. 363
I Am a Strange Loop is a 2007 book by Douglas Hofstadter that examines the concept of a strange loop in depth to explain the sense of "I". It basically posits analogy as the core of cognition and understanding: any sufficiently complex symbolic system, such as number theory, can give rise to a self-mirroring, self-referencing "strange loop" effect.
The key thought experiment in the mirror analogy is to think about what distortions the AI mirror possesses, so that we can understand the strange loops it will subsequently cause in our self-image.
By extension, this is the idea that "I" is distributed over numerous systems, rather than being limited to precisely one brain. If these other systems are distorted by limited data, say, from only men in Western religions, we will see a great bias in the system and, as a consequence, in our "I". The normative psychological effect of high- and low-involvement mediums has been well documented since Marshall McLuhan stated that the medium is the message in the 1960s. Cambridge Analytica illegally used big data from 87 million Facebook users to create relevant confirmation-bias content (strange loops), and had a big influence on Donald Trump's 2016 presidential campaign and on Leave.EU's success in the UK Brexit vote of 2016.
To me, this mirror idea also explains the obsession with making robots in our own image. To meet future care demands for elderly people who are lonely or suffering from dementia, the carebot Alice has been developed. In the documentary Alice Cares (Ik ben Alice), we see lonely elderly people in the Netherlands quickly become attached to the robot and form very human interactions, even though the robot's capacity for dialogue is limited and mostly works by using the old mirroring trick. The realness of the attachment becomes apparent when the old ladies grow sad and depressed once the robots are removed from their homes. AI is here to stay, but it's up to us to craft the best and most human way for it to play.