This Tuesday we all saw evidence that we are, indeed, living in a simulation. Like The Dress debacle of 2015, when we saw a blue-and-black dress right before our eyes while our colleague, sweetheart or friend swore up and down it was white and gold, Yanny/Laurel made us doubt the fundamental sanity of ourselves and others. If you haven’t heard it, the internet discovered, and subsequently lost its mind over, an audio clip that some people hear as Yanny and some people hear as Laurel. Some folks even hear both names at the same time.
We live inside our mental model of the world. We live in it so fully that it’s hard to appreciate that there is a world outside of our simulation. The Yanny/Laurel clip breaks your brain because your brain wants to tell you it’s got reality sorted out. It says you hear all there is to hear and you see all there is to see. Yanny/Laurel destroys that illusion.
“Believe nothing you hear and only half of what you see.”
– Mark Twain
Our brains do the best they can at making meaning from sounds. The Yanny/Laurel recording works as it does because it’s low quality. What’s shocking is the degree to which our brains fill in the gaps of the sensory data. We don’t notice there’s a gap between the raw sensory information and what our brain rounded up the data to mean. Anil Seth illustrates how our brain processes poor-quality audio.
If you skipped the video, you really need to go back and play it. It really will enhance this. When you listen to the audio the first time you can’t understand what he’s saying. Then you hear the high quality audio and understand his words. Now when you hear the poor-quality audio again, your brain cooks the books. You hear more than is actually there. Your brain fills in the gaps now that you have a more accurate model of the world.
The same thing happens with our vision. The data comes in from the retina and travels into the thalamus. Here’s the wild discovery from recent neuroscience: the thalamus multiplies the raw data by 5x. FIVE TIMES more visual information comes out of the thalamus than went into it! Where did all that extra information come from? Something scientists call “priors.” Everything you’ve ever seen prior to this moment is used in your ability to see.
Infants can’t see when they’re first born because they don’t have enough prior experiences of seeing to be able to make sense of the visual data. Infants have to learn how to see. Which means… you too had to learn how to see. You had to look at the world and gather many experiences of environments and emotional contexts.
Most of what you are seeing in this moment isn’t here. Your visual systems are constructing the scene based on what you’ve seen both immediately before and in similar contexts. Your visual cortex is cooking the books and filling in a lot of what you are perceiving.
When golfers are confident the holes literally look larger. When people are tired, they see the hill up ahead as steeper than when they’re well rested. White men perceive black men as taller and more muscular (and therefore more threatening) than they actually are.
You, me, an Olympic athlete, your sweetheart, your coworker and your Lyft driver have agreed on some things about reality, like puppies being warm and fuzzy and the sky being blue, but each of our simulations of reality is different. Each of our past experiences and emotional associations differ, and that radically shapes what we see and hear.
You rarely get glimpses of objects or audio clips with such obvious disagreement about their basic properties that we’re forced to notice that one person’s reality isn’t like another’s. If we can’t agree that the dress is blue or the audio clip is saying Yanny, how can we know that anything we see, hear, touch, smell or taste is the same? Let’s leave that one for the poets and philosophers while we venture deeper… ‘cause it gets worse.
“The moment you perceive as NOW has already passed.”
–Dr. David Eagleman
It takes time for the visual data gathered by the rods and cones in the retina to travel down the optic nerve, then be processed by the thalamus and then passed on to the visual cortex to detect the edges, determine the spatial depth and contours, detect where the objects are, assign velocity to objects, determine which objects are people – blah blah blah. It’s not important to understand this whole thing. I’m trying to impress upon you that a lot of incredibly slow, complex work happens to bring you a meaningful visual representation of the world that allows you to walk down a staircase, wash the dishes and untangle your earbuds.
Meanwhile, the nerves in your skin, all the way down to the soles of your feet, are sending info about temperature, pressure and pain up the leg, through the spine, and into the brain to be processed. The eardrums are feeding vibrations into the acoustic processing areas of the brain. All of that data from the five senses arrives at different times and must be coordinated into a coherent, actionable understanding of this moment of NOW.
If this coordination didn’t happen, things that actually occurred at the same time wouldn’t appear to be happening at the same time. Because vision signals arrive at a different time than audio signals, it would seem that the sound of a snap happened before you saw the fingers move. Cray cray, right?!!
The body must take action in the NOW or else we wouldn’t be able to do things that need split-second precision, like walking down a staircase. You wouldn’t be able to walk if you had a half second of delay between the feeling of the floor under your feet and when you give the commands for muscles to move and keep you upright. By the time you’re consciously aware of the sense data coming from your foot, it’s already too late to adjust.
We rely on our mental model of what every prior instance of walking down a staircase has been like. As long as the current staircase is similar enough to previous staircases, you have no problem walking down one. If there’s variation from your simulation of reality, like a loose stair tread, you are likely to slip and fall. That’s an error in the model. When the simulation doesn’t match the environment, we pay close attention to the error so that we can update our model of the world. Next time you walk down that staircase with the loose tread you’ll walk carefully around it so as not to imperil the body to breakage of flesh and bone. In this way, objective reality constantly informs and updates our simulation. I’m not saying there is no objective reality, I’m saying that we never have a pure experience of it — not really.
If you’re a golfer, you can hack your perception to increase your confidence so that the holes look physically larger. All pro athletes know how much of their performance is a mental game. White guys can hack their perceptions to reduce the fear that causes them to see black men as more threatening than they actually are.
These processes are happening automatically, but that doesn’t mean they’re unchangeable. Our brains are constantly updating their simulation of the world. If you’ve been considering any of my arguments, you are updating your simulation right now. It’s turtles all the way down.
The question is — which reality do you want to live in, and how do you go about shifting your simulation to get there? I’m creating a card deck and accompanying guidebook to explain how to do practical exercises that do just that with the neuroscience of why they work. If you’re interested in being an alpha tester of the exercises, pop ‘yer email in this here box. If this article broke your brain in a good way, please clap and share to help others find this.