
The Body Is an Algorithm: 3 Lessons For Building Robust AI

by Cris Beasley, June 3rd, 2018



You’ve lived inside an algorithm since birth — the human body.

Bodies handle high-dimensional data simulations in every moment of their existence. [You and I are living in a simulation of reality, the one created by our brain.](https://hackernoon.com/evidence-were-living-in-a-simulation-yanny-laurel-audio-illusion-79ebf0461b9c) The brain takes inputs from our five senses and [predicts what the world will be like](http://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/), then compares that prediction to what actually happened. It takes note of the errors and improves the algorithm continuously. This is a fantastic template for how to build robust AI systems.
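That predict, compare, and update cycle can be sketched as a tiny online learner. The one-weight linear model and learning rate below are my own illustrative assumptions, not anything from neuroscience; the point is only the shape of the loop:

```python
# A minimal sketch of the brain's predict -> compare -> update loop,
# using a single-weight model as a crude stand-in for a world model.

def run_perception_loop(observations, learning_rate=0.1):
    """Continuously predict the next observation, measure the error,
    and nudge the model to shrink that error."""
    weight = 0.0          # the "model of the world", deliberately simple
    errors = []
    prev = 0.0
    for actual in observations:
        predicted = weight * prev               # predict what the world will be like
        error = actual - predicted              # compare prediction to reality
        weight += learning_rate * error * prev  # note the error, improve the algorithm
        errors.append(abs(error))
        prev = actual
    return weight, errors

# On a perfectly repetitive "world", prediction error shrinks over time.
weight, errors = run_perception_loop([1.0] * 50)
```

The loop never stops: every observation is another chance to be wrong, and every error is fuel for the next update.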

1. Don’t ignore pain.

Brains do their best to prevent the body from being harmed. If your brain had an error in its prediction algorithm, that error might eventually result in damage to your body. For example, if you incorrectly modeled the width of a stair, you might fall down the staircase and bang up your knee.

Pain exists to signal the brain that an update needs to be made to the algorithm. The brain holds onto the perceptual data of what happened immediately before and during the event in higher fidelity so that it can be used to update the model of the world. This is why time seems to slow down during a traumatic event such as a car wreck. Time doesn't actually slow down; we simply remember more of the data.
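Keeping painful moments in higher fidelity resembles what reinforcement-learning folks call prioritized experience replay: surprising, high-error events are retained and revisited more than routine ones. A toy sketch, where the capacity and the (error, description) memory format are assumptions of mine:

```python
import heapq

def retain_high_error_memories(events, capacity=5):
    """Keep only the most surprising events, the ones whose prediction
    error was largest -- a crude analogue of pain-tagged memories.

    `events` is a list of (prediction_error, description) pairs.
    """
    # heapq.nlargest keeps the events with the biggest errors, i.e. the
    # moments most worth replaying when updating the model of the world.
    return heapq.nlargest(capacity, events, key=lambda e: e[0])

memories = retain_high_error_memories([
    (0.01, "walked downstairs, nothing happened"),
    (0.02, "poured coffee"),
    (5.00, "misjudged stair width, fell"),
    (0.03, "tied shoes"),
], capacity=2)
# The fall dominates what gets remembered; the routine fades.
```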

In the human body, we have pain signals to tell us where the errors in the algorithm are. In business, we have customer support departments. If you failed to catch an error, it will show up in support requests from your clients. They will tell you when an algorithm is hurting them.

If the execs in your company ignore those signals, injuries are bound to happen. We see the damage FaceGoog continues to inflict on the world as a result of executives living in a bubble with their fingers in their ears, singing la-la-la-la-la.

Dreamlands imagined by Ricardo Cavolo

2. Get plenty of sleep.

Like us, our algorithms must also sleep. Sleep for an algorithm comes in the form of truing up the model. That is, after all, one of the important roles sleep serves in the human body.

If you walked down the stairs today and nothing unusual happened, you’ll go to sleep tonight and no new updates will be made to your predictive model of stairs, because there was no new information gathered that day. If you fell down, however, your brain will retain a rich memory of all the relevant sensory data – the placement of your arms and legs in space during the fall, everything you saw, heard and touched. All that new data will be compressed while you sleep into the new predictive model which will be used the next time you walk down a staircase so that you’re less likely to damage the body again.

Sleeping is the process of pruning off the memory of things that didn’t contain new information and updating the predictive models where there was new info. Healthy cyborgs will have a relationship with their algorithms where they audit and correct errors daily.
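A "sleep" pass over the day's memories can be sketched as exactly that: discard observations the model already predicted well, and fold only the surprising ones into the model. The surprise threshold, the dict-of-predictions model, and the update rule are all illustrative assumptions:

```python
def sleep_update(model, day_memories, surprise_threshold=0.5, lr=0.2):
    """Prune memories that carried no new information and consolidate
    the surprising ones into the predictive model.

    `model` maps situation -> predicted outcome;
    `day_memories` is a list of (situation, actual_outcome) pairs.
    """
    surprising = []
    for situation, actual in day_memories:
        predicted = model.get(situation, 0.0)
        if abs(actual - predicted) > surprise_threshold:
            surprising.append((situation, actual))  # keep in rich detail
        # everything else is pruned: no error, nothing to learn

    for situation, actual in surprising:            # consolidation step
        predicted = model.get(situation, 0.0)
        model[situation] = predicted + lr * (actual - predicted)
    return model, surprising

model = {"stairs": 0.0}   # 0.0 = "stairs are safe, no pain expected"
model, kept = sleep_update(model, [
    ("stairs", 1.0),      # fell today: large prediction error
    ("coffee", 0.0),      # nothing unusual, safe to forget
])
```

The uneventful coffee memory is dropped; the fall survives the night and shifts the model of stairs toward caution.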

Always Be Updating. Algorithms will never be 100% accurate. Even if they were, the world isn't static, so our algorithms can't be either. There are no healthy algorithms without healthy processes for updating them with the information our pain signals bring to our attention.

The world changes daily, and we as algorithm makers must keep up. Humans are still required for evaluating the outputs and reworking when the greater context shifts. The craft of an algorithm maker resembles a shepherd or gardener more than an analyst or programmer.
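That shepherding can start as simply as watching the algorithm's recent error rate drift away from its historical baseline and flagging it for human review. The window sizes and tolerance below are assumptions for illustration:

```python
from statistics import mean

def needs_human_review(error_history, baseline_window=30,
                       recent_window=7, tolerance=1.5):
    """Flag the algorithm for a shepherd's attention when its recent
    average error drifts well above its historical baseline."""
    if len(error_history) < baseline_window + recent_window:
        return False  # not enough history to judge drift yet
    baseline = mean(error_history[:baseline_window])
    recent = mean(error_history[-recent_window:])
    return recent > tolerance * baseline

# A stable month, then a week where the world shifted under the model:
drifted = needs_human_review([0.10] * 30 + [0.40] * 7)
```

The human, not the monitor, decides what the drift means and how to rework the model; the code only taps the shepherd on the shoulder.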

3. Trust your five senses.

Taste, touch, smell, look and hear what's happening in the real world around you. In other words: test things with real clients before you release them. You can avoid or reduce pain by trying out your tweaks to the algorithms ahead of time, by getting all… what's the word: proactive. Avoid injury altogether by making sure your algorithm works well before you release it.

Test in person, with real clients. Look at their physical and emotional reactions to whatever you're building. Listen to their emotional reaction even more than to their words. Always test your algorithms against real humans before you release them, to see if the data tells the same story as what you, the human, see when you interact with your client 1:1.


Stories are data with a soul. – Brené Brown, Braving the Wilderness

We can’t ignore the rich data from direct interactions with other humans. Stories are messy, contradictory and don’t come with any reassuring decimal points or significant digits. We can’t let the convenience and neatness of numbers distract us from real conversations with the humans our algorithms impact.

Our best hope is to become healthy cyborgs.


Participatory AI is Joy Buolamwini's elegant term for what we should be striving for, instead: Not the aspirational artificial intelligence that only exists in movie depictions, but AI that's fully symbiotic with human intelligence. An AI that performs the heavy lifting of data processing, which the human corrects, adjusts, re-directs. A far more accurate model for how AI actually works (at least in the near to medium term), Participatory AI also undermines fanciful, apocalyptic visions of Skynet, or massive unemployment caused by automation, and instead helps us imagine a better future. – Amber Case, How to Design a Better Internet

The next evolution of humanity relies on us creating good environments where our algorithms can grow up. Like a child that's dependent on its parent, algorithms could do nothing without us. Our algorithms are really quite primitive today. Perhaps they will surpass us, but that's for another Medium article on a day far, far distant from today.

Right now, we have a huge opportunity to become healthy cyborgs that partner with algorithms to make us the biggest, boldest, kindest, most capable versions of ourselves.

For more from me: clap, get on my list or check out my podcast, Embodied Reality, about technology, love and creativity. We interview everyone from the astrophysicist and founder of GoogleX to the Yale-educated lawyer who founded a Buddhist order of nuns in China to the creator of VR games to induce trances.

❤ cris


3 Signs You're In the AI Cult of Data (hackernoon.com): "Believe your own eyes, not just the decimal points. I grew up in a cult. Since leaving 12 years ago, I've studied the…"

There's Evidence We're Living in a Simulation (hackernoon.com): "The moment you perceive as NOW has already passed. Good news is: we can hack the simulation."