In Darwin’s evolutionary theory, “survival of the fittest” describes the phenomenon that the traits of life forms with the greatest reproductive success become prevalent over time, while other traits disappear.
I would like to apply this framework to the age of algorithms. On the leading tech platforms such as Facebook, YouTube, Twitter, Instagram, LinkedIn and TikTok (gotta be inclusive here), algorithms play a key role in selecting what information people get to see, and who gets to be seen. Since these services’ business models are centered on advertising, their algorithms are optimized to make people spend as much time as possible on them.
Thanks to the vast amounts of usage data generated by billions of daily users, as well as the ever-improving capabilities of machine learning (or “Artificial Intelligence”), one has to expect this optimization process to become highly effective over time, if not outright perfect.
The result, then, would be something akin to an “algorithmic survival of the fittest”. The algorithms will reward the characteristics of information, and the traits of the people who convey it — journalists, Instagram influencers, YouTubers, traditional celebrities, political pundits, Twitter opinion leaders — that generate the most attention and thus ad revenue, granting them reach and monetary incentives.
This creates a feedback loop: creators and information curators observe and learn which types and styles of information the algorithms favor, then increase the supply of exactly that material. Then the whole cycle starts again.
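This feedback loop can be sketched as a toy simulation. Everything here is made up for illustration — the content styles, the engagement rates, and the imitation rule are assumptions, not a description of any real platform’s system. The point is only to show the dynamic: when creators imitate whatever the algorithm rewards, the highest-engagement style takes over the population.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

STYLES = ["calm analysis", "clickbait", "outrage"]
# Hypothetical engagement rates per style (pure assumptions).
ENGAGEMENT = {"calm analysis": 0.02, "clickbait": 0.05, "outrage": 0.08}

def simulate(creators=100, rounds=20):
    # Every creator starts with a randomly chosen content style.
    population = [random.choice(STYLES) for _ in range(creators)]
    for _ in range(rounds):
        # The "algorithm" rewards each creator in proportion to engagement.
        rewards = [ENGAGEMENT[style] for style in population]
        # Creators observe what gets rewarded and imitate successful peers:
        # each slot in the next round copies a current creator's style,
        # sampled with probability proportional to that creator's reward.
        population = random.choices(population, weights=rewards, k=creators)
    return population

final = simulate()
shares = {s: final.count(s) / len(final) for s in STYLES}
print(shares)  # the highest-engagement style ends up dominating
```

Under these assumptions, “outrage” crowds out the other styles within a handful of rounds, even though no individual creator set out to be outrageous — imitation of rewarded behavior is enough.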
While this is happening, algorithms are shaping the habits and minds of producers and consumers of that information alike. Over time, producers and consumers become what and who the algorithms “want” them to be. Technically, of course, it is not an actual “want” but a consequence of their objective: optimizing for attention and user engagement.
For example, as a creator, if you realize that the algorithms reward you for screaming, being angry, being outraged, being controversial, or deliberately behaving unintelligently, this is what you might keep doing. After a while, due to neuroplasticity, this will have an actual impact on your brain structure, your mental models and who you are as a person. Even if this is a simplified way of putting it, you might become that angry, outraged, controversial, unintelligently behaving person the algorithm trained you to be (if you chose that path in the first place).
Another example: if the algorithms figure out that feeding you ever more extreme ideological content makes you stick around longer and longer, they’ll keep doing that, casually turning you into a full-blown radical.

It’s worth noting that, as with the original evolutionary theory, algorithmic survival of the fittest is not about individual survival in the physical world. It is about the survival of certain traits that serve the algorithms’ needs. If a person who got radicalized on social media is killed while committing a mass shooting, this doesn’t go against the algorithms’ interest. On the contrary: perversely, the algorithms will then leverage the increased polarization of society and the radicalization of individuals that follow the shooting for another round of maximizing people’s attention. Where do many people go after major incidents to express their sorrow and anger, and to look for information? Social media, of course.
As an individual and as a society, noticing how algorithms are taking over our minds while it happens is as hard as noticing your hair growing when you look in the mirror every day.
One day you see yourself in the mirror and realize “I need a haircut. How could I even let my hair get that long??”
When it comes to the power of algorithms over us, we might already be way past that moment.