
AI and the Problem of “Knowledge Collapse”

by Mike Young, April 9th, 2024

Too Long; Didn't Read

The article delves into the concept of "knowledge collapse," the idea that our increasing reliance on AI may narrow our access to unconventional ideas and stifle innovation. Andrew J. Peterson's research explores this phenomenon, highlighting the risks and proposing solutions for maintaining a diverse range of knowledge in an AI-driven culture.

AI is often hailed (by me, no less!) as a powerful tool for augmenting human intelligence and creativity. But what if relying on AI actually makes us less capable of formulating revolutionary ideas and innovations over time? That’s the alarming argument put forward by a new research paper that went viral on Reddit and Hacker News this week.


The paper’s central claim is that our growing use of AI systems like language models and knowledge bases could lead to a civilization-level threat the author dubs “knowledge collapse.” As we come to depend on AIs trained on mainstream, conventional information sources, we risk losing touch with the wild, unorthodox ideas on the fringes of knowledge — the same ideas that often fuel transformative discoveries and inventions.


You can find my full analysis of the paper, some counterpoint questions, and the technical breakdown below. But first, let’s dig into what “knowledge collapse” really means and why it matters so much…


AI and the problem of knowledge collapse

The paper, authored by Andrew J. Peterson at the University of Poitiers, introduces the concept of knowledge collapse as the “progressive narrowing over time of the set of information available to humans, along with a concomitant narrowing in the perceived availability and utility of different sets of information.”


In plain terms, knowledge collapse is what happens when AI makes conventional knowledge and common ideas so easy to access that unconventional, esoteric, “long-tail” knowledge gets neglected and forgotten. It’s not about making us dumber as individuals, but rather about eroding the healthy diversity of human thought.

Figure 3 from the paper, illustrating the central concept of knowledge collapse.

Peterson argues this is an existential threat to innovation because interacting with a wide variety of ideas, especially non-mainstream ones, is how we make novel conceptual connections and mental leaps. The most impactful breakthroughs in science, technology, art, and culture often come from synthesizing wildly different concepts or applying frameworks from one domain to another. But if AI causes us to draw from an ever-narrower slice of “normal” knowledge, those creative sparks become increasingly unlikely. Our collective intelligence gets trapped in a conformist echo chamber and stagnates. In the long run, the scope of human imagination shrinks to fit the limited information diet optimized by our AI tools.


To illustrate this, imagine if all book suggestions came from an AI trained only on the most popular mainstream titles. Fringe genres and niche subject matter would disappear over time, and the literary world would be stuck in a cycle of derivative, repetitive works. No more revolutionary ideas from mashing up wildly different influences.


Or picture a scenario where scientists and inventors get all their knowledge from an AI trained on a corpus of existing research. The most conventional, well-trodden lines of inquiry get reinforced (being highly represented in the training data), while the unorthodox approaches that lead to real paradigm shifts wither away. Entire frontiers of discovery go unexplored because our AI blinders cause us to ignore them.

That’s the insidious risk Peterson sees in outsourcing more and more of our information supply and knowledge curation to AI systems that prize mainstream data. The very diversity of thought required for humanity to continue making big creative leaps gradually erodes away, swallowed by the gravitational pull of the conventional and the quantitatively popular.


Peterson’s model of knowledge collapse

To further investigate the dynamics of knowledge collapse, Peterson introduces a mathematical model of how AI-driven narrowing of information sources could compound across generations.


The model imagines a community of “learners” who can choose to acquire knowledge by sampling from either 1) the full true distribution of information using traditional methods or 2) a discounted AI-based process that samples from a narrower distribution centered on mainstream information.

This is actually a screencap from Primer’s video on voting systems but I pictured that the simulated “learners” in their “community” looked like this when reading the paper, and now you will too.

Peterson then simulates how the overall “public knowledge distribution” evolves over multiple generations under different scenarios and assumptions.
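
To make the setup concrete, here's a minimal sketch of that two-channel sampling model in Python. This is my own illustrative construction, not the paper's actual code: the truncation width, the learner choice rule, and the normal "truth" distribution are all assumptions made for demonstration.

```python
import numpy as np

# Sketch of the two-channel learner model (illustrative assumptions,
# not the paper's actual parameters). "True" knowledge is a standard
# normal; the AI channel returns only samples from the mainstream
# center, but at a discount, so more learners choose it.

rng = np.random.default_rng(0)

AI_TRUNCATION = 0.75   # AI returns only values within ±0.75 sigma (assumed)
AI_DISCOUNT = 0.20     # AI samples cost 20% less (the paper's scenario)
N_LEARNERS = 100_000

def sample_full(n):
    """Traditional acquisition: draw from the full true distribution."""
    return rng.standard_normal(n)

def sample_ai(n):
    """AI-mediated acquisition: rejection-sample the mainstream core."""
    out = np.empty(0)
    while out.size < n:
        x = rng.standard_normal(n)
        out = np.concatenate([out, x[np.abs(x) <= AI_TRUNCATION]])
    return out[:n]

# Assumed choice rule: the cheaper the AI channel, the more learners use it.
p_ai = 0.5 + AI_DISCOUNT
use_ai = rng.random(N_LEARNERS) < p_ai
public = np.where(use_ai, sample_ai(N_LEARNERS), sample_full(N_LEARNERS))

# Compare how much "long-tail" knowledge (beyond 2 sigma) survives.
baseline_tail = np.mean(np.abs(sample_full(N_LEARNERS)) > 2)  # ~4.6%
ai_tail = np.mean(np.abs(public) > 2)
print(f"tail-knowledge share: baseline {baseline_tail:.4f}, with AI {ai_tail:.4f}")
print(f"public knowledge is {baseline_tail / ai_tail:.1f}x thinner in the tails")
```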


Some key findings:

  • When AI provides learners with a 20% cost reduction for mainstream information, the public knowledge distribution ends up 2.3 times more skewed compared to a no-AI baseline. Fringe knowledge gets rapidly out-competed.

  • Recursive interdependence between AI systems (e.g., an AI that learns from the outputs of another AI, and so on) dramatically accelerates knowledge collapse over generations. Errors and biases toward convention compound at each step; the sketch just after this list illustrates the mechanism.

  • Offsetting collapse requires very strong incentives for learners to actively seek out fringe knowledge. They must not only recognize the value of rare information but go out of their way to acquire it at personal cost.
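
That recursive compounding is easy to see in miniature. Below is a toy Python sketch (again my own construction, not the paper's model): each generation's "AI" is refit to the mainstream core of the previous generation's outputs, and the spread of surviving knowledge ratchets downward fast.

```python
import numpy as np

# Toy model of recursive AI-on-AI dependence (illustrative only): each
# generation keeps just the "mainstream" 0.9-sigma core of the previous
# generation's outputs and refits to it, so the variance compounds down.

rng = np.random.default_rng(1)

mean, std = 0.0, 1.0  # generation 0: the full true distribution
for gen in range(1, 6):
    samples = rng.normal(mean, std, 100_000)
    core = np.abs(samples - mean) <= 0.9 * std  # drop everything outside the core
    mean, std = samples[core].mean(), samples[core].std()
    print(f"gen {gen}: std = {std:.3f}")  # roughly halves each generation
```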


Peterson also connects his model to concepts like “information cascades” in social learning theory and the economic incentives for AI companies to prioritize the most commercially applicable data. These all suggest strong pressures toward the conventional in an AI-driven knowledge ecosystem.


Critical Perspective and Open Questions

Peterson’s arguments about knowledge collapse are philosophically provocative and technically coherent. The paper’s formal model provides a helpful framework for analyzing the problem and envisioning solutions.


However, I would have liked to see more direct real-world evidence of these dynamics in action, beyond just a mathematical simulation. Empirical metrics for tracking diversity of knowledge over time might help test and quantify the core claims. The paper is also light on addressing potential counterarguments.
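
As a sketch of what such a metric could look like (my own suggestion, not something the paper proposes), one could track the Shannon entropy of the topic distribution in what people actually read and cite; a sustained drop would be evidence of narrowing. The topic counts below are hypothetical.

```python
import numpy as np

def shannon_entropy(counts):
    """Entropy (in bits) of a topic-share distribution; higher = more diverse."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

# Hypothetical topic counts for two snapshots of a reading corpus:
counts_before = [500, 300, 120, 50, 20, 8, 2]  # long tail still present
counts_after = [800, 180, 15, 4, 1, 0, 0]      # mass concentrated in the head

print(f"before: {shannon_entropy(counts_before):.2f} bits")
print(f"after:  {shannon_entropy(counts_after):.2f} bits (collapsed toward mainstream)")
```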


Some key open questions in my mind:

  • Can’t expanded AI access to knowledge still be a net positive in terms of innovation even if it skews things somewhat toward convention? Isn’t lowering the barriers to learning more important?

  • What collective policies, incentives or choice architectures could help offset knowledge collapse while preserving the efficiency gains of AI knowledge tools? How can we merge machine intelligence with comprehensive information?

  • Might the economic incentives of AI companies shift over time to place more value on rare data and edge cases as mainstream knowledge commoditizes? Could market dynamics actually encourage diversity?


Proposed solutions like reserving AI training data and individual commitment to seeking fringe knowledge feel only partially effective to me. Solving this seems to require coordination at a social and institutional level, not just individual choices. We need shared mechanisms to actively value and preserve the unconventional.


I’m also curious about the role decentralized, open knowledge bases might play as a counterweight to AI-driven narrowing. Could initiatives like Wikidata, arXiv, or IPFS provide a bulwark against knowledge collapse by making marginal information more accessible? There’s lots of room for further work here.


The stakes for our creative future

Ultimately, Peterson’s paper is a powerful warning about the hidden dangers lurking in our rush to make AI the mediator of human knowledge, even for people like me who are very pro-AI. In a world reshaped by machine intelligence, preserving the chaotic, unruly diversity of thought is an imperative for humanity’s continued creativity and progress.


We might be smart to proactively design our AI knowledge tools to nurture the unconventional as well as efficiently deliver the conventional. We need strong safeguards and incentives to keep us connected to the weirdness on the fringes. Failing to do so risks trapping our collective mind in a conformist bubble of our own design.



So what do you think — are you concerned about knowledge collapse in an AI-driven culture? What strategies would you propose to prevent it? Let me know your thoughts in the comments!


And if this intro piqued your interest, consider becoming a paid subscriber to get the full analysis and support my work clarifying critical AI issues. If you share my conviction that grappling with these ideas is essential for our creative future, please share this piece and invite others to the discussion.


The diversity of human knowledge isn’t some abstract nice-to-have — it’s the essential catalyst for humanity’s most meaningful breakthroughs and creative leaps. Preserving that vibrant range of ideas in the face of hyper-efficient AI knowledge curation is a defining challenge for our future as an innovative species!


AIModels.fyi is a reader-supported publication. To receive new posts and support my work subscribe and be sure to follow me on Twitter!