We live in a world where people get paid to spread truths and lies. We’ve all heard the term propaganda, and by now we’ve all been exposed to it: the most obvious examples come from our global political climate, since politicians, by nature, love spreading lies. The duality between truth and lies, right and wrong, is at the heart of politics and information. And it’s this duality that incentivizes powerful people to uphold or destroy the values that preserve truthfulness in society. If people care about the truth as much as I hope enough of us do, how can we enforce the necessary qualities in the information we consume?
Access to truth is almost synonymous with access to quality education & information. With the advent of the Internet, digital consumers have gained access to education & information in unbelievable ways. Universities and other institutions of higher education are merely one avenue for engaging our intellectual curiosity and achieving our goals. This access is plainly not evenly distributed, since the Internet is still barred in many parts of the world. Paradoxically, the most digitally connected populations suffer from not knowing what to believe or how to form their own opinions, an idea unfathomable to any individual living in their own bubble. People absorb information on the Internet like a sponge; it’s no surprise that majority views can tend toward lies.
A quick search on Google reveals that over 2.5 quintillion bytes of data are added to the web every day, an amount that only continues to rise. How many lies are written into the history of the web every day? If people with greater access to education are exposed to more and more malicious content, the line between truth and lie blurs. Society has a skewed understanding of what it means to be educated: it can’t simply be that the more content you absorb, the more educated you become. The information we absorb must be quality information. Anything less is doing ourselves a disservice.
It follows naturally that we should strive to find or create the optimal information source, whether for education or entertainment, for everyone (not just you). It should be a democratic information source that slowly adapts, curating truthful, quality information based on voting. It would permit biased, opinionated content on the basis that the content is truthful and well reasoned. These qualities are ideal for an information source that enables people to step outside their echo chambers rationally and willingly. Given this optimal source, we can postulate a pseudo black-box mechanism that attempts to meet this optimality.
Mechanisms are used in game theory to model a variety of interesting market-oriented problems, such as auctions. The goal of studying mechanisms is to understand the incentives and strategies of the participants and of the designer of the mechanism. In this idealized setting, both parties seek to maximize their expected utility, meaning they seek the highest reward for the lowest cost available.
Some qualities about our information mechanism:
Some questions we might have:
If we let the outcomes be binary (FAKE/TRUE), we can ask how good people are at predicting whether a particular piece of content is fake. The simplest, most naive model would aggregate votes and pick the majority outcome (a popular vote). I need not convince you of why this doesn’t always work, but I digress. Rolling with this, we can start imagining other simple solutions to the questions above.
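The popular-vote baseline fits in a few lines of Python. This is only a sketch; the `"FAKE"`/`"TRUE"` labels and the tie-handling are my own choices, not part of any fixed design:

```python
from collections import Counter

def majority_outcome(votes):
    """Aggregate binary votes ("FAKE"/"TRUE") by simple popular vote.

    A tie returns None, signaling that no majority outcome exists.
    """
    counts = Counter(votes)
    if counts["TRUE"] == counts["FAKE"]:
        return None
    return counts.most_common(1)[0][0]
```

Every vote counts equally here, which is exactly the weakness the rest of this piece tries to address.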
Note, it would be nice to assume that people will act truthfully in public or anonymized elections. In practice, however, voters can be bribed, extorted, and organized into coalitions. It would also be nice to assume that people would participate, out of curiosity, in the creation and dissemination of new information sources. Without these assumptions, and given that we still strive to have such a system (another BIG assumption), we must have some way of rewarding participants. This reward must be enough of an incentive for both sides of the market to participate; that is, a subset of the participants gains positive utility from the mechanism.
A simple system for proving truthfulness and incentivizing participation would reward voters in the majority party and punish everyone else. This mechanism is clearly not truthful, since voters are incentivized to vote with the majority regardless of their honest belief. It may work in an anonymized election where the current election state is unknown until some later termination period, but for the sake of my own interests, it’s best to have a transparent, public mechanism. There are a variety of parameters to tune to realize different outcomes, rewards, and punishments. An intuitive mechanism weighs participants’ votes in a way that incentivizes truthfulness without allocating too much power to any single vote. We can give each reader and writer a reputation that denotes the influence of their vote or created content. Such influence would gradually grow or shrink based on a user’s behavior in the mechanism.
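A minimal sketch of such a reputation-weighted vote, assuming made-up parameters purely for illustration (a cap of 10% of total reputation on any single vote, and a 5% reputation step after each outcome):

```python
def weighted_outcome(votes, reputation, cap=0.1):
    """votes: dict voter -> "TRUE"/"FAKE"; reputation: dict voter -> float.

    Each vote is weighted by the voter's share of total reputation,
    capped so no single participant can dominate the outcome.
    """
    total = sum(reputation.values())
    tally = {"TRUE": 0.0, "FAKE": 0.0}
    for voter, choice in votes.items():
        tally[choice] += min(reputation[voter] / total, cap)
    return max(tally, key=tally.get)

def update_reputation(votes, reputation, outcome, step=0.05):
    """Grow the influence of voters who matched the outcome; shrink the rest."""
    for voter, choice in votes.items():
        factor = (1 + step) if choice == outcome else (1 - step)
        reputation[voter] *= factor
    return reputation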
The natural next step would be to incentivize truthful, quality content creation. We “have” a democratic way of voting truthfully on the truthfulness of content; we now only need content in the system to get the ball rolling. Similar questions arise, but with different user roles (content creators vs. voters), such as:
Notice that the truthfulness of voters and of content creators is distinct. In the former case, we want voters to submit their honest vote regardless of the ground-state truth, which is unknown. The truthfulness of the latter, however, is easier to critique. We must incentivize writers to cite their sources, not plagiarize, and not lie about facts, to name a few constraints. Their content inevitably lands in front of readers, who may or may not be able to accurately judge its truthfulness. It would make sense to punish prejudiced writers and reward those on the opposite end of the spectrum of absurdity. After all, the goal is to create a quality content platform.
I have fewer answers to the questions above, or perhaps less of an idea of how to answer them, given how polarized people are. Presumably, if the wisdom of the crowd exists, and readers’ honest votes dictate the truthfulness of the content in question, then rewarding the most truthful content would incentivize more truthful content creation. We would hope that the market dynamics converge to such a point, and that, eventually, the readers and writers with the most influence and reputation would benefit the most from the network.
The mechanism above shares striking similarities with an existing centralized content platform: Reddit. There are readers who upvote/downvote content based on some intrinsic quality measure, and content producers who benefit from upvotes. But what a platform like Reddit lacks is a meaningful reward for content creators and accountability for their posts. Reposting is rampant and dis-incentivizes content creators from the outset.
Beyond these complaints, it would be nice to have a platform with meaningful rewards for meaningful content creators. Whether through direct monetary compensation or some crypto-token, the infrastructure exists to build new platforms that experiment with new monetization models. Journalists today are underpaid and under-represented on their own, outside of monolithic news organizations. And researchers compete for spots in academic journals, in some cases sacrificing quality for sheer output. A simple feedback mechanism that improves an individual’s reputation as a content producer could provide greater exposure to both groups and lead to larger engagement in their work and careers.
These new platforms can stress accountability in public content, making the provenance of information more transparent. Citation would be the status quo, allowing for innovative “trickle-down” reward systems for content producers whose work proves highly relevant.
The original intent of spilling my thoughts on truth, lies, and information was to ask whether people can accurately predict truthfulness or quality in information. An interesting social experiment is whether we can achieve quality information dissemination using prediction markets. The mechanism above works on the assumption that people can collectively find quality, truthful content.
A prediction market is an exchange-traded market for trading the outcomes of events. Normally, prediction markets trade on events that, after some later time, realize their true outcome: predicting next Sunday’s weather, or who the next president will be, for example. Things get more complicated if we build prediction markets for probabilistic outcomes like the truthfulness and quality of a news article: plainly put, outcomes with no ground-state truth. People’s predictions may exist over the real interval [0,1] rather than in {0,1}. Can we still arrive at meaningful conclusions in a timely manner, given these constraints, using the mechanism we designed above?
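For intuition, the standard machinery behind markets like these is Robin Hanson’s logarithmic market scoring rule (LMSR), whose prices always sum to 1 and can be read as the crowd’s probability estimate on [0,1]. A sketch for a binary TRUE/FAKE market, with the liquidity parameter `b` chosen arbitrarily:

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function over outstanding shares q (one entry per outcome)."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of outcome i; prices across outcomes sum to 1."""
    z = sum(math.exp(qi / b) for qi in q)
    return math.exp(q[i] / b) / z

def trade_cost(q, i, shares, b=100.0):
    """What a trader pays to buy `shares` of outcome i at state q."""
    q_new = list(q)
    q_new[i] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)
```

Buying shares of an outcome pushes its price up, so the running price acts as an aggregate belief; the open question in our setting is what event, if any, ever settles the market.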
Prediction markets were formulated into a form of government called a futarchy by Robin Hanson. In a futarchy, “elected officials define measures of national welfare and prediction markets are used to determine which policies have the most positive effect”. For more information about this concept, here’s Robin Hanson’s account.
What is particularly interesting about futarchy is how quickly elected officials can be replaced on the basis of poor performance. Elected officials define the metrics they want to reach, from job creation to tax cuts, and if those targets aren’t met, the markets will elect officials who more successfully reach their target measurements.
Back in our information world, an interesting concept would be applying futarchy to news and information organizations. Imagine the following scenario: Drew’s News Agency (DNA) pays its “elected” journalists, who write on Drew’s Open News Network (DONN), every month for writing under the DNA umbrella. The readers of DONN participate in a prediction market, voting on the quality/truthfulness of all journalists’ content on the network. If the journalists of DNA don’t meet the output or quality expectations outlined by themselves but predicted by the crowd, their reputation will decrease. Journalists who meet or even exceed their own expectations will realize greater ratings and reputation. And with that, journalists finding more success with the collective crowd will have the opportunity to join DNA, while those failing to meet expectations will be removed from it. Harsh to some, this feedback mechanism filters for the best journalists and content producers.
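One way to make the DNA/DONN scenario concrete is a toy monthly review loop. Everything here is hypothetical: the reputation floor, the update step, and the idea that the crowd’s predicted quality score is compared against each journalist’s self-declared target:

```python
def monthly_review(staff, candidates, crowd_score, target, floor=0.2, step=0.1):
    """staff/candidates: dict name -> reputation.
    crowd_score/target: dict name -> value in [0, 1].
    Returns the staff roster after one review cycle."""
    dropped = 0
    for name in list(staff):
        # reputation moves with the gap between crowd prediction and self-set target
        staff[name] += step * (crowd_score[name] - target[name])
        if staff[name] < floor:
            del staff[name]  # failed expectations: removed from DNA
            dropped += 1
    # the most reputable outside candidates fill the openings
    for name in sorted(candidates, key=candidates.get, reverse=True)[:dropped]:
        staff[name] = candidates.pop(name)
    return staff
```

The harshness the scenario describes lives entirely in the `floor` and `step` parameters; tuning them trades stability for responsiveness.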
Modularizing the network of information and the control of content creators will allow information organizations to grow closer to their readers while enabling the best producers of information to benefit from being in the spotlight. With an open news/content network, journalists can receive the credit they’re owed for quality work, while organizations can profit off the increased exposure of their sponsored content. New, innovative forms of governance and collective intelligence will transform how we consume content and how we see change in the organizations we subscribe to, hopefully improving quality standards all around. It’s only a matter of realizing these changes today.
Here’s a splurge of resources, in no particular order, if you’re interested in these ideas. Feel free to throw questions my way as well!
Until next time, don’t take the realities of the world for granted!