In the past few years, the platforms we have trusted with our data, time, and money have messed up big time. It seems like every week a new privacy scandal or data breach comes to light, and every time we are powerless to do anything about it. Our fundamental right to digital privacy is dwindling, and it is time to get serious about saving it. What we lack is trust that platforms and governments will not abuse the power they hold. Keeping centralized entities trustworthy is a difficult problem, and there is no perfect solution. Today, the internet’s trust primitives against centralized abuse are built on one thing: faith. Let’s talk about why that is, and about the path from “Don’t Be Evil” to “Can’t Be Evil”. Why is our data under threat? What mechanisms are in place today to protect it? I discuss the pros and cons of these mechanisms and conclude that they are inadequate.
The internet has become central to the lives of billions of people around the world, and this meteoric rise has brought with it a host of benefits. In the early days, a Cambrian explosion of independent technologies gave users a plethora of choices and increased the usability of the web for all. It enabled a whole new digital economy, which has grown at a stunning rate. This rise seemed organic: a functioning free market that put the user above all else. Profit-seeking companies and their users had aligned incentives. If a company built a better product that solved a problem, customers would want to use it, suppliers would want to provide content to the platform, and the company’s revenue would grow. The most successful companies were those that found a business model that energizes and profits from this flywheel effect, usually by taking advantage of data network effects. Ben Thompson, author of Stratechery, calls these companies “Aggregators”:
Aggregator Theory (Stratechery)
The goods sold by an aggregator are digital, and thus have zero marginal costs, zero distribution costs, and customer acquisition costs that decrease over time. These characteristics allow aggregators to maintain their moats and stifle competition by extracting value more easily. Aggregators have given us products that are the easiest to use and the cheapest, with the largest network effects. We choose these platforms because nothing else offers a better UX and all of our friends are on them. This also implies two things:
All aggregators today are centralized, and have followed a predictable life cycle. In the beginning, they have an incentive to do everything they can to attract users, developers, and media attention. When aggregators reach a certain inflection point on the adoption S-curve, and can’t continue to grow from new users alone, they have the critical mass to begin extracting more value from each user.
Credit: Chris Dixon
Startups can’t disrupt the stranglehold aggregators gain on users and their industry, because aggregators are incentivized to deter them from doing so. The incentives of participants and platforms drift further and further out of alignment, and the internet ends up paving the way for the next Buy N’ Large.
Wall-E
As these platforms grow, they gain more and more power over users, especially through data. Facebook, Google, and other aggregators use data to provide a better UX, learning more about you and your browsing habits than you know yourself. The bad news is that when it comes to your digital identity, the data you choose to share (or knowingly give up) is just the tip of the iceberg. Friendly interfaces, cool new features, and high-profile brands obscure how our most valuable behavioral data is inferred beyond our control and without our consent. It’s these deeper layers we can’t control that really make the decisions, not us.
The further out the ripples go, the harder it is to control. (Panoptykon Foundation)
Shoshana Zuboff, author of The Age of Surveillance Capitalism, defines this process as “Surveillance Capitalism”:
Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioural data. Although some of these data are applied to service improvement, the rest are declared as a proprietary behavioural surplus, fed into advanced manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, and later. Finally, these prediction products are traded in a new kind of marketplace that I call behavioural futures markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are willing to lay bets on our future behaviour.
So what mechanisms work against centralized abuse today? Ultimately, it comes down to trust. Trust, in this context, is the firm belief that a platform (or government) will not abuse the power its users have handed it. Two mechanisms enforce that trust today: public perception and government regulation.
Mechanism 1: Public Perception

Explanation
Imagine a company acts in bad faith towards its users. Say, for instance, Yahoo suffers a data breach and fails to disclose it to users for two years. Users trust (or hope) that the action will eventually be exposed to the public, and that the negative reaction from users, news organizations, investors, and even employees will act as a punishment and a deterrent. This negative press would, ideally, push the company to change its ways and do better in the future. Companies strive to increase the loyalty of their users, and the erosion of trust works against that goal.
Pros
This restoring force has been the primary means of preventing bad action. Most tech companies have social media pages, and when something comes up, these pages provide a seemingly direct feedback mechanism that shows companies the extent to which their stakeholders are unhappy. Twitter, YouTube, and the like have enabled the dissemination of bad-faith actions and have forcefully exposed companies (or at least their PR departments) to the negative perception. Public outrage can also be a powerful rallying force for privacy, as shown by the #deletefacebook movement, among others.
Cons
Perception is often not enough to change the business practices of a company. If corrective action would decrease revenue, or is deemed too complex or unimportant, it is more than likely not to happen. Additionally, information asymmetry is the norm for most users: the average Facebook user does not want to concern themselves with the dark and hidden problems of secondary data markets. Change driven by perception requires the masses to act, which introduces friction and inefficiency into the restorative process. Moreover, an important issue may not receive attention unless the media cycle gets hold of it, which is not guaranteed. Finally, investors are rarely swayed by perception, as seen from Facebook’s shares surging this week amid more and more scandals.
Mechanism 2: Government Regulation

Explanation
If a company acts in bad faith towards its users, through fraud, invasive data mining and reselling, or some other act deemed unlawful, governments may step in to investigate and penalize it. Regulation also includes preemptive measures against such actions, such as GDPR and the current American laws pertaining to data privacy.
Pros
Governments are meant to act in the best interests of the nation and its people. Thus, they should have an incentive to introduce penalties for malicious corporate activity. Governments can regulate who may demand access to data and how that data may be used, including the selling and reselling of information. They stand as the most powerful force today against these abuses, especially those not easily revealed by public inspection. GDPR is a logical first step and sets a precedent on which other countries can base their own regulations. Most notable are its right-of-access clauses.
Cons
With great power comes great responsibility. It is well known that governments snoop on communications to monitor for criminal activity, but Snowden showed us that the FBI and NSA collect far more than evidence of crimes. You can bet that they are storing all of this information in their own data centers, and that our data sits at the whim of future policy. Political officials have proven time and time again that they misunderstand the nature of privacy, or they have chosen to ignore it altogether. Most believe terror attacks and espionage pose a greater threat to democracy than the erosion of its citizens’ privacy does. Additionally, executive and legislative action is slow moving and fails to keep up with constantly evolving tech infrastructure. That evolution is driven by the largest companies with the most money, nearly always with the goal of accruing more of it. Smaller governments may not have the resources to fight back against these companies, or may lack the technical knowledge to understand what is at stake. It comes back to faith: faith in one’s government to act in the best interests of individual privacy.
It’s reasonable to argue that our data isn’t doing much damage to us right now, partly because of the restoring forces described above. But these companies and governments will have our data forever. It would be naive to think that these platforms will never change the rules for startups, creators, and everyday users, and if the current trend continues, there won’t be anything to stop them.
Let’s stop hoping that corporations and governments won’t take advantage of our lack of privacy. Let’s make it impossible for them to do so, in ways that are just as easy to use as today’s solutions. Protocols and platforms are being built right now that aim to create a more open, fair, and private internet. Let’s think carefully about the future of the internet, and instead of promising not to fall into the same traps, let’s make it impossible to do so by design.
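To make “can’t be evil by design” concrete, here is a minimal sketch of one such mechanism: client-side encryption, where data is encrypted before it ever reaches a platform, so the platform can store it but is structurally unable to read or resell it. This is an illustration only, written in Python using the widely available cryptography package; the EncryptedStore class and the fake_platform stand-in are hypothetical names I made up for this example, not part of any real protocol mentioned above.

# A sketch of "can't be evil" by design: the platform holds only
# ciphertext, so no policy change or data breach can expose the plaintext.
# Requires the `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

class EncryptedStore:
    """Client-side encryption: the key never leaves the user's device."""

    def __init__(self):
        self.key = Fernet.generate_key()   # generated and kept on the client
        self.cipher = Fernet(self.key)

    def upload(self, platform, name, plaintext):
        # The platform only ever receives ciphertext.
        platform[name] = self.cipher.encrypt(plaintext)

    def download(self, platform, name):
        # Only the key holder can recover the original bytes.
        return self.cipher.decrypt(platform[name])

fake_platform = {}                         # stand-in for an untrusted server
store = EncryptedStore()
store.upload(fake_platform, "diary", b"my private thoughts")
print(fake_platform["diary"])              # opaque ciphertext
print(store.download(fake_platform, "diary"))  # b'my private thoughts'

The point is not this particular library but the shape of the guarantee: the server never holds the key, so trusting it becomes unnecessary. “Don’t be evil” is a promise; “can’t be evil” is a property.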
For practical tips, check out Fighting the Surveillance Economy by Victor Vecsei. For an essay filled with examples of many of the things I discuss above, check out Why Decentralization Matters by Chris Dixon.