In the previous article, we argued that digital trust is not a technical feature but a philosophical problem - one rooted in legitimacy, justification, and authority. That framing leads naturally to a more uncomfortable question: where does digital trust actually break down in practice?
The answer is rarely found in code. It is found in governance.
The Platform as a Social Contract
Digital platforms do not merely host interactions; they govern them. Every major platform establishes rules of behavior, defines acceptable conduct, adjudicates disputes, and enforces sanctions. In doing so, platforms assume roles traditionally associated with institutions of authority - lawmakers, judges, and enforcers - all at once.
This arrangement closely resembles a social contract, but an unusual one. Users are bound by rules they did not help shape, enforced by systems they cannot inspect, and revised without meaningful consent. Participation is framed as voluntary, yet in practice, exit often carries real social, economic, or professional costs.
Classical social contract theory treated legitimacy as conditional. Authority was justified only insofar as it served those subject to it. Digital platforms invert this logic. Legitimacy is assumed through scale, network effects, and convenience rather than earned through accountability.
This inversion matters because trust is not created by participation alone. It is created when those subject to power believe that power is exercised fairly, predictably, and with restraint. Platforms often demand trust while offering little justification for it.
As a result, trust becomes fragile - not because users are irrational or hostile, but because the terms of the contract are structurally imbalanced.
How Platforms Erode Trust Through Unilateral Power
Trust rarely collapses all at once. On digital platforms, it erodes gradually, through a series of small governance failures that signal a deeper imbalance of power.
One of the most common failures is opacity. Rules exist, but their application is unclear. Decisions are made, but explanations are absent or generic. Enforcement happens, but the reasoning behind it remains hidden. From the platform’s perspective, this opacity is often justified as a necessity - to prevent gaming, protect proprietary systems, or manage scale. From the user’s perspective, it feels arbitrary.
Opacity is not a neutral design choice. It is a statement about authority. When users cannot understand why a decision was made, they cannot assess whether it was fair. When they cannot contest it meaningfully, they are reminded that participation does not imply agency. Over time, this transforms trust into resignation.
A second erosion point is inconsistency. Platforms frequently apply the same rules differently across users, contexts, or time. Sometimes this is the result of automation struggling with nuance. Sometimes it reflects internal priorities that are invisible to users. In both cases, the effect is the same: predictability disappears.
Trust depends on expectations. When outcomes become unpredictable, users stop believing in the system’s impartiality. They may continue to use the platform - often because alternatives are limited - but their relationship shifts. What was once trust becomes calculation: What can I get away with? What risks am I willing to take?
The third, and perhaps most damaging, factor is unilateral change. Platforms routinely modify policies, terms, and enforcement mechanisms without meaningful consultation. While this flexibility is often framed as innovation, it undermines the very idea of a stable social contract.
In classical governance systems, changes to the rules require justification, process, and often consent. On platforms, change is instantaneous and asymmetrical. Users are informed after the fact, if at all. The message is implicit but clear: the rules exist at the discretion of the platform, not as a mutual agreement.
These dynamics - opacity, inconsistency, and unilateral power - do more than frustrate users. They weaken legitimacy. When people no longer believe that a system is governed by principles rather than expediency, trust gives way to skepticism. Compliance replaces consent. And engagement becomes transactional rather than relational.
Importantly, none of this is caused by scale alone. Large institutions have governed fairly before. Nor is it caused by technology itself. Systems can be designed to explain, justify, and correct decisions. What erodes trust is the absence of governance structures that recognize the platform’s power and accept responsibility for exercising it transparently.
When platforms treat governance as an operational burden rather than a source of legitimacy, trust is not just weakened - it is quietly consumed.
Why Governance, Not Technology, Determines Trust
When trust collapses on digital platforms, the reflex is to blame technology. Algorithms are too opaque. Systems are too complex. Scale is too difficult to manage. These explanations are convenient - and largely wrong.
Technology does not decide what is permissible, whose voice matters, or when enforcement is justified. Governance does.
Rules can be clear or vague. Enforcement can be consistent or arbitrary. Appeals can exist or be performative. Decisions can be explained - or silently imposed. None of these outcomes are dictated by technical limitations. They are governance choices.
The most damaging trust failures occur not when platforms make mistakes, but when they refuse to explain, justify, or correct them. Opaque enforcement erodes confidence faster than technical failure ever could. Inconsistent rule application signals favoritism or neglect. Unilateral policy changes reveal where power truly resides.
In this sense, trust is not broken by malfunctioning systems, but by unaccountable ones.
This is why trust cannot be engineered solely through better interfaces or smarter automation. Without legitimate governance structures - ones that recognize the asymmetry of power and provide mechanisms for explanation and contestation - trust degrades into mere dependence.
And dependence is not trust.
Do Platforms Need Legitimacy or Just Compliance?
The question that follows from all of this is uncomfortable, but unavoidable: if platforms already govern, what makes their governance legitimate?
Most platforms operate as if compliance were enough. As long as they meet regulatory requirements, publish policies, and enforce rules at scale, trust is expected to follow. But compliance is not legitimacy. Compliance is external and minimal; legitimacy is internal and relational. It exists only when those subject to authority believe that it is exercised in a way that is justified, explainable, and accountable.
Classical political theory made this distinction clear long before digital platforms existed. Authority without legitimacy may function temporarily, but it does not endure. It invites resistance, circumvention, and eventually intervention. The same dynamic now plays out in digital environments. When users feel governed rather than represented, controlled rather than protected, trust does not disappear - it mutates into suspicion.
This is why trust crises on platforms are so often misdiagnosed. They are treated as communication failures, PR problems, or technical edge cases. In reality, they are governance failures. They arise when power is exercised without sufficient justification, when decisions cannot be meaningfully questioned, and when those affected have no voice in shaping the rules that govern them.
The deeper issue is not that platforms are too powerful. It is that they wield power without the institutional structures that historically made power tolerable. Checks and balances, transparency, proportionality, and the right to contest decisions did not emerge by accident in political systems; they emerged because authority without restraint erodes trust.
Digital platforms have recreated authority at scale, but not its safeguards.
Until this gap is addressed, trust will remain fragile. Users may comply, adapt, or exit, but they will not fully trust systems that treat governance as an internal optimization problem rather than a shared responsibility. And no amount of technological sophistication can compensate for that absence.
This brings us to the central challenge of the digital age: trust cannot be restored by better tools alone. It depends on whether digital systems are willing to acknowledge their role as governing entities - and whether they are prepared to earn legitimacy, not assume it.
Further Reading & Conceptual References
- Hobbes, T. - Leviathan (Trust and authority as foundations of social order)
- Locke, J. - Two Treatises of Government (Conditional trust and legitimacy)
- Rousseau, J.-J. - The Social Contract (Consent and institutional authority)
- Luhmann, N. - Trust and Power (Trust as complexity reduction)
- O’Neill, O. - A Question of Trust (Trustworthiness vs. mere reliability)
- Hardin, R. - Trust and Trustworthiness (Relational and contextual trust)
- Zuboff, S. - The Age of Surveillance Capitalism (Asymmetry of power in digital systems)
- Cohen, J. E. - Between Truth and Power (Governance through information infrastructures)
- Gillespie, T. - Custodians of the Internet (Platform moderation as governance)
- OECD - Trust and Public Policy (Institutional trust, accountability, and fairness)
- World Economic Forum - Rebuilding Trust in Technology (Governance as a prerequisite for trust)
