In AI We Trust

Written by andreimochola | Published 2026/02/03
Tech Story Tags: ai | artificial-intelligence-trends | legitimacy | digital-trust | philosophy | society | ai-trust | can-we-trust-ai

TL;DR: Trust is not a single attitude. It can reflect confidence in technical performance, tolerance of risk, resignation to inevitability, or simple habit.

From a Few Prompts to Over Four Billion a Day

When Melvin Chen asked In AI We Trust?, the question framed a debate that has since evolved.

AI has moved from the periphery of digital experimentation to the centre of everyday interaction with remarkable speed. While no single global metric captures all AI requests across platforms, available disclosures give a sense of scale. ChatGPT alone now processes more than 2.5 billion prompts daily, having more than doubled from roughly one billion in late 2024. Other large models add substantial volume of their own: Claude approaches one billion daily queries, Gemini exceeds half a billion, and smaller but influential systems such as Grok and Perplexity contribute further still. Taken together, conservative estimates place total daily AI interactions across major platforms at just over four billion.
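To make the arithmetic behind that figure explicit, here is a rough back-of-envelope tally in Python. The per-platform numbers are the approximate public estimates cited above, and the catch-all entry for smaller systems is an assumption rather than a disclosed figure.

```python
# Back-of-envelope tally of daily AI prompt volume, in billions of prompts per day.
# Figures are rough public estimates, not official platform disclosures.
daily_prompts_bn = {
    "ChatGPT": 2.5,                 # reported > 2.5B prompts/day
    "Claude": 1.0,                  # approaching 1B daily queries
    "Gemini": 0.5,                  # exceeds half a billion
    "Grok/Perplexity/other": 0.2,   # assumed additional volume from smaller systems
}

total = sum(daily_prompts_bn.values())
print(f"Estimated total: ~{total:.1f} billion prompts per day")
# Estimated total: ~4.2 billion prompts per day
```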

Usage at this scale is often read as a signal of trust. In many digital contexts, volume correlates with confidence: repeat use suggests familiarity, perceived value, and a degree of reliability. Platforms, products, and brands routinely interpret adoption metrics as indicators of user trust and loyalty. The question is whether the same inference holds for AI.

Volume as an Indication of Trust

In earlier discussions about digital platforms, we examined a recurring pattern: legitimacy is frequently assumed through scale, network effects, and convenience rather than earned through explicit accountability. High usage is treated as validation, even when underlying mechanisms remain opaque. AI adoption fits easily into this pattern. Frequent interaction may indicate that systems are useful, accessible, or deeply embedded in workflows - but it does not, by itself, clarify what kind of trust users are placing in them.

This distinction matters because trust is not a single attitude. It can reflect confidence in technical performance, tolerance of risk, resignation to inevitability, or simple habit. Volume alone cannot reliably distinguish among these forms.

How Has Trust in AI Changed?

Survey data from the past several years suggests that attitudes toward AI have shifted gradually rather than dramatically. In 2023, early studies painted a cautious picture. While a large majority of respondents acknowledged the potential benefits of AI, fewer than half reported trusting AI-generated outputs. Concerns around cybersecurity, accuracy, and verification were widespread, and many users admitted to rarely checking AI results.

By 2026, confidence indicators show modest improvement, particularly among IT professionals and enterprise users. Reported trust levels in specific systems have increased, and organizational adoption has expanded significantly. At the same time, trust remains uneven. User confidence often grows through successful low-risk interactions, then erodes following visible failures such as hallucinations or errors in more complex tasks. As a result, trust in AI tends to evolve dynamically rather than accumulate steadily.

This evolution is not uniform. Individuals tend to approach AI with greater caution in personal or sensitive contexts, while organizations more often assess trust through operational performance, safeguards, and risk management. These differences reflect role-based exposure rather than fundamentally different beliefs about AI itself.

Can We Trust Machines?

The question of whether machines can be trusted reopens long-standing philosophical debates about what trust actually entails. Many classical theories treat trust as a distinctly interpersonal phenomenon, involving vulnerability to another agent’s intentions, motivations, or goodwill. On these accounts, trust presupposes moral agency - something AI systems do not possess.

This has led many philosophers to argue that what we describe as trust in machines is, more precisely, a form of reliance. We depend on systems to behave predictably under known conditions, to perform tasks within specified parameters, and to fail in ways that are bounded and recoverable. In this view, trust shifts away from intention and toward reliability.

Other approaches complicate this distinction by focusing not on the inner qualities of the trusted party, but on the stance of the one who trusts. From this perspective, trust is a normative posture: treating another actor - human or not - as answerable within a shared framework of expectations. Applied to AI, this shifts attention toward system design, institutional context, and the conditions under which users are encouraged to rely on outputs without full understanding.

Contemporary discussions of AI trust often translate interpersonal criteria into system-level properties. Ability becomes algorithmic competence and data quality. Integrity becomes consistency, auditability, and predictability. Benevolence is reframed as alignment with stated objectives or ethical constraints. None of these transformations resolves the philosophical tension entirely, but they do explain why trust in AI is frequently discussed as something adjacent to, rather than identical with, human trust.

Crucially, skepticism toward AI does not arise only from technical limitations. Social, political, and economic contexts shape how systems are perceived, particularly when automated decisions affect access, opportunity, or rights. In such cases, distrust may reflect rational concern rather than misunderstanding.

When Trust Fails

High-profile failures reveal how trust assumptions are tested once AI systems move beyond controlled environments into public services, finance, and consumer platforms. These moments tend to expose not so much a single technical flaw as a mismatch between what users believe systems are doing and how they actually operate.

Australia’s Robodebt scheme illustrates this clearly. Introduced to automate welfare debt detection, it relied on opaque income averaging and offered little meaningful explanation or recourse. Hundreds of thousands of citizens were wrongly pursued for debts, leading eventually to the program’s dismantling, a settlement of roughly A$1.2 billion, and a royal commission. Trust collapsed not because automation existed, but because decisions could not be examined or challenged.
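The flaw is easier to see when the income-averaging logic is written out. The sketch below is a deliberately simplified, hypothetical reconstruction rather than the actual Robodebt system: the function names, cutoff, and example figures are illustrative assumptions. It spreads an annual income figure evenly across 26 fortnights and judges eligibility on that average instead of on what the recipient actually reported.

```python
# Simplified, hypothetical sketch of the income-averaging flaw (not the real Robodebt code).
# Annual income is spread evenly over 26 fortnights and compared with what the
# recipient actually reported in each fortnight.

FORTNIGHTS = 26

def fortnights_flagged_as_debt(annual_income: float,
                               reported_fortnightly: list[float],
                               eligibility_cutoff: float) -> int:
    """Count fortnights that averaging retroactively marks as ineligible,
    even though the income actually reported in them was below the cutoff."""
    averaged = annual_income / FORTNIGHTS
    return sum(
        1 for actual in reported_fortnightly
        if averaged > eligibility_cutoff >= actual  # the average says "too much income"
    )

# A casual worker who earned $36,400, but only in the second half of the year,
# and correctly reported zero income while receiving payments in the first half.
reported = [0.0] * 13 + [2800.0] * 13
print(fortnights_flagged_as_debt(36400.0, reported, eligibility_cutoff=1200.0))
# -> 13: every fortnight of legitimate payments gets flagged as a "debt".
```

Anyone whose income was uneven across the year looks, under the average, as though they earned too much in every fortnight, including those in which they earned nothing at all.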

Similar dynamics have appeared elsewhere. In 2023, OpenAI’s ChatGPT was temporarily banned in Italy after concerns emerged over how user data was collected and used for training without clear consent. Meta’s Llama 2, despite relatively strong transparency scores, withheld key details about training data, fuelling disputes with artists and content creators. In finance, Apple’s credit card algorithm drew scrutiny after women received lower limits than similarly qualified men, with no accessible explanation of how decisions were made.

Failures in facial recognition systems, including those associated with Clearview AI, further demonstrate how opacity compounds harm. Wrongful identifications and arrests - often affecting minority communities - exposed how limited auditability undermines accountability.

Taken together, these examples suggest that trust in AI is most fragile when systems operate beyond scrutiny, explanation, or contestation. They do not point to inherent flaws in AI itself, but they do show how quickly confidence unravels when system behaviour cannot be meaningfully examined.

Trust Is Always Earned

Discussions of trust in AI often converge on familiar criteria - transparency, fairness, robustness, privacy, accountability - yet none of these function as guarantees. Transparency may illuminate parts of a system while leaving others opaque. Fairness metrics can conflict. Robustness reduces risk but never removes it. Accountability clarifies responsibility, yet complex AI ecosystems frequently diffuse it.
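The claim that fairness metrics can conflict is not merely rhetorical. The sketch below uses hypothetical data and two textbook criteria, demographic parity (equal selection rates across groups) and equal opportunity (equal true-positive rates among qualified people), to show a set of predictions that satisfies one while violating the other.

```python
# Toy illustration (hypothetical data): the same predictions can satisfy
# demographic parity while violating equal opportunity.
# labels: 1 = actually qualified, preds: 1 = selected by the model.

group_a = {"labels": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
           "preds":  [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]}
group_b = {"labels": [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
           "preds":  [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]}

def selection_rate(g):
    """Demographic parity compares the share of people selected in each group."""
    return sum(g["preds"]) / len(g["preds"])

def true_positive_rate(g):
    """Equal opportunity compares the share of *qualified* people selected."""
    qualified_preds = [p for p, y in zip(g["preds"], g["labels"]) if y == 1]
    return sum(qualified_preds) / len(qualified_preds)

for name, g in [("A", group_a), ("B", group_b)]:
    print(name, "selection rate:", selection_rate(g),
          "TPR:", round(true_positive_rate(g), 2))
# A selection rate: 0.4 TPR: 0.8
# B selection rate: 0.4 TPR: 0.5  -> parity in selection, disparity in opportunity
```

Both groups are selected at the same rate, yet qualified members of group B are selected markedly less often than qualified members of group A; which of the two readings counts as "fair" is a judgment the metrics alone cannot settle.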

What emerges instead is a more modest conclusion. Trust in AI is not a stable state achieved once and retained indefinitely. It is a conditional judgment, continuously revised through experience, context, and consequence. Increased use may signal confidence, dependence, or necessity - but trust, in its stronger sense, remains something that must be earned, tested, and occasionally withdrawn.


Further Reading

  • Chen, M. - In AI We Trust? A Philosophical Perspective in: Head Foundation Digest (On whether trust is conceptually applicable to artificial agents and why trust in AI is philosophically constrained, not merely technical)
  • Floridi, L. - The Ethics of Artificial Intelligence (Agency, responsibility, and trust beyond human actors)
  • O’Neill, O. - A Question of Trust (Why trust depends on conditions of trustworthiness, not optimism)
  • Luhmann, N. - Trust and Power (Trust as a mechanism for managing complexity in systems)
  • Hardin, R. - Trust and Trustworthiness (Why much “trust” in systems is closer to reasoned reliance)
  • Stanford Encyclopedia of Philosophy - Trust (Conceptual foundations and distinctions between trust, reliance, and confidence)
  • Nature Human Behaviour - Trust and Artificial Intelligence (Empirical and philosophical perspectives on trust in automated systems)
  • World Economic Forum - AI Paradoxes in 2026 (Tensions between adoption, confidence, and unresolved trust)
  • Financial Times / Harvard Business Review - Selected coverage on AI trust and accountability (How trust narratives evolve alongside real-world failures)


Written by andreimochola | I occasionally disappear to think, then come back to write about things that refuse to stay quiet.
Published by HackerNoon on 2026/02/03