OpenAI has acquired Torch, a healthcare startup that aims to consolidate fragmented medical records into a single AI-accessible system. CNBC reports that all Torch employees are joining OpenAI, though the company has chosen not to disclose financial terms.

At face value, the deal looks like a simple expansion of OpenAI’s health portfolio. However, that explanation sidesteps the real story. Centralizing health data inside a private AI ecosystem is not a step toward progress. It signals a future where the same company that manages your conversations, your workflows, your biometric identity, and your digital footprint now gains access to your medical history. Once you see that trajectory, you also see the danger. This acquisition raises concerns about privacy erosion, unchecked corporate power, and a world where AI systems quietly influence how people are evaluated, selected, priced, and judged.

Here are the three reasons this deal should concern anyone watching the direction of modern AI.

The Privacy Risk Is Not Subtle. It Is Structural

Torch was built to solve a specific problem: medical data is scattered across clinics, diagnostic centers, pharmacies, insurers, and specialist networks. That fragmentation frustrates everyone, but it also acts as an accidental safeguard. A single entity cannot easily assemble the full picture of someone’s health without permission and legal oversight. Torch’s unified medical memory erodes that safeguard by consolidating every detail into a single system.

Once that unified system moves inside OpenAI, the meaning of “integration” changes. Health records stop being isolated clinical notes and become part of an intelligence pipeline that powers models, product features, and internal research.

Then there is the connection to Worldcoin. The project continues to operate with global iris-scanning enrollment and a token trading at around 56 cents. This system assigns biometric identifiers tied to digital wallets. When you combine biometric identity with health records and behavioral data from ChatGPT, you create a level of visibility into individual lives unlike anything we have seen in consumer technology. It becomes technically possible to connect a person’s medical vulnerability to their identity and financial profile. No one should assume such a capability will forever remain unused.

Torch’s acquisition also raises sharp questions. If OpenAI is hacked, sued, acquired, or bankrupted, medical records could be treated as corporate assets. Genetic testing companies like 23andMe have already shown the world what happens when private health data is mishandled. The fallout is personal, irreversible, and impossible to contain.

At the center of this entire issue sits a simple question: if OpenAI already processes more than 230 million weekly health queries, why should anyone feel comfortable handing over their full medical history as well?

Market Power Concentration Threatens Real Innovation

OpenAI’s decision to acquire Torch is part of a broader pattern. The company has been expanding aggressively across hardware, consumer devices, data aggregation, and enterprise infrastructure. Last year, it swallowed Jony Ive’s AI device company for more than $6 billion. It brought in Google’s former M&A executive, Albert Lee, to identify acquisition targets. Taken together, these moves show a strategy built on consolidation rather than competition.

Healthcare is now the next frontier in that strategy. Hospitals and insurers are already exploring OpenAI tools to handle chart summarization, triage support, and patient communication.
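To appreciate how low the barrier to that integration is, consider a minimal sketch of what chart summarization looks like against OpenAI’s standard Python SDK. The model choice, prompt, and chart text below are hypothetical stand-ins, not details of any real hospital deployment.

```python
# Illustrative sketch only. The prompt, model choice, and chart text are
# hypothetical; the point is how little code it takes to route clinical
# notes through OpenAI's infrastructure.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_chart(chart_text: str) -> str:
    """Condense a free-text patient chart into a short handoff summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": "You are a clinical scribe. Summarize the chart "
                           "in three sentences for a shift handoff.",
            },
            {"role": "user", "content": chart_text},
        ],
    )
    return response.choices[0].message.content


print(summarize_chart(
    "62-year-old with type 2 diabetes, three days of fatigue, "
    "polyuria, and blurred vision. Last A1c 9.4."
))
```

Nothing here is technically exotic, and that is precisely the concern: any hospital IT team can wire this up in an afternoon, and every such call deepens dependence on a single vendor.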
That level of influence gives OpenAI enormous leverage over what health AI looks like across the industry. Smaller startups cannot compete with a platform that controls both the most widely used models and the data required to build specialized medical tools. This creates a single point of failure. It also concentrates accountability in a company that prefers opacity to transparency.

The question lurking beneath all of this is simple: how much control are we willing to hand over to one company before we acknowledge that innovation is being replaced by dominance?

Ethical and Societal Risks Multiply When AI Influences Life-and-Death Decisions

Health data is not neutral. It reveals psychological states, behavioral risks, financial vulnerability, and long-term health probabilities. When that information becomes part of a commercial AI ecosystem, the potential for exploitation multiplies.

OpenAI states that health chats will not train foundation models. That still leaves room for fine-tuning, research analysis, and the extraction of aggregated insights. Anonymization sounds like a safeguard, but any dataset with enough detail can be de-anonymized: a handful of quasi-identifiers such as birth date, ZIP code, and sex is often enough to link a “scrubbed” record back to a named individual.

Once health data becomes part of an AI economy, secondary uses become attractive. Insurers might purchase aggregate insight models to automate claim denials. Employers could rely on predictive health scoring to judge reliability or risk. Governments might explore behavior modeling built on subtle health patterns.

Third-party aggregators complicate the picture even further. Companies such as B.Well Swiss operate as intermediaries, connecting patient data, provider networks, and digital health applications. Once these pipelines feed into AI-driven systems, the number of hands passing information multiplies. Each link introduces another opportunity for misuse or misinterpretation.

At some point, society must confront the uncomfortable truth. If AI companies become the interpreters of our health, our behavior, and our values, they are no longer tools. They are gatekeepers.

Opt Out Now

OpenAI’s acquisition of Torch should not be treated as another incremental step in digital health innovation. It marks a shift in how deeply AI companies plan to embed themselves in the most intimate parts of human life.

People still have agency in this moment. They can demand regulations with real consequences for misuse. They can opt out of data-sharing pathways that gather more information than necessary. They can support privacy-focused health technologies that are not designed to serve the ambitions of trillion-dollar AI companies.

Ignoring this moment invites a future where medical history becomes a score, a filter, or a hidden determinant of opportunity. Once that future arrives, the power to reverse it may already be gone.