What’s the Difference Between zkML and opML?

by Marlene, November 22nd, 2023

TL;DR:

  • zkML vs. opML: two emerging methods for on-chain AI inference.
  • zkML: brings privacy and transparency to ML using zero-knowledge proofs.
  • opML: offers scalable, cost-effective on-chain AI using optimistic verification.
  • Decentralized AI: turning ML models into public goods with baked-in trustlessness.


Due to the current events unfolding at OpenAI, interest in decentralized, on-chain AI is picking up. Not all on-chain AI methods are the same: the space of use cases ranges from on-chain ML training (not achieved yet), to decentralizing the hosting of AI models, to encrypting a model's contents, to proving inference, that is, verifiably querying a model.


In this article, I want to focus on two emerging methods for AI inference: zkML and opML. I will dive into what these terms mean, how the two approaches differ, and how they can help make AI more transparent and decentralized.


zkML and opML leverage two different cryptographic methods to prove a model's inference. In a nutshell, both create a cryptographic certificate verifying that an ML model A has been queried with a prompt B: they attest to the model's details, such as its size and parameters, and certify that the prompt was actually executed.
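

To make this a bit more concrete, here is a minimal sketch (in Python, purely illustrative; neither system uses this exact format) of the kind of information such a certificate commits to:

```python
from dataclasses import dataclass
from hashlib import sha256


def commit(data: bytes) -> str:
    # Both zkML and opML rely on commitments like this, so nobody can swap
    # the model, the prompt, or the output after the fact.
    return sha256(data).hexdigest()


@dataclass
class InferenceAttestation:
    """Illustrative only: the pieces an inference certificate binds together."""
    model_commitment: str   # hash of model A's weights and architecture
    input_commitment: str   # hash of prompt B
    output_commitment: str  # hash of the response the model produced
    proof: bytes            # a zk proof (zkML) or a challengeable claim (opML)


attestation = InferenceAttestation(
    model_commitment=commit(b"<model weights>"),
    input_commitment=commit(b"<prompt B>"),
    output_commitment=commit(b"<model output>"),
    proof=b"...",  # produced by the prover, checked by a verifier on-chain
)
```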


Let’s first take a look at zkML:

Understanding zkML

zkML, or zero-knowledge machine learning, intertwines the concept of zero-knowledge proofs (ZKPs) with ML. Zero-knowledge proofs are cryptographic protocols that enable one party to prove to another that a statement is true without revealing any information beyond the validity of the statement itself.


Privacy is a major use case for zkML, with biometric authentication being a popular application. Worldcoin, for instance, is working on using zkML to authenticate its users locally on their smartphones.


But beyond privacy, zkML is useful for making ML models more fair and transparent.


Using a zkML library like ezkl, we can prove statements such as:


"I ran this publicly available neural network on some private data, and it produced this output."


"I ran my private neural network on some public data, and it produced this output."


"I correctly ran this publicly available neural network on some public data, and it produced this output."


The on-chain aspect comes into play when we are able to verify these proofs on Ethereum.
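

To give a sense of what that looks like in practice, here is a rough sketch of the ezkl Python workflow. The function names follow ezkl's Python bindings, but argument order and signatures change between versions, so treat this as pseudocode for the flow rather than a drop-in script:

```python
import ezkl

model_path = "network.onnx"      # the neural network, exported to ONNX
data_path = "input.json"         # the (public or private) input data
settings_path = "settings.json"
compiled_path = "network.compiled"
srs_path = "kzg.srs"
pk_path, vk_path = "prover.key", "verifier.key"
witness_path, proof_path = "witness.json", "proof.json"

# One-time setup: turn the model into a circuit and generate keys.
ezkl.gen_settings(model_path, settings_path)
ezkl.compile_circuit(model_path, compiled_path, settings_path)
ezkl.get_srs(srs_path, settings_path)                   # structured reference string
ezkl.setup(compiled_path, vk_path, pk_path, srs_path)

# Per inference: run the model and produce a proof of that run.
ezkl.gen_witness(data_path, compiled_path, witness_path)
ezkl.prove(witness_path, compiled_path, pk_path, proof_path, srs_path)

# Anyone holding the verifying key can check the proof, either locally like
# this or through a verifier contract deployed to Ethereum.
assert ezkl.verify(proof_path, settings_path, vk_path, srs_path)
```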


And just a friendly reminder: with proprietary AI services like OpenAI, all we can do is trust that the black box we're interacting with is truthfully doing what we ask of it.


Both zkML and opML can make the application of algorithms and ML models fairer.


For instance, zkML ensures that all users are subject to the same rules and that any changes to an algorithm are made public. This level of transparency and fairness has been absent online for way too long.


zkML and opML are not changing how LLMs are trained, but they can help with shedding light on how algorithms and models are applied.

Understanding opML

However, at this point, zkML is incredibly costly and slow. Provable model sizes currently top out in the millions of parameters, whereas GPT-4 reportedly has around 1.7 trillion. Recently, Hyper Oracle introduced a new approach called opML that can run 13-billion-parameter LLMs on-chain using a standard PC. That's roughly a 10,000x step up from zkML.


Due to its low cost and high performance, opML might pave the way for on-chain applications that utilize ML, and, conversely, for web2 applications that leverage opML's verifiable nature for their users.


opML brings the process of AI model execution and inference directly on-chain, similar to how an optimistic rollup like Arbitrum or Optimism scales Ethereum transactions. While ZK rollups rely on zk proofs for computational validity, optimistic rollups and opML use fraud proofs, an optimistic verification method that could broaden the horizons for on-chain ML technologies.


The core of opML is a verification game that borrows from the principles of Truebit and optimistic rollup frameworks, ensuring that the ML computations remain decentralized and verifiable. opML is so far the only method available to prove ML inference optimistically and offers users access to advanced AI models such as Stable Diffusion and LLaMA 2.
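

To get a feel for how such a verification game works, here is a simplified, hypothetical sketch (not opML's actual implementation): a submitter posts the result of an off-chain computation, and if a challenger disagrees, the two bisect over the computation trace until they pin down a single step, which is the only thing the chain has to re-execute.

```python
from typing import Callable, List


def run_off_chain(step: Callable[[int], int], n_steps: int) -> List[int]:
    """Run the full computation off-chain and record every intermediate state."""
    states = [0]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states


def dispute(challenger: List[int], claimed: List[int], step: Callable[[int], int]) -> bool:
    """Bisection game: return True if the claimed trace survives the challenge.

    The parties agree on the start state and disagree on the result, so they
    bisect until only one step is in dispute. That single step is the only
    computation the chain ever has to re-execute.
    """
    lo, hi = 0, len(claimed) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        lo, hi = (mid, hi) if claimed[mid] == challenger[mid] else (lo, mid)
    return step(claimed[lo]) == claimed[hi]   # the on-chain re-execution


# A toy stand-in for one step of model execution.
step_fn = lambda s: s + 3

honest = run_off_chain(step_fn, 1_000)                     # the challenger's local trace
fraudulent = honest[:700] + [s + 1 for s in honest[700:]]  # submitter tampers from step 700 on

print(dispute(honest, fraudulent, step_fn))  # False: one re-executed step exposes the fraud
```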

What’s different

The security mechanisms of zkML and opML diverge in their operations:


  • zkML combines ML inference computations with the creation of zk proofs for each instance.
  • opML carries out ML inference computations and only reruns a fraction of these computations on-chain if a challenge arises.


This design allows opML to skip the intensive proof generation typical of zkML altogether and fall back to on-chain re-execution only when a result is disputed.


Considering that ML computations could be less critical than financial transactions, security requirements might be adjusted while preserving the trustless and verifiable nature of opML.


By adjusting the security requirements to what on-chain ML actually needs, opML emerges as a scalable, efficient, and verifiable solution that can maintain a shorter challenge period than standard optimistic rollups.

Redefining Trust and Transparency

In conclusion, the evolution of on-chain AI tools like zkML and opML is not just about enhancing the capabilities of machine learning models but fundamentally about reshaping our relationship with artificial intelligence. As we edge closer to the realm of AGI, the question of control and transparency becomes paramount.


zkML offers a significant leap in ensuring privacy and fairness in AI applications. However, its current limitations in terms of cost and scalability highlight the need for alternative solutions. Enter opML, a promising approach that combines the strengths of on-chain execution with the efficiency and scalability of optimistic verification methods.



By adopting these decentralized and transparent methodologies, we open the door to a future where AI is not just a tool controlled by a few but a trustless, open-source public good accessible to all.