Due to the current events unfolding at OpenAI, interest in decentralized on-chain AI is picking up. Not all on-chain AI methods are the same, and the space of use cases is vast, spanning from on-chain ML training (not achieved yet), to decentralizing the hosting of AI models, to encrypting a model’s contents, to proving inference, i.e., proving that a model was actually queried and returned a given output.
In this article, I want to focus on two emerging methods for AI inference, namely opML, and zkML. I will dive into what these terms mean, what their differences are, and how they can help make AI more transparent & decentralized.
zkML and opML take two different routes to proving a model’s inference. In a nutshell, both produce a verifiable certificate that an ML model A was queried with a prompt B: the certificate attests to the model’s details, such as its size and parameters, and certifies that the prompt was actually executed.
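To make the idea of such a certificate concrete, here is a minimal sketch, my own illustration rather than the format of any particular protocol, of what an inference attestation could commit to:

```python
import hashlib
from dataclasses import dataclass

def digest(data: bytes) -> str:
    """Hex-encoded SHA-256 commitment to a blob of bytes."""
    return hashlib.sha256(data).hexdigest()

@dataclass
class InferenceAttestation:
    """Hypothetical shape of an inference certificate: only commitments are included,
    so the certificate itself reveals nothing beyond what it commits to."""
    model_commitment: str   # hash of the model's weights / architecture
    input_commitment: str   # hash of prompt B
    output_commitment: str  # hash of the model's response
    proof: bytes            # zk proof (zkML) or reference to a fraud-proof game (opML)

def attest(model_bytes: bytes, prompt: str, output: str, proof: bytes) -> InferenceAttestation:
    return InferenceAttestation(
        model_commitment=digest(model_bytes),
        input_commitment=digest(prompt.encode()),
        output_commitment=digest(output.encode()),
        proof=proof,
    )
```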
Let’s first take a look at zkML:
zkML, or zero-knowledge machine learning, intertwines the concept of zero-knowledge proofs (ZKPs) with ML. Zero-knowledge proofs are cryptographic protocols that enable one party to prove to another that a statement is true without revealing any information beyond the validity of the statement itself.
Privacy is a major use case for zkML, with biometric authentication being a popular application.
But beyond privacy, zkML is useful for making ML models more fair and transparent.
Using a zkML library, a prover can make statements such as the following:
"I ran this publicly available neural network on some private data, and it produced this output."
"I ran my private neural network on some public data, and it produced this output."
"I correctly ran this publicly available neural network on some public data, and it produced this output."
The on-chain aspect comes into play when these proofs are verified on Ethereum, typically by a verifier smart contract.
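Here is a hedged sketch of the client side of such a check, using web3.py against a hypothetical verifier contract; the RPC endpoint, contract address, and verifyProof signature are placeholders, not any specific deployment:

```python
from web3 import Web3

# Hypothetical verifier contract: the RPC URL, address, ABI, and function name
# below are placeholders for illustration, not a specific deployment.
VERIFIER_ABI = [{
    "name": "verifyProof",
    "type": "function",
    "stateMutability": "view",
    "inputs": [
        {"name": "proof", "type": "bytes"},
        {"name": "publicInputs", "type": "uint256[]"},
    ],
    "outputs": [{"name": "", "type": "bool"}],
}]

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # any Ethereum RPC endpoint
verifier = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder address
    abi=VERIFIER_ABI,
)

def verify_on_chain(proof: bytes, public_inputs: list[int]) -> bool:
    """Ask the on-chain verifier whether the proof checks out against its public inputs."""
    return verifier.functions.verifyProof(proof, public_inputs).call()
```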
And just a friendly reminder: in the case of proprietary AI services like OpenAI, all we can do is trust that the black box we’re interacting with is truthfully doing what it claims to be doing.
zkML, and equally opML, can change that by making the rules an algorithm follows verifiable. For instance, zkML can ensure that all users are subject to the same rules and that any changes to an algorithm are made public. This level of transparency and fairness has been absent online for far too long.
zkML and opML are not changing how LLMs are trained, but they can help shed light on how algorithms and models are applied.
However, at this point, zkML is incredibly costly and slow. Provable model sizes are currently only in the millions of parameters, whereas GPT-4 is reported to have around 1.7 trillion. Recently, Hyper Oracle introduced a new approach called opML that takes an optimistic route instead.
Due to its low cost and high performance, opML might pave the way for on-chain applications that utilize ML, and, conversely, for web2 applications that leverage the verifiable nature of opML for their users.
opML brings the process of AI model execution and inference directly on-chain, similar to how an optimistic rollup like Arbitrum or Optimism scales Ethereum transactions. While ZK Rollups rely on zk proofs for computational validity, Optimistic Rollups and opML use fraud proofs, an optimistic verification method that could broaden the horizons for on-chain ML technologies.
The core of opML is a verification game that borrows from the principles of Truebit and optimistic rollup frameworks, ensuring that the ML computations remain decentralized and verifiable. opML is so far the only method available to prove ML inference optimistically and offers users access to advanced AI models such as Stable Diffusion and LLaMA 2.
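Here is a rough sketch of that optimistic flow, under assumed simplifications: a single challenger, full re-execution standing in for the actual bisection game, and an arbitrary 100-block challenge window chosen purely for illustration:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Claim:
    """A submitted inference result, accepted optimistically unless challenged."""
    prompt: str
    claimed_output: str
    challenge_window_blocks: int = 100  # arbitrary illustration; opML aims for a short window

def run_model(prompt: str) -> str:
    """Stand-in for deterministic off-chain inference (e.g. LLaMA 2 run in a VM the chain can arbitrate)."""
    return hashlib.sha256(prompt.encode()).hexdigest()[:8]

def submit(prompt: str, claimed_output: str) -> Claim:
    """The submitter posts a result (and, typically, a bond) without generating any proof."""
    return Claim(prompt=prompt, claimed_output=claimed_output)

def challenge(claim: Claim) -> bool:
    """Verification game, collapsed here to full re-execution.
    In opML the dispute is bisected down to a single step that the chain
    itself can check, so the chain never has to re-run the whole model."""
    return run_model(claim.prompt) != claim.claimed_output  # True -> fraud proven, claim rejected

# Happy path: nobody challenges within the window and the result finalizes at zero proving cost.
# Dispute path: a challenger triggers the game; an incorrect claim (and its bond) gets rejected.
honest_claim = submit("tell me a joke", run_model("tell me a joke"))
assert challenge(honest_claim) is False
```

The point of the design is the asymmetry: proofs are only produced when someone disputes a result, so the common case stays cheap.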
The security mechanisms of zkML and opML diverge in how they operate: zkML generates a validity proof for every inference up front, while opML accepts results optimistically and only falls back to its verification game when a result is challenged. This design allows opML to bypass the intensive proof generation typical of zkML unless it is absolutely necessary.
Considering that ML computations could be less critical than financial transactions, security requirements might be adjusted while preserving the trustless and verifiable nature of opML.
By redefining the security needs for on-chain ML performance, opML emerges as a scalable, efficient, and verifiable solution that maintains a shorter challenge period than standard Optimistic Rollups.
In conclusion, the evolution of on-chain AI tools like zkML and opML is not just about enhancing the capabilities of machine learning models but fundamentally about reshaping our relationship with artificial intelligence. As we edge closer to the realm of AGI, the question of control and transparency becomes paramount.
zkML offers a significant leap in ensuring privacy and fairness in AI applications. However, its current limitations in terms of cost and scalability highlight the need for alternative solutions. Enter opML, a promising approach that combines the strengths of on-chain execution with the efficiency and scalability of optimistic verification methods.
By adopting these decentralized and transparent methodologies, we open the door to a future where AI is not just a tool controlled by a few but a trustless, open-source public good accessible to all.