
Beyond Credit Scores: Exploring the Potential of Verifiable Models in Diverse Industries

by Matthew Kaufmann, December 13th, 2023

TL;DR: How verifiable machine intelligence is transforming machine learning.

In the ever-evolving world of artificial intelligence, verifiable machine intelligence is quietly revolutionizing machine learning.


This paradigm shift is not just a technological advancement; it represents a fundamental change in how we interact with and perceive the capabilities of machine learning models.


At its core, verifiable machine intelligence is about establishing a new standard of reliability and trust in machine learning models. Traditionally, these models have been somewhat opaque, offering little insight into their inner workings or decision-making processes.


Verifiable machine intelligence confronts this challenge head-on, introducing methods that allow for greater transparency and understanding of how models arrive at their conclusions.


Simply put, it's redefining how we deal with and trust machine learning models. This innovative approach tackles issues like transparency, accuracy, and protecting intellectual property, ushering in a new era of trustworthiness and accountability.


The potential benefits of verifiable machine intelligence go far beyond smart contracts.

Most of the really important algorithms affecting people's lives today, such as credit scores, insurance payouts, or even Twitter newsfeeds, are created by teams of data scientists behind closed doors. There's usually very little recourse when an error occurs, and errors occur with alarming frequency: the 2022 Equifax data glitch garbled millions of credit scores and distorted everything from mortgage decisions to car loans to apartment rentals.


Bringing verifiable machine intelligence to such crucial applications using a crypto-economic concept like an Inference Economy could allow institutions to make accurate, accountable, and much more reliable decisions without revealing their underlying models to competitors.


Perhaps the real killer application is holding artificial intelligence accountable. Semi-autonomous agents, such as the “roll your own” ones beginning to be offered by OpenAI, are capable of ingesting feeds of inferences and interacting with one another, but they’re just as susceptible to hallucination and error as ChatGPT is with text. Verifiable intelligence would allow models to interact with one another while verifying that any data passed between them was sourced correctly.
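
As a toy illustration of that kind of check, here is a minimal Python sketch in which one agent refuses to ingest an inference whose provenance tag doesn't verify. An HMAC over a shared key stands in for a real zero-knowledge attestation, and the message format and key are invented for the example:

```python
import hmac, hashlib, json

SHARED_KEY = b"demo-key"  # stand-in for a real attestation scheme

def sign_inference(payload: dict) -> dict:
    """Producer agent attaches a provenance tag to its inference."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def ingest(message: dict) -> dict:
    """Consumer agent discards data whose provenance does not check out."""
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        raise ValueError("unverified inference - discard")
    return message["payload"]

msg = sign_inference({"model": "risk-v2", "score": 0.87})
print(ingest(msg))  # only verified payloads reach the consuming agent
```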


At the heart of this quiet revolution is a family of technologies called zero-knowledge machine learning (zkML) and a handful of pioneering companies building applications with it. One of these is Spectral, which announced its Machine Intelligence Network on December 5th, creating an “Inference Economy” of model-makers, validators, and developers who solicit machine learning (ML) models or simply consume feeds of inferences in their smart contracts, the way an oracle brings price feeds into a decentralized exchange.
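
To make the oracle analogy concrete, here is a minimal sketch of what consuming such a feed might look like. The `InferenceUpdate` shape, its field names, and the placeholder `verify_proof` are all hypothetical; a real system would run an actual zkML verifier rather than this stub:

```python
from dataclasses import dataclass

@dataclass
class InferenceUpdate:
    """One entry in a hypothetical inference feed (names are illustrative)."""
    model_commitment: str  # hash committing to the (private) model
    input_hash: str        # hash of the features the model consumed
    output: float          # the inference itself, e.g. a credit score
    proof: bytes           # zk proof that output = model(input)

def verify_proof(update: InferenceUpdate) -> bool:
    # Placeholder: a real system would run a zkML verifier,
    # checking the proof against the model commitment.
    return len(update.proof) > 0

def consume(update: InferenceUpdate) -> float:
    """Mimics how a contract gates on the proof before acting on the
    inference, the way it gates on a signed oracle price update."""
    if not verify_proof(update):
        raise ValueError("rejecting unproven inference")
    return update.output

score = consume(InferenceUpdate("0xabc", "0x123", 712.0, b"\x01"))
```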


“Spectral’s network is a continuous competition,” says Spectral CEO Sishir Varghese. “Zero-knowledge proofs are the key to the system. They allow users to verify that inferences are coming from top-performing models—Modelers don’t need to reveal their proprietary methodology and can still generate revenue. This is especially useful for smart contracts which require validated information to be used in their executions.”


Disclosure: The author acknowledges a vested interest in the organization(s) highlighted in this story. However, the views expressed within are delivered impartially and without bias.

Understanding Verifiable Machine Intelligence

Verifiable machine intelligence is built on three core principles: transparency, interpretability, and traceability. Together, these principles mark a significant shift in how we expect machine learning models to operate.


Essentially, verifiable machine intelligence aims to make the decision-making process of these models clear and understandable when generating specific predictions or decisions. The commitment to transparency ensures that the inner workings of these models are not hidden behind unnecessary complexity but are accessible and easy to interpret.


Transparency, a fundamental aspect of verifiable machine intelligence, is vital for building trust and allowing scrutiny of machine learning processes. This principle involves making the decision-making mechanisms of models understandable to stakeholders, enabling them to grasp the factors that contribute to the predictions. Verifiable models break the historical trend of black-box algorithms, providing clarity and insight into how decisions are made.


Interpretability, another crucial principle, takes transparency further by emphasizing the need for human-understandable explanations. Verifiable machine intelligence strives to create models with decisions that are not only clear but also intelligible to individuals without an advanced understanding of machine learning. This approach ensures that the reasoning behind a model's predictions is articulated in a way that aligns with human cognitive capacities.


Traceability completes the trio of principles by allowing the tracking of the decision-making process back to its origins. Verifiable models record the steps and features contributing to each prediction, enabling retrospective analysis and ensuring accountability. This traceability not only strengthens the reliability of the model but also helps identify and rectify potential biases or errors.
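
A minimal sketch of what such traceability could look like in practice, assuming a simple append-only audit log; the record format and the toy model are invented for illustration:

```python
import hashlib, json, time

AUDIT_LOG = []  # in practice this would be an append-only store

def predict_with_trace(model, features: dict) -> float:
    """Wraps a model so every prediction leaves a traceable record."""
    output = model(features)
    record = {
        "timestamp": time.time(),
        "features": features,
        "output": output,
        # The digest ties the record to exactly these inputs and this
        # output, so a retrospective audit can detect tampering.
        "digest": hashlib.sha256(
            json.dumps({"f": features, "o": output}, sort_keys=True).encode()
        ).hexdigest(),
    }
    AUDIT_LOG.append(record)
    return output

toy_model = lambda f: 0.4 * f["income"] + 0.6 * f["repayment_rate"]
predict_with_trace(toy_model, {"income": 0.8, "repayment_rate": 0.9})
print(AUDIT_LOG[-1]["digest"])  # auditable trail of the prediction
```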


How Verifiable Machine Intelligence Works in Practice: Creating a Web3 Credit Score


To return to the credit bureau glitch: if there were a community of data scientists and model-makers competing to create more accurate credit-scoring models, errors would be far more likely to be spotted, and anyone competing for a share of the revenue generated by those scores would have a very compelling reason to update the model with a more accurate version.


Spectral got its start creating the MACRO Score, an on-chain credit score that analyzes the transactions and contents of an Ethereum wallet to estimate that wallet’s likelihood of liquidation on an on-chain lending protocol.
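
Spectral’s actual MACRO model is proprietary, but a toy sketch conveys the shape of the idea: a few invented, normalized wallet features are combined into a risk estimate and mapped onto a familiar 300-850 range. Every feature name and weight below is an assumption for illustration:

```python
def macro_style_score(wallet: dict) -> float:
    """Toy liquidation-risk score from hypothetical on-chain features,
    mapped to a familiar 300-850 credit-score range."""
    risk = (
        0.5 * wallet["past_liquidations"] / max(wallet["loans_taken"], 1)
        + 0.3 * (1 - wallet["avg_collateral_ratio_norm"])
        + 0.2 * (1 - wallet["wallet_age_norm"])
    )
    risk = min(max(risk, 0.0), 1.0)  # clamp to [0, 1]
    return 300 + (1 - risk) * 550    # lower risk -> higher score

print(macro_style_score({
    "past_liquidations": 1,
    "loans_taken": 12,
    "avg_collateral_ratio_norm": 0.7,  # normalized to [0, 1]
    "wallet_age_norm": 0.6,
}))
```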


The Score was a success, with a capital-efficiency simulation showing that the scores could help lower collateral requirements and interest rates for well-qualified borrowers, boosting profitability. Spectral took the dataset used to create the MACRO Score’s underlying ML model and is now using it as fodder for its first stream of inferences.


To kick off a data science challenge, they offered the community a significant bounty of $100,000 (with an additional $50K top-up), plus an 85 percent share of all revenue generated from the model after the bounties have been exhausted.


The way it works: modelers who meet the benchmarks appear on a leaderboard, and the top ten share revenue while their models are online, i.e., generating inferences. Because it’s a continuous challenge, payments are streamed to model-makers. Anyone can submit a better version, knock out one of the incumbents, and earn a share of the proceeds.
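
The exact payout rules aren’t public, so the pro-rata split below is an assumption; it simply gives the flavor of streaming revenue to the top-ranked eligible modelers in proportion to performance:

```python
def stream_payouts(leaderboard: list[tuple[str, float]],
                   revenue: float, top_n: int = 10) -> dict:
    """Toy pro-rata split: modelers above the benchmark who rank in
    the top N share revenue in proportion to their scores."""
    eligible = sorted(leaderboard, key=lambda m: m[1], reverse=True)[:top_n]
    total = sum(score for _, score in eligible)
    return {name: revenue * score / total for name, score in eligible}

# One streaming interval's worth of revenue, split among three modelers.
print(stream_payouts(
    [("alice", 0.91), ("bob", 0.88), ("carol", 0.86)], revenue=1000.0
))
```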


Unlike on competitive machine learning platforms, the verifiable machine intelligence involved means competitors never have to reveal their models to one another, preserving their IP and discouraging sniping and over-optimization. After the initial bounties are paid out, anyone can add credit scores to their smart contract or application simply by incorporating the feed and paying for its use.
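
IP preservation of this kind typically rests on commitments: a modeler publishes only a hash of the model, and later proofs are checked against that hash. A minimal sketch, with the serialization choice being illustrative:

```python
import hashlib, pickle

def commit(model_weights) -> str:
    """Publish only a hash of the weights: the model itself stays
    private, but later zk proofs can be checked against this value."""
    return hashlib.sha256(pickle.dumps(model_weights)).hexdigest()

# The modeler shares just the commitment, never the weights.
print(commit({"layer1": [0.12, -0.4], "bias": 0.05}))
```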


Spectral is gradually rolling out new challenges, aiming to eventually offer a library of ML inferences. It will also provide a flexible format that makes it simple to integrate inferences into smart contracts or to contribute a model, despite the complex cryptography involved.

Challenges and Future Developments

As we delve into the realm of verifiable machine intelligence, some challenges could impact its widespread adoption. One key challenge involves finding the right balance between transparency and complexity.


While the goal is to boost understanding and trust in machine learning models, overly complex models might become hard to interpret, potentially undermining the core idea of verifiability. Striking this balance becomes crucial to ensure that the interpretability promised by verifiable models doesn't unintentionally compromise their accuracy or usefulness.


Another challenge relates to a potential trade-off between verifiability and the performance of machine learning models. Verifiable models might, in some cases, introduce a computational overhead due to extra processes for transparency and traceability. Balancing high-performance standards with achieving verifiability is an ongoing challenge that researchers and practitioners must address.


Looking to the future, ongoing research offers hope for overcoming these challenges and enhancing verifiable models. One avenue involves refining techniques for model distillation, where complex verifiable models can be transformed into simpler, more interpretable versions without sacrificing accuracy. This could tackle the challenge of interpretability by providing simplified representations of intricate models.
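
For instance, a minimal distillation sketch: a complex “teacher” (here a random forest on synthetic data) is approximated by a shallow, human-readable decision tree trained on the teacher’s own predictions. The data and model choices are arbitrary stand-ins:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 3))
y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.05, size=1000)

# Complex teacher, then a small student trained on the teacher's outputs.
teacher = RandomForestRegressor(n_estimators=200).fit(X, y)
student = DecisionTreeRegressor(max_depth=3).fit(X, teacher.predict(X))

# The shallow tree is human-readable; its rules approximate the teacher.
print(export_text(student, feature_names=["f0", "f1", "f2"]))
```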


Advancements in explainable AI (XAI) and interpretable machine learning are crucial for the future of verifiable machine intelligence. As research progresses, models will likely become inherently more interpretable without sacrificing sophistication, mitigating the trade-off between verifiability and performance.


Techniques offering detailed insights into the decision-making processes of machine learning models are essential to ensure transparency is not just a checkbox but a meaningful aspect of verifiable models.


Additionally, the integration of privacy-preserving technologies is a significant future development. While verifiable machine intelligence stresses transparency, it must also coexist with the need to protect sensitive information. Research into techniques like federated learning and homomorphic encryption can contribute to developing verifiable models that respect privacy, ensuring transparency and security can go hand in hand.
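
As one concrete flavor of this, here is a toy secure-aggregation sketch in the spirit of federated learning: clients mask their model updates with pairwise noise that cancels in the sum, so a server learns the aggregate without seeing any individual update. Sizes and values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
updates = [rng.normal(size=4) for _ in range(3)]  # one update per client

# Pairwise masks: client i adds mask_ij, client j subtracts the same one.
masks = {(i, j): rng.normal(size=4)
         for i in range(3) for j in range(i + 1, 3)}

masked = []
for i, u in enumerate(updates):
    m = u.copy()
    for (a, b), noise in masks.items():
        if a == i: m += noise
        if b == i: m -= noise
    masked.append(m)

# The server sums the masked updates; every mask cancels exactly,
# leaving the true aggregate and nothing about any single client.
assert np.allclose(sum(masked), sum(updates))
print(sum(masked))
```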


In conclusion, the challenges faced by verifiable machine intelligence are not insurmountable. Ongoing research indicates a promising trajectory for future developments. Balancing transparency and complexity, addressing performance concerns, and advancing interpretability are critical areas for research and innovation. As the field matures, the evolution of verifiable models will likely bring solutions that not only overcome existing challenges but also set new standards for accountable, reliable, and interpretable machine intelligence.