Acknowledgments: Special thanks to
tl;dr
Every decision starts with a prediction. Consider Bitcoin’s potential: “Will purchasing Bitcoin now yield a doubled investment by year’s end?” If “yes” is judged even marginally more likely than “no,” it would be economically rational to buy Bitcoin in the absence of superior alternatives.
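The expected-value arithmetic behind that intuition can be sketched in a few lines (the probabilities below are hypothetical, not forecasts):

```python
# Toy expected-value check for a "double or nothing" bet.
# p_double is a subjective probability supplied by the bettor.
def expected_return(p_double, stake=1.0):
    """Expected payoff of a bet that doubles the stake with
    probability p_double and loses it otherwise."""
    return p_double * 2 * stake + (1 - p_double) * 0.0

for p in (0.45, 0.50, 0.55):
    print(f"P(double) = {p:.2f} -> expected payoff {expected_return(p):.2f} per $1 staked")
```

Whenever the expected payoff exceeds the $1 staked (i.e., P(double) > 0.5), the bet is rational in isolation.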
But why stop at Bitcoin? Imagine we could architect markets rooted in predictions about all kinds of events, such as who will be the next US president or which country will win the World Cup. Here, not assets but forecasts themselves are traded.
Prediction markets have been called the “holy grail of epistemic technology” by Vitalik.
Vitalik has a knack for seeing big things before others. So he’s a good source for frontrunning narratives. He proposed the idea of an AMM on Ethereum seven years ago in a
If Vitalik’s blog posts can initiate the creation of
But it’s his more
The market-leading prediction market right now is Polymarket, owing to its ongoing UX improvements and its expanding range of event categories and offerings.
Monthly volume recently hit all-time highs and is likely to go higher with the US presidential election this November (Polymarket activity is US-centric).
There is further precedent to believe that prediction markets could take off this year. Besides crypto markets reaching all-time highs in 2024, this is one of the biggest election years in history: eight of the world’s ten most populous nations (the US, India, Russia, Mexico, Brazil, Bangladesh, Indonesia, and Pakistan) are going to the polls. The 2024 Summer Olympics in Paris are also upcoming.
But given that monthly volumes are still in the tens of millions when they could reach hundreds of millions, let’s consider some of the limitations of current prediction markets:
We believe that thing is AI.
We need AIs as players in the game. We expect that soon, it will be common to see AIs (bots) participating alongside human agents in prediction markets. We can already see live demos of this in
AIs need AIs as arbiters of the game. Although relatively rare, there can be instances where dispute resolution is important and necessary in a prediction market. For example, in a presidential election, the results may be very close, and allegations of voting irregularities may surface. So, while the prediction market may close favoring Candidate A, the official electoral commission may declare Candidate B the winner. Those betting on Candidate A will dispute the outcome due to alleged voting irregularities, while those betting on Candidate B will argue that the electoral commission’s decision reflects the “true” outcome. A lot of money may be on the line. Who’s right?
Answering this question poses several challenges:
Players may not trust human arbiters due to their biases
Human arbitration can be slow and expensive
DAO-based prediction resolutions are vulnerable to Sybil attacks
To address this, prediction markets can use multi-round dispute systems a la
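One common shape for such systems is an escalation game in which each appeal must post a larger bond than the last, making frivolous disputes progressively more expensive. A minimal, purely illustrative sketch (the bond-doubling rule and starting numbers are assumptions, not any specific protocol’s parameters):

```python
# Illustrative escalating-bond dispute game. In practice an appeal also
# carries an appeal window; if no one appeals before it closes, the
# currently asserted outcome becomes final and the losing bonds are slashed.
class DisputeGame:
    def __init__(self, proposed_outcome, initial_bond=100):
        self.outcome = proposed_outcome  # currently asserted outcome
        self.bond = initial_bond         # bond backing that outcome
        self.round = 0

    def appeal(self, new_outcome):
        """Challenging costs double the standing bond; each appeal flips
        the asserted outcome and raises the cost of the next challenge."""
        self.round += 1
        self.bond *= 2
        self.outcome = new_outcome
        return self.bond

game = DisputeGame("Candidate A")
game.appeal("Candidate B")  # bond now 200
game.appeal("Candidate A")  # bond now 400
print(game.round, game.bond, game.outcome)
```

The exponential bond growth means disputes terminate quickly: only a challenger with strong conviction (and capital) keeps escalating, and honest final arbiters are paid from the loser’s bonds.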
For prediction markets to really take off, they need to generate enough interest to push people past the psychological threshold of actually trading prediction assets. That may not take much for general topics many people care about, like who will win a presidential election or the Super Bowl. However, limiting markets to general topics severely caps potential liquidity. Ideally, a prediction market could tap into the liquidity of specific events of high interest to niche audiences. This is how targeted advertising works, and we all know targeted advertising works.
To achieve this, prediction markets need to solve four general challenges:
Event Supply: Highly relevant event supply is key. To grab the attention of a niche yet dedicated audience, event creators must deeply understand their community’s interests to drive participation and volumes.
Event Demand: Demand needs to be high within the particular targeted community, taking into account their demographic and psychographic idiosyncrasies.
Event Liquidity: There must be enough diversity and dynamism of opinion within the targeted community to drive sufficient liquidity, retaining both sides of the market and minimizing slippage.
Information Aggregation: Players should have easy access to enough information to feel confident placing a bet. This could include background analysis, relevant historical data, and expert opinions.
Now, let’s see how AI could address each of these challenges:
Content Creator AIs: Content creator AIs (“copilots”) assist in the creation of content beyond human capacities or motivation. AIs suggest timely and relevant event topics by analyzing trends from news, social media, and financial data. Content creators – whether human or AI – will be rewarded for generating engaging content that keeps their communities lively. Community feedback enhances the AI’s understanding of its communities, making it an iteratively improving content creation engine to bond content creators and their audiences.
Event Recommendation AIs: Event recommender AIs tailor event suggestions to users based on their interests, trading history, and specific needs, focusing on recommending events ripe for debate and trading opportunities. They adapt to users’ behaviors across different regions, cultural contexts, and times. The end goal is a highly targeted feed of events, free from the personally irrelevant content that clutters prediction market platforms today.
Liquidity Allocator AIs: Liquidity allocator AIs tackle counterparty liquidity risk by optimizing liquidity injections to narrow the bid-ask spread. To minimize risk, AIs can implement the
Information Aggregation AIs: These AIs harness compute over a wide array of indicators (e.g., on-chain data, historical data, news, sentiment indicators) for players to comprehensively understand the event. From there, the information aggregation AIs can offer well-rounded projections, turning prediction markets into the go-to source for informed decision-making and alpha. Projects can choose to token-gate access to the insights gleaned by information aggregation AIs because in prediction markets, knowledge = money.
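To make the recommendation idea concrete, here is a deliberately simple sketch that ranks events by cosine similarity between a user’s interest vector and event tags. The tags, weights, and event titles are hypothetical; a production system would use learned embeddings and on-chain trading history rather than hand-written vectors.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse tag-weight vectors (dicts)."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical user profile derived from past activity.
user = {"elections": 0.9, "crypto": 0.6, "sports": 0.1}

# Hypothetical candidate events with tag weights.
events = {
    "Will BTC double by year-end?": {"crypto": 1.0},
    "Who wins the World Cup?": {"sports": 1.0},
    "Who wins the US election?": {"elections": 1.0, "politics": 0.3},
}

ranked = sorted(events, key=lambda e: cosine(user, events[e]), reverse=True)
print(ranked[0])  # the election market, given this user's weights
```

The same similarity scores could also feed the liquidity allocator, concentrating subsidies on events the model expects a community to actually trade.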
Now, let’s see what this looks like when you piece it together. Below, you can see the main components and workings of a prediction market without AIs (in black) and with AIs (in blue).
In the non-AI model, content creators (usually the platform itself) arbitrarily create events, supply liquidity (initially subsidized by their treasuries), save the events to an event database and promote them in bulk to human players. This is how Polymarket currently works, and it's working quite well.
But I think it can get a lot better.
In the AI model, content creator copilot AIs support content creators in creating and promoting events inside targeted general or niche communities. Liquidity provision is supported by liquidity allocator AIs that optimize liquidity injections over time by learning player order books and using external data from oracles and other data vendors. Event recommendation AIs use stored events in the event database and wallet transaction history to tailor event recommendations to personal interests. Finally, information aggregation AIs collect information from data vendors to provide educational and contextual information to human players and to inform AI players’ prediction decisions. The end game? A fine-tuned system that enables prediction markets to work at a micro scale.
Prediction markets at this scale would enable a different user experience, one that is more like Tinder or TikTok. As the events are highly targeted, they could be fed to you in a feed a la TikTok, and – even with today’s wallet and blockchain technology – players could place bets by swiping left or right a la Tinder. Imagine that: people making micro-bets on the events they personally care about while they’re commuting to work or school.
Among the most notoriously difficult outcomes to predict are asset prices, so let’s focus here to see how AIs perform when pushing at the edges of what is possible in prediction markets.
Using AI to predict asset prices is actively being explored in academic circles. Machine learning (ML) techniques like linear models, random forests, and support vector machines have been
IBM research
Another study comparing random forest regression and LSTM to predict Bitcoin’s next-day price
We can infer that in some popular prediction markets, there is simply too little time for a busy human to aggregate, analyze, and interpret sufficiently large amounts of data to make good predictions. Or the problems are simply too complex. But AIs can do this.
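To ground this, here is a deliberately simple example of the kind of model the studies above benchmark: a one-lag autoregression fit by ordinary least squares. The price series is made up, and this is illustrative only, not a trading model; the papers cited use far richer features and architectures.

```python
def fit_ar1(prices):
    """OLS fit of the one-lag autoregression p[t] = a + b * p[t-1]."""
    x = prices[:-1]           # yesterday's prices
    y = prices[1:]            # today's prices
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Synthetic daily closes, for illustration only.
prices = [100, 102, 101, 105, 107, 106, 110, 113]
a, b = fit_ar1(prices)
next_day = a + b * prices[-1]
print(f"next-day forecast: {next_day:.1f}")
```

Even this toy model must ingest and regress over the full history in milliseconds, which is the point: the bottleneck for a busy human is aggregation and analysis, not intent.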
One of the main challenges facing prediction markets is that the markets are too thin to attract enough players and volume. But there is a major difference between the prediction markets of the 2010s vs the 2020s, and that is the
To add, it’s possible to
Convergence and liquidity removal. As prediction markets converge (i.e., as the outcome becomes more certain), LPs are incentivized to remove their liquidity. This is rational behavior because the risk of holding “losing” tokens increases. For example, in a market converging toward “yes,” the “no” tokens become less valuable (i.e., impermanent loss), posing a risk to LPs who might end up with worthless tokens if they don’t sell in advance.
Bias and inaccuracy. This reduction in liquidity can lead to less accuracy and more bias as prediction markets converge. Specifically, in the volume-weighted price range of 0.2 to 0.8, ‘no’ tokens are often underpriced, and ‘yes’ tokens are often overpriced.
To address these issues, the authors propose a “smooth liquid market maker” (SLMM) model and demonstrate that it can increase volumes and accuracy in converging prediction markets. It does this by introducing a concentration function into the model (a la Uniswap v3) in which LPs provide a liquidity position that is only active for specific price intervals. The result is reduced risk exposure, ensuring that the number of valuable tokens (e.g., ‘yes’ tokens in the market converging to ‘yes’ outcome) held by LPs does not converge to zero as prices adjust, unlike in the constant product AMM.
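To make the LP risk concrete, here is a toy constant-product YES/NO pool, the baseline design that the SLMM improves on. We use the convention that the YES price is the NO reserve’s share of total reserves; all numbers are illustrative.

```python
import math

def cpmm_reserves(k, p):
    """Reserves of a constant-product YES/NO pool at YES-price p,
    where p = no_reserve / (yes_reserve + no_reserve)."""
    yes = math.sqrt(k * (1 - p) / p)
    no = math.sqrt(k * p / (1 - p))
    return yes, no

def lp_value(k, p):
    """Mark-to-market value of the LP position: a YES token is worth p,
    a NO token 1 - p (each redeems for $1 or $0 at resolution)."""
    yes, no = cpmm_reserves(k, p)
    return yes * p + no * (1 - p)  # closed form: 2 * sqrt(k * p * (1 - p))

k = 100 * 100  # pool seeded with 100 YES and 100 NO at p = 0.5
for p in (0.5, 0.7, 0.9, 0.99):
    print(f"p = {p:.2f}  LP value = {lp_value(k, p):.2f}")
```

The closed form 2·sqrt(k·p·(1−p)) goes to zero as p approaches 1: the pool bleeds its valuable YES tokens and fills up with soon-to-be-worthless NO tokens, which is exactly the failure mode the concentration function is designed to avoid.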
There is a balancing act that must be reached when choosing a concentrated liquidity AMM variant like the SLMM for converging prediction markets. While you’re trying to reduce risk for LPs, you end up disincentivizing some trading activity.
Specifically, while concentrated liquidity can make it less likely that LPs lose out as the market converges on a sure outcome (thus reducing premature withdrawal), it may also reduce opportunities to profit on small price changes (e.g., a move from $0.70 to $0.75) due to increased slippage, especially for large orders. The direct consequence is that traders’ potential profit margins are squeezed. For instance, if they expect a small price move from $0.70 to $0.75, slippage may limit the capital they can effectively deploy to capture the expected upside. Looking forward, it will be important to trial various adjustments to the tradeoff term in these market-maker formulas to find the sweet spot.
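The margin squeeze is easy to quantify in a toy constant-product pool (this is the plain CPMM, not the SLMM itself, and the numbers are illustrative): a trader who pushes the price from $0.70 to $0.75 pays well above $0.70 on average.

```python
import math

# Toy constant-product YES/NO pool; YES price p = no / (yes + no).
def reserves(k, p):
    return math.sqrt(k * (1 - p) / p), math.sqrt(k * p / (1 - p))

def avg_fill_price(k, p0, p1):
    """Average probability-equivalent price paid to push the YES price
    from p0 up to p1 by swapping NO tokens into the pool for YES tokens."""
    y0, n0 = reserves(k, p0)
    y1, n1 = reserves(k, p1)
    odds = (n1 - n0) / (y0 - y1)  # NO tokens paid per YES token received
    return odds / (1 + odds)      # convert odds back to a probability price

avg = avg_fill_price(10_000, 0.70, 0.75)
print(f"average fill: {avg:.3f}")  # well above the 0.70 starting price
```

Note that in this model the average fill between two prices is independent of pool depth; what depth (or concentration) changes is how much order size it takes to move the price from $0.70 to $0.75 in the first place, which is why thinner active liquidity squeezes large traders hardest.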
The prediction market primitive is a powerful one. Of course, like any other crypto primitive, it faces challenges, but we are confident that they will be overcome. As they are gradually overcome, we can expect to see this primitive reused to answer all sorts of questions in a wide variety of digital contexts. With advancements in targeting and liquidity solutions, we can expect the development of niche prediction markets. For example, take X (formerly Twitter) users:
Interestingly, these questions don’t need to stay confined to standalone prediction market websites. They could be integrated directly into X or other platforms via browser extensions. We may start to see micro-prediction markets pop up regularly in our everyday online experiences, enriching ordinary browsing with speculative trading opportunities.
I intentionally wrote some of the questions above and asked ChatGPT to write the others. Which did I write, and which did the content creator AI write? If it’s hard to tell, that’s because ChatGPT’s content creator AI is already really good. So are the information aggregation AIs and recommendation engines built by other Big Tech (look at the ads Google and Instagram feed you). While matching the performance of these models will take work and time, they demonstrate the feasibility of these AI categories. The main open question lacking precedent is more in the direction of liquidity allocator AIs, AI players, and the development of self-improvement and goal-directedness in AIs – the evolution from basic machine learning to verifiable AI agents.
If you’re building in these spaces or this post resonates with you, do
Hiroki Kotabe is the Research Principal at Inception Capital (formerly OP Crypto). Inception Capital is a first-check venture fund focusing on early-stage web3 startups and emerging fund managers. Since raising $50M for Inception Venture Fund I in September 2021, they have been at the forefront of the industry, investing at the earliest stage in over 30 projects, such as Scroll, Merit Circle, Avalanche, and Celestia. Their $30M Inception Fund of Funds has invested in web3 funds like Orange DAO, Escape Velocity, Alliance, Syncrasy, and Everyrealm.