StackOverflow, the most commonly used platform by software developers for programming support, has been through a rough ride lately. If you haven’t used StackOverflow before, it’s a Quora / Reddit-like Q&A forum where you can ask programming-related questions. It’s been several years since I wrote production-quality code but back when I did, StackOverflow was incredible.
For example, if you ran into the most obscure of errors while compiling your code and got an error message you could not make sense of, you would stick it into Google search. More often than not, you would find a StackOverflow page where someone had asked the same question and got an answer. Less often, you would find another soul who had the exact same obscure problem as you but got no answer - in which case, good luck. More precisely, 69% of questions on StackOverflow are answered, which is pretty impressive.
Recently, however, StackOverflow’s traffic has been in decline. Similarweb’s data shows that their traffic dropped 14% year over year (StackOverflow says it’s closer to 5%). Either way, the trend is downward and is explained primarily by the emergence of AI coding products like ChatGPT and GitHub Copilot. These products have meaningful code-writing capabilities and can therefore provide programming support that is, at least in part, as good as what StackOverflow offers. Ironically, several of the large language models (LLMs) behind these AI products were trained on scraped StackOverflow data.
The company has gotten pretty harsh media coverage with these developments. Business Insider, in their article Death by LLM, wrote:
Welcome to the future of the internet in an AI world. Online communities like Stack Overflow and Wikipedia thrived as hubs for experts and curious browsers to come together and share information freely. Now these digital meeting places are being pillaged by big tech companies prowling for human data to train their large language models.
The new products emerging from this generative-AI boom are putting the future of these online forums in doubt. The chatbots answer questions clearly, automatically, and often pleasantly — so humans don't need to deal with other humans to get information.
In the midst of all this attention, StackOverflow has played a steady hand and articulated its two-pronged approach to addressing this challenge:
A few weeks back, they announced that they will start charging large AI developers who use the platform’s 50M+ questions and answers for model training (we dug into this issue in the Data Scraping in the Spotlight article earlier)
Last week, they launched the OverflowAI product, which is a set of actually useful generative AI features that can help kick off their second innings - we will focus on this today
In this article, we’ll dive deep into:
AI code writing tools disrupting StackOverflow
What OverflowAI does
How OverflowAI accelerates the future of StackOverflow
There are several AI code writing and editing tools available in the market today. These are either independent products (like OpenAI Codex, ChatGPT, Google Bard) or products that are natively integrated inside existing platforms (like GitHub Copilot, Replit Ghostwriter, Amazon CodeWhisperer). They have a broad range of capabilities including code generation, code editing, autocomplete, and debugging.
The products that have native distribution (like GitHub Copilot) are at a large advantage because they can operate seamlessly within environments that programmers already use today, and we will see more products attempting to get plugged into existing environments. For example, CodeGPT has a plugin that lets developers use the product from within Visual Studio Code (a popular code editing tool).
Existing AI code-writing tools are good at certain tasks. For example, this Reddit thread captures feedback from several web developers about GitHub Copilot - the overarching theme is that the product is useful in a subset of situations where developers have to write net-new code and don’t want to spend time starting from scratch. Even in those situations, it’s often hit or miss.
The reason is not surprising. Conceptually, large language models (LLMs) are trained on a ton of data and generate output by answering one question over and over: given the context and the question you asked, what is the most likely word/text to follow the previous word? They are essentially calculating the probability of one word following another, and generating output based on that. Despite this simple construct, given the amount of data that has gone into training these models, the results for the more general ChatGPT use cases (like drafting an email or summarizing a page) have been nothing short of impressive. But it’s important to remember that language models, by design, have limited analytical / math capabilities. In other words, when you ask the model “What is 2+2?”, it may give you the right answer - not because it knows math but because it has seen that text pattern before in its training data.
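The next-word idea can be illustrated with a toy bigram model - a deliberate oversimplification (real LLMs use neural networks over far larger contexts, not raw word counts), but it shows why “2+2” can be answered from text patterns alone:

```python
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Count, for each word, how often each other word follows it."""
    tokens = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def most_likely_next(model, word):
    """Return the highest-count (most probable) continuation of `word`."""
    return model[word].most_common(1)[0][0]

# Tiny made-up corpus standing in for training data.
corpus = "two plus two is four . two plus two is four . what is two plus two ?"
model = build_bigram_model(corpus)

# The model "answers" the math question purely because "is" is most
# often followed by "four" in the text it has seen - no arithmetic here.
print(most_likely_next(model, "is"))  # -> "four"
```

Scale the corpus up by many orders of magnitude and swap the counts for a neural network, and you get the gist of why these models are impressive at pattern-heavy tasks yet unreliable at genuine reasoning.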
Similarly, when it comes to code generation, the model does not really “know” the underlying concepts behind programming but is predicting results based on the text patterns in its training data. The consequence is the GitHub Copilot feedback above - it is sometimes good at generating the base code you need, but its ability to actually understand code, debug it, and explain it is limited. This will get better over time, but it’s hard to say whether it will ever reach the point of high accuracy and high reliability.
StackOverflow CEO Prashanth Chandrasekar describes it succinctly:
One problem with modern LLM systems is that they will provide incorrect answers with the same confidence as correct ones, and will ‘hallucinate’ facts and figures if they feel it fits the pattern of the answer a user seeks.
At some point you’re going to need to know what you’re building. You may have to debug it and have no idea what was just built, and it’s hard to skip the learning journey by taking shortcuts.
This is the opportunity for StackOverflow - their traffic drop may be permanent and it’s very likely that programmers come to StackOverflow less often for simpler questions (eg. they might not visit StackOverflow anymore for an off-the-shelf sorting algorithm). But where the product can shine is: 1) providing high accuracy / high-reliability answers to more complex questions that language models might not have the capability to answer, and 2) providing answers to questions in new technologies/problem spaces that the models have not had previous data to train on. OverflowAI is designed to directly tap into this opportunity.
There are three key facets they are betting on - direct answers to questions, usability from within developer environments, and supercharging knowledge within enterprises.
OverflowAI Search provides direct answers to users in a Q&A format (similar to ChatGPT), but provides several links to actual StackOverflow posts. Besides helping create trust, this also provides users with the opportunity to go deeper where the answer provided by AI does not fully solve the user’s problem. This strikes the delicate balance of giving a direct answer when the question is simple, but also guiding the user along a more exploratory path for difficult questions.
If the user is not satisfied with the responses, they can enter a chat-like interface to ask follow-up questions. If none of the answers are satisfactory, they can ask StackOverflow to draft a question on their behalf, ready to be posted to the Q&A forum. This experience also saves users from the common situation where the question they ask has already been answered.
The product also doubles down on usability by making all of this capability available from Visual Studio Code through an extension. This helps StackOverflow compete more effectively with natively integrated coding assistants by letting developers get answers from within their coding environments (instead of having to context switch and search from a browser).
In addition to this, for enterprise customers, OverflowAI is creating the ability to plug in several different sources of information within a company (internal Q&A, wiki pages, document repositories) to provide a cohesive Q&A experience for developers. Being able to utilize both internal and StackOverflow data - and, more importantly, exposing it easily in a Q&A-style interface - can be a big productivity boost for engineering organizations. They also intend to launch a Slack integration as a seamless interface for this capability.
What’s impressive about OverflowAI’s product approach is that it takes the company’s core asset (answers to difficult questions), exposes those answers in a highly usable interface wherever the users are (whether on Slack or within developer environments), and in turn creates a loop where users can leverage generative AI to submit new questions.
StackOverflow is not exactly a public company - they are owned by Prosus, which is in turn part of a bigger holding company, Naspers, which is publicly traded. Therefore, it’s hard to get clean revenue data, but a report from Prosus published in May 2022 paints a picture.
I think this is generally great news for StackOverflow. Their advertising business is in decline - which is not surprising. The success of advertising products is directly tied to traffic, and the trend there is downwards. Even if the revenue drop is explained by macro factors, it’s undeniable that the rise of generative AI products like ChatGPT and GitHub Copilot is going to reduce traffic to StackOverflow. This means that their advertising business becomes a less sizable revenue stream, which seems dire at first glance but isn’t necessarily bad.
The most likely long-term state of StackOverflow is that it continues to be a valuable source of answers for difficult questions, and the volume of questions and answers continues to grow with the company’s generative AI push to automatically draft/submit questions. I would go one step further and argue that if StackOverflow can keep the content engine running (i.e. have new questions submitted and answered), the quality of content on the platform will improve, as repetitive/easy questions will no longer make up the highest volume of content.
What will change is how they make money. For the company to thrive long-term, how they monetize should be tied to their core value. StackOverflow’s core value proposition is providing answers to users’ programming problems with the least friction possible + having a high volume of trustworthy data that future AI models can use. This aligns closely with their focus - the StackOverflow for Teams business (growing +60% year over year) and the launch of their data licensing program (where they charge AI companies for training on their data).
The company has made the right product and business investments to weather what was a potential disruption, and I am optimistic these investments will set StackOverflow on a new trajectory where they continue to thrive: a world in which they have fewer eyeballs (measured by page views) and therefore less advertising revenue, but a thriving content platform monetized through an AI-first enterprise product and data licensing. If this happens (and I think it will), it will be a turnaround story for the books.
🚀 If you liked this piece, consider subscribing to my weekly newsletter. Every week, I publish one deep-dive analysis on a current tech topic/product strategy in the form of a 10-minute read. Best, Viggy.