Not so long ago, AI startups were the new shiny object that everyone was getting excited about. It was a time of seemingly infinite promise: AI was going to not just redefine everything in business, but also offer entrepreneurs opportunities to build category-defining companies.
A few years (and billions of dollars of venture capital) later, AI startups have re-entered reality. The time has come to make good on the original promise and prove that AI-first startups can become formidable companies, with long-term differentiation and defensibility.
In other words, it is time to go from “starting” mode to “scaling” mode.
To be clear: I am as bullish on the AI space as ever. I believe AI is a different and powerful enough technology that entire new industry leaders can be built by leveraging it, as long as it is applied to the right business problems.
At the same time, I have learned plenty of lessons in the last three or four years by being on the board of AI startups, and talking to many AI entrepreneurs in the context of Data Driven NYC. I’ll be sharing some notes here.
This post is a sequel to a presentation I made almost three years ago at the O’Reilly Artificial Intelligence conference, entitled “Building an AI Startup: Realities & Tactics”, which covered a lot of the core ideas about starting an AI company: building a team, acquiring data, finding the right market positioning. A lot of those concepts still hold, and this post will focus more on specific lessons around scaling.
To get the semantics out of the way: as a result of market hype over the last few years, it has become unclear what an “AI startup” actually is.
There are basically three categories of AI startups.
This post is mostly about “AI-first” startups, although some lessons may apply to other categories, and perhaps to “deep tech” (or “frontier tech”) startups in general.
Which AI-first startups are actually scaling?
Short answer: not many, just yet. It’s a work in progress.
If you look at the recent list of top 100 AI companies from CB Insights, the majority of them are somewhere around the Series B or Series C stage. Very few have reached truly large scale just yet, particularly in the AI-first category.
Why is that?
Most of those companies are still pretty young — perhaps between 4 and 6 years old. The current wave of AI startups really got going only after 2012 (when deep learning demonstrated its power at the annual ImageNet competition).
Certainly, some of those startups have been very successful at raising large amounts of money, and have consequently hired a lot of people — which is one measure of scaling.
From a revenue perspective, however, my sense is that most AI-first companies are still reasonably early — meaning, in the general 7-digit or low 8-digit ARR zone. (In the other categories mentioned above, several are much further along, like Dataiku, which I’m intimately familiar with, or UiPath.)
VC Financing: The deep tech challenge
Another reason not many AI-first startups have reached scale yet is that, at their core, they are “deep tech” companies.
It’s a reality that tends to be forgotten a bit too easily these days. Many in the VC community have jumped to the conclusion that, since AI will ultimately be a part of every company, there isn’t anything special or different about AI-first companies. Perhaps that is directionally correct long term, but at least for now, this conclusion is very premature.
Certainly, there’s been tons of progress that makes things easier — open source frameworks like TensorFlow are available, cloud providers offer rapidly improving AI infrastructure, etc.
I’d also argue that the big problem everyone was afraid of a few years ago — acquiring all the data necessary to train super data-hungry AI algorithms — has turned out to be more tractable than anticipated. Yes, Google and Facebook have a formidable advantage, but as a startup, you don’t need all the data in the world, just the data necessary to solve the specific problem you’re going after (as long as you’ve defined it narrowly and/or vertically enough). It’s been a testament to entrepreneurial resourcefulness to see AI-first startups hustle and figure out data acquisition. There are lots of different strategies here, from partnering with data-rich institutions (e.g., hospitals for radiology images) to creating their own datasets (either synthetically, or effectively manually by spinning up and down entire teams, often overseas).
Regardless, it’s still much harder and more time-consuming to build an AI-first startup than your regular SaaS (or consumer) startup. You just can’t go lean and iterate quickly. Good ML engineers continue to be in short supply. Data acquisition, although tractable, still takes a lot of time and effort. Training models to an acceptable level of performance also takes time. TensorFlow may be open sourced, but deploying it at scale still requires rare expertise.
As a result, from a fundraising perspective, AI-first startups will often have an investment profile that is closer to their deep tech cousins (industrial startups, IoT, space tech, biology, etc.). Meaning, they have a lot of R&D to figure out before they’re even able to ship a real AI product, so they’re going to be behind their nimbler peers at each round, in terms of business metrics. They’ll show up at the Series A with a couple of pilots. At the Series B, they’ll often have product-market fit, but an immature go-to-market machine. And so on and so forth.
Early stage investors have had time to adjust to this reality and have evolved their investment criteria (see this “napkin” from Louis Coppey at Point Nine Capital). Later stage investors, who are typically more metrics-oriented, are now getting to know this generation of AI startups, and trying to figure out what to make of them. Ironically, “moonshot” AI companies (e.g. autonomous driving) may arguably have an easier time raising later stage rounds, as it is unclear what metrics should apply to them, so the deep tech narrative may carry them further. For other AI startups, it will be much harder to avoid comparisons with regular consumer or enterprise software companies, so it is important to not fall too far behind.
R&D: Knowing when to stop
To avoid falling too far behind, it’s important to not get stuck in R&D mode for too long.
Just like any deep tech company, AI-first startups are exposed to the “science project” risk.
Part of it comes with the DNA of the founding teams. To build an interesting AI-first company, you need a lot of really smart engineers who like to conclusively solve really hard technical problems. “Engineers gone wild” fun ensues.
In addition, progress in AI is often capricious and resists clear timelines. You’ll often hear people talk about how they got most of the way (say, 80% accuracy) in a matter of weeks or months, but then got stuck, sometimes for years, trying to get to 90% or 95%. Sometimes companies, frustrated by the extended plateau, will spend months of effort on a completely different approach, only to end up with a tiny improvement. Or even worse, sometimes you see the performance of the AI actually degrade, for example as a result of adding new datasets.
Bottom line, people often have no real idea when the next breakthrough in their AI product will happen (which is why no one has a clue when Level 5 autonomous driving will actually happen).
So it’s often easy, even for commercially-minded teams, to slip into extended R&D loops.
There are some hacks around that, but no silver bullet. For example, you’ll see startups building a V1 of their product that doesn’t have any actual AI in it, and functions with software and humans. It’s great in terms of getting real feedback from customers. But it’s basically kicking the can down the road. At some point, you’ll have to hot swap your “fake” V1 for a “real” AI V2, and things can go really wrong then. You’re basically trying to change the engine while flying.
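One way to reduce the hot-swap risk is to put the human-powered V1 and the model-powered V2 behind the same interface from day one, so that the swap becomes a routing change rather than a rewrite. A minimal sketch of that idea (all class names, the expense-classification task, and the rollout mechanics are hypothetical, for illustration only):

```python
import random
from abc import ABC, abstractmethod


class ExpenseClassifier(ABC):
    """Common interface: the human-powered V1 and model-powered V2
    are interchangeable from the product's point of view."""

    @abstractmethod
    def classify(self, description: str) -> str: ...


class HumanQueueClassifier(ExpenseClassifier):
    """V1: 'fake' AI — routes the task to a human operations queue.
    Stubbed here; in production this would enqueue and await an answer."""

    def classify(self, description: str) -> str:
        return "travel" if "flight" in description.lower() else "other"


class ModelClassifier(ExpenseClassifier):
    """V2: the 'real' AI, introduced once the model is good enough."""

    def classify(self, description: str) -> str:
        return "travel" if "airline" in description.lower() else "other"


class GradualRollout(ExpenseClassifier):
    """Sends a configurable fraction of traffic to the model,
    and the rest to humans — no mid-flight engine change needed."""

    def __init__(self, human, model, model_fraction=0.1):
        self.human, self.model = human, model
        self.model_fraction = model_fraction

    def classify(self, description: str) -> str:
        backend = self.model if random.random() < self.model_fraction else self.human
        return backend.classify(description)
```

With this shape, moving from V1 to V2 is a matter of dialing `model_fraction` up as the model's measured quality improves, rather than ripping out the human-powered plumbing all at once.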
Avoiding this extended loop is not easy, but ultimately it’s a question of leadership (starting with the CEO) to manage shipping timelines and the cash in the bank.
I have also learned that it’s really helpful to build the Product function early in AI-first startups, as it provides a nice counterbalance to the R&D teams.
Product: Creating value beyond AI
Plenty of interesting lessons on the product front.
The bad news: Customers expect your AI to be superhuman
For all the amazing progress of the last few years, AI is still a very imperfect technology that fails very often. This is particularly obvious (and problematic) in cases where it needs to perform at 100% accuracy (self-driving cars, medical diagnosis).
Unfortunately, customers don’t care. Perhaps as a result of all the hype, they expect your AI to be superhuman.
Human executive assistants routinely screw up meetings, bookkeepers miscategorize expenses, doctors misdiagnose… We somehow have accepted this when humans are involved, but we have essentially no tolerance for error once a machine is involved.
To make matters worse, most of us have a big blind spot: we’ll often give the AI the wrong instruction, causing it to fail, but we’ll still blame the machine!
Humans in the loop don’t scale
Given the above, it’s pretty clear that humans are often going to need to be in the mix to make AI work in the real world.
A lot of AI entrepreneurs came to this conclusion early, and built little armies of people behind the scenes to work alongside the AI. That works well for the end customer (the product works), but it is a disaster for the startup from a business-model perspective. Of course, the idea is that the AI will get good and eventually replace most of the people involved, but the reality, as per the above, is that the timeline is highly unpredictable. In the meantime, you remain in negative gross margin territory, which makes the business really hard to finance and grow. That’s a prime example of a tactic that just doesn’t scale.
Learning to fail gracefully
The trick is to get your customers (not your employees) to be the “humans in the loop”.
Google, of course, has been doing this for decades now. These days, we all spend time telling Google Photos that yes, this is the same kid in those two pictures. Your computer vision system could not figure it out, but I’m happy to tell you and do my bit to make your algorithms better. You’re welcome, Google!
Google makes it look easy, but it’s really hard to nail this “AI-to-human handoff” from a product perspective.
A big part of it is for the AI to “fail gracefully”. Meaning, it needs to communicate that it has failed clearly and early, before users get frustrated or, worse, before real damage is done.
Then there needs to be a very clear workflow for humans to take over. Simple GUIs, like buttons offering the human multiple choices, have proven surprisingly effective from that perspective.
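The pattern above can be sketched in a few lines: when the model’s confidence falls below a cutoff, it declines to answer and instead surfaces its top candidates as explicit choices for the human. (The threshold, the label names, and the response shape are all hypothetical assumptions, not any particular product’s API.)

```python
def handle_prediction(scores, threshold=0.8):
    """Return the model's answer when it is confident; otherwise fail
    gracefully by asking the human to pick among the top candidates.

    scores: dict mapping candidate labels to model confidence in [0, 1].
    """
    best_label = max(scores, key=scores.get)
    if scores[best_label] >= threshold:
        # Confident enough: act autonomously.
        return {"status": "auto", "label": best_label}
    # Graceful failure: surface the uncertainty early, before the user
    # gets frustrated, and offer a simple multiple-choice handoff.
    top_choices = sorted(scores, key=scores.get, reverse=True)[:3]
    return {"status": "needs_human", "choices": top_choices}
```

The key design choice is that the low-confidence branch returns structured choices rather than a free-form error, which maps directly onto the simple button-based GUIs described above — and each human answer can be logged as a new training label.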
Mastering the social dynamics of AI
There’s a really interesting social aspect to ensuring wide adoption of AI, especially in the enterprise. Humans often view AI (and automation in general) with some level of suspicion, and the trick is for the AI to empower, rather than threaten, employees.
Experience to date has shown that humans and AI don’t really work well together when they are positioned “side by side” — for example, customer service scenarios where the AI makes real-time suggestions to reps about how they should reply to a customer inquiry. This effectively puts the AI and humans in competition. I have heard of cases where employees sabotaged the AI as a result.
Conversely, putting humans in a position where they can correct and improve the AI works much better, from a social adoption perspective. This works particularly well when the AI has been built with transparency and “explainability” (a big trend in AI) in mind.
The best companies go one step further and help their enterprise customers build entire teams around managing and improving the AI, ensuring alignment through empowerment (see our portfolio company Ada Support, and their Automated Customer Experience teams).
Adding RPA to the mix
Finally, the most successful AI-first startups I have seen understood early that the AI itself should not be the sole driver of value for the product.
There’s often a number of low (or at least, lower) hanging fruits that are much easier to build than the AI itself.
Those can include anything from simple features (adding a Zoom integration to a calendar) and connectors to other systems (to ingest data from the most commonly used enterprise data sources), to integration with payment systems (so customers can buy once the AI has made a recommendation) or “RPA”-style automation of simple, repetitive tasks.
The irony is that those integrations or RPA features can matter just as much to the end user as the AI-powered parts. Sometimes value to the end customer and engineering complexity are infuriatingly disconnected.
Sales & Marketing: No need to reinvent the wheel
As is often the case in deeply technical companies, it is not uncommon for AI-first startups to exhibit reluctance towards sales and marketing. Some companies struggle with the idea of hiring salespeople, often considered to be of lesser IQ than engineers. Some decide to never hire any salespeople, and instead use a “forward-deployed engineer” model, a la Palantir. Many AI founders’ ultimate fantasy is a product that basically sells itself because the technology is just so darn good.
Of course, for any startup to succeed at scale, distribution matters just as much as product. The most successful AI CEOs embrace that reality early, and when their background is purely technical, make it a mission to master all business aspects of building a venture. They understand that this is not an area where they need to start from first principles, and instead rely on the experience of senior sales and marketing hires that they actively recruit and empower.
Another common issue with engineering-heavy teams is leading sales and marketing with a “technology-first” message. In the case of AI, this has been interesting: for the last few years, talking about AI to potential customers was actually very effective. Everyone was curious about AI, and many people would take first meetings (or try a new consumer product) partly because of that. Many startups I am familiar with have since come to the correct realization that, when it comes to actual buying decisions, customers do not really care whether it is AI or not. Some have effectively removed AI from their marketing and sales messages. I would actually argue that the AI message still has a lot of legs. While it probably does not make sense to lead with AI, there’s value in still weaving it into the overall presentation of the product and company.
If a lot of this post sounds like a recommendation to AI-first companies to be “less of an AI company” in order to scale, it’s probably because it is.
The risk for AI-first startups is to get stuck for too long in “deep tech” mode, with an engineering-heavy team, and miss all the milestones that will make the company financeable by VCs. For most of them, there’s typically an opportunity to build “good enough” AI that can be supplemented and enhanced by simpler software and RPA type features, and distributed efficiently through tried-and-tested marketing and sales methods, as long as the relevant teams are onboarded early enough.
If they can do the above, I’m a big believer that AI-first startups will truly scale and emerge as category-leading, sometimes category-defining, companies. We’re still early in this wave of creation of AI startups, there are many more opportunities, and I’m excited to see how it all plays out.
Cover picture: Conversation with Dileep George, CTO of Vicarious, at Data Driven NYC.