I have been, and always will be, a fan of Steve Blank and his advocacy of customer discovery in startups. Three years ago, when we started TalFinder, studying many of the frameworks he put in place helped us make sense of the chaos and get some things right.
This is context for the post, so bear with me. When we started TalFinder, we were on a mission to simplify developer evaluation in the hiring process. We wanted to build a unique solution that could address the gaps in the market and speed up a ridiculously slow process. At the center of our value proposition was the fact that we rejected the purely algorithmic approach to hiring, where candidates are tested only on data structures and algorithms. We believed it would make more sense to give candidates a real-world problem and have them submit a solution. Any real-world problem inherently involves the use of logic, data structures, and so on. In addition, it tests your ability to deliver a solution using your knowledge of frameworks and the technology stack.
This is already being done with take-home assessments, so we believed the market was there. We hypothesized that we would be providing value if we offered a platform where this could be done easily, without the mess of building solutions locally on a laptop and sending GitHub links that then have to be rebuilt for evaluation.
Notice I called it a hypothesis, because that is all it was. Before I get back to customer discovery:
“If you think you are the customer/user of your product, get real. You may be an ideal user but you are severely biased because you are building the product. Get out of the damn room and do customer discovery.”
After a lot of struggle, we took our first version (I don't want to call it an MVP, because it wasn't one) to a customer who badly wanted a solution like ours. We worked with them and built a few improvements, and nothing shook our hypothesis. But we noticed one worrying thing: some of our success metrics didn't look good.
- Number of candidates submitting code: reduced drastically. This is a positive sign, because candidates who previously made it to the interviews and then got rejected now didn't make it past the first round.
- Success rate of candidates in subsequent rounds: high. This is the outcome we wanted.
- Number of candidates moving from pending evaluation to decision statuses like approved or rejected: low. Huh? This should have been high, because we made evaluation easier. At least, that was the hypothesis.
As we puzzled over this, all hell broke loose. A key user lost his cool, called me on a Saturday evening, and ended the conversation with the words: "I might as well use ****." **** was our biggest competitor. When you strip away the irked-customer talk, he was giving us some serious feedback. We had focused so much on the IDE and the developer experience that we had missed the other assumption we had made: that the evaluator would hugely benefit from the web-IDE experience.
This assumption wasn't wrong, but it didn't alleviate the pain as much as we thought it would. That is why the third metric didn't look good.
The immediate answer was automated evaluation. And if we were going to automate it, we might as well do it in a way that anyone could read the results, not just a technical hiring manager. Our new north star: any time the evaluator has to open the IDE, we have lost. With that in mind, we rolled out the first few features within 48 hours and the next set within the following 72 hours. We not only retained the customer but also ended up with a product that was far more attractive and had a much more appealing roadmap. What we did in the days that followed didn't just turn that customer around; it also brought in new prospects. Pleasing a single customer is never the way to go.
While all of this was happening, our CEO's main ideas about what we should build centered on the IDE and on improving the assessment-creation experience. Our roadmap would have veered off toward another destination. It was customer discovery that put the roadmap on a completely different path. I then had to convince the board that we needed to go down a different route.
So, what did all this mean?
- The sooner you get out and put your product in the hands of the customer, the better.
- You can keep moving in any direction, but the right move usually happens only after a customer uses your product.
- The customer has problems and will start suggesting solutions; it's up to you to find the real problems and come up with sustainable solutions.
- What you think is your best value proposition probably isn't.
- In hindsight, we could have gotten to that point a lot faster with a good MVP instead of a V1.

Oh, and did I mention that we built a lot of features in V1 that no one except us used? We did. That is waste: expensive waste that can kill your startup.
This post is a part of my writings on lessons learnt from my startup. You can read my other posts here:
How to Waste Money and Time in the Most Painful Way
Startup Lessons: What I Learnt Pricing My Product