It’s commonly held that companies should strive to own the critical pieces of technology that drive their success. Without that ownership, you don’t really own your unique selling proposition; in fact, you could say you’re borrowing it from a technology owner who isn’t competing with you yet.
While this seems simple to address with the “build it in house” approach, most product managers recognize that, when it comes down to it, many digital products are an amalgam of interconnected first- and third-party services that in aggregate make up the end product.
Take, for example, Apple, which builds its phones using screens from Samsung and LG, cameras from LG, Sharp, and O-film, and Wi-Fi chips from Intel and Broadcom. Each of these manufacturers provides components that go into the iPhone, but the heart of the device is the Apple silicon chip, which carries a design spec owned by Apple and runs iOS, Apple’s own operating system.
Apple has decided that competing with commodity screens, cameras, and the like takes a backseat to the experience of using iOS powered by, and tuned to, its proprietary processor. The power of this strategy has not gone unnoticed, as we’ve seen with Google’s recent decision to release the Pixel 6 powered by its proprietary Tensor processor.
Product managers need to decide which services require resources to stay competitive and which should be bought off the shelf. Building everything in house ends up sapping resources and saddling your business with monolithic tech stacks and mounds of technical debt.
To help understand where to invest time in a product, I like to use a set of evaluation matrices I call the Capability Assessment. I designed this assessment to provide visual cues for understanding the role of a feature or component, scored along three evaluation scales.
Development Effort: The first scale measures the amount of development time required. At one end of the scale are no-code solutions; at the other end is custom code.
Use Cases: The second scale measures whether the component applies to broad, generic use cases or to narrow, industry-specific ones.
Performance: The third scale measures the performance of the component as implemented today. One end of this scale holds poorly performing tech, while the other end holds your best-in-class features. It’s important to recognize that performance is a measure of a feature or component’s delivery against expectations. Best-in-class functionality is what a user would describe as the best implementation on the market.
While these three scales could be combined into a three-dimensional plot of your product’s capabilities, plotting and interpreting the results in a single Cartesian view is less useful than plotting two two-dimensional matrix views, each designed to inform investment decisions.
In this first view, we plot the components of the product along the Development Effort scale (x-axis) and the Use Cases scale (y-axis). Using this approach, we can visualize and draw conclusions about our product based on which features/components show up in each of our four quadrants.
Differentiators: In the top right quadrant, we find features/components that require a significant development investment and are very specific to a particular industry. These are the items you’ll want to own in your product: the features/components that differentiate you from the competition. No one else in your competitive space has access to this capability.
Table Stakes: In the top left quadrant, we find features/components that are industry-specific but for which there are solutions on the market you can license and enable with little code. I consider these features table stakes for the industry, but not differentiators; anyone can license the same features and gain the same benefit.
Functional Enablers: In the bottom right quadrant, you have features/components that are not industry-specific and require more development effort. This includes supporting tech that needs to be integrated into your specific ecosystem. For a typical business, this might include CRM systems that need to tie into transaction data, or fraud software that needs pipelines of data from a data warehouse. These features are required for the business to function but don’t contribute to the product story.
Business Tools: Finally, in the bottom left quadrant, we find features/components that are low-code and generic in applicability. This is where we find everyday business tools like ticketing systems, word processors, and other business platforms. These tools enable your business to function but, again, don’t contribute to the product story.
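To make this first view concrete, here’s a minimal sketch in Python of how you might score components and sort them into the four Use Case quadrants. The component names, the 1–10 scores, and the midpoint threshold are all hypothetical stand-ins for whatever scoring scale your team agrees on.

```python
# Minimal sketch: sort scored components into the Use Case matrix quadrants.
# Component names and 1-10 scores are hypothetical examples, not real data.

components = {
    # name: (development_effort, use_case_specificity), both scored 1-10
    "recommendation engine": (9, 8),
    "industry pricing rules": (3, 9),
    "CRM integration": (8, 2),
    "ticketing system": (2, 2),
}

MIDPOINT = 5  # assumed threshold splitting each scale into low/high


def use_case_quadrant(effort: int, specificity: int) -> str:
    """Map development effort (x) and use-case specificity (y) to a quadrant."""
    if specificity > MIDPOINT:
        return "Differentiator" if effort > MIDPOINT else "Table Stakes"
    return "Functional Enabler" if effort > MIDPOINT else "Business Tool"


for name, (effort, specificity) in components.items():
    print(f"{name}: {use_case_quadrant(effort, specificity)}")
```

Running the sketch prints one quadrant label per component, which is enough to seed the matrix chart and start the conversation that follows.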
Based on this chart, you’re able to draw some immediate conclusions about where to spend your development effort. Ideally, your teams invest their effort in Differentiators, but you’ll often find that not to be the case. Visualizing use cases in this way often leads to interesting philosophical discussions about areas of focus.
Are you spending too much time on Functional Enablers that are needed but aren’t pushing the product story? Are you investing in a Differentiator that’s available off the shelf from third parties? Or, perish the thought, perhaps you have someone putting development effort into a Business Tool.
While this analysis should help you assess where to put your effort, the second view on this data can help you diagnose the success of that effort.
This view of the product shifts our perspective from deciding where to put effort to gauging the success of that effort. To build this view, we plot development effort along the x-axis and performance along the y-axis.
Winners: Found in the top right quadrant are the items that required more investment in resources and that we consider best in class. Here’s where your efforts pay off by setting the industry standard for the feature/component.
Free Rides: In the top left quadrant are the best-in-class tools you’ve licensed. They’re best of breed and help you distinguish yourself as a high-level performer.
Flounders: In the bottom right quadrant are the features/components you’ve spent time on that are performing poorly. These are the items you’ve built that don’t deliver on expectations.
Millstones: The bottom left quadrant is where you find the features/components your product uses that hamper performance. These features/components are not reliant on custom development and are replaceable.
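A similar sketch covers this second view. The names and scores are again made up for illustration; the only change from the earlier sketch is that the y-axis is now a performance score rather than use-case specificity.

```python
# Minimal sketch: sort the same components into the Performance matrix quadrants.
# Performance scores (1-10) are again hypothetical.

components = {
    # name: (development_effort, performance), both scored 1-10
    "recommendation engine": (9, 9),
    "industry pricing rules": (3, 8),
    "CRM integration": (8, 3),
    "ticketing system": (2, 2),
}

MIDPOINT = 5  # assumed threshold splitting each scale into low/high


def performance_quadrant(effort: int, performance: int) -> str:
    """Map development effort (x) and performance (y) to a quadrant."""
    if performance > MIDPOINT:
        return "Winner" if effort > MIDPOINT else "Free Ride"
    return "Flounder" if effort > MIDPOINT else "Millstone"


for name, (effort, performance) in components.items():
    print(f"{name}: {performance_quadrant(effort, performance)}")
```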
Similar to the previous assessment, we can use the Performance Matrix to better understand the product and make some critical decisions. I believe every product needs at least one feature/component in the Winners category. In Google-speak, the Winners are your 10x’s: the features you do ten times better than the competition.
While it’s natural for new features to start as Flounders, too many features in this category can mean you’re splitting resources too thinly. Flounders either need further investment to become Winners or need to be retired. Millstones are the features/components that are holding back your product. Because they require little effort to replace, swapping them out could help your product perform at a higher level.
The magic comes when you combine the two analyses. When you look at the Performance Matrix with the knowledge gleaned from the Use Case Matrix, you’ll be surprised at the conclusions you can extract.
Perhaps some of your Flounders are underperforming because you’re spending effort building a capability that’s table stakes: technology best licensed from others who focus on that capability. To me, this is equivalent to a small retail website trying to build its own shopping cart experience. Compared to the best in class (e.g., Shopify), it’ll be hard for the retailer to meet the expectations customers have for the shopping cart experience. A small retail site doesn’t differentiate on the shopping cart; instead, it needs to focus on merchandising, customer acquisition, and conversion. The best bet in this scenario is to license software from a shopping cart provider and convert the Flounder into a Free Ride.
Maybe your Winners aren’t Differentiators in your product strategy. That suggests a strategic pivot to better align the product story with performance. There are plenty of examples of start-ups that pivoted their business towards a Differentiator that wasn’t quite aligned with their original product strategy.
Following the retail example above, if the retailer had built the best shopping cart experience, having that feature in the Winners category might suggest a pivot to a different business model. This is how Shopify got started: the industry’s leading ecommerce platform was born out of an attempt to create an online store for snowboarding equipment.
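If you capture both quadrant labels for each component, this kind of cross-referencing can itself be sketched in a few lines. The labels below are assumed outputs of the two earlier sketches, and the flagged follow-ups mirror the two cases just described: a Flounder that’s really table stakes, and a Winner that isn’t one of your Differentiators.

```python
# Minimal sketch: cross-reference the two matrices to flag follow-up actions.
# Quadrant labels are assumed outputs of the two earlier sketches.

assessments = {
    # name: (use_case_quadrant, performance_quadrant)
    "recommendation engine": ("Differentiator", "Winner"),
    "shopping cart": ("Table Stakes", "Flounder"),
    "CRM integration": ("Functional Enabler", "Winner"),
}

for name, (use_case, performance) in assessments.items():
    if use_case == "Table Stakes" and performance == "Flounder":
        print(f"{name}: consider licensing it to turn the Flounder into a Free Ride")
    elif performance == "Winner" and use_case != "Differentiator":
        print(f"{name}: a Winner outside your Differentiators; worth a strategy conversation")
```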
The Performance Matrix is also useful for illustrating a long-term roadmap. Plotting the requisite Differentiators on a Performance Matrix and showing their movement over multiple years is an easy way to communicate a long-term roadmap vision without the specificity that timelines require.
It’s often the case that product managers get mired in the weeds of feature requirements and stakeholder management, but having a firm grasp on what your product does, what makes it different, and how you plan to win is crucial to the role. Investing in a Capability Assessment that you can illustrate with a simple visualization grounds your decision-making and strategy and makes it easier to explain to others.
Capability Assessments also serve as a high level roadmap to help illustrate progress and the direction of travel. Who knows, it may even help you pivot around capabilities your teams excel at delivering and lead you in an entirely different direction.
Featured image by JESHOOTS.COM on Unsplash