Getting to “Advanced Class”: Compounding Value with the Right Partners
I talked about the importance of compounding value in part one of this series and will touch on this concept a little more here, along with the criticality of domain knowledge. Speaking of the latter, hundreds of IoT platform providers claim they can deliver a use case like Predictive Maintenance (PdM), but few have the domain knowledge to actually pull it off.
A PdM solution requires not only analytics tools and data science but also intimate knowledge of failure patterns based on various attributes (vibration, oil particulates, temperature, motor current, etc.). Often an operator on the factory floor who has “been there, done that” sits alongside a data scientist to help program the analytics algorithms based on that tribal knowledge. A former colleague once worked on a PdM solution with a line operator named Brad, who was the expert who helped the data scientist understand which machine and process behaviors were non-issues despite looking like bad news, and vice versa. They jokingly called the result “Bradalytics.”
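For intuition on what “Bradalytics” might look like in code, here’s a minimal sketch of operator-in-the-loop anomaly scoring; all of the field names, thresholds, and the “benign spike” rule are hypothetical, purely to show how tribal knowledge can sit alongside generic analytics:

```python
# Minimal sketch of operator-in-the-loop predictive maintenance scoring.
# Thresholds, field names, and the benign-pattern rule are hypothetical,
# illustrating how tribal knowledge can be encoded next to plain analytics.
from dataclasses import dataclass

@dataclass
class Reading:
    machine_id: str
    vibration_mm_s: float        # RMS vibration velocity
    oil_particulates_ppm: float
    temperature_c: float
    motor_current_a: float

def naive_alert(r: Reading) -> bool:
    """Generic analytics: flag anything exceeding textbook thresholds."""
    return (r.vibration_mm_s > 7.1 or
            r.oil_particulates_ppm > 400 or
            r.temperature_c > 85 or
            r.motor_current_a > 30)

def operator_says_benign(r: Reading) -> bool:
    """Brad's knowledge: a brief vibration spike with no heat buildup is
    a routine tool-change cycle, not an impending failure."""
    return r.vibration_mm_s <= 9.0 and r.temperature_c < 60

def needs_attention(r: Reading) -> bool:
    return naive_alert(r) and not operator_says_benign(r)

if __name__ == "__main__":
    sample = Reading("press-07", vibration_mm_s=8.2,
                     oil_particulates_ppm=120, temperature_c=52,
                     motor_current_a=18)
    print(needs_attention(sample))  # False: Brad knows this spike is routine
```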
Further, there’s a certain naivety in the industry today in pushing IoT solutions where an established “good enough” precedent already exists. In the case of PdM, manually collecting data with a handheld vibration meter once a month or so is a common practice that accurately predicts impending machine failures, because these failures don’t typically happen overnight. An industrial operator can justify this manual data collection as OpEx, and the only things permanently installed on the machines are brass pads indicating where to apply the contact-based handheld sensor; the readings are then manually loaded into an analytics tool.
Similar manual procedures are all too common in other industries, such as USB data loggers used to monitor temperature in cold chain logistics operations. In another example, structural engineers have historically attached an expensive sensor package to monitor the structural integrity of a bridge or building for a few weeks, only to then move that equipment on to the next piece of infrastructure.
The promise of IoT is that this instrumentation is becoming so inexpensive that it can be deployed permanently, everywhere, to acquire real-time data from the physical world. However, the value of deploying this infrastructure must still exceed the cost of deploying and maintaining it through its full lifecycle, plus the associated risk of maintaining security and privacy.
Don’t get me wrong: PdM enabled by IoT is a valuable use case. Even though machines typically fail over a long period of time, it’s expensive to roll a truck to do repairs, especially if an industrial process experiences a loss of production. For example, downtime in a major jetliner factory can run upwards of $20k a minute! It’s just important to think about the big picture and ask whether you’re trying to solve a problem that has already been solved.
Looking at the bigger picture, a supply chain linked to a manufacturing process is a great example of an ecosystem that easily justifies an IoT solution for real-time monitoring and analytics. In a multi-stage manufacturing operation, the cost of discovering a flaw in a given part climbs with each successive process stage. The cost is even higher if that part gets into the supply chain, and higher still if a defective product reaches a customer, not to mention the damage to the manufacturer’s brand. Here the cumulative value of instrumenting throughout the product lifecycle is very high and definitely warrants a solution that can report issues the moment they occur.
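To make that escalation concrete, here’s a back-of-the-envelope sketch; the dollar figure and the 10x-per-stage multiplier are purely illustrative assumptions, not data from any real operation:

```python
# Back-of-the-envelope sketch of why early, real-time detection pays off.
# The base cost and escalation factor are made up for illustration only.
ESCALATION = 10      # assumed cost multiplier per stage a defect slips through
BASE_COST = 50.0     # assumed cost ($) to scrap/rework a flawed part at its source

stages = ["in-process", "end-of-line", "in supply chain", "at customer"]

for i, stage in enumerate(stages):
    cost = BASE_COST * (ESCALATION ** i)
    print(f"Defect caught {stage}: ~${cost:,.0f}")

# Illustrative output:
#   Defect caught in-process: ~$50
#   Defect caught end-of-line: ~$500
#   Defect caught in supply chain: ~$5,000
#   Defect caught at customer: ~$50,000
```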
Speaking of the holistic value I touched on in part one and above, the real potential is not just remote monitoring of a single machine but of a whole fleet of interconnected machines. Imagine a world where you can send a tech out with work instructions to preventively repair a fleet of machines, getting the most out of a truck roll to a given location. On top of that, imagine the algorithm could tell you that it will cost less money, in the long run, to replace a specific machine altogether rather than trying to repair it yet again.
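That repair-vs-replace call ultimately comes down to comparing expected costs over a planning horizon. Here’s a minimal sketch of that comparison; every figure (repair rates, downtime cost, replacement price) is a hypothetical placeholder:

```python
# Toy repair-vs-replace comparison over a planning horizon.
# All figures (repair frequency, costs, replacement price) are hypothetical.
def expected_cost_repair(horizon_yrs: int, repairs_per_yr: float,
                         cost_per_repair: float, downtime_cost: float) -> float:
    """Keep repairing the aging machine."""
    return horizon_yrs * repairs_per_yr * (cost_per_repair + downtime_cost)

def expected_cost_replace(horizon_yrs: int, new_machine_price: float,
                          repairs_per_yr_new: float, cost_per_repair: float,
                          downtime_cost: float) -> float:
    """Replace now, then maintain the (more reliable) new machine."""
    return (new_machine_price +
            horizon_yrs * repairs_per_yr_new * (cost_per_repair + downtime_cost))

if __name__ == "__main__":
    keep = expected_cost_repair(5, repairs_per_yr=3,
                                cost_per_repair=4_000, downtime_cost=12_000)
    swap = expected_cost_replace(5, new_machine_price=90_000,
                                 repairs_per_yr_new=0.5,
                                 cost_per_repair=4_000, downtime_cost=12_000)
    print(f"Keep repairing: ~${keep:,.0f}")   # ~$240,000
    print(f"Replace now:    ~${swap:,.0f}")   # ~$130,000
```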
It goes beyond IoT and AI. Last I checked, systems ranging from machines to factory floors to vehicles are composed of subsystems from various suppliers, so open interoperability is also critical when it comes to Digital Twins.
Sharpening the edge
An agreed-upon taxonomy will help put an end to the current market confusion caused by various industries (cloud, telco, IT, OT/industrial, consumer) defining the edge with a strong bias toward their own interests and point of view. It will also help retire taxonomies built on ambiguous terms such as “thin and thick” or “near and far,” which mean different things to different people.
The IoT component of the Smart Device Edge (as compared to client devices like smartphones and PCs that also live in this sub-category) spans a single node with 256MB of memory up to a small server cluster, deployed outside of a physically-secure data center or Modular Data Center (MDC). The upper end of this spectrum is increasingly blurring into the Kubernetes paradigm thanks to efforts like K3s. However, due to resource constraints and other factors, these nodes will not have the same functionality as higher edge tiers leveraging full-blown Kubernetes.
Below the Smart Device Edge is the “Constrained Device Edge.” This sub-tier consists of a highly fragmented landscape of microcontroller-based devices that typically have their own custom OTA (over-the-air update) tools. Efforts like Microsoft’s Azure Sphere OS are trying to address this space, and it’s important for the industry to band together on such initiatives because of the immense fragmentation at this tier.
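As a rough illustration of how these tiers differ, here’s a toy classifier based on the attributes described above (memory footprint, microcontroller vs. application processor, physical security of the facility); the thresholds and the label for the higher tiers are simplified assumptions on my part, not part of any formal taxonomy:

```python
# Rough illustration of bucketing a node into edge tiers using the
# attributes mentioned above. Thresholds and the higher-tier label
# are simplified assumptions for illustration only.
def classify_edge_tier(ram_mb: int, microcontroller: bool,
                       physically_secure_facility: bool) -> str:
    if microcontroller:
        return "Constrained Device Edge"
    if physically_secure_facility:
        return "higher edge tier (secure data center / MDC)"
    if ram_mb >= 256:
        return "Smart Device Edge"  # single node up to a small server cluster
    return "unclassified (check attributes)"

print(classify_edge_tier(ram_mb=512, microcontroller=False,
                         physically_secure_facility=False))
# -> Smart Device Edge
```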
Ultimately, it’s important to think of edge to cloud as a continuum; there isn’t a separate “industrial edge,” “enterprise edge,” and “consumer edge,” as some would contend. Instead, it’s about building an ecosystem of technology and services providers that create unique value-add on top of more consistent infrastructure while taking into account the necessary tradeoffs across the continuum.
In closing
I’ll close with a highlight of the classic Clayton Christensen book The Innovator’s Dilemma. I talk with a lot of technology providers and end users, and I often see people stuck in this pattern, thinking, “I’ll turn things around if I can just do what I’ve always done, but better.” This goes not just for large incumbents but also for fairly young startups!
One interaction that has stuck with me over the years was an IoT strategy discussion with a large credit card payments provider. They were clearly trapped in The Innovator’s Dilemma, wondering what to do about mobile payment players like Square that had disrupted their market. As they talked about how they didn’t want another Square situation to happen again, I asked them, “Have you thought about when machines start making payments?” The fact that this blew their minds is exactly why another Square will happen to them if they don’t get outside of their own headspace.
When approaching digital transformation and ecosystem development, it’s often best to start small; however, it is equally important to think holistically for the long term. This includes building on an open foundation that you can grow with, enabling you to focus your investments on business relationships and innovation rather than reinvention. Sure, you may be able to build your own orchestration solution or your own full turnkey stack, but if you do either in a proprietary silo, then you’ve just locked yourself out of the biggest potential: creating a network effect across various markets through increasingly interconnected ecosystems.