Pushing AI to the Edge: Use Cases and What is Next [Part 2]

by Jason Shepherd, November 25th, 2020

In Part One of this two-part Q&A series, we highlighted some key considerations for edge AI deployments. In this installment, our questions turn to emerging use cases and key trends for the future.

What do you see as the most promising use cases for edge AI?

As highlighted in Part One, the reasons for deploying AI at the edge include balancing needs across the vectors of scalability, latency, bandwidth, autonomy, security and privacy. In a perfect world, all processing would be centralized; in practice, however, this breaks down, and the need for AI (and ML) at the edge will only continue to grow with the explosion of devices and data.

Hands down, computer vision is the killer app for edge AI today due to the bandwidth associated with streaming video. The ability to apply AI to “see” events in the physical world enables an immense opportunity for innovation in areas such as object recognition, safety and security, quality control, predictive maintenance and compliance monitoring.

Consider retail: computer vision solutions will usher in a new wave of personalized services in brick-and-mortar stores, providing associates with real-time insights on current customers in addition to better informing longer-term marketing decisions. Due to privacy concerns, the initial focus will be primarily on assessing shopper demographics (e.g., age, gender) and location, but increasingly we’ll see personalized shopping experiences based on individual identity with proper opt-in (often triggered through customer loyalty programs). This includes the trend toward new “experiential” shopping centers, in which customers expect to give up some privacy when they walk in the door in exchange for a better experience.

While Amazon Go stores have led the trend for autonomous shopping environments, the use of computer-vision-enabled self-service kiosks for grab-and-go checkout is growing rapidly overall. Given the health concerns with COVID-19, providers are shifting to making these solutions contactless by leveraging gesture control instead of requiring interaction with a keypad or touch screen.

Computer vision use cases will often leverage sensor fusion, for example with barcode scans or radio-frequency identification (RFID) technology providing additional context for decision making in retail inventory management and point of sale (POS) systems. A camera can tell the difference between a T-shirt and a TV, but not the difference between large and medium sizes of the same shirt design. Still, perhaps eventually an AI model will be able to tell you if you have bad taste in clothing!
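To make that fusion concrete, here is a minimal Python sketch of the pattern: a coarse vision label confirms what the item is, while an RFID read supplies the attributes a camera can’t see. The tag IDs, catalog fields, and the classify_frame stand-in are all hypothetical, not any particular vendor’s system.

```python
# Hypothetical sketch: fuse a camera-based classification with an RFID
# read to resolve attributes (like size) that vision alone can't provide.

RFID_CATALOG = {
    "3005FB63AC1F3681EC880468": {"sku": "TS-1042", "item": "t-shirt", "size": "M"},
    "3005FB63AC1F3681EC880469": {"sku": "TS-1042", "item": "t-shirt", "size": "L"},
}

def classify_frame(frame):
    """Stand-in for an edge vision model; returns a coarse item label."""
    return "t-shirt"

def fuse(frame, rfid_tag):
    label = classify_frame(frame)
    record = RFID_CATALOG.get(rfid_tag)
    if record and record["item"] == label:
        # Vision confirms the item class; RFID supplies the exact SKU/size.
        return {**record, "confirmed_by_vision": True}
    # Unknown tag or disagreement: fall back to the camera's coarse label.
    return {"item": label, "confirmed_by_vision": False}

print(fuse(frame=None, rfid_tag="3005FB63AC1F3681EC880468"))
```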

Another key vertical that will benefit from computer vision at the edge is healthcare. We’ve worked with a global provider that leverages AI models in their medical imaging machines, located within hospitals and provided as a managed service. In this instance, the service provider doesn’t own the network on which their machines are deployed so they need a zero-trust security model in addition to the right tools to orchestrate their hardware and software updates.

Another example where bandwidth drives a need for deploying AI at the IoT Edge is vibration analysis as part of a use case like predictive maintenance. Here, sampling rates of at least 1 kHz are common and can increase to 8–10 kHz and beyond, because these higher resolutions improve visibility into impending machine failures. This represents a significant amount of continuously streaming data that is cost-prohibitive to send directly to a centralized data center for analysis. Instead, inferencing models are commonly deployed on compute hardware proximal to machines to analyze the vibration data in real time, backhauling only events that indicate an impending failure.
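A minimal sketch of that pattern follows, with a simple RMS threshold standing in for a trained inferencing model; the sensor driver, threshold value, and event sink are assumptions, not a reference implementation.

```python
# Sketch: sample vibration locally at high rate, analyze on-device,
# and backhaul only the rare events -- never the raw stream.
import math
import random

SAMPLE_RATE_HZ = 1_000        # 1 kHz; deployments may run 8-10 kHz and beyond
WINDOW = SAMPLE_RATE_HZ       # analyze one-second windows
RMS_ALERT_THRESHOLD = 0.8     # hypothetical alert level

def read_accelerometer():
    """Stand-in for a sensor driver returning one vibration sample."""
    return random.gauss(0.0, 0.3)

def send_event(payload):
    """Stand-in for backhauling an alert to a central system."""
    print("ALERT:", payload)

def run(n_windows=10):
    for _ in range(n_windows):
        window = [read_accelerometer() for _ in range(WINDOW)]
        rms = math.sqrt(sum(x * x for x in window) / len(window))
        if rms > RMS_ALERT_THRESHOLD:
            send_event({"rms": round(rms, 3), "window_s": 1})
        # The raw 1,000-sample window is discarded locally, never uploaded.

run()
```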

Analysis for predictive maintenance will also commonly leverage sensor fusion, combining this vibration data with measurements of temperature and power (voltage and current). Computer vision is increasingly being used for this use case as well; for example, the subtle wobble of a spinning motor shaft can be detected with sufficient camera resolution, and heat can be measured with thermal imaging sensors. Meanwhile, last I checked, voltage and current can’t be measured with a camera!

An example of edge AI served up by the Service Provider Edge is cellular vehicle-to-everything (C-V2X) use cases. While latency-critical workloads such as steering and braking control will always run inside the vehicle, service providers will leverage AI models deployed on compute proximal to small cells in a 5G network within public infrastructure to serve up infotainment, augmented reality for vehicle heads-up displays, and traffic coordination.

For the latter, these AI models can warn two cars that they are approaching a potentially dangerous situation at an intersection and even alert nearby pedestrians via their smartphones. As we continue to collaborate on foundational frameworks that support interoperability, it will become possible to leverage more and more sensor fusion, bridging intelligence across different edge nodes to drive even more informed decisions.

We’re also turning to processing at the edge to minimize data movement and preserve privacy. When AI models are shrunk for use in constrained connected products or healthcare wearables, we can run local inferencing that redacts personally identifiable information (PII) before data is sent to centralized locations for deeper analysis.
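As a rough illustration, here is a rule-based stand-in for that redaction step (in practice the redaction itself may be a learned model, e.g. one that blurs faces in video); the field names and salting scheme are assumptions for the sketch.

```python
# Hypothetical sketch: redact PII on-device before records leave the edge.
import hashlib

PII_FIELDS = {"name", "email", "patient_id"}  # assumed record schema

def redact(record: dict) -> dict:
    """Replace PII values with salted one-way hashes so downstream
    analytics can still correlate records without seeing identities."""
    clean = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256(f"site-salt:{value}".encode()).hexdigest()
            clean[key] = digest[:16]
        else:
            clean[key] = value
    return clean

sample = {"patient_id": "A-1042", "heart_rate": 72, "name": "Jane Doe"}
print(redact(sample))  # only the non-PII vitals remain readable
```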

Who are the different stakeholders involved in accelerating adoption of edge AI?

Edge AI requires the efforts of a number of different industry players to come together. We need hardware OEMs and silicon providers for processing; cloud scalers to provide tools and datasets; telcos to manage the connectivity piece; software vendors to help productize frameworks and AI models; domain expert system integrators to develop industry-specific models, and security providers to ensure the process is secure.

In addition to having the right stakeholders, it’s about building an ecosystem based on common, open frameworks for interoperability, with investment focused on the value-add on top. Today there is a confusing plethora of platforms and AI toolsets, but this reflects the state of the market more than any necessity. A key point of efforts like LF Edge is to work in the open-source community to build more open, interoperable, consistent and trusted infrastructure and application frameworks so developers and end users can focus on the surrounding value-add. Throughout the history of technology, open interoperability has always won out over proprietary strategies when it comes to scale.

In the long run, the most successful digital transformation efforts will be led by organizations that have the best domain knowledge, algorithms, applications and services, not those that reinvent foundational plumbing. This is why open-source software has become such a critical enabler across enterprises of all sizes — facilitating the creation of de-facto standards and minimizing “undifferentiated heavy lifting” through a shared technology investment. It also drives interoperability which is key for realizing maximum business potential in the long term through interconnecting ecosystems… but that’s another blog for the near future!

How do people differentiate with AI in the long term?

Over time, AI software frameworks will become more standardized as part of foundational infrastructure, and the algorithms, domain knowledge and services on top will be where developers continue to meaningfully differentiate. We’ll see AI models for common tasks — for example, assessing the demographics of people in a room, detecting license plate numbers, recognizing common objects like people, trees, bicycles and water bottles — become commodities over time.
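As a sketch of what “commodity” means in practice, the snippet below counts people in a frame with an off-the-shelf pretrained detector from torchvision; the input image and confidence threshold are illustrative, and the differentiated work is whatever domain logic you layer on top of the count.

```python
# Sketch: common object recognition as a commodity building block.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf pretrained detector; the model itself is the commodity part.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("storefront.jpg")  # hypothetical input frame
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# COCO class 1 is "person"; counting people is generic --
# interpreting the count for *your* store is the differentiated part.
people = sum(
    1
    for label, score in zip(predictions["labels"], predictions["scores"])
    if label == 1 and score > 0.8
)
print(f"Detected {people} people in frame")
```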

Meanwhile, programming to specific industry contexts (e.g. a specific part geometry for manufacturing quality control) will be where value is continually added. Domain knowledge will always be one of the most important aspects of any provider’s offering.

What are some additional prerequisites for making edge AI viable at scale?

In addition to having the right ecosystem including domain experts that can pull solutions together, a key factor for edge AI success is having a consistent delivery or orchestration mechanism for both compute and AI tools. The reality is that to date many edge AI solutions have been lab experiments or limited field trials, not yet deployed and tested at scale. PoC, party of one, your table is ready!

Meanwhile, as organizations start to scale their solutions in the field they quickly realize the challenges. From our experience, we consistently see that manual deployment of edge computing using brute-force scripting and command-line interface (CLI) interaction becomes cost-prohibitive for customers at around 50 distributed nodes.

In order to scale, enterprises need to build on an orchestration solution that takes into account the unique needs of the distributed IoT edge in terms of diversity, resource constraints and security, and helps admins, developers and data scientists alike keep tabs on their deployments in the field. This includes having visibility into any potential issues that could lead to inaccurate analyses or total failure. Further, it’s important that this foundation is based on an open model to maximize potential in the long run.

Where is edge AI headed?

To date, much of the exploration involving AI at the edge has been focused on inferencing models — deployed after these algorithms have been trained with the scalable compute of the cloud. (P.S. for those of you who enjoy a good sports reference, think of training vs. inference as analogous to coaching vs. playing).

Meanwhile, we’re starting to see training and even federated learning selectively moving to the Service Provider and User Edges. Federated learning is an evolving space that seeks to balance the benefits of decentralization (privacy, autonomy, data sovereignty and bandwidth savings) with the centralization of results from distributed data zones to eliminate regional bias.
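To illustrate the idea, here is a toy federated-averaging round over a scalar model: each edge site trains on its own data and shares only a weight update, and the coordinator averages the updates so raw data never leaves its zone and no single region dominates. Real systems use secure aggregation and far richer models; the data here is made up.

```python
# Toy federated averaging: train locally, share only weights, average centrally.

def local_update(weights, local_data, lr=0.1):
    """Stand-in for one round of on-device training: a single
    gradient step for the scalar model y = w * x under squared loss."""
    w = weights
    grad = sum((w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_round(global_w, sites):
    updates = [local_update(global_w, data) for data in sites]
    return sum(updates) / len(updates)  # equal weighting counters regional bias

# Three edge sites with differently distributed local data (hypothetical).
sites = [
    [(1.0, 2.1), (2.0, 4.2)],
    [(1.5, 2.9), (3.0, 6.3)],
    [(0.5, 0.9), (2.5, 5.1)],
]
w = 0.0
for _ in range(50):
    w = federated_round(w, sites)
print(f"learned slope ~ {w:.2f}")  # approaches ~2, the trend shared across sites
```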

The industry is also increasingly developing purpose-built silicon that increases efficiency within the power and thermal constraints of small devices and can even support training as well as inference; this corresponds with the shift toward pushing more and more AI workloads onto edge devices. Because of this, it’s important to leverage device and application orchestration tools that are completely silicon-agnostic, as opposed to offerings from silicon makers that have a vested interest in locking you into their ecosystem.

Finally, we’ll see the lower boundary for edge AI increasingly extend into the Constrained Device Edge with the rise of “Tiny ML” — the practice of deploying small inferencing models optimized for highly constrained, microcontroller-based devices. An example is the “Hey Alexa” wake word on an Amazon Echo, which is recognized locally and subsequently opens the pipe to cloud-based servers for a session. These Tiny ML algorithms will increasingly be used for localized analysis of simple voice and gesture commands, common sounds such as a gunshot or a baby crying, location and orientation, environmental conditions, vital signs, and so forth.
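A rough sketch of such a wake-word loop, written with tflite-runtime for readability (an actual microcontroller deployment would use TensorFlow Lite Micro in C++); the model file, label index, and audio source are all assumptions.

```python
# Hypothetical Tiny ML wake-word loop: everything stays on-device until
# the keyword-spotting model fires, and only then does the cloud get involved.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="wake_word.tflite")  # assumed tiny model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

WAKE_INDEX = 1      # assumed position of the wake word in the label set
THRESHOLD = 0.9

def next_audio_window():
    """Stand-in for a one-second spectrogram from the microphone."""
    return np.zeros(inp["shape"], dtype=inp["dtype"])

def open_cloud_session():
    print("wake word detected -- opening the pipe to cloud ASR")

for _ in range(100):                      # poll the mic; all of this is local
    interpreter.set_tensor(inp["index"], next_audio_window())
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    if scores[WAKE_INDEX] >= THRESHOLD:   # only now does audio leave the device
        open_cloud_session()
        break
```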

To manage all of this complexity at scale, we’ll lean heavily on industry standardization, which will help us focus on value on top of common building blocks. Open-source AI interoperability projects such as ONNX show great promise in helping the industry coalesce around common formats, so developers can focus on building models and moving them across frameworks and from cloud to edge.
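As a sketch of the workflow ONNX enables, the snippet below exports a small PyTorch model to ONNX and runs it with onnxruntime, the kind of lightweight runtime you might ship to an edge node; the model architecture and file name are illustrative.

```python
# Train/define in one framework...
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()
dummy = torch.randn(1, 4)
torch.onnx.export(model, dummy, "classifier.onnx",
                  input_names=["input"], output_names=["scores"])

# ...run at the edge with a different, lighter runtime (no PyTorch needed).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("classifier.onnx")
scores = session.run(None, {"input": np.random.randn(1, 4).astype(np.float32)})[0]
print(scores)
```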

The Linux Foundation’s Trust over IP effort and emerging Project Alvarium will also help ease the process of transporting trusted data from devices to applications. This notion of pervasive data trust will lead to what I call the “Holy Grail of Digital” — selling and/or sharing data resources and services to/with people you don’t even know. Now this is scale!

In Closing

As the edge AI space develops, it’s important to avoid being locked into any particular toolset, instead opting to build future-proof infrastructure that accommodates a rapidly changing technology landscape and can scale as you interconnect your business with other ecosystems.

The success of AI overall — and especially edge AI — will require our concerted collaboration and alignment to move the industry forward while protecting us from potential misuse along the way. The future of technology is about open collaboration on undifferentiated plumbing so we can focus on value and build increasingly interconnected ecosystems that drive new outcomes and revenue streams. As one political figure famously said — “it takes a village!”

Also published at https://medium.com/zededa/pushing-ai-to-the-edge-part-two-edge-ai-in-practice-and-whats-next-758b2d00f33e