Edge AI starts with edge computing. Also called edge processing, edge computing is a network technology that positions servers close to the devices they serve. This reduces the processing load on central systems and cuts data transmission delays, because processing is performed where the sensor or device generates the data, a location known as the edge.
Developments in edge computing mean that edge AI is becoming more important across a variety of industries, particularly where processing latency and data privacy are concerned. In this article, we'll look at the impact of edge AI, why it's important, and common use cases for it.
Edge AI refers to AI algorithms that run locally on hardware devices and can process data without a network connection. This means a device can act on the data it generates without streaming that data to, or storing it in, the cloud. This matters because there are an increasing number of cases where device data can't be handled via the cloud. Factory robots and cars, for example, need high-speed processing with minimal latency.
To achieve these goals, a common pattern is to train models with deep learning in the cloud, then deploy the resulting inference and prediction models to run at the data origin point, i.e. the device itself (the edge).
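To make the pattern concrete, here is a minimal sketch of cloud-trained, edge-run inference using the TensorFlow Lite interpreter. The model file name, the input type, and the dummy input are illustrative assumptions, not any particular vendor's setup:

```python
# A minimal sketch: load a model trained in the cloud and exported to
# TensorFlow Lite, then run inference entirely on the device.
# "model.tflite" is a hypothetical artifact of that cloud training.
import numpy as np
import tflite_runtime.interpreter as tflite  # pip install tflite-runtime

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input shaped to whatever the model expects (assuming float32).
frame = np.zeros(input_details[0]["shape"], dtype=np.float32)

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()  # inference runs on the device; no cloud round trip
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```

In a real deployment, the dummy input would be replaced by camera frames or sensor readings captured on the device itself.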
We can see an example of this at work in factory robots. AI technology can be used here to visualize and assess vast amounts of multimodal data from surveillance cameras and sensors at speeds humans can't match. We can also use it to detect faults on production lines that humans might miss. These kinds of IoT structures can store the vast amounts of data generated by production lines and analyze it with machine learning, and they are at the heart of the inference and prediction models that make factories smarter.
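As a toy illustration of that fault-detection idea, the sketch below flags any reading that drifts more than three standard deviations from the recent mean. The window size, threshold, and simulated readings are all assumptions:

```python
# A toy sketch of edge-side fault detection on a production line:
# flag readings far outside the rolling statistics of recent data.
from collections import deque
import statistics

WINDOW = 50                      # assumed number of recent readings kept
recent = deque(maxlen=WINDOW)

def is_faulty(reading: float) -> bool:
    if len(recent) < WINDOW:
        recent.append(reading)   # still warming up the window
        return False
    mean = statistics.fmean(recent)
    stdev = statistics.stdev(recent)
    recent.append(reading)
    return abs(reading - mean) > 3 * stdev

# Simulated sensor stream: steady values, then one outlier.
for r in [10.1, 10.3, 9.9] * 20 + [42.0]:
    if is_faulty(r):
        print(f"fault detected: {r}")
```

A production system would pair a statistical or learned model like this with the ability to halt the line or alert an operator the moment a fault appears.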
Edge AI is often talked about in relation to the Internet of Things (IoT) and 5G networks.
The term IoT refers to devices connected to each other through the internet, and includes smartphones, robots, and electronic appliances. IoT generates vast amounts of data, and the cloud's scalability makes it possible to collect, store, and analyze that data with AI. Edge AI complements this by analyzing time-sensitive data on the device itself, which improves data processing and makes the overall infrastructure more flexible.
5G networks can enhance the processes above because their three major features (ultra-high speed, massive simultaneous connections, and ultra-low latency) clearly surpass those of 4G.
5G is indispensable for the development of IoT and edge AI, because as more IoT devices transmit data, data volume swells and transfer speeds drop. Slower transfers create latency, the biggest obstacle to real-time processing.
There are an increasing number of cases in which device data can't be handled via the cloud. This is often true of factory robots and cars, which require high-speed processing and can't tolerate the latency that arises when growing volumes of data have to travel to and from the cloud.
For example, imagine a self-driving car suffering from cloud latency while detecting objects on the road or operating the brakes and steering. Any slowdown in data processing means a slower response from the vehicle; if the vehicle doesn't respond in time, the result could be an accident. Lives are literally at risk.
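A back-of-the-envelope calculation shows why this matters. The speed and latency figures below are illustrative assumptions, not measurements:

```python
# How far does a car travel while waiting for a decision?
speed_kmh = 100                          # assumed highway speed
speed_ms = speed_kmh * 1000 / 3600       # ~27.8 metres per second

cloud_latency_s = 0.100  # assumed 100 ms cloud round trip
edge_latency_s = 0.010   # assumed 10 ms on-device inference

print(f"cloud: {speed_ms * cloud_latency_s:.1f} m travelled before reacting")
print(f"edge:  {speed_ms * edge_latency_s:.1f} m travelled before reacting")
```

Under these assumptions, the car travels roughly 2.8 metres before a cloud-based decision arrives, versus about 0.3 metres for an on-device one, and that gap can be the difference between stopping short of an obstacle and hitting it.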
For these IoT devices, a real-time response is a necessity. That means devices must be able to analyze and assess images and data on the spot, without relying on cloud AI.
By entrusting edge devices with the information processing usually handled by the cloud, we can achieve real-time processing without transmission latency. In addition, by limiting cloud transmissions to only vital information, we can reduce data volume and minimize communication interruptions.
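Here is a minimal sketch of that edge-side filtering, written with Python's standard library. The endpoint URL and the read_sensor() helper are hypothetical stand-ins, not part of any real service:

```python
# Analyze readings locally; forward only the vital ones to the cloud.
import json
import random
import time
import urllib.request

CLOUD_ENDPOINT = "https://cloud.example.com/alerts"  # hypothetical URL
THRESHOLD = 80.0                                     # assumed alert level

def read_sensor() -> float:
    # Stand-in for a real sensor driver; replace with hardware I/O.
    return random.uniform(0.0, 100.0)

while True:
    value = read_sensor()
    # Routine readings never leave the device; only anomalies are sent.
    if value > THRESHOLD:
        payload = json.dumps({"sensor": "line1", "value": value}).encode()
        req = urllib.request.Request(
            CLOUD_ENDPOINT,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
    time.sleep(1.0)
```

The design choice is the point: the device does the analysis, and the network carries only the handful of readings that actually matter.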
The edge AI market chiefly comprises two areas: industrial machinery and consumer devices. We're seeing progress with demonstration tests in areas such as controlling and optimizing equipment and automating skilled-labor techniques.
Progress is also being made with consumer devices whose cameras use AI to automatically recognize photographic subjects. Because consumer devices far outnumber industrial machines, the consumer device market is expected to grow dramatically from 2021 onwards.
We’ve put some common use cases for edge AI below:
Self-driving cars are the most anticipated area of applied edge computing. Self-driving cars frequently have to make instantaneous assessments of a situation, and that requires real-time data processing. In December 2019, revisions to Japan's Road Traffic Act and Road Transport Vehicle Act made it easier to get Level 3 self-driving cars on the road; the revisions set out the safety standards autonomous vehicles are held to and the areas in which they can operate. As a result, car manufacturers are working on self-driving cars that meet these standards. Toyota, for example, is already testing full automation (Level 4) with the TRI-P4.
There's been an increase in news reports about drones losing control and going missing during remote flight experiments, sometimes resulting in accidents. Depending on where a drone comes down, a crash can be catastrophic.
With autonomous drones, the pilot is not actively involved in the drone's flight. They monitor the operation remotely and only take control when absolutely necessary. The best-known example is Amazon Prime Air, a delivery service that is developing self-piloting drones to deliver packages.
Facial recognition systems are an evolution of the surveillance camera: cameras that can learn to recognize people by their faces. In November 2019, WDS Co., Ltd. began supplying Eeye, an AI camera module that analyzes facial features in real time through edge AI processing. Eeye recognizes faces quickly and accurately, making it suitable for marketing tools that target characteristics such as gender and age, and for face identification to unlock devices.
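As a generic illustration of on-device face analysis (not Eeye's proprietary implementation), the sketch below detects faces in a local camera feed using OpenCV's bundled Haar cascade, with no frame ever leaving the device:

```python
# A generic sketch of edge-side face detection with OpenCV.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # first attached camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detection happens here, on the device; no image is uploaded.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("edge face detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Full recognition (matching a face to an identity) would add an embedding model on top of this detection step, but the privacy argument is the same: the analysis stays on the edge.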
Smartphones are the edge AI devices we're all most familiar with. Siri and Google Assistant are good examples of edge AI on smartphones, as the technology drives their voice user interfaces. With on-device AI, processing happens on the device (edge) side, so there is no need to send device data to the cloud. This helps protect privacy and reduce traffic.
Edge AI is growing, and we've seen big investments in the technology. Companies like Konduit AI are making it a key part of their AI strategy in Southeast Asia. Another example came in January 2020, when it was reported that Apple spent 200 million dollars to acquire Xnor.ai, a Seattle-based AI company whose technology processes data on the user's smartphone with edge processing. With AI built into the smartphone itself, we'll likely see advancements in voice processing and facial recognition, along with enhanced privacy.
According to the "2019 AI Business Aggregate Survey" published by Fuji Keizai Group, the edge AI computing market in Japan was forecast at 11 billion yen for the 2018 fiscal year, and the survey predicts it will expand to 66.4 billion yen by the 2030 fiscal year.
And with the spread of 5G, we’ll also likely see decreasing costs and increasing demand for edge AI services across the world.
From self-employed field engineer to PHP programmer, Tatsuo Kurita is now a UX director, working mainly as a technical director supporting corporate products. His expertise covers a wide range of areas, including certifications in applied information technology, information security management, and mental health management (grade II), as well as HTML, general deep learning, and AI implementation.
Previously published on: https://lionbridge.ai/articles/what-is-edge-ai-computing/