This is the fifth of a series of articles exploring the benefits of Edge AI for a variety of applications.
Computer Vision (CV) has been an exciting field in artificial intelligence for nearly two decades, with compelling applications in areas such as autonomous driving, urban security and surveillance, medical diagnosis, and many more image and video analysis use cases. The development and deployment of these applications is powered by a wide range of compute-intensive algorithms, which is why computer vision is often deployed on cloud infrastructure that offers scalable computing capacity.
However, running computer vision applications in the cloud also has several drawbacks. As a prominent example, cloud-based CV applications require the transfer of large volumes of data from field sources (vision-enabled IoT devices such as security cameras) to the cloud over a wide-area network connection. This incurs significant latency, which makes cloud-based CV inappropriate for low-latency applications such as real-time urban surveillance. Furthermore, the transfer and processing of data in the cloud is typically associated with a considerable CO2 footprint, which results in poor environmental performance.
The limitations of cloud computing for computer vision have given rise to the deployment of CV systems and algorithms at the edge of the network, i.e., close to the source of the data. However, deploying CV applications at the edge is challenging as well. Most edge devices (e.g., smart doorbells) are constrained in computational power and memory, as well as in size and power consumption.
Fortunately, a new class of CV algorithms has emerged. These are simple, with a small footprint and high efficiency, making them ideal for deployment in embedded devices such as cameras, drones, or OBUs (On-Board Units) in connected driving applications. These algorithms comprise, for instance, time- and space-efficient methods that can be deployed on CPU-limited devices. Nevertheless, many edge CV deployments must run on battery-powered devices, which makes energy efficiency a critical requirement.
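As an illustration of such a time- and space-efficient method, the sketch below implements simple frame differencing, a classic low-cost motion detection technique well suited to CPU-limited devices. It is a hypothetical example, not a Syntiant algorithm; frames are modeled as flat lists of grayscale pixel values rather than real camera output.

```python
# Illustrative sketch: frame differencing, a classic lightweight CV
# technique for CPU-limited edge devices. Frames are modeled as flat
# lists of grayscale pixel values (0-255); a real device would read
# them from a camera driver.

def motion_detected(prev_frame, curr_frame, pixel_threshold=25, ratio_threshold=0.01):
    """Return True if enough pixels changed between two frames."""
    changed = sum(
        1 for p, c in zip(prev_frame, curr_frame)
        if abs(p - c) > pixel_threshold
    )
    return changed / len(curr_frame) > ratio_threshold

# Example with two tiny 4-pixel "frames": three of four pixels change.
still = [10, 10, 10, 10]
moved = [10, 200, 200, 200]
print(motion_detected(still, still))  # unchanged scene
print(motion_detected(still, moved))  # large change between frames
```

The entire computation is a single pass over the pixels with constant extra memory, which is what makes this family of methods practical on constrained hardware.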
The benefits of Edge Computer Vision can be experienced in a wide array of applications that process images or videos within compute-constrained embedded devices. Here are some prominent examples:
In the transport sector, there is a variety of security and safety applications that rely on detecting a given vehicle's license plate (e.g., locating a stolen car). One of the most prominent use cases involves identifying a plate at an intersection. With a cloud-based implementation, information captured from every car passing the intersection must be transmitted and processed remotely. With processing at the edge, by contrast, data is transmitted and action is taken only once the target license plate has been detected, significantly reducing complexity and cost, as well as alleviating privacy concerns.
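The edge-side filtering described above can be sketched as follows. `read_plate` is a hypothetical stand-in for an on-device plate-recognition model, and the watch-list value is invented for illustration; the point is simply that nothing leaves the device unless a watched plate is seen.

```python
# Minimal sketch of edge-side filtering for the license plate use case.
# Only frames whose plate matches the watch list trigger a transmission.

WATCH_LIST = {"AB123CD"}  # hypothetical target plate

def read_plate(frame):
    # Placeholder for an embedded plate-recognition model; here a
    # "frame" is simply a dict carrying a pre-labeled plate string.
    return frame.get("plate")

def process_frame(frame, transmit):
    """Send data upstream only when a watched plate is detected."""
    plate = read_plate(frame)
    if plate in WATCH_LIST:
        transmit({"plate": plate, "frame": frame})
        return True
    return False  # non-matching traffic never leaves the device

sent = []
process_frame({"plate": "XY999ZZ"}, sent.append)  # discarded locally
process_frame({"plate": "AB123CD"}, sent.append)  # transmitted
print(len(sent))
```

In a cloud-based design, both frames would have crossed the network; here only the match does.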
The proliferation of video surveillance cameras has increased the amount of data that urban security and safety applications must process. Hence, it is no longer practical to transfer, store, and process multiple camera streams in the cloud, as this incurs significant storage and bandwidth costs. Rather, it is much more efficient to deploy computer vision at the edge of the network to detect potential abnormalities (e.g., intruders).
Upon the detection of such behaviors, urban security applications might opt to collect and persist the related camera streams to local storage and to the cloud. This ensures evidence gathering and facilitates post-processing. Unless the computer vision application detects suspicious activity, there is no need to store information locally and/or transfer it to the cloud, resulting in major cost savings. Furthermore, edge CV facilitates the implementation of low-latency notification functions that alert security officers almost in real time, while also improving privacy.
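One common way to realize this event-triggered persistence is a short in-memory ring buffer that is flushed to storage only when the detector fires, so that footage of normal activity never consumes disk or bandwidth. The sketch below is a minimal, hypothetical illustration of that pattern; `is_suspicious` stands in for a real CV model, and the in-memory list stands in for local disk or a cloud upload.

```python
from collections import deque

# Event-triggered evidence capture: frames are held in a bounded
# ring buffer and persisted only when an anomaly is flagged.

class EvidenceRecorder:
    def __init__(self, buffer_seconds=5, fps=2):
        self.buffer = deque(maxlen=buffer_seconds * fps)
        self.persisted = []  # stands in for local disk / cloud upload

    def on_frame(self, frame, is_suspicious):
        self.buffer.append(frame)
        if is_suspicious(frame):
            # Flush the pre-event context plus the triggering frame.
            self.persisted.extend(self.buffer)
            self.buffer.clear()

rec = EvidenceRecorder(buffer_seconds=2, fps=1)  # keeps last 2 frames
for f in ["calm-1", "calm-2", "INTRUDER", "calm-3"]:
    rec.on_frame(f, is_suspicious=lambda x: "INTRUDER" in x)
print(rec.persisted)
```

Keeping a small pre-event window in the buffer also preserves the moments leading up to the incident, which is useful for the evidence-gathering scenario described above.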
In the Industry 4.0 era, manufacturers are increasingly deploying embedded sensors and cyber-physical systems in their production lines, including, for example, robotic cells, connected machines, and various other internet-connected devices. Several of these systems include vision sensors, which are typically used to inspect the position, quality, and completeness of a manufactured product. Such quality inspection scenarios provide excellent use cases for computer vision at the edge.
This is because it is essential to detect defective products or other quality problems with very low latency, which is the foundation for a fast and efficient response that alleviates quality issues in products or production processes. To this end, computer vision algorithms are embedded inside the vision sensors to support quality control and non-destructive testing of products or parts. By employing computer vision at the edge, manufacturers can also economize on cloud and network costs, as images need not be transferred to the cloud for storage and processing.
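A very simple form of such in-sensor inspection compares each part against a "golden" reference image of a known-good part. The sketch below is a hedged illustration under that assumption; images are modeled as flat lists of grayscale pixels, whereas a real deployment would use the sensor's native frame format and, typically, a trained model.

```python
# Hypothetical sketch of in-sensor quality inspection via comparison
# against a "golden" reference image of a known-good part.

def inspect(part_image, golden_image, tolerance=20, max_bad_ratio=0.02):
    """Return True (pass) unless too many pixels deviate from the reference."""
    bad = sum(
        1 for g, p in zip(golden_image, part_image)
        if abs(g - p) > tolerance
    )
    return bad / len(golden_image) <= max_bad_ratio

golden = [100] * 100
good_part = [105] * 100            # small, uniform deviation: within tolerance
scratched = [100] * 90 + [0] * 10  # 10% of pixels badly off
print(inspect(good_part, golden))
print(inspect(scratched, golden))
```

Because the check runs inside the sensor, a failing part can be rejected within the cycle time of the line, and only defect reports (not raw images) need to travel upstream.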
Syntiant’s Core 2 Neural Decision Processors enable the development and deployment of energy-efficient hardware capable of running sophisticated CV models. Specifically, the NDP200, which is optimized for image applications, has shown a 100x improvement in power efficiency and a 30x improvement in throughput compared to an Arm A53.
Using the NDP200, integrators and developers can deploy world-class TinyML systems that host and execute deep learning CV models. This provides a foundation for exploiting the benefits of edge computing to the maximum possible extent.
For more information on Syntiant’s solutions for computer vision at the edge refer to: https://www.syntiant.com/computer-vision