A Vision Processing Unit, or VPU, is a critical piece of technology that is advancing at a rapid pace. Object recognition and deep learning are finding more and more uses; computers can now understand that the photo you posted online is of your cat, or detect a minuscule manufacturing defect on an assembly line.
Data is king in the IoT age, and the more data you can collect and process, the better the results. But there's often a limit to how much data you can process when you're trying to balance system cost, temperature, size, and performance. It may not be realistic to put a large tower PC with a GPU running an intensive machine vision setup on a manufacturing floor, but smaller systems don't pack the performance required for the task.
These trade-offs are being challenged as innovation and technology have created new ways to handle machine vision and learning for facial and object recognition.
Enter the Vision Processing Unit – VPU
A VPU works as a co-processor, similar to a GPU, taking the load off the central processor and assigning it to a more efficient, application-specific integrated circuit. It enables low-power systems to run machine vision and machine learning frameworks, such as Google TensorFlow and Meta's Caffe2, for object and facial recognition while using only 2 to 3 watts of power and generating significantly less heat. For comparison, typical GPUs use around 75 watts and require active cooling to operate effectively.
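The offload pattern described above can be sketched in a few lines. This is a minimal, purely illustrative model of routing inference work to a co-processor; every class and function name here is hypothetical, and the wattage figures simply echo the comparison in the text, not any real device driver API.

```python
# Illustrative sketch of co-processor offload. All names are hypothetical;
# a real system would use a vendor runtime to target the VPU.

class CPU:
    """General-purpose processor: flexible, but power-hungry for vision work."""
    name = "cpu"
    watts = 65  # rough figure for illustration only

    def infer(self, frame):
        return {"device": self.name, "label": classify(frame)}

class VPU:
    """Vision co-processor: same result in a small power envelope (2-3 W)."""
    name = "vpu"
    watts = 2

    def infer(self, frame):
        return {"device": self.name, "label": classify(frame)}

def classify(frame):
    # Stand-in for a real network (e.g. a TensorFlow or Caffe2 model):
    # label a frame "cat" if its mean pixel value passes a threshold.
    return "cat" if sum(frame) / len(frame) > 127 else "background"

def run_inference(frame, accelerator=None):
    """Offload to the co-processor when one is present, else fall back to CPU."""
    device = accelerator or CPU()
    return device.infer(frame)

result = run_inference([200, 180, 210, 190], accelerator=VPU())
print(result)  # {'device': 'vpu', 'label': 'cat'}
```

The point of the dispatcher is that application code stays the same whether or not a VPU is installed; only the power draw changes.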
VPUs, like this Movidius card (right), have a distinct size, power, and thermal advantage over large GPUs (left) and can fit into a range of industrial fanless systems for machine vision applications.
What this means is that you can take an extremely compact and efficient system, like our CL210G-11, set it up to run machine vision or machine learning applications, and send that data back to your model for updating.
Because VPUs are low-cost and accessible, they scale easily as you add more compute throughout your application. This also has the added benefit of processing more data at the edge before transmitting to the cloud, reducing the data costs associated with cloud computing.
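The edge-filtering idea above can be sketched simply: run detection locally and transmit only the frames that actually contain something of interest. This is an illustrative sketch, not a real pipeline; `detect_object` is a hypothetical stand-in for a VPU-accelerated detector.

```python
# Illustrative sketch: filter frames at the edge so only frames containing
# a detection are sent to the cloud. Function names are hypothetical.

def detect_object(frame):
    # Stand-in for a VPU-accelerated detector: flag frames whose mean
    # pixel value exceeds a threshold.
    return sum(frame) / len(frame) > 127

def edge_filter(frames):
    """Keep only frames with a detection; everything else stays local."""
    return [f for f in frames if detect_object(f)]

frames = [
    [10, 20, 15],     # empty scene -> dropped at the edge
    [200, 210, 190],  # object present -> uploaded
    [5, 8, 12],       # empty scene -> dropped at the edge
]
to_upload = edge_filter(frames)
print(f"uploading {len(to_upload)} of {len(frames)} frames")
```

In this toy run, two of three frames never leave the device, which is the mechanism behind the reduced cloud data costs mentioned above.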
The CL210 makes great use of a VPU’s vision capabilities in an extremely compact and affordable system.
Moving Forward With VPUs
According to Intel, video will make up 82% of internet traffic by 2021, and revenue from deep learning will grow to $39.9 billion by 2025. VPUs, like Movidius, give developers and builders the opportunity to tap into that growth with more affordable and efficient technology, improving the lives of their customers along the way.
Recognizing this trajectory, OnLogic has begun integrating VPUs into our own systems to provide reliable edge platforms that power these innovations. The combination of low-power, fanless computing with specialized VPU modules means there are now completely new alternatives to what was once a delicate game of compromise.