As technology continues to advance at a rapid pace, the benefits of that progress trickle down into more of the things that impact our daily lives. This is particularly true for object recognition and deep learning: computers that can understand that the photo you posted online is of your cat, or detect a minuscule manufacturing defect on an assembly line floor.
Data is king in the IoT age, and the more data you can collect and process, the better the results. But there's often a limit to how much data you can process when you're trying to balance system cost, temperature, size and performance. It might not be realistic to have a large tower PC with a GPU running an intensive machine vision setup on a manufacturing floor, but smaller systems don't pack the performance required for the task.
These trade-offs are being challenged as innovation and technology have created new ways to handle machine vision and learning for facial and object recognition.
Enter Vision Processing Units (VPUs)
A VPU works as a co-processor, similar to a GPU, taking the load off the central processor and assigning it to a more efficient, application-specific integrated circuit. This enables low-power systems to run deep learning frameworks like TensorFlow and Caffe for object and facial recognition while drawing only 2 to 3 watts of power and generating significantly less heat. For comparison, typical GPUs draw around 75 watts and require active cooling to operate effectively.
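To put those wattage figures in perspective, a quick back-of-the-envelope calculation (using the numbers above, and assuming continuous 24/7 operation, which is a simplification since real draw varies with workload) shows the size of the energy gap:

```python
# Back-of-the-envelope energy comparison using the wattage figures above.
# Assumes continuous 24/7 operation; real-world draw varies by workload.

VPU_WATTS = 2.5   # midpoint of the 2-3 W range quoted for a VPU
GPU_WATTS = 75.0  # typical GPU figure quoted above

HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts: float) -> float:
    """Energy consumed in one year, in kilowatt-hours."""
    return watts * HOURS_PER_YEAR / 1000

vpu_kwh = annual_kwh(VPU_WATTS)
gpu_kwh = annual_kwh(GPU_WATTS)

print(f"VPU: {vpu_kwh:.1f} kWh/yr, GPU: {gpu_kwh:.1f} kWh/yr")
print(f"The GPU uses roughly {gpu_kwh / vpu_kwh:.0f}x the energy")
```

Running the numbers, an always-on GPU at 75 watts consumes roughly 30 times the energy of a 2.5-watt VPU over a year, before even accounting for the cooling it needs.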
What this means is that you can take an extremely compact and efficient system, like our CL210G-11, and set it up to run machine vision or learning applications and send that data back to your model for updating.
Because of their low cost and accessibility, VPUs can scale as you add more compute throughout your application. This also has the added benefit of processing more data at the edge before transmitting to the cloud, reducing the data costs associated with cloud computing.
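As a sketch of that edge-filtering idea: run inference locally, then upload only the results worth acting on. The detection format and the 0.8 confidence threshold below are illustrative assumptions, not a specific product API:

```python
# Minimal sketch of edge-side filtering: run inference locally and only
# forward high-confidence detections to the cloud, cutting bandwidth.
# The detection format and the 0.8 threshold are illustrative assumptions.

from typing import Dict, List

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for "worth uploading"

def filter_for_upload(detections: List[Dict]) -> List[Dict]:
    """Keep only detections confident enough to send upstream."""
    return [d for d in detections if d["confidence"] >= CONFIDENCE_THRESHOLD]

# Example: four raw detections from a local model; only two go upstream.
raw = [
    {"label": "defect", "confidence": 0.95},
    {"label": "defect", "confidence": 0.40},
    {"label": "cat",    "confidence": 0.88},
    {"label": "cat",    "confidence": 0.12},
]

to_upload = filter_for_upload(raw)
print(f"Uploading {len(to_upload)} of {len(raw)} detections")
```

In this toy example, half the detections never leave the device, and in a real deployment the savings are larger still, since raw video frames stay local and only compact detection records travel to the cloud.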
Moving Forward With VPUs
According to Intel, video will make up 82% of internet traffic by 2021, and revenue from deep learning will grow to $39.9 billion by 2025. VPUs, like Movidius, provide developers and builders the opportunity to tap into that growth with more affordable and efficient technology, improving the lives of their customers along the way.
Recognizing this trajectory, Logic Supply has begun integrating VPUs into our own systems to provide reliable edge platforms that power these innovations. The combination of low-power, fanless computing with specialized VPU modules means there are now completely new alternatives to what was once a delicate game of compromise.