What is OpenVINO and What are the Hardware Requirements?

Deep learning is a subset of machine learning, itself a branch of artificial intelligence (AI), that focuses on teaching a computer to learn in a way similar to how the human brain learns. To accomplish this, deep learning models use neural networks.

Several software tools have been developed to optimize deep learning models for better performance and accuracy. One of the most popular is OpenVINO™. Let’s go through what exactly OpenVINO is, how it works, how it can be used, and the OpenVINO hardware requirements.

What is OpenVINO?

OpenVINO, or Open Visual Inference and Neural Network Optimization, is an open-source toolkit developed by Intel® for the optimization of deep learning models. OpenVINO can be used to accelerate solutions for natural language processing (NLP), machine learning, computer vision, and more. The OpenVINO toolkit can be used from model creation through deployment, but one of its core strengths is in the optimization of deep learning AI models.

How does OpenVINO work?

The OpenVINO workflow can be summed up in four main steps:

  • Model training: First, the deep learning model is trained outside of OpenVINO using the framework of choice. OpenVINO is able to accept PyTorch, ONNX, and TensorFlow models.
  • Model optimization: The Model Optimizer loads the trained model, reads it, builds an internal representation of it, and optimizes it. It then generates the IR (Intermediate Representation), which comes in the form of two files: .xml (the network topology) and .bin (the weights).
  • Inference Engine: The Inference Engine reads the IR and runs inference on data using a device-specific plugin. Supported device plugins include the CPU (Central Processing Unit), GPU (Graphics Processing Unit), NPU (Neural Processing Unit), FPGA (Field-Programmable Gate Array), and Intel GNA (Gaussian & Neural Accelerator). See the code sketch just after this list.
  • Application deployment: The application is deployed to various specified devices.
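For a sense of what the optimization and inference steps look like in practice, here is a minimal sketch using the OpenVINO Python API (the 2023+ openvino package). The model file name and input shape are placeholders, not part of any specific project:

```python
# Minimal sketch: convert a trained model to IR, then run inference.
# Assumes the openvino package (2023+) is installed; file names are placeholders.
import numpy as np
import openvino as ov

# Model optimization: convert a trained model (here an ONNX export) and
# save it as IR, which produces the .xml and .bin files described above.
model = ov.convert_model("my_model.onnx")      # placeholder model file
ov.save_model(model, "my_model_ir.xml")        # writes my_model_ir.xml + my_model_ir.bin

# Inference: compile the IR for a specific device plugin and run it on data.
core = ov.Core()
compiled = core.compile_model("my_model_ir.xml", device_name="CPU")

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)   # placeholder shape
result = compiled(dummy_input)
print(result[compiled.output(0)].shape)
```

The same compiled-model call works unchanged if "CPU" is swapped for another available device plugin, which is part of what makes deployment across different Intel hardware straightforward.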

What does OpenVINO do?

The main benefits of deep learning optimization using OpenVINO are efficiency and speed. OpenVINO helps coordinate the various hardware elements required to train and optimize an AI model. Effective optimization helps the training process minimize the loss function more quickly. The loss function measures the difference between the predicted output and the actual output of an AI model. During training, the model’s parameters are adjusted to drive the loss down, and how quickly and reliably that happens depends on the hyper-parameters. Hyper-parameters are settings supplied to the training process, not to be confused with the learned parameters that make up the resulting model.

Hyper-parameters that can be adjusted include things like batch size, weight decay, momentum, and learning rate (the sketch below shows where these appear in a typical training setup). Let’s look at optimizing the learning rate as an example.
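The following is an illustrative sketch only, not OpenVINO-specific; the model, data, and hyper-parameter values are arbitrary placeholders chosen to show where each term fits in a PyTorch training step:

```python
# Illustrative sketch: where common hyper-parameters appear in PyTorch training.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                 # toy model
loss_fn = nn.MSELoss()                   # loss: gap between prediction and target

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,            # learning rate: how big each update step is
    momentum=0.9,       # momentum: smooths updates across steps
    weight_decay=1e-4,  # weight decay: penalizes large weights
)
batch_size = 32         # batch size: samples processed per update

x = torch.randn(batch_size, 10)          # placeholder data
y = torch.randn(batch_size, 1)

optimizer.zero_grad()
loss = loss_fn(model(x), y)   # loss measures prediction error
loss.backward()               # gradients of the loss w.r.t. model parameters
optimizer.step()              # parameters adjusted to reduce the loss
print(loss.item())
```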

Learning rate optimization

To minimize the loss function, the learning rate (how quickly the model adapts with each update) can be made either higher or lower. If the learning rate is too low, training takes more time and compute. However, a lower rate also means a steadier march toward the minimum of the loss function. On the other hand, a higher learning rate can make the model learn much faster, but it also comes with the risk of overshooting the minimum loss value.

Adjusting the learning rate helps to find the ideal point between a learning rate that is too low and one that is too high.
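One common way to balance the two extremes is to start with a relatively high learning rate and reduce it when the loss stops improving. The sketch below uses PyTorch’s built-in ReduceLROnPlateau scheduler; the values and the placeholder validation loss are purely illustrative:

```python
# Sketch: reduce the learning rate when the validation loss plateaus.
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # start relatively high
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.5, patience=3    # halve the LR after 3 epochs without improvement
)

for epoch in range(20):
    # ... a real training pass over the data would go here ...
    val_loss = max(0.2, 1.0 / (epoch + 1))   # placeholder loss that eventually plateaus
    scheduler.step(val_loss)                 # lowers the LR once val_loss stops improving
    print(epoch, optimizer.param_groups[0]["lr"])
```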

OpenVINO examples: how is it used?

OpenVINO is used in a variety of fields ranging from manufacturing to smart cities and healthcare. Some examples of commonly used computer vision models OpenVINO can help optimize include:

  • Image inpainting: Using image inpainting, damaged or missing pixels in images and videos can be restored. You can check out this demo from Intel for an example of what image inpainting looks like.
  • Facial recognition: Facial recognition is used to identify an individual’s face from an image or video. Many modern smartphones, for example, feature facial recognition technology to unlock your phone just by reading your face.
  • Object detection: Object detection enables a computer vision system to detect certain objects in visual input such as images and video. For example, an object detection model could analyze an image of a street and identify things like faces, handbags, or anything else the model has been trained to recognize.

OpenVINO and generative AI

Generative AI, a type of artificial intelligence focused on the generation of new content from training data, largely relies on model inference. OpenVINO can be used to accelerate this inference and the development of generative AI models. Let’s take Stable Diffusion as an example.

Stable Diffusion is a generative AI text-to-image model used to create realistic, detailed images from text prompts. Diffusion models work by adding Gaussian noise to the training data, then removing that noise (or “denoising”) to recover the data. This teaches the model how to remove noise and create higher-quality images. However, this process often proves to be very slow, and the models can take up a lot of space. Because of this, you must optimize your model training and inferencing to make the solution viable. This is where OpenVINO comes in.

OpenVINO accelerates model training and optimizes inferencing, expanding the possibilities of where and how you can run model inference.
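One practical route, assuming you use the optimum-intel integration (installable as optimum[openvino]), is to load a Stable Diffusion checkpoint through its OpenVINO pipeline class so the model is converted to IR and inference runs on OpenVINO. The model ID below is a placeholder; any compatible Stable Diffusion checkpoint would work. A rough sketch:

```python
# Sketch: Stable Diffusion inference through OpenVINO via optimum-intel.
# Assumes `pip install optimum[openvino]`; the model ID is a placeholder.
from optimum.intel import OVStableDiffusionPipeline

pipe = OVStableDiffusionPipeline.from_pretrained(
    "some-org/stable-diffusion-checkpoint",  # placeholder Hugging Face model ID
    export=True,                             # convert the checkpoint to OpenVINO IR on load
)

# Inference runs on the CPU plugin by default; other Intel devices can be selected.
image = pipe("a detailed painting of a lighthouse at sunset").images[0]
image.save("lighthouse.png")
```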

OpenVINO hardware requirements

OpenVINO can only run on Intel hardware. To use the OpenVINO toolkit, your system needs to have an Intel CPU, GPU, NPU, FPGA, or GNA. To learn more about OpenVINO devices and supported hardware, you can visit this page.
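If you’re unsure which of these devices OpenVINO can see on a given machine, the runtime can enumerate its available device plugins. A quick check, assuming the openvino Python package is installed:

```python
# Sketch: list the device plugins OpenVINO detects on this machine.
import openvino as ov

core = ov.Core()
print(core.available_devices)   # e.g. ['CPU', 'GPU'], depending on the hardware

# Print the full device name reported for each detected device.
for device in core.available_devices:
    print(device, core.get_property(device, "FULL_DEVICE_NAME"))
```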

Leveraging OpenVINO with the Axial Edge Server from OnLogic

Intel’s OpenVINO lets you optimize deep learning models from almost any framework and deploy them on a wide range of Intel hardware, including CPUs, GPUs (including iGPUs), NPUs, FPGAs, and GNAs.

The AC101 Axial Edge Server from OnLogic supports Intel’s line of 13th generation hybrid-core processors (code-named “Raptor Lake”), which feature integrated Intel UHD 770 graphics and support up to 128GB of DDR5 memory. Additionally, the AC101 supports full-length GPUs such as the Intel Arc A40 and A60 for even more powerful processing capabilities. When paired with OpenVINO, Axial provides the ideal solution for AI model training and inference at the edge.

To find out more about how OpenVINO can be used with the Axial edge server from OnLogic, contact our technical support team today.

Claireice Mathai

Claireice Mathai is a content creator for OnLogic. When not writing, she enjoys playing guitar and gaming.
