AI at the Edge – A Vision for the Future

By Sarah Lavoie · Categories: Artificial Intelligence, Depend OnLogic · Published on: June 23rd, 2022 · 20.5 min read

Machine vision and other solutions providing AI at the edge are enabling businesses in all industries to do things they never thought were possible. For many, the question has become not if they’ll use AI, but when.

A conversation about machine vision and AI at the edge

Maxx Garrison, product manager at OnLogic, and Johnny Chen, Edge AI ISV account manager at Intel® Corporation, got together to talk about AI at the edge and how organizations can leverage hardware and software solutions to take AI projects from virtual to reality. We shared the recorded conversation as part of our OnLogic Live event and also answered questions live. Watch the recorded session or read the recap below.

Challenges at the edge

OnLogic: When organizations are trying to deploy AI solutions to the edge, it can be difficult to go from a prototype, or proof of concept (POC), to full deployment. That’s especially true in rugged environments such as on a factory floor, in a warehouse or even in vehicles.

When a solution is being developed in a lab, you basically have unlimited compute from lab systems, servers and workstations. How do you take that and move it to the edge, which may be a challenging environment?

Johnny: You are going to have challenges at the edge: 

  • Power constraints
  • Limited compute
  • Limited bandwidth
  • Limited Internet
  • High availability needs

You have to look at the entire AI pipeline from beginning to end – inference is only one small part of the equation. After you ingest the data and run AI inference, you have to store the results. It’s all about time series databases, smart dashboards and getting the data back to the user in an understandable way. This is the entire AI pipeline that customers have to understand.
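
To make that pipeline concrete, here is a minimal sketch of the ingest → infer → store → dashboard flow Johnny describes. The capture source, model and table layout are hypothetical stand-ins (not Intel or OnLogic APIs); a real deployment would swap in a camera driver, an actual model and a proper time series database.

```python
# Minimal sketch of an edge AI pipeline: ingest -> infer -> store -> dashboard.
# All function bodies are illustrative stubs, not real camera or model APIs.
import sqlite3
import time
import random

def capture_frame():
    """Stand-in for camera/sensor ingestion (e.g., a PoE camera grab)."""
    return {"timestamp": time.time(), "pixels": b"..."}

def run_inference(frame):
    """Stand-in for the inference step; a real system would call its trained model here."""
    return {"label": random.choice(["pass", "fail"]), "confidence": random.random()}

# A lightweight local store plays the role of the time series database that feeds the dashboard.
db = sqlite3.connect("edge_results.db")
db.execute("CREATE TABLE IF NOT EXISTS results (ts REAL, label TEXT, confidence REAL)")

for _ in range(10):                      # in production this loop runs continuously
    frame = capture_frame()
    result = run_inference(frame)
    db.execute("INSERT INTO results VALUES (?, ?, ?)",
               (frame["timestamp"], result["label"], result["confidence"]))
    db.commit()                          # dashboards query this store to present data to the user
```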

Taking an AI solution from virtual to reality

Behind the scenes at OnLogic Live

Johnny: How do you go from the lab to the edge? I think that’s where it gets interesting. Intel and OnLogic have a great partnership. You’ve taken our core technology of processors and XPUs and put them into form factors like the Karbon 800, and others, that are really designed for scalability at the edge. Understanding the workload and how it’s deployed is the key to going from POC to production and scale.

A picture of the Karbon 801 from OnLogic

OnLogic: When someone is trying to benchmark an AI system, they might be doing a lot of that work in the lab where you could have access to essentially unlimited compute. For example, if you’re in the cloud, you can spin up another GPU instance. But when you move to the edge, you’re going to be most likely on fixed hardware.

What are some methods or tools to really help people right-size the AI solution if they don’t have the experience of going all the way to the edge at first?

Using Intel Dev Cloud

Johnny: The first step is to understand your workload and your performance KPIs. Then you can use Intel Dev Cloud to actually test out your workload in the cloud. And the best part is that we have partners like OnLogic with systems there in the Dev Cloud. Having the exact system that you will deploy at the edge gives you a real advantage. You can test your exact workload on that hardware before you get to deployment. That makes it a lot easier to right-size that hardware.

OnLogic: The Intel Dev Cloud is made up of real hardware that you would deploy at the edge. OnLogic has systems in the Intel Dev Cloud. That means that developers can run their solution on an OnLogic system and validate that it works with that workload. Then they can configure their own OnLogic system and deploy it to the edge. And they’ve already validated that it works.
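
The kind of right-sizing check you might run on that hardware can be very simple: time your real workload against the performance KPI you defined up front. The sketch below is a generic example with made-up numbers and a placeholder workload function, not a Dev Cloud tool.

```python
# Rough right-sizing check: measure throughput of the real workload against a target KPI.
import time

TARGET_FPS = 30                       # example KPI: the line needs 30 inferences per second

def my_workload(frame):
    """Placeholder for your actual inference call on the candidate edge system."""
    time.sleep(0.01)                  # simulate ~10 ms of work per frame

frames = [None] * 300
start = time.perf_counter()
for f in frames:
    my_workload(f)
elapsed = time.perf_counter() - start

fps = len(frames) / elapsed
print(f"Measured {fps:.1f} FPS (target {TARGET_FPS})")
print("Right-sized" if fps >= TARGET_FPS else "Consider more compute or a lighter model")
```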

Code once, deploy everywhere

Johnny: Exactly. And the idea is that all these tools help you streamline that process. Intel also has other tools like OpenVINO. In our new release, OpenVINO 2022.1, we actually have an auto mode. That means that when you run your model on the OpenVINO inference engine, auto mode can select the best inference processor, whether that’s the CPU, GPU, a Movidius accelerator or something else. It’s all about the XPU. It doesn’t matter where it runs, just where it runs best.
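
Here is a short sketch of what that auto mode looks like in the OpenVINO 2022.1-era Python API: passing "AUTO" as the device lets the runtime pick the processor, and the application code stays the same. The model path and dummy input are placeholders, and exact API details may vary slightly between OpenVINO releases.

```python
# Sketch of OpenVINO's AUTO device selection; "model.xml" is a hypothetical IR model path.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")                 # your trained model in OpenVINO IR format

# "AUTO" lets the runtime choose the best available device (CPU, iGPU, VPU, ...).
compiled = core.compile_model(model, device_name="AUTO")

infer_request = compiled.create_infer_request()
dummy = np.zeros(tuple(compiled.input(0).shape), dtype=np.float32)   # placeholder input
results = infer_request.infer({0: dummy})            # same code runs wherever AUTO decides is best
print(list(results.values())[0].shape)
```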

OnLogic: OnLogic offers a variety of hardware. For example, the Karbon 800 is available with a wide variety of XPUs: CPU, integrated GPU, discrete GPU, FPGA, Movidius – lots of options. And then you can go down the stack to our CL200 with Movidius or up the stack to edge servers. We offer many options, but OpenVINO makes it essentially seamless to bridge across those different technologies.

Ignition Edge Gateway

Johnny: The whole idea is “code once – deploy everywhere”. So you put it on the hardware that is right for your specific application or right for that environment. You wouldn’t want to take a server room system and put it at the edge, where it’s dusty and far from an ideal environment. It’s going to fail. This is all about right-sizing.

Systems designed for a rugged edge environment

OnLogic: We purpose-built the Karbon 800 Series for the edge. It’s a rugged system. It can go in a vehicle, take LiDAR input, and withstand the shock, vibration and high temperatures you would expect in edge environments. On the other hand, the lab equipment you used to develop your ML – maybe a desktop with a gaming CPU or a rackmount server – was not designed to survive at the edge.

Johnny: A system not designed for the edge is going to fail. OnLogic understands the difficulties of the edge. You guys are dealing with customers all day long who are at the edge, in these difficult environments. You’ve taken our core technology, wrapped it in your know-how, and created a unique system for scaling into production.

If you look at the production environment of today, not only does the system have to be reliable, but a lot of these systems have to be HA, or high availability, because they cannot go down. They are mission critical. If you lose a system, you lose the whole factory, and now you’re talking about huge losses.

OnLogic: When you have that scalable infrastructure in a location without reliable connectivity, you can’t just fail over to a redundant cloud instance. You need tough hardware that’s going to stand up and continue to operate in those tough environments.

Develop a process and prove efficiencies with vision data

Johnny: In addition, if you look at the edge compute that’s happening today, more and more of it is moving to the edge because more decisions are made at the edge. Vision data by itself is actually pretty useless if you think about it, because it just tells you, “Hey – this failed” or “This is good”.

But when you combine data from the machine itself, from the PLC and from sensors with the vision data – then you have a process. Now you can audit the process, and when a part is bad, you can look at the settings. That way you can answer: what made this part bad? And once you know, you can correct the issues. All those decisions have to be made at the edge so they can be made faster and in real time.
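
A hedged sketch of that idea: pair each vision verdict with the PLC and sensor values captured at the same moment, so a bad part can be traced back to its process state. The tag names and readings below are illustrative, not a real PLC schema.

```python
# Join vision results with machine settings so failed parts can be audited.
from dataclasses import dataclass

@dataclass
class ProcessSnapshot:
    part_id: str
    vision_result: str      # "pass" / "fail" from the camera model
    spindle_rpm: float      # example PLC tag
    nozzle_temp_c: float    # example sensor reading
    humidity_pct: float

history = [
    ProcessSnapshot("P-101", "pass", 1200, 215.0, 41.0),
    ProcessSnapshot("P-102", "fail", 1185, 229.5, 58.0),
    ProcessSnapshot("P-103", "fail", 1190, 230.1, 57.5),
]

# Audit: what did the process look like when parts failed?
for snap in history:
    if snap.vision_result == "fail":
        print(f"{snap.part_id}: rpm={snap.spindle_rpm}, temp={snap.nozzle_temp_c}C, "
              f"humidity={snap.humidity_pct}%  <- candidate root-cause settings")
```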

OnLogic: Some unique applications that we are seeing are those that enable dynamic adjustment. So you have machine vision to understand the product that’s coming off the line. It can identify the defects. It also has control of the rest of the system behind it to make adjustments. For example, it could adjust the extrusion process to account for humidity changes in the factory. All of that can be dynamic because you have that power at the edge and you can do it instantly. This low latency is critical for edge compute.
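
As a simplified sketch of that closed loop, the example below nudges an extrusion setpoint based on the defect rate reported by vision and the measured humidity. The thresholds and control rule are made-up assumptions for illustration, not real process tuning.

```python
# Toy closed-loop adjustment: vision flags defects, the controller adjusts a process parameter.
def classify(frame) -> bool:
    """Stand-in for the machine vision model; returns True if the part is defective."""
    return frame.get("blister_detected", False)

def adjust_extruder(current_temp_c: float, defect_rate: float, humidity_pct: float) -> float:
    """Illustrative control rule: compensate for humidity, back off if defects climb."""
    setpoint = current_temp_c
    if humidity_pct > 55:
        setpoint += 1.5            # assumed humidity compensation
    if defect_rate > 0.05:
        setpoint -= 1.0            # assumed correction when defects rise
    return setpoint

frames = [{"blister_detected": False}, {"blister_detected": True}, {"blister_detected": False}]
defect_rate = sum(classify(f) for f in frames) / len(frames)
print(f"New extruder setpoint: {adjust_extruder(210.0, defect_rate, humidity_pct=62.0):.1f} C")
```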

Johnny: Once all this data is gathered, you can put it up into the cloud with data from other machines and start correlating data. Then you start seeing trends, you start seeing a story. And now you can prove efficiencies not just at one site, but across the entire enterprise.

Maxx and Johnny talking about AI at the Edge and machine vision

Software to help with AI implementation

OnLogic: Are there any software developments that would make the implementation side easier? That platform sounds like it would be extremely complex to build and integrate.

Johnny: Absolutely. Beyond Intel’s OpenVINO, there are a lot of other toolkits we offer. One is a reference design we have called Edge Insights for Industrial. It’s a reference software stack covering everything from data and vision ingestion, to inference, to a time series database, to dashboarding. It’s an entire reference design that we give free of charge to our customers to take and make their own product for their specific solution.

Real world example of AI in the fast food industry

OnLogic: When we work with customers, they’re in a huge range of industries. Are there any examples you can give of a business that is using AI?

Johnny: Yes – in fact, the question for the entire industry is not if they’ll use AI, but when. 

One interesting example that I think everyone can understand and appreciate is fast food. Fast food can take advantage of AI and improve the customer experience without the customer even knowing it. 

In a fast food restaurant, you can have cameras looking at the drive-through. They can be used to see how many cars are lined up and how many people are in the restaurant waiting to order. This information can be used to do micro-forecasting.

If you think about it, almost everyone orders fries with their burger. So by knowing how many people are lined up, you can actually forecast how many french fries to make and get it pretty close.
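
As a back-of-the-envelope illustration, the micro-forecast is just queue counts multiplied by a few assumptions. The ratios below (share of orders with fries, portions per batch) are made-up numbers, not anything from the conversation.

```python
# Toy micro-forecast: queue counts from the cameras drive how many fry batches to start.
ORDERS_WITH_FRIES = 0.8        # assumption: ~80% of orders include fries
PORTIONS_PER_BATCH = 12        # assumption: one fryer batch yields 12 portions

def batches_to_start(cars_in_drive_thru: int, people_in_line: int) -> int:
    expected_orders = cars_in_drive_thru + people_in_line
    expected_portions = expected_orders * ORDERS_WITH_FRIES
    # round up so the kitchen stays slightly ahead of demand
    return -(-int(expected_portions) // PORTIONS_PER_BATCH)

print(batches_to_start(cars_in_drive_thru=7, people_in_line=5))   # -> 1 batch
```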

The other way to use AI in a fast food restaurant is to track the quality of the food. No one likes soggy fries, right? Serving bad fries is one of the worst things a fast food place can do.

Maxx Garrison and JP Ishaq laughing during OnLogic Live – A Vision of the Future

A camera in the kitchen could track the quality of the food. When were the fries made? How long have they been there? Have they been there too long? Should you throw them away before you serve them? These are all things that improve quality and the customer experience. And the best part is, it not only improves the customer experience, it also improves things for the enterprise, because now you’re more efficient, you provide better service, and you actually have data points on how all your restaurants are doing.

OnLogic: Yeah, I imagine that in that situation, if you wanted to do the same thing without AI, you would need a control room filled with people monitoring the lines: “There are 5 people in line. You’ve got to get more fries moving!” We don’t have those resources in these businesses. AI can step in and really augment the business in ways that weren’t possible in the past.

Johnny: And the best part is, it makes life easier for the workers, because now workers are not scrambling. They know ahead of time how much food to prepare. And this goes to another point: it can reduce food waste because you’re not over-preparing. You have a much better idea of how much food you need at that particular moment.

Brownfield AI implementation 

OnLogic: One of the unique challenges that we see is with older manufacturing sites. Often, they are not brand new factories and yet they would like to bring in that AI capability. Is that feasible with the technology we have now?

Johnny: Oh, absolutely. I think one of the biggest growth areas is existing infrastructure – often called brownfield infrastructure. How do you upgrade it? The truth is, no one’s going to upgrade an entire existing factory to make it smart. Why would they, when they have equipment that has been performing perfectly fine for 20 years?

This is where the OnLogic IoT gateways with multiple interfaces come in. They have the ability to interface with PLCs and traditional older machines. You can pull data from them and digitize it.
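
The gateway pattern itself is simple: poll tags from a legacy controller and forward them as a digital record. In the sketch below, read_plc_tags() and publish() are hypothetical stand-ins for a real fieldbus driver (such as a Modbus or OPC UA client) and an upstream publisher (MQTT, a local database, etc.).

```python
# Hedged sketch of an IoT gateway loop: poll legacy PLC tags, timestamp them, forward them.
import json
import time

def read_plc_tags() -> dict:
    """Placeholder for a real fieldbus read (e.g., holding registers or OPC UA nodes)."""
    return {"motor_rpm": 1475, "line_speed": 2.3, "fault_code": 0}

def publish(record: dict) -> None:
    """Placeholder for forwarding the record upstream (MQTT, time series database, ...)."""
    print(json.dumps(record))

for _ in range(5):                      # a real gateway would loop indefinitely
    tags = read_plc_tags()
    tags["timestamp"] = time.time()
    publish(tags)                       # the gateway digitizes data the PLC never exposed before
    time.sleep(1.0)
```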

Leverage software to pull data together

Johnny: In addition, OnLogic has a great partnership with Inductive Automation. Their Ignition software pulls all that data together. At the end of the day, data is not decreasing, it’s growing. Now, in these older factories, there is a way to pull all that data together and digitize it. This is where the technology gets really exciting, because with all this data pulled together, you can start making sense of these older factories and improve efficiencies. In the past, you just couldn’t see the information.

OnLogic: When we’re looking at these older factories, you might have an older PLC controlling equipment, then a gateway, then an edge server, and then the cloud. Looking ahead to new factories, how do you see that topology of equipment evolving? Are we going to consolidate into soft PLCs with accelerators built in, or do you still expect to see that kind of breakout of functions?

Move to edge for real time decision making

Johnny: I still see some breakout, but I do see compute moving more and more to the edge, because one of the key things about moving to the edge is that you want to make decisions in real time. When you’re doing defect detection, you want to be able to see what’s going on and adjust these machines in real time. So more compute is moving to the edge in order to do this in real time.

But the second part is that you also want to take all this correlated data and put it into the cloud. Hybrid architecture will become more and more prevalent. You’re going to see more and more data being put into the cloud, but only the important data – data that’s been processed at the edge and can then be put in a dashboard in the cloud. That way, anyone in the enterprise can actually look at the data and make sense of it.

OnLogic: A good example of that is a customer that takes in LiDAR data from a vehicle. LiDAR creates a massive amount of data, and most of it is not useful. But if there is a section of that data that is useful, that can get uploaded to the cloud for future training of models, right?

Johnny: Absolutely. And that’s where edge intelligence is so important, because the data is not getting smaller – it’s growing faster and faster. The problem is: how do you sort through all this data? In the old days, you had to have a person sit there and sort through it manually. But now, AI can automatically sort it and present only what is relevant. That’s how we’re going to get more efficient and better at this. I think one of the exciting things about our partnership is your ability to take Alder Lake-S and actually create a very scalable platform.
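
A simplified sketch of that edge-filtering idea: score incoming data locally and only queue the relevant slices for cloud upload. The scoring rule, threshold and upload stub below are illustrative assumptions, not a real LiDAR workflow.

```python
# Filter at the edge, upload only the useful fraction to the cloud for retraining.
import random

def relevance_score(scan) -> float:
    """Stand-in for an edge model that decides whether a LiDAR scan is worth keeping."""
    return scan["novelty"]

def upload_to_cloud(scan) -> None:
    print(f"queued scan {scan['id']} for model retraining")

scans = [{"id": i, "novelty": random.random()} for i in range(1000)]
kept = [s for s in scans if relevance_score(s) > 0.95]      # discard the bulk at the edge

for s in kept:
    upload_to_cloud(s)
print(f"Uploaded {len(kept)} of {len(scans)} scans")
```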

A scalable platform for AI

Johnny: Scalability as a whole, especially in AI, is really important. If you look at Intel’s OpenVINO, it is all about “write once, deploy everywhere”. You put it on small systems all the way up to server-class systems and the code stays the same. In some ways, I think that’s what’s also so exciting about the Karbon series. What you guys have done is create a platform for AI. You have a powerful iGPU on the Alder Lake-P systems you’ve created. But OnLogic has also created a scalable system beyond that with your Alder Lake-S series, which now scales with GPUs and PCIe cards. The other big thing is network ports.

OnLogic systems offer a ton of network ports because – what’s feeding all the vision? It’s cameras: PoE cameras and so forth. So I think the exciting part is that what you’ve done with Alder Lake, creating the Karbon series, has become a scalable platform for AI.

OnLogic: When you look at the low end, we’ve got Alder Lake-P, which is a very compact system but has a really powerful iGPU. When using OpenVINO, you can really take advantage of that for AI workloads. Then you move up to the Karbon 800 and focus on that high CPU performance, but also, as you said, the expandability and the modularity of I/O.

OnLogic offers four systems in the Karbon 800 Series. When we’re looking at these edge AI applications, you’re exactly right, it’s machine vision and you have to plug in cameras. With the K804 specifically, I think we’re up to 22 possible PoE cameras, or 30 USB cameras. Of course, again, you’ve got to right-size your AI workloads so you can actually ingest and compute on all that data. But we do offer a scalable platform where, for almost any application, we can find a system that fits that edge AI workload.

Photo of the Karbon 804 rugged edge computer, which has the ability to power 22 PoE cameras

Do more with AI at the edge

Johnny: And that scalability is going to become more and more important at the edge, because once enterprises get a taste of what they can do with AI, they will want to add more and more features through software and do more and more AI at the edge.

OnLogic: We’re really excited to see what’s ahead for AI in our industries. All the customers we’re working with are pushing hard to deploy AI at the edge. I think that’s one of the fascinating things about working with edge companies: we’re seeing it in production. These aren’t just research projects. We’re seeing actual, real benefits out in the field.

Johnny: Yeah, absolutely. You know, like we were talking about earlier, it’s not a matter of “if”, it’s a matter of “when”.

Wrap up

Three pictures of JP and Darek having a conversation during OnLogic Live - AI at the Edge

JP Ishaq, Director of Product Enablement, and Darek Fanton, Communications Manager, provided additional insight and information after the pre-recorded conversation between Maxx and Johnny.

Artificial Intelligence – not a matter of if, but when 

Darek: What were some of the key takeaways from Maxx and Johnny’s conversation? 

JP: It’s really exciting to see this ecosystem take shape. Between the partnership of OnLogic and Intel we are able to provide scalable solutions for emerging applications. Like Johnny said, artificial intelligence is not a matter of if, but when. In many cases, that “when” is now. We’re seeing that everywhere from retail analytics to help with customer shopping experiences to deployments in industrial automation to help with preventative maintenance and product quality. 

What is inference in AI? 

In AI-augmented applications, we talk about machine learning or deep learning. That is the process of building your model and training it using high performance computing and cloud computing. Inference comes in after the model has been trained – on object recognition, for example – and you are now actually using it to recognize incoming data (usually images or video).
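
A toy illustration of that training vs. inference split, using scikit-learn as a stand-in for the heavy model-building that normally happens on HPC or cloud hardware. The features and labels are made up; the point is only that training happens once on labeled data, while inference applies the finished model to new data.

```python
# Training (offline, big compute) vs. inference (at the edge, on new data).
from sklearn.linear_model import LogisticRegression

# Training: learn from labeled examples.
X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y_train = ["ok", "defect", "ok", "defect"]
model = LogisticRegression().fit(X_train, y_train)

# Inference: apply the already-trained model to new, unlabeled data.
new_sample = [[0.85, 0.75]]          # e.g., features extracted from an incoming image
print(model.predict(new_sample))     # expected: ['defect']
```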

What is brownfield vs. greenfield?

Brownfield and greenfield are terms used in a few different industries, but they essentially refer to the same thing. In the hardware space, a greenfield deployment is a new installation that can be custom designed from the ground up. It would include all new devices, hardware, a custom software stack, peripherals and protocols.

On the other hand, a brownfield project makes use of, and integrates with, existing infrastructure. This presents the unique challenge of integrating with existing legacy devices. That’s one of the reasons many of our systems include what the consumer computing world would consider “antiquated” connectivity, like COM ports. A lot of the devices in existing factories require those connections. They’re also often not equipped to communicate with network resources, or the cloud, which is why IoT gateways have become such a key part of the hardware landscape. OnLogic really excels at delivering cutting-edge performance while maintaining that legacy connectivity.

Machine vision AI in action

One example of an innovative company that has leveraged an OnLogic solution is Datacadabra in the Netherlands. They have developed a smart solution for roadside mowing called the Mowhawk. Using machine vision, it can identify invasive species, litter or even animal habitats and instruct the mower on areas to mow around or avoid.

Mowhawk roadside mower with machine vision by Datacadabra

This is a great example of an AI and machine learning solution being used today. As a general consumer, you may be passing incredible AI and vision solutions every day and not even know it. That tractor in the median may actually be running a machine vision solution.

Examples of exciting current or upcoming hardware technologies

A photo of Darek Fanton and JP Ishaq discussing upcoming hardware technology

As we mentioned, we are very excited to release our Karbon 800 Series, which is based on Intel’s 12th generation platform – formerly known as Alder Lake. We have more Alder Lake-based systems coming soon, so stay tuned. We’ll have more opportunities for different applications to be served with different levels of scalable architecture.

Scalability is the name of the game! Whether it’s one of our K800 models, which support the integrated Alder Lake processing and can also add peripherals and connectivity, or one of our smaller gateways, we’re going to have a solution to help our customers.

Our partnership with Intel allows us to take advantage of the performance found in the newest generations while continuing to support our customers and maintain their hardware platforms. 

Part of scalability is making sure you have the right-fit solution for now, as well as the ability to evolve over time as your application develops. For example, we offer several models in our K800 Series: the K801, K802, K803 and the K804, which can actually use a GPU. So you have flexibility as to where the computing is being done.

A picture of the Karbon 800 series from OnLogic

What makes Alder Lake a good fit for AI?

Of course we see gains in overall performance and compute. But one of the things that is particularly interesting is the hybrid architecture that combines both performance cores and efficient cores so you can really scale to the workload needed and handle very diverse workloads. 

And the integrated GPU is unprecedented in its ability to handle AI workloads without necessarily needing an external or additional peripheral. That is something we haven’t seen at this level before. If needed, our systems do all the integration of additional peripherals, so it really allows our customers to right-size their solution for their own application.

With the current supply chain issues, how is the availability of the Karbon 800 Series? 

There is no question that supply chain issues are impacting just about every product and industry worldwide. Our team is working really hard to maintain our supply and they have been very successful in doing that. The K800 is released and shipping now.

It’s about being flexible. What’s exciting is that you can get the base platforms now, and additional features will roll out over time. 

Ready to launch your AI at the edge application? Learn more about the Karbon 800 or contact our team today!

Editor’s Note: Inductive Automation has ended their Ignition Onboard program. Ignition licenses must now be purchased directly through Inductive Automation. While the IGN versions of our solutions are no longer available, our computers remain a great fit for use with Ignition software. Explore our recommended hardware here.


About the Author: Sarah Lavoie

Sarah Lavoie is a content creator for OnLogic. When not writing, she can usually be found exploring the Vermont landscape with her camera looking to photograph something amazing.