
AI at the Edge – A Vision for the Future

Categories: Depend OnLogic · Published On: June 23rd, 2022 · 20.2 min read

Machine vision and other solutions providing AI at the edge are enabling businesses in all industries to do things they never thought were possible. For many, the question has become not if they'll use AI, but when.

A conversation about machine vision and AI at the edge

Maxx Garrison, product manager at OnLogic, and Johnny Chen, Edge AI ISV account manager at Intel® Corporation, got together to talk about AI at the edge and how organizations can leverage hardware and software solutions to take AI projects from virtual to reality. We shared the recorded conversation as part of our OnLogic Live event and also answered questions live. Watch the recorded session or read the recap below.

Challenges at the edge

OnLogic: When organizations are trying to deploy AI solutions to the edge, it can be difficult to go from a prototype, or proof of concept (POC), to full deployment. That holds true especially in rugged environments such as on a factory floor, in a warehouse or even in vehicles. 

When a solution is being developed in a lab, you basically have unlimited compute with lab systems, servers, and workstations. How do you take that and move it to the edge, which may be a challenging environment?

Johnny: You are going to have challenges at the edge: 

  • Power constraints
  • Limited compute
  • Limited bandwidth
  • Limited Internet
  • High availability needs

You have to look at the entire AI pipeline from beginning to end – inference is only one small part of the equation. After you ingest data and run AI inference, you have to store the results. It's all about time-series databases, smart dashboards, and getting the data back to the user in an understandable way. This is the entire AI pipeline that customers have to understand.
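To make that pipeline concrete, here is a minimal sketch of the ingest → infer → store loop in Python. The camera grab, model call, and database writer are all hypothetical placeholders, not part of any specific Intel or OnLogic product:

    import random
    import time

    def read_frame():
        # Hypothetical stand-in for a camera grab (e.g. a GigE or USB vision camera)
        return b"raw-frame-bytes"

    def run_inference(frame):
        # Hypothetical stand-in for the trained model; returns a label and confidence
        return {"label": "pass" if random.random() > 0.1 else "fail",
                "confidence": round(random.random(), 2)}

    def write_timeseries(point):
        # Hypothetical stand-in for a time-series database client (e.g. InfluxDB)
        print(point)

    for _ in range(5):                    # in production this loop would run forever
        frame = read_frame()              # 1. ingest
        result = run_inference(frame)     # 2. inference -- one small part of the pipeline
        write_timeseries({                # 3. store, so dashboards can surface it to users
            "timestamp": time.time(),
            **result,
        })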

Taking an AI solution from virtual to reality

Behind the scenes at OnLogic Live

Johnny: How do you go from the lab to the edge? I think that's where it gets interesting. Intel and OnLogic have a great partnership. You've taken our core technology of processors and XPUs and put it into form factors like the Karbon 800, as well as others, that are really designed for scalability at the edge. Understanding everything from the workload to how it's deployed is the key to getting out of POC and into production at scale.

A picture of the Karbon 801 from OnLogic

OnLogic: When someone is trying to benchmark an AI system, they might be doing a lot of that work in the lab, where you have access to essentially unlimited compute. For example, if you're in the cloud, you can spin up another GPU instance. But when you move to the edge, you're most likely going to be on fixed hardware.

What are some methods or tools to really help people right-size the AI solution if they don’t have the experience of going all the way to the edge at first?

Using Intel Dev Cloud

Johnny: The first step is to understand your workload and your performance KPIs. Then you can use Intel Dev Cloud to actually test out your workload in the cloud. And the best part is that we have partners like OnLogic with systems there in the Dev Cloud. Having the exact system that you will deploy at the edge gives you a real advantage: you can test your exact workload on that hardware before you get to deployment. That makes it a lot easier to right-size the hardware.

OnLogic: The Intel Dev Cloud is made up of real hardware that you would deploy at the edge. OnLogic has systems in the Intel Dev Cloud, which means developers can run their solution on an OnLogic system and validate that it works with their workload. Then they can configure their own OnLogic system and deploy it to the edge, having already validated that it works.
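As a rough illustration of right-sizing, the sketch below times a workload against a latency KPI on whatever hardware it runs on, whether that's a Dev Cloud instance or the target edge system. The run_inference stub and the 50 ms KPI are hypothetical; in practice you would call your actual model here (OpenVINO also ships a benchmark_app tool for this kind of measurement):

    import statistics
    import time

    LATENCY_KPI_MS = 50.0     # hypothetical target: each inference under 50 ms
    WARMUP, RUNS = 10, 200

    def run_inference():
        # Hypothetical stand-in for one inference call on your real model
        time.sleep(0.004)

    for _ in range(WARMUP):   # warm caches and lazy initialization before measuring
        run_inference()

    latencies = []
    for _ in range(RUNS):
        start = time.perf_counter()
        run_inference()
        latencies.append((time.perf_counter() - start) * 1000.0)

    p95 = statistics.quantiles(latencies, n=100)[94]   # 95th percentile latency
    print(f"p95 latency: {p95:.1f} ms, "
          f"throughput: {1000.0 * RUNS / sum(latencies):.1f} inferences/s")
    print("KPI met" if p95 <= LATENCY_KPI_MS else "KPI missed - consider more compute")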

Code once, deploy everywhere

Johnny: Exactly. And the idea is that all these tools help you streamline that process. Intel also has other tools, like OpenVINO. In our new release, OpenVINO 2022.1, we actually have an auto mode. That means that when you run your model on the OpenVINO inference engine, the auto mode can select the best inference processor, whether it's the CPU, the GPU, a Movidius accelerator, or something else. It's all about the XPU. It doesn't matter where it runs, just where it runs best.
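For illustration, here is a minimal sketch of that auto mode using the OpenVINO 2022.1 Python API. Compiling against the "AUTO" device lets the runtime pick the best available processor, so the same code runs unchanged on CPU, GPU, or an accelerator. The model path and input shape are placeholders for your own network:

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")          # placeholder path to your IR model

    # "AUTO" lets OpenVINO choose the best available device (CPU, iGPU, dGPU, VPU, ...)
    compiled = core.compile_model(model, "AUTO")

    data = np.zeros((1, 3, 224, 224), dtype=np.float32)   # placeholder input shape
    request = compiled.create_infer_request()
    request.infer({0: data})
    result = request.get_output_tensor(0).data
    print(result.shape)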

OnLogic: OnLogic offers a variety of hardware. For example, the Karbon 800 is available with a wide variety of XPUs: CPU, integrated GPU, discrete GPU, FPGA, Movidius – lots of options. And then you can go down the stack to our CL200 with Movidius, or up the stack to edge servers. We offer many options, but OpenVINO makes it essentially seamless to bridge across those different technologies.

Ignition Edge Gateway

Johnny: The whole idea is "code once – deploy everywhere". So you put it on the hardware that is right for your specific application or right for that environment. You wouldn't want to take a server-room system and put it at the edge where it's dusty and far from an ideal environment; it's going to fail. This is all about right-sizing.

Systems designed for a rugged edge environment

OnLogic: We purpose-built the Karbon 800 Series for the edge. It's a rugged system: you can put it on a vehicle, it can take LiDAR input, and it will withstand the shock, vibration, and high temperatures you would expect in edge environments. On the other hand, the lab equipment you used to develop your ML model, maybe a desktop with a gaming CPU or a rackmount server, was not designed to survive at the edge.

Johnny: A system not designed for the edge is going to fail. OnLogic understands the difficulties of the edge. You're dealing all day long with customers who are at the edge, in these difficult environments. You've taken our core technology, wrapped it in your know-how, and created unique systems built to scale into production.

If you look at the production environment of today, not only does the system have to be reliable, but a lot of these systems have to be HA, or high availability, because they cannot go down. They are mission critical. If you lose a system, you lose the whole factory, and now you're talking about huge losses.

OnLogic: I think when you have that scalable infrastructure and you're in a location where you don't have the connectivity, you can't just have a redundant cloud instance. You need rugged hardware that's going to stand up and continue to operate in those tough environments.

Develop a process and prove efficiencies with vision data

Johnny: In addition, if you look at the compute that's happening today, more and more of it is moving to the edge because more decisions are made at the edge. Vision data by itself is actually pretty useless if you think about it, because it just tells you, "Hey, this failed" or "This is good".

But when you combine data from the machine itself, from the PLC, and from sensors with the vision data, then you have a process. Now you can audit the process, and when a part is bad, you can look at the settings. That way you can answer: what made this part bad? And once you know, you can correct the issues. All those decisions have to be made at the edge so they can be made faster, in real time.
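A hypothetical sketch of that correlation step: when the vision system flags a bad part, snapshot the machine settings at that moment so the process can be audited later. The PLC and sensor readers below are stand-ins for whatever interfaces your equipment actually exposes:

    import time

    def read_plc_settings():
        # Hypothetical stand-in for reading current setpoints from the PLC
        return {"extruder_temp_c": 212.5, "line_speed_mpm": 14.2}

    def read_sensors():
        # Hypothetical stand-in for ambient sensors on the line
        return {"humidity_pct": 61.0}

    def on_vision_result(part_id, passed):
        if passed:
            return None
        # Part failed: capture the full process context, not just the vision verdict
        record = {"timestamp": time.time(), "part_id": part_id,
                  **read_plc_settings(), **read_sensors()}
        print("audit record:", record)   # in practice, store this for later analysis
        return record

    on_vision_result(part_id=1042, passed=False)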

OnLogic: Some unique applications that we are seeing are those that enable dynamic adjustment. So you have machine vision to understand the product that’s coming off the line. It can identify the defects. It also has control of the rest of the system behind it to make adjustments. For example, it could adjust the extrusion process to account for humidity changes in the factory. All of that can be dynamic because you have that power at the edge and you can do it instantly. This low latency is critical for edge compute.
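To make that closed loop concrete, here is a purely illustrative sketch: read a sensor, compute a compensated setpoint, and write it back to the machine. The compensation rule and the PLC write are hypothetical, not a real extrusion model:

    def read_humidity_pct():
        # Hypothetical ambient humidity sensor on the factory floor
        return 61.0

    def write_extruder_setpoint(temp_c):
        # Hypothetical write-back to the PLC controlling the extruder
        print(f"extruder setpoint -> {temp_c:.1f} C")

    BASE_TEMP_C = 210.0

    humidity = read_humidity_pct()
    # Purely illustrative rule: nudge temperature up slightly as humidity rises
    setpoint = BASE_TEMP_C + 0.1 * max(0.0, humidity - 50.0)
    write_extruder_setpoint(setpoint)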

Johnny: Once all this data is gathered, you can put it up into the cloud with data from other machines and start correlating it. Then you start seeing trends, you start seeing a story. And now you can prove efficiencies, not just at one site, but across the entire enterprise.

Maxx and Johnny talking about AI at the Edge and machine vision

Software to help with AI implementation

OnLogic: Are there any software developments that make that kind of implementation easier? That platform sounds like it would be extremely complex to build and integrate.

Johnny: Absolutely. Beyond Intel's OpenVINO, there are a lot of other toolkits we offer. One is a reference design called Edge Insights for Industrial. It's a reference software stack covering everything from data and vision ingestion, to inference, to a time-series database, to dashboarding. It's an entire reference design that we give free of charge to our customers to take and make into their own product, for their specific solution.

Real world example of AI in the fast food industry

OnLogic: The customers we work with span a huge range of industries. Are there any examples you can give of a business that is using AI?

Johnny: Yes – in fact, the question for the entire industry is not if they’ll use AI, but when. 

One interesting example that I think everyone can understand and appreciate is fast food. Fast food can take advantage of AI and improve the customer experience without the customer even knowing it. 

In a fast food restaurant, you can put cameras on the drive-through. They can be used to count how many cars are lined up and how many people are lined up inside to order. This information can be used for micro-forecasting.

If you think about it, almost everyone orders fries with their burger. So by knowing how many people are lined up, you can actually forecast how many french fries to make and get it pretty close.
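The arithmetic behind that micro-forecast is simple. The attach rate and batch-size numbers below are made up purely for illustration:

    import math

    cars_in_line = 7          # counted by the drive-through camera
    people_inside = 5         # counted by the lobby camera

    FRY_ATTACH_RATE = 0.85    # hypothetical: share of orders that include fries
    PORTIONS_PER_BATCH = 12   # hypothetical: fry portions per fryer basket

    expected_orders = cars_in_line + people_inside
    expected_fry_portions = expected_orders * FRY_ATTACH_RATE
    batches_to_start = math.ceil(expected_fry_portions / PORTIONS_PER_BATCH)
    print(f"start {batches_to_start} batch(es) for ~{expected_fry_portions:.0f} portions")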

The other way to use AI in a fast food restaurant is to track the quality of the food. No one likes soggy fries, right? Serving bad fries is one of the worst things a fast food place can do.

Maxx Garrison and JP Ishaq laughing during OnLogic Live - A Vision for the Future

A camera in the kitchen could track the quality of the food. When were the fries made? How long have they been sitting there? Were they there too long? Should you throw them away rather than serve them? These are all things that improve quality and improve the customer experience. And the best part is, it not only improves the customer experience, it also improves things for the enterprise, because now you're more efficient, you provide better service, and you actually have data points on how all your restaurants are doing.
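A minimal sketch of that freshness check, with a made-up hold-time threshold: record when the camera sees each batch finish, then flag anything held too long:

    import time

    MAX_HOLD_SECONDS = 7 * 60           # hypothetical: discard fries held over 7 minutes

    batches = {}                        # batch id -> time the camera saw it finish

    def batch_finished(batch_id):
        batches[batch_id] = time.time()

    def stale_batches():
        now = time.time()
        return [b for b, t in batches.items() if now - t > MAX_HOLD_SECONDS]

    batch_finished("fryer-2-0931")
    print(stale_batches())              # empty until the hold time elapses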

OnLogic: Yeah, I imagine that in that situation, if you wanted to do the same thing without AI, you would need a control room filled with people monitoring the lines: "There are 5 people in line. You've got to get more fries moving!" We don't have those resources in these businesses. AI can step in and really augment the business in ways that weren't possible in the past.

Johnny: And the best part is, it makes life easier for the workers, because now they're not scrambling. They know ahead of time how much food to prepare. And this goes to another point: it can reduce food waste, because you're not over-preparing. You have a much better idea of how much food you need at that particular moment.

Brownfield AI implementation 

OnLogic: One of the unique challenges we see is with older manufacturing sites. These are not brand-new factories, and yet they would like to bring in that AI capability. Is that feasible with the technology we have now?

Johnny: Oh, absolutely. I think one of the biggest growth areas is existing infrastructure, often called brownfield infrastructure. How do you upgrade it? The truth is, no one is going to overhaul an entire existing factory to make it smart. Why would they, if they have equipment that has been performing perfectly fine for 20 years?

This is where OnLogic IoT gateways with multiple interfaces come in. They have the ability to interface with PLCs and traditional older machines. You can pull data from them and digitize it.
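For example, many legacy PLCs speak Modbus, and a gateway can poll their registers to digitize the data. The sketch below uses the open-source pymodbus library (an assumption; the conversation doesn't name a protocol or library), with placeholder addresses and register map:

    from pymodbus.client import ModbusTcpClient   # assumes pymodbus 3.x is installed

    PLC_IP = "192.168.1.10"                       # placeholder PLC address

    client = ModbusTcpClient(PLC_IP, port=502)
    if client.connect():
        # Placeholder register map: 4 holding registers starting at address 0
        rr = client.read_holding_registers(address=0, count=4, slave=1)
        if not rr.isError():
            print("digitized values:", rr.registers)
        client.close()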

Leverage software to pull data together

Johnny: In addition, OnLogic has a great partnership with Inductive Automation. Their Ignition software pulls all that data together. At the end of the day, data is not decreasing, it's growing. Now, in these older factories, there is a way to pull all that data together and digitize it. This is where the technology gets really exciting, because with all this data in one place, you can start making sense of these older factories and improving efficiency. In the past, you just couldn't see the information.
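Conceptually, that aggregation step normalizes every source into timestamped tags; platforms like Ignition do this at scale through their device drivers. The sketch below is a purely hypothetical illustration of the idea, not Ignition's API:

    import json
    import time

    def collect_tags():
        # Hypothetical readers for each data source on the line
        return {
            "plc/extruder_temp_c": 212.5,     # from the PLC via the gateway
            "vision/defect_rate": 0.02,       # from the machine-vision system
            "ambient/humidity_pct": 61.0,     # from a floor sensor
        }

    record = {"site": "plant-a", "timestamp": time.time(), "tags": collect_tags()}
    print(json.dumps(record))    # one normalized, timestamped record per poll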

OnLogic: When we’re looking at these older factories, you might have an older PLC that’s controlling equipment and you have a PLC and then a gateway and an edge server in the cloud, and we’re looking at future and new factories. How do you see that kind of topology of equipment evolving? Are we going to consolidate into soft PLCs with accelerators built in, or do you still expect to see that kind of break out of functions?

Move to edge for real time decision making

Johnny: I still see some breakout, but I do see compute moving more and more to the edge, because one of the key things about moving to the edge is that you want to make decisions in real time. When you're doing defect detection, you want to be able to see what's going on and adjust the machines in real time. So more compute is moving to the edge to make that possible.

But the second part is that you also want to take all this correlated data and put it into the cloud. A hybrid architecture will combine real-time decision making at the edge with correlation and analytics in the cloud.