The Past, Present and Future of Edge Computing: A Recap of OnLogic Live – Edge of Tomorrow

By Sarah Lavoie · Categories: Industrial IoT · Published On: June 24th, 2021

The past, present and future of edge computing was our topic for OnLogic Live: Edge of Tomorrow. We talked about how edge computing power and innovative software are driving the current modernization of industrial spaces. Joining us were Rick Lisa, Director of IoT Business Development, North America at Intel, and Travis Cox, Co-Director of Sales Engineering at Inductive Automation.

A photo of Patrick Metzger, Rick Lisa, and Travis Cox

Watch OnLogic Live – Edge of Tomorrow

We encourage you to watch the entire event, or check out our recap of the Q & A below.

What is Edge Computing?

Cloud computing has historically been a major focus for many applications, wherein data is transferred to off-site servers for processing and analysis. Edge computing, on the other hand, is where data is collected, processed and acted upon closer to the source. Edge computing has increasingly been seen as a valuable component of a holistic industrial automation strategy across many industries.

Evolution and Value of Edge Computing

How have you seen edge computing evolve from the early days? What value have you seen it provide to production facilities in the real world?

TC

When you look at SCADA and how we historically built SCADA systems, we would deploy a central server and connect it directly to all of the intelligent devices or PLCs out in the network. We would use poll-response protocols to get that data; for example, every second we'd bring that information back up to a centralized control system. That inherently has some issues, especially in industries where the communication isn't solid.

As an example, in the wastewater management industry, there are booster pumps communicating data over cellular, satellite or microwave. If that communication network happens to go down, all the information is lost.

The edge became really important because it let us deploy part of the stack at the edge of the network to get functionality like store and forward – caching data locally and then forwarding it back up to our centralized system – or to provide a local HMI. Classically, an HMI has kind of always been at the edge. It's right there talking to those devices to give you that critical control system.
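
To make the store-and-forward pattern concrete, here is a minimal sketch in Python of the idea Travis describes: readings are always cached locally first, and the backlog is forwarded only when the uplink is available. The class and the `send` callable are illustrative, not part of any particular product.

```python
class StoreAndForward:
    """Cache readings locally; forward the backlog when the uplink is up."""

    def __init__(self, send):
        self.send = send    # callable that pushes one reading upstream (cellular, satellite, etc.)
        self.backlog = []   # in a real deployment this buffer would be disk-backed

    def record(self, reading):
        self.backlog.append(reading)  # always store first, so nothing is lost if the link is down
        self.flush()

    def flush(self):
        while self.backlog:
            try:
                self.send(self.backlog[0])  # oldest reading first
            except ConnectionError:
                return                      # link still down; keep the backlog and retry later
            self.backlog.pop(0)             # drop a reading only after successful delivery
```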

Today, communication capabilities and bandwidth aren't the issue. The goal today is to get actionable insights from the data. We need the ability to efficiently pull more data from the source, at faster rates, to the enterprise.

Facility Upgrading – Is Rip and Replace the Only Option?

There's a common perception that upgrading to a smart facility requires a complete overhaul – a costly rip and replace of controllers. Can edge devices help to upgrade those systems? How can facilities protect their existing investment?

RL

Factories typically control processes and automation using industrial PCs and industrial servers.  This is maturing into edge computing and cloud computing. More data center-like capabilities are being deployed at the edge. Machine learning, analytics, and predictive maintenance activities are now possible. Now we’re seeing a move toward digital twins and prescriptive environments where the systems are self-healing. 

Businesses and the opportunities in front of them are growing beyond basic control automation to more of a data processing environment. Industrial PCs and industrial servers are offering computing that is more capable. It doesn't require that a facility rip out all of its equipment to put in new equipment. We're able to connect this higher level of compute technology to the legacy equipment. Factories can seamlessly grow the compute complex on top of the existing infrastructure rather than replacing it. It's a pretty seamless approach to the currently deployed technologies that brings all new functionalities to the local environment. It can also be a seamless integration with the centralized or core enterprise compute resources of the company, whether they be on-prem or off-prem.

TC 

When we look at the digital transformation journey, the worlds of OT and IT have been segregated. Today, there is a demand for the data – IT and OT need to come together. Everyone wants access to the data and they have the ability to do more with that data. 

In the past, the management side of a business has tried to obtain insights from the tremendous amount of OT data using middleware or scripting. However, this can disrupt operations. To minimize these disruptions, we don't have to rip and replace. Rather, we can now introduce technologies that fundamentally look at the data from an edge-first standpoint. This offers a single source of truth that leverages today's modern open-standard technologies and communication protocols like MQTT. Now the data can be securely and efficiently transferred to the cloud, where the business side can take advantage of that data to extract value.
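
As a rough illustration of what "edge-first" publishing looks like in practice, the sketch below shapes a single tag reading into a topic and JSON payload of the kind an MQTT client would publish to a broker. The topic namespace and payload fields are assumptions for the example, not a specific Sparkplug or Ignition schema.

```python
import json
import time

def make_edge_payload(site, line, device, tag, value, unit):
    """Shape one tag reading into a topic and JSON payload for an edge-first publish."""
    topic = f"plant/{site}/{line}/{device}/{tag}"
    payload = json.dumps({
        "value": value,
        "unit": unit,
        "timestamp": time.time(),  # context travels with the data
        "quality": "GOOD",
    })
    return topic, payload

# Example: a pressure reading ready to hand to an MQTT client's publish() call.
topic, payload = make_edge_payload("plant-a", "line-2", "pump-07",
                                   "discharge_pressure", 42.7, "psi")
print(topic, payload)
```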

RL 

The OT world “of old” was a closed environment often with proprietary communication protocols. Whereas the IT environment was much more open and communicated using standards. What’s happening now is that the computing environments on the edge have become more and more open – more IT like – while they retain many of their OT underpinnings. We see more and more companies merging these two environments. Technology is enabling that with complex computing capabilities and new communication capabilities including 5G technology. These don’t have to be deployed as disruptive technologies, they can be seamlessly married into the current environment. 

The Technologies That are Part of Edge Computing

Right sizing the hardware and having the right technologies is so important. What are some of those technologies that can benefit an edge computing application? 

RL

There’s going to be a greater dependence on higher bandwidth networks. 5G will become a greater and greater piece of the equation. Scalable compute is important – the ability to right-size the computing complex for the size of the workload. The ability to seamlessly scale from a small compute footprint to a larger compute footprint that can then actually scale seamlessly to a multi-processor or multi-platform environment. 

I often work with companies who say they have one problem to solve. Once it has been solved, then they have three more problems, and then five more, and then ten more. The demand for the services and applications required at the edge, together with those in the centralized core of the company, is growing endlessly. So the compute complex has to be able to start at a point and grow out. That is why scalable compute capability, with the unlimited ability to grow and take on capacity, is so important.

Other technologies that are a benefit include artificial intelligence, machine learning and a digital twin of the environment. That's the ability to take an analog environment and represent it digitally. It allows you to compare and contrast the ideal world and the real world. You can then use that digital twin data to advance preventative maintenance, predictive maintenance and prescriptive self-healing. These augment worker experiences. So the ability to essentially model the real world against the digital world is going to be an important part of this.

The vast amount of data we need to move is going to drive the need for data compression, data management and device management – the ability to provision, support and maintain all these different connected devices. As you look at these different activities with user rights management, digital rights, access controls and security – if we can't trust the environment, there's no way we can execute in that environment reliably. Moreover, it's not only security within an organization, in terms of access control and rights, but also external cyber threats. The news has recently been full of examples of cyber threats happening today. There's actually quite a bit of technology that we're going to be deploying at the edge that hasn't been considered up to this point. It really does change the dynamic significantly as we go forward.

TC

It's incredibly important to fix the architecture – to get away from OT applications talking directly to devices. We want to get to a place where we have the devices publishing data into infrastructure with open standards. Then we can leverage the data in a way that provides all the context you need – it will assist with the digital twin. We want all the data at the edge, and more of it, being published up. We have to look at it from an OT-first mindset. You have to fix the architecture, then look at next steps. It's important to select, at the beginning, the right edge computer that can not only solve the challenges you face right now but can also solve the challenges of the future.

RL

One of the things we have to keep in mind is that these compute domains, the edge, the core, the cloud, need to be a seamless compute continuum. It has to be serving the needs of the enterprise across all of its operations – not only its facilities or its premises, but also the products and services that they build and deploy for their customers. Also, the processes that they run within the company and the people integration that goes with that. 

At Intel, we call these the "4 Ps": processes, premises, people and products. We look at the seamless integration of IT equipment as a backbone to that whole compute continuum. They are not islands of compute. So this is an architectural approach, to Travis's point, that we really have to help companies navigate to deliver benefits across the entire corporation – not just POC (proof of concept) pilot projects. No pilot purgatory projects.

TC 

Yes – where the OT managers and the IT managers are fundamentally working together to solve the challenges that the business requires. They can’t just look at their own domain, everything is connected edge to cloud. There’s a tremendous benefit we’re gonna get from accessing and learning from the data. However, sometimes there is resistance to this change. So there needs to be acceptance across the board.

OnLogic 

If you are looking for that solution today, it’s critical to ensure that you have the right device with the right I/O, right capabilities with the right computing power and the expansion needed to be able to scale. 

The Value of a Digital Twin

Where do we start if we want to head toward a digital twin? Can you talk about the digital mirror? How can it be beneficial?

TC

In order to reap the benefits of data analytics, we have to be looking at our data in the right context. This data forms the basis of the digital mirror – a reflection of the business in the form of data. And the data is discoverable at every layer of the business, including the cloud. Having all the data, along with the right information and context, allows us to go a lot further. That really comes down to defining the models and the assets we'll be working with.
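
One way to picture "defining those models and assets" is a simple hierarchical asset model, where each piece of equipment carries its tags and context with it. The sketch below is a hypothetical example of such a model; the asset and tag names are illustrative.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Tag:
    name: str
    unit: str
    value: float = 0.0

@dataclass
class Asset:
    """One node in the asset model: equipment with its tags and child assets."""
    name: str
    tags: Dict[str, Tag] = field(default_factory=dict)
    children: Dict[str, "Asset"] = field(default_factory=dict)

# A pump modeled once, so the edge, SCADA and cloud layers all share the same context.
pump = Asset("Pump-07", tags={
    "flow": Tag("flow", "gpm"),
    "vibration": Tag("vibration", "mm/s"),
})
site = Asset("Plant-A", children={"Pump-07": pump})
```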

RL

With a digital twin, you get into not just modeling but execution of an environment. It allows a facility to evolve past preventative and predictive maintenance to a state of prescriptive maintenance. We also get to a state of worker augmentation. We actually start to build human machine interactions – not just a human machine interface (HMI). You start actually putting the machine and the human engagement together simultaneously. We will begin to trust machine execution more and more and more. This will lead you to an autonomous system. It will go beyond self-healing and self-prescribing to a fully autonomous executing environment. 

We’re seeing fully autonomous solutions in some areas such as smart building technology. I  have one example where hundreds of people left a building in New York City at the same time. If you remember the eclipse that was visible a couple of years ago – all the employees left to check it out. The automated building technology noticed the environment change after all the people walked out and it adjusted its functions, the air conditioning for example, and it did it all autonomously in a matter of minutes. The building itself recognized that a thousand people walked out of the building and the building adjusted. This is an example of how a rapid response to the data can enable businesses to save energy and save money. 

I think we're going to get to that level where we'll trust what the twin is telling us about the execution world. We'll let the machine do the work, but we always have the ultimate control back in the human-machine interaction. Getting to this place takes growth that companies need to start planning for today.

Avoiding the Pitfalls of Digital Transformation

Are there any common mistakes that should be avoided early on when thinking about project requirements? What usually gets in the way of scaling later on?  

RL

I think a big mistake is thinking too small. You really have to think about the enterprise impact of what you're doing and the enterprise value. Many companies will do tiny little projects that have a small impact and lack corporate investment. They need to tie back to the greater purpose of the company. What you've got to think about as you're building out these programs is not to silo activities. Think holistically about how the project can affect the entire company. I'll come back again to that "4 P" concept of processes, premises, people and products. If you think across the "4 Ps", then the project has relevance for the business.

TC

I also think it's very important to put the right solutions in place early. We talked about hardware – putting the right type of hardware in place at the edge. That will help solve not only the challenges of now but also those of the future. A lot of times we see a business choose the bare minimum, which may solve the current challenges, but it's not going to let the business grow and scale. Scalability is incredibly important here. Plus, we have to have solutions that are transparent. They need to leverage open technologies that we understand and not lock us into proprietary technology or a single vendor. We need to be able to access different technologies and have interoperability. It's important to think holistically and put the right solutions in place at the beginning.

OnLogic

It’s important to have a relationship with your vendors and to work with them in a consultative manner. It’s important that your vendors understand your vision of what you need and where you are going to ensure that you are getting the proper hardware and technologies.

Software Integration

Attendee question: Do any of your products integrate with NetSuite?

TC

With our product Ignition, we offer a lot of integration. We offer drivers for all sorts of different PLCs, and we leverage protocols like MQTT and OPC UA. With other products, there are different integrations, like REST and SOAP APIs, or standard database communication. There are some cases where the integration is direct, but more often it's based on these open APIs or open standards.
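
For readers wondering what an open-API integration looks like in code, here's a minimal sketch of pushing a record to a generic REST endpoint using only the Python standard library. The URL, path, auth header and record fields are placeholders; a real NetSuite (or other ERP) integration would follow that vendor's documented API.

```python
import json
import urllib.request

def post_record(base_url, token, record):
    """POST one JSON record to a hypothetical REST endpoint."""
    req = urllib.request.Request(
        f"{base_url}/records",                   # placeholder path
        data=json.dumps(record).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # placeholder auth scheme
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example usage (against a hypothetical endpoint):
# post_record("https://erp.example.com/api", "TOKEN",
#             {"work_order": "WO-1001", "good_count": 412})
```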

RL

It's important to go back to the problem of starting too small. If you're not thinking about the tools at the enterprise level that you want to marry together with the tools at the edge domain, or even at the appliance level, then you end up with inconsistencies. But it is possible. We are helping companies today make a deep edge-domain appliance seamlessly compatible with back-office tools like NetSuite or others that are available in the market. So I would say yes, compatibility is built into the systems architecture if it's designed the right way. And the seamless migration of data back and forth between multiple tools is always possible – if you plan ahead.

TC

I'll just add one more quick thing: today, there's a methodology called DataOps. We're all familiar with DevOps from a development standpoint, but now there's DataOps, which is a methodology for getting data moving. Fundamentally, it's based on standardization, or normalization – being able to freely bring information between these different systems in the right context. That is starting to gain momentum today. There are many products and tools out there that help with that methodology.
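
The normalization step at the heart of DataOps can be as simple as mapping each system's native tag names and units onto one shared model, so the same reading means the same thing everywhere downstream. The tag names, mapping table and unit conversion below are illustrative assumptions.

```python
# Map vendor-specific tag names onto one canonical tag plus its source unit.
TAG_MAP = {
    "PLC1.PMP07_PRES": ("pump-07/discharge_pressure", "psi"),
    "RTU9.P7_DP_KPA":  ("pump-07/discharge_pressure", "kpa"),
}

def normalize(source_tag, raw_value):
    """Return the reading in the canonical model, converted to psi."""
    canonical, unit = TAG_MAP[source_tag]
    value = raw_value * 0.145038 if unit == "kpa" else raw_value  # kPa -> psi
    return {"tag": canonical, "value": round(value, 2), "unit": "psi"}

print(normalize("RTU9.P7_DP_KPA", 290.0))  # {'tag': 'pump-07/discharge_pressure', 'value': 42.06, 'unit': 'psi'}
```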

RL

Basically, what you're talking about is technologies that run on servers – environments that are installed and deployed on server environments. So when we talk about the edge, it's just a scaled-down data center environment deployed forward, close to operations. But it has this unique ability to talk to the OT world on one side and the IT world on the other. So if your edge compute environment is built to seamlessly marry into the IT environment, then the architecture matters. Intel has been talking about what we call the digital transformation enterprise architecture. It's not just about building an edge domain that can talk to an IT or central business domain, it's about actually integrating the path across those two domains so that it's seamless and is seen as a natural continuum of computing and analytics to support the needs of the business.

Edge vs Cloud

When is edge computing not enough and what functions still belong in the cloud?

RL

You have to look at the things that would drive the need for a stronger edge compute domain. When we're talking with companies, cost is always an important factor. Moving and migrating data from edge to cloud and back again can be extremely costly, especially if it's off-prem and has to come back on-prem. Latency is also a big issue. We're seeing that in our own factories – sometimes moving data off-prem and bringing it back on-prem just doesn't happen fast enough. Bandwidth is a big issue. If you start moving petabytes of data from a factory up into a cloud, it requires a huge amount of bandwidth to support that. In an autonomous world you need interoperability, and if you open portals to your organization, you open up opportunities for cyber threats and cyber attacks. I think these are all issues you have to look at.

It's important to remember that the edge is not unlike the cloud, and it's not unlike the core. It's actually designed to be architecturally identical to those environments. In truth, the edge can grow to be as big as you need it to be, or as small as you need it to be, in order to meet the cost and operational requirements of the business. It comes down to operational efficiency, security, cost and bandwidth. These things are not separate domains. They have to be viewed as an integrated set of technologies.

TC

I think it really comes down to a balance between the two. You have to look at the business requirements you have for the edge and understand what you would need. There are maintenance considerations as well. That balance comes down to how we solve what we need to do locally, how we bring in the cloud to augment that, and how everything works together in that continuum.

To see more conversations like this, be sure to subscribe to the OnLogic YouTube channel. Ready to start your own digital transformation? Check out our full line of hardware. 

Learn more about OnLogic computers with Intel’s newest Atom, Celeron, and Pentium processors designed for the IoT.

Learn more about OnLogic computers with Ignition onboard.

Editor’s Note: Inductive Automation has ended their Ignition Onboard program. Ignition licenses must now be purchased directly through Inductive Automation. While the IGN versions of our solutions are no longer available, our computers remain a great fit for use with Ignition software. Explore our recommended hardware here.

Get the Latest Tech Updates

Subscribe to our newsletters to get updates from OnLogic delivered straight to your inbox. News and insights from our team of experts are just a click away. Hit the button to head to our subscription page.


About the Author: Sarah Lavoie

Sarah Lavoie is a content creator for OnLogic. When not writing, she can usually be found exploring the Vermont landscape with her camera looking to photograph something amazing.