What are edge servers and how are they used?

In this I/O Hub video, Product Manager Tasha Dickinson answers the question, ‘what are edge servers?’, and goes into detail on how edge servers are solving today’s most challenging hardware problems. A transcript of the video can also be found below.

Edge servers refer to computers that reside at the “edge” of a given network. In other words, they are physically close to the systems or applications that are creating the data being stored on, or used by, the server. Click here to explore our line of edge servers.

Who created edge servers?

Christopher: (00:06)
Welcome to the I/O Hub where we talk about the latest industry trends, hardware and technologies around industrial and embedded computers. I’m Chris and joining me today is Tasha Dickinson, Product Manager here at OnLogic (formerly Logic Supply).

Today we’re going to be talking about edge servers. But before we get into that, can you give us just a quick history on how servers came to be?

Tasha: (00:28)
Definitely! Servers have been around for quite a while. The father of the web server, Tim Berners-Lee, was working at CERN in Switzerland in 1989, and he had a problem. Tim needed to get data from his scientists all over the world and across all of the CERN labs, and he wanted to make sure he could access it no matter where he was.

At the time, each computer used different languages; sometimes you had to know several just to get data off of these different machines. His solution was a single repository that amassed all of this data and let him push it out to all of his scientists. That's really where servers started, in this content-caching kind of idea.

Tim quietly started the whole server history that we have to this day. He was even brought out during the Olympics, acknowledged by the Queen, and knighted.

He tweeted at the time, "This is for everyone." I just love that the person who started it all gave servers such a rich history, and I think we're continuing that here today at OnLogic (formerly Logic Supply) with some of our edge servers.

What are edge servers?

Christopher: (01:50)
So Tasha, enlighten us. We know that servers are computers that act as a central source of information and distribute resources across a network. But what are edge servers?

Tasha: (02:03)
When people hear the word server, most think of a cold room in the back of a building, or an underground cement bunker. But that's the model of 10 years ago, before the proliferation of the cloud and the rapid growth of the internet.

Edge servers are really the next step, what we need next. I talked about Tim, the problem he had, and what he did about it. Today, people are looking at artificial intelligence, machine vision, and deep learning, and we're finding that a lot of the analytics that need to happen are now happening farther out from the data centers.

It's also important to think about the problems facing our software providers, the people inventing what happens next in these areas. An edge server is able to put a lot of the computation that's needed right at the edge, right where those things are happening in our everyday lives.

That could be a NEMA cabinet, custom cabinetry in the middle of the desert, a closet, a warehouse, a desk, or right in the middle of a welding studio. Those are all environments, and problems, that we're trying to solve with edge servers.

Why use edge servers?

Christopher:
Let's talk about some of those. You're talking about taking what has traditionally lived in these large server rooms and moving it closer to where the action is happening. What are the benefits? Why are people doing that?

Tasha: (03:45)
Yeah, exactly. There are a couple of different problems we've been hearing about from our customers. As a lot of this technology has developed, we've seen that decision making is big, especially with artificial intelligence.

Say you need to react to something. Let's say you have a robotic arm on a manufacturing line, and it needs to react quickly because it's too close to a person, for safety reasons. Or it needs to stop a line because a vision system has identified something.

It needs to make that decision quickly. The system doesn't have time to send data to a large analytics center first; it needs to stop immediately, because robots don't feel.
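The latency argument above can be sketched in a few lines. Rather than round-tripping camera data to a remote service, a local process on the edge server evaluates each frame's detections and issues the stop signal on the spot. Everything here is an illustrative assumption (the detection format, the safety threshold, the function names), not OnLogic software.

```python
# A minimal sketch of an edge-resident safety check that decides locally,
# with no cloud round trip. The threshold and detection format are
# hypothetical assumptions for this example.

PERSON_TOO_CLOSE_M = 1.5  # assumed minimum safe distance, in meters

def should_stop(detections):
    """Return True if any detected person is within the safety distance.

    `detections` is a list of (label, distance_m) tuples produced by a
    vision model running on the edge server itself.
    """
    return any(label == "person" and distance < PERSON_TOO_CLOSE_M
               for label, distance in detections)

# The decision happens in microseconds on the local machine; a cloud
# round trip would add tens to hundreds of milliseconds of network latency.
print(should_stop([("person", 0.9), ("pallet", 3.0)]))  # True -> stop the arm
print(should_stop([("pallet", 3.0)]))                   # False -> keep running
```

The point is not the trivial comparison but where it runs: on the same machine as the camera feed, so the stop command never waits on a network link.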

Using edge servers to offset cloud computing costs

Christopher: (04:34)
So there are two pieces to this, right? That's the crux of the situation.

Tasha: (04:38)
Exactly. Moving that decision making next to the robot, on-site, is important. There's the need for that computation. But cloud services and big data have also been growing exponentially.

There are more nodes and data collectors pushing data up to the cloud than ever before, and people are realizing that pushing unfiltered data to the cloud is expensive.

Christopher: (05:18)
So then there’s that cost management piece, the cost of transmitting data…

Tasha: (05:23)
Well, it's both. Cloud pricing is based on the amount of data you transfer and the amount you store, so you want to store only what you really need and use. Making the computational decisions ahead of time, before sending data up, saves you money on both transfer fees and cloud storage fees.
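The cost-saving step described here, deciding at the edge what is worth sending, can be sketched as a simple pre-filter. The field names and the threshold below are assumptions made up for the example; they are not a specific cloud API or OnLogic product.

```python
# Illustrative sketch: pre-filter sensor readings on the edge server so
# only actionable records leave the site. Field names and threshold are
# hypothetical assumptions for this example.

def filter_for_upload(readings, threshold=80.0):
    """Keep only out-of-range readings; routine ones are discarded locally."""
    return [r for r in readings if r["value"] > threshold]

raw = [
    {"sensor": "temp-01", "value": 72.0},
    {"sensor": "temp-01", "value": 95.5},  # anomaly worth sending upstream
    {"sensor": "temp-02", "value": 70.1},
]

to_upload = filter_for_upload(raw)
# Only 1 of 3 records is transmitted, cutting both transfer and storage fees.
print(len(raw), "collected ->", len(to_upload), "uploaded")
```

In a real deployment the filter would be a model or rules engine rather than a single threshold, but the economics are the same: every record dropped at the edge is a record you neither transmit nor pay to store.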

Christopher: (05:46)
You’re saying that it’s trimming fat at the source. So what’s being transmitted is leaner, more efficient. It’s a more cost-effective way of dealing with your data.

Tasha: (05:58)
It's definitely an efficient way to do it. Holding on to extra data isn't going to help; you want to find out what you need, quickly. We're able to help people do that with edge servers.

A lot of people realize they need this computation right there on the line, right next to a robot, for example. But you can't exactly roll a rack up to a robot and have it sit there. You need a specially built server, and edge servers get that level of compute into a small, reliable form factor that can sit right next to that robot.

Moving the data center to the edge

Christopher: (06:52)
Right, because we're talking about taking what was in a massive data center and making it small enough to go into a box or a robot enclosure.

Tasha: (07:02)
Yup, a small NEMA cabinet, exactly. What we're really trying to do is pack that amount of compute into such a small box and still be able to connect to it. Connectivity is the other big thing, because once all that information gets generated, sorted, and reacted upon, you still need to connect up to the cloud, or to some sort of intranet, to move that data around.

Data centers are usually highly connected, but they also have massive storage attached to them. So having that hybrid, enough storage on-site, even hot-swappable storage right there, and being able to pull your data or send it up to the cloud via Wi-Fi, 4G, or fast wired internet, is important.

That's the kind of problem we see, and that's really what we've tried to do in defining what an edge server is: high connectivity, high compute, and some storage capacity, or at least flexible storage capacity, with the right level of processing power for the job.

What does an edge server look like?

Christopher: (08:10)
We’ve created this visualization of what the data center looks like. But what does a typical edge server look like?

Tasha: (08:24)
I don't think there is one; an edge server fits the location you need it to. We've concentrated on multiple platforms: powerful compute in a small box or in a rack, with programming on one platform across all of it. This is where we start thinking more about form factor, from fan-cooled boxes to fanless systems to rackmount servers.

Maybe you need a rackmount with multiple CPUs, more storage, or more connectivity. Thinking about that whole breadth of what our customers really need to solve their problems is important as we craft what an edge server is.

Christopher: (09:26)
How does that transfer to the actual systems themselves?

Tasha: (09:30)
The edge servers we've been developing have a wide range of form factors to meet the needs of different environments. For example, we have systems like the Karbon 700, which offers Xeon processing and operates at up to 70 degrees Celsius.

It's an amazing thing: a system with a Xeon processor and a ton of connectivity. That matters because once you make those decisions, you still need to send that information up to a computing cloud. Having a system like this in your portfolio is an important part of deploying your solution to solve your problems.

What features make up an edge server?

Christopher: (10:34)
So something like this Karbon 700… What makes this an edge server? What can we do with this?

Tasha: (10:46)
The idea of an edge server is the compute, the connectivity, and being where it needs to be. This can handle up to a Xeon E-series processor, which gives you a lot of compute. But it also handles expansion: you've got standard PoE, and ModBays give you really fast expansion.

You can add Wi-Fi and 4G cellular to this. That connectivity at the edge is what enables the decision making; being connected to the network or the cloud is an important part of it.

Christopher: (11:23)
So what I'm seeing here is the extreme end of an edge server. We've got some other toys here too. What's this?

Tasha: (11:40)
This small but mighty system is the MC850. It's an amazing box because, for its size, you're getting a scalable Xeon processor; you can go up to 28 cores and 48 PCIe lanes for heavy computing, especially for AI. This is really built for AI. You have the ability to expand it and add a GPU. It has tremendous power for such a small box, and it can go on a desk, in a NEMA cabinet, or wherever you need it.

We've seen everything from virtual reality to parking garages to video analytics, intelligent NVRs, and other applications that require a lot of visual processing. This embodies what an edge server is: compute plus connectivity in a small box that allows it to go anywhere.

Early iterations of edge server technology and design

Christopher: (13:07)
And this is interesting, because it's like the MC850's shorter little brother. Can you tell us a little about it and what went into the design?

Tasha: (13:23)
Yeah. The reason I brought this out is that it was a project we did with Intel, and it's really the start of our story in edge computing. We created it with Intel around the idea that computing is starting to move out onto the edge.

It was the first iteration, a couple of years ago, and it got us started on this journey of asking where compute happens. Intel brought this to us and said, 'Listen, we have a lot of customers telling us they need more than server processing all the way in the back room. They need processing right there on the edge.'

That's how it came into existence. I heard a story once about a guy in a meeting with a whole bunch of people, trying to explain cloud computing: there's the cloud, and all this data running around, all over the place, but it's starting to dissipate more and more, moving farther out. And the guy said, 'Oh, so it's not so much a cloud as a fog.' And that's how we ended up with a fog box.

From fog boxes to edge servers

Christopher: (14:27)
That's actually the title of it: the fog reference design. So what did we learn from that project? I think it was for video analytics, image and object recognition. What did we learn, and why did we do it?

Tasha: (14:49)
To start, it came from the needs and problems out there that we're trying to solve, this idea of compute at the edge. We had customers coming to us asking for solutions; actually, in this case, they were going to Intel, and Intel came to us to work with them on the project.

What we learned once it was deployed is that we needed to think not only about compute but also about connectivity and scalability. A project like this was a great start, but it couldn't scale up the way a lot of people needed. A lot of people understand that we need compute at the edge.

It needs to work with the nodes you already have deployed, and it also needs to grow across your entire organization. From that, we've grown the portfolio into 1U, 2U, and 4U servers that scale to somebody's operation, so we understand from start to finish how that data path lives.

Examples of edge server applications

Christopher: (15:49)
So working with Intel on this reference design, we learned that creating networks of computers pushes compute and connectivity closer to where things are happening, and that's where we see benefits in speed and response time. Can you give us some examples of what edge servers look like in action?

Tasha: (16:16)
Right now, in a packing and separation plant, we have the MC850 in a NEMA cabinet, because it's small enough to fit. The system is making decisions for a robotic arm that's doing vision analysis, and it's able to make those decisions right there, on location, in real time.

Our 1U servers are going into a custom cabinet in the middle of a desert, protecting the location and doing security and analysis right there on site.

There are a lot of other neat applications around decision making where a data center isn't going to make sense, but there are also examples in security where data centers are part of the solution as well.

That scalability allows innovators to handle real-time decision making, analytics, and processing onsite and remotely through a variety of deployment methods.

Christopher: (17:54)
So it's almost like an extension of edge computing, but instead of lightweight, smaller systems, you're using more powerful, more capable systems to do it.

Tasha: (18:12)
Yes. With edge servers, we're putting that power all the way out at the edge. People are requiring it, and the problems they've run into with processing and analysis on the edge are something we've been able to solve with edge servers.

Using edge servers to solve problems

Christopher: (18:31)
In other words, the smarter the industry gets, the more powerful the computers it needs to actually make things more intuitive.

Tasha: (18:40)
Definitely. Think about where the industry is really moving right now in terms of video analytics. People are going out and making decisions with video analytics, in security for example. If you're at an event, you can do a lot of the analytics right there, detecting problematic people or problematic occurrences on the spot.

For example, we're working with a solution provider right now, offering our solution at the edge to do video analysis that prevents crime, theft, and other loss events. That's just the first foray into what will be happening in the future. And obviously, self-driving cars are moving analytics to the edge in a very big way.

A lot of our solutions, especially around the Karbon 700, are really moving that kind of computing out to the edge. It's riding in cars, all the way to shipping ports, and we're in all kinds of warehouses and locations, pretty much wherever you go.

I heard recently that we're helping with some analysis, as well as transmitting data, for the Tour de France. On occasions like that, it's fun to hear that our computers are really making a difference out on the edge.

How do software and edge servers work together?

Christopher: (20:08)
As you've been developing our edge server platform, what has excited you about that experience and about where the industry is headed as we try to help solve some really challenging problems?

Tasha: (20:28)
One of the cool things I get to do as a product manager is listen to what's going on around me: the market, the industry, even our suppliers, to hear what's exciting and what's coming up. This allows us to stay on top of industry trends and take a leadership role in this area.

So as we've defined what an edge server is, being able to listen to all of those different voices has been exciting. We go out to trade shows, and our partners are so excited to be able to say, 'We really had a problem doing analytics right there at the edge, and now we have a solution.'

Edge server solutions for innovators

Christopher: (21:13)
And you’re talking about software partners, right?

Tasha: (21:15)
Yeah, folks who need bare metal to run their programs. A lot of our hardware customers are similar to our software partners: they're solving problems, maybe through software, and we're able to make that a reality with our hardware. We're the last bit that makes your ideas and your thoughts come together.

We can do that because we're able to design hardened industrial boards and systems. In addition, we work with our suppliers all the time to ensure a long life cycle, so people can have the solution they're dreaming of.

And to have it last, so you don't have to worry about whether it's going to work, because you know it will. We know that our hardware makes it possible on that platform. One of the other things I really enjoy about my job is looking at where the industry is going and what's coming next.

We've talked a lot about compute, and right here I have this Xeon Scalable Cascade Lake system from Intel, which we managed to put in the MC850. But there are so many other platforms that our customers are using and getting excited about. So in addition to Xeon and Intel in general, we're now able to move into AMD.

AMD edge servers

Tasha: (22:46)
AMD presents a whole other set of opportunities for us. Coming up soon, we're going to start talking more and more about AMD, what they're bringing to the game, why people should use AMD, and why people should use Intel.

You know, one of the things I think we do really well here at OnLogic (formerly Logic Supply) is finding the right solutions to the problems our customers are asking about, or aren't yet asking about. So growing a portfolio that can meet all of those various needs is important.

AMD is going to be part of that solution. We're also working with software partners like Gorilla and Ignition, and they're going to become part of our solution for ThinManager. All of those things are part of the solution to the problems our customers are seeing.

Other technologies to look for in edge servers

We'll be doing more of that, especially in the edge server line: more expansion options, because our customers are coming to us with different connectivity needs.

We're getting into 5G wireless, but we've also seen requests for fiber, integrating fiber into some of these smaller boxes as part of that connectivity story. Fiber is old hat, but packaging it into the sizes we're seeing is a new opportunity for us and for the market in general. So we're excited about that.


Christopher: (24:00)
Right, because it's not a one-size-fits-all solution. There are different ways to solve these problems, and what you spoke about is applying these different technologies, connectivity, processing, and form factors, to products that solve those specific problems.

Tasha: (24:25)
Yes, and all of us are excited about that. Working on it, and then being able to bring it here and talk with you about some of these new things, will be exciting.

Christopher: (24:45)
Thank you, Tasha, for joining me and waxing poetic about edge servers. This concludes our first talk on edge servers. Join us next time on the I/O Hub, where we'll talk about edge server reliability and durability, and what makes them survive in any environment.
