The Edge Will
Deliver The Future

Industrial systems generate data so vast that we cannot use all of it, or use it fast enough. It is too expensive to collect and process in the cloud, and by the time insights come back, it is too late to act on them. The message is clear: the cloud is not enough.

Forrester reports that 60 to 73 percent of the data in a given enterprise doesn’t even make its way to analytics.

Enter Edge Computing

Today, only about 10% of enterprise-generated data is created and processed outside the cloud. By 2025, Gartner predicts this figure will rise to 75%.

So what is "Edge" or Cloud Edge Computing? It is a distributed computing model that brings computation and storage to the edge of the network, close to where data is generated and to the end user.

Here’s an easy way to explain edge computing and its relationship to the cloud: consider a driverless car. Its sensors and cameras only deliver situational awareness if the car can act on what they see, and respond to obstacles, in real time.


Self-driving cars will soon generate as much as 3.6 terabytes of data per hour from cameras and sensors. There isn’t the time or the bandwidth to send all that data back to the cloud. The computation must be brought to the edge. A self-driving car isn’t just a next-generation car; it’s an edge compute node.
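Some back-of-envelope arithmetic shows why streaming that data to the cloud is impractical. The 3.6 terabytes per hour figure comes from the text; the cellular uplink rate used for comparison is an illustrative assumption, not a measured value:

```python
# Back-of-envelope: can a car stream 3.6 TB/hour to the cloud?
TB = 1e12  # bytes (decimal terabyte)

data_per_hour_bytes = 3.6 * TB
bytes_per_second = data_per_hour_bytes / 3600   # 1e9 bytes/s
bits_per_second = bytes_per_second * 8          # 8e9 bits/s

print(f"Sustained uplink needed: {bits_per_second / 1e9:.0f} Gbps")

# A strong cellular uplink is on the order of 0.05 Gbps
# (assumed, illustrative), so the car would need far more
# bandwidth than any realistic link provides.
assumed_uplink_gbps = 0.05
shortfall = (bits_per_second / 1e9) / assumed_uplink_gbps
print(f"Shortfall factor: {shortfall:.0f}x")
```

At roughly 8 Gbps of sustained uplink, the gap is two orders of magnitude, which is why the computation has to move to the car itself.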

Edge Computing as a Core Requirement for Use Cases

Edge computing provides machines with the ability to become situationally aware. Much like muscle memory, it has the potential to give a machine the ability to act quickly and nimbly. With 800 hours of downtime every year, breakdowns are a $647 billion problem that Edge Computing can substantially solve for.

WATCH Nitin Ranjan’s full presentation at RIOT XLIX: Competitive Edge – How Edge Computing Takes Business to the Next Level.

What Else Can Edge Computing Deliver?

When you make every machine an edge compute node, you are delivering a network of machine learning tools. These machines can communicate with each other intelligently, and multiple edge compute nodes can work in tandem, pooling computational power and enabling flexible system designs.

The aim of lean thinking here is to provide selection criteria for the main techniques that let machine learning models run on hardware with limited computational power, which paves the way for the Internet of Conscious Things.

For applications or use cases that involve video analysis, fast inference is your key metric.

Given available hardware like a Raspberry Pi or a Motorola board, we need to choose between CNN and SVM machine learning models, while allowing for some acceptable accuracy trade-offs. On the other hand, when accuracy is the key metric, as it is for applications surrounding code execution or prediction, we would choose between logistic regression and Deep Neural Network techniques. Therefore, it’s all about identifying the key metric, the hardware at play, and the use case we are solving for.
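The selection logic described above can be sketched as a simple lookup. The mapping below mirrors the pairings in the text (fast inference on constrained hardware points at CNN or SVM; accuracy-first workloads point at logistic regression or a deep neural network); the function and category names are illustrative, not a real API:

```python
# Hypothetical helper mapping (key metric, hardware class) to
# candidate model families, mirroring the pairings in the text.
CANDIDATES = {
    # Fast inference on constrained hardware: accept an accuracy trade-off.
    ("inference_speed", "constrained"): ["compact CNN", "SVM"],
    # Accuracy-first workloads on more capable hardware.
    ("accuracy", "capable"): ["logistic regression", "deep neural network"],
}

def select_models(key_metric: str, hardware: str) -> list:
    """Return candidate model families, or raise if the combination
    falls outside this simple two-row matrix."""
    try:
        return CANDIDATES[(key_metric, hardware)]
    except KeyError:
        raise ValueError(f"No guidance for ({key_metric}, {hardware})")

print(select_models("inference_speed", "constrained"))
```

The point is not the table itself but the discipline: name the key metric and the hardware class first, and the candidate techniques follow.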

The Right Trade-Offs

Implementing powerful Deep Neural Networks on these edge devices is still challenging, and in some circumstances it is essential to offload these computations from the edge devices to a more powerful edge server on premises. Even then, data preprocessing can still be done on the devices before sending data to the edge server. This reduces redundancy and decreases communication time.
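One common form of on-device preprocessing is deadband filtering: only forward a sensor reading to the edge server when it differs meaningfully from the last transmitted value. A minimal sketch, with an illustrative threshold and made-up vibration data:

```python
# Deadband filtering at the edge: transmit a reading only when it
# moves more than `threshold` away from the last value we sent.
# Redundant samples are dropped before they ever cross the network.

def deadband_filter(readings, threshold):
    """Yield only the readings worth transmitting."""
    last_sent = None
    for r in readings:
        if last_sent is None or abs(r - last_sent) > threshold:
            last_sent = r
            yield r

# Illustrative vibration samples from a machine sensor.
vibration = [10.0, 10.1, 10.05, 12.5, 12.6, 12.4, 9.0]
to_send = list(deadband_filter(vibration, threshold=1.0))
print(to_send)  # [10.0, 12.5, 9.0]
```

Seven raw samples shrink to three transmitted ones while the significant changes are preserved, which is exactly the redundancy reduction the paragraph describes.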

Additionally, we can use the transfer learning technique, which enables multiple applications to share the common lower layers of a Deep Neural Network model and compute only the higher layers unique to each use case. This further reduces the overall amount of computation. The developer should look for the right trade-offs between accuracy, latency, and other performance metrics. That’s what I call Lean Thinking while we empower these edge compute nodes with machine learning capabilities.
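The layer-sharing idea can be sketched in a few lines of NumPy: one frozen lower-layer feature extractor is computed once per input, and each application attaches only its own small head. The weights here are random placeholders, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_backbone(x, W):
    """Frozen lower layers: the expensive part, computed once
    per input and reused by every application head."""
    return np.maximum(0.0, W @ x)  # ReLU feature extractor

W_shared = rng.normal(size=(16, 32))  # shared lower layers
W_head_a = rng.normal(size=(3, 16))   # head unique to application A
W_head_b = rng.normal(size=(5, 16))   # head unique to application B

x = rng.normal(size=32)               # one input sample
features = shared_backbone(x, W_shared)  # heavy work done once

out_a = W_head_a @ features  # each head is a cheap extra matmul
out_b = W_head_b @ features
print(out_a.shape, out_b.shape)  # (3,) (5,)
```

With N applications, the backbone cost is paid once instead of N times; only the small head computations scale with the number of use cases.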

Enabling machine learning requires edge and cloud infrastructure that is distributed, fast, and flexible. This is why many industrial use cases will require a 5G network. It’s predicted that edge and 5G technology together could unlock $740BN of value in manufacturing in 2030. (Source: STL Partners)

5G Brings the Edge

5G connects the edge and the cloud, both of which rely on data transmission and bandwidth to keep systems smart and functioning.

Edge devices, continuously running inference on live-streaming industrial data (including audio and video), regularly send targeted insights back to the cloud. These edge insights enhance model retraining and significantly improve its predictive capabilities. Reacting quickly to changing conditions and generating higher-quality predictive insights improves asset performance and process adjustments. The tuned models are then pushed back to the edge in a constant closed loop.
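One iteration of that closed loop can be sketched with a toy threshold model standing in for a real predictive model. The edge uplinks only the samples the model is unsure about (the "targeted insights"), the cloud retunes the model on them, and the updated model is what gets pushed back down. All names and numbers are illustrative:

```python
# Toy closed loop: edge inference -> targeted uplink -> cloud retune.

def edge_infer(model, stream):
    """Run inference at the edge; collect only the samples that fall
    inside the model's uncertainty margin, for uplink to the cloud."""
    return [x for x in stream
            if abs(x - model["threshold"]) < model["margin"]]

def cloud_retrain(model, uncertain_samples):
    """'Retrain' in the cloud by recentering the threshold on the
    ambiguous region reported by the edge (a stand-in for real
    model retraining)."""
    new = dict(model)
    if uncertain_samples:
        new["threshold"] = sum(uncertain_samples) / len(uncertain_samples)
    return new

model = {"threshold": 50.0, "margin": 5.0}
stream = [10, 48, 52, 90, 51]        # live sensor values at the edge

uplinked = edge_infer(model, stream)  # only ambiguous samples go up
model = cloud_retrain(model, uplinked)  # tuned model pushed back down
print(uplinked, round(model["threshold"], 2))  # [48, 52, 51] 50.33
```

Only three of the five samples cross the network, yet the cloud still gets exactly the data that improves the model, which is the efficiency the closed loop is after.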

Furthermore, in augmented reality (AR) and virtual reality (VR) applications, creating entirely virtual worlds or overlaying digital images and graphics on top of the real world in a convincing way requires a lot of processing power. Even when phones can deliver that horsepower, the tradeoff is extremely short battery life. Edge computing addresses those obstacles by moving the computation off the device to nearby edge servers in a way that feels seamless. It’s like having a wireless supercomputer follow you everywhere, and 5G enables it. In essence, lean code at the edge unlocks the value of industrial data.

There are Many Benefits of Designing for the Edge

  1. Streamlines data volume, enabling quick and nimble action in Manufacturing 5.0.
  2. Reduces computation cost.
  3. Eliminates the need to transmit raw, sensitive operational data across networks.
  4. Minimizes investments in heavy compute or new industrial-control systems hardware.

– Summary of RIoT’s Edge Computing event talk given by Nitin Ranjan, BLDG25 SVP of Edge Products.