Edge computing in space addresses the same issues as use cases across many other industries: they all collect and analyze data to prevent unnecessary downtime.
Edge applications, whatever the use case, tend to revolve around very similar issues: how to use data to make faster decisions that positively impact the business, or how to deliver better application experiences. A project that Red Hat is doing with NASA illustrates this point. With any edge computing application, you want to get compute power and sophisticated analysis close to the source of data generation so critical decisions can be made in time to take suitable action. The challenge in designing an optimal solution is balancing the tradeoffs among compute power in a limited amount of physical space, bandwidth, and latency against the need to quickly gather, analyze, and act on the data. RTInsights recently sat down with Chris Sexsmith, Program Director for Data Science and Edge in the North American public sector at Red Hat, to discuss these and other edge computing issues.
Here is a summary of that conversation.
RTInsights: What are the drivers for edge computing?
Sexsmith: There’s a ubiquitous desire these days, across all industries, to have answers at your fingertips. You want data to be gathered and analyzed fast, with some fairly complex compute, so that a relevant action is available as close as possible to where it needs to be implemented. With the advent of AI/ML technologies, having that type of analysis very close to where the data comes in is an absolute requirement for many companies to stay competitive.
Further, it’s incredibly important for us to update how we’re using that data as time goes by. Data is not a static thing; it’s constantly changing. Organizations need to actively gather and analyze data, update ML models, and push them back out to the edge as new things are learned. Potential use cases in the public sector, such as satellite telemetry analysis, edge devices on the International Space Station (ISS), or the Forward Operating Base “back of a Humvee,” all require us to respond to incoming data at a very rapid rate. In such applications, lives can be at stake and every millisecond has consequences.
RTInsights: How do companies make sense of the data?
Sexsmith: This relies on data analytics and machine learning. To react properly to that data, we must understand the data coming in and be very selective about the data that we leverage. To do that, data scientists and data engineers spend a lot of time analyzing the data and ensuring that the predictions from a machine learning model are accurate. Not only that, they must ensure the model continues to be accurate into the future, so an iterative process is a requirement that is often overlooked. You need to be able to accommodate new inputs, spot new patterns, and always make sure the data, as it changes, is analyzed to confirm that the model in use isn’t drifting or giving bad results.
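To make that iterative check concrete, here is a minimal sketch in Python, using a Population Stability Index (a common drift metric, not anything specific to Red Hat’s tooling) and made-up sensor readings, of how an edge node might flag that live data has drifted away from the training baseline:

```python
# Minimal drift-monitoring sketch: compare the distribution of recent edge
# data against the training baseline with a Population Stability Index (PSI).
# The readings, thresholds, and bucket count below are illustrative only.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a training baseline and recent edge data for one feature."""
    # Interior cut points come from the baseline's quantiles, so each bucket
    # holds roughly 1/bins of the baseline data.
    cuts = np.quantile(baseline, np.linspace(0, 1, bins + 1))[1:-1]
    expected = np.bincount(np.searchsorted(cuts, baseline), minlength=bins) / len(baseline)
    observed = np.bincount(np.searchsorted(cuts, current), minlength=bins) / len(current)
    expected = np.clip(expected, 1e-6, None)   # avoid log(0)
    observed = np.clip(observed, 1e-6, None)
    return float(np.sum((observed - expected) * np.log(observed / expected)))

# Hypothetical sensor feature: baseline captured at training time, current
# values gathered at the edge after the environment has shifted.
rng = np.random.default_rng(0)
baseline = rng.normal(50.0, 5.0, 10_000)
current = rng.normal(53.0, 6.0, 1_000)

psi = population_stability_index(baseline, current)
if psi > 0.2:   # a common rule-of-thumb threshold for "investigate/retrain"
    print(f"PSI={psi:.2f}: drift suspected, schedule retraining and redeployment")
else:
    print(f"PSI={psi:.2f}: model inputs still look like the training data")
```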
This is why edge and data science go hand in hand. The need to perform these actions at the edge usually means that the processing is incredibly time-sensitive and dependent on getting a response to the user. Many times, these are life-and-death situations where we can’t afford to wait for data to be sent back across the country, across the globe, or, in the ISS example, back to Earth to perform these operations and get the insights we need to take immediate action. Those answers must come very fast, and that’s the real driving force behind how companies are using data at the edge.
RTInsights: How do companies develop edge applications?
Sexsmith: It’s often a type of hub-and-spoke pattern, where applications with AI capabilities are developed and lifecycle-managed centrally and then pushed out to edge sites. There must be some sanity in how we develop these applications, together with their ML models, and one way to achieve that is with containers. Containerization, in general, gives us a very easy way to pick up and move applications and push them to disparate locations, other data centers, and straight to the edge. Containers are lightweight and portable, so building on a platform such as OpenShift gives developers a ubiquitous platform not only to develop these applications but also to push them out to whichever location makes the most sense, even in physically small spaces. Organizations can be assured that these containerized applications will run at the edge just as they did in the data center. Similarly, we can push those containerized ML models out to where they need to be used, so they’re relevant for the task at hand.
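As one way to picture that hub-and-spoke flow, here is a minimal sketch, not Red Hat’s implementation, that uses the standard Kubernetes Python client with hypothetical kubeconfig contexts and a hypothetical image tag to push the same containerized inference service to several edge clusters (OpenShift clusters expose the same APIs):

```python
# Hub-and-spoke sketch: the container image is built once centrally, then the
# identical Deployment is applied to each edge cluster. Contexts, namespace,
# and image names below are placeholders.
from kubernetes import client, config

IMAGE = "registry.example.com/ml/inference:1.4.2"   # hypothetical image tag
EDGE_CONTEXTS = ["edge-site-1", "edge-site-2"]      # hypothetical kubeconfig contexts

def build_deployment(image: str) -> client.V1Deployment:
    """Describe the same container for every site; only the target cluster changes."""
    container = client.V1Container(
        name="inference", image=image,
        ports=[client.V1ContainerPort(container_port=8080)])
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "inference"}),
        spec=client.V1PodSpec(containers=[container]))
    spec = client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "inference"}),
        template=template)
    return client.V1Deployment(
        api_version="apps/v1", kind="Deployment",
        metadata=client.V1ObjectMeta(name="inference"), spec=spec)

for ctx in EDGE_CONTEXTS:
    config.load_kube_config(context=ctx)            # point at one edge cluster
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace="models", body=build_deployment(IMAGE))
    print(f"pushed {IMAGE} to {ctx}")
```

In practice this loop would typically be driven declaratively (for example, with a GitOps workflow) rather than imperatively, but the shape is the same: one definition, many edge targets.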
RTInsights: What use cases and benefits have you seen?
Sexsmith: The use cases and benefits for edge computing center around fast responses. It’s about being able to process a ton of data, get a very quick response to that data, and deliver the insight in time for appropriate actions to be taken, especially in life-and-death situations. An example is self-driving cars. We don’t have time to beam data back to a data center to determine whether we have to stop for a pedestrian. We have to make that decision very, very quickly, and that requires an edge solution.
The benefits are straightforward: humanity is not getting more patient. We’re getting less patient, and we must respond faster and faster to incoming data to make the decisions that drive many businesses and a lot of the programs around the world today. For most businesses, this is now a requirement, not a “nice to have” feature. Competitive advantage often lies in the accuracy and speed of the response to the consumer.
RTInsights: A particularly interesting case you’re working on is the role that edge computing is playing in space. Can we talk about that?
Sexsmith: Absolutely. The requirements in space cannot be overstated, in the sense that we have to approach it very much as I’ve been saying, as a life-and-death scenario, because it often is. For an astronaut on a spacewalk, the spacesuit is of obvious critical importance, so they need the ability to make decisions based on sensor data coming from the suit. Once the astronaut is back on the ship, that same data informs preventative maintenance work to head off future failures. To take this data in and quickly respond to it, we have to move further and further toward an edge model, one that also accounts for the fact that once an application is in space, we probably will not be able to change it for a very long time, and that sending the data back to Earth for analysis is not an option when you have to make almost instant decisions over slow, intermittent, or non-existent connectivity. So we have to think about the stability of the platform and of the solution we push out, and we need to ensure that we’re doing it in a very responsible manner.
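To picture what a purely local decision loop looks like, here is a minimal sketch with hypothetical suit sensor fields and illustrative limits (not real NASA thresholds); the point is that the evaluation runs entirely on the edge device, with no dependency on a link back to the ground:

```python
# Local, connectivity-free decision sketch: each telemetry sample is checked
# against safe operating bands on the device itself. Field names and limits
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class SuitSample:
    o2_pressure_kpa: float
    co2_ppm: float
    battery_pct: float

# (low, high) bands; None means unbounded on that side. Values are illustrative.
LIMITS = {
    "o2_pressure_kpa": (20.0, None),
    "co2_ppm": (None, 5000.0),
    "battery_pct": (15.0, None),
}

def evaluate(sample: SuitSample):
    """Return human-readable alerts for any reading outside its safe band."""
    alerts = []
    for field, (low, high) in LIMITS.items():
        value = getattr(sample, field)
        if low is not None and value < low:
            alerts.append(f"{field} low: {value}")
        if high is not None and value > high:
            alerts.append(f"{field} high: {value}")
    return alerts

print(evaluate(SuitSample(o2_pressure_kpa=19.2, co2_ppm=5400.0, battery_pct=42.0)))
# -> ['o2_pressure_kpa low: 19.2', 'co2_ppm high: 5400.0']
```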
For example, in an International Space Station project in which Red Hat partnered with IBM and HPE, we sent a box to the ISS to do genetic analysis and research. This provides a net benefit to NASA because we’re still learning a lot about genetics in radiation-laden, zero-gravity environments. The faster we can perform these activities on the ISS, the more science we can effectively perform. We’re only just scratching the surface as to what the long-term impact on humans in space is, and we can shorten that learning curve by doing a lot of that processing onboard the ISS itself using edge computing with OpenShift.
The future of our work with NASA is extremely exciting. We’re all-in on doing everything that we can to help the space industry move forward and eventually have colonies on the moon and on Mars.
RTInsights: You said that once there’s an application up in space, it stays there for a very long time. Are you able to upload fixes and changes, and send data back to Earth over high-bandwidth, low-latency communication to do further analysis and send results back up?
Sexsmith: There is satellite connectivity to the ISS, but it is extremely low bandwidth, and that constraint largely dictates how the data should be utilized. So we have a window where we can push fixes and pull down results, but this points directly to the benefits of edge computing: we can push and pull back only what is absolutely necessary. If we had a big pipe like 5G everywhere, then we might have other options with regard to edge computing, since we might be able to go directly to the data center or cloud, but again, this all depends on the use case requirements and the specifics at these remote locations.
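As a rough illustration of pushing and pulling back only what is absolutely necessary, here is a minimal sketch with made-up numbers in which raw telemetry is collapsed on board into compact per-window summaries sized against an assumed downlink budget:

```python
# Downlink-budget sketch: rather than sending every raw sample, the edge node
# sends a few aggregate fields per time window. The budget and readings are
# invented for illustration.
import json
import statistics

def summarize(window_id: str, readings: list) -> dict:
    """Collapse one window of raw readings into a handful of aggregates."""
    return {
        "window": window_id,
        "n": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(statistics.fmean(readings), 3),
        "stdev": round(statistics.pstdev(readings), 3),
    }

DOWNLINK_BUDGET_BYTES = 1_024                       # assumed size of the comms window
raw = {"w001": [50.1, 50.4, 49.9, 51.2] * 250}      # 1,000 raw samples held on board

payload = [summarize(window, readings) for window, readings in raw.items()]
encoded = json.dumps(payload).encode()
assert len(encoded) <= DOWNLINK_BUDGET_BYTES, "summary still too large; coarsen further"
print(f"raw samples kept at the edge: {sum(len(r) for r in raw.values())}, "
      f"bytes queued for downlink: {len(encoded)}")
```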
RTInsights: I can see where this could apply to other areas. Instead of the ISS, it could be an oil rig in the middle of the North Sea or an autonomous car where you do not have the time to send data somewhere to be processed. Is that right?
Sexsmith: Exactly. The use cases for edge computing are very similar across industries. In industrial manufacturing, for example, the preventative maintenance use case extends to production lines where sensor data is collected and analyzed to prevent unnecessary downtime, just as with the spacesuits on the ISS, and the same applies to aircraft. It is all about equipment fatigue, time to replace, the relevant statistics, and understanding when to replace a component before it fails. These are the things that we must actively monitor, regardless of the industry. And that’s one of the more interesting points around edge computing: we think of these as very separate use cases, but around 90% of the underpinnings are actually quite similar. It’s the last 10% or so that is highly contextual and industry-specific.
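Here is a minimal sketch of that shared preventative-maintenance pattern, with made-up components and an assumed replace-at-80%-of-rated-life policy; the same few lines of logic apply whether the part sits in a spacesuit, an aircraft, or a production line:

```python
# Preventative-maintenance sketch: track consumed life per component from
# sensor data and flag parts for replacement before they fail. Components,
# cycle counts, and the 80% policy are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    rated_cycles: int   # cycles the part is rated for
    used_cycles: int    # cycles observed so far from sensor data

REPLACE_AT = 0.8        # assumed policy: replace at 80% of rated life

fleet = [
    Component("pump_seal", rated_cycles=100_000, used_cycles=85_000),
    Component("coolant_valve", rated_cycles=50_000, used_cycles=21_000),
]

for part in fleet:
    wear = part.used_cycles / part.rated_cycles
    status = "REPLACE SOON" if wear >= REPLACE_AT else "OK"
    print(f"{part.name}: {wear:.0%} of rated life used -> {status}")
```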
If you think about it, it’s really about acting fast on data. One of the more interesting quotes I recently heard, and will happily steal, is that organizations need to “operate at the speed of relevance.” Whether we’re talking about autonomous cars or oil rigs, doing data science in space, or aggregating telemetry on satellites and analyzing data before it gets sent down to Earth, what we’re all trying to do is give that speed of relevance to organizations that need to make data-driven decisions fast, ensuring that they’re getting the answers they need in the time that they need them. The requirements for that keep getting tighter. We need increasingly fast responses to everything from our thermostats at home to genomic data on the ISS. As the world speeds up from a technology standpoint, getting accurate and useful insight from the data at hand becomes critical, and we’re constantly improving and iterating our understanding of that data with the technology as fast as humanly possible.