The Internet of Things is no longer an abstraction; its growth has been phenomenal. But could this growth spurt be its downfall, too?
Soon, there will be seven times more data-generating devices on the planet than there are human beings. And that number just keeps growing. The latest figures from Juniper Research project that the number of connected IoT (Internet of Things) sensors and devices will exceed 50 billion by 2022, up from today's paltry 21 billion devices, or roughly 140% growth.
Edge computing services will be behind most of this growth, the study's authors predict. The research found that the rise of edge computing would be critical to scaling up deployments, owing to reduced bandwidth requirements, faster application response times, and improvements in data security. Juniper predicted that a substantial proportion of the estimated 46 billion industrial and enterprise devices connected in 2023 would rely on edge computing.
The challenge with all these devices and connectivity is data: figuring out how to integrate it into organizational systems and how to make it valuable to the business. Many organizations have built their information technology from the ground up on relational database management systems tied to legacy infrastructures, which were never designed or equipped to absorb the massive, continuous flow of data that IoT networks deliver. Many executives assume they can simply plug into IoT networks and start reaping the rewards. In reality, a lot of work is required to make that happen.
This point was borne out by Bill Franks, chief analytics officer for the International Institute for Analytics, in a recent post: IoT data seems simpler than it really is. “Most sensors spit out data in a simple format — there is a timestamp, a measure identifier (temperature, pressure, etc.), and then a value,” Franks says. “So, you can fairly quickly go from a raw feed to a dataset or table that’s ready for exploration.”
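To see why ingestion looks so easy, here is a minimal Python sketch of that kind of feed. The records and field layout are hypothetical, assumed only to match the three-field shape Franks describes (timestamp, measure identifier, value):

```python
from datetime import datetime

# Hypothetical raw feed: each record is "timestamp,metric,value",
# the simple three-field shape Franks describes.
raw_feed = [
    "2021-03-01T12:00:00+00:00,temperature,71.3",
    "2021-03-01T12:00:01+00:00,pressure,101.2",
    "2021-03-01T12:00:01+00:00,temperature,71.4",
]

def parse_record(line: str) -> dict:
    """Split one raw record into a typed row, ready for a table."""
    ts, metric, value = line.split(",")
    return {
        "timestamp": datetime.fromisoformat(ts),
        "metric": metric,
        "value": float(value),
    }

rows = [parse_record(line) for line in raw_feed]
for row in rows:
    print(row)
```

A few lines of parsing and the data is table-ready, which is exactly what makes the simplicity deceptive.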
However, where is the material value to the organization? It may not be in the endless stream of timestamped readings, but in the anomalies that pop up. Which anomalies are relevant, and which are just noise? Is a particular data stream even an accurate gauge of the machine's usage and performance? All of these questions must be investigated up front. “Don’t let the simplicity of ingestion fool you,” Franks cautions.
Otherwise, organizations will be inundated with mostly useless data, all of which still demands throughput, processing, storage, and the expertise to manage it. As Franks puts it: “with IoT data, it is necessary to determine the cadence that actually makes sense for your specific problem. For example, a temperature sensor may spit out a reading every millisecond. However, in most cases, receiving data at that cadence is overkill. That overkill has a price due to the cost of storing the extra data and the cost and complexity of analyzing masses of data that aren’t valuable.”
Some critical considerations
To get the most out of IoT, Franks offers the following recommendations:
Set an appropriate cadence. “Determine what cadence actually has value for the problem you’re tackling,” Franks states. “If you’re monitoring a car engine, readings once per second might be more than enough. It could be that readings every 5 or 10 or 60 seconds would be plenty. The point is that you have to assess each metric and determine what you need through some experimentation. Then, filter the data down to the proper level. Otherwise, you’ll be overwhelmed with data and meaningful patterns will be that much harder to identify.”
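As a rough illustration of that filtering step, the following Python sketch averages a hypothetical 100-millisecond temperature feed down to one reading every five seconds. The cadence and data here are assumptions for the example, not recommended settings:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical raw readings at 100 ms cadence: (epoch_seconds, value).
readings = [(i / 10, 70.0 + 0.01 * i) for i in range(100)]  # 10 s of data

def downsample(readings, cadence_s=5.0):
    """Average raw readings into buckets of cadence_s seconds,
    keeping one representative value per bucket."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[int(ts // cadence_s)].append(value)
    return [(bucket * cadence_s, mean(values))
            for bucket, values in sorted(buckets.items())]

for ts, value in downsample(readings):
    print(f"t={ts:>4.1f}s  avg={value:.2f}")
```

One hundred raw readings collapse to two, which is the point: the right cadence is the one the problem needs, not the one the sensor emits.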
Identify complex patterns over time. “When analyzing IoT data, we are often interested in deviations from normal rather than projecting the expected,” says Franks. “After identifying what is normal we must do work to find abnormal patterns that are of importance. However, there are multiple ways that abnormal patterns might evolve. Sudden increases in temperature would naturally draw interest. But, what about the impacts of a very small rise in temperature that either persists for an extended period or that comes and goes with increasing frequency? There is much complexity in the identification of these time-based patterns.”
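Here is a sketch of one such time-based check, assuming a trailing window as the “normal” baseline; the window size and thresholds are illustrative, not tuned values from Franks. It flags stretches where readings sit only slightly, but persistently, above the baseline:

```python
from statistics import mean, stdev

def sustained_deviation(values, window=20, threshold=1.0, min_run=5):
    """Flag runs where readings persistently exceed a trailing baseline.

    A reading counts as 'slightly high' when it exceeds the trailing
    window's mean by `threshold` standard deviations; runs of at least
    `min_run` such readings are reported. All parameters are assumed
    defaults for illustration.
    """
    runs, start = [], None
    for i in range(window, len(values)):
        base = values[i - window:i]
        high = values[i] > mean(base) + threshold * stdev(base)
        if high and start is None:
            start = i
        elif not high and start is not None:
            if i - start >= min_run:
                runs.append((start, i))
            start = None
    if start is not None and len(values) - start >= min_run:
        runs.append((start, len(values)))
    return runs

# Stable cyclic signal with a small persistent rise from index 60 on.
signal = [70.0 + 0.05 * (i % 3) for i in range(60)] + [70.55] * 40
print(sustained_deviation(signal))
```

Note that even this simple version has a subtlety worth watching: as the trailing window absorbs the elevated readings, the “new normal” masks the deviation, which is one reason these time-based patterns are genuinely hard.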
Figure out how to handle interactions. “Let’s assume you’ve figured out the proper cadence for each metric you care about and which patterns are important for each metric individually,” Franks continues. “How do you account for any interactions?” The problem is that there can be lags between impacts, he says. “For example, temperature may start rising in advance of pressure rising. To identify the interactions between various sensor readings requires complicated analysis to determine not just what metrics might interact, but also over what timeframe and with what lag.”
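One simple way to probe for such a lead-lag relationship, as a sketch rather than the full analysis Franks alludes to, is to scan candidate lags and score each with a normalized cross-correlation. The synthetic series below assume pressure trails temperature by three samples:

```python
from statistics import mean

def best_lag(x, y, max_lag=10):
    """Find the lag (in samples) at which series x best leads series y,
    by scoring a normalized cross-correlation at each candidate lag."""
    def corr(a, b):
        ma, mb = mean(a), mean(b)
        num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        da = sum((u - ma) ** 2 for u in a) ** 0.5
        db = sum((v - mb) ** 2 for v in b) ** 0.5
        return num / (da * db) if da and db else 0.0
    scores = {lag: corr(x[:len(x) - lag], y[lag:])
              for lag in range(max_lag + 1)}
    return max(scores, key=scores.get), scores

# Synthetic data: pressure follows temperature with a 3-sample lag.
temperature = [70 + (i % 7) for i in range(50)]
pressure = [100.0] * 3 + [100 + 0.5 * t for t in temperature[:47]]

lag, scores = best_lag(temperature, pressure)
print(f"best lag: {lag} samples (corr={scores[lag]:.2f})")
```

Real deployments multiply this across every pair of metrics and a much wider range of lags, which is where the complexity Franks describes comes from.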
Account for errors and missing readings. “Sensors aren’t always reliable,” Franks says. “Any analytics process must build in checks and balances to account for missing data or data that is in error. Your analytics processes must include logic to identify suspected errors or transmission gaps and to handle those scenarios. You don’t want a multitude of warning lights or messages alerting to a problem if it is really a data issue.”
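A minimal sketch of that kind of logic follows, with an assumed plausible-value range and gap threshold standing in for real sensor specifications. It routes suspect readings to an issue list instead of raising alerts:

```python
from datetime import datetime, timedelta

# Hypothetical readings: (timestamp, value); one gap and one bad value.
base = datetime(2021, 3, 1, 12, 0, 0)
readings = [(base + timedelta(seconds=s), v) for s, v in
            [(0, 71.2), (1, 71.3), (2, 999.0), (5, 71.4), (6, 71.3)]]

VALID_RANGE = (-40.0, 150.0)      # plausible sensor range (assumed)
MAX_GAP = timedelta(seconds=2)    # anything longer is a transmission gap

def audit(readings):
    """Separate trustworthy readings from suspected sensor errors,
    and flag transmission gaps rather than alerting on them."""
    clean, issues = [], []
    prev_ts = None
    for ts, value in readings:
        if prev_ts is not None and ts - prev_ts > MAX_GAP:
            issues.append(f"gap: no data between {prev_ts} and {ts}")
        if not VALID_RANGE[0] <= value <= VALID_RANGE[1]:
            issues.append(f"out-of-range value {value} at {ts}")
        else:
            clean.append((ts, value))
        prev_ts = ts
    return clean, issues

clean, issues = audit(readings)
print(f"{len(clean)} clean readings")
for issue in issues:
    print("flag:", issue)
```

The design choice matters: data-quality problems are triaged separately from genuine operational warnings, so a flaky sensor does not light up the dashboard.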
The bottom line: as the amount of IoT-generated data grows, decision-makers need to sit down and architect systems and processes that surface the insights most important to their businesses. That requires understanding both what matters to the business and what the IoT data is, and is not, telling them.