By leveraging AI tools to extract and analyze imagery data, many industries can improve processes, streamline operations, and generate efficiencies for their respective business sectors.
For thousands of years, people have referenced geospatial data in various forms to navigate and depict areas of the Earth. Cartography dates back to at least 16,500 B.C., and more primitive methods of mapping are likely even older. Today, most of the Earth has been mapped down to a scale of 1 cm:5 km, though polar regions and some parts of Central and South America haven't been covered to this level.
But even in a modern world where it seems like Google Maps has solved all our mapping needs, pursuing better geospatial data is more than a throwback to the Age of Exploration. In fact, the information that these efforts can yield is critical to many modern industries because each data point that is collected improves the accuracy of the existing record. With remotely sensed data from satellites, stratospheric balloons, aircraft photography, and drones, information-gathering about properties and regions is far more cost-effective than ever before, and these methods boast very high accuracy when analyzed correctly.
And this information can be put to good use. Using machine learning and big data, aerial imagery can offer deep, wide-ranging insights that deliver a range of important functions, including measuring environmental risks for fires and floods, evaluating crop health and production, identifying individual property characteristics without in-person assessments, and more.
Different imagery data sources
While there is great potential value in this data, the road to finding that value is paved with significant complexity.
For starters, each type of data provider (satellite, stratospheric balloon, airplane, drone) produces images with different resolutions, and therefore different uses.

Satellite providers can cover huge areas extremely cost-effectively; however, the resolution is too low to observe small objects (the highest commercially available resolution, from MAXAR, is roughly 30 cm, meaning one pixel covers about 30 cm of ground). Aerial imagery from drones and aircraft, offered by companies including Nearmap and Vexcel, is higher resolution (on average about 6 cm per pixel), which lets you see finer details about a property; in exchange, this footage takes longer to capture and is traditionally more expensive.

One of the newest and most compelling options is the stratospheric balloon, operated by companies including Near Space Labs and Urban Sky. These balloons generate imagery nearly as high-resolution as aerial footage, but at a much lower price. They can also cover a larger swath of land in a day than a piloted aircraft, making large-scale, timely image capture viable, for example in the wake of a natural disaster.

Ultimately, different imagery sources let you answer different questions about a property, depending on what's needed.
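As a rough illustration of the tradeoff, a short Python sketch can relate a source's ground sample distance (GSD) to the smallest object it can plausibly resolve. The satellite and aerial figures come from the numbers above; the balloon figure and the two-pixel rule of thumb are assumptions for illustration only, not vendor specifications.

```python
# Illustrative sketch: relating ground sample distance (GSD, cm per pixel)
# to the smallest object a source can plausibly resolve.
# Satellite (~30 cm) and aerial (~6 cm) figures are from the article;
# the balloon figure (10 cm) and the 2-pixel minimum are assumptions.

SOURCES_GSD_CM = {
    "satellite": 30.0,
    "balloon": 10.0,   # assumed: between satellite and aerial
    "aerial": 6.0,
}

def min_resolvable_cm(gsd_cm: float, pixels_needed: int = 2) -> float:
    """Smallest object span (cm) that covers `pixels_needed` pixels at this GSD."""
    return gsd_cm * pixels_needed

for source, gsd in SOURCES_GSD_CM.items():
    print(f"{source}: {gsd} cm/pixel -> objects >= {min_resolvable_cm(gsd)} cm")
```

Under these assumptions, satellite imagery can only distinguish features roughly 60 cm and larger, which is why finer property details call for aerial or balloon sources.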
With these varying forms of remotely sensed data analyzed using AI, we are able to build a database of property information that relies on persistent processing for timeliness. In other words, although trillions of image data points exist, many covering a single property over time, we typically don't license them until someone requests a specific property address. When that request comes in, our AI processes all relevant images, selects the ones that are most current and best suited to the use case, and outputs these temporal analyses about the property in seconds.
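The request-driven selection step might look something like the sketch below. The names (`ImageRecord`, `select_best`) and the catalog entries are hypothetical, not a real API; the idea is simply "filter to images that meet the resolution the use case needs, then take the most recent."

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of on-demand image selection: images are catalogued
# up front, but analysis runs only when an address is queried.

@dataclass
class ImageRecord:
    source: str        # "satellite", "balloon", "aerial"
    captured: date     # capture date
    gsd_cm: float      # ground sample distance, cm/pixel

def select_best(candidates, max_gsd_cm: float):
    """Pick the most recent image that meets the resolution the use case needs."""
    usable = [img for img in candidates if img.gsd_cm <= max_gsd_cm]
    return max(usable, key=lambda img: img.captured) if usable else None

# Illustrative catalog for a single property.
catalog = [
    ImageRecord("satellite", date(2023, 5, 1), 30.0),
    ImageRecord("balloon",   date(2023, 4, 20), 10.0),
    ImageRecord("aerial",    date(2022, 11, 2), 6.0),
]

best = select_best(catalog, max_gsd_cm=15.0)  # e.g. a roof-detail use case
print(best.source, best.captured)  # balloon 2023-04-20
```

Note the tradeoff the selection encodes: the aerial image is sharper, but the balloon capture is both recent enough and sharp enough, so it wins for this use case.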
What the future holds
There’s a lot of progress to be made at the intersection of remotely sensed data (imagery), artificial intelligence, and data analytics, and those of us entrenched in the space are truly excited about what the future will bring, both in refining existing methods and in finding new use cases.
For example, over the last few decades, imagery providers have been highly disjointed in how they format the data they capture, with very little cross-industry standardization. As the industry matures and different players start to align, it opens the door to standardized, open-source imagery databases that all types of customers can interact with and consume. This is one of the reasons we actively invest in and contribute to open-source standards like STAC and COG: not only to make things easier for our own team, but for anyone who wants to tackle similar challenges.
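For a concrete sense of what STAC standardizes, here is a minimal, illustrative STAC Item, the JSON record that describes a single capture. All values are made up for the example; the asset points at what would be a COG, using the standard media type for cloud-optimized GeoTIFFs, and `gsd` (in meters per pixel) is a STAC common-metadata field.

```python
import json

# Minimal, illustrative STAC Item. Values are invented; see the STAC
# specification for the full schema (optional extensions omitted here).
stac_item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "balloon-capture-example",  # hypothetical ID
    "geometry": {"type": "Point", "coordinates": [-97.74, 30.27]},
    "bbox": [-97.75, 30.26, -97.73, 30.28],
    "properties": {
        "datetime": "2023-04-20T15:30:00Z",
        "gsd": 0.10,  # meters per pixel, illustrative
    },
    "links": [],
    "assets": {
        "visual": {
            "href": "https://example.com/capture.tif",  # would be a COG
            "type": "image/tiff; application=geotiff; profile=cloud-optimized",
        }
    },
}

# Every STAC Item carries these top-level fields, which is what lets
# tooling from any vendor read any provider's catalog.
required = {"type", "stac_version", "id", "geometry", "bbox",
            "properties", "links", "assets"}
assert required <= stac_item.keys()
print(json.dumps(stac_item["properties"], indent=2))
```

Because every provider describes captures with the same fields, a consumer can search across catalogs by date, footprint, and resolution without vendor-specific code.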
Perhaps equally compelling are the many new functions that will be unlocked in the years to come, including predictive analytics that observe a property and prevailing weather conditions, detect a risk such as wildfire, and suggest preventive measures to ward off disaster. But it is also imperative that organizations make these forward-looking decisions using the most current imagery available. Persistent analysis, the concept of "always-on processing," provides the most up-to-date view of places and spaces with the speed and accuracy industries need.
While the insurance industry might seem like an odd sector to incubate cutting-edge technology like AI that operates on stratospheric imagery, the reality is that artificial intelligence can have a positive impact on nearly any sort of business. As the digital revolution continues, data becomes ubiquitous and standardized, but the size and complexity of these datasets hinder any individual’s ability to capitalize on them. By leveraging machine learning tools to extract and analyze data, and putting the results in a format already used by millions of professionals, industries from insurance to agriculture can improve processes, streamline operations, and generate efficiencies for their respective business sectors.