As more AI models are built and deployed, organizations will need to continuously document how each update changes a model’s behavior.
Google has added a set of artificial intelligence (AI) explainability tools to its cloud service as part of an effort to spur the development and deployment of applications that employ machine learning algorithms in production environments.
Many organizations are abandoning AI projects because they can’t sufficiently document how the models being constructed really work, says Tracy Frey, director of strategy for Cloud AI at Google. Not only does an inability to explain how an AI model works make business leaders uncomfortable relying on these types of applications, Frey adds, it also puts companies in a position where they may struggle to pass compliance audits.
To provide organizations with a way to explain how an AI model works, Google has added Google Cloud AI Explanations to Google Cloud Platform (GCP). The offering consists of a set of tools and frameworks for deploying interpretable and inclusive machine learning models, and it quantifies how much each input factor contributed to a model’s output. AI Explanations doesn’t reveal any fundamental relationships in a data sample, population or application; it only reflects the patterns discovered in the data. However, organizations can pair AI Explanations with the existing Google What-If Tool to further explore how a given AI model is working, notes Frey.
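To make the idea of per-feature contributions concrete, here is a minimal sketch in Python built around a toy linear model and a hand-picked baseline input. The feature names, weights and baseline values are hypothetical, and this is not the AI Explanations API itself, only the general attribution idea such tools are built around.

```python
# Illustrative sketch only: quantify how much each input feature contributed
# to a prediction, relative to a chosen baseline input. This is not the
# Google Cloud AI Explanations API; feature names and values are made up.

def predict(features, weights, bias=0.0):
    """A toy linear 'model': output = sum(w_i * x_i) + bias."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def attribute(features, baseline, weights):
    """For a linear model, the contribution of feature i relative to the
    baseline is w_i * (x_i - b_i); the contributions sum to the difference
    between the prediction and the baseline prediction."""
    return [w * (x - b) for w, x, b in zip(weights, features, baseline)]

feature_names = ["credit_utilization", "income", "account_age_years"]  # hypothetical
weights  = [-2.0, 0.00005, 0.3]
baseline = [0.5, 40_000.0, 5.0]   # a "neutral" reference input
example  = [0.9, 52_000.0, 2.0]   # the instance being explained

contributions = attribute(example, baseline, weights)
delta = predict(example, weights) - predict(baseline, weights)

for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")
print(f"sum of contributions: {sum(contributions):+.3f} (equals prediction delta {delta:+.3f})")
```

For non-linear models, attribution methods such as integrated gradients or Shapley-value sampling generalize this kind of baseline comparison; the result is the same in spirit: a signed score per feature indicating how much it pushed the prediction up or down.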
Explainability has become a hot-button AI issue because early adopters are discovering how biases embedded in their AI models are affecting outcomes in ways that are suboptimal or, in some cases, outright illegal. It’s often hard to distinguish whether the AI model itself is biased or whether the data used to train it simply reflects an existing, deeply flawed process. Regardless of the root cause, end customers affected by AI-driven decisions are demanding greater transparency into how those models are constructed. There are already no fewer than 13 bills and resolutions relating to AI technologies working their way through the U.S. Congress, with many more assuredly to follow.
There are only a small number of AI models running at scale in production environments today because most organizations don’t have the data science expertise required to build them.
“There’s a scarcity of talent,” says Frey.
Unfortunately, most of the AI models that have been deployed are the IT equivalent of a “black box.” The teams building them had their hands full getting them to work in the first place, so documenting precisely how they worked took a back seat to simply trying to prove they worked.
However, as the number of AI models being built and deployed continues to increase, many organizations are going to discover they will be required to continuously document how each update to an AI model changes the way that model behaves. As it turns out, most AI models are not static IT projects. As new data sources are discovered or business conditions change, AI models need to be regularly modified or replaced. The pressure to make sure each change to those models can be sufficiently explained will be acute.
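As one hedged illustration of what such documentation could look like, the sketch below records how per-feature contributions on a fixed review example shift between two versions of a toy linear model; the feature names, version weights and review example are again hypothetical rather than drawn from any particular production system.

```python
# Hypothetical sketch: record how a model update shifted per-feature
# contributions on a fixed review example. The linear "model", feature
# names and weights are illustrative only.

def attribute(features, baseline, weights):
    """Per-feature contribution relative to a baseline, for a linear model."""
    return [w * (x - b) for w, x, b in zip(weights, features, baseline)]

feature_names = ["credit_utilization", "income", "account_age_years"]  # hypothetical
baseline = [0.5, 40_000.0, 5.0]
example  = [0.9, 52_000.0, 2.0]

weights_v1 = [-2.0, 0.00005, 0.3]   # previous model version
weights_v2 = [-2.5, 0.00004, 0.3]   # hypothetical retrained weights

attr_v1 = attribute(example, baseline, weights_v1)
attr_v2 = attribute(example, baseline, weights_v2)

# An entry like this could accompany each model update in a change log.
for name, a1, a2 in zip(feature_names, attr_v1, attr_v2):
    print(f"{name}: v1={a1:+.3f}  v2={a2:+.3f}  shift={a2 - a1:+.3f}")
```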
It’s too early to say to what degree explainability concerns are impacting the rate at which AI projects are being deployed. There can be no doubt that massive amounts of money are being allocated to building AI models. However, as every IT organization knows from hard-won experience, there is a world of difference between building any kind of software and actually deploying and managing it at scale.