Reliance on AI has been growing exponentially, creating urgent demand for greater transparency and trust in AI decisions.
Are businesses becoming too reliant on decisions based on artificial intelligence, and risking the opportunities that come from empathy and a deeper understanding of human nature? This is a vexing question, particularly for financial services firms that now look to automated systems to make decisions about loans and to navigate the risky nature of financial markets. It is also a question all industries need to consider as they move forward with machine-based decisioning.
The risks and rewards of AI-driven decision-making were the subject of debate at a panel hosted at the recent AI Summit in New York, focusing on the emerging role of AI in financial services sector decisions. (I had the opportunity to co-chair the conference and moderate the panel.)
“A data scientist will look at AI and know that, under the covers, it’s algorithms and statistics,” said Rod Butters, chief technology officer for Aible. “There are good statistics, and there are guys out there that do bad statistics, or take algorithms that have no statistics. At the end of the day, it’s a machine, and what’s important is to get better tooling, craft, and experience with applying these machines in ways that are, first and foremost, transparent; secondly, understandable in some way; and ultimately achieving a desired outcome.”

Panelists also dismissed the notion that AI and associated robotics represent a “digital workforce” that intelligently handles tasks alongside a human workforce. “Digital workforce is just a way to sell more stuff, right? I can sell 50 digital workers rather than one system,” said Drew Scarano, vice president of global financial services at AntWorks. “But a digital workforce is nothing more than what it is: a bunch of code that does a specific task, and that task can be repeatable, or be customized to do what you’d like it to.”
AI is applied in many ways for many decisions these days. “Today we can use AI for anything from approving a credit card to approving a mortgage to approving any kind of lending vehicle,” said Scarano. “But we need human intervention to be able to understand there’s more to a human than a credit score, there’s more to a person than getting approved or denied for a mortgage. We can do that today with technology. But when banks are out there providing services and making decisions, they should be doing it not just with technology, but by looking at the borrower in an empathetic and holistic way. Who is that potential borrower? Take the whole view, not just the credit score. You can do that today: you can find out what their balances are, what’s in their checking account, what have you. But sitting across from somebody is still human to human.”
“The biggest systemic risk is the notion that artificial intelligence is artificial,” said Rik Willard, founder and managing director of Agentic Group, and member of the advisory board of the World Ethical Data Foundation. “It’s all done by humans; it’s all manifested by humans. When we look at risk versus returns, it’s only as good as the financial institutions and the regulatory frameworks around those institutions. Are we supporting the same human and economic algorithms that we set up before technology, or are we working to make those better and more inclusive?”
Butters also advised against entrusting too much to the intelligence of an AI system: “in the end, it’s all 1s and 0s,” he said. “In the past, in ERPs, we’d write all these rules, and half of them worked right, half of them were ignored, and half of them did weird crap that we could never figure out why they were doing it. But AI takes that to a whole new level. The blessing is it can find these levels of fine-grained detail that can be very empowering. But there’s a major risk, which is transparency. Are you able to understand why it’s happening or where it’s happening? Because if you don’t have visibility, or you don’t create tooling around it to provide visibility, you get unintended consequences.”
A higher-level risk is that the unintended consequences of AI may set companies on a wrong course. “The actions of AI might actually be counter to what the business needs to be doing strategically,” Butters continued. “The actions of the collective have unintended consequences that have nothing to do with what you’re trying to achieve from a business directive.”
For example, algorithms developed for artificial intelligence are built to be risk-averse, but the world tends to be more nuanced than risk/no risk. “We all used to write checks,” Scarano illustrated. “Banks would say, if you’re overdrawn by $5, we’re going to pay that automatically. There’s risk there. But it’s not worth it to go back and call the customer up. We don’t want to spend the time and effort to do that, because at the end of the day, it might cost $10 to get $5. So we have to build models that say it’s okay to have acceptable risk. I fear we’re building models with no risk. And once we get there, then we’re in trouble.”
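To make the overdraft example concrete, here is a minimal sketch of such an “acceptable risk” rule. The function name, dollar figures, and threshold are hypothetical assumptions for illustration, not anything the panel specified.

```python
# Illustrative sketch only: mirrors the overdraft example above,
# not any real bank policy. All figures are assumptions.

COST_TO_PURSUE = 10.00  # assumed cost of recovering funds (staff time, calls)

def handle_overdraft(amount_overdrawn: float) -> str:
    """Absorb small overdrafts where recovery would cost more than it returns."""
    if amount_overdrawn < COST_TO_PURSUE:
        # Covering a $5 overdraft automatically beats spending $10 to recover it.
        return "pay automatically (acceptable risk)"
    return "flag for follow-up"

print(handle_overdraft(5.00))    # pay automatically (acceptable risk)
print(handle_overdraft(250.00))  # flag for follow-up
```

The point of the rule is exactly Scarano’s: the model encodes a nonzero risk tolerance rather than refusing all risk.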
The idea that AI can be programmed to deliver decisions aligned with the risks the business is willing to take “is the fantasy we all fall into here,” said Butters. “Somehow we think that model embodies something. Well, the reality is that an artificial intelligence is just a statistical engine, and in a lot of cases, it’s a bad statistical engine, because it’s not really telling you probabilities. We need to peel back the layers on this thing to understand the drivers behind it. And you have to relate it back to the business to understand the acceptable risk-to-reward ratios, as well as your capacity to actually process things, as well as what you’re trying to do as a business participating in a community. And only then can you create modeling around these AIs that will allow you to make the right decisions.”
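As a rough illustration of relating a model’s output back to the business’s risk-to-reward ratio, here is a hedged sketch of an expected-value check. The function, parameters, and figures are assumptions for illustration, and it presumes the score is a calibrated probability of default, which, as Butters cautions, a raw model score often is not.

```python
# Hypothetical sketch of tying a model score to business risk-to-reward.
# Assumes p_default is a *calibrated* probability of default; a raw model
# score often is not, so it would need calibration before use like this.

def approve_loan(p_default: float, profit_if_repaid: float,
                 loss_if_default: float) -> bool:
    """Approve only when the expected value clears the business's risk appetite."""
    expected_value = (1 - p_default) * profit_if_repaid - p_default * loss_if_default
    return expected_value > 0

# A 10% default risk on a loan earning $500 if repaid but losing $5,000 on default:
print(approve_loan(0.10, profit_if_repaid=500, loss_if_default=5000))  # False
```

Even this toy version shows why the business context matters: the same model score yields different decisions depending on the profit and loss the business attaches to the outcome.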