Continuous intelligence (CI) relies on the use of artificial intelligence (AI) and machine learning (ML) to derive actionable information in milliseconds to minutes from streaming data. Adoption is booming, but one obstacle could derail industry efforts if not addressed immediately. That obstacle is regulatory oversight and interference.
The looming problem relates to the way AI and ML are used in CI applications today. Embedding AI and ML into business processes only enables success if a company knows what goes into the algorithms.
Too often, AI and ML routines are treated as black boxes. A data scientist creates them, and everyone runs them without knowing the details that could impact the outcomes the algorithms generate or predict. Common areas of concern include:
- A lack of transparency: Users have no idea what assumptions went into the models, why certain analysis methods were chosen, or what data was used to create the models.
- Data expiration issues: If a model is trained using data that changes over time, such changes must be incorporated for accurate results or predictions. Unfortunately, many AI efforts today do not make such adjustments (a minimal drift check is sketched after this list).
- Data completeness: A model may be trained, and assumptions made, based on a data set that represents only specific circumstances. For example, a medical diagnostic application that was trained using data only from people of European ancestry might not deliver accurate predictions when applied to other ethnic groups. Or a predictive maintenance algorithm may only cover a narrow operational temperature range.
- Bias: Basing models and predictions on data that is narrowly focused or includes intentional or unintentional prejudices can lead to erroneous outcomes.
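The data expiration concern, in particular, lends itself to an automated check. The sketch below is a minimal, illustrative example, not taken from any of the efforts described in this article; the feature_drift function, the threshold, and the temperature figures are hypothetical. It flags a feature whose live values have drifted away from the distribution the model was trained on.

```python
import numpy as np

def feature_drift(train_col: np.ndarray, live_col: np.ndarray, threshold: float = 3.0) -> bool:
    """Flag a feature whose live values stray far from the training distribution.

    A crude z-score check: if the mean of the live data sits more than
    `threshold` training standard deviations from the training mean, treat
    the feature as drifted and route it for review or retraining.
    """
    mu, sigma = train_col.mean(), train_col.std()
    if sigma == 0:  # constant training feature: any change counts as drift
        return bool(np.any(live_col != mu))
    z = abs(live_col.mean() - mu) / sigma
    return z > threshold

# Example: a model trained on summer temperatures starts seeing winter readings
summer = np.random.normal(loc=28.0, scale=3.0, size=1000)  # training data
winter = np.random.normal(loc=2.0, scale=4.0, size=200)    # live data
print(feature_drift(summer, winter))  # True -> retrain or fall back
```

In practice, teams use more robust statistical tests such as the population stability index or the Kolmogorov-Smirnov test, but the principle is the same: compare live data against the training distribution and act when the two diverge.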
Addressing the situation
Some companies and initiatives have taken the lead in remedying such problems.
GE has spearheaded an effort called humble AI, which has gotten a lot of attention in the last year. In a sense, the approach programs humility into AI. According to GE, that means giving the AI an awareness of the limitations of the simulation of the real world it relies on, and an alternative way of proceeding that removes any uncertainty from its behavior. That plan B is typically an old “deterministic” algorithm that sacrifices peak performance for extra caution and predictability.
In a corporate blog about humble AI, Colin Parris, head of software at GE’s Research Center, notes: “When you build a model, you make an implicit assumption that the data you use when you execute the model will be the same as the data when you built the model. For example, if I build a model in the summer, it may not be accurate in the winter.”
With humble AI, the model knows its range of competency. That might include things like operating temperatures and pressures for mechanical devices. “When [the model] goes outside that region, right away it says, I’m going to revert back to an old algorithm, which has been robust for many years.”
Going forward, whenever the algorithms encounter conditions beyond their realm of competency, they will ask for new data so that they can expand the situations they’re able to handle.
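GE has not published the code behind humble AI, but the control-flow pattern it describes, run the ML model only inside its known competency range, otherwise fall back to the proven deterministic rule and log the unfamiliar conditions for future training, can be sketched in a few lines. The bounds, controller functions, and choose_setpoint name below are hypothetical placeholders, not GE’s implementation.

```python
from dataclasses import dataclass

@dataclass
class CompetencyRange:
    """Operating envelope the ML model was trained on (hypothetical bounds)."""
    min_temp_c: float = 10.0
    max_temp_c: float = 45.0
    min_pressure_bar: float = 1.0
    max_pressure_bar: float = 8.0

    def covers(self, temp_c: float, pressure_bar: float) -> bool:
        return (self.min_temp_c <= temp_c <= self.max_temp_c
                and self.min_pressure_bar <= pressure_bar <= self.max_pressure_bar)

def choose_setpoint(temp_c, pressure_bar, ml_model, deterministic_rule,
                    envelope=CompetencyRange(), out_of_range_log=None):
    """Use the ML model only inside its competency range; otherwise fall back.

    Readings outside the envelope are logged so they can later be labeled and
    folded into the next training run, widening the model's competency.
    """
    if envelope.covers(temp_c, pressure_bar):
        return ml_model(temp_c, pressure_bar)
    if out_of_range_log is not None:
        out_of_range_log.append((temp_c, pressure_bar))
    return deterministic_rule(temp_c, pressure_bar)

# Example with stand-in controllers
ml_model = lambda t, p: 0.9 * t + 0.5 * p            # optimized, data-driven setpoint
deterministic_rule = lambda t, p: 0.7 * t + 0.3 * p  # conservative legacy rule
log = []
print(choose_setpoint(30.0, 4.0, ml_model, deterministic_rule, out_of_range_log=log))  # ML path
print(choose_setpoint(60.0, 4.0, ml_model, deterministic_rule, out_of_range_log=log))  # fallback path
print(log)  # [(60.0, 4.0)] -> candidate data for retraining
```

The logged out-of-range readings are the “new data” the previous paragraph refers to: conditions the model can request labels for so that its competency region expands over time.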
GE Research has been piloting humble AI with its digital twin deployments for wind farms and gas turbines. The plan is to build humble AI into GE’s digital twin products and services.
In the case of wind farms, an AI algorithm forecasts the wind speed and adjusts the pitch of the blades and other factors related to the turbines to catch as much wind as possible. In the pilot program with electricity-generating gas turbines, the algorithm controls the nozzles that mix air and gas, regulating the pressures in the combustion chamber. In both cases, GE engineers have seen improvements in energy output of 1 to 2 percent.
Other efforts are coming to the fore under the banner of ethical AI. Microsoft, Google, FICO, and others have launched such efforts, as have professional organizations and governing bodies, including:
- The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: The IEEE Global Initiative’s mission is, “To ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.”
- The European Commission’s Ethical Guidelines for Trustworthy AI: According to the guidelines, trustworthy AI should be lawful – respecting all applicable laws and regulations; ethical – respecting ethical principles and values; and robust – both from a technical perspective while taking into account its social environment.
- The Organization for Economic Co-operation and Development (OECD) Recommendation of the Council on Artificial Intelligence: Among the recommendations are a set of principles for responsible stewardship of trustworthy AI that covers factors including transparency, accountability, and explainability.
To date, such efforts provide only suggestions, not binding requirements, on how to make AI less mysterious and more accountable.
If issues arise, such as complaints of bias or mistaken predictions that lead to unfavorable consequences, government bodies will likely get involved and could create specific laws and regulations.
Already, the New York Department of Financial Services is looking into allegations of gender discrimination against users of the Apple Card, which is administered by Goldman Sachs. And some U.S. Senators are urging healthcare organizations to combat racial bias in AI algorithms.
If businesses do not adopt their own best practices in addressing AI transparency and bias issues, they may not have a choice in the future.