Businesses with an interest in developing and using Artificial Intelligence (AI) and Machine Learning (ML) technologies are wisely looking to get ahead of regulators. Earlier this year, the Business Roundtable, an organization consisting of 230 CEOs from some of the largest companies in the world, met to consider the implications of a future where AI and ML play an increasingly important role in making automated decisions at scale—and where regulators are paying close attention. The meeting resulted in a proposed framework that is intended to guide regulators in developing policies to ensure the responsible use of AI and ML technologies without curbing their great potential. Growing appetite for additional regulatory oversight around the globe is certainly a catalyst for the discussions among the world’s most successful businesses.
1) A Growing Call for AI Regulations as Regulators Realize the Potential for Consumer Harm
How AI affects consumers varies immensely based on the industry, the company, and the purpose for which the technology is used. The Business Roundtable recognizes that broadly regulating the nascent industry will be challenging and that no one-size-fits-all regulatory solution is appropriate if companies are to maximize the potential of this new technology. Even so, several universal legal and ethical themes have emerged as targets for regulatory oversight from state and federal policymakers as well as government enforcement agencies ranging from the Federal Trade Commission to state attorneys general. Namely, as companies shift to solutions that are partially reliant on AI and ML, the potential for consumer harm in the following areas increases:
1. Unfair bias. Algorithms have the potential to perpetuate stereotypes in society. Some companies have scrapped complex recruiting tools after years of use upon realizing that the systems perpetuated certain biases, for example by favoring male applicants. This occurred because the programming used to vet applicants was developed by observing patterns in previous hiring decisions. In industries where most of the resumes of candidates hired in the past came from men, the machine “learned” to favor male applicants.
2. Transparency. As AI and ML grow in sophistication, it will become harder for companies to explain to consumers how decisions were made. One example is the insurance industry, where more companies are experimenting with algorithms and predictive models based on an individual’s social media presence and other external factors to decide whether to extend coverage and to make cost determinations. Companies need to think about how to defend such decisions and whether the inputs that support them are appropriate and fair.
3. Accountability. In industries with high-stakes decision making, employing AI and ML also introduces confusion about responsibility when the technology fails and causes harm. One infamous example involved an Uber self-driving car that struck and killed a pedestrian. The incident illustrated the difficulty an individual (or, in this case, an individual’s family) may face in identifying the entity that caused the harm and where the breakdown occurred. The developers of the algorithm or technology, the engineering of the car itself, and the safety operator Uber hired to oversee the semi-autonomous vehicle all potentially could have fallen short of their obligations to the pedestrian. The National Transportation Safety Board ultimately determined the cause of the accident in this particular case, but it took the board over a year to do so.
4. Data management and security. Machine learning often requires an immense data set to establish patterns. As more sophisticated companies amass large amounts of data on consumers, they will be urged to develop strong policies related to privacy, data management, retention, and security to mitigate the potential impact of a data breach. Threat actors will recognize the value of such data, and companies that hold it will be targets.
2) The Business Roundtable Recommendations for Businesses and Policymakers
A) Recommendations for Policymakers
Emphasizing the “complex, context-dependent, and rapidly evolving AI ecosystem,” the Business Roundtable also offered “Policy Recommendations” to establish regulations without unduly limiting innovation. The Recommendations reject a one-size-fits-all solution. Rather, the Business Roundtable encourages regulators and policymakers to assess gaps, tailor standards, and collaborate with business to find the right regulations for various industries engaged in using AI.
Beyond advocating for restrained, principled, and informed policy, the Business Roundtable emphasized clear standards once regulators determine regulation is necessary, global coordination on standards and guidelines, and AI education and training as crucial for building trust and enabling innovation. The Policy Recommendations reflect the Business Roundtable’s encouragement that regulators allow industry growth, draw on a depth of international expertise, and ensure all parties are well informed before drafting legislation and regulations.
The Business Roundtable presented ten specific Policy Recommendations that will serve as overarching concepts when regulators begin to engage more directly in oversight activity. The Business Roundtable Policy Recommendations include:
- Adopt regulatory approaches to AI that are contextual, proportionate, and use-case specific.
- Embed AI rules and guidelines into existing frameworks as appropriate.
- Employ an agile and collaborative approach to AI governance.
- Adopt an adaptive approach to enforcement.
- Calibrate targeted and clear enforcement standards.
- Prioritize strategic international engagement on AI issues.
- Engage on global AI standards and guidelines.
- Strive for common principles and interoperability.
- Invest in AI education and proficiency at all levels.
- Support industry training and reskilling efforts.
B) Recommendations for Businesses
The Business Roundtable also outlined a roadmap and set of core principles for self-regulation as companies deploy AI innovations to promote fairness and transparency. The roadmap and core principles align with familiar legal and ethical critiques of AI. Specifically, the Business Roundtable highlighted the importance of the following corporate actions:
- mitigating the potential for unfair bias,
- increasing transparency/explainability, and
- improving data collection and management, and data security.
The core principles also highlight good governance procedures such as additional training and hiring in AI, diversity among teams responsible for AI, and company-wide understanding of AI.
Businesses should keep these recommendations in mind as they begin to rely more heavily on AI/ML products in day-to-day business, both to ensure that their programs are generally aligned with industry best practices and to mitigate the risk of having to significantly rework programs to meet compliance objectives.[1] The Business Roundtable specifically recommended:
- Innovate with and for diversity.
- Mitigate the potential for unfair bias.
- Design for and implement transparency.
- Invest in a future-ready AI workforce.
- Evaluate and monitor model fitness and impact.
- Manage data collection and data use responsibly.
- Design and deploy secure AI systems.
- Encourage a company-wide culture of Responsible AI.
- Adapt existing governance structures to account for AI.
- Operationalize AI governance throughout the whole organization.
3) Policymakers and Enforcement Agencies Heed the Call to Regulate AI
The Business Roundtable’s recommendations coincide with a burgeoning effort by policymakers and government enforcement agencies to oversee businesses that use AI and ML.
A) Legislative Efforts
In Illinois, lawmakers amended the Artificial Intelligence Video Interview Act in 2021 to require employers who rely solely on artificial intelligence to select applicants for interviews to report certain demographic information to the Illinois Department of Commerce and Economic Opportunity. See 820 ILCS 42/20.
In the summer of 2021, Colorado enacted a law prohibiting insurers from unfairly discriminating in their use of predictive models or algorithms. See Colo. Rev. Stat. § 10-3-1104.9. The law authorizes the adoption of additional regulations to carry out its purposes and requires companies to provide information to the government concerning their use of algorithms and predictive models in insurance. Id. § 10-3-1104.9(3). The law addresses concerns already identified by other regulators with respect to AI and ML. See, e.g., N.Y. Dep’t Fin. Services, Ins. Circular Letter No. 1 (2019).
At the federal level, there have been continued attempts to regulate companies using AI and ML. On February 3, 2022, Senators Cory Booker and Ron Wyden and Representative Yvette Clarke introduced in the Senate and House an updated version of a bill previously introduced in 2019: the Algorithmic Accountability Act (“AAA”), S. 3572/H.R. 6580. The bill would direct the Federal Trade Commission to oversee impact assessments of covered entities[2] that use automated decisions related to consumers’ access to education, employment, utilities, family planning, financial services, healthcare, housing, legal services, or other comparable services as determined by the FTC. Section 9 of the proposed bill would permit both the FTC and state attorneys general to enforce the regulations the FTC would promulgate under the law.
B) Enforcement Efforts
State attorneys general are also recognizing the impact that AI/ML technologies will have on consumers as the industry and the use of the technology expand. Attorneys general are charged with protecting consumers in their respective jurisdictions, and concepts of fairness, bias, transparency, and accountability are top of mind for state regulators. This focus is evidenced by increased publicity and legislative lobbying by state regulators and policymakers.
After being elected president of the National Association of Attorneys General (NAAG), Iowa Attorney General Tom Miller selected “Consumer Protection 2.0: Tech Threats and Tools” as his presidential initiative. Algorithms and AI were a primary focus of the annual Capital Forum meeting. State attorneys general and members of the Biden Administration, such as Federal Trade Commission Chair Lina Khan and Consumer Financial Protection Bureau (CFPB) Director Rohit Chopra, also emphasized their concerns about the increasing role that algorithms play in making important decisions that directly affect consumers.
In December 2021, D.C. Attorney General Karl Racine worked with the D.C. Council to introduce legislation to regulate discriminatory algorithmic practices: the Stop Discrimination by Algorithms Act of 2021, B24-0558. The legislation would require companies to notify individuals adversely affected by an algorithmic decision and to conduct annual audits. The bill includes both a private right of action and public enforcement, with penalties of $10,000 per violation. It was referred to the Committee on Government Operations and Facilities on December 21, 2021.
Takeaways
Regulators are increasingly turning their attention to AI/ML technologies with the specific aim of protecting consumer interests. Businesses have taken a good first step toward partnering with regulators to construct a framework that encourages optimizing the potential of the new technology while also protecting consumers from potential harm. These efforts will give the industry credibility for the future. It is much easier to proactively engage with regulators to help shape the regulatory landscape than to modify enacted regulations and legislation drafted by policymakers who may lack the depth of knowledge and foresight of industry participants. Continued thought leadership in the regulatory landscape by companies deploying AI/ML technologies will help ensure a successful future where AI/ML technologies safely improve many aspects of daily life.
[1] Much as the Fair Information Practice Principles (“FIPPs”) provide the foundational principles for privacy policy, serve as guideposts for their implementation, and assist companies at a fundamental level with compliance with privacy regulations, companies may wish to consider using the Policy Recommendations as a framework for developing programs that align with regulatory objectives in the absence of existing law.
[2] The AAA applies to entities that (1) exceed $50,000,000 in annual gross receipts or have greater than $250,000,000 in equity value; (2) possess information on 1,000,000 consumers or consumer devices; or (3) are substantially owned, operated, or controlled by an entity that meets the requirements of (1) or (2).