Trust and Transparency: Consumer Expectations and Concerns in the GenAI Era

PinIt

Businesses can reap the benefits of GenAI as a competitive differentiator, but only if they understand the implications of transparency and unbiased GenAI and follow the necessary steps to build a responsible platform.

Building consumer confidence and trust has always been a hallmark of the most successful brands. Trustworthy brands that build a strong base of loyal customers can be more confident of repeat sales and guaranteed long-term revenue streams. Edelman found that when consumers trust a brand, 59% are more likely to repurchase from the same company, even when their products and services are more expensive than competitors. Furthermore, two-thirds are more inclined to stay loyal and recommend the brand to friends and family, even if it makes a mistake.

But the generative artificial intelligence (GenAI) era presents a new landscape with a different set of challenges, and the stakes are much higher when brands get it wrong. When GenAI systems aren’t trained properly, they can perpetuate harmful stereotypes and biases at scale about gender, race, and other socioeconomic factors. For example, when Bloomberg asked a GenAI platform to produce images related to job titles, the system overwhelmingly linked high-paying roles to men with lighter skin tones and low-paying roles to men and women with darker skin tones. Disturbingly, keywords like “inmate” and “drug dealer” produced images of people with darker skin, while “terrorist” displayed men with dark hair and head coverings. GenAI systems have also been found to disseminate inaccurate content, producing false information about the James Webb Telescope and citing fabricated legal cases.

In an era dominated by digital interactions, consumers expect brands to provide helpful, accurate, and unbiased information. They also want transparency about where and how brands are sourcing their training data. Those who proactively communicate their data governance practices can help alleviate consumer concerns, fostering trust and loyalty and, in turn, driving business growth.

See also: AI Bias Can Kill a Business: Prevent It

Why Is Transparency and Bias Mitigation Important?

In its nascent stage, GenAI has demonstrated its ability to improve user experience and transform the business-consumer relationship for good. Unfortunately, biased data can seep into the training process. Developers must understand the business implications of employing a prejudiced platform. When GenAI models are built on historical data, their behavior might be based on outdated information or stereotypes that don’t represent current views. For example, a hospital using AI to rectify workforce gender inequities might undermine its efforts with data that says male nurses and female doctors are nonexistent. This information is clearly false, and in an increasingly diverse society, perpetuating these stereotypes hurts users’ willingness to trust GenAI.

There’s also the issue of transparency. Every new technology has an adjustment period, but companies can alleviate consumer hesitations about AI with openness. Customers are more willing to share data when they know how it will be used and that it will be stored securely. Being upfront about data sourcing and use and any potential biases or inaccuracies will maximize the information consumers are willing to share. While many companies have built data governance teams to oversee transparency initiatives, they can take it one step further with a dedicated forum where customers can flag potential issues, similar to social media platforms that allow users to flag spam or other harmful content.

To mitigate these issues, developers must address the opacity of GenAI. These platforms are built on complex algorithms taking in terabytes of data daily, so it can be challenging to understand how the system calculates outputs from user inputs. Without proper monitoring solutions, it can be challenging to rectify the issues that turn users off in a timely manner, such as incorrect product recommendations or the use of offensive language or stereotypes. Employing reinforcement learning from human feedback (RLHF) techniques is necessary. Once a model is pre-trained, a human inputs a series of prompts and responses, and the system generates an output based on the prompts. Those responses are then evaluated, and the model is fine-tuned through a human-led reward process, creating a constant feedback loop focused on improvement.

See also: AI Bias: FTC Cautions Businesses and Offers Guidance

What Do Consumers Think of GenAI?

Today, AI is more accessible than ever, with numerous free, open-source tools that users can leverage to write college essays, create music, design art, and so much more. The beauty of GenAI is its ever-growing and never-ending list of use cases. According to SimilarWeb, GenAI assistants, like chatbots, represent 68% of use cases, but two other categories are rising. In the first half of 2023, user traffic to AI companion builders, where users can create virtual characters and interact with them, grew from fewer than 100 million monthly visitors to nearly 400 million. During the same timeframe, traffic to content generation platforms increased from under 200 million to more than 300 million.

Despite this exponential growth, consumers still have reservations about how GenAI is being used, and many do not understand the processes behind the technology. In our recent study, we found that 52% of consumers said the onus is on the companies developing and building GenAI applications to “police” the training datasets they use and determine if they are being used with the content creator’s permission. The majority (75%) said they wanted transparency about developers’ data sourcing methods. There was, however, a noticeable variation in consumers’ understanding of model training: although 55% of all respondents say they understand the data training process, more than half of 18-24-year-olds (54%) and people over 54 (58%) don’t grasp it.

Regarding trust, consumers are almost evenly split: 37% believe GenAI is accurate, while 32% don’t and 31% don’t know. However, users are most concerned about bias: only 31% trust GenAI content to be unbiased, while 43% don’t. Interestingly, though most consumers aged 18-24 don’t understand GenAI, most trust its accuracy (42%) and unbiased nature (42%).

It’s clear that while consumers’ trust levels in GenAI varies, brands must take the time to understand and address consumer expectations or risk customer attrition. In another TELUS International study, users highlighted some top concerns: the spread of misinformation (61%), biases that hurt career or financial status (57%), and biased or irrelevant search engine results (56%). Consumers know they have many options and aren’t afraid to find a better experience on a different platform. PWC found that 46% of consumers say their loyalty is defined by whether they like using a product or service, and 37% will jump ship after one bad experience, including 42% of Gen Z.

Building Responsible, Unbiased GenAI

The use of RLHF produces several important benefits. Where RLHF differs from traditional reinforcement learning is the incorporation of human-designed guardrails. With reinforcement learning, a GenAI model could employ wasteful trial-and-error techniques or find loopholes that don’t contribute to the best possible outcome. With human feedback, the model learns the complexities of human behavior in the appropriate context. The constant feedback loop means bias can be identified and mitigated faster; this is why it’s crucial that brands employ a team of employees with diverse cultural backgrounds, education levels, and perspectives. Also, RLHF can address GenAI hallucinations, where models fill knowledge gaps with inaccurate information. As part of the ongoing training process, humans can correct false responses and train models to enhance the accuracy of future outputs. 

But, while RLHF is a necessary element of GenAI model development, it isn’t the only aspect. Designing responsible GenAI models is an iterative four-step process that includes:

1) Identifying potential threats. Security and responsibility go hand-in-hand. Organizations can employ red teams (hackers hired to attack cybersecurity defenses) and blue teams (hired to defend against red team attacks), stress testing, or other methods.

2) Evaluating the significance. Organizations must develop metrics to help identify the severity of any threats. If the model is particularly susceptible to harm during testing, those issues will only multiply under the stress of millions of daily users.

3) Employing mitigation strategies. This step is where RLHF, prompt engineering, and other strategies come into play. Humans must always be in the loop, implementing changes and testing after each round to ensure success.

4) Deploying an operational readiness plan. Implementing GenAI requires total organizational commitment. The steps above ensure the platform is ready to go, but employees must be prepared and empowered to use AI in their daily tasks, and customers must be kept informed about changes to their experience.

Reaping the Benefits of GenAI

Brand-consumer interaction rarely occurs today without some form of AI, but as its usage grows, it’s become increasingly polarizing. The missteps have been highly publicized, and the negative outcomes for brands are amplified because, despite its many positive use cases, many are still learning and trying to understand the societal impacts and implications of this technology. While consumer curiosity is certainly there, the onus is on businesses to build responsible, transparent tools to establish and maintain customer trust.

Businesses can reap the benefits of GenAI as a competitive differentiator, but only if they understand the implications of transparency and unbiased GenAI and follow the necessary steps to build a responsible platform.

Michael Ringman

About Michael Ringman

Michael Ringman is the Chief Information Officer at TELUS International, a global customer experience provider powered by next-gen digital solutions, and has been with the company since 2012. As CIO, Michael remains focused on driving continuous innovation for both customers and team members and has built his career on implementing technology services, especially developing public and private cloud solutions for retail, government, technology, and finance verticals. Michael holds a Bachelor of Science degree in Aerospace Engineering and a Master of Science in Telecommunications, both from the University of Colorado.

Leave a Reply

Your email address will not be published. Required fields are marked *