Key considerations around data security and privacy, the future of work, and government regulation will be paramount for business leaders as they evaluate AI adoption within their organizations.
The launch of ChatGPT, a harbinger of change, has become a cultural phenomenon and spurred a wave of interest in artificial intelligence (AI). According to Swiss banking giant UBS, the generative artificial intelligence (GenAI) application may be one of the fastest-growing apps in history, having been estimated to reach 100 million monthly active users. The introduction of this groundbreaking tool, combined with the growth of Large Language Model (LLM) solutions, has offered enterprises a new paradigm, or at least the promise of one, for evolving their day-to-day business processes. This is a genie that is unlikely to go back into the bottle.
Yet, as business leaders and global organizations eagerly seek to accelerate their AI adoption efforts, concerns about data security, privacy, AI “hallucinations,” and regulatory compliance remain top of mind. As enterprises seek to leverage the strengths of AI, they must also mitigate its risks.
Recently, there has been increasing scrutiny over how accurate and reliable ChatGPT’s “intelligence” really is, adding another layer of complexity to the current AI boom. Intelligence may, in fact, be the wrong way to think about what these models and derivative technologies do. Still, in this accelerated environment, it will be critical for organizations to carefully evaluate each new solution before integrating it into critical operations and processes.
The State of AI
While GenAI technology remains nascent, it is poised to accelerate the maturation of capabilities such as document parsing, code generation, and information extraction. Diverse forms of AI are being adopted across industries, with the financial services sector leading the way: AI’s predictive algorithms are proving highly effective in risk assessment, prediction, and mitigation. In contrast, domains that rely heavily on qualitative decision-making, such as marketing and manufacturing, have been more hesitant to adopt AI, as noted by Statista.
The question arises: How does GenAI achieve enterprise readiness? Much has been written, and much said in conferences and sales pitches, about its promise to fundamentally disrupt the enterprise workforce. While it is true that this represents a generational agent of change, it is not quite ready yet.
OpenAI burst onto the global stage, igniting a whirlwind of imagination as users swiftly grasped the immense potential of this technology. Yet at the heart of innovation lie three fundamental considerations every product professional thinks about:
- Product Market Fit: Does it do something really well in a way that has not been done before?
- Security: Numerous benchmarks now evaluate security, including how data is stewarded by the original equipment manufacturer (OEM) technologies embedded within a product.
- Contextual Shift: Have the innovators broken out of the confines of conventional systems and tools, integrating the innovation so that it can be managed and quality-controlled within the jobs it aims to support?
GenAI excels at drawing on its training data and the context it is given to produce impressive information synthesis and novel content generation. This proficiency applies across domains, including code, responsive prose, and general information structures. To validate its efficacy in the legal sector, we subjected dozens of LLMs to testing across different legal jobs, with experts evaluating their outcomes. This assessment encompassed tasks like answering legally pertinent questions and structuring intricate legal documents into manageable frameworks. Although this is a single sector and a limited set of examples, large enterprises want generative AI solutions that go beyond merely substituting for human processes. Consequently, considerable work remains to ensure that solutions built on these technologies align with market demands and achieve product-market fit.
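An evaluation of this kind does not require elaborate tooling. The sketch below is a minimal illustration in Python, not the actual harness we used: it simply collects outputs from several models for the same legal tasks and writes them to a file that domain experts can score against a rubric. `LegalTask`, the `models` mapping, and the CSV layout are hypothetical placeholders.

```python
import csv
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class LegalTask:
    task_id: str
    prompt: str   # e.g., a legally pertinent question or a document to restructure
    rubric: str   # the criteria a human expert will score the output against

def collect_outputs(tasks: List[LegalTask],
                    models: Dict[str, Callable[[str], str]],
                    out_path: str = "llm_legal_eval.csv") -> None:
    """Run every model on every task and save the outputs for expert review."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["task_id", "model", "rubric", "output", "expert_score"])
        for task in tasks:
            for name, generate in models.items():
                writer.writerow([task.task_id, name, task.rubric,
                                 generate(task.prompt), ""])  # score filled in by a reviewer
```

Leaving the expert score as a blank column keeps the human review step explicit rather than treating it as an afterthought.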
However, specific prerequisites have yet to be met. For example, deploying these technologies for cost-effective market entry requires high-volume, high-performance information processing, and the existing capacity constraints of many LLMs may pose challenges. Graphics processing unit (GPU) shortages have led providers to ration access to resource-intensive LLMs in an effort to distribute demand fairly, which in turn produces uneven performance across users. An answer that has to be resubmitted multiple times, or that takes a long time to arrive, is not the magical experience some expect.
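On the client side, applications typically absorb some of this unevenness with retries. The snippet below is a generic sketch rather than any particular provider’s API: `generate` stands in for whatever call reaches the model, and a real implementation would catch only the provider’s specific rate-limit and timeout exceptions.

```python
import random
import time
from typing import Callable

def call_with_backoff(generate: Callable[[], str], max_attempts: int = 5) -> str:
    """Retry a rate-limited or slow model call with exponential backoff plus jitter."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            return generate()
        except Exception:  # narrow this to rate-limit / timeout errors in practice
            if attempt == max_attempts:
                raise
            time.sleep(delay + random.random())  # jitter avoids synchronized retries
            delay *= 2
    raise RuntimeError("unreachable")  # the loop always returns or re-raises
```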
See also: How Generative AI Promises to Make Certain Jobs Easier
Can We Trust It? Data Security, Privacy, and Regulation
Today, organizations lacking appropriate safeguards face immediate risks when using AI, as the Samsung ChatGPT data leak illustrates. The inadvertent submission of company data, and its potential inclusion in ChatGPT’s training set, highlights the need for careful consideration when deploying enterprise applications of ChatGPT and of AI in general. In a KPMG study, 81% of executive respondents considered cybersecurity a primary concern with AI adoption, while 78% saw data privacy as a primary concern. To avoid compromising privileged data, business leaders adopting AI technology for enterprise applications must establish appropriate guardrails for AI tools and for any data used in training sets.
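What such a guardrail looks like will vary by organization, but a simple first layer is to screen prompts for obviously sensitive patterns before they leave the company’s boundary. The sketch below is illustrative only: the regex patterns and the `send_to_llm` hook are assumptions, and a production deployment would rely on a vetted data-loss-prevention or PII-detection service rather than hand-rolled expressions.

```python
import re
from typing import Callable

# Illustrative patterns only; real deployments should use a vetted DLP / PII service.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace matches of known sensitive patterns with labeled placeholders."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def guarded_prompt(prompt: str, send_to_llm: Callable[[str], str]) -> str:
    """Redact the prompt before handing it to an external model endpoint."""
    return send_to_llm(redact(prompt))
```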
Fabrication of evidence presents another worrisome risk, including the proliferation of “deepfake” photographs and video imagery. While falsified evidence will likely spur greater forensics involvement in legal review, faked photographs can wreak havoc in other areas as well. Consider the faked photograph of the Pentagon shared on social media earlier this year, which caused a brief drop in stock prices before the image was widely debunked.
As AI manipulation becomes more sophisticated, businesses across all sectors must do their due diligence to conduct fact checks and verify their sources. In response to the rapid advancement of AI, the White House launched an initiative on “responsible AI,” addressing worker impact, employer use of surveillance technology, and regulatory standards. Enterprise adopters must vigilantly monitor the evolving international regulatory framework as governments establish standards and practices aimed at safeguarding against foreseeable data-related risks.
See also: Responsible Generative AI Consortium Established
Use Cases for Generative AI in Sensitive Industries
Research from KPMG found that 65% of US executives believe GenAI and LLM solutions will have a significant impact on their organization in the next 3-5 years. However, 60% say that we are still potentially a few years away from actual implementation. While full-scale AI implementation may seem distant, investing time and resources into understanding crucial business needs and capabilities will pay dividends in the long run.
GenAI is already displaying its potential to dramatically impact business use cases within highly regulated industries such as finance, healthcare, and legal. In finance, AI can improve forecasting accuracy, reduce errors, lower operational costs, and optimize decision-making for organizations investing in its development (Gartner). In healthcare, it facilitates synthetic data generation for drug development, diagnostics, administrative tasks, and streamlined procurement, though trustworthiness and validation are imperative (Goldman Sachs). In the legal field, lawyers are cautious about AI adoption because of the importance of accuracy and data integrity in legal proceedings (Bloomberg Law). In the near future, however, GenAI will offer new capabilities such as document parsing, code generation, improved natural language understanding, and workflow optimization.
Many vendors, though not all, are adapting their terms and technologies to address data security and privacy concerns, and strategies exist to mitigate these risks. The overall maturity of participants, however, remains nascent.
AI and the Future of Work – Management and Workflow
According to research from Goldman Sachs, AI could impact as many as 300 million jobs globally over the next five to ten years. While warnings of job displacement may seem dire, advancements in technology have historically led to the creation of new employment opportunities. As technology streamlines tasks, it frees human labor for more creative endeavors. AI could also enhance labor productivity and contribute to global GDP growth of up to 7% over time. Office and administrative support jobs have the highest automation potential at 46%, followed by legal work at 44% and architecture and engineering tasks at 37%. However, the impact of AI on jobs will vary across industries.
Looking at the jobs that could be affected, one could argue that the extent of adoption depends on how effectively the management layer can accommodate these technologies. Not all jobs have the same management requirements, so enterprises vary in their readiness for GenAI, and certain roles will be ready for it sooner than others.
For instance, a marketing professional might receive an AI-generated draft of marketing collateral and fold it into their normal, human editing process before it is submitted for review and publication. In this scenario, the process is straightforward and does not require complex workflows or special tools. Conversely, for a financial services firm using AI to analyze risk or provide investment advice, effective management of information is crucial. That management encompasses provenance, accuracy, and distribution (use), demanding a GenAI implementation that is both auditable and easily managed within a system.
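As a concrete illustration of what “auditable” can mean, the sketch below wraps each model call with a provenance record that a reviewer later signs off on. The field names, the `model_call` hook, and the log format are assumptions made for the example, not a prescribed schema.

```python
import hashlib
import json
import time
from typing import Callable

def audited_call(prompt: str, model_name: str, model_call: Callable[[str], str],
                 log_path: str = "genai_audit.log") -> str:
    """Invoke a model and append a provenance record (what, when, by which model)."""
    output = model_call(prompt)
    record = {
        "timestamp": time.time(),
        "model": model_name,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),   # fingerprint, not raw data
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "reviewed_by": None,  # filled in when a human signs off on the output
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return output
```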
There has been much debate around GenAI’s ability to transform legal work and reduce the cost of legal outcomes. However, these discussions often overlook the intricate technical and human systems that govern the pursuit of results like justice or legal advice. The strategic implementation of GenAI within the workflows of legal professionals is critical to ensuring its trustworthiness, reviewability, and scalability across a wide range of topics and contexts.
Conclusion
AI innovation stands at the threshold of immense possibilities and has the potential to impact a myriad of aspects of society as we know it. While businesses may approach adoption cautiously, navigating risks and regulatory landscapes, the allure of gaining a competitive advantage will drive widespread adoption.
As enterprise enthusiasm continues to skyrocket, it becomes essential to balance excitement with healthy skepticism and rigorous evaluation. Embracing the power of AI and shaping the future requires careful consideration of its limitations, potential risks, and ethical implications.