The problem with the content produced by ChatGPT (and other generative AI tools) is that it comes with no warranties or accountability whatsoever.
By now, you’ve probably heard enough gushing reviews of ChatGPT and other generative AI models. The technology is undoubtedly a major step toward the democratization of AI: you no longer need a Ph.D. in data or computer science, or a multi-million-dollar IT budget, to explore its possibilities. At the same time, some industry experts are cautioning against placing too much trust in these tools.
On the plus side, generative AI services are “taking assistive technology to a new level, reducing application development time, and bringing powerful capabilities to nontechnical users,” states a report from McKinsey.
“This latest class of generative AI systems has emerged from foundation models – large-scale, deep learning models trained on massive, broad, unstructured data sets (such as text and images) that cover many topics. Developers can adapt the models for a wide range of use cases, with little fine-tuning required for each task. For example, GPT-3.5, the foundation model underlying ChatGPT, has also been used to translate text, and scientists used an earlier version of GPT to create novel protein sequences. In this way, the power of these capabilities is accessible to all, including developers who lack specialized machine learning skills and, in some cases, people with no technical background. Using foundation models can also reduce the time for developing new AI applications to a level rarely possible before.”
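To make that accessibility concrete, here is a minimal sketch of reusing GPT-3.5 for one of the tasks McKinsey cites, translating text, with nothing more than a plain-language instruction. This is an illustrative example, not part of the McKinsey report: the model name, prompt, and client interface are assumptions based on a recent version of the openai Python package.

    from openai import OpenAI

    # Assumes the OPENAI_API_KEY environment variable is set.
    client = OpenAI()

    # Reuse the general-purpose GPT-3.5 model for a specific task (translation)
    # with no fine-tuning, just an instruction in the prompt.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You translate English text into French."},
            {"role": "user", "content": "The quarterly report is due on Friday."},
        ],
        temperature=0,
    )

    print(response.choices[0].message.content)

The same few lines, with a different prompt, could summarize a document or draft an email, which is exactly why these capabilities reach developers and nontechnical users who have never trained a model.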
AI users – especially those employing it for enterprise decision-making – need to tread cautiously into this new world, however. There is, as yet, no accountability for the content and code ChatGPT generates, according to Andy Thurai, analyst with Constellation Research. “ChatGPT is a new shiny object, but the problem is most of the content that’s produced, either by ChatGPT or others, comes with no warranties or accountability whatsoever,” he said in an interview with The Cube.
See also: Reports of the AI-Assisted Death of Prose are Greatly Exaggerated
The legalities of AI-generated content are murky at this time, with uncertainty about the copyright protections of machine-generated code, and even more uncertainty over who takes responsibility for machine-generated content that leads to harm. Platforms such as ChatGPT also generate software code, which opens up another can of legal worms. “It allows you to produce code, but the problem with that is, while the models are not exactly stolen, they’re created using the GitHub code, and they’re getting sued for that,” Thurai said.
The bottom line, Thurai says, is to feel free to employ ChatGPT for personal uses, but not for commercial purposes. “You use it either to train or to learn, but in my view it’s not ready for enterprise grade yet.”
See also: Recommender Systems: Why the Future is Real-Time Machine Learning
Plus, the McKinsey authors urge that any output from generative AI needs to be checked and double-checked. “ChatGPT, for example, sometimes hallucinates, meaning it confidently generates entirely inaccurate information in response to a user question and has no built-in mechanism to signal this to the user or challenge the result,” they state. “For example, we have observed instances when the tool was asked to create a short bio and it generated several incorrect facts for the person, such as listing the wrong educational institution. Filters are not yet effective enough to catch inappropriate content.”
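In practice, “checked and double-checked” means no generated text reaches a customer, a codebase, or a decision without an explicit review step. The sketch below is one illustrative pattern, not a McKinsey or OpenAI recommendation: the model drafts a short bio, and a person must approve every factual claim before anything happens downstream. The names, prompt, and review function are hypothetical, and the client interface assumes a recent version of the openai Python package.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def draft_bio(name: str) -> str:
        """Ask the model for a short bio. Any fact in the output may be hallucinated."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": f"Write a two-sentence professional bio for {name}."}],
        )
        return response.choices[0].message.content

    def human_review(text: str) -> bool:
        """Review gate: a person confirms every claim before the text is used."""
        print(text)
        return input("Every fact verified and approved? (y/n): ").strip().lower() == "y"

    draft = draft_bio("Jane Example")  # hypothetical subject
    if human_review(draft):
        print("Approved for use.")
    else:
        print("Rejected: route back for manual fact-checking.")

The gate itself could just as easily be a second model pass, a fact-checking service, or an editorial workflow; the point is that the check lives outside the model, because the model has no built-in way to flag its own errors.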
The McKinsey report urges executives and managers to assemble a cross-functional team, including data science practitioners, legal experts, and functional business leaders, to think through basics such as selecting targeted use cases and establishing legal and community standards.