Research shows that almost half (49%) of people have used generative AI, with over one-third using it on a daily basis. There’s no denying that generative AI has many benefits and use cases, but the unsanctioned use of these tools within an organization, outside of IT governance, known as shadow AI, can lead to significant risk.
Over the past year, we’ve seen tech giants like Amazon jump at the opportunity to leverage ChatGPT and other AI tools for business gain. Others are moving in the opposite direction: last year, Samsung banned employees from using tools like ChatGPT and Google Bard after an accidental data leak, and banks such as Goldman Sachs and Citigroup are restricting AI use over concerns about the sharing of sensitive information.
We’re seeing prominent institutions respond in this way because shadow AI introduces a new, poorly understood threat vector within the broader shadow IT category, one that carries both security and compliance consequences.
It’s estimated that one-third of successful cyberattacks come from shadow IT. Even so, the threat of shadow IT is generally well understood and relatively easy to manage once spotted: decommission the offending system and move on. Shadow AI, on the other hand, carries more unknown risks that are hard to quantify and manage. It’s not just about the unsanctioned use of these tools; it’s also about the unsanctioned use of company data in an unauthorized and currently unregulated space. Furthermore, the people who use, or could use, these AI tools are not limited to technologists.
These factors, combined with the promise of getting more work done in less time, open the door to more people feeding company data into unauthorized tools, putting sensitive information and intellectual property at risk.
See also: Considerations and a Blueprint for Responsible AI Practices after a Year of ChatGPT
Addressing shadow AI threats
While the traditional method of managing and monitoring data activities is critical to securing a data environment, ensuring authorized data usage, and meeting privacy requirements, it’s not enough to defend against shadow AI outside data center walls. Nor is an outright ban on all AI use enough, since many employees will still find ways to use these tools discreetly to their advantage.
Luckily, there are a few additional steps IT leaders can take to help reduce these lurking threats. These include:
- Educating employees about the risks of unsanctioned AI use. It’s important to remember that most shadow AI violations are not malicious in intent. After hearing from colleagues, friends, and family about how popular platforms like ChatGPT can help with tedious work or even create art or music, employees are often tempted to try these tools and see how they can benefit. A critical first step, therefore, is to educate your workforce. This means being specific about the threats and implications shadow AI introduces: first, the potential to feed sensitive or proprietary information into a black-box AI model; second, the lack of a holistic picture of AI use across the company, which means the impact of any AI failure will be unknown. Establishing this level of transparency with employees helps demonstrate that the risks are real.
- Updating AI policies and processes. An overall ban on all AI use is not a reasonable path forward for many businesses, especially those that rely on machine learning and large language models on a daily basis. But IT leaders can update their policies to include specific AI restrictions and guidance on how to gain approval for legitimate business needs. To provide an authorized route for those use cases, organizations can implement a review process in which each proposed AI use is evaluated, risk assessed, and approved or denied by the business.
- Adopting an endpoint security tool. Realistically, AI education and updated policies alone will not stop every user from experimenting with shadow AI technology and services. To protect themselves further, organizations need additional tools that extend visibility into all AI use and reduce risk beyond their walls, at the user level. The endpoint is the most effective place to gain control over, and visibility into, whether and how users are still using shadow AI. In many cases, adopting an endpoint security tool is the solution, addressing the greatest risk: remote users and cloud-based AI platforms. This can take the form of technologies like Cloud Access Security Brokers (CASB) and other tools that cover the endpoint and remote workers (a minimal detection sketch follows this list).
- Establishing end-user agreements with AI vendors. End-user license agreements (EULAs), also known as software license agreements, are common in IT. Implemented between end users and software vendors, they set parameters around how users can use a specific piece of software or application, including restrictions; for example, a EULA typically prohibits users from distributing or sharing the software in ways that benefit themselves rather than the vendor. From an AI perspective, implementing similar agreements could help control what data is entered into AI models and platforms. A formal agreement that clearly outlines what types of data employees can and cannot use when leveraging these models sets clear boundaries and guidelines. It also opens up communication with the AI vendors themselves, making the process more collaborative and transparent (a simple prompt-screening sketch appears after this list).
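As a rough illustration of the endpoint and network visibility described above, the following Python sketch scans a web proxy log for requests to well-known generative AI services. The domain watchlist, log file name, and column names are assumptions made for this example only; in practice, this telemetry would come from CASB or endpoint tooling rather than an ad hoc script.

```python
# Illustrative sketch: surface possible shadow AI use by flagging outbound
# requests to known generative AI services in a web proxy log.
# Assumes a CSV log with 'user' and 'host' columns (hypothetical format).
import csv
from collections import Counter

# Hypothetical watchlist of public generative AI endpoints
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def find_shadow_ai_usage(proxy_log_path: str) -> Counter:
    """Count requests per (user, domain) that hit a watched AI service."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("host") or "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user") or "unknown", host)] += 1
    return hits

if __name__ == "__main__":
    # "proxy.csv" is a placeholder path for demonstration
    for (user, host), count in find_shadow_ai_usage("proxy.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

A report like this does not block anything on its own, but it gives IT leaders a starting point for the conversations and policy reviews described earlier.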
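Similarly, the data boundaries set out in a vendor agreement can be reinforced with a lightweight screen on outbound prompts. The sketch below is a minimal, hypothetical example: the regular expressions and category names are illustrative only and are not a substitute for a real data loss prevention rule set.

```python
# Illustrative sketch: screen text for restricted data patterns before it is
# sent to an external AI model. Patterns and categories are hypothetical
# examples of what a data-use agreement might prohibit.
import re

RESTRICTED_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\b(confidential|internal use only)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of restricted categories detected in the prompt."""
    return [name for name, pattern in RESTRICTED_PATTERNS.items() if pattern.search(prompt)]

# Example usage with a made-up prompt
prompt = "Summarize this CONFIDENTIAL roadmap for Q3."
violations = screen_prompt(prompt)
if violations:
    print(f"Blocked: prompt contains restricted content ({', '.join(violations)})")
else:
    print("Prompt cleared for submission")
```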
See also: Is AI Advancing Too Quickly?
Looking ahead
Unfortunately, the challenges associated with shadow AI will likely get worse before they get better, as the adoption of AI tools is outpacing most organizations' ability to secure them. What’s more, it will likely take organizations time to implement the right policies and deploy the required training to ensure data is used correctly and securely within AI models. In response, however, we will likely see more companies and solutions emerge to address these risks.
Never before has the technology industry faced a situation of this scale in which organizations don’t have a clear understanding of where data is going, who is receiving it, and how it’s being used. Gone are the days of quickly decommissioning problematic systems to address all shadow IT problems. At the end of the day, shadow AI’s footprint is massive, and organizations must take action now to prevent more unsanctioned AI use before it is too late.