Generative AI has tremendous potential to alleviate burdens faced by SecOps teams, but most companies remain cautious.
Almost nothing moves faster than cybersecurity needs. Security Operations (SecOps) teams face an uphill battle managing complex, rapidly changing environments as essential technology evolves. Companies can’t afford to skip those advancements, but keeping pace can leave teams overworked and understaffed. Generative AI, with its broad functionality and human-like processing, holds the promise of helping SecOps teams manage increasing demands while enabling businesses to integrate cutting-edge technology. But “the how” comes with a heavy dose of “stop and consider.”
Challenges in harnessing generative AI
Generative AI has received a lot of positive press, and companies from Cisco to Google are rushing to deploy their own cybersecurity solutions built on it. But companies need to consider the downsides. Recent reports have highlighted the pitfalls of AI code generators, for example, which can produce code that may or may not be secure. Generative AI relies on quality input and needs human oversight even in ideal environments. And in some cases, the pitfalls of applying AI far outweigh the benefits:
- Lack of contextual understanding: AI systems often struggle to interpret contextual information. Context is critical to making accurate decisions in cybersecurity, and AI may offer recommendations that aren’t appropriate or effective for the security situation at hand. Human teams then spend time cleaning up the mess or double- and triple-checking recommendations, losing time either way.
- Data bias: Generative AI models train on large data sets, and the quality and representativeness of that data influence their performance. Data containing biases or inaccuracies can yield misleading or outright incorrect outputs. Human teams then spend time tracking down vulnerabilities or playing catch-up once an incident occurs.
- Limited explainability: If generative AI models are too much of a black box, human teams can’t validate decisions or course-correct when the models make poor ones.
- Complexity in the environment: Achieving seamless integration within complex cybersecurity environments can be a significant technical challenge. Human teams may save time in some areas by applying AI, only to lose it again troubleshooting integration issues and chasing loopholes.
Companies have to strike a balance between the benefits and risks of generative AI in their SecOps strategies through close collaboration between AI experts and cybersecurity professionals, as well as robust testing and validation processes.
See also: Net Security Requires Tight NetOps and SecOps Integration
Training and expertise for effective use
Cybersecurity professionals will likely need training in generative AI in order to know where to intervene and where to automate. Crafting clear and unambiguous requests is a critical part of quality control and a skill most will need moving forward. AI relies heavily on the quality of human input; skillful prompt engineering maximizes the likelihood of generating helpful responses. Cybersecurity practitioners also need strong critical thinking skills to review and evaluate output generated by AI.
Other critical skills include:
- Understanding of generative AI: This includes understanding different generative models, their capabilities, and their limitations.
- Model selection and evaluation: Although low-code tools have made development more accessible, understanding the underlying algorithms helps teams build safer cybersecurity tools.
- Contextualization and prompt engineering: Clear, unambiguous requests maximize the quality of responses from these models (see the sketch after this list).
- Critical analysis and interpretation: AI needs human oversight. Practitioners must be able to validate outputs and identify potential errors or biases, particularly in cybersecurity contexts.
- Integration and collaboration: SecOps experts must understand how, and when, to integrate AI into existing systems. Additionally, successful integration requires heavy involvement from AI domain experts.
- Ethical and legal considerations: AI will always require attention to detail when it comes to legal compliance. Ethics questions include privacy, bias, and discrimination issues.
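To make the prompt-engineering point concrete, here is a minimal Python sketch of a context-rich triage prompt. The alert fields, the wording, and the `build_triage_prompt` helper are illustrative assumptions rather than any particular product or model API; the resulting string would be handed to whatever model the team actually uses.

```python
# A minimal sketch of context-rich prompting for a SecOps triage task.
# All field names and wording are hypothetical; wire the final string
# to whatever model API your team has adopted.

VAGUE_PROMPT = "Is this alert bad?"  # gives the model almost nothing to reason over

def build_triage_prompt(alert: dict) -> str:
    """Embed environment context so the model isn't left guessing."""
    return (
        "You are assisting a SOC analyst. Using ONLY the details below,\n"
        "classify the alert as benign, suspicious, or malicious, and\n"
        "explain your reasoning in two sentences.\n\n"
        f"Alert source: {alert['source']}\n"
        f"Rule triggered: {alert['rule']}\n"
        f"Asset role: {alert['asset_role']}\n"
        f"Asset criticality: {alert['criticality']}\n"
        f"Recent related events: {alert['related_events']}\n"
    )

alert = {
    "source": "EDR",
    "rule": "powershell.exe spawned by winword.exe",
    "asset_role": "finance workstation",
    "criticality": "high",
    "related_events": "3 failed logins in the prior hour",
}

print(build_triage_prompt(alert))  # send this string to your model of choice
```

The contrast with the vague prompt is the whole lesson: the structured version constrains the model to the evidence at hand and asks for a bounded, reviewable answer.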
Navigating limitations and ensuring accountability
SecOps teams need to get very good at recognizing the limitations of generative AI tools, particularly in complex, multistep, and multi-branch processes. In these scenarios, generative AI tools’ reasoning capabilities will likely be exceeded, leading to unintended consequences.
Contextual awareness is a significant challenge because AI doesn’t always have the capability or sophistication to respond based on contextual clues. The machine might apply a command too broadly, for example, causing delays or access errors.
Proactive measures can help mitigate the risk of such scenarios:
- Context-aware prompts: Teaching AI to recognize context will improve accuracy and decision-making.
- Robust testing and validation: Implementation isn’t a one-time thing. Working with AI requires vigilance and continuous validation.
- Human-in-the-loop approach: Human involvement prevents over-reliance on AI and ensures context remains a priority. Decisions should combine AI-generated insights with human judgment (see the sketch after this list).
- Explainability: Model introspection, attention mechanisms, and post-hoc interpretability methods should provide more insight into the reasoning behind AI outputs. This enables effective audits and better compliance.
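As one way to picture the human-in-the-loop idea, the sketch below routes AI recommendations based on confidence and reversibility. The thresholds, action names, and the notion of a model-reported confidence score are assumptions for illustration, not a prescribed design.

```python
# A minimal human-in-the-loop routing sketch. Thresholds, action names,
# and the confidence field are illustrative assumptions, not part of
# any specific SecOps product.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # e.g., "isolate_host"
    target: str        # e.g., "ws-042"
    confidence: float  # model-reported score, 0.0 to 1.0
    reversible: bool   # can the action be undone easily?

def route(rec: Recommendation) -> str:
    """Only low-risk, high-confidence, reversible actions run unattended."""
    if rec.reversible and rec.confidence >= 0.90:
        return f"AUTO: {rec.action} on {rec.target}"
    return f"REVIEW: queue {rec.action} on {rec.target} for an analyst"

print(route(Recommendation("quarantine_file", "ws-042", 0.96, True)))
print(route(Recommendation("isolate_host", "db-prod-01", 0.88, False)))
```

In practice the review branch would feed a ticketing queue, and the thresholds would be tuned per action type rather than hard-coded.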
Creating the right environment for generative AI
Early generative AI tools placed in SecOps environments will be effective only under certain circumstances; success depends largely on the orderliness and predictability of an organization’s systems and processes. An orderly environment enables generative AI to act more intelligently.
For example, if an organization has established a system-of-record database that can distinguish between an executive’s PC and an entry-level employee’s, AI can make more informed (and more accurate) decisions to mitigate risk.
Less orderly or inconsistent environments will pose challenges for generative AI because it relies on inferring correct behavior; without clear parameters, the outcomes may be hit or miss. For companies willing to undertake the project, generative AI can help human teams inject more orderliness and consistency into the environment and make it more manageable. AI assistants can knit together and maintain an accurate picture of the environment from network logs, SNMP data, configuration databases, and other systems. Further, they can ask intelligent questions to gather more information and make suggestions to human monitors.
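Here is a minimal sketch of that enrichment idea, with an in-memory dictionary standing in for a real system-of-record database; the field names and the priority rule are hypothetical.

```python
# A minimal sketch of enriching an alert with system-of-record context
# before any AI reasoning happens. The in-memory CMDB dict stands in
# for a real configuration database; all field names are hypothetical.

CMDB = {
    "ws-101": {"owner_role": "executive", "department": "finance"},
    "ws-202": {"owner_role": "entry-level", "department": "support"},
}

def enrich(alert: dict) -> dict:
    """Attach asset context so downstream decisions aren't made blind."""
    asset = CMDB.get(alert["host"], {"owner_role": "unknown", "department": "unknown"})
    return {**alert, **asset}

def priority(alert: dict) -> str:
    # An executive workstation warrants faster, more conservative handling.
    return "P1" if alert["owner_role"] == "executive" else "P3"

a = enrich({"host": "ws-101", "rule": "unusual outbound transfer"})
print(priority(a), a)
```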
Leverage generative AI with care
Generative AI does have tremendous potential to alleviate the burdens SecOps teams face, but most companies remain cautious. AI can build better, more responsive security, but only with heavy human involvement and oversight. While challenges like limited contextual awareness and data bias remain, cybersecurity professionals can prepare for the advent of generative AI through training and collaboration. The technology will continue to evolve, and it has the power to transform SecOps for good.