SHARE
Facebook X Pinterest WhatsApp

The Key Components of a Comprehensive AI Security Standard

thumbnail
The Key Components of a Comprehensive AI Security Standard

An AI security standard founded on the principles and practices outline in this article address all key risk areas. Such a standard also enables practical implementation of AI security best practices.

Written By
thumbnail
Elad Schulman
Elad Schulman
Jan 23, 2026

It’s easy enough to recognize the need for an AI security standard – meaning a set of rules that can mitigate cybersecurity risks in AI-powered applications and services.

Where the challenge lies is in determining exactly what an AI security standard should include. Attempts to date to create something approaching a standard for AI security (such as the NIST AI Risk Management Framework and ISO/IEC 42001) are a start, but they have resulted in fragmented guidance that falls short of providing comprehensive coverage.

To do better, we need to develop an AI security standard that addresses all relevant risks, across all stages of the software development lifecycle. Here’s a look at what that entails.

The state of AI security standards

Before diving into a discussion of core elements of a truly effective AI security standard, it’s worth taking stock of existing AI security frameworks.

Again, some standards organizations, including NIST and ISO, have developed what they promote as AI security standards. Industry groups or companies, like Google and Microsoft, have also introduced some guidelines related to AI security or responsible AI. And there is one major example of an AI security regulatory framework, the E.U. AI Act, which includes some security controls for high-risk AI workloads.

These frameworks have value, but they are subject to some significant shortcomings. One is that most are voluntary standards, so there’s no obligation for organizations to follow them. The exception is the E.U. AI Act, but that is only mandatory for companies that are based in or operate in the E.U.

Existing attempts to create AI security standards also vary widely in how they define high-risk or mission-critical AI systems. This can lead to uncertainty about which types of controls businesses need to apply to which AI applications or services.

A third issue is that existing frameworks included limited, if any, specific security controls. Some include high-level requirements, like performing risk assessments for AI systems, but they don’t define in detail how to detect or mitigate risks.

In short, existing standards are too fragmented and inconsistent to deliver the protections and AI compliance risk management capabilities that businesses really need to keep their AI investments secure. Most also lack enforceability.

See also: Sovereign AI Explained: How and Why Nations Are Developing Domestic AI Capabilities

Advertisement

What to include in an AI security standard

Solving these shortcomings requires a new AI security standard – one that covers all relevant risks, and that includes specific, actionable practices.

Such a standard should include the following three overarching practice areas, with a number of specific controls defined within each area.

1.    AI model lifecycle management

The first key pillar of AI security is securing AI models. To that end, businesses should adopt practices that mitigate security risks across all stages of the model development lifecycle, including:

  • Model supply chain security: Libraries, tools, training data and other resources used to build models must originate from secure, trusted sources.
  • Development security: AI/ML developers must follow secure development practices as they write and document the code that powers AI models.
  • Testing: Businesses should use AI red-teaming and adversarial testing to validate models and identify security risks.
  • Run-time security: Continuous monitoring of models in production must be in place to detect risks that arise during run time.
  • Controlled updates and retirement: Predefined processes should be in place to govern model updates and sunsetting.
  • Data governance: The underlying data sources that models use for training and inference must be properly managed to ensure data integrity and prevent abuse.
Advertisement

2.    Access management

Controlling access to AI systems is another critical component of AI security. To do this, organizations should:

  • Identify and authorize users: Human and machine users should be subject to identity verification and authorization using context-based access control systems.
  • Least-privilege provisioning: When granting access to AI systems, businesses should follow the principle of least privilege and, where possible, use just-in-time provisioning.
  • Shadow AI mitigation: Processes must be in place for identifying “shadow” LLMs, AI language models that users deploy without the IT organization’s knowledge or approval.
  • Anomaly detection: Anomalous access requests should be detected in real time to help identify issues like credential theft or insider threats.
Advertisement

3.    Operational security

To secure AI systems during operation, it’s critical to deploy the following controls:

  • Continuous monitoring: Automated logging and analytics of interactions involving AI systems, human users, machine users, AI agents, models, tools and data sources should be in place to detect anomalous behavior.
  • Dynamic guardrails: Scalable controls like multi-layered content filtering and behavioral constraints can help prevent insecure access to AI resources.
  • Human-in-the-loop controls: Businesses should clearly specify when and how they will keep a human in the loop (HITL) to require manual validation of high-risk workflows.
  • Continuous testing: Continuous testing of AI systems during production helps to detect risks that teams missed during pre-deployment testing.
  • Incident response: AI-specific playbooks must define roles and responsibilities for reacting to security breaches that impact AI systems.
  • Governance oversight: Regular review cycles, including metrics-driven feedback loops, allow organizations to improve their AI security controls and policies on an ongoing basis.

An AI security standard founded on these principles and practices addresses all key risk areas. It also enables practical implementation of AI security best practices.

If enterprises are to move the needle toward stronger AI security, this is the type of framework they need to adopt.

thumbnail
Elad Schulman

Elad Schulman is the CEO and co-founder of Lasso Security. He is a seasoned tech entrepreneur, with experience in both enterprises and start-ups. He is also an investor in and mentor to early-stage startups. After selling his company Segasec to Mimecast in 2020, Elad acted as the VP of Brand Protection, focusing on protecting organizations from phishing attacks on their customers.

Recommended for you...

Real-time Analytics News for the Week Ending January 24
Beware the Distributed Monolith: Why Agentic AI Needs Event-Driven Architecture to Avoid a Repeat of the Microservices Disaster
Ali Pourshahid
Jan 24, 2026
Fastvertising: What It Is, Why It Matters, and How Generative AI Amplifies It
The Observability Gap AI Exposed
Tim Gasper
Jan 21, 2026

Featured Resources from Cloud Data Insights

The Manual Migration Trap: Why 70% of Data Warehouse Modernization Projects Exceed Budget or Fail
The Difficult Reality of Implementing Zero Trust Networking
Misbah Rehman
Jan 6, 2026
Cloud Evolution 2026: Strategic Imperatives for Chief Data Officers
Why Network Services Need Automation
RT Insights Logo

Analysis and market insights on real-time analytics including Big Data, the IoT, and cognitive computing. Business use cases and technologies are discussed.

Property of TechnologyAdvice. © 2026 TechnologyAdvice. All Rights Reserved

Advertiser Disclosure: Some of the products that appear on this site are from companies from which TechnologyAdvice receives compensation. This compensation may impact how and where products appear on this site including, for example, the order in which they appear. TechnologyAdvice does not include all companies or all types of products available in the marketplace.