Threat Modeling of AI Applications Is Mandatory

November 21, 2025

Gone are the days of throwing PASTA at the wall to see what sticks. MAESTRO has now emerged as the go-to AI threat modeling framework, leaving PASTA and STRIDE for “traditional” applications.

Understanding the Attack Surface in AI Applications 

The broad attack surface of an AI-enabled application demands an architectural understanding across multiple dimensions: the technologies in use, the data, the models and the security controls implemented beneath them.

The AI attack surface is vast, beginning with the data used for training, which in some cases includes the majority of publicly available information on the internet. Organizations must understand the risks of launching their own AI quickly and securely enough to empower the business and to keep employees from turning to unsanctioned workarounds, i.e., shadow AI, to expedite their work. Given the pressure for speed to market, the undocumented risks of AI technologies and the shortage of skilled resources, a threat model before production release becomes a vital part of an organization’s overarching application security program.

Initiating Threat Modeling: Stakeholder Engagement and Attack Surface Analysis 

The threat modeling process begins with reviewing documentation and interviewing the key stakeholders who develop and manage the infrastructure, application and integrations. This allows for the creation of, or improvement to, documents detailing the known trust zones, detection points, compensating controls, policies and key functionality.
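As a concrete illustration, the output of this discovery phase can be captured as a lightweight, structured profile. A minimal sketch follows; every field name and value is a hypothetical example, not a prescribed schema.

```python
# Hypothetical architecture profile produced by the discovery phase.
# All field names and values are illustrative assumptions.
architecture_profile = {
    "trust_zones": ["internet", "api_gateway", "agent_runtime", "model_backend", "data_stores"],
    "detection_points": ["gateway WAF logs", "LLM input/output filters", "agent action audit log"],
    "compensating_controls": ["egress allow-list", "human approval for high-risk tool calls"],
    "policies": ["acceptable use of AI", "data classification and handling"],
    "key_functionality": ["customer-facing support agent", "retrieval over internal documents"],
}
```

Keeping a profile like this in version control alongside the threat model makes it easy to revisit as the architecture evolves.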

Breaking down the attack surface into manageable chunks (pun intended) allows humans to understand the risks and controls, not unlike how large language models (LLMs) break down the problem of human language. To increase the identification of security threats and risks, enter MAESTRO, a threat modeling framework for agentic AI that provides the foundation for consistent analysis.

The MAESTRO Framework: A New Approach to AI Threat Modeling

MAESTRO, which stands for Multi-Agent Environment, Security, Threat, Risk, and Outcome, is a specialized threat modeling framework designed specifically for agentic AI systems. Traditional threat modeling methods often fall short because they don't adequately address the unique risks of AI, such as adversarial machine learning, prompt injection, backdoor attacks, and the complex, unpredictable behaviors that can emerge from agents interacting with each other. MAESTRO provides a structured, seven-layer approach (see below) that breaks down an application’s architecture, from the foundational models and data operations to its reasoning and communication layers. 

This layered methodology helps organizations systematically identify, assess and mitigate vulnerabilities across the entire AI lifecycle, and validate that security is a core component of the system's design rather than an afterthought.

[Image: The seven layers of the MAESTRO framework]

Source: https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro
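To make the layering concrete, the sketch below enumerates the seven layers as a simple lookup structure. The layer names follow the Cloud Security Alliance’s published framework; the example threats attached to each layer are illustrative picks, not an official mapping.

```python
# MAESTRO's seven layers as a simple lookup table.
# Layer names per the CSA framework; example threats are illustrative only.
MAESTRO_LAYERS = {
    1: ("Foundation Models", ["adversarial examples", "model extraction"]),
    2: ("Data Operations", ["data poisoning", "training data leakage"]),
    3: ("Agent Frameworks", ["prompt injection", "tool misuse"]),
    4: ("Deployment and Infrastructure", ["resource hijacking", "container escape"]),
    5: ("Evaluation and Observability", ["evaded detections", "log tampering"]),
    6: ("Security and Compliance", ["guardrail bypass", "policy misconfiguration"]),
    7: ("Agent Ecosystem", ["rogue agents", "agent-to-agent compromise"]),
}

for num, (name, threats) in MAESTRO_LAYERS.items():
    print(f"Layer {num}: {name} (e.g., {', '.join(threats)})")
```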

Each layer is evaluated as a separate chunk of the larger ecosystem, bringing a clear delineation of functions and concerns to the table for risk-based analysis. Following the stakeholder interviews and architecture profiling, an organization must analyze the layer-specific and cross-layer threats that may exist, so that the compounding of risk can be mitigated early and often via tailored mitigation strategies.

MAESTRO seeks to identify and document potential exposures such as inference, hallucination, bias, compromise of AI agents, model poisoning, resource hijacking and backdoor attacks, as well as how a threat at one layer can lead to further compromise by exposing and chaining additional threat vectors in subsequent layers.
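A minimal sketch of how such chaining can be recorded follows, assuming a simple recursive structure. The Threat fields and the example chain (data poisoning enabling a model backdoor, which in turn enables agent compromise) are hypothetical illustrations, not part of the MAESTRO specification.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    layer: int                # MAESTRO layer (1-7) where the threat originates
    enables: list = field(default_factory=list)  # downstream threats it exposes

def chains(threat, path=()):
    """Enumerate every compromise chain reachable from a starting threat."""
    path = path + (f"L{threat.layer}:{threat.name}",)
    if not threat.enables:
        return [path]
    return [c for nxt in threat.enables for c in chains(nxt, path)]

# Example chain: poisoned training data (layer 2) plants a model backdoor
# (layer 1), which an attacker then uses to compromise an agent (layer 7).
hijack = Threat("agent compromise", layer=7)
backdoor = Threat("model backdoor", layer=1, enables=[hijack])
poisoning = Threat("data poisoning", layer=2, enables=[backdoor])

for chain in chains(poisoning):
    print(" -> ".join(chain))
```

Enumerating chains this way makes the compounding of risk across layers explicit, which is exactly what cross-layer analysis is meant to surface.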

Detections and mitigations are often challenging to articulate clearly via legacy application security methods, which reduces the value of standardized protections and drives the adoption of the customized threat mitigations surfaced by a MAESTRO-based threat model exercise. Some risks, such as cross-layer threats like supply chain attacks and data leakage, could have a substantial impact if not appropriately addressed. As an organization documents the threat model, including the likelihood and impact of each threat, it must prioritize its mitigations and develop an action plan. Driving meaningful change and securing the environment will take multiple teams, from data engineers and developers to IT and architecture, as well as buy-in from senior leadership.
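A common way to drive that prioritization is a simple likelihood-times-impact score. The sketch below assumes a 1-5 scale for both factors and uses invented example findings; mature programs may prefer richer scoring methods such as FAIR or DREAD.

```python
# Minimal risk-prioritization sketch: risk = likelihood x impact (1-5 scales).
# The findings below are invented examples, not real assessment output.
findings = [
    {"threat": "supply chain attack (cross-layer)", "likelihood": 3, "impact": 5},
    {"threat": "data leakage (cross-layer)",        "likelihood": 4, "impact": 4},
    {"threat": "prompt injection",                  "likelihood": 5, "impact": 3},
    {"threat": "resource hijacking",                "likelihood": 2, "impact": 3},
]

for f in findings:
    f["risk"] = f["likelihood"] * f["impact"]

# Highest-risk items go to the top of the mitigation action plan.
for f in sorted(findings, key=lambda f: f["risk"], reverse=True):
    print(f"{f['risk']:>2}  {f['threat']}")
```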

Aligning Threat Modeling with Business Objectives and Organizational Change

Don’t forget how you got here in the first place. Optiv typically sees organizations bringing AI to bear for revenue production or to provide cost savings for existing processes. An organization must weigh each risk identified during the MAESTRO threat modeling exercise against the desired outcomes of enhanced production and/or reduced costs.

All parties agree that AI will provide value. The responsibility falls on the threat modeling team to make all parties aware of the potential threats and risks that the new technologies and machine “intelligence” introduce. MAESTRO helps the business understand and compare these risks within the enterprise risk picture. The threat modeling exercise is one of the few proactive security measures that is educational for both business and technical stakeholders. It’s a communication technique and a dialog-promoting exercise that gives stakeholders a voice in IT and security processes and in deciding which risks are tolerable and which must be fixed before release.

MAESTRO is the new kid on the block, but it is necessary to enable discussion of agentic AI solutions, as their architecture differs from the traditional n-tier web application. This new architecture demands a new approach to threat modeling, new controls and tailored mitigation strategies. MAESTRO was built by leading experts in generative AI and cloud with the specific goal of threat modeling. It helps solve the challenges listed above and is an essential step in any AI-based application security program.

Learn more about securing AI applications in this video.

Kelvin Walker
Principal Security Advisor | Optiv
Kelvin Walker is a principal security advisor for Optiv’s strategy and risk management practice. Kelvin has over 25 years’ experience leading teams in the delivery of strategy, technology and information risk management. He advises and consults with clients in several information security and technology areas including artificial intelligence (AI), risk management, compliance activities and control definition requirements, offering expertise and insights reinforced by a strong depth and breadth of cybersecurity strategies across a wide array of information systems and platforms.
Brian Golumbeck
Director, Strategy and Risk Management | Optiv
Brian Golumbeck is a practice director within Optiv’s Risk Management and Transformation Advisory Services practice. He has a history of leading challenging projects and building dynamic, high-impact teams. Mr. Golumbeck’s 25+ years in information technology include 20+ years as an information security professional. Brian holds the Certified Information Systems Security Professional (CISSP), Certified in Risk and Information Systems Controls (CRISC), Certified Information Security Manager (CISM), Certificate of Cloud Security Knowledge (CCSK), EXIN/ITSMf ITIL Foundations and Lean Six Sigma Green Belt certifications.
