Secure AI: A Governance-First Approach for Risk-Free Adoption

December 09, 2025

AI adoption continues to accelerate across organizations, promising efficiency and innovation. Organizations are moving from trial projects to using AI as a strategic tool. Yet while AI capabilities and use cases expand rapidly, the information security controls and capabilities surrounding them remain limited. And as AI capabilities, solutions and tools multiply, secure AI concerns grow at an ever-increasing pace. That is the paradox: adoption is outrunning the controls meant to protect it.

 

Secure AI isn’t just about deploying the latest technology; it’s about establishing robust, focused information security governance programs built on a culture of accountability for how AI is used.

 

One focus area is the new attack surfaces and exposures AI platforms introduce, much as early cloud deployments surfaced new application security (AppSec) threats. AI solution models (training pipelines, application models, agentic models, etc.) create easy-to-overlook opportunities for misconfigurations, uncontrolled integrations, vulnerabilities and data management exposures. Add the third-party risk of AI vendors, some too small to maintain effective security controls, and the result is more complexity, ethical concerns, compliance challenges and risk, making trust and verification more critical today than ever.

 


The Real Risk: Uncontrolled AI Tools

Today’s organizations access and store data everywhere. Agentic AI tools (task agents) can index massive amounts of data automatically and in a short period of time. What was once buried is now easily surfaced, creating new challenges for identity, access management and ongoing compliance controls.

External AI tooling providers may lack robust security controls, increasing the need to continually monitor partners’ and vendors’ supply chain vulnerabilities and making third-party risk management processes essential.

 

 

Why We Must Go Beyond a Tools-Only Approach 

The marketplace is flooded with tools that solve narrowly defined problems from limited use case perspectives, leaving dangerous gaps in security. Too often, overemphasis on the tool of the day distracts from what matters most: proper governance, risk accountability and effective understanding and management of data requirements.

 

The continual expansion in the number and capabilities of business and technology tools continues to outstrip the ability to understand how each tool uses and manages data and delivers value. Too often, tools and features within tools are implemented without attention to the risk of what data the tool is accessing or using to fulfill its AI functions.

 

AI, information technology and security leaders must think beyond “tool-centric” strategies. A risk-based management focus should cover AI capabilities along with the deployment of effective solutions and integrated tools critical to secure AI. Layered defenses, governance controls, data management practices and risk management approaches enable effective AI solutioning and deployment. Vendor risk management must also evolve as the AI footprint expands; traditional manual programs and questionnaires will not keep pace with efficient, effective third-party AI risk management.

 

 

Building an AI Governance Focused Approach

AI governance is the foundation of a secure AI approach. This means:

 

  • Data Classification and Lifecycle Management: Labeling and lifecycle policies ensure sensitive data is handled appropriately. Employees must treat “highly confidential” data with the same urgency as “top secret.” (A classification-driven sketch follows this list.)
  • Risk Governance: Employees must understand that AI exposes risk and poor information security practices at the “speed of sound,” quickly revealing where inefficient risk management and governance activities fall short of the pace now required.
  • Ethics Governance: Ethical use of AI-based solutions (including tooling) heightens the critical nature of partner, vendor and supply chain contracts. Simple measures matter, such as effective contractual statements covering data usage, data locations and third-party use of client data within a vendor’s AI solutions; the list continues to expand. Strong governance can reduce risk and build trust. Embed AI ethics and vendor governance into procurement, communications and onboarding so third-party risk is managed from the start.
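
As a simple illustration of classification-driven handling, here is a minimal sketch in Python. The tier names, retention periods and allow/deny rules are assumptions for illustration only, not a prescribed schema; real policies should come from your data governance program.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical classification tiers; a real schema should follow your
# organization's data governance policy.
class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    HIGHLY_CONFIDENTIAL = 4

@dataclass(frozen=True)
class HandlingRule:
    allow_ai_training: bool   # may this data train or fine-tune models?
    allow_ai_prompts: bool    # may this data appear in prompts to AI tools?
    retention_days: int       # illustrative lifecycle retention period

# Illustrative policy table: the stricter the label, the fewer permitted AI uses.
POLICY = {
    Classification.PUBLIC: HandlingRule(True, True, 365),
    Classification.INTERNAL: HandlingRule(True, True, 365),
    Classification.CONFIDENTIAL: HandlingRule(False, True, 180),
    Classification.HIGHLY_CONFIDENTIAL: HandlingRule(False, False, 90),
}

def can_use_for_ai(label: Classification, purpose: str) -> bool:
    """Gate AI usage on the data's classification label."""
    rule = POLICY[label]
    if purpose == "training":
        return rule.allow_ai_training
    if purpose == "prompt":
        return rule.allow_ai_prompts
    return False  # deny by default for unknown purposes

if __name__ == "__main__":
    print(can_use_for_ai(Classification.HIGHLY_CONFIDENTIAL, "prompt"))  # False
    print(can_use_for_ai(Classification.INTERNAL, "training"))           # True
```

A deny-by-default table like this also makes lifecycle policy auditable: when a label or rule changes, the change is visible in one place rather than scattered across integrations.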

 

 

Controlling AI Tools Effectively

Controlling AI tools requires a blend of technical and procedural safeguards, such as:

 

  • Build AI into your SDLC: Establish proper policies, processes and security within the secure development lifecycle (SDLC) of AI tools as part of your application security program
  • Tuning AI Tools: Configure platforms to ignore or exclude sensitive data that lacks appropriate access management and technical and procedural safeguards
  • Adding Guardrails: Configure AI tools to enforce the organizational AI and information security guardrails defined in the appropriate control requirements (a minimal redaction sketch follows this list)
  • Testing and Validation: Manually penetration test in-house developed AI tools to validate the efficacy of hardening, training and tooling. Automated vulnerability and penetration testing based on well-designed, documented testing models can improve the speed and success of ongoing AI usage and development activities
  • Continuous Monitoring and Reporting: Watch for hallucinations, bias and drift to keep outputs accurate (a simple drift check also appears below). Borrow proven information security and technology practices like secure coding and continuous monitoring to help control AI tools effectively. For third-party risk, demand transparency on model training, data handling and compliance certifications.
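
To make the guardrail idea concrete, here is a minimal sketch, assuming a regex-based redaction step sits in front of any external AI call. The patterns and the model boundary are illustrative stand-ins; production guardrails would typically rely on a vetted DLP or data classification service rather than ad hoc regexes.

```python
import re

# Hypothetical sensitive-data patterns, for illustration only.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders and report what was found."""
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, findings

def guarded_prompt(prompt: str) -> str:
    """Apply guardrails before the prompt ever reaches an external AI tool."""
    safe_prompt, findings = redact(prompt)
    if findings:
        # Log for continuous monitoring and third-party risk reporting.
        print(f"guardrail: redacted {findings} before the external call")
    # In a real integration, safe_prompt would now be sent to the model.
    return safe_prompt

if __name__ == "__main__":
    print(guarded_prompt("Summarize: contact jane.doe@example.com, SSN 123-45-6789"))
```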
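
For continuous monitoring, a drift check can start as simply as comparing a rolling window of output scores against a validated baseline. The baseline and threshold values below are assumptions for illustration; real monitoring should track the metrics your validation process defines.

```python
import statistics

# Assumed values for illustration; derive real ones from validation runs.
BASELINE_MEAN = 0.72   # mean quality score observed during validation
THRESHOLD = 0.10       # alert when the rolling mean drifts beyond this

def drifted(recent_scores: list[float]) -> bool:
    """Return True when recent outputs drift away from the baseline."""
    if not recent_scores:
        return False
    return abs(statistics.mean(recent_scores) - BASELINE_MEAN) > THRESHOLD

if __name__ == "__main__":
    print(drifted([0.70, 0.74, 0.71]))  # False: within tolerance
    print(drifted([0.45, 0.50, 0.48]))  # True: flag for human review
```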

 

 

Optiv’s Advisory Framework for Secure AI

Optiv’s approach to secure AI is comprehensive, designed to meet both your organizational and budget needs. The pillars include:

  • Consulting: Build governance frameworks and classification schemas tailored to your organization
  • Define Effective Use Cases: Identify and provide unbiased use case recommendations
  • Discovery: Identify and classify sensitive data across all environments
  • Technical Controls: Assist clients in configuring AI tools for safe, effective adoption
  • Enablement: Train teams on responsible AI use and risk awareness
  • ROI Focus: Maximize existing investments without adding unnecessary complexity

 

Success isn’t just about avoiding breaches; it’s about enabling safe, resilient AI adoption. Beyond “hard ROI” like reduced regulatory fines and breach costs, Optiv’s approach delivers “soft ROI” benefits such as operational resilience, trust and a culture of responsible AI use.

 

 

Secure AI Tools

Ready to take a comprehensive approach to secure AI? Explore these video resources:

 

 

Discover more about Optiv’s secure AI advisory offerings here.

Kelvin Walker
Principal Security Advisor | Optiv
Kelvin Walker is a principal security advisor for Optiv’s strategy and risk management practice. Kelvin has over 25 years’ experience leading teams in the delivery of strategy, technology and information risk management. He advises and consults with clients in several information security and technology areas including artificial intelligence (AI), risk management, compliance activities and control definition requirements, offering expertise and insights reinforced by a strong depth and breadth of cybersecurity strategies across a wide array of information systems and platforms.

Jeff Carey
Principal Security Advisor | Optiv
Jeff Carey is a principal security advisor for Optiv’s threat management and application security practice. He has 15+ years’ experience in cybersecurity and information technology with a strong focus on creative solutioning to meet client objectives and evolving maturity. Jeff primarily supports and advises clients on key areas of their security program, such as adversarial simulations (red/purple teaming), SDLC programs, vulnerability management, and artificial intelligence (AI) application testing and threat modeling.