When Your Scanner Is the Spy: Lessons from an Agentic Supply Chain Attack

April 15, 2026

Vector Shift: Why Supply Chain Is the New Perimeter

Analysis of recent breaches shows that attack surfaces have expanded beyond an organization’s internal environment into its extended vendor ecosystem. The long-held assumption that controlling access equates to controlling risk no longer holds. When modern software delivery relies on interconnected tools and vendors across upstream and downstream workflows, monitoring the perimeter alone is insufficient.

As AI emerges as the next frontier of an already expanding attack surface, organizations must rethink how they approach supply chain security. Attack surfaces have evolved from static network boundaries to endpoints, then identity, and now into automated vendor pipelines and tools operating in AI-enabled environments. As reputational trust erodes, organizations must shift toward verifiable trust, focusing not just on vendors as entities but on agentic vectors as mechanisms of risk.

The Scanner That Became the Weapon

In March 2026, a widely deployed open-source vulnerability scanner in a DevSecOps ecosystem was used against the organizations that trusted it most. Threat actors exploited a misconfigured GitHub Actions workflow to steal a personal access token with write access to the scanner’s repository. The issue was identified, credentials were rotated and the incident was initially considered resolved.
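
The exact flaw matters less than the pattern: workflows triggered on pull_request_target that check out untrusted pull request code while repository secrets are in scope are a common way for a token like this to leak. The sketch below is an illustrative audit, not a reconstruction of the incident; it assumes workflows live under .github/workflows and requires PyYAML.

```python
"""Flag GitHub Actions workflows that combine the pull_request_target
trigger with a checkout of untrusted pull request code.

Illustrative sketch only: the incident's actual misconfiguration is not
disclosed, and the paths and heuristics here are assumptions.
Requires PyYAML (pip install pyyaml).
"""
from pathlib import Path

import yaml

WORKFLOW_DIR = Path(".github/workflows")  # assumed repo-relative location


def risky_workflows(workflow_dir: Path = WORKFLOW_DIR):
    findings = []
    for wf_path in sorted(workflow_dir.glob("*.y*ml")):
        doc = yaml.safe_load(wf_path.read_text()) or {}
        # PyYAML (YAML 1.1) parses the bare key `on:` as the boolean True,
        # so check both spellings when reading the trigger block.
        triggers = doc.get("on", doc.get(True, {}))
        names = triggers if isinstance(triggers, (str, list)) else list(triggers or {})
        if "pull_request_target" not in names:
            continue
        for job in (doc.get("jobs") or {}).values():
            for step in (job or {}).get("steps") or []:
                uses = str(step.get("uses", ""))
                ref = str((step.get("with") or {}).get("ref", ""))
                # Checking out the PR head while secrets are in scope lets
                # untrusted code read tokens such as a release PAT.
                if uses.startswith("actions/checkout") and "head" in ref:
                    findings.append((wf_path.name, ref))
    return findings


if __name__ == "__main__":
    for workflow, ref in risky_workflows():
        print(f"{workflow}: pull_request_target checks out untrusted ref {ref!r}")
```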

However, one critical credential was not rotated. Attackers retained residual access and weeks later repointed 76 of the scanner’s 77 release tags to malicious artifacts containing two silent Python infostealers. One targeted CI/CD runner memory and environment variables, while the other harvested SSH keys and cloud tokens from the local file system.
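
Tag repointing works because release tags are mutable references: any pipeline that consumes a tool "by tag" silently inherits whatever the tag points to today. Pinning to full, immutable commit SHAs removes that lever. The sketch below is a minimal illustration, assuming GitHub Actions workflows under .github/workflows, that flags uses: references not pinned to a 40-character SHA.

```python
"""Report `uses:` references in GitHub Actions workflows that are pinned to
mutable tags or branches instead of full commit SHAs.

Minimal sketch under the assumption that workflows live in
.github/workflows; a repointed tag silently changes what a tag-pinned
reference resolves to, while a full commit SHA does not.
"""
import re
from pathlib import Path

# action@ref, capturing the ref portion after the "@"
USES_RE = re.compile(r"^\s*-?\s*uses:\s*([^\s@]+)@(\S+)", re.MULTILINE)
FULL_SHA_RE = re.compile(r"^[0-9a-f]{40}$")


def unpinned_uses(workflow_dir: str = ".github/workflows"):
    findings = []
    for path in sorted(Path(workflow_dir).glob("*.y*ml")):
        for action, ref in USES_RE.findall(path.read_text()):
            if not FULL_SHA_RE.match(ref):
                findings.append((path.name, f"{action}@{ref}"))
    return findings


if __name__ == "__main__":
    for workflow, reference in unpinned_uses():
        print(f"{workflow}: {reference} is not pinned to an immutable commit SHA")
```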

Affected pipelines continued to run as expected while silently capturing sensitive data. The breach cascaded downstream as stolen credentials enabled additional compromise.

Four core factors contributed to the attack:

  1. Incomplete remediation as a reentry point
     The initial intrusion was identified, but credential rotation was incomplete, allowing attackers to retain access
  2. Implicit trust in open-source tooling
     The attack exploited trust that is rarely formalized or independently validated. In this case, the release pipeline itself became the attack vector, an area traditional vendor assessment processes do not typically evaluate
  3. Privileged access concentration in pipeline tooling
     CI/CD runners routinely hold cloud tokens, API keys and deployment secrets, making open-source tools attractive targets for credential harvesting at scale
  4. Silent execution as cover
     Malicious code executed before the scanner ran, allowing pipelines to complete normally without alerts. Detecting this activity requires runtime visibility that many organizations lack (see the sketch after this list)
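
On the last point, runtime visibility does not have to start with a commercial sensor. For pipeline steps implemented in Python, the standard library's audit hooks (PEP 578, Python 3.8+) can surface file opens, subprocess launches and outbound socket connections made before or alongside the tool you intended to run. The sketch below is illustrative; run_scanner is a placeholder for the real step, and a production pipeline would ship these events to a central log rather than stderr.

```python
"""Log file, process and network activity of a Python pipeline step using
the standard-library audit hook (PEP 578).

Minimal sketch of the kind of runtime visibility discussed above; the
wrapped entry point (run_scanner) is a placeholder.
"""
import sys

WATCHED = {"open", "socket.connect", "subprocess.Popen"}


def audit(event: str, args) -> None:
    # Audit hooks must never raise; keep the handler minimal.
    if event in WATCHED:
        print(f"[audit] {event}: {args!r}", file=sys.stderr)


sys.addaudithook(audit)


def run_scanner() -> None:
    # Placeholder for the real pipeline step; any file access, subprocess
    # launch or outbound connection it makes now shows up in the audit log.
    try:
        with open("requirements.txt", encoding="utf-8") as handle:
            handle.readline()
    except FileNotFoundError:
        pass


if __name__ == "__main__":
    run_scanner()
```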

Trust as an Exploit

This breach did not rely on a novel vulnerability or unprecedented exploit, but on trust. In open-source ecosystems, trust is built on reputation and widespread adoption. The scanner was used because it worked, was recommended and was embedded across pipelines. Formal evaluation of release pipelines, contributor access controls or incident response maturity rarely enters the decision process.

When tools of this stature are embedded in CI/CD pipelines with access to production credentials, the trust they carry has real consequences. This incident illustrates how implicit trust can become an attack vector.

Compliance frameworks are not designed to identify this type of risk. A SOC 2 report reflects controls around a vendor’s internal systems but does not assess release pipeline integrity or contributor account compromise. Vendor questionnaires capture stated practices at a single point in time and would not have surfaced the incomplete token rotation that enabled the second breach.

No annual review would have detected this issue. Only continuous, execution-level visibility into tool behavior could have exposed it. When pipelines operate as blind spots, trust can propagate risk.

The AI Blind Spot

The impact extended beyond the scanner. Downstream exposure reached an open-source AI gateway used by enterprises to route traffic across large language model providers, including OpenAI, Anthropic and Azure OpenAI. Malicious versions of the gateway appeared on PyPI within days of the compromise and remained available for approximately five hours before removal.
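
A five-hour window is long enough for automated builds to pull a poisoned release. Hash pinning narrows that exposure: pip's --require-hashes mode refuses any artifact whose digest does not match the lockfile. The sketch below illustrates the same idea with the standard library; the artifact name and digest are placeholders, not values from the incident.

```python
"""Verify a downloaded package artifact against a pinned SHA-256 digest
before it is installed or executed.

Illustrative sketch only: the artifact name and expected digest are
placeholders, and in practice the same effect comes from a hash-pinned
lockfile installed with `pip install --require-hashes -r requirements.txt`.
"""
import hashlib
from pathlib import Path

# Placeholder values: a real lockfile would carry one digest per artifact.
EXPECTED = {
    "example_gateway-1.2.3-py3-none-any.whl":
        "0000000000000000000000000000000000000000000000000000000000000000",
}


def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(path: Path) -> bool:
    expected = EXPECTED.get(path.name)
    return expected is not None and sha256_of(path) == expected


if __name__ == "__main__":
    artifact = Path("example_gateway-1.2.3-py3-none-any.whl")
    if not verify(artifact):
        raise SystemExit(f"Refusing to install {artifact.name}: digest mismatch or unpinned")
    print(f"{artifact.name}: digest matches pinned value")
```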

These are not peripheral tools. AI gateways often hold API keys for multiple providers, have visibility into data passed as model context and, in many deployments, log prompts and responses. Despite their privileged position, they frequently exist outside formal third-party risk inventories.

This represents the AI blind spot. LLM gateways, orchestration frameworks, vector databases, AI-powered developer tools and autonomous agents form a new infrastructure layer that did not exist a few years ago. These components often enter environments as dependencies rather than formally assessed vendors. They carry the same trust debt as other open-source tools, amplified by the influence they exert. A compromised AI gateway can enable data exposure, response manipulation, credential harvesting and, in agentic deployments, redirected autonomous actions.

The Need for AI-Era TPRM

Traditional third-party risk management (TPRM) was built for a stable vendor landscape of commercial software and managed services. It is a poor fit for an open-source AI gateway that ships multiple releases in a single week, is maintained by a distributed contributor community and carries no contractual relationship. This is not a procedural gap but a structural one. Adding questions to questionnaires or accelerating review cycles does not address the underlying issue.

Optiv Approach

We work with CISOs and risk leaders who understand that their vendor landscape has outgrown their assessment programs. Our practice is grounded in the recognition that third-party risk in the AI era is a visibility problem. Organizations do not have a shortage of frameworks; they have a shortage of honest answers about what is being executed in their environments and whether it can be trusted.

AI-era TPRM needs to address four domains:

  1. Inventory before assessment: You cannot assess what you have not catalogued. Every component participating in AI workflows, including gateways, orchestration libraries, embedding pipelines and agent frameworks, belongs in the third-party inventory (see the sketch after this list)
  2. Execution over documentation: A SOC 2 report reflects what a vendor claims its security practices look like at a point in time and does not show what a tool does inside your environment. In the AI era, the gap between those two things is where risk resides
  3. Continuous monitoring over point-in-time review: The malicious gateway packages were live for roughly five hours. Annual reviews are not designed to address a breach that opens and closes within a deployment window
  4. Remediation validation as governance: In open-source AI tooling, a vendor confirming a fix is not the same as a verified fix. In fast-moving AI ecosystems, assuming a fix works is itself a risk
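
As a starting point for the first domain, even a modest script can surface AI-ecosystem components that never passed through procurement. The sketch below enumerates installed Python distributions and flags names matching an organization-maintained watchlist; the name fragments shown are illustrative, and a real inventory would also cover containers, Node packages and hosted services.

```python
"""Enumerate installed Python distributions and flag likely AI-ecosystem
components for the third-party inventory.

Sketch of the first domain above; the watchlist of name fragments is a
placeholder an organization would maintain itself.
"""
from importlib.metadata import distributions

# Illustrative name fragments suggesting AI gateways, orchestration
# frameworks, vector stores or model SDKs.
AI_HINTS = ("openai", "anthropic", "langchain", "llama", "litellm",
            "chromadb", "pinecone", "transformers", "gateway")


def ai_inventory_candidates():
    candidates = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        if any(hint in name for hint in AI_HINTS):
            candidates.append((name, dist.version))
    return sorted(set(candidates))


if __name__ == "__main__":
    for name, version in ai_inventory_candidates():
        print(f"{name}=={version}  # add to third-party inventory if missing")
```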

Our approach to AI-era TPRM is built around capabilities that address the gaps in traditional TPRM this breach exposed. AI-specific risk assessments should go beyond compliance scores. There must be continuous monitoring of AI and DevSecOps ecosystems, and remediation validation that independently verifies closure rather than accepting a vendor's word for it. The organizations best positioned for what comes next are the ones with the clearest picture of what is executing in their environments. If this breach raised questions about your AI ecosystem visibility, that is a conversation worth having.

Our comprehensive line of third-party risk management solutions helps secure your organization against supply chain-driven vulnerabilities. Click here to learn more about what we do at Optiv, or reach out to our experts.

Rohitha Chowdary is a manager at Optiv specializing in AI security and governance, where she leads and delivers advisory engagements from India. With over a decade of experience across security management and governance, risk and compliance (GRC), she helps organizations design and implement robust controls, manage risk and enable secure adoption of emerging technologies in complex enterprise environments.
Srishti Ahuja is a senior consultant at Optiv specializing in cyber strategy, risk management, and AI security. With 8+ years of experience advising Fortune 500s, financial institutions, and private equity firms, she guides organizations through regulatory assessments, M&A due diligence, and the evolving risk landscape that AI introduces, translating technical complexity into strategies that executive stakeholders can operationalize.
Felix Koottakkara is a cybersecurity consultant with Optiv specializing in third-party risk management, helping organizations identify, assess and mitigate vendor-related security risks. With experience spanning audits, advisory and governance frameworks, he focuses on enabling secure, resilient partnerships in complex enterprise environments.

Optiv Security: Secure greatness.®

Optiv is the cyber advisory and solutions leader, delivering strategic and technical expertise to nearly 6,000 companies across every major industry. We partner with organizations to advise, deploy and operate complete cybersecurity programs from strategy and managed security services to risk, integration and technology solutions. With clients at the center of our unmatched ecosystem of people, products, partners and programs, we accelerate business progress like no other company can. At Optiv, we manage cyber risk so you can secure your full potential. For more information, visit www.optiv.com.