AI Supply Chain
- Definition
- The end-to-end chain of dependencies that an AI system relies on — from training data providers and model vendors to inference infrastructure, third-party plugins, and monitoring tools. AI supply chain risk is the emerging security discipline focused on vulnerabilities at every link in this chain.
- Why it matters
- Your AI is only as secure as its weakest dependency. A compromised fine-tuning dataset, a vulnerable MCP server, or a model vendor's API outage can cascade through your entire AI stack. Traditional software supply chain security (SBOMs, dependency scanning) does not cover AI-specific attack surfaces like data poisoning, model backdoors, or prompt injection through retrieved documents. AI supply chain attacks are the next frontier of cybersecurity threats, and most organizations have zero visibility into their AI dependency graph. If you cannot enumerate every component your AI system depends on — from training data provenance to inference provider SLAs — you have unmanaged risk.
- In practice
- The International Association of Privacy Professionals (IAPP) published AI supply chain risk frameworks covering data, model, and infrastructure layers. Wiz and Snyk are building AI dependency scanning tools that map model provenance and detect known vulnerabilities in AI components. The Australian Cyber Security Centre issued formal AI supply chain guidance in 2025, one of the first national cybersecurity agencies to do so. The SolarWinds attack model is being studied for AI-specific variants — researchers have demonstrated that poisoning as few as 0.01% of training examples can implant exploitable backdoors. Enterprise AI procurement teams are beginning to require supply chain documentation alongside traditional vendor security questionnaires.
We cover safety & governance every week.
Get the 5 AI stories that matter — free, every Friday.
Related terms
AI Bill of Materials (AIBOM)
A comprehensive inventory documenting every component of an AI system — training data sources, model architecture, fine-tuning datasets, third-party APIs, infrastructure dependencies, and known limitations. An AIBOM is the AI equivalent of a software bill of materials (SBOM), designed for auditability and regulatory compliance.
AI governance
The organizational frameworks, policies, and processes that govern how AI systems are developed, deployed, monitored, and retired within an enterprise. AI governance covers model risk management, bias auditing, access controls, and regulatory compliance.
Prompt injection
An attack where malicious text in a prompt tricks an AI model into ignoring its instructions or leaking sensitive data. Prompt injection is the top security concern for production AI applications.
Data poisoning
An attack that corrupts a model's training data to introduce backdoors, biases, or degraded performance. Data poisoning can be targeted (affecting specific outputs) or untargeted (generally degrading model quality).
Know the terms. Know the moves.
Get the 5 AI stories that matter every Friday — free.
Free forever. No spam.