
LLM03: Supply Chain

>Control Description

LLM supply chains are susceptible to various vulnerabilities affecting the integrity of training data, models, and deployment platforms. These risks can result in biased outputs, security breaches, or system failures. The rise of open-access LLMs and of fine-tuning methods such as LoRA and PEFT introduces new supply-chain risks, while on-device LLMs increase the attack surface.

>Vulnerability Types

  1. Traditional Third-party Package Vulnerabilities: Outdated or deprecated components that attackers can exploit
  2. Licensing Risks: Diverse software and dataset licenses creating legal and usage risks
  3. Outdated or Deprecated Models: Using unmaintained models with security issues
  4. Vulnerable Pre-Trained Models: Models containing hidden biases, backdoors, or malicious features
  5. Weak Model Provenance: No strong guarantees on the origin of published models
  6. Vulnerable LoRA Adapters: Malicious adapters that compromise base model integrity
  7. Collaborative Development Exploits: Vulnerabilities introduced through shared model environments
  8. On-Device LLM Vulnerabilities: Compromised manufacturing or firmware exploitation

>Common Impacts

  • Compromised model security and integrity
  • Biased or malicious outputs
  • System failures and security breaches
  • Intellectual property theft
  • Legal and compliance issues

>Prevention & Mitigation Strategies

  1. Carefully vet data sources and suppliers, including their terms and conditions and privacy policies
  2. Apply vulnerability scanning, management, and patching for all components
  3. Apply comprehensive AI Red Teaming and Evaluations when selecting third-party models
  4. Maintain an up-to-date inventory using a Software Bill of Materials (SBOM)
  5. Create an inventory of all license types and conduct regular audits
  6. Only use models from verifiable sources with integrity checks and signing
  7. Implement strict monitoring and auditing for collaborative model development
  8. Use anomaly detection and adversarial robustness tests on supplied models
  9. Implement a patching policy for vulnerable or outdated components
  10. Encrypt models deployed at the edge, with integrity checks
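As one illustration of strategies 4 and 6 above, here is a minimal sketch of verifying a downloaded model artifact against an SBOM-style inventory of trusted SHA-256 digests before loading it. The manifest contents and filename are hypothetical; in practice the digests would come from a signed SBOM or model card.

```python
import hashlib
from pathlib import Path

# Hypothetical inventory: artifact filename -> expected SHA-256 digest.
# In practice these digests would come from a signed SBOM or model card.
TRUSTED_HASHES = {
    "model.safetensors": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_artifact(path: Path, trusted: dict) -> bool:
    """Return True only when the file's SHA-256 digest matches the inventory."""
    expected = trusted.get(path.name)
    if expected is None:
        return False  # unknown artifact: fail closed
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected
```

Failing closed on unknown artifacts means a renamed or newly introduced file is rejected until its digest is added to the inventory.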

>Attack Scenarios

#1: Vulnerable Python Library

An attacker exploits a vulnerable Python library to compromise an LLM app, similar to the PyPI package-registry attack that tricked developers into downloading compromised PyTorch dependencies.
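One line of defense against this scenario is auditing the installed environment against a list of known-compromised releases. The package name and version below are placeholders for illustration; in practice, tools such as pip-audit query the PyPI Advisory Database for this data.

```python
# Hypothetical list of known-compromised releases; real data would come
# from an advisory feed such as the PyPI Advisory Database.
KNOWN_BAD = {
    ("torchtriton", "3.0.0"),  # placeholder version for illustration
}

def find_compromised(installed):
    """Return the sorted (name, version) pairs that appear on the bad list."""
    return sorted(set(installed) & KNOWN_BAD)

def audit_environment():
    """Collect (name, version) for every installed distribution, then audit."""
    from importlib.metadata import distributions
    installed = {
        (d.metadata["Name"].lower(), d.version)
        for d in distributions()
        if d.metadata["Name"]
    }
    return find_compromised(installed)
```

Running such a check in CI before each deployment catches a compromised dependency before it ships, rather than after.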

#2: Direct Model Tampering

An attacker directly tampers with a model and publishes it to spread misinformation, as seen with PoisonGPT bypassing Hugging Face's safety features.

#3: Fine-tuning Attack

An attacker fine-tunes a popular open-access model to strip its key safety features while still scoring highly on safety benchmarks, then deploys it for victims to use.

#4: Compromised LoRA Adapter

A compromised third-party supplier provides a vulnerable LoRA adapter that is merged into an LLM, introducing hidden vulnerabilities.
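One way to keep a compromised adapter from reaching the merge step is to verify its bytes against a publisher-supplied tag first. Below is a minimal, dependency-free sketch using HMAC-SHA256 with an assumed shared key; a real pipeline would more likely use asymmetric signatures (e.g. Sigstore-based model signing), which avoid sharing a secret with the publisher.

```python
import hashlib
import hmac

def adapter_is_authentic(adapter_bytes: bytes, tag: str, key: bytes) -> bool:
    """Constant-time check of a publisher-supplied HMAC-SHA256 tag
    before the adapter is merged into the base model."""
    expected = hmac.new(key, adapter_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Any tampering with the adapter bytes, or use of the wrong key, makes the check fail, so the merge can be aborted before hidden vulnerabilities enter the base model.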

>MITRE ATLAS Mapping

>References
