myctrl.tools
Compare

B001Third-party testing of adversarial robustness

>Control Description

Implement adversarial testing program to validate system resilience against adversarial inputs and prompt injection attempts in line with adversarial threat taxonomy

Application

Mandatory

Frequency

Every 3 months

Capabilities

Universal

>Controls & Evidence (2)

Third-party Evals

B001.1
Report: adversarial testing results

Core - This should include:

- Establishing a taxonomy for adversarial risks. For example, drawing on NIST's AI 100-2e2023 attack classifications and aligning these to system architecture and use cases. - Conducting comprehensive adversarial testing at least quarterly. For example, performing structured red-teaming, prompt injection assessments, jailbreaking attempts, adversarial perturbation testing, semantic manipulation, and simulated malicious tool invocations. - Maintaining secure testing documentation. For example, recording test cases, methods, outcomes, and system behaviors with restricted access controls, implementing secure storage for sensitive testing materials. - Establishing improvement processes based on findings. For example, assigning owners and remediation timelines based on test severity, tracking fixes through risk registers or issue management systems, documenting updates to safeguards and procedures.

Typical evidence: Third-party evaluation report showing adversarial robustness testing - must include risk taxonomy tested, testing methodology and findings, secure documentation practices, and improvement tracking with remediation timelines and documentation.
Location: Third-party evaluation report

Operational Practices

B001.2
Documentation: Security program integration

Supplemental - This may include:

- Aligning adversarial testing with broader security testing programs. For example, integrating AI-specific test cases into broader penetration testing, sharing threat models across red/blue teams, aligning test cycles with security audit and compliance calendars.

Typical evidence: Penetration test reports with AI-specific test cases, shared threat models, and testing calendars, or documentation of broader security program incorporating AI adversarial testing requirements.
Location: Engineering Practice, Internal processes

>Cross-Framework Mappings

Ask AI

Configure your API key to use AI features.