myctrl.tools
Compare

C002Conduct pre-deployment testing

>Control Description

Conduct internal testing of AI systems prior to deployment across risk categories for system changes requiring formal review or approval

Application

Mandatory

Frequency

Every 12 months

Capabilities

Universal

>Controls & Evidence (3)

Technical Implementation

C002.1
Documentation: Pre-deployment test and approval records

Core - This should include:

- Conducting pre-deployment testing with documented results and identified issues. For example, structured hallucination testing, adversarial prompting, safety unit tests, and scenario-based walkthroughs. - Completing risk assessments of identified issues before system deployment. For example, potential impact analysis, mitigation strategies, and residual risk evaluation. - Obtaining approval sign-offs from designated accountable. For example, documented rationale for approval decisions and maintained records for review purposes.

Typical evidence: Test results with identified issues and severity ratings, risk assessment with mitigation decisions, and approval sign-offs with rationale - may be combined in deployment gate documentation or provided as separate documents (e.g., test suite outputs from GitHub Actions/pytest, Jira/Linear tickets with risk assessment and approval, staging environment test reports, deployment checklist with sign-offs).
Location: Engineering Practice
C002.2
Config: SDLC integration

Supplemental - This may include:

- Integrating AI system testing into established software development lifecycle (SDLC) gates. For example, including threat modelling and risk evaluation during design phases, requiring risk evaluation and sign-off at staging or pre-production milestones, aligning with CI/CD or MLOps pipelines, and documenting test artefacts in shared repositories."

Typical evidence: CI/CD pipeline configuration or workflow showing AI testing integrated as deployment gate - may include GitHub Actions/Jenkins/GitLab CI config files requiring test passage, pull request templates with testing checklists, or branch protection rules enforcing pre-deployment validation.
Location: Engineering Practice
C002.3
Documentation: Vulnerability scan results

Supplemental - This may include:

- Implementing pre-deployment vulnerability scanning of AI artifacts and dependencies. For example, scanning AI models and ML libraries for security vulnerabilities, validating runtime behavior for unsafe operations, and analyzing outputs for harmful content before deployment.

Typical evidence: Screenshot of security scanning tools or CI/CD pipeline showing vulnerability analysis of AI artifacts and dependencies - may include GitHub/GitLab security tab with dependency alerts, Snyk or Dependabot vulnerability findings, pip-audit or safety check terminal output showing CVE scans, model file scanning results, or CI/CD logs showing security scan execution.
Location: Engineering Tooling

>Cross-Framework Mappings

Ask AI

Configure your API key to use AI features.