KSI-MLA-EVC: Evaluating Configurations
Formerly KSI-MLA-05
Control Description
NIST 800-53 Controls
Trust Center Components
Ways to express your implementation of this indicator — approaches vary by organization size, complexity, and data sensitivity.
From the field: Mature implementations express evidence collection through automated GRC pipelines — machine-generated evidence covering 80%+ of controls, coverage metrics tracked as dashboard indicators, and evidence freshness verified automatically. Per ADS-CSO-CBF, automation must ensure consistency between formats — evidence repositories should generate both human-readable and machine-readable outputs.
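One way to keep the two formats consistent is to generate both from the same evidence records. The sketch below assumes a local JSON evidence store; the file layout, field names, and the 30-day freshness threshold are illustrative choices, not requirements of ADS-CSO-CBF.

# Sketch: render machine- and human-readable views from the same evidence records,
# with an automated freshness check. Paths and thresholds are assumptions.
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

FRESHNESS_LIMIT = timedelta(days=30)   # assumed threshold; set per your evidence policy

def load_evidence(store: Path) -> list[dict]:
    # One JSON file per control; "collected_at" stored as ISO-8601 with offset,
    # e.g. "2025-06-01T12:00:00+00:00"
    return [json.loads(p.read_text()) for p in sorted(store.glob("*.json"))]

def render_outputs(records: list[dict], out_dir: Path) -> None:
    now = datetime.now(timezone.utc)
    for r in records:
        age = now - datetime.fromisoformat(r["collected_at"])
        r["fresh"] = age <= FRESHNESS_LIMIT            # automated freshness verification
    # Machine-readable view for GRC tooling and APIs
    (out_dir / "evidence.json").write_text(json.dumps(records, indent=2))
    # Human-readable view generated from the same records, so the two formats cannot diverge
    lines = ["Control | Collected | Fresh", "--- | --- | ---"]
    lines += [f"{r['control_id']} | {r['collected_at']} | {'yes' if r['fresh'] else 'NO'}"
              for r in records]
    (out_dir / "evidence.md").write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    records = load_evidence(Path("evidence_store"))
    Path("out").mkdir(exist_ok=True)
    render_outputs(records, Path("out"))
    fresh = sum(1 for r in records if r["fresh"])
    print(f"evidence freshness: {fresh}/{len(records)} controls within {FRESHNESS_LIMIT.days} days")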
Compliance Evidence Repository
Centralized evidence repository expressing compliance posture — automated collection with chain of custody and coverage metrics (see the sketch after these components)
Evidence Automation Documentation
How automated evidence collection works — workflows, coverage metrics, and integration architecture
Evidence Collection Procedures
How evidence is collected and preserved for compliance and incident investigations
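For the chain-of-custody and coverage metrics mentioned above, a minimal sketch follows. The hash-chain layout, field names, and the second control ID are assumptions for illustration; the point is that each record commits to the previous one, so tampering is detectable, and coverage is computable from the same records.

# Sketch: append-only chain of custody for evidence records plus a simple coverage metric.
import hashlib, json
from datetime import datetime, timezone

def add_record(chain: list[dict], control_id: str, artifact: bytes, source: str) -> dict:
    # Each record hashes the previous record, so any later alteration breaks the chain.
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    record = {
        "control_id": control_id,
        "source": source,                                    # e.g. "aws-config", "ci-pipeline"
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record

def coverage(chain: list[dict], required_controls: set[str]) -> float:
    # Share of required controls with at least one evidence record in the repository.
    covered = {r["control_id"] for r in chain} & required_controls
    return len(covered) / len(required_controls) if required_controls else 1.0

chain: list[dict] = []
add_record(chain, "KSI-MLA-EVC", b"exported AWS Config rule results", "aws-config")
# "OTHER-CONTROL" is a placeholder for the rest of your required control set
print(f"coverage: {coverage(chain, {'KSI-MLA-EVC', 'OTHER-CONTROL'}):.0%}")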
Programmatic Queries
CLI Commands
# List AWS Config rules and their evaluation state
aws configservice describe-config-rules --query "ConfigRules[].{Name:ConfigRuleName,State:ConfigRuleState}" --output table
# Summarize compliant vs. non-compliant resources by resource type
aws configservice get-compliance-summary-by-resource-type --output table
20x Assessment Focus Areas
Aligned with FedRAMP 20x Phase Two assessment methodology
Completeness & Coverage:
- Does configuration evaluation cover all machine-based resource types — VMs, containers, Kubernetes manifests, serverless functions, managed services, and IaC templates?
- Are both pre-deployment (static analysis of IaC) and post-deployment (runtime configuration assessment) evaluations in place? A combined sketch of both stages follows this list.
- How do you ensure evaluation covers security, compliance, and operational best practices — not just one dimension?
- When new IaC modules or cloud resource types are introduced, what process ensures evaluation rules are created before first deployment?
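The sketch below shows one way to exercise both stages in a single step: Checkov stands in for whichever IaC scanner you use for pre-deployment analysis, and the boto3 call mirrors the AWS Config CLI query shown earlier for post-deployment assessment. The directory path is a placeholder.

# Sketch: pre-deployment IaC scan plus post-deployment runtime assessment in one step.
import subprocess
import boto3

def pre_deployment_scan(iac_dir: str) -> bool:
    # Static analysis of IaC templates; checkov exits non-zero when checks fail.
    return subprocess.run(["checkov", "-d", iac_dir, "--quiet"]).returncode == 0

def post_deployment_summary() -> dict:
    # Runtime configuration assessment via AWS Config: non-compliant counts by resource type.
    resp = boto3.client("config").get_compliance_summary_by_resource_type()
    return {
        s["ResourceType"]: s["ComplianceSummary"]["NonCompliantResourceCount"]["CappedCount"]
        for s in resp["ComplianceSummariesByResourceType"]
    }

if __name__ == "__main__":
    ok = pre_deployment_scan("infrastructure/")       # placeholder IaC directory
    drifted = {k: v for k, v in post_deployment_summary().items() if v > 0}
    print(f"pre-deployment clean: {ok}; resource types with non-compliant resources: {drifted}")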
Automation & Validation:
- What automated tools evaluate IaC templates before deployment (Checkov, tfsec, Bridgecrew, Snyk IaC), and do they block deployment on failure?
- How do you detect configuration drift between deployed resources and their IaC definitions — is drift detection continuous or periodic? A drift-check sketch follows this list.
- What happens when a runtime configuration scan finds a misconfiguration — is it auto-remediated, quarantined, or only alerted?
- How do you validate that configuration evaluation rules themselves are correct and not missing real issues (false negatives)?
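A periodic drift check with a guarded remediation decision might look like the following. The -detailed-exitcode behavior is standard Terraform (exit code 2 means the plan found changes); the auto-remediation allowlist, drift classes, and directory name are assumptions for illustration.

# Sketch: scheduled drift detection for Terraform-managed resources with a simple
# auto-remediate vs. alert decision.
import subprocess

AUTO_REMEDIATE = {"tags", "logging"}   # assumed: drift classes considered safe to re-apply

def detect_drift(workdir: str) -> bool:
    # Exit code 2 from `terraform plan -detailed-exitcode` means drift/changes exist.
    proc = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-no-color"],
        cwd=workdir, capture_output=True, text=True,
    )
    if proc.returncode == 1:
        raise RuntimeError(f"terraform plan failed: {proc.stderr}")
    return proc.returncode == 2

def handle_drift(workdir: str, drift_class: str) -> str:
    # Auto-remediate only low-risk drift; everything else becomes an alert/ticket.
    if drift_class in AUTO_REMEDIATE:
        subprocess.run(["terraform", "apply", "-auto-approve"], cwd=workdir, check=True)
        return "remediated"
    return "alerted"   # hand off to the ticketing integration sketched further below

if __name__ == "__main__":
    if detect_drift("infrastructure/"):
        print(handle_drift("infrastructure/", drift_class="tags"))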
Inventory & Integration:
- What tools compose your configuration evaluation pipeline (IaC scanners, CSPM, CIS Benchmark tools), and how do findings aggregate?
- How does configuration evaluation integrate with your CI/CD pipeline to prevent insecure configurations from being deployed?
- Are configuration baselines and evaluation policies stored as code alongside infrastructure definitions?
- How do configuration evaluation findings integrate with your ticketing system to ensure remediation is tracked and assigned? A finding-to-ticket sketch follows this list.
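Aggregation and ticketing integration can be as simple as mapping every scanner's output onto one finding schema and creating one issue per finding. The schema, endpoint, and response shape below are hypothetical; substitute your tracker's actual API (Jira, ServiceNow, GitHub Issues, and similar tools all expose an equivalent create-issue call).

# Sketch: normalize findings from multiple evaluators and open one tracked ticket per finding.
import requests

TICKET_API = "https://tickets.example.internal/api/issues"   # hypothetical endpoint

def normalize(source: str, raw: dict) -> dict:
    # Map scanner-specific fields onto a common finding schema (illustrative mapping).
    return {
        "source": source,                      # e.g. "checkov", "aws-config", "cspm"
        "resource": raw.get("resource", "unknown"),
        "check_id": raw.get("check_id", raw.get("rule_name", "unknown")),
        "severity": raw.get("severity", "MEDIUM"),
        "description": raw.get("description", ""),
    }

def open_ticket(finding: dict) -> str:
    # Create one tracked, assignable remediation item per finding.
    resp = requests.post(TICKET_API, json={
        "title": f"[{finding['severity']}] {finding['check_id']} on {finding['resource']}",
        "body": finding["description"],
        "labels": ["configuration-evaluation", finding["source"]],
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["id"]                   # assumed response shape

findings = [normalize("checkov", {"resource": "aws_s3_bucket.logs",
                                  "check_id": "CKV_AWS_18",
                                  "severity": "HIGH",
                                  "description": "S3 access logging disabled"})]
for f in findings:
    print("opened ticket", open_ticket(f))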
Continuous Evidence & Schedules:
- How do you demonstrate that configuration evaluation runs persistently rather than only at deployment time?
- Is configuration compliance data (pass/fail rates, drift counts, remediation timelines) available via API or dashboard? A metrics-snapshot sketch follows this list.
- What evidence shows that configuration evaluation catches and prevents misconfigurations from reaching production?
- How do you measure configuration evaluation coverage — what percentage of resources are evaluated, and is coverage improving over time?
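A scheduled job that snapshots compliance metrics on every run is one way to show persistent evaluation and a pass-rate trend. The sketch below appends timestamped records to a local JSONL file; in practice you would write to whatever store backs your dashboard or API. Field names are assumptions, and the AWS Config call is a related summary query (by Config rule rather than by resource type).

# Sketch: scheduled (e.g. hourly cron / EventBridge) snapshot of configuration compliance
# metrics, appended to a history file so trends are queryable over time.
import json
from datetime import datetime, timezone
import boto3

def snapshot_metrics(path: str = "config-compliance-metrics.jsonl") -> dict:
    config = boto3.client("config")
    # Note: the API reuses "ResourceCount" field names for counts of Config rules.
    summary = config.get_compliance_summary_by_config_rule()["ComplianceSummary"]
    compliant = summary["CompliantResourceCount"]["CappedCount"]
    noncompliant = summary["NonCompliantResourceCount"]["CappedCount"]
    total = compliant + noncompliant
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "compliant_rules": compliant,
        "noncompliant_rules": noncompliant,
        "pass_rate": round(compliant / total, 3) if total else None,
    }
    with open(path, "a") as f:                 # append-only history shows the trend over time
        f.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    print(snapshot_metrics())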