KSI-INR-AAR—Generating After Action Reports
Formerly KSI-INR-03
Control Description
NIST 800-53 Controls
Trust Center Components
Ways to express your implementation of this indicator — approaches vary by organization size, complexity, and data sensitivity.
From the field: Mature implementations express incident response capability through measurable metrics — KSI failure tracking with MTTR per indicator, IR exercise results with actual response times, and improvement implementation tracked as backlog items. Incident response becomes a continuously measured capability backed by SOAR platform data, not a static plan document.
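As a rough illustration of what "MTTR per indicator" can look like in practice, the sketch below computes mean time to remediate per KSI from a list of incident records. The field names (ksi, detected_at, resolved_at) and the data source are assumptions, not a prescribed schema; a real implementation would pull these records from the SOAR or ticketing platform.

```python
from collections import defaultdict
from datetime import datetime

def mttr_per_indicator(incidents):
    """Compute mean time to remediate (hours) per KSI indicator.

    `incidents` is assumed to be a list of dicts with `ksi`,
    `detected_at`, and `resolved_at` ISO-8601 timestamps -- a
    hypothetical export from a SOAR or ticketing platform.
    """
    durations = defaultdict(list)
    for inc in incidents:
        if not inc.get("resolved_at"):
            continue  # skip incidents that are still open
        detected = datetime.fromisoformat(inc["detected_at"])
        resolved = datetime.fromisoformat(inc["resolved_at"])
        durations[inc["ksi"]].append((resolved - detected).total_seconds() / 3600)
    return {ksi: sum(hours) / len(hours) for ksi, hours in durations.items()}

# Example: two resolved incidents attributed to KSI-INR-AAR
print(mttr_per_indicator([
    {"ksi": "KSI-INR-AAR", "detected_at": "2025-01-03T10:00:00",
     "resolved_at": "2025-01-03T16:30:00"},
    {"ksi": "KSI-INR-AAR", "detected_at": "2025-02-11T08:00:00",
     "resolved_at": "2025-02-12T08:00:00"},
]))
```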
Incident Response Plan Summary
IR plan summary expressing roles, escalation paths, and communication procedures — backed by SOAR platform workflows
Incident History and Lessons Learned
Anonymized incident summaries expressing response effectiveness and improvements — evidence of a learning organization
IR Tabletop Exercise Reports
IR exercise results with actual vs. expected response times and improvement findings
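For teams that publish exercise results in machine-readable form, the snippet below sketches one possible record shape for a single tabletop exercise, comparing actual against expected response times per phase. The field names are illustrative assumptions, not a required format.

```python
# Hypothetical record for one tabletop exercise result; field names are
# assumptions about how a team might structure this evidence.
tabletop_result = {
    "exercise_id": "TTX-2025-01",
    "scenario": "Credential compromise of a CI/CD service account",
    "phases": [
        {"phase": "detection",   "expected_minutes": 15,  "actual_minutes": 22},
        {"phase": "containment", "expected_minutes": 60,  "actual_minutes": 45},
        {"phase": "recovery",    "expected_minutes": 240, "actual_minutes": 300},
    ],
    "improvement_findings": [
        "Escalation path to the on-call security lead was unclear after hours",
        "Runbook for revoking service-account tokens was out of date",
    ],
}

# Flag phases that exceeded their expected response time
for p in tabletop_result["phases"]:
    if p["actual_minutes"] > p["expected_minutes"]:
        print(f"{p['phase']}: {p['actual_minutes']}m vs {p['expected_minutes']}m expected")
```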
Programmatic Queries
CLI Commands
gh issue list --label postmortem --state all --json number,title,state,createdAt --limit 20
gh issue view <number> --json title,body,labels,createdAt
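When the same issue data needs to feed a report or evidence pipeline rather than be read interactively, one option is to shell out to the query above and parse its JSON output. This is a sketch only; it assumes the gh CLI is installed and authenticated, and that after action reports are tracked as GitHub issues labeled postmortem.

```python
import json
import subprocess

def list_postmortems(limit=20):
    """Return postmortem issues as dicts by invoking the gh CLI.

    Assumes `gh` is installed and authenticated and that after action
    reports are tracked as issues labeled `postmortem`.
    """
    result = subprocess.run(
        ["gh", "issue", "list", "--label", "postmortem", "--state", "all",
         "--json", "number,title,state,createdAt", "--limit", str(limit)],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

for issue in list_postmortems():
    print(issue["number"], issue["state"], issue["title"])
```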
20x Assessment Focus Areas
Aligned with FedRAMP 20x Phase Two assessment methodology
Completeness & Coverage:
- Are after action reports generated for all incident types — security breaches, service outages, near-misses, and third-party incidents that affected your CSO? (See the coverage-check sketch after this list.)
- Do AARs cover all phases of incident handling — detection, containment, eradication, recovery, and communication — or only selected phases?
- How do you ensure lessons learned address root causes, not just symptoms, and that systemic issues across multiple incidents are identified?
- Are lessons learned from AARs shared with all relevant teams (engineering, security, operations, leadership), not just the incident response team?
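One way to answer the coverage question above with evidence rather than assertion is to diff the incident register against completed AARs. The sketch below assumes two hypothetical exports — a list of qualifying incidents and a list of AARs keyed by incident ID — and reports any incident that lacks a completed report; the field names are illustrative.

```python
def aar_coverage_gaps(incidents, aars):
    """Return qualifying incidents that lack a completed after action report.

    `incidents` and `aars` are assumed to be exports from an incident
    register and an AAR tracker; field names are illustrative.
    """
    completed = {a["incident_id"] for a in aars if a.get("status") == "completed"}
    return [i for i in incidents if i["id"] not in completed]

incidents = [
    {"id": "INC-101", "type": "service outage"},
    {"id": "INC-102", "type": "third-party incident"},
]
aars = [{"incident_id": "INC-101", "status": "completed"}]

for gap in aar_coverage_gaps(incidents, aars):
    print(f"Missing AAR for {gap['id']} ({gap['type']})")
```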
Automation & Validation:
- How do you track that remediation actions identified in AARs are actually implemented — not just documented — and within what timeframe? (See the overdue-remediation sketch after this list.)
- What automated tracking ensures AAR findings are assigned, prioritized, and closed with evidence of completion?
- How do you validate that implemented improvements actually prevent recurrence — do you test through simulations or track recurrence rates?
- What happens if an AAR remediation action is not completed by its deadline — what escalation triggers?
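As an example of the automated tracking these questions point at, the sketch below flags AAR remediation items that are past their due date so they can be escalated. The item fields and the escalation step are assumptions about how a team might model follow-up actions in its tracker, not a reference implementation.

```python
from datetime import date

def overdue_remediations(items, today=None):
    """Return open AAR remediation items whose due date has passed.

    `items` is assumed to come from the tracker that holds AAR follow-up
    actions (e.g. Jira or GitHub issues); field names are illustrative.
    """
    today = today or date.today()
    return [
        i for i in items
        if i["status"] != "closed" and date.fromisoformat(i["due"]) < today
    ]

items = [
    {"id": "AAR-12-1", "owner": "platform-team", "due": "2025-03-01", "status": "open"},
    {"id": "AAR-12-2", "owner": "secops", "due": "2025-06-30", "status": "closed"},
]

for item in overdue_remediations(items, today=date(2025, 4, 1)):
    # In practice this is where an escalation would fire: page the owner,
    # notify leadership, or raise the item's priority.
    print(f"ESCALATE: {item['id']} owned by {item['owner']} was due {item['due']}")
```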
Inventory & Integration:
- What platform manages AARs and their follow-up actions (Jira, ServiceNow, dedicated incident management tool)?
- How do AAR findings integrate with your risk register, vulnerability management, and change management processes?
- Are AAR templates standardized to ensure consistent coverage of root cause analysis, timeline, impact, and remediation actions? (See the template-check sketch after this list.)
- How do lessons learned from AARs feed into training content, runbooks, and detection rule updates?
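Standardized templates are easier to enforce when the required sections are checked mechanically. The sketch below assumes each AAR is stored as text (for example, the body of a postmortem issue) with a fixed set of section headings; both the heading names and the check itself are illustrative.

```python
REQUIRED_SECTIONS = [
    "Summary",
    "Timeline",
    "Impact",
    "Root Cause Analysis",
    "Remediation Actions",
    "Lessons Learned",
]

def missing_sections(aar_text):
    """Return required sections absent from an AAR body (case-insensitive)."""
    lowered = aar_text.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]

draft = """Summary
Timeline
Impact
Remediation Actions
"""
print(missing_sections(draft))  # ['Root Cause Analysis', 'Lessons Learned']
```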
Continuous Evidence & Schedules:
- How do you demonstrate that an AAR was completed for every qualifying incident over the past year?
- Is AAR data (findings, remediation status, closure evidence) accessible via API or structured export for assessor review? (See the export sketch after this list.)
- What evidence shows that lessons learned from past AARs have been persistently incorporated — not just documented but operationalized?
- How do you measure whether AAR-driven improvements are reducing incident frequency or severity over time?
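When assessors ask for AAR evidence in structured form, a simple export is often enough. The sketch below writes AAR records (findings, remediation status, closure evidence) to a JSON file; the record shape and file name are assumptions about one possible export, not a required format.

```python
import json

def export_aar_evidence(aars, path="aar-evidence.json"):
    """Write AAR records to a JSON file for assessor review.

    `aars` is assumed to be a list of dicts already assembled from the
    incident management platform; field names are illustrative.
    """
    with open(path, "w") as fh:
        json.dump({"after_action_reports": aars}, fh, indent=2, default=str)
    return path

export_aar_evidence([
    {
        "incident_id": "INC-101",
        "findings": ["Alert routing missed the on-call engineer"],
        "remediation_status": "closed",
        "closure_evidence": "PR updating the paging policy; verified in the next tabletop",
        "completed_at": "2025-03-14",
    },
])
```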
Update History