KSI-AFR-PVA—Persistent Validation and Assessment
Formerly KSI-AFR-09
>Control Description
>FRMR Requirements (21)
Normative requirements from the FedRAMP Requirements and Recommendations document — 15 mandatory, 3 recommended, 3 optional.
Persistent Validation
Providers MUST persistently perform validation of their Key Security Indicators; this process is called persistent validation and is part of vulnerability detection.
Issues As Vulnerabilities
Providers MUST treat issues detected during persistent validation and failures of the persistent validation process as vulnerabilities, then follow the requirements and recommendations in the FedRAMP Vulnerability Detection and Response process for such findings.
Report Persistent Validation
Providers MUST include persistent validation activity in the reports on vulnerability detection and response activity required by the FedRAMP Vulnerability Detection and Response process.
Independent Verification and Validation
Providers MUST have the implementation of their goals and validation processes assessed by a FedRAMP-recognized independent assessor OR by FedRAMP directly AND MUST include the results of this assessment in their authorization data without modification.
The option for assessment by FedRAMP directly is limited to cloud services that are explicitly prioritized by FedRAMP, in consultation with the FedRAMP Board and the federal Chief Information Officers Council. During 20x Phase Two this includes AI services that meet certain criteria as shown at https://fedramp.gov/ai.
FedRAMP-recognized assessors are listed on the FedRAMP Marketplace.
Non-Machine Validation
Providers MUST complete the validation processes for Key Security Indicators of non-machine-based information resources at least once every 3 months.
Persistent Machine Validation
Providers MUST complete the validation processes for Key Security Indicators of machine-based information resources at least once every 3 days.
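To make these cadences checkable, one approach is to compare the age of each resource's most recent successful validation against its required window. The sketch below assumes a hypothetical internal evidence format (a `kind` field of "machine" or "non-machine" plus a timezone-aware `last_validated` timestamp); it is illustrative, not a FedRAMP-prescribed check.

```python
from datetime import datetime, timedelta, timezone

# Required validation windows from the requirements above: machine-based
# resources every 3 days, non-machine-based every 3 months (approximated
# here as 92 days).
MAX_AGE = {
    "machine": timedelta(days=3),
    "non-machine": timedelta(days=92),
}

def overdue_validations(evidence_records):
    """Return records whose last successful validation is older than its window.

    `evidence_records` is a hypothetical internal format: an iterable of dicts
    such as {"ksi": "KSI-AFR-PVA", "resource": "Admin group membership",
    "kind": "machine", "last_validated": <timezone-aware datetime>}.
    """
    now = datetime.now(timezone.utc)
    overdue = []
    for record in evidence_records:
        age = now - record["last_validated"]
        if age > MAX_AGE[record["kind"]]:
            overdue.append({**record, "age_days": age.days})
    return overdue
```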
Underlying Processes
Assessors MUST verify and validate the underlying processes (both machine-based and non-machine-based) that providers use to validate Key Security Indicators; this should include at least:
- The effectiveness, completeness, and integrity of the automated processes that perform validation of the cloud service offering's security posture.
- The effectiveness, completeness, and integrity of the human processes that perform validation of the cloud service offering's security posture.
- The coverage of these processes within the cloud service offering, including whether all of the consolidated information resources listed are being validated.
Processes Derived from Key Security Indicators
Assessors MUST verify and validate the implementation of processes derived from Key Security Indicators to determine whether or not the provider has accurately documented their process and goals.
Outcome Consistency
Assessors MUST verify and validate whether or not the underlying processes are consistently creating the desired security outcome documented by the provider.
Mixed Methods Evaluation
Assessors MUST perform evaluation using a combination of quantitative and expert qualitative assessment as appropriate AND document which is applied to which aspect of the assessment.
Procedure Adherence
Assessors MUST assess whether or not procedures are consistently followed, including the processes in place to ensure this occurs, without relying solely on the existence of a procedure document for assessing if appropriate processes and procedures are in place.
Assessment Summary
Assessors MUST deliver a high-level summary of their assessment process and findings for each Key Security Indicator; this summary will be included in the authorization data for the cloud service offering.
Static Evidence
Assessors MUST NOT rely on screenshots, configuration dumps, or other static output as evidence EXCEPT when evaluating the accuracy and reliability of a process that generates such artifacts.
No Overall Recommendation
Assessors MUST NOT deliver an overall recommendation on whether or not the cloud service offering meets the requirements for FedRAMP authorization.
Implementation Summaries
Providers MUST maintain simple high-level summaries of at least the following for each Key Security Indicator:
- Goals for how it will be implemented and validated, including clear pass/fail criteria and traceability
- The consolidated _information resources_ that will be validated (this should include consolidated summaries such as "all employees with privileged access that are members of the Admin group")
- The machine-based processes for _validation_ and the _persistent_ cycle on which they will be performed (or an explanation of why this doesn't apply)
- The non-machine-based processes for _validation_ and the _persistent_ cycle on which they will be performed (or an explanation of why this doesn't apply)
- Current implementation status
- Any clarifications or responses to the assessment summary
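One way to keep these summaries consistent and machine-readable is to maintain each one as a structured record. The sketch below shows one possible shape; the field names and values are illustrative, not a FedRAMP-defined schema.

```python
# Hypothetical per-KSI implementation summary; field names and values are
# illustrative, not a FedRAMP-prescribed format.
PVA_SUMMARY = {
    "ksi": "KSI-AFR-PVA",
    "goals": {
        "implementation": "Privileged access is limited to the approved Admin group roster.",
        "validation": "Compare live Admin group membership against the approved roster.",
        "pass_fail": "PASS only if every member appears on the approved roster.",
        "traceability": ["FRMR KSI-AFR-PVA", "access control policy v3"],
    },
    "information_resources": [
        "all employees with privileged access that are members of the Admin group",
    ],
    "machine_validation": {
        "process": "automated IAM membership diff against the approved roster",
        "cycle": "at least every 3 days",
    },
    "non_machine_validation": {
        "process": "manual access review signed off by the security lead",
        "cycle": "at least every 3 months",
    },
    "implementation_status": "implemented",
    "assessment_responses": [],
}
```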
Provide Technical Evidence
Providers SHOULD provide technical explanations, demonstrations, and other relevant supporting information to all necessary assessors for the technical capabilities they employ to meet Key Security Indicators and to provide validation.
Provider Experts
Assessors SHOULD engage provider experts in discussion to understand the decisions made by the provider and inform expert qualitative assessment, and SHOULD perform independent research to test such information as part of the expert qualitative assessment process.
Application within MAS
Providers SHOULD apply ALL Key Security Indicators to ALL aspects of their cloud service offering that are within the FedRAMP Minimum Assessment Scope.
Optional Guidance (MAY)
Receiving Advice
Providers MAY ask for and accept advice from their assessor during assessment regarding techniques and procedures that will improve their security posture or the effectiveness, clarity, and accuracy of their validation and reporting procedures for Key Security Indicators, UNLESS doing so might compromise the objectivity and integrity of the assessment (see also PVA-TPX-AMA).
Sharing Advice
Assessors MAY share advice with providers they are assessing about techniques and procedures that will improve their security posture or the effectiveness, clarity, and accuracy of their validation and reporting procedures for Key Security Indicators, UNLESS doing so might compromise the objectivity and integrity of the assessment (see also PVA-CSX-RIA).
AFR Order of Criticality
Providers MAY use the following order of criticality for approaching Authorization by FedRAMP Key Security Indicators for an initial authorization package:
- Minimum Assessment Scope (MAS)
- Authorization Data Sharing (ADS)
- Using Cryptographic Modules (UCM)
- Vulnerability Detection and Response (VDR)
- Significant Change Notifications (SCN)
- Persistent Validation and Assessment (PVA)
- Secure Configuration Guide (RSC)
- Collaborative Continuous Monitoring (CCM)
- FedRAMP Security Inbox (FSI)
- Incident Communications Procedures (ICP)
>Trust Center Components (3)
Ways to express your implementation of this indicator — approaches vary by organization size, complexity, and data sensitivity.
From the field: Mature implementations express persistent validation through automated evidence pipelines — GRC platforms collecting machine-generated evidence continuously, compliance dashboards showing control status derived from live system state, and assessment results published as OSCAL artifacts. Per ADS-CSO-CBF, automation must ensure consistency between formats, making point-in-time manual assessments supplementary to continuous automated validation.
Continuous Assessment Dashboard
Dashboard expressing ongoing validation posture — automated compliance checks, evidence freshness, and assessment status as a living view
Automated Compliance Evidence
Machine-generated evidence demonstrating continuous compliance validation — the artifacts that feed the dashboard
Assessment Cadence and Methodology
How persistent validation is maintained between annual assessments — the cadence and methodology behind automated evidence collection
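As a hedged illustration of the automated evidence collection these components describe, a scheduled job might snapshot AWS Config compliance state (the same data the CLI queries below return) into a timestamped, machine-readable record that both the dashboard and assessor-facing exports read from. The sketch uses boto3; the credentials, existing Config rules, output file name, and record fields are assumptions, not FedRAMP-defined artifacts.

```python
import json
from datetime import datetime, timezone

import boto3  # assumes AWS credentials are available in the environment

def collect_config_evidence(output_path="pva-evidence.json"):
    """Snapshot AWS Config compliance state as a timestamped evidence record."""
    config = boto3.client("config")
    summary = config.get_compliance_summary_by_config_rule()["ComplianceSummary"]
    statuses = config.describe_config_rule_evaluation_status()["ConfigRulesEvaluationStatus"]

    record = {
        "collected_at": datetime.now(timezone.utc).isoformat(),
        # Despite the field names, these counts are numbers of Config rules.
        "compliant_rules": summary["CompliantResourceCount"]["CappedCount"],
        "noncompliant_rules": summary["NonCompliantResourceCount"]["CappedCount"],
        "rules": [
            {
                "name": s["ConfigRuleName"],
                "last_successful_evaluation": str(s.get("LastSuccessfulEvaluationTime")),
            }
            for s in statuses
        ],
    }
    with open(output_path, "w") as f:
        json.dump(record, f, indent=2)
    return record
```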
>Programmatic Queries
CLI Commands
aws configservice get-compliance-summary-by-config-rule --output table
aws configservice describe-config-rule-evaluation-status --query "ConfigRulesEvaluationStatus[].{Rule:ConfigRuleName,LastRun:LastSuccessfulEvaluationTime,Started:FirstEvaluationStarted}" --output table
>20x Assessment Focus Areas
Aligned with FedRAMP 20x Phase Two assessment methodology
Completeness & Coverage:
- Which KSIs or security controls are currently validated only through point-in-time assessment rather than persistent validation, and what is the plan to close those gaps?
- How do you ensure persistent validation covers all system components — including those managed by third parties or inherited from IaaS/PaaS providers?
- Are there security decisions or policies that are not yet subject to automated effectiveness measurement, and how are those exceptions tracked?
- When new controls or KSIs are added to the assessment scope, how quickly are they incorporated into persistent validation?
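One lightweight way to track the coverage gaps asked about above is to diff the KSIs in scope against an inventory of registered persistent checks. The sketch below assumes both inventories exist internally; the identifiers and structure are hypothetical.

```python
def persistent_validation_gaps(ksis_in_scope, registered_checks):
    """Return KSIs that have no registered persistent (automated) check.

    `ksis_in_scope` is an iterable of KSI identifiers; `registered_checks`
    maps KSI identifier -> list of check names. Both are hypothetical
    internal inventories.
    """
    return sorted(k for k in ksis_in_scope if not registered_checks.get(k))

# Example: KSI-AFR-SCN would be flagged as validated only point-in-time.
gaps = persistent_validation_gaps(
    ksis_in_scope={"KSI-AFR-PVA", "KSI-AFR-VDR", "KSI-AFR-SCN"},
    registered_checks={
        "KSI-AFR-PVA": ["config-rule-summary"],
        "KSI-AFR-VDR": ["scanner-export"],
    },
)
```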
Automation & Validation:
- What happens if a persistent validation check returns a false positive or false negative — how do you detect and correct inaccurate results?
- How do you validate that your validation tools themselves are working correctly (i.e., who watches the watchers)?
- What automated remediation is triggered when persistent validation identifies a control that is no longer effective?
- If a validation data source goes offline, how quickly is the gap detected and what interim measures apply?
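A simple answer to the "who watches the watchers" and offline-data-source questions above is a watchdog that flags validation sources whose most recent successful run is older than expected. The heartbeat mapping below is a hypothetical internal format.

```python
from datetime import datetime, timedelta, timezone

def stale_validation_sources(heartbeats, max_silence=timedelta(hours=6)):
    """Flag validation data sources that have stopped reporting.

    `heartbeats` is a hypothetical mapping of source name -> timezone-aware
    datetime of its last successful run, e.g.
    {"cspm-scan": <datetime>, "config-export": <datetime>}.
    """
    now = datetime.now(timezone.utc)
    return {
        source: now - last_run
        for source, last_run in heartbeats.items()
        if now - last_run > max_silence
    }
```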
Inventory & Integration:
- What tools compose your persistent validation stack (CSPM, CWPP, compliance-as-code, custom scripts), and how do they feed into a unified view?
- How does persistent validation data integrate with your ADS and CCM reporting to FedRAMP?
- Are there resources or environments (e.g., staging, DR sites) that are not covered by your validation tooling, and how do you account for them?
- How do validation results flow into your risk register or GRC platform for tracking and decision-making?
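Feeding a unified view usually means normalizing results from different tools into one record shape before they reach the dashboard or GRC platform. The sketch below illustrates that normalization step; the tool names and payload fields are hypothetical examples, and real integrations vary by vendor.

```python
def normalize_finding(tool, raw):
    """Map one tool-specific result into a common record for the unified view.

    The `tool` values and `raw` payload shapes are hypothetical examples of a
    CSPM export and a compliance-as-code run result.
    """
    if tool == "cspm":
        return {
            "source": "cspm",
            "ksi": raw["mapped_ksi"],
            "resource": raw["resource_arn"],
            "status": "pass" if raw["compliant"] else "fail",
            "observed_at": raw["scan_time"],
        }
    if tool == "compliance-as-code":
        return {
            "source": "compliance-as-code",
            "ksi": raw["ksi"],
            "resource": raw["target"],
            "status": raw["result"],
            "observed_at": raw["finished_at"],
        }
    raise ValueError(f"unknown tool: {tool}")
```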
Continuous Evidence & Schedules:
- How do you demonstrate that persistent validation is truly continuous rather than just running on a daily or weekly batch schedule?
- Is validation evidence available in machine-readable format via API, or does it require manual export and formatting for assessors?
- How do you detect when the gap between your reported security posture and actual control effectiveness is widening?
- What evidence shows the cadence and results of persistent validation activities over the past 90 days?
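Cadence over a trailing 90 days can be demonstrated from the validation run history itself, for example by reporting run counts and the longest gap per check. The sketch below assumes a simple list of timezone-aware run timestamps for a single check; the summary fields are illustrative.

```python
from datetime import datetime, timedelta, timezone

def cadence_report(run_timestamps, window_days=90):
    """Summarize validation cadence for one check over a trailing window.

    `run_timestamps` is a hypothetical list of timezone-aware datetimes, one
    per completed validation run of a single check.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    runs = sorted(t for t in run_timestamps if t >= cutoff)
    gaps = [later - earlier for earlier, later in zip(runs, runs[1:])]
    return {
        "runs_in_window": len(runs),
        "longest_gap_days": max(gaps).days if gaps else None,
        "last_run": runs[-1].isoformat() if runs else None,
    }
```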