
KSI-SVC-EIS: Evaluating and Improving Security

Impact levels: LOW, MODERATE

Formerly KSI-SVC-01

>Control Description

Implement improvements based on persistent evaluation of information resources for opportunities to improve security.
Defined terms: Information Resource, Persistently

>NIST 800-53 Controls

>Trust Center Components (4)

Ways to express your implementation of this indicator — approaches vary by organization size, complexity, and data sensitivity.

From the field: Mature implementations express network isolation through policy-enforced segmentation — zero trust architecture with identity-aware access, micro-segmentation rules verified by firewall APIs, and network security monitoring dashboards showing east-west traffic patterns. Defense-in-depth is demonstrated through multiple automated isolation layers.
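
A hedged sketch of the kind of query behind such east-west traffic monitoring, assuming an AWS deployment with VPC Flow Logs enabled; the VPC ID is a placeholder:

# Confirm flow logs (the data source for east-west traffic dashboards)
# are active on the VPC under review.
aws ec2 describe-flow-logs \
  --filter Name=resource-id,Values=vpc-0123456789abcdef0 \
  --query 'FlowLogs[].{Id:FlowLogId,Status:FlowLogStatus,Dest:LogDestination}'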

Network Security Architecture (Architecture & Diagrams)

Architecture expressing network segmentation, firewall rules, and security zones; shows defense-in-depth through isolation layers.

Network Security Monitoring (Dashboards)

Dashboard expressing network security posture: IDS/IPS alerts, traffic anomalies, and segmentation enforcement status.

Automated: Firewall APIs verify that segmentation rules match documented policy (see the sketch below).
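
A minimal sketch of such a firewall-API check, assuming AWS security groups carry the segmentation rules; the open-to-the-world CIDR filter is just one illustrative policy test:

# Flag security groups that permit ingress from anywhere, a common
# divergence between deployed rules and documented policy.
aws ec2 describe-security-groups \
  --filters Name=ip-permission.cidr,Values=0.0.0.0/0 \
  --query 'SecurityGroups[].{Id:GroupId,Name:GroupName}' \
  --output table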

Network Segmentation Enforcement (Product Security Features)

Automated enforcement of network segmentation policies: micro-segmentation rules preventing unauthorized lateral movement.

Automated: Network policy engines verify segmentation rules are enforced (see the sketch below).
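
A minimal sketch of such a policy-engine check, assuming Kubernetes NetworkPolicy provides the micro-segmentation:

# Namespaces with no NetworkPolicy leave pods open to any source,
# so lateral movement there is unconstrained.
kubectl get namespaces -o name | while read -r ns; do
  n="${ns#namespace/}"
  if [ "$(kubectl get networkpolicy -n "$n" --no-headers 2>/dev/null | wc -l)" -eq 0 ]; then
    echo "no NetworkPolicy: $n"
  fi
done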

Zero Trust Architecture Documentation (Documents & Reports)

Zero trust implementation documentation covering micro-segmentation and identity-aware access.

>Programmatic Queries (Beta)

CLI Commands (Security)

Test for open source vulnerabilities
snyk test --all-projects --severity-threshold=medium
Test container images
snyk container test <image>:<tag> --severity-threshold=high
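
These are point-in-time tests; for the persistent evaluation this indicator calls for, one hedged complement (the output filename is illustrative):

Monitor projects continuously so newly disclosed vulnerabilities trigger re-evaluation
snyk monitor --all-projects
Capture machine-readable evidence of a test run
snyk test --all-projects --json > snyk-results-$(date +%F).json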

>20x Assessment Focus Areas

Aligned with FedRAMP 20x Phase Two assessment methodology

Completeness & Coverage:

  • Does your security evaluation and improvement process cover all information resource categories (infrastructure, applications, data stores, identity systems, and network components)? See the coverage sketch after this list.
  • How do you ensure improvement evaluations consider all security dimensions — hardening, patching, architecture, access controls, encryption, and monitoring?
  • Are improvements prioritized using risk-based criteria that consider both likelihood and impact, not just severity scores?
  • How do you identify improvement opportunities proactively (benchmarking, threat intelligence, industry best practices) rather than only in response to findings?
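
On the coverage question above, a minimal sketch assuming AWS Config is the resource inventory:

# Enumerate discovered resource types so categories with no
# evaluation coverage stand out.
aws configservice get-discovered-resource-counts \
  --query 'resourceCounts[].{Type:resourceType,Count:count}' --output table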

Automation & Validation:

  • What automated evaluation tools (CSPM, CWPP, benchmark scanners) continuously identify security improvement opportunities?
  • How do you validate that implemented improvements actually improved security posture, whether through before/after measurement, re-scanning, or testing? A sketch follows this list.
  • What happens when an improvement opportunity is identified but deprioritized — how do you track the accepted risk and re-evaluate periodically?
  • How do you detect regression — previously implemented improvements that are undone by subsequent changes?
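
On before/after validation and regression detection, a minimal sketch assuming AWS Config rules encode the desired posture:

# Snapshot currently failing rules; re-run after an improvement ships.
# A rule that returns to this list later signals regression.
aws configservice describe-compliance-by-config-rule \
  --compliance-types NON_COMPLIANT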

Inventory & Integration:

  • What tools and processes compose your continuous security evaluation pipeline?
  • How do security improvement findings integrate with your backlog management and sprint planning to ensure they are resourced?
  • Are security improvement opportunities tracked in the same system as vulnerability findings and compliance gaps, or in a separate process?
  • How do evaluation results from different tools (CSPM, vulnerability scanners, penetration tests, audits) aggregate into a unified improvement roadmap? A sketch follows this list.
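
On aggregation, a minimal sketch assuming AWS Security Hub collects findings from the integrated tools:

# Active findings from all integrated scanners land in one queue,
# which can feed a single improvement backlog.
aws securityhub get-findings \
  --filters '{"RecordState":[{"Value":"ACTIVE","Comparison":"EQUALS"}]}' \
  --max-items 20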

Continuous Evidence & Schedules:

  • How do you demonstrate that security evaluation and improvement is persistent rather than episodic?
  • Is security posture trending data (improvement counts, risk score changes, benchmark conformance) available via API or dashboard? A sketch follows this list.
  • What evidence shows that security improvements implemented over the past year have measurably improved your posture?
  • How do you prove that the evaluation cadence is maintained and that identified improvements are implemented within defined timelines?
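
On trend evidence, a minimal sketch assuming AWS Config; the filename and daily cadence are illustrative:

# Capture a dated compliance summary; scheduled daily (e.g., via cron),
# the accumulated snapshots show whether posture is trending up.
aws configservice get-compliance-summary-by-config-rule \
  > "compliance-summary-$(date +%F).json"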

>Update History

2026-02-04: Removed italics and changed the ID as part of new standardization in v0.9.0-beta; no material changes.
