KSI-IAM-SUS: Responding to Suspicious Activity
Formerly KSI-IAM-06
Control Description
NIST 800-53 Controls
Trust Center Components
Ways to express your implementation of this indicator — approaches vary by organization size, complexity, and data sensitivity.
From the field: Mature implementations express separation of duties through automated enforcement — IAM platforms detecting SoD conflicts during role assignment, policy engines preventing incompatible role combinations, and conflict detection metrics tracked as dashboard indicators. Role conflicts are prevented by design through technical controls, not just policy.
Role Conflict Detection
Automated SoD conflict detection and enforcement — IAM platform prevents incompatible role assignments in real time
Separation of Duties Matrix
SoD matrix expressing incompatible roles and how conflicts are prevented — reference for automated enforcement rules
SoD Compliance Reports
SoD compliance reports showing violation status and remediation — generated from IAM platform enforcement data
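To make the SoD matrix and the automated enforcement it feeds more concrete, here is a minimal sketch of a role-assignment check that blocks incompatible combinations. The role names and the matrix contents are illustrative assumptions, not the roles of any particular IAM platform.

```python
# Minimal sketch of automated SoD enforcement at role-assignment time.
# Role names and the SoD matrix contents are illustrative assumptions.
from itertools import combinations

# Pairs of roles that must never be held by the same identity.
SOD_MATRIX = {
    frozenset({"payment-initiator", "payment-approver"}),
    frozenset({"deploy-approver", "production-deployer"}),
    frozenset({"iam-admin", "audit-log-admin"}),
}

def sod_conflicts(roles: set[str]) -> list[frozenset]:
    """Return every incompatible pair present in the given role set."""
    return [frozenset(pair) for pair in combinations(sorted(roles), 2)
            if frozenset(pair) in SOD_MATRIX]

def assign_role(user: str, current_roles: set[str], new_role: str) -> set[str]:
    """Reject the assignment if it would create an SoD conflict."""
    proposed = current_roles | {new_role}
    conflicts = sod_conflicts(proposed)
    if conflicts:
        raise PermissionError(
            f"SoD conflict for {user}: {[sorted(c) for c in conflicts]}")
    return proposed

# Example: this raises because the two payment roles are incompatible.
# assign_role("alice", {"payment-initiator"}, "payment-approver")
```

In a real deployment the matrix would live in the IAM platform itself, so that the compliance reports above are generated from the same data that enforces the rules.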
Programmatic Queries
CLI Commands
splunk search 'index=main sourcetype=access_combined action=failure | stats count by src_ip user | sort -count | head 20' -earliest -24h

splunk search 'index=main action=failure | stats count by user src_ip | where count > 10 | sort -count' -earliest -1h
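The queries above surface accounts and source IPs with bursts of failed access. As a hedged illustration of the automated account action this indicator looks for, the sketch below quarantines a flagged AWS IAM user by deactivating its access keys and attaching an inline deny-all policy; the flagged_users input and the open_incident_ticket helper are assumptions, and a different identity provider would use its own API.

```python
# Sketch: contain an AWS IAM user flagged by the failed-access queries above.
# Assumes boto3 credentials with IAM permissions; flagged_users and
# open_incident_ticket are illustrative stand-ins for your own pipeline.
import json
import boto3

iam = boto3.client("iam")

DENY_ALL = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}],
}

def open_incident_ticket(user: str, reason: str) -> None:
    # Stub: replace with your ticketing integration.
    print(f"INCIDENT: {user} contained ({reason})")

def contain_user(user: str, reason: str) -> None:
    """Deactivate access keys and block all actions pending investigation."""
    for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
        iam.update_access_key(UserName=user,
                              AccessKeyId=key["AccessKeyId"],
                              Status="Inactive")
    iam.put_user_policy(UserName=user,
                        PolicyName="suspicious-activity-quarantine",
                        PolicyDocument=json.dumps(DENY_ALL))
    open_incident_ticket(user, reason)

if __name__ == "__main__":
    flagged_users = ["svc-deploy"]  # e.g., output of the second query above
    for u in flagged_users:
        contain_user(u, reason="failed-access burst > 10 in 1h")
```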
20x Assessment Focus Areas
Aligned with FedRAMP 20x Phase Two assessment methodology
Completeness & Coverage:
- Does automated suspicious activity detection and response cover all privileged account types — cloud admin accounts, database admins, CI/CD pipeline accounts, and root/break-glass accounts?
- What suspicious activity indicators trigger automated account actions — impossible travel, unusual API calls, off-hours access, failed MFA attempts, privilege escalation patterns? (A rule-catalog sketch follows this list.)
- How do you ensure detection covers privileged activity across all systems, not just the primary identity provider?
- Are there privileged accounts excluded from automated suspension (e.g., break-glass accounts), and what compensating controls apply to those?
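One way to answer the coverage questions in this list is to keep detection rules and in-scope privileged account types as reviewable data and check the mapping automatically. The account types, rule names, and actions below are illustrative assumptions.

```python
# Sketch: a reviewable catalog of detection rules mapped to privileged
# account types, with a coverage check. Names and actions are illustrative.
PRIVILEGED_ACCOUNT_TYPES = {
    "cloud-admin", "database-admin", "ci-cd-pipeline", "break-glass",
}

DETECTION_RULES = [
    {"name": "impossible-travel", "accounts": {"cloud-admin", "database-admin"},
     "action": "suspend"},
    {"name": "off-hours-access", "accounts": {"cloud-admin", "ci-cd-pipeline"},
     "action": "require-reauth"},
    {"name": "failed-mfa-burst", "accounts": {"cloud-admin", "database-admin"},
     "action": "suspend"},
    # Break-glass accounts are excluded from auto-suspension by design;
    # the compensating control is alert-and-page rather than disable.
    {"name": "break-glass-use", "accounts": {"break-glass"},
     "action": "page-oncall"},
]

def uncovered_account_types() -> set[str]:
    """Account types with no detection rule at all (a coverage gap)."""
    covered = set().union(*(r["accounts"] for r in DETECTION_RULES))
    return PRIVILEGED_ACCOUNT_TYPES - covered

if __name__ == "__main__":
    gaps = uncovered_account_types()
    print("coverage gaps:", gaps or "none")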
Automation & Validation:
- What is the maximum time between detection of suspicious privileged activity and automatic account disablement or restriction?
- How do you prevent false positives from disrupting legitimate administrative work — what tuning and safeguards are in place?
- What happens if the automated response system itself is compromised or disabled by an attacker — what secondary detection exists?
- How do you test automated suspicious activity response — do you run simulated attacks or adversary emulation against privileged accounts? (A test-harness sketch follows this list.)
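For the testing question above, a lightweight option is to replay synthetic suspicious events through the same rule evaluation used in production and assert that a containment action is emitted within the required window. The event shape, the evaluate_events stand-in, and the five-minute SLA below are all assumptions for illustration.

```python
# Sketch: adversary-emulation style test of automated response.
# The event shape, evaluate_events(), and the 5-minute SLA are assumptions.
from datetime import datetime, timedelta, timezone

RESPONSE_SLA = timedelta(minutes=5)

def evaluate_events(events):
    """Stand-in for the production rule engine: flag >5 MFA failures."""
    failures = [e for e in events if e["type"] == "mfa_failure"]
    if len(failures) > 5:
        return [{"action": "suspend", "user": failures[0]["user"],
                 "at": datetime.now(timezone.utc)}]
    return []

def test_failed_mfa_burst_triggers_suspend_within_sla():
    start = datetime.now(timezone.utc)
    events = [{"type": "mfa_failure", "user": "admin-7", "at": start}
              for _ in range(8)]
    actions = evaluate_events(events)
    assert any(a["action"] == "suspend" for a in actions), "no containment emitted"
    assert all(a["at"] - start <= RESPONSE_SLA for a in actions), "SLA exceeded"

if __name__ == "__main__":
    test_failed_mfa_burst_triggers_suspend_within_sla()
    print("simulated failed-MFA burst: containment within SLA")
```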
Inventory & Integration:
- What behavioral analytics or UEBA platform detects suspicious privileged activity, and how does it integrate with your IdP to disable accounts? (An integration sketch follows this list.)
- How do automated account actions integrate with your incident response workflow to ensure human investigation follows automated containment?
- What tools monitor privileged session activity (session recording, command logging) to provide context for suspicious activity alerts?
- How does the account restoration process integrate with your ticketing system to ensure investigation is completed before access is restored?
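These integration questions usually come down to a small piece of glue: a handler that receives the UEBA alert, suspends the account in the IdP, and opens an incident so human investigation follows automated containment. The sketch below assumes Okta's user suspend endpoint and a hypothetical create_incident helper; substitute your own IdP and ticketing APIs.

```python
# Sketch: UEBA alert -> IdP suspension -> incident ticket.
# Assumes Okta's /lifecycle/suspend endpoint; OKTA_ORG, API_TOKEN, and
# create_incident() are illustrative placeholders.
import os
import requests

OKTA_ORG = os.environ.get("OKTA_ORG", "example.okta.com")
API_TOKEN = os.environ.get("OKTA_API_TOKEN", "")

def suspend_user(user_id: str) -> None:
    resp = requests.post(
        f"https://{OKTA_ORG}/api/v1/users/{user_id}/lifecycle/suspend",
        headers={"Authorization": f"SSWS {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

def create_incident(user_id: str, alert: dict) -> None:
    # Stub: replace with your ticketing system's API so that every
    # automated suspension is followed by a tracked investigation.
    print(f"incident opened for {user_id}: {alert['rule']}")

def handle_ueba_alert(alert: dict) -> None:
    """Contain first, then hand off to incident response."""
    if alert.get("severity") == "high" and alert.get("privileged"):
        suspend_user(alert["user_id"])
        create_incident(alert["user_id"], alert)

# handle_ueba_alert({"user_id": "00u1abcd", "rule": "impossible-travel",
#                    "severity": "high", "privileged": True})
```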
Continuous Evidence & Schedules:
- What evidence shows the automated detection and response system is operational and has been effective over the past 90 days?
- Are detection rules and response actions auditable — can assessors review the criteria, thresholds, and recent trigger events via API?
- How do you demonstrate that false positive and false negative rates are tracked and that detection rules are tuned over time? (An evidence-collection sketch follows this list.)
- What evidence shows that every automated account action resulted in proper investigation and documented resolution?
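For continuous evidence, a small script that pulls the last 90 days of automated account actions and rolls them up into an auditable artifact covers most of these questions. The fetch_actions stub and the event fields below are assumptions; in practice the data would come from your IAM platform or SIEM API.

```python
# Sketch: roll up 90 days of automated account actions into an evidence
# artifact with disposition counts and a false-positive rate.
# fetch_actions() and the event fields are illustrative assumptions.
import json
from collections import Counter
from datetime import datetime, timedelta, timezone

def fetch_actions(since: datetime) -> list[dict]:
    # Stub: replace with a call to your IAM platform or SIEM API.
    return [
        {"user": "admin-3", "rule": "impossible-travel",
         "disposition": "confirmed-malicious", "investigated": True},
        {"user": "svc-deploy", "rule": "off-hours-access",
         "disposition": "false-positive", "investigated": True},
    ]

def build_evidence(days: int = 90) -> dict:
    since = datetime.now(timezone.utc) - timedelta(days=days)
    actions = fetch_actions(since)
    dispositions = Counter(a["disposition"] for a in actions)
    return {
        "window_days": days,
        "total_automated_actions": len(actions),
        "dispositions": dict(dispositions),
        "false_positive_rate": dispositions["false-positive"] / max(len(actions), 1),
        "uninvestigated": [a["user"] for a in actions if not a["investigated"]],
    }

if __name__ == "__main__":
    print(json.dumps(build_evidence(), indent=2))
```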
Update History