Govern
19 subcategories in the Govern function
GOVERN-1.1: Legal and regulatory requirements involving AI are understood, managed, and documented.
GOVERN-1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, and procedures.
GOVERN-1.3: Processes and procedures are in place to determine the needed level of risk management activities based on the organization's risk tolerance.
GOVERN-1.4: The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.
GOVERN-1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, and organizational roles and responsibilities are clearly defined, including determining the frequency of periodic review.
GOVERN-1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.
GOVERN-1.7: Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization’s trustworthiness.
GOVERN-2.1: Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.
GOVERN-2.2: The organization’s personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.
GOVERN-2.3: Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.
GOVERN-3.1: Decision-making related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines, experience, expertise, and backgrounds).
GOVERN-3.2: Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.
GOVERN-4.1: Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize negative impacts.
GOVERN-4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate, and use, and they communicate about the impacts more broadly.
GOVERN-4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.
GOVERN-5.1: Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those external to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI risks.
GOVERN-5.2: Mechanisms are established to enable AI actors to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.
GOVERN-6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third party’s intellectual property or other rights.
GOVERN-6.2: Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.