Governance: AI Principles, Risks, Policies, Organizational Structures and Steering Mechanisms
Defines the company's governance model for AI (governance structure, principles, policies, risks), including roles and responsibilities.

Risk Assessment | Reporting & Escalation
A structured approach for evaluating and mitigating the risks of AI systems, integrated into standard AI lifecycle processes.

Best Practices & Methodologies | Tools | Training & Culture
Methodologies, tools, and training that support relevant teams in operationalizing the requirements defined by the governance model.

Data Quality | Data Lineage | Data Compliance
Data capabilities that enable Responsible AI use cases.
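To make the risk-assessment and escalation layer concrete, the following is a minimal sketch of a lifecycle-embedded risk check: an AI use case is scored on a few dimensions and flagged for escalation when the total crosses a threshold. The dimensions, scoring scale, and threshold are illustrative assumptions, not values prescribed by this framework.

```python
# Illustrative risk-assessment sketch. Dimension names, the 1-5 scale,
# and the escalation threshold are assumptions for demonstration only.
RISK_DIMENSIONS = ("privacy", "fairness", "safety", "transparency")

def assess(scores: dict, escalation_threshold: int = 8) -> dict:
    """Sum per-dimension scores (1 = low risk, 5 = high risk) and
    flag the use case for escalation when the total crosses the threshold."""
    total = sum(scores[d] for d in RISK_DIMENSIONS)
    return {"total": total, "escalate": total >= escalation_threshold}

# Example: a use case with elevated privacy risk crosses the threshold
# and would be routed to the reporting & escalation path.
result = assess({"privacy": 4, "fairness": 2, "safety": 1, "transparency": 2})
print(result)
```

In practice an organization would replace the flat sum with weighted dimensions and tie the escalation flag into its existing reporting channels; the point of the sketch is only that the check is explicit, repeatable, and attached to a defined lifecycle stage.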
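The data layer (quality, lineage, compliance) can likewise be sketched as a simple gate that a dataset must pass before it is used. The record fields, threshold, and names below are hypothetical, intended only to show how the three capabilities might be tied together in code.

```python
from dataclasses import dataclass, field

# Hypothetical record tying a dataset to its quality, lineage, and
# compliance metadata; field names and thresholds are illustrative.
@dataclass
class DatasetRecord:
    name: str
    source: str                       # lineage: where the data came from
    completeness: float               # quality: fraction of non-missing values (0..1)
    approved_for: list = field(default_factory=list)  # compliance: cleared use cases

def quality_gate(record: DatasetRecord, use_case: str,
                 min_completeness: float = 0.95) -> bool:
    """Admit a dataset only if it meets the quality bar
    and is cleared for the requested use case."""
    return record.completeness >= min_completeness and use_case in record.approved_for

record = DatasetRecord(
    name="customer_events",
    source="crm_export_2024",
    completeness=0.98,
    approved_for=["churn_model"],
)
print(quality_gate(record, "churn_model"))   # meets quality bar, approved use
print(quality_gate(record, "ad_targeting"))  # rejected: not an approved use
```

A real implementation would source these fields from a data catalog rather than hand-built records, but the shape is the same: quality and compliance checks become executable preconditions, and the lineage field makes outcomes traceable to their inputs.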
• Budget: Allocates financial resources for responsible AI initiatives. Communication: Communicates the importance of responsible AI to internal and external stakeholders.
• Responsible AI integration: Ensures that responsible AI principles and policies are integrated into product development, from design to deployment.
• Innovation with responsibility: Focuses on developing AI solutions and technologies that align with the organization's principles and policies on responsible AI.
• Data governance: Ensures the quality, privacy, and security of data used in AI systems. Data compliance: Ensures data usage aligns with legal and ethical standards.
• Regulatory compliance: Interprets laws and regulations and defines the legal obligations placed on the organization's AI operations. Risk management: Identifies and manages legal risks associated with AI use.
• Risk management: Integrates AI-specific risk management into the organization's existing risk management practices.
• Stakeholder engagement: Communicates responsible AI practices internally and externally. Public relations: Manages the public image and perception of the company regarding its AI practices.
• Vendor responsibility: Helps evaluate that third-party AI vendors adhere to the ethical and responsible AI principles and policies set forth by the organization. Due diligence: Conducts assessments to ensure ethical practices in the AI supply chain.
• Skill development: Ensures that employees have the necessary knowledge and skills to operationalize the organization's principles and policies for ethical and responsible AI. Culture development: Promotes a culture of ethical AI use within the organization.
• Security: Ensures the security of AI systems to prevent malicious use and data breaches. Threat mitigation: Addresses potential risks and vulnerabilities associated with AI applications.
• Audit trail: Ensures that all data and actions are documented and can be traced back to explain outcomes and to mitigate compliance risks. Internal control: Defines and evaluates internal controls created to ensure compliance.
• Feedback: Provides feedback from the perspective of end-users and civil society on the organization's use of AI, to help validate and improve the internal responsible AI approach.
Note that the roles and role descriptions provided here are examples and might not be applicable to all organizations. Each organization needs to identify their critical roles and respective responsibilities needed to operationalize ethical and responsible AI. |