
Organizational Approach to Ethical and Responsible AI

Chapter Introduction

Nordic perspectives on how to create and operationalize an organizational approach for ethical and responsible AI

Background

  • This chapter is based on the results of the first workshop and webinar in the Nordic Ethical AI Sandbox, which took place during November-December 2023.
  • Ensuring that AI is developed and used in an ethical and responsible way necessitates organization-wide implementation of a systematic approach.
  • This chapter explores key building blocks of an organizational approach to ethical and responsible AI and includes suggested actions to mature each capability area. Each organization needs to carefully consider its own unique context, its role in the AI value chain, the regulatory landscape and AI risks when adopting an organizational approach to ethical and responsible AI. These recommendations should therefore be considered a starting point and further detailed by each organization.

What this chapter will help organizations with

  • Understanding the key building blocks of an organizational approach to ethical and responsible AI
  • Understanding what key actions are needed to operationalize each building block
  • Understanding what stakeholders are normally involved in creating and implementing the organizational approach, and what their roles and responsibilities are 

The building blocks of ethical and responsible AI

Ethical and responsible AI covers multiple layers of an organization and will therefore need involvement from stakeholders across the C-suite, risk, compliance, tech, data, procurement, and HR.

Oversight & Control
Defines the company governance model for AI (governance structure, principles, policies, risks), including roles and responsibilities.
  • Governance: AI Principles, Risks, Policies, Organizational Structures and Steering Mechanisms

Risk Management
Structured approach for evaluating and mitigating the risks of AI systems, connected to standard AI lifecycle processes.
  • Risk Assessment
  • Reporting & Escalation

Good Practice
Methodologies, tools and training that support relevant teams in operationalizing the requirements defined by the governance model.
  • Best Practices & Methodologies
  • Tools
  • Training & Culture

Data Foundation
Data capabilities that enable responsible AI use cases.
  • Data Quality
  • Data Lineage
  • Data Compliance
Source: Nordic Ethical AI Sandbox Workshop #1. The workshop structure was based on Accenture’s Capability Framework for Responsible AI

Actions for oversight and control of AI

AI governance guides the design, development and deployment of AI across an organization
Governance: AI Principles, Risks, Policies, Organizational Structures and Steering Mechanisms
  • Identify regulatory frameworks that will impact the company and summarize the legal requirements imposed by these.
  • Define the scope for enterprise AI governance, meaning what should be covered (for example legal, ethics and security).
  • Define what constitutes an 'AI system' (referencing industry and governmental standards) and create an inventory of all AI systems developed and/or put on the market across the organization.
  • Create an accountability framework for AI by defining what roles are involved in the governance of AI, and what their responsibilities are.
  • Define the company position on why commitment to ethical and responsible AI is important and secure C-suite sponsorship.
  • Document responsible AI policies and distribute them to all relevant stakeholders within the organization to ensure that employees follow the guidelines and recommendations put in place.
  • Define the values that should drive ethical and responsible use of AI. Responsible AI principles set the company position internally and externally on AI use, define the risk appetite and inform policies. Defining AI principles requires analysis of existing company principles and policies to avoid inconsistencies or duplications. These can, for example, include the Code of Conduct, Code of Business Ethics, Corporate Governance, Privacy Policy, Procurement Guidelines, and Diversity & Inclusion Policy.
  • Define rules for governance and compliance with the defined principles and any legal obligations. These rules should guide employees' activities in relation to AI development and usage, e.g. human-in-the-loop requirements.
    • Consolidate both local and international policies that are part of the AI value chain.
  • Define the risk categories that AI systems can fall into, and the categorization logic. The categories should be mutually exclusive and collectively exhaustive, and can include a category for prohibited systems. The categorization logic determines the criteria used to assign individual AI systems to a risk category (a minimal sketch of such logic follows below).
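
As an illustration only, the following minimal Python sketch shows one way such categorization logic can be encoded so that the categories stay mutually exclusive and collectively exhaustive. The category names, screening attributes and rules are hypothetical placeholders; each organization must derive its own from its regulatory analysis and risk appetite.

    from dataclasses import dataclass
    from enum import Enum

    class RiskCategory(Enum):
        PROHIBITED = "prohibited"  # systems the organization will not build or use
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    @dataclass
    class AISystem:
        name: str
        # Illustrative screening attributes; real criteria come from the
        # organization's own categorization logic and applicable regulation.
        uses_social_scoring: bool
        affects_safety_or_rights: bool
        interacts_with_humans: bool

    def categorize(system: AISystem) -> RiskCategory:
        """Assign exactly one category per system: the first matching rule
        wins (mutual exclusivity), and the fallback guarantees that every
        system receives a category (collective exhaustiveness)."""
        if system.uses_social_scoring:
            return RiskCategory.PROHIBITED
        if system.affects_safety_or_rights:
            return RiskCategory.HIGH
        if system.interacts_with_humans:
            return RiskCategory.LIMITED
        return RiskCategory.MINIMAL

    # Example: categorize each entry in the AI inventory.
    inventory = [
        AISystem("cv-screening", False, True, True),
        AISystem("faq-chatbot", False, False, True),
    ]
    for system in inventory:
        print(system.name, "->", categorize(system).value)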

Actions for AI risk management

Adopting a risk-based approach to AI helps with building regulatory readiness and mitigating potential harms
Risk Assessment
Reporting & Escalation
  • Define a methodology for risk-screening AI systems, meaning assessing and categorizing AI systems by level of risk. This methodology should cover both already-deployed AI systems and those yet to be developed.
  • Create a methodology for assessing the impact of AI systems, including identifying, assessing and measuring their risks, and verifying alignment with regulations, principles and policies (see the sketch after this list):
    • Define the assessment questionnaire
    • Define the scoring and aggregation methodology
  • Define documentation requirements and process for identified impacts, risks and mitigation strategies.
  • Design the escalation paths in the organization, meaning the organizational model for making critical decisions regarding risks and negative impacts of AI systems:
    • Define who is involved in each step of the escalation chain
    • Define which issues, scenarios or topics get escalated, and when
  • Define documentation requirements and process for escalations.
  • Define how to perform internal, and potentially external, reporting on AI development and usage related to the defined organizational AI principles and policies.
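
As a purely illustrative aid, the Python sketch below shows one way a scoring and aggregation methodology can feed an escalation trigger: each questionnaire answer carries a weight, and systems whose aggregate score passes a threshold are escalated. The questions, weights and threshold are hypothetical placeholders, not recommended values.

    # Hypothetical assessment questionnaire: each 'yes' answer adds its weight.
    QUESTIONNAIRE = {
        "processes_personal_data": 3,
        "automated_decision_making": 4,
        "affects_vulnerable_groups": 5,
    }
    ESCALATION_THRESHOLD = 7  # hypothetical cut-off, set per risk appetite

    def risk_score(answers: dict[str, bool]) -> int:
        """Aggregate the questionnaire by summing the weights of 'yes' answers."""
        return sum(weight for question, weight in QUESTIONNAIRE.items()
                   if answers.get(question, False))

    def needs_escalation(answers: dict[str, bool]) -> bool:
        """Escalate systems whose aggregate score reaches the threshold."""
        return risk_score(answers) >= ESCALATION_THRESHOLD

    answers = {"processes_personal_data": True, "automated_decision_making": True}
    print(risk_score(answers), needs_escalation(answers))  # prints: 7 True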

Actions for ensuring good AI practices

AI practitioners across the organization require support from standardized practices, tools and training
Best Practices & Methodologies
Tools
Training & Culture
  • Define standardized methods for ensuring alignment with the principles and policies for responsible AI. Examples include statistical methods for responsible AI dimensions such as fairness, explainability, robustness and soundness; human-centered design and human-in-the-loop approaches; and transparency mechanisms (a minimal fairness-metric sketch follows after this list).
  • Develop standardized instructions and templates for documentation along the development process, including risk assessment and mitigation results, system design, testing results, monitoring and functioning.
  • Identify effective tools that support relevant stakeholders across functions in operationalizing responsible AI principles and policies, for example in risk assessment, monitoring, governance, and AI solution evaluation.
    For example (non-exhaustive):
    • Governance platform for performing conformity assessments against regulations or policies
    • Bias detection tool
    • Automated record-keeping tracker of AI system activities to maintain a log of developments and changes.
    • Post-deployment monitoring tool that registers all inputs and outputs and supports oversight of whether requirements are met
  • Upskill the organization in ethical and responsible AI by creating learning paths and training material tailored for different roles and responsibilities:
    • C-suite and board training
    • Technical training for AI teams
    • Upskilling business functions on the responsible AI approach
    • Company-wide education series on current and upcoming AI regulations
    • Company-wide education series on the organization's governance structure, risk categories and everyone's role within it
  • Foster a responsible AI culture by engaging the whole organization in viewing responsible AI as a critical business imperative.
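
As one concrete instance of the statistical fairness methods mentioned above, the sketch below computes a demographic parity gap: the difference in favourable-outcome rates between two groups. The decision data and the tolerance are illustrative only; which metric and threshold are appropriate depends on the use case and applicable policy.

    # Demographic parity gap: the difference in favourable-outcome rates
    # between two groups. Zero means parity; teams typically flag gaps
    # above a chosen tolerance for review.
    def positive_rate(outcomes: list[int]) -> float:
        return sum(outcomes) / len(outcomes)

    def demographic_parity_gap(group_a: list[int], group_b: list[int]) -> float:
        return abs(positive_rate(group_a) - positive_rate(group_b))

    # Illustrative model decisions (1 = favourable outcome) for two groups.
    group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # favourable rate 0.625
    group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # favourable rate 0.25
    gap = demographic_parity_gap(group_a, group_b)
    print(f"parity gap: {gap:.2f}")  # 0.38; a 0.1 tolerance would flag this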

Actions for making data AI-ready

Operationalizing responsible AI is dependent on high-quality and traceable data and robust supporting infrastructure
Data Quality
Data Lineage
Data Compliance
  • Incorporate user feedback to validate and improve data quality.
  • Use data that is relevant to the problem being solved, and address bias in training data to prevent biased outcomes in AI models.
  • Ensure data lineage by mapping data sources, data processes, data transformations, and data assets (see the sketch after this list).
  • Create documentation for how data is transformed into the final input of the AI model.
  • Implement proper data archiving, versioning, and deletion policies.
  • Safeguard user privacy and secure sensitive data by implementing strong data encryption, data anonymization, and access controls.
  • Document the data compliance processes, and regularly review and assess the compliance level of the data processes and AI models.
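
To make the lineage mapping concrete, the minimal Python sketch below records each transformation from data source to data asset as an append-only log entry. The dataset and step names are hypothetical, and in practice this record would live in a dedicated lineage or metadata tool rather than in application code.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class LineageStep:
        source: str          # upstream dataset or system
        transformation: str  # what was done to the data
        output: str          # resulting data asset
        recorded_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    # Append-only lineage log: every transformation from raw source to
    # final model input gets a traceable entry.
    lineage: list[LineageStep] = []
    lineage.append(LineageStep("crm.customers", "drop direct identifiers", "customers_anon"))
    lineage.append(LineageStep("customers_anon", "aggregate to monthly features", "features_v1"))

    for step in lineage:
        print(f"{step.source} -> [{step.transformation}] -> {step.output}")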

Stakeholders involved (I/II)

Ethical and responsible AI covers multiple layers of an organization and will therefore need involvement from a wide variety of stakeholders. Each organization needs to define what functions and roles are involved in operating responsible AI
Non-exhaustive
C-suite (Leadership)
Budget: Allocates financial resources for responsible AI initiatives.
Communication: Communicates the importance of responsible AI to internal and external stakeholders.
Product Owners
Responsible AI integration: Ensures that responsible AI principles and policies are integrated into product development, from design to deployment.
R&D (Research & Development)
Innovation with responsibility: Focuses on developing AI solutions and technologies that align with the organization's principles and policies on responsible AI.
Data Owners
Data governance: Ensures the quality, privacy, and security of data used in AI systems.
Data compliance: Ensures data usage aligns with legal and ethical standards.
Legal
Regulatory compliance: Interprets laws and regulations and defines the legal obligations placed on the organization's AI operations.
Risk management: Identifies and manages legal risks associated with AI use.
Risk & Compliance
Risk management: Integrates AI specific risk management in the organization’s existing risk management practices.

Stakeholders involved (II/II)

Communication
Stakeholder engagement: Communicates responsible AI practices internally and externally.
Public relations: Manages the public image and perception of the company regarding its AI practices.
Procurement
Vendor responsibility: Helps evaluate whether third-party AI vendors adhere to the ethical and responsible AI principles and policies set forth by the organization.
Due diligence: Conducts assessments to ensure ethical practices in the AI supply chain.
HR & Learning
Skill development: Ensures that employees have the necessary knowledge and skills to operationalize the organization's principles and policies for ethical and responsible AI.
Culture development: Promotes a culture of ethical AI use within the organization.
Security
Security: Ensures the security of AI systems to prevent malicious use and data breaches.
Threat mitigation: Addresses potential risks and vulnerabilities associated with AI applications.
Internal Audit
Audit trail: Ensures that all data and actions are documented and can be traced back to explain outcomes as well as mitigate compliance risks.
Internal control: Defines and evaluates internal controls created to ensure compliance.
End-users & Civil Society
Feedback: Provides feedback from the perspective of end-users and civil society on the organization’s use of AI, to help validate and improve internal responsible AI approach.
Note that the roles and role descriptions provided here are examples and might not be applicable to all organizations. Each organization needs to identify their critical roles and respective responsibilities needed to operationalize ethical and responsible AI.