
Transparent and Explainable AI

Chapter Introduction

Nordic perspectives on challenges and mitigation strategies related to transparency and explainable AI

Background

  • This chapter is based on the results of the fourth workshop and webinar in the Nordic Ethical AI Sandbox, which took place during April/May 2024.
  • The lack of transparency and explainability is a common criticism of AI system deployments. The urgency to integrate AI into strategic operations across Nordic businesses underscores the necessity to mitigate such issues.
  • This chapter explores Nordic perspectives on challenges related to transparency and explainability when developing and adopting AI solutions, and what strategies can be applied to ensure that solutions are understandable and accountable. Each use case comes with its unique set of challenges, and these recommendations should therefore be considered as a starting point for further development.

What this chapter will help organizations with

  • Emphasizing the importance of building trust through transparent and explainable AI systems
  • Identifying key challenges related to the lack of transparency and explainability of AI systems
  • Highlighting some of the methods and strategies that exist to aid with transparency and explainability of AI systems

Transparent & Explainable AI

The use of AI technologies affects trust at both the individual and the societal level. Transparency and explainability of models are crucial elements in the responsible and ethical deployment of AI systems.

Transparency

  • Transparency seeks to ensure that all stakeholders can understand how an AI system arrives at a result, such as a decision or recommendation.
  • Several factors are important for AI transparency, most notably:
    • Interpretability: the capability to provide information about the relationships between model inputs and outputs.
    • Explainability: the capability to explain the model’s decision-making process in terms understandable to the end user.
    • Accountability: the capability of AI systems to learn from mistakes and improve over time, while organizations take suitable corrective actions to prevent similar errors in the future.
  • Social transparency focuses on the implications of AI deployment on society as a whole.

Explainability

  • Explainable AI (or XAI) refers to the ability of an AI system to provide easy-to-understand explanations for its decisions and actions.
  • Many modern AI models are hugely complex, which tends to obscure the model’s decision-making process to the point where not even experts can explain it.
  • XAI helps build trust with the users by enabling them to understand, trust, and effectively manage these AI systems.
  • This is especially important in high-stakes domains and safety critical applications of AI.
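A minimal sketch of what an easy-to-understand explanation might look like in practice: for an inherently interpretable (linear) model, each feature's contribution is simply weight × value, which can be rendered as a plain-language breakdown for the end user. The credit-scoring scenario, feature names, weights, and threshold are all hypothetical.

```python
# Hypothetical transparent credit-scoring model; weights are illustrative only.
WEIGHTS = {"income": 0.8, "payment_history": 0.5, "existing_debt": -0.6}
THRESHOLD = 0.5

def explain_decision(applicant):
    """Return the decision plus a per-feature breakdown the end user
    can read, sorted by the size of each feature's contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "declined"
    lines = [f"Application {decision} (score {score:.2f})."]
    for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        direction = "raised" if c >= 0 else "lowered"
        lines.append(f"- {feature.replace('_', ' ')} {direction} the score by {abs(c):.2f}")
    return decision, "\n".join(lines)

decision, text = explain_decision(
    {"income": 0.9, "payment_history": 0.7, "existing_debt": 0.4})
```

For complex models this additive breakdown does not come for free; post-hoc techniques such as SHAP or LIME approximate it, at the cost of fidelity to the underlying model.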

Transparent and explainable AI challenges

Nordic companies explored challenges in ensuring transparency and explainability in the different phases of the AI life cycle

Challenges identified from workshop insights

AI life cycle phases: Scoping, Training, Testing, Production
Balancing transparency with competitive advantage: There is a challenge in determining how much of the AI's operational logic can be disclosed without compromising business secrets or competitive advantages.
Determining the right level of transparency: It is challenging to establish the appropriate level of transparency for different stakeholders without negatively impacting the functionality or security of the AI.
Explaining errors: Explainability aids in debugging and refining AI systems, especially in handling corner cases or errors arising from inadequate data or model understanding.
Monitoring and adjustment: Organizations must have the capability to continuously monitor, evaluate, and adjust AI systems, ensuring they remain effective and aligned with ethical standards over time.
Inclusion of sensitive features: It is essential to consider when and how sensitive features like gender or ethnicity should be incorporated into the AI model, ensuring they are used objectively and justifiably.
Bias and fairness considerations: The need to recognize and mitigate biases, which may vary culturally and regionally, is crucial during the deployment phase to ensure fairness and ethical use.
Data representation and blind spots: Understanding whether the data used is representative and free of biases is vital for the effective functioning of the AI, necessitating mechanisms to identify and address any blind spots.
Educating users: There is a need to educate end users on the AI’s functionalities and limitations to ensure responsible usage.
Interpretability of outcomes: The AI system must be designed to not only perform tasks but also provide understandable outputs. This includes creating models that can explain decisions in a way that end users can comprehend.
Ensuring model understandability: For AI systems that operate at higher levels of autonomy, there is a heightened demand for transparency to ensure that stakeholders can conceptually understand AI outputs and their implications.
Relevance over time: The ability to incorporate new knowledge or correct misunderstandings within the AI system is crucial for its ongoing relevance and accuracy.
Source: Nordic Ethical AI Sandbox Workshop #4
Note: These are aggregated results from the workshop and do not necessarily apply to all participating organizations. The list should not be considered exhaustive.

Transparent and explainable AI mitigation mechanisms

Nordic companies explored solutions and mitigation mechanisms relevant in different phases of the AI life cycle

Recommendations based on workshop insights

AI life cycle phases: Scoping, Training, Testing, Production
  • Understanding the intended audience of an AI system is key in determining the necessary level of explainability.​
  • Opening up data used for training AI models can enhance transparency and accountability.
  • Understanding which data is used and why it is chosen for training of AI systems is crucial for ensuring the quality and trustworthiness of the system.
  • Understanding how a well-designed user interface can enhance the explainability of an AI system.​
  • Understanding the relevance and functionality of the algorithms and models in use is crucial for enhancing explainability.
  • Ensuring transparency at every stage of model training aids the development team in detecting shortcomings in the model’s performance.
  • Ensuring that the AI system is explainable to the end user prior to deployment.
  • Transparently describing the architecture of the AI system for deployment to ensure there are no leaks or potentially harmful stages.
  • Assessing that the active learning and process of injecting more data to the model is reliable and robust.
  • Assessing whether the necessary resources to actively monitor, modify, and adjust the AI over time are available.
  • Assessing whether the organization possesses the capacity and skills required to comprehend potential issues.
  • Maintaining the system in a way that ensures it remains transparent and explainable is crucial.
Source: Nordic Ethical AI Sandbox Workshop #4
Note: These are aggregated results from the workshop and do not necessarily apply to all participating organizations. The list should not be considered exhaustive.

Key takeaways for ensuring Transparent and Explainable AI 

Workshop participants considered the following factors to be critical for ensuring responsible, robust and secure AI
  • Transparency and Explainability: Transparency alone is insufficient; explanations are therefore crucial.
  • Transparency and Demand: Higher transparency is in some cases demanded from AI than from human operators.
  • Transparency and Competitive Edge: In some cases, transparency must be balanced against the protection of a competitive edge.
  • Purpose and Representation: AI models need to be representative of their tasks and inputs. Step-by-step explanations can aid understanding.
  • User Responsibility: End users are responsible for not misusing AI systems.
  • Adaptability and Modification: AI systems should be designed to avoid architectural lock-in. This helps in cases where regular monitoring identifies a need for adjustments.
  • Bias and Fairness: Address bias and fairness with cultural contexts in mind, and use this to ensure representative training data.
  • Explainability and Debugging: Explainability aids in debugging and improving AI systems.
  • Resource Availability: Access to organizational resources and skills for ongoing AI maintenance is critical; in some cases this includes educating end users.
  • Trust and Understanding: Provide tools and meaningful user interfaces to help users understand AI decisions.
  • Learning From Mistakes: Use transparency to learn from errors and improve data preparation.
Source: Nordic Ethical AI Sandbox Workshop #4
Note: These are aggregated results from the workshop and do not necessarily apply to all participating organizations. The list should not be considered exhaustive.