Go to content

Risks and Safeguards for Generative AI Systems

Chapter Introduction

Nordic perspectives on potential risks and safeguards when developing and using Generative AI

Background

  • This chapter is based on the results of the second workshop and webinar in the Nordic Ethical AI Sandbox, which took place during  February 2024.
  • Generative AI has become a top strategic priority for Nordic leaders and businesses​, and many organizations are piloting and/or deploying the technology. The attention to the capabilities of Generative AI has raised concerns about potential risks, and how to effectively mitigate or manage them.
  • This chapter explores Nordic perspectives on potential risks with developing and adopting Generative AI solutions, and how to safeguard against them. Each organization needs to carefully consider their own unique AI risk landscape. These recommendations should therefore be considered as a starting point, and need to be further detailed for each organization.

What this chapter will help organizations with

Insight into Generative AI adoption among selected Nordic organizations
Understanding important factors when assessing and prioritizing between Generative AI use-cases
Understanding Generative AI risks and mitigation strategies explored by selected Nordic organizations

Generative AI use-cases

Generative AI is being piloted and deployed by Nordic businesses where a common theme across workshop participants is that the initial use-case(s) are internal-facing, focused on increasing productivity
Non-exhaustive responses

Examples of Generative AI Use-cases Explored

Content Creation & Editing
  • Text generation & summarizing (text-to-text)
  • Image generation & editing (text-to-image)
  • Video generation & editing (text-to-video)
  • Speech synthesis (speech-t0-text)
Information Retrieval
  • Intelligent Search engine (internal & external data)
  • Domain expert chatbots
  • Summarize & assess internal information
R&D &
Innovation
  • Security: Network alarms & cyberattack defence
  • Sustainability: Accelerate green transitions
Customer Experience
  • Customer service chatbots
  • Hyper personalization & customization
Data Science & Analysis
  • Coding (generation & assessment)
  • Analysis (sentiment analysis, text classification)
Source: Nordic Ethical AI Sandbox Workshop #2
Note: These are aggregated results from the workshop, and do not necessarily apply for all participating organizations

Generative AI tools & techniques

Nordic businesses are exploring different Generative AI tools and techniques. Using pre-built applications lowers barriers to adoption, but custom solutions are also built with different techniques or solution patterns.
Non-exhaustive responses

Examples of Generative AI Tools & Techniques Explored

Tools & Applications
Techniques & Solution Patterns
Third-party APIs
External software interfaces for accessing off-the-shelf LLMs, for example OpenAI’s API
GitHub Copilot
AI code completion tool by GitHub and OpenAI
RAG
Retrieval-augmented generation model for improved performance in Gen AI solutions & NLP tasks
Multimodal
Processing and understanding data from multiple modalities, for example text, image, audio, simultaneously
Chat-GPT
Interface for accessing OpenAI’s text-to-text models, for example GPT3.5, GPT-4
Google Duet AI
Google’s Gen AI co-pilot, integrated with Google's various creative applications and platforms
Opensource LLMs
Publicly available large language models with accessible source code, for example  Llama 2, BLOOM
Distributed and federated AI
Spreading AI tasks across multiple devices in a network, enabling AI to be independently trained through federated learning
O365 Copilot
Microsoft’s Gen AI Co-pilot, integrated with Microsoft Office 365
Fine-tuned models
Pre-trained AI models that are trained on contextual data to achieve a higher performance for specific tasks
Improve LLMs’ explainability
Enhancing the interpretability of transformer models to understand their decision-making
Edge deployment
Run on devices at the network edge, closer to where data is generated, to reduce latency and bandwidth usage
Source: Nordic Ethical AI Sandbox Workshop #2
Note: These are aggregated results from the workshop, and do not necessarily apply for all participating organizations

Use-case assessment framework

Based on Accenture’s Use-case Assessment Framework, insights were captured on what factors Nordic businesses consider when prioritizing between Generative AI use-cases 
This framework can be used as a starting point when assassing and prioritizing between different Generative AI (and traditional AI) use-cases. In addition to factors included in this framework, there might be other factors specific to each organization that should be considered.
In the second Nordic Ethical AI Sandbox workshop, participants mapped different factors considered into three overarching categories: 1) Value 2) Effort & Feasibility, and 3) Risks & Ethics.
Identified assessment factors were also prioritized to better understand which ones are more important than others according to the workshop group.
1. Value Why we do it?
The benefits and value potential of deploying the use-case
2. Effort & feasibility Are we able to do it, and how?
The required technical capabilities, resources and costs to realize the use-case
3. Risks & ethics Can and/or should we do it?
The potential risks and ethical implications that could occur because of the use-case which could hinder value creation
Source: Nordic Ethical AI Sandbox Workshop #2, based on Accenture’s Responsible AI Use Case Assessment Framework

Use-case assessment framework

Nordic companies identified different factors relevant for them when assessing and prioritizing Generative AI use-cases
1. Value Why we do it?
The benefits and value potential of deploying the use-case
Efficiency gains
Customer experience & communication
Positive sustainability impact
(Environmental, Social, Economical)
Innovation & unlocked insights
Business growth
2. Effort & feasibility Are we able to do it, and how?
The required technical capabilities, resources and costs to realize the use-case
Technical feasibility and complexity
Compliance burden
Time to market
Cost
Skills and competences
Change management and culture
3. Risks & ethics Can and/or should we do it?
The potential risks and ethical implications that could occur because of the use-case which could hinder value creation
Unreliable output
Workforce and talent risks
Bias & harm
Privacy and security
Fast evolving technology
Negative sustainability impact
(Environmental, Social, Economical)
Lack of understanding and control
Liability and compliance
Source: Nordic Ethical AI Sandbox Workshop #2, based on Accenture’s Responsible AI Use Case Assessment Framework
Note:These are aggregated results from the workshop, and does not necessarily apply for all participating organizations

Value considerations

Factors that contributes to valuable outcomes because of the use-case
Non-exhaustive
1. Value Why we do it?
Efficiency gains
Increased operational efficiency and productivity that could be gained from for example automating repetitive tasks, streamlining processes, and using data-driven insights to optimize resource allocation and decision-making
Customer experience & communication
Improved customer experience and communication, enabled by for example personalized interactions and delivering timely and relevant content
Positive sustainability
impact
Increased positive impact on society or the environment, enabled by for example facilitating development of more sustainable products and services and optimizing resource usage
Innovation & unlocked insights
Supporting innovation through generation of ideas, designs, and solutions, fostering creativity and augmenting human ingenuity across various domains
Business growth
Supporting generation of new revenue streams through product and service development, and optimizing revenue through tailored sales and marketing activities 
Source: Nordic Ethical AI Sandbox Workshop #2, based on Accenture’s Responsible AI Use Case Assessment Framework

Effort & feasibility considerations

Factors that could contribute to complexity and cost when realizing valuable outcomes of the use-case
Non-exhaustive
2. Effort & feasibility Are we able to do it, and what is the cost?
Technical feasibility & complexity
The practicality and viability of implementing the Generative AI solution within existing technological infrastructures, considering factors such as data availability, data accessibility, solution lock-in etc.
Compliance capabilities
Technical and organizational factors that affect the capacity to adhere to regulatory requirements and adapt to changing legal frameworks while maintaining agility in adjusting to new guidelines, policies or constraints.
Time to market
The urgency of bringing the use-case from conception to deployment, requiring efficient deployment strategies and rapid iteration cycles, for example through AI prototyping tools or automating and validating tasks
Costs
The costs of developing and implement the solution, for example licensing costs, cloud storage costs, hardware costs and maintenance costs.
Skills & competences
The required knowledge and skills of the team involved in bringing the use-case from conception to deployment, including potential need to hire new employees or consult external experts
Change management & culture
The required change in existing processes, skills profiles or culture that could affect the ability to adopt and realize value from the use-case , for example lack of AI literacy or stakeholder buy-in
Source: Nordic Ethical AI Sandbox Workshop #2, based on Accenture’s Responsible AI Use Case Assessment Framework

Risks & ethics considerations

Potential risks or ethical considerations that could have business, reputational or regulatory effects unless mitigated
Non-exhaustive
3. Risks & ethics Can and/or should we do it?
Unreliable outputs
The risk that the outputs generated are thought to be correct, but instead may be false, misrepresenting or misleading, for example from hallucinations
Workforce & talent risks
The resulting impact on workforce and required skills profiles from utilizing Generative AI in the organization, for example from displacing certain roles or reducing resource needs
Bias & harm
The risk that the outputs generated express prejudice, toxicity or include harmful content of any kind due to bias in data, model or human review
Privacy & security risks
Uncertainty around protection of proprietary data and sensitive information used to train and prompt the model
Fast evolving technology
The uncertainty that comes with a fast evolving, less tested technology, making it difficult to fully foresee all potential risks with deploying it
Negative sustainability impact
Negative environmental and social consequences resulting from Generative AI, for example climate impact or power imbalance of technology development
Lack of understanding & control
The risk of not understanding the outputs or behavior of Generative AI solutions, caused by for example low transparency and explainability of the AI model
Liability & compliance
Legal obligations and compliance with regulations related to the use of Generative AI, including ownership of content and potential implications of misuse or harm caused
Source: Nordic Ethical AI Sandbox Workshop #2, based on Accenture’s Responsible AI Use Case Assessment Framework

Most important prioritization factors

As a general takeaway, participants in the workshop consider the ability to generate efficiency gains and business growth, while maintaining privacy and security as the most important factors to consider when evaluating individual Generative AI use-cases
Efficiency gains
Nordic businesses see a lot of value potential from optimizing business operations both to cut costs, as well as speed up processes in order to quicker deliver their services. Using generative AI can potentially enable improved productivity, automation of repetitive tasks, reduction manual labor, reduction of required resources, identification of inefficiencies and faster decision making processes.
Business growth
Nordic businesses see a lot of value potential  from using generative AI to contribute to growth of the business by strengthening brand reputation, providing more personalized services to their customers and finding new market opportunities.
Privacy & security
Nordic businesses prioritize the protection of sensitive data, with privacy and security being critical factors when evaluating whether to pursue a generative AI use-case, as they are wary of potential risks such as data leakage, security breaches or privacy violations.
Source: Nordic Ethical AI Sandbox Workshop #2, based on Accenture’s Responsible AI Use Case Assessment Framework
Note: This is a conclusion made by the writers of this guidebook based on the discussions in the Nordic Ethical AI Sandbox Workshop #2. It does not mean that this applies for all participating organizations. 

Identifying and mitigating AI and generative AI risks

  • The following section includes AI and generative AI risks identified in the Nordic Ethical AI Sandbox, together with examples of risk mitigation methods and learnings from implementing and using generative AI in Nordic organizations.
  • Mitigating actions can target both the root case of the risk and the effect. In this guidebook, both technical and non-technical mitigations are included.
  • Risks and risk mitigation methods are highly contextual and specific to individual AI use-cases. Organizations therefor need to screen each use-case for the level and type of risk to identify the correct mitigations.
Identify and Assess
 Risks
The potential risk of which an organization is threatened by, with consideration to both adverse impact and likelihood of occurrence
Effects

The consequences of AI risks materializing
Mitigate and Remedy
Organizational
Non-technical practices that help identify, assess, reduce or eliminate different risks across the AI lifecycle
Technical

Tools, techniques or technical methods that help identify, assess, reduce or eliminate different risk factors in the AI lifecycle
Risks and risk mitigation methods are use-case specific, and organizations need to assess which are applicable for them and for specific use-cases

Risks identified by Nordic companies

Risks
The potential risk of which an organization is threatened by, with consideration to both adverse impact and likelihood of occurrence
Unreliable outputs
Workforce & talent risks
Bias & harm
Privacy & security risks
Fast evolving technology
Negative sustainability impact
Lack of understanding & control
Liability & compliance
EFFECTS
The consequences identified by Nordic businesses that come as an effect of the risks
Non-compliance (effect)
Legal and compliance (for example with industry standards, GDPR, EU AI Act)
Copyright infringement
Reputational damage (effect)
Loss of trust
Negative brand reputation
Loss of Control / Power Imbalance
Power and control imbalance (for example market oligopolies)
Loss of Control
Business & Financial Loss
Loss of income
Loss of intellectual property
Regulatory fines
Loss of innovation & competitive advantage
Adverse Sustainable impact (effect) 
Long term societal implications (for example on human rights, democracy, propaganda, misinformation)
Negative environmental impact
Source: Nordic Ethical AI Sandbox Workshop #2
Note: This list is non-exhaustive. The risks and effects mentioned were identified during the workshop and may not apply or be exhaustive for to all organizations.

Organizational risk mitigation for (Gen) AI risks

Organizational mitigations are in this context non-technical practices and structures that help identify, assess, reduce or eliminate different risks across the AI lifecycle. These risk mitigations were identified by workshop participants and could in most instances also apply for other AI technologies as well.

Organizational Mitigation Strategies

Non-exhaustive
Principles & Governance
Standardized Processes
Training & Culture
Ecosystem Collaboration
AI Policy
AI policy which defines what AI is and how it can be deployed and used across the organization
Risk Classification
Standardized process for defining the risk level of individual Gen AI systems, based on predefined risk categories
Leadership Sponsorship
Clear commitment from leadership to foster a culture which promotes ethical and responsible deployment and use of AI 
Industry Sandboxes and Toolkits
Participation in industry sandbox to learn from peers, and leverage existing and guides and toolkits provided by the ecosystem
Gen AI Council
Gen AI Council or Board that reviews and accept GenAI models and use-cases
Responsible AI-by-Design
Documented guidance on how to integrate ethical and responsible AI requirements during each AI lifecycle stage
Upskilling and AI Literacy
AI educational programs, AI certifications  and AI awareness programs tailored for different roles
Vendor Collaboration
Collaboration with AI-model providers, including upstream transparency requirements and data sharing agreements
Accountability Framework
Documented roles, responsibilities, and accountability structures and processes in relation to AI
Human-in-the-Loop
Processes and guidance for reviewing and quality-checking AI outputs
Gen AI Community of Practice
Educational and knowledge sharing forums for practitioners for learn and share practical examples
Stakeholder Engagement
Involvement of external stakeholders to understand the broader impact of AI on society and environment
AI System Inventory
Central oversight of all deployed (Gen) AI systems (applies for all AI, not just Generative AI)
Pre-deployment Monitoring
Process to continuously review deployed AI systems and monitor for significant changes
Culture Building on RAI
Establish regular AI focus groups, Document organizational values on responsible AI
Source: Nordic Ethical AI Sandbox Workshop #2
Note: This list is non-exhaustive. The risk mitigation strategies mentioned here were identified during the Nordic Ethical AI Sandbox workshop. These are non-exhaustive and and may not be relevant for all organizations.

Technical risk mitigation for (Gen) AI risks

Technical mitigation strategies are in this context tools, techniques or technical methods that help identify, assess, reduce or eliminate different risk factors in the AI lifecycle. These risk mitigations were identified by workshop particpants, and could in most instances also apply for other AI technologies as well.

Technical  Mitigation Strategies

Non-exhaustive
Data
Development
Deployment
Cross-Lifecycle
Data Loss Prevention
Data encryption, Data masking  redaction, Antivirus software, Data loss prevention software
Prompt Engineering
Best practice for creating prompts to balance specificity with openness to optimize prompt effectiveness
Monitoring System
Network Failure Monitoring, Output Monitoring, Error Monitoring
Technical Sandbox & Toolkits
A protected technical environment that allows developers and engineers to test software or system
Sensitivity Labels & Access Controls
Classification of data assets based on sensitivity and with clear access rights
Fine-tuning
Adapting pre-trained models to for specific tasks or use-cases to achieve higher accuracy
Record-Keeping
Techniques and tools to automatically record events and enable traceability
AI Frameworks
Open-source and proprietary frameworks to architect, train, validate and deploy AI systems
AI Model & Data Cards
Documentation format to provide standardized information to downstream users – ”nutrition label” for AI models
Grounding the Model
Set up parameters, settings and boundaries for what is accurate behaviour for intended use
Harmful Content Classification
Techniques and methodologies for reviewing generated content to flag harmful or toxic content
Governance Platform & Toolkit 
A platform  or toolkit that allow organization to direct, manage, and monitor AI activities according with internal policy
AI Model Explainability
Techniques and methodologies for improving explainability of AI-outputs, for example decision trees, LIME, SHAP
Guardrails
Technical guardrails that limits the types of user prompts that can be made
System Deactivation
Tool that can deactivate or disable entire system or certain features or services
Source: Nordic Ethical AI Sandbox Workshop #2
Note: This list is non-exhaustive. The risk mitigation strategies mentioned here were identified during the Nordic Ethical AI Sandbox workshop. These are non-exhaustive and and may not be relevant for all organizations.
Check Copied to clipboard