Third-party APIs External software interfaces for accessing off-the-shelf LLMs, for example OpenAI’s API | GitHub Copilot AI code-completion tool by GitHub and OpenAI | RAG Retrieval-augmented generation, grounding model outputs in retrieved documents to improve performance in Gen AI solutions & NLP tasks | Multimodal Processing and understanding data from multiple modalities, for example text, image, and audio, simultaneously |
ChatGPT Interface for accessing OpenAI’s text-to-text models, for example GPT-3.5, GPT-4 | Google Duet AI Google’s Gen AI co-pilot, integrated with Google's various creative applications and platforms | Open-source LLMs Publicly available large language models with accessible source code, for example Llama 2, BLOOM | Distributed and federated AI Spreading AI tasks across multiple devices in a network, enabling models to be trained on decentralized data through federated learning |
O365 Copilot Microsoft’s Gen AI co-pilot, integrated with Microsoft Office 365 | Fine-tuned models Pre-trained AI models that are further trained on contextual data to achieve higher performance on specific tasks | Improve LLMs’ explainability Enhancing the interpretability of transformer models to understand their decision-making |
Edge deployment Running models on devices at the network edge, closer to where data is generated, to reduce latency and bandwidth usage |
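As a concrete illustration of the RAG pattern listed above: retrieve the most relevant documents for a query, then prepend them to the prompt so the model answers from grounded context. This is a minimal sketch with an assumed toy corpus and naive word-overlap scoring standing in for a real embedding-based vector search; all names and data are illustrative, not from the source.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Corpus, scoring, and prompt template are illustrative assumptions.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query
    (a stand-in for a real embedding-based vector search)."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the model by prepending retrieved context to the question."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The EU AI Act classifies AI systems by risk level.",
    "Fine-tuning adapts a pre-trained model to a task.",
    "Edge deployment reduces latency and bandwidth usage.",
]
prompt = build_prompt("What does the EU AI Act classify?", corpus)
```

In a production RAG system the overlap scorer would be replaced by an embedding model and vector index, but the shape of the pipeline (retrieve, then augment the prompt) stays the same.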
This framework can be used as a starting point when assessing and prioritizing different Generative AI (and traditional AI) use-cases. In addition to the factors included in this framework, there may be other factors specific to each organization that should be considered. In the second Nordic Ethical AI Sandbox workshop, participants mapped the factors considered into three overarching categories: 1) Value, 2) Effort & Feasibility, and 3) Risks & Ethics. The identified assessment factors were also prioritized to better understand which ones the workshop group considered most important. | 1. Value Why we do it? The benefits and value potential of deploying the use-case |
2. Effort & feasibility Are we able to do it, and how? The required technical capabilities, resources and costs to realize the use-case | |
3. Risks & ethics Can and/or should we do it? The potential risks and ethical implications arising from the use-case that could hinder value creation
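The three assessment categories can be turned into a simple prioritization score for ranking candidate use-cases. A minimal sketch, assuming illustrative 1–5 ratings and weights; the weights, the risk inversion, and the use-case names are hypothetical choices for the example, not part of the framework itself.

```python
# Hypothetical prioritization sketch for the three-category framework:
# rate each use-case 1-5 on Value, Effort & Feasibility, and Risks &
# Ethics, then rank by a weighted score. Weights are assumptions.

WEIGHTS = {"value": 0.4, "feasibility": 0.35, "risk": 0.25}

def priority(value: int, feasibility: int, risk: int) -> float:
    """Higher value/feasibility raise priority; risk is inverted so
    a rating of 5 (highest risk) gives the largest penalty."""
    return round(
        WEIGHTS["value"] * value
        + WEIGHTS["feasibility"] * feasibility
        + WEIGHTS["risk"] * (6 - risk),
        2,
    )

# Illustrative use-cases with made-up ratings.
use_cases = {
    "support chatbot": priority(value=4, feasibility=4, risk=2),
    "automated hiring": priority(value=3, feasibility=2, risk=5),
}
ranked = sorted(use_cases, key=use_cases.get, reverse=True)
```

Organizations would substitute their own factor weights and scoring scale; the point is only that the three categories combine into a single comparable number per use-case.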
1. Value Why we do it? |
Increased operational efficiency and productivity gained from, for example, automating repetitive tasks, streamlining processes, and using data-driven insights to optimize resource allocation and decision-making | Improved customer experience and communication, enabled by, for example, personalized interactions and timely, relevant content | Increased positive impact on society or the environment, enabled by, for example, facilitating the development of more sustainable products and services and optimizing resource usage |
Supporting innovation through generation of ideas, designs, and solutions, fostering creativity and augmenting human ingenuity across various domains | Supporting generation of new revenue streams through product and service development, and optimizing revenue through tailored sales and marketing activities |
2. Effort & feasibility Are we able to do it, and what is the cost? |
The practicality and viability of implementing the Generative AI solution within existing technological infrastructures, considering factors such as data availability, data accessibility, solution lock-in etc. | Technical and organizational factors that affect the capacity to adhere to regulatory requirements and adapt to changing legal frameworks while maintaining agility in adjusting to new guidelines, policies or constraints. | The urgency of bringing the use-case from conception to deployment, requiring efficient deployment strategies and rapid iteration cycles, for example through AI prototyping tools or automating and validating tasks |
The costs of developing and implementing the solution, for example licensing costs, cloud storage costs, hardware costs and maintenance costs. | The required knowledge and skills of the team involved in bringing the use-case from conception to deployment, including the potential need to hire new employees or consult external experts | The required change in existing processes, skill profiles or culture that could affect the ability to adopt and realize value from the use-case, for example lack of AI literacy or stakeholder buy-in |
3. Risks & ethics Can and/or should we do it? |
The risk that the outputs generated are thought to be correct but are in fact false, misrepresentative or misleading, for example due to hallucinations | The resulting impact on workforce and required skill profiles from utilizing Generative AI in the organization, for example from displacing certain roles or reducing resource needs | The risk that the outputs generated express prejudice, toxicity or include harmful content of any kind due to bias in data, model or human review |
Uncertainty around protection of proprietary data and sensitive information used to train and prompt the model | The uncertainty that comes with a fast evolving, less tested technology, making it difficult to fully foresee all potential risks with deploying it | Negative environmental and social consequences resulting from Generative AI, for example climate impact or power imbalance of technology development |
The risk of not understanding the outputs or behavior of Generative AI solutions, caused by for example low transparency and explainability of the AI model | Legal obligations and compliance with regulations related to the use of Generative AI, including ownership of content and potential implications of misuse or harm caused |
Identify and Assess | |
The potential risks that threaten an organization, with consideration of both adverse impact and likelihood of occurrence | The consequences of AI risks materializing
Mitigate and Remedy | |
Non-technical practices that help identify, assess, reduce or eliminate different risks across the AI lifecycle | Tools, techniques or technical methods that help identify, assess, reduce or eliminate different risk factors in the AI lifecycle
RISKS The potential risks that threaten an organization, with consideration of both adverse impact and likelihood of occurrence |
Unreliable outputs | Workforce & talent risks | Bias & harm |
Privacy & security risks | Fast evolving technology | Negative sustainability impact |
Lack of understanding & control | Liability & compliance |
EFFECTS The consequences identified by Nordic businesses that arise as effects of the risks |
Non-compliance (effect)
• Legal and compliance risks (for example with industry standards, GDPR, the EU AI Act)
• Copyright infringement
Reputational damage (effect)
• Loss of trust
• Negative brand reputation
Loss of control / power imbalance (effect)
• Power and control imbalance (for example market oligopolies)
• Loss of control
Business & financial loss (effect)
• Loss of income
• Loss of intellectual property
• Regulatory fines
• Loss of innovation & competitive advantage
Adverse sustainability impact (effect)
• Long-term societal implications (for example on human rights, democracy, propaganda, misinformation)
• Negative environmental impact
AI Policy AI policy which defines what AI is and how it can be deployed and used across the organization | Risk Classification Standardized process for defining the risk level of individual Gen AI systems, based on predefined risk categories | Leadership Sponsorship Clear commitment from leadership to foster a culture which promotes ethical and responsible deployment and use of AI | Industry Sandboxes and Toolkits Participation in industry sandboxes to learn from peers, and leverage existing guides and toolkits provided by the ecosystem |
Gen AI Council Gen AI Council or Board that reviews and accepts Gen AI models and use-cases | Responsible AI-by-Design Documented guidance on how to integrate ethical and responsible AI requirements during each AI lifecycle stage | Upskilling and AI Literacy AI educational programs, AI certifications and AI awareness programs tailored for different roles | Vendor Collaboration Collaboration with AI-model providers, including upstream transparency requirements and data sharing agreements |
Accountability Framework Documented roles, responsibilities, and accountability structures and processes in relation to AI | Human-in-the-Loop Processes and guidance for reviewing and quality-checking AI outputs | Gen AI Community of Practice Educational and knowledge-sharing forums for practitioners to learn and share practical examples | Stakeholder Engagement Involvement of external stakeholders to understand the broader impact of AI on society and the environment |
AI System Inventory Central oversight of all deployed (Gen) AI systems (applies to all AI, not just Generative AI) | Post-deployment Monitoring Process to continuously review deployed AI systems and monitor for significant changes | Culture Building on RAI Establish regular AI focus groups and document organizational values on responsible AI |
Data Loss Prevention Data encryption, data masking/redaction, antivirus software, data loss prevention software | Prompt Engineering Best practices for creating prompts, balancing specificity with openness to optimize prompt effectiveness | Monitoring System Network failure monitoring, output monitoring, error monitoring | Technical Sandbox & Toolkits A protected technical environment that allows developers and engineers to test software or systems |
Sensitivity Labels & Access Controls Classification of data assets based on sensitivity, with clear access rights | Fine-tuning Adapting pre-trained models to specific tasks or use-cases to achieve higher accuracy | Record-Keeping Techniques and tools to automatically record events and enable traceability | AI Frameworks Open-source and proprietary frameworks to architect, train, validate and deploy AI systems |
AI Model & Data Cards Documentation format providing standardized information to downstream users – a ”nutrition label” for AI models | Grounding the Model Setting parameters, settings and boundaries for what is accurate behavior for the intended use | Harmful Content Classification Techniques and methodologies for reviewing generated content to flag harmful or toxic content | Governance Platform & Toolkit A platform or toolkit that allows organizations to direct, manage, and monitor AI activities in accordance with internal policy |
AI Model Explainability Techniques and methodologies for improving the explainability of AI outputs, for example decision trees, LIME, SHAP | Guardrails Technical guardrails that limit the types of user prompts that can be made | System Deactivation A tool that can deactivate or disable an entire system or certain features or services |
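The guardrail and data loss prevention mitigations above can be combined into a simple pre-prompt filter: refuse prompts touching disallowed topics, and redact likely sensitive identifiers before anything reaches the model. This is a minimal sketch; the topic blocklist and the email-redaction pattern are hypothetical examples, not a production-grade filter.

```python
import re

# Illustrative prompt guardrail: block disallowed topics and redact
# likely sensitive identifiers before the prompt reaches the model.
# Blocklist and patterns are assumptions for this sketch only.

BLOCKED_TOPICS = ("credentials", "exploit")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def apply_guardrails(prompt: str) -> tuple[bool, str]:
    """Return (allowed, sanitized_prompt)."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return False, ""  # refuse the prompt entirely
    return True, EMAIL.sub("[REDACTED]", prompt)  # mask email addresses

ok, sanitized = apply_guardrails("Summarize feedback from anna@example.com")
```

Real deployments typically layer such filters with model-based harmful-content classifiers and output monitoring rather than relying on keyword matching alone.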