
Robust & Secure AI

Chapter Introduction

Nordic perspectives on challenges and mitigation strategies related to ensuring robust and secure AI

Background

  • This chapter is based on the results of the third workshop and webinar in the Nordic Ethical AI Sandbox, which took place in April 2024.
  • AI systems come with different vulnerabilities, hacking threats, and operational complexities compared to other software systems. This necessitates attention at several levels of organizations and businesses.
  • This chapter explores Nordic perspectives on challenges related to robustness and security when developing and adopting AI solutions, and what strategies can be applied to mitigate these. Each use-case comes with its unique set of challenges, and these recommendations should therefore be considered as a starting point for further analyses.

What this chapter will help organizations with

  • Understanding the phases of the AI lifecycle and their influence on AI robustness and security
  • Identifying key challenges related to ensuring robustness and security of AI systems
  • Understanding what methods and strategies exist to mitigate challenges related to AI robustness and security

Robust and secure AI

New vulnerabilities emerge with the adoption of complex machine learning models. The robustness and security of AI models pose challenges distinctly different from those associated with traditional technologies.

Robustness

  • AI robustness is the ability of an AI system to maintain its performance level under varying conditions, including unexpected conditions.
  • This relates to the system’s sensitivity to minor changes in its inputs, in some cases changes imperceptible to humans.
  • Such changes are typically caused by one of these phenomena:
    • Minor differences in data collection, for example in medical imaging, where different installations and/or equipment from different vendors produce outputs with minor differences
    • Minor (or major) changes in the statistical properties of the input data, for example caused by seasonal factors
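
To make the second phenomenon concrete, the sketch below probes how much a model's accuracy degrades when the test inputs receive small perturbations. It is a minimal illustration only: the scikit-learn dataset, model choice, and noise level are assumptions made for demonstration, not workshop recommendations.

```python
# Minimal sketch: probing a model's sensitivity to small input perturbations.
# The dataset, model, and noise scale are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Add small Gaussian noise scaled to each feature's standard deviation,
# mimicking minor differences in data collection (e.g. different equipment).
rng = np.random.default_rng(0)
noise = rng.normal(0, 0.05 * X_test.std(axis=0), size=X_test.shape)
perturbed = model.score(X_test + noise, y_test)

print(f"accuracy on clean test data:     {baseline:.3f}")
print(f"accuracy on perturbed test data: {perturbed:.3f}")
```

A large gap between the two scores is an early signal that the model may not be robust to the minor data-collection differences described above.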

AI Security

  • The deployment of AI models introduces new threat surfaces and requires new defence methods compared to traditional IT technologies
  • Attacks are typically designed to manipulate machine learning models into:
    • Extracting sensitive information from the data that the model has been trained on. This could lead to the disclosure of sensitive business or personal data.
    • Producing erroneous outputs, manipulating the models into results that are harmful to the deployer and/or beneficial to the attacker
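
As an illustration of the first attack class, the sketch below runs a crude membership-inference-style check: it compares the confidence a model assigns to records it was trained on against records it has never seen. The dataset, model, and confidence measure are illustrative assumptions; a real assessment would use dedicated tooling and explicit attack models.

```python
# Minimal sketch of a membership-inference-style check: compare the model's
# confidence on records it was trained on versus unseen records. A large gap
# suggests the model leaks information about its training data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_out, y_train, y_out = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

def mean_confidence(model, X, y):
    # Average probability the model assigns to the true class.
    proba = model.predict_proba(X)
    return float(np.mean(proba[np.arange(len(y)), y]))

print("confidence on training records:", round(mean_confidence(model, X_train, y_train), 3))
print("confidence on unseen records:  ", round(mean_confidence(model, X_out, y_out), 3))
```

A noticeably higher confidence on training records indicates memorisation, which an attacker could exploit to infer whether a given record was part of the training data.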

AI lifecycle process flow

The AI lifecycle describes the different phases from the inception of an AI system to its operation, and is an iterative process.
[Lifecycle diagram: the phases Define, Prepare, Train, Test, Deploy, Sustain, and Maintain run across the Scoping, Training, Testing, and Production environments, with update and iteration loops feeding back into earlier phases.]
The Definition phase is important for scoping the intended use and setting internal requirements (e.g., for performance and availability), but also for making sure potential fairness issues (if applicable) are addressed properly.
The Training environment contains data management, data preparations, and model building activities (including algorithm selection, training, and optimization).
The role of the Testing environment is to test data quality, model performance, and prediction uncertainty, to decide whether or not to deploy, and to deploy models to production if tests pass.
The Production environment contains the activities required for reliable operation: monitoring, logging, model update logic, drift detection and countermeasures, provision of a robust inference service, and mechanisms for failure recovery.
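
As one concrete example of drift detection in the Production environment, the sketch below compares a recent window of input data against a reference sample taken at training time, feature by feature, using a Kolmogorov-Smirnov test. The synthetic data, window sizes, and significance threshold are assumptions chosen to keep the example self-contained.

```python
# Minimal sketch of input-drift detection: compare the distribution of
# incoming features against a reference (training-time) sample with a
# Kolmogorov-Smirnov test. Thresholds and window sizes are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=(5000, 3))          # training-time snapshot
live = rng.normal(loc=[0.0, 0.4, 0.0], scale=1.0, size=(1000, 3))   # recent production window

for feature in range(reference.shape[1]):
    stat, p_value = ks_2samp(reference[:, feature], live[:, feature])
    drifted = p_value < 0.01  # illustrative significance threshold
    print(f"feature {feature}: KS={stat:.3f}, p={p_value:.4f}, drift={'yes' if drifted else 'no'}")
```

When drift is flagged, the countermeasures mentioned above (retraining, recalibration, or withdrawing the model) can be triggered through the model update logic.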

Managing robustness and security challenges

Nordic companies explored challenges and pitfalls related to robustness and security along the different phases of the AI lifecycle.

Challenges identified from workshop insights

Scoping ↓
Training ↓
Testing ↓
Production ↓
  • Lack of clarity regarding the goal and accessible data, including sensitivity levels
  • Benefits of the AI use-case for different user groups, and the associated value levers, not well understood
  • Unclear value of leveraging AI compared to other existing approaches
  • Lack of clear business case analysis:
    - Identifying the challenge to be solved
    - Determining if AI is the appropriate tool for the challenge
  • Lack of explainability of the model and its decisions
  • Fairness based on factors like location and occupation not sufficiently evaluated
  • Lack of trustworthy training data, due to unreliable data sources, bias, etc.
  • Unreasonable expectations for model accuracy
  • Poor handling of sensitive data sharing and consent issues
  • Highly automated model training processes exclude human judgement from the training phase
  • Lack of target group involvement in the model setup process
  • Retention of unnecessary data
  • Testing efforts not sufficiently prioritized
  • Lack of budgets for red teaming
  • Models lacking robustness to environmental effects
  • Model developers not detecting combinations causing underperformance, leading to blind spots
  • Unclear criteria for "go/no-go" decisions for production deployment
  • Lack of control and specifications of the data on which results are based
  • Data changing meaning or context over time
  • Overly specialized model will not be applicable in other business contexts
  • Operations teams not following an established governance process
  • Adversarial inputs
  • Understanding of when to reassess privacy and risk not shared by all stakeholders, especially with changes to models and data
  • Over-reliance on the model's output as being superior to other analyses
  • Lack of human operators who can competently infer issues and intervene when necessary
Source: Nordic Ethical AI Sandbox Workshop #3
Note: These are aggregated results from the workshop and do not necessarily apply to all participating organizations. The list should not be considered exhaustive.

Robustness and security mitigation methods

Nordic companies explored solutions and mitigation methods relevant in different phases of the AI lifecycle.

Recommendations based on workshop insights

Scoping ↓
Training ↓
Testing ↓
Production ↓
ORGANIZATIONAL
  • Define clear business case
  • Define AI risk categories/thresholds 
  • Establish procurement guidelines
  • Conduct fundamental rights impact assessments
  • Ensure effective monitoring
  • Consider manual moderation
  • Implement a change control board
  • Communicate clear expectations of use and reliability
  • Consider relevant organizational reporting lines
  • Discuss potential responsibilities to withdraw the AI system from use
TECHNICAL
  • Ensure effective human oversight to implement risk screening
  • Consider techniques such as poison filters and differential privacy, as sketched below
  • Remove unneeded attributes in the data engineering process
  • Investigate the relevance of using synthetic data
  • Document model transparency 
  • Find model blind spots (underperforming combinations)
  • Discuss minimum requirements for quality and fairness
  • Facilitate the regular monitoring of adversarial actions
LEGAL & POLICY
  • Agree on minimum requirements for quality and fairness
  • Create AI-relevant procurement practices, including how to mitigate risks in contractual terms
  • Create an AI incident policy (what to do if something goes wrong)
Source: Nordic Ethical AI Sandbox Workshop #3
Note: These are aggregated results from the workshop and do not necessarily apply to all participating organizations. The list should not be considered exhaustive.
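
To ground two of the technical mitigations above, the sketch below shows data minimisation (dropping attributes the use-case does not need) and a Laplace-noised aggregate in the spirit of differential privacy. The column names, privacy budget, and sensitivity bound are illustrative assumptions; production-grade differential privacy requires careful accounting and dedicated libraries.

```python
# Minimal sketch of data minimisation and a differentially private aggregate.
# Columns, epsilon, and the sensitivity bound are illustrative assumptions.
import numpy as np
import pandas as pd

records = pd.DataFrame({
    "age": [34, 45, 29, 52],
    "postcode": ["0150", "5003", "2100", "0255"],     # not needed for the model
    "national_id": ["xxx1", "xxx2", "xxx3", "xxx4"],  # sensitive, never needed
    "income": [420_000, 510_000, 380_000, 610_000],
})

# Data minimisation: keep only the attributes the use-case actually requires.
features = records.drop(columns=["postcode", "national_id"])

# Differentially private mean income via the Laplace mechanism.
epsilon = 1.0                            # privacy budget (assumed)
sensitivity = 1_000_000 / len(records)   # assumed bound on one record's influence on the mean
noisy_mean = features["income"].mean() + np.random.laplace(0, sensitivity / epsilon)
print(f"noisy mean income: {noisy_mean:.0f}")
```

Removing unneeded attributes at the data-engineering stage also reduces what an attacker could extract from the model, complementing the privacy-preserving noise on published aggregates.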

Key takeaways for ensuring robust and secure AI

Workshop participants considered the following factors to be critical for ensuring robust and secure AI.
Overarching factors
  • Task Definition: Defining tasks as simply as possible aids in clarity and effective execution
  • Human Factors: Considering the long-term impact of AI and ML on humans, including discussing human factors in deployment, is crucial for ethical AI implementation
  • Complexity and Mitigation: Addressing the complexity of problem solving, and getting it right through both mitigation measures and the involvement of domain experts, is essential
  • Framework and Sensitivity: Understanding the sensitivity of sharing how models work in the real world and iterating models based on real-world performance and failures helps in refining and improving AI systems
  • Sector Challenges: Discussing common challenges across different sectors, such as health and maintenance models, highlights the universality of certain issues and facilitates shared learning
Data and training
  • Data Supply: Ensuring access to the right data for the project is vital. Validating data availability is a crucial initial step
  • Training Data Issues: Focusing on cleaning and resolving issues with training data is fundamental to developing reliable models
Testing
  • Standardized metrics: Utilizing standardized metrics (based on standards from ISO or other standardization organizations) to measure success in AI and ML applications is important for benchmarking, as shown in the sketch below
  • Robustness: Exploring the robustness of models, including testing for vulnerabilities and potential hacking, is necessary for maintaining security
  • Validation and Stress Testing: Conducting thorough validation and stress testing of models ensures they can perform reliably under various conditions
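
As a minimal illustration of standardized reporting, the sketch below prints a confusion matrix and the common precision/recall/F1 metrics for a classifier's predictions. The labels are dummy values; in practice they come from a held-out test set, and the chosen metrics should follow whatever benchmark or standard the organization has agreed on.

```python
# Minimal sketch of standardized evaluation reporting so results can be
# benchmarked across models and over time. Labels below are dummy values.
from sklearn.metrics import classification_report, confusion_matrix

y_true = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]   # ground truth (illustrative)
y_pred = [0, 1, 1, 1, 0, 0, 1, 0, 1, 1]   # model predictions (illustrative)

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=["negative", "positive"]))
```
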
Production
  • Deployment: Safety and mitigation of failure modes during deployment are essential. Establishing governance processes for both deployment and solution phases helps maintain oversight and accountability
  • Usage Drift: Monitoring and addressing how AI and ML models may deviate from their intended use over time ensures relevance and accuracy
Source: Nordic Ethical AI Sandbox Workshop #3
Note: These are aggregated results from the workshop and do not necessarily apply to all participating organizations. The list should not be considered exhaustive.