This report uses the phrases “ethical and responsible AI” and “ethical and responsible AI practices” to describe an approach that organizations developing, deploying, and using AI technologies and applications can take. Nordic Innovation and this report define these terms as follows:
AI: This report references the updated OECD definition, which offers a broad and flexible description of AI. Because the AI field is moving at an extremely fast pace, the definition of what constitutes AI evolves with technological development over time.
Ethical and responsible AI: Refers to the design, development, deployment, and operation of AI in alignment with ethical principles and requirements, and to the questions that accompany how AI impacts its surroundings (e.g. humans, society, flora, fauna, and the planet). In practical terms, ethical and responsible AI can be realized, for example, by upholding the seven key requirements for trustworthy AI set forth by the European Commission and its High-Level Expert Group on AI in 2019:
Human agency and oversight
Technical robustness and safety
Privacy and data governance
Transparency
Diversity, non-discrimination and fairness
Societal and environmental well-being
Accountability
Ethical and responsible AI practices: Refers to the organizational structures and steering mechanisms, processes, activities, tools, and methods that organizations apply to achieve ethical and responsible AI, as exemplified by the seven key requirements for trustworthy AI listed above. These practices can be multi-disciplinary, covering, for example, data science, governance, legal, risk management, data security, learning and development, and company culture.