
Accelerating Ethical and Responsible AI in the Nordics

Summary: accelerating ethical and responsible AI

The ethical and responsible AI journey will look different for each organization, depending on factors such as company size, capabilities, organizational complexity, and exposure to global markets and regulation.

Key takeaways for different types of organizations

Start-ups

For start-ups with few employees, ensuring responsible development, procurement and use of AI will likely be less about strict governance with dedicated forums, processes and controls. Instead, it will be critical to foster a culture from the ground up that puts ethical and responsible AI practices at the core of all AI operations. Making ethical and responsible AI a central part of the company’s core values and culture will enable all employees to feel accountability and responsibility for living up to those values.
Unless AI is part of the core business, start-ups are less likely to develop their own AI models; instead, they will rely on pre-built AI solutions. They should therefore focus on building knowledge about the vendor landscape and making informed decisions about which vendors to partner with, considering each vendor’s approach to responsible AI and use of data.
Small organizations with no legacy also have great potential to “do right” from the start, by adopting a responsible-by-design approach to AI.

SME

As organizations grow, it becomes more important to develop formal AI governance to maintain an overview of AI operations and potential risks. Building on existing principles, core values and a responsible AI culture can help enforce new policies, processes and controls. Ensure there is clarity on roles and responsibilities for ensuring ethical and responsible AI.
Empower employees to adopt a responsible approach to their use of AI by creating clear instructions and education material about the potential risks and limitations of AI tools. For AI that is being deployed by the organization, standardize best practices for system development and testing based on key responsible AI dimensions such as explainability, fairness and robustness. Additionally, adopt robust documentation practices for AI development.
As organizations grow, they might enter new markets and jurisdictions. Staying informed about new data and AI regulations will be essential to successful market entry.
Treat ethical and responsible AI as an ongoing, iterative process, continually assessing and improving practices within resource constraints.

Enterprises

Most Nordic enterprises are not AI companies, even though many rely on a strong digital and technical foundation. These organizations are likely to already have significant structures in place in terms of policies, governance and processes that affect how AI is developed, procured and used across the organization. Because of this, operationalizing ethical and responsible AI is likely to center on uplifting existing structures rather than creating new ones. Some organizations may need to create new teams, roles and capabilities, but implementing these requires working with what is already in place. Some responsible AI capabilities might be centralized, others federated. Creating a scalable operating model for ethical and responsible AI will be important.
A major challenge for large organizations is ensuring oversight of AI. Identifying all AI systems, screening them for risks, and keeping an AI inventory will be immediate actions, especially in light of the EU AI Act.
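As a rough illustration of what such an AI inventory could look like in practice, the sketch below records each system with an accountable owner and a risk tier loosely inspired by the EU AI Act's risk-based approach. The field names, tiers and example systems are illustrative assumptions, not a compliance template:

```python
from dataclasses import dataclass

# Illustrative risk tiers, loosely following the EU AI Act's
# risk-based approach (not an official classification).
RISK_TIERS = ("minimal", "limited", "high", "prohibited")

@dataclass
class AISystem:
    name: str       # e.g. "CV screening tool"
    owner: str      # accountable team or role
    vendor: str     # "in-house" or the external supplier
    risk_tier: str  # one of RISK_TIERS

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"Unknown risk tier: {self.risk_tier}")

def needs_review(inventory: list[AISystem]) -> list[AISystem]:
    """Flag systems that warrant immediate attention."""
    return [s for s in inventory if s.risk_tier in ("high", "prohibited")]

# Hypothetical inventory entries for illustration only.
inventory = [
    AISystem("chat assistant", "IT support", "external vendor", "limited"),
    AISystem("CV screening tool", "HR", "in-house", "high"),
]
flagged = needs_review(inventory)
print([s.name for s in flagged])  # ['CV screening tool']
```

Even a simple register like this makes it possible to answer the basic oversight questions — what AI is in use, who owns it, and which systems need risk review first — before layering on more formal governance tooling.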
Many large enterprises operate across multiple markets with varying legal landscapes. Continuously monitoring legal developments and harmonizing frameworks will be essential for multinational organizations.
Source: Aggregated results from all workshops in the Nordic Ethical AI Sandbox