
The Benefits of Adopting Ethical and Responsible AI


Nordic Views on the Benefits of Ethical and Responsible AI

In a survey conducted by Accenture and the Re:humanize Institute in 2023, 204 business leaders from Denmark, Finland, Sweden, and Norway were asked about their company’s most important reasons for adopting ethical and responsible AI practices.
The top three reasons were:

1. To enable adoption of new technology
2. To be more commercially successful
3. To meet customer expectations
Accenture, Re:humanize Institute, Impact leadership in the age of Generative AI Survey Results, 2024 (see methodology section) https://rehumanizeinstitute.org/wp-content/uploads/2024/01/Nordic-Responsible-Business-Report.pdf
Interviews held with different actors from the Nordic AI ecosystem for this report confirm this view. Many organizations are actively preparing to implement the requirements of the EU AI Act, even though their current use of AI would most likely be classified as low risk. Companies believe that looking beyond short-term regulatory compliance will benefit them by keeping them prepared for other emerging regulations on data, AI, or sector-specific laws. Additionally, many interviewed businesses state that they see further benefits from adopting ethical and responsible AI practices, such as meeting the expectations of their customers, having a positive impact on society, and building better, more in-demand AI products for their customers.
Companies interviewed for this report state that they typically take a cautious approach when implementing AI, starting with low-risk areas and internal applications to test the technology before scaling and making it customer-facing. This holds especially true for companies exploring generative AI use cases.

“We have implemented guardrails for putting AI in production. As a company operating and using critical infrastructure, we are by definition a high-risk category company. We want to ensure trust in our business internally and in society when we apply AI.”

—Patrick Blomquist
Head of Responsible AI at Equinor
In the same 2023 survey mentioned above, companies across Denmark, Finland, Sweden, and Norway also expressed a need for more responsible AI practices as adoption of generative AI increases
Accenture, Re:humanize Institute, Impact leadership in the age of Generative AI, 2024 https://rehumanizeinstitute.org/wp-content/uploads/2024/01/Nordic-Responsible-Business-Report.pdf
. The interviews conducted for this report highlighted the need for robust AI governance, humans-in-the-loop, and upskilling employees on the correct use of generative AI solutions. Many also mentioned concerns about the impact on the workforce, both in terms of displacement effects and the health aspects of rapidly introducing new internal AI tools, which can cause stress for employees.

“We have a model committee that assesses and gives approval of generative AI tools, as well as provide company-wide trainings on AI, re-skilling staff in generative AI best practices and risks. Being one of the largest banks in Iceland and a major employer, we need to consider the broader effects our business practices have on our culture and country.”

—Riaan Dreyer
Chief Digital Officer at Islandsbanki

“Trust is fundamental to how we leverage AI in our business. Journalists will not reference our news and media unless they can trust that it comes from a reliable source. We include clear communication to the users of our platform when AI has been used to generate content. There are also guardrails built into the platform for those publishing AI-generated press releases or other content, to ensure that there has been a human review of the output.”

—Louise Barnekow
CEO at Mynewsdesk

“We need to ensure that our use of AI serves the intended purpose. As a recruitment company, we are tasked with finding and recommending the best suitable candidate. We often get asked by customers how we ensure diversity and inclusion in our processes, and the same will apply for our use of AI. Over time, our customers will likely put more pressure on us to use AI responsibly.”

—Johannes Setänen
Director of Services Sector and Marketing at Bolt.works
Interviewed companies also consider how their use of AI can have broader societal impact, especially those whose business operations play a major role in society. These companies recognize that adopting ethical and responsible AI practices – for example by ensuring that AI outputs are accurate and fair, and that users are informed about the use of AI – is fundamental for them to succeed with AI investments.

Benefits of Ethical and Responsible AI

Combining the findings from the interviews conducted for this report with global research on ethical and responsible AI, the following benefits have been identified. It should be noted that the benefits listed below do not necessarily apply to all organizations equally. Different organizations will experience varying outcomes from adopting ethical and responsible AI practices, depending on factors such as industry, product, customer segment, business model(s), country of origin, and size.

Enable faster adoption of new technologies

Help ensure compliance

Contribute to building trust

Support industrialization of AI

Facilitate positive impact

Ethical and responsible AI practices can enable faster adoption of new technology and help realize the full business value of AI by preventing potential risks from materializing, which could otherwise halt the development of AI solutions and use cases

Responsible-by-design: The use of AI can help businesses save costs and generate growth, improving their financial performance and competitive advantage. To capture this value, businesses need to deploy AI at scale. A study performed by Accenture examined companies that had successfully generated business growth and positive outcomes from deploying AI, with the purpose of investigating their behaviors and defining the key performance indicators for generating value from AI. Among several factors, these businesses are responsible-by-design, meaning that they recognize the importance of incorporating ethical and responsible considerations into their data and AI strategies and across the full lifecycle of AI models. A similar study conducted by McKinsey in 2021 concluded that organizations seeing the highest return from AI engage in risk mitigation practices when developing AI, for example through AI governance, measuring data and model bias, and maintaining robust technical documentation. A responsible-by-design approach integrates ethical considerations into the inception of technology innovation. This proactive strategy can help prevent potential risks from materializing, which could otherwise have a negative effect on the business. Historically, there have been instances where businesses have launched AI-enabled products or services that were later pulled off the market due to risks materializing, as with the example of a recruitment tool that showed bias against women
Reuters, Insight: Amazon scraps secret recruitment tool that showed bias against women, 2018 https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G/
or a media website that paused the use of an AI tool after finding that more than half of the AI generated articles contained errors
The Verge, CNET found errors in more than half of its AI-written stories, 2023 https://www.theverge.com/2023/1/25/23571082/cnet-ai-written-stories-errors-corrections-red-ventures
. By not considering a responsible approach from the start, companies risk financial, reputational, or operational damage and may consequently lose out on value creation from AI. With a responsible-by-design approach, businesses have the potential to generate more business value from their AI investments, at a faster pace.

Ethical and responsible AI practices can help ensure compliance with regulatory and industry standards on AI, so that organizations can avoid costs of non-compliance and gain access to global markets

Avoid Cost of Non-Compliance: Adopting ethical and responsible AI practices when developing and deploying AI can help ensure proactive compliance with legal requirements on data and AI. This involves a comprehensive understanding of relevant legal frameworks, including for example data protection regulation, AI regulation, and sector-specific regulation. Regular legal reviews, conducted in collaboration with legal experts, serve as essential checkpoints to ensure that AI initiatives align with legal standards. By integrating legal considerations into the development process, businesses can enable robust legal safeguards, reducing the risk of non-compliance, which can have costly effects (e.g., as per the latest version of the EU AI Act, non-compliance can result in fines of up to 7% of global annual turnover
European Parliament, Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI, 2023 https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai
).
Global Market Access: The legal landscape for AI is evolving: many countries and regions are developing their own AI regulations and guidelines (see section about Global Approaches to AI Regulation). Adopting ethical and responsible AI, for example through robust AI governance, risk management, and controls along the AI lifecycle, can not only contribute to compliance with existing regulations but also position a business to adapt swiftly to new regulations as they emerge. This can be crucial for businesses seeking to expand globally. Given the direction of regulatory developments on AI, taking a risk-based approach would help prepare for larger regulatory initiatives such as the EU AI Act and make adapting to the specific needs of new jurisdictions easier.

Ethical and responsible AI practices can contribute to building trust with key stakeholders such as customers and employees, which could support increased use and adoption of AI products and services, as well as help attract and retain top talent

Customer Trust: People globally are becoming more nervous about products and services that utilize AI. In a global survey on views of AI, the percentage of respondents expressing nervousness about AI grew from 40% in 2021 to 52% in 2023. This could partly be explained by the proliferation of AI tools and the media attention on AI during 2022 and 2023. However, this attitude towards AI-enabled products and services creates a risk to user adoption, with potential negative implications for the businesses developing them. In a global survey from 2023 covering over 11,000 consumers, 68% of customers said that advances in AI make it more important for companies to be trustworthy, while just over half currently trust companies to use AI ethically. With an ethical and responsible approach to AI, businesses are better equipped to address the AI risks that concern their customers. Having robust governance and processes in place to mitigate AI risks can help generate customer trust, especially if claims about risk mitigation can be supported by evidence. When asked what would deepen customer trust in AI, the top three factors mentioned by global consumers were greater visibility into businesses’ use of AI, human validation of AI outputs, and more customer control.
Talent Attraction & Retention: Just as ethical and responsible AI can have positive effects on customer trust in AI, it can also help build trust with employees and attract top talent. In the 2023 version of McKinsey’s State of AI report, survey respondents predicted that the adoption of AI will reshape many roles in the workforce over the next three years: 43% predicted that the total number of employees will decrease by at least 3%, and 8% said they think their workforce will decrease by more than 20% in the next three years. Adopting an ethical and responsible approach to AI in the workplace, one that takes into consideration the concerns expressed by employees, could be a way to preserve employee trust and, as a result, both attract and retain talent. This could, for example, be done by adopting Responsible AI Governance underpinned by principles that employees have contributed to defining, and by engaging the workforce in shaping the plan for how AI should be leveraged across the organization.

Ethical and responsible AI practices can support industrialization of AI by creating standardized processes and practices that can be scaled across the organization, contributing to operational efficiency

Standardized Processes and Practices: Ethical and responsible AI involves establishing governance, processes, and best practices for the development of AI solutions, actions that often accompany industrializing AI development to create economies of scale. Standardizing processes and procedures can help bring consistency to how AI is applied across different projects and departments, how risks are identified and assessed, what tests should be performed, and what mitigation strategies to apply. This can bring transparency and efficiency to the development of AI and to the process of identifying and managing AI risks, potentially shortening time-to-market for new AI solutions. Interviewed Nordic businesses also highlighted the potential for ethical and responsible AI to support rapid development and scaling of AI initiatives.
By having standardized processes, frameworks, and methodologies, development teams can save time by not having to identify or create their own versions. This approach also enables comparison across different AI projects, which can contribute to a better understanding of which strategies are most effective and help identify process improvements.

Ethical and responsible AI practices can contribute to positive impact on the environment, people, and society from the use of AI

Environmental Sustainability: An ethical and responsible approach to AI can contribute to positive environmental impact through two mechanisms. Firstly, AI can be used as a tool to achieve positive impact on the environment. A 2020 review of evidence indicated that AI may act as an enabler for 25 (93%) of the targets within the Environment group of the Sustainable Development Goals (SDGs)
Vinuesa, R., Azizpour, H., Leite, I. et al., The role of artificial intelligence in achieving the Sustainable Development Goals, 2020 https://www.nature.com/articles/s41467-019-14108-y
.
Secondly, training AI models involves significant computational power, requiring vast amounts of energy. Many factors determine the amount of carbon emitted by AI systems, for example the number of parameters in a model, the power usage effectiveness of a data center, and the grid carbon intensity
AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Artificial Intelligence Index Report 2023, 2023 https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf
. A responsible approach to AI that also involves assessing the environmental effect of different design choices enables organizations to direct funding toward more environmentally friendly options, for example regarding model choice, fine-tuning, and re-training.
Social Sustainability: There is also potential to use AI for the benefit of people and societies. 67 targets (82%) within the Society group of the SDGs are estimated to potentially benefit from the use of AI
Vinuesa, R., Azizpour, H., Leite, I. et al., The role of artificial intelligence in achieving the Sustainable Development Goals, 2020 https://www.nature.com/articles/s41467-019-14108-y
. There are many definitions of ethical and responsible AI, but in most definitions from larger governance bodies or institutions, AI’s impact on people and planet is central to the concept. The OECD AI Principles stipulate, for example, that AI should among other things deliver “(..) beneficial outcomes for people and the planet”
OECD.AI, AI Principles Overview, 2019 https://oecd.ai/en/ai-principles
. The EU High-Level Expert Group’s Ethics Guidelines for Trustworthy AI state that one of the key requirements is that AI systems should address societal and environmental wellbeing
EU High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, 2019 https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html
. AI can, for example, be leveraged as a tool to improve equality in communication and media coverage. Swiss media company Ringier uses AI to measure how men and women are represented in the company’s media coverage. This helps them bring awareness to imbalances and enables informed decisions that improve representation.
World Association of News Publishers, Switzerland’s Ringier uses AI for gender-equal reportage, 2021 https://wan-ifra.org/2021/06/switzerlands-ringier-uses-ai-for-gender-equal-reportage/
This is one example of how AI can serve as an enabler for creating positive impact on people’s lives and society at large, if used for the right purposes.