
Global State of Ethical and Responsible AI, Emerging Regulation, and Their Implication on Businesses


Accelerating AI Capabilities and Increasing Stakeholder Demands

The last decade has seen exponential growth in AI capabilities. In some areas, such as language and image recognition, the capabilities of AI systems are now comparable to those of humans.
World Economic Forum, The history of AI systems and how they might look in the future, 2022 https://www.weforum.org/agenda/2022/12/how-ai-developed-whats-next-digital-transformation/
2022 was a breakthrough year for generative AI, with the explosive growth of generative AI tools. Generative AI can increase efficiency in a wide variety of processes, as well as support the creation of innovative new products and services, such as virtual assistants. It is estimated that generative AI could add the equivalent of 2.6 to 4.4 trillion US dollars annually to the global economy, which is comparable to the size of Germany’s gross domestic product (GDP) in 2023.
International Monetary Fund, World Economic Outlook Database: October 2023, 2023 https://www.imf.org/en/Publications/WEO/weo-database/2023/October/weo-report
Businesses recognize the potential benefits AI can bring. For example, 78% of global executives see scaling AI as a top priority for their data strategy. Yet in 2022, only 6% of global businesses stated that they had implemented responsible AI.
In parallel with the increasing capabilities of AI, awareness is growing of the potential negative implications of the technology. The Ipsos Global Views on AI 2023 report found that people are becoming more nervous about products and services that utilize AI (up from 40% in 2021 to 52% in 2023). 36% also stated that they are worried about AI replacing their jobs. Even AI professionals themselves are raising questions about the accelerating capabilities of AI. In March 2023, more than 1,000 technology leaders and researchers signed an open letter urging AI labs to pause the development of the most advanced systems for six months, warning that AI tools could present “profound risks to society and humanity”.
Future of Life Institute, Pause Giant AI Experiments: An Open Letter, 2023 https://futureoflife.org/open-letter/pause-giant-ai-experiments/
The increase in public awareness of AI risks is putting pressure on governments and industry to steer the development and use of AI responsibly.

Global Approaches to AI Regulation

For Nordic businesses operating globally, or using AI models developed by a third party, it is important to understand potential divergences and alignments in global approaches to AI and AI governance. Governments across the globe are fiercely debating the positive and negative implications of regulating AI.
Harvard Business Review, Who Is Going to Regulate AI?, 2023 https://hbr.org/2023/05/who-is-going-to-regulate-ai
Approaches range from flexible and voluntary to comprehensive and legally binding. Many of the approaches being developed are structured around a risk-based assessment of an AI product or service, with the context-specific use case as a key consideration. In parallel, different regulatory approaches are being explored for foundation models and generative AI due to their broad application in society.
Below follows a summary of selected national and intergovernmental approaches to AI governance. Note that this is only a small selection of global regulatory initiatives on AI, last updated in early January 2024, and should not be interpreted as an exhaustive list of all regulations on data and AI.

European Union

The EU’s approach to AI “centers on excellence and trust, aiming to boost research and industrial capacity, while ensuring safety and fundamental rights”.
European Commission, European Approach to Artificial Intelligence https://digital-strategy.ec.europa.eu/en/library/g7-leaders-statement-hiroshima-ai-process
The European Commission has created three inter-related legal initiatives that aim to contribute to building trustworthy AI: a legal framework for AI
European Commission, Regulatory framework proposal on artificial intelligence, 2024 https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
, a civil liability framework, and a revision of sectoral safety legislation (e.g. the Machinery Regulation
European Commission, Proposal for a Regulation of the European Parliament and of the Council on machinery products, 2021 https://ec.europa.eu/docsroom/documents/45508
and the General Product Safety Directive).
EUR-Lex, Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on general product safety, amending Regulation (EU) No 1025/2012 of the European Parliament and of the Council, and repealing Council Directive 87/357/EEC and Directive 2001/95/EC of the European Parliament and of the Council, 2021 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0346
European Commission, A European approach to artificial intelligence, 2023 https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
The legal framework for AI, the EU Artificial Intelligence (AI) Act, takes a risk-based approach and places requirements on developers and deployers of AI based on the level of risk. There are four risk levels: low or minimal risk, limited risk, high risk, and unacceptable risk
EUR-Lex, Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS, 2021 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
The EU AI Act is expected to become the first comprehensive regulatory scheme on AI globally. In early December 2023, the European Parliament and the Council reached a provisional agreement on the EU AI Act. The agreement stipulates that specific guardrails are to be included for general-purpose AI systems, with a two-tiered approach that places stringent obligations on providers of models categorized as having systemic risk under certain criteria. The provisional agreement provides that the EU AI Act will follow a staggered application timeline (6, 12, 24, or 36 months depending on the use case) after its entry into force, once the Council and the European Parliament have voted on the final text.
European Parliament, Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI, 2023 https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai
In January 2024, the Council of the EU’s Committee of Permanent Representatives voted to advance the Act, to be followed by a vote in the Parliament in spring 2024. The technical standards supporting the AI Act will be developed by the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC).

Germany

In addition to its role in shaping the EU AI Act, the German government has established initiatives that focus on upholding Germany’s role as a research hub, enhancing the competitiveness of German industry, and enabling the application of AI across all sectors of society. The primary aim is to generate tangible societal advancements that maximize benefits for individuals and the environment.
European Commission, Germany AI Strategy Report, 2021 https://ai-watch.ec.europa.eu/countries/germany/germany-ai-strategy-report_en
For example, Germany has created a data ethics commission to build ethical guidelines, as well as regulatory sandboxes to enable innovation and the advancement of regulation.

France

France’s approach to AI is similar to Germany’s. In addition to contributing to the EU AI Act, France focuses on improving the country’s AI education and training to develop and attract top AI talent, with the aim of becoming world leading in AI innovation and research. The French Data Protection Agency (CNIL) and the French government are collaborating on several initiatives to advance the development of ethical AI. Examples include a personal data sandbox program for digital health-related projects
CNIL, “Sandbox” of personal data: the CNIL supports 12 projects in the field of digital health, 2021 https://www.cnil.fr/fr/bac-sable-donnees-personnelles-la-cnil-accompagne-12-projets-dans-le-domaine-de-la-sante-numerique
, a National Committee for Ethical AI, and the Digital Republic Bill covering data protection rights and the right to privacy for French citizens.

United States of America

In 2023, President Joe Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which establishes a direction for federal AI regulation going forward.
US White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 2023 https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
The goal of the executive order is to improve AI safety and security, promote transparency and fairness, and foster innovation and competitiveness.
Additionally, in December 2023, U.S. Representatives introduced the AI Foundational Model Transparency Act. Specifically, the Act calls on the Federal Trade Commission (FTC), in consultation with the National Institute of Standards and Technology (NIST), the Copyright Office, and the Office of Science and Technology Policy (OSTP), to set transparency standards for foundation model providers. Information identified for increased transparency includes, for example, training data, model training mechanisms, and whether user data is observed or estimated through predictions from a sample (collected at inference).
The US government has already developed a voluntary standard on AI risk management, published by NIST.
National Institute of Standards and Technology, AI Risk Management Framework, 2023 https://www.nist.gov/itl/ai-risk-management-framework
At the state level, domain-specific AI regulations have been introduced. In the 2023 legislative session, at least 25 states introduced AI bills, and 15 states adopted resolutions or enacted legislation.
National Conference of State Legislatures, Artificial Intelligence 2023 Legislation, 2023 https://www.ncsl.org/technology-and-communication/artificial-intelligence-2023-legislation

United Kingdom

The UK government has proposed creating a principles-based framework for regulators with domain expertise to interpret and apply to AI within their remits. This approach makes use of regulators’ domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used. The UK is also actively driving the conversation about standards for global AI governance. In November 2023, 29 countries met at the UK AI Safety Summit and signed the Bletchley Declaration, which promotes the development of human-centric AI and encourages international collaboration.
UK Department for Science, Innovation & Technology, Foreign, Commonwealth & Development Office, UK Prime Minister’s Office, The Bletchley Declaration by Countries Attending the AI Safety Summit, 2023 https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute
Simultaneously, the UK launched the AI Safety Institute to advance ethical AI testing and research.
UK Department for Science, Innovation and Technology, Introducing the AI Safety Institute, 2023 https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute

Canada

Canada has taken a similar approach to the EU by adopting a risk-based approach to AI. In 2022, the Artificial Intelligence and Data Act (AIDA) was proposed. The draft regulation places requirements on private sector organizations to ensure the safety and fairness of high-impact AI systems.
Government of Canada, Artificial Intelligence and Data Act, 2023 https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act
The Canadian government has also introduced a voluntary code of conduct for generative AI systems until formal regulation is in place.
Government of Canada, Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, 2023 https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems

China

China has introduced several regulations on algorithms and AI: regulation on recommendation algorithms in 2021, rules for deep synthesis (synthetically generated content) in 2022, and, in 2023, draft measures for the management of generative AI. Information control is a central goal in all three regulations, but they also place other requirements on users or providers of AI systems. All three regulations require AI developers to register their solutions in an algorithm registry, a government repository that gathers information on how algorithms are trained, and to pass a security self-assessment.
Carnegie Endowment for International Peace, China’s AI Regulations and How They Get Made, 2023 https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117
More specifically, the Interim Measures for the Management of Generative AI Services, which came into effect in August 2023, aim to protect national interests, social values, user rights, and personal information, and to prevent the dissemination of false or harmful content. The regulation also states that AI-generated content must be in accordance with Chinese law. Providers of generative AI services are responsible for ensuring the legality of such content, and violations are subject to penalties.
PwC, Tiang & Partners, China’s Interim Measures for the Management of Generative Artificial Intelligence Services officially implemented, 2023 https://www.pwccn.com/en/tmt/interim-measures-for-generative-ai-services-implemented-aug2023.pdf

Singapore

Singapore’s approach to AI focuses on creating a collaborative industry ecosystem, through frameworks and guidance on how industry actors can make use of AI while upholding ethical principles and ensuring trustworthy innovation. In 2023, Singapore launched its second National AI Strategy (NAIS 2.0), outlining an ambition to build a trusted and responsible AI ecosystem.
Smart Nation Singapore, National Artificial Intelligence Strategy 2 to Uplift Singapore’s Social and Economic Potential, 2023 https://www.smartnation.gov.sg/media-hub/press-releases/04122023/
As part of Singapore’s strategy to increase the use of AI and strengthen the AI ecosystem, the Monetary Authority of Singapore (MAS) began partnering with industry actors in 2018 to develop guiding principles that promote the responsible use of AI in the financial services sector.
Monetary Authority of Singapore, MAS Introduces new FEAT Principles to Promote Responsible use of AI and Data Analytics, 2018 https://www.mas.gov.sg/news/media-releases/2018/mas-introduces-new-feat-principles-to-promote-responsible-use-of-ai-and-data-analytics
Since then, the collaboration has resulted in frameworks, methodologies, and tools that support industry actors in adopting the guiding principles.
Monetary Authority of Singapore, MAS-led Industry Consortium Published Assessment Methodologies for Responsible Use of AI by Financial Institutions, 2022 https://www.mas.gov.sg/news/media-releases/2022/mas-led-industry-consortium-publishes-assessment-methodologies-for-responsible-use-of-ai-by-financial-institutions
Monetary Authority of Singapore, Veritas report from methodologies to integration, 2023 https://www.mas.gov.sg/-/media/mas/news/media-releases/veritas-document-5---from-methodologies-to-integration.pdf
In 2023, MAS launched Project MindForge, an initiative aimed at creating a risk framework for the use of generative AI in the financial sector.
Monetary Authority of Singapore, MAS Partners Industry to Develop Generative AI Risk Framework for the Financial Sector, 2023 https://www.mas.gov.sg/news/media-releases/2023/mas-partners-industry-to-develop-generative-ai-risk-framework-for-the-financial-sector

India

India’s approach to AI focuses on strengthening the country’s AI research, with special attention to sectors of local significance, such as agriculture.
NITI Aayog, National Strategy for Artificial Intelligence, 2018 https://www.niti.gov.in/sites/default/files/2019-01/NationalStrategy-for-AI-Discussion-Paper.pdf
The NITI Aayog, the Indian government’s public policy think tank, has put forward two approach documents to support the operationalization of responsible AI
NITI Aayog, Part 2 Operationalizing Principles for Responsible AI, 2021 https://www.niti.gov.in/sites/default/files/2021-08/Part2-Responsible-AI-12082021.pdf
and principles for responsible AI, based on the nation’s AI strategy. In 2022, the Indian government also introduced the Digital India Act (DIA) to establish a legal framework for the country’s digital ecosystem; the Act is expected to roll out after the 2024 Lok Sabha elections.
The Hindu, Digital India Act to address disbalance between digital news publishers and big tech platforms: Rajeev Chandrasekhar, 2024 https://www.thehindu.com/sci-tech/technology/digital-india-act-to-address-disbalance-between-digital-news-publishers-and-big-tech-platforms-mos-rajeev-chandrasekhar/article67822466.ece
The key elements of the Act include ensuring online safety, building trust and accountability, maintaining an open internet, and regulating new-age technologies such as AI. The Act proposes to safeguard innovation by defining and regulating high-risk AI systems.
Indian Ministry of Electronics and Information Technology, Digital India, Proposed Digital India Act, 2023 https://www.meity.gov.in/writereaddata/files/DIA_Presentation%2009.03.2023%20Final.pdf

Japan

Japan has taken a flexible approach to regulating AI. In its AI strategy, Japan aims to become a digitalized AI society that contributes to solving global challenges.
CAO, Overview of AI Strategy 2022, 2022 https://www8.cao.go.jp/cstp/ai/aistratagy2022en.pdf
There are currently no hard restrictions on AI models; however, there are several guidelines for how organizations should develop and adopt AI with regard to governance
METI, Call for public comments on “AI Governance Guidelines for Implementation of AI principles Ver. 1.0” Opens, 2021 https://www.meti.go.jp/english/press/2021/0709_004.html
, research
The Conference toward AI Network Society, Draft AI R&D Guidelines for International Discussions, 2017 https://www.soumu.go.jp/main_content/000507517.pdf
, and the utilization of AI.
The Conference toward AI Network Society, AI Utilization Guidelines, 2019 https://www.soumu.go.jp/main_content/000658284.pdf
The Japanese Liberal Democratic Party, which has been Japan’s ruling party for most of the past 50 years,
Britannica, Liberal-Democratic Party of Japan, 2024 https://www.britannica.com/topic/Liberal-Democratic-Party-of-Japan
released a white paper in April 2023 suggesting a new AI regulatory approach for Japan to stay competitive. The white paper primarily focuses on legal measures around violations of human rights, national security, and the democratic process.

Brazil

Brazil has also introduced risk-based AI regulation. In 2023, Bill No. 2338/2023 was introduced, with obligations varying depending on the level of risk of the AI system. The bill also focuses on rights given to individuals: for example, the right to information about prior interaction with AI systems; the right to an explanation of the outputs of AI systems; and the right to non-discrimination and the correction of discriminatory biases.

Global AI Governance

Governments are also collaborating on global AI governance. In October 2023, the G7 leaders announced an agreement on a set of international guiding principles for AI and a voluntary code of conduct for AI developers.
European Commission, G7 Leaders’ Statement on the Hiroshima AI Process, 2023 https://ec.europa.eu/commission/presscorner/detail/en/ip_23_5379
Moreover, in the 2023 New Delhi Leaders’ Declaration, the G20 countries decided to pursue a pro-innovation regulatory and governance approach that maximizes the benefits and takes into account the risks associated with the use of AI.
India’s G20 Presidency, G20 New Delhi Leaders’ Declaration, 2023 https://www.consilium.europa.eu/media/66739/g20-new-delhi-leaders-declaration.pdf
Similarly, in 2023, the UN formed an AI advisory body of 38 experts from across the globe who will collaborate to analyze technological developments and advance recommendations on AI governance.
UN, AI Advisory Body, 2024 https://www.un.org/en/ai-advisory-body

Non-governmental Organizations and Standardization

Non-governmental organizations are also involved in shaping global standards and best practices for AI. The OECD has, for example, created a platform for tracking developments and encouraging international collaboration on AI governance. Furthermore, the OECD has developed tools, frameworks, recommendations, and principles to support governments in integrating responsible AI in their countries.
OECD.AI, Artificial Intelligence, 2023 https://www.oecd.org/digital/artificial-intelligence/
In 2021, the United Nations Educational, Scientific and Cultural Organization (UNESCO) produced the first global standard on the ethics of AI, which was adopted by all 193 member states.
UNESCO, Ethics of Artificial Intelligence, 2024 https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
The World Economic Forum (WEF) has also established the AI Governance Alliance, an initiative that unites industry leaders, governments, academic institutions, and civil society organizations to champion the responsible global design and release of transparent and inclusive AI systems. The aim of the initiative is to promote the adoption of safe AI systems, sustainable applications, and resilient governance and regulation. This also includes accelerating the development of ethical guidelines and governance frameworks for regulating generative AI.
WEF, World Economic Forum Launches AI Governance Alliance Focused on Responsible Generative AI, 2023 https://www.weforum.org/press/2023/06/world-economic-forum-launches-ai-governance-alliance-focused-on-responsible-generative-ai/
In parallel with legal developments on AI, standardization of AI is maturing. In December 2023, the International Organization for Standardization (ISO) released ISO/IEC 42001, an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations, supporting the responsible use and development of AI systems.
ISO, ISO/IEC 42001:2023 Artificial Intelligence – Management System, 2023 https://www.iso.org/standard/81230.html
Another example is CertifAIEd, a certification program offered by the Institute of Electrical and Electronics Engineers (IEEE) for assessing the ethics of Autonomous Intelligent Systems (AIS) to help protect, differentiate, and grow product adoption.
IEEE, IEEE CertifAIEd – The Mark of AI Ethics, 2024 https://engagestandards.ieee.org/ieeecertifaied.html

Nordic AI Regulation

In a survey of businesses from Denmark, Finland, Sweden, and Norway, 90% of respondents believed that AI regulations will have an impact on them in the future, and 41% believed regulations will have a great impact on how their operations are shaped.
Accenture, From AI compliance to Competitive Advantage, 2022 (Data extracted from respondents from Denmark, Finland, Sweden, Norway) https://www.accenture.com/content/dam/accenture/final/a-com-migration/r3-3/pdf/pdf-179/accenture-responsible-by-design-report.pdf
Nordic businesses are affected by regulations imposed by national governments, as well as by EU law, which applies to the Nordic countries that are EU members and, in the case of Iceland and Norway, through the European Economic Area (EEA). Denmark, Finland, Sweden, Norway, and Iceland have developed national strategies for AI and initiated activities to operationalize them.
OECD.AI, Policies, data and analysis for trustworthy artificial intelligence, 2024 (See country dashboards and data collected) https://oecd.ai/en/
Some regulations on AI already exist in the Nordic countries. Most are sector-specific, but Denmark introduced an industry-agnostic AI regulation in 2020, focused on the disclosure of data ethics policies by large enterprises. In Finland, the government enacted the Act on Automated Decision-making in Public Administration in 2023. However, this regulation does not apply to AI, which is instead expected to be regulated primarily by the upcoming EU AI Act.
Suomi.fi, Legislative challenges for public administration: AI legislation now and in the future, 2023 https://kehittajille.suomi.fi/guides/responsible-ai
Another example of national, sector-specific regulation is Norway’s regulation on the testing of self-driving vehicles, which safeguards road safety and privacy.
Lovdata, Self-Driving Vehicle Testing Act, 2018 https://lovdata.no/dokument/NL/lov/2017-12-15-112
Denmark, Finland, Sweden, Norway, and Iceland have also introduced several policy and soft-law initiatives on AI. Selected examples include the Norwegian Artificial Intelligence Research Consortium (NORA), which promotes ethical AI in the country.
Norwegian Artificial Intelligence Research Consortium, NORA Strategy 2023-2026, 2023 https://www.nora.ai/about/strategy.html
AI Sweden has piloted an AI ethics lab that provides guidance and support on implementing ethical AI development.
AI Sweden, AI Ethics Lab, 2021 https://www.ai.se/en/project/ai-ethics-lab
In Denmark, the Data Ethics Council provides national guidance on ethical dilemmas associated with data and AI developments. Both Norway’s and Iceland’s Data Protection Authorities have also launched consultative AI sandboxes to stimulate compliance and innovation in AI, helping to bridge the gap between regulators and innovators.
Datatilsynet, Regulatory privacy sandbox, 2023; Persónuvernd, „Sandkassi“ sem öruggt umhverfi fyrir þróun ábyrgrar gervigreindar, 2022 https://www.datatilsynet.no/en/regulations-and-tools/sandbox-for-artificial-intelligence/
Norway and Iceland, both outside EU jurisdiction, implemented the version of the GDPR incorporated into the EEA Agreement shortly after the GDPR entered into force in 2018. For an EU act to apply to the EEA EFTA states Iceland, Norway, and Liechtenstein, the EEA Joint Committee must adopt a decision to incorporate the act into the EEA Agreement. The aim is to incorporate EU acts as close as possible to their date of entry into force in the EU, to ensure that the same rules apply throughout the EEA.
EEA EFTA, How EU Law Becomes EEA Law, 2023  https://www.efta.int/eealaw
If the EU AI Act is incorporated into the EEA Agreement, it is likely that the Act will impact Icelandic and Norwegian companies similarly to other EU-based businesses.
The Faroe Islands are also not part of the EU; instead, the formal relationship between the EU and the Faroe Islands is based on three separate bilateral agreements covering fisheries, trade in goods, and scientific and technological cooperation.
European Union External Action, The European Union and the Faroe Islands, 2021 https://www.eeas.europa.eu/eeas/european-union-and-faroe-islands_en
The Faroe Islands have implemented data protection regulation similar in its principles, rights, and obligations to the GDPR.
Faroe Islands Prime Minister’s Office, Act on the protection of personal data (Data Protection Act), 2020 https://dat.cdn.fo/media/opccxh1q/act-on-the-protection-of-personal-data-data-protection-act-act-no-80-on-the-7-june-2020.pdf?s=LA6IqXBchs1Ryn1Kp9h3KSPuFog
Greenland, an autonomous territory of the Kingdom of Denmark, is one of the Overseas Countries and Territories (OCTs), which are not directly subject to EU law. Greenland has implemented its own data protection regulation but shares the same regulator as Denmark (Datatilsynet).
Datatilsynet, Anordning om ikrafttræden for Grønland af lov om behandling af personoplysninger, 2016
Finally, Åland, a self-governing province located between Finland and Sweden, is part of the EU, but its relationship with the EU is regulated by a special protocol.
Nordic Co-operation, About Åland, 2023 https://www.norden.org/en/information/about-aland
Åland has also implemented the GDPR.
Datainspektionen Åland, EU:s dataskyddsförordning (GDPR), 2023 https://www.di.ax/lagstiftning-och-vagledning/eus-dataskyddsforordning-gdpr
There has been an increase in global regulatory activity on AI, and many of the approaches being developed are structured around a risk-based assessment of an AI product or service, with the context-specific use case as a key consideration. Different measurement approaches, including safety-oriented evaluation to support mitigating associated risks, are also being explored for foundation models and generative AI due to their broad application. Countries, institutions, and organizations are trying to determine how foundation model governance might integrate with other forms of risk-oriented AI governance; however, these efforts have not yet resolved some of the worries and issues that come with applying generative AI in practice.