European Union
The EU’s approach to AI “centers on excellence and trust, aiming to boost research and industrial capacity, while ensuring safety and fundamental rights”. The European Commission has created three inter-related legal initiatives that aim to contribute to building trustworthy AI: a legal framework for AI, a civil liability framework, and a revision of sectoral safety legislation (e.g. the Machinery Regulation and the General Product Safety Directive). The legal framework for AI, the EU Artificial Intelligence (AI) Act, takes a risk-based approach, placing requirements on developers and deployers of AI according to the level of risk. There are four risk levels: low or minimal risk, limited risk, high risk, and unacceptable risk. The EU AI Act is expected to become the first comprehensive regulatory scheme on AI globally. In early December 2023, the European Parliament and the Council reached a provisional agreement on the EU AI Act. The agreement stipulates that specific guardrails are to be included for general-purpose AI systems, with a two-tiered approach that places stringent obligations on providers of models categorized as posing systemic risk under certain criteria. The provisional agreement provides that the EU AI Act will follow a staggered application calendar (6, 12, 24, or 36 months depending on the use case) after its entry into force, following adoption of the final text by the Council and the European Parliament. In January 2024, the Council of the EU’s Committee of Permanent Representatives voted to advance the Act, to be followed by a vote in the Parliament in spring 2024. The technical standards supporting the AI Act will be developed by the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC).
Germany
In addition to Germany’s role in contributing to the EU AI Act, the German government has established initiatives that focus on upholding Germany’s position as a research hub, enhancing the competitiveness of German industry, and enabling the application of AI across all sectors of society. The primary aim is to generate tangible societal advancements that maximize benefits for individuals and the environment. Germany has, for example, created a data ethics commission to build ethical guidelines, as well as regulatory sandboxes to enable innovation and the advancement of regulation.
France
France’s approach to AI is similar to Germany’s. In addition to France’s role in contributing to the EU AI Act, France focuses on improving the country’s AI education and training to develop and attract top AI talent, with the aim of becoming world-leading in AI innovation and research. The French Data Protection Agency (CNIL) and the French government are collaborating on several initiatives to advance the development of ethical AI. Examples include a personal data sandbox program for digital health-related projects, a National Committee for Ethical AI, and the Digital Republic Bill covering data protection rights and the right to privacy for French citizens.
United States of America
In 2023, President Joe Biden signed the Executive Order on Safe, Secure, and Trustworthy Development and Use of AI, which establishes a direction for federal AI regulation going forward. The goal of the executive order is to improve AI safety and security, promote transparency and fairness, and foster innovation and competitiveness. Additionally, in December 2023, U.S. Representatives introduced the AI Foundational Model Transparency Act. Specifically, the Act calls on the Federal Trade Commission (FTC), in consultation with the National Institute of Standards and Technology (NIST), the Copyright Office, and the Office of Science and Technology Policy (OSTP), to set transparency standards for foundation model providers. Information identified for increased transparency includes, for example, training data, model training mechanisms, and whether user data is observed or estimated through predictions from a sample (collected at inference).
The US government has already developed a voluntary standard on AI risk management, published by the National Institute of Standards and Technology (NIST). At the state level, domain-specific AI regulations have been introduced. In the 2023 legislative session, at least 25 states introduced AI bills, and 15 states adopted resolutions or enacted legislation.
United Kingdom
The UK government has proposed creating a principles-based framework for regulators with domain expertise to interpret and apply to AI within their remits. This approach makes use of regulators’ domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used. The UK is also actively driving the conversation about standards for global AI governance. In November 2023, 29 countries met at the UK AI Safety Summit and signed the Bletchley Declaration, which promotes the development of human-centric AI and encourages international collaboration. Simultaneously, the UK launched the AI Safety Institute to advance ethical AI testing and research.
Canada
Canada has taken an approach similar to the EU’s, adopting a risk-based approach to AI. In 2022, the Artificial Intelligence and Data Act (AIDA) was proposed. The draft regulation places requirements on private sector organizations to ensure the safety and fairness of high-impact AI systems. The Canadian government has also introduced a voluntary code of conduct for generative AI systems, to apply until formal regulation is in place.
China
China has introduced several regulations on algorithms and AI: for example, regulation of recommendation algorithms in 2021, rules for deep synthesis (synthetically generated content) in 2022, and draft measures for the management of generative AI in 2023. Information control is a central goal in all three regulations, but they also impose other requirements on users or providers of AI systems. All three regulations require developers of AI to register their solutions in an algorithmic registry, a government repository that gathers information on how algorithms are trained, and to pass a security self-assessment. More specifically, the Interim Measures for the Management of Generative AI Services, which came into effect in August 2023, aim to ensure the protection of national interests, social values, user rights, and personal information, and to prevent the dissemination of false or harmful content. The regulation also states that AI-generated content must comply with Chinese law. Providers of generative AI services are responsible for ensuring the legality of such content, and violations are subject to penalties.
Singapore
Singapore’s approach to AI focuses on creating a collaborative industry ecosystem, through frameworks and guidance on how industry actors can make use of AI while upholding ethical principles and ensuring trustworthy innovation. In 2023, Singapore launched its second National AI Strategy (NAIS 2.0), outlining an ambition to build a trusted and responsible AI ecosystem. As part of Singapore’s strategy to increase the use of AI and strengthen the AI ecosystem, the Monetary Authority of Singapore (MAS) began partnering with industry actors in 2018 to develop guiding principles to promote the responsible use of AI in the financial services sector. Since then, the collaboration has resulted in frameworks, methodologies, and tools supporting industry actors in adopting the guiding principles. In 2023, MAS launched Project MindForge, an initiative aimed at creating a risk framework for the use of generative AI in the financial sector.
India
India’s approach to AI focuses on strengthening the country’s AI research, with special attention to sectors of local significance such as agriculture. The NITI Aayog, the Indian government’s public policy think tank, has put forward two approach documents, covering principles for responsible AI and how to operationalize them, based on the nation’s AI strategy. In 2022, the Indian government also introduced the Digital India Act (DIA) to establish a legal framework for the country’s digital ecosystem, which is expected to roll out after the 2024 Lok Sabha elections. The key elements of the Act include ensuring online safety, building trust and accountability, maintaining an open internet, and regulating new-age technologies such as AI. The Act proposes to safeguard innovation by defining and regulating high-risk AI systems.
Japan
Japan has taken a flexible approach to regulating AI. In the country’s AI strategy, Japan aims to become a digitalized AI society that contributes to solving global challenges. There are currently no hard restrictions on AI models; however, there are several guidelines on how organizations should approach the governance, research, and utilization of AI. The Japanese Liberal Democratic Party, which has been the ruling party in Japan for most of the last 50 years, released a white paper in April 2023 proposing a new AI regulatory approach to keep Japan competitive. The white paper primarily focuses on legal measures around violations of human rights, national security, and the democratic process.
Brazil
Brazil has also introduced risk-based AI regulation. In 2023, Bill No. 2338/2023 was introduced, with obligations varying depending on the level of risk of the AI system. The bill also focuses on rights granted to individuals: for example, the right to information about prior interaction with AI systems, the right to an explanation of the outputs of AI systems, and the right to non-discrimination and the correction of discriminatory biases.