4. The Impact of the Proposed EU AI Act
This section assesses how the proposed EU regulation on artificial intelligence (the AI Act) will supplement national administrative law and to what extent it will (sufficiently) alleviate the challenges we have identified. Specifically, it explores the impact of the AI Act from two perspectives: first, how the Act addresses the challenges concerning human rights protection, and second, how it aids in overcoming the barriers to AI adoption by public agencies.
4.1 The Impact of the Proposed AI Act in Strengthening Human Rights Protection
Section 3.1 evaluated the current national legal framework concerning AI adoption by public agencies and the protection of citizens from AI-related harms. Challenges remain in effectively safeguarding citizens' rights in the specific context of digitalisation. This has been highlighted by the Commission for Data Protection, especially with respect to data protection and privacy. However, this overarching weakness in the national framework extends to other areas as well. In this regard, the discussion in section 3.2 showed the limitations of existing laws in addressing the new discrimination harms associated with AI systems.
The AI Act could be pivotal in addressing many of these concerns. The proposed Act is geared towards promoting human-centric AI, ensuring that AI is developed with respect for human dignity, that fundamental rights are upheld, and that AI systems are secure and trustworthy. Central to the AI Act is the principle that AI should be designed and developed with full regard for human dignity and fundamental rights, such as privacy, data protection, and non-discrimination. Furthermore, the AI Act emphasizes the creation of AI that is safe, secure, and robust: AI systems should be designed to mitigate risks of errors or biases and to remain transparent and interpretable for users. Additionally, the Act mandates rigorous testing and evaluation of AI systems to confirm their reliability and safety.
The proposed AI Act adopts a risk-based approach, categorizing AI systems into four risk levels: (1) 'unacceptable risk' (leading to prohibited practices), (2) 'high risk' (triggering a set of stringent obligations, including a conformity assessment), (3) 'limited risk' (with associated transparency obligations), and (4) 'minimal risk' (where stakeholders are encouraged to follow codes of conduct). The classification depends on the potential risk posed to health, safety, and fundamental rights.
Most of the prohibited practices concerning AI usage are directed at public agencies. These encompass the use of real-time biometric identification and social scoring. Similarly, most of the stand-alone high-risk AI applications concern public agencies' use of AI in the following areas: access to and enjoyment of essential services and benefits; law enforcement; migration, asylum, and border management; and the administration of justice and democratic processes. Clearly, the public administration sector is under scrutiny, and many of these provisions aim to enhance the protection of individuals from harms within this domain.
Examining the prohibited practices, the AI Act addresses two primary categories of AI systems used by public agencies. The first is the use of real-time biometric identification by public agencies for law enforcement purposes. While biometric identification includes fingerprints, DNA, and facial features, the prohibition notably emphasizes facial recognition technology. A system that would fall under this prohibition might be an expansive CCTV network on public streets integrated with facial recognition software. The deployment of such systems has significant ramifications for individual rights, including data protection, privacy, freedom of expression, and protection against discrimination. Facial recognition technology can process and analyse multiple data streams in real time, enabling large-scale surveillance of individuals and thereby compromising their rights to privacy and data protection. The pervasive nature of this surveillance can also affect other foundational rights, such as freedom of expression and non-discrimination. The omnipresence of surveillance tools may inhibit individuals from voicing their opinions freely: people tend to self-censor and alter their behaviour when they feel overly surveilled. Similarly, the negative impact of AI-driven surveillance is often felt most acutely by marginalized groups in the population. Thus, strengthening existing safeguards against potential harms from facial recognition technology is vital.
Another prohibited practice pertinent to public administration is social scoring. The AI Act prohibits public authorities from employing AI systems to generate 'trustworthiness' scores, which could potentially lead to unjust or disproportionate treatment of individuals or groups. This prohibition seems inspired by the Chinese Social Credit System, in which the government assigns scores to citizens and businesses based on various factors, including financial creditworthiness, compliance with laws and regulations, and social behaviour. These scores can then be used to reward or sanction individuals or entities. China's Social Credit System has sparked widespread concerns about human rights violations. To derive these social credit scores, the system gathers comprehensive data on citizens, and this broad data collection infringes on individuals' right to privacy. Moreover, the system might penalize individuals for online expressions or shared content, thereby potentially stifling freedom of speech. There is also concern that the system exacerbates social inequality: those with lower scores might struggle to secure jobs or rent properties, and they could even be subject to public humiliation. Thus, the safeguards against the use of real-time biometric identification and social scoring undoubtedly complement national laws protecting privacy and non-discrimination, including those in Norway.
Indeed, Norwegian law already imposed certain restrictions on AI use by public agencies even before the introduction of the AI Act. Existing laws prevent public agencies from making specific decisions using AI. A prime example is the limited scope of the NAV Act, Article 4 a. While this provision is meant to facilitate automated decision-making, it does not facilitate the use of AI technologies: it prevents NAV from using fully automated decision-making except in cases where the applicable criteria leave no room for discretion and the outcome of the decision is obvious. This is grounded in the belief that methods capable of automating decisions that rely on more discretionary criteria (i.e., in practice, advanced AI systems) present 'a greater risk of unjust and unintended discrimination.'
In contrast, while the AI Act categorizes AI systems intended for these purposes as high-risk systems, it permits the placement of such systems on the market. Hence, a certain tension arises between the legal framework in Norway and the AI Act’s ambition for harmonization. While Norwegian law does not permit certain uses of AI in the public sector due to concerns about the risks of discrimination (among other concerns), the AI Act assumes that these risks are sufficiently addressed if the requirements pertaining to high-risk AI systems are complied with. There may be good reasons for limiting the use of AI systems through national legislation, but it is worth questioning whether such limitations remain justified when they rely on risks that are addressed by the AI Act. Going forward, we would advise Norwegian legislators to consider this aspect of the relationship between the AI Act and national legislation.
Many AI systems pertinent to the public administration sector fall under the AI Act's high-risk category. This includes, for example, public agencies' use of AI in distributing benefits, making decisions in immigration and border control, law enforcement, and infrastructure management. In this context, the requirements for conducting risk assessments, ensuring human oversight, maintaining data quality, and adhering to cybersecurity standards will bolster protection against potential harms. These obligations are especially significant for a country like Norway, with its vast public administration sector and comprehensive social safety net. Given this context, AI could play a pivotal role in the government's initiatives to modernize and optimize the welfare system. The discussion in section 1, detailing implemented and planned projects, highlights the use of AI in automating decisions related to citizenship applications, NAV's ongoing project to leverage AI in predicting the duration of sick leaves, and Lånekassen's use of AI in processing student loan applications. Similarly, many of the ongoing AI projects in the health sector would also qualify as high-risk AI systems. In this context, the above-mentioned requirements for high-risk AI systems are crucial in strengthening the protection of human rights. For instance, requirements to assess the relevance and representativeness of data can mitigate potential biases embedded in datasets, and requirements on human oversight and involvement can help public agencies detect and rectify potential biases. While reflecting overarching rights and values that are protected by general provisions in Norwegian law, these legal requirements address AI technologies and their associated risks at a level of specificity currently not found in the Norwegian framework.
The Dutch welfare scandal serves as a stark example of public agencies deploying AI systems without essential safeguards. The system was notoriously opaque: when the non-profit organization 'Bij Voorbaat Verdacht' requested insight into the software's evaluation criteria for welfare abuse, the government countered that disclosing such information might aid potential wrongdoers. The absence of human oversight was glaringly evident, as even minor omissions in filling out a form led to high-risk classifications. The provisions of the AI Act on risk assessment, transparency, and human oversight could likely have averted or lessened the repercussions of this scandal.
In Norway, a report by the Data Protection Authority highlighted that the Norwegian Tax Authority has developed a predictive tool to aid in the selection of tax returns for potential discrepancies or tax evasion. This tool is built through a comprehensive analysis of data, encompassing details such as current and previous year deductions, age, financial specifics such as income and assets, and individual tax return elements. Notably, the Tax Authority admitted that they 'don't necessarily know what it is that gives a taxpayer a high ranking for risk. The ranking is the result of complex data aggregation in the model.' The AI Act, particularly its requirements concerning transparency and human oversight, is expected to influence the deployment of such systems.
The obligations for high-risk AI systems introduced by the AI Act also complement and address some of the gaps present in the GDPR. One significant area where the AI Act provides additional clarity concerns decisions that, while not entirely automated, could have substantial impacts, such as credit scoring. As highlighted earlier, the study commissioned by the Commission for Data Protection underscores that process-driven decisions, like selections for inspections, can be so intrusive that they might equate to a 'decision' in their impact on an individual. However, the protections stipulated by the GDPR, especially Article 22(3), do not necessarily cover such uses of AI or profiling for inspection and fraud monitoring. The current Norwegian legislative framework is likewise oriented towards automated decision-making while paying less attention to AI-supported decision-making. In contrast, the AI Act appears to offer a broader scope of protection and safeguards for AI systems employed in the distribution of public benefits, which arguably encompasses the use of AI in areas like fraud detection and monitoring.
Despite this, many civil society organizations, including Amnesty International and Human Rights Watch (HRW), have criticized the Act's inadequate human rights safeguards, especially in light of governments' increasing use of AI to deny or limit access to lifesaving benefits and other social services, which exacerbates existing concerns over inequality and the digital divide. For instance, HRW conducted a detailed study on the AI Act's impact on the distribution of social security and highlighted the following:
'While the EU regulation broadly acknowledges these risks, it does not meaningfully protect people’s rights to social security and an adequate standard of living. In particular, its narrow safeguards neglect how existing inequities and failures to adequately protect rights – such as the digital divide, social security cuts, and discrimination in the labour market – shape the design of automated systems and become embedded by them.'
This is partly related to the narrow focus of the prohibitions and the high-risk categories. Consider, for instance, the mounting evidence over recent years about the potential dangers of biometric identification. The prohibition in this domain appears so narrowly defined that its relevance is debatable. First, it targets only 'real-time' systems that can capture, compare, and identify individuals 'instantaneously, near-instantaneously, or without a significant delay.' This leaves out 'post' systems, which may analyse biometric data after an event, such as retrospectively identifying individuals present at protests. Second, the prohibition is restricted to biometric identification used by public authorities for law enforcement purposes. This means it does not cover the use of remote biometric identification for non-law-enforcement purposes, such as authentication for social welfare. This limitation is particularly concerning given the rising use of facial recognition technology by public agencies to provide public benefits.
HRW has documented how various governments use facial recognition to verify the identities of those applying for welfare benefits. A case in point is the national welfare office in Ireland, the Department of Employment Affairs and Social Protection (DEASP). The Irish Council for Civil Liberties questioned the DEASP's extensive personal data collection for identity verification, challenging the necessity of analysing facial images when simpler methods, such as passport and address verification, could suffice. Furthermore, substantial research underscores the racial and gender biases inherent in facial recognition technology. For example, a 2018 study from MIT revealed that commercial facial recognition systems from leading tech giants like IBM and Microsoft demonstrated significantly higher accuracy when identifying white males than when identifying women or individuals with darker skin tones. Such inaccuracies in the technology, when used by law enforcement, have led to a number of wrongful arrests, predominantly of people of colour. Similarly, the use of such systems for identity verification for social security purposes heightens the risk of discrimination. However, because of the narrow scope of the prohibition in the AI Act, the use of facial recognition technology in social welfare settings is not addressed or restricted.
Similarly, the prohibition on 'trustworthiness' scoring seems to target 'general purpose' scoring systems in which public authorities generate a single score that can be applied across various contexts, such as deciding whether individuals can board a plane, obtain a loan, or secure certain jobs. However, this focus on 'general purpose' scoring systems overlooks the potential harms arising from the growing reliance on scoring systems in welfare fraud detection, such as the Dutch SyRI. As noted above, the Norwegian Tax Authority uses AI to detect tax evasion. Even though such systems are specifically designed for detecting fraud and might not fall under the prohibition, they can still have severe human rights implications. For instance, these systems may erroneously flag individuals as fraud risks or deprive them of necessary support. Consequently, there are calls for broader protection in this domain.
Indeed, the use of facial recognition technology, as well as the application of AI for distributing public benefits, falls under the high-risk category. This implies that both fraud detection systems, like the Dutch SyRI, and facial recognition technology used for verifying identity in welfare would need to adhere to certain obligations. Yet, concerns persist regarding the adequacy of these safeguards in protecting individuals against the harms from high-risk systems in the context of social welfare.
A primary concern is that the bulk of the AI Act's obligations for high-risk systems are placed on the 'providers' of welfare technology rather than on the agencies that use them. Thus, while obligations like risk assessment, transparency, and human oversight apply when public agencies develop AI systems in-house, the responsibility shifts to the provider when agencies procure such tools off the shelf. This skewed distribution of regulatory responsibility means that harm caused by off-the-shelf technologies might not be as rigorously regulated, even when their impacts can be as profound as those caused by in-house software. This indicates that regulation of AI users could be an important area where national legislation and, potentially, regional legislative cooperation could supplement the AI Act. In particular, public procurement regulation emerges as a crucial avenue for ensuring the protection of rights and values when AI is purchased by the public sector.
Relatedly, the obligations for high-risk applications overlook systemic issues. While the requirement for establishing a data governance framework, which mandates that the data used to train AI systems be relevant and representative, might help mitigate discrimination arising from biased data, it does not tackle the systemic concerns ingrained in both the systems and their human overseers. The Dutch welfare scandal is a poignant illustration: the system's deployment, which predominantly targeted impoverished neighbourhoods, was discriminatory by design. Similarly, the extensive exemptions from transparency requirements for law enforcement and migration control authorities could obstruct accountability for AI systems, posing significant threats to individual rights. For instance, providers are expected to disclose 'electronic instructions for use' that explain the underlying logic of how a system functions and the limitations of its performance, including known or foreseeable risks of discrimination and other risks to fundamental rights. However, the Act stipulates that this information 'shall not be provided in the areas of law enforcement and migration, asylum, and border control management.' Consequently, there is a risk that vital information about a wide array of law enforcement technologies that might affect human rights – including criminal risk assessment tools and 'crime analytics' software analysing vast datasets to identify suspicious behaviour patterns – will remain concealed.
To address these concerns, there are recommendations to mandate human rights impact assessments throughout the entire lifecycle of high-risk systems when public agencies deploy AI in distributing public benefits. This encompasses scenarios where public agencies purchase high-risk AI systems from third parties or make significant modifications to the operations of such acquired systems that heighten or introduce human rights risks. Furthermore, many civil society organizations have underscored the importance of empowering individuals and public interest groups to lodge complaints and pursue remedies for damages caused by these systems. The identified gaps highlight opportunities for national, Nordic, and Baltic region initiatives to supplement the AI Act's measures in enhancing fundamental rights.
4.2 The Impact of the Proposed AI Act in Enabling Public Agencies’ Use of AI
In addition to the measures that strengthen human rights, the AI Act contains provisions that facilitate the use of AI by public agencies. Notable examples include provisions that permit the processing of sensitive personal data to scrutinize AI systems for potential discrimination and the introduction of regulatory sandboxes. While the provision on using sensitive data for testing appears to be a measure to strengthen human rights protection, it can also be seen as an enabler of digitalisation efforts, because it establishes a legal basis for the use and reuse of data for testing, which is currently a significant hurdle for public agencies implementing AI.
As highlighted in section 3, the National AI Strategy recognizes the significant constraints posed by regulatory restrictions on repurposing existing data for AI development, including testing. This is evidenced by the NAV sandbox example, in which the Data Protection Authority determined that NAV required a specific legal basis to utilize data for AI training. Similar reservations have been voiced regarding AI systems assisting in email archiving. Although the Authority conceded that public agencies might invoke GDPR Article 6(1)(c) in conjunction with specific provisions under the Archive Act, the Regulations Relating to Public Archives, and the Freedom of Information Act, such provisions do not explicitly provide a legal basis for an algorithm's continuous learning. In both cases, the Authority advocated for the anonymization of personal data prior to its use in training or refining algorithms.
Additionally, the NAV AI sandbox illustrates some of the tensions between data protection and fairness, where detecting and counteracting discrimination requires more processing of personal, often sensitive, information about individuals. The AI Act does resolve some of these problems: Article 10(5) adds an exception, beyond those listed in GDPR Article 9(2), to the prohibition on processing such data. However, the exception only applies to high-risk AI systems and allows the processing of special categories of personal data only to the extent strictly necessary for the purposes of ensuring bias monitoring, detection, and correction. Importantly, this provision does not allow the use of data for training purposes, which is the first hurdle in public agencies' adoption of AI. Thus, whether a more widely applicable legal basis for training, bias monitoring, and the avoidance of discrimination is needed is a question that legislators should assess at the national level.