Automation may endanger fundamental values in the EU. In a panel discussion on 28 January 2022, experts discussed how the use of AI by the EU administration affects individuals’ access to justice.
On 28 January 2022, the Department of Legal Studies (Central European University), the Europa Institute (Leiden University), and the ESIL Interest Group ‘The EU as a Global Actor’ organised and virtually hosted an Expert Panel Discussion on the topic ‘AI in the EU and Access to Justice’.
Reliance on artificial intelligence (AI) for assistance in decision-making is on the rise. While new technologies can help make EU bureaucracy more efficient, automating public power may also endanger fundamental values in EU law. These include not only substantive rights, such as data protection and non-discrimination, but also the rules and mechanisms that have evolved over decades to control public power, including the principle of legality, transparency, due process, the right to a reasoned decision, and accountability. The automation of government challenges and disrupts these models, often leaving the individuals affected in a vulnerable position.
The fundamental question the panel discussed was how the use of AI by the EU administration affects individuals’ access to justice. Do the current rules, models, and procedures set up in the EU legal system to ensure that individuals can meaningfully object to and challenge public power still work in the face of automation?
After an introduction by Mathias Möschel (Central European University) and Melanie Fink (Leiden University/Central European University), the discussion proceeded in two sessions chaired by Clara Rauchegger (University of Innsbruck) and Melanie Fink. Overall, the presentations focused on two central themes. The first concerned rights and fundamental principles particularly affected by the use of AI. The second concerned accountability for, and effective remedies against, administrative decisions that rely on AI technology.
Rights, vulnerability and empathy
Cameran Ashraf (Central European University/Wikimedia Foundation) kicked off the event with a presentation on AI and its impact on ‘non-charismatic rights’. These are human rights that typically draw less attention than rights such as privacy and freedom of expression. Ashraf focused on the freedom of association and the freedom of religion, two rights often overlooked in the context of AI, illustrating how the use of AI affects these rights and challenges individuals’ access to justice. One challenge, according to Ashraf, is that we often do not understand how AI works (the so-called ‘black box effect’), which limits our ability to comprehend the impact AI may have in this respect. Another challenge is that AI recommendations can simultaneously benefit some human rights while harming others. Lastly, Ashraf called attention to the importance of developing conceptual clarity and examining ‘non-charismatic rights’ before making assumptions about how AI interacts with these rights and with access to justice. Read more of his work on this topic here and here.
Sofia Ranchordás (University of Groningen/LUISS Guido Carli) spoke on vulnerability and empathy in the automated state. She explained that certain groups of citizens are doubly disconnected from digital government: (1) because technology may hold biases against them; and (2) because they may not be able to engage with technology. The second disconnect, she noted, has received little attention. This double disconnect creates ‘administrative vulnerability’. For example, in some jurisdictions, certain required transactions with the government can only be done digitally (such as applying for a birth certificate or a benefit). This added layer of digital technology placed on top of bureaucracy may not only cause vulnerable citizens to feel excluded, but may also affect their ability to exercise their fundamental rights. Ranchordás then introduced the concept of empathy in the digital administrative state as a potential solution, which could translate, for instance, into a right to make mistakes and expanded networks of digital assistance. She concluded that when it comes to digitisation, there is a need not just for human intervention, but for empathic human intervention. Her presentation was based on articles that can be found here and here.
Accountability and effective remedies
Madalina Busuioc (Vrije Universiteit Amsterdam) spoke about accountable AI. She focused on whether and under what circumstances human oversight can be an adequate safeguard against the harm and rights violations that may arise from the use of AI systems. Busuioc stressed the pressing need for accountable AI, given the transformative power of AI technologies for the public sector and the often high-stakes nature of the domains in which they are deployed. In her view, human oversight is crucial and one of the primary safeguards against algorithmic failures. However, human oversight can itself be affected by two types of cognitive bias that influence human decision-making: automatic adherence to AI advice (automation bias) and selective adherence, that is, biased deference to AI advice when algorithmic predictions correspond to pre-existing beliefs, biases and stereotypes. The latter, documented experimentally, highlights the potential negative effects of automating the administrative state for already vulnerable and disadvantaged citizens. She concluded by emphasising the importance of investigating how decision-makers interact with AI and what biases arise in that interaction, as a crucial step towards understanding our limitations and devising effective oversight measures. Otherwise, human oversight risks becoming an empty safeguard, incapable of preventing potential AI harm. Read more of her work on this topic here and here.
Tamás Molnár (Fundamental Rights Agency) gave the last presentation, on algorithmic decision-making in the EU and the challenges it poses to ensuring effective legal remedies, specifically in the area of migration and security. He explained that discussions have intensified at the EU level in recent years surrounding the potential use of AI-driven technology in these areas. On the one hand, if carefully conceived, monitored and implemented, AI can speed up asylum processes, protect against fraud, uncover related abuses, and improve authorities’ ability to quickly identify the vulnerabilities of migrants and asylum seekers. On the other hand, Molnár explained that AI technologies challenge Article 47 of the EU Charter of Fundamental Rights, which enshrines the right to an effective remedy. Being aware that one’s personal data is processed with the support of AI is a precondition for exercising that right. This is particularly challenging due to the ‘black box effect’ and the frequent lack of access to information about the AI technologies or algorithms used in specific processes. Molnár then presented some suggestions to address these challenges, such as empowering judiciaries and ensuring equality of arms between the victim and the defendant, for example by shifting the burden of proof.
Conclusion
The Expert Panel Discussion on ‘AI in the EU and Access to Justice’ provided much food for thought and inspiration for further research. The EU administration's use of AI in decision-making poses considerable risks to access to justice, fundamental rights, the control of public decision-making, and the EU’s administrative system in general. To ensure that individuals can meaningfully object to and challenge public power, it is critical, among other measures, to better understand (the use of) AI, incorporate more empathy into EU administrations’ decision-making processes, and detect and eliminate decision-making biases.
This article is reposted from the Leiden Law Blog: ‘AI in the EU and Access to Justice – A Panel Discussion’.