Interview with Yannick Meneceur, Policy advisor on digital transformation and artificial intelligence at the Council of Europe

June 24, 2020

On March 25, 2020, Cezara Panait, an LL.M. in Human Rights student at the Department of Legal Studies of the Central European University, conducted a Skype interview for her Master's thesis, entitled "Tackling Biases and Discrimination in the AI Regulatory Framework - A Comparative Analysis of EU and U.S.", with Yannick Meneceur, policy advisor on digital transformation and artificial intelligence at the Council of Europe, fellow researcher at the IHEJ (Institut des Hautes Etudes sur la Justice) and author of "L'intelligence artificielle en procès" (Bruylant, 2020).

1.         There is already international agreement that Artificial Intelligence will have an essential impact on our society and on human rights. In this regard, I would like to ask what the position of the Council of Europe is on regulating Artificial Intelligence, especially considering the work of the Ad Hoc Committee on Artificial Intelligence. Are there any standards on what the legally binding regulatory framework for AI should look like in the near future?

The Ad Hoc Committee on Artificial Intelligence (CAHAI) of the Council of Europe received a clear mandate from the Council of Europe Member States in September 2019: to study the feasibility of a legal framework on artificial intelligence. There is no question at this stage of locking the thinking of the 47 Member States into a particular legal form. The result could be a Convention or a framework Convention, or recommendations or guidelines. What seems certain at this stage is the need to understand the existing legal frameworks, as well as those being developed in other international organizations or within Member States. CAHAI has therefore started its reflections with an in-depth mapping of the existing frameworks, in order to identify possible gaps.

From this, it will then be possible to ascertain the appropriate, proportionate and suitable measures for a high level of protection of human rights, democracy and the rule of law, in a context where the Organization's objective is not at all to hinder innovation. However, such innovation can only be sustainable if it is part of the overall development goals defined by the United Nations, within which the protection of fundamental rights is essential.

2.         Principles on AI governance frameworks, especially those related to ethical or trustworthy AI, have been established by different forums, such as the High Level Expert Group on Artificial Intelligence set up at EU level, the OECD, etc. But could a list of core principles that the proposed framework should contain be drawn from all the international documents that have been issued?

Common principles are emerging from all the work you have mentioned, as well as from other contributions from civil society, academia and even the private sector. The Fundamental Rights Agency (FRA) counted nearly 265 ethical or governance frameworks for artificial intelligence as of December 2019. However, a substantial difficulty remains: these different documents can interpret certain principles differently. The concept of transparency, for example, is understood quite differently from the perspective of a private operator, for whom intellectual property and business secrecy are very important, than from the perspective of civil society.

The Council of Europe is therefore particularly interested in the work of the European Union, the OECD, UNESCO and the United Nations in general, and in the way in which they have begun to translate these principles into practice. The issue is therefore not simple and is not yet resolved: CAHAI will try to play a role in the development of a global regulation of technologies such as AI. It is fully within the Council of Europe's mandate to set high standards for the protection of human rights, democracy and the rule of law.

3.         Would soft law be enough in a fast-changing environment like AI, or do we need hard law to be developed? What would be the drawbacks of implementing soft law or hard law?

The debate on the benefits and inadequacy of soft law is not unique to AI, or to digital technology in general. The trend in recent years has been to look for more flexible methods of regulation, capable of adapting to rapidly changing environments and taking into account the constraints of different operators. The ethical initiatives of private operators who say they are concerned about the consequences of their actions, and who seem to be looking for solutions to make those actions ethical and human-centered, are to be welcomed.

However, this desire alone is not enough, for a first essential reason: in the event of failure to comply with a principle laid down by such non-binding frameworks, there are no sanctions. Above all, these commitments are not the product of a democratic debate but of a unilateral act of will. We must remain alert to possible attempts at "ethics washing" and at slowing down the construction of binding legal frameworks. Trust cannot be decreed: it must be proven in concrete acts, and it stems from the certainty of sanctions in the event of failure.

4.         What would be your opinion on the idea of self-regulation or codes of conduct developed by the giant tech companies, or on a co-regulatory approach between the private sector and governments or competent authorities? How do we ensure liability and accountability for AI development and deployment in different regions of the world?

Again, this type of initiative is to be welcomed, as it is reassuring to see private operators concerned about the consequences of their actions. But their priority is not the protection of the general interest: the utility function of entrepreneurs is to make a profit, not to maximize social utility. These initiatives must therefore be seen as a contribution by these businesses to a regulatory approach that would ensure not only identical conditions of access to the market - and I am thinking here of small and medium-sized enterprises - but also legal certainty.

If we take the example of liability in the event of damage, the digital industry can find itself in great difficulty. Depending on the legal system of each state, the search for liability will be conducted according to very different criteria. Although the rules on defective products have been unified across the 27 Member States of the Union, disparities still exist in the treatment of civil disputes. And if we look at the 47 Member States of the Council of Europe, the risk of divergent interpretations is even greater - not to mention the criminal risk. Various mechanisms could be devised and, here again, the Council of Europe has a contribution to make.

It should also be noted that not all AI applications may require the same level of regulation. The stakes in sectors such as justice and culture are not the same. An AI that contributes to the assessment of personal injury damages in a court of law, as is the case in France with the DATAJUST application, and an AI that creates music recommendations do not involve the same risks.

5.         The EU has recently published the White Paper on Artificial Intelligence, as well as the EU Data Strategy. Is the risk-based approach of the White Paper sufficient to provide the necessary safeguards for protecting fundamental rights (especially non-discrimination, privacy and freedom of expression)?

The European Commission's White Paper has adopted a risk-based approach which responds exactly to the point I have just made. The idea is to concentrate constraints on high-risk applications, with the risk being assessed by cross-referencing two criteria: the sensitivity of the sector and the type of application within it. The Commission takes the example of an appointment scheduling tool in a hospital, which operates in a sensitive area but whose impact remains limited. The Commission also puts forward a number of requirements to be imposed on these sensitive applications, drawing on the work of the independent High-Level Expert Group, which published its guidelines in April 2019.

However, this work does not aim to answer the specific question of the protection of fundamental rights. Although the Commission addresses this concern, it cites as a reference in its White Paper the Council of Europe, whose mandate is precisely to protect human rights, democracy and the rule of law, not only in the European Union area but also in its immediate neighborhood. Other international organizations therefore expect the CAHAI to produce this "brick" of human rights standards, so that they can rely on it in the execution of their own mandates. The Council of Europe has already been very active on the specific subjects you mention (non-discrimination, privacy and freedom of expression). I would refer here to an ECRI study, "Discrimination, Artificial Intelligence and Algorithmic Decision-Making", prepared by Professor Frederik Zuiderveen Borgesius. The Advisory Committee on Convention 108 has issued guidelines on the data protection implications of artificial intelligence, which provide a set of basic measures that governments, designers, manufacturers and providers of artificial intelligence should follow to ensure that its applications do not infringe human dignity or the fundamental rights and freedoms of individuals. With regard to freedom of expression and AI, a committee of experts (MSI-DIG) started its work in March 2020.

6.         How can bias and discrimination be tackled more efficiently in the AI governance framework?

Bias and discrimination seem to be among the major risks of the massive use of AI in decision-making, especially in the field of public policy. The absence of a data governance policy creates a major risk in the first place. Algorithms cannot be considered on their own: they can reinforce biases and discrimination depending on the way they are trained or programmed. But the first biases come from the datasets: if these are poorly selected, or too faithful to the discrimination already present in society, the risks are major.

There are technological solutions that claim to be able to mitigate these risks, such as IBM's AI Fairness 360 open-source toolkit. But technology cannot be expected to solve problems that are primarily the responsibility of humans. That is why I am talking about a data governance policy that should be able to deliver quality datasets at the source. Open data policies, for example, aim to make such datasets available, and the European Commission has even made this one of its major strategic axes with its plan to create a single market for data. However, the governance of this type of market should include not only the protection of personal data but also, more broadly, the prevention of risks to the community. Here again, the Council of Europe has a role to play, as a European conscience, to ensure that the compass of such a market does not point solely towards economic profit.
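To illustrate the kind of tooling mentioned above, here is a minimal sketch of how a dataset-level fairness check and a simple mitigation can be run with IBM's AI Fairness 360 library. The toy data, the "sex" attribute and the privileged/unprivileged group definitions are invented for the example; this is an illustration under those assumptions, not a recommended workflow or the method discussed in the interview.

# Minimal sketch using IBM's AI Fairness 360 (pip install aif360).
# The toy data, the "sex" attribute and the group definitions below
# are illustrative assumptions, not taken from the interview.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy dataset: outcome 1 = favorable decision; sex 1 = privileged group.
df = pd.DataFrame({
    "sex":     [1, 1, 1, 1, 0, 0, 0, 0],
    "age":     [25, 40, 33, 51, 29, 45, 38, 60],
    "outcome": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["outcome"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Disparate impact: ratio of favorable-outcome rates between groups
# (1.0 means parity; well below 1.0 suggests the dataset is skewed).
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before mitigation:", metric.disparate_impact())

# Reweighing adjusts instance weights so that the outcome becomes
# statistically independent of the protected attribute, without
# altering the labels themselves.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after reweighing:", metric_transf.disparate_impact())

Such metrics only quantify imbalance in a given dataset; as the answer above stresses, they cannot substitute for an upstream data governance policy.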

7.         Many issues related to the use of AI stem from the datasets, from how the models are trained, and from the fact that models are perceived as "black boxes", so that developers cannot properly test in advance how the big data sets will interact. How can this problem be solved? Is it possible to have a functional global data governance system?

I would like to come back to a notion that seems to have taken a back seat with the rise of big data, but which remains important: the purpose of a system. Under Convention 108, and even the new version, Convention 108+, data must be collected fairly, i.e. the user must be informed of the collection and of what is going to be done with the data. But even anonymized data can become identifying again when several databases are cross-referenced. Moreover, data can be used to create statistical groups, for advertising targeting or even for political purposes. One of the major problems is therefore the ability of platforms to capture data on a massive scale under the pretext of a service (social network, search engine) and then to reuse it, and the value it generates, for purposes that are totally beyond the control of users.

I could take the precise example of the open data of court decisions in France. Public decision-makers were pushed by some private operators to accelerate the movement under the pretext of better publicity for court decisions, whereas those operators wanted above all to capture the data's value to develop their own commercial offerings. Not to mention the debate on the names of judges in the open datasets: private operators argued that the transparency of justice required these names to be available... their idea was above all to profile judges, which is the major damage that such practices pose. So the problem is the dual use of many digital applications, which should be capable of being regulated, even if many individuals are not very concerned about their digital footprint. If there is a "free" service in return, many of us are prepared to give up almost everything in our lives - except that the collective damage caused can be major, as we saw in the Cambridge Analytica case.

8.         In your opinion, what are the main challenges and the main differences between the regulatory frameworks on Artificial Intelligence in the EU and in the U.S., given that the biggest tech companies are U.S.-based and that both the EU and the U.S. would like to be leaders in AI development?

I cannot answer that with precision, because I know less about the regulatory framework in the U.S. and I do not want to be imprecise in my answer.

I just want to point out that many people in the U.S., both in academia and in business, are concerned about the current lack of regulation of AI. The Council of Europe has, for example, entered into a partnership with the IEEE, an association known for creating standards in the digital industry, which is campaigning for human rights and for Council of Europe standards to inspire American legislators. The "laissez-faire" policy during the expansion of the Internet has been disastrous. We can even see today that, in order to combat the pandemic, the federal authorities would like to exploit the data held by the GAFAM companies but, in the absence of a legal framework, they are reluctant to do so.

The work in Europe will therefore inspire the United States, and raising standards in Europe will also benefit U.S. citizens. For example, if American businesses want to operate in Europe, they will have to adopt European rules, and American citizens are likely to find the resulting disparity difficult to understand.

9.         Is there any possibility of developing a common global framework on Artificial Intelligence, or would disparities in national legislation prevent this objective?

Let me start on an optimistic note: such a framework is necessary in this day and age. Once again, the lack of regulation of the Internet has led to a completely anarchic development of what was initially a space for knowledge sharing and human progress. The principles and values of the Council of Europe are more relevant than ever to get the best out of this technology.

More pessimistically, the European project is not going to emerge unscathed from the health crisis we are going through in 2020. One of the possible scenarios, as we see in Hungary, is a withdrawal of nations into themselves and the rejection of any common destiny. AI, and its regulation, will not escape this movement of history.
